Why the over-55s are turning off the BBC

The BBC is so keen to attract youth that it's neglecting its core viewers, says Andrew Billen

Emma Corrin in Netflix's The Crown and Paul Mescal and Daisy Edgar-Jones in the BBC's Normal People

Andrew Billen, Friday November 27 2020, 12.01am, The Times

Trying to be all things to all people is the ultimate fool's errand — and it has been the BBC's for almost 100 years. Its funding gives it no choice. By law the BBC taxes every British household in which someone wishes to watch or record (or, in the early days, listen to) anything transmitted across an airwave. For its first 33 years this was not an onerous obligation. Beyond Radio Luxembourg and Lord Haw-Haw, the BBC had no competition. Then, in 1955, came ITV, a network whose progenitors were impresarios, not a Presbyterian Scot who knew what was good for us. Now, in the age of the streaming service, the BBC exists in a busy hypermarket of content. Netflix, Amazon, Disney and its other
Q: How to get the date from "2019-06-27T12:30:00.000+0000"?

I want to get the date and time from 2019-06-27T12:30:00.000+0000 in Android. I tried the following code but it is not working:

    DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ");
    Date date = null; // You will get a Date object relative to the server/client timezone wherever it is parsed
    try {
        date = dateFormat.parse("2019-06-27T12:30:00.000+0000");
    } catch (ParseException e) {
        e.printStackTrace();
    }
    DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
    // If you need the time as well, add a time format such as 'HH:mm:ss'
    String dateStr = formatter.format(date);

A: You're missing the millisecond portion in your SimpleDateFormat pattern; below is what you need for that specific input:

    DateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");

A: You're missing out on parts of the date. The pattern you need is:

    DateFormat dateformat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSSZ");
Q: How to get effect size for Kruskal-Wallis test in Python?

I am trying to do a Kruskal-Wallis test in Python that not only gives me the H statistic and the p-value, but also the effect size. I have tried scipy's stats.kruskal() function, but only H and p were returned. I am working with a pandas dataframe, so I first converted the two columns of interest (in the future I may need more than two) into two arrays, L_arr and E_arr. Then I ran:

    import scipy.stats as stats
    stats.kruskal(L_arr, E_arr)

The result I got:

    KruskalResult(statistic=1.2752179327521276, pvalue=0.2587900768563777)

Is there some way for me to get the effect size as well?
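One way to do this (not from the original thread): scipy's stats.kruskal does not return an effect size, but a commonly used estimator, eta-squared based on the H statistic, eta2_H = (H - k + 1) / (n - k) for k groups and n total observations, can be computed from its output. A minimal sketch; the helper name kruskal_effect_size and the random sample data are illustrative, not from the question:

```python
import numpy as np
from scipy import stats

def kruskal_effect_size(*groups):
    """Run a Kruskal-Wallis test and also return an effect size.

    The effect size is the eta-squared estimator based on H:
        eta2_H = (H - k + 1) / (n - k)
    with k groups and n total observations.
    """
    h, p = stats.kruskal(*groups)
    k = len(groups)
    n = sum(len(g) for g in groups)
    eta_squared = (h - k + 1) / (n - k)
    return h, p, eta_squared

# Illustrative data standing in for the poster's L_arr and E_arr
rng = np.random.default_rng(0)
L_arr = rng.normal(0.0, 1.0, 30)
E_arr = rng.normal(0.5, 1.0, 30)

h, p, eta2 = kruskal_effect_size(L_arr, E_arr)
print(f"H={h:.4f}, p={p:.4f}, eta^2={eta2:.4f}")
```

The same helper works for more than two columns, since stats.kruskal accepts any number of sample arrays. Note that eta2_H can come out slightly negative for very small H; that is a known property of this estimator.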
Liquor and Jealousy.

In October 1893, 64-year-old Patrick Finney of New Bedford, Pennsylvania, was visiting his old friend and drinking buddy James Campbell in Hazelton, Ohio. Campbell had been a saloonkeeper in Pittsburgh before retiring and moving with his wife to Hazelton, a suburb of Youngstown. As was their custom, Finney and the Campbells were drinking heavily the night of October 9. James Campbell had a reputation as a man of ungovernable temper when intoxicated, and this night was no exception. Around 10:00, when it became clear that Campbell had exceeded his limit, a neighbor who had been drinking with them helped Campbell to bed. Finney and Mrs. Campbell stayed up and continued talking.

About twenty minutes after going to bed, Campbell came back downstairs. Still drunk and angry, Campbell was holding a 22-caliber revolver. "I'll fix you," he said, then fired three shots. One went through his wife's chest, killing her instantly, and the other two hit Finney in the head and abdomen. The police arrived quickly, arresting Campbell and rushing Finney to the hospital. Campbell told the police he had shot his wife and friend because he had caught them in a compromising position, but once he was locked in a cell he said, "I don't know what made me do it." The newspapers concluded that liquor and unfounded jealousy were the cause.

In January 1894, James Campbell was indicted for the murder of his wife and the attempted murder of Patrick Finney. He announced his intention to plead insanity, but being "crazy drunk" has never been a good defense. The following March, Campbell was found guilty of second-degree murder.

"An Old Man's Murderous Jealousy," Evening Herald, October 10, 1893.
"Commits Double Murder," Daily Inter Ocean, October 10, 1893.
"Jealousy Causes Murder," Patriot, October 10, 1893.
"On Trial for his Life," Plain Dealer, February 27, 1894.
"Plenty Of Indictments," Plain Dealer, January 18, 1894.
"Shot his Wife Dead," National Police Gazette, November 4, 1893.
I was playing on survival and in my village my population was knocked down to 2, so I trapped the villagers and waited for them to make a baby. The baby escaped in the night, so I tried to encase it in blocks (sandstone in my case). I did, but a couple of minutes later I saw the baby running around. Indeed, the baby had escaped its confines. The baby will not be in the cage, and will be running around if it is not dead. This is not a duplicate of the baby animals escaping fences glitch, as it has to do with escaping solid blocks and not fences.
# Math Help - Volume

1. ## Volume

A cone is attached to a hemisphere of radius 4 cm. If the total height of the object is 10 cm, find its volume.

2. ## Re: Volume

Let's calculate the volume of the cone and the volume of the hemisphere separately and then add them. I will be accurate to 2 decimal places.

Hemisphere: V = (2/3)*pi*r^3 = 128*pi/3 = 134.04

If the height of the entire object is 10, and the radius of the hemisphere is 4, then the height of the cone is 10 - 4 = 6.

Cone: V = (1/3)*pi*r^2*h = 32*pi = 100.53

Vhemisphere + Vcone = 134.04 + 100.53 = 234.57

3. ## Re: Volume

Hello, Farisco!

A cone is attached to a hemisphere of radius 4 cm.
If the total height of the object is 10 cm, find its volume.

(ASCII diagram omitted: a cone of height 6 sitting on a hemisphere of radius 4, sharing a base circle of radius 4.)

We have a hemisphere with radius 4.
We have a cone with radius 4 and height 6.

You should be able to find the total volume without a calculator and without rounded-off decimals.

A sphere has volume $V = \tfrac{4}{3}\pi r^3$, where $r$ is the radius.

A half-sphere with radius 4 has volume: $V = \tfrac{1}{2}\times\tfrac{4}{3}\pi(4^3) = \frac{128\pi}{3}$

A circular cone has volume $V = \tfrac{\pi}{3}r^2h$, where $r$ is the radius and $h$ the height.

A cone with $r=4$, $h=6$ has volume: $V = \tfrac{\pi}{3}(4^2)(6) = 32\pi$

The total volume is: $\frac{128\pi}{3} + 32\pi = \boxed{\frac{224\pi}{3}\text{ cm}^3}$

If you want a decimal, now is the time to crank it out: $224\times\pi\div 3 = 234.5722515$
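As a quick numerical cross-check of the thread's result (our addition, not part of the original post):

```python
import math

# Hemisphere of radius 4 cm with a cone on top; total height 10 cm,
# so the cone height is 10 - 4 = 6 cm.
r = 4.0
h_cone = 10.0 - r

v_hemisphere = (2.0 / 3.0) * math.pi * r**3      # 128*pi/3, about 134.04
v_cone = (1.0 / 3.0) * math.pi * r**2 * h_cone   # 32*pi, about 100.53
v_total = v_hemisphere + v_cone                  # 224*pi/3

print(round(v_total, 2))  # 234.57
```

This agrees with both replies: the exact value 224*pi/3 cm^3 and the rounded 234.57 cm^3.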
\section{Introduction and Summary} Recent advances in the understanding of black holes are the computations \cite{Penington:2019npb,Almheiri:2019psf} of the time evolution of the entanglement entropy between a holographic black hole system and an external bath to which the black hole is coupled. A crucial ingredient in these computations is entanglement islands -- contributions to the entanglement entropy (EE) from regions that are disconnected and can be far away from the bath \cite{Almheiri:2019hni,Almheiri:2019yqk,Almheiri:2019psy,Almheiri:2019qdq,Penington:2019kki}. These contributions become dominant at late times and lead to Page curves for the time evolution of the entropy, in line with expectations based on unitarity. Reviews can be found in \cite{Almheiri:2020cfm,Raju:2020smc}. The discussions so far are largely based on bottom-up models and on low-dimensional theories where the features of gravity are qualitatively different. A prominent role is played by Karch/Randall models \cite{Karch:2000ct,Karch:2000gx}. The special case of a Karch/Randall model with a tensionless end-of-the-world brane, discussed in \cite{Geng:2020qvw}, can be embedded into Type IIB string theory as an orbifold of $AdS_5\times S^5$. But that case is somewhat peculiar in that the 4d graviton has a mass that cannot be separated from the UV cut-off in the 4d gravitational description. The aim of the present work is to demonstrate in a UV-complete string theory setting the emergence of entanglement islands and Page curves for black holes in four-dimensional theories of gravity in which the graviton mass can be controlled, including theories with massless gravitons. The starting point is the discussions of islands and Page curves in general 5d Karch/Randall models \cite{Chen:2020uac,Chen:2020hmv,Geng:2020qvw,Geng:2020fxl,Rozali:2019day}, which can be used to model gravitating systems coupled to non-gravitating and gravitating baths. 
These models have the appealing feature that the quantum extremal surfaces \cite{Engelhardt:2014gca,Faulkner:2013ana} exhibiting island contributions are entirely geometrized, due to the doubly-holographic nature of these models. This allows for the identification of entanglement islands through classical Ryu/Takayanagi surfaces \cite{Ryu:2006bv}. We will uplift the discussions in these bottom-up models to Type IIB string theory, to provide UV completions and concrete holographically dual QFTs. The string theory constructions are based on holographic duals for 4d boundary CFTs and for 3d SCFTs engineered by configurations of D3, D5 and NS5 branes \cite{Gaiotto:2008sa,Gaiotto:2008sd,Gaiotto:2008ak}. Holographic duals for large classes of such theories were constructed in \cite{DHoker:2007zhm,DHoker:2007hhe,Aharony:2011yc,Assel:2011xz}, and they provide natural string theory realizations of the Karch/Randall models with non-gravitating and gravitating baths. We will study quantum extremal/minimal surfaces in these solutions and identify quantities that exhibit Page curve behavior. The key findings of \cite{Geng:2020fxl,Geng:2020qvw}, such as the existence of critical brane angles separating different phases of minimal surfaces, will find string theory realizations. We will also identify 10d versions of the ``left/right EE'' that was found to exhibit Page curve behavior in the 5d models with gravitating bath, where the usual notion of geometric EE becomes subtle. In the following we will first review relevant aspects of the discussion in the Karch/Randall models to set the stage and then summarize the main results of this paper. \medskip \textbf{Islands and Page curves in Karch/Randall models:} The Karch/Randall models for 4d gravity coupled to a non-gravitating bath are based on a part of $AdS_5$ cut off by an end-of-the-world (ETW) brane extending along an $AdS_4$ slice (fig.~\ref{fig:KR-nongrav}). 
The conformal boundary is cut off at the point where it is intersected by the ETW brane, so that these models are holographically dual to boundary conformal field theories (BCFTs) (see also \cite{Takayanagi:2011zk,Fujita:2011fp}). The advantage of these setups from the entanglement islands perspective is that they have 3 holographically related descriptions: \begin{itemize} \setlength{\parskip}{0 pt} \item[(a)] Einstein gravity on (asymptotically) AdS$_5$ + ETW brane \item[(b)] a 4d CFT with UV cut-off $+$ gravity on (asymptotically) AdS$_4$, coupled via transparent\\ boundary conditions at the boundary of AdS$_4$ to a 4d CFT on half of ${\mathds{R}}^{1,3}$ \item[(c)] a non-gravitational 4d CFT on half of ${\mathds{R}}^{1,3}$ coupled to 3d boundary degrees of freedom \end{itemize} These descriptions can be understood to arise from applying AdS/CFT twice: description (b) is obtained by converting the 3d boundary degrees of freedom in (c) to a gravitational theory on AdS$_4$, and description (a) geometrizes the entire BCFT. Description (b) is the one of interest for the black hole information paradox. To pose the paradox, the $AdS_4$ slices are replaced by $AdS_4$ black holes. This realizes a black hole on the ETW brane and on the remaining half of the conformal boundary of $AdS_5$, which serves as bath. It can be interpreted as coupling the gravity system on the ETW brane to a bath at the same temperature as the black hole. To quantify the entropy of the radiation one picks a region far in the bath system and computes its EE. One type of surface relevant for computing the EE holographically are Hartman-Maldacena (HM) surfaces \cite{Hartman:2013qma}, which connect the boundary of the radiation region to the corresponding point in the thermofield double. Due to the stretching of the space behind the horizon the area of these surfaces grows in time, suggesting an unbounded growth of the entropy. 
This is the version of the information paradox described in \cite{Almheiri:2019yqk}. The paradox is resolved by the existence of ``island minimal surfaces'' that stretch from the bath into the gravity system (fig.~\ref{fig:KR-nongrav}). The part of the ETW brane near the black hole that is captured by the surface constitutes the island contribution. Its computation is entirely geometrized through the existence of the 5d bulk. The area of the island surfaces is constant in time, which limits the growth of the entropy and leads to Page curves. As emphasized in \cite{Geng:2020qvw}, the graviton is generically massive in models with a non-gravitating bath. \begin{figure} \subfigure[][]{\label{fig:KR-nongrav} \begin{tikzpicture}[scale=0.8] \draw (-2.5,0) -- (0,0); \draw[thick](0,0) -- (1,0); \draw[very thick,blue] (1,0) -- (3,0); \node [anchor=south] at (1.8,0) {\small $R$}; \draw[thick] (0,0) -- (-2.5,-2/3*2.5); \draw[thick,green] (1,0) arc (180:214:98pt); \draw[thick,green] (1,0) arc (0:-148:28pt); \draw[thick,dashed,black] (2.5,0) arc (0:-146:70pt); \draw [fill=gray,opacity=0.3] (0,0) -- (-2.5,0) -- (-2.5,-2/3*2.5)--(0,0); \node at (-1.5,-0.75) {\small $I$}; \draw[very thick,red] (-0.8,-2/3*0.8) -- (-2.5,-2/3*2.5); \node at (-0.55,-0.18) {\footnotesize $\theta$}; \draw (-0.75,0) arc (180:210:25pt); \end{tikzpicture} } \hskip 20mm \subfigure[][]{\label{fig:KR-grav} \begin{tikzpicture}[scale=0.8] \draw (-2.5,0) -- (2.5,0); \draw[thick] (0,0) -- (-2.5,-2/3*2.5); \draw[thick] (0,0) -- (2.5,-2/3*2.5); \draw[thick,green] (0,0) -- (0,-2.5); \draw[thick,green] (0,0) arc (0:-113:28pt); \draw[thick,dashed,black] (2.05,-2/3*2.05) arc (-34:-146:70pt); \draw [fill=gray,opacity=0.3] (0,0) -- (-2.5,0) -- (-2.5,-2/3*2.5)--(0,0); \draw [fill=gray,opacity=0.3] (0,0) -- (2.5,0) -- (2.5,-2/3*2.5)--(0,0); \node at (-1.8,-0.95) {\small $I$}; \draw[very thick,red] (-1.35,-2/3*1.35) -- (-2.5,-2/3*2.5); \node at (-0.65,-0.18) {\footnotesize $\theta_{1}$}; \node at (0.65,-0.18) {\footnotesize 
$\theta_2$}; \draw (-0.9,0) arc (180:215:25pt); \draw (0.9,0) arc (0:-35:25pt); \end{tikzpicture} } \caption{ Left: Karch/Randall model for non-gravitating bath. The figure shows part of $AdS_5$ with the ETW brane cutting off the shaded region. The dashed curve is the black hole horizon and $R$ is the radiation region (blue). The green curve ending on the horizon represents the HM surface; the green curve extending from the boundary of $R$ to the ETW brane is the island surface. $I$ is the island (red). Right: For a gravitating bath a second ETW brane is introduced, leaving only a 3-dimensional part of the conformal boundary. \label{fig:KR}} \end{figure} A gravitating bath can be realized by introducing a second ETW brane as bath (fig.~\ref{fig:KR-grav}) \cite{Geng:2020fxl}. This modifies description (b) to now comprise two CFTs coupled to gravity on distinct $AdS_4$ spaces, and coupled to each other at the conformal boundaries. Description (c) is reduced to a 3d CFT. Since both ETW branes have dynamical gravity, a conventional geometric EE can not be defined on the second ETW brane. If one allows the end points of minimal surfaces on both ETW branes to be chosen dynamically, the surfaces can settle on the horizon and lead to a flat entropy curve, in line with the general arguments of \cite{Laddha:2020kvp}. The quantity that was found to exhibit Page curve behavior in \cite{Geng:2020fxl} instead corresponds to minimal surfaces anchored at the remaining point of the conformal boundary of $AdS_5$, and was interpreted as EE between defect degrees of freedom represented by the left and right ETW branes. The form of the entropy curve was found to have interesting dependence on the ETW brane angles, as will be discussed in more detail below. 
\medskip \textbf{Islands and Page curves in Type IIB:} In this work we will study 10d string theory versions of the Karch/Randall models and show that the qualitative features captured by the bottom-up models are realized in a UV-complete theory of quantum gravity. We will discuss black holes coupled to non-gravitating and to gravitating baths, realized through 10d black hole solutions based on the $AdS_4\times S^2\times S^2\times\Sigma$ solutions of Type IIB constructed in \cite{DHoker:2007zhm,DHoker:2007hhe,Aharony:2011yc,Assel:2011xz}. \begin{figure} \begin{tikzpicture} \shade [ left color=blue! 0, right color=blue! 20] (-2.2,0) rectangle (0,-2); \shade [ right color=blue! 0, left color=blue! 20] (0,0) rectangle (2.2,-2); \draw[thick] (-2.2,0) -- (2.2,0); \draw[thick] (-2.2,-2) -- (2.2,-2); \draw[dashed] (-2.2,0) arc (160:200:85pt); \draw[dashed] (2.2,0) arc (20:-20:85pt); \node at (1.7,-0.6) {$\Sigma$}; \node at (2.3,-2.3) {\small $x\rightarrow+\infty$}; \node at (-2.3,-2.3) {\small $x\rightarrow-\infty$}; \node at (3.0,-0.65) {\small $AdS_5$}; \node at (3.0,-1) {\small $\times$}; \node at (3.0,-1.35) {\small $S^5$}; \draw[thick] (0.2,-0.08) -- (0.2,0.08); \draw[thick] (-0.2,-0.08) -- (-0.2,0.08); \draw[thick] (0,-0.08) -- (0,0.08) node [anchor=south] {\small NS5}; \draw[thick] (0.2,-2-0.08) -- (0.2,-2+0.08); \draw[thick] (-0.2,-2-0.08) -- (-0.2,-2+0.08); \draw[thick] (0,-1.92) -- (0,-2.08) node [anchor=north] {\small D5}; \node at (-1,-1.75) {\small $y=0$}; \node at (-1,-0.25) {\small $y=\frac{\pi}{2}$}; \end{tikzpicture} \qquad\qquad \begin{tikzpicture}[y={(0cm,1cm)}, x={(0.707cm,0.707cm)}, z={(1cm,0cm)}, scale=1.1] \draw[white,fill=gray!100] (0,0,0.5) circle (1.5pt); \draw[white,fill=gray!100] (0,0,1.5) circle (2pt); \draw[thick] (0,-0.39,0) -- (0,1,0); \draw[thick] (0,-1,0) -- (0,-0.6,0); \draw[thick] (0,-0.41,1) -- (0,1,1); \draw[thick] (0,-1,1) -- (0,-0.61,1); \draw[thick] (0,-1,2) -- (0,1,2); \node at (0,0,2.375) {$\cdots$}; \draw[thick] (0,-1,2.75) -- 
(0,1,2.75); \foreach \i in {-0.075,-0.025,0.025,0.075}{ \draw (-1.1,\i,0.5) -- (0.65,\i,0.5);} \foreach \i in {-0.075,-0.025,0.025,0.075}{ \draw (0.76,\i,0.5) -- (1.1,\i,0.5);} \foreach \i in {-0.05,0,0.05}{ \draw (-1.1,\i,1.5) -- (0.65,\i,1.5);} \foreach \i in {-0.05,0,0.05}{ \draw (0.76,\i,1.5) -- (1.1,\i,1.5);} \foreach \i in {-0.025,0,0.025}{ \draw (0,1.4*\i,0) -- (0,1.4*\i,1);} \foreach \i in {-0.05,-0.025,0,0.025,0.05}{ \draw (0,1.4*\i,1) -- (0,1.4*\i,2.05);} \foreach \i in {-0.045,-0.015,0.015,0.045}{ \draw (0,1.4*\i,2.7) -- (0,1.4*\i,5);} \node at (-0.18,-0.18,4) {\small D3}; \node at (1.0,0.2,0.75) {\footnotesize D5}; \node at (0,-1.25) {NS5}; \end{tikzpicture} \caption{ Left: Geometry of $AdS_4\times S^2\times S^2\times\Sigma$ solutions with $\Sigma=\lbrace x+iy\in\mathds{C}\vert \,0\leq y\leq \frac{\pi}{2}\rbrace$ for non-gravitating baths. On each boundary component an $S^2$ collapses, so the 10d geometry is closed. D5/NS5 brane sources are located on the $y=0$/$y=\frac{\pi}{2}$ boundaries. The limit $x\rightarrow -\infty$ is a regular point of the internal space. For $x\rightarrow\infty$ the solutions approach locally $AdS_5\times S^5$; this region corresponds to the conformal boundary in fig.~\ref{fig:KR-nongrav}. The ETW brane in fig.~\ref{fig:KR-nongrav} can be seen as effective description for the remaining 10d geometry. Right: Associated configuration of D5, NS5 and D3 branes, with D3-branes suspended between 5-branes and semi-infinite D3-branes emerging in one direction. The distribution of 5-brane sources in the supergravity solution encodes how many D5/NS5 branes there are and how the D3-branes end on them. \label{fig:AdS4-sol}} \end{figure} We start the discussion with non-gravitating baths. The solutions constructed in \cite{DHoker:2007zhm,DHoker:2007hhe,Aharony:2011yc} can be used to describe semi-infinite D3-branes terminating on a system of D5 and NS5 branes with additional D3-branes suspended between the 5-branes. 
The brane configurations engineer $\mathcal N=4$ SYM on a half space, corresponding to the semi-infinite D3-branes, coupled to a 3d SCFT on the boundary, corresponding to the D3-branes suspended between the D5 and NS5 branes. The structure of the supergravity solutions and brane setups is illustrated in fig.~\ref{fig:AdS4-sol}. At each point of $\Sigma$ there is an $AdS_4$ and two 2-spheres, with independently varying radii. The region $x\rightarrow\infty$ where the geometry becomes $AdS_5\times S^5$ is modeled in the Karch/Randall models in fig.~\ref{fig:KR-nongrav} by the $AdS_5$ region far away from the ETW brane. The ETW brane itself can be understood as effective description for the remaining part of the 10d solution, i.e.\ the region around the 5-brane sources in fig.~\ref{fig:AdS4-sol}. The intermediate holographic description, in which only the defect degrees of freedom are geometrized (description (b) above), corresponds to $AdS_4$ gravity in the region away from the $AdS_5\times S^5$ part coupled at the conformal boundary of $AdS_4$ to $\mathcal N=4$ SYM on a half space. The 4d graviton has a mass, which, in the limit where the number of semi-infinite D3-branes is small, is set by the ratio of 4d and 3d central charges \cite{Bachas:2018zmb}. We will modify these solutions by introducing black holes on the $AdS_4$ spaces, which leads to non-supersymmetric solutions of Type IIB that are asymptotic to the supersymmetric seed solutions and describe the dual QFTs at finite temperature. The radiation region $R$ will be defined in the asymptotic $AdS_5\times S^5$ region at $x\rightarrow\infty$ in fig.~\ref{fig:AdS4-sol}, while the ``physical black hole'' corresponds to the region around the 5-brane sources. The surfaces computing the entanglement entropy of the radiation region wrap both $S^2$'s and are anchored in the $AdS_5\times S^5$ region at a fixed value of the $AdS_4$ radial coordinate. 
For the non-gravitating baths we construct the HM surfaces explicitly at the time $t=0$ when their area is smallest. The minimal surfaces can be described by specifying the $AdS_4$ radial coordinate $r$ as function of the coordinates on the Riemann surface $x$ and $y$. The surfaces extend along the Riemann surface $\Sigma$, and either drop into the horizon in $AdS_4$ along a curve $x_h(y)$ (HM surfaces), or extend all the way to $x\rightarrow -\infty$, where they can close off smoothly before reaching the horizon in $AdS_4$ (island surfaces). The extremality condition is a non-linear PDE on $\Sigma$. The boundary conditions will be derived from regularity of the induced metric on the minimal surface, which will give a string theory justification for the use of Neumann boundary conditions at the ETW brane in the Karch/Randall models (other boundary conditions in 5d were discussed in \cite{Ghosh:2021axl}). Solutions to the PDE are obtained numerically. The class of $AdS_4\times S^2\times S^2\times \Sigma$ solutions is very broad, reflecting the breadth of brane configurations that can be realized with D3, D5 and NS5 branes. We will choose representative solutions with $N_5$ D5-branes at $(x,y)=(0,0)$, $N_5$ NS5-branes at $(x,y)=(0,\frac{\pi}{2})$ and $2N_5K$ semi-infinite D3-branes. Studying more general solutions will be left for the future. The 8d minimal surfaces can be visualized as 2d surfaces in the 3d space spanned by $\Sigma$ and the $AdS_4$ radial direction $r$, with the horizon at some finite $r_h$. The conformal boundary of $AdS_4$ at $r\rightarrow\infty$ corresponds to the defect in fig.~\ref{fig:KR-nongrav}. A sample of island and HM surfaces is shown in figs.~\ref{fig:islands}, \ref{fig:HM-surf}. The island surfaces show distinct behavior near the 5-brane sources, which is discussed in sec.~\ref{sec:near-pole}. The area differences between island surfaces and HM surfaces at $t=0$ are shown in fig.~\ref{fig:areadiff}. 
The results show that for radiation regions starting far in the bath (small $r$), the HM surface dominates at $t=0$. The area of the HM surface grows in time and sets the initial growth of the entropy, but the entropy growth is bounded by the constant area of the island surface. This evades an information paradox and shows that the entropy follows a Page curve. \medskip \textbf{Critical angle:} The analysis of \cite{Geng:2020fxl} found a critical value for the tension/angle of the ETW brane ($\theta$ in fig.~\ref{fig:KR}), where the behavior of the island surfaces changes qualitatively. The critical angle $\theta_c$ can be defined as follows: At zero temperature, for an island surface anchored at a fixed point in the bath system, one can ask for the end point on the ETW brane as function of $\theta$. For $\theta>\theta_c$ this is a finite point. As $\theta_c$ is approached, the end point on the ETW brane diverges towards the Poincar\'e horizon and below $\theta_c$ there are no more island minimal surfaces. Remarkably, a similar phenomenon can be identified in 10d. The angle $\theta$ in 5d is set by the tension of the ETW brane, which can be understood as a measure for the number of degrees of freedom represented by the ETW brane. The relevant parameters in the 10d solutions considered here are the radius of the asymptotic $AdS_5\times S^5$ region, which is set by the number of semi-infinite D3-branes, and the number of D5 and NS5 branes on which the D3-branes terminate. The latter determines the 3d SCFT that $\mathcal N=4$ SYM is coupled to at the boundary of the half space. One may expect that the brane angle in 5d captures the ratio of the number of D3-branes suspended between 5-branes and the number of semi-infinite D3-branes. 
This is indeed the case: For island surfaces at zero temperature, with fixed anchor point in the $AdS_5\times S^5$ region, the end point at $x=-\infty$ is shown as function of $N_5/K$, which controls the ratio of suspended and semi-infinite D3-branes, in fig.~\ref{fig:crit-ang}. The results indicate that there is a critical ratio at which the end point at $x=-\infty$ runs off towards the Poincar\'e horizon. For black hole solutions with finite temperature this behavior is regulated (fig.~\ref{fig:crit-ang-T}), and island surfaces can be found beyond the critical ratio. \medskip \textbf{Gravitating baths:} For the description of a gravitating bath the asymptotic $AdS_5\times S^5$ region in fig.~\ref{fig:AdS4-sol} is closed off. This corresponds to removing the semi-infinite D3-branes from the brane setup, leaving only D3-branes suspended between D5 and NS5 branes (fig.~\ref{fig:AdS4-sol-grav}). This is captured in the 5d Karch/Randall models by the introduction of a second ETW brane. The 10d solutions are holographic duals for 3d $T_\rho^\sigma[SU(N)]$ SCFTs \cite{Assel:2011xz} and have massless 4d gravitons. Closing off the $AdS_5\times S^5$ region removes the part in which the radiation region was defined, and a minimal surface stretching from $x=-\infty$ to $x=+\infty$ now has to satisfy Neumann boundary conditions on both ends. This allows it to settle onto the black hole horizon and leads to a constant entropy identical to the thermal entropy of the bath, in line with the general arguments of \cite{Laddha:2020kvp,Raju:2020smc}. \begin{figure} \begin{tikzpicture} \shade [ left color=blue! 0, right color=blue! 20] (-2.2,0) rectangle (0,-2); \shade [ right color=blue! 0, left color=blue! 
20] (0,0) rectangle (2.2,-2); \draw[thick] (-2.2,0) -- (2.2,0); \draw[thick] (-2.2,-2) -- (2.2,-2); \draw[dashed] (-2.2,0) arc (160:200:85pt); \draw[dashed] (2.2,0) arc (20:-20:85pt); \node at (1.7,-0.6) {$\Sigma$}; \node at (2.3,-2.3) {\small $x\rightarrow+\infty$}; \node at (-2.3,-2.3) {\small $x\rightarrow-\infty$}; \draw[thick] (0.2,-0.08) -- (0.2,0.08); \draw[thick] (-0.2,-0.08) -- (-0.2,0.08); \draw[thick] (0,-0.08) -- (0,0.08) node [anchor=south] {\small NS5}; \draw[thick] (0.2,-2-0.08) -- (0.2,-2+0.08); \draw[thick] (-0.2,-2-0.08) -- (-0.2,-2+0.08); \draw[thick] (0,-1.92) -- (0,-2.08) node [anchor=north] {\small D5}; \node at (-1,-1.75) {\small $y=0$}; \node at (-1,-0.25) {\small $y=\frac{\pi}{2}$}; \end{tikzpicture} \qquad\qquad \begin{tikzpicture}[y={(0cm,1cm)}, x={(0.707cm,0.707cm)}, z={(1cm,0cm)}, scale=1.1] \draw[white,fill=gray!100] (0,0,0.5) circle (1.5pt); \draw[white,fill=gray!100] (0,0,1.5) circle (2pt); \draw[white,fill=gray!100] (0,0,3.5) circle (1pt); \draw[thick] (0,-0.39,0) -- (0,1,0); \draw[thick] (0,-1,0) -- (0,-0.6,0); \draw[thick] (0,-0.41,1) -- (0,1,1); \draw[thick] (0,-1,1) -- (0,-0.61,1); \draw[thick] (0,-1,2) -- (0,1,2); \node at (0,0,2.5) {$\cdots$}; \draw[thick] (0,-0.43,3) -- (0,1,3); \draw[thick] (0,-1,3) -- (0,-0.57,3); \foreach \i in {-0.075,-0.025,0.025,0.075}{ \draw (-1.1,\i,0.5) -- (0.65,\i,0.5);} \foreach \i in {-0.075,-0.025,0.025,0.075}{ \draw (0.76,\i,0.5) -- (1.1,\i,0.5);} \foreach \i in {-0.05,0,0.05}{ \draw (-1.1,\i,1.5) -- (0.65,\i,1.5);} \foreach \i in {-0.05,0,0.05}{ \draw (0.76,\i,1.5) -- (1.1,\i,1.5);} \foreach \i in {-0.025,0.025}{ \draw (-1.1,\i,3.5) -- (0.65,\i,3.5);} \foreach \i in {-0.025,0.025}{ \draw (0.76,\i,3.5) -- (1.1,\i,3.5);} \foreach \i in {-0.025,0,0.025}{ \draw (0,1.4*\i,0) -- (0,1.4*\i,1);} \foreach \i in {-0.05,-0.025,0,0.025,0.05}{ \draw (0,1.4*\i,1) -- (0,1.4*\i,2.05);} \foreach \i in {-0.045,-0.015,0.015,0.045}{ \draw (0,1.4*\i,2.95) -- (0,1.4*\i,3);} \foreach \i in {-0.015,0.015}{ \draw 
(0,1.4*\i,3) -- (0,1.4*\i,4);} \draw[thick] (0,-1,4) -- (0,1,4); \node at (-0.2,0,3.9) {\tiny D3}; \node at (1.0,0.2,0.75) {\footnotesize D5}; \node at (0,-1.25) {NS5}; \end{tikzpicture} \caption{ Left: $AdS_4\,{\times}\, S^2\,{\times}\, S^2\,{\times}\,\Sigma$ solutions for gravitating baths. The $AdS_5\times S^5$ region is closed off; the limits $x\rightarrow \pm\infty$ both lead to regular points in the internal space. This leaves the 3d conformal boundary of $AdS_4$, corresponding to the remaining point of the conformal boundary in fig.~\ref{fig:KR-grav}. Right: The associated brane configurations have no semi-infinite D3-branes, only D3-branes suspended between 5-branes. \label{fig:AdS4-sol-grav}} \end{figure} One can instead consider minimal surfaces splitting the internal space, which are expected to compute non-geometric entanglement entropies (whose holographic interpretation was initiated in \cite{Mollabashi:2014qfa,Karch:2014pma}). In the Karch/Randall models a ``left/right EE", represented by surfaces ending on the point where the two ETW branes meet in fig.~\ref{fig:KR-grav}, was found to exhibit Page curve behavior, and was interpreted as an internal entanglement entropy in \cite{Geng:2020fxl}. The Type IIB solutions realize the dual of the defect as full 10d geometry, making them an ideal setting for studies of minimal surfaces separating degrees of freedom according to their representation in the internal space. We consider surfaces wrapping the spatial part of $AdS_4$, both $S^2$'s, and a curve in $\Sigma$ which depends on the $AdS_4$ radial coordinate. The surfaces are anchored at the conformal boundary of $AdS_4$ along a curve $x(y)$ in $\Sigma$ which separates the 5-brane sources and defines a split into black hole system and bath. Such surfaces may be expected to compute EEs associated with decompositions of the quiver diagram in the UV description of the dual 3d SCFT. 
Once again one has to consider HM surfaces, extending through the horizon in $AdS_4$ into the thermofield double, and island surfaces which close off in one of the $x\rightarrow\pm\infty$ regions before reaching the horizon in $AdS_4$. These are 10d versions of the surfaces in fig.~\ref{fig:KR-grav}. The class of $AdS_4\times S^2\times S^2\times \Sigma$ solutions that could be considered is again broad, and we focus on simple representatives. We include two groups of D5-branes and two groups of NS5 branes, placed symmetrically at $x=\pm \delta$ on the boundary components of $\Sigma$. The separation of the 5-brane sources determines how the D3-branes in the associated brane configuration are suspended between the 5-branes. Comparing to the Karch/Randall models in fig.~\ref{fig:KR-grav}, these particular 10d solutions correspond to two equal ETW brane angles. Some 10d island surfaces are shown in fig.~\ref{fig:LRcrit2}. The corresponding HM surface is described by $x=0$ and a time-dependent embedding in the $AdS_4$ part of the geometry. The difference in areas between island and HM surfaces at $t=0$ is shown in fig.~\ref{fig:LRcrit1b}. We find that for $\delta$ above a ``Page value" $\delta_P$ the HM surface initially dominates at $t=0$. The entropy growth indicated by the HM surfaces is bounded by the constant area of the island surfaces, leading again to Page curves, shown in fig.~\ref{fig:page}. A second distinguished value for $\delta$ can be seen in fig.~\ref{fig:LRcrit1a}: at a critical value $\delta_c$ the cap-off point of the island surface at $x=-\infty$ diverges towards the conformal boundary of $AdS_4$, and no island minimal surfaces are found for $\delta<\delta_c$. The numerical results suggest that $\delta_c$ is slightly smaller than $\delta_P$, though we leave open the possibility that the difference is a numerical artifact. 
In the small (and possibly empty) range $\delta_c<\delta<\delta_P$ the island surfaces are found to dominate already at $t=0$, leading to a flat entropy curve. These results bear a striking resemblance to the critical and Page angles found in the Karch/Randall models in \cite{Geng:2020fxl}, suggesting that the ETW brane angles capture aspects of how the 5-brane sources are distributed on $\Sigma$ in~10d. In the regime where no island minimal surfaces were found in the 5d Karch/Randall models in \cite{Geng:2020fxl}, ``tiny island" limiting surfaces, which degenerate to an infinitesimal segment at the defect in fig.~\ref{fig:KR-grav}, were found to dominate and limit the entropy growth indicated by the HM surface. In 10d we find that similar tiny island surfaces connecting the $x=0$ locus to $x=\pm\infty$ arise for $\delta<\delta_c$. \medskip \textbf{Outline:} The main part is organized as follows. The 10d supergravity solutions are introduced in sec.~\ref{sec:IIBsol}. In sec.~\ref{sec:surfaces} the ansatz for extremal surfaces is discussed along with the extremality and boundary conditions and the behavior near the 5-branes. The method for constructing minimal surfaces is summarized in sec.~\ref{sec:numerics}. Island surfaces and Page curves are discussed for non-gravitating baths in sec.~\ref{sec:islands} and for gravitating baths in sec.~\ref{sec:grav-bath}. We close with a brief outlook in sec.~\ref{sec:outlook}. \section{Type IIB supergravity solutions}\label{sec:IIBsol} The general local form of the $AdS_4\times S^2\times S^2\times \Sigma$ solutions that will be used here was constructed in \cite{DHoker:2007zhm,DHoker:2007hhe}. For the study of minimal surfaces we will only need the geometry, which is a warped product of $AdS_4$ and two 2-spheres, $S_1^2$ and $S_2^2$, over a Riemann surface $\Sigma$. For the solutions of interest here $\Sigma$ can be taken as a strip, \begin{align} \Sigma&=\lbrace z\in\mathds{C}\,\vert\, 0\leq \mathop{\rm Im}(z)\leq \pi/2\rbrace~. 
\end{align} On each of the boundary components of the strip one of the $S^2$'s closes off smoothly, so that the 10d geometry has no boundary. Depending on the nature of the points at infinity, solutions for different types of field theories can be constructed: Janus solutions, dual to interface CFTs, can be realized if the points $\mathop{\rm Re}(z)\rightarrow \pm\infty$ both correspond to asymptotic $AdS_5\times S^5$ regions. Solutions with one asymptotic region closed off were constructed in \cite{Aharony:2011yc} and are dual to BCFTs. Duals for 3d SCFTs were constructed in \cite{Assel:2011xz} by closing both asymptotic $AdS_5\times S^5$ regions. The solutions are generally parametrized by two harmonic functions $h_1$, $h_2$ on $\Sigma$. The Einstein-frame metric takes the form \begin{align}\label{eq:10d-metric} ds^2&=f_4^2 ds^2_{4}+f_1^2 ds^2_{S_1^2}+f_2^2 ds^2_{S_2^2}+4\rho^2 |dz|^2~, \end{align} where $ds^2_{4}$ and $ds^2_{S_i^2}$ are line elements of unit-radius $AdS_4$ and $S^2$, respectively. The coefficient functions are given by \begin{align} f_4^8&=16\frac{N_1N_2}{W^2}~, & f_1^8&=16h_1^8\frac{N_2 W^2}{N_1^3}~, & f_2^8&=16 h_2^8 \frac{N_1 W^2}{N_2^3}~, & \rho^8&=\frac{N_1N_2W^2}{h_1^4h_2^4}~, \end{align} where \begin{align} W&=\partial\bar\partial (h_1 h_2)~, & N_i &=2h_1 h_2 |\partial h_i|^2 -h_i^2 W~. \end{align} The expressions for the fluxes and dilaton will not be needed here; they can be found in \cite{DHoker:2007zhm,DHoker:2007hhe,Aharony:2011yc,Assel:2011xz}. Based on this local form broad classes of supergravity solutions can be constructed which describe D3-branes intersecting, ending on, or suspended between D5 and NS5 branes. For the realization of Karch/Randall models with gravitating and non-gravitating baths we will employ representative solutions dual to BCFTs and 3d SCFTs, noting that more general solutions could be considered. 
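The coefficient functions combine into particularly simple expressions, $f_4^2f_1^2f_2^2\rho^2=8|h_1h_2W|$ and $f_4^2/\rho^2=2|h_1h_2/W|$, quoted in (\ref{eq:fg-eval}) below. As a spot check of this algebra one can evaluate both sides numerically; the following sketch uses the global $AdS_5\times S^5$ data $h_1=\cosh z+{\rm c.c.}$, $h_2=-i\sinh z+{\rm c.c.}$ at an arbitrarily chosen interior point of the strip (the choice of $z$ is illustrative):

```python
import cmath

# Global AdS5 x S5: h1 = cosh z + c.c., h2 = -i sinh z + c.c.
z = 0.3 + 0.5j                 # illustrative interior point of the strip
f1p = cmath.sinh(z)            # d h1 = f1'(z)
f2p = -1j * cmath.cosh(z)      # d h2 = f2'(z)
h1 = 2 * cmath.cosh(z).real
h2 = 2 * (-1j * cmath.sinh(z)).real

# W = d dbar (h1 h2); for harmonic h_i this is d(h1) dbar(h2) + d(h2) dbar(h1)
W = 2 * (f1p * f2p.conjugate()).real
N1 = 2 * h1 * h2 * abs(f1p) ** 2 - h1 ** 2 * W
N2 = 2 * h1 * h2 * abs(f2p) ** 2 - h2 ** 2 * W

# eighth powers of the warp factors as given above
f4_8 = 16 * N1 * N2 / W ** 2
f1_8 = 16 * h1 ** 8 * N2 * W ** 2 / N1 ** 3
f2_8 = 16 * h2 ** 8 * N1 * W ** 2 / N2 ** 3
rho_8 = N1 * N2 * W ** 2 / (h1 ** 4 * h2 ** 4)

lhs1 = (f4_8 * f1_8 * f2_8 * rho_8) ** 0.25   # f4^2 f1^2 f2^2 rho^2
lhs2 = (f4_8 / rho_8) ** 0.25                 # f4^2 / rho^2
print(lhs1 - 8 * abs(h1 * h2 * W))   # ~ 0
print(lhs2 - 2 * abs(h1 * h2 / W))   # ~ 0
```

The same check passes for any choice of $h_1$, $h_2$, since both identities follow from the definitions of $W$ and $N_i$ alone.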
The form of the harmonic functions $h_1$, $h_2$ is \begin{align} h_1&=\frac{\pi \alpha^\prime}{4} K e^z-\frac{\alpha^\prime}{4} \sum_{a}N_{\rm D5}^{(a)}\ln\tanh\left(\frac{z-\delta_a}{2}\right)+\rm{c.c.} \nonumber\\ h_2&=-\frac{i \pi \alpha^\prime}{4} K e^z-\frac{\alpha^\prime}{4}\sum_b N_{\rm NS5}^{(b)}\ln\tanh\left(\frac{i\pi}{4}-\frac{z-\delta_b}{2}\right)+\rm{c.c.} \end{align} The solutions describe semi-infinite D3-branes ending on D5-branes and NS5-branes which have additional D3-branes suspended between them. The number of semi-infinite D3-branes is controlled by $K$; for $K=0$ the solutions describe D3-branes suspended between D5 and NS5 branes. Groups of D5/NS5 branes are represented by the poles of $\partial h_1$/$\partial h_2$ on the boundary of $\Sigma$. The specific brane configuration can be characterized in terms of linking numbers, which are encoded in the distribution of the 5-brane sources on $\Sigma$ \cite{Aharony:2011yc,Assel:2011xz}. For $K\neq 0$ an $AdS_5\times S^5$ region emerges at $\mathop{\rm Re}(z)\rightarrow + \infty$, with $\mathop{\rm Re}(z)$ becoming the radial coordinate of $AdS_5$ in $AdS_4$ slicing and $\mathop{\rm Im}(z)$ becoming an angular coordinate on $S^5$. For $K=0$ the limit $\mathop{\rm Re}(z)\rightarrow\infty$ leads to a regular point in the internal space. The limit $\mathop{\rm Re}(z)\rightarrow -\infty$ leads to a regular point in both cases. We discuss the concrete solutions that will be used below first and briefly comment on the more general picture and dual field theories afterwards. The solutions we will study for non-gravitating baths are dual to $\mathcal N=4$ SYM on a half space coupled to 3d $T_\rho^\sigma[SU(N)]$ theories on the boundary. 
They are given by $h_{1/2}$ of the form \begin{align}\label{eq:h1h2-BCFT} h_1&=\frac{\pi \alpha^\prime}{4} K e^z-\frac{\alpha^\prime}{4}N_5\ln\tanh\left(\frac{z}{2}\right)+\rm{c.c.} \nonumber\\ h_2&=-\frac{i\pi\alpha^\prime}{4}K e^z-\frac{\alpha^\prime}{4}N_5\ln\tanh\left(\frac{i\pi}{4}-\frac{z}{2}\right)+\rm{c.c.} \end{align} The radii of $AdS_5$ and $S^5$ in the $AdS_5\times S^5$ region at $\mathop{\rm Re}(z)\rightarrow\infty$ are set by $L^4=8\pi{\alpha^\prime}^2N_5K$. The asymptotic string coupling is $\lim_{x\rightarrow\infty}e^\phi=1$. These solutions are string theory realizations of the Karch/Randall models with one ETW brane (fig.~\ref{fig:KR-nongrav}): the asymptotic region at $\mathop{\rm Re}(z)\rightarrow\infty$ corresponds to the $AdS_5$ part, while the region with the NS5/D5 sources is the string theory version of the ETW brane itself. The brane configuration involves $2N_5K$ semi-infinite D3-branes ending on a combination of $N_5$ D5-branes and $N_5$ NS5-branes (fig.~\ref{fig:brane-non-grav}). $N_5K$ D3-branes end on the D5 branes and $N_5K$ D3-branes end on the NS5-branes, and there are in addition $N_5^2/2$ D3-branes suspended between the D5 and NS5 branes. 
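Away from the 5-brane sources the functions (\ref{eq:h1h2-BCFT}) are harmonic, being sums of a holomorphic function and its conjugate. A quick finite-difference sketch confirms this for $h_1$ (the values $\alpha^\prime=K=N_5=1$ and the sample point are purely illustrative):

```python
import cmath, math

def h1(z, K=1.0, N5=1.0):
    # h1 of the non-gravitating bath solution with alpha' = 1 (illustrative)
    f = (math.pi / 4) * K * cmath.exp(z) - (N5 / 4) * cmath.log(cmath.tanh(z / 2))
    return 2 * f.real   # f(z) + c.c.

z0 = 1.0 + 0.7j   # generic point, away from the D5 source at z = 0
eps = 1e-3

# five-point Laplacian, which should vanish for a harmonic function
lap = (h1(z0 + eps) + h1(z0 - eps) + h1(z0 + 1j * eps) + h1(z0 - 1j * eps)
       - 4 * h1(z0)) / eps ** 2
print(lap)   # ~ 0
```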
\begin{figure} \subfigure[][]{\label{fig:brane-non-grav} \begin{tikzpicture}[y={(0cm,1cm)}, x={(0.707cm,0.707cm)}, z={(1cm,0cm)}, scale=1.1] \draw[gray,fill=gray!100,rotate around={-45:(0,0,2)}] (0,0,2) ellipse (1.8pt and 3.5pt); \draw[gray,fill=gray!100] (0,0,0) circle (1.5pt); \foreach \i in {-0.05,0,0.05}{ \draw[thick] (0,-1,\i) -- (0,1,\i);} \foreach \i in {-0.075,-0.025,0.025,0.075}{ \draw (-1.1,\i,2) -- (1.1,\i,2);} \foreach \i in {-0.045,-0.015,0.015,0.045}{ \draw (0,1.4*\i,0) -- (0,1.4*\i,2+\i);} \foreach \i in {-0.075,-0.045,-0.015,0.015,0.045,0.075}{ \draw (0,1.4*\i,2+\i) -- (0,1.4*\i,4);} \node at (-0.18,-0.18,3.4) {\scriptsize $2N_5 K$}; \node at (1.0,0.3,2) {\scriptsize $N_5$ D5}; \node at (0,-1.25) {\footnotesize $N_5$ NS5}; \node at (0.18,0.18,0.9) {{\scriptsize $N_5 K+\tfrac{N_5^2}{2}$}}; \end{tikzpicture} } \hskip 20mm \subfigure[][]{\label{fig:brane-grav} \begin{tikzpicture}[y={(0cm,1cm)}, x={(0.707cm,0.707cm)}, z={(1cm,0cm)}, scale=1.1] \draw[gray,fill=gray!100] (0,0,-0.5) circle (1.8pt); \draw[gray,fill=gray!100] (0,0,1) ellipse (1.8pt and 3pt); \draw[gray,fill=gray!100,rotate around={-45:(0,0,2.5)}] (0,0,2.5) ellipse (1.8pt and 3.5pt); \draw[gray,fill=gray!100] (0,0,4) circle (1.8pt); \foreach \i in {-0.05,0,0.05}{ \draw[thick] (0,-1,-0.5+\i) -- (0,1,-0.5+\i);} \foreach \i in {-0.05,0,0.05}{ \draw[thick] (0,-1,1+\i) -- (0,1,1+\i);} \foreach \i in {-0.075,-0.025,0.025,0.075}{ \draw (-1.1,\i,2.5) -- (1.1,\i,2.5);} \foreach \i in {-0.075,-0.025,0.025,0.075}{ \draw (-1.1,\i,4) -- (1.1,\i,4);} \foreach \i in {-0.03,0,0.03}{ \draw (0,1.4*\i,-0.5) -- (0,1.4*\i,1);} \foreach \i in {-0.075,-0.045,-0.015,0.015,0.045,0.075}{ \draw (0,1.4*\i,1) -- (0,1.4*\i,2.5+\i);} \foreach \i in {-0.03,0,0.03}{ \draw (0,1.4*\i,2.5) -- (0,1.4*\i,4);} \node at (0,-1.25,-0.5) {\footnotesize $\tfrac{N_5}{2}$ NS5}; \node at (0,-1.25,1) {\footnotesize $\tfrac{N_5}{2}$ NS5}; \node at (1.0,0.35,2.5) {\scriptsize $\tfrac{N_5}{2}$ D5}; \node at (1.0,0.35,4) {\scriptsize 
$\tfrac{N_5}{2}$ D5}; \node at (0.22,0.22,1.75) {{\scriptsize $\tfrac{N_5^2}{2}$}}; \node at (0,0.3,0.25) {{\scriptsize $\tfrac{N_5^2}{4}\Delta$}}; \node at (0,0.3,3.5) {{\scriptsize $\tfrac{N_5^2}{4}\Delta$}}; \end{tikzpicture} } \caption{Brane configurations for representative non-gravitating bath solutions (left) and gravitating bath solutions (right). Hanany-Witten transitions can be used to make the 3d quiver gauge theories more apparent, as in figs.~\ref{fig:AdS4-sol}, \ref{fig:AdS4-sol-grav}. The numbers of D3-branes on the right are controlled by $\delta$ through $\Delta=\frac{1}{2}+\frac{2}{\pi}\arctan e^{2\delta}$. } \end{figure} The solutions for gravitating baths that will be considered below are holographic duals for 3d $T_\rho^\sigma[SU(N)]$ SCFTs. The functions $h_1$ and $h_2$ are given by \begin{align}\label{eq:h1h2-3d-grav} h_1&=-\frac{\alpha^\prime}{4}\frac{N_5}{2}\left[\ln\tanh\left(\frac{z-\delta}{2}\right)+ \ln \tanh\left(\frac{z+\delta}{2}\right)\right]+\rm{c.c.} \nonumber\\ h_2&=-\frac{\alpha^\prime}{4}\frac{N_5}{2}\left[\ln\tanh\left(\frac{i\pi}{4}-\frac{z-\delta}{2}\right) +\ln\tanh\left(\frac{i\pi}{4}-\frac{z+\delta}{2}\right)\right]+\rm{c.c.} \end{align} These solutions describe $N_5^2/2$ D3-branes suspended between two groups of D5-branes and two groups of NS5-branes, with $N_5/2$ 5-branes in each group. There are no semi-infinite D3-branes and the asymptotic $AdS_5\times S^5$ region at $\mathop{\rm Re}(z)\rightarrow \infty$ is closed off. The limits $\mathop{\rm Re}(z)\rightarrow \pm\infty$ both correspond to regular points in the internal space. The 5-brane groups are represented in the supergravity solutions by sources with $N_5/2$ D5 and $N_5/2$ NS5-branes, respectively, at $z=\pm\delta$ and $z=\pm\delta+i\pi/2$. The parameter $ \delta$ determines how the D3-branes terminate on the D5 and NS5 branes (fig.~\ref{fig:brane-grav}); for $\delta=0$ the numbers of D3-branes terminating on each group of 5-branes are equal. 
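The fraction $\Delta=\frac{1}{2}+\frac{2}{\pi}\arctan e^{2\delta}$ controlling the D3-brane counts in fig.~\ref{fig:brane-grav} equals $1$ at $\delta=0$ and interpolates between $\frac{1}{2}$ and $\frac{3}{2}$ for $\delta\rightarrow\mp\infty$; a minimal numerical check of these limiting values:

```python
import math

def Delta(delta):
    # fraction controlling the suspended D3-brane counts
    return 0.5 + (2 / math.pi) * math.atan(math.exp(2 * delta))

print(Delta(0.0))    # ~ 1.0: symmetric distribution of the D3-branes
print(Delta(-30.0))  # ~ 0.5
print(Delta(30.0))   # ~ 1.5
```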
The dual 3d SCFTs are special cases of the theories discussed in sec.~5.3 of \cite{Coccia:2020wtk}. Comparing to the 5d Karch/Randall models, the closing off of the asymptotic $AdS_5\times S^5$ region corresponds to the introduction of the second ETW brane in fig.~\ref{fig:KR-grav}. The entire 10d solution corresponds to the remaining wedge of $AdS_5$ in fig.~\ref{fig:KR-grav}. The solutions (\ref{eq:h1h2-BCFT}) and (\ref{eq:h1h2-3d-grav}) are invariant under S-duality (exchange of $h_1$ and $h_2$ combined with $z\rightarrow \frac{i\pi}{2}-z$), reflecting that the associated brane configurations are invariant under S-duality (in fig.~\ref{fig:brane-non-grav} up to Hanany-Witten transitions). This will be useful below. From now on we set $\alpha^\prime=1$. Solutions with more general arrangements of 5-brane sources (poles in $\partial h_{1/2}$) and no asymptotic $AdS_5\times S^5$ region describe configurations of D3-branes suspended between D5 and NS5 branes that can be characterized by two Young tableaux $\rho$ and $\sigma$, which determine how precisely the D3-branes terminate on the 5-branes. The general relation between the distribution of the 5-brane sources on the boundary of $\Sigma$ and the Young tableaux $\rho$ and $\sigma$ can be found in \cite{Assel:2011xz}. The brane configurations engineer 3d $\mathcal N=4$ quiver gauge theories, and the supergravity solutions are dual to their IR fixed points. For solutions with an $AdS_5\times S^5$ region and semi-infinite D3-branes the dual field theory is $\mathcal N=4$ SYM on a half space coupled to a 3d $T_\rho^\sigma[SU(N)]$ SCFT on the boundary \cite{Aharony:2011yc}. The free energies obtained holographically were matched to field theory computations using supersymmetric localization for the former in \cite{Assel:2012cp,Coccia:2020wtk} and for the latter in \cite{VanRaamsdonk:2020djx}. 
\subsection{Finite temperature} For each $AdS_4\times S^2\times S^2\times\Sigma$ solution one may replace $AdS_4$ by a finite temperature black hole and still obtain a solution to the Type IIB supergravity field equations: to verify the field equations one only needs that the 4d space is Einstein with negative curvature. This is true for the $AdS_4$ black hole metrics we will use, so that replacing $AdS_4$ by a black hole yields non-supersymmetric solutions which asymptotically approach the supersymmetric seed solution. From a more general perspective, the $AdS_4\times S^2\times S^2\times\Sigma$ solutions are in the class for which a consistent truncation is conjectured to exist \cite{Gauntlett:2007ma}. Having a consistent truncation to 4d gauged supergravity would make it possible to uplift more general 4d solutions to 10d, but this is not needed for our purposes here. To introduce finite temperature, we replace the $AdS_4$ metric in (\ref{eq:10d-metric}) by the $AdS_4$ black hole metric \begin{align}\label{eq:ds2-AdS4-T} ds_4^2&=\frac{dr^2}{b(r)}+e^{2r}\left(-b(r)dt^2+ds^2_{{\mathds{R}}^2}\right)~, & b(r)&=1-e^{3(r_h-r)}~. \end{align} The horizon is at $r=r_h$, the conformal boundary at $r\rightarrow\infty$. It will be convenient to also introduce the tortoise coordinate $u$ by \begin{align}\label{eq:tortoise} du&=\frac{dr}{\sqrt{b(r)}}~, & u&=\frac{2}{3}\cosh^{-1}\left(e^{\frac{3}{2}(r-r_h)}\right)~. \end{align} The range $u\in{\mathds{R}}^+$ corresponds to the exterior region covered by the original coordinate $r$, with the horizon at $u=0$. The metric becomes \begin{align} ds^2_4&=du^2+e^{2r_h}\cosh^{4/3}\left(\frac{3u}{2}\right)\left[-\tanh^2\left(\frac{3u}{2}\right)dt^2+ds^2_{{\mathds{R}}^2}\right]~. \end{align} From the CFT perspective replacing $AdS_4$ by a planar black hole corresponds to adding a finite temperature for $\mathcal N=4$ SYM on $AdS_4$ for solutions with an $AdS_5\times S^5$ region. 
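As a small consistency check of the closed form in (\ref{eq:tortoise}), a numerical derivative of $u(r)$ can be compared against $1/\sqrt{b(r)}$ (sample point and step size are arbitrary):

```python
import math

r_h = 0.0   # horizon position (illustrative)
b = lambda r: 1 - math.exp(3 * (r_h - r))
u = lambda r: (2 / 3) * math.acosh(math.exp(1.5 * (r - r_h)))

r0, eps = 1.0, 1e-5
du_dr = (u(r0 + eps) - u(r0 - eps)) / (2 * eps)   # central difference
print(du_dr - 1 / math.sqrt(b(r0)))               # ~ 0
```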
The black hole solutions without $AdS_5\times S^5$ region are dual to 3d $T_\rho^\sigma[SU(N)]$ SCFTs at finite temperature. \section{Extremal surfaces}\label{sec:surfaces} In this section we discuss the embedding ansatz for the surfaces that will be used for the entanglement entropy computations, the extremality and boundary conditions, and the behavior near the 5-brane sources. \subsection{Island surfaces} The surfaces of interest are 8d minimal surfaces in the 10d geometry (\ref{eq:10d-metric}) that wrap both $S^2$'s, (part of) the Riemann surface $\Sigma$, and a part of the $AdS_4$ black hole geometry. For the $AdS_4$ black hole we choose coordinates (\ref{eq:ds2-AdS4-T}), such that the 10d metric is given by (\ref{eq:10d-metric}) with \begin{align} ds_4^2&=\frac{dr^2}{b(r)}+e^{2r}\left(-b(r)dt^2+ds^2_{{\mathds{R}}^2}\right)~. \end{align} The surfaces can be described by specifying the $AdS_4$ radial coordinate $r$ for any given point of $\Sigma$. On $\Sigma$ we introduce real coordinates \begin{align} z=x+iy~, \end{align} with $x\in\mathds{R}$ and $0\leq y\leq\frac{\pi}{2}$. The embeddings are thus described by a single embedding function \begin{align} r=r(x,y)~. \end{align} The induced metric on the surface reads \begin{align}\label{eq:ind-met} ds^2_\gamma&=e^{2r}f_4^2ds^2_{{\mathds{R}}^2}+f_1^2ds^2_{S_1^2}+f_2^2ds^2_{S^2_2}+4\rho^2 (dx^2+dy^2) +\frac{f_4^2}{b(r)}\left(dx\, \partial_x r +dy \partial_y r \right)^2~. \end{align} The area of a general surface of this form is given by $A=V_{{\mathds{R}}^2}V_{S_1^2\times S_2^2}S_\gamma$, with \begin{align} S_\gamma&=4\int dx dy \,e^{2r}f_4^2f_1^2f_2^2\rho^2\sqrt{1+\frac{f_4^2}{4b(r)\rho^2}\left((\partial_x r)^2+(\partial_y r)^2\right)}~. \end{align} The combinations of metric functions appearing in this expression are given by \begin{align}\label{eq:fg-eval} f_4^2 f_1^2f_2^2\rho^2&=8\left|h_1 h_2 W\right|~, & \frac{f_4^2}{\rho^2}&=2\left|\frac{h_1h_2}{W}\right|~. 
\end{align} With these expressions the area simplifies to \begin{align}\label{eq:S} S_\gamma&=32\int dx dy \,e^{2r}\left|h_1 h_2 W\right|\sqrt{1+\frac{1}{2b(r)}\left|\frac{h_1 h_2}{W}\right| (\nabla r)^2}~. \end{align} Since $4W=\Delta(h_1h_2)$, the area depends on $h_1$ and $h_2$ only through the combination $h_1h_2$. The extremality condition resulting from variation of $S_\gamma$ (with $S_\gamma=\int L_\gamma$) can be written as \begin{align}\label{eq:eom-fg} 0\stackrel{!}{=} \frac{\delta L_\gamma}{L_\gamma}&= \frac{1}{1+g(\nabla r)^2}\left[2-\nabla(g\nabla r)+\frac{1}{2}g\nabla r\cdot \nabla\ln\left(\frac{1+g(\nabla r)^2}{b(r) f^2}\right)\right]~, \end{align} where $\nabla$ is the covariant derivative with respect to the metric on $\Sigma$ and \begin{align} f&=|h_1 h_2 W|~, & g&=\frac{1}{2b(r)}\left|\frac{h_1 h_2}{W}\right|~. \end{align} The dependence on $r$ itself drops out for zero temperature, i.e.\ when $b(r)=1$. If $r(x,y)$ is a solution to the extremality condition at zero temperature then so is $r(x,y)+c$ with a constant $c$, with different asymptotic values at $x\rightarrow \pm\infty$; this reflects the defect conformal symmetry. \subsection{Boundary conditions}\label{sec:bc} We now discuss the boundary conditions for surfaces extending along $\Sigma$, starting with the two boundary components of the strip at $y=0$ and $y=\frac{\pi}{2}$. Near $y=0$ the sphere $S_1^2$ collapses, with $f_1^2\sim 4y^2 \rho^2$, so that the background has no conical singularity in the space parametrized by $y$ and $S_1^2$. The induced metric (\ref{eq:ind-met}) near $y=0$ consequently takes the form \begin{align} ds^2_\gamma&\approx e^{2r}f_4^2ds^2_{{\mathds{R}}^2}+f_2^2ds^2_{S^2_2}+4\rho^2 \left( dx^2+dy^2+y^2ds^2_{S_1^2}\right)+\frac{f_4^2}{b(r)}\left(dx\, \partial_x r +dy \partial_y r \right)^2~. \end{align} The contribution proportional to $(\partial_y r)^2 dy^2$ threatens to introduce a conical singularity in the $(y,S_1^2)$ part of the induced metric on the surface. 
A smooth metric is obtained with the Neumann boundary condition $\partial_y r\vert_{y=0}=0$. The reasoning for the second boundary component, where $S_2^2$ collapses, is analogous. We conclude \begin{align}\label{eq:Neumann-bc-y} \partial_y r(x,y)\big\vert_{y=0}&=0~, & \partial_y r(x,y)\big\vert_{y=\frac{\pi}{2}}&=0~. \end{align} For $x\rightarrow -\infty$ the space closes off smoothly; the limit corresponds to a single regular point on the boundary of $\Sigma$. For the surface to be smooth, $\lim_{x\rightarrow -\infty}r(x,y)$ should be independent of $y$. The asymptotic behavior of the metric functions, with coordinate $v=2e^{x}$ and $v\rightarrow 0$, is given by (see (3.15) of \cite{Assel:2011xz}) \begin{align} f_4^2&\approx L^2~, & f_1^2&\approx 4\sin^2\!y\,\rho^2~, & f_2^2&\approx 4\cos^2\!y\,\rho^2~, & 4\rho^2&\approx L^2v^2~. \end{align} The induced metric on the minimal surface becomes (noting that $\partial_y r\rightarrow 0$) \begin{align} ds^2_\gamma&\approx L^2\left[ e^{2r}ds^2_{{\mathds{R}}^2}+dv^2+v^2\left(dy^2+\sin^2\!y\, ds^2_{S_1^2}+\cos^2\!y\,ds^2_{S_2^2}\right)+(\partial_x r)^2\frac{dv^2}{v^2}\right]~. \end{align} The part in the round bracket is the line element for $S^5$, and a smooth ${\mathds{R}}^8$ with no conical singularity is obtained if \begin{align}\label{eq:bc-x-minus} \lim_{x\rightarrow-\infty}e^{-x}\partial_x r(x,y)=0~. \end{align} The conditions (\ref{eq:Neumann-bc-y}) and (\ref{eq:bc-x-minus}) are the 10d analog of the Neumann boundary conditions imposed at the ETW brane in the 5d Karch/Randall models. The nature of the limit $x\rightarrow +\infty$ is different for the solutions in (\ref{eq:h1h2-BCFT}) for a non-gravitating bath, where an $AdS_5\times S^5$ region emerges in this limit, compared to the solution (\ref{eq:h1h2-3d-grav}) for a gravitating bath. 
For the latter the limits $x\rightarrow \pm \infty$ both lead to regular boundary points, and the boundary condition at $x\rightarrow+\infty$ is given by (\ref{eq:bc-x-minus}) with $x\rightarrow -x$. For the former, with the emerging $AdS_5\times S^5$ region, a Dirichlet condition anchoring the surface is imposed instead. The general form is \begin{align} \lim_{x\rightarrow+\infty} r(x,y)&=r_0(y)~. \end{align} The form of $r_0(y)$ can be determined by considering global $AdS_5\times S^5$, corresponding to $h_1=\cosh z+\rm{c.c.}$ and $h_2=-i \sinh z+\rm{c.c.}$ In that case $|h_1h_2/W|=2\cosh^2(x)$, which is independent of $y$. As a result one can find extremal surfaces with no dependence on $y$, which is an angular coordinate on $S^5$. For more general solutions the boundary condition in the asymptotic $AdS_5\times S^5$ region at $x\rightarrow\infty$ therefore is that $r(x,y)$ should become independent of $y$ and satisfy a Dirichlet condition with $r_0(y)=r_R$. In summary, \begin{align}\label{eq:Dirichlet} \lim_{x\rightarrow+\infty}r(x,y)&=r^{}_R\qquad \text{for (\ref{eq:h1h2-BCFT}),} & \lim_{x\rightarrow+\infty}e^{+x}\partial_x r(x,y)&=0 \qquad \text{for (\ref{eq:h1h2-3d-grav}).} \end{align} \subsection{Near-pole behavior}\label{sec:near-pole} At zero temperature the minimal surfaces will show distinct behavior near the 5-brane sources, and cap off there.\footnote{This differs from the behavior of the spherical entangling surface centered on the defect studied in \cite{VanRaamsdonk:2020djx}, which has a simple universal embedding which is insensitive to the 5-brane sources.} In this section we will discuss this behavior analytically, using the form of the supergravity solutions near the 5-brane sources. At finite temperature the behavior near the 5-brane sources will be regulated by the horizon. 
To discuss the behavior near a pole at $z=z_0$ it is convenient to introduce coordinates centered on the pole, $z=z_0+R e^{i\varphi}$ for $z_0$ on the real line and $z=z_0-Re^{i\varphi}$ for $\mathop{\rm Im}(z_0)=\pi/2$. The combinations that appear in the area functional (\ref{eq:S}) behave at zero temperature as follows, \begin{align} f=|h_1h_2 W|&\approx f_0\sin^2(\varphi)(-\ln R)~, & g=\frac{1}{2}\left|\frac{h_1h_2}{W}\right|&\approx -R^2\ln R~, \end{align} where $f_0$ is a constant which depends on the solution under consideration. The value of $f_0$ will not be relevant, since the extremality condition (\ref{eq:eom-fg}) is invariant under constant rescalings of $f$. To discuss the near-pole behavior it is convenient to drop the overall factor in the extremality condition (\ref{eq:eom-fg}) and use the condition in the form \begin{align}\label{eq:eom-nb} 0&=2-\nabla\left(g\nabla r\right)+\frac{1}{2}g\nabla r\cdot \nabla\ln\left(\frac{1+g(\nabla r)^2}{f^2}\right)~. \end{align} The two non-trivial terms on the right hand side are generically of the same order, noting that $\nabla \ln(\ldots)=\mathcal O(1/R)$. A scaling analysis suggests taking $\nabla r=\mathcal O(1/(R\ln R))$ and making the ansatz \begin{align} r(R,\varphi)&= r_0\ln(-\ln R)+\frac{r_1(\varphi)}{\ln R}+\ldots \end{align} where the ellipsis denotes regular and subleading terms. The leading non-trivial order in the extremality condition (\ref{eq:eom-nb}) then is its finite part. The near-pole solution without divergences in $\varphi$ is given by $r_0=-1$ and $r_1$ constant. In summary, the behavior of the embedding near a 5-brane source at $z=z_0$ is given by \begin{align}\label{eq:r-near-pole} r(z,\bar z)&= -\ln(-\ln |z-z_0|)+\ldots~. \end{align} Since $\lim_{z\rightarrow z_0}r(z,\bar z)=-\infty$, the minimal surface drops into the Poincar\'e horizon at the source. 
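To make the statement $r_0=-1$ explicit, one can verify the leading balance directly in polar coordinates $(R,\varphi)$ centered on the pole (a short check; we use the near-pole form $g\approx -R^2\ln R$ and treat $\nabla$ as the flat gradient on the strip):

```latex
\begin{align}
\partial_R r=-\frac{1}{R\ln R}~,\qquad
g\,\partial_R r=R~,\qquad
\nabla\cdot\left(g\nabla r\right)=\frac{1}{R}\,\partial_R\!\left(R\cdot g\,\partial_R r\right)=2~,
\end{align}
```

so the first two terms in (\ref{eq:eom-nb}) cancel at leading order, while $g(\nabla r)^2=-1/\ln R$ and $\frac{1}{2}\,g\,\partial_R r\,\partial_R\ln f^{-2}=-1/\ln R$ both vanish as $R\rightarrow 0$, consistent with (\ref{eq:r-near-pole}).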
At the point $z_0$ the background geometry is singular, as appropriate for a solution near a 5-brane source, and we do not impose additional regularity conditions for the minimal surface. \subsection{HM surfaces} We will focus on the non-gravitating bath solutions (\ref{eq:h1h2-BCFT}) for the discussion of HM surfaces; those for the gravitating bath solutions will be discussed in sec.~\ref{sec:grav-bath}. We use the tortoise coordinate $u$ defined in (\ref{eq:tortoise}) and parametrize the embedding at $t=0$ in terms of $x(u,y)$ instead of $r(x,y)$. The minimal surfaces range in $u$ from the value enforced by the Dirichlet boundary condition (\ref{eq:Dirichlet}) through the horizon into the thermofield double. We focus on surfaces anchored at the same point $r_R$ in the thermofield double, which are symmetric with respect to reflection across $u=0$ at $t=0$. So we can restrict to $u\geq 0$ to find the embeddings. From that perspective the HM surfaces end on the horizon at $u=0$ along a curve $x_h(y)$ which is determined by the extremality condition. The induced metric with the tortoise coordinate $u$ and the parametrization $x(u,y)$ becomes \begin{align} ds^2=\,&e^{2r_h}\cosh^{4/3}\left(\frac{3u}{2}\right)f_4^2ds^2_{\mathds{R}^2}+f_1^2 ds^2_{S_1^2}+f_2^2ds^2_{S_2^2}+ \left[f_4^2+4\rho^2(\partial_u x)^2\right]du^2 \nonumber\\ & +4\rho^2 \left[dy^2\left(1+(\partial_y x)^2\right)+(\partial_u x)(\partial_y x)(du\, dy+dy\,du)\right]~. \end{align} The area evaluated using (\ref{eq:fg-eval}) becomes \begin{align}\label{eq:HM-area} S&=32\int du dy\, e^{2r_h}\cosh^{4/3}\left(\frac{3u}{2}\right)|h_1 h_2W|\sqrt{\frac{1}{2}\left|\frac{h_1 h_2}{W}\right|\left(1+(\partial_y x)^2\right)+(\partial_u x)^2}~. \end{align} For the boundary conditions we start with the boundaries of $\Sigma$ at $y=0$ and $y=\frac{\pi}{2}$. 
Having no conical singularities at $y=0,\frac{\pi}{2}$ leads to the Neumann boundary conditions \begin{align}\label{eq:Neumann-bc-y-HM} \partial_y x(u,y)\big\vert_{y=0}&=0~, &\partial_y x(u,y)\big\vert_{y=\frac{\pi}{2}}&=0~, \end{align} analogously to the arguments for (\ref{eq:Neumann-bc-y}) before. The Dirichlet condition which anchors the surface, given in (\ref{eq:Dirichlet}) for the parametrization $r(x,y)$, here becomes \begin{align} \lim_{u\rightarrow u(r_R)} x(u,y)=\infty~. \end{align} On the other end the surface should intersect the horizon and end from the one-sided perspective at $u=0$. The symmetry under reflection across $u=0$ leads to the Neumann condition \begin{align} \partial_u x(u,y)\vert_{u=0}&=0~. \end{align} This condition also ensures that boundary terms in the variation of the area at $u=0$ vanish. \section{Solving for minimal surfaces}\label{sec:numerics} To summarize, the extremality conditions are non-linear second-order PDEs on the strip $\Sigma=\lbrace x+iy\vert x\in\mathds{R}, 0\leq y\leq\frac{\pi}{2}\rbrace$, with Neumann boundary conditions at $y=0$ and $y=\frac{\pi}{2}$. The domain and boundary conditions in the $x$ direction depend on the background solution and type of surface under consideration. The solutions are expected to be smooth, except for the locations on the two boundary components at $y\in\lbrace 0,\frac{\pi}{2}\rbrace$ where the D5/NS5 sources are in (\ref{eq:h1h2-BCFT}) and (\ref{eq:h1h2-3d-grav}). To solve these PDEs numerically we start with a trial surface satisfying the boundary conditions and let it dynamically settle on a minimal area configuration. To this end an auxiliary external time parameter $\tau$ is introduced, and the embedding, say for island surfaces, is described by a $\tau$-dependent function $r(x,y,\tau)$. 
The $\tau$-evolution for $r(x,y,\tau)$ is chosen as \begin{align}\label{eq:r-tau} \partial_\tau r(x,y,\tau)&=-L_\gamma^{-1}\frac{\delta L_\gamma}{\delta r(x,y,\tau)}~, \end{align} where $L_\gamma$ is the volume element of the surface in (\ref{eq:S}). This exerts a force on the embedding in the direction in which the area decreases. The right hand side is given by (\ref{eq:eom-fg}) with $r(x,y)$ replaced by $r(x,y,\tau)$. To numerically implement the relaxation the embedding function $r(x,y,\tau)$ is discretized in $x$ and $y$, and eq.~(\ref{eq:r-tau}) is replaced by a set of ODEs for the values of $r$ at the lattice points, $r_{ij}(\tau)$. We use $\tilde x = \tanh(x)$ to obtain a finite domain and a rectangular lattice with equidistant points. The derivatives are discretized using second-order finite differences and the boundary conditions are implemented such that they are compatible with the second-order accuracy of the finite differences.\footnote{% For Neumann boundary conditions the lattice is extended by one row beyond the actual domain. The Neumann boundary conditions in the $y$ direction, (\ref{eq:Neumann-bc-y}), apply for regular points of $\partial\Sigma$, not at the locations of the 5-brane sources. This has to be taken into account in the discretization.} The resulting set of ODEs is integrated numerically using Mathematica. Asymptotically the evolution of the $r_{ij}(\tau)$ is expected to settle on an equilibrium configuration $r^\star_{ij}$, which is a discretized solution to the extremality condition for the minimal surface. Letting the evolution (\ref{eq:r-tau}) run for a large time $\tau_{\rm max}\gg 1$ will yield an approximation to this equilibrium configuration. The quality of the final configuration $r_{ij}(\tau_{\rm max})$ can be assessed from the residuals \begin{align}\label{eq:residuals} R_{ij}&=\left|L_\gamma^{-1}\frac{\delta L_\gamma}{\delta r(x,y,\tau)}\right|_{\tau=\tau_{\rm max}}~. 
\end{align} We typically use a lattice with $\mathcal O(100)$ nodes in the $\tilde x$ and $y$ directions, though coarser lattices already capture the qualitative form of the surfaces well. For the surfaces and data shown below the residuals have decreased to $\mathcal O(10^{-6})$ or less. A limitation of this method is that it is unlikely to capture extremal surfaces for which the area functional does not take a local minimum (i.e.\ saddle points). However, the interest here is primarily in actual minimal surfaces. Due to the symmetry of the D5/NS5 brane sources in the supergravity solutions (\ref{eq:h1h2-BCFT}) and (\ref{eq:h1h2-3d-grav}) under S-duality combined with $z\rightarrow i\pi/2 -z$, the Einstein-frame metric is invariant under $y\rightarrow \frac{\pi}{2}-y$. For the minimal surfaces discussed here the boundary conditions respect this symmetry, so that the surfaces themselves are symmetric. The PDEs thus only have to be solved on the half of the strip $\Sigma$ with $0\leq y\leq\frac{\pi}{4}$, with a Neumann boundary condition at $y=\frac{\pi}{4}$ to enforce the symmetry. \section{Islands with non-gravitating baths}\label{sec:islands} In this section we discuss minimal surfaces, island contributions and the emergence of Page curves in the 10d solutions for non-gravitating baths, given in (\ref{eq:h1h2-BCFT}). The general structure of the supergravity solutions is illustrated in fig.~\ref{fig:AdS4-sol}, and we have D5 and NS5-brane sources, respectively, at $(x,y)=(0,0)$ and $(x,y)=(0,\pi/2)$. The 8d minimal surfaces can be visualized as 2d surfaces in the 3d space spanned by the $x$ and $y$ coordinates parametrizing $\Sigma$ and the $AdS_4$ radial direction. They are obtained using the relaxation method of sec.~\ref{sec:numerics}. We will start with a discussion of general features, before moving on to comparing the areas of island and HM surfaces. 
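The relaxation method of sec.~\ref{sec:numerics} can be illustrated in a much simpler setting: gradient-flow relaxation of a curve $y(x)$ with fixed endpoints towards the minimizer of the length functional $\int dx\,\sqrt{1+y'^2}$. The following toy sketch (lattice size, step and iteration count are illustrative, and the setup is one dimension lower than the actual problem) settles on the straight line, the analog of the equilibrium configuration $r^\star_{ij}$:

```python
import math

# Toy relaxation: minimize L[y] = int sqrt(1 + y'^2) dx with y(0)=0, y(1)=1.
# Gradient flow dy/dtau = y'' / (1 + y'^2), discretized with an explicit step.
n = 51
h = 1.0 / (n - 1)
x = [i * h for i in range(n)]
y = [xi + 0.3 * math.sin(math.pi * xi) for xi in x]   # trial curve

dtau = 0.4 * h ** 2   # below the explicit-step stability bound h^2/2
for _ in range(40000):
    new = y[:]
    for i in range(1, n - 1):
        yp = (y[i + 1] - y[i - 1]) / (2 * h)
        ypp = (y[i + 1] - 2 * y[i] + y[i - 1]) / h ** 2
        new[i] = y[i] + dtau * ypp / (1 + yp ** 2)
    y = new

length = sum(math.hypot(h, y[i + 1] - y[i]) for i in range(n - 1))
print(max(abs(yi - xi) for yi, xi in zip(y, x)))   # deviation from straight line ~ 0
print(length)                                      # ~ sqrt(2)
```

The explicit step is kept below the diffusive stability bound, mirroring the role of the lattice spacing in the discretized version of the flow (\ref{eq:r-tau}).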
\begin{figure} \includegraphics[width=0.4\linewidth]{islands-rR4.pdf} \hskip 10mm \includegraphics[width=0.4\linewidth]{islands-rR3.pdf} \\ \includegraphics[width=0.4\linewidth]{islands-rR2.pdf} \hskip 10mm \includegraphics[width=0.4\linewidth]{islands-rR1.pdf} \caption{ Island surfaces from top left to bottom right anchored at $r_R\in\lbrace 5,3,2.1,1\rbrace$. The horizon is at $r_h=0$ and $N_5/K=2$. The $AdS_5\times S^5$ region emerges at $\tanh x=1$, the 5-brane sources are at $\tanh x=0$ and $\tanh x=-1$ is a regular point in the internal space. For smaller $r_R$ (smaller radiation region) the surfaces stay closer to the horizon. Near the 5-brane sources the surfaces reach to the horizon for all $r_R$. \label{fig:islands} } \end{figure} \begin{figure} \includegraphics[width=0.43\linewidth]{HM-rR1.pdf} \hskip 10mm \includegraphics[width=0.43\linewidth]{HM-rR3.pdf} \caption{ HM surfaces at $t=0$ for $r_R=1$ (left) and $r_R=2.1$ (right), with $r_h=0$ and $N_5/K=2$. The plots show the tortoise radial coordinate $u$, in which the horizon is intersected orthogonally. The further the HM surfaces are anchored from the horizon at $r_h=0$, the further they reach towards negative $x$. \label{fig:HM-surf} } \end{figure} \subsection{Island vs.\ HM surfaces} A sample of island surfaces with varying anchor points $r_R=\lim_{x\rightarrow\infty}r(x,y)$ in the $AdS_5\times S^5$ region, for supergravity solutions (\ref{eq:h1h2-BCFT}), with temperature $r_h=0$ and $N_5/K=2$, is shown in fig.~\ref{fig:islands}. Simultaneous rescalings of $N_5$ and $K$ lead to an overall rescaling of the metric functions in (\ref{eq:10d-metric}), so the form of the minimal surfaces only depends on the ratio $N_5/K$. The ratio $N_5/K$ controls the ratio of the number of D3-branes suspended between the 5-branes and the number of semi-infinite D3-branes. For $N_5/K=2$ these numbers are equal (fig.~\ref{fig:brane-non-grav}). 
For surfaces with large $r_R$, anchored far from the horizon, the impact of the 5-brane sources is clearly visible in fig.~\ref{fig:islands}, in line with the behavior discussed in sec.~\ref{sec:near-pole} (example surfaces at zero temperature are shown in fig.~\ref{fig:crit-T0-surfs}). As the anchor point $r_R$ is decreased, moving towards the horizon, the entire surface moves towards the horizon and the near-pole behavior becomes less pronounced. For the surfaces in fig.~\ref{fig:islands} a discretization with $(200,100)$ points in $(\tanh x,y)$ was used, and the residuals (\ref{eq:residuals}) at $\tau=10^3$ are reduced to $\mathcal O(10^{-10})$. The quality of the solutions can also be investigated using the undiscretized extremality condition (\ref{eq:eom-fg}): From a discretized solution one can construct a twice differentiable interpolating function $\tilde r(x,y)$. The interpolation should not necessarily be expected to capture the true solution accurately away from the lattice points, especially near the D5/NS5 sources where the true solution is not smooth. Evaluating (\ref{eq:eom-fg}) on the interpolation nevertheless only produces small errors near the poles, which decrease further with increased lattice resolution, suggesting that they are benign and not systematic. Examples of $t=0$ HM surfaces for $N_5/K=2$ are shown in fig.~\ref{fig:HM-surf}. For radiation regions that start far in the bath system, the surfaces are anchored close to the horizon at $\tanh x=1$, i.e.\ with $r_R$ close to $r_h$. These surfaces drop into the horizon along a curve $x_h(y)$ which is located well before reaching the D5/NS5 sources at $x=0$.\footnote{% If the initial trial surface reaches beyond the 5-brane sources, the relaxation transitions it into the $x>0$ region.} Upon moderately increasing $r_R$, the surfaces reach further towards smaller values of $x$. 
The curve $x_h(y)$ starts to bulge out towards negative values in the interior of $\Sigma$, i.e.\ for $y\neq \lbrace 0,\pi/2\rbrace$, while the boundary values $x_h(0)$ and $x_h(\pi/2)$ remain at larger values and stay shy of reaching the 5-brane sources at $x=0$. The behavior upon further increasing $r_R$ depends on $N_5/K$, and will be discussed below. With the surfaces in hand we can compare the areas between island and $t=0$ HM surfaces anchored at the same $r_R$ and discuss the time evolution of the entropy. The areas have the usual divergences associated with entanglement entropies in 4d. Rather than isolating the divergences separately for island and HM surfaces, we directly compute the finite area difference between island and HM surfaces anchored at the same $r_R$. For numerical stability it is desirable to take the difference at the level of the integrands, at least in the region of large $x$. Since the HM surface is obtained with a different parametrization, we transform the HM surface described by $x_{HM}(r,y)$ to a parametrization in terms of $r_{HM}(x,y)$, by inverting $x_{HM}(r,y)$ with respect to the first argument. The derivatives of $r_{\rm HM}$ can be expressed in terms of $x_{\rm HM}$, \begin{align} \partial_x r_{HM}(x,y)&=\frac{1}{\partial_r x_{HM}(r,y)}\Big\vert_{r=r_{HM}(x,y)}~, & \partial_y r_{HM}(x,y)&=-\frac{\partial_y x_{HM}(r,y)}{\partial_r x_{HM}(r,y)}\Big\vert_{r=r_{HM}(x,y)}~. \end{align} This is used to replace the derivatives in the area functional (\ref{eq:S}) before replacing $r_{HM}(x,y)$ itself by the inverse of $x_{\rm HM}$, to avoid taking derivatives of the inverted function and improve numerical stability. The area integrands obtained this way are numerically smooth, and are used to compute the area differences with a cut-off $\tanh x\leq 1-\epsilon$. The dependence on the cut-off is very mild, with percent level variation between $\epsilon=10^{-2}$ and $\epsilon=10^{-3}$, and the latter is used for the plots.
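The inverse-parametrization identities above can be checked numerically on any invertible profile. The sketch below uses a hypothetical $x_{HM}(r,y)=\log r+0.3\sin y$, chosen only because its inverse is available in closed form; it is not the actual HM embedding.

```python
import numpy as np

# Check of the identities
#   dr/dx = 1/(dx/dr)   and   dr/dy = -(dx/dy)/(dx/dr),  at r = r(x, y),
# on an illustrative invertible profile (not the actual HM embedding).

def x_of(r, y):
    return np.log(r) + 0.3 * np.sin(y)    # monotonic in r, hence invertible

def r_of(x, y):                            # closed-form inverse, for comparison
    return np.exp(x - 0.3 * np.sin(y))

x0, y0, eps = 0.7, 0.5, 1e-6
r0 = r_of(x0, y0)

# central finite differences of x(r, y)
dx_dr = (x_of(r0 + eps, y0) - x_of(r0 - eps, y0)) / (2 * eps)
dx_dy = (x_of(r0, y0 + eps) - x_of(r0, y0 - eps)) / (2 * eps)

# identities, using only derivatives of x(r, y)
dr_dx_inv = 1.0 / dx_dr
dr_dy_inv = -dx_dy / dx_dr

# direct derivatives of the known inverse r(x, y)
dr_dx = (r_of(x0 + eps, y0) - r_of(x0 - eps, y0)) / (2 * eps)
dr_dy = (r_of(x0, y0 + eps) - r_of(x0, y0 - eps)) / (2 * eps)

print(abs(dr_dx - dr_dx_inv) < 1e-6, abs(dr_dy - dr_dy_inv) < 1e-6)  # True True
```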
\begin{figure} \begin{tikzpicture} \node at (0,0){\includegraphics[width=0.4\linewidth]{areaDiff.pdf}}; \node at (3.6,-1.7) {\small $r_R$}; \node at (-3.5,2) {\small $\Delta S$}; \end{tikzpicture} \caption{Area difference $\Delta S=S_{\rm island}-S_{\rm HM}$ as function of the anchor point $r_R$ in the asymptotic $AdS_5\times S^5$ region. The defect is at $r=\infty$, the horizon at $r_h=0$. For $\Delta S>0$ the HM surface dominates at $t=0$. The radius of the $AdS_5\times S^5$ region is controlled by $N_5K$, the number of defect degrees of freedom by $N_5^2$. The color-coded dots are, from top to bottom, for $N_5/K\in\lbrace 1.2,1.6,2.0,2.4\rbrace$ with $K=1$. \label{fig:areadiff}} \end{figure} The results are shown in fig.~\ref{fig:areadiff}. They show that the island surface has larger area than the $t=0$ HM surface when $r_R$ is not too far from the horizon $r_h$. This leads to Page curves: The radiation region is identified far from the location where the gravity and bath systems meet ($r=\infty$), as the part of the $AdS_5\times S^5$ region at $x\rightarrow\infty$ with $AdS_4$ radial coordinate $r\leq r_R$, with $r_R$ close to $r_h$. The results in fig.~\ref{fig:areadiff} show that for these regions the area of the HM surface at $t=0$ is smaller than the area of the island surface. The area of the HM surface grows with time, but the entropy is bounded by the constant area of the island surface, leading to a Page curve. The results in fig.~\ref{fig:areadiff} show that the area difference between the island and $t=0$ HM surfaces is larger for larger $N_5/K$. One may compare this to expectations based on the Karch/Randall models: Larger $N_5/K$ corresponds to a BCFT with more 3d defect degrees of freedom relative to 4d bulk degrees of freedom, which in the 5d models amounts to larger tension of the ETW brane. 
Larger tension bends the ETW brane towards the conformal boundary of $AdS_5$ in fig.~\ref{fig:KR} (i.e.\ smaller $\theta$; a tensionless brane has $\theta=\pi/2$). From this 5d perspective one would expect the island surface to have larger area relative to the $t=0$ HM surface for smaller $\theta$, since the ETW brane is further from the bath. This is the 5d version of the area difference being larger for larger $N_5/K$ in 10d. The curves in fig.~\ref{fig:areadiff} further show transition points $\hat r_R$ at which the areas of the island and HM surfaces are equal at $t=0$, suggesting constant entropies for $r_R>\hat r_R$. Near the end points of the curves, which for small $N_5/K$ are close to $\hat r_R$, the evolution of trial HM surfaces via (\ref{eq:r-tau}) changes: beyond values $r_R^\star$ near the end points, the relaxation extends the trial surface all the way to $x=-\infty$ and ceases to settle on an equilibrium configuration. If the HM surface becomes a shallow minimum or a saddle point, the relaxation could transition over it towards the island surface. One may also suspect that HM surfaces extending to negative $x$ also on the boundary of $\Sigma$ become relevant (those would reach to the horizon along a curve $x_h(y)$ and in a disconnected region around the 5-brane sources, and can not be parametrized globally by $x(u,y)$). The value of $r_R^\star$ starts small for small $N_5/K$, increases to $r_R^\star\approx 2.1$ for $N_5/K=2$ (the surface on the right in fig.~\ref{fig:HM-surf} is close to $r_R^\star$), and appears to diverge towards $N_5/K\approx 4$. For larger $N_5/K$ HM surfaces can be found explicitly with no noticeable bound on $r_R$. The limit $N_5\gg K$ corresponds to the number of 3d degrees of freedom being large compared to the number of 4d degrees of freedom. This corresponds in the 5d bottom-up models to an ETW brane close to the conformal boundary of $AdS_5$, which is the limit considered in \cite{Chen:2020uac,Chen:2020hmv}. 
The separation between $r_R^\star$ and $\hat r_R$ appears to grow with $N_5/K$. For radiation regions far in the bath (small $r_R$) we find Page curves. For $r_R>\hat r_R$ fig.~\ref{fig:areadiff} suggests that the island surfaces lead to constant entropies, though if new types of HM surfaces become relevant the entropy curve may remain non-trivial. In either case, the entropy is bounded by the area of the island surfaces, which we find explicitly for small and large $r_R$. \subsection{Critical brane setups} \begin{figure} \subfigure[][]{\label{fig:crit-T0-plot} \includegraphics[width=0.4\linewidth]{crit-angle-T0.pdf} } \qquad \subfigure[][]{\label{fig:crit-T0-surfs} \includegraphics[width=0.4\linewidth]{crit-angle-T0-surf.pdf} } \caption{Left: $r_R-r_L$, where $r_R= \lim_{x\rightarrow +\infty}r(x,y)$ is the anchor of the minimal surface in the non-gravitating bath and $r_L= \lim_{x\rightarrow -\infty}r(x,y)$, as function of $N_5/K$ at zero temperature. Right: island surfaces, from top to bottom for $N_5/K\in\lbrace 1.2,1.6,2.0,2.4\rbrace$. At zero temperature $r_R-r_L$ is independent of $r_R$. \label{fig:crit-ang}} \end{figure} The ratio $N_5/K$ plays a prominent role also at zero temperature. A sample of island minimal surfaces for different values of $N_5/K$ at zero temperature is shown in fig.~\ref{fig:crit-T0-surfs}. For fixed anchor point in the asymptotic $AdS_5\times S^5$ region, the point where the surfaces close off at $x\rightarrow -\infty$ moves towards the Poincar\'e horizon as $N_5/K$ is increased. This is shown more quantitatively in fig.~\ref{fig:crit-T0-plot}, which shows the difference between $r_R=\lim_{x\rightarrow +\infty}r(x,y)$ and $r_L=\lim_{x\rightarrow -\infty} r(x,y)$ as function of $N_5/K$. For small $N_5/K$ the difference $r_R-r_L$ grows linearly, but for larger $N_5/K$ it starts to grow rapidly. 
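A critical ratio at which such a difference diverges can be read off from the data by fitting. The sketch below uses synthetic data with an assumed simple-pole form $r_R-r_L\sim a/(q_c-q)$, an illustrative ansatz (not a derived scaling) under which $1/(r_R-r_L)$ is linear in $q=N_5/K$ and its zero crossing estimates $q_c$.

```python
import numpy as np

# Estimating a critical point from divergent data: if d(q) ~ a/(q_c - q),
# then 1/d is linear in q and vanishes at q = q_c. Synthetic data below,
# generated with q_c = 4.0 for illustration; not the actual surface data.

q = np.array([2.0, 2.5, 3.0, 3.4, 3.7])
d = 1.3 / (4.0 - q)                  # synthetic "r_R - r_L"

slope, intercept = np.polyfit(q, 1.0 / d, 1)
q_c = -intercept / slope             # zero crossing of the linear fit
print(round(q_c, 2))                 # 4.0
```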
The plot suggests the existence of a critical value, \begin{align}\label{eq:crit-nongrav-T0} \left(\frac{N_5}{K}\right)_{\rm crit}&\approx \ 4.0~, \end{align} at which $r_R-r_L$ diverges. For the surfaces from which the data in fig.~\ref{fig:crit-T0-plot} is extracted the residuals (\ref{eq:residuals}) are reduced to at most $\mathcal O(10^{-7})$. Increasing $N_5/K$ beyond the critical value appears to lead to irreducible residuals (\ref{eq:residuals}), which remain finite and keep driving the anchors $r_L$ and $r_R$ further apart with increasing runtime in $\tau$, rather than settling on an equilibrium configuration. This is consistent with $r_R-r_L$ diverging when $N_5/K$ approaches (\ref{eq:crit-nongrav-T0}), and there being no island minimal surfaces beyond the critical value at zero temperature. These results line up well with the observations in the Karch/Randall models: As noted before, the angle $\theta$ of the ETW brane in 5d is expected to be an effective description for the number of defect degrees of freedom relative to the number of 4d degrees of freedom, which in this particular example of a 10d solution is determined by the ratio $N_5/K$. The discussion in \cite{Geng:2020fxl} found that, as the angle is decreased, the point where the island minimal surface with fixed anchor on the bath brane hits the ETW brane moves towards the infrared, and diverges towards the Poincar\'e horizon at a critical angle. This is consistent with the behavior of the 10d solutions considered here if $1/\theta$ is identified with $N_5/K$. It would be interesting to investigate more general 10d solutions, e.g.\ with multiple 5-brane sources, in which one may expect a more complicated phase structure. \begin{figure} \includegraphics[width=0.4\linewidth]{crit-angle-finite-T.pdf} \caption{Difference $r_R-r_L$ between the end points at $x\rightarrow\pm\infty$ at finite temperature, with $r_h=0$, from bottom to top for $N_5/K\in\lbrace 2.25,2.75,3.25,3.75,4.25\rbrace$. 
The solid black line shows $r_L=0$.\label{fig:crit-ang-T} } \end{figure} At finite temperature the runaway behavior of the cap-off point $r_L$ at the critical $N_5/K$ is regulated by the black hole horizon, and island surfaces can be found beyond the critical $N_5/K$. The behavior can again be diagnosed by the difference $r_R-r_L$. At zero temperature and below the critical $N_5/K$ this difference is finite and independent of $r_R$, with its value growing rapidly towards the critical $N_5/K$. At finite temperature and below the critical value of $N_5/K$, the difference $r_R-r_L$ is not constant, but it approaches a constant for large $r_R$. This behavior can be seen in fig.~\ref{fig:crit-ang-T} as the curves that saturate towards a constant. The constant is set by the zero temperature value of $r_R-r_L$. As $N_5/K$ approaches the critical value, the point where the growth of $r_R-r_L$ saturates increases rapidly. The results are consistent with $r_R-r_L$ staying linear without bound for $N_5/K$ beyond the critical value. The end point $r_L$ appears stuck below a critical value, similar to the behavior found in the 5d models in \cite{Geng:2020fxl}. \section{Islands with gravitating baths}\label{sec:grav-bath} We now turn to the gravitating bath solutions (\ref{eq:h1h2-3d-grav}), in which the $AdS_5\times S^5$ region at $x\rightarrow\infty$ is reduced to a regular point of the internal space (fig.~\ref{fig:AdS4-sol-grav}). These solutions have massless 4d gravitons (the 4d Newton constant is related to the free energy of the dual 3d SCFTs which is proportional to $\int_\Sigma h_1 h_2 W$). Without the $AdS_5\times S^5$ region there is no natural place to geometrically define radiation regions (compatible with diffeomorphism invariance) at $x=\infty$, or to anchor minimal surfaces. 
Minimal surfaces stretching from $x=-\infty$ to $x=+\infty$ instead satisfy Neumann boundary conditions on both ends, as discussed in sec.~\ref{sec:bc}, and are found to settle onto the horizon. This leads to a flat entropy curve, in line with the arguments of \cite{Laddha:2020kvp} for gravitating baths.\footnote{Attempts to define notions of effective geometric entropies with dynamical gravity and discussions of their Page curves can be found in \cite{Krishnan:2020oun,Dong:2020uxp,Krishnan:2020fer}.} As suggested in \cite{Laddha:2020kvp}, a Page curve may still arise for other quantities in situations with gravitating baths. An alternative is to consider surfaces that divide the internal space, which may be expected to compute non-geometric EE's. Though the general interpretation of such surfaces may not be entirely understood, one can view some of them in the current context as limiting cases of surfaces computing geometric EE's, as suggested in \cite{Geng:2020fxl} (an earlier example where geometric EE's turn non-geometric in the IR can be found in \cite{Balasubramanian:2017hgy}). The proposal of \cite{Geng:2020fxl} can be made precise in 10d: Consider brane configurations where D3-branes suspended between 5-branes are kept finite in extent, to realize $\mathcal N=4$ SYM on an interval. One may compute conventional geometric EE's on that interval. Though holographic duals for $\mathcal N=4$ SYM on an interval are not explicitly known, these geometric EE's would be represented by conventional Ryu/Takayanagi surfaces in the putative holographic duals. As IR fixed points one obtains 3d $T_\rho^\sigma[SU(N)]$ SCFTs, with holographic duals of the form discussed here.\footnote{ The setup can be seen as string theory realization of wedge holography in the sense of \cite{Akal:2020wfl}. The internal space in the 10d $AdS_4$ solution is the string theory uplift of the wedge region. 
} At the IR fixed point the geometric EE's on the interval become non-geometric EE's, and the Ryu/Takayanagi surfaces become minimal surfaces in the internal space. We thus expect at least certain minimal surfaces separating regions in the internal space to compute non-geometric EE's. In lower-dimensional examples such EE's are discussed in \cite{Geng:2021iyq}. There are numerous ways to separate regions in the internal space in the 10d Type IIB solutions. One may for example divide one of the $S^2$'s, which should be related to a split of the Hilbert space based on the $R$-symmetry \cite{Karch:2014pma}. The surfaces which arise from geometric EE's as outlined above are expected to split the Riemann surface $\Sigma$ instead, where they can separate the 5-brane sources. As shown in \cite{Graham:2014iya}, minimal surfaces dividing the internal space end, when reaching the conformal boundary of the $AdS$ part, on an extremal sub-surface in the internal space. Boundary conditions can be imposed to fix the subleading behavior as the conformal boundary in the $AdS$ part is approached, instead of the leading behavior (e.g.\ for surfaces splitting the $S^5$ in $AdS_5\times S^5$ the slipping mode away from the equator). In the solutions (\ref{eq:h1h2-3d-grav}) there is a natural candidate extremal surface in $\Sigma$: due to the reflection symmetry of the solution under $x\rightarrow -x$, the locus $x(y)=0$ is extremal in $\Sigma$ and can serve as an anchor point for 8d minimal surfaces wrapping the spatial part of $AdS_4$, both $S^2$'s and a curve in $\Sigma$ which depends on the $AdS_4$ radial coordinate $u$. A symmetric HM surface which is anchored at $x(y)=0$ also in the thermofield double is given by $x(u,y)=0$. This entire surface is extremal thanks to the reflection symmetry in $x\rightarrow -x$. 
More general surfaces may be obtained by specifying non-trivial subleading behavior in the $AdS_4$ radial coordinate as the $x=0$ locus is approached.\footnote{Admissible choices for the fall-off behavior near the boundary of $AdS_4$ can be determined by linearizing the extremality condition around the $x(u,y)=0$ surface and performing a mode expansion in the $y$ direction.} We only impose that the surface be anchored at $x(y)=0$ for $u\rightarrow\infty$ and in the thermofield double, and then let the relaxation method settle on a surface. This procedure selects the $x(u,y)=0$ HM surface at $t=0$. Since $x(y)=0$ is an extremal curve in $\Sigma$, finding the HM surfaces for $t\neq 0$ reduces to a problem within the $AdS_4$ part of the geometry, which is identical to the discussion in appendix A of \cite{Geng:2020fxl}. \begin{figure} \includegraphics[width=0.3\linewidth]{3d-crit-3.pdf} \hskip 5mm \includegraphics[width=0.3\linewidth]{3d-crit-2.pdf} \hskip 5mm \includegraphics[width=0.3\linewidth]{3d-crit-1.pdf} \caption{Island surfaces in gravitating bath solutions, from left to right for $\delta\in\lbrace 0.5,0.4,0.3\rbrace$. The vertical axis shows the tortoise $AdS_4$ radial coordinate $u$. The surfaces are anchored at the conformal boundary of $AdS_4$ ($u\rightarrow\infty$) on the curve $x(y)=0$ in $\Sigma$. The plots only cover the $x\leq 0$ part of $\Sigma$. Near the 5-brane sources the surface caps off close to the horizon. The cap-off point at $x=-\infty$ increases as $\delta$ decreases. \label{fig:LRcrit2}} \end{figure} For the island surfaces we impose that they are similarly anchored for $u\rightarrow\infty$ along the $x(y)=0$ curve. They should reach one of the $x=\pm\infty$ regions with the Neumann boundary condition (\ref{eq:bc-x-minus}) for some value $u_L>0$, which is determined dynamically. 
Since the supergravity solution is invariant under $x\rightarrow -x$ the surfaces ending at $x=+\infty$ and $x=-\infty$ are symmetry-related, and we only construct the ones ending at $x=-\infty$ explicitly. A sample of island surfaces for different values of the 5-brane source locations $\delta$ on $\Sigma$ is shown in fig.~\ref{fig:LRcrit2} (the plots show only half the range of $x$). For larger $\delta$ the surfaces more rapidly approach the horizon and then stay close to it. This behavior is captured more quantitatively in fig.~\ref{fig:LRcrit1a}, which shows the end point at $x=-\infty$ in the tortoise coordinate $u$ as function of $\delta$. The cap-off points $u_L$ show an exponential fall-off towards large $\delta$, which is shown as the fitted dashed line. Towards small $\delta$ the cap-off points start to grow more rapidly. The data is consistent with $u_L$ diverging towards the conformal boundary for a critical value \begin{align}\label{eq:deltac} \delta_c&\approx 0.28~. \end{align} In line with this interpretation, the relaxation method does not settle on equilibrium minimal surfaces below $\delta_c$. Instead, the trial surfaces keep approaching the conformal boundary of $AdS_4$ at generic points of $\Sigma$, while staying close to the horizon at the 5-brane sources (in line with behavior derived in sec.~\ref{sec:near-pole}). This will be discussed further below. The area differences between the island and $t=0$ HM surfaces are computed similarly to the non-gravitating bath case. To implement the subtraction at the integrand level, the embedding for the island surface, $u_{\rm island}(x,y)$, has to be inverted with respect to the first argument to match the parametrization of the HM surface. The embeddings are not invertible on the entire domain, so the subtraction is implemented at the integrand level in a patch around $x=0$ and at the integral level for the remaining parts. 
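The exponential fit just mentioned amounts to a linear fit of $\log u_L$ against $\delta$. A minimal sketch, on synthetic data generated from the fitted parameters quoted in the figure caption rather than the actual surface data:

```python
import numpy as np

# Fitting u_L ~ A exp(-B delta) by linear regression of log(u_L) vs delta.
# The data is synthetic, built from the quoted parameters A = 1.17, B = 4.28.

delta = np.array([0.5, 0.6, 0.7, 0.8, 0.9])
u_L = 1.17 * np.exp(-4.28 * delta)

B_neg, logA = np.polyfit(delta, np.log(u_L), 1)
print(round(np.exp(logA), 2), round(-B_neg, 2))   # 1.17 4.28
```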
The resulting area differences are shown in fig.~\ref{fig:LRcrit1b}, as colored curves for different choices of cut-off on the $AdS_4$ radial coordinate. The cut-off on the radial coordinate is imposed in Fefferman-Graham gauge, $e^{-u}\geq\epsilon$ corresponding to $\tanh u\leq 1-2\epsilon^2$, with $\epsilon$ varied between $0.005$ and $0.05$. The curves are indistinguishable for generic values of $\delta$. They only spread in a narrow region around $\delta_c$, where the island surfaces approach the conformal boundary of $AdS_4$ (though the cap-off point for the surfaces considered remains well below the cut-off) and residual cut-off dependence can be seen. The residual cut-off dependence is smooth, and is fitted for each $\delta$ to obtain an extrapolation to zero cut-off. The result is shown as dashed black curve. \begin{figure} \subfigure[][]{\label{fig:LRcrit1a} \begin{tikzpicture} \node at (0,0) {\includegraphics[width=0.4\linewidth]{LR-crit.pdf}}; \node at (-2.75,2.35) {\scriptsize $\tanh u_L$}; \node at (3.5,-1.7) {\footnotesize $\delta$}; \end{tikzpicture} } \hskip 10mm \subfigure[][]{\label{fig:LRcrit1b} \begin{tikzpicture} \node at (0,0) {\includegraphics[width=0.4\linewidth]{LR-areas.pdf}}; \node at (-2.7,2.25) {\scriptsize $\Delta S/N^4$}; \node at (3.5,0.45) {\footnotesize $\delta$}; \end{tikzpicture} } \caption{Left: Cap-off point $u_L=\lim_{x\rightarrow -\infty} u(x,y)$ at $x=-\infty$ as function of the separation of brane sources $\delta$. For large $\delta$, $u_L$ approaches the horizon at $u=0$ exponentially; the dashed line shows $u_L=1.17\exp(-4.28\,\delta)$. At a finite $\delta_c$, $u_L$ diverges towards the conformal boundary (at $\tanh u=1$ in the plot). Right: Area difference $\Delta S=S_{\rm island}-S_{\rm HM}$, as colored curves for different choices of cut-off on the $AdS_4$ radial coordinate. The dashed black curve shows an extrapolation to zero cut-off. 
\label{fig:LRcrit1}} \end{figure} The area differences in fig.~\ref{fig:LRcrit1} show that generically for large $\delta$ the HM surface at $t=0$ has smaller area than the island surface. The area of the HM surface grows in time, and when it equals that of the island surface the island surface becomes dominant, leading to a Page curve. The curves in fig.~\ref{fig:LRcrit1} suggest a second distinguished value for $\delta$, a ``Page value'' $\delta_P$ where $\Delta S$ at $t=0$ vanishes. The value of $\delta_P$ obtained from the numerical data, \begin{align} \delta_P\approx 0.29\,, \end{align} is close to but slightly larger than the critical $\delta_c$ in (\ref{eq:deltac}). Since the difference between $\delta_c$ and $\delta_P$ is small and the island surfaces become numerically challenging for $\delta\approx \delta_c$, as evidenced in the spread of the curves in fig.~\ref{fig:LRcrit1}, the possibility remains that the true area difference may be non-negative for all $\delta>\delta_c$. In the (possibly empty) range $\delta_c<\delta<\delta_P$, the island surface dominates already at $t=0$ and leads to a flat entropy curve. Regardless of the relation between $\delta_c$ and $\delta_P$, for all $\delta>\delta_c$ the entropy growth indicated by the HM surface is limited by island surfaces whose area is constant. Finding time-dependent HM surfaces reduces to a problem within the $AdS_4$ part of the geometry, since $x=0$ is an extremal curve in $\Sigma$. Up to an overall factor, the area as function of time can then be determined as in appendix A of \cite{Geng:2020fxl}, to which we refer for details on that part of the computation. The overall factor arises from the parts of the internal space wrapped by the 8d minimal surfaces in the 10d solutions. It can be determined by integrating the area functional in (\ref{eq:HM-area}) evaluated on the $x=0$ embedding over $y$.
This leads to the factor \begin{align}\label{eq:C-def} C&=32\int_0^\pi dy\,\sqrt{\frac{1}{2}\left|h_1^3h_2^3W\right|}\,\Bigg\vert_{x=0}~. \end{align} It will be convenient to discuss the time-dependent entropy curves normalized to this factor, so that the (re)normalized area of the HM surface does not depend on the details of the 10d solution. The area differences between island and HM surfaces at $t=0$ normalized to $C$ are shown in fig.~\ref{fig:DeltaSdC} as function of $1/\delta$. The normalized area differences are monotonically increasing with $\delta$. The time-dependent entropy curves, up to factors of $C$ and the 10d Newton constant, are shown in fig.~\ref{fig:page}. To obtain the curves a time-independent divergent part has been minimally subtracted, and a factor 2 has been included to account for the parts of the surfaces in the thermofield double. Fig~\ref{fig:page} shows the transition from the HM surface to the island surfaces for various $\delta$. The Page time, at which the transition occurs, increases monotonically with $\delta$: though the $t=0$ area differences in fig.~\ref{fig:LRcrit1b} are not monotonic, the Page time depends also on the growth rate of the HM surface, which decreases with $\delta$. The Page time vanishes at $\delta_P$. \begin{figure} \subfigure[][]{\label{fig:DeltaSdC} \includegraphics[width=0.42\linewidth]{DeltaSdC.pdf} }\hskip 15mm \subfigure[][]{\label{fig:page} \includegraphics[width=0.4\linewidth]{page.pdf} } \caption{Left: Area differences $\Delta S=S_{\rm island}-S_{\rm HM}$ normalized to the constant in (\ref{eq:C-def}). The plot shows the extrapolated curve of fig.~\ref{fig:LRcrit1b}. Right: Page curves. The solid line shows the time-dependent finite part of the area of the HM surface. The corresponding constant areas of island surfaces are shown as dashed lines, from bottom to top for $\delta\in\lbrace 0.29,0.32,0.4,0.5,0.6\rbrace$. 
The Page time increases monotonically with $\delta$.}
\end{figure}

The 10d results are remarkably consistent with the phase structure found in 5d Karch/Randall models if the inverse brane angle $\theta$ in 5d is seen as an effective description for the brane stack separation $\delta$ on $\Sigma$: the analysis of \cite{Geng:2020fxl} identified critical angles and Page angles, with a phase structure of minimal surfaces that, with the aforementioned identification, qualitatively matches the results found here (compare e.g.\ fig.~5 of \cite{Geng:2020fxl} to fig.~\ref{fig:DeltaSdC}). The symmetry of the 10d solutions (\ref{eq:h1h2-3d-grav}) under $x\rightarrow -x$ suggests that they give rise to Karch/Randall models with two equal ETW brane angles. More general Karch/Randall models with two unequal brane angles descend from 10d solutions with asymmetric distributions of 5-brane sources on $\Sigma$.

The range $\delta<\delta_c$, where no island minimal surfaces are found, corresponds to the regime above the critical ETW brane angle in 5d. The dominant contribution in 5d was identified as ``tiny islands'', which arise as limiting surfaces that connect the defect to one of the ETW branes infinitesimally close to the conformal boundary. In 10d the behavior of the island surfaces for $\delta\rightarrow\delta_c$ and of the trial island surfaces below $\delta_c$, summarized around (\ref{eq:deltac}), both indicate that similar tiny island limiting surfaces arise for $\delta<\delta_c$. The evolution of trial island surfaces for $\delta<\delta_c$ indicates that the 10d tiny islands approach the conformal boundary of $AdS_4$ almost everywhere on $\Sigma$, except for narrow throats around the 5-brane sources where they reach to the horizon. In 5d the tiny islands were further motivated in \cite{Geng:2020fxl} through a deformation in which the ETW branes are separated and the tiny islands arise as limits of extremal surfaces computing geometric EEs. This deformation has a clear analog in 10d: keeping some D3-branes finite in extent describes $\mathcal N=4$ SYM on an interval. It would be interesting to study this deformation also in 10d, which would require as a first step the corresponding supergravity solutions.

\section{Outlook}\label{sec:outlook}

The results presented here demonstrate in a UV-complete string theory setting the emergence of entanglement islands and Page curves for black holes in four-dimensional theories of gravity. The gravity theories certainly differ from the one we experience in nature. But they have dynamical gravitons, with a mass that can be controlled, and they show versions of the information paradox whose resolution can be analyzed using concrete AdS/CFT dualities. We close with some thoughts on avenues for future exploration:

The discussions were based on representative Type IIB supergravity solutions that realize 5d Karch/Randall braneworlds with non-gravitating and gravitating baths in 10d. These solutions are members of a broad class of solutions corresponding to more general configurations of D3, D5 and NS5 branes. It would be interesting to study further examples. The brane angles that play a crucial role in the phenomenology of the Karch/Randall models were given analogs in the representative 10d solutions, where the entanglement entropies exhibit a similar phase structure. One may suspect more complicated phase structures to emerge for more general 10d solutions.

It would be desirable to understand the time evolution of the entanglement entropies from the perspective of the dual QFTs. The (critical) parameters in the supergravity solutions translate in a precise way to brane configurations and in turn to parameters in $\mathcal N=4$ SYM on a half space and 3d $T_\rho^\sigma[SU(N)]$ SCFTs. This should provide a concrete starting point for investigating the resolution of information paradoxes through entanglement islands in 4d using QFT methods.

A key holographic aspect appears to be a better understanding of minimal surfaces in the internal space and their associated field theory quantities. These are apparently the quantities which exhibit Page curve behavior with a gravitating bath, both in the 5d Karch/Randall models and in the string theory versions. The 10d setups, with the full internal space present, should be a viable starting point for more detailed investigations of Page curve behavior in surfaces bisecting the internal space. The surfaces studied in sec.~\ref{sec:grav-bath} are natural candidates for computing EEs associated with decompositions of the quiver diagram in the UV description of the dual 3d SCFTs.

\let\oldaddcontentsline\addcontentsline
\renewcommand{\addcontentsline}[3]{}
\begin{acknowledgments}
I am grateful to Andreas Karch, Hao Geng, Carlos Perez-Pardavila, Suvrat Raju, Lisa Randall, Marcos Riojas, and Sanjit Shashi for very interesting and useful discussions. This work is supported, in part, by the US Department of Energy under Grant No.~DE-SC0007859 and by the Leinweber Center for Theoretical Physics.
\end{acknowledgments}
\let\addcontentsline\oldaddcontentsline
When you have something that doesn't fit elsewhere, or you're unsure where it goes, then post it here. Questions before you get tado°, or about how to install and set it up. Got a technical question about the hardware or the app? Find help here! Got something off-topic? Discuss it here.
Q: Akka Remote - Messages are not delivering (Java)

Trying to use Akka for parallel computing but facing a problem while communicating between actors. I have two actors (named sender and receiver) living in two actor systems (SenderSystem and ReceiverSystem) on different ports of the same IP address. What I want to do is send a message from the sender actor to the receiver actor. But on the console I see a message like this:

    [INFO] [08/15/2015 12:36:51.645] [SenderSystem-akka.actor.default-dispatcher-4] [akka://SenderSystem/sender] Message [com.aliyesilkanat.akkadeneme.Messages$1] from Actor[akka://SenderSystem/deadLetters] to Actor[akka://SenderSystem/sender] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

Here is application.conf:

    akka {
      loglevel = "INFO"
      actor {
        provider = "akka.remote.RemoteActorRefProvider"
      }
      remote {
        untrusted-mode = off
        enabled-transports = ["akka.remote.netty.tcp"]
        netty.tcp {
          hostname = "127.0.0.1"
        }
      }
    }

sender.conf:

    include "application"
    akka {
      actor {
        deployment {
          /receiver {
            remote = "akka://ReceiverSystem@127.0.0.1:8091"
          }
        }
      }
      remote.netty.tcp.port = 8090
    }

receiver.conf:

    include "application"
    akka {
      remote.netty.tcp.port = 8091
    }

Receiver.java:

    package com.aliyesilkanat.akkadeneme.receiver;

    import com.aliyesilkanat.akkadeneme.Messages;
    import akka.actor.UntypedActor;

    public class Receiver extends UntypedActor {
        @Override
        public void onReceive(Object msg) throws Exception {
            if (msg.equals(Messages.RECEIVE)) {
                System.out.println("receiver receives");
            }
        }
    }

ReceiverApplication.java:

    package com.aliyesilkanat.akkadeneme.receiver;

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;
    import com.typesafe.config.ConfigFactory;

    public class ReceiverApplication {
        public static void main(String[] args) {
            startRecieverSystem();
        }

        private static void startRecieverSystem() {
            final ActorSystem system = ActorSystem.create("ReceiverSystem",
                    ConfigFactory.load("receiver"));
            ActorRef actorOf = system.actorOf(Props.create(Receiver.class), "receiverActor");
            System.out.println("created receiver actor: " + actorOf.toString());
        }
    }

Sender.java:

    package com.aliyesilkanat.akkadeneme.sender;

    import akka.actor.ActorSelection;
    import akka.actor.UntypedActor;
    import com.aliyesilkanat.akkadeneme.Messages;

    public class Sender extends UntypedActor {
        public Sender() {
        }

        @Override
        public void onReceive(Object msg) throws Exception {
            if (msg.equals(Messages.SEND)) {
                System.out.println(getSender() + " sends");
                ActorSelection receiverActor = getContext().actorSelection(
                        "akka.tcp://ReceiverSystem@127.0.0.1:8091/user/receiver"); // I am not sure about this one
                receiverActor.tell(Messages.RECEIVE, getSelf());
            }
        }
    }

SenderApplication.java:

    package com.aliyesilkanat.akkadeneme.sender;

    import com.typesafe.config.ConfigFactory;
    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;

    public class SenderApplication {
        private static ActorSystem system;

        public static void main(String[] args) {
            startSenderApp();
        }

        private static void startSenderApp() {
            setSystem(ActorSystem.create("SenderSystem", ConfigFactory.load("sender")));
            ActorRef actorOf = getSystem().actorOf(Props.create(Sender.class), "senderActor");
            System.out.println("created sender actor: " + actorOf.toString());
        }

        public static ActorSystem getSystem() {
            return system;
        }

        public static void setSystem(ActorSystem system) {
            SenderApplication.system = system;
        }
    }

And lastly, the main method:

    public static void main(String[] args) {
        ReceiverApplication.main(null);
        SenderApplication.main(null);
        ActorSystem system = SenderApplication.getSystem();
        ActorSelection ref = system.actorSelection("sender");
        ref.tell(Messages.SEND, ActorRef.noSender());
    }

Entire console output:

    [INFO] [08/15/2015 12:48:12.220] [main] [Remoting] Starting remoting
    [INFO] [08/15/2015 12:48:12.451] [main] [Remoting] Remoting started; listening on addresses :[akka.tcp://ReceiverSystem@127.0.0.1:8091]
    [INFO] [08/15/2015 12:48:12.451] [main] [Remoting] Remoting now listens on addresses: [akka.tcp://ReceiverSystem@127.0.0.1:8091]
    created receiver actor: Actor[akka://ReceiverSystem/user/receiverActor#2084584126]
    [INFO] [08/15/2015 12:48:12.481] [main] [Remoting] Starting remoting
    [INFO] [08/15/2015 12:48:12.491] [main] [Remoting] Remoting started; listening on addresses :[akka.tcp://SenderSystem@127.0.0.1:8090]
    [INFO] [08/15/2015 12:48:12.491] [main] [Remoting] Remoting now listens on addresses: [akka.tcp://SenderSystem@127.0.0.1:8090]
    created sender actor: Actor[akka://SenderSystem/user/senderActor#-2012370784]
    [INFO] [08/15/2015 12:48:12.491] [SenderSystem-akka.actor.default-dispatcher-3] [akka://SenderSystem/sender] Message [com.aliyesilkanat.akkadeneme.Messages$1] from Actor[akka://SenderSystem/deadLetters] to Actor[akka://SenderSystem/sender] was not delivered. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.

A: You have remote deployment configured in sender.conf, so you can create the Receiver actor remotely from the sender system:

    ActorRef actor = system.actorOf(Props.create(Receiver.class), "receiver");
    actor.tell(Messages.SEND, ActorRef.noSender());

A: When the ActorSystem is local (same JVM) the path is akka://ActorSystem/user/ActorName. In your main method you have:

    ActorSelection ref = system.actorSelection("sender");

Try replacing "sender" with akka://SenderSystem/user/senderActor.
Hungary win World Grand Prix Group 3 gold medal

Hungary bagged the gold medal in the 2017 FIVB Volleyball World Grand Prix Group 3 Finals in Canberra, Australia

Canberra, Australia, July 23, 2017 – Hungary captured the Group 3 title in the 2017 FIVB Volleyball World Grand Prix with a straight-set win over Australia to complete a record undefeated run, while fellow debutants France finished third on the podium at the AIS Arena on Sunday.

Hungary defeated hosts Australia 3-0 (25-18, 25-17, 25-20) to win their first ever World Grand Prix title and set a record of eight straight wins on their debut. Hungary dominated Australia in the first two sets, but the hosts started to pick up their game and challenged Hungary in the third set. However, Hungary re-established their reign and scored the final points to complete their gold medal victory.

Greta Szakmary scored consistently for 21 points, including 19 attacks and two blocks. Szakmary has been Hungary's scoring weapon throughout their 8-0 winning run in the World Grand Prix.

"I am very happy for this victory," said Greta Szakmary of their win in Group 3. "We were focused on getting the gold and we played in this match to win it. This is a great achievement since we started the programme in 2015. This will actually give us a big boost in the future."

After a solid preliminary round performance, France's unbeaten run ended in the semifinal match against Hungary in a five-set encounter. France bagged the bronze through a walkover.
/**
 * Module dependencies.
 */

var mongoose = require('mongoose')
  , express = require('express')
  , mongoStore = require('connect-mongo')(express)
  , User = mongoose.model('User')
  , utils = require('../../lib/utils')
  , pkg = require('../../package.json')
  , passportSocketIo = require("passport.socketio");

function onAuthorizeSuccess(data, accept){
  console.log('successful connection to socket.io');
  // The accept-callback still allows us to decide whether to
  // accept the connection or not.
  accept(null, true);
}

function onAuthorizeFail(data, message, error, accept){
  if(error) throw new Error(message);
  console.log('failed connection to socket.io:', message);
  // We use this callback to log all of our failed connections.
  accept(null, false);
}

/**
 * List
 */

exports.index = function(req, res){
  /**
   * Here is how the tracker works:
   * first we create a list of user locations.
   * This will include basic user info ({_id, username, role, location: {lat, lng}}).
   * Whenever a user's location changes it emits an updateLocation notice;
   * we then update our internal userLocations array. We then emit
   * a filtered version of the userLocations list that contains
   * each of the cab drivers' locations + the current user's location.
   *
   * When a user disconnects from the tunnel their entry in userLocations gets
   * removed. We then send the updated list out to everyone to update their maps.
   */
  var userLocations = [];
  var user = req.user;
  var env = process.env.NODE_ENV || 'development'
    , config = require('../../config/config')[env];

  var sessionStore = new mongoStore({
    url: config.db,
    collection : 'sessions'
  });

  req.io.set('authorization', passportSocketIo.authorize({
    cookieParser: express.cookieParser,
    key: 'connect.sid',          // the name of the cookie where express/connect stores its session_id
    secret: pkg.name,            // the session_secret to parse the cookie
    store: sessionStore,         // we NEED to use a sessionstore. no memorystore please
    success: onAuthorizeSuccess, // *optional* callback on success - read more below
    fail: onAuthorizeFail,       // *optional* callback on fail/error - read more below
  }));

  req.io.sockets.on('connection', function (socket) {
    // send filtered list
    socket.emit('locations', filterLocations(userLocations, user));

    socket.on('updateLocation', function (data) {
      var inList = false;
      for(var i = 0; i < userLocations.length; i++){
        if(userLocations[i]._id == user._id){
          inList = true;
          userLocations[i] = {
              _id: user._id
            , username: user.username
            , role: user.role
            , status: 'available'
            , location: data
          };
        }
      }
      if( false == inList ){
        userLocations.push({
            _id: user._id
          , username: user.username
          , role: user.role
          , status: 'available'
          , location: data
        });
      }
      // send the updates
      socket.emit('locations', filterLocations(userLocations, user));
    });

    socket.on('enableStatus', function(data){
      for(var i = 0; i < userLocations.length; i++){
        if(userLocations[i]._id == user._id){ // fixed: was '=' (assignment), which matched every entry
          userLocations[i].status = 'available';
        }
      }
      socket.emit('locations', filterLocations(userLocations, user));
    });

    socket.on('disableStatus', function(data){
      for(var i = 0; i < userLocations.length; i++){
        if(userLocations[i]._id == user._id){ // fixed: was '=' (assignment), which matched every entry
          userLocations[i].status = 'not_available';
        }
      }
      socket.emit('locations', filterLocations(userLocations, user));
    });

    // disconnect event: remove them from the map
    socket.on('disconnect', function () {
      for(var i = 0; i < userLocations.length; i++){
        if(userLocations[i]._id == user._id){ // fixed: was '=' (assignment), which removed every entry
          var rm = userLocations.splice(i, 1);
          socket.emit('remove', rm);
          socket.emit('locations', filterLocations(userLocations, user));
        }
      }
    });
  });

  res.render('tracker/map', {
    title: 'Echolocation',
    user: user,
    message: req.flash('error')
  });
};

var filterLocations = function(ulocs, user){ // 'var' added: was an implicit global
  console.log("User Locs:", ulocs);
  var filterUserLocations = [];
  for(var no = 0; no < ulocs.length; no++){
    // do not emit locations of other customers
    console.log("Current User:", user, "\n Current Locs:", ulocs[no]);
    if(ulocs[no].role == 'driver' || ulocs[no]._id == user._id){
      if(ulocs[no].status == 'available'){
        console.log("Users Match:", user);
        filterUserLocations.push(ulocs[no]);
      }
    }
  }
  return filterUserLocations;
};
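The filtering rule above can be exercised on its own, without socket.io or a session store. A minimal sketch in plain Node; the sample records are made up for the example:

```javascript
// Standalone sketch of the tracker's filtering rule:
// expose drivers plus the requesting user's own marker,
// and only while the entry is flagged 'available'.
function filterLocations(ulocs, user) {
  var filtered = [];
  for (var no = 0; no < ulocs.length; no++) {
    if (ulocs[no].role == 'driver' || ulocs[no]._id == user._id) {
      if (ulocs[no].status == 'available') {
        filtered.push(ulocs[no]);
      }
    }
  }
  return filtered;
}

var sample = [
  { _id: 1, role: 'driver',   status: 'available' },
  { _id: 2, role: 'customer', status: 'available' },
  { _id: 3, role: 'driver',   status: 'not_available' },
  { _id: 4, role: 'customer', status: 'available' }
];

// Customer 2 sees the available driver plus their own marker,
// but not other customers or unavailable drivers.
console.log(filterLocations(sample, { _id: 2 }).map(function (u) { return u._id; })); // [ 1, 2 ]
```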
Augmenting presence of ISIS in South Asia under different banners

IFFRAS, June 18, 2021, 7:00 am

The erosion of peace, harmony, prosperity and equality by internal conflicts and domestic crises has become a major driver of terrorism and extremism in South Asia. Malicious activity and the collapse of state institutions have pushed economies endowed with enormous resources into disaster and into political and economic destabilisation.

Not long ago the United States, after almost four and a half years of sustained effort, stated that it had dismantled the Islamic State of Iraq and Syria (ISIS), also known as the Islamic State of Iraq and al-Sham or the Islamic State of Iraq and the Levant (ISIL), in its geographical centre in Syria. With the Islamic State (IS) losing its ground and its caliphate in the Middle East, the world was under the impression that it would now be a safer place, the destructive forces breathing fresh life into hardcore readings of Islamic doctrine having been wiped off the face of the earth. Few gave a thought to the idea that this downfall would lead to a broader picture, and that the relief would come with an alert: the war with ISIS is far from over. There have been few instances of the jihadist group, known for its videos of beheadings, other extreme executions of both soldiers and civilians, and the destruction of cultural heritage sites, making a comeback on its former territory in Iraq or Syria, where it survived and wielded authority as a deadly insurgent organisation.
But even though it has been driven out of the self-proclaimed caliphate it established in Iraq and Syria, and there have been no signs of the organisation regaining ground in the two countries, it must be recognised that the war with the Islamic State of Iraq and Syria is not over yet. The situation has taken a much more complex turn: the Sunni jihadist group, an advocate of violent ideologies, is down but not out. Despite having lost its strongholds in its self-proclaimed caliphate in the two Middle Eastern countries, ISIS remains the most ambitious international terrorist organisation and has strengthened its capacity to carry out extreme violence and terror attacks in many different ways. Its potential for violence has, if anything, grown even after ISIS lost its territorial foothold in Iraq in 2017 and in Syria in 2019, where it survived by shifting from semi-conventional warfare to a hit-and-run insurgency. The situation raises many questions about the structure the terrorist organisation will take after losing its territory, because even though it is dispersed and without a territory of its own, ISIS is battle-hardened and refuses to be defeated. It continues to unleash waves of terror across the globe, drawing on its worldwide network of support and resources. Policymakers and the strategic community remain divided on whether the advent of the so-called Islamic State (IS) continues to be a threat to South Asia.
While one school of thought holds that, given the global approach among jihadist groups and the grip of its ideology on disaffected and radicalised youth, the trans-national terrorist organisation poses a significant security threat to South Asia, the other argues that the monumental barriers of geography, culture and language would hinder its attempts to gain a foothold in the region. But severe attacks in the past few years in Dhaka, Kabul and Quetta have dispelled the myth that ISIS's presence in South Asia is mere media hype. The strategies, patterns, and level of planning, sophistication and coordination manifested in these terror attacks are indicative of the organisation's growing footprint in the region. Physical sanctuary (ungoverned spaces), demographic sanctuary (vulnerable and disenfranchised youth) and social sanctuary (chaotic living conditions) are the key conditions a terrorist group needs to flourish in and penetrate a region. The prevalence of all three in South Asia has given the Islamic State a conducive environment and ideal conditions to gain ground. In addition, failings of the political agencies, blame games, self-denial and the dismissive attitudes of the regional states have further eased IS's efforts to intensify its regional footprint. Long-standing disputes in the region, the militarisation of sectarian differences and the politicisation of religion have further helped galvanise IS support in the region. Owing to its deep sectarianism and internal turmoil, Afghanistan has proved fertile ground for terrorism, and it is hardly surprising that ISIS finds the region a promising place to establish a foothold.
As per a UN report, between 2,500 and 4,000 ISIS militants are estimated to be in Afghanistan, having carried out around 40 attacks in 2018. Apart from Afghanistan, in Bangladesh too the influence of the trans-national terrorist organisation has been prominent, endangering the stability of the region. The 2016 terror attack on a restaurant in Dhaka, for which ISIS claimed responsibility, demonstrated that local jihadist factions in the South Asian country were in touch with the Islamic State. The prevailing situation implies that the political vacuum and divided socio-religious landscape in such economies are forming new grounds for ISIS. The recent Easter Sunday attacks in the island nation of Sri Lanka are nothing but an indication of the organisation's emergence in the South Asian region. The Sri Lanka attacks, for which ISIS claimed responsibility, are proof that the ruthless terrorist organisation is remobilising to spread terror across the globe. There is no denying, on the evidence of the coordinated suicide bombings in the South Asian nation, that ISIS is working to establish its anachronistic caliphate in parts of the world other than where it was defeated. The Easter Sunday attacks, which claimed more than 250 innocent lives and wounded more than 500 people, bear the hallmark of the deadly presence of hardline militia groups with an international network, the Islamic State (IS). The attacks and coordinated strikes by the United Nations-designated terrorist organisation, among the deadliest terror attacks in recent decades, portray a trans-national terrorist organisation entering a new phase of global expansion to avenge the losses to its self-proclaimed caliphate in Iraq and Syria.
The prevailing situation signals that the landless Islamic State is still capable of facilitating attacks and causing carnage beyond the boundaries of its former "caliphate". Sri Lanka's Easter Sunday attacks on locations including hotels and churches symbolise hatred towards those who do not line up with hardcore Islamic ideologies, for the attacks directly targeted foreigners and Christians. Such a large-scale attack carried out by ISIS brought the country to a state of emergency, an alarming development for the entire South Asian region. The pattern of the attacks in Sri Lanka represents a new modus operandi for the Islamic State in South Asia consisting of three basic elements: regional militant groups inspired by ISIS ideology carrying out attacks in its name; citizens from South Asia returning to the region after joining its ranks in Syria; and ISIS preparing its foothold in the region through various provinces, including the Islamic State of Iraq and Syria-Khorasan Province (ISIS-KP), active in Afghanistan and Pakistan. Despite several denials by governments in the region, the possibility that after its defeat in its self-proclaimed caliphate in Syria and Iraq, South Asia provides safe havens and fertile ground for ISIS penetration cannot be turned a blind eye to. The presence of major terrorist organisations, networks and outfits, including the Taliban, Jaish-e-Mohammad, Lashkar-e-Jhangvi etc., already well established in the region, is one of the major reasons for the ISIS ideology to find safe havens in South Asia. These established terrorist networks and ISIS would both benefit from the latter's advent in the region.
Also, the region could provide ISIS with more recruits, as it can tap into the disillusionment of Muslim minorities with their states, as in Sri Lanka, while countries like Pakistan and Afghanistan, which host Muslim majorities, seem to be competing with each other to win more zealous followers. Despite claims to the contrary by the Sri Lankan government, there have been reports of more than 40 citizens of the island nation travelling to Syria and joining ISIS. The rise of religious extremism in such nations also fuels the rise of the Islamic State (IS) in the region. There is no denying that such states have used organisations like the Taliban and Jaish-e-Mohammad to mastermind attacks on other states, paving the way for far more intensive state-sponsored terrorism across the region and the world. The situation grows more complex still, as the Sunni jihadist group is down but not out of the game: it remains the most ambitious international terrorist organisation, with direct or indirect support from sections of Pakistani society that have always been seen as inclined towards a strong Muslim brotherhood across the world. Moreover, secret and law-enforcement agencies have been silent bystanders to the organisation's ferocious activities. At the political and law-enforcement levels, a high-level game is in play in which Pakistan has been aiding the jihadist group by helping it recruit men and women, something the state has denied. The wall chalking has already been done by the Pakistani state for ISIS, as the state does not lack corrupt forces to help recruit youth and extend support to the terrorist organisation on its soil.
Clearly, through such acts of providing safe ground to ISIS, these states have dug themselves into a bog from which escape will sooner or later prove difficult. The terror attacks that brought Sri Lanka to its present state underline the fact that the jihadist group is no longer just a terrorist group but has, with time, turned into an untamed butchery unit intent on fighting the world with vengeance. Rooted in European soil for more than three decades, ISIS and its forerunners have been recruiting and radicalising youth from various South Asian nations, training and arming them to carry out sporadic attacks. While the frequency of such attacks has ebbed, it is likely to intensify dramatically, and the severity of the attacks still outweighs their number. Not just a threat, ISIS is emerging as a reality for South Asian governments, with the region facing a large number of ISIS-inspired threats. There are rising concerns about the region providing safe havens for recruitment and terrorist travel; however, predicting how the threats will permeate remains a challenge and is to some extent impossible. And it must be understood that it is not just ISIS but also several related groups and a growing number of terror cells with ferocious intent, everywhere around the world, that need to be contended with. A broader view is necessary to contain the menace and overcome the threat. Regional counterterrorism and counter-extremism frameworks are what is needed to defeat such ferocious intent, whether from ISIS or any other threat to regional and national security; their absence can hinder isolated efforts by individual states and further dig a hole for the entire South Asian region. The situation is alarming and needs urgent attention, with shared efforts and more intensive counterterrorism policies.
Sources:
https://www.iiss.org/blogs/analysis/2019/06/isis-south-asia
https://www.orfonline.org/research/the-fall-of-isis-and-its-implications-for-south-asia/
https://www.youngbhartiya.com/article/islamic-state-in-south-asia-are-easter-bombings-validating-the-region-s-vulnerability
https://www.thenews.com.pk/print/145594-Is-Isis-in-South-Asia
https://thegeopolitics.com/isis-in-south-asia-dealing-with-a-regional-threat/
http://www.indiandefencereview.com/news/specter-of-isis-over-south-asia/
https://visionindiafoundation.com/sri-lanka-bombings-and-the-spreading-arc-of-isis-in-india/
https://www.orfonline.org/wp-content/uploads/2018/08/ORF_Monograph_ISIS_Final.pdf

Photo Credit: https://en.wikipedia.org/wiki/Islamic_State_of_Iraq_and_the_Levant
The Schutterlindenberg is a high outlier of the Black Forest's foothill zone (Vorbergzone), situated on the northern edge of Lahr.

Name

The hill takes its name from the Schutter, which emerges from the Black Forest at its southern foot in Lahr and then flows north past its western foot, through the Upper Rhine Plain lying about 140 m below the summit, to its left-bank confluence with the Kinzig. The element -linden- refers to the roughly one-hectare lime-tree grove that crowns its summit and is visible from afar, since the foothill zone near Lahr is otherwise unforested.

Features

View

The Schutterlindenberg offers a panorama reaching from the heights of the Black Forest across the lowland of the Upper Rhine Plain to the peaks of the Vosges. Nearby, the town of Lahr and the surrounding villages can be seen. In the distance the view reaches south to the Belchen, one of the highest mountains of the Black Forest, west to the highest mountain of the Vosges, the Grand Ballon d'Alsace or Großer Belchen, and northwest as far as Strasbourg, one of the largest cities of France, with Strasbourg Cathedral.

Constitution column

In memory of the first Baden constitution of 1818, the town of Lahr erected a column on the Schutterlindenberg on 22 August 1843 to honour its 25th anniversary; it was inaugurated with a constitution festival. The constitution celebrated there, approved by Grand Duke Karl Ludwig Friedrich of Baden, had granted a Baden popular assembly a right of participation in legislative procedures and in the setting of taxes. The column was damaged by American artillery fire in 1945 and re-erected by the town of Lahr in 1962. On the occasion of the 150th anniversary of the Baden Revolution of 1848 it was restored by the Rotary Club.

Schubert memorial

On the summit of the Schutterlindenberg there are both a memorial stone and a pavilion with the bust of the Lahr merchant, mayor, chamber-of-commerce president and state parliament deputy Wilhelm Schubert (1813–1893). The stone honours him with these words: Schubert is counted among his town's most important republicans of the Vormärz period and of the revolution of 1848/49.

Transmission tower

On the Schutterlindenberg stands a reinforced-concrete transmission tower that supplies the southern Ortenau and the northern Breisgau with FM and DAB radio programmes. The tower was originally built by the Canadian Forces stationed in Lahr, which operated the Canadian military station CFN (Canadian Forces Network) until their withdrawal in 1994.

Analogue radio (FM)

Digital radio (DAB+)

Landscape conservation area

The Schutterlindenberg and its surroundings were designated a landscape conservation area (protected-area number 3.17.009) by ordinance of the former Lahr district office of 24 August 1966. The area covers 362.0 hectares and extends from the built-up edge of Lahr in the south across the Lierbach in the northeast to the wine-growing hills Galgenberg, Essigberg and Wichberg, already on Friesenheim territory; in the northwest it ends at the foot of the hill there.

Miscellaneous

The Schutterlindenberg is a traditional festival ground of the town of Lahr. Festivals were celebrated there, for example, on the occasion of Friedrich Schiller's 100th birthday on 10 November 1859 and of the 50th anniversary of the Battle of Leipzig on 18 October 1863. On Whit Monday an ecumenical church service is held on the hill.

The Schutterlindenberg is an old Lahr wine-growing site. The collective site (Großlage) covers about 60 ha and extends north from the Schutterlindenberg to Friesenheim. It is divided into the single sites Herrentisch above Lahr and Kronbühl in the north. Mainly Pinot Blanc (Weißer Burgunder) is grown.

The Schutterlindenberg-Schule in Lahr is named after the hill. On the hill stands a stone bench that once had its place on the toll bridge at Dinglingen, destroyed in 1945.
# How can you calculate partial pressure?

Aug 20, 2016

By means of $\text{Dalton's Law of Partial Pressures}$.

#### Explanation:

In a gaseous mixture, the pressure exerted by a component, $P_A$, is the same as the pressure it would exert if it ALONE occupied the container. The total pressure is the sum of the individual partial pressures:

$P_{\text{Total}} = P_A + P_B + P_C + P_D + \ldots$

And if we assume ideality,

$P_{\text{Total}} = \frac{n_A RT}{V} + \frac{n_B RT}{V} + \frac{n_C RT}{V} + \frac{n_D RT}{V} + \ldots$

Thus, $P_{\text{Total}} = \frac{RT}{V}\left\{n_A + n_B + n_C + n_D + \ldots\right\}$

And $P_A = \frac{n_A RT}{V} = P_{\text{Total}} \times \frac{n_A}{n_A + n_B + n_C + n_D + \ldots}$

Thus the partial pressure $P_A \propto \text{Mole fraction of Gas A}$, with the constant of proportionality $P_{\text{Total}} = \frac{RT}{V}\left\{n_A + n_B + n_C + n_D + \ldots\right\}$.
null
null
{"url":"https:\/\/blender.stackexchange.com\/questions\/115602\/how-to-get-mirrored-vector-by-plane-on-python","text":"# How to get Mirrored Vector by Plane on Python\n\nHow can I get mirrored vector(point) by Plane by using blender python modules?\n\nPlane will be defined as\n\n\u2022 plane_normal : Vector\n\u2022 plane_position : Vector\n\nI need to know function like\n\ndef get_mirrored_vector(point, plane_position, plane_normal):\nDo Something\nreturn mirrored_point\n\n\nactually, It sounds more of mathematical problem but I thought there might be existing function on mathutils or other blender modules\n\nYou can use mathutils.geometry.intersect_line_plane to project the point onto the plane. This can be done by intersecting the line defined by the point and the point offset by the plane's normal vector with the plane:\n\nproj = intersect_line_plane(point, point + plane_normal, plane_position, plane_normal)\n\nOnce you have the projected point you can use the vector from the original point to the projection to get the mirrored point.\n\nfrom mathutils.geometry import intersect_line_plane\n\ndef get_mirrored_vector(point, plane_position, plane_normal):\nproj = intersect_line_plane(point, point + plane_normal, plane_position, plane_normal)\nmirrored_point = 2 * proj - point\nreturn mirrored_point\n\n\u2022 Thank you very much for the answer with exact code! I didn't come up with the idea to use line \/ plane intersection function to use for this purpose. The code worked perfectly, Thank you. 
\u2013\u00a0Miumiu Aug 8 '18 at 9:53","date":"2020-02-18 10:41:33","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.44100651144981384, \"perplexity\": 2503.749663128183}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-10\/segments\/1581875143646.38\/warc\/CC-MAIN-20200218085715-20200218115715-00081.warc.gz\"}"}
null
null
\section{Introduction} In the early twentieth century Stefan Bergman discovered an important link between function theory, geometry and Hilbert space theory, namely the theory of the Bergman kernel and the Bergman metric. The Bergman theory makes essential use of complete orthonormal bases in the Bergman space, which sheds a particular light on the real difficulty of extending the Bergman theory to the $L^p$ case. In this paper we attempt to develop a general $p-$Bergman theory. For a bounded domain $\Omega\subset \mathbb C^n$ we define $A^p(\Omega)$ to be the $p-$Bergman space of $L^p$ holomorphic functions on $\Omega$. We start with a minimizing problem which was also considered by Bergman himself in the case $p=2$: \begin{equation}\label{eq:MinProb} m_p(z):=\inf\left\{\|f\|_p:f\in A^p(\Omega),f(z)=1\right\}. \end{equation} There exists at least one minimizer for $p>0$ and exactly one minimizer $m_p(\cdot,z)$ for every $p\ge 1$. We then define the $p-$Bergman kernel by $K_p(z)=m_p(z)^{-p}$ for $p>0$ and the off-diagonal Bergman kernel by $K_p(z,w)=m_p(z,w)K_p(w)$ for $p\ge 1$. Note that $K_2(z)$ and $K_2(z,w)$ are the standard Bergman kernel and off-diagonal Bergman kernel, respectively. After the early work of Narasimhan-Simha \cite{NS} and Sakai \cite{Sakai}, the study of $K_{2/m}(z)$ for $m\in \mathbb Z^+$ has attracted much attention in recent years (see e.g., \cite{Siu}, \cite{Tsuji06}, \cite{Tsuji07}, \cite{ChenInvariant}, \cite{BP}, \cite{BPK}, \cite{Yau}, \cite{Tsuji}, \cite{NZZ}, \cite{Taka}, \cite{DWZZ}). Our first result will indicate the basic difference between $K_p$ and $K_2$ when $p>2$. \begin{theorem}\label{th:NRA_0} Let $\Omega$ be a bounded complete Reinhardt domain in $\mathbb C^n$. Then the following properties hold: \begin{enumerate} \item If\/ $K_2(z,w)$ is not zero-free, then there exists $k_0\in \mathbb Z^+$ such that $K_{2k}(z)$ is not real-analytic on $\Omega$\/ for any integer $k\ge k_0$.
\item Suppose there exist $\zeta_0,z_0\in \Omega$ such that $K_2(\zeta_0,z_0)=0$ and ${\rm ord}_{z_0} K_2(\zeta_0,\cdot)=1$. Then $K_{2k}(z)$ is not real-analytic on $\Omega$\/ for any integer $k\ge 2$. Moreover, either ${\rm Re\,}m_p(\zeta_0,\cdot)$ or ${\rm Im\,}m_p(\zeta_0,\cdot)$ is not real-analytic on $\Omega$\/ for any rational $p>2$. \end{enumerate} \end{theorem} In \cite{Lu}, Lu Qi-Keng asked: for which domains is $K_2(z,w)$ zero-free? It turns out that for most pseudoconvex domains $K_2(z,w)$ is not zero-free; among them the most explicit examples are bounded pseudoconvex complete Reinhardt domains (cf. \cite{Boas} and \cite{JP}). We will verify that the Thullen-type domain $\{(z_1,z_2)\in \mathbb C^2: |z_1|+|z_2|^{2/\alpha}<1\}$ for $\alpha>2$ satisfies the hypothesis in Theorem \ref{th:NRA_0}/(2) by using the calculation in Boas-Fu-Straube \cite{BFS}. Note that $m_p(\cdot,z)$ and $K_p(\cdot,z)$ are holomorphic on $\Omega$ for fixed $z$. On the other hand, the function theory of $m_p(z,\cdot)$ or $K_p(z,\cdot)$ for fixed $z$ is completely mysterious. Using the calculus of variations, we get the following fundamental reproducing formula \begin{equation}\label{eq:RPF} f(z) = \int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{K_p(\cdot,z)}\,f,\ \ \ \forall\,f\in A^p(\Omega). \end{equation} The nonlinear factor $ |m_p(\cdot,z)|^{p-2}$ causes the real difficulty for applications. Thus the reproducing formula is of limited use without the help of some techniques from nonlinear analysis of the $p-$Laplacian (cf. \cite{Lind1}). This also indicates the major difference from the Bergman theory. It is fairly easy to show that $m_p(z)$ and $K_p(z)$ are locally Lipschitz continuous. However, the regularity problem for $m_p(z,\cdot)$ or $K_p(z,\cdot)$ is more difficult. Regularity in a minimizing problem is classical and goes back to Hilbert's famous problem-list.
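As a quick numerical sanity check of the reproducing formula \eqref{eq:RPF} (purely illustrative, not part of the argument): on the unit disc $\Delta$ at $z=0$ one has $m_p(\cdot,0)\equiv 1$ and $K_p(\cdot,0)\equiv 1/\pi$ (see the computation for the unit ball in Section 2), so for every $p\ge 1$ the formula \eqref{eq:RPF} reduces to the mean value property $f(0)=\frac1\pi\int_\Delta f$. The sketch below verifies this with a midpoint quadrature in polar coordinates; the resolution and the test polynomial are arbitrary choices.

```python
import numpy as np

def disc_integral(f, nr=400, nt=400):
    """Midpoint-rule approximation of the area integral of f over the unit disc."""
    r = (np.arange(nr) + 0.5) / nr              # radial midpoints in (0, 1)
    t = (np.arange(nt) + 0.5) * 2 * np.pi / nt  # angular midpoints in (0, 2*pi)
    R, T = np.meshgrid(r, t, indexing="ij")
    Z = R * np.exp(1j * T)                      # quadrature nodes in the disc
    dA = R / nr * (2 * np.pi / nt)              # area element r dr dtheta
    return np.sum(f(Z) * dA)

# an arbitrary test function in A^p of the disc
f = lambda z: 1 + 2 * z + 3 * z ** 2

# with m_p(., 0) = 1 and K_p(., 0) = 1/pi, formula (eq:RPF) at z = 0 reads
# f(0) = (1/pi) * integral of f over the disc, independently of p
lhs = f(0)
rhs = disc_integral(f) / np.pi
print(abs(lhs - rhs))  # essentially zero
```

The same scheme can be used to test \eqref{eq:RPF} at other points once $m_p(\cdot,z)$ is known in closed form, e.g.\ via the transformation rule of Section 2.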
\begin{theorem}\label{th:RegHolder} \begin{enumerate} \item For any $p >1 $ and any compact set $S\subset \Omega$, there exists a constant $C>0$ such that $$ |m_p(z,w)-m_p(z,w')|\le C|w-w'|^{\frac12},\ \ \ \forall\,z,w,w'\in S. $$ \item Let $S_w:=\{m_1(\cdot,w)=0\}$. For every open set $w\in U\subset\subset \Omega\backslash S_w$, there exists a constant $C>0$ such that $$ |m_1(z,w)-m_1(z,w')|\le C|w-w'|^{\frac12},\ \ \ \forall\,z,w'\in U. $$ \end{enumerate} The same conclusions also hold for $K_p$. \end{theorem} \begin{remark} It is interesting to point out that Theorem \ref{th:RegHolder} plays an essential role in the proof of Theorem \ref{th:NRA_0}. \end{remark} Note that the Cauchy-Schwarz inequality gives $$ |K_2(z,w)|\le K_2(z)^{\frac12}\,K_2(w)^{\frac12}\ \ \ \text{and}\ \ \ 2{\rm Re\,}K_2(z,w)\le K_2(z)+K_2(w). $$ Surprisingly, these inequalities remain valid for general $p\ge 1$. \begin{theorem} \begin{enumerate} \item $ |K_p(z,w)|\le K_p(z)^{\frac1p}\,K_p(w)^{\frac1q}, $ where $1/p+1/q=1$. \item $ {\rm Re}\left\{K_p(z,w)+K_p(w,z)\right\}\le K_p(z) + K_p(w). $ \end{enumerate} Each equality holds if and only if $z=w$. \end{theorem} \begin{remark} In particular, we have $|K_1(z,w)|\le K_1(z)$, so that $K_1(z,\cdot)$ is a bounded function on $\Omega$ for fixed $z$. \end{remark} We also investigate the $p-$Bergman metric given by $$ B_p(z;X) := {K_p(z)^{-\frac1p}}\cdot {\sup}_f\ |Xf(z)| $$ where the supremum is taken over all $f\in A^p(\Omega)$ with $f(z)=0$ and $\|f\|_p=1$. It is easy to see that $B_2(z;X)$ is the standard Bergman metric. The $p-$Bergman metric is an invariant (Finsler) metric for\/ {\it simply-connected\/} bounded domains, and is always no less than the Carath\'eodory metric $C(z;X)$ (the case $p=2$ goes back to Lu Qi-Keng \cite{Lu}; see also \cite{Hahn}). More interestingly, we have \begin{proposition} $B_p(z;X)\rightarrow C(z;X)$ as $p\rightarrow \infty$. 
\end{proposition} For a real-valued upper semicontinuous function $u$ defined on a domain $\Omega\subset \mathbb C^n$, we define the\/ {\it generalized Levi form} of $u$ by $$ i\partial\bar{\partial} u(z;X):=\liminf_{r\rightarrow 0+}\frac1{r^2}\left\{\frac1{2\pi}\int_0^{2\pi}u(z+re^{i\theta}X)d\theta-u(z)\right\}. $$ A natural question is to find the relationship between $i\partial\bar{\partial} \log K_p(z;X)$ and $B_p(z;X)$. Using the variational method, we are able to verify the following \begin{theorem} $$ i\partial\bar{\partial} \log K_p(z;X)\ge \left\{ \begin{array}{cl} \frac{p}{2(p-1)}\, B_p(z;X)^2 & \text{for\ \ }p\ge 2\\ \frac{p}2\, C(z;X)^2 & \text{for\ \ } p\le 2. \end{array} \right. $$ \end{theorem} \begin{remark} In particular, $\log K_p(z)$ is a (continuous) strictly psh function. \end{remark} In \cite{Yau}, Yau suggested investigating the relationship between the $p$-Bergman metrics as $p$ changes. Motivated by the spectral theory of the $p-$Laplacian (cf. \cite{Lind}), we will show \begin{theorem} \begin{enumerate} \item $ \lim_{s\rightarrow p-} m_s(z,w) = m_p(z,w) $ for $p>1$ and $ \lim_{s\rightarrow p+} m_s(z,w) $ exists for $p\ge 1$. Moreover, if $A^{p'}(\Omega)$ lies dense in $A^p(\Omega)$ for some $p'>p$, then $$ m_p(z,w)=\lim_{s\rightarrow p+} m_s(z,w). $$ \item $\lim_{s\rightarrow p\pm} B_s(z;X)$ exist for $p>0$ and $ B_p(z;X) = \lim_{s\rightarrow p-} B_s(z;X). $ Moreover, if there exists $p'>p$ such that $A^{p'}(\Omega)$ lies dense in $A^p(\Omega)$, then $$ B_p(z;X)=\lim_{s\rightarrow p} B_s(z;X). $$ Conclusion (1) also holds for $K_p$. \end{enumerate} \end{theorem} On the other hand, we have \begin{proposition}\label{prop:NonCont} Let $\Omega=D\backslash S$ where $D$ is a bounded domain in $\mathbb C$ and $S$ is a compact set in $D$ which has positive $2-$capacity but zero $p-$capacity for every $p<2$. Then $$ K_2(z)>\lim_{p\rightarrow 2+} K_p(z).
$$ \end{proposition} Recall that the $p-$capacity of $S$ is given by $ {\rm Cap}_p(S):=\inf_\phi \int_{\mathbb C} |\nabla \phi|^p $ where the infimum is taken over all $\phi\in C_0^\infty(\mathbb C)$ such that $\phi\ge 1$ on $S$. The condition of Proposition \ref{prop:NonCont} is satisfied, for instance, if the $h-$Hausdorff measure $\Lambda_h(S)$ of $S$ is positive and finite, where $h(t)=(\log1/t)^{-\alpha}$ for some $\alpha>1$. Due to the failure of $L^p-$estimates for $\bar{\partial}$ on general pseudoconvex domains when $p>2$ (compare \cite{FS}), it is plausible to study the boundary behavior of $K_p(z)$ through comparison with $K_2(z)$. With the help of $L^2$ estimates for $\bar{\partial}$, we are able to show the following \begin{theorem} Let $\Omega$ be a bounded pseudoconvex domain with $C^2-$boundary and let $\delta$ denote the boundary distance. Then the following properties hold: \begin{enumerate} \item There exist constants $\gamma,C>0$ such that the following estimates hold near $\partial \Omega:$ \begin{eqnarray*} {K_p(z)^{\frac1p}}/{K_2(z)^{\frac12}} & \le & C\, \delta(z)^{\frac12-\frac1p} |\log \delta(z)|^{\frac{n(p-2)}{2p\gamma}},\ \ \ p\ge 2,\\ {K_p(z)^{\frac1p}}/{K_2(z)^{\frac12}} & \ge & C^{-1}\, \delta(z)^{\frac12-\frac1p} |\log \delta(z)|^{-\frac{(n+\gamma)(p-2)}{2p\gamma}},\ \ \ p\le 2. \end{eqnarray*} \item For every $2\le p<2+\frac2n$ there exists a constant $C=C_{p,\Omega}>0$ such that the following estimate holds near $\partial \Omega:$ $$ {K_p(z)^{\frac1p}}/{K_2(z)^{\frac12}} \ge C^{-1}\, \delta(z)^{\frac{(n+1)(p-2)}{2p}} |\log \delta(z)|^{-\frac{(n+1)(p-2)}{2p\gamma}}. $$ \end{enumerate} \end{theorem} It is a straightforward consequence of the Ohsawa-Takegoshi extension theorem \cite{OT} that $K_2(z)\gtrsim \delta^{-2}$ holds for bounded pseudoconvex domains with $C^2-$boundary. Thus \begin{corollary} If $\Omega$ is a bounded pseudoconvex domain with $C^2-$boundary, $K_p(z)$ is an exhaustion function for every $2\le p<2+\frac2n$.
\end{corollary} \begin{remark} It was shown in \cite{NZZ} that $K_p(z)$ is an exhaustion function for any $0<p<2$ and any bounded pseudoconvex domain. \end{remark} It is reasonable to ask the following \begin{problem} Is $K_p(z)$ an exhaustion function for any $p>2$? \end{problem} \begin{remark} The answer is affirmative when $\Omega$ is a simply-connected uniformly squeezing domain or a smooth strictly pseudoconvex domain (cf. \cite{DWZZ}). \end{remark} Most results in this paper extend to the $p-$Bergman space related to Hermitian line bundles over complex manifolds. Nevertheless, we stick to the simplest case of bounded domains with trivial line bundle in order to make the arguments as transparent as possible. \section{Definitions and basic properties} \subsection{The $p-$Bergman space} For a domain $\Omega\subset\subset \mathbb C^n$ we define the $p-$Bergman space to be $$ A^p(\Omega):=\left\{f\in \mathcal O(\Omega): \|f\|_p^p:=\int_\Omega |f|^p<\infty\right\}. $$ \begin{proposition}[Bergman inequality] For any compact set $S\subset \Omega$ there exists a constant $C_S>0$ such that \begin{equation}\label{eq:BergIneq} \sup_{S} |f|^p\le C_S \|f\|_p^p. \end{equation} \end{proposition} \begin{proof} Set $r=d(S,\partial \Omega)$. For any $z\in S$, we have $P(z,r/n):=\prod_{j=1}^n\Delta(z_j,r/n)\subset \Omega$. It follows directly from the mean-value inequality of psh functions that \begin{equation}\label{eq:BergIneq_2} |f(z)|^p \le \frac1{|P(z,r/n)|}\int_{P(z,r/n)} |f|^p \le \frac1{\pi^n(r/n)^{2n}} \|f\|_p^p. \end{equation} \end{proof} \begin{proposition}\label{prop:Banach} $A^p(\Omega)$ is a Banach space for $p\ge 1$. \end{proposition} \begin{proof} It suffices to verify that $A^p(\Omega)$ is a\/ {\it closed} subspace of $L^p(\Omega)$. Let $\{f_j\}\subset A^p(\Omega)$ satisfy $f_j\rightarrow f_0$ in $L^p(\Omega)$.
By (\ref{eq:BergIneq}) we know that $\{f_j\}$ forms a normal family so that there exists a subsequence $f_{j_k}$ converging locally uniformly to some $\hat{f}_0\in \mathcal O(\Omega)$. Fatou's lemma yields $$ \|f_{j_k}-\hat{f}_0\|_p \le \liminf_{m\rightarrow \infty} \|f_{j_k}-f_{j_m}\|_p\le \liminf_{m\rightarrow \infty} \left[ \|f_{j_k}-f_{0}\|_p+\|f_{j_m}-f_{0}\|_p\right]=\|f_{j_k}-f_{0}\|_p, $$ which implies that $\hat{f}_0\in A^p(\Omega)$ and $f_{j_k}\rightarrow \hat{f}_0$ in $L^p(\Omega)$. Thus $f_0=\hat{f}_0$ holds a.e. on $\Omega$. \end{proof} \begin{remark} Analogously, one can show that $A^p(\Omega)$ is a complete metric space for $0<p<1$, where the metric is given by $d(f_1,f_2):=\|f_1-f_2\|_p^p$. \end{remark} \subsection{A minimizing problem} For a bounded domain $\Omega\subset \mathbb C^n$, we consider the following minimizing problem: \begin{equation}\label{eq:Min_1} m_p(z)=m_{\Omega,p}(z)=\inf\left\{\|f\|_p:f\in A^p(\Omega),f(z)=1\right\}. \end{equation} \begin{proposition}[Existence] There exists at least one minimizer in \eqref{eq:Min_1}. \end{proposition} \begin{proof} Take $\{f_j\}\subset A^p(\Omega)$ such that $f_j(z)=1$ and $\|f_j\|_p\rightarrow m_p(z)$ as $j\rightarrow \infty$. The Bergman inequality implies that $\{f_j\}$ is a normal family so that there exists a subsequence $\{f_{j_k}\}$ which converges locally uniformly to some $f_0\in {\mathcal O}(\Omega)$. By Fatou's lemma, we have $$ \int_\Omega |f_0|^p \le \liminf_{k\rightarrow \infty} \int_\Omega |f_{j_k}|^p=m_p(z)^p. $$ On the other hand, since $f_j(z)=1$, we have $f_0(z)=1$ and $ \|f_0\|_p=m_p(z), $ i.e., $f_0$ is a minimizer. \end{proof} \begin{proposition}[Uniqueness]\label{prop:uniq} For $ p \ge 1$ there is only one minimizer in \eqref{eq:Min_1}. \end{proposition} \begin{proof} Let $f_1,f_2$ be two minimizers of \eqref{eq:Min_1}. We take $h:=\frac{f_1+f_2}2$. Clearly, $h$ belongs to $A^p(\Omega)$ and satisfies $h(z)=1$.
Note that $$ \left|\frac{a_1+a_2}2\right|^p\le \frac{|a_1|^p+|a_2|^p}2, $$ and equality holds only when $a_1=a_2$. It follows that if $f_1\neq f_2$ then $$ \int_\Omega |f_1|^p \le \int_\Omega |h|^p < \int_\Omega \frac{|f_1|^p+|f_2|^p}2 = m_p(z)^p, $$ which is absurd. \end{proof} \begin{remark} It is not known whether the uniqueness result holds for\/ $0<p<1$. \end{remark} Let $ m_p(\cdot,z)$ denote a minimizer in (\ref{eq:Min_1}) (one warning: $m_p(z,z)=1\neq m_p(z)$!). \begin{definition} We call $K_p(z):=m_p(z)^{-p}$ the $p-$Bergman kernel for $p>0$ and $K_p(z,w):=m_p(z,w)K_p(w)$ the off-diagonal $p-$Bergman kernel for $p\ge 1$. \end{definition} \begin{proposition} For every $p>0$ we have \begin{equation}\label{eq:max_1} K_p(z)=\sup\left\{|f(z)|^p: f\in A^p(\Omega),\|f\|_p=1\right\}. \end{equation} \end{proposition} \begin{proof} Take $f_0\in A^p(\Omega)$ with $f_0(z)=1$ and $\|f_0\|_p=m_p(z)$. We have $$ \sup_{f\in A^p(\Omega)} \frac{|f(z)|^p}{\|f\|_p^p}\ge \frac{|f_0(z)|^p}{\|f_0\|_p^p}= m_p(z)^{-p}= K_p(z). $$ On the other hand, for $f\in A^p(\Omega)$ with $\|f\|_p=1$ we see that $\hat{f}:=f/f(z)\in A^p(\Omega)$ satisfies $\hat{f}(z)=1$. It follows that $$ m_p(z) \le \|\hat{f}\|_p=1/|f(z)|,\ \ \ \text{i.e.},\ |f(z)|^p \le m_p(z)^{-p}=K_p(z). $$ \end{proof} In particular, if $\Omega\subset \Omega'$, then $$ K_{\Omega,p}(z)\ge K_{\Omega',p}(z)\ \ \ \text{for\ }z\in \Omega. $$ Moreover, we have \begin{equation}\label{eq:BergIneq_3} {|\Omega|}^{-1} \le K_p(z)\le C_n \delta(z)^{-2n} \end{equation} in view of (\ref{eq:BergIneq_2}), where $\delta=\delta_\Omega$ denotes the boundary distance. Let us present a few elementary properties (some of them are known). \begin{proposition}[Transformation rule]\label{prop:trans_2} Let $F:\Omega_1\rightarrow \Omega_2$ be a biholomorphic mapping between bounded simply-connected domains. Let $J_F$ denote the complex Jacobian of $F$. 
Then \begin{eqnarray} m_{\Omega_1,p}(z) & = & m_{\Omega_2,p}(F(z)) |J_F(z)|^{-2/p},\ \ \ p>0, \label{eq:trans_1} \\ K_{\Omega_1,p}(z) & = & K_{\Omega_2,p}(F(z))|J_F(z)|^2,\ \ \ p>0, \label{eq:trans_2}\\ m_{\Omega_1,p}(z,w) & = & m_{\Omega_2,p}(F(z),F(w)) J_F(z)^{\frac2p} J_F(w)^{-\frac2p},\ \ \ p\ge 1,\label{eq:trans_3}\\ K_{\Omega_1,p}(z,w) & = & K_{\Omega_2,p}(F(z),F(w)) J_F(z)^{\frac2p} J_F(w)^{1-\frac2p}\,\overline{J_F(w)},\ \ \ p\ge 1\label{eq:trans_4}. \end{eqnarray} Moreover, equalities hold for arbitrary bounded domains when $2/p\in \mathbb Z^+$. \end{proposition} \begin{proof} Since $$ \int_{\Omega_2} |f_2|^p = \int_{\Omega_1} |f_2\circ F|^p |J_F|^2, $$ we conclude that $ f_2\in A^p(\Omega_2) $ if and only if $ {f}_1:=f_2 \circ F\cdot J_F^{2/p}\in A^p(\Omega_1)$ (only here we have to use the assumption that $\Omega_1$ is simply-connected). If $f_2(F(z))=1$, then $f_1(z)J_F(z)^{-2/p}=1$, so that $$ m_{\Omega_1,p}(z)^p \le |J_F(z)|^{-2}\,\int_{\Omega_1} |f_1|^p = |J_F(z)|^{-2}\,\int_{\Omega_2} |f_2|^p. $$ Taking the minimum over $f_2$, we get $$ m_{\Omega_1,p}(z)^p \le |J_F(z)|^{-2} m_{\Omega_2,p}(F(z))^p. $$ Considering $F^{-1}$ instead of $F$, we get the inverse inequality in \eqref{eq:trans_1}. Next, for fixed $w\in \Omega_1$ we define a holomorphic function by $$ f_1(z):=m_{\Omega_2,p}(F(z),F(w)) J_F(z)^{\frac2p} J_F(w)^{-\frac2p}. $$ Clearly, we have $f_1(w)=1$ and $$ \int_{\Omega_1}|f_1|^p = |J_F(w)|^{-2} \int_{\Omega_2}| m_{\Omega_2,p}(\cdot,F(w)) |^p = |J_F(w)|^{-2} m_{\Omega_2,p}(F(w))^p=m_{\Omega_1,p}(w)^p $$ in view of \eqref{eq:trans_1}. (\ref{eq:trans_3}) follows immediately from Proposition \ref{prop:uniq}. The remaining equalities follow from the relations $$ K_p(z)=m_p(z)^{-p}\ \ \ \text{and}\ \ \ K_p(z,w)=m_p(z,w)K_p(w). $$ \end{proof} \begin{remark} The simply-connected hypothesis cannot be removed (see \cite{NZZ}, Remark 2.3).
\end{remark} \begin{proposition}[Product rule]\label{prop:product_1} Let $\Omega'$ and $\Omega''$ be bounded domains in\/ $\mathbb C^{n}$ and\/ $\mathbb C^{m}$ respectively. Set $\Omega=\Omega'\times \Omega''$ and $z=(z',z'')$. Then we have \begin{eqnarray*} m_{\Omega,p}(z) & = & m_{\Omega',p}(z')\cdot m_{\Omega'',p}(z''),\ \ \ p>0,\\ m_{\Omega,p}(z,w) & = & m_{\Omega',\,p}(z',w')\cdot m_{\Omega'',\,p}(z'',w''), \ \ \ p\ge 1. \end{eqnarray*} The same conclusions also hold for $K_p$. \end{proposition} \begin{proof} For fixed $z'\in \Omega'$ and $z''\in \Omega''$ we take $f_1\in A^p(\Omega')$ and $f_2\in A^p(\Omega'')$ such that $f_1(z')= f_2(z'')=1$ and $$ m_{\Omega',p}(z')=\|f_1\|_p,\ \ \ m_{\Omega'',p}(z'')=\|f_2\|_p. $$ Fubini's theorem gives $$ \int_{\zeta'\in \Omega'}\int_{\zeta''\in \Omega''} |f_1(\zeta')f_2(\zeta'')|^p =\|f_1\|_p^p\cdot \|f_2\|_p^p=m_{\Omega',p}(z')^p\cdot m_{\Omega'',p}(z'')^p. $$ Thus $$ m_{\Omega,p}(z)\le m_{\Omega',p}(z')\cdot m_{\Omega'',p}(z''). $$ On the other hand, for every $h\in A^p(\Omega)$ we have \begin{eqnarray*} |h(z',z'')|^p & \le & K_{\Omega',p}(z')\cdot \int_{\Omega'} |h(\cdot,z'')|^p\\ & \le & K_{\Omega',p}(z')\cdot K_{\Omega'',p}(z'')\cdot \int_{\Omega'} \int_{\Omega''} |h|^p, \end{eqnarray*} so that $$ K_{\Omega,p}(z)\le K_{\Omega',p}(z')\cdot K_{\Omega'',p}(z''). $$ Since $K_p(z)=m_p(z)^{-p}$, the first equality follows immediately. Next, we note that for fixed $w\in \Omega$ the function $$ f_0(z):=m_{\Omega',\,p}(z',w')\cdot m_{\Omega'',\,p}(z'',w'') $$ is holomorphic on $\Omega$ and satisfies $f_0(w)=1$, \begin{eqnarray*} \int_\Omega |f_0|^p & = & \int_{\Omega'} |m_{\Omega',\,p}(\cdot,w')|^p\,\int_{\Omega''} |m_{\Omega'',\,p}(\cdot,w'')|^p\\ & = & m_{\Omega',\,p}(w')^p\,m_{\Omega'',\,p}(w'')^p\\ & = & m_{\Omega,p}(w)^p. \end{eqnarray*} By uniqueness of the minimizer we immediately get the second equality.
\end{proof} \begin{proposition}\label{prop:ball} For the unit ball\/ $\mathbb B^n\subset \mathbb C^n$ we have \begin{equation}\label{eq:ball} K_p(z,w)=K_{\mathbb B^n,\,p}(z,w)=\frac{n!}{\pi^n}\,\frac{(1-|w|^2)^{(n+1)(\frac2p-1)}}{(1-\langle z,w\rangle)^{\frac{2(n+1)}p}}. \end{equation} \end{proposition} \begin{proof} For any $f\in A^p(\mathbb B^n)$ with $f(0)=1$ we have $$ 1=|f(0)|^p \le \frac1{|\mathbb B^n|} \int_{\mathbb B^n} |f|^p, $$ while for $f_0\equiv 1$, $$ \int_{\mathbb B^n} |f_0|^p=|\mathbb B^n|\le \int_{\mathbb B^n} |f|^p. $$ Thus $f_0$ is a minimizer at $0$, so that $$ m_p(\cdot,0)=f_0(\cdot)\equiv 1, $$ and $$ K_p(\cdot,0)=\frac{m_p(\cdot,0)}{m_p(0)^p}=\frac1{|\mathbb B^n|}=\frac{n!}{\pi^n}. $$ For $a\in \Delta$ we set $w_a:=(a,0')$. Consider the following automorphism of $\mathbb B^n$ $$ F_a:z\mapsto \left(\frac{z_1-a}{1-\bar{a}z_1},\frac{\sqrt{1-|a|^2}\,z'}{1-\bar{a}z_1}\right). $$ A straightforward calculation shows $$ J_{F_a}(z)=\frac{(1-|a|^2)^{\frac{n+1}2}}{(1-\bar{a}z_1)^{n+1}},\ \ \ J_{F_a}(w_a)=(1-|a|^2)^{-\frac{n+1}2}. $$ It follows that \begin{eqnarray*} K_p(z,w_a) = K_p(z,F^{-1}_a(0)) & = & K_p(F_a(z),0) J_{F_a}(z)^{\frac2p} J_{F_a}(w_a)^{1-\frac2p}\overline{J_{F_a}(w_a)} \\ & = & \frac{n!}{\pi^n} \frac{(1-|a|^2)^{\frac{n+1}p}}{(1-\bar{a}z_1)^{\frac{2(n+1)}p}} (1-|a|^2)^{-(n+1)(1-\frac1p)}\\ & = & \frac{n!}{\pi^n} \frac{(1-|w_a|^2)^{(n+1)(\frac2p-1)}}{(1-\langle z,w_a\rangle)^{\frac{2(n+1)}p}}. \end{eqnarray*} For general $w\in \mathbb B^n$ we take a unitary transformation ${\mathcal U}$ with ${\mathcal U}(w)=(|w|,0')$. Thus \begin{eqnarray*} K_p(z,w) & = & K_p(\mathcal U(z),\mathcal U(w)) J_{\mathcal U}(z)^{\frac2p} J_{\mathcal U}(w)^{1-\frac2p}\overline{J_{\mathcal U}(w)}\\ & = & \frac{n!}{\pi^n} \frac{(1-|\mathcal U(w)|^2)^{(n+1)(\frac2p-1)}}{(1-\langle \mathcal U(z),\mathcal U(w)\rangle)^{\frac{2(n+1)}p}}\\ & = & \frac{n!}{\pi^n} \frac{(1-|w|^2)^{(n+1)(\frac2p-1)}}{(1-\langle z,w\rangle)^{\frac{2(n+1)}p}}. 
\end{eqnarray*} \end{proof} This proposition combined with the product rule gives \begin{proposition}\label{prop:polydisc} For the unit polydisc $\Delta^n\subset \mathbb C^n$ we have \begin{equation}\label{eq:polydisc} K_p(z,w)=K_{\Delta^n,\,p}(z,w)=\frac1{\pi^n} \prod_{j=1}^n \frac{(1-|w_j|^2)^{\frac4p-2}}{(1-\overline{w}_j z_j)^{\frac4p}}. \end{equation} \end{proposition} \begin{proposition}\label{prop:continuity} \begin{enumerate} \item Both $m_p(z)$ and $K_p(z)$ are locally Lipschitz continuous for $p>0$. \item Both $m_p(z,w)$ and $K_p(z,w)$ are continuous in $(z,w)$ for $p\ge 1$. \end{enumerate} \end{proposition} \begin{proof} (1) Let $S$ be a compact set in $\Omega$ and $z\in S$. Take $f\in A^p(\Omega)$ with $\|f\|_p=1$ such that $|f(z)|^p=K_p(z)$. It follows from Cauchy's estimates that for any $w\in S$, $$ K_p(z)^{1/p}=|f(z)|\le |f(w)|+C_S|z-w|\le K_p(w)^{1/p} +C_S|z-w|, $$ i.e., $K_p(z)^{1/p}$ is locally Lipschitz continuous in $z$; so are $K_p(z)$ and $m_p(z)$. (2) It suffices to verify continuity of $m_p(z,w)$. Let $z_0,w_0\in \Omega$ be fixed. We first verify that \begin{equation}\label{eq:converg_1} \lim_{w\rightarrow w_0} m_p(z_0,w) = m_p(z_0,w_0). \end{equation} Let $w_j\rightarrow w_0$. Since $$ \int_\Omega |m_p(\cdot,w_j)|^p=m_p(w_j)^p=\frac1{K_p(w_j)}\le |\Omega|, $$ the functions $\{m_p(\cdot,w_j)\}$ form a normal family, so there exists a subsequence $\{m_p(\cdot,w_{j_k})\}$ converging locally uniformly to a function $f_0\in \mathcal O(\Omega)$. Fatou's lemma yields $$ \int_\Omega |f_0|^p \le \liminf_{k\rightarrow \infty} \int_\Omega |m_p(\cdot,w_{j_k})|^p = \liminf_{k\rightarrow \infty} m_p(w_{j_k})^p=m_p(w_0)^p. $$ On the other hand, Cauchy's estimates yield $$ \frac{|m_p(w_0,w_{j_k})-m_p(w_{j_k},w_{j_k})|}{|w_0-w_{j_k}|}\le C \int_\Omega |m_p(\cdot,w_{j_k})|\le C \|m_p(\cdot,w_{j_k})\|_p |\Omega|^{\frac1q}\le C |\Omega| $$ where $\frac1p+\frac1q=1$ and $C$ depends only on $w_0$.
We then have $$ f_0(w_0)=\lim_{k\rightarrow \infty}m_p(w_0,w_{j_k}) = \lim_{k\rightarrow \infty} m_p(w_{j_k},w_{j_k})=1, $$ so that $f_0=m_p(\cdot,w_0)$ by uniqueness of the minimizer. Consequently, $$ \lim_{k\rightarrow \infty} m_p(z_0,w_{j_k}) = m_p(z_0,w_0). $$ Since the sequence $\{w_j\}$ can be chosen arbitrarily, we get (\ref{eq:converg_1}). Finally, \begin{eqnarray*} |m_p(z,w)-m_p(z_0,w_0)| & \le & |m_p(z,w)-m_p(z_0,w)| + |m_p(z_0,w)-m_p(z_0,w_0)| \\ & \le & C_0 |\Omega| |z-z_0| + |m_p(z_0,w)-m_p(z_0,w_0)| \\ & \rightarrow & 0 \end{eqnarray*} as $z\rightarrow z_0$ and $w\rightarrow w_0$. \end{proof} \subsection{A reproducing formula} Throughout this subsection we always assume that $p\ge 1$. \begin{lemma}\label{lm:var_2} For any $f\in A^p(\Omega)$ with $f(z)=0$, we have \begin{equation}\label{eq:Var_2} \int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{m_p(\cdot,z)}\,f = 0. \end{equation} \end{lemma} \begin{proof} We will use the calculus of variations. For fixed $f$ we consider the family $$ f_t=m_p(\cdot,z)+tf\in A^p(\Omega),\ \ \ t\in \mathbb C. $$ Since $f_t(z)={1}$, we see that the function $J(t):=\|f_t\|^p_p$ attains the minimum at $t=0$. Rewrite $$ |f_t|^p=\left(|m_p(\cdot,z)|^2+tf\,\overline{m_p(\cdot,z)}+\overline{tf}\,m_p(\cdot,z)+|t|^2|f|^2\right)^{\frac{p}2}. $$ Since $$ \frac{\partial |f_t|^p}{\partial t}= {\frac{p}2} |f_t|^{{p}-2}\bar{f}_t f, $$ we have $$ \left|\frac{\partial |f_t|^p}{\partial t}\right|={\frac{p}2} |f_t|^{p-1}|f|\le {\frac{p}2} |f|(|m_p(\cdot,z)|+|f|)^{p-1}=:\phi $$ whenever $|t|\le 1$. Analogously, we may verify that $$ \left|\frac{\partial |f_t|^p}{\partial \bar{t}}\right|\le \phi. $$ Note that $$ \int_\Omega \phi \le {\frac{p}2} \|f\|_p \||m_p(\cdot,z)|+|f|\|_p^{p-1}<\infty $$ in view of H\"older's inequality when $p>1$. The inequality for $p=1$ is clearly trivial. 
It then follows from the dominated convergence theorem that $$ 0=\frac{\partial J}{\partial t}(0)=\int_\Omega \left.\frac{\partial |f_t|^p}{\partial t}\right|_{t=0}={\frac{p}2} \int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{m_p(\cdot,z)}\,f, $$ i.e., (\ref{eq:Var_2}) holds. \end{proof} Now we reach the following fundamental fact. \begin{theorem}[Reproducing formula]\label{th:RP} For any $f\in A^p(\Omega)$ we have \begin{equation}\label{eq:RP} f(z) = m_p(z)^{-p} \int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{m_p(\cdot,z)}\,f= \int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{K_p(\cdot,z)}\,f. \end{equation} \end{theorem} \begin{proof} Let $f\in A^p(\Omega)$. With $f$ replaced by $f-f(z)$ in (\ref{eq:Var_2}), we obtain \begin{equation}\label{eq:RP_2} \int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{m_p(\cdot,z)}\,f = f(z)\cdot \int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{m_p(\cdot,z)}. \end{equation} Substituting $f=m_p(\cdot,z)$ into (\ref{eq:RP_2}), we obtain $$ m_p(z)^p=\int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{m_p(\cdot,z)}. $$ This combined with (\ref{eq:RP_2}) yields (\ref{eq:RP}). \end{proof} Let us present a few simple consequences of Theorem \ref{th:RP}. \begin{proposition}\label{prop:indep} Given two distinct points $z,w\in \Omega$, $m_p(\cdot,z)$ and $m_p(\cdot,w)$ are not parallel, i.e., $m_p(\cdot,z)\neq c m_p(\cdot,w)$ for any $c\in \mathbb C$. \end{proposition} \begin{proof} Suppose on the contrary that $m_p(\cdot,z)=c m_p(\cdot,w)$ for some complex number $c$. It follows from Theorem \ref{th:RP} that for any $f\in A^p(\Omega)$, \begin{eqnarray*} f(z) & = & m_p(z)^{-p} \int_\Omega |m_p(\cdot,z)|^{p-2}\, \overline{m_p(\cdot,z)}\,f\\ & = & m_p(z)^{-p} |c|^{p-2}\bar{c} \int_\Omega |m_p(\cdot,w)|^{p-2}\, \overline{m_p(\cdot,w)}\,f\\ & = & \left[m_p(w)/m_p(z)\right]^{p} |c|^{p-2} \bar{c}\,f(w). \end{eqnarray*} On the other hand, $$ m_p(z)^p=\int_\Omega |m_p(\cdot,z)|^p = |c|^p \int_\Omega |m_p(\cdot,w)|^p=|c|^p m_p(w)^p. $$ Thus we have $f(z)=f(w)/c$.
But this is absurd since one can choose $f\in A^p(\Omega)$ with $f(z)=0$ and $f(w)\neq 0$. \end{proof} \begin{problem} Let $w_1,\cdots,w_m$ be different points in $\Omega$. Is it possible to conclude that $m_p(\cdot,w_1)$, $\cdots$, $m_p(\cdot,w_m)$ are linearly independent? \end{problem} \begin{proposition}\label{prop:Triangle} We have \begin{equation}\label{eq:Triangle} |m_p(z,w)|\le m_p(w)/m_p(z) \end{equation} and equality holds if and only if $z=w$. Equivalently, \begin{equation}\label{eq:Holder_3} |K_p(z,w)|\le K_p(z)^{\frac1p}K_p(w)^{\frac1q} \end{equation} where $\frac1p+\frac1q=1$, and equality holds if and only if $z=w$. \end{proposition} \begin{proof} Substituting $f=m_p(\cdot,w)$ into (\ref{eq:RP}), we obtain \begin{eqnarray*} |m_p(z,w)| & = & m_p(z)^{-p} \left|\int_\Omega |m_p(\cdot,z)|^{p-2}\,\overline{m_p(\cdot,z)}\,m_p(\cdot,w)\right| \\ & \le & m_p(z)^{-p} m_p(z)^{p-1} m_p(w) \ \ \ \ \ (\text{H\"older's\ inequality})\\ & = & m_p(w)/m_p(z). \end{eqnarray*} Clearly, equality holds if $z=w$. On the other hand, if equality in \eqref{eq:Triangle} holds then there exists $r>0$ such that \begin{equation}\label{eq:iff} |m_p(\cdot,w)|^p =r \left(|m_p(\cdot,z)|^{p-2}|m_p(\cdot,z)|\right)^{\frac{p}{p-1}}=r |m_p(\cdot,z)|^p. \end{equation} Set $h:= m_p(\cdot,z)/m_p(\cdot,w)$ and $S_w:=\{m_p(\cdot,w)=0\}$. Since $m_p(w,w)=1$, it follows that $S_w$ is an analytic hypersurface of $\Omega$ and $h$ is holomorphic on $\Omega\backslash S_w$. By \eqref{eq:iff}, we see that $|h|$ is constant on $\Omega\backslash S_w$. Thus $h$ has to be a constant, i.e., $m_p(\cdot,z)=cm_p(\cdot,w)$ for some complex number $c$ on $\Omega\backslash S_w$. By continuity, the same equality holds on $\Omega$, so that $z=w$ in view of Proposition \ref{prop:indep}. \end{proof} \begin{proposition}\label{prop:nonconst} For fixed $z\in \Omega$, $m_p(z,\cdot)\neq {\rm const}$. \end{proposition} \begin{proof} Note that $$ |m_p(z,w)|\le m_p(w)/m_p(z) = K_p(w)^{-\frac1p}\,m_p(z)^{-1}.
$$ Since $\Omega$ is bounded, there exists a ball $B\supset \Omega$ such that $\partial \Omega\cap \partial B\neq \emptyset$. For any $w_0\in \partial \Omega\cap \partial B$, we have $$ K_{\Omega,p}(w)\ge K_{B,p}(w)\rightarrow \infty\ \ \ (w\rightarrow w_0), $$ which implies $m_p(z,w)\rightarrow 0$ as $w\rightarrow w_0$. On the other hand, we have $m_p(z,z)=1$. Thus $m_p(z,\cdot)\neq {\rm const}$. \end{proof} \begin{problem} Is it possible to conclude that $m_p(z,\cdot)$ cannot be\/ {\it locally} constant? \end{problem} \subsection{An application to properly discontinuous groups} For a domain $\Omega\subset\subset \mathbb C^n$ we denote by ${\rm Aut}(\Omega)$ the group of holomorphic automorphisms of $\Omega$. A subgroup $G$ of ${\rm Aut}(\Omega)$ is said to be properly discontinuous if for every compact set $S\subset \Omega$ there are only a finite number of elements $g\in G$ with $S\cap g(S)\neq \emptyset$. A well-known result of H. Cartan states that $\Omega/G$ is a complex space if $G$ is properly discontinuous. Let $L(G_z)$ denote the set of limit points of $G_z:=\{g(z):g\in G\}$. Set $$ L(G):=\bigcup_{z\in \Omega} L(G_z). $$ Since $G$ is properly discontinuous, we have $L(G)\subset \partial \Omega$. \begin{proposition}\label{th:p-holo} Let $\Omega\subset \mathbb C^n$ be a bounded simply-connected domain and $G\subset {\rm Aut}(\Omega)$ a properly discontinuous group. For any $0<p<\infty$ and any $w\in L(G)$, there exists $f\in A^p(\Omega)$ such that $$ \limsup_{z\rightarrow w} |f(z)|=\infty. $$ \end{proposition} \begin{proof} By a classical result of Poincar\'e-Siegel (cf. \cite{Siegel}) we know that \begin{equation}\label{eq:Siegel} \sum_{g\in G} |J_g(z)|^2<\infty,\ \ \ \forall\,z\in \Omega. \end{equation} Take $z_0\in \Omega$ and $\{g_j\}\subset G$ such that $z_j:=g_j(z_0)\rightarrow w$.
By Proposition \ref{prop:trans_2} we have
$$
K_p(z_0)=K_p(z_j) |J_{g_j}(z_0)|^2,
$$
so that
$$
\sum_{j=1}^\infty \frac{K_p(z_0)}{K_p(z_j)} =\sum_{j=1}^\infty |J_{g_j}(z_0)|^2<\infty
$$
in view of (\ref{eq:Siegel}), which implies
$$
\lim_{j\rightarrow \infty} K_p(z_j)=\infty.
$$
Suppose on the contrary that
$$
{\sup}_j\ |f(z_j)|<\infty,\ \ \ \forall\,f\in A^p(\Omega).
$$
By the Bergman inequality, we see that the continuous linear functional
$$
F_j: f\in A^p(\Omega)\mapsto f(z_j)
$$
satisfies $ \sup_j |F_j(f)|<\infty, $ so that $\sup_j \|F_j\|<\infty$ in view of the Banach-Steinhaus theorem. Since
$$
\|F_j\|=\sup_{f\in A^p(\Omega)} \frac{|f(z_j)|}{\|f\|_p}=K_p(z_j)^{1/p},
$$
we obtain $\sup_j K_p(z_j)<\infty$, a contradiction!
\end{proof}
\begin{corollary}
For any neighborhood $U$ of $w\in L(G)$, the Hausdorff dimension of $\partial \Omega\cap U$ is no less than $2n-1$.
\end{corollary}
\begin{proof}
Suppose on the contrary that there exist $\alpha<2n-1$ and a neighborhood $U$ of $w$ such that
$$
\Lambda_\alpha(\partial\Omega\cap U)=0,
$$
where $\Lambda_\alpha$ means the $\alpha-$dimensional Hausdorff measure. It follows that $\partial\Omega\cap U$ is a polar set, so that $D:=U \backslash \partial\Omega$ is connected and $D\subset \Omega$. Since $K_{D,p}(z)\ge K_{\Omega,p}(z)$ for $z\in D$, we infer from the proof of Proposition \ref{th:p-holo} that
$$
\limsup_{z\rightarrow w} K_{D,p}(z)=\infty,\ \ \ \forall\, 0<p<\infty.
$$
On the other hand, thanks to a theorem on removable singularities due to Harvey-Polking \cite{HP}, we have $A^p(D)=A^p(U)$ for $ p=\frac{2n-\alpha}{2n-\alpha-1}, $ so that
$$
\limsup_{z\rightarrow w} K_{D,p}(z)<\infty,
$$
a contradiction!
\end{proof}
Let $w\in L(G)$. We would like to ask the following
\begin{problem}
Does there exist $f\in A^\infty(\Omega)$ which cannot be extended holomorphically across $w$?
\end{problem}
\begin{problem}
Is it possible to conclude that $\Lambda_{2n-1}(\partial \Omega\cap U)>0$ for any neighborhood $U$ of $w$?
\end{problem}
\subsection{The $p-$Bergman metric}
For $X=\sum_j X_j \partial/\partial z_j$ we define the $p-$Bergman metric to be
\begin{equation}\label{eq:p-metric}
B_p(z;X):= {K_p(z)^{-\frac1p}}\cdot {\sup}_f\ |Xf(z)|
\end{equation}
where the supremum is taken over all $f\in A^p(\Omega)$ with $f(z)=0$ and $\|f\|_p=1$. Note that $B_2(z;X)$ is the standard Bergman metric. A normal family argument shows that the ``$\sup$'' in (\ref{eq:p-metric}) can be replaced by ``$\max$''. For the sake of convenience, we set
\begin{equation}\label{eq:max_3}
\mathcal M_{p}(z;X):= \sup\left\{|Xf(z)|: f\in A^p(\Omega),f(z)=0,\|f\|_p=1\right\}
\end{equation}
and define $\mathcal M_p(\cdot,z;X)$ to be the maximizer of (\ref{eq:max_3}), i.e., $\mathcal M_{\Omega,p}(z;X)=X\mathcal M_p(\cdot,z;X)|_z$.
\begin{proposition}\label{prop:invariant}
Let $F:\Omega_1\rightarrow \Omega_2$ be a biholomorphic mapping between bounded simply-connected domains. Then
\begin{equation}\label{eq:invariant}
B_{\Omega_1,p}(z;X)=B_{\Omega_2,p}(F(z);F_\ast X).
\end{equation}
Moreover, \eqref{eq:invariant} holds for arbitrary bounded domains whenever $2/p\in \mathbb Z^+$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:trans_2} it suffices to verify
$$
\mathcal M_{\Omega_1,p}(z;X)=\mathcal M_{\Omega_2,p}(F(z);F_\ast X)|J_F(z)|^{2/p}.
$$
If $f_2$ is a test function for $\mathcal M_{\Omega_2,p}(F(z);F_\ast X)$, then $ {f}_1:=f_2 \circ F\cdot J_F^{2/p}$ is a test function for $\mathcal M_{\Omega_1,p}(z;X)$, so that
$$
{|X(f_2\circ F)(z) \cdot J_F(z)^{2/p}|} = {|X f_1(z)|} \le \mathcal M_{\Omega_1,p}(z;X).
$$
Taking the supremum over $f_2\in A^p(\Omega_2)$, we get
$$
\mathcal M_{\Omega_2,p}(F(z);F_\ast X)|J_F(z)|^{2/p}\le \mathcal M_{\Omega_1,p}(z;X).
$$
Considering $F^{-1}$ instead of $F$, we get the reverse inequality.
\end{proof}
\begin{proposition}
For the unit ball\/ $\mathbb B^n\subset \mathbb C^n$ we have
\begin{equation}\label{eq:ballmetric}
B_p(z;X) =c_{n,p} \left(\frac{|X|^2}{1-|z|^2}+\frac{|\sum_{j=1}^n \bar{z}_j X_j|^2}{(1-|z|^2)^2}\right)^{\frac12}
\end{equation}
where
$$
c_{n,p}= (\pi^n/n!)^{\frac1p}\cdot \sup\left\{\frac{|f(0)|}{\|z_1f\|_p}: f\in A^p(\mathbb B^n) \right\}.
$$
\end{proposition}
\begin{proof}
For $z\in \mathbb B^n$ we take an automorphism $F$ of $\mathbb B^n$ such that $F(z)=0$. By (\ref{eq:invariant}), we have
$$
\frac{B_p(z;X)}{B_2(z;X)}= \frac{B_p(0;F_\ast X)}{B_2(0;F_\ast X)}.
$$
It suffices to compute the ratio $B_p(0;X)/B_2(0;X)$. After a unitary transformation, we may assume $X=|X|\partial/\partial z_1$. Since every $f\in \mathcal O(\mathbb B^n)$ with $f(0)=0$ admits a decomposition $f(z)=\sum_j z_j f_j(z)$ for certain $f_j\in \mathcal O(\mathbb B^n)$, we obtain
$$
B_p(0;X)=\frac{|X|}{K_p(0)^{\frac1p}}\cdot\sup\left\{\frac{|f(0)|}{\|z_1f\|_p}: f\in A^p(\mathbb B^n) \right\}.
$$
Since $K_p(0)=n!/\pi^n$ and $B_2(0;X)=(n+1)^{\frac12}|X|$, we obtain (\ref{eq:ballmetric}).
\end{proof}
\begin{problem}
What is the product rule for $B_p$?
\end{problem}
Recall that the Carath\'eodory metric is defined by
$$
C(z;X)=C_\Omega(z;X):=\sup\left\{|Xf(z)|:f\in A^\infty(\Omega), f(z)=0, \|f\|_\infty=1\right\}.
$$
\begin{proposition}\label{prop:compare}
$ B_p(z;X)\ge C(z;X). $
\end{proposition}
\begin{proof}
Take $h\in A^p(\Omega)$ and $f\in A^\infty(\Omega)$ with $f(z)=0$ and $\|f\|_\infty=1$. Set $g=f\cdot h$. Then we have $g(z)=0$, $\|g\|_p\le \|h\|_p$ and
$$
|Xg(z)|=|Xf(z)|\cdot |h(z)|,
$$
so that
$$
\mathcal M_p(z;X) \ge \frac{|Xg(z)|}{\|g\|_p}\ge |Xf(z)|\cdot \frac{|h(z)|}{\|h\|_p}.
$$
Taking the supremum over $f$ and $h$, we immediately get the desired conclusion.
\end{proof}
\begin{proposition}\label{prop:BergCar}
\ \ \ $\lim_{p\rightarrow \infty} B_p(z;X)=C(z;X)$.
\end{proposition} \begin{proof} Take a sequence $p_j\rightarrow \infty$ such that $$ \lim_{j\rightarrow \infty} B_{p_j}(z;X) =\limsup_{p\rightarrow \infty} B_p(z;X). $$ We also choose $f_j\in A^{p_j}(\Omega)$ with $\|f_j\|_{p_j}=1$, $f_j(z)=0$ and $$ B_{p_j}(z;X)=|Xf_j(z)|/K_{p_j}(z)^{\frac1{p_j}} $$ for every $j$. Since \begin{equation}\label{eq:BC_1} |f_j(\zeta)|^{p_j} \le C_n \delta(\zeta)^{-2n}, \end{equation} it follows that $\{f_j\}$ forms a normal family, so that there is a subsequence $\{f_{j_k}\}$ converging locally uniformly to some $f_\infty\in \mathcal O(\Omega)$ which satisfies $f_\infty (z)=0$ and for any $\zeta\in \Omega$, $$ |f_\infty(\zeta)|=\lim_{k\rightarrow \infty} |f_{j_k}(\zeta)|\le 1 $$ in view of \eqref{eq:BC_1}. Since $\lim_{p\rightarrow \infty} K_p(z)^{\frac1p}=1$ in view of \eqref{eq:BergIneq_3}, we have \begin{eqnarray*} C(z;X) & \ge & {|Xf_\infty(z)|} =\lim_{k\rightarrow \infty} {|Xf_{j_k}(z)|} \cdot \lim_{k\rightarrow \infty} K_{p_{j_k}}(z)^{-\frac1{p_{j_k}}}\\ & = & \lim_{k\rightarrow \infty} B_{p_{j_k}}(z;X) =\limsup_{p\rightarrow \infty} B_p(z;X). \end{eqnarray*} This combined with Proposition \ref{prop:compare} yields the conclusion. \end{proof} \begin{remark} We may define the $(p,q)-$Bergman metric by $$ B_{p,q}(z;X):={K_q(z)^{-\frac1p}}\cdot {\sup}_f\, |Xf(z)| $$ where the supremum is taken over all $f\in A^p(\Omega)$ with $f(z)=0$ and $\|f\|_p=1$. Analogously, we may verify that $$ B_{\Omega_1,p,q}(z;X)=B_{\Omega_2,p,q}(F(z);F_\ast X) $$ for any biholomorphic mapping $F:\Omega_1\rightarrow \Omega_2$ between bounded simply-connected domains. \end{remark} \section{Zeroes of $K_2(z,w)$ and non real-analyticity of $K_p(z)$} \begin{proposition}\label{prop:Holder_5} If\/ $\frac1p+\frac1q=\frac1r$, then \begin{eqnarray} m_r(z) & \le & m_p(z)\cdot m_q(z) \label{eq:Holder_5}\\ K_r(z)^{\frac1r} & \ge & K_p(z)^{\frac1p} \cdot K_q(z)^{\frac1q}. 
\label{eq:Holder_6}
\end{eqnarray}
\end{proposition}
\begin{proof}
It suffices to verify \eqref{eq:Holder_5}. Take two holomorphic functions $f_p$ and $f_q$ on $\Omega$ with $f_p(z)=f_q(z)=1$ and
$$
\|f_p\|_p=m_p(z),\ \ \ \|f_q\|_q=m_q(z).
$$
Set $f_r:=f_p f_q$. Then $f_r$ is a holomorphic function on $\Omega$ satisfying $f_r(z)=1$, and H\"older's inequality gives
$$
\|f_r\|_r\le \|f_p\|_p\cdot \|f_q\|_q=m_p(z)\cdot m_q(z).
$$
By definition of $m_r(z)$ we immediately get \eqref{eq:Holder_5}.
\end{proof}
\begin{proposition}\label{prop:NRA_1}
Let $p\ge 1$ and $k\in \mathbb Z^+$. We have $K_p(z)=K_{pk}(z)$ if and only if $m_p(\cdot,z)=m_{pk}(\cdot,z)^k$.
\end{proposition}
\begin{proof}
Suppose $K_p(z)=K_{pk}(z)$. Since $f_k:=m_{pk}(\cdot,z)^k$ is a holomorphic function satisfying $f_k(z)=1$ and
$$
\int_\Omega |f_k|^p =\int_\Omega |m_{pk}(\cdot,z)|^{pk}=m_{pk}(z)^{pk}=m_p(z)^p,
$$
it follows from uniqueness of the minimizer that $f_k=m_p(\cdot,z)$. The other direction follows from
$$
m_p(z)^p = \int_\Omega |m_p(\cdot,z)|^p=\int_\Omega |m_{pk}(\cdot,z)|^{pk}=m_{pk}(z)^{pk}.
$$
\end{proof}
\begin{proposition}\label{prop:NRA_2}
Suppose that $\Omega$ is a bounded simply-connected domain in $\mathbb C^n$ and $m_p(\cdot,z)$ is zero-free for some $p\ge 1$ and $z\in \Omega$. Then
\begin{enumerate}
\item \ \ \ $K_s(z) = K_p(z)$\/ for any $s\ge p$.
\item \ \ \ $m_s(\cdot,z) = m_p(\cdot,z)^{p/s}$\/ for any $s\ge p$.
\end{enumerate}
\end{proposition}
\begin{proof}
(1) By the hypothesis we may define $f_{p,s}:=m_p(\cdot,z)^{\frac{p}s}\in \mathcal O(\Omega)$ with $f_{p,s}(z)=1$. Since
$$
\int_\Omega |f_{p,s}|^s = \int_\Omega |m_p(\cdot,z)|^p=m_p(z)^p,
$$
we have
\begin{equation}\label{eq:NRA_1}
K_s(z) \ge \frac{|f_{p,s}(z)|^s}{\|f_{p,s}\|_s^s} = \frac1{m_p(z)^p}=K_p(z).
\end{equation} On the other hand, Proposition \ref{prop:Holder_5} yields that if $\frac1s+\frac1t=\frac1p$ then \begin{eqnarray*} K_p(z)^{\frac1p} & \ge & K_s(z)^{\frac1s}\cdot K_t(z)^{\frac1t}\\ & \ge & K_s(z)^{\frac1s}\cdot K_p(z)^{\frac1t} \end{eqnarray*} in view of \eqref{eq:NRA_1} since $t\ge p$. Thus we get $K_p(z)\ge K_s(z)$. (2) Note that $f_{p,s}\in \mathcal O(\Omega)$ satisfies $f_{p,s}(z)=1$ and $$ \int_\Omega |f_{p,s}|^s=m_p(z)^p=m_s(z)^s. $$ Uniqueness of the minimizer gives $f_{p,s}=m_s(\cdot,z)$. \end{proof} An immediate consequence is \begin{corollary} Let $\Omega$ be a bounded simply-connected domain. If $K_p(z)\neq K_2(z)$ for some $p>2$ and $z\in \Omega$, then $K_2(\cdot,z)$ has zeroes. \end{corollary} For a bounded domain $\Omega\subset \mathbb C^n$ we set $$ \mathcal F(\Omega):=\left\{z\in \Omega: K_2(\cdot,z) \ \text{is\ zero-free} \right\}\ \ \ \text{and}\ \ \ \mathcal N(\Omega):=\Omega\backslash \mathcal F(\Omega). $$ Since $K_2(\zeta,z)=\overline{K_2(z,\zeta)}$, we conclude that if $K_2(\zeta,z)=0$ then $\zeta,z\in \mathcal N(\Omega)$. \begin{proposition} $\mathcal F(\Omega)$ is a closed subset and $\mathcal N(\Omega)$ is an open subset. \end{proposition} \begin{proof} Suppose on the contrary that $\mathcal N(\Omega)$ is not open, i.e., there exist $z_0\in \mathcal N(\Omega)$ and a sequence of points $\{z_j\}\subset \mathcal F(\Omega)$ such that $z_j\rightarrow z_0$ as $j\rightarrow \infty$. Since $K_2(\cdot,z_j)\rightarrow K_2(\cdot,z_0)$, it follows from Hurwitz's theorem that either $z_0\in \mathcal F(\Omega)$ or $K_2(\cdot,z_0)\equiv 0$. The latter can not happen since $K_2(z_0,z_0)>0$. Thus we get a contradiction. \end{proof} For a set $E$ we denote by $E^\circ$ the set of inner points of $E$. Then we have \begin{theorem}\label{th:NRA_2} Let $\Omega$ be a bounded simply-connected domain in $\mathbb C^n$ such that both $\mathcal N(\Omega)$ and $\mathcal F(\Omega)^\circ$ are nonempty. 
Then the following properties hold: \begin{enumerate} \item There exists $k_0\in \mathbb Z^+$ such that $K_{2k}(z)$ is not real-analytic in $\Omega$ for any integer $k\ge k_0$. \item Suppose furthermore that there exists $\zeta_0,z_0\in \mathcal N(\Omega)$ with ${\rm ord}_{z_0}K_2(\zeta_0,\cdot)=1$. Then $K_{2k}(z)$ is not real-analytic in $\Omega$ for any integer $k\ge 2$. Moreover, either ${\rm Re}\,m_{p}(\zeta_0,\cdot)$ or ${\rm Im}\,m_{p}(\zeta_0,\cdot)$ cannot be real-analytic in $\Omega$ for any rational $p>2$. \end{enumerate} \end{theorem} \begin{proof} (1) By Proposition \ref{prop:NRA_2}, we have $K_{2k}(z)=K_2(z)$ for any $z\in \mathcal F(\Omega)$. Suppose on the contrary that $K_{2k}(z)$ is real-analytic in $\Omega$. Then $K_{2k}(z)=K_2(z)$ for any $z\in \Omega$ by the uniqueness theorem for real-analytic functions since $\mathcal F(\Omega)^\circ\neq \emptyset$. It then follows from Proposition \ref{prop:NRA_1} that \begin{equation}\label{eq:NRA_4} m_2(\cdot,z)=m_{2k}(\cdot,z)^k,\ \ \ \forall\,z\in \Omega. \end{equation} Take $\zeta_0,z_0\in \mathcal N(\Omega)$ so that $K_2(\zeta_0,z_0)=0$. Then $m_2(\zeta_0,z_0)=0$, which implies $m_{2k}(\zeta_0,z_0)=0$. Set $k_0':={\rm ord}_{z_0}K_2(\zeta_0,\cdot)={\rm ord}_{z_0}m_2(\zeta_0,\cdot)$. Since $\overline{K_2(\zeta_0,\cdot)}$ is holomorphic and not identically zero, we conclude that $k_0'<\infty$. Since $m_{2k}(\zeta_0,\cdot)$ is locally $\frac12-$H\"older continuous in view of Theorem \ref{th:reg_1} in the next section, we see that for $k\ge k_0:=2k_0'+1$, $$ {\rm ord}_{z_0}m_2(\zeta_0,\cdot)<{\rm ord}_{z_0}m_{2k}(\zeta_0,\cdot)^k, $$ which is contradictory to \eqref{eq:NRA_4}. (2) For any $k > 2$, we have $$ {\rm ord}_{z_0}m_2(\zeta_0,\cdot)=1< {\rm ord}_{z_0}m_{2k}(\zeta_0,\cdot)^k, $$ so that the first assertion follows analogously as (1). For the second assertion we write $p=k/l$ for $k,l\in \mathbb Z^+$. 
By Proposition \ref{prop:NRA_2}, we have $K_{p}(z)=K_2(z)$ and $m_p(\zeta_0,z)^k=m_2(\zeta_0,z)^{2l}$ for any $z\in \mathcal F(\Omega)$. Suppose on the contrary that both ${\rm Re}\,m_{p}(\zeta_0,\cdot)$ and ${\rm Im}\,m_{p}(\zeta_0,\cdot)$ are real-analytic in $\Omega$. Since $\mathcal F(\Omega)^\circ\neq \emptyset$, it follows from the uniqueness theorem for real-analytic functions that
$$
m_p(\zeta_0,z)^k=m_2(\zeta_0,z)^{2l},\ \ \ \forall\,z\in \Omega.
$$
On the other hand, we have
$$
{\rm ord}_{z_0}m_2(\zeta_0,\cdot)^{2l} = 2l <k \le {\rm ord}_{z_0}m_{p}(\zeta_0,\cdot)^k
$$
since $m_{p}(\zeta_0,\cdot)$ is real-analytic, which is a contradiction.
\end{proof}
\begin{remark}
Actually we have proved the stronger conclusion that $K_{2k}(z)-K_2(z)$ does not enjoy the unique continuation property, i.e., it vanishes identically if it vanishes on a nonempty open subset. There are plenty of non real-analytic functions which still verify the unique continuation property.
\end{remark}
\begin{proposition}\label{prop:Reinhardt}
Let $\Omega$ be a bounded complete Reinhardt domain in $\mathbb C^n$. Then there exists $\varepsilon>0$ such that
$$
B_\varepsilon (0):=\left\{z\in \mathbb C^n: |z|<\varepsilon \right\}\subset \mathcal F(\Omega).
$$
\end{proposition}
\begin{proof}
Note that
$$
K_2(z,w) =\sum_\alpha a_\alpha z^\alpha \bar{w}^\alpha,\ \ \ a_\alpha=1/\int_\Omega |z^\alpha|^{2}.
$$
Take $r\ll 1$ so that $B_r(0)\subset \Omega$. The series expansion above implies that
$$
K_2(z,w) = K_2(rz,w/r), \ \ \ \forall\,z\in \Omega,\ w\in B_{r^2}(0)
$$
(this observation is essentially due to S. R. Bell). Thus for $r\ll 1$ there exists a constant $C_r>0$ such that
$$
|K_2(z,w)|\le C_r,\ \ \ \forall\,z\in \Omega,\ w\in B_{r^2}(0),
$$
and Cauchy's estimates give
$$
|K_2(z,0)-K_2(z,w)|\le C_r |w|, \ \ \ \forall\,z\in \Omega,\ w\in B_{r^2/2}(0).
$$
Since $K_2(z,0)=1/|\Omega|$, it follows that if $\varepsilon$ is sufficiently small then $K_2(z,w)\neq 0$ for all $z\in \Omega$ and $w\in B_\varepsilon(0)$, i.e., $ B_\varepsilon (0)\subset \mathcal F(\Omega). $
\end{proof}
\begin{remark}
Since complete Reinhardt domains are always simply-connected, we see that Theorem \ref{th:NRA_0} follows from Theorem \ref{th:NRA_2} and Proposition \ref{prop:Reinhardt}.
\end{remark}
Finally we will show that the following Thullen-type domain
$$
\Omega=\left\{(z_1,z_2): |z_1|+|z_2|^{2/\alpha}<1 \right\}
$$
verifies the hypothesis of Theorem \ref{th:NRA_2}/(2) for every $\alpha>2$. It is known from (9) in \cite{BFS} that
\begin{eqnarray*}
K_2((z_1^2,0),(w_1^2,0)) & = & \frac1{4\alpha \pi^2 x}\cdot \frac{\partial^2}{\partial x^2}\left[\frac1{(1-x)^\alpha}-\frac1{(1+x)^\alpha}\right]\\
& = & \frac{\alpha(\alpha+1)}{4\alpha \pi^2 x}\cdot \left[\frac1{(1-x)^{\alpha+2}}-\frac1{(1+x)^{\alpha+2}}\right]
\end{eqnarray*}
when $x:=z_1 \bar{w}_1\neq 0$. It is easy to see that if $\alpha>2$ then the equation
$$
\frac1{(1-x)^{\alpha+2}}=\frac1{(1+x)^{\alpha+2}}
$$
has a solution with $0<|x|<1$, namely the solution of
$$
\frac1{1-x}=\frac{e^{2\pi i/(\alpha+2)}}{1+x},\ \ \ \text{i.e.,}\ \ \ x=\frac{e^{2\pi i/(\alpha+2)}-1}{e^{2\pi i/(\alpha+2)}+1}=:x_\alpha.
$$
Note that the function $\eta(x):=(1-x)^{-\alpha-2}-(1+x)^{-\alpha-2}$ satisfies
\begin{eqnarray*}
\eta'(x_\alpha) & = & (\alpha+2)\left[\frac1{(1-x_\alpha)^{\alpha+3}}+\frac1{(1+x_\alpha)^{\alpha+3}}\right]\\
& = & \frac{\alpha+2}{(1+x_\alpha)^{\alpha+3}}\left[e^{2\pi i\cdot \frac{\alpha+3}{\alpha+2}}+1\right]\\
& \neq & 0
\end{eqnarray*}
for $\alpha>2$. Thus for $z_{1,\alpha}=\bar{w}_{1,\alpha}=\sqrt{x_\alpha}$ the order of $K_2((x_\alpha,0),\cdot)$ at the point $(\bar{x}_\alpha,0)$ equals $1$.
\section{H\"older continuity of $m_p(z,\cdot)$ and $K_p(z,\cdot)$}
Throughout this section we always assume that $p\ge 1$.
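To fix ideas before the general estimates, we record a closed-form sanity check. The following remark is an illustration added here, not part of the original argument; it uses only the classical formula for the Bergman kernel of the unit disc and the relation $K_2(z,w)=m_2(z,w)K_2(w)$.

```latex
% Illustrative only: an explicit case where the regularity established below
% can be seen by hand (added remark; the disc kernel formula is classical).
\begin{remark}
Let $\Omega=\Delta$ be the unit disc, so that $K_2(z,w)=\pi^{-1}(1-z\bar{w})^{-2}$ and
$$
m_2(z,w)=\frac{K_2(z,w)}{K_2(w)}=\frac{(1-|w|^2)^2}{(1-z\bar{w})^2}.
$$
In particular $m_2(w,w)=1$, and for $z,w,w'$ in a compact set $S\subset \Delta$ one has
$|m_2(z,w)-m_2(z,w')|\le C_S\,|w-w'|$, since the right-hand side is real-analytic in $w$
with derivatives bounded uniformly for $z,w\in S$.
\end{remark}
```

Thus for $p=2$ the $\frac12-$H\"older exponent of Theorem \ref{th:reg_1} below is not optimal; the point of the estimates of this section is that they require no explicit formula for $m_p(\cdot,w)$, which is unavailable when $p\neq 2$.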
Let us introduce a useful auxiliary function as follows $$ H_p(z,w):=K_p(z) + K_p(w) -{\rm Re}\left\{K_p(z,w)+K_p(w,z)\right\}. $$ Clearly, $ H_p(z,w)=H_p(w,z)$ {and} $H_p(z,z)=0$. Moreover, we have \begin{proposition}\label{prop:H-Lip} For every compact set $S\subset \Omega$, there exists a constant $C>0$ such that $$ |H_p(z,w)|\le C|z-w|,\ \ \ \forall\,z,w\in S. $$ \end{proposition} \begin{proof} It suffices to verify $$ |K_p(z,w)-K_p(w)|\le C|z-w|,\ \ \ \forall\,z,w\in S. $$ This follows from the fact that $K_p(\cdot,w)$ is holomorphic and uniformly bounded in a small neighborhood of $S$ in view of \eqref{eq:Holder_3}. \end{proof} A less obvious observation is \begin{theorem}\label{th:H-positive} We have $H_p(z,w)\ge 0$ and equality holds if and only if $z=w$. \end{theorem} For the proof of Theorem \ref{th:H-positive} and the regularity results in the sequel, the following "elementary" inequalities play a key role. \begin{proposition}\label{prop:eleIneqs} Let $a,b\in \mathbb C$. The following inequalities hold. 
\begin{enumerate}
\item For $p\ge 2$ we have
\begin{eqnarray}\label{eq:eleIneq_1}
&& {\rm Re}\left\{(|b|^{p-2}\bar{b}-|a|^{p-2}\bar{a})(b-a)\right\}\\
& \ge & \frac12 (|b|^{p-2}+|a|^{p-2})|b-a|^2\nonumber\\
& \ge & 2^{1-p} |b-a|^p;\nonumber
\end{eqnarray}
\item For $1\le p\le 2$ we have
\begin{eqnarray}\label{eq:eleIneq_2}
&& {\rm Re}\left\{(|b|^{p-2}\bar{b}-|a|^{p-2}\bar{a})(b-a)\right\} \\
& \ge & (p-1) |b-a|^2(|a|+|b|)^{p-2}\nonumber \\
&& + (2-p) |{\rm Im}(a\bar{b})|^2(|a|+|b|)^{p-4};\nonumber
\end{eqnarray}
\item For $p>2$ we have
\begin{eqnarray}\label{eq:eleIneq_3}
|b|^p & \ge & |a|^p+p{\rm Re}\left\{|a|^{p-2}\bar{a}(b-a)\right\} +\frac1{4^{p+3}}|b-a|^p;
\end{eqnarray}
\item For $1<p\le 2$ we have
\begin{eqnarray}\label{eq:eleIneq_4}
|b|^p & \ge & |a|^p+p{\rm Re}\left\{|a|^{p-2}\bar{a}(b-a)\right\} +A_p |b-a|^2(|a|+|b|)^{p-2}
\end{eqnarray}
where $A_p=\frac{p}2\min\{1,p-1\}$;
\item For $p=1$ we have
\begin{eqnarray}\label{eq:eleIneq_5}
|b| & \ge & |a|+{\rm Re}\left\{|a|^{-1}\bar{a}(b-a)\right\} +A_1 |{\rm Im}(\bar{a}b)|^2(|a|+|b|)^{-3}
\end{eqnarray}
where $A_1>0$ is some numerical constant.
\end{enumerate}
These inequalities have their origins in nonlinear analysis of the $p-$Laplacian (see, e.g., \cite{Lind1}). We will provide the proofs in the Appendix.
\begin{lemma}\label{lm:H-positive_1}
For $p\ge 2$ we have
\begin{equation}\label{eq:H-positive_1}
|m_p(z,w)-m_p(z,w')|^p \le \frac{2^{p-1} K_p(z)}{K_p(w)K_p(w')}\cdot H_p(w,w').
\end{equation}
\end{lemma}
\begin{proof}
Substituting $a=m_p(\cdot,w)$ and $b=m_p(\cdot,w')$ into (\ref{eq:eleIneq_1}) and then integrating over $\Omega$, we obtain
\begin{eqnarray*}
&& 2^{1-p} \int_\Omega |m_p(\cdot,w')-m_p(\cdot,w)|^p\\
& \le & \int_\Omega | m_p(\cdot,w') |^p -{\rm Re} \int_\Omega |m_p(\cdot,w')|^{p-2}\overline{m_p(\cdot,w')}\,m_p(\cdot,w)\\
&& +\int_\Omega | m_p(\cdot,w) |^p-{\rm Re} \int_\Omega |m_p(\cdot,w)|^{p-2}\overline{m_p(\cdot,w)}\,m_p(\cdot,w')\\
& = & m_p(w')^p+m_p(w)^p-{\rm Re}\left\{m_p(w')^p m_p(w',w)+m_p(w)^p m_p(w,w')\right\}\\
& = & \frac{H_p(w',w)}{K_p(w)K_p(w')},
\end{eqnarray*}
where the first equality follows from the reproducing formula. Since
\begin{equation}\label{eq:meanvalue}
|m_p(z,w')-m_p(z,w)|^p\le K_p(z)\int_\Omega |m_p(\cdot,w')-m_p(\cdot,w)|^p,
\end{equation}
we immediately get (\ref{eq:H-positive_1}).
\end{proof}
\begin{lemma}\label{lm:H-positive_2}
For $1<p\le 2$ we have
\begin{equation}\label{eq:H-positive_2}
|m_p(z,w)-m_p(z,w')|^p \le \frac{C_p K_p(z)}{K_p(w')K_p(w)}\left[K_p(w')+K_p(w)\right]^{1-\frac{p}2} H_p(w,w')^{{\frac{p}2}}
\end{equation}
where $C_p=2^{\frac{(p-1)(2-p)}2}/(p-1)^{\frac{p}2}$.
\end{lemma}
\begin{proof}
Let $f_1,f_2\in A^p(\Omega)$. H\"older's inequality yields
\begin{eqnarray*}
\int_\Omega |f_2-f_1|^p & = & \int_\Omega |f_2-f_1|^p (|f_2|+|f_1|)^{p(p-2)/2}(|f_2|+|f_1|)^{p(2-p)/2}\\
& \le & \left\{\int_\Omega |f_2-f_1|^2 (|f_2|+|f_1|)^{p-2} \right\}^{\frac{p}2} \left\{\int_\Omega (|f_2|+|f_1|)^{p} \right\}^{1-\frac{p}2}\\
& \le & \left\{\frac1{p-1}\int_\Omega {\rm Re}\left[ (|f_2|^{p-2}\bar{f}_2-|f_1|^{p-2}\bar{f}_1)(f_2-f_1)\right] \right\}^{\frac{p}2} \left\{\int_\Omega (|f_2|+|f_1|)^{p} \right\}^{1-\frac{p}2}
\end{eqnarray*}
in view of (\ref{eq:eleIneq_2}).
Taking $f_2=m_p(\cdot,w')$ and $f_1=m_p(\cdot,w)$, we obtain
\begin{eqnarray*}
\int_\Omega |m_p(\cdot,w')-m_p(\cdot,w)|^p & \le & \left\{\frac1{p-1} \frac{H_p(w',w)}{K_p(w')K_p(w)}\right\}^{\frac{p}2} \cdot \left\{ 2^{p-1} \left[ m_p(w')^p+m_p(w)^p\right] \right\}^{1-\frac{p}2}\\
& = & \frac{C_p}{K_p(w')K_p(w)} \left[ K_p(w')+K_p(w)\right]^{1-\frac{p}2} H_p(w',w)^{\frac{p}2}.
\end{eqnarray*}
This combined with (\ref{eq:meanvalue}) gives (\ref{eq:H-positive_2}).
\end{proof}
\begin{lemma}\label{lm:H-positive_3}
We have
\begin{equation}\label{eq:H-positive_3}
\int_\Omega \frac{|{\rm Im} \{ m_1(\cdot,w')\overline{m_1(\cdot,w)} \}|^2}{(|m_1(\cdot,w')|+|m_1(\cdot,w)|)^3}\le \frac{H_1(w,w')}{K_1(w')K_1(w)}.
\end{equation}
\end{lemma}
\begin{proof}
It suffices to take $b=m_1(\cdot,w)$, $a=m_1(\cdot,w')$ in (\ref{eq:eleIneq_2}) with $p=1$ and then integrate over $\Omega$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:H-positive}]
Lemmas \ref{lm:H-positive_1}--\ref{lm:H-positive_3} give $H_p(z,w)\ge 0$. Now suppose $H_p(z,w)=0$. It follows from Lemma \ref{lm:H-positive_1} and Lemma \ref{lm:H-positive_2} that $m_p(\cdot,z)=m_p(\cdot,w)$ whenever $p>1$, so that $z=w$ in view of Proposition \ref{prop:indep}. It remains to deal with the case $p=1$. Consider the following proper analytic subset in $\Omega$
$$
S_{z,w}:=\{m_1(\cdot,z)=0\}\cup \{m_1(\cdot,w)=0\}.
$$
Let $V\subset\subset U\subset\subset \Omega\backslash S_{z,w}$ be two open sets. We take
$$
h:=m_1(\cdot,z)/m_1(\cdot,w)\in \mathcal O(U).
$$
Since $|{\rm Im\,}h|^2$ is subharmonic on $U$, it follows from the mean-value inequality and Lemma \ref{lm:H-positive_3} that for every $\zeta\in V$
\begin{eqnarray*}
|{\rm Im\,}h(\zeta)|^2 \lesssim \int_U |{\rm Im\,}h|^2 & = & \int_U |{\rm Im} \{ m_1(\cdot,z)\overline{m_1(\cdot,w)} \}|^2 |m_1(\cdot,w)|^{-4}\\
& \lesssim & \int_U \frac{|{\rm Im} \{ m_1(\cdot,z)\overline{m_1(\cdot,w)} \}|^2}{(|m_1(\cdot,z)|+|m_1(\cdot,w)|)^3}\\
& \lesssim & \frac{H_1(z,w)}{K_1(z)K_1(w)}=0.
\end{eqnarray*}
Since $V$ and $U$ can be arbitrarily chosen, we conclude that ${\rm Im\,}h= 0$ on the domain $\Omega\backslash S_{z,w}$, so that $h={\rm const.}$, i.e., $m_1(\cdot,z)=cm_1(\cdot,w)$ holds on $\Omega\backslash S_{z,w}$ for some complex number $c$. By continuity, the same equality remains valid on $\Omega$, so that $z=w$ in view of Proposition \ref{prop:indep}.
\end{proof}
\begin{theorem}\label{th:reg_1}
For any $p>1$ and any compact set $S\subset \Omega$, there exists a constant $C>0$ such that
\begin{equation}\label{eq:reg_1}
|m_p(z,w)-m_p(z,w')|\le C|w-w'|^{\frac12}, \ \ \ \forall\,z,w,w'\in S.
\end{equation}
The same conclusion also holds for $K_p$.
\end{theorem}
\begin{proof}
It suffices to verify the conclusion for $m_p(z,w)$ since $K_p(z,w)=m_p(z,w)K_p(w)$ and $K_p(z)$ is locally Lipschitz continuous. Lemma \ref{lm:H-positive_2} together with Proposition \ref{prop:H-Lip} immediately yields \eqref{eq:reg_1} for $1<p\le 2$. Analogously, Lemma \ref{lm:H-positive_1} combined with Proposition \ref{prop:H-Lip} gives
$$
|m_p(z,w)-m_p(z,w')| \lesssim |w-w'|^{\frac1p}
$$
for $p> 2$, which is weaker than \eqref{eq:reg_1}, however. We have to adopt another approach. Substituting $a=m_p(\cdot,w)$ and $b=m_p(\cdot,w')$ into (\ref{eq:eleIneq_1}) and then integrating over $\Omega$, we obtain
$$
\int_\Omega (|m_p(\cdot,w')|^{p-2}+|m_p(\cdot,w)|^{p-2}) |m_p(\cdot,w')-m_p(\cdot,w)|^2 \le \frac{2H_p(w,w')}{K_p(w')K_p(w)} \lesssim |w-w'|,
$$
which implies
$$
\int_\Omega |m_p(\cdot,w)|^{p-2} |m_p(\cdot,w')-m_p(\cdot,w)|^2 \lesssim |w-w'|.
$$
Now fix an open set $U$ with $ S\subset U\subset\subset \Omega.
$
Since $\{m_p(\cdot,w):w\in S\}$ is a continuous family of holomorphic functions on $\Omega$ (in view of Proposition \ref{prop:continuity}), it follows from the celebrated theorem of Demailly-Koll\'ar \cite{DK} on semi-continuity of the complex singularity exponent that there exist positive constants $c=c(S)$ and $M=M(S)$ such that
$$
\int_U |m_p(\cdot,w)|^{-c}\le M,\ \ \ \forall\,w\in S.
$$
Fix $ \alpha:= \frac{2c}{p-2+c}<2. $ By H\"older's inequality, we have
\begin{eqnarray*}
\int_U |m_p(\cdot,w')-m_p(\cdot,w)|^\alpha & \le & \left\{\int_U |m_p(\cdot,w)|^{p-2} |m_p(\cdot,w')-m_p(\cdot,w)|^2 \right\}^{\alpha/2}\\
&& \cdot \left\{\int_U |m_p(\cdot,w)|^{-\frac{p-2}2\alpha\cdot\frac2{2-\alpha}}\right\}^{1-\alpha/2}\\
& \le & \left\{\int_\Omega |m_p(\cdot,w)|^{p-2} |m_p(\cdot,w')-m_p(\cdot,w)|^2 \right\}^{\alpha/2}\\
&& \cdot \left\{\int_U |m_p(\cdot,w)|^{-c}\right\}^{1-\alpha/2}\\
& \lesssim & |w-w'|^{\alpha/2}.
\end{eqnarray*}
This combined with the mean-value inequality gives (\ref{eq:reg_1}).
\end{proof}
\begin{theorem}\label{th:reg_2}
Let $S_w:=\{m_1(\cdot,w)=0\}$. For every open set $U$ with $w\in U\subset\subset \Omega\backslash S_w$, there exists a constant $C>0$ such that
\begin{equation}\label{eq:reg_3}
|m_1(z,w)-m_1(z,w')|\le C|w-w'|^{\frac12},\ \ \ \forall\,z,w'\in U.
\end{equation}
The same conclusion also holds for $K_1$.
\end{theorem}
\begin{proof}
First of all, (\ref{eq:H-positive_3}) implies
$$
\int_\Omega \frac{|{\rm Im} \{ m_1(\cdot,w')\overline{m_1(\cdot,w)} \}|^2}{(|m_1(\cdot,w')|+|m_1(\cdot,w)|)^3}\lesssim {H_1(w',w)}\lesssim |w-w'|.
$$
Take open sets $U',U''$ with $U\subset\subset U'\subset\subset U''\subset\subset \Omega\backslash S_w$.
It follows from the mean-value inequality that
\begin{eqnarray*}
\sup_{U'} \left|{\rm Im}\left\{\frac{m_1(\cdot,w')}{m_1(\cdot,w)}\right\}\right|^2 &\lesssim& \int_{U''} \left|{\rm Im}\left\{\frac{m_1(\cdot,w')}{m_1(\cdot,w)}\right\}\right|^2\\
&\lesssim & \int_\Omega \frac{|{\rm Im} \{ m_1(\cdot,w')\overline{m_1(\cdot,w)} \}|^2}{(|m_1(\cdot,w')|+|m_1(\cdot,w)|)^3}\\
& \lesssim & |w-w'|.
\end{eqnarray*}
Set $h:= \frac{m_1(\cdot,w')}{m_1(\cdot,w)}-1$. Then we have
$$
\sup_{U'} |{\rm Im\,}h|\lesssim |w-w'|^{\frac12}.
$$
To proceed with the proof, we need the following celebrated Borel-Carath\'eodory inequality
\begin{equation}\label{eq:BC}
\sup_{\Delta_r} |f|\le \frac{2r}{R-r}\, \sup_{\Delta_R} {\rm Im\,}f +\frac{R+r}{R-r}\, |f(0)|
\end{equation}
where $\Delta_r=\{z\in \mathbb C: |z|<r\}$ and $f\in \mathcal O(\Delta_R)$ for some $R>r$. Take a ball $B_R(w)\subset\subset U'$. Applying (\ref{eq:BC}) on every complex line through $w$, we obtain
$$
\sup_{B_{\frac{R}2}(w)} |h| \le 2\sup_{B_{{R}}(w)} {\rm Im\,}h +3 |h(w)|.
$$
Note that
$$
|h(w)|=|m_1(w,w') -1|=|m_1(w,w')-m_1(w',w')|\lesssim |w-w'|.
$$
Thus we obtain
$$
\sup_{B_{\frac{R}2}(w)} |h|\lesssim |w-w'|^{\frac12}.
$$
Taking a chain of balls connecting $w$ and $z$ and applying \eqref{eq:BC} analogously on each ball, we obtain
$$
|h(z)|\lesssim |w-w'|^{\frac12},
$$
from which (\ref{eq:reg_3}) immediately follows.
\end{proof}
\begin{problem}
Let $z\in \Omega$ be fixed. Are $m_1(z,\cdot)$ and $K_1(z,\cdot)$ locally $\frac12-$H\"older continuous on $\Omega$?
\end{problem}
\begin{problem}
What about the metric structure or analytic structure of the level set $\{m_p(z,\cdot)=c\}$ where $c\in \mathbb C$?
\end{problem} \section{ $B_p(z;X)$ and $i\partial\bar{\partial} \log K_p(z;X)$} For a real-valued upper semicontinuous function $u$ defined on a domain $\Omega\subset \mathbb C^n$, we define the generalized Levi form of $u$ by $$ i\partial\bar{\partial} u(z;X):=\liminf_{r\rightarrow 0+}\frac1{r^2}\left\{\frac1{2\pi}\int_0^{2\pi}u(z+re^{i\theta}X)d\theta-u(z)\right\} $$ where we identify $X=\sum_j X_j\partial/\partial z_j$ with $(X_1,\cdots,X_n)$ for the sake of simplicity. Note that if $u$ is $C^2$ then $i\partial\bar{\partial} u(z;X)$ is the standard Levi form of $u$. \begin{theorem}\label{th:Levi_1} For every $p\le 2$ we have \begin{equation}\label{eq:Levi_1} i\partial\bar{\partial} \log K_p(z;X) \ge \frac{p}2 \, C(z;X)^2. \end{equation} \end{theorem} We need the following simple fact. \begin{lemma}\label{lm:var_1} For every $p>0$ we have \begin{equation}\label{eq:Var_1} \int_\Omega |m_p(\cdot,z)|^{p}f = 0 \end{equation} for all $f\in A^\infty(\Omega)$ with $f(z)=0$ and $\|f\|_\infty=1$. \end{lemma} \begin{proof} Given $f\in A^\infty(\Omega)$ with $f(z)=0$ and $\|f\|_\infty=1$, we define $$ f_t={m_p(\cdot,z)} (1+tf),\ \ \ t\in \mathbb C. $$ Clearly, $f_t$ belongs to $A^p(\Omega)$ and satisfies $f_t(z)=1$. We then have \begin{eqnarray*} m_p(z)^p & \le & \int_\Omega |f_t|^p = \int_\Omega |m_p(\cdot,z)|^p\left(1+2{\rm Re}\{tf\}+|t|^2|f|^2\right)^{\frac{p}2}\\ & = & \int_\Omega |m_p(\cdot,z)|^p\left(1+p{\rm Re}\{t f\}+O(|t|^2)\right)\\ & = & m_p(z)^p+p{\rm Re}\left\{ t \int_\Omega |m_p(\cdot,z)|^{p}f \right\}+O(|t|^2) \end{eqnarray*} as $t\rightarrow 0$. Take $t=se^{-i\arg \int_\Omega |m_p(\cdot,z)|^{p}f}$ with $s\in \mathbb R$, we immediately get (\ref{eq:Var_1}). \end{proof} {\begin{proof}[Proof of Theorem \ref{th:Levi_1}]Fix $r>0$ and $\theta\in \mathbb R$ for a moment. For $t\in \mathbb C$ with $|t|=O(r)$ and $f\in A^\infty(\Omega)$ with $f(z)=0$ and $\|f\|_\infty=1$ we define $ f_t={m_p(\cdot,z)} (1+tf) $ as above. 
We have \begin{eqnarray*} |f_t(z+re^{i\theta}X)|^p & = & |m_p(z+re^{i\theta}X,z)|^p |1+tf(z+re^{i\theta}X)|^p\\ & = & |m_p(z+re^{i\theta}X,z)|^p |1+t re^{i\theta} Xf(z)+O(r^3)|^p\\ & = & |m_p(z+re^{i\theta}X,z)|^p \left(1+p{\rm Re}\left\{t re^{i\theta} Xf(z)\right\}+O(r^3)\right) \end{eqnarray*} and \begin{eqnarray*} \|f_t\|_p^p & = & \int_\Omega |m_p(\cdot,z)|^p \left(1+2{\rm Re}\{tf\}+|tf|^2\right)^{\frac{p}2}\\ & = & \int_\Omega |m_p(\cdot,z)|^p \left(1+p{\rm Re}\{tf\} +\frac{p}2|tf|^2+\frac{p(p-2)}2({\rm Re}\{tf\})^2+O(r^3)\right)\\ &\le & m_p(z)^p\left(1+\frac{p|t|^2}2 + O(r^3)\right) \end{eqnarray*} for $p\le 2$, in view of Lemma \ref{lm:var_1}. Take $t=\varepsilon r e^{-i\theta}\overline{Xf(z)}$ where $\varepsilon>0$ is a constant to be determined later. Then we have $$ |f_t(z+re^{i\theta}X)|^p= |m_p(z+re^{i\theta}X,z)|^p \left(1+p\varepsilon r^2 |Xf(z)|^2+O(r^3)\right) $$ and $$ \|f_t\|_p^p\le m_p(z)^p\left(1+\frac{p}2 \varepsilon^2 r^2 |Xf(z)|^2 + O(r^3)\right), $$ so that $$ K_p(z+re^{i\theta}X)\ge \frac{ |f_t(z+re^{i\theta}X)|^p}{\|f_t\|_p^p}\ge \frac{ |m_p(z+re^{i\theta}X,z)|^p }{m_p(z)^p}\cdot\frac{1+p\varepsilon r^2 |Xf(z)|^2+O(r^3)}{1+\frac{p}2 \varepsilon^2 r^2 |Xf(z)|^2 + O(r^3)}. $$ Thus we have \begin{eqnarray*} i\partial\bar{\partial} \log K_p(z;X) & \ge & \liminf_{r\rightarrow 0+} \frac1{r^2}\left\{\frac1{2\pi}\int_0^{2\pi} \log \frac{ |m_p(z+re^{i\theta}X,z)|^p }{m_p(z)^p}\,d\theta-\log K_p(z) \right\}\\ && + p\varepsilon|Xf(z)|^2-\frac{p\varepsilon^2}2 |Xf(z)|^2\\ &\ge & p\varepsilon(1-\varepsilon/2) |Xf(z)|^2\\ & = & \frac{p}2\,|Xf(z)|^2 \end{eqnarray*} when $\varepsilon=1$. Here the second inequality follows by applying the mean-value inequality to the subharmonic function $$ u(t):=\log |m_p(z+tX,z)/m_p(z)|^p $$ with $u(0)=\log 1/m_p(z)^p=\log K_p(z)$. Take supremum over $f$, we obtain (\ref{eq:Levi_1}). 
\end{proof} An immediate consequence is the following \begin{corollary} \ \ \ $\liminf_{p\rightarrow 0+} \frac1p\, i\partial\bar{\partial} \log K_p(z;X)\ge \frac12\,C(z;X)^2$. \end{corollary} } {\begin{theorem}\label{th:Levi_2} For every $p\ge 2$ we have \begin{equation}\label{eq:Levi_2} i\partial\bar{\partial} \log K_p(z;X)\ge \frac{p}{2(p-1)}\, B_p(z;X)^2. \end{equation} \end{theorem} \begin{proof} Fix $r,\theta$ for a moment. Take $f\in A^p(\Omega)$ with $f(z)=0$ and $\|f\|_p=1$. For $t\in \mathbb C$ with $|t|=O(r)$ we define $$ f_t:=m_p(\cdot,z)+tf. $$ Analogously we have \begin{eqnarray*} |f_t(z+re^{i\theta}X)|^p & = & |m_p(z+re^{i\theta}X,z)+tf(z+re^{i\theta}X)|^p\\ & = & |m_p(z+re^{i\theta}X,z)+t re^{i\theta} Xf(z)+O(r^3)|^p\\ & = & |m_p(z+re^{i\theta}X,z)|^p \left(1+p{\rm Re}\left\{t re^{i\theta} Xf(z)\right\}+O(r^3)\right) \end{eqnarray*} since $m_p(z+re^{i\theta}X,z)=1+O(r)$. A straightforward calculation yields \begin{eqnarray*} \frac{\partial |f_t|^p}{\partial t} & = & \frac{p}2 |f_t|^{p-2} \overline{f_t}f\\ \frac{\partial^2 |f_t|^p}{\partial t^2} & = & \frac{p(p-2)}4 |f_t|^{p-4}\left( \overline{f_t}f\right)^2\\ \frac{\partial^2 |f_t|^p}{\partial t\partial \bar{t}} & = & \frac{p^2}4 |f_t|^{p-2} |f|^2. \end{eqnarray*} Set $J(t):=\|f_t\|_p^p$. From the proof of Lemma \ref{lm:var_2} we already know that $$ \frac{\partial J}{\partial t}(0) = \frac{\partial J}{\partial \bar{t}}(0)=0. $$ Note that for $|t|\le 1$ we have $$ \left|\frac{\partial^2 |f_t|^p}{\partial t^2}\right| \le \frac{p(p-2)}4 |f_t|^{p-2} |f|^2\le \frac{p(p-2)}4 (|m_p(\cdot,z)|+|f| )^{p-2} |f|^2=: \frac{p(p-2)}4\cdot g $$ $$ \left|\frac{\partial^2 |f_t|^p}{\partial t \partial \bar{t}}\right| \le \frac{p^2}4\cdot g, $$ while H\"older's inequality gives $$ \int_\Omega g\le \|f\|_p^2 \left\{\int_\Omega (|m_p(\cdot,z)|+|f| )^p\right\}^{1-\frac2p}<\infty.
$$ It follows from the dominated convergence theorem that $$ \frac{\partial^2 J}{\partial t^2}(0) = \int_\Omega \frac{\partial^2 |f_t|^p}{\partial t^2}(0)=\frac{p(p-2)}4 \int_\Omega |m_p(\cdot,z)|^{p-4}\left(\overline{m_p(\cdot,z)}\,f\right)^2 $$ $$ \frac{\partial^2 J}{\partial t\partial \bar{t}}(0) = \int_\Omega \frac{\partial^2 |f_t|^p}{\partial t\partial\bar{t}}(0)=\frac{p^2}4 \int_\Omega |m_p(\cdot,z)|^{p-2}|f|^2. $$ Thus we have \begin{eqnarray*} J(t) & = & J(0) + {\rm Re}\left\{\frac{\partial^2 J}{\partial t^2}(0) t^2\right\} +\frac{\partial^2 J}{\partial t\partial \bar{t}}(0) |t|^2 +o(|t|^2)\\ & \le & m_p(z)^p + \frac{p(p-1)}2 |t|^2 \int_\Omega |m_p(\cdot,z)|^{p-2}|f|^2 +o(|t|^2)\\ & \le & m_p(z)^p + \frac{p(p-1)}2 m_p(z)^{p-2} |t|^2 + o(|t|^2) \end{eqnarray*} in view of H\"older's inequality. Take $t=\varepsilon re^{-i\theta} \overline{Xf(z)}$ as above, we obtain $$ K_p(z+re^{i\theta}X)\ge \frac{ |m_p(z+re^{i\theta}X,z)|^p }{m_p(z)^p}\cdot\frac{1+p\varepsilon r^2 |Xf(z)|^2+O(r^3)}{1+\frac{p(p-1)}2 \varepsilon^2 r^2 |Xf(z)|^2/m_p(z)^2 + o(r^2)}, $$ so that \begin{eqnarray*} i\partial\bar{\partial} \log K_p(z;X) & \ge & p\varepsilon |Xf(z)|^2-\frac{p(p-1)}2 \frac{\varepsilon^2 |Xf(z)|^2}{m_p(z)^2}\\ & = & p\varepsilon |Xf(z)|^2 \left(1-\frac{p-1}2 \frac{\varepsilon}{m_p(z)^2}\right) \\ & = & \frac{p}{2(p-1)} |Xf(z)|^2 m_p(z)^2 \end{eqnarray*} when $\varepsilon=\frac{m_p(z)^2}{p-1}$. Take supremum over $f$, we obtain (\ref{eq:Levi_2}). \end{proof} Theorem \ref{th:Levi_2} together with Proposition \ref{prop:compare} yield the following \begin{corollary} \ \ \ $ \liminf_{p\rightarrow \infty} i\partial\bar{\partial} \log K_p(z;X)\ge \frac12\,C(z;X)^2. $ \end{corollary} By Theorem \ref{th:Levi_1} and Theorem \ref{th:Levi_2}, $\log K_p$ is a continuous strictly psh function on $\Omega$. 
In particular, we have \begin{corollary} The minimal set of $K_p(z)$ defined by $$ {\rm Min}_p(\Omega):= \{z\in \Omega: K_p(z)=\inf_{\zeta\in \Omega}\, K_p(\zeta)\} $$ is either empty or a totally real subset of\/ $\Omega$. \end{corollary} \begin{problem} What about the metric structure of\/ ${\rm Min}_p(\Omega)$? \end{problem} \section{Stability of $m_p$, $K_p$ and $B_p$ as $p$ varies} We first prove the following \begin{proposition}\label{th:stab_1} \begin{enumerate} \item $\lim_{s\rightarrow p\pm} K_s(z)$ exists for $p>0$ and \begin{equation}\label{eq:stab_1} \lim_{s\rightarrow p+} K_s(z)\le K_p(z) = \lim_{s\rightarrow p-} K_s(z). \end{equation} \item If there exists $p'>p$ such that $A^{p'}(\Omega)$ lies dense in $A^p(\Omega)$, then \begin{equation}\label{eq:stab_3} K_p(z)=\lim_{s\rightarrow p} K_s(z). \end{equation} \end{enumerate} \end{proposition} \begin{proof} (1) Let $0<p<s<\infty$ and $f\in A^s(\Omega)$. By H\"older's inequality, we have $$ \|f\|_p \le \|f\|_s \cdot |\Omega|^{\frac1p-\frac1s}, $$ so that $$ K_p(z)^{\frac1p}\ge \frac{|f(z)|}{\|f\|_p} \ge \frac{|f(z)|}{\|f\|_s\cdot |\Omega|^{\frac1p-\frac1s}}. $$ Take supremum over $f\in A^s(\Omega)$, we obtain \begin{equation}\label{eq:decreasing} |\Omega|^{\frac1p}\cdot K_p(z)^{\frac1p}\ge |\Omega|^{\frac1s}\cdot K_s(z)^{\frac1s}. \end{equation} It follows that both $\lim_{s\rightarrow p-}K_s(z)$ and $\lim_{s\rightarrow p+} K_s(z)$ exist and $$ K_p(z)\le \lim_{s\rightarrow p-}K_s(z);\ \ \ K_p(z)\ge \lim_{s\rightarrow p+}K_s(z). $$ To achieve the inequality $$ K_p(z)\ge \lim_{s\rightarrow p-}K_s(z), $$ we first take $f_s\in A^s(\Omega)$ with $\|f_s\|_s=1$ and $|f_s(z)|=K_s(z)^{\frac1s}$. Clearly $\{f_s\}$ forms a normal family so that there exists a sequence $s_j\uparrow p$ with $f_{s_j}$ converging locally uniformly to some ${f}_p\in \mathcal O(\Omega)$. 
By Fatou's lemma, we have $$ \int_\Omega |{f}_p|^p \le \liminf_{j\rightarrow \infty} \int_\Omega |f_{s_j}|^p \le \liminf_{j\rightarrow \infty} \left[\int_\Omega |f_{s_j}|^{s_j}\right]^{\frac{p}{s_j}}\cdot |\Omega|^{1-\frac{p}{s_j}}=1. $$ It follows that $$ K_p(z)\ge \frac{|{f}_p(z)|^p}{\|{f}_p\|_p^p}\ge |{f}_p(z)|^p=\lim_{j\rightarrow\infty} |f_{s_j}(z)|^p = \lim_{j\rightarrow\infty} K_{s_j}(z)^{\frac{p}{s_j}} = \lim_{j\rightarrow\infty} K_{s_j}(z)=\lim_{s\rightarrow p-}K_s(z). $$ (2) Let $\mathcal M_p(\cdot,z)$ be the maximizer in \eqref{eq:max_1}. Suppose there exists a sequence $\{f_j\}\subset A^{p'}(\Omega)$ for some $p'>p$ such that $$ \int_\Omega |f_j-\mathcal M_p(\cdot,z)|^p\rightarrow 0 $$ as $j\rightarrow \infty$. It follows that for every $0<\varepsilon<1$ there exists $j_\varepsilon\in \mathbb Z^+$ such that $$ \|f_{j_\varepsilon}\|_p\le 1+\varepsilon $$ and $$ |f_{j_\varepsilon}(z)|\ge (1-\varepsilon) K_p(z)^{\frac1p} $$ in view of the mean-value inequality. Since $$ |f_{j_\varepsilon}|^s\le 1+|f_{j_\varepsilon}|^{p'}\in L^1(\Omega) $$ for every $s\le p'$, we have $$ \int_\Omega |f_{j_\varepsilon}|^p=\lim_{s\rightarrow p+} \int_\Omega |f_{j_\varepsilon}|^s $$ in view of the dominated convergence theorem. Thus $$ \lim_{s\rightarrow p+} K_s(z)\ge \lim_{s\rightarrow p+} \frac{|f_{j_\varepsilon}(z)|^s}{\int_\Omega |f_{j_\varepsilon}|^s}=\frac{|f_{j_\varepsilon}(z)|^p}{\int_\Omega |f_{j_\varepsilon}|^p} \ge \left(\frac{1-\varepsilon}{1+\varepsilon}\right)^p K_p(z). $$ Since $\varepsilon$ can be arbitrarily small, we obtain $ \lim_{s\rightarrow p+} K_s(z)=K_p(z)$ so that (\ref{eq:stab_3}) holds. \end{proof} A bounded domain $\Omega$ in $\mathbb C^n$ is said to have positive hyperconvexity index if there exists a negative continuous psh function $\rho$ on $\Omega$ satisfying $-\rho\lesssim \delta^\alpha$ for some $\alpha>0$ (cf. \cite{ChenH-index}). 
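For instance (a standard example, immediate from the definition), the unit ball $\mathbb B^n$ has positive hyperconvexity index: the function $\rho(z):=|z|^2-1$ is a negative continuous psh function on $\mathbb B^n$ with $$ -\rho(z)=1-|z|^2=(1-|z|)(1+|z|)\le 2\,\delta(z), $$ so that $-\rho\lesssim \delta^\alpha$ holds with $\alpha=1$.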
It follows from Proposition 1.4 in \cite{ChenH-index} that if $\Omega$ has positive hyperconvexity index then $A^p(\Omega)$ lies dense in $A^2(\Omega)$ for some $p>2$. Thus we have \begin{corollary}\label{cor:stab_2} If\/ $\Omega$ has positive hyperconvexity index, then $$ K_2(z)=\lim_{p\rightarrow 2} K_p(z). $$ \end{corollary} Let $E$ be a compact set in $\mathbb C$. Let $\mathcal O({E})$ be the set of functions which are holomorphic in a neighborhood of $E$ and $L^p_a(E)$ the set of functions in $L^p(E)$ which are holomorphic in $E^\circ$. It is well-known \cite{HedbergApprox} that $\mathcal O({E})$ lies dense in $L^p_a(E)$ for $p\in [1,2)$. Since $\mathcal O({E})\subset A^{s}(E^\circ)$ for every $s>0$, we have \begin{corollary}\label{cor:Stab_3} If\/ $\Omega$ is a bounded fat domain in $\mathbb C$, i.e., $\overline{\Omega}^\circ=\Omega$, then \eqref{eq:stab_3} holds for $p\in [1,2)$. \end{corollary} The following proposition can be used to produce plenty of examples with \begin{equation}\label{eq:NC} K_2(z)>\lim_{p\rightarrow 2+} K_p(z). \end{equation} \begin{proposition}\label{prop:NC} Let $\Omega=D\backslash S$ where $D$ is a bounded domain in $\mathbb C$ and $S$ is a compact set in $D$ which has positive $2-$capacity but zero $p-$capacity for every $p<2$. Then \eqref{eq:NC} holds. \end{proposition} Recall that the $p-$capacity of $S$ is given by $$ {\rm Cap}_p(S):=\inf_\phi \int_{\mathbb C} |\nabla \phi|^p $$ where the infimum is taken over all $\phi\in C_0^\infty(\mathbb C)$ such that $\phi\ge 1$ on $S$. The condition of Proposition \ref{prop:NC} is satisfied for instance, if the $h-$Hausdorff measure $\Lambda_h(S)$ of $S$ is positive and finite where $h(t)=(\log1/t)^{-\alpha}$ for some $\alpha>1$. \begin{proof}[Proof of Proposition \ref{prop:NC}] It is a classical result that $A^p(\Omega)=A^p(D)$ if and only if ${\rm Cap}_q(S)=0$ where $\frac1p+\frac1q=1$ (cf. \cite{Carleson} and \cite{Hedberg}). 
Hence $$ K_{\Omega,p}(z)=K_{D,p}(z)\le C_n d(S,\partial D)^{-2n} $$ for all $z\in \Omega$ with $d(z,S)\le d(S,\partial D)/2$. On the other hand, since ${\rm Cap}_2(S)>0$, i.e., $S$ is non-polar, there exists a regular point $a\in S$ for the Dirichlet problem on $\Omega$. Then we have $$ \lim_{z\rightarrow a} K_{\Omega,2}(z)=\infty $$ (see \cite{Ohsawa}). Thus we obtain $$ K_{\Omega,2}(z)> \lim_{p\rightarrow 2+} K_{\Omega,p}(z) $$ whenever $z$ is sufficiently close to $a$. \end{proof} \begin{theorem}\label{th:Stab_3} \begin{enumerate} \item For $p>1$ we have \begin{equation}\label{eq:Stab_3} \lim_{s\rightarrow p-} m_s(z,w) = m_p(z,w). \end{equation} \item $ \lim_{s\rightarrow p+} m_s(z,w) $ exists. Moreover, if $A^{p'}(\Omega)$ lies dense in $A^p(\Omega)$ for some $p'>p$, then \begin{equation}\label{eq:r-continuity_1} m_p(z,w)=\lim_{s\rightarrow p+} m_s(z,w). \end{equation} \end{enumerate} The same conclusions also hold for $K_p$. \end{theorem} \begin{proof} $(1)$ We first consider the case $p>2$. Fix $2< s<p$. Applying (\ref{eq:eleIneq_3}) with $a=m_s(\cdot,w)$, $b=m_p(\cdot,w)$, and $p$ replaced by $s$, we obtain \begin{eqnarray*} && C_s \int_\Omega |m_p(\cdot,w)-m_s(\cdot,w)|^s\\ & \le & \int_\Omega | m_p(\cdot,w) |^s - \int_\Omega | m_s(\cdot,w) |^s\\ && -s{\rm Re} \int_\Omega |m_s(\cdot,w)|^{s-2}\,\overline{m_s(\cdot,w)} \left[ m_p(\cdot,w)-m_s(\cdot,w) \right]\\ & \le & |\Omega|^{1-\frac{s}p} \left[\int_\Omega | m_p(\cdot,w) |^p\right]^{\frac{s}p}-m_s(w)^s\\ & = & |\Omega|^{1-\frac{s}p} m_p(w)^s-m_s(w)^s. \end{eqnarray*} Thus we have \begin{eqnarray*} |m_p(z,w)-m_s(z,w)|^s & \le & K_s(z)\int_\Omega |m_p(\cdot,w)-m_s(\cdot,w)|^s\\ & \le & C_s^{-1} K_s(z)\left\{ |\Omega|^{1-\frac{s}p} m_p(w)^s-m_s(w)^s\right\}. \end{eqnarray*} This combined with (\ref{eq:stab_1}) gives (\ref{eq:Stab_3}). Now suppose $1<p\le 2$. Fix $1<s<p$. Let $f_1,f_2\in A^s(\Omega)$.
H\"older's inequality yields \begin{eqnarray}\label{eq:off_Ineq} \int_\Omega |f_2-f_1|^s & = & \int_\Omega |f_2-f_1|^s (|f_2|+|f_1|)^{s(s-2)/2}(|f_2|+|f_1|)^{s(2-s)/2}\nonumber\\ & \le & \left\{\int_\Omega |f_2-f_1|^2 (|f_2|+|f_1|)^{s-2} \right\}^{\frac{s}2} \left\{\int_\Omega (|f_2|+|f_1|)^{s} \right\}^{1-\frac{s}2}\nonumber\\ & \le & \left\{A_s^{-1}\int_\Omega \left[ |f_2|^s-|f_1|^s -s{\rm Re}\left( |f_1|^{s-2}\bar{f}_1 (f_2-f_1)\right)\right] \right\}^{\frac{s}2} \nonumber\\ && \cdot \left\{\int_\Omega (|f_2|+|f_1|)^{s} \right\}^{1-\frac{s}2} \end{eqnarray} in view of (\ref{eq:eleIneq_4}). Taking $f_2=m_p(\cdot,w)$ and $f_1=m_s(\cdot,w)$, we obtain \begin{eqnarray*} && |m_p(z,w)-m_s(z,w)|^s\\ & \le & K_s(z) \int_\Omega |m_p(\cdot,w)-m_s(\cdot,w)|^s \\ & \le & K_s(z) \left\{A_s^{-1} \left[|\Omega|^{1-\frac{s}p} m_p(w)^s-m_s(w)^s\right] \right\}^{\frac{s}2}\\ && \cdot \left\{ 2^{s-1} \left [ |\Omega|^{1-\frac{s}p} m_p(w)^s+m_s(w)^s\right] \right\}^{1-\frac{s}2}, \end{eqnarray*} which gives (\ref{eq:Stab_3}). $(2)$ We first consider the case $p\ge 2$. Let $p\le s<r$. Applying (\ref{eq:eleIneq_3}) with $a=m_s(\cdot,w)$, $b=m_r(\cdot,w)$, and $p$ replaced by $s$, we obtain \begin{eqnarray*} && C_s \int_\Omega |m_r(\cdot,w)-m_s(\cdot,w)|^s\\ & \le & \int_\Omega | m_r(\cdot,w) |^s - \int_\Omega | m_s(\cdot,w) |^s\\ && -s{\rm Re} \int_\Omega |m_s(\cdot,w)|^{s-2}\overline{m_s(\cdot,w)}\,(m_r(\cdot,w)-m_s(\cdot,w))\\ & \le & |\Omega|^{1-\frac{s}r} \left\{\int_\Omega | m_r(\cdot,w) |^r\right\}^{\frac{s}r}-m_s(w)^s\\ & = & |\Omega|^{1-\frac{s}r} m_r(w)^s-m_s(w)^s. \end{eqnarray*} Thus we have \begin{eqnarray}\label{eq:r-continuity_3} |m_r(z,w)-m_s(z,w)|^s & \le & K_s(z)\int_\Omega |m_r(\cdot,w)-m_s(\cdot,w)|^s\nonumber\\ & \le & C_s^{-1} K_s(z) \left\{ |\Omega|^{1-\frac{s}r} m_r(w)^s-m_s(w)^s\right\}.
\end{eqnarray} Since $\lim_{s\rightarrow p+} m_s(w)$ exists, it follows that $\{m_s(z,w)\}$ forms a Cauchy family as $s\rightarrow p+$, so that $ \lim_{s\rightarrow p+} m_s(z,w) $ also exists. Next suppose $1<p< 2$. Let $p\le s<r<2$. Take $f_2=m_r(\cdot,w)$ and $f_1=m_s(\cdot,w)$ in (\ref{eq:off_Ineq}), we obtain \begin{eqnarray}\label{eq:r-continuity_4} |m_r(z,w)-m_s(z,w)|^s & \le & K_s(z) \int_\Omega |m_r(\cdot,w)-m_s(\cdot,w)|^s\nonumber \\ & \le & K_s(z) \left\{A_s^{-1} \left[ |\Omega|^{1-\frac{s}r} m_r(w)^s-m_s(w)^s\right] \right\}^{\frac{s}2}\nonumber\\ && \cdot \left\{ 2^{s-1} \left[ |\Omega|^{1-\frac{s}r} m_r(w)^s+m_s(w)^s \right] \right\}^{1-\frac{s}2}, \end{eqnarray} from which the assertion immediately follows. Finally, we deal with the case $p=1$. Let $1\le s<r$. By a similar argument as the case $p\ge 2$ with (\ref{eq:eleIneq_3}) replaced by (\ref{eq:eleIneq_5}), we obtain \begin{equation}\label{eq:off-param_4} A_1 \int_\Omega \frac{|{\rm Im}\{m_r(\cdot,w)\overline{m_s(\cdot,w)}\}|^2}{(|m_r(\cdot,w)|+|m_s(\cdot,w)|)^3} \le |\Omega|^{1-\frac{s}r} m_r(w)^s-m_s(w)^s. \end{equation} Since $\{m_r(\cdot,w)\}_r$ and $\{m_s(\cdot,w)\}_s$ are normal families, there exist subsequences $\{m_{r_j}(\cdot,w)\}$ and $\{m_{s_j}(\cdot,w)\}$ which converge locally uniformly to certain holomorphic functions $h_1$ and $h_2$ respectively. Clearly, we have $h_1(w)=h_2(w)=1$. For fixed $w$ we set $S_w:=h_1^{-1}(0)\cup h_2^{-1}(0)$. For any open sets $V\subset\subset U \subset\subset \Omega\backslash S_w$ there exists a constant $C\gg 1$ such that $$ C^{-1} \le \min\{m_{r_j}(z,w),m_{s_j}(z,w)\}\le \max\{m_{r_j}(z,w),m_{s_j}(z,w)\}\le C,\ \ \ \forall\,z\in U $$ for all $j\gg 1$. 
Thus (\ref{eq:off-param_4}) combined with the mean-value inequality gives $$ \left|{\rm Im}\,\frac{m_{r_j}(z,w)}{m_{s_j}(z,w)}\right|^2 \lesssim |\Omega|^{1-\frac{s_j}{r_j}} m_{r_j}(w)^{s_j}-m_{s_j}(w)^{s_j} \rightarrow 0\ \ \ (j\rightarrow \infty),\ \ \ \forall z\in V, $$ which implies that ${\rm Im\,}\frac{h_1}{h_2}=0$ on $V$, hence on $\Omega\backslash S_w$. It follows that $h_1/h_2={\rm const.}$ on $\Omega\backslash S_w$, hence on $\Omega$. As $h_1(w)=h_2(w)=1$, we obtain $h_1=h_2$. In general, we consider two arbitrary convergent subsequences $\{m_{r^1_j}(\cdot,w)\}$ and $\{m_{r^2_j}(\cdot,w)\}$ of $\{m_r(\cdot,w)\}_r$. Let $\{s_j\}$ be a subsequence of $\{r^2_j\}$ with $s_j<r^1_j$. By the above argument we know that the sequences $\{m_{r^1_j}(\cdot,w)\}$ and $\{m_{s_j}(\cdot,w)\}$ have the same limit, which is also the limit of $\{m_{r^2_j}(\cdot,w)\}$. Thus $\lim_{p\rightarrow 1+} m_p(z,w)$ exists. If $A^{p'}(\Omega)$ lies dense in $A^p(\Omega)$ for some $p'>p$, then we have $m_p(w)=\lim_{s\rightarrow p}m_s(w)$. Thus (\ref{eq:r-continuity_3}) together with (\ref{eq:r-continuity_4}) yields (\ref{eq:r-continuity_1}) for $p>1$. Applying (\ref{eq:off-param_4}) with $s=1$ in a similar way as above, we obtain (\ref{eq:r-continuity_1}) for $p=1$. \end{proof} \begin{corollary}\label{cor:zero} Let $\Omega$ be a bounded domain with positive hyperconvexity index. If $K_2(\cdot,w)$ is not zero-free for some $w\in \Omega$, then there exists a number $\varepsilon=\varepsilon(w)>0$ such that $K_p(\cdot,w)$ is also not zero-free for $p\in (2-\varepsilon,2+\varepsilon)$. \end{corollary} \begin{proof} Suppose on the contrary that there exists a sequence $p_j\rightarrow 2$ such that $K_{p_j}(\cdot,w)$ is zero-free for all $j$. It follows from Theorem \ref{th:Stab_3} and Hurwitz's theorem that $K_2(\cdot,w)\equiv 0$, which is clearly impossible.
\end{proof} \begin{proposition}\label{prop:stab_5} $(1)$ $\lim_{s\rightarrow p\pm} B_s(z;X)$ exists for $p>0$ and \begin{equation}\label{eq:stab_5} B_p(z;X) = \lim_{s\rightarrow p-} B_s(z;X). \end{equation} $(2)$ If there exists $p'>p$ such that $A^{p'}(\Omega)$ lies dense in $A^p(\Omega)$, then \begin{equation}\label{eq:stab_6} B_p(z;X)=\lim_{s\rightarrow p} B_s(z;X). \end{equation} \end{proposition} \begin{proof} $(1)$ As in the proof of Proposition \ref{th:stab_1}, we have $$ |\Omega|^{\frac1p}\cdot |X \mathcal M_p(\cdot,z;X)(z)|\ge |\Omega|^{\frac1s}\cdot |X \mathcal M_s(\cdot,z;X)(z)|, $$ so that $\lim_{s\rightarrow p\pm} |X \mathcal M_s(\cdot,z;X)(z)|$ exist and $$ |X \mathcal M_p(\cdot,z;X)(z)| \le \lim_{s\rightarrow p-} |X \mathcal M_s(\cdot,z;X)(z)| $$ $$ |X \mathcal M_p(\cdot,z;X)(z)|\ge \lim_{s\rightarrow p+} |X \mathcal M_s(\cdot,z;X)(z)|. $$ A normal family argument yields $$ |X \mathcal M_p(\cdot,z;X)(z)| = \lim_{s\rightarrow p-} |X \mathcal M_s(\cdot,z;X)(z)|. $$ This combined with Proposition \ref{th:stab_1} gives $$ B_p(z;X)=\lim_{s\rightarrow p-} B_s(z;X). $$ $(2)$ Suppose that $A^{p'}(\Omega)$ lies dense in $A^p(\Omega)$ for some $p'>p$. We may choose a sequence $f_j$ of functions in $A^{p'}(\Omega)$ such that $$ \int_\Omega |f_j-\mathcal M_p(\cdot,z;X)|^p\rightarrow 0 $$ as $j\rightarrow \infty$. It follows that for every $0<\varepsilon<1$ there exists $j_\varepsilon\in \mathbb Z^+$ such that $$ |f_{j_\varepsilon}(z)|\le \varepsilon, \ \ \ |Xf_{j_\varepsilon}(z)| \ge (1-\varepsilon) |X \mathcal M_p(\cdot,z;X)(z)|, $$ and $$ \|f_{j_\varepsilon}\|_p\le 1+\varepsilon. $$ Again we have $$ \int_\Omega |f_{j_\varepsilon}|^p=\lim_{s\rightarrow p+} \int_\Omega |f_{j_\varepsilon}|^s.
$$ Since $$ \|f_{j_\varepsilon}\|_p \le \|f_{j_\varepsilon}\|_s \cdot |\Omega|^{\frac1p-\frac1s}\ \ \ \text{and}\ \ \ \|f_{j_\varepsilon}\|_s \le \|f_{j_\varepsilon}\|_{p'} \cdot |\Omega|^{\frac1s-\frac1{p'}}, $$ it follows that $$ \|f_{j_\varepsilon}\|_s - \|f_{j_\varepsilon}\|_p \le \|f_{j_\varepsilon}\|_s-\|f_{j_\varepsilon}\|_s^{\frac{s}p} + \|f_{j_\varepsilon}\|_s^{\frac{s}p} - \|f_{j_\varepsilon}\|_p \rightarrow 0 $$ as $s\rightarrow p+$. By using the test function $f_{j_\varepsilon}-f_{j_\varepsilon}(z)$, we obtain \begin{eqnarray*} \lim_{s\rightarrow p+} |X \mathcal M_s(\cdot,z;X)(z)| & \ge & \liminf_{s\rightarrow p+} \frac{|X f_{j_\varepsilon}(z)|}{ \|f_{j_\varepsilon}-f_{j_\varepsilon}(z)\|_s}\\ & \ge & \liminf_{s\rightarrow p+} \frac{|X f_{j_\varepsilon}(z)|}{ \|f_{j_\varepsilon}\|_s+ |f_{j_\varepsilon}(z)| |\Omega|^{\frac1s}}\\ &\ge & \frac{1-\varepsilon}{1+\varepsilon+\varepsilon |\Omega|^{\frac1p}}\cdot |X \mathcal M_p(\cdot,z;X)(z)|. \end{eqnarray*} Thus (\ref{eq:stab_6}) holds. \end{proof} \section{Comparison of $K_p(z)$ and $K_2(z)$} \begin{theorem}\label{th:Comp_1} Let $\Omega$ be a bounded pseudoconvex domain with $C^2-$boundary. Then there exist constants $\gamma,C>0$ such that the following estimates hold near $\partial \Omega:$ \begin{eqnarray} \frac{K_p(z)^{\frac1p}}{K_2(z)^{\frac12}} & \le & C\, \delta(z)^{\frac12-\frac1p} |\log \delta(z)|^{\frac{n(p-2)}{2p\gamma}},\ \ \ p>2,\label{eq:comp_1}\\ \frac{K_p(z)^{\frac1p}}{K_2(z)^{\frac12}} & \ge & C^{-1}\, \delta(z)^{\frac12-\frac1p} |\log \delta(z)|^{-\frac{(n+\gamma)(p-2)}{2p\gamma}},\ \ \ p<2.\label{eq:comp_2} \end{eqnarray} \end{theorem} \begin{proof} The argument is essentially the same as in \cite{ChenFu11} (see also \cite{ChenH-index}). Recall that there exists a smooth negative psh function $\rho$ on $\Omega$ such that $-\rho\asymp \delta^\gamma$ for some $\gamma>0$ (cf. \cite{DiederichFornaess77}).
It then follows from a very useful estimate of Blocki \cite{BlockiGreen} for the pluricomplex Green function that there is a constant $C>1$ such that for any $z$ sufficiently close to $\partial \Omega$, \begin{equation}\label{eq:Blocki} \{g_\Omega(\cdot,z)\le -1\}\subset \left\{ C^{-1} \delta(z)|\log \delta(z)|^{-\frac1\gamma}\le \delta \le C\delta(z)|\log\delta(z)|^{\frac{n}\gamma}\right\}, \end{equation} where $g_\Omega(\zeta,z)$ is the pluricomplex Green function defined by $$ g_\Omega(\zeta,z)=\sup\left\{u(\zeta):u\in PSH^-(\Omega)\ \text{and}\ u(\zeta)\le \log |\zeta-z|+O(1)\ {\rm near\ }z\right\}. $$ Note that $g_\Omega(\cdot,z)$ is a continuous negative psh function on $\Omega\backslash \{z\}$ which satisfies \begin{equation}\label{eq:DF} -i\partial\bar{\partial} \log (-g_\Omega(\cdot,z))\ge i\partial \log (-g_\Omega(\cdot,z))\wedge \bar{\partial} \log (-g_\Omega(\cdot,z)) \end{equation} as currents. For $p>2$ and $z\in \Omega$ we take $f\in A^p(\Omega)$ with $\|f\|_p=1$ and $f(z)=K_p(z)^{\frac1p}$. Let $\chi:\mathbb R\rightarrow [0,1]$ be a cut-off function such that $\chi|_{(-\infty,-\log 2]}=1$ and $\chi|_{[0,\infty)}=0$. Set \begin{equation}\label{eq:v} v:=f\bar{\partial}\chi(-\log(-g_\Omega(\cdot,z))). \end{equation} By \eqref{eq:DF} we have $$ i\bar{v}\wedge v \le |f|^2 |\chi'(-\log(-g_\Omega(\cdot,z)))|^2 \cdot \left[ -i\partial\bar{\partial} \log (-g_\Omega(\cdot,z))\right]. $$ The Donnelly-Fefferman estimate (cf. 
\cite{DonnellyFefferman}, see also \cite{BerndtssonCharpentier}, \cite{BlockiDF}) then yields a solution of the equation $ \bar{\partial} u=v $ (in the sense of distributions) such that \begin{eqnarray*} \int_\Omega |u|^2 e^{-2n g_\Omega(\cdot,z)} & \le & C_0 \int_\Omega |f|^2 |\chi'(-\log(-g_\Omega(\cdot,z)))|^2 e^{-2ng_\Omega(\cdot,z)}\\ & \le & C_n \int_{\{\delta \le C\delta(z)|\log\delta(z)|^{\frac{n}\gamma}\}} |f|^2\ \ \ ({\rm by\ }(\ref{eq:Blocki}))\\ & \le & C_n |\{\delta \le C\delta(z)|\log\delta(z)|^{\frac{n}\gamma}\}|^{1-\frac2p}\|f\|_p^2\\ & \le & C \delta(z)^{1-\frac2p} |\log\delta(z)|^{\frac{n(p-2)}{p\gamma}} \end{eqnarray*} where the third inequality follows from H\"older's inequality. Here for the sake of simplicity we use the symbol $C$ to denote any large constant depending only on $\Omega$. Set $$ F:=f\chi(-\log(-g_\Omega(\cdot,z)))-u. $$ Clearly, we have $F\in {\mathcal O}(\Omega)$. Since $g_\Omega(\zeta,z)=\log |\zeta-z|+O(1)$ as $\zeta\rightarrow z$ and $u$ is holomorphic in a neighborhood of $z$, it follows that $u(z)=0$, i.e., $F(z)=f(z)=K_p(z)^{\frac1p}$. Moreover, we have \begin{eqnarray*} \int_\Omega |F|^2 & \le & 2\int_\Omega |f \chi(-\log(-g_\Omega(\cdot,z)))|^2+2\int_\Omega |u|^2\\ & \le & C \delta(z)^{1-\frac2p} |\log\delta(z)|^{\frac{n(p-2)}{p\gamma}} \end{eqnarray*} since $g_\Omega(\cdot,z)<0$. Thus we get $$ K_2(z)^{\frac12}\ge \frac{|F(z)|}{\|F\|_2}\ge C^{-1}\,K_p(z)^{\frac1p} \delta(z)^{\frac1p-\frac12} |\log\delta(z)|^{-\frac{n(p-2)}{2p\gamma}}, $$ i.e., (\ref{eq:comp_1}) holds. Next we define $$ A^2_{\alpha,\varepsilon}(\Omega):=\left\{f\in \mathcal O(\Omega): \|f\|_{\alpha,\varepsilon}^2:=\int_\Omega |f|^2 (-\rho+\varepsilon)^\alpha<\infty\right\},\ \ \ \alpha,\varepsilon>0. $$ Let $K_{\alpha,\varepsilon}$ denote the Bergman kernel associated to the Hilbert space $A^2_{\alpha,\varepsilon}(\Omega)$. 
We first compare $K_p(z)$ with $K_{\alpha,\varepsilon}(z)$ for $\alpha=\frac1\gamma(\frac2p-1)$ and $\varepsilon=\delta(z)^\gamma$ when $p<2$. Take $f\in A^2_{\alpha,\varepsilon}(\Omega)$ such that $\|f\|_{\alpha,\varepsilon}=1$ and $f(z)=K_{\alpha,\varepsilon}(z)^{\frac12}$. By H\"older's inequality we have \begin{eqnarray*} \int_\Omega |f|^p & = & \int_\Omega |f|^p (-\rho+\varepsilon)^{\frac1\gamma(1-\frac{p}2)} (-\rho+\varepsilon)^{-\frac1\gamma(1-\frac{p}2)}\\ & \le & \left[\int_\Omega |f|^2 (-\rho+\varepsilon)^\alpha\right]^{\frac{p}2}\left[\int_\Omega (-\rho+\varepsilon)^{-\frac1\gamma}\right]^{1-\frac{p}2}\\ & \le & C \left[\int_\Omega (\delta+\delta(z))^{-1}\right]^{1-\frac{p}2}\\ &\le & C |\log\delta(z)|^{1-\frac{p}2}. \end{eqnarray*} It follows that \begin{equation}\label{eq:comp_3} K_p(z)^{\frac1p} \ge \frac{|f(z)|}{\|f\|_p}\ge C^{-1} K_{\alpha,\varepsilon}(z)^{\frac12} |\log\delta(z)|^{\frac12-\frac1p}. \end{equation} Now we compare $K_{\alpha,\varepsilon}(z)$ with $K_2(z)$. Set $\psi:=-\alpha\log(-\rho+\varepsilon)$. Clearly, $\psi$ is psh on $\Omega$. As above, we first take $f\in A^2(\Omega)$ with $\|f\|_2=1$ and $f(z)=K_2(z)^{\frac12}$ and then solve the equation $\bar{\partial}u=v$ (where $v$ is given by \eqref{eq:v}) with the following estimate \begin{eqnarray*} \int_\Omega |u|^2 e^{-\psi-2n g_\Omega(\cdot,z)} & \le & C_0 \int_\Omega |f|^2 |\chi'(-\log(-g_\Omega(\cdot,z)))|^2 e^{-\psi-2ng_\Omega(\cdot,z)}\\ & \le & C_n \int_{\{\delta \le C\delta(z)|\log\delta(z)|^{\frac{n}\gamma}\}} |f|^2(-\rho+\varepsilon)^\alpha\\ & \le & \sup\left\{(-\rho+\varepsilon)^\alpha: \delta \le C\delta(z)|\log\delta(z)|^{\frac{n}\gamma}\right\} \cdot \|f\|_2^2\\ & \le & C \delta(z)^{\frac2p-1} |\log\delta(z)|^{\frac{n(2-p)}{p\gamma}}.
\end{eqnarray*} Thus $F:=f\chi(-\log(-g_\Omega(\cdot,z)))-u$ is a holomorphic function on $\Omega$ which satisfies $F(z)=f(z)=K_2(z)^{\frac12}$ and $$ \|F\|_{\alpha,\varepsilon}\le C \delta(z)^{\frac1p-\frac12} |\log\delta(z)|^{\frac{n(2-p)}{2p\gamma}}. $$ Hence $$ K_{\alpha,\varepsilon}(z)^{\frac12}\ge \frac{|F(z)|}{\|F\|_{\alpha,\varepsilon}}\ge C^{-1} K_2(z)^{\frac12} \delta(z)^{\frac12-\frac1p} |\log\delta(z)|^{-\frac{n(2-p)}{2p\gamma}}. $$ This together with (\ref{eq:comp_3}) yields (\ref{eq:comp_2}). \end{proof} \begin{theorem}\label{th:Comp_2} Let $\Omega$ be a bounded pseudoconvex domain with $C^2-$boundary. For every $2\le p<2+\frac2n$ there exists a constant $C=C_{p,\Omega}>0$ such that the following estimate holds near $\partial \Omega:$ \begin{equation}\label{eq:comp_6} \frac{K_p(z)^{\frac1p}}{K_2(z)^{\frac12}} \ge C^{-1}\, \delta(z)^{\frac{(n+1)(p-2)}{2p}} |\log \delta(z)|^{-\frac{(n+1)(p-2)}{2p\gamma}}. \end{equation} Here $\gamma$ is the same as above. \end{theorem} \begin{proof} For $0\le \alpha<1$ we define $$ A^2_{-\alpha}(\Omega):=\left\{f\in \mathcal O(\Omega): \|f\|_{-\alpha}^2:=\int_\Omega|f|^2 \delta^{-\alpha}<\infty\right\}. $$ Let $K_{-\alpha}$ denote the Bergman kernel associated to $A^2_{-\alpha}(\Omega)$. We first compare $K_p(z)$ with $K_{-\alpha}(z)$. Let $f\in A^2_{-\alpha}(\Omega)$. Since $\log |f|^2-\alpha\log \delta$ is psh on $\Omega$, so is $|f|^2\delta^{-\alpha}$. Applying the mean-value inequality to the psh function $|f|^2\delta^{-\alpha}$ on a polydisc with center $z$ and volume $\asymp \delta(z)^{n+1}$, we have \begin{equation}\label{eq:comp_4} |f(z)|^2\delta(z)^{-\alpha}\le C_n \delta(z)^{-n-1} \|f\|_{-\alpha}^2. \end{equation} It follows that \begin{eqnarray}\label{eq:comp_4'} I_p(\varepsilon,f) & := & \int_{\{\delta=\varepsilon\}} |f|^p dS\le \sup_{\{\delta=\varepsilon\}} |f|^{p-2}\cdot I_2(\varepsilon,f) \nonumber \\ && \le C_n \varepsilon^{-\frac{(n+1-\alpha)(p-2)}2}\|f\|_{-\alpha}^{p-2}\cdot I_2(\varepsilon,f).
\end{eqnarray} Here $dS$ denotes the surface element. Note that $$ \alpha=\frac{(n+1-\alpha)(p-2)}2 \iff \alpha=\frac{(n+1)(p-2)}p $$ and $$ \alpha<1 \iff p<2+\frac2n. $$ We fix such $\alpha$ and take $f\in A^2_{-\alpha}(\Omega)$ with $f(z)=K_{-\alpha}(z)^{\frac12}$ and $\|f\|_{-\alpha}=1$. For certain $\varepsilon_0\ll1$, \begin{eqnarray*} \int_\Omega |f|^p & = & \int_{\{\delta\ge \varepsilon_0\}} |f|^p +\int_0^{\varepsilon_0} I_p(\varepsilon,f)d\varepsilon\\ & \le & C+ C_n\int_0^{\varepsilon_0} \varepsilon^{-\alpha}I_2(\varepsilon,f) d\varepsilon \ \ \ \ \ (\text{by}\ \eqref{eq:comp_4'})\\ & \le & C+C_n \|f\|_{-\alpha}^2=C+C_n \end{eqnarray*} where $C=C_{p,\Omega}$. Thus \begin{equation}\label{eq:comp_5} K_p(z)^{\frac1p}\ge \frac{|f(z)|}{\|f\|_p}\ge C^{-1} K_{-\alpha}(z)^{\frac12}. \end{equation} Next we compare $K_{-\alpha}(z)$ with $K_2(z)$. Put $$ \varphi=2ng_\Omega(\cdot,z)-\log(-g_\Omega(\cdot,z)+1). $$ Take $f\in A^2(\Omega)$ with $f(z)=K_2(z)^{\frac12}$ and $\|f\|_2=1$. If $v$ is given as above, then $$ i\bar{v}\wedge v \le |f|^2 |\chi'(-\log(-g_\Omega(\cdot,z)))|^2\, \frac{(-g_\Omega(\cdot,z)+1)^2}{g_\Omega(\cdot,z)^2}\cdot {i\partial \bar\partial{\varphi}}. $$ By Theorem 1.1 in \cite{Chen14} we may solve $\bar{\partial}u=v$ with the following estimate \begin{eqnarray*} \int_\Omega |u|^2e^{-\varphi}\delta^{-\alpha} & \le & C_{\alpha,\Omega} \int_\Omega |f|^2 |\chi'(-\log(-g_\Omega(\cdot,z)))|^2 \, \frac{(-g_\Omega(\cdot,z)+1)^2}{g_\Omega(\cdot,z)^2}\, e^{-\varphi}\delta^{-\alpha}\\ & \le & C_{\alpha,\Omega} \int_{\{\delta \ge C^{-1}\delta(z)|\log\delta(z)|^{-\frac{1}\gamma}\}} |f|^2\delta^{-\alpha}\ \ \ ({\rm by\ }\eqref{eq:Blocki})\\ & \le & C_{\alpha,\Omega}\, \delta(z)^{-\alpha} |\log\delta(z)|^{\frac{\alpha}\gamma}.
\end{eqnarray*} It follows that $$ F:=\chi(-\log(-g_\Omega(\cdot,z)))f-u $$ is a holomorphic function on $\Omega$ satisfying $F(z)=f(z)=K_2(z)^{\frac12}$ and $$ \int_\Omega |F|^2 \delta^{-\alpha}\le C_{\alpha,\Omega}\, \delta(z)^{-\alpha} |\log\delta(z)|^{\frac{\alpha}\gamma}. $$ Thus $$ K_{-\alpha}(z)\ge \frac{|F(z)|^2}{\|F\|_{-\alpha}^2} \ge C_{\alpha,\Omega}^{-1}\,K_2(z) \delta(z)^{\alpha} |\log\delta(z)|^{-\frac{\alpha}\gamma}. $$ This together with (\ref{eq:comp_5}) yields (\ref{eq:comp_6}). \end{proof} \section{Concluding remarks} There are two interesting functions related to the limiting case $p=0$. The first one, introduced by Tsuji \cite{Tsuji}, is defined to be $$ K_0(z):=\left(\limsup_{m\rightarrow \infty} K_{2/m}(z)\right)^\ast $$ where $(\cdot)^\ast$ denotes the upper semicontinuous regularization. The second one, which arises from Siu's work on invariance of plurigenera \cite{Siu}, is defined by $$ \widehat{K}_0(z):=\sum_{m=1}^\infty \varepsilon_m K_{2/m}(z) $$ where $\{\varepsilon_m\}$ is a sequence of positive numbers satisfying $\sum \varepsilon_m<\infty$. By (\ref{eq:BergIneq_3}) we see that both $K_0$ and $\widehat{K}_0$ are well-defined so that $\log K_0$ and $\log \widehat{K}_0$ are psh on $\Omega$; moreover, $K_0$ always dominates $\widehat{K}_0$ while the latter is continuous. By Proposition \ref{prop:trans_2} we immediately obtain \begin{eqnarray*} K_{\Omega_1,0}(z) & = & K_{\Omega_2,0}(F(z))|J_F(z)|^2\\ \widehat{K}_{\Omega_1,0}(z) & = & \widehat{K}_{\Omega_2,0}(F(z))|J_F(z)|^2 \end{eqnarray*} for any biholomorphic mapping $F:\Omega_1\rightarrow \Omega_2$. The functions $K_{2/m}(z)\ (m\in \mathbb Z^+)$, $K_0(z)$ and $\widehat{K}_0(z)$ can be used to produce various invariant {\it K\"ahler}\/ metrics. Following Narasimhan-Simha \cite{NS}, we introduce the weighted Bergman space $$ A^2_p(\Omega):=\left\{f\in \mathcal O(\Omega):\int_\Omega {|f|^2}/{K_{p}}<\infty\right\}.
$$ Let $K_{2,p}(z)$ denote the Bergman kernel associated to $A^2_p(\Omega)$. Then $$ ds^2_{2/m}:=\sum_{j,k=1}^n \frac{\partial^2 \log K_{2,2/m}(z)}{\partial z_j\partial\bar{z}_k} dz_j\otimes d\bar{z}_k $$ gives an invariant K\"ahler metric on $\Omega$ (cf. \cite{Sakai}; see also \cite{ChenInvariant}). Analogously, we may introduce the following weighted Bergman spaces $$ A^2_0 (\Omega):=\left\{f\in \mathcal O(\Omega):\int_\Omega {|f|^2}/{K_{0}}<\infty\right\} $$ $$ \widehat{A^2_0 (\Omega)}:=\left\{f\in \mathcal O(\Omega):\int_\Omega {|f|^2}/{\widehat{K}_{0}}<\infty\right\}. $$ Let $K_{2,0}(z)$ (resp. $\widehat{K_{2,0}}(z)$) denote the Bergman kernel associated to $A^2_0(\Omega)$ (resp. $\widehat{A^2_0 (\Omega)}$). It is not difficult to see that $$ ds^2_0:=\sum_{j,k=1}^n \frac{\partial^2 \log K_{2,0}(z)}{\partial z_j\partial\bar{z}_k} dz_j\otimes d\bar{z}_k $$ $$ \widehat{ds^2_0}:=\sum_{j,k=1}^n \frac{\partial^2 \log \widehat{K_{2,0}}(z)}{\partial z_j\partial\bar{z}_k} dz_j\otimes d\bar{z}_k $$ are invariant K\"ahler metrics on $\Omega$. \begin{problem} Is it possible to construct an invariant complete metric on any bounded pseudoconvex domain by using the $p-$Bergman kernel? \end{problem} It is also valuable to study the following higher order minimizing problem: $$ m_p^{(\alpha)}(z):=\inf\left\{\|f\|_p: f\in A^p(\Omega),\partial^{(\alpha)}f(z)=1\ \text{and}\ \partial^{(\beta)}f(z)=0,\forall\ \beta\prec \alpha \right\} $$ where for $\alpha=(\alpha_1,\cdots,\alpha_n)$ and $\beta=(\beta_1,\cdots,\beta_n)$ we define $\beta\prec \alpha$ $\iff$ $|\beta|<|\alpha|$, or $|\beta|=|\alpha|$ and there exists $k$ such that $\beta_j=\alpha_j$ for $j<k$ while $\beta_k>\alpha_k$. Analogously, one can show that there exists exactly one minimizer $m_p^{(\alpha)}(\cdot,z)$ for $p\ge 1$, and many properties of $m_p(\cdot,z)$ extend to $m_p^{(\alpha)}(\cdot,z)$.
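Note that for the zero multi-index $\alpha=(0,\cdots,0)$ the side conditions $\partial^{(\beta)}f(z)=0$, $\beta\prec \alpha$, are vacuous and the remaining constraint reduces to $f(z)=1$, so that $$ m_p^{(0)}(z)=m_p(z)\ \ \ \text{and}\ \ \ m_p^{(0)}(\cdot,z)=m_p(\cdot,z); $$ the problem above thus genuinely extends the $p-$Bergman minimizing problem.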
An elegant observation due to Bergman states that for every $z\in \Omega$, $\{m_2^{(\alpha)}(\cdot,z)/\|m_2^{(\alpha)}(\cdot,z)\|_2\}_\alpha$ is a complete orthonormal basis of $A^2(\Omega)$. \begin{problem} Does $\{m_p^{(\alpha)}(\cdot,z)/\|m_p^{(\alpha)}(\cdot,z)\|_p\}_\alpha$ form a Schauder basis of $A^p(\Omega)$? \end{problem} \section{Appendix} {\bf Proof of (\ref{eq:eleIneq_1}).} A straightforward calculation shows \begin{eqnarray*} (|b|^{p-2}+|a|^{p-2}) |b-a|^2 & = & |b|^p + |a|^p + |b|^{p-2}|a|^2+|a|^{p-2}|b|^2\\ && -2{\rm Re}( |b|^{p-2}\bar{b}a)-2{\rm Re}( |a|^{p-2}\bar{a}b)\\ (|b|^{p-2}-|a|^{p-2})(|b|^2-|a|^2) & = & |b|^p + |a|^p - |b|^{p-2}|a|^2 - |a|^{p-2}|b|^2. \end{eqnarray*} Summing up, we obtain the following basic equality: \begin{eqnarray}\label{eq:identity} && (|b|^{p-2}+|a|^{p-2}) |b-a|^2 + (|b|^{p-2}-|a|^{p-2})(|b|^2-|a|^2)\nonumber\\ & = & 2|b|^p+2|a|^p-2{\rm Re}( |b|^{p-2}\bar{b}a)-2{\rm Re} (|a|^{p-2}\bar{a}b)\nonumber\\ & = & 2 {\rm Re}\left\{(|b|^{p-2}\bar{b}-|a|^{p-2}\bar{a})(b-a)\right\}. \end{eqnarray} For every $p\ge 2$ we have $$ |b-a|^{p-2}\le (|a|+|b|)^{p-2}\le 2^{p-2} \max\{|a|^{p-2},|b|^{p-2}\}\le 2^{p-2}(|a|^{p-2}+|b|^{p-2}). $$ This combined with (\ref{eq:identity}) gives (\ref{eq:eleIneq_1}). {\bf Proof of (\ref{eq:eleIneq_2}).} Let $1<p\le 2$. The Newton-Leibniz formula yields \begin{eqnarray*} |b|^{p-2}\bar{b}-|a|^{p-2}\bar{a} & = & \int_0^1 \frac{d}{dt} \left\{|a+t(b-a)|^{p-2}\cdot\overline{a+t(b-a)}\right\} dt\\ & = &(\bar{b}-\bar{a}) \int_0^1 |a+t(b-a)|^{p-2} dt\\ && + (p-2) \int_0^1 |a+t(b-a)|^{p-4} {\rm Re}\left\{t|b-a|^2+a(\bar{b}-\bar{a})\right\}\overline{a+t(b-a)}dt, \end{eqnarray*} so that \begin{eqnarray}\label{eq:Basic_Eq} && (|b|^{p-2}\bar{b}-|a|^{p-2}\bar{a})(b-a)\nonumber\\ & = & |b-a|^2 \int_0^1 |a+t(b-a)|^{p-2}\nonumber\\ && + (p-2) \int_0^1 |a+t(b-a)|^{p-4} {\rm Re}\left\{t|b-a|^2+a(\bar{b}-\bar{a})\right\}\overline{t|b-a|^2+a(\bar{b}-\bar{a})}dt. 
\end{eqnarray} It follows that \begin{eqnarray}\label{eq:Basic_Ineq} && {\rm Re} \left\{ (|b|^{p-2}\bar{b}-|a|^{p-2}\bar{a})(b-a)\right\}\nonumber\\ & = & |b-a|^2 \int_0^1 |a+t(b-a)|^{p-2}\nonumber\\ && + (p-2) \int_0^1 |a+t(b-a)|^{p-4} \left|{\rm Re}\left\{t|b-a|^2+a(\bar{b}-\bar{a})\right\}\right|^2dt\nonumber\\ & = & (p-1) |b-a|^2 \int_0^1 |a+t(b-a)|^{p-2} dt \nonumber\\ && +(2-p) \left|{\rm Im}\left\{a \bar{b} \right\}\right|^2 \int_0^1 |a+t(b-a)|^{p-4} dt \end{eqnarray} since \begin{equation}\label{eq:eleIneq_0} \left({\rm Re}\{\bar{a}(b-a)\}+t|b-a|^2\right)^2 +\left({\rm Im}\{\bar{a}b\}\right)^2 = |b-a|^2|a+t(b-a)|^2. \end{equation} \eqref{eq:Basic_Ineq} implies \eqref{eq:eleIneq_2} since $$ |a+t(b-a)|=|(1-t)a+tb|\le |a|+|b|,\ \ \ 0\le t\le 1. $$ {\bf Proof of (\ref{eq:eleIneq_4}).} Set \begin{eqnarray*} \eta(t) &:=& |a+t(b-a)|^2=|a|^2+2t{\rm Re}\{\bar{a}(b-a)\}+t^2|b-a|^2\\ \kappa(t) &:=& \eta(t)^{p/2}=|a+t(b-a)|^p. \end{eqnarray*} A straightforward calculation yields $$ \kappa'(t)=\frac{p}2\eta(t)^{\frac{p}2-1}\eta'(t)=p |a+t(b-a)|^{p-2} \left({\rm Re}\{\bar{a}(b-a)\}+t|b-a|^2\right) $$ and \begin{eqnarray*} \kappa''(t) & = & \frac{p}2 \eta(t)^{\frac{p}2-1}\eta''(t)+\frac{p}2\left(\frac{p}2-1\right)\eta(t)^{\frac{p}2-2}\eta'(t)^2\\ & = & p|b-a|^2 |a+t(b-a)|^{p-2} \\ && +p(p-2)|a+t(b-a)|^{p-4}\left({\rm Re}\{\bar{a}(b-a)\}+t|b-a|^2\right)^2. \end{eqnarray*} This combined with \eqref{eq:eleIneq_0} gives \begin{eqnarray}\label{eq:ident_5} \kappa''(t) & = & p|a+t(b-a)|^{p-4}\left({\rm Im}\{\bar{a}b\}\right)^2\nonumber\\ && + p(p-1) |a+t(b-a)|^{p-4}\left({\rm Re}\{\bar{a}(b-a)\}+t|b-a|^2\right)^2. \end{eqnarray} Thus we have \begin{eqnarray}\label{eq:KeyIneq_5} \kappa''(t) & \ge & p\min\{1,p-1\} |a+t(b-a)|^{p-4}\nonumber\\ && \cdot \left\{\left({\rm Im}\{\bar{a}b\}\right)^2+\left({\rm Re}\{\bar{a}(b-a)\}+t|b-a|^2\right)^2 \right\}\nonumber\\ & = & p\min\{1,p-1\}|b-a|^2 |a+t(b-a)|^{p-2}. 
\end{eqnarray} On the other hand, integration by parts gives $$ \kappa(1)=\kappa(0)+\kappa'(0)+ \int_0^1 (1-t)\kappa''(t)dt, $$ that is, \begin{equation}\label{eq:KeyIdent} |b|^p = |a|^p + p{\rm Re}\{|a|^{p-2}\bar{a}(b-a)\} + \int_0^1 (1-t)\kappa''(t)dt. \end{equation} This combined with (\ref{eq:KeyIneq_5}) gives \begin{eqnarray}\label{eq:Key_Ineq5} |b|^p & \ge & |a|^p + p{\rm Re}\{|a|^{p-2}\bar{a}(b-a)\}\nonumber\\ && + p\min\{1,p-1\}|b-a|^2\int_0^1 (1-t) |a+t(b-a)|^{p-2}dt. \end{eqnarray} Since $|a+t(b-a)|\le |a|+|b|$, we see that (\ref{eq:eleIneq_4}) follows from (\ref{eq:Key_Ineq5}). {\bf Proof of (\ref{eq:eleIneq_3}).} Suppose $p>2$. Note that $$ I:=\int_0^1 (1-t) |a+t(b-a)|^{p-2}dt\ge \int_0^1 (1-t)\left||a|-t|b-a|\right|^{p-2}dt. $$ If $|a|\ge |b-a|/2$, then $$ I\ge |b-a|^{p-2} \int_0^{1/4}(1-t)(1/2-t)^{p-2}dt\ge \frac7{4^{p+3}} |b-a|^{p-2}. $$ If $|a|\le |b-a|/2$, then $$ I\ge |b-a|^{p-2} \int_{3/4}^1 (1-t)(t-1/2)^{p-2}dt \ge \frac1{4^{p+3}} |b-a|^{p-2}. $$ These combined with (\ref{eq:Key_Ineq5}) give (\ref{eq:eleIneq_3}). {\bf Proof of (\ref{eq:eleIneq_5}).} This follows directly from (\ref{eq:ident_5}) and (\ref{eq:KeyIdent}). \bigskip {\bf Acknowledgements.} The authors would like to thank Yuanpu Xiong for a number of valuable discussions.
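The basic equality (\ref{eq:identity}) lends itself to a quick numerical spot check. The sketch below (Python with NumPy; an informal illustration, independent of the proofs above) evaluates both sides for random complex $a$, $b$ and exponents $p\ge 2$:

```python
import numpy as np

def identity_sides(a, b, p):
    """Evaluate both sides of the basic equality (eq:identity):
    (|b|^{p-2}+|a|^{p-2})|b-a|^2 + (|b|^{p-2}-|a|^{p-2})(|b|^2-|a|^2)
      = 2 Re{(|b|^{p-2} conj(b) - |a|^{p-2} conj(a)) (b - a)}.
    """
    lhs = (abs(b)**(p - 2) + abs(a)**(p - 2)) * abs(b - a)**2 \
        + (abs(b)**(p - 2) - abs(a)**(p - 2)) * (abs(b)**2 - abs(a)**2)
    rhs = 2.0 * ((abs(b)**(p - 2) * np.conj(b)
                  - abs(a)**(p - 2) * np.conj(a)) * (b - a)).real
    return lhs, rhs

rng = np.random.default_rng(0)
for _ in range(1000):
    a = complex(rng.standard_normal(), rng.standard_normal())
    b = complex(rng.standard_normal(), rng.standard_normal())
    p = rng.uniform(2.0, 6.0)
    lhs, rhs = identity_sides(a, b, p)
    assert abs(lhs - rhs) <= 1e-9 * (1.0 + abs(lhs) + abs(rhs))
print("basic equality verified on 1000 random samples")
```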
Since 1969 every MINI has come standard with cutting-edge innovation, incredible attention to detail, and that unmistakable go-kart feeling. But now, every MINI also comes standard with a three-year factory service plan, a three-year warranty, and three years of roadside assist. A service and warranty package that's designed to keep your beautiful new MINI running smoothly, turning tightly, and shining brightly.

Service plan: complete maintenance and scheduled servicing of the vehicle, with unlimited KMs (service inclusive; not transferable to new owners). The service inclusive package includes the maintenance work and wear-and-tear repairs listed below, as well as services performed on the factory equipment of the vehicle (including oil and required original MINI parts).

Engine oil change with oil filter and topping-up amount.
Service/exchange of air cleaner, fuel filter, microfilter and spark plugs.
Vehicle check service in accordance with MINI specifications.
Brake pads, front and rear.
Brake discs, front and rear.
Clutch (if there is wear).
Maximum of one set of wiper blades per year, if applicable.
Wheel alignments and wheel balance.

Warranty: covers all manufacturing defects of the vehicle and general repairs resulting from normal wear and tear, with unlimited KMs (so you can get maximum twists and turns under your belt). Roadside assistance is transferable on all NZ-new MINIs.

So what are you waiting for? Make yourself at home in the driver's seat of a new MINI today.
Diplomacy News Report Palestine Abbas: Decision on government soon Mahmoud Abbas, the Palestinian president, has hinted that he could sack the Hamas government but that any decision he might take could lead to a referendum. Abbas also promoted the idea of a cabinet of technocrats as a way to ease crippling Western sanctions, but he pledged not to force it on Hamas, and the Islamic ruling party was cool to the idea. Abbas addressed reporters for more than an hour at his headquarters in the West Bank city of Ramallah on Tuesday evening. He said talks on forming a unity coalition with Hamas were dead, due to its refusal to soften its stance towards Israel. "In the near future we need to reach options that will allow us to get out of this crisis as soon as possible … It is impossible to remain in this situation." He said the technocrat idea of a cabinet made up of professionals instead of politicians should be "considered seriously" as a way out of the current deadlock. "[Hamas] say a government of professionals is an American option. What is this? They say … this is a Zionist option, that they must stop this – these statements do not frighten us." Hamas has said it believed a unity government was still possible, but it has ruled out ever recognising Israel. The power struggle between Abbas's Fatah and Hamas has dashed the hopes of Palestinians that Western sanctions will be lifted. There has been fierce fighting this month between fighters from Hamas and Fatah in which 18 people were killed, sparking fears of civil war. Abbas did not say what his options were but his aides said he might call fresh elections, appoint an emergency government or hold a referendum to let the Palestinian people decide. Asked if he would call a referendum, Abbas said: "If there is no constitutional text on an issue I seek, I will go to the people and hold a referendum on that issue." 
"If I cannot solve the people's problems, I am worthless"

While the Palestinian basic law, which serves as a constitution, allows the president to sack the government, it does not mention other alternatives such as calling early polls. Hamas has accused Fatah of trying to topple the government. It warned of more fighting if Abbas carries out his threats. Hamas defeated Fatah in parliamentary elections in January, prompting the West to cut off crucial official aid over the group's refusal to recognise Israel and renounce violence. "A government that is incapable of lifting the siege is worthless," said Abbas, referring to the Western embargo and Israeli restrictions on freedom of movement and goods. Abbas said efforts to arrange a summit with Ehud Olmert, the Israeli prime minister, were being hindered by the issue of Palestinian prisoners. Olmert had been expected to free a large number of prisoners held in Israeli jails as a gesture to Abbas, but that was put on hold when Palestinian fighters captured an Israeli soldier, Gilad Shalit, in a cross-border raid from Gaza in June. "Everything has stopped because Israel is linking the release of prisoners to the release of Shalit," Abbas said. In a separate development, Jack Wallace, the US consul-general in Jerusalem, denied reports of a $42 million transfer from the US administration to Fatah. Following a meeting with Abbas, Wallace said the US administration would not offer financial assistance to political parties, insisting the media claims to this effect were baseless.

Hamad: "Hamas might accept a government of professionals and independent experts"
// Test entry point: load the shared test helpers, then run each spec suite.
require('js-test-commons');

require('./src/assert.spec');
require('./src/url-builder.spec');
require('./src/is-url.spec');
The 2018 Southeast Sulawesi gubernatorial election took place on 27 June 2018 as part of the simultaneous local elections. It was held to elect the governor of Southeast Sulawesi along with their deputy, whilst members of the provincial council (Dewan Perwakilan Rakyat Daerah) were to be re-elected in 2019. Former governor Ali Mazi came out on top of the three-candidate race, defeating former Kendari mayor Asrun and North Kolaka regent Rusda Mahmud.

Timeline

Registration for party-backed candidates was open between 8 and 10 January 2018, while independent candidates were required to register between 22 and 26 November 2017. The candidates were assigned their ballot numbers on 13 February 2018. The campaigning period ran from 15 February to 24 June, with a three-day election silence before voting on 27 June. In May 2018, KPU declared that there were 1,628,320 eligible voters for the election.
Source: https://gateoverflow.in/13101/ugcnet-june2015-iii-35

Question. Let $f(n)$ and $g(n)$ be asymptotically non-negative functions. Which of the following is correct?

1. $\Theta(f(n)*g(n)) = \min(f(n),g(n))$
2. $\Theta(f(n)*g(n)) = \max(f(n),g(n))$
3. $\Theta(f(n)+g(n)) = \min(f(n),g(n))$
4. $\Theta(f(n)+g(n)) = \max(f(n),g(n))$

Selected answer. Option 4. Since $f(n)\le f(n)+g(n)$ and $g(n)\le f(n)+g(n)$, we have $\max(f(n),g(n))\in O(f(n)+g(n))$. Conversely, multiplying by a constant, $f(n)+g(n)\le 2\max(f(n),g(n))$, hence $\max(f(n),g(n))\in\Omega(f(n)+g(n))$. Therefore $\max(f(n),g(n))\in\Theta(f(n)+g(n))$.

(The same argument does not work for the product: $f(n)\le f(n)*g(n)$ only holds when $g(n)\ge 1$, and if $f(n)$ or $g(n)$ is $0$ then the product gives no lower bound for the maximum.)

Another answer. A (the product) is false and B (the sum) is true, and the question should really read:

"Let $f(n)$ and $g(n)$ be asymptotically non-negative functions. Which of the following is correct?
A. $f(n)*g(n) = \Theta(\max(f(n),g(n)))$
B. $f(n)+g(n) = \Theta(\max(f(n),g(n)))$"

It is not advisable to write $\Theta$ (in general, any asymptotic notation) on the left-hand side of an expression. Why? Suppose it could be written on the LHS. We know $x=\Theta(x)$, which would mean $\Theta(x)=x$ (Eq. 1). Also $2x=\Theta(x)$; then from Eq. 1, $2x=\Theta(x)=x$, so $2x=x$, which gives $2=1$. So it cannot appear on the LHS. Actually "$=\Theta(\cdot)$" defines a binary relation between two functions, much like $a\le b$; writing $\Theta(a)=b$ is something like writing "$\le a = b$", which is meaningless.

How is B true? For any two asymptotically non-negative functions $f(n)$ and $g(n)$, $$\max(f(n),g(n)) \le f(n)+g(n) \le 2\max(f(n),g(n)),$$ so with constants $c_1=1$ and $c_2=2$ the sum $f(n)+g(n)$ is always bounded by $\max(f(n),g(n))$.

How is A false? For any $f(n)$ and $g(n)$, three cases are possible:

Case 1) Neither $f(n)$ nor $g(n)$ is a constant function: then $\max(f(n),g(n)) \le f(n)*g(n)$ asymptotically, so $\max(f(n),g(n))$ cannot provide an upper bound for $f(n)*g(n)$.

Case 2) Both $f(n)$ and $g(n)$ are constant functions, or one of them is a non-zero constant function: in this case $f(n)*g(n) = \Theta(\max(f(n),g(n)))$.

Case 3) At least one of $f(n)$ and $g(n)$ is $0$: in this case $f(n)*g(n) \ne \Theta(\max(f(n),g(n)))$, since $\max(f(n),g(n))$ could be unable to give a lower bound.
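The two bounds above are easy to illustrate numerically. A minimal Python sketch (the concrete $f$ and $g$ are arbitrary illustrative choices):

```python
import math

def f(n):  # example function, arbitrary choice
    return n ** 2

def g(n):  # example function, arbitrary choice
    return n * math.log(n)

# max(f,g) <= f+g <= 2*max(f,g) for non-negative f, g, so the ratio
# (f+g)/max(f,g) stays in [1, 2]: f+g = Theta(max(f,g)).
ratios = [(f(n) + g(n)) / max(f(n), g(n)) for n in (10, 10**3, 10**6)]
assert all(1.0 <= r <= 2.0 for r in ratios)

# For the product, (f*g)/max(f,g) = min(f,g), which grows without bound
# here, so f*g = Theta(max(f,g)) fails in general.
product_ratio = [f(n) * g(n) / max(f(n), g(n)) for n in (10, 10**3, 10**6)]
assert product_ratio[0] < product_ratio[1] < product_ratio[2]
print(ratios, product_ratio)
```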
Source: https://www.natureof3laws.co.in/derive-an-expression-of-magnetic-field-at-the-axial-point-of-a-bar-magnet/

# Derive an expression for the magnetic field at the axial point of a bar magnet (class 12)

In this article, we will derive an expression for the magnetic field at a point on the axis of a bar magnet, so let's get started.

## Derivation of the magnetic field at the axial point of a bar magnet

Let SN be a bar magnet of length $2l$ and magnetic pole strength $q_m$. Suppose the point P at which we wish to find the magnetic field lies on the axis of the magnet at a distance $r$ from the centre of the bar magnet.

Let $B_1$ be the magnetic field at P due to the north pole of the bar magnet and $B_2$ the field at P due to the south pole. Since the two contributions point in opposite directions along the axis, the net field at P is $$B_{axial}=B_1 + (-B_2)$$

The magnetic field due to the north pole at P is $$B_1=\frac{\mu_0}{4\pi}\cdot\frac{q_m}{(r-l)^2}\quad\text{along}\ \overrightarrow{NP}$$ and the field due to the south pole is $$B_2=\frac{\mu_0}{4\pi}\cdot\frac{q_m}{(r+l)^2}\quad\text{along}\ \overrightarrow{PS}$$

Substituting these values, $$B_{axial}=\frac{\mu_0 q_m}{4\pi}\left[\frac{1}{(r-l)^2}-\frac{1}{(r+l)^2}\right]=\frac{\mu_0 q_m}{4\pi}\cdot\frac{4rl}{(r^2-l^2)^2}$$

We know that $q_m\cdot 2l=M$, where $M$ is the magnetic dipole moment of this magnetic dipole (the bar magnet), so the above equation can be rewritten as $$B_{axial}=\frac{\mu_0}{4\pi}\cdot\frac{2Mr}{(r^2-l^2)^2}$$

For a short bar magnet, for which $l \ll r$, the magnetic length is negligible compared with the distance $r$ of the axial point P, and the expression reduces to $$B_{axial}=\frac{\mu_0}{4\pi}\cdot\frac{2M}{r^3}\quad\text{along}\ \overrightarrow{NP}$$

The direction of the axial field of a bar magnet is the same as the direction of the magnetic dipole moment, i.e. from the S-pole to the N-pole, so in vector form $$\vec{B}_{axial}=\frac{\mu_0}{4\pi}\cdot\frac{2\vec{M}}{r^3}$$
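The exact axial formula and its short-magnet limit can be compared numerically. A minimal sketch (Python; the dipole moment $M$ and half-length $l$ below are made-up illustrative values, not from the article):

```python
MU0_OVER_4PI = 1e-7  # mu_0 / (4*pi) in T*m/A

def b_axial_exact(M, l, r):
    """Axial field of a bar magnet: (mu0/4pi) * 2*M*r / (r^2 - l^2)^2."""
    return MU0_OVER_4PI * 2.0 * M * r / (r**2 - l**2) ** 2

def b_axial_dipole(M, r):
    """Short-magnet limit (l << r): (mu0/4pi) * 2*M / r^3."""
    return MU0_OVER_4PI * 2.0 * M / r**3

M, l = 1.0, 0.05  # A*m^2 and m; illustrative values only
for r in (0.5, 1.0, 5.0):
    exact = b_axial_exact(M, l, r)
    approx = b_axial_dipole(M, r)
    rel_err = abs(exact - approx) / exact
    print(f"r = {r} m: exact = {exact:.3e} T, dipole = {approx:.3e} T, "
          f"relative error = {rel_err:.2%}")
```

As expected, the relative error of the dipole approximation shrinks rapidly as $r$ grows relative to $l$.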
In the early Cold War (roughly, the 1950s), nuclear exceptionalism reached such a peak that a new era of military security took hold. Rather than create safety by defending territories aggressively, i.e. through pre-emptive military attacks, or retaliatory military attacks, the superpowers (and that includes the UK) somehow settled for living on the cusp of both scenarios. Because the nuclear bomb, perceived as exceptionally destructive, became a pre-emptive and retaliatory weapon at the same time. And therefore it sort of cancelled itself out as a real weapon of war. This did not actually mean that many strategists, militarists and members of the general public agreed that nuclear weapons were exceptionally bad; indeed, it is well known that many continued to view nuclear bombs as a reasonable aspect of the weapons arsenal. But what it did mean was that the main players in global security, the likes of the Soviet Union, the United States, Great Britain and China, somehow muddled along under the regulatory principles of deterrence. Deterrence was not a new concept – it just means that, when someone can fight you back in the way you attacked them, you decide not to do it. But the attack aspect of the scenario was so awful to people's imaginations that governments and strategists decided it would be best not to test how far the public was willing to go. A collective moral conscience and some limited recognition of government accountability to citizens (really, genuinely wanting a little bit of stability, you know, after those two world wars) sustained the Cold War status quo. It made sense to continue spending money on weapons development, sustain military technologies, and train armies to use new weapons systems, because it all totted up to, peace. More of this point below. A thermo-nuclear explosion entails the same effects as a bomb, but per gram of the weight of the bomb, the strength of the explosion is higher.
That means one bomb covers a very, very large area at once and its temperature is so high that it has a far more damaging, levelling effect on that area (and the people in it). What does make the nuclear weapon exceptional are its qualities of radioactivity. These, popularised in various memorable, fictional formats, cause many varieties of immediate sicknesses and long-term health and environmental issues. No wonder the world was deterred from using them, we might think. Suddenly it was logical to own massive stockpiles of risk-ridden weapons in the name of not using them. But undercover, indeed in files we may not even be able to read yet, nuclear skirmishes, nuclear battles occurred rather more than this regulatory principle, deterrence, would hold. In the North Atlantic Ocean, submarines carrying nuclear warheads had fights, literally bashing each other under Arctic ice. What could have happened? And what of the many accidents, some published and publicised, in which a finger might have pressed 'the button'? Owning and developing the nuclear weapon is not as simple as courting 'nuclear suicide', as many pundits and politicians like to put it. That is simply too easy and quite frankly too offensive (to people who know the meaning of the word) a way of putting it. You see, we fixate rather a lot on the dangers of nuclear weapons, the horrific consequences of nuclear war, and the inherent risk of stationing them around the world, all of which are not in question. But in the meantime, the likes of napalm, mustard gas, sarin, biological warfare, drone strikes, and other extraordinarily large non-nuclear bombs have all been used in conflicts since 1945: attacks as bad as the kind you could see through nuclear weapons have occurred – perhaps not to as great a size – but they have happened and continue to happen. … it would use a high proportion of conventional weapons – no doubt some of the weapons I have listed above.
All the tanks, bombs, guns, artillery, men and women employed in the various wars that hit headlines all the time. There might be nuclear weapons too, of various sizes, and various effects, used strategically from land, sea and air. In effect, the imagined deterrent effect of nuclear Armageddon is wearing off because in the lifetime of those weapons, some people have maintained a belief in their unexceptionality, and some have increased the exceptional power of 'conventional' weaponry. The distinctly fragile accord to allow nuclear weapons to 'cancel each other out' has eroded with the rise of new nuclear states and the many challenges to the authority and power of the Cold War superpowers. At the same time, the weapons that maintained wars have got better, bigger, and been tacitly endorsed by public silence. My thesis highlights that civilians in Britain incubated – to an extent, though not fully – a belief that by having a security system based on a nuclear deterrent, war could be avoided. This belief relied on an impression of that exceptionality of nuclear weapons. This was natural. In the 1950s, war had proven to be devastating enough; it was distinctly unwanted, unjustified and not in anyone's everyday interests to go to war again after the Second World War. People needed space to get on with life, and if emphasising and vocalising the exceptionality of nuclear weapons could do that – then fine – but any attack was an unwanted attack on the postwar home front. Now where are we? Somehow still deeply alert to the horror of nuclear weapons, appalled that statesmen would suggest their deployment, yet somehow not as aware of, or bothered by, the deployment of conventional military arsenals. I am not telling you to become a pacifist, go out on the streets, and protest all wars. But I am suggesting that, without critiquing the hyperbole and rhetoric used to explain and report attack, armaments and war, we lose all sense of proportion, place and time.
Kim Jong-un doesn't think nuclear weapons are exceptional; they are part of a plan that has been in place in North Korea since 1945, to return the country to a whole and force reprisals on new and old enemies. To recognise this is a step towards recognising the power that nuclear weapons continue to have over the aberrations and the silences with which global governments surround the purpose and actuality of real wars. The nuclear imagination extended far and wide; for some reading see: Matthew Grant and Benjamin Ziemann (eds.), Understanding the Imaginary War: Culture, Thought and Nuclear Conflict, 1945-90 (Manchester: Manchester University Press, 2016); Jonathan Hogg, British Nuclear Culture: Official and Unofficial Narratives in the Long 20th Century (London: Bloomsbury Academic, 2016); Joseph Masco, The Nuclear Borderlands: The Manhattan Project in Post-Cold War New Mexico (Princeton, N.J.: Princeton University Press, 2006).
\section{INTRODUCTION} 3D semantic segmentation has a wide range of applications in robotics since most autonomous systems require an accurate and robust perception of their environment. A commonly used sensor for 3D perception in robotics is the LiDAR (Light Detection And Ranging). It provides accurate distance measurements of the surrounding 3D space. In recent years, deep learning approaches are achieving state-of-the-art performance in the 3D LiDAR semantic segmentation task \cite{milioto2019rangenet++, alonso20203d}. However, deep learning methods require large amounts of labeled data to achieve high performances. Besides, deep neural networks often fail at generalizing the learned knowledge to new domains or environments. Therefore, when applying existing models on data with a different distribution than the training data, i.e., from a different domain, the performance is considerably degraded. A slight change in the data distribution can significantly drop the performance. Domain adaptation techniques aim to eliminate or reduce this drop. Existing works for domain adaptation in semantic segmentation focus on RGB data \cite{vu2019advent, zou2019confidence, chen2019domain, li2019bidirectional}. Most of them try to minimize the distribution shift between two different domains. Very few approaches have tackled this problem with LiDAR data \cite{wu2019squeezesegv2}, which equally suffers from the domain shift. RGB data commonly suffers from variations due to light and weather conditions, while the most common variations within 3D point clouds data come from sensor resolution (i.e., sensors with more laser sweeps generate denser point clouds) and from the sensor placement (because point clouds have relative coordinates with respect to the sensor). Both sensor resolution and placement issues are common examples that change the data distribution of the captured 3D point clouds. 
Coping with these issues would enable the use of large existing labeled LiDAR datasets for more realistic use-cases in robotic applications, reducing the need for data labeling. \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{introduccion.png} \caption{ Result of our proposed approach for Domain Adaptation in LiDAR Semantic Segmentation. Given a model trained on the source domain, the top row shows the result on the source domain (SemanticKitti \cite{behley2019semantickitti}). Meanwhile, the bottom row shows the result on the target domain without adaptation and the improved result applying our proposed adaptation. } \label{fig:introduction} \end{figure} This work proposes two strategies to improve unsupervised domain adaptation (UDA) in LiDAR semantic segmentation; see a sample result in Fig. \ref{fig:introduction}. The first strategy addresses this problem by applying a set of simple steps to align the data distribution, reducing the domain gap on the input space. The second strategy proposes how to align the distribution on the output space by aligning the class distribution. These two proposed strategies can be applied in conjunction with current state-of-the-art approaches, boosting their performance. Our main contributions can be summarized as follows: \begin{itemize} \item Simple data processing steps (\textit{data alignment}) that considerably help to reduce the domain gap. Our results show that this step is crucial for a proper domain adaptation. \item A novel learning method for aligning the target class distribution to the source class distribution (\textit{class alignment}), which further improves the adaptation. \end{itemize} We validate our approach on three different scenarios, obtaining state-of-the-art results. We use the SemanticKitti dataset \cite{behley2019semantickitti} as the source domain and adapt it to SemanticPoss \cite{pan2020semanticposs}, to Paris-Lille-3D \cite{roynard2018paris}, and to a newly collected and released dataset.
\section{Related Work} \subsection{3D LiDAR Point Cloud Segmentation} Semantic segmentation of 3D LiDAR data aims to assign a semantic label to every point scanned by the LiDAR sensor. Before the current trend and wide adoption of deep learning approaches, earlier methods relied on exploiting prior knowledge and geometric constraints \cite{xie2019review}. As far as deep learning methods are concerned, there are two main types of approaches to tackle the 3D LiDAR semantic segmentation problem. On one hand, there are approaches that work directly on the 3D points, i.e., the raw point cloud is taken as the input \cite{landrieu2018large, qi2017pointnet, qi2017pointnet++}. On the other hand, other approaches convert this 3D point cloud into another representation (images\cite{alonso20203d}, voxels\cite{zhou2018voxelnet}, lattices\cite{rosu2019latticenet}) in order to have a structured input. For LiDAR semantic segmentation, the most commonly used representation is the spherical projection \cite{alonso20203d, milioto2019rangenet++, lunet, wu2018squeezeseg, wu2019squeezesegv2}. Milioto et al. \cite{milioto2019rangenet++} show that point-based methods, i.e., approaches that work directly on the 3D points, are slower and tend to be less accurate than methods which project the 3D point cloud into a 2D representation and make use of convolutional layers. SqueezeSeg \cite{wu2018squeezeseg} is one of the first works that uses the spherical projection for LiDAR semantic segmentation making use of a Convolutional Neural Network (CNN). Later works have improved this approach using more complex CNNs and adding modules to the SqueezeSeg pipeline. RangeNet \cite{milioto2019rangenet++} proposes a post-processing method for improving the re-projection of the 2D resulting segmentation back to the 3D points. 3D-MiniNet \cite{alonso20203d} proposes a learning module before the CNN which takes the raw point cloud as the input and outputs a learned 2D representation. 
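The spherical projection that these projection-based methods rely on can be sketched in a few lines. The version below is an illustrative NumPy implementation, not code from any of the cited works; the field-of-view parameters are assumptions roughly matching a 64-beam sensor:

```python
import numpy as np

def spherical_projection(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) LiDAR point cloud to an (h, w) range image.

    Rows index elevation and columns index azimuth, as in the spherical
    projection used by SqueezeSeg/RangeNet-style methods. The field-of-view
    values (degrees) are assumed sensor parameters.
    """
    r = np.maximum(np.linalg.norm(points, axis=1), 1e-8)
    yaw = np.arctan2(points[:, 1], points[:, 0])               # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(points[:, 2] / r, -1.0, 1.0))    # elevation

    fu, fd = np.radians(fov_up), np.radians(fov_down)
    u = 0.5 * (1.0 - yaw / np.pi) * w          # column coordinate
    v = (1.0 - (pitch - fd) / (fu - fd)) * h   # row coordinate

    u = np.clip(np.floor(u), 0, w - 1).astype(int)
    v = np.clip(np.floor(v), 0, h - 1).astype(int)

    img = np.full((h, w), -1.0, dtype=np.float32)  # -1 marks empty pixels
    img[v, u] = r                                  # keep last point per pixel
    return img

# Toy example: a single point one metre straight ahead of the sensor.
img = spherical_projection(np.array([[1.0, 0.0, 0.0]]))
print(img.shape, "range at projected pixel:", img[6, 512])
```

Per-pixel channels such as the x, y, z coordinates and remission can be stacked the same way to form the multi-channel 2D input these networks consume.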
\subsection{Unsupervised Domain Adaptation for Semantic Segmentation} Unsupervised Domain Adaptation (UDA) aims to adapt models that have been trained on one specific domain (source domain) to be able to work on a different domain (target domain) where there is a certain lack of labeled training data. Most works follow similar ideas: the input data or features from a source-domain sample and a target-domain sample should be indistinguishable. Several works follow an adversarial training scheme to minimize the distribution shift between the target and source domain data. This approach has been shown to work properly in pixel space \cite{yang2020fda}, in feature space \cite{hoffman2018cycada}, and in output space \cite{vu2019advent}. However, adversarial training schemes tend to present convergence problems. Alternatively, other works follow different schemes. Entropy minimization methods \cite{vu2019advent, chen2019domain} do not require complex training schemes. They rely on a loss function that minimizes the entropy of the unlabeled target domain output probabilities. This entropy minimization is closely linked to self-training methods. For self-training, pseudo-labels are generated from the unlabeled target domain output probabilities for later training with some supervised loss such as the softmax cross-entropy \cite{li2019bidirectional, zou2019confidence}. These self-supervised works follow an iterative and cyclic scheme where pseudo-labels help the model to improve and, as the model improves, the generated pseudo-labels present higher quality. Regarding segmentation on LiDAR data, very few works have studied the problem of domain adaptation. SqueezeSegV2 \cite{wu2019squeezesegv2} based the adaptation on existing adaptation works like correlation alignment \cite{morerio2017minimal}. A very recent work, xMUDA \cite{jaritz2020xmuda}, focuses on combining the LiDAR information with the RGB images for multi-modal domain adaptation.
They propose to apply the KL divergence between the output probabilities of both modalities as the main loss function. Besides, they also apply previously proposed methods like entropy minimization \cite{vu2019advent}. This work investigates different UDA strategies (both existing and novel) to improve UDA for the particular case of LiDAR semantic segmentation. The presented results show their effectiveness in reducing the domain gap. \section{Unsupervised Domain Adaptation For LiDAR Semantic Segmentation} \label{sec:method} This section describes the proposed domain adaptation approach, including the LiDAR semantic segmentation method used, the strategies proposed to reduce the domain gap (data alignment and class distribution alignment), and the formulation of the proposed learning task. Figure \ref{fig:pipeline} presents an overview of our proposed approach, which is further explained in the following subsections. \begin{figure*}[!tb] \centering \includegraphics[width=0.95\linewidth]{pipeline.png} \caption{\textbf{Approach overview.} The figure shows our pipeline steps and optimization losses. First, we perform distribution alignment on the input space, i.e., data alignment strategies. Then, we optimize the segmentation loss for source samples, where the labels are known, and the class alignment and entropy losses for target data, where no labels are available (see Sect. \ref{sec:method} for details). \textcolor{Green}{Green continuous arrows} are used for target data and \textcolor{Blue}{blue pointed arrows} for source data.} \label{fig:pipeline} \end{figure*} \subsection{LiDAR Semantic Segmentation Model} \label{sec:LiDAR-segmentation} We use a recent method for LiDAR semantic segmentation which achieves state-of-the-art performance on several LiDAR segmentation datasets, 3D-MiniNet \cite{alonso20203d}. This method consists of three main steps. First, it learns a 2D representation from the 3D points.
Then, this representation is fed to a 2D Fully Convolutional Neural Network that produces a 2D semantic segmentation. These 2D semantic labels are re-projected back to the 3D space and enhanced through a post-processing module. Let $\src{\mathcal{X}}\subset \mathbb{R}^{N\times 3}$ be a set of source domain LiDAR point clouds along with associated semantic labels, $\src{\mathcal{Y}}\subset (1,C) ^{N}$. Sample $\mm x_s$ is a point cloud of size $N$ and $ \mm y_s^{(n,c)} $ provides the label of point $n$ as a one-hot encoded vector. Let $F$ be our LiDAR segmentation network, which takes a point cloud $\mm x$ and predicts a probability distribution over the $C$ classes for each point of the point cloud, $F(\mm x) = \mm P_{\mm x}^{(n,c)}$. The parameters $\theta_F$ of $F$ are optimized to minimize the cross-entropy loss $\mathcal{L}_{seg}(\src{\mm x}, \src{\mm y}) = -\sum_{n=1}^N\sum_{c=1}^{C} \src{\mm y}^{(n,c)} \log \mm P_{\src{\mm x}}^{(n,c)}$ on source domain samples. Therefore, as far as the supervised semantic segmentation is concerned, the optimization problem simply reads: \begin{equation} \label{eq:onlysource} \min_{\theta_F} \frac{1}{|\mathcal{X}_s|}\sum_{\mm x_s\in\mathcal{X}_s} \mathcal{L}_{seg}(\src{\mm x}, \src{\mm y}). \end{equation} \subsection{Data alignment strategies for LiDAR} \label{sec:strategies} The problem of domain adaptation, i.e., data distribution misalignment, between $\src{\mathcal{X}}$ and $\trg{\mathcal{X}}$ (a set of target domain LiDAR point clouds), can be handled on the network weights ${\theta_F}$, but also by modifying $\src{\mathcal{X}}$ and $\trg{\mathcal{X}}$ in order to align the distributions in the input space. Next, we describe the different strategies for better data alignment that we propose to improve LiDAR domain adaptation.
\paragraph*{\textbf{XYZ-shift augmentation}} One of the main causes of misalignment for LiDAR point clouds is the change in the vehicle set-up: the placement of the sensor in different vehicles and different locations. Since the point cloud values are relative to the sensor origin, these changes cause variations affecting the whole point cloud. Performing strong data augmentation on $\src{\mathcal{X}}$ is crucial to reduce this domain gap. In this work, we perform XYZ shifts large enough to cover the different sensor set-ups. We propose to perform shifts up to $\pm$2 meters on the Z-axis (height) and up to $\pm$5 meters on the Y-axis and X-axis. \paragraph*{\textbf{Per-class augmentation}} Apart from performing standard data augmentation on the whole point cloud, we also propose to perform the augmentation independently per class, in order to enrich the spatial distribution. In particular, in this work, we perform shifts up to $\pm$1 meter on the Z-axis (height) and up to $\pm$3 meters on the Y-axis and X-axis. \paragraph*{\textbf{Same number of beams}} Different LiDAR sensors capture the environment differently. Besides the sensor placement and orientation, a significant difference between sensors is the number of captured beams, which results in a sparser or denser point cloud. We propose to match the beams between the two domains by reducing the data from the sensor with a higher number of beams, ending up with more homogeneous data within $\src{\mathcal{X}}$ and $\trg{\mathcal{X}}$. \paragraph*{\textbf{Only relative features}} Point-cloud segmentation methods commonly use both relative and absolute values of the input data for learning the segmentation. In order to be independent of absolute coordinates, which are less robust than relative coordinates, we propose to use only relative features of the data.
Therefore, for XYZ values (this can be extended to reflectance or depth), we propose to use only relative distances of every point with respect to its neighbors. \subsection{Class distribution alignment} The domain shift appears due to many different factors. For example, different environments can present quite different appearances, the spatial distribution of objects may vary, the capturing set-up for different scenarios can be totally different, etc. Depending on the problem tackled and prior knowledge, we can hypothesize which of these differences can be neglected and assumed not to affect the models we are learning. In this work, all the datasets used are from urban scenarios. Taking this into account, although the data distribution changes between the datasets, we can assume that the class distribution is going to be very similar across these datasets. For example, we can assume that if $\src{\mm{y}}$ has a distribution of 90\% road points and 10\% car points, then $\trg{\mm{y}}$ will likely present a similar distribution. Our approach learns the parameters $\theta_F$ of $F$ in such a way that the predicted class distribution $F(\trg{\mathcal{X}})$ matches the real class distribution of $\src{\mathcal{Y}}$, i.e., the histogram representing the frequency of occurrence of each class, previously computed in an offline fashion. To do so, we propose to compute the KL-divergence between these two class distributions as the class alignment loss $\mathcal{L}_{align}(\trg{\mm x}, \src{\mathcal{Y}}) = \sum_{n=1}^N \mm hist(\src{\mathcal{Y}}) \log\frac{ \mm hist(\src{\mathcal{Y}})}{ \mm P_{\trg{\mm x}}^{(n)}}$. Therefore, the optimization problem reads as: \begin{equation} \label{eq:clasdistribution} \min_{\theta_F} \frac{1}{|\mathcal{X}_t|}\sum_{\mm x_t\in\mathcal{X}_t} \mathcal{L}_{align}(\trg{\mm x}, \src{\mathcal{Y}}). \end{equation} Equation \ref{eq:clasdistribution} requires computing the class distribution $\mm P_{\mm x_t}$ over the whole dataset.
As this is computationally infeasible, we compute it over the batch as an approximation. \subsection{Optimization Formulation} The entropy loss is computed as in the prior work MinEnt \cite{vu2019advent}: \begin{equation}\label{eq:minentloss} \mathcal{L}_{ent}(\trg{\mm x}) =\frac{-1}{\log(C)}\sum_{n=1}^N\sum_{c=1}^{C} \mm P_{\trg{\mm x}}^{(n,c)} \log \mm P_{\trg{\mm x}}^{(n,c)}, \end{equation} while the segmentation loss $\mathcal{L}_{seg}$ and the class alignment loss $\mathcal{L}_{align}$ are computed as detailed in the previous subsections. During training, we jointly optimize the supervised segmentation loss $\mathcal{L}_{seg}$ on source samples and the class alignment loss $\mathcal{L}_{align}$ and entropy loss $\mathcal{L}_{ent}$ on target samples. The final optimization problem is formulated as follows: \begin{equation}\label{eq:losses} \begin{split} &\min_{\theta_F} \frac{1}{|\mathcal{X}_s|}\sum_{\mm x_s} \mathcal{L}_{seg}(\src{\mm x}, \src{\mm y}) + \frac{\lambda_{align}}{|\mathcal{X}_t|} \sum_{\mm x_t}\mathcal{L}_{align}(\trg{\mm x}, \src{\mathcal{Y}}) \\ &+ \frac{\lambda_{ent}}{|\mathcal{X}_t|} \sum_{\mm x_t}\mathcal{L}_{ent}(\trg{\mm x}), \end{split} \end{equation} with~$\lambda_{align}$ and $\lambda_{ent}$ as the weighting factors of the alignment and entropy terms, respectively. \section{Experimental Setup} This section details the setup used in our evaluation. \subsection{Datasets} \label{sec:datasets} We use four different datasets for the evaluation. They were collected in four different geographical areas, with four different LiDAR sensors, and with four different set-ups. We take the well-known SemanticKITTI dataset \cite{behley2019semantickitti} as the source domain dataset and the other three datasets as target domain data. \paragraph*{SemanticKITTI} The SemanticKITTI dataset \cite{behley2019semantickitti} is a recent large-scale dataset that provides dense point-wise annotations for the entire KITTI Odometry Benchmark \cite{geiger2012wekitti}.
The dataset consists of over 43000 LiDAR scans, of which over 21000 are available for training. The dataset distinguishes 22 different semantic classes. The capturing sensor is a Velodyne HDL-64E mounted on a car. \paragraph*{Paris-Lille-3D} Paris-Lille-3D \cite{roynard2018paris} is a medium-size dataset that provides three aggregated point clouds, which are built from continuous LiDAR scans of streets in Paris and Lille. It is collected with a tilted rear-mounted Velodyne HDL-32E placed on a vehicle. Following the PolarNet work \cite{zhang2020polarnet}, we extract individual scans from the registered point clouds thanks to the scanner trajectory and the points' timestamps. Each scan is made of points within $\pm$100m. We take the Lille-1 point cloud for applying the domain adaptation methods and Lille-2 for validation. We use the following semantic classes intersecting with SemanticKitti: car, person, road, sidewalk, building, vegetation, pole, and traffic light. One thing to note is that this dataset only keeps points measured at a distance of less than 20m, and the LiDAR has an angle of 30 degrees between the axis of rotation and the horizontal. This configuration makes each scan have a very limited field of view compared to other LiDAR set-ups. \paragraph*{SemanticPoss} SemanticPoss \cite{pan2020semanticposs} is a medium-size dataset which contains 5 different sequences from urban scenarios, providing 3000 LiDAR scans. The sensor used is a 40-line Pandora mounted on a vehicle. We take the first three sequences for applying the adaptation methods and the last two sequences for validation. We use the following semantic classes intersecting with SemanticKitti: car, person, trunk, vegetation, traffic sign, pole, fence, building, rider, bike, and ground (which combines road and sidewalk in SemanticKitti). \paragraph*{I3A} We have captured a small fourth dataset to test our approach also in a different scenario.
In contrast to the three previous datasets, this dataset is not captured from a vehicle but from a small mobile robot (namely a TurtleBot\footnote{\url{https://www.turtlebot.com/}} platform). Therefore, the sensor is placed at a significantly lower height than in the other set-ups. The capturing sensor is the Velodyne VLP-16. This is a 16-line sensor that captures less dense point clouds compared to the other datasets, making the domain gap bigger. The dataset contains two sequences, one for training and another for validation. We use the semantic classes intersecting with SemanticKitti: car, person, road, sidewalk, building, vegetation, trunk, pole, and traffic light. \subsection{Training Protocol} As mentioned in Sec. \ref{sec:LiDAR-segmentation}, we use 3D-MiniNet \cite{alonso20203d} as the base LiDAR semantic segmentation method. In particular, we use the available 3D-MiniNet-small version because of memory issues. For computing the relative coordinates and features, we follow the 3D-MiniNet approach, extracting them from the N neighbors of each 3D point, where N is set to 16. For all the experiments we train this architecture for 700K iterations with a batch size of 8. We use the Stochastic Gradient Descent (SGD) optimizer with an initial learning rate of $5 \times 10^{-3}$ and a polynomial learning rate decay schedule with the power set to 0.9. We set $\lambda_{ent}$ to 0.001, as suggested in MinEnt \cite{vu2019advent}, and $\lambda_{align}$ to 0.001. We empirically noticed that the performance is very similar when these two hyper-parameters are set between $10^{-5}$ and $10^{-2}$. The two main conditions for them to work properly are: (1) they must be greater than 0, and (2) their terms must not be higher than the supervised loss. One thing to take into account is that, as explained in Sec. \ref{sec:datasets}, Paris-Lille-3D has a very limited field of view. Therefore, in order to make MinEnt \cite{vu2019advent} work on this dataset, we had to simulate the same field of view on the source dataset.
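To make the training objective concrete, the following NumPy sketch evaluates the three loss terms of Eq. \ref{eq:losses} on a batch of per-point class probabilities. This is our own illustrative pseudocode under the paper's notation, not the authors' released implementation; all function names are ours, losses are averaged over the points in a batch, and the class-alignment term uses the batch-mean approximation described above.

```python
import numpy as np

def softmax(logits):
    # Row-wise softmax over the class axis (numerically stabilized).
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def seg_loss(p_src, y_src):
    # Supervised cross-entropy on labeled source points (point-averaged
    # variant of Eq. 1). p_src: (N, C) probabilities, y_src: (N, C) one-hot.
    return -np.mean(np.sum(y_src * np.log(p_src + 1e-12), axis=1))

def align_loss(p_tgt, src_hist):
    # KL divergence between the pre-computed source class histogram and
    # the batch-averaged predicted class distribution on target points.
    pred_hist = p_tgt.mean(axis=0)
    return np.sum(src_hist * np.log((src_hist + 1e-12) / (pred_hist + 1e-12)))

def ent_loss(p_tgt):
    # Normalized entropy of target predictions (point-averaged Eq. 3).
    C = p_tgt.shape[1]
    return -np.mean(np.sum(p_tgt * np.log(p_tgt + 1e-12), axis=1)) / np.log(C)

def total_loss(p_src, y_src, p_tgt, src_hist,
               lam_align=0.001, lam_ent=0.001):
    # Joint objective of Eq. 4 with the lambda values used in the paper.
    return (seg_loss(p_src, y_src)
            + lam_align * align_loss(p_tgt, src_hist)
            + lam_ent * ent_loss(p_tgt))
```

In a real pipeline these terms would be computed on the network outputs and differentiated with an autodiff framework; plain NumPy is used here only to show the arithmetic of the objective.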
\section{Results} This section presents the experimental validation of our approach compared to different baselines. The proposed approach achieves better results than the other baselines in the three different scenarios for unsupervised domain adaptation in LiDAR Semantic Segmentation. In all the experiments we use the SemanticKITTI dataset \cite{behley2019semantickitti} as the source data distribution and perform the adaptation on the other three datasets\footnote{\url{code-and-datasets-to-be-released-upon-acceptance}}. \subsection{Ablation Study} \begin{figure*}[!tb] \centering \includegraphics[width=1\linewidth]{results.png} \caption{Visual results of the LiDAR domain adaptation with different adaptation methods for one example from each target dataset: I3A dataset first row, ParisLille dataset second row, and SemanticPoss last row. From left to right: input point cloud, ground truth labels, baseline with no adaptation (trained on SemanticKitti), MinEnt \cite{vu2019advent} adaptation approach, our adaptation with data processing strategies only, our full adaptation pipeline. Best viewed in color. } \label{fig:visual-results} \end{figure*} The experiments in this subsection show how the different data alignment steps and the proposed learning losses affect the final performance of our approach. Table \ref{tab:ablation} summarizes the ablation study performed on three different scenarios. The results show how all the proposed steps contribute towards the final performance. The main insights observed in the ablation study are discussed next. \begin{table}[!bh] \centering \caption{ Ablation study of our domain adaptation pipeline for semantic segmentation.
Source dataset: SemanticKitti \cite{behley2019semantickitti}.} \label{tab:ablation} \resizebox{\columnwidth}{!}{ \begin{tabular}{rccc} \toprule[1.0pt] & \textbf{mIoU on} & \textbf{mIoU on} & \textbf{mIoU on} \\ Target dataset & \textbf{I3A} &\textbf{ParisLille} & \textbf{SemanticPoss} \\% \rotatebox{90}{\textbf{mIoU}} \toprule[1.0pt] Base model & 15.9 & 19.2 & 13.4\\ + XYZ-shift augmentation & 25.1 & 28.9 &16.3 \\ + Per-class augmentation & 27.0 &30.1 & 17.2 \\ + Same number of beams & 42.0 &35.4 & 18.3 \\ + Only relative features & 47.1 & --- & 19.0 \\ + MinEnt \cite{vu2019advent} & 50.3 &41.5 & 26.2 \\ + Class distribution alignment & 52.5 &42.7 & 27.0 \\ \toprule[1.0pt] \multicolumn{4}{p{8cm}}{\footnotesize {--- Not used because there was no performance gain.}} \end{tabular} } \end{table} Performing strong \textit{XYZ-shifts} results in a boost in performance, meaning that the domain gap is considerably reduced. The distribution gap reduced by this step is the one caused by the use of different LiDAR sensor set-ups (such as different acquisition sensor heights). Besides, in these autonomous driving set-ups, the distance between the car and the objects depends on how wide the streets are or on which lane the capturing vehicle drives. Therefore, this is an essential and really easy data transformation to perform, which gives an average gain of 7.2\% mIoU. The \textit{per-class data augmentation} also boosts the performance. This data augmentation method tries to reduce the domain gap by adding different relative distances between different classes, yielding an average gain of 1.3\% mIoU. Another interesting and straightforward technique is to \textit{match the number of LiDAR beams} of the source and target data, i.e., to match the LiDAR point-cloud resolution. This helps the data alignment, especially by producing the same point density in the 3D point cloud and similar relative distances between the points.
We show that this method gives an improvement similar to the XYZ-shift data augmentation, hugely reducing the domain gap. We can observe that the higher the initial difference in the number of beams, the more improvement we can get: the I3A LiDAR has 16 beams, ParisLille 32, and SemanticPoss 40, compared to the 64 of the source data (SemanticKitti). The use of \textit{relative features only} does not always help to reduce the domain gap; it was only beneficial on the I3A and SemanticPoss datasets. Removing the absolute features and only learning from relative features helps especially when the relative distances between the 3D points have less domain shift than the absolute coordinates. This will depend on the dataset, but the stronger the differences between capturing sensors, the more likely that the use of relative features will help. Besides the data alignment steps, our approach adds two \textit{learning losses} to the pipeline to help reduce the domain gap. The first one is the entropy minimization loss proposed in MinEnt \cite{vu2019advent}. The second one is the class distribution alignment loss introduced in this work. We show that these two losses can be combined for the domain adaptation problem and that, although less significantly than the previously discussed steps, they also improve the results on the three different set-ups, contributing to achieving state-of-the-art performance. \subsection{Comparison with other baselines} \begin{table}[!bh] \centering \caption{ Results on the three different LiDAR semantic segmentation datasets using different domain adaptation methods.
The source dataset is the SemanticKitti dataset \cite{behley2019semantickitti}.} \label{tab:adaptation} \begin{tabular}{rccc} \toprule[1.0pt] & \textbf{mIoU on} & \textbf{mIoU on} & \textbf{mIoU on} \\ & \textbf{I3A} &\textbf{ParisLille} & \textbf{SemanticPoss} \\% \rotatebox{90}{\textbf{mIoU}} \toprule[1.0pt] Baseline & 15.9 & 19.2 & 13.4\\ MinEnt \cite{vu2019advent} & 28.4 & 23.2 & 19.6 \\ AdvEnt \cite{vu2019advent} & 21.0 & 20.7 & 19.5\\ MaxSquare \cite{chen2019domain} & 28.4 & 22.8 &19.3\\ \toprule[1.0pt] Data alignment (ours)* & 47.1 & 36.2 &19.0 \\ Full approach (ours) & 52.5 &42.7 & 27.0 \\ \toprule[1.0pt] \multicolumn{4}{p{8cm}}{\footnotesize {* Only data alignment strategies from Sect. \ref{sec:strategies} }} \end{tabular} \end{table} Table \ref{tab:adaptation} shows the comparison of our pipeline (composed of all the steps discussed in the ablation study) with other existing methods for domain adaptation. We select MinEnt, AdvEnt, and MaxSquare as the baselines because they are leading methods in the state of the art for unsupervised domain adaptation. We use the available authors' code for replication. We apply the different domain adaptation methods on the three different set-ups without our data alignment steps. This comparison shows that good pre-processing of the data can obtain better results than just applying out-of-the-box methods for domain adaptation. It also shows that our complete pipeline outperforms these previous methods on LiDAR domain adaptation. Our results demonstrate that combining proper data processing with learning methods for domain adaptation gives, on average, more than a $2\times$ boost in performance. Figure \ref{fig:visual-results} includes a few examples of the segmentation obtained with a baseline without domain adaptation, with the MinEnt \cite{vu2019advent} approach only, with our approach using only the data pre-processing steps, and with our approach including all the proposed steps.
We can appreciate in Figure \ref{fig:visual-results} how data processing helps on certain semantic classes, such as road, person, car, or vegetation, while MinEnt usually improves on different ones, like building. This suggests the good complementarity of both strategies, and indeed combining them provides the best results. Additional results can be seen in the supplementary video. \section{Conclusions} In this work, we introduce a novel pipeline that addresses the task of unsupervised domain adaptation for LiDAR semantic segmentation. Our pipeline consists of aligning the data distributions in the input space with different simple strategies, combined with learning losses on the semantic segmentation process that also enforce the data distribution alignment. Our results show that a proper data alignment on the input space can produce better domain adaptation results than just using out-of-the-box state-of-the-art learning methods. Besides, we show that combining these data alignment methods with learning methods, like the one proposed in this work to align the class distributions of the data, can further reduce the domain gap, achieving new state-of-the-art results. Our approach is validated on three different scenarios, with different datasets as the target domain, where we show that our full pipeline improves on previous methods in all three cases. { \bibliographystyle{IEEEtran}
Robert Gordon (1786–1864) was a British landowner and politician. Life He was the only son of William Gordon, a West Indies planter, of Auchendolly in Kirkcudbrightshire, and his wife Anna Nash, daughter of Stephen Nash of Bristol, and was educated at Eton College from 1799 to 1803. He entered Lincoln's Inn in 1803, and matriculated at Christ Church, Oxford in 1804, graduating B.A. in 1808 and M.A. in 1824. Gordon succeeded his father in 1802, inheriting the West Indies plantation, and estates in Sherborne, Dorset and Cricklade, Wiltshire. He was a cornet (1805) and lieutenant (1808) in the Dorset yeomanry and a captain in the Wiltshire yeomanry (1816). Appointed High Sheriff of Gloucestershire for 1811–12, Gordon served as MP for Wareham from 1812 to 1818. He was then MP for Cricklade from 1818 to 1837 and for New Windsor from 1837 to 1841. He was a commissioner to the Board of Control from 1832 to 1833 and joint secretary from 1833 to 1834 and from 1835 to 1839. He was Financial Secretary to the Treasury from 1839 to 1841. Gordon died in 1864. Family Gordon married his cousin Elizabeth Anne, the daughter of Charles Westley Coxe of Kemble House, Gloucestershire. References External links 1786 births 1864 deaths UK MPs 1812–1818 UK MPs 1818–1820 UK MPs 1820–1826 UK MPs 1826–1830 UK MPs 1830–1831 UK MPs 1831–1832 UK MPs 1832–1835 UK MPs 1835–1837 UK MPs 1837–1841 British landowners Members of Lincoln's Inn Alumni of Christ Church, Oxford High Sheriffs of Gloucestershire Members of the Parliament of the United Kingdom for Cricklade Members of the Parliament of the United Kingdom for Windsor Members of the Parliament of the United Kingdom for Wareham 19th-century British businesspeople People educated at Eton College English barristers
\section{Introduction} According to a well-known Asian proverb, ``There are a thousand Hamlets in a thousand people's eyes''. It means that different people have different opinions. These differences usually include both positive and negative attitudes. They are reflected in many fields, including business, politics, and academia. For instance, a research paper submitted to CIKM may receive completely opposite reviews from two reviewers. Reviewer 1 gives an ``Accept'' decision, while Reviewer 2 chooses the ``Reject'' option. Understanding and modeling these differences is a useful perspective on a range of social computing studies (e.g., AI peer review~\cite{heaven2018ai} and congressional vote prediction~\cite{karimi2019multi}). \begin{figure} \centering \includegraphics[width=0.9\linewidth]{./imgs/SBN.pdf} \caption{Common application scenarios for signed bipartite networks. } \label{fig:application} \vspace{-15pt} \end{figure} \figref{fig:application} shows some common application scenarios for signed bipartite networks, including product review, bill vote, and peer review. Some opinions can be viewed as positive relationships, such as favorable reviews on products, supporting a bill, accepting a paper, and so on. Meanwhile, some opinions are negative links that indicate negative reviews, disapproving a bill, rejecting a paper, and so forth. These scenarios can be modeled as signed bipartite networks, which include two sets of nodes (i.e., $\mathcal{U}$ and $\mathcal{V}$) and links with positive and negative relationships between the two sets. Compared with unsigned bipartite networks, the links of signed bipartite networks are more complicated, including two opposite relationships (i.e., positive and negative links). Besides, previous works on signed networks only focus on unipartite signed networks, which are networks that have a single node type~\cite{facchetti2011computing}. The different node types in signed bipartite networks represent different kinds of entities.
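As a toy illustration of this structure (our own example, not one of the paper's datasets), a signed bipartite network can be stored as a signed biadjacency matrix between $\mathcal{U}$ and $\mathcal{V}$, here for the reviewer/paper scenario:

```python
import numpy as np

# Toy signed bipartite network: 3 reviewers (U) x 2 papers (V).
# +1 = positive link (accept), -1 = negative link (reject),
#  0 = no link (the reviewer did not review that paper).
reviewers = ["r1", "r2", "r3"]
papers = ["p1", "p2"]
B = np.array([[+1, -1],
              [-1,  0],
              [+1, +1]])

pos_links = int((B == 1).sum())   # count positive links
neg_links = int((B == -1).sum())  # count negative links
print(pos_links, neg_links)       # prints: 3 2
```

The same matrix view makes the two-perspective analysis below natural: rows compare members of $\mathcal{U}$, columns compare members of $\mathcal{V}$.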
Modeling signed bipartite networks is a promising and challenging research field. By modeling the above scenarios as signed bipartite networks, we conduct social network analysis on real-world datasets and use advanced graph representation learning methods to model them. Balance analysis is one of the key research problems in signed graph modeling~\cite{huang2021signlens}. A common balance analysis method is to count the number of balanced signed triangles in unipartite signed networks~\cite{leskovec2010signed}. For signed bipartite networks, \cite{derr2019balance} defines the signed butterfly isomorphism and uses it to analyze balance in signed bipartite networks. However, signed butterfly isomorphisms may be missing due to data sparsity. In this paper, we offer a new perspective for analyzing balance theory on signed bipartite networks. By sign construction, we construct links between the nodes in the same set and count the signed triangles formed by these links. We analyze the balance theory of signed bipartite networks from both perspectives and explore the balance change in the peer review scenario. We find that, after rebuttal, the balance of the review signed bipartite network increased. In addition to social network analysis, graph representation learning is another important tool. Graph Neural Networks (GNNs) have achieved state-of-the-art results in graph representation learning. Combining the two perspectives, we propose \underline{S}igned \underline{B}ipartite \underline{G}raph \underline{N}eural \underline{N}etworks (SBGNNs). This model follows the message passing scheme, but we redesign the message function, aggregation function, and update function. Our SBGNN model achieves state-of-the-art results on \lsp, which is the main machine learning task in signed networks~\cite{derr2020link}.
To the best of our knowledge, none of the existing GNN methods has paid special attention to signed bipartite networks; this is the first work to introduce GNNs to signed bipartite networks. To summarize, the major contributions of this paper are as follows: \begin{itemize}[leftmargin=*] \item By defining the signed relationships within the same set of nodes (e.g., agreement/disagreement), we provide a new perspective for analyzing signed bipartite networks, which can measure the unbalanced structure of signed bipartite networks from the same set of nodes. \item Combining the two perspectives, we introduce a new layer-by-layer SBGNN model. By defining new message, aggregation, and update functions, SBGNNs aggregate information from neighbors in different node sets and output effective node representations. \item We conduct \lsp experiments on four real-world signed bipartite networks, including product review, bill vote, and peer review. Experimental results demonstrate the effectiveness of our proposed model. \end{itemize} \section{Related Work} \label{sec:related_work} \subsection{Signed Graph Modeling} Signed networks are social networks having both positive and negative links\citepp{easley2010networks}. Balance theory~\cite{heider1944social, cartwright1956structural} is the fundamental theory in the signed network field~\cite{kirkley2019balance}. For classical signed networks, signed triangles are the most common way to measure the balance of signed networks~\cite{szell2010multirelational}. Distinct from homogeneous networks, there are two types of nodes in bipartite networks. For signed bipartite networks, \cite{derr2019balance} conducts a comprehensive analysis of balance theory in signed bipartite networks, using the smallest cycle in signed bipartite networks (i.e., signed butterflies).
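To make this notion concrete (a toy illustration of ours, not code from \cite{derr2019balance}): a signed butterfly is a 4-cycle $u_1\!-\!v_1\!-\!u_2\!-\!v_2$, and, as with cycles in unipartite signed graphs, it is balanced when it contains an even number of negative edges, i.e., when the product of its four edge signs is positive. A minimal sketch that enumerates and classifies butterflies:

```python
from itertools import combinations

def butterfly_balance(edges):
    """Count balanced/unbalanced signed butterflies (4-cycles
    u1-v1-u2-v2) in a signed bipartite edge list.
    edges: dict mapping (u, v) -> +1 or -1."""
    us = sorted({u for u, _ in edges})
    vs = sorted({v for _, v in edges})
    balanced = unbalanced = 0
    for u1, u2 in combinations(us, 2):
        for v1, v2 in combinations(vs, 2):
            signs = [edges.get((u1, v1)), edges.get((u1, v2)),
                     edges.get((u2, v1)), edges.get((u2, v2))]
            if None in signs:
                continue  # the four edges do not form a complete 4-cycle
            if signs[0] * signs[1] * signs[2] * signs[3] > 0:
                balanced += 1    # even number of negative edges
            else:
                unbalanced += 1  # odd number of negative edges
    return balanced, unbalanced

toy = {("u1", "v1"): 1, ("u1", "v2"): 1,
       ("u2", "v1"): 1, ("u2", "v2"): -1}
print(butterfly_balance(toy))  # prints: (0, 1)
```

The brute-force enumeration is only for illustration; on real datasets, butterfly counting is typically done with wedge-based algorithms.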
To mine signed networks, many algorithms have been developed for a variety of tasks, such as community detection\citepp{traag2009community,bonchi2019discovering}, node classification\citepp{tang2016node}, node ranking~\cite{shahriari2014ranking}, and spectral graph analysis\citepp{li2016spectral}. The \lsp is the main machine learning task for signed networks~\cite{song2015link}. Modeling balance theory usually leads to better experimental results on \lsp~\cite{huang2021sdgnn}. For example, \cite{leskovec2010predicting} extracts features by counting signed triangles and achieves good performance in \lsp. Even for recent signed network embedding methods~\cite{wang2017signed,chen2018bridge,mara2020csne, javari2020rose}, balance theory remains an important guideline for designing models. Specifically, SiNE~\cite{wang2017signed} designs an objective function guided by balance theory to learn signed network embeddings and outperforms feature-based methods. For signed bipartite networks, how to incorporate balance theory is a research-worthy problem. \subsection{Graph Representation Learning} Graph Representation Learning (or Network Representation Learning) aims to learn a mapping that embeds nodes, or entire (sub)graphs, as points in a low-dimensional vector space~\cite{hamilton2017representation}. The nodes in the graph are represented as node embeddings, which reflect the structure of the original graph. The common methods for graph representation learning include matrix factorization-based methods~\cite{ou2016asymmetric}, random-walk based algorithms~\cite{perozzi2014deepwalk,tang2015line,grover2016node2vec}, and graph neural networks. Specifically, Node2vec~\cite{grover2016node2vec} extends DeepWalk~\cite{perozzi2014deepwalk} by performing biased random walks to generate the corpus of node sequences, and it efficiently explores more diverse neighborhoods.
For various complex networks (e.g., bipartite networks~\cite{gao2018bine} and signed networks~\cite{yuan2017sne}), researchers have also proposed a variety of embedding methods by adapting the random walk methods. Recently, graph neural networks (GNNs) have received tremendous attention due to their power in learning effective representations for graphs~\cite{xugraph, xu2018graph}. Most GNNs can be summarized as a message-passing scheme where the node representations are updated by aggregating and transforming the information from the neighborhood~\citep{gilmer2017neural}. GNNs partially overlap with the above approaches, but use deep learning instead of matrix factorization or random walks, and can better capture the network structure~\citep{wu2019comprehensive}. Many GNN models show better performance than shallow lookup embeddings~\citep{kipf2016semi,velickovic2017graph,hamilton2017inductive}. Most GNNs are designed for unsigned social networks whose links are only positive. For signed networks, some signed GNNs~\cite{derr2018signed, huang2019signed} have been proposed to model balance theory using convolution or attention mechanisms. However, they cannot handle signed bipartite networks, because there are no links between nodes in the same set. It is not trivial to transfer these models to signed bipartite networks. \section{Balance Theory in Signed Bipartite Networks} \label{sec:balance_thoery} \begin{figure*} \centering \includegraphics[width=\textwidth]{./imgs/SBN_balance.pdf} \caption{For a signed bipartite network, there exist two different analysis perspectives. Perspective 1 analyzes signed butterfly isomorphism classes. From Perspective 2, we can analyze signed triangle isomorphism classes via sign construction. } \label{fig:perspective} \end{figure*} For signed networks, balance theory is one of the most fundamental and well-studied social theories, which originated in social psychology in the 1950s~\cite{heider1944social}.
It posits that, owing to stress or psychological dissonance, people strive to minimize unbalanced states in their personal relationships, and hence move toward balanced social settings. Specifically, triads with an even number of negative edges are defined as balanced. However, previous research on balance theory has focused on unipartite signed networks; measuring balance in signed bipartite networks is less studied. In this section, we give two perspectives for analyzing balance theory in signed bipartite networks. \subsection{Signed Bipartite Networks} \label{sec:signed_bipratite_networks} \begin{table} \centering \caption{Statistics on Signed Bipartite Networks.} \label{tab:dataset} \setlength{\tabcolsep}{0.8mm}{ \begin{tabular}{lccccc} \toprule & Bonanza & \begin{tabular}[c]{@{}c@{}}U.S. \\House \end{tabular} & \begin{tabular}[c]{@{}c@{}}U.S. \\Senate \end{tabular} & \begin{tabular}[c]{@{}c@{}}Preliminary \\Review \end{tabular} & \begin{tabular}[c]{@{}c@{}}Final \\Review \end{tabular} \\ \midrule $|\mathcal{U}|$ & 7,919 & 515 & 145 & 182 & 182 \\ $|\mathcal{V}|$ & 1,973 & 1,281 & 1,056 & 304 & 304 \\ $|\mathcal{E}| = |\mathcal{E}^+| + |\mathcal{E}^-|$& 36,543 & 114,378 & 27,083 & 1,170 & 1,170 \\ \% Positive Links & 0.980 & 0.540 & 0.553 & 0.403 & 0.397 \\ \% Negative Links & 0.020 & 0.460 & 0.447 & 0.597 & 0.603 \\ \bottomrule \end{tabular} } \end{table} First, we describe the datasets used in this paper. The first dataset is from the e-commerce website Bonanza\footnote{https://www.bonanza.com/}, which is similar to eBay\footnote{https://www.ebay.com/} or Taobao\footnote{https://www.taobao.com/}. Users can purchase products from a seller and rate the seller with ``Positive'', ``Neutral'', or ``Negative'' scores. In this dataset, $\mathcal{U}$ represents the buyers, and $\mathcal{V}$ represents the sellers. The next two datasets (i.e., U.S. Senate and U.S. House) are from the 1st to 10th United States Congress vote records.
It is collected from Govtrack.us\footnote{https://www.govtrack.us/}. The senators or representatives $\mathcal{U}$ can vote ``Yea'' or ``Nay'' on bills $\mathcal{V}$, which correspond to positive or negative links, respectively. The above datasets were collected and used by \cite{derr2019balance}\footnote{https://github.com/tylersnetwork/signed\_bipartite\_networks}. The last dataset is the peer review data from a top computer science conference\footnote{Due to anonymity, we removed the name of the conference.}. Reviewers $\mathcal{U}$ can give ``SA'' (Strong Accept), ``A'' (Accept), ``WA'' (Weak Accept), ``WR'' (Weak Reject), ``R'' (Reject), or ``SR'' (Strong Reject) to papers $\mathcal{V}$ after reviewing. We regard ``SA'', ``A'', and ``WA'' as positive links and ``SR'', ``R'', and ``WR'' as negative links. It is worth mentioning that most computer science conference peer reviews include a rebuttal phase, in which authors can point out errors in the reviews and help clarify reviewers' misunderstandings. It has proven to play a critical role in the final decision made by the meta-reviewers and the reviewers\footnote{https://icml.cc/FAQ/AuthorResponse}. Besides, during the rebuttal phase, the reviewers can see the scores of other reviewers and adjust their review comments and scores based on the authors' response and other reviewers' comments. Therefore, the peer review dataset is divided into two parts: Preliminary Review and Final Review. We list the statistics of the datasets in \tableref{tab:dataset}. From \tableref{tab:dataset}, we can find that the negative link ratio varies across scenarios. In the scenario of product reviews, the ratio of negative links is relatively low (i.e., 0.020): buyers rarely give bad ratings to sellers. In the scenario of bill votes, the proportion of negative links increases compared to the scenario of product reviews (i.e., 0.460 and 0.447).
On many bills, it is difficult for legislators to reach consensus due to differing political standpoints. In the scenario of peer reviews, the ratio of negative links is higher than that of positive links (i.e., $0.603 > 0.397$). In the top conferences of computer science, the acceptance rate needs to be controlled (e.g., about 20\% at CIKM\footnote{https://www.openresearch.org/wiki/CIKM}), so reviewers usually raise their standards when reviewing papers, which leads to a greater probability of negative reviews. Surprisingly, after the rebuttal phase, the proportion of negative links rises slightly (i.e., from 0.597 to 0.603). \subsection{Signed Caterpillars and Signed Butterflies} The ``butterfly'', i.e., the complete $2\times 2$ biclique, is the most basic motif that models cohesion in an unsigned bipartite network~\cite{sanei2018butterfly}. Based on the butterfly definition, \cite{derr2019balance} extends it to the signed butterfly by assigning signs to the links of the classical butterfly isomorphism. In addition, \cite{derr2019balance} denotes ``signed caterpillars'' as paths of length three that are missing just one link to become a signed butterfly. They use signed butterflies to investigate balance theory in signed bipartite networks. According to the definition of \cite{derr2019balance}, we use the notation $\bigcirc ~\bigcirc~\bigcirc~\bigcirc$ to denote a signed butterfly isomorphism class that represents the links between $\mathcal{U}$ and $\mathcal{V}$ (i.e., $u_1\rightarrow^{\bigcirc} v_1, u_1\rightarrow^{\bigcirc} v_2, u_2\rightarrow^{\bigcirc} v_1, u_2\rightarrow^{\bigcirc} v_2$). Due to the symmetry of the structure, we obtain 7 non-isomorphic signed butterfly classes. We show them under Perspective 1 in \figref{fig:perspective} and divide them into two categories, balanced and unbalanced.
For example, isomorphism classes $\texttt{+}\texttt{+}\texttt{+}\texttt{+}$ and $\texttt{-}\texttt{-}\texttt{-}\texttt{-}$ denote the classes having all positive or all negative links, respectively. In the scenario of peer reviews, we can interpret isomorphism class $\texttt{+}\texttt{+}\texttt{+}\texttt{+}$ as the situation where reviewer $u_1$ and reviewer $u_2$ both give ``Accept'' to paper $v_1$ and paper $v_2$ (i.e., $u_1\rightarrow^+ v_1, u_1\rightarrow^+ v_2, u_2\rightarrow^+ v_1, u_2\rightarrow^+ v_2$). Similarly, we can interpret isomorphism class $\texttt{-}\texttt{-}\texttt{-}\texttt{-}$ as reviewer $u_1$ and reviewer $u_2$ both rejecting paper $v_1$ and paper $v_2$ (i.e., $u_1\rightarrow^- v_1, u_1\rightarrow^- v_2, u_2\rightarrow^- v_1, u_2\rightarrow^- v_2$). Besides isomorphism classes $\texttt{+}\texttt{+}\texttt{+}\texttt{+}$ and $\texttt{-}\texttt{-}\texttt{-}\texttt{-}$, isomorphism classes $\texttt{+}\texttt{-}\texttt{-}\texttt{+}$, $\texttt{+}\texttt{+}\texttt{-}\texttt{-}$, and $\texttt{+}\texttt{-}\texttt{+}\texttt{-}$ are also balanced since they have an even number of negative links. In fact, the definition of signed butterflies can be viewed as analyzing the signed bipartite network from Perspective 1. \subsection{Signed Triangles in Signed Bipartite Networks} In signed bipartite networks, nodes of the same set are not connected. Therefore, we propose a new sign construction process that derives same-set links by comparing the signs of links from $\mathcal{U}$ to $\mathcal{V}$. After sign construction, we have signed links between nodes in the same set. That is, we obtain two new signed networks, one for $\mathcal{U}$ and one for $\mathcal{V}$.
As shown under Perspective 2 in \figref{fig:perspective}, when $u_1$ and $u_2$ have links with the same sign on $v_1$ (i.e., $u_1\rightarrow^{+} v_1, u_2\rightarrow^{+} v_1$ or $u_1\rightarrow^{-} v_1, u_2\rightarrow^{-} v_1$), we construct a positive link between $u_1$ and $u_2$ (i.e., $\texttt{+}\texttt{+}\Rightarrow \texttt{+}$ and $\texttt{-}\texttt{-}\Rightarrow \texttt{+}$). When $u_1$ and $u_2$ have links with different signs on $v_1$ (i.e., $u_1\rightarrow^{+} v_1, u_2\rightarrow^{-} v_1$), we construct a negative link between $u_1$ and $u_2$ (i.e., $\texttt{+}\texttt{-}\Rightarrow \texttt{-}$). Since $\mathcal{U}$ is a set of people nodes (e.g., buyers, legislators, and reviewers), the positive and negative links can be regarded as agreement and disagreement. For $\mathcal{V}$, a positive link can be viewed as similarity, and vice versa. After constructing the signed links between nodes of the same type, we can apply the balance theory analysis of classical signed networks. We can calculate the ratio of balanced triads (i.e., triads with an even number of negative edges) among all triads~\cite{leskovec2010signed}. The signed triangles $\texttt{+}\texttt{+}\texttt{+}$ and $\texttt{+}\texttt{-}\texttt{-}$ are balanced, following the principle that ``\textit{the friend of my friend is my friend, the enemy of my enemy is my friend}''. \subsection{Balance Theory Analysis} In this subsection, we analyze balance theory in the different datasets from both perspectives. From Perspective 1, we follow \cite{derr2019balance} and calculate the percentage each isomorphism class takes up of the total signed butterfly count in each dataset as ``\%''. Besides, we also calculate ``\%E'' as the expected percentage of each signed butterfly class when randomly reassigning the positive and negative signs in the signed bipartite network.
For example, ``\%E'' for the isomorphism class $\texttt{+}\texttt{-}\texttt{-}\texttt{-}$ is \[ \left(\begin{array}{c} 4 \\ 1 \end{array}\right)\left(\left(\left|\mathcal{E}^{+}\right| /|\mathcal{E}|\right) \times\left(\left|\mathcal{E}^{-}\right| /|\mathcal{E}|\right)^{3}\right). \] For Perspective 2, we count the percentage of each signed triangle class as ``\%'' and the expected percentage of such signed triangles as ``\%E''. Similarly, ``\%E'' in $\mathcal{U}$ for $\texttt{+}\texttt{-}\texttt{-}$ is \[ \left(\begin{array}{c} 3 \\ 1 \end{array}\right)\left(\left(\left|\mathcal{E}^{+}_\mathcal{U}\right| /|\mathcal{E}_\mathcal{U}|\right) \times\left(\left|\mathcal{E}_\mathcal{U}^{-}\right| /|\mathcal{E}_\mathcal{U}|\right)^{2}\right), \] where $|\mathcal{E}_\mathcal{U}| = |\mathcal{E}^-_\mathcal{U}| + |\mathcal{E}^+_\mathcal{U}|$, and $\mathcal{E}^+_\mathcal{U}$ and $\mathcal{E}^-_\mathcal{U}$ are the positive and negative edges in $\mathcal{U}$, respectively. Since $\mathcal{U}$ is the set of people nodes, which is easier to interpret, we only list the results for $\mathcal{U}$. \begin{table*}[!htp] \centering \caption{Balance theory analysis on five real-world datasets from two perspectives.} \label{tab:balance_analysis} \begin{tabular}{lccccc} \toprule & Bonanza & \begin{tabular}[c]{@{}c@{}}U.S. \\House \end{tabular} & \begin{tabular}[c]{@{}c@{}}U.S.
\\Senate \end{tabular} & \begin{tabular}[c]{@{}c@{}}Preliminary \\Review \end{tabular} & \begin{tabular}[c]{@{}c@{}}Final \\Review \end{tabular} \\ \midrule Signed Butterfly Isomorphism $\texttt{+}\texttt{+}\texttt{+}\texttt{+}$ (\%, \%E) & (0.986, 0.922) & (0.244, 0.085) & (0.262, 0.094) & (0.109, 0.026) & (0.115$\uparrow$, 0.025) \\ Signed Butterfly Isomorphism $\texttt{+}\texttt{-}\texttt{-}\texttt{+}$ (\%, \%E) & (0.000, 0.001) & (0.109, 0.123) & (0.108, 0.122) & (0.109, 0.116) & (0.072$\downarrow$, 0.115) \\ Signed Butterfly Isomorphism $\texttt{+}\texttt{+}\texttt{-}\texttt{-}$ (\%, \%E) & (0.001, 0.001) & (0.111, 0.123) & (0.110, 0.122) & (0.101, 0.116) & (0.057$\downarrow$, 0.115) \\ Signed Butterfly Isomorphism $\texttt{+}\texttt{-}\texttt{+}\texttt{-}$ (\%, \%E) & (0.000, 0.001) & (0.186, 0.123) & (0.184, 0.122) & (0.156, 0.116) & (0.215$\uparrow$, 0.115) \\ Signed Butterfly Isomorphism $\texttt{-}\texttt{-}\texttt{-}\texttt{-}$ (\%, \%E) & (0.000, 0.000) & (0.147, 0.045) & (0.133, 0.040) & (0.249, 0.127) & (0.315$\uparrow$, 0.133) \\ Balanced Signed Butterfly Summary (\%, \%E) & (\textbf{0.988}, 0.924) & (\textbf{0.798}, 0.500) & (\textbf{0.798}, 0.500) & (\textbf{0.724}, 0.501) & (\textbf{0.774$\uparrow$}, 0.501)\\ \midrule Signed Butterfly Isomorphism $\texttt{+}\texttt{+}\texttt{+}\texttt{-}$ (\%, \%E) & (0.012, 0.076) & (0.118, 0.289) & (0.122, 0.302) & (0.070, 0.156) & (0.075$\uparrow$, 0.151) \\ Signed Butterfly Isomorphism $\texttt{+}\texttt{-}\texttt{-}\texttt{-}$ (\%, \%E) & (0.000, 0.000) & (0.085, 0.211) & (0.081, 0.197) & (0.206, 0.343) & (0.151$\downarrow$, 0.349) \\ Unbalanced Signed Butterfly Summary (\%, \%E) & (\textbf{0.012}, 0.076) & (\textbf{0.202}, 0.500) & (\textbf{0.202}, 0.500) & (\textbf{0.276}, 0.499) & (\textbf{0.226$\downarrow$}, 0.499)\\ \midrule Signed Triangles Isomorphism $\texttt{+}\texttt{+}\texttt{+}$ in $\mathcal{U}$ (\%, \%E) & (0.978, 0.949) & (0.338, 0.217) & (0.360, 0.248) & (0.327, 0.213) & (0.446$\uparrow$, 
0.310) \\ Signed Triangles Isomorphism $\texttt{+}\texttt{-}\texttt{-}$ in $\mathcal{U}$ (\%, \%E) & (0.011, 0.001) & (0.476, 0.287) & (0.436, 0.261) & (0.451, 0.290) & (0.346$\downarrow$, 0.212) \\ Balanced Signed Triangles Summary in $\mathcal{U}$ (\%, \%E) & (\textbf{0.989}, 0.950) & (\textbf{0.815}, 0.504) & (\textbf{0.796}, 0.508) & (\textbf{0.778}, 0.504) & (\textbf{0.792$\uparrow$}, 0.522) \\ \midrule Signed Triangle Isomorphism $\texttt{+}\texttt{+}\texttt{-}$ in $\mathcal{U}$ (\%, \%E) & (0.011, 0.050) & (0.176, 0.432) & (0.189, 0.440) & (0.194, 0.431) & (0.195$\uparrow$, 0.444) \\ Signed Triangle Isomorphism $\texttt{-}\texttt{-}\texttt{-}$ in $\mathcal{U}$ (\%, \%E) & (0.000, 0.000) & (0.009, 0.063) & (0.015, 0.051) & (0.027, 0.065) & (0.012$\downarrow$, 0.034) \\ Unbalanced Signed Triangles Summary in $\mathcal{U}$ (\%, \%E) & (\textbf{0.011}, 0.050) & (\textbf{0.185}, 0.496) & (\textbf{0.204}, 0.492) & (\textbf{0.222}, 0.496) & (\textbf{0.208$\downarrow$}, 0.478) \\ \bottomrule \end{tabular} \end{table*} From \tableref{tab:balance_analysis}, we can find that the large majority of signed butterflies in signed bipartite networks are more balanced than expected based on the link sign ratio in the given networks (e.g., $0.988> 0.924$ in Bonanza). From Perspective 2, signed triangles in the constructed signed networks are also more balanced than expected (e.g., $0.989 >0.950$ in Bonanza). Although the perspectives are different, the conclusions are similar. In the scenario of peer reviews, after the rebuttal phase, the balance of the signed bipartite network increased (i.e., $0.724 \rightarrow 0.774\uparrow$ and $0.778 \rightarrow 0.792\uparrow$). It shows that through authors' feedback and reviewers' discussions, the reviewers' opinions have become more balanced, although the ratio of negative links increased.
From Perspective 1, the ratios of isomorphism classes $\texttt{+}\texttt{+}\texttt{+}\texttt{+}$, $\texttt{+}\texttt{-}\texttt{+}\texttt{-}$, and $\texttt{-}\texttt{-}\texttt{-}\texttt{-}$ increased (i.e., $0.109 \rightarrow 0.115\uparrow$, $0.156 \rightarrow 0.215\uparrow$, and $0.249 \rightarrow 0.315\uparrow$), which means that reviewers made more balanced adjustments to their review comments. From Perspective 2, the ratio of signed triangle $\texttt{+}\texttt{+}\texttt{+}$ increased from $0.327$ to $0.446\uparrow$, which reflects that the reviewers are more balanced and consistent after the rebuttal phase. \section{Problem Formulation} \label{sec:problem_definiton} In this section, we give the definition of \textsc{Link Sign Prediction}, which can be regarded as the main machine learning task for signed bipartite networks. Consider a signed bipartite network $\mathcal{G} = (\mathcal{U}, \mathcal{V}, \mathcal{E})$, where $\mathcal{U} = \{ u_1, u_2, ..., u_{|\mathcal{U}|} \}$ and $\mathcal{V} = \{ v_1, v_2, ..., v_{|\mathcal{V}|} \}$ represent two sets of nodes with $|\mathcal{U}|$ and $|\mathcal{V}|$ nodes, respectively. $\mathcal{E} \subset \mathcal{U} \times \mathcal{V}$ is the set of edges between $\mathcal{U}$ and $\mathcal{V}$, and $\mathcal{E} = \mathcal{E}^+ \bigcup \mathcal{E}^-$ with $\mathcal{E}^+ \bigcap \mathcal{E}^- = \varnothing$, where $\mathcal{E}^+$ and $\mathcal{E}^-$ represent the sets of positive and negative edges, respectively. Given $\mathcal{G} = (\mathcal{U}, \mathcal{V}, \mathcal{E})$ and a pair $u_{i}$ and $v_{j}$ from the two different sets whose link sign is not observed, the goal is to find a mapping function $f(u_{i},v_{j}) \rightarrow \{-1, 1\}$.
Network embedding methods or GNNs learn representations of the nodes $u_{i}$ and $v_{j}$ to obtain embeddings $z_{u_i} \in \mathbb{R}^{d_u}$ and $z_{v_j} \in \mathbb{R}^{d_v}$, and use the embeddings to obtain the prediction by $f(z_{u_i}, z_{v_j}) \rightarrow \{-1, 1\}$. \section{Proposed Methodology} \label{sec:model} Based on our discussion in \secref{sec:balance_thoery} and the problem definition in \secref{sec:problem_definiton}, we propose a new \underline{S}igned \underline{B}ipartite \underline{G}raph \underline{N}eural \underline{N}etworks (SBGNN) model for the \lsp task. Vanilla GNNs usually follow a message passing scheme where the node representations are updated by aggregating and transforming the information from the neighborhood~\cite{you2020design}. Specifically, for a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}=\{v_1, v_2, ...v_{|V|}\}$ is the node set and $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ is the edge set, the goal of GNNs is to learn node representations $h_i$ for node $i$ based on an iterative aggregation of local neighborhoods. The $l$-th layer of a GNN can be written as: \begin{equation} \begin{aligned} m_{j \rightarrow i} ^{(l)}(i, j) &= \textsc{Msg}^{(l)}\left(h_{i}^{(l)}, h_{j}^{(l)}, h_{e_{j i}}^{(l)}\right), j \in \mathcal{N}(i),\\ m_{j \rightarrow i}^{(l)} (i) &=\textsc{Agg}^{(l)}\left(\left\{m_{j \rightarrow i}^{(l)}(i, j) \mid j \in \mathcal{N}(i)\right\}\right),\\ h_{i}^{(l+1)} &=\textsc{Upt}^{(l)}\left(h_{i}^{(l)}, m_{j \rightarrow i}^{(l)}(i) \right), \end{aligned} \end{equation} where $\textsc{Msg}, \textsc{Agg}, \textsc{Upt}$ are the \textit{message construction}, \textit{message aggregation}, and \textit{vertex update} functions at the $l$-th layer~\cite{li2020deepergcn}. Most graph neural networks~\cite{kipf2016semi} are designed for unsigned classical social networks.
They design $\textsc{Msg}$ and $\textsc{Agg}$ functions such as \textit{mean}~\cite{kipf2016semi}, \textit{max}~\cite{hamilton2017inductive}, \textit{sum}~\cite{xu2018powerful} or \textit{attention}~\cite{velickovic2017graph}. In this paper, we follow the message passing scheme and show the SBGNN layer in \figref{fig:SBGNN}, including the design of $\textsc{Msg}$, $\textsc{Agg}$ and $\textsc{Upt}$. \begin{figure*} \centering \includegraphics[width=\textwidth]{./imgs/SBGNN-plot.pdf} \caption{Illustration of SBGNN. The SBGNN layer includes aggregate and update functions. The aggregated message comes from \textit{Set}$_1$ and \textit{Set}$_2$ with positive and negative links. After obtaining the embeddings of nodes $u_i$ and $v_i$, they can be used to predict the link sign.} \label{fig:SBGNN} \end{figure*} \subsection{Message and Aggregation Function} As discussed in \secref{sec:balance_thoery}, compared to traditional unsigned networks, the links in signed bipartite networks are cross-set and signed (e.g., $u \rightarrow ^{+/-} v$). The message function of vanilla GNNs cannot be directly applied to signed bipartite networks. As shown in \figref{fig:SBGNN}, we design a new message function to aggregate messages from different sets of neighborhoods. We define \textit{Set}$_1$ as the set of nodes of the other type, and \textit{Set}$_2$ as the set of nodes of the same type. Messages from \textit{Set}$_1$ and \textit{Set}$_2$ can be viewed as modeling Perspective 1 and Perspective 2 of \secref{sec:balance_thoery}, respectively. \subsubsection{Message from Set$_1$} For \textit{Set}$_1$, the type of the set differs from the type of the current node, and because the network is signed, its links include both positive and negative relationships. Neighborhood nodes under positive and negative links carry different semantic relations.
So we use $W_{v\rightarrow^+ u}$ and $W_{v\rightarrow^- u}$ to aggregate the messages from $v$ to $u$ with positive and negative links, and $W_{u\rightarrow^+ v}$ and $W_{u\rightarrow^- v}$ to aggregate the messages from $u$ to $v$ with positive and negative links. For example, in \figref{fig:SBGNN}, for the purple circle $u_1$, the red square $v_2$ and the green square $v_4$ are the positive and negative neighborhoods, respectively. At the $l$-th SBGNN layer, we use $W^l_{v\rightarrow^+ u}, W^l_{v\rightarrow^- u}$ to collect the message from $v_j$ to $u_i$ by \begin{equation} \begin{aligned} m^l_{v\rightarrow^+ u} (v_j, u_i) & = \textsc{Msg}(h^l_{u_i}, h^l_{v_j}) = W^l_{v\rightarrow^+ u} \cdot h^l_{v_j}, v_j\in \mathcal{N}_{v\rightarrow^+ u} ({u_i}), \\ m^l_{v\rightarrow^- u} (v_j, u_i) & = \textsc{Msg}(h^l_{u_i}, h^l_{v_j}) = W^l_{v\rightarrow^- u} \cdot h^l_{v_j}, v_j\in \mathcal{N}_{v\rightarrow^- u} ({u_i}), \end{aligned} \end{equation} where $\mathcal{N}_{v\rightarrow^+ u} (u_i)$ and $\mathcal{N}_{v\rightarrow^- u} (u_i)$ are the neighborhoods with positive and negative links to $u_i$. Similarly, we use $W^l_{u\rightarrow^+ v}, W^l_{u\rightarrow^- v}$ to collect the message from \textit{Set}$_1$ for $v_i$: \begin{equation} \begin{aligned} m^l_{u\rightarrow^+ v} (u_j, v_i) & = \textsc{Msg}(h^l_{v_i},h^l_{u_j})= W^l_{u\rightarrow^+ v} \cdot h^l_{u_j}, u_j\in \mathcal{N}_{u\rightarrow^+ v} (v_i), \\ m^l_{u\rightarrow^- v} (u_j, v_i) & = \textsc{Msg}(h^l_{v_i},h^l_{u_j})= W^l_{u\rightarrow^- v} \cdot h^l_{u_j}, u_j\in \mathcal{N}_{u\rightarrow^- v} (v_i), \end{aligned} \end{equation} where $\mathcal{N}_{u\rightarrow^+ v}(v_i)$ and $\mathcal{N}_{u\rightarrow^- v}(v_i)$ are the neighborhoods with positive and negative links to $v_i$. \subsubsection{Message from Set$_2$} As mentioned before, \textit{Set}$_2$ is the node set of the same type.
However, there are no links between nodes of the same type, so we need to construct signed links between them through the sign construction in \secref{sec:balance_thoery}. After sign construction, we can aggregate messages over the positive and negative links from $u_j$ to $u_i$ with $W_{u\rightarrow^+ u}$ and $W_{u\rightarrow^- u}$, respectively, and from $v_j$ to $v_i$ with $W_{v\rightarrow^+ v}$ and $W_{v\rightarrow^- v}$, respectively. In \figref{fig:SBGNN}, for the purple circle $u_1$, the green circle $u_3$ and the red circle $u_5$ are the positive and negative neighborhoods because of the paths $u_1\rightarrow^- v_2 \rightarrow^{-} u_3$ and $u_1\rightarrow^- v_2 \rightarrow^{+} u_5$. For the purple square $v_5$, $v_1$ and $v_2$ are the negative neighborhoods, and $v_3$ and $v_4$ are the positive neighborhoods based on our sign construction. In summary, we can define the messages from \textit{Set}$_2$ as follows: \begin{equation} \begin{aligned} m^l_{u\rightarrow^+ u}(u_j, u_i) & = \textsc{Msg}(h^l_{u_i}, h^l_{u_j})= W^l_{u\rightarrow^+ u} \cdot h^l_{u_j}, u_j\in \mathcal{N}_{u\rightarrow^+ u} (u_i), \\ m^l_{u\rightarrow^- u}(u_j, u_i) & = \textsc{Msg}(h^l_{u_i}, h^l_{u_j})= W^l_{u\rightarrow^- u} \cdot h^l_{u_j}, u_j\in \mathcal{N}_{u\rightarrow^- u} (u_i), \\ m^l_{v\rightarrow^+ v} (v_j, v_i) & = \textsc{Msg}(h^l_{v_i}, h^l_{v_j})= W^l_{v\rightarrow^+ v} \cdot h^l_{v_j}, v_j\in \mathcal{N}_{v\rightarrow^+ v} (v_i), \\ m^l_{v\rightarrow^- v} (v_j, v_i) & = \textsc{Msg}(h^l_{v_i}, h^l_{v_j})= W^l_{v\rightarrow^- v} \cdot h^l_{v_j}, v_j\in \mathcal{N}_{v\rightarrow^- v} (v_i), \end{aligned} \end{equation} where $\mathcal{N}_{u\rightarrow^+ u} (u_i)$, $\mathcal{N}_{u\rightarrow^- u}(u_i)$, $\mathcal{N}_{v\rightarrow^+ v}(v_i)$, and $\mathcal{N}_{v\rightarrow^- v}(v_i)$ are the positive and negative neighborhoods of $u_i$ and $v_i$, respectively.
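To make the sign construction concrete, the following Python sketch derives same-set signed edges from the cross-set links. It is an illustration only; the function and variable names are ours, not from a released implementation, and when a pair of nodes shares several $\mathcal{V}$-neighbors with conflicting derived signs, the last one simply overwrites the earlier ones:

```python
from collections import defaultdict
from itertools import combinations

def construct_same_set_edges(edges):
    """Given cross-set edges (u, v, sign) with sign in {+1, -1}, derive
    signed edges among U-nodes: two U-nodes sharing a V-neighbor get a
    positive link if their signs agree (++ or --), negative otherwise."""
    by_v = defaultdict(list)               # v -> [(u, sign), ...]
    for u, v, s in edges:
        by_v[v].append((u, s))
    same_set = {}                          # (u_i, u_j) -> derived sign
    for nbrs in by_v.values():
        for (ui, si), (uj, sj) in combinations(nbrs, 2):
            same_set[(ui, uj)] = si * sj   # ++/-- => +1, +- => -1
    return same_set

# The example from the figure: u1 and u3 both give v2 a negative link,
# while u5 gives v2 a positive link.
edges = [("u1", "v2", -1), ("u3", "v2", -1), ("u5", "v2", +1)]
print(construct_same_set_edges(edges))
# positive link u1-u3; negative links u1-u5 and u3-u5
```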
\subsubsection{Aggregation Design} The \textit{message aggregation} is commonly a differentiable, permutation-invariant set function (e.g., \textit{mean}, \textit{max}, and \textit{sum}) that takes a countable message set $\{m_{v}(u_i)| u_i\in \mathcal{N}(v) \}$ as input and outputs a message vector $m_v$. In this paper, we use mean ($\textsc{Mean}$) and graph attention ($\textsc{Gat}$) aggregation functions. For the $\textsc{Mean}$ aggregation function, we get $m^l (u_i)$ and $m^l(v_i)$ by averaging the messages from the neighborhoods: \begin{equation} \begin{aligned} m_{ {\leadsto} u }^{l} (u_i) &= \textsc{Mean} (\{ m^l_{\leadsto u}(j) , \forall j \in \mathcal{N}_{\leadsto u}(u_i) \}), \\ m_{ {\leadsto} v }^{l} (v_i) &= \textsc{Mean} (\{ m^l_{\leadsto v}(j) , \forall j \in \mathcal{N}_{\leadsto v}(v_i) \}), \end{aligned} \end{equation} where $\leadsto u$ is a link relationship to $u$ (e.g., $v\rightarrow^{+} u$) and $\leadsto v$ is a link relationship to $v$ (e.g., $u\rightarrow^{+} v$). A graph attention function first computes $\alpha^{i j}_{\leadsto}$ for node $i$ and node $j$ using the attention mechanism $\vec{\bf a}_{ \leadsto}$ and a LeakyReLU nonlinearity (with negative input slope $0.2$) as: \begin{equation} \begin{aligned} \label{eq:gat1} \alpha^{ij}_{ \leadsto} = \frac{\exp\left(\text{LeakyReLU}\left(\vec{\bf a}_{ \leadsto}^\top \cdot [{\bf W}^l_{ \leadsto}{h^l_i}\|{\bf W}^l_{ \leadsto}{h^l_j}]\right)\right)}{\sum_{k\in\mathcal{N}_{ \leadsto}(i)} \exp\left(\text{LeakyReLU}\left(\vec{\bf a}_{ \leadsto}^\top \cdot [{\bf W}^l_{ \leadsto}{h^l_i}\|{\bf W}^l_{ \leadsto}{h^l_k}]\right)\right)}, \\ \end{aligned} \end{equation} where $\|$ is the concatenation operation, $\cdot^{\top}$ represents transposition, $\mathcal{N}_{ \leadsto}(i)$ is the set of neighbors of node $i$ under the relationship $\leadsto$ (e.g., $u\rightarrow^{+} v$), and ${\bf W}^l_{\leadsto}$ is the weight matrix parameter.
Then we can compute $m^l(u_i)$ and $m^l(v_i)$ with $\alpha_{\leadsto}$: \begin{equation} \begin{aligned} m_{ {\leadsto} u }^{l} (u_i) &= \sum_{j \in \mathcal{N}_{\leadsto u}(u_i) } \alpha^{u_i j}_{\leadsto u} \cdot m^l_{\leadsto u}(j), \\ m_{ {\leadsto} v }^{l} (v_i) &= \sum_{j \in \mathcal{N}_{\leadsto v}(v_i) } \alpha^{v_i j}_{\leadsto v} \cdot m^l_{\leadsto v}(j), \end{aligned} \end{equation} where $\leadsto u$ is a link relationship to $u$ (e.g., $v\rightarrow^{+} u$) and $\leadsto v$ is a link relationship to $v$ (e.g., $u\rightarrow^{+} v$). The attention aggregation can be seen as a learnable weighted average function. \subsection{Update Function} For our \textit{vertex update function}, we concatenate the messages from the different neighborhoods with the original node features and feed them into an $\textsc{Mlp}$ to get the updated node representation: \begin{equation} \begin{aligned} h^{l+1}_u & = \textsc{Mlp}(h^l_u ~\|~ m^l_{v\rightarrow^+ u} ~\|~ m^l_{v\rightarrow^- u} ~\|~ m^l_{u\rightarrow^+ u} ~\|~ m^l_{u\rightarrow^- u}), \\ h^{l+1}_v & = \textsc{Mlp}(h^l_v ~\|~ m^l_{u\rightarrow^+ v} ~\|~ m^l_{u\rightarrow^- v} ~\|~ m^l_{v\rightarrow^+ v} ~\|~ m^l_{v\rightarrow^- v}), \end{aligned} \end{equation} where $\|$ is the concatenation operation. More specifically, the $\textsc{Mlp}$ is a two-layer neural network with a $\textsc{Dropout}$ layer and an $\textsc{Act}$ activation function: \begin{equation} \textsc{Mlp}(x) = W_2\Big(\textsc{Act}\big(\textsc{Dropout} (W_1x + b_1)\big)\Big) + b_2, \end{equation} where $W_1, b_1$ and $W_2, b_2$ are the parameters of this $\textsc{Mlp}$, $\textsc{Dropout}$ is the dropout function ($p=0.5$ in this paper), and $\textsc{Act}$ is the activation function ($\mathrm{PReLU}$~\cite{he2015delving} in this paper).
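As a rough illustration of one SBGNN layer with $\textsc{Mean}$ aggregation and the two-layer update MLP, the following NumPy sketch updates a single $\mathcal{U}$-node. The weights are random and untrained, dropout is omitted, PReLU is approximated by a fixed leaky slope, and all names and shapes are our own simplifications rather than the released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension

def mean_msg(Wk, h, nbrs):
    """Mean-aggregate W_k . h_j over a (possibly empty) neighbor list."""
    if not nbrs:
        return np.zeros(d)
    return np.mean([Wk @ h[j] for j in nbrs], axis=0)

def mlp(x):
    # two-layer MLP; dropout omitted, PReLU approximated with slope 0.25
    z = W1 @ x + b1
    z = np.where(z > 0, z, 0.25 * z)
    return W2 @ z + b2

def sbgnn_layer_u(h_u, h_v, u, nbr):
    """Update one U-node: concatenate its state with the four aggregated
    messages (v->+u, v->-u, u->+u, u->-u) and pass through the MLP."""
    msgs = [mean_msg(W["v+u"], h_v, nbr["v+u"][u]),
            mean_msg(W["v-u"], h_v, nbr["v-u"][u]),
            mean_msg(W["u+u"], h_u, nbr["u+u"][u]),
            mean_msg(W["u-u"], h_u, nbr["u-u"][u])]
    x = np.concatenate([h_u[u]] + msgs)    # length 5 * d
    return mlp(x)                          # back to length d

# Random parameters (one relation-specific matrix per link type)
W = {k: rng.normal(size=(d, d)) for k in ["v+u", "v-u", "u+u", "u-u"]}
W1, b1 = rng.normal(size=(d, 5 * d)), np.zeros(d)
W2, b2 = rng.normal(size=(d, d)), np.zeros(d)

# Tiny toy graph: two U-nodes, one V-node
h_u = {0: rng.normal(size=d), 1: rng.normal(size=d)}
h_v = {0: rng.normal(size=d)}
nbr = {"v+u": {0: [0], 1: []}, "v-u": {0: [], 1: [0]},
       "u+u": {0: [], 1: []},  "u-u": {0: [1], 1: [0]}}
print(sbgnn_layer_u(h_u, h_v, 0, nbr).shape)  # (8,)
```

A full layer would apply this update to every node in $\mathcal{U}$ and, symmetrically, to every node in $\mathcal{V}$.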
\subsection{Loss Function} \label{sec:loss_function} After getting the embeddings $z_{u_i} \in \mathbb{R}^{d_u}$ and $z_{v_j} \in \mathbb{R}^{d_v}$ of the nodes $u_{i}$ and $v_{j}$, we can use the following methods to get the prediction value for $u_i\rightarrow v_j$. The first one is the product operation: \begin{equation} y_{pred} = \mathrm{sigmoid} (z_{u_i}^{\top} \cdot z_{v_j}), \end{equation} where $\cdot^\top$ is the transpose function and $\mathrm{sigmoid}$ is the sigmoid function $f(x) = \frac{1}{1+e ^{-x}}$. This method requires the embedding dimensions to be the same (i.e., $d_u = d_v$). Another method is to use an \textsc{Mlp} to predict the value by \begin{equation} y_{pred} = \mathrm{sigmoid}\big(\textsc{Mlp}( z_{u_i} ~\|~ z_{v_j} )\big), \end{equation} where $\textsc{Mlp}$ is a two-layer neural network and $\|$ is the concatenation operation. The $\textsc{Mlp}$ can be viewed as the Edge Learner in~\cite{agrawal2019learning}. After getting the prediction values, we use binary cross entropy as the loss function: \begin{equation} \quad \mathcal{L} = - w \left[ y \cdot \log y_{pred} + (1 - y) \cdot \log (1 - y_{pred}) \right], \end{equation} where $w$ is the rescaling weight for the unbalanced sign ratios (i.e., weights inversely proportional to the class frequencies in the input data), and $y$ is the ground-truth label, with $\{-1, 1\}$ mapped to $\{0, 1\}$. \subsection{Training SBGNN} With the design of our SBGNN model, the training procedure is summarized in \algref{alg:algorithm1}.
\begin{algorithm} \caption{SBGNN Algorithm} \label{alg:algorithm1} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE { Signed Bipartite Graph $\mathcal{G}(\mathcal{U}, \mathcal{V}, \mathcal{E})$;\\ Encoder Aggregators $Enc$;\\ SBGNN Layer Number $L$;\\ Epoch $T$; } \ENSURE{ Node representations $Z_{\mathcal{U}}, Z_{\mathcal{V}}$ } \\ \STATE{Prepare original node embeddings $z^0_u, z^0_v, \forall u \in \mathcal{U}, \forall v \in \mathcal{V}$.} \STATE{Initialize the parameters of the SBGNN model.} \FOR{$epoch=1,...,T$} \STATE{Get neighborhoods $\mathcal{N}_{v\rightarrow^+ u}(u_i)$, $\mathcal{N}_{v\rightarrow^- u}(u_i)$, $\mathcal{N}_{u\rightarrow^+ u}(u_i)$, $\mathcal{N}_{u\rightarrow^- u}(u_i)$, $ \forall u_i \in \mathcal{U}$} \STATE{Get neighborhoods $\mathcal{N}_{u\rightarrow^+ v}(v_i)$, $\mathcal{N}_{u\rightarrow^- v}(v_i)$, $\mathcal{N}_{v\rightarrow^+ v}(v_i)$, $\mathcal{N}_{v\rightarrow^- v}(v_i)$, $ \forall v_i \in \mathcal{V}$} \FOR{$l=0...L-1$} \STATE{ $z_{u_i}^{l+1} \leftarrow Enc^l\Big(h^l_{u_i} , m^l_{v\rightarrow^+ u}(u_i) , m^l_{v\rightarrow^- u}(u_i)$, \\ $m^l_{u\rightarrow^+ u}(u_i) , m^l_{u\rightarrow^- u}(u_i) \Big)$, $\forall u_i \in \mathcal{U}$ } \STATE{ $z_{v_i}^{l+1} \leftarrow Enc^l\Big(h^l_{v_i} , m^l_{u\rightarrow^+ v}({v_i}) , m^l_{u\rightarrow^- v}({v_i})$, \\ $m^l_{v\rightarrow^+ v}({v_i}) , m^l_{v\rightarrow^- v}({v_i})\Big)$, $\forall v_i \in \mathcal{V}$ } \ENDFOR \STATE{Compute loss $\sum\limits_{u \rightarrow v \in \mathcal{E}} \mathcal{L}_{loss}(u \rightarrow v)$ with $Z_{\mathcal{U}}$ and $Z_{\mathcal{V}}$} \STATE{Back propagation, update parameters.} \ENDFOR \RETURN {$Z_{\mathcal{U}}, Z_{\mathcal{V}}$} \end{algorithmic} \end{algorithm} \algref{alg:algorithm1} demonstrates that our SBGNN is a layer-by-layer architecture, where $Enc$ can be any powerful GNN aggregator such as $\textsc{Mean}$ or $\textsc{Gat}$.
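The class-weighted binary cross-entropy used as the training objective can be sketched as follows. The weighting scheme in the paper is "inversely proportional to class frequencies"; the exact normalization below (dividing by twice the class count so balanced data yields unit weights) is our assumption for illustration:

```python
import numpy as np

def weighted_bce(y_pred, y_true):
    """Binary cross-entropy with class weights inversely proportional to
    class frequency, for unbalanced positive/negative link ratios.
    y_pred: probabilities in (0, 1); y_true: labels in {-1, 1}."""
    y_true = (np.asarray(y_true) + 1) // 2          # map {-1, 1} -> {0, 1}
    n = len(y_true)
    n_pos = y_true.sum()
    n_neg = n - n_pos
    # assumed normalization: balanced classes give weight 1 to every sample
    w = np.where(y_true == 1, n / (2 * n_pos), n / (2 * n_neg))
    eps = 1e-12                                     # numerical safety
    return float(np.mean(-w * (y_true * np.log(y_pred + eps)
                               + (1 - y_true) * np.log(1 - y_pred + eps))))

loss = weighted_bce(np.array([0.9, 0.8, 0.2, 0.4]),
                    np.array([1, 1, -1, -1]))
```

Near-perfect predictions drive the loss toward zero, while confident wrong predictions are penalized heavily, with the minority class up-weighted.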
\section{Experiments} \label{sec:experiments} In this section, we evaluate the performance of our proposed SBGNN on real-world datasets. We first introduce the datasets, baselines, and metrics for the experiments, then present the experimental results of SBGNN and the baselines. Finally, we analyze our model through parameter analysis and an ablation study. \subsection{Experimental Settings} \subsubsection{Datasets} As previously discussed in \secref{sec:signed_bipratite_networks}, we choose four datasets for this study, namely, Bonanza, U.S. House, U.S. Senate, and Review (we use Final Review as the Review dataset). Following the experimental settings in \cite{derr2019balance}, for each dataset we randomly select 10\% of the links as the test set, a random 5\% as the validation set, and the remaining 85\% as the training set. We repeat this with five different train-val-test splits and report the average scores. \subsubsection{Competitors} We compare our method SBGNN with several baselines, including Random Embeddings, Unsigned Network Embeddings, Signed/Bipartite Network Embeddings, and Signed Butterfly Based Methods, as follows. \vpara{Random Embeddings: } This baseline generates $d$-dimensional random values from a uniform distribution over $[0, 1)$ (i.e., $z=(z_1, z_2, ..., z_d)$, $z_i \in [0, 1)$). Given embeddings $z_{u_i}$ and $z_{v_j}$, we concatenate them and use a Logistic Regression (\textsc{Lr}) to predict the value for $u_i$ and $v_j$. \textsc{Lr} is trained on the training set and makes predictions on the test set. Since \textsc{Lr} has learnable parameters, this method can be viewed as the lower bound of the graph representation learning methods. 
\vpara{Unsigned Network Embeddings: } Going beyond random embeddings, we use some classical unsigned network embedding methods (e.g., DeepWalk~\cite{perozzi2014deepwalk}\footnote{https://github.com/phanein/deepwalk}, Node2vec~\cite{grover2016node2vec}\footnote{https://github.com/aditya-grover/node2vec}, LINE~\cite{tang2015line}\footnote{https://github.com/tangjianpku/LINE}). Keeping only the positive links, we feed the resulting unsigned networks into these unsigned network embedding methods to get embeddings for $u_i$ and $v_j$. As with Random Embeddings, we concatenate the embeddings $z_{u_i}$ and $z_{v_j}$ and use \textsc{Lr} to predict the sign of links. \vpara{Signed/Bipartite Network Embedding: } We use Signed and/or Bipartite Network Embedding methods as our baselines. More specifically, we use SiNE~\cite{wang2017signed}\footnote{ http://www.public.asu.edu/\textasciitilde swang187/codes/SiNE.zip} to learn the embeddings for $u_i$ and $v_j$ after the sign link construction in \secref{sec:balance_thoery}. For BiNE~\cite{gao2018bine}\footnote{https://github.com/clhchtcjj/BiNE}, we remove the negative links between $\mathcal{U}$ and $\mathcal{V}$. We use BiNE to get embeddings $z_{u_i}$ and $z_{v_j}$ and use \textsc{Lr} on the concatenation of $z_{u_i}$ and $z_{v_j}$ to predict the sign of links. Compared with the unsigned network embedding methods, these methods let the representation itself capture structural information (e.g., the links between $\mathcal{U}$ and $\mathcal{V}$ and the link signs) instead of relying solely on the downstream classifier. SBiNE~\cite{zhang2020sbine} is a representation learning method for signed bipartite networks, which preserves the first-order and second-order proximity. Instead of being a two-stage model, SBiNE uses a single neural network with a sigmoid nonlinearity to predict the value for $u_i$ and $v_j$. 
\vpara{Signed Butterfly Based Methods:} Based on the analysis of signed butterfly isomorphism, \cite{derr2019balance} proposes a variety of methods for \lsp, including SCsc, MFwBT, and SBRW \footnote{https://github.com/tylersnetwork/signed\_bipartite\_network}. Specifically, SCsc is a balance theory guided feature extraction method. MFwBT is a matrix factorization model with balance theory. SBRW is a signed bipartite random walk method. Since \cite{derr2019balance} finds that methods aided by balance theory consistently outperform methods that only use generic signed network information, we take SCsc as the most competitive baseline for our SBGNN model. \vpara{Signed Bipartite Graph Neural Networks:} For our SBGNN, we try two different aggregation designs (i.e., $\textsc{Mean}$ and $\textsc{Gat}$) and denote them as SBGNN-$\textsc{Mean}$ and SBGNN-$\textsc{Gat}$, respectively. For a fair comparison, we set the node embedding dimension to 32 for all embedding based methods, the same as in SBiNE~\cite{zhang2020sbine}. For the other parameters of the baselines, we follow the recommended settings in their original papers. For the embedding methods, we use the balanced class weighted Logistic Regression in Scikit-learn~\cite{scikitlearn}\footnote{https://scikit-learn.org/stable/index.html}. For SBiNE, we implement it ourselves in PyTorch~\cite{paszke2019pytorch}. We also implement our SBGNN model in PyTorch. We use the Adam optimizer with an initial learning rate of 0.005 and a weight decay of 1e-5. We run 2000 epochs for SBGNN and choose the model that achieves the best AUC on the validation set. \subsubsection{Evaluation Metrics} Since \lsp is a binary classification problem, we use AUC, Binary-F1, Macro-F1, and Micro-F1 to evaluate the results. These metrics are widely used in \lsp~\cite{chen2018bridge,huang2021sdgnn}. 
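Because implementations of these F1 variants differ, a small self-contained sketch of how we read them in the binary case may help (AUC is omitted; note that for binary labels Micro-F1 coincides with accuracy):

```python
def _f1(precision, recall):
    s = precision + recall
    return 0.0 if s == 0 else 2.0 * precision * recall / s

def f1_scores(y_true, y_pred):
    # Binary-F1 scores the positive class only; Macro-F1 averages the
    # F1 of both classes; Micro-F1 pools counts (accuracy for binary).
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    binary = _f1(tp / (tp + fp) if tp + fp else 0.0,
                 tp / (tp + fn) if tp + fn else 0.0)
    negative = _f1(tn / (tn + fn) if tn + fn else 0.0,
                   tn / (tn + fp) if tn + fp else 0.0)
    macro = (binary + negative) / 2.0
    micro = (tp + tn) / len(y_true)
    return binary, macro, micro
```

The gap between Binary-F1 and Macro-F1 is exactly what makes the unbalanced Bonanza dataset informative below.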
Note that for all four evaluation metrics, a greater value indicates better performance of the corresponding method. \subsection{Experiment Results} \label{sec:experiments-res} \begin{table*}[!ht] \centering \caption{The results of \lsp on four datasets. Results not available are marked as `\#N/A'. Two-tailed t-tests demonstrate that the improvements of our SBGNN over the baseline SCsc are statistically significant ($^*$ indicates p-value $\leq$ 0.05).} \label{tb:experiment-result} \scalebox{0.92}{ \setlength{\tabcolsep}{1.0mm}{ \begin{tabular}{c|c|c|ccc|ccc|ccc|cc} \toprule \multicolumn{2}{c|}{} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Random \\ Embedding\end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}Unsigned \\Network Embedding\end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}Signed/Bipartite \\Network Embedding\end{tabular}} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}Signed Butterfly\\ Based Methods\end{tabular}} & \multicolumn{2}{c}{\begin{tabular}[c]{@{}c@{}}Signed Bipartite \\Graph Neural Networks\end{tabular} } \\ \midrule Dataset & Metric & Random & Deepwalk & Node2vec & LINE & SiNE & BiNE & SBiNE & SCsc & MFwBT & SBRW & SBGNN-$\textsc{Mean}$ & SBGNN-$\textsc{Gat}$ \\ \midrule \multirow{4}{*}{Bonanza} & AUC & 0.5222 & 0.6176 & \underline{0.6185} & 0.6124 & 0.6088 & 0.6026 & 0.5525 & \textbf{0.6524} & 0.5769 & 0.5315 & 0.5841 & 0.5769 \\ & Binary-F1 & 0.7282 & 0.7843 & 0.7530 & 0.6974 & 0.9557 & 0.7426 & 0.8514 & 0.6439 & 0.8927 & \textbf{0.9823} & 0.9488$^*$ & \underline{0.9616}$^*$ \\ & Macro-F1 & 0.3868 & 0.4258 & 0.4087 & 0.3790 & \textbf{0.5422} & 0.4016 & 0.4538 & 0.3543 & 0.4813 & 0.5353 & 0.5311$^*$ & \underline{0.5404}$^*$ \\ & Micro-F1 & 0.5770 & 0.6497 & 0.6093 & 0.5424 & 0.9157 & 0.5960 & 0.7436 & 0.4843 & 0.8076 & \textbf{0.9652} & 0.9044$^*$ & \underline{0.9269}$^*$ \\ \midrule \multirow{4}{*}{Review} & AUC & 0.5489 & 0.6324 & 0.6472 & 0.6236 & 0.5741 & \#N/A & 0.5329 & 0.5522 & 
0.4727 & 0.5837 & \underline{0.6584}$^*$ & \textbf{0.6747}$^*$ \\ & Binary-F1 & 0.4996 & 0.5932 & 0.6141 & 0.5974 & 0.5247 & \#N/A & 0.4232 & 0.3361 & 0.4346 & 0.5423 & \underline{0.6128}$^*$ & \textbf{0.6366}$^*$ \\ & Macro-F1 & 0.5426 & 0.6268 & 0.6400 & 0.6120 & 0.5688 & \#N/A & 0.5262 & 0.4823 & 0.4696 & 0.5767 & \underline{0.6556}$^*$ & \textbf{0.6629}$^*$ \\ & Micro-F1 & 0.5487 & 0.6325 & 0.6444 & 0.6137 & 0.5744 & \#N/A & 0.5521 & 0.5812 & 0.4752 & 0.5812 & \underline{0.6632}$^*$ & \textbf{0.6667}$^*$ \\ \midrule \multirow{4}{*}{U.S. House} & AUC & 0.5245 & 0.6223 & 0.6168 & 0.5892 & 0.6006 & 0.6103 & 0.8328 & 0.8274 & 0.8097 & 0.8224 & \underline{0.8474}$^*$ & \textbf{0.8481}$^*$ \\ & Binary-F1 & 0.5431 & 0.6401 & 0.6323 & 0.6304 & 0.6118 & 0.6068 & 0.8434 & 0.8375 & 0.8234 & 0.8335 & \underline{0.8549}$^*$ & \textbf{0.8560}$^*$ \\ & Macro-F1 & 0.5238 & 0.6215 & 0.6158 & 0.5883 & 0.5991 & 0.6097 & 0.8323 & 0.8267 & 0.8096 & 0.8219 & \underline{0.8463}$^*$ & \textbf{0.8471}$^*$ \\ & Micro-F1 & 0.5246 & 0.6224 & 0.6166 & 0.5892 & 0.5996 & 0.6108 & 0.8330 & 0.8274 & 0.8106 & 0.8226 & \underline{0.8468}$^*$ & \textbf{0.8476}$^*$ \\ \midrule \multirow{4}{*}{U.S. Senate} & AUC & 0.5251 & 0.6334 & 0.6260 & 0.5743 & 0.5875 & 0.6071 & 0.7998 &0.8163 & 0.7857 & 0.8142 & \underline{0.8209}$^*$ & \textbf{0.8246}$^*$ \\ & Binary-F1 & 0.5502 & 0.6603 & 0.6526 & 0.6159 & 0.5923 & 0.5968 & 0.8175 & \underline{0.8294} & 0.8043 & 0.8291 & 0.8277 & \textbf{0.8320} \\ & Macro-F1 & 0.5239 & 0.6325 & 0.6251 & 0.5722 & 0.5842 & 0.6037 & 0.7992 & 0.8148 & 0.7850 & 0.8131 & \underline{0.8177}$^*$ & \textbf{0.8215}$^*$ \\ & Micro-F1 & 0.5254 & 0.6347 & 0.6271 & 0.5732 & 0.5848 & 0.6042 & 0.8009 & 0.8160 & 0.7867 & 0.8145 & \underline{0.8183}$^*$ & \textbf{0.8221}$^*$ \\ \bottomrule \end{tabular} } } \end{table*} We show the results in \tableref{tb:experiment-result}. We have bolded the highest value of each row and underlined the second value. 
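Both the rescaling weight $w$ in our loss and the balanced Logistic Regression used for the embedding baselines weight classes inversely to their frequency. A sketch, following the common $n\_samples / (n\_classes \times count_c)$ convention (our assumption for how "balanced" weights are computed):

```python
from collections import Counter

def balanced_class_weights(labels):
    # weight_c = n_samples / (n_classes * count_c): the rare class
    # (e.g. negative links in the heavily positive Bonanza data) gets
    # a proportionally larger weight.
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}
```

With a 98\%-positive label distribution, the negative class receives a weight roughly fifty times that of the positive class, which is why the weighted loss matters on Bonanza.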
From \tableref{tb:experiment-result}, we make the following observations: \begin{itemize}[leftmargin=*] \item Even with random embeddings, \textsc{Lr} can still achieve a certain effect on \lsp (i.e., AUC > 0.5). It demonstrates that the downstream classifier of a two-stage model has a certain predictive ability for \lsp on its own. \item After using network embeddings, the graph structure is encoded into the node representations, which improves the prediction results. Even the worst-performing LINE outperforms random embeddings by 17.3\%, 13.6\%, 12.3\%, and 9.4\% on AUC in Bonanza, Review, U.S. House, and U.S. Senate, respectively. This demonstrates that graph structure information is helpful for \lsp. Among the unsigned network embedding methods, Node2vec achieves the best results. We conjecture that its biased random walk mechanism may explore the graph structure more effectively. \item For the signed or bipartite models (i.e., SiNE and BiNE), the sign information and bipartite network structure contribute to the node representation learning. SiNE is more effective than BiNE (e.g., AUC in Bonanza (0.6088 > 0.6026)), indicating that link sign information may be more important than the bipartite link structure. The performance of SBiNE is not as good as that reported in \cite{zhang2020sbine}. This may be due to different data splits and implementation details. \item The signed butterfly based methods (i.e., SCsc, MFwBT, and SBRW) outperform Deepwalk by 33.0\%, 30.1\%, and 32.2\% on AUC in U.S. House, and achieve 28.9\%, 24.3\%, and 28.5\% gains on AUC in U.S. Senate, respectively. It shows that modeling balance theory in signed bipartite networks is key for \lsp. In the Review and Bonanza datasets, however, the signed butterfly based methods cannot outperform the embedding based methods. This may be because there are fewer signed butterflies in these two datasets. 
Besides, Bonanza is an extremely unbalanced dataset (i.e., the proportion of positive links is 0.980), so AUC and F1 show a big difference. \item Our SBGNN model achieves the best results on most metrics. Except on Bonanza, SBGNN-$\textsc{Mean}$ and SBGNN-$\textsc{Gat}$ significantly outperform SCsc. On Bonanza, SBGNN-$\textsc{Gat}$ achieves significantly better results in Binary-F1, Macro-F1, and Micro-F1. It demonstrates that SBGNN effectively models signed bipartite networks. Besides, we can find that the $\textsc{Gat}$ aggregator is better than the $\textsc{Mean}$ aggregator, which can be attributed to the attention mechanism~\cite{vaswani2017attention}. \end{itemize} \subsection{Parameter Analysis and Ablation Study} In this subsection, we conduct parameter analysis and an ablation study for our SBGNN model. We choose U.S. House as our dataset and use 85\% training edges, 5\% validation edges, and 10\% test edges as before. \begin{figure}[!t] \hspace{-0.02\linewidth} \begin{center} \begin{subfigure}[t]{0.49\linewidth} \includegraphics[width=\linewidth]{imgs/layer.pdf} \caption{\#Layer $l$} \label{fig:layer} \end{subfigure} \begin{subfigure}[t]{0.49\linewidth} \includegraphics[width=\linewidth]{imgs/dim.pdf} \caption{Dimension $d$} \label{fig:d} \end{subfigure} \caption{Parameter analysis on the number of SBGNN layers $l$ and the dimension $d$ for SBGNN on the U.S. House dataset.} \label{fig:parameter-analysis} \vspace{-1em} \end{center} \end{figure} \subsubsection{Parameter Analysis} We analyze the number of SBGNN layers $l$ and the dimension $d$ of the embeddings. For the number of SBGNN layers $l$, we vary $l$ over $\{0, 1, 2, 3, 4\}$. Note that $l=0$ means that no GNN layer is used (i.e., only the lookup embeddings are used), so the results for SBGNN-$\textsc{Mean}$ and SBGNN-$\textsc{Gat}$ are the same. From \figref{fig:layer}, we can find that GNN layers are more effective than the lookup embeddings alone. For SBGNN-$\textsc{Mean}$, the best $l$ is 1, with an AUC of 0.8481. 
But for SBGNN-$\textsc{Gat}$, two SBGNN layers give a better result. For the dimension of SBGNN, we choose the value of $d$ from $\{4, 8, 16 $, $32, 64, 128\}$ to analyze the effect of the dimension. From \figref{fig:d}, we can find that as the value increases from 4 to 32, the AUC of SBGNN-$\textsc{Gat}$ increases from 0.8056 to 0.8485, and the AUC of SBGNN-$\textsc{Mean}$ increases from 0.8053 to 0.8447. Beyond 32 for SBGNN-$\textsc{Mean}$ and 64 for SBGNN-$\textsc{Gat}$, the AUC decreases slightly. This may be because a large dimension $d$ makes the embeddings more difficult to train. \begin{table} \begin{center} \caption{Ablation study results for the SBGNN model on the U.S. House dataset.} \label{tab:ablation_study} \scalebox{0.92}{ \begin{tabular}{c|cccc} \toprule Method & AUC & Binary-F1 & Macro-F1 & Micro-F1 \\ \midrule SBGNN-$\textsc{Gat}$ & 0.8485 & 0.8586 & 0.8477 & 0.8485 \\ SBGNN-$\textsc{Gat}$ (w/o \textit{Set}$_1$) & 0.8406 & 0.8521 & 0.8400 & 0.8409\\ SBGNN-$\textsc{Gat}$ (w/o \textit{Set}$_2$) & 0.8440 & 0.8567 & 0.8438 & 0.8448 \\ SBGNN-$\textsc{Gat}$ (with \textsc{Lr}) & 0.6281 & 0.6195 & 0.6227 & 0.6227\\ SBGNN-$\textsc{Gat}$ (with \textsc{Mlp}) & 0.8365 & 0.8480 & 0.8358 & 0.8367 \\ \midrule SBGNN-$\textsc{Mean}$ & 0.8447 & 0.8519 & 0.8429 & 0.8434\\ SBGNN-$\textsc{Mean}$ (w/o \textit{Set}$_1$) & 0.8419 & 0.8496 & 0.8402 & 0.8408\\ SBGNN-$\textsc{Mean}$ (w/o \textit{Set}$_2$) & 0.8296 & 0.8410 & 0.8288 & 0.8297 \\ SBGNN-$\textsc{Mean}$ (with \textsc{Lr}) & 0.6285 & 0.6387 & 0.6263 & 0.6267 \\ SBGNN-$\textsc{Mean}$ (with \textsc{Mlp}) & 0.8443 & 0.8531 & 0.8430 & 0.8436\\ \bottomrule \end{tabular} \vspace{-4em} } \end{center} \end{table} \subsubsection{Ablation Study} For the ablation study, we investigate the effect of different aggregation and prediction functions. Firstly, as we discussed in \secref{sec:experiments-res}, the $\textsc{Gat}$ aggregator is better than the $\textsc{Mean}$ aggregator. 
We further investigate the effect of our message function design. From \tableref{tab:ablation_study}, we can see that without the message from \textit{Set}$_1$ (i.e., w/o \textit{Set}$_1$), SBGNN-$\textsc{Gat}$ and SBGNN-$\textsc{Mean}$ decrease by 0.9\% and 0.3\%, respectively; without the \textit{Set}$_2$ message (i.e., w/o \textit{Set}$_2$), SBGNN-$\textsc{Gat}$ and SBGNN-$\textsc{Mean}$ decrease by 0.5\% and 1.8\%, respectively. It demonstrates that both the messages from \textit{Set}$_1$ and \textit{Set}$_2$ are useful for the SBGNN model. As we discussed in \secref{sec:loss_function}, the prediction can be obtained by the product operation or an $\textsc{Mlp}$. We replace the product operation with a simple \textsc{Lr} layer or a two-layer $\textsc{Mlp}$. From \tableref{tab:ablation_study}, we can find that the $\textsc{Mlp}$ is much better than the simple \textsc{Lr} but not better than the product operation. \section{Conclusions and Future Work} \label{sec:conclusion} In this paper, we focus on modeling signed bipartite networks. We first discuss two different perspectives for modeling signed bipartite networks. Through sign construction, the new perspective makes it possible to count the signed triangles in the same-node-type networks. It yields results consistent with the signed butterfly analysis. We further use these two perspectives to model peer review and find that the balance of reviewers' opinions improves after the rebuttal phase. It shows that the rebuttal mechanism makes the reviewers' opinions more consistent. Under the definition of the new perspective, we propose a new graph neural network model, SBGNN, to learn node representations of signed bipartite graphs. On four real-world datasets, our method achieves state-of-the-art results. Finally, we conduct a parameter analysis and an ablation study on SBGNN. In future work, we will explore signed bipartite networks with node features, which can improve \lsp~\cite{karimi2019multi} and node classification~\cite{tang2016node}. 
For example, in predicting bill votes, if node features such as a congressperson's political standpoint can be modeled, the vote results can be predicted more effectively. Besides, we will also try to introduce signed bipartite graph neural networks into recommender systems~\cite{tang2016recommendations}. \begin{acks} This work is funded by the National Natural Science Foundation of China under Grant Nos. 62102402, 91746301 and U1836111. Huawei Shen is also supported by Beijing Academy of Artificial Intelligence (BAAI) under the grant number BAAI2019QN0304 and K.C. Wong Education Foundation. \end{acks} \bibliographystyle{ACM-Reference-Format}
Eating from the earth is good for our bodies – and for the planet. Here are some easy ways to fill your plate with more delicious, plant-based foods. What you eat affects how you feel every day. It also has one of the biggest influences on your longevity, vitality and your chances of developing a chronic disease. Nutrition experts have known how to eat for optimum health for generations, but too often the message is drowned out by fad diets, internet echo chambers and hyperbolic headlines. Eating more plant-based foods is the common theme among the Blue Zone diets – those diets around the world associated with good health and the longest lifespans. These diets were famously explored by National Geographic fellow Dan Buettner in his book The Blue Zones: Lessons for Living Longer From the People Who've Lived the Longest. Many studies (including the 20-year China–Cornell–Oxford Project led by Cornell University biochemist Thomas Colin Campbell) have supported the idea that eating lots of plant foods can contribute to a longer, healthier life compared to eating a meat-heavy diet. Plant-based foods include vegetables, fruit, nuts, seeds, legumes and wholegrains – anything that is a plant or that grows on a plant. Plant-based foods are also rich in important vitamins, minerals, fibre, antioxidants and phytonutrients, all of which are required for the body to function optimally. As well as being good for our bodies, eating more plants is good for the earth. Around the globe, the ever-increasing demand for meat is having a significant environmental impact. Legume, grain and vegetable crops have a much smaller ecological footprint than meat. This is not to suggest that everyone become vegetarian, but I do believe it's an issue more of us need to consider. Simply taking an approach that reduces your meat intake can help make a difference. The most common question I get asked about eating more plants and a smaller proportion of meat is whether you can get enough protein. 
Yes, you can! In fact, the Australian Bureau of Statistics National Nutrition Survey shows the average Australian eats roughly twice as much protein as needed. Any excess, just as it is with excess carbohydrates and fat, is converted to fat and stored. Including a good plant protein source in each main meal will keep your protein needs covered. Plant protein sources include dried beans, split peas, lentils, chickpeas, soy beans, tofu and other products made from soy, as well as nuts and seeds. There are also smaller amounts of protein in whole grains, and even fruit and vegetables contain a little protein. Legumes are among the most beneficial foods for human health and offer enormous environmental benefits as well. Luckily, there are plenty of delicious ways to enjoy them. Some, like butter beans, are quite creamy and therefore delicious in salads or pureed into mash. Adzuki beans are nutty and great in casseroles, kidney beans are meaty and robust enough for chilli, and brown lentils are great in everything from soups, burgers and salads, to tagines, stews and curries. Most hold their shape where others, like split peas and split red lentils, 'melt' and slip silently into sauces, soups and casseroles. Legumes can even be used as a flour replacement in sweet baking. I make a delicious apple cake with tinned chickpeas, and an awesome black bean brownie. Add a tin of chickpeas to your favourite curry or soup. Toss a tin of lentils, four-bean mix or chickpeas into your salad to make it more filling. Throw a veggie burger (that could include lentils, beans or chickpeas) on the barbecue. Use brown lentils in place of mince in a Bolognese or chilli, or swap out half of the mince and replace with lentils. Try scheduling a couple of plant-based days into your week. This sample menu shows how easy it is to get plenty of protein throughout the day. Breakfast – ½ cup oats cooked into porridge with 10 raisins, 5 walnuts, cinnamon and ⅔ cup soy milk. 
Lunch – Pumpkin, leek and red lentil soup, plus 1 slice wholegrain bread with avocado. Dinner – Lentil Bolognese with wholemeal pasta and 1 tablespoon parmesan cheese. Dessert – Baked apples with seedy nut topping and 3 tablespoons natural yoghurt. Snacks – 30 g cashews, 1 banana, 1 small soy latte and 30 g dark chocolate. Caroline Trickey is an Accredited Practising Dietitian working in private practice, who runs cooking classes in Sydney's inner west.
This was very difficult to photograph; it is more of a tactile piece. Carving out the wood and adding modelling paste created height and variety in the piece. This is part of a large work that forms a full human-sized body when all the parts are fitted together.
Q: 'this' on line8 is undefined, but on line 14,19,20 'this' works
class SearchBar extends React.Component {
  state = { term: "Search" };

  onFormSubmit(event) {
    event.preventDefault();

    console.log(this);
  }

  render() {
    return (
      <div className="ui segment">
        <form onSubmit={this.onFormSubmit} className="ui form">
          <div className="field">
            <label>Image Search</label>
            <input
              type="text"
              onChange={(e) => this.setState({ term: e.target.value })}
              onClick={(e) => this.setState({ term: "" })}
              value={this.state.term}
            />
          </div>
        </form>
      </div>
    );
  }
}

Error

TypeError: Cannot read property 'state' of undefined
onFormSubmit
D:/Projects/pics/src/SearchBar.js:8
   5 | 
   6 | onFormSubmit(event) {
   7 |   event.preventDefault();
>  8 |   console.log(this.state.term);
     | ^
   9 | }
  10 | 
  11 | render() {

A: When you pass `this.onFormSubmit` as the `onSubmit` handler, React later invokes it as a plain function rather than as a method on your component instance. Class bodies run in strict mode, so inside that call `this` is `undefined` — which is why line 8 fails. The handlers on lines 14, 19 and 20 work because they are arrow functions created inside `render`, and arrow functions capture `this` from the enclosing scope. The simplest fix is to declare the handler as an arrow-function class property, so it permanently captures the component instance (binding in the constructor with `this.onFormSubmit = this.onFormSubmit.bind(this)` also works):

class SearchBar extends React.Component {
  state = { term: "Search" };

  onFormSubmit = (event) => {
    event.preventDefault();
    // console.log(this);
    console.log(event);
  }

  render() {
    return (
      <div className="ui segment">
        <form onSubmit={this.onFormSubmit} className="ui form">
          <div className="field">
            <label>Image Search</label>
            <input
              type="text"
              onChange={(e) => this.setState({ term: e.target.value })}
              onClick={(e) => this.setState({ term: "" })}
              value={this.state.term}
            />
          </div>
        </form>
      </div>
    );
  }
}
Ninio (born 30 September 1999 in Ramat Gan, Israel) is an African elephant (Loxodonta africana) in the animal collection of the New Zoo in Poznań. Since 10 March 2009 he has lived in a huge elephant house with a 2.5 ha enclosure, built that year at Malta in Poznań. He was first presented to visitors on 25–26 April 2009, during the opening of the facility. Ninio has the registration number 9906 in the database of the EEP, the European Endangered Species Programme.

Origin and breeding
His father is Yossi (EEP 7402), born in 1974 at the Ramat Gan Zoo – the same zoo where Ninio was born. His mother was Norris (alias Mazal, EEP 6906), five years older than the father, born in the wild in Arusha National Park in Tanzania and captured for the zoo in 1973. The names of Ninio's paternal grandparents are also known: Bahati and Timbo. Ninio was born at the Ramat Gan Zoo, where he stayed with his mother for 5 years. By 2009 Ninio was counted to have twenty-two half-siblings on his father's side and five on his mother's side; his mother died on 7 July 2008, when the young, 9-year-old elephant was already living at the zoo in Nyíregyháza, Hungary. On 10 July 2003 Ninio was transferred from Israel to the Zoological Garden in Warsaw, where he was given the name Lotek in honour of the sponsor, Totalizator Sportowy, which at the time was financing the purchase of the elephants. In Warsaw he lived with his younger half-brother Leon and three females: Fryderyka, Buba and Zula. He caused some problems owing to trouble with his teeth and his excitability. As the Warsaw supplement to "Gazeta Wyborcza" reported at the time, Lotek beat the females: when one of them stole his food, she was hit with his trunk and chased across half the enclosure. Another time he threw Leon into the pool. The unruly behaviour of the 4-year-old elephant was blamed on the poor condition of his teeth. 
The cavities in his left tusk had to be treated under anaesthesia by a dentist from the Medical University of Warsaw, and a full account of the procedure was published by the online "Magazyn Stomatologiczny". Because elephants live in matriarchal herds, it was clear that one of the males would have to leave the Warsaw zoo. In the meantime, a modern elephant house had been completed at one of the Hungarian zoos, and since it stood empty, the Hungarians asked for one of the elephants to be handed over. Ninio was chosen, as he behaved rather aggressively towards the others. On 20 April 2006 Ninio was moved to Sóstó Zoo in Nyíregyházi Állatpark in Hungary, where he received the new name Szabolcs. After three years he was transported back to Poland, together with a 5-year-old elephant named Yzik, and arrived in Poznań. In total he travelled over 4,500 kilometres before reaching the zoo where he can be seen today. On 20 August 2009 the group of Poznań elephants was joined by Linda, a 23-year-old cow brought from the Netherlands. At the end of September the Polish media reported a plan to pair Ninio with two cows that would be brought to the Poznań zoo from Warsaw for breeding purposes. The information turned out to be wrong, however, because the Warsaw zoo is building its own breeding group. In the end, on 24 November 2009, two adult cows from Chorzów, Kinga and Kizi, were moved to Poznań. The first took the lead of the group of cows, while the second became the informal guardian of young Izik, who until the arrival of the Chorzów cows had been taken in by Linda. The presence of mature females accelerated Ninio's development: from spring 2010 he began to show signs of maturing and of interest in the females. The pairing of the elephants follows the recommendations of the European species coordinator. 
The Warsaw cows are valuable for their genetic material, as they were born in Africa under natural conditions. All the Poznań cows born in the wild are of similarly high value. Ninio, whose parents were both born in the wild, is of exceptional value for breeding. Warsaw's Leon unfortunately has no such value, as he is the product of inbreeding (his mother is also his half-sister, and his father is his grandfather). The Poznań elephant herd is ultimately intended to number 10 individuals. In the first half of 2013 the elephant underwent three operations, the most difficult of which was the removal of his damaged left tusk, a procedure lasting over three hours carried out by specialists from South Africa, Dr Gerhard Steenkamp and Dr Adrian Tordiffe.

Interpellation concerning the "gay elephant"
The elephant Ninio became famous in 2009 thanks to city councillor Michał Grześ, a member of Law and Justice (Prawo i Sprawiedliwość). After reading material published on the Internet and an article in the weekly "Wprost", the councillor detected a homosexual orientation in the elephant, which he spoke about at a session of the Poznań city council on 7 April 2009: Moreover, according to the councillor, not only did the elephant like only male companions, he also beat the females with his trunk. "I am also afraid that the frustrated elephant may pose a danger," the councillor added in press statements. The story of the Poznań councillor's interpellation about the "gay elephant" echoed around the world, providing material for mocking press publications and television reports. In Poland the story appeared on TVP, TVN, TVN24, RMF and Polsat. Among foreign media it was covered by the television stations Sky News and ABC News, the newspapers "Independent", "Washington City Paper" and "Daily Telegraph", and online media: the Yahoo! portal and the gossip site PerezHilton.com. The story even reached China through the Xinhua news agency. 
It was also reported by "Elephant-News", the most prestigious elephant website (affiliated with the EEP), as well as by electronic media in the elephant's previous home, Hungary. The "gay elephant" affair was commented on and compared with the earlier protests in Poland against the sexual preferences of one of the Teletubbies. The councillor's name and that of his party came up in the context of Polish homophobia. According to the zoo director and the animal's keepers, backed by the opinion of ethologists, the elephant is still too young for his sexual preferences to be judged, since elephants reach sexual maturity at the age of 14. The whole affair brought popularity to the Poznań zoo. Councillor Grześ himself admitted in one press statement: "The elephant's orientation arouses very great interest among visitors. There are far more of them than before these revelations were announced. This is a huge gain for the zoo." With time he turned the whole matter into a joke. Thanks to his unusual popularity, the young elephant gained funds for his upkeep. From 8 to 17 May 2009 the "Ninio Knows-How Festiwal" was held in Poznań in honour of the famous elephant. Ninio became the mascot of the LGBT festival, and money was collected at its events towards his adoption, among others during drag queen performances. The slogan invented for the Poznań festival was:

See also
the elephant and the Polish question (słoń a sprawa polska)
homosexual behaviour in animals
Partyzant (elephant)
Roy and Silo

References

External links
Joanna Bosakowska, Skandal w zoo: słonio-niewiadomo

Categories: African elephants in the Warsaw zoo; African elephants in the Poznań zoo; Famous elephants; Poznań Zoological Garden; History of Poznań after 1945
Vjatskije Poljany () is a town in Kirov Oblast in the Russian Federation. At the 2010 census it had over thirty-five thousand inhabitants.

Location
Vjatskije Poljany lies on the Vyatka, a right tributary of the Kama in the Volga basin. It is about 280 kilometres southeast of Kirov, the administrative centre of the oblast.

History
The Udmurt village of Oštorma-Bodja existed here as early as the mid-sixteenth century. In 1596 the Russian village of Vjatskije Poljany (literally "Vyatka clearings") was founded next to an Orthodox monastery. The monastery was dissolved in 1764. In 1938 the settlement received the status of an urban-type settlement, and since 1942 it has been a town.

Notes
References
External links
Towns in Kirov Oblast
#import <UIKit/UIKit.h>

// Convenience category on UIBarButtonItem: builds an item backed by a custom
// button with normal/highlighted images and a target-action pair.
@interface UIBarButtonItem (Item)

+ (UIBarButtonItem *)barButtonItemWithImage:(UIImage *)image
                                  highImage:(UIImage *)highImage
                                     target:(id)target
                                     action:(SEL)action
                           forControlEvents:(UIControlEvents)controlEvents;

@end
require 'aws-sdk'

module AwsSsmHandler
  def self.set_ssm_client
    return true if ENV.fetch('SKIP_AWS_SSM_SETUP', false)

    if ENV['RUNNING_ON_LOCAL_DEV'] == "true"
      if !ENV['AWS_SECURITY_TOKEN'].nil?
        # This means that the user has already set an AWS security token using
        # one of the CLI tools from aws or aws-vault. I can directly create a
        # credentials object from them
        call_create_creds_from_credentials
        create_ssm_client_from_env_variables
      else
        # This means that I need to route the user through an `assume role`
        # setup process based on their personal access keys
        Aws.config.update(
          region: ENV['region '].strip,
          credentials: Aws::Credentials.new(ENV['aws_access_key_id '].strip,
                                            ENV['aws_secret_access_key '].strip)
        )
        create_sts_client_for_user
        call_assume_role
        create_ssm_client_for_assumed_role
      end
    elsif ENV['RUNNING_ON_LOCAL_DEV'] == "false"
      create_ssm_client
    else
      puts "Missing RUNNING_ON_LOCAL_DEV in ENV"
      puts "Please pass `RUNNING_ON_LOCAL_DEV=true` if you're in dev/test mode"
      exit!
    end
  end

  # Creates an STS client based on the developer's user profile to assume role
  # @return [Aws::STS::Client]
  def self.create_sts_client_for_user
    @sts_client = Aws::STS::Client.new
  end

  # Use the temporary credentials that AssumeRole returns to make a
  # connection to Parameter Store
  # @return [Aws::SSM::Client]
  # @see https://docs.aws.amazon.com/sdkforruby/api/Aws/SSM/Client.html
  def self.create_ssm_client_for_assumed_role
    @ssm_client = Aws::SSM::Client.new(region: ENV['region '].strip,
                                       credentials: @assumed_role_object)
  end

  def self.create_ssm_client_from_env_variables
    @ssm_client = Aws::SSM::Client.new(region: ENV['AWS_REGION'],
                                       credentials: @role_credentials)
  end

  # Creates a credentials object for the assumed role
  # @return [Aws::Credentials]
  # @see https://docs.aws.amazon.com/sdkforruby/api/Aws/Credentials.html
  def self.call_create_creds_from_credentials
    @role_credentials = Aws::Credentials.new(ENV['AWS_ACCESS_KEY_ID'],
                                             ENV['AWS_SECRET_ACCESS_KEY'],
                                             ENV['AWS_SECURITY_TOKEN'])
  end

  # Calls the assume_role method of the STS client object @sts_client and
  # passes the role ARN, role session name, MFA device number and one time passcode
  # @return [Aws::STS::Types::AssumeRoleResponse]
  # @see https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html
  def self.call_assume_role
    if !ENV['TOKEN_CODE'].nil?
      @assumed_role_object = @sts_client.assume_role(
        role_arn: ENV['role_arn '].strip,
        role_session_name: "AssumeRoleSession_local",
        serial_number: ENV['mfa_serial '].strip,
        token_code: ENV['TOKEN_CODE']
      )
    else
      puts "Missing value for `ENV['TOKEN_CODE']`."
      puts "Please pass MFA code as an environment variable in `TOKEN_CODE` and retry"
      exit!
    end
  end

  # Creates SSM client for AWS ECS
  # When the ECS task is running with FARGATE, this method will see that
  # the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI variable is available, and it
  # will use the provided credentials to make calls to the AWS APIs
  # @return [Aws::SSM::Client]
  # @see https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/SSM/Client.html
  def self.create_ssm_client
    @ssm_client = Aws::SSM::Client.new
  end

  # Creates a credentials object for the ECS executing role in AWS
  # @return [Aws::InstanceProfileCredentials]
  # @see https://docs.aws.amazon.com/sdkforruby/api/Aws/InstanceProfileCredentials.html
  def self.call_create_creds_in_ecs
    @role_credentials = Aws::InstanceProfileCredentials.new
  end

  def self.get_param_from_parameter_store(name)
    decrypted_param = @ssm_client.get_parameter(name: name.to_s, with_decryption: true)
    decrypted_param.parameter.value
  end
end
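For readers coming from Python, the local-dev assume-role request built in `call_assume_role` above can be sketched as a small helper that assembles the STS `AssumeRole` parameters from an environment mapping. The key names with trailing spaces mirror the Ruby module's convention; the commented boto3 call at the end is indicative only and assumes valid AWS credentials and an MFA device.

```python
def build_assume_role_request(env):
    """Build kwargs for STS AssumeRole, mirroring AwsSsmHandler.call_assume_role.

    The oddly spaced key names ('role_arn ', 'mfa_serial ') follow the Ruby
    module's convention; values are stripped of surrounding whitespace.
    """
    token_code = env.get("TOKEN_CODE")
    if not token_code:
        # The Ruby version prints a message and calls exit!; raising is the
        # more idiomatic Python equivalent.
        raise ValueError("Missing TOKEN_CODE: pass the MFA code and retry")
    return {
        "RoleArn": env["role_arn "].strip(),
        "RoleSessionName": "AssumeRoleSession_local",
        "SerialNumber": env["mfa_serial "].strip(),
        "TokenCode": token_code,
    }

# With boto3 (not run here), the request would then be issued as:
#   import boto3, os
#   resp = boto3.client("sts").assume_role(**build_assume_role_request(dict(os.environ)))
```

Keeping the parameter-building step pure like this makes the MFA/ARN handling testable without touching AWS at all.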
\section{Full simulation results}

The main body of the text focuses on instantiations of our algorithms that employ Thompson sampling as their underlying strategy. We here report results for comparison to Exp3, UCB1 and Epsilon Greedy.

\begin{figure*}[t]
	\centering
	\vspace{-6mm}
	\subfigure[]
	{\includegraphics[width=1\textwidth]{figs/example2allbandits.png}\hspace{1cm}
	\label{fig:rich}}
	\subfigure[]
	{\includegraphics[width=1\textwidth]{figs/T12allbandits.png}
	\label{fig:t12}}
	\caption{(a) Average surplus rewards relative to random assignment for Example 2 for 1,000 simulations. (b) Average surplus rewards relative to random assignment for an adaptive trial over 12 patients over 1,000 simulations.}
\end{figure*}
September 30, 2015 · October 6, 2015

A Guitar, A Keyboard, and A Computer in a Bedroom: A Conversation With The Score

Where Do You Run [EP] - The Score

Every so often, there comes a song that instantly captures the attention of everyone within earshot; a song so powerfully catchy and mesmerizing, that it holds its audience hostage from start to finish; a song that one cannot help but listen to on repeat. After leaving New York for Los Angeles and risking everything in the process, Eddie Anthony and Edan Dover, who together comprise musical duo The Score, made that song.

Oh my love, let me be your fire
We're a thousand miles up and I'm 'bout to get higher
Feel my heart beating out my chest
You're the only prayer I need to make me feel blessed

With its explosive entrance, constant drive and no-frills positive attitude, "Oh My Love" is easy to fall in love with – in fact, many already have! Released independently to SoundCloud in early 2015, "Oh My Love" quickly accumulated over 1.8 million streams, enjoying massive success in the UK where it achieved the coveted #1 spot on the Spotify UK Viral Chart and the #4 spot on the iTunes UK pop charts. The song's popularity and mass appeal thrust Eddie and Edan, who had already been making music together for quite some time, into a much-welcomed spotlight: The success of "Oh My Love" led to The Score signing with Republic Records and landing a major advertisement placement in the UK. In short, the past few months have been a whirlwind of adventure for Eddie and Edan – but the real adventure is only just beginning.

With little notice, The Score last week released Where Do You Run, a debut EP that joins "Oh My Love" with three more infectiously catchy and memorable hits in the making – "Where Do You Run," "Something New," and "Livin Right."
The EP serves as confirmation that The Score are by no means a one-and-done kind of band, and that they may, in fact, end up as one of the biggest indie pop acts of 2016. There's a long road ahead of them, but the future looks nothing but bright for The Score. Atwood Magazine met with Eddie and Edan last week, just days before their EP release, to dig deeper into the indie pop duo as individuals and learn all that we could about this hot artist on the rise! Watch: "Oh My Love" – The Score A CONVERSATION WITH THE SCORE Atwood Magazine: Eddie and Edan, welcome back to New York! What does it feel like to be back in this city? Eddie: It's cool, it feels good! Edan: Yeah, it feels great to be back in New York, having come full circle in the past year. Being in New York now, we're in a very different place from where we were a year ago. Where were you a year ago? Eddie: A year ago, we had just left New York to go to LA. Edan: Yeah, literally a year ago from now! Eddie: We were doing well in New York – like, we were playing Rockwood all the time, packing the main room and stuff, but we really wanted to step it up a notch and really try to make it, and everything was out in LA. So we just packed up, like "We're doing it!" Did you know folks out there? Eddie: All of our music friends, like a lot of other writers and producers, had all moved to LA. There was a mass exodus in the past three years, and we were one of the last ones on that train. I'm from Orange County over there, so it was nice to be closer to home, but we didn't have a set plan, really; we were just like, "If we want to make it, we gotta go over," because there was no one here in the pop world. Edan: If you paint the picture of where we were a year ago: We hadn't written "Oh My Love" yet. Eddie: Yeah, if you had told a year ago, "You're going to be signed to the biggest label in the world, and you're going to be on the radio soon," we'd have been like, "That's cool, that's funny." 
Edan: We were like, grinding in my bedroom – basically – in my Upper East Side apartment – Eddie: – wow, that… – Edan: – not… We were just hustling [laughs] in my bedroom. We were working really hard all the time, we had just released two EPs independently and we got a little bit of traction from those – you know, little things like in-store play from Abercrombie and Hollister. Eddie: We were a finalist, or second-place for the John Lennon Songwriting Contest. Edan: Little things here and there, but we were just like, "Okay, I guess we're going to move to LA," and it was really a leap of faith for us; we didn't realize that all of this – what's happened in the past year – would actually happen. 'Oh My Love' may be the song that introduced the world to The Score, but it is by no means your first song. You have quite the repertoire - it's like a whole different band beforehand. Edan: Yeah, and I think that happens a lot, because artists need to go through that critical development time. People don't realize – like, a lot of artists are marketed in a way like they came out of nowhere; like they just picked up a guitar and started writing hit songs, and that's not how it works at all. For most people – maybe not everyone – but for most people, there's a long period of artist development, of grinding the pavement and really figuring out who they are and honing their craft. Eddie: Yeah, we're trying to still find our sound. For us, I don't think the songwriting has changed very much; we've been writing for so long together. I don't think it was the songwriting part as much as it was finding the sound that worked – like, a combination of songwriting that made us stick out. On "Oh My Love," it kind of clicked: The songwriting was there, the production and the sound was kind of sonically 'different,' so I think that's what set us apart from the rest of the EPs. Listen: "Say Something (A Great Big World Ft. 
Christina Aguilera Cover)" – The Score Eddie: Everything that makes us now is the result of all the work we put in back then. That's the great thing about going to LA – we worked so hard in New York, like writing in the apartment and playing shows – where we are now is a culmination of all that work. It's nothing to be ashamed of; we're not this 'packaged' pop group who came out of nowhere. Everyone has a past, and I don't think it's a big deal to say, "We were an acoustic pop band before." You gained a lot of popularity from your covers; 'Say Something' was your most streamed song before 'Oh My Love.' Edan: Yeah, YouTube was good for us. There was a period of time when we were just doing covers. We called them "Score Sundays," and we tried to do them every two weeks or so. We put a lot of time and effort into them and we got our initial audience through YouTube. It was a great way to build that first audience base. Eddie: We weren't touring, so we needed a way to get a base. Edan: And it was great. The practice, I think, of dissecting all of these hits songs and recreating them in our own voice – it's very educational. Not only the process of recording and producing everything, but also learning from the great songs and the songwriters who wrote these great songs. If you recreate them, you internalize the melodies and chord progressions a little better. I think that was also a good education for us. What makes The Score different? Edan: For us, it's about the songwriting; we're originally producers and songwriters, so first and foremost that's in our DNA. Eddie: I think we try to make every song a single. With our EP and the album, we're not trying to write 'album cuts' – as messed up as that sounds – but we're trying to write every song like it could potentially be on the radio. 
Edan: Not because radio is the endgame, but because writing universally anthemic, relatable choruses and melodies that just – we want people to feel uplifted; we want people to listen to the songs and feel better. It's like almost therapy – that's what we try to do with our music. Why the name 'The Score' - what do you want people to think of when they hear your name? Edan: We like the association of the score to music, like a music score… There's a bunch of different ways – you could do the music score, there's sports – like, everybody wants to know the score of the game – Eddie: Hashtag winning! Edan: When we came across that name, it just seemed to fit so well, so we stuck with it. Eddie: And it was the one name that we were going through that we didn't absolutely hate – so like, this is cool. Edan: Like, "E-Squared" was definitely one that had to be cut [laughs]. Eddie: It was just bad – we had a show coming up, and that name sounded pretty good so we used that. The Score is Eddie Anthony (left) and Edan Dover (right) // © Steven Taylor You're signed to Republic Records alongside a number of other acts who gained their initial momentum and ascent independently through streaming music. What does this signing mean to you? Eddie: This signing kind of means everything to us – it's the pinnacle of everything we've worked so hard for. We had choices between any label we wanted, but I think Republic shared our mentality of wanting to win and going hard – it's a very New York mentality – and I think at the end of the day, even though we're in LA, the mentality we still have is very New York, and Republic totally got that. You still have your personal band email on your website. Does that go to you guys, and do you check it often? Edan: Yeah – we get all the emails and we check it every day! Are you planning to keep it up there? 
Edan: I mean, after this interview, maybe not [laughs]… Yeah, management has been hounding us about making a special management email and putting that up instead, so maybe it'll change to a management email, but for now we're getting all the emails. How has that been for you? Have you received anything really special that stood out? Eddie: A lot of weddings. A lot of couples are using "Oh My Love" for their wedding song, so we get a lot of requests like, "Hey, can we use your song in our wedding video or at our wedding?" so that's cool to see a song we wrote in Edan's apartment being the basis for someone's marriage. Onto the music: Is there any structure to how you write your songs? Edan: Yeah – I mean, our roles are kind of different. Eddie's a songwriter songwriter: He picks up a guitar and starts writing lyric and melody, whereas I am a jazz-trained pianist and producer; I'm very techy and stuff. Usually he'll come to me with an idea of some kind of melody – like in "Oh My Love," he came to me with the chorus melody – and then we start building the track. We build the actual record as we're writing it because we have that producer/songwriter vibe, so usually what you hear on our final records is how it sounded from the very beginning. There is no demo, and then it's re-recorded. It goes straight from demo to record. What was the hardest song for you to write off of your EP? Eddie: I think "Livin Right" wasn't the hardest song to write, but it was the hardest song to record and produce – to create. When I brought it to Edan, it was just the verse and chorus on the acoustic guitar, but it was so stripped that we didn't know where to take it to make it our sound, sonically. Out of those four songs, that might have been the most difficult one to nail down. Edan: I think that song has a little bit of an '80s electronic influence and we don't really usually do that with our songs. We had to push the production in that direction to really do justice to the song. 
When we finally figured it out, it worked – it sounded great. Usually what you hear on our final records is how it sounded from the very beginning. How long did 'Oh My Love' take? Edan: It was quick. The song was written in a day or two, and within a week the production was done. Eddie: Probably within four or five days, the song was the rough version of what it is. What role does repetition play in your music? Edan: Oh, we love repetition! Eddie: I think what separates us from a lot of other bands is that we're so song-driven – like, a lot of bands are huge fans of other bands, and their palettes are very eclectic, and ours still are, but I think we're really big fans of songwriters, like the Ryan Tedders and the Max Martins and all that – Edan: When we look at records we like, we look up who produced them and who wrote them, you know? Eddie: So repetition, for us, is everything because anything in pop music that's a big song repeats like crazy. That's like, the Max Martin rule number one. Edan: When we write, our method of writing is the path of least resistance. We've learned that, to get the creative flow going and to get into a rhythm while you're writing, you need to make sure you don't think too much, if that makes any sense. It's good to think outside the box and whatnot, and get creative – but you don't want to get too analytical to the point where you're stalling yourself out, because once you fall down that pit of despair while writing, it's hard to get out of that writer's block. So you've got to create this creative rhythm and keep going with it. The best way to do that is to just let it flow and not be too critical of yourself. Repetition is always a place we go, because it just works: People want to hear a verse sung the same way the second time as the first time, they want to hear that hook multiple times. We love to do that – we're not scared of repetition at all. 
Now, there is such a thing as too much repetition, but we'd rather be too repetitive, and then figure out afterwards how to make it less repetitive, than vice versa. What has been your biggest challenge to overcome so far? Eddie: It's not really the biggest challenge, but because these records are so production-heavy now, we're just having to translate that into our live set. Our home has been pretty much a studio, so now is the first time that we get to venture out to start touring and playing out. I think right now, trying to match our live set to our recordings is probably the toughest thing. Edan: Yeah, we're not a traditional band in the sense that we're a bunch of dudes who jammed in a garage and figured out how we're going to plays these songs, and write them together live. A guitar, a keyboard, and a computer in a bedroom: That's how music is created today, and that's how we make our music. So then we have to play this live – and we're no strangers to performing live, but it does take a second how to replicate these massive productions in a live setting. Eddie: Our older acoustic pop songs – those are really easy to do live. Now, we're working with samples and different pads and stuff to match the record. A guitar, a keyboard, and a computer in a bedroom: That's how music is created today. You were saying earlier that you look to songwriters as your influences. For you as musicians, what are your goals? Eddie: I think the goals of our songs are to have a universal appeal. It's been crazy seeing "Oh My Love" take off in the UK with the response there, in a whole different I've never been to. Hopefully it translates well here. I think we're just trying to write songs, like what Edan said earlier, that are uplifting and universal, and that are just big songs! We want to be like Ryan Tedder; we want to be like those guys. Edan: So, I used to study jazz piano. 
I would hole myself up in a practice room for five hours a day, practicing, and yeah – within my little community of the Jazz Department at NYU, they would care, but if I showed my friends what I was up to – like, the music I was writing or playing at the time, they really could not relate to it. It was hard for them to tap into. With pop songwriting, we tap into a world that so many more people relate with, and we can touch so many more people. For Eddie and me, that's what we crave – being able to make that connection. So we want write those songs that make that connection with as many people as possible. If your music is universally understood, what are those themes that you want people to be getting out of it? Edan: With "Oh My Love," we were coming from a place where we had just started this new adventure. We just had moved to LA, we packed up all of our bags, we left our friends and family back behind, and we were taking this risk, so I think we wanted to instill confidence in ourselves. We wanted to write a song – a melody and a lyric – that was, I guess, a reassurance for ourselves that we're going to do this; we may have taken a risk, but we're going to achieve what we set out here to achieve. That's one of the feelings that we want to spread to people. Eddie: It's funny – people comment about "Oh My Love" that there are all these religious undertones, or it's about a relationship, etc.; it's cool to see people pull different things from a song that wasn't necessarily written for religious things – it's just the way it came out. There's a universality-slash-ambiguity to some of the songs, which is really cool; people get to pull their own interpretation from them. We wanted to write a song – a melody and a lyric – that was a reassurance for ourselves that we're going to do this; I think we wanted to instill confidence in ourselves. 
'The song 'Catching Fire' is not on the EP, but it contains some very deep lyrics: 'I am always aiming too high / Staring too long makes you blind / You got my wings catching fire.' Eddie: It was a song about a relationship where you just know you're going to go down, but we kind of wanted to have it be a little coy and play up the whole 'Icarus' thing, and so that's why there are a lot of references to literally catching fire – like, you're flying too high and aiming too high – kind of the whole Icarus story, but instead of a father/son aspect, having it be with a relationship. That was the basis for that song – it was fun to do; the turnaround was pretty quick on that song. Listen: "Catching Fire" – The Score 'Livin Right' has this darkness-to-light theme to it - 'Heaven can wait, we're staying here; spending tonight like there's nothing to fear. Heaven can wait as long as you're near – living it right like there's nothing to fear' What do these words mean to you? Eddie: "Livin Right" was about guy that finds a girl who was kind of messed up in something before, and he wants her to know that this is going to be 'living right' – like with this new relationship, everything's going to be fine. "Living it right like there's nothing to fear" – just take a chance. The second song most folks will hear of yours will be "Where Do You Run" – what does that song mean to you? Edan: I love that song. Eddie: "Where Do You Run" is probably our favorite from the EP. It's really fun to play live. Edan: It's the one song that – I guess it is uplifting, but in a less obvious way. What's cool about it is that it's not specific to a kind of relationship – like a romantic relationship; it could be about any kind of relationship – father/son, friends, sister/brother. It's just about being that support for somebody when they need you; when they're in a bad place. "Who do you run to? Where do you run?" 
Eddie: When you're at your lowest… Like, one of the lyrics is, "Where do you run when you're screaming out? Where do you run when no one can hear you?" When you're at your lowest, where do you run and who do you turn to? Edan: It's that point of vulnerability. Eddie: And you're just saying in the song that you'll be that person to whoever's feeling low. It could be anything. Edan: We haven't started the music video for that, but I think we can already envision it; it's just one of those really big sounding, emotional, epic kind of songs. We love it! 'Where Do You Run' is is about being that support for somebody when they need you – when they're in a bad place. What is the most important lyric to you? Edan: "Where Do You Run." That song has a depth to it that I can really connect with. Since you're the producer as well, what's the biggest moment out of the production that affects you? Edan: Dude, there are so many little things that get me excited! We can go song-by-song. "Livin Right" – I love the '80s toms on "Livin Right" and how in each of the verses, another element introduces itself and it plays a critical part in the production. It's not just like we threw everything at a canvas and it all stuck: Each instrument plays a role. Then there's little things, like the bend of the bass synth into the second chorus – little surprises here and there that we throw that get me super excited every time I listen. On "Where Do You Run," the bridge is our favorite part. It's just so epic – we have this huge choir thing come in, and this rapid guitar lead that's played… For me, when I listen to "Where Do You Run," I'm just waiting until the bridge. I love everything up to the bridge, but that's my favorite part. Eddie: Lyrically, I like this thing on "Livin Right" – "The faith that you need when you're running out / I'll be devout," I think that's a really cool lyric. Edan: Oh shit – yeah, I like that lyric too! 
Eddie: I don't know, for some reason – I didn't realize it – but a lot of the stuff I was writing had all these religious undertones. I have no idea why, but it's just a cool lyric – it sticks out. Edan: What are the exact words for that? Eddie: "I'll be the sun, light break in your clouds / The truth when you're feeling doubt / The faith that you need when you're running out / I'll be devout" Edan: "Running out / I'll be devout." When I heard him – he came to me with that, and I was like, "That's genius." So awesome. Did you have a religious upbringing? Eddie: Yeah, but I'm not super religious by any means. On certain songs, it came out more than others – I don't know why, I wasn't necessarily thinking about that stuff. If you were to write a song that wasn't about relationships, what would it be about? Eddie: That's tough, because we usually tend to write about stuff that we know. It says a lot that that's what your focus has been as a songwriter. Edan: There's so many other things you could do – a song about proving people writing; a song about being a champion… But is that The Score? Edan: I guess not – I mean, who knows where our songwriting will take us in the future. Eddie: I'm trying to think of kind of like, U2? I don't want to say, like, 'political stuff,' but a lot of their album – like on The Joshua Tree, they had "With Or Without You," but then they had "Sunday Bloody Sunday" and stuff like that. I don't know – maybe something about current, relevant events with the world; I don't know. Mitch and The Score Do you believe, as artists, that you have a social responsibility that goes beyond delivering music? Eddie: Yeah, I think that music gives you a platform to reach out to people in other ways aside from the music. Bands like Coldplay, and things like the Global Citizen Festival – I think anyone with a platform, whether it's music or film, etc. 
– if you are lucky and blessed enough to have a platform to do what you love and have a mass audience, I think you have a responsibility to use that for the betterment of everyone else. And that's just starting for you guys. Edan: Yeah, I mean as the platform grows and we get bigger and our influence gets bigger, we're excited to make a positive change in any way we can. That's something we'll definitely explore once people actually hear about us! [laughs] In the UK we're doing well, but in the US, we've got a mountain to climb. We're confident – we've got a great team at Republic, so we're super excited about what's ahead. What are you most looking forward to over the next year? Eddie: I'm looking forward to hearing the song on the radio over here. I think that'll be a very surreal, "Whoa!" moment, and then seeing it climb… I think we're both excited for what we think "Oh My Love" is going to do – the doors it will open for our other music; for opportunities for us, as songwriters, to write for other people; I think there's a whole bunch of stuff down the road that we're excited for, but "Oh My Love" is definitely the kick-starter. And it has been the entire time; you've been riding this for a little while. Edan: It didn't really get popular until July, so it's actually very new. We released it in February, but I'd say the biggest thing was that we got a very large sync. Eddie: The ASDA… Edan: There was a very popular commercial in the UK that used the song very prominently throughout all these different versions of the commercials on radio and TV… That was the spark for all this, so it's all been pretty recent! Are you sick of it yet? Eddie: No, I don't think we've totally realized it yet. Edan: Apparently everybody in the UK knows the song! I meet people who are visiting from London, and we'll get to talking and we'll be like, "Oh yeah, we have that song that's 'Oh My Love,'" and they'll be like, "What!?" 
Or, "The song in the ASDA advertisement" if they don't recognize the song by name, and they can literally sing the song back to me! They know the song – the lyrics and everything! That's such a surreal experience. Eddie: It's funny when on Twitter, too. I'll check our Twitter every morning, and like, today people were tweeting, "So happy to hear The Score on BBC One today." It's really cool to be seeing something that we did in his apartment to progress from getting a couple good blog posts, to an advertisement, and now it's on BBC One, which is arguably one of the biggest stations in the world. It's kind of cool to see that transpire and evolve. We're stoked – we're not tired of it yet. Listen: "Don't Wanna Wake Up" – The Score If there were any song from your backlog that you could either reproduce or really get people's ears around today... Can you think of a song you would want to bring back? Edan: There's a song that we play live that we live, "Almost." We've never been able to figure out how to produce it correctly, so we never recorded it. "This Beating Heart" is a great song that a lot of people like… I always liked "Dancing Shoes," although it's a little more mellow… Eddie: "Don't Wanna Wake Up," everyone loved, but I don't think it's really for us right now. It's just so poppy – like Train could do it, and that would be cool. Edan: I'd say "This Beating Heart" – if we could reproduce "This Beating Heart"… Back then, when we produced, we were producing things a certain way and we didn't have as much experience as we have now. That would be a cool experiment – to jump in and try redoing that song today, and to take all the new influences and the new things that we do, and bring them into that song. It would definitely come out pretty cool. Listen: "This Beating Heart" – The Score What are you most excited for at the end of this year? 
Eddie: So we start going to radio here in the US in September, and that's a big landmark for us… I don't know; we're just doing the radio tour for all of October and a bunch of one-offs in November. I think this whole end of the year is going to be kind of fun to ride for us, building "Oh My Love" up, and then once the album comes out, it's going to be a whole other… We're excited. I don't know if there's one thing necessarily. Any last words for your fans out there? Edan: We need a name for our fans. Eddie: Oh yeah, that was on Twitter also. Edan: You know how "beliebers" are for Justin Bieber? We're trying to figure out ours; the only one we could think of is really inappropriate, so we need… like, Scorers? Scorelords? Any suggestions! What's the inappropriate one? Edan: Score whores. [laughs] We don't have a really big social media following – we're growing – but people have been asking, like, "You need a name for your fandom!" We'll have to do a competition. Eddie: A DJ in the UK did a remix of "Oh My Love" and it's number eleven or ten on The Hype Machine right now, so that's kind of cool. Listen: "Oh My Love (Kat Krazy Remix)" – The Score We've come to a point of maturity in our songwriting and our sound. Edan: Yep, that's great. I guess to the fans, it would just be… This has been an amazing journey, and we're really excited to share this new music with them. This EP, four songs, is going to be, like, really – we've come to a point of maturity in our songwriting and our sound that, if our fans loved all the stuff we did before, we think they're really going to love what's around the corner! Eddie: And we're excited to get out to everyone else and make new fans! We're so small, and I think this song is going to be very big, so it's going to be exciting to see all the new people who get to know who we are. We'll grow the scoreboard! The Score: Yes – absolutely! 
Where Do You Run – An EP by The Score

Learn more about The Score online at www.thescoreofficial.com
Follow The Score on Facebook, Twitter, and Instagram
Got a question for Eddie and Edan? Email them at thescore@thescoreofficial.com
\section*{Supplementary Materials} \label{sec:supplementaryMaterial} Demo video: \url{https://youtu.be/AOENvwf8sfM} \par \IEEEpeerreviewmaketitle \section{Introduction} \IEEEPARstart{I}{n} application scenarios such as search and expedition missions, small drones are often used to explore unknown environments. To achieve autonomous unmanned aerial vehicle (UAV) navigation in unknown environments, onboard simultaneous localization and mapping (SLAM) and path planning are required. In the research field of path planning for UAVs, the three most essential indicators are usually safety, path length, and the calculation time for replanning the trajectory. In general, all methods are designed to ensure that the flight is safe, that is, collision-free. However, regarding path length and calculation time, most researchers have focused on only one of these factors because of the potential conflict between them. In other words, calculating a shorter path is always more time-consuming. Minimizing the calculation time often requires planning directly on the raw sensing data instead of on a periodically updated map, which makes complex environments difficult to handle: the UAV is more likely to detour, resulting in an inefficient flight trajectory. In addition, calculating the shortest path on the global 3D map consumes too much time and is not applicable to real-time planning. While the drone flies in an unknown environment, the environmental information it senses is continuously used to update the map. After the map is updated, if the planned path cannot be replanned in time, the safety of the drone's flight is greatly imperiled. Therefore, for real-time calculation during drone flight, using a local map (the part of the global map around the location of the drone) is a common and effective method. Moreover, drones in unknown environments usually do not have a complete map, which means that the globally shortest path is difficult to plan.
Admittedly, planning the globally shortest path with only a local map is impossible. Still, shortening the path in the local map also contributes to shortening the final flight path. Nevertheless, to respond to emergencies, planning on the map may not be sufficiently fast: mapping and then planning on the map take too long to avoid intruding dynamic obstacles. The drone must be able to avoid sudden obstacles in the unknown environment before the map is updated. Therefore, we propose a framework in which a low-frequency path planner and a high-frequency trajectory planner work in parallel. The designated goal is cast to the local map as the local goal. The map planner (MP) first determines the 2D path to the local goal with the improved jump point search (JPS) method on the projection map. Then, a discrete angular graph search (DAGS) is used twice to find a 3D path that is obviously shorter than the 2D one. If such a shorter 3D path is found, it is adopted; otherwise, the 2D path is output. The point cloud planner (PCP) for trajectory planning is based on the design in our previous work \cite{chen2020computationally}. With a given goal, the PCP continuously generates collision-free motion primitives in a computationally efficient way to navigate the drone. In this parallel framework, it calculates its goal from the path output by the MP. In addition, we introduce the calculation formula for obtaining the goal for the PCP from the waypoints in the path. One benefit of the local map is that the computational time for path planning on the map does not increase with the global map size. The path output frequency and the computational resource usage are guaranteed to stay within a specific range, so the loop frequency of the PCP is unaffected and the MP can respond to map changes in time. In this framework, all the submodules are designed to minimize the time cost.
For UAVs' real-time planning, a higher planning outer-loop frequency is safer. The main contributions of this work are as follows: \begin{itemize} \item A parallel architecture with the MP and PCP is proposed, considering the planning success rate, path length, and fast response. The framework has been tested to achieve satisfactory all-around performance in extensive environments. \item We improve the original JPS path and obtain a shorter 2D path. A sliding local map with two resolutions is introduced to increase the planning speed while maintaining a fine-grained path around the drone. \item We introduce the DAGS based on the angular cost and try to find a 3D path shorter than the improved 2D path. \item Based on our former work \cite{chen2020computationally}, the PCP is further optimized in time cost, motion planning success rate, and safety. To connect the two planners in one framework, we build an optimization problem to calculate the local goal from the path output by the MP. The analytical solution of the optimization problem is found from a geometric view. \end{itemize} Our proposed framework's performance is tested and verified in several simulation and hardware tests. The flight trajectory length and the detailed algorithm execution time are compared with the shortest global path length and those of state-of-the-art algorithms, respectively, demonstrating the superiority of our method with the best all-around performance. \section{Related work} \subsection{Environmental information retrieval methods} In the related works, two main categories of environmental information retrieval methods can be summarized: memory-less and fusion-based \cite{Duchon2014}. The first category uses only the latest sensor measurement data or weights the most recent data \cite{dey2016vision,florence2018nanomap}. In other words, these methods do not record obstacles that have already been passed \cite{schulman2014motion,augugliaro2012generation}.
An example of this kind of method is to generate motion primitives randomly and check the transformed point cloud against the trajectories for collision \cite{lopez2017aggressive}. However, path planning directly on the latest point cloud requires high-quality information with a wide field of view. For UAVs with limited sensors, e.g., a single depth camera with a narrow field of view, such methods are not applicable. The second type is based on data fusion. Sensor data are continuously fused into a map, usually in the form of an occupancy grid or a distance field \cite{lau2010improved}. Because a map is important for planning a safe and short path in complex scenarios, many navigation frameworks operate on a global or local map. For building a map with the sensed environmental information, representative methods include voxel grids \cite{elfes1989using}, Octomap \cite{hornung2013octomap}, and elevation maps \cite{choi2012global}. Octomap is memory-efficient for representing a large-scale environment and maintaining the map automatically. In \cite{lunenburg2018a}, with raw point cloud data as input, Octomap is utilized to provide the map for the proposed planning algorithms, and the experimental results are satisfactory. \subsection{Path planning} For path searching of UAVs, the commonly used algorithms can be classified into two categories: search-based and sampling-based methods. Search-based methods discretize the whole space into a grid map and solve path planning by graph searching. The graph can be defined in a 2D, 3D, or higher-order state space. Typical methods include Dijkstra \cite{dijkstra1959a}, A* \cite{hart1968a}, anytime repairing A* \cite{likhachev2003ara}, JPS \cite{harabor2011online}, and hybrid A* \cite{dolgov2010path}. Dijkstra's algorithm is the root of the above methods; it searches for a path by exhaustively exploring all the given grids.
A* improves the efficiency by introducing a heuristic cost function that prunes searches leading away from the goal. As an improved version of the traditional A*, JPS greatly reduces the time cost without sacrificing optimality. However, as the search directions are constrained to multiples of $45^{\circ}$, the path is not the true shortest one in an unconstrained 2D map. Sampling-based methods usually do not need to discretize the space first. In a representative sampling-based approach such as the rapidly exploring random tree (RRT) \cite{lavalle2001randomized}, random and uniform sampling is performed in the space near the starting point, and the root node and child nodes are continuously connected to form a tree that grows toward the target. The RRT algorithm can effectively find a feasible path, but it has no asymptotic optimality, and its search stops at the first feasible solution. Sampling-based methods with asymptotic optimality include probabilistic road maps (PRM*) \cite{kavraki1996probabilistic}, rapidly exploring random graphs (RRG) \cite{karaman2011sampling}, and RRT* \cite{karaman2011sampling}, where RRT* makes the solution converge to the global optimum as the number of samples increases. RRG is an extension of the RRT algorithm in that it connects each new sample with all other nodes within a specific range and searches for the path after constructing the graph. Based on RRT, the method in \cite{webb2013kinodynamic} relaxes the optimal control of time to ensure the asymptotic optimality of the path and kinematic feasibility. A belief roadmap can also be combined with RRG \cite{bry2011rapidly} to solve trajectory planning under state uncertainty; a technique called ``partial ordering'' balances confidence and distance when expanding the graph in the belief space.
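As a concrete illustration of the search-based family discussed above, a minimal A* on a 2D occupancy grid is sketched below. This is a generic textbook rendering, not the planner used in this work (which builds on JPS); the grid, step costs, and heuristic are illustrative.

```python
import heapq
import math

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (0 = free, 1 = occupied).

    Moves are 8-connected with unit / sqrt(2) step costs; the octile
    distance heuristic is admissible for these costs.
    """
    def h(a, b):
        dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
        return max(dx, dy) + (math.sqrt(2) - 1) * min(dx, dy)

    rows, cols = len(grid), len(grid[0])
    tie = 0                                      # heap tie-breaker
    open_set = [(h(start, goal), tie, start, None)]
    g_cost = {start: 0.0}
    parent = {}
    closed = set()
    while open_set:
        _, _, cur, par = heapq.heappop(open_set)
        if cur in closed:
            continue                             # stale heap entry
        closed.add(cur)
        parent[cur] = par
        if cur == goal:                          # walk parents back to start
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (cur[0] + dx, cur[1] + dy)
                if (dx, dy) == (0, 0) or nxt in closed:
                    continue
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                    continue
                if grid[nxt[0]][nxt[1]]:         # occupied cell
                    continue
                step = math.sqrt(2) if dx and dy else 1.0
                g = g_cost[cur] + step
                if g < g_cost.get(nxt, math.inf):
                    g_cost[nxt] = g
                    tie += 1
                    heapq.heappush(open_set, (g + h(nxt, goal), tie, nxt, cur))
    return None                                  # goal unreachable
```

JPS accelerates exactly this search on uniform-cost grids by expanding only "jump points" instead of every neighbor, which is why it is the natural choice for the 2D planner later in this paper.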
\subsection{Trajectory and motion planning} The trajectory planner utilizes the local obstacle information and the target point's position to plan an optimal path and a corresponding set of motion primitives within a particular time. The artificial potential field (APF) method \cite{khatib1986real} models the goal point as exerting an ``attractive force'' on the vehicle and obstacles as exerting ``repulsive forces''; the movement of the vehicle is controlled by following the resultant force. Its formulation is concise, but it easily falls into local minima. The vector field histogram (VFH) \cite{borenstein1991the}, improved from the APF method, is a classical algorithm for robot navigation with a lidar. VFH calculates the travel cost in each direction: the more obstacles in a direction, the higher the cost. The dynamic window approach (DWA) is a sampling-based method, which samples motion primitives within the feasible space and chooses one set by ranking them with a cost function \cite{fox1997the}. The concepts in these classical methods are valuable and enlightening references for our method. However, for a UAV with a single depth camera flying in an unknown environment, they are not sufficient. Inspired by the methods mentioned above, several advanced frameworks have been successfully tested in actual drone flights. Frameworks that directly obtain motion primitives can also be divided into two categories: one is based on solving an optimization problem, and the other is based on sampling within the feasible state space. The former requires an appropriate representation of the UAV trajectory, e.g., Bezier curves \cite{gao2018online}, and constraints are needed to ensure the planned trajectory is collision-free. Representative works in this category are \cite{howard2008state,zhou2019robust,preiss2017trajectory}.
For sampling-based methods, an evaluation function is necessary to choose the most suitable group of motion primitives as the output. A representative work is presented in \cite{mueller2015a}, in which the time cost to generate and evaluate trajectories is short enough to enable the quadrotor even to catch a falling ball. Path planning on a local map can be conducted before the motion planning to shorten the trajectory. The local map only maintains the environment information near the UAV, while the global goal is converted to a goal in the local map. Then, the path planning algorithms introduced above can be utilized on the local map, and the motion primitives are obtained based on the path. For example, Chen et al. \cite{chen2019dynamic} and Liu et al. \cite{liu2016high} adopt the minimum-jerk method to generate a trajectory through the waypoints from an A* search based on the local occupancy grid map. Besides, some works have studied how to allocate the flight time for a drone to fly through the waypoints. The receding horizon control policy is introduced to plan within a limited time range \cite{watterson2015safe}, and a bi-level optimization method is also effective \cite{berg2012lqg,oleynikova2018safe}. For both categories, the collision check is the most time-consuming part of the framework. In our previous work \cite{chen2020computationally}, we proposed a computationally efficient sampling-based method with a novel collision check technique to generate collision-free trajectories on the point cloud obtained during the UAV's navigation. Its loop frequency is up to 60 Hz, and it is a submodule of this project. In the last few years, several research works have discussed how to combine optimal global planning algorithms for static maps with algorithms for real-time online replanning, so that the combination inherits the strengths of both: working out a short path and responding quickly to map changes when replanning the trajectory.
To handle the unknown space in the environment, several methods can be adopted: the unknown space is regarded as freely passable in \cite{chen2016online,pivtoraiko2013incremental}, and the path is continuously adjusted as the obstacle information is updated. We call this the optimistic planner. In \cite{oleynikova2018safe,oleynikova2016continuous}, an optimistic global planner and a conservative local planner are combined to ensure the safety of the aircraft. To diminish the inconsistency between the global and local planners, Tordesillas et al. \cite{tordesillas2019real} proposed a planning framework with multi-fidelity models. They run the JPS algorithm on a local sliding grid map, and the constraints of the motion optimization are divided into three parts according to the distance from the drone, where the strictest constraints apply to the closest area. \section{Mapping and the map planner} The flight system architecture is shown in Fig. \ref{fig_archi}. The MP obtains the map from the mapping kit and plans the final path as the reference, and the PCP searches for the next waypoint based on the final path and optimizes the motion primitives to make the drone fly through the waypoint. \begin{figure}[thpb] \centering \includegraphics[width=0.48\textwidth]{figs/pipeline.png} \caption{Architecture of our autonomous navigation system for UAVs.} \label{fig_archi} \end{figure} This section primarily introduces the construction of a stitched map with two resolutions and the algorithms used for path planning in the local map (the MP). The PCP will be introduced in Section \uppercase\expandafter{\romannumeral4}. Moreover, the point cloud filter for preprocessing the raw sensor data is introduced first. \subsection{Point cloud filter} The dense point cloud from a real depth camera contains noise. The noise follows certain distributions; in particular, the noise level is high when the object is far from the camera.
In addition, the dense points may overburden computational procedures such as point coordinate transformation, mapping, and planning. Moreover, the noise will mislead the mapper into marking many nonexistent obstacles on the map. Therefore, before building a map, we filter out the noise in the point cloud and keep the actual obstacle points. The process of filtering the point cloud is shown in Fig. \ref{fig2}. First, we filter the original point cloud data through the distance filter to obtain $Pcl_1$. It removes the points farther than $d_{pass}$ from the camera, which may contain too much noise. Next, a voxel filter is used to reduce $Pcl_1$ to $Pcl_2$. Furthermore, the outlier filter removes the outliers to obtain $Pcl_3$: outliers are distinguished by the local point density, because the point density of noise is usually smaller. Then, we convert $Pcl_3$ into $Pcl_4$ in the Earth coordinate system and use $Pcl_4$ in the mapping kit to build and maintain a global map. Finally, the center points of the occupied voxels are used as the 3D map for the collision check, referred to as $Pcl_m$. $Pcl_m$ retains the basic shape of the obstacles in a more concise and tidy form. $B_E$ in (1) is the transformation matrix from the body coordinate system $B\!-\!xyz$ to the Earth coordinate system $E\!-\!X\!Y\!Z$, where $c\psi$ is short for $cos(\psi)$, $s\psi$ is short for $sin(\psi)$, and $\psi$, $\phi$, and $\theta$ are the attitude angles. In summary, $Pcl_4$ is used for the map building and the collision check in the PCP, and $Pcl_m$ is used for the 3D path collision check in the MP.
$$ B_{E}=\left[\begin{array}{ccc} c \psi c \theta & s \psi c \theta & - s \theta\\ c \psi s \theta s \phi-s \psi c \phi & s \psi s \theta s \phi+c \psi c \phi & c \theta s \phi \\ c \psi s \theta c \phi+s \psi s \phi & s \psi s \theta c \phi-c \psi s \phi & c \theta c \phi \end{array}\right] \eqno{(1)} $$ \begin{figure}[htp] \centering \includegraphics[width=0.5\textwidth]{figs/f24.png} \caption{Process for point cloud filtering, coordinate transformation, and mapping.} \label{fig2} \end{figure} \begin{figure}[htp] \centering \includegraphics[width=0.5\textwidth]{figs/f25.png} \caption{Local and global maps. $g_{l} \in \mathbb{R}^{3}$ is the goal cast into the local 3D map $Pcl_{lm}$, $g'_{l}$ is the projection of $g_{l}$ onto the 2D local map $Map_1$, and $g'_{l}$ lies on the edge of $Map_1$.} \label{fig21} \vspace{-0.3cm} \end{figure} \subsection{Mapping and 2D path planning} The mapping kit MLmapping\footnote{https://github.com/HKPolyU-UAV/MLMapping} assembled in this project is self-developed. It provides $Pcl_m$ and the projected 2D grid map for the path planning in this paper. Here, we first introduce the basic concepts of the local map. A local map is a subset of the global map and is also represented by the voxels' center points. The space covered by the local map is a cuboid with a square bottom surface, and it has no rotation relative to the global map. As shown in Fig. \ref{fig21}, $l_{ms}$ is the square side length, and $h_{ms}$ is the local map height, which is much smaller than $l_{ms}$. The center of the local map follows the drone's current position $p_{n} \in \mathbb{R}^{3}$. In the text below, $Pcl_{lm}$ denotes the subset of $Pcl_{m}$ corresponding to the local map. By projecting the local map onto the ground plane, the 2D grid map $Map_1$ is obtained to plan the 2D path. To plan an optimal path on $Map_1$, JPS is one of the best choices, because it is fast and can replan the path in real time.
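The three-stage filtering chain of the previous subsection (distance cut-off $d_{pass}$, voxel downsampling, density-based outlier removal) can be sketched in plain numpy. A real system would typically use a point-cloud library for this; the thresholds below are illustrative examples, not the values used in this work.

```python
import numpy as np

def filter_cloud(points, d_pass=5.0, voxel=0.2, radius=0.5, min_neighbors=3):
    """Illustrative three-stage filter. `points` is an (N, 3) array in
    the camera frame; all thresholds are example values."""
    points = np.asarray(points, float)

    # 1) Distance filter (Pcl_1): drop points farther than d_pass,
    #    since depth noise grows with range.
    pcl1 = points[np.linalg.norm(points, axis=1) < d_pass]
    if len(pcl1) == 0:
        return pcl1

    # 2) Voxel filter (Pcl_2): keep one centroid per occupied voxel.
    keys = np.floor(pcl1 / voxel).astype(int)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    n_vox = inv.max() + 1
    sums = np.zeros((n_vox, 3))
    np.add.at(sums, inv, pcl1)
    counts = np.bincount(inv, minlength=n_vox).astype(float)
    pcl2 = sums / counts[:, None]

    # 3) Outlier filter (Pcl_3): noise is sparser than real surfaces,
    #    so drop points with too few neighbors within `radius`.
    dists = np.linalg.norm(pcl2[:, None, :] - pcl2[None, :, :], axis=2)
    neighbors = (dists < radius).sum(axis=1) - 1  # exclude the point itself
    return pcl2[neighbors >= min_neighbors]
```

The pairwise-distance outlier check is quadratic in the number of points, which is only acceptable because the voxel filter has already thinned the cloud; a production pipeline would use a spatial index instead.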
JPS outputs the optimal path by searching for a set of jump points where the path changes its direction. However, two problems arise if the path planning is performed directly on $Map_1$. First, to find a short and safe path, the local map scale should not be small. Otherwise, the optimal path on a tiny local map is more likely to end at a blind alley or to differ substantially from the globally optimal path. However, a large local map is computationally expensive, and it is important to leave as much CPU capacity as possible to the high-frequency PCP for safety. Second, the path planned directly on $Map_1$ is adjacent to the obstacle projection. In our framework, considering the drone frame size and flight control inaccuracies, the drone must remain at a safe distance from obstacles; thus, the path should keep a certain distance from obstacles. The PCP makes the drone closely follow the path obtained by the MP, and the PCP takes effect when the path is found to be occupied. As a result, the PCP in this framework can run faster than that of \cite{chen2020computationally}, because the initial search direction is more likely to be collision-free. \begin{figure}[thpb] \centering \includegraphics[width=0.46\textwidth]{figs/f031.png} \caption{Partial view of the mapping process ($k=3$, $h=2$). The figures in the first row show the path in $Map_1$ and $Map_{1b}$. The figures in the second row show the path in $Map_c$, the jump points in $Path_1$, and the stitched map. The dark gray grids indicate obstacles, and the light gray grids are the obstacles' inflation after the convolution. The path planning start point is the center of $Map_1$ and $Map_{c}$.} \label{fig3} \vspace{-0.3cm} \end{figure} In our framework, we take two measures to address these two problems. For the first problem, we plan the path on the downsampled local map and the cast local map, respectively, and fuse the paths.
We first conduct a convolution on $Map_1$ to reduce the map size, obtaining a low-resolution version $Map_{1b}$. $Map_c$ is then segmented from $Map_1$ as the original-resolution map around the drone. Then, we plan $Path_b$ on $Map_{1b}$ and find the intersection point $g_{ist}$ of $Path_b$ and the boundary of $Map_c$. The part of $Path_b$ that lies in $Map_c$ is removed. Finally, we use $g_{ist}$ as the goal point to find the path $Path_a$ in $Map_c$, and splice $Path_a$ and $Path_b$ to form a complete path $Path_1$. The grid size of $Map_{1b}$ positively correlates with the map size, so the time cost of the 2D path planning can be controlled. For the second problem, after obtaining $Map_c$, we first perform an expansion operation on the obstacles in $Map_c$. By convolving the binary matrix corresponding to $Map_c$ with a convolution kernel, the blank area next to an obstacle in the map is marked as an obstacle, so that each point on $Path_1$ maintains a certain distance from the true obstacles. $$Map'_{1} = \left[\begin{array}{cc} [Map_{1}]_{i \times j}&[\textbf{0}]_{(i+s) \times s}\\ {}[\textbf{0}]_{s \times j}& \end{array}\right]\eqno{(2)}$$ $$Map'_{c} = \left[\begin{array}{ccc} &[\textbf{0}]_{k \times (n+2k)}&\\ {}[\textbf{0}]_{(m+2k) \times k} & [Map_{c}]_{m \times n} & [\textbf{0}]_{(m+2k) \times k}\\ &[\textbf{0}]_{k \times (n+2k)}& \end{array}\right]\eqno{(3)}$$ $$Map_{c} = Sgn(Conv_{1}([Map'_{c}]_{(m+2k)\times (n+2k)},I_{k\times k}))\eqno{(4)}$$ $$Map_{1b}=\langle Conv_{h}([Map'_{1}]_{(i+s)\times(j+s)},\dfrac{I_{h\times h}}{h^{2}})\rangle \eqno{(5)}$$ $$h=\langle\dfrac{ij}{2mn}\rangle \ (ij>3mn \ \&\ (i+s)\ \text{is divisible by}\ h)\eqno{(6)}$$ Equations (2)-(5) show the calculation of the downsampling and inflation. $i$ and $j$ denote the size of the original $Map_1$ ($i=j$), and $m$ and $n$ denote the size of the cut map $Map_c$ ($m=n$). We use $[\ ]$ to denote a matrix, and the subscript of a matrix denotes its size.
$[\textbf{0}]$ indicates the zero matrix, and $I$ indicates the all-ones matrix. $s$ is the number of rows and columns of zero padding for $Map_1$. $h$ and $k$ are the sizes of the convolution kernels for the map downsampling and the obstacle inflation, respectively. $Sgn()$ is a function that returns the sign matrix corresponding to each element in the input matrix; the sign matrix is used as the binary map with two types of elements, 0 and 1. $Conv()$ indicates the convolution, which inflates the occupied grids on the map or downsamples $Map_1$. Its subscript indicates the step size for the kernel sliding, and its second argument is the convolution kernel. $\langle \rangle$ rounds a number to the nearest integer; if a matrix is in $\langle \rangle$, each element of the matrix is rounded. (4) represents the obstacle inflation process, and (5) is for the map downsampling. Fig. \ref{fig3} illustrates the map processing and path planning intuitively. The dark gray grids represent the obstacles, and the light gray grids are the inflation of the obstacles after the convolution. $g_{ist}$ is shown in blue on the maps. When $g_{ist}$ is occupied after the inflation, we find the nearest free grid on the map edge as the new $g_{ist}$. The calculation of $h$ is introduced in (6), where $i,\ j,\ m,\ n,\ s$ should meet the conditions in the bracket. \subsection{Improved 2D path} In Section \uppercase\expandafter{\romannumeral3}.B, a path $Path_{1}(jp_{1},jp_{2},...)$ on a plane parallel to the ground plane $XY$ is found using the JPS method in the hybrid map of $Map_{1b}$ and $Map_{c}$. However, in some cases, it is not the shortest path in the plane, as the search directions of waypoints can only be multiples of $45^{\circ}$. We can further shorten the original path by deleting the redundant waypoints. For example, in Fig. \ref{fig32}, the red path is the original path, the green path is the improved path, and $jp_2$ and $jp_4$ are deleted. The deleting process is written in Algorithm \ref{alg21}.
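A compact Python rendering of this shortcutting idea follows. The line-of-sight test here simply samples points along the segment, whereas the real planner checks the occupied grids crossed by the segment; names and the sampling density are illustrative.

```python
def line_free(grid, a, b, samples=50):
    """Illustrative line-of-sight test on a binary grid (1 = occupied):
    sample points along segment a-b and reject if any cell is occupied."""
    for t in (i / samples for i in range(samples + 1)):
        r = round(a[0] + t * (b[0] - a[0]))
        c = round(a[1] + t * (b[1] - a[1]))
        if grid[r][c]:
            return False
    return True

def shortcut(path, grid):
    """Algorithm 1: delete redundant waypoints from a JPS path. Whenever
    jp_ck can see jp_ti directly, the waypoint just before jp_ti is
    removed and the two mutually visible points are connected."""
    path = list(path)
    ck = 0
    while ck < len(path):
        ti = ck + 2
        while ti < len(path) and len(path) > 2:
            if line_free(grid, path[ck], path[ti]):
                del path[ti - 1]   # the intermediate waypoint is redundant
            else:
                ti += 1
        ck += 1
    return path
```

On an empty grid the sketch collapses a multi-segment path to its two endpoints, while a blocking cell between two waypoints leaves the detour intact.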
$ti$ is the iteration number, $jp_{ck}$ is the $ck\text{th}$ point in $Path_1$, and similarly for $jp_{ti}$. We connect the third point in the original JPS path with the first point and check whether the line collides with any occupied grid in the map. If it does not collide, the point before the checked point in the waypoint sequence of the original JPS path is deleted, and the first and third points are directly connected as the path. Then, the next point is checked, until all the non-adjacent point pairs from $Path_1$ have been checked and all the excess waypoints in $Path_1$ are removed. \begin{algorithm}[htp] \caption{Optimize the original JPS path} \label{alg21} \begin{algorithmic}[1] \FOR{$jp_{ck}$ in $Path_1$ ($ck$ is the iteration number):} \STATE $ti=ck+2$ \WHILE{$ti<len(Path_{1})$ and $len(Path_{1})>2$} \IF{$\overline{jp_{ck}jp_{ti}}$ does not collide with the occupied grids in the 2D map:} \STATE $ti = ti-1$, delete $jp_{ti}$ from $Path_1$ \ENDIF \STATE $ti=ti+1$ \ENDWHILE \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Shorter 3D path searching} After an improved 2D path is found, an obviously shorter 3D path exists in some scenarios. For example, to avoid a wall that has a large width but a limited height, flying above the wall is better than bypassing it from the right or left. To search for a shorter 3D path with light computation, a generalized method, DAGS, applicable to all environments is described in detail in Algorithm \ref{alg31}, Fig. \ref{fig32}, and Fig. \ref{fig31}. It is composed of two rounds of search, and each round determines one straight line segment of the 3D path. $Pcl_{lm}$ is divided into two parts: one part, denoted as $Pcl_{lm\!-\!1}$, is composed of the points whose distance to $p_n$ is smaller than $\overline{p_{n}jp'_{1}}$; the other part, $Pcl_{lm\!-\!2}$, consists of the remaining points. As shown in Fig.
\ref{fig32}, the first segment is $\overline{p_{n}tp_{1}}$, the second segment is $\overline{tp_{1}tp_{2}}$, and $p_{n}\!-\!tp_{1}\!-\!tp_{2}\!-\!g_{l}$ represents the shorter 3D path. $\alpha_{res}$ is the angular resolution of the discrete angular graph. $A_g(\alpha_{g1},\alpha_{g2})$ is the angular part of the spherical coordinates of $g_l$, and the origin of the spherical coordinate system is $p_{sr}$ for each search round. $min()$ is a function that returns the minimal value of an array. Here, the procedure for the first round of the search is briefly introduced, and the second round is basically identical. First, the discrete angular graph is built by Algorithm \ref{alg31}, line 3-8, as shown in Fig. \ref{fig31}(b). $\lfloor \ \rfloor$ returns the integer part of each element of the input. The angular coordinate in the graph is the direction angle difference between the goal $g_l$ and any point in the space. The colored grids represent all the discrete angular coordinates $A_{mid-d}$ corresponding to the input point cloud. Then, the relative direction angle $A_{eg}$ for $\overline{p_{n}tp_{1}}$ is found (line 9), which has the minimal angle difference with $\overrightarrow{p_{n}g_l}$ (the yellow grid in Fig. \ref{fig32} and Fig. \ref{fig31}). Next, the length of path segment $\overline{p_{n}tp_{1}}$ is determined in line 10, and the direction angle of this path segment in $E\!\!-\!\!X\!Y\!Z$ is calculated by line 11. $\alpha_{safe}$ is the angle increment to make the path segment remain a safe distance from obstacles. Finally, the coordinate of the endpoint of this path segment is calculated in line 13. If the 3D path is found by Algorithm \ref{alg31}, it is compared to the optimized $Path_{1}$, and the path with the shortest length is denoted as $Path_{fnl}$. Subsequently, the drone follows $Path_{fnl}$, and the MP is suspended until $Path_{fnl}$ collides with the obstacles in the updated map. 
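To make the binning step of each DAGS round concrete (lines 3-8 of Algorithm 2), the following sketch bins local obstacle points by their direction-angle difference to the goal. The spherical-angle convention and the final edge-bin selection are simplified and illustrative, not a faithful reimplementation.

```python
import numpy as np

def angular_bins(points, p, g, alpha_res_deg=20.0):
    """Bin obstacle points by direction-angle difference to the goal.

    points: (N, 3) obstacle points; p: origin of the search round;
    g: local goal. Returns the set of occupied (azimuth, elevation)
    bin indices relative to the direction of p -> g.
    """
    def ang(v):
        # azimuth in the ground plane and elevation, both in degrees
        az = np.degrees(np.arctan2(v[..., 1], v[..., 0]))
        el = np.degrees(np.arctan2(v[..., 2],
                                   np.linalg.norm(v[..., :2], axis=-1)))
        return np.stack([az, el], axis=-1)

    a_g = ang(np.asarray(g, float) - p)           # direction angles of the goal
    a_pts = ang(np.asarray(points, float) - p)    # direction angles of each point
    diff = (a_pts - a_g + 180.0) % 360.0 - 180.0  # wrap into (-180, 180]
    bins = np.floor(diff / alpha_res_deg).astype(int)
    return {tuple(b) for b in bins}

# The free direction closest to the goal is then the bin nearest to
# (0, 0) that lies just outside the occupied region of the graph.
```

A point straight toward the goal lands in bin (0, 0), so a nonempty bin there means the direct line to the goal is blocked and the search must step outward along the bin edge, as in Fig. 6(b).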
\begin{figure}[htb] \centering \includegraphics[width=0.50\textwidth]{figs/f0312.png} \caption{A scenario where the 3D path is much shorter than the improved 2D path.} \label{fig32} \end{figure} \begin{figure}[htb] \centering \subfigure[]{ \includegraphics[width=0.215\textwidth]{figs/f0313.png}} \centering \subfigure[]{\includegraphics[width=0.26\textwidth]{figs/f0311.png}} \caption{(a): A wall stands between the drone $p_n$ and the local goal $g_l$, (b): The discrete angular graph for (a), $\alpha_{res}=20^\circ$. The orange grids are the edge grid of the obstacle.} \label{fig31} \end{figure} \begin{algorithm}[htp] \caption{DAGS method} \label{alg31} \begin{algorithmic}[1] \FOR{$sr$ in \{1,2\} ($sr$ is the searching rounds number)} \STATE If $sr=1$, $p_{sr} = p_{n}$, otherwise $p_{sr} = tp_{1}$ \STATE {The angular coordinate of $\overrightarrow{p_{sr}g_{l}}$ $\rightarrow$ $A_{g}(\alpha_{g1},\alpha_{g2})$} \FOR{each point $p_{mi}$ in $Pcl_{lm-sr}$:} \STATE The angular coordinate of $p_{mi}$ $\rightarrow$ $A_{mi}$ \STATE $A_{mig}=A_{mi}-A_{g}$ \STATE Discretize $A_{mig}$, $A_{mig-d}=\lfloor A_{mig}/\alpha_{res}\rfloor$, build the discrete angular graph with $A_{mig\!-\!d}$ \ENDFOR \STATE The edge of all $A_{mig-d}$ in the angular graph $\rightarrow$ $A_{eg-all}$, $A_{eg} \subset A_{eg-all}$ and $\|A_{eg}\|_{2}=min(\|A_{eg-all}(1)\|_{2},\|A_{eg-all}(2)\|_{2},...)$ \STATE Points in $Pcl_{lm-sr}$ corresponding to $A_{eg}$ $\rightarrow$ $Pcl_{eg}$, $p_{eg} \subset Pcl_{eg}$ and $p_{eg}$ has the maximal distance to $\overline{p_{sr}g_{l}}$, $l_{tp}=\overline{p_{n}p_{eg}}$ \STATE Get the direction angle of $\overrightarrow {p_{sr}tp_{sr}}$, $A_{tp} =(\|A_{eg}\|_{2}+ \alpha_{safe})\dfrac{A_{eg}}{\|A_{eg}\|_{2}}$, $\alpha_{safe}=arcsin(r_{safe}/l_{tp})$ \STATE $tp_{sr} = p_{sr} + l_{tp}(cos(A_{tp}(1)),sin(A_{tp}(1)), sin(A_{tp}(2)))$ \ENDFOR \end{algorithmic} \end{algorithm} \section{Trajectory planning on the point cloud} This section will introduce how the goal $g_n$ is 
generated from $Path_{fnl}$ for the PCP's current step $n$, and how the PCP outputs the final motion primitives. First, the discrete angular search (DAS) method specifies a safe waypoint $w_{pn} \in \mathbb{R}^{3}$ in free space, which the drone should traverse. The motion planner solves the optimization equation to make the drone pass through $w_{pn}$ under the given motion constraints. The PCP also includes an additional safety measure to ensure that no collision occurs, which takes over when no $w_{pn}$ can be found in an emergency. \subsection{Review of DAS} The PCP in this paper is an improved version of our previously proposed trajectory planner based on the DAS method \cite{chen2020computationally}; briefly reviewing it helps to understand the contributions in this section. In \cite{chen2020computationally}, we first search for a waypoint $w_{pn}$ near the drone as the constraint for the motion planning, and a quadratic polynomial curve is optimized as the final trajectory. The motion planning solves a nonlinear optimization problem: the trajectory traverses $w_{pn}$ within a small, predetermined distance error, and the corresponding motion primitives are within the kinematic constraints. Moreover, the real trajectory between $p_n$ and $w_{pn}$ can be proved collision-free. As shown in Fig. \ref{fig61}, $g_{n}$ is the goal point for the current step; a cluster of line segments fans out in the direction of $\overrightarrow{p_{n}g_{n}}$, and these line segments have a common starting point $p_n$ and the same length $r_{det}$. In this paper, $g_{n}$ is determined by the planned path $Path_{fnl}$. $r_{det}$ is the point cloud distance threshold, and the collision check only considers the points within distance $r_{det}$ from $p_{n}$.
The two symmetrical lines about $\overrightarrow{p_{n}g_{n}}$ on the plane parallel to the ground plane are first checked to determine if they collide with obstacles (minimal distance smaller than $r_{safe}$). $r_{safe}$ is an important parameter for safety, which is a function of the preassigned maximal speed $v_{max}$ and acceleration $a_{max}$. Then, the two symmetrical lines about $\overrightarrow{p_{n}g_{n}}$ in the vertical plane are checked. Fig. \ref{fig61} shows that when the first round of the search fails, another round with a greater angle difference is conducted until a collision-free line $\overline{p_{n}p_{w}}$ is found. $w_{pn}$ is on $\overline{p_{n}p_{w}}$, and $\overline{p_{n}w_{pn}}$ should satisfy the safety analysis in \cite{chen2020computationally}. When the drone encounters a suddenly intruding obstacle in the sensor detection range, the PCP plans a safe trajectory in 0.02 s before the map and $Path_{fnl}$ are updated. \begin{figure}[thpb] \centering \includegraphics[width=0.47\textwidth]{figs/das.png} \hfill \caption{Illustration of the DAS process. The red dash circle and red arrows are for the search on the ground-parallel plane, while the green ones are for the vertical plane. In this figure, the collision check for the first line of the second search round is successful (denoted as $\overline{p_{n}p_{w}}$).} \label{fig61} \vspace{-0.6cm} \end{figure} \subsection{Connection between the PCP and MP} For the PCP, an updated goal point $g_{n}$ is always required at every step $n$. The direction $\overrightarrow{p_{n}g_{n}}$ is the initial search direction. If this direction does not collide with any obstacles, the planned trajectory will head to $g_{n}$ directly. The final path $Path_{fnl}=[pt_1,pt_2,...,g_l]$ is received from the MP ($Path_{fnl}$ does not include $p_n$), and an optimization problem (8) is designed to find the current goal $g_{n}$. It is designed to make the final trajectory smooth by sliding $g_{n}$ continuously.
If $pt_1$ is simply assigned as $g_{n}$, $g_{n}$ will jump to $pt_{2}$ as the drone approaches $pt_1$. This jump may cause $w_{pn}$ to jump along with $g_{n}$ to a position that cannot be reached within the drone's kinematic constraints. The drone should start to turn earlier to avoid a violent maneuver, which leads to a greater control error and undermines safety. The endpoint of the planned trajectory remains near $Path_{fnl}$ under the premise of safety assurance. \vspace{-0.3cm} $$\begin{aligned} \min _{v_{t}} \quad & \|v_{t}\dfrac{\|v_{0}\|_2}{\|v_{t}\|_2}-v_{0}\|_{2}+\|v_{t}-\kappa_{1}a_{1}\|_{2} + \|v_{t}-\kappa_{2}a_{2}\|_{2} \\ \text {s.t.} \quad & a_1=pt_{1}-p_{n},\ a_2=pt_{2}-p_{n}, \ v_{t}= \overrightarrow{p_{n}g_{n}} \end{aligned} \eqno{(8)}$$ In (8), the three components are the acceleration cost, the cost of $pt_1$, and the cost of $pt_2$. $p_n$ is the current position of the drone, and $v_0$ is the current velocity. $v_t$ represents the initial search direction of the local planner. $r_{det}$ is the search range radius for the DAS. $\kappa_1$ and $\kappa_2$ are the weight factors for adjusting the influence of $pt_1$ and $pt_2$ on $v_t$, and $\kappa_1$ is much larger than $\kappa_2$. Fig. \ref{fig4} intuitively demonstrates the initial search direction $v_t$ for the PCP, drone position, and waypoints on $Path_{fnl}$. The green, dashed line displays the rough shape of the trajectory if the PCP does not check for a collision and $w_{pn}$ is always on $\overline{p_{n}g_{n}}$. We can see from (8) and Fig. \ref{fig4} that as the drone approaches the next waypoint $pt_1$, the influence on $g_{n}$ from $a_2$ overwhelms that from $a_1$. When the drone is far from $pt_1$ and $pt_2$, $a_1$ is the governing influence factor of $v_t$. We can also reduce the difference between the trajectory and $Path_{fnl}$ by regulating $\kappa_1$ and $\kappa_2$. If only one waypoint $g_l$ remains in $Path_{fnl}$, $pt_1=pt_2=g_l$.
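Since (8) amounts to finding the point with minimal total distance to three fixed anchors, the goal selection can be prototyped with the Weiszfeld iteration for the geometric median (a sketch with hypothetical helper names, not the closed-form Fermat-point solution derived next; $\kappa_1=4.2$ and $\kappa_2=1.5$ follow TABLE \ref{table_1}):

```python
import numpy as np

def fermat_point(p1, p2, p3, iters=200):
    """Weiszfeld iteration: the point minimizing the total distance to the
    three anchors (the Fermat point when all triangle angles < 120 deg)."""
    pts = np.array([p1, p2, p3], dtype=float)
    x = pts.mean(axis=0)                      # start from the centroid
    for _ in range(iters):
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < 1e-12):                 # landed exactly on a vertex
            break
        w = 1.0 / d
        x = (pts * w[:, None]).sum(axis=0) / w.sum()
    return x

def goal_from_path(p_n, v0, pt1, pt2, k1=4.2, k2=1.5):
    """Slide the step goal g_n as the Fermat point of the triangle
    (k1*a1 + p_n, k2*a2 + p_n, v0 + p_n), mirroring (8)."""
    p_n, v0, pt1, pt2 = (np.asarray(v, dtype=float) for v in (p_n, v0, pt1, pt2))
    a1, a2 = pt1 - p_n, pt2 - p_n
    return fermat_point(k1 * a1 + p_n, k2 * a2 + p_n, v0 + p_n)
```

The iteration converges for non-degenerate triangles; the closed form is of course faster onboard, which is why the paper uses it.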
Solving a nonlinear optimization problem, such as (8), is computationally expensive. From a geometric point of view, the nature of (8) is to find a point ($v_t$) in the space with the minimal total distance to three fixed points ($\kappa_{1}a_{1}+p_{n}$,\ $\kappa_{2}a_{2}+p_{n}$,\ $v_{0}+p_{n}$). The triangle composed of these three points is called a Fermat triangle. It can be solved by locating the \textbf{Fermat point} $f_m$ of the triangle, as shown in Fig. \ref{fig4}, and $g_{n}=f_{m}$. The calculation of $f_m$ is illustrated in (9). First, the plane coordinates $(x_{f1},y_{f1})$, $(x_{f2},y_{f2})$, and $(x_{f3},y_{f3})$ in the plane $P_{fm}$ for the three points are determined ($P_{fm}$ is the plane defined by the Fermat triangle), with the vertices ordered counterclockwise in $P_{fm}$. $S_{fm}$ is the area of the triangle. $l_{1}$ is the length of the side opposite to the triangle point $(x_{f1},y_{f1})$, and so on. $f'_{m}(x_{fm},y_{fm})$ is the coordinate of $f_m$ in plane $P_{fm}$. Finally, $f'_{m}$ is converted to $E\!-\!X\!Y\!Z$ to obtain $f_m$. $$ \begin{aligned} & x_{fm}= (\sum_{i=1}^{3}x_{fi}(4S_{fm}+\sqrt{3}l^{2}_{i})+g(y))/f_{Sl} \\ & y_{fm}=(\sum_{i=1}^{3}y_{fi}(4S_{fm}+\sqrt{3}l^{2}_{i})-g(x))/f_{Sl}\\ & g(x)= [x_{f1},x_{f2},x_{f3}][l^{2}_{3}-l^{2}_{2},l^{2}_{1}-l^{2}_{3},l^{2}_{2}-l^{2}_{1}]^{T}\\ & g(y)= [y_{f1},y_{f2},y_{f3}][l^{2}_{3}-l^{2}_{2},l^{2}_{1}-l^{2}_{3},l^{2}_{2}-l^{2}_{1}]^{T}\\ & f_{Sl}= 12S_{fm}+\sqrt{3}(l^{2}_{1}+l^{2}_{2}+l^{2}_{3}) \end{aligned}\eqno{(9)}$$ \begin{figure}[thpb] \centering \includegraphics[width=0.38\textwidth]{figs/f032.png} \caption{Geometric illustration of the analytical solution of (8). The pink, dashed line marks the Fermat triangle.} \label{fig4} \end{figure} \subsection{Improvements on the PCP} \subsubsection{Streamline and sequence the input point cloud} For the input point cloud $Pcl_{r}$ of the PCP, $Pcl_{r}=Pcl_{mr} \cup Pcl_{4r}$.
$Pcl_{4r}$ is the subset of $Pcl_4$, $Pcl_{mr}$ is the subset of $Pcl_m$, and their distance to $p_n$ is within $r_{det}$. Not only $Pcl_{4r}$ but also $Pcl_{mr}$ is used as the input point cloud because the camera field of view (FOV) is narrow. If only $Pcl_{4r}$ is checked for collisions, the drone may still collide with the obstacles outside the FOV. In \cite{chen2020computationally}, we find that the execution time of the trajectory planner is highly dependent on the size of the input point cloud. The collision check accounts for a large proportion of the total time cost. Every point in $Pcl_{r}$ is checked for a collision in the loops. Once a point $p_{1st}$ in $Pcl_{r}$ is first found to collide with the detecting line segment, the collision check loops are terminated. Therefore, we hope to find $p_{1st}$ in a more computation-efficient way to reduce the time cost. By analyzing a large amount of recorded data of the PCP in simulation and hardware tests, we found the following statistical regularities. $p_{1st}$ is more likely to appear in the region closer to $p_n$ and $\overline{p_{n}g_{n}}$ ($Pcl_{r}$ is first sorted in order of increasing distance to $p_n$). $d_{ft}$ is the farthest distance from $p_n$ to the points in $Pcl_{r}$. For approximately $89.6\%$ of all the recorded $p_{1st}$, $\overline{p_{n}p_{1st}} \leq 0.5d_{ft}$. For approximately $81.3\%$ of $p_{1st}$, the angle $\angle p_{1st}p_{n}g_{n} \leq 90^{\circ}$. Thus, the part of $Pcl_{r}$ that is out of the highlight range ($\overline{p_{n}p_{1st}} \leq 0.5d_{ft}$, and $\angle p_{1st}p_{n}g_{n} \leq 90^{\circ}$) can be streamlined. In Algorithm \ref{alg41}, we streamline $Pcl_{r}$ to retain at most $n_{use}$ of its points. $len()$ returns the list size. $D_{pn}$ is the list that stores the distance between the points in $Pcl_{4r}$ and $p_n$. Points that are more likely to collide are stored in the list $Pcl_{use}$ to receive a higher priority in the collision check (lines 3-4).
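As a rough sketch of this prioritization (hypothetical helper; the cloud is assumed to be an $N\times3$ NumPy array, and $n_{use}=70$ follows TABLE \ref{table_1}):

```python
import numpy as np

def streamline_cloud(pcl, p_n, g_n, n_use=70):
    """Keep at most n_use points, prioritizing the region where the first
    colliding point p_1st is statistically most likely to appear."""
    pcl = np.asarray(pcl, dtype=float)
    p_n, g_n = np.asarray(p_n, dtype=float), np.asarray(g_n, dtype=float)
    d = np.linalg.norm(pcl - p_n, axis=1)
    order = np.argsort(d)                     # sort by distance to p_n
    pcl, d = pcl[order], d[order]
    # angle(p, p_n, g_n) <= 90 deg  <=>  dot(p - p_n, g_n - p_n) >= 0
    front = (pcl - p_n) @ (g_n - p_n) >= 0.0
    near = d <= 0.5 * d.max()
    hot = pcl[front | near]                   # high-priority points
    if len(hot) > n_use:
        idx = np.linspace(0, len(hot) - 1, n_use).astype(int)  # even spacing
        return hot[idx]
    cold = pcl[~(front | near)]
    return np.vstack([hot, cold[: n_use - len(hot)]]) if len(cold) else hot
```

The even downsampling when the high-priority set overflows bounds the check's cost without discarding a contiguous region of the cloud.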
When the number of points in $Pcl_{use}$ exceeds $n_{use}$, $Pcl_{use}$ is evenly downsampled to limit its size. The limited size of $Pcl_{use}$ reduces and stabilizes the time cost for the collision check. In addition, with reasonable $n_{use}$ and $r_{safe}$, safety was not compromised in extensive simulation and hardware tests. \begin{algorithm}[htp] \caption{Streamline the sorted $Pcl_{4r}$} \label{alg41} \begin{algorithmic}[1] \FOR{$pc_j$ in $Pcl_{4r}$ ($j$ is the iteration number):} \IF {($D_{pn}(j)\le 0.5d_{ft}$ or $\angle pc_{j}p_{n}g_{n} \leq 90^{\circ}$):} \STATE Put $pc_j$ in list $Pcl_{use}$ \ENDIF \ENDFOR \IF{$len(Pcl_{use}) > n_{use}$:} \STATE Remove $len(Pcl_{use})-n_{use}$ points in $Pcl_{use}$ randomly \ELSE \STATE Choose $n_{use}-len(Pcl_{use})$ points from $\complement_{Pcl_{4r}}Pcl_{use}$ randomly, add them into $Pcl_{use}$ \ENDIF \end{algorithmic} \end{algorithm} \vspace{-0.3cm} \subsubsection{Improve the motion primitives generation efficiency} After obtaining the next waypoint $w_{pn}$, the next step is to calculate the motion primitives and send the command to the flight controller. Sending $w_{pn}$ directly as the control command may destabilize the flight, and the acceleration magnitude may exceed $a_{max}$ because $w_{pn}$ may vary significantly between two consecutive motion planning steps. Moreover, the point cloud quality is harmed when the drone's acceleration magnitude is too large. In addition, position commands cannot control the speed. To ensure that the aircraft can fly within its kinematic limits and reach the next waypoint, the motion primitives are generally obtained by solving an optimization problem. Previously \cite{chen2020computationally}, we treated the flight time to reach $w_{pn}$ as the optimization variable. However, we found that the solver may fail within the given number of iteration steps in some cases. The solving success rate with an error tolerance $10^{-3}$ is approximately $83.7\%$ within 40 steps.
When the solver fails in several consecutive PCP steps, the planned trajectory deviates considerably from $w_{pn}$, and the drone is put in great danger. Furthermore, considering the requirement of low time cost for real-time computing, the maximal number of iteration steps should be limited. The time variable increases the problem complexity, and the flight controller does not require it. Therefore, the flight time can be removed from the optimization variables, and the optimization strategy (7) is proposed; it differs slightly from that of \cite{chen2020computationally} to improve the success rate and time cost. \vspace{-0.5cm} $$\begin{aligned} \min _{a_{n}} & \left\|a_{n}\right\|_{2}^{2}+\eta_{1} \| \overrightarrow{w_{pn}p_{n+1}}\|_{2}+\eta_{2} \dfrac{\|\overrightarrow{p_{n}p^{*}_{n+1}}\times \overrightarrow{p^{*}_{n+1}w_{pn}}\|_{2}}{\|\overrightarrow{p_{n}w_{pn}}\|_2}\\ \text {s.t.} \ \ & v_{n} =\dot{p}_{n},\ a_{n} =\dot{v}_{n} \\ &\left\|v_{n+1}\right\|_{2} \leq v_{\max },\ \left\|a_{n}\right\|_{2} \leq a_{\max }\\ &v_{n+1}=v_{n}+a_{n} t_{avs}\\ &p_{n+1}=p_{n}+v_{n} t_{avs}+\frac{1}{2} a_{n} t_{avs}^{2}\\ &p^{*}_{n+1}=p_{n}+2v_{n} t_{avs}+2 a_{n} t_{avs}^{2} \end{aligned} \eqno{(7)}$$ \vspace{-0.0cm} In the revised optimization formula, we fix the trajectory prediction time to $t_{avs}$. $t_{avs}$ is the average time cost of the last 10 executions of the PCP. The endpoint constraint is moved to the objective function. The endpoint of the predicted trajectory need not coincide with $w_{pn}$. Because the execution time of the PCP is always much smaller than the planned time to reach $w_{pn}$, a new trajectory is generated before the drone reaches $w_{pn}$, and the remainder of the formerly predicted trajectory is abandoned. Predicting only the trajectory from the current step to the next step of the PCP is sufficient. Therefore, minimizing the distance between the trajectory endpoint at $t_{avs}$ and $w_{pn}$ is reasonable.
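For illustration, (7) can be prototyped as a toy projected-gradient solver (a sketch only: the actual onboard solver is not specified here, the kinematic limits $v_{max}$, $a_{max}$ are assumed values, and $\eta_1=40$, $\eta_2=10$ follow TABLE \ref{table_1}):

```python
import numpy as np

def plan_step(p_n, v_n, w_pn, t_avs, v_max=1.5, a_max=2.0,
              eta1=40.0, eta2=10.0, iters=200, lr=0.02):
    """One motion-planning step of (7): steer the predicted endpoint at
    t_avs toward w_pn while the endpoint at 2*t_avs stays near p_n->w_pn."""
    p_n, v_n, w_pn = (np.asarray(v, dtype=float) for v in (p_n, v_n, w_pn))

    def cost(a):
        p1 = p_n + v_n * t_avs + 0.5 * a * t_avs ** 2    # endpoint at t_avs
        ps = p_n + 2 * v_n * t_avs + 2 * a * t_avs ** 2  # endpoint at 2*t_avs
        off = np.linalg.norm(np.cross(ps - p_n, w_pn - ps)) \
            / np.linalg.norm(w_pn - p_n)                 # distance to the line
        return a @ a + eta1 * np.linalg.norm(w_pn - p1) + eta2 * off

    def project(a):                                      # hard kinematic limits
        if np.linalg.norm(a) > a_max:                    # ||a_n|| <= a_max
            a = a * a_max / np.linalg.norm(a)
        v1 = v_n + a * t_avs
        if np.linalg.norm(v1) > v_max:                   # ||v_{n+1}|| <= v_max
            a = (v1 * v_max / np.linalg.norm(v1) - v_n) / t_avs
        return a

    a = np.zeros(3)
    for _ in range(iters):
        g = np.array([(cost(a + h) - cost(a - h)) / 2e-6  # central differences
                      for h in np.eye(3) * 1e-6])
        a = project(a - lr * g)
    return a, v_n + a * t_avs, p_n + v_n * t_avs + 0.5 * a * t_avs ** 2
```

With a large $\eta_1$, the acceleration is pushed toward $w_{pn}$ until one of the kinematic limits becomes active, matching the behavior discussed around Fig. \ref{fig9}.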
Given that the current step run time of the PCP may exceed $t_{avs}$, the distance from the trajectory endpoint at $2t_{avs}$ to $\overline{p_{n}w_{pn}}$ should also be optimized. The subscript $n$ denotes the current step in a rolling process of the PCP. $v_{n} \in \mathbb{R}^{3}$ and $a_{n} \in \mathbb{R}^{3}$ are the current velocity and acceleration of the drone. $v_{max}$ and $a_{max}$ are the kinematic constraints for speed and acceleration, respectively, and $v_{n+1}$, $p_{n+1}$, and $p^{*}_{n+1}$ are calculated using the kinematic formula. $\eta_1$, $\eta_2$ are the weight factors for the trajectory endpoint constraint. After the modification, the success rate with error tolerance $10^{-3}$ within 20 steps is increased to $99.8\%$, and no dangerous trajectory deviation from $w_{pn}$ can be detected. Both the time cost of the motion planning and the safety of the PCP are greatly improved. \subsubsection{Safety backup plan} On some occasions, such as when the obstacles are too dense or an obstacle suddenly appears near the drone (distance is smaller than $r_{safe}$), DAS may fail to find a feasible direction. To solve this problem, the minimum braking distance $d_{bkd}=\dfrac{\|v_{n}\|^{2}_{2}}{2a_{max}}$ at current velocity $v_{n}$ is introduced. It is kept smaller than $r_{safe}$ by setting appropriate maximum acceleration constraint $a_{max}$ and velocity constraint $v_{max}$ ($\|v_{n}\|_{2} \leq v_{max}$). If the minimum distance from the drone to obstacles is greater than $d_{bkd}$, the search direction having the maximum distance to the obstacles is chosen (although the distance is smaller than $r_{safe}$). Otherwise, the drone brakes immediately and flies back to the position at the former PCP step, and the chosen search direction of the former step will not be considered after the drone has flown back in place.
This measure is called the ``safety backup plan.'' \subsection{The whole framework} \begin{algorithm}[htp] \caption{Our proposed framework} \label{alg1} \begin{algorithmic}[1] \WHILE{true: (Thread 1)} \STATE Filter the raw point cloud data, output $Pcl_4$ \ENDWHILE \WHILE{true: (Thread 2)} \STATE Build a global 3D voxel map ${Pcl_{m}}$, and project it on the ground to obtain $Map_1$ \ENDWHILE \WHILE{the goal is not reached: (Thread 3)} \IF{the shorter 3D path has not been found or it collides with the updated $Pcl_m$:} \STATE Apply convolution on ${Map_1}$, find the 2D path $Path_1$ on the stitched map. \STATE Try to find a shorter 3D path, output $Path_{fnl}$ \ENDIF \ENDWHILE \WHILE{the goal is not reached: (Thread 4)} \STATE Calculate the goal $g_{n}$ from $Path_{fnl}$ \STATE Find the waypoint $w_{pn}$ by DAS method \IF{$\textbf{found a feasible waypoint:}$} \STATE Run the motion planner to get motion primitives \ELSE \STATE Run the safety backup plan, and go to line 14 \ENDIF \STATE Send the motion primitives to the UAV flight controller \ENDWHILE \end{algorithmic} \end{algorithm} To summarize, Algorithm \ref{alg1} shows the overall proposed framework. The two planners (MP and PCP) are designed to run in parallel and asynchronously in ROS because of their large difference in operation time; they share all the data involved in the calculation via the ROS master node. In addition, the point cloud filter and the mapping kit are run in parallel on different threads. \section{Test Results} In this section, the static tests and real-time flight tests are introduced to validate the effectiveness of the methods in our proposed framework. \subsection{Algorithm performance static test} Our detailed algorithms and methods are designed to obtain undiminished or much better path planning performance at a reduced or only slightly increased time cost. To prove our proposed algorithms' effectiveness, we first test them individually offline with static data input.
This approach avoids the influence of computing-performance fluctuations caused by other simultaneously running algorithms while one algorithm is analyzed. Moreover, the data can be customized, so the tests are more effective and targeted. In this subsection, all the time costs are measured on a personal computer with an Intel Core i7-8565U 1.8-4.6 GHz processor and 8 GB RAM, and Python 2.7 is used as the programming language. \subsubsection{Path planning on the 2D map} The size of the local map is the main influencing factor of the actual flight trajectory length and computing time of each replanning step. In addition, we apply the JPS algorithm twice on two maps of different sizes and resolutions and splice the two paths into a whole. The $Map_c$ size is also a key to balancing the time cost and the path length. As the effectiveness of our proposed method should be verified and analyzed, two rounds of numerical simulations are designed and conducted. The first round tests the influence of the local map size; the second round tests the effect of the $Map_c$ size (see Fig. \ref{fig3}). A large-scale 2D map is used in the numerical tests, as shown in Fig. \ref{fig801}. The map size is 800 m*800 m, and the local map size is tested with three configurations (unit: m): 75*75, 200*200, and 400*400. We assume the local map center moves along the local map path and can only move one meter (including diagonal moves) in one step. The test is conducted with 10 combinations of randomly assigned start and goal points, whose straight-line distance is greater than 500 m. For each local map size, the 10 combinations are identical. For the first round, the average time cost and real trajectory length are compared with those of the planning on the entire map, as shown in TABLE \ref{tab_1}. $Len_1$ represents the average trajectory length, while $Len_2$ denotes the average global JPS path length.
$T_{c1}$ is the average total computing time of each replanning step with the local map, and $T_{c2}$ is that of the global planning. TABLE \ref{tab_1} shows that $Len_1$ does not increase substantially compared to $Len_2$, while the time cost is considerably lower than that of global planning. We use green color to highlight the data corresponding to our proposed method, demonstrating its superiority. $Len_1$ increases by only $2.2\%$ and $T_{c1}$ decreases by $96.6\%$ compared to $Len_{2}$ and $T_{c2}$, respectively, when the $Map_1$ size is 200 m*200 m. Among the three tested sizes, it offers the best trade-off between trajectory length and time cost. For the second round, the size of $Map_1$ is fixed at 200 m*200 m and the size of $Map_c$ has three alternatives (unit: m): 70*70, 100*100, and 120*120. In TABLE \ref{tab_2}, the average total computing time $T_{c3}$ and trajectory length $Len_3$ are compared between the tests that do and do not use the stitched map. When the size of $Map_c$ is 100 m*100 m, the real trajectory length $Len_3$ increases by only $0.31\%$, while the time cost $T_{c3}$ is reduced by $53.4\%$ compared to $Len_{1}$ and $T_{c1}$, respectively. Thus, the effectiveness of planning on the multi-resolution hybrid map is well validated. \begin{figure}[] \centering \subfigure[]{ \includegraphics[width=0.47\textwidth]{figs/f8002.png}} \hfill \centering \subfigure[]{ \includegraphics[width=0.47\textwidth]{figs/f8001.png}} \caption{Visualized result during the numerical simulation. (a): only the sliding local map is used, with the map size of 75 m*75 m, (b): the double layer map is used, the sizes of $Map_1$ and $Map_c$ are 200*200 and 100*100 (unit: m). The configuration for the obstacle inflation and map downsampling is the same as in Fig. \ref{fig3}.
The blue line indicates the real trajectory of the drone, the red dashed line indicates the global JPS path, and the purple line is the JPS path on the local map.} \label{fig801} \end{figure} \begin{table}[h] \caption{Test results of different $Map_1$ sizes} \label{tab_1} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $Map_1$ size (m) & $Len_1$ (m) & $T_{c1}$ (s) &$Len_2$ (m)& $T_{c2}$ (s)\\ \hline 75*75 & \textcolor{ForestGreen}{1021.962} &\textcolor{ForestGreen}{0.036} & 980.439 & 3.454 \\ \hline {\textbf{200*200}} & \textcolor{ForestGreen}{\textbf{1001.783}} &\textcolor{ForestGreen}{\textbf{0.118}} &{\textbf{980.439}} & {\textbf{3.454}} \\ \hline 400*400 &\textcolor{ForestGreen}{997.486} &\textcolor{ForestGreen}{0.284} &980.439 & 3.454\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{Test results of different $Map_c$ sizes} \label{tab_2} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $Map_c$ size (m) & $Len_3$ (m) & $T_{c3}$ (s) &$Len_1$ (m)& $T_{c1}$ (s)\\ \hline 70*70 & \textcolor{ForestGreen}{1110.374} &\textcolor{ForestGreen}{0.040} & 1001.783 & 0.118 \\ \hline \textbf{100*100} & \textcolor{ForestGreen}{\textbf{1004.848}} &\textcolor{ForestGreen}{\textbf{0.055}} & \textbf{1001.783} & \textbf{0.118}\\ \hline 120*120 &\textcolor{ForestGreen}{1002.917} &\textcolor{ForestGreen}{0.082}& 1001.783 & 0.118\\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Shorter 3D path searching} To study the path length shortened by the 3D path search and the corresponding extra time cost, the flight data in the Gazebo/ROS simulation environment is analyzed. Gazebo is simulation software that provides a physical simulation environment similar to the real world. All the simulation configurations are set up to be the same as or similar to our experimental hardware platform to ensure the credibility of the simulation and the analysis conclusions. The Gazebo simulation world and the visualized data are shown in Fig. \ref{fig802}.
The obstacle feature size in the simulation ranges from 0.5 m to 6 m, which is similar to that of most real scenes. Three local 3D map bottom sizes are used (unit: m): 12*12, 20*20, and 30*30, and the map height is fixed at 6 m with a map resolution of 0.2 m. The drone is at the center of the local map. We also conduct 5 flight tests with different combinations of starting and goal points for each local map size. For each flight test, an additional flight test without the 3D path search procedure is used as the control group to compare the final trajectory length. During the flight simulation, the time cost of the 3D path search and the path length are recorded for statistical analysis. The results can be found in TABLE \ref{tab_3}. $\eta_{2D}$ indicates the mean shortening percentage of the 3D path compared to the 2D path. $T_{3D}$ is the average time cost for the 3D path search (Algorithm 3). $Len_{3D}$ and $Len_{2D}$ are the actual trajectory average lengths for the flight tests with and without Algorithm 3, respectively. \begin{table}[h] \caption{Flight test results for the 3D path planning} \vspace{-0.5cm} \label{tab_3} \begin{center} \resizebox{9 cm}{10mm}{ \begin{tabular}{|c|c|c|c|c|} \hline Map size (m) & $T_{3D}$ (s) & $\eta_{2D}$ ($\%$) &$Len_{3D}$ (m)& $Len_{2D}$ (m)\\ \hline 12*12 &\textcolor{ForestGreen}{0.011} & \textcolor{ForestGreen}{38.6} & \textcolor{ForestGreen}{39.274} & 47.263\\ \hline \textbf{20*20} &\textcolor{ForestGreen}{\textbf{0.032}} &\textcolor{ForestGreen}{\textbf{35.7}} &\textcolor{ForestGreen}{\textbf{32.635}} & \textbf{44.582}\\ \hline 30*30 &\textcolor{ForestGreen}{0.059} & \textcolor{ForestGreen}{35.0} & \textcolor{ForestGreen}{30.843} & 42.341\\ \hline \end{tabular}} \end{center} \end{table} We can see that $\eta_{2D}$ and $Len_{3D}$ decrease as the map size increases. When the map size is small, the number of points in the 3D map decreases, and the algorithm seemingly becomes more likely to find a shorter 3D path.
However, the 3D path of a small local map is more likely to collide with the newly appearing obstacles in the updated local map. Therefore, the drone has a high probability of taking a detour while flying along the 3D path, and the actual trajectory length is longer than when we use a large local map. When the local map size is set to 20 m*20 m, the average outer loop frequency of the path planning is greater than 15 Hz, and the actual trajectory is smooth and natural. We use this map size in the following flight tests. \begin{figure}[] \centering \subfigure[]{ \includegraphics[width=0.46\textwidth]{figs/f8003.png}} \hfill \centering \subfigure[]{ \includegraphics[width=0.41\textwidth]{figs/f8004.png}} \caption{(a): The simulation world. (b): The visualized data in RVIZ. The Gazebo window when the flight test is ongoing is shown at the lower right corner. The colorful dots are the point cloud of the 3D local map. The black blocks on the ground plane in RVIZ stand for the obstacles, and the white part stands for the free or unknown area.} \label{fig802} \end{figure} \subsubsection{The improvements in optimization formula} After several hardware flight tests with our proposed framework, we record all the required data for solving the optimization problem, including $p_n$, $v_n$, and $w_{pn}$, at each step (over $5.3\times 10^{4}$ steps in total). To validate the improvements of the optimization formula, the collected data is input to the original optimization formula and the improved one in this paper for comparison. In addition, the optimization solving performance under different maximum iteration numbers is studied. The average time cost and overall success rate are counted for quantitative comparison. In TABLE \ref{tab_opti}, $R_{og}$ and $T_{og}$ are the solution success rate and the average time cost of the original optimization formula, respectively, and $R_{im}$ and $T_{im}$ are for the improved version.
We can see that the success rate and time cost are greatly improved. When the maximum step number is more than 20, the success rate improvement is minor. Because $99.83 \%$ is a satisfactory success rate, we set the maximum step number as 20 to reduce the time cost. The motion optimization problem-solving time decreases by $39.33\%$ compared to that of the original formula. \begin{table}[h] \caption{Test results for the improvements of optimization formula} \label{tab_opti} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Max steps & $R_{og}$ ($\%$) & $R_{im}$ ($\%$) &$T_{og}$ (ms)& $T_{im}$ (ms) \\ \hline 5 & $41.51$ &\textcolor{ForestGreen}{89.12 }&3.24 & \textcolor{ForestGreen}{2.12} \\ \hline 10 & $61.14$ &\textcolor{ForestGreen}{95.49 }&4.87 &\textcolor{ForestGreen}{2.94} \\ \hline \textbf{20} &\textbf{$76.87$} &\textcolor{ForestGreen}{\textbf{99.83}}&6.56 &\textcolor{ForestGreen}{\textbf{3.98}} \\ \hline 40 &$83.68$ &\textcolor{ForestGreen}{99.96} &9.45 &\textcolor{ForestGreen}{5.23} \\ \hline 80 &$92.15 $ &\textcolor{ForestGreen}{100.00} &14.78 &\textcolor{ForestGreen}{5.61} \\ \hline \end{tabular} \end{center} \end{table} \subsection{Simulated flight tests with real-time planning} Compared to the counterpart that follows the original JPS path directly on the 2D map, our proposed framework is shown to shorten the actual trajectory substantially with limited additional time cost. Another test is required to compare the final trajectory length with the 3D global shortest path to further validate the framework's performance. However, the globally optimal path can only be obtained after the map is entirely constructed, so we cannot obtain it while exploring the environment. Because the 3D map of the environment is represented by the point cloud, no graph-search-based algorithm can be applied to obtain the 3D optimal path length.
We adopt the asymptotically optimal method RRT* to generate the globally optimal path using a large amount of offline iteration computing. For comparison, the same simulation configuration as in the former flight tests is used in this test. We first fly the drone manually with the mapping kit to build up the global 3D map for the simulation world. Then, the RRT* method is applied 5 times on the map with each group of starting and goal points. The shortest path of the 5 runs is considered the globally optimal path. The initial parameters of RRT* are generated randomly so that the repetition can avoid local optima. The iteration terminates when the relative error of the path length is smaller than $10^{-3}$ in the last 10 iterations. Table \ref{table_1} describes the parameter settings of the framework in the simulation test. ``pcl'' is short for the point cloud. All the length results of the 10 flight tests are shown in TABLE \ref{tab_4}. The mean of the actual trajectory length $Len_{3D}$ increases by approximately $12.8 \%$ compared to the mean of the globally optimal path length $Len_{opt}$. $t_{mp}$ is the average step time cost of MP in this flight. $t_{rrt*}$ is the total time cost for RRT* to find the shortest path in the 5 runs. $t_{rrt*}$ only includes the time cost of the runs before the shortest path is found. Fig. \ref{fig803} illustrates the detailed visualized data of the flight test corresponding to the second test in TABLE \ref{tab_4} (the bold part). Fig. \ref{fig803}(a) shows the entire map of the simulation world in Fig. \ref{fig802}. The coordinates of the starting and goal points of the flight test in Fig. \ref{fig803}(b) are $(11.4,14.4,0.8)$ and $(-12.0,-10.0,1.2)$, respectively. The globally optimal path length for this flight is 33.252 m, and the actual flight trajectory length is 36.057 m. We see that the real trajectory differs noticeably from the globally optimal path.
Nevertheless, the path length gap is not large, which is similar to the numerical test results on the 2D map. The trajectory length comparison between our proposed framework and the state-of-the-art algorithms is shown in TABLE \ref{tab_5}. \cite{chen2020computationally} is our former work for the trajectory planner. $\eta_{3D}$ indicates the mean percentage increase of the 3D path length compared to the optimal 3D path. Except for \cite{chen2020computationally}, whose $\eta_{3D}$ is evaluated from the flight test data in the same simulation world, the other data are obtained from the experimental results in the references. Among the works listed in TABLE \ref{tab_5}, our proposed framework flies the shortest trajectory. \begin{table}[h] \caption{Parameters for the framework} \label{table_1} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Parameter& Value &Parameter & Value\\ \hline $ l_{ms}$ & 20 m & $h_{ms}$ & 6 m\\ \hline $\alpha_{res}$ & $10^{\circ}$ &$i,j$ &100\\ \hline $m,n$ &50 &$k$ &3\\ \hline $h$ & 2 &$r_{safe}$ &0.5 m\\ \hline $\overline{p_{n}w_{pn}}$& 0.3 m & $n_{use}$ & 70\\ \hline voxel size & $0.2$ m & pcl frequency & 30 Hz\\ \hline depth resolution & 640*360 & $\kappa_{1},\ \kappa_{2}$ & 4.2, 1.5\\ \hline $\eta_{1},\eta_{2}$ &40, 10 &$r_{dec}$ & 3 m\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{3D path length comparison with the global optimal} \label{tab_4} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $Len_{3D}(m)$ & $Len_{opt}(m)$ &$t_{mp} (s)$ & $t_{rrt*}(s)$\\ \hline \textcolor{ForestGreen}{41.385} &36.761 &\textcolor{ForestGreen}{0.073} &6.294 \\ \hline \textcolor{ForestGreen}{\textbf{36.057}} &\textbf{33.252} &\textcolor{ForestGreen}{\textbf{0.067}}&\textbf{6.091}\\ \hline \textcolor{ForestGreen}{32.249} &26.825 &\textcolor{ForestGreen}{0.068} &4.505\\ \hline \textcolor{ForestGreen}{30.674} &27.692 &\textcolor{ForestGreen}{0.072} &3.974\\ \hline \textcolor{ForestGreen}{38.668} &32.707
&\textcolor{ForestGreen}{0.071} &5.340\\ \hline \textcolor{ForestGreen}{34.916} &31.832 &\textcolor{ForestGreen}{0.069} &6.110\\ \hline \textcolor{ForestGreen}{38.269} &33.403 &\textcolor{ForestGreen}{0.074} &4.904\\ \hline \textcolor{ForestGreen}{39.475} &36.412 &\textcolor{ForestGreen}{0.073} &3.841\\ \hline \textcolor{ForestGreen}{33.785} &31.353 &\textcolor{ForestGreen}{0.068} &5.916\\ \hline \textcolor{ForestGreen}{40.879} &36.044 &\textcolor{ForestGreen}{0.070} &6.554\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{3D path length comparison with the state-of-the-art algorithms} \label{tab_5} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline work&$\eta_{3D}$ & work&$\eta_{3D}$& work&$\eta_{3D}$\\ \hline \textcolor{ForestGreen}{\textbf{Ours}} & \textcolor{ForestGreen}{$\textbf{12.8\%}$} &\cite{tordesillas2019real} & $15.6\%$ &\cite{oleynikova2018safe} & $29.7\%$\\ \hline \cite{bircher2016receding}& $49.6\%$ &\cite{chen2020computationally} & $29.5\%$& \cite{zhou2019robust}& $34.5\%$\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[] \centering \subfigure[]{ \includegraphics[width=0.48\textwidth, height=4.8cm]{figs/f8006.png}} \hfill \centering \subfigure[]{ \includegraphics[width=0.47\textwidth]{figs/f8007.png}} \hfill \centering \subfigure[]{ \includegraphics[width=0.49\textwidth]{figs/f8005.png}} \caption{(a): The entire 3D occupancy grid map. (b): The map after one flight test using our proposed framework. The gradient color curve represents the final trajectory. (c): The comparison of the globally optimal path found by RRT* and the actual trajectory from the flight test data. The 3D map is shown in the form of a point cloud. (b) and (c) are for the same flight test from different views. } \label{fig803} \end{figure} \subsection{Hardware flight tests} The video for the hardware test has been uploaded online. In the test environment, static and dynamic obstacles are present to validate the fast reaction of the PCP. 
\subsubsection{Introduction of the hardware platform}
We conduct the hardware tests on a self-assembled quadrotor. The Intel RealSense depth camera D435i is installed under the frame as the only perception sensor. The drone frame is QAV250, with a diagonal length of 25 cm. The Pixracer autopilot with firmware version V1.10.1 is adopted as the underlying flight controller. A LattePanda Alpha 800S with an Intel Core M3-8100Y dual-core, 1.1-3.4 GHz processor is installed as the onboard computer, on which all the following timing breakdowns are measured. For the hardware test, we fly the drone in multiple scenarios with our self-developed visual-inertial odometry (VIO) kit\footnote{https://github.com/HKPolyU-UAV/FLVIS} to demonstrate the practicality of our proposed framework. The point cloud filter, the VIO kit, and the mapping kit are implemented in C++, while the other modules run as Python scripts. All the parameters used in the hardware test are identical to those in the simulation flight tests, as shown in TABLE \ref{table_1}. The yaw angle of the drone is controlled to keep the camera always heading toward the local goal $g'_{l}$.
\subsubsection{Indoor tests}
Fig. \ref{fig8} shows our indoor flight test environment. First, the drone takes off from point 1 and flies through the cluttered static obstacles (shown in the picture), following the sequence 2-3-4. The environment is unknown to the drone before it takes off. We can see from the video that the drone avoids the obstacles agilely. Meanwhile, the drone posture is stable and the final flight trajectory is smooth. After the drone reaches point 3 and starts flying toward point 4, a person hiding behind the boxes suddenly appears close in front of the drone. In the video, the drone quickly maneuvers after the person appears, and the avoidance is successful. The map is updated afterward, and the drone continues to follow the path from the MP.
The constructed voxel map and flight trajectory after this flight are shown in Fig. \ref{fig81}. The position, attitude, and velocity curves of the drone can be found in Fig. \ref{fig9}. The attitude angles react immediately to the appearance of the person, and then the velocity changes considerably. Subsequently, the drone decelerates and flies to the left side to pass the person. The maximum speed is $1.23$ m/s at $74.40$ s. The drone does not approach any obstacles before this time, so the speed continues to increase. Because $\|\overrightarrow{p_{n}w_{pn}}\|_{2}$ is assigned to be always greater than $\|\overrightarrow{p_{n}p_{n+1}}\|_{2}$, in the motion optimization problem (7), minimizing the cost $\eta_{1} \| \overrightarrow{w_{pn}p_{n+1}}\|_{2}$ leads to positive acceleration. The drone decelerates when obstacles are near it, because of the acceleration cost and the endpoint cost in the objective function.
\subsubsection{Outdoor tests}
In addition, the proposed framework is tested in diverse outdoor scenarios. Fig. \ref{fig812} shows the environments of two experiments, and the flight tests in other environments are included in the video. Fig. \ref{fig812}(a) is entirely static, with several obstacles standing on a slope and dense bushes and trees. This environment is challenging, requiring precise 3D trajectory planning and motion control. Fig. \ref{fig812}(b) is a larger environment than those in the tests above. In addition to the complex static obstacles, five moving people in the field continuously divert the drone from its originally planned path and validate its reaction performance. The video demonstrates that the drone performs agile and safe flights in various test environments, confirming the practicality and flight efficiency of our proposed framework.
\begin{figure}[]
\centering
\includegraphics[width=0.49\textwidth]{figs/f8008.png}
\caption{Indoor environment for the hardware flight tests.
In the video, a flight test is first conducted with static obstacles, and then the table closest to the moving person's path is removed for the test with an intruding person.}
\label{fig8}
\end{figure}
\begin{figure}[]
\centering
\includegraphics[width=0.49\textwidth]{figs/f8012.png}
\caption{Explored map and the drone trajectory after the hardware flight test in the static environment.}
\label{fig81}
\end{figure}
\begin{figure}[]
\centering
\subfigure[]{
\includegraphics[width=0.48\textwidth]{figs/f8011.png}}
\hfill
\centering
\subfigure[]{
\includegraphics[width=0.48\textwidth]{figs/f8009.png}}
\hfill
\centering
\subfigure[]{
\includegraphics[width=0.48\textwidth]{figs/f8010.png}}
\caption{(a)-(c): Curves of the three-axis coordinate positions, flight velocities, and attitude angles. The framework begins to work at time 0 s, and the data shown in the figures start at 70.38 s and end at 94.31 s, corresponding to the flight from point 3 to point 4. The moving person enters the depth camera's FOV at 76.09 s (marked with the vertical dashed lines).}
\vspace{-0.4cm}
\label{fig9}
\end{figure}
\begin{figure}[]
\centering
\subfigure[]{
\includegraphics[width=0.233\textwidth]{figs/exp-polyu.png}}
\hfill
\centering
\subfigure[]{
\includegraphics[width=0.233\textwidth]{figs/exp-park.png}}
\caption{Two of the outdoor flight test environments. (a) is located on a campus and (b) is at the sports corner of a park.}
\label{fig812}
\end{figure}
Finally, the average time cost of each part of the MP and PCP for the hardware tests is measured and analyzed in Fig. \ref{fig911} to show the computational efficiency. In Fig. \ref{fig911}(a), the average time cost and the percentage of each submodule are provided on a pie chart. In Fig. \ref{fig911}(b), the relationship between the time cost of each procedure in the PCP and the size of $Pcl_{use}$ is illustrated, with the average time cost shown on the right side.
For the MP, most of the time cost ($63\%$) is spent on path planning and optimization on the 2D map. The path search with the 3D point cloud is computationally inexpensive. The average total time cost of each MP step is 0.078 s, and the loop frequency is approximately 12 Hz. These results are slower than the offline test results because the computing resources are also occupied by the other parts of the framework (VIO, mapping kit, point cloud filter, and the PCP). For the PCP, only the time cost of the waypoint searching depends on the $Pcl_{use}$ size, because the number of points determines the number of iterations in the collision check. The average time cost of a PCP step is 16.2 ms, of which searching for $w_{pn}$ is the most time-consuming part.
\begin{figure}[]
\centering
\subfigure[]{
\includegraphics[width=0.43\textwidth]{figs/f8014.png}}
\hfill
\centering
\subfigure[]{
\includegraphics[width=0.49\textwidth]{figs/f8013.png}}
\caption{(a): Average time cost and the proportion of each submodule of MP. (b): The time cost versus $Pcl_{use}$ size curves of each part of the PCP. The average time cost is marked with a dashed line, and the values are on the right side.}
\label{fig911}
\end{figure}
Moreover, the time cost is compared with those of the state-of-the-art algorithms in TABLE \ref{table_2}. Because our proposed method is composed of two planners running asynchronously, no single value represents the framework execution time. Thus, the average step time costs of the MP and PCP are listed for the comparison. Notably, the time costs of the related works are measured on different hardware platforms with different program code types. ``MSCF'' is the abbreviation for the maximum single-core frequency of the hardware platform processor. Although TABLE \ref{table_2} cannot be used for an absolute performance comparison, the trajectory replanning time cost of our proposed method (PCP) compares favorably with those of the state-of-the-art algorithms.
\begin{table}[thpb]
\setlength{\belowcaptionskip}{0.5cm}
\caption{COMPARISON WITH STATE-OF-THE-ART ALGORITHMS.}
\label{table_2}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline Works & Time cost (ms) & MSCF (GHz)\\
\hline MP & 78 & 3.4 \\
\hline \textcolor{ForestGreen}{\textbf{PCP}} & \textcolor{ForestGreen}{\textbf{16.2}} & \textcolor{ForestGreen}{\textbf{3.4}}\\
\hline \cite{zhou2019robust} & $>$100 & 3.0\\
\hline \cite{liu2016high} &$>$160 & 3.4\\
\hline \cite{burri2015real} & $>$40 & N/A \\
\hline \cite{chen2020computationally} & 19 & 4.6\\
\hline \cite{bircher2016receding}& 199 & 3.1 \\
\hline \cite{gao2018online} &106 & 3.0\\
\hline
\end{tabular}
\end{center}
\vspace{-0.4cm}
\end{table}
\section{Conclusion and future work}
In this paper, a trajectory planning framework for UAVs with two parallel planners is introduced. The map planner tries to find the shortest possible path in limited computational time. The point cloud planner takes effect when the point cloud near the drone differs from the 3D map, to ensure safety. It reacts much faster than the path planning on the map. The test results verify that the techniques proposed in this paper can reduce the computing time cost, with the performance basically unchanged or even improved compared to that of the original method \cite{chen2020computationally}. The real-time flight trajectory length outperforms those of the state-of-the-art algorithms, and the reaction time of the PCP is also excellent. The entire framework is tested extensively in simulation and hardware experiments, demonstrating excellent rapid response capabilities and flight safety. However, the test environments do not cover all the UAV application scenarios. Moreover, the UAV flight speed in our tests is not sufficiently high. In the future, the framework will be tested in more challenging environments with a higher vehicle speed than in the current study.
\section*{Acknowledgment}
The authors would like to thank Mr.
Ching-wei Chang for his kind help with debugging the hardware equipment and Miss Yuyang Hu for her assistance in the hardware tests.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{ieeetr}
Dorset born, Laura graduated from Oxford Media & Business School in 2017. She completed her PA Diploma with a Distinction of Honours. Laura couldn't wait to move to London, where she initially spent a couple of months at a PR agency working for Mark Borkowski, the founder and head of Borkowski PR. Vicky and Laura wouldn't have crossed paths if it wasn't for her business partner, Polly Hadden-Paton (You Need A PA), who made this perfect introduction. From managing client bookings and the staff rota to seamlessly running Vicky's diary (and life), Laura is truly the lynchpin of You Need A Vicky and a constant ray of sunshine for clients and team members alike.

Senior Professional Organiser
Lindsey is originally from Yorkshire and brings to the team a happy smile, high energy, practical thinking and lots of northern friendliness! Her background includes a BA Honours in Musical Theatre, which she achieved at Arts Educational Schools London, and she then went on to perform in the West End. It was clear from the first moment Vicky and Lindsey met that she was going to be a fantastic professional organiser. Her positive attitude, professional approach to her work and unbelievable eye for detail are part of the skills we love at You Need A Vicky.

Christina has lived most of her life in Milton Keynes. Her background has been in retail, recruitment and most recently property before she decided on a full-time career as a professional organiser. Vicky and Christina first met when Christina attended one of Vicky's professional organising training workshops and she realised her dream job could be achieved! Christina is passionate about enhancing people's lives through organisation; her detailed approach to her work makes her a fantastic professional organiser.

Tahirah
Tahirah grew up in London and has a wide range of skills behind her. She achieved an NVQ Diploma in Personal Training, Exercise & Fitness Instructing and also has a BA Hons degree in Contemporary Media Practice. Tahirah's career has included acting, personal training and even conservation. She's a natural organiser and was really excited to learn that she could turn this into a job. After speaking to Vicky, it all became clear!

Professional Organiser
Hannah's work history includes 6 years in the fashion and styling industry and working at John Lewis. After attending one of Vicky's 'How to Become A Professional Organiser' events, Hannah started her training with Vicky straight away and it was immediately clear she was a natural. Hannah is a highly skilled organiser who is passionate about her job.

Poppy grew up in London and then moved to Brighton, where she studied psychology, achieving a Bachelor of Science degree. After organising for friends and family for many years and gaining vital experience when working as a nanny, Poppy has no problem at all transforming clients' spaces from disorganised to organised wonderfulness. Poppy's interview process overran significantly because of their mutual love of all that is practical, efficient and space-saving.
The Cape teal (Anas capensis) is a species of anseriform bird in the family Anatidae, widely distributed across the wetlands of sub-Saharan Africa. It measures 40 to 50 cm in length. It is an essentially non-migratory species, although it moves opportunistically with the rains. Like many southern ducks, the sexes are similar. It is very pale and mainly greyish, with a browner back and pink on the bill (juveniles lack the pink). Its numbers do not exceed two thousand individuals. They eat aquatic plants and small creatures obtained by dabbling in the water. The nest is on the ground under vegetation, near water. It is generally very quiet, except during the breeding season. The breeding male gives a clear whistle, while the female has a plaintive "quack". This is one of the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies.

References

External links

Capensis
Birds of Africa
Animals described in 1789
Taxa described by Johann Friedrich Gmelin
After the FRMF, Toulouse FC defends Zakaria Aboukhlal

After the Royal Moroccan Football Federation (FRMF), which reacted strongly to the accusations of glorifying Salafism made against the Moroccan international Zakaria Aboukhlal, it is the turn of the French club Toulouse FC to take up the defense of its player.

On its Twitter account, Toulouse FC condemned the allegations of an Arabic-language site which accused the Atlas Lion of taking advantage of his position in the national team to "disseminate his religious ideas indoctrinated by the European Salafist sheikhs".

"The Toulouse Football Club condemns the accusations made by a site with regard to our player Zakaria Aboukhlal and joins forces with the Moroccan Football Federation to ensure its full support and confidence in our player," tweeted TFC (@ToulouseFC) on December 25, 2022.

And to add: "The Club condemns the accusations made by this site, which are false, unfounded and degrading, which damage the image of our player, and reserves the right to use all the remedies available to defend the image and the integrity of Zakaria."

Note that the FRMF published a press release on Sunday condemning these accusations, and ensuring that it reserves the right to "use all means of recourse in order to protect the members of the national team from all false accusations affecting their personal lives or their behavior while exercising national duty", especially through the courts.

The Federation also wished to recall that "Zakaria Aboukhlal adopted, like his teammates, an exemplary behavior which led to honorable results being signed by the National Eleven in this planetary meeting".

The National Press Council (CNP) took the same line, considering that the slanderous remarks against Zakaria Aboukhlal "can in no way be considered as self-respecting journalistic work". The CNP, which decided to refer this dossier to the Committee on Professional Ethics and Disciplinary Questions, in accordance with its charter and the law governing its bodies, stated that "the focus of the press on any person because of their ethnic or religious affiliation is an unacceptable stigmatization which is moreover rejected by all the charters of ethics of the press, including the Charter of professional ethics approved at the national level".
Studce is a village in the Nymburk District, part of the municipality of Loučeň. It lies 2 km east of the market town of Loučeň, toward Mcely. Extensive forests surround the settlement from the west, north and east. Its position on a slope offers a view into the Elbe lowland. In 2006, 68 inhabitants lived here.

History

The first written mention of the village dates from 1384. The site has long been settled: a hillfort of the Lusatian culture is documented on the promontory above the village. After the Thirty Years' War, ten holdings remained here; the other four were deserted. In 1881 a volunteer fire brigade was founded here. In 1896 roads were built, among others to Mcely and Jikev, and later to Loučeň. Amateur theatre used to be performed in the Rieger family's restaurant. The village had a municipal constable who also served as night watchman. There were several inns here, later closed. In 1912, improvements to the village were completed, which also included modifications to the pond dam.

Settlement and development

Records of the number of permanent inhabitants show that it has always been a small village: in 1787 there were 169 people in 33 houses, in 1930 there were 252 people and 62 houses, and today 68 people live here in 68 houses.

Sights

The sights include a sandstone cross from 1875 and a chapel dedicated to Cyril and Methodius from 1926, both on the village green, and a sandstone statue from 1886 dedicated to the Holy Trinity, located below the forest on the eastern edge of the village. There is a sundial on house No. 32.

Gallery

References

External links

Villages in Nymburk District
Loučeň
Settlements in the Jizera Table
Settlements in the Central Elbe Table
package kinesumer

import (
	"errors"
	"fmt"
	"math/rand"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/kinesis"
	"github.com/remind101/kinesumer/checkpointers/empty"
	k "github.com/remind101/kinesumer/interface"
	"github.com/remind101/kinesumer/provisioners/empty"
)

const (
	// According to the Kinesis limits documentation:
	//
	//   Each shard can support up to 5 transactions per second for
	//   reads, up to a maximum total data read rate of 2 MB per second.
	//
	// See http://docs.aws.amazon.com/streams/latest/dev/service-sizes-and-limits.html
	DefaultGetRecordsThrottle = 200 * time.Millisecond
)

type Kinesumer struct {
	Kinesis      k.Kinesis
	Checkpointer k.Checkpointer
	Provisioner  k.Provisioner
	Stream       string
	Options      *Options

	records  chan k.Record
	stop     chan Unit
	stopped  chan Unit
	nRunning int
	rand     *rand.Rand
}

type Options struct {
	ListStreamsLimit    int64
	DescribeStreamLimit int64
	GetRecordsLimit     int64

	// Determines how frequently GetRecords is throttled. The zero value is
	// DefaultGetRecordsThrottle.
	GetRecordsThrottle time.Duration

	// Amount of time to poll for records if consumer lag is minimal
	PollTime int

	MaxShardWorkers int
	ErrHandler      func(k.Error)

	DefaultIteratorType string

	// How long to try and get shard iterator
	ShardAcquisitionTimeout time.Duration

	// ShardIteratorTimestamp is used when DefaultIteratorType is "AT_TIMESTAMP"
	ShardIteratorTimestamp time.Time
}

var DefaultOptions = Options{
	// These values are the hard limits set by Amazon
	ListStreamsLimit:        1000,
	DescribeStreamLimit:     10000,
	GetRecordsLimit:         10000,
	GetRecordsThrottle:      DefaultGetRecordsThrottle,
	PollTime:                2000,
	MaxShardWorkers:         50,
	ErrHandler:              DefaultErrHandler,
	DefaultIteratorType:     "LATEST",
	ShardAcquisitionTimeout: 90 * time.Second,
}

func NewDefault(stream string, duration time.Duration) (*Kinesumer, error) {
	return New(
		kinesis.New(session.New()),
		nil,
		nil,
		nil,
		stream,
		nil,
		duration,
	)
}

func New(kinesis k.Kinesis, checkpointer k.Checkpointer, provisioner k.Provisioner,
	randSource rand.Source, stream string, opt *Options, duration time.Duration) (*Kinesumer, error) {
	if kinesis == nil {
		return nil, NewError(ECrit, "Kinesis object must not be nil", nil)
	}
	if checkpointer == nil {
		checkpointer = emptycheckpointer.Checkpointer{}
	}
	if provisioner == nil {
		provisioner = emptyprovisioner.Provisioner{}
	}
	if randSource == nil {
		randSource = rand.NewSource(time.Now().UnixNano())
	}
	if len(stream) == 0 {
		return nil, NewError(ECrit, "Stream name can't be empty", nil)
	}
	if opt == nil {
		tmp := DefaultOptions
		opt = &tmp
	}
	if opt.ErrHandler == nil {
		opt.ErrHandler = DefaultErrHandler
	}
	if duration != 0 {
		opt.DefaultIteratorType = "AT_TIMESTAMP"
		opt.ShardIteratorTimestamp = time.Now().Add(duration * -1)
	}
	return &Kinesumer{
		Kinesis:      kinesis,
		Checkpointer: checkpointer,
		Provisioner:  provisioner,
		Stream:       stream,
		Options:      opt,
		records:      make(chan k.Record, opt.GetRecordsLimit*2+10),
		rand:         rand.New(randSource),
	}, nil
}

func (kin *Kinesumer) GetStreams() (streams []string, err error) {
	streams = make([]string, 0)
	err = kin.Kinesis.ListStreamsPages(&kinesis.ListStreamsInput{
		Limit: &kin.Options.ListStreamsLimit,
	}, func(sts *kinesis.ListStreamsOutput, _ bool) bool {
		streams = append(streams, aws.StringValueSlice(sts.StreamNames)...)
		return true
	})
	return
}

func (kin *Kinesumer) StreamExists() (found bool, err error) {
	streams, err := kin.GetStreams()
	if err != nil {
		return
	}
	for _, stream := range streams {
		if stream == kin.Stream {
			return true, nil
		}
	}
	return
}

func (kin *Kinesumer) GetShards() (shards []*kinesis.Shard, err error) {
	for {
		retry := false
		shards = make([]*kinesis.Shard, 0)
		err = kin.Kinesis.DescribeStreamPages(&kinesis.DescribeStreamInput{
			Limit:      &kin.Options.DescribeStreamLimit,
			StreamName: &kin.Stream,
		}, func(desc *kinesis.DescribeStreamOutput, _ bool) bool {
			if desc == nil || desc.StreamDescription == nil {
				err = errors.New("Stream could not be described")
				return false
			}
			switch aws.StringValue(desc.StreamDescription.StreamStatus) {
			case "CREATING":
				retry = true
				return false
			case "DELETING":
				err = errors.New("Stream is being deleted")
				return false
			}
			shards = append(shards, desc.StreamDescription.Shards...)
			return true
		})
		if retry {
			time.Sleep(time.Second)
		} else {
			return
		}
	}
}

func (kin *Kinesumer) LaunchShardWorker(shards []*kinesis.Shard) (int, *ShardWorker, error) {
	perm := kin.rand.Perm(len(shards))
	for _, j := range perm {
		err := kin.Provisioner.TryAcquire(aws.StringValue(shards[j].ShardId))
		if err == nil {
			worker := &ShardWorker{
				kinesis:                kin.Kinesis,
				shard:                  shards[j],
				checkpointer:           kin.Checkpointer,
				stream:                 kin.Stream,
				pollTime:               kin.Options.PollTime,
				stop:                   kin.stop,
				stopped:                kin.stopped,
				c:                      kin.records,
				provisioner:            kin.Provisioner,
				errHandler:             kin.Options.ErrHandler,
				defaultIteratorType:    kin.Options.DefaultIteratorType,
				shardIteratorTimestamp: kin.Options.ShardIteratorTimestamp,
				getRecordsThrottle:     getRecordsThrottle(kin.Options.GetRecordsThrottle),
				GetRecordsLimit:        kin.Options.GetRecordsLimit,
			}
			kin.nRunning++
			go worker.RunWorker()
			return j, worker, nil
		}
	}
	return 0, nil, errors.New("No unlocked keys")
}

func (kin *Kinesumer) Begin() (int, error) {
	shards, err := kin.GetShards()
	if err != nil {
		return 0, err
	}
	err = kin.Checkpointer.Begin()
	if err != nil {
		return 0, err
	}
	n := kin.Options.MaxShardWorkers
	if n <= 0 || len(shards) < n {
		n = len(shards)
	}
	tryTime := kin.Options.ShardAcquisitionTimeout
	if tryTime < 2*kin.Provisioner.TTL()+time.Second {
		tryTime = 2*kin.Provisioner.TTL() + time.Second
	}
	start := time.Now()
	kin.stop = make(chan Unit, n)
	kin.stopped = make(chan Unit, n)
	workers := make([]*ShardWorker, 0)
	for kin.nRunning < n && len(shards) > 0 && time.Now().Sub(start) < tryTime {
		for i := kin.nRunning; i < n; i++ {
			j, worker, err := kin.LaunchShardWorker(shards)
			if err != nil {
				kin.Options.ErrHandler(NewError(EWarn, "Could not start shard worker", err))
			} else {
				workers = append(workers, worker)
				shards = append(shards[:j], shards[j+1:]...)
			}
		}
		time.Sleep(time.Duration(500+rand.Intn(1500)) * time.Millisecond)
	}
	kin.Options.ErrHandler(NewError(EInfo, fmt.Sprintf("%v/%v workers started", kin.nRunning, n), nil))
	if len(workers) < 1 {
		return len(workers), NewError(EWarn, "0 shard workers started", nil)
	}
	return len(workers), nil
}

func (kin *Kinesumer) End() {
	for kin.nRunning > 0 {
		select {
		case <-kin.stopped:
			kin.nRunning--
		case kin.stop <- Unit{}:
		}
	}
	kin.Checkpointer.End()
}

func (kin *Kinesumer) Records() <-chan k.Record {
	return kin.records
}

// getRecordsThrottle returns a channel that will tick every time d has elapsed.
// If d is 0, DefaultGetRecordsThrottle will be used.
func getRecordsThrottle(d time.Duration) <-chan time.Time {
	if d == 0 {
		d = DefaultGetRecordsThrottle
	}
	return time.NewTicker(d).C
}
Majoros István may refer to:
 Majoros István (1900–1985), writer, literary translator and film critic
 Majoros István (1946–2015), dancer, ballet artist and choreographer
 Majoros István (born 1949), historian and university professor
 Majoros István (born 1974), Olympic champion wrestler
Obamacare's Biggest Threat Isn't the Election. It's a Supreme Court Case Being Discussed Today.

Control of the U.S. Senate for the next two years will depend on what voters decide when they cast their ballots next Tuesday. But the success of Obamacare may hinge more on what the justices of the Supreme Court decide when they meet in private on Friday morning. It's their regularly scheduled conference—their chance to discuss which cases to hear, which to turn away, and which to ponder for a while longer. Among the cases they will consider this time is King v. Burwell. If they make a decision one way or another, to take the case or to reject it, they will announce it either Monday or the following Monday. But it's possible they will decide to ponder the case for a while longer, in which case they will "re-list" it. (My thanks to Nicholas Bagley and some other friends at the University of Michigan Law School for explaining the possibilities to me.)

King is one of four similar cases now moving through the judiciary system. The lawsuits claim that the federal government can't legally provide tax credits for buying insurance in states where officials have decided not to run their own marketplaces, leaving that job to the feds. Why? The lawsuits suggest this is what the architects of the law intended. (If you want to know how supporters of the lawsuits come to that conclusion, you can read this paper by Michael Cannon and Jonathan Adler.)

At the moment, only 16 states plus the District of Columbia run their own marketplaces. The other 34 don't, which means the people now getting tax credits in those states, more than 4 million of them, would lose that money if the lawsuits succeed. That'd be a huge deal. The tax credits, worth hundreds or even thousands of dollars a year, are what make it possible for these people to afford coverage in the first place.
Take away those subsidies and many become uninsured and the system in those states more or less collapses—an outcome that supporters of the lawsuits have said openly they desire.

King came through the Fourth Circuit Court of Appeals, based in Richmond, Virginia, where a three-judge panel ruled unanimously to reject the lawsuit. The plaintiffs in the case have appealed and it's their petition the justices are pondering on Friday. Initially, the plaintiffs could cite the fact that another appellate court considering a similar case, the D.C. Circuit in Halbig v. Burwell, had ruled the other way in a two-to-one decision. Splits between Circuit Courts have traditionally been grounds for the Supreme Court to hear a case. But the full D.C. Circuit subsequently announced that it was suspending that ruling, pending a new, en banc hearing before the entire panel of active judges. That means there's no split anymore.

The justices could still decide to take the case, though. That may explain the timing of an op-ed that appeared in the Washington Post on Thursday. The article made a now-familiar argument—that Congress never intended to deprive some people of subsidies, just because they lived in states where officials didn't want to run marketplaces. But the article was noteworthy for who wrote it. The byline had five members of the U.S. Congress—and not just any old members. They were the chairmen of the five committees, two in the Senate and three in the House, that actually wrote the bill. Particularly when it comes to complex legislation like the Affordable Care Act, plenty of lawmakers don't have a sophisticated understanding of what they are considering. But the chairmen and ranking members of the key committees certainly do. And in the op-ed, they made very clear what their intentions had been:

None of us contemplated that the bill as enacted could be misconstrued to limit financial help only to people in states opting to directly run health insurance marketplaces.
In fact, as chairs of the three House committees that collectively authored the health-care reform legislation (Ways and Means, Energy and Commerce, and Education and the Workforce), three of us issued a joint fact sheet in March 2010 reflecting our intention that financial help would be available to consumers in the state marketplaces, whether the state were to run it directly or via the federal government.

This was actually the first time I've seen that House fact sheet. It's a strong piece of evidence, because it's contemporaneous. But the issue shouldn't really be in doubt anymore. There are other contemporaneous documents, after all. Pretty much everybody who worked directly on writing the legislation has said the same thing: They understood the law to be providing tax credits to people in every state, full stop. That's also the recollection of the journalists who covered the debate most closely, and it's consistent with a ton of circumstantial evidence. While not all of that evidence is part of the official judicial record, some is—enough, I would think, to make the case pretty clear-cut. But it takes only four votes to grant a hearing and, as we learned in the individual mandate cases, there are at least four justices very hostile to the law. —Jonathan Cohn

News from Thursday

EBOLA: In Maine, Doctors Without Borders volunteer Kaci Hickox continues to fight with authorities over her "voluntary quarantine," but she did finally get a pizza she ordered. (Nina Golgowski, New York Daily News)

LGBT: Apple CEO Tim Cook wrote an essay in which he publicly acknowledged his sexuality for the first time and said he is "proud to be gay." (Bloomberg Businessweek)

CLIMATE: After founder of the Weather Channel John Coleman (who is no longer affiliated with the company) said on Fox News that he doesn't believe in climate change, the channel issued a statement confirming that climate change is, in fact, real.
(Matt Schiavenza, Atlantic) Get ready for overtime: Nate Silver says the numbers suggest both Georgia and Louisana will need runoffs to settle their midterm races, which probably means control of the Senate won't officially be known for a few more weeks. (FiveThirtyEight) Everyone's a little bit partyist...right? Jon Chait defends his aversion to the idea of his children marrying Republicans. (New York) Security in the digital age. Americans are more afraid of credit card hacking than they are of getting mugged, burglarized or much much any other crime, writes Anand Katakam. (Vox) Keep your friends close… Ted Cruz encourages Time readers to punish Obama's party at the voting booth next week after the publication of Jeffrey Goldberg's article that revealed some lost love between the U.S. and Israeli governments. Stories we'll be watching The final days of the midterm campaign At QED Rebecca Leber zooms in on one Michigan congressional race that is being flooded with outside funding. Alec MacGillis looks at a larger trend: the GOP seems to be back in bed with Wall Street. Armand Sprecher defends his colleague Dr. Craig Spencer, who was recently diagnosed with Ebola after treating patients in Africa. Jonathan Cohn remembers former Boston Mayor Thomas Menino, who died on Thursday, as a great leader and a great liberal. QEDaily, Politics, QED, SCOTUS, ACA, King v. Burwell, Obamacare, Affordable Care Act, Halbig v. Burwell, Michael Cannon, subsidies, Exchanges, Henry Waxman, George Miller, Tom Harkin
\section{Introduction}

Grover proposed a quantum algorithm for solving large database search problems in Refs. \cite{grover97,grover01}. Grover's search algorithm allows one to search for an unknown marked item in an unstructured database of $N$ items by accessing the database a minimum number of times. From a classical standpoint, it is necessary to test $N/2$ items, on average, before finding the correct item. With Grover's algorithm, however, the same task can be completed successfully with a complexity of order $\sqrt{N}$, that is, with a quadratic speedup. Grover's algorithm was presented in terms of a discrete sequence of unitary logic gates (digital quantum computation). Specifically, the transition probability from the source state $\left\vert \psi_{s}\right\rangle $ to the target state $\left\vert \psi_{w}\right\rangle $ after the $k$-times sequential application of the so-called Grover quantum search iterate $G$ is given by
\begin{equation}
\mathcal{P}_{\text{Grover}}\left( k\text{, }N\right) \overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|G^{k}|\psi_{s}\right\rangle \right\vert ^{2}=\sin^{2}\left[ \left( 2k+1\right) \tan^{-1}\left( \frac{1}{\sqrt{N-1}}\right) \right] \text{.} \label{pgrover}
\end{equation}
In the limit of $N$ approaching infinity, $\mathcal{P}_{\text{Grover}}$ in Eq. (\ref{pgrover}) approaches one if $k=O\left( \sqrt{N}\right) $. We point out that the big $O$-notation $f\left( x\right) =O\left( g\left( x\right) \right) $ means that there exist \emph{real} constants $c$ and $x_{0}$ such that $\left\vert f\left( x\right) \right\vert \leq c\left\vert g\left( x\right) \right\vert $ for any $x\geq x_{0}$.
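The discrete-time success probability in Eq. (\ref{pgrover}) is easy to explore numerically. The following Python sketch (helper names are ours, not part of the paper) evaluates $\mathcal{P}_{\text{Grover}}(k,N)$ and the near-optimal iteration count $k\approx(\pi/4)\sqrt{N}$ that yields the quadratic speedup:

```python
import math

def grover_success_probability(k: int, N: int) -> float:
    # P_Grover(k, N) = sin^2[(2k + 1) * arctan(1 / sqrt(N - 1))]
    theta = math.atan(1.0 / math.sqrt(N - 1))
    return math.sin((2 * k + 1) * theta) ** 2

# The optimum satisfies (2k + 1) * theta ~ pi/2, i.e. k ~ (pi/4) * sqrt(N)
# for large N, which is the quadratic speedup over the classical N/2 scaling.
N = 1_000_000
theta = math.atan(1.0 / math.sqrt(N - 1))
k_opt = round(math.pi / (4 * theta) - 0.5)
p_opt = grover_success_probability(k_opt, N)
```

For $N=4$ the angle is exactly $\pi/6$, so a single iteration already gives unit success probability.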
The temporal evolution of the state vector $\left\vert \psi\left( t\right) \right\rangle $ of a closed quantum system is characterized by the Schr\"{o}dinger equation
\begin{equation}
i\hslash\partial_{t}\left\vert \psi\left( t\right) \right\rangle =\mathcal{H}\left( t\right) \left\vert \psi\left( t\right) \right\rangle \text{,} \label{H1}
\end{equation}
where $\hslash\overset{\text{def}}{=}h/\left( 2\pi\right) $ is the reduced Planck constant, $i$ denotes the imaginary complex unit, and $\partial_{t}\overset{\text{def}}{=}\partial/\partial t$. The Hamiltonian $\mathcal{H}\left( t\right) $ in Eq. (\ref{H1}) encodes all relevant information about the time evolution of the quantum system. From a quantum computing standpoint, if the Hamiltonian $\mathcal{H}\left( t\right) $ is known and properly designed, the quantum mechanical motion is known and the initial state (source state, $\left\vert \psi_{s}\right\rangle $) at $t=0$ can potentially evolve to a given final state (target state, $\left\vert \psi_{w}\right\rangle $) at $t=T$. In particular, for any instant $0\leq t\leq T$, the probability $\mathcal{P}_{\left\vert \psi\left( t\right) \right\rangle \rightarrow\left\vert \psi_{w}\right\rangle }$ that the system transitions from the state $\left\vert \psi\left( t\right) \right\rangle $ to the state $\left\vert \psi_{w}\right\rangle $ under the working assumption of a constant Hamiltonian is given by
\begin{equation}
\mathcal{P}_{\left\vert \psi\left( t\right) \right\rangle \rightarrow\left\vert \psi_{w}\right\rangle }\overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|\psi\left( t\right) \right\rangle \right\vert ^{2}=\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle \right\vert ^{2}\text{.}
\end{equation}
The unitary operator $\mathcal{U}\left( t\right) \overset{\text{def}}{=}e^{-\frac{i}{\hslash}\mathcal{H}t}$ denotes the temporal evolution operator. Fig.
$1$ displays a graphical depiction of the digital (discrete time) and analog (continuous time) quantum search algorithms. Working in a continuous time quantum computing framework, Farhi and Gutmann proposed an analog version of Grover's algorithm in Ref. \cite{farhi98}, where the state of the quantum register evolves continuously in time under the action of a suitably chosen driving Hamiltonian (analog quantum computation). Specifically, the transition probability from the source state $\left\vert \psi_{s}\right\rangle $ to the target state $\left\vert \psi_{w}\right\rangle $ after the application of the unitary continuous time evolution operator $\mathcal{U}_{\text{FG}}\left( t\right) \overset{\text{def}}{=}e^{-\frac{i}{\hslash}\mathcal{H}_{\text{FG}}t}$ for a closed quantum system described by a constant Hamiltonian $\mathcal{H}_{\text{FG}}$ is given by
\begin{equation}
\mathcal{P}_{\text{Farhi-Gutmann}}\left( t\text{, }x\right) \overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}_{\text{FG}}t}|\psi_{s}\right\rangle \right\vert ^{2}=\sin^{2}\left( \frac{Ex}{\hslash}t\right) +x^{2}\cos^{2}\left( \frac{Ex}{\hslash}t\right) \text{,} \label{PFG}
\end{equation}
where $E$ is an energy-like positive and \emph{real} constant coefficient. We point out that $\mathcal{P}_{\text{Farhi-Gutmann}}$ in Eq. (\ref{PFG}) approaches one if $t$ approaches $h/(4Ex)$. For recent discussions on the transition from the digital to the analog quantum computational setting for Grover's algorithm, we refer to Refs. \cite{carlo1,carlo2,cafaro2017}. Ideally, one seeks to achieve unit success probability (that is, unit fidelity) in the shortest possible time in a quantum search problem. There are, however, both practical and foundational issues that can justify the exploration of alternative circumstances.
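As a quick numerical illustration of Eq. (\ref{PFG}) (in units where $\hslash=1$; the function names below are ours, not the paper's), one can check that $\mathcal{P}_{\text{Farhi-Gutmann}}$ reaches unity at $t^{\ast}=\pi\hslash/(2Ex)=h/(4Ex)$, so that for $x=1/\sqrt{N}$ the search time scales as $\sqrt{N}$:

```python
import math

HBAR = 1.0  # natural units for illustration

def fg_probability(t: float, x: float, E: float = 1.0) -> float:
    # P_FG(t, x) = sin^2(E x t / hbar) + x^2 cos^2(E x t / hbar)
    phase = E * x * t / HBAR
    return math.sin(phase) ** 2 + x ** 2 * math.cos(phase) ** 2

# With x = 1/sqrt(N), the time needed to reach unit probability grows as sqrt(N).
x = 1.0 / math.sqrt(100)
t_star = math.pi * HBAR / (2.0 * 1.0 * x)
```

At $t=0$ the probability is just the initial overlap squared, $x^{2}$, consistent with the formula.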
For instance, from a practical standpoint, one would like to terminate a quantum information processing task in the minimum possible time so as to mitigate the decoherent effects that can appear while controlling (by means of an external magnetic field, for instance) the dynamics of a source state driven towards a target state \cite{rabitz12,rabitz15,cappellaro18}. In addition, from a theoretical viewpoint, it is known that no quantum measurement can perfectly discriminate between two nonorthogonal pure states \cite{chefles00,croke09}. Moreover, it is equally well known that suitably engineered quantum measurements can enhance the transition probability between two pure states \cite{fritz10}. Therefore, minimizing the search time can be important from an experimental standpoint, while seeking at any cost a perfect overlap between the final state and the target state can be unnecessary from a purely foundational standpoint. Similar lines of reasoning have paved the way to the fascinating exploration of a possible tradeoff between fidelity and time optimal control of quantum unitary transformations in Ref. \cite{rabitz12}. In this paper, motivated by these issues and starting from the consideration of a family of multi-parameter generalized quantum search Hamiltonians originally introduced by Bae and Kwon in Ref. \cite{bae02}, we present a detailed analysis concerning minimum search times and maximal success probabilities that can be obtained from suitably chosen sub-families belonging to the original family of Hamiltonians. In particular, we uncover the existence of quantum search Hamiltonians characterized by minimum search times needed for a perfect search that are smaller than the one required by the Farhi-Gutmann perfect quantum search Hamiltonian.
Furthermore, and more interestingly, we report on the existence of imperfect quantum search Hamiltonians that, despite being incapable of guaranteeing a perfect search, can outperform (in terms of minimum search time) perfect search Hamiltonians provided that only a very large, nearly optimal fidelity value is required to stop the search. The layout of the rest of the paper can be described as follows. In Section II, we provide a detailed computation of a general expression for the transition probability in the case of a quantum mechanical evolution governed by a time-independent generalized quantum search Hamiltonian. In Section III, we discuss a variety of limiting cases that arise from the generalized search Hamiltonian. In particular, we distinguish optimal scenarios (that is, cases where the probability of finding the target state equals one) from suboptimal scenarios (that is, cases where the probability of finding the target state is less than one). Our concluding remarks appear in Section IV. Finally, technical details are presented in Appendices A, B, and C.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{fig1}\caption{Gate-level schematic of the (a) digital and (b) analog quantum search algorithms. \label{fig1}}
\end{figure}

\section{Transition probability}

In this section, we consider the time-independent search Hamiltonian $\mathcal{H}$ defined as \cite{bae02}
\begin{equation}
\mathcal{H}\overset{\text{def}}{=}E\left[ \alpha\left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\beta\left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\gamma\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert +\delta\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] \text{.} \label{hamilton}
\end{equation}
The adimensional coefficients $\alpha$, $\beta$, $\gamma$, $\delta$ in Eq.
(\ref{hamilton}) are \emph{complex} expansion coefficients while $E$ is a \emph{real} constant with the physical dimensions of energy. We also assume that the quantum state $\left\vert \psi_{w}\right\rangle $ is the normalized target state while $\left\vert \psi_{s}\right\rangle $ is the normalized initial state with quantum overlap $x\overset{\text{def}}{=}\left\langle \psi_{w}|\psi_{s}\right\rangle $ that evolves unitarily according to the Schr\"{o}dinger quantum mechanical evolution law
\begin{equation}
\left\vert \psi_{s}\right\rangle \mapsto e^{-\frac{i}{\hslash}\mathcal{H}t}\left\vert \psi_{s}\right\rangle \text{.}
\end{equation}
In general, $x$ is a complex quantity. However, since any phase factor $e^{i\phi_{ws}}$ with $\phi_{ws}\in\mathbb{R}$ in $x\overset{\text{def}}{=}\left\langle \psi_{w}|\psi_{s}\right\rangle =\left\vert \left\langle \psi_{w}|\psi_{s}\right\rangle \right\vert e^{i\phi_{ws}}$ can be incorporated into the state $\left\vert \psi_{s}\right\rangle $, one can assume that $x\in\mathbb{R}_{+}\backslash\left\{ 0\right\} $. Our objective is to find the time $t^{\ast}$ such that $\mathcal{P}\left( t^{\ast}\right) =\mathcal{P}_{\max}$, where $\mathcal{P}\left( t\right) $ is the transition probability defined as \cite{sakurai,picasso}
\begin{equation}
\mathcal{P}\left( t\right) \overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle \right\vert ^{2}\text{.} \label{fidelity}
\end{equation}
\begin{figure}[t]
\centering
\includegraphics[width=0.3\textwidth]{fig2}\caption{Visualization of the normalized states $\left\vert \psi_{w}\right\rangle $, $\left\vert \psi_{s}\right\rangle $, and $\left\vert \psi_{r}\right\rangle $, as well as the quantum overlap $x$.
\label{fig2 \end{figure}Using the Gram-Schmidt orthonormalization technique, we can construct an orthonormal set of quantum state vectors starting from the set $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi _{s}\right\rangle \right\} $. The transition from a set of linear independent state vectors to a set of orthonormal state vector can be described as \begin{equation} \left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi _{s}\right\rangle \right\} \rightarrow\left\{ \left\vert \psi_{w \right\rangle \text{, }\left\vert \psi_{s}\right\rangle -\left\langle \psi _{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle \right\} \rightarrow\left\{ \frac{\left\vert \psi_{w}\right\rangle }{\left\Vert \left\vert \psi_{w}\right\rangle \right\Vert }\text{, }\frac{\left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle }{\left\Vert \left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle \right\Vert }\right\} \text{. \end{equation} For notational simplicity, let us define the quantum state vector $\left\vert \psi_{r}\right\rangle $ a \begin{equation} \left\vert \psi_{r}\right\rangle \overset{\text{def}}{=}\frac{\left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle }{\left\Vert \left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle \right\Vert }\text{.} \label{erre2 \end{equation} Recalling the definition of the quantum overlap $x$, Eq. 
(\ref{erre2}) can be expressed as \begin{equation} \left\vert \psi_{r}\right\rangle =\frac{\left\vert \psi_{s}\right\rangle -\left\langle \psi_{s}|\psi_{w}\right\rangle \left\vert \psi_{w}\right\rangle }{\sqrt{\left\langle \psi_{s}|\psi_{s}\right\rangle -\left\langle \psi _{s}|\psi_{w}\right\rangle ^{2}}}=\frac{1}{\sqrt{1-x^{2}}}\left( \left\vert \psi_{s}\right\rangle -x\left\vert \psi_{w}\right\rangle \right) \text{.} \label{fiar \end{equation} Fig. $2$ displays a graphical depiction of the orthonormal states $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} $ together with the source state $\left\vert \psi_{s}\right\rangle $ and the quantum overlap $x$. Fig. $3$, instead, is a simple depiction of the orthogonalization and normalization procedures that specify the Gram-Schmidt orthonormalization technique. Note that because of the definition of $\left\vert \psi_{r}\right\rangle $ in Eq. (\ref{fiar}), $x$ must be different from one. In terms of the set of orthonormal basis vectors $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} $, the source state $\left\vert \psi_{s}\right\rangle $ become \begin{equation} \left\vert \psi_{s}\right\rangle =\left\vert \psi_{s}\right\rangle \left( \left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\left\vert \psi_{r}\right\rangle \left\langle \psi_{r}\right\vert \right) =\left\langle \psi_{w}|\psi_{s}\right\rangle \left\vert \psi_{w}\right\rangle +\left\langle \psi_{r}|\psi_{s}\right\rangle \left\vert \psi_{r}\right\rangle \text{.} \label{chell \end{equation} Note that the quantum mechanical overlap $\left\langle \psi_{r}|\psi _{s}\right\rangle $ in Eq. 
(\ref{chell}) can be recast as, \begin{equation} \left\langle \psi_{r}|\psi_{s}\right\rangle =\frac{1}{\sqrt{1-x^{2}}}\left( \left\langle \psi_{s}\right\vert -x\left\langle \psi_{w}\right\vert \right) \left( \left\vert \psi_{s}\right\rangle \right) =\frac{1}{\sqrt{1-x^{2} }\left( 1-x^{2}\right) =\sqrt{1-x^{2}}\text{.} \label{chist \end{equation} Therefore, by\textbf{ }using Eq. (\ref{chist}), the state $\left\vert \psi _{s}\right\rangle $ in Eq. (\ref{chell}) becomes \begin{equation} \left\vert \psi_{s}\right\rangle =x\left\vert \psi_{w}\right\rangle +\sqrt{1-x^{2}}\left\vert \psi_{r}\right\rangle \text{.} \label{sr \end{equation} The matrix representation of the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) with respect to the orthonormal basis $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} $ where $\left\langle \psi_{w}|\psi_{r}\right\rangle =\delta_{wr}$, with $\delta_{wr}$ denoting the Kronecker delta, can be formally written a \begin{equation} \left[ \mathcal{H}\right] _{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }\overset{\text{def} {=}\left( \begin{array} [c]{cc \left\langle \psi_{w}|\mathcal{H}|\psi_{w}\right\rangle & \left\langle \psi_{w}|\mathcal{H}|\psi_{r}\right\rangle \\ \left\langle \psi_{r}|\mathcal{H}|\psi_{w}\right\rangle & \left\langle \psi_{r}|\mathcal{H}|\psi_{r}\right\rangle \end{array} \right) \text{. \end{equation} Using Eqs. 
(\ref{hamilton}) and (\ref{sr}) together with the orthonormality conditions $\left\langle \psi_{w}|\psi_{r}\right\rangle =\delta_{wr}$, we have
\begin{equation}
\left[ \mathcal{H}\right] _{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }=\left(
\begin{array}[c]{cc}
H_{11} & H_{12}\\
H_{21} & H_{22}
\end{array}
\right) \text{,} \label{symm}
\end{equation}
where
\begin{align}
& H_{11}\overset{\text{def}}{=}E\left[ \alpha+\left( \beta+\gamma\right) x+\delta x^{2}\right] \text{, }H_{12}\overset{\text{def}}{=}E\sqrt{1-x^{2}}\left( \beta+\delta x\right) \text{,}\nonumber\\
& \nonumber\\
& H_{21}\overset{\text{def}}{=}E\sqrt{1-x^{2}}\left( \gamma+\delta x\right) \text{, }H_{22}\overset{\text{def}}{=}E\delta\left( 1-x^{2}\right) \text{.} \label{heq}
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{fig3}\caption{Illustration of the Gram-Schmidt orthonormalization procedure for some vectors $\left\vert a\right\rangle $ and $\left\vert b\right\rangle $. In this case, the resulting orthonormal basis consists of $\left\vert e_{1}\right\rangle \overset{\text{def}}{=}\frac{\left\vert a\right\rangle }{\left\Vert \left\vert a\right\rangle \right\Vert }$ and $\left\vert e_{2}\right\rangle \overset{\text{def}}{=}\frac{\left\vert b\right\rangle -\left\langle a|b\right\rangle \left\vert a\right\rangle }{\left\Vert \left\vert b\right\rangle -\left\langle a|b\right\rangle \left\vert a\right\rangle \right\Vert }$. \label{fig3}}
\end{figure}
Observe that the Hamiltonian $\mathcal{H}$ is a Hermitian operator and, therefore, its eigenvalues must be \emph{real} (for details, see Appendix A). For this reason, recalling the previous constraints on $x$, we finally have $x\in\left( 0\text{,}1\right) $.
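The matrix elements in Eq. (\ref{heq}) can be assembled and checked numerically. The sketch below (our own helper, not part of the paper) builds the $2\times2$ matrix in the $\{\left\vert \psi_{w}\right\rangle, \left\vert \psi_{r}\right\rangle\}$ basis with $\gamma=\beta^{\ast}$ imposed, which is precisely the condition under which $\mathcal{H}$ is Hermitian:

```python
import numpy as np

def search_hamiltonian_matrix(alpha, beta, delta, E, x):
    # Matrix of H in the {|psi_w>, |psi_r>} basis, Eq. (heq), with gamma = conj(beta).
    s = np.sqrt(1.0 - x ** 2)
    gamma = np.conj(beta)
    H11 = E * (alpha + (beta + gamma) * x + delta * x ** 2)
    H12 = E * s * (beta + delta * x)
    H21 = E * s * (gamma + delta * x)
    H22 = E * delta * (1.0 - x ** 2)
    return np.array([[H11, H12], [H21, H22]], dtype=complex)

H = search_hamiltonian_matrix(alpha=1.0, beta=0.3 + 0.2j, delta=0.5, E=2.0, x=0.2)
```

Hermiticity of the resulting matrix guarantees real eigenvalues, consistent with the constraints derived below ($\alpha$, $\delta$ real and $\beta=\gamma^{\ast}$).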
Furthermore, imposing that $\mathcal{H}=\mathcal{H}^{\dagger}$, where the dagger symbol \textquotedblleft $\dagger$\textquotedblright\ denotes the Hermitian conjugation operation, we have
\begin{equation}
\left(
\begin{array}[c]{cc}
H_{11} & H_{12}\\
H_{21} & H_{22}
\end{array}
\right) =\left(
\begin{array}[c]{cc}
H_{11}^{\ast} & H_{21}^{\ast}\\
H_{12}^{\ast} & H_{22}^{\ast}
\end{array}
\right) \text{.} \label{heq2}
\end{equation}
Then, from Eqs. (\ref{heq}) and (\ref{heq2}), it follows that $\alpha$ and $\delta$ must be \emph{real} coefficients while $\beta=\gamma^{\ast}$. The symbol \textquotedblleft$\ast$\textquotedblright\ denotes the \emph{complex} conjugation operation. Next, let us diagonalize the Hermitian matrix $\left[ \mathcal{H}\right] _{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }$ in Eq. (\ref{symm}). The two \emph{real} eigenvalues $\lambda_{\pm}$ of the matrix can be written as
\begin{equation}
\lambda_{\pm}\overset{\text{def}}{=}\frac{1}{2}\left[ \left( H_{11}+H_{22}\right) \pm\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}\right] \text{.} \label{eigen}
\end{equation}
The eigenspaces $\mathcal{E}_{\lambda_{-}}$ and $\mathcal{E}_{\lambda_{+}}$ that correspond to the eigenvalues $\lambda_{-}$ and $\lambda_{+}$ are defined as
\begin{equation}
\mathcal{E}_{\lambda_{-}}\overset{\text{def}}{=}\text{\textrm{Span}}\left\{ \left\vert v_{\lambda_{-}}\right\rangle \right\} \text{, and }\mathcal{E}_{\lambda_{+}}\overset{\text{def}}{=}\text{\textrm{Span}}\left\{ \left\vert v_{\lambda_{+}}\right\rangle \right\} \text{,}
\end{equation}
respectively.
Furthermore, two orthogonal eigenvectors $\left\vert v_{\lambda_{+}}\right\rangle $ and $\left\vert v_{\lambda_{-}}\right\rangle $ corresponding to the distinct eigenvalues $\lambda_{+}$ and $\lambda_{-}$ are given by \begin{equation} \left\vert v_{\lambda_{+}}\right\rangle \overset{\text{def}}{=}\left( \begin{array} [c]{c \frac{1}{2H_{21}}\left[ \left( H_{11}-H_{22}\right) +\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}\right] \\ 1 \end{array} \right) \text{,} \label{v1 \end{equation} and \begin{equation} \left\vert v_{\lambda_{-}}\right\rangle \overset{\text{def}}{=}\left( \begin{array} [c]{c \frac{1}{2H_{21}}\left[ \left( H_{11}-H_{22}\right) -\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}\right] \\ 1 \end{array} \right) \text{,} \label{v2 \end{equation} respectively. For notational simplicity, let us introduce two \emph{complex} quantities $\mathcal{A}$ and $\mathcal{B}$ defined a \begin{equation} \mathcal{A}\overset{\text{def}}{=}\frac{1}{2H_{21}}\left[ \left( H_{11}-H_{22}\right) -\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21 }\right] \text{,} \label{anew \end{equation} and \begin{equation} \mathcal{B}\overset{\text{def}}{=}\frac{1}{2H_{21}}\left[ \left( H_{11}-H_{22}\right) +\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21 }\right] \text{,} \label{bnew \end{equation} respectively. Using Eqs. 
(\ref{v1}), (\ref{v2}), (\ref{anew}), and (\ref{bnew}), the eigenvector matrix $M_{\mathcal{H}}$ for the matrix $\left[ \mathcal{H}\right] _{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }$ and its inverse $M_{\mathcal{H }^{-1}$ can be formally written as \begin{equation} M_{\mathcal{H}}\overset{\text{def}}{=}\left( \begin{array} [c]{cc \mathcal{A} & \mathcal{B}\\ 1 & 1 \end{array} \right) \text{,} \label{mmatrix2 \end{equation} and \begin{equation} M_{\mathcal{H}}^{-1}\overset{\text{def}}{=}\frac{1}{\mathcal{A}-\mathcal{B }\left( \begin{array} [c]{cc 1 & -\mathcal{B}\\ -1 & \mathcal{A \end{array} \right) =M_{\mathcal{H}}^{\dagger}\text{,} \label{mimatrix2 \end{equation} respectively. In terms of the matrices $M_{\mathcal{H}}$ in Eq. (\ref{mmatrix2}), $M_{\mathcal{H}}^{-1}$ in Eq. (\ref{mimatrix2}), and the diagonal matrix $H_{\text{diagonal}}$ defined as \begin{equation} H_{\text{diagonal}}\overset{\text{def}}{=}\left[ \mathcal{H}\right] _{\left\{ \left\vert v_{\lambda-}\right\rangle \text{, }\left\vert v_{\lambda_{+}}\right\rangle \right\} }=\left( \begin{array} [c]{cc \left\langle v_{\lambda-}|\mathcal{H}|v_{\lambda-}\right\rangle & \left\langle v_{\lambda-}|\mathcal{H}|v_{\lambda_{+}}\right\rangle \\ \left\langle v_{\lambda_{+}}|\mathcal{H}|v_{\lambda-}\right\rangle & \left\langle v_{\lambda_{+}}|\mathcal{H}|v_{\lambda_{+}}\right\rangle \end{array} \right) =\left( \begin{array} [c]{cc \lambda_{-} & 0\\ 0 & \lambda_{+ \end{array} \right) \text{,} \label{hdiagonal \end{equation} the matrix $\left[ \mathcal{H}\right] _{\left\{ \left\vert \psi _{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }$ in\ Eq. 
(\ref{symm}) become \begin{equation} \left[ \mathcal{H}\right] _{\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} }=M_{\mathcal{H }H_{\text{diagonal}}M_{\mathcal{H}}^{-1}=\left( \begin{array} [c]{cc \mathcal{A} & \mathcal{B}\\ 1 & 1 \end{array} \right) \left( \begin{array} [c]{cc \lambda_{-} & 0\\ 0 & \lambda_{+ \end{array} \right) \left( \begin{array} [c]{cc \frac{1}{\mathcal{A}-\mathcal{B}} & \frac{-\mathcal{B}}{\mathcal{A -\mathcal{B}}\\ \frac{-1}{\mathcal{A}-\mathcal{B}} & \frac{\mathcal{A}}{\mathcal{A -\mathcal{B} \end{array} \right) \text{. \end{equation} We recall that the eigenvalues in\ Eq. (\ref{hdiagonal}) are defined in\ Eq. (\ref{eigen}) while $\mathcal{A}$ and $\mathcal{B}$ appear in\ Eqs. (\ref{anew}) and (\ref{bnew}), respectively. At this juncture, we also recall that our objective is to find the time $t^{\ast}$ such that $\mathcal{P \left( t^{\ast}\right) =\mathcal{P}_{\max}$ where the transition probability $\mathcal{P}\left( t\right) $ is defined in Eq. (\ref{fidelity}). Employing standard linear algebra techniques, $\mathcal{P}\left( t\right) $ can be recast a \begin{equation} \mathcal{P}\left( t\right) \overset{\text{def}}{=}\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle \right\vert ^{2}=\left\vert \left\langle \psi_{w}|e^{-\frac{i}{\hslash}M\mathcal{H _{\text{diagonal}}M^{\dagger}t}|\psi_{s}\right\rangle \right\vert ^{2}=\left\vert \left\langle \psi_{w}|Me^{-\frac{i}{\hslash}\mathcal{H _{\text{diagonal}}t}M^{\dagger}|\psi_{s}\right\rangle \right\vert ^{2}\text{,} \label{pt3 \end{equation} where $\mathcal{H}_{\text{diagonal}}$ denotes the Hermitian operator whose matrix representation is $H_{\text{diagonal}}$ in Eq. (\ref{hdiagonal}). 
Using the matrix notation with components expressed with respect to the orthonormal basis $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi _{r}\right\rangle \right\} $, quantum states $\left\vert \psi_{w \right\rangle $ and $\left\vert \psi_{s}\right\rangle $ are given by \begin{equation} \left\vert \psi_{w}\right\rangle \overset{\text{def}}{=}\left( \begin{array} [c]{c 1\\ 0 \end{array} \right) \text{, and }\left\vert \psi_{s}\right\rangle \overset{\text{def} {=}\left( \begin{array} [c]{c x\\ \sqrt{1-x^{2} \end{array} \right) \text{,} \label{matic2 \end{equation} respectively. By means of Eqs. (\ref{mmatrix2}), (\ref{mimatrix2}), and (\ref{matic2}), the quantum state amplitude $\left\langle \psi_{w |e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle $ that appears in the expression of the fidelity $\mathcal{P}\left( t\right) $ in\ Eq. (\ref{pt3}) become \begin{equation} \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle =\frac{1}{\mathcal{A}-\mathcal{B}}\left[ \mathcal{A}e^{-\frac{i}{\hslash }\lambda_{-}t}\left( x-\mathcal{B}\sqrt{1-x^{2}}\right) -\mathcal{B e^{-\frac{i}{\hslash}\lambda_{+}t}\left( x-\mathcal{A}\sqrt{1-x^{2}}\right) \right] \text{,} \label{part1b \end{equation} and, as a consequence, its complex conjugate $\left\langle \psi_{w |e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle ^{\ast}$ is, \begin{equation} \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle ^{\ast}=\frac{1}{\mathcal{A}-\mathcal{B}}\left[ \mathcal{A}e^{\frac {i}{\hslash}\lambda_{-}t}\left( x-\mathcal{B}\sqrt{1-x^{2}}\right) -\mathcal{B}e^{\frac{i}{\hslash}\lambda_{+}t}\left( x-\mathcal{A \sqrt{1-x^{2}}\right) \right] \text{.} \label{part2b \end{equation} Observe that \begin{equation} e^{-\frac{i}{\hslash}\lambda_{-}t}=e^{-\frac{i}{\hslash}\frac{H_{11}+H_{22 }{2}t}e^{i\frac{\mathrm{a}}{\hslash}t}\text{ and, }e^{-\frac{i}{\hslash 
}\lambda_{+}t}=e^{-\frac{i}{\hslash}\frac{H_{11}+H_{22}}{2}t}e^{-i\frac {\mathrm{a}}{\hslash}t} \label{aeq \end{equation} where, recalling Eq. (\ref{eigen}), the \emph{real} quantity $\mathrm{a}$ is defined a \begin{equation} \mathrm{a}\overset{\text{def}}{=}\frac{1}{2}\sqrt{\left( H_{11 -H_{22}\right) ^{2}+4H_{12}H_{21}}\text{.} \label{adef \end{equation} Employing Eq. (\ref{aeq}), the \emph{complex} probability amplitudes in Eqs. (\ref{part1b}) and (\ref{part2b}) becom \begin{equation} \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle =e^{-\frac{i}{\hslash}\frac{H_{11}+H_{22}}{2}t}\left[ \frac{\mathcal{A \left( x-\mathcal{B}\sqrt{1-x^{2}}\right) }{\mathcal{A}-\mathcal{B }e^{i\frac{\mathrm{a}}{\hslash}t}-\frac{\mathcal{B}\left( x-\mathcal{A \sqrt{1-x^{2}}\right) }{\mathcal{A}-\mathcal{B}}e^{-i\frac{\mathrm{a }{\hslash}t}\right] \text{,} \label{part3 \end{equation} and \begin{equation} \left\langle \psi_{w}|e^{-\frac{i}{\hslash}\mathcal{H}t}|\psi_{s}\right\rangle ^{\ast}=e^{\frac{i}{\hslash}\frac{H_{11}+H_{22}}{2}t}\left[ \frac {\mathcal{A}^{\ast}\left( x-\mathcal{B}^{\ast}\sqrt{1-x^{2}}\right) }{\mathcal{A}^{\ast}-\mathcal{B}^{\ast}}e^{-i\frac{\mathrm{a}}{\hslash t}-\frac{\mathcal{B}^{\ast}\left( x-\mathcal{A}^{\ast}\sqrt{1-x^{2}}\right) }{\mathcal{A}^{\ast}-\mathcal{B}^{\ast}}e^{i\frac{\mathrm{a}}{\hslash t}\right] \text{,} \label{part4 \end{equation} respectively. Using Eqs. (\ref{part3})\ and (\ref{part4}) and introducing the following three quantitie \begin{equation} \tilde{A}\overset{\text{def}}{=}\frac{\mathcal{A}\left( x-\mathcal{B \sqrt{1-x^{2}}\right) }{\mathcal{A}-\mathcal{B}}\text{, }\tilde{B \overset{\text{def}}{=}-\frac{\mathcal{B}\left( x-\mathcal{A}\sqrt{1-x^{2 }\right) }{\mathcal{A}-\mathcal{B}}\text{, and }\tilde{\alpha}=\frac {\mathrm{a}}{\hslash}t\text{,} \label{newroba \end{equation} the transition probability $\mathcal{P}\left( t\right) $ in Eq. 
(\ref{fidelity}) can be written a \begin{equation} \mathcal{P}\left( \tilde{\alpha}\right) =\left[ \tilde{A}e^{i\tilde{\alpha }}+\tilde{B}e^{-i\tilde{\alpha}}\right] \left[ \tilde{A}^{\ast e^{-i\tilde{\alpha}}+\tilde{B}^{\ast}e^{i\tilde{\alpha}}\right] =\left\vert \tilde{A}\right\vert ^{2}+\left\vert \tilde{B}\right\vert ^{2}+2\tilde {A}\tilde{B}^{\ast}\cos\left( 2\tilde{\alpha}\right) \text{, \end{equation} where we point out that $\tilde{A}\tilde{B}^{\ast}$ is \emph{real} since $H_{12}=H_{21}^{\ast}$. By employing standard trigonometric identities in a convenient sequential order (for details, see Appendix B), we fin \begin{equation} \mathcal{P}\left( \tilde{\alpha}\right) =\left\vert \tilde{A}-\tilde {B}\right\vert ^{2}\sin^{2}\left( \tilde{\alpha}\right) +\left\vert \tilde{A}+\tilde{B}\right\vert ^{2}\cos^{2}\left( \tilde{\alpha}\right) \text{.} \label{fess \end{equation} Using Eqs. (\ref{newroba}), (\ref{adef}), (\ref{anew}), and (\ref{bnew}), the transition probability $\mathcal{P}\left( \tilde{\alpha}\right) $ in Eq. (\ref{fess}) become \begin{equation} \mathcal{P}\left( t\right) =\frac{\left\vert \left( H_{11}-H_{22}\right) x+2H_{12}\sqrt{1-x^{2}}\right\vert ^{2}}{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}\sin^{2}\left( \frac{\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}}{2\hslash}t\right) +x^{2}\cos^{2}\left( \frac {\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}}{2\hslash}t\right) \text{.} \label{it \end{equation} From Eq. 
(\ref{it}), it follows that the maximum $\mathcal{P}_{\max}=\mathcal{P}\left( t^{\ast}\right) $ of $\mathcal{P}\left( t\right) $ occurs at the instant
\begin{equation}
t^{\ast}\overset{\text{def}}{=}\frac{\pi\hslash}{\sqrt{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}}\text{,} \label{start}
\end{equation}
and equals
\begin{equation}
\mathcal{P}_{\max}=\frac{\left\vert \left( H_{11}-H_{22}\right) x+2H_{12}\sqrt{1-x^{2}}\right\vert ^{2}}{\left( H_{11}-H_{22}\right) ^{2}+4H_{12}H_{21}}\text{.} \label{maxp}
\end{equation}
Finally, making use of Eq. (\ref{heq}) and recalling that $\alpha$ and $\delta$ must be \emph{real} coefficients while $\beta=\gamma^{\ast}$, $\mathcal{P}_{\max}$ in Eq. (\ref{maxp}) becomes
\begin{equation}
\mathcal{P}_{\max}\left( \alpha\text{, }\beta\text{, }\delta\text{, }x\right) =\frac{\left\vert \left[ \left( \alpha-\delta\right) +\left( \beta+\beta^{\ast}\right) x+2\delta x^{2}\right] x+2\left( \beta+\delta x\right) \left( 1-x^{2}\right) \right\vert ^{2}}{\left[ \left( \alpha-\delta\right) +\left( \beta+\beta^{\ast}\right) x+2\delta x^{2}\right] ^{2}+4\left( 1-x^{2}\right) \left( \beta+\delta x\right) \left( \beta^{\ast}+\delta x\right) }\text{.} \label{nono}
\end{equation}
Note that $\gamma=\beta^{\ast}$, $\beta+\beta^{\ast}=2\operatorname{Re}\left( \beta\right) \in\mathbb{R}$, and the product $\left( \beta+\delta x\right) \left( \beta^{\ast}+\delta x\right) $ is a \emph{real} quantity for any \emph{complex} parameter $\beta$.

\section{Discussion}

In this section, we discuss a variety of limiting cases that arise from the generalized search Hamiltonian in Eq. (\ref{hamilton}). In particular, we make a distinction between optimal and suboptimal scenarios. The former are cases where the probability of finding the target state equals one; the latter are cases where the probability of finding the target state is less than one.
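To sanity-check Eqs. (\ref{start}) and (\ref{maxp}), one can compare the closed-form optimum with a direct numerical evolution of the $2\times2$ matrix representation. The sketch below (our own code, in units with $\hslash=1$) does so for the Farhi-Gutmann special case $\alpha=\delta=1$, $\beta=\gamma=0$, $E=1$, for which a perfect search ($\mathcal{P}_{\max}=1$) at $t^{\ast}=\pi\hslash/(2Ex)$ is expected:

```python
import numpy as np

HBAR = 1.0  # natural units for illustration

def evolve_probability(H, x, t):
    # |<psi_w| exp(-i H t / hbar) |psi_s>|^2 via exact diagonalization of the 2x2 H.
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * t / HBAR)) @ V.conj().T
    psi_s = np.array([x, np.sqrt(1.0 - x ** 2)], dtype=complex)
    return abs(U[0, :] @ psi_s) ** 2

def closed_form_optimum(H, x):
    # (t*, P_max) from Eqs. (start) and (maxp).
    H11, H22 = H[0, 0].real, H[1, 1].real
    H12, H21 = H[0, 1], H[1, 0]
    disc = (H11 - H22) ** 2 + 4.0 * (H12 * H21).real
    t_star = np.pi * HBAR / np.sqrt(disc)
    p_max = abs((H11 - H22) * x + 2.0 * H12 * np.sqrt(1.0 - x ** 2)) ** 2 / disc
    return t_star, p_max

# Farhi-Gutmann special case: alpha = delta = 1, beta = gamma = 0, E = 1.
x = 0.6
s = np.sqrt(1.0 - x ** 2)
H = np.array([[1.0 + x ** 2, x * s],
              [x * s, 1.0 - x ** 2]], dtype=complex)
t_star, p_max = closed_form_optimum(H, x)
```

For this case the closed-form $t^{\ast}$ coincides with the Farhi-Gutmann time $\pi\hslash/(2Ex)$ quoted in the Introduction.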
\emph{General Case}: The general case is specified by the conditions $\alpha\neq\delta$ \emph{real} and $\beta=\gamma^{\ast}$ \emph{complex} coefficients. We note that after some straightforward but tedious algebra, $\mathcal{P}_{\max}$ in Eq. (\ref{nono}) can be recast as
\begin{equation}
\mathcal{P}_{\max}\left( \alpha\text{, }\beta\text{, }\delta\text{, }x\right) =\frac{4\left[ \left\vert \beta\right\vert ^{2}-\operatorname{Re}^{2}\left( \beta\right) \right] x^{4}+\left[ \left( \alpha+\delta\right) ^{2}-8\left( \left\vert \beta\right\vert ^{2}-\operatorname{Re}^{2}\left( \beta\right) \right) \right] x^{2}+4\operatorname{Re}\left( \beta\right) \left( \alpha+\delta\right) x+4\left\vert \beta\right\vert ^{2}}{4\left[ \alpha\delta+\operatorname{Re}^{2}\left( \beta\right) -\left\vert \beta\right\vert ^{2}\right] x^{2}+4\operatorname{Re}\left( \beta\right) \left( \alpha+\delta\right) x+\left[ \left( \alpha-\delta\right) ^{2}+4\left\vert \beta\right\vert ^{2}\right] }\text{.} \label{nano2}
\end{equation}
Furthermore, by using Eq. (\ref{heq}) in Eq. (\ref{start}), the time $t^{\ast}$ at which this maximum transition probability value $\mathcal{P}_{\max}$ is achieved becomes
\begin{equation}
t_{\mathcal{H}}^{\ast}\overset{\text{def}}{=}\frac{\pi\hslash}{E\sqrt{\left[ \left( \alpha-\delta\right) +x\left( \beta+\beta^{\ast}\right) +2x^{2}\delta\right] ^{2}+4\left( 1-x^{2}\right) \left( \beta+\delta x\right) \left( \beta^{\ast}+\delta x\right) }}\text{.} \label{nano}
\end{equation}
The subscript $\mathcal{H}$ in $t_{\mathcal{H}}^{\ast}$ denotes the generalized search Hamiltonian in Eq. (\ref{hamilton}). Observe that $t_{\mathcal{H}}^{\ast}$ in Eq.
(\ref{nano}) can be rewritten as
\begin{equation}
t_{\mathcal{H}}^{\ast}\overset{\text{def}}{=}\frac{2}{\sqrt{4\left[ \alpha\delta+\operatorname{Re}^{2}\left( \beta\right) -\left\vert \beta\right\vert ^{2}\right] x^{2}+4\operatorname{Re}\left( \beta\right) \left( \alpha+\delta\right) x+\left[ \left( \alpha-\delta\right) ^{2}+4\left\vert \beta\right\vert ^{2}\right] }}\frac{\pi\hslash}{2E}\text{.} \label{nano1}
\end{equation}
\begin{table}[t]
\centering
\begin{tabular}[c]{c|c|c|c|c|c}
Case & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ & ${\mathcal{P}}_{\text{max}}$\\\hline
General & $\neq\delta$ & $\gamma^{*}\in\mathbb{C}$ & $\beta^{*}\in\mathbb{C}$ & $\neq\alpha$ & $\leq1$\\
1 & $\delta$ & $0$ & $0$ & $\alpha$ & $1$\\
2 & $\neq\delta$ & $0$ & $0$ & $\neq\alpha$ & $\leq1$\\
3 & $0$ & $\gamma^{*}\in\mathbb{R}$ & $\beta^{*}\in\mathbb{R}$ & $0$ & $1$\\
4 & $0$ & $\gamma^{*}\in\mathbb{C}$ & $\beta^{*}\in\mathbb{C}$ & $0$ & $\leq1$\\
5 & $\delta$ & $\gamma^{*}\in\mathbb{R}$ & $\beta^{*}\in\mathbb{R}$ & $\alpha$ & $1$\\
6 & $\delta$ & $\gamma^{*}\in\mathbb{C}$ & $\beta^{*}\in\mathbb{C}$ & $\alpha$ & $\leq1$\\
7 & $\neq\delta$ & $\gamma^{*}\in\mathbb{R}$ & $\beta^{*}\in\mathbb{R}$ & $\neq\alpha$ & $\leq1$
\end{tabular}
\caption{Summary of maximal success probability values $\mathcal{P}_{\max}$ that can be achieved for a variety of choices of the parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ specifying the quantum search Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}).}
\end{table}
In what follows, we choose to briefly discuss a number of limiting cases that arise from Eq. (\ref{nano2}).
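Since the passage from Eq. (\ref{nono}) to Eq. (\ref{nano2}) involves tedious algebra, the equivalence of the two expressions is easy to confirm numerically. The sketch below (not part of the paper; parameter ranges are illustrative) evaluates both formulas for randomly drawn real $\alpha$, $\delta$ and complex $\beta$:

```python
import numpy as np

rng = np.random.default_rng(1)

def pmax_nono(alpha, beta, delta, x):
    """Maximal success probability as written in Eq. (nono)."""
    a = (alpha - delta) + 2.0 * beta.real * x + 2.0 * delta * x**2
    num = abs(a * x + 2.0 * (beta + delta * x) * (1.0 - x**2)) ** 2
    den = a**2 + 4.0 * (1.0 - x**2) * abs(beta + delta * x) ** 2
    return num / den

def pmax_nano2(alpha, beta, delta, x):
    """The same quantity recast as in Eq. (nano2)."""
    b2, rb = abs(beta) ** 2, beta.real
    num = (4.0 * (b2 - rb**2) * x**4
           + ((alpha + delta) ** 2 - 8.0 * (b2 - rb**2)) * x**2
           + 4.0 * rb * (alpha + delta) * x + 4.0 * b2)
    den = (4.0 * (alpha * delta + rb**2 - b2) * x**2
           + 4.0 * rb * (alpha + delta) * x
           + (alpha - delta) ** 2 + 4.0 * b2)
    return num / den

max_diff = 0.0
for _ in range(1000):
    alpha, delta = rng.uniform(-2, 2), rng.uniform(-2, 2)
    x = rng.uniform(0.05, 0.95)
    beta = rng.uniform(-2, 2) + 1j * rng.uniform(-2, 2)
    max_diff = max(max_diff,
                   abs(pmax_nono(alpha, beta, delta, x) - pmax_nano2(alpha, beta, delta, x)))
print(max_diff)  # numerically negligible: the two expressions agree identically
```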
In particular, the big-calligraphic-$\mathcal{O}$ notation $f\left( \varepsilon\right) =\mathcal{O}\left( g\left( \varepsilon\right) \right) $ means that $f\left( \varepsilon\right) $ is an infinitesimal of order equal to $g\left( \varepsilon\right) $ as $\varepsilon$ approaches zero, that is,
\begin{equation}
\lim_{\varepsilon\rightarrow0}\frac{f\left( \varepsilon\right) }{g\left( \varepsilon\right) }=K\text{,}
\end{equation}
where $K$ denotes a nonzero \emph{real} constant. In Table I we report an overview of the maximal success probability values that can be obtained for a variety of choices of the parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ characterizing the quantum search Hamiltonian $\mathcal{H}$. In particular, we note that the unit success probability $\mathcal{P}_{\max}=1$ can be achieved only when considering the Hamiltonians $\mathcal{H}_{1}$, $\mathcal{H}_{3}$, and $\mathcal{H}_{5}$. Fig. $4$, instead, displays the negative effect on the maximal success probability $\mathcal{P}_{\max}$ due to asymmetries ($\alpha\neq\delta$) and complexities ($\beta\in\mathbb{C}$) in the parameters of the quantum search Hamiltonian $\mathcal{H}$ when the quantum overlap $x$ approaches zero.

\emph{Case 1}: $\alpha=\delta$, and $\beta=\gamma=0$. In this case, the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{1}\overset{\text{def}}{=}\alpha E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] \text{.} \label{h1}
\end{equation}
Furthermore, the maximum value of the transition probability in Eq. (\ref{nono}) becomes $\mathcal{P}_{\max}=1$, reached at the time $t_{\mathcal{H}_{1}}^{\ast}$
\begin{equation}
t_{\mathcal{H}_{1}}^{\ast}=\frac{1}{\alpha x}\frac{\pi\hslash}{2E}\text{.} \label{tstar1}
\end{equation}
Observe that when $\alpha=1$ in Eq. (\ref{tstar1}), we recover the well-known result by Farhi and Gutmann.
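This limiting case is easy to verify numerically: evolving $\left\vert \psi_{s}\right\rangle $ under $\mathcal{H}_{1}$ for the time of Eq. (\ref{tstar1}) must return the target state with certainty. The Python sketch below (illustrative values, in units where $\hslash=E=1$, which differ from the $E=h=1$ convention used later in the figures) builds $\mathcal{H}_{1}$ in the orthonormal $\left\{ \left\vert \psi_{w}\right\rangle \text{, }\left\vert \psi_{r}\right\rangle \right\} $ basis:

```python
import numpy as np

E, hbar, alpha, x = 1.0, 1.0, 1.0, 0.1   # illustrative values, units hbar = E = 1

# H_1 = alpha * E * (|w><w| + |s><s|) in the orthonormal {|w>, |r>} basis
s = np.array([x, np.sqrt(1.0 - x**2)])
w = np.array([1.0, 0.0])
H1 = alpha * E * (np.outer(w, w) + np.outer(s, s))

t_star = (1.0 / (alpha * x)) * np.pi * hbar / (2.0 * E)  # Eq. (tstar1)

# Exact propagator at t_star via diagonalization of the symmetric matrix H1
evals, evecs = np.linalg.eigh(H1)
U = evecs @ np.diag(np.exp(-1j * evals * t_star / hbar)) @ evecs.conj().T
p_star = abs(w @ U @ s) ** 2
print(p_star)  # 1.0 up to rounding: the target is found with certainty
```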
As a side remark, we point out that $t_{\mathcal{H}_{1}}^{\ast}$ in Eq. (\ref{tstar1}) is inversely proportional to the quantum overlap $x$ between the initial state $\left\vert \psi_{s}\right\rangle $ and the target state $\left\vert \psi_{w}\right\rangle $.

\emph{Case 2}: $\alpha\neq\delta$, and $\beta=\gamma=0$. Using these working assumptions, the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) becomes
\begin{equation}
\mathcal{H}_{2}\overset{\text{def}}{=}E\left[ \alpha\left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\delta\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] \text{.} \label{h2}
\end{equation}
In this case, $\mathcal{P}_{\max}$ is given by
\begin{equation}
\mathcal{P}_{\max}=\frac{\left( \alpha+\delta\right) ^{2}x^{2}}{4x^{2}\alpha\delta+\left( \alpha-\delta\right) ^{2}}\text{.} \label{p2}
\end{equation}
This maximum $\mathcal{P}_{\max}$ with $0\leq\mathcal{P}_{\max}\leq1$ is reached at $t_{\mathcal{H}_{2}}^{\ast}$
\begin{equation}
t_{\mathcal{H}_{2}}^{\ast}=\frac{2}{\sqrt{4x^{2}\alpha\delta+\left( \alpha-\delta\right) ^{2}}}\frac{\pi\hslash}{2E}\text{.} \label{tstar2}
\end{equation}
Note that $\mathcal{P}_{\max}$ in Eq. (\ref{p2}), when viewed as a function of $x$, assumes its maximum value $1$ when $\alpha=\delta$. Furthermore, $\mathcal{P}_{\max}=1$ when $x=1$ for any choice of $\alpha$ and $\delta$. In addition, $t_{\mathcal{H}_{1}}^{\ast}\geq t_{\mathcal{H}_{2}}^{\ast}$ when $0\leq\delta/\left( 1-4x^{2}\right) \leq\alpha$. In particular, when $\alpha=\delta/\left( 1-4x^{2}\right) $, we get
\begin{equation}
\frac{2E}{\pi\hslash}t_{\mathcal{H}_{1}}^{\ast}=\frac{1-4x^{2}}{\delta x}=\frac{2E}{\pi\hslash}t_{\mathcal{H}_{2}}^{\ast}\text{.}
\end{equation}
Finally, we remark that when $0\leq\alpha-\delta\ll1$, the approximate expression of $\mathcal{P}_{\max}$ in Eq.
(\ref{p2}) becomes
\begin{equation}
\mathcal{P}_{\max}=1-\frac{1}{4}\frac{1-x^{2}}{\alpha^{2}x^{2}}\left( \alpha-\delta\right) ^{2}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert ^{3}\right) \text{.} \label{profound}
\end{equation}
This approximate maximum transition probability value $\mathcal{P}_{\max}$ in Eq. (\ref{profound}) is achieved when
\begin{equation}
t_{\mathcal{H}_{2}}^{\ast}=\left[ \frac{1}{\alpha x}-\frac{1}{8}\frac{\left( \alpha-\delta\right) ^{2}}{\alpha^{3}x^{3}}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert ^{3}\right) \right] \frac{\pi\hslash}{2E}\text{,}
\end{equation}
that is, $t_{\mathcal{H}_{2}}^{\ast}=t_{\mathcal{H}_{1}}^{\ast}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert ^{2}\right) $.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{fig4}
\caption{Maximal success probability $\mathcal{P}_{\max}$ as a function of $\alpha-\delta$ for $\left\vert \beta\right\vert =0.25$ (dotted line), $\left\vert \beta\right\vert =0.5$ (thin solid line), and $\left\vert \beta\right\vert =1$ (thick solid line) in the working assumption that $x$ approaches zero (left); maximal success probability $\mathcal{P}_{\max}$ as a function of $\left\vert \beta\right\vert $ for $\alpha-\delta=0$ (dotted line), $\alpha-\delta=0.25$ (thin solid line), and $\alpha-\delta=0.5$ (thick solid line) in the working assumption that $x$ approaches zero (right).}
\label{fig4}
\end{figure}

\emph{Case 3}: $\beta=\gamma^{\ast}$ \emph{real}, and $\alpha=\delta=0$. In this case, the Hamiltonian $\mathcal{H}$ in Eq.
(\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{3}\overset{\text{def}}{=}\beta E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.} \label{h3}
\end{equation}
The Hamiltonian $\mathcal{H}_{3}$ can be used to search for the target state $\left\vert \psi_{w}\right\rangle $ with certainty since the maximum probability value $\mathcal{P}_{\max}$ is given by $\mathcal{P}_{\max}=1$. This maximum $\mathcal{P}_{\max}$ is reached at $t_{\mathcal{H}_{3}}^{\ast}$
\begin{equation}
t_{\mathcal{H}_{3}}^{\ast}\overset{\text{def}}{=}\frac{1}{\beta}\frac{\pi\hslash}{2E}\text{.} \label{tstar3}
\end{equation}
Note that, unlike the previous two cases, the time $t_{\mathcal{H}_{3}}^{\ast}$ does not depend on the quantum overlap $x$.

\emph{Case 4}: $\beta=\gamma^{\ast}$ \emph{complex}, and $\alpha=\delta=0$. Using these working assumptions, the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) becomes
\begin{equation}
\mathcal{H}_{4}\overset{\text{def}}{=}E\left[ \beta\left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\beta^{\ast}\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.} \label{h4}
\end{equation}
In this case, $\mathcal{P}_{\max}$ becomes
\begin{equation}
\mathcal{P}_{\max}=\frac{8\left[ \operatorname{Re}\left( \beta\right) \right] ^{2}x^{2}-4\left[ \operatorname{Re}\left( \beta\right) \right] ^{2}x^{4}+4\left\vert \beta\right\vert ^{2}\left( 1-x^{2}\right) ^{2}}{4\left[ \operatorname{Re}\left( \beta\right) \right] ^{2}x^{2}+4\left\vert \beta\right\vert ^{2}\left( 1-x^{2}\right) }\text{.} \label{pmaxcomplex}
\end{equation}
This maximum $\mathcal{P}_{\max}$ in Eq.
(\ref{pmaxcomplex}) is reached at $t_{\mathcal{H}_{4}}^{\ast}$
\begin{equation}
t_{\mathcal{H}_{4}}^{\ast}\overset{\text{def}}{=}\frac{2}{\sqrt{4\left[ \operatorname{Re}\left( \beta\right) \right] ^{2}x^{2}+4\left\vert \beta\right\vert ^{2}\left( 1-x^{2}\right) }}\frac{\pi\hslash}{2E}\text{.}
\end{equation}
Note that, unlike the previous case, the time $t_{\mathcal{H}_{4}}^{\ast}$ does depend on the quantum overlap $x$. Furthermore, observe that if $\operatorname{Re}\left( \beta\right) =0$, $\mathcal{P}_{\max}$ in Eq. (\ref{pmaxcomplex}) becomes $\mathcal{\tilde{P}}_{\max}=1-x^{2}$. This maximum $\mathcal{\tilde{P}}_{\max}$ is reached at $\tilde{t}_{\mathcal{H}_{4}}^{\ast}$
\begin{equation}
\tilde{t}_{\mathcal{H}_{4}}^{\ast}\overset{\text{def}}{=}\frac{1}{\sqrt{\left\vert \beta\right\vert ^{2}\left( 1-x^{2}\right) }}\frac{\pi\hslash}{2E}=\frac{1}{\left\vert \beta\right\vert }\left[ 1+\frac{1}{2}x^{2}+\mathcal{O}\left( x^{4}\right) \right] \frac{\pi\hslash}{2E}=t_{\mathcal{H}_{3}}^{\ast}+\mathcal{O}\left( x^{2}\right) \text{.} \label{ttilda4}
\end{equation}
In other words, when $0\leq x\ll1$, the search Hamiltonian $\mathcal{H}_{4}$ behaves approximately like the Hamiltonian $\mathcal{H}_{3}$. As a final remark, we note that when $\beta\overset{\text{def}}{=}2iEx$ we recover Fenner's quantum search Hamiltonian as proposed in Ref. \cite{fenner}.

\emph{Case 5}: $\alpha=\delta$, and $\beta=\gamma^{\ast}$ \emph{real}. In this case, the Hamiltonian $\mathcal{H}$ in Eq.
(\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{5}\overset{\text{def}}{=}\alpha E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] +\beta E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.} \label{h5}
\end{equation}
It happens that given $\mathcal{H}_{5}$, $\mathcal{P}_{\max}$ becomes $\mathcal{P}_{\max}=1$. Furthermore, the maximum $\mathcal{P}_{\max}$ is reached at $t_{\mathcal{H}_{5}}^{\ast}$
\begin{equation}
t_{\mathcal{H}_{5}}^{\ast}=\frac{1}{\alpha x+\beta}\frac{\pi\hslash}{2E}\text{.} \label{tstar5}
\end{equation}
Note that for $\beta=0$ and $\alpha=0$, $t_{\mathcal{H}_{5}}^{\ast}$ reduces to $t_{\mathcal{H}_{1}}^{\ast}$ and $t_{\mathcal{H}_{3}}^{\ast}$, respectively. For the sake of completeness, we remark that the Hamiltonian in Eq. (\ref{h5}) was originally considered in Ref. \cite{bae02}.

\emph{Case 6}: $\alpha=\delta$, and $\beta=\gamma^{\ast}$ \emph{complex}. In this case, the Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{6}\overset{\text{def}}{=}\alpha E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] +E\left[ \beta\left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\beta^{\ast}\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.}
\end{equation}
Moreover, $\mathcal{P}_{\max}$ becomes
\begin{equation}
\mathcal{P}_{\max}=\frac{\left\vert \left[ 2\operatorname{Re}\left( \beta\right) x+2\alpha x^{2}\right] x+2\left( \alpha x+\beta\right) \left( 1-x^{2}\right) \right\vert ^{2}}{\left[ 2\operatorname{Re}\left( \beta\right) x+2\alpha x^{2}\right] ^{2}+4\left( 1-x^{2}\right) \left[ \left\vert \beta\right\vert ^{2}+2\alpha\operatorname{Re}\left( \beta\right) x+\alpha^{2}x^{2}\right] }\text{.} \label{pmax2}
\end{equation}
The maximum $\mathcal{P}_{\max}$ is reached at $t_{\mathcal{H}_{6}}^{\ast}$
\begin{equation}
t_{\mathcal{H}_{6}}^{\ast}\overset{\text{def}}{=}\frac{2}{\sqrt{\left[ 2\operatorname{Re}\left( \beta\right) x+2\alpha x^{2}\right] ^{2}+4\left( 1-x^{2}\right) \left[ \left\vert \beta\right\vert ^{2}+2\alpha\operatorname{Re}\left( \beta\right) x+\alpha^{2}x^{2}\right] }}\frac{\pi\hslash}{2E}\text{.}
\end{equation}
\begin{table}[t]
\centering
\begin{tabular}[c]{c|c|c|c|c}
$\mathcal{H}$ & ${\mathcal{P}}_{\text{max}}$ & $t_{\mathcal{H}}^{\ast}$ & $(\alpha,\delta)$ & $(\beta,\gamma)$\\\hline
$\mathcal{H}_{1}$ & $1$ & $\frac{\pi\hslash}{2E}(\alpha x)^{-1}$ & $\alpha=\delta\neq0$ & $\beta=\gamma^{*}=0$\\
$\mathcal{H}_{3}$ & $1$ & $\frac{\pi\hslash}{2E}(\beta)^{-1}$ & $\alpha=\delta=0$ & $\beta=\gamma^{*}\in\mathbb{R}\backslash\{0\}$\\
$\mathcal{H}_{5}$ & $1$ & $\frac{\pi\hslash}{2E}(\alpha x+\beta)^{-1}$ & $\alpha=\delta\neq0$ & $\beta=\gamma^{*}\in\mathbb{R}\backslash\{0\}$
\end{tabular}
\caption{Summary of cases where unit maximal success probability values $\mathcal{P}_{\max}$ can be achieved for a variety of choices of the parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ specifying the quantum search Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}).}
\end{table}

\emph{Case 7}: $\alpha\neq\delta$, and $\beta=\gamma^{\ast}$ \emph{real}. The Hamiltonian $\mathcal{H}$ in Eq.
(\ref{hamilton}) is given by
\begin{equation}
\mathcal{H}_{7}\overset{\text{def}}{=}E\left[ \alpha\left\vert \psi_{w}\right\rangle \left\langle \psi_{w}\right\vert +\delta\left\vert \psi_{s}\right\rangle \left\langle \psi_{s}\right\vert \right] +\beta E\left[ \left\vert \psi_{w}\right\rangle \left\langle \psi_{s}\right\vert +\left\vert \psi_{s}\right\rangle \left\langle \psi_{w}\right\vert \right] \text{.}
\end{equation}
In this case, $\mathcal{P}_{\max}$ becomes
\begin{equation}
\mathcal{P}_{\max}=\frac{\left[ \left( \alpha+\delta\right) x+2\beta\right] ^{2}}{4\left[ \alpha\delta x^{2}+\left( \alpha\beta+\beta\delta\right) x+\beta^{2}\right] +\left( \alpha-\delta\right) ^{2}}\text{.} \label{pm7}
\end{equation}
The maximum $\mathcal{P}_{\max}$ in Eq. (\ref{pm7}) is reached at $t_{\mathcal{H}_{7}}^{\ast}$
\begin{equation}
t_{\mathcal{H}_{7}}^{\ast}=\frac{2}{\sqrt{4\left[ \alpha\delta x^{2}+\left( \alpha\beta+\beta\delta\right) x+\beta^{2}\right] +\left( \alpha-\delta\right) ^{2}}}\frac{\pi\hslash}{2E}\text{.}
\end{equation}
Finally, we remark that when $0\leq\alpha-\delta\ll1$, the approximate expression of $\mathcal{P}_{\max}$ in Eq. (\ref{pm7}) becomes
\begin{equation}
\mathcal{P}_{\max}=1-\frac{1}{4}\frac{1-x^{2}}{\left( \alpha x+\beta\right) ^{2}}\left( \alpha-\delta\right) ^{2}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert ^{3}\right) \text{.} \label{app}
\end{equation}
This approximate maximum transition probability value in Eq. (\ref{app}) is achieved when
\begin{equation}
t_{\mathcal{H}_{7}}^{\ast}=\left[ \frac{1}{\alpha x+\beta}-\frac{1}{8}\frac{\left( \alpha-\delta\right) ^{2}}{\left( \alpha x+\beta\right) ^{3}}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert ^{3}\right) \right] \frac{\pi\hslash}{2E}\text{,}
\end{equation}
that is, $t_{\mathcal{H}_{7}}^{\ast}=t_{\mathcal{H}_{5}}^{\ast}+\mathcal{O}\left( \left\vert \alpha-\delta\right\vert ^{2}\right) $ with $t_{\mathcal{H}_{5}}^{\ast}$ in Eq. (\ref{tstar5}).
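The quality of the expansion in Eq. (\ref{app}) can be probed numerically. In the sketch below (parameter values are illustrative), halving $\alpha-\delta$ reduces the discrepancy between the exact Eq. (\ref{pm7}) and the quadratic approximation by roughly a factor of $8$, consistent with an $\mathcal{O}\left( \left\vert \alpha-\delta\right\vert ^{3}\right) $ remainder:

```python
def pmax7(alpha, delta, beta, x):
    """Exact maximal success probability for H_7, Eq. (pm7)."""
    num = ((alpha + delta) * x + 2.0 * beta) ** 2
    den = 4.0 * (alpha * delta * x**2 + (alpha * beta + beta * delta) * x + beta**2) \
          + (alpha - delta) ** 2
    return num / den

def pmax7_approx(alpha, delta, beta, x):
    """Quadratic approximation of Eq. (app), valid for 0 <= alpha - delta << 1."""
    return 1.0 - 0.25 * (1.0 - x**2) * (alpha - delta) ** 2 / (alpha * x + beta) ** 2

alpha, beta, x = 1.0, 1.0, 0.5  # illustrative parameter values
errs = [abs(pmax7(alpha, alpha - eps, beta, x) - pmax7_approx(alpha, alpha - eps, beta, x))
        for eps in (1e-2, 5e-3)]
ratio = errs[0] / errs[1]
print(errs, ratio)  # halving alpha - delta shrinks the error ~8x: a cubic remainder
```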
In Table II we describe the minimum search times $t_{\mathcal{H}_{i}}^{\ast}$ with $i\in\left\{ 1\text{, }3\text{, }5\right\} $ when the maximal success probability $\mathcal{P}_{\max}$ equals one. Furthermore, Fig. $5$ displays two plots. The plot on the left represents the minimum search time $t^{\ast}$ versus the overlap $x$ assuming $\alpha=\beta=1$ and $E=h=1$. From this plot, we note that $t_{\mathcal{H}_{5}}^{\ast}\leq t_{\mathcal{H}_{3}}^{\ast}\leq t_{\mathcal{H}_{1}}^{\ast}$. The plot on the right, instead, represents the temporal behavior of the success probability $\mathcal{P}\left( t\right) $ assuming $\alpha=\beta=1$, $E=h=1$, and $x=0.5$. We observe that $\mathcal{P}\left( t\right) $ reaches the ideal unit value with $\mathcal{H}_{5}$ at $t_{\mathcal{H}_{5}}^{\ast}=1/6\simeq0.17$, with $\mathcal{H}_{3}$ at $t_{\mathcal{H}_{3}}^{\ast}=1/4=0.25$, and with $\mathcal{H}_{1}$ at $t_{\mathcal{H}_{1}}^{\ast}=1/2=0.5$. Despite the detrimental effects of asymmetries and complexities on the achievable maximal success probability values represented in Fig. $4$ when $x$ approaches zero, and despite the fact, as reported in Table II and Fig. $5$, that $\mathcal{H}_{5}$ appears to be the quantum search Hamiltonian that yields the shortest search time needed to achieve unit success probability, we point out that it is possible to suitably choose the Hamiltonian parameters $\alpha$, $\beta$, $\gamma$, and $\delta$ in $\mathcal{H}$, together with the overlap $x$, in such a manner that nearly optimal success probability threshold values can be obtained in search times shorter than those specified by $\mathcal{H}_{5}$. Indeed, Fig. $6$ displays such a circumstance.
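The threshold-crossing times behind Fig. $6$ can be computed directly from the closed form of Eq. (\ref{it}), using the matrix elements of Eq. (\ref{heq}). The Python sketch below (not part of the formal analysis) adopts the $E=h=1$ convention of the figures and finds the first time at which $\mathcal{P}\left( t\right) $ reaches a prescribed threshold for the two Hamiltonians being compared:

```python
import numpy as np

E, h = 1.0, 1.0
hbar = h / (2.0 * np.pi)   # figure convention E = h = 1
x = 0.5
P_th = 0.95                # threshold success probability, as in Fig. 6

def threshold_time(alpha, delta, beta):
    """First time at which P(t) of Eq. (it) reaches P_th (real beta),
    with H11 - H22 and H12 built from Eq. (heq)."""
    d = E * ((alpha - delta) + 2.0 * beta * x + 2.0 * delta * x**2)  # H11 - H22
    od = E * (beta + delta * x) * np.sqrt(1.0 - x**2)                # H12
    Omega = np.sqrt(d**2 + 4.0 * od**2)
    Pmax = (d * x + 2.0 * od * np.sqrt(1.0 - x**2)) ** 2 / Omega**2
    # P(t) = Pmax sin^2(theta) + x^2 cos^2(theta), theta = Omega t / (2 hbar)
    theta = np.arccos(np.sqrt((Pmax - P_th) / (Pmax - x**2)))
    return 2.0 * hbar * theta / Omega, Pmax

t_H5, Pmax_H5 = threshold_time(0.5, 0.5, 1.0)  # symmetric case H_5
t_H, Pmax_H = threshold_time(0.5, 1.0, 1.0)    # asymmetric case H
print(t_H5, Pmax_H5)  # ~0.1667 and 1.0
print(t_H, Pmax_H)    # ~0.1579 and ~0.9758: the threshold is crossed earlier
```

The asymmetric Hamiltonian, although its maximal success probability falls short of one, crosses the $0.95$ threshold before the optimal $\mathcal{H}_{5}$ does.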
Assuming $\alpha=\delta=0.5$, $\beta=1$, and $x=0.5$, the unit success probability is obtained with $\mathcal{H}_{5}$ at $t_{\mathcal{H}_{5}}^{\ast}=1/5=0.2$, while the chosen threshold value $\mathcal{P}_{\text{threshold}}=0.95$ is reached at $\tilde{t}_{\mathcal{H}_{5}}\simeq0.1667$. However, assuming $\mathcal{H}$ with $\alpha=0.5$, $\delta=1$, $\beta=1$, and $x=0.5$, despite the fact that the maximal success probability is only nearly optimal with $\mathcal{P}_{\max}\simeq0.9758\leq1$, the selected threshold value $\mathcal{P}_{\text{threshold}}=0.95$ is reached at $\tilde{t}_{\mathcal{H}}\simeq0.1579\leq\tilde{t}_{\mathcal{H}_{5}}$. For a discussion on the choice of the numerical values of the quantum overlap $x$, we refer to Appendix C.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{fig5}
\caption{The plot on the left displays the minimum search time $t^{\ast}$ versus the quantum overlap $x$ for the search Hamiltonians $\mathcal{H}_{1}$ (dashed line), $\mathcal{H}_{3}$ (thin solid line), and $\mathcal{H}_{5}$ (thick solid line). The plot on the right, instead, shows $\mathcal{P}\left( t\right) $ versus $t$ for the search Hamiltonians $\mathcal{H}_{1}$ (dashed line), $\mathcal{H}_{3}$ (thin solid line), and $\mathcal{H}_{5}$ (thick solid line). In the former plot, we assume $\alpha=\beta=1$ and $E=h=1$. In the latter plot, we also assume $x=1/2$.}
\label{fig5}
\end{figure}

\section{Concluding Remarks}

In this paper, we presented a detailed analysis concerning the computational aspects needed to analytically evaluate the transition probability from a source state $\left\vert s\right\rangle $ to a target state $\left\vert w\right\rangle $ in a continuous-time quantum search problem defined by a multi-parameter generalized time-independent Hamiltonian $\mathcal{H}$ in Eq. (\ref{hamilton}).
In particular, quantifying the performance of a quantum search in terms of speed (minimum search time, $t^{\ast}$) and fidelity (high success probability, $\mathcal{P}$), we considered a variety of special cases that emerge from the generalized Hamiltonian. Finally, recovering also the well-known Farhi-Gutmann analog quantum search scenario, we briefly discussed the relevance of a tradeoff between speed and fidelity, with emphasis on issues of both theoretical and practical importance to quantum information processing.

\subsection{Summary of main results}

Our main conclusions can be summarized as follows.

\begin{enumerate}
\item[{[1]}] First, we provided a detailed analytical computation of the transition probability $\mathcal{P}_{\left\vert s\right\rangle \rightarrow\left\vert w\right\rangle }\left( t\right) $ in Eq. (\ref{it}) from the source state $\left\vert s\right\rangle $ to the target state $\left\vert w\right\rangle $ under the working assumption that the quantum mechanical evolution is governed by the generalized quantum search Hamiltonian $\mathcal{H}$. Such a computation, despite being straightforward, is quite tedious.
Therefore, we have reason to believe it can be relevant to the novice with a growing interest in analog quantum search algorithms as well as to the expert seeking to find nearly-optimal solutions in realistic quantum search problems where a tradeoff between fidelity and minimum search time is required;

\item[{[2]}] Second, given the family $\mathcal{F}_{\mathcal{H}}\overset{\text{def}}{=}\left\{ \mathcal{H}\right\} $ with $\mathcal{H}=\mathcal{H}\left( x\text{; }\alpha\text{, }\beta\text{, }\gamma\text{, }\delta\right) $ where $\alpha$ and $\delta\in\mathbb{R}$ while $\beta=\gamma^{\ast}\in\mathbb{C}$, we have conveniently identified two sub-families $\mathcal{F}_{\mathcal{H}}^{\left( \text{optimal}\right) }\overset{\text{def}}{=}\left\{ \mathcal{H}_{1}\text{, }\mathcal{H}_{3}\text{, }\mathcal{H}_{5}\right\} $ and $\mathcal{F}_{\mathcal{H}}^{\left( \text{nearly-optimal}\right) }\overset{\text{def}}{=}\left\{ \mathcal{H}_{2}\text{, }\mathcal{H}_{4}\text{, }\mathcal{H}_{6}\text{, }\mathcal{H}_{7}\right\} $ that contain quantum search Hamiltonians yielding optimal and nearly-optimal fidelity values, respectively. The former sub-family is specified by the symmetry between the \emph{real} parameters $\alpha$ and $\delta$. The latter sub-family, instead, is characterized by the complexity (that is, the essence of being \emph{complex}-valued) of the parameters $\beta$ and $\gamma$. Each element of the family has been classified with respect to its maximal success probability and the minimum time necessary to reach such a value. An overview of these results appears in Table I. In addition, in Fig. $4$ we report on the detrimental effects caused by the presence of asymmetries and complexities in the parameters that specify the particular quantum search Hamiltonian on the maximal success probability, in the limiting working assumption that the source state and the target state are orthogonal.
\item[{[3]}] Third, we ranked the performance of each element of the sub-family $\mathcal{F}_{\mathcal{H}}^{\left( \text{optimal}\right) }$ by analyzing the minimum search time required to reach unit fidelity. These results are displayed in Table II. In particular, as evident from Fig. $5$, we find that $\mathcal{H}_{5}$ can outperform the Farhi-Gutmann search Hamiltonian $\mathcal{H}_{1}$ in terms of speed.

\item[{[4]}] Lastly, despite the observed detrimental effects of asymmetries and complexities on the numerical values of the maximal success probabilities, we find that imperfect search Hamiltonians can outperform perfect search Hamiltonians provided that only a large nearly-optimal fidelity value is sought. This finding is reported in Fig. $6$.
\end{enumerate}

\subsection{Limitations and possible future developments}

In what follows, we report some limitations together with possible future improvements of our investigation.

\begin{enumerate}
\item[{[1]}] First, we have reason to believe our analysis presented in this paper could be a useful starting point for a more rigorous investigation that would include both experimental and theoretical aspects of a tradeoff between fidelity and run time in quantum search algorithms. Indeed, we are aware that it is helpful to decrease the control time of the control fields employed to generate a target quantum state or a target quantum gate in order to mitigate the effect of decoherence originating from the interaction of a quantum system with the environment. Moreover, we also know that it may be convenient to increase the control time beyond a certain critical value to enhance the fidelity of generating such targets and reach values arbitrarily close to the maximum $\mathcal{F}=1$. However, when the control time reaches a certain value that may be close to the critical value, decoherence can become a dominant effect.
Therefore, investigating the tradeoff between control time and fidelity can be of great practical importance in quantum computing \cite{rabitz12,rabitz15,cappellaro18}. Given that it is very challenging to find a rigorous optimal time control, and that in many cases the control is only required to be sufficiently precise and short, one can design algorithms seeking suboptimal control solutions for much reduced computational effort. For instance, the fidelity of tomography experiments is rarely above $99\%$ due to the limited control precision of the tomographic experimental techniques, as pointed out in Ref. \cite{rabitz15}. Under such conditions, it is unnecessary to prolong the control time since the departure from the optimal scenario is essentially negligible. Hence, it can certainly prove worthwhile to design slightly suboptimal algorithms that can be much cheaper computationally.

\item[{[2]}] Second, we speculate it may be worth pursuing the possibility of borrowing ideas from approximate quantum cloning to design approximate quantum search algorithms capable of finding targets in the presence of \emph{a priori} information. As a matter of fact, recall that the no-cloning theorem in quantum mechanics states that it is impossible to consider a cloning machine capable of preparing two exact copies of a completely unknown pure qubit state \cite{zurek82}. However, with the so-called universal cloner \cite{hillery96} (that is, a state-independent symmetric cloner) acting on the whole Bloch sphere, it is possible to prepare two approximate copies of an unknown pure qubit state with the same fidelity $\mathcal{F}=5/6<1$. Interestingly, it is possible to enhance these fidelity values achieved with a universal cloner by properly exploiting some relevant \emph{a priori} information on a given quantum state that one wishes to clone.
This idea of exploiting \emph{a priori} information generated a number of state-dependent cloning machines capable of outperforming the universal cloner for some special sets of qubits. For instance, phase-covariant cloners are especially successful for cloning qubits chosen from the equator of the Bloch sphere \cite{macchiavello00}, while belt quantum cloning machines are very efficient in cloning qubits between two latitudes on the Bloch sphere \cite{wang09}. For an interesting method for improving the cloning fidelity in terms of \emph{a priori} amplitude and phase information, we refer to Ref. \cite{kang16}. We shall pursue this line of investigation in forthcoming efforts.

\item[{[3]}] Third, from a more applied perspective, despite its relative simplicity, the idea of finding a tradeoff between search time and fidelity in analog quantum searching as presented in this paper could potentially be regarded as a valid starting point for a time-fidelity tradeoff analysis in disease diagnosis in complex biological systems. For these systems, the source and target states are replaced with source and target patterns, respectively. In particular, the target pattern classifies the type of illness being searched for. For recent important investigations based upon the joint use of quantum field theoretic methods and general relativity techniques concerning the transition from source to target patterns in complex biological systems, including DNA and protein structures, we refer to Refs. \cite{capozziello1,capozziello2}. More realistic applications of our work are very important, and we shall also take a closer look at these aspects in the near future.

\item[{[4]}] Fourth, a further possibility could be related to cosmology. As discussed in \cite{capozziello3,capozziello4,luongo19,capozziello13,capozziello11}, there exist possible connections between quantum entanglement and cosmological observational parameters.
In fact, assuming that two cosmological epochs are entangled with each other, it is possible to recover dynamical properties by measuring the degree of entanglement. Specifically, the effects of the so-called \textit{dark energy} could be due to the entanglement between states, since a negative pressure arises. In this process, an \textquotedblleft entanglement weight\textquotedblright, the so-called negativity of entanglement, can be defined, and the apparent accelerated expansion observed today occurs when the cosmological parameters are entangled. In this perspective, dark energy could be seen as a straightforward consequence of entanglement, without invoking further (not yet observed) fundamental material components. The present analysis could help in this cosmological perspective once the cosmological equations are modeled as Schr\"{o}dinger-like equations, as discussed in \cite{capozziello5}.

\item[{[5]}] Lastly, in real-life scenarios, searching in a completely unstructured fashion can be unnecessary. Instead, the search can be guided by employing some relevant \emph{prior} information about the location of the target state. Interestingly, this is what happens in the framework of quantum search with advice \cite{montanaro11,montanaro17}. In this framework, the aim is to minimize the expected number of queries with respect to the probability distribution encoding relevant information about where the target might be located. A major advancement of the work we presented in this paper would be figuring out a systematic way to incorporate relevant \emph{prior} information about the possible location of the target directly into the continuous-time quantum search Hamiltonian. We leave this intriguing open problem to future scientific endeavours.
\end{enumerate}

In conclusion, our proposed analysis was partially inspired by some of our previous findings reported in Refs.
\cite{cafaro-alsing19A, cafaro-alsing19B} and can be improved in a number of ways in the immediate short term. For instance, it would be interesting to address the following question: How large should the nearly optimal fidelity value be chosen, given the finite precision of quantum measurements and the unavoidable presence of decoherence effects in physical implementations of quantum information processing tasks? We leave the investigation of a realistic tradeoff between speed and fidelity in analog quantum search problems to forthcoming scientific efforts.
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{fig6}
\caption{The thin and the thick solid lines display $\mathcal{P}\left( t\right) $ versus $t$ for the search Hamiltonians $\mathcal{H}$ and $\mathcal{H}_{5}$, respectively. In the former case, we set $\alpha=0.5$, $\delta=1$, $\beta=1$, and $x=0.5$. In the latter case, instead, we set $\alpha=\delta=0.5$, $\beta=1$, and $x=0.5$. In both cases, we also assume $E=h=1$. The dashed line denotes the chosen threshold success probability value $\mathcal{P}_{\text{threshold}}=0.95$. Finally, the dotted line denotes the optimal success probability $\mathcal{P}=1$.}
\label{fig6}
\end{figure}

\begin{acknowledgments}
C. C. acknowledges the hospitality of the United States Air Force Research Laboratory (AFRL) in Rome-NY, where part of his contribution to this work was completed. S. C. acknowledges partial support of the \textit{Istituto Italiano di Fisica Nucleare} (INFN), \textit{iniziative specifiche} QGSKY and MOONLIGHT2.
\end{acknowledgments}
Sam Hayes was born in Aldershot. He studied Music at Cambridge University, and Choral Directing at the University of Uppsala, in Sweden, and at the Conservatoire Camille Saint-Saëns, in Paris. A fellow of the Royal College of Organists, he won both of the major prizes in the Choral Directing Diploma examinations in 2006. Since 2005 he has been Director of Music at Great St Mary's, The University Church, Cambridge. He taught on the Music degree course at the University of Cambridge for ten years, and has held the position of Musician in Residence at King Edward VI Upper School, Bury St Edmunds since 2010. Sam now lives in East Surrey and is delighted to have been appointed as the first Music Director of Phoenix Choir of Crawley. Outside of music, his interests include travelling, particularly in the Balkans, gastronomy, cats and classic cars. Sam is ably assisted by our enthusiastic and sympathetic rehearsal accompanist, Gina Eason. Gina accompanies on both the piano and the organ in Surrey and Sussex. She also plays the cello and recorder and is a keen composer of works, including small-scale choral music, much of it for use in church services. In her spare time Gina's interests include outdoor activities such as cycling and bird watching, and she is an enthusiastic gardener.
Multiclavula fossicola is a species of lichen first described by Corner; it received its current name from R.H. Petersen in 1967. Multiclavula fossicola belongs to the genus Multiclavula and the family Clavulinaceae. No subspecies are listed in the Catalogue of Life.
\section{INTRODUCTION} The Schur lemma is an elementary but extremely useful statement in representation theory of groups and algebras.\\ The lemma is named after Issai Schur who used it to prove orthogonality relations and develop the basics of the representation theory of finite groups. Schur's lemma admits generalisations to Lie groups and Lie algebras, the most common of which is due to Jacques Dixmier.\\ The Schur Lemma appears also in the study of stability conditions: when an abelian category $\mathcal{A}$ is equipped with a stability condition, then every endomorphism of a stable object is either the zero morphism or is an isomorphism, see \cite{Ru,BST}. More generally, for $E_1, E_2 \in \mathcal{A}$ two stable objects of the same slope $\phi(E_1) =\phi(E_2)$, any morphism from $E_1$ to $E_2$ is either the zero morphism or is an isomorphism. Bridgeland, in his seminal work on stability conditions on triangulated categories \cite{Br}, identifies the need to define a notion of stability on quasi-abelian categories, equipped with the exact structure of strict morphisms. This motivates the study of the Schur Lemma in the context of exact categories.\\ Exact categories generalise abelian categories: they are additive categories with a choice of a Quillen exact structure \cite{Qu}, which is given by a class of short exact sequences, called admissible pairs of morphisms, satisfying Quillen's axioms.\\ The notion of abelian category is an abstraction of basic properties of the category $Ab$ of abelian groups, and more generally of the category $Mod(R)$ of modules over some ring $R$. So it is not difficult to check that what holds for these categories also generalises to the abelian context.\\ In \cite{Bau}, Baumslag gave a short proof of the Jordan–Hölder theorem for \emph{groups} by intersecting the terms in one subnormal series with those in the other series.
The Schur lemma and classical isomorphism theorems for categories of modules play a crucial role in the proof.\\ Our motivation is to generalise Baumslag's idea, so we first generalise the Schur lemma to the context of exact categories and it turns out that the new version holds for any exact structure: \begin{proposition}[Proposition \ref{schur}]({\bf{The $\mathcal{E}$-Schur lemma}}) Let $\begin{tikzcd} X\arrow[r, "\circ" description, "f"] & Y \end{tikzcd}$ be an admissible non-zero morphism, that is, $f$ can be factored as an admissible epic followed by an admissible monic. Then the following hold: \begin{enumerate} \item[$\bullet$]if $X$ is $\mathcal{E}-$simple, then $f$ is an admissible monic, \item[$\bullet$]if $Y$ is $\mathcal{E}-$simple, then $f$ is an admissible epic. \end{enumerate} \end{proposition} Secondly, we study the notions of abelian intersections and sum, aiming at a generalisation. The abelian intersection, which exists and is well defined in a pre-abelian exact category, is not necessarily an \emph{admissible} subobject. So we introduce the following exact categories which are quasi-n.i.c.e. in the sense that they are {\bf n}ecessarily {\bf i}ntersection {\bf c}losed {\bf e}xact categories that do not necessarily admit admissible sums, and which we call {\bf{A.I}} since they admit {\bf{A}}dmissible {\bf{I}}ntersections: \begin{definition}[\ref{quasi-nice}] An exact category $(\mathcal{A}, \mathcal{E})$ is called an \emph{AI-category} if $\mathcal{A}$ is a pre-abelian additive category satisfying the following additional axiom: \begin{itemize} \item[$({AI})$] The pull-back $A$ of two admissible monics $j: C \rightarrowtail D$ and $g: B\rightarrowtail D$ exists and yields two admissible monics $i$ and $f$.
\[ \begin{tikzcd} {A} \arrow[r, tail, "i"] \arrow[d, "f"', tail] & {B} \arrow[d, "g", tail] \\ {C} \arrow[r, tail, "j"'] & {D} \arrow[ul, phantom, "\lrcorner" very near start] \end{tikzcd} \] \end{itemize} \end{definition} Inspired by the abelian sum, we also introduce exact categories satisfying the admissible sum property, that we call {\bf{A.S}} exact categories, since they admit {\bf{A}}dmissible {\bf{S}}ums: \begin{definition}[\ref{AS}] An exact category $(\mathcal{A}, \mathcal{E})$ is called an \emph{AS-category} if it satisfies the following additional axiom: \begin{itemize} \item[$({AS})$]The morphism $u$ in the diagram below, given by the universal property of the push-out $E$ of $i$ and $f$, is an admissible monic. \[ \begin{tikzcd} {A} \arrow[r, tail, "i"] \arrow[d, "f"', tail] & {B} \arrow[d, "l", tail] \arrow[ddr, "g", tail, bend left]& \\ {C} \arrow[r, tail, "k"'] \arrow[drr, tail, "j"', bend right] & {E} \arrow[ul, phantom, "\ulcorner" near end] \arrow[dr, tail, "u"] \\ & & D \end{tikzcd} \] \end{itemize} \end{definition} Combining these two new notions, we introduce a special sub-class of the AI exact categories, that we call {\bf{A.I.S}} exact categories, since they admit {\bf{A}}dmissible {\bf{I}}ntersections and {\bf{S}}ums. These categories were called \emph{nice exact categories} in a previous version of this work: \begin{definition}[\ref{nice}] An exact category $(\mathcal{A}, \mathcal{E})$ is an AIS-category, or a \emph{nice} exact category, if it satisfies the (AIS) axiom, which is defined by both the (AI) and the (AS) axioms at the same time. \end{definition} These categories were recently studied by the first author, Thomas Br\"ustle, Amit Shah, Aran Tattar and Sven-Ake Wegner and we obtained the following characterisations: \begin{theorem}\cite[Theorem 6.1]{HSW} A category $(\mathcal{A}, \mathcal{E}_{max})$ is quasi-abelian if and only if it is an AI-category.
\end{theorem} \begin{theorem}\cite[Theorem 4.22]{BHT} An exact category $(\mathcal{A}, \mathcal{E})$ is an AIS-category if and only if $\mathcal{A}$ is abelian and $\mathcal{E}=\mathcal{E}_{all}$. \end{theorem} These results make us conclude that the pull-back and push-out notions of unique intersection and sum do not always apply to all exact categories. This motivates the study of general admissible intersection and sum in \cite[Section 5]{BHT}, where we introduce (\cite[Definition 5.5]{BHT}) a notion of intersection and sum that works for all exact categories, and using this we study the Jordan-H\"older exact categories \cite[Theorem 5.11, 6.8, 6.13]{BHT}.\\ Finally, we reprove the classical isomorphism theorems from module theory using exact categorical arguments and we apply it all in the last section, where we fix an abelian category $\mathcal{A}$ with its maximal exact structure $\mathcal{E}_{all}$ given by the class of all short exact sequences and follow Baumslag's ideas to obtain a proof of the Jordan-H\"older theorem for abelian categories using the language of exact structures. \begin{comment} \begin{theorem}[\ref{JH}] {\bf (Jordan-H\"older theorem)} Let $(\mathcal{A}, \mathcal{E})$ be an AIS-category. Any two $\mathcal{E}-$composition series for a finite object $X$ of $\mathcal{A}$ \[ 0=X_0 \; \imono{i_0} X_1 \;\imono{i_1} \cdots \; \imono{i_{m-2}} \; X_{m-1}\;\imono{i_{m-1}}\; X_m=X \] and \[0=X'_0 \; \imono{i'_0} X'_1 \;\imono{i'_1} \cdots \; \imono{i'_{n-2}} \; X'_{n-1}\;\imono{i' _{n-1}}\; X'_n=X \] are equivalent, that is, they have the same length and the same composition factors, up to permutation and isomorphism. \end{theorem} \end{comment} This proof is different than the abelian proof studied in \cite[Section 4.5, page 174]{par}.\\ Note that, parallel to our work, Enomoto studies the Schur lemma in \cite{E20} from the viewpoint of semibricks and wide subcategories. 
\paragraph{Acknowledgements.} The authors would like to thank their supervisor Thomas Br\"ustle for his support, and would like also to thank Aran Tattar, Amit Shah, Sven-Ake Wegner and Haruhisa Enomoto for interesting discussions.\\ The authors were supported by Bishop's University, Université de Sherbrooke, and NSERC of Canada. The first author is supported by the scholarship "thésards étoiles" of the ISM. \section{Background} In this section we recall from \cite{GR,Bu} the definition of Quillen exact structures and the definition of a pre-abelian additive category. \begin{definition}{}Let $\mathcal{A}$ be an additive category. A kernel-cokernel pair $(i, d)$ in $\mathcal{A}$ is a pair of composable morphisms such that $i$ is kernel of $d$ and $d$ is cokernel of $i$. If a class $\mathcal{E}$ of kernel-cokernel pairs on $\mathcal{A}$ is fixed, an {\em admissible monic} is a morphism $i$ for which there exists a morphism $d$ such that $(i,d) \in \mathcal{E}$. An {\em admissible epic} is defined dually. Note that admissible monics and admissible epics are referred to as inflations and deflations in \cite{GR}, respectively. We depict an admissible monic by $ \;\xymatrix{ \ar@{>->}[r] & \\} $ and an admissible epic by $ \xymatrix{ \ar@{->>}[r] & \\} $.
An {\em exact structure} $\mathcal{E}$ on $\mathcal{A}$ is a class of kernel-cokernel pairs $(i, d)$ in $\mathcal{A}$ which is closed under isomorphisms and satisfies the following axioms: \begin{enumerate} \item[(A0)] For all objects $A \in Obj\mathcal{A}$ the identity $1_A$ is an admissible monic \item[{(A0)$^{op}$}] For all objects $A \in Obj\mathcal{A}$ the identity $1_A$ is an admissible epic \item[(A1)] the class of admissible monics is closed under composition \item[{(A1)}$^{op}$] the class of admissible epics is closed under composition \item[(A2)] The push-out of an admissible monic $i: A \;\xymatrix{ \ar@{>->}[r] & \\} B$ along an arbitrary morphism $f: A \to C$ exists and yields an admissible monic $j$: \[\xymatrix{ A \; \ar[d]_{f} \ar@{ >->}[r]^{i} \ar@{}[dr]|{\text{PO}} & B\ar[d]^{g}\\ C \; \ar@{>->}[r]^{j} & D} \] \item[{(A2)}$^{op}$]The pull-back of an admissible epic $h$ along an arbitrary morphism $g$ exists and yields an admissible epic $k$ \[\xymatrix{ A \; \ar[d]^{f} \ar@{ ->>}[r]^{k} \ar@{}[dr]|{\text{PB}} & B\ar[d]^{g}\\ C \; \ar@{->>}[r]^{h} & D} \] \end{enumerate} An {\em exact category} is a pair $(\mathcal{A}, \mathcal{E})$ consisting of an additive category $\mathcal{A}$ and an exact structure $\mathcal{E}$ on $\mathcal{A}$. The pairs $(i,d)$ forming the class $\mathcal{E}$ are called {\em admissible short exact sequences}, or just {\em admissible sequences.} \end{definition} \begin{definition} \cite[Definition 8.1]{Bu}\label{ad mor} A morphism $f: A \rightarrow B$ in an exact category is called \emph{admissible} if it factors as a composition of an admissible monic with an admissible epic. Admissible morphisms will sometimes be displayed as \[ \begin{tikzcd} A\arrow[r, "\circ" description, "f"] & B \end{tikzcd} \] in diagrams, and the classes of admissible arrows of $\mathcal{A}$ will be denoted as ${\mbox{Hom}^{ad}_{\mathcal{A}}}(-,-)$.
\end{definition} \begin{proposition} \cite[Proposition 2.16]{Bu}\label{obscure axiom} Suppose that $i: A\rightarrow B$ is a morphism in $\mathcal{A}$ admitting a cokernel. If there exists a morphism $j:B\rightarrow C$ such that the composite $j\circ i: A \;\xymatrix{ \ar@{>->}[r] & \\} C$ is an admissible monic, then $i$ is an admissible monic. \end{proposition} \begin{definition}An additive category $\mathcal{A}$ is \emph{pre-abelian} if it has kernels and cokernels. \end{definition} \begin{example}An additive category $\mathcal{A}$ is \emph{abelian} if it is \emph{pre-abelian} and all morphisms are \emph{strict}. So abelian categories are an example of pre-abelian additive categories where every morphism is admissible. \end{example} \section{The $\mathcal{E}$-Schur lemma} In this section we generalise the abelian Schur lemma to the context of exact categories. \begin{definition}\cite[Definition 3.1]{BHLR} Let $A$ and $B$ be objects of an exact category $(\mathcal{A},\mathcal{E})$. If there is an admissible monic $i: A \rightarrowtail B$ we say the pair $(A,i)$ is an {\em admissible subobject} or {\em $\mathcal{E}-$subobject of $B$}. Often we will refer to the pair $(A,i)$ by the object $A$ and write $A {\subset}_{\mathcal{E}} B $. If $i$ is not an isomorphism, we use the notation $A {\subsetneq}_{\mathcal{E}} B $ and if, in addition, $A \not \cong 0$ we say that $(A,i)$ is a \emph{proper} admissible subobject of $B$. \end{definition} \begin{definition}\cite[Definition 3.3]{BHLR} A non-zero object $S $ in $(\mathcal{A},\mathcal{E})$ is {\em $\mathcal{E}-$simple} if $S$ admits no $\mathcal{E}-$sub\-objects except $0$ and $S$, that is, whenever $ A \subset_\mathcal{E} S$, then $A$ is the zero object or isomorphic to $S$. \end{definition} \begin{remark}\label{quotient} Let $A$ be an $\mathcal{E}-$subobject of $B$ given by the monic $A{\imono{i}} B$.
We denote by $B{/}^{i}A$ (or simply $B/A$ when $i$ is clear from the context) the Cokernel of $i$, thus we denote the corresponding admissible sequence as \[ A \imono{i} B \xymatrix{ \ar@{->>}[r] & \\} B/A.\] \end{remark} \begin{remark}\label{zero coker} An admissible monic $A \imono{i} B$ is proper precisely when its co\-kernel is non-zero. In fact, by uniqueness of kernels and cokernels, the exact sequence $$B\imono{1_B} B \xymatrix{ \ar@{->>}[r] & \\} 0$$ is, up to isomorphism, the only one with zero cokernel. Thus an admissible monic $i$ has $\mbox{Coker}(i) = 0$ precisely when $i$ is an isomorphism. Dually, an admissible epic $B \iepi{d} C$ is an isomorphism precisely when $\mbox{Ker}\,(d) = 0$. In particular a morphism which is at the same time an admissible monic and epic is an isomorphism.\\ Note that saying a subobject is proper means that {\em all} admissible monics representing it are proper. \end{remark} \begin{comment} \begin{definition}\cite[Definition 6.4]{BHLR} An object $X$ of $(\mathcal{A}, \mathcal{E})$ is $\mathcal{E}-$Noetherian if any increasing sequence of $\mathcal{E}-$subobjects of $X$ \[X_1 \; \;\xymatrix{ \ar@{>->}[r] & \\} X_2 \; \;\xymatrix{ \ar@{>->}[r] & \\} \cdots \;\xymatrix{ \ar@{>->}[r] & \\} \; X_{n-1}\; \;\xymatrix{ \ar@{>->}[r] & \\} \; X_n \;\xymatrix{ \ar@{>->}[r] & \\} X_n \; \cdots \] becomes stationary. Dually, an object $X$ of $(\mathcal{A}, \mathcal{E})$ is $\mathcal{E}-$Artinian if any descending sequence of $\mathcal{E}-$subobjects of $X$ \[\cdots \; X_n \;\xymatrix{ \ar@{>->}[r] & \\} X_n \; \;\xymatrix{ \ar@{>->}[r] & \\} X_{n-1}\; \;\xymatrix{ \ar@{>->}[r] & \\} \cdots \; \;\xymatrix{ \ar@{>->}[r] & \\} X_2 \;\xymatrix{ \ar@{>->}[r] & \\} \; X_1\;\] becomes stationary. The exact category $(\mathcal{A}, \mathcal{E})$ is called $\mathcal{E}-$Artinian (respectively $\mathcal{E}-$Noetherian) if every object is $\mathcal{E}-$Artinian (respectively \mbox{$\mathcal{E}-$Noetherian)}.
\end{definition} \bigskip \end{comment} \begin{lemma}({\bf{The $\mathcal{E}$-Schur lemma}})\label{schur} Let $\begin{tikzcd} X\arrow[r, "\circ" description, "f"] & Y \end{tikzcd}$ be an admissible non-zero morphism. \begin{enumerate} \item[$\bullet$]if $X$ is $\mathcal{E}-$simple, then $f$ is an admissible monic, \item[$\bullet$]if $Y$ is $\mathcal{E}-$simple, then $f$ is an admissible epic. \end{enumerate} \end{lemma} \begin{proof} Let \[ \xymatrix{X\ar[rd]^{f} \ar@{->>}[d]_{e} \\ S \;\ar@{>->}[r]^-m & Y } \] be the factorisation of $f$ as a composition of an admissible epic $e$ with an admissible monic $m$. \begin{enumerate} \item[$\bullet$] if $X$ is $\mathcal{E}-$simple then either $\mbox{Ker}\,(e)=X$ or $\mbox{Ker}\,(e)=0$, but in the first case $e=0$ and so $f=0$, contradicting the assumption $f\neq 0$. Hence $Ker(e)=0$, and by Remark \ref{zero coker}, $e\cong 1_{X}$ and $f \cong m$ and therefore $f$ is an admissible monic. \item[$\bullet$] If $Y$ is $\mathcal{E}-$simple, then the $\mathcal{E}-$subobject $S$ is either zero or equal to $Y$, but in case $S=0$, $e=0$, we get $f=m\circ e=0$ which contradicts $f\neq 0$. Therefore $S=Y$ and $m: Y \;\xymatrix{ \ar@{>->}[r] & \\} Y$ is an admissible monic with zero cokernel. By Remark \ref{zero coker}, $m\cong 1_{Y}$, and $f\cong e$, which means that $f$ is an admissible epic. \end{enumerate} \end{proof} \begin{cor}\label{Aut} Let $S$ be an $\mathcal{E}-$simple object, then the non-zero admissible endomorphisms $\begin{tikzcd} S\arrow[r, "\circ" description, "f"] & S \end{tikzcd}$ form the group Aut$(S)$ of automorphisms of $S$. \end{cor} \begin{proof} It follows from Lemma \ref{schur} that any non-zero admissible morphism $\begin{tikzcd} S\arrow[r, "\circ" description, "f"] & S \end{tikzcd}$ is an admissible monic and an admissible epic, thus $f$ is an isomorphism.\\ Conversely, every isomorphism is admissible, so we get the group of automorphisms of $S$ which is closed under composition by (A2) or $(A2)^{op}$. 
\end{proof} \begin{remark} The classical Schur lemma on abelian categories states that the endomorphism ring of a simple object is a division ring. We show in Corollary \ref{Aut} that any non-zero admissible endomorphism of an $\mathcal{E}-$simple object is invertible, but it is not true in general that the set of admissible endomorphisms forms a ring. In fact, the composition of admissible morphisms need not be admissible, (see \cite[Remark 8.3]{Bu}), nor is it true for sums of admissible morphisms, as we discuss in \cite{BHT}. \end{remark} \section{AI, AS and AIS-CATEGORIES } Let us first recall the definitions of \emph{intersection} and \emph{sum} of subobjects for abelian categories in general as mentioned in \cite[section 5]{Gabriel} or as defined in \cite[Definition 2.6 ]{Po}: \bigskip \begin{definition}\label{ab sum}Let $(X_1,i_1)$, $(X_2,i_2)$ be two subobjects of an object $X$ in an abelian category, that is, we consider monics $i_1:X_1\to X$ and $i_2:X_2\to X$. We denote by $X_1{+}_{X}X_2$ (or simply $X_1+X_2$ when there is no possibility of confusion) the {\em sum of $X_1$ and $X_2$}, which is defined as the image $\mbox{Im}\, (s)$ of the morphism \[s=[i_1 \; i_2]: X_1\oplus X_2\rightarrow X.\] \bigskip \end{definition} \begin{definition}\label{ab intersection} Let $(X_1,i_1)$, $(X_2,i_2)$ be two subobjects of an object $X$ in an abelian category. We denote by $X_1{\cap}_X X_2$ (or simply $X_1{\cap} X_2$) the {\em intersection of $X_1$ and $X_2$}, defined as the kernel $\mbox{Ker}\,(t) $ of the morphism \[t = \begin{bmatrix} d_1 \\ d_2 \\ \end{bmatrix}: X\rightarrow Y_1\oplus Y_2\] where $d_1:X \to Y_1$ and $d_2: X \to Y_2$ are the cokernels of the monics $i_1$ and $i_2$, respectively. \end{definition} \bigskip Note that this intersection, which exists and is well defined in a pre-abelian exact category, is not necessarily an \emph{admissible} subobject. So let us introduce the following exact categories which are quasi-n.i.c.e. 
in the sense that they are {\bf n}ecessarily {\bf i}ntersection {\bf c}losed {\bf e}xact categories that do not necessarily admit admissible sums, and which we call {\bf{A.I}} since they admit {\bf{A}}dmissible {\bf{I}}ntersections: \begin{definition}\label{quasi-nice} An exact category $(\mathcal{A}, \mathcal{E})$ is called an \emph{AI-category} if $\mathcal{A}$ is a pre-abelian additive category satisfying the following additional axiom: \begin{itemize} \item[$({AI})$] The pull-back $A$ of two admissible monics $j: C \rightarrowtail D$ and $g: B\rightarrowtail D$ exists and yields two admissible monics $i$ and $f$. \[ \begin{tikzcd} {A} \arrow[r, tail, "i"] \arrow[d, "f"', tail] & {B} \arrow[d, "g", tail] \\ {C} \arrow[r, tail, "j"'] & {D} \arrow[ul, phantom, "\lrcorner" very near start] \end{tikzcd} \] \end{itemize} \end{definition} \label{nice def} Let us also introduce exact categories satisfying the admissible sum property, that we call {\bf{A.S}} exact categories, since they admit {\bf{A}}dmissible {\bf{S}}ums: \begin{definition}\label{AS} An exact category $(\mathcal{A}, \mathcal{E})$ is called an \emph{AS-category} if it satisfies the following additional axiom: \begin{itemize} \item[$({AS})$]The morphism $u$ in the diagram below, given by the universal property of the push-out $E$ of $i$ and $f$, is an admissible monic. \[ \begin{tikzcd} {A} \arrow[r, tail, "i"] \arrow[d, "f"', tail] & {B} \arrow[d, "l", tail] \arrow[ddr, "g", tail, bend left]& \\ {C} \arrow[r, tail, "k"'] \arrow[drr, tail, "j"', bend right] & {E} \arrow[ul, phantom, "\ulcorner" near end] \arrow[dr, tail, "u"] \\ & & D \end{tikzcd} \] \end{itemize} \end{definition} Let us now introduce a special sub-class of the AI exact categories, that we call {\bf{A.I.S}} exact categories, since they admit {\bf{A}}dmissible {\bf{I}}ntersections and {\bf{S}}ums.
These categories were called \emph{nice exact categories} in a previous version of this work: \begin{definition}\label{nice} An exact category $(\mathcal{A}, \mathcal{E})$ is an AIS-category, or a \emph{nice} exact category, if it satisfies the following additional axiom: \begin{itemize} \item[$(AIS)$] The pull-back of two admissible monics $j: C \;\xymatrix{ \ar@{>->}[r] & \\} D$ and $g: B \;\xymatrix{ \ar@{>->}[r] & \\} D$ exists and yields two admissible monics $i$ and $f$: \[\xymatrix{ A \; \ar@{>->}[d]_{f} \ar@{ >->}[r]^{i} \ar@{}[dr]|{\text{PB}} & B\ar@{>->}[d]^{g}\\ C \; \ar@{>->}[r]^{j} & D} \] and moreover, the push-out along these pull-backs yields an admissible monic $u$\footnote{The existence of $u$ is given by the universal property of the push-out.}: \[\xymatrix{ PB \; \ar@{ >->}[d]^{f} \ar@{ >->}[r]^{i} \ar@{}[dr]|{\text{PO}} & B\ar@{ >->}[d]^{l} \ar@{ >->}[ddr]^{g} \\ C \; \ar@{ >->}[drr]^{j} \ar@{>->}[r]^{k} & PO \ar@{ >->}[dr]^{u} \\ & & D.} \] \end{itemize} \end{definition} Now let us define relative notions of intersection and sum: \begin{definition} \label{intersection & sum} Let $(X_1,i_1)$, $(X_2,i_2)$ be two $\mathcal{E}$-subobjects of an object $X$. We define their \emph{intersection}, $X_1{\cap}_X X_2$, to be the pullback \[ \begin{tikzcd} X_1{\cap}_X X_2 \arrow[dr, phantom, "\lrcorner", very near end] \arrow[r, "s_1", tail] \arrow[d, "s_2"', tail] & X_1 \arrow[d, "i_1", tail] \\ X_2 \arrow[r, "i_2"', tail] & X. \end{tikzcd} \] We then define their \emph{sum}, $X_1{+}_{X}X_2$, to be the pushout \[ \begin{tikzcd} X_1{\cap}_X X_2 \arrow[r, "s_1", tail] \arrow[d, "s_2"', tail] & X_1 \arrow[d, "j_1", tail] \\ X_2 \arrow[r, "j_2"', tail] & \arrow[ul, phantom, "\ulcorner", near end]X_1{+}_{X}X_2. \end{tikzcd} \] \end{definition} \begin{remark} If $(\mathcal{A},\mathcal{E})$ is an AIS-category, then this intersection and sum are well-defined and admissible for any two admissible subobjects.
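For instance (a standard illustration rather than a new result), in the abelian category of abelian groups with the maximal exact structure, which is an AIS-category by Corollary \ref{abelian is nice}, the subgroups $2\mathbb{Z}$ and $3\mathbb{Z}$ of $\mathbb{Z}$ satisfy
\[
2\mathbb{Z}\,{\cap}_{\mathbb{Z}}\,3\mathbb{Z}=6\mathbb{Z}
\qquad\text{and}\qquad
2\mathbb{Z}\,{+}_{\mathbb{Z}}\,3\mathbb{Z}=\mathbb{Z},
\]
so the pull-back and push-out of Definition \ref{intersection & sum} recover the usual intersection and sum of subgroups.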
\end{remark} \begin{remark} Let $(X_1,i_1)$, $(X_2,i_2)$ and $(Y,j)$ be $\mathcal{E}$-subobjects of an object $X$. Then \begin{enumerate} \item[i)]$X_1 \cap_X X_1= X_1 = X_1 +_X X_1$. \item[ii)]If $X_1{+}_{X}X_2=0_{\mathcal{A}}$ then $X_1=X_2=0_{\mathcal{A}}$. \end{enumerate} \end{remark} \begin{remark} \label{sum intersection as coker ker} Equivalently, for two $\mathcal{E}$-subobjects $(X_1,i_1)$, $(X_2,i_2)$ of an object $X$ we have \[ X_1{\cap}_X X_2 = \mbox{Ker}\, \left( \begin{tikzcd}[sep= large] X_1 \oplus X_2 \arrow[r, "{[ i_1 - i_2] }"] & X \end{tikzcd} \right) \] and \[ X_1{+}_{X}X_2 = \mbox{Coker} \left( \begin{tikzcd}[sep=large] X_1 \cap_X X_2 \arrow[r, "{\left[ s_1 -s_2 \right]^t}"] & X_1 \oplus X_2 \end{tikzcd} \right). \] Thus, as the direct sum is an associative operation, so are the sum and intersection operations. Moreover, the direct sum is commutative up to isomorphism, and so are the sum and intersection. \end{remark} Now let us show how this definition generalises the abelian versions from Definitions \ref{ab sum} and \ref{ab intersection}: \begin{proposition}\label{intersection in abelian} Let $(\mathcal{A},\mathcal{E}_{all})$ be an abelian exact category and let $(X_1,i_1)$ and $(X_2,i_2)$ be two $\mathcal{E}$-subobjects of an object $X$. Then $\mbox{Ker}\, t$ forms the pull-back of $(X, i_1, i_2)$, where \[t = \left[ \begin{smallmatrix} d_1 \\ d_2 \end{smallmatrix} \right]: X\rightarrow X/X_1\oplus X/X_2\] is given by the cokernels $d_1,d_2$ of $i_1, i_2$ as in Definition \ref {ab intersection}. \end{proposition} \begin{proof} Let us consider the following diagram \[ \begin{tikzcd} {\mbox{Ker}\, t} \arrow[r, "k_1"] \arrow[d, "k_2"'] \arrow[rd, "i"] & {X_1} \arrow[d, tail, "i_1"] & \\ {X_2} \arrow[r, tail, "i_2"'] & {X} \arrow[r, two heads, "d_2"] \arrow[d, two heads, "d_1"'] \arrow[rd, "t"] & {X / X_2} \arrow[d, tail] \\ & {X/ X_1} \arrow[r, tail] & {X/ X_1 \oplus X/ X_2} \end{tikzcd} \] where $\ i_1\circ k_1= i$ and $\ i_2\circ k_2= i$. 
Assume now one has an object $V$ and two morphisms $v_1$, $v_2$ such that $ i_1\circ v_1 = i_2\circ v_2$: \[ \begin{tikzcd} V \arrow[drr, "v_1", bend left] \arrow[ddr, "v_2"', bend right] \arrow[dr, dashed, "v"] & & & \\ &{\mbox{Ker}\, t} \arrow[r, "k_1"] \arrow[d, "k_2"'] \arrow[rd, "i"] & {X_1} \arrow[d, tail, "i_1"] & \\ & {X_2} \arrow[r, tail, "i_2"'] & {X} \arrow[r, two heads, "d_2"] \arrow[d, two heads, "d_1"'] \arrow[rd, "t"] & {X / X_2} \arrow[d, tail] \\ & & {X/ X_1} \arrow[r, tail] & {X/ X_1 \oplus X/ X_2} \end{tikzcd} \] Since $t\circ i_1\circ v_1 = \left[ \begin{smallmatrix} d_1 \\ d_2 \\ \end{smallmatrix} \right] \circ i_1\circ v_1 = \left[ \begin{smallmatrix} d_1\circ i_1\circ v_1 \\ d_2\circ i_1\circ v_1 \\ \end{smallmatrix} \right] = \left[ \begin{smallmatrix} 0 \\ d_2\circ i_2\circ v_2 \\ \end{smallmatrix} \right] = 0 $, by the universal property of the kernel there exists a unique morphism $v$ such that $i_1 \circ v_1= i\circ v = i_1 \circ k_1 \circ v$. Since $i_1$ is mono, we conclude $v_1 = k_1\circ v$. By symmetry, using that $i_2$ is mono, we also have $v_2 = k_2\circ v$. We conclude that $(\mbox{Ker}\, t, k_1, k_2) $ is the pull-back of $(X, i_1, i_2)$. \end{proof} \begin{proposition}\label{sum in abelian} Let $(\mathcal{A}, \mathcal{E}_{all})$ be an abelian exact category and let $(X_1,i_1)$ and $(X_2,i_2)$ be two $\mathcal{E}$-subobjects of an object $X$. Then $\mbox{Im}\, s$ forms the push-out of $(X_1{\cap}_X X_2, s_1, s_2)$ where $s$ is as in Definition \ref{ab sum} and $s_1$ and $s_2$ are given by the pull-back as in Definition \ref{intersection & sum}.
\end{proposition} \begin{proof} In the abelian case, the pull-back along $(X, i_1,i_2)$ is the kernel of $[i_1 \;\; i_2]: $ \[ \begin{tikzcd}[ampersand replacement = \&] \mbox{Ker}\, \left[ \begin{smallmatrix} i_1 & i_2 \end{smallmatrix} \right] \arrow[r, "{\left[ \begin{smallmatrix} s_1 \\ -s_2\end{smallmatrix} \right]}"] \& X_1 \oplus X_2 \arrow[r, "{\left[ \begin{smallmatrix} i_1 & i_2 \end{smallmatrix} \right]}"] \arrow[dr, "{\left[ \begin{smallmatrix} j_1 & j_2\end{smallmatrix} \right]}"'] \& X \\ \& \& \mbox{Coker} \left[ \begin{smallmatrix} s_1 \\ -s_2 \end{smallmatrix} \right] \end{tikzcd} \] Consider the pull-back diagram defining $(X_1{\cap}_{X} X_2)= \mbox{Ker}\, \left[ \begin{smallmatrix} i_1 & i_2 \end{smallmatrix} \right] $ \[ \begin{tikzcd} X_1{\cap}_X X_2 \arrow[dr, phantom, "\lrcorner", near end] \arrow[r, "s_1", tail] \arrow[d, "s_2"', tail] & X_1 \arrow[d, "i_1", tail] \\ X_2 \arrow[r, "i_2"', tail] & X \end{tikzcd} \] and the push-out along $( s_1, s_2)$ : \[ \begin{tikzcd}[row sep = large] X_1{\cap}_X X_2 \arrow[r, "s_1", tail] \arrow[d, "s_2"', tail] & X_1 \arrow[d, "j_1", tail] \\ X_2 \arrow[r, "j_2"', tail] & \arrow[ul, phantom, "\ulcorner", near end] X_1{+}_{X}X_2 = \mbox{Coker} \left[ \begin{smallmatrix} s_1 \\ -s_2 \end{smallmatrix} \right] . \end{tikzcd} \] The push-out $X_1{+}_{X}X_2$ is $\mbox{Coker}(\mbox{Ker}\,(s))=\mbox{Coim}\,(s).$ And since $\mbox{Coim}\,(s)\cong \mbox{Im}\, (s)$ in an abelian category, we conclude that $\mbox{Im}\, (s)$ coincides with the general admissible sum in a nice category. \end{proof} \begin{cor}\label{abelian is nice} Let $\mathcal{A}$ be an abelian category. Then $(\mathcal{A},\mathcal{E}_{all})$ is an AIS-category. \end{cor} \begin{proof} This follows directly from Propositions \ref{intersection in abelian} and \ref{sum in abelian}. 
\end{proof} \bigskip \begin{comment} \begin{definition}\label{semi-nice} Let $(\mathcal{A},\mathcal{E})$ be an exact category, it is said to be \emph{semi-nice} when the following properties are satisfied for any two $\mathcal{E}-$subobjects $(X_1,i_1)$, $(X_2,i_2)$ of an object $X$: \begin{enumerate} \item $(X_1{\cap}_X X_2, s_1)$ and $(X_1{\cap}_X X_2, s_2)$ are $\mathcal{E}$-subobjects\footnote{where the morphisms $s_1$ and $s_2$ are defined as in \ref{intersection & sum}} of $X_1$ and $X_2$ respectively, \item $(X_1{+}_{X}X_2, u)$ is an $\mathcal{E}$-subobject\footnote{We mean here that the arrow $u$ of \ref{sum u.p} should be admissible.} of $X$, \end{enumerate} when these intersections and sums exists. \end{definition} \end{comment} Now we give some properties of the intersection and the sum of $\mathcal{E}-$subobjects of an object: \begin{lemma}\label{intersection inclusion} Let $X,Y$ and $Y'$ be $\mathcal{E}-$subobjects of an object $Z$ in an AI-category. If there exists an admissible monic \[ i : Y \;\xymatrix{ \ar@{>->}[r] & \\} Y'\] then there exists an admissible monic \[X{\cap}_{Z} Y \;\xymatrix{ \ar@{>->}[r] & \\} X{\cap}_{Z} Y'.\] \end{lemma} \begin{proof}By definition we have the two following pull-back diagrams \[ \xymatrix{ X{\cap}_{Z} Y\ar@{>->}[r]^{f} \ar@{>->}[d]_{g} & Y\ar@{>->}[d]_{h}\\ X \ar@{>->}[r]_{k} & Z} \] and \[ \xymatrix{ X{\cap}_{Z} Y'\ar@{>->}[r]^{f'} \ar@{>->}[d]_{g'} & Y'\ar@{>->}[d]_{h'}\\ X \ar@{>->}[r]_{k} & Z} \] where $\ h'\circ i= h$. So we have a monic $l = i\circ f$ making the following diagram commute \[ \xymatrix{ X{\cap}_{Z} Y\ar@{>->}[r]^{l} \ar@{>->}[d]_{g} & Y'\ar@{>->}[d]_{h'}\\ X \ar@{>->}[r]_{k} & Z} \] By the universal property of the pull-back, there exists a morphism \[ r : X{\cap}_{Z} Y\to X{\cap}_{Z} Y'\] such that $\ f'\circ r= l$ and $\ g'\circ r= g$.
Since $l$ is an admissible monic and the cokernel of $r$ exists, the obscure axiom \ref{obscure axiom} implies that the morphism $r$ is also an admissible monic. \end{proof} \begin{lemma}\label{sum inclusion} Let $X,Y$ and $Y'$ be $\mathcal{E}-$subobjects of an object $Z$ in an AS-category. If there exists an admissible monic \[ i : Y \;\xymatrix{ \ar@{>->}[r] & \\} Y'\] then there exists an admissible monic \[Y{+}_{Z}X \;\xymatrix{ \ar@{>->}[r] & \\} Y'{+}_{Z}X\] when these sums exist. \end{lemma} \begin{proof} By definition we have the following push-out diagram \[\xymatrix{ X{\cap}_Z Y \; \ar@{ >->}[d]^{g} \ar@{ >->}[r]^{f} \ar@{}[dr]|{\text{PO}} & Y\ar@{ >->}[d]^{d} \ar@{ >->}[ddr]^{l'} \\ X\; \ar@{ >->}[drr]^{e'} \ar@{>->}[r]^{e} & Y{+}_Z X \ar@{ >->}[dr]^{r'} \\ & &Y'{+}_Z X } \] where $\ d'\circ i= l'$, and by the universal property of the push-out, there exists a unique morphism \[r' : Y{+}_{Z}X\to Y'{+}_{Z}X\] such that $r'\circ e=e'$ and $r'\circ d=d'$. The two morphisms \[u : Y{+}_{Z}X \;\xymatrix{ \ar@{>->}[r] & \\} Z\] \[u' : Y'{+}_{Z}X \;\xymatrix{ \ar@{>->}[r] & \\} Z\] satisfying $\ u'\circ r'= u$ are admissible monics by the (AS) axiom, and since $u$ is an admissible monic and the cokernel of $r'$ exists, the obscure axiom \ref{obscure axiom} implies that the morphism $r'$ is also an admissible monic.
\begin{comment} \[ \xymatrix{ X{\cap}_{Z} Y\ar@{>->}[r]^{f} \ar@{>->}[d]_{g} & Y\ar@{>->}[d]_{d}\\ X \ar@{>->}[r]_{e} & Y{+}_{Z}X} \] and \[ \xymatrix{ X{\cap}_{Z} Y'\ar@{>->}[r]^{f'} \ar@{>->}[d]_{g'} & Y'\ar@{>->}[d]_{d'}\\ X \ar@{>->}[r]_{e'} & Y'{+}_{Z}X} \] So we have a monic $\ d'\circ i= l'$ that commutes the following diagram \[ \xymatrix{ X{\cap}_{Z} Y\ar@{>->}[r]^{f} \ar@{>->}[d]_{g} & Y\ar@{>->}[d]_{l'}\\ X \ar@{>->}[r]_{e'} & Y'{+}_{Z}X} \] \end{comment} \end{proof} \begin{proposition}Let $(X_1,i_1)$, $(X_2,i_2)$ be two $\mathcal{E}$-subobjects of an object $X$ in an AIS-category, then $X_1 {\cap}_{X}X_2 = X_1 {\cap}_{(X_1{+}_{X}X_2)}X_2$. \end{proposition} \begin{proof} This follows from the equivalent assertions of \cite[Proposition 2.12]{Bu}. \end{proof} \begin{definition} An additive functor $F:\mathcal{A} \to \mathcal{B}$ is called \emph{exact} if for every kernel-cokernel pair $(i,d)$ in $\mathcal{A}$, we have that $(Fi, Fd)$ is a kernel-cokernel pair in $\mathcal{B}$. An additive functor $F:(\mathcal{A}, \mathcal{E}) \to (\mathcal{B}, {\mathcal{E}}')$ is called \emph{$\mathcal{E}$-exact} if $F(\mathcal{E}) \subseteq {\mathcal{E}}'$. \end{definition} \begin{remark} In particular, exact functors preserve kernels and cokernels and therefore preserve intersections and sums. \end{remark} \section{ISOMORPHISM THEOREMS} In this section $(\mathcal{A},\mathcal{E})$ is an AIS-category.\\ We will recall the existence of some special \emph{admissible} short exact sequences, which will play an important role in the proof of the Jordan-H\"older property. \begin{lemma}\label{c3}Let $X$ and $Y' \;\xymatrix{ \ar@{>->}[r] & \\} Y''$ be three $\mathcal{E}-$subobjects of an object $Z$.
Then there exists an admissible short exact sequence \[(Y'{+}_{Z}X)/X \;\xymatrix{ \ar@{>->}[r] & \\} (Y''{+}_{Z}X)/X \xymatrix{ \ar@{->>}[r] & \\} (Y''{+}_Z X)/(Y'{+}_Z X)\] \end{lemma} \begin{proof} The admissible monic that exists by \ref{sum inclusion} fits into the commutative diagram below, where the arrow on the right exists by the universal property of the cokernel. By the dual of \cite[Proposition 2.12]{Bu} the right square is bicartesian, and by (A2) (or by \cite[Proposition 2.15]{Bu}) the morphism \[Y'{+}_Z X/ X \imono{c} Y''{+}_Z X/ X\] is also an admissible monic. Since the first two horizontal rows and the middle column are short exact, by the Noether Isomorphism for exact categories \cite[Lemma 3.5]{Bu} the third column is a well-defined admissible short exact sequence, and is uniquely determined by the requirement that it makes the diagram commutative. Moreover, the upper right hand square is bicartesian; \[ \xymatrix{ & & 0 \ar[d]& {\color{blue}0\ar[d]}\\ 0\ar[r] & X\ar@{=}[d] \ar@{>->}[r] & Y'{+}_Z X \ar@{>->}[d] \ar@{>>}[r] & {\color{blue}(Y'{+}_Z X)/X}\ar@{>->}[d] \ar[r] \ar[d] & 0\\ 0 \ar[r] & X \ar@{>->}[r] & Y''{+}_Z X \ar@{>>}[r] \ar@{>>}[d] & {\color{blue}(Y''{+}_Z X)/ X} \ar[r] \ar@{>>}[d] & 0 \\ & & (Y''{+}_Z X)/(Y'{+}_Z X)\ar[r]\ar[d] & {\color{blue}(Y''{+}_Z X)/ (Y'{+}_Z X)} \ar[d]& \\ & & 0& {\color{blue}0} }\] In particular $(Y''{+}_Z X)/(Y'{+}_Z X)$ is the admissible cokernel of the admissible monic $c$.
\end{proof} \begin{comment} \begin{lemma}({\bf{The second isomorphism theorem}})\label{ppp} Let $X$ and $Y$ be two subobjects of $Z$, then there exists an isomorphism \[Y/(Y\cap X) \simeq (Y{+} X)/X.\] \end{lemma} \begin{proof} {\color{green}to check with the new definition of sum} The second isomorphism theorem for abelian categories \cite[Proposition 6.4]{Po} uses \cite[(5.2), (5.3), (5.6), the dual of (5.6), (5.7), (6.3)]{Po} which all could be adapted and still hold for pre-abelian categories, so the proof is again true in the more general context of pre-abelian categories. \end{proof} \end{comment} \begin{lemma}({\bf{The $\mathcal{E}-$second isomorphism theorem}})\label{parallelo} Let $X$ and $Y' \;\xymatrix{ \ar@{>->}[r] & \\} Y''$ be three $\mathcal{E}-$subobjects of an object $Z$. The following is an admissible short exact sequence \[Y'{\cap}_{Z}X \;\xymatrix{ \ar@{>->}[r] & \\} Y' \xymatrix{ \ar@{->>}[r] & \\} (Y'{+}_Z X)/X\] \end{lemma} \begin{proof} We consider the following push-out diagram \[\xymatrix{ Y'{\cap}_{Z}X \; \ar@{>->}[d]_{f} \ar@{ >->}[r]^{g} \ar@{}[dr]|{\text{PO}} & Y' \ar@{>->}[d]^{f'}\\ X \; \ar@{>->}[r]^{g'} & Y'{+}_Z X } \] and by \cite[Proposition 2.12]{Bu} this square is part of the diagram \[\xymatrix{ Y'{\cap}_{Z}X \; \ar@{>->}[d]_{f} \ar@{ >->}[r]^{g} \ar@{}[dr]|{\text{PO}} & Y'\ar@{>->}[d]^{f'} \ar@{>>}[r]^{c} & Y'/(Y'{\cap}_{Z}X)\ar@{=}[d]\\ X \; \ar@{>->}[r]^{g'} &Y'{+}_Z X \ar@{>>}[r]^{c'} & (Y'{+}_Z X)/X.} \] The top row of this diagram, together with the equality on the right, gives the claimed admissible short exact sequence. \end{proof} \begin{proposition}\label{the s.e.s}Let $X$ and $Y' \;\xymatrix{ \ar@{>->}[r] & \\} Y''$ be three $\mathcal{E}-$subobjects of an object $Z$. There exists an admissible short exact sequence \[ (Y''{\cap}_{Z} X)/(Y'{\cap}_{Z} X) \;\xymatrix{ \ar@{>->}[r] & \\} (Y''/Y') \xymatrix{ \ar@{->>}[r] & \\} (Y''{+}_{Z}X)/(Y'{+}_{Z}X).\] \end{proposition} \begin{proof}Consider the commutative diagram below in which the three columns are admissible short exact sequences by \ref{c3} and \ref{quotient}.
In addition the first two rows are admissible short exact sequences by \ref{parallelo}, so the 3$\times$3-lemma for exact categories \cite[Corollary 3.6]{Bu} implies the existence of the commutative diagram of admissible short exact sequences \[ \xymatrix{0\ar[r] & Y'{\cap}_Z X \ar@{>->}[d]_{} \ar@{>->}[r]{}^{ } & Y'\ar@{>->}[d]_{} \ar@{>>}[r] & (Y'{+}_Z X)/X\ar@{>->}[d]^{} \ar[r] & 0\\ 0\ar[r] &Y''{\cap}_Z X \ar@{>>}[d]\ar@{>->}[r]_{} & Y'' \ar@{>>}[d] \ar@{>>}[r] & (Y''{+}_Z X)/X \ar@{>>}[d] \ar[r] & 0 \\ {\color{blue}0}\ar[r] & {\color{blue}(Y''{\cap}_Z X)/(Y'{\cap}_Z X)} \ar@{>->}[r]_{} & {\color{blue}Y''/Y'} \ar@{>>}[r] & {\color{blue}(Y''{+}_Z X)/(Y'{+}_Z X) }\ar[r] & {\color{blue}0}} \] and in particular the third row is an admissible short exact sequence. \end{proof} \begin{comment} \begin{theorem} Let \[0 \longrightarrow X \; \;\xymatrix{ \ar@{>->}[r] & \\} Z \xymatrix{ \ar@{->>}[r] & \\} Y \longrightarrow 0\] be an admissible short exact sequence, then (a) Z is $\mathcal{E}-$artinian if and only if X and Y are. (b) Z is $\mathcal{E}-$noetherian if and only if X and Y are. \end{theorem} \begin{proof} The necessity of (a) come from the fact that all subobjects of $X$ is a subobject of $Z$. For the necessity of (b), we show first that the subobject . The sufficiency come from 5.2 and 5.4 \end{proof} \begin{cor} Let {$X_1,...,X_n$} be a finite set of objects, then (a) Each $X_i$ is $\mathcal{E}-$artinian if and only if $\bigoplus \limits_{\underset{}{i=1}}^n X_i$ is $\mathcal{E}-$artinian. (b) Each $X_i$ is $\mathcal{E}-$noetherian if and only if $\bigoplus \limits_{\underset{}{i=1}}^n X_i$ is $\mathcal{E}-$noetherian. \end{cor} \begin{proof} First of all, in the case n=2, we have the following admissible short exact sequence \[0 \longrightarrow X_1 \; \;\xymatrix{ \ar@{>->}[r] & \\} X_1\oplus X_2 \xymatrix{ \ar@{->>}[r] & \\} X_2 \longrightarrow 0\] Then the necessity of (a) and (b) are true for n=2 by 3.17.
For $n>2$, we consider the induction on this admissible short exact sequence \[0 \longrightarrow \bigoplus \limits_{\underset{}{i=1}}^{n-1} X_i \; \;\xymatrix{ \ar@{>->}[r] & \\} \bigoplus \limits_{\underset{}{i=1}}^n X_i \xymatrix{ \ar@{->>}[r] & \\} X_n \longrightarrow 0\] For the sufficiency, we use the induction on this admissible short exact sequence \[0 \longrightarrow \bigoplus \limits_{\underset{i\neq j}{i=1}}^{n-1} X_i \; \;\xymatrix{ \ar@{>->}[r] & \\} \bigoplus \limits_{\underset{}{i=1}}^n X_i \xymatrix{ \ar@{->>}[r] & \\} X_i \longrightarrow 0\] \end{proof} \end{comment} \section{THE JORDAN-H\"OLDER PROPERTY} In \cite{Bau}, Baumslag gives a short proof of the Jordan-H\"older theorem, for \emph{groups}, by intersecting the terms in one subnormal series with those in the other series.\\ In this section we recast Baumslag's proof of the Jordan-H\"older theorem for abelian categories in the language of an exact category $(\mathcal{A}, \mathcal{E})$.\\ We repeat the steps of \cite{Bau}, using the admissible morphisms of the maximal exact structure of the abelian category. Our proof uses only exact category theoretic arguments, in particular the Schur lemma for exact categories \ref{schur}. \begin{definition}\label{composition series} An $\mathcal{E}-$composition series for an object $X$ of $\mathcal{A}$ is a sequence \begin{eqnarray}\label{chain1} 0=X_0 \; \imono{i_0} X_1 \;\imono{i_1} \cdots \; \imono{i_{n-2}} \; X_{n-1}\;\imono{i_{n-1}}\; X_n=X \end{eqnarray} where all $i_l$ are \emph{proper admissible monics} with $\mathcal{E}-$simple cokernel. \end{definition} \begin{theorem} {\bf (Jordan-H\"older theorem)}\label{JH} Let $(\mathcal{A}, \mathcal{E})$ be an AIS-category.
Any two $\mathcal{E}-$composition series for a finite object $X$ of $\mathcal{A}$ \[ 0=X_0 \; \imono{i_0} X_1 \;\imono{i_1} \cdots \; \imono{i_{m-2}} \; X_{m-1}\;\imono{i_{m-1}}\; X_m=X \] and \[0=X'_0 \; \imono{i'_0} X'_1 \;\imono{i'_1} \cdots \; \imono{i'_{n-2}} \; X'_{n-1}\;\imono{i'_{n-1}}\; X'_n=X \] are equivalent, that is, they have the same length and the same composition factors, up to permutation and isomorphism. \end{theorem} \begin{proof} By induction on $m$. If $m=0$, then $X=0$ and $n=0$. If $m=1$, then $X$ is $\mathcal{E}-$simple: the only $\mathcal{E}-$composition series is $0 \;\xymatrix{ \ar@{>->}[r] & \\} X$, and so $n=1$. If $m\gneq 1$, we consider the sequence of $\mathcal{E}-$subobjects of $X$: \[ 0 \; \imono{} X'_1{\cap}_X X_{m-1} \;\imono{} \cdots \; \imono{} \; X'_{n-1}{\cap}_X X_{m-1}\;\imono{}\;X_{m-1}=\] \[ X_{m-1}\; \imono{} X'_1 {+}_X X_{m-1}\;\imono{} \cdots \; \imono{} \; X'_{n-1} {+}_X X_{m-1}\;\imono{}\;X.\] Since the cokernel $X/X_{m-1}=X_{m}/X_{m-1}$ is $\mathcal{E}-$simple, there exists a unique $0\leqslant k\lneq n$ such that \[ X_{m-1}= X'_1 {+}_X X_{m-1}= \cdots X'_k{+}_{X} X_{m-1} {\subsetneq}_{\mathcal{E}} X'_{k+1} {+}_{X} X_{m-1}\cdots =X'_{n-1} {+}_{X} X_{m-1}=X.\] By \ref{the s.e.s}, there exists for each $0 \leqslant l\lneq n$ an admissible short exact sequence \[0\rightarrow (X'_{l+1}{\cap}_{X} X_{m-1})/(X'_{l}{\cap}_{X} X_{m-1}) \;\xymatrix{ \ar@{>->}[r] & \\} (X'_{l+1}/X'_{l}) \] \[\space \xymatrix{ \ar@{->>}[r] & \\} (X'_{l+1}{+}_{X}X_{m-1})/(X'_{l}{+}_{X}X_{m-1})\rightarrow 0.\] In particular the middle term of this sequence is an $\mathcal{E}-$simple object. By the $\mathcal{E}-$Schur lemma \ref{schur}, the admissible monic (respectively the admissible epic) of this sequence is either the zero morphism, or an isomorphism.
For $l=k$, we have \[X_{m}/X_{m-1}\backsimeq (X'_{k+1}{+}_{X}X_{m-1})/(X'_{k}{+}_{X}X_{m-1}) \backsimeq (X'_{k+1}/X'_{k})\] and then by \ref{zero coker} we have $X'_{k+1}{\cap}_X X_{m-1}\backsimeq X'_k{\cap}_X X_{m-1}$. While for $l\neq k$ we have \[(X'_{l+1}{\cap}_{X} X_{m-1})/(X'_{l}{\cap}_{X} X_{m-1})\backsimeq (X'_{l+1}/X'_{l})\] which means that $X'_{l+1}{\cap}_X X_{m-1} \neq X'_l{\cap}_X X_{m-1}$ and $(X'_{l+1}{\cap}_X X_{m-1}) / (X'_l{\cap}_X X_{m-1})$ is an $\mathcal{E}-$simple object. This shows that the sequence \[ 0 \; {\subsetneq}_{\mathcal{E}} X'_1{\cap}_X X_{m-1} {\subsetneq}_{\mathcal{E}} \cdots X'_k{\cap}_X X_{m-1}=X'_{k+1}{\cap}_X X_{m-1}\cdots {\subsetneq}_{\mathcal{E}} \; X'_{n-1}{\cap}_X X_{m-1}{\subsetneq}_{\mathcal{E}} X_{m-1}\] is a composition series of $X_{m-1}$ of length $n-1$. By the induction hypothesis $m-1=n-1$, and so $m=n$ and there exists a bijection \[\sigma: \{0, 1,..., k-1, k+1, ..., n-1\}\rightarrow \{0, 1, ..., m-2\}\] such that $X'_{l+1}/X'_l \backsimeq X_{\sigma(l)+1}/X_{\sigma(l)}$ for $l\neq k$, while $X'_{k+1}/X'_k\backsimeq X_{m}/X_{m-1}$, which completes the correspondence between the two series. \end{proof} \begin{remark} More generally, for a fixed additive category $\mathcal{A}$, one may choose an exact structure $\mathcal{E}$ on $\mathcal{A}$ from the lattice of exact structures $Ex(\mathcal{A})$ (introduced in \cite[Section 5]{BHLR} and recently studied in \cite{FG} and \cite{BBH}) and consider the $\mathcal{E}-$Jordan-H\"older property. Then the exact category $(\mathcal{A}, \mathcal{E})$ may not necessarily satisfy the $\mathcal{E}-$Jordan-H\"older property (see \cite[Example 6.9]{BHLR}, \cite{E19} and \cite[Examples 5.3, 5.12]{BHT} for counter-examples) and characterisations of Jordan-H\"older exact categories have appeared in both \cite{E19} and \cite{BHT}. \end{remark}
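As a simple illustration of Theorem~\ref{JH} (a standard example, stated in the familiar setting of the abelian category of finite abelian groups with its maximal exact structure, where every monic and epic is admissible): the object $\mathbb{Z}/12\mathbb{Z}$ admits the two $\mathcal{E}$-composition series \[ 0 \;\subsetneq\; 6\mathbb{Z}/12\mathbb{Z} \;\subsetneq\; 2\mathbb{Z}/12\mathbb{Z} \;\subsetneq\; \mathbb{Z}/12\mathbb{Z} \quad\text{and}\quad 0 \;\subsetneq\; 4\mathbb{Z}/12\mathbb{Z} \;\subsetneq\; 2\mathbb{Z}/12\mathbb{Z} \;\subsetneq\; \mathbb{Z}/12\mathbb{Z}, \] with composition factors $(\mathbb{Z}/2, \mathbb{Z}/3, \mathbb{Z}/2)$ and $(\mathbb{Z}/3, \mathbb{Z}/2, \mathbb{Z}/2)$ read from the bottom up: the same length and the same factors up to permutation and isomorphism, as the theorem predicts.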
\section{Introduction}\label{sec_intro} In recent years there has been significant progress~\cite{Phillips:2010dt,Hammer:2011ye,Rupak:2011nk,Fernando:2011ts,Lensky:2011he,PhysRevC.86.044608,Acharya:2013nia,Zhang:2014zsa,Ryberg:2014exa,Ryberg:2015lea} in the effective field theory (EFT) treatment of electromagnetic properties of halo nuclei, following early work in Refs.~\cite{Bertulani:2002sz,Bedaque:2003wa}. The structure and reactions of halo nuclei play an important role in heavy element synthesis in nuclear astrophysics~\cite{Bertulani:2009mf,Rauscher:2010pu, kawano:1991ApJ372,Wiescher1990,Wiescher1999,Kajino1990}. They provide a unique window into the properties of exotic nuclei near the driplines, resembling weakly-bound clusters rather than tightly bound shell-like structures. There is renewed interest in the study of halo nuclei due to the advent of present and planned experiments with high intensity beams of exotic radioactive rare isotopes. Further, the single and two nucleon halo nuclear systems display universal properties such as the Efimov effect~\cite{Efimov:1971a,Efimov:1993a,Fedorov:1994,Mazumdar:2000,Yamashita:2005wu,Canham:2008jd}, and can be realized in few-body atomic systems as well~\cite{Kraemer:2006Nat,Braaten:2004a}. EFTs are ideally suited for the study of halo nuclei at low energy. The clear separation of energy scales -- the small energy required to remove the valence nucleon (or nucleons) and the large energy required to break apart the tightly bound core -- allows one to construct a low energy EFT. Physical observables are expressed as expansions in the small ratio $Q/\Lambda$ where $Q$ is a momentum associated with the low energy physics and $\Lambda$ is the momentum associated with the high energy physics. In the EFT, the core and the loosely bound particle are treated as fundamental fields to reduce the complexity of the problem.
For example, in $^{15}$C, which is represented as a single neutron halo of a $^{14}$C core, the binding momentum $\gamma\sim 46.21$ MeV for valence neutron separation is associated with the soft scale $Q$, whereas the momentum threshold for pion physics, the excited states of the core $^{14}$C, etc., is identified with the hard scale $ \Lambda\sim 100$-200 MeV~\cite{PhysRevC.86.044608}. In the EFT, at a given order in the $Q/\Lambda$ expansion all the relevant quantum operators are systematically included. The theoretical error is estimated from the higher order terms in the perturbative $Q/\Lambda$ expansion. In this work we calculate the electromagnetic form factors for $s$-wave spin $\frac{1}{2}$ halo nuclei. The electric form factor for $^{11}$Be was studied in Ref.~\cite{Hammer201117}. We include the magnetic form factor as well, and apply the analysis to a couple of other halo nuclei, $^{15}$C and $^{19}$C. Form factors of nuclei have been a longstanding subject of interest in nuclear physics. Experiments on elastic electron scattering from a nucleus provide essential information about the internal structure of the nucleus, such as the charge density and magnetic moment. The form factors are generally written as the ratio of the electron-nucleus cross section to the Mott scattering cross section off a point-like particle. The halo nuclei we consider -- $^{15}$C, $^{11}$Be, and $^{19}$C -- can be analyzed similarly to electron-proton scattering. These nuclei involve spin $\frac{1}{2}^+$ hadrons as the nuclear target, just like the proton. The halo nuclei ground states $^{11}$Be, $^{15}$C and $^{19}$C were studied in Refs.~\cite{Phillips:2010dt,Hammer:2011ye,PhysRevC.86.044608,Acharya:2013nia}. The construction of the EFT for these nuclei is similar, though the power counting that determines the relative sizes of the quantum operators is system specific. The form factor calculation is sensitive to these differences, and they will be discussed when we consider the specific nuclei.
The organization of the paper is as follows. In Section~\ref{sec_theory} we introduce the general formalism for the electric and magnetic form factors. Section~\ref{sec_EFT} introduces the EFT interactions and the form factor calculations. Then we discuss the results for the specific halo systems in Section~\ref{sec_results}. The power countings in the halo systems are discussed, and the corresponding EFT parameters are chosen. Conclusions are presented in Section~\ref{sec_conclusions}. \section{Formalism}\label{sec_theory} Elastic electron scattering on a $\frac{1}{2}^+$ halo nucleus can be analyzed similarly to electron scattering on a proton target, as both involve spin $\frac{1}{2}$ hadrons. The elastic scattering amplitude separates into the leptonic and hadronic currents as \begin{align} i\mathcal M = [ie \bar\psi_e(-\bm{p}',s')\gamma^\mu\psi_e(-\bm{p},s)]\(-i\frac{g_{\mu\nu}}{q^2}\) [i\bar \psi_\phi(\bm{p}',a')J_\phi^\nu \psi_\phi(\bm{p},a)], \end{align} where $\psi_e(\bm{p},s)$ and $\psi_\phi(\bm{p},s)$ are the electron and halo nucleus Dirac fields with momenta $\bm{p}$ and spins $s$ respectively. The photon momentum $q= p'-p$. Summing over final spins and averaging over initial spins we get \begin{align} \frac{1}{2}\frac{1}{2}\sum_{s,s'}\sum_{a,a'}|\mathcal M|^2\equiv \frac{e^2 }{(q^2)^2} g_{\mu\nu} g_{\alpha\beta}L^{\mu\alpha} T^{\nu\beta}, \end{align} where the leptonic contribution is written as \begin{align} L^{\mu\alpha}= &\frac{1}{2}\sum_{s,s'} [\bar\psi_e(-\bm{p}',s')\gamma^\mu\psi_e(-\bm{p},s)][\bar\psi_e(-\bm{p},s) \gamma^\alpha\psi_e(-\bm{p}',s')]\nonumber \\ = &\frac{1}{2} \operatorname{Tr}[(-{\slashed p}' +m_e)\gamma^\mu(-\slashed p+m_e)\gamma^\alpha]\nonumber\\ = & 2[p'^\mu p^\alpha+p'^\alpha p^\mu-(p'\cdot p) g^{\mu\alpha}]+2 m_e^2 g^{\mu\alpha}. \end{align} $m_e$ is the electron mass.
The hadronic contribution is \begin{align} T^{\nu\beta}=\frac{1}{2}\sum_{a,a'} [\bar\psi_\phi(\bm{p}',a')J^\nu\psi_\phi(\bm{p},a)][\bar\psi_\phi(\bm{p},a) J^\beta\psi_\phi(\bm{p}',a')] \end{align} Current conservation $q_\mu T^{\mu\nu}=0=q_\nu T^{\mu\nu}$ and Lorentz invariance restrict the hadronic current to the generic form \begin{align} i \Gamma^\mu= &i \bar \psi_\phi(\bm{p}',a')J_\phi^\mu \psi_\phi(\bm{p},a) = i e Z_c \bar\psi_\phi(\bm{p}',a')\[\gamma^\mu \mathcal F_1(-q^2) + i\frac{\kappa}{2M}\mathcal F_2(-q^2) \sigma^{\mu\nu} q_\nu \] \psi_\phi(\bm{p},a)\nonumber\\ =&i e Z_c \bar\psi_\phi(\bm{p}',a')\[\frac{p^\mu+p'^\mu}{2M} \mathcal F_1(-q^2) + i\frac{\mathcal F_1(-q^2) +\kappa \mathcal F_2(-q^2)}{2M} \sigma^{\mu\nu} q_\nu \] \psi_\phi(\bm{p},a), \end{align} which is rewritten using the Gordon identity in the last line. The constant $\kappa$ is the anomalous magnetic moment and $Z_c$ the charge of the halo nucleus core. For non-relativistic kinematics, in the Breit frame $(q_0=0,\bm{q})$, we get \begin{align}\label{eq:EMforms} i\Gamma^0 = & ie Z_c \bar u_\phi(\bm{p}',a')\mathcal F_1(|\bm{q}|^2) u_\phi(\bm{p},a), \\ i\Gamma^i =& ie Z_c \bar u_\phi(\bm{p}',a')\[ \frac{p^i +p'^i}{2M} \mathcal F_1(|\bm{q}|^2) +i\frac{\mathcal F_1(|\bm{q}|^2) +\kappa\mathcal F_2(|\bm{q}|^2)}{2M}\epsilon^{ijk}\sigma_k q_j \] u_\phi(\bm{p},a).\nonumber \end{align} In the Sachs form, which is commonly used for physical interpretation, we write the charge $G_E(|\bm{q}|^2)$ and magnetic $G_M(|\bm{q}|^2)$ form factors as \begin{align} \label{eq:Sach} G_E(|\bm{q}|^2) = &\mathcal F_1(|\bm{q}|^2)-\tau\kappa\mathcal F_2(|\bm{q}|^2),\\ G_M(|\bm{q}|^2)=&\mathcal F_1(|\bm{q}|^2) +\kappa \mathcal F_2(|\bm{q}|^2), \nonumber \end{align} with $\tau=|\bm{q}|^2/(4 M^2)$. In the EFT the form factors $\mathcal F_i$ are $\mathcal O(1)$ in the $Q/\Lambda$ expansion as we show later in Section~\ref{sec_EFT}. We count $|\bm{q}|\sim Q$ at low photon momentum exchange.
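The Sachs combinations are simple to manipulate numerically. The following Python sketch (our illustration, with made-up form factor values) implements the relations $G_E=\mathcal F_1-\tau\kappa\mathcal F_2$ and $G_M=\mathcal F_1+\kappa\mathcal F_2$ and checks the algebraic identity $\mathcal F_1^2+\tau\kappa^2\mathcal F_2^2=[G_E^2+\tau G_M^2]/(1+\tau)$ that enters the cross section below:

```python
# Sachs form factors from the Dirac (F1) and Pauli (F2) form factors:
# G_E = F1 - tau*kappa*F2 and G_M = F1 + kappa*F2.
def sachs(F1, F2, kappa, tau):
    G_E = F1 - tau * kappa * F2
    G_M = F1 + kappa * F2
    return G_E, G_M

# Made-up values for a quick consistency check.
F1, F2, kappa, tau = 0.8, 1.3, -1.9, 0.05
G_E, G_M = sachs(F1, F2, kappa, tau)

# The structure function A of the cross section, in both representations:
A_dirac_pauli = F1**2 + tau * kappa**2 * F2**2
A_sachs = (G_E**2 + tau * G_M**2) / (1.0 + tau)
```

Expanding $(G_E^2+\tau G_M^2)$ in terms of $\mathcal F_1$ and $\mathcal F_2$ shows the two expressions agree identically, so the check holds for any input values.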
Thus the magnetic form factor $G_M$ receives contributions from both $\mathcal F_1$ and $\mathcal F_2$, whereas the electric form factor $G_E$ only receives contributions from $\mathcal F_1$ up to NLO. The $\mathcal F_2$ term in $G_E$ is the so-called Darwin-Foldy contribution. The electric and magnetic form factors are normalized such that for small $|\bm{q}|$ \begin{align} G_E(|\bm{q}|^2)\approx 1-\frac{1}{6} \langle r_E^2 \rangle|\bm{q}|^2+\cdots, \end{align} where $\sqrt{\langle r_E^2\rangle}$ is the charge radius and \begin{align} \frac{eZ_c}{2M}G_M(|\bm{q}|^2)\approx\kappa_\phi\mu_N \[1-\frac{1}{6} \langle r_M^2\rangle |\bm{q}|^2+\cdots\], \end{align} where $\kappa_\phi$ is the halo nucleus magnetic moment and $\sqrt{\langle r_M^2\rangle}$ the magnetic radius. The differential elastic scattering cross section in the laboratory frame is written as \begin{align} \label{eq:diffcsection} \frac{d\sigma}{d\Omega}={\frac{d\sigma}{d\Omega}} |_{Mott} \[\mathcal A(|\bm{q}|^2)+ \mathcal B(|\bm{q}|^2) \tan^{2}\(\frac{\theta}{2}\)\] , \end{align} where \begin{align} \label{eq:difcrosssection} \mathcal A(|\bm{q}|^2)& ={\mathcal F_1^2(|\bm{q}|^2)}+\tau\kappa^2 \mathcal F_2^2(|\bm{q}|^2) = \frac{1}{1+\tau}[G_E^2(|\bm{q}|^2) +\tau G_M^2(|\bm{q}|^2)], \\ \mathcal B(|\bm{q}|^2) &=2\tau[\mathcal F_1(|\bm{q}|^2)+\kappa \mathcal F_2(|\bm{q}|^2 )]^2 = 2\tau G_M^2(|\bm{q}|^2). \nonumber \end{align} \section{Effective Field Theories}\label{sec_EFT} The halo nuclei $^{11}$Be, $^{15}$C and $^{19}$C ground states all have spin-parity assignment $\frac{1}{2}^+$. They are treated as a shallow bound state of a single neutron and a spin zero core in the $s$-wave. This is reasonable, as the binding energy of the ground state is much smaller than both the energy needed to break apart the core and the excitation energies of the core~\cite{Phillips:2010dt,Hammer:2011ye,PhysRevC.86.044608,Acharya:2013nia}.
The EFT calculations in these halo systems, so far, agree with available data within the estimated theoretical errors. The bound state is described by the strong interaction Lagrangian \begin{align}\label{eq:Ls} \mathcal L_s=\phi^\dagger_\alpha\[\Delta+iD_0+\frac{\bm{D}^2}{2M}\]\phi_\alpha+h\[\phi^\dagger_\alpha(N_\alpha C)+\operatorname{h.c.}\], \end{align} where $\phi_\alpha$ is an auxiliary field with spin index $\alpha$, $N_\alpha$ is the neutron field and $C$ is a scalar field for the core. In the following we suppress the spin index. $D_\mu=\partial_\mu+ie Z_c A_\mu$ is the covariant derivative. The field $\phi$ represents the $\frac{1}{2}^+$ single neutron bound halo nucleus. We take the neutron mass $M_n=939.6$ MeV, total mass $M=M_n+M_c$ where the core mass $M_c=9328$ MeV, 13044 MeV and 16792 MeV for $^{10}$Be, ${}^{14}$C and ${}^{18}$C respectively. The strong interaction couplings $\Delta$ and $h$ are specific to the particular halo nucleus we consider and would in general be different from one system to the next. The EFT couplings are related to elastic scattering parameters. Calculating the elastic neutron-core scattering amplitude in Fig.~\ref{fig:scattering}, we get \begin{figure}[thb] \begin{center} \includegraphics[width=0.47\textwidth,clip=true]{scattering_and_dimer} \end{center} \caption{\protect Elastic scattering amplitude $\mathcal A$ in the $s$-wave. The single line is the neutron propagator, the double line represents the dimer $\phi$ propagator, and the dashed line the bare dimer propagator.
} \label{fig:scattering} \end{figure} \begin{align}\label{eq:AmpEFT} i\mathcal A(p)=-i h^2 D_\phi(\frac{p^2}{2\mu},0)= -\frac{i h^2}{\Delta+p^2/(2\mu)+\mu h^2(\lambda+i p)/(2\pi)}, \end{align} where the dressed $\phi$ propagator is \begin{align} i D_\phi(p_0,\bm{p})=&\frac{i}{\Delta+p_0-p^2/(2M)+i h^2 f_0(p_0,\bm{p})},\\ f_0(p_0,\bm{p})= &-i 2\mu\(\frac{\lambda}{2}\)^{4-D}\int \frac{d^{D-1}\bm{q}}{(2\pi)^{D-1}}\frac{1}{q^2- 2\mu p_0 +\mu p^2/M -i 0^+}\nonumber\\ =& -\frac{i\mu}{2\pi}(\lambda-\sqrt{-2\mu p_0 +\mu p^2/M-i 0^+}), \nonumber \end{align} and $\lambda$ is the renormalization scale~\cite{Kaplan:1998tg} and $\mu=M_n M_c/(M_n+M_c)$ the reduced mass. Comparing the above relation to the equivalent one from the effective range expansion in the $s$-wave \begin{align} i\mathcal A(p)=\frac{2\pi}{\mu}\frac{i}{p\cot\delta-i p}\approx \frac{2\pi}{\mu}\frac{i}{-\gamma+\rho(p^2+\gamma^2)/2-i p}, \end{align} we get \begin{align}\label{eq:EFTcouplings} \frac{2\pi\Delta}{\mu h^2}+\lambda=&\gamma-\frac{1}{2}\rho\gamma^2,\\ -\frac{2\pi}{h^2\mu^2}=&\rho. \nonumber \end{align} The binding momentum $\gamma$ is determined from the binding energy $B=\gamma^2/(2\mu)$. The effective range $\rho$ is typically less constrained by data, as elastic neutron scattering data is scarce. However, $\rho$ can be constrained from radiative capture or Coulomb dissociation data when available. In the EFT power counting $\gamma\sim Q$ for shallow bound states and contributes at leading order. \emph{A priori} it is not known how $\rho$, which has dimensions of length, should scale. If $\rho\sim 1/\Lambda$ it is a next-to-leading order effect whereas if $\rho\sim 1/Q$ it contributes at leading order. We consider both situations later -- perturbative $\rho$ for $^{11}$Be and $^{19}$C and non-perturbative $\rho$ for $^{15}$C. The form factor calculations also depend on the magnetic moment coupling of the neutron and possible two-body currents.
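As a quick numerical cross-check of the effective range expansion above (our sketch; the numbers are illustrative), the inverse amplitude $-\gamma+\rho(p^2+\gamma^2)/2-ip$ vanishes at the bound-state pole $p=i\gamma$ for any effective range $\rho$, consistent with the relation $B=\gamma^2/(2\mu)$:

```python
# Bound-state pole of the s-wave effective-range amplitude: the inverse
# amplitude -gamma + rho*(p**2 + gamma**2)/2 - i*p vanishes at p = i*gamma
# for any value of the effective range rho.
gamma = 29.22   # MeV, illustrative binding momentum
rho = 0.0137    # MeV^-1, illustrative effective range (rho*gamma ~ 0.4)

p = 1j * gamma
inv_amp = -gamma + 0.5 * rho * (p**2 + gamma**2) - 1j * p
```

The $\rho$-dependent term drops out exactly at the pole because $p^2+\gamma^2=0$ there, which is why the pole position, and hence the binding energy, is fixed by $\gamma$ alone.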
We consider the following magnetic operators in addition to the interactions in Eq.~(\ref{eq:Ls}): \begin{align} O_{EM}=2\kappa_n\mu_NN^\dagger\bm\sigma\cdot\bm B N +\mu_N L_M\phi^\dagger\bm\sigma\cdot\bm B\phi, \end{align} where $\kappa_n=-1.91304$ is the neutron anomalous magnetic moment, $\mu_N$ the nuclear magneton. $L_M$ is the dimensionless coupling for a magnetic two-body current. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\textwidth,clip=true]{FormFactorT0} \end{center} \caption{\protect EFT calculation of $\Gamma^0$. The wavy lines correspond to $A_0$ photons.} \label{fig:GammaZero} \end{figure} The form factors in Eq.~(\ref{eq:EMforms}) are calculated from Feynman diagrams with a photon between initial and final ground state $\phi$ with momenta $\bm{p}$ and $\bm{p}'$, respectively. In general this requires a description of the initial and final state interactions of the ground state, with an electromagnetic current insertion in the intermediate state. The EFT calculation of $\Gamma^0$ corresponds to the diagrams in Fig.~\ref{fig:GammaZero} where an $A_0$ photon is inserted between the initial and final ground state. We get \begin{align}\label{eq:Gamma0} i\Gamma^0 = &i e Z_c Z_\phi \bar u_\phi(\bm{p'},a')\[ h^2\frac{\mu M_c}{\pi |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_c\gamma}\) +1 \]u_\phi(\bm{p},a), \end{align} where the first term is the contribution from the one-loop diagram and the second term is from the tree-level diagram. The overall factor $Z_\phi$ is the wavefunction renormalization that is defined as the residue of the dimer $\phi$ propagator at the bound state energy pole~\cite{Chen:1999tn} \begin{align} Z_\phi^{-1} =\frac{\partial}{\partial p_0}[D_\phi(p_0,\bm{p})]^{-1}\Big|_{p_0=p^2/(2M)-B} = 1+\frac{\mu^2 h^2}{2\pi\gamma} =-\frac{1-\rho\gamma}{\rho\gamma}. \end{align} We used the relation $B=\gamma^2/(2\mu)$ for the shallow bound nucleus and $h^2=-2\pi/(\rho\mu^2)$ from before.
For the halo nuclei $^{11}$Be and $^{19}$C in Section~\ref{sec_results}, $\rho\sim 1/\Lambda$ and we see that the second term in Eq.~(\ref{eq:Gamma0}) is $\mathcal O(Q/\Lambda)$ smaller compared to the first term. Though the effective range correction contributes at NLO, some of the $\rho$'s are large~\cite{Hammer:2011ye,Fernando:2011ts,Acharya:2013nia}, which motivates us to use the ``zed''-parameterization~\cite{Phillips:1999hh}. In this parameterization, the wave function renormalization is reproduced exactly at NLO. For the halo nucleus $^{15}$C, $\rho\sim 1/Q$ and we keep the effective range contribution exactly by treating it non-perturbatively. In this case, both the terms in Eq.~(\ref{eq:Gamma0}) contribute at the same order. We start with a description of the zed-parameterization. A convenient starting point for formulating the zed-parameterization with dimers is Eq.~(\ref{eq:EFTcouplings}). The EFT power counting assumption $\gamma\sim\lambda\sim Q$ and $\rho\sim 1/\Lambda$ implies $\Delta\sim Q$, $h^2\sim1/\Lambda$. To renormalize the theory systematically, we expand the couplings $\Delta$ and $h$ as \begin{align} \Delta &=\Delta_1+\Delta_2+\Delta_3+\cdots,\\ h &=h_0+h_1+h_2+\cdots, \nonumber \end{align} where the subscript indicates the scaling with the powers of $Q$ in the $Q/\Lambda$ expansion. Then by inspection of Eq.~(\ref{eq:AmpEFT}), one sees that $\Delta_1$ and $h_0$ along with the unitary cut contribution $ip$ contribute at LO while the $p^2/(2\mu)$ piece associated with the effective range expansion would appear at higher order. As the combination $h^2 Z_\phi= 2\pi\gamma/[\mu^2(1-\rho\gamma)]$ enters the calculation often, in the zed-parameterization we rewrite $1/(1-\rho\gamma)$ as $1+(Z_d-1)$ where $Z_d-1$ is treated as order $Q/\Lambda$.
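The combination $h^2 Z_\phi$ quoted here follows directly from $h^2=-2\pi/(\rho\mu^2)$ and $Z_\phi^{-1}=1+\mu^2h^2/(2\pi\gamma)$. A small numerical sketch (our illustration, with illustrative $^{11}$Be-like values) confirms the identity:

```python
import math

gamma = 29.22      # MeV, illustrative binding momentum
mu = 853.6         # MeV, illustrative reduced mass
rho = 0.4 / gamma  # MeV^-1, chosen so that rho*gamma = 0.4

h2 = -2.0 * math.pi / (rho * mu**2)                         # h^2 = -2 pi/(rho mu^2)
Z_phi = 1.0 / (1.0 + mu**2 * h2 / (2.0 * math.pi * gamma))  # dimer residue

combo = h2 * Z_phi
expected = 2.0 * math.pi * gamma / (mu**2 * (1.0 - rho * gamma))
```

Since $\mu^2h^2/(2\pi\gamma)=-1/(\rho\gamma)$, the residue collapses to $Z_\phi=-\rho\gamma/(1-\rho\gamma)$ and the product $h^2Z_\phi$ reduces to the quoted expression for any choice of the input parameters.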
Consistently applying the $\Delta_n$, $h_n$ expansion to Eq.~(\ref{eq:EFTcouplings}) then yields \begin{align} &\Delta_1=-\frac{\gamma(\gamma-\lambda)}{\mu(Z_d-1)},& &\Delta_2=-\frac{\gamma(\gamma-2\lambda)}{2\mu},\\ &h_0^2=-\frac{2\pi\gamma}{\mu^2(Z_d-1)},& &h_1^2=-\frac{\pi\gamma(Z_d-1)}{2\mu^2}, \nonumber\\ &\hspace{0.25in} \vdots & &\hspace{0.25in}\vdots{} \nonumber \end{align} where the perturbative expansion for $\Delta$ beyond the terms shown vanishes but that for $h^2$ continues. It is straightforward then to show that \begin{align} i\mathcal A(p)=\frac{2\pi}{\mu}\frac{i}{-\gamma-ip}[1+(Z_d-1)+0+0+0+\cdots]\ , \end{align} as derived in Ref.~\cite{Phillips:1999hh}. $\rho$ and $Z_d$ are related in perturbation as $Z_d=1+\rho\gamma+(\rho\gamma)^2+\cdots$. We express physical observables in terms of $Z_d-1$ instead of $\rho$ when the perturbative expansion is valid. In situations where $\rho\sim1/Q$, $\rho\gamma\sim 1$, we do not treat $Z_d-1$ as a perturbation, and $Z_d=1/(1-\rho\gamma)$ is not expanded. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\textwidth,clip=true]{FormFactorTi} \end{center} \caption{\protect EFT calculation of $\Gamma^i$. The wavy lines correspond to $A_i$ photons. The magnetic coupling is represented in the second diagram with a $\otimes$, and the two-body current in the third diagram is represented with a filled square. } \label{fig:GammaI} \end{figure} The contribution to $\Gamma^i$ follows from the diagrams in Fig.~\ref{fig:GammaI}, which give: \begin{align}\label{eq:GammaI} i\Gamma^i=&ie Z_c Z_\phi \bar u_\phi(\bm{p},a)\left\{ \frac{p_i +p'_i}{2M} \[ h^2\frac{\mu M_c}{\pi |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_c\gamma}\) +1 \] \right. \\ &\left.
+i\frac{\mu_N}{e Z_c}\[ {h^2 \kappa_n}\frac{\mu M_n}{\pi |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_n\gamma}\) +L_M \]\epsilon^{ijk}\sigma_j q_k \right\}u_\phi(\bm{p}',a').\nonumber \end{align} $\Gamma^i$ receives contributions from magnetic photons $A_i$, including both the magnetic moment coupling and the electromagnetic current generated by the orbital motion of the charged $^{10}$Be, $^{14}$C or $^{18}$C core. The two-body current contribution is indicated by the dimensionless coupling $L_M$, which is assumed to have a natural size $\mathcal O(1)$. The two-body current contributes at NLO for perturbative $\rho$ and at LO for non-perturbative $\rho$ in the $Q/\Lambda$ expansion. We first derive some expressions for perturbative $\rho$, where we apply the zed-parameterization. We consider the non-perturbative case later separately when we discuss the $^{15}$C nucleus in Section~\ref{sec_results} to keep the discussion simple. Comparing Eqs.~(\ref{eq:EMforms}), (\ref{eq:Gamma0}) and (\ref{eq:GammaI}), we get \begin{align} \mathcal F_1(|\bm{q}|^2)&=Z_\phi \[ h^2\frac{\mu M_c}{\pi |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_c\gamma}\) +1 \] \\ &=\frac{2 M_c\gamma}{\mu |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_c\gamma}\) +(Z_d-1)\[ \frac{2 M_c\gamma}{\mu |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_c\gamma}\) -1\], \nonumber \\ G_M(|\bm{q}|^2)&=\frac{2 M\mu_N}{e Z_c} Z_\phi\[ {h^2 \kappa_n}\frac{\mu M_n}{\pi |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_n\gamma}\) +L_M \] \nonumber \\ &=\frac{2 M\mu_N}{e Z_c}\frac{2 M_n\gamma\kappa_n}{\mu |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_n\gamma}\)\nonumber\\ &+(Z_d-1)\frac{2 M\mu_N}{e Z_c}\[ \frac{2 M_n\gamma\kappa_n}{\mu |\bm{q}|} \tan^{-1}\(\frac{\mu|\bm{q}|}{2 M_n\gamma}\) -L_M\], \nonumber \\ G_E(|\bm{q}|^2)&=(1+\tau )\mathcal F_1(|\bm{q}|^2)-\tau G_M(|\bm{q}|^2).
\nonumber \end{align} The form factor $\mathcal F_1$ can be determined once the binding momentum $\gamma$ and the wave function renormalization constant $Z_d-1$ are known. For the magnetic form factor $G_M$, the two-body coupling $L_M$ is also needed. Expanding the electric form factor in $|\bm q|$, followed by an expansion in $Z_d-1\sim\rho\gamma\sim Q/\Lambda$, we get to NLO \begin{align} G_E(|\bm{q}|^2) &\approx 1-\frac{\mu^2}{12M_c^2\gamma^2}[1+(Z_d-1)] |\bm q|^2. \end{align} The charge normalization at low $|\bm q|^2$ is as expected. In the electric form factor we ignored the Darwin-Foldy contributions, which appear at higher order. In the EFT the core of the halo nucleus is treated as point-like. However, to compare the charge radius with experimental values one has to add the finite charge radius of the core in quadrature. We write \begin{align} &\langle r_E^2\rangle\approx \frac{\mu^2}{2 M_c^2 \gamma^2}[1+(Z_d-1)]+\langle r_c^2\rangle, \end{align} expanded to NLO, where $\sqrt{\langle r_c^2\rangle}$ is the core charge radius. The LO charge radius is entirely determined by the halo nucleus binding energy. The effective range parameter contributes at NLO, which we discuss in the next section. For the magnetic form factor we get \begin{align} \frac{e Z_c}{2M}G_M(|\bm{q}|^2)&\approx \mu_N[\kappa_n+(Z_d-1)(\kappa_n-L_M)] -\frac{\mu^2}{12 M_n^2\gamma^2}\kappa_n\mu_N[1+(Z_d-1)]|\bm q|^2, \end{align} expanded to NLO for small $|\bm q|^2$. The halo nucleus magnetic moment is identified as \begin{align}\label{eq:kappaEFT} \kappa_\phi=\kappa_n+(Z_d-1)(\kappa_n-L_M), \end{align} where the LO result is just the Schmidt value associated with the magnetic moment of the valence neutron. The LO magnetic radius is, in analogy to the charge radius, given by \begin{align} \langle r_M^2\rangle\approx \frac{\mu^2}{2 M_n^2 \gamma^2}[1+(Z_d-1)]. 
\end{align} From the above analysis, which is applicable to perturbative $\rho$, we see that the LO result is known from the binding energy and the neutron magnetic moment. At NLO, contributions from both the effective range and a two-body current are needed to determine the electromagnetic form factors. \section{Form Factors}\label{sec_results} In this section we apply the expressions derived above to the ${}^{11}$Be, ${}^{15}$C and ${}^{19}$C nuclei, and calculate the corresponding electromagnetic form factors. \subsection{$^{11}$Be} The $s$-wave $\frac{1}{2}^+$ state and the $p$-wave $\frac{1}{2}^-$ state of ${}^{11}$Be were analyzed in Ref.~\cite{Hammer201117}. We only consider the $s$-wave $\frac{1}{2}^+$ state here. In the EFT power counting with $\gamma\sim |\bm q|\sim Z_d-1\sim Q$, the LO and NLO contributions to the form factor $\mathcal A$ are: \begin{align}\label{eq:AZd} \mathcal A(|\bm q|^2)\approx \frac{4 M^2_c\gamma^2}{\mu^2 |\bm q|^2}\[\tan^{-1}\(\frac{\mu|\bm q|}{2M_c \gamma}\)\]^2 +(Z_d-1) &\frac{4 M_c\gamma}{\mu |\bm q|}\tan^{-1}\(\frac{\mu|\bm q|}{2M_c \gamma}\) \\ &\times\[\frac{2 M_c\gamma}{\mu |\bm q|}\tan^{-1}\(\frac{\mu|\bm q|}{2M_c \gamma}\)-1\]. \nonumber \end{align} Up to NLO, $\mathcal A$ depends only on the electric form factor $G_E$. The binding momentum in the above relation is determined from the valence neutron separation energy $B=500$ keV as $\gamma=\sqrt{2\mu B}\approx 29.22$ MeV~\cite{Kelley1990}. In Ref.~\cite{Hammer201117} the wave function normalization factor is determined from the Coulomb dissociation of the $\frac{1}{2}^+$ state to a neutron and ${}^{10}$Be through an E1 transition, which determines $Z_d-1=0.69$. This corresponds to $\rho\gamma\approx 0.4$ in $Z_d=1/(1-\rho\gamma)$. Effective range corrections, though perturbative, are significant, justifying the use of the zed-parameterization. 
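The quoted numbers can be cross-checked with a short script (our own sketch, not part of the original analysis; the neutron and ${}^{10}$Be masses are standard values that we supply):

```python
from math import sqrt

# Standard mass values (our assumption, not given in the text), in MeV
m_n = 939.565                          # neutron mass
M_c = 10.013534 * 931.494 - 4 * 0.511  # ^10Be nuclear mass from its atomic mass
mu = m_n * M_c / (m_n + M_c)           # reduced mass of the neutron-core system

B = 0.500                              # valence neutron separation energy (MeV)
gamma = sqrt(2 * mu * B)               # binding momentum
print(round(gamma, 2))                 # ~29.22 MeV, as quoted

# Z_d - 1 = 0.69 together with Z_d = 1/(1 - rho*gamma) gives
rho_gamma = 0.69 / (1 + 0.69)
print(round(rho_gamma, 2))             # ~0.41, i.e. rho*gamma ≈ 0.4
```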
This yields an EFT charge radius $\langle r_E^2\rangle^{1/2}=(2.40\pm0.02)$ fm, using the experimental ${}^{10}$Be radius $\langle r_c^2\rangle^{1/2}=(2.357\pm0.018)$ fm, which compares well with the experimental value $\langle r_E^2\rangle_\mathrm{exp}^{1/2}=(2.463\pm0.016)$ fm. The EFT expansion parameter for this system is estimated to be around $Q/\Lambda\sim 0.3$-$0.4$. The NLO result is expected to have an error of about 10-15\% from the NNLO $(Q/\Lambda)^2$ corrections. The final state interaction in the $p$-wave, which is treated perturbatively for natural-sized parameters, contributes at NNLO. The form factor $\mathcal B$ can be expanded similarly to get \begin{align}\label{eq:BZd} \frac{e^2 Z_c^2}{4M^2\mu_N^2} \mathcal B(|\bm q|^2)&\approx \kappa_n^2\frac{2 M_n^2\gamma^2}{M^2\mu^2 }\[\tan^{-1}\(\frac{\mu|\bm q|}{2M_n \gamma}\)\]^2 \\ &+ (Z_d-1)\kappa_n\frac{2 M_n\gamma|\bm q|}{M^2 \mu}\tan^{-1}\(\frac{\mu|\bm q|}{2M_n \gamma}\) \[\frac{2 M_n\gamma \kappa_n}{\mu |\bm q|}\tan^{-1}\(\frac{\mu|\bm q|}{2M_n \gamma}\) -L_M\]. \nonumber \end{align} To determine $\mathcal B$ at NLO we need to know $L_M$ besides $\gamma$ and $Z_d$. We fit $L_M$ to the known magnetic moment of $^{11}$Be, $\kappa_\phi^{(exp)}=-1.6814$~\cite{PhysRevLett.102B}, which gives $L_M=-2.25$ from Eq.~(\ref{eq:kappaEFT}). This is a reasonable value for a dimensionless coupling in the EFT power counting, where we assumed it to be $\mathcal O(1)$. In Fig.~\ref{fig:ABbe11} we plot the form factors $\mathcal A(|\bm q|^2)$ and $\mathcal B(|\bm q|^2)$. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\textwidth,clip=true]{Abe11.pdf} \includegraphics[width=0.7\textwidth,clip=true]{Bbe11.pdf} \end{center} \caption{\protect Form factors for $^{11}$Be. Solid red curve: LO contribution; dashed blue curve: LO + NLO contributions. } \label{fig:ABbe11} \end{figure} \subsection{$^{19}$C} The halo nucleus $^{19}$C was considered in halo EFT in Ref.~\cite{Acharya:2013nia}. 
The authors calculated the radiative capture $^{18}$C$(n,\gamma)^{19}$C and breakup $^{19}$C$(\gamma,n)^{18}$C cross sections. The EFT analysis extracts the binding energy as $0.575\pm 0.055$ MeV and $Z_d-1\approx 0.73$, where we only indicate the central value for $Z_d$, which enters at NLO. The charge radius and the magnetic moment of $^{19}$C are not known. The analysis for this system is similar to that of the $^{11}$Be system. We can make an NLO prediction for the charge radius in halo EFT \begin{align} \langle r_E^2\rangle-\langle r_c^2\rangle \approx \frac{\mu^2}{2 M_c^2 \gamma^2}[1+(Z_d-1)]=0.0534\times[1+0.73]\ \mathrm{fm}^2\approx 0.09\ \mathrm{fm}^2, \end{align} where we used the central values of the parameters. The form factor $\mathcal A$ is plotted in Fig.~\ref{fig:ABc19} using Eq.~(\ref{eq:AZd}). \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\textwidth,clip=true]{Ac19.pdf} \includegraphics[width=0.7\textwidth,clip=true]{Bc19.pdf} \end{center} \caption{\protect Form factors for $^{19}$C. Solid red curve: LO contribution; dashed blue curves: LO + NLO contributions. The shaded area between the blue curves indicates the range of NLO values explained in the text.} \label{fig:ABc19} \end{figure} The magnetic moment at NLO depends on the two-body coupling $L_M$, which is not known. If we require that the NLO result for the magnetic moment be within 30\% of the LO Schmidt value, then we can estimate $-2.7\lesssim L_M\lesssim -1.1$. With this assumption we plot the form factor $\mathcal B$ in Fig.~\ref{fig:ABc19} using Eq.~(\ref{eq:BZd}). The shaded band indicates where the NLO result in halo EFT is expected to lie. The lower dashed blue curve corresponds to $L_M=-2.7$ and the upper dashed blue curve to $L_M=-1.1$. \subsection{$^{15}$C} In Ref.~\cite{PhysRevC.86.044608}, $^{15}$C was treated as a single-neutron halo nucleus with a $^{14}$C core. 
The radiative capture $^{14}$C$(n,\gamma)^{15}$C and breakup $^{15}$C$(\gamma,n)^{14}$C processes (through Coulomb dissociation) were calculated. The capture process proceeds through E1 transitions from initial $^2P_{1/2}$ and $^2P_{3/2}$ states to the $^2S_{1/2}$ final state. The breakup process is related to the capture through detailed balance. The available direct capture and Coulomb dissociation data suggested that either the effective range or the $p$-wave interaction is non-perturbative. Here we revisit that discussion and present another analysis. In the $^2P_{1/2}$ channel there is a resonance at energy $E_r\approx 1.885$ MeV with a width of about $\Gamma_r\approx 40$ keV. The $p$-wave scattering volume and effective range are fixed as~\cite{Fernando:2011ts} \begin{align} a_1=-\frac{\mu\Gamma_r}{p_r^5}\approx -5.6\times10^{-8}\ \mathrm{MeV}^{-3},\ \ \mathrm{and}\ \ r_1=-\frac{2p_r^3}{\mu\Gamma_r}\approx-11\times 10^{3}\ \mathrm{MeV}, \end{align} where $p_r=\sqrt{2\mu E_r}$ is the resonance momentum. Though these values are not fine-tuned, near the resonance their contribution is kinematically enhanced~\cite{Bertulani:2002sz,Bedaque:2003wa}. Away from the resonance, the $p$-wave interaction in this channel is suppressed, as expected. The capture and Coulomb dissociation data then suggest that either the $^2S_{1/2}$ effective range $\rho$ (or $Z_d-1$) or the $^2P_{3/2}$ interaction is fine-tuned to be non-perturbative. Unlike in Ref.~\cite{PhysRevC.86.044608}, where the $^2P_{3/2}$ interaction was taken as non-perturbative, a non-perturbative $s$-wave effective range $\rho$ gives a slightly better fit, reducing the $\chi^2$ per degree of freedom from 1.70 to 1.26. Qualitatively the more important difference is that whereas the non-perturbative $p$-wave interaction required two operators to be fine-tuned in Ref.~\cite{PhysRevC.86.044608}, a fine-tuned $\rho$ involves a single fine-tuned $s$-wave operator. In Fig.~\ref{fig:c15Capture} we show the two fits to the data for the capture process. 
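As a numerical aside (our own sketch; the resonance momentum $p_r=\sqrt{2\mu E_r}$ and the standard neutron and ${}^{14}$C masses are inputs we supply), the quoted values of $a_1$ and $r_1$ follow directly from $E_r$ and $\Gamma_r$:

```python
from math import sqrt

# Standard mass values (our assumption, not given in the text), in MeV
m_n = 939.565                              # neutron mass
M_c = 14.003242 * 931.494 - 6 * 0.511      # ^14C nuclear mass from its atomic mass
mu = m_n * M_c / (m_n + M_c)               # reduced mass of the n-^14C system

E_r, Gamma_r = 1.885, 0.040                # resonance energy and width (MeV)
p_r = sqrt(2 * mu * E_r)                   # resonance momentum (MeV)

a1 = -mu * Gamma_r / p_r**5                # p-wave scattering volume (MeV^-3)
r1 = -2 * p_r**3 / (mu * Gamma_r)          # p-wave effective range (MeV)
print(a1)                                  # ~ -5.6e-8 MeV^-3
print(r1)                                  # ~ -1.1e4 MeV
```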
The dependence on the effective range $\rho$ enters as a factor of $1/(1-\rho\gamma)$, where we do not expand in $\rho$. We find $\rho=2.67$ fm or $Z_d=2.66$ from the fit. For $^{15}$C, with a binding energy $B=1.2181$ MeV, $\rho\gamma\sim 0.6$, which makes effective range corrections large. For this halo system, we treat $\rho\sim 1/Q$, and at LO we get for the charge radius \begin{align} \langle r_E^2\rangle-\langle r_c^2\rangle=\frac{\mu^2}{2M_c^2\gamma^2}\frac{1}{1-\rho\gamma} \approx 0.11\ \mathrm{fm}^2, \end{align} and for the magnetic moment \begin{align} \kappa_\phi=\frac{\kappa_n-\rho\gamma L_M}{1-\rho\gamma}. \end{align} Experimentally, only the magnitude of the magnetic moment is known, $|\kappa_\phi^{(exp)}|=1.720\pm0.009$~\cite{Asahi200288}. Assuming a shell-model configuration with a valence $s$-wave neutron dominating the $^{15}$C ground state wave function with 97-98\% probability, a tentative experimental value $\kappa_\phi^{(exp)}=-(1.77\pm0.05)$~\cite{Asahi200288} was extracted. From this we can extract $L_M=-2.0$. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\textwidth,clip=true]{c15Capture.pdf} \end{center} \caption{\protect Capture cross section for $^{14}$C$(n,\gamma)^{15}$C. The data is from Ref.~\cite{Nakamura2009}. The dashed blue curve is a fit with a non-perturbative $^2P_{3/2}$ interaction and the solid red curve is a fit with a non-perturbative $^2S_{1/2}$ effective range interaction. The curves are fitted to c.m. energy 1.5 MeV as explained in the text. } \label{fig:c15Capture} \end{figure} The form factors $\mathcal A(|\bm q|^2)$ and $\mathcal B(|\bm q|^2)$ for $^{15}$C are very similar to those for $^{11}$Be above, except that we do not expand in the effective range $\rho$, or equivalently in $Z_d-1$. 
We get: \begin{align} \mathcal A(|\bm q|^2)&=\frac{1}{(1-\rho\gamma)^2}\left[ \frac{2 M_c\gamma}{\mu |\bm q|}\tan^{-1}\(\frac{\mu|\bm q|}{2 M_c\gamma}\)-\rho\gamma \right]^2,\\ \frac{e^2 Z_c^2}{4 M^2\mu_N^2}\mathcal B(|\bm q|^2)&= \frac{|\bm q|^2}{2 M^2} \frac{1}{(1-\rho\gamma)^2} \left[\kappa_n\frac{2 M_n\gamma}{\mu |\bm q|}\tan^{-1}\(\frac{\mu |\bm q|}{2 M_n\gamma}\)-\rho\gamma L_M \right]^2 . \nonumber \end{align} The LO results are shown in Fig.~\ref{fig:ABc15}. \begin{figure}[thb] \begin{center} \includegraphics[width=0.7\textwidth,clip=true]{Ac15.pdf} \includegraphics[width=0.7\textwidth,clip=true]{Bc15.pdf} \end{center} \caption{\protect Form factors for $^{15}$C. Solid red curve: LO contribution.} \label{fig:ABc15} \end{figure} \section{Conclusions}\label{sec_conclusions} The electromagnetic form factors for several spin $\frac{1}{2}$ halo nuclei -- $^{11}$Be, $^{15}$C and $^{19}$C -- were calculated. The form factors probe the charge and magnetic distributions of the halo systems. The calculations were performed using halo EFT, where the halo nucleus is approximated as a single neutron bound to a nuclear core. We calculated the form factors to NLO, except for $^{15}$C, where the calculation was LO. The form factors depend on the neutron separation energy of the halo system, the $s$-wave effective range for neutron-core scattering, and a two-body magnetic coupling. The electric form factor for $^{11}$Be was calculated previously~\cite{Hammer201117}. The charge radius was found to agree with the known experimental value within the EFT error estimate. We include the magnetic form factor in this analysis. At NLO, a two-body magnetic coupling contributes, which is fitted to the known magnetic moment. The contribution from the two-body current is consistent with the EFT power counting. We provide the low-momentum dependence of the electric and magnetic form factors. The analysis for $^{19}$C is very similar to that of $^{11}$Be. 
We make an NLO prediction for the charge radius that depends on the effective range determined~\cite{Acharya:2013nia} from $^{19}$C Coulomb dissociation data. For the magnetic form factor we are only able to provide an estimate of the two-body contribution based on the power counting. A determination of the magnetic moment would fix the two-body contribution more precisely. The power counting for the $^{15}$C system is found to be a little different from that of the two systems above. We reanalyzed the Coulomb dissociation calculation~\cite{PhysRevC.86.044608}, and found that a non-perturbative $s$-wave effective range contribution describes the data better. In this system both the effective range and the two-body magnetic coupling contribute at LO. The effective range is determined from a fit to the dissociation data, and the two-body current contribution from the magnetic moment. The momentum dependence of the LO electric and magnetic form factors is presented. \begin{acknowledgments} The authors thank T. Nakamura for providing data on Coulomb dissociation. We also thank D. R. Phillips for valuable discussions. This work is partially supported by the U.S. NSF grants No. PHY-1307453 and HRD-1345219, and NASA grant No. NNX09AV07A. \end{acknowledgments} \input{HaloFormFactors_111015.bbl} \end{document}
Q: Is It Possible To Save Downloaded GameObject (using AssetBundle) as Prefab, not Unity3d file? My original prefabs (with attached models, shaders, animations, materials, scripts, etc.) disappeared by accident. I can only download the unity3d file from the server. The downloaded objects are visible in the Unity Editor. But I can't store them as prefabs (using drag and drop OR script). All components are missing. Question: - Why is this impossible (the GameObject is loaded in RAM, so why can't I store it on disk)? - How can I recover them? A: The short answer would be no, you cannot just create a prefab at runtime. The reason for this is that making prefabs is editor-only, relying on PrefabUtility, which is part of the UnityEditor namespace and hence only available outside of runtime (runtime code relies on the UnityEngine namespace). Assets also pretty much all get locked to read-only during runtime. However, that does not mean it is completely impossible to restore your GameObjects as prefabs. You could write your own script that serializes the GameObjects you want as prefabs into a format like JSON or XML, then make an editor script that de-serializes this data and reconstructs a new prefab out of it. You would have to figure out how to do this precisely on your own though, as I personally do not have any examples of how best to handle this. More on JSON serialization can be found here
Must pedigree dogs in the UK be tattooed/microchipped? Do pedigree dogs in the UK have to be tattooed (or microchipped)? If so, is this required when they are registered with the Kennel Club? Or does it only apply if they are to be entered into competitions? And what of the recording of markings and a reliance on honesty? Grateful for any info on this. I've not heard of tattooing dogs here at all, nor that anything like that is required for showing. Honesty does indeed form a very important part in the British system, though I think it is in the very early stages of thought at the moment.
\section{Introduction.} Social network analysis has been a hot topic in the field of data mining. In a co-authorship network, a node is an author and an edge indicates a publishing collaboration between two authors. Researchers are interested in these special networks, in which they discover a power law in the degree distribution, that is, only a small proportion of nodes have high degrees while the rest have low degrees. Social networks also present positive assortativity values, while in nonsocial networks, such as the Internet or biological networks, the values are always negative, indicating that in social networks higher-degree nodes tend to connect with higher-degree nodes, while in nonsocial ones it is largely possible that higher-degree nodes are linked with lower-degree ones. Moreover, researchers reveal community structures, where the vertices within communities have a higher density of edges while vertices between communities have a lower density of edges. In a co-authorship network, a community reflects a group of scholars with similar interests. From these static characteristics, people have gained much understanding of social networks. However, we are not satisfied with these achievements, and will further explore the dynamic features of social networks. For example, how can we track community evolution effectively? Do other dynamic features exist that distinguish social networks from nonsocial ones? How can we establish a more reasonable model of social networks? With this interest in the dynamic features of social networks, we first perform experiments in which, after a long time duration has been divided into several snapshots, we find that about 80 percent of nodes appear in only one or two snapshots. The experiment indicates that most nodes are so unstable that we cannot rely on them too much. We also discover that nodes with higher degree appear in more snapshots. 
On this basis, we propose a core-based algorithm called CommTracker to track community evolution effectively. With it, we not only find community evolution traces but also discover split or mergence points in the traces. Using the algorithm, we find two unique phenomena of social networks. One is that a larger community leads to a longer life, and the other is that a community with a longer life tends to have lower member stability. Correspondingly, we propose two representative coefficients, GROWTH and METABOLISM, by which we are able to tell social networks from nonsocial ones. Finally, we propose a more reasonable model which focuses on node change. The model successfully displays the two important phenomena discovered above. We validate our conclusions on 11 datasets including 6 social networks -- 3 co-authorship networks in the cond-mat, math and nonlinear fields, a call network, an email network and a movie actor network -- as well as 5 nonsocial ones, involving 3 software networks (tomcat 4, tomcat 5, ant), an Internet network and a vocabulary network. The rest of the paper is organized as follows: Section 2 reviews the related work. Section 3 gives definitions. Section 4 introduces some basic dynamic features of our datasets. Section 5 presents the core-based algorithm of tracking community evolution. Section 6 introduces two unique phenomena discovered in the social networks. Section 7 shows our model and Section 8 concludes. \section{Related Work.} A lot of work has been dedicated to exploring the characteristics of social networks. Barabasi and Albert show an uneven distribution of degree through the BA model\cite{POWER_LAW}. Newman has successfully discovered distinct characteristics between social networks and nonsocial ones\cite{assortivity}. Various methods have been utilized to detect community structures. 
Among them, there are Newman's betweenness algorithm \cite{NEWMAN_GN}\cite{NEWMAN_GN_FAST}, Nan Du's clique-based algorithm\cite{DUNAN_KDD} and CPM\cite{CPM}, which focuses on finding overlapping communities. Clustering is another technique to group similar nodes into large communities, including L. Donetti and M.~Miguel's method\cite{cluster1}, which exploits spectral properties of the graph as well as the Laplacian matrix, and J.~Hopcroft's ``natural community'' approach\cite{KDD03}. Some social network models have been proposed \cite{emily_model}\cite{model2}\cite{model3}. With respect to core node detection, Roger Guimera and Luis A.~Nunes Amaral propose a methodology that classifies nodes into universal roles according to their pattern of intra- and inter-module connections \cite{ANOTHER_FIND_CORENODE}. B.~Wu offers a method to detect core nodes with a threshold \cite{PEIXIN_ISDM}. Shaojie Qiao and Qihong Liu dedicate themselves to mining core members of a crime community\cite{911}. As to dynamic graph mining, Tanya Y.~Berger-Wolf and Jared Saia study community evolution based on node overlapping \cite{NODE_LAP}; John Hopcroft and Omar Khan propose a method which utilizes ``natural community'' to track evolution\cite{NAR_COMM}. However, both methods have to set some parameters, which makes it difficult to adapt to various situations. In contrast, Keogh et al. suggest the notion of parameter-free data mining\cite{NO_PARA}. Jimeng Sun's GraphScope is a parameter-free mining method for large time-evolving graphs\cite{COMM_EVL_KDD07}, using information-theoretic principles. Our method in this paper shares the same spirit. As forerunners, A.L.~Barabasi and H.~Jeong study static characteristic variations on the network of scientific collaboration\cite{SCIE_EVOL}. Gergely Palla and A.-L. Barabasi provide a method which effectively utilizes edge overlapping to build evolving relationships\cite{EDGE_LAP}. 
With the approach, they discover valuable phenomena of social community evolution. \section{Symbol Definition.} The table below lists almost all the symbols used in the paper.\\ \begin{tabular}{ll} \hline Sym. & Definition \\ \hline $C^{(t)}_{i}$ & Community of index $i$ in snapshot $t$\\ $N^{(t)}_{i}$ & Node of index $i$ in snapshot $t$\\ $W(N^{(t)}_{i})$ & Weight of the node of index $i$ in snapshot $t$\\ $Cen(N^{(t)}_{i})$ & Central degree of node $N^{(t)}_{i}$\\ $Core(C^{(t)}_{i})$ & Core node set of $C^{(t)}_{i}$\\ $Node(C^{(t)}_{i})$ & Node set of $C^{(t)}_{i}$\\ $Edge(C^{(t)}_{i})$ & Edge set of $C^{(t)}_{i}$\\ $|Node(C)|$ & Size of community $C$\\ $C^{(t)}_{i} \to C^{(t+1)}_{j}$ & $C^{(t)}_{i}$ is a predecessor of $C^{(t+1)}_{j}$ or $C^{(t+1)}_{j}$ is a successor of $C^{(t)}_{i}$\\ $C^{(t-k)}_{i}\Rightarrow C^{(t)}_{j}$ & $C^{(t-k)}_{i}$ is an ancestor of $C^{(t)}_{j}$\\ $Evol(C^{(t)}_{i})$ & Evolution trace of $C^{(t)}_{i}$\\ $|Evol(C^{(t)}_{i})|$ & Span of the evolution trace of $C^{(t)}_{i}$\\ \hline \end{tabular} \begin{definition} (COMMUNITY EVOLUTION TRACE). An evolution trace $Evol(C^{(t)}_{x})$ is a time-series of communities as follows: $$ Evol(C^{(t)}_{x}):=\{C^{(t)}_{x},C^{(t+1)}_{x},C^{(t+1)}_{y}, C^{(t+2)}_{x} \ldots,C^{(t+n)}_{x}\} (n \geq 0) $$ where each community $C^{(t+i)}_{x},i\in[1,n]$ satisfies the condition that there exists at least one community $C^{(t+i-1)}_{x}$ such that $C^{(t+i-1)}_{x}\to C^{(t+i)}_{x}$. Note that more than one community is allowed to appear in the same snapshot $t+i$, like $C^{(t+1)}_{x}$ and $C^{(t+1)}_{y}$, which are both located in snapshot $t+1$. $|Evol(C^{(t)}_{x})|$ is $n+1$. \end{definition} \begin{definition} (ANCESTOR OF A COMMUNITY). The definition of a community's ancestor is as follows: $C^{(t-k)}_{i}\Rightarrow C^{(t)}_{j}$ if there is an evolving chain $C^{(t-k)}_{i}\to C^{(t-k+1)}_{x},\ldots,\to C^{(t)}_{j}$ $(k \geq 1)$. \end{definition} \begin{definition} (COMMUNITY AGE). 
The age of a community is the time span between its birth snapshot and its current snapshot. Here, in the $Evol(C^{(t)}_{x})$ defined in Definition 1, the age of $C^{(t)}_{x}$ is 1 and that of $C^{(t+2)}_{x}$ is 3. \end{definition} \begin{definition} (MEMBER STABILITY OF A COMMUNITY). The member stability of a community $C^{(t)}$ is as follows: $$ MS(C^{(t)}) = \frac{|Node(C^{(t)})\cap(Node(C^{(t+1)}_1) \cup Node(C^{(t+1)}_2) \ldots \cup Node(C^{(t+1)}_n))|}{|Node(C^{(t)})\cup(Node(C^{(t+1)}_1) \cup Node(C^{(t+1)}_2) \ldots \cup Node(C^{(t+1)}_n))| } $$ where $C^{(t)} \rightarrow C^{(t+1)}_i$ $(i\in[1,n])$. \end{definition} \begin{definition} (MEMBER STABILITY OF A COMMUNITY EVOLUTION TRACE). The member stability of a community evolution trace is the average stability value of all communities having successors within the trace. Its definition is as follows: $\sum MS(C^{(t)})/n$, where the sum runs over the communities $C^{(t)}$ having successors and $n$ is their number. \end{definition} \section{Basic dynamic characteristics of social networks.} In this section, we are interested in the following three aspects: (1) how the scale of social networks evolves; (2) how the members of social networks evolve; (3) which nodes tend to live long lives. Note that the paper concentrates on social networks, but nonsocial networks are also taken into account because we must compare the distinct characteristics between them. \subsection{Dataset.} Co-authorship networks in the fields of condensed matter, math and nonlinear science. Here, nodes represent authors and edges are collaboration relationships from publishing papers. These three datasets include the co-authorship information of the Cornell e-print archive from 1993 to 2006, from 1993 to 2006 and from 1994 to 2006 respectively (http://arxiv.org), and we build 28, 28 and 26 network snapshots from them by taking half a year of data as a snapshot. Cell phone network. In this network, a caller or callee is a node and a phone communication between them is an edge. 
The dataset includes call information over a duration of 20 weeks in a province of China, and we obtain 10 network snapshots, each covering 2 weeks of call information. Email network. Here, a node is an email sender or receiver and an edge is one email communication. This dataset from Enron (http://www.cs.cmu.edu/enron/) spans about 3 years, and 32 network snapshots are obtained, each with a duration of 1 month. Collaboration network of movie actors. Nodes are movie actors and edges represent their collaborations. The dataset includes collaboration information from 1980 to 2002 (http://www.imdb.com). Each snapshot is 2 years. Internet network. From this dataset (http://sk\_aslinks.caida.org), we get 29 snapshots of the Internet, one every 2 months. Vocabulary network. We get vocabularies related to computers in EI Village from 1993 to 2006 (http://www.engineeringvillage2.org.cn). A node is a controlled term, and if two controlled terms appear in the same article, an edge exists between them. In this case, a snapshot spans one year. Software networks of Ant, Tomcat4, Tomcat5. Here, a node represents a class and an edge exists between two classes if they have an invoking relationship. The three datasets include 12, 19 and 21 versions respectively (http://www.apache.org), and each version is used to establish a network. \subsection{The evolution of network scale.} As Fig.\ref{network-scale} shows, in each co-authorship network (cond-mat, math, nonlinear), the node number of the network at different snapshots gradually increases. The phenomenon is also observed in the movie actor network. However, in the call network, such an increasing trend is not very apparent, and in the email network, we can see a fluctuating rise, but it falls in the latest snapshots. In our analysis, the co-authorship datasets and the movie dataset reflect worldwide collaboration, which is relatively complete. 
In contrast, the call network only considers the situation of one province and the email network is from the Enron company. Both of them might reflect only partial changes. In all, we can conclude that social network scale inflates as the network evolves. \subsection{The evolution of social network members.} Although the size of a network increases during its evolution, its membership is always changing, that is, some members will leave the network and some will enter it. We make a statistic which indicates that during the whole evolution process, about 80\% of nodes appear in no more than two snapshots (see Fig.\ref{activity-percentage-correlation}). Therefore, we conclude that the members of social networks change dramatically and only a small proportion exists in the networks stably. \subsection{Discovery of long life members.} We are also interested in which nodes appear most frequently in the network. Here, node degree is taken into account as a critical factor, since it indicates the importance of a node in the network to some extent. We calculate the correlation coefficient between node degree and appearance frequency in each of the six social networks: cond-mat is 0.12; math is 0.13; nonlinear is 0.22; call is 0.28; email is 0.44; movie actor is 0.14. In conclusion, nodes with higher degree will exist in the network with larger probability. According to the conclusions in this section, we understand that a large proportion of nodes is so unstable that we cannot rely on them too much, but must instead focus on the small set of stable nodes, especially when we want to track community evolution. \begin{figure*} \centering \includegraphics[width=0.9\textwidth, bb = 0 0 793 566]{network-scale.jpg} \caption{Network scale (node number) evolution. Snapshot id (X axis) and network scale (Y axis) }\label{network-scale} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\textwidth,bb = 0 0 793 566]{activity_percentage_correlation.jpg} \caption{Node appearance distribution. 
Node appearance times (X axis) and percentage (Y axis)}\label{activity-percentage-correlation} \end{figure*} \section{Core-based algorithm of tracking community evolution.} As discussed above, community structures are mined by many algorithms in every network snapshot. We are interested in how these communities evolve. For example, given a community in snapshot $t$, what about its state in the next snapshot $t+1$? Does it split into smaller ones or merge into a larger one with another community? Our algorithm, CommTracker, relies heavily on core nodes instead of the overlapping level of nodes or edges between two communities. From the experiments above, we have realized that most nodes lack stability. Therefore, relying on the representative and reliable core nodes, rather than on all nodes including the highly fluctuating ones, is more accurate and effective for tracking community evolution. A good example is the co-authorship community, where core nodes represent famous professors and ordinary nodes are their students. The research interest of the professors is usually that of the whole community. Moreover, it is harder for professors to change their research interest than for ordinary students. In this section, the algorithm of core node detection is first introduced, and then we present our core-based algorithm of tracking community evolution. \subsection{Core Node Detection Algorithm.} As discussed above, core nodes are of greatest importance in our evolution algorithm, so its preparation work, selecting core nodes from a community, is a key step. The structure of a community is too dynamic and unpredictable to set an empirical threshold to distinguish core nodes from ordinary ones. Unlike \cite{PEIXIN_ISDM}, the following method concentrates on being not only effective but also parameter-free. A node can be weighted in terms of many aspects, such as degree, betweenness, PageRank and so on. 
Generally, the higher a node's weight, the more important it is in a community. Here, we give each node $N_{i}$ a weight value $W(N_{i})$ equal to its degree. In our algorithm, both the community topology and the node weights are considered critical factors for distinguishing core nodes from ordinary ones. Algorithm 1 presents the whole procedure. \begin{figure} \centering \includegraphics[width=0.6\textwidth,bb=0 0 322 179]{core_detection.jpg} \caption{Core detection illustration.}\label{core_detection} \end{figure} The basic idea behind the algorithm resembles a voting strategy. Each node $N_{i}$ is entitled to evaluate the centrality of the nodes linked to it. Assuming that $W(N_{i})$ is higher than the weight $W(N_{j})$ of a linked node, $N_{i}$ is considered a more important node than $N_{j}$, so $N_{i}$'s centrality value is incremented while $N_{j}$'s is reduced, in both cases by $|W(N_{i})-W(N_{j})|$, the weight difference between the two nodes. After the ``vote'' of all surrounding nodes, if $N_{i}$'s centrality is nonnegative, it is regarded as a core node; otherwise, it is just an ordinary node. As Fig.\ref{core_detection} shows, $W(N_{1})=6$. The running result is $Cen(N_{1})=23$, $Cen(N_{2})=12$, whereas $Cen(N_{4})=Cen(N_{5})=-5$, $Cen(N_{6})=Cen(N_{7})=Cen(N_{10})=-4$, $Cen(N_{8})=Cen(N_{9})=-3$, and $Cen(N_{3})=-7$. Therefore, the core set is $\{N_{1},N_{2}\}$.
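The vote just illustrated is easy to implement. Below is a minimal Python sketch of the core detection procedure (function and variable names are ours, not from the paper), taking each node's degree within the community as its weight $W(N_i)$:

```python
from collections import defaultdict

def core_detection(edges):
    """Detect core nodes of a community via the degree-based vote.

    edges: iterable of (u, v) pairs describing the community graph.
    Returns the set of core nodes (centrality >= 0); when every node
    has the same weight, the whole community counts as core.
    """
    # Node weight W(N_i) is its degree within the community.
    weight = defaultdict(int)
    for u, v in edges:
        weight[u] += 1
        weight[v] += 1

    nodes = set(weight)
    if len(set(weight.values())) == 1:
        return nodes  # all weights equal: return the community itself

    # Every edge transfers |W(u) - W(v)| centrality from the lighter
    # endpoint to the heavier one.
    centrality = dict.fromkeys(nodes, 0)
    for u, v in edges:
        diff = abs(weight[u] - weight[v])
        if weight[u] >= weight[v]:
            centrality[u] += diff
            centrality[v] -= diff
        else:
            centrality[u] -= diff
            centrality[v] += diff

    return {n for n in nodes if centrality[n] >= 0}
```

For a star graph, only the hub survives the vote; for a triangle, all weights are equal and every node is core.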
\begin{algorithm}[!h] \caption{CoreDetection($C$)} \label{coreDetect} \begin{algorithmic}[1] \IF {$W(N_{1})=W(N_{2})=\ldots=W(N_{n})$} \STATE return C \ENDIF \STATE $Cen(N_{i}) = 0, i\in[1,n]$ \FOR {every edge $e \in Edge(C)$} \STATE $N_{i}$,$N_{j}$ are nodes connected with $e$ \IF {$ W(N_{i}) < W(N_{j}) $} \STATE$Cen(N_{i})=Cen(N_{i})-|W(N_{i})-W(N_{j})|$ \STATE$Cen(N_{j})=Cen(N_{j})+|W(N_{i})-W(N_{j})|$ \ENDIF \IF {$ W(N_{i}) \geq W(N_{j}) $} \STATE$Cen(N_{i})=Cen(N_{i})+|W(N_{i})-W(N_{j})|$ \STATE$Cen(N_{j})=Cen(N_{j})-|W(N_{i})-W(N_{j})|$ \ENDIF \ENDFOR \STATE coreset = \{\} \FOR{every node $N_{i}\in Node(C)$} \IF {$Cen(N_{i}) \geq 0$} \STATE add $N_{i}$ to coreset \ENDIF \ENDFOR \STATE return coreset \end{algorithmic} \end{algorithm} In general, Algorithm 1 is effective for detecting core nodes in a small network scope such as a community, where node distances are no more than 3 hops and each node has a large probability of connecting to all the others. \subsection{Core-based Algorithm of Tracking Community Evolution.} Tanya Y. Berger-Wolf and Jared Saia propose a method based on the overlapping level of nodes, in which $C^{(t+1)}$ is a successor of $C^{(t)}$ if $nodeoverlap(C^{(t)},C^{(t+1)})\geq s$ \cite{NODE_LAP}. However, setting a proper $s$ is challenging for users. When the members of a community change dramatically and $s$ is set high, $C^{(t+1)}$ will be considered to have disappeared because the overlap between the two communities is too low, even though $C^{(t+1)}$ in fact still exists. Conversely, if $s$ is set somewhat low, irrelevant communities get more opportunities to become successors of $C^{(t)}$, leading to a ``successor explosion'' that masks the real successors. Gergely Palla and A.-L. Barab\'asi provide an approach utilizing the edge overlap between two communities \cite{EDGE_LAP}, but it fails to deal with community split and mergence.
Suppose there are one community $C^{(t)}$ in snapshot $t$ and two communities $C^{(t+1)}_i$, $C^{(t+1)}_j$ in snapshot $t+1$. If the edge overlap between $C^{(t)}$ and $C^{(t+1)}_i$ is higher than that between $C^{(t)}$ and $C^{(t+1)}_j$, then $C^{(t+1)}_i$ becomes the successor of $C^{(t)}$ while $C^{(t+1)}_j$ is considered a newborn community; in reality, however, $C^{(t)}$ may have split into two parts. A similar problem exists in the process of community mergence. The disadvantage of the method above is that it treats all nodes in an unprejudiced way, which does not accord with the reality that different nodes have different influences. Our method pays close attention to this difference by putting the emphasis on core nodes. \begin{figure} \centering \includegraphics[width=6.75cm,height=3cm,scale=0.5,bb=0 0 526 233]{example_evl.jpg} \caption{Community evolution illustration: core nodes are colored red and ordinary ones grey. As shown, (1) in snapshot $t+1$, $C^{(t+1)}$ contains two core nodes $N_1$, $N_2$ of $C^{(t)}$; (2) node $N_3$ has also been in $C^{(t-m)}$, an ancestor of $C^{(t)}$. Therefore, $C^{(t+1)}$ becomes the succeeding community of $C^{(t)}$. In practice, if $C^{(t)}$ has no ancestor, then communities satisfying the first condition become $C^{(t)}$'s successors automatically.}\label{example_evl} \end{figure} The basic idea of our algorithm can be described as follows: $C^{(t)}_{i}\to C^{(t+1)}_{j}$ if and only if (1) at least one core node of $C^{(t)}_{i}$ appears in $C^{(t+1)}_{j}$, that is, $Core(C^{(t)}_{i})\cap Node(C^{(t+1)}_{j})\neq \emptyset$; and (2) at least one core node of $C^{(t+1)}_{j}$ appears in some ancestor community of $C^{(t)}_{i}$, that is, there exists a $C^{(t-m)}_{k}$ with $C^{(t-m)}_{k} \Rightarrow C^{(t)}_{i}$ and $Node(C^{(t-m)}_{k})\cap Core(C^{(t+1)}_{j})\neq \emptyset$.
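In code, the two conditions amount to two set intersections; a minimal sketch (the helper and its argument names are ours, not from the paper):

```python
def is_successor(core_prev, nodes_next, core_next, ancestor_node_sets):
    """Check whether C(t+1) succeeds C(t) under conditions (1) and (2).

    core_prev          -- Core(C(t)_i), the core-node set of the old community
    nodes_next         -- Node(C(t+1)_j), all nodes of the candidate successor
    core_next          -- Core(C(t+1)_j), its core nodes
    ancestor_node_sets -- node sets of the ancestors of C(t)_i; when C(t)_i
                          has no ancestor, pass its own node set so that
                          condition (2) holds automatically, as in the text
    """
    # (1) at least one core node of C(t) survives into C(t+1)
    cond1 = bool(core_prev & nodes_next)
    # (2) at least one core node of C(t+1) already appeared in the past
    cond2 = any(core_next & nodes for nodes in ancestor_node_sets)
    return cond1 and cond2
```

Running this check over all community pairs in consecutive snapshots yields the successor relationships from which the evolution traces are built.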
See Fig.\ref{example_evl}. \begin{algorithm}[!h] \caption{Community Evolution($C^{(t)}_i$)} \label{commEvolution} \begin{algorithmic}[1] \STATE Evol($C^{(t)}_i$) = \{$C^{(t)}_i$\} \STATE $Core(C^{(t)}_{i})$ = CoreDetection($C^{(t)}_{i}$) \FOR {every community $C^{(t+1)}_j$ in snapshot $t+1$} \STATE $Core(C^{(t+1)}_{j})$ = CoreDetection($C^{(t+1)}_{j}$) \IF {$Core(C^{(t)}_{i})\cap Node(C^{(t+1)}_{j})\neq \emptyset$ and $Node(C^{(t-m)}_{k})\cap Core(C^{(t+1)}_{j})\neq \emptyset$ and $C^{(t-m)}_{k} \Rightarrow C^{(t)}_{i}$} \STATE establish the relationship $C^{(t)}_{i} \rightarrow C^{(t+1)}_j$ \STATE Evol($C^{(t+1)}_j$) = Community Evolution($C^{(t+1)}_j$) \STATE Evol($C^{(t)}_i$) = Evol($C^{(t)}_i$)$\cup$Evol($C^{(t+1)}_j$) \ENDIF \ENDFOR \STATE return $Evol(C^{(t)}_{i})$ \end{algorithmic} \end{algorithm} For the first condition, it is reasonable to require that $C^{(t)}_{i}$'s core nodes appear in a succeeding community $C^{(t+1)}_{j}$, owing to the representative quality of core nodes. As for the second condition, if a community $C^{(t+1)}_{j}$ is to become the successor of a specified community $C^{(t)}_{i}$, its core nodes must appear in some ancestor of $C^{(t)}_{i}$: because of the stability of core nodes, they do not appear suddenly without any evidence in past snapshots. We describe the whole procedure in Algorithm 2. From the perspective of successors and predecessors, we obtain a very straightforward way to identify \emph{community split}, \emph{community mergence}, \emph{community birth}, and \emph{community death}. Note that these are four phenomena that occur within a single evolution trace. \begin{itemize} \item Community Split: a community has more than one successor. \item Community Mergence: a community owns more than one predecessor. \item Community Birth: a community has no predecessor. \item Community Death: a community has no successor.
\end{itemize} Fig.\ref{community_evolution_illustration} shows a typical example of community evolution. \begin{figure*} \centering \includegraphics[width=1.0\textwidth,bb=0 0 1023 158]{illustration.jpg} \caption{Community evolution illustration. Red square points are core nodes.}\label{community_evolution_illustration} \end{figure*} \section{Two representative phenomena in social networks.} \begin{figure*} \centering \includegraphics[width=1.0\textwidth,bb=0 0 800 573]{2-all.jpg} \caption{(a) The correlation between community size (X axis) and community age (Y axis). (b) The correlation between evolution trace span (X axis) and member stability (Y axis).}\label{2_phenomena} \end{figure*} In \cite{EDGE_LAP}, Palla performed two experiments, on the cond-mat co-authorship and call networks only: one to find the correlation between community size and age, and the other to uncover the correlation between evolution trace span and member stability. In that paper he concludes that larger communities live longer, and that the longer an evolution trace spans, the lower its member stability is. We are interested in these two phenomena in other social networks as well as in nonsocial ones. The results are shown in Fig.\ref{2_phenomena} (a) and (b). First, using CommTracker we rediscover phenomena similar to those reported in Palla's paper, which supports the effectiveness and correctness of our method. Second, it is obvious that the 6 social networks display the two common behaviors discussed above, whereas the nonsocial networks fail to exhibit them: in nonsocial networks, the size of a community does not reflect its age, and a community with higher stability lives a longer life. We calculate the correlation coefficients between community size and age (GROWTH) and between evolution trace span and member stability (METABOLISM); see Table \ref{growth_metabolism}.
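Both coefficients are ordinary Pearson correlations computed over the communities (or evolution traces) of a network; for concreteness, a minimal sketch (function and variable names are ours):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# GROWTH:     pearson(community_sizes, community_ages)
# METABOLISM: pearson(trace_spans, member_stabilities)
```

A positive GROWTH thus means larger communities tend to be older, and a negative METABOLISM means longer-lived traces tend to have less stable membership.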
In the first experiment, the values for social networks are positive while those for nonsocial ones are nearly all negative. In the second experiment, the values for social networks are negative whereas those for nonsocial ones are all positive. The two experiments reveal that we can differentiate social networks from nonsocial ones according to GROWTH and METABOLISM. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline &1&2&3&4&5&6&7&8&9&10&11&12\\ \hline GROWTH & 0.67 & 0.45 & 0.76 & 0.2 & 0.39 & 0.31 & 0.29 & -0.07 &-0.02 & -0.09 & -0.23 & -0.01\\ \hline METABOLISM & -0.76 & -0.72 & -0.62 & -0.76 & -0.67 & -0.37 & 0.25 & 0.25 & 0.23 & 0.47 & 0.51 & 0.16\\ \hline \end{tabular} \caption{GROWTH and METABOLISM. (1) cond-mat (2) math (3) nonlinear (4) call (5) email (6) movie actor (7) Internet (8) vocabulary (9) ant (10) tomcat4 (11) tomcat5 (12) random }\label{growth_metabolism} \end{table} One important reason for this distinction is that in social networks a community represents a group of persons with close connections, while in nonsocial ones a community is just a cluster of objects. In social networks, if a community is to obtain a long life, it must undergo suitable member changes: when old core members retire, new ones take over responsibility in time so that the development of the community is well supported. Conversely, if a community refuses to absorb new members, then when the old core members exit, new core members may not yet have been cultivated, leading to quick disintegration. In contrast, the members of nonsocial networks are objects, not persons. For example, in a software network, a community is a cluster of classes with similar functions; if a class cluster is well designed, it experiences little change and is used for a long time. \section{Social network model.} Many social network models have been established.
However, when we examine snapshots generated from these models, most of them fail to display the characteristic behaviors proposed above. In our view, a main defect is that a node exists in the network permanently once it has been added. Yet, as the experiments in Section 4 show, many nodes enter the network and then quickly exit from it. Hence, how to revise existing models to make them more realistic is a problem to be solved. \subsection{Model introduction.} Our model is based on Emily's model \cite{emily_model}, which takes social network aspects into account comprehensively, such as the meeting rate between pairs of individuals and the decay of friendships. Moreover, Emily's model indeed reproduces many static features of social networks. Therefore, we adopt it as the basis of our model. Our model can be simulated directly using the following algorithm. Let $n_p=\frac{1}{2}N(N-1)$, where $N$ is the initial network scale; let $n_e=\frac{1}{2} \sum z_i$, where $z_i$ is the degree of the $i^{th}$ vertex; and let $n_m=\frac{1}{2} \sum z_i(z_i-1)$. 1. We choose $n_p r_0$ pairs of vertices uniformly at random from the network to meet. If a pair meet who do not have a pre-existing connection, and if neither of them already has the maximum $z^*$ connections, then a new connection is established between them. 2. We choose $n_m r_1$ vertices at random, with probability proportional to $z_i(z_i-1)$. For each vertex chosen, we randomly choose one pair of its neighbors to meet, and establish a new connection between them if they do not have a pre-existing connection and if neither of them already has the maximum number $z^*$ of connections. 3. We choose $n_e \gamma$ vertices with probability proportional to $z_i$. For each vertex chosen, we choose one of its neighbors uniformly at random and delete the connection to that neighbor. 4.
We choose one vertex; if its degree $z_i$ exceeds the average degree $\overline{z}$, we delete it with probability $\alpha$, otherwise with probability $\beta$. The process repeats until $k_d$ vertices have been deleted. 5. We add $k_a$ new vertices. Each new vertex establishes a link with a randomly chosen vertex $v$ and then also connects to the highest-degree vertex among $v$'s neighbors. Note that the first 3 steps already exist in Emily's algorithm, while the last 2 steps are added by ourselves. The 4th step is responsible for deleting some existing vertices according to their degrees. The last step focuses on adding new vertices; in this step, we remove the limit on the maximum number of connections in order to allow some vertices to attain high degree. In reality, a community consists of vertices with distinct degrees, whereas in Emily's social network model a community tends to be a clique due to the maximum-connection limit. As pointed out in \cite{emily_model}, the network is initialized by starting with no edges and running the first two steps without the other three until all or most vertices have degree $z^*$ (we set the threshold at 85\%). Then all five steps are used for the remainder of the simulation. \subsection{Model simulation.} Six experiments have been performed with different parameters $\alpha$, $\beta$, $k_a$, and $k_d$, as shown in Fig.\ref{model}. In all simulations, $z^*=5$, $N=250$, $r_0=0.0005$, $r_1=2$, $\gamma=0.005$. While all five steps are running, we take a snapshot every five repetitions and consider 17 snapshots together. \begin{figure*} \centering \includegraphics[width=1.0\textwidth,bb = 0 0 793 566]{model.jpg} \caption{Model simulation.
(a) $\alpha = 0.8$, $\beta = 0.6$, $k_a = k_d = 3$ (b) $\alpha = 0.5$, $\beta = 0.5$, $k_a = k_d = 3$ (c) $\alpha = 0.3$, $\beta = 0.8$, $k_a = k_d = 3$ (d) $\alpha = 0.8$, $\beta = 0.3$, $k_a = k_d = 3$ (e) $\alpha = 0.5$, $\beta = 0.5$, $k_a = k_d = 6$ (f) $\alpha = 0.5$, $\beta = 0.5$, $k_a = 5$, $k_d = 3$. }\label{model} \end{figure*} \section{Conclusions.} In this paper, we first perform some basic experiments to explore the dynamic characteristics of social networks, discovering that a large percentage of nodes are so unstable that we cannot rely on them too much, and that nodes with higher degree appear more frequently during the evolution of a social network. Based on these experimental results, we propose a novel core-based algorithm for tracking community evolution, which has the following features: (1) it is effective; (2) it is parameter-free; (3) it is suitable for discovering split and mergence points. With the algorithm, we uncover two representative dynamic features of social networks and define two coefficients, GROWTH and METABOLISM, by which we also achieve the goal of telling social networks apart from nonsocial ones. In the end, we propose a revised social network model which displays the two typical characteristics. The experiments are based on 6 social networks (co-authorship networks, a call network, a movie actor network, and an email network) and 5 nonsocial networks (the Internet, a vocabulary network, and software networks).
{"url":"https:\/\/deepai.org\/publication\/uniform-tail-estimates-and-l-p-n-convergence-for-finite-difference-approximations-of-nonlinear-diffusion-equations","text":"DeepAI\n\n# Uniform tail estimates and L^p(\u211d^N)-convergence for finite-difference approximations of nonlinear diffusion equations\n\nWe obtain new equitightness and C([0,T];L^p(\u211d^N))-convergence results for numerical approximations of generalized porous medium equations of the form \u2202_tu-\ud835\udd0f[\u03c6(u)]=g in \u211d^N\u00d7(0,T), where \u03c6:\u211d\u2192\u211d is continuous and nondecreasing, and \ud835\udd0f is a local or nonlocal diffusion operator. Our results include slow diffusions, strongly degenerate Stefan problems, and fast diffusions above a critical exponent. These results improve the previous C([0,T];L_loc^p(\u211d^N))-convergence obtained in a series of papers on the topic by the authors. To have equitightness and global L^p-convergence, some additional restrictions on \ud835\udd0f and \u03c6 are needed. Most commonly used symmetric operators \ud835\udd0f are still included: the Laplacian, fractional Laplacians, and other generators of symmetric L\u00e9vy processes with some fractional moment. 
We also discuss extensions to nonlinear possibly strongly degenerate convection-diffusion equations.\n\n\u2022 7 publications\n\u2022 3 publications\n\u2022 2 publications\n02\/21\/2021\n\n### Convergence rate of DeepONets for learning operators arising from advection-diffusion equations\n\nWe present convergence rates of operator learning in [Chen and Chen 1995...\n03\/23\/2021\n\n### Majorant series for the N-body problem\n\nAs a follow-up of a previous work of the authors, this work considers un...\n01\/24\/2021\n\n### A symmetric fractional-order reduction method for direct nonuniform approximations of semilinear diffusion-wave equations\n\nWe introduce a symmetric fractional-order reduction (SFOR) method to con...\n11\/25\/2022\n\n### Generalized convolution quadrature for the fractional integral and fractional diffusion equations\n\nWe consider the application of the generalized Convolution Quadrature (g...\n11\/30\/2019\n\n### The one-phase fractional Stefan problem\n\nWe study the existence and properties of solutions and free boundaries o...\n11\/28\/2021\n\n### Remarks on the Radiative Transfer Equations for Climatology\n\nUsing theoretical and numerical arguments we discuss some of the commonl...\n12\/07\/2021\n\n### Explicit approximations for nonlinear switching diffusion systems in finite and infinite horizons\n\nFocusing on hybrid diffusion dynamics involving continuous dynamics as w...\n\n## 1. Introduction\n\nThe purpose of this paper is to improve convergence results in to for numerical schemes of generalized porous medium equations in the context of bounded and integrable solutions. In detail, we study\n\n (GPME)\n\nwhere is the solution, is nondecreasing and -H\u00f6lder continuous with , some right-hand side, , and . 
The general operator is given\u00a0as\n\n (1.1) L=L\u03c3,\u03bc:=L\u03c3+L\u03bc\n\nwhere is a possibly degenerate local diffusion operator\n\n (1.2) L\u03c3[\u03c8](x):=tr(\u03c3\u03c3TD2\u03c8(x))\n\nwhere , , and , and the anomalous or nonlocal diffusion operator is defined for any as\n\n (1.3) L\u03bc[\u03c8](x)=\u222bRN\u2216{0}(\u03c8(x+z)\u2212\u03c8(x)\u2212z\u22c5D\u03c8(x)1|z|\u22641)d\u03bc(z),\n\na characteristic function, and\n\na nonnegative and symmetric Radon measure satisfying at least the usual L\u00e9vy measure condition (see Section 3). In this paper, to simplify, we will always choose either or . That is, the local operator is given as where .\n\n###### Remark 1.1.\n1. Since will be symmetric, , and we have an equivalent definition of in (1.3) in terms of a principal value integral:\n\n L\u03bc[\u03c8](x)=limr\u21920+\u222b|z|>r(\u03c8(x+z)\u2212\u03c8(x))d\u03bc(z)=P.V.\u222b|z|>0(\u03c8(x+z)\u2212\u03c8(x))d\u03bc(z).\n2. We will also comment on the case when (GPME) has a convection term in addition, see Section 5.2.\n\nEquations of the form (GPME) (and also variants with convection) appear in numerous applications. We selectively mention reservoir simulation, sedimentation processes, and traffic flow in the local case [25, 8, 39]; cardiac electrophysiology and semiconductor growth in the nonlocal case [7, 40]\n\n; and flows of fluids through porous media, mathematical finance, information theory, and phase transitions in both cases\n\n[38, 12, 11, 36, 6]. Important examples are strongly degenerate Stefan problems (cf. [17, 18]) with , , and the full range of porous medium and fast diffusion equations (cf. [12]) with , . The class of operators coincides with the generators of symmetric L\u00e9vy processes [4, 35, 1]. This includes e.g. the Laplacian , fractional Laplacians , , relativistic Schr\u00f6dinger type operators , and , tempered stable processes [11], and even discretizations of all of these. 
Since and may be degenerate or even identically zero, equation (GPME) can be purely nonlocal, purely local, or a combination.\n\nCompactness results depend on the type of equation under study, and the properties available for such an equation. They are essential in the context of existence, continuous dependence, and asymptotic behaviour. For the latter example, this is particularly the case when considering the rescaled solution in the \u201cfour-step method\u201d introduced by Kamin and V\u00e1zquez in [33]. In all of these cases, an approximate solution of the equation under study is considered. Then one shows compactness of the family formed by in order to be able to find a limit function. The limit must of course be a solution of the original equation (or variants of it). In this paper, the approximate solution will always come from a finite-difference scheme for (GPME).\n\nTo prove compactness in with , we employ the well-known Arzel\u00e0-Ascoli and Kolmogorov-Riesz compactness theorems (cf. Appendix A). A systematic approach to these theorems are presented in Section 2. Compared to some previous results in this direction (see e.g. Lemma 2.2 in [34] and Theorem A.8 in [31]), we are actually going to deduce equitightness (uniform tail control) for approximate solutions of (GPME). Then we are able to avoid all unnecessary tricks to fulfil the requirements of both Arzel\u00e0-Ascoli and Kolmogorov-Riesz, and instead present a minimal and efficient compactness argument. However, we still use the uniform boundedness (-stability) of the solutions to make sure that some of the estimates needed hold. A possible improvement of our results is thus to study (GPME) in a pure -framework.\n\nAs far as we know, equitightness results for (GPME) have not been presented under our general assumptions before (especially in the nonlocal case), and these results are really the core of this paper. 
Such estimates are deduced by taking, roughly speaking, as a test function in the very weak formulation of (GPME). This gives uniform tail control of the approximative solution (cf. e.g. [12, Proof of Proposition 10.2]). In examples when (GPME) conserves mass, we can summarize our equitightness results by saying that such estimates always holds in the local case, and also in the nonlocal setting when we assume that the nonlocal operator is comparable to a fractional order operator at infinity.\n\nFinite-difference methods were developed in the full generality of (GPME) in [15, 16]; some early works are can be found in [20, 23]. We also refer to [21] (see also Part II of [22]) in the purely local case. In the case of (GPME) with an additional convection term, we mention e.g. [34, 31, 10].\n\nThe numerical schemes which will be presented below include most of the mentioned works on local and nonlocal cases. However, none of the above showed convergence in (but some could still deduce that the limit itself belonged to that space, see also Section 5.1). Our equitightness estimates therefore improves convergence results already present in the literature. Conservative and monotone finite-difference schemes for scalar conservation laws are discussed in [31, Theorem 3.8], and they immediately fall into our -framework. By adding a possibly nonlinear local diffusion term to such equations, we obtain convection-diffusion equations with local diffusion. Such equations have been studied in the context of finite-difference approximations in [34, Theorem 4.2]. See also Theorem 3.9 and Corollary 3.10 in [27]. Again, we can improve the convergence from the respective and to since -equitightness holds. In the nonlocal diffusion setting, finite-difference schemes has just recently been analyzed in rigorous detail and generality in [15, 16], see also [20, 23]. The former two references obtain compactness\/convergence results in , and the latter in . 
Our framework thus improves the compactness\/convergence results of all four papers.\n\n#### Outline\n\nWe start by reviewing known compactness theorems in Section 2. Assumptions and extensions are discussed. Main results are provided in Section 3 As the nonlocal operator will be the hardest term to control uniformly at infinity, we discuss its discretization, needed assumptions, and related estimates in that section as well. Section 4 is reserved for proofs, and Section 5 for extensions (including the case of the convection) and comments. Important well-known results are presented in Appendices A and B, and finally, some auxiliary results regarding equitightness can be found in Appendix C.\n\n#### Notation\n\nDerivatives are denoted by , , , and and denote the -gradient and Hessian matrix of . with will denote a standard mollifier.\n\nWe use standard notation for , , and . Moreover, is the space of smooth functions with compact support, and the space of measurable functions such that for every , , and when for all compact and . In a similar way we also define . Note that the notion of is a subtle one. In fact, we mean that has an a.e.-version which is continuous . See e.g. p. 726 in [21] for more details. The space with is identified as the Banach space with norm where\n\n |\u03c8|C0,\u03d1(R):=supx,y\u2208R{|\u03c8(x)\u2212\u03c8(y)||x\u2212y|\u03d1}.\n\nWhen , we simply get .\n\n###### Remark 1.2.\n\nFrom now on, we will study convergence in (abbrev. ) with . We use because we want it to be a unique identifier.\n\n## 2. On compactness and convergence in C([0,T];Lr(RN))\n\nLet us give an overview of the properties needed to deduce compactness and convergence in with .\n\n### 2.1. Necessary and sufficient conditions for compactness\n\nConsider a sequence , and assume that satisfies:\n\n1. (Equitight in space pointwise in time) For all ,\n\n limR\u2192\u221esupn\u222b|x|>R|un(x,t)|rdx=0.\n2. 
(Equicontinuous in space pointwise in time) For all , there exists a modulus of continuity such that\n\n supn\u2225un(\u22c5+\u03be,t)\u2212un(\u22c5,t)\u2225Lr(RN)\u2264\u03bb(|\u03be|).\n3. (Equicontinuous in time) For all , there exists a modulus of continuity such that\n\n supn\u2225un(\u22c5,t)\u2212un(\u22c5,s)\u2225Lr(RN)\u2264\u03bb(|t\u2212s|).\n\nProperties (I)\u2013(III) are exactly the necessary and sufficient conditions needed to obtain compactness in (cf. the Arzel\u00e0-Ascoli and Kolmogorov-Riesz compactness theorems A.1 and A.3).\n\n###### Theorem 2.1 (Compactness).\n\nAssume , and let\n\n {un}n\u2208N\u2282C([0,T];Lr(RN)).\n\nThe following are equivalent:\n\n1. The sequence satisfies properties (I)\u2013(III).\n\n2. There exists a subsequence and a such that\n\n unk\u2192uin\u00a0C([0,T];Lr(RN))\u00a0as\u00a0k\u2192\u221e.\n###### Remark 2.2.\n\nThe classical Kolmogorov-Riesz compactness theorem requires the family of functions to be equibounded, that is, for all ,\n\n supn\u2225un(\u22c5,t)\u2225Lr(RN)<\u221e.\n\nHowever, in [30], it has been pointed out that such a property follows from properties (I)\u2013(II).\n\n###### Proof.\n\nThe fact that the sequence satisfies (I)\u2013(II) is, by the Kolmogorov-Riesz compactness theorem (cf. Theorem A.3), equivalent with being relatively compact in for all . Finally, since the sequence satisfies (III), the proof is completed by an application of the Arzel\u00e0-Ascoli compactness theorem (cf. Theorem A.1). \u220e\n\n### 2.2. Consequences\n\nRecall that, in our context, will be a sequence of, e.g., numerical approximations of some function which could, e.g., be a distributional\/very weak, entropy, energy, strong, mild, or classical solution of (GPME). The next properties therefore relate the above compactness with results for the underlying equation.\n\n1. 
(Consistent approximation) Assume is a consistent approximation of some problem (P), i.e.,\n\n un\u2192uin\u00a0C([0,T];Lr(RN))\u00a0as\u00a0n\u2192\u221e\u27f9u\u00a0solves (P) in some sense.\n###### Corollary 2.3 (Existence by consistent approximation).\n\nAssume . Let be a sequence satisfying properties (I)\u2013(IV). Then there exist a solution of (P).\n\n###### Proof.\n\nBy (I)\u2013(III), any subsequence defined in Theorem 2.1 has a further subsequence converging to some function . The consistency given by (IV) tells us that the limit is a solution of (P). \u220e\n\nWe end this discussion, by noting that full convergence of the sequence relies on uniqueness of the problem (P):\n\n1. (Uniqueness) There is at most one solution of (P).\n\n###### Proposition 2.4 (Convergence by uniqueness).\n\nAssume . Let be a sequence satisfying properties (I)\u2013(V). Then\n\nwhere is the unique solution of (P).\n\n###### Proof.\n\nAssume by contradiction that does not converge to in . Then there is a subsequence and an such that for every . By compactness (Theorem 2.1) there is a further subsequence converging in to a function which is a solution of (P) (Corollary 2.3). However, uniqueness tells us that , and we have a contradiction. The whole sequence thus converges to in . \u220e\n\n### 2.3. Some variants\n\nWe will now discuss some variants of the above conditions which we will use in the paper, and also a comparison with other compactness results.\n\n#### Equitight in space uniformly in time.\n\nNow, we replace (I) and (III) by:\n\n1. (Equitight in space uniformly in time) For all ,\n\n2. (Equicontinuous in time) For all and all compact , there exists a modulus of continuity such that\n\n supn\u222bK|un(\u22c5,t)\u2212un(\u22c5,s)|dx\u2264\u03bbK(|t\u2212s|).\n\nWe immediately see that (i) implies (I). Moreover:\n\n###### Lemma 2.5.\n\nAssume . 
If the sequence satisfies (i) and (iii), then for all there exists such that for all and all\n\n (2.1) |t\u2212s|<\u03b7\u27f9\u222bRN|un(x,t)\u2212un(x,s)|rdx<\u03b5.\n###### Remark 2.6.\n\nThe condition (2.1) is a weaker version of the original (III). However, the second item of the Arzel\u00e0-Ascoli compactness theorem A.1 is still fulfilled, and hence, Theorem 2.1 still holds under the assumptions (i), (II), and (iii).\n\n###### Proof.\n\nFix and the compact set such that . Then\n\n supn\u222bRN|un(x,t)\u2212un(x,s)|rdx\u2264supn\u222bK|un(x,t)\u2212un(x,s)|rdx+2supnsupt\u2208[0,T]\u222b|x|>R|un(x,t)|rdx\n\nBy (i), we choose such that\n\n 2supnsupt\u2208[0,T]\u222b|x|>R|un(x,t)|rdx<\u03b52,\n\nand then by (iii), we take such that\n\n supn\u222bK|un(x,t)\u2212un(x,s)|rdx<\u03b52.\n\nThe proof is complete. \u220e\n\n#### Variants of Arzel\u00e0-Ascoli and Kolmogorov-Riesz.\n\nGeneralizations of the Arzel\u00e0-Ascoli compactness theorem to -spaces can be found in Simon\u2019s well-cited paper [37]. There he discusses compactness of functions which are in time with values in some Banach space\n\n. Sections 8 and 9 of that paper also contain what is commonly known as the Aubin-Lions lemma. Such an approach is different than what we do here, and is probably more suited in an energy-like setting. A nonlocal version is given by Theorem 3.1 in\n\n[32].\n\nRegarding Kolmogorov-Riesz on compact sets, the Helly compactness theorem can be used as a particular version in the -setting, see [29, Section 6].\n\n## 3. Main results\n\n### 3.1. Assumptions and concept of solution\n\nConsider the following typical assumptions on (GPME):\n\n (A\u03c6) \u03c6\u2208C0,\u03d1loc(R,R),\u03d1\u2208(0,1], is nondecreasing. (Ag) g\u00a0is measurable and\u00a0\u222bT0(\u2225g(\u22c5,t)\u2225L1(RN)+\u2225g(\u22c5,t)\u2225L\u221e(RN))dt<\u221e. 
    (Au_0) u_0 ∈ L^1(ℝ^N) ∩ L^∞(ℝ^N).

In this paper, we restrict ourselves to L^{σ,μ} = L^σ + L^μ, where

    (Aσ) L^σ = cΔ := c Σ_{i=1}^N ∂²_{x_i x_i} where c ∈ {0,1},

and L^μ is a general symmetric Lévy operator under the usual assumption:

    μ is a nonnegative symmetric Radon measure on ℝ^N ∖ {0} satisfying …

###### Remark 3.1.

1. Assumption () is the same as saying that

    g ∈ L^1(0,T; L^1(ℝ^N) ∩ L^∞(ℝ^N)),

where the space is understood as iterated L^p-spaces in the sense of [2]. Of course, we have that .

2. Without loss of generality, we can assume φ(0) = 0 (by adding constants to φ in (GPME)).

3. As solutions of (GPME) will be bounded, we can always assume that φ is globally Hölder continuous in ().

We will work with very weak solutions of (GPME):

###### Definition 3.1 (Very weak solutions).

Let and . Then u is a very weak solution of (GPME) if, for all , and

    ∬_{Q_T} (u ∂_t ψ + φ(u) L^{σ,μ}[ψ] + g ψ) dx dt + ∫_{ℝ^N} u_0(x) ψ(x,0) dx = 0.

Note that under the assumptions ()–(3.1), very weak solutions of (GPME) are unique in by [13, Theorem 3.1]. By the same reference,

    ‖u(·,t)‖_{L^∞(ℝ^N)} ≤ ‖u_0‖_{L^∞(ℝ^N)} and u ∈ C([0,T]; L^1_{loc}(ℝ^N)),

and hence, they are moreover in C([0,T]; L^p_{loc}(ℝ^N)) for all p ∈ [1,∞).

### 3.2. Approximation through a numerical method

To define our numerical scheme, we need to introduce a discrete grid in space and time. Consider a sequence of numbers 0 = t_0 < t_1 < ⋯ < T defining a nonuniform grid in time, and let J denote the corresponding set of time indices. The time steps are then denoted by

    Δt_j = t_j − t_{j−1} > 0 for every j ∈ J and Δt = max_{j∈J} Δt_j.

Moreover, let the space step be h > 0, and consider the discrete subset of ℝ^N given by

    G_h := hℤ^N = {x_β := hβ : β ∈ ℤ^N}.

We are now ready to define our numerical scheme.
Since u_0 and g do not necessarily have pointwise values, we set, for β ∈ ℤ^N and j ∈ J,

    (3.1) U^0_β := (1/h^N) ∫_{x_β+R_h} u_0(x) dx and G^j_β := (1/(h^N Δt_j)) ∫_{t_{j−1}}^{t_j} ∫_{x_β+R_h} g(x,τ) dx dτ.

Then we seek a function U which solves

    (3.2) U^j_β = U^{j−1}_β + Δt_j L_h[φ(U^j_·)]_β + Δt_j G^j_β for all (β,j) ∈ ℤ^N × J.

###### Remark 3.2.

Note that the above scheme is implicit in the diffusion term. This is done to ensure a simple theoretical analysis for merely Hölder continuous φ. If we choose an explicit scheme, we need to rely on a CFL-type stability condition which involves the Lipschitz constant of φ, and this condition will blow up if φ is not Lipschitz continuous. In the pure Hölder case, we would then need a further approximation of the nonlinearity. We refer the reader to [15, 16] for details.

The discrete variants of ∂_t and L^{σ,μ} will now be discussed. For the partial derivative in time, we use the simple backward difference:

    (ψ(t_j) − ψ(t_{j−1}))/Δt_j for all ψ: T^T_{Δt} → ℝ.

The finite-difference discretization Δ_h of Δ is well known and given by

    Δ_h ψ(x) = Σ_{i=1}^N (ψ(x + h e_i) + ψ(x − h e_i) − 2ψ(x))/h².

Recently, such an approximation was shown to be in the class of Lévy operators for a certain finite measure [14, 13, 15, 16]. We thus present a unified approach to numerical discretizations for local and nonlocal operators. In fact, by choosing the correct weights, we recover either local, nonlocal, or combinations of both.
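To make the scheme concrete, here is a minimal one-dimensional sketch of the update (3.2) with the backward time difference and L_h = Δ_h. Everything beyond the scheme itself is an illustrative assumption of ours: the porous-medium-type nonlinearity φ(u) = u|u|, the zero extension outside the grid, a vanishing source term G ≡ 0, and the damped fixed-point iteration used to solve the implicit equation — in practice one would use a monotone or Newton-type solver, and the simple iteration below only converges for Δt/h² small enough, even though the scheme itself carries no CFL restriction.

```python
import numpy as np

def phi(u):
    # Nondecreasing, locally Hölder nonlinearity; here the porous-medium
    # choice phi(u) = u|u| (m = 2), an assumption for illustration only.
    return u * np.abs(u)

def discrete_laplacian(v, h):
    # Delta_h psi(x) = (psi(x+h) + psi(x-h) - 2 psi(x)) / h^2 at interior
    # points; values outside the grid are taken to be zero.
    lap = np.zeros_like(v)
    lap[1:-1] = (v[2:] + v[:-2] - 2.0 * v[1:-1]) / h**2
    return lap

def implicit_step(u_prev, dt, h, n_iter=200):
    # Solve U = u_prev + dt * Delta_h[phi(U)] (one step of (3.2) with G = 0)
    # by a damped fixed-point iteration.  Convergence of this simple solver
    # needs dt/h^2 small, unlike the scheme itself.
    u = u_prev.copy()
    for _ in range(n_iter):
        u = 0.5 * u + 0.5 * (u_prev + dt * discrete_laplacian(phi(u), h))
    return u
```

Starting from a hat-shaped profile, repeated calls to `implicit_step` spread the profile while the discrete maximum principle keeps the values between 0 and max u_0.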
Hence, consider the family of bounded, symmetric, and monotone operators given by

    L_h[ψ](x_β) = L_h[ψ]_β = Σ_{γ≠0} (ψ(x_β + z_γ) − ψ(x_β)) ω_{γ,h} for all ψ: G_h → ℝ,

where z_γ ∈ G_h ∖ {0} and ω_{γ,h} are weights.

###### Remark 3.3.

We can also write

    L_h[ψ]_β = ∫_{ℝ^N} (ψ(x_β + z) − ψ(x_β)) dν_h(z) with ν_h(z) = Σ_{γ≠0} δ_{z_γ}(z) ω_{γ,h},

where δ_{z_γ} is the Dirac delta measure centered at z_γ (cf. [16]).

We then need the following discrete versions of (3.1):

    For all h > 0 and γ ≠ 0, ω_{γ,h} = ω_{−γ,h} ≥ 0 and sup_{0<h≤1} { … ω_{γ,h} } < ∞.

###### Remark 3.4.

The parallel of the above assumptions with (3.1) might be easier to see in terms of ν_h:

    sup_{0<h≤1} { … dν_h(z) } < ∞.

In Appendix C.3, we also provide conditions for checking this assumption through operator consistency (see (3.6) below) and assumptions on the limit operator.

Equitightness and convergence in C([0,T]; L^r(ℝ^N)) of numerical schemes will be presented for a suitable interpolant which extends the discrete solutions to the whole space. For that reason, let us define the piecewise constant space interpolant of U as

    Ū(x) = Σ_{x_β ∈ G_h} 1_{x_β + R_h}(x) U_β,

and the piecewise linear in time and piecewise constant in space space-time interpolant of U as

    (3.3) Ũ(x,t) = Ū^0(x) 1_{{t_0}}(t) + Σ_{t_j ∈ T^T_{Δt} ∖ {t_0}} 1_{(t_{j−1},t_j]}(t) ( Ū^{j−1}(x) + ((t − t_{j−1})/(t_j − t_{j−1})) (Ū^j(x) − Ū^{j−1}(x)) ).

### 3.3. Equitightness

Our results make use of a family of functions of the form

    (3.4) X_R(x) := X(x/R) with R > 0,

where X is some fixed function such that

    (3.5) …

Observe that for all and . Moreover:

###### Lemma 3.5.

Assume . The function X_R defined by (3.4)–(3.5) satisfies:

1. pointwise as .

2. If , then as .

###### Remark 3.6.

We note that gives the least restrictive convergence of in as .

###### Theorem 3.7 (Equitightness estimate).

Assume (), (), (), (3.2), and . Let and be such that (hence, ), and let be an a.e.-solution of (3.2) corresponding to and , and the corresponding interpolant. Then

    sup_{t∈[0,T]} ∫_{|x|>R} |Ũ_h(x,t)|^r dx ≤ M_{u_0,g}^{r−1} ( ‖u_0 X_R‖_{L^1(ℝ^N)} + ‖g X_R‖_{L^1(0,T;L^1(ℝ^N))} + C_{u_0,g,φ} T ‖L_h[X_R]‖_{L^p(ℝ^N)} ),

where

    M_{u_0,g} := ‖u_0‖_{L^∞(ℝ^N)} + ‖g‖_{L^1(0,T;L^∞(ℝ^N))}

and

    C_{φ,u_0,g} := |φ|_{C^{0,ϑ}} (M_{u_0,g})^{ϑ−1/q} ( ‖u_0‖_{L^1(ℝ^N)} + ‖g‖_{L^1(Q_T)} )^{1/q}.

###### Remark 3.8.

We interpret when .

The proof (see Section 4) basically consists of choosing X_R as a test function in Definition 3.1. Since we need to rely on L_h[X_R] → 0 in L^p(ℝ^N) as R → ∞, this puts a restriction on the class of operators, as can be seen in the next result. To write down the result properly, we will consider weights satisfying one of the following:

    For all h > 0 and γ ≠ 0, ω_{γ,h} = ω_{−γ,h} ≥ 0 and …

    In addition to (3.2), also assume that, for α ∈ (0,2), sup_{R>1} sup_{0<h≤1} { … Σ_{|z_γ|>R} ω_{γ,h} } ≤ C.

###### Corollary 3.9 (Equitightness).

Assume (), (), (), and . Let be an a.e.-solution of (3.2) corresponding to and , and the corresponding interpolant. Then

    lim_{R→∞} sup_{0<h≤1} sup_{t∈[0,T]} ∫_{|x|>R} |Ũ_h(x,t)|^r dx = 0

if any of the following holds:

1. When L_h = Δ_h (with weights satisfying (3.2)) and .

2. When L_h has weights satisfying (3.3) and .

3. When L_h has weights satisfying (3.3) and .

###### Remark 3.10.

1.
More generally, we also have equitightness when L_h has weights satisfying (3.2) and

    sup_{0<h≤1} { … Σ_{|z_γ|>R} ω_{γ,h} } = o_R(1).

2. Let us also provide the counterpart of (3.3) and (3.3) for μ:

    μ is a nonnegative symmetric Radon measure on ℝ^N ∖ {0} satisfying ∫_{0<|z|≤1} |z|² dμ(z) + ∫_{|z|>1} |z| dμ(z) < ∞,

and

    μ satisfies (???) and, for α ∈ (0,2), …

The latter is satisfied for e.g. μ such that (3.1) holds and for and some . We again refer to Appendix C.3 for practical ways of checking (3.3) and (3.3) through operator consistency (see (3.6) below) and assumptions on the limit operator.

3. Assumptions (b) and (b) are moment-like conditions. They imply that μ has moments of order 1 or α, respectively. This is equivalent to the underlying Lévy process having moments of the corresponding order; see e.g. Section 2.3 in [24].

###### Proof of Corollary 3.9.

Let us summarize the direct computations done in Appendix C:

1. in as if when , and when (see Corollary C.5).

2. Under (3.3), in as if (see Corollary C.6).

3. Under (3.3), in as if when , when and , and when and (see Corollary C.6 and Example C.1).

Now, we can deduce a condition on in order to have convergence as , and then conclude by Theorem 3.7.
Choose (hence, ), and require that:

    1 < 1/(1−ϑ) in Case (i) for N = 1,
    N/2 < 1/(1−ϑ) in Case (i) for N ≥ 2,
    N < 1/(1−ϑ) in Case (ii) for N ≥ 1,
    N/α < 1/(1−ϑ) in Case (iii) for N ≥ 2,
    1/α < 1/(1−ϑ) in Case (iii) for N = 1 and α ∈ (0,1),
    1 < 1/(1−ϑ) in Case (iii) for N = 1 and α ∈ [1,2).

Hence, we get a restriction on the lower bound of :
\section{Introduction} In certain respects, in the algebraic approach to homotopy theory, the basic object of study is a category $\mathcal{C}$ endowed with a class of morphisms $\mathcal{W}$, called weak-equivalences, that should be considered as ``isomorphisms honoris causa''. If the class of weak-equivalences is well behaved, we say that $(\mathcal{C},\mathcal{W})$ is a relative category: \begin{define}\label{d:rel0} A \emph{relative category} is a pair $(\mathcal{C},\mathcal{W})$, consisting of a category $\mathcal{C}$, and a subcategory $\mathcal{W}\subseteq \mathcal{C}$ that contains all the isomorphisms and satisfies the two out of three property; $\mathcal{W}$ is called the \emph{subcategory of weak-equivalences}. \end{define} The data of a relative category is enough to define most of the constructions needed in homotopy theory such as mapping spaces, homotopy limits, derived functors, etc. In fact, a relative category is one of the models for the abstract notion of an $(\infty,1)$-category, which enables one to define these concepts via universal properties (see \cite{BaKa}, \cite{Lur}). Alas, in a relative category, it is in practice very hard to ensure the existence of the desired objects or to carry out any computations. Thus, working effectively in a relative category $(\mathcal{C},\mathcal{W})$ is usually achieved by adding some extra structure. The most prevalent example is the structure of a model category defined by Quillen in \cite{Qui}. Model categories, albeit very useful, admit quite a ``heavy'' axiomatization. A model category consists of a relative category $(\mathcal{C},\mathcal{W})$ together with two subcategories $\mathcal{F},\mathcal{C}of$ of $\mathcal{C}$ called \emph{fibrations} and \emph{cofibrations}. The quadruple $(\mathcal{C},\mathcal{W},\mathcal{F},\mathcal{C}of)$ should satisfy many axioms. (We refer the reader to \cite{Hov} for the modern definition of a model category.)
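Two standard examples can be kept in mind for orientation (a supplementary illustration; neither example is needed in what follows):

```latex
% Supplementary illustration: familiar examples of relative categories.
\begin{rem}
Familiar examples of relative categories include the category of
topological spaces together with the weak homotopy equivalences, and
the category of chain complexes over a ring together with the
quasi-isomorphisms. In both cases the subcategory in question contains
all the isomorphisms and satisfies the two out of three property, and
each example in fact underlies a well-known model structure.
\end{rem}
```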
The axioms for a model category are often very hard to verify, and furthermore, there are situations in which there is a natural definition of weak equivalences and fibrations; however, the resulting structure is not a model category. (Note that the structure of a model category is determined by the classes of weak equivalences and fibrations, since the class of cofibrations is then determined by a left lifting property.) In \cite{BaSc1} we introduced a structure that is easier to verify and much weaker than a model category; we called it a ``weak fibration category''. A weak fibration category consists of a relative category $(\mathcal{C},\mathcal{W})$ together with one subcategory $\mathcal{F}$ of $\mathcal{C}$ called \emph{fibrations}, satisfying certain axioms (see Definition \ref{d:weak_fib}). In \cite{BaSc1} we show that a weak fibration category can be ``completed'' into a full model category structure on its pro-category, provided it satisfies conditions which we call ``pro-admissible'' and ``homotopically small''. The property of being homotopically small is a bit technical and we will not need it here. What is important for us here is that any small weak fibration category is homotopically small. The property of pro-admissibility is easier to define, and we now give the definition. Let $(\mathcal{C},\mathcal{W},\mathcal{F})$ be a weak fibration category. We say that a morphism in $\Pro(\mathcal{C})$ is in $Lw^{\cong}(\mathcal{W})$ if it is isomorphic, as a morphism in $\Pro(\mathcal{C})$, to a natural transformation which is level-wise in $\mathcal{W}$.
\end{rem} We say that the weak fibration category $(\mathcal{C},\mathcal{W},\mathcal{F})$ is \emph{pro-admissible} if $$(\Pro(\mathcal{C}),Lw^{\cong}(\mathcal{W}))$$ is a relative category (or in other words if $Lw^{\cong}(\mathcal{W})$ satisfies the two out of three property). From \cite[Theorem 4.8]{BaSc1} it easily follows (for more details see Theorem \ref{t:model}): \begin{thm}\label{t:model0} Let $(\mathcal{C},\mathcal{W},\mathcal{F})$ be a small pro-admissible weak fibration category. Then there exists a model category structure on $\Pro(\mathcal{C})$ such that: \begin{enumerate} \item The weak equivalences are $\mathbf{W} := Lw^{\cong}(\mathcal{W})$. \item The cofibrations are $\mathbf{C} := {}^{\perp} (\mathcal{F}\cap \mathcal{W})$. \item The fibrations are maps satisfying the right lifting property with respect to all acyclic cofibrations. \end{enumerate} Moreover, this model category is $\omega$-cocombinatorial, with $\mathcal{F}$ as the set of generating fibrations and $\mathcal{F}\cap \mathcal{W}$ as the set of generating acyclic fibrations. \end{thm} \begin{rem} \begin{enumerate} \item Note that by abuse of notation we consider morphisms of $\mathcal{C}$ as morphisms of $\Pro(\mathcal{C})$ indexed by the trivial diagram. \item A more explicit description of the fibrations in this model structure can be given, but this requires some more definitions. We give the more detailed theorem in the appendix (see Theorem \ref{t:model_elaborate}). \item A model category is said to be \emph{cocombinatorial} if its opposite category is \emph{combinatorial}. Combinatorial model categories were introduced by J. H. Smith as model categories which are locally presentable and cofibrantly generated (see for instance the appendix of \cite{Lur}). If $\gamma$ is a regular cardinal, we also follow J. H.
Smith and call a model category \emph{$\gamma$-combinatorial} if it is combinatorial and both cofibrations and trivial cofibrations are generated by sets of morphisms having $\gamma$-presentable domains and codomains. \end{enumerate} \end{rem} The pro-admissibility condition on a weak fibration category $\mathcal{C}$, appearing in Theorem \ref{t:model0}, is not intrinsic to $\mathcal{C}$. It is useful to be able to deduce the pro-admissibility of $\mathcal{C}$ only from conditions on $\mathcal{C}$ itself. One purpose of this paper is to give one possible solution to this problem. This is done in Section \ref{s:proper}. Everything we have discussed so far is completely dualizable. Thus we can define the notion of an \textbf{ind}-admissible weak \textbf{cofibration} category, and show: \begin{thm}\label{t:model_dual0} Let $(\mathcal{M},\mathcal{W},\mathcal{C})$ be a small ind-admissible weak cofibration category. Then there exists a model category structure on $\Ind(\mathcal{M})$ such that: \begin{enumerate} \item The weak equivalences are $\mathbf{W} := Lw^{\cong}(\mathcal{W})$. \item The fibrations are $\mathbf{F} = (\mathcal{C}\cap \mathcal{W})^{\perp} $. \item The cofibrations are maps satisfying the left lifting property with respect to all acyclic fibrations. \end{enumerate} Moreover, this model category is $\omega$-combinatorial, with $\mathcal{C}$ as the set of generating cofibrations and $\mathcal{C}\cap \mathcal{W}$ as the set of generating acyclic cofibrations. \end{thm} Model categories constructed using Theorem \ref{t:model_dual0} have a further convenient property, namely, their class of weak equivalences is finitely accessible, when viewed as a full subcategory of the morphism category (we follow the terminology of \cite{AR} throughout this paper). This means that it is of the form $\Ind(\mathcal{D})$ for some small category $\mathcal{D}$.
This assertion follows from the observation that $Lw^{\cong}(\mathcal{W})$ is the essential image of $\Ind(\mathcal{W})$ under the natural equivalence $\Ind(\mathcal{C}^{\to})\to\Ind(\mathcal{C})^{\to}$, where $\mathcal{W}$ is considered as a full subcategory of $\mathcal{C}^{\to}$. It then follows that $\Ind(\mathcal{W})$ is a full subcategory of $\Ind(\mathcal{C}^{\to})$, and thus $\Ind(\mathcal{W})\simeq Lw^{\cong}(\mathcal{W})$. In \cite{BaSc1} we have applied (a generalization of) Theorem \ref{t:model0} to a specific weak fibration category (namely, the category of simplicial sheaves over a Grothendieck site, where the weak equivalences and the fibrations are local in the sense of Jardine) to obtain a novel model structure on its pro-category. In this paper we also consider an application of Theorem \ref{t:model0} (or rather of its dual version, Theorem \ref{t:model_dual0}), but in the reverse direction. Namely, we begin with an $\omega$-combinatorial model category $\underline{\mathcal{M}}$ and ask whether the model structure on $\underline{\mathcal{M}}$ is induced, via Theorem \ref{t:model_dual0}, from a weak cofibration structure on its full subcategory of finitely presentable objects. The main conclusion we wish to deduce from this is the finite accessibility of the class of weak equivalences in $\underline{\mathcal{M}}$, as explained above. While we were writing the first draft of this paper, Raptis and Rosick\'y published a paper with some related results \cite{RaRo}. In their paper, Raptis and Rosick\'y mention that while the class of weak equivalences in any combinatorial model category is known to be accessible, the known estimates for the accessibility rank are generally not the best possible. They also prove theorems giving estimates for the accessibility rank of weak equivalences in various cases. Their main application is to the standard model structure on simplicial sets.
They show that the class of weak equivalences in this model structure is finitely accessible. The purpose of this paper is the same, and so is the main example. Namely, we prove theorems giving estimates for the accessibility rank of weak equivalences in various cases, and our main example is the category of simplicial sets, for which we achieve an estimate similar to that of \cite{RaRo}. However, our theorems, as well as the methods of proof, are completely different. Since our basic tool is based on applying Theorem \ref{t:model_dual0} as explained above, our estimates only concern finite accessibility. We do believe, however, that Theorem \ref{t:model_dual0}, and thus also our results here, can be generalized to an arbitrary cardinal instead of $\omega$. On the other hand, our theorems apply also in cases where the theorems in \cite{RaRo} do not. We will now state our main results. For this, we first need a definition: \begin{define} Let $(\mathcal{C},\mathcal{W})$ be a relative category. A map $f:A\to B$ in $\mathcal{C}$ will be called \emph{right proper} if for every pullback square of the form \[ \xymatrix{C\ar[d]^j\ar[r] & D\ar[d]^i\\ A\ar[r]^f & B} \] such that $i$ is a weak equivalence, the map $j$ is also a weak equivalence. \end{define} We can now state our first criterion for the finite accessibility of the category of weak equivalences (see Theorem \ref{l:admiss2}): \begin{thm} Let $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$ be an $\omega$-combinatorial left proper model category. Let $\mathcal{M}$ denote the full subcategory of $\underline{\mathcal{M}}$ spanned by the finitely presentable objects.
Suppose we are given a cylinder object in $\mathcal{M}$, that is, for every object $B$ of $\mathcal{M}$ we are given a factorization in $\mathcal{M}$ of the fold map $B\sqcup B\to B$ into a cofibration followed by a weak equivalence: $$B\sqcup B\xrightarrow{(i_0,i_1)} I\otimes B\xrightarrow{p} B.$$ (Note that we are not assuming any simplicial structure; $I\otimes B$ is just a suggestive notation.) We make the following further assumptions: \begin{enumerate} \item The category $\mathcal{M}$ has finite limits. \item Every object in $\mathcal{M}$ is cofibrant. \item For every morphism $f:A\to B$ in $\mathcal{M}$ the map $B\coprod_{A}(I\otimes A)\to B$, induced by the commutative square $$\xymatrix{A\ar[d]^{f}\ar[r]^{i_0} & I\otimes A\ar[d]^{f\circ p} \\ B \ar[r]^{=} & B,}$$ is a right proper map in $(\mathcal{M},\mathcal{W})$. \end{enumerate} Then the full subcategory of the morphism category of $\underline{\mathcal{M}}$, spanned by the class of weak equivalences, is finitely accessible. \end{thm} Our second criterion can be shown using the first one (see Theorem \ref{l:admiss3}): \begin{thm} Let $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$ be an $\omega$-combinatorial left proper model category. Let $\mathcal{M}$ denote the full subcategory of $\underline{\mathcal{M}}$ spanned by the finitely presentable objects. Assume that the category $\mathcal{M}$ has finite limits and let $*$ denote the terminal object in $\mathcal{M}$. Suppose we are given a factorization in $\mathcal{M}$ of the fold map $*\sqcup *\to *$ into a cofibration followed by a weak equivalence: $$*\sqcup *\xrightarrow{} I\xrightarrow{} *.$$ We make the following further assumptions: \begin{enumerate} \item For every morphism $Y\to B$ in $\mathcal{M}$, the functor $$Y\times_B(-):\mathcal{M}_{/B}\to\mathcal{M}$$ commutes with finite colimits. \item Every object in $\mathcal{M}$ is cofibrant. 
\item For every object $B$ in $\mathcal{M}$, the functor $$B\times(-):\mathcal{M}\to\mathcal{M}$$ preserves cofibrations and weak equivalences. \end{enumerate} Then the full subcategory of the morphism category of $\underline{\mathcal{M}}$, spanned by the class of weak equivalences, is finitely accessible. \end{thm} It is not hard to verify that the standard model structure on the category of simplicial sets satisfies the hypotheses of the previous theorem (see Theorem \ref{l:S_f_admiss}). Thus we obtain: \begin{thm}\label{l:S_f_admiss0} The full subcategory of the morphism category of $\mathcal{S}$, spanned by the class of weak equivalences, is finitely accessible. \end{thm} As mentioned above, Theorem \ref{l:S_f_admiss0} was also proved in \cite{RaRo}, using different methods. In the appendix we prove some results that might shed some light on possible connections between the approach taken in this paper, and that of Raptis and Rosick\'y. To prove these results we will need to present the more detailed version of Theorem \ref{t:model_dual0} (see Theorem \ref{t:model_elaborate}). \subsection{Organization of the paper} In Section \ref{s:prelim} we give a short review of the necessary background on pro-categories and model structures on them. Everything in this section dualizes easily to ind-categories. In Section \ref{s:proper} we prove a theorem giving sufficient intrinsic conditions for the pro-admissibility of a weak fibration category. We also define an auxiliary notion that generalizes the notion of a model category. The results and definitions of Section \ref{s:proper} will be used in Section \ref{s:app}, where we prove the main results of this paper, namely, a series of criteria for the finite accessibility of the category of weak equivalences. \subsection{Acknowledgments} We would like to thank Yonatan Harpaz for useful conversations.
We also thank Dmitri Pavlov for pointing out the relation of our work to the work of Raptis and Rosick\'y \cite{RaRo}, and Geoffroy Horel for Remark \ref{r:sharp}. Finally, we would like to thank the referee for his useful comments. \section{Preliminaries: model structures on pro-categories}\label{s:prelim} In this section we review the necessary background of model structures on pro-categories. We state the results without proof, for later reference. For proofs and more information the reader is referred to \cite{AM}, \cite{Isa}, \cite{BaSc} and \cite{BaSc1}. All these results are easily dualized to the case of ind-categories. \subsection{Pro-categories} \begin{define}\label{d:cofiltered} A category $I$ is called \emph{cofiltered} if the following conditions are satisfied: \begin{enumerate} \item The category $I$ is non-empty. \item For every pair of objects $s,t \in I$, there exists an object $u\in I$, together with morphisms $u\to s$ and $u\to t$. \item For every pair of morphisms $f,g:s\to t$ in $I$, there exists a morphism $h:u\to s$ in $I$ such that $f\circ h=g\circ h$. \end{enumerate} \end{define} A category is called \emph{small} if it has only a set of objects and a set of morphisms. \begin{define} Let $\mathcal{C}$ be a category. The category $\Pro(\mathcal{C})$ has as objects all diagrams in $\mathcal{C}$ of the form $I\to \mathcal{C}$ such that $I$ is small and cofiltered (see Definition \ref{d:cofiltered}). The morphisms are defined by the formula $$\Hom_{\Pro(\mathcal{C})}(X,Y):=\lim\limits_s \mathop{\precolim}\limits_t \Hom_{\mathcal{C}}(X_t,Y_s).$$ Composition of morphisms is defined in the obvious way. \end{define} Thus, if $X:I\to \mathcal{C}$ and $Y:J\to \mathcal{C}$ are objects in $\Pro(\mathcal{C})$, providing a morphism $X\to Y$ means specifying for every $s$ in $J$ an object $t$ in $I$ and a morphism $X_t\to Y_s$ in $\mathcal{C}$. These morphisms should of course satisfy some compatibility condition. 
In particular, if the indexing categories are equal, $I=J$, any natural transformation $X\to Y$ gives rise to a morphism $X\to Y$ in $\Pro(\mathcal{C})$. More generally, if $p:J\to I$ is a functor, and $\phi:p^*X:=X\circ p\to Y$ is a natural transformation, then the pair $(p,\phi)$ determines a morphism $\nu_{p,\phi}:X\to Y$ in $\Pro(\mathcal{C})$ (for every $s$ in $J$ we take the morphism $\phi_s:X_{p(s)}\to Y_s$). In particular, taking $Y=p^*X$ and $\phi$ to be the identity natural transformation, we see that $p$ determines a morphism $\nu_{p,X}:X\to p^*X$ in $\Pro(\mathcal{C})$. The word pro-object refers to objects of pro-categories. A \emph{simple} pro-object is one indexed by the category with one object and one (identity) map. Note that for any category $\mathcal{C}$, $\Pro(\mathcal{C})$ contains $\mathcal{C}$ as the full subcategory spanned by the simple objects. \begin{define}\label{d:cofinal} Let $p:J\to I$ be a functor between small categories. The functor $p$ is said to be \emph{(left) cofinal} if for every $i$ in $I$ the over category ${p}_{/i}$ is nonempty and connected. \end{define} Cofinal functors play an important role in the theory of pro-categories mainly because of the following well-known lemma (see for example \cite{AM}): \begin{lem}\label{l:cofinal} Let $p:J\to I$ be a cofinal functor between small cofiltered categories, and let $X:I\to \mathcal{C}$ be an object in $\Pro(\mathcal{C})$. Then the morphism in $\Pro(\mathcal{C})$ that $p$ induces, $\nu_{p,X}:X\to p^*X$, is an isomorphism. \end{lem} The following lemma can be found in \cite[Appendix 3.2]{AM}. See also \cite[Corollary 3.26]{BaSc} for a stronger result. \begin{lem}\label{every map natural} Every morphism in $\Pro(\mathcal{C})$ is isomorphic, in the category of morphisms in $\Pro(\mathcal{C})$, to a morphism that comes from a natural transformation (that is, to a morphism of the form $\nu_{id,\phi}$, where $\phi$ is a natural transformation).
\end{lem} \begin{define}\label{def levelwise} Let $\mathcal{C}$ be a category, $M \subseteq \Mor(\mathcal{C})$ a class of morphisms in $\mathcal{C}$, $I$ a small category, and $F:X\to Y$ a morphism in $\mathcal{C}^I$. Then $F$ will be called a \emph{level-wise $M$-map}, if for every $i\in I$ the morphism $X_i\to Y_i$ is in $M$. We will denote this by $F\in Lw(M)$. \end{define} \begin{define}\label{def ess levelwise} Let $\mathcal{C}$ be a category, and $M \subseteq \Mor(\mathcal{C})$ a class of morphisms in $\mathcal{C}$. Denote by: \begin{enumerate} \item ${}^{\perp}M$ the class of morphisms in $\mathcal{C}$ having the left lifting property w.r.t. any morphism in $M$. \item $M^{\perp}$ the class of morphisms in $\mathcal{C}$ having the right lifting property w.r.t. any morphism in $M$. \item $Lw^{\cong}(M)$ the class of morphisms in $\Pro(\mathcal{C})$ that are \textbf{isomorphic} to a morphism that comes from a natural transformation which is a level-wise $M$-map. \end{enumerate} \end{define} Everything we have done so far (and throughout this paper) is completely dualizable. Thus we can define: \begin{define}\label{d:filtered} A category $I$ is called \emph{filtered} if the following conditions are satisfied: \begin{enumerate} \item The category $I$ is non-empty. \item For every pair of objects $s,t \in I$, there exists an object $u\in I$, together with morphisms $s\to u$ and $t\to u$. \item For every pair of morphisms $f,g:s\to t$ in $I$, there exists a morphism $h:t\to u$ in $I$ such that $h\circ f=h\circ g$. \end{enumerate} \end{define} The dual to the notion of a pro-category is the notion of an ind-category: \begin{define} Let $\mathcal{C}$ be a category. The category $\Ind(\mathcal{C})$ has as objects all diagrams in $\mathcal{C}$ of the form $I\to \mathcal{C}$ such that $I$ is small and filtered (see Definition \ref{d:filtered}). 
The morphisms are defined by the formula $$\Hom_{\Ind(\mathcal{C})}(X,Y):=\lim\limits_s \mathop{\precolim}\limits_t \Hom_{\mathcal{C}}(X_s,Y_t).$$ Composition of morphisms is defined in the obvious way. \end{define} Clearly for every category $\mathcal{C}$ we have a natural isomorphism of categories: $\Ind(\mathcal{C})^{op}\cong \Pro(\mathcal{C}^{op})$. We are not going to write the dual to every definition or theorem explicitly, only in certain cases. \subsection{From a weak fibration category to a model category} We now present the definition of a weak fibration category, after two preliminary definitions: \begin{define} Let $\mathcal{C}$ be a category, and let $M,N$ be classes of morphisms in $\mathcal{C}$. We will denote by $\Mor({\mathcal{C}}) = {M}\circ {N}$ the assertion that every map $A\to B $ in ${\mathcal{C}}$ can be factored as $A\xrightarrow{f} C\xrightarrow{g} B $, where $f$ is in ${N}$ and $g$ is in ${M}$. \end{define} \begin{define}\label{d:PB} Let ${\mathcal{C}}$ be a category with finite limits, and let ${\mathcal{M}}\subseteq{\mathcal{C}}$ be a subcategory. We say that ${\mathcal{M}}$ is \emph{closed under base change} if whenever we have a pullback square \[ \xymatrix{A\ar[d]^g\ar[r] & B\ar[d]^f\\ C\ar[r] & D} \] such that $f$ is in ${\mathcal{M}}$, then $g$ is in ${\mathcal{M}}$. \end{define} \begin{define}\label{d:weak_fib} A \emph{weak fibration category} is a category ${\mathcal{C}}$ with an additional structure of two subcategories: $${\mathcal{F}}, {\mathcal{W}} \subseteq {\mathcal{C}}$$ that contain all the isomorphisms, such that the following conditions are satisfied: \begin{enumerate} \item ${\mathcal{C}}$ has all finite limits. \item ${\mathcal{W}}$ has the two out of three property. \item The subcategories ${\mathcal{F}}$ and ${\mathcal{F}}\cap {\mathcal{W}}$ are closed under base change. \item $\Mor({\mathcal{C}}) = {\mathcal{F}}\circ {\mathcal{W}}$.
\end{enumerate}
The maps in ${\mathcal{F}}$ are called \emph{fibrations}, and the maps in ${\mathcal{W}}$ are called \emph{weak equivalences}.
\end{define}
\begin{rem}
The notion of a weak fibration category is closely related to other notions considered previously in the literature, such as a ``category of fibrant objects" (\cite{Bro}), a ``fibration category" (\cite{Bau}), an ``Anderson-Brown-Cisinski fibration category" (\cite{Rad}) and more. These notions were introduced as a more flexible structure than a model category in which to do abstract homotopy theory.
\end{rem}
\begin{define}\label{d:rel}
A \emph{relative category} is a pair $(\mathcal{C},\mathcal{W})$, consisting of a category $\mathcal{C}$, and a subcategory $\mathcal{W}\subseteq \mathcal{C}$ that contains all the isomorphisms and satisfies the two out of three property; $\mathcal{W}$ is called the subcategory of \emph{weak equivalences}.
\end{define}
\begin{rem}
Any weak fibration category is naturally a relative category when ignoring the fibrations.
\end{rem}
\begin{define}
We will denote by $\to$ the category consisting of two objects and one non-identity morphism between them. Thus, if $\mathcal{C}$ is any category, the functor category $\mathcal{C}^\to$ is just the category of morphisms in $\mathcal{C}$.
\end{define}
\begin{define}\label{d:admiss}
A relative category $(\mathcal{C},\mathcal{W})$ will be called:
\begin{enumerate}
\item pro-admissible, if $Lw^{\cong}(\mathcal{W})\subseteq \Pro(\mathcal{C})^\to$ satisfies the two out of three property,
\item ind-admissible, if $Lw^{\cong}(\mathcal{W})\subseteq \Ind(\mathcal{C})^\to$ satisfies the two out of three property,
\item admissible, if it is both pro- and ind-admissible.
\end{enumerate}
\end{define}
The following theorem is almost a special case of \cite[Theorem 4.8]{BaSc1}:
\begin{thm}\label{t:model}
Let $(\mathcal{C},\mathcal{W},\mathcal{F})$ be a small pro-admissible weak fibration category.
Then there exists a model category structure on $\Pro(\mathcal{C})$ such that:
\begin{enumerate}
\item The weak equivalences are $\mathbf{W} := Lw^{\cong}(\mathcal{W})$.
\item The cofibrations are $\mathbf{C} := {}^{\perp} (\mathcal{F}\cap \mathcal{W})$.
\item The fibrations are maps satisfying the right lifting property with respect to all acyclic cofibrations.
\end{enumerate}
Moreover, this model category is $\omega$-cocombinatorial, with $\mathcal{F}$ as the set of generating fibrations and $\mathcal{F}\cap \mathcal{W}$ as the set of generating acyclic fibrations.
\end{thm}
\begin{rem}
The definition of a model category that we refer to in Theorem \ref{t:model} is the one used in \cite{Hov}. In particular, we require \textbf{functorial} factorizations. This is a stronger conclusion than that of \cite[Theorem 4.8]{BaSc1}, and is achieved because we assume that $\mathcal{C}$ is small. Notice that we did not require functorial factorizations in the definition of a weak fibration category. Note also that we only required the existence of finite limits in the definition of a weak fibration category, while in $\Pro(\mathcal{C})$ we do get the existence of arbitrary limits and colimits.
\end{rem}
\begin{proof}
Most of the proof goes exactly like the proof of \cite[Theorem 4.8]{BaSc1}; the only difference is that we can use \cite[Proposition 3.17]{BaSc1} instead of \cite[Proposition 3.16]{BaSc1}, and thus obtain functorial factorizations. It only remains to show that $\Pro(\mathcal{C})$ is $\omega$-cocombinatorial, with set of generating fibrations $\mathcal{F}$ and set of generating acyclic fibrations $\mathcal{F}\cap \mathcal{W}$. The category $\mathcal{C}$ has finite limits, so $\mathcal{C}^{op}$ has finite colimits. By the results of \cite{AR}, the category $\Ind(\mathcal{C}^{op})\cong \Pro(\mathcal{C})^{op}$ is locally presentable and every object of $\mathcal{C}^{op}$ is $\omega$-presentable in $\Ind(\mathcal{C}^{op})$.
It thus remains to show that: $$\mathbf{C}= {}^{\perp}(\mathcal{F}\cap \mathcal{W}),(\mathbf{C}\cap \mathbf{W})= {}^{\perp}\mathcal{F},$$ but this was shown in \cite[Theorem 4.8]{BaSc1}. \end{proof} The dual to the notion of a weak fibration category is a weak cofibration category. Namely, a weak cofibration category is a category ${\mathcal{M}}$ together with two subcategories, ${\mathcal{C}}$ and $ {\mathcal{W}}$, such that $({\mathcal{M}}^{op},{\mathcal{C}}^{op},{\mathcal{W}}^{op})$ is a weak fibration category. The following is a dual formulation of Theorem \ref{t:model}: \begin{thm}\label{t:model_dual} Let $(\mathcal{M},\mathcal{W},\mathcal{C})$ be a small ind-admissible weak cofibration category. Then there exists a model category structure on $\Ind(\mathcal{M})$ such that: \begin{enumerate} \item The weak equivalences are $\mathbf{W} := Lw^{\cong}(\mathcal{W})$. \item The fibrations are $\mathbf{F} = (\mathcal{C}\cap \mathcal{W})^{\perp} $. \item The cofibrations are maps satisfying the left lifting property with respect to all acyclic fibrations. \end{enumerate} Moreover, this model category is $\omega$-combinatorial, with $\mathcal{C}$ as the set of generating cofibrations and $\mathcal{C}\cap \mathcal{W}$ as the set of generating acyclic cofibrations. \end{thm} \section{Proper morphisms}\label{s:proper} \subsection{A criterion for the two out of three property} The pro-admissibility condition on a relative category $\mathcal{C}$, appearing in Theorem \ref{t:model}, is not intrinsic to $\mathcal{C}$ (see Definition \ref{d:admiss}). It is useful to be able to deduce the pro-admissibility of $\mathcal{C}$ only from conditions on $\mathcal{C}$ itself. In this subsection we give one possible solution to this problem. The idea is a very straightforward generalization of an idea of Isaksen (\cite[Section 3]{Isa}). \begin{define} Let $(\mathcal{C},\mathcal{W})$ be a relative category. 
A map $f:A\to B$ in $\mathcal{C}$ will be called: \begin{enumerate} \item \emph{Left proper}, if for every push out square of the form \[ \xymatrix{A\ar[d]^i\ar[r]^f & B\ar[d]^j\\ C\ar[r] & D} \] such that $i$ is a weak equivalence, the map $j$ is also a weak equivalence. \item \emph{Right proper}, if for every pull back square of the form \[ \xymatrix{C\ar[d]^j\ar[r] & D\ar[d]^i\\ A\ar[r]^f & B} \] such that $i$ is a weak equivalence, the map $j$ is also a weak equivalence. \end{enumerate} We denote by $LP$ the class of left proper maps in $\mathcal{C}$ and by $RP$ the class of right proper maps in $\mathcal{C}$. \end{define} \begin{rem}\label{r:sharp} The notion of a right proper map is related to the notion of a sharp map defined by Rezk in \cite{Rez}. A sharp map is a map such that all its base changes are right proper. In other words, the class of sharp maps is the largest class of maps that is contained in the right proper maps and is closed under base change (see Definition \ref{d:PB}). A sharp map is called a weak fibration by Cisinski and a fibrillation by Barwick and Kan. \end{rem} \begin{example}\label{e:proper_map} Let $\mathcal{M}$ be a model category. Then: \begin{enumerate} \item Every acyclic cofibration in $\mathcal{M}$ is a left proper map in $(\mathcal{M},\mathcal{W})$. \item Every acyclic fibration in $\mathcal{M}$ is a right proper map in $(\mathcal{M},\mathcal{W})$. \item The model category $\mathcal{M}$ is left proper iff every cofibration in $\mathcal{M}$ is a left proper map in $(\mathcal{M},\mathcal{W})$. \item The model category $\mathcal{M}$ is right proper iff every fibration in $\mathcal{M}$ is a right proper map in $(\mathcal{M},\mathcal{W})$. \end{enumerate} \end{example} \begin{define} Let $(\mathcal{C},\mathcal{W})$ be a relative category. Then $(\mathcal{C},\mathcal{W})$ will be said to have \emph{proper factorizations}, if the following hold: \begin{enumerate} \item $\Mor(\mathcal{C})=RP\circ LP$. 
\item $\Mor(\mathcal{C})=RP\circ \mathcal{W}$.
\item $\Mor(\mathcal{C})=\mathcal{W}\circ LP$.
\end{enumerate}
\end{define}
\begin{lem}
Let $\mathcal{M}$ be a proper model category. Then the relative category $(\mathcal{M},\mathcal{W})$ has proper factorizations.
\end{lem}
\begin{proof}
\begin{enumerate}
\item $\Mor(\mathcal{M})=RP\circ LP$ is shown by factoring every map into a cofibration followed by an acyclic fibration (see Example \ref{e:proper_map}).
\item $\Mor(\mathcal{M})=RP\circ \mathcal{W}$ is shown by factoring every map into an acyclic cofibration followed by a fibration (see Example \ref{e:proper_map}).
\item $\Mor(\mathcal{M})=\mathcal{W}\circ LP$ is shown by factoring every map into a cofibration followed by an acyclic fibration (see Example \ref{e:proper_map}).
\end{enumerate}
\end{proof}
The following is shown in \cite[Lemma 3.2]{Isa} (see also \cite[Remark 3.3]{Isa}):
\begin{lem}\label{l:factor}
Let $\mathcal{C}$ be a category, and let $N$ and $M$ be classes of morphisms in $\mathcal{C}$, such that $\Mor(\mathcal{C})=M\circ N$. Let $T$ be a cofiltered category and let $f:\{X_t\}_{t\in T}\to \{Y_t\}_{t\in T}$ be a natural transformation, that is, a map in the functor category $\mathcal{C}^T$. Suppose that $f$ is an isomorphism as a map in $\Pro(\mathcal{C})$ (or $\Ind(\mathcal{C})$). Then there exist a cofiltered category $J$, a cofinal functor $p:J\to T$ and a factorization $p^*X\xrightarrow{g} H_f \xrightarrow{h} p^*Y$ of $p^*f:p^*X\to p^*Y$ in the category $\mathcal{C}^{J}$ such that $h$ is a level-wise $M$-map, $g$ is a level-wise $N$-map, and $g,h$ are isomorphisms as maps in $\Pro(\mathcal{C})$ (or $\Ind(\mathcal{C})$).
\end{lem} The following proposition is our main motivation for introducing the concepts of left and right proper morphisms: \begin{prop}\label{p:compose} Let $(\mathcal{C},\mathcal{W})$ be a relative category, and let $X\xrightarrow{f} Y\xrightarrow{g} Z $ be a pair of composable morphisms in $\Pro(\mathcal{C})$ (or $\Ind(\mathcal{C})$). Then: \begin{enumerate} \item If $\mathcal{C}$ has finite limits and colimits, and $\Mor(\mathcal{C})=RP\circ LP$, then $f,g\in Lw^{\cong}(\mathcal{W})$ implies that $g\circ f\in Lw^{\cong}(\mathcal{W})$. \item If $\mathcal{C}$ has finite limits, and $\Mor(\mathcal{C})=RP\circ \mathcal{W}$, then $g,g\circ f\in Lw^{\cong}(\mathcal{W})$ implies that $f\in Lw^{\cong}(\mathcal{W})$. \item If $\mathcal{C}$ has finite colimits, and $\Mor(\mathcal{C})=\mathcal{W}\circ LP$, then $f,g\circ f\in Lw^{\cong}(\mathcal{W})$ implies that $g\in Lw^{\cong}(\mathcal{W})$. \end{enumerate} \end{prop} \begin{proof} For simplicity of writing we only examine the $\Pro(\mathcal{C})$ case. We show 1. The proof is a straightforward generalization of the proof of \cite[Lemma 3.5]{Isa}. Since $f,g\in Lw^{\cong}(\mathcal{W})$ there exists a diagram in $\Pro(\mathcal{C})$, $$\xymatrix{X''\ar[r] & Y'' & & \\ X\ar[u]^{\cong}\ar[r]^f & Y\ar[u]^{\cong} \ar[r]^g & Z\\ & Y'\ar[u]^{\cong}\ar[r] & Z',\ar[u]^{\cong}}$$ such that the vertical maps are isomorphisms in $\Pro(\mathcal{C})$ and such that $Y'\to Z'$ is a natural transformation indexed by $I$ that is level-wise in $\mathcal{W}$ and $X''\to Y''$ is a natural transformation indexed by $J$ that is level-wise in $\mathcal{W}$. Let $Y'\xrightarrow{\cong} Y''$ denote the composition $Y'\xrightarrow{\cong}Y\xrightarrow{\cong} Y''$. It is an isomorphism in $\Pro(\mathcal{C})$ (but not necessarily a level-wise isomorphism). 
It follows from \cite[Appendix 3.2]{AM} that there exists a cofiltered category $K$, cofinal functors $p:K\to I$ and $q:K\to J$, and a map in $\mathcal{C}^K$, $$q^*Y'\xrightarrow{}p^*Y'',$$ such that there is a commutative diagram in $\Pro(\mathcal{C})$, $$\xymatrix{q^*Y'\ar[r]^{\cong} \ar[d]^{\cong} & p^*Y''\ar[d]^{\cong}\\ Y'\ar[r]^{\cong} & Y'',}$$ with all maps isomorphisms. Thus we have a diagram in $\mathcal{C}^K$, $$p^*X''\xrightarrow{} p^*Y''\xleftarrow{\cong}q^*Y'\xrightarrow{} q^*Z',$$ such that the first and last maps are level-wise in $\mathcal{W}$ and the middle map is an isomorphism as a map in $\Pro(\mathcal{C})$ (but not necessarily a level-wise isomorphism). Since $\Mor(\mathcal{C})=RP\circ LP$, we get by Lemma \ref{l:factor}, applied for $M=RP$ and $N=LP$, that after pulling back by a cofinal functor $T\to K$ we obtain a diagram in $\mathcal{C}^T$, $$A\xrightarrow{} B\xleftarrow{\cong}E\xleftarrow{\cong}C\xrightarrow{} D,$$ such that the first and last maps are level-wise in $\mathcal{W}$, the second map is level-wise right proper and an isomorphism in $\Pro(\mathcal{C})$, and the third map is level-wise left proper and an isomorphism in $\Pro(\mathcal{C})$. By Corollary 3.19 of \cite{BaSc}, since $\mathcal{C}$ has finite limits and colimits, the pull back and push out in $\Pro(\mathcal{C})$ of a diagram in $\mathcal{C}^T$ can be computed level-wise. We thus get the following diagram in $\mathcal{C}^T$: $$\xymatrix{A\ar[r]^{Lw(\mathcal{W})} & B & & \\ A\times_B E\ar[r]^{Lw(\mathcal{W})}\ar[u]^{\cong} & E\ar[u]_{Lw(RP)}^{\cong} \ar[r]^{Lw(\mathcal{W})} & E\coprod_C D\\ & C\ar[u]_{Lw(LP)}^{\cong}\ar[r]_{Lw(\mathcal{W})} & D\ar[u]^{\cong}}$$ where $\cong$ indicates an isomorphism in $\Pro(\mathcal{C})$. It follows that the composition $$A\times_B E\xrightarrow{Lw(\mathcal{W})} E\xrightarrow{Lw(\mathcal{W})} E\coprod_C D$$ is a level-wise $\mathcal{W}$ map that is isomorphic, as a map in $\Pro(\mathcal{C})$, to the composition $g\circ f$. 
Thus we obtain that $g\circ f\in Lw^{\cong}(\mathcal{W})$.
It is not hard to show 2. and 3. using the same type of generalization of the proof of \cite[Lemma 3.6]{Isa}.
\end{proof}
\begin{cor}\label{r:proper}
Let $(\mathcal{C},\mathcal{W})$ be a relative category that has finite limits and colimits, and has proper factorizations. Then $(\mathcal{C},\mathcal{W})$ is admissible (see Definition \ref{d:admiss}). In particular, if $\mathcal{C}$ is a proper model category then $(\mathcal{C},\mathcal{W})$ is admissible.
\end{cor}
\subsection{Almost model categories}
Corollary \ref{r:proper} gives sufficient conditions for the admissibility of a relative category and, in particular, of a weak (co)fibration category. However, in some interesting examples these conditions are too restrictive. Namely, in some situations there is a natural mapping cylinder factorization (see the proof of Theorem \ref{l:admiss2}) which can be shown to give factorizations of the forms $\Mor(\mathcal{M})=RP\circ LP$ and $\Mor(\mathcal{M})=\mathcal{W}\circ LP$ but not $\Mor(\mathcal{M})=RP\circ \mathcal{W}$. We will therefore need to use an auxiliary notion that is more general than a model category, which we call an \emph{almost model category}.
\begin{define}
An \emph{almost model category} is a quadruple $(\mathcal{M},\mathcal{W},\mathcal{F},\mathcal{C})$ satisfying all the axioms of a model category, except (maybe) the two out of three property for $\mathcal{W}$. More precisely, an almost model category satisfies:
\begin{enumerate}
\item $\mathcal{M}$ is complete and cocomplete.
\item $\mathcal{W}$ is a class of morphisms in $\mathcal{M}$ that is closed under retracts.
\item $\mathcal{F},\mathcal{C}$ are subcategories of $\mathcal{M}$ that are closed under retracts.
\item $\mathcal{C}\cap \mathcal{W}\subseteq{}^{\perp}\mathcal{F}$ and $\mathcal{C}\subseteq{}^{\perp}(\mathcal{F}\cap\mathcal{W})$.
\item There exist functorial factorizations in $\mathcal{M}$ into a map in $\mathcal{C}\cap \mathcal{W}$ followed by a map in $\mathcal{F}$, and into a map in $\mathcal{C}$ followed by a map in $\mathcal{F}\cap \mathcal{W}$. \end{enumerate} \end{define} The following lemma can be proven just as in the case of model categories (see for example \cite[Lemma 1.1.10]{Hov}): \begin{lem}\label{l:lifting} In an almost model category $(\mathcal{M},\mathcal{W},\mathcal{F},\mathcal{C})$ we have: \begin{enumerate} \item $\mathcal{C}\cap \mathcal{W}={}^{\perp}\mathcal{F}$. \item $\mathcal{C}={}^{\perp}(\mathcal{F}\cap\mathcal{W})$. \item $\mathcal{F}\cap \mathcal{W}=\mathcal{C}^{\perp}$. \item $\mathcal{F}=(\mathcal{C}\cap\mathcal{W})^{\perp}$. \end{enumerate} \end{lem} \begin{define}\label{d:almost_admiss} A relative category $(\mathcal{C},\mathcal{W})$ will be called \emph{almost pro-admissible}, if $Lw^{\cong}(\mathcal{W})\subseteq \Pro(\mathcal{C})^\to$ satisfies the following portion of the two out of three property: For every pair of composable morphisms in $\Pro(\mathcal{C})$: $X\xrightarrow{f} Z\xrightarrow{g} Y $ we have: \begin{enumerate} \item If $f,g$ belong to $Lw^{\cong}(\mathcal{W})$ then $g\circ f\in Lw^{\cong}(\mathcal{W})$. \item If $g,g\circ f$ belong to $Lw^{\cong}(\mathcal{W})$ then $f\in Lw^{\cong}(\mathcal{W})$. \end{enumerate} \end{define} We now prove the following generalization of Theorem \ref{t:model}: \begin{thm}\label{t:almost_model} Let $(\mathcal{C},\mathcal{W},\mathcal{F})$ be a small almost pro-admissible weak fibration category. Then there exists an almost model category structure on $\Pro(\mathcal{C})$ such that: \begin{enumerate} \item The weak equivalences are $\mathbf{W} := Lw^{\cong}(\mathcal{W})$. \item The cofibrations are $\mathbf{C} := {}^{\perp} (\mathcal{F}\cap \mathcal{W})$. \item The fibrations are maps satisfying the right lifting property with respect to all acyclic cofibrations. 
\end{enumerate} Furthermore, we have $\mathbf{C} \cap \mathbf{W}= {}^{\perp} \mathcal{F}.$ \end{thm} \begin{proof} This is very much like the proof of Theorem \ref{t:model}, that is based on \cite[Theorem 4.8]{BaSc1}. Going over the proof of \cite[Theorem 4.8]{BaSc1} we find that we can show all the axioms of a model category for $\Pro(\mathcal{C})$, except the two out of three property for $Lw^{\cong}(\mathcal{W})$, using only the fact that $\mathcal{C}$ is almost pro-admissible. (In fact, the only place where we use the fact that $Lw^{\cong}(\mathcal{W})$ satisfies the two out of three property is in Lemma 4.13, where we only use the portion of the two out of three property given in Definition \ref{d:almost_admiss}.) \end{proof} We can dualize the above: \begin{define}\label{d:almost_admiss_dual} A relative category $(\mathcal{C},\mathcal{W})$ will be called \emph{almost ind-admissible}, if $Lw^{\cong}(\mathcal{W})\subseteq \Ind(\mathcal{C})^\to$ satisfies the following portion of the two out of three property: For every pair of composable morphisms in $\Ind(\mathcal{C})$: $X\xrightarrow{f} Z\xrightarrow{g} Y $ we have: \begin{enumerate} \item If $f,g$ belong to $Lw^{\cong}(\mathcal{W})$ then $g\circ f\in Lw^{\cong}(\mathcal{W})$. \item If $f,g\circ f$ belong to $Lw^{\cong}(\mathcal{W})$ then $g\in Lw^{\cong}(\mathcal{W})$. \end{enumerate} \end{define} \begin{thm}\label{t:almost_model_dual} Let $(\mathcal{M},\mathcal{W},\mathcal{C})$ be a small almost ind-admissible weak cofibration category. Then there exists an almost model category structure on $\Ind(\mathcal{M})$ such that: \begin{enumerate} \item The weak equivalences are $\mathbf{W} := Lw^{\cong}(\mathcal{W})$. \item The fibrations are $\mathbf{F} := (\mathcal{C}\cap \mathcal{W})^{\perp} $. \item The cofibrations are maps satisfying the left lifting property with respect to all acyclic fibrations. 
\end{enumerate}
Furthermore, we have $\mathbf{F} \cap \mathbf{W}= \mathcal{C}^{\perp}.$
\end{thm}
\section{Criteria for finite accessibility}\label{s:app}
In this last section we state the main results of this paper, namely, a series of criteria for the finite accessibility of the category of weak equivalences. The criteria are stated in decreasing order of generality (each criterion being an application or a special case of the previous one), but in increasing order of ease of verification and applicability. Our only example in this paper, the category of simplicial sets, is an application of the third and last criterion. However, the authors are aware of an example where the second criterion applies but not the third. This is a non-standard model structure on the category of chain complexes of modules over a ring, and will be treated in a future paper.
\begin{define}\label{d:finite_access}
A category is called \emph{finitely accessible} if it has filtered colimits and there is a small set of finitely presentable objects that generates it under filtered colimits.
\end{define}
The following lemma explains the relevance of Theorem \ref{t:model_dual} to the finite accessibility of the category of weak equivalences.
\begin{lem}\label{l:finite access}
Let $(\mathcal{M},\mathcal{W},\mathcal{C})$ be a small ind-admissible weak cofibration category. Consider the model structure induced on $\Ind(\mathcal{M})$ by Theorem \ref{t:model_dual}. Then the full subcategory of $\Ind(\mathcal{M})^\to$, spanned by the class of weak equivalences, is finitely accessible (see Definition \ref{d:finite_access}).
\end{lem}
\begin{proof}
By \cite{AR}, a category is finitely accessible if and only if it is equivalent to the ind-category of a small category. It thus suffices to show that $Lw^{\cong}(\mathcal{W})$ is of the form $\Ind(\mathcal{D})$ for some small category $\mathcal{D}$.
This follows from the observation that $Lw^{\cong}(\mathcal{W})$ is the essential image of $\Ind(\mathcal{W})$ under the natural equivalence $\Ind(\mathcal{M}^{\to})\to\Ind(\mathcal{M})^{\to}$, where $\mathcal{W}$ is considered as a full subcategory of $\mathcal{M}^{\to}$. It then follows that $\Ind(\mathcal{W})$ is a full subcategory of $\Ind(\mathcal{M}^{\to})$, and thus $\Ind(\mathcal{W})\simeq Lw^{\cong}(\mathcal{W})$.
\end{proof}
We now come to our first criterion:
\begin{prop}\label{l:admiss}
Let $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$ be an $\omega$-combinatorial model category. Let $\mathcal{M}$ denote the full subcategory of $\underline{\mathcal{M}}$ spanned by the finitely presentable objects. Let $\mathcal{W},\mathcal{C}$ denote the classes of weak equivalences and cofibrations between objects in $\mathcal{M}$, respectively. We denote by $LP$ the class of left proper maps in $(\mathcal{M},\mathcal{W})$ and by $RP$ the class of right proper maps in $(\mathcal{M},\mathcal{W})$. We make the following further assumptions:
\begin{enumerate}
\item The category $\mathcal{M}$ has finite limits.
\item $\Mor(\mathcal{M})=\mathcal{W}\circ \mathcal{C}$.
\item $\Mor(\mathcal{M})=\mathcal{W}\circ LP$.
\item $\Mor(\mathcal{M})=RP\circ LP$.
\end{enumerate}
Then $(\mathcal{M},\mathcal{W},\mathcal{C})$ is an ind-admissible weak cofibration category and the induced model structure on $\Ind(\mathcal{M})$, given by Theorem \ref{t:model_dual}, coincides with $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$, under the natural equivalence $\underline{\mathcal{M}}\simeq\Ind(\mathcal{M})$. In particular, it follows from Lemma \ref{l:finite access} that the full subcategory of $\underline{\mathcal{M}}^\to$, spanned by the class of weak equivalences, is finitely accessible.
\end{prop} \begin{proof} Since $\underline{\mathcal{M}}$ is locally finitely presentable (being $\omega$-combinatorial) it follows that its full subcategory $\mathcal{M}$ is essentially small, closed under finite colimits, and we have a natural equivalence of categories $\Ind(\mathcal{M})\simeq\underline{\mathcal{M}}$ given by taking colimits (see \cite{AR}). It is now trivial to verify, using assumption 2 above, that $(\mathcal{M},\mathcal{W},\mathcal{C})$ is a weak cofibration category. Using assumptions 1,3 and 4 we get, by Proposition \ref{p:compose}, that $(\mathcal{M},\mathcal{W},\mathcal{C})$ is almost ind-admissible (see Definition \ref{d:almost_admiss_dual}). Thus, by Theorem \ref{t:almost_model_dual}, there exists an almost model category structure on $\underline{\mathcal{M}}\simeq\Ind(\mathcal{M})$ such that: \begin{enumerate} \item The weak equivalences are $\overline{\mathcal{W}} := Lw^{\cong}(\mathcal{W})$. \item The fibrations are $\overline{\mathcal{F}}:= (\mathcal{C}\cap \mathcal{W})^{\perp} $. 
\end{enumerate}
Furthermore, we have $\overline{\mathcal{F}} \cap\overline{\mathcal{W}}= \mathcal{C}^{\perp}.$
Since the model category $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$ is $\omega$-combinatorial, we have that
$$\overline{\mathcal{F}}\cap \overline{\mathcal{W}}= \mathcal{C}^{\perp}=\underline{\mathcal{F}}\cap \underline{\mathcal{W}},$$
$$\overline{\mathcal{F}}:= (\mathcal{C}\cap \mathcal{W})^{\perp} =\underline{\mathcal{F}}.$$
Thus, using Lemma \ref{l:lifting}, we also obtain
$$\overline{\mathcal{C}}\cap \overline{\mathcal{W}}= {}^{\perp}\overline{\mathcal{F}}={}^{\perp}\underline{\mathcal{F}}=\underline{\mathcal{C}}\cap \underline{\mathcal{W}},$$
$$\overline{\mathcal{C}}:= {}^{\perp}(\overline{\mathcal{F}}\cap \overline{\mathcal{W}}) ={}^{\perp}(\underline{\mathcal{F}}\cap \underline{\mathcal{W}}) =\underline{\mathcal{C}}.$$
It is now easy to show that $\overline{\mathcal{W}}=\underline{\mathcal{W}}$: we will show that $\overline{\mathcal{W}}\subseteq\underline{\mathcal{W}}$, and the other direction can be shown similarly. Let $f:X\to Y$ be an element in $\overline{\mathcal{W}}$. We decompose $f$, in the almost model category $(\underline{\mathcal{M}},\overline{\mathcal{W}},\overline{\mathcal{F}},\overline{\mathcal{C}})$, into an acyclic cofibration followed by a fibration: $$X\xrightarrow{h\in \overline{\mathcal{C}}\cap\overline{\mathcal{W}}} Z\xrightarrow{g\in\overline{\mathcal{F}}}Y.$$ Since the weak cofibration category $(\mathcal{M},\mathcal{W},\mathcal{C})$ is almost ind-admissible, we have that $g$ also belongs to $\overline{\mathcal{W}}$. Thus we have
$$h\in \overline{\mathcal{C}}\cap\overline{\mathcal{W}}= \underline{\mathcal{C}}\cap\underline{\mathcal{W}},$$
$$g\in \overline{\mathcal{F}}\cap\overline{\mathcal{W}}= \underline{\mathcal{F}}\cap\underline{\mathcal{W}}.$$
It follows that $f\in \underline{\mathcal{W}}$, because $\underline{\mathcal{W}}$ is closed under composition.
\end{proof} We now come to our second criterion for the finite accessibility of the category of weak equivalences. \begin{thm}\label{l:admiss2} Let $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$ be an $\omega$-combinatorial left proper model category. Let $\mathcal{M}$ denote the full subcategory of $\underline{\mathcal{M}}$ spanned by the finitely presentable objects. Let $\mathcal{W},\mathcal{C}$ denote the classes of weak equivalences and cofibrations between objects in $\mathcal{M}$, respectively. Suppose we are given a cylinder object in $\mathcal{M}$, that is, for every object $B$ of $\mathcal{M}$ we are given a factorization in $\mathcal{M}$ of the fold map $B\sqcup B\to B$ into a cofibration followed by a weak equivalence: $$B\sqcup B\xrightarrow{(i_0,i_1)} I\otimes B\xrightarrow{p} B.$$ (Note that we are not assuming any simplicial structure; $I\otimes B$ is just a suggestive notation.) We make the following further assumptions: \begin{enumerate} \item The category $\mathcal{M}$ has finite limits. \item Every object in $\mathcal{M}$ is cofibrant. \item For every morphism $f:A\to B$ in $\mathcal{M}$ the map $B\coprod_{A}(I\otimes A)\to B$, induced by the commutative square $$\xymatrix{A\ar[d]^{f}\ar[r]^{i_0} & I\otimes A\ar[d]^{f\circ p} \\ B \ar[r]^{=} & B,}$$ is a right proper map in $(\mathcal{M},\mathcal{W})$. \end{enumerate} Then $(\mathcal{M},\mathcal{W},\mathcal{C})$ is an ind-admissible weak cofibration category and the induced model structure on $\Ind(\mathcal{M})$, given by Theorem \ref{t:model_dual}, coincides with $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$, under the natural equivalence $\underline{\mathcal{M}}\simeq\Ind(\mathcal{M})$. In particular, it follows from Lemma \ref{l:finite access} that the full subcategory of $\underline{\mathcal{M}}^\to$, spanned by the class of weak equivalences, is finitely accessible. 
\end{thm} \begin{proof} We will verify that all the conditions of Proposition \ref{l:admiss} are satisfied. We only need to check the existence of factorizations of the form: \begin{enumerate} \item $\Mor(\mathcal{M})=\mathcal{W}\circ \mathcal{C}$. \item $\Mor(\mathcal{M})=\mathcal{W}\circ LP$. \item $\Mor(\mathcal{M})=RP\circ LP$. \end{enumerate} All the factorizations above will be given by the same factorization which we now describe. This is just the mapping cylinder factorization relative to our given cylinder object for $\mathcal{M}$. It is not hard to show that for any $B\in\mathcal{M}$ the maps $i_0,i_1:B\to I\otimes B$ are acyclic cofibrations. Let $f:A\to B$ be a morphism in $\mathcal{M}$. We define the mapping cylinder of $f$ to be the push out $$\xymatrix{A\ar[r]^{i_0}\ar[d]^f & I\otimes A\ar[d] \\ B \ar[r] & C(f).}$$ We define a morphism $q:C(f)=B\coprod_{A}(I\otimes A)\to B$ to be the one induced by the commutative square $$\xymatrix{A\ar[d]^{f}\ar[r]^{i_0} & I\otimes A\ar[d]^{f\circ p} \\ B \ar[r]^{=} & B.}$$ We define a morphism $i:A\to C(f)=B\coprod_{A}(I\otimes A)$ to be the composition $${A\xrightarrow{i_1} I\otimes A \xrightarrow{} C(f).}$$ Clearly $f=qi$, and we call this the \emph{mapping cylinder factorization}. The map $q$ is a left inverse to $j$, defined by the mapping cylinder push out square $$\xymatrix{A\ar[r]^{i_0}\ar[d]^f & I\otimes A\ar[d] \\ B \ar[r]^j & C(f).}$$ Since $i_0$ is an acyclic cofibration, we get that $j$ is also an acyclic cofibration and, in particular, $q$ is a weak equivalence. 
The map $i$ is a cofibration, being a composite of two cofibrations $$\xymatrix{A\ar[r] & B\coprod A\ar[r]^{(j,i)}& B\coprod_{A}(I\otimes A)}.$$ These maps are cofibrations because of the following push out squares: $$\xymatrix{\phi\ar[r]\ar[d] & B\ar[d] & A\coprod A \ar[r]^{(i_0,i_1)} \ar[d]^{f\coprod id} & I\otimes A \ar[d] \\ A\ar[r] & B\coprod A, & B\coprod A\ar[r] &B\coprod_{A}(I\otimes A) .}$$ Since the map $i$ is a cofibration and $\underline{\mathcal{M}}$ is left proper, we get that the map $i$ is also left proper. By assumption 3, $q$ is right proper. \end{proof} We now come to our third and last criterion. \begin{thm}\label{l:admiss3} Let $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$ be an $\omega$-combinatorial left proper model category. Let $\mathcal{M}$ denote the full subcategory of $\underline{\mathcal{M}}$ spanned by the finitely presentable objects. Assume that the category $\mathcal{M}$ has finite limits and let $*$ denote the terminal object in $\mathcal{M}$. Let $\mathcal{W},\mathcal{C}$ denote the classes of weak equivalences and cofibrations between objects in $\mathcal{M}$, respectively. Suppose we are given a factorization in $\mathcal{M}$ of the fold map $*\sqcup *\to *$ into a cofibration followed by a weak equivalence: $$*\sqcup *\xrightarrow{} I\xrightarrow{} *.$$ We make the following further assumptions: \begin{enumerate} \item For every morphism $Y\to B$ in $\mathcal{M}$, the functor $$Y\times_B(-):\mathcal{M}_{/B}\to\mathcal{M}$$ commutes with finite colimits. \item Every object in $\mathcal{M}$ is cofibrant. \item For every object $B$ in $\mathcal{M}$ the functor $$B\times(-):\mathcal{M}\to\mathcal{M}$$ preserves cofibrations and weak equivalences. 
\end{enumerate} Then $(\mathcal{M},\mathcal{W},\mathcal{C})$ is an ind-admissible weak cofibration category and the induced model structure on $\Ind(\mathcal{M})$, given by Theorem \ref{t:model_dual}, coincides with $(\underline{\mathcal{M}},\underline{\mathcal{W}},\underline{\mathcal{F}},\underline{\mathcal{C}})$, under the natural equivalence $\underline{\mathcal{M}}\simeq\Ind(\mathcal{M})$. In particular, it follows from Lemma \ref{l:finite access} that the full subcategory of $\underline{\mathcal{M}}^\to$, spanned by the class of weak equivalences, is finitely accessible. \end{thm} \begin{proof} We will verify that all the conditions of Theorem \ref{l:admiss2} are satisfied. For every object $B$ of $\mathcal{M}$ we have that the induced diagram $$B\sqcup B\cong (*\times B)\sqcup(*\times B)\cong (*\sqcup *)\times B \xrightarrow{} I\times B\xrightarrow{}*\times B\cong B$$ is a factorization in $\mathcal{M}$ of the fold map $B\sqcup B\to B$ into a cofibration followed by a weak equivalence. (Note that here $\times$ denotes the actual categorical product and is not just a suggestive notation.) Thus, we only need to check that for every morphism $f:A\to B$ in $\mathcal{M}$ the map $q:B\coprod_{A}(I\times A)\to B$, induced by the commutative square $$\xymatrix{A\ar[d]^{f}\ar[r]^{i_0} & I\times A\ar[d]^{f\circ p} \\ B \ar[r]^{=} & B,}$$ is a right proper map in $(\mathcal{M},\mathcal{W})$. We will use the same notation as in the proof of Theorem \ref{l:admiss2}, regarding the mapping cylinder factorization. Let \[ \xymatrix{C(f)\times_B X\ar[d]^j\ar[r] & X\ar[d]^i\\ C(f)\ar[r]^q & B} \] be a pull back square in $\mathcal{M}$ such that $i$ is a weak equivalence. We need to show that $j$ is a weak equivalence. 
Using condition 1 we get natural isomorphisms: $$C(f)\times_B X=(B\coprod_{A}(I\times A))\times_B X\cong (B\times_B X)\coprod_{A\times_B X} ((I\times A)\times_B X)\cong$$ $$\cong(X\coprod_{A\times_B X} (I\times (A\times_B X)))=C(k),$$ where $k:A\times_B X\to X$ is the natural map. By condition 3 and the proof of Theorem \ref{l:admiss2}, we get that the natural map $C(k)\cong C(f)\times_B X\to X$ is a weak equivalence. By the two out of three property, we get that $j$ is also a weak equivalence. \end{proof} We now turn to our main example: \begin{thm}\label{l:S_f_admiss} Let $\mathcal{S}$ denote the category of simplicial sets with its standard model structure. Let $\mathcal{S}_f$ denote the full subcategory of $\mathcal{S}$ spanned by the finitely presentable objects. Let $\mathcal{W},\mathcal{C}$ denote the classes of weak equivalences and cofibrations between objects in $\mathcal{S}_f$, respectively. Then $(\mathcal{S}_f,\mathcal{W},\mathcal{C})$ is an ind-admissible weak cofibration category and the induced model structure on $\Ind(\mathcal{S}_f)$, given by Theorem \ref{t:model_dual}, coincides with the standard model structure on $\mathcal{S}$, under the natural equivalence ${\mathcal{S}}\simeq\Ind(\mathcal{S}_f)$. In particular, it follows from Lemma \ref{l:finite access} that the full subcategory of ${\mathcal{S}}^\to$, spanned by the class of weak equivalences, is finitely accessible. \end{thm} \begin{proof} We will verify that all the conditions of Theorem \ref{l:admiss3} are satisfied. The model category $\mathcal{S}$ is $\omega$-combinatorial and left proper. We first sketch a proof showing that the subcategory $\mathcal{S}_f$ of $\mathcal{S}$ is closed under finite limits. Let $X$ be a finite simplicial set.
It is not hard to verify that there exists a finite diagram $F:D\to \{\Delta^0,\Delta^1,\Delta^2,...\}$ such that $$X\cong colim_{D}F.$$ We now note the following facts: \begin{enumerate} \item In the category $\mathcal{S}$, pull backs commute with colimits. \item For all $n,m\geq 0$, $\Delta^n\times\Delta^m$ belongs to $\mathcal{S}_f$ (by direct computation). \item A sub-simplicial set of a finite simplicial set is also finite. \item The colimit in $\mathcal{S}$, of a finite diagram in $\mathcal{S}_f$, belongs to $\mathcal{S}_f$. \end{enumerate} Using these facts it is not hard to check that the pull back (in $\mathcal{S}$) of objects in $\mathcal{S}_f$ belongs to $\mathcal{S}_f$. Since the terminal object in $\mathcal{S}$ also belongs to $\mathcal{S}_f$, it follows that the subcategory $\mathcal{S}_f$ of $\mathcal{S}$ is closed under finite limits. In particular, this shows that $\mathcal{S}_f$ admits finite limits and they can be calculated in $\mathcal{S}$. This also gives condition 1 of Theorem \ref{l:admiss3} (as this condition is known to hold in $\mathcal{S}$). Clearly every object in $\mathcal{S}_f$ is cofibrant, so condition 2 is satisfied. Let $B$ be an object in $\mathcal{S}_f$. Since $B$ is cofibrant and $\mathcal{S}$ is a simplicial model category, we get that the functor $$B\times(-):\mathcal{S}\to\mathcal{S}$$ is a left Quillen functor and thus preserves cofibrations and weak equivalences between cofibrant objects. Since every object in $\mathcal{S}_f$ is cofibrant, we get that $$B\times(-):\mathcal{S}_f\to\mathcal{S}_f$$ preserves cofibrations and weak equivalences. This gives condition 3. Finally, we may take the factorization of the fold map: $$*\sqcup *\xrightarrow{} I\xrightarrow{} *,$$ to be $\Delta^{\{0\}}\sqcup \Delta^{\{1\}}\xrightarrow{} \Delta^1\xrightarrow{} \Delta^0.$ \end{proof} \begin{rem}\label{r:fib} Let $f:X\to Y$ be a morphism in $\mathcal{S}_f$. 
In the proof of Theorem \ref{l:admiss2} we considered the mapping cylinder factorization of $f$: $X\xrightarrow{h} C(f)\xrightarrow{g} Y$. We showed that $g$ is right proper. Note that $g$ is not, in general, a fibration in $\mathcal{S}$. Consider the map $f:\Delta^n\to \Delta^0$ ($n\geq 0$). Then the mapping cylinder factorization of $f$ is just $\Delta^{\{1,...,n+1\}}\to \Delta^{n+1}\to \Delta^0$. But $\Delta^{n+1}\to \Delta^0$ is not a Kan fibration, since $\Delta^{n+1}$ is not a Kan complex. Thus we see that we are using the extra generalization provided by Proposition \ref{p:compose} over Isaksen's results (in \cite[Lemmas 3.5 and 3.6]{Isa}). \end{rem} \section{Appendix: Relation to the work of Raptis and Rosick\'y}\label{s:cosmall} In this paper we proved theorems giving sufficient conditions for the finite accessibility of the category of weak equivalences in combinatorial model categories. Our main application was to the standard model structure on the category of simplicial sets, deducing the finite accessibility of its class of weak equivalences. As mentioned in the introduction, the same result on simplicial sets was also proved in \cite{RaRo}, using different methods. In this appendix we explain a possible connection between the two approaches. An important ingredient in the proof of \cite{RaRo} is a generalization of Quillen's small-object argument (called the fat small-object argument). Our proof is based mainly on Theorem \ref{t:model_dual}, describing a construction of a model structure on the ind-category of a small weak cofibration category. Theorem \ref{t:model_dual} was not proved directly, but was deduced, by duality, from Theorem \ref{t:model}. The main technical tool in the proof of Theorem \ref{t:model} is a certain factorization proposition, namely, \cite[Proposition 3.17]{BaSc1}.
The main purpose of this appendix is to prove Proposition \ref{c:trans}, which connects the notion of a relative cell complex, appearing in Quillen's small object argument, and the notion of an essentially cospecial map, appearing in Proposition \ref{p:factor_gen_dual} (which is the dual version of \cite[Proposition 3.17]{BaSc1}, and which is used in proving Theorem \ref{t:model_dual}). This will hopefully shed some light on possible connections between the approach taken in this paper and that of Raptis and Rosick\'y. As we explain below, Proposition \ref{c:trans} resolves a conjecture of Isaksen. We end the appendix with a non-trivial application of Proposition \ref{c:trans} to finite simplicial sets. \begin{define} Let $T$ be a poset. Then we view $T$ as a category which has a single morphism $u\to v$ iff $u\leq v$. Note that this convention is the opposite of that used in \cite{BaSc1}. \end{define} Thus, a poset $T$ is filtered (see Definition \ref{d:filtered}) iff $T$ is non-empty, and for every $a,b$ in $T$ there exists an element $c$ in $T$ such that $c\geq a,b$. A filtered poset will also be called \emph{directed}. \begin{define}\label{def CDS} A cofinite poset is a poset $T$ such that for every element $x$ in $T$ the set $T_x:=\{z\in T| z \leq x\}$ is finite. \end{define} \begin{define}\label{def cospecial} Let $\mathcal{C}$ be a category with finite colimits, $N$ a class of morphisms in $\mathcal{C}$, $I$ a cofinite poset (see Definition \ref{def CDS}) and $F:X\to Y$ a morphism in $\mathcal{C}^I$. Then the map $F$ will be called a \emph{cospecial} $N$-\emph{map}, if the natural map $$X_t\coprod_{\mathop{\precolim}_{s<t} X_s} \mathop{\precolim}_{s<t} Y_s \to Y_t$$ is in $N$, for every $t$ in $I$. We will denote this by $F\in coSp(N)$. \end{define} \begin{define}\label{def ess cospecial} Let $\mathcal{C}$ be a category and $N$ a class of morphisms in $\mathcal{C}$.
\begin{enumerate} \item We denote by $R(N)$ the class of morphisms in $\mathcal{C}$ that are retracts of morphisms in $N$. Note that $R(R(N))=R(N)$. \item If $\mathcal{C}$ has finite colimits, we denote by $coSp^{\cong}(N)$ the class of morphisms in $\Ind(\mathcal{C})$ that are \textbf{isomorphic} to a morphism that comes from a natural transformation which is a cospecial $N$-map (see Definition \ref{def cospecial}). Maps in $coSp^{\cong}(N)$ are called \emph{essentially cospecial} $N$-\emph{maps}. \end{enumerate} \end{define} In the following we collect a few results from several papers. These results were originally stated in the language of pro-categories. For the convenience of the reader we state them in their dual formulation, which is the one we need here. \begin{prop}[{\cite[Proposition 2.19]{BaSc1}}]\label{forF_sp_is_lw} Let $\mathcal{C}$ be a category with finite colimits, and $\mathcal{N} \subseteq \mathcal{C}$ a subcategory that is closed under cobase change, and contains all the isomorphisms. Let $F:X\to Y$ be a natural transformation between diagrams in $\mathcal{C}$, which is a cospecial $\mathcal{N}$-map. Then $F$ is a levelwise $\mathcal{N}$-map. \end{prop} We now state our factorization proposition, which is the main technical tool in the proof of Theorem \ref{t:model_dual}. \begin{prop}[{\cite[Proposition 3.17]{BaSc1}}]\label{p:factor_gen_dual} Let $\mathcal{C}$ be a category that has finite colimits, $\mathcal{N}\subseteq\mathcal{C}$ a subcategory that is closed under cobase change, and $M\subseteq\Mor(\mathcal{C})$ an arbitrary class of morphisms such that $M\circ\mathcal{N}=\Mor(\mathcal{C})$. Then every morphism $f:X\to Y$ in $\Ind(\mathcal{C})$ can be functorially factored as $X\xrightarrow{g} H_f \xrightarrow{h} Y$, where $g$ is in $coSp^{\cong}(\mathcal{N})$ and $h$ is in $Lw^{\cong}(M)\cap \mathcal{N}^{\perp}$.
\end{prop} We can now also state the more elaborate version of Theorem \ref{t:model_dual}: \begin{thm}\label{t:model_elaborate} Let $(\mathcal{C},\mathcal{W},\mathcal{C}of)$ be a small ind-admissible weak cofibration category. Then there exists a model category structure on $\Ind(\mathcal{C})$ such that: \begin{enumerate} \item The weak equivalences are $\mathbf{W} := Lw^{\cong}(\mathcal{W})$. \item The cofibrations are $\mathbf{C} := R(coSp^{\cong}(\mathcal{C}of))$. \item The fibrations are $\mathbf{F} := (\mathcal{C}of\cap \mathcal{W})^{\perp}$. \end{enumerate} Moreover, this model category is $\omega$-combinatorial, with set of generating cofibrations $\mathcal{C}of$ and set of generating acyclic cofibrations $\mathcal{C}of\cap \mathcal{W}$. Furthermore, the acyclic cofibrations in this model structure are given by $$\mathbf{C}\cap\mathbf{W}=R(coSp^{\cong}(\mathcal{C}of\cap \mathcal{W})).$$ \end{thm} The following two definitions are based on \cite[Section 2.1]{Hov}. \begin{define}\label{d:trans} Let $\mathcal{D}$ be a category with all small colimits, $N\subseteq \Mor(\mathcal{D})$ a class of morphisms in $\mathcal{D}$, and $\lambda$ an ordinal. A \emph{$\lambda$-sequence} in $\mathcal{D}$, relative to $N$, is a diagram $X:\lambda\to \mathcal{D}$, such that for all limit ordinals $t<\lambda$, the natural map $ \mathop{\precolim}_{s<t} X_s\to X_t $ is an isomorphism, and for all non-limit ordinals $t<\lambda$, the map $X_{t-1}\to X_t $ is in $N$. The (transfinite) composition of the $\lambda$-sequence $X$ is defined to be the natural map $X(0)\to\mathop{\precolim}_{\lambda} X$. \end{define} \begin{define} Let $\mathcal{D}$ be a category with all small colimits, and $N\subseteq \Mor(\mathcal{D})$ a class of morphisms in $\mathcal{D}$. A \emph{relative} $N$-\emph{cell complex} is a transfinite composition of pushouts of elements of $N$.
That is, $f:A\to B$ is a relative $N$-cell complex if there exists an ordinal $\lambda$, and a $\lambda$-sequence $X$ in $\mathcal{D}$, relative to pushouts of maps in $N$, such that $f$ is isomorphic to the composition of $X$. We denote the collection of all relative $N$-cell complexes by $cell(N)$. \end{define} From now until the end of this section we let $\mathcal{C}$ be a small category with finite colimits. By the results of \cite{AR}, the category $\Ind(\mathcal{C})$ is locally presentable and every object of $\mathcal{C}$ is $\omega$-presentable in $\Ind(\mathcal{C})$. In particular, the category $\Ind(\mathcal{C})$ has all small colimits. \begin{prop}[{\cite[Proposition 5.2]{Isa}}]\label{coSp_cell} For any class of morphisms $N\subseteq \Mor(\mathcal{C})$ we have $coSp^{\cong}(N)\subseteq cell(N)$, in $\Ind(\mathcal{C})$. \end{prop} In \cite{Isa}, Isaksen conjectures a partial converse to Proposition \ref{coSp_cell}. Namely, that for any class of morphisms $N\subseteq \Mor(\mathcal{C})$, we have $R(cell(N))\subseteq R(coSp^{\cong}(N))$, in $\Ind(\mathcal{C})$. This conjecture fails as stated, as the following counterexample demonstrates. Take $\mathcal{C}$ to be the category $$\xymatrix{ a \ar[r]\ar[d]^N & b \ar[d] \\ c \ar[r] & d, \\ }$$ where the square is commutative, and take $N$ to consist only of the unique map $a\to c$. It is easy to verify that there is a natural equivalence of categories $\Ind(\mathcal{C})\simeq \mathcal{C}$, and under this equivalence, $R(coSp^{\cong}(N))$ is just $N$. Thus the unique map $b\to d$ belongs to $R(cell(N))$ (indeed, the square above is a pushout square, so $b\to d$ is a cobase change of the map $a\to c$), but not to $R(coSp^{\cong}(N))$. However, using Theorem \ref{t:model_dual}, we can prove Isaksen's conjecture in the case where $N$ is a subcategory that is closed under cobase change. \begin{prop}\label{c:trans} Let $\mathcal{N}\subseteq\mathcal{C}$ be a subcategory that is closed under cobase change and contains all the isomorphisms. Then $R(cell(\mathcal{N}))= R(coSp^{\cong}(\mathcal{N}))$.
\end{prop} \begin{proof} By Proposition \ref{coSp_cell} we know that $R(coSp^{\cong}(\mathcal{N}))\subseteq R(cell(\mathcal{N}))$. It thus remains to show that $R(cell(\mathcal{N}))\subseteq R(coSp^{\cong}(\mathcal{N}))$. Since $\mathcal{N}\subseteq R(coSp^{\cong}(\mathcal{N}))$, it is enough to show that the class $R(coSp^{\cong}(\mathcal{N}))\subseteq \Mor(\Ind(\mathcal{C}))$ is closed under cobase change and transfinite compositions. It is easy to see that $(\mathcal{C},\mathcal{C},\mathcal{N})$ is a small weak cofibration category. Moreover, $Lw^{\cong}(\mathcal{C})=\Mor(\Ind(\mathcal{C}))$ by (the dual version of) Lemma \ref{every map natural}, so $(\mathcal{C},\mathcal{C},\mathcal{N})$ is clearly ind-admissible. Thus, it follows from Theorem \ref{t:model_elaborate} that there exists a model category structure on $\Ind(\mathcal{C})$ such that: \begin{enumerate} \item The weak equivalences are $\mathbf{W} := Lw^{\cong}(\mathcal{C})$. \item The cofibrations are $\mathbf{C} := R(coSp^{\cong}(\mathcal{N}))$. \item The fibrations are $\mathbf{F} := \mathcal{N}^{\perp}$. \end{enumerate} In particular, it follows that $R(coSp^{\cong}(\mathcal{N}))={}^{\perp}(\mathbf{F}\cap \mathbf{W})$, and thus $R(coSp^{\cong}(\mathcal{N}))$ is closed under cobase change and transfinite compositions by well-known arguments (see for example \cite[Section A.1.1]{Lur}). \end{proof} Proposition \ref{c:trans} can be used to connect Quillen's small object argument with our factorization proposition (Proposition \ref{p:factor_gen_dual}). As an example, we show how a special case of the small object argument follows easily from our factorization proposition. \begin{cor}\label{c:cosmall} Let $N \subseteq \Mor(\mathcal{C})$ be any class of morphisms. Then every map $f:X\to Y$ in $\Ind(\mathcal{C})$ can be \emph{functorially} factored as $X\xrightarrow{g} H \xrightarrow{h} Y$, where $g$ is in $cell(N)$ and $h$ is in $N^{\perp}$.
\end{cor} \begin{proof} Let $\mathcal{N}$ denote the smallest subcategory of $\mathcal{C}$ that contains $N$ and all the isomorphisms, and is closed under cobase change. Since the classes $cell({N})$ and ${}^{\perp}(N^{\perp})$ are closed under cobase change and transfinite composition, we have: \begin{enumerate} \item $cell({N})=cell(\mathcal{N})$. \item $N^{\perp}=({}^{\perp}(N^{\perp}))^{\perp}=\mathcal{N}^{\perp}$. \end{enumerate} Thus the corollary follows by combining Propositions \ref{p:factor_gen_dual} and \ref{c:trans}. \end{proof} We now present a nice application of Proposition~\ref{c:trans}. Let $\mathcal{S}_f$ denote the category of simplicial sets with finitely many non-degenerate simplices. Let $\mathcal{A}$ denote the smallest subcategory of $\mathcal{S}_f$ that contains all the horn inclusions $\Lambda^n_i\to\Delta^n$ and all the isomorphisms, and is closed under push outs. In other words, if $H$ denotes the set of horn inclusions, then maps in $\mathcal{A}$ are just finite relative $H$-cell complexes in $\mathcal{S}_f$. That is, maps that can be obtained as a finite composition of push outs of horn inclusions, starting from an arbitrary object in $\mathcal{S}_f$. Clearly, every map in $\mathcal{A}$ is a trivial cofibration in $\mathcal{S}_f$. \begin{prop}\label{l:trivial_cof} Every trivial cofibration in $\mathcal{S}_f$ is a retract of a map in $\mathcal{A}$. \end{prop} \begin{proof} Let $f:A\to B$ be a trivial cofibration in $\mathcal{S}_f$. By the results of \cite[Section 2.1]{Hov}, $f$ belongs to $R(cell(H))=R(cell(\mathcal{A}))$ as a map in $\Ind(\mathcal{S}_f)\simeq\mathcal{S}$. By Proposition~\ref{c:trans}, $f$ also belongs to $R(coSp^{\cong}(\mathcal{A}))$. Thus, there exists $h\in coSp^{\cong}(\mathcal{A})$ such that $f$ is a retract of $h$. Without loss of generality we may assume that $h:\{X_t\}_{t\in T}\to \{Y_t\}_{t\in T}$ is a natural transformation, which is a cospecial $\mathcal{A}$-map.
We have the following retract diagram: $$ \xymatrix{ A \ar[d]^f \ar[r] & \{X_t\} \ar[d]^h \ar[r] & A \ar[d]^f \\ B \ar[r] & \{Y_t\} \ar[r] & B .}$$ It follows from the definition of morphisms in $\Ind(\mathcal{S}_f)$ that there exists $t_0 \in T$ such that the above diagram can be factored as: $$ \xymatrix{ A \ar[d]^f \ar[r] & X_{t_0} \ar[d]^{h_{t_0}} \ar[r] & \{X_t\}_{t\in T} \ar[d]^h \ar[r] & A \ar[d]^f \\ B \ar[r] & Y_{t_0} \ar[r] & \{Y_t\}_{t\in T} \ar[r] & B .}$$ It follows that $f$ is a retract of $h_{t_0}$ in $\mathcal{S}_f$. But by Proposition~\ref{forF_sp_is_lw}, $h$ is a levelwise $\mathcal{A}$-map. In particular $h_{t_0}$ belongs to $\mathcal{A}$, and we get the desired result. \end{proof}
The Emerging Demands of Better For You Ingredients
June 20, 2019 | dpo-admin | DPO International, Food & Beverage Intelligence

Nowadays, consumers are becoming more aware of the importance of healthy lifestyles. Health trends and societal changes are the major drivers contributing to the growing demand for "better for you" ingredients in the food and beverage industry. Consumers are becoming more interested in knowing about ingredient quality and health benefits when making food choices, which is why more and more consumers are looking at product labels before making a purchase. Through these buying behaviors, clean label is gaining significance and is rapidly moving into the mainstream of the food and beverage industry. Clean label can be defined as label claims such as "all natural", "minimally processed" and "non-GMO" (Kantha, 2018).
According to a recent survey by McFadden & Lusk (2018), consumers see 'organic' and 'non-GMO' food labels as synonymous and are willing to pay more for both: 35% more for products labelled "non-GMO Project" and 40% more for those labelled "USDA Organic".

Fiber Enrichment

Insufficient dietary fiber intake is a longstanding nutrition concern, so adequate intake of dietary fiber is certainly important in achieving good digestive health. Inulin and Oligofructose (FOS) are among the best choices of dietary fiber. They are natural non-digestible carbohydrates from chicory roots and are associated with maintaining good digestive health. They support an overall healthy intestinal environment, which contributes to improved stool frequency. An FDA ruling has further reinforced the claim that Inulin and Oligofructose (FOS) are beneficial dietary ingredients that help customers improve nutritional quality and bridge the fiber gap (FDA, 2018). According to EFSA (2015), significant results were demonstrated whereby 12 g of inulin daily was shown to help increase stool frequency, which supports digestive health. Inulin and Oligofructose are soluble in cold water and can be added to almost any application, including baked goods, sport beverages, creamy dairy desserts, confectioneries, breakfast cereals, dairy alternatives and more. Alternatively, they can be sprinkled directly on your meals.

Sugar Reduction

The World Health Organization (WHO) has recommended that the intake of free sugars should not exceed 10% of total dietary energy intake for both adults and children. Although sugar is a great source of energy and can be quickly metabolized and absorbed by our bodies, it provides "empty" calories and lacks minerals and vitamins. Consumers are becoming increasingly aware of the health detriments of sugar, especially when consumed excessively. Monk Fruit extract derives from a dried fruit known as Luo Han Guo.
It is a natural sweetener: the glycosides in the fruit give it its sweet taste, making it an all-natural, zero-calorie sweetener (Balachandran, 2018). The sweetness of Monk Fruit extract is reported to be 100 to 300 times that of sugar, depending on the structure of the mogrosides, the number of glucose units and the food matrix (FDA, 2018; Bajwa & Goraya, 2016). It is so sweet that only small amounts are needed to sweeten foods (Boyle, 2015). It can also be marketed to diabetics and those who want to cut down on sugar intake.

Protein – The Star Health Halo Ingredient

Protein is another star health halo ingredient and is forecast to grow at a CAGR of close to 9% from 2019 to 2023. Increasing health consciousness among consumers contributes to protein's positive growth, and the expanding global vegan population is one of the primary contributors to the growth of the global plant-based protein market (Technavio, 2018). Today, protein consumption is no longer meant only for workouts but also to power up our days. The plant protein market is segmented by source into wheat protein, soy protein, pea protein and other cereal proteins. Proteins derived from plants are available in various forms, including protein concentrate, protein isolate and textured protein. Consumers are increasingly inclined towards plant protein thanks to its strong nutritional profile, including ease of digestibility, sustainable sourcing, high nutritional value and non-allergenic nature (Mordor Intelligence, 2018).

Plant-based Ingredients

Natural and simple diets are further expanding vegetarian, vegan and other plant-focused formulations. The industry is welcoming more products that utilize plants as key ingredients, as consumers are seeking more fruits, vegetables, grains, seeds, herbs and other plant-based ingredients for their shopping lists.
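To put the forecast growth figure in perspective, here is a back-of-the-envelope sketch. This is an illustration only: the roughly 9% CAGR is Technavio's forecast cited above, and a constant annual rate over the four growth years is an assumption.

```python
def cagr_multiple(rate: float, years: int) -> float:
    """Cumulative growth factor implied by a constant annual growth rate."""
    return (1 + rate) ** years

# ~9% CAGR over the four growth years from 2019 to 2023
multiple = cagr_multiple(0.09, 4)
print(f"Implied cumulative market growth: {multiple:.2f}x")
```

At a steady 9% a year, the market would end 2023 at roughly 1.41 times its 2019 size, i.e. about 41% larger.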
Rice protein is increasingly considered an alternative and economical source of high-quality plant-based protein, while also being natural, gluten-free, non-GMO and hypo-allergenic, with an excellent amino acid profile and protein digestibility (Beneo, 2016). Chia is another nutritious, plant-based ancient seed. This magnificent seed not only provides sustainable energy and endurance but is also a nutrient-dense source of soluble and insoluble fiber, plant omegas and protein. According to USDA (2016), the dietary fiber content of chia seeds is higher than that of flax seeds and quinoa seeds. Chia seeds are a remarkable source of essential minerals, including magnesium, calcium, potassium and phosphorus. Compared to 100 g of milk, chia seeds contain six times more calcium, eleven times more phosphorus and four times more potassium (Munoz et al., 2012).

Sodium Reduction

Although sodium reduction is a priority for food manufacturers, it is not a major health concern for the majority of consumers. However, the WHO (2018) recommends keeping salt intake below 5 g daily for a healthy diet. The Dietary Guidelines also recommend that the general population consume no more than 2,300 milligrams of sodium a day (about a teaspoon of table salt) (USDA, 2015). We tend to consume too much sodium in our daily food intake and are often unaware of the actual amount of salt we consume, as it may be added to processed foods including ready meals, processed meats such as bacon, ham and salami, cheese, and salty snacks. Excess salt also occurs in frequently consumed foods such as bread, and in condiments and seasonings used in cooking, such as stock cubes, table salt and sauces. High sodium intake contributes to high blood pressure, which leads to heart disease and stroke (Mozaffarian et al., 2014). Consumers nowadays are becoming more health conscious and demand healthier food choices.
They are willing to pay a premium price for healthier products that offer various functional benefits. "Better for you" ingredients will continue to shine as food manufacturers increasingly include healthy ingredients in fortified and functional food and beverage applications to capture consumer spending. The creativity and innovation of manufacturers have successfully promoted "better for you" ingredients in the food industry, expanding the end market with products that offer healthier nutritional benefits.

References

Austin, K. & Seebohar, B. (2011). Performance Nutrition: Applying the Science of Nutrient Timing. Human Kinetics, 132.
Bajwa, U. & Goraya, R. K. (2016). The Sweetness Technology of Sugar Substituted Low-Calorie Beverages. Food & Nutrition Journal, G115, 1-8.
Balachandran, K. (2018). Natural sweeteners. Journal of Social Health and Diabetes, 6, 8-10. https://doi.org/10.4103/JSHD.JSHD_20_17
Beneo. (2016). Matching today's expectations. Specialty rice ingredients for better nutrition. Brochure. Retrieved from https://www.food.be/public/uploads/company-files/77/BENEO_Brochure_Specialty_Rice_ingredients_2016.pdf
Boyle, M. A. (2015). Personal Nutrition. Cengage Learning, 118.
EFSA. (2015). Scientific Opinion on the substantiation of a health claim related to "native chicory inulin" and maintenance of normal defecation by increasing stool frequency pursuant to Article 13.5 of Regulation (EC) No 1924/2006, 13(1), 3951.
FDA Guidance. (2018). "The Declaration of Certain Isolated or Synthetic Non-Digestible Carbohydrates as Dietary Fiber on Nutrition and Supplement Facts Labels: Guidance for Industry." Retrieved from https://www.fda.gov/regulatory-information/search-fda-guidance-documents/guidance-industry-declaration-certain-isolated-or-synthetic-non-digestible-carbohydrates-dietary
FDA. (2018). Additional Information about High-Intensity Sweeteners Permitted for Use in Food in the United States. Retrieved from https://www.fda.gov/food/food-additives-petitions/additional-information-about-high-intensity-sweeteners-permitted-use-food-united-states
Graff, C.S., Allouche, R. & Allouche, R. (2013). The New Lean for Life. Harlequin, 22.
Kantha, S. (2018). Clean Label Trends. Prepared Foods. Retrieved from https://www.preparedfoods.com/articles/120827-clean-label-trends
McFadden, B. R. & Lusk, J. L. (2018). Effects of the National Bioengineered Food Disclosure Standard: Willingness To Pay for Labels that Communicate the Presence or Absence of Genetic Modification. Applied Economic Perspectives and Policy, 40(2), 259-275.
Mordor Intelligence. (2018). Global Plant Protein Market - By Product Type, Application and Geography - Market Shares, Forecasts and Trends (2018 - 2023). Industry Report.
Mozaffarian, D., Fahimi, S., Singh, G. M., Micha, R., Khatibzadeh, S., Engell, R. E., Lim, S., Danaei, G., Ezzati, M. & Powles, J. (2014). Global Sodium Consumption and Death from Cardiovascular Causes. N Engl J Med, 371(7), 624-34. https://doi.org/10.1056/NEJMoa1304127
Munoz, L. A., Cobos, A., Diaz, O. & Aguilera, J. M. (2012). Chia seeds: microstructure, mucilage extraction and hydration. Journal of Food Engineering, 108, 216-224.
Palus, S., Springer, Doehner, W., Haehling, S. V., Anker, M., Anker, S. D. & Springer, J. (2017). Models of sarcopenia: Short review. International Journal of Cardiology, 238, 19-21.
Parker, L. (2018). Better-For-You Category Offers More Choices In Ingredients, Formats. Prepared Foods. Retrieved from https://www.preparedfoods.com/articles/121613-better-for-you-category-offers-more-choices-in-ingredients-formats
Pennings, B., Groen, B., Lange, A., Gijsen, A. P., Zorenc, A. H., Senden, J. M. G. & Loon, L. J. C. (2012). Amino acid absorption and subsequent muscle protein accretion following graded intakes of whey protein in elderly men. Am J Physiol Endocrinol Metab, 302(8), E992-9. https://doi.org/10.1152/ajpendo.00517.2011
Technavio. (2018). Global Plant Based Protein Products Market 2019-2023. Market Research Report.
USDA National Nutrient Database for Standard Reference Release 28. (2016). Basic report 12006, seeds, Chia seeds, dried.
USDA. (2015). Scientific Report of the 2015 Dietary Guidelines Advisory Committee. Retrieved from https://ods.od.nih.gov/pubs/2015_dgac_scientific_report.pdf
# Math Help - coordinate problem

1. ## coordinate problem

The coordinates of the midpoint of a line segment AB are (-2, 4). If the coordinates of point A are (7, 10), what are the coordinates of B?

How can I solve this problem? I tried using the midpoint formula but it doesn't work.

2. Originally Posted by zelda1850

> The coordinates of the midpoint of a line segment AB are (-2, 4). If the coordinates of point A are (7, 10), what are the coordinates of B?
> How can I solve this problem? I tried using the midpoint formula but it doesn't work.

Hi zelda,

the midpoint formula ought to work easily enough.

You can simply say...

the x co-ordinate of the midpoint is halfway between the x co-ordinates of the points on the line.

So -2 is halfway between 7 and another x.
-2 is 9 away from 7.
The other value 9 away from -2 is -11.

So -2 is halfway between -11 and 7.

Also 4 is halfway between 10 and -2.

The other point is (-11, -2).

Using the midpoint formula, which is

"the x midpoint is the average of the x values"
"the y midpoint is the average of the y values":

$\frac{x_1+x_2}{2}=-2$

$x_1+x_2=-2(2)=-4$

$x_1=-4-x_2=-4-7=-11$

$\frac{y_1+y_2}{2}=4$

$y_1+y_2=4(2)=8$

$y_1=8-y_2=8-10=-2$
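The calculation above can be checked with a short script: solving M = (A + B) / 2 for B gives B = 2M - A (a minimal sketch; the function name is my own).

```python
def other_endpoint(midpoint, a):
    """Given the midpoint M of segment AB and endpoint A,
    solve M = (A + B) / 2 for B, i.e. B = 2M - A."""
    mx, my = midpoint
    ax, ay = a
    return (2 * mx - ax, 2 * my - ay)

print(other_endpoint((-2, 4), (7, 10)))  # (-11, -2)
```

This reproduces the answer worked out in the reply: B = (-11, -2).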
Hynix offers 20nm NAND chips Hynix has announced that it has begun mass production of 64Gb NAND flash chips based around a 20nm process, opening up the possibility of another leap in SSD capacity. The company's latest 20nm process means that it is able to produce 64Gb NAND flash chips the same physical size as its previous 32Gb models - theoretically allowing manufacturers using the older models to drop in the new replacements for an instant doubling of capacity, or giving them the opportunity to use half as many chips for the same capacity in order to reduce costs or make power savings. Pricing should improve as a result of the move, too: the company claims that its 300mm fabrication facility can get around 60 per cent better yields than the previous 30nm process - and better yields mean better prices, although how much of this saving might get passed down to the consumer level remains to be seen. Dr. S. W. Park, the chief technology officer at the company, said that the move means that Hynix is now "enabled to provide customized, high performance products in a timely manner which perfectly suits mobile solutions including smartphones, tablet PCs and others." So far no companies have announced that they are planning to use the new 20nm chips in their products, although Hynix has announced that it will be continuing its partnership with Israel-based SSD manufacturer Anobit and is looking to upgrade its devices to the new chips by September this year. With solid-state storage becoming increasingly popular, moves like this are required in order to get the cost-per-megabyte down: while current models offer the capacity and performance required of, say, a boot drive, they're still priced out of the reach of many - especially if you're planning on using them for mass storage.
Are you thinking about holding out on an SSD purchase until 20nm becomes the norm, or are companies switching to Hynix's latest chips just likely to keep the cost savings for themselves? Share your thoughts over in the forums.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,032
In the busyness of our lives, we don't often reflect upon the remarkable treasure to be found in the Mass. The video True Magnificence of the Mass reveals the truth about the nature of worship, our understanding of the Mass, and how we gain access to Jesus Christ by participating in it. Through a series of interviews with Catholic lay people and Father John Riccardo, we gain insight into the Mass as a gift from God where He pours His grace upon us to face the challenges of life. Within the personal testimonies, the Mass as a sacrifice is explored, as well as the great mystery of the Eucharist, whereby the body and blood of Jesus is made present and accessible to us. How different our lives are given such a gift. Your life will be forever changed!

True Magnificence Introduction from Our Lady of Good Counsel on Vimeo.

True Magnificence from Our Lady of Good Counsel on Vimeo.

An inspiring and informative interview with Father John Riccardo provides additional teaching on the Mass. Below is a brief summary of each chapter, as well as questions for discussion or reflection and some practical suggestions for how to re-awaken to the magnificence of the Mass.

This is a brief introduction to the changes in the words we will be saying at Mass beginning November 27, 2011, due to the revised translation of the Roman Missal. The history of changes to the Mass is given, along with the importance of making changes now.

Teachings of the Mass: The Revised Roman Missal from Our Lady of Good Counsel on Vimeo.

The Mass is known as the "source and summit" of Catholic life because of the gift we receive of Jesus Christ in the Eucharist. This chapter explains God's love and desire for us and how He is revealed in the Mass.

Teachings of the Mass: The Significance of the Mass from Our Lady of Good Counsel on Vimeo.

Using the analogy of the movie Saving Private Ryan, Father John Riccardo explores the sacrifice of Christ, in His death and resurrection, and how the Mass represents that sacrifice. In addition, the great gift of the Mass is revealed.

Teachings of the Mass: The Mass as a Sacrifice from Our Lady of Good Counsel on Vimeo.

The nature of grace is explained, along with the ways we can receive it in the Mass, through both Sacred Scripture and the Eucharist. The nature of worship is discussed, and how the material pleasures of this world distract us from the awesome wonder of God in the Mass. Suggestions are given for how to gain a better understanding of the Mass and to get something out of it.

Teachings of the Mass: Prepare for Sunday from Our Lady of Good Counsel on Vimeo.
{ "redpajama_set_name": "RedPajamaC4" }
2,072
{"url":"https:\/\/repository.uantwerpen.be\/link\/irua\/132641","text":"Title Domain selectivity in $BiFeO_{3}$ thin films by modified substrate termination Author Solmaz, Alim Huijben, Mark Koster, Gertjan Egoavil, Ricardo Gauquelin, Nicolas Van Tendeloo, Gustaaf Verbeeck, Jo Noheda, Beatriz Rijnders, Guus Faculty\/Department Faculty of Sciences. Physics Publication type article Publication 2016 Weinheim , 2016 Subject Physics Chemistry Engineering sciences. Technology Source (journal) Advanced functional materials. - Weinheim Volume\/pages 26(2016) :17 , p. 2882-2889 ISSN 1616-301X ISI 000377587800011 Carrier E Target language English (eng) Full text (Publishers DOI) Affiliation University of Antwerp Abstract Ferroelectric domain formation is an essential feature in ferroelectric thin films. These domains and domain walls can be manipulated depending on the growth conditions. In rhombohedral BiFeO3 thin films, the ordering of the domains and the presence of specific types of domain walls play a crucial role in attaining unique ferroelectric and magnetic properties. In this study, controlled ordering of domains in BiFeO3 film is presented, as well as a controlled selectivity between two types of domain walls is presented, i.e., 71\u00b0 and 109\u00b0, by modifying the substrate termination. The experiments on two different substrates, namely SrTiO3 and TbScO3, strongly indicate that the domain selectivity is determined by the growth kinetics of the initial BiFeO3 layers. 
E-info https:\/\/repository.uantwerpen.be\/docman\/iruaauth\/464496\/132641.pdf http:\/\/gateway.webofknowledge.com\/gateway\/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000377587800011&DestLinkType=RelatedRecords&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 http:\/\/gateway.webofknowledge.com\/gateway\/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000377587800011&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=ef845e08c439e550330acc77c7d2d848 Handle","date":"2017-04-26 00:08:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.2588953375816345, \"perplexity\": 13289.13370646953}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-17\/segments\/1492917121000.17\/warc\/CC-MAIN-20170423031201-00400-ip-10-145-167-34.ec2.internal.warc.gz\"}"}
\section{Introduction} As illustrated in Figure \ref{example}, in local sequence transduction (LST) tasks, a model is trained to map an input sequence $x_{1}, \ldots, x_{n}$ to an output sequence $y_{1}, \ldots, y_{m}$, where the input and output sequences are of similar length and differ only in a few positions. Many important NLP tasks can be formulated as LST tasks, including automatic grammatical error correction (GEC)~\cite{lee2006automatic}, OCR error correction~\cite{tong1996statistical} and spell checking~\cite{fossati2007mixed}. With the recent success of sequence-to-sequence (seq2seq) learning~\cite{sutskever2014sequence} and the transformer model~\cite{vaswani2017attention}, most LST tasks have been tackled by directly training transformer-based models in a seq2seq fashion. While the conventional seq2seq paradigm suits well for general sequence transduction problems such as machine translation, its left-to-right auto-regressive decoding scheme cannot access the future predictions on the right side, which does not fully utilize the characteristic of LST tasks and has been demonstrated to degrade the performance of seq2seq models~\cite{zhang2018asynchronous,zhang2019synchronous}. \begin{figure} \centering \includegraphics[width=\linewidth]{example_new.pdf} \caption{Illustration of the characteristic of local transduction tasks versus general sequence transduction tasks. Words and letters in red are those different from those in the input sequences.} \label{example} \end{figure} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{mask-emnlp-new_cropped.pdf} \caption{Illustration of the proposed pseudo future modeling approach and the pseudo-bidirectional attention mask used for parallel training.} \label{mask} \end{figure*} In this work, motivated by the characteristic of LST tasks, we propose a Pseudo Bidirectional Decoding (PBD) approach to tackle LST tasks.
Our approach copies the input tokens on the right side of the current decoding position as a proxy for the future tokens. In this way, we augment the decoder of the conventional transformer by allowing it to attend to the representation of ``pseudo'' future tokens in the decoder, making the decoding self-attention module bidirectional without introducing any computational overhead during inference. To retain the parallelizability of the training transformer models, we propose a novel masking strategy that enables the decoder to attend to copied future token representations during training in a parallelizable fashion. Also, we incorporate a segment embedding mechanism to make the decoder aware of whether a token is directly copied from the encoder and represent them differently from the generated tokens. With the proposed approach, the encoder and the decoder in transformer models for LST tasks receive similar input sequences and both attend to their bidirectional context information, which motivates us to share all their parameters (except encoder-decoder attention). The parameter sharing mechanism allows us to roughly reduce the total number of parameters of the model by half, which is beneficial for real-world applications and makes the training more efficient. It also explicitly models the characteristic of LST tasks and leads to good regularization effects, enhancing the performance of transformer models on LST tasks and allowing us to train deeper models for further improvements. We conduct extensive experiments on three LST tasks including grammatical error correction, spell correction, and OCR correction. Experimental results demonstrate that the proposed PBD approach is able to substantially and consistently improve over competitive transformer baselines across all three LST tasks and yield state-of-the-art results on both spell and OCR correction tasks. 
\section{Pseudo-Bidirectional Decoding} \subsection{Pseudo Future Modeling} In contrast to recent works on bidirectional decoding~\cite{zhang2018asynchronous,zhang2019synchronous} that employs a right-to-left model at the same time and combines their predictions in a post-hoc fashion with sophisticated algorithms, our approach enables the decoder of the seq2seq model to exploit the future context of the generated sequence without having to predict them in the first place. Concretely, our method copies the representation of tokens from the $N+1$ th position to the end of the input sequence in the encoder layer to the corresponding decoder layer as pseudo future information when predicting the $N$ th output token. For instance, for grammatical error correction, given an input text ``\textit{He go to school yesterday.}'', a conventional left-to-right decoder would probably generate ``\textit{goes}'' at the second decoding step, as the decoder state is ``\textit{He \textunderscore} '', which is likely to be continued with the third person singular form of the verb ``go''. In contrast, with the proposed pseudo-bidirectional decoding scheme, the decoder state becomes ``\textit{He \textunderscore \ to school yesterday.}'', which facilitates the decoder to correctly generate ``went''. While ideally the encoder-decoder attention may capture this information, our method makes the decoder self-attention more effective by allowing it to directly attend to future information, which may be complementary to the information captured by the encoder-decoder attention module, leading to better empirical performance. \paragraph{Pseudo-bidirectional Attention Mask} A naive implementation of the PBD approach requires us to change the decoder input for each decoding step instead of feeding the entire output sequence into the decoder and use a causal attention mask to ensure the causality of the decoder. 
This would hinder the transformer model from being trained in parallel, thus making the training much less efficient. To address this problem, we propose a novel masking strategy. As illustrated in Figure \ref{mask}, we concatenate the representation of the input sequences to that of the output sequences to form the key and the value in the decoder self-attention module. The pseudo-bidirectional attention mask makes the decoder self-attention bidirectional by allowing the query tokens to attend to pseudo future tokens copied from the encoder, retaining the causality of the decoder and enabling parallel training. \paragraph{Segment Embedding} While the characteristic of LST tasks ensures that the copied pseudo future tokens are similar to the expected output tokens, the simple position-wise alignment method may make the pseudo future information contain some noise. Therefore, we propose a simple segment embedding method that enables the decoder to distinguish the copied tokens from the tokens generated by the decoder and represent them differently. Similar to BERT~\cite{devlin2018bert}, we add a learned embedding, which indicates whether the token is generated or copied, to each token representation in each decoder layer. Hopefully, this improves the decoder's robustness to the noise in the copied future token representations. \subsection{Parameter sharing} The encoder and the decoder in conventional transformer models are independently parameterized for two main reasons. First, the inputs for the encoder and the decoder are usually different for general sequence transduction tasks such as machine translation and text summarization. Second, the encoder self-attention module is bidirectional whereas the decoder self-attention is causal (i.e. uni-directional).
The characteristic of LST tasks ensures that the inputs for the encoder and the decoder are roughly the same, and the proposed pseudo-bidirectional decoding method makes both the encoder and the decoder self-attention modules bidirectional. This motivates us to share all parameters, except those in the encoder-decoder attention module, between the encoder and the decoder. This roughly reduces the number of parameters by half and also provides some regularization effects. \section{Experiments} In this section, we conduct experiments on LST benchmarks to validate the effectiveness of our approach. We mainly focus on the grammatical error correction task and also report results on two other LST tasks including spell and OCR correction. \subsection{Grammatical Error Correction} \paragraph{Datasets} Following the recent work~\cite{grundkiewicz2019neural,kiyono2019empirical,zhou2019improving} in GEC, the GEC training data we use is the public Lang-8~\cite{mizumoto2011mining}, NUCLE~\cite{dahlmeier2013building}, FCE~\cite{yannakoudakis2011new} and W\&I+LOCNESS datasets~\cite{bryant2019bea,granger1998computer}. To investigate whether our approaches can yield consistent improvement in this setting, we pretrain our models with 30M sentence pairs obtained by the corruption-based approach and 30M pairs by the fluency boost back-translation approach \cite{ge2018fluency} for GEC pre-training. \paragraph{Models} We use the ``transformer-big'' architecture as our baseline model, denoted by \textbf{Transformer}. For thorough comparison, we train two model variants with our approach. The first model (\textbf{Ours}) consists of the same number (i.e. 6) of transformer blocks as the baseline model, and thus has the same inference latency while containing only half the number of parameters. The second model is denoted by \textbf{Ours-12 layers}, which consists of 12 transformer blocks, and thus has approximately the same number of parameters but the inference latency is $1.7\times$ longer.
For comparison, we also train a variant of the ``transformer-big'' architecture with 12 blocks, which is denoted by \textbf{Transformer-12 layers}. For reference, we also compare with a recent model specifically designed for local sequence transduction tasks, denoted by \textbf{PIE}. We use synthetic data for pre-training and then use the GEC training data to fine-tune the pre-trained models. The details of model training are provided in the Appendix due to space constraints. \begin{table}[!t] \begin{center} \resizebox{1.\linewidth}{!}{ \begin{tabular}{lccc} \hline\hline \textbf{Method} & \textbf{\# Parameters} & \textbf{BEA-19} & \textbf{CoNLL-14} \\ \hline \bf PIE (with pretraining) & 345M & - & 59.7 \\ \hline \multicolumn{4}{c}{\textbf{w/o Pretraining}} \\ \hline \bf Transformer & 210M & 57.1 & 51.5 \\ \bf Transformer-12 layers & 383M & 56.3 & 51.3 \\ \bf Ours & 132M & 58.6 & 53.7 \\ \multicolumn{2}{l}{~-w/o future modeling} & 57.6 & 51.8 \\ \multicolumn{2}{l}{~-w/o parameter sharing} & 58.2 & 52.9 \\ \bf Ours-12 layers & 232M & \bf 59.5 & \bf 54.4 \\ \multicolumn{2}{l}{~-w/o future modeling} & 58.6 & 52.1 \\ \multicolumn{2}{l}{~-w/o parameter sharing} & 58.8 & 53.8 \\ \hline \multicolumn{4}{c}{\textbf{w/ pretraining}} \\ \hline \bf Transformer & 210M & 61.2 & 57.1 \\ \bf Transformer-12 layers & 383M & 61.9 & 57.5 \\ \bf Ours & 132M & 63.2 & 58.9 \\ \multicolumn{2}{l}{~-w/o future modeling} & 61.5 & 57.4 \\ \multicolumn{2}{l}{~-w/o parameter sharing} & 61.7 & 57.9 \\ \bf Ours-12 layers & 232M & \bf 63.9 & \bf 60.1 \\ \multicolumn{2}{l}{~-w/o future modeling} & 61.8 & 57.7 \\ \multicolumn{2}{l}{~-w/o parameter sharing} & 62.1 & 58.2 \\ \hline\hline \end{tabular}} \end{center} \caption{\label{gecresult} The performance of different compared models on two test sets of GEC task.} \end{table} \paragraph{Evaluation} We evaluate the performance of GEC models on the BEA-19 and the CoNLL-14 benchmark datasets. 
Following the latest work in GEC~\cite{grundkiewicz2019neural,kiyono2019empirical,zhou2019improving, zhang2019sequence}, we evaluate the performance of trained GEC models using $F_{0.5}$ on test sets using official scripts\footnote{M2scorer for CoNLL-14; Errant for BEA-19.} in both datasets. \paragraph{Results} The performance of different compared models on the GEC task is shown in Table \ref{gecresult}. Note that we only compare against transformer models with the same pretraining/fine-tuning data in our setting for fair comparison as our contribution is orthogonal to better data synthesis method. We can see that for the same configuration of the ``transformer-big'' baseline, our approach outperforms the baseline by a large margin in both settings with and without pretraining with synthetic data. This suggests that our approach is able to improve the performance of transformer-based LST models while reducing the number of parameters by half. In addition, we can see that by doubling the number of transformer blocks, our model is able to yield substantial further improvement. In contrast, we can see that simply increasing the number of transformer blocks (i.e. \textbf{Transformers-12 layers}) fails to improve the performance. This implies that our approach may be able to facilitate the training of deeper transformer models by providing regularization effects. We then conduct an ablation study where either the pseudo future context modeling approach or the parameter sharing mechanism is disabled to better understand their relative importance. The results are shown in Table \ref{gecresult}. We can see that the proposed pseudo future modeling approach method is very important in both the default and the deeper configuration of our transformer-based models, demonstrating its effectiveness on LST tasks. We also find that the parameter sharing mechanism is more effective in deeper models. 
This suggests that the parameter sharing mechanism may provide strong regularization effects and make it easier to train deeper transformer models. \subsection{More Sequence Transduction Tasks} Following previous work~\cite{ribeiro2018local,awasthi2019parallel}, we demonstrate the effectiveness of the proposed approaches on two additional local sequence transduction tasks: spell correction and OCR correction. We employ a two-layer transformer and a four-layer transformer as the backbone models for comparison and evaluate the compared models on the Twitter spell correction dataset and the Finnish OCR dataset described as follows: \paragraph{Spell correction} We use the Twitter spell correction dataset~\cite{aramaki2010typo}, which consists of 39172 pairs of original and corrected words obtained from Twitter. We use the same train-dev-valid split as~\cite{ribeiro2018local} (31172/4000/4000). We tokenize on characters and our vocabulary comprises the 26 lower-cased letters of English. \paragraph{OCR correction} We use the Finnish OCR data set by~\cite{silfverberg2016data}, comprising words extracted from an Early Modern Finnish corpus of OCR-processed newspaper text. We use the same train-dev-test splits as provided by~\cite{silfverberg2016data}. We tokenize on characters and our vocabulary comprises all the characters seen in the training data.
\begin{table}[!t] \begin{center} \begin{tabular}{lcc} \hline\hline \textbf{Method} & \textbf{Spell} & \textbf{OCR} \\ \hline \bf LSTM-soft & 46.3 & 79.9 \\ \bf LSTM-hard & 52.2 & 58.4 \\ \bf \citet{ribeiro2018local} & 54.1 & 81.8 \\ \bf PIE & 67.0 & 87.6 \\ \hline \multicolumn{3}{c}{\textbf{2 Layers}} \\ \hline \bf Transformer & 67.6 & 84.5 \\ \bf Ours & \bf 69.2 & \bf 88.7 \\ \hline \multicolumn{3}{c}{\textbf{4 Layers}} \\ \hline \bf Transformer-4 layers & 67.1 & 85.4 \\ \bf Ours-4 layers & \bf 70.4 & \bf 89.6 \\ \hline\hline \end{tabular} \end{center} \caption{\label{otherlst} The performance (accuracy) of different compared models on the spell and OCR correction tasks.} \end{table} \paragraph{Results} The result is shown in Table \ref{otherlst}. We can see that the proposed method is able to significantly outperform the vanilla transformer-based models, as well as the LSTM and sequence labeling based LST baselines in both settings where either the number of parameters in the model is the same or the inference latency is the same, which is consistent with the result in the GEC task. Our deeper model variant yields the state-of-the-art results in both tasks. This demonstrates the effectiveness of the proposed approach and suggests that our approach is versatile for different LST tasks. \section{Related work} \paragraph{Local Sequence Transduction}~\citet{ribeiro2018local} proposed to formulate LST tasks as sequence labeling problems by first predicting insert slots in the input sequences using learned insertion patterns and then using a sequence labeling task to output tokens in the input sequences or a special ``delete'' token. \citet{awasthi2019parallel,malmi-etal-2019-encode} propose to predict output edit operations including word transformations and further improve the performance of sequence labeling based LST models. 
Their approaches require massive engineering efforts to design an appropriate set of word transformations, which makes it non-trivial to generalize to other LST tasks. Also, the sequence labeling formulation lacks the flexibility of seq2seq models because it can only make local edits, which is demonstrated by their inferior performance. More recently, \citet{li2020towards} use BERT to perform local sequence transduction with the method proposed by \citet{zhou2019bert}. However, their method mainly suits for the cases where the length of the output sentence is unchanged. \paragraph{Bidirectional Decoding and Future Modeling} Previous work~\cite{sennrich2016edinburgh,deng2018alibaba} investigate using right-to-left models to re-rank the generated sentences. More recently, \citet{xia2017deliberation} and \citet{zheng2018modeling} propose two-pass decoding to model the right-side information, while~\citet{zhang2018asynchronous,zhang2019synchronous} use bidirectional beam search algorithms to generate the output sequences. These approaches integrate the right side context indirectly and introduce substantial computational overhead during inference, which is undesirable for real-world applications. \paragraph{Parameter Sharing in Transformer} Several parameter sharing mechanisms have been explored in transformer-based models. ALBERT~\cite{lan2019albert} shares all encoder layers to reduce the number of parameters in the pretrained language model. \citet{xia2019tied} propose to share the encoder and the decoder in transformer-based machine translation models. The performance gain in their setting is relatively small, which may be due to the discrepancy in the input sequences and the attention direction in the encoder and the decoder. 
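To make the pseudo-bidirectional masking strategy described above concrete, the following is a minimal NumPy sketch (my own, not the authors' released code). Keys are the concatenation of the copied input tokens and the generated output tokens; each decoder query position may attend causally to previously generated tokens and, in addition, to the copied input tokens strictly to its right. The exact index offset for the copied future tokens is an assumption.

```python
import numpy as np

def pseudo_bidirectional_mask(n_src, n_tgt):
    """Boolean mask of shape (n_tgt, n_src + n_tgt).

    Entry [i, j] is True if decoder query position i may attend to key j,
    where keys 0..n_src-1 are the copied encoder (input) tokens and keys
    n_src..n_src+n_tgt-1 are the generated decoder tokens.
    """
    mask = np.zeros((n_tgt, n_src + n_tgt), dtype=bool)
    for i in range(n_tgt):
        # Causal part: generated tokens up to and including position i.
        mask[i, n_src : n_src + i + 1] = True
        # Pseudo-future part: copied input tokens to the right of position i.
        mask[i, i + 1 : n_src] = True
    return mask

m = pseudo_bidirectional_mask(n_src=5, n_tgt=3)
```

In a real implementation this mask would be added (as `-inf` on masked positions) to the decoder self-attention logits, so that training remains fully parallel while each position still sees both its generated past and its copied pseudo-future.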
\section{Conclusion} In this paper, motivated by the characteristic of local sequence transduction tasks, we propose pseudo-bidirectional decoding (PBD) to provide approximated future information for transformer-based LST models and share the parameters between the encoder and the decoder of LST models to provide regularization effects while reducing the number of parameters. Our experiments on three LST tasks show that our approach is able to yield consistent improvements upon strong transformer baselines while significantly reducing the number of parameters in the model. \section*{Acknowledgments} We thank the anonymous reviewers for their valuable comments. \bibliographystyle{acl_natbib}
{ "redpajama_set_name": "RedPajamaArXiv" }
9,866
\section{Introduction} Despite tremendous attention, a satisfying theory of generalization in deep learning remains elusive. In light of so many claims about explaining generalization in deep learning, this statement is somewhat controversial. It also raises an important question: \begin{quote} \emph{What does it mean to explain generalization in deep learning?} \end{quote} In this work, we propose empirical methodology to aid in the search for a precise mathematical theory, allowing us to leverage large-scale empirical studies of generalization, like those in recent work \citep{jiang2018predicting,FGM}. Unlike earlier work, however, our proposal rests on the foundation of \emph{robust prediction}, in order to catch out, rather than average out, failures. The dominant approach to studying generalization is the frequentist framework of statistical learning theory. We focus our attention on the simplest setting within supervised classification, where the training data, $S$, are modeled as a sequence of $n$ random variables, drawn i.i.d.\ from a distribution $\mathcal D$ on labeled examples $(x,y)$. In supervised classification, learning algorithms choose a classifier $h_S$, based on the training data $S$. Ignoring important considerations such as fairness, robustness, etc., the key property of a classifier $h$ is its probability of error, or (classification) risk, \[\label{riskdefn} \Risk{\mathcal D}{h} = \Pr_{(x,y) \sim \mathcal D} [ h(x) \neq y ]. \] One of the key questions is why deep learning often produces classifiers with human-level risk in domains that stymied researchers for decades. In this work, we take an empirical perspective and judge theories of generalization by the predictions they provide when tested. In the other direction, any systematic rule for predicting generalization---whether learned or invented---can be thought of as a theory that can be tested.
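As an illustration of the risk defined above, here is a minimal sketch (my own notation, not from the paper) of its empirical counterpart, the fraction of a finite sample that a classifier misclassifies. The toy classifier and data are hypothetical.

```python
import numpy as np

def empirical_risk(h, xs, ys):
    """Fraction of examples misclassified by h: the finite-sample
    estimate of the risk R_D(h) = Pr[h(x) != y] defined above."""
    preds = np.array([h(x) for x in xs])
    return float(np.mean(preds != np.array(ys)))

# Toy example: a threshold classifier on 1-D inputs (made-up data).
h = lambda x: int(x > 0.5)
xs = np.array([0.1, 0.4, 0.6, 0.9])
ys = np.array([0, 0, 1, 0])  # the last example is misclassified
```

Evaluated on held-out data this quantity is the test-set estimate of risk; evaluated on the training sample it is the empirical risk, and the gap between the two is the generalization error the paper discusses.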
We consider families of environments, defined by data distributions, architectural choices, train set sizes, learning algorithms and their tuning parameters, etc. Given a particular family of environments, a strong theory achieves a desired level of precision for its predictions, while depending as little as possible on the particular details of the environments. At one extreme, explanations based on the VC dimension of the zero--one loss class of neural networks would pin the success of deep learning on empirical risk minimization. In practice, these explanations are poor, not just because the ensuing bounds are numerically vacuous for the size of networks and datasets used in practice, but because they fail to correctly predict the effect of changes to network width, depth, etc. At the other extreme, average risk on held-out data (i.e., a test-set bound) provides a sharp estimate of risk, yet the computation to produce this estimate is inextricably tied to every detail of the learned weights and data distribution. Viewing predictors as theories, the test-set bound is essentially silent. Any satisfactory theory of generalization in deep learning must therefore lie between these two extremes. We must necessarily exploit properties of the data distribution and/or learning algorithm, but we must also be willing to trade precision to decrease our dependence on irrelevant details. What dependence on the data distribution and learning algorithm is necessary to explain deep learning? Even taking the data distribution into consideration, the fact that stochastic gradient descent (SGD) and its cousins often perform empirical risk minimization cannot explain generalization \citep{Rethinking17}. There is, however, a picture emerging of overparametrization and SGD conspiring to perform capacity control. In some theoretical scenarios, this control can be expressed in terms of norms. 
At the same time, great strides have been made towards identifying notions of capacity that can be shown to formally control generalization error, $ \Risk{\mathcal D}{h} - \EmpRisk{S}{h}, $ uniformly over $h$ belonging to specially defined classes. (Here $\EmpRisk{S}{h}$ denotes the empirical risk, as estimated by the training data.) Despite this progress, there is still a wide gulf between performance as measured empirically via held-out data and performance as predicted by existing theory. A standard retort is that the constants in most bounds are suboptimal or that the purpose of bounds is to identify qualitative phenomena or inspire the development of new algorithms. Even ignoring the issue of numerically vacuous bounds, many bounds demonstrably fail to account for the right dependencies. As a case in point, recent empirical work \citep{NK19c} identifies state-of-the-art bounds on generalization error that grow with training set size, while the generalization error actually shrinks. Indeed, many bounds are distribution- or data-dependent and so the question of whether they explain generalization in practice is an empirical question. \subsection{Large-scale empirical studies} Recent work proposes large-scale empirical investigations to study generalization \citep{FGM}. (See also \citep{jiang2018predicting}.) While it is becoming more common for theoretical work to present empirical evaluations, among recent empirical studies \citep[etc.]{GR17,neyshabur2018theRole,liao2018surprising,fisher-rao,bartlett2017spectrally,arora2018Compress}, most are limited. One motivation for large-scale empirical studies is to leverage massive computing resources in the pursuit of a scientific challenge that has largely been approached mathematically.
Another motivation is to go beyond simply measuring correlation towards measuring aspects of causation. (Several authors of this work---Dziugaite, Neal, and Roy---have each advocated for this publicly.) Given how influential these proposals by \citet{FGM} may be, we believe they deserve critical attention. (Indeed, recent preprints have already started to integrate their methodology.) \citet{FGM} propose to use Kendall correlation coefficients and independence tests to evaluate a suite of so-called \emph{generalization measures}. Many of these generalization measures are frequentist bounds, though with missing constants or lower-order terms. Others are only loosely inspired by bounds. \citet{FGM} propose to {\em average} evaluation metrics over a range of experimental settings. In contrast, we argue that average performance is \emph{not} a suitable way to measure the strength of a generalization measure\xspace as a theory of generalization. In particular, a satisfying theory of generalization should admit a generalization measure\xspace that offers reasonable predictions of generalization across a range of experimental settings, e.g., those arising from different hyperparameter choices, datasets, etc. A theory---as realized by a generalization measure---is as strong as its weakest component: a satisfying theory cannot simply predict well on average. The study of prediction across a range of environments is the subject of \emph{distributional robustness}~\citep{arjovsky2019invariant,BuhlmannNL}. An extreme form of robustness is obtained when one seeks to predict well in all environments that may arise from all possible interventions to an experimental setting. This extreme form of robustness can be linked to a weak form of causality \citep{BuhlmannNL, meinshausen2018causality}. Crucially, we do not aim for robustness over all possible environments. To achieve some level of generality, useful theories must necessarily have limited scope. 
As we demonstrate in \cref{svmexample}, frequentist generalization bounds can exploit noncausal correlations that can be seen to stand in for unknown properties of the data distribution because of properties of the learning algorithm. Such bounds have an important role to play in our search for a theory of generalization in deep learning, but we cannot expect them to explain generalization under interventions that upset these noncausal correlations. Bounds that depend on properties of the data distribution have an important role to play, though one hindered by the statistical barriers of unknown distributions, accessible only through a limited pool of data. More general theories (that minimize this dependence) can pinpoint key data properties. \paragraph{Contributions.} Theories of generalization yield predictions: How should we evaluate these predictions empirically? In this work, we adopt the proposal of \citep{FGM} to exploit large-scale empirical studies, but critique the paper's guidance that we should evaluate the predictions of these theories in much the same way that we evaluate typical ML benchmarks. Based on the specific scientific goals of understanding generalization, we propose that the framework of distributional robustness is more appropriate, and suggest how to use it to evaluate generalization measures. Besides theoretical contributions, we make empirical contributions: We collect data from thousands of experiments on CIFAR-10 and SVHN, with various values for width, depth, learning rate, and training set size. We adopt the ranking task and sign-error loss introduced by \citep{FGM}, but use the collected data to perform a \emph{robust} ranking evaluation, across the 24 candidate generalization measures on over 1,600,000 pairs of experiments. We find that \emph{no existing complexity measure has better robust sign-error than a coin flip}.
Even though some measures perform well on average, every single measure suffers from 100\% failure in predicting the sign change in generalization error under some intervention. This observation is not the end of the evaluation, but the beginning. To better understand the measures, we evaluate them in families of environments defined by interventions to a single hyperparameter. We find: (i) most, though not all, measures are good at robustly predicting changes due to training set size; (ii) robustly predicting changes due to width and depth is hard for all measures, though some PAC-Bayes-based measures are robust across a large fraction of the environments tested; (iii) norm-based measures outperform other measures under learning-rate interventions. By focusing on robust evaluation, we force ourselves to dig into the data to uncover the cause of failures---failures which might otherwise go undiscovered by looking at average performance. As such, robustness provides better guidance to the scientific challenge of explaining generalization. The rest of this paper is organized as follows. In \cref{svmexample}, we present a concrete example of a frequentist analysis of a learning algorithm, which reiterates some of the high-level points above. We then introduce distributional robustness in \cref{sec:predstab} and describe how the framework can be used to analyze large-scale empirical studies of generalization measures in \cref{sec:gen-via-robust}. We detail our experimental setting in \cref{sec:methods} and summarize our experimental findings in \cref{sec:empstudy} before ending with a discussion. \section{A Motivating Example: SVMs, Norm-based Capacity Control, and the Role of Causality} \label{svmexample} In this section, we study support vector machines (SVMs) to demonstrate some of the challenges in understanding and explaining generalization error.
This section owes much to \citep{shalev2014understanding}. The intuition extracted from this simple model motivates our methodological choices for the rest of this paper. In particular, we see that frequentist generalization bounds derived under one set of conditions may rely on quantities that do not have a direct causal relationship with the generalization error under other conditions. This highlights that frequentist bounds can be expected to have limits to their predictive powers under intervention, but also that asking for causal measures of generalization may rule out measures that nonetheless work well in a range of scenarios. Consider linear prediction, based on an embedding of inputs into $\mathbb{R}^p$. As usual, we index the space of linear predictors by nonzero vectors $\bm{w} \in \mathcal H = \mathbb{R}^p$, where the decision boundary associated to $\bm{w}$ is the hyperplane $\{ \bm{x} \in \mathbb{R}^p : \ip{\bm{w}}{\bm{x}} = 0 \}$ orthogonal to $\bm{w}$, passing through the origin. Assuming labels take values in $\{{\pm 1}\}$, the zero--one classification loss of the predictor $\bm{w}$ on a labeled input $(\bm{x},y)$ is $\ell(\bm{w},(\bm{x},y)) = \frac 1 2 (1 - y \sign(\ip{\bm{x}}{\bm{w}}))$. Note that the loss is invariant to the magnitude of the vector $\bm{w}$, and so the set of hyperplanes can be put into correspondence with the unit vectors $\overline{\ww} \defas \bm{w}/\norm{\bm{w}}$. We focus on the realizable setting, i.e., data are assumed to be labeled according to some hyperplane. In this case, every finite data set admits a positive cone of empirical risk minimizers.
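To make the scale invariance of the zero--one loss concrete, here is a standalone numeric sketch (the predictor and input are made up; the loss is $0$ exactly when the predicted sign agrees with the label $y \in \{\pm 1\}$):

```python
import numpy as np

def zero_one_loss(w, x, y):
    # Zero-one loss of the linear predictor w on (x, y) with y in {-1, +1}:
    # 0 on a correct sign prediction, 1 on a mistake.
    return 0.5 * (1 - y * np.sign(x @ w))

w = np.array([2.0, -1.0])
x = np.array([1.0, 3.0])   # <x, w> = 2 - 3 = -1, so the prediction is -1
assert zero_one_loss(w, x, -1) == 0.0
assert zero_one_loss(w, x, +1) == 1.0

# Invariance to the magnitude of w: rescaling does not move the decision
# boundary, so only the unit vector w / ||w|| matters for the loss.
for c in (0.1, 1.0, 42.0):
    assert zero_one_loss(c * w, x, -1) == zero_one_loss(w, x, -1)
```

This invariance is why the norm of a learned weight vector cannot, by itself, causally determine risk---a point developed below.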
Let $S=\{(\bm{x}_i,y_i)\}_{i \in [n]}$ be $n$ i.i.d.\ labeled data in $\mathbb{R}^p \times \{{\pm 1}\}$, and let $\ww_S$ be chosen according to the SVM rule: $\min_{\bm{w}} \|\bm{w}\|^2$, subject to the constraint that $y_i \ip{\bm{x}_i}{\bm{w}} \ge 1$ for all $i \in [n]$. The constraint demands that, for each data point, the functional margin, $y_i \ip{\bm{x}_i}{\ww_S}$, be at least one. Thus the hyperplane $\overline{\ww_S}$ indeed separates the data and achieves zero empirical risk. However, among the vectors that satisfy the margin constraint, $\ww_S$ has the smallest Euclidean norm. Geometrically, the hyperplane $\overline{\ww_S}$ is that with the largest \emph{geometric} margin, $\min_i y_i \ip{\bm{x}_i}{\overline{\ww_S}}$. Why does the SVM classifier generalize? The best explanation may depend on the situation. The VC dimension of the space of $p$-dimensional linear predictors is $p$, and so, with high probability over the sample $S$, uniformly over all separating hyperplanes $\bm{w}$, the difference between the empirical risk and risk is $\smash{\tilde O}(p/n)$. If $n \gg p$, then this reason alone suffices to explain strong performance. The details of the SVM rule are irrelevant beyond it returning an empirical risk minimizer. Suppose that we consider a family of embeddings of growing dimensionality and find that the SVM rule generalizes equally well across this family. The VC theory cannot explain this. A theory based on the maximum-margin property of the SVM rule may. To that end, assume there exists a hyperplane $\ww_*$ such that $y \ip{\ww_*}{\bm{x}} \ge 1$ with probability one over the pair $(\bm{x},y)$. To fix a scale, assume $\norm{\bm{x}} \le \rho$ with probability one. By exploiting strong convexity, and the fact that $\norm{\ww_S} \le \norm{\ww_*}$, one can show that the risk of $\ww_S$ is bounded by $\smash{\tilde O}(\rho \norm{\ww_*} / n)$. Note that this bound has \emph{no explicit dependence} on the dimension $p$.
Instead, it depends on the quantity $\rho \norm{\ww_*}$, whose reciprocal has a geometric interpretation: the distance between the separating hyperplane $\overline{\ww_*}$ and the nearest data point, normalized by the radius of the data. Therefore, this analysis trades dependence on dimension for dependence on the data distribution's density near the decision boundary. When $\rho \norm{\ww_*} \ll n$, SVM's inductive bias is sufficient to explain generalization, even if $p \gg n$. In fact, we can always build a bound based on the norm of the learned weights: with high probability, for \emph{every} ERM $\ww_S$, the risk is bounded by $\smash{\tilde O}(\rho \norm{\ww_S} / n)$. One might prefer such a bound since $\norm{\ww_*}$ is often presumed unknown. Even if this bound matches risk empirically, it has a strange property: The bound depends on the norm $\|\ww_S\|$ even though risk is \emph{independent of norm}. Thus, we cannot expect the bound to remain valid if we intervene on the norm after training, e.g., to test for a causal relationship between norms and risk. Norms are the effect of the data and SVM interacting. This example highlights that there may be multiple overlapping explanations depending on the range of environments in which one wants to understand generalization. We cannot, however, expect a theory to be robust to arbitrary interventions. Identifying a theory with limitations may lead us to more general ones, once we understand those limitations. All of this motivates a careful design of experimental methodology, in order to navigate these tradeoffs. In particular, we demand that a theory is \emph{robustly predictive} of generalization over a carefully designed family of \emph{environments}. \section{Preliminaries on Robust Prediction} \label{sec:predstab} In this section, we introduce the framework of robust prediction, borrowing heavily from \citet{BuhlmannNL,PetersEtAl2016,RothEtAl}.
In the next section, we cast the problem of studying generalization into this framework. Consider samples collected in a family $\mathcal F$ of different \defn{environments}. In particular, let $(\Omega,\mathcal A)$ denote a common (measurable) sample space and, in each of these environments $\mathcal{E} \in \mathcal F$, assume the data we collect are drawn i.i.d.\ from a distribution $\diste$ on $\Omega$. We will think of environments as representing different experimental settings, interventions to these experiments, sub-populations, etc. For example, each sample might be a covariate vector and binary response, i.e., $\Omega = \mathbb{R}^p \times \{{0,1}\}$. A well-studied setting is one where the distributions $\diste$ all agree on the conditional mean of the response given the covariates (i.e., the regression function), but disagree on the distribution of the covariates. Prediction is formalized by a \defn{loss function}. In particular, a loss function for a set $\Phi$ of predictors is a map $\ell : \Phi \times \Omega \to \mathbb{R}$. The \defn{error} or \defn{risk} (of a predictor $\phi \in \Phi$ in an environment $\mathcal{E} \in \mathcal F$) is then the expected loss, $\mathbb{E}_{\omega \sim \diste}[\ell(\phi,\omega)]$. If we focus on one environment $\mathcal{E} \in \mathcal F$, it is natural to seek a predictor $\phi \in \Phi$ with small risk \emph{for that individual environment}. However, if we care about an entire family $\mathcal F$ of environments, we may seek a predictor that works well simultaneously across $\mathcal F$. In the setting of \defn{distributional robustness}, the performance of a predictor relative to a family $\mathcal F$ of environments is measured by the \defn{robust error (or risk)} \[\label{eq:robustrisk}\textstyle \sup_{\mathcal{E} \in \mathcal F} \mathbb{E}_{\omega \sim \diste} [ \ell(\phi,\omega)]. \] The goal of robust prediction is to identify a predictor with small robust risk.
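The robust risk is just a worst case over per-environment risks. The following sketch uses an invented regression family with squared loss and covariate-shift environments (none of this is from our experiments); a predictor that tracks the shared mechanism keeps the robust risk small, while a naive predictor does not:

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_risk(predictor, environments, loss, n=50_000):
    # Empirical analogue of sup_E E_{w ~ P_E}[loss(predictor, w)]:
    # estimate the risk in each environment and report the worst case.
    return max(float(np.mean(loss(predictor, env(n)))) for env in environments)

# Hypothetical environments: each shifts the covariate distribution,
# while the mechanism y = x + noise is shared across all of them.
def make_env(shift):
    def env(n):
        x = rng.normal(loc=shift, size=n)
        y = x + 0.1 * rng.normal(size=n)
        return x, y
    return env

def loss(predictor, data):
    x, y = data
    return (predictor(x) - y) ** 2

envs = [make_env(s) for s in (-1.0, 0.0, 3.0)]
print(robust_risk(lambda x: x, envs, loss))        # small in every environment
print(robust_risk(lambda x: 0.0 * x, envs, loss))  # worst case blows up under the shift
```

The identity predictor matches the invariant mechanism and so is robust; the constant predictor can look acceptable in the unshifted environment yet fails in the worst one.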
\paragraph{Connection to causality.} If taken to an extreme, then robust prediction is closely related to learning causality. Specifically, suppose that $(\icovar,\iresp)$ is induced by a common causal model $\iresp:=f(\icovar).$ If $\mathcal F$ represents all possible interventions on subsets of $\icovar$, then the causal predictor $f(\icovar)$ also minimizes the robust risk. See \citep{rojas2018invariant,BuhlmannNL} for more details. \section{Studying Generalization via Distributional Robustness} \label{sec:gen-via-robust} We are interested in understanding the effects of changes to a complex machine learning experiment, with a focus on effects on generalization. In this section, we cast this problem into the framework of distributional robustness. In order to study generalization, we view theories of generalization as yielding predictors for generalization under a range of experimental settings. We use the term \defn{generalization measure} to refer to such predictors. \subsection{Experimental Records and Settings} In the notation of \cref{sec:predstab}, points $\omega \in \Omega$ represent possible samples. In our setting, each sample represents a complete record of a machine learning experiment. An environment $\mathcal{E}$ specifies a distribution $\diste$ on the space $\Omega$ of complete records. In the setting of supervised deep learning, a complete record of an experiment would specify hyperparameters, random seeds, optimizers, training (and held out) data, etc. Ignoring concerns of practicality, we assume the complete record also registers every conceivable derived quantity, not only including the learned weights, but also the weights along the entire trajectory, training errors, gradients, etc.
Formally, we represent these quantities as random variables defined on the probability spaces $(\Omega,\mathcal A,\diste)$, $\mathcal{E} \in \mathcal F$. Among these random variables, there is the empirical risk $\hat R$ and risk $R$ of the learned classifier, and their difference, $G$, the generalization error/gap. Each distribution $\diste$ encodes the relationships between the random variables. Some of these relationships are common to all the environments. E.g., the generalization error $G$ always satisfies $G = R - \hat R$, and the empirical risk $\hat R$ is always the fraction of incorrectly labeled examples in the training data. Some relationships may change across environments. E.g., in a family $\mathcal F$ designed to study SGD, changes to, e.g., the learning rate, affect the distribution of the trajectory of the weights. In machine learning, environments arise naturally from learning algorithms applied to benchmarks under standard hyperparameter settings. In order to evaluate theories that explain the effect of, e.g., hyperparameter changes, we also consider environments arising from perturbations/interventions to standard settings. E.g., we may modify the hyperparameters or data, or intervene on the trajectory of weights in some way. Every perturbation $\mathcal{E}$ is captured by a different distribution $\diste$. With respect to a family of environments $\mathcal F$, a generalization measure is preferred to another if it has smaller robust error (\ref{eq:robustrisk}). In \cref{sec:empstudy,sec:methods}, we restrict our attention to $\mathcal F$ induced by varying hyperparameters, data distributions, training datasets, and dataset sizes. In this work, we do not intervene on the dynamics of SGD. However, intervening on the trajectory induced by SGD might be an interesting future direction that could allow one to tease apart the role of implicit regularization. 
\subsection{Prediction tasks} \label{sec:predtask} The predictions associated with a theory of generalization are formalized in terms of a map $C : \Omega \to \mathbb{R}$, which we call a \defn{generalization measure}. We will study ad hoc generalization measures as well as ones derived from frequentist bounds. In both cases, we are interested in the ability of these measures to predict changes in generalization error. One important aspect of a generalization measure is the set of (random) variables (i.e., covariates) it depends on. Indeed, there is an important difference between the task of predicting generalization using only the architecture and the number of training examples and the task of predicting it using also, e.g., the learned weights. Formally, let $V_1,\dots,V_k$ be a finite collection of random variables. A generalization measure $C$ is \defn{$\sigma(V_1,\dots,V_k)$-measurable} if there exists a map $g$ such that $C(\omega) = g(V_1(\omega),\dots,V_k(\omega))$ for all $\omega \in \Omega$. We may prefer one generalization measure to another on the basis of the covariates it uses. As a simple example, if a generalization measure offers comparable precision to another measure, but is measurable with respect to a strict subset of variables, then this increased generality may be preferred. \paragraph{Goals of the prediction.} We are broadly interested in two types of prediction tasks, distinguished by whether we train one or two networks. In \emph{coupled-network\xspace} experiments, we train two networks, such that they share all hyperparameters except one. We are interested in trying to predict which network has smaller generalization error. Some of the generalization measures we consider are based on generalization bounds from the literature. Given that generalization bounds are often numerically vacuous, it would not be informative to evaluate their predictions directly at this stage. It is, however, reasonable to evaluate whether they capture the right dependencies.
Indeed, one desirable property of evaluating generalization measures by the rankings they induce in coupled-network\xspace experiments is that the rankings are invariant to monotonically increasing transformations of the measure. In \emph{single-network\xspace} experiments, we try to predict the numerical value of the generalization error for that network based on a linear or affine function of a generalization measure. Generalization measures that perform well in such a task would serve as accurate predictors of generalization, and could be used for, e.g., model selection. However, such measures would not necessarily be useful in generalization bounds. We describe the experimental details and results of \emph{single-network\xspace} experiments in \cref{sec:regression} due to space limitations. \section{Experimental methodology} \label{sec:methods} In coupled-network\xspace experiments, we evaluate the \emph{ranking} that the generalization measure induces on trained networks. The approach we describe here is a robust analogue of the Kendall-$\tau$-based approach advocated by \citet{FGM}.\footnote{See Appendix~\ref{app:jiang-ic} for a comparison to \citet{FGM}'s conditional-independence-based approach.} This change is deceptively minor. We highlight the very different conclusions drawn using our methodology in \cref{sec:empstudy}. \paragraph{Evaluation criterion.} In more detail, recall that a coupled-network\xspace environment $\mathcal{E}$ determines a distribution $\diste$ on pairs $(\omega,\omega')$ of {\em variable assignments}, each representing a full record of an experiment. We evaluate a generalization measure\xspace, $C$, and the realized generalization error, $G$, on both assignments, $\omega$ and $\omega'$. We use the ranking of $C$ values to predict the ranking of $G$ values.
Then, the \defn{sign-error of a generalization measure\xspace $C$} for this task\footnote{In order to match \citep{FGM}, in all of our experiments, we train to a fixed level of cross entropy loss that also yields zero training error. Since $G = R - \hat{R}$, and $\hat{R} =0$, a prediction task that depends on changes in generalization error $G$ is equivalent to one that depends on changes in risk $R$.} is given by \[\label{eqn:sign-error}\textstyle \SignE{\diste}{C} = \frac 1 2 \mathbb{E}_{(\omega,\omega') \sim \diste} \big[1 - \sign \big(G(\omega')-G(\omega) \big)\cdot\sign\big(C(\omega')-C(\omega) \big) \big]. \] Given a family $\mathcal F$ of coupled-network\xspace environments, the \defn{robust sign-error of a generalization measure\xspace $C$} is $\sup_{\mathcal{E} \in \mathcal F} \SignE{\diste}{C}$. The $\Psi$ summary proposed by \citet{FGM} is analogous to the average sign-error, $|\mathcal F|^{ -1} \sum_{\mathcal{E} \in \mathcal F} \SignE{\diste}{C}$.\footnote{We apply a $\frac{1 - \Psi}{2}$ transformation to obtain values in $\left[0, 1\right]$, where $1$ is achieved if $\Psi = -1$ and $0$ if $\Psi = 1$.} In our experiments, we use a modification of the loss in \cref{eqn:sign-error} in order to account for Monte Carlo variance in empirical averages. We use a weighted empirical average, where the weight for a sample $(\omega,\omega')$ is calculated based on the difference in generalization error $\abs{G(\omega)-G(\omega')}$. We discard samples for which the difference in generalization error is below the Monte Carlo noise level. In effect, we control the precision to which we want our generalization measure to predict changes: when the difference is insignificant, we do not predict the sign. See \cref{app:montecarlo} for the details on how we use the Monte Carlo variance to choose what environments are being considered. Other details of data collection are described in \cref{sec:expdetails}. 
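The weighted, noise-filtered estimator just described can be sketched as follows (variable names and the form of the threshold are illustrative, not our actual pipeline):

```python
import numpy as np

def sign_error(G, Gp, C, Cp, noise_level=0.0):
    # Estimate the sign-error of a generalization measure C from paired
    # experiments: G, Gp hold generalization errors and C, Cp the measure's
    # values under the two settings of the environment. Pairs whose change
    # in generalization error is below noise_level are discarded, and the
    # remaining pairs are weighted by |G' - G|.
    dG = np.asarray(Gp) - np.asarray(G)
    dC = np.asarray(Cp) - np.asarray(C)
    keep = np.abs(dG) > noise_level
    dG, dC = dG[keep], dC[keep]
    w = np.abs(dG)                                  # weight by the size of the change
    err = 0.5 * (1 - np.sign(dG) * np.sign(dC))     # per-pair sign-error
    return float(np.sum(w * err) / np.sum(w))

# Toy check: a measure that always moves with G has sign-error 0,
# and one that always moves against it has sign-error 1.
G  = np.array([0.10, 0.20, 0.05])
Gp = np.array([0.15, 0.10, 0.25])
assert sign_error(G, Gp, C=G, Cp=Gp) == 0.0
assert sign_error(G, Gp, C=-G, Cp=-Gp) == 1.0
```

When every pair falls below the noise level, no estimate is produced; in our analysis such environments are simply discarded.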
\paragraph{Environments.} In our experiments, variable assignments ($\omega$) are pairs $(H,\sigma)$ of hyperparameter settings and random seeds, respectively. The hyperparameters are: learning rate; neural network width and depth; dataset (CIFAR-10 or SVHN); and training set size. (See \cref{sec:expdetails} for ranges.) Each environment $\mathcal{E}$ is a pair $(H,H')$ of hyperparameter settings that differ in the setting of \emph{one} hyperparameter (e.g., depth changes from $2 \rightarrow 3$ between $H$ and $H'$ and the remaining hyperparameters are identical). The distribution $\diste$ for a pair $\mathcal{E} = (H,H')$ is the distribution of $(\omega, \omega')=((H,\sigma),(H',\sigma'))$, where the random seeds $\sigma, \sigma'$ are chosen uniformly at random. That is, the expectation in \cref{eqn:sign-error} is taken only over a random seed. \section{Empirical Findings} \label{sec:empstudy} \begin{figure} \centering \includegraphics[width=\textwidth]{figures/couplednet/figure__signerror_cdf_per_hp__ds_cifar10_svhn__tau_0__mw_12.000000_cdf_per_hp.pdf} \caption{Cumulative distribution of the sign-error across subsets of environments for each generalization measure. The measures are ordered based on the mean across `All' environments. A completely \emph{white} bar indicates that the measure is perfectly robust, whereas a \emph{dark blue} bar indicates that it completely fails to be robust.} \label{fig:ranking_cifar10_svhn_cdf_per_param} \end{figure} In \cref{fig:ranking_cifar10_svhn_cdf_per_param}, we present a visualization of 1,600,000 ranking evaluations on 24 generalization measures derived from those used in \citep{FGM}. A full description of these measures can be found in \cref{app:exp-measures}. Motivated by the discussion in the introduction, we seek strong predictive theories: generalization measures that increase monotonically with generalization error and for which this association holds across a range of environments.
Such a measure would achieve zero robust sign-error (\cref{eqn:sign-error}). As described in \cref{sec:methods}, each environment contains a pair of experiments that share all hyperparameters but one (learning rate, depth, width, train set size, dataset). In each environment, we calculate the weighted empirical average version of the sign-error over 100 samples from $\diste$ (10 network runs with different seeds per $\omega$). Note that we discard environments where too many samples have differences in generalization error below the Monte Carlo noise level (see \cref{sec:app-montecarlo-emp} for details). This is in contrast with the protocol proposed by \citet{FGM}, where such noise is not filtered and can significantly undermine the estimation of sign-error (see \cref{sec:app-montecarlo-ablation}). In the remainder of this section we interpret the results of \cref{fig:ranking_cifar10_svhn_cdf_per_param}, highlight some significant shortcomings of the generalization measures, and point out cases where these shortcomings would have been obscured by non-robust, average-based summary statistics like those used by \citet{FGM}. \paragraph{How to read \cref{fig:ranking_cifar10_svhn_cdf_per_param}.} This figure presents the empirical cumulative distribution function (CDF) of the sign-error across all environments and generalization measure\xspace{}s. \textbf{Every row} shows the CDF over a subset of environments (e.g., those where only depth is varied). The `All' row shows the same but over all environments. The number of environments in each subset is given on the left of each row. \textbf{Each bar} in the figure is the empirical CDF of all sign-errors in the set of environments. A bar's y-axis corresponds to the range of possible sign-errors and the internal coloring depicts the distribution (starting at the median value for improved readability).
We annotate the bars with the max (i.e., robust sign-error; \textcolor{limegreen}{\textbf{green}}), the 90th percentile (\textcolor{magenta}{\textbf{magenta}}), and the mean (\textcolor{orange}{\textbf{orange}}). The latter statistics do not measure robustness over all environments. However, a low 90th percentile value means the measure would have had low empirical robust sign-error restricting to some $90\%$ of the environments tested. If the max is at 1.0, then there exists at least one environment where the measure fails to predict the sign of the change in generalization on all random seeds. If the max is below 0.5, then the measure is more likely than not to predict the correct sign on {\em all environments} in the set. \emph{Identifying subfamilies in which a measure is robust is one of our primary objectives.} \paragraph{No measure is robust.} As illustrated in the `All' row, for every one of the 24 measures, there is at least one environment in which the measure \emph{always incorrectly predicts} the direction of change in generalization. Nonetheless, some measures have low robust error over large fractions of environments, as reflected by the 90th percentiles of the sign-error distributions. Notice how the average-based summaries proposed by \citet{FGM} do not reflect robustness, which implies their inability to detect the causal associations that they seek. Given these poor results, we must dig deeper to understand the merits and shortcomings of these generalization measures. Therefore, we study their performance in natural subfamilies $\mathcal E \subseteq \mathcal F$ of environments. Our analyses of the `Train Size', `Depth', and `Width' rows below are examples of this. While no measure is robust across the CIFAR-10 and SVHN datasets considered here, we find measures that are quite robust over a $90\%$ fraction of environments for SVHN only (see \cref{app:couplednet-add-results-svhn}).
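The three annotations are simple functionals of the per-environment sign-errors; the sketch below (with hypothetical numbers) shows why the mean can look excellent while the robust summary reveals a complete failure:

```python
import numpy as np

def summarize(sign_errors):
    # Per-measure summaries annotated on each bar of the figure:
    # the robust sign-error (max over environments), the 90th percentile,
    # and the mean.
    s = np.asarray(sign_errors, dtype=float)
    return {"robust": float(s.max()),
            "p90": float(np.percentile(s, 90)),
            "mean": float(s.mean())}

# Hypothetical sign-errors over 100 environments: perfect in 95 of them,
# a total failure in the other 5.
errs = [0.0] * 95 + [1.0] * 5
out = summarize(errs)
assert out["robust"] == 1.0             # fails completely somewhere
assert out["p90"] == 0.0                # ...yet looks perfect up to the 90th percentile
assert abs(out["mean"] - 0.05) < 1e-12  # ...and excellent on average
```

This is precisely the gap between average-based summaries and the robust summary that the figure is designed to expose.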
\paragraph{Robustness to train set size changes is not a given.} In the `Train size' row, most measures correctly predict the effect of changing the train set size. (In general, generalization error decreases with train set size.) It may seem a foregone conclusion that a bound of the form $\smash{\tilde O}(\sqrt {c/n})$ would behave properly, but, for most of these measures, the complexity term $c$ is a random variable that can grow with more training data. In fact, while many measures do achieve a low robust sign-error, some measures fail to be robust. In particular, some bounds based on Frobenius norms (e.g., \texttt{prod.of.fro}; \cref{app:exp-measures-frobenius} and \citep{Ney1503}) increased with train set size in some cases. Such corner cases arose mostly for shallow models (e.g., depth 2) with limited width (e.g., width 8) and were automatically identified by our proposed method. Note that the same finding was recently uncovered in a bespoke analysis \citep{NK19c}, and we may have missed this looking only at average sign-errors, which are usually low. \begin{wrapfigure}{R}{0.35\textwidth} \centering \includegraphics[width=0.95\linewidth, trim=0cm .1cm 0cm .2cm, clip]{figures/couplednet/figure_triangle_cdf__ds_cifar10_svhn__tau_0__mw_12.000000_gm_pacbayes.mag.flatness_hp_hp.model_depth} \caption{CDFs of sign-errors separated by environments where depth varies between two values for \texttt{pacbayes.mag.flatness}. The red \textcolor{red}{X} indicates that no environments remained after accounting for Monte Carlo noise.} \label{fig:trianglecdf} \end{wrapfigure} \paragraph{Robustness to depth.} In the `Depth' row, we depict robust sign-error for interventions to the depth. Again, robust sign-error is maxed out for every measure. Digging deeper, these failures are not isolated: many measures actually fail in most environments. However, there are exceptions: a few measures based on PAC-Bayes analyses show better performance in some environments.
In \cref{fig:trianglecdf}, we dig into the performance of \texttt{pacbayes.mag.flatness} (\cref{app:exp-measures-flatness}) by looking at the subset of environments where it performs well (e.g., varying depth 3 $\to$ 4), fails but shows signs of robustness (e.g., 3 $\to$ 6), completely fails (e.g., 4 $\to$ 5), and those where a conclusion cannot be reached (e.g., 5 $\to$ 6). Looking into the data, we found that almost all environments where the measure fails are from the CIFAR-10 dataset, where the smaller networks we test suffer from significant overfitting. This illustrates how our proposed methodology can be used to zero in on the limited scope where a measure is robust. \paragraph{Robustness to width is surprisingly hard.} In the `Width' row, all measures have robust sign-error close to 1. Looking into the data, we discover that generalization error changes very little in response to interventions on width because the networks are all very overparametrized. In fact, of the 4,000 available width environments, only 328 remain after accounting for Monte Carlo noise. \paragraph{Comparison to \citet{FGM}.} Our contribution is primarily a methodological refinement to the proposals in \citep{FGM}. We describe how to discover failures of generalization measures in specific environments by looking at worst-case rather than average performance. We note that there are several reasons that even our \emph{average-case} results are not directly comparable with those in \citep{FGM}. First, their analysis considers the CIFAR-10 and SVHN datasets in isolation, whereas we combine models trained on both datasets. Second, they do not account for Monte Carlo noise, which we found to significantly alter the distribution of sign-errors (see \cref{sec:app-montecarlo-ablation}). This is important since we found that many environments had to be discarded due to high noise (e.g., only 8.2\% of the width environments remain after filtering out noise in our analysis).
Third, the hyperparameters and ranges that they consider are different from ours and, consequently, both studies look at different populations of models. For example, the majority of models in \citep{FGM} use dropout, whereas our models do not. Such differences can alter how generalization measures and gaps vary in response to interventions on some hyperparameters and lead to diverging conclusions. For instance, in our results, no measure has an average-case performance much better than a coin-flip in the `Depth' environments for CIFAR-10, while \citet{FGM} find measures that perform well in this context. Nevertheless, there are some general findings that persist across both studies; for instance, we see the good average-case performance of the path-norm (\cref{app:exp-measures-path}) and PAC-Bayes-flatness-based (\cref{app:exp-measures-flatness}) measures in contrast to the poor performance of spectral measures (e.g., \texttt{prod.of.spec}; \cref{app:exp-measures-spectral}). We also find more specific similarities, such as the poor average-case performance of most measures in `Width' environments for CIFAR-10 (\cref{app:couplednet-add-results-cifar10}), in contrast to the better performance of \texttt{path.norm} (\cref{app:exp-measures-path}), \texttt{path.norm.over.margin} (\cref{app:exp-measures-path}), and \texttt{pacbayes.mag.flatness} (\cref{app:exp-measures-flatness}). \section{Discussion} The quest to understand and explain generalization is one of the key scientific challenges in deep learning. Our work builds on recommendations in \citep{FGM} to use large-scale empirical studies to evaluate generalization bounds. At the same time, we critique some aspects of these recommendations. We feel that the proposed methodology in \citep{FGM} based on taking averages of sign-errors (or independence tests, which we have not pursued) can obscure failures.
Indeed, for a long time, empirical work on generalization has not been systematic, and as a result, claims of progress outpace actual progress. Based on an understanding of the desired properties of a theory of generalization, we propose methodology that rests on the foundation of distributional robustness. Families of environments define the range of phenomena that we would like the theory to explain. A theory is then only as strong as its worst performance in this family. In our empirical study, we demonstrated how a family can be broken down into subfamilies to help identify where failures occur. While the present work focused on the analysis of existing measures of generalization, future work could build on the robust regression methodology of Appendix~\ref{sec:regression} and attempt to formulate new robust measures via gradient-based optimization. The development of benchmarks and quantitative metrics has been a boon to machine learning. We believe that methodology based on robustness with carefully crafted interventions will best serve our scientific goals. \section*{Broader Impact} Our work aims to sharpen our understanding of generalization by improving the way that we evaluate theories of generalization empirically. The proposed methodology is expected to aid in the quest to understand generalization in deep neural networks. Ultimately, this could lead to more accurate and reliable models and strengthen the impact of machine learning in critical applications where accuracy must be predictable. We believe that this work has no direct ethical implications. However, as with all advances to machine learning, long-term societal impacts depend heavily on how machine learning is used. \section*{Funding Sources} LW was supported, in part, by an NSERC Discovery Grant. DMR was supported, in part, by an NSERC Discovery Grant, Ontario Early Researcher Award, and a stipend provided by the Charles Simonyi Endowment. 
This research was carried out while GKD and DMR participated in the Special Year on Optimization, Statistics, and Theoretical Machine Learning at the Institute for Advanced Study. \section*{Acknowledgements} The authors would like to thank Grace Abuhamad, Ga\"el Letarte, Ben London, Jeffrey Negrea, and Jean-Philippe Reid for feedback on drafts. \printbibliography \clearpage
\section{Introduction} The study of centrality measures is a fundamental primitive in the analysis of networked datasets \cite{Borgatti2006, Newman2010}, and plays a key role in social network analysis~\cite{Das2018}. A centrality measure informally captures how important a node is for a given network according to \emph{structural} properties of the network. Central nodes are crucial in many applications such as analyses of co-authorship networks~\cite{Liu2005,Yan2009}, biological networks~\cite{Wuchty2003,Koschuetzki2008}, and ontology summarization~\cite{Zhang2007}. One of the most important centrality measures is the betweenness centrality~\cite{Freeman1977,Freeman1978}, which informally captures the fraction of \emph{shortest paths} going through a specific node. The betweenness centrality has found applications in many scenarios such as community detection~\cite{Fortunato2010}, link prediction~\cite{Ahmad2020}, and network vulnerability analysis~\cite{Holme2002}. The exact computation of the betweenness centrality of each node of a network is an extremely challenging task on modern networks, both in terms of running time and memory costs. Therefore, sampling algorithms have been proposed to provide provable high-quality approximations of the betweenness centrality values, while significantly reducing the computational costs~\cite{Riondato2016,Riondato2018,Brandes2007}. Modern networks, in addition to being large, also carry richer information about their edges. In particular, one of the most important and easily accessible pieces of information is the \emph{time} at which edges occur. Such networks are often called \emph{temporal networks}~\cite{Holme2019}.
The analysis of temporal networks provides novel insights compared to those obtained from the analysis of static networks (i.e., networks without temporal information), as, for example, in the study of subgraph patterns~\cite{Paranjape2017,Kovanen2011}, community detection \cite{Lehmann2019}, and network clustering~\cite{Fu2020}. As in static networks, the study of the temporal betweenness centrality in temporal networks aims at identifying the nodes that are visited by a high number of \emph{optimal} paths~\cite{Holme2012,Buss2020}. In temporal networks, the definition of optimal paths has to consider the information about the timing of the edges, making the possible definitions of optimal paths much richer than in static networks~\cite{Rymar2021}. In this work, a temporal path is valid if it is time respecting, i.e., if all the interactions within the path occur at increasing timestamps (see Figures \ref{subfig:staticPath}-\ref{subfig:temporalShortestPath}). We consider two different optimality criteria for temporal paths, chosen for their relevance~\cite{Holme2012}: (i) the shortest temporal path (STP) criterion, a commonly used criterion for which a path is optimal if it uses the minimum number of interactions to connect a given pair of nodes; (ii) the restless temporal path (RTP) criterion, for which a path is optimal if, in addition to being shortest, all its consecutive interactions occur within a given user-specified time duration parameter $\delta\in\mathbb{R}$ (see Figure \ref{subfig:temporalShortestPath}). The RTP criterion finds application, for example, in the study of spreading processes over complex networks~\cite{Pan2011}, where information about the timing of consecutive interactions is fundamental. The exact computation of the temporal betweenness centrality under the STP and RTP optimality criteria becomes impractical (both in terms of running time and memory usage) for even moderately-sized networks.
Furthermore, as in the static case, obtaining a high-quality approximation of the temporal betweenness centrality of a node is often sufficient in many applications. Thus, we propose \algname, the \emph{first} algorithm to compute rig\underline{O}rous estimatio\underline{N} of temporal \underline{B}etweenness cent\underline{R}ality values in tempor\underline{A}l networks\footnote{\url{https://vec.wikipedia.org/wiki/Onbra}.}, providing sharp guarantees on the quality of its output. As for many data-mining algorithms, \algname's output is a function of two parameters: $\varepsilon \in (0,1)$ controlling the estimates' accuracy; and $\eta \in (0,1)$ controlling the confidence. The algorithmic problems arising from accounting for temporal information are substantially more challenging than in the static network scenario, although \algname\ shares a high-level sampling strategy similar to that of~\cite{Riondato2018}. Finally, we show that in practice our algorithm \algname, in addition to providing high-quality estimates while reducing computational costs, also enables analyses that cannot otherwise be performed with existing state-of-the-art algorithms. Our main contributions are the following: \begin{itemize} \item We propose \algname, the first sampling-based algorithm that outputs high-quality approximations of the temporal betweenness centrality values of the nodes of a temporal network. \algname\ leverages an advanced data-dependent and variance-aware concentration inequality to provide sharp probabilistic guarantees on the quality of its estimates. \item We show that \algname\ is able to compute high-quality temporal betweenness estimates for two optimality criteria of the paths, i.e., the STP and RTP criteria. In particular, we developed specific algorithms for \algname\ to address the computation of the estimates according to such optimality criteria.
\item We perform an extensive experimental evaluation with several goals: (i) under the STP criterion, show that studying the temporal betweenness centrality provides novel insights compared to the static version; (ii) under the STP criterion, show that \algname\ provides high-quality estimates, while significantly reducing the computational costs compared to the state-of-the-art exact algorithm, and that it enables the study of large datasets that cannot practically be analyzed by the existing exact algorithm; (iii) show that \algname\ is able to estimate the temporal betweenness centrality under the RTP optimality criterion by varying $\delta$. \end{itemize} \begin{figure}[t] \centering \subfloat[]{ \begin{tabular}{lr} \includegraphics[width=.36\linewidth]{media/imgs/graphSample} & \includegraphics[width=.36\linewidth]{media/imgs/graphStatic} \end{tabular} \label{subfig:temporalNetwork} }\\%\qquad
\subfloat[]{ \includegraphics[width=.35\linewidth]{media/imgs/singlepathstatic} \label{subfig:staticPath} }
\subfloat[]{ \includegraphics[width=.45\linewidth]{media/imgs/pathtemporal} \label{subfig:temporalShortestPath} } \caption{(\ref{subfig:temporalNetwork}): (left) a temporal network $T$ with $n=8$ nodes and $m=12$ edges, (right) its associated static network $G_T$ obtained from $T$ by removing temporal information. A shortest \emph{temporal} path cannot be identified by a shortest path in the static network: e.g., the shortest paths from node $v_1$ to node $v_8$, respectively coloured in green in $T$ and purple in $G_T$, are different. (\ref{subfig:staticPath}): A path that is not time respecting. (\ref{subfig:temporalShortestPath}): A time respecting path that is also shortest in $T$.
With $\delta\ge 42$, such a path is also a shortest $\delta$-restless path.} \label{fig:basicdef} \end{figure} \section{Preliminaries} In this section we introduce the fundamental notions needed throughout the development of our work and formalize the problem of approximating the temporal betweenness centrality of the nodes in a temporal network. We start by introducing temporal networks. \begin{definition} A \emph{temporal network} $T$ is a pair $T=(V,E)$, where $V$ is a set of $n$ nodes (or vertices), and $E=\{(u,v,t): u,v\in V, u\neq v, t \in \mathbb{R}^+\}$ is a set of $m$ directed edges\footnote{\algname\ can be easily adapted to work on \emph{undirected} temporal networks with minor modifications.}\footnote{W.l.o.g.\ we assume the edges $(u_1,v_1,t_1),\dots,(u_m,v_m,t_m)$ to be sorted by increasing timestamps.}. \end{definition} Each edge $e = (u,v,t) \in E$ of the network represents an interaction from node $u \in V$ to node $v \in V$ at time $t$, which is the \emph{timestamp} of the edge. Figure \ref{subfig:temporalNetwork} (left) provides an example of a temporal network $T$. Next, we define \emph{temporal paths}. \begin{definition} Given a temporal network $T$, a \emph{temporal path} $\mathsf{P}$ is a sequence $\mathsf{P}=\langle e_1=(u_1,v_1,t_1), e_2=(u_2,v_2,t_2), \dots,e_k=(u_k,v_k,t_k) \rangle$ of $k$ edges of $T$ ordered by increasing timestamps\footnote{Our work can be easily adapted to deal with non-strict ascending timestamps (i.e., with $\leq$ constraints).}, i.e., $t_i < t_{i+1}, i \in \{1,\dots, k-1\}$, such that the node $v_i$ of edge $e_i$ is equal to the node $u_{i+1}$ of the consecutive edge $e_{i+1}$, i.e., $v_i=u_{i+1}, i \in \{1,\dots,k-1\}$, and each node $v \in V$ is visited by $\mathsf{P}$ \emph{at most} once. \end{definition} Given a temporal path $\mathsf{P}$ made of $k$ edges, we define its length as $\ell_{\mathsf{P}} = k$.
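As a concrete companion to the definition, the following sketch checks whether a sequence of $(u,v,t)$ edges forms a valid (strict) temporal path, and whether it additionally satisfies the $\delta$-restless constraint that consecutive interactions occur at most $\delta$ time units apart. Node names and timestamps are illustrative and not taken from the figure.

```python
def is_temporal_path(edges):
    """A valid temporal path: strictly increasing timestamps, each edge
    starting at the node where the previous one ended, and no node
    visited more than once."""
    if not edges:
        return False
    visited = {edges[0][0]}
    prev_v, prev_t = edges[0][0], float("-inf")
    for u, v, t in edges:
        if u != prev_v or t <= prev_t or v in visited:
            return False
        visited.add(v)
        prev_v, prev_t = v, t
    return True

def is_restless(edges, delta):
    """Restless constraint: consecutive interactions at most delta apart."""
    return is_temporal_path(edges) and all(
        edges[i + 1][2] - edges[i][2] <= delta
        for i in range(len(edges) - 1))

# A time-respecting path of length 3 with illustrative timestamps.
path = [("v1", "v3", 5), ("v3", "v6", 9), ("v6", "v8", 20)]
```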
An example of a temporal path $\mathsf{P}$ of length $\ell_{\mathsf{P}} = 3$ is given in Figure \ref{subfig:temporalShortestPath}. Given a \emph{source} node $s \in V$ and a \emph{destination} node $z \in V, z\neq s$, a \emph{shortest} temporal path between $s$ and $z$ is a temporal path $\mathsf{P}_{s,z}$ of length $\ell_{\mathsf{P}_{s,z}}$ such that in $T$ there is no temporal path $\mathsf{P}_{s,z}'$ connecting $s$ to $z$ of length $\ell_{\mathsf{P}_{s,z}'}<\ell_{\mathsf{P}_{s,z}}$. Given a shortest temporal path $\mathsf{P}_{s,z}$ connecting $s$ and $z$, we define $\mathtt{Int}(\mathsf{P}_{s,z}) = \{w \in V | \ \exists \ (u,w,t) \lor (w,v,t) \in \mathsf{P}_{s,z}, w \neq s,z\} \subset V$ as the set of nodes \emph{internal} to the path $\mathsf{P}_{s,z}$. Let $\sigma_{s,z}^{sh} $ be the number of shortest temporal paths between nodes $s$ and $z$. Given a node $v \in V$, we denote with $\sigma_{s,z}^{sh} (v)$ the number of shortest temporal paths $\mathsf{P}_{s,z}$ connecting $s$ and $z$ for which $v$ is an internal node, i.e., $\sigma_{s,z}^{sh} (v) = |\{\mathsf{P}_{s,z} | v \in \mathtt{Int}(\mathsf{P}_{s,z})\}|$. Now we introduce the \emph{temporal betweenness centrality} of a node $v \in V$, which intuitively captures the fraction of shortest temporal paths visiting $v$. \begin{definition} We define the \emph{temporal betweenness centrality} $b(v)$ of a node $v \in V$ as \[ b(v) = \frac{1}{n(n-1)} \sum_{s,z\in V, \ s\neq z} \frac{\sigma_{s,z}^{sh} (v)}{\sigma_{s,z}^{sh} }. \] \end{definition} Let $B(T) = \{(v,b(v)): v \in V\}$ be the set of pairs composed of a node $v \in V$ and its temporal betweenness value $b(v)$.
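For intuition, on a tiny network the definition can be applied verbatim by brute force: enumerate every time-respecting path for each ordered pair, keep the shortest ones, and accumulate the ratio for each internal node. This exponential-time sketch is only meant to make the definition concrete; it is not one of the exact algorithms discussed later, and the toy input is invented.

```python
from collections import defaultdict
from itertools import permutations

def temporal_betweenness(nodes, edges):
    """Brute-force b(v): enumerate all time-respecting paths between every
    ordered pair (s, z), keep only the shortest ones, and add
    sigma_sz(v) / sigma_sz for every internal node v."""
    out = defaultdict(list)
    for u, v, t in edges:
        out[u].append((v, t))

    def shortest_paths(s, z):
        found, stack = [], [(s, float("-inf"), (s,))]
        while stack:                       # DFS over time-respecting paths
            u, t, seen = stack.pop()
            for v, t2 in out[u]:
                if t2 > t and v not in seen:
                    if v == z:
                        found.append(seen + (v,))
                    else:
                        stack.append((v, t2, seen + (v,)))
        if not found:
            return []
        k = min(len(p) for p in found)
        return [p for p in found if len(p) == k]

    n = len(nodes)
    b = dict.fromkeys(nodes, 0.0)
    for s, z in permutations(nodes, 2):
        paths = shortest_paths(s, z)
        for p in paths:
            for v in p[1:-1]:              # internal nodes only
                b[v] += 1.0 / len(paths)
    return {v: x / (n * (n - 1)) for v, x in b.items()}

# On a 3-node line a -> b -> c with increasing timestamps, only b is ever
# internal, so b(b) = 1 / (3 * 2).
bt = temporal_betweenness(["a", "b", "c"], [("a", "b", 1), ("b", "c", 2)])
```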
Since the exact computation of the set $B(T)$ using state-of-the-art exact algorithms, e.g., \cite{Buss2020,Rymar2021}, is impractical on even moderately-sized temporal networks (see Section \ref{sec:exp} for experimental evaluations), in our work we aim at providing high-quality approximations of the temporal betweenness centrality values of all the nodes of the temporal network. That is, we compute the set $\tilde{B}(T)= \{(v,\tilde{b}(v)): v \in V\}$, where $\tilde{b}(v)$ is an accurate estimate of $b(v)$, controlled by two parameters $\varepsilon, \eta \in (0,1)$ (accuracy and confidence, respectively). We want $\tilde{B}(T)$ to be an \emph{absolute ($\varepsilon,\eta$)-approximation set} of $B(T)$, as commonly adopted in data-mining algorithms (e.g., in \cite{Riondato2018}): that is, $\tilde{B}(T)$ is an approximation set such that \[ \mathbb{P}\left[\sup_{v \in V}|\tilde{b}(v)- b(v)| \le \varepsilon\right] \geq 1 - \eta. \] Note that in an absolute ($\varepsilon,\eta$)-approximation set, for each node $v \in V$, the estimate $\tilde{b}(v)$ of the temporal betweenness value deviates from the actual value $b(v)$ by at most $\varepsilon$, with probability at least $1-\eta$. Finally, let us state the main computational problem addressed in this work. \begin{problem} \label{problem} Given a temporal network $T$ and two parameters $(\varepsilon, \eta)\in (0,1)^2$, compute the set $\tilde{B}(T)$, i.e., an absolute ($\varepsilon,\eta$)-approximation set of $B(T)$. \end{problem} \section{Related Works}\label{sec:relwork} Given the importance of the betweenness centrality for network analysis, many algorithms have been proposed to compute it in different scenarios. In this section we focus on those scenarios most relevant to our work, grouped as follows.
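The deterministic condition inside the probability, that every estimate deviates from the truth by at most $\varepsilon$, is straightforward to verify whenever exact values are available (e.g., on small benchmarks). A minimal helper, with made-up values for illustration:

```python
def is_absolute_eps_approx(b_exact, b_est, eps):
    """True iff sup over v of |b~(v) - b(v)| <= eps over all nodes."""
    return max(abs(b_est[v] - b_exact[v]) for v in b_exact) <= eps

# All deviations within 0.05 on a toy instance.
ok = is_absolute_eps_approx({"a": 0.10, "b": 0.50},
                            {"a": 0.12, "b": 0.47}, 0.05)
# One node deviates by 0.10 > 0.05.
bad = is_absolute_eps_approx({"a": 0.10, "b": 0.50},
                             {"a": 0.20, "b": 0.47}, 0.05)
```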
\emph{Approximation Algorithms for Static Networks.} Recently, many algorithms to approximate the betweenness centrality in static networks have been proposed, most of which employ randomized sampling approaches~\cite{Riondato2016,Riondato2018,Brandes2007}. The existing algorithms differ from each other mainly in the sampling strategy they adopt and in the probabilistic guarantees they offer. Among these works, the one closest to ours is \cite{Riondato2018} by Riondato and Upfal, where the authors proposed to sample pairs of nodes $(s,z)\in V^2$, compute all the shortest paths from $s$ to $z$, and update the estimates of the betweenness centrality values of the nodes internal to such paths. The authors developed a suite of algorithms to output an $(\varepsilon,\eta)$-approximation set of the set of betweenness centrality values. Their work cannot be easily adapted to temporal networks. In fact, static and temporal paths in general are not related in any way, and the temporal scenario introduces many novel challenges: (i) computing the optimal temporal paths, and (ii) updating the betweenness centrality values. Therefore, our algorithm \algname\ employs the idea of the estimator provided by \cite{Riondato2018}, while using novel algorithms designed for the context of temporal networks. Furthermore, the probabilistic guarantees provided by our algorithm \algname\ leverage the variance of the estimates, differently from~\cite{Riondato2018}, which used bounds based on Rademacher averages. Our choice to use a variance-aware concentration inequality is motivated by the recent interest in providing sharp guarantees employing the \emph{empirical variance} of the estimates~\cite{Cousins2021, Pellegrina2021}. \emph{Algorithms for Dynamic Networks.} In this setting the algorithm keeps track of the betweenness centrality value of each node for every timestamp $t_1,\dots,t_m$ observed in the network~\cite{Lee2012,Hanauer2021}.
Note that this is extremely different from estimating the temporal betweenness centrality values in temporal networks. In the dynamic scenario the paths considered are \emph{not} required to be time respecting. For example, in the dynamic scenario, if we consider the network in Figure \ref{subfig:temporalNetwork} (left) at any time $t>20$, the shortest path from $v_1$ to $v_8$ is the one highlighted in purple in Figure \ref{subfig:temporalNetwork} (right). Instead, in the temporal setting such path is not time respecting. We think that it is very challenging to adapt the algorithms for dynamic networks to work in the context of temporal networks, which further motivates us to propose \algname. \emph{Exact Algorithms for Temporal Networks.} Several exact approaches have been proposed in the literature~\cite{Tsalouchidou2020,Alsayed2015,Kim2012}. The algorithm most relevant to our work was presented in~\cite{Buss2020}, where the authors extended the well-known Brandes algorithm~\cite{Brandes2001} to the temporal network scenario considering the STP criterion (among several other criteria). They showed that the time complexity of their algorithm is $O(n^3(t_{m}-t_{1})^2)$, which is often impractical on even moderately-sized networks. Recently, \cite{Rymar2021} discussed conditions on temporal paths under which the temporal betweenness centrality can be computed in polynomial time, showing a general algorithm running in $O(n^2m(t_m-t_1)^2)$ even under the RTP criterion, which is again very far from being practical on modern networks. We conclude by observing that, to the best of our knowledge, no approximation algorithms exist for estimating the temporal betweenness centrality in temporal networks. \section{Method, Algorithm, and Analysis} \label{sec:method_algorithm_analysis} In this section we discuss \algname, our novel algorithm for computing high-quality approximations of the temporal betweenness centrality values of the nodes of a temporal network. 
We first discuss the sampling strategy used in \algname, then we present the algorithm, and finally we show the theoretical guarantees on the quality of the estimates of \algname. \subsection{\algname\ - Sampling Strategy} In this section we discuss the sampling strategy adopted by \algname, which is independent of the optimality criterion of the paths. However, for the sake of presentation, we discuss the sampling strategy for the STP-based temporal betweenness centrality estimation. \algname\ samples \emph{pairs} of nodes $(s,z)$ and computes all the shortest temporal paths from $s$ to $z$. More formally, let $\mathcal{D} = \{(u,v)\in V^2: u\neq v \}$, and $\ell \in \mathbb{N}, \ell\ge2$ be a user-specified parameter. \algname\ first collects $\ell$ pairs of nodes $(s_i,z_i)_i,i=1,\dots,\ell$, sampled uniformly at random from $\mathcal{D}$. Next, for each pair $(s,z)$ it computes $\mathcal{P}_{s,z}=\{\mathsf{P}_{s,z}: \mathsf{P}_{s,z} \text{ is shortest}\}$, i.e., the set of shortest temporal paths from $s$ to $z$. Then, for each node $v \in V$ s.t.\ $\exists \mathsf{P}_{s,z} \in \mathcal{P}_{s,z}$ with $v \in \mathtt{Int}(\mathsf{P}_{s,z})$, i.e., for each node $v$ that is internal to a shortest temporal path of $\mathcal{P}_{s,z}$, \algname\ computes the estimate $\tilde{b}'(v)= \sigma_{s,z}^{sh}(v) / \sigma_{s,z}^{sh}$, which is an unbiased estimator of the temporal betweenness centrality value $b(v)$ (i.e., $\mathbb{E}[\tilde{b}'(v)] = b(v)$, see Lemma \ref{lemma:unbiased} in Appendix \ref{app:proofs}). Finally, after processing the $\ell$ randomly selected pairs of nodes, \algname\ computes for each node $v \in V$ the (unbiased) estimate $\tilde{b}(v)$ of the actual temporal betweenness centrality $b(v)$ by averaging $\tilde{b}'(v)$ over the $\ell$ sampling steps: $\tilde{b}(v) = 1/\ell \sum_{i=1}^\ell \tilde{b}'(v)_i $, where $\tilde{b}'(v)_i$ is the estimate of $b(v)$ obtained by analyzing the $i$-th sample, $i\in [1,\ell]$.
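The sampling loop just described can be sketched as follows. Here the per-pair shortest-path counts are assumed to be precomputed and passed in as a dictionary (a hypothetical stand-in for the source-destination computation of Algorithm \ref{alg:truncatedstpaths}), so that the sketch isolates the estimator itself; node names and counts are illustrative.

```python
import random

def onbra_sketch(nodes, pair_counts, ell, seed=0):
    """Average of ell i.i.d. per-pair estimates b~'(v).

    pair_counts maps an ordered pair (s, z) to
    (sigma_sz, {v: sigma_sz(v)}): the number of shortest temporal paths
    from s to z and, for each internal node v, how many pass through v.
    """
    rng = random.Random(seed)
    domain = [(u, v) for u in nodes for v in nodes if u != v]
    totals = dict.fromkeys(nodes, 0.0)
    for _ in range(ell):
        s, z = rng.choice(domain)          # uniform pair from D
        sigma, internal = pair_counts.get((s, z), (0, {}))
        if sigma > 0:
            for v, c in internal.items():
                totals[v] += c / sigma     # this sample's b~'(v)
    return {v: tot / ell for v, tot in totals.items()}

# 3-node line a -> b -> c: only the pair (a, c) has an internal node, so
# est["b"] concentrates around b(b) = 1/6 while est["a"], est["c"] stay 0.
est = onbra_sketch(["a", "b", "c"], {("a", "c"): (1, {"b": 1})}, ell=2000)
```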
We will discuss the theoretical guarantees on the quality of the estimates $\tilde{b}(v)$ in Section \ref{subsec:theoguar}. \begin{algorithm}[t] \DontPrintSemicolon \LinesNumbered \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \SetKwFunction{GetSample}{uniformRandomSample} \SetKwFunction{ModifiedSP}{SourceDestinationSTPComputation} \SetKwFunction{UpdateEST}{updateSTPEstimates} \SetKwComment{tcp}{//}{} \KwIn{Temporal network $T=(V,E)$, $\eta\in(0,1), \ell \ge 2$} \KwOut{Pair $(\varepsilon',\tilde{B}(T))$ s.t. $\tilde{B}$ is an absolute $(\varepsilon', \eta)$-approximation set of $B(T)$.}
$\mathcal{D}\leftarrow\{(u,v)\in V\times V, u\neq v\}$\label{algline:samplespace}\;
$\tilde{B}_{v,:} \gets \vec{0}_\ell, \forall v\in V$\label{algline:estimatesmatrix}\;
\For{$i\gets 1$ \KwTo $\ell$} {\label{algline:mainloop}
$(s,z)\leftarrow$ \GetSample{$\mathcal{D}$}\label{algline:getsample}\;
\ModifiedSP{$T,s,z$}\label{algline:spcomp}\;
\If{\emph{reached}$(z)$\label{algline:ifReachedDest}} { \UpdateEST{$\tilde{B}, i$}\label{algline:updateEst}\; } }
$\tilde{B}(T) \gets \{(v, 1/\ell \sum_{i=1}^\ell \tilde{B}_{v,i}): v\in V \} $\label{algline:unbiasedComp}\;
$\varepsilon' \gets \sup_{v\in V} \left\{ \sqrt{\frac{ 2 \mathbf{V}(\tilde{B}_{v,:}) \ln(4n / \eta)}{\ell}} + \frac{7 \ln(4n / \eta)}{3(\ell - 1)} \right\} $\label{algline:compStoppingCond}\;
\Return{$(\varepsilon', \tilde{B}(T))$}\;
\caption{\algname.} \label{alg:static} \end{algorithm} \subsection{Algorithm Description} \subsubsection*{Sampling Algorithm: \algname} \algname\ is presented in Algorithm \ref{alg:static}. In line \ref{algline:samplespace} we first initialize the set $\mathcal{D}$ of objects to be sampled, where each object is a pair of distinct nodes from $V$. Next, in line \ref{algline:estimatesmatrix} we initialize the matrix $\tilde{B}$ of size $|V| \times \ell$ to store the estimates of \algname\ for each node at the various iterations, needed to compute their empirical variance and the final estimates.
Then we start the main loop (line \ref{algline:mainloop}) that will iterate $\ell$ times. In each iteration we first select a pair $(s,z)$ sampled uniformly at random from $\mathcal{D}$ (line \ref{algline:getsample}). We then compute all the shortest temporal paths from $s$ to $z$ by executing Algorithm \ref{alg:truncatedstpaths} (line \ref{algline:spcomp}), which is described in detail later in this section. This algorithm computes all the shortest temporal paths from $s$ to $z$, adopting some pruning criteria to speed up the computation. If at least one STP between $s$ and $z$ exists (line \ref{algline:ifReachedDest}), then for each node $v \in V$ internal to a path in $\mathcal{P}_{s,z}$ we update the corresponding estimate at the current iteration by computing $\tilde{b}'(v)_i$ using Algorithm \ref{alg:updatepathcounts} (line \ref{algline:updateEst}). While in static networks this step can be done with a simple recursive formula \cite{Riondato2018}, in our scenario we need a specific algorithm to deal with the more challenging fact that a node may appear at different distances from a given source across different shortest temporal paths. We will discuss such an algorithm in detail later in this section. At the end of the $\ell$ iterations of the main loop, \algname\ computes: (i) the set $\tilde{B}(T)$ of unbiased estimates (line \ref{algline:unbiasedComp}); and (ii) a tight bound $\varepsilon'$ on $\sup_{v \in V}|\tilde{b}(v)- b(v)|$, which leverages the empirical variance $\mathbf{V}(\tilde{B}_{v,:})$ of the estimates (line \ref{algline:compStoppingCond}). We observe that $\varepsilon'$ is such that the set $\tilde{B}(T)$ is an absolute $(\varepsilon', \eta)$-approximation set of $B(T)$. We discuss the computation of such a bound in Section \ref{subsec:theoguar}. Finally, \algname\ returns $(\varepsilon',\tilde{B}(T))$.
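The bound $\varepsilon'$ computed at line \ref{algline:compStoppingCond} is an empirical-Bernstein-style deviation with a union bound over the $n$ nodes. A sketch, assuming $\mathbf{V}(\cdot)$ denotes the unbiased sample variance of a node's per-iteration estimates (one plausible reading of the formula):

```python
import math

def eps_prime(estimate_rows, eta):
    """sup over nodes of sqrt(2 V ln(4n/eta) / ell) + 7 ln(4n/eta) / (3(ell-1)).

    estimate_rows: {v: [b~'(v)_1, ..., b~'(v)_ell]} -- one row of the
    matrix B~ per node.  V is the unbiased empirical variance of a row.
    """
    n = len(estimate_rows)
    log_term = math.log(4 * n / eta)
    best = 0.0
    for row in estimate_rows.values():
        ell = len(row)
        mean = sum(row) / ell
        var = sum((x - mean) ** 2 for x in row) / (ell - 1)
        best = max(best,
                   math.sqrt(2 * var * log_term / ell)
                   + 7 * log_term / (3 * (ell - 1)))
    return best

# With zero variance the bound reduces to its second term.
eps = eps_prime({"v": [0.5] * 11}, eta=0.5)
```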
\begin{algorithm}[t] \DontPrintSemicolon \SetKwComment{Comment}{$\triangleright$\ }{} \LinesNumbered \KwIn{$T=(V,E)$, source node $s$, destination node $z$} \For{$v\in V$} { $\mathsf{dist}_{v} \gets -1$; $\sigma_{v} \gets 0$\label{alglinetr:structdist}\; } \For{$(u,v,t)\in E$} { $\sigma_{v,t}\gets 0; P_{v,t}\gets \emptyset; \mathsf{dist}_{v,t} \gets -1$\label{alglinetr:structpaths}\; } $\mathsf{dist}_s \gets 0$; $\mathsf{dist}_{s,0} \gets 0$\label{alglinetr:initdist}\; $\sigma_s \gets 1$; $\sigma_{s,0} \gets 1$; $\mathsf{d}_z^{\min} \gets \infty$\label{alglinetr:initpaths}\; $Q \gets$ empty queue; $Q.$enqueue$((s,0))$\label{alglinetr:initdatastruct}\; \While{$!Q.$\emph{empty}$()$\label{alglinetr:mainwhile}} { $(v,t) \gets Q.$dequeue$()$ \label{alglinetr:curpair}\; \If{$(\mathsf{dist}_{v,t} < \mathsf{d}_z^{\min})$\label{alglinetr:truncCond}}{ \For{$(w,t')\in \mathcal{N}^{{+}}(v,t)$\label{alglinetr:forOutNeighTemp}} { \If{$\mathsf{dist}_{w,t'}=-1$\label{alglinetr:ifNeverReachedAtTime}} { $\mathsf{dist}_{w,t'} \gets \mathsf{dist}_{v,t}+1$\label{alglinetr:mindistTime}\; {\If{$\mathsf{dist}_w=-1$\label{alglinetr:ifNeverReached}} { $\mathsf{dist}_w \gets \mathsf{dist}_{v,t}+1$\label{alglinetr:mindDistToNode}\; \If{$w=z$} { $\mathsf{d}_z^{\min} \gets \mathsf{dist}_w$\label{alglinetr:mindDistToDest}\; } } } $Q.$enqueue$((w,t'))$\label{alglinetr:enqueue}\; } \If{$\mathsf{dist}_{w,t'}= \mathsf{dist}_{v,t}+1$\label{alglinetr:ifShortest}} { $\sigma_{w,t'} \gets \sigma_{w,t'} + \sigma_{v,t}$\label{alglinetr:updatePaths}\; $P_{w,t'} \gets P_{w,t'} \cup \{(v,t)\}$\label{alglinetr:updatePredecessors}\; {\If{$\mathsf{dist}_{w,t'}=\mathsf{dist}_w$} { $\sigma_w \gets \sigma_w+\sigma_{v,t}$\label{alglinetr:updateTotPaths}\; }} } } } } \caption{Source-Destination STP computation.} \label{alg:truncatedstpaths} \end{algorithm} \subsubsection*{Subroutines} We now describe the subroutines employed in Algorithm \ref{alg:static} focusing on the STP criterion.
Then, in Section \ref{sec:RTP_criteria}, we discuss how to deal with the RTP criterion. \emph{Source Destination Shortest Paths Computation.} We start by introducing some definitions needed throughout this section. First, we say that a pair $(v,t)\in V\times \{t_1,\dots,t_m\}$ is a \emph{vertex appearance} (VA) if $\exists (u,v,t)\in E$. Next, given a VA\ $(v,t)$ we say that a VA $(w,t')$ is a \emph{predecessor} of $(v,t)$ if $\exists(w,v,t)\in E, t' < t$. Finally, given a VA $(v,t)$ we define its set of \emph{out-neighbouring VAs} as $\mathcal{N}^{{+}}(v,t)=\{(w,t') : \exists (v,w,t')\in E, t<t'\}$. We now describe Algorithm \ref{alg:truncatedstpaths}, which computes the shortest temporal paths between a source node $s$ and a destination node $z$ (invoked in \algname\ at line \ref{algline:spcomp}). Such computation is optimized to prune the search space once the destination $z$ has been found. The algorithm initializes the data structures needed to keep track of the shortest temporal paths that, starting from $s$, reach a node in $V$, i.e., the arrays $\mathsf{dist}[\cdot]$ and $\sigma[\cdot]$ that contain, for each node $v\in V$, respectively the minimum distance to reach $v$ and the number of shortest temporal paths reaching $v$ (line \ref{alglinetr:structdist}). In line \ref{alglinetr:structpaths} we initialize $\mathsf{dist}[\cdot,\cdot]$, which keeps track of the minimum distance of a VA from the source $s$, $\sigma[\cdot, \cdot]$, which maintains the number of shortest temporal paths reaching a VA from $s$, and $P$, which keeps the set of predecessors of a VA across the shortest temporal paths explored.
After initializing the values of the data structures for the source $s$ and $\mathsf{d}_z^{\min}$, which keeps the length of the minimum distance to reach $z$ (lines \ref{alglinetr:initdist}-\ref{alglinetr:initpaths}), in line \ref{alglinetr:initdatastruct} we initialize the queue $Q$ that keeps the VAs to be visited in a BFS fashion (observe that, since the temporal paths need to be time-respecting, all the paths need to account for the time at which each node is visited). Next, the algorithm explores the network in BFS order (line \ref{alglinetr:mainwhile}), extracting a VA $(v,t)$ from the queue, which corresponds to a node and the time at which such node is visited, and processing it by collecting its set $\mathcal{N}^{{+}}(v,t)$ of out-neighbouring VAs (lines \ref{alglinetr:curpair}-\ref{alglinetr:forOutNeighTemp}). If a VA $(w,t')$ has not been explored yet (i.e., $\mathsf{dist}_{w,t'}=-1$), then we update the minimum distance $\mathsf{dist}_{w,t'}$ to reach $w$ at time $t'$, the minimum distance $\mathsf{dist}_{w}$ of the vertex $w$ if it was not already visited, and, if $w$ is the destination node $z$, we update $\mathsf{d}_z^{\min}$ (lines \ref{alglinetr:ifNeverReachedAtTime}-\ref{alglinetr:mindDistToDest}). Observe that the distance $\mathsf{d}_z^{\min}$ to reach $z$ is used as a \emph{pruning criterion} in line \ref{alglinetr:truncCond} (clearly, if a VA appears at a distance greater than $\mathsf{d}_z^{\min}$ then it cannot be on a shortest temporal path from $s$ to $z$). After updating the VAs to be visited by inserting them in $Q$ (line \ref{alglinetr:enqueue}), if the current temporal path is shortest for the VA $(w,t')$ under analysis, we update the number $\sigma_{w,t'}$ of shortest temporal paths leading to it, its set $P_{w,t'}$ of predecessors, and the number $\sigma_w$ of shortest temporal paths reaching the node $w$ (lines \ref{alglinetr:ifShortest}-\ref{alglinetr:updateTotPaths}).
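The BFS over vertex appearances with the $\mathsf{d}_z^{\min}$ pruning can be sketched as follows; this is a minimal Python rendition on a hard-coded toy edge list, with names of our choosing, and the source is conventionally entered at time $0$.

```python
from collections import defaultdict, deque

# Toy temporal edge list (u, v, t); timestamps along a path must strictly
# increase, and the source is assumed to be entered at time 0.
EDGES = [("s", "a", 1), ("a", "z", 2), ("s", "b", 1), ("b", "z", 3),
         ("a", "b", 2), ("b", "c", 4), ("c", "z", 5)]

def source_dest_stp(src, dst):
    """BFS over vertex appearances (v, t) with the d_z^min pruning.
    Returns per-VA shortest-path counts sigma, predecessor sets P, and
    the node-level distance of dst (None if unreachable)."""
    out_edges = defaultdict(list)
    for u, v, t in EDGES:
        out_edges[u].append((v, t))
    dist = {(src, 0): 0}             # VA-level minimum distance from src
    dist_v = {src: 0}                # node-level minimum distance
    sigma = defaultdict(int); sigma[(src, 0)] = 1
    preds = defaultdict(set)
    d_min = float("inf")
    Q = deque([(src, 0)])
    while Q:
        v, t = Q.popleft()
        if dist[(v, t)] >= d_min:    # prune: cannot lie on a shortest
            continue                 # temporal path to dst
        for w, t2 in out_edges[v]:
            if t2 <= t:              # must respect time ordering
                continue
            if (w, t2) not in dist:
                dist[(w, t2)] = dist[(v, t)] + 1
                if w not in dist_v:
                    dist_v[w] = dist[(w, t2)]
                    if w == dst:
                        d_min = dist_v[w]
                Q.append((w, t2))
            if dist[(w, t2)] == dist[(v, t)] + 1:
                sigma[(w, t2)] += sigma[(v, t)]
                preds[(w, t2)].add((v, t))
    return sigma, preds, dist_v.get(dst)
```

On the toy network, `source_dest_stp("s", "z")` finds two shortest temporal paths of two hops each, one through each destination appearance.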
\emph{Update Estimates: STP criterion.} Now we describe Algorithm \ref{alg:updatepathcounts}, which updates the temporal betweenness estimates of each node internal to a path in $\mathcal{P}_{s,z}$ already computed. With Algorithm \ref{alg:truncatedstpaths} we computed, for each VA $(w,t)$, the number $\sigma_{w,t}$ of shortest temporal paths from $s$ reaching $(w,t)$. Now, in Algorithm \ref{alg:updatepathcounts}, we need to combine such counts to compute the total number of shortest temporal paths leading to each VA $(w,t)$ appearing in a path in $\mathcal{P}_{s,z}$, allowing us to compute the estimate of \algname\ for each node $w$. At the end of Algorithm \ref{alg:truncatedstpaths} there are in total $|\mathcal{P}_{s,z}|$ shortest temporal paths reaching $z$ from $s$. Now we need to compute, for each node $w$ internal to a path in $\mathcal{P}_{s,z}$ and for each VA $(w,t)$, the number $\sigma_{w,t}^z$ of shortest temporal paths leading from $w$ to $z$ at a time greater than $t$. Then, the fraction of paths containing the node $w$ is computed with a simple formula, i.e., $\sum_t \sigma_{w,t}^z\cdot \sigma_{w,t}/\sigma_z$, where $\sigma_z =|\mathcal{P}_{s,z}| $. The whole procedure is described in Algorithm \ref{alg:updatepathcounts}. We start by initializing $\sigma_{v,t}^z$, which stores for each VA $(v,t)$ the number of shortest temporal paths reaching $z$ at a time greater than $t$ starting from $v$, and a boolean matrix $M$ that keeps track, for each VA, of whether it has already been considered (line \ref{alglineup:initdatastruct}). In line \ref{alglineup:initqueue} we initialize a queue $R$ that will be used to explore the VAs appearing along the paths in $\mathcal{P}_{s,z}$ in reverse order of distance from $s$, starting from the destination node $z$. Then we initialize $\sigma^z_{w,t'}$ for each VA reaching $z$ at a given time $t'$ (line \ref{alglineup:initsigmas}), and we insert each VA in the queue only once (line \ref{alglineup:initVappeareance}).
The algorithm then starts its main loop, exploring the VAs in decreasing order of distance starting from $z$ (line \ref{alglineup:mainwhile}). In line \ref{alglineup:dequeue} we dequeue the VA $(w,t)$ to be explored. If $w$ differs from $s$ (i.e., $w$ is an internal node), then we update its temporal betweenness estimate by adding $\sigma_{w,t}^z\cdot \sigma_{w,t}/\sigma_z$ (line \ref{alglineup:updateBetweenness}). Then, as in the initialization step, we process each predecessor $(w',t')$ of $(w,t)$ across the paths in $\mathcal{P}_{s,z}$ (line \ref{alglineup:forPred}), we update the count $\sigma_{w',t'}^z$ of the paths from the predecessor to $z$ by summing the number $\sigma_{w,t}^z$ of paths passing through $(w,t)$ and reaching $z$ (line \ref{alglineup:UpdatePaths}), and we enqueue the predecessor $(w',t')$ only if it was not already considered (lines \ref{alglineup:inqueuewhile}-\ref{alglineup:enqueue}). The algorithm thus terminates having properly computed, for each node $v\in V, v\neq s,z$, the estimate $\tilde{b}'(v)_i$ at each iteration $i\in[1,\ell]$.
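The backward accumulation can be illustrated with a minimal Python sketch. The forward-pass output below (counts $\sigma$ and predecessor sets) is hand-computed for an assumed toy instance with two disjoint shortest paths, so the share of each internal node should come out as $1/2$; names and structure are ours.

```python
from collections import defaultdict, deque

# Hand-computed forward-pass output (assumed) for the toy instance
#   s->a@1, s->b@1, a->z@2, b->z@2, with source s and destination z:
sigma = {("s", 0): 1, ("a", 1): 1, ("b", 1): 1, ("z", 2): 2}
preds = {("a", 1): {("s", 0)}, ("b", 1): {("s", 0)},
         ("z", 2): {("a", 1), ("b", 1)}}
z_appearances = [("z", 2)]                         # destination VAs reached
sigma_z = sum(sigma[va] for va in z_appearances)   # |P_{s,z}| = 2

def backward_update(src="s"):
    """Visit VAs backwards from the destination, accumulating sigma^z
    (paths from each VA onward to z) and each internal node's share
    sum_t sigma^z_{w,t} * sigma_{w,t} / sigma_z."""
    sigma_to_z = defaultdict(int)
    seen, R = set(), deque()
    for va in z_appearances:               # seed with z's predecessors
        for p in preds.get(va, ()):
            sigma_to_z[p] += 1
            if p not in seen:
                seen.add(p); R.append(p)
    share = defaultdict(float)
    while R:
        w, t = R.popleft()
        if w != src:                       # internal nodes only
            share[w] += sigma_to_z[(w, t)] * sigma[(w, t)] / sigma_z
        for p in preds.get((w, t), ()):
            sigma_to_z[p] += sigma_to_z[(w, t)]
            if p not in seen:
                seen.add(p); R.append(p)
    return dict(share)
```

Running `backward_update()` on this instance yields a share of $0.5$ for each of the two internal nodes, matching the fraction of shortest paths through them.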
\begin{algorithm}[t] \DontPrintSemicolon \SetKwComment{Comment}{$\triangleright$\ }{} \LinesNumbered \KwIn{$\tilde{B}, i$.} \For{$(u,v,t)\in E$} { $\sigma_{v,t}^z\gets 0$; $M_{v,t} \gets \mathsf{False}$\label{alglineup:initdatastruct}\; } $R \gets$ empty queue\label{alglineup:initqueue}\; \ForEach{$t : (\sigma_{z,t} > 0)$\label{alglineup:fortReachingZ}} { \For{$(w,t')\in P_{z,t}$\label{alglineup:forPredecessorsZ}} { $\sigma_{w,t'}^z \gets \sigma_{w,t'}^z+1$\label{alglineup:initsigmas}\; \If{$!M_{w,t'}$\label{alglineup:inqueue}}{ $R$.enqueue($(w,t')$); $M_{w,t'} \gets \mathsf{True} $\label{alglineup:initVappeareance}\; } } } \While{$!R.$\emph{empty}$()$\label{alglineup:mainwhile}}{ $(w,t) \gets R.$dequeue$()$\label{alglineup:dequeue}\; \If{$w\neq s$\label{alglineup:nots}}{ $\tilde{B}_{w,i} \gets \tilde{B}_{w,i} + \sigma_{w,t}^z \cdot \sigma_{w,t} / \sigma_z$\label{alglineup:updateBetweenness}\; \For{$(w',t')\in P_{w,t}$\label{alglineup:forPred}} { $\sigma_{w',t'}^z \gets \sigma_{w',t'}^z + \sigma_{w,t}^z$\label{alglineup:UpdatePaths}\; \If{$!M_{w',t'}$\label{alglineup:inqueuewhile}}{ $R$.enqueue($(w',t')$); $M_{w',t'} \gets \mathsf{True} $\label{alglineup:enqueue}\; } } } } \caption{Update betweenness estimates - STP.} \label{alg:updatepathcounts} \end{algorithm} \subsection{Restless Temporal Betweenness} \label{sec:RTP_criteria} In this section we present the algorithms used by \algname\ when the temporal betweenness centrality values are computed under the RTP criterion for optimal paths. Recall that, in such a scenario, a temporal path $\mathsf{P}=\langle e_1=(u_1,v_1,t_1), \ e_2=(u_2,v_2,t_2), \dots,e_k=(u_k,v_k,t_k) \rangle$ is considered optimal if and only if $\mathsf{P}$, in addition to being \emph{shortest}, is such that, given $\delta \in \mathbb{R}^+$, it holds $t_{i+1}\le t_{i}+\delta$ for $i=1,\dots,k-1$.
Under the RTP criterion, we need to relax the definition of shortest temporal paths and, instead, consider \emph{shortest temporal walks}. Intuitively, a walk is a path where we drop the constraint that a node must be visited at most once. We provide an intuition of why we need such a requirement in Figure \ref{fig:twalks}. Given $\delta \in \mathbb{R}^+$, we refer to a shortest temporal walk as a \emph{shortest $\delta$-restless temporal walk}. In order to work properly under the RTP criterion, \algname\ needs novel algorithms to compute the optimal walks and update the betweenness estimates. Note that to compute the shortest $\delta$-restless temporal walks we can use Algorithm \ref{alg:truncatedstpaths}, provided that we add the condition $t' -t \leq \delta$ in line \ref{alglinetr:forOutNeighTemp}. More interestingly, the biggest computational problem arises when updating the temporal betweenness values of the various nodes on the optimal walks. Note that, to do so, we cannot use Algorithm \ref{alg:updatepathcounts} because it does not account for cycles (i.e., when vertices appear multiple times across a walk). We therefore introduce Algorithm \ref{alg:updatewalkcounts} (pseudocode in Appendix \ref{app:RTPPseudocode}), which works in the presence of cycles. The main intuition behind Algorithm \ref{alg:updatewalkcounts} is that we need to recreate backwards all the optimal walks obtained through the RTP version of Algorithm \ref{alg:truncatedstpaths}. For each walk we maintain a set that keeps track of the nodes already visited up to the current point of the exploration of the walk, updating a node's estimate if and only if we see such node for the first time. This is based on the simple observation that a cycle cannot alter the value of the betweenness centrality of a node on a fixed walk, allowing us to account only once for the node's appearance along the walk.
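The only change needed in the forward exploration is the restless waiting constraint on out-neighbouring VAs; a small illustrative filter (toy edge list and function name are ours) might look as follows.

```python
def restless_out_neighbors(edges, v, t, delta):
    """Out-neighbouring VAs of (v, t) under the RTP constraint: the next
    edge must depart after t but within delta time units."""
    return [(w, t2) for u, w, t2 in edges if u == v and t < t2 <= t + delta]

# With delta = 10, waiting from time 20 to time 45 violates the
# constraint, so only the tighter connection survives:
edges = [("v1", "v2", 20), ("v2", "v3", 45), ("v2", "v4", 25)]
print(restless_out_neighbors(edges, "v2", 20, 10))   # -> [('v4', 25)]
```

The inclusive upper bound matches the definition $t_{i+1} \le t_i + \delta$ given above.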
We now describe Algorithm \ref{alg:updatewalkcounts} by discussing its differences with respect to Algorithm \ref{alg:updatepathcounts}. In line \ref{alglineupwa:initdatastruct}, instead of maintaining a matrix keeping track of the presence of a VA in the queue, we now initialize a matrix $\mathsf{u}[\cdot,\cdot]$ that keeps the number of times a VA is in the queue. The queue, initialized in line \ref{alglineupwa:initqueue}, keeps elements of the form $\langle\cdot,\cdot\rangle$, where the first entry is a VA to be explored and the second entry is the set of nodes already visited backwards along the walk leading to such vertex appearance. While visiting each walk backwards, we check whether each node is visited for the first time on such walk: if so, we update the betweenness values by accounting for the number of times we will visit such VA across other walks (lines \ref{alglineupwa:ifnotvisited}-\ref{alglineupwa:updatebetween}). Next, we update the set of nodes visited (line \ref{alglineupwa:updateset}). Finally, we update the count $\sigma_{w',t'}^z$ of the walks leading from the predecessor $(w',t')$ of the current VA $(w,t)$ to $z$ (line \ref{alglineupwa:updatepathstow}), the number $\mathsf{u}_{w',t'}$ of times such predecessor will be visited (line \ref{alglineupwa:updatevisits}), and we enqueue the predecessor $(w',t')$ to be explored, together with the additional information of the set $\mathsf{S}'$ of nodes explored up to that point. To conclude, note that Algorithm \ref{alg:updatewalkcounts} is more expensive than Algorithm \ref{alg:updatepathcounts}, since it recreates all the optimal walks, while Algorithm \ref{alg:updatepathcounts} avoids such a step given the absence of cycles.
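The core idea, crediting a node at most once per walk, can be sketched in Python as below. This is a deliberately simplified version: it enumerates every backward branch explicitly and credits $1/(\text{number of walks})$ at each first visit, whereas the actual Algorithm \ref{alg:updatewalkcounts} batches repeated visits through the counters $\mathsf{u}[\cdot,\cdot]$ and the $\sigma^z$ values. The predecessor map below is an assumed forward-pass output for a single shortest $\delta$-restless walk containing a cycle.

```python
from collections import deque

# Assumed forward-pass output for one shortest delta-restless walk
#   v1 -> v2 -> v3 -> v2 -> z   (node v2 is revisited):
preds = {("z", 50): {("v2", 45)}, ("v2", 45): {("v3", 40)},
         ("v3", 40): {("v2", 35)}, ("v2", 35): {("v1", 30)}}

def walk_update(z_appearances, src, n_walks):
    """Reconstruct every optimal walk backwards; the frozenset carried on
    the queue ensures a node is credited at most once per walk, so cycles
    do not inflate its betweenness share."""
    share = {}
    R = deque((p, frozenset()) for va in z_appearances
              for p in preds.get(va, ()))
    while R:
        (w, t), seen = R.popleft()
        if w != src and w not in seen:
            share[w] = share.get(w, 0.0) + 1.0 / n_walks
        seen = seen | {w}
        for p in preds.get((w, t), ()):
            R.append((p, seen))
    return share
```

On the toy walk, $v_2$ appears twice but is credited only once, exactly as the visited-set mechanism prescribes.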
\begin{figure}[t] \centering \begin{tabular}{lr} \includegraphics[width=.4\linewidth]{media/imgs/nondelta} & \includegraphics[width=.4\linewidth]{media/imgs/delta} \end{tabular} \caption{Considering the temporal network in Figure \ref{fig:basicdef} and $\delta=10$, the paths from node $v_1$ to node $v_6$ on the left are not shortest $\delta$-restless since both violate the timing constraint (i.e., $45-20,70-20 > \delta$). Instead, the walk on the right is shortest and meets the timing constraint with $\delta=10$: so, it is a shortest $\delta$-restless walk.} \label{fig:twalks} \end{figure} \subsection{\algname\ - Theoretical Guarantees}\label{subsec:theoguar} In order to address Problem \ref{problem}, \algname\ bounds the deviation between the estimates $\tilde{b}(v)$ and the actual values $b(v)$, for every node $v \in V$. To do so, we leverage the so-called \emph{empirical Bernstein bound}, which we adapted to \algname. Given a node $v \in V$, let $\tilde{B}_{v,:} = (\tilde{b}'(v)_1, \tilde{b}'(v)_2, \dots, \tilde{b}'(v)_{\ell})$, where $\tilde{b}'(v)_i$ is the estimate of $b(v)$ obtained by analysing the $i$-th sample, $i \in \{1, \dots, \ell\}$.
Let $\mathbf{V}(\tilde{B}_{v,:})$ be the \emph{empirical variance} of $\tilde{B}_{v,:}$: \[ \mathbf{V}(\tilde{B}_{v,:}) = \frac{1}{\ell(\ell-1)} \sum_{1 \leq i < j \leq \ell} (\tilde{b}'(v)_i-\tilde{b}'(v)_j)^2. \] We use the \emph{empirical Bernstein bound} to limit the deviation between the $\tilde{b}(v)$'s and the $b(v)$'s; the bound corresponds to Corollary 5 of \cite{maurer2009empirical} adapted to our framework, since that corollary is formulated for generic random variables taking values in $[0,1]$ and for an arbitrary set of functions. \begin{theorem}[Corollary 5, \cite{maurer2009empirical}] \label{th:bound} Let $\ell \geq 2$ be the number of samples, and $\eta \in (0,1)$ be the confidence parameter. Let $\tilde{b}'(v)_i$ be the estimate of $b(v)$ obtained by analysing the $i$-th sample, $i \in \{1, \dots, \ell\}$ and $v \in V$. Let $\tilde{B}_{v,:} = (\tilde{b}'(v)_1, \tilde{b}'(v)_2, \dots, \tilde{b}'(v)_{\ell})$, and $\mathbf{V}(\tilde{B}_{v,:})$ be its empirical variance. With probability at least $1-\eta$, and for every node $v \in V$, we have that \[ |\tilde{b}(v) - b(v)| \leq \sqrt{\frac{ 2 \mathbf{V}(\tilde{B}_{v,:}) \ln(4n / \eta)}{\ell}} + \frac{7 \ln(4n / \eta)}{3(\ell - 1)}. \] \end{theorem} The right-hand side of the inequality of the previous theorem differs from Corollary 5 of \cite{maurer2009empirical} by a factor of $2$ in the arguments of the natural logarithms, since in \cite{maurer2009empirical} the bound is not stated in the symmetric form reported in Theorem \ref{th:bound}. Finally, the result about the guarantees on the quality of the estimates provided by \algname\ follows. \begin{corollary} \label{cor:onbra} Given a temporal network $T$, the pair $(\varepsilon', \tilde{B}(T))$ in output from \algname\ is such that, with probability $>1-\eta$, it holds that $\tilde{B}(T)$ is an absolute $(\varepsilon',\eta)$-approximation set of $B(T)$.
\end{corollary} Observe that Corollary \ref{cor:onbra} is independent of the structure of the optimal paths considered by \algname; therefore such guarantees hold for both the criteria considered in our work. \section{Experimental Evaluation} \label{sec:exp} In this section we present our experimental evaluation, which has the following goals: (i) motivate the study of the temporal betweenness centrality by showing two real-world temporal networks on which the temporal betweenness provides novel insights compared to the static betweenness computed on their associated static networks; (ii) assess, considering the STP criterion, the accuracy of \algname's estimates, and the benefit of using \algname\ instead of the state-of-the-art exact approach~\cite{Buss2020}, both in terms of running time and memory usage; (iii) finally, show how \algname\ can be used on a real-world temporal network to analyze the RTP-based betweenness centrality values. \subsection{Setup} \begin{table}[t] \centering \caption{Datasets used and their statistics.} \label{tab:datasets} \scalebox{0.75}{ \begin{tabular}{ccccl} \toprule Name& $n$&$m$& Granularity & Timespan\\ \midrule HighSchool2012 (HS) & 180 & 45K & 20 sec & 7 (days)\\ CollegeMsg & 1.9K & 59.8K & 1 sec & 193 (days)\\ EmailEu & 986 & 332K & 1 sec & 803 (days)\\ FBWall (FB) & 35.9K & 199.8K & 1 sec & 100 (days)\\ Sms & 44K & 544.8K & 1 sec & 338 (days)\\ Mathoverflow & 24.8K & 390K & 1 sec & 6.4 (years)\\ Askubuntu & 157K & 727K & 1 sec & 7.2 (years)\\ Superuser & 192K & 1.1M & 1 sec & 7.6 (years)\\ \bottomrule \end{tabular} } \end{table} We implemented \algname\ in C++20 and compiled it using \texttt{gcc 9}. The code is freely available\footnote{\url{https://github.com/iliesarpe/ONBRA}.}. All the experiments were performed sequentially on a 72-core Intel Xeon Gold 5520 @ 2.2GHz machine with 1008GB of RAM available.
The real-world datasets we used, which are mostly social or message networks from different domains, are described in Table \ref{tab:datasets}. Such datasets are publicly available online\footnote{\url{http://www.sociopatterns.org/} and \url{https://snap.stanford.edu/temporal-motifs/data.html}.}. For detailed descriptions of such datasets we refer to the links reported and~\cite{Paranjape2017}. To obtain the FBWall dataset we kept the last 200K edges of the original dataset~\cite{Viswanath2009}, which has more than 800K edges. Such cut is done to allow the exact algorithm to complete its execution without exceeding the available memory. \subsection{Temporal vs Static Betweenness} In this section we show that the temporal betweenness centrality of the nodes of a temporal network provides novel insights compared to its static version. To do so, we computed, for two datasets from different domains, the exact ranking of the nodes according to their betweenness values. The goal of this experiment is to compare the two rankings (i.e., temporal and static) and understand whether the relative orderings are preserved, i.e., to verify whether the most central nodes in the static network are also the most central nodes in the temporal network. To this end, given a temporal network $T=(V,E)$, let $G_T=(V,\{(u,v) : \exists (u,v,t)\in E \})$ be its associated static network. We used the following two real-world networks: (i) HS, which is a temporal network representing a face-to-face interaction network among students; and (ii) FB, which is a Facebook user-activity network \cite{Viswanath2009} (see Table \ref{tab:datasets} for further details). We first computed the exact temporal and static betweenness values of the different nodes of the two networks. Then, we ranked the nodes by descending betweenness values. We now discuss how the top-$k$ ranked nodes vary from temporal to static on the two networks.
We report in Table \ref{tab:topK} (in Appendix \ref{app:suppldata}) the Jaccard similarity between the sets containing the top-$k$ nodes of the static and temporal networks. On HS, for $k=25$, only 11 nodes are top ranked in both rankings, which means that less than half of the top-25 nodes are central if only the static information is considered. The size of the intersection increases to $36$ for $k=50$, since the network has only 180 nodes. More interestingly, also on the Facebook network only a few temporally central nodes can be detected by considering only static information: only 9 of the top-25 nodes and 15 of the top-50 nodes. In order to better visualize the top-$k$ ranked nodes, we show their betweenness values in Figure \ref{subfig:topKvals}: note that there are many top-$k$ temporally ranked nodes having small static betweenness values, and vice versa. These experiments show the importance of studying the temporal betweenness centrality, which provides novel insights compared to the static version.
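The top-$k$ comparison above boils down to a Jaccard similarity between two ranked prefixes; a minimal Python helper (with hypothetical scores, since the real values come from the exact computations) is:

```python
def topk_jaccard(static_scores, temporal_scores, k):
    """Jaccard similarity of the top-k node sets under the two rankings."""
    top = lambda scores: set(sorted(scores, key=scores.get, reverse=True)[:k])
    a, b = top(static_scores), top(temporal_scores)
    return len(a & b) / len(a | b)

# Hypothetical betweenness scores: only one node is in both top-2 sets.
static = {"u": 3.0, "v": 2.0, "w": 1.0, "x": 0.5}
temporal = {"u": 0.1, "v": 3.0, "w": 2.5, "x": 1.0}
print(topk_jaccard(static, temporal, 2))   # -> 0.3333333333333333
```

Ties in the scores would make the top-$k$ cut arbitrary; the sketch ignores that corner case.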
\begin{figure}[t] \centering \begin{tabular}{lr} \subfloat[]{ \includegraphics[width=.46\linewidth]{media/plots/topFB} \label{subfig:topKvals}} \subfloat[]{ \includegraphics[width=.46\linewidth]{media/plots/EmailEU_absolute_errors} \label{subfig:errors_and_exacts}}\\ \subfloat[]{ \includegraphics[width=.47\linewidth]{media/plots/15daysFB} \label{fig:deltavarC}} \subfloat[]{ \includegraphics[width=.47\linewidth]{media/plots/1monthFB} \label{fig:deltavarD}} \end{tabular} \caption{ (\ref{subfig:topKvals}): static and temporal betweenness values of the top-50 ranked nodes of the dataset FB; (\ref{subfig:errors_and_exacts}): for dataset \texttt{EmailEu}, the deviations (or absolute errors) $|\tilde{b}(v)- b(v)|$ between the estimates $\tilde{b}(v)$ and the actual values $b(v)$ of the temporal betweenness centrality, for decreasing order of $b(v)$; (\ref{fig:deltavarC},\ref{fig:deltavarD}): comparison between the temporal betweenness values based on STP and RTP, for $\delta$=15 days (left) and $\delta$=1 month (right).} \end{figure} \begin{table*}[t] \caption{For each dataset, the average and maximum deviation between the estimate $\tilde{b}(v)$ and the actual temporal betweenness value $b(v)$ over all nodes $v$ and $10$ runs, respectively $Avg. \ Error$ and $\sup_{v\in V}|b(v)-\tilde{b}(v)|$, the theoretical upper bound $\varepsilon'$, the $Sample \ rate$ (\%) of pairs of nodes we sampled, the running time $t_{EXC}$ and peak RAM memory $MEM_{EXC}$ required by the exact approach \cite{Buss2020}, the running time $t_{\text{\algname}}$ and peak RAM memory $MEM_{\text{\algname}}$ required by \algname.
The symbol \ding{55} denotes that the exact computation of \cite{Buss2020} is not able to conclude on our machine.} \label{tab:results} \centering \scalebox{0.83}{ \begin{tabular}{ccccccc|cc} \toprule Dataset & Avg.\ Error & $\sup_{v\in V}|b(v)-\tilde{b}(v)|$ & $\varepsilon'$ & Sample rate (\%) & $t_{\text{EXC}}$ (sec) & $t_{\text{\algname}}$ (sec) & MEM$_{\text{EXC}}$ (GB) & MEM$_{\text{\algname}}$ (GB)\\ \midrule \texttt{CollegeMsg} & $1.74 \cdot 10^{-4}$ & $6.38 \cdot 10^{-3}$ & $2.27 \cdot 10^{-2}$ & 0.083 & 231 & \textbf{148} & 12.0 & \textbf{0.13} \\ \texttt{EmailEu} & $4.69 \cdot 10^{-4}$ & $1.35 \cdot 10^{-2}$ & $6.15 \cdot 10^{-2}$ & 0.093 & 7211 & \textbf{1808} & 23.9 & \textbf{2.1} \\ \texttt{Mathoverflow} & $6.35 \cdot 10^{-6}$ & $2.1 \cdot 10^{-3}$ & $5.38 \cdot 10^{-3}$ & 0.005 & 79492 & \textbf{36983} & 1004.3 & \textbf{6.8} \\ \texttt{FBWall} & $4.25 \cdot 10^{-6}$ & $5.89 \cdot 10^{-4}$ & $2.13 \cdot 10^{-3}$ & 0.003 & 11489 & \textbf{3145} & 738.0 & \textbf{11.1} \\ \texttt{Askubuntu} & \ding{55} & \ding{55} & $6.92 \cdot 10^{-3}$ & 0.00006 & \ding{55} & \textbf{35585} & $>$1008 & \textbf{20.3}\\ \texttt{Sms} & \ding{55} & \ding{55} & $1.54 \cdot 10^{-3}$ & 0.00231 & \ding{55} & \textbf{13020}&$>$1008 & \textbf{16.2}\\ \texttt{Superuser} & \ding{55} & \ding{55} & $1.02 \cdot 10^{-2}$ & 0.00003 & \ding{55} & \textbf{41856} &$>$1008 & \textbf{16.7}\\ \bottomrule \end{tabular} } \end{table*} \subsection{Accuracy and Resources of \algname} In this section we first assess the accuracy of the estimates $\tilde{B}(T)$ provided by \algname\ considering only the STP criterion, since for the RTP criterion no implemented exact algorithm exists. Then, we show the reduction of computational resources induced by \algname\ compared to the exact algorithm in~\cite{Buss2020}. To assess \algname's accuracy and its computational cost, we used four datasets, i.e., \texttt{CollegeMsg}, \texttt{EmailEu}, \texttt{Mathoverflow}, and \texttt{FBWall}. 
We first executed the exact algorithm, and then we fixed $\eta=0.1$ and set $\ell$ so that \algname\ runs within a fraction of the time required by the exact algorithm. The results we now present, which are described in detail in Table \ref{tab:results}, are all averaged over 10 runs (except for the RAM peak, which is measured over one single execution of the algorithms). Remarkably, even using less than $1\%$ of the overall pairs of nodes as sample size, \algname\ is able to estimate the temporal betweenness centrality values with very small average deviations, between $4 \cdot 10^{-6}$ and $5 \cdot 10^{-4}$, while obtaining a significant running time speed-up, between $\approx$$1.5\times$ and $\approx$$4\times$, with respect to the exact algorithm \cite{Buss2020}. Additionally, the amount of RAM used by \algname\ is significantly smaller than that of the exact algorithm in~\cite{Buss2020}: e.g., on the \texttt{Mathoverflow} dataset \algname\ requires only $6.8$ GB of RAM at peak, which is $147\times$ less than the $1004.3$ GB required by the exact state-of-the-art algorithm~\cite{Buss2020}. Furthermore, in all the experiments we found that the maximum deviation is at most one order of magnitude away from the theoretical upper bound $\varepsilon'$ guaranteed by Corollary \ref{cor:onbra}. Surprisingly, for two datasets (\texttt{EmailEu} and \texttt{Mathoverflow}) the maximum deviation and the upper bound $\varepsilon'$ are even of the same order of magnitude. Therefore we can conclude that the guarantees provided by Corollary \ref{cor:onbra} are often very sharp. In addition, \algname's accuracy is demonstrated by the fact that the deviation between the actual temporal betweenness centrality value of a node and its estimate obtained using \algname\ is about one order of magnitude smaller than the actual value, as we show in Figure \ref{subfig:errors_and_exacts} and Figure \ref{fig:fig_appendix} (in Appendix \ref{app:suppldata}).
Finally, we show in Table \ref{tab:results} that on the large datasets \texttt{Asku\-buntu}, \texttt{Sms}, and \texttt{Superuser} the exact algorithm \cite{Buss2020} is not able to conclude the computation on our machine (denoted with \ding{55}), since it requires more than 1008GB of RAM. Instead, \algname\ provides estimates of the temporal betweenness centrality values in less than $42$K seconds using at most $21$ GB of RAM. To conclude, \algname\ is able to estimate the temporal betweenness centrality with high accuracy, providing rigorous and sharp guarantees, while significantly reducing the computational resources required by the exact algorithm in \cite{Buss2020}. \subsection{\algname\ on RTP-based Betweenness} In this section we discuss how \algname\ can be used to analyze real-world networks by estimating the centrality values of the nodes for the temporal betweenness under the RTP criterion. We used the FB network, on which we computed a tight approximation of the temporal betweenness values ($\varepsilon'<10^{-4}$) of the nodes for different values of $\delta$, i.e., $\delta$=1 day, $\delta$=15 days, and $\delta$=1 month. For $\delta$=1 day, we found only 4 nodes with temporal betweenness value different from 0. This is surprising, since it highlights that information spreading across wall posts through RTPs on Facebook in 2008 required more than 1 day between consecutive interactions (i.e., slow spreading). We present the results for the other values of $\delta$ in Figures \ref{fig:deltavarC} and \ref{fig:deltavarD}, comparing them to the (exact) STP-based betweenness. Interestingly, 15 days are still not sufficient to capture most of the STP-based betweenness values of the different nodes, while with $\delta$=1 month the betweenness values are much closer to the STP-based values. While this behaviour is to be expected as $\delta$ increases, finding such values of $\delta$ helps to better characterize the dynamics over the network.
To conclude, \algname\ also enables novel analyses that cannot otherwise be performed with existing tools. \iffalse \begin{figure} \centering \begin{tabular}{lr} \includegraphics[width=.45\linewidth]{media/plots/15daysFB} & \includegraphics[width=.45\linewidth]{media/plots/1monthFB} \end{tabular} \caption{Correlation between the values of temporal betweenness based on STP and RTP, for two different values of $\delta$, $\delta$=15 days (left) and $\delta$=1 month (right).} \label{fig:deltavar} \end{figure} \fi \section{Discussion} In this work we presented \algname, the first algorithm that provides high-quality approximations of the temporal betweenness centrality values of the nodes in a temporal network, with rigorous probabilistic guarantees. \algname\ works under two different optimality criteria for the paths on which the temporal betweenness centrality is defined: the shortest temporal path (STP) and restless temporal path (RTP) criteria. To the best of our knowledge, \algname\ is the first algorithm enabling a practical computation under the RTP criterion. Our experimental evaluation shows that \algname\ provides high-quality estimates with tight guarantees, while remarkably reducing the computational costs compared to the state-of-the-art in \cite{Buss2020}, enabling analyses that would not otherwise be possible to perform. Finally, several interesting directions could be explored in the future, such as dealing with different optimality criteria for the paths, and employing sharper concentration inequalities to provide tighter guarantees on the quality of the estimates. \begin{acks} Part of this work was supported by the Italian Ministry of Education, University and Research (MIUR), under PRIN Project n. 20174LF3T8 \enquote{AHeAD} (efficient Algorithms for HArnessing networked Data) and the initiative \enquote{Departments of Excellence} (Law 232/2016), and by the University of Padova under project \enquote{SID 2020: RATED-X}.
\end{acks} \newpage \bibliographystyle{ACM-Reference-Format}
require File.expand_path(File.join(File.dirname(__FILE__), '..', '..', 'helper'))

context "Geo" do
  setup do
    run_with_rescue do
      CityGrid::API::Advertising::GeoLocation.search(
        "streetAddress" => "720 3rd ave",
        "city"          => "seattle",
        "state"         => "wa",
        "zipCode"       => "98007",
        :token          => ""
      )
    end
  end

  should("not be empty")      { !topic.empty? }
  should("return message OK") { topic.message }.equals("OK")
  should("return code 200")   { !topic.geocodeResponse.first.geoAccuracy.nil? }
end
\section{Introduction} The remarkable journey of modern cosmology started in 1998, when observational evidence showed that we are living in an accelerating universe and that the previous physical scenarios needed to be revisited. The introduction of the dark energy (DE) concept was necessary in order for the observational predictions to acquire a solid theoretical formulation. Dark energy is a component with high negative pressure that drives the acceleration of the universe; nevertheless, its nature has remained a mysterious chapter of scientific history, despite a series of investigations by a large number of researchers. The cosmological constant is the simplest DE fluid with the above features; however, the ``cosmological constant problem'' \cite{Weinberg:1988cp} and the possibility that the DE sector could be dynamical led to a number of explanations, mainly in two directions. The first is to consider that the DE sector corresponds to a peculiar extra fluid that fills the universe in the framework of general relativity \cite{Copeland:2006wr,Cai:2009zp}. The second direction is to consider that the DE fluid is an effective one, arising from a modification of the gravitational sector itself \cite{modgrav1,Capozziello:2011et,Cai:2015emx}. Independently of the underlying nature and the micro-physical theory of DE, one can introduce phenomenological parametrizations of the DE equation-of-state parameter $w_x = p_x/\rho_x$, where $p_x$ and $\rho_x$ are respectively the pressure and energy density of the (effective) DE perfect fluid, which is considered to have a dynamical character in general. Since for the moment we do not have any fundamental rule in favor of some specific equation-of-state parameter, we may consider various functional forms for $w_x$.
For a literature survey of various DE parametrizations and models we refer to the works \cite{Chevallier:2000qy, Linder:2002et, Cooray:1999da, Efstathiou:1999tm, Astier:2000as, Weller:2001gf, Jassal:2005qc, Linder:2005ne,Gong:2005de, Nesseris:2005ur, Feng:2004ff, Xia:2006rr, Basilakos:2006us, Nojiri:2006ww,Saridakis:2008fy,Barboza:2008rh,Saridakis:2009pj,Dutta:2009yb, Saridakis:2009ej, Ma:2011nc,Feng:2011zzo, Feng:2012gf, DeFelice:2012vd, Chen:2011cy,Basilakos:2013vya,Umilta:2015cta,Ballardini:2016cvy,DiValentino:2016hlg, Chavez:2016epc, DiValentino:2017zyq,DiValentino:2017gzb,Zhao:2017cud,DiValentino:2017rcr,Yang:2017amu, Marcondes:2017vjw,Yang:2017alx, Pan:2017zoh,Vagnozzi:2018jhn}. In general, the well-known DE parametrizations have two free parameters, $w_0$ and $w_a$, and the corresponding scenarios are usually denoted as $w_0w_a$CDM models, where $w_0$ marks the present value of $w_x$ and $w_a$ characterizes the dynamical nature of the DE sector. However, apart from the $w_0w_a$CDM parametrizations, one-parameter dynamical DE parametrizations, as well as models with more than two parameters, have also been introduced and investigated in the last years. Nevertheless, the one-parameter dynamical DE parametrizations have been much neglected in the literature compared to the DE parametrizations having two or more parameters. In principle we do not find any strong reason behind this neglect, and thus in this work we aim to investigate the features of this particular class of DE parametrizations and explore its cosmological viability against the recent observational evidence, taking into account the advantage that such parametrizations are more economical, having fewer free parameters than dark energy models with two or more of them. Hence, we introduce various one-parameter dynamical DE parametrizations that are primarily motivated on phenomenological grounds, and we perform a detailed observational confrontation.
In particular, we use data from cosmic microwave background (CMB) observations, from Joint light-curve analysis sample from Supernovae Type Ia observations (JLA), from baryon acoustic oscillations (BAO) distance measurements, as well as from cosmic chronometers Hubble parameter measurements (CC), performing additionally various combined analyses. The manuscript is organized as follows. In Section \ref{sec-2} we present the basic equations for a general dark-energy scenario at both the background and perturbation level, and we display the five one-parameter DE parametrizations that are going to be investigated. In Section \ref{sec-data} we describe the observational data sets that will be used. In Section \ref{sec-results} we perform the observational confrontation, extracting the observational constraints on the various cosmological quantities. After that in Section \ref{sec-baysian} we compare the present dynamical DE parametrizations mainly through the Bayesian analysis. Finally, we close the present work in Section \ref{sec-conclu} with a brief summary. \section{One-parameter parametrizations at background and perturbation levels} \label{sec-2} In this section we present the basic equations that determine the universe evolution at both the background and perturbation levels, and we introduce various one-parameter parametrizations for the dark-energy equation-of-state parameter. Throughout the work we consider the homogeneous and isotropic Friedmann-Lema\^{i}tre-Robertson-Walker (FLRW) geometry, with metric \begin{eqnarray} {\rm d}s^2 = -{\rm d}t^2 + a^2 (t) \left[\frac{{\rm d}r^2}{1-Kr^2} + r^2 \left(d \theta^2 + \sin^2 \theta d \phi^2\right)\right], \end{eqnarray} where $a(t)$ is the scale factor and $K$ determines the spatial curvature, with values $0$, $-1$ and $+1$ corresponding to spatially flat, open and closed universe, respectively. We consider a universe filled with radiation, baryons and cold dark matter, and we additionally consider the DE fluid. 
In this case the Friedmann equations, which determine the universe evolution at the background level, read as \begin{eqnarray} H^2 + \frac{K}{a^2} &=& \frac{8\pi G}{3} \rho_{tot},\label{efe1}\\ 2\dot{H} + 3 H^2 + \frac{K}{a^2} &=& - 8 \pi G\, p_{tot},\label{efe2} \end{eqnarray} with $G$ Newton's constant and $H=\dot{a}/a$ the Hubble function of the FLRW universe, with dots denoting derivatives with respect to cosmic time. In the above expressions we have introduced the total energy density and pressure as $\rho_{tot} = \rho_r +\rho_b +\rho_c +\rho_x$ and $p_{tot} = p_r + p_b + p_c + p_x$ respectively, with the symbols $r,\; b,\; c,\; x$ denoting the radiation, baryon, cold dark matter and dark energy fluids. Finally, for simplicity, in the following we will focus on the spatially flat case ($K=0$), since it is favored by observations. As usual we assume that the above sectors do not have any mutual interaction, and thus the conservation equation of each fluid is \begin{eqnarray}\label{cons} \dot{\rho}_i + 3 H (1 +w_i ) \rho_i = 0, \end{eqnarray} where $i \in \{ r, b, c, x\}$ and $p_i = w_i \rho_i$, $w_i$ being the barotropic state parameter of the $i$-th fluid. Note that out of equations (\ref{efe1}), (\ref{efe2}) and (\ref{cons}), only two are independent. Hence, using the known equation-of-state parameters $w_r=1/3$, $w_b=w_c=0$, one can explicitly solve the conservation equations (\ref{cons}) for radiation, baryons and cold dark matter, obtaining respectively $\rho_r =\rho_{r0}\, a^{-4}$, $\rho_b = \rho_{b0} \, a^{-3}$ and $\rho_c = \rho_{c0}\, a^{-3}$, with $\rho_{i0}$ the present value of $\rho_i$ and where we have set the present scale factor $a_0$ to $1$. Similarly, concerning the dark energy sector, equation (\ref{cons}) leads to \begin{eqnarray}\label{de-evol} \rho_{x}=\rho_{x,0}\,\left( \frac{a}{a_{0}}\right) ^{-3}\,\exp\left[ -3\int_{a_{0}}^{a}\frac{w_{x}\left( a'\right) }{a'}\,da' \right].
\end{eqnarray} Thus, the evolution equation (\ref{de-evol}) implies that the dynamics of DE can be determined as long as a specific parametrization of the DE equation of state is given. Having presented the equations that determine the universe evolution at the background level, we now proceed to the investigation of its evolution at the perturbation level, since this is related to the observed structure formation. In order to study the perturbation equations, one needs to consider the perturbed FLRW metric either in synchronous or in conformal Newtonian gauge. In the following we consider the former choice, in which the perturbed metric takes the form \begin{eqnarray} \label{perturbed-metric} ds^2 = a^2(\eta) \left [-d\eta^2 + (\delta_{ij}+h_{ij}) dx^idx^j \right], \end{eqnarray} where $\eta$ is the conformal time, and $\delta_{ij}$, $h_{ij}$ are respectively the unperturbed and the perturbed metric tensors. Now, in the synchronous gauge the conservation equations of energy and momentum for the $i$-th component of the fluid for a mode with wavenumber ${k}$ can be written as \cite{Mukhanov,Ma:1995ey, Malik:2008im}: \begin{eqnarray} &&\delta'_{i} = - (1+ w_{i})\, \left(\theta_{i}+ \frac{h'}{2}\right) - 3\mathcal{H}\left(\frac{\delta p_i}{\delta \rho_i} - w_{i} \right)\delta_i - 9 \mathcal{H}^2\left(\frac{\delta p_i}{\delta \rho_i} - c^2_{a,i} \right) (1+w_i) \frac{\theta_i} {{k}^2}, \label{per1} \\ &&\theta'_{i} = - \mathcal{H} \left(1- 3 \frac{\delta p_i}{\delta \rho_i}\right)\theta_{i} + \frac{\delta p_i/\delta \rho_i}{1+w_{i}}\, {k}^2\, \delta_{i} -{k}^2\sigma_i,\label{per2} \end{eqnarray} where primes mark derivatives with respect to conformal time and $\mathcal{H}= a^{\prime}/a$ is the conformal Hubble parameter. 
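As an illustration of how Eq. (\ref{de-evol}) fixes the DE dynamics once $w_x(a)$ is specified, the following Python sketch evaluates the ratio $\rho_x(a)/\rho_{x,0}$ numerically with a simple trapezoidal quadrature (the function name is ours; any standard integrator would serve equally well):

```python
import math

def de_density_ratio(w_of_a, a, a0=1.0, steps=2000):
    """Evaluate rho_x(a) / rho_{x,0} from
    rho_x = rho_{x,0} (a/a0)^{-3} exp[-3 * int_{a0}^{a} w_x(a')/a' da'],
    approximating the integral with the trapezoidal rule."""
    lo, hi = (a0, a) if a0 <= a else (a, a0)
    integral = 0.0
    if hi > lo:
        h = (hi - lo) / steps
        for i in range(steps):
            x0, x1 = lo + i * h, lo + (i + 1) * h
            integral += 0.5 * h * (w_of_a(x0) / x0 + w_of_a(x1) / x1)
        if a < a0:  # int_{a0}^{a} = -int_{a}^{a0}
            integral = -integral
    return (a / a0) ** (-3) * math.exp(-3.0 * integral)
```

Two quick sanity checks: for the cosmological constant, $w_x=-1$, the ratio is identically $1$, while a matter-like fluid with $w_x=0$ recovers the expected $a^{-3}$ dilution.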
Furthermore, $\delta_i = \delta \rho_i/\rho_i$ is the density perturbation for the $i$-th fluid, $\theta_{i}\equiv i k^{j} v_{j}$ denotes the divergence of the $i$-th fluid velocity, $h = h^{j}_{j}$ stands for the trace of the metric perturbations $h_{ij}$, and $\sigma_i$ is the anisotropic stress related to the $i$-th fluid. Finally, the quantity $c_{a,i}^2 = \dot{p}_i/\dot{\rho}_i$ denotes the adiabatic speed of sound of the $i$-th fluid, and it is given by $ c^2_{a,i} = w_i - \frac{w_i^{\prime}}{3\mathcal{H}(1+w_i)}$ in the case where we set the sound speed $c^2_{s} = \delta p_i / \delta \rho_i$ to $1$. In the following analysis we neglect the anisotropic stress for simplicity. In this work we are interested in investigating one-parameter DE equation-of-state parametrizations. In particular, we consider five such parametrizations given by: \begin{eqnarray} &&{\rm Model~I}:~~~~w_x(a)=w_0\exp(a-1),\label{model1}\\ &&{\rm Model~II}:~~~~w_x(a)=w_0a[1-\log(a)],\label{model2}\\ &&{\rm Model~III}:~~~~w_x(a)=w_0a\exp(1-a),\label{model3}\\ &&{\rm Model~IV}:~~~~w_x(a)=w_0a[1+\sin(1-a)],\label{model4}\\ &&{\rm Model~V}:~~~~w_x(a)=w_0a[1+\arcsin(1-a)],\label{model5} \end{eqnarray} where $w_0$ is the only free parameter, corresponding to the dark energy equation-of-state parameter at present. In order to provide a more transparent picture of the behavior of the above parametrizations, in Fig.~\ref{fig:wa} we depict $w_x(a)$, taking two cases for $w_0$, namely one lying in the quintessence and one lying in the phantom regime. As we can see, in all models $w_x(a)$ presents a decreasing behavior. 
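The five parametrizations (\ref{model1})-(\ref{model5}) can be transcribed directly; the short Python sketch below does so (the function names are ours) and makes explicit that all of them reduce to $w_x(a_0=1)=w_0$ at present:

```python
import math

# Models I-V of the one-parameter DE equation of state w_x(a), with a = 1 today.
def w_model_I(a, w0):   return w0 * math.exp(a - 1.0)
def w_model_II(a, w0):  return w0 * a * (1.0 - math.log(a))
def w_model_III(a, w0): return w0 * a * math.exp(1.0 - a)
def w_model_IV(a, w0):  return w0 * a * (1.0 + math.sin(1.0 - a))
def w_model_V(a, w0):   return w0 * a * (1.0 + math.asin(1.0 - a))

ALL_MODELS = [w_model_I, w_model_II, w_model_III, w_model_IV, w_model_V]
```

Evaluating these over $a\in(0,1]$ for $w_0=-0.9$ or $w_0=-1.2$ reproduces the decreasing trends shown in Fig.~\ref{fig:wa}.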
\begin{figure}[ht] \centering \includegraphics[width=0.495\textwidth]{wa2.pdf} \includegraphics[width=0.495\textwidth]{wa1.pdf} \caption{{\it{The evolution of the one-parameter dynamical DE equation-of-state parametrizations (\ref{model1})-(\ref{model5}) as a function of the scale factor, for $w_0= -0.9$ (left graph) and $w_0 = -1.2$ (right graph).}}} \label{fig:wa} \end{figure} \section{Observational data} \label{sec-data} In this section we proceed to a detailed observational confrontation of the one-parameter dynamical DE equation-of-state parametrizations (\ref{model1})-(\ref{model5}) presented in the previous section. We analyze several combinations of cosmological data, by considering the six cosmological parameters of the standard $\Lambda$CDM paradigm: the baryon and the cold dark matter energy densities $\Omega_{\rm b}h^2$ and $\Omega_{\rm c}h^2$, the ratio between the sound horizon and the angular diameter distance at decoupling $\Theta_{s}$, the reionization optical depth $\tau$, and the spectral index and the amplitude of the scalar primordial power spectrum $n_\mathrm{S}$ and $A_\mathrm{S}$. Moreover, for the various models we add the free parameter $w_0$, which parametrizes the DE evolution. All these $7$ free parameters are explored within the range of the conservative flat priors listed in Table~\ref{priors}. 
\begin{table}[ht] \begin{center} \begin{tabular}{|c|c|} \hline Parameter & Prior \\ \hline $\Omega_{\rm b} h^2$ & $[0.013,0.033]$ \\ $\Omega_{\rm c} h^2$ & $[0.001,0.99]$ \\ $\Theta_{\rm s}$ & $[0.5,10]$ \\ $\tau$ & $[0.01,0.8]$ \\ $n_\mathrm{S}$ & $[0.7,1.3]$ \\ $\log A$ & $[1.7, 5.0]$ \\ $w_0$ & $[-2,0]$ \\ \hline \end{tabular} \end{center} \caption{Summary of the flat priors on the cosmological parameters assumed in this work, for the different DE parametrizations (\ref{model1})-(\ref{model5}).} \label{priors} \end{table} We derive the bounds on the cosmological parameters by analyzing the full range of the 2015 Planck temperature and polarization cosmic microwave background (CMB) angular power spectra, and we call this combination ``CMB''~\cite{Adam:2015rua, Aghanim:2015xee}. Additionally, we consider the Joint light-curve analysis sample from Supernovae Type Ia, and we refer to this dataset as ``JLA''~\cite{Betoule:2014frx}. Furthermore, we add the baryon acoustic oscillation (BAO) distance measurements, and we call them ``BAO''~\cite{Beutler:2011hx, Ross:2014qpa,Gil-Marin:2015nqa}. Finally, we use the Hubble parameter measurements from cosmic chronometers (CC), and we refer to them as ``CC''~\cite{Moresco:2016mzx}. In order to statistically analyze the several combinations of datasets, exploring the different dynamical DE scenarios, we use our modified version of the publicly available Monte-Carlo Markov Chain package \texttt{Cosmomc} \cite{Lewis:2002ah}, including the support for the Planck data release 2015 Likelihood Code \cite{Aghanim:2015xee}\footnote{See \url{http://cosmologist.info/cosmomc/}}. The package implements a convergence diagnostic based on the Gelman-Rubin statistic, as well as an efficient sampling of the posterior distribution using the fast/slow parameter de-correlations \cite{Lewis:2013hha}.
\section{Results} \label{sec-results} In this section we present the observational constraints and their implications for all the one-parameter DE parametrizations (\ref{model1})-(\ref{model5}). All models are confronted initially with CMB data alone, and then with different combinations of cosmological data. In the Appendix we provide all the tables containing the constraints on the model parameters for all observational datasets used in this work. Additionally, in Fig. \ref{whisker} we concisely display the constraints on the present value of the dark energy equation of state $w_0$, for all models, considering all observational datasets. In the following we investigate the one-parameter DE models in detail, presenting their observational consequences.\\ \begin{figure}[ht] \includegraphics[width=0.95\textwidth]{w.pdf} \caption{{\it{Whisker graph with the 68\% CL (solid line) and 95\% CL (dashed line) regions for the free model parameter $w_0$ of the DE parametrizations (\ref{model1})-(\ref{model5}), for the combination of datasets considered in this work.}}} \label{whisker} \end{figure} We start by investigating Model I of (\ref{model1}), namely $w_x(a)=w_0\exp(a-1)$. In Fig.~\ref{fig:model1-cmb-mpower} we can see the effects of different $w_0$ values on the temperature and matter power spectra. The results of the observational analysis of this model can be seen in Table \ref{tab:results-model1} of the Appendix, where we display the 68\% and 95\% confidence level (CL) constraints for various quantities, while the full contour plots are presented in Fig.~\ref{fig:tri1}. As we observe from Table \ref{tab:results-model1} (see Appendix), the CMB data alone allow a very large present value of the Hubble constant, with significantly large error bars: $H_0= 74_{- 7}^{+ 11}$ at 68\% CL ($H_0 = 74_{-15}^{+14}$ at 95\% CL).
The constraint on $H_0$ is actually very close to its local measurements \cite{Riess:2016jrr}, recently confirmed by \cite{R18} and \cite{Birrer:2018vtm}. \begin{figure}[ht] \centering \includegraphics[width=0.49\textwidth]{1CMBpower_I.pdf} \includegraphics[width=0.49\textwidth]{1Mpower_I.pdf} \caption{{\it{The CMB TT spectra (left graph) and the matter power spectra (right graph), for Model I of (\ref{model1}), namely $w_x(a)=w_0\exp(a-1)$, for various values of the free model parameter $w_0$.}}} \label{fig:model1-cmb-mpower} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.83\textwidth]{I_3vs_contour.pdf} \caption{{\it{The 2D contour plots for several combinations of various quantities for Model I of (\ref{model1}), namely $w_x(a)=w_0\exp(a-1)$, and the corresponding 1D posterior distributions.}}} \label{fig:tri1} \end{figure} From Table \ref{tab:results-model1} we see that the present value of the DE equation-of-state parameter for CMB alone is found to prefer a phantom dark energy scenario, namely $w_0 < -1$, at more than 95\% CL. Consequently, the matter density parameter decreases and acquires a very low value ($\Omega_{m0}= 0.268_{-0.081}^{+0.038}$ at 68\% CL). However, since $\Omega_{m0}$ and $\sigma_8$ are anti-correlated, while $\sigma_8$ is positively correlated with $H_0$ (see Fig.~\ref{fig:tri1}), this does not correspond to an alleviation of the tension in $S_8=\sigma_8 \sqrt{\Omega_{m0}/0.3}$ between Planck's indirect estimation and its direct measurements from cosmic shear experiments like the Canada France Hawaii Lensing Survey (CFHTLenS)~\cite{Heymans:2012gg, Erben:2012zw}, the Kilo Degree Survey of $450$ deg$^2$ of imaging data (KiDS-450)~\cite{Hildebrandt:2016iqg}, and the Dark Energy Survey (DES)~\cite{Abbott:2017wau}.
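The $S_8$ combination just defined is straightforward to evaluate; the Python snippet below does so for illustrative input values, not the fitted values reported in the tables:

```python
import math

def S8(sigma8: float, omega_m0: float) -> float:
    """S_8 = sigma_8 * sqrt(Omega_m0 / 0.3)."""
    return sigma8 * math.sqrt(omega_m0 / 0.3)
```

For $\Omega_{m0}=0.3$ the combination reduces to $\sigma_8$ itself, and lowering $\Omega_{m0}$ at fixed $\sigma_8$ lowers $S_8$.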
When the BAO data are added to CMB, the constraints on the model parameters are significantly improved and the error bars on most of the parameters, in particular $w_0$, $\Omega_{m0}$, $\sigma_8$ and $H_0$, are decreased. The mean value of the Hubble constant slightly shifts towards a lower value, and the DE equation of state at present, $w_0$, moves towards a smaller value ($w_0 = -1.48\pm 0.10$ at 68\% CL) compared to its estimation from CMB alone ($w_0 < -1.45$ at 68\% CL). As one can see, the CMB+BAO data also assure the validity of $w_0< -1$ at more than 99\% CL. The interesting output of this analysis is that the constraint on $H_0$ is again found to be very close to its estimation by local measurements \cite{Riess:2016jrr}. The addition of JLA to the former data set combination (i.e., CMB+BAO) further improves the cosmological constraints, as one can clearly see from Table \ref{tab:results-model1} (see Appendix). In particular, we see that $H_0$ again shifts down and $w_0$ up, with decreasing error bars. An analogous improvement of the bounds can be seen for $\Omega_{m0}$ and $\sigma_8$. Although the estimation of $H_0$ from this analysis decreases in comparison to the previous CMB and CMB+BAO results, it can still match, within 95\% CL, the direct estimation \cite{Riess:2016jrr}. Furthermore, the DE equation of state at present is again found to be in the phantom regime. We close the analysis by adding the CC dataset; nevertheless, the results for the data combination CMB+BAO+JLA+CC do not exhibit significant differences from the previous case CMB+BAO+JLA. In summary, the observational analysis for Model I shows that $w_0< -1$ at more than 95\% CL for CMB only, while the tension on $H_0$ seems to be alleviated. The addition of JLA shifts $H_0$ towards lower values, but still in agreement within 2$\sigma$ with \cite{Riess:2016jrr}, while the addition of CC does not affect the results significantly.
The contour plot in the $w_0-H_0$ plane can be seen in the lower left graph of Fig.~\ref{fig:tri1}. The preference for a phantom DE equation of state is due to the better fit at the large scales of the temperature power spectrum, which prefers a lower quadrupole with respect to the $\Lambda$CDM scenario, as can be clearly seen in Fig.~\ref{fig:bf1}. \begin{figure}[ht] \centering \includegraphics[width=0.495\textwidth]{CMBpower_I_vs_LCDM.pdf} \includegraphics[width=0.495\textwidth]{CMBpower_I_vs_LCDM_lowl.pdf} \caption{{\it{Comparison between the best fit for the $\Lambda$CDM paradigm and for Model I of (\ref{model1}), namely $w_x(a)=w_0\exp(a-1)$. While the curves are almost indistinguishable in the high multipole range, at large scales Model I can better recover the lower quadrupole of the data.}}} \label{fig:bf1} \end{figure} Finally, comparing the results obtained for Model I with the constraints released by the Planck collaboration \cite{Aghanim:2018eyx} for the $w$CDM or $w_0w_a$CDM models, we can notice that for Model I using only CMB data the $H_0$ value is well constrained by the data and is close to its directly measured value \cite{Riess:2016jrr}, while it has a slightly lower limit for Planck's extended scenario (i.e., the $w_0w_a$CDM scenario). On the other hand, when adding the BAO data, the $H_0$ value is still high and in agreement with \cite{Riess:2016jrr}, while in the Planck case the Hubble constant decreases, leading to the aforementioned tension. In this context we refer to a recent work \cite{Lin:2017bhs} discussing the tensions in the cosmological parameters from various observational data and some possible explanations. \\ We now investigate Model II of (\ref{model2}), namely $w_x(a)=w_0a[1-\log(a)]$. In Table \ref{tab:results-model2} of the Appendix we summarize the observational constraints arising from various data combinations.
We do not explicitly present the corresponding contour plots since they are similar to the ones of Model I. Comparing the results with those of Model I above, we can see that for Model II, considering the CMB data alone, $H_0$ acquires higher values ($H_0 = 81_{- 9} ^{+ 12}$ at 68\% CL) and $w_0$ indicates strong evidence for a phantom equation of state, which remains at more than 95\% CL. Moreover, similarly to Model I, for Model II we also observe that the combinations CMB+BAO, CMB+BAO+JLA and CMB+BAO+JLA+CC significantly improve the constraints and reduce the error bars on the parameters. In particular, the $H_0$ mean value shifts towards lower values and we find $w_0< -1$ at more than 95\% CL for all data combinations. Note that from Table \ref{tab:results-model2} (see Appendix) we see that for the data set CMB+BAO the estimated value of $H_0$ is in agreement within 1 standard deviation with the local estimation of \cite{Riess:2016jrr} and thus the $H_0$-tension is alleviated ($H_0$ is higher than the one estimated by Planck 2015 \cite{Ade:2015xua} for the base $\Lambda$CDM scenario, and it is in perfect agreement with \cite{Riess:2016jrr}). Additionally, for the last two combinations CMB+BAO+JLA and CMB+BAO+JLA+CC we observe that, while the Hubble constant is always in agreement within $2\sigma$ with \cite{Riess:2016jrr}, in contrast to Model I, $w_0$ prefers a lower phantom mean value, but still with high significance. Finally, similar to the previous model, the $\sigma_8$ tension is not reconciled. \\ We proceed to the investigation of Model III of (\ref{model3}), namely $w_x(a)=w_0a\exp(1-a)$. Using the same observational datasets, in Table \ref{tab:results-model3} of the Appendix we summarize the observational constraints on this model.
As we can see, for the CMB data alone the Hubble parameter acquires an even larger mean value in comparison to the previous models, while the DE equation-of-state parameter at present obtains a smaller value, namely $w_0 = -1.63_{-0.37}^{+0.38}$ at 95\% CL. Similarly to the previous model, we find that the inclusion of any external data set, namely BAO, JLA or CC, to CMB significantly improves the constraints, and $w_0 <-1$ is still valid up to 95\% CL. For the combination of CMB+BAO data we see that the estimated value of $H_0 = 71.4_{- 1.6}^{+ 1.4}$ (at 68\% CL) is in perfect agreement with the local estimation of \cite{Riess:2016jrr}, alleviating the $H_0$-tension. Moreover, concerning $w_0$ we can note that it is constrained to be $w_0 = -1.239_{- 0.049}^{+ 0.060}$ (at 68\% CL), which is phantom at more than $3\sigma$. The addition of JLA to CMB+BAO decreases the error bars on $H_0$, $\Omega_{m0}$ and $\sigma_8$, while within 95\% CL this model seems to alleviate the tension on $H_0$. Finally, the combination CMB+BAO+JLA+CC does not offer any notable differences compared to the analysis with CMB+BAO+JLA, and thus similar conclusions are achieved.\\ We now investigate Model IV of (\ref{model4}), namely $w_x(a)=w_0a[1+\sin(1-a)]$. In Table \ref{tab:results-model4} of the Appendix we summarize the observational constraints arising from various data combinations. As we see, the estimations of the Hubble parameter for all the dataset combinations are shifted towards higher values than in $\Lambda$CDM. For the CMB data only, $H_0$ acquires values comparable with those of Model III, i.e., $H_0 = 84.3_{- 6.5}^{+ 9.9}$ at 68\% CL. As before, the inclusion of any external data set significantly improves the constraints on the cosmological parameters, decreasing the error bars. A common feature for all the analyses is that $w_0$ remains in the phantom regime at more than 95\% CL.
Furthermore, for this model the CMB+BAO+JLA and CMB+BAO+JLA+CC data combinations favor a phantom DE equation of state at many standard deviations. Additionally, it is clearly seen that the $H_0$-tension is alleviated for all the combinations considered, apart from the CMB data alone, which predict a quite high $H_0$ value. \\ We close our analysis with the investigation of Model V of (\ref{model5}), namely $w_x(a)=w_0a[1+\arcsin(1-a)]$. In a similar fashion, using the same observational datasets, in Table \ref{tab:results-model5} of the Appendix we summarize the observational constraints on this model. As we observe, this model maintains a trend similar to that of the previous four dynamical DE models. The present value of the DE equation-of-state parameter is constrained to the phantom regime at up to 99\% CL. The Hubble parameter acquires a very large value for the CMB data only ($H_0 = 82.9_{-7.0}^{+ 12}$ at 68\% CL) with large error bars; however, for the other data combinations $H_0$ and its error bars decrease, and it becomes clear that the $H_0$-tension can be alleviated. \section{Model Comparison and the Bayesian Evidence} \label{sec-baysian} In the previous section we presented the observational analysis and extracted the constraints on various cosmological parameters of the five examined models. Concerning the dark energy equation of state at present, $w_0$, we have already presented its constraints at 68\% and 95\% CL through the whisker graph in Fig.~\ref{whisker} for all the one-parameter DE models, considering all the observational datasets employed in this work. As we mentioned above, we observe that in all models a phantom DE equation-of-state parameter at the current time is favored.
Moreover, from Fig.~\ref{whisker} we may also note that the extracted $w_0$ values for Models II and III using the common datasets, namely CMB+BAO, CMB+BAO+JLA and CMB+BAO+JLA+CC, are relatively close to the cosmological constant boundary $w_0 = -1$, compared to the other three models. Additionally, in order to present in a more transparent way the alleviation of the $H_0$-tension, in Fig.~\ref{H0w0} we summarize the contour plots in the $w_0-H_0$ plane for all the examined models. From the figure one can notice that the parameters $H_0$ and $w_0$ are correlated with each other. \begin{figure}[ht] \includegraphics[width=0.395\textwidth]{I_3vs_2d_w0H0.pdf} \includegraphics[width=0.42\textwidth]{II_3vs_2d_w0H0.pdf} \includegraphics[width=0.42\textwidth]{III_3vs_2d_w0H0.pdf} \includegraphics[width=0.405\textwidth]{IV_3vs_2d_w0H0.pdf} \includegraphics[width=0.42\textwidth]{V_3vs_2d_w0H0.pdf} \caption{{\it{Contour plots at the $68 \%$ and $95 \%$ CL on the $w_0-H_0$ plane for the five models of one-parameter DE parametrizations (\ref{model1})-(\ref{model5}), for various combinations of datasets. The gray horizontal band is the 68\% CL Hubble parameter value corresponding to the direct measurement of \cite{Riess:2016jrr}, while the dotted vertical line marks the cosmological constant value $w_0=-1$. }}} \label{H0w0} \end{figure} The question that arises naturally is which of the five models exhibits a better behavior, and moreover how efficient they are compared to the standard $\Lambda$CDM cosmology. Hence, we close our work by examining the Bayesian evidence of each of the five models analyzed above, compared to the reference $\Lambda$CDM cosmological scenario. The Bayesian evidence plays a crucial role in determining the observational support of any cosmological model.
The involved calculation is performed through the publicly available code \texttt{MCEvidence} \cite{Heavens:2017hkr,Heavens:2017afc}\footnote{See \href{https://github.com/yabebalFantaye/MCEvidence}{github.com/yabebalFantaye/MCEvidence}.}. We mention that \texttt{MCEvidence} needs only the MCMC chains that are used to extract the parameters of the models. In Bayesian analysis one needs to evaluate the posterior probability of the model parameters $\theta$, given a particular observational dataset $x$ with any prior information for a model $M$. Using the Bayes theorem one can write \begin{eqnarray}\label{BE} p(\theta|x, M) = \frac{p(x|\theta, M)\,\pi(\theta|M)}{p(x|M)}, \end{eqnarray} where the quantity $p(x|\theta, M)$ refers to the likelihood as a function of $\theta$ with $\pi(\theta|M)$ the prior information. The quantity $p(x|M)$ that appears in the denominator of (\ref{BE}) is known as the Bayesian evidence used for the model comparison. Let us note that this Bayesian evidence is the integral over the non-normalized posterior $\tilde{p} (\theta|x, M) \equiv p(x|\theta,M)\,\pi(\theta|M)$, given by \begin{eqnarray} E \equiv p(x|M) = \int d\theta\, p(x|\theta,M)\,\pi(\theta|M). \end{eqnarray} Now, for any cosmological model $M_i$ and the reference model $M_j$ (the reference model is the one with respect to which we compare the observational viability), the posterior probability is given by the following law: \begin{eqnarray} \frac{p(M_i|x)}{p(M_j|x)} = \frac{\pi(M_i)}{\pi(M_j)}\,\frac{p(x| M_i)}{p(x|M_j)} = \frac{\pi(M_i)}{ \pi(M_j)}\, B_{ij}, \end{eqnarray} where the quantity $B_{ij} = \frac{p(x| M_i)}{p(x|M_j)}$ is the Bayes factor of the model $M_i$ with respect to the reference model $M_j$. Depending on different values of $B_{ij}$ (or equivalently $\ln B_{ij}$) we quantify the observational support of the model $M_i$ over the model $M_j$. 
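For concreteness, the mapping from a Bayes factor to a qualitative strength of evidence on the revised Jeffreys scale used below can be sketched in a few lines of Python (the thresholds follow Table \ref{tab:jeffreys}; the function name is ours):

```python
def jeffreys_strength(ln_B_ij: float) -> str:
    """Classify evidence on the revised Jeffreys scale. The magnitude
    |ln B_ij| sets the strength; a negative ln B_ij means the reference
    model M_j is preferred over M_i with that strength."""
    magnitude = abs(ln_B_ij)
    if magnitude < 1.0:
        return "Weak"
    if magnitude < 3.0:
        return "Definite/Positive"
    if magnitude < 5.0:
        return "Strong"
    return "Very strong"
```

For instance, the value $\ln B_{ij}=-6.0$ obtained for Model I with CMB+BAO+JLA is classified as very strong evidence for the reference $\Lambda$CDM scenario.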
Here we use the widely accepted Jeffreys scales \cite{Kass:1995loi} shown in Table \ref{tab:jeffreys}, which imply that for $B_{ij} > 1 $ the observational data support model $M_i$ more strongly than model $M_j$. The negative values of $\ln B_{ij}$ reverse the conclusion, that is the reference model $M_j$ is preferred over $M_i$. \begin{center} \begin{table}[!h] \begin{tabular}{cc} \hline\hline $\ln B_{ij}$ &$\ \ $ Strength of evidence for model ${M}_i$ $\ \ $ \\ \hline $0 \leq \ln B_{ij} < 1$ & Weak \\ $1 \leq \ln B_{ij} < 3$ & Definite/Positive \\ $3 \leq \ln B_{ij} < 5$ & Strong \\ $\ln B_{ij} \geq 5$ & Very strong \\ \hline\hline \end{tabular} \caption{ Revised Jeffreys scale used to quantify the observational support of model $M_i$ with respect to the reference model $M_j$ \cite{Kass:1995loi}. } \label{tab:jeffreys} \end{table} \end{center} \begin{table} \begin{center} \begin{tabular}{ccccccccc} \hline\hline Dataset & Model & $\ln B_{ij}$ & ~~Strength of evidence for reference $\Lambda$CDM scenario \\ \hline CMB & Model I & $-1.9$ & Definite/Positive\\ CMB+BAO & Model I & $-2.9$ & Definite/Positive\\ CMB+BAO+JLA & Model I & $-6.0$ & Very Strong\\ CMB+BAO+JLA+CC & Model I & $-5.2$ & Very Strong\\ \hline\hline CMB & Model II & $-1.7$ & Definite/Positive\\ CMB+BAO & Model II & $-2.3$ & Definite/Positive\\ CMB+BAO+JLA & Model II & $-3.1$ & Strong\\ CMB+BAO+JLA+CC & Model II & $-3.0$ & Strong\\ \hline \hline CMB & Model III & $-1.6$ & Definite/Positive\\ CMB+BAO & Model III & $-2.6$ & Definite/Positive\\ CMB+BAO+JLA & Model III & $-4.2$ & Strong\\ CMB+BAO+JLA+CC & Model III & $-3.9$ & Strong\\ \hline \hline CMB & Model IV & $-2.1$ & Definite/Positive\\ CMB+BAO & Model IV & $-3.7$ & Strong \\ CMB+BAO+JLA & Model IV & $-6.9$ & Very Strong \\ CMB+BAO+JLA+CC & Model IV & $-7.2$ & Very Strong \\ \hline \hline CMB & Model V & $-2.8$ & Definite/Positive \\ CMB+BAO & Model V & $-2.9$ & Definite/Positive\\ CMB+BAO+JLA & Model V & $-5.8$ & Very Strong\\ CMB+BAO+JLA+CC & 
Model V & $-5.3$ & Very Strong\\ \hline \hline\hline \end{tabular} \caption{Summary of the Bayes factors values $\ln B_{ij}$ calculated for the five one-parameter DE models (\ref{model1})-(\ref{model5}), with respect to the reference $\Lambda$CDM scenario. The negative sign indicates that the reference scenario is preferred over the fitted models.} \label{tab:bayesian} \end{center} \end{table} In Table \ref{tab:bayesian} we present the values of $\ln B_{ij}$ calculated for the five one-parameter DE models (\ref{model1})-(\ref{model5}) analyzed in the previous section, for various observational datasets, compared to the reference $\Lambda$CDM scenario. From the values of $\ln B_{ij}$ we can see that Model II and Model III present a better behavior than the other three analyzed models. However, comparing to all models the reference $\Lambda$CDM scenario is favored. Nevertheless, we mention here that, interestingly enough, the one-parameter DE parametrizations considered in the present work seem to behave similarly or be less disfavored with respect to $\Lambda$CDM scenario comparing with two-parameter DE parametrizations \cite{Ma:2011nc,Feng:2011zzo,Yang:2017alx,Pan:2017zoh}. This is an indication that one-parameter DE models can indeed be efficient in describing the universe evolution. \section{Summary and Conclusions} \label{sec-conclu} The phenomenological parametrizations of DE equation of state can be very helpful for the investigation of DE features, since they are of general validity and can describe the DE sector independently of whether it is an extra peculiar fluid in the framework of general relativity or it is effectively of gravitational origin. However, although in the literature there has been a large amount of research on DE parametrizations which involve two or more free parameters, the one-parameter parametrizations seem to be underestimated. 
In this work we performed a detailed observational confrontation of several one-parameter DE parametrizations, with various combinations of datasets. In particular, we used data from cosmic microwave background (CMB) observations, from the Joint Light-curve Analysis (JLA) sample of Type Ia Supernovae, from baryon acoustic oscillations (BAO) distance measurements, as well as from cosmic chronometer (CC) Hubble parameter measurements, and we additionally performed various combined analyses. Our analyses revealed that all the examined one-parameter dynamical DE models favor a phantom DE equation-of-state at present time, $w_0$, and this remains valid at more than 95\% CL, confirming the result obtained in various other works in different contexts~\cite{DiValentino:2017iww,DiValentino:2015ola,DiValentino:2016hlg,DiValentino:2017zyq,Yang:2017ccc,Yang:2017zjs,Mortsell:2018mfj,Yang:2018euj,Yang:2018uae}. The inclusion of any external dataset to CMB improves the fitting and decreases the errors significantly without any change in the conclusion. Concerning the present value of the Hubble parameter $H_0$, we found that the CMB data alone lead to large error bars; however, the inclusion of other datasets decreases them significantly, with the favored $H_0$ value being in perfect agreement with its direct measurements. Hence, we deduce that one-parameter DE models can provide a solution to the known $H_0$-tension between local measurements and Planck indirect ones. This is one of the main results of the present work. Nevertheless, the possible $\sigma_8$-tension does not seem to be reconciled, since in all models the favored $\sigma_8$ value is similar to Planck's estimated one. Lastly, in order to examine which of the five models fits the data better, as well as in order to compare them with the standard $\Lambda$CDM cosmological scenario, we computed their Bayesian evidences using the \texttt{MCEvidence} code (summarized in Table \ref{tab:bayesian}). 
As we saw, Model II and Model III are relatively close to $\Lambda$CDM (this can also be seen from the whisker graph in Fig.~\ref{whisker}, where the values of $w_0$ for Model II and Model III are relatively close to $w_0 = -1$ compared to the other models). However, the reference $\Lambda$CDM scenario is still favored compared to all one-parameter dynamical DE models. Nevertheless, these one-parameter DE models have similar or better efficiency in fitting the data compared with the two-parameter DE parametrizations analyzed in the literature, taking into account their advantage that they are more economical and have one free parameter fewer. This is an indication that one-parameter DE models can indeed be efficient in describing the universe evolution, and thus they deserve a thorough investigation. \begin{acknowledgments} The authors would like to thank an anonymous referee for essential suggestions that improved the presentation and the quality of the manuscript. WY was supported by the National Natural Science Foundation of China under Grants No. 11705079 and No. 11647153. SP acknowledges the research grant under Faculty Research and Professional Development Fund (FRPDF) Scheme of Presidency University, Kolkata, India. EDV acknowledges support from the European Research Council in the form of a Consolidator Grant with number 681431. SC acknowledges the Mathematical Research Impact Centric Support (MATRICS), project reference no. MTR/2017/000407, by the Science and Engineering Research Board, Government of India. This article is based upon work from CANTATA COST (European Cooperation in Science and Technology) action CA15117, EU Framework Programme Horizon 2020. \end{acknowledgments}
\section{Introduction} \label{sect:intro} The 3.6m DOT was commissioned at the Devasthal observatory of Aryabhatta Research Institute of observational sciencES (ARIES), Nainital (India) \cite{2018BSRSL..87...29K}. The Devasthal Observatory is situated in the Himalayan regions of Uttarakhand at $\sim 2450$ meters above the mean sea level with geographical coordinates of $29^{\circ}.360$ N, $79^{\circ}.690$ E. This location lies in the middle of the $180^{\circ}$-wide longitude-gap between the Canary Islands ($\sim 20^{\circ}$ W) and Eastern Australia ($\sim 160^{\circ}$ E), making it suitable for observations of time-critical astronomical events due to the availability of several moderate aperture telescopes. The DOT uses an $f/9$ Ritchey-Chr\'{e}tien (RC) system supported on an alt-azimuth mount \cite{Sagar_2019, 2012SPIE.8444E..1VN}. The aperture of this telescope is appropriate for medium-resolution spectroscopy and observations of faint sources. A low-dispersion spectrograph-cum-imager, ARIES Devasthal Faint Object Spectrograph (ADFOSC), has been developed in ARIES for spectroscopy and imaging of the celestial objects \cite{2019arXiv190205857O}. The spectrograph uses a fixed focal reducer, which converts the incoming f/9 optical beam from the telescope into a faster $\sim f/4.2$ beam. The spectrograph can be used in both spectroscopic and imaging modes by selecting the instrument's corresponding optical elements (e.g. filters, grism, slit, etc.) with the help of a GUI-based instrument control software \cite{10.1117/12.2233082}. In either mode, a charge-coupled device (CCD) is required to detect and record the data. A CCD detector system/imager has been designed and assembled in ARIES in technical collaboration with the Herzberg Institute of Astrophysics (HIA), Canada. \begin{figure} \centering \includegraphics[width=\columnwidth]{CCD.pdf} \caption{The ADFOSC CCD camera setup in the laboratory. 
The Camera consists of a CCD detector, a controller, a pressure gauge, a dewar which is cooled using a heat-exchange cryogenic system.} \label{fig:Camera} \end{figure} \label{ccd_details} We performed a detailed characterization of the CCD system before commissioning it for scientific observations, both in the laboratory and on the sky. This included estimating parameters like bias level, readout noise, and thermal noise in the dark room. We then performed iterative experiments in the laboratory to optimize the overall system performance and verified the CCD for cosmetic defects. We demonstrate a method to calculate the dark signal of the CCD at different temperatures using the bias frames. As the CCD is developed for the ADFOSC instrument, we also estimated the spectral dispersion on the detector using the lamp spectra. After optimization in the laboratory environment, we performed similar experiments over the night sky on the 3.6m DOT to verify the on-sky performance of the detector system. The paper discusses the different methodologies employed for characterizing the performance of the CCD system. The test setup used for performing different tests to optimize the system parameters is detailed in section \ref{characterization}. We also discuss various experiments performed to determine and optimize the CCD characteristics. The integration of the CCD system with the ADFOSC instrument and results of the on-sky tests are discussed in section \ref{sky_verification}. To evaluate the performance of the CCD system on science targets, we observed transient sources during the observing cycle 2020C2 of the 3.6m DOT. The results of the scientific observations are presented in section \ref{performance}. \section{The CCD detector system} \label{CCD} \input{characteristics} The CCD is a $4096\times4096$ format back-illuminated e2v 231-84 CCD sensor having square pixels of 15-micron size. 
It is a deep-depleted sensor capable of enhancing the sensitivity toward the longer wavelengths ($ \sim 700-1000$ nm ) of the optical spectrum. The CCD has an imaging area of $61.4$ $\times$ $61.4 \ \rm mm^{2}$, providing a Field of View (FoV) of $\sim13.6\times13.6$ arcmin$^2$ and a spectral dispersion in the range $0.1-0.2$ nm/pixel. The CCD has four readout ports (0, 1, 2, 3). The $\sim16$ million pixels can be read from any of the four amplifiers or through four amplifiers simultaneously. The four-port readout decreases the readout time by a factor of four. However, it requires additional processing to match the bias levels of the four quadrants. Since the readout noise would differ for the four different amplifiers, each quadrant's respective signal-to-noise ratio (SNR) would also be different. In the case of the ADFOSC instrument, we have implemented a single port readout via port-0, which provides the lowest readout noise. The readout frequency is fixed at $\sim 160$ kHz, providing a readout time of $\sim 104$ sec. A Bonn shutter is mounted at the camera entrance with an aperture size of $100$ mm $\times 100$ mm. The shutter employs servo motors for fast operation and offers uniform exposure at the detector plane. Using this shutter, a minimum exposure time of $\sim 1$ ms is possible with an uncertainty of 300 $\rm \mu$s. The detector system consists of the CCD detector, a clock-shaping fan-out circuit board, and a generic Astronomical Research Cameras (ARC) controller\footnote{\url{http://astro-cam.com}} to generate the suitable clock and bias voltages for the detector. The controller hardware has two video cards for reading four ports with 16-bit Analog to digital converter (ADC) units to interface and digitize the four channels of the CCD. Additionally, different bin settings are implemented to read the image in different binning patterns for photometry and spectroscopy. The CCD sensor is cooled to $-120^\circ$C to minimize the dark signal. 
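As a quick back-of-the-envelope consistency check of the figures quoted above (this is only a sketch, not part of the instrument software), the single-port readout time follows directly from the pixel count and the readout frequency:

```python
# Readout-time estimate from the numbers quoted in the text.
n_pixels = 4096 * 4096      # full-frame pixel count (~16.8 Mpix)
f_read = 160e3              # fixed pixel readout rate, Hz

t_single = n_pixels / f_read       # ~105 s, matching the quoted ~104 s
t_four_port = t_single / 4         # ~26 s if all four ports read simultaneously
print(f"single port: {t_single:.0f} s, four ports: {t_four_port:.0f} s")
```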
The CCD dewar is evacuated for several hours using an oil-free turbo molecular vacuum pump before deep cooling. The dewar is equipped with a Pirani vacuum gauge for monitoring its vacuum pressure. Once the vacuum reaches below $\sim 1 \times 10^{-3}$ Torr, the closed-cycle (Joule-Thomson) cryogenic heat-exchange system supplied by Brooks Automation, USA, is switched on. The overall process of cryo-cooling the CCD from an ambient temperature of $25^\circ$C to $-120^\circ$C takes between 4 and 5 hours. This temperature is stabilized and held constant within $0.01^\circ$C using a Lakeshore 335 Proportional Integral Derivative (PID) temperature controller\footnote{\url{http://irtfweb.ifa.hawaii.edu/~s2/software/gpib-eth-ls335/335_Manual.pdf}}. For this purpose, a small heater and a temperature sensor are mounted on the cold plate below the sensor. A charcoal-filled getter is used to absorb outgassing inside the dewar. The charcoal gets activated to absorb gases at cryogenic temperatures and helps attain a high vacuum. An ultimate vacuum of $\sim 3 \times 10^{-7}$ Torr is usually attained with this system at $-120^\circ$C. \section{Characterization of the CCD system} \label{characterization} Detailed characterization of the CCD includes the estimation of bias level, bias stability, readout noise (RN), gain, defects, linearity, and saturation level of the CCD. This section describes the laboratory-based test setup and the experiments performed to determine these parameters. We tested the CCD performance using all four ports, numbered zero to three. However, the read noise is found to be the lowest for port-0; hence single port readout mode using port-0 is currently being used for acquiring the scientific data. The paper focuses on characterizing parameters for port-0 of the CCD system. 
\subsection{Test setup and Data Acquisition} \label{set-up} We set up the CCD system on an electrostatic discharge (ESD) safe, dark room of the ARIES optics laboratory for performing the experiments. A light-emitting diode (LED) operated at a constant regulated voltage was used as an illumination source for the experiment. We covered the CCD window with an Aluminium plate with a pinhole for the light to enter. We fixed the source on the pinhole to avoid any fluctuations in light intensity due to any change in the source's position. The sub-systems, namely, the temperature controller, cryogenic pump, pressure gauge, etc., were carefully grounded to a common point to avoid noise entering from ground loops. The entire system was again reconfigured when the ADFOSC was mounted on the 3.6m DOT for sky tests. We acquired the data using the \sw{Owl}\footnote{\url{http://www.astro-cam.com/Gen3Software.php}} software provided along with the ARC controller. The software offers different dialog boxes to control the controller parameters, including gain, readout, binning, etc., and saves the acquired images in standard Flexible Image Transport System (FITS) format. Different modules of \sw{Python}\cite{van1995python} like \sw{Astropy}\cite{astropy:2013,astropy:2018,astropy:2022} and \sw{ccdproc}\cite{matt_craig_2017_1069648} were used to process the FITS image files. \subsection{Bias level and readout noise} \label{sec:bias_level} \begin{figure} \includegraphics[width=\columnwidth]{biasframe.pdf} \caption{Master bias of the CCD created using median combining 50 bias frames.} \label{bias_frame} \end{figure} A positive offset is generally provided to the CCD electronics to avoid negative counts in the output of the CCD. The mean offset value, or the bias value, is optimized in a way that it is large enough to avoid the non-linear regime of the CCD amplifiers but not too large to reduce the dynamic range of the CCD. 
We set the bias level slightly above a thousand counts to balance the above factors. \begin{figure} \includegraphics[width=\columnwidth]{bias.pdf} \caption{Histogram of the master bias frame with a mean value of $1134.01\pm2.62$ counts.} \label{bias} \end{figure} We estimated the bias level of the CCD using the bias frames, which are the CCD images with a zero-second exposure. Several bias frames were acquired, and we used fifty of them to generate a master bias frame by taking their median value. We did this using the \sw{Mediancombine} task of the \sw{ccdproc} software module. Fig. \ref{bias_frame} shows the median bias frame, and the corresponding histogram is shown in Fig. \ref{bias}. The width of the distribution represents the RN of the CCD, which is the number of electrons introduced by the readout electronics while reading out each pixel. We estimated the RN using the Janesick method (equation \ref{eq:RNEQ})\cite{2001sccd.book.....J}. We created a difference image using two bias images and estimated its standard deviation ($\sigma_{Bias1-Bias2}$). The explanation of the Gain value used in equation \ref{eq:RNEQ} is provided in section \ref{gain}. We found the RN value to be 6.20 analog to digital units (ADU), or 6.20 $e^{-}$ for a gain value of $\sim$ 1. However, the achieved noise is more than twice the value in the e2v datasheet at 160 kHz. We observed some interference patterns in the image. These patterns are likely to be responsible for this increased noise. The probable cause could be ground loops outside and inside the dewar, the length of the cables, and an imperfect shielding scheme. These have been controlled to a large extent by iteratively evaluating various schemes like shortening the cables and star-connecting the ground points of the auxiliary devices like the chiller, pressure gauge, temperature controller, etc. Also, grounding permutations were tried with the four-port video cables. 
Finally, the shielding of the video cables was grounded only at the controller end and left open at the CCD connector end, which resulted in a lower noise floor. There is still scope for improvement as the ground loops inside the dewar have not been evaluated. This evaluation will be attempted later since the CCD has already been commissioned for observations. \begin{center} \begin{equation} RN = \frac{Gain\times\sigma_{Bias1-Bias2}}{\sqrt{2}} \label{eq:RNEQ}, \end{equation} \end{center} We calculated the standard deviation of 50 bias frames to verify this RN value, as shown in Fig. \ref{RN}. The mean of these standard deviation values is 6.38 ADU, which is consistent with the value calculated using equation \ref{eq:RNEQ}. \begin{figure} \includegraphics[width=\columnwidth]{RN.pdf} \caption{A plot showing the variation of the standard deviation of 50 bias frames.} \label{RN} \end{figure} \subsection{Linearity} The CCD system should have a linear response to the incident light for scientific observations. However, several factors can introduce non-linearity in the CCD performance. The controller clock periods and overlaps should be timed for complete charge transfer during the readout process. Moreover, there should be a delay between this transfer and the correlated double sampling instant to allow the transients to settle, avoiding any induced noise or glitches that could introduce non-linearity. We verified the signal waveform using a 1 GHz digital oscilloscope to ensure the above before connecting the interface board to the detector. Other factors critical for linearity are the bias voltages: voltage output drain (VOD) and voltage reset drain (VRD). The CCD manufacturer provided a range of values for these voltages, VOD from 25 to 31 volts and VRD from 16 to 19 volts. To check the behaviour of the CCD at different voltages, we initially set these voltages near the minimum values and iteratively increased them within this range. 
We rejected some of the voltage combinations that provided very low-bias levels. For other combinations, experiments were performed to check the linearity of the CCD. We used an LED source (section \ref{set-up}) to illuminate the CCD and acquired images with an incremental increase in exposure time. We obtained a pair of images for each exposure time to detect any variation in the source intensity. We noticed that the counts are identical for the pair of images. \begin{figure} \centering \includegraphics[width=\columnwidth]{NL_CCD.pdf} \caption{Non-linearity curves at different operating voltages in the lower count regime (left panel) and in the higher count region (right panel). Non-linearity is the minimum for a combination of VOD=29 volts and VRD=16.5 volts.} \label{linearity_ccd} \end{figure} We estimated the non-linearity (the relative difference between our measurements and the best-fit linear curves) by fitting a linear function to the variation of mean counts with exposure time for each voltage combination. We used the \sw{linregress} function from the \sw{stats} library under \sw{Python} for this purpose. Fig. \ref{linearity_ccd} shows the non-linearity curves for various combinations of VOD and VRD in different count regions. For most voltage combinations, the non-linearity is negligible in the higher count regime. The non-linearity, however, shows up in the lower count regime and is significant for certain voltages. For a combination of VOD = 29 volts and VRD = 16.5 volts, the non-linearity is the lowest. For this voltage combination, the value of the regression coefficient ($R^2$) is $0.9999$, which is almost equal to unity (see Fig. \ref{linearity}). We considered this voltage combination as the optimum value for the CCD system. \subsection{Saturation level} \begin{figure} \centering \includegraphics[width=\columnwidth]{linearity_lab.pdf} \caption{Linearity curve at VOD = 29 volts and VRD = 16.5 volts. 
A linear fit to the data gives the regression coefficient as 0.9999. The horizontal dashed line indicates the saturation level.} \label{linearity} \end{figure} The maximum capacity of a CCD pixel to store the photo-electrons is its full well capacity, beyond which the pixels saturate. Since the available 16-bit ADC of the controller saturates at a value of 65535, the controller's gain setting helps to select the dynamic range. As we are interested in accurate photometry of faint objects, a gain of unity is selected, constraining the saturation point to 65535. The selection of the system gain is discussed in section \ref{gain}. Illuminating the CCD beyond its saturation point limits the counts to 65535, the saturation value of the 16-bit ADC. This is demonstrated in Fig. \ref{linearity}, where a bright source illuminates the detector and the ADC saturates at 65535 counts. If the science cases demand the utilization of the full well capacity, the user can select a gain setting close to 3 or higher electrons per ADU. \subsection{The system gain} \label{gain} The gain of a CCD system is defined in terms of ADU, which corresponds to the number of electrons assigned to one digital unit in the output image. The available gain values are 1, 2, 5, and 10 {e$^{-}/$ADU} in the controller. The gain values can be selected using the software at runtime. The saturation level of the CCD should correspond to the saturation level of the ADC to utilize the full well capacity. Since the full well capacity of the CCD is $408$~ke$^{-}$ (as mentioned in the result sheet of the supplied detector), a gain of 10 is suitable to match the saturation levels. However, for the detection of photon-limited faint objects, a gain of 1 is implemented using the controller parameters. The electronic gain of the CCD system is the product of gain values introduced by each stage of the readout electronics. The inherent gain of the CCD, defined by the output capacitor, is 7 {$\mu$ V/e$^{-}$}. 
A series of Op-amp stages within the controller further amplify this gain. Initially, it is preamplified with a gain of 4 and passed through a gain selection stage, offering a range of gain values: 1, 2, 4.75, and 9.5. A bias adjustment stage after the integrator provides a gain of 2. Hence, an amplification of 56 {$\mu$ V/e$^{-}$} is obtained with these four stages. Since the 16-bit ADC operating at a reference voltage of 10 V provides a bin size of 152.588 {$\mu$ V/ADU}, the integration time of the Op-amp integrator is adjusted to provide an additional gain factor of 2.725 to achieve the desired system gain of 1 {e$^{-}/$ADU}. Since the integrator time can only be adjusted in increments of 40 ns, the closest possible value of 0.998 {e$^{-}/$ADU} is set. We experimentally verified this gain setting using the Janesick method \cite{2001sccd.book.....J} given by equation \ref{eq:gainEQ}, where $S$ is the mean of the signal acquired by the CCD and $\sigma_S^2$ is its variance. \begin{center} \begin{equation} \sigma_S^2 = \frac{S}{G} + \sigma_R^2 \label{eq:gainEQ}, \end{equation} \end{center} \begin{figure} \includegraphics[width=\columnwidth]{Gain_lab1.pdf} \caption{Photon transfer curve (PTC) of the CCD obtained in the laboratory environment. The measured value of the gain is $1.00\pm0.04$ e$^{-}/$ADU.} \label{gain_ccd} \end{figure} We acquired a pair of images at each exposure and estimated the mean signal after bias subtraction and cosmic-ray removal from the image. Further, these images were normalized by subtracting one image from the other to compensate for the flat-field effect. We used the resulting image to estimate the variance of the signal ($\sigma_S^2$). Fig. \ref{gain_ccd} shows the photon transfer curve (PTC) for the CCD. To estimate the gain, the PTC was fitted with a linear function using \sw{linregress}. The estimated gain is 1.00 $\pm$ 0.04 {e$^{-}/$ADU}, which matches the electronic gain value of the system within the error bar. 
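The mean-variance analysis behind equation \ref{eq:gainEQ} can be sketched on simulated flat-pair statistics (a minimal illustration, not the actual pipeline: the signal levels and scatter below are invented, and \sw{numpy}'s \sw{polyfit} stands in for \sw{linregress} so the snippet is self-contained):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated photon-transfer statistics: for each mean signal S (ADU) the
# variance follows sigma_S^2 = S/G + RN^2, the relation used in the text.
G_true, rn_true = 1.0, 6.2                  # e-/ADU and ADU (values from the text)
S = np.linspace(2_000, 50_000, 20)          # invented mean signal levels, ADU
var = S / G_true + rn_true**2 + rng.normal(0, 10, S.size)  # "measured" variances

# PTC fit: slope = 1/G, intercept = RN^2.
slope, intercept = np.polyfit(S, var, 1)
gain = 1.0 / slope
rn_est = np.sqrt(intercept)
print(f"gain = {gain:.3f} e-/ADU, RN = {rn_est:.1f} ADU")
```

The intercept of the same fit returns the squared readout noise, which is why the PTC simultaneously constrains both the gain and the RN.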
\begin{figure*} \includegraphics[width=8cm,height=8cm]{temp_grad.pdf} \includegraphics[scale=0.40]{Bias_-50.pdf} \caption{Master bias at $-50^\circ$C and deviation of the counts of all pixels from the zeroth pixel counts. A clear gradient can be seen in the image, as the last pixels get more time to generate the dark signal.} \label{dark} \end{figure*} \subsection{Thermal noise} \label{thermal_noise} In CCD detectors, the electron charge-density increases exponentially with temperature due to the thermal generation of electrons. A CCD must be cooled optimally to minimize the dark signal. To determine this optimum temperature, we calculated the dark signal at various temperatures ranging from $-35^\circ$C to $-120^\circ$C using the bias frames. We acquired several bias frames at different temperatures and generated master bias frames at each temperature. The left panel of Fig. \ref{dark} shows the master bias frame at $-50^\circ$C. A gradient in counts is visible in the master bias due to the finite readout time of the CCD. The zeroth pixel gets the lowest time to generate dark electrons. The last pixel accumulates dark counts over the full readout time, and hence has the maximum number of thermally generated electrons. If the readout time and the gain are known, then by comparing the counts of the first and the last pixel, we can measure the number of electrons generated per pixel per second. Using this method, we estimated the dark signal at different temperatures. \input{table1} The right panel of Fig. \ref{dark} shows the deviation of counts in each pixel from the zeroth counts. The farther the pixel number is from the readout port, the larger the dark count and the larger the deviation from the zeroth counts. To determine the slope of this gradient, we fitted a polynomial to the counts vs. pixel number data using the \sw{polyfit} function of \sw{Python}. It is seen that a linear function provides the best fit, as shown in the left panel of Fig. \ref{dark}. 
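The first-to-last-pixel comparison described above can be sketched as follows. This is a toy reconstruction: an assumed dark rate and a synthetic one-dimensional frame stand in for the real bias data, with the bias level, readout time, gain and noise echoing the values quoted in the text:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy bias frame in readout order: pixels read later accumulate dark charge
# for longer, producing the linear gradient seen in the master bias.
n_pix = 1_000_000            # pixels in readout order (toy frame)
t_read = 104.0               # total readout time, s
gain = 1.0                   # e-/ADU
dark_rate_true = 2.0         # assumed dark rate, e-/pixel/s

pix = np.arange(n_pix)
counts = (1134                                   # bias level, ADU
          + dark_rate_true * t_read * (pix / n_pix) / gain   # dark gradient
          + rng.normal(0, 6.2, n_pix))           # readout noise

# Linear fit of counts vs. pixel number: slope * n_pix is the first-to-last
# excess in ADU; multiplying by the gain and dividing by the readout time
# converts it to electrons per pixel per second.
slope, _ = np.polyfit(pix, counts, 1)
dark_rate = gain * slope * n_pix / t_read
print(f"recovered dark rate = {dark_rate:.2f} e-/pixel/s")
```

The binning correction applied to the real $4\times4$-binned frames is omitted here for simplicity.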
We used the slope to calculate the difference in counts between the first and the last pixel. We divided this difference by the total readout time to obtain the dark counts generated per second. Since the bias frames were acquired in $4\times4$ binning, it was further scaled by a factor of 16 after subtracting the RN. Below $-100^\circ$C, the thermal noise becomes less than the RN; hence, we could estimate the dark signal values only down to $-95^\circ$C. The dark signal values at different temperatures are listed in table \ref{tab:dark}. As shown in Fig. \ref{dark_signal}, the dark signal varies exponentially with temperature. Below $-80^\circ$C, the dark signal is negligible, suggesting that the CCD can be used below this temperature with minimal thermal noise. \begin{figure} \includegraphics[width=\columnwidth]{dark_vs_temperature.pdf} \caption{Variation of dark signal with temperature. The dark signal is negligible below $-80^\circ$C.} \label{dark_signal} \end{figure} \subsection{CCD defects} \label{ccd_defects} \begin{figure} \includegraphics[width=\columnwidth]{Bias_vs_counts.pdf} \caption{The response of CCD pixels at different temperatures is shown in the figure. The deviation of the counts from the zeroth counts is fitted with a polynomial (black line). The dark signal is calculated from the best fit. There are a few pixels which behave differently at higher temperatures. At lower temperatures, the CCD behaves as a grade-0 CCD.} \label{defects} \end{figure} The CCD may have some pixels that might not respond to light optimally due to defects in the CCD structure. These can be point defects, hot defects, or dark/dead pixels. We are employing a grade-0 CCD detector (the CCD detector with the minimum possible defects) as mentioned by the manufacturer. To check the CCD for point and column defects, we examined the response of all the pixels at different temperatures. Fig. 
\ref{defects} shows the deviation of counts from the mean bias counts for each pixel of the CCD operated at different temperatures (the method to obtain these plots is described in section \ref{thermal_noise}). Some pixels are seen to behave differently at higher temperatures, exhibiting high counts. Though they appear to be hot pixels, the counts are found to decrease with decreasing temperature. Eventually, below $-110^\circ$C, the CCD acts as a nominal grade-0 CCD without any point and column defects. \section{On-sky verification} \label{sky_verification} After optimizing the performance of the CCD system in the laboratory, we verified the on-sky performance. The CCD was integrated with the ADFOSC instrument and mounted on the axial port of the 3.6m DOT. This section describes the estimation of gain, linearity, bias level, and bias stability using on-sky observations with the instrument. \subsection{Bias Stability} We calculated the bias level using the methodology described in section \ref{characterization}. The mean bias level, equal to $1133.85\pm2.48$ counts, matches the laboratory estimated value, i.e. $1134.01\pm2.62$ counts. Since fluctuation in the bias level can introduce errors in photometric estimates, we acquired and examined several bias frames to check the stability of the bias. Fig. \ref{bias_stability} shows the variation of the mean bias level for 30 different nights spread across an observing cycle of three months. The mean bias level fluctuates within a fraction of a count, ensuring bias stability. \begin{figure} \includegraphics[width=\columnwidth]{bias_stability.pdf} \caption{Variation within the mean counts of bias frames during different nights. The bias level is stable within one count. The red and black dotted lines show the mean bias level estimated in the lab and on-sky.} \label{bias_stability} \end{figure} \subsection{Linearity and gain} \begin{figure} \centering \includegraphics[width=\columnwidth]{ds9.pdf} \caption{CCD image of the Landolt standard field SA110. 
The standard stars are between 10 and 16 mag in the V-band.} \label{SA110} \end{figure} We chose a standard field available near the zenith at the time of observations to validate the linearity and gain of the CCD. Multiple images of the Landolt standard field SA110 \cite{1992AJ....104..340L} were acquired in the r-band with exposure times ranging from 5 sec to 100 sec. Before using the images for characterization purposes, we pre-processed them with the basic steps of bias subtraction, flat correction, and cosmic-ray removal using \sw{ccdproc}. Fig. \ref{SA110} shows the pre-processed CCD image of the field SA110, which contains both bright and faint stars (with magnitudes ranging from 10 to 16 mag in the V-band). We used the faint stars to check the linearity in the lower count region and the bright stars to estimate the saturation level of the CCD. Fig. \ref{on_sky_linearity} shows the CCD linearity with $R^2 = 0.9997$ and a non-linearity of 0.30 per cent. The CCD system is seen to saturate at 65535 counts for a gain of 1 {e$^{-}/$ADU}. \begin{figure} \centering \includegraphics[width=\columnwidth]{linearity_sky.pdf} \caption{Variation of mean counts with exposure time from on-sky experiments. The black line represents the best linear fit with a regression coefficient of 0.9997.} \label{on_sky_linearity} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Gain_sky1.pdf} \caption{Photon transfer curve (PTC) of the CCD as obtained from the sky experiments. The gain value is estimated as $1.04\pm0.13$ e$^{-}/$ADU.} \label{on_sky_gain} \end{figure} The gain value estimated using the method described in section \ref{gain} is $1.04\pm0.13$ e$^{-}$/ADU, which is close to the value estimated in the laboratory. The on-sky gain estimation is also affected by sky variation, which results in a slightly larger error bar. The mean-variance plot is shown in Fig. \ref{on_sky_gain}.
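For reference, the two linearity metrics quoted above ($R^2$ and the non-linearity percentage) follow from a simple linear regression on the exposure series, as described in the laboratory linearity tests. A minimal sketch of the computation; the counts below are illustrative stand-ins, not the actual SA110 measurements:

```python
import numpy as np
from scipy.stats import linregress

# Mean counts vs. exposure time for the standard-field frames.  The numbers
# below are illustrative stand-ins, not the actual SA110 measurements.
exp_time_s = np.array([5, 10, 20, 40, 60, 80, 100], dtype=float)
mean_counts = np.array([3210, 6390, 12810, 25590, 38400, 51150, 63900],
                       dtype=float)

fit = linregress(exp_time_s, mean_counts)
model = fit.intercept + fit.slope * exp_time_s

# Regression coefficient R^2 and the peak non-linearity (per cent), taken as
# the largest relative deviation of the data from the best-fit line
r_squared = fit.rvalue ** 2
nonlinearity_pct = 100 * np.max(np.abs(mean_counts - model) / model)
print(f"R^2 = {r_squared:.4f}, non-linearity = {nonlinearity_pct:.2f}%")
```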
\section{Performance of the CCD} \label{performance} \begin{figure} \centering \includegraphics[width=\columnwidth]{210217A_dot.pdf} \caption{Field of the GRB 210217A afterglow imaged with ADFOSC in the r-band.} \label{imaging} \end{figure} We used the CCD for imaging and spectroscopic observations of various science targets after optimizing it in the lab and successfully verifying it on-sky. This section demonstrates the performance of the CCD system in both imaging and spectroscopic modes with observations of GRB and supernova sources. The on-sky performance of ADFOSC on different science targets is discussed in more detail in Omar et al. 2019 \cite{2019arXiv190205857O}. \subsection{Imaging} We observed the optical afterglow of GRB 210217A using ADFOSC in imaging mode. The observations were performed on 18th February 2021 in the r-band at 23:59:18 UT, $\sim1.7$ days after the burst. Owing to the faintness and rapid decay of GRB afterglows, a series of eight images, each with an exposure time of 300 seconds, was acquired. The images were stacked after pre-processing (as described in the previous section) to improve the signal-to-noise ratio. The optical afterglow is visible in the stacked image, as shown in Fig. \ref{imaging}. The photometric estimate of the afterglow is $22.32\pm0.16$ mag (AB). \begin{figure} \includegraphics[scale=0.3]{lamp_colmn.pdf} \includegraphics[scale=0.3]{lamp_cal.pdf} \caption{The spectrum of the Mercury Argon lamp with the 770R grating. The left panel shows the spectrum in the pixel scale; the vertical lines indicate the column numbers of the first and last emission lines identified in the spectrum. The right panel shows the spectrum in the wavelength scale.} \label{spectal_dispersion} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{spectra.pdf} \caption{Spectrum of SN~2020jfo obtained using ADFOSC at $\sim254$ days after the discovery of the supernova.
We identified different absorption lines in the spectrum and compared these with spectra taken from other instruments/telescopes.} \label{spectroscopy} \end{figure} \subsection{Spectroscopy} The spectrograph provides three sets of gratings: 300 gr/mm, 420 gr/mm, and 600 gr/mm. We acquired lamp spectra using the Mercury Argon (HgAr) calibration lamp to estimate the spectral dispersion. We used the \sw{find\_peaks} function of \sw{scipy}\cite{2020SciPy-NMeth} to extract the spectral peaks from the obtained spectra. We identified the column number corresponding to each peak and compared it with the wavelength-calibrated lamp spectrum atlas. We defined initial polynomial solutions using these matched wavelength pairs and calculated the best-fit polynomial coefficients to transform between column number and wavelength. The left panel of Fig. \ref{spectal_dispersion} shows the lamp spectrum obtained using the 300 gr/mm grating element in the pixel scale. The right panel shows the calibrated spectrum in the wavelength scale. A spectral dispersion of 0.20 nm/pixel is estimated for this grating. For the 420 gr/mm and 600 gr/mm grating elements, the estimated values of the spectral dispersion are 0.14 nm/pixel and 0.10 nm/pixel, respectively. We acquired the spectrum of the supernova SN~2020jfo using a $1.5^{''}$ slit and the 420 gr/mm grating with an exposure time of 900 sec on 13th January 2021 at 23:12:53 UT. Ailawadhi et al.\cite{2022arXiv221102823A} describe the spectral reduction technique. The absorption features in the spectrum obtained by ADFOSC were identified and matched with the spectra obtained from other telescopes, as shown in Fig. \ref{spectroscopy}. Spectral features at longer wavelengths are clearly visible, indicating the high sensitivity of the deep-depleted CCD detector in the long-wavelength regime.
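The wavelength-calibration steps described above (peak finding, matching peak columns to atlas wavelengths, and fitting a column-to-wavelength polynomial) can be sketched compactly. The line columns and wavelength placements below are illustrative, not the actual atlas pairs used for ADFOSC:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic arc-lamp spectrum: Gaussian emission lines at known columns.
# The line columns and wavelength placements are illustrative, not the
# actual HgAr atlas values used for ADFOSC.
columns = np.arange(2048)
line_cols = [300, 700, 1100, 1600]
line_wave_nm = [404.7, 435.8, 546.1, 696.5]   # HgAr lines (nm), placed arbitrarily
spectrum = sum(2000.0 * np.exp(-0.5 * ((columns - c) / 2.0) ** 2)
               for c in line_cols)

# Step 1: locate the emission peaks, as done with scipy's find_peaks
peaks, _ = find_peaks(spectrum, height=500)

# Step 2: pair the detected peak columns with the atlas wavelengths and fit
# a polynomial transforming column number -> wavelength
coeffs = np.polyfit(peaks, line_wave_nm, deg=2)
wavelength_nm = np.polyval(coeffs, columns)

# Mean dispersion across the chip, comparable to the quoted nm/pixel values
dispersion = np.mean(np.diff(wavelength_nm))
print(f"mean dispersion: {dispersion:.3f} nm/pixel")
```

In practice the peak-to-atlas pairing is the delicate step; here the lines are matched in order because the synthetic spectrum is constructed that way.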
\section{Conclusions} \label{conclusions} We present the methodology employed to characterize the performance of a CCD system developed for integration with a low-dispersion spectrograph instrument, ADFOSC, on the 3.6m DOT. Various experiments were initially performed in the laboratory to characterize and optimize different critical parameters of the CCD system. We also verified the estimated parameters on the sky by mounting the instrument on the 3.6m DOT. We evaluated the bias level during the on-sky tests and examined its stability over several observing nights. We experimentally identified an optimum combination of the bias voltages VOD and VRD to operate the CCD with minimum non-linearity. The readout performance of the CCD is satisfactory; however, some interference patterns in the image contribute to the readout noise. Through different experiments, we tuned and verified the gain parameter corresponding to 1 e$^{-}/$ADU for detecting faint objects. We calculated the dark current at different temperatures using bias frames and established an optimum operating temperature for the CCD. The CCD acts as a grade-0 detector with no hot pixels at the optimum temperature. The regression coefficient values and the gain parameter obtained on-sky are consistent with the values obtained in the laboratory. After verifying the satisfactory performance, we observed science targets in both imaging and spectroscopic modes. We successfully carried out imaging of the GRB 210217A field \cite{Dimple2022} and spectroscopy of the supernova SN~2020jfo \cite{2022arXiv221102823A} using ADFOSC. \section*{Acknowledgments} We thank Greg Burley and Tim Hardy from the NRC Herzberg Astronomy and Astrophysics Research Centre, Canada, for their help in developing the CCD system. We thank the ARIES 3.6m DOT engineering team and staff members for providing the necessary support during the development, verification, and commissioning work. We would also like to thank Dr.
Raya Dastidar for helping us with the spectroscopic data reduction. \section{Introduction} \label{sect:intro} The 3.6m DOT was commissioned at the Devasthal observatory of the Aryabhatta Research Institute of observational sciencES (ARIES), Nainital (India) \cite{2018BSRSL..87...29K}. The Devasthal observatory is situated in the Himalayan region of Uttarakhand at $\sim 2450$ m above the mean sea level, with geographical coordinates of $29^{\circ}.360$ N, $79^{\circ}.690$ E. This location lies in the middle of the $180^{\circ}$-wide longitude gap between the Canary Islands ($\sim 20^{\circ}$ W) and Eastern Australia ($\sim 160^{\circ}$ E), making the site, with its several moderate-aperture telescopes, well suited for observations of time-critical astronomical events. The DOT uses an $f/9$ Ritchey-Chr\'{e}tien (RC) system supported on an alt-azimuth mount \cite{Sagar_2019, 2012SPIE.8444E..1VN}. The aperture of this telescope is appropriate for medium-resolution spectroscopy and observations of faint sources. A low-dispersion spectrograph-cum-imager, the ARIES Devasthal Faint Object Spectrograph (ADFOSC), has been developed at ARIES for spectroscopy and imaging of celestial objects \cite{2019arXiv190205857O}. The spectrograph uses a fixed focal reducer, which converts the incoming $f/9$ optical beam from the telescope into a faster $\sim f/4.2$ beam. The spectrograph can be used in both spectroscopic and imaging modes by selecting the corresponding optical elements of the instrument (e.g., filters, grisms, slits) with the help of GUI-based instrument control software \cite{10.1117/12.2233082}. In either mode, a charge-coupled device (CCD) is required to detect and record the data. A CCD detector system/imager has been designed and assembled at ARIES in technical collaboration with the Herzberg Institute of Astrophysics (HIA), Canada. \begin{figure} \centering \includegraphics[width=\columnwidth]{CCD.pdf} \caption{The ADFOSC CCD camera setup in the laboratory.
The camera consists of a CCD detector, a controller, a pressure gauge, and a dewar cooled using a heat-exchange cryogenic system.} \label{fig:Camera} \end{figure} \label{ccd_details} We performed a detailed characterization of the CCD system before commissioning it for scientific observations, both in the laboratory and on the sky. This included estimating parameters like the bias level, readout noise, and thermal noise in a dark room. We then performed iterative experiments in the laboratory to optimize the overall system performance and checked the CCD for cosmetic defects. We demonstrate a method to calculate the dark signal of the CCD at different temperatures using the bias frames. As the CCD is developed for the ADFOSC instrument, we also estimated the spectral dispersion on the detector using the lamp spectra. After optimization in the laboratory environment, we performed similar experiments on the night sky with the 3.6m DOT to verify the on-sky performance of the detector system. The paper discusses the different methodologies employed for characterizing the performance of the CCD system. The test setup used to optimize the system parameters is detailed in section \ref{characterization}, where we also discuss the various experiments performed to determine and optimize the CCD characteristics. The integration of the CCD system with the ADFOSC instrument and the results of the on-sky tests are discussed in section \ref{sky_verification}. To evaluate the performance of the CCD system on science targets, we observed transient sources during the observing cycle 2020C2 of the 3.6m DOT. The results of these scientific observations are presented in section \ref{performance}. \section{The CCD detector system} \label{CCD} \input{characteristics} The CCD is a $4096\times4096$ format back-illuminated e2v 231-84 CCD sensor having square pixels of 15-micron size.
It is a deep-depleted sensor with enhanced sensitivity toward the longer wavelengths ($\sim 700-1000$ nm) of the optical spectrum. The CCD has an imaging area of $61.4 \times 61.4 \ \rm mm^{2}$, providing a field of view (FoV) of $\sim13.6\times13.6$ arcmin$^2$ and a spectral dispersion in the range $0.1-0.2$ nm/pixel. The CCD has four readout ports (0, 1, 2, 3). The $\sim16$ million pixels can be read through any one of the four amplifiers or through all four amplifiers simultaneously. The four-port readout decreases the readout time by a factor of four; however, it requires additional processing to match the bias levels of the four quadrants. Since the readout noise differs among the four amplifiers, each quadrant's signal-to-noise ratio (SNR) would also differ. In the case of the ADFOSC instrument, we have implemented a single-port readout via port-0, which provides the lowest readout noise. The readout frequency is fixed at $\sim 160$ kHz, providing a readout time of $\sim 104$ sec. A Bonn shutter with an aperture size of $100$ mm $\times 100$ mm is mounted at the camera entrance. The shutter employs servo motors for fast operation and offers uniform exposure at the detector plane. Using this shutter, a minimum exposure time of $\sim 1$ ms is possible with an uncertainty of 300 $\rm \mu$s. The detector system consists of the CCD detector, a clock-shaping fan-out circuit board, and a generic Astronomical Research Cameras (ARC) controller\footnote{\url{http://astro-cam.com}} to generate the suitable clock and bias voltages for the detector. The controller hardware has two video cards for reading the four ports, with 16-bit analog-to-digital converter (ADC) units to interface and digitize the four channels of the CCD. Additionally, different bin settings are implemented to read the image in different binning patterns for photometry and spectroscopy. The CCD sensor is cooled to $-120^\circ$C to minimize the dark signal.
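As a quick sanity check, the quoted single-port readout time follows directly from the full-frame pixel count and the fixed readout frequency:

```python
# Back-of-the-envelope check of the single-port readout time quoted above,
# using only the pixel count and the fixed ~160 kHz pixel rate.
n_pixels = 4096 * 4096       # full-frame pixel count
pixel_rate_hz = 160e3        # readout frequency

readout_time_s = n_pixels / pixel_rate_hz
print(f"single-port readout time: {readout_time_s:.1f} s")   # ~104.9 s

# Reading all four ports in parallel divides this by four
print(f"four-port readout time: {readout_time_s / 4:.1f} s")
```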
The CCD dewar is evacuated for several hours using an oil-free turbo-molecular vacuum pump before deep cooling. The dewar is equipped with a Pirani vacuum gauge for monitoring its vacuum pressure. Once the vacuum reaches below $\sim 1\times10^{-3}$ Torr, the closed-cycle (Joule-Thomson) cryogenic heat-exchange system supplied by Brooks Automation, USA, is switched on. The overall process of cryo-cooling the CCD from an ambient temperature of $25^\circ$C to $-120^\circ$C takes 4 to 5 hours. This temperature is stabilized and held constant within $0.01^\circ$C using a Lakeshore 335 Proportional-Integral-Derivative (PID) temperature controller\footnote{\url{http://irtfweb.ifa.hawaii.edu/~s2/software/gpib-eth-ls335/335_Manual.pdf}}. For this purpose, a small heater and a temperature sensor are mounted on the cold plate below the sensor. A charcoal-filled getter is used to absorb outgassing inside the dewar. The charcoal gets activated to absorb gases at cryogenic temperatures and helps attain a high vacuum. An ultimate vacuum of $\sim 3\times10^{-7}$ Torr is usually attained with this system at $-120^\circ$C. \section{Characterization of the CCD system} \label{characterization} Detailed characterization of the CCD includes the estimation of the bias level, bias stability, readout noise (RN), gain, defects, linearity, and saturation level of the CCD. This section describes the laboratory-based test setup and the experiments performed to determine these parameters. We tested the CCD performance using all four ports, numbered zero to three. However, the read noise is found to be the lowest for port-0; hence, the single-port readout mode using port-0 is currently being used for acquiring the scientific data. The paper focuses on characterizing the parameters for port-0 of the CCD system.
\subsection{Test setup and Data Acquisition} \label{set-up} We set up the CCD system in an electrostatic-discharge (ESD) safe dark room of the ARIES optics laboratory for performing the experiments. A light-emitting diode (LED) operated at a constant regulated voltage was used as the illumination source. We covered the CCD window with an aluminium plate with a pinhole for the light to enter. We fixed the source on the pinhole to avoid any fluctuations in light intensity due to a change in the source's position. The sub-systems, namely the temperature controller, cryogenic pump, pressure gauge, etc., were carefully grounded to a common point to avoid noise entering from ground loops. The entire system was reconfigured again when the ADFOSC was mounted on the 3.6m DOT for the sky tests. We acquired the data using the \sw{Owl}\footnote{\url{http://www.astro-cam.com/Gen3Software.php}} software provided along with the ARC controller. The software offers different dialog boxes to control the controller parameters, including gain, readout, binning, etc., and saves the acquired images in the standard Flexible Image Transport System (FITS) format. Different modules of \sw{Python}\cite{van1995python}, like \sw{Astropy}\cite{astropy:2013,astropy:2018,astropy:2022} and \sw{ccdproc}\cite{matt_craig_2017_1069648}, were used to process the FITS image files. \subsection{Bias level and readout noise} \label{sec:bias_level} \begin{figure} \includegraphics[width=\columnwidth]{biasframe.pdf} \caption{Master bias of the CCD created by median-combining 50 bias frames.} \label{bias_frame} \end{figure} A positive offset is generally provided to the CCD electronics to avoid negative counts in the output of the CCD. This mean offset value, or bias value, is optimized so that it is large enough to avoid the non-linear regime of the CCD amplifiers but not so large as to reduce the dynamic range of the CCD.
We set the bias level slightly above a thousand counts to balance the above factors. \begin{figure} \includegraphics[width=\columnwidth]{bias.pdf} \caption{Histogram of the master bias frame with a mean value of $1134.01\pm2.62$ counts.} \label{bias} \end{figure} We estimated the bias level of the CCD using bias frames, which are CCD images with a zero-second exposure. Several bias frames were acquired, and we used fifty of them to generate a master bias frame by taking their median value. We did this using the \sw{Mediancombine} task of the \sw{ccdproc} module. Fig. \ref{bias_frame} shows the master bias frame, and the corresponding histogram is shown in Fig. \ref{bias}. The width of the distribution represents the RN of the CCD, which is the noise introduced by the readout electronics while reading out each pixel. We estimated the RN using the Janesick method (equation \ref{eq:RNEQ})\cite{2001sccd.book.....J}. We created a difference image from two bias images and estimated its standard deviation ($\sigma_{Bias1-Bias2}$). The gain value used in equation \ref{eq:RNEQ} is explained in section \ref{gain}. We found the RN value to be 6.20 analog-to-digital units (ADU), or 6.20 e$^{-}$ for a gain of $\sim$1 e$^{-}/$ADU. However, the achieved noise is more than twice the value in the e2v datasheet at 160 kHz. We observed some interference patterns in the image, which are likely responsible for this increased noise. The probable causes are ground loops outside and inside the dewar, the lengths of the cables, and an imperfect shielding scheme. These have been controlled to a large extent by iteratively evaluating various schemes, like shortening the cables and star-connecting the ground points of the auxiliary devices (chiller, pressure gauge, temperature controller, etc.). Grounding permutations were also tried with the four-port video cables.
Finally, the shielding of the video cables was grounded only at the controller end and left open at the CCD connector end, which resulted in a lower noise floor. There is still scope for improvement, as the ground loops inside the dewar have not been evaluated. This evaluation will be attempted later since the CCD has already been commissioned for observations. \begin{center} \begin{equation} RN = \frac{Gain\times\sigma_{Bias1-Bias2}}{\sqrt{2}} \label{eq:RNEQ}, \end{equation} \end{center} We calculated the standard deviation of each of 50 bias frames to verify this RN value, as shown in Fig. \ref{RN}. The mean of these standard deviation values is 6.38 ADU, which is consistent with the value calculated using equation \ref{eq:RNEQ}. \begin{figure} \includegraphics[width=\columnwidth]{RN.pdf} \caption{A plot showing the variation of the standard deviation of 50 bias frames.} \label{RN} \end{figure} \subsection{Linearity} The CCD system should have a linear response to the incident light for scientific observations. However, several factors can introduce non-linearity in the CCD performance. The controller clock periods and overlaps should be timed for complete charge transfer during the readout process. Moreover, there should be a delay between this transfer and the correlated double sampling instant to allow the transients to settle, avoiding any induced noise or glitches that could introduce non-linearity. We verified the signal waveforms using a 1 GHz digital oscilloscope to ensure the above before connecting the interface board to the detector. Other factors critical for linearity are the bias voltages: the voltage output drain (VOD) and the voltage reset drain (VRD). The CCD manufacturer provided a range of values for these voltages, VOD from 25 to 31 volts and VRD from 16 to 19 volts. To check the behaviour of the CCD at different voltages, we initially set these voltages near their minimum values and iteratively increased them within this range.
We rejected some of the voltage combinations that provided very low bias levels. For the other combinations, experiments were performed to check the linearity of the CCD. We used an LED source (section \ref{set-up}) to illuminate the CCD and acquired images with an incremental increase in exposure time. We obtained a pair of images at each exposure time to detect any variation in the source intensity and found the counts to be identical for each pair of images. \begin{figure} \centering \includegraphics[width=\columnwidth]{NL_CCD.pdf} \caption{Non-linearity curves at different operating voltages in the lower count regime (left panel) and in the higher count regime (right panel). The non-linearity is minimum for the combination of VOD=29 volts and VRD=16.5 volts.} \label{linearity_ccd} \end{figure} We estimated the non-linearity (the relative difference between our measurements and the best-fit linear curve) by fitting a linear function to the variation of mean counts with exposure time for each voltage combination. We used the \sw{linregress} function from the \sw{scipy.stats} library for this purpose. Fig. \ref{linearity_ccd} shows the non-linearity curves for various combinations of VOD and VRD in different count regions. For most voltage combinations, the non-linearity is negligible in the higher count regime. The non-linearity, however, shows up in the lower count regime and is significant for certain voltages. The non-linearity is lowest for the combination of VOD = 29 volts and VRD = 16.5 volts. For this voltage combination, the value of the regression coefficient ($R^2$) is $0.9999$, which is almost equal to unity (see Fig. \ref{linearity}). We adopted this voltage combination as the optimum setting for the CCD system. \subsection{Saturation level} \begin{figure} \centering \includegraphics[width=\columnwidth]{linearity_lab.pdf} \caption{Linearity curve at VOD = 29 volts and VRD = 16.5 volts.
A linear fit to the data gives a regression coefficient of 0.9999. The horizontal dashed line indicates the saturation level.} \label{linearity} \end{figure} The maximum capacity of a CCD pixel to store photo-electrons is its full-well capacity, beyond which the pixel saturates. Since the 16-bit ADC of the controller saturates at a value of 65535, the controller's gain setting helps to select the dynamic range. As we are interested in accurate photometry of faint objects, a gain of unity is selected, constraining the saturation point to 65535. The selection of the system gain is discussed in section \ref{gain}. Illuminating the CCD up to its saturation point limited the counts to 65535, the saturation level of the 16-bit ADC; this is demonstrated in Fig. \ref{linearity}, where a bright source illuminates the detector and the ADC saturates at 65535 counts. If the science cases demand the utilization of the full-well capacity, the user can select a gain setting of 3 e$^{-}/$ADU or higher. \subsection{The system gain} \label{gain} The gain of a CCD system is defined as the number of electrons assigned to one digital unit (ADU) in the output image. The available gain values in the controller are 1, 2, 5, and 10 {e$^{-}/$ADU}, and they can be selected using the software at runtime. The saturation level of the CCD should correspond to the saturation level of the ADC to utilize the full-well capacity. Since the full-well capacity of the CCD is $408$~ke$^{-}$ (as mentioned in the result sheet of the supplied detector), a gain of 10 is suitable to match the saturation levels. However, for the detection of photon-limited faint objects, a gain of 1 is implemented using the controller parameters. The electronic gain of the CCD system is the product of the gain values introduced by each stage of the readout electronics. The inherent gain of the CCD, defined by the output capacitor, is 7 {$\mu$V/e$^{-}$}.
A series of op-amp stages within the controller provides further amplification. The signal is first preamplified with a gain of 4 and then passed through a gain-selection stage offering a range of gain values: 1, 2, 4.75, and 9.5. A bias-adjustment stage after the integrator provides a gain of 2. Hence, an amplification of 56 {$\mu$V/e$^{-}$} is obtained with these four stages. Since the 16-bit ADC operating at a reference voltage of 10 V provides a bin size of 152.588 {$\mu$V/ADU}, the integration time of the op-amp integrator is adjusted to provide an additional gain factor of 2.725 to achieve the desired system gain of 1 {e$^{-}/$ADU}. Since the integrator time can only be adjusted in increments of 40 ns, the closest possible value of 0.998 {e$^{-}/$ADU} is set. We experimentally verified this gain setting using the Janesick method \cite{2001sccd.book.....J} given by equation \ref{eq:gainEQ}, where $S$ is the mean of the signal acquired by the CCD and $\sigma_S^2$ is its variance. \begin{center} \begin{equation} \sigma_S^2 = \frac{S}{G} + \sigma_R^2 \label{eq:gainEQ}, \end{equation} \end{center} \begin{figure} \includegraphics[width=\columnwidth]{Gain_lab1.pdf} \caption{Photon transfer curve (PTC) of the CCD obtained in the laboratory environment. The measured value of the gain is $1.00\pm0.04$ e$^{-}/$ADU.} \label{gain_ccd} \end{figure} We acquired a pair of images at each exposure and estimated the mean signal after bias subtraction and cosmic-ray removal. Further, one image of each pair was subtracted from the other to compensate for the flat-field effect, and the resulting image was used to estimate the variance of the signal ($\sigma_S^2$). Fig. \ref{gain_ccd} shows the photon transfer curve (PTC) of the CCD. To estimate the gain, the PTC was fitted with a linear function using \sw{linregress}. The estimated gain is $1.00 \pm 0.04$ {e$^{-}/$ADU}, which matches the electronic gain value of the system within the error bar.
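The PTC-based gain estimate can be sketched as follows. The exposure levels, array size, and random seed below are synthetic stand-ins for the LED flat pairs described above, not the actual laboratory data:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Synthetic flat pairs at increasing illumination, standing in for the LED
# exposures described in the text (levels, shape, and seed are illustrative).
true_gain = 1.0           # e-/ADU, the target system gain
read_noise_adu = 6.2      # readout noise from the bias analysis
shape = (512, 512)

means, variances = [], []
for level in [2000, 5000, 10000, 20000, 40000]:
    # Poisson photon noise in electrons, converted to ADU, plus read noise
    img1 = (rng.poisson(level * true_gain, shape) / true_gain
            + rng.normal(0.0, read_noise_adu, shape))
    img2 = (rng.poisson(level * true_gain, shape) / true_gain
            + rng.normal(0.0, read_noise_adu, shape))
    means.append(0.5 * (img1.mean() + img2.mean()))
    # Differencing the pair cancels fixed-pattern structure; the variance of
    # the difference is twice the single-image variance
    variances.append(np.var(img1 - img2) / 2.0)

# Janesick: var = S/G + RN^2, so the PTC slope is 1/G
fit = linregress(means, variances)
gain = 1.0 / fit.slope
print(f"estimated gain: {gain:.2f} e-/ADU")
```

The same fit applied to the on-sky flat pairs yields the slightly noisier estimate quoted in the on-sky verification section.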
\begin{figure*} \includegraphics[width=8cm,height=8cm]{temp_grad.pdf} \includegraphics[scale=0.40]{Bias_-50.pdf} \caption{Master bias at $-50^\circ$C and the deviation of the counts of all pixels from the zeroth-pixel counts. A clear gradient can be seen in the image, as the later pixels get more time to generate the dark signal.} \label{dark} \end{figure*} \subsection{Thermal noise} \label{thermal_noise} In CCD detectors, the electron charge density increases exponentially with temperature due to the thermal generation of electrons. A CCD must therefore be cooled optimally to minimize the dark signal. To determine this optimum temperature, we calculated the dark signal at various temperatures ranging from $-35^\circ$C to $-120^\circ$C using the bias frames. We acquired several bias frames at different temperatures and generated master bias frames at each temperature. The left panel of Fig. \ref{dark} shows the master bias frame at $-50^\circ$C. A gradient in counts is visible in the master bias due to the finite readout time of the CCD. The zeroth pixel gets the least time to generate dark electrons, whereas the last pixel accumulates dark counts over the full readout time and hence has the maximum number of thermally generated electrons. If the readout time and the gain are known, then by comparing the counts of the first and the last pixel, we can measure the number of electrons generated per pixel per second. Using this method, we estimated the dark signal at different temperatures. \input{table1} The right panel of Fig. \ref{dark} shows the deviation of the counts in each pixel from the zeroth counts. The farther a pixel is from the readout port, the larger its dark count and the larger its deviation from the zeroth counts. To determine the slope of this gradient, we fitted a polynomial to the counts vs. pixel number data using the \sw{polyfit} function of \sw{numpy}. A linear function is seen to provide the best fit, as shown in the right panel of Fig. \ref{dark}.
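Converting the fitted slope into a dark-signal rate uses only the readout time and the binning factor. A minimal sketch with an illustrative slope value; dividing by 16 to go from a $4\times4$-binned super-pixel to a single physical pixel is our reading of the scaling described in the text:

```python
import numpy as np

# Synthetic stand-in for the master-bias counts vs. pixel index at one
# temperature; in practice the slope comes from the linear fit shown above.
n_pix = 1024 * 1024          # pixels in the 4x4-binned frame
readout_time_s = 104.0       # single-port readout time (from the text)
true_slope = 2.0e-4          # counts per pixel index (illustrative)

pixel_index = np.arange(n_pix, dtype=float)
counts = 1134.0 + true_slope * pixel_index    # idealized gradient

# Linear fit with numpy's polyfit, as described in the text
slope, intercept = np.polyfit(pixel_index, counts, 1)

# Counts accumulated between the first and the last pixel over one readout
delta_counts = slope * (n_pix - 1)

# Dark rate per binned super-pixel, then per physical pixel; dividing by 16
# accounts for the 4x4 binning (our reading of the scaling in the text)
dark_binned = delta_counts / readout_time_s
dark_per_pixel = dark_binned / 16.0
print(f"dark signal: {dark_per_pixel:.3f} counts/pixel/s (gain = 1 e-/ADU)")
```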
We used the slope to calculate the difference in counts between the first and the last pixel. We divided this difference by the total readout time to obtain dark counts generated per second. Since the bias frames were acquired in $4\times4$ binning, it was further scaled by a factor of 16 after subtracting the RN. Below $-100^\circ$C temperature, the thermal noise becomes less than RN; hence, we could estimate the dark signal values up to $-95^\circ$C. The dark signal values at different temperatures are listed in table \ref{tab:dark}. As shown in Fig. \ref{dark_signal}, the dark signal varies exponentially with temperature. Below -80°C, the dark signal is negligible, suggesting that the CCD can be used below this temperature with minimal thermal noise. \begin{figure} \includegraphics[width=\columnwidth]{dark_vs_temperature.pdf} \caption{Variation of dark signal with temperature. The dark signal is negligible below $-80^\circ$C.} \label{dark_signal} \end{figure} \subsection{CCD defects} \label{ccd_defects} \begin{figure} \includegraphics[width=\columnwidth]{Bias_vs_counts.pdf} \caption{The response of CCD pixels at different temperatures is shown in the figure. The deviation of the counts from the zeroth counts is fitted with a polynomial (black line). The dark signal is calculated from the best fit. There are a few pixels which are behaving differently at higher temperatures. At lower temperatures, the CCD behaves as grade-0 CCD.} \label{defects} \end{figure} The CCD may have some pixels that might not respond to light optimally due to defects in the CCD structure. These can be point defects, hot defects, or dark/dead pixels. We are employing a grade-0 CCD detector (the CCD detector with minimum possible defects) as mentioned by the manufacturer. To check the CCD for point and column defects, we examined the response of all the pixels at different temperatures. Fig. 
\ref{defects} shows the deviation of counts from the mean bias counts for each pixel of the CCD operated at different temperatures (the method to obtain these plots is described in section \ref{thermal_noise}). Some pixels are seen to behave differently at higher temperatures exhibiting high counts. Though they appear to be hot pixels, the counts are found to decrease with decreasing temperature. Eventually, below $-110^\circ$C, the CCD acts as a nominal grade-0 CCD without any point and column defects. \section{On-sky verification} \label{sky_verification} After optimizing the performance of the CCD system in the laboratory, we verified the on-sky performance. The CCD was integrated with the ADFOSC instrument and mounted on the axial port of 3.6m DOT. This section describes the estimation of gain, linearity, bias level, and bias stability using on sky observation with the instrument. \subsection{Bias Stability} We calculated the bias level using the methodology described in section 3. The mean bias level equal to $1133.85\pm2.48$ matches the laboratory estimated value, i.e. $1134.01\pm2.62$. Since fluctuation in bias level can introduce errors in photometric estimates, we acquired and examined several bias frames to check the stability of the bias. Fig. 4 shows the variation of the mean bias level for 30 different nights spread across an observing cycle of three months. The mean bias level fluctuates within a fraction of a count, ensuring bias stability. \begin{figure} \includegraphics[width=\columnwidth]{bias_stability.pdf} \caption{Variation within the mean counts of bias frames during different nights. The bias level is stable within one count. The red and black dotted lines show the mean bias level estimated in the lab and on-sky.} \label{bias_stability} \end{figure} \subsection{Linearity and gain} \begin{figure} \centering \includegraphics[width=\columnwidth]{ds9.pdf} \caption{CCD image of the Landolt standard field SA110. 
The standard stars are between 10 and 16 mag in the V-band.} \label{SA110} \end{figure} We chose a standard field available at the zenith at the time of observations to validate the linearity and gain of the CCD. Multiple images of the Landolt standard field SA110 \cite{1992AJ....104..340L} were acquired in the r-band with exposure times ranging from 5 sec to 100 sec. Before using the images for characterization purposes, we pre-processed them with the basic steps of bias subtraction, flat correction and cosmic-ray removal using \sw{ccdproc}. Fig. \ref{SA110} shows the pre-processed CCD image of the field SA110, which contains both bright and faint stars (with magnitudes ranging from 10 to 16 mag in the V-band). We used the faint stars to check the linearity in the lower count region and the bright stars to estimate the saturation level of the CCD. Fig. \ref{on_sky_linearity} shows the CCD linearity with $R^2$ = $0.9997$ and a non-linearity of 0.30\%. The CCD system is seen to saturate at 65535 counts for a gain of 1 {e$^{-}/$ADU}. \begin{figure} \centering \includegraphics[width=\columnwidth]{linearity_sky.pdf} \caption{Variation of mean counts with exposure time from on-sky experiments. The black line represents the best linear fit with a regression coefficient of 0.9997.} \label{on_sky_linearity} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{Gain_sky1.pdf} \caption{Photon transfer curve (PTC) of the CCD as obtained from the sky experiments. The gain value is estimated as $1.04\pm0.13$ e$^{-}/$ADU.} \label{on_sky_gain} \end{figure} The estimated gain value using the method described in section \ref{gain} is $1.04\pm0.13$ e$^{-}$/ADU, which is close to the value estimated in the laboratory. The on-sky gain estimation is also affected by sky variation, which results in a slightly larger error bar. The mean-variance plot is shown in Fig. \ref{on_sky_gain}. 
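The mean-variance (photon transfer) estimate described above can be sketched in a few lines of Python. This is only an illustration on simulated flat-field pairs; the function, frame values, and read-noise figure below are ours and are not part of the ADFOSC pipeline:

```python
import numpy as np

def ptc_gain(flat_pairs, bias_level=1134.0, read_noise_adu=8.0):
    """Estimate gain (e-/ADU) from pairs of identical flat frames.

    Photon transfer: shot-noise variance (ADU^2) = mean signal (ADU) / gain,
    so the inverse slope of the mean-variance relation is the gain.
    """
    means, shot_vars = [], []
    for f1, f2 in flat_pairs:
        mean = 0.5 * (f1.mean() + f2.mean()) - bias_level
        # Variance of the difference frame cancels fixed-pattern noise;
        # halving it gives the per-frame noise variance.
        var = np.var(f1 - f2) / 2.0
        means.append(mean)
        shot_vars.append(var - read_noise_adu**2)  # remove the read-noise floor
    slope = np.polyfit(means, shot_vars, 1)[0]
    return 1.0 / slope

# Simulated flats: Poisson shot noise + Gaussian read noise, true gain 1 e-/ADU.
rng = np.random.default_rng(0)
pairs = []
for level in (2000, 5000, 10000, 20000, 40000):        # mean signal, electrons
    f = rng.poisson(level, size=(2, 256, 256)).astype(float)
    f += 1134.0 + rng.normal(0.0, 8.0, size=f.shape)   # bias level + read noise
    pairs.append((f[0], f[1]))

gain_est = ptc_gain(pairs)   # close to 1 e-/ADU for the simulated frames
```

Using difference frames, as here, removes pixel-to-pixel sensitivity variations that would otherwise inflate the variance at high signal levels.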
\section{Performance of the CCD} \label{performance} \begin{figure} \centering \includegraphics[width=\columnwidth]{210217A_dot.pdf} \caption{Field of the GRB 210217A afterglow imaged with the ADFOSC in the r-band.} \label{imaging} \end{figure} We used the CCD for imaging and spectroscopic observations of various science targets after optimizing it in the lab and successfully verifying it on-sky. This section demonstrates the performance of the CCD system in both imaging and spectroscopic modes with observations of GRB and supernova sources. The on-sky performance of ADFOSC on different science targets is discussed in more detail in Omar et al. 2019 \cite{2019arXiv190205857O}. \subsection{Imaging} We observed the optical afterglow of GRB 210217A using the ADFOSC in imaging mode. These observations were performed on 18th February 2021 in the r-band at 23:59:18 UT, $\sim1.7$ days after the burst. Owing to the faintness and rapid decay rate of GRB afterglows, a series of eight images, each with an exposure time of 300 seconds, was acquired. The images were stacked after pre-processing (as described in the previous section) to improve the signal-to-noise ratio. The optical afterglow is visible in the stacked image, as shown in Fig. \ref{imaging}. The photometric estimate of the afterglow is $22.32\pm0.16$ mag (AB). \begin{figure} \includegraphics[scale=0.3]{lamp_colmn.pdf} \includegraphics[scale=0.3]{lamp_cal.pdf} \caption{The spectrum of the Mercury-Argon lamp using the 770R grating. The left panel shows the spectrum in pixel scale. The vertical lines indicate the column numbers of the first and last emission lines identified in the spectrum. The right panel shows the spectrum in wavelength scale.} \label{spectal_dispersion} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{spectra.pdf} \caption{Spectrum of SN~2020jfo obtained using ADFOSC at $\sim254$ days after the discovery of the supernova. 
We identified different absorption lines in the spectrum and compared these with spectra taken from other instruments/telescopes.} \label{spectroscopy} \end{figure} \subsection{Spectroscopy} The spectrograph provides three gratings: 300 gr/mm, 420 gr/mm, and 600 gr/mm. We acquired lamp spectra using the Mercury-Argon (HgAr) calibration lamp to estimate the spectral dispersion. We used the \sw{find\_peaks} function of \sw{scipy}\cite{2020SciPy-NMeth} to extract the spectral peaks from the obtained spectra. We identified the column number corresponding to each peak and compared it with the wavelength-calibrated lamp spectrum atlas. We defined initial polynomial solutions using these matched wavelength pairs and calculated the best-fit polynomial coefficients to transform between column number and wavelength. The left panel of Fig. \ref{spectal_dispersion} shows the lamp spectrum obtained using the 300 gr/mm grating element in pixel scale. The right panel shows the calibrated spectrum in wavelength scale. A spectral dispersion of 0.20 nm/pixel is estimated for this grating. For the 420 gr/mm and 600 gr/mm grating elements, the estimated values of spectral dispersion are 0.14 nm/pixel and 0.10 nm/pixel, respectively. We acquired the spectrum of the supernova SN~2020jfo using a $1.5''$ slit and the 420 gr/mm grating with an exposure time of 900 sec on 13th January 2021 at 23:12:53 UT. Ailawadhi et al.\cite{2022arXiv221102823A} describe the spectral reduction technique. The absorption features in the spectrum obtained by ADFOSC were identified and matched with spectra obtained from other telescopes, as shown in Fig. \ref{spectroscopy}. Spectral features at longer wavelengths are clearly visible, indicating the high sensitivity of the deep-depletion CCD detector in the long-wavelength regime. 
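The pixel-to-wavelength step described above can be sketched as follows. This toy example uses a synthetic lamp spectrum with an assumed linear 0.2 nm/pixel dispersion and matches peaks to lines by order; the real reduction identifies lines against the HgAr atlas:

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic lamp spectrum: Gaussian emission lines at known columns.
line_cols = np.array([120, 480, 950, 1400, 1850])
line_wavelengths = 400.0 + 0.2 * line_cols        # nm, assumed 0.2 nm/pixel
x = np.arange(2048)
spectrum = sum(np.exp(-0.5 * ((x - c) / 2.0) ** 2) for c in line_cols)

# 1. Locate emission-line peaks (pixel columns).
peaks, _ = find_peaks(spectrum, height=0.2)

# 2. Fit a polynomial mapping column number -> wavelength
#    (here the peak list and line list are matched by order).
coeffs = np.polyfit(peaks, line_wavelengths, deg=2)
pix_to_wavelength = np.poly1d(coeffs)

# 3. Mean spectral dispersion across the chip, in nm/pixel.
dispersion = (pix_to_wavelength(2047) - pix_to_wavelength(0)) / 2047.0
```

With a well-matched line list, the recovered dispersion is simply the mean slope of the fitted polynomial across the chip.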
\section{Conclusions} \label{conclusions} We present the methodology employed to characterize the performance of a CCD system developed for integration with a low-dispersion spectrograph instrument, ADFOSC, on the 3.6m DOT. Various experiments were initially performed in the laboratory to characterize and optimize different critical parameters of the CCD system. We also verified the estimated parameters on the sky by mounting the instrument on the 3.6m DOT. We evaluated the bias level during the on-sky tests and examined its stability over several observing nights. We experimentally identified an optimum combination of bias voltages (VOD and VRD) to operate the CCD with minimum non-linearity. The readout performance of the CCD is satisfactory; however, some interference patterns in the image contribute to the readout noise. Through different experiments, we tuned and verified the gain parameter corresponding to 1 e$^{-}/$ADU for detecting faint objects. We calculated the dark current at different temperatures using bias frames at lower temperatures and established an optimum operating temperature for the CCD. The CCD acts as a grade-0 detector with no hot pixels at the optimum temperature. The regression coefficient values and the gain parameter obtained on-sky are consistent with the values obtained in the laboratory. After verifying the satisfactory performance, we observed science targets in both imaging and spectroscopic modes. We carried out imaging of the GRB 210217A \cite{Dimple2022} field and spectroscopy of the supernova SN~2020jfo using ADFOSC successfully \cite{2022arXiv221102823A}. \section*{Acknowledgments} We thank Greg Burley and Tim Hardy from the NRC Herzberg Astronomy and Astrophysics Research Centre, Canada, for their help in developing the CCD system. We thank the ARIES 3.6 m DOT engineering team and staff members for providing the necessary support during the development, verification, and commissioning work. We would also like to thank Dr. 
Raya Dastidar for helping us with spectroscopic data reduction.
# The Fugue: Counterpoint by Hans Fugal

## Calipers and Science

*28 Apr 2012*

Just for kicks I dug up the original Jackson/Pollock paper for skinfold measurements for determining body fat percentage. Turns out there's also a 7-point equation that also takes circumference of waist and forearm into account.

Here's a snapshot of the equations for men from the paper ("Generalized equations for predicting body density of men" by A.S. Jackson and M.L. Pollock, 1978. I couldn't find the PDF for the women paper online).

Important notes: skinfolds are in millimeters, circumferences are in meters, and log is the natural log (ln in most computer languages). I plugged my values from two weeks back into a spreadsheet and got the following results:

| JP Equation | Density | %BF |
|---|---|---|
| **Sum of seven skinfolds** | | |
| S, S², age | 1.0518 | 20.62% |
| S, S², age, C | 1.0476 | 22.51% |
| log S, age | 1.0506 | 21.15% |
| log S, age, C | 1.0482 | 22.25% |
| **Sum of three skinfolds** | | |
| S, S², age (5) | 1.0607 | 16.69% |
| S, S², age, C (6) | 1.0549 | 19.24% |
| log S, age (7) | 1.0578 | 17.95% |
| log S, age, C (8) | 1.0574 | 18.14% |

The most interesting thing here is that there's a large difference between 7 and 3 site measurements, and the 3 site range is significantly larger. Also very interesting to note is that the one-site (suprailiac) AccuMeasure chart is, for me, in line with the 7-site measurement (22.1%). Given other measurements I've taken and just general guesswork based on what I see in the mirror, I think that is a decent estimate.

It's also curious that there are two sets of equations given, one using logs and one using squares.

Moral of the story: more data is better, sometimes not-enough more data is worse than a simpler estimate, and interesting things can be learned when you go to the original source. (This is just a quick note, but the paper is very interesting and reading it will be an interesting exercise that sets proper expectations for, and understanding of, the JP7 skinfold method.)

## Measure body fat with only a gallon jug (and a couple thousand tons of water)

*14 Nov 2007*

I finally got around to uploading the PDF version of my body fat measurement quest, and also a simplified one-page PDF for those of you that just want to try it out and don't want to wade through all the physics and my ramblings. While I was at it I threw together an OpenOffice.org spreadsheet to do all the heavy math for you too. Now the only difficulty is finding a couple thousand tons of water laying around. I put together a simple page with links to all that stuff I just mentioned (except the water).

## Body Density Measurement Uncertainty

*22 Aug 2007*

A couple days back I posted my idea for measuring body density and estimating body fat. Dad, who has a set of skinfold calipers, gave it a try and gave me comparative results, and asked the question on everybody's mind: just how accurate is it, especially with that pretty blatant guess at residual lung volume?

So I took some time to learn how to account for uncertainty and take a stab at pinning a confidence interval on the technique. First of all, I didn't realize how complicated uncertainty propagation is. Partial derivatives, squares and square roots, etc. Luckily, I came across some lecture or presentation notes detailing a sequential perturbation method (instead of an analytical method). I could have talked Jacob into walking me through the partial derivatives, but this method is easy to code and a find in and of itself. Read about it in this PDF.

I coded up the formula and ran some test data through it. Here's the equation again for review: ρ = m / ((m + m_c)/ρ_w − (v_a + v_c + v_r)). Here are the values and uncertainty I attribute to each variable:

- m = 121.29 ±0.02 kg
- ρ_w = 0.997 ±0.001 kg/l
- v_a = 1.13 ±0.01 l
- v_r = 1.87 ±0.5 l
- m_c = 0 ±0.02 kg
- v_c = 0 ±0.01 l

I didn't actually use a counterbalance, but I included the uncertainty in measuring its mass and volume as if I had, just for completeness. As suspected, v_r has the largest uncertainty. I calculated the uncertainty if v_r were magically accurate, and found that the uncertainty was 0.0014 kg/l. This translates to about 0.65% body fat with Siri's equation (ignoring the uncertainty inherent in that equation, which is a constant bias across measurements for one person on any given day).

Note that I give ρ_w this time, instead of whisking it away with a magical 1 kg/l. I picked an average value between 72°F and 84°F (most pools are in this range), with an uncertainty (due to water temperature) of about 0.001 kg/l. If you use 1 kg/l instead you are introducing a bias of about 0.9% body fat. So I was wrong about that being insignificant.

Now, I found a better estimate (why better? because it seems to come from a more reputable source than Wikipedia) for residual lung volume: v_r = RV = 0.24 VC. So I may have overestimated my RV last time by ½ liter.

(Update: I think that must be a typo on that page; they probably mean 24% or 28% of total capacity instead. This fits in much better with the rest of the literature that I have found, e.g. Quanjer and Paoletti.)

That seems like a generous uncertainty measure for RV, too. With that uncertainty factored in, we get an uncertainty of about 2.1% body fat, or about 5% if you are on the slight side of average (the less you weigh, the more difference that ½ liter makes).

So, Dad, let's bump your score up by about 1% for the density of water and then tack an uncertainty of 2% onto it: you have a body fat of 26.3% ±2%. I'm no expert on using calipers, but one paper's abstract indicates that the skinfold method uncertainty is about 3%. I've seen 10% tossed around casually too, but have no reliable source to back that up. That puts the two methods within the appropriate reach of each other, which is heartening. It's interesting to note that BMI is overestimating Dad's fat, because he's more lean than the average couch potato. Imagine the difference if the subject were someone completely nuts, like a young triathlete, who has body fat of about 15%. Even better, if you are such a nut you could do the experiment and post your results (and BMI) here as a comment for us to see.

*20 Aug 2007*

When you try to lose weight, what you are really trying to do is lose fat. Weighing yourself is a first approximation of your progress, but a better indicator is your body fat percentage (%BF). Unfortunately, measuring %BF can be expensive and/or difficult. It doesn't need to be so. All you need is a body of water (e.g. a swimming pool), a gallon jug, and your bathroom scale.

One of the most accurate ways to measure body fat is hydrostatic weighing. You are weighed underwater and on land, and your body's density is determined. Then body fat is estimated from the measured density. This is the same basic technique that we will use, but we don't require an underwater scale or special tank.

First the how, then I'll give you the physics. Get in the pool and exhale all the air you can, and allow yourself to sink. You will sink unless you're particularly obese. Take note of the sensation of sinking. Then do the same thing but with your lungs full. Take note of the sensation of floating. Now, we want to reach the point of neutral buoyancy when your lungs are empty, where you are neither sinking nor floating. You will be weightless under the water. Take the gallon jug and hold it under the water, then exhale completely. If the jug is full of air, you will probably float (unless you are quite lean, in which case you'll need two jugs). Keep adding water and repeating until you reach neutral buoyancy. If you sink, add air (pour out some water). If you float, add water. Once you've found the magic amount of water, use this equation to calculate your body density (ρ):

$\rho=\frac{m}{\frac{m}{\rho_w}-v_a}$

where m is your mass (what the scale tells you), v is the volume of you and your buoy combined, and v_a is the volume of air in your buoy. If you have ¾ gallon of water in your gallon jug, then v_a is ¼ gallon. ρ_w = 1 kg/liter for all the precision we need.

Once you have density, you may like to estimate your body fat. The equation for that is Siri's equation, which says

$\text{BF}=\left(\frac{4.95}{\rho}-4.50\right)100$

This equation assumes your lungs are completely empty, which they can't be, so we need to introduce a term for the residual volume of your lungs. This is about ¼ of your total lung capacity, or ⅓ of your vital lung capacity. You can measure your vital lung capacity with a balloon or by blowing air through a straw into an inverted container filled with water. The average residual lung capacity for an American male is 1.2 liters; mine is about 1.9 liters. So we can adjust the formula for density as follows:

$\rho=\frac{m}{\frac{m}{\rho_w}-v_a-v_r}$

If you do this experiment you will probably find that your estimated %BF is not too far from your BMI, which is a statistical tool for estimating %BF. It can be wildly inaccurate for statistical outliers (e.g. people who are actually in shape), but it's easy to calculate and a decent sanity check in this case.

Here's what's going on. We're using Archimedes' principle: the buoyant force on a submerged object is equal to the weight of the fluid displaced. When the buoyant force balances the force of gravity, we have neutral buoyancy. The buoyant force is expressed as $F_b=-\rho_w v g$. Substitute weight for the buoyant force and solve for the volume of the body (v = v_body + v_air), then substitute that into the definition of density (m/v), and you get the formula I gave you above (if you consider the mass of air and the gallon jug as negligible). I glossed over that; if you'd like me to go into more detail say so in the comments.

If you're particularly obese and don't sink when you exhale completely, then all is not lost. You just need some counterbalance. The modified equation is:

$\rho=\frac{m_b}{\frac{m_b+m_c}{\rho_w}-v_a-v_r-v_c}$

You can find the volume of your counterbalance by taking a cue from Archimedes and measuring displacement.

About accuracy: the biggest variable in this process is how much air is left in your lungs. You will find with practice that you are able to exhale more air, which will lower your %BF estimation, as if by magic. However it always overestimates and once you figure out how to completely exhale will be very consistent. Siri's equation is the next place to look: it basically takes the density of fat and the density of muscle and ignores bone mass and density, what you ate for lunch, etc. It will also almost certainly overestimate %BF. The astute reader will wonder about air compression in the milk jug. I measured this and found that when the jug is held within a foot or so from the surface, it does compress. However, the amount it compresses conveniently offsets the extra capacity of the jug (they don't pack milk spilling over the brim of the jug, after all). All in all I think it's accurate within a few percentage points for %BF, gives you an upper bound (i.e. you are free to brag about the number you get, even if it may be slightly high), and is more accurate than BMI.
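Here's a quick Python sketch of the density formula, Siri's equation, and the sequential perturbation method using my numbers from the uncertainty post. Variable names and structure are mine; treat it as an illustration, not a medical instrument:

```python
import math

def body_density(m, rho_w, v_a, v_r, m_c=0.0, v_c=0.0):
    """rho = m / ((m + m_c)/rho_w - (v_a + v_c + v_r)), in kg/l."""
    return m / ((m + m_c) / rho_w - (v_a + v_c + v_r))

def siri_bf(rho):
    """Siri's equation: percent body fat from body density."""
    return (4.95 / rho - 4.50) * 100.0

def sequential_perturbation(f, values, uncertainties):
    """Perturb one input at a time by its uncertainty and combine
    the resulting shifts in quadrature (no partial derivatives needed)."""
    total = 0.0
    for name, u in uncertainties.items():
        hi = {**values, name: values[name] + u}
        lo = {**values, name: values[name] - u}
        total += ((f(**hi) - f(**lo)) / 2.0) ** 2
    return math.sqrt(total)

values = dict(m=121.29, rho_w=0.997, v_a=1.13, v_r=1.87, m_c=0.0, v_c=0.0)
uncertainties = dict(m=0.02, rho_w=0.001, v_a=0.01, v_r=0.5, m_c=0.02, v_c=0.01)

rho = body_density(**values)                   # about 1.022 kg/l
u_rho = sequential_perturbation(body_density, values, uncertainties)
bf = siri_bf(rho)                              # about 34% body fat
u_bf = siri_bf(rho - u_rho) - siri_bf(rho)     # roughly 2% body fat
```

As the post found, the residual lung volume term dominates: drop its ±0.5 l and the density uncertainty collapses by a factor of three.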
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <title>Using advanced Berkeley DB features with dbstl</title> <link rel="stylesheet" href="gettingStarted.css" type="text/css" /> <meta name="generator" content="DocBook XSL Stylesheets V1.73.2" /> <link rel="start" href="index.html" title="Berkeley DB Programmer's Reference Guide" /> <link rel="up" href="stl.html" title="Chapter 7. Standard Template Library API" /> <link rel="prev" href="stl_db_usage.html" title="Berkeley DB configuration" /> <link rel="next" href="stl_txn_usage.html" title="Using transactions in dbstl" /> </head> <body> <div class="navheader"> <table width="100%" summary="Navigation header"> <tr> <th colspan="3" align="center">Using advanced Berkeley DB features with dbstl</th> </tr> <tr> <td width="20%" align="left"><a accesskey="p" href="stl_db_usage.html">Prev</a> </td> <th width="60%" align="center">Chapter 7. 
Standard Template Library API</th> <td width="20%" align="right"> <a accesskey="n" href="stl_txn_usage.html">Next</a></td> </tr> </table> <hr /> </div> <div class="sect1" lang="en" xml:lang="en"> <div class="titlepage"> <div> <div> <h2 class="title" style="clear: both"><a id="stl_db_advanced_usage"></a>Using advanced Berkeley DB features with dbstl</h2> </div> </div> </div> <div class="toc"> <dl> <dt> <span class="sect2"> <a href="stl_db_advanced_usage.html#id1599057">Using bulk retrieval iterators</a> </span> </dt> <dt> <span class="sect2"> <a href="stl_db_advanced_usage.html#id1599981">Using the DB_RMW flag</a> </span> </dt> <dt> <span class="sect2"> <a href="stl_db_advanced_usage.html#id1599961">Using secondary index database and secondary containers</a> </span> </dt> </dl> </div> <p> This section describes advanced Berkeley DB features that are available through dbstl. </p> <div class="sect2" lang="en" xml:lang="en"> <div class="titlepage"> <div> <div> <h3 class="title"><a id="id1599057"></a>Using bulk retrieval iterators</h3> </div> </div> </div> <p> Bulk retrieval is an optimization option for const iterators and nonconst but read-only iterators. Bulk retrieval can minimize the number of database accesses performed by your application. It does this by reading multiple entries at a time, which reduces read overhead. Note that non-sequential reads will benefit less from, or even be hurt by, this behavior, because it might result in unneeded data being read from the database. Also, non-serializable reads may read obsolete data, because part of the data read from the bulk read buffer may have been updated since the retrieval. </p> <p> When using the default transaction isolation, iterators will perform serializable reads. In this situation, the bulk-retrieved data cannot be updated until the iterator's cursor is closed. 
</p> <p> Iterators using a different isolation level, such as <a href="../api_reference/C/dbcget.html#dbcget_DB_READ_COMMITTED" class="olink">DB_READ_COMMITTED</a> or <a href="../api_reference/C/dbopen.html#dbopen_DB_READ_UNCOMMITTED" class="olink">DB_READ_UNCOMMITTED</a>, will not perform serializable reads. The same is true for any iterators that do not use transactions. </p> <p> A bulk retrieval iterator can only move in a single direction, from beginning to end. This means that iterators only support operator++, and reverse iterators only support operator--. </p> <p> Iterator objects that use bulk retrieval might contain hundreds of kilobytes of data, which makes copying the iterator object an expensive operation. If possible, use ++iterator rather than iterator++. This can save a useless copy construction of the iterator, as well as an unnecessary dup/close of the cursor. </p> <p> You can configure bulk retrieval for each container using both the const and non-const versions of the <code class="methodname">begin()</code> method. The non-const version of <code class="methodname">begin()</code> will return a read-only cursor. Note that read-only means something different in C++ than it does when referring to an iterator. The latter only means that it cannot be used to update the database. </p> <p> To configure the bulk retrieval buffer for an iterator when calling the <code class="methodname">begin()</code> method, use the <code class="function">BulkRetrievelItrOpt::bulk_retrieval(u_int32_t bulk_buffer_size)</code> function. </p> <p> If you move a <code class="classname">db_vector_iterator</code> randomly rather than sequentially, then dbstl will not perform bulk retrieval because there is little performance gain from bulk retrieval in such an access pattern. </p> <p> You can call <code class="function">iterator::set_bulk_buffer()</code> to modify the iterator's bulk buffer size. Note that once bulk read is enabled, only the bulk buffer size can be modified. 
This means that bulk read cannot be disabled. Also, if bulk read was not enabled when you created the iterator, you can't enable it after creation. </p> <p> Example code using this feature can be found in the <code class="methodname">TestAssoc::test_bulk_retrieval_read()</code> method, which is available in the dbstl test suite. </p> </div> <div class="sect2" lang="en" xml:lang="en"> <div class="titlepage"> <div> <div> <h3 class="title"><a id="id1599981"></a>Using the DB_RMW flag</h3> </div> </div> </div> <p> The <a href="../api_reference/C/dbcget.html#dbcget_DB_RMW" class="olink">DB_RMW</a> flag is an optimization for non-const (read-write) iterators. This flag causes the underlying cursor to acquire a write lock when reading so as to avoid deadlocks. Passing <code class="function">ReadModifyWriteOption::read_modify_write()</code> to a container's <code class="methodname">begin()</code> method creates an iterator whose cursor has this behavior. </p> </div> <div class="sect2" lang="en" xml:lang="en"> <div class="titlepage"> <div> <div> <h3 class="title"><a id="id1599961"></a>Using secondary index database and secondary containers</h3> </div> </div> </div> <p> Because duplicate keys are forbidden in primary databases, only <code class="classname">db_map</code>, <code class="classname">db_set</code> and <code class="classname">db_vector</code> are allowed to use primary databases. For this reason, they are called <span class="bold"><strong>primary containers</strong></span>. A secondary database that supports duplicate keys can be used with <code class="classname">db_multimap</code> containers. These are called <span class="bold"><strong>secondary containers</strong></span>. Finally, a secondary database that forbids duplicate keys can back a <code class="classname">db_map</code> container. 
</p> <p> The <span class="bold"><strong>data_type</strong></span> of this <code class="classname">db_multimap</code> secondary container is the <span class="bold"><strong>data_type</strong></span> for the primary container. For example, given a <code class="classname">db_map&lt;int, Person&gt;</code> object where the <code class="classname">Person</code> class has an <code class="literal">age</code> property of type <code class="literal">size_t</code>, a <code class="classname">db_multimap&lt;size_t, Person&gt;</code> backed by a secondary database allows access to a person by age. </p> <p> A container created from a secondary database can only be used to iterate, search or delete. It cannot be used to update or insert. While dbstl does expose the update and insert operations, Berkeley DB does not, and an exception will be thrown if attempts are made to insert objects into or update objects of a secondary container. </p> <p> Example code demonstrating this feature is available in <code class="methodname">TestAssoc::test_secondary_containers()</code>, which is available in the dbstl test suite. </p> </div> </div> <div class="navfooter"> <hr /> <table width="100%" summary="Navigation footer"> <tr> <td width="40%" align="left"><a accesskey="p" href="stl_db_usage.html">Prev</a> </td> <td width="20%" align="center"> <a accesskey="u" href="stl.html">Up</a> </td> <td width="40%" align="right"> <a accesskey="n" href="stl_txn_usage.html">Next</a></td> </tr> <tr> <td width="40%" align="left" valign="top">Berkeley DB configuration </td> <td width="20%" align="center"> <a accesskey="h" href="index.html">Home</a> </td> <td width="40%" align="right" valign="top"> Using transactions in dbstl</td> </tr> </table> </div> </body> </html>
\section{Introduction} We consider first the semirelativistic $N$-body Hamiltonian $H$ given by $$H=\sum_{i=1}^N\sqrt{\|{\mathbf p}_i\|^2+m^2}+\sum_{1=i<j}^NV(r_{ij}),\eqno{(1)}$$ and the following model Hamiltonian $H_c$ $$H_c= \sum_{1=i<j}^N\left[{\gamma}^{-1}\sqrt{\gamma\|{\mathbf p}_i-{\mathbf p}_j\|^2+ (mN)^2}~+~ V(r_{ij}) \right],\eqno{(2)}$$ where $\gamma = {N\choose 2} = \frac{1}{2} N(N-1).$ If $\Psi(\rho_2,\rho_3,\dots,\rho_N)$ is the lowest boson eigenstate of $H$ expressed in terms of Jacobi relative coordinates, then it was proved in Ref.~\cite{HallLucha2007} that the model facilitates a `reduction' $\langle H_c\rangle = \langle{\mathcal H}\rangle$ to the expectation of a one-body Hamiltonian ${\mathcal H}$ given by $${\cal H} = N\sqrt{\lambda p^2+ m^2}~~+~\gamma V(r),\quad \lambda ={{2(N-1)}\over{N}}. \eqno{(3)}$$ The question remains as to the relation between $H$ and the model $H_c.$ It is known from earlier work (discussed in~\cite{HallLucha2007}) that the {\bf lower bound conjecture} $$\langle H\rangle \geq \langle H_c\rangle\eqno{(4)}$$ is true for the following cases: for the harmonic oscillator $V(r) = vr^2,$ for all attractive $V(r)$ in the nonrelativistic large-$m$ limit, and for static gravity $V(r) = -v/r$. 
This list was augmented in Ref.~\cite{HallLucha2007} by the following cases: in general for $N = 3$, and, if $m = 0,$ for $N = 4.$ The purpose of the present article is to extend this list to include the ultrarelativistic cases $m = 0$ for all $N\ge 2$ and arbitrary attractive $V(r).$ \section{The general lower bound for $m = 0.$} It was shown in Ref.~\cite{HallLucha2007} that the non-negativity of the expectation $\langle\delta(m,N)\rangle$ is sufficient to establish the validity of the conjecture (4), where $$ \delta(m, N) = \sum_{i = 1}^{N}\sqrt{\|{\mathbf p}_i\|^2 + m^2}\ -\ {{2}\over{N-1}}\sum_{1=i<j}^{N} \sqrt{{{N-1}\over{2N}}\|{\mathbf p}_i -{\mathbf p}_j\|^2+ m^2}.\eqno{(5)} $$ Thus for the new cases we are now able to treat we must consider $\langle\delta(0,N)\rangle$. By using the necessary boson permutation symmetry of $\Psi$, the expectation value we need to study is reduced to $$ \langle\delta(0, N)\rangle = N\left\langle\|{\mathbf p}_1\|\ -\ \sqrt{{{N-1}\over{2N}}}\|{\mathbf p}_1 -{\mathbf p}_2\|\right\rangle.\eqno{(6)} $$ \clearpage The principal result of this paper, the lower bound for $m = 0$ and all $N\ge2$, is an immediate consequence of the following: \nll{\bf Theorem~1}~~~$\langle\delta(0, N)\rangle= 0$. \nll{\bf Proof of Theorem~1} \nll Without loss of generality we adopt in momentum space a coordinate origin such that $\sum_{i=1}^N{\mathbf p}_i := {\mathbf p} = {\mathbf 0}.$ We define the mean lengths $$\langle ||{\mathbf p}_1||\rangle := k \quad {\rm and} \quad \langle ||{\mathbf p}_1-{\mathbf p}_2||\rangle := d.\eqno{(7)}$$ We wish to make a correspondence between mean lengths such as $k$ and $d$ and the sides of triangles that can be constructed with these lengths. We consider the triangle formed by the three vectors $\{{\mathbf p}_1, {\mathbf p}_2,~ {\mathbf p}_1-{\mathbf p}_2\}.$ We suppose that the corresponding angles in this triangle are $\{\phi_{12}, \theta_1,\theta_2\}$ (the same notation is used for other similar triples). 
We now consider projections of one side on a unit vector along an adjacent side and define the mean angles $\phi$ and $\theta$ by the relations $$\langle \|{\mathbf p}_1\|\cos(\phi_{12})\rangle := \langle \|{\mathbf p}_1\|\rangle\cos(\phi)$$ and $$\langle\|{\mathbf p}_1-{\mathbf p}_2\|\cos(\theta_1)\rangle := \langle\|{\mathbf p}_1-{\mathbf p}_2\|\rangle\cos(\theta).$$ Thus, on the average, this triangle is isosceles with one angle $\phi$ and the other two angles $\theta.$ Since ${\mathbf p} = 0,$ we have $\langle {\mathbf p}_1\cdot {\mathbf p}\rangle = 0.$ Hence $$||{\mathbf p}_1||^2 + \sum_{i = 2}^N||{\mathbf p}_1||||{\mathbf p}_i||\cos(\phi_{1i})= 0.$$ Thus, by dividing by $||{\mathbf p}_1||$ and using boson symmetry, we find $$\langle\left(||{\mathbf p}_1|| + (N-1)||{\mathbf p}_2||\cos(\phi_{12})\right)\rangle = \langle||{\mathbf p}_1||\left(1 + (N-1)\cos(\phi_{12})\right)\rangle= 0. $$ We therefore conclude that $k(1+(N-1)\cos(\phi)) = 0,$ that is to say $$\cos(\phi) = -\frac{1}{N-1}.$$ We now consider again the triangle formed by the three vectors $\{{\mathbf p}_1, {\mathbf p}_2,~{\mathbf p}_1-{\mathbf p}_2\}.$ We have immediately from the dot product ${\mathbf p}_1\cdot ({\mathbf p}_1-{\mathbf p_2})$ $$\|{\mathbf p}_1\| \|{\mathbf p}_1-{\mathbf p_2}\|\cos(\theta_1) = \|{\mathbf p}_1\|(\|{\mathbf p}_1\| - \|{\mathbf p}_2\|\cos(\phi_{12})).$$ By dividing by $\|{\mathbf p}_1\|$ and taking means we obtain $$d\cos(\theta) = k(1-\cos(\phi)).$$ But $\theta = (\pi/2 - \phi/2)$ and $\cos(\phi) = -1/(N-1).$ Hence we conclude $$\frac{k}{d} = \left(\frac{N-1}{2N}\right)^{\frac{1}{2}}.$$ This equality establishes Theorem~1.\qed \section{The linear potential} We apply the new bound to the case of the linear potential $V(r) = r.$ The weaker $N/2$ lower bound (discussed in Ref.~\cite{HallLucha2007}) is always available, but, up to now, we knew no way of obtaining tight bounds for this problem. 
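Before constructing the bounds, the ratio in Theorem~1 can be sanity-checked numerically. The sketch below is illustrative only and is not part of the formal argument: it samples a permutation-symmetric Gaussian momentum distribution projected onto total momentum zero (an assumed toy state, not the actual boson ground state); for such a state the mean lengths $k$ and $d$ satisfy $k/d = \sqrt{(N-1)/2N}$ exactly, so the Monte Carlo estimate should reproduce that value.

```python
import math, random

def mean_length_ratio(N, samples=50_000, seed=1):
    """Monte Carlo estimate of k/d = <|p_1|> / <|p_1 - p_2|>."""
    rng = random.Random(seed)
    k_sum = d_sum = 0.0
    origin = (0.0, 0.0, 0.0)
    for _ in range(samples):
        p = [[rng.gauss(0.0, 1.0) for _ in range(3)] for _ in range(N)]
        for c in range(3):                       # project onto total momentum zero
            bar = sum(row[c] for row in p) / N
            for row in p:
                row[c] -= bar
        k_sum += math.dist(p[0], origin)         # accumulates |p_1|
        d_sum += math.dist(p[0], p[1])           # accumulates |p_1 - p_2|
    return k_sum / d_sum

for N in (2, 3, 5, 10):
    print(N, mean_length_ratio(N), math.sqrt((N - 1) / (2 * N)))
```

For $N=2$ the projection forces ${\mathbf p}_2=-{\mathbf p}_1$, so the ratio is $1/2$ exactly; for larger $N$ the estimate agrees with $\sqrt{(N-1)/2N}$ to within Monte Carlo noise.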
For a comparison upper bound, we use a Gaussian trial function $\Phi$ and the original Hamiltonian $H$ to obtain a scale-optimized variational upper bound $E \leq E_g = (\Phi, H\Phi).$ As we showed in Ref.~\cite{HallLucha2007}, for the linear potential $V(r) = r$ in three spatial dimensions, the conjecture (now proven) implies that the $N$-body bounds are given for $N \ge 2$ by $$ N\left(\frac{(N-1)^3}{2N}\right)^{1\over 4}e = E_c^L \leq E \leq E_g^U = 4N\left(\frac{(N-1)^3}{2N\pi^2}\right)^{\frac{1}{4}},\eqno{(8)}$$ where $e \approx 2.2322 $ is the bottom of the spectrum~\cite{Boukraa89} of the one-body problem $h = \|{\mathbf p}\| + r$. From (8) we see that the ratio $R = E_g/E_c = 4/(\pi^{\frac{1}{2}}e) \approx 1.011.$ The energy of the ultrarelativistic many-body system with linear pair potentials is therefore determined by these inequalities with error less than 0.55\% for all $N\ge 2.$ Earlier we were able to obtain such close bounds for all $N$ only for the harmonic oscillator~\cite{Hall04}. \section{Conclusion} We have enlarged the number of semirelativistic problems that satisfy the lower-bound conjecture $\langle H\rangle \geq \langle H_c\rangle$ to include all problems with $m = 0$ and $N\ge 2.$ An extension of the geometric reasoning used in Ref.~\cite{HallLucha2007} from pyramids to more general simplices would perhaps have provided an alternative proof. However, the more algebraic approach adopted here, relying in the end on mean angles in a triangle, seemed to provide a more independent and robust approach. \section*{Acknowledgement} One of us (RLH) gratefully acknowledges both partial financial support of his research under Grant No.~GP3438 from~the Natural Sciences and Engineering Research Council of Canada and hospitality of the Institute for High Energy Physics of the Austrian Academy of Sciences in Vienna. \medskip
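For concreteness, the bounds in (8) are straightforward to evaluate; the short sketch below (with $e = 2.2322$ taken from the cited one-body result) tabulates $E_c^L$, $E_g^U$, and their ratio, which is $4/(\pi^{1/2}e)\approx 1.011$ independently of $N$.

```python
import math

E1 = 2.2322   # bottom of the spectrum of h = |p| + r, from the cited one-body result

def bounds(N):
    lower = N * ((N - 1) ** 3 / (2 * N)) ** 0.25 * E1                # E_c^L in (8)
    upper = 4 * N * ((N - 1) ** 3 / (2 * N * math.pi ** 2)) ** 0.25  # E_g^U in (8)
    return lower, upper

for N in (2, 3, 5, 10):
    lo, hi = bounds(N)
    # the N-dependent factor cancels in the ratio, leaving 4/(sqrt(pi) * e)
    print(N, round(lo, 4), round(hi, 4), round(hi / lo, 6))
```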
Sat May 09, 2015 4:35 EXCLUSIVE: Iranian Parliament to Present Triple-Urgency Bill to Stop N. Talks

TEHRAN (FNA)- Iranian parliamentarians plan to present a bill with the highest degree of urgency to the floor which will require the government to suspend nuclear talks with the world powers while the US continues its threats against Tehran.

Javad Karimi Qoddousi, a member of the parliament's National Security and Foreign Policy Commission, told FNA on Saturday that "the plan will be signed by the legislators and presented to the parliament's Presiding Board tomorrow". "Based on this single-article bill, the government will be required to stop the negotiations until the Americans apologize to the Islamic Republic of Iran and end their threats," the influential lawmaker said. Karimi Qoddousi underlined that the Iranian lawmakers will not allow the superpowers to violate the Iranian nation's rights or keep the nation under threat.

The parliament's decision was announced after US Secretary of State John Kerry, in an effort to improve ties with Israel over the Iran policy, said recently that military action is still among the possible options for Washington. Joe Biden, the US vice-president, later repeated the same remarks.

In relevant comments on Wednesday, Iran's Supreme Leader Ayatollah Seyed Ali Khamenei dismissed the US war rhetoric against Iran as empty and boastful remarks, but meantime warned that Tehran would not negotiate under threat. "It is not acceptable that the opposite side continues making threats simultaneous with the talks," the Leader said, addressing a public meeting with teachers in Tehran. He further noted the remarks made by the two US officials in recent days who had alleged that military threats against Iran were still alive, and said, "Negotiation under the ghost of a threat is meaningless and the Iranian nation does not tolerate negotiation under the shadow of threat." 
"First of all, you can't do a damn thing," Ayatollah Khamenei said in response to the two US officials, and added, "Secondly, as I had already stated during the term of the former US president, the era of hit-and-run attacks is gone and the Iranian nation will not let go anyone intending to make an aggression" against it. Meantime, he said the US needs the nuclear talks, at least, as much as Iran does, and pointed out that Iran is willing to put an end to the sanctions, while the US officials need to leave a legacy behind as "they are deeply in need to make this claim that they have made Iran sit to the negotiating table and imposed certain points on it". The Supreme Leader underlined that now everyone in Iran knows pretty well that the country's economic problems are not resolved through the removal of the sanctions, "rather resolving economic problems requires our own planning, will and ability, no matter the sanctions are in place or not". "Of course, if the sanctions are removed, the economic problems could be solved more easily, but their resolution will be possible if the sanctions continue," he added. The Iranian leader further reminded the country's team of negotiators once again to pay good heed to the redlines, "but never allow the other side to impose its will, exercise force, humiliate or threaten you".
# $m_u/m_d$ MASS RATIO

Source: http://pdglive.lbl.gov/DataBlock.action?node=Q123MR0&init=0

| VALUE | DOCUMENT ID | TECN | COMMENT |
|---|---|---|---|
| **0.48 +0.07 -0.08 OUR EVALUATION** | | | |
| 0.485 ± 0.011 ± 0.016 | FODOR 2016 [1] | LATT | |
| 0.4482 +0.0173 -0.0206 | BASAK 2015 [2] | LATT | |
| 0.470 ± 0.056 | CARRASCO 2014 [3] | LATT | |
| 0.698 ± 0.051 | AOKI 2012 [4] | LATT | |
| 0.42 ± 0.01 ± 0.04 | BAZAVOV 2010 [5] | LATT | |
| 0.4818 ± 0.0096 ± 0.0860 | BLUM 2010 [6] | LATT | |
| 0.550 ± 0.031 | BLUM 2007 [7] | LATT | |

We do not use the following data for averages, fits, limits, etc.

| VALUE | DOCUMENT ID | TECN | COMMENT |
|---|---|---|---|
| 0.43 ± 0.08 | AUBIN 2004A [8] | LATT | |
| 0.410 ± 0.036 | NELSON 2003 [9] | LATT | |
| 0.553 ± 0.043 | LEUTWYLER 1996 [10] | THEO | Compilation |

Footnotes:

1. FODOR 2016 is a lattice simulation with $N_f = 2+1$ dynamical flavors and includes partially quenched QED effects.
2. BASAK 2015 is a lattice computation using 2+1 dynamical quark flavors.
3. CARRASCO 2014 is a lattice QCD computation of light quark masses using 2+1+1 dynamical quarks, with $m_u = m_d \neq m_s \neq m_c$. The $u$ and $d$ quark masses are obtained separately by using the $K$ meson mass splittings and lattice results for the electromagnetic contributions.
4. AOKI 2012 is a lattice computation using 1+1+1 dynamical quark flavors.
5. BAZAVOV 2010 is a lattice computation using 2+1 dynamical quark flavors.
6. BLUM 2010 is a lattice computation using 2+1 dynamical quark flavors.
7. BLUM 2007 determine quark masses from the pseudoscalar meson masses using a QED plus QCD lattice computation with two dynamical quark flavors.
8. AUBIN 2004A perform a three-flavor dynamical lattice calculation of pseudoscalar meson masses, with a continuum estimate of electromagnetic effects in the kaon masses.
9. NELSON 2003 computes coefficients in the order $p^4$ chiral Lagrangian using a lattice calculation with three dynamical flavors. The ratio $m_u/m_d$ is obtained by combining this with the chiral perturbation theory computation of the meson masses to order $p^4$.
10. LEUTWYLER 1996 uses a combined fit to $\eta \to 3\pi$ and $\psi' \to J/\psi\,(\pi,\eta)$ decay rates, and the electromagnetic mass differences of the $\pi$ and $K$.

References:

- FODOR 2016, PRL 117 082001, "Up and Down Quark Masses and Corrections to Dashen's Theorem from Lattice QCD and Quenched QED"
- BASAK 2015, JPCS 640 012052, "Electromagnetic Effects on the Light Hadron Spectrum"
- CARRASCO 2014, NP B887 19, "Up, Down, Strange and Charm Quark Masses with $N_f$ = 2+1+1 Twisted Mass Lattice QCD"
- AOKI 2012, PR D86 034507, "1+1+1 Flavor QCD + QED Simulation at the Physical Point"
- BAZAVOV 2010, RMP 82 1349, "Full Nonperturbative QCD Simulations with 2+1 Flavors of Improved Staggered Quarks"
- BLUM 2010, PR D82 094508, "Electromagnetic Mass Splittings of the Low Lying Hadrons and Quark Masses from 2+1 Flavor Lattice QCD+QED"
- BLUM 2007, PR D76 114508, "Determination of Light Quark Masses from the Electromagnetic Splitting of Pseudoscalar Meson Masses Computed with Two Flavors of Domain Wall Fermions"
- AUBIN 2004A, PR D70 114501, "Light Pseudoscalar Decay Constants, Quark Masses, and Low Energy Constants from Three-Flavor Lattice QCD"
- NELSON 2003, PRL 90 021601, "Up Quark Mass in Lattice QCD with Three Light Dynamical Quarks and Implications for Strong $CP$ Invariance"
- LEUTWYLER 1996, PL B378 313, "The Ratios of the Light Quark Masses"
\section{Introduction} Let $X$ be a projective variety defined over a number field $K$. Suppose that ${\varphi} : X \rightarrow X$ is a map on $X$, also defined over $K$. Assume that we can find an ample line bundle $\L$ on $X$ and a number $\alpha > 1$, such that $ \L^{\alpha} \cong \varphi^*\L$. Under these conditions, we can build the canonical height $\hat{h}_{{\varphi}}$ (\cite{silvermancanonicalheights} theorem 1.1) and the canonical measure $d\mu_{{\varphi}}$ (\cite{dynamics} proposition 3.1.4) associated to ${\varphi}$ and $\L$. They satisfy nice properties with respect to the map ${\varphi}$; for example, we have $\hat{h}_{{\varphi}} \circ {\varphi}=\deg({\varphi}) \hat{h}_{{\varphi}}$ and ${\varphi}_* \mu_{{\varphi}}=\mu_{{\varphi}}.$ Sometimes it happens that a whole set of maps is associated to the same canonical height function and measure. As our first example consider the collection of maps $\phi_k : {\mathbb P}_{\bar{K}}^1 \rightarrow {\mathbb P}_{\bar{K}}^1$ on the Riemann sphere, where $\phi_k$ is defined as $\phi_k(t)=t^k$. The line bundle $\L=\mathcal{O}(1)$ on ${\mathbb P}^1$ satisfies the isomorphism $\phi_k^* \L \cong \L^k$. If one builds the canonical height and measure associated to $\phi_k$ and $\mathcal{O}(1)$, one obtains: \begin{enumerate} \item All $\phi_k$ have the same canonical height, namely the naive height $h_{nv}$ on ${\mathbb P}^1_{\bar{{\mathbb Q}}}$. The naive height $h_{nv}(P)$ is a refinement of the function $\sup \{|a_0|,|a_1| \}$, which measures the computational complexity of the projective point $P=(a_0:a_1)$. For a precise definition see \ref{naive height} below. \item All $\phi_k$ have the same canonical measure, that is, the Haar measure $d\theta$ on the unit circle $S^1$. \end{enumerate} Similar properties are fulfilled by the collection of maps $[n] : E \rightarrow E$, representing multiplication by $n$ on an elliptic curve $E$ defined over $K$. 
If $\L$ is an ample symmetric line bundle on $E$, we have the isomorphism $[n]^* \L \cong \L^{n^2}$, along with the properties: \begin{enumerate} \item All maps $[n]$ share the same canonical height, that is, the Neron-Tate height $\hat{h}_{E}$ on $E$. In fact this will be our definition (\ref{NT}) of the Neron-Tate height on $E$. For many other interesting properties we refer to B-4 in \cite{intro-diophantine}. \item All maps $[n]$ have the same canonical measure, that is, the Haar measure $\frac{i}{2\,{\rm Im}(\tau)}\, dz \wedge d\bar{z}$ on $E = {\mathbb C} / ({\mathbb Z}+\tau {\mathbb Z})$. \end{enumerate} We observe that any two maps in each collection commute under composition. Besides, the line bundle $\L \in \Pic(X) \otimes {\mathbb R}$, suitable to make everything work, is the same within each collection. The present work establishes the general fact: \begin{proposition} Let $X$ be a projective variety defined over a number field $K$. Suppose that two maps $\varphi, \psi : X \rightarrow X$ commute ($\varphi \circ \psi = \psi \circ \varphi$) and satisfy the following property: for some ample line bundle $\L \in \Pic(X) \otimes {\mathbb R}$ and real numbers $\alpha, \beta > 1$, we have $\varphi^*\L \xrightarrow{\sim} \L^{\alpha}$ and $\psi^*\L \xrightarrow{\sim} \L^{\beta}$. Then we have $ \hat{h}_{{\varphi}}=\hat{h}_{\psi}=\hat{h}_{\varphi \circ \psi}$ and $ d\mu_{{\varphi}}=d\mu_{\psi}=d\mu_{{\varphi} \circ \psi}$. \end{proposition} This result is known in dimension one; a proof can be found, for example, in \cite{eremenko}. It is also a well-known fact \cite{intro-diophantine} that commuting maps on a projective variety must share the same canonical height. The main feature of the present work is that it obtains all these results from the equality of the canonical metrics. 
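The opening example is easy to make explicit in code. Over $\mathbb{Q}$, writing a point of ${\mathbb P}^1$ with coprime integer coordinates makes all finite places contribute trivially, so the naive height reduces to $\log\max(|t_0|,|t_1|)$, and the functional equation $h_{nv}(\phi_k(P)) = k\,h_{nv}(P)$ can be checked directly (an illustrative sketch; the sample point is arbitrary):

```python
from math import gcd, log

def naive_height(t0: int, t1: int) -> float:
    # h_nv([t0 : t1]) over Q: after dividing out the gcd, the finite places
    # contribute nothing and the archimedean place gives log max(|t0|, |t1|)
    g = gcd(t0, t1)
    return log(max(abs(t0) // g, abs(t1) // g))

def phi_k(t0, t1, k):
    # phi_k(t) = t^k in homogeneous coordinates
    return t0 ** k, t1 ** k

P = (6, 35)
for k in (2, 3, 5):
    print(k, naive_height(*phi_k(*P, k)), k * naive_height(*P))
```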
Given an ample line bundle $\L$ on $X$, it was an original idea of Arakelov \cite{arakelovinter} to put metrics on $\L_{\sigma}=\L \otimes_{\sigma} {\mathbb C}$ over all places $\sigma$ of $K$ at infinity. This gave rise to heights as intersection numbers and curvature forms at infinity. It was then an idea of Zhang \cite{zhangadelic} to look for suitable metrics at all places $v$ of $K$. In the presence of the dynamics ${\varphi} : X \rightarrow X$, the line bundle $\L$ on $X$ can be endowed \cite{zhangadelic} with very special metrics $\|.\|_{{\varphi},v}$ on $\L_v$ that satisfy the functional equation $$\|.\|_{\varphi,v}=(\phi^*\varphi^*\|.\|_{\varphi,v})^{1/\alpha},$$ whenever we have an isomorphism $\phi : \L^{\alpha} \xrightarrow{\sim} \varphi^*\L$. The canonical height and the canonical measure will be defined (definitions \ref{canonical measure} and \ref{canonical height as intersection}) depending only on the metric $\|.\|_{{\varphi}}$. The equality of canonical heights and measures for commuting maps is a consequence of the following proposition: \begin{proposition} Suppose that two maps $\varphi, \psi : X \rightarrow X$ commute, and for some ample line bundle $\L \in \Pic(X) \otimes {\mathbb R}$ we have $\varphi^*\L \xrightarrow{\sim} \L^{\alpha}$ and $\psi^*\L \xrightarrow{\sim} \L^{\beta}$ for some numbers $\alpha, \beta > 1$. Then $\|.\|_{{\varphi}}=\|.\|_{\psi}$. \end{proposition} Towards the end of the paper we discuss maps on ${\mathbb P}^1$ arising as projections of maps on elliptic curves with complex multiplication. We study ramification points and present examples of commuting maps on the Riemann sphere. \section{Canonical heights and canonical measures} \subsection{Canonical metrics} Consider the projective variety $X$ defined over a number field $K$, a map ${\varphi} : X \rightarrow X$ defined over $K$, and an ample line bundle $\L \in \Pic(X) \otimes {\mathbb R}$ such that $\phi : \L^{\alpha} \xrightarrow{\sim} \varphi^*\L$ for some $\alpha >1$. 
This situation will be called \cite{dynamics} a polarized dynamical system $(X,{\varphi},\L,\alpha)$ on $X$ defined over $K$. \\ Assume that for every place $v$ of $K$ we have chosen a continuous and bounded metric $\|.\|_v$ on each fibre of $\L_v=\L \otimes_K K_v$. The following theorem is proposition 2.2 in \cite{zhangadelic}: \begin{theorem} \label{canonical metric} The sequence defined recurrently by $\|.\|_{v,1}=\|.\|_v$ and $\|.\|_{v,n}=(\phi^* \varphi^* \|.\|_{v,n-1})^{1/\alpha}$ for $n > 1$ converges uniformly on $X(\bar{K}_v)$ to a metric $\|.\|_{v,\varphi}$ (independent of the choice of $\|.\|_{v,1}$) on $\L_v$ which satisfies the equation $\|.\|_{\varphi,v}=(\phi^*\varphi^*\|.\|_{\varphi,v})^{1/\alpha}$. \end{theorem} \begin{proof} Denote by $h$ the continuous function $ \log \frac{\|.\|_2}{\|.\|_1}$ on $X(\bar{K}_v)$. Then $$\log\|.\|_n=\log\|.\|_1 + \sum_{k=0}^{n-2}(\frac{1}{\alpha} \phi^*\varphi^*)^k h.$$ Since $\|(\frac{1}{\alpha} \phi^*\varphi^*)^k h\|_{sup} \leq (\frac{1}{\alpha})^k\|h\|_{sup}$, it follows that the series given by the expression $\sum_{k=0}^{\infty} ( \frac{1}{\alpha} \phi^*\varphi^*)^k h$ converges absolutely to a bounded and continuous function $h^v$ on $X(\bar{K}_v)$. Let $\|.\|_{\varphi,v}=\|.\|_1 \exp (h^v)$; then $\|.\|_n$ converges uniformly to $\|.\|_{\varphi,v}$ and it is not hard to check that $\|.\|_{\varphi,v}$ satisfies $$\|.\|_{\varphi,v}=(\phi^*\varphi^*\|.\|_{\varphi,v})^{1/\alpha},$$ which was the result we wanted to prove. \end{proof} \begin{definition} The metric $\|.\|_{v,\varphi}$ is called the canonical metric on $\L_v$ relative to the map ${\varphi}$. \end{definition} \begin{example} Consider the line bundle $\L=\mathcal{O}_{{\mathbb P}^n}(1)$ on ${\mathbb P}_{\bar{\Q}}^n$ and the rational map $\phi_k : {\mathbb P}_{{\mathbb Q}}^n \rightarrow {\mathbb P}_{{\mathbb Q}}^n$ given by the expression $\phi_k(T_0:...:T_n)=(T_0^k:...:T_n^k)$. 
The Fubini-Study metric $$\|(\lambda_0 T_0+...+\lambda_n T_n )(a_0:...:a_n)\|_{FS}=\frac{|\sum \lambda_i a_i |}{\sqrt{\sum_i |a_i|^2}}$$ is a smooth metric on $\L_{{\mathbb C}}$. If we take $\|.\|_1=\|.\|_{FS}$ as our metric at infinity, the limit metric we obtain is $$\|(\lambda_0 T_0+...+\lambda_n T_n)(a_0:...:a_n)\|_{nv}=\frac{|\sum \lambda_i a_i |}{\sup_i(|a_i|)}.$$ \end{example} \begin{example} Suppose that $X=E$ is an elliptic curve and assume that $[n] : E \rightarrow E$ is denoting the multiplication by $n$ on $E$. As a consequence of the theorem of the cube, the ample symmetric line bundle $\L$ on $E$ satisfies $\phi : [n]^*\L \xrightarrow{\sim} \L^{n^2}$. The canonical metric is the metric of the cube discussed in \cite{moretasterisque}, suitable to make $\phi$ an isomorphism of metrized line bundles. \end{example} The following proposition relates the canonical metrics associated to commuting maps. It represents the main result of this paper. \begin{proposition} \label{commuting canonical metrics} Let $(X,{\varphi},\L,\alpha)$ and $(X,\psi,\L,\beta)$ be two polarized systems on $X$ defined over $K$. Suppose that the maps ${\varphi}$ and $\psi$ satisfy ${\varphi} \circ \psi = \psi \circ {\varphi}$; then $\|.\|_{{\varphi}}=\|.\|_{\psi}$. \end{proposition} \begin{proof} The key idea is that the canonical metric associated to a morphism does not depend on the metric we start the iteration with. Let $s \in \Gamma(X,\L)$ be a non-zero section of $\L$. We are going to consider two metrics $\|.\|_{v,1}=\|.\|_{{\varphi}}$ and $\|.\|'_{v,1}=\|.\|_{\psi}$ on the line bundle $\L$. By our definition of canonical metric for ${\varphi}$, we can start with $\|.\|'_{v,1}$ and obtain $\|s(x)\|_{{\varphi}}=\lim_k \|s({\varphi}^k(x))\|^{1/\alpha^k}_{\psi}$, but also, by our definition of canonical metric for $\psi$ starting with $\|.\|_{v,1}=\|.\|_{{\varphi}}$, we get $\|s(x)\|_{\psi}=\lim_l \|s(\psi^l(x))\|^{1/\beta^l}_{{\varphi}}$. 
So, using the uniform convergence and the commutativity of the maps, \begin{equation*} \begin{split} \|s(x)\|_{{\varphi}} & =\lim_{k,l} \|s({\varphi}^k \circ \psi^l (x))\|^{1/\alpha^k \beta^l}_{v,1} \\ & =\lim_{l,k} \|s(\psi^l \circ {\varphi}^k (x))\|^{1/\beta^l \alpha^k}_{v,1}=\|s(x)\|_{\psi}, \end{split} \end{equation*} which was the result we wanted to prove.\end{proof} \subsection{Canonical measures} Let $X$ be an $n$-dimensional projective variety defined over a number field $K$ and suppose that $(X,{\varphi},\L,\alpha)$ is a polarized dynamical system defined over $K$. Let $v$ be a place of $K$ over infinity. We can consider the morphism ${\varphi} \otimes v : X_v \rightarrow X_v$ on the complex variety $X_v$. Associated to ${\varphi}$ and $v$ we also have the canonical metric $\|.\|_{{\varphi},v}$ and therefore the distribution $c_1(\L,\|.\|_{{\varphi},v})=\frac{1}{({\pi}i) }\partial\overline{\partial}\log \|s_1(P)\|_{{\varphi},v}$ analogous to the first Chern form of $(\L,\|.\|_{{\varphi},v})$. It can be proved that $c_1(\L,\|.\|_{{\varphi},v})$ is a positive current in the sense of Lelong, and following \cite{dema1} we can define the $n$-fold product $$c_1(\L,\|.\|_{{\varphi},v})^n=c_1(\L,\|.\|_{{\varphi},v})...c_1(\L,\|.\|_{{\varphi},v}),$$ which represents a measure on $X_v$. \begin{definition} \label{canonical measure} The measure $d\mu_{{\varphi}}=c_1(\L_v,\|.\|_{{\varphi},v})^n/ \mu(X)$, where $\mu(X)$ denotes the total mass of $c_1(\L_v,\|.\|_{{\varphi},v})^n$, is called the canonical measure associated to ${\varphi}$ and $v$. Once we have fixed $\L$, it depends only on the metric $\|.\|_{{\varphi},v}$. \end{definition} \label{naive canonical measure} \begin{example} Consider the rational map $\phi_k : {\mathbb P}_{{\mathbb Q}}^n \rightarrow {\mathbb P}_{{\mathbb Q}}^n$ given by the formula $\phi_k(T_0:...:T_n)=(T^k_0:...:T^k_n)$; the canonical measure $d\mu_{\phi_k}$ is the normalized Haar measure on the $n$-torus $S^1 \times...\times S^1$. 
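For $n = 1$ this measure can be visualized by backward iteration: by the Brolin-Lyubich equidistribution theorem (a standard fact, not used elsewhere in this paper), random preimages of a generic point under $\phi_2(t)=t^2$ accumulate on the unit circle according to Haar measure. A rough numerical sketch:

```python
import cmath, random

def preimage_orbit(z0=2.0 + 0.0j, steps=60, seed=0):
    """Follow a random branch of inverse images under phi_2(t) = t^2."""
    rng = random.Random(seed)
    z = complex(z0)
    for _ in range(steps):
        w = cmath.sqrt(z)                # one square root; the other branch is -w
        z = w if rng.random() < 0.5 else -w
    return z

# a long backward orbit lands (numerically) on the unit circle |z| = 1
print(abs(preimage_orbit()))

# many independent orbits spread out over the circle: for the uniform (Haar)
# distribution the average of the endpoints is close to 0
pts = [preimage_orbit(seed=s) for s in range(2000)]
print(abs(sum(pts) / len(pts)))
```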
\end{example} \begin{example} Let $E$ be an elliptic curve, $\L$ a symmetric line bundle on $E$, and consider the map $[n] : E \rightarrow E$. The canonical measure associated to $(E,[n],\L,n^2)$ can be proved \cite{moretasterisque} to be the normalized Haar measure on $E$. \end{example} \subsection{Canonical heights as intersection numbers} For a regular projective variety $X$ of dimension $n$, defined over a field $K$, the classical theory of intersection (\cite{fultoninter},\cite{Serre}) defines the intersection ${c_1}({\mathcal{L}}_{1})...{c_1}({\mathcal{L}}_{n})$ of the classes ${c_1}({\mathcal{L}}_{i})$ associated to line bundles $\L_i$ on $X$, when $0<i \leq n$. \\ For the purpose of defining the arithmetic intersection, we want to assume that $X$ is an arithmetic variety of dimension $n+1$, that is, given a number field $K$, there exists a map $f : X \rightarrow \Spec(\mathcal{O}_K)$, flat and of finite type over $\Spec(\mathcal{O}_K)$. We can define (see for example \cite{delignedeterminant}, \cite{BGS}, \cite{orsay}, \cite{arakelovinter}, \cite{asterisquedeux} or \cite{zhangvar}) the arithmetic intersection number $\tilde{c}_1({\mathcal{L}}_{1})...\tilde{c}_1({\mathcal{L}}_{n+1})$ of the classes $\tilde{c}_1({\mathcal{L}}_{i})$ of hermitian line bundles $\tilde{\L}_i=(\L_i,\|.\|)$ on $X$. The fact that the $\tilde{\L}_i$ are hermitian line bundles for $i=1,...,n+1$ means that each line bundle $\L_i$ on $X$ is equipped with a hermitian metric $\|.\|_{v,i}$ over $X_v=X \otimes_K \Spec{\mathcal{O}_{K_v}}$ for each place $v$ at infinity. 
The numbers $\tilde{c}_1({\mathcal{L}}_{1})...\tilde{c}_1({\mathcal{L}}_{j})$ prove to be the appropriate theory of intersection in the particular case of arithmetic varieties; adding places over infinity allows us to recover the desirable properties of the classical intersection numbers of varieties over fields.\\ The last step in the theory of intersection is actually the one that plays the most important role in our definition of the canonical height associated to a morphism. Suppose that $X$ is a regular variety of dimension $n$ and $(\L_i,\|.\|_i)_v$ $(i=1,..,p+1)$ are metrized line bundles on $X$. Assume also that the $\L_i$ have been equipped with semipositive metrics over all places $v$ (not just at infinity as before) in the sense of \cite{zhangadelic}. Such line bundles are called adelic metrized line bundles and, following \cite{zhangadelic}, we can define the adelic intersection number $\hat{c}_1({\mathcal{L}}_{1}|Y)...\hat{c}_1({\mathcal{L}}_{p+1}|Y)$ over a $p$-cycle $Y \subset X$. The adelic intersection number is in fact a limit of classical numbers $\tilde{c}_1({\mathcal{L}}_{1})...\tilde{c}_1({\mathcal{L}}_{p+1})$ once the notion of convergence is established. The numbers $\hat{c}_1({\mathcal{L}}_{1}|Y)...\hat{c}_1({\mathcal{L}}_{p+1}|Y)$ again satisfy nice properties: they are multilinear in each of the $\L_i$ and satisfy $\hat{c}_1({f^*\mathcal{L}}_{1}|Y)...\hat{c}_1(f^*{\mathcal{L}}_{p+1}|Y)=\hat{c}_1({\mathcal{L}}_{1}|f(Y))...\hat{c}_1({\mathcal{L}}_{p+1}|f(Y))$ whenever we have a map $f : X \rightarrow X$. We are interested in a particular case of this situation. Suppose that we are in the presence of a polarized dynamical system $(X,{\varphi},\L,\alpha)$; in this situation the canonical metric $\|.\|_{{\varphi}}$ of \ref{canonical metric} represents a semipositive metric on $\L$ (again we refer to \cite{zhangadelic}), and we can define the canonical height associated to $(\L,\|.\|_{{\varphi}})$. 
\begin{definition} \label{canonical height as intersection} The canonical height $\hat{h}_{{\varphi}}(Y)$ of a $p$-cycle $Y$ in $X$ is defined as $$\hat{h}_{{\varphi}}(Y)=\frac{\hat{c}_1({\mathcal{L}}|Y)^{p+1}}{(\dim(Y) + 1)c_1(\L|Y)^p}.$$ It depends only on $(\L,\|.\|_{{\varphi}})$, where $\|.\|_{{\varphi}}$ actually represents a collection of canonical metrics over all places of $K$. An important particular case of canonical height will be the canonical height $\hat{h}_{{\varphi}}(P)$ of a point $P \in X$. \end{definition} \begin{example} \label{naive height} Consider the map $\phi_k : {\mathbb P}_{\bar{{\mathbb Q}}}^n \rightarrow {\mathbb P}_{\bar{{\mathbb Q}}}^n$ given by the formula $\phi_k(T_0:...:T_n)=(T^k_0:...:T^k_n)$; the canonical height associated to $\phi_k$ is called the naive height $h_{nv}$ on ${\mathbb P}^n$. If $P=[t_0:...:t_n]$ is a point in ${\mathbb P}^n$ the naive height is $$h_{nv}([t_0 :...:t_n]) = \frac{1}{[K:\mathbb{Q}]}\log\prod _{\text{places } v \text{ of } K} \sup(|t_0|_v ,...,|t_n|_v)^{\N_v},$$ where $\N_v = [K_v:{\mathbb Q}_w]$ and $w$ is the place of ${\mathbb Q}$ such that $v \mid w$. \end{example} \begin{definition} \label{NT} Let $E$ be an elliptic curve and $\L$ an ample symmetric line bundle on $E$. The canonical height associated to $[n] : E \rightarrow E$ and $\L$ is called the Neron-Tate height $\hat{h}_{E}$ on $E$. The fact that this is independent of $n$ will be a consequence of proposition \ref{commuting maps}. \end{definition} The collection of maps $\{\phi_k\}_k$ on ${\mathbb P}^n$ and the collection $\{[n]\}_n$ on a given elliptic curve $E$ share two important properties: the maps within each collection commute, and they share the same canonical height and canonical measure. The following proposition establishes a general fact about canonical heights and canonical measures of commuting maps on a projective variety $X$. 
\begin{proposition} \label{commuting maps} Let $(X,{\varphi},\L,\alpha)$ and $(X,\psi,\L,\beta)$ be two polarized systems on $X$ defined over $K$. Suppose that the maps ${\varphi}$ and $\psi$ satisfy ${\varphi} \circ \psi = \psi \circ {\varphi}$; then $ \hat{h}_{{\varphi}}=\hat{h}_{\psi}=\hat{h}_{\varphi \circ \psi}$ and $ d\mu_{{\varphi}}=d\mu_{\psi}=d\mu_{\varphi \circ \psi}.$ \end{proposition} \begin{proof} This is a consequence of our definitions of canonical measure \ref{canonical measure} and canonical height \ref{canonical height as intersection}, together with proposition \ref{commuting canonical metrics}. \end{proof} \begin{corollary} Suppose that two maps $\varphi, \psi : {\mathbb P}^1 \rightarrow {\mathbb P}^1$ satisfy the hypothesis of the previous proposition; then the two maps have the same Julia set. \end{corollary} \begin{proof} The Julia set of a map ${\varphi} : {\mathbb P}^1 \rightarrow {\mathbb P}^1$ is nothing but the closure in ${\mathbb P}^1$ of the set of repelling periodic points. For details we refer to definition 2.2 in \cite{sz-t-p}. Now the corollary is a consequence of proposition \ref{commuting maps} and proposition 7.2 in \cite{sz-t-p}. \end{proof} \section{Elliptic Curves and examples} This section illustrates examples of commuting maps on ${\mathbb P}^1$. They all have one thing in common: being induced, in some sense, by endomorphisms of elliptic curves. \begin{proposition} \label{main} Consider an elliptic curve $E={\mathbb C}/({\mathbb Z}+\tau {\mathbb Z})$ given by a Weierstrass equation $y^2=G(x)$. Suppose that $E$ admits multiplication by the algebraic number $\lambda$; then multiplication by $\lambda$ in $E$ induces, as quotient by the action of $[-1]$, a map ${\varphi}_{\lambda} : {\mathbb P}^1 \rightarrow {\mathbb P}^1$. Besides, we have: \begin{enumerate} \item $\hat{h}_{E}(x,y)=\hat{h}_{\lambda}(x)$ for any point $P=(x,y)$ on $E$. 
\item The canonical measure on ${\mathbb P}^1$ associated to ${\varphi}_{\lambda}$ is $$d\mu_{{\varphi}_\lambda}=\frac{i dz \wedge d\bar{z}}{2Im(\tau) |G(z)|}.$$ \end{enumerate} \end{proposition} \begin{proof} The first part is a classical fact from the theory of elliptic functions and complex multiplication. There exist polynomials $P(z)$ and $Q(z)$, with $\deg(P)=\deg(Q)+1=N(\lambda)$, such that $ \wp(\lambda z)=P(\wp(z))/Q(\wp(z))$, where $\wp$ denotes the Weierstrass $\wp$-function. If we call $\pi$ the quotient map $E \rightarrow {\mathbb P}^1$, we have a commutative diagram: \[ \begin{CD} E @>\lambda>> E \\ @V \pi VV @V \pi VV\\ {\mathbb P}^1 @>{\varphi}_{\lambda}>> {\mathbb P}^1 \\ \end{CD} \] Now consider the line bundle $\L=\mathcal{O}(1)$ on ${\mathbb P}^1$; we have ${\varphi}_{\lambda}^* \L \xrightarrow{\sim} \L^{N(\lambda)}$, and likewise for the ample symmetric line bundle $\pi^* \L$ on $E$. Therefore it makes sense to talk about the canonical heights associated to ${\varphi}_{\lambda} : {\mathbb P}^1 \rightarrow {\mathbb P}^1$ and $\lambda : E \rightarrow E$. The number $\lambda$ lies in an imaginary quadratic extension of ${\mathbb Q}$, so we also have a commutative diagram: \[ \begin{CD} E @>\lambda>> E @>\bar{\lambda}>> E \\ @V \pi VV @V \pi VV @V \pi VV\\ {\mathbb P}^1 @>{\varphi}_{\lambda}>> {\mathbb P}^1 @>{\varphi}_{\bar{\lambda}}>> {\mathbb P}^1 \\ \end{CD} \] So the two maps ${\varphi}_{\lambda}$ and ${\varphi}_{\bar{\lambda}}$ commute. By proposition \ref{commuting maps}, the canonical height associated to multiplication by $\lambda$ on $E$ is the same as the canonical height associated to multiplication by $N(\lambda)$, that is, the N\'eron--Tate height on $E$.
Take $\L=\mathcal{O}(1)$ on ${\mathbb P}^1$ and $P$ a point on $E$; the intersection numbers satisfy a projection formula $$\hat{c}_1({\pi^*\mathcal{L}}|P)=\hat{c}_1({\mathcal{L}}|\pi(P)) \qquad c_1(\pi^*\L |P)=c_1(\L|\pi(P)).$$ This gives (i) by definition \ref{canonical height as intersection}. For (ii) consider the Haar measure $i/2\, dz \wedge d\bar{z}$ on $E$, normalized by $Im(\tau)$. If $\wp$ denotes the Weierstrass function and $\omega=\wp(z)$, then $$\frac{i\, dz \wedge d\bar{z}}{2 Im(\tau)}=\frac{i\, d\omega \wedge d \bar{\omega}}{2|\wp'(z)|^2 Im(\tau)} =\frac{i\, d\omega \wedge d \bar{\omega}}{2 |y|^2 Im(\tau)}=\frac{i\, d\omega \wedge d \bar{\omega}}{2 |G(\omega)| Im(\tau)},$$ which gives the result we wanted to prove. \end{proof} \begin{remark} If the elliptic curve $E$ admits multiplication by the numbers $\lambda$ and $\delta$, then ${\varphi}_{\lambda} \circ {\varphi}_{\delta} = {\varphi}_{\delta} \circ {\varphi}_{\lambda}$. \end{remark} \begin{example} \label{multipcation by n in E.curves} Consider an elliptic curve $E$ given by the Weierstrass equation $E : y^2=G(x)$. For $\lambda=2$ we have $${\varphi}_2(z)=\frac{(G'(z))^2-8zG(z)}{4G(z)}.$$ \end{example} \begin{example}Let us consider some examples of elliptic curves with complex multiplication. \end{example} The elliptic curve $E_1 : y^2=x^3+x$ admits multiplication by ${\mathbb Z}[i]$. The multiplication by $i$ morphism can be written in $x,y$ coordinates as $[i](x,y)=(-x,iy)$.
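For the curve $E_1$ just introduced, the duplication formula of example \ref{multipcation by n in E.curves} can be sanity-checked numerically against the chord-tangent doubling law on the Weierstrass model. The Python sketch below is our own illustration (not part of the paper); it assumes the standard tangent-slope formula for doubling on $y^2 = x^3 + x$ and evaluates at a few real points with $x^3 + x > 0$.

```python
from math import sqrt

def phi2(x):
    # phi_2 for G(x) = x^3 + x: ((G')^2 - 8 x G) / (4 G)
    g, gp = x**3 + x, 3 * x**2 + 1
    return (gp**2 - 8 * x * g) / (4 * g)

def double_x(x):
    # x-coordinate of 2P on y^2 = x^3 + x, via the tangent line at P
    y = sqrt(x**3 + x)            # positive branch; the result is branch-independent
    s = (3 * x**2 + 1) / (2 * y)  # slope of the tangent at P = (x, y)
    return s * s - 2 * x

for x in [0.5, 1.3, 2.0, 3.7]:
    assert abs(double_x(x) - phi2(x)) < 1e-9
```

For instance, at $x = 2$ both expressions give $9/40$.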
The two maps $${\varphi}_{1+i}(z)=\frac{1}{(1+i)^2}\frac{z^2+1}{z} \qquad {\varphi}_{1-i}(z)=-\frac{1}{(1+i)^2}\frac{z^2+1}{z}$$ commute, and their composition satisfies $$\varphi_{1+i}({\varphi}_{1-i}(z))= {\varphi}_{1-i}(\varphi_{1+i}(z)) = \varphi_2(z) = \frac{z^4-2z^2+1}{4(z^3+z)}.$$ The canonical height and measure are: $$\hat{h}(z)=h_{E_1}(z,\pm \sqrt{z^3+z}) \qquad d\mu(z)=\frac{i dz \wedge d\bar{z}} {2|z^3+z|}$$ Other examples of maps attached to $E_1$ are $${\varphi}_{1+2i}(z)=\frac{(-3-4i)z(z^2+1+2i)^2}{(5z^2+1-2i)^2} \quad {\varphi}_{1-2i}(z)=\frac{(3+4i)z(z^2+1+2i)^2}{(5z^2+1-2i)^2}$$ $${\varphi}_{2+i}(z)=\frac{(3-4i)z(z^2+1-2i)^2}{(5z^2+1+2i)^2} \quad {\varphi}_{2-i}(z)=\frac{(-3+4i)z(z^2+1-2i)^2}{(5z^2+1+2i)^2}.$$ \\ The curve $E_2 : y^2=x^3+1$ admits multiplication by the ring ${\mathbb Z}[\rho]$ where $\rho=(\sqrt{-3}+1)/2$. The multiplication by $\rho$ can be expressed in $x,y$ coordinates as $[\rho](x,y)=(\rho x,y)$. An example of commuting maps coming from $E_2$ is $${\varphi}_{\sqrt{-3}}(z)=\frac{-(z^3+4)}{3z^2} \qquad {\varphi}_{\sqrt{-3}\rho}(z)=\frac{-\rho(z^3+4)}{3z^2}$$ $${\varphi}_{\sqrt{-3}} \circ {\varphi}_{\sqrt{-3}\rho} (z) = {\varphi}_{\varepsilon}(z)=\frac{(z^9-96z^6+48z^3+64)} { 9 \rho z^2(z^3+4)^2},$$ where $\varepsilon=(-3\sqrt{-3}+3)/2$. The canonical measure associated to the three maps is $$d\mu_{E_2}(z)=\frac{\sqrt{3}i dz \wedge d\bar{z}} {3|z^3+1|}.$$ To get an idea of the ramification points and indices of the maps ${\varphi}_{\lambda}$, we prove the following lemma: \begin{lemma} Every branch point of $\varphi_{\lambda}$ (the image of a ramification point) is the image by $\pi$ of a 2-torsion point on $E$. \end{lemma} \begin{proof} To see this, suppose that ${\varphi}_{\lambda}^{-1}(\pi(P))=\{ \pi(Q) \mid \lambda Q=P \} $ has cardinality strictly smaller than $N(\lambda)$.
Then there are two distinct points $Q$ and $-Q$, with $Q \neq -Q$, satisfying $\lambda Q = \lambda(-Q) = P$; hence $P=-P$ and consequently $2 \lambda Q =2P=0.$ \end{proof} Let us see some examples of the different ramification behaviour that a map ${\varphi}_{\lambda}$ may have. Let $d$ be a positive square-free integer. Assume that the elliptic curve $E={\mathbb C}/({\mathbb Z}+\sqrt{-d}\,{\mathbb Z})$ admits multiplication by $\lambda=a+b\sqrt{-d}$. Suppose that $P_0=0$, $P_1= 1/2$, $P_2=1/2 + \sqrt{-d}/2$ and $P_3=\sqrt{-d}/2$ denote the 2-torsion points on $E$, and that $r_j$ denotes the number of preimages of the point $\pi(P_j)$, that is, the cardinality of the set ${\varphi}_{\lambda}^{-1}(\pi(P_j))$. Under the conditions previously described we can observe, for example, that for $\lambda=2$ the points in ${\varphi}_{2}^{-1} (\pi(P_0))$ are not ramification points of ${\varphi}_2$. On the other hand, for the multiplication by $\lambda=1+2i$ on $E_1$, all points in ${\varphi}_{1+2i}^{-1} (\pi(P_0))\cup {\varphi}_{1+2i}^{-1} (\pi(P_1)) \cup {\varphi}_{1+2i}^{-1} (\pi(P_2)) \cup {\varphi}_{1+2i}^{-1} (\pi(P_3))$ are ramification points of ${\varphi}_{1+2i}$.
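The composition identities stated above for $E_1$ can also be verified numerically. The Python sketch below is our own check, not part of the paper; it evaluates the maps at a few generic complex points (away from the poles) in floating-point arithmetic.

```python
def phi_1p(z):
    # multiplication by 1 + i on E_1, pushed down to P^1
    return (1 / (1 + 1j) ** 2) * (z * z + 1) / z

def phi_1m(z):
    # multiplication by 1 - i on E_1, pushed down to P^1
    return -(1 / (1 + 1j) ** 2) * (z * z + 1) / z

def phi_2(z):
    # multiplication by 2 = (1 + i)(1 - i) on E_1
    return (z**4 - 2 * z**2 + 1) / (4 * (z**3 + z))

for z in [0.7 + 0.3j, -1.4 + 2.2j, 0.1 - 5.0j]:
    assert abs(phi_1p(phi_1m(z)) - phi_2(z)) < 1e-9
    assert abs(phi_1m(phi_1p(z)) - phi_2(z)) < 1e-9  # the two maps commute
```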
The following table summarizes the results: \begin{center} \begin{tabular}{|c|c|c|} \hline $\lambda=a+b\sqrt{-d}$, $N(\lambda)$ & $r_j, j=0,2$ & $r_j, j=1,3$ \\[4pt] \hline $a+bd \equiv 1 \pmod{2}$ & $ r_0=(N(\lambda)+1)/2$& $r_1=(N(\lambda)+1)/2$ \\ \cline{2-3} &$ r_2=(N(\lambda)+1)/2$& $ r_3=(N(\lambda)+1)/2$\\ [4pt] \hline $a \equiv b \equiv 0 \pmod{2}$, $N(\lambda)>4$ & $ r_0=N(\lambda)/2+2$& $ r_1=N(\lambda)/2$ \\ [4pt] \cline{2-3} &$r_2=N(\lambda)/2$& $r_3=N(\lambda)/2$\\ [4pt] \hline $a \equiv b \equiv 0 \pmod{2}$, $N(\lambda)=4$ & $ r_0=4$& $r_1=N(\lambda)/2$ \\ [4pt] \cline{2-3} &$ r_2=N(\lambda)/2$& $ r_3=N(\lambda)/2$\\ [4pt] \hline $a \equiv bd \equiv 1 \pmod{2}$, $N(\lambda)=2$ & $ r_0=1$& $ r_1=N(\lambda)$ \\ [4pt] \cline{2-3} &$ r_2=1$& $ r_3=N(\lambda)$\\ [4pt] \hline $a \equiv bd \equiv 1 \pmod{2}$, $N(\lambda)>2$ & $ r_0=N(\lambda)/2$& $ r_1=N(\lambda)/2+1$ \\ [4pt] \cline{2-3} &$ r_2=N(\lambda)/2$& $ r_3=N(\lambda)/2+1$\\ [4pt] \hline \hline \end{tabular} \end{center}
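The lemma can be sanity-checked for $\lambda=2$ on $E_1$. Differentiating ${\varphi}_2(z)=\frac{z^4-2z^2+1}{4(z^3+z)}$, the numerator of ${\varphi}_2'$ is $z^6+5z^4-5z^2-1=(z^2-1)(z^4+6z^2+1)$, so the critical points are $\pm 1$, $\pm i(\sqrt{2}-1)$ and $\pm i(\sqrt{2}+1)$ (this computation is ours, not taken from the paper). The Python sketch below confirms that the corresponding critical values lie in $\{0, i, -i\}$, i.e. among the images of the finite 2-torsion points of $E_1$.

```python
def phi_2(z):
    # multiplication by 2 on E_1, pushed down to P^1
    return (z**4 - 2 * z**2 + 1) / (4 * (z**3 + z))

r = 2 ** 0.5
critical_points = [1, -1, 1j * (r - 1), -1j * (r - 1), 1j * (r + 1), -1j * (r + 1)]
torsion_images = [0, 1j, -1j]  # x-coordinates of the finite 2-torsion points

h = 1e-5
for z in critical_points:
    # the derivative vanishes (central finite difference)
    assert abs((phi_2(z + h) - phi_2(z - h)) / (2 * h)) < 1e-4
    # and the critical value is the image of a 2-torsion point
    assert min(abs(phi_2(z) - w) for w in torsion_images) < 1e-6
```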
import { NgModule } from '@angular/core'; import { LoginRoutingModule } from './login-routing.module'; import { LoginComponent } from './components/login.component'; import { SharedModule } from '../shared/shared.module'; import { CommonModule } from '@angular/common'; import { LoginPresentation } from './components/login.presentation'; @NgModule({ declarations: [ LoginComponent, LoginPresentation ], imports: [ CommonModule, SharedModule, LoginRoutingModule ] }) export class LoginModule { }
# Convolutions - similarity methods

## DSP.conv

    conv(u,v)

Convolution of two arrays. Uses either FFT convolution or overlap-save, depending on the size of the input. u and v can be N-dimensional arrays, with arbitrary indexing offsets, but their axes must be a UnitRange.

    conv(u,v,A)

2-D convolution of the matrix A with the 2-D separable kernel generated by the vectors u and v. Uses 2-D FFT algorithm.

## DSP.deconv

    deconv(b,a) -> c

Construct vector c such that b = conv(a,c) + r. Equivalent to polynomial division.

## DSP.xcorr

    xcorr(u,v; padmode = :none)

Compute the cross-correlation of two vectors, by calculating the similarity between u and v with various offsets of v. Delaying u relative to v will shift the result to the right.

The size of the output depends on the padmode keyword argument: with padmode = :none the length of the result will be length(u) + length(v) - 1, as with conv. With padmode = :longest the shorter of the arguments will be padded so they are equal length. This gives a result with length 2*max(length(u), length(v))-1, with the zero-lag condition at the center.
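For reference, the conventions described above (an output of length length(u) + length(v) - 1, and deconv as polynomial division) can be illustrated with a small direct, non-FFT sketch. The helpers below are our own Python illustration, not part of DSP.jl:

```python
def conv(u, v):
    # direct (time-domain) convolution; output length is len(u) + len(v) - 1
    out = [0.0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            out[i + j] += a * b
    return out

def deconv(b, a):
    # polynomial long division: returns (c, r) with b = conv(a, c) + r
    b = [float(x) for x in b]
    c = []
    for i in range(len(b) - len(a) + 1):
        q = b[i] / a[0]
        c.append(q)
        for j, ak in enumerate(a):
            b[i + j] -= q * ak
    return c, b  # the remainder is left in b (its leading entries become 0)
```

For example, `conv([1, 1, 1], [1, 2])` gives `[1.0, 3.0, 3.0, 2.0]`, and `deconv([1, 3, 3, 2], [1, 2])` recovers `[1.0, 1.0, 1.0]` with zero remainder.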
Point Spread Standings

Team         ATS      PF     PA     HF     AF     HD     AD       SU      Grs     Trf
Dolphins     9-7-0    474    494    0-0    0-0    4-4    5-3      5-11    5-5     4-2
Bills        9-7-1    372    327    2-3    2-0    1-2    4-2-1    10-7    3-1-1   6-6
Patriots     9-8-0    433    407    4-5    5-3    0-0    0-0      12-5    2-2     7-6
Jets         7-9-0    369    373½   0-2    1-2    4-2    2-3      7-9     1-3     6-6
Ravens       10-6-1   555½   417½   2-5    5-0-1  2-0    1-1      14-3    6-6-1   4-0
Steelers     7-8-1    318    340½   2-3    1-3    2-0-1  2-2      8-8     7-5-1   0-3
Bengals      7-9-0    375    425½   0-2    0-1    3-3    4-3      2-14    2-4     5-5
Browns       5-10-1   371½   439½   3-2-1  1-4    0-2    1-2      6-10    4-8-1   1-2
Titans       10-8-1   517½   425    2-3-1  3-1    1-1    4-3      11-8    7-8-1   3-0
Texans       8-9-1    476½   494½   2-5    0-0-1  1-1    5-3      11-7    7-8-1   1-1
Colts        7-7-2    397½   413½   4-3    0-1    0-1    3-2-2    7-9     3-2-2   4-5
Jaguars      7-9-0    351    411    1-2    1-0    2-3    3-4      6-10    6-7     1-2
Chiefs       13-5-0   550½   466    6-2    5-2    1-1    1-0      14-4    12-4    1-1
Broncos      9-7-0    344    335½   2-2    0-1    3-1    4-3      7-9     7-6     2-1
Raiders      8-8-0    383    444    1-2    0-1    3-2    4-3      7-9     7-6     1-2
Chargers     4-10-2   363½   396    0-4-1  2-3    1-2    1-1-1    5-11    4-9-2   0-1
Cowboys      9-7-0    440½   413½   4-3    3-4    1-0    1-0      8-8     1-1     8-6
Giants       7-9-0    413    465½   2-1    1-1    0-5    4-2      4-12    3-0     4-9
Eagles       7-10-0   413½   434½   2-3    2-2    1-3    2-2      9-8     2-1     5-9
Redskins     6-10-0   391½   442½   0-1    0-1    2-5    4-3      3-13    4-7     2-3
Packers      11-7-0   446    451½   6-3    2-2    0-0    3-2      14-4    8-6     3-1
Vikings      10-8-0   472½   423    4-3    3-1    1-0    2-4      11-7    1-4     9-4
Lions        6-10-0   418½   436    0-1    0-2    4-3    2-4      3-12-1  1-5     5-5
Bears        4-12-0   305    346½   2-4    1-4    1-1    0-3      8-8     4-8     0-4
Saints       11-6-0   494½   458    3-5    4-0    1-0    3-1      13-4    5-0     6-6
Falcons      8-8-0    437    419½   2-2    0-1    2-2    4-3      7-9     3-2     5-6
Panthers     6-9-1    398    498½   2-3    1-0    0-2-1  3-4      5-11    5-7-1   1-2
Buccaneers   5-9-2    495½   473½   0-4-1  2-0    0-2-1  3-3      7-9     2-7-2   3-2
49ers        11-6-1   555    430½   5-4-1  1-2    0-0    5-0      15-3    7-6-1   4-0
Rams         10-5-1   406    420    4-2-1  4-2    0-1    2-0      9-7     5-1     5-4-1
Cardinals    9-5-2    452    444    0-1    0-0    4-3    5-1-2    5-10-1  6-4-1   3-1-1
Seahawks     8-9-1    465½   488½   2-5    3-1-1  0-1    3-2      12-6    4-1-1   4-8

(W/L/P) Won/Loss/Push: Team records against the point spread.
(PF) Points For: Combines the actual points scored with any points a team was given on the point spread when playing as the underdog.
(PA) Points Against: The total number of points a team has allowed on the field plus any points given on the spread when playing as a favorite.
(HF) Home Favorite: Point spread record of a team playing at home when favored.
(AF) Away Favorite: Point spread record of a team playing away from home as the favorite.
(HD) Home Underdog: Point spread record of a team getting points on their home field.
(AD) Away Underdog: Point spread record of a team getting points on the road.
(SU) Straight Up: Team record based only on the final score.
(Grs) Grass: Team records against the point spread when playing on a natural surface.
(Trf) Artificial Turf: Point spread record of a team while playing on an artificial surface.
\section{Introduction} In the entropy estimation problem one seeks to approximately compute the Renyi entropy of some \emph{unknown} distribution $X$ while observing only its samples. This is a fundamental problem in many areas such as data analysis and anomaly detection~\cite{Jizba20122971,DBLP:conf/ica3pp/LiZYD09}, machine learning and data analysis~\cite{Xu:1998:EEI:929350,1223401,Paninski:2003:EEM:795523.795524,ma2000image,DBLP:journals/imst/NeemuchwalaHZC06,DBLP:journals/pr/SahooA04,DBLP:conf/uai/MansourMR09}, and security and cryptography~\cite{Knuth:1998:ACP:280635,DBLP:journals/joc/OorschotW99,DBLP:journals/tit/Arikan96,DBLP:journals/tit/PfisterS04,DBLP:journals/tit/HanawalS11,DBLP:conf/focs/ImpagliazzoZ89,DBLP:journals/tit/BennettBCM95,DBLP:conf/crypto/BarakDKPPSY11,DBLP:conf/tcc/DodisY13}. In this paper we revisit some practical aspects of this problem and propose a more efficient estimator. \subsection{Related Work} \subsubsection{Distribution Testing} Testing the closeness of a distribution to uniform under the $\ell_2$ norm is known to be equivalent to estimating the collision entropy~\cite{batu2000testing}. However, this does not generalize to higher orders: in general the $\ell_d$ distance from the uniform distribution is not a function of the Renyi entropy of order $d$, but rather a more complicated functional of the distribution. \subsubsection{Stream frequency estimators} Empirical frequency estimators are very important for big-data problems; this line of research started in \cite{alon1999space} and was finalized with optimal bounds in \cite{indyk2005optimal}. Although the problem looks similar to entropy estimation, frequency estimators compute \emph{moments of an empirical distribution}, while in entropy estimation we (equivalently) compute \emph{moments of an unknown probability distribution}. Since the empirical distribution is still biased with respect to the true sampling distribution, there is no direct reduction.
Furthermore, the state-of-the-art estimators~\cite{acharya2016estimating,obremski_et_al:LIPIcs:2017:7569} do not have compatible expressions because of the median trick involved. \subsection{Dedicated Works on Entropy Estimation} The state-of-the-art bounds were obtained in~\cite{acharya2016estimating,obremski_et_al:LIPIcs:2017:7569} and shown to be asymptotically optimal. The contribution of this paper is a slightly different estimator which allows for a simpler and more elegant analysis, while giving superior confidence bounds at the same time. As the estimator computes just means and does not depend on the so-called \emph{median trick}, we are able to connect it to stream frequency estimators and sketch a memory-efficient implementation. Finally, we rigorously discuss estimation in low and moderate entropy regimes, where it can be done much faster. \subsection{Results} \subsubsection{Birthday-paradox Estimator} We analyze an estimator for Renyi entropy based on the \emph{birthday paradox}: it simply counts the collisions occurring between tuples of samples. The pseudocode appears in~\Cref{algo:main}.
\begin{lstlisting}[caption={Estimator of $d$-th moment},label=algo:main,captionpos=t,float,abovecaptionskip=-\medskipamount,language=Python]
from math import comb, floor, log
from collections import Counter

def moment_estimator(x, d, dlt, eps):
    # x: observed samples; d: order of the moment
    # eps: relative error; 1 - dlt: confidence
    n_batches = max(1, floor(8 * log(2 / dlt) / (3 * eps ** 2)))
    n_0 = len(x) // n_batches  # samples per batch
    m = []
    for b in range(n_batches):
        y = x[n_0 * b : n_0 * (b + 1)]  # current batch
        # number of d-combinations i_1 < ... < i_d with
        # y[i_1] = ... = y[i_d], counted via symbol frequencies
        cnt = sum(comb(c, d) for c in Counter(y).values())
        m.append(cnt / comb(n_0, d))
    return sum(m) / n_batches
\end{lstlisting}
The theoretical analysis of the algorithm turns out to be much simpler than for previous approaches, and offers superior confidence bounds when compared to the state-of-the-art estimators.
In particular we recover the optimal sample complexity $\tilde{O}\left(2^{(1-d^{-1})\cdot H_d(X)}\right)$ known from previous works~\cite{acharya2016estimating}. We stress that one of our technical contributions is \emph{eliminating} the median trick, which has been used to amplify the confidence of auxiliary estimators~\cite{acharya2016estimating,obremski_et_al:LIPIcs:2017:7569}. \begin{theorem}\label{thm:main} For any discrete distribution $X$, integer $d\geqslant 2$, precision $\epsilon>0$ and confidence parameter $\delta>0$, the algorithm in~\Cref{algo:main} with probability $1-\delta$ estimates $\sum_{x}P_x(x)^{d}$ up to a relative error $\epsilon$, given $$n \geqslant \frac{16d\log(2/\delta)}{3\epsilon^2} \cdot \left(\sum_{x}P_x(x)^{d}\right)^{-\frac{1}{d}}$$ independent samples $x_1,x_2,\ldots,x_n$ from $X$ on the input. In particular it produces an $\frac{\epsilon}{d-1}$-additive approximation to the Renyi entropy $H_d$ of $X$ given that $$n \geqslant \frac{16d\log(2/\delta)}{3\epsilon^2} \cdot 2^{(1-d^{-1})\cdot H_d(X)}.$$ \end{theorem} \subsection{Learning Moderate Entropy Regimes} Note that \Cref{thm:main} promises a speedup over the pessimistic sample complexity $\tilde{O}(2^{(1-d^{-1})H_0(X)})$, where $H_0$ is the logarithm of the support size of $X$, in \emph{small or moderate entropy regimes}. However, we do not know in advance whether we can safely assume $H_d(X) < t_0$ or not. We discuss how to adapt our algorithm to gradually test and increase the threshold, so that the upper bound is met. The overhead in the number of necessary samples is only $O(\log \log |\mathrm{dom}(X)|)$. This is discussed in \Cref{seq:early_stop}. \subsection{Memory Efficient Algorithm} Last but not least, we comment on the memory complexity. Although the algorithm in \Cref{algo:main} can be implemented with $\tilde{O}\left(2^{(1-d^{-1})H_d(X)}\right)$ memory, our results imply a \emph{much better strategy}.
Namely, on each batch $i$ the estimator can be equivalently written as \begin{align} \tilde{p}_i = \binom{n_0}{d}^{-1}\sum_x \binom{n_x}{d}, \end{align} where $n_0$ is the batch length and $n_x$ is the number of occurrences of the symbol $x$ in the batch; equivalently, $\tilde{p}_i = \sum_x n_x^{\underline{d}}/n_0^{\underline{d}}$, where $x^{\underline{d}}$ denotes a falling factorial. This reduces our problem to \emph{frequency moment estimation in a stream}. \section{Preliminaries} We consider discrete random variables $X$; the set of values of $X$ is denoted by $\mathrm{dom}(X)$ and its probability mass function by $p_X$. \begin{definition}[Frequency Moment] The $d$-th frequency moment of a random variable $X$ is defined as $\sum_x P_x(x)^{d}$. We also denote the $d$-th norm of $P_X$ by $\|P_X\|_d = \left(\sum_x P_x(x)^d\right)^{1/d}$. \end{definition} \begin{definition}[Renyi Entropy] Let $X$ be a random variable over a discrete alphabet $\mathcal{X}$. The Renyi entropy of order $d$ is defined as \begin{align}\label{eq:entropy} \mathbf{H}_{d}(X) = \frac{1}{1-d}\log\left(\sum_{x\in\mathcal{X}}P_X(x)^{d}\right). \end{align} \end{definition} \section{Proofs of Results} \subsection{Eliminating Median Trick} It has been popular in many works on algorithms to use the so-called median trick~\cite{jerrum1986random} to amplify the estimator confidence. It reduces the problem to finding an approximation with constant confidence (say $2/3$), which is usually done by a second moment method (Chebyshev's inequality); boosting the confidence to any $\delta>0$ costs a multiplicative factor $O(\log(1/\delta))$ in the number of samples. \begin{proposition} Suppose that an algorithm $\tilde{A}$ outputs an estimate in some target interval with probability at least $3/4$. Then, for any $\delta > 0$, repeating $\tilde{A}$ independently $O(\log(1/\delta))$ times and taking the median of all outputs, we get an estimate in the same interval with probability $1-\delta$. \end{proposition} Let $A$ be the real quantity to be estimated.
The approximation with constant confidence can be obtained by Chebyshev's inequality, which states that $\Pr[|\tilde{A} - A| > \epsilon] < \mathbf{MSE}(\tilde{A})/\epsilon^2$. When the estimator is unbiased, that is $\mathbf{E}\tilde{A} = A$, we have $\mathbf{MSE}(\tilde{A}) = \mathbf{Var}(\tilde{A})$, and instead of medians we can simply use means combined with Bernstein's inequality. \begin{proposition}[Bernstein's inequality~\cite{bernstein1924modification,niemiro2009fixed}]\label{prop:bernstein} Let $\tilde{A}_i$ be IID with mean $A$, let $\epsilon >0$ be a relative error and let the variance of $\tilde{A}_i$ be at most $B\cdot (\mathbf{E}A)^2$. Then \begin{align*} \Pr\left[\left|m^{-1}\sum_{i=1}^{m}\tilde{A_i} - A\right| > \epsilon\cdot A\right] \leqslant 2\exp\left(-\frac{m\epsilon^2}{2B + 2B\epsilon/3} \right) \leqslant 2\exp\left(-\frac{3m\epsilon^2}{8B}\right), \end{align*} where the second inequality holds when $\epsilon \leqslant 1$. \end{proposition} For possible optimizations of the constant in the median trick, see the discussion in~\cite{niemiro2009fixed}. The mean-based approach is preferable because the median trick internally reduces to bounding deviations from the mean, and it does not fully exploit the variance information. \subsection{Second Moments: Collision Entropy} Let $X_1,\ldots,X_n$ be the observed symbols. Let $C_{i,j}$ indicate whether $X_i$ and $X_j$ collide, that is \begin{align} C_{i,j} = \left\{\begin{array}{rl} 1 & X_i = X_j \\ 0 & \text{otherwise.} \end{array}\right. \end{align} With this notation we clearly have: \begin{proposition} With notation as above, the second-moment estimator for $p_X$ equals \begin{align} \tilde{p} = \binom{n}{2}^{-1}\sum_{i<j} C_{i,j}. \end{align} \end{proposition} It is straightforward to see that the estimator is unbiased. \begin{proposition}\label{prop:pure_moment} For every $i\not = j$ we have $\mathbb{E} C_{i,j} = \sum_{x}p_X(x)^2$.
\end{proposition} Note that the $C_{i,j}$ are in general not independent, and in fact are \emph{positively} associated. We can however compute their mixed moment: \begin{proposition}\label{prop:mixed_moment} Let $i<j<k$; then $\mathbb{E} C_{i,j} C_{j,k} = \sum_{x}p_X(x)^3$. \end{proposition} \begin{proof} Conditioning on $X_j = x$, we have $C_{i,j}C_{j,k} = 1$ if and only if $X_i = x$ and $X_k = x$. Since $i<j<k$, these two events (conditioned on $X_j=x$) are independent and each holds with probability $p_X(x)$. The claim then follows by the law of total probability. \end{proof} \begin{remark}[Positive correlation] Jensen's inequality implies $\sum_{x}p_X(x)^3 \geqslant \left(\sum_{x}p_X(x)^2\right)^2$, thus $\mathbf{Cov}(C_{i,j},C_{j,k})\geqslant 0$. \end{remark} By combining \Cref{prop:pure_moment} and \Cref{prop:mixed_moment} we obtain: \begin{proposition}[Variance estimation]\label{prop:variance} We have $$\mathbf{Var}\left(\sum_{i<j}C_{i,j}\right) \leqslant \binom{n}{2}\sum_{x}p_X(x)^2 +2\binom{n}{2}(n-2)\sum_{x}p_X(x)^3+ \binom{n}{2}\binom{n-2}{2}\sum_{x}p_X(x)^4.$$ In particular $$ \mathbf{Var}(\tilde{p}) \leqslant \frac{\sum_{x}p_X(x)^2 +2(n-2)\sum_{x}p_X(x)^3+ \binom{n-2}{2}\sum_{x}p_X(x)^4}{\binom{n}{2}}. $$ \end{proposition} \begin{proof} Since the $C_{i,j}$ are boolean, \Cref{prop:pure_moment} bounds the variance of each $C_{i,j}$, which accounts for the $\binom{n}{2}$ diagonal terms. Then \Cref{prop:mixed_moment} bounds the covariance of $C_{i,j}$ and $C_{j,k}$, which appears in $n^{\underline{3}} = 2!\binom{n}{2}(n-2)$ terms; it is also possible to pick pairs $C_{i,j}$ and $C_{i',j'}$, where $i<j$ and $i'<j'$ are all distinct, in $\binom{4}{2}\cdot \binom{n}{4}$ ways (and then the random variables are independent). The bound now follows from the variance sum law. The second inequality follows from the definition of $\tilde{p}$ and rescaling the variance.
As a sanity check, note that $\binom{n}{2} + 2!\binom{n}{2}(n-2) + \binom{n}{2}\binom{n-2}{2}$ equals $\binom{n}{2}\cdot\left(1+2(n-2)+\binom{n-2}{2}\right)$, which is $\binom{n}{2}^2$, the total number of terms in the variance sum formula. \end{proof} \subsection{Higher Moments - General Case} For a tuple $\mathbf{i}=(i_1,\ldots,i_d)$, let $C_{\mathbf{i}}$ indicate whether all of $X_{i_1},\ldots,X_{i_d}$ collide. It is clear that: \begin{proposition}\label{eq:unbiased_general} With notation as above, the $d$-th moment estimator for $p_X$ equals \begin{align} \tilde{p} = \binom{n}{d}^{-1}\sum_{\mathbf{i}=(i_1,\ldots,i_d): 1\leqslant i_1<i_2<\ldots<i_d \leqslant n} C_{\mathbf{i}}, \end{align} that is, the summation is over increasing tuples of distinct indices. \end{proposition} Similarly as before, it is straightforward to see that the estimator is unbiased. \begin{proposition}\label{prop:pure_moment_general} For every tuple $\mathbf{i}$ of distinct indices we have $\mathbb{E} C_{\mathbf{i}} = \sum_{x}p_X(x)^d$. In particular $\tilde{p}$ is unbiased. \end{proposition} This is actually a special case ($k=d$) of the more general result below. \begin{proposition}[Collision patterns]\label{prop:mixed_moment_general} Let $\mathbf{i}=i_1,\ldots,i_d$ and $\mathbf{j}=j_1,\ldots,j_d$ be tuples of distinct indices. Suppose that exactly $k\geqslant 0$ entries of $\mathbf{i}$ coincide with entries of $\mathbf{j}$, that is $|\mathbf{i}\cap\mathbf{j}|=k$. Then \begin{align*} \mathbf{E}\left[ C_{\mathbf{i}}C_{\mathbf{j}} \right] = \sum_x p_X(x)^{2d-k}. \end{align*} \end{proposition} \begin{proof} Consider the case $k=0$, which means that $\mathbf{i}$ and $\mathbf{j}$ do not share a common index; it is easy to see that the formula is true. Consider now $k>0$, which means that $\mathbf{i}$ and $\mathbf{j}$ overlap. On the event $C_{\mathbf{i}}C_{\mathbf{j}}=1$ we have $X_i = X_j$ for all $i\in\mathbf{i}$ and $j\in\mathbf{j}$.
Conditioning on the common value of the $X_i$ and $X_j$, \begin{align*} \Pr\left[ C_{\mathbf{i}}C_{\mathbf{j}}=1,\ X_{\mathbf{i}}=X_{\mathbf{j}}=\underbrace{x,\ldots,x}_{2d-k}\right] = p_X(x)^{2d-k}, \end{align*} because there are exactly $2d-k$ distinct variables among the $X_i$, $X_j$, all equal to $x$. The claim now follows by summing over the possible values of $x$. \end{proof} \begin{proposition}[Number of terms]\label{prop:terms_enumer_general} There are $\binom{n}{d}\binom{d}{k}\binom{n-d}{d-k}$ pairs of distinct tuples $\mathbf{i}$ and $\mathbf{j}$ which satisfy $|\mathbf{i}\cap\mathbf{j}|=k$. The number of possible unions $\mathbf{i}\cup\mathbf{j}$ equals $\binom{n}{2d-k}$. \end{proposition} \begin{proof} Recall that $\mathbf{i}$ and $\mathbf{j}$ are $d$-combinations out of $[n]$. To enumerate pairs such that $|\mathbf{i}\cap \mathbf{j}| = k$, note that it suffices to choose $\mathbf{i}$ in $\binom{n}{d}$ ways, then choose the $k$ common elements in $\binom{d}{k}$ ways, and finally choose the remaining elements $\mathbf{j}\setminus\mathbf{i}$ in $\binom{n-d}{d-k}$ ways. This gives the formula. \end{proof} By combining \Cref{prop:pure_moment_general}, \Cref{prop:mixed_moment_general} and \Cref{prop:terms_enumer_general} we derive the following variance formula; the proof is analogous to that of \Cref{prop:variance}. \begin{proposition}[Variance estimation]\label{prop:variance_general} With the summation convention of \Cref{eq:unbiased_general}, $$\mathbf{Var}\left(\sum_{\mathbf{i}}C_{\mathbf{i}}\right) \leqslant \binom{n}{d}\sum_{k=1}^{d}\binom{d}{k}\binom{n-d}{d-k}\sum_{x}p_X(x)^{2d-k}.$$ In particular $$ \mathbf{Var}(\tilde{p}) \leqslant \frac{\sum_{k=0}^{d}\binom{d}{k}\binom{n-d}{d-k}\sum_{x}p_X(x)^{2d-k}}{\binom{n}{d}}. $$ \end{proposition} \begin{remark} As a sanity check, note that $\sum_{k=0}^{d}\binom{d}{k}\binom{n-d}{d-k} = \binom{n-d+d}{d} = \binom{n}{d}$ by the Vandermonde identity. This means that all terms in the variance sum law have been taken into account.
\end{remark} Finally, we simplify the formula further to show that it depends on the $d$-th moment only. We will use the standard fact from calculus about $\alpha$-summable sequences. \begin{proposition}[\cite{konca2015p}]\label{prop:summable_seq} The mapping $\alpha\rightarrow \left(\sum_{x}p_X(x)^{\alpha}\right)^{1/\alpha}$, for any nonnegative weights $p_X(x)$, is decreasing in $\alpha\geqslant 1$. \end{proposition} \begin{corollary}[Variance estimation]\label{cor:variance} Let $\|p\|_d = \left(\sum_{x}P_x(x)^d\right)^{1/d}$. Then $$ \mathbf{Var}(\tilde{p}) \leqslant \frac{\|p\|^{2d}_d\sum_{k=0}^{d}\binom{d}{k}\binom{n-d}{d-k}\|p\|^{-k}_d}{\binom{n}{d}} $$ and in particular we have $$\mathbf{Var}(\tilde{p}) \leqslant \binom{n}{d}^{-1}\cdot 2\|p\|_{d}^{d},\quad n>2d^2.$$ \end{corollary} \begin{remark} Consider the term $k=d$: it contributes $\Omega\left(\|p\|_d^{-d}\right)$ to the sum in the variance bound. \end{remark} \begin{proof} By \Cref{prop:summable_seq} we can write $\sum_{x}p_X(x)^{2d-k} \leqslant \left(\sum_{x}p_X(x)^{d}\right)^{2-\frac{k}{d}}$; plugging this in and rearranging terms we obtain the first inequality. Next, observe that $Q_k=\binom{d}{k}\binom{n-d}{d-k}$ decreases geometrically in $k$ provided that $n>(d+1)^2$; indeed, $Q_{k+1} = Q_k\cdot \frac{d-k}{k+1}\cdot \frac{d-k}{n-2d + k +1}$, and thus $Q_{k+1} / Q_{k}$ decreases in $k$ as both factors decrease; hence $Q_{k+1} / Q_{k} \leqslant Q_{1} / Q_{0} = d^2/(n-2d+1) \leqslant \frac{1}{2}$ given our assumption on $n$ and $d$. Now we can estimate $Q_{k}\leqslant 2^{-k} \cdot Q_0$, which means $\binom{d}{k}\binom{n-d}{d-k}\|p\|^{-k}_d \leqslant 2\|p\|_{d}^{-d}$ (sum of the geometric progression), which implies the second inequality. \end{proof} Now using \Cref{prop:bernstein} we conclude our main result. \begin{corollary} \Cref{thm:main} holds with $n$ such that $n \geqslant \frac{16d\log(2/\delta)}{3\epsilon^2} \cdot \left(\sum_{x}P_x(x)^{d}\right)^{-\frac{1}{d}}$.
\end{corollary} \begin{proof} Choose $n_0$ so that the bound in \Cref{cor:variance} is at most $(\mathbf{E}\tilde{p})^2 = \|p\|_d^{2d}$; by the elementary inequality $\binom{n}{d} > (n/d)^d$ it suffices to take \begin{align*} n_0 > 2d\cdot \| p \|^{-1}_d. \end{align*} To apply \Cref{prop:bernstein} we divide the samples into batches of length $n_0$ and choose the number of batches $\lceil n / n_0\rceil$ accordingly, to get relative error $\epsilon$ and confidence $1-\delta$. We note that in terms of entropy $\|p\|_{d}^{d} = 2^{-(d-1)\cdot H_{d}(X)}$, so that $\|p\|_{d} = 2^{-\frac{d-1}{d}\cdot H_d(X)}$. \end{proof} \subsection{Learning Moderate Entropy Regimes with Early Stopping}\label{seq:early_stop} Let $p = \sum_x p_X(x)^d$ be the unknown moment to estimate and let $\tilde{p}$ be the estimator. We will use the estimator to \emph{gradually test} whether $p$ is big or not. \begin{proposition}[Small values don't give high estimates] Set the parameters assuming $p\geqslant p_0$, with $\epsilon = 1$ and $\delta$ a small number. Suppose that in fact $p = \gamma p_0$, where $\gamma<1/2$ is some constant. Then $\tilde{p} \leqslant 2p_0$ with probability $1-\delta$. \end{proposition} \begin{proof} Suppose not; then $\tilde{p} = \epsilon'\cdot p_0$ for some constant $\epsilon'\geqslant 2 > 2\gamma$. But we still have $\mathbf{E}\tilde{p} = p$, in particular \begin{align*} \Pr[\tilde{p} > \epsilon'\cdot p_0] \leqslant \Pr[\tilde{p}-p > (\epsilon' - \gamma)\cdot p_0 ] \leqslant \Pr[\tilde{p}-p > \epsilon'/2 \cdot p_0 ]. \end{align*} When we use \Cref{prop:bernstein} to estimate this probability, the bound on the number $B$ for $\tilde{p}$ differs from that for $p\geqslant p_0$ by a factor $p_0/p = \gamma^{-1}$. Suppose that $\epsilon' = 2$. In \Cref{prop:bernstein} we use the tail bound $2\exp\left(-\frac{m\epsilon^2}{2B + 2B\epsilon/3}\right)$. We get the same dependency on $\epsilon$, and $B$ only increases because $\gamma < 1$; therefore we get the same bound as before.
\end{proof} This result guarantees that we can gradually test whether $p_0 < 2^{-\lambda}$ for $\lambda=1,2,\ldots$ with constant multiplicative error. By doing this we lose in confidence at most $H_0(X)\cdot \delta$; thus the number of samples should be increased by a factor of $O(\log \log|\mathrm{dom}(X)|)$ to preserve the confidence. Once we know the interval for $p$, up to a multiplicative factor, we can set up the estimator as usual. \subsection{Stream Estimation} The quantity $\binom{n_x}{d}$ is a polynomial of order $d$ in $n_x$, similar to those considered in streaming estimators. The best streaming algorithms for estimating the frequency moments give the bound $\tilde{O}\left(|\mathrm{dom}(X)|^{1-2/k}\right)$ to approximate the empirical sum of $k$-th powers $\sum_{x} n_x^{k}$. Our sum can be transformed into a combination of such expressions via a change of basis. Indeed, we have \begin{proposition} For any natural $k$ it holds that $$x^{k} = \sum_{j=0}^{k}S(k,j)\, j! \binom{x}{j}$$ where $S(k,j)$ are Stirling numbers of the second kind. \end{proposition} Now applying the state-of-the-art stream estimators to each term, we see that the complexity is dominated by the case $k=d$. Thus we can reduce the memory usage to about $\tilde{O}\left(|\mathrm{dom}(X)|^{1-2/d}\right)$. \section{Conclusion}
I find this extremely hard to believe, but according to new research published in Nature Neuroscience, scientists have invented a method to induce memories in brains for the first time in history. The study—published by Case Western Reserve University School of Medicine's Professor of Neurosciences and Physiology/Biophysics Ben Strowbridge, PhD, and MD/PhD student Robert A. Hyde—shows a method to store different types of short-term memories, which they have successfully tested in brain tissue stored in vitro. Titled "Mnemonic Representations of Transient Stimuli and Temporal Sequences in Rodent Hippocampus In Vitro", their paper describes how they used a piece of mouse brain tissue to form the necessary circuits to record a short-term declarative memory. This type of memory can be something like names, places and events. These neural circuits—located in the hippocampus—retained the memory from different stimuli for ten seconds. The researchers were able to observe the recording of these artificial memories by tracing the activity of the brain cells. According to Hyde, "the type of activity we triggered in isolated brain sections was similar to what other researchers have demonstrated in monkeys taught to perform short-term memory tasks. Both types of memory-related activity changes typically lasted for 5-10 seconds." Uncanny. The rat brain in vitro was even able to remember different sequences of events.
Sweater fit well and looks good. Haven't washed it yet.
Fabric was very thin and cheaply made. Was not worth the price.
Received this item not as advertised; disappointed and maybe returning it. Not what I expected.
They are the same length as the regular snap button cardigans.
I love my sweater! It's lightweight but still warm and it's sooooooo soft.
package com.alibaba.otter.canal.parse.inbound.mysql.tsdb;

import java.io.File;
import java.io.FileInputStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

import org.apache.commons.io.IOUtils;
import org.apache.commons.lang.StringUtils;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

import com.alibaba.druid.sql.repository.Schema;
import com.alibaba.otter.canal.parse.inbound.TableMeta;

/**
 * @author agapple Aug 1, 2017, 7:15:54 PM
 */
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "/tsdb/h2-tsdb.xml" })
public class MemoryTableMeta_Random_DDL_Test {

    @Test
    public void test_database() throws Throwable {
        URL url = Thread.currentThread().getContextClassLoader().getResource("dummy.txt");
        File dummyFile = new File(url.getFile());
        int number = 39;
        for (int i = 1; i <= number; i++) {
            File sourceFile = new File(dummyFile.getParent() + "/ddl/table", "test_" + i + ".sql");
            String sourceSql = StringUtils.join(IOUtils.readLines(new FileInputStream(sourceFile)), "\n");
            MemoryTableMeta source = new MemoryTableMeta();
            source.apply(null, "test", sourceSql, null);

            File targetFile = new File(dummyFile.getParent() + "/ddl/table", "mysql_" + i + ".sql");
            String targetSql = StringUtils.join(IOUtils.readLines(new FileInputStream(targetFile)), "\n");
            MemoryTableMeta target = new MemoryTableMeta();
            target.apply(null, "test", targetSql, null);

            compareTableMeta(i, source, target);
        }
    }

    @Test
    public void test_table() throws Throwable {
        URL url = Thread.currentThread().getContextClassLoader().getResource("dummy.txt");
        File dummyFile = new File(url.getFile());
        int number = 80;
        for (int i = 1; i <= number; i++) {
            try {
                File sourceFile = new File(dummyFile.getParent() + "/ddl/alter", "test_" + i + ".sql");
                String sourceSql = StringUtils.join(IOUtils.readLines(new FileInputStream(sourceFile)), "\n");
                MemoryTableMeta source = new MemoryTableMeta();
                source.apply(null, "test", sourceSql, null);

                File targetFile = new File(dummyFile.getParent() + "/ddl/alter", "mysql_" + i + ".sql");
                String targetSql = StringUtils.join(IOUtils.readLines(new FileInputStream(targetFile)), "\n");
                MemoryTableMeta target = new MemoryTableMeta();
                target.apply(null, "test", targetSql, null);

                compareTableMeta(i, source, target);
            } catch (Throwable e) {
                Assert.fail("case : " + i + " failed by : " + e.getMessage());
            }
        }
    }

    private void compareTableMeta(int num, MemoryTableMeta source, MemoryTableMeta target) {
        List<String> tableNames = new ArrayList<>();
        for (Schema schema : source.getRepository().getSchemas()) {
            tableNames.addAll(schema.showTables());
        }

        for (String table : tableNames) {
            TableMeta sourceMeta = source.find("test", table);
            TableMeta targetMeta = target.find("test", table);
            boolean result = DatabaseTableMeta.compareTableMeta(sourceMeta, targetMeta);
            if (!result) {
                Assert.fail(sourceMeta.toString() + " vs " + targetMeta.toString());
            }
        }
    }
}
\section{Introduction} Negative screening is one method to avoid interactions with inappropriate entities. For instance, international organizations and governments issue smart sanctions lists to prohibit trade with foreign entities that are involved in illegal activities, such as terrorism and money laundering. Financial institutions also maintain many versions of investment exclusion lists by gathering information from various news sources. Their focus is not only to avoid firms that have financial problems in order to keep their portfolios profitable; there is also a growing interest in putting pressure on publicly listed firms to improve their environmental, social, and governance (ESG) practices \cite{OECD2017}, that is, to put more pressure on big firms ``to do the right thing'' by avoiding investing in them. The aims of these ESG practices include not only environmental issues but also human rights issues (e.g., child labor), discrimination issues (e.g., gender and race), and incorporating information from smart sanctions lists issued by countries and international organizations worldwide. Thus, negative screening is becoming increasingly important to enhance the healthy functioning of global markets. Our focus is precisely to predict the appearance of firms on investment exclusion lists maintained in the finance domain (Fig.~\ref{fig:list}), which are gaining popularity worldwide \cite{Sherwood2018}. There are three information sources used to create such investment exclusion lists: (1) information that firms voluntarily disclose, (2) ESG ratings provided by rating agencies, and (3) news information reported by the media. We focus on the investment exclusion lists created using news information because (1) is susceptible to manipulation, as in Enron's creative accounting practices \cite{Markham2006}, and (2) might be corrupted by conflicts of interest, as in the subprime mortgage crisis \cite{Hill2010}. 
Although there are concerns about fake news, news reports used for professional investments are less susceptible to manipulation, and investment exclusion lists created from these news reports are widely reported to have a positive impact on a portfolio's performance \cite{Sherwood2018}. However, news information also has its shortcomings; for example, investors can react only after the news is released. Their ex-post nature makes them effectively a case of ``locking the barn door after the horse has been stolen.'' A more ambitious approach is to try to identify possible future news events that have not yet been reported that might trigger a firm to be added to the investment exclusion lists, as we propose here. Our approach is tested using negative news investment exclusion list data of more than 35,000 firms worldwide from January 2012 to May 2018. Our investment exclusion lists are based on data from Dow Jones, which created its dataset using negative news information from about 10,000 news sources worldwide. Dow Jones categorizes negative news into 17 categories, and we create investment exclusion lists according to this classification. Because the strategy to predict firms that might be exposed to a financial problem in the near future might differ from the strategy to predict firms that might be exposed to environmental problems, we must have a method that can adjust its prediction strategy to each investment exclusion list category accordingly. Thus, we aim to build a model that can adaptively adjust to each category. However, it is not sufficient to develop an adaptable prediction strategy for each investment exclusion list category by using only the basic information that one data vendor provides (i.e., date of addition, industry classification, and headquarters location). Thus, we construct a vast heterogeneous information network that covers the necessary information surrounding each firm by gathering information from several sources. 
The network is assembled using seven professionally curated datasets and two open datasets, which results in approximately 50 million nodes and 400 million edges in total. Exploiting this vast heterogeneous information network, we propose a model that can navigate through the network to predict firms that are more likely to be added to each investment exclusion list in the near future. To further motivate the heterogeneous information network approach in our setting, we provide a specific example of how real investigators and journalists solve the problem of determining which entities to add to smart sanctions lists or to treat as investigation targets. This example is from a book written by a former member of the United Nations Panel of Experts on Sanctions Against North Korea \cite{Furukawa2017}. The Panel of Experts is in charge of the investigation to determine possible candidates to include in the United Nations' smart sanctions lists. In Fig.~\ref{fig:investigation}, we provide a simplified network that illustrates how the expert conducted his investigation. In 2008, the Japanese police force exposed one firm, called X, that was attempting to export luxury goods from Japan to North Korea (Fig.~\ref{fig:investigation}). The export of luxury goods to North Korea is against United Nations sanctions and thus is illegal in Japan. It is worth emphasizing that only adding firm X to the smart sanctions list was not sufficient to ban all the illegal export activities. There could have been other firms involved in these illegal exports, and the goal was to include all of them. This motivated the expert to investigate further. Firm X was said to manage several other vessels, one of which was held by a firm in a tax haven (i.e., firm A). This company's contact information was directed to firm B, which interestingly had the same registered address as company X. This raised suspicion of these firms (i.e., firm A and B) and further supporting investigations were performed. 
Furthermore, firm X had a partnership with firm C, which was using the vessels that were involved in the 2008 arrest. These vessels were owned by firm D, which raised suspicion that firm D was possibly also heavily involved in the illegal activities. Initially, the expert also thought of the possibility that firm D was involved just by accident. However, it turned out that partnership firm C had person P as its board member, who owned another firm, E, of which one of the principal shareholders was firm D, which was under suspicion. Moreover, firm D and firm E happened to have the same board member, Q, which further reinforced this suspicion. As is clear from this example, investigators and journalists attempt to track suspicious patterns by manually inspecting information from several sources (i.e., in this case, vessel information, shareholder information, firm relational information, and registry information) to narrow down their list of targets. 
However, investigating each entity manually, as the expert above did, might not be a reasonable approach when we have a large number of entities to monitor. Specifically, in the finance domain, there are cases when we need to invest on a global scale for a more diversified portfolio. There were 46,583 officially listed domestic firms worldwide in 2017 \cite{WFE2017}, and monitoring them on a global scale undoubtedly requires the development of machine-assisted methods. This requirement motivates us to develop our machine-assisted heterogeneous information network approach. Many studies exist in data mining regarding building a heterogeneous information network by gathering information from various sources \cite{Hofmann2017}. Recent prominent work includes that of Google \cite{Dong2014} and Wikipedia's DBpedia \cite{Auer2007}, which are used for search engine optimization. Using web-based data, these databases are expanding rapidly. Some researchers even claim that the knowledge graph should be the default data model for learning heterogeneous knowledge \cite{Wilcke2017}. In recent years, there has been a wide variety of both theoretical~\cite{Sun2013,Wang2018} and applied research~\cite{Chen2016,Cao2018} that focuses on using a heterogeneous information network. See \cite{Nickel2016, Wang2017Review} for excellent overviews. There are also studies that focus on using information from multiple (multimodal) sources, not limited to the heterogeneous information network structure \cite{Hu2018}. However, the full social impact of such an approach is yet to be known. Our work is another line of applied research that follows this trend to show that information concerning firms worldwide can be mapped into one heterogeneous information network, and a machine-assisted method can learn patterns that can predict the occurrence of firms appearing in investment exclusion lists maintained by professional institutions. Our contribution is summarized as follows. 
\begin{itemize} \item We propose a new social impact problem called list prediction using a heterogeneous information network, which has a significant impact on risk management and ESG investing \cite{Sherwood2018}. \item We propose a new model based on label propagation that can exploit the heterogeneous information stored in the network to answer the list prediction problem. \item We tested our models using a vast real-world heterogeneous information network that was assembled using seven professionally curated datasets and two open datasets, resulting in a total of approximately 50 million nodes and 400 million edges. Our investment exclusion lists are based on negative news stories from January 2012 to May 2018 and cover 35,000 firms around the globe. We thus believe that this dataset is sufficient to judge the validity of our approach in real-world settings. \item Comparing with state-of-the-art methods with and without the network, we show that the predictive accuracy is substantially improved when using our model with heterogeneous information. \item Not only does our model perform well in terms of predictive accuracy, but it is also interpretable. \end{itemize} The remainder of the paper is organized as follows. In the next section, we briefly provide an overview of our datasets, which we use throughout the paper. We first review our negative news investment exclusion list data. We also present direct observations that show that negative media coverage has an impact on financial returns, thereby highlighting the importance of performing such predictions. We then describe all the datasets used in the paper to create our heterogeneous information network. In the model section, we describe the model used in this paper. We first describe our proposed model, which is a variation of label propagation using Jacobi iteration with edge weight learning. 
We then describe how to define the features for each edge using information in our heterogeneous information network. We also describe other state-of-the-art methods with and without using heterogeneous information. In the result section, we summarize the results. We show that our method substantially outperforms other methods. We then discuss the interpretability of our model. In the final section, we conclude the paper. \begin{figure*}[ht] \centering \begin{subfigure}{.3\textwidth} \centering \includegraphics[width=1.0\linewidth]{fig0.pdf} \caption{List prediction problem.} \label{fig:list} \end{subfigure}% \begin{subfigure}{.7\textwidth} \centering \includegraphics[width=1.0\linewidth]{fig1.pdf} \caption{Simplified network that illustrates the investigation.} \label{fig:investigation} \end{subfigure}% \caption{Schematic figures describing the problem.} \label{fig:schematic} \end{figure*} \section{Datasets} \subsection{Negative News Investment Exclusion List} \begin{table} \centering \begin{tabular}{lrr} Date & Name & Negative News Category \\ \hline Jan 3, 2012 & Firm A & Management \\ Jan 3, 2012 & Firm B & Product/Service \\ Jan 10, 2012 & Firm C & Regulatory \\ Jan 11, 2012 & Firm D & Workplace \\ \hline \end{tabular} \caption{Sample of the Dow Jones Adverse Media dataset.} \label{table:adme} \end{table} We use Dow Jones Adverse Media Entity data from January 2012 to May 2018 as our primary data. The data consist of the name of the firm, date of the news report, and 17 categories that classify the negative news report. Table~\ref{table:adme} shows a sample of the dataset. In Table~\ref{si:table:count}, we present the number of firms in each category for the 35,657 firms analyzed in this study from January 2012 to May 2018. ``No. of news stories'' denotes the total number of negative news stories for a particular investment exclusion list category. 
``Unique firms'' denotes the total number of unique firms tagged with a particular piece of negative news at least once. In the table, ``No. of news stories'' is sometimes much higher than ``Unique firms,'' which indicates that some firms are tagged with the same negative news report category multiple times. When we create our investment exclusion lists, we add each firm to the lists for the date of the initial news report. We also keep a record of the last date of the news report to determine whether there is an ongoing news report. We can see that, in addition to financial and environmental issues, there are other investment exclusion list categories, such as ``Product/Service,'' which records negative news, such as drug test failure and recall incidents, and ``Regulatory,'' which records when a firm is reported to have problems with regulatory issues. \begin{table*} \centering \resizebox{1.0\textwidth}{!}{ \begin{tabular}{lrrrrrrr} Group & No. of Samples & 0.01 & 0.05 & 0.5 & 0.95 & 0.99 & Skewness \\ \hline With news & 8685 & -0.233 & -0.102 & 0.005 & 0.098 & 0.191 & -6.521 \\ Without news & 1667616 & -0.218 & -0.109 & 0.005 & 0.110 & 0.207 & 0.165 \\ \hline \end{tabular}} \caption{Comparison of 10 trading day log returns with and without news events. Numbers in the first row indicate quantiles.} \label{table:return} \end{table*} \begin{table}[!htp] \centering \resizebox{0.8\textwidth}{!}{ \centering \begin{tabular}{lrr} \hline Category & No. 
of news stories & Unique firms\\ \hline Product/Service & 20,637 & 8,779 \\ Regulatory & 21,652 & 7,552 \\ Financial & 22,754 & 3,310 \\ Fraud & 14,489 & 3,997 \\ Workforce & 7,523 & 3,963 \\ Management & 11,220 & 4,063 \\ Anti-Competitive & 7,748 & 3,620 \\ Information & 6,401 & 2,873 \\ Workplace & 6,827 & 2,492 \\ Discrimination-Workforce & 6,477 & 2,426 \\ Environmental & 4,083 & 1,887 \\ Ownership & 4,124 & 2,615 \\ Production-Supply & 2,878 & 1,869 \\ Corruption & 3,621 & 1,578 \\ Human & 496 & 302 \\ Sanctions & 254 & 157 \\ Association & 247 & 90 \\ \hline \end{tabular} } \caption{Number of negative news reported from January 2012 to May 2018 among the 35,657 firms investigated in this study. ``No. of news stories'' represents the total number of negative news stories for a particular negative news category. ``Unique firms'' represents the total number of unique firms tagged with a particular negative news category.} \label{si:table:count} \end{table} To highlight the importance of predicting which firms appear in such a dataset, we first tested whether a negative news report had a financial impact by checking its relationship with a cross-section of returns using the following steps. For all US stocks in the dataset, we gathered their prices from January 2012 to May 2018: there were 1,139 such stocks in total. For each date in the negative news dataset, we used a 10-day window centered on a specified date. We then calculated the log return between the start and end dates of the 10-day window, and compared these returns with the 10 trading day log returns outside the window. Table~\ref{table:return} compares the distributions of stock returns with and without negative news reports. The quantiles and skewness show that the negative tail of the log-return distribution is more stretched than the positive tail, which agrees with previous studies that argued that negative information has a negative impact on financial returns. 
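The event-window computation described above can be sketched as follows. This is a hypothetical reconstruction in NumPy, not the authors' code; the function name and the simplification that only the single window centered on each news day is flagged as an event window are our assumptions.

```python
import numpy as np

def split_event_returns(prices, event_days, window=10):
    """Split `window`-trading-day log returns into those spanning a
    negative-news event and all the others (hypothetical sketch of the
    event study described in the text).

    prices     : 1-D array of daily closing prices for one stock
    event_days : iterable of trading-day indices with negative news
    """
    prices = np.asarray(prices, dtype=float)
    half = window // 2
    n = len(prices) - window  # number of complete windows
    # log return of the window starting at day i: log(p[i+window] / p[i])
    returns = np.log(prices[window:] / prices[:-window])
    is_event = np.zeros(n, dtype=bool)
    for t in event_days:
        start = t - half  # window centered on the event day
        if 0 <= start < n:
            is_event[start] = True
    return returns[is_event], returns[~is_event]
```

Comparing the two resulting samples (e.g., their quantiles, skewness, or a two-sample test) then reproduces the kind of analysis reported in Table 2.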
We also performed a two-sample Kolmogorov--Smirnov test for the null hypothesis that the two distributions are from the same distribution. This was rejected with a $p$-value below $10^{-6}$. \begin{table*} \centering \resizebox{\textwidth}{!}{ \centering \begin{tabular}{lrrrrr} Source & Date of acquisition & Node types & Relation types & No. of nodes & No. of edges \\ \hline Dow Jones Adverse Media Entity & Dec 2016 & Firm & Location, Homepage & 132,127 & 390,320 \\ Dow Jones State-Owned Companies & Dec 2016 & State-owned firms & VIP, Employee, Owner & 280,995 & 702,172 \\ Dow Jones Watchlist & Dec 2016 & VIPs, specially interested person & Social relations & 1,826,273 & 8,322,560 \\ Capital IQ Company Screening Report & Dec 2016 & Firms & Buyer-seller, borrower etc & 505,789 & 2,916,956 \\ FactSet & Dec 2015 & Firm, goods, industry & Parent-child firm, Issue Stock & 613,422 & 8,213,225 \\ FactShip & Jan 2017 & Firm, goods, invoice etc & Overseas trade etc & 16,137,550 & 36,345,381 \\ Reuters Ownership & Dec 2016 & Owners, stocks & Issue, own & 1,560,544 & 121,769,151 \\ Panama papers & Jan 2017 & Entities, officers & Shareholder of, director of & 888,630 & 1,371,984 \\ DBpedia & Apr 2016 & Various & Various & 35,006,127 & 249,429,771 \\ \hline \end{tabular}} \caption{Summary of the dataset used in this study.} \label{table:datasets} \end{table*} \subsection{Heterogeneous Information Network} In addition to negative news information, the Dow Jones Adverse Media Entity data contains basic information about the location and web domain of each firm. However, this information is not sufficient to predict investment exclusion lists. Hence, our strategy is to assemble data from other widely used professionally curated sources in the form of a heterogeneous information network. Table~\ref{table:datasets} summarizes the dataset used in the paper. We note several points about the data. 
First, to remove duplicates when combining node information from several sources, we did not consider only the name of the firm. In addition to name similarity, we determined two firms from different datasets to be the same if any of the following information was precisely the same: (i) their homepage information, (ii) the longitude and latitude information of their addresses, or (iii) their stock symbol. We manually inspected our strategy and found that it led to a small number of ``false positive'' errors (i.e., incorrectly identifying different nodes as duplicates), but to a large number of ``false negative'' errors (i.e., missing nodes that are duplicates). This was because we could not remove duplicate firms that did not have a homepage, address, or stock symbol information. As a robustness check of our results, we tested several variations of this strategy, varying the parameters that govern name similarity and excluding either (i), (ii), or (iii), and found that all of them provide results similar to those described in the present paper. 
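The deduplication rule above (name similarity plus at least one exact match on homepage, geolocation, or stock symbol) can be sketched as a simple predicate. This is a hypothetical illustration: the dict-based record format and the 0.9 name-similarity threshold are our assumptions, since the paper only reports that several parameter settings gave similar results.

```python
from difflib import SequenceMatcher

def same_firm(a, b, name_threshold=0.9):
    """Merge two firm records when their names are similar AND at least
    one exact key matches: (i) homepage, (ii) longitude/latitude of the
    address, or (iii) stock symbol."""
    name_sim = SequenceMatcher(None, a["name"].lower(), b["name"].lower()).ratio()
    if name_sim < name_threshold:
        return False
    exact_keys = ("homepage", "lat_lon", "symbol")
    # a missing (None) key never counts as a match
    return any(a.get(k) is not None and a.get(k) == b.get(k) for k in exact_keys)
```

Records lacking all three exact keys are never merged, which reproduces the ``false negative'' behavior the text describes for firms without a homepage, address, or stock symbol.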
\begin{table}[thbp] \centering \begin{tabular}{lrr} \hline Rank & Relation & Number \\ \hline 1 & located\_in & 2,723,162 \\ 2 & customer & 717,019 \\ 3 & supplier & 713,434 \\ 4 & own\_stock & 493,316 \\ 5 & belongs\_to\_industry & 359,425 \\ 6 & strategic\_alliance & 348,352 \\ 7 & creditor & 339,184 \\ 8 & receive\_goods & 330,311 \\ 9 & send\_goods & 319,292 \\ 10 & issue\_stock & 187,498 \\ 11 & make\_products & 181,574 \\ 12 & competitor & 174,487 \\ 13 & part\_of\_industry & 172,621 \\ 14 & borrower & 153,203 \\ 15 & domain & 131,153 \\ 16 & distributor & 116,262 \\ 17 & subsidiary & 107,119 \\ 18 & parent-company & 107,117 \\ 19 & associated-person & 100,699 \\ 20 & international\_shipping & 95,050 \\ 21 & associate & 72,685 \\ 22 & landlord & 62,904 \\ 23 & http://dbpedia.org/ontology/party & 55,653 \\ 24 & employer & 47,901 \\ 25 & employee & 47,184 \\ \hline \end{tabular} \caption{The top 25 relation types.} \label{table:relation} \end{table} Second, half of the relational information in our datasets does not include a timestamp. This is problematic in the sense that it is difficult to ensure that no future information is used when we perform our prediction. To avoid any information from the future contaminating our heterogeneous information network and to achieve a fair evaluation, we only predict future occurrences of negative news after February 1, 2017, which is after the latest date for which we acquired data (Table~\ref{table:datasets}). Finally, for the relational information in the Dow Jones Adverse Media Entity dataset, we use the December 2016 version and update only the negative news information to May 2018. We also removed relation types that appeared too many times in our dataset to avoid computational overload. These relation types include ``http://dbpedia.org/ontology/wikiPageWikiLink'' and ``http://purl.org/dc/terms/subjects,'' which create approximately 175 million and 22 million edges, respectively. 
We also ignored relation types that only appeared in the dataset fewer than 100 times. Furthermore, some of the edges in our dataset had multiple timestamps, and we unified them into one relationship. These include relation types such as ``own stocks'' and ``sends goods,'' of which the former are on a quarterly basis, whereas the latter includes the timestamp information of when they passed through US customs. For ``own stocks,'' we further restricted the data to relationships with at least 5\% ownership. After the removal of duplicates and data cleaning, a total of approximately 3.7 million nodes and 9.1 million edges with 216 relation types remained. Table~\ref{table:relation} shows the top 25 relation types in our dataset. Many relation types connect the firms, but there are also relation types, such as those for (i) associations and employees, which relate firms to people; (ii) own stocks, which relates firms or individuals to a stock symbol; and (iii) domain, which relates firms and individuals to a homepage. Because our investment exclusion targets are firms that are either publicly listed or closely related to publicly listed firms, we restricted our prediction targets to firms in the Dow Jones Adverse Media Entity dataset that had at least one relational link to another prediction target. We call the network of our prediction targets the core network. The core network is a weighted undirected network $G=(V, E, W)$ that consists of a set of nodes $V$, a set of edges $E$, and edge weights $W$. We assume that there is an edge between two nodes in the core network if there is at least one relation type that connects the two nodes. There are 35,657 firms with 322,138 edges in the core network. We restrict our attention to the core network because we only have limited information about firms outside this network. 
Restricting our focus to the core network strikes a reasonable balance between improving the ``reach''~\cite{Wan2015} of our prediction while assuring that we have sufficient information for prediction. We also note that the code of the present paper will be made available on the author's website. \section{Model} \subsection{Label Propagation Model} Using the core network defined in the last section, we define a non-negative weight function $f_{\theta}:X \rightarrow [0,1]$, where $X$ denotes the set of features for edge $i,j$ extracted from the heterogeneous information network. We define $f_{\theta}$ to be a simple multilayer perceptron with 30 hidden units and a sigmoid layer for our output function, where $\theta$ denotes the parameters of the model. We combined the core network defined above with the indicator label of each investment exclusion list category using a variation of the label propagation model with edge weight learning using Jacobi iteration~\cite{Chapelle2010}. Our model is similar in spirit to a supervised random walk~\cite{Backstrom2011}; however, instead of a directed network, we focus on the undirected case. Our strategy is to split the nodes into source and target nodes depending on the date of the last negative news report. We trained our model to minimize the loss of predicting the labels of our target nodes. The exact steps connecting $X$, the set of features for edge $i,j$, to the loss are described in Algorithm~\ref{alg1}. Note that our model is not exactly a label propagation model because we set $D_{ii}=\Sigma_{j}1_{ij \in E}$ instead of $D_{ii}=\Sigma_{j}w_{ij}$. The diagonal dominance condition \cite{Chapelle2010} that ensures that the Jacobi iteration converges still holds because $\Sigma_{j}1_{ij \in E} \geq \Sigma_{j}w_{ij}$, which results from the fact that we defined $0 \leq w_{ij} \leq 1$. 
Note that our model is exactly equivalent to classic label propagation when all $w_{ij}$ equal $1$; however, after learning the edge weights, the spectral radius of $A^{-1}W$ becomes smaller than in the usual label propagation, which leads the model to focus on propagating labels to nearby nodes. After learning the parameters of the model, we consider both the source and target nodes as known labels and predict the future occurrence of negative news reports, for firms that had no such news reports earlier in the dataset, from the last date of the training data (i.e., February 1, 2017) to the end of the dataset (i.e., May 31, 2018). The duration that separates target nodes from source nodes in the training data was set to 31 days before the last date of the training data for most of the investment exclusion list categories for which we had sufficient negative news report information, and 182 days for categories with less information (e.g., sanction, human, and association). Note that we use the timestamp information to separate the source nodes and target nodes used for training. More aggressive use of the timestamp information is possible, but this is left for future work. We have also performed a robustness check, varying the last date of the training data to August 1, 2017, and report results obtained by eliminating the first year (i.e., January 1, 2012, to December 31, 2012) from the dataset. We obtain results very similar to those shown in the current paper (these checks are reported in the supplementary material).
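To make the propagation step concrete, the following pure-Python sketch runs the Jacobi iteration $Y^{t+1}=A^{-1}(WY^{t}+Y^{0})$ with $D_{ii}=\Sigma_{j}1_{ij \in E}$ and $A_{ii}=I_{l}(i)+D_{ii}$ on a hypothetical four-node path graph, holding the edge weights fixed at one. Learning $w_{ij}=f_{\theta}(x_{ij})$ is omitted for brevity, so this is an illustrative sketch rather than the authors' implementation.

```python
# Jacobi iteration for the label propagation update.
# W: symmetric weight matrix (here all edge weights fixed at 1);
# labeled: marks source nodes with known labels; y0: holds those labels.

def propagate(W, labeled, y0, n_iter=200):
    n = len(W)
    degree = [sum(1 for w in W[i] if w > 0) for i in range(n)]        # D_ii = sum_j 1_{ij in E}
    A = [(1.0 if labeled[i] else 0.0) + degree[i] for i in range(n)]  # A_ii = I_l(i) + D_ii
    y = list(y0)
    for _ in range(n_iter):
        # Y^{t+1} = A^{-1} (W Y^t + Y^0); A is diagonal, so invert element-wise.
        y = [(sum(W[i][j] * y[j] for j in range(n)) + y0[i]) / A[i] for i in range(n)]
    return y

# Path graph 0-1-2-3: node 0 is a known "excluded" firm (label 1),
# node 3 is a known clean firm (label 0); nodes 1 and 2 are unlabeled.
W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
labeled = [True, False, False, True]
y0 = [1.0, 0.0, 0.0, 0.0]
scores = propagate(W, labeled, y0)
# Converges to [0.8, 0.6, 0.4, 0.2]: scores decay with distance from the positive label.
```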
\begin{algorithm} \caption{Label propagation with edge weight learning} \label{alg1} \begin{algorithmic} \STATE (1) For each edge in the core network, set $w_{ij}=f_{\theta}(x_{ij})$, where $x_{ij}$ denotes the features of the edge extracted from the network. \STATE (2) Compute the diagonal degree matrix $D$ using $D_{ii}=\Sigma_{j}1_{ij \in E}$. \STATE (3) Compute $A_{ii}=I_{l}(i)+D_{ii}$, where $I_{l}(i)$ indicates whether $i$'s label is known. \STATE (4) Initialize $Y^{0} = (y_{1},...,y_{l},0,...,0)$, where $l$ is the number of known labels. \STATE (5) Iterate $Y^{t+1}=A^{-1}(WY^{t}+Y^{0})$ until convergence. \STATE (6) Calculate the loss as the mean squared error between $Y^{target}=(y_{l+1},...,y_{l+m},0,...,0)$ and $Y^{T}=(y_{l+1}^{T},...)$. \STATE (7) Update $\theta$ in $f_{\theta}$ using gradient descent. \STATE (8) Repeat until convergence. \end{algorithmic} \end{algorithm} \subsection{Edge Features} For our model to work, we need to define the features for each edge. We use as features the occurrence of relation types in the core network, the paths in the overall heterogeneous information network that connect the two nodes~\cite{Lao2011}, or the relation types along the path segments that connect the two nodes. We denote these models as LP-core-relation, LP-path, and LP-path-segment, respectively, where LP stands for ``label propagation.'' Instead of using the raw number counts of each relation type or path, we use a binary indicator of whether a specific feature exists. To be more specific, suppose that edge $A,B$ has the following two direct relations and two paths between them: (A,supplies,B), (A,strategic alliance,B), (A,is in,c,is in,B), and (A,makes,x,is made of,y,makes,B). For LP-core-relation, we only pay attention to (A,supplies,B) and (A,strategic alliance,B), and hence use $[0,...,1,0,1,0...]$ as our feature, where the two $1$'s correspond to the supplies and strategic alliance relation types.
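The LP-core-relation feature construction just described can be sketched as follows; the five-relation vocabulary is a hypothetical stand-in for the 216 relation types in the dataset.

```python
# Binary indicator over the relation-type vocabulary (LP-core-relation):
# counts are ignored; a coordinate is 1 iff that relation type directly
# connects the two endpoint firms of the edge.

vocab = ["creditor", "is in", "makes", "strategic alliance", "supplies"]

def core_relation_features(direct_relations, vocab):
    """direct_relations: set of relation types directly linking the edge's endpoints."""
    return [1 if r in direct_relations else 0 for r in vocab]

# Edge A,B from the example: direct relations "supplies" and "strategic alliance";
# the paths through c and through x, y do not contribute to this feature set.
x_ab = core_relation_features({"supplies", "strategic alliance"}, vocab)
# -> [0, 0, 0, 1, 1]
```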
LP-path works similarly, but instead of creating a one-hot vector for each relation type, we create a one-hot vector for each path. We restrict our attention to the top 3,000 paths found with a length no larger than 4 for computational reasons. We also ignore the direction of each relation type. Moreover, we discard paths that connect two nodes that are already connected by shorter paths. Using our example above, paths with lengths 1 and 2 are not affected by this restriction but, starting from paths of length 3, there might be a path of length 3, such as (A,is in,c,alliance with,d,supports,B), that also connects A and B. We ignore these paths because node c already appears in a path of length 2 (i.e., (A,is in,c,is in,B)). We use this additional restriction to prevent super-nodes (e.g., industries) from contaminating our path features. Features in LP-path-segment are created by distinguishing relation types that occur along the path segments. This can be considered as a collapsed version of LP-path with relation-type one-hot vectors for each path segment. A naive implementation of this results in 10 segments for path lengths of up to 4. However, because the core network is undirected, we can exploit the symmetry and reduce the number of segments. For example, there is no difference between starting a path from A or starting from B in (A, is in,c, is in, B). Hence, we do not need to distinguish path segments for paths of length 2, for example, 2:1 and 2:2, but instead we could combine them, thereby creating only one feature of path length 2. We use path lengths of up to 4, and there are six possible path segments in total, which we denote by 1, 2, 3:1, 3:2, 4:1, and 4:2. \subsection{Other Models Compared} We compare our models with the following basic as well as state-of-the-art methods, both using and not fully using the heterogeneous information network. 
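The symmetry-collapsed segment labels (1, 2, 3:1, 3:2, 4:1, and 4:2) can be computed with a small helper. This is an illustrative sketch of the bookkeeping, with hypothetical function names, not the authors' code.

```python
# Map the 1-indexed hop position p in an undirected path of a given length
# to one of the six collapsed segments: hops p and length+1-p are equivalent
# because a path can be read from either endpoint.

def path_segment(length, position):
    if length == 1:
        return "1"
    if length == 2:
        return "2"                       # 2:1 and 2:2 collapse into one segment
    return f"{length}:{min(position, length + 1 - position)}"

def segment_features(path_relations):
    """path_relations: relation types along one path between the edge's endpoints.
    Returns the set of (relation type, segment) indicators the path lights up."""
    L = len(path_relations)
    return {(rel, path_segment(L, p)) for p, rel in enumerate(path_relations, start=1)}

# Path (A, makes, x, is made of, y, makes, B): the two "makes" hops sit at
# positions 1 and 3 of a length-3 path and collapse into segment 3:1.
feats = segment_features(["makes", "is made of", "makes"])
# -> {("makes", "3:1"), ("is made of", "3:2")}
```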
For the basic model that does not fully use the heterogeneous information network, we take the country and industry categories and the node degree (cf. Table~\ref{table:adme}), transform the former two into one-hot vectors, and use a random forest model for classification. We call this model the ``random forest.'' For a model that uses the network but not edge weight learning, we directly perform label propagation on the core network. We call this the ``LP-fixed'' model. We further compare our method with methods that can incorporate multi-category correlation. Many previous studies have combined multi-category correlation with label propagation~\cite{Wang2016}. However, most of these methods are computationally very expensive, and hence we use the method of~\cite{Wang2016}, which turned out to be computationally reasonable. Note that the method of~\cite{Wang2016} uses a KNN graph, which is not available in our case. Instead, we use the core matrix and multiply it by an additional parameter to ensure that the spectral radius of the entire matrix is below one. We call this model ``LP-mult.''
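The feature vector behind the random forest baseline can be sketched as follows. Firms, countries, and industries here are hypothetical; in practice the resulting vectors would be fed to an off-the-shelf random forest classifier (e.g., scikit-learn's `RandomForestClassifier`).

```python
# One-hot country and industry indicators plus the raw node degree,
# assembled per firm for the non-network baseline.

def baseline_features(firm, countries, industries):
    """firm: dict with 'country', 'industry', and 'degree' fields."""
    vec = [1 if firm["country"] == c else 0 for c in countries]
    vec += [1 if firm["industry"] == i else 0 for i in industries]
    vec.append(firm["degree"])
    return vec

countries = ["DE", "JP", "US"]
industries = ["banking", "manufacturing", "retail"]
x = baseline_features({"country": "JP", "industry": "retail", "degree": 7},
                      countries, industries)
# -> [0, 1, 0, 0, 0, 1, 7]
```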
\section{Results} \subsection{Quantitative Comparison} \begin{figure}[thp] \centering \includegraphics[width=0.6\linewidth]{edgeweight.pdf} \caption{Normalized histogram for the edge weights of the ``Product/Service'' category for LP-path-segment.} \label{si:fig:edge_weight} \end{figure} \begin{figure*} \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1.0\linewidth]{ComparisonAPNew.pdf} \caption{AUC-PR} \label{fig:aucpr} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=1.0\linewidth]{ComparisonROCNew.pdf} \caption{AUC-ROC} \label{fig:aucroc} \end{subfigure} \caption{Comparison of predictive performance for random guessing (black inverted triangles), random forest (purple triangles), LP-fixed (light-blue squares), LP-mult (green circles), LP-core-relation (blue stars), LP-path (orange diamonds), and LP-path-segment (red crosses).} \label{fig:perform} \end{figure*} Our prediction problem is a standard binary classification problem (whether a firm would be added to the investment exclusion list from February 1, 2017, to May 31, 2018), so we use the area under the receiver operating characteristic curve (AUC-ROC) for evaluation. Because our labels are highly imbalanced, we also evaluate performance using the area under the precision-recall curve (AUC-PR)~\cite{Davis2006}. Because of space limitations, the results are shown in graphical form (see Fig.~\ref{fig:perform}). We first note that there is some predictive power in simply performing label propagation on the core network (i.e., LP-fixed). However, its performance is slightly worse than that of the random forest baseline using country and industry indicators. The performance of the network approach improves when the adaptive edge weighting scheme is used: LP-core-relation performs better than LP-fixed almost all the time.
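The two evaluation metrics can be computed from scratch in a few lines; the hand-rolled sketch below uses the rank formulation of AUC-ROC and average precision as a summary of the precision-recall curve, on hypothetical scores.

```python
# AUC-ROC via the Mann-Whitney statistic: the probability that a random
# positive is scored above a random negative (ties count one half).

def auc_roc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Average precision: precision accumulated at each positive in score order,
# a standard summary of the precision-recall curve for imbalanced labels.

def average_precision(scores, labels):
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            ap += hits / rank
    return ap / sum(labels)

scores = [0.9, 0.8, 0.3, 0.2]
labels = [1, 0, 1, 0]
# auc_roc -> 0.75; average_precision -> (1/1 + 2/3) / 2 = 5/6
```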
It is possible that LP-path performs worse than LP-core-relation because we only use the top 3,000 paths for computational reasons. LP-mult does not seem to improve performance when compared with LP-fixed. Whether this originates from the particular algorithm used or from the fact that incorporating multi-category correlation adds little information needs further investigation. Finally, LP-path-segment performs substantially better, outperforming all the other methods for every category compared in this paper. To summarize, our results show that using the information stored in the heterogeneous information network leads to substantially better predictive accuracy. For completeness, in Fig.~\ref{si:fig:edge_weight}, we provide a normalized histogram of the learned edge weights for LP-path-segment when predicting the ``Product/Service'' category. We see that our algorithm tends to separate the edge weights into values of either one or zero. \subsection{Interpretability} To understand what our models have learned, we perform partial dependence analysis on our learned model~\cite{Hastie2001}. However, because the features used by LP-path-segment are highly correlated, calculating the importance measure for each feature might not be a reasonable approach. Hence, we first reduce the dimensionality of the feature space to 50 using a standard binary nonnegative matrix factorization (BNMF) technique~\cite{Zitnik2012} and then perform the usual partial dependence analysis along the basis vectors of the matrix obtained by the BNMF method. The BNMF finds similar relation types among the different path segments that can be aligned to make an interpretation of the results possible. Typically, the sample standard deviation of the fitted values of the partial dependence plot is used as a measure of feature importance~\cite{Greenwell2017}.
However, because our feature matrix is binary, we instead focus on the absolute difference of the response at the 0.99 and 0.01 quantiles of the coefficient vector that corresponds to each basis vector. We also consider the average value of the importance measure, repeating the training and partial dependence analysis 30 times using different initial parameters to mitigate fluctuations that result from the learning process. Table~\ref{table:topfeature} shows the top five important features learned for the ``Product/Service'' category. Basis vector 4 seems to have the most negative effect, whereas basis vector 13 seems to have the most positive effect on the weights. Note that features in higher path segments are likely to have a higher value in the basis vector because our feature matrix is a binary matrix taking the value one if there is at least one relation type in a particular path segment. Thus, we must pay attention to the relation type in each segment when interpreting the result, and in Fig.~\ref{fig:pdp:product} we report the top relation types for each path segment for basis vector 4 and basis vector 13. Whereas the path segments of basis vector 4 include more relation types related to licensing, basis vector 13, which has a positive effect, focuses more on the buyer-seller and partnership-manufacture relations. Because ``Product/Service'' is more closely related to news about the specific products of a firm, such as recall incidents and drug test failures, our model learned to give those relation types in the path segments more weight. In Table~\ref{si:table:topfeature:financial}, we show the top five important features for the ``Financial'' category. All of the top five features have a positive effect on the edge weights, so we focus on the top two and report the analysis for basis vector 34 (Fig.~\ref{fig:pdp34}) and basis vector 10 (Fig.~\ref{fig:pdp10}).
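The quantile-difference importance measure described above can be sketched as follows; the response function, basis vector, and coefficient sample are hypothetical toy stand-ins for the learned weight function and a BNMF component.

```python
# Importance of one BNMF basis direction: set its coefficient to the 0.01 and
# 0.99 empirical quantiles and take the absolute difference of the response.

import math

def quantile(xs, q):
    """Linear-interpolation quantile of a sample (numpy's default scheme)."""
    xs = sorted(xs)
    pos = q * (len(xs) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])

def basis_importance(f, basis, coeffs):
    lo, hi = quantile(coeffs, 0.01), quantile(coeffs, 0.99)
    return abs(f([hi * b for b in basis]) - f([lo * b for b in basis]))

response = lambda x: 1.0 / (1.0 + math.exp(-sum(x)))  # stand-in for the model
coeffs = [i / 100 for i in range(101)]                # observed coefficients
imp = basis_importance(response, [1.0, 0.0, 1.0], coeffs)
# roughly sigmoid(2 * 0.99) - sigmoid(2 * 0.01)
```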
For basis vectors 34 and 10, we see that they focus more on creditor-borrower relationships. Because ``Financial'' negative news is reported when a firm is in a serious financial condition or when there are ownership issues, it makes sense that these relation types are at the top and have a positive effect on the edge weights. Because reporting the names of the firms in our predictions could be prejudicial, we refrain from doing so in the present paper; however, we have examined several examples from our predictions and confirmed the validity of our approach. \begin{table} \centering \begin{tabular}{lrrr} \toprule Rank & Basis & $E_{\hat{\theta} }[f(x_{0.99})-f(x_{0.01})]$ & $|E_{\hat{\theta} }[f(x_{0.99})-f(x_{0.01})]|$\\ \midrule 1 & 4 & -0.096 & 0.096\\ 2 & 26 & -0.070 & 0.070\\ 3 & 30 & -0.057 & 0.057\\ 4 & 13 & 0.040 & 0.040\\ 5 & 7 & 0.039 & 0.039\\ \bottomrule \end{tabular} \caption{Top five important features for the ``Product/Service'' category.} \label{table:topfeature} \end{table} \begin{figure*} \centering \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=.95\linewidth]{pdp4.pdf} \caption{Basis vector 4 (license-licensee)} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=.95\linewidth]{pdp13.pdf} \caption{Basis vector 13 (buyer-seller)} \label{fig:sfig2} \end{subfigure} \caption{Comparison of basis vector 4 and basis vector 13. The dotted vertical lines divide each path segment. Because there are relation types that do not appear in some path segments, the total number of features is 526 instead of 1,296 ($216 \times 6$). Peaks in basis vector 4: (a) in-licensing, (b) in-licensing, (c) in-licensing, (d) out-licensing, (e) distributor, (f) in-licensing, (g) out-licensing, and (h) customer.
Peaks in basis vector 13: (a) customer, (b) partner-manufacture, (c) international shipping, (d) receive goods, (e) international shipping, (f) international shipping, (g) receive goods, and (h) franchise.} \label{fig:pdp:product} \end{figure*} \begin{table}[tbhp] \centering \begin{tabular}{lrrr} \toprule Rank & Basis & $E_{\hat{\theta} }[f(x_{0.99})-f(x_{0.01})]$ & $|E_{\hat{\theta} }[f(x_{0.99})-f(x_{0.01})]|$\\ \midrule 1 & 34 & 0.090 & 0.090 \\ 2 & 10 & 0.089 & 0.089\\ 3 & 7 & 0.089 & 0.089 \\ 4 & 21 & 0.088 & 0.088\\ 5 & 20 & 0.081 & 0.081\\ \bottomrule \end{tabular} \caption{Top five important features for the ``Financial'' category.} \label{si:table:topfeature:financial} \end{table} \begin{figure*} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Financialpdp34.pdf} \caption{Basis vector 34 (creditor borrower)} \label{fig:pdp34} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=.95\linewidth]{Financialpdp10.pdf} \caption{Basis vector 10 (creditor borrower)} \label{fig:pdp10} \end{subfigure} \caption{Comparison of basis vector 34 and basis vector 10. The dotted vertical lines divide each path segment. Because there are relation types that do not appear in some path segments, the total number of features is 526 instead of 1,296 ($216 \times 6$). Peaks in basis vector 34: (a) creditor, (b) strategic alliance, (c) borrower, (d) creditor, (e) borrower, (f) tenant, (g) landlord, and (h) creditor.
Peaks in basis vector 10: (a) borrower, (b) strategic alliance, (c) creditor, (d) borrower, (e) creditor, (f) creditor, (g) borrower, and (h) borrower.} \label{fig:pdp:financial} \end{figure*} \section{Conclusion} In this paper, using a comprehensive dataset of negative news investment exclusion list data and a heterogeneous information network among 35,657 global firms assembled from professional data sources, we showed that performance in predicting which firms are likely to be added to an investment exclusion list improves in a striking manner when we exploit the vast amount of information stored in the heterogeneous information network. Our work suggests a machine-assisted method that exploits the heterogeneous information contained in big data to monitor firms on a global scale for better risk management. We also showed that our model is interpretable. Fig.~\ref{fig:perform} demonstrates the remarkable outperformance of our methods, which requires some explanation. First, when a problem occurs at a firm, the firms related to it, or similar firms, are likely also in trouble. The similarity of firms can be quantified by closeness in the heterogeneous information network, which includes a variety of information concerning each firm. Moreover, instead of using the raw closeness measure that our heterogeneous information network suggests, we adjust the closeness measure using past patterns, which results in high predictive performance. Perhaps more importantly, when a problem catches the eye of the public, investigative journalists search nearby firms for follow-up stories. By doing so, they can claim that the first problem they reported is not confined to one firm but is a more general issue in need of more attention. Hence, it might not be surprising that our machine-assisted method works. The misclassifications of our model can be organized into four categories, as shown in Table~\ref{table:error}.
The inaccuracy that results from our model or data limitations could produce both false positive and false negative errors. False negatives include exogenous events that are impossible to predict with our approach of simply learning past negative news patterns. Exogenous events always constitute an intrinsic limit to prediction methods. However, on the positive side, there might be cases of false positive misclassifications that correspond to unrealized or uncovered events. From a journalist's point of view, the list of firms in this category might be the next possible target for further investigation. From a firm's point of view, our prediction score might be a good diagnostic to follow in order to take timely action for fair media coverage using firm-initiated press releases and investor relations firms~\cite{Solomon2012}. Moreover, instead of using the media labels as the data vendor provides them, we could investigate the text further to pick up news that had a significant impact (e.g., arrests, lawsuits) rather than just shallow allegations. We could also take into account node information (e.g., firm size) to focus on firms that are too big to fail, or on the banking sector, for which the effect of negative media coverage is already well known~\cite{Birchler2016}. \begin{table}[H] \resizebox{1.0\textwidth}{!}{ \centering \begin{tabular}{lrrr} \toprule {} & {} & \multicolumn{2}{c}{Real world}\\ \multicolumn{2}{c}{} & False & True\\ \midrule Prediction & False & Correct & FN: Model error/Data limit\\ {} & {} & {} & Exogenous events\\ {} & True & FP: Model error/Data limit & Correct \\ {} & {} & Not realized/Not covered & {} \\ \bottomrule \end{tabular}} \caption{Model prediction and the real world. FP denotes false positive and FN denotes false negative.} \label{table:error} \end{table} \bibliographystyle{plain}
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <!-- NewPage --> <html lang="pt"> <head> <!-- Generated by javadoc (version 1.7.0_79) on Mon Jun 22 15:50:37 BRT 2015 --> <title>Uses of Class org.sdnplatform.sync.thrift.Store</title> <meta name="date" content="2015-06-22"> <link rel="stylesheet" type="text/css" href="../../../../../stylesheet.css" title="Style"> </head> <body> <script type="text/javascript"><!-- if (location.href.indexOf('is-external=true') == -1) { parent.document.title="Uses of Class org.sdnplatform.sync.thrift.Store"; } //--> </script> <noscript> <div>JavaScript is disabled on your browser.</div> </noscript> <!-- ========= START OF TOP NAVBAR ======= --> <div class="topNav"><a name="navbar_top"> <!-- --> </a><a href="#skip-navbar_top" title="Skip navigation links"></a><a name="navbar_top_firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../../../overview-summary.html">Overview</a></li> <li><a href="../package-summary.html">Package</a></li> <li><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Class</a></li> <li class="navBarCell1Rev">Use</li> <li><a href="../package-tree.html">Tree</a></li> <li><a href="../../../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../../../index-files/index-1.html">Index</a></li> <li><a href="../../../../../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li>Prev</li> <li>Next</li> </ul> <ul class="navList"> <li><a href="../../../../../index.html?org/sdnplatform/sync/thrift/class-use/Store.html" target="_top">Frames</a></li> <li><a href="Store.html" target="_top">No Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_top"> <li><a href="../../../../../allclasses-noframe.html">All Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = 
document.getElementById("allclasses_navbar_top"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip-navbar_top"> <!-- --> </a></div> <!-- ========= END OF TOP NAVBAR ========= --> <div class="header"> <h2 title="Uses of Class org.sdnplatform.sync.thrift.Store" class="title">Uses of Class<br>org.sdnplatform.sync.thrift.Store</h2> </div> <div class="classUseContainer"> <ul class="blockList"> <li class="blockList"> <table border="0" cellpadding="3" cellspacing="0" summary="Use table, listing packages, and an explanation"> <caption><span>Packages that use <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Package</th> <th class="colLast" scope="col">Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><a href="#org.sdnplatform.sync.internal.rpc">org.sdnplatform.sync.internal.rpc</a></td> <td class="colLast">&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="#org.sdnplatform.sync.thrift">org.sdnplatform.sync.thrift</a></td> <td class="colLast">&nbsp;</td> </tr> </tbody> </table> </li> <li class="blockList"> <ul class="blockList"> <li class="blockList"><a name="org.sdnplatform.sync.internal.rpc"> <!-- --> </a> <h3>Uses of <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a> in <a href="../../../../../org/sdnplatform/sync/internal/rpc/package-summary.html">org.sdnplatform.sync.internal.rpc</a></h3> <table border="0" cellpadding="3" cellspacing="0" summary="Use table, listing methods, and an explanation"> <caption><span>Methods in <a href="../../../../../org/sdnplatform/sync/internal/rpc/package-summary.html">org.sdnplatform.sync.internal.rpc</a> that return <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" 
title="class in org.sdnplatform.sync.thrift">Store</a></span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Modifier and Type</th> <th class="colLast" scope="col">Method and Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><code>static <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">TProtocolUtil.</span><code><strong><a href="../../../../../org/sdnplatform/sync/internal/rpc/TProtocolUtil.html#getTStore(java.lang.String,%20org.sdnplatform.sync.ISyncService.Scope,%20boolean)">getTStore</a></strong>(java.lang.String&nbsp;storeName, <a href="../../../../../org/sdnplatform/sync/ISyncService.Scope.html" title="enum in org.sdnplatform.sync">ISyncService.Scope</a>&nbsp;scope, boolean&nbsp;persist)</code> <div class="block">Allocate a thrift <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift"><code>Store</code></a> object for the current store</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><code>static <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">TProtocolUtil.</span><code><strong><a href="../../../../../org/sdnplatform/sync/internal/rpc/TProtocolUtil.html#getTStore(java.lang.String,%20org.sdnplatform.sync.thrift.Scope,%20boolean)">getTStore</a></strong>(java.lang.String&nbsp;storeName, <a href="../../../../../org/sdnplatform/sync/thrift/Scope.html" title="enum in org.sdnplatform.sync.thrift">Scope</a>&nbsp;scope, boolean&nbsp;persist)</code> <div class="block">Allocate a thrift <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift"><code>Store</code></a> object for the current store</div> </td> </tr> </tbody> </table> <table border="0" 
cellpadding="3" cellspacing="0" summary="Use table, listing methods, and an explanation"> <caption><span>Methods in <a href="../../../../../org/sdnplatform/sync/internal/rpc/package-summary.html">org.sdnplatform.sync.internal.rpc</a> with parameters of type <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Modifier and Type</th> <th class="colLast" scope="col">Method and Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><code>static <a href="../../../../../org/sdnplatform/sync/thrift/SyncMessage.html" title="class in org.sdnplatform.sync.thrift">SyncMessage</a></code></td> <td class="colLast"><span class="strong">TProtocolUtil.</span><code><strong><a href="../../../../../org/sdnplatform/sync/internal/rpc/TProtocolUtil.html#getTSyncValueMessage(org.sdnplatform.sync.thrift.Store)">getTSyncValueMessage</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store)</code> <div class="block">Get a partially-initialized <a href="../../../../../org/sdnplatform/sync/thrift/SyncValueMessage.html" title="class in org.sdnplatform.sync.thrift"><code>SyncValueMessage</code></a> wrapped with a <a href="../../../../../org/sdnplatform/sync/thrift/SyncMessage.html" title="class in org.sdnplatform.sync.thrift"><code>SyncMessage</code></a>.</div> </td> </tr> </tbody> </table> </li> <li class="blockList"><a name="org.sdnplatform.sync.thrift"> <!-- --> </a> <h3>Uses of <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a> in <a href="../../../../../org/sdnplatform/sync/thrift/package-summary.html">org.sdnplatform.sync.thrift</a></h3> <table border="0" cellpadding="3" cellspacing="0" summary="Use table, listing fields, and an explanation"> <caption><span>Fields in 
<a href="../../../../../org/sdnplatform/sync/thrift/package-summary.html">org.sdnplatform.sync.thrift</a> declared as <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Modifier and Type</th> <th class="colLast" scope="col">Field and Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">SyncValueMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncValueMessage.html#store">store</a></strong></code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">SyncRequestMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncRequestMessage.html#store">store</a></strong></code>&nbsp;</td> </tr> <tr class="altColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">SyncOfferMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncOfferMessage.html#store">store</a></strong></code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">RegisterRequestMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/RegisterRequestMessage.html#store">store</a></strong></code>&nbsp;</td> </tr> </tbody> </table> <table 
border="0" cellpadding="3" cellspacing="0" summary="Use table, listing methods, and an explanation"> <caption><span>Methods in <a href="../../../../../org/sdnplatform/sync/thrift/package-summary.html">org.sdnplatform.sync.thrift</a> that return <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Modifier and Type</th> <th class="colLast" scope="col">Method and Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">Store.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/Store.html#deepCopy()">deepCopy</a></strong>()</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">SyncValueMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncValueMessage.html#getStore()">getStore</a></strong>()</code>&nbsp;</td> </tr> <tr class="altColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">SyncRequestMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncRequestMessage.html#getStore()">getStore</a></strong>()</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">SyncOfferMessage.</span><code><strong><a 
href="../../../../../org/sdnplatform/sync/thrift/SyncOfferMessage.html#getStore()">getStore</a></strong>()</code>&nbsp;</td> </tr> <tr class="altColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">RegisterRequestMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/RegisterRequestMessage.html#getStore()">getStore</a></strong>()</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">Store.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/Store.html#setPersist(boolean)">setPersist</a></strong>(boolean&nbsp;persist)</code>&nbsp;</td> </tr> <tr class="altColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">Store.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/Store.html#setScope(org.sdnplatform.sync.thrift.Scope)">setScope</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Scope.html" title="enum in org.sdnplatform.sync.thrift">Scope</a>&nbsp;scope)</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></code></td> <td class="colLast"><span class="strong">Store.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/Store.html#setStoreName(java.lang.String)">setStoreName</a></strong>(java.lang.String&nbsp;storeName)</code>&nbsp;</td> </tr> </tbody> </table> <table border="0" cellpadding="3" cellspacing="0" summary="Use table, listing 
methods, and an explanation"> <caption><span>Methods in <a href="../../../../../org/sdnplatform/sync/thrift/package-summary.html">org.sdnplatform.sync.thrift</a> with parameters of type <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Modifier and Type</th> <th class="colLast" scope="col">Method and Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><code>int</code></td> <td class="colLast"><span class="strong">Store.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/Store.html#compareTo(org.sdnplatform.sync.thrift.Store)">compareTo</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;other)</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code>boolean</code></td> <td class="colLast"><span class="strong">Store.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/Store.html#equals(org.sdnplatform.sync.thrift.Store)">equals</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;that)</code>&nbsp;</td> </tr> <tr class="altColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/SyncValueMessage.html" title="class in org.sdnplatform.sync.thrift">SyncValueMessage</a></code></td> <td class="colLast"><span class="strong">SyncValueMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncValueMessage.html#setStore(org.sdnplatform.sync.thrift.Store)">setStore</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store)</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code><a 
href="../../../../../org/sdnplatform/sync/thrift/SyncRequestMessage.html" title="class in org.sdnplatform.sync.thrift">SyncRequestMessage</a></code></td> <td class="colLast"><span class="strong">SyncRequestMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncRequestMessage.html#setStore(org.sdnplatform.sync.thrift.Store)">setStore</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store)</code>&nbsp;</td> </tr> <tr class="altColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/SyncOfferMessage.html" title="class in org.sdnplatform.sync.thrift">SyncOfferMessage</a></code></td> <td class="colLast"><span class="strong">SyncOfferMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncOfferMessage.html#setStore(org.sdnplatform.sync.thrift.Store)">setStore</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store)</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colFirst"><code><a href="../../../../../org/sdnplatform/sync/thrift/RegisterRequestMessage.html" title="class in org.sdnplatform.sync.thrift">RegisterRequestMessage</a></code></td> <td class="colLast"><span class="strong">RegisterRequestMessage.</span><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/RegisterRequestMessage.html#setStore(org.sdnplatform.sync.thrift.Store)">setStore</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store)</code>&nbsp;</td> </tr> </tbody> </table> <table border="0" cellpadding="3" cellspacing="0" summary="Use table, listing constructors, and an explanation"> <caption><span>Constructors in <a href="../../../../../org/sdnplatform/sync/thrift/package-summary.html">org.sdnplatform.sync.thrift</a> with parameters of 
type <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a></span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colOne" scope="col">Constructor and Description</th> </tr> <tbody> <tr class="altColor"> <td class="colLast"><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/RegisterRequestMessage.html#RegisterRequestMessage(org.sdnplatform.sync.thrift.AsyncMessageHeader,%20org.sdnplatform.sync.thrift.Store)">RegisterRequestMessage</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/AsyncMessageHeader.html" title="class in org.sdnplatform.sync.thrift">AsyncMessageHeader</a>&nbsp;header, <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store)</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colLast"><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/Store.html#Store(org.sdnplatform.sync.thrift.Store)">Store</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;other)</code> <div class="block">Performs a deep copy on <i>other</i>.</div> </td> </tr> <tr class="altColor"> <td class="colLast"><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncOfferMessage.html#SyncOfferMessage(org.sdnplatform.sync.thrift.AsyncMessageHeader,%20org.sdnplatform.sync.thrift.Store,%20java.util.List)">SyncOfferMessage</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/AsyncMessageHeader.html" title="class in org.sdnplatform.sync.thrift">AsyncMessageHeader</a>&nbsp;header, <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store, java.util.List&lt;<a href="../../../../../org/sdnplatform/sync/thrift/KeyedVersions.html" title="class in 
org.sdnplatform.sync.thrift">KeyedVersions</a>&gt;&nbsp;versions)</code>&nbsp;</td> </tr> <tr class="rowColor"> <td class="colLast"><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncRequestMessage.html#SyncRequestMessage(org.sdnplatform.sync.thrift.AsyncMessageHeader,%20org.sdnplatform.sync.thrift.Store)">SyncRequestMessage</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/AsyncMessageHeader.html" title="class in org.sdnplatform.sync.thrift">AsyncMessageHeader</a>&nbsp;header, <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store)</code>&nbsp;</td> </tr> <tr class="altColor"> <td class="colLast"><code><strong><a href="../../../../../org/sdnplatform/sync/thrift/SyncValueMessage.html#SyncValueMessage(org.sdnplatform.sync.thrift.AsyncMessageHeader,%20org.sdnplatform.sync.thrift.Store,%20java.util.List)">SyncValueMessage</a></strong>(<a href="../../../../../org/sdnplatform/sync/thrift/AsyncMessageHeader.html" title="class in org.sdnplatform.sync.thrift">AsyncMessageHeader</a>&nbsp;header, <a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Store</a>&nbsp;store, java.util.List&lt;<a href="../../../../../org/sdnplatform/sync/thrift/KeyedValues.html" title="class in org.sdnplatform.sync.thrift">KeyedValues</a>&gt;&nbsp;values)</code>&nbsp;</td> </tr> </tbody> </table> </li> </ul> </li> </ul> </div> <!-- ======= START OF BOTTOM NAVBAR ====== --> <div class="bottomNav"><a name="navbar_bottom"> <!-- --> </a><a href="#skip-navbar_bottom" title="Skip navigation links"></a><a name="navbar_bottom_firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../../../overview-summary.html">Overview</a></li> <li><a href="../package-summary.html">Package</a></li> <li><a href="../../../../../org/sdnplatform/sync/thrift/Store.html" title="class in org.sdnplatform.sync.thrift">Class</a></li> <li 
class="navBarCell1Rev">Use</li> <li><a href="../package-tree.html">Tree</a></li> <li><a href="../../../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../../../index-files/index-1.html">Index</a></li> <li><a href="../../../../../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li>Prev</li> <li>Next</li> </ul> <ul class="navList"> <li><a href="../../../../../index.html?org/sdnplatform/sync/thrift/class-use/Store.html" target="_top">Frames</a></li> <li><a href="Store.html" target="_top">No Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_bottom"> <li><a href="../../../../../allclasses-noframe.html">All Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_bottom"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip-navbar_bottom"> <!-- --> </a></div> <!-- ======== END OF BOTTOM NAVBAR ======= --> </body> </html>
{ "redpajama_set_name": "RedPajamaGithub" }
1,261
It was thrilling to see these little ruffles float and rotate in the air today when they were photographed. I am entering this piece in a show; the deadline is the day after tomorrow. I hope the juror likes it. (I don't like being rejected). Submitting entries for juried shows is really hard for me. The technology needed to fit all the requirements is daunting. Thankfully I have wonderful help. A video is coming soon. It really looks nice as the components spin around independently.
{ "redpajama_set_name": "RedPajamaC4" }
3,023
Courtesy of Piet van der Pijl and Nin Cheun (Noteshobby). According to a communiqué dated 5 February 2019, the Banque Centrale de la République de Guinée introduced a new 20,000-franc note on 21 January 2019. The new note retains the overall design of the preceding issues (B338a), but is physically smaller, has varnish coating for improved durability, two pigeons printed with a color-changing optical effect, and a gold-to-green security thread. The new note will circulate in parallel with the preceding issues.

According to an AfricaTribune article dated 6 November 2018, the Banque Centrale de la République de Guinée intends to introduce a reduced-size 10,000-franc note as well as a 20,000-franc note with improved security features.

Like B339a, but new date (2017) and new signatures. Prefix AV.

Like B340a, but new date (2017) and new signatures. Prefix CK.

B341: Like B324, but new date, new signature, and new vignette at lower right front. Courtesy of Manuel Fernando Iglesias. Tan. Front: Woman with plaits; two pigeons taking flight; coat of arms; bank logo as registration device. Back: Map of Guinea; two dump trucks and steam shovel at open pit bauxite mine; stalk of bananas. Windowed security thread with demetalized GNF 1000. Watermark: Woman with plaits and electrotype RG 1000. Printer: Unknown. 139 x 68 mm. a. 2015. Signature 5. Prefix AF.

Courtesy of Hamid Kazemi (banknoteswholesale). Green. Front: Woman with headscarf; coat of arms. Back: Minehead; diamond; ceremonial headdress. Solid security thread. Watermark: Woman with headscarf, RG, and Cornerstones. Printer: (TDLR). 128 x 63 mm. a. 2015. Signature 5. Prefix NU. Like B328, but physically smaller.

Purple, green, red, and blue. Front: Woman with cimier hairstyle; two pigeons taking flight; coat of arms; bank logo as registration device. Back: Kinkon dam; mask. 3-mm red-to-green windowed security thread with demetalized GNF 5000. Watermark: Woman with cimier hairstyle, RG, and Cornerstones. Printer: (TDLR). 140 x 65 mm. a. 2015. Signature 5. Prefix AA. Intro: July 2015.

Courtesy of Oleksiy Danylenko (Alex Liberia). According to an article on Guinee24 dated 11 July 2015, the Banque Centrale de la République de Guinée plans to introduce a new 5,000-franc banknote by the end of July 2015. The new note is based upon the existing design (B334), but is reduced in size and has enhanced security features.

Blue, green, orange, and purple. Front: Guinean woman wearing head scarf; two pigeons taking flight; coat of arms. Back: Map of Guinea; hydroelectric dam in Kaleta; electricity transmission towers. 3-mm red-to-green windowed security thread with demetalized GNF 20000. Watermark: Guinean woman and electrotype 20000. Printer: (TDLR). 148 x 70 mm. a. 2015. Signature 5. Intro: 11.05.2015.

Courtesy of Ruslan Vaschenko, Claudio Marana, Cedrian Lopez-Bosch, and Fernando Iglesias. Different font in horizontal serial number; w/o Cornerstone watermarks. Printer: Unknown. Prefix KO.

1,000 francs, 1 MARS 2010. Like BCRG B33a, but 3-mm (not 4-mm) tall prefix letters in vertical serial number at right and different font in horizontal serial number at left (B33a, top; B33b, bottom). Also, Cornerstone watermarks removed, possibly indicating someone other than DLR printed the new variety.

5,000 francs, 2012. Like BCRG B30a (P41), but new date.

Courtesy of Hartmut Fraunhoffer (www.banknoten.de). According to a press release, on 9 July 2012 the Banque Centrale de la République de Guinée issued a new 10,000-franc (US$1.40) note dated 2012, similar to the preceding issues (BCRG B32), but red instead of green, and with a holographic patch to the right of the portrait on front.

500 francs (US$0.01), 2012. Like BCRG B28 (P39), but new date.

100 francs (US$0.01), 2012. Like BCRG B24 (P35), but new date and new prefix/serial number font. Courtesy of Claudio Marana and Thomas Augustsson.

The Guinea chapter of The Banknote Book is now available for individual sale and as a free download to subscribers. This 16-page catalog covers notes issued by the Gouvernement Général de l'A.O.F. - Colonie de la Guinée Française (Government General of French West Africa - Colony of French Guinea) from 1917 to 1920, the Banque de la République de Guinée (Bank of the Republic of Guinea) in 1958, and the Banque Centrale de la République de Guinée (Central Bank of the Republic of Guinea) from 1960 to present. Revised 22 June 2016.

At a press conference on 20 November 2010, Alhasanne Barry, governor of the Banque Centrale de la République de Guinée, unveiled new 1,000-, 5,000-, and 10,000-franc (US$0.15, 0.70, and 1.45, respectively) banknotes to commemorate the 50th anniversary of Guinean currency. In addition to the 50th anniversary logo on the watermark area, the new notes have enhanced security features, additional intaglio printing, and a varnish for additional durability. Courtesy of Abdullah Beydoun and banknoteshop@gmx.net.

I'm researching an interesting 1998 note variety for Guinea. As you can see, both 500-franc notes (Pick 36) shown below are dated 1998. However, the note with the AO prefix has a signature variety that's different from the signature combination on the note with the FB prefix. The odd thing about this discovery is that the second combo is found on all other denominations in the 1998 series, as well as the 1985 series before it, and the 2006-present series. Courtesy of Rui Manuel Palhares.

Green, blue, and brown. Front: Young girl; coat of arms; three shells and pineapple. Back: Field with trees; "Lady of Maali" rock formation on Mount Loura. 3-mm red-to-green windowed security thread with demetalized 10000 GNF. Watermark: Young girl, electrotype RG, and Cornerstones. Printer: TDLR (w/o imprint). 153 x 78 mm. 2008. Signature 4. This note is very similar to the 2007 type, but with subtle changes to the design on both sides.

P37b: 1,000 francs (US$0.20), 2006. Like Pick 37, but new date, Cornerstone watermarks, and smaller size. 139 x 68 mm.

P38b: 5,000 francs (US$0.90), 2006. Like Pick 38, but new date, Cornerstone watermarks, and smaller size. 151 x 76 mm.

500 francs (US$0.10), 2006. Like Pick 36, but new date, Cornerstone watermarks, solid security thread, full bleed color (no white borders), and smaller size. 133 x 63 mm.

10,000 francs (US$2.55), 2007. Issued June 11, 2007. Green, blue, and brown. Portrait of young girl at left, arms at center, three shells and pineapple at lower right. Field with trees in background at center, "Lady of Maali" rock formation on Mount Loura at right on back. Watermark of a young girl with the letters RG (Republic of Guinea), 3-mm red-to-green windowed security thread printed GNF 10000, registration device, UV printing, and signatures (unknown, MINISTRE DES FINANCES; Aboubakar Kagbe Toure, GOUVERNEUR BANQUE CENTRALE). 153 x 78 mm.
{ "redpajama_set_name": "RedPajamaC4" }
7,173
{"url":"https:\/\/www.udacity.com\/wiki\/math-glossary","text":"# Mathematics Glossary\n\nOften with mathematics, common words have different meanings which can be confusing. On top of that there is all the specialised vocabulary. This page aims to cover, in everyday language as well as more formally, all the mathematical vocabulary used.\n\n## Notation\n\n### Dot dot dot \\ldots\n\nWhen you see \\dots it means continue the same pattern. For instance 1,2,3 \\ldots, 10 means 1,2,3,4,5,6,7,8,9,10.\n\n### Braces { }\n\nA pair of curly brackets\/braces { } is used to denote sets.\n\n### Approximately Equal \\approx\n\nThe symbol \\approx means approximately equal. \\sqrt(2) \\approx 1.41 says \"the square root of 2 is approximately equal to one point four one.\"\n\n### Not Equal to \\ne\n\nThe symbol \\ne means not equal to so p\\ne2 means \"p is not equal to 2\".\n\n### Line over part of a decimal e.g. 0.1\\overline{6}\n\nAn overbar over part of a decimal means the digits repeat forever. For example \\frac{1}{6} = 0.1\\overline{6} means that just the 6 is repeated giving 0.16666666666\\ldots, but \\frac{2}{7} = 0.\\overline{285714} means that the whole sequence 285714 is repeated: 0.285714285714285714\\dots.\n\n### Absolute Value |a|\n\nThe absolute value |a| is the magnitude of a, that is, the number with any negative sign removed. For example, |3|=3 and |-3|=3. Note that any calculation within the absolute value is done first and then the sign ignored eg |5-8| = |-3| = 3.\n\n## Sets\n\nA set is a collection of objects, for example, { 1,2,3 } is the set of numbers 1, 2 and 3.\n\n### Empty Set\n\nThe set with no elements, { } is called the empty set, and has its own notation \\emptyset.\n\n### Element\n\nThe objects in a set are called elements. 
For example, 1 is an element in {1,2,3} but {1} is not.\u00a0 However, { 1 } is an element of A={ { 1 }, 2, { 1,2,3 } } and 3 is not since A contains the elements { 1 }, 2 and { 1,2,3 }.\n\n### Subset\n\nIf all the elements of one set are contained in another set, then the first set is a subset of the second set. (Formally, A is a subset of B is every element in A is contained in B.) For example, {1,2} is a subset of {1,2,3}. The empty set is a subset of all other sets as it has no elements.\n\n## Number\n\n### Terminating Decimals\n\nDecimal numbers that do not go on forever are called terminating decimals. For example 0.3 is a terminating decimal but \\frac{1}{3} = 0.\\overline{3} is not as the 3 is repeated forever. The decimal representation of \\sqrt(2) is not terminating either. We might approximate it to a terminating decimal, but however many decimal places we use, it will never be exactly \\sqrt(2).\n\n### Repeating Decimals\n\nDecimals which do not terminate but repeat the same digit or sequence of digits over and over again are called repeating decimals. An overbar over the repeating digit or sequence of digits is used, for example, \\frac{1}{6} = 0.1\\overline{6} and \\frac{2}{7} = 0.\\overline{285714}.\n\n### Rounding\n\nWhen we want to express a decimal which does not terminate as terminating decimal, we round to a certain number of decimal places. 
For example \\frac{1}{3} \\approx 0.333, \\frac{1}{6} \\approx 0.167, and \\sqrt(2) \\approx 1.412 all rounded to 3 decimal places.\n\n### Positive Integers (Natural Numbers)\n\nThe Positive Integers, also called the Natural Numbers, are the numbers {1, 2, 3, 4, 5, ...}.\n(Note that Natural Numbers sometimes include 0 so make sure you are always aware of which definition is being used in any books and webpages you're reading and in any courses you are taking.)\n\n### Negative Integers\n\nThe Negative Integers is the set {-1, -2, -3, -4, \\ldots}.\n\n### Non-negative Integers\n\nThe Non-negative Integers is the set of positive integers and zero.\n\n### Integers\n\nThe Integers consists of the Positive Integers, Negative Integers {-1,-2,-3,-4,\\ldots} and 0. That is, the Integers is the set {\\ldots, -4, -3, -2, -1, 0, 1, 2, 3, 4, \\ldots}.\n\n### Real Numbers\n\nThe real numbers consist of all fractions, whole numbers and decimals. The numbers which are not real are called imaginary numbers. (To get the imaginary numbers we have to define the square root of -1 which we call i. Weird, huh?)\n\n### Rational Numbers\n\nRational numbers are numbers which can be written as a fraction where the numerator (top) and denominator (bottom) are both integers, and the denominator (bottom) is not 0. For example, 2\/3 is a rational number since 2 and 3 are both integers, and 3\\ne0. Note that the integers are rational numbers since they can be written as a fraction with denominator (bottom) 1. In decimal form they are represented by either terminating or repeating decimals. For example, \\frac{3}{10} = 0.3, \\frac{1}{3} = 0.\\overline{3} and \\frac{2}{7} = 0.\\overline{285714}0.\n\n### Irrational Numbers\n\nIrrational numbers are real numbers which are not rational! They can not be written as a fraction with integer numerator (top) and denominator (bottom), and denominator (bottom) which is not zero. They can not be written as terminating decimals. 
They can not be written as repeating decimals. Examples of irrational numbers are \\pi, \\sqrt(2). Note that if you multiple or divide irrational numbers by rational numbers, you get an irrational number. If you add or subtract a rational and irrational number, you get an irrational number. However, if you multiply irrational numbers together you may get a rational number eg \\sqrt(2)\\cdot\\sqrt(2) = 2.","date":"2016-10-01 10:23:35","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9083172678947449, \"perplexity\": 674.1124798424372}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-40\/segments\/1474738662705.84\/warc\/CC-MAIN-20160924173742-00206-ip-10-143-35-109.ec2.internal.warc.gz\"}"}
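The terminating/repeating behaviour of rationals described above is easy to check by carrying out the long division in a few lines of code (a sketch; the `repeating_decimal` helper is our own illustration, not part of the original glossary):

```python
from fractions import Fraction

def repeating_decimal(n, d):
    """Long division of n/d for 0 < n < d.

    Returns (prefix, repeating_block); the block is '' when the
    decimal terminates."""
    digits, seen = [], {}
    r = n % d
    while r and r not in seen:
        seen[r] = len(digits)      # remember where this remainder first appeared
        r *= 10
        digits.append(str(r // d))
        r %= d
    if not r:                      # remainder reached 0: terminating decimal
        return "".join(digits), ""
    i = seen[r]                    # the cycle starts where the remainder recurs
    return "".join(digits[:i]), "".join(digits[i:])

print(repeating_decimal(1, 6))    # ('1', '6')      i.e. 1/6 = 0.1(6)
print(repeating_decimal(2, 7))    # ('', '285714')  i.e. 2/7 = 0.(285714)
print(repeating_decimal(3, 10))   # ('3', '')       i.e. 3/10 = 0.3 exactly

# Fraction does exact rational arithmetic: 1/3 + 1/6 is exactly 1/2
print(Fraction(1, 3) + Fraction(1, 6))   # 1/2
```

Running it on 2/7 reproduces the six-digit block 285714 used in the examples above.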
Posted on November 22, 2017 by staff

dotdigital Group pays £11m for Comapi

AIM-listed dotdigital has acquired omni-channel messaging company Comapi

Mo Aldalou 9:34am 22nd Nov 2017

Comapi has been purchased for £11m in cash

Comapi, a business which operates in the omni-channel messaging and cloud communication market, has been acquired in a multimillion-pound deal.

AIM-listed dotdigital Group, which provides software-as-a-service (SaaS) technology and tools for digital marketing professionals, has purchased the Comapi group of companies for a cash sum of £11m, with a potential further payment of up to £1.2m depending on specific performance targets.

Comapi is behind a scalable software platform that allows companies to communicate with customers across multiple messaging channels including email, SMS, social messaging apps and live chat. The firm, which employs a workforce of 30 at its headquarters in Cheltenham, generated earnings of £1.2m and revenues of £7.8m in the financial year to December 2016.

"By adding Comapi to our business, dotdigital is executing on its vision to be an omni-channel marketing automation platform," said chief executive Milan Patel. "Comapi has built an impressive platform that, integrated with our software, will allow our customers access to the next generation of consumer engagement marketing technology aiding retention and boosting our competitive advantage in securing new customers."

In a statement on the London Stock Exchange, the directors of dotdigital said the deal will extend its marketing automation platform to provide an "industry leading solution" offering fully integrated omni-channel and conversational commerce support to marketers. It will also support the company's ambitions to expand beyond the UK by enabling easier access to more mobile-centric markets including those in Asia.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,807
{"url":"https:\/\/socratic.org\/questions\/how-do-you-rationalize-the-denominator-and-simplify-1-sqrt5-3","text":"# How do you rationalize the denominator and simplify 1\/(sqrt5-3)?\n\n$\\frac{1}{\\sqrt{5} - 3} = - \\frac{\\sqrt{5} + 3}{4}$\nFor expressions $\\frac{\\textrm{s o m e t h \\in g}}{\\sqrt{a} - b}$ we multiply the numerator and denominator by $\\sqrt{a} + b$ so it matches the LHS of the formula $\\left(x + y\\right) \\left(x - y\\right) = {x}^{2} - {y}^{2}$.\n$\\frac{1}{\\sqrt{5} - 3} = \\frac{1}{\\sqrt{5} - 3} \\cdot \\frac{\\sqrt{5} + 3}{\\sqrt{5} + 3} = \\frac{\\sqrt{5} + 3}{5 - 9} = - \\frac{\\sqrt{5} + 3}{4}$","date":"2020-04-02 16:51:48","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 5, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9993500113487244, \"perplexity\": 196.64326339457168}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-16\/segments\/1585370506988.10\/warc\/CC-MAIN-20200402143006-20200402173006-00507.warc.gz\"}"}
\section{Introduction} Formal argumentation has been proved to be a successful approach to non-monotonic reasoning, among many other applications \cite{bench2007argumentation,atkinson2017towards,addedvalue}. Within the studies directed to provide a formal model for argument-based inference, abstract models of argumentation play a crucial role, as they answer a rather fundamental question: how should a rational agent choose among a conflicting set of arguments those that are better justified? The adjective \emph{abstract} stresses that these models disregard the nature and structure of arguments, in order to focus on the different semantics through which one could give a precise answer to the question above. The foremost abstract model of argumentation is the use of directed graphs, first proposed by Dung in \cite{dung1995acceptability} under the name of \emph{argumentation frameworks} (AFs), where nodes stand for arguments and arrows stand for attacks among arguments. \par While being an elegant and powerful tool, AFs have too limited modelling capabilities for many purposes. Consequently, many extensions of Dung's model were proposed in the literature, most prominently support relations \cite{cayrol2005acceptability}, recursive forms of attacks \cite{baroni2009encompassing}, and preferences between arguments \cite{amgoud2011new}. Two essential limitations of all these approaches are: (i)~their static character; and (ii)~the assumption that the formalized agent has perfect knowledge about the structure of the AF, that is, about the relevant arguments and attacks of the debate. \par Regarding (i), an AF can be understood as a snapshot of a debate, and this has been shown useful to provide mathematically precise counterparts of many interesting argumentative notions. 
However, a fundamental aspect of argumentation is its dynamic character, since arguments, conflicts among them, and participants' opinions typically change during the development of an argumentative dialogue. It is then unsurprising that the dynamics of formal argumentation systems has been the center of attention of a recent research avenue within formal argumentation, with an important focus on abstract models; we refer to \cite{DM18} and \cite{baumann2021enforcement} for recent surveys. As to (ii), it turns out to be a significant shortcoming in adversarial contexts where one typically wants to model the information (i.e., the part of an AF) that an agent thinks her opponent entertains, and thus uncertainty arises naturally. This assumption of perfect knowledge has been relaxed through the study of extensions of AFs that account for different forms of uncertainty, be it probabilistic or more qualitative; see \cite{hunter2021survey} and \cite{mailly2021yes} for recent surveys on the respective approaches. Among the second group of approaches, \emph{incomplete argumentation frameworks} (IAFs) \cite{baumeister2021acceptance,fazzing2020,baumeister2018credulous,baumeister2018verification,baumeister2018complexity} and \emph{control argumentation frameworks} (CAFs) \cite{dimopoulos2018control,niskanen2021controllability, cafsnegotiation} have recently received a lot of attention, resulting in a precise complexity map of the different associated reasoning tasks as well as some applications \cite{cafsnegotiation}. Concurrently, a considerable amount of work in formal argumentation has focused on building a suitable logical theory for reasoning about argumentation formalisms, with a special focus on AFs and their dynamics; see \cite{besnard2020logical} for a recent survey. 
The \emph{dynamic logic of propositional assignments} (DL-PA) \cite{balbiani2013dynamic} has been shown to be a useful tool for this enterprise \cite{doutre2014dynamic,doutre2017dynamic,doutre2019clar,clar2021}. DL-PA is a well-behaved variant of \emph{propositional dynamic logic} (PDL) \cite{HarelKozenTiuryn00}, where atomic programs are restricted to assignments of propositional variables to either Truth or Falsity. It is expressive enough to capture all standard argumentation semantics. When compared to encodings in propositional logic, DL-PA can capture semantics that incorporate minimality or maximality criteria more succinctly. Moreover, its advantages over equally succinct languages such as \emph{quantified Boolean formulas} have been highlighted \cite{doutre2019clar}.\par This work pushes further the logical encoding of abstract AFs in DL-PA by pursuing three general aims: (1)~to capture argumentation semantics that had not been captured before, some of them requiring challenging encoding methods; (2)~to integrate qualitative uncertainty about AFs and dynamics of argumentation in DL-PA by reducing reasoning tasks of different extensions of argumentation frameworks to DL-PA model checking problems; and (3)~to show that the chosen logic is also a suitable tool for exploratory purposes, by developing new forms of modelling argumentative communication under uncertainty that are directly inspired by our encodings. \par After providing the essential background on AFs and DL-PA (Section~\ref{sec:background}), Section~\ref{sec:semantics} provides polynomial encodings of a wide range of AF semantics in DL-PA. In Section~\ref{sec:uncertainty} we present several formalisms for qualitatively representing uncertainty about AFs, as well as the reduction of their main reasoning tasks to DL-PA model checking problems. In Section~\ref{sec:dynamics}, we discuss joint approaches to dynamics and qualitative uncertainty of AFs.
Section~\ref{sec:final} ends the paper with some discussion and challenges for future work. Proofs and proof sketches can be found in the \nameref{sec:app}. \section{Background}\label{sec:background} Throughout the paper we assume a fixed, finite, non-empty set of arguments $\mathcal{U}$ (the \emph{universe}). We moreover assume that $\mathcal{U}$ is big enough to accommodate our examples. Sets of arguments (noted $A$, sometimes with a superscript) are supposed to be subsets of $\mathcal{U}$; and all conflict relations (noted $R$, sometimes with a superscript) are binary relations on $\mathcal{U}$, i.e., $R \subseteq \mathcal{U} \times \mathcal{U}$. Given $A \subseteq \mathcal{U}$ and $R \subseteq \mathcal{U} \times \mathcal{U}$, we use $R\upharpoonright_{ A}$ to abbreviate $R \cap (A \times A)$ (the \textbf{restriction of $R$ to $A$}). \subsection{Abstract Argumentation Frameworks (AFs) and their Semantics}\label{sec:background:argsema} An \textbf{argumentation framework} (AF) is a directed graph $(A,R)$ \cite{dung1995acceptability}, where $A$ stands for a set of arguments and $R$ stands for a conflict-based relation among them (typically an attack relation).\footnote{ As $A\subseteq \mathcal{U}$, we actually focus on \emph{finite} AFs, as most of the literature does. This is an inherent limitation of our approach: our encodings use quantification over $\mathcal{U}$, which makes finiteness of $\mathcal{U}$ necessary. Capturing some more general argumentation semantics has turned out to require powerful logical languages, such as the modal $\mu$-calculus for the grounded semantics \cite{grossi2010logic}.} We note $\mathcal{AF}$ the set of all argumentation frameworks (over $\mathcal{U}$). Argumentation semantics are meant to capture the informal notion of reasonable positions in a debate. The literature contains a large number of such semantics. They are typically presented either in extension-based terms or in labelling-based terms. 
For most of the existing semantics, both approaches (extensions and labellings) were proved equivalent. Here, we opt for an extension-based presentation and restrict our attention to a limited number of semantics, but the interested reader is referred to \cite{baroni2018abstract} for an overview. Let us first define some useful concepts. Let $\af$ be an AF and let $E\subseteq A$. We define $E^{+}=\{x \in A \mid \exists y \in E: (y,x)\in R\}$ (the \textbf{set of arguments attacked by $E$}), and $E^{\oplus}=E\cup E^{+}$ (the so-called \textbf{range of $E$}). A set of arguments $E\subseteq A$ is \textbf{conflict-free} iff $E\cap E^{+}=\emptyset$. Moreover, $E$ \textbf{defends} $a\in A$ iff for every $x \in A$: if $(x,a)\in R$, then $x \in E^{+}$. Finally, $E\subseteq A$ is \textbf{admissible} iff it is (i)~conflict-free and (ii)~self-defended (it defends all its members). In \cite{dung1995acceptability}, Dung introduced four different semantics. A set of arguments $E\subseteq A$ is said to be: \begin{itemize} \item a \textbf{stable extension} iff (i)~it is conflict-free, and (ii)~$A \setminus E\subseteq E^{+}$ (`$E$ attacks every argument outside itself'); \item a \textbf{complete extension} iff (i)~it is conflict-free; and (ii)~it contains precisely the arguments of $A$ that it defends; \item a \textbf{grounded extension} iff it is a minimal (w.r.t.\ set inclusion) complete extension; \item a \textbf{preferred extension} iff it is a maximal (w.r.t.\ set inclusion) complete extension. \end{itemize} It is well known that the existence of complete, grounded and preferred extensions is guaranteed for any AF. However, this does not hold for stable semantics: there exist frameworks lacking stable extensions. Moreover, the grounded semantics is the only one from the above list {belonging to the so-called single-status approach}: each AF has exactly one grounded extension. 
This is an advantage when, for instance, modelling the beliefs of an agent as the output of her argument-evaluation processes. More precisely, if a semantics admits AFs with several extensions then these extensions are usually logically incompatible when one works with structured arguments, and there is no clear way to choose among them. Besides Dung's above four semantics we will take into account some others. Semi-stable semantics was introduced to remedy the possible absence of stable extensions. A set $E\subseteq A$ is a \textbf{semi-stable extension} of $\af$ iff $E$ is a complete extension with maximal (w.r.t.\ set inclusion) range among complete extensions. More formally, $E$ is a semi-stable extension iff \begin{itemize} \item[(i)] $E$ is a complete extension; and \item[(ii)] there is no other complete extension $E'$ such that $E^{\oplus}\subset E'^{\oplus}$. \end{itemize} Contrary to what happens with stable extensions, there is at least one semi-stable extension in every finite AF. Moreover, when the set of stable extensions is nonempty, stable and semi-stable extensions coincide \cite{caminada2012semi}. Although appealing because of its single-status approach, grounded semantics can be criticised as being too sceptical because it typically leaves many undecided arguments, i.e., arguments neither belonging to the grounded extension nor attacked by it. The idea of both the eager and the ideal semantics is to keep the advantage of returning a single extension while avoiding being overly sceptical. Formally, a set $E\subseteq A$ is an \textbf{ideal set} of $\af$ iff it is admissible and it is contained in every preferred extension. The \textbf{ideal extension} of $\af$ is its maximal (w.r.t.\ set inclusion) ideal set. Moreover, a set $E\subseteq A$ is an \textbf{eager set} iff it is admissible and it is contained in every semi-stable extension. 
The \textbf{eager extension} of $\af$ is its maximal (w.r.t.\ set inclusion) eager set. All the above semantics satisfy the so-called \emph{admissibility principle}, meaning that all of their extensions are admissible sets. For some purposes, however, self-defence could be too strong a requirement, for instance when capturing human argument evaluation {\cite{guillaume2022plosone}}. Alternative semantics selecting specific conflict-free sets were defined under the denomination \emph{naivety-based} semantics (see e.g.\ \cite{cramer2019scf2}). The basis of all these semantics is the notion of naive extension. A \textbf{naive extension} of $(A,R)$ is just a maximal (w.r.t.\ set inclusion) conflict-free set. {A more elaborate naivety-based semantics, strongly inspired by the notion of semi-stability, is stage semantics. Formally, a \textbf{stage extension} of $\af$ is a conflict-free set with maximal range among conflict-free sets.} We abbreviate the name of each semantics by using the shorthands $\{st, co, gr, pr, se, id, ea, na, stg\}$ in the obvious way. For every $\sigma\in \{st, co, gr, pr, se, id, ea, na, stg\}$, we note $\sigma(A,R)$ the set of all $\sigma$-extensions of $(A,R)$. An argument $x\in A$ is said to be credulously (resp.\ sceptically) $\sigma$-\textbf{accepted} iff it belongs to at least one (resp.\ every) $\sigma$-extension. As an example, for the AF $(A_0,R_0)$ represented in the picture below we have $\mathsf{st}(A_0,R_0)=\mathsf{pr}(A_0,R_0)=\mathsf{se}(A_0,R_0)= \mathsf{stg}(A_0,R_0)=\{\{b,e\},\{c,d\}\}$; $\mathsf{gr}(A_0,R_0)=\mathsf{id}(A_0,R_0)=\mathsf{ea}(A_0,R_0)=\{\emptyset\}$; $\mathsf{co}(A_0,R_0)=\{\emptyset,\{b,e\},\{c,d\}\}$ and $\mathsf{na}(A_0,R_0)=\{\{a,c\},\{a,e\},\{b,e\},\{b,d\},\{c,d\}\}$. 
\begin{center} \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=1 cm of a]{}; \node[world] (b) [above=0.5 cm of pos]{b}; \node[world] (c) [left=1 cm of b]{c}; \node (name) [left=1 cm of c]{$(A_0,R_0)$}; \node[world] (d) [below=0.5 cm of pos]{d}; \node[world] (e) [left=1 cm of d]{e}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (c) edge[bend right] (e); \draw[->] (e) edge[bend right] (c); \end{tikzpicture} \end{center} Moreover, $(A_1,R_1)$, depicted below and borrowed from \cite{caminada2012semi}, illustrates the difference between stable and semi-stable semantics: the framework has no stable extension, while $\{c,a\}$ is a semi-stable extension (which is actually the only one). \begin{center} \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node[world] (b) [left=1 cm of a]{b}; \node (pos) [left=0.5 cm of b]{}; \node[world] (d) [above=0.35 cm of pos]{d}; \node (name) [left=1 cm of d]{$(A_1,R_1)$}; \node[world] (c) [below=0.35 cm of pos] {c}; \draw[->] (b) edge (a); \draw[->] (c) edge (b); \draw[->] (d) edge (b); \draw[->] (d) edge[reflexive left] (d); \end{tikzpicture} \end{center} Finally, to see the difference between (semi-)stable and preferred semantics consider $(A_2,R_2)$, borrowed from \cite{baroni2018abstract} and depicted below. The set $\{a\}$ is a preferred extension, but it is not a (semi-)stable one. Moreover, the example also illustrates the difference between ideal and eager semantics, as the eager extension is $\{b,d\}$, while the ideal one is empty. For more examples, the reader is referred to \cite{baroni2018abstract}, as well as to the graphical online solver \emph{ConArg} \cite{conargpaper}. 
\begin{center} \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node[world] (b) [left=1 cm of a]{b}; \node[world] (c) [left=1 cm of b]{c}; \node[world] (d) [left=1 cm of c]{d}; \node (pos) [left=0.5 cm of c] {}; \node[world] (e) [above=1 cm of pos]{e}; \node (name) [left=1 cm of e]{$(A_2,R_2)$}; \draw[->] (b) edge[bend right] (a); \draw[<-] (b) edge[bend left] (a); \draw[<-] (c) edge (b); \draw[->] (d) edge[bend left] (e); \draw[<-] (c) edge[bend right] (e); \draw[->] (c) edge[bend left] (d); \end{tikzpicture} \end{center} \subsection{Dynamic Logic of Propositional Assignments (DL-PA)}\label{sec:dlpa} We use DL-PA as the general logical framework of this paper. The language of DL-PA is built from a {countably infinite} set of propositional variables $\propset = \{p_1, p_2, \ldots\}$. We suppose that $\propset$ contains several kinds of propositional variables capturing statuses of arguments and relations between them. First, to every set of arguments $A \subseteq \mathcal{U}$ we associate the set of \textbf{awareness variables} $\mathsf{AW}_{A}=\{\aw x \mid x \in A\}$ and the set of \textbf{acceptance variables} $\mathsf{IN}_{A}=\{\acc x \mid x \in A\}$. Second, to every relation $R \subseteq \mathcal{U} \times \mathcal{U}$ we associate the set of \textbf{attack variables} $\mathsf{ATT}_{R}=\{\att x y \mid (x,y)\in R\}$. The set of propositional variables of our logic therefore contains \begin{align*} \propset_{\mathcal{U}} &= \mathsf{AW}_{\mathcal{U}} \cup \mathsf{IN}_{\mathcal{U}} \cup \mathsf{ATT}_{\mathcal{U}\times\mathcal{U}} \text{.} \end{align*} As $\propset_{\mathcal{U}}$ is finite, the countably infinite $\propset$ provides a reservoir of auxiliary variables that are going to help us to encode e.g.\ semi-stable and stage semantics. 
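To fix intuitions about these three kinds of variables (and anticipating the correspondence between valuations and AFs made precise at the end of this section), the following Python sketch builds the valuation representing the AF $(A_0,R_0)$ of the previous subsection and decodes it back. It is an illustration only; the string-based naming of the variables is our own convention, not part of the formal development.

```python
# Hypothetical string naming for the variables aw_x, in_x and att_x_y;
# this naming is our own convention, not part of the paper's formalism.
def aw(x):     return f"aw_{x}"
def inn(x):    return f"in_{x}"       # 'in' is a Python keyword
def att(x, y): return f"att_{x}_{y}"

def valuation_of(A, R):
    """The valuation v_(A,R): awareness variables for A, attack variables for R."""
    return {aw(x) for x in A} | {att(x, y) for (x, y) in R}

def af_of(v, universe):
    """The AF (A_v, R_v) represented by a valuation v."""
    A = {x for x in universe if aw(x) in v}
    R = {(x, y) for x in A for y in A if att(x, y) in v}
    return A, R

def extension_of(v, universe):
    """The extension associated to v: arguments whose acceptance variable is true."""
    return {x for x in universe if inn(x) in v}

U = {"a", "b", "c", "d", "e"}
A0 = {"a", "b", "c", "d", "e"}
R0 = {("b", "a"), ("d", "a"), ("c", "b"),
      ("e", "d"), ("c", "e"), ("e", "c")}

v = valuation_of(A0, R0)
assert af_of(v, U) == (A0, R0)                      # the round trip recovers the AF
assert extension_of(v | {inn("b"), inn("e")}, U) == {"b", "e"}
# An attack variable without awareness of its endpoints is ignored when decoding:
assert af_of({att("a", "b")}, U) == (set(), set())
```

Note that decoding discards attack variables whose endpoints are not marked as aware, mirroring the restriction of $R_{\mathit{v}}$ to $A_{\mathit{v}}$ in the formal definition.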
Formulas and programs of DL-PA are defined by mutual recursion: \begin{align*} \varphi &::= p \mid \lnot \varphi \mid (\varphi \land \varphi)\mid [\pi]\varphi , \\ \pi &::= \assgntop{p} \mid \assgnbot{p} \mid \varphi?\mid (\pi;\pi) \mid (\pi \cup \pi) \mid \pi^{\smallsmile} , \end{align*} where $p$ ranges over $\propset$. The formula $[\pi]\varphi$ reads ``$\varphi$ is true after every possible execution of $\pi$''. The program $\assgntop{p}$ makes $p$ true and $\assgnbot{p}$ makes $p$ false. The program $\varphi?$ tests that $\varphi$ is true and fails when it is false. The program $\pi_1 ; \pi_2 $ is the sequential composition of $\pi_1$ and $\pi_2$; and $\pi_1 \cup \pi_2$ is their nondeterministic composition. Finally, $\pi^{\smallsmile} $ is the execution of $\pi$ `the other way round'; for example, the program $\assgntop p^{\smallsmile} $ undoes the assignment of $p$ to true: when $p$ is false then it fails, and when $p$ is true then it nondeterministically either does nothing or makes $p$ false. Here are some more examples. The formula $[\assgnbot p]\lnot p$ is going to be valid: there is only one way of executing $\assgnbot p$, and $p$ is false afterwards. In contrast, $[\assgntop p]\lnot p$ is going to be unsatisfiable. Moreover, $[\assgntop p]q$ is equivalent to $q$ for syntactically different $p$ and $q$. The formula $[\phi ? ] \psi$ says that $\psi$ is true after every possible execution of the test $\phi ? $. There is at most one such execution, namely when $\phi$ is true, and it does not change anything; when $\phi$ is false then the test fails and $[\phi ? ] \psi$ is vacuously true. Therefore $[\phi ? ] \psi$ has to be equivalent to $\lnot \phi \lor \psi$. The formula $[\pi_1 ; \pi_2]\phi$ is equivalent to $[\pi_1][\pi_2]\phi$ and $[\pi_1 \cup \pi_2]\phi$ is going to be equivalent to $[\pi_1]\phi \land [\pi_2]\phi$. Finally, the formulas $[\assgntop p \cup \assgnbot p]\lnot p$ and $[\assgnbot p^\smallsmile]p$ are both going to be unsatisfiable. 
The former is the case because there is a nondeterministic choice (namely that of $\assgntop p$) after which $p$ is true; the latter is the case because there is an execution of the nondeterministic $\assgnbot p^\smallsmile$ after which $p$ is still false. Here are some abbreviations of formulas and programs that are going to be useful in the rest of the paper. The program $\top?$ is abbreviated as $\mathsf{skip}$: it always succeeds and does not change anything. The program $(\varphi ? ; \alpha)\cup(\lnot \varphi ? ; \beta)$ abbreviates $\mathsf{if}\, \varphi\, \mathsf{then}\, \alpha\, \mathsf{else}\, \beta$. A special case of the latter is when $\beta$ is $\mathsf{skip}$, where we just write $\mathsf{if}\, \varphi\, \mathsf{then}\, \alpha$. (Observe that this is not the same as $\varphi ? ; \alpha$: when $\varphi$ is false then the latter fails while the former succeeds and does nothing.) As to formulas, the missing Boolean connectives are defined as usual. Moreover, the formula $\ldia{\pi}\phi$ abbreviates $\lnot [\pi] \lnot \phi$. It therefore reads ``$\phi$ is true after some possible execution of $\pi$''. In particular, $\ldia{\pi}\top$ has to be read ``$\pi$ is executable''. For example, the formula $\phi \rightarrow [\pi] \ldia{\pi^\smallsmile}\phi$ expresses that every successful execution of $\pi$ can be reversed; it is going to be valid. (Observe that the diamond cannot be replaced by a box, as illustrated by the invalid $p \rightarrow [\assgntop p] [\assgntop p^\smallsmile]p$.) Our models are classical propositional valuations over $\propset$, i.e., they are subsets of $\propset$. We use $\mathit{v},\mathit{v}',\mathit{v}''$ to denote valuations. Formulas $\varphi$ are interpreted in a way similar to dynamic logic, and programs $\pi$ are interpreted as binary relations on valuations. Just like the syntax, the semantics of DL-PA is defined by mutual recursion. 
The interpretation of formulas is: \begin{center}\begin{tabular}{lcl} $\mathit{v} \models p$ &if& $p \in \mathit{v}$, \\ $\mathit{v} \models [\pi]\varphi$ &if& $(\mathit{v},\mathit{v}')\in ||\pi||$ implies $\mathit{v}'\models \varphi$, \end{tabular}\end{center} and as usual for the Boolean connectives; and the interpretation of programs is: \begin{align*} ||\assgntop{p}|| &=\{(\mathit{v},\mathit{v}')\mid \mathit{v}'=\mathit{v}\cup\{p\}\} , \\ ||\assgnbot{p}|| &=\{(\mathit{v},\mathit{v}')\mid \mathit{v}'=\mathit{v}\setminus\{p\}\} , \\ ||\varphi?|| &=\{(\mathit{v},\mathit{v})\mid \mathit{v}\models \varphi\} , \\ ||\pi; \pi'|| &=||\pi||\circ ||\pi'|| , \\ ||\pi\cup \pi'|| &=||\pi||\cup ||\pi'|| , \\ ||\pi^{\smallsmile}|| &=||\pi||^{-1} . \end{align*} The interpretation of $\assgntop p$ is the relation that makes $p$ true while not changing anything else; and similarly for $\assgnbot p$. That of the test $\phi ? $ relates every valuation where $\phi$ is true with itself; for example, $||\top ? || = \{(\mathit{v},\mathit{v}) \mid \mathit{v} \subseteq \propset\}$. Sequential composition $\pi_1 ; \pi_2$ is naturally interpreted as relation composition and nondeterministic composition $\pi_1 \cup \pi_2$ as set union of the two relations $||\pi_1||$ and $||\pi_2||$. The interpretation of the converse $\pi^\smallsmile$ is the inverse of the relation $||\pi||$ and relates a valuation $\mathit{v}$ to all those valuations where $\pi$ is executable and may lead to $\mathit{v}$. For example, $||\assgntop p^{\smallsmile}||=\{(\mathit{v}',\mathit{v})\mid \mathit{v}'=\mathit{v} \cup \{p\}\}$. \par A formula $\varphi$ is DL-PA satisfiable if $\mathit{v} \models \phi$ for some $\mathit{v}$, and it is DL-PA valid if $\mathit{v} \models \phi$ for every $\mathit{v}$. For example, $\ldia{\assgntop p} \top$ and $\ldia{\assgnbot p} \top$ are both valid, while $\ldia{\assgnbot p^\smallsmile} p$ and $\ldia{\assgnbot p^\smallsmile} \lnot p$ are satisfiable but not valid. 
It is known that satisfiability, validity, and model checking are all PSPACE-complete decision problems \cite{BalbianiHST14}. Let us now introduce some DL-PA programs that will be useful later on. Let $\mathsf{P} = \{p_1,\ldots,p_n\} \subseteq \propset$ be a finite set of propositional variables. First of all, we define $$\Seq_{p\in \mathsf{P}} \pi_p = \pi_{p_1};\ldots;\pi_{p_n} . $$ {By convention, we assume that the abbreviation amounts to $\mathsf{skip}$ when $\mathsf{P}=\emptyset$. We adopt the same convention for nondeterministic union, i.e., $\bigcup_{p \in \emptyset}\pi_p=\mathsf{skip}$.} In principle the order of the elements of $\mathsf{P}$ matters, but each time we are going to use this notation we will make sure that the programs $\pi_{p_i}$ are such that this is not the case. This will in particular hold for the following abbreviations: \begin{align*} \mathsf{mkTrueOne}(\mathsf{P})&=\bigcup_{p\in \mathsf{P}}(\lnot p?; \assgntop p)=(\lnot p_1?;\assgntop {p_1})\cup \ldots \cup(\lnot p_n?;\assgntop{p_n}) \text{,} \\ \mathsf{mkFalseOne}(\mathsf{P})&=\bigcup_{p\in \mathsf{P}}(p?; \assgnbot p)=(p_1?;\assgnbot {p_1})\cup \ldots \cup(p_n?;\assgnbot{p_n}) \text{,} \\ \mathsf{mkTrueSome}(\mathsf{P})&=\Seq_{p\in \mathsf{P}}(\assgntop p \cup \mathsf{skip})=(\assgntop{p_1} \cup \mathsf{skip}); \ldots ;(\assgntop{p_n}\cup \mathsf{skip}) \text{,} \\ \mathsf{mkFalseSome}(\mathsf{P})&=\Seq_{p\in \mathsf{P}}(\assgnbot p \cup \mathsf{skip})=(\assgnbot{p_1} \cup \mathsf{skip}); \ldots ;(\assgnbot{p_n}\cup \mathsf{skip}) \text{,} \\ \mathsf{vary}(\mathsf{P}) &=\Seq_{p\in \mathsf{P}}(\assgntop p \cup \assgnbot p) =\big(\assgntop{p_1}\cup \assgnbot{p_1}\big); \ldots ;\big(\assgntop{p_n} \cup \assgnbot{p_n}\big) . \end{align*} The program $\mathsf{mkTrueOne}(\mathsf{P})$ chooses an element of $\mathsf{P}$, checks that it is false and makes it true, while $\mathsf{mkTrueSome}(\mathsf{P})$ makes true some elements of $\mathsf{P}$ that were false before. 
The programs $\mathsf{mkTrueOne}(\mathsf{P})$ and $\mathsf{mkFalseOne}(\mathsf{P})$ are the converse of each other; same for $\mathsf{mkTrueSome}(\mathsf{P})$ and $\mathsf{mkFalseSome}(\mathsf{P})$. The sequential composition $\mathsf{mkTrueOne}(\mathsf{P}) ; \mathsf{mkTrueSome}(\mathsf{P})$ makes true at least one element of $\mathsf{P}$ that was false before (possibly more). The last program---i.e., $\mathsf{vary}(\mathsf{P})$---has the same interpretation as the sequential compositions $\mathsf{mkTrueSome}(\mathsf{P}) ; \mathsf{mkFalseSome}(\mathsf{P})$ and $\mathsf{mkFalseSome}(\mathsf{P}) ; \mathsf{mkTrueSome}(\mathsf{P})$. \par Let us state formally the meaning of these programs:\footnote{The following proposition is a slight correction of \cite[Lemma 1]{doutre2019clar}.} \begin{proposition}\label{prop:dlpaprg} We have: \begin{align*} ||\mathsf{mkTrueOne}(\mathsf{P})|| &= \{(\mathit{v},\mathit{v}') \mid \mathit{v}' = \mathit{v} \cup \{p\} \text{ for some } p \in \mathsf{P} {\setminus \mathit{v}}\} , \\ ||\mathsf{mkFalseOne}(\mathsf{P})|| &= \{(\mathit{v},\mathit{v}') \mid \mathit{v}' = \mathit{v} \setminus \{p\} \text{ for some } p \in \mathsf{P} {\cap \mathit{v}}\} , \\ ||\mathsf{mkTrueSome}(\mathsf{P})|| &= \{(\mathit{v},\mathit{v}') \mid \mathit{v}' = \mathit{v} \cup \mathsf{P}' \text{ for some } \mathsf{P}' \subseteq \mathsf{P}\} , \\ ||\mathsf{mkFalseSome}(\mathsf{P})|| &= \{(\mathit{v},\mathit{v}') \mid \mathit{v}' = \mathit{v} \setminus \mathsf{P}' \text{ for some } \mathsf{P}' \subseteq \mathsf{P}\} , \\ ||\mathsf{vary}(\mathsf{P})|| &= \{(\mathit{v},\mathit{v}') \mid \mathit{v} \setminus \mathit{v}' \subseteq \mathsf{P} \text{ and } \mathit{v}' \setminus \mathit{v} \subseteq \mathsf{P} \} . 
\end{align*} \end{proposition} \paragraph{From valuations to AFs and back.} Thanks to our hypothesis that $\propset$ contains $\propset_{\mathcal{U}}$, each valuation $\mathit{v}\subseteq \propset$ represents the AF $(A_{\mathit{v}},R_{\mathit{v}})$ defined by: \begin{align*} A_\mathit{v} &= \{x \in \mathcal{U} \mid \aw x \in \mathit{v} \} , \\ R_\mathit{v} &= \{(x,y)\in \mathcal{U} \times \mathcal{U}\mid \att x y \in \mathit{v}\}\upharpoonright_{ A_{\mathit{v}}} \\ &= \{(x,y) \in A_\mathit{v} \times A_\mathit{v} \mid \att x y \in \mathit{v} \} . \end{align*} The other way round, each AF $(A,R)$ is represented by the valuation $$\mathit{v}_{(A,R)}=\{\aw x \mid x \in A\}\cup\{\att x y \mid (x,y)\in R\}. $$ Note that the valuation $\mathit{v}_{\af}$ is well defined for any set $A\subseteq \mathcal{U}$ and relation $R \subseteq \mathcal{U}\times \mathcal{U}$, even when $\af$ is not an AF. (This is the case as soon as $R$ contains pairs $(x,y) \in \mathcal{U}\times\mathcal{U}$ that are not in $A \times A$.) Moreover, notice that if we start with a valuation $\mathit{v}'$ then $\mathit{v}_{(A_{\mathit{v}'},R_{\mathit{v}'})}=\mathit{v}'$ does not generally hold because a valuation can contain an attack variable $\att a b$ without containing $\aw a$ and $\aw b$. If, however, we start with an AF $(A',R')$ then $(A_{\mathit{v}_{(A',R')}}, R_{\mathit{v}_{(A',R')}})=(A',R')$ is always the case. Finally, for each valuation $\mathit{v}$ we define the \textbf{extension associated to $\mathit{v}$} by: $$\extOf\mathit{v} = \{x \in \mathcal{U} \mid \acc x \in \mathit{v}\} . $$ \section{Argumentation Semantics in DL-PA}\label{sec:semantics} We now show how to capture argumentation semantics in DL-PA. The starting point is to adopt the encoding of AFs in propositional logic as introduced in \cite{BesnardDoutre}. 
It consists in associating to each semantics $\sigma$ a formula $\varphi_{\sigma}$ such that $\mathit{v} \models \varphi_{\sigma}$ if and only if $\extOf\mathit{v}$ is a $\sigma$-extension of $(A_{\mathit{v}},R_{\mathit{v}})$. This approach was pushed further in \cite{doutre2014dynamic,doutre2017dynamic,doutre2019clar,clar2021}, where it was proposed to go beyond the characterisation of extensions and exploit DL-PA programs to \emph{describe the computation of extensions}. The most basic way to do so is a `generate and test' approach: the generic program $$\mathsf{makeExt}^{\sigma} = \mathsf{vary}(\mathsf{IN}_{\mathcal{U}});\varphi_{\sigma}?$$ nondeterministically builds all possible $\sigma$-extensions by first varying the values of the acceptance variables and then checking that a $\sigma$-valuation has been obtained. As worked out in \cite{doutre2019clar}, other, more efficient extension building algorithms can also be captured as DL-PA programs and can be proved to be equivalent to $\mathsf{makeExt}^{\sigma}$. Due to our hypothesis of a background universe of arguments $\mathcal{U}$ we need an encoding of argumentation semantics that takes awareness variables $\aw x$ into account. This was done by \cite{doutre2017dynamic} for stable semantics.\footnote{In \cite{doutre2017dynamic}, the term \emph{enablement} and the notation $\mathsf{En}_x$ are used instead of \emph{awareness} and $\aw x$.} Here we extend the encoding to the rest of the semantics presented in Section~\ref{sec:background:argsema}. The correctness of all encodings is formally stated at the end of this section. We start by defining some formulas that allow us to capture the different semantics in a compact way. 
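The effect of the generic program $\mathsf{makeExt}^{\sigma}$ can be mimicked outside the logic by a brute-force `generate and test' loop. The following Python sketch (ours, for illustration; it ignores awareness variables and works directly on an AF) does so for stable semantics and recovers the stable extensions of $(A_0,R_0)$ from Section~\ref{sec:background:argsema}:

```python
from itertools import combinations

def subsets(xs):
    xs = sorted(xs)
    return (set(c) for r in range(len(xs) + 1) for c in combinations(xs, r))

def is_stable(E, A, R):
    # conflict-free, and E attacks every argument outside itself
    conflict_free = not any((x, y) in R for x in E for y in E)
    attacks_rest = all(any((y, x) in R for y in E) for x in A - E)
    return conflict_free and attacks_rest

def make_ext_stable(A, R):
    # 'generate' all acceptance valuations, then 'test' the Stable condition,
    # mirroring vary(IN); Stable?
    return [E for E in subsets(A) if is_stable(E, A, R)]

A0 = {"a", "b", "c", "d", "e"}
R0 = {("b", "a"), ("d", "a"), ("c", "b"),
      ("e", "d"), ("c", "e"), ("e", "c")}

assert sorted(map(sorted, make_ext_stable(A0, R0))) == [["b", "e"], ["c", "d"]]
```

The same loop with the test predicate swapped out mirrors $\mathsf{vary}(\mathsf{IN}_{\mathcal{U}});\varphi_{\sigma}?$ for any other semantics $\sigma$.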
\subsection{Useful DL-PA Formulas} The following DL-PA formula expresses that the arguments identified by acceptance variables are indeed arguments entertained by the formalised agent (arguments she is aware of): $$\mathsf{Well} = \bigwedge_{x \in \mathcal{U}}(\acc x \to \aw x)\text{.}$$ This abbreviation allows us to express conflict-freeness and admissibility: \begin{align*} \mathsf{ConFree} &= \mathsf{Well} \land \bigwedge_{x\in \mathcal{U}} \bigwedge_{y \in \mathcal{U}} \lnot (\acc x \land \acc y \land \att x y)\text{,} \\ \mathsf{Admissible} &= \mathsf{ConFree} \land \bigwedge_{x\in \mathcal{U}} \Big(\acc x \to \bigwedge_{y \in \mathcal{U}}\big((\aw y \land \att y x)\to \bigvee_{z \in \mathcal{U}}(\acc z \land \att z y) \big)\Big)\text{.} \end{align*} Our characterisation of semi-stable and stage extensions makes use of fresh copies $\acc x'$ of the variables $\acc x$, one per $x \in \mathcal{U}$ (which are available because $\propset$ is countably infinite while $\mathcal{U}$ is finite). For these auxiliary variables we define a program that copies the values of the $\mathsf{IN}_\mathcal{U}$ variables: \begin{align*} \mathsf{copy}(\mathsf{IN}_{\mathcal{U}}) &= \Seq_{x \in \mathcal{U}} ((\acc{x} ? ; \assgntop{\acc{x}'}) \cup (\lnot \acc{x} ? ; \assgnbot{\acc x'}) ) . \end{align*} Furthermore, the following two formulas characterise whether the range of the extension $\extOf\mathit{v}$ represented by $\mathit{v}$, in symbols $\extOf\mathit{v}^{\oplus}$, is included in the range of the extension represented by the copies; and vice versa: \begin{align*} \mathsf{IncludedInCp} =\ & \bigwedge_{x \in \mathcal{U}} \left[ \left(\acc{x} \lor \left(\aw{x} \land \bigvee_{y \in \mathcal{U}}(\acc{y} \land \att y x) \right) \right) \right . \\& \qquad \rightarrow \left . 
\left(\acc{x}' \lor \left(\aw{x} \land \bigvee_{y \in \mathcal{U}}(\acc {y}' \land \att y x) \right) \right) \right] , \\ \mathsf{IncludesCp} =\ & \bigwedge_{x \in \mathcal{U}} \left[ \left(\acc{x}' \lor \left(\aw{x} \land \bigvee_{y \in \mathcal{U}}(\acc{y}' \land \att y x) \right) \right) \right . \\& \qquad \rightarrow \left . \left(\acc{x} \lor \left(\aw{x} \land \bigvee_{y \in \mathcal{U}}(\acc {y} \land \att y x) \right) \right) \right] . \end{align*} Finally, to capture ideal and eager semantics we need to ensure that the entertained set is admissible and belongs to every preferred extension (for the case of ideal semantics), or to every semi-stable extension (for the case of eager semantics). This can be done in a compact way by means of the extension-building programs $\mathsf{makeExt}^{\sigma}$: \begin{align*} \mathsf{IdealSet} &= \mathsf{Admissible}\land \bigwedge_{x \in \mathcal{U}}(\acc x \to [\mathsf{makeExt}^{pr}] \acc x) , \\ \mathsf{EagerSet} &= \mathsf{Admissible}\land \bigwedge_{x \in \mathcal{U}}(\acc x \to [\mathsf{makeExt}^{se}] \acc x) . \end{align*} \subsection{Encoding the Semantics of Section~\ref{sec:background:argsema} in DL-PA} Table~\ref{table:encodings} lists all the encodings. That of stable and complete semantics slightly simplifies that of \cite{doutre2017dynamic,clar2021}. Our encoding of grounded, complete, and preferred semantics straightforwardly adapts those of \cite{doutre2019clar} for computing minimality and maximality criteria. The first four encodings are essentially a combination of those developed in \cite{doutre2017dynamic} and \cite{doutre2019clar}, with some slight improvements and adaptations. Among the semantics that have not been captured in DL-PA before, our encoding of naive semantics simplifies the program for checking set maximality w.r.t.\ other semantics such as preferred semantics because no superset of a set containing conflicts can be conflict-free. 
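As a sanity check of this copy-and-compare mechanism, the following Python sketch (ours; a brute-force illustration rather than a DL-PA evaluation) computes the complete extensions of an AF and keeps those whose range is $\subseteq$-maximal, reproducing the semi-stable extension of $(A_1,R_1)$ from Section~\ref{sec:background:argsema}:

```python
from itertools import combinations

def subsets(xs):
    xs = sorted(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def is_complete(E, A, R):
    # conflict-free and containing precisely the arguments it defends
    conflict_free = not any((x, y) in R for x in E for y in E)
    defended = {x for x in A
                if all(any((z, y) in R for z in E)
                       for y in A if (y, x) in R)}
    return conflict_free and set(E) == defended

def rng(E, A, R):
    # the range of E: E itself plus the arguments it attacks
    return set(E) | {x for x in A if any((y, x) in R for y in E)}

def semi_stable(A, R):
    comp = [E for E in subsets(A) if is_complete(E, A, R)]
    return [E for E in comp
            if not any(rng(E, A, R) < rng(F, A, R) for F in comp)]

A1 = {"a", "b", "c", "d"}
R1 = {("b", "a"), ("c", "b"), ("d", "b"), ("d", "d")}

assert semi_stable(A1, R1) == [frozenset({"a", "c"})]
assert rng(frozenset({"a", "c"}), A1, R1) == {"a", "b", "c"}  # d stays outside
```

Since the range of the unique semi-stable extension misses $d$, $(A_1,R_1)$ has no stable extension (a stable extension would be a complete extension with full range).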
\begin{table} \begin{align*} \mathsf{Stable} =\ & \mathsf{Well} \land \bigwedge_{x\in \mathcal{U}} \Big( \aw x\to\big(\acc x \leftrightarrow \lnot \bigvee_{y \in \mathcal{U}}(\acc y \land \att y x)\big)\Big), \\ \mathsf{Complete} =\ & \mathsf{ConFree} \land \bigwedge_{x\in \mathcal{U}} \Big(\acc x \leftrightarrow \bigwedge_{y \in \mathcal{U}}\big((\aw y \land \att y x)\to \bigvee_{z \in \mathcal{U}}(\acc z \land \att z y) \big)\Big), \\ \mathsf{Grounded} =\ & \mathsf{Complete} \land [\mathsf{mkFalseOne}(\mathsf{IN}_{\mathcal{U}});\mathsf{mkFalseSome}(\mathsf{IN}_{\mathcal{U}})] \lnot \mathsf{Complete} , \\ \mathsf{Preferred} =\ & \mathsf{Admissible} \land [\mathsf{mkTrueOne}(\mathsf{IN}_{\mathcal{U}});\mathsf{mkTrueSome}(\mathsf{IN}_{\mathcal{U}})] \lnot \mathsf{Admissible} , \\ \mathsf{Naive} =\ & \mathsf{ConFree} \land [\mathsf{mkTrueOne}(\mathsf{IN}_{\mathcal{U}})]\lnot \mathsf{ConFree} , \\ \mathsf{SemiStable} =\ & \mathsf{Complete} \land \lbox{ \mathsf{copy}(\mathsf{IN}_{\mathcal{U}}) ; \mathsf{makeExt}^{co} } \left( \mathsf{IncludesCp} \rightarrow \mathsf{IncludedInCp} \right) , \\ \mathsf{Stage} =\ & \mathsf{ConFree} \ \land \\& \lbox{ \mathsf{copy}(\mathsf{IN}_{\mathcal{U}}) ; \mathsf{vary} (\mathsf{IN}_{\mathcal{U}});\mathsf{ConFree}? } \left( \mathsf{IncludesCp} \rightarrow \mathsf{IncludedInCp} \right) , \\ \mathsf{Ideal} =\ & \mathsf{IdealSet}\land [\mathsf{mkTrueOne}(\mathsf{IN}_{\mathcal{U}});\mathsf{mkTrueSome}(\mathsf{IN}_{\mathcal{U}})] \lnot \mathsf{IdealSet} , \\ \mathsf{Eager} =\ & \mathsf{EagerSet}\land [\mathsf{mkTrueOne}(\mathsf{IN}_{\mathcal{U}});\mathsf{mkTrueSome}(\mathsf{IN}_{\mathcal{U}})] \lnot \mathsf{EagerSet} . \end{align*} \caption{Encoding the Semantics of Section~\ref{sec:background:argsema} by DL-PA formulas}\label{table:encodings} \end{table} \begin{theorem}\label{thrm:encodings} Let $\sigma\in \{st, co, gr, pr, se, id, ea, na, stg\}$. Let $\mathit{v}\subseteq \propset$. Let $\af$ be an AF. 
Then: \begin{itemize} \item $\mathit{v}\models \varphi_{\sigma}$ iff $\extOf \mathit{v} \in \sigma(A_{\mathit{v}},R_{\mathit{v}})$; \item $\sigma(A,R)=\big\{\extOf \mathit{v} \mid (\mathit{v}_{\af},\mathit{v})\in ||\mathsf{makeExt}^{\sigma}||\big\}$. \end{itemize} \end{theorem} The proof can be found in the \nameref{sec:app}, just as the proofs or proof sketches of all other results. \section{Qualitative Uncertainty in Abstract Argumentation through DL-PA}\label{sec:uncertainty} In this section, we review existing formalisms for representing uncertainty about AFs. We restrict our attention to \emph{qualitative} forms of uncertainty, that is, representations using neither probabilities nor any other kind of numeric device. In particular, we cover: \emph{incomplete argumentation frameworks} \cite{baumeister2021acceptance}, their \emph{enriched version} \cite{mailly2020note}, \emph{constrained incomplete argumentation frameworks} \cite{clar2021,maillyciafs}, and \emph{incomplete argumentation frameworks with dependencies} \cite{fazzingaijcai21,fazzingakr21}. The main motivation for the study of these formalisms is that there are several sources of uncertainty in real-life argumentation. For instance, arguments can be so complex that the reasoning agent is not sure whether they are to be taken into account or whether they attack other arguments. Perhaps more frequently, uncertainty appears in argumentation when an agent reasons about \emph{her opponent's argumentative situation}. Due to the lack of total knowledge about her adversary, the agent might doubt whether the latter entertains some of the arguments or sees some of the attacks. This is in turn crucial for choosing the right arguments to convince her opponent. We keep this latter intuition in mind as a guideline for the rest of the paper. All the formalisms of the present section share the idea of representing uncertainty through the notion of \emph{completion}. 
A completion is a hypothetical removal of uncertainty, such that the formalised agent reasons under the assumption that her opponent's AF is such-and-such. In epistemic logic terms, this amounts to the notion of \emph{possible world}, as mentioned in \cite{baumeister2018credulous,baumeister2018complexity}, and studied in detail in \cite{proietti2021,kr}. For a more detailed comparison between the formalisms presented in this section and epistemic logic, the interested reader is referred to Section \ref{sec:final}. After introducing each formalism we explain how the main associated reasoning tasks can be reduced to DL-PA model checking problems. We conclude by providing a comparison of the different approaches. \subsection{Incomplete AFs}\label{sec:iafs} An \textbf{incomplete AF} (IAF) \cite{baumeister2021acceptance} is a pair $\mathsf{IAF}=(F,U)$, where $F=(A^{F},R^{F})$ is called the \emph{fixed part}, $U=(A^{?},R^{?})$ is called the \emph{uncertain part}, $R^{F},R^{?}\subseteq (A^{F}\cup A^{?})\times(A^{F}\cup A^{?})$, $A^{F}\cap A^{?}=\emptyset$ and $R^{F}\cap R^{?}=\emptyset$. Hence an IAF is basically an AF where arguments and attacks have been split into two disjoint sets. We sometimes omit internal parentheses when talking about IAFs, that is, we write $(\argset^{F}\!,\attrel^{F}\!,\argset^{?}\!,\attrel^{?})$ instead of $((A^{F},R^{F}),(A^{?},R^{?}))$. Note that, by definition, there can be fixed attacks among uncertain arguments (sometimes called \emph{conditionally definite attacks} \cite{baumeister2018credulous}). We can intuitively think of these as attacks the agent thinks her opponent entertains whenever she thinks that her opponent is aware of the involved arguments. 
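Anticipating the formal definition given below, the completion space of an IAF can be enumerated by choosing which uncertain arguments to include and then which of the applicable uncertain attacks to add on top of the (restricted) fixed ones. The following Python sketch does this for a small IAF of our own invention (fixed part $\{a,b\}$ with attack $(b,a)$; uncertain argument $c$ and uncertain attack $(c,b)$):

```python
from itertools import combinations

def subsets(xs):
    xs = sorted(xs)
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def restrict(R, A):
    return {(x, y) for (x, y) in R if x in A and y in A}

def completions(Af, Rf, Au, Ru):
    result = []
    for extra in subsets(Au):
        A = set(Af) | extra                   # fixed arguments are always present
        fixed = restrict(Rf, A)               # fixed attacks among present arguments
        for add in subsets(restrict(Ru, A)):  # any subset of applicable uncertain attacks
            result.append((frozenset(A), frozenset(fixed | add)))
    return result

comps = completions({"a", "b"}, {("b", "a")}, {"c"}, {("c", "b")})
assert len(comps) == 3
assert (frozenset({"a", "b"}), frozenset({("b", "a")})) in comps
assert (frozenset({"a", "b", "c"}),
        frozenset({("b", "a"), ("c", "b")})) in comps
```

Note that every fixed argument occurs in every completion, whereas an uncertain attack can only occur in completions containing both of its endpoints.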
\par A \textbf{completion} of an $\mathsf{IAF}=(\argset^{F}\!,\attrel^{F}\!,\argset^{?}\!,\attrel^{?})$ is any AF $(\argset^{\ast},\attrel^{\ast})$ such that: \begin{itemize} \item $A^{F}\subseteq A^{\ast}\subseteq A^{F}\cup A^{?}$; and \item $R^{F}\upharpoonright_{ A^{\ast}}\subseteq R^{\ast}\subseteq (R^{F}\cup R^{?})\upharpoonright_{ A^{\ast}}$. \end{itemize} Given an IAF $\mathsf{IAF}$, we write $\mathsf{completions}(\mathsf{IAF})$ for the set of all its completions. A standard AF $(A,R)$ can be identified with the IAF $(A,R,\emptyset,\emptyset)$, which is the unique completion of itself. Two subclasses of IAFs are well-studied in the literature, namely \textbf{attack-incomplete AFs} (att-IAFs, for short),\footnote{This subclass was previously studied under the name of \emph{partial AFs} \cite{cayrol2007partial,coste2007merging}.} which are IAFs with empty $A^{?}$; and \textbf{argument-incomplete AFs} (arg-IAFs, for short), which are IAFs with empty $R^{?}$. \begin{example}\label{ex:iaf} Let us consider $\mathsf{IAF}_0=(A^{F}_0,R^{F}_{0},A^{?}_0,R^{?}_0)$, where $A^{F}_0=\{a,b,d\}$, $R^{F}_{0}=\{(b,a),(d,a),(c,b),(e,d),(c,e),(e,c), (f,e)\}$, $A^{?}_{0}=\{c,e,f\}$ and $R^{?}_{0}=\{(f,c)\}$, graphically represented below. The set of completions of $\mathsf{IAF}_0$ is the one depicted in Table~\ref{tab:comp} except for the cells \textbf{B2}, \textbf{C2}, \textbf{B4}, \textbf{C4}, \textbf{B5} and \textbf{C5}. 
\begin{center} \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=1 cm of a]{}; \node[world] (b) [above=0.5 cm of pos]{b}; \node[world,dashed] (c) [left=1 cm of b]{c}; \node[world] (d) [below=0.5 cm of pos]{d}; \node[world,dashed] (e) [left=1 cm of d]{e}; \node[world,dashed] (f) [left=3 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (c) edge[bend right] (e); \draw[->] (e) edge[bend right] (c); \draw[->] (f) edge (e); \draw[->] (f) edge[dashed] (c); \end{tikzpicture} \end{center} \end{example} \begin{table} \footnotesize \begin{tabular}{c| c | c | c|} & \textbf{A} & \textbf{B} & \textbf{C} \\ \hline & & & \\ \textbf{1}& \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node (c) [left=0.5 cm of b]{}; \node[world] (d) [below=0.25 cm of pos]{d}; \node (e) [left=0.5 cm of d]{}; \node (f) [left=1.5 cm of pos]{}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \end{tikzpicture} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node (e) [left=0.5 cm of d]{}; \node (f) [left=1.5 cm of pos]{}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \end{tikzpicture} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node (c) [left=0.5 cm of b]{}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node (f) [left=1.5 cm of pos]{}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (e) edge (d); \end{tikzpicture} \\ \hline & & & \\ 
\textbf{2} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node (f) [left=1.5 cm of pos]{}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (c) edge[bend right] (e); \draw[->] (e) edge[bend right] (c); \end{tikzpicture} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node (f) [left=1.5 cm of pos]{}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (c) edge[bend right] (e); \end{tikzpicture} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node (f) [left=1.5 cm of pos]{}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (e) edge[bend right] (c); \end{tikzpicture} \\ \hline & & & \\ \textbf{3} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node (c) [left=0.5 cm of b]{}; \node[world] (d) [below=0.25 cm of pos]{d}; \node (e) [left=0.5 cm of d]{}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \end{tikzpicture} \quad & \quad \begin{tikzpicture}[modal,world/.append style= {minimum 
size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node (e) [left=0.5 cm of d]{}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \end{tikzpicture} \quad & \quad \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node (e) [left=0.5 cm of d]{}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (f) edge (c); \end{tikzpicture} \\ \hline & & & \\ \textbf{4} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (c) edge[bend right] (e); \draw[->] (e) edge[bend right] (c); \draw[->] (f) edge (e); \end{tikzpicture} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (c) edge[bend right] (e); \draw[->] (f) edge (e); \end{tikzpicture} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] 
\node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (e) edge[bend right] (c); \draw[->] (f) edge (e); \end{tikzpicture} \\ \hline & & & \\ \textbf{5} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (c) edge[bend right] (e); \draw[->] (e) edge[bend right] (c); \draw[->] (f) edge (e); \draw[->] (f) edge (c); \end{tikzpicture} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[->] (c) edge[bend right] (e); \draw[->] (f) edge (e); \draw[->] (f) edge (c); \end{tikzpicture} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node[world] (c) [left=0.5 cm of b]{c}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge 
(b); \draw[->] (e) edge (d); \draw[->] (e) edge[bend right] (c); \draw[->] (f) edge (e); \draw[->] (f) edge (c); \end{tikzpicture} \\ \hline & & & \\ \textbf{6} & \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=0.5 cm of a]{}; \node[world] (b) [above=0.25 cm of pos]{b}; \node (c) [left=0.5 cm of b]{}; \node[world] (d) [below=0.25 cm of pos]{d}; \node[world] (e) [left=0.5 cm of d]{e}; \node[world] (f) [left=1.5 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (f) edge (e); \end{tikzpicture} & & \\ & & & \\ \hline \end{tabular} \bigskip \caption{Completions of $\mathsf{rIAF}_0$; the completions of $\mathsf{IAF}_0$ are those outside the cells \textbf{B2}, \textbf{C2}, \textbf{B4}, \textbf{C4}, \textbf{B5} and \textbf{C5}. The row labels [\textbf{1}, \textbf{2},..., \textbf{6}] and the column labels [\textbf{A}, \textbf{B}, \textbf{C}] are included only for ease of reference. (Empty cells do \emph{not} represent the empty completion $(\emptyset,\emptyset)$.)} \label{tab:comp} \end{table} \normalsize Classic reasoning tasks such as extension enumeration or argument acceptance have been generalized from AFs to IAFs. We here focus on acceptance queries such as the following: \begin{center} \begin{tabular}{|l|} \hline $\sigma$-Necessary-Credulous-Acceptance ($\sigma$-NCA)\\ \hline \textbf{Given:} An IAF $\mathsf{IAF}=(\argset^{F}\!,\attrel^{F}\!,\argset^{?}\!,\attrel^{?})$ and an argument $a\in \argset^{\fix}$. \\ \textbf{Question:} Is it true that for every $(A^{\ast},R^{\ast}) \in \mathsf{completions}(\mathsf{IAF})$ \\ there is an $E\in \sigma(A^{\ast},R^{\ast})$ such that $a \in E$? \\ \hline \end{tabular} \end{center} \noindent We can switch quantifiers in the definition above in order to obtain different variants of the problem: necessary-sceptical ($\sigma$-NSA), possible-credulous ($\sigma$-PCA) and possible-sceptical ($\sigma$-PSA) acceptance. Note that the only difference between these reasoning tasks and standard acceptance problems in AFs is an added quantification layer, namely quantification over completions. 
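To make the added quantification layer concrete, the acceptance queries can be prototyped by brute force. The following Python sketch is purely illustrative and ours (it enumerates completions and extensions directly rather than executing DL-PA programs; all function names are our own); it decides $\sigma$-NCA for the stable semantics:

```python
from itertools import chain, combinations

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def completions(A_fix, R_fix, A_unc, R_unc):
    """All completions (A*, R*): add any subset of the uncertain arguments,
    then any subset of the uncertain attacks restricted to A*."""
    result = []
    for extra in powerset(A_unc):
        A_star = set(A_fix) | set(extra)
        fixed = {(x, y) for (x, y) in R_fix if x in A_star and y in A_star}
        optional = [(x, y) for (x, y) in R_unc if x in A_star and y in A_star]
        for sub in powerset(optional):
            result.append((A_star, fixed | set(sub)))
    return result

def stable(A, R):
    """Stable extensions of (A, R): conflict-free sets attacking all outside arguments."""
    return [set(E) for E in powerset(A)
            if not any((x, y) in R for x in E for y in E)            # conflict-free
            and all(any((x, y) in R for x in E) for y in set(A) - set(E))]

def nca(A_fix, R_fix, A_unc, R_unc, a, sem=stable):
    """sigma-NCA: in *every* completion, *some* extension contains a."""
    return all(any(a in E for E in sem(A_s, R_s))
               for (A_s, R_s) in completions(A_fix, R_fix, A_unc, R_unc))

# IAF_0 of the running example
A_F, A_U = {'a', 'b', 'd'}, {'c', 'e', 'f'}
R_F = {('b', 'a'), ('d', 'a'), ('c', 'b'), ('e', 'd'),
       ('c', 'e'), ('e', 'c'), ('f', 'e')}
R_U = {('f', 'c')}
```

On $\mathsf{IAF}_0$ this yields its ten completions, and $\mathsf{nca}$ reports that $a$ is not necessarily credulously accepted under the stable semantics: it already fails in the smallest completion $(\{a,b,d\},\{(b,a),(d,a)\})$, whose only stable extension is $\{b,d\}$. The other variants are obtained by swapping the \texttt{all}/\texttt{any} quantifiers.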
\smallskip Our aim now is to reduce these acceptance problems to DL-PA model checking problems. As we already have programs for building the extensions of AFs, the fundamental step in this reduction consists in designing a DL-PA program, $\mathsf{makeComp}^{\mathsf{IAF}}$, that computes all the completions of $\mathsf{IAF}$. First, the \textbf{valuation associated to $\mathsf{IAF}$} is determined by its fixed part: \begin{align*} v_{\mathsf{IAF}}&=v_{(A^{F},R^{F})} \\&= \mathsf{AW}_{A^{F}} \cup \mathsf{ATT}_{R^{F}} \\&= \{\aw x \mid x \in \argset^{\fix}\}\cup\{\att x y \mid (x,y) \in \attrel^{\fix}\} . \end{align*} Note that $(A_{\mathit{v}_{\mathsf{IAF}}},R_{\mathit{v}_{\mathsf{IAF}}})$ is already a completion of $\mathsf{IAF}$: it is the smallest one, where only fixed arguments and fixed attacks between them are considered. In order to compute all the completions of $\mathsf{IAF}$ we make true subsets of propositional variables representing arguments in $A^{?}$ and attacks in $R^{?}$: \begin{align*} \mathsf{makeComp}^{\mathsf{IAF}}&=\mathsf{mkTrueSome}(\mathsf{AW}_{A^{?}}) ; \mathsf{mkTrueSome}(\mathsf{ATT}_{R^{?}})\text{.} \end{align*} The next proposition shows that our original target is reached. \begin{proposition}\label{prop:iafenco} Let $\mathsf{IAF}=(\argset^{F}\!,\attrel^{F}\!,\argset^{?}\!,\attrel^{?})$. Then: \begin{itemize} \item If $(\mathit{v}_{\mathsf{IAF}},\mathit{v})\in ||\mathsf{makeComp}^{\mathsf{IAF}}||$, then $(A_{\mathit{v}},R_{\mathit{v}})\in \mathsf{completions}(\mathsf{IAF})$. \item If $(A^{\ast},R^{\ast})\in \mathsf{completions}(\mathsf{IAF})$, then $(\mathit{v}_{\mathsf{IAF}},\mathit{v}_{(A^{\ast},R^{\ast})})\in ||\mathsf{makeComp}^{\mathsf{IAF}}||$. \end{itemize} \end{proposition} Using this result together with the general technique to compute extensions provided in Section~\ref{sec:semantics}, we can reduce reasoning problems in IAFs to model checking problems in DL-PA. 
\begin{proposition}\label{prop:rediafs} Let $\mathsf{IAF}=(F,U)$, $\sigma \in \{st, co, gr, pr, se, id, ea, na, stg\}$, and $a \in A^{F}$. Then: \begin{itemize} \item The answer to $\sigma$-NSA with input $\mathsf{IAF}$ and $a$ is yes iff\\ $v_{\mathsf{IAF}}\models [\mathsf{makeComp}^{\mathsf{IAF}};\mathsf{makeExt}^{\sigma}] \acc a$. \item The answer to $\sigma$-NCA with input $\mathsf{IAF}$ and $a$ is yes iff \\ $v_{\mathsf{IAF}}\models [\mathsf{makeComp}^{\mathsf{IAF}}]\langle\mathsf{makeExt}^{\sigma}\rangle \acc a$. \item The answer to $\sigma$-PCA with input $\mathsf{IAF}$ and $a$ is yes iff \\ $v_{\mathsf{IAF}}\models \langle\mathsf{makeComp}^{\mathsf{IAF}};\mathsf{makeExt}^{\sigma}\rangle \acc a$. \item The answer to $\sigma$-PSA with input $\mathsf{IAF}$ and $a$ is yes iff \\ $v_{\mathsf{IAF}}\models \langle\mathsf{makeComp}^{\mathsf{IAF}}\rangle [\mathsf{makeExt}^{\sigma}] \acc a$. \end{itemize} \end{proposition} \subsection{Rich Incomplete AFs}\label{sec:riafs} A \textbf{rich incomplete AF} (rIAF) \cite{mailly2020note} extends an IAF in its uncertain part $U$ by adding a new (symmetric and irreflexive) uncertainty relation $\attrel^{\leftrightarrow} \subseteq (A^{F}\cup A^{?})\times (A^{F}\cup A^{?})$ such that $\attrel^{\leftrightarrow} \cap R^{F} = \emptyset$ and $\attrel^{\leftrightarrow} \cap R^{?} = \emptyset$. We sometimes omit internal brackets when talking about rIAFs and note them $(\argset^{F},\attrel^{F},\argset^{?}\!, \attrel^{?}, \symattrel)$. The new component, $\attrel^{\leftrightarrow}$, is informally understood as a set of attacks whose existence is known, but whose direction is unknown. The introduction of $\attrel^{\leftrightarrow}$ can be motivated by pointing out that attacks have two essential properties: their existence and their direction. Thus, while $R^{?}$ captures uncertainty about the former, $\attrel^{\leftrightarrow}$ captures uncertainty about the latter. 
Note that any IAF can be understood as a rIAF with empty $\attrel^{\leftrightarrow}$. The notion of completion is easily adapted to rIAFs, capturing the intuitions about $\attrel^{\leftrightarrow}$ that we have just mentioned. A \textbf{completion} of $\mathsf{rIAF}=(\argset^{F},\attrel^{F},\argset^{?}\!, \attrel^{?}, \symattrel)$ is any AF $(\argset^{\ast},\attrel^{\ast})$ such that: \begin{itemize} \item $\argset^{\fix} \subseteq A^{\ast}\subseteq (\argset^{\fix}\cup A^{?})$; \item $\attrel^{\fix}\upharpoonright_{ A^{\ast}}\subseteq R^{\ast}\subseteq (\attrel^{\fix}\cup R^{?}\cup \attrel^{\leftrightarrow})\upharpoonright_{ A^{\ast}}$; \item for every $x,y\in A^{\ast}$: $(x,y)\in \attrel^{\leftrightarrow}$ implies $(x,y)\in R^{\ast}$ or $(y,x)\in R^{\ast}$. \end{itemize} \begin{example}\label{ex:riaf} Let $\mathsf{rIAF}_0=(A_0^{F},R^{F}_0,A_0^{?},R_0^{?},R_0^{\leftrightarrow})$ where $A^{F}_0=\{a,b,d\}$, $R_{0}^{F}=\{(b,a), (d,a), (c,b), (e,d), (f,e) \}$, $A_0^{?}=\{c,e,f\}$, $R_{0}^{?}=\{(f,c) \}$, and $R_{0}^{\leftrightarrow}=\{(c,e),(e,c)\}$. We represent $\mathsf{rIAF}_0$ graphically as follows: \begin{center} \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=1 cm of a]{}; \node[world] (b) [above=0.5 cm of pos]{b}; \node[world,dashed] (c) [left=1 cm of b]{c}; \node[world] (d) [below=0.5 cm of pos]{d}; \node[world,dashed] (e) [left=1 cm of d]{e}; \node[world,dashed] (f) [left=3 cm of pos]{f}; \draw[->] (b) edge (a); \draw[->] (d) edge (a); \draw[->] (c) edge (b); \draw[->] (e) edge (d); \draw[<->] (c) edge[double,dashed] (e); \draw[->] (f) edge (e); \draw[->] (f) edge[dashed] (c); \end{tikzpicture} \end{center} The set of completions of $\mathsf{rIAF}_0$ is depicted in Table~\ref{tab:comp}. 
\end{example} \medskip The computation of the completions of a rich IAF in DL-PA gets slightly more complicated since the program $\mathsf{mkTrueSome}$ does not suffice to deal with the symmetric attacks of $\attrel^{\leftrightarrow}$. We can, however, define a specific program for this purpose. First of all, given $\mathsf{rIAF}=(\argset^{F},\attrel^{F},\argset^{?}\!, \attrel^{?}, \symattrel)$, the \textbf{valuation associated to $\mathsf{rIAF}$} is determined by its fixed part as before: \begin{align*} v_{\mathsf{rIAF}} &=v_{(A^{F},R^{F})} \\&= \mathsf{AW}_{A^{F}} \cup \mathsf{ATT}_{R^{F}} \\&= \{\aw x \mid x \in \argset^{\fix}\}\cup\{\att x y \mid (x,y) \in \attrel^{\fix}\} . \end{align*} Note that, contrary to what happened with IAFs, $(A_{\mathit{v}_{\mathsf{rIAF}}},R_{\mathit{v}_{\mathsf{rIAF}}})$ is \emph{not} always a completion of $\mathsf{rIAF}$: this fails to be the case as soon as $\attrel^{\leftrightarrow} \cap (A^{F}\times A^{F})$ is nonempty. Let us now define the program that integrates the elements of $\attrel^{\leftrightarrow}$ into each completion. Let $\mathsf{ATT}_{R}=\{\att{x_{1}}{y_{1}},...,\att{x_{n}}{y_{n}} \}$\footnote{Remember that $\mathsf{ATT}_{R}= \{\att x y \mid (x,y) \in R\}$, and that $\mathsf{ATT}_{R}$ is a subset of the set of propositional variables $\mathsf{ATT}_{\mathcal{U} \times \mathcal{U}}$.} be a set of attack variables, and define the program \begin{align*} \mathsf{dis}(\mathsf{ATT}_{R})=\left( \assgntop{ \att{x_{1}}{y_{1}} } \cup \assgntop{\att{y_{1}}{x_{1}} } \right); \ldots ;\left( \assgntop{ \att{x_{n}}{y_{n}} } \cup \assgntop{ \att{y_{n}}{x_{n}} } \right)\text{.} \end{align*} Intuitively, $\mathsf{dis}(\mathsf{ATT}_{R})$ makes true at least one of the variables from the set $\{\att x y, \att y x\}$, for each $(x,y)\in R$. Moreover, when applied to a symmetric relation $\attrel^{\leftrightarrow}$, $\mathsf{dis}$ makes true either $\att x y$, or $\att y x$, or both, for every $(x,y)\in \attrel^{\leftrightarrow}$. 
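For illustration, the completions of a rIAF can also be enumerated by brute force in a few lines of Python (a sketch of ours, independent of the DL-PA encoding); the effect of $\mathsf{dis}$ is mimicked by a three-way choice per undirected uncertain attack:

```python
from itertools import chain, combinations, product

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def riaf_completions(A_fix, R_fix, A_unc, R_unc, R_sym):
    """All completions of the rIAF (A_fix, R_fix, A_unc, R_unc, R_sym)."""
    result = []
    for extra in powerset(A_unc):
        A_star = set(A_fix) | set(extra)
        fixed = {(x, y) for (x, y) in R_fix if x in A_star and y in A_star}
        optional = [(x, y) for (x, y) in R_unc if x in A_star and y in A_star]
        # undirected uncertain attacks with both endpoints in A*
        edges = {frozenset(p) for p in R_sym if all(z in A_star for z in p)}
        # dis: per edge, keep (x,y), or (y,x), or both
        choices = [[{(x, y)}, {(y, x)}, {(x, y), (y, x)}]
                   for x, y in (sorted(e) for e in edges)]
        for sub in powerset(optional):
            for pick in product(*choices):
                result.append((A_star, fixed | set(sub) | set().union(*pick)))
    return result

# rIAF_0 of the running example
A_F, A_U = {'a', 'b', 'd'}, {'c', 'e', 'f'}
R_F = {('b', 'a'), ('d', 'a'), ('c', 'b'), ('e', 'd'), ('f', 'e')}
R_U = {('f', 'c')}
R_S = {('c', 'e'), ('e', 'c')}
```

On $\mathsf{rIAF}_0$ this returns exactly the sixteen AFs of Table~\ref{tab:comp}: the three orientations of the attack between $c$ and $e$ account for the six completions that $\mathsf{IAF}_0$ lacks.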
We are now ready to define the program $\mathsf{makeComp}$ in its version for rIAFs. Given $\mathsf{rIAF}=(\argset^{F},\attrel^{F},\argset^{?}\!, \attrel^{?}, \symattrel)$, let \begin{align*} \mathsf{makeComp}^{\mathsf{rIAF}}&= \mathsf{mkTrueSome}(\mathsf{AW}_{A^{?}});\mathsf{mkTrueSome}(\mathsf{ATT}_{R^{?}});\mathsf{dis}(\mathsf{ATT}_{\attrel^{\leftrightarrow}})\text{.} \end{align*} The following proposition states that the above program is correct. \begin{proposition}\label{prop:riafenco} Let $\mathsf{rIAF}=(\argset^{F},\attrel^{F},\argset^{?}\!, \attrel^{?}, \symattrel)$, then: \begin{itemize} \item If $(\mathit{v}_{\mathsf{rIAF}},\mathit{v})\in ||\mathsf{makeComp}^{\mathsf{rIAF}}||$, then $(A_{\mathit{v}},R_{\mathit{v}})\in \mathsf{completions}(\mathsf{rIAF})$. \item If $(A^{\ast},R^{\ast})\in \mathsf{completions}(\mathsf{rIAF})$, then $(\mathit{v}_{\mathsf{rIAF}},\mathit{v}_{(A^{\ast},R^{\ast})})\in ||\mathsf{makeComp}^{\mathsf{rIAF}}||$. \end{itemize} \end{proposition} Again, acceptance problems can be reduced to DL-PA model checking problems. Note that the definition of acceptance problems for rIAFs is just as for IAFs (we only have to change the input). Let us just state the reduction result we are after: \begin{proposition}\label{prop:redriafs} Let $\sigma \in \{st, co, gr, pr, se, id, ea, na, stg\}$. Let $\mathsf{rIAF}=(\argset^{F},\attrel^{F},\argset^{?}\!, \attrel^{?}, \symattrel)$ and $a \in A^{F}$. Then: \begin{itemize} \item The answer to $\sigma$-NSA with input $\mathsf{rIAF}$ and $a$ is yes iff\\ $v_{\mathsf{rIAF}}\models [\mathsf{makeComp}^{\mathsf{rIAF}};\mathsf{makeExt}^{\sigma}] \acc a$. \item The answer to $\sigma$-NCA with input $\mathsf{rIAF}$ and $a$ is yes iff \\ $v_{\mathsf{rIAF}}\models [\mathsf{makeComp}^{\mathsf{rIAF}}]\langle\mathsf{makeExt}^{\sigma}\rangle \acc a$. 
\item The answer to $\sigma$-PCA with input $\mathsf{rIAF}$ and $a$ is yes iff \\ $v_{\mathsf{rIAF}}\models \langle\mathsf{makeComp}^{\mathsf{rIAF}};\mathsf{makeExt}^{\sigma}\rangle \acc a$. \item The answer to $\sigma$-PSA with input $\mathsf{rIAF}$ and $a$ is yes iff \\ $v_{\mathsf{rIAF}}\models \langle\mathsf{makeComp}^{\mathsf{rIAF}}\rangle [\mathsf{makeExt}^{\sigma}] \acc a$. \end{itemize} \end{proposition} \subsection{Shrinking the Set of Completions} Incomplete AFs (and their enriched version) deal with uncertainty about argumentative situations in a simple and intuitive manner. However, the range of situations that we can model with them is rather limited (as we will discuss in detail later on). This is the main motivation for the development of more expressive formalisms, and it actually led to concurrent proposals during the last year, either under the name of \emph{constrained incomplete argumentation frameworks} \cite{clar2021,maillyciafs} or \emph{incomplete argumentation frameworks with dependencies} \cite{fazzingaijcai21,fazzingakr21}. We start by presenting our version of constrained incomplete AFs (the one introduced in \cite{clar2021}), and then move to alternative approaches. \subsubsection{Constrained Incomplete AFs}\label{sec:ciafs} A \textbf{constrained incomplete AF} (cIAF) is a pair $\mathsf{cIAF}=(A,\varphi)$ where $A\subseteq \mathcal{U}$ is a set of arguments and $\varphi$ is a Boolean formula built over the set of propositional variables $\mathsf{AW}_{A}\cup \mathsf{ATT}_{A\times A}$.\footnote{We have slightly changed the original definition of cIAFs \cite{clar2021}, by switching the domain from $\mathcal{U}$ to an arbitrary $A$, because it allows for naturally plugging in argumentation dynamics, as we will do in Section~\ref{sec:cciafs}.} The set of \textbf{completions} of a given cIAF is $$\mathsf{completions}(A,\varphi) = \{(A_\mathit{v},R_\mathit{v})\mid \mathit{v} \subseteq \propset_{A} \text{ and } \mathit{v} \models \varphi\} . 
$$ \begin{example}\label{ex:ciaf} Let us consider $\mathsf{cIAF}_0 = ({A},\varphi)$ with ${A}=\{a,b\}$ and $\varphi=(\aw a \land \aw b) \land (\att a b \lor \att b a) \land \lnot (\att a b \land \att b a) \land \lnot \att a a \land \lnot \att b b$. The completions of $\mathsf{cIAF}_0$ are: \begin{center} \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) []{a}; \node[world] (b) [right=1 cm of a]{b}; \draw[->] (a) edge (b); \node[world] (a1) [right=4 cm of a]{a}; \node[world] (b1) [right=1 cm of a1]{b}; \draw[->] (b1) edge (a1); \end{tikzpicture} \end{center} \end{example} Notice that, unlike with the previous classes of structures, the set of completions of a cIAF might be empty, since $\varphi$ can be an inconsistent formula. Moreover, even when consistent, $\varphi$ might not be satisfied by any valuation representing a non-empty AF, so that we could get the empty AF $(\emptyset,\emptyset)$ as the only completion of a cIAF; e.g., $\mathsf{completions}((\{a\},\lnot \aw a))=\{(\emptyset,\emptyset)\}$. \paragraph{The need for cIAFs.} Besides their mathematical interest, one may wonder why one should use cIAFs. As mentioned, our main motivation is that, while the computational complexity of reasoning tasks associated to the previously introduced formalisms (i.e., (r)IAFs and subclasses) is well-known and relatively low, their modelling power is rather limited. Consider, for instance, a proponent reasoning about the view of her opponent in a very simple debate containing only two arguments $\{a,b\}$. Suppose that $a$ is an argument about public health policies stated by the right-wing presidential candidate. Similarly, $b$ is an argument stated by the left-wing candidate. Imagine that $a$ and $b$ have contradictory conclusions, so they are mutually incompatible. 
Let us informally understand $R$ as a \emph{defeat} relation here, that is, a relation based on logical incompatibility plus some kind of epistemic-based assessment of the involved arguments (for instance, regarding the reliability of their premisses), as is usually done in structured argumentation. Now, suppose our proponent knows that her opponent is polarized, in the sense that he (the opponent) is already inclined towards one side of the political spectrum, but she does not know which one; then the possible AFs that the agent attributes to her opponent are exactly the completions of $\mathsf{cIAF}_0$ (see Example~\ref{ex:ciaf}). As will be proved later (Proposition~\ref{prop:express}), there is no rIAF (and therefore no IAF) with the exact set of completions of $\mathsf{cIAF}_0$. \medskip Let us now show how cIAFs can be captured in DL-PA. Let $\mathsf{cIAF}=(\argset,\varphi)$, and define its \textbf{associated valuation} simply as the empty set, that is, $\mathit{v}_{\mathsf{cIAF}}=\emptyset$. (Actually any valuation over $\propset_{A}$ will do the job.) The program generating all completions of $\mathsf{cIAF}$ is defined as $$\mathsf{makeComp}^{\mathsf{cIAF}}=\mathsf{vary}(\mathsf{AW}_{A});\mathsf{vary}(\mathsf{ATT}_{A \times A});\varphi?\text{.}$$ The behaviour of $\mathsf{makeComp}^{\mathsf{cIAF}_0}$ (see Example \ref{ex:ciaf}) is illustrated in Figure~\ref{fig:ciafasmodel}. \color{black} \begin{proposition}\label{prop:encociafs} Let $\mathsf{cIAF}=(\argset,\varphi)$, then: \begin{itemize} \item If $(\mathit{v}_{\mathsf{cIAF}},\mathit{v})\in ||\mathsf{makeComp}^{\mathsf{cIAF}}||$, then $(A_{\mathit{v}},R_{\mathit{v}})\in \mathsf{completions}(\mathsf{cIAF})$. \item If $(A^{\ast},R^{\ast})\in \mathsf{completions}(\mathsf{cIAF})$, then $(\mathit{v}_{\mathsf{cIAF}},\mathit{v}_{(A^{\ast},R^{\ast})})\in ||\mathsf{makeComp}^{\mathsf{cIAF}}||$. 
\end{itemize} \end{proposition} \color{black} \begin{figure} \centering \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) []{a}; \node[world] (b) [right=1 cm of a]{b}; \draw[->] (a) edge (b); \node[draw, fit=(a) (b)](fit) [label=left:$\mathit{v}_1$] {}; \node[world] (a1) [right=4 cm of a]{a}; \node[world] (b1) [right=1 cm of a1]{b}; \draw[->] (b1) edge (a1); \node[draw, fit=(a1) (b1)](fit1) [label=right:$\mathit{v}_2$] {}; \node (pos) at ($(fit)!0.5!(fit1)$) {}; \node (emp) [below=1.5cm of pos] {$\emptyset$}; \draw[->] (emp) edge[double,dashed] (fit); \draw[->] (emp) edge[double,dashed] (fit1); \draw[->] (fit1) edge[reflexive above, double,dashed] (fit1); \draw[->] (fit) edge[reflexive above, double,dashed] (fit); \draw[<->] (fit) edge[double,dashed] (fit1); \end{tikzpicture} \caption{Completions of $\mathsf{cIAF}_0$ seen as valuations over $\propset_{\{a,b\}}$. Dashed double arrows represent the interpretation of $\mathsf{makeComp}^{\mathsf{cIAF}_{0}}$; the other valuations over $\propset_{\{a,b\}}$ are omitted.} \label{fig:ciafasmodel} \end{figure} Reasoning problems for (r)IAFs can be easily adapted to cIAFs: we just have to ensure that the argument about which we formulate the query belongs to all completions. As an example, consider: \begin{center} \begin{tabular}{|l|} \hline $\sigma$-Necessary-Credulous-Acceptance ($\sigma$-NCA)\\ \hline \textbf{Given:} A constrained IAF $\mathsf{cIAF}=(\argset,\varphi)$ \\ and an argument $a\in A$ such that $\models \varphi \to \aw a$. \\ \textbf{Question:} Is it true that for every \\ $(A^{\ast},R^{\ast}) \in \mathsf{completions}(\mathsf{cIAF})$ \\ there is an $E\in \sigma(A^{\ast},R^{\ast})$ such that $a \in E$? \\ \hline \end{tabular} \end{center} \noindent Note that requiring $\models \varphi \to \aw a$ amounts to requiring $a \in A$ for all $(A,R)\in \mathsf{completions}(A,\varphi)$. Once again, we can reduce acceptance problems in cIAFs to DL-PA model checking problems. 
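As a sanity check on the construction above, the semantics of $\mathsf{makeComp}^{\mathsf{cIAF}}$ can be prototyped directly: vary all awareness and attack variables and filter by $\varphi$. The following Python sketch is ours and purely illustrative (exponential in $|\propset_{A}|$, so only for small examples; $\varphi$ is passed as a Boolean predicate on valuations):

```python
from itertools import product

def ciaf_completions(A, phi):
    """Completions of the cIAF (A, phi): the AFs induced by the valuations
    over P_A that satisfy phi.  Mirrors vary(AW); vary(ATT); phi?"""
    A = sorted(A)
    variables = [('aw', x) for x in A] + [('att', x, y) for x in A for y in A]
    comps = set()
    for bits in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, bits))
        if phi(v):
            A_v = frozenset(x for x in A if v[('aw', x)])
            # only attacks between 'aware' arguments survive in the induced AF
            R_v = frozenset((x, y) for x in A_v for y in A_v if v[('att', x, y)])
            comps.add((A_v, R_v))
    return comps

# phi of cIAF_0: both arguments, exactly one attack between them, no self-attacks
phi0 = lambda v: (v[('aw', 'a')] and v[('aw', 'b')]
                  and (v[('att', 'a', 'b')] or v[('att', 'b', 'a')])
                  and not (v[('att', 'a', 'b')] and v[('att', 'b', 'a')])
                  and not v[('att', 'a', 'a')] and not v[('att', 'b', 'b')])
```

On $\mathsf{cIAF}_0$ this returns exactly the two single-attack AFs of Example~\ref{ex:ciaf}; with $\varphi=\lnot\aw a$ over $A=\{a\}$ it returns the singleton $\{(\emptyset,\emptyset)\}$, matching the remark above.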
\begin{proposition}\label{prop:redciaf} Let $\mathsf{cIAF}=(\argset,\varphi)$ and let $a \in A$ such that $\models \varphi \to \aw a$. Let $\sigma \in \{st, co, gr, pr, se, id, ea, na, stg\}$. Then: \begin{itemize} \item The answer to $\sigma$-NSA with input $\mathsf{cIAF}$ and $a$ is yes iff\\ $v_{\mathsf{cIAF}}\models [\mathsf{makeComp}^{\mathsf{cIAF}};\mathsf{makeExt}^{\sigma}] \acc a$. \item The answer to $\sigma$-NCA with input $\mathsf{cIAF}$ and $a$ is yes iff \\ $v_{\mathsf{cIAF}}\models [\mathsf{makeComp}^{\mathsf{cIAF}}]\langle\mathsf{makeExt}^{\sigma}\rangle \acc a$. \item The answer to $\sigma$-PCA with input $\mathsf{cIAF}$ and $a$ is yes iff \\ $v_{\mathsf{cIAF}}\models \langle\mathsf{makeComp}^{\mathsf{cIAF}};\mathsf{makeExt}^{\sigma}\rangle \acc a$. \item The answer to $\sigma$-PSA with input $\mathsf{cIAF}$ and $a$ is yes iff \\ $v_{\mathsf{cIAF}}\models \langle\mathsf{makeComp}^{\mathsf{cIAF}}\rangle [\mathsf{makeExt}^{\sigma}] \acc a$. \end{itemize} \end{proposition} We observe that beyond these reasoning problems one may also consider the reasoning task of checking emptiness of the set of completions of a cIAF. \subsubsection{Closely Related Approaches}\label{sec:related} As mentioned, the idea of shrinking the set of completions of an IAF led to concurrent proposals during the last year. In this subsection, we briefly present the two alternative approaches to our cIAFs of \cite{clar2021}. \paragraph{A more graph-theoretic version of cIAFs.} In \cite{maillyciafs}, Jean-Guy Mailly defined his version of cIAFs that we call \textbf{cIAFs$^{JM}$} here to avoid confusion. A cIAF$^{JM}$ is a pair of the form $(\mathsf{IAF},\varphi)$ where $\mathsf{IAF}=(\argset^{F}\!,\attrel^{F}\!,\argset^{?}\!,\attrel^{?})$ is an IAF and $\varphi$ is a Boolean formula over $\propset^{\mathsf{IAF}}=\mathsf{AW}_{A^{F}\cup A^{?}}\cup \mathsf{ATT}_{(A^{F}\cup A^{?}) \times (A^{F}\cup A^{?})}$. 
Then the set of \textbf{completions of} $\mathsf{cIAF}^{JM}=(\mathsf{IAF},\varphi)$ is defined as $$\mathsf{completions}(\mathsf{IAF})\cap\{(A_\mathit{v},R_\mathit{v})\mid \mathit{v} \subseteq \propset^{\mathsf{IAF}} \text{ and } v\models \varphi\}\text{.}$$ \paragraph{IAFs with dependencies.} In \cite{fazzingaijcai21,fazzingakr21}, the team from the University of Calabria formed by Bettina Fazzinga, Sergio Flesca and Filippo Furfaro introduced the notion of IAFs with dependencies.\footnote{\label{correlation}The term \emph{correlations} is used in \cite{fazzingaijcai21,fazzingakr21} as the informal counterpart of \emph{dependencies}. We stick to the latter term to avoid confusion.} More precisely, their two proposals respectively focus on two restricted classes of IAFs that we have already mentioned: arg-IAFs and att-IAFs. For the sake of brevity we only present here the notion of arg-IAF with dependencies of \cite{fazzingaijcai21}. Let $A$ be a set of arguments and let $X,Y\subseteq A$. First, a \textbf{dependency over $A$} is either $X \Rightarrow Y$ or $\mathsf{OP}(X)$ with $\mathsf{OP}\in\{\mathsf{OR}, \mathsf{NAND}, \mathsf{CHOICE}\}$. Second, an \textbf{arg-IAF with dependencies} (d-arg-IAF, for short) is a pair $((A,A^{?},R),\Delta)$, where $(A,A^{?},R)$ is an arg-IAF and $\Delta$ is a set of dependencies over $A^{?}$. Before defining the completions of a d-arg-IAF we need to settle how dependencies are to be interpreted in AFs. Let $\af=(A,R)$ be an AF and let $\delta$ be a dependency. 
We say that $\af$ satisfies $\delta$\footnote{ \cite{fazzingaijcai21} uses the expression ``$\af$ is valid w.r.t.\ $\delta$'', but our expression is more appropriate in a logical analysis.} iff one of the following mutually exclusive clauses holds: \begin{itemize} \item $\delta=X \Rightarrow Y$ and (if $X\subseteq A$, then $A\cap Y\neq \emptyset$); \item $\delta=\mathsf{OR}(X)$ and $A\cap X\neq \emptyset$; \item $\delta=\mathsf{NAND}(X)$ and $A\cap X\subset X$; \item $\delta=\mathsf{CHOICE}(X)$ and $|A \cap X|=1$. \end{itemize} The \textbf{completions of $((A,A^{?},R),\Delta)$} are defined as those completions of the arg-IAF $(A,A^{?},R)$ that satisfy every dependency $\delta \in \Delta$. \medskip The three alternative proposals are already compared in \cite{mailly2021yes}. We will provide some new insights beyond this in the next section. Let us just make a couple of points here. First, note that both versions of cIAFs as well as IAFs with dependencies are clearly inspired by \emph{constrained AFs} \cite{constrained}, which are pairs $(\af,\varphi)$ where $\varphi$ is used to shrink the set of \emph{extensions} of $\af$. Second, note that the reasoning tasks associated to both classes of structures are clearly encodable in DL-PA, but we do not work out the details here. Let us just point out that each set of dependencies $\Delta$ can be translated into a Boolean formula $t(\Delta)$, and then the program $\mathsf{makeComp}^{((A,A^{?},R),\Delta)}=\mathsf{vary}(\mathsf{AW}_{A^{?}});t(\Delta)?$ computes all the completions of $((A,A^{?},R),\Delta)$ when executed at $\mathit{v}_{(A,R)}$. \subsection{Comparison of the Different Approaches}\label{sec:comparison} Let us now compare the different approaches to representing qualitative uncertainty about AFs. We start with a couple of general considerations. 
\paragraph{Combinatorics vs.\ logic.} The spirit of the seminal works on IAFs was to represent uncertainty by defining completions as directed graphs whose domains and relations fall between given intervals. One may qualify this approach as ``combinatorial'', since, once the extremes of the interval are given (e.g.\ $A^{F}$ and $A^{F}\cup A^{?}$), the task of computing completions amounts to finding all possible combinations within the interval. Progressively, other reasoning features that we might qualify as ``logical'' have been integrated in the definition of completion. For instance, rIAFs introduce a sort of disjunctive reasoning through the addition of $\attrel^{\leftrightarrow}$. We can understand this transition from combinatorics to logic as a sort of spectrum: \begin{center} \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node (com) {Combinatorial}; \node (log) [right=8 cm of com] {Logical}; \node (pos) [right=4 cm of com] {}; \node (spe) [below=0.35 cm of pos]{IAFs \quad rIAFs \quad d-arg-IAFs \quad cIAFs$^{JM}$ \quad cIAFs }; \draw[->] (com) edge (log); \end{tikzpicture} \end{center} Note how at the right-hand extreme (cIAFs), the combinatorial nature of completions has completely vanished. \paragraph{Graphic representations.} One of the appealing features of IAFs is that they admit a very intuitive graphic representation (see Example~\ref{ex:iaf}). Interestingly, rIAFs and d-arg-IAFs can also be fully represented in a pictorial manner; see \cite{ijcai21} for examples with d-arg-IAFs. As pointed out in \cite{mailly2020note}, cIAFs$^{JM}$ only admit a partial graphic representation. Finally, this pictorial representability is lost by our cIAFs, which completely abstract away from the graph-theoretic definition of IAFs. Hence, in this respect, IAFs with dependencies compare better to cIAFs and cIAFs$^{JM}$. 
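Before turning to expressivity, let us note that the dependency-satisfaction clauses of d-arg-IAFs are straightforward to operationalise. The following brute-force Python sketch (function and variable names are ours, not taken from \cite{fazzingaijcai21}; it is purely illustrative) filters the completions of an arg-IAF through a set of dependencies:

```python
from itertools import combinations

def satisfies(A, delta):
    """Does the argument set A of an AF satisfy the dependency delta?
    delta is ('IMPLIES', X, Y) for X => Y, or (op, X) with op in
    {'OR', 'NAND', 'CHOICE'}."""
    if delta[0] == 'IMPLIES':                   # X => Y
        _, X, Y = delta
        return (not X <= A) or bool(A & Y)      # if X subseteq A then A meets Y
    op, X = delta
    if op == 'OR':                              # at least one element of X
        return bool(A & X)
    if op == 'NAND':                            # not all elements of X
        return A & X < X
    if op == 'CHOICE':                          # exactly one element of X
        return len(A & X) == 1
    raise ValueError(op)

def arg_iaf_completions(A_fix, A_unc, R):
    """Completions of the arg-IAF (A_fix, A_unc, R): one AF per subset of
    the uncertain arguments, with the attack relation restricted accordingly."""
    for k in range(len(A_unc) + 1):
        for extra in combinations(sorted(A_unc), k):
            A = set(A_fix) | set(extra)
            yield A, {(x, y) for (x, y) in R if x in A and y in A}

def d_arg_iaf_completions(A_fix, A_unc, R, deltas):
    """Completions of a d-arg-IAF: those satisfying every dependency."""
    return [(A, R_A) for (A, R_A) in arg_iaf_completions(A_fix, A_unc, R)
            if all(satisfies(A, d) for d in deltas)]
```

For instance, with $A=\{a\}$, $A^{?}=\{b,c\}$, $R=\{(b,a),(c,b)\}$ and the single dependency $\mathsf{OR}(\{b,c\})$, three of the four completions of the underlying arg-IAF survive the filter.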
\paragraph{Expressivity via sets of completions.} Following \cite{mailly2020note}, we can compare the modelling power of each of the previous formalisms for arguing with uncertainty using the sets of completions they can represent. Let $\mathcal{IAF}$ denote the class of all IAFs, and likewise for $att\text{-}\mathcal{IAF}$, $arg\text{-}\mathcal{IAF}$, $\mathcal{RIAF}$, $d\text{-}arg\text{-}\mathcal{IAF}$, $d\text{-}att\text{-}\mathcal{IAF}$, $c\text{-}\mathcal{IAF}$ and $c\text{-}\mathcal{IAF}^{JM}$. Let $\mathcal{X}$ and $\mathcal{Y}$ be metavariables denoting one of these classes. We say that $\mathcal{X}$ is \textbf{at least as expressive as} $\mathcal{Y}$ (in symbols: $\mathcal{X}\succeq\mathcal{Y}$) if, for every $Y \in \mathcal{Y}$ there is an $X \in \mathcal{X}$ such that $\mathsf{completions}(X)=\mathsf{completions}(Y)$. We use $\succ$ to denote the strict part of $\succeq$, we use $\preceq$ to denote the inverse of $\succeq$, and we use $\equiv$ to abbreviate $\succeq\cap \preceq$. For instance, it was proved in \cite{mailly2020note} that $\mathcal{RIAF} \succ \mathcal{IAF}$. \begin{proposition}\label{prop:express} cIAFs are strictly more expressive than IAFs and rIAFs. In other words, for every (r)IAF, there is a cIAF with the same set of completions; but there is a cIAF such that no (r)IAF has the same set of completions. \end{proposition} In the first part of the proof of the previous proposition---see the \nameref{sec:app}---we have used an argument that works for \emph{any} set of directed graphs with domain $\mathcal{U}$ (and not only for the completions of a given rIAF), hence we can state that: \begin{corollary} For any set $\mathsf{S}$ of directed graphs with domain $\mathcal{U}$ there is a cIAF $\mathsf{cIAF}$ such that $\mathsf{S}=\mathsf{completions}(\mathsf{cIAF})$. \end{corollary} In words, cIAFs are a maximally expressive formalism for representing qualitative uncertainty about AFs.
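One way to convince oneself of the corollary is to note that a suitable constraint is the disjunction, over the members of $\mathsf{S}$, of their complete Boolean descriptions. The following brute-force Python sketch (ours, purely illustrative; it represents the constraint in DNF as a list of total valuations) checks this on a toy universe:

```python
from itertools import product

def af_to_valuation(U, af):
    """Total valuation over the variables aw_x and att_{x,y} encoding af = (A, R)."""
    A, R = af
    v = {('aw', x): x in A for x in U}
    v.update({('att', x, y): (x, y) in R for x in U for y in U})
    return v

def theory_of(U, S):
    """Constraint in DNF, as a list of total term-valuations: one disjunct
    (the complete Boolean description) per graph in S."""
    return [af_to_valuation(U, af) for af in S]

def completions(U, phi):
    """Brute force: all graphs over U whose encoding is a model of the DNF phi."""
    props = [('aw', x) for x in U] + [('att', x, y) for x in U for y in U]
    out = set()
    for bits in product([False, True], repeat=len(props)):
        v = dict(zip(props, bits))
        if any(v == term for term in phi):      # v satisfies some disjunct
            A = frozenset(x for x in U if v[('aw', x)])
            R = frozenset((x, y) for x in U for y in U if v[('att', x, y)])
            out.add((A, R))
    return out
```

Each disjunct completely describes one target graph, so the constraint has exactly $|\mathsf{S}|$ models and the recovered set of completions coincides with $\mathsf{S}$.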
Using arguments similar to those employed in the proof of Proposition~\ref{prop:express} we can provide the following general result: \begin{proposition}\label{prop:ex} The relations of Figure~\ref{fig:exp} hold, where an arrow from $\mathcal{X}$ to $\mathcal{Y}$ means that $ \mathcal{X} \preceq \mathcal{Y}$ and where transitive and reflexive arrows are omitted. \end{proposition} \begin{figure} \centering \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node (afs) []{$\mathcal{AF}$}; \node (pos) [right=0.5 cm of afs]{}; \node (attiafs) [above=0.75 cm of pos]{$att\text{-}\mathcal{IAF}$}; \node (dattiafs) [right=0.75 cm of attiafs]{$d\text{-}att\text{-}\mathcal{IAF}$}; \node (argiafs) [below=0.75 cm of pos]{$arg \text{-}\mathcal{IAF}$}; \node (dargiafs) [right=0.75 cm of argiafs]{$d\text{-}arg\text{-}\mathcal{IAF}$}; \node (iafs) [right=0.75 cm of pos]{$\mathcal{IAF}$}; \node (pos1) [right=0.45 cm of iafs]{}; \node (riafs) [right=0.75 cm of iafs] {$\mathcal{RIAF}$}; \node (ciafs) [right=0.75 cm of riafs]{$c\text{-}\mathcal{IAF}\text{,}c\text{-}\mathcal{IAF}^{JM}$}; \draw[->] (afs) edge (attiafs); \draw[->] (afs) edge (argiafs); \draw[->] (attiafs) edge (iafs); \draw[->] (attiafs) edge (dattiafs); \draw[->] (argiafs) edge (iafs); \draw[->] (argiafs) edge (dargiafs); \draw[->] (iafs) edge (riafs); \draw[->] (dargiafs) edge (ciafs); \draw[->] (dattiafs) edge (ciafs); \draw[->] (riafs) edge (ciafs); \end{tikzpicture} \caption{Relative expressivity of formalisms for qualitative uncertainty in formal argumentation. An arrow from $\mathcal{X}$ to $\mathcal{Y}$ means that $ \mathcal{X} \preceq \mathcal{Y}$, i.e., $\mathcal{Y}$ is at least as expressive as $\mathcal{X}$. Reflexive and transitive arrows have been omitted.} \label{fig:exp} \end{figure} Besides providing a full expressivity map, this proposition highlights the fact that IAFs with dependencies have not been given their most expressive formulation yet. 
That is, we have arg-IAFs with dependencies \cite{fazzingaijcai21}, and att-IAFs with dependencies \cite{fazzingakr21}, but no IAFs with dependencies. As a consequence, these kinds of structures cannot yet express arbitrary sets of completions (contrary to what happens with both cIAFs and cIAFs$^{JM}$). It seems clear that a mixed version of those formalisms would also be maximally expressive. However, some important design choices are to be made; for instance, whether one permits \emph{mixed dependencies} (those involving uncertain arguments and attacks) or not. \section{Encompassing Dynamics and Uncertainty}\label{sec:dynamics} As argued in the introduction, there are two fundamental aspects of argumentation that are left out of AFs: the uncertainty about the relevant argumentative information (that is, which arguments and attacks should be taken into account during a debate), and the dynamics of such information. In the previous section we have discussed various ways to represent uncertainty about AFs. As to the dynamics of AFs, it is by now a well-studied branch of research; see e.g.\ \cite{DM18,baumann2021enforcement} for recent surveys. In this section we sketch how both ideas are to be combined. We start by presenting a well-studied case: control AFs \cite{dimopoulos2018control}, showing that their main reasoning tasks are also encodable in DL-PA. After mentioning some of their limitations, we proceed to study an extension that combines the kind of dynamics captured by CAFs with the flexibility of cIAFs for representing uncertainty. We close the section by sketching a general theory of dynamics and uncertainty of AFs that provides conceptual tools for conducting future research. \subsection{Control AFs}\label{sec:cafs} Control argumentation frameworks were introduced in \cite{dimopoulos2018control} and applied to argument-based negotiation in \cite{cafsnegotiation}. They represent a joint approach to uncertainty and dynamics of AFs.
Regarding uncertainty, they are as expressive as rIAFs (Section~\ref{sec:riafs}). As to dynamics, they capture a parametrised version of what has been called \emph{normal expansion} \cite{BB10} at the level of each completion. Formally, a \textbf{control argumentation framework} is a triple $\mathsf{CAF}= (F,U,C)$ where: \begin{itemize} \item $F=(\argset^{\fix},\attrel^{\fix})$ is the \emph{fixed part}, with $\attrel^{\fix}\subseteq (\argset^{\fix}\cup A^{?})\times (\argset^{\fix}\cup A^{?})$, where both $\argset^{\fix}$ and $A^{?}$ are finite sets of arguments; \item $U=(A^{?},(R^{?}\cup \attrel^{\leftrightarrow}))$ is the \emph{uncertain part}, where $$R^{?},\attrel^{\leftrightarrow} \subseteq (\argset^{\fix}\cup A^{?})\times(\argset^{\fix}\cup A^{?})$$ and $\attrel^{\leftrightarrow}$ is symmetric and irreflexive;\footnote{Symmetry and irreflexivity of $\attrel^{\leftrightarrow}$ are not assumed in the original paper \cite{dimopoulos2018control}, but appeared later on in the literature about CAFs \cite{niskanen2021controllability,niskanen2020thesis}. Note that neither assumption affects the expressivity (in the sense used in Section~\ref{sec:comparison}) of CAFs.} \item $C=(A^{C},R^{C})$ is the \emph{control part}, where $A^{C}$ is yet another finite set of arguments and $$R^{C}\subseteq (A^{C}\times (A^{F} \cup A^{?}\cup A^{C})) \cup ( (A^{F} \cup A^{?}\cup A^{C})\times A^{C} )\text{;}$$ \item $\argset^{\fix}$, $A^{?}$, and $A^{C}$ are pairwise disjoint; and \item $\attrel^{\fix},R^{?},\attrel^{\leftrightarrow}$, and $ R^{C}$ are pairwise disjoint. \end{itemize} We write $\mathcal{CAF}$ for the class of all control AFs. Given a $\mathsf{CAF}=(F,U,C)$, a \textbf{control configuration} is a subset of control arguments $CFG \subseteq A^{C}$. Informally, each control configuration can be seen as a possible argumentative move for the proponent.
The \textbf{CAF associated with $CFG$} is $\mathsf{CAF}_{CFG} = (F,U,C_{CFG})$, where $C_{CFG} = (CFG,R^{C}\upharpoonright_{ \argset^{\fix}\cup A^{?}\cup CFG})$. \begin{example}\label{ex:caf} Let us consider the CAF $\mathsf{CAF}_0 = (F_0,U_0,C_0)$ where $A^{F}_0=\{a\}$, $R_0^{F}=\{(f,e)\}$, $A^{?}_{0}=\{c,e,f\}$, $R^{?}_0=\{(f,c)\}$, $R^{\leftrightarrow}_0=\{(c,e),(e,c)\}$, $A^{C}_{0}=\{b,d\}$, and $R^{C}_{0}=\{(b,a),(d,a),(c,b),(e,d)\}$. We represent $\mathsf{CAF}_0$ graphically as follows: \begin{center} \begin{tikzpicture}[modal,world/.append style= {minimum size=0.5cm}] \node[world] (a) [] {$a$}; \node (pos) [left=1 cm of a]{}; \node[carg] (b) [above=0.5 cm of pos]{b}; \node[world,dashed] (c) [left=1 cm of b]{c}; \node[carg] (d) [below=0.5 cm of pos]{d}; \node[world,dashed] (e) [left=1 cm of d]{e}; \node[world,dashed] (f) [left=3 cm of pos]{f}; \draw[->] (b) edge[double] (a); \draw[->] (d) edge[double] (a); \draw[->] (c) edge[double] (b); \draw[->] (e) edge[double] (d); \draw[<->] (c) edge[double,dashed] (e); \draw[->] (f) edge (e); \draw[->] (f) edge[dashed] (c); \end{tikzpicture} \end{center} \end{example} The notion of \textbf{completion} is defined as follows for CAFs: \begin{itemize} \item $(\argset^{\fix}\cup A^{C})\subseteq A^{\ast}\subseteq (\argset^{\fix}\cup A^{C}\cup A^{?})$; \item $(\attrel^{\fix}\cup R^{C})\upharpoonright_{ A^{\ast}}\subseteq R^{\ast}\subseteq (\attrel^{\fix}\cup R^{C}\cup R^{?}\cup \attrel^{\leftrightarrow})\upharpoonright_{ A^{\ast}}$; and \item for every $x,y\in A^{\ast}$: $(x,y)\in \attrel^{\leftrightarrow}$ implies $(x,y)\in R^{\ast}$ or $(y,x)\in R^{\ast}$. \end{itemize} According to this definition, control arguments/attacks behave like fixed arguments/attacks once they have been communicated. Hence, the completions of $\mathsf{CAF}_0$ coincide with those of $\mathsf{rIAF}_0$ (Example \ref{ex:riaf}), i.e., those depicted in Table~\ref{tab:comp}.
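The three conditions above are easy to turn into an executable restatement. The following brute-force Python sketch (our own naming, purely illustrative) enumerates the completions of a CAF:

```python
from itertools import chain, combinations

def powerset(s):
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def restrict(R, A):
    return {(x, y) for (x, y) in R if x in A and y in A}

def caf_completions(A_fix, R_fix, A_unc, R_unc, R_sym, A_ctl, R_ctl):
    """All completions (A*, R*) of a CAF, by brute force: fixed and control
    elements are mandatory, uncertain ones optional, and each symmetric
    attack pair must appear in at least one direction."""
    comps = set()
    mandatory_args = set(A_fix) | set(A_ctl)
    for extra in powerset(A_unc):
        A = mandatory_args | set(extra)
        lower = restrict(set(R_fix) | set(R_ctl), A)
        optional = restrict(set(R_unc) | set(R_sym), A) - lower
        for opt in powerset(optional):
            R = lower | set(opt)
            if all((x, y) in R or (y, x) in R
                   for (x, y) in restrict(R_sym, A)):
                comps.add((frozenset(A), frozenset(R)))
    return comps
```

For a toy CAF with $A^{F}=\{a\}$, $A^{?}=\{b\}$ and a single symmetric attack pair between $a$ and $b$, this yields four completions: one without $b$, and three with $b$ (the attack in either direction, or in both).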
Regarding CAFs, defining relevant reasoning tasks gets slightly more complicated because we have to take into account their dynamic dimension. In this context, a natural reasoning task is to find a control configuration (that is, a set of control arguments) such that a certain argument gets accepted by the opponent after the latter learns about them. As before, acceptability is then relative to quantification over completions and extensions. Here is an example: \begin{center} \begin{tabular}{|l|} \hline $\sigma$-Necessary-Sceptical-Controllability ($\sigma$-NSCon)\\ \hline \textbf{Given:} A control argumentation framework \\ $\mathsf{CAF}=(F,U,C)$ and an argument $a\in \argset^{\fix}$. \\ \textbf{Question:} Is it true that there is a configuration \\ $CFG\subseteq A^{C}$ such that for every completion $(A^{\ast},R^{\ast})$ \\ of $\mathsf{CAF}_{CFG}$ and for every $E\in \sigma(A^{\ast},R^{\ast}),a \in E$? \\ \hline \end{tabular} \end{center} \medskip We now move on to explain how to reason about CAFs in DL-PA. Since, uncertainty-wise, control argumentation frameworks are essentially rich incomplete argumentation frameworks, the delicate part in the encoding process comes with their dynamic component, i.e., the control part. First, given a CAF $\mathsf{CAF}=(F,U,C)$, we define its \textbf{associated valuation} as \begin{align*} v_{\mathsf{CAF}} &=v_{(A^{F},R^{F} \cup R^{C})} \\&= \mathsf{AW}_{A^{F}} \cup \mathsf{ATT}_{R^{F}} \cup \mathsf{ATT}_{R^{C}} \\&= \{\aw x \mid x \in \argset^{\fix}\}\cup\{\att x y \mid (x,y) \in \attrel^{\fix}\} \cup\{\att x y \mid (x,y)\in R^{C}\}\text{.} \end{align*} Note that $v_{\mathsf{CAF}}$ contains all attack variables corresponding to control attacks, but none of them appear in $(A_{v_{\mathsf{CAF}}},R_{v_{\mathsf{CAF}}})$ since none of the control arguments has been communicated yet.
This highlights the fact that in an epistemic interpretation of CAFs, the proponent knows how the opponent will perceive the attack relations regarding all communicable arguments. To capture the dynamic component of $\mathsf{CAF}$ we define the following program: $$\mathsf{control}^{\mathsf{CAF}}= \mathsf{mkTrueSome}(\mathsf{AW}_{A^{C}})\text{.}$$ Intuitively, $\mathsf{control}^{\mathsf{CAF}}$ nondeterministically chooses some of the possible control configurations of $\mathsf{CAF}$, i.e., some subset of control arguments. Once we have computed some control configuration, we use the same program as for rIAFs in order to compute completions: \begin{align*} \mathsf{makeComp}^{\mathsf{CAF}}&=\mathsf{mkTrueSome}(\mathsf{AW}_{A^{?}});\mathsf{mkTrueSome}(\mathsf{ATT}_{R^{?}});\mathsf{dis}(\mathsf{ATT}_{\attrel^{\leftrightarrow}})\text{.} \end{align*} We again state a correctness result: \begin{proposition}\label{prop:cafsenco} Let $\mathsf{CAF}=(F,U,C)$. \begin{itemize} \item If $(\mathit{v}_{\mathsf{CAF}},\mathit{v})\in ||\mathsf{control}^{\mathsf{CAF}};\mathsf{makeComp}^{\mathsf{CAF}}||$ then there is a control configuration $CFG\subseteq A^{C}$ and a completion $(\argset^{\ast},\attrel^{\ast})$ of $\mathsf{CAF}_{CFG}$ such that $(A_{\mathit{v}},R_{\mathit{v}})=(\argset^{\ast},\attrel^{\ast})$. \item For every control configuration $CFG\subseteq A^{C}$ and every $(\argset^{\ast},\attrel^{\ast}) \in \mathsf{completions}(\mathsf{CAF}_{CFG})$ there is a valuation $\mathit{v}\in 2^{\propset}$ such that $(\mathit{v}_{\mathsf{CAF}},\mathit{v})\in ||\mathsf{control}^{\mathsf{CAF}};\mathsf{makeComp}^{\mathsf{CAF}}||$ and $(A_{\mathit{v}},R_{\mathit{v}})=(\argset^{\ast},\attrel^{\ast})$. \end{itemize} \end{proposition} We can then combine the previous programs with $\mathsf{makeExt}$ in order to reduce controllability problems to DL-PA model checking problems. \begin{proposition}\label{prop:redcaf} Let $\sigma \in \{st, co, gr, pr, se, id, ea, na, stg\}$. 
Let $\mathsf{CAF}=(F,U,C)$ and $a \in A^{F}$. Then: \begin{itemize} \item The answer to $\sigma$-NSCon with input $\mathsf{CAF}$ and $a$ is yes iff\\ $v_{\mathsf{CAF}}\models \langle \mathsf{control}^{\mathsf{CAF}}\rangle[\mathsf{makeComp}^{\mathsf{CAF}};\mathsf{makeExt}^{\sigma}] \acc a$. \item The answer to $\sigma$-NCCon with input $\mathsf{CAF}$ and $a$ is yes iff \\ $v_{\mathsf{CAF}}\models \langle \mathsf{control}^{\mathsf{CAF}}\rangle [\mathsf{makeComp}^{\mathsf{CAF}}]\langle\mathsf{makeExt}^{\sigma}\rangle \acc a$. \item The answer to $\sigma$-PCCon with input $\mathsf{CAF}$ and $a$ is yes iff \\ $v_{\mathsf{CAF}}\models \langle\mathsf{control}^{\mathsf{CAF}};\mathsf{makeComp}^{\mathsf{CAF}};\mathsf{makeExt}^{\sigma}\rangle \acc a$. \item The answer to $\sigma$-PSCon with input $\mathsf{CAF}$ and $a$ is yes iff \\ $v_{\mathsf{CAF}}\models \langle\mathsf{control}^{\mathsf{CAF}};\mathsf{makeComp}^{\mathsf{CAF}}\rangle [\mathsf{makeExt}^{\sigma}] \acc a$. \end{itemize} \end{proposition} We close this section by highlighting and making precise two of the main modelling limitations of CAFs that we have already mentioned. Regarding uncertainty, they cannot go further than rIAFs. As to dynamics, the form of communication that they model assumes that uncertainty does not increase. More formally: \begin{remark} $\mathcal{CAF} \equiv \mathcal{RIAF}$. \end{remark} \begin{remark}\label{remark:compcaf} Let $\mathsf{CAF}=(F,U,C)$ and let $CFG,CFG^{'}\subseteq A^{C}$. Then $ | \mathsf{completions}(\mathsf{CAF}_{CFG}) | = | \mathsf{completions}(\mathsf{CAF}_{CFG^{'}}) | $. \end{remark} \subsection{Control Constrained Incomplete AFs}\label{sec:cciafs} One of the advantages of the approaches presented so far is that they can be freely combined. Moreover, the encoding of these formalisms in DL-PA can easily be extrapolated to combined classes of structures. 
In this subsection, we give evidence of such flexibility by mixing the kind of uncertainty modelled by cIAFs with the kind of dynamics modelled by CAFs. Formally, a \textbf{control constrained incomplete AF} (CcIAF) is a tuple $\mathsf{CcIAF}=(A^{C}, R^{C}, A^{S}, \varphi)$ with: \begin{itemize} \item $A^{C}$ (control arguments) and $A^{S}$ (static arguments) are disjoint, \item $R^{C} \subseteq (A^{C} \times(A^{C} \cup A^{S}))\cup ((A^{C}\cup A^{S})\times A^{C})$, \item $\varphi$ is a Boolean formula over $\mathsf{AW}_{A^{S}}\cup\mathsf{ATT}_{A^{S} \times A^{S}}$. \end{itemize} Given $\mathsf{CcIAF}=(\argset^{C}, \attrel^{C}, \argset^{S}, \varphi)$, the pair $(A^{S},\varphi)$ is its \textbf{underlying cIAF}. The notion of completion is adapted to CcIAFs by combining the intuition behind CAFs and cIAFs, i.e., a completion of $(\argset^{C}, \attrel^{C}, \argset^{S}, \varphi)$ is any AF $(A^{\ast},R^{\ast})$ such that: \begin{itemize} \item $A^{\ast}= A^{C} \cup A'$, \item $R^{\ast}= (R^{C}\cup R')\upharpoonright_{ A^{\ast}}$, \item $(A',R')$ is a completion of the underlying cIAF. \end{itemize} The notion of control configuration is also straightforwardly adapted to our new class of structures. In more detail, a \textbf{control configuration} of $\mathsf{CcIAF}=(\argset^{C}, \attrel^{C}, \argset^{S}, \varphi)$ is any $CFG \subseteq A^{C}$. The CcIAF associated with $CFG$ is defined as $\mathsf{CcIAF}_{CFG}=(CFG,R^{C}\upharpoonright_{ A^{S}\cup CFG},A^{S}, \varphi)$. We can extrapolate controllability problems to CcIAFs: \begin{center} \begin{tabular}{|l|} \hline $\sigma$-Necessary-Sceptical-Controllability ($\sigma$-NSCon)\\ \hline \textbf{Given:} A control constrained incomplete argumentation framework \\ $\mathsf{CcIAF}=(\argset^{C}, \attrel^{C}, \argset^{S}, \varphi)$ and an argument $a \in A^{S}$ such that $\models \varphi \to \aw a$.
\\ \textbf{Question:} Is it true that there is a configuration \\ $CFG\subseteq A^{C}$ such that for every completion $(A^{\ast},R^{\ast})$ \\ of $\mathsf{CcIAF}_{CFG}$ and for every $E\in \sigma(A^{\ast},R^{\ast}),a \in E$? \\ \hline \end{tabular} \end{center} Regarding the DL-PA encoding of the reasoning problems that we have just defined, we start by assigning to each CcIAF its \textbf{associated valuation}, in a similar way to what we did both with CAFs and with cIAFs: \begin{align*} v_{\mathsf{CcIAF}} & = \mathsf{ATT}_{R^{C}} \\&= \{\att x y \mid (x,y)\in R^{C}\}\text{.} \end{align*} The control part of a CcIAF is encoded with the same DL-PA program as in CAFs: \begin{align*} \mathsf{control}^{\mathsf{CcIAF}}&= \mathsf{mkTrueSome}(\mathsf{AW}_{A^{C}})\text{.} \end{align*} Something analogous happens with the programs for computing completions, where we take over the program we used for cIAFs: $$\mathsf{makeComp}^{\mathsf{CcIAF}}=\mathsf{vary}(\mathsf{AW}_{A^{S}});\mathsf{vary}(\mathsf{ATT}_{A^{S} \times A^{S}});\varphi?\text{.}$$ \begin{proposition} Let $\mathsf{CcIAF}=(\argset^{C}, \attrel^{C}, \argset^{S}, \varphi)$. \begin{itemize} \item If $(\mathit{v}_{\mathsf{CcIAF}},\mathit{v})\in ||\mathsf{control}^{\mathsf{CcIAF}};\mathsf{makeComp}^{\mathsf{CcIAF}}||$, then there is a control configuration $CFG\subseteq A^{C}$ and a completion $(\argset^{\ast},\attrel^{\ast})$ of $\mathsf{CcIAF}_{CFG}$ such that $(A_{\mathit{v}},R_{\mathit{v}})=(\argset^{\ast},\attrel^{\ast})$. \item For every control configuration $CFG\subseteq A^{C}$ and every $(\argset^{\ast},\attrel^{\ast}) \in \mathsf{completions}(\mathsf{CcIAF}_{CFG})$ there is a valuation $\mathit{v}\in 2^{\propset_{A^{S}\cup A^{C}}}$ such that $(\mathit{v}_{\mathsf{CcIAF}},\mathit{v})\in ||\mathsf{control}^{\mathsf{CcIAF}};\mathsf{makeComp}^{\mathsf{CcIAF}}||$ and $(A_{\mathit{v}},R_{\mathit{v}})=(\argset^{\ast},\attrel^{\ast})$.
\end{itemize} \end{proposition} Once again, we can also reduce reasoning tasks involving CcIAFs to DL-PA model checking problems. Details are left to the reader. \subsection{Towards a General Theory}\label{sec:generaltheory} In \cite{DM18}, a general theory of the dynamics of abstract argumentation systems is developed. The focus of the paper is the dynamics of AFs but, as pointed out by the authors, the theory is \emph{prima facie} applicable to other kinds of argumentation frameworks. In this subsection we apply their categorisation to the formalisms for representing qualitative uncertainty about AFs studied in Section~\ref{sec:uncertainty}. At the same time, we show how DL-PA works as a good logical candidate for formalising many parts of this general theory. \paragraph{Structural constraints.} According to \cite{DM18}, there are different kinds of constraints that one might want to enforce in an argumentation system. The first kind of constraint is concerned with the structure of an AF. \cite{DM18} distinguishes between \emph{elementary} and \emph{global} structural constraints. The former are defined directly on the components of the framework (adding/removing some arguments/attacks); the latter require some property that the output AF must satisfy, e.g., being odd-loop-free, acyclic, etc. Both kinds of constraints make perfect sense in the kind of structures that we have studied in Section~\ref{sec:uncertainty}. Interestingly, the richer nature of these formalisms allows for further distinctions. \paragraph{Elementary structural constraints.} While in AFs elementary constraints amount to addition/removal of arguments/attacks (or combinations of these, as in the case of AF expansions \cite{BB10}), we can perform more subtle actions in argumentation frameworks with qualitative uncertainty. Let us illustrate some of these actions for the case of IAFs. 
\paragraph{Settling uncertain arguments/attacks.} In a debate, an agent may want to promote the epistemic status of an uncertain argument/attack by ``settling it''. Formally, and restricting our attention to arguments and incomplete AFs, given $\mathsf{IAF}=(\argset^{F}\!,\argset^{?}\!,\attrel^{F}\!,\attrel^{?})$ and $a\in A^{?}$, define the partial function $\mathsf{settle}:(\mathcal{IAF}\times \mathcal{U})\to \mathcal{IAF}$ by: $$\mathsf{settle}(\mathsf{IAF},a)=(A^{F}\cup\{a\},A^{?}\setminus\{a\}, R^{F},R^{?})\text{.}$$ In DL-PA we can compute the completions of the resulting IAF straightforwardly: $$\mathsf{completions}(\mathsf{settle}(\mathsf{IAF},a))=\{(A_v,R_v)\mid (v_{\mathsf{IAF}},v)\in ||\mathsf{makeComp}^{\mathsf{IAF}};\assgntop \aw a||\} . $$ \paragraph{Communicating arguments that become uncertain.} Another kind of dynamics, formally modelled by moving arguments from $\mathcal{U} \setminus (A^{F}\cup A^{?})$ to $A^{?}$, can be used to model situations in which argumentation takes place through a communication channel which is not fully trustworthy (say, a messaging app), so that the proponent is not sure whether the opponent received the arguments that were sent. Again, the completions of the resulting IAF can be easily computed within DL-PA, and the same ideas can be applied to communicating attacks instead of arguments. \paragraph{Communicating arguments with uncertain effects.} Yet another kind of action is to communicate arguments whose effects on the opponent's framework are not known. For instance, and within the context of CAFs, one can relax their definition by extending the domain and range of $R^{?}$ or $R^{\leftrightarrow}$ so as to include $A^{C}$. \paragraph{Belief change methods for logical structures.} As for cIAFs (and this applies also to cIAFs$^{JM}$), elementary changes amount to either augmenting/shrinking the domain $A$ or, more interestingly, changing the epistemic constraint $\varphi$.
Regarding the latter, methods imported from the belief change literature can be used; for instance, if a new piece of information $\psi$ that is inconsistent with $\varphi$ is to be added, one could do so by means of an AGM belief revision operator \cite{AlchourronEtAl85}. In that respect, DL-PA has been shown useful to capture belief change operators \cite{Herzig-Kr14}, and these have been applied in turn to AFs \cite{doutre2014dynamic,doutre2019clar}. \par \paragraph{Types of elementary structural changes.} Several interesting criteria can be applied to provide a classification of elementary structural changes within frameworks for arguing with qualitative uncertainty. Let us just point out a couple of them. Regarding awareness of arguments, we can distinguish between \emph{internal actions}, \emph{argument-gaining actions}, and \emph{argument-losing actions}. Informally, as the outcome of an internal action, agents become neither aware of new arguments nor unaware of old ones.\footnote{However, they might change the epistemic status of arguments they are aware of.} A bit more formally, and restricting our attention to IAFs, we say that the action transforming $\mathsf{IAF}_0=(A^{F}_0,A^{?}_0,R^{F}_0,R^{?}_0)$ into $\mathsf{IAF}_1=(A^{F}_1,A^{?}_1,R^{F}_1,R^{?}_1)$ is internal whenever $A^{F}_0\cup A^{?}_0=A_1^{F}\cup A^{?}_1$. For example, the partial function $\mathsf{settle}:(\mathcal{IAF}\times \mathcal{U})\to \mathcal{IAF}$ defined above is clearly internal. Argument-gaining actions formally amount to requiring that $A^{F}_0\cup A^{?}_0\subset A_1^{F}\cup A^{?}_1$, so that the agent becomes aware of at least one novel argument. The action of communicating uncertain arguments that we have explained above is an example of an argument-gaining action.
Finally, in argument-losing actions we have that $A_1^{F}\cup A^{?}_1 \subset A^{F}_0\cup A^{?}_0$, that is, the agent has become unaware of at least one argument.\footnote{The last type of action connects with a recent thoughtful study of the notion of \emph{forgetting an argument} in the context of AFs \cite{BaumannGR20}. Interestingly, we can capture within IAFs the distinction, made in \cite{forgetting2009}, between \emph{forgetting-as-becoming-unaware} (moving an argument from $A^{F}\cup A^{?}$ to $\mathcal{U}\setminus(A^{F}\cup A^{?})$), and \emph{forgetting-as-becoming-ignorant} (moving an argument from $A^{F}$ to $A^{?}$).} A second criterion for categorizing elementary structural changes would be measuring their impact on the number of completions, since, intuitively, the more completions we have, the more uncertainty the formalised agent is dealing with. As examples, the $\mathsf{settle}$ function described above always results in a reduction of the number of completions; computing control configurations of CAFs keeps the number of completions constant (see Remark~\ref{remark:compcaf} for a precise formulation); and communicating uncertain arguments (also described above) increases the number of completions. \paragraph{Global structural constraints.} In AFs, global structural constraints amount to things like obtaining an acyclic graph, or an odd-loop-free graph, etc. These constraints are motivated by the appealing mathematical properties implied by them. For instance, it is known since \cite{dung1995acceptability} that in acyclic AFs, all four classic semantics collapse. Interestingly, DL-PA can capture many of these constraints. For example, in \cite{doutre2019clar} polynomial formulas characterising the existence of odd- and even-length loops are constructed. When extrapolated to the more complex formalisms studied here, global constraints can be required either possibly (that is, in at least one completion) or necessarily (in all of them).
Furthermore, DL-PA can be used to check whether the constraint is satisfied possibly or necessarily. To be more precise, and focusing on IAFs for simplicity: let $\varphi$ be the formula characterising a targeted global constraint and let $\mathsf{IAF}$ be an IAF; then we have that $\varphi$ is satisfied possibly (resp.\ necessarily) iff $\mathit{v}_{\mathsf{IAF}}\models \ldia{\mathsf{makeComp}^{\mathsf{IAF}}}\varphi$ (resp.\ iff $\mathit{v}_{\mathsf{IAF}}\models [\mathsf{makeComp}^{\mathsf{IAF}}]\varphi$). In AFs, global constraints are usually enforced through elementary changes (those described above). Once again, this relation can be studied in DL-PA. For instance, if we want to know whether a global constraint $\varphi$ is possibly enforced in $\mathsf{IAF}$ as the result of settling $a\in A^{?}$, it is enough to model-check whether $\mathit{v}_{\mathsf{IAF}}\models \ldia{\mathsf{makeComp}^{\mathsf{IAF}};\assgntop \aw a}\varphi$ holds. \paragraph{Acceptability constraints.} The second kind of constraint distinguished by \cite{DM18} is concerned with the output of the argument evaluation process in an argumentation system. The distinction elementary/global applies here, too. When restricted to AFs, one might want to enforce a set of arguments to be part of (or equal to) at least one (or every) extension; this is an elementary acceptability constraint. This kind of enforcement has probably been the most studied in the literature on abstract argumentation since the work of \cite{BB10}, as it has a clear informal counterpart in real-life argumentation: persuading an opponent basically amounts to enforcing some targeted arguments. Furthermore, and from a more technical perspective, one might also want to enforce some kind of global acceptability constraint: controlling the cardinality of the set of extensions, its structure, etc.
Again, qualitative uncertainty introduces a new layer of quantification: acceptability enforcement can be pursued possibly (i.e., in at least one completion) or necessarily (in all of them). Just as with AFs, acceptability constraints are usually enforced through a (combination of) structural changes such as the ones we have described above. As an example, the reasoning tasks of both CAFs and CcIAFs are a way of enforcing a possible/necessary acceptability constraint through the performance of a combination of elementary structural changes that do not increase uncertainty (activating control arguments). \paragraph{Semantic constraints.} Finally, the third kind of constraint distinguished by \cite{DM18} affects the semantics that has been chosen to evaluate arguments. Informally, enforcing a semantic constraint amounts to a change in the standards applied within the argument evaluation process. In this respect, not only can the parameter $\sigma$ be switched to $\sigma'$, but one can also move from credulous to sceptical acceptability, and \emph{vice versa}. Just as before, the formalisms studied in Section~\ref{sec:uncertainty} introduce an additional layer of quantification to be taken into account when formulating semantic constraints: we can move from a `possible' semantics (arguments should be accepted in at least one completion) to a `necessary' semantics (they should be accepted in all), and back. As we have shown throughout the paper (e.g., in Proposition~\ref{prop:rediafs}), the distinction between possible and necessary acceptability can be transparently captured in DL-PA. \section{Discussion, Related Work and Future Directions}\label{sec:final} We have taken the logical encoding of AFs and their extensions a step further by moving from encodings in propositional logic and \emph{quantified Boolean formulas} (QBF) to encodings in a simple version of dynamic logic, DL-PA.
Approaches to argumentation reasoning problems based on SAT-solvers typically use Besnard and Doutre's encoding of AFs and their semantics in propositional logic \cite{BesnardDoutre}, as well as its extension to QBF for semantics requiring maximality checking; see e.g.\ \cite{DBLP:conf/kr/NiskanenJ20a} for a recent such approach, and \cite{cerutti2017foundations} for a review of approaches to abstract argumentation reasoners. Based on our work, one could use DL-PA model checkers instead of SAT-solvers in order to automatically decide the reasoning problems that we have investigated here. This would, however, have to await such model checkers, which do not yet exist. Alternatively, one could resort to translations from DL-PA to QBF and use solvers for the latter. This is currently being pursued in the LILaC group at IRIT. \par On the whole, all we have done in DL-PA can just as well be done in equally expressive logical frameworks like propositional logic or QBF. The advantage over the former is that (1)~some semantics can be expressed more compactly in DL-PA, such as the preferred semantics: it is one level higher in the polynomial hierarchy than the other semantics and can therefore not be captured by a polynomial-size propositional formula, while a polynomial DL-PA formula is given in \cite{doutre2019clar},\footnote{Remember that our adaptation of the formula $\mathsf{Preferred}$ of \cite{doutre2019clar} captures preferred semantics in the more general setting of a set of background arguments $\mathcal{U}$ and is also polynomial in the size of $\mathcal{U}$.} and (2)~the reasoning problems can be expressed directly as DL-PA programs. The advantage over QBFs is that the DL-PA encoding of reasoning problems by means of programs is more natural than the rather complex QBF encodings that one can find in the literature. 
Actually, most of the works on arguing with qualitative uncertainty use QBF encodings and algorithms for determining the complexity of associated reasoning tasks (see e.g.\ \cite{baumeister2021acceptance} or \cite{niskanen2021controllability}). All advantages already pointed out by \cite{doutre2019clar} of using DL-PA instead of QBF for encoding argumentative semantics are preserved by our encodings. In particular, ``extension construction programs such as $\mathsf{makeExt^{\sigma}}$ capture things in a more general, flexible and natural way than a QBF encoding''. \paragraph{Getting closer to a theorem proving approach.} Our encoding of formalisms for arguing with qualitative uncertainty can be qualified as \emph{hybrid}, since it combines some previous semantic reasoning with reasoning inside DL-PA. For instance, in order to compute the completions of an IAF, one first needs to find its associated valuation (which is reasoning outside the logic, using semantic objects), then has to write down the $\mathsf{makeComp}$ program, and finally one reasons in DL-PA to find the $\mathsf{makeComp}$-successors of the associated valuation. We followed this hybrid method because we found the identification of directed graphs with propositional valuations over $\propset$ intuitive. However, we can adopt results from \cite{doutre2014dynamic,doutre2017dynamic,doutre2019clar} to get a more homogeneous method here. For instance, given $\mathsf{IAF}=(F,U)$, instead of computing its associated valuation we can write down a propositional formula that characterises its fixed elements (similar to what is done in \cite{doutre2014dynamic} for standard AFs and in our proof of Proposition~\ref{prop:express} in the \nameref{sec:app}): $$\mathsf{Th}(\mathsf{IAF})=\bigwedge_{x \in A^{F}}\aw x \land \bigwedge_{x \in \mathcal{U} \setminus A^{F}} \lnot \aw x \land \bigwedge_{(x,y) \in R^{F}}\att x y \land \bigwedge_{(x,y) \in \mathcal{U} \times \mathcal{U}\setminus R^{F}} \lnot \att x y . 
$$ If we combine this formula with the $\mathsf{makeComp}$ program and the converse operator we obtain a formula whose models completely characterise the set of completions of $\mathsf{IAF}$: \begin{align*} \mathsf{completions}(\mathsf{IAF}) &= \{ (A_{v},R_{v})\mid v \in ||\langle \big( \mathsf{Th}(\mathsf{IAF})?;\mathsf{makeComp}^{\mathsf{IAF}}\big)^{\smallsmile}\rangle \top||\} . \end{align*} \paragraph{Novel contents w.r.t.\ our conference paper \cite{clar2021}.} This work is based on our previous conference paper \cite{clar2021}, which we have improved and extended in three main directions. First, in Section~\ref{sec:semantics}, (i) we capture argumentation semantics in DL-PA that had not been captured before (naive, semi-stable, stage, ideal and eager semantics), and (ii) we also adapt previous encodings to our more general setting (in particular, we adapt the encodings of complete, preferred and grounded semantics \cite{doutre2019clar} to the assumption of the existence of a background universe of arguments $\mathcal{U}$, which is in turn useful for modelling both dynamics and uncertainty about AFs). Second, we discuss some closely related works that have appeared since (sections~\ref{sec:related} and \ref{sec:comparison}). Third, we provide new results regarding the combination of dynamics and uncertainty in abstract argumentation: sections~\ref{sec:cciafs} and \ref{sec:generaltheory} are entirely new. Finally, there are also several small improvements w.r.t.\ the conference version, some of which are signalled throughout the paper. \paragraph{Epistemic aspects of argumentation.} In recent years, a few papers dealing with the combination of epistemic logic and formal argumentation have appeared. 
Broadly speaking, these works can be divided into two main branches: (i)~those trying to provide a formalisation of the notion of justified belief based on argumentative tools such as \cite{grossi2014,shi2017argument,shi2018,shi2021logic,comma,tark}; and (ii)~those using epistemic models for reasoning about uncertain AFs such as \cite{schwarzentruber2012building, proietti2021, ijcai21, kr}. Clearly, the second one is strongly connected---both conceptually and technically---to some of the ideas presented here. The first main difference is that the formalisms used in this paper lack a tool for capturing higher-order epistemic attitudes, that is, a tool capable of representing not only what an agent thinks of her opponent's argumentative situation (her AF), but also what the agent thinks that her opponent thinks about the agent's argumentative situation, and so on. This is an important point, since this kind of mental attitude has been successfully employed under the name of \emph{recursive opponent models} within the sub-field of strategic argumentation (see, e.g., \cite{thimm2014strategic}). However, the incorporation of this type of multiple agency together with a full dynamic tool-kit would mean replacing DL-PA with the strong modelling power of dynamic epistemic logic \cite{DitmarschHoekKooi07}. This comes at the price of a blow-up in the computational complexity of the associated reasoning tasks. One might however follow \cite{DBLP:journals/ai/CooperHMMPR21} and employ lightweight epistemic logics where disjunctions in the scope of epistemic operators are forbidden. That would represent a compromise between modelling multiple agency/dynamics, on the one hand, and modelling uncertainty, on the other, since any form of uncertainty that goes beyond IAFs (see Figure \ref{fig:exp}) would have to be excluded from this approach. 
{A second important difference is that, unlike epistemic logic, none of the formalisms studied in this paper allow for modelling \emph{the actual world}, i.e., what is true independently of what the formalised agent thinks. This notion is in turn needed for distinguishing between \emph{knowledge} (which is usually required to be true) and \emph{belief} (which is often merely required to be consistent). However, this limitation seems easier to overcome: it suffices to augment IAFs (and their extensions) with a \emph{distinguished completion}, informally accounting for what the actual AF is.} \paragraph{Further semantics.} Yet another direction for future work is extending our DL-PA encoding to semantics that have not been considered in Section~\ref{sec:semantics}. An especially interesting case is the recently introduced family of weak admissibility-based semantics \cite{BaumannBU20}, since most of the associated reasoning tasks have been shown to be PSPACE-complete, matching the complexity of the DL-PA model checking problem \cite{BalbianiHST14}. \paragraph{An alternative notion of expressivity.} In a very recent paper \cite{alfano2022aaai}, Alfano et al.\ introduced a rewriting technique in order to reduce general IAFs to their strict subclasses arg-IAFs and att-IAF, and even to a proper subclass of arg-IAFs.\footnote{The so-called \emph{fact-uncertain AFs} (farg-IAFs), which are argument-incomplete AFs where all uncertain arguments are not attacked.} More concretely, they show \cite[Theorem 7]{alfano2022aaai} that the completions of the rewritten incomplete AF can be mapped (through another transformation) to the completions of the original one. They moreover claim that ``This result entails that arg-IAFs (resp.\ farg-IAF, att-IAF) have the same expressivity of general IAFs, though arg-IAFs (resp.\ farg-IAF, att-IAF) have a simpler structure''. 
This clearly conflicts with the expressivity map that we provided in Proposition \ref{prop:ex} and Figure \ref{fig:exp}, which is based on the notion of expressivity first introduced in \cite{mailly2020note} and later used in \cite{mailly2021yes,clar2021}. Although a detailed comparison of both notions of expressivity is beyond the scope of this discussion, we would just like to mention that the one employed here seems more useful for intuitive modelling purposes (i.e., to find out what kind of situations the formalised agent is able to represent in her mind), while Alfano et al.'s seems more interesting from a technical perspective (actually, it is used to extend complexity results regarding IAFs to their proper subclasses). Be that as it may, the work done in \cite{alfano2022aaai} opens an interesting research question: can the rewriting technique be extended to more expressive formalisms (in our sense), such as rIAFs or cIAFs? \color{black} \subsubsection*{Funding} The research activity of both authors is partially supported by the EU ICT-48 2020 project TAILOR (No. 952215). Part of this research was carried out while Antonio Yuste was employed by the University of M\'alaga through a Post-doctoral contract type A.3.1.\ of \emph{Plan Propio de Investigaci\'on, Transferencia y Divulgaci\'on Cient\'ifica}. \subsubsection*{Acknowledgements} We thank Sylvie Doutre and Jean-Guy Mailly for previous discussions on the topic of this paper, especially for triggering the idea of constrained incomplete argumentation frameworks. \bibliographystyle{plain}
Q: Pressing back from the browser erases all my variable values

I have a button. Clicking it opens a browser chooser, and the chosen browser loads my URL. On returning from the browser, all my previously set local variable values are reset to their initial values. For example, i is initialized to 0; inside a method I assign i = 10 and then launch the browser:

// the code I use to launch the browser
Intent intent = new Intent(Intent.ACTION_VIEW, Uri.parse(strUrl));
intent.setFlags(Intent.FLAG_ACTIVITY_NO_HISTORY);
startActivity(Intent.createChooser(intent, "Choose browser"));

The URL loads, but on returning to the activity the value of i is 0 again.

A: Declare your variable as static:

static int i = 0;

This will not reset i to zero on back press.

A: You need to understand the lifecycle of an Activity a little better. When you leave and return from an Activity, state is not automatically persisted.

Android Lifecycle: http://developer.android.com/training/basics/activity-lifecycle/index.html

You should use onSaveInstanceState to save your state and restore it when you return to your Activity. This is the code example the (linked) docs give:

static final String STATE_SCORE = "playerScore";
static final String STATE_LEVEL = "playerLevel";
...

@Override
public void onSaveInstanceState(Bundle savedInstanceState) {
    // Save the user's current game state
    savedInstanceState.putInt(STATE_SCORE, mCurrentScore);
    savedInstanceState.putInt(STATE_LEVEL, mCurrentLevel);

    // Always call the superclass so it can save the view hierarchy state
    super.onSaveInstanceState(savedInstanceState);
}
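The example above covers only the save half of the round trip. The values are read back in onCreate (or onRestoreInstanceState) when the Activity is recreated. A sketch of the restore side, mirroring the docs' example (mCurrentScore and mCurrentLevel are the same hypothetical fields as above; the null check distinguishes a fresh start from a recreation):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState); // Always call the superclass first

    if (savedInstanceState != null) {
        // Restore member values from the previously saved state
        mCurrentScore = savedInstanceState.getInt(STATE_SCORE);
        mCurrentLevel = savedInstanceState.getInt(STATE_LEVEL);
    }
}

Note that when simply returning from the browser the Activity is usually only paused, not destroyed; but if the system does destroy and recreate it, this save/restore pair is what preserves the value of i.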
BARRY Chart 0500

This is a Chart for Redmond Barry and Catherine ???

REDMOND BARRY
1861 Journeyman Tailor
1867 Tailor (marriage of Elizabeth (Mary Ann))
1871 Tailor

CATHERINE ???

Gravesend, Kent
(Mary Ann) Minories, Middlesex
10 Trafalgar Road, Gravesend, Kent

James NAGLE
Mary BARRY
1871 Scholar
1881 Fireman
1891 Marine? Fireman

In the 1861 Census he was living at Sousecoset? (Somerset) Street, Gravesend, Kent.

1871 Census - 22 Crooked Lane, Milton, Kent. Redmond is down as Edmond on this Census and Catherine is down as being born in Ireland.

There is a marriage as shown above for a Redmond BARRY; if this is the correct one, his wife's maiden name would have been DAVIES. I am not at all certain about this, as the age I have for Redmond means that he would have only been 18 at this time. I cannot find any other possible marriage, and cannot find them on the 1851 Census.

1869 Victoria Cottages, Northfleet, Kent (at baptism of William John)
1871 42 Wakefield Street, Gravesend, Kent
1877 1 Marshalls Place, Leather Bottle Lane, Northfleet, Kent
1879 10 Springhead Rd, Northfleet, Kent
1881 73 Shepherd Street, Northfleet, Kent. There was a Visitor with the family, a Richard R BARVEY born 1862 Gravesend, Kent
1889/1891 59 Charles Street, Stone, Kent
1891 59 Charles Street, Stone, Kent
1896/1898 104 High Street, Northfleet, Kent
1901 104 High Street, Northfleet, Kent

A list of births of the family of Henry John STEDMAN, found by Gladys WOOD née THOROGOOD at the beginning of 1998, indicates that the full name of Elizabeth BARRY was Mary Ann Elizabeth BARRY; this is the first indication that she had any other names. This might make it possible to find her marriage and birth certificates. It also gave us a complete list of birth dates for all the children, making it even more unlikely that the name of Fanny which appears in the 1891 census was correct. 
I have therefore put Elizabeth back as the first name and followed it with the other two in parentheses.

Elizabeth lived with her youngest son and his family at 31 Terrace Street, Gravesend, Kent for the last years of her life, until just before her death when she went into a home.

1881 Census - 26 Shaw Street, Grimsby, Lincolnshire. James was with a family named BAGSHAW who came from Northfleet, Kent.

1891 Census - 8 Pilot's Place, Milton, Gravesend, Kent. James was a boarder with a Jane STONE, a laundress.

Last updated Friday, June 01, 2012 08:14
\section{Introduction} Many astrophysical processes depend on the Compton scattering of photons by a thermal distribution of electrons. Examples from cosmology include the evolution of anisotropies in the cosmic microwave background (CMB) radiation in the pre-recombination era~\cite{peebles70}, and the Sunyaev-Zel'dovich distortion~\cite{zel69} in the CMB spectrum due to passage through hot clusters. The Comptonization is described by the Boltzmann equation, which is an integro-differential equation for the evolution of the photon distribution function. For a detailed discussion of the Compton scattering kernel, see Kershaw, Prasad, \&\ Beason (1986). In situations of astrophysical interest, it is often the case that the energy transfer during scattering is sufficiently small to allow one to make a Fokker-Planck expansion of the scattering kernel. In this case, the Boltzmann equation may be replaced by a hierarchy of differential equations for the moments of the distribution function. The most complete analytic treatment to date of Comptonization in a moving media was given by Psaltis \&\ Lamb (1997), who derived the first two moments of the photon kinetic equation for arbitrary (anisotropic) photon and electron distributions. This derivation was correct to first-order in $\hbar \omega /m_{e}c^{2}$ and $\theta_{\!e}\equiv k_{B} T_{e} /m_{e} c^{2}$, where $\omega$ is the photon frequency, $T_{e}$ is the electron temperature and $m_{e}$ the electron mass, and to second-order in $\beta$, the bulk velocity of the electrons in units of $c$. Psaltis \&\ Lamb (1997) showed the importance of using the full relativistic cross section for Compton scattering in order to obtain the kinetic equation to the required accuracy. Although the generality of the results in Psaltis \&\ Lamb (1997) ensures that they are sufficient to describe many astrophysical processes, there are situations of interest where the inclusion of higher-order relativistic corrections is necessary. 
One such example is the Sunyaev-Zel'dovich effect for hot clusters, where $k_{B} T_{e}$ may be as high as $15\mbox{\rm \,keV}$. In this example, the radiation is initially isotropic in the cosmic frame, and the optical depth to Compton scattering through the cluster is sufficiently small that the effect on the scattering of anisotropies induced by any peculiar velocity of the cluster may be neglected (the probability of a photon undergoing multiple scattering is very low). The effect of relativistic corrections on the thermal Sunyaev-Zel'dovich effect ($\beta=0$) has been studied numerically (Rephaeli 1995; Rephaeli \&\ Yankovitch 1997) and analytically (Stebbins 1998; Challinor \&\ Lasenby 1998; Itoh, Kohyama, \&\ Nozawa 1998). Recently, these analyses have been extended to include the effects of any peculiar velocity of the cluster (Nozawa, Itoh, \&\ Kohyama 1998; Sazonov \&\ Sunyaev 1998). It was shown that for typical values of the peculiar velocity $\beta \simeq 1/300$, the relativistic corrections to the kinematic effect are $\simeq 8\%$, and arise from a term of $O(\beta \theta_{\!e})$ which is not included in the analysis of Psaltis \&\ Lamb (1997). In the course of their derivation of the corrections to the kinematic Sunyaev-Zel'dovich effect, Nozawa et al.\ (1998) described a covariant method for performing a Fokker-Planck expansion of the photon kinetic equation for $\beta\neq 0$ by extending the method used in Challinor \&\ Lasenby (1998), where the electron bulk velocity was neglected ($\beta = 0$). This method includes all relativistic effects, including induced scattering and electron recoil in a unified manner. However, Nozawa et al.\ did not go on to evaluate the detailed form of the kinetic equation including all these effects; instead they gave only the resulting equation for a Planck distribution of isotropic radiation at temperature $T_{r}$ in the limit that $T_{r} \ll T_{e}$. 
This equation is sufficient to describe the dominant corrections to the Sunyaev-Zel'dovich effect (which was the purpose of their paper), but since the effects of recoil and induced scattering do not enter in this limit, their results are not sufficiently general to describe other interesting properties of the Comptonization process, such as the energy transfer rate between the electrons and (hot) radiation. In fact, the dominant corrections to the Sunyaev-Zel'dovich effect have been derived independently by Sazonov \&\ Sunyaev (1998) with recoil and induced scattering neglected from the start. Their calculation is much simpler than that in Nozawa et al.\ (1998), since they need only use the Thomson cross section. In the present paper, which is intended to complement the paper by Nozawa et al.\ (1998), we derive the detailed form of the photon kinetic equation describing Comptonization of an initially isotropic radiation field in moving media, in the limit of low optical depth. In this limit, the Compton scattering term in the Boltzmann equation may be evaluated for an isotropic distribution, and, since multiple scattering is very improbable, the effects of the velocity-induced anisotropies on subsequent scatterings can be safely ignored. We feel that this result, which is omitted from the Nozawa et al.\ (1998) analysis, could be valuable to the astrophysics community at large since Comptonization is central to many problems. Relativistic effects may be fully included by a systematic expansion in the parameters $\theta_{\!e}$ and $\beta$. The new result given here is written in a form that manifestly conserves the number of photons, and allows a simple calculation of the energy transfer rate between the electrons and the radiation. In the limit of $\beta=0$ we recover the expression given in Challinor \&\ Lasenby (1998). For $\beta\neq 0$, our result yields corrections at higher order than given elsewhere in the literature. 
We give the kinetic equation correct to $O(\theta_{\!e}^{2},\beta \theta_{\!e}^{2},\beta^{2}\theta_{\!e})$, before calculating the rate of energy transfer for a Planck distribution correct to $O(\theta_{\!e}^{2},\theta_{\!r}^{2},\beta\theta_{\!e},\beta\theta_{\!r})$, where $\theta_{\!r} \equiv k_{B} T_{r}/m_{e} c^{2}$. We end with a brief discussion of the terms in the kinetic equation that are required to describe the Sunyaev-Zel'dovich effect, obtaining results in full agreement with those in Nozawa et al.\ (1998). We use units with $\hbar =c = k_{B} = 1$ in the following, unless stated otherwise. \section{Extending the Kompaneets equation} We consider the Comptonization of an unpolarised, initially isotropic radiation field by a thermal distribution of electrons at temperature $T_{e}$ which has bulk velocity $\mbox{\boldmath $\beta$}$ relative to the radiation. The optical depth is assumed to be sufficiently low that the radiation may be treated as isotropic throughout the interaction with the medium. We shall work exclusively in the frame in which the radiation is isotropic, but it should be emphasised that we express our final results in terms of the electron number density \emph{in the rest frame of the scattering medium}, $N_{e}$. The electron distribution function is denoted by $f(E, \hat{\mbox{\boldmath $p$}})$ and the photon distribution by $n(\omega, \hat{\mbox{\boldmath $k$}})$. (Note that we use distribution functions normalised to equal the mode occupation numbers.) Here, the electron energy is $E$, and the direction of propagation $\hat{\mbox{\boldmath $p$}}$. For the photons, $\omega$ denotes the energy and $\hat{\mbox{\boldmath $k$}}$ the direction of propagation. 
Neglecting the effects of electron degeneracy, the Boltzmann equation for the evolution of $n(\omega, \hat{\mbox{\boldmath $k$}})$ may be written as~\cite{buchler76} \begin{eqnarray} \lefteqn{% \frac{Dn(\omega, \hat{\mbox{\boldmath $k$}})}{D t} = -2 \int \frac{d^{3} \mbox{\boldmath $p$}}{(2\pi)^{3}} d^{3} \mbox{\boldmath $p$}' d^{3} \mbox{\boldmath $k$}' W \Big\{ n(\omega ,\hat{\mbox{\boldmath $k$}}) [1+ n(\omega', \hat{\mbox{\boldmath $k$}}')] f(E, \hat{\mbox{\boldmath $p$}})}\phantom{xxxxxxxxxxxxxxxxxxxxxxxxxx} \nonumber \\ && - n(\omega', \hat{\mbox{\boldmath $k$}}')[1+n(\omega, \hat{\mbox{\boldmath $k$}})] f(E', \hat{\mbox{\boldmath $p$}}') \Bigl\}, \label{eq_bol} \end{eqnarray} where the operator $D/Dt$ denotes $\partial_{t} + \hat{\mbox{\boldmath $k$}} \! \cdot \! \mbox{\boldmath $\nabla$}$. The invariant transition amplitude $W$ for Compton scattering of a photon of 4-momentum $k^{\mu}$ by an electron (of charge $e$ and mass $m_{e}$) with 4-momentum $p^{\mu}$, to a photon momentum $k^{\prime\mu}$ and an electron momentum $p^{\prime\mu}$ (whose energy is $E'$) is~\cite{ber-quan}: \begin{eqnarray} W &=& \frac{(e^{2}/4\pi)^{2} \bar{X}}{2\omega\omega' E E'} \delta^{4}(p+k-p'-k') \\ \bar{X} &\equiv& 4m_{e}^{4} \left(\frac{1}{\kappa} + \frac{1}{\kappa'} \right)^{2} - 4m_{e}^{2} \left(\frac{1}{\kappa} + \frac{1}{\kappa'}\right) - \left(\frac{\kappa}{\kappa'} + \frac{\kappa'}{\kappa}\right), \end{eqnarray} with $\kappa \equiv -2p^{\mu}k_{\mu}$ and $\kappa'\equiv 2p^{\mu}k^{\prime}_{\mu}$. The electron distribution function is assumed to be a relativistic Fermi distribution in the frame moving at $\mbox{\boldmath $\beta$}$. Since we are ignoring degeneracy effects, in the frame of the radiation we have \begin{equation} f(E, \hat{\mbox{\boldmath $p$}}) \approx \et{-[\gamma (E-\mbox{\boldmath $\beta$}\! \cdot \!\mbox{\boldmath $p$}) - \mu_{e}]/T_{e}}, \end{equation} where $\mu_{e}$ is the electron chemical potential and $\gamma \equiv (1-\beta^{2})^{-1/2}$. 
Substituting for $f(E, \hat{\mbox{\boldmath $p$}})$ into equation~\eqref{eq_bol}, setting $n(\omega, \hat{\mbox{\boldmath $k$}}) =n(\omega)$ in the integrand, and expanding the distribution functions in powers of $\Delta x$, where \begin{eqnarray} x & \equiv & \omega / T_{e}, \\ \Delta x &\equiv & (\omega' - \omega)/T_{e}, \end{eqnarray} gives the Fokker-Planck expansion for an isotropic radiation field~\cite{nozawa98} \begin{eqnarray} \frac{D n(\omega, \hat{\mbox{\boldmath $k$}})}{D t} & = & 2 \left[ \frac{\partial n}{\partial x} I_{1, 0} + n(1+n) I_{1,1} \right] + 2\left[\frac{\partial^{2} n}{\partial x^{2}} I_{2,0} + 2(1+n) \frac{\partial n}{\partial x} I_{2,1} + n(1+n) I_{2,2} \right] \nonumber \\ &&\mbox{} + 2 \left[ \frac{\partial^{3} n}{\partial x^{3}} I_{3,0} + 3(1+n)\frac{\partial^{2} n}{\partial x^{2}} I_{3,1} + 3(1+n)\frac{\partial n}{\partial x} I_{3,2} + n(1+n)I_{3,3}\right] \nonumber \\ && \mbox{} + \cdots + 2 n \left[(1+n) J_{0} + \frac{\partial n}{\partial x} J_{1} + \frac{\partial^{2} n}{\partial x^{2}} J_{2} + \cdots \right], \label{eq_fok} \end{eqnarray} where \begin{eqnarray} I_{k,l} & \equiv & \frac{1}{k!} \int \frac{d^{3} \mbox{\boldmath $p$}}{(2\pi)^{3}} d^{3} \mbox{\boldmath $p$}' d^{3} \mbox{\boldmath $k$}' W f(E, \hat{\mbox{\boldmath $p$}}) (\Delta x)^{k} \et{x \gamma \mbox{\boldmath $\beta$} \! \cdot \! (\hat{\mbox{\boldmath $k$}} - \hat{\mbox{\boldmath $k$}} ')} \gamma^{l} \left(1-\mbox{\boldmath $\beta$}\! \cdot \!\hat{\mbox{\boldmath $k$}}'\right)^{l}, \label{eq_ikl}\\ J_{k} & \equiv & - \frac{1}{k!} \int \frac{d^{3} \mbox{\boldmath $p$}}{(2\pi)^{3}} d^{3} \mbox{\boldmath $p$}' d^{3} \mbox{\boldmath $k$}' W f(E, \hat{\mbox{\boldmath $p$}}) (\Delta x)^{k} \left(1- \et{x\gamma \mbox{\boldmath $\beta$}\! \cdot \! (\hat{\mbox{\boldmath $k$}} - \hat{\mbox{\boldmath $k$}}')}\right). 
\label{eq_jk} \end{eqnarray} We calculate the $I_{k,l}$ and $J_{k}$ coefficients by expanding the integrands of equations~\eqref{eq_ikl} and~\eqref{eq_jk} in powers of $p/m_{e}$ and $\omega/m_{e}$. These integrations are ideally suited to symbolic computer algebra packages (we used Maple). To derive a kinetic equation correct to $O(\theta_{\!e}^{2}, \beta \theta_{\!e}^{2}, \beta^{2} \theta_{\!e})$, one must evaluate $I_{1,0}$ through to $I_{5,5}$, and $J_{0}$ through to $J_{4}$. Substituting the resulting coefficients back into equation~\eqref{eq_fok}, we find the kinetic equation \begin{eqnarray} \lefteqn{% \frac{1}{N_{e}\sigma_{T}} \frac{D n(\omega, \hat{\mbox{\boldmath $k$}})}{D t} = \frac{1}{x^{2}} \frac{\partial}{\partial x}\Biggl\{ \theta_{\!e} x^{4} \left[\frac{\partial n}{\partial x} + n(1+n)\right] + \theta_{\!e}^{2} \Biggl[ {\frac{5}{2}} x^{4} \left(\frac{\partial n}{\partial x} + n(1+n)\right)} \nonumber \\ & & + {\frac{7}{10}} \frac{\partial} {\partial x} \left( x^{6} \npp{2} \right) + {\frac{7}{5}}x^{3}(1+2n) \frac{\partial}{\partial x} \left(x^{3} \frac{\partial n}{\partial x}\right) + {\frac{7}{10}} x^{6} \frac{\partial n}{\partial x} \left(1-2\frac{\partial n}{\partial x}\right) \Biggr] + {\frac{1}{3}} \beta^{2} x^{4} \frac{\partial n}{\partial x} \nonumber \\ &&+ \beta^{2} \theta_{\!e} \Biggl[ {\frac{5}{2}} x^{4} \frac{\partial n}{\partial x} + {\frac{7}{15}} \frac{\partial}{\partial x} \left(x^{6} \npp{2}\right) + {\frac{4}{3}} x^{4} n(1+n) + {\frac{7}{15}} x^{3} (1+2n)\frac{\partial}{\partial x} \left(x^{3} \frac{\partial n}{\partial x}\right)\nonumber \\ && - {\frac{7}{15}} x^{6} \left(\frac{\partial n}{\partial x}\right)^{2} \Biggr] \Biggr\} - x P_{1}(\mu) \beta \Biggl[\frac{\partial n}{\partial x} + \theta_{\!e} C_{1} + \theta_{\!e}^{2} C_{2} \Biggr] +x P_{2}(\mu) \beta^{2} \Biggl[\frac{2}{3} \frac{\partial n}{\partial x} + {\frac{11}{30}} x\npp{2} +\theta_{\!e} C_{3} \Biggr] \nonumber \\ &&+ O(\theta_{\!e}^{3},\beta \theta_{\!e}^{3}, 
\beta^{2} \theta_{\!e}^{2},\beta^{3}), \label{eq_komp} \end{eqnarray} where $\mu$ is the cosine of the angle between the photon momentum and the peculiar velocity of the electron distribution, $\mu= \hat{\mbox{\boldmath $k$}} \! \cdot \! \mbox{\boldmath $\beta$} /\beta$, the $P_{l}(\mu)$ are the Legendre polynomials, and $N_{e}$ is the number density of electrons in the frame where the bulk velocity vanishes. The coefficient $C_{1}$ is given by \begin{equation} C_{1} = 10\frac{\partial n}{\partial x} + \frac{1}{5} x\left(47\npp{2}+7x\npp{3}\right) + 8n(1+n) +\frac{1}{5}x(1+2n)\left(31 \frac{\partial n}{\partial x} + 7 x\npp{2} \right), \end{equation} $C_{2}$ by \begin{eqnarray} C_{2} &=& 25 \frac{\partial n}{\partial x} + {\frac{1}{10}}x \left(1117 \npp{2} + 847 x\npp{3} + 183 x^{2}\npp{4} + 11 x^{3} \npp{5} \right)\nonumber \\ &&\mbox{} + 20 n(1+n) + {\frac{1}{10}} x (1+2n) \left(911\frac{\partial n}{\partial x} + 1015 x \npp{2} + 292 x^{2} \npp{3} + 22 x^{3} \npp{4} \right) \nonumber \\ && \mbox{} + {\frac{1}{10}} x^{2} \left(273 \frac{\partial n}{\partial x} + 109 x \npp{2} + 11 x^{2} \npp{3}\right), \end{eqnarray} and $C_{3}$ by \begin{eqnarray} C_{3} & = & 4 \frac{\partial n}{\partial x} + 12 x \npp{2} + 6 x^{2} \npp{3} + {\frac{19}{30}}x^{3} \npp{4} \nonumber \\ && \mbox{} + {\frac{8}{3}} n(1+n) + {\frac{1}{30}} x (1+2n) \left( 188 \frac{\partial n}{\partial x} + 132 x \npp{2} + 19 x^{2} \npp{3} \right). \label{eq_c3} \end{eqnarray} Equations~\eqref{eq_komp}--\eqref{eq_c3} are the main result of this paper. Some of the terms in equation~\eqref{eq_komp} have been given previously in the literature; setting $\beta=0$ recovers the kinetic equation given in Challinor \&\ Lasenby (1998) which we used to investigate corrections to the thermal Sunyaev-Zel'dovich effect (higher-order corrections to the $\beta=0$ equation were given by Itoh et al. 
(1998)), the $O(\beta^{2})$ term inside the curly braces is implicit in the $l=0$ moment equation given in Psaltis \&\ Lamb (1997) (but not the $O(\beta^{2} \theta_{\!e})$ term which is fourth-order in the electron velocity), and the $O(\beta^{2})$ term and part of the $O(\beta \theta_{\!e})$ term are implicit in the analysis of Sazonov \&\ Sunyaev (1998), although they have ignored the parts of these terms arising from induced scattering and recoil effects. For any particular application, the validity of neglecting the higher-order terms in equation~\eqref{eq_komp} should be carefully checked. The terms that we have given are more than sufficient to describe the kinematic Sunyaev-Zel'dovich effect for typical cluster parameters, $\beta \simeq 1/300$ and $k_{B}T_{e}\simeq 10\mbox{\rm \,keV}$. (In Nozawa et al.\ (1998) it is shown that the $O(\beta^{2})$ terms are insignificant for these parameters, while the $O(\beta \theta_{\!e})$ term gives a correction of $-8.2\%$ to the kinematic Sunyaev-Zel'dovich effect at the position of the zero of the thermal effect, and the $O(\beta \theta_{\!e}^{2})$ term gives a correction of $+1.3\%$.) If required, higher-order terms in the kinetic equation can be derived from the Fokker-Planck expansion (eq.~\eqref{eq_fok}), although in practice the evaluation of the $I_{k,l}$ and $J_{k}$ rapidly becomes prohibitive. We have written equation~\eqref{eq_komp} in a form that manifestly preserves the total number of photons, as required for Compton scattering. For $\beta=0$, we obtain a generalisation of the diffusion approximation to the Boltzmann equation (see, for example, Prasad et al.\ (1988)). Note that we have derived our equation (eq.~\eqref{eq_komp}) by a systematic expansion of the original Boltzmann equation (eq.~\eqref{eq_bol}) in $1/m_{e}$; we have not appealed to the heuristic arguments that form the basis of the diffusion approximation. 
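For reference, retaining only the leading $O(\theta_{\!e})$ term in the curly braces of equation~\eqref{eq_komp} and setting $\beta=0$ recovers the familiar Kompaneets equation,
\begin{equation}
\frac{1}{N_{e}\sigma_{T}} \frac{D n}{D t} = \frac{\theta_{\!e}}{x^{2}} \frac{\partial}{\partial x}\left\{x^{4} \left[\frac{\partial n}{\partial x} + n(1+n)\right]\right\},
\end{equation}
so that the remaining terms in equation~\eqref{eq_komp} may be read off directly as relativistic and bulk-velocity corrections to the Kompaneets operator.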
For a spatially localised, isotropic distribution of photons, the time rate of change of the total photon number $N_{r}$ is given by \begin{equation} \frac{d N_{r}}{d t} = 2 \int \frac{d^{3} \mbox{\boldmath $k$}}{(2\pi)^{3}} d^{3} \mbox{\boldmath $x$} \frac{D n(\omega, \hat{\mbox{\boldmath $k$}})}{D t}, \end{equation} where $d^{3}\mbox{\boldmath $x$}$ is the spatial measure, and the factor of two accounts for the two polarisations. Integrating equation~\eqref{eq_komp} over photon momenta, the terms involving $\mu$ vanish by virtue of the integral over solid angles, and the $\mu$-independent terms (those in curly braces) vanish after integration over photon energies, since these terms are written in the form of a conservation law. It is not hard to show that equation~\eqref{eq_komp} with $\beta=0$ admits static, homogeneous solutions with $n = 1/(\exp(x-\nu)-1)$ as required for thermodynamic equilibrium. \section{Rate of energy transfer} The rate of increase of energy density $E_{r}$ in the radiation due to Compton scattering is given by \begin{equation} \frac{\partial E_{r}}{\partial t} = 2 \int \frac{d^{3} \mbox{\boldmath $k$}}{(2\pi)^{3}} \frac{D n(\omega,\hat{\mbox{\boldmath $k$}})}{D t} \omega. \end{equation} Substituting for $Dn/Dt$ from equation~\eqref{eq_komp}, we see that only the $\mu$-independent terms contribute to the energy transfer. Substituting a Planck distribution at temperature $T_{r}$ for $n$ and performing the integral, we find \begin{eqnarray} \frac{dE_{r}}{dt} &=& E_{r} N_{e} \sigma_{T}\biggl\{ 4(\theta_{\!e}-\theta_{\!r})\left[1+{\frac{5}{2}}\theta_{\!e} - 21 {\frac{\zeta(6)}{\zeta(4)}} \theta_{\!r} + O(\theta^{2}) \right] \nonumber \\ &&\mbox{}+ \beta^{2} \left[ {\frac{4}{3}} + 10 \theta_{\!e} -\left({\frac{16}{3}}+28 {\frac{\zeta(6)}{\zeta(4)}} \right)\theta_{\!r} + O(\theta^2) \right] + O(\beta^{4}) \biggr\}, \label{eq_erg} \end{eqnarray} where $\zeta(x)$ is the Riemann Zeta function, and $\theta_{\!r} \equiv T_{r}/m_{e}$. 
The terms in the first square bracket in equation~\eqref{eq_erg} are independent of $\beta$; they tend to equalise the radiation and electron temperatures. These terms (which were also given in Challinor \&\ Lasenby (1998)) were first derived by Woodward (1970), where higher-order terms were also given. The terms in the second square bracket in equation~\eqref{eq_erg} represent the lowest-order effects of the electron bulk velocity on the energy transfer. The first such term $4E_{r}\sigma_{T} N_{e}\beta^{2} /3$ is well known (see, for example, Sazonov \&\ Sunyaev (1998) and references within). \section{The Sunyaev-Zel'dovich effect} For CMB photons passing through a cluster at redshift $z$, the average value of $x=\omega/T_{e}$ is $\bar{x} \simeq 6.2 \times 10^{-4} (1+z)/ k_{B} T_{e}$, where $k_{B} T_{e}$ is expressed in $\mbox{\rm \,eV}$. Since the electron temperature for a hot cluster is typically $\simeq 10\mbox{\rm \,keV}$, it follows that $\bar{x} \ll 1$. In this limit, equation~\eqref{eq_komp} reduces to \begin{eqnarray} \lefteqn{% \frac{1}{N_{e}\sigma_{T}} \frac{D n(\omega, \hat{\mbox{\boldmath $k$}})}{D t} = \frac{1}{x^{2}} \frac{\partial}{\partial x}\Biggl\{ x^{4}\left[\theta_{\!e} + \frac{1}{3}\beta^{2}+\frac{5}{2}\theta_{\!e}(\theta_{\!e}+\beta^{2}) \right]\frac{\partial n}{\partial x}} \nonumber \\ &&\mbox{}+ \theta_{\!e}\left[\frac{7}{10}\theta_{\!e}+\frac{7}{15}\beta^{2}\right] \frac{\partial}{\partial x}\left(x^{6}\npp{2}\right)\Biggr\} - x P_{1}(\mu)\beta \Biggl[ \frac{\partial n}{\partial x} + \theta_{\!e} \left(10\frac{\partial n}{\partial x} + \frac{47}{5} x\npp{2} + \frac{7}{5} x^{2}\npp{3} \right)\nonumber \\ &&\mbox{}+ \theta_{\!e}^{2} \left(25\frac{\partial n}{\partial x} + \frac{1117}{10}x\npp{2} + \frac{847}{10}x^{2}\npp{3} + \frac{183}{10} x^{3}\npp{4} + \frac{11}{10} x^{4} \npp{5} \right) \Biggr] \nonumber \\ &&\mbox{}+x P_{2}(\mu)\beta^{2} \Biggl[\frac{2}{3} \frac{\partial n}{\partial x} + \frac{11}{30} x\npp{2} 
+\theta_{\!e}\left(4\frac{\partial n}{\partial x}+12x\npp{2} + 6x^{2}\npp{3} + \frac{19}{30}x^{3} \npp{4} \right) \Biggr]. \end{eqnarray} Setting $n=1/(\exp(\alpha x)-1)$, where $\alpha \equiv T_{e}/T_{r}$, recovers the combined thermal and kinematic Sunyaev-Zel'dovich spectral distortion given in Nozawa et al.\ (1998) and Sazonov \&\ Sunyaev (1998), where the relative importance of the various terms for typical cluster parameters is discussed in detail.

\section{Conclusion}

Using the covariant Fokker-Planck expansion described in Nozawa et al.\ (1998), we have derived a kinetic equation describing the interaction of an isotropic radiation field with a thermal distribution of electrons, which moves at bulk velocity $c\beta$ relative to the radiation, in the limit of low optical depth. Relativistic effects are included to $O(\theta_{\!e}^{2}, \beta\theta_{\!e}^{2}, \beta^{2}\theta_{\!e})$, or equivalently to $O(\Theta^{2}, \beta\Theta^{2}, \beta^{2}\Theta)$ where $\Theta$ is either of $\hbar \omega/ m_{e} c^{2}$ or $k_{B} T_{e} / m_{e} c^{2}$, which is sufficient to describe the corrections to the kinematic Sunyaev-Zel'dovich effect for typical cluster parameters (Nozawa et al.\ 1998). The method may be easily extended to include higher-order relativistic effects if required. We have calculated the rate of energy transfer between a Planckian radiation field and the electrons, obtaining the usual ``thermal'' and ``kinematic'' terms, as well as $O(\beta^{2} \theta)$ ``interference'' terms. Specialising to the limit $T_{r} \ll T_{e}$, we confirm the relativistic corrections to the thermal and kinematic Sunyaev-Zel'dovich effect given in Nozawa et al.\ (1998) and Sazonov \&\ Sunyaev (1998).

\acknowledgements We would like to express our gratitude to Roberto Turolla and Silvia Zane for bringing a number of useful references to our attention.
Q: Center text value of a field in polygon and display

I would like to display the value of a field over the polygon. In this case, the field name is Subname, and I would like to use a specific font. I would like to perform this action for each polygon on the layer.

A: Specific instructions then would be to right-click the layer in the table of contents and go to "Properties". Then choose the "Labels" tab. Check the box to "Label Features in this layer" and choose your "Subname" field as your "Label Field". Change the font using the options under "Text Symbol".
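If you would rather control the font from the expression itself, ArcMap label expressions also accept text formatting tags, and the Label Expression dialog can use the Python parser. A minimal sketch of the idea — the helper function, font name, and size are just examples, and `[Subname]` stands for your field:

```python
def subname_label(field_value, font="Times New Roman", size=12):
    # Wraps a field value in ArcGIS text formatting tags so the label is
    # drawn in the requested font; the font and size are example choices.
    return "<FNT name='{0}' size='{1}'>{2}</FNT>".format(font, size, field_value)

print(subname_label("North Field"))
```

In the Label Expression dialog (with the parser set to Python), the equivalent one-liner would be `"<FNT name='Times New Roman' size='12'>" + [Subname] + "</FNT>"`.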
using System;
using System.ServiceModel;
using System.Threading;
using System.Threading.Tasks;

namespace Xamarin.Forms.Platform
{
    public abstract class IsolatedStorageFileBase : IIsolatedStorageFile
    {
        public abstract Task CreateDirectoryAsync(string path);

        public abstract Task<bool> GetDirectoryExistsAsync(string path);

        public abstract Task<bool> GetFileExistsAsync(string path);

        public abstract Task<DateTimeOffset> GetLastWriteTimeAsync(string path);

        public abstract Task<System.IO.Stream> OpenFileAsync(
            string path, FileMode mode, FileAccess access, FileShare share);

        public abstract Task<System.IO.Stream> OpenFileAsync(
            string path, FileMode mode, FileAccess access);

        Task<System.IO.Stream> Xamarin.Forms.IIsolatedStorageFile.OpenFileAsync(
            string path, Xamarin.Forms.FileMode mode, Xamarin.Forms.FileAccess access,
            Xamarin.Forms.FileShare share)
        {
            return OpenFileAsync(path, (FileMode)mode, (FileAccess)access, (FileShare)share);
        }

        Task<System.IO.Stream> Xamarin.Forms.IIsolatedStorageFile.OpenFileAsync(string path,
            Xamarin.Forms.FileMode mode, Xamarin.Forms.FileAccess access)
        {
            return OpenFileAsync(path, (FileMode)mode, (FileAccess)access);
        }
    }
}
What The Piña Colada Song Teaches Us About Marketing

Updated by David Gaughran 7 April 2021.

You may love it, you may hate it, but you've definitely heard it: The Piña Colada Song is one of the most recognizable and enduring hits of the last fifty years – the only song ever to hit No. 1 in America in two different decades. But it almost sank without trace, and its near-failure can teach us a lot about book marketing. The artist behind it is Rupert Holmes, who is primarily known to many for penning this one tune, but he has led an interesting and varied life. While he currently lives in New York, Holmes was born in Cheshire in 1947 as David Goldstein – a US Army brat, with an American father and an English mother, the wonderfully named Gwendolen. His was a very musical upbringing, and when the family were uprooted and moved to Nanuet, New York in the 1950s, he ended up attending the prestigious Manhattan School of Music and majored in the clarinet, although he didn't follow his brother into the world of opera and "serious" music. Instead, he became a session musician and did side-gigs like writing jingles for shampoo commercials. Holmes was delighted to be working in the music business at any level, but it also enabled him to support himself while working on his own music. In the early 1970s, he had a couple of minor hits under his own name, while also writing songs for big stars like Dolly Parton, the Drifters, Gene Pitney, and the Partridge Family. He broke out himself in 1974 with the album Widescreen, which really nailed down his soon-to-be-signature style of witty but romantic story songs. And when Barbra Streisand asked to cover some of the songs from that album for her movie A Star Is Born, he'd hit the big time. It was the release of his fifth solo album – and particularly the single Escape (The Piña Colada Song) – which made him truly famous.
At least among those who didn't mistake the singer for Barry Manilow, a surprisingly persistent error over time. While that case of mistaken identity didn't dampen the song's initial reception, another form did. It was originally released as Escape – with no mention of those famous piña coladas in the song title. People would call up radio stations and ask for the song about piña coladas, only to be met with bafflement. And when they went to their local record store to order The Piña Colada Song, they were told the store didn't have it. They did, of course, but it was titled something else: Escape. Word-of-mouth was hitting an impenetrable wall and sales remained tepid throughout October 1979. The record label begged Rupert Holmes to change the title to The Piña Colada Song, but he resisted the pressure. The theme of escape is central to the whole meaning of the song, he clearly felt, which is all about the protagonist wanting to escape his boring life, and the prospective love interest taking out a personal ad to escape hers. Rupert Holmes dug his heels in and refused to change the title… right up until December of that year when he finally agreed to a compromise and renamed the song Escape (The Piña Colada Song). It went straight to No. 1.

Marketing Lesson #1: a title is part of the commercial packaging – ignore your Inner Artist when it comes to business decisions.

There was another reason for Holmes' reluctance to change the song title. The original version never mentioned piña coladas at all. It's hard to imagine now, with that chorus tattooed into your brain over decades of radio airplay, but the original line of the chorus was "If you like Humphrey Bogart…" rather than any kind of drink. Holmes only changed it at the last minute when he thought it didn't feel right. The piña colada didn't have any special significance to him, it just happened to be the first cocktail which popped into his head. He later confessed that he didn't even drink piña coladas.
Marketing Lesson #2: don't associate your brand with creamy drinks if you are lactose intolerant.

I kid. I have no idea if Rupert Holmes is sensitive to dairy; I think he just doesn't like piña coladas. Fans buy them for him all the time, assuming he is as fond of them as his characters are. But at the time of writing the song, he had never even tasted one. The real reason he changed the line was the feedback he got when he was running the song by various people. For reasons we'll get to in a moment, he only had a day or so to nail down the lyrics, and when he was on the way to the studio, he read them out for his taxi driver. What he was specifically seeking opinions on was the infamous twist at the close of the song. If you're not familiar with the narrative behind this most famous of story songs, it goes a little like this: a guy in a long-term relationship – presumably married, but that's not made explicit – is a little… bored with his "woman" and starts perusing the personals. For the millennials in the audience, this was an old-school form of Tinder which took place in a newspaper, where people could take 24 hours or more to swipe right. Anyway, one of the personal ads catches his eye, one particular lady's declaration of love for piña coladas and beachy trysts, as well as antipathy for the yoga craze that was also flourishing at the time. Our wannabe cheater responds with similar affection for piña coladas, expanding this boozy smorgasbord to include champagne, but dismissing the comparative merits of health food. (His views on sandy shagging are unstated but can be safely inferred.) He offers to meet the mysterious lady tomorrow, at noon, in O'Malley's Bar, where he says they can plan their escape, and presumably their new life together. The twist – and this is where it gets really weird – is that he walks into the bar and the lady behind the personal ad is… his wife. (Or girlfriend. We never got that fully established.
Let's go with "partner.") Either way, they have been together for a while – that's clear from the start – and neither of them seemed remotely perturbed by the fact they are respectively taking out, and responding to, personal ads looking for extra-curricular activities. They both seem quite blasé and enjoy discovering they have more in common than previously assumed. The seventies, man; it was a different time. Holmes wasn't worried about whether that twist was bizarre or even palatable – only whether it was too obvious. He sang the song to several people to see if they could see it coming, but obviously they couldn't because it's completely crackers. Another problem emerged during this process though: the line about Humphrey Bogart just didn't scan well. It was only in the studio, just before laying down the vocal, that Rupert Holmes decided to change it to the first exotic cocktail he could recall. Time was tight, he was just about to get on the mic, and he didn't have the luxury of dawdling. Perhaps we should all be glad it wasn't a Harvey Wallbanger.

Marketing Lesson #3: Loop in feedback, and bake it right into the product.

If all this wasn't random enough, Escape wasn't even originally planned to be a single from that album. It was nothing more than a filler track – Holmes needed something up-tempo to offset all the ballads and went through a variety of half-written songs and melodies, desperately trying to find something that would fit before they ran out of studio time. The drummer had over-indulged during this lull and had to be put in a taxi home, forcing Holmes to use a primitive form of sampling to lay down the backing track. But it didn't matter so much, this was just filler, he thought. Even the vocal was done in just one take, as they were running out of time on the final day of recording. Holmes couldn't summon the energy or enthusiasm for a second attempt. He was worn out from recording this album, and particularly this final song.
As I mentioned previously, he only had a day or so to write the lyrics. He had the melody already, but no words to go with it, and pulled his usual trick of scanning the personal ads in the Village Voice to see if any interesting characters jumped out at him. One ad did: a woman describing herself in such glowing terms that he sardonically wondered to himself why such a putative catch would need to place a personal ad to get some action. Then he thought he was perhaps being unfair. She might just be looking for adventure, for an escape. He wondered what would happen if he replied… Boom, the song was born.

Marketing Lesson #4: hunger is the best sauce and necessity is the mother of invention.

Even after the crazy recording session, when the track came together, Holmes didn't view it as a standout track on the album at all. He considered it "too simple musically and harmonically … it was just supposed to balance out the album." And he was surprised when the studio expressed interest in releasing it as the main single, but decided not to fight them on it, if they were so convinced – and they most certainly were.

Marketing Lesson #5: seek opinions from those with more emotional distance.

Rupert Holmes is sometimes described as a one-hit wonder, which is both unjust and inaccurate – he had several hits before and after the monster success which overshadowed the rest of his pop career, just not to the same crazy level. And that professional career was just as varied afterwards too. Aside from continuing to write and record music, he was a playwright and novelist and composer. He wrote a TV show and worked on several movies. He won Tony Awards – twice – for a couple of the many musicals he penned. And he even won the Edgar Award for one of the mysteries he wrote, a book called Where The Truth Lies, which was published by Random House and subsequently turned into a movie starring – naturally – Kevin Bacon. The song was anomalous in some fairly key ways though.
As Rupert Holmes said himself, "It's my most successful song and probably the least typical of my work." That came from an interview he gave in 2003, just after he won the Tony award for his musical The Mystery of Edwin Drood and just before he won the Edgar for his mystery Where the Truth Lies, so he was hardly in a creative rut or sour about it all. He seemed bemused more than anything.

It was never meant to be heard 100 million times; it was meant to be a little short story with a little wink at the end of it, and that was supposed to be it. It was also not supposed to make the piña colada a popular drink in Idaho. – Rupert Holmes

I guess that's the final marketing lesson here, although it's more of a life-lesson: you don't get to decide how people feel about your work. All we can do is keep putting it out there. It becomes its own thing then. In a way, it's not ours anymore.

I hope you enjoyed this post! I just wanted to let you know that I send out exclusive content every Friday to my mailing list subscribers with a special focus on book marketing for self-published authors. I talk about the latest tricks with Facebook Ads or BookBub Ads, and I also get into topics like content marketing, reader targeting, and everything else under the sun that pertains to building audience and reaching readers. By signing up to my list, you get access to all the old emails too, as well as sneak previews of upcoming books (meaning you get the jump on the latest tricks and strategies of everyone else), and exclusive discounts too. You also get a FREE copy of Following – a book that you can't get anywhere else! I strongly recommend that you join over ten thousand authors and sign up today because there are all sorts of bonuses you will enjoy.

Born in Ireland, he now lives in a little fishing village in Portugal, although this hasn't increased the time spent outside.
He writes novels under another name, has helped thousands of authors build a readership with his books, blogs, workshops, and courses, and has created marketing campaigns for some of the biggest self-publishers on the planet. Friend to all dogs.

17 Replies to "What The Piña Colada Song Teaches Us About Marketing"

sobia says: Such a great post, this is very interested content, thanks, David

A wonderful song with a realistic meaning in our society. The lyrics seemed risky until the humorous twist at the end. Excuse me as I'm going outside to get caught in the rain.

Luong Chau (Love Power Tool) says: Hi David, Good evening !!! I've heard about Pina Colada Song before. But today came the chance to enjoy it! Managed by Rupert Holmes but that's pretty interesting. I don't think you dislike it. Pina Colada Song will also help a lot in marketing. I'm still saving it to my store. I am very happy to read this article. Happy 2020!!!

Valerie Ormond says: Thank you so much for this! Sometimes I think people get so caught up in what is popular and what it is that they should and shouldn't do that we lose creativity. Appreciate this story, and it will stick with me for some time.

Fran Hunne says: Not here for the marketing lessons, but there was a lot of background information to this song I was happy to take away. (You do not get to decide what I like about your blogpost …)

chris reiswig says: No kidding, this article was written at just the right time for me. For some reason, I've been fixated on this one song recently and this is the best write-up I found about it, and it turns out you published it just 4 days ago. Very informative and well-executed, I look forward to reading whatever else you have to offer.

Sheila M. Cronin says:

Deborah Fredericks says: OMG, that song! Get it away from me!! Not really. It was so over-played at the time, I guess I'm still sick of it.

John Sefton says: Hey David. That was a very interesting story about the Pina calada song, I really enjoyed it. Also the related marketing tips. Love it! Now I can't get the song out of my head lol Alexa, play…

Nice piece. I really enjoyed it. As someone trying to pin down a title for my novel, it was especially relevant. We cannot control how the reader or listener perceives our work, we can only get it out there and see if it takes. Of course, we can try our darnedest to make it happen, too.

Harald Johnson says: Well, of course, I immediately opened up a tab for the song on YouTube (the one with Spanish subtitles) with him singing in a brown sweater with those '70s glasses. Ah, the late '70s; those were the days. But great analogy/analysis! Couple of comments: ML#1 & 2: Right-on about the Title and ignoring the Inner Artist. When casting for my birth-of-New-York-City historical novel, I went with the obvious and prosaic, which actually appears in the Amazon Ads Search Terms list. No flights of fancy. Back to the song: That chorus change from Humphrey Bogart to Piña Colada was truly inspired by him. PC means "exotic escape"; HB means, well, not (re: Casablanca). BTW: the Piña Colada ingredient coconut cream/milk is not dairy; no lactose. 😉

David Gaughran says: Khaaaaaaaaaaaaan!

Jane Cochrane says: Nice one David!

Ruth Harris says: David, thanks for the excellent points. Reminds me of Mario Puzo's comment when The Godfather hit big. "If I'd known it was going to be this successful, I would have made it better." MP started out as a well-reviewed literary writer so it goes to show that lots of times, the author doesn't have a clue.

Hugh O'Donnell says: Such an enjoyable piece! Fun to read while teaching important lessons for authors. Thank you, David!
Just wait for my follow up "What Guns N' Roses can teach us about superfans." © 2022 by David Gaughran
package edu.cmu.lti.oaqa.framework.collection;

import java.io.IOException;
import java.util.UUID;

import javax.jms.JMSException;
import javax.jms.MapMessage;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import org.apache.uima.analysis_engine.AnalysisEngine;
import org.apache.uima.analysis_engine.AnalysisEngineProcessException;
import org.apache.uima.cas.CAS;
import org.apache.uima.collection.CollectionException;
import org.apache.uima.collection.CollectionReader_ImplBase;
import org.apache.uima.jcas.JCas;
import org.apache.uima.resource.ResourceInitializationException;
import org.apache.uima.util.Progress;
import org.apache.uima.util.ProgressImpl;

import com.google.common.base.Throwables;
import com.google.common.io.Closeables;

import edu.cmu.lti.oaqa.ecd.BaseExperimentBuilder;
import edu.cmu.lti.oaqa.framework.DataElement;
import edu.cmu.lti.oaqa.framework.async.ProducerManager;
import edu.cmu.lti.oaqa.framework.async.Topics;
import edu.cmu.lti.oaqa.framework.async.activemq.ActiveMQQueueConsumer;
import edu.cmu.lti.oaqa.framework.async.activemq.ActiveMQQueueProducer;
import edu.cmu.lti.oaqa.framework.async.activemq.ActiveMQTopicSubscriber;
import edu.cmu.lti.oaqa.framework.types.ExperimentUUID;
import edu.cmu.lti.oaqa.framework.types.InputElement;

public abstract class AbstractCollectionReaderConsumer extends CollectionReader_ImplBase
        implements MessageListener {

  int nextInput;

  private String consumerUuid;

  private String experimentUuid;

  private DataElement nextElement;

  private AnalysisEngine[] decorators;

  private ActiveMQQueueConsumer consumer;

  private ActiveMQQueueProducer producer;

  private ActiveMQTopicSubscriber closeListener;

  private boolean processing = true;

  private int stageId;

  private String lastSequenceId;

  private String dataset;

  @Override
  public void initialize() throws ResourceInitializationException {
    // String user = (String) getConfigParameterValue("amq-username");
    // String password = (String) getConfigParameterValue("amq-password");
    String url = (String) getConfigParameterValue("broker-url");
    String prefetchUrl = url + "?jms.prefetchPolicy.queuePrefetch=0";
    this.consumerUuid = UUID.randomUUID().toString();
    this.experimentUuid = (String) getConfigParameterValue(BaseExperimentBuilder.EXPERIMENT_UUID_PROPERTY);
    this.stageId = (Integer) getConfigParameterValue(BaseExperimentBuilder.STAGE_ID_PROPERTY);
    try {
      initDecorators();
      this.closeListener = new ActiveMQTopicSubscriber(url, this, Topics.COLLECTION_READER_COMPLETE);
      this.consumer = new ActiveMQQueueConsumer(prefetchUrl, this.experimentUuid
              + AbstractCollectionReaderProducer.COLLECTION_READER_QUEUE_SUFFIX);
      this.producer = new ActiveMQQueueProducer(url, this.experimentUuid
              + ProducerManager.COMPLETION_QUEUE_SUFFIX);
    } catch (Exception e) {
      throw new ResourceInitializationException(e);
    }
  }

  private void initDecorators() {
    nextInput = 0;
    String decoratorsNames = (String) getConfigParameterValue("decorators");
    if (decoratorsNames != null) {
      this.decorators = BaseExperimentBuilder.createAnnotators(decoratorsNames);
    }
  }

  @Override
  public boolean hasNext() throws IOException, CollectionException {
    if (lastSequenceId != null) {
      try {
        notifyProcessed(dataset, lastSequenceId);
      } catch (JMSException e) {
        e.printStackTrace();
      }
    }
    return waitForNext();
  }

  @Override
  public void getNext(CAS aCAS) throws IOException, CollectionException {
    try {
      nextInput++;
      JCas jcas = aCAS.getJCas();
      jcas.setDocumentText(nextElement.getText());
      ExperimentUUID expUuid = new ExperimentUUID(jcas);
      expUuid.setUuid(experimentUuid);
      expUuid.setStageId(stageId);
      expUuid.addToIndexes();
      String sequenceId = nextElement.getSequenceId();
      InputElement next = new InputElement(jcas);
      next.setDataset(nextElement.getDataset());
      next.setQuestion(nextElement.getText());
      next.setSequenceId(sequenceId);
      next.addToIndexes();
      decorate(jcas);
      this.dataset = nextElement.getDataset();
      this.lastSequenceId = sequenceId;
    } catch (Exception e) {
      throw new CollectionException(e);
    }
  }

  private boolean waitForNext() throws CollectionException {
    if (!processing) {
      return false;
    }
    try {
      Message msg = consumer.receive();
      try {
        MapMessage map = (MapMessage) msg;
        if (map == null) {
          // consumer is already closed
          return false;
        }
        int stageId = map.getInt("stageId");
        if (this.stageId != stageId) {
          throw new IllegalStateException(String.format("Received stage id %s expected %s ",
                  stageId, this.stageId));
        }
        nextElement = getDataElement(map);
        // TODO: CONCEPTUAL ERROR, THIS ACKNOWLEDGES THAT THE TOPIC HAS BEEN RECEIVED AND
        // DECORATED, NOT PROCESSED
        msg.acknowledge();
        return true;
      } catch (Exception e) {
        consumer.recover();
        Throwables.propagateIfInstanceOf(e, CollectionException.class);
        throw new CollectionException(e);
      }
    } catch (JMSException e) {
      throw new CollectionException(e);
    }
  }

  protected abstract DataElement getDataElement(MapMessage map) throws Exception;

  protected void decorate(JCas jcas) throws AnalysisEngineProcessException {
    if (decorators != null) {
      for (AnalysisEngine appender : decorators) {
        appender.process(jcas);
      }
    }
  }

  @Override
  public Progress[] getProgress() {
    return new Progress[] { new ProgressImpl(nextInput, -1, Progress.ENTITIES) };
  }

  private void notifyProcessed(String dataset, String lastSequenceId) throws JMSException {
    MapMessage message = producer.createMapMessage();
    message.setString("dataset", dataset);
    message.setString("sequenceId", lastSequenceId);
    message.setString("consumerUuid", getConsumerUuid());
    producer.send(message);
  }

  private String getConsumerUuid() {
    return consumerUuid;
  }

  @Override
  public void onMessage(Message msg) {
    TextMessage message = (TextMessage) msg;
    try {
      // TODO: Synchronize lock?
      if (message.getText().equals(experimentUuid)) {
        processing = false;
        Closeables.close(consumer, true);
      }
    } catch (Exception e) {
      System.err.println("Unable to process message: " + message);
    }
  }

  @Override
  public void close() throws IOException {
    System.out.printf("(%s) Closing connections!!\n", stageId);
    Closeables.close(consumer, true);
    Closeables.close(producer, true);
    Closeables.close(closeListener, true);
  }
}
{ "redpajama_set_name": "RedPajamaC4" }
8,712
\section{Introduction} \begin{figure*}[h!] \centering \includegraphics[width=1\textwidth]{Figures/Burst_Hypothesis_Figure_V5.pdf} \caption{Burst coding hypothesis(es). \textbf{a} Burst coding motivated by the observation that certain response patterns arise only when a specific generation mechanism is present. Here a synaptic pathway (pathway 1) engages only spikes in relative isolation, and another synaptic pathway (pathway 2) engages bursts when present in conjunction with pathway 1. \textbf{b} Burst coding also arises from the observation that STD and STF synapses transmit bursts differently, with STD transmitting isolated spikes and the first spikes in a burst, and STF transmitting mostly the later spikes in a burst. \textbf{c} Burst coding surfaces from long-term potentiation, where pairing pre-post activity with bursts leads to LTP while pairing with isolated spikes leads to LTD.} \label{fig:variations} \end{figure*} The problem of neural coding is to attribute the correct interpretation to neuronal signals. The "\emph{basis of sensation}" is generally taken to be that neurons represent input features with the number but not the shape of action potentials \cite{Adrian1928a}. The central dogma of this view states that "\emph{high impulse frequency} (...) \emph{corresponds to high certainty that the trigger feature is present}" \cite{Barlow1972a}. In the presence of noise, interpreting neuronal responses becomes a statistical problem whereby high impulse frequencies may be randomly generated and only imply the presence of a feature if these high frequencies occur consistently. It has been argued that, somewhat counter-intuitively, such random utterance of all firing frequencies is a way of maximizing information transmission \cite{Atick1990a}.
As a result, inputs are represented by computing averages, turning the problem of neural coding towards the challenge of interpreting rate-based responses in large, possibly heterogeneous and context-dependent populations \cite{Georgopoulos1986a,Averbeck2006a,Shamir2006a,Yu2008a}. This dogma has been challenged by the idea that not all spikes are equal, that the central currency for neurons is not simply the spike, and that the result is not a binary code of spikes and silences but a ternary code: formed by silences, spikes, or bursts of high-frequency action potentials. In this burst coding hypothesis, high impulse frequency is not the sum of its parts, but a distinct type of event immersed in a separate causal chain. We review evidence supporting the burst coding hypothesis according to three of its variations (Fig. \ref{fig:variations}). The first variation asks if neurons have evolved special means to generate bursts, which are distinct from those to produce spikes in relative isolation. The second and third variations focus on the question of whether synapses treat bursts and spikes in the same way, either for transmission or for engaging in long-term plasticity. Each of these three hypotheses could be unrelated, but we show that a number of algorithms would benefit from an alignment between the generation mechanisms and the synaptic mechanisms. When all these hypotheses converge, the nervous system acquires the capability to process two streams of information simultaneously. Theories exploiting this capability for implementing top-down attention and learning have only begun to be considered. \section{What is a burst?} If the brain uses a ternary code, should we expect, when recording from neurons in vivo, to see well-separated events, some unfolding at 100 Hz and others unfolding at much lower frequencies? While many neurons display such bimodal interspike interval (ISI) distributions, many do not.
Does this mean that neurons displaying unimodal interspike interval distributions do not engage in burst coding? This question was addressed using information theory by Williams et al. (2021) \cite{Williams2021a}. The study found that, surprisingly, a unimodal distribution of intervals is not necessarily associated with a drop in information transmitted using a burst code. This is consistent with studies in the hippocampus that found well-defined complex spikes -- the very stereotype of a burst -- have ISIs ranging from 10 to 30 ms \cite{Epsztein2011a}, that is, frequencies produced by normal spiking during a place field. Consistently, as high-frequency events combine plateau potentials and normal firing, estimating burstiness based on plateau potentials leads to lower fractions than based on interspike intervals \cite{Bittner2015a,Sanders2019a}. A unimodal interspike interval distribution implies that there is no logical interspike interval threshold that perfectly separates isolated spikes from bursts. Bursts can still be differentiated by considering, in addition to the interspike interval, the number of intra-burst spikes and the presence of a sustained depolarization. Invariably, the question arises: what exactly is a burst? If it is a sudden bout of firing at a high frequency, then precisely how high is that frequency and how sudden must the frequency increase be? How many spikes must be produced at high frequency? When does it end? Is an alteration of spike shape or the presence of a sustained depolarization relevant? Does the answer to any of these questions depend on cell type, area, or age? There is currently no consensus to answer these questions, but this is not to say that no attempts were made. Various metrics have been proposed. Some require a spike-per-spike classification of firing patterns \cite{Harris2001a,Naud2018a}, others do not \cite{Compte2003a,Shinomoto2009a,Simonnet2019a,Insanally2019a}.
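The threshold-based, spike-per-spike approach can be sketched in a few lines; the 16 ms ISI criterion (roughly 60 Hz) and the two-spike minimum below are illustrative assumptions rather than consensus values.

```python
# Sketch of a spike-per-spike burst classification based on a fixed
# interspike-interval (ISI) criterion. The 16 ms threshold and the
# two-spike minimum are illustrative assumptions, not consensus values.

def label_events(spike_times, isi_threshold=0.016):
    """Group spikes (times in seconds, sorted) into events.

    Consecutive spikes closer than isi_threshold belong to the same event;
    an event with two or more spikes is a burst. Returns the number of
    singlets, the number of bursts, and the burst fraction (bursts / events).
    """
    if not spike_times:
        return 0, 0, 0.0
    events = [[spike_times[0]]]
    for prev, cur in zip(spike_times, spike_times[1:]):
        if cur - prev <= isi_threshold:
            events[-1].append(cur)   # continue the current high-frequency event
        else:
            events.append([cur])     # long ISI: start a new event
    singlets = sum(1 for ev in events if len(ev) == 1)
    bursts = len(events) - singlets
    return singlets, bursts, bursts / len(events)

# Two isolated spikes followed by a three-spike burst at 200 Hz:
print(label_events([0.100, 0.300, 0.500, 0.505, 0.510]))
```

On this train the classifier reports two singlets, one burst, and a burst fraction of one third; under a unimodal ISI distribution, however, the result depends strongly on the chosen threshold, which is precisely the difficulty raised above.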
Yet other metrics measure the association between spike timing codes and external correlates without an estimate of burstiness or a precise definition of bursting \cite{Kayser2009a,deKock2021a}. Some of these methods have parameters that must be set, while others come with methodologies for self-tuning according to a particular objective (parameter free). Other than the information-theory metrics, which evade the question, these metrics are implementations of a particular definition of bursting and assume somewhat different answers to the question "what is a burst?". \subsection{Different together: when a burst is more than a bunch of spikes} The burst coding hypothesis offers a loose definition: bursts are a family of firing patterns that trigger physiological mechanisms not engaged by the same number of spikes in relative isolation. Figure \ref{fig:variations} illustrates three variations of this question: generation, transmission and plasticity. \begin{figure*} \centering \includegraphics{Figures/WhatIsABurst_Figure_V2.pdf} \caption{What is a burst? \textbf{a} Features associated with bursting are high intra-burst frequencies, sustained depolarization, decreasing action potential amplitude and widening of action potential shape. \textbf{b} Defining bursting based on the number of spikes at a high frequency; here 2 spikes at 50 Hz is illustrated, but other threshold choices are possible. \textbf{c} Simulating calcium traces left by action potentials (exponential decay, 50 ms) and plotting the heatmap of peak calcium as generated by a number of spikes (y-axis) with a given frequency (x-axis). A threshold in peak calcium would correspond to a nonlinear curve in the space of frequency vs number of spikes, indicating a tradeoff between these two variables. } \label{fig:What} \end{figure*} Do neurons reserve some firing patterns for the response to specific types of stimulation?
In all cells, short and long after-hyperpolarization currents follow every action potential \cite{Schwindt1988a,Lundstrom2010a,Pozzorini2013a}. These combine with a moving threshold \cite{Azouz2000a,Mensi2012a,Harkin2023a} to prevent bouts of high-frequency action potentials from occurring. Modelling studies have shown that these properties ensure neurons respond only rarely with high-frequency action potentials, although very large input fluctuations can transiently overcome these constraints \cite{Vogels2005a,Ostojic2011a}. Such random fluctuations are themselves constrained by feedforward and feedback inhibition, which ensures that any surge in excitation is followed by a calculated surge in inhibition, limiting depolarization in duration \cite{Pouille2001a,Swadlow2001a}. A number of cellular properties can evade these restrictions and produce high-frequency firing. These properties almost exclusively involve voltage-gated calcium channels (VGCCs; for non-VGCC based bursting, see \cite{Haj1997a,Brumberg2000a,Doiron2007a}). In the thalamus, potent T-type VGCCs can de-inactivate following at least 100 ms of hyperpolarization and produce bursting upon release from inhibition \cite{Deschenes1984a,Huguenard1992a,Wang1991a}. In vitro studies have shown that these bursts are fast, with intra-burst intervals of 3-4 ms, and last for less than 100 ms \cite{Deschenes1984a}. The duration of thalamic bursts is variable and depends on the intensity of burst-generating inputs \cite{Mease2017a}. Dendritic VGCCs in Purkinje cells lead to complex spikes \cite{Konnerth1992a,Davie2008a} having intraburst intervals of 2-5~ms with a sustained depolarization, alteration of spike shape and durations varying between 6 and 15 ms \cite{Maruta2007a,Yang2014a}, a duration that is controlled by inputs responsible for complex spike generation.
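The restraining action of after-hyperpolarization currents and a moving threshold described above can be illustrated with a minimal leaky integrate-and-fire sketch; all parameter values are illustrative and not fitted to any particular cell type.

```python
# Sketch: spike-triggered adaptation plus a moving threshold make sustained
# high-frequency firing hard to maintain. All parameters are illustrative.

def adaptive_lif(i_ext=2.0, t_max=1.0, dt=1e-4):
    """Euler-integrated leaky integrate-and-fire with an adaptation
    current w (after-hyperpolarization) and a moving threshold theta."""
    v, w, theta = 0.0, 0.0, 1.0
    tau_m, tau_w, tau_theta = 0.02, 0.2, 0.1   # membrane, AHP, threshold (s)
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt / tau_m * (-v + i_ext - w)       # leaky integration minus AHP
        w += dt / tau_w * (-w)                   # AHP current decays
        theta += dt / tau_theta * (1.0 - theta)  # threshold relaxes to baseline
        if v >= theta:
            spikes.append(step * dt)
            v = 0.0        # reset
            w += 1.0       # spike-triggered after-hyperpolarization
            theta += 0.5   # spike-triggered threshold increase
    return spikes

spikes = adaptive_lif()
isis = [b - a for a, b in zip(spikes, spikes[1:])]
# Interspike intervals lengthen: the cell cannot sustain high-frequency firing.
```

With a constant drive, each spike increments the adaptation current and the threshold, so successive interspike intervals grow; only a strong transient input could overcome both variables at once, in line with the modelling results cited above.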
In the hippocampus and cortex, VGCCs \cite{Williams2018a} in the dendrites of pyramidal cells also generate dendrite-dependent, VGCC-based plateau-like potentials. In vitro studies indicate that these events contain spikes at a frequency close to 100 Hz, show alteration of spike shape and a sustained depolarization, and last between 40 ms and 100 ms \cite{Kandel1961a,Larkum1999a,Larkum1999b,Takahashi2009a}. As in the cerebellum, duration and frequency are variable, with longer bouts being generated by higher input intensity and duration. Correspondingly, the presence of acetylcholine (ACh) gives rise to longer (>500 ms) bouts of high-frequency firing \cite{Williams2018a}. In all areas, the bouts of high-frequency firing are associated with a gradual decrease of the amplitude of action potentials within the burst and a sustained depolarization. Together, these mechanisms mean that neurons restrict the generation of bouts of high-frequency firing (above 100 Hz), events that are characterized by sustained depolarization and an alteration of spike shape, with durations of at least 10~ms. What types of stimulation patterns are transmitted differently by synapses? At some synapses, sudden bouts of high-frequency firing will either strongly and transiently depress or potentiate the post-synaptic effect of action potentials. This short-term plasticity (STP) can be frequency-dependent, and the effects can accumulate during a high-frequency train. There is a great diversity of STP expression that appears to depend on both pre- and post-synaptic cell classes.
Some show a strong frequency dependence that gets engaged at a slow frequency ($1-10$ Hz) but is maximally expressed at high frequencies ($50-200$ Hz), others only start to be expressed at high frequencies, others show STP of opposite polarities at low and high frequencies (\textit{i.e.} short-term facilitation (STF) and short-term depression (STD)), and yet others show no frequency dependence at all \cite{Toth2000a,Dittman2000a,Salin1996a,Jackman2016a,Xu2012b,Lefort2009a,Campagnola2022a}. A group of 6-8 high-frequency action potentials can induce up to a 6-fold change in amplitude \cite{Toth2000a,Sun2009b,Xu2012b}. Such a large change of post-synaptic amplitude can happen for spikes in relative isolation (e.g. at mossy fiber synapses \cite{Toth2000a}), but facilitation of isolated firing is uncommon at most synapses. Adding a sustained depolarization to the high-frequency event does not alter transmission further \cite{Apostolides2016a}. Thus the properties of STP do not provide a precise definition, but indicate that we may consider firing above 100 Hz a burst, with the possibility of trading a lower firing frequency for a greater number of spikes. To complete the picture, we may look into the firing patterns that trigger a form of long-term synaptic plasticity not engaged with spikes in relative isolation. Alongside neuromodulation \cite{He2007a} and pre-post spike timing \cite{Markram1997a}, firing patterns are a central factor determining the relative strength of long-term potentiation (LTP) with respect to long-term depression (LTD) \cite{Sjostrom2001a,Pfister2006a,Lisman2005a,Froemke2006a}. In the cerebellum, LTD of parallel fiber synapses is induced by pairing parallel fiber inputs with complex-spike-inducing inputs \cite{Ito1982a,Suvrathan2016a}, while parallel fiber inputs alone induce LTP, an observation that is thought to generalize to pairing parallel fiber stimulation with weak levels of activity of the Purkinje cell \cite{Lev2002a,Coesmans2004a,Jorntell2006a}.
In the hippocampus and cortex, it is LTP that is induced by pairing pre-synaptic activity with high-frequency post-synaptic bursting, while isolated post-synaptic spikes express LTD \cite{Sjostrom2001a,Lisman2005a,Inglebert2020a,Froemke2006a,Bittner2017a}. In vitro studies have indicated that LTP is expressed at frequencies of at least 40-50 Hz \cite{Sjostrom2001a,Pfister2006a}, but these studies used a higher concentration of extracellular calcium than is to be expected in the living brain. In more physiological conditions, frequencies of at least 60 Hz are required and longer bursts are more likely to express LTP than shorter bursts \cite{Inglebert2020a}. For bursts of more than 2-3 spikes, burst duration reliably engages LTP. Further changes in burst duration as well as the presence of a sustained depolarization can affect the learning rate and the duration of the eligibility trace \cite{Bittner2017a}. In these studies, however, the relative timing as well as the presence of neuromodulation can alter such first-order associations between firing patterns and plasticity. Looking for the firing patterns associated with specific mechanisms, we find that events of more than 3 spikes at frequencies higher than 100 Hz can be labeled a burst in the hippocampus and cortex, and that a higher frequency may be required in the thalamus and the cerebellum. In all systems, bursts can have a variable duration and can have variable intra-burst intervals. This is consistent with a general observation made across all systems: the involvement of calcium. VGCCs are involved in generating complex spikes and plateau potentials. Elevation of pre-synaptic calcium is central to short-term facilitation. Elevation of post-synaptic calcium is also essential to the expression of many forms of LTP/LTD. Defining bursts based on a nonlinear readout of calcium means that frequency and number of spikes are both required to establish the boundary between burst and non-burst (Fig.
\ref{fig:What}). \section{Disjunctive and conjunctive burst generation} \begin{figure*} \centering \includegraphics{Figures/Conjunctive_Figure_V6.pdf} \caption{Conjunctive and disjunctive burst generation. \textbf{a} When two input pathways impinge on a cell, bursting arises when the burst-inducing pathway is active (disjunctive code) or when both input pathways are active together (conjunctive code). \textbf{b} Separating the spike train into events made of either singlets or bursts and computing the burst fraction by dividing the burst rate by the event rate, we can consider the correlation of singlet rate, burst rate, event rate, burst fraction, and firing rate with either input. A disjunctive code shows a strong correlation between singlet rate and $I_1$ and a strong correlation between burst rate and $I_2$. A conjunctive code shows a strong correlation between event rate and $I_1$ and between burst fraction and $I_2$. \textbf{c-f} Illustrations of pathways giving rise to conjunctive or disjunctive codes in the thalamus, cortex, hippocampal CA1 and cerebellum. } \label{fig:Conjunctive} \end{figure*} Let us consider two fictitious synaptic pathways, pathway `1' and pathway `2'. Pathway `1' drives action potential firing through leaky integration, with adaptation, refractoriness and feedforward inhibition to make bursting less likely. Pathway `2' is required for burst firing. Now pathway `2' may drive bursting in two fundamentally distinct ways. It may act autonomously, triggering bursts without input in pathway `1'. At every time step we may get either an isolated spike from pathway `1' or a burst from pathway `2'. Alternatively, pathway `2' may drive bursting only when pathway `1' has generated an action potential. In this mode, at every time step we may get a burst if inputs from both pathway `1' and pathway `2' are present. We refer to these two possibilities as disjunctive (1 \emph{or} 2) and conjunctive (1 \emph{and} 2) burst generation.
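These two schemes can be written as toy rate models; the gains below and the choice of $n=3$ spikes per burst are illustrative assumptions, but the structure mirrors the correlations summarized in Fig.~\ref{fig:Conjunctive}b.

```python
# Toy rate models of the two coding schemes. Gains (10 events/s, 5 bursts/s)
# and n = 3 spikes per burst are illustrative assumptions.

def disjunctive(i1, i2, n=3):
    """Pathway 1 alone sets the singlet rate; pathway 2 alone sets the burst rate."""
    s = 10.0 * i1            # singlet rate, driven only by input 1
    b = 5.0 * i2             # burst rate, driven only by input 2
    e = s + b                # event rate: mixes both inputs
    f = s + n * b            # firing rate: mixes both inputs
    return s, b, e, f

def conjunctive(i1, i2, n=3):
    """Pathway 1 sets the event rate; pathway 2 sets the burst fraction."""
    e = 10.0 * i1            # event rate, driven only by input 1
    p = i2                   # burst fraction, driven only by input 2
    s = (1.0 - p) * e        # singlet rate: mixes both inputs
    b = p * e                # burst rate: mixes both inputs
    f = e * (1.0 + (n - 1) * p)  # firing rate: mixes both inputs
    return s, b, e, p, f
```

In the disjunctive model only the event and firing rates mix the two inputs, whereas in the conjunctive model the singlet rate, burst rate and firing rate all do; only the event rate and the burst fraction recover the two inputs cleanly, which is why firing rate alone can mask a conjunctive burst code.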
Mathematically, we may denote the singlet rate $s$ and burst rate $b$ as functions of two independent inputs $\mathcolor{blue} I_{\mathcolor{blue} 1}$ and $\mathcolor{red} I_{\mathcolor{red} 2}$. In a \textbf{disjunctive} code this means that $\mathcolor{red} b(\mathcolor{red} I_{\mathcolor{red} 2})$ is independent of $\mathcolor{blue} I_{\mathcolor{blue} 1}$ and conversely $\mathcolor{blue}s(\mathcolor{blue} I_{\mathcolor{blue} 1})$ is independent of $\mathcolor{red} I_{\mathcolor{red} 2}$. The rate of either type of event, $e = s+b$, will correlate with input 1 and with input 2, but more weakly, because information from input 2 perturbs information from input 1 and conversely. For a fixed number of spikes per burst $n$, the firing rate $f = s + nb$ also shows a mixed dependence. In a \textbf{conjunctive} code, input 1 causes either spikes or bursts, such that it is the event rate which depends on input 1: $\mathcolor{blue}e(\mathcolor{blue} I_{\mathcolor{blue} 1})$. The burst fraction is controlled by the other input: $\mathcolor{red} p(\mathcolor{red} I_{\mathcolor{red} 2})$. In this code, the burst rate $b = \mathcolor{red}p\,\mathcolor{blue}e$ shows a mixed dependence. So do the firing rate ($f=\mathcolor{blue}e(1+(n-1)\mathcolor{red}p)$) and the singlet rate ($s=\mathcolor{blue}e(1-\mathcolor{red}p)$). This structure of correlation is summarized in Fig. \ref{fig:Conjunctive}. The fact that in a conjunctive code the firing rate remains highly correlated with burst rate can explain why many studies have rejected burst coding in favor of rate coding \cite{Reinagel1999a,Gabbiani1996a,Shinomoto2005a}. \subsection{Disjunctive dendritic calcium spikes - cerebellum} Purkinje cells of the cerebellum emit two distinct types of potentials. Simple spikes are short action potentials (1 ms) mediated by sodium and potassium ion channels (Pathway 1). The occurrence of these potentials is modulated by inputs from parallel fibers (PF), which target more distal dendrites.
Complex spikes are bursts mediated mainly by calcium currents in the dendrites (Pathway 2). These potentials are triggered by input from climbing fibers (CF), which target more proximal parts of the dendritic arborization of these cells. CF input alone is sufficient to trigger complex spikes, while PF inputs are not reported to do so. Dendritic organization has been implicated in regulating the calcium currents underlying complex spikes \cite{Rancz2010a}, but it may not be essential for this disjunctive generation to take place \cite{Davie2008a}. These observations, which are echoed by further observations in vivo \cite{Herzfeld2015a}, point to a disjunctive burst code in the cerebellum. \subsection{Conjunctive bursting – hippocampus} Pyramidal cells of the CA1 region have a prominent apical dendrite and a number of basal as well as radial oblique dendrites, segregating input from different pathways. Action potentials can be triggered by input onto basal and radial oblique dendrites, but inputs onto the apical tuft undergo much electrotonic attenuation and are not thought to contribute as strongly to action potential generation. Consistently, inputs from CA3, which target radial oblique dendrites, can trigger single action potentials but rarely bursts \cite{Pouille2001a,Takahashi2009a}. These inputs can be seen as an example of pathway `1' from the previous section. Inputs onto the apical dendrite may be seen as pathway `2' as they can produce plateau potentials \cite{Takahashi2009a}. Unlike in the cerebellum, where a single stimulation was sufficient to trigger a complex spike, very strong stimulation of the perforant path targeting the apical dendrites is required to generate a plateau potential. Concomitant but relatively weak stimulation of afferents from CA3 and from entorhinal cortex is sufficient to elicit these plateau potentials \cite{Takahashi2009a}. These in vitro studies suggest a conjunctive code, which has seen further experimental support in vivo \cite{Bittner2015a}.
Such conjunctive burst generation is expected to arise for well-separated pathways across a wide range of dendritic spike shapes and strengths of the back-propagating action potential \cite{Friedenberger2022a}. In vitro studies have further supported the role of the back-propagation of somatically generated action potentials, distal NMDA spikes and the large amplitudes of distal post-synaptic potentials in shaping input integration in these neurons \cite{Stuart1994a,Magee1999b,Magee1999a,Grienberger2014a}. Principal cells from other parts of the hippocampus have also been observed to generate bursts of action potentials. This includes cells from the dentate gyrus \cite{Kowalski2016a} and cells from the CA3 \cite{Hablitz1981a,Balind2019a}. Granule cells from the dentate gyrus can produce dendritic spikes with durations around 40 ms with a strong attenuation of the backpropagating action potential \cite{Krueppel2011a}, but little is known about whether these mechanisms are associated with the generation of high-frequency bursts in these cells. Cells from the CA3 have calcium-based dendritic spikes that give rise to bursts \cite{Balind2019a}. The back-propagating action potential appears to help the generation of these calcium spikes \cite{Mago2021a}, but more evidence is required to establish whether CA3 pyramidal cells use a conjunctive code. In these cells, both recurrent connections and input from the dentate gyrus are more likely to trigger single action potentials (pathway `1'), while input from entorhinal cortex could be modulating the burst fraction. Evidence from in vivo recordings would help establish the conjunctive nature of CA3 bursting with more certainty. \subsection{Conjunctive bursting – cortex} Cortex has many types of principal cells across its layered structure \cite{Tasic2018a}.
Deep-layer, thick-tufted neurons projecting to the pyramidal tract can generate bursts of action potentials mainly when input to the apical dendrite is combined with an action potential \cite{Larkum1999a,Larkum2001a,Larkum2004a}. Similar to cells from CA1, basal and radial oblique dendrites receive recurrent and thalamic connections (pathway `1' \cite{Yaeger2022a}) whose ability to generate action potentials at a high frequency is counteracted by potent feedforward inhibition \cite{Sermet2019a}. These mechanisms suggest a conjunctive code whereby apical inputs AND basal/radial oblique inputs are required to generate bursts. Xu et al. (2012) \cite{Xu2012a} have confirmed the conjunctive nature of burst coding in vivo by showing that calcium plateau potentials in somatosensory cortex arise only from the combination of sensory input and feedback from motor cortex. Cortex thus appears to use a conjunctive code where feedforward inputs from sensory thalamus or lower-order cortical areas (pathway `1') must combine with feedback inputs from higher-order thalamus or higher-order cortical areas (pathway `2') to generate bursts of action potentials. Bursts of action potentials are also known to occur in vivo in pyramidal cells of layers 2-3 \cite{deKock2008a,deKock2021a,Wang2020a} and in spiny stellate cells \cite{Brecht2002a}. The superficial layers tend to produce more bursts than the deeper ones \cite{Shinomoto2009a}. The absence of either potent calcium spikes (for L2-3 cells \cite{Larkum2007a}) or apical dendrites (for spiny stellate cells) indicates that these bursts do not arise from the same mechanism as in deep-layer pyramidal cells. NMDA spikes and dendritic sodium-ion channels can control burstiness and have been observed in these cells \cite{Brumberg2000a,Palmer2014a,Smith2013a,Friedenberger2022a}.
\subsection{After-hyperpolarization rebound – thalamus} Thalamic relay neurons are known to produce bursts of action potentials in vivo, regularly under anesthesia and drowsiness and sporadically under awake conditions \cite{Swadlow2001a}. Electrophysiological investigations in slices have identified a potent bursting mechanism relying on T-type voltage-gated calcium channels \cite{Deschenes1984a,Huguenard1992a,Wang1991a}. In those cells, a normal somatic current injection produces regular firing, but the same current injection preceded by a long (100 ms) period of hyperpolarization produces a burst of action potentials. Bursts elicited in this manner require both a sustained inhibition and then an excitation, hinting at a form of conjunctive bursting, but one with an implicit delay between the two inputs. A study tracking thalamic relay cell firing patterns in vivo across behavioral states \cite{Urbain2015a} has shown independent modulation of firing rate and burst fraction. The study noted, however, that bursts were not preceded by periods of hyperpolarization, suggesting that bursts did not arise from after-hyperpolarization rebound. Another study made a more direct test of the role of hyperpolarization in controlling bursting \cite{Borden2022a}. With optogenetic hyperpolarization of thalamic neurons, the firing rate changes only weakly while there is a large increase in the burst fraction. These results show that controlling the hyperpolarization controls the burst fraction without affecting the firing rate. Yet another in vivo study brought further support to this view \cite{Born2021a}. Tracking firing patterns while optogenetically suppressing corticothalamic activity, they found that the firing rate decreases, consistent with a net excitatory effect of corticothalamic afferents. Surprisingly, this was accompanied by an increase in the burst fraction.
Since the visual afferents providing the main source of excitation to these cells show a co-modulation of bursts and single spikes \cite{Reinagel2000a}, these observations are consistent with a conjunctive code where net inhibition onto the thalamic cells controls the burst fraction while net excitation controls the firing rate. \section{Burst-dependent synaptic plasticity} Long-term potentiation of synaptic efficacy (LTP) provides a compelling cellular basis for learning and memory \cite{Nicoll2017a}. Since LTP was first reliably triggered 50 years ago by intense stimulation of hippocampal pathways \cite{Bliss1973a}, researchers have established some of the core elements necessary for its expression. LTP requires NMDARs \cite{Collingridge1983a,Morris1982a}, elevated post-synaptic calcium \cite{Lynch1983a} as well as local glutamate release \cite{Lledo1998a}. These elements suggest a model whereby LTP arises from elevated post-synaptic calcium taking place relatively soon after or before some amount of glutamate has been released at a synapse. Multiple chemical pathways depend on this elevation of post-synaptic calcium \cite{Bhalla1999a,Maki2020a}. For calcium to be elevated at the synapse, it may arise from NMDAR-mediated currents, local release from calcium stores or VGCCs. Whereas the first two sources are controlled by local synaptic inputs, VGCCs make calcium entry dependent on local voltage, and thus remote agents can potentially control plasticity if they can control the voltage across the dendritic tree. Because bursts, particularly when accompanied by a sustained depolarization, elevate the local membrane potential, reaching into the majority of dendrites \cite{Lisman2005a,Stuart1994a,Nevian2006a}, these events are uniquely positioned to control LTP expression. This view focuses on the role of post-synaptic bursting in regulating LTP/LTD, a view that is consistent with previous literature focusing on the role of the relative timing between pre- and post-synaptic spikes.
When formulating learning rules based on relative timing on the pre- and post-synaptic sides of the synapse, many studies have considered the role of post-synaptic firing patterns in regulating system-level plasticity. This model is essential to explain the dependence on repetition frequency observed in protocols of spike-timing-dependent plasticity (STDP) \cite{Sjostrom2001a,Froemke2006a,Pfister2006a}. Multiple modeling studies have indicated that including burst-dependence in learning rules allows for the emergence of various types of selectivities. This includes the selectivity of ON- and OFF-pathways in the developing retinogeniculate connection \cite{Butts2007a,Gjorgjieva2009a}, orientation selectivity in the developing thalamocortical afferents \cite{Toyoizumi2005a,Pfister2006a,Ren2022a} as well as finer patterns of recurrent connectivity within cortex \cite{Clopath2010a}. The focus on post-synaptic bursting has conceptual implications that require a departure from Hebbian plasticity. Donald Hebb speculated that learning takes place when a pathway taking part in firing a post-synaptic cell is potentiated. This picture is entirely consistent with the simplified picture of LTP expression above: as long as a (feedforward / pathway 1) pathway can take part in creating a burst, this pathway will be potentiated. But Hebb's theory needs to be revised when bursting is not triggered by the same pathway undergoing the potentiation. This is striking for a disjunctive burst code, where a given pathway instructs the plasticity of another pathway, as in the cerebellum \cite{Marr1969a,Albus1971a,Ito1982a}, a process that is better called 'instructive plasticity' \cite{Grienberger2021a} or 'Marr-Albus-Ito plasticity'. For a conjunctive code, the departure from Hebb is more subtle. Bursting, and therefore LTP, can be engaged when two pathways come together.
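A caricature of such a burst-dependent rule can make the departure from pure Hebbian pairing concrete; the learning rate and the one-second eligibility trace below are illustrative assumptions, not a fit to any measured plasticity protocol.

```python
# Toy burst-dependent plasticity rule (hippocampal/cortical polarity):
# pairing a presynaptic eligibility trace with a postsynaptic burst
# potentiates; pairing with an isolated spike depresses. The learning
# rate and the 1 s trace time constant are illustrative assumptions.

import math

def update_weight(w, pre_times, post_events, eta=0.05, tau=1.0):
    """post_events: list of (time, is_burst) tuples, in seconds.

    Each presynaptic spike leaves an exponentially decaying eligibility
    trace (time constant tau); at each postsynaptic event the trace is
    read out, with a sign set by the event type (burst -> LTP,
    singlet -> LTD)."""
    for t_post, is_burst in post_events:
        trace = sum(math.exp(-(t_post - t_pre) / tau)
                    for t_pre in pre_times if t_pre <= t_post)
        w += eta * trace if is_burst else -eta * trace
    return w

w0 = 0.5
w_burst = update_weight(w0, [0.0], [(0.5, True)])    # pre then post burst -> LTP
w_singlet = update_weight(w0, [0.0], [(0.5, False)]) # pre then post singlet -> LTD
```

Note that the sign of the update is set entirely by the postsynaptic event type, so whichever pathway drives the burst instructs the plasticity of the pathway that left the trace; because the trace decays over seconds rather than milliseconds, pre- and postsynaptic events need not be precisely timed.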
This form of plasticity is partially Hebbian because of the need for co-activation between pathway 1 and the post-synaptic neuron. But in a strict interpretation, it is non-Hebbian because bursting is caused by the presence of another, instructive, input, itself not necessarily undergoing plasticity. In vitro recordings have confirmed such a central role of post-synaptic bursting in the expression of LTP. In physiological calcium, pairing a pre-synaptic spike with a single post-synaptic spike produces LTD, while pairing with a post-synaptic burst produces LTP \cite{Inglebert2020a}. The timing between the pre-synaptic spike and the post-synaptic burst need not be precise, as the two events can be separated by up to 1 second \cite{Bittner2017a}. LTD can also arise from pairing pre-synaptic activity with post-synaptic bursts separated by half a second \cite{Milstein2021a}. Crucially, such behavioral timescale plasticity (BTSP) \cite{Grienberger2021a} has been shown to induce or tweak place selectivity in hippocampus CA1 \cite{Bittner2017a,Milstein2021a}. Further work will likely establish the role played by BTSP in cortex \cite{Aru2022a}. Plasticity in the cerebellum also shows burst dependence, but here the association of an eligibility trace with a post-synaptic burst leads to LTD while pairing with a singlet leads to LTP \cite{Coesmans2004a,Lev2002a,Jorntell2006a,Bouvier2016a}. These plasticity rules show a symmetric dependence on the ordering of pre- and post-synaptic spikes \cite{Suvrathan2016a}, much like similar burst-dependent learning rules in the electric sense of electric fish \cite{HarveyGirard2010a,HarveyGirard2013a,Bol2011a,Mejias2013a}. \section{Differential transmission of firing patterns} \begin{figure*} \centering \includegraphics{Figures/STP_Figure_V6.pdf} \caption{Patterns of short-term plasticity in four principal cells.
\textbf{a} Sensory thalamus receives STD from the retina \cite{Granseth2002a}, sends STD to cortex \cite{Swadlow2001a,Cruikshank2010a} and receives STF from cortex \cite{Granseth2002a,Cruikshank2010a,Jackman2016a}. \textbf{b} Pyramidal cells of sensory cortex receive STD from thalamocortical afferents \cite{Swadlow2001a,Cruikshank2010a} and from motor cortex \cite{Lee2013a}. Interconnections with PV-positive cells are STD \cite{Campagnola2022a}, and connections onto SST cells are STF, both from glutamatergic cells and from GABAergic VIP-positive cells \cite{Campagnola2022a}. As some top-down projections target the soma and other top-down projections target the apical dendrites \cite{Geng2022a}, we have assumed that STP experiments only revealed proximal connections. \textbf{c} Inputs from both entorhinal cortex (EC) and CA3 onto CA1 pyramidal cells are facilitating \cite{Jackman2016a}. Outputs onto the subiculum are facilitating \cite{Xu2012b}. \textbf{d} Climbing fiber (CF) inputs onto Purkinje cells show no STP in 1~mM calcium \cite{Foster2002a}. Parallel fiber inputs onto Purkinje cells show STF \cite{Dittman2000a}. Outputs from the cerebellum to the deep cerebellar nuclei (DCN) undergo STD \cite{Telgkamp2002a,Jackman2016a}.} \label{fig:STP} \end{figure*} To separate bursts from single spikes post-synaptically, transmission must be frequency dependent. No dendritic computation can reliably separate a burst in one afferent from synchronous single spikes across multiple afferents. Instead, we must consider frequency-dependent transmission that is specific to one afferent. Following a similar reasoning, Francis Crick hypothesized such properties of synapses, which he called von der Malsburg synapses \cite{Crick1984b}. Today, the existence of STP is well established. Yet to align specifically with a function of burst transmission, synaptic dynamics should match the main features of burst generation.
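As an illustration of how synapse-specific, frequency-dependent transmission can demix bursts from single spikes, the sketch below simulates a Tsodyks-Markram-style model of STP. This is a minimal caricature: the discrete update conventions and all parameter values are illustrative and not fitted to any of the synapses discussed here.

```python
import math

def stp_efficacies(spike_times, tau_rec, tau_facil, U):
    """Per-spike synaptic efficacy u*R for a Tsodyks-Markram-style synapse.

    R recovers toward 1 with time constant tau_rec (depression);
    u decays toward 0 with tau_facil and jumps at each spike (facilitation);
    U sets the baseline utilization of synaptic resources.
    """
    u, R, prev, eff = 0.0, 1.0, None, []
    for t in spike_times:
        if prev is not None:
            dt = t - prev
            u *= math.exp(-dt / tau_facil)                 # facilitation decays
            R = 1.0 - (1.0 - R) * math.exp(-dt / tau_rec)  # resources recover
        u += U * (1.0 - u)      # spike transiently boosts release probability
        eff.append(u * R)       # amplitude of the released response
        R *= (1.0 - u)          # resources consumed by release
        prev = t
    return eff

# One isolated spike followed by a 100 Hz three-spike burst (times in seconds).
train = [0.0, 0.5, 0.51, 0.52]

std = stp_efficacies(train, tau_rec=0.7, tau_facil=0.02, U=0.5)  # depressing
stf = stp_efficacies(train, tau_rec=0.1, tau_facil=0.5, U=0.1)   # facilitating

# The depressing synapse transmits mostly the first spike of each event,
# while the facilitating synapse responds selectively to the burst.
assert std[1] > std[2] > std[3]   # attenuation within the burst (STD)
assert stf[1] < stf[2] < stf[3]   # growth within the burst (STF)
```

In this toy picture, reading out the per-event response of the depressing synapse approximates an event rate, while the facilitating synapse preferentially reports bursts.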
Much of the early research on STP has focused on facilitation and depression at frequencies below 50 Hz \cite{Salin1996a,Zucker2002a,Varela1997a,Tsodyks1998a,Toth2000a}. More recent experiments have confirmed the existence of STP that is triggered specifically by burst-like events having a frequency closer to 100 Hz \cite{Chamberland2018a,Lefort2009a,Jackman2016a,Campagnola2022a,Vandael2020a}, with a sensitivity to lengthening of the spike duration \cite{Geiger2000a} but not to sustained depolarization \cite{Apostolides2016a}. In this way, a mixture of spikes and bursts can be demixed by STP. When the same axon projects with STD at one target and STF at another, synaptic projections can communicate two independent streams of information to different post-synaptic targets \cite{Naud2018a}. What logic is there to STP in synaptic projections? The different types of projections may reflect a different functional role on the post-synaptic target, such as a driver vs. a modulator role \cite{Sherman2001a,Sherman2001b,Sherman2012a}. Alternatively, the different types of projections may reflect different filtering operations on the pre-synaptic spike train \cite{Izhikevich2003c,Fortune2001a}, allowing for routing of information \cite{Kording2000a,Naud2018a,Payeur2019a,Payeur2021a}. In the latter view, STP inherits the semantics of bursts and singlets: applying STD to a set of spike trains $S$ extracts the event rate $e = STD[S]$, and applying STF extracts the burst rate $b = STF[S]$. In a conjunctive code, this means that STD connections communicate information from pathway 1 and that STF projections communicate information from a conjunction of pathways 1 and 2. \subsection{Cortex} A recent extensive study in cortex \cite{Campagnola2022a} has surveyed more than 20 000 synaptic pairs across 1502 animal slices. The study also reported over 2500 synaptic pairs in human tissue. Campagnola et al.
(2022) \cite{Campagnola2022a} found that most connections showed some level of STD over a wide range of stimulation frequencies. Separating cells according to transcriptomic families, they found that connections to and from PV-positive cells tend to show depression. These observations echo previous studies that have shown a tendency for depression in local connections onto fast-spiking or basket-type cells \cite{Reyes1998a,Markram1998a,Tsodyks1998a}. STD is also observed when PV-positive cells connect to their post-synaptic partners \cite{Galarreta1998a}. Many other synapses show STD, and multiple types of temporal dependencies are observed. At some synapses, present in both human and mouse cortex, a strong depression kicks in at frequencies of 100 Hz or more. A burst attempting to cross this type of synapse would communicate only the first few spikes in the burst, specifically those spikes that are triggered by pathway `1' in a conjunctive code (Fig. \ref{fig:Conjunctive}). The depression takes between half a second and a second to recover, so that isolated spikes that follow a burst would also be attenuated. This type of frequency-dependent depression is observed in L4 pyramidal to PV-positive cell connections \cite{Campagnola2022a}. Similarly, many synapses show STF, and multiple types of temporal dependencies are observed. At some synapses, a frequency dependence with a cutoff at 50 or 100 Hz is observed \cite{Lefort2009a,Campagnola2022a}. This facilitation tends to accumulate over multiple high-frequency spikes. Campagnola et al. (2022) \cite{Campagnola2022a} found that, among local cortical connections and separating according to transcriptomic families, it is the connections onto and from SST- and VIP-positive cells that display STF.
This again echoes previous studies, based on morphology or firing patterns of inhibitory cells, that have indicated a tendency for connections onto Martinotti cells (SST-positive) to display potent facilitation \cite{Berger2010a,Tsodyks1998a,Reyes1998a}. The occurrence of frequency-dependent STF and STD suggests that different synapses will communicate either bursts selectively (STF) or both bursts and single spikes with equal weight (the event rate; STD). Combining these synaptic properties in cellular connectivity motifs (e.g. feedforward inhibition) can also implement a selectivity to the fraction of bursts \cite{Naud2018a}. Cortical networks are thus able to separate information from pathways `1' and `2' in a conjunctive code. Theoretical studies have further established that when feedback inhibition is separated into two groups, one receiving STF and the other STD, networks receiving two streams of input in a conjunctive burst code show better balance and better information transmission \cite{Naud2018a,Keijser2020a,Vercruysse2021a}. Is this polarization of local connections in terms of STD and STF also reflected in a polarization of impinging pathways? In sensory cortices, inputs are broadly separated into feedforward (e.g. sensory afferents) and feedback (e.g. cortico-cortical, higher-order thalamus) connections. Feedforward inputs to the cortex target principal neurons with STD, and PV-positive cells providing feedforward inhibition are also targeted with STD \cite{Gibson1999a,Gil1999a,Beierlein2003a,Gabernet2005a,Jackman2016a}. These results, however, were obtained in young animals, such that age may reduce the potency of STD \cite{Frick2007a,Oswald2008a}. Descending connections, on the other hand, come in two types \cite{Geng2022a}. The apical type targets apical dendrites of pyramidal cells, SST and VIP cells, and the proximal type targets both apical and proximal dendrites as well as PV-positive cells.
Connections onto SST and VIP cells tend to express STF \cite{Sylwestrak2012a,Campagnola2022a}, which would support a polarization of STP along ascending and descending pathways, but this polarization remains mainly an assumption of cortical information-processing theories \cite{Naud2018a,Payeur2021a,Greedy2022a}. \subsection{Hippocampus} In the hippocampus, and focusing on CA1 pyramidal cells, the perforant path (PP) and the Schaffer collateral (SC) are the two main pathways (Fig. \ref{fig:STP}c). PP inputs from entorhinal cortex \cite{Witter2006a} target mainly apical dendrites and NPY-positive cells, with a smaller fraction of input going to SST-positive cells \cite{Kajiwara2008a,Milstein2015a}. These inputs show weak STF, with a frequency dependence that is controlled by the presence of inhibition \cite{Milstein2015a}. SC inputs from CA3 target mainly apical oblique dendrites, as well as PV-, SST- and NPY-positive cells. These connections all show frequency-dependent STF \cite{Milstein2015a,Wierenga2005a}. This suggests a preponderance of STF for inputs onto CA1, a trend that extends to other parts of the hippocampus \cite{Salin1996a,Toth2000a,Rossbroich2021a}. CA1 pyramidal neurons then project to multiple targets, with many connections going to the subiculum. These connections also display STF \cite{Xu2012b}. This paints a picture dominated by facilitation into and out of hippocampal CA1. \subsection{Cerebellum} Purkinje cells are the principal cells of the cerebellum. They receive inputs from granule cells through parallel fibers (PF) and from the inferior olive through climbing fibers (CF) (Fig. \ref{fig:STP}d). While motor-related commands (and reward-related information) are carried by PF, the CF carry motor error (and reward-related) information \cite{Ito2008b,Kostadinov2022a}. PF inputs show STF \cite{Dittman2000a}, while CF inputs show an absence of potent forms of short-term plasticity \cite{Foster2002a}.
\subsection{Thalamus} Feedforward information arriving from the senses to sensory thalamus comes with STD \cite{Granseth2002a,Reichova2004a}. When this information continues to sensory cortices, thalamocortical afferents, whether to excitatory or inhibitory neurons, also show STD \cite{Gibson1999a,Gil1999a,Beierlein2002a}. When the cortex sends feedback information to sensory thalamus, it does so with STF projections \cite{Granseth2002a,Reichova2004a,Cruikshank2010a,Jackman2016a}. But when cortex projects to higher-order thalamus, the information could be considered as part of a feedforward stream. These afferents display STD (PoM; \cite{Mease2016a}). \subsection{What logic?} What logic arises from these connectivity patterns? Inputs to a principal cell are both STD and STF in the thalamus, more STD in the cortex and strictly STF in the hippocampus. Outputs from principal cells are strictly STD in the cerebellum and thalamus, but STF in the hippocampus. The idea that STD is used to communicate feedforward information is supported by the thalamus and to some extent the cortex, but the cerebellum appears as an exception, since the error-carrying CF shows no potent short-term plasticity. Another idea is that STF inputs are used to control LTP. This rule applies to the cortex, as SST-positive cells have been strongly implicated in the control of bursting and therefore of LTP \cite{Urban2016a,Artinian2018a,Chen2015a,Doron2019a}; to the hippocampus, as both EC and CA3 inputs are required for LTP \cite{Takahashi2009a}; but also to the cerebellum, as PF inputs are associated with LTP \cite{Coesmans2004a,Lev2002a}. Clearly, however, much is missing from the picture of connectivity types in all of these systems. \section{Algorithmic requirements for burst coding} Why would it be beneficial for the nervous system to represent information using two distinct types of activity? What types of algorithms would be difficult to implement without this ternary code?
We have described how properties of neurons and synapses in line with the burst coding hypothesis allow neurons to communicate, represent and exploit two types of syllables, but what algorithm would benefit from such a separation? We have argued that a three-syllable code allows one to represent, process and transmit two types of information, one of which is associated with elevated levels of intracellular calcium. We now review the place of such multiplexing in the theory literature. One category of algorithms that necessitates two streams of information is learning algorithms. For the most part, supervised learning proceeds by representing and communicating two types of information: information about the inputs to the network and information about the relative error with respect to a target response. Learning algorithms allow synapses to change in such a way as to optimize a global objective. The backpropagation of error is one such algorithm. It is not plausible to implement its exact formulation in the brain, but it is possible to enact approximations \cite{Lillicrap2020a}. Dendrite-dependent bursting combined with burst-dependent plasticity and a polarization of short-term plasticity have been used in computational models to approximate the backpropagation of error algorithm \cite{Payeur2021a,Francioni2022a,Greedy2022a}. As for reinforcement learning, its powerful instances that use deep neural networks rely on supervised learning techniques. Here, a reward prediction error is used to guide the output of a neural network, but the neural network then solves the credit assignment problem using backpropagation-like algorithms \cite{Hassabis2017a}. Therefore, deep reinforcement learning algorithms would benefit from burst coding for the same reason that supervised learning does. Within the category of learning algorithms, there is also unsupervised learning.
A powerful type of unsupervised approach, called the wake-sleep algorithm, can be used to train various model structures (e.g. Helmholtz machines). These algorithms proceed by exchanging two types of information in a network \cite{Hinton1995a,Vertes2018a}: one about observations and one about predicted observations. Since the algorithm proceeds by comparing these representations, both must be represented at approximately the same time, an algorithmic demand that can be implemented using burst multiplexing. Other unsupervised algorithms proceed by requiring particular statistical patterns in higher-order representations. These sometimes exploit the backpropagation of error algorithm \cite{Zhuang2019a}, and at other times succeed in communicating a single type of information to other neurons but require plasticity to depend on two types of activity \cite{Illing2021a,Halvagal2022a}. In every case, bursting appears as a natural candidate for the implementation of aspects of these algorithms. In machine learning, learning-related signals can be processed in many different ways. It is not necessary for these signals to follow a simple flow pattern such as top-down backpropagation across a hierarchical network. Researchers have highlighted the importance of memorizing or projecting in space and time such learning-related signals \cite{Jaderberg2017a,Neftci2019a}. Another category of algorithms closely linked to learning is that of attention. Attention selectively coordinates the enhancement or suppression of task-relevant sensory representations using top-down feedback. Theories are typically concerned with attention's role in perception and view attention as biasing the competition between competing stimuli \cite{Reynolds2009a, Desimone1995a} or aiding in the binding of features within the visual scene \cite{Treisman1980a}.
When implemented in theoretical models, these attention signals take the form of an additive or multiplicative gain modulation of neurons representing the feature of interest. On the other hand, there are theories concerned with attention's role in learning, which view attention as a signal that gates plasticity \cite{Roelfsema2005a, Roelfsema2018a}. With the exception of attention within the predictive coding framework \cite{Feldman2010a}, these two perspectives are traditionally treated separately. Within the burst coding framework, multiplexing is well suited to facilitate the coordination of two independent streams of information up and down the cortical hierarchy, where top-down ``attention-like'' feedback signals targeting dendrites can be a source of both gain modulation and plasticity gating. Therefore, burst coding is well suited to link these two roles for attention. \section{Bursting in vivo} \begin{table} \tabcolsep7.5pt \caption{\label{tab:anesth} Stationary burstiness under anesthesia. $^{\rm a}$ ratio of bursty intervals to all intervals; $^{\rm b}$ bursts require a silent period (100 ms) beforehand.} \begin{center} \begin{tabular}{@{}l|l|c|c|c@{}} & & $\Delta$ & BF &\\ Area& Type & (ms) & (\%) & Ref.\\ \hline \textbf{S-CTX}& Rat, S1 &10 &1$\pm$1$^{\rm a}$ & \cite{deKock2008a}\\ & L2-3 &&& \\ &Rat, S1 &10&17$\pm$5$^{\rm a}$ &\cite{deKock2008a}\\ & L5B &&& \\ \hline \textbf{HP}&Rat, CA1 & $\sim$20 & 96.3$\pm$0.8$^{\rm a}$&\cite{Kowalski2016a}\\ & &&& \\ &Rat, CA3 & $\sim$20 & 86.5$\pm$2.4$^{\rm a}$ &\cite{Kowalski2016a}\\ & &&&\\ \hline \textbf{THL}& Cat, LGN &4 & 13-25$^{\rm b}$ & \cite{Lesica2004a}\\ \end{tabular} \end{center} \end{table} \begin{table} \tabcolsep7.5pt \caption{\label{tab:awake} Stationary burstiness in quiet awake / freely moving animals. $^{\rm d}$ plateau potentials identified based on the presence of sustained depolarization.
$^{\rm e}$ ratio of the number of bursts to the total number of action potentials; $^{\rm f}$ ratio of the number of bursts to the total number of events (bursts or singlets).} \begin{center} \begin{tabular}{@{}l|l|c|c|c@{}} & & $\Delta$ &BF&\\ Area& Type & (ms) & (\%) & Ref.\\ \hline \textbf{S-CTX}&Monkey, V1, & 6 & 35$\pm$6$^{\rm a}$ &\cite{Onorato2020a}\\ & L2-4 &&&\\ &Rat, S1, &10&15$\pm$3$^{\rm a}$ &\cite{deKock2008a}\\ & L2-3 &&&\\ &Rat, S1 &10&17$\pm$5$^{\rm a}$ &\cite{deKock2008a}\\ & L5B &&&\\ &Rat, S1 &15& $\sim$12$^{\rm f}$ &\cite{Naud2022a}\\ & L5 &&&\\ \hline \textbf{F-CTX}&Monkey, ACC/PFC & 5 & 30-40$^{\rm e}$ &\cite{Womelsdorf2014b}\\ \hline \textbf{HP}& Mouse, CA1 &-$^{\rm d}$ & 3.8$\pm$8$^{\rm e}$ & \cite{Bittner2015a}\\ & Rat, CA1 &50 & $\sim$40$^{\rm a}$ & \cite{Epsztein2011a}\\ & Rat, CA1 &10 & $\sim$29-46$^{\rm a}$ & \cite{Sanders2019a}\\ & Rat, CA3 &10 & $\sim$29-37$^{\rm a}$ & \cite{Sanders2019a}\\ & Mouse, CA1 & 15 & 20$^{\rm a}$ & \cite{Tanaka2018a}\\ & Mouse, DG & - & 65$\pm$22 & \cite{Pernia2014a}\\ \hline \textbf{THL}& Mouse, dLGN &4 & 5$^{\rm b}$ & \cite{Born2021a}\\ &Rabbit, dLGN & & 5-40$^{\rm b}$& \cite{Bezdudnaya2006a}\\ &Mouse, VPM &10 & 15-20$^{\rm a}$ & \cite{Urbain2015a}\\ &Mouse, PoM &10 & 10-15$^{\rm a}$ & \cite{Urbain2015a}\\ \end{tabular} \end{center} \end{table} Are bursts spontaneously generated in the awake and behaving animal? And if so, do these events occur only at specific times? Using either patch-clamp or extracellular recording techniques, many \textit{in vivo} studies have observed firing patterns in behaving animals. While the answer to the first question is a clear yes, the second question is more difficult to answer. Bursts occur spontaneously at a low rate in many brain areas, both under anesthesia and in awake animals (Tables \ref{tab:anesth} and \ref{tab:awake}).
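The sensitivity of reported burst fractions to the choice of metric can be made concrete with a toy calculation. The sketch below applies the three normalizations used in the table footnotes to one synthetic spike train, with an illustrative 10 ms ISI criterion:

```python
def burst_fractions(spike_times, isi_max=0.010):
    """Return three burstiness metrics for one spike train (times in s):
    (bursty ISIs / all ISIs, bursts / all spikes, bursts / all events)."""
    isis = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
    short = [isi <= isi_max for isi in isis]
    events, bursts, i = 0, 0, 0
    while i < len(spike_times):
        j = i
        while j < len(isis) and short[j]:  # extend a burst over short ISIs
            j += 1
        events += 1                        # one event: a burst or a singlet
        if j > i:
            bursts += 1
        i = j + 1
    return (sum(short) / len(isis),      # footnote a: bursty ISIs / all ISIs
            bursts / len(spike_times),   # footnote e: bursts / all spikes
            bursts / events)             # footnote f: bursts / all events

# One three-spike burst followed by three isolated spikes.
train = [0.000, 0.005, 0.010, 0.200, 0.400, 0.600]
f_isi, f_spk, f_evt = burst_fractions(train)
assert (f_isi, f_spk, f_evt) == (2 / 5, 1 / 6, 1 / 4)
```

The same train thus yields a burst fraction of 40, 17 or 25\% depending on the normalization, which is why the tabulated values must be read together with their footnoted definitions.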
The fact that they occur spontaneously in all conditions also means that we cannot equate their occurrence with an input or a coding feature; instead, we must resort to a comparison of the relative amount of bursting across conditions. Unfortunately, many technical aspects render cross-study comparisons particularly difficult. The first aspect is that bursting is strongly correlated with firing rate (Fig. \ref{fig:Conjunctive}). While a condition or state may show more bursting, if this is explained by a difference in firing rate, then the effect is about firing rate and not about bursting. For this reason, researchers tend to measure burstiness by normalizing with some measure of firing rate. There are multiple ways of computing such burst fractions, and this diversity confuses comparisons. For instance, one may divide the number of bursts by the total number of action potentials (APs), or divide the number of short ISIs by the total number of APs. The latter will always give larger values than the former. Similarly, more stringent criteria for bursting (a shorter ISI cutoff, or requiring a period of silence before a short ISI) will necessarily lead to lower burst fractions. Other factors include the recording method: while in vivo patch-clamp recordings allow one to measure the presence of a sustained depolarization during bursting, these techniques are very challenging and remain in the minority. Extracellular recordings are more convenient but introduce potential artefacts due to spike sorting. Absent or incomplete spike sorting will tend to increase the firing rate and burst rate. Furthermore, spike sorting tends to discard spikes with altered spike shape, thus sorting out the later spikes in a burst. With these technicalities in mind, we have compiled experimentally reported burst fractions in Tables \ref{tab:anesth} and \ref{tab:awake}. The anesthetized state has long been known to alter the burstiness of neurons.
This was highlighted by studies in sensory thalamus, where both anesthesia and sleep are associated with higher degrees of burstiness \cite{Ramcharan2000a,Mukherjee1995a}. This tendency is further reflected in the separation between inattentive states, with higher burstiness, and attentive states, with the lowest relative proportion of bursts \cite{Swadlow2001a} (but see \cite{Urbain2015a}). Does this relationship between arousal and bursting extend to other areas? In the hippocampus, anesthesia is also associated with a state of predominantly burst firing (Tab. \ref{tab:anesth}), which recovers to a sparse mixture in awake states (Tab. \ref{tab:awake}). Cortex shows a mixed, partly contrarian picture: while L5 pyramidal neurons show no dependence on anesthesia, L2-3 pyramidal neurons stop firing bursts altogether in anesthetized states \cite{deKock2008a}. In awake states, the burst fraction appears fairly uniform across areas and species, for the most part sitting between 15 and 40\%. Similarly, the average number of spikes per burst typically sits between 2 and 3. Such a state of sparse bursting is optimal for communicating two streams of information \cite{Naud2018a}. Within this range, there are variations across areas. Within the hippocampus, granule cells have the highest proportion of bursts, and within the cortex burstiness changes across laminae, with a minimum in L5 and higher values in L2-3 and L6 \cite{Shinomoto2005a,Senzai2019a}. Variations across areas indicate a greater propensity for bursts in visual and prefrontal areas, as compared with motor areas \cite{Shinomoto2009a}. \subsection{Attention states} Under states of visual attention, the saliency of locations (spatial attention) and features (feature-based attention) in the visual field is selectively enhanced or suppressed in order to increase perceptual performance.
Across the visual cortex, modulations of the local field potential (LFP) \cite{Fries2001a, Chalk2010a, Khayat2010a} and of noise correlations are observed at the population level \cite{Cohen2009a, Ni2018a, Ruff2016a}, while individual neurons show stimulus-selective increases in firing rate \cite{Moran1985a, Spitzer1988a, McAdams1999a, Treue1999a} and decreases in trial-to-trial variability \cite{Cohen2009a, Mitchell2007a, Mitchell2009a}. As bursts can be a source of neural gain and variability, modulations in bursting are expected to influence all of these measures. However, the relationship between bursting and these measures will depend on the underlying circuit motifs and the burst generation process. Therefore, future modeling work will be needed to determine whether these observed changes in firing rate and variability can be explained by modulations in bursting or whether they are independent. Direct measures of bursting modulation with attention have been observed across the visual cortex, in higher and lower areas of the ventral stream (V4 and V1) \cite{Anderson2013a, Huang2019a} and in the medial superior temporal (MST) area of the dorsal stream \cite{Xue2016a}. When spatial attention is directed onto a neuron's receptive field, there is an increase in firing rate and a reduction in burstiness in evoked activity, as measured using an ISI-based metric (fraction of ISIs < 4 ms) and an autocorrelation-based metric. However, this reduction in burstiness appears to be area-, cell-type- and context-specific. First, the reported degree of burst modulation in V1 was less than what was observed in V4, in line with the known polarity of firing rate changes \cite{Buffalo2009a, MartinezTrujillo2022a}.
Second, when separating neurons into cell-type-specific classes based on the extracellular spike duration, the reduction in burstiness was only significant for broad-spiking pyramidal neurons at low firing rates (<20 Hz), and did not occur for narrow-spiking inhibitory neurons \cite{Anderson2013a}. Lastly, only neurons whose firing rates increased with task difficulty showed a reduction in burstiness, while those with firing rates suppressed by task difficulty did not show a significant modulation in burstiness \cite{Huang2019a}. The reduction in burstiness also appears to occur only for spatial and not feature-based attention \cite{Xue2016a}, suggesting that spatial and feature-based attention may rely on different mechanisms. In contrast to the visual areas, attention-dependent increases in bursting have been observed in anterior cingulate and prefrontal cortex (PFC/ACC) \cite{Voloh2018a}. Following the cue during a covert attention task, the burst fraction increases for both putative inhibitory and excitatory neurons, as determined by extracellular spike shape. However, the magnitude of the increase was cell-type-specific, with putative inhibitory neurons showing a larger increase. Both cell types also showed differences in their relation to LFP frequency bands. Within the theta frequency band, burst firing coincided with stronger theta power than non-burst firing for both cell types, an effect that was stronger for inhibitory cells. Within the beta frequency band, excitatory neurons were associated with strong beta power at phases preceding the preferred phase. This suggests that the bursting of inhibitory and excitatory neurons in the PFC/ACC may play a role in the synchronization of local population activity during attention states. ``Attention-like'' changes in bursting, that is, changes in bursting that correlate with a shift in perceptual performance, have also been observed in the primary somatosensory cortex of rodents.
During a whisker-deflection detection task, activity within the apical dendrites of L5 pyramidal neurons is causally linked to the perceptual detection threshold \cite{Takahashi2016a}. Increases in calcium activity within the dendrites and bursting at the soma were positively correlated with detection probability for a subset of neurons, suggesting that conjunctive bursting influences perception. Similarly, for a microstimulation detection task, perceptual detection depended on the irregularity of stimulation, indicating that bursting makes representations more salient \cite{Doron2014a, Doron2019a}. Learning to respond in the microstimulation task appears to occur in two stages \cite{Naud2022a}. First, a fraction of neurons become selective to the stimulus by slightly increasing their firing rates. During the second stage, the formed representations are sharpened, with only the selective neurons showing an increase in firing rate. This is due to a temporal alignment of burst modulation with the previously learned representations. Taken together, these results suggest that an ``attention-like'' top-down signal modulating bursting is able to selectively sharpen previously learned task-relevant representations in order to increase perceptual performance. \subsection{Learning} A learning-related role of bursting has been studied in the hippocampus in the context of place-field formation. Recordings in behaving mice have indicated that place fields are more likely to arise in cells with a propensity to produce bursts \cite{Epsztein2011a}. A correlation between the spontaneous occurrence of plateau potentials and place-field formation has also been noted \cite{Bittner2015a}. Testing for a causal relationship, Bittner et al. (2017) found that the artificial induction of a few plateau potentials is sufficient to give rise to a new place field close to the location occupied by the animal at the time of the induction \cite{Bittner2017a,Zhao2022a}.
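The logic of such plateau-driven place-field formation is often captured by pairing a seconds-long presynaptic eligibility trace with the plateau event. The sketch below is a deliberately simplified caricature of this idea, using a symmetric exponential kernel and an illustrative 1 s time constant; measured BTSP kernels are asymmetric and input-specific.

```python
import math

def btsp_weight_changes(input_times, plateau_time, tau=1.0, eta=1.0):
    """Potentiation of inputs active within ~tau seconds of one plateau."""
    return [eta * math.exp(-abs(t - plateau_time) / tau) for t in input_times]

# Inputs tile positions visited at 1 s intervals; a plateau is induced at t = 5 s.
times = [float(t) for t in range(11)]
dw = btsp_weight_changes(times, plateau_time=5.0)

# The strongest potentiation goes to inputs active near the induction time,
# yielding a new place field centered on the location of the plateau.
assert dw[5] == max(dw)
assert dw[4] > dw[1] and dw[6] > dw[9]
```

Even this caricature reproduces the key observation that a place field appears near the position occupied at the time of the induction in CA1 pyramidal cells.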
In vivo bursting in these cells requires inputs from the entorhinal cortex (EC) and CA3, and is helped by disinhibition from SST-positive cells \cite{Royer2012a,Grienberger2017a,Zutshi2022a}. The role of EC in particular has been tested with silencing experiments and shown to be essential for instructive plasticity of CA1 \cite{Grienberger2022a}. Because connections from EC to CA1 are facilitating \cite{Jackman2016a}, EC bursting is more likely than isolated spikes to act as an instructive signal. These properties of instructive forms of plasticity echo similar burst-dependent plasticity in the cerebellum \cite{Shadmehr2010a,Yang2014a,Herzfeld2015a} and other structures \cite{Mejias2013a,Muller2019a}, such that a relationship between learning and bursting in cortex is suspected and much sought after. High-frequency bursting of long duration starts to occur in cortex upon entering an associative learning task \cite{Wang2020a}. Blocking either feedforward or feedback connections blocks learning \cite{Doron2020a}. Consistently, dendrite-targeting inhibition blocks learning \cite{Chen2015a,Doron2020a}. Furthermore, if bursting controls plasticity by providing a unit-specific representation of error \cite{Payeur2021a}, one would predict that the relevant period for this instructive signal to act would come after any cues are given and after reward delivery or lack thereof. An increase in burstiness has indeed been shown to encode errors with a delay of about 1 s with respect to the cue \cite{Naud2022a}, suggesting that second-long eligibility traces similar to those in hippocampus may be at work in cortex. Consistent with this, silencing pyramidal cell activity during the reward period, but after the sensory cue, has been shown to alter learning of a sensory association task \cite{Ford2022a}.
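If cortical bursting indeed provides a unit-specific error signal for plasticity, a minimal rule in the spirit of \cite{Payeur2021a} updates each weight according to deviations of the post-synaptic burst probability from its running average. The sketch below is a schematic rendering with illustrative parameters, not the published rule:

```python
def burst_dependent_update(w, pre_trace, is_burst, p_bar, eta=0.1, tau_avg=10.0):
    """One post-synaptic event: potentiate when a burst exceeds the expected
    burst probability p_bar, depress when a singlet falls short of it."""
    p = 1.0 if is_burst else 0.0
    w += eta * pre_trace * (p - p_bar)   # signed, unit-specific error term
    p_bar += (p - p_bar) / tau_avg       # slow estimate of burst probability
    return w, p_bar

w, p_bar = 0.0, 0.2
for _ in range(5):    # a run of unexpected bursts potentiates the synapse
    w, p_bar = burst_dependent_update(w, 1.0, True, p_bar)
w_after_bursts = w
for _ in range(20):   # a subsequent run of singlets depresses it
    w, p_bar = burst_dependent_update(w, 1.0, False, p_bar)
assert w_after_bursts > 0.0 and w < w_after_bursts
```

In such a rule, the sign of plasticity is carried by whether an event is a burst or a singlet, while the presynaptic trace confines the change to recently active inputs, consistent with the second-long eligibility traces discussed above.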
\section{Discussion} While we have reviewed a large body of evidence supporting the idea that the brain uses a ternary code for representing, transmitting and processing information, and that this ternary code is at play in principal cells of the thalamus, cortex, cerebellum and hippocampus, one question remains: is this language spoken only by these select neurons, or will future scrutiny extend its boundaries? In cortex, the presence of burst-generating calcium spikes depends on neuron size, cortical layer and species \cite{Waters2003a,BeaulieuLaroche2018a,Ledergerber2010a,Fletcher2019a}. Yet even layer 2-3 neurons, which have been shown to have only weak calcium spikes \cite{Waters2003a}, engage in bursting in vivo \cite{Senzai2019a} and show burst-dependent transmission and synaptic plasticity \cite{Lefort2009a,Froemke2006a}. It is therefore likely that other bursting mechanisms are at play in these cells \cite{Gidon2020a,Brumberg2000a,Haj1997a}. GABAergic interneurons of the cortex, on the other hand, may show burst-dependent transmission \cite{Campagnola2022a}, but have not been shown to have burst-dependent plasticity \cite{Hennequin2017a} or special burst-generation mechanisms. Recent work has revealed that these cells produce NMDAR-based dendritic spikes, which can alter the ISI dispersion as in burst-coding cells \cite{Friedenberger2022a}. Putative GABAergic cells have also been shown to modulate bursting in an attention task in vivo \cite{Voloh2018a}. Yet this forms an isolated body of evidence, so that the widespread nature of burst coding remains to be proven. \section{Conclusion} A ternary neural code is manifest from observations in some of the most studied neuron types in neuroscience. While neurons emit single action potentials in relative isolation by accumulating changes to the membrane potential, bursts typically arise from calcium flowing through VGCCs.
We argued that the ternary code can be seen as a way for cells to communicate both elevated membrane potential and elevated levels of intracellular calcium. Because synaptic plasticity is engaged by elevated calcium rather than elevated membrane potential, the ternary neural code may have arisen from a need to distinguish between these intracellular signals when communicating with other cells. By broadcasting a signal for undergoing plasticity, neurons could engage in elaborate coordination of plasticity across cells. Alternatively, a ternary neural code may have enabled other functions, such as the ability to manipulate and target top-down attention. The extent of burst coding in the central nervous system, as well as its implications for theories of cognitive processes, remains largely to be discovered. For this, in vivo recordings must retain sensitivity to both single spikes and bursts, which remains a challenge for current high-throughput methodologies. \section*{References} \vspace{1cm}
#include <autoconf.h>
#include <assert.h>
#include <camkes.h> /* generated header */
#include <platsupport/io.h>
#include <sel4/types.h>
#include <sel4/sel4.h>
#include <sync/mutex.h>
#include <sync/sem.h>
#include <sel4platsupport/platsupport.h>
#include <camkes/allocator.h>
#include <camkes/dataport.h>
#include <camkes/dma.h>
#include <camkes/error.h>
#include <camkes/io.h>
#include <camkes/tls.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sync/sem-bare.h>
#include <sel4debug/identity.h>
#include <sel4utils/mapping.h>
#include <utils/util.h>

/*? macros.show_includes(me.type.includes) ?*/

/*- set putchar = c_symbol() -*/
static void (* /*? putchar ?*/)(int c);

void set_putchar(void (*putchar)(int c)) {
    /*? putchar ?*/ = putchar;
}

void __arch_putchar(int c) {
    if (/*? putchar ?*/ != NULL) {
        /*? putchar ?*/(c);
        return;
    }
#ifdef SEL4_DEBUG_KERNEL
    seL4_DebugPutChar(c);
#endif
}

const char *get_instance_name(void) {
    static const char name[] = "/*? me.name ?*/";
    return name;
}

/* DMA functionality. */

/*# Determine the size of the DMA pool. Note that we make no attempt to
 *# suppress this attribute being generated as a user-accessible variable at
 *# the top of this file. If the component actually has a declared attribute
 *# 'dma_pool' then they will get access to this variable at runtime. #*/
/*- set dma_pool = configuration[me.name].get('dma_pool', 0) -*/

/*- set p = Perspective() -*/
static char /*? p['dma_pool_symbol'] ?*/[ROUND_UP_UNSAFE(/*? dma_pool ?*/, PAGE_SIZE_4K)]
    __attribute__((section("persistent")))
    __attribute__((aligned(PAGE_SIZE_4K)));

/*- set get_paddr = c_symbol('get_paddr') -*/
uintptr_t /*? get_paddr ?*/(void *ptr) {
    uintptr_t base UNUSED = (uintptr_t)ptr & ~MASK(PAGE_BITS_4K);
    uintptr_t offset UNUSED = (uintptr_t)ptr & MASK(PAGE_BITS_4K);
    /*- for i in range(int(ROUND_UP(dma_pool, PAGE_SIZE) / PAGE_SIZE)) -*/
        /*- if not loop.first -*/
        else
        /*- endif -*/
        if (base == (uintptr_t)/*? p['dma_pool_symbol'] ?*/ + /*? i ?*/ * PAGE_SIZE_4K) {
            /*- set p = Perspective(dma_frame_index=i) -*/
            /*- set frame = alloc(p['dma_frame_symbol'], seL4_FrameObject) -*/
            /*- set paddr_sym = c_symbol('paddr') -*/
            static uintptr_t /*? paddr_sym ?*/;
            if (/*? paddr_sym ?*/ == 0) {
                seL4_ARCH_Page_GetAddress_t res = seL4_ARCH_Page_GetAddress(/*? frame ?*/);
                ERR_IF(res.error != 0, camkes_error, ((camkes_error_t){
                        .type = CE_SYSCALL_FAILED,
                        .instance = "/*? me.name ?*/",
                        .description = "failed to reverse virtual mapping to a DMA frame",
                        .syscall = ARCHPageGetAddress,
                        .error = res.error,
                    }), ({
                        return (uintptr_t)NULL;
                    }));
                /*? paddr_sym ?*/ = res.paddr;
            }
            return /*? paddr_sym ?*/ + offset;
        }
    /*- endfor -*/
    return (uintptr_t)NULL;
}

/* MMIO related functionality for interaction with libplatsupport. */
void *camkes_io_map(void *cookie UNUSED, uintptr_t paddr UNUSED,
        size_t size UNUSED, int cached UNUSED, ps_mem_flags_t flags UNUSED) {
    /*- for d in me.type.dataports -*/
        extern void * /*? d.name ?*/_translate_paddr(uintptr_t paddr, size_t size)
            __attribute__((weak));
        if (/*? d.name ?*/_translate_paddr != NULL) {
            void *p = /*? d.name ?*/_translate_paddr(paddr, size);
            if (p != NULL) {
                return p;
            }
        }
    /*- endfor -*/
    /* Not found. */
    return NULL;
}

/* IO port related functionality for interaction with libplatsupport. */
int camkes_io_port_in(void *cookie UNUSED, uint32_t port UNUSED,
        int io_size UNUSED, uint32_t *result UNUSED) {
    /*- for u in me.type.uses -*/
        /*- if u.type.name == 'IOPort' -*/ /*# XXX: awkward hardcoding of connector type name #*/
            if (/*? u.name ?*/_in_range(port)) {
                switch (io_size) {
                    case 1:
                        *result = /*? u.name ?*/_in8(port);
                        return 0;
                    case 2:
                        *result = /*? u.name ?*/_in16(port);
                        return 0;
                    case 4:
                        *result = /*? u.name ?*/_in32(port);
                        return 0;
                }
                return -1;
            }
        /*- endif -*/
    /*- endfor -*/
    return -1;
}

int camkes_io_port_out(void *cookie UNUSED, uint32_t port UNUSED,
        int io_size UNUSED, uint32_t val UNUSED) {
    /*- for u in me.type.uses -*/
        /*- if u.type.name == 'IOPort' -*/ /*# XXX: awkward hardcoding of connector type name #*/
            if (/*? u.name ?*/_in_range(port)) {
                switch (io_size) {
                    case 1:
                        /*? u.name ?*/_out8(port, val);
                        return 0;
                    case 2:
                        /*? u.name ?*/_out16(port, val);
                        return 0;
                    case 4:
                        /*? u.name ?*/_out32(port, val);
                        return 0;
                }
                return -1;
            }
        /*- endif -*/
    /*- endfor -*/
    return -1;
}

/* Mutex functionality. */
/*- for m in me.type.mutexes -*/
/*- set mutex = c_symbol(m.name) -*/
static sync_mutex_t /*? mutex ?*/;

/*- if not m.name.startswith('reinitializable_') -*/static /*- endif -*/int mutex_/*? m.name ?*/_init(void) {
    /*- set notification = alloc(m.name, seL4_NotificationObject, read=True, write=True) -*/
    return sync_mutex_init(&/*? mutex ?*/, /*? notification ?*/);
}

int /*? m.name ?*/_lock(void) {
    return sync_mutex_lock(&/*? mutex ?*/);
}

int /*? m.name ?*/_unlock(void) {
    return sync_mutex_unlock(&/*? mutex ?*/);
}
/*- endfor -*/

/* Semaphore functionality. */
/*- for s in me.type.semaphores -*/
/*- set semaphore = c_symbol(s.name) -*/
static sync_sem_t /*? semaphore ?*/;

static int semaphore_/*? s.name ?*/_init(void) {
    /*- set ep = alloc(s.name, seL4_EndpointObject, read=True, write=True) -*/
    return sync_sem_init(&/*? semaphore ?*/, /*? ep ?*/,
        /*? configuration[me.name].get('%s_value' % s.name, 1) ?*/);
}

int /*? s.name ?*/_wait(void) {
    return sync_sem_wait(&/*? semaphore ?*/);
}

int /*? s.name ?*/_trywait(void) {
    return sync_sem_trywait(&/*? semaphore ?*/);
}

int /*? s.name ?*/_post(void) {
    return sync_sem_post(&/*? semaphore ?*/);
}
/*- endfor -*/

#ifndef CONFIG_CAMKES_DEFAULT_HEAP_SIZE
#define CONFIG_CAMKES_DEFAULT_HEAP_SIZE 1048576
#endif

/*- set heap_size = configuration[me.name].get('heap_size', 'CONFIG_CAMKES_DEFAULT_HEAP_SIZE') -*/
/*- set heap = c_symbol() -*/
#if /*? heap_size ?*/ > 0
static char /*? heap ?*/[/*? heap_size ?*/];
extern char *morecore_area;
extern size_t morecore_size;
#endif

/* General CAmkES platform initialisation. Expects to be run in a
 * single-threaded, exclusive context. On failure it does not return. */
/*- set init = c_symbol() -*/
static void /*? init ?*/(void) {
#if /*? heap_size ?*/ > 0
    /* Assign the heap */
    morecore_area = /*? heap ?*/;
    morecore_size = /*? heap_size ?*/;
#endif

    /* The user has actually had no opportunity to install any error handlers at
     * this point, so any error triggered below will certainly be fatal. */
    int res = camkes_dma_init(/*? p['dma_pool_symbol'] ?*/,
        ROUND_UP(/*? dma_pool ?*/, PAGE_SIZE_4K), PAGE_SIZE_4K, /*? get_paddr ?*/);
    ERR_IF(res != 0, camkes_error, ((camkes_error_t){
            .type = CE_ALLOCATION_FAILURE,
            .instance = "/*? me.name ?*/",
            .description = "DMA initialisation failed",
        }), ({
            return;
        }));
    debug_set_id_fn(get_instance_name);
    /*- for m in me.type.mutexes -*/
        res = mutex_/*? m.name ?*/_init();
        ERR_IF(res != 0, camkes_error, ((camkes_error_t){
                .type = CE_ALLOCATION_FAILURE,
                .instance = "/*? me.name ?*/",
                .description = "initialisation of mutex \"/*? m.name ?*/\" failed",
            }), ({
                return;
            }));
    /*- endfor -*/
    /*- for s in me.type.semaphores -*/
        res = semaphore_/*? s.name ?*/_init();
        ERR_IF(res != 0, camkes_error, ((camkes_error_t){
                .type = CE_ALLOCATION_FAILURE,
                .instance = "/*? me.name ?*/",
                .description = "initialisation of semaphore \"/*? s.name ?*/\" failed",
            }), ({
                return;
            }));
    /*- endfor -*/

    /* Initialise cap allocator. */
    /*- set tcb_pool = configuration[me.name].get('tcb_pool', 0) -*/
    /*- for i in range(tcb_pool) -*/
        /*- set tcb = alloc('tcb_pool_%d' % i, seL4_TCBObject, read=True, write=True) -*/
        res = camkes_provide(seL4_TCBObject, /*? tcb ?*/, 0, seL4_CanRead|seL4_CanWrite);
        ERR_IF(res != 0, camkes_error, ((camkes_error_t){
                .type = CE_ALLOCATION_FAILURE,
                .instance = "/*? me.name ?*/",
                .description = "failed to add TCB /*? tcb + 1 ?*/ to cap allocation pool",
            }), ({
                return;
            }));
    /*- endfor -*/
    /*- set ep_pool = configuration[me.name].get('ep_pool', 0) -*/
    /*- for i in range(ep_pool) -*/
        /*- set ep = alloc('ep_pool_%d' % i, seL4_EndpointObject, read=True, write=True) -*/
        res = camkes_provide(seL4_EndpointObject, /*? ep ?*/, 0, seL4_CanRead|seL4_CanWrite);
        ERR_IF(res != 0, camkes_error, ((camkes_error_t){
                .type = CE_ALLOCATION_FAILURE,
                .instance = "/*? me.name ?*/",
                .description = "failed to add EP /*? ep + 1 ?*/ to cap allocation pool",
            }), ({
                return;
            }));
    /*- endfor -*/
    /*- set notification_pool = configuration[me.name].get('notification_pool', 0) -*/
    /*- for i in range(notification_pool) -*/
        /*- set notification = alloc('notification_pool_%d' % i, seL4_NotificationObject, read=True, write=True) -*/
        res = camkes_provide(seL4_NotificationObject, /*? notification ?*/, 0, seL4_CanRead|seL4_CanWrite);
        ERR_IF(res != 0, camkes_error, ((camkes_error_t){
                .type = CE_ALLOCATION_FAILURE,
                .instance = "/*? me.name ?*/",
                .description = "failed to add notification /*? notification + 1 ?*/ to cap allocation pool",
            }), ({
                return;
            }));
    /*- endfor -*/
    /*- set untyped_pool = [] -*/
    /*- for attribute, value in configuration[me.name].items() -*/
        /*- set r = re.match('untyped(\\d+)_pool$', attribute) -*/
        /*- if r is not none -*/
            /*- do untyped_pool.append((r.group(1), value)) -*/
        /*- endif -*/
    /*- endfor -*/
    /*- for u in untyped_pool -*/
        /*- for i in range(u[1]) -*/
            /*- if not 4 <= int(u[0]) <= 28 -*/
                /*? raise(Exception('illegal untyped size')) ?*/
            /*- endif -*/
            /*- set untyped = alloc('untyped_%s_pool_%d' % (u[0], i), seL4_UntypedObject, size_bits=int(u[0]), read=True, write=True) -*/
            res = camkes_provide(seL4_UntypedObject, /*? untyped ?*/, 1U << /*? u[0] ?*/, seL4_CanRead|seL4_CanWrite);
            ERR_IF(res != 0, camkes_error, ((camkes_error_t){
                    .type = CE_ALLOCATION_FAILURE,
                    .instance = "/*? me.name ?*/",
                    .description = "failed to add untyped /*? untyped + 1 ?*/ of size /*? u[0] ?*/ bits to cap allocation pool",
                }), ({
                    return;
                }));
        /*- endfor -*/
    /*- endfor -*/
}

#ifndef CONFIG_CAMKES_DEFAULT_STACK_SIZE
#define CONFIG_CAMKES_DEFAULT_STACK_SIZE PAGE_SIZE_4K
#endif

/*- set all_interfaces = me.type.provides + me.type.uses + me.type.emits + me.type.consumes + me.type.dataports -*/
/*- for i in all_interfaces -*/
    /*? macros.show_includes(i.type.includes, '../static/components/' + me.type.name + '/') ?*/
/*- endfor -*/

/* Thread stacks */
/*- set p = Perspective(instance=me.name, control=True) -*/
/*- set stack_size = configuration[me.name].get('_stack_size', 'CONFIG_CAMKES_DEFAULT_STACK_SIZE') -*/
/*? macros.thread_stack(p['stack_symbol'], stack_size) ?*/
/*- for i in all_interfaces -*/
    /*- set p = Perspective(instance=me.name, interface=i.name) -*/
    /*- set stack_size = configuration[me.name].get('%s_stack_size' % i.name, 'CONFIG_CAMKES_DEFAULT_STACK_SIZE') -*/
    /*? macros.thread_stack(p['stack_symbol'], stack_size) ?*/
/*- endfor -*/

/* IPC buffers */
/*- set p = Perspective(instance=me.name, control=True) -*/
/*? macros.ipc_buffer(p['ipc_buffer_symbol']) ?*/
/*- for i in all_interfaces -*/
    /*- set p = Perspective(instance=me.name, interface=i.name) -*/
    /*? macros.ipc_buffer(p['ipc_buffer_symbol']) ?*/
/*- endfor -*/

/* Attributes */
/*- set myconf = configuration[me.name] -*/
/*- for a in me.type.attributes -*/
    /*- set value = myconf.get(a.name) -*/
    /*- if value is not none -*/
        const /*? show(a.type) ?*/ /*? a.name ?*/ = /*? value ?*/;
    /*- endif -*/
/*- endfor -*/

/*- set passive_interfaces = set() -*/

/* Scheduling Contexts */
/*- if realtime -*/
    /*- for i in all_interfaces -*/
        /*- set p = Perspective(instance=me.name, interface=i.name) -*/
        /*- if parse_bool(configuration[me.name].get(p['passive_attribute'], 'false')) -*/
            /*- do passive_interfaces.add(i.name) -*/
            /*- set init_sc = alloc('sc_%s_init' % i.name, seL4_SchedContextObject) -*/
        /*- else -*/
            /*- set sc = alloc('sc_%s' % i.name, seL4_SchedContextObject) -*/
        /*- endif -*/
    /*- endfor -*/
/*- endif -*/

/*- set p = Perspective(instance=me.name, control=True) -*/
/*- if parse_bool(configuration[me.name].get(p['passive_attribute'], 'false')) -*/
/* Control thread declared passive. Ensure the realtime kernel is in use. */
#ifndef CONFIG_KERNEL_RT
#error Passive control thread can only be used with the realtime kernel
#endif
/*- endif -*/

/*- if passive_interfaces -*/
/* Passive interfaces are present. Ensure the realtime kernel is in use. */
#ifndef CONFIG_KERNEL_RT
#error Passive interfaces can only be used with the realtime kernel
#endif
/*- endif -*/

/*- set p = Perspective(instance=me.name) -*/
void USED /*? p['tls_symbol'] ?*/(int thread_id) {
    switch (thread_id) {
        /*- set tcb_control = alloc('tcb_0_control', seL4_TCBObject) -*/
        /*- if realtime -*/
            /*- set sc_control = alloc('sc__control', seL4_SchedContextObject) -*/
        /*- endif -*/
        case /*? tcb_control ?*/ : /* Control thread */
            /*- set p = Perspective(instance=me.name, control=True) -*/
            /*? macros.save_ipc_buffer_address(p['ipc_buffer_symbol']) ?*/
            camkes_get_tls()->tcb_cap = /*? tcb_control ?*/;
            camkes_get_tls()->thread_index = 1;
            break;

        /*# Interface threads #*/
        /*- for index, i in enumerate(all_interfaces) -*/
            /*- set tcb = alloc('tcb_%s' % i.name, seL4_TCBObject) -*/
            case /*? tcb ?*/ : { /* Interface /*? i.name ?*/ */
                /*- set p = Perspective(instance=me.name, interface=i.name) -*/
                /*? macros.save_ipc_buffer_address(p['ipc_buffer_symbol']) ?*/
                camkes_get_tls()->tcb_cap = /*? tcb ?*/;
                camkes_get_tls()->thread_index = /*? index ?*/ + 2;
                break;
            }
        /*- endfor -*/
        default:
            assert(!"invalid thread ID");
    }
}

/*- set p = Perspective(instance=me.name) -*/
int USED /*? p['entry_symbol'] ?*/(int thread_id) {
#if defined(SEL4_DEBUG_KERNEL) && defined(CONFIG_CAMKES_PROVIDE_TCB_CAPS)
    /*- set thread_name = c_symbol() -*/
    char /*? thread_name ?*/[seL4_MsgMaxLength * sizeof(seL4_Word)];
    snprintf(/*? thread_name ?*/, sizeof(/*? thread_name ?*/), "%s(%d)",
        get_instance_name(), thread_id);
    /*? thread_name ?*/[sizeof(/*? thread_name ?*/) - 1] = '\0';
    seL4_DebugNameThread(camkes_get_tls()->tcb_cap, /*? thread_name ?*/);
#endif

    /*- if options.fsupport_init -*/
        /*# Locks for synchronising init ops. #*/
        /*- set pre_init_ep = alloc('pre_init_ep', seL4_EndpointObject, read=True, write=True) -*/
        /*- set pre_init_lock = c_symbol('pre_init_lock') -*/
        static volatile int UNUSED /*? pre_init_lock ?*/ = 0;
        /*- set interface_init_ep = alloc('interface_init_ep', seL4_EndpointObject, read=True, write=True) -*/
        /*- set interface_init_lock = c_symbol('interface_init_lock') -*/
        static volatile int UNUSED /*? interface_init_lock ?*/ = 0;
        /*- set post_init_ep = alloc('post_init_ep', seL4_EndpointObject, read=True, write=True) -*/
        /*- set post_init_lock = c_symbol('post_init_lock') -*/
        static volatile int UNUSED /*? post_init_lock ?*/ = 0;
    /*- endif -*/

    /*- set result = c_symbol() -*/
    int /*? result ?*/ UNUSED;

    switch (thread_id) {
        case 0:
            /* This case is just here to help debugging. If you hit this case,
             * what is happening is probably a failure in passing arguments
             * (thread ID) from our loader. */
            assert(!"invalid thread ID");
            return -1;

        /*- set tcb_control = alloc('tcb_0_control', seL4_TCBObject) -*/
        case /*? tcb_control ?*/ : /* Control thread */
            /*? init ?*/();
            /* Wake all of our passive threads (by binding a scheduling context)
             * so they can initialise */
            /*- for i in all_interfaces -*/
                /*- set tcb = alloc('tcb_%s' % i.name, seL4_TCBObject) -*/
                /*- if i.name in passive_interfaces -*/
                    /*- set init_sc = alloc('sc_%s_init' % i.name, seL4_SchedContextObject) -*/
                    /*? result ?*/ = seL4_SchedContext_Bind(/*? init_sc ?*/, /*? tcb ?*/);
                    ERR_IF(/*? result ?*/ != 0, camkes_error, ((camkes_error_t){
                            .type = CE_SYSCALL_FAILED,
                            .instance = "/*? me.name ?*/",
                            .description = "failed to bind initialisation scheduling context for interface \"/*? i.name ?*/\"",
                            .syscall = SchedContextBind,
                            .error = /*? result ?*/,
                        }), ({
                            return -1;
                        }));
                /*- endif -*/
            /*- endfor -*/
            /*- if options.fsupport_init -*/
                if (pre_init) {
                    pre_init();
                }
                /* Wake all the interface threads. */
                /*- for i in all_interfaces -*/
                    sync_sem_bare_post(/*? pre_init_ep ?*/, &/*? pre_init_lock ?*/);
                /*- endfor -*/
                /* wait for all the interface threads to run their inits. */
                /*- for i in all_interfaces -*/
                    sync_sem_bare_wait(/*? interface_init_ep ?*/, &/*? interface_init_lock ?*/);
                /*- endfor -*/
                if (post_init) {
                    post_init();
                }
                /* Wake all the interface threads. */
                /*- for i in all_interfaces -*/
                    sync_sem_bare_post(/*? post_init_ep ?*/, &/*? post_init_lock ?*/);
                /*- endfor -*/
            /*- endif -*/
            /* Unbind scheduling context from all passive interface threads. */
            /*- for i in all_interfaces -*/
                /*- if i.name in passive_interfaces -*/
                    /*- set init_sc = alloc('sc_%s_init' % i.name, seL4_SchedContextObject) -*/
                    /*- set init_ntfn = alloc('ntfn_%s_init' % i.name, seL4_NotificationObject, read=True, write=True) -*/
                    seL4_Wait(/*? init_ntfn ?*/, NULL);
                    /*? result ?*/ = seL4_SchedContext_Unbind(/*? init_sc ?*/);
                    ERR_IF(/*? result ?*/ != 0, camkes_error, ((camkes_error_t){
                            .type = CE_SYSCALL_FAILED,
                            .instance = "/*? me.name ?*/",
                            .description = "failed to unbind initialisation scheduling context for interface \"/*? i.name ?*/\"",
                            .syscall = SchedContextUnbind,
                            .error = /*? result ?*/,
                        }), ({
                            return -1;
                        }));
                /*- endif -*/
            /*- endfor -*/
            /*- if me.type.control -*/
                return run();
            /*- else -*/
                return 0;
            /*- endif -*/

        /*# Interface threads #*/
        /*- for index, i in enumerate(all_interfaces) -*/
            /*- set tcb = alloc('tcb_%s' % i.name, seL4_TCBObject) -*/
            case /*? tcb ?*/ : { /* Interface /*? i.name ?*/ */
                /*- if options.fsupport_init -*/
                    /* Wait for `pre_init` to complete. */
                    sync_sem_bare_wait(/*? pre_init_ep ?*/, &/*? pre_init_lock ?*/);
                    if (/*? i.name ?*/__init) {
                        /*? i.name ?*/__init();
                    }
                    /* Notify the control thread that we've completed init. */
                    sync_sem_bare_post(/*? interface_init_ep ?*/, &/*? interface_init_lock ?*/);
                    /* Wait for the `post_init` to complete. */
                    sync_sem_bare_wait(/*? post_init_ep ?*/, &/*? post_init_lock ?*/);
                /*- endif -*/
                /*- if i.name in passive_interfaces -*/
                    /*- set init_ntfn = alloc('ntfn_%s_init' % i.name, seL4_NotificationObject, read=True, write=True) -*/
                    /*# If this is a passive interface, the __run function must SignalRecv to tell the control
                     *# thread to unbind its sc, and simultaneously start waiting for rpc calls. #*/
                    extern int /*? i.name ?*/__run_passive(seL4_CPtr init_ntfn) __attribute__((weak));
                    if (/*? i.name ?*/__run_passive) {
                        return /*? i.name ?*/__run_passive(/*? init_ntfn ?*/);
                    } else {
                        /* Interface not connected. */
                        // Inform the main component thread that we're finished initializing
                        seL4_Signal(/*? init_ntfn ?*/);
                        // Block forever
                        seL4_TCB_Suspend(/*? tcb ?*/);
                    }
                /*- else -*/
                    extern int /*? i.name ?*/__run(void) __attribute__((weak));
                    if (/*? i.name ?*/__run) {
                        return /*? i.name ?*/__run();
                    }
                /*- endif -*/
                return 0;
            }
        /*- endfor -*/

        default:
            /* If we reach this point, the initialiser gave us a thread we
             * weren't expecting. */
            assert(!"Template generation failure");
            return -1;
    }
}

/*- for e in me.type.emits -*/
void /*? e.name ?*/_emit_underlying(void) __attribute__((weak));
void /*? e.name ?*/_emit(void) {
    /* If the interface is not connected, the 'underlying' function will
     * not exist. */
    if (/*? e.name ?*/_emit_underlying) {
        /*? e.name ?*/_emit_underlying();
    }
}
/*- endfor -*/

/* Prototypes for functions generated in per-interface files. */
/*- for d in me.type.dataports -*/
extern int /*? d.name ?*/_wrap_ptr(dataport_ptr_t *p, void *ptr)
/*- if d.optional -*/
    __attribute__((weak))
/*- endif -*/
;
/*- endfor -*/

dataport_ptr_t dataport_wrap_ptr(void *ptr UNUSED) {
    dataport_ptr_t p = { .id = -1 };
    /*- for d in me.type.dataports -*/
        if (
            /*- if d.optional -*/
                /*? d.name ?*/_wrap_ptr != NULL &&
            /*- endif -*/
                /*? d.name ?*/_wrap_ptr(&p, ptr) == 0) {
            return p;
        }
    /*- endfor -*/
    return p;
}

/* Prototypes for functions generated in per-interface files. */
/*- for d in me.type.dataports -*/
extern void * /*? d.name ?*/_unwrap_ptr(dataport_ptr_t *p)
/*- if d.optional -*/
    __attribute__((weak))
/*- endif -*/
;
/*- endfor -*/

void *dataport_unwrap_ptr(dataport_ptr_t p UNUSED) {
    void *ptr = NULL;
    /*- for d in me.type.dataports -*/
        /*- if d.optional -*/
            if (/*? d.name ?*/_unwrap_ptr != NULL) {
        /*- endif -*/
                ptr = /*? d.name ?*/_unwrap_ptr(&p);
                if (ptr != NULL) {
                    return ptr;
                }
        /*- if d.optional -*/
            }
        /*- endif -*/
    /*- endfor -*/
    return ptr;
}
\section{Introduction} In this paper we review the recent status of our search for a QCD phase transition to the Quark Gluon Plasma \cite{qgp} through a systematic analysis of strangeness and pion (entropy) production in nuclear collisions. There are important reasons to select strangeness \cite{Ko:86} and entropy \cite{Va:82} as basic observables. Both are defined in any form of strongly interacting matter. Their equilibrium values are directly sensitive to the basic properties of matter: the effective number of degrees of freedom and their effective masses. Entropy and strangeness are believed to be produced at the early stage of the evolution of the system created in nuclear collisions, and therefore they can allow us to `measure' the properties of matter at very high energy densities. Our strategy of data\footnote{ Data obtained by about 100 different experiments are used. The references to the original experimental works can be found in the compilation papers \cite{Ga:1,Ga:2,Ga:3}.} analysis \cite{Ga:1,Ga:2,Ga:3,Ga:4,Ga:5,Ga:6} reviewed here can be summarized as follows:\\ 1. We study the dependence of the properly normalized entropy (mainly determined by the pion multiplicity) and strangeness (mainly determined by kaon and hyperon yields) production on the volume of the colliding nuclear matter at a fixed collision energy. We demonstrate that a fast saturation occurs. The simplest qualitative interpretation is that equilibration of entropy and strangeness takes place (Section 2). \\ 2. We study the dependence of the saturation levels on the collision energy. We demonstrate that the saturation levels for both entropy and strangeness show an unusual change between AGS ($\approx$15 A$\cdot$GeV/c) and SPS ($\approx$160 A$\cdot$GeV/c) energies. We interpret this effect as due to the localization of the threshold energy for QGP creation in the above energy range (Section 3). \\ 3.
Finally, we formulate a simple statistical model for entropy and strangeness production in nuclear collisions and show that the experimental results at SPS energies can be quantitatively described assuming the creation of a QGP (Section 4). A critical discussion of the basic assumptions used in the model is given at the end of this section. \section{Volume Dependence} Experimental data on pion\footnote{Here we use the pion multiplicity instead of the entropy in order to start the analysis from `raw' experimental data. An improved experimental estimate of the entropy is given in the next section.} and strangeness production in central nucleus--nucleus (A+A) and all inelastic nucleon--nucleon (N+N) collisions are shown in Fig. 1 as a function of the number of participant nucleons, $\langle N_P \rangle$, for various collision energies. In order to eliminate a trivial volume dependence, the normalized multiplicities are studied: \begin{equation} \frac {\langle \pi \rangle} {\langle N_P \rangle} \end{equation} and \begin{equation} E_S \equiv \frac{ \langle \Lambda \rangle + \langle K + \overline{K} \rangle } {\langle \pi \rangle}, \end{equation} where $\langle \pi \rangle$, $\langle \Lambda \rangle$, and $\langle K+\overline{K} \rangle$ are the mean multiplicities of pions, $\Lambda(+\Sigma^0)$ hyperons, and kaons and antikaons, respectively. For all energies a similar behaviour is observed: a rapid change between the results for N+N interactions and intermediate-mass nuclei ($\langle N_P \rangle \approx$ 50) is followed by a well defined region in which the normalized pion and strangeness production is almost constant. We interpret the observed saturation as a result of an equilibration of the entropy and strangeness yields. As the production rates are steep functions of the temperature, equilibration can be expected to occur at the early (hottest) stage of the collision. Thus the measured equilibrium values may reflect the properties of the initially created matter.
\section{Energy Dependence} The collision energy dependence of the normalized entropy and strangeness production is shown in Fig. 2. The energy dependence is studied using the Fermi energy variable \cite{Fe:50,La:53}: \begin{equation} F \equiv \frac {(\sqrt{s}_{NN} - 2 m_N)^{3/4}} {\sqrt{s}_{NN}^{1/4}}, \end{equation} where $\sqrt{s}_{NN}$ is the c.m. energy for a nucleon--nucleon pair and $m_N$ is the nucleon mass. There are several advantages in using $F$ as an energy variable. The measured mean pion multiplicity in N+N interactions is approximately proportional to $F$ \cite{Go:89,Ga:4}. In the Landau model \cite{La:53} both the entropy and the temperature of the initially created matter (for $\sqrt{s}_{NN} \gg 2 m_N$) are also proportional to $F$. The `entropy' presented in Fig. 2 is calculated as: \begin{equation} S \equiv \langle \pi \rangle + \kappa \langle K+\overline{K} \rangle + \alpha \langle N_P \rangle, \end{equation} where the last two components take into account kaon production and pion absorption \cite{Ga:4,Ga:5}. Thus $S$ can be treated as the inelastic entropy measured in units of pion entropy. The normalized `entropy' for central A+A collisions at low energies (AGS and below) follows the dependence established by the N+N data. At SPS energies (the data of the NA35 and NA49 Collaborations) it is about 30\% higher for A+A collisions than for N+N interactions. The energy dependence of the $E_S$ ratio is also shown in Fig. 2. For a better comparison of the energy dependence, the results for N+N interactions are scaled by a factor of 3.6 to fit the A+A data at AGS. A monotonic increase of the $E_S$ ratio between the Dubna energy (p$_{LAB}$ = 4.5 A$\cdot$GeV/c) and SPS energies is observed. In the range from AGS to SPS the $E_S$ ratio for N+N interactions is enhanced by a factor of about 2. A qualitatively different dependence is seen for central A+A collisions.
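For concreteness, the Fermi variable can be evaluated directly from the beam momentum of a fixed-target collision. The sketch below uses the nominal Dubna, AGS and SPS beam momenta quoted in the text; the function names are ours:

```python
import math

M_N = 0.938  # nucleon mass [GeV]

def sqrt_s_nn(p_lab):
    """c.m. energy of a nucleon--nucleon pair for a fixed-target beam
    momentum p_lab [GeV/c] per nucleon."""
    e_lab = math.sqrt(p_lab ** 2 + M_N ** 2)
    return math.sqrt(2 * M_N ** 2 + 2 * M_N * e_lab)

def fermi_f(p_lab):
    """Fermi energy variable F = (sqrt(s)_NN - 2 m_N)^(3/4) / sqrt(s)_NN^(1/4),
    in GeV^(1/2)."""
    rs = sqrt_s_nn(p_lab)
    return (rs - 2 * M_N) ** 0.75 / rs ** 0.25

for p in (4.5, 14.6, 158.0):   # Dubna, AGS, SPS beam momenta [A GeV/c]
    print(f"p_lab = {p:6.1f} A GeV/c  ->  F = {fermi_f(p):.2f} GeV^(1/2)")
```

The resulting values straddle $F \approx 2$ GeV$^{1/2}$ between the AGS and SPS points, which is the transition region discussed below.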
An increase of $E_S$ between the Dubna\footnote{ Note that the saturation of $E_S$ with $\langle N_P \rangle$ at the Dubna energy is still not established experimentally, see Fig. 1.} and AGS energies is followed by a weak (if any) change of $E_S$ between the AGS and SPS collision energies. Let us now try to understand the observed energy dependence on a qualitative level. In the generalized Landau model \cite{Ga:4} the inelastic entropy is given by: \begin{equation} S \sim g^{1/4} \langle N_P \rangle F, \end{equation} where $g$ is the effective number of degrees of freedom. Thus the observed deviation of the A+A data from the Landau scaling, $S/\langle N_P \rangle \sim F$, can be interpreted as due to an increase of the effective number of degrees of freedom when crossing the transition collision energy. The magnitude of this increase can be estimated, within the model, as the fourth power of the ratio of the slopes of the straight lines describing the low and high energy A+A data: 1.33$^4$ $\approx$ 3 \cite{Ga:4}. Note that this estimate can be treated as an upper limit, as it is based on the assumption that the inelasticity is the same in N+N and A+A collisions. The second dominant effect of the transition to a QGP is the reduction of the effective masses of the degrees of freedom. Basic thermodynamics tells us that for massless particles the ratio {\it (particle number)/entropy} is independent of the temperature. For massive particles the ratio increases with $T$ at low temperature and approaches a saturation level (equal to the corresponding ratio for massless particles) at high temperatures, $T \gg m$. This property can be used to estimate the magnitude of the effective mass of the strangeness carriers in strongly interacting matter.
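The arithmetic behind the quoted factor of 3 is a one-liner: at fixed $F$, the entropy scaling above implies that the ratio of effective degrees of freedom is the fourth power of the ratio of slopes (a sketch of the scaling argument only; the second call applies the same relation to the $\approx$30\% entropy excess mentioned earlier):

```python
def dof_increase(slope_ratio):
    # From S/<N_P> ~ g^(1/4) * F: g_high / g_low = (slope_high / slope_low)^4
    return slope_ratio ** 4

print(dof_increase(1.33))  # ratio of slopes of the low- and high-energy A+A fits
print(dof_increase(1.30))  # equivalently, a 30% excess of S/<N_P> at fixed F
```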
The $E_S$ ratio is approximately proportional to the ratio {\it (number of strangeness carriers)/entropy} ({\it strangeness/entropy} for short), and therefore its temperature (collision energy, $F$) dependence should be sensitive to the effective mass of the strangeness carriers. Reducing the mass of the strangeness carriers should cause a weaker dependence of the $E_S$ ratio on the collision energy. The increase of the $E_S$ ratio in the energy range $F <$ 2 GeV$^{1/2}$ can be interpreted as due to the large effective mass of the strangeness carriers (kaons or constituent strange quarks, $m_S \approx$ 500 MeV/c$^2$) in comparison to the temperature of the matter, $T < T_C \approx$ 150 MeV. At temperatures above $T_C$ the matter is in the form of a QGP and the mass of the strangeness carriers is equal to the mass of current strange quarks, $m_S \approx$ 150 MeV/c$^2$, so that $m_S \leq T$. Thus a much weaker dependence of the $E_S$ ratio on $F$ is expected in the high energy region where the creation of the QGP takes place. The equilibrium value of the {\it strangeness/entropy} ratio is higher in hadronic matter (HM) than in the QGP at very high temperatures \cite{Ka:86}. This is because it is proportional to the ratio of the effective number of strangeness degrees of freedom to the number of all degrees of freedom, and this ratio is lower in a QGP than in HM. At low temperature, however, the {\it strangeness/entropy} ratio is lower in HM than in a QGP. This is caused, as previously discussed, by the high mass of the strangeness carriers in HM. Thus, in general, a transition to the QGP may lead to an increase or a decrease of the {\it strangeness/entropy} ratio, depending on the temperatures of the QGP and the HM at which the comparison is made. The presented data suggest that the transition is associated with a decrease of the {\it strangeness/entropy} ratio.
Thus one can expect a non--monotonic energy dependence of the {\it strangeness/pion} ratio: an initial increase should be followed by a drop to the value characteristic of the QGP. Above the transition region the ratio is expected to be independent of the collision energy. \section{Model of QGP in A+A} Encouraged by the qualitative agreement of the data with the hypothesis that an equilibrated QGP is created in the early stage of A+A collisions at the SPS, we attempt a quantitative comparison using the simplest version of the generalized Landau model \cite{Ga:4,Ga:6}. In the first part of this section we formulate the model and compare its results with the experimental data. In the second part we critically review the basic assumptions made. \subsection{Model Formulation and Results} We assume that the inelastic energy (the energy carried by the produced particles) is deposited and thermalized in a volume equal to the Lorentz-contracted volume of the two overlapping nuclei (we consider only collisions of two identical nuclei). In this volume an equilibrated QGP is formed. For simplicity, we assume that the two baryonic fluids containing the baryons of the projectile and target nuclei are initially decoupled from the baryon--free QGP. Furthermore, we assume that the inelastic entropy\footnote{ Strictly speaking, a fraction (less than 5\% at the CERN SPS) of the inelastic entropy is absorbed by the baryonic fluids in the later stages of the system evolution. A correction for this effect is applied, see Eq. 4.} and strangeness are not changed during the system evolution. The inelastic energy at the SPS was measured to be (67$\pm$7)\% of the available energy \cite{Ba:94}, the same for S+S and Pb+Pb collisions. It is also only weakly dependent on the collision energy between AGS and SPS \cite{St:96}. In the model we use an approximation of a uniform distribution of matter in the colliding nuclei.
The radius of such an effective nucleus is calculated as: \begin{equation} R = r_0 \cdot A^{1/3} = \left(\frac {3 A} {4 \pi \overline{\rho}}\right)^{1/3}, \end{equation} where $A$ is the nuclear mass number and $\overline{\rho}$ is the average nuclear density calculated using the parametrization of the nuclear density distribution given in Ref. \cite{Da:85}. The resulting values of $r_0$ are 1.34 fm and 1.27 fm for the S and Pb nuclei, respectively. The value $r_0$ = 1.3 fm is used in the calculations. Similar $r_0$ values are obtained from fits to the A+A inelastic cross section \cite{cs} using a hard sphere approximation. The QGP is assumed to consist of massless gluons and massive $u$, $d$ and $s$ quarks, together with the corresponding antiquarks. The average mass of the light quarks is taken to be 7.2 MeV \cite{Le:96}. The strange quark mass is taken to be 175$\pm$25 MeV \cite{Le:96}; this mass is determined at an energy scale of 1 GeV. The conversion factor between the calculated entropy and the `entropy' evaluated from the experimental data is taken to be 4 (the entropy per pion at $T \approx$ 150 MeV). The conversion factor between the total strangeness and the strangeness measured by the sum of the $\Lambda$ and $K+\overline{K}$ yields is taken to be 1.36, according to the N+N data and the procedure developed in \cite{Bi:92}. It should be stressed that the model formulated in this way has no free parameters. The resulting production of entropy and strangeness in the energy range 30--500 A$\cdot$GeV is represented by the dashed lines in Fig. 2. The agreement with the data is surprisingly good. The analysis suggests that the plasma created at the SPS has an energy density of about 10 GeV/fm$^3$ and a temperature of about 280 MeV. \subsection{Discussion of Basic Assumptions} In the following we review the basic model assumptions concerning the early stage volume in which thermalization takes place, the thermalized energy, and the production of entropy and strangeness. 
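Before reviewing these assumptions it is useful to recall the numerical scales involved (illustrative estimates, not part of the original fit). Eq. (1) with $r_0$ = 1.3 fm gives for the Pb nucleus
\[
R_{Pb} = 1.3 \cdot 208^{1/3}~{\rm fm} \approx 7.7~{\rm fm},
\]
and for an ideal gas of massless gluons and three flavours of massless quarks ($g = 47.5$ effective degrees of freedom) the Stefan--Boltzmann relation
\[
\varepsilon = \frac{\pi^2}{30}\, g\, T^4
\]
gives $\varepsilon \approx$ 12 GeV/fm$^3$ at $T$ = 280 MeV, consistent, up to the effect of the finite quark masses used in the model, with the quoted energy density of about 10 GeV/fm$^3$.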
{\bf Early Stage Volume.} It was Fermi \cite{Fe:50} who first introduced the Lorentz contracted initial volume of two overlapping protons as the early stage volume in which thermalization of the available energy occurs. This assumption was later adopted by Landau and collaborators \cite{La:53} in order to define the initial conditions for the hydrodynamical expansion of matter created in A+A collisions. This volume can be treated as the maximum volume (no compression of the colliding matter) in which all incoming matter has a chance to be excited. For central collisions of nuclei as large as the S--nucleus and energies as high as CERN SPS energies this volume is large enough to justify a grand canonical approximation for entropy and strangeness production \cite{Ra:80}. The volume is also large enough to be calculated from the initial geometrical volume of the nucleus (here the limitation is set by the longitudinal dimension); the effect of `smearing' due to the uncertainty principle can be neglected. {\bf Thermalized Energy.} Fermi \cite{Fe:50} and Landau \cite{La:53} assumed that the full available energy in the c.m. system is thermalized in the early stage volume. Instantaneous decay of this matter into final state hadrons was assumed by Fermi. Landau, following Pomeranchuk's suggestion \cite{Po:51}, assumed that the matter undergoes a hydrodynamical expansion before freeze--out. Both models are in direct contradiction with the data. They predict that the rapidity distribution of nucleons is narrower than that of pions (due to the mass difference), contrary to the measured distributions in p+p and A+A collisions \cite{Ba:94}. In addition, they predict similar\footnote{ Exact results depend on the freeze--out conditions assumed in the model.} rapidity distributions of baryons and antibaryons, whereas the data for p+p and A+A collisions show strong differences between the $\Lambda$ and $\overline{\Lambda}$ distributions \cite{Ga:91,Al:94}. 
Based on this observation, and guided by the partonic structure of the nucleon, Pokorski and Van Hove \cite{Po:74} postulated that only the gluons are stopped in high energy hadron--hadron interactions and that their energy is used for particle production. The valence quarks are assumed to fly through. This picture was later converted into a 3--fluid hydrodynamical model by Katscher and collaborators \cite{Ka:93}. This leads us to the generalization of the Landau model \cite{Ga:4}, in which only the energy of the produced particles (the inelastic energy) is assumed to be thermalized in the early stage volume. In the case of A+A collisions a fraction of the inelastic energy (entropy) can be absorbed by the baryonic fluids in the late stage of the expansion. Thus the final state inelastic energy (entropy) should be corrected for this effect, as discussed in Refs. \cite{Ga:4,Ga:5}. {\bf Entropy Production.} A crucial assumption, which allows us to infer the properties of the matter at the early stage, is that the entropy is produced only during the very first non--equilibrium stage of the collision and remains constant during the expansion, hadronization and freeze--out stages. Isentropic expansion of strongly interacting matter was first postulated by Landau \cite{La:53} on the basis of qualitative arguments: at very high energy densities one expects a mean free path much shorter than the size of the system. The influence of hadronization on the entropy content depends on the nature of the hadronization process, which still remains unclear \cite{Cs:95, Ra:96}. Recent studies indicate that the entropy seems to be only weakly affected in the freeze--out stage \cite{So:92,Oc:96}. {\bf Strangeness Production.} In the model it is assumed that the strangeness reaches its equilibrium value at the early stage and remains unchanged during expansion, hadronization and freeze--out. 
Equilibration of strangeness during the high temperature stage of the expansion may be expected in the case of QGP creation, because the estimated strangeness equilibration time is comparable to the lifetime of the QGP \cite{Ra:96}. Note that in the equilibrium QGP, due to the low mass of the strange quarks, isentropic expansion implies expansion with approximately constant strangeness content. For this reason it is not important for the final results at which stage of the QGP evolution the equilibration of strangeness takes place. The validity of the assumption that the strangeness remains constant during hadronization depends (as in the case of the entropy) on the nature of this process. The production of strangeness at the freeze--out stage can be neglected due to the relatively large mass of strange hadrons and the requirement of strangeness conservation. \section{Summary and Conclusions} The experimental data on pion and strangeness production indicate:\\ -- saturation of pion and strangeness production with the number of participant nucleons,\\ -- a change in the collision energy dependence taking place between 15 A$\cdot$GeV/c and 160 A$\cdot$GeV/c.\\ Within a statistical approach the observed behaviour can be qualitatively understood as due to:\\ -- equilibration of entropy and strangeness in collisions of heavy nuclei,\\ -- a transition to a QGP occurring between AGS and SPS energies, associated with an increase of the effective number of degrees of freedom.\\ \noindent These observations already hold for central S+S collisions; they are not unique to central Pb+Pb collisions. A non--monotonic collision energy dependence of the {\it strangeness/pion} ratio is expected in the transition energy region. The results at SPS energies are in surprisingly good agreement with the calculations performed within the generalized Landau model. 
The analysis suggests that the plasma created at the SPS may have a temperature of about 280 MeV (an energy density of about 10 GeV/fm$^3$). Experimental studies of central Pb+Pb collisions in the energy range 20--160 A$\cdot$GeV are {\bf urgently} needed in order to localize the threshold energy more precisely and to study the properties of the QCD phase transition. {\it Acknowledgements.} I would like to thank Christian Bormann for his comments on the manuscript. \section*{References}
"Living Type" is the third single released from Powderfinger's second album Double Allergic. The single was released on 11 November 1996. The song, written by the band's lead singer, Bernard Fanning, concerns the victims of Charles Manson. The song was well received by the public, who voted it onto the Triple J Hottest 100, 1996; this was the first time Powderfinger had appeared on the chart.

History

"Living Type" was released with one b-side track, dubbed "Other Delicacies". The track consisted of six 90-second blocks of music with instrumental backgrounds. Guitarist Darren Middleton explained that the single would only have two tracks listed, and, referring to "Other Delicacies", said that "all the songs are recorded in a block so you can't skip them, you have to listen to the whole [single]". "Living Type" was first performed live by Powderfinger whilst opening for You Am I on their "Uptight Express Tour". The shows were considered highly successful; much more so than Powderfinger's live performances with American heavy metal band Pantera, which the band found to be problematic.

Song meanings

Whilst there was some speculation that "Living Type" was "a cheesy, angsty love song", Powderfinger's lead singer, Bernard Fanning, who wrote the song, said it was actually about Charles Manson and the people affected by his cult. Fanning said that he was unable to control what people thought about the song, and so didn't try, but that the "cheesy love song" it had been dubbed was "not the intention" of the song. In a subsequent interview, Fanning told Juice that people had been asking him if "Living Type" was about menstruation, which he dismissed as being "so stupid". However, he also reiterated that he was unable to control interpretations of the song, saying that "people have a freedom to think what they like". 
Music video

The music video for "Living Type" was directed by David Barker, and filmed in Harrisville, near Powderfinger's home town, Brisbane. It was proclaimed the band's "most lavish visual work to date" by numerous commentators. The video tells the story of Squinty B Jones (played by Fanning), a man on the run from forces unknown (he is likely running from a psychiatric ward, which would tie in with the song's background). The video was praised as "dripping with Australiana." When asked which Powderfinger videos "had been crap", drummer Jon Coghill jokingly said that they all had, before stating that he liked the videos for "Living Type" and "Good-Day Ray", and that he hadn't always understood the "vibe" of the band's other videos.

Response

"Living Type" was received favourably by the public, who voted it onto the Triple J Hottest 100, 1996. This was the first time Powderfinger had appeared on the chart, and "Pick You Up", another single from Double Allergic, came in at #6 on the chart.

Track listing

"Living Type"
"Other Delicacies" ("Entrees", "Mains", "Dessert")

Charts

References

Powderfinger songs
1996 singles
Cultural depictions of Charles Manson
1996 songs
Polydor Records singles
Songs written by Jon Coghill
Songs written by John Collins (Australian musician)
Songs written by Bernard Fanning
Songs written by Ian Haug
Songs written by Darren Middleton