The Old Farmer On The Frontier Lost His Horse – An Old Chinese Story Written by Anita Hummel in China Connections, Chinese Folklore & Proverbs Translated From Chinese By Dr. Arthur William Hummel An Old Chinese Story From More Than 2,000 Years Ago. Written in the book of Huai Nan-Tzu, who died in 122 B.C. An Old Chinese Story – The Farmer and His Lost Horse An old farmer who lived on the northern frontier of China, on the border of Mongolia, lost his horse. It had wandered into the desert, into no-man's land. When his neighbors heard of it, they came to his house to commiserate with him. But all he would say was, "You never can tell; it may turn out to be a good thing after all." Some months later, that horse came back, bringing with him a much finer horse. Thereupon all the farmer's neighbors came in to congratulate him on his good luck. But again all that the farmer would say was, "You never can tell; it may not be such a good thing after all." Since the family now had an extra good horse, the farmer's son took a fancy to riding the new one. Before long, however, the son fell from his mount and broke his leg. Once again, the neighbors flocked in to express their sympathy for the farmer's bad luck. Even so, all that the farmer would say was, "You never can tell; this may turn out to be a good thing, after all." Sure enough, before a year had gone by, fierce horsemen from the desert came plundering across the frontier. They came in such numbers that the authorities drafted for military service every able-bodied young man who could draw a bow or march in battle. Nine out of ten young men in that region lost their lives. Only the fact that the son was lame and the father was old preserved the family from harm. Who Was the Chinese Scholar Huai Nan Zi, also known as Liu An? Huai Nan Zi, also known as Liu An, was a prominent Daoist philosopher, Chinese nobleman, and scholar. He was also known as Master Huai Nan. 
He was born in Jiangsu Province, China, and is known to have died in 122 B.C. In the 3rd and 4th centuries, many of Huai Nan Zi's writings and thoughts resurged in importance. In fact, for a period of over 700 years, many of Huai Nan Zi's Daoist thoughts and writings were considered important; he was considered one of the most important Daoist writers. Huai Nan Zi, or Liu An, came from a prominent Chinese family. He was the grandson of Gaozu, the founder of the Western Han Dynasty, and was a cousin of the reigning Chinese emperor. As part of his royal lineage, he inherited a kingship and was granted the fief of Huai Nan. The fief, or area he ruled, is in modern-day north-central Anhui Province, China. Under his rule, Huai Nan Zi was a patron of the arts and sciences, and many talented people flocked to his court. It was also under his patronage that the classic Huai Nan Zi, also known as Huai Nan Hong Lie, was written. Huai Nan Zi was implicated in a plot against the Imperial Chinese throne; to avoid punishment by death, he committed suicide in 122 B.C. What is Eminent Chinese Of The Ch'ing (Qing) Period (1943), by Dr. Arthur W. Hummel? The editor of the book Eminent Chinese of the Ch'ing Period is my grandfather, Dr. Arthur W. Hummel. He did not work alone, however; he had the help of two very accomplished Chinese scholars, Dr. Chao Ying Fang and Dr. Tu Lien Che Fang. Together, as a team, they spent over 9 years and thousands of hours compiling this very important work about eminent people during the Chinese Qing (Ch'ing) dynasty. This book continues to be used and remains an important Chinese scholarly work. There is also an online version available. You can read more about this and Dr. Arthur William Hummel by reading our blog Eminent Chinese Of The Ch'ing (Qing) Period (1943), Arthur W. Hummel by clicking here. What Are Some Chinese Proverbs About Life and Living? My grandparents Arthur and Ruth Hummel lived in China in the early 1900s. 
During this time they started to collect and translate Chinese proverbs. We have over 175 of their Chinese proverbs, categorized by the subjects they set for them. You can discover more Chinese proverbs by reading our blog Over 175 Inspirational Chinese Proverbs On Life And Living by clicking here.
/*
 * Minio Cloud Storage, (C) 2015, 2016 Minio, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package cmd

import "go/build"

// DO NOT EDIT THIS FILE DIRECTLY. These are build-time constants
// set through 'buildscripts/gen-ldflags.go'.
var (
	// GOPATH - GOPATH value at the time of build.
	GOPATH = build.Default.GOPATH

	// goGetTag - Go get development tag.
	goGetTag = "DEVELOPMENT.GOGET"

	// Version - version time.RFC3339.
	Version = goGetTag

	// ReleaseTag - release tag in TAG.%Y-%m-%dT%H-%M-%SZ.
	ReleaseTag = goGetTag

	// CommitID - latest commit id.
	CommitID = goGetTag

	// ShortCommitID - first 12 characters from CommitID.
	ShortCommitID = CommitID[:12]
)
Q: Get query strings as an object after a url with hash

So I have a url with this format:

https://my-app.com/my-route/someOtherRoute#register?param1="122"&param2="333"

I know how to get the query strings for a normal url, but I was not able to get the query strings which come after #. I'm using node-url and I did this so far:

import * as urlTool from 'url';

const url = `https://my-app.com/my-route/someOtherRoute#register?param1="122"&param2="333"`;
const parsedUrl = urlTool.parse(url, true);
const { pathname, hash } = parsedUrl;

So up to now, my hash has this value: #register?param1="122"&param2="333". But how can I get the query strings in a dynamic way? The query strings may or may not be there all the time, and I don't know their names either. How can I get any query strings which may come after the # in a url?

A: You can use a split and Object.fromEntries with URLSearchParams to extract the query parameters into an object:

const url = `https://my-app.com/my-route/someOtherRoute#register?param1="122"&param2="333"`;
const [hash, query] = url.split('#')[1].split('?');
const params = Object.fromEntries(new URLSearchParams(query));
console.log(hash);
console.log(params);

A: Using SearchParams:

var url = `https://my-app.com/my-route/someOtherRoute#register?param1="122"&param2="333"`;
console.log(new URL(`https://1.com?${url.split("?")[1]}`).searchParams.get("param1"));

Building an object using String#split and Array#reduce:

var url = `https://my-app.com/my-route/someOtherRoute#register?param1="122"&param2="333"`;
console.log(url.split("?")[1].split("&").reduce(function(result, param) {
  var [key, value] = param.split("=");
  result[key] = value;
  return result;
}, {}));

Thought it would be safer to write something like this:

function getParamsAfterHash(url) {
  if (typeof url !== "string" || !url) url = location.href;
  url = url.split("#")[1];
  if (!url) return {};
  url = url.split("?")[1];
  if (!url) return {};
  return url.split("&").reduce(function(result, param) {
    var [key, value] = param.split("=");
    result[key] = value;
    return result;
  }, {});
}
console.log(getParamsAfterHash(`https://my-app.com/my-route/someOtherRoute#register?param1="122"&param2="333"`));
Q: DocBook & R with Sweave/knitr R and LaTeX (...) can be easily combined using Sweave or knitr. Is there any possibility to accomplish the same with DocBook? A: Isn't the answer to every "how do I convert markup format A to markup format B?" question simply "pandoc"? Not sure of the exact workflow, but you'd write R-flavour markdown, knit it, then pandoc to convert to docbook. Writing a new O'Reilly book? I know they luuurrve the docbook! A: Knitr and Sweave do not provide DocBook as a backend. Googling for r docbook did not yield much, other than a question on R-help in 2008 that asked exactly what you did. You could have a look at how knitr uses markdown, and port those ideas to docbook. This is probably doable, but a lot of work.
Q: Error: could not convert '()' from '' to 'struct'

template <typename T, unsigned int S>
class Vec {
    T data[S];
public:
    constexpr Vec(const T& s) : data{s} {}
};

template <typename T, unsigned int Rows, unsigned int Cols>
class Mat {
    Vec<T, Cols> data[Rows];
public:
    constexpr Mat(const T& s) : data{Vec<T, Cols>(s)} {}
};

int main() {
    constexpr Mat<double, 2, 2> m{1.0};
    return 0;
}

This code gives me the following error:

source/main.cpp:24:25: error: could not convert '<brace-enclosed initializer list>()' from '<brace-enclosed initializer list>' to 'Vec<double, 2>'
    : data{Vec<T, Cols>(s)} {}
                        ^

Can anybody tell me what this error means, and how can I fix it? I have never encountered this error before. I'm using GNU Arm Embedded Toolchain 8.2.1 and g++ -std=c++17 -O3 as arguments.

A: Rows is 2, so the size of the array Vec<T, Cols> data[Rows]; is 2, but data is initialized by only one item:

: data{Vec<T, Cols>(s)} {}  // initializer has only one element

Because you provided a user-defined constructor with

constexpr Vec(const T& s) : data{s} {}

the default constructor of Vec is deleted, and the second item in data cannot be constructed. Add a default ctor:

constexpr Vec() : data{} {}

A: The problem I had was assuming that array initialization with a single element initializes the whole array instead of only the first element. As @aschepler suggested, using integer sequences fixes the compiler error:

#include <array>
#include <utility>

template <typename T, unsigned int S>
class Vec {
    std::array<T, S> data;
public:
    constexpr Vec(const T& s)
        : Vec(s, std::make_integer_sequence<unsigned int, S>{}) {}
private:
    template <unsigned int... Seq>
    constexpr Vec(const T& s, std::integer_sequence<unsigned int, Seq...>)
        : data{(static_cast<void>(Seq), s)...} {}
};

template <typename T, unsigned int Rows, unsigned int Cols>
class Mat {
    std::array<Vec<T, Cols>, Rows> data;
public:
    constexpr Mat(const T& s)
        : Mat(s, std::make_integer_sequence<unsigned int, Rows>{}) {}
private:
    template <unsigned int... Seq>
    constexpr Mat(const T& s, std::integer_sequence<unsigned int, Seq...>)
        : data{(static_cast<void>(Seq), Vec<T, Cols>(s))...} {}
};
David Patrick "Dave" Hannan (born November 26, 1961 in Greater Sudbury, Ontario) is a former Canadian ice hockey player who, during his active career from 1977 to 1997, played in the National Hockey League for the Pittsburgh Penguins, Edmonton Oilers, Toronto Maple Leafs, Buffalo Sabres, Colorado Avalanche and Ottawa Senators, among others. Career Dave Hannan began his career in the Canadian junior league Ontario Hockey Association, where he played for the Windsor Spitfires, Sault Ste. Marie Greyhounds and Brantford Alexanders from 1977 to 1980. He then played one year for Brantford in the OHA's successor league, the Ontario Hockey League. Following his junior career, the winger was selected by the Pittsburgh Penguins in the tenth round of the 1981 NHL Entry Draft, 196th overall, and played for them in the National Hockey League from 1981 to 1987. During his first four seasons he also appeared regularly for Pittsburgh's farm teams, the Erie Blades and Baltimore Skipjacks, in the American Hockey League. On November 24, 1987, he was traded to the Edmonton Oilers together with Craig Simpson, Moe Mantha and Chris Joseph in exchange for Paul Coffey, Dave Hunter and Wayne Van Dorp. With the Oilers he immediately won the prestigious Stanley Cup in the 1987/88 season, contributing a total of 22 points, including ten goals, in 63 games. For the 1988/89 season Hannan returned to his former club, the Pittsburgh Penguins. He then spent three years with the storied Canadian franchise Toronto Maple Leafs, who traded him to the Buffalo Sabres in March 1992 in exchange for a fifth-round pick in the 1992 NHL Entry Draft. He spent four seasons there before being traded to the Colorado Avalanche on March 20, 1996, shortly before the trade deadline, in exchange for a sixth-round pick in the 1996 NHL Entry Draft. With them he won the Stanley Cup for the second time in his career at the end of the season, having scored one goal and added two assists in 17 games for the Avalanche. As a free agent, the Canadian signed a contract with the Ottawa Senators on September 13, 1996, where he ended his career at the close of the 1996/97 season at the age of 35. International Hannan represented Canada at the 1992 Winter Olympics in Albertville, scoring three goals and adding five assists in eight games. Before that, he made three exhibition appearances for the national team during Olympic preparation. Achievements and awards: 1988 Stanley Cup win with the Edmonton Oilers; 1996 Stanley Cup win with the Colorado Avalanche.
Q: what's the best way to learn C++ and Qt at the same time? Hi, I did a Google search and couldn't find anything, so I want to learn Qt/C++. My university (I'm a first-year CompSci student) won't be teaching C++ next year, which is a big disappointment. I already know Python and have dabbled in LaTeX, Javascript, and C++. I'm currently helping out a free software project, Clementine, but it's programmed in Qt/C++ and I don't know enough of either to help out much. Are there any tips, tutorials, or howtos out there? A: Don't learn both at the same time. Learn C++, then learn Qt. Grab a book about C++, then a book about Qt; there is no substitute for a good book. Trying to learn C++ and Qt at the same time is like trying to learn the alphabet while reading Shakespeare. A: I don't think it's impossible to learn both at the same time. Since you already know Python, try learning some basic Qt concepts by implementing something simple in PyQt or PySide. Learn C++ by writing simple console programs. Once you've mastered C++, try doing the same stuff with Qt using C++. It's a fact that you'll get a lot more help and code examples from people using C++, so you shouldn't just be satisfied with using Python to implement Qt programs. A: From Qt's website: How to learn Qt and Qt tutorials. Since you're familiar with programming already, the Best Practices might be of interest too.
4 × 100 meter relay, in athletics 4 × 100 meter freestyle relay, in swimming
\section{\textbf{Basis, Motivation and Introduction }} In general, it is not easy to build the true epidemic growth curve in a timely fashion for any newly emerging epidemic, and the chances of building the true growth picture worsen with poor disease reporting. As we know, preparedness for an epidemic spread is a primary public health concern for any health department. Epidemic reporting of a disease is a fundamental event in understanding two key parameters in epidemiology, namely, epidemic diffusion within a population and growth at a population level. Normally, for a real-time epidemic, the reporting of cases is rarely complete, especially if the epidemic is new or symptoms are unknown or symptoms are yet to be discovered. For viruses with shorter incubation periods without virus shedding during the incubation period, any delay in reporting or lack of reporting could lead to a severe epidemic due to the absence of controlling measures. For example, for Ebola, the average incubation period is between 2 and 21 days, and an individual diagnosed with the Ebola virus does not spread the virus to others during this period. Suppose some of these individuals with Ebola are not diagnosed (and hence not reported to the health care facilities); then, after 21 days, these individuals will (unknowingly) spread Ebola to others. There are other viruses whose incubation period is short but which are contagious during this period, for example influenza. Even for epidemics with established symptoms, the reporting could be nowhere close to complete, and the impact of reporting on epidemic surveillance can then be theoretically measured (Rao, 2012). In this study, we attempt a classical problem in epidemic reporting within a novel framework based on harmonic analysis principles. This study develops methodologies for constructing complete data from partial data using wavelets. 
\section{\textbf{Fundamental questions }} We are raising here very fundamental questions in epidemic reporting. For example, does epidemic case reporting over time follow any pattern? Or, in any particular situation, does an epidemic reporting pattern have anything to do with the actual epidemic wave? The actual epidemic, or true epidemic wave, is the number of all cases (reported and not reported) as a function of time. Is there any strong or weak association between ``epidemic reporting patterns'' and ``epidemic waves'' in general? Suppose we cannot generalize such an association for every epidemic; will there then be any such association for a particular epidemic? In any epidemic, we can hardly observe the actual (true) epidemic wave, and what we construct as a wave is mostly based on reported numbers of disease cases. The central questions in which we are interested can be summarized as follows: \begin{enumerate} \item[{\bf (i)}] How far does the reporting of an epidemic help us in the accurate prediction of the epidemic, especially if it is an emerging epidemic? And how far can such an association be clarified (where it otherwise is not) using methods of harmonic analysis? \item[{\bf (ii)}] It is seldom that the cases generated in a population are completely detected, so the question that remains unanswered most of the time in a newly emerging epidemic is: will there be any way to back-calculate and reproduce these numbers lost in detection and, if so, to what extent can we accurately reconstruct the epidemic growth (before control measures are implemented)? \end{enumerate} There are other related questions, but first we want to look through the lens of wavelet/harmonic/PDE analysis, because we believe some useful light could be unearthed this way. Hence, it is always challenging to construct true epidemic waves, because population vaccination and control policies depend on understanding the true ground-level reality of disease cases. 
\section{\textbf{Wavelets}} In the past thirty-five years a new branch of harmonic analysis called \textit{wavelet theory} has developed\textemdash see (Meyer/Ryan, 1993), (Meyer, 1998), (Hernandez/Weiss, 1996), (Walker, 1997), (Strichartz, 1993), (Labate/Weiss/Wilson, 2013). Largely based on the ideas of Yves Meyer, wavelet theory replaces the traditional Fourier basis of sine functions and cosine functions with a more flexible and adaptable basis of wavelets. The advantages of wavelets are these: \begin{enumerate} \item[{\bf (a)}] The wavelet expansion of a function can be localized both in the time and the space variable; \item[{\bf (b)}] The wavelet expansion can be customized to particular applications; \item[{\bf (c)}] Because of \textbf{(b)}, the wavelet expansion is more accurate than the traditional Fourier expansion and also more rapidly convergent. \end{enumerate} Wavelet theory has revolutionized the theory of image compression, the theory of signal processing, and the theory of partial differential equations. It will be a powerful new tool in the study of epidemiology, particularly in the analysis of epidemic growth curves. \section{\textbf{Theoretical Strategy}} First we propose to build the true wave of an epidemic (through some harmonic analysis set-up and assumptions), which is otherwise not directly known. Then, by assuming that a fraction of this constructed wave was reported out of the true wave, we will determine how an observed wave appears. These fractions are variables, so we will have several patterns of waves representing one true epidemic wave. We will have to draw conclusions about which of these representations is an ideal candidate for building a true epidemic. There will be some noise in our modeling of the epidemic curve, and we will use noise reduction techniques before finalizing a pattern (here noise could arise due to reporting error in the disease data). 
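The localization and exact-reconstruction properties listed in (a)-(c) can be illustrated with the simplest wavelet, the Haar wavelet (used here only as a stand-in for the more sophisticated Meyer wavelets appearing later in this paper). The sketch below, with purely hypothetical weekly case counts, performs one level of Haar analysis into a trend and a detail series and then inverts it exactly:

```python
# Minimal sketch: one level of a Haar wavelet decomposition of a
# hypothetical weekly case-count series.  The Haar wavelet is chosen
# only for simplicity of illustration.
import math

def haar_step(signal):
    """One level of the Haar transform: local averages (trend) and
    local differences (detail).  Length of `signal` must be even."""
    assert len(signal) % 2 == 0
    trend = [(signal[i] + signal[i + 1]) / math.sqrt(2)
             for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return trend, detail

def haar_inverse(trend, detail):
    """Exactly invert one Haar step."""
    signal = []
    for t, d in zip(trend, detail):
        signal.append((t + d) / math.sqrt(2))
        signal.append((t - d) / math.sqrt(2))
    return signal

cases = [3, 5, 9, 14, 20, 18, 11, 6]   # hypothetical weekly counts
trend, detail = haar_step(cases)
reconstructed = haar_inverse(trend, detail)
print([round(x, 6) for x in reconstructed])   # recovers the original series
```

Note how each detail coefficient depends only on two adjacent weeks, which is the time localization referred to in (a); suppressing small detail coefficients is the basic noise-reduction step alluded to above.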
Suppose an epidemic wave was observed within a time interval $[t_{0},t_{n}]$, where $t_{n}-t_{0}$ could be in weeks, months, years, etc. Suppose $[t_{0},t_{n}]$ is partitioned into a set $S$ of sub-intervals $\left\{ \left[t_{0},t_{1}\right],\left(t_{1},t_{2}\right],\cdots,\left(t_{n-1},t_{n}\right]\right\} ,$ where $t_{i}-t_{i-1}$ could be in days or weeks depending upon the situation. Let $a_{i}$ and $b_{i}$ be the number of cases reported and the number of cases that occurred but were not reported, respectively, within the interval $\left[t_{i-1},t_{i}\right]$ for $i=1,2,\cdots,n.$ Let $f$ be the function whose domain is the set of time intervals $\left\{ \left[t_{i-1},t_{i}\right]\mid\forall i\right\} $ and whose range is the set $T=\left\{ a_{i}+b_{i}\mid i=1,2,...,n\right\} $ (see Figure \ref{Figure1ab}). Here $f$ need not be a one-to-one function, because two time intervals within $S$ could have the same number of epidemic cases. Let $f_{1}$ be the function defined as $f_{1}:\left\{ \left[t_{i-1},t_{i}\right]\mid i=1,2,...,n\right\} \rightarrow A,$ where $A=\left\{ a_{i}\mid i=1,2,...,n\right\} $ (see Figure \ref{Functions of true and fractional}). We call $f_{1}$ a fractional function of $f.$ The reason we call this function a fractional function is that it maps each time interval into the corresponding number of reported cases in that interval. The total fraction of reported cases $\Sigma_{i}a_{i}/\Sigma_{i}(a_{i}+b_{i})\in[0,1]$ during $[t_{0},t_{n}]$ is distributed over the $n$ time intervals, whereas \[ \sum_{i}\frac{a_{i}}{a_{i}+b_{i}}=\left\{ \begin{array}{c} n\text{ if all disease cases are reported during \ensuremath{[t_{0},t_{n}]} }\\ <n\text{ if in any one interval in \ensuremath{S} there is under-reporting of disease cases } \end{array}\right. \] Given that $f_{1}$ is known, the question will be whether we can estimate (or speculate on) $f$. Once we are able to estimate some form of $f$, how can we then test the accuracy of the form(s) obtained? 
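The definitions above can be made concrete with a small numerical sketch. The counts $a_{i}$ and $b_{i}$ below are purely hypothetical, chosen only to illustrate the fractional function $f_{1}$ and the two reporting-fraction quantities just defined:

```python
# Hypothetical per-interval counts: a_i reported, b_i occurred but
# not reported, over n = 6 time intervals.
a = [4, 10, 25, 40, 22, 9]   # reported cases per interval (hypothetical)
b = [2, 5, 15, 20, 8, 1]     # unreported cases per interval (hypothetical)

f_vals  = [ai + bi for ai, bi in zip(a, b)]   # true wave values a_i + b_i
f1_vals = list(a)                              # fractional function: a_i only

# Overall reported fraction during [t_0, t_n], a number in [0, 1]:
total_fraction = sum(a) / sum(f_vals)

# Per-interval fractions; their sum equals n only under complete reporting:
n = len(a)
per_interval_sum = sum(ai / (ai + bi) for ai, bi in zip(a, b))

print(f_vals, f1_vals)
print(total_fraction, per_interval_sum < n)   # under-reporting => sum < n
```

With any nonzero $b_{i}$, the per-interval sum falls strictly below $n$, matching the displayed case distinction above.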
We could define another fractional function $f_{2}$ as $f_{2}:\left\{ \left[t_{i-1},t_{i}\right]\mid i=1,2,...,n\right\} \rightarrow\frac{A}{T},$ where $\frac{A}{T}=\left\{ \frac{a_{i}}{a_{i}+b_{i}}\mid i=1,2,...,n\right\} ,$ and one could attempt (to develop techniques) to estimate (or speculate on) $f$ from $f_{2}.$ In Figure \ref{Functions of true and fractional}, the fractional epidemic wave pattern does not fully describe the true epidemic wave pattern. From the fractional epidemic wave alone, it is not easy to infer the true epidemic wave pattern. Additional information on the $b_{i}$ values is needed for better prospects of speculating on $f.$ \begin{figure} \includegraphics{Figure1a.eps} $ $ \includegraphics{Figure1b.eps} \caption{\label{Figure1ab}a) True epidemic wave and b) fractional epidemic wave. The $a_{i}$ values are part of the $a_{i}+b_{i}$ values for each $i.$ We have newly introduced the phrase \emph{fractional epidemic waves} in this work. We use this fractional epidemic wave concept, together with other ideas explained in this paper, to develop new ideas related to \emph{fractional wavelets}. In a sense, fractional wavelets represent fractions of an overall wavelet. This Figure serves as a foundational concept linking the idea of fractional reporting waves with reporting errors. } \end{figure} \begin{figure} \includegraphics[scale=0.6]{Figure2.eps} \caption{\label{Functions of true and fractional}Functions of true and fractional epidemic waves based on reported and actual time series epidemic data. } \end{figure} \begin{figure} \includegraphics[scale=0.6]{waveletfromsampledpoint.eps} \caption{\label{fig:Wavelets-constructed-fromsampleddata}Wavelets constructed from sampled reported data with supports. Black points on the wavelets represent sampled points (of total reported cases). Each wavelet is constructed with the pairs of information $\left\{ a_{i},supp(a_{i})\right\} $ available. 
One of the key technical features we are proposing through this Figure is to construct wavelets within each interval to quantify the level of reported cases out of actual cases. } \end{figure} \begin{figure} \includegraphics[scale=0.6]{reportednoreportedcases.eps} \caption{\label{fig: sampled cases}The total reported cases within a time interval could be formed from sampled cases out of the total cases. We drew graphs using black filled circles in this Figure to represent total reported cases in a few of the situations out of all possible disease reporting patterns. The sampled point of reported cases represents the total reported cases at each time interval; hence the size of all graphs at each interval was kept the same. Similarly, the sizes of graphs between different time intervals are kept different for demonstration purposes only; the actual reported cases between different time intervals could be constant or not. } \end{figure} \begin{figure} \includegraphics[scale=0.8]{convergencegraph.eps} \caption{\label{fig:Convergence-of-graph}Convergence of the graph at a sampled point to the graph at complete reporting, i.e., from the sampled number of reported cases to the evolution of the actual reported cases. This situation arises due to improved epidemic surveillance. } \end{figure} \begin{figure} \includegraphics[scale=0.7]{waveletgraphs.eps} \caption{\label{waveletgrapgh}Evolution of reporting of epidemic cases and returning to the recovered stage} \end{figure} \subsection{Generating wavelets from sampled epidemic data} Let us consider the availability of data on reported cases as per Figure \ref{Figure1ab}. Suppose each point on the y-axis is considered a sampled point (out of many sets of plausible reported cases at that point). 
We call it a sampled point because we are not sure that the point we obtain in the time intervals $[t_{0},t_{1}]$ and $(t_{i-1},t_{i}]$ for $i=2,3,...,n$ for the reported cases represents a true epidemic curve; this was one of the main assumptions in this paper. The total number of reported cases within a time interval is a combination of the cases that were reported to the public health system (see Figure \ref{fig: sampled cases}). When we know the total number of reported cases within a time interval, this number could be the resultant of one of several combinations of cases reported, as shown in Figure \ref{fig: sampled cases}. Within each interval the combination of cases reported is unknown, but the total reported cases out of the actual disease cases within each interval are fixed (because we take a single point of reference for the total reported cases within each interval). These reported cases are the $a_{i}$'s in Figure \ref{Figure1ab}. Given that there exists a sampled point within each of the time intervals, and with some support for each of the $a_{i}$ (say, $supp(a_{i})$), we will construct a wavelet for each time interval $[t_{0},t_{1}]$ and $(t_{i-1},t_{i}]$ for $i=2,3,...,n$. By a sampled point at a time interval we mean the final combination of cases reported (out of actual cases) that was taken as the final reported number for that interval. How we decide the support is not clear right now, but we will use the data that was used to obtain the sampled point $a_{i}$. Here a sampled point does not necessarily mean a statistically sampled point. In Figure \ref{fig: sampled cases}, for the interval $[t_{0},t_{1}]$, connecting each of the black circles (vertices) within each square forms a graph. Although the sizes of each of these graphs are the same, namely $7$, their shapes are different, and the sampled point is $7.$ The sampled point cannot easily be used to represent the shape of the graph unless the location of each node (in this case a physical address or geographical location of each node) is known. One way to construct the support could be from the graph associated with each sampled point. Using the pairs of information $\left\{ a_{i},supp(a_{i})\right\} $ we will construct wavelets as shown in Figure \ref{fig:Wavelets-constructed-fromsampleddata}. The sampled points within each interval $[t_{0},t_{1}]$ and $(t_{i-1},t_{i}]$ for $i=2,3,...,n$ give the number of reported cases, with the support constructed from the graphs within each of these intervals. Within each square or rectangle in Figure \ref{fig: sampled cases}, if the size of a graph increases to the maximum possible size (i.e., when all cases are reported), then the information available to construct the corresponding support increases. Let $G_{i}$ be the graph corresponding to a sampled point $a_{i}$ for the interval $(t_{i-1},t_{i}]$, and let $G_{i}^{c}$ be the graph when all possible reported cases are reported; then $G_{i}\rightarrow G_{i}^{c}$ ($G_{i}$ converges to $G_{i}^{c}$) for all $i$. In $G_{i}^{c}$ the vertices are the actual disease cases and edges are connected between the closest vertices. In reality, $G_{i}^{c}$ is not possible to draw, because we would not be able to observe all cases. It is challenging to understand beforehand what fraction of the size of $G_{i}^{c}$ the size of $G_{i}$ would be, and this guess could give the speed of convergence from $G_{i}$ to $G_{i}^{c}.$ The actual number of time steps taken from $G_{i}$ to $G_{i}^{c}$ for each $i$ is not constant. We assume there will be a finite number of time steps to reach $G_{i}^{c}$ from $G_{i}.$ Usually $c$ is not constant either, because the error rates vary. 
So, we let $c_{0}$ corresponds to complete reporting at $t_{0},$ $c_{1}$ at $t_{1},$ and so on for $c_{n}$ at $t_{n}.$ Let at $t_{0}$ the graph be $G_{i},$ at $t_{1}$ the graph be $G_{i}^{t_{1}}$ and so on. The corresponding sizes of graphs be $\left|E_{i}^{t_{j}}\right|$ for $i=1,2,...,n$ and $j=1,2,...,c_{i},$ and \[ \left|E_{i}^{t_{0}}\right|<\left|E_{i}^{t_{1}}\right|<...<\left|E_{i}^{t_{c_{i}}}\right|\:\text{for each }i. \] But the inequality, \[ \left|E_{i}^{t_{j}}\right|<\left|E_{l}^{t_{j+1}}\right|\text{ for some \ensuremath{i\neq l} and }l=1,2,...,n \] need not hold. The explanation for these inequalities is as follows: graphs within each time interval could converge toward actual disease cases but the size of the graph across various time intervals need not follow any monotonic property because degrees of error in reported cases could vary over time. Let $G_{i}$ is represented by $(V_{i},E_{i})$ and $G_{i}^{c}$ is represented by $(V_{i}^{c},E_{i}^{c})$ and as the reporting of diseases cases improves the values of $(V_{i},E_{i})$ increases such that they become exactly $(V_{i}^{c},E_{i}^{c})$ which we denote here as $G_{i}\rightarrow G_{i}^{c}$. See Figure \ref{fig:Convergence-of-graph}. We define $\left|E_{i}^{t_{j}}\right|\text{ for }i=1,2,...,n$ as local steady-state values and $\max_{i}\left(\left|E_{i}^{t_{j}}\right|\right)$ as global steady-state value. \begin{figure} \includegraphics[scale=0.6]{Reportedcasedistribution.eps} \caption{\label{fig:Distribution-of-reported}Distribution of reported cases into present time interval and to past time intervals. Reported cases found in a time interval in the column are distributed into respective bins of a time interval as shown through arrows. 
} \end{figure} \begin{figure} \includegraphics[scale=0.6]{meyerwavelets.eps} \caption{\label{fig:Meyer-wavelets}Meyer wavelets of order 3 on the equally spaced intervals (a) {[}-4,4{]}, (b) {[}0,6{]}, (c) {[}-20,20{]}, (d) {[}0,3{]}, and (e) of order 10 on {[}-2.5,2.5{]}. } \end{figure} \begin{prop} \label{proposition1}The size of each graph within $[t_{i-1},t_{i})$ could reach a local steady-state, and the global steady-state is equal to one of the local steady-states. \end{prop} For each $i,$ $\left|E_{i}^{t_{0}}\right|$ and $G_{i}$ are associated with reported cases. If, for some $i,$ $\left|E_{i}^{t_{0}}\right|=\left|E_{i}^{t_{c_{i}}}\right|,$ then $G_{i}$ and $G_{i}^{t_{c_{i}}}$ are identical; this situation corresponds to complete reporting of disease cases, and the local steady-state for this $i$ is attained at $t_{0}.$ \begin{rem} \label{remark2}If $\left|E_{i}^{t_{0}}\right|=\left|E_{i}^{t_{c_{i}}}\right|$ for each $i$ and $\max_{i}\left(\left|E_{i}^{t_{j}}\right|\right)=\left|E_{i}^{t_{0}}\right|,$ then the global steady-state is also attained at $t_{0}.$ If $\left|E_{i}^{t_{0}}\right|\neq\left|E_{i}^{t_{c_{i}}}\right|$ for all $i,$ then the global steady-state is attained at a time greater than $t_{0}.$ \end{rem} When $\left|E_{i}^{t_{0}}\right|\neq\left|E_{i}^{t_{c_{i}}}\right|$ for each $i$, the global steady-state value could provide some information on the degree of reporting error. If $\left|E_{i}^{t_{0}}\right|=\left|E_{i}^{t_{c_{i}}}\right|$ for some $i$, and by chance the global steady-state occurs at this $i,$ then it would provide no information on the degree of reporting error, because at several other values of $i$ we will have $\left|E_{i}^{t_{0}}\right|<\left|E_{i}^{t_{c_{i}}}\right|$ and the actual total number of epidemic cases exceeds the sampled number.
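The bookkeeping behind Proposition \ref{proposition1} and Remark \ref{remark2} can be sketched directly: given, for each interval $i$, the nondecreasing sequence of graph sizes $\left|E_{i}^{t_{j}}\right|$, we record the local steady-state value, the first time index at which it is attained, and the global steady-state value. This is a minimal illustrative sketch; the list-of-lists input format and the function name are our own assumptions.

```python
def steady_states(edge_counts):
    """edge_counts[i] is the nondecreasing sequence |E_i^{t_0}|, |E_i^{t_1}|, ...
    for interval i.  Returns (local_values, attained_at, global_value): the
    local steady-state value of each interval, the first time index j at which
    that value is reached (j = 0 corresponds to complete reporting from the
    start), and the global steady-state value, which is one of the local ones.
    """
    local_values, attained_at = [], []
    for sizes in edge_counts:
        final = sizes[-1]  # local steady-state value of this interval
        local_values.append(final)
        attained_at.append(next(j for j, s in enumerate(sizes) if s == final))
    return local_values, attained_at, max(local_values)
```

For example, an interval with sizes [3, 3, 3] attains its local steady-state at $t_{0}$, the complete-reporting case of the proposition, while [1, 2, 4] attains it only at the final time index.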
The statements in Proposition \ref{proposition1} and Remark \ref{remark2} above will change when multiple reporting exists in one or more of the time intervals considered. Multiple reporting of cases is usually defined as the reporting of a disease case more than once, treating it as more than one event of disease occurrence. When multiple reporting exists at each $i$, then $\max_{i}\left(\left|E_{i}^{t_{j}}\right|\right)$ need not be the global steady-state value. A mixed situation, where multiple reporting and under-reporting exist simultaneously within the longer time interval $[t_{0},t_{n}]$, is treated separately. With this method, we will develop a series of wavelets. Given the information needed to construct Figure \ref{fig:Convergence-of-graph}(a), and given the rapidity (referred to above as the \emph{support}) with which this graph evolves from Figure \ref{fig:Convergence-of-graph}(a) to Figure \ref{fig:Convergence-of-graph}(d), then, combined with the information stored in Figure \ref{waveletgrapgh}, we can construct Figure \ref{fig:Wavelets-constructed-fromsampleddata}. Within each of the intervals $[t_{0},t_{1}]$ and $(t_{i-1},t_{i}]$ for $i=2,3,...,n$, the information of Figures \ref{fig:Convergence-of-graph}-\ref{waveletgrapgh} will be used to construct a series of wavelets. For example, if some $\Psi(t)$ and some $\Phi(t)$ together describe the epidemic wave of a true epidemic, and if pairs of functions $\{(\Psi_{i}(t),\Phi_{i}(t))\}$ for $i=1,2,...,n$ represent the epidemic waves of fractions of this true epidemic, then one of our central ideas is to determine which of these fractional waves is closest to the true epidemic.
Usually, the data/information to construct a couple of such fractional wavelets could be observed in an emerging epidemic, say $(\Psi_{a}(t),\Phi_{a}(t))$ and $(\Psi_{b}(t),\Phi_{b}(t))$, so the first step is to construct these pairs of wavelets. These fractional wavelets are constructed on partial data (partial in the sense that the observed data on disease cases in an emerging epidemic are not complete). The question we are attempting to answer is: can we predict $(\Psi(t),\Phi(t))$ from one or both of the fractional wavelets? {[}\textbf{Note:} There is no terminology of ``fractional wavelet'' in the literature, but we are calling $(\Psi_{a}(t),\Phi_{a}(t))$ and $(\Psi_{b}(t),\Phi_{b}(t))$ fractional wavelets.{]} For this, let us consider Meyer wavelets, which are readily available and a good first step for explaining our epidemic situation. We define the Meyer wavelet and briefly describe it below. The Meyer wavelet is an orthogonal wavelet created by Yves Meyer. It is a continuous wavelet, and has been applied to the study of adaptive filters, random fields, and multi-fault classification. \begin{defn} The Meyer wavelet is an infinitely differentiable function defined in the frequency domain in terms of an auxiliary function $\nu$ as follows: \[ \hat{\Psi}(\omega)=\begin{cases} \frac{1}{\sqrt{2\pi}}\sin\left(\frac{\pi}{2}\nu\left(\frac{3\left|\omega\right|}{2\pi}-1\right)\right)e^{i\omega/2} & \text{if } 2\pi/3<\left|\omega\right|<4\pi/3,\\ \frac{1}{\sqrt{2\pi}}\cos\left(\frac{\pi}{2}\nu\left(\frac{3\left|\omega\right|}{4\pi}-1\right)\right)e^{i\omega/2} & \text{if } 4\pi/3<\left|\omega\right|<8\pi/3,\\ 0 & \text{otherwise.} \end{cases} \] Here \[ \nu(x)=\begin{cases} 0 & \text{if } x<0,\\ x & \text{if } 0\le x\le 1,\\ 1 & \text{if } x>1. \end{cases} \] There are other possible choices for $\nu.$ The Meyer scaling function is given by \[ \hat{\Phi}(\omega)=\begin{cases} \frac{1}{\sqrt{2\pi}} & \text{if } \left|\omega\right|<2\pi/3,\\ \frac{1}{\sqrt{2\pi}}\cos\left(\frac{\pi}{2}\nu\left(\frac{3\left|\omega\right|}{2\pi}-1\right)\right) & \text{if } 2\pi/3<\left|\omega\right|<4\pi/3,\\ 0 & \text{otherwise.} \end{cases} \] Of course it holds, as usual, that \[ \sum_{k}\left|\hat{\Phi}\left(\omega+2\pi k\right)\right|^{2}=\frac{1}{2\pi} \] and \[ \hat{\Phi}(\omega)=m_{0}(\omega/2)\,\hat{\Phi}(\omega/2) \] for some $2\pi$-periodic $m_{0}.$ Finally, \[ \begin{array}{ccl} \hat{\Psi}(\omega) & = & e^{i\omega/2}\overline{m_{0}(\omega/2+\pi)}\,\hat{\Phi}(\omega/2)\\ & = & e^{i\omega/2}\sum_{k}\overline{\hat{\Phi}\left(\omega+2\pi(2k+1)\right)}\,\hat{\Phi}(\omega/2)\\ & = & e^{i\omega/2}\left(\hat{\Phi}\left(\omega+2\pi\right)+\hat{\Phi}\left(\omega-2\pi\right)\right)\hat{\Phi}(\omega/2). \end{array} \] It turns out that the wavelets \[ \Psi_{j,k}(x)=2^{j/2}\Psi(2^{j}x-k) \] form an orthonormal basis for the square-integrable functions on the real line. \end{defn} One proposition that could be formed is: ``if a wavelet is constructed on partial data of a particular series of events in a population, then this wavelet will not fully match a wavelet constructed from the full data series of all events in the same population.'' Building a measure associated with these two wavelets is interesting, and there could be several such measures based on the level of completeness of the data. Because we are dealing with true versus reported disease cases, this measure (a set of points, each representing a distance between true and observed cases) could be termed the \textit{error} in reporting of disease cases.
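The frequency-domain Meyer pair can be evaluated directly from the definition; below is a minimal numpy sketch using the simplest admissible auxiliary function $\nu$ and the standard band edges (function names are ours). On the overlap band $2\pi/3<|\omega|<4\pi/3$ the sketch also exhibits the partition property $|\hat{\Phi}(\omega)|^{2}+|\hat{\Psi}(\omega)|^{2}=1/(2\pi)$.

```python
import numpy as np

def nu(x):
    """Simplest admissible auxiliary function: 0 for x<0, x on [0,1], 1 for x>1."""
    return np.clip(x, 0.0, 1.0)

def meyer_scaling_hat(w):
    """Frequency-domain Meyer scaling function Phi_hat(omega) (real valued)."""
    w = np.abs(np.atleast_1d(np.asarray(w, dtype=float)))
    out = np.zeros_like(w)
    c = 1.0 / np.sqrt(2.0 * np.pi)
    out[w <= 2.0 * np.pi / 3.0] = c
    band = (w > 2.0 * np.pi / 3.0) & (w <= 4.0 * np.pi / 3.0)
    out[band] = c * np.cos(0.5 * np.pi * nu(3.0 * w[band] / (2.0 * np.pi) - 1.0))
    return out

def meyer_wavelet_hat(w):
    """Frequency-domain Meyer wavelet Psi_hat(omega) (complex valued)."""
    w = np.atleast_1d(np.asarray(w, dtype=float))
    aw = np.abs(w)
    out = np.zeros(w.shape, dtype=complex)
    c = 1.0 / np.sqrt(2.0 * np.pi)
    phase = np.exp(1j * w / 2.0)
    b1 = (aw > 2.0 * np.pi / 3.0) & (aw <= 4.0 * np.pi / 3.0)
    b2 = (aw > 4.0 * np.pi / 3.0) & (aw <= 8.0 * np.pi / 3.0)
    out[b1] = c * np.sin(0.5 * np.pi * nu(3.0 * aw[b1] / (2.0 * np.pi) - 1.0)) * phase[b1]
    out[b2] = c * np.cos(0.5 * np.pi * nu(3.0 * aw[b2] / (4.0 * np.pi) - 1.0)) * phase[b2]
    return out
```

Both functions vanish outside their supports, which is what makes shifted and dilated copies suitable building blocks for the interval-by-interval constructions above.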
These kinds of measures will be very helpful (after further filtration, such measures can be useful for practicing epidemiologists). Instead of constructing wavelets for the overall epidemic duration, we construct wavelets within the intervals $[t_{0},t_{1}]$ and $(t_{i-1},t_{i}]$ for $i=2,3,...,n$, as described in Figure \ref{fig:Wavelets-constructed-fromsampleddata}. As the reported cases within an interval improve, as described in Figure \ref{fig:Convergence-of-graph}, the wavelet configuration improves. Each of the fractional wavelets obtained from partial data will be updated using the information shown in Figure \ref{fig:Distribution-of-reported}. Meyer wavelets for various equally spaced intervals are demonstrated in Figure \ref{fig:Meyer-wavelets}. \subsubsection{Computation} Suppose a sampled point is obtained for an interval $(t_{i-1},t_{i}]$. We then try to ascertain an improvement of the reported cases from the data obtained in subsequent time intervals. One way to update is through future reports of epidemic cases that were infected or diagnosed during $(t_{i-1},t_{i}]$ but became available only during a later interval $(t_{l-1},t_{l}]$ for $l=i+1,...,n$. That is, the sum of the cases reported during $(t_{i-1},t_{i}]$ and the cases reported during later intervals that belong to $(t_{i-1},t_{i}]$ is treated as the improved number of reported cases for $(t_{i-1},t_{i}]$. We thus update the reported number of cases in a previous interval from future available reports associated with that interval. Hence the evolution of the data for the interval $(t_{i-1},t_{i}]$ can be used to construct the graphs shown in Figure \ref{fig:Convergence-of-graph}. Since this evolution is assumed to be observed over a long period, $G_{i}$ is assumed to converge approximately to $G_{i}^{c}$.
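The updating rule just described can be sketched as a simple tally: each incoming report carries the interval in which it is received and the interval to which it actually belongs, and is added back to the latter. This is a minimal sketch; representing reports as index pairs and the function name are our own assumptions.

```python
from collections import defaultdict

def distribute_reports(reports):
    """reports is an iterable of (received_in, belongs_to) interval indices,
    with belongs_to <= received_in.  Returns the improved number of reported
    cases per interval: cases reported on time plus late-arriving cases
    distributed back to the interval they belong to."""
    counts = defaultdict(int)
    for received_in, belongs_to in reports:
        if belongs_to > received_in:
            raise ValueError("a case cannot belong to a future interval")
        counts[belongs_to] += 1
    return dict(counts)
```

For instance, a case received in interval 3 but belonging to interval 1 increments the count for interval 1, exactly the back-distribution drawn by the arrows in the schematic.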
As the epidemic progresses, we update the interval $(t_{i-1},t_{i}]$ with newly available information from $(t_{i},t_{i+1}]$; as $i$ approaches $n$, the intervals near $n$ have less chance of evolution or updating (a truncation effect). Once the reported numbers are complete, we will similarly study the recovery stage of the epidemic and hence collect the data needed to compute Figure \ref{waveletgrapgh}. How old cases accumulating in new intervals are distributed back to their respective time intervals is schematically described in Figure \ref{fig:Distribution-of-reported}. This procedure updates the reported cases in the past as long as at least one case observed in the present time interval belongs to one of the past intervals. Based on the locations of the reported cases, the constructed graphs are updated as well. Hence, each time we observe in the present interval a reported case that belongs to a past interval, the fractional wavelet for that interval becomes graphically closer to the complete (or true) wavelet. One way to assess the closest fractional wavelet is to compare some features of the fractional wavelets with a PDE/ODE model of the emerging epidemic. \section{\textbf{What can we achieve through such an analysis?}} We provide a group of epidemic growth scenarios inspired by the harmonic analysis set-up. A couple of these scenarios could represent true epidemic growth curves. {[}We will have to evolve a strategy to short-list a couple of plausible true scenarios.{]} With this, we will be in a position to assess the level of under-reporting in a particular epidemic. So the strategy we propose could be beneficial not only in reconstructing a true epidemic but also in assessing the level of reporting error in an epidemic. We illustrate our claim using a true epidemic with added noise.
In addition to the gain in constructing the true epidemic, we also propose new methods that blend harmonic analysis with dynamical systems and a sampling strategy. In this way, harmonic analysis is used to bridge the gap between unknown and known information in disease epidemiology. Suppose we determine that one of the fractional wavelets, say $(\Psi_{a}(t),\Phi_{a}(t))$, is closest to the true wavelet; then finding a measure of the difference between $(\Psi_{a}(t),\Phi_{a}(t))$ and $(\Psi(t),\Phi(t))$ completes the mapping of the epidemic at time $t.$ However, determining which of the fractions $(\Psi_{i}(t),\Phi_{i}(t))$ is closest to the true wavelet is not easy. If there is no significant multiple reporting of disease cases, then the largest fractional wavelet could be assumed to be the one with the shortest measure {[}Note: we still need to provide more clarity on the strategy to determine the closest available fractional wavelet{]}. We are trying to use wavelets to extract full data from partial data. We have argued how various combinations of partial data can be used for discrete constructions, which in turn form supporting information to construct wavelets. These two aspects make our proposed work innovative. In summary, given partial data of an event (here, the event of reporting of disease cases), we aim to construct the complete event data. Wavelets occupy a key role in processing the accumulated data to build the complete event data. The event here is the reported number of cases in a time interval, and these reported cases represent only part of the actual epidemic cases. Through this paper we demonstrate a method of improving partial data toward data that could be complete, and we describe how we plan to update the data reported in an interval and bring it closer to the actual number of disease cases.
This exploration will assist in more realistic visualization of any emerging epidemic spread. We also believe that our methods could provide additional tools for epidemic modelers who frequently begin with modeling tools such as ODEs and PDEs. We are not only looking for academic development through this project, but also for a clear, non-trivial body of techniques for applications of harmonic analysis not previously seen in epidemic analysis. We plan to come up with interesting insights on how to construct wavelets for medical applications. The bottom line is that we will be able to help public health planners toward better management and courses of action during emerging epidemics. The kind of analysis we present here, filling in missing pieces of epidemic reporting information, can be applied to other areas, for example, constructing the total rhythm of a heartbeat from partial information. \section{\textbf{Questions still remain}} Within what span of time can we generate a true epidemic from its emergence using the harmonic analysis set-up? Can we predict the full picture of an epidemic from only partial data? Can we measure the validity and accuracy of an epidemic growth curve? Can we measure the timeliness of our analysis? \section*{\textbf{Global View}} We have identified a gap in the methods for understanding true epidemic growth and spread, and we address it by proposing a novel method. We propose through this study that wavelets could offer a road map toward a practical solution (we are aware that a perfect solution is impossible by any method, because some disease cases are never reported in any situation). The technical aspects of this story line depend on the construction of discrete graphs and \emph{fractional wavelets}. Fractional wavelets are newly introduced to the literature through this study.
A solution through wavelets is also not trivial, because no ready-made set of wavelets is available that offers a timely road map; we have therefore introduced a novel strategy. Hence we argue that our approach brings us closer to our aim of understanding epidemics in a more accurate and timely fashion. As a by-product, we can develop techniques for data scientists to analyze disease surveillance data.
No NIN, No Driver's Licence – FRSC Victor Isaiah (REMALI) Thursday, December 17, 2020 The Federal Road Safety Corps (FRSC) has given notice that it will start enforcing the National Identification Number (NIN) directive on driver's licence processing nationwide, with effect from Dec. 21. Mr Bisi Kazeem, Assistant Corps Marshal (ACM) and Corps Public Education Officer, announced this in a statement released on Wednesday, December 16. "Following the Federal Government's directives on the harmonisation of citizens' data by relevant agencies, the Federal Road Safety Corps (FRSC) had, in compliance with the directives, earlier put members of the public on notice. As a follow-up to that, FRSC Management has resolved that effective 21 December, 2020, all applicants for the National Driver's Licence in Nigeria must present the National Identification Number (NIN) from the @nimc_ng before they can be captured for any class of the licences produced by the FRSC," the statement reads. According to Kazeem, all driver's licence applicants are therefore expected to present their National Identification Number from that date before they can be attended to, adding that there will be no waiver for anyone. Kazeem further stressed the need for a harmonised database of citizens' information, which he said is critical to identifying individuals and assisting security agencies with data collation and quick retrieval, to address some of the national security challenges.
Q: How to smoothly align text output Is there any way to do this in Python? I have a bunch of misaligned text that essentially looks like this (copied from the above linked question):
column1 column2
------- -------
sdfsdfsddfsdfsdfsdfsd 343r5
dfgdfgdf 234
gdfgdfgdfgdfgf 645
And I would like it to look like this:
Name Address Size
foo 01234567 346
bar 9abcdef0 1024
something-with-a-longer-name 0000abcd 2048
But I don't know of / can't find equivalent text and string modifiers in Python.

A: In Python 2.6+ and 3 you should use str.format() and not the "%"-syntax. Note that the header row contains strings, so it needs its own format specification (the numeric presentation types x and d only accept numbers):
header_fmt = '{:<30} {:>8} {:>8}'
row_fmt = '{name:<30} {address:08x} {size:8d}'
print(header_fmt.format('Name', 'Address', 'Size'))
print(row_fmt.format(name=names[i], address=addresses[i], size=sizes[i]))
For more info about the format() method, look here
Q: IntelliJ IDEA Spring Facet only recognizes resources named application.* The question title states it all: is there a way to have a resource file named something other than application.* scanned by the Spring facet? The IntelliJ IDEA version is 2017.1.3. Tagging Spring Boot in the question too; maybe someone from that community knows the configuration as well?
A: 2017.2 allows this: open the Spring facet, select the Spring Boot autodetected fileset, click the "Customize Spring Boot" icon in the toolbar, and enter a custom "spring.config.name" value in the dialog.
The Kingdom of Mitravarta is a huge, orderly nation, remarkable for its state-planned economy, devotion to social welfare, and stringent health and safety legislation. The cynical, devout population of 546 million Mitravartans are ruled without fear or favor by a psychotic dictator, who outlaws just about everything and refers to the populace as "my little playthings." The enormous, moralistic, socially-minded, well-organized government juggles the competing demands of Spirituality, Law & Order, and Welfare. The average income tax rate is 69.8%, and even higher for the wealthy. The sizeable but stagnant Mitravartan economy, worth 8.46 trillion Sonas a year, is driven entirely by a combination of government and state-owned industry, with private enterprise illegal. The industrial sector, which is quite specialized, is mostly made up of the Trout Farming industry, with significant contributions from Arms Manufacturing and Pizza Delivery. Average income is 15,491 Sonas, and distributed extremely evenly, with practically no difference between the richest and poorest citizens. Mitravarta is ranked 129,255th in the world and 206th in The Western Isles for Highest Economic Output, with 8.46 trillion Standard Monetary Units. 1 day 2 hours ago : Mitravarta published "The Royal Armed Forces of Mitravarta मित्रा‌वर्त महाराज्यसेना WIP" (Factbook: Military). 2 days 4 hours ago : Mitravarta lodged a message on the The Western Isles Regional Message Board. 6 days ago : Mitravarta lodged a message on the The Western Isles Regional Message Board. 12 days ago : Mitravarta lodged a message on the The Western Isles Regional Message Board. 15 days ago : Mitravarta lodged a message on the The Western Isles Regional Message Board. 19 days ago : Mitravarta was ranked in the Top 1% of the world for Most Authoritarian. 20 days ago : Mitravarta lodged a message on the The Western Isles Regional Message Board.
\section{Introduction} In profiling radar systems, range resolution is determined by the transmitted signal bandwidth. The synthetic bandwidth technique provides high-range-resolution (HRR) capability by transmitting a series of pulses with various carrier frequencies. Within each pulse, the bandwidth is small. The benefit of this method is that it achieves a large signal bandwidth while retaining low system complexity. After collecting all the received pulses, the `stretch' algorithm \cite{Einstein1984} can be implemented to derive HRR profiles \cite{wehner1995}. In synthetic HRR profiling, the target backscattering property is carefully discussed in \cite{peikang2005}. Theoretical analysis and experimental results have demonstrated that target backscatter can be modeled as distributed point scatterers with individual amplitudes. This approximation of the target, named the `point-scatterer model', is widely recognized in HRR profiling and imaging. The synthetic bandwidth technique usually suffers from jammers and interference, due to the large bandwidth occupied by the transmitted signal. When signal frequencies clash with environmental electromagnetic interference, not all of the scattered signal can be correctly acquired by the radar receiver. If the polluted signal is left unattended, synthetic HRR profiling is affected. This frequency-domain interference problem in the synthetic profiling process can be modeled as a `missing data' problem. Various methods have been proposed to interpolate the missing data \cite{Babu2010} \cite{wang2006} \cite{Stoica2009}. These fitting algorithms are based on assumptions about signal models or properties. Then, `stretch' is applied to the processed data. However, if the missing parts are too long, the profiling result degrades due to inaccurate interpolation \cite{Babu2010}. For large amounts of missing data, the recently developed compressed sensing technique \cite{Baraniuk2007} \cite{Shah2009} is a solution.
However, when the missing pattern is block-like, compressed sensing results also deteriorate. Autocovariance is a second-order property of a signal. While part of the signal is missing, the autocovariance function (ACF) at various lags can still be estimated from the valid data. In this paper, we demonstrate a new approach for the missing data case, in which the autocovariance matrix is estimated using the limited available data. A subspace decomposition method is applied to the estimated matrix to obtain scatterer range information. We present both simulation results and real data from radar systems to validate this new method. The paper is organized as follows. Section II describes the signal model. Section III presents the new method. Simulation results are presented in Section IV, and results derived from real radar data are presented in Section V. Section VI concludes the paper. \section{Signal Model of HRR Profiling} In the synthetic HRR profiling process, the radar system transmits a pulse train of $N$ pulses with various carrier frequencies. For the $n$th pulse, the carrier frequency is $f_n=f_0+n \Delta f$, where $f_0$ is the starting frequency and $\Delta f$ is the frequency step. The transmitted pulse at the $n$th frequency is $p_n(t)=A(t)\exp\{j2\pi f_n t\}$, in which $A(t)$ is the signal envelope. The total bandwidth is $N \Delta f$. Suppose a single reflecting point with scattering amplitude $\alpha$ is positioned at time delay $\tau$ (corresponding to range $c \tau /2$). At the receiver, the received signal is $r_n(t)=\alpha A(t-\tau)\exp\{j2\pi f_n (t-\tau)\}$. In synthetic radar HRR profiling, the point-scatterer model describes a target as a series of individual reflecting points \cite{peikang2005} with various amplitudes $[\alpha_1,\alpha_2,\ldots,\alpha_K]$ and time delays $[\tau_1,\tau_2,\ldots,\tau_K]$. These scatterers compose a linear, time-invariant system.
The received signal is the superposition of the reflected waves from all scatterers, plus noise: \begin{equation} r_n(t)=\sum_{k=1}^K \alpha_k A(t-\tau_k)\exp \{j2\pi f_n (t-\tau_k)\}+e(t) \end{equation} After quadrature demodulation and sampling, the complex sample of the $n$th pulse is \begin{equation} \label{eq:rx} y_n=\sum_{k=1}^K \alpha_k e^{-j2\pi f_n \tau_k}+e_n \end{equation} Obviously, the sampled data are a superposition of multiple sinusoids. In a synthetic bandwidth system, a series of samples is collected as equally spaced frequency-domain samples. With these samples, synthetic HRR profiling is essentially the estimation of three sets of parameters: the number, time delays, and amplitudes of the scatterers. MUSIC \cite{stoica1997} and other subspace methods have been used in HRR profiling \cite{Kim2002}. The key step of traditional MUSIC, covariance matrix estimation, operates on the full-data case. In a synthetic bandwidth signal, due to interference or jamming at the receiver, some complex samples may not be obtainable. Suppose pulses $P_I=[m_1,m_2,\ldots,m_A]$ are available and $P_N=[m_{A+1},m_{A+2},\ldots,m_N]$ are pulses polluted by interference. We designate \begin{IEEEeqnarray*}{C} I(k)= \begin{cases} 1 & \text{if } k \in P_I, \\ 0 & \text{if } k \in P_N. \end{cases} \end{IEEEeqnarray*} Then $I(k)$ indicates the availability of the $k$th pulse. The observations in (\ref{eq:rx}) reduce to: \begin{equation} \label{eq:rxP} y_{m_i}=\sum_{k=1}^K \alpha_k e^{-j2\pi f_{m_i} \tau_k}+e_{m_i} \end{equation} In the following section, we present a new algorithm, named Missing-MUSIC or M-MUSIC, modified for the missing data case. \section{Algorithm Description} \subsection{Autocovariance Estimation} We start with an analysis of the signal autocovariance function (ACF). For a second-order stationary process, the autocovariances at different lags do not depend on the time origin.
In the full-data case, the ACF of a continuously sampled zero-mean series $X_1,X_2,\ldots,X_N$ is estimated by the unbiased estimator \begin{equation} \hat{c}(h)=\frac{1}{N-h}\sum_{i=1}^{N-h}X_{i+h} X_i^* \end{equation} where $h$ is the lag of the ACF and $(\cdot)^*$ denotes the complex conjugate. Note that for lag $h$ we have $N-h$ couples of sampled data to be averaged. For a sufficiently long series, the estimate is asymptotically consistent. However, as the sampled data are polluted by jammers, the number of available couples decreases as the polluted sample size increases. For the ACF at lag $h$, the number of useful couples is \begin{equation} Q(h)=\sum_{i=1}^{N-h} I(i)I(i+h). \end{equation} Each couple of samples in which both samples exist is counted toward the ACF estimation. A series of $Q(h)$ for different lags $h$ is calculated; in theory, $Q(h)$ lies in the range $[0,N-h]$. The ACF can then be estimated by averaging all available couples of sampled data in the following manner: \begin{equation} \hat{c}(h)=\frac{1}{Q(h)}\sum_{i=1}^{N-h} I(i) I(i+h) X_{i+h} X_i^*. \end{equation} Stated simply, we drop the couples containing polluted samples and average all the remaining `clear' couples. \subsection{Covariance Matrix Forming} In order to proceed with subspace identification, a covariance matrix has to be formed. For a second-order stationary process, elements on the same diagonal equal each other; thus the covariance matrix is a Toeplitz matrix, with the value on each diagonal equal to the ACF at the corresponding lag: \begin{equation} \hat{C}=\left[ \begin{array}{cccc} \hat{c}(0) & \hat{c}(1) & \ldots & \hat{c}(L-1) \\ \hat{c}(-1) & \hat{c}(0) & \ldots & \hat{c}(L-2) \\ \vdots & \vdots & \ddots & \vdots \\ \hat{c}(1-L) & \hat{c}(2-L) & \ldots & \hat{c}(0) \end{array} \right]. \end{equation} Here $L$ is the size of the covariance matrix. The matrix is both Hermitian and Toeplitz. The choice of $L$ is important in this method.
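The masked ACF estimator and the Toeplitz structure above can be sketched in a few lines of numpy; the boolean array `mask` plays the role of the indicator $I(k)$, and the helper names are ours:

```python
import numpy as np

def num_couples(mask, h):
    """Q(h): number of valid couples (i, i+h) available at lag h."""
    return int(np.sum(mask[:len(mask) - h] & mask[h:])) if h > 0 else int(mask.sum())

def acf_missing(x, mask, h):
    """Unbiased ACF estimate c_hat(h) averaging only couples (i, i+h)
    for which both samples are valid."""
    ok = mask[:len(x) - h] & mask[h:] if h > 0 else mask
    i = np.nonzero(ok)[0]
    if len(i) == 0:
        raise ValueError(f"no valid couples for lag {h}")
    return np.mean(x[i + h] * np.conj(x[i]))

def toeplitz_cov(x, mask, L):
    """Hermitian Toeplitz covariance estimate with C[a, b] = c_hat(b - a)."""
    c = np.array([acf_missing(x, mask, h) for h in range(L)])
    c_full = np.concatenate([np.conj(c[:0:-1]), c])   # lags -(L-1) .. L-1
    a = np.arange(L)
    return c_full[(a[None, :] - a[:, None]) + (L - 1)]
```

The rule of thumb for choosing the matrix size, discussed next, amounts to checking `num_couples(mask, h)` against `(N - h) / 2` for every lag below the candidate `L`.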
Too small a size will reduce the accuracy and resolution of the subspace method \cite{Stoica1989} \cite{Stoica1990}. On the other hand, the ACF estimate is based on limited samples, and the missing-data condition reduces the sample size further. Thus the size of $\hat{C}$ should be limited, or the ACF estimates at large lags will degrade the covariance matrix. A rule of thumb is to choose the largest $L$ subject to \begin{equation} \forall h<L, \quad Q(h) \ge \frac{N-h}{2} \end{equation} \subsection{Number of Scatterers and Range Estimation} All eigenvalues of the estimated covariance matrix $\hat{C}$ are real, since $\hat{C}$ is Hermitian. Let $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_L$ denote the eigenvalues of $\hat{C}$ listed in decreasing order. In subspace methods such as MUSIC, the eigenspace is divided into a signal subspace and a noise subspace. The signal subspace is the span of the eigenvectors corresponding to the large eigenvalues. In HRR profiling problems, the number of scatterers is usually unknown; however, significant scatterers with strong reflectivity determine the characteristics of a radar target. We may determine the number of scatterers by counting the eigenvalues above some `noise level'. Let $T$ be the threshold for significant eigenvalues; the estimated number of scatterers $\hat{K}$ is the number of eigenvalues with $\lambda_k > T$. $\hat{K}$ is also the dimension of the signal subspace; the noise subspace has dimension $L-\hat{K}$. Suppose $\{ s_1,s_2,\ldots,s_{\hat{K}} \}$ are the $\hat{K}$ orthogonal eigenvectors associated with the $\hat{K}$ largest eigenvalues of $\hat{C}$; these vectors span the signal subspace $S$. Let $\{ g_{\hat{K}+1},g_{\hat{K}+2},\ldots,g_L \}$ be the $L-\hat{K}$ eigenvectors spanning the noise subspace $G$. The range, or time delay, of the scatterers is determined by Root-MUSIC \cite{Barabell1983}. We define the polynomial \begin{equation} g_k (z)=\sum_{l=1}^L g_{lk} z^{-(l-1)} \end{equation} where $g_{lk}$ is the $l$th element of the noise eigenvector $g_k$.
We then solve for the roots of the polynomial \begin{equation} D(z)=\sum_{k=\hat{K}+1}^L g_k(z)\, g_k^* (1/z^*) \end{equation} Each root has the complex form $\hat{z}_i=|\hat{z}_i| e^{j \hat{\omega}_i}$. We retain the $\hat{K}$ complex roots $\{ Z_1, Z_2, \ldots, Z_{\hat{K}} \}$ that lie closest to the unit circle in the complex plane. The time delay in (\ref{eq:rx}) is \begin{equation} \hat{\tau}_i=\frac{\hat{\omega}_i}{2\pi}. \end{equation} \subsection{Amplitude Estimation and Profile Forming} Substituting the estimated scatterer number $\hat{K}$ and the time delays $\hat{\tau}_i$ into equation (\ref{eq:rxP}): \begin{equation} y_{m_i}=\sum_{k=1}^{\hat{K}} \alpha_k e^{-j2\pi f_{m_i} \hat{\tau}_k}+e_{m_i} \end{equation} or, in matrix form, \begin{equation} \mathbf{Y}=\mathbf{\hat{F}} \alpha + \mathbf{E} \end{equation} Each estimated scatterer forms a steering vector for the observations. The reflective amplitudes of the $\hat{K}$ scatterers are simply the least-squares solution of the equations above: \begin{equation} \hat{\alpha}=(\mathbf{\hat{F}}^* \mathbf{\hat{F}})^{-1} \mathbf{\hat{F}}^* \mathbf{Y} \end{equation} At this point, all parameters of the scatterers are determined. To generate an HRR profile, we simply arrange the reflectivities of the scatterers according to their time delay or range. Note that the estimated amplitudes are complex; in HRR profiles, the amplitude is usually converted to absolute value and represented on a log scale. \section{Simulation Result} To demonstrate the proposed technique, we simulated a synthetic bandwidth system with a total bandwidth of $960$ MHz, in which $512$ pulses are equally spaced by $\Delta f=1.875$ MHz. The theoretical range resolution of this system is about $0.15$ meters. We place four scatterers down range. The locations and amplitudes of the scatterers are drawn in Figure \ref{pic:simutgt}. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{SimulationTargets.jpg} \caption{Target simulation of four scatterers with different reflectivity.
Closest scatterers are separated by 1 meter.} \label{pic:simutgt} \end{figure} \begin{table}[!t] \caption{Parameter Table for Missing Data Simulation} \label{tab:simupara} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{c c} \hline \hline Parameter & Value\\ \hline Radio-frequency & X band\\ Frequency Step Size $(\Delta f)$ & 1.875MHz\\ Total Pulse Number & 512\\ Full Bandwidth & 960MHz \\ Valid Pulse Number & 300\\ SNR at the Receiver & 15dB\\ \hline \end{tabular} \end{table} \subsection{Random Missing Data} \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{RandomMissing} \caption{Randomly distributed missing data pattern. Blue circles denote valid sampling frequencies. Red crosses are polluted frequencies; samples at these frequencies are not used. } \label{pic:lackrand} \end{figure} In this case, the interference is randomly distributed over the carrier frequencies, as drawn in figure \ref{pic:lackrand}. Simulation parameters are listed in Table \ref{tab:simupara}. Within the total bandwidth of 960MHz composed of 512 pulses, 212 pulses are polluted. The unavailable frequencies are uniformly distributed within the total bandwidth. We compare the results of compressed sensing and of our new method. Both methods incorporate the Akaike information criterion to determine the number of scatterers. In figure \ref{pic:resrand}, both methods prove capable of profiling the scatterers. In both results, the four recovered scatterers appear at the correct ranges, and the amplitudes reflect the original reflectivities. However, a closer look at the profiling results reveals some advantages of the new method. The compressed-sensing result shows spurious scatterers surrounding the original ones. This phenomenon is due to the `grid-mismatch' problem of that method \cite{Chi2011}. In contrast, the new method resolves scatterer ranges from the roots of a polynomial, so the ranges are not confined to pre-defined grids.
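As a concrete illustration of the processing chain described above (ACF estimation over available sample pairs, Hermitian Toeplitz covariance, eigenvalue thresholding, Root-MUSIC, and least-squares amplitudes), the following is a minimal NumPy sketch. The scene itself (three scatterers, 100 polluted pulses, noise level, lag count $L$, and the threshold choice $T=0.1\,\lambda_{\max}$) is our own illustrative choice, not the paper's exact simulation, and the simple biased ACF average over valid pairs stands in for the estimator of the previous section.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative scene (our own toy parameters, not the paper's setup) ---
N, df = 512, 1.875e6                       # pulse number, frequency step (Hz)
tau_true = np.array([60e-9, 120e-9, 200e-9])          # scatterer delays (s)
amp_true = np.array([1.0, 0.8, 0.7]) * np.exp(1j * rng.uniform(0, 2 * np.pi, 3))

m = np.arange(N)
y = np.exp(-2j * np.pi * df * np.outer(m, tau_true)) @ amp_true
y += 0.05 / np.sqrt(2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

avail = np.ones(N, bool)
avail[rng.choice(N, 100, replace=False)] = False      # 100 polluted pulses
y[~avail] = 0.0                                       # discarded samples

# --- ACF over available sample pairs, Hermitian Toeplitz covariance ---
L = 32
r = np.empty(L, complex)
for h in range(L):
    idx = np.flatnonzero(avail[:N - h] & avail[h:])   # pairs (m, m+h) both valid
    r[h] = np.mean(y[idx + h] * np.conj(y[idx]))
D = np.subtract.outer(np.arange(L), np.arange(L))
C = np.where(D >= 0, r[np.abs(D)], np.conj(r[np.abs(D)]))

# --- scatterer number: count eigenvalues above a noise-level threshold ---
w, V = np.linalg.eigh(C)                              # ascending eigenvalues
K = int(np.sum(w > 0.1 * w[-1]))                      # threshold T = 0.1*lambda_max
G = V[:, :L - K]                                      # noise-subspace eigenvectors

# --- Root-MUSIC: polynomial roots closest to the unit circle ---
Q = G @ G.conj().T
coeffs = [np.trace(Q, offset=o) for o in range(-(L - 1), L)]
roots = np.roots(coeffs)
roots = roots[np.abs(roots) < 1]                      # keep inside-circle half
roots = roots[np.argsort(-np.abs(roots))][:K]         # K closest to unit circle
tau_hat = np.sort(np.angle(roots)) / (2 * np.pi * df)

# --- least-squares amplitudes on the valid pulses only ---
F = np.exp(-2j * np.pi * df * np.outer(m[avail], tau_hat))
alpha_hat, *_ = np.linalg.lstsq(F, y[avail], rcond=None)
```

With the delays well separated relative to the lag aperture $L\,\Delta f$, the recovered `tau_hat` and `|alpha_hat|` match the simulated scatterers closely.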
\begin{figure}[htbp] \centering \subfigure[]{\includegraphics[width=0.48\textwidth]{RandResOMP} \label{pic:randomp}} \subfigure[]{\includegraphics[width=0.48\textwidth]{RandResMUSIC} \label{pic:randmusic}} \caption{(a) HRR profile generated by compressed sensing. (b) HRR profile generated by M-MUSIC. 212 pulses out of 512 transmitted pulses are polluted and discarded. Polluted frequencies are randomly distributed over the total bandwidth.} \label{pic:resrand} \end{figure} \subsection{Block Missing Data} In this case the interference over the total bandwidth is block-shaped, as in figure \ref{pic:lackblock}. Simulation parameters are the same as in the previous subsection; only the frequency distribution of the interference differs. This interference pattern is common in real environments: signals transmitted by jammers or other radars occupy a continuous part of the bandwidth. \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{BlockMissing.jpg} \caption{Block-shaped missing data pattern. Blue circles denote valid sampling frequencies. Red crosses are polluted frequencies; samples at these frequencies are not used. } \label{pic:lackblock} \end{figure} \begin{figure}[htbp] \centering \subfigure[]{\includegraphics[width=0.48\textwidth]{BlockResOMP} \label{pic:blockomp}} \subfigure[]{\includegraphics[width=0.48\textwidth]{BlockResMUSIC} \label{pic:blockmusic}} \caption{(a) HRR profile generated by compressed sensing. (b) HRR profile generated by M-MUSIC. 212 pulses over two bands are polluted.} \label{pic:resblock} \end{figure} The compressed-sensing result deteriorates further, owing to the poorer properties of its sensing matrix under this pattern. The new method, displaying a stable profiling result, exhibits an advantage over the traditional method. \section{Real Radar Data Result} In order to verify the new method on a real system, a synthetic bandwidth radar was placed at the shore of a lake.
In this environment, clutter energy and other unwanted interference are sufficiently low. Experimental data of the I/Q channels were collected from the baseband of the radar receiver. The signal parameters are listed in Table \ref{tab:real}. Two corner reflectors separated by about 4 meters were set above the water, as in figure \ref{pic:targetview}. The target is 5 km away from the radar. In the profiling result, we expect two spikes of strong reflection. 200 pulses were deliberately jammed using electronic jammers, in a block-like pattern. \begin{table}[!t] \caption{Parameter Table for Real System} \label{tab:real} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{c c} \hline \hline Parameter & Value\\ \hline Radio-frequency & X band\\ Frequency Step Size $(\Delta f)$ & 1.875MHz\\ Intra-pulse Bandwidth & 6MHz \\ Total Pulse Number & 512\\ Full Bandwidth & 960MHz \\ \hline \end{tabular} \end{table} \begin{figure}[htbp] \centering \includegraphics[width=0.48\textwidth]{LakeTarget} \caption{Two corner reflectors were placed above water. The range between the reflectors was roughly 4 meters. Two point scatterers are expected in the profile output.} \label{pic:targetview} \end{figure} \begin{figure}[htbp] \centering \subfigure[]{\includegraphics[width=0.48\textwidth]{RealTargetsOMP} \label{pic:realomp}} \subfigure[]{\includegraphics[width=0.48\textwidth]{RealTargets} \label{pic:realmusic}} \caption{The two reflectors are resolved at a separation of 3.95 meters. (a) HRR profile by OMP; note the multiple reflectors surrounding each main reflector, caused by `grid-mismatch'. (b) Result by M-MUSIC; only two spikes are sufficient to represent the received signal.} \label{pic:realres} \end{figure} We compare compressed sensing and our new method on the HRR profile results. In Figure \ref{pic:realomp}, two scatterer points are resolved. However, multiple spikes are required to represent each reflector; this is the same phenomenon that appeared in the simulated data.
The result of the new method in Figure \ref{pic:realmusic} clearly shows two scatterers. They are separated by 3.95 meters, correctly describing the positions of the two reflectors. \section{Conclusion} In this paper, we introduced a new HRR profiling algorithm for synthetic bandwidth signals. The new approach correctly resolves scatterer ranges and amplitudes under missing-data conditions. The signal covariance matrix is formed from the sampled auto-covariance, and subspace decomposition is applied to resolve the scatterer ranges. Amplitudes are obtained by least squares. Simulations and real data results show that the new method has an advantage over the compressed-sensing method, which is widely used in the missing-data case. This method may further be applied to sinusoidal signal decomposition and other radar imaging areas. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} Both active galactic nuclei (AGN) and local environment play key roles in shaping galaxy evolution. It is now understood that AGN are those nuclei in galaxies that emit radiation powered by accretion onto a supermassive black hole. Although this realisation has proved useful for explaining many observed characteristics of these active objects, there are still many unsolved problems, especially related to the physics of the accretion process itself. In recent years much effort has been invested in studying the global properties of AGN as a unique population in the context of galaxy formation. In this work, we focus on a fundamental question: the dependence of the fraction of galaxies that have AGN on the density of the local environment at $z\sim1$, and the evolution of this dependence to $z\sim0$. At low redshift, many authors have investigated various correlations between \textit{galaxy properties} and environment. It is now well established that there exists a relationship between morphology and density (\citealt{Oemler1974} and \citealt{Dressler1980}), in that star-forming disk-dominated galaxies tend to inhabit less dense regions of the Universe than ``quiescent'' or inactive elliptical galaxies. Moreover, additional (and related) dependencies on environment have been found, such as with stellar mass, luminosity, colour, recent and past star formation, star formation quenching, surface brightness, and concentration (to name but a few) \citep[e.g.][]{Kauffmann2004, Balogh2004, Hogg2004, Blanton2005, Bundy2006}. In this scenario of entangled correlations it is useful to investigate the dependence of AGN properties on the local environment, especially since AGN are believed to play an important part in shaping galaxy evolution. This has sometimes been a rather controversial issue.
In the local Universe, \cite{Miller2003} found no dependence on environment of the fraction of spectroscopically selected AGN, using the SDSS early data release. This result is in good agreement with \cite{Sorrentino2006} who used the much larger SDSS DR4. However, many other authors claim the existence of a strong link between nuclear activity and environment, at least for specific AGN types. \cite{Kauffmann2004} found that intermediate luminosity optically selected AGN (Seyfert IIs) favoured underdense environments, while low-luminosity optically selected AGN (Low-Ionization Nuclear Emission-line Regions; hereafter, LINERs) showed no density dependence, within the SDSS DR1. Similarly, lower-luminosity AGN were found to have a higher clustering amplitude than high-luminosity AGN by \cite{Wake2004} and \cite{Constantin2006a}. Radio-loud AGN have been noted to reside preferentially in mid-to-high density regions and tend to avoid underdense environments \citep{Zirbel1997, Best2004}. At high redshift the study of both galaxies and AGN, and their relation to the environment, has been restricted by the lack of adequate data. Only in recent years, with the emergence of quality large-scale probes of the high redshift galaxy population, such as the DEEP2 Galaxy Redshift Survey \citep{Davis2003} or the VIMOS-VLT Deep Survey \cite[VVDS,][]{LeFevre2003}, have we reached the stage where we can begin to measure the statistics of galaxy evolution in some detail. Using DEEP2, \cite{Cooper2006} found that many of the low redshift galaxy correlations with environment are already in place at $z\!\sim\!1$. However, important differences exist. The colour-density relation, for instance, tends to weaken towards higher redshifts \citep{Cooper2007a,Cucciati2006}. Also, bright blue galaxies are found, on average, in much denser regions than at low redshift. Such a population inverts the local star formation-density relation in overdense environments \citep{Cooper2007b,Elbaz2007}.
This inversion may be an early phase in a galaxy's transition onto the red sequence through the process of star formation quenching. The truncation of star formation in massive galaxies is believed to be tightly connected with nuclear activity \citep[see e.g.][for more information]{Croton2006, Bower2006}. Further investigation reveals that post-starburst (aka. K+A or E+A) galaxies \citep[e.g.][]{DresslerGunn1983} are galaxies ``caught in the act'' of quenching and are in transit to the red sequence. These predominantly ``green valley'' objects reside in similar environments to regular star forming galaxies (\citealt{Hogg2006, Nolan2007}; Yan et al. 2007 in prep.) supporting the picture that star formation precedes AGN-triggered quenching, which precedes retirement onto the red sequence. \cite{Georgakakis2007} were one of the first to study the environments of X-ray selected AGN at $z\sim1$ using a sample of 58 sources drawn from the All-Wavelength Extended Groth Strip International Survey (AEGIS, \citealt{Davis2007}). The authors found that these galaxies avoided underdense regions with a high level of confidence. \cite{Nandra2007} show that the same AGN reside in host galaxies that populate from the top of the blue cloud to the red sequence in colour-magnitude space. They speculate that such AGN may be the mechanism through which a galaxy stays red. Similar ideas have become a popular feature of many galaxy formation models that implement lower luminosity (i.e. non-quasar) AGN to suppress the supply of cooling gas to a galaxy, hence quenching star formation through a process of ``starvation'' \citep[e.g.][]{Croton2006, Bower2006}. In this work we study the environmental dependence of nuclear activity in red sequence galaxies within a carefully chosen sample of both X-ray and optically selected AGN, drawn from the AEGIS Chandra catalogue and the DEEP2 Galaxy Redshift Survey, respectively. Our paper is organised as follows. 
In Section~\ref{sec:survey} we describe our AGN selection. In Section~\ref{sec:results} we present our main result: the AGN fraction of red sequence galaxies at $z\sim1$ as a function of environment for three types of AGN (LINERs, Seyferts and X-ray selected). We undertake a comparison between our high-z results and those derived from a low-z sample drawn from the SDSS in Section~\ref{sec:sdss}. Finally, in Sections~\ref{sec:discussion} and \ref{sec:summary} we provide a discussion and brief summary. Throughout, unless otherwise stated, we assume a standard $\Lambda$CDM concordance cosmology, with $\Omega_m=0.3$, $\Omega_\Lambda=0.7$, $w=-1$, and $h=1$. In addition, we use AB magnitudes unless otherwise stated. \begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[scale=0.7]{./figures/FIG_paper1_2.ps} \end{tabular} \end{center} \caption{Two panels that show our AGN selection of Seyferts and LINERs within the DEEP2. The left panel plots [OII] EW versus ${\rm H{\beta}}$ EW for objects with accurate redshifts ($Q\ge3$), $\delta_{3}$ environment measures, and covered [OII], [OIII] and ${\rm H{\beta}}$ (grey points). LINERs (black points) are selected using the empirical demarcation of Equation~\ref{eqn:liners_selection} along with the colour cut defined by Equation~\ref{eqn:color_deep2}. The right panel shows the line ratio ${\rm [OIII]/ H{\beta}}$ plotted against $(U-B)$ rest-frame colour for the same DEEP2 sample (grey points). Seyferts (black points) are selected to have ${\rm [OIII]/H{\beta}\ge 3}$ and rest-frame colour $(U\!-\!B)>0.8$, as denoted by the horizontal and vertical lines.
See Section~\ref{sec:optical} for further details.} \label{fig:OSS_selection} \end{figure*} \section{Galaxy and AGN Selection} \label{sec:survey} Our primary galaxy and AGN samples are drawn from the DEEP2 Galaxy Redshift Survey \citep{Davis2003,Davis2005}, a project designed to study galaxy evolution and the underlying large-scale structure out to redshifts of $z\sim1.4$. The survey utilises the DEIMOS spectrograph \citep{Faber2003} on the 10-m Keck II telescope and has so far targeted $\lower.5ex\hbox{$\; \buildrel > \over \sim \;$} 50\,000$ galaxies covering $\sim 3$ square degrees of sky over four widely separated fields. In each field, targeted galaxies are observed down to an apparent magnitude limit of $R_{\rm AB} < 24.1$. Importantly for this work, the spectral resolution of the DEIMOS spectrograph is quite high, ${\rm R} \approx 5000$, spanning an observed wavelength range of $6500\!<\!\lambda\!<\!9200$\AA. This allows us to confidently identify AGN candidates through emission-line ratios down to low equivalent widths; such objects form a core part of the data analysed in this paper. More details on the DEEP2 survey design and galaxy detection can be found in \cite{Davis2003, Davis2005, Davis2007} and \cite{Coil2007b}. To study the dependence of the AGN fraction on local environment, for each galaxy we use the pre-calculated projected third-nearest-neighbour distance, $D_{p,3}$, and surface density, $\Sigma_{3}=3/(\pi D_{p,3}^{2})$ \citep[taken from][]{Cooper2005}. This density measure is then normalised by dividing by the mean projected surface density at the redshift of the galaxy in question, yielding a quantity denoted $1+\delta_3$. Tests using mock galaxy catalogues show that $\delta_{3}$ is a robust environment measure that minimises the role of redshift-space distortions and edge effects. See \cite{Cooper2005} for further details and comparisons with other commonly used density estimators.
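For concreteness, the normalised over-density defined above can be sketched as follows. This is a toy two-dimensional version assuming plain Euclidean projected distances and a constant mean density; the actual DEEP2 measure works in projected comoving coordinates within a velocity window and normalises by the mean density at each galaxy's redshift.

```python
import numpy as np

def overdensity_3nn(x, y, mean_density):
    """1 + delta_3 from the projected 3rd-nearest-neighbour distance:
    Sigma_3 = 3 / (pi * D_3^2), normalised by the mean surface density."""
    pts = np.column_stack([x, y])
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude each galaxy itself
    d3 = np.sort(d, axis=1)[:, 2]             # distance to 3rd neighbour
    sigma3 = 3.0 / (np.pi * d3 ** 2)
    return sigma3 / mean_density
```

For a galaxy whose third neighbour lies at unit distance, this returns $\Sigma_3/\langle\Sigma\rangle = 3/\pi$ when the mean density is one.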
To complement our optical catalogue we employ Chandra X-ray data from the All-Wavelength Extended Groth Strip International Survey (AEGIS, \citealt{Davis2007}). The AEGIS catalogue provides a panchromatic measure of the properties of galaxies in the Extended Groth Strip (EGS) covering X-ray to radio wavelengths. The EGS is part of DEEP2, constituting approximately one sixth of its total area. This allows us to cross-correlate each X-ray detection with the optical catalogue to identify each galaxy counterpart. In this way environments can be determined for the X-ray AGN sources. Selecting objects from both the DEEP2 spectroscopic and Chandra (AEGIS) X-ray catalogues provides two different AGN populations that are embedded in the same underlying large-scale structure. To differentiate the two in the remainder of the paper, we hereafter refer to the first as the optically selected sample (OSS) and the second as the X-ray selected sample (XSS). In the following sections we describe the OSS and XSS populations in more detail. \subsection{The optically selected AGN sample (OSS)} \label{sec:optical} Optically (or spectroscopically) selected AGN in the DEEP2 survey can be divided into two main classes, LINERs and Seyferts, distinguished primarily through the spectral lines present and their strength. Although the physical processes that differentiate one class from the other are still not well understood, the identification of each class is nevertheless well defined. We restrict our analysis to the redshift range $0.72\!<\!z\!<\!0.85$ to ensure that all chosen AGN spectral indicators are visible within the covered wavelength range and that the environment measure is sufficiently reliable. This will be the redshift interval from which all our OSS results are taken. Furthermore, to facilitate a fair comparison between both AGN types, only objects on the red sequence or in the green valley are included (defined below).
This will also allow us to compare with a low redshift sample (see Section~\ref{sec:sdss}). For a complete discussion of the spectroscopic detection of AGN in the DEEP2 survey see Yan et al. 2007 (in prep.). Below we will briefly outline our LINER and Seyfert selection in turn. \subsubsection{LINERs} As discussed in \cite{Yan2006}, LINERs are a population of emission-line galaxies with high equivalent width (EW) ratio ${\rm[OII]/H{\alpha}}$ (or ${\rm[OII]/H{\beta}}$). Specifically, we select a complete sample of LINERs using the division in ${\rm[OII]/H{\beta}}$ EW space given in \cite{Yan2006}: \begin{equation} {\rm EW \big(\big[O_{II}\big]\big)>18\,EW\big(H{\beta}\big)-6} \label{eqn:liners_selection} \end{equation} The left panel of Figure~\ref{fig:OSS_selection} illustrates this selection by plotting [OII] EW against ${\rm H{\beta}}$ EW for the entire DEEP2 sample with accurate redshifts ($Q\ge3$), $\delta_{3}$ environment measures, covered [OII], [OIII] and ${\rm H{\beta}}$ (for consistency with Seyfert selection -- see below), and redshift window $0.72\!<\!z\!<\!0.85$ (grey points). The solid line indicates the empirical demarcation of Equation~\ref{eqn:liners_selection}. Since quiescent galaxies with no line emission also satisfy this criterion, the inequality relation alone is not sufficient. Thus, we further require LINERs to have a significant ($2\sigma$) detection of [OII]. As ${\rm H{\beta}}$ emission is expected to be weak in LINERs \citep{Yan2006}, we do not require a significant detection of ${\rm H{\beta}}$. The error on the ${\rm H{\beta}}$ EW is large due to the difficulty of measuring it after subtracting the stellar absorption. The above LINER selection suffers contamination from star-forming galaxies whose ${\rm H{\beta}}$ EW is underestimated. From a study of SDSS galaxies, \cite{Yan2006} concluded that LINERs are almost exclusively found in red sequence galaxies.
Therefore, we adopt an additional colour cut to remove this contamination, which is the same used by \cite{Willmer2006}: \begin{equation} {(U\!-\!B)>-0.032M_{B}+0.322} \label{eqn:color_deep2} \end{equation} Our final LINER sub-sample with all of the above constraints is comprised of 116 objects and is over-plotted in the left panel of Figure~\ref{fig:OSS_selection} with black points. Note that within the SDSS a strong vertical branch can be seen (see Figure~2 of \citealt{Yan2006}, where they use $\rm H{\alpha}$ instead of ${\rm H{\beta}}$). This branch is significantly weaker at $z\sim1$ in the DEEP2 data. This is due in part to the greater errors on ${\rm H{\beta}}$ in the DEEP2 data, and in part to the domination of red galaxies in the SDSS sample (due to the SDSS selection criteria). \begin{figure} \plotone{./figures/FIG_paper2_3.ps} \caption{The redshift distribution for our optically selected AGN sample (OSS, Section~\ref{sec:optical}). The distribution of Seyferts is given by the blue solid line, while the distribution of LINERs is given by the red dashed line. The shaded region denotes the redshift window $0.72\!<\!z\!<\!0.85$ from which our final OSS sample is drawn. Within this window both populations are cleanly identified spectroscopically and the effect of selection is small in both sub-samples. } \label{fig:OSS_redshift} \end{figure} \subsubsection{Seyferts} Seyferts require different selection techniques than LINERs. Following the method of Yan et al (in prep.), we identify Seyferts in DEEP2 using a modified Baldwin-Phillips-Terlevich (BPT) diagram \citep{Baldwin1981}. Historically, the BPT diagram has been a reliable tool for determining the source of line emission from a galaxy. By plotting the line ratios ${\rm [OIII]\ \lambda 5007 / H{\beta}}$ against ${\rm [NII]\ \lambda 6583/H{\alpha}}$ one can visually differentiate Seyferts, LINERs and star-forming galaxies. 
However, ${\rm H{\alpha}}$ is not available in the DEEP2 spectra at $z\simgt0.4$ as it is redshifted into the infrared. For this reason, we use a modified BPT diagram which replaces the line ratio ${\rm [NII]\ \lambda 6583/H{\alpha}}$ with the rest-frame $U\!-\!B$ colour. This is possible because both are rough proxies for metallicity. Tests done on SDSS samples demonstrate that such a substitution is able to produce a clean and complete selection criterion for Seyferts (Yan et al. in prep.). In the right panel of Figure~\ref{fig:OSS_selection} we illustrate our Seyfert selection by showing the modified BPT diagram for the same underlying sample used to select LINERs (grey points). This figure shows that the modified BPT diagram has a two-branch structure similar to that of the original BPT diagram. Seyferts are selected to have ${\rm [OIII]/H{\beta}\ge 3}$ and rest-frame colour $(U\!-\!B)>0.8$ (horizontal and vertical lines, respectively). For cases in which ${\rm H{\beta}}$ is not positively detected, we use a $2\sigma$ lower limit on ${\rm [OIII]/H{\beta}}$. With such criteria we obtain 131 Seyferts in the range $0.72\!<\!z\!<\!0.85$ where all spectral signatures for both Seyferts and LINERs are normally available (black points). Selecting only red sequence (or green valley) objects facilitates a fair comparison with LINERs and is consistent with their typical position in the colour-magnitude diagram \citep{Yan2006}. \begin{figure} \plotone{./figures/FIG_paper3.ps} \caption{The redshift distribution for our X-ray selected AGN sample (XSS, Section~\ref{sec:xray}), shown by the red solid line. The grey shaded region denotes the redshift range $0.6\!<\!z\!<\!1.1$, from which our final XSS is drawn.
This range was chosen to be approximately comparable to the OSS (Figure~\ref{fig:OSS_redshift}) while simultaneously maximising the number and completeness in the sample.} \label{fig:XSS_selection} \end{figure} \subsubsection{Redshift distributions} In Figure~\ref{fig:OSS_redshift} we show the redshift distribution of both LINERs (red dashed line) and Seyferts (blue solid line) drawn from the selection given in each panel of Figure~\ref{fig:OSS_selection}. The DEEP2 Seyfert population extends from $z \approx 0.35$ to $z \approx 0.85$, peaking at around $0.75$. For LINERs the distribution is much more concentrated, extending from $0.72$ to $0.9$ and peaking at around $0.8$. Note that the peak for both is dominated by the DEEP2 survey galaxy selection and not an intrinsic peak in the AGN distribution. As discussed previously, the redshift window where both populations can be cleanly identified spectroscopically is $0.72\!<\!z\!<\!0.85$, denoted by the shaded region. This range maximises AGN coverage while ensuring that selection effects are minimised in both samples. \begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[scale=0.7]{./figures/FIG_paper4_2.ps} \end{tabular} \end{center} \caption{The colour-magnitude diagram (CMD) for LINERs (left panel, red diamonds), Seyferts (middle panel, black triangles) and X-ray AGN (right panel, blue squares). The demarcations given by the solid and dashed lines represent the conventions adopted to separate the blue cloud from the green valley, and the latter from red sequence objects (Equation~\ref{eqn:color_deep2}), respectively \citep{Willmer2006}. The LINER sub-sample is composed of 116 objects, all of them lying on the red sequence by definition. The Seyferts sub-sample is composed of 131 objects, with 97 of them on the red sequence and 34 in the green valley. Finally, from our X-ray sample of 68 objects, 36 sources are red, 16 are green, and the remaining 16 blue.
The underlying CMD of the population from which all AGN are drawn is shown in each panel with grey contours and black points. This parent population, in the left-hand and middle panels, comprises objects with accurate redshifts ($Q\ge3$), $\delta_{3}$ environment measures, covered [OII], [OIII] and ${H{\beta}}$, and redshifts between $0.72$ and $0.85$. In the right-hand panel, the grey contours and black points represent all objects in the EGS field with accurate redshifts ($Q\ge3$) and $\delta_{3}$ environment measures, and redshifts between $0.6$ and $1.1$.} \label{fig:CMD} \end{figure*} \subsection{The X-Ray selected AGN sample (XSS)} \label{sec:xray} AEGIS Chandra X-ray sources within the EGS field are optically and spectroscopically identified by cross-correlating with the DEEP2 photometric and redshift catalogues, following the prescriptions presented by \cite{Georgakakis2007}. They cover X-ray luminosities of $10^{41}\!\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}\!{\rm L_X (erg/s)}\!\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}\!10^{44}$ in host galaxies of luminosity $-19\!\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}\!M_{B}-5\log h\!\lower.5ex\hbox{$\; \buildrel < \over \sim \;$}\!-22$. The base X-ray sample comprises a total of $113$ reliably matched objects. In Figure~\ref{fig:XSS_selection} we show the redshift histogram of our X-ray catalogue. To extract a sample that is as closely comparable to the OSS as possible while simultaneously maximising AGN number and completeness, we restrict the X-ray sources to the redshift range $0.6\!<\!z\!<\!1.1$. This is wider than the OSS redshift window; however, both samples (OSS and XSS) have similar redshift means, and we assume that the evolution effects for sources outside the OSS redshift window do not dominate our results (or at least do not differ significantly from evolution in the red sequence population itself).
In this redshift range the number of reliable X-ray AGN drops to $68$, including $52$ red-ward of $(U\!-\!B)>0.8$ (i.e. a green valley cut), and $36$ red-ward of Equation~\ref{eqn:color_deep2} (i.e. a red sequence cut). \subsection{AGN in colour-magnitude space} \label{sec:cmd} In Figure~\ref{fig:CMD} we show the colour-magnitude diagram (CMD) for LINERs (left panel), Seyferts (middle panel) and X-ray AGN (right panel). The demarcations given by the solid and dashed lines represent the conventions adopted to separate the blue cloud from the green valley, and the latter from red sequence objects (Equation~\ref{eqn:color_deep2}), respectively (\citealt{Willmer2006}; Yan et al. in prep.). Here, LINERs are red sequence galaxies by definition. As explained above, this restriction is supported by the fact that local LINERs are almost exclusively red \citep{Yan2006}. For Seyferts, $\sim 80\%$ lie on the red side of the CMD, with the remainder residing in the green valley. Finally, for the XSS AGN, $\sim 50 \%$ of the sources are red, $\sim 25 \%$ are green, and the remaining $\sim 25 \%$ blue. The grey contours in each panel show the underlying DEEP2 CMD within the same redshift range. \begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[scale=0.7]{./figures/FIG_paper5_3.ps} \end{tabular} \end{center} \caption{The AGN fraction in red sequence galaxies versus local galaxy over-density, $\delta_{3}$, for LINERs (left panel) and Seyferts (right panel). For each, the respective symbols (diamonds for LINERs and squares for Seyferts) show the median measure in bins of low, mean and high density. Vertical error-bars represent the Poissonian uncertainty, while horizontal error-bars show the size of each density range.
We also show how the AGN fraction varies smoothly with environment using a sliding box of width $0.3$ dex shifted from low to high density in increments of $0.025$ dex (dotted lines with shaded area showing the $1\sigma$ uncertainty in the sliding fraction). The overall fraction of LINERs and Seyferts is plotted with horizontal dashed lines. This figure shows some evidence that LINERs tend to favour high density environments relative to the underlying red sequence, whereas Seyferts have little (or no) environment dependencies.} \label{fig:OSS_result} \end{figure*} \subsection{Errors and completeness} \label{sec:errors} Our greatest source of error is that of noise from small number statistics, given the low number of AGN we have available in the DEEP2 and AEGIS surveys in any particular environment bin. Such current-generation high-redshift catalogues are thus still limited in the extent to which the statistical nature of the AGN population can be examined. All errors calculated in this paper were determined by propagating the Poissonian uncertainties on the number of objects. Due to the small number statistics, the errors obtained with this method will dominate any cosmic variance effects in the observed fields \citep{NewmanDavis2002}. It should be noted that the DEEP2 survey is by design incomplete. At $z\sim1$, approximately $60\%$ of the actual objects are observed by the telescope. Moreover, redshifts are successfully obtained for around $70\%$ of the target parent population (based on tests with blue spectroscopy, most failures are objects at $z>1.4$ (Steidel, priv. comm.)). This should be carefully considered in any statistic that counts absolute numbers of objects. In our work, however, we deal with \emph{relative} numbers of objects, i.e. the AGN fraction.
We assume, to first order (and to the level of uncertainty given by the Poisson error), that any variation in redshift success or targeting rate between the AGN sample and the red sequence parent population is the same in low density regions as it is in high density regions. In principle, one might expect an object identified as an AGN to be easier to detect (or to obtain a redshift for) than a ``regular'' red sequence object, owing to the presence of distinctive features in its spectrum. However, both \cite{Cooper2005} and \cite{Gerke2005} found that DEEP2 selection rates are essentially independent of local density. Finally, because of the different Seyfert and LINER selections we find some inevitable (but small) overlap between the two populations, $7\%$ of the total in our case, where a single object has been classified as both AGN types. We have re-calculated all our results excluding these dual-class objects and find only trivial differences. For the sake of maximising statistics we have not removed such objects from the OSS; however, we note that they may constitute an interesting sub-population whose physical implications warrant further investigation. \section{Results} \label{sec:results} In this section we present our primary result: the dependence of the AGN fraction in the red sequence on local environment density. We will also extend the analysis to include green valley objects. Figure~\ref{fig:OSS_result} presents the density dependence of the fraction of $z\sim1$ red sequence AGN, for LINERs (left panel) and Seyferts (right panel) separately. In each panel, the respective symbols show the median measure in bins of low, mean and high density environments (each of them encompassing one third of the OSS), where the horizontal error-bars indicate the width of each bin, and the vertical error-bars show the Poisson uncertainty in the measured fraction, as described in Section~\ref{sec:errors}.
We also show how the AGN fraction varies smoothly with environment using a sliding box of width $0.3$ dex, shifted from low to high density in intervals of $0.025$ dex (dotted line). The accompanying grey-shaded regions correspond to the $1\sigma$ uncertainties in the sliding fraction. \begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[scale=0.7]{./figures/FIG_paper6.ps} \end{tabular} \end{center} \caption{The AGN fraction versus environment for our X-ray selected sample (XSS), using the same format as in Figure~\ref{fig:OSS_result}. In the left panel, squares show the fraction of red sequence X-ray AGN in the three density bins considered. In the right panel we extend this analysis to include green valley AGN. Note that this extension does not change the results in any significant way. XSS AGN behave more like Seyferts than LINERs (see Figure~\ref{fig:OSS_result}), with the fraction showing only a weak (or no) environmental dependence to within the errors.} \label{fig:XSS_result} \end{figure*} Evidence for a trend in the behaviour of the LINERs is quite apparent in Figure~\ref{fig:OSS_result}, suggesting the possibility that these objects tend to favour high density environments more strongly than the majority of red sequence galaxies. This is in contrast to the behaviour of red Seyferts, which show little (or no) environment dependence relative to the red sequence. This is a key result that will be discussed in more detail in the following sections. We now consider the X-ray catalogue drawn from the AEGIS Chandra imaging. Figure~\ref{fig:XSS_result} presents the X-ray selected AGN fraction versus local environment (note that the same format used in Figure~\ref{fig:OSS_result} has been applied here). In the left panel we show the fraction of red X-ray AGN and in the right panel we extend the analysis to include green valley X-ray AGN and galaxies. Including green valley objects does not significantly alter our results.
Red sequence X-ray selected AGN appear to behave similarly to optically selected Seyferts in terms of their lack of environmental preference, and differently from the LINER population in high density environments. This is in agreement with the results of \cite{Georgakakis2007}, also using the AEGIS data. We note that trends at levels comparable to or smaller than our current errors may well exist, and could only be revealed with improved statistics. We have tested the significance of the results in Figures~\ref{fig:OSS_result} and \ref{fig:XSS_result} in a number of ways. Since LINERs show the most interesting environment dependencies we will focus our tests on this population. We randomly draw $1000$ sub-samples from the red sequence and replace the LINER sample with each of these random populations. After repeating our analysis for each we find that only $2\%$ of the random sub-samples show similar density dependencies to the LINER population (i.e. results at least as pronounced as that in Figure~\ref{fig:OSS_result}). The LINER environment dependence seen in the left panel of Figure~\ref{fig:OSS_result} deviates by at least $2\sigma$ (actually almost $2.5\sigma$) from a random selection of red sequence galaxies. Additionally, we can confirm that the trend in Figure~\ref{fig:OSS_result} is not due to an implicit dependence of colour or magnitude on environment within the red sequence. This was checked by repeatedly replacing the LINER sample with randomly drawn objects with the same colour or colour and magnitude distributions, and comparing their density distributions with that of the real LINERs. The mean density of the LINER population is $1+\delta_3 = 0.37 \pm 0.06$, almost double that for randomly colour selected samples, which have $1+\delta_3 = 0.20 \pm 0.01$, and randomly colour and magnitude selected samples, with $1+\delta_3 = 0.19 \pm 0.01$. Similar tests were performed on Seyferts and X-ray AGN.
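The randomization test described above can be sketched in a few lines. This is a simplified illustration under assumed inputs, not the authors' code; here the statistic compared between the real and random sub-samples is the mean overdensity.

```python
import numpy as np

def environment_significance(dens_agn, dens_red, n_trials=1000, seed=0):
    """Fraction of random red-sequence sub-samples (same size as the AGN
    sample) whose mean overdensity 1 + delta_3 is at least as high as the
    observed one.  Sketch only: inputs are assumed to be arrays of
    1 + delta_3 values for the AGN and the parent red sequence."""
    rng = np.random.default_rng(seed)
    observed = dens_agn.mean()
    hits = 0
    for _ in range(n_trials):
        sub = rng.choice(dens_red, size=len(dens_agn), replace=False)
        hits += sub.mean() >= observed
    return hits / n_trials
```

A small returned fraction (the text finds $\sim2\%$, i.e. a $\ga2\sigma$ deviation) indicates that the AGN environments are unlikely to arise from random selection within the red sequence.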
\begin{figure*} \begin{center} \begin{tabular}{c} \includegraphics[scale=0.7]{./figures/FIG_paper7_2.ps} \end{tabular} \end{center} \caption{The SDSS red sequence AGN fractions versus environment, plotted using the same format from Figures~\ref{fig:OSS_result} and \ref{fig:XSS_result}. The left panel shows the fraction of low-z SDSS LINERs (squares) and, for comparison, the high-z DEEP2 LINER fraction (triangles) reproduced from Figure~\ref{fig:OSS_result}. The right panel shows the fraction of SDSS Seyferts, with the equivalent DEEP2 result again reproduced from Figure~\ref{fig:OSS_result} for comparison. SDSS LINERs and Seyferts both show a decreasing AGN fraction towards high density environments, unlike that seen in DEEP2. At $z\sim1$, LINERs and Seyferts are approximately equally abundant, whereas by $z\sim0$ the relative abundance of Seyferts to LINERs has dropped by approximately a factor of $7$.} \label{fig:SDSS_result} \end{figure*} \section{A Comparison with Local AGN in the SDSS} \label{sec:sdss} Our results thus far suggest that the $z\sim1$ red sequence LINER fraction depends on environment in a way that is different from Seyferts. This dependence takes the form of an increase in the relative abundance of LINERs in higher density environments. In this section we address the question of AGN fraction evolution. Specifically, do local red sequence LINERs also favour dense environments and Seyferts show little environment dependence? Our low redshift AGN sample is drawn from the Sloan Digital Sky Survey (SDSS, \citealt{York2000}) spectroscopic DR4 catalogue \citep{Adelman2006}. The SDSS DR4 covers almost $5000$ square degrees of the sky in five filters ($ugriz$) to an apparent magnitude limit of $r=17.7$. The redshift depth is approximately $z\sim0.3$, with a median redshift of $z=0.1$. DR4 consists of $\sim400\,000$ galaxies. The same environment measure is applied for consistency with the DEEP2 analysis above (see \citealt{Cooper2007a} for full details). 
To measure the low redshift AGN fraction we follow a similar procedure to that used for the high-z results. This procedure isolates a well defined red sequence population and identifies AGN within it. The base red sequence population is constructed by selecting SDSS galaxies within the redshift interval $0.05\!<\!z\!<\!0.15$ and applying the rest-frame colour cut $(U-B)> -0.032M_{B}+0.483$ (\citealt{Cooper2007a}, in agreement with the previous analysis by \citealt{Blanton2006}). For consistency with our DEEP2 sample we take a faint absolute magnitude limit of $M_{\rm B} - 5\log h = -20$ (representing the approximate faint-end of the red sequence at $z\sim0.8$ within the DEEP2 data) and evolve it $0.88$ magnitudes to mimic the evolution in the galaxy luminosity function between DEEP2 and SDSS (assuming evolution of $1.3$ magnitudes per unit redshift from \citealt{Willmer2005} and mean DEEP2 and SDSS redshifts of $0.78$ and $0.1$, respectively). With these constraints the underlying low-z red galaxy sample is composed of $5335$ objects. To select AGN from the SDSS red sequence sample we use the same set of criteria described in Section \ref{sec:optical} with the following modifications. Since SDSS spectra have much higher signal-to-noise than DEEP2 spectra, emission lines are easier to detect. This will result in differences between the two AGN samples, as the SDSS sample will include weaker optical AGN than DEEP2 can detect. Therefore, we determine a different line detection criterion by comparing the errors in the emission line EW measurements. Typical line measurement errors in DEEP2 are almost exactly twice as large as those in SDSS. We thus change all $2\sigma$ line detection criteria to $4\sigma$ for selecting AGN in the SDSS. The final low redshift AGN sample comprises $720$ objects: $611$ LINERs and $109$ Seyferts. This should be contrasted with the high-z sample, which has $213$ objects, of which $116$ are LINERs and $97$ are Seyferts.
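The two numerical conversions in this section follow from simple arithmetic, sketched here for concreteness (the sign convention of the magnitude shift is left out, since the text quotes only its size):

```python
# Evolution of the faint-end magnitude limit between the DEEP2 and SDSS
# samples: 1.3 mag per unit redshift (Willmer et al. 2005) applied over
# the difference of the mean redshifts, 0.78 and 0.1.
z_deep2, z_sdss = 0.78, 0.1
evolution_mag = 1.3 * (z_deep2 - z_sdss)   # ~0.88 mag, as quoted in the text

# Rescaling of the emission-line detection threshold: typical DEEP2 EW
# errors are about twice the SDSS ones, so the 2-sigma DEEP2 criterion
# maps onto a 4-sigma criterion for the SDSS sample.
deep2_threshold_sigma = 2.0
error_ratio = 2.0
sdss_threshold_sigma = deep2_threshold_sigma * error_ratio   # 4 sigma
```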
In Figure~\ref{fig:SDSS_result} we present the SDSS red sequence AGN fractions versus environment (this figure follows the same format used in Figures~\ref{fig:OSS_result} and \ref{fig:XSS_result}). The left panel shows the fraction of local red sequence LINERs (squares) and, for comparison, the high redshift LINER fraction (triangles) reproduced from Figure~\ref{fig:OSS_result}. The right panel shows the Seyfert fraction in the SDSS red population (squares) and similarly for DEEP2 (triangles, from Figure~\ref{fig:OSS_result}). The left panel of Figure~\ref{fig:SDSS_result} reveals a different LINER trend with environment at $z\sim0$ than that seen at $z\sim1$. LINERs in the SDSS show no indication of favouring high density regions relative to other environments. In fact, SDSS LINERs show a statistically significant tendency to reside in mean-to-low density environments and to clearly disfavour those of high density. In the right panel of Figure~\ref{fig:SDSS_result}, Seyferts also follow a clear trend of decreasing AGN fraction towards denser SDSS environments. This is in contrast to the weak (or no) environmental trend in the high redshift DEEP2 Seyfert population. It is important to note that comparing the overall amplitudes of the LINER and Seyfert fractions between DEEP2 and SDSS is dangerous, as subtleties in the selection of the underlying red sequence can shift the absolute values around somewhat. The relative trends across environment \emph{within} a population are much more robust, however, and these can be compared between high and low redshift (for example, we do not expect any selection effects to have a significant density dependence). Additionally, the difference in abundance \emph{between} Seyferts and LINERs at a given redshift can be contrasted. Seyferts and LINERs are approximately equally abundant at $z\sim1$. By $z\sim0$, however, the Seyfert population has diminished relative to the LINER population by over a factor of $7$.
This decline in the relative number of Seyfert AGN by redshift zero will be discussed in the next section. \section{Discussion} \label{sec:discussion} \subsection{Previous measures of AGN and environment} It is difficult to make direct comparisons of our results with previously published works. This is because past environment studies have tended to focus on the AGN fraction of host galaxies of \emph{all} colours, and also to mix both LINER and Seyfert classes into a combined AGN population. Our selection is restricted to the red sequence only (and also the green valley), which allows us to compare high and low redshift populations and also to study the AGN--environment connection without the colour--environment correlation. Locally, a number of SDSS measures of AGN and environment have been made. Using the SDSS early data release, \cite{Miller2003} found no dependence on environment for the spectroscopically selected AGN fraction in a sample of 4921 objects. Specifically, the authors report no statistically significant decrease in the AGN fraction in the densest regions, although their densest points visually suggest such a trend. This result is broadly consistent with the results of both LINERs and Seyferts in Figure~\ref{fig:SDSS_result}, even though we only consider red sequence objects. \cite{Kauffmann2003} also found little environment dependence of the overall fraction of detected AGN in a sample drawn from the SDSS DR1. However, they do report different behaviour when the sample is broken into strong AGN ($\log L[{\rm OIII}]>7$, ``Seyferts'') and weak AGN ($\log L[{\rm OIII}]<7$, ``LINERs''). For Seyferts they find a significant preference for low-density environments, especially when hosted by more massive galaxies. This is consistent with our SDSS findings in Figure~\ref{fig:SDSS_result} and different to what we find at $z\sim1$ in the DEEP2 fields.
For LINERs, \citeauthor{Kauffmann2003} measure little environment dependence, whereas we find a significant decline in the SDSS LINER fraction in our overdense bins. The explanation for this difference may come from our removal of possible contaminating star forming galaxies by restricting our analysis to the red sequence. Also, we impose a higher line detection threshold on the SDSS data to provide a fair comparison with DEEP2 (see Section~\ref{sec:sdss}). Finally, \cite{Kauffmann2003} required that all lines for the BPT diagnosis were detected, which biases their LINER sample towards the strongest objects. Between redshifts $z=0.4$ and $z=1.35$, \cite{Cooper2007a} show that \emph{red galaxies} within the DEEP2 survey favour overdense environments, although the blue fraction in clusters does become larger as one moves to higher redshift \citep[see also][]{Gerke2007, Coil2007b}. At all redshifts there exists a non-negligible red fraction in underdense environments, which evolves only weakly, if at all. \cite{Nandra2007} show that the host DEEP2 galaxies of X-ray selected AGN within the EGS field ($\sim 1/6$ of the DEEP2 survey volume) occupy a unique region of colour-magnitude space. These objects typically live at the top of the blue cloud, within the green valley, or on the red sequence. \cite{Georgakakis2007} measure the mean environment of this population and confirm that, on average, they live in regions of density above the survey mean. They find this to be true for all host galaxy magnitudes studied ($M_B \lesssim -21$) and colours ($U-B \gtrsim 0.8$) (note the DEEP2 red sequence begins at $U-B\sim1$). However, given limited sample sizes, they were not able to establish whether the environment distribution of the X-ray AGN differed from that of the red population, rather than the DEEP2 population as a whole.
\subsection{Understanding the sequence of events} From our results alone a comprehensive understanding of the different environment trends within the AGN population from high to low redshift is not possible. However, some speculation and interpretation can be made by drawing on our broader knowledge of these active objects from the literature. One possible scenario posits that LINERs and Seyferts occur in different types of galaxies. In this picture, LINERs are often associated with young red sequence galaxies \citep{Graves2007} and are especially common among post-starburst (K+A) galaxies \citep{Yan2006}. These galaxies would already be in the quenched phase of their evolution, while still relatively young. Merger-triggered starbursts and subsequent quasar winds are possible mechanisms for producing rapid star formation shut-down in such objects \citep{Hopkins2006}. The gas rich merging events required in this scenario are common in overdense environments at $z\sim1$ as clusters and massive groups assemble. By $z\sim0$, however, the activity in these environments has mostly ended. Hence, if this picture is correct, one may expect an over-abundance of red sequence LINERs in dense environments at high redshift (since both star formation and rapid quenching are common) that is not seen locally. This may be consistent with the trends found in the left panel of Figure~\ref{fig:SDSS_result}. Seyfert galaxies, on the other hand, could be objects in transition from the blue cloud to the red sequence \citep{Groves2006}, whose AGN are thought to be initiated by internal processes (and not mergers), as inferred from the spiral structure often found in their hosts (e.g. M77), which mergers act to destroy. From this, one may expect our red sequence Seyfert population to represent the tail of the colour distribution of transitioning objects, whose dependence on environment is determined by secular mechanisms and which would evolve accordingly.
At high redshift, disk galaxies are commonly found in all environments, including the most dense. In contrast, overdense regions in the local Universe are dominated by passive ellipticals and show an absence of spirals. This would be broadly consistent with our findings in the right panel of Figure~\ref{fig:SDSS_result}, where the most significant evolution in the red Seyfert fraction arises from a depletion in overdense regions relative to other environments, from high redshift to low. Alternatively, some authors claim that LINERs and Seyferts form a continuous sequence, with the Eddington rate the primary distinguishing factor \citep{Kewley2006}. In this scenario, Seyferts are young objects with actively accreting black holes. As the star formation begins to decay so does the accretion rate, and the galaxy enters a transition phase. Eventually, a LINER-like object emerges, with an old stellar population and very low supermassive BH accretion rate. This picture is supported by recent studies in voids from \cite{Constantin2007}. At high redshift, our results show that red Seyferts and LINERs are approximately equally abundant. By $z\sim0$ however, the Seyfert population has declined relative to the LINER population by over a factor of $7$. This may be interpreted as the natural transformation of Seyferts into LINERs with time, within a galaxy population which is smoothly reddening from $z\sim1$ to $z\sim0$. Moreover, the fact that high-z LINERs reside preferentially in high-density environments may imply that this Seyfert-LINER transition is more efficient in dense regions of the Universe. \section{Summary} \label{sec:summary} In this paper we measure the dependence of the AGN fraction of red galaxies on environment in the $z\sim1$ DEEP2 Galaxy Redshift Survey and local $z\sim0.1$ SDSS. 
We restrict our analysis to the red sequence to maintain a clean and consistent selection of AGN at high and low redshift; this also reduces the additional effects of environment associated with galaxy colour. Our results can be summarised as follows: \begin{itemize} \item[(i)] High redshift LINERs at $z\sim1$ in DEEP2 appear to favour higher density environments relative to the red sequence from which they are drawn. In contrast, Seyferts and X-ray selected AGN at $z\sim1$ show much weaker (or no) environmental dependencies within the same underlying population. Extending our analysis to include green valley objects has little effect on the results. \item[(ii)] Low redshift LINER and Seyfert AGN in the SDSS both show a slowly declining red sequence AGN fraction towards high density environments. This is in contrast to the high redshift result. \item[(iii)] At $z\sim1$, Seyferts and LINERs are approximately equally abundant. By $z\sim0$, however, the Seyfert population has declined relative to the LINER population by over a factor of $7$. \end{itemize} It is important to remember that such measures are difficult to make with current data, and hence we remain limited by statistics in the extent to which we can physically interpret our results. Regardless, robust outcomes of our analysis are the differences between the LINER and Seyfert AGN populations in high density regions, and between high and low redshift in all environments. Our results indicate that a greater understanding of both AGN and galaxy evolution may be possible if future analyses simultaneously focus on the detailed subdivision of different AGN classes, host galaxy properties, and their environment. \section*{Acknowledgements} AMD is supported by the Ministerio de Educaci\'on y Ciencia of the Spanish Government through FPI grant AYA2005-07789, and wishes to thank the University of California Berkeley Astronomy Department for their hospitality during the creation of this work.
DC acknowledges support from NSF grant AST-0071048. ALC is supported by NASA through Hubble Fellowship grant HF-01182.01-A, awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS 5-26555. Support for this work was provided by NASA through the Spitzer Space Telescope Fellowship Program. Funding for the DEEP2 survey has been provided by NSF grants AST-0071048, AST-0071198, AST-0507428, and AST-0507483. The data was obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the University of California, Caltech and NASA. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The DEEP2 team and Keck Observatory acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community and appreciate the opportunity to conduct observations from this mountain. The DEEP2 and AEGIS websites are http://deep.berkeley.edu/ and http://aegis.ucolick.org/. Funding for the SDSS has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, NASA, the NSF, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS website is http://www.sdss.org/. \bibliographystyle{mnras}
\section{Introduction} There has been a considerable amount of study on noncoherent wireless channels where neither the transmitter nor the receiver knows the channel \cite{marzetta1999capacity,Abou_Faycal_noncoherent,Zheng_Tse_Grassmann_MIMO,Koch2013,Joyson_noncoh_diamond}. However, most of the progress has been on unicast networks. To the best of our knowledge, the \emph{noncoherent} interference channel (IC) has not been studied from an information theoretic viewpoint. In this paper, we consider the noncoherent $2$-user IC with symmetric statistics and study the achievable generalized degrees of freedom (gDoF) region as a first step towards understanding its capacity region. The gDoF region captures the asymptotic behaviour of the prelog of the points on the boundary of the capacity region. For a point-to-point network parameterized by channel strengths\footnote{For ease of analysis, we absorb the transmit SNR into the channel strengths, hence the capacity characterization does not include the transmit SNR explicitly. The noise at the receivers is assumed to be of unit variance.} $\rho_{1}^{2},\rho_{2}^{2},\ldots,\rho_{4}^{2}$ on its links, the complete capacity characterization obtains the capacity for all values of $\rho_{1}^{2},\rho_{2}^{2},\ldots,\rho_{4}^{2}$. The DoF characterization finds the asymptotic behavior of the prelog of capacity along the line $\lgbrac{\rho_{1}^{2}}=\lgbrac{\rho_{2}^{2}}=\cdots=\lgbrac{\rho_{4}^{2}}$ in the $4-$dimensional space of link strengths in dBm. The gDoF characterization is more general; it finds the asymptotic behavior of the prelog of capacity along the line $\brac{\lgbrac{\rho_{1}^{2}}/\gamma_{1}}=\brac{\lgbrac{\rho_{2}^{2}}/\gamma_{2}}=\cdots=\brac{\lgbrac{\rho_{4}^{2}}/\gamma_{4}}$ with constants $\gamma_{1},\ldots,\gamma_{4}$.
Equivalently, for the gDoF characterization, we can set $\lgbrac{\rho_{1}^{2}}/\gamma_{1}=\lgbrac{\rho_{2}^{2}}/\gamma_{2}=\cdots=\lgbrac{\rho_{4}^{2}}/\gamma_{4}=\lgbrac{\snr}$ and let $\snr\rightarrow\infty$. For a multiuser network, for example the 2-user IC with two rates and channel strengths $\rho_{1}^{2},\rho_{2}^{2},\ldots,\rho_{4}^{2}$ on its links, the complete capacity characterization obtains the \emph{capacity region} for all values of $\rho_{1}^{2},\rho_{2}^{2},\ldots,\rho_{4}^{2}$. We obtain a gDoF region in this case by letting $\lgbrac{\rho_{1}^{2}}/\gamma_{1}=\lgbrac{\rho_{2}^{2}}/\gamma_{2}=\cdots=\lgbrac{\rho_{4}^{2}}/\gamma_{4}=\lgbrac{\snr}$ with $\snr\rightarrow\infty$ and characterizing the asymptotic behavior of the prelog of the points in the capacity region. The gDoF characterization was first used in \cite{etkin_tse_no_fb_IC} to characterize the asymptotic behavior of the capacity region of a 2-user symmetric interference channel (IC) at high SNR. There, the link strengths were set to scale as $\snr,\snr^{\alpha},\snr^{\alpha},\snr$ for the 4 links of the IC. The method of scaling the channel strengths with different SNR exponents to obtain the gDoF region is also used in other works such as \cite{gDoF_K_user_IC,gDoF_MIMO_IC}. A natural training-based scheme learns the channel at the receiver and uses the estimate to operate a coherent decoder. Such a scheme is known to be degrees of freedom (DoF) optimal for the single-user multiple input multiple output (MIMO) channel \cite{Zheng_Tse_Grassmann_MIMO}. A natural question to ask is whether operating the noncoherent IC with such a standard training-based scheme is gDoF optimal. The main observation in this paper is that we can improve on the gDoF of the natural training-based coherent scheme in several regimes for the noncoherent IC.
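As a toy numerical illustration of this definition (not taken from the paper): for a coherent point-to-point link with capacity $C(\snr)=\log_2(1+\snr^{\gamma})$, the prelog ratio $C(\snr)/\log_2\snr$ approaches the gDoF $\gamma$ as $\snr\rightarrow\infty$.

```python
import math

def prelog_ratio(capacity, snr_db):
    """Evaluate C(SNR) / log2(SNR) at a given SNR in dB; as SNR grows,
    this ratio approaches the gDoF.  A toy sketch of the definition,
    not a quantity computed in the paper."""
    snr = 10 ** (snr_db / 10)
    return capacity(snr) / math.log2(snr)

gamma = 0.5
cap = lambda snr: math.log2(1 + snr ** gamma)  # toy coherent link, gDoF = gamma
ratios = [prelog_ratio(cap, db) for db in (30, 60, 120, 240)]
```

The ratio decreases towards $\gamma$ from above as the SNR in dB is doubled.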
\begin{figure} \begin{minipage}[c][1\totalheight][t]{0.45\textwidth} \begin{center} \includegraphics[scale=0.6]{Nonfeedback_model} \par\end{center} \caption{\label{fig:nonfeedback_ic}The channel model without feedback.} \end{minipage}\hfill{} \begin{minipage}[c][1\totalheight][t]{0.45\textwidth} \begin{center} \includegraphics[scale=0.6]{Feedback_model} \par\end{center} \caption{\label{fig:feedback_ic}The channel model with feedback.} \end{minipage} \end{figure} We introduce a \emph{noncoherent} version of the Han-Kobayashi scheme \cite{han_kobayashi}, where the transmitters use superposition coding, rate-splitting their messages into common and private parts based on the average interference-to-noise ratio\footnote{We use the abbreviation INR for the \emph{average} interference-to-noise ratio in the context of fading channels and not for the (instantaneous) interference-to-noise ratio. Similarly, we use the abbreviation SNR for the \emph{average} signal-to-noise ratio.} (INR), and the receivers use noncoherent joint decoding. We also consider the scheme which treats interference-as-noise (TIN) and time division multiplexing (TDM) between single-user transmissions with equal time-sharing between the users. The TIN and TDM schemes are instantiated using one training symbol\footnote{The TIN and TDM schemes can also be implemented in a noncoherent manner without training symbols, but it can be verified that the gDoF performance remains the same.} in each coherence period, as there is only one channel coefficient to be estimated for each user. We evaluate the achievable gDoF region with these schemes and compare it to a natural training-based scheme. For a 2-user IC, the standard training-based scheme uses at least 2 symbols in every coherence period $T$ to train the channels\footnote{As we are considering high SNR behavior, one training symbol is sufficient for each link.}.
The rest of the symbols are used for communication using a rate-splitting scheme for the coherent fading IC (see \cite{joyson_fading_TCOM} for the coherent fast fading IC scheme). We also consider the noncoherent IC with channel state and output feedback. Our main results on the gDoF of the noncoherent IC are illustrated in Figures \ref{fig:Symmetric-gDoF_T=00003D4} and \ref{fig:Symmetric-gDoF_T=00003D6}. \begin{figure} \begin{minipage}[c][1\totalheight][t]{0.45\textwidth} \begin{center} \includegraphics[scale=0.6]{dof_T_eq_4} \par\end{center} \caption{\label{fig:Symmetric-gDoF_T=00003D4}Symmetric achievable gDoF of the noncoherent IC without feedback for coherence time $T=4$.} \end{minipage}\hfill{} \begin{minipage}[c][1\totalheight][t]{0.45\textwidth} \begin{center} \includegraphics[scale=0.6]{dof_T_eq_6} \par\end{center} \caption{\label{fig:Symmetric-gDoF_T=00003D6}Symmetric achievable gDoF of the noncoherent IC without feedback for coherence time $T=6$.} \end{minipage} \end{figure} For the case without feedback, we observe that TIN is better than the other schemes when the INR is much lower than the average SNR. In contrast, for the case when the channel is perfectly known, TIN has the same performance as rate-splitting schemes for low INR. However, for the noncoherent case, rate-splitting schemes based on the INR have lower gDoF. We believe that this is due to the added uncertainty in the interfering link along with the uncertainty of the interfering message to be decoded. When the coherence time is low ($T\leq4$) and interference is high, it is better to avoid interference using the TDM scheme; the uncertainty in the interfering link reduces the amount of message that can be decoded in the noncoherent scheme (Figure \ref{fig:Symmetric-gDoF_T=00003D4}).
For larger coherence time ($T\geq5$) and high interference (Figure \ref{fig:Symmetric-gDoF_T=00003D6}), the effect of decoding the interfering message (which is longer for larger coherence time) and removing the interference dominates the effect of the uncertainty in the interfering link (which is constant throughout the coherence time), especially when the interference level $\alpha=\lgbrac{\inr}/\lgbrac{\snr}$ is around\footnote{The behavior around $\alpha=2/3$ is explained in \cite{etkin_tse_no_fb_IC} in terms of the common information decoded at both receivers and the private information decoded only at the intended receiver. The rate initially increases due to the larger amount of common information that can be decoded to remove the interference. Afterwards, the behavior is dominated by the decrease in the private information.} $2/3$. Here our noncoherent rate-splitting scheme gives the best performance. We also provide some numerics to show that our results can demonstrate improvement in the rates compared to the natural training-based scheme at finite SNRs; the rate-SNR points are given in Table \ref{tab:Comparison-of-rates} on page \pageref{tab:Comparison-of-rates}. For the feedback case, we again propose a noncoherent scheme that performs rate-splitting based on the INR, similarly to \cite{joyson_fading_TCOM}. We evaluate the gDoF region, compare it with a standard training-based scheme, and prove that the noncoherent scheme outperforms the standard training-based scheme (see Section \ref{sec:noncoh_FFIC_FB} on page \pageref{sec:noncoh_FFIC_FB}). The main results for the feedback case are illustrated in Figures \ref{fig:Symmetric_gDoF_fb_T=00003D3} and \ref{fig:Symmetric_gDoF_fb} on page \pageref{fig:Symmetric_gDoF_fb_T=00003D3}. The noncoherent scheme with feedback increases the gDoF compared to the noncoherent scheme without feedback.
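For reference, the coherent (perfect-CSI) symmetric gDoF of the 2-user Gaussian IC without feedback is the well-known ``W curve'' of \cite{etkin_tse_no_fb_IC}, whose kink at $\alpha=2/3$ is the behavior referred to in the footnote above. A minimal sketch of that baseline follows; note this is the perfect-CSI reference curve, not one of the noncoherent achievable curves of Figures \ref{fig:Symmetric-gDoF_T=00003D4} and \ref{fig:Symmetric-gDoF_T=00003D6}.

```python
def coherent_w_curve(alpha):
    """Per-user symmetric gDoF of the coherent 2-user Gaussian IC without
    feedback (Etkin-Tse-Wang), as a function of alpha = log(INR)/log(SNR).
    Perfect-CSI baseline only; the noncoherent curves in the paper differ."""
    if alpha <= 0.5:
        return 1 - alpha
    if alpha <= 2 / 3:
        return alpha
    if alpha <= 1:
        return 1 - alpha / 2
    if alpha <= 2:
        return alpha / 2
    return 1.0  # very strong interference: interference-free performance
```

The kinks at $\alpha=1/2$, $2/3$, $1$ and $2$ give the curve its characteristic ``W'' shape.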
Also, we observe that with feedback, the performance of our noncoherent scheme is better than the TIN scheme for $T\geq3$, even when the INR is low compared to the SNR. However, TDM still outperforms the other schemes when the INR is almost equal to the SNR. \subsection{Related Work} To the best of our knowledge, the capacity of the noncoherent interference channel has not received much attention in the literature. Hence we give an overview of the existing works on noncoherent wireless networks and the related work on interference channels. The noncoherent wireless model for the MIMO channel was studied by Marzetta and Hochwald \cite{marzetta1999capacity}. In their model, neither the receiver nor the transmitter knows the fading coefficients, and the fading gains remain constant within a block of length $T$ symbol periods. Across the blocks, the fading gains are independent and identically distributed (i.i.d.) according to a Rayleigh distribution. The capacity behavior at high SNR for the noncoherent MIMO channel was studied by Zheng and Tse in \cite{Zheng_Tse_Grassmann_MIMO}. The main conclusion of that work was that a standard training-based scheme is DoF optimal for noncoherent MIMO channels, a message distinct from our conclusions in this paper for the noncoherent IC. Some works have specifically studied the case with $T=1$ \cite{Taricco_Elia_97,Abou_Faycal_noncoherent,lapidoth2003capacity}. In \cite{Abou_Faycal_noncoherent}, it was demonstrated that for $T=1$, the capacity is achieved by a distribution with a finite number of mass points, but the number of mass points grows with SNR. The capacity for the $T=1$ case was shown to behave double logarithmically in \cite{lapidoth2003capacity}. There have been other works that studied noncoherent relay channels. The noncoherent single relay network was studied in \cite{Koch2013}, where the authors considered identical link strengths and unit coherence time.
They showed that under certain conditions on the fading statistics, the relay does not increase the capacity at high SNR. In \cite{Gohary_non_coherent_2014}, similar observations were made for the noncoherent MIMO full-duplex single relay channel with block-fading. They showed that Grassmannian signaling can achieve the DoF without using the relay. Also, for certain regimes, decode-and-forward with Grassmannian signaling was shown to approximately achieve the capacity at high SNR. The above works considered a DoF framework for the noncoherent model, in the sense that at high SNR, the link strengths are not significantly different, \emph{i.e.,} the links scale with the same SNR-exponent. The gDoF framework for the noncoherent MIMO channel was considered in \cite{Joyson_2x2MIMO_isit,Joyson_2x2_mimo_journal} and it was shown that several insights from the DoF framework may not carry over to the gDoF framework. It was shown that a standard training-based scheme is not gDoF optimal and that all antennas may have to be used for achieving the gDoF, even when the coherence time is low, in contrast to the results for the MIMO channel with i.i.d. links. In \cite{Joyson_noncoh_diamond}, the gDoF of the 2-relay diamond network was studied. The standard training-based schemes were proved to be sub-optimal and a new scheme was proposed, which partially trains the network and performs a scaling and quantize-map-forward operation \cite{ozgur_diggavi_2010_isit,ozgur_diggavi_2013,avest_det} at the relays. In this work, we study the noncoherent IC with symmetric statistics. This, we believe, is the first information-theoretic analysis of noncoherent channels in multiple unicast networks. The (coherent) IC is well understood in terms of its capacity \cite{han_kobayashi,chong2008han,etkin_tse_no_fb_IC,suh_tse_fb_gaussian} when the channels are perfectly known at the receivers and transmitters. 
The capacity region of the 2-user IC without feedback was characterized in \cite{etkin_tse_no_fb_IC}, to within 1 bit. In \cite{suh_tse_fb_gaussian}, a similar result was derived for the IC with feedback, obtaining the capacity region to within 2 bits. In \cite{joyson_fading_TCOM}, the approximate capacity region (within a constant additive gap) for fast fading interference channels (FF-IC), with no instantaneous CSIT but with perfect channel knowledge at the receiver, was derived. There, the authors used a rate-splitting scheme based on the average interference-to-noise ratio, extending the existing rate-splitting schemes for the IC \cite{etkin_tse_no_fb_IC,suh_tse_fb_gaussian}, and proved that this was approximately optimal for the FF-IC. This approximate capacity region was derived for the FF-IC without feedback and also for the case with feedback; feedback improves the capacity region for the FF-IC, similar to the case of the static IC \cite{suh_tse_fb_gaussian}. In this work, we extend the results from \cite{joyson_fading_TCOM} for the FF-IC (where the receivers know the channel, but not the transmitters) to the case when both transmitters and receivers do not know the channel, \emph{i.e., }the noncoherent IC. The paper is organized as follows. In Section \ref{sec:Notation}, we set up the problem and explain the notation used. In Section \ref{sec:NC_IC_noFB}, we discuss our results on the noncoherent IC without feedback and in Section \ref{sec:noncoh_FFIC_FB}, we discuss the noncoherent IC with feedback. In Section \ref{sec:Conclusions-and-remarks}, we give concluding remarks. Some of the proofs are deferred to the appendices. \section{Notation and system model\label{sec:Notation}} \subsection{Notational Conventions} We use the notation $\mathcal{CN}\brac{\mu,\sigma^{2}}$ for the circularly symmetric complex Gaussian distribution with mean $\mu$ and variance $\sigma^{2}$. The logarithm to base 2 is denoted by $\lgbrac{}$. 
We use the symbol $\sim$ with overloaded meanings: one to indicate that a random variable has a given distribution and another to indicate that two random variables have the same distribution. We use the notation $\doteq$ for order equality, \emph{i.e.}, we say $f_{1}\brac{\mathsf{SNR}}\doteq f_{2}\brac{\mathsf{SNR}}$ if \begin{equation} \lim_{\snr\rightarrow\infty}\frac{f_{1}\brac{\mathsf{SNR}}}{\lgbrac{\mathsf{SNR}}}=\lim_{\snr\rightarrow\infty}\frac{f_{2}\brac{\mathsf{SNR}}}{\lgbrac{\mathsf{SNR}}}. \end{equation} The symbols $\leqdof,\geqdof$ are defined analogously. We use a bold script for random variables and the normal script for deterministic variables. We use small letters for scalars, capital letters for vectors, and capital letters with an underline for matrices. \subsection{System Model} We consider the 2-user noncoherent Gaussian fading IC (Figure \ref{fig:nonfeedback_ic}) with coherence time $T$. We have \begin{equation} \Y_{1}=\boldsymbol{g}_{11}\X_{1}+\boldsymbol{g}_{21}\X_{2}+\Z_{1}, \end{equation} \begin{equation} \Y_{2}=\boldsymbol{g}_{12}\X_{1}+\boldsymbol{g}_{22}\X_{2}+\Z_{2}, \end{equation} where the $\X_{i}$, $\Y_{i}$, $\Z_{i}$ are $1\times T$ vectors and the links $\boldsymbol{g}_{ij}$ are fading. The transmit signals have the average power constraint \begin{equation} \frac{1}{T}\expect{\abs{\X_{i}}^{2}}=1,\label{eq:power_constraint} \end{equation} for $i\in\{1,2\}$. The realizations of $\boldsymbol{g}_{ij}$ for any fixed $\brac{i,j}$ with $i,j\in\{1,2\}$ are i.i.d. across the coherence blocks, and the realizations for different $\brac{i,j}$ are independent. We consider the case with symmetric fading statistics $\boldsymbol{g}_{11}\sim\boldsymbol{g}_{22}\sim\mathcal{CN}\brac{0,\snr}$ and $\boldsymbol{g}_{12}\sim\boldsymbol{g}_{21}\sim\mathcal{CN}\brac{0,\inr}$. 
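As a concrete numerical illustration of the system model above (a sketch of our own; all function and variable names are illustrative, not part of the analysis), the following generates one coherence block of the symmetric noncoherent IC and evaluates the interference level $\alpha=\lgbrac{\inr}/\lgbrac{\snr}$:

```python
import numpy as np

def cn(var, size, rng):
    """Samples of a circularly symmetric complex Gaussian CN(0, var)."""
    return np.sqrt(var / 2.0) * (rng.standard_normal(size) + 1j * rng.standard_normal(size))

def simulate_block(snr, inr, T, rng):
    """One coherence block of the 2-user symmetric noncoherent IC.

    The four link gains are constant within the block of length T and would be
    redrawn independently across blocks; no node observes their realizations.
    """
    g11, g22 = cn(snr, (), rng), cn(snr, (), rng)  # direct links ~ CN(0, SNR)
    g12, g21 = cn(inr, (), rng), cn(inr, (), rng)  # cross links  ~ CN(0, INR)
    # i.i.d. unit-power Gaussian inputs: one simple choice meeting (1/T) E|X_i|^2 = 1.
    X1, X2 = cn(1.0, T, rng), cn(1.0, T, rng)
    Z1, Z2 = cn(1.0, T, rng), cn(1.0, T, rng)      # unit-variance receiver noise
    Y1 = g11 * X1 + g21 * X2 + Z1
    Y2 = g12 * X1 + g22 * X2 + Z2
    return Y1, Y2

def interference_level(snr, inr):
    """alpha = log2(INR) / log2(SNR)."""
    return np.log2(inr) / np.log2(snr)

rng = np.random.default_rng(0)
Y1, Y2 = simulate_block(snr=2.0**20, inr=2.0**10, T=5, rng=rng)
print(Y1.shape, interference_level(2.0**20, 2.0**10))  # (5,) 0.5
```

Here $\mathsf{SNR}=2^{20}$ and $\mathsf{INR}=2^{10}$ give $\alpha=0.5$, the boundary between the first two regimes studied later.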
Neither the receivers nor the transmitters have knowledge of any of the realizations of $\boldsymbol{g}_{ij}$, but the channel statistics are known to both the receivers and the transmitters. Under the feedback model (Figure \ref{fig:feedback_ic}), after each reception, each receiver reliably feeds back the received symbols\footnote{The IC with rate-limited feedback is considered in \cite{VahidSuh_12_ratelimited}, where outputs are quantized and fed back. Our schemes can also be extended to such cases.}. We consider the delayed feedback of symbols in blocks of $T$; however, the results that we derive still hold even if the feedback is performed during every symbol period. The interference level $\alpha$ is defined as $\alpha=\lgbrac{\inr}/\lgbrac{\snr}$. Let $\mathcal{C}\brac{\snr,\inr}$ denote the capacity region. We capture the asymptotic behavior of the capacity region as follows: Let $\mathcal{\tilde{D}}$ be a scaled version of $\mathcal{C}\brac{\snr,\inr}$ given by \sloppy$\mathcal{\tilde{D}}\brac{\snr,\inr}=\cbrac{\brac{R_{1}/\lgbrac{\snr},R_{2}/\lgbrac{\snr}}:\brac{R_{1},R_{2}}\in\mathcal{C}\brac{\snr,\inr}}$. Following \cite{etkin_tse_no_fb_IC}, we define the generalized degrees of freedom region as \begin{equation} \mathcal{D}\brac{\alpha}=\underset{\alpha\text{ fixed}}{\lim_{\snr,\ \inr\rightarrow\infty}}\mathcal{\tilde{D}}\brac{\snr,\inr}. \end{equation} In $\mathcal{D}\brac{\alpha}$, the limits of $R_{1}/\lgbrac{\snr},R_{2}/\lgbrac{\snr}$ are described by the variables $d_{1},d_{2}$. We also assume $T\geq2$, since if $T=1$, the gDoF region of the IC is null, following the results for the noncoherent MIMO channel \cite{Taricco_Elia_97,lapidoth2003capacity,Joyson_2x2_mimo_journal}. \section{Noncoherent IC without feedback\label{sec:NC_IC_noFB}} In this section, we provide our results on the noncoherent rate-splitting scheme for the IC without feedback. 
We compare the achievable gDoF using a standard training-based scheme to our scheme and we also compare it with the TIN and TDM schemes. \begin{thm} \label{thm:gDoF_ic_non_coh}Using a noncoherent rate-splitting scheme, the gDoF regions given in Table \ref{tab:gdof_no_FB} are achievable. \begin{table}[H] \begin{centering} \caption{Achievable gDoF regions for different regimes of $\alpha$.\label{tab:gdof_no_FB}} \begin{tabular}{|c|c|c|} \hline $\alpha<1/2$ & $1/2\leq\alpha\leq1$ & $\alpha\geq1$\tabularnewline \hline \hline $\begin{array}{c} d_{1}\leq\brac{1-1/T}-\alpha/T\\ d_{2}\leq\brac{1-1/T}-\alpha/T\\ d_{1}+d_{2}\leq2\brac{1-1/T}-2\alpha \end{array}$ & $\begin{array}{c} d_{1}+d_{2}\leq\brac{2-3/T}-\alpha\brac{1-1/T}\\ d_{1}+d_{2}\leq2\brac{1-2/T}\alpha\\ 2d_{1}+d_{2}\leq\brac{2-3/T}-\alpha/T\\ d_{1}+2d_{2}\leq\brac{2-3/T}-\alpha/T \end{array}$ & $\begin{array}{c} d_{1}\leq\brac{1-2/T}\\ d_{2}\leq\brac{1-2/T}\\ d_{1}+d_{2}\leq\brac{1-1/T}\alpha-1/T \end{array}$\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} \end{thm} \begin{IEEEproof} The proof follows by analyzing a Han-Kobayashi scheme \cite{han_kobayashi,chong2008han}, similar to that in \cite{joyson_fading_TCOM}, with rate-splitting based on the average interference-to-noise ratio. Here, the transmitters split their messages into private and common parts, with the power allocated to each part determined by the average interference-to-noise ratio. Each receiver, in a noncoherent manner, jointly decodes the other transmitter's common message and its own private and common messages. The details are in Section \ref{subsec:no_fb}. \end{IEEEproof} \subsection{Discussion} We now compare our achievable gDoF with that of a standard training-based scheme. Standard training-based schemes for the IC allocate training symbols to train each user independently. With two users, we need at least two symbols for training. The approximate capacity region of the coherent fast fading IC is given in \cite{Joyson_fading}. The gDoF region for a scheme that uses 2 symbols for training can easily be obtained from the gDoF region for the coherent case with a multiplication factor of $\brac{1-2/T}$. Hence, the gDoF region for a scheme that uses 2 symbols for training is given by\begin{subequations} \begin{align} d_{1} & \leq\brac{1-2/T},\\ d_{2} & \leq\brac{1-2/T},\\ d_{1}+d_{2} & \leq\brac{1-2/T}\brac{\max\brac{1,\alpha}+\max\brac{1-\alpha,0}},\\ d_{1}+d_{2} & \leq2\brac{1-2/T}\max\brac{1-\alpha,\alpha},\\ 2d_{1}+d_{2} & \leq\brac{1-2/T}\brac{\max\brac{1,\alpha}+\max\brac{1-\alpha,\alpha}+\max\brac{1-\alpha,0}},\\ d_{1}+2d_{2} & \leq\brac{1-2/T}\brac{\max\brac{1,\alpha}+\max\brac{1-\alpha,\alpha}+\max\brac{1-\alpha,0}}. \end{align} \end{subequations} In Figures \ref{fig:gDoF-nofb-set1} and \ref{fig:gDoF-nofb-set2}, the gDoF achievable with our noncoherent scheme is compared with the gDoF achievable using the aforementioned training-based scheme. It can be observed that our noncoherent scheme outperforms the standard training-based scheme. 
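To make this comparison concrete, the following small Python sketch (ours, not part of the analysis; the closed forms are transcribed from Table \ref{tab:gdof_no_FB} and the training-based region above) evaluates the symmetric gDoF of the two regions:

```python
def noncoherent_sym_gdof(alpha, T):
    """Symmetric gDoF of the noncoherent rate-splitting region, per regime."""
    if alpha < 0.5:
        return min((1 - 1/T) - alpha/T, (1 - 1/T) - alpha)
    if alpha <= 1.0:
        return min(((2 - 3/T) - alpha * (1 - 1/T)) / 2,
                   (1 - 2/T) * alpha,
                   ((2 - 3/T) - alpha/T) / 3)
    return min(1 - 2/T, ((1 - 1/T) * alpha - 1/T) / 2)

def training_sym_gdof(alpha, T):
    """Symmetric gDoF of the 2-symbol training-based region."""
    a = alpha
    return (1 - 2/T) * min(1.0,
                           (max(1, a) + max(1 - a, 0)) / 2,
                           max(1 - a, a),
                           (max(1, a) + max(1 - a, a) + max(1 - a, 0)) / 3)

# Symmetric gDoF of rate-splitting vs. 2-symbol training at T = 5, alpha = 0.25:
print(noncoherent_sym_gdof(0.25, 5), training_sym_gdof(0.25, 5))
```

For $T=5$ and $\alpha=0.25$ this evaluates to $0.55$ for rate-splitting against $0.45$ for training (up to floating-point rounding), and sweeping $\alpha$ reproduces the ordering shown in Figures \ref{fig:gDoF-nofb-set1} and \ref{fig:gDoF-nofb-set2}, with equality only at isolated boundary points.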
We also consider the strategy of treating interference-as-noise (TIN) with Gaussian codebooks. Using standard analysis with Gaussian codebooks, it can easily be shown that the gDoF region\begin{subequations} \begin{align} d_{1} & \leq\brac{1-1/T}\brac{1-\alpha},\\ d_{2} & \leq\brac{1-1/T}\brac{1-\alpha} \end{align} \end{subequations}can be achieved by treating interference-as-noise. Also, with time-division multiplexing (TDM) and Gaussian codebooks, the gDoF region\begin{subequations} \begin{align} d_{1} & \leq\brac{1/2}\brac{1-1/T},\\ d_{2} & \leq\brac{1/2}\brac{1-1/T} \end{align} \end{subequations}is achievable. This also follows using standard analysis. Now, we give the achievable symmetric gDoF for the strategies that we discussed, with coherence time $T=5$, in Figure \ref{fig:Symmetric-gDoF}. It can be calculated from our gDoF regions that treating interference-as-noise outperforms the other schemes when $\alpha<1/2$. Note that for the coherent case, rate-splitting based on the INR is only as good as TIN for low INR ($\alpha<1/2$). For the noncoherent case, the rate-splitting scheme performs worse than TIN for low INR, because the uncertainty in the interfering channel, together with the uncertainty in the interfering message to be decoded, reduces the amount of the direct message that can be decoded. This reduction is more significant in the noncoherent case (compared to the coherent case) because the uncertainty in the channels does not appear in the coherent case. Also, for intermediate INR, TDM outperforms the noncoherent rate-splitting scheme; this can be explained by looking at the points $\alpha=0.5$ and $\alpha=1$, where the noncoherent rate-splitting scheme gives a gDoF of $\brac{1/2}\brac{1-2/T}$ and the TDM scheme gives a gDoF of $\brac{1/2}\brac{1-1/T}$. 
Here, the noncoherent scheme effectively behaves as a TDM that uses two training symbols per coherence period, whereas TDM can actually be implemented with only one training symbol per coherence period. \begin{figure} \centering{}\includegraphics[scale=0.6]{IC_dof_less_than_half}\caption{gDoF for $\alpha<1/2$, $T\protect\geq2$. The solid line is the gDoF achievable by our noncoherent scheme and the dotted line is an outer bound on the gDoF of a scheme that uses 2 symbols for training.\label{fig:gDoF-nofb-set1}} \end{figure} \begin{figure} \centering{}\includegraphics[scale=0.55]{IC_dof_greater_than_half}\caption{gDoF for $1/2<\alpha$, $T\protect\geq3$. For $T=2$, no gDoF is achievable using known schemes. The solid line is the gDoF achievable by our noncoherent scheme and the dotted line is an outer bound on the gDoF of a scheme that uses 2 symbols for training.\label{fig:gDoF-nofb-set2}} \end{figure} \begin{figure} \centering{}\includegraphics[scale=0.6]{dof_T_eq_5}\caption{\label{fig:Symmetric-gDoF}Symmetric achievable gDoF for coherence time $T=5$. The training-based scheme uses $2$ symbols for training.} \end{figure} Although our main results are on the gDoF of the system, we can provide guidelines for specific scenarios. For example, with a transmit SNR of $16$ dB, coherence time $T=5$, and all links with average strength $0.1$, using TDM can improve the rate by $6\%$ compared to the standard training-based scheme used with rate-splitting. More rate points are illustrated in Table \ref{tab:Comparison-of-rates}. The details of the expressions used for the numerics are given in \ifarxiv Appendix \ref{app:Numerical-Calculation}\else \cite[Appendix E]{Joyson_noncoh_IC_arxiv}\fi. 
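The comparisons in this discussion can be checked numerically. The sketch below (ours, with illustrative names) encodes the TIN and TDM symmetric gDoF together with the rate-splitting value $\brac{1/2}\brac{1-2/T}$ quoted above for $\alpha\in\{0.5,1\}$:

```python
def tin_sym_gdof(alpha, T):
    """Treating interference as noise: d <= (1 - 1/T)(1 - alpha)."""
    return (1 - 1/T) * (1 - alpha)

def tdm_sym_gdof(T):
    """Time-division multiplexing: d <= (1/2)(1 - 1/T)."""
    return 0.5 * (1 - 1/T)

def rs_sym_gdof_low(alpha, T):
    """Active bound of the rate-splitting region for alpha < 1/2: (1 - 1/T) - alpha."""
    return (1 - 1/T) - alpha

T = 5
rs_boundary = 0.5 * (1 - 2/T)   # rate-splitting gDoF at alpha = 0.5 and alpha = 1

# TDM in effect spends one training symbol per coherence period while the
# noncoherent scheme behaves as if it spends two, hence the gap at these points:
print(tdm_sym_gdof(T), rs_boundary)  # 0.4 0.3

# For alpha < 1/2, TIN exceeds rate-splitting, since the gap is alpha/T > 0:
print(tin_sym_gdof(0.3, T), rs_sym_gdof_low(0.3, T))
```

The difference $\text{TIN}-\text{RS}=\alpha/T$ for $\alpha<1/2$ makes explicit why the TIN advantage at low INR shrinks as the coherence time grows.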
\begin{table}[H] \centering{}\caption{\label{tab:Comparison-of-rates}Comparison of rates achievable with different schemes for $T=5$, $\alpha=\protect\lgbrac{\protect\inr}/\protect\lgbrac{\protect\snr}=1$} \begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{$\snr$ dB} & \multicolumn{2}{c|}{Rates for different schemes}\tabularnewline \cline{2-3} \cline{3-3} & 2 symbol training & TDM\tabularnewline \hline 16 & 0.47 & 0.50\tabularnewline \hline 17 & 0.54 & 0.57\tabularnewline \hline 18 & 0.61 & 0.66\tabularnewline \hline 19 & 0.69 & 0.75\tabularnewline \hline 20 & 0.77 & 0.84\tabularnewline \hline \end{tabular} \end{table} \noindent\textbf{Difficulty with Outer Bounds:} One trivial outer bound is the coherent outer bound \emph{i.e., }assuming that the receivers have perfect channel state information. We could also try to derive noncoherent outer bounds following the existing techniques. For example, following \cite[Theorem 1]{etkin_tse_no_fb_IC} and using a genie-aided technique with $\S_{1}=\boldsymbol{g}_{12}\X_{1}+\Z_{2},$ and $\S_{2}=\boldsymbol{g}_{21}\X_{2}+\Z_{1}$, we could derive an outer bound \begin{equation} T\brac{R_{1}+R_{2}}\leq h\brac{\Y_{1}|\S_{1},\ts}+h\brac{\Y_{2}|\S_{2},\ts}-h\brac{\S_{1}|\X_{1},\ts}-h\brac{\S_{2}|\X_{2},\ts}\label{eq:outer_example} \end{equation} with input distributions $p\brac{\ts}p\brac{\rline{\X_{1}}\ts}p\brac{\rline{\X_{2}}\ts}$ with a time-sharing random variable $\ts.$ However, this bound is not better than the coherent outer bound. To understand this, we try evaluating (\ref{eq:outer_example}) with $\X_{1},\X_{2}$ taken as independent vectors with i.i.d. $\mathcal{CN}\brac{0,1}$ elements. With our choice, it can be shown that \begin{align} h\brac{\Y_{1}|\S_{1}} & \geqdof\lgbrac{1+\inr+\snr}+\brac{T-1}\lgbrac{1+\frac{\snr}{\inr}},\\ h\brac{\S_{1}|\X_{1}} & \eqdof\lgbrac{1+\inr},\\ h\brac{\Y_{1}|\S_{1}}-h\brac{\S_{1}|\X_{1}} & \geqdof T\lgbrac{1+\frac{\snr}{\inr}}. 
\end{align} This means that, for $\alpha<1/2$, the actual outer bound in terms of gDoF from (\ref{eq:outer_example}), obtained by maximizing over all input distributions, is looser than the bound $R_{1}+R_{2}\leqdof2\lgbrac{1+\snr/\inr}$, which is the same as the coherent outer bound for $\alpha<1/2$. \subsection{Proof of Theorem \ref{thm:gDoF_ic_non_coh}\label{subsec:no_fb}} We use a simplified form of the Han-Kobayashi scheme \cite{han_kobayashi} similar to that in \cite{chong2008han}. We consider a fixed distribution $p\brac{\U_{1}}p\brac{\U_{2}}p\brac{\X_{1}|\U_{1}}p\brac{\X_{2}|\U_{2}}$ where $\U_{1},\U_{2},\X_{1},\X_{2}$ are vectors of length $T$. \noindent\textbf{Encoding:} For transmitter 1, generate $2^{NTR_{\text{c}1}}$ codewords $\U_{1}^{N}\brac i$ with $i\in\cbrac{1,\ldots,2^{NTR_{\text{c}1}}}$ according to $\prod_{l=1}^{N}p\brac{\U_{1(l)}}$ and for each $\U_{1}^{N}\brac i$, generate $2^{NTR_{\text{p}1}}$ codewords $\X_{1}^{N}\brac{i,j}$, with $j\in\cbrac{1,\ldots,2^{NTR_{\text{p}1}}}$, according to $\prod_{l=1}^{N}p\brac{\X_{1(l)}|\U_{1(l)}}$. Similarly, for transmitter 2, generate $2^{NTR_{\text{c}2}}$ codewords $\U_{2}^{N}\brac i$, with $i\in\cbrac{1,\ldots,2^{NTR_{\text{c}2}}}$, according to $\prod_{l=1}^{N}p\brac{\U_{2(l)}}$ and for each $\U_{2}^{N}\brac i$, generate $2^{NTR_{\text{p}2}}$ codewords $\X_{2}^{N}\brac{i,j}$, with $j\in\cbrac{1,\ldots,2^{NTR_{\text{p}2}}}$, according to $\prod_{l=1}^{N}p\brac{\X_{2(l)}|\U_{2(l)}}$. Transmitter 1 has uniformly random messages $w_{\text{c}1}\in\sbrac{1,2^{NTR_{\text{c}1}}},w_{\text{p}1}\in\sbrac{1,2^{NTR_{\text{p}1}}}$ to transmit and transmitter 2 has uniformly random messages $w_{\text{c}2}\in\sbrac{1,2^{NTR_{\text{c}2}}},w_{\text{p}2}\in\sbrac{1,2^{NTR_{\text{p}2}}}$ to transmit. 
Transmitter 1 sends the symbols $\X_{1}^{N}\brac{w_{\text{c}1},w_{\text{p}1}}$ and transmitter 2 sends the symbols $\X_{2}^{N}\brac{w_{\text{c}2},w_{\text{p}2}}.$ \noindent\textbf{Decoding:} For decoding, receiver 1 finds a triplet $\brac{\hat{i},\hat{j},\hat{k}}$, requiring $\hat{i},\hat{j}$ to be unique, with \[ \brac{\X_{1}^{N}\brac{\hat{i},\hat{j}},\U_{1}^{N}\brac{\hat{i}},\U_{2}^{N}\brac{\hat{k}},\Y_{1}^{N}}\in A_{\epsilon}^{\brac N}. \] Similarly, receiver 2 finds a triplet $\brac{\hat{i},\hat{j},\hat{k}}$, requiring $\hat{i},\hat{j}$ to be unique, with \[ \brac{\X_{2}^{N}\brac{\hat{i},\hat{j}},\U_{2}^{N}\brac{\hat{i}},\U_{1}^{N}\brac{\hat{k}},\Y_{2}^{N}}\in A_{\epsilon}^{\brac N}, \] where $A_{\epsilon}^{\brac N}$ indicates the set of jointly typical sequences. We analyze the error probability at receiver 1 assuming $\brac{i,j,k}=\brac{1,1,1}$. Let $E_{ijk}$ be the event $\cbrac{\brac{\X_{1}^{N}\brac{i,j},\U_{1}^{N}\brac i,\U_{2}^{N}\brac k,\Y_{1}^{N}}\in A_{\epsilon}^{\brac N}}$ for given $i,j,k$. By the asymptotic equipartition property (AEP), the probability $\text{Pr}\brac{\union_{k}E_{11k}}$ approaches unity. The error probability at receiver 1 is then captured by the following: \begin{align*} \text{Pr}\brac{\union_{(i,j)\neq\brac{1,1},k}E_{ijk}} & \leq\brac{\sum_{i\neq1,j\neq1,k\neq1}\text{Pr}\brac{E_{ijk}}+\sum_{i\neq1,j=1,k\neq1}\text{Pr}\brac{E_{ijk}}}\\ & \quad+\brac{\sum_{i\neq1,j\neq1,k=1}\text{Pr}\brac{E_{ijk}}+\sum_{i\neq1,j=1,k=1}\text{Pr}\brac{E_{ijk}}}\\ & \quad+\sum_{i=1,j\neq1,k=1}\text{Pr}\brac{E_{ijk}}+\sum_{i=1,j\neq1,k\neq1}\text{Pr}\brac{E_{ijk}}\\ & \leq2^{N\brac{TR_{\text{c}1}+TR_{\text{c}2}+TR_{\text{p}1}-I\brac{\X_{1},\U_{2};\Y_{1}}+\epsilon}}+2^{N\brac{TR_{\text{p}1}+TR_{\text{c}1}-I\brac{\X_{1};\Y_{1}|\U_{2}}+\epsilon}}\\ & \quad+2^{N\brac{TR_{\text{p}1}-I\brac{\rline{\X_{1};\Y_{1}}\U_{1},\U_{2}}+\epsilon}}+2^{N\brac{TR_{\text{c}2}+TR_{\text{p}1}-I\brac{\rline{\X_{1},\U_{2};\Y_{1}}\U_{1}}+\epsilon}}. 
\end{align*} Combining the analysis for receiver 1 and receiver 2, we get the following equations for achievability:\begin{subequations} \begin{align} TR_{\text{c}1}+TR_{\text{c}2}+TR_{\text{p}1} & \leq I\brac{\X_{1},\U_{2};\Y_{1}},\\ TR_{\text{p}1}+TR_{\text{c}1} & \leq I\brac{\X_{1};\Y_{1}|\U_{2}},\\ TR_{\text{p}1} & \leq I\brac{\rline{\X_{1};\Y_{1}}\U_{1},\U_{2}},\\ TR_{\text{c}2}+TR_{\text{p}1} & \leq I\brac{\rline{\X_{1},\U_{2};\Y_{1}}\U_{1}},\\ TR_{\text{c}1}+TR_{\text{c}2}+TR_{\text{p}2} & \leq I\brac{\X_{2},\U_{1};\Y_{2}},\\ TR_{\text{p}2}+TR_{\text{c}2} & \leq I\brac{\X_{2};\Y_{2}|\U_{1}},\\ TR_{\text{p}2} & \leq I\brac{\rline{\X_{2};\Y_{2}}\U_{1},\U_{2}},\\ TR_{\text{c}1}+TR_{\text{p}2} & \leq I\brac{\rline{\X_{2},\U_{1};\Y_{2}}\U_{2}}. \end{align} \end{subequations}After Fourier-Motzkin elimination, the following equations are obtained for achievability, with $R_{1}=R_{\text{c}1}+R_{\text{p}1},R_{2}=R_{\text{c}2}+R_{\text{p}2}$: \begin{subequations}\label{eq:ach_nofb} \begin{align} TR_{1} & \leq I\brac{\X_{1};\Y_{1}|\U_{2}},\label{eq:ach_nofb1}\\ TR_{2} & \leq I\brac{\X_{2};\Y_{2}|\U_{1}},\label{eq:ach_nofb2}\\ T\brac{R_{1}+R_{2}} & \leq I\brac{\X_{2},\U_{1};\Y_{2}}+I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}},\label{eq:ach_nofb3}\\ T\brac{R_{1}+R_{2}} & \leq I\brac{\X_{1},\U_{2};\Y_{1}}+I\brac{\X_{2};\Y_{2}|\U_{1},\U_{2}},\label{eq:ach_nofb4}\\ T\brac{R_{1}+R_{2}} & \leq I\brac{\X_{1},\U_{2};\Y_{1}|\U_{1}}+I\brac{\X_{2},\U_{1};\Y_{2}|\U_{2}},\label{eq:ach_nofb5}\\ T\brac{2R_{1}+R_{2}} & \leq I\brac{\X_{1},\U_{2};\Y_{1}}+I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}+I\brac{\X_{2},\U_{1};\Y_{2}|\U_{2}},\label{eq:ach_nofb6}\\ T\brac{R_{1}+2R_{2}} & \leq I\brac{\X_{2},\U_{1};\Y_{2}}+I\brac{\X_{2};\Y_{2}|\U_{1},\U_{2}}+I\brac{\X_{1},\U_{2};\Y_{1}|\U_{1}}.\label{eq:ach_nofb7} \end{align} \end{subequations}Now similar to that in \cite{etkin_tse_no_fb_IC,joyson_fading_TCOM}, we choose $U_{k}$ as a vector of length $T$ with i.i.d. 
$\mathcal{CN}\brac{0,\lambda_{c}}$ elements and $\X_{pk}$ as a vector of length $T$ with i.i.d. $\mathcal{CN}\brac{0,\lambda_{p}}$ elements for $k\in\cbrac{1,2}$ and $\X_{1}=\U_{1}+\X_{p1},\quad\X_{2}=\U_{2}+\X_{p2},$ where $\lambda_{c}+\lambda_{p}=1$ and $\lambda_{p}=\min\brac{1/\inr,1}$. For the gDoF characterization, we can assume $\inr\geq1$. If $\inr<1$, it is equivalent to the case with $\inr=1$ for gDoF, since both of these cases yield $\alpha=0$. Hence, we can take $\lambda_{p}=1/\inr.$ Here, the rate-splitting is based on the average interference-to-noise ratio. The following fact will be useful in approximating the mutual information terms in the rate region. \begin{fact} For an exponentially distributed random variable $\boldsymbol{\xi}$ with mean $\mu_{\xi}$ and given constants $a\geq0,b>0$, we have \begin{equation} \lgbrac{a+b\mu_{\xi}}-\gamma\lgbrac e\leq\expect{\lgbrac{a+b\boldsymbol{\xi}}}\leq\lgbrac{a+b\mu_{\xi}}, \end{equation} where $\gamma$ is Euler's constant.\label{fact:Jensens_gap} \end{fact} \begin{IEEEproof} This is given in \cite[Section VI-B]{Joyson_fading}. \end{IEEEproof} We now simplify the region (\ref{eq:ach_nofb}) for the low-interference ($\alpha<1$) regime. We consider the terms in (\ref{eq:ach_nofb}) one by one. \begin{claim} The term $h\brac{\Y_{1}|\U_{2},\X_{1}}$ is upper bounded in gDoF as \[ h\brac{\Y_{1}|\U_{2},\X_{1}}\leqdof\lgbrac{\snr+\inr}+\lgbrac{\min\brac{\snr,\inr}} \] \label{claim:h_Y1_U2_X1} \end{claim} \begin{IEEEproof} The detailed proof is in Appendix \ref{app:proof_h_Y1_U2_X1}. The outline of the proof is as follows: with $\y_{1,i}$ as the components of $\Y_{1}$, we expand $h\brac{\Y_{1}|\U_{2},\X_{1}}=\sum_{i}h\brac{\y_{1,i}|\U_{2},\X_{1},\cbrac{\y_{1,j}}_{j=1}^{i-1}}$. The first term $h\brac{\y_{1,1}|\U_{2},\X_{1}}$ gives rise to the term $\lgbrac{\snr+\inr}$, with uncertainty from both the incoming channels. 
Let us consider the term $h\brac{\y_{1,2}|\U_{2},\X_{1},\y_{1,1}}$. In $\y_{1,2}$, the contribution to the uncertainty is from the channels as well as the symbols. When conditioned on $\U_{2},\X_{1},\y_{1,1}$, the contribution of uncertainty from the symbols can be removed. The uncertainty from $\X_{p2}$ in $\y_{1,2}$ can be neglected in the gDoF calculation due to the power allocation strategy that we use. The term $\y_{1,1}$ is a linear combination of the symbols as well as the channels. Using this single linear combination given in the conditioning, the uncertainty from one of the channels can be removed. Thus, this term gives rise to $\lgbrac{\min\brac{\snr,\inr}}$, with either the uncertainty from the direct channel removed or the uncertainty from the interfering channel removed. For the terms $h\brac{\y_{1,i}|\U_{2},\X_{1},\cbrac{\y_{1,j}}_{j=1}^{i-1}}$ with $i\geq3$, we can follow the same procedure as stated in the above paragraph. However, with $\cbrac{\y_{1,j}}_{j=1}^{i-1}$ available in the conditioning, we have more than a single linear combination of the channels available. Using these, the contribution from both the channels can be removed, and hence these terms do not contribute to the gDoF. \end{IEEEproof} \begin{claim} The term $h\brac{\Y_{1}|\U_{1},\U_{2}}$ is lower bounded in gDoF as \[ h\brac{\Y_{1}|\U_{1},\U_{2}}\geqdof\lgbrac{\snr+\inr}+\lgbrac{\frac{\snr}{\inr}+\min\brac{\snr,\inr}}+\brac{T-2}\lgbrac{1+\frac{\snr}{\inr}} \] \label{claim:h(Y1|U1,U2)} \end{claim} \begin{IEEEproof} We expand $h\brac{\Y_{1}|\U_{1},\U_{2}}=\sum_{i}h\brac{\y_{1,i}|\U_{1},\U_{2},\cbrac{\y_{1,j}}_{j=1}^{i-1}}$. One way to lower bound $h\brac{\y_{1,i}|\U_{1},\U_{2},\cbrac{\y_{1,j}}_{j=1}^{i-1}}$ is to condition on the channel realizations and reduce the term to the coherent case. 
Another way to lower bound $h\brac{\y_{1,i}|\U_{1},\U_{2},\cbrac{\y_{1,j}}_{j=1}^{i-1}}$ is to give all the transmit signals in the conditioning and reduce the entropy to that of a (conditionally) jointly Gaussian random variable. These two techniques help us prove the claim. See Appendix \ref{app:h(Y1|U1,U2)proof} for more details. \end{IEEEproof} \begin{claim} The term $I\brac{\X_{1};\Y_{1}|\U_{2}}$ is lower bounded in gDoF as \[ I\brac{\X_{1};\Y_{1}|\U_{2}}\geqdof\brac{T-1}\lgbrac{\snr}-\lgbrac{\min\brac{\snr,\inr}}. \] \label{claim:simplify_term1_HK}\begin{IEEEproof} We have \begin{align} h\brac{\Y_{1}|\U_{2}}= & h\brac{\boldsymbol{g}_{11}\X_{1}+\boldsymbol{g}_{21}\X_{2}+\Z_{1}|\U_{2}}\nonumber \\ = & \sum_{i=1}^{T}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\cbrac{\boldsymbol{g}_{11}\x_{1,j}+\boldsymbol{g}_{21}\x_{2,j}+\z_{1,j}}_{j=1}^{i-1},\U_{2}}\nonumber \\ \overset{\brac i}{\geq} & h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\x_{1,1},\x_{2,1},\U_{2}}\nonumber \\ & +\sum_{i=2}^{T}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\u_{2,i},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}\nonumber \\ \overset{\brac{ii}}{\geqdof} & \lgbrac{\snr+\inr}+\brac{T-1}\lgbrac{\snr},\label{eq:y1_given_u2} \end{align} where $\brac i$ is due to the fact that conditioning reduces entropy, together with the Markovity $\brac{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}}-\brac{\u_{2,i},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}-\brac{\cbrac{\boldsymbol{g}_{11}\x_{1,j}+\boldsymbol{g}_{21}\x_{2,j}+\z_{1,j}}_{j=1}^{i-1},\U_{2}}$. 
The step $\brac{ii}$ is using Gaussianity for terms $h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\x_{1,1},\x_{2,1},\U_{2}}$, $h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\u_{2,i},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}=h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{\text{p}2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{21},\boldsymbol{g}_{11}}$ and using Fact \ref{fact:Jensens_gap}. Using (\ref{eq:y1_given_u2}) and Claim \ref{claim:h_Y1_U2_X1} for $h\brac{\Y_{1}|\U_{2},\X_{1}}$ completes the proof. \end{IEEEproof} \end{claim} \begin{claim} The term $I\brac{\X_{2},\U_{1};\Y_{2}}$ is lower bounded in gDoF as \[ I\brac{\X_{2},\U_{1};\Y_{2}}\geqdof\brac{T-1}\lgbrac{\snr+\inr}-\lgbrac{\min\brac{\snr,\inr}}. \] \label{claim:simplify_term2_HK} \end{claim} \begin{IEEEproof} We have \begin{align} h\brac{\Y_{2}} & \geqdof T\lgbrac{\snr+\inr}. \end{align} Using Claim \ref{claim:h_Y1_U2_X1} for $h\brac{\rline{\boldsymbol{Y}_{1}}\X_{1},\U_{2}}$ and using symmetry we can get, \begin{align} h\brac{\rline{\Y_{2}}\X_{2},\U_{1}} & \leqdof\lgbrac{\snr+\inr}+\lgbrac{\min\brac{\snr,\inr}}. \end{align} Combining the last two equations completes the proof. \end{IEEEproof} \begin{claim} \label{claim:simplify_term3_HK}The term $I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}$ is lower bounded in gDoF as \[ I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}\geqdof\lgbrac{\frac{\snr}{\inr}+\min\brac{\snr,\inr}}+\brac{T-2}\lgbrac{1+\frac{\snr}{\inr}}-\lgbrac{\min\brac{\snr,\inr}}. 
\] \end{claim} \begin{IEEEproof} This follows by using \[ h\brac{\Y_{1}|\U_{1},\U_{2}}\geqdof\lgbrac{\snr+\inr}+\lgbrac{\frac{\snr}{\inr}+\min\brac{\snr,\inr}}+\brac{T-2}\lgbrac{1+\frac{\snr}{\inr}} \] from Claim \ref{claim:h(Y1|U1,U2)} and \begin{align*} h\brac{\Y_{1}|\X_{1},\U_{1},\U_{2}} & \leq h\brac{\Y_{1}|\X_{1},\U_{2}}\leqdof\lgbrac{\snr+\inr}+\lgbrac{\min\brac{\snr,\inr}} \end{align*} from Claim \ref{claim:h_Y1_U2_X1}. \end{IEEEproof} \begin{claim} The term $I\brac{\X_{1},\U_{2};\Y_{1}|\U_{1}}$ is lower bounded in gDoF as \[ I\brac{\X_{1},\U_{2};\Y_{1}|\U_{1}}\geqdof\brac{T-1}\lgbrac{\frac{\snr}{\inr}+\inr}-\lgbrac{\min\brac{\snr,\inr}}. \] \end{claim} \begin{IEEEproof} We have \begin{align} h\brac{\Y_{1}|\U_{1}} & =h\brac{\rline{\boldsymbol{g}_{11}\X_{1}+\boldsymbol{g}_{21}\X_{2}+\Z_{1}\vphantom{a^{a^{a}}}}\U_{1}}\nonumber \\ & =\sum_{i}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\cbrac{\boldsymbol{g}_{11}\x_{1,j}+\boldsymbol{g}_{21}\x_{2,j}+\z_{1,j}}_{j=1}^{i-1},\U_{1}}\\ & \overset{\brac i}{\geqdof}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\U_{1},\x_{1,1},\x_{2,1}}\nonumber \\ & \qquad+\sum_{i=2}^{T}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\u_{1,i},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}\\ & \overset{\brac{ii}}{\geqdof}\lgbrac{1+\snr+\inr}\nonumber \\ & \qquad+\brac{T-1}\lgbrac{1+\frac{\snr}{\inr}+\inr},\label{eq:h_Y1_given_U1} \end{align} where $\brac i$ is due to the fact that conditioning reduces entropy, together with the Markovity $\brac{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}}-\brac{\u_{1,i},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}-\brac{\cbrac{\boldsymbol{g}_{11}\x_{1,j}+\boldsymbol{g}_{21}\x_{2,j}+\z_{1,j}}_{j=1}^{i-1},\U_{1}}$. 
In step $\brac{ii}$, we removed the contribution of $\boldsymbol{g}_{11}\u_{1,i}$ from the second term and used the structure $\x_{1,i}=\u_{1,i}+\x_{p1,i}$, where $\u_{1,i},\x_{p1,i}$ are independent Gaussians and $\x_{p1,i}$ has variance $1/\inr$. We also used the Gaussianity of the channels and Fact \ref{fact:Jensens_gap}. We also have \begin{align} h\brac{\Y_{1}|\U_{2},\U_{1},\X_{1}} & \leq h\brac{\Y_{1}|\U_{2},\X_{1}}\leqdof\lgbrac{\snr+\inr}+\lgbrac{\min\brac{\snr,\inr}},\label{eq:I=00007BX_=00007B1=00007D,U_=00007B2=00007D;Y_=00007B1=00007D|U_=00007B1=00007D=00007D} \end{align} where the last step follows from Claim \ref{claim:h_Y1_U2_X1} for $h\brac{\rline{\boldsymbol{Y}_{1}}\X_{1},\U_{2}}$. Using (\ref{eq:I=00007BX_=00007B1=00007D,U_=00007B2=00007D;Y_=00007B1=00007D|U_=00007B1=00007D=00007D}) and (\ref{eq:h_Y1_given_U1}) completes the proof. \end{IEEEproof} We collect the results from the previous four claims in Table \ref{tab:gDoF-inner-bounds_noFB}. \begin{table}[H] \begin{centering} \caption{\label{tab:gDoF-inner-bounds_noFB}gDoF lower bounds for the terms in the achievability region} \begin{tabular}{|c|c|} \hline Term & Lower bound in gDoF\tabularnewline \hline \hline $I\brac{\X_{1},\U_{2};\Y_{1}|\U_{1}}$ & $\brac{T-1}\lgbrac{\frac{\snr}{\inr}+\inr}-\lgbrac{\min\brac{\snr,\inr}}$\tabularnewline \hline $I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}$ & $\lgbrac{\frac{\snr}{\inr}+\min\brac{\snr,\inr}}+\brac{T-2}\lgbrac{1+\frac{\snr}{\inr}}-\lgbrac{\min\brac{\snr,\inr}}$\tabularnewline \hline $I\brac{\X_{2},\U_{1};\Y_{2}}$ & $\brac{T-1}\lgbrac{\snr+\inr}-\lgbrac{\min\brac{\snr,\inr}}$\tabularnewline \hline $I\brac{\X_{1};\Y_{1}|\U_{2}}$ & $\brac{T-1}\lgbrac{\snr}-\lgbrac{\min\brac{\snr,\inr}}$\tabularnewline \hline \end{tabular} \par\end{centering} \end{table} In Table \ref{tab:gDoF-inner-bounds_noFB_prelog}, we obtain the prelogs of the terms from Table \ref{tab:gDoF-inner-bounds_noFB} for the different regimes. 
\begin{table}[H] \centering{}\caption{\label{tab:gDoF-inner-bounds_noFB_prelog}Prelog of the lower bounds of the terms in the achievability region} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Term} & \multicolumn{3}{c|}{Prelog of lower bound}\tabularnewline \cline{2-4} \cline{3-4} \cline{4-4} & $\alpha<1/2$ & $1/2<\alpha<1$ & $1<\alpha$\tabularnewline \hline \hline $I\brac{\X_{1},\U_{2};\Y_{1}|\U_{1}}$ & $\brac{T-1}\brac{1-\alpha}-\alpha$ & $\brac{T-2}\alpha$ & $\brac{T-1}\alpha-1$\tabularnewline \hline $I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}$ & $\brac{T-1}\brac{1-\alpha}-\alpha$ & $\brac{T-2}\brac{1-\alpha}$ & $0$\tabularnewline \hline $I\brac{\X_{2},\U_{1};\Y_{2}}$ & $\brac{T-1}-\alpha$ & $\brac{T-1}-\alpha$ & $\brac{T-1}\alpha-1$\tabularnewline \hline $I\brac{\X_{1};\Y_{1}|\U_{2}}$ & $\brac{T-1}-\alpha$ & $\brac{T-1}-\alpha$ & $\brac{T-2}$\tabularnewline \hline \end{tabular} \end{table} Using the prelogs of the lower bounds in the achievability region (\ref{eq:ach_nofb}) and keeping only the active inequalities, it can be verified that the gDoF region in Table \ref{tab:gdof_no_FB} is achievable. \section{Noncoherent IC with feedback\label{sec:noncoh_FFIC_FB}} In this section, we present our results for a noncoherent rate-splitting scheme for the noncoherent IC with feedback and compare the achievable gDoF with that of a standard training-based scheme. We also compare the performance with the TIN and TDM schemes.
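The prelog entries of Table \ref{tab:gDoF-inner-bounds_noFB_prelog} can be sanity-checked numerically against the gDoF lower bounds of Table \ref{tab:gDoF-inner-bounds_noFB}. The following sketch (illustrative only, not part of the formal development) substitutes $\snr=\rho$ and $\inr=\rho^{\alpha}$ at a large $\rho$ and compares the normalized bounds with the tabulated prelogs:

```python
import math

def bound_prelogs(T, alpha, rho=2.0**100):
    """Prelogs of the gDoF lower bounds of Table tab:gDoF-inner-bounds_noFB,
    evaluated numerically with snr = rho and inr = rho**alpha at large rho."""
    snr, inr = rho, rho**alpha
    lg, mn = math.log2, min(rho, rho**alpha)
    bounds = {
        "I(X1,U2;Y1|U1)": (T - 1)*lg(snr/inr + inr) - lg(mn),
        "I(X1;Y1|U1,U2)": lg(snr/inr + mn) + (T - 2)*lg(1 + snr/inr) - lg(mn),
        "I(X2,U1;Y2)":    (T - 1)*lg(snr + inr) - lg(mn),
        "I(X1;Y1|U2)":    (T - 1)*lg(snr) - lg(mn),
    }
    # normalizing by log2(rho) gives the prelog in the large-rho limit
    return {term: value/lg(rho) for term, value in bounds.items()}

def table_prelogs(T, alpha):
    """Closed-form entries of Table tab:gDoF-inner-bounds_noFB_prelog."""
    if alpha < 0.5:
        return {"I(X1,U2;Y1|U1)": (T - 1)*(1 - alpha) - alpha,
                "I(X1;Y1|U1,U2)": (T - 1)*(1 - alpha) - alpha,
                "I(X2,U1;Y2)":    (T - 1) - alpha,
                "I(X1;Y1|U2)":    (T - 1) - alpha}
    if alpha < 1:
        return {"I(X1,U2;Y1|U1)": (T - 2)*alpha,
                "I(X1;Y1|U1,U2)": (T - 2)*(1 - alpha),
                "I(X2,U1;Y2)":    (T - 1) - alpha,
                "I(X1;Y1|U2)":    (T - 1) - alpha}
    return {"I(X1,U2;Y1|U1)": (T - 1)*alpha - 1,
            "I(X1;Y1|U1,U2)": 0.0,
            "I(X2,U1;Y2)":    (T - 1)*alpha - 1,
            "I(X1;Y1|U2)":    float(T - 2)}
```

Evaluating both at, e.g., $T\in\cbrac{3,5}$ and $\alpha\in\cbrac{0.3,0.7,1.2}$ gives matching values up to the vanishing residual from the finite $\rho$.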
\begin{thm} For a noncoherent IC with feedback, the gDoF region given in Table \ref{tab:gdof_FB} is achievable:\label{thm:noncoh_IC_FB} \begin{table}[H] \centering{}\caption{Achievable gDoF regions for noncoherent IC with feedback.\label{tab:gdof_FB}} \begin{tabular}{|c|c|c|} \hline $\alpha<1/2$ & $1/2\leq\alpha\leq1$ & $\alpha\geq1$\tabularnewline \hline \hline $\begin{array}{c} d_{1}\leq\brac{1-1/T}-2\alpha/T\\ d_{2}\leq\brac{1-1/T}-2\alpha/T\\ d_{1}+d_{2}\leq2\brac{1-1/T}-\alpha\brac{1+1/T} \end{array}$ & $\begin{array}{c} d_{1}\leq\brac{1-2/T}\\ d_{2}\leq\brac{1-2/T}\\ d_{1}+d_{2}\leq\brac{2-3/T}-\alpha\brac{1-1/T} \end{array}$ & $\begin{array}{c} d_{1}+d_{2}\leq\brac{1-1/T}\alpha-1/T\end{array}$\tabularnewline \hline \end{tabular} \end{table} \end{thm} \begin{IEEEproof} This is obtained using the block Markov scheme of \cite[Lemma 1]{suh_tse_fb_gaussian} for the noncoherent case. We again use a rate-splitting scheme based on the average interference-to-noise ratio and noncoherent joint decoding at the receivers. The details are given in Section \ref{subsec:noncoh_IC_FB}. \end{IEEEproof} \subsection{Discussion} We now compare our achievable gDoF with that of a standard training-based scheme. The approximate capacity region of the coherent fast fading IC with feedback is given in \cite{joyson_fading_TCOM}. The gDoF region for a scheme that uses 2 symbols for training can be obtained by scaling the gDoF region for the coherent case by a factor of $\brac{1-2/T}$. Hence, this gDoF region is given by:\begin{subequations}\label{eq:training_dof_fb} \begin{align} d_{1} & \leq\brac{1-2/T}\max\brac{1,\alpha}\\ d_{2} & \leq\brac{1-2/T}\max\brac{1,\alpha}\\ d_{1}+d_{2} & \leq\brac{1-2/T}\brac{\max\brac{1,\alpha}+\max\brac{1-\alpha,0}}.
\end{align} \end{subequations} We give the achievable symmetric gDoF for our noncoherent rate-splitting scheme and the training-based scheme for the feedback case with coherence time $T=3$ in Figure \ref{fig:Symmetric_gDoF_fb_T=00003D3} and with coherence time $T=5$ in Figure \ref{fig:Symmetric_gDoF_fb}. We also include the gDoF of the nonfeedback schemes from Section \ref{sec:NC_IC_noFB} in the figures. It can be calculated from Table \ref{tab:gdof_FB} and (\ref{eq:training_dof_fb}) that TIN outperforms our noncoherent strategy with feedback when $T=2$ and $\alpha<1$. Our noncoherent rate-splitting strategy with feedback matches or outperforms TIN when $T\geq3$. The noncoherent rate-splitting scheme decodes part of the interfering message at the transmitter and uses it in subsequent transmissions. The rate that can be decoded this way increases with $T$; when $T=2$, the advantage gained by decoding at the transmitter is small. For low INR, the uncertainty in the interfering channel, together with the uncertainty of the interfering message to be decoded at the receiver, reduces the rate of the direct message that can be decoded in the rate-splitting scheme. The advantage gained by decoding at the transmitter outweighs this loss when $T\geq3$. For intermediate INR, the TDM scheme outperforms the other schemes; the explanation is similar to that for the nonfeedback case. When $\alpha=1$, the noncoherent rate-splitting scheme gives a gDoF of $\brac{1/2}\brac{1-2/T}$ and the TDM scheme gives a gDoF of $\brac{1/2}\brac{1-1/T}$. Here, the noncoherent scheme effectively behaves as a TDM scheme that uses two training symbols, whereas TDM can be implemented with only one training symbol.
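The comparison at $\alpha=1$ can be reproduced directly from Table \ref{tab:gdof_FB} and (\ref{eq:training_dof_fb}). A short sketch; the TDM expression $\brac{1/2}\brac{1-1/T}$ is the one asserted in the discussion and is assumed here to hold for all $\alpha$:

```python
def d_sym_rate_splitting(T, alpha):
    """Symmetric gDoF of the feedback rate-splitting region (Table tab:gdof_FB)."""
    if alpha < 0.5:
        return min(1 - 1/T - 2*alpha/T, (2*(1 - 1/T) - alpha*(1 + 1/T))/2)
    if alpha <= 1:
        return min(1 - 2/T, ((2 - 3/T) - alpha*(1 - 1/T))/2)
    return ((1 - 1/T)*alpha - 1/T)/2

def d_sym_training(T, alpha):
    """Symmetric gDoF of the 2-symbol training-based scheme, eq. (training_dof_fb)."""
    m = max(1.0, alpha)
    return (1 - 2/T)*min(m, (m + max(1 - alpha, 0.0))/2)

def d_sym_tdm(T, alpha):
    # TDM: each user transmits alone half the time with one training symbol,
    # giving per-user gDoF (1/2)(1 - 1/T) (assumed alpha-independent).
    return 0.5*(1 - 1/T)
```

At $\alpha=1$ and $T=5$ this reproduces the values above: $0.3$ for rate-splitting (and for 2-symbol training) versus $0.4$ for TDM.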
\begin{figure}[H] \centering{}\includegraphics[scale=0.8]{with_TIN_Teq3}\caption{\label{fig:Symmetric_gDoF_fb_T=00003D3}Symmetric achievable gDoF for coherence time $T=3$: feedback and nonfeedback cases.} \end{figure} \begin{figure}[h] \centering{}\includegraphics[scale=0.8]{with_TIN}\caption{\label{fig:Symmetric_gDoF_fb}Symmetric achievable gDoF for coherence time $T=5$: feedback and nonfeedback cases.} \end{figure} \subsection{Proof of Theorem \ref{thm:noncoh_IC_FB} \label{subsec:noncoh_IC_FB}} We use the block Markov scheme from \cite[Lemma 1]{suh_tse_fb_gaussian}: we use block Markov encoding with a total of $B$ blocks. In block 1, each transmitter splits its own message into common and private parts and then sends a codeword superimposing the common and private messages. In block 2, with feedback, each transmitter (noncoherently) decodes the other user\textquoteright s common message (sent in block 1) while treating the other user\textquoteright s private signal as noise. Then two messages are available at the transmitter: (1) its own common message; and (2) the other user\textquoteright s common message, decoded with the help of feedback. Each transmitter generates new common and private messages, conditioned on these two common messages. It then sends the corresponding codeword. Each transmitter repeats this procedure until block $B-1$. In the last block $B$, to facilitate backward decoding, each transmitter sends the predetermined common message and a new private message. Each receiver waits until a total of $B$ blocks have been received and then performs backward noncoherent decoding. \noindent\textbf{Encoding:} Fix a joint distribution $p\brac{\U_{1}}p\brac{\U_{2}}p\brac{\X_{1}|\U_{1}}p\brac{\X_{2}|\U_{2}}$ where $\U_{1},\U_{2},\X_{1},\X_{2}$ are vectors of length $T$.
Generate $2^{NT\brac{2R_{\text{c}1}+R_{\text{c}2}}}$ codewords $\U_{1}^{N}\brac{i,j,k}$ with $i,k\in\cbrac{1,\ldots,2^{NTR_{\text{c}1}}}$, $j\in\cbrac{1,\ldots,2^{NTR_{\text{c}2}}}$ according to $\prod_{m=1}^{N}p\brac{\U_{1(m)}}$. For each codeword $\U_{1}^{N}\brac{i,j,k}$, generate $2^{NTR_{\text{p}1}}$ codewords $\X_{1}^{N}\brac{i,j,k,l}$ with $l\in\cbrac{1,\ldots,2^{NTR_{\text{p}1}}}$ according to $\prod_{m=1}^{N}p\brac{\X_{1(m)}|\U_{1(m)}}$. Similarly generate $2^{NT\brac{2R_{\text{c}2}+R_{\text{c}1}}}$ codewords $\U_{2}^{N}\brac{i,j,r}$ with $i,r\in\cbrac{1,\ldots,2^{NTR_{\text{c}2}}}$, $j\in\cbrac{1,\ldots,2^{NTR_{\text{c}1}}}$. For each codeword $\U_{2}^{N}\brac{i,j,r}$, generate $2^{NTR_{\text{p}2}}$ codewords $\X_{2}^{N}\brac{i,j,r,s}$ with $s\in\cbrac{1,\ldots,2^{NTR_{\text{p}2}}}$ according to $\prod_{m=1}^{N}p\brac{\X_{2(m)}|\U_{2(m)}}$. At block $b$, transmitter 1 has uniformly random messages $w_{\text{c}1}^{\brac b}\in\sbrac{1,2^{NTR_{\text{c}1}}},w_{\text{p}1}^{\brac b}\in\sbrac{1,2^{NTR_{\text{p}1}}}$ to transmit and transmitter 2 has uniformly random messages $w_{\text{c}2}^{\brac b}\in\sbrac{1,2^{NTR_{\text{c}2}}},w_{\text{p}2}^{\brac b}\in\sbrac{1,2^{NTR_{\text{p}2}}}$ to transmit. With feedback $\Y_{1}^{N,\brac{b-1}}$, transmitter 1 tries to noncoherently decode $\hat{w}_{\text{c}2}^{\brac{b-1}}=\hat{k}$ from transmitter 2 by finding the unique $\hat{k}$ such that \[ \brac{\U_{1}^{N}\brac{w_{\text{c}1}^{\brac{b-2}},w_{\text{c}2}^{\brac{b-2}},w_{\text{c}1}^{\brac{b-1}}},\X_{1}^{N}\brac{w_{\text{c}1}^{\brac{b-2}},w_{\text{c}2}^{\brac{b-2}},w_{\text{c}1}^{\brac{b-1}},w_{\text{p}1}^{\brac{b-1}}},\U_{2}^{N}\brac{w_{\text{c}2}^{\brac{b-2}},w_{\text{c}1}^{\brac{b-2}},\hat{k}},\Y_{1}^{N,\brac{b-1}}}\in A_{\epsilon}^{\brac N}, \] where $A_{\epsilon}^{\brac N}$ indicates the set of jointly typical sequences. Transmitter 1 already knows $w_{\text{c}1}^{\brac{b-2}},w_{\text{c}1}^{\brac{b-1}},w_{\text{p}1}^{\brac{b-1}}$.
Also $w_{\text{c}2}^{\brac{b-2}}$ is assumed to be correctly decoded in the previous block at transmitter 1 and $w_{\text{c}1}^{\brac{b-2}}$ is assumed to be correctly decoded in the previous block at transmitter 2. The current noncoherent decoding at transmitter 1 is performed with vanishing error probability if \begin{equation} TR_{\text{c}2}\leq I\brac{\rline{\U_{2};\Y_{1}}\X_{1}}.\label{eq:fb_markov_eq1} \end{equation} Based on $\hat{w}_{\text{c}2}^{\brac{b-1}}$, transmitter 1 then sends $\X_{1}^{N}\brac{w_{\text{c}1}^{\brac{b-1}},\hat{w}_{\text{c}2}^{\brac{b-1}},w_{\text{c}1}^{\brac b},w_{\text{p}1}^{\brac b}}$. Similarly, transmitter 2 decodes $\hat{w}_{\text{c}1}^{\brac{b-1}}$ and then sends $\X_{2}^{N}\brac{w_{\text{c}2}^{\brac{b-1}},\hat{w}_{\text{c}1}^{\brac{b-1}},w_{\text{c}2}^{\brac b},w_{\text{p}2}^{\brac b}}$. \noindent\textbf{Decoding:} After receiving $B$ blocks, each receiver performs backward decoding. At receiver 1, block $b$ is decoded assuming block $b+1$ is correctly decoded. It finds the unique triplet $\brac{\hat{i},\hat{j},\hat{k}}$ such that \[ \brac{\U_{1}^{N}\brac{\hat{i},\hat{j},w_{\text{c}1}^{\brac b}},\X_{1}^{N}\brac{\hat{i},\hat{j},w_{\text{c}1}^{\brac b},\hat{k}},\U_{2}^{N}\brac{\hat{j},\hat{i},w_{\text{c}2}^{\brac b}},\Y_{1}^{N,\brac b}}\in A_{\epsilon}^{\brac N}. \] We analyze the error assuming $\brac{w_{\text{c}1}^{\brac{b-1}},w_{\text{c}2}^{\brac{b-1}},w_{\text{p}1}^{\brac b}}=\brac{1,1,1}$ was sent in blocks $b-1$ and $b$. We assume that there was no backward decoding error, \emph{i.e.}, $\brac{w_{\text{c}1}^{\brac b},w_{\text{c}2}^{\brac b}}$ was correctly decoded. Let $E_{ijk}$ be the event $\cbrac{\brac{\U_{1}^{N}\brac{i,j,w_{\text{c}1}^{\brac b}},\X_{1}^{N}\brac{i,j,w_{\text{c}1}^{\brac b},k},\U_{2}^{N}\brac{j,i,w_{\text{c}2}^{\brac b}},\Y_{1}^{N,\brac b}}\in A_{\epsilon}^{\brac N}}$ for given $i,j,k$. By AEP, the probability of $E_{111}$ approaches unity.
The error probability is then bounded using standard analysis as follows. \begin{align} \text{Pr}\brac{\union_{(i,j,k)\neq\brac{1,1,1}}E_{ijk}} & \leq\sum_{i\neq1,j\neq1,k\neq1}\text{Pr}\brac{E_{ijk}}+\sum_{i=1,j=1,k\neq1}\text{Pr}\brac{E_{ijk}}+\sum_{i=1,j\neq1,k=1}\text{Pr}\brac{E_{ijk}}\nonumber \\ & \quad+\sum_{i\neq1,j=1,k=1}\text{Pr}\brac{E_{ijk}}+\sum_{i\neq1,j\neq1,k=1}\text{Pr}\brac{E_{ijk}}+\sum_{i\neq1,j=1,k\neq1}\text{Pr}\brac{E_{ijk}}+\sum_{i=1,j\neq1,k\neq1}\text{Pr}\brac{E_{ijk}}\nonumber \\ & \leq2^{N\brac{TR_{\text{c}1}+TR_{\text{c}2}+TR_{\text{p}1}-I\brac{\X_{1},\U_{2};\Y_{1}}+\epsilon}}+2^{N\brac{TR_{\text{p}1}-I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}+\epsilon}}\nonumber \\ & \quad+2^{N\brac{TR_{\text{c}2}-I\brac{\X_{1},\U_{2};\Y_{1}}+\epsilon}}+2^{N\brac{TR_{\text{c}1}-I\brac{\X_{1},\U_{2};\Y_{1}}+\epsilon}}+2^{N\brac{TR_{\text{c}1}+TR_{\text{c}2}-I\brac{\X_{1},\U_{2};\Y_{1}}+\epsilon}}\nonumber \\ & \quad+2^{N\brac{TR_{\text{c}1}+TR_{\text{p}1}-I\brac{\X_{1},\U_{2};\Y_{1}}+\epsilon}}+2^{N\brac{TR_{\text{c}2}+TR_{\text{p}1}-I\brac{\X_{1},\U_{2};\Y_{1}}+\epsilon}}.\label{eq:fb_markov_eq2} \end{align} Combining (\ref{eq:fb_markov_eq1}) and (\ref{eq:fb_markov_eq2}), and carrying out a similar analysis for receiver 2, we get the following conditions for achievability: \begin{subequations}\label{eq:ach_fb_markov} \begin{align} TR_{\text{c}2} & \leq I\brac{\U_{2};\Y_{1}|\X_{1}},\label{eq:ach_fb_markov1}\\ TR_{\text{p}1} & \leq I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}},\label{eq:ach_fb_markov2}\\ T\brac{R_{\text{c}1}+R_{\text{c}2}+R_{\text{p}1}} & \leq I\brac{\X_{1},\U_{2};\Y_{1}},\label{eq:ach_fb_markov3}\\ TR_{\text{c}1} & \leq I\brac{\U_{1};\Y_{2}|\X_{2}},\label{eq:ach_fb_markov4}\\ TR_{\text{p}2} & \leq I\brac{\X_{2};\Y_{2}|\U_{2},\U_{1}},\label{eq:ach_fb_markov5}\\ T\brac{R_{\text{c}1}+R_{\text{c}2}+R_{\text{p}2}} & \leq I\brac{\X_{2},\U_{1};\Y_{2}}.\label{eq:ach_fb_markov6} \end{align} \end{subequations} After performing Fourier-Motzkin elimination similar to that in \cite[Appendix B]{suh_tse_fb_gaussian}, we obtain the following achievability region with
$R_{1}=R_{\text{c}1}+R_{\text{p}1},R_{2}=R_{\text{c}2}+R_{\text{p}2}$:\begin{subequations}\label{eq:ach_fb} \begin{align} TR_{1} & \leq I\brac{\X_{1},\U_{2};\Y_{1}},\label{eq:ach_fb1}\\ TR_{1} & \leq I\brac{\U_{1};\Y_{2}|\X_{2}}+I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}},\label{eq:ach_fb2}\\ TR_{2} & \leq I\brac{\X_{2},\U_{1};\Y_{2}},\label{eq:ach_fb3}\\ TR_{2} & \leq I\brac{\U_{2};\Y_{1}|\X_{1}}+I\brac{\X_{2};\Y_{2}|\U_{1},\U_{2}},\label{eq:ach_fb4}\\ T\brac{R_{1}+R_{2}} & \leq I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}+I\brac{\X_{2},\U_{1};\Y_{2}},\label{eq:ach_fb5}\\ T\brac{R_{1}+R_{2}} & \leq I\brac{\X_{2};\Y_{2}|\U_{1},\U_{2}}+I\brac{\X_{1},\U_{2};\Y_{1}}.\label{eq:ach_fb6} \end{align} \end{subequations} For power splitting, we adapt the idea of the simplified Han-Kobayashi scheme, where the private power is set such that the private signal is seen below the noise level at the other receiver. We choose $\U_{k}$ as a vector of length $T$ with i.i.d. $\mathcal{CN}\brac{0,\lambda_{c}}$ elements and $\X_{\text{p}k}$ as a vector of length $T$ with i.i.d. $\mathcal{CN}\brac{0,\lambda_{p}}$ elements for $k\in\cbrac{1,2}$, with $\X_{1}=\U_{1}+\X_{\text{p}1}$, $\X_{2}=\U_{2}+\X_{\text{p}2}$, where $\lambda_{c}+\lambda_{p}=1$ and $\lambda_{p}=\min\brac{1/\inr,1}$ similar to \cite{suh_tse_fb_gaussian,joyson_fading_TCOM}. The region (\ref{eq:ach_fb}), following \cite[Lemma 1]{suh_tse_fb_gaussian}, is still valid with $U=0$. For gDoF characterization, we can assume $\inr\geq1$. Hence we have $\lambda_{p}=1/\inr$. Now we analyze the terms in (\ref{eq:ach_fb}) for obtaining an achievable gDoF region. Note that the joint distribution of $\brac{\X_{1},\Y_{1},\U_{1},\X_{2},\Y_{2},\U_{2}}$ in its \emph{single-letter form} is the same as that for the nonfeedback case in Section \ref{subsec:no_fb}, hence we can use the inequalities from that section directly.
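The power split above can be made concrete with a small sketch: with $\lambda_{p}=\min\brac{1/\inr,1}$, the private signal arrives at the unintended receiver with power $\inr\lambda_{p}=1$, i.e., exactly at the noise level, whenever $\inr\geq1$:

```python
def power_split(inr):
    """Simplified Han-Kobayashi split: private power lambda_p = min(1/inr, 1),
    common power lambda_c = 1 - lambda_p, so total transmit power is 1."""
    lam_p = min(1.0/inr, 1.0)
    return 1.0 - lam_p, lam_p

# Received private-interference power at the unintended receiver is
# inr * lam_p, which equals 1 (the noise level) for all inr >= 1.
```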
\begin{claim} The term $I\brac{\U_{2};\Y_{1}|\X_{1}}$ is lower bounded in gDoF as \[ I\brac{\U_{2};\Y_{1}|\X_{1}}\geqdof\brac{T-1}\lgbrac{\inr}-\lgbrac{\min\brac{\snr,\inr}}. \] \label{claim:simplifyI(U2;Y1|X1)feedback} \end{claim} \begin{IEEEproof} We have \begin{align} h\brac{\rline{\Y_{1}}\X_{1}} & =h\brac{\rline{\boldsymbol{g}_{11}\X_{1}+\boldsymbol{g}_{21}\X_{2}+\Z_{1}\vphantom{a^{a^{a}}}}\X_{1}}\\ & =\sum_{i}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\cbrac{\boldsymbol{g}_{11}\x_{1,j}+\boldsymbol{g}_{21}\x_{2,j}+\z_{1,j}}_{j=1}^{i-1},\X_{1}}\\ & \overset{\brac i}{\geqdof}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\x_{2,1},\X_{1}}+\nonumber \\ & \qquad\sum_{i=2}^{T}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\X_{1},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}\\ & \overset{\brac{ii}}{\geqdof}\lgbrac{\snr+\inr}+\brac{T-1}\lgbrac{\inr},\label{eq:hY1|X1} \end{align} where $\brac i$ is due to the fact that conditioning reduces entropy and Markovity $\brac{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}}-\brac{\X_{1},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}-\brac{\cbrac{\boldsymbol{g}_{11}\x_{1,j}+\boldsymbol{g}_{21}\x_{2,j}+\z_{1,j}}_{j=1}^{i-1},\X_{1}}$ and $\brac{ii}$ is using Gaussianity for terms $h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\x_{2,1},\X_{1}}$, $h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\X_{1},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}$ and using Fact \ref{fact:Jensens_gap}. Using (\ref{eq:hY1|X1}) and \begin{align*} h\brac{\rline{\Y_{1}}\U_{2},\X_{1}} & \leqdof\lgbrac{\snr+\inr}+\lgbrac{\min\brac{\snr,\inr}} \end{align*} from Claim \ref{claim:h_Y1_U2_X1} completes the proof. 
\end{IEEEproof} Using the previous claim and the existing results from Table \ref{tab:gDoF-inner-bounds_noFB}, we have the inner bounds for the terms in the achievability region in Table \ref{tab:gDoF-inner-bounds_fb}. \begin{table}[H] \centering{}\caption{gDoF inner bounds for the terms in the achievability region\label{tab:gDoF-inner-bounds_fb}} \begin{tabular}{|c|c|} \hline Term & Lower bound in gDoF\tabularnewline \hline \hline $I\brac{\X_{1},\U_{2};\Y_{1}}$ & $\brac{T-1}\lgbrac{\snr+\inr}-\lgbrac{\min\brac{\snr,\inr}}$\tabularnewline \hline $I\brac{\U_{2};\Y_{1}|\X_{1}}$ & $\brac{T-1}\lgbrac{\inr}-\lgbrac{\min\brac{\snr,\inr}}$\tabularnewline \hline $I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}$ & $\begin{array}{c} \lgbrac{\frac{\snr}{\inr}+\min\brac{\snr,\inr}}+\brac{T-2}\lgbrac{1+\frac{\snr}{\inr}}\\ -\lgbrac{\min\brac{\snr,\inr}} \end{array}$\tabularnewline \hline \end{tabular} \end{table} In Table \ref{tab:gDoF-inner-bounds_FB_prelog} we obtain the prelog of the terms from Table \ref{tab:gDoF-inner-bounds_fb} for different regimes.
\begin{table}[H] \centering{}\caption{\label{tab:gDoF-inner-bounds_FB_prelog}Prelog of the lower bounds of the terms in the achievability region} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Term} & \multicolumn{3}{c|}{Prelog of lower bound}\tabularnewline \cline{2-4} \cline{3-4} \cline{4-4} & $\alpha<1/2$ & $1/2<\alpha<1$ & $1<\alpha$\tabularnewline \hline \hline $I\brac{\X_{1},\U_{2};\Y_{1}}$ & $\brac{T-1}-\alpha$ & $\brac{T-1}-\alpha$ & $\brac{T-1}\alpha-1$\tabularnewline \hline $I\brac{\U_{2};\Y_{1}|\X_{1}}$ & $\brac{T-2}\alpha$ & $\brac{T-2}\alpha$ & $\brac{T-1}\alpha-1$\tabularnewline \hline $I\brac{\X_{1};\Y_{1}|\U_{1},\U_{2}}$ & $\brac{T-1}\brac{1-\alpha}-\alpha$ & $\brac{T-2}\brac{1-\alpha}$ & $0$\tabularnewline \hline \end{tabular} \end{table} Using the prelogs of the lower bounds in the achievability region (\ref{eq:ach_fb}) and keeping only the active inequalities, it can be verified that the gDoF region in Table \ref{tab:gdof_FB} is achievable. \section{Conclusions and remarks \label{sec:Conclusions-and-remarks}} We studied the noncoherent IC with symmetric channel statistics. We proposed an achievability scheme based on noncoherent rate-splitting using the channel statistics. We derived the achievable gDoF using this scheme. We also considered the scheme that treats interference as noise (TIN) and the time division multiplexing (TDM) scheme. We demonstrated that a standard scheme which trains the links of the IC is not gDoF optimal. Depending on the relative strength of the interference, noncoherent rate-splitting, TIN, or TDM gave the best gDoF performance. \begin{comment} We also studied a noncoherent rate-splitting scheme for IC with feedback and proved that we can achieve larger gDoF than a standard training-based scheme. A simple outer bound is the coherent outer bound (assuming channel state information at receiver).
\end{comment} {} \begin{comment} We were unable to derive better outer bounds than the coherent outer bound, using the existing methods. \end{comment} {} A possible direction for further studies is to explore new techniques to derive non-trivial outer bounds that perform better than the coherent outer bounds. \appendices{} \section{Proof of Claim \ref{claim:h_Y1_U2_X1}\label{app:proof_h_Y1_U2_X1}} We have \begin{align} & h\brac{\Y_{1}|\U_{2},\X_{1}}\nonumber \\ & =h\brac{\boldsymbol{g}_{11}\X_{1}+\boldsymbol{g}_{21}\X_{2}+\Z_{1}|\U_{2},\X_{1}}\\ & \leq h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\u_{2,1},\x_{1,1}}\nonumber \\ & \qquad+h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{2},\X_{1}}\nonumber \\ & \qquad+\sum_{i=3}^{T}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2},\U_{2},\X_{1}}\nonumber \\ & \leqdof\lgbrac{1+\snr+\inr}+h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{2},\X_{1}}\nonumber \\ & \qquad+\sum_{i=3}^{T}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2},\U_{2},\X_{1}}.\label{eq:y1_given_u2_x1_expand} \end{align} Considering the second term in the previous expression, we have \begin{align} & h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{2},\X_{1}}\nonumber \\ &
=h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}\x_{1,2}+\boldsymbol{g}_{21}\x_{1,1}\x_{2,2}+\x_{1,1}\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{2},\X_{1}}-\expect{\lgbrac{\abs{\x_{1,1}}}}\nonumber \\ & \overset{\brac i}{\leq}h\brac{\boldsymbol{g}_{11}\x_{1,1}\x_{1,2}+\boldsymbol{g}_{21}\x_{1,1}\x_{2,2}+\x_{1,1}\z_{1,2}-\x_{1,2}\brac{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}}}-\expect{\lgbrac{\abs{\x_{1,1}}}}\nonumber \\ & =h\brac{\boldsymbol{g}_{21}\brac{\x_{1,1}\x_{2,2}-\x_{2,1}\x_{1,2}}+\x_{1,1}\z_{1,2}-\x_{1,2}\z_{1,1}}-\expect{\lgbrac{\abs{\x_{1,1}}}}\label{eq:elim1}\\ & \leq\lgbrac{\pi e\expect{\abs{\boldsymbol{g}_{21}\brac{\x_{1,1}\x_{2,2}-\x_{2,1}\x_{1,2}}+\x_{1,1}\z_{1,2}-\x_{1,2}\z_{1,1}}^{2}}}-\brac{1/2}\expect{\lgbrac{\abs{\x_{1,1}}^{2}}}\nonumber \\ & \overset{\brac{ii}}{\eqdof}\lgbrac{1+\inr},\label{eq:y1_given_u2_x1_part1} \end{align} where $\brac i$ is by subtracting $\x_{1,2}\brac{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}}$ which is available from conditioning and then using the fact that conditioning reduces entropy, $\brac{ii}$ is by using the property of Gaussians for i.i.d. $\boldsymbol{g}_{21},\x_{1,1},\x_{2,2},\x_{2,1},\x_{1,2},\z_{1,2},\z_{1,1}$ and Fact \ref{fact:Jensens_gap} on page \pageref{fact:Jensens_gap} for $\expect{\lgbrac{\abs{\x_{1,1}}^{2}}}$ since $\abs{\x_{1,1}}^{2}$ is exponentially distributed with mean $1$. 
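The constant discarded in $\brac{ii}$ can be checked by simulation: for $\abs{\x_{1,1}}^{2}$ exponentially distributed with mean $1$, $\expect{\lgbrac{\abs{\x_{1,1}}^{2}}}=-\gamma/\ln2\approx-0.83$ bits, a bounded constant that vanishes in gDoF. A minimal Monte Carlo sketch (the sample count and seed are illustrative):

```python
import math
import random

# |x_{1,1}|^2 ~ Exp(1): E[log2 |x|^2] equals -gamma/ln(2), where gamma is the
# Euler-Mascheroni constant; this is a bounded, gDoF-irrelevant constant.
random.seed(0)
N = 400_000
est = sum(math.log2(random.expovariate(1.0)) for _ in range(N)) / N
exact = -0.5772156649 / math.log(2)   # ~ -0.8327 bits
```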
We can also use \begin{align} & h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{2},\X_{1}}\nonumber \\ & =h\brac{\rline{\boldsymbol{g}_{11}\u_{2,1}\x_{1,2}+\boldsymbol{g}_{21}\u_{2,1}\x_{2,2}+\u_{2,1}\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{2},\X_{1}}-\expect{\lgbrac{\abs{\u_{2,1}}}}\nonumber \\ & \overset{\brac i}{\leq}h\brac{\boldsymbol{g}_{11}\u_{2,1}\x_{1,2}+\boldsymbol{g}_{21}\u_{2,1}\x_{2,2}+\u_{2,1}\z_{1,2}-\u_{2,2}\brac{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}}}-\expect{\lgbrac{\abs{\u_{2,1}}}}\nonumber \\ & =h\brac{\boldsymbol{g}_{11}\brac{\u_{2,1}\x_{1,2}-\x_{1,1}\u_{2,2}}+\boldsymbol{g}_{21}\brac{\u_{2,1}\x_{\text{p}2,2}-\u_{2,2}\x_{\text{p}2,1}}+\u_{2,1}\z_{1,2}-\u_{2,2}\z_{1,1}}\nonumber \\ & \quad-\expect{\lgbrac{\abs{\u_{2,1}}}}\nonumber \\ & \leqdof\lgbrac{\expect{\abs{\boldsymbol{g}_{11}\brac{\u_{2,1}\x_{1,2}-\x_{1,1}\u_{2,2}}+\boldsymbol{g}_{21}\brac{\u_{2,1}\x_{\text{p}2,2}-\u_{2,2}\x_{\text{p}2,1}}+\u_{2,1}\z_{1,2}-\u_{2,2}\z_{1,1}}^{2}}}\nonumber \\ & \quad-\brac{1/2}\expect{\lgbrac{\abs{\u_{2,1}}^{2}}}\nonumber \\ & \overset{\brac{ii}}{\eqdof}\lgbrac{1+\snr},\label{eq:h_y1_given_u2_x1_highsnr_part1} \end{align} where $\brac i$ is by subtracting $\u_{2,2}\brac{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}}$ which is available from conditioning and then using the fact that conditioning reduces entropy, $\brac{ii}$ is by using properties of i.i.d. Gaussians to evaluate the second moments and Fact \ref{fact:Jensens_gap} for $\expect{\lgbrac{\abs{\u_{2,1}}^{2}}}$ since $\abs{\u_{2,1}}^{2}$ is exponentially distributed with mean $1-1/\inr$.
Using (\ref{eq:y1_given_u2_x1_part1}) and (\ref{eq:h_y1_given_u2_x1_highsnr_part1}) in (\ref{eq:y1_given_u2_x1_expand}), we get \begin{align*} h\brac{\Y_{1}|\U_{2},\X_{1}} & \leqdof\lgbrac{1+\snr+\inr}+\lgbrac{1+\min\brac{\snr,\inr}}\\ & +\sum_{i=3}^{T}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2},\U_{2},\X_{1}}. \end{align*} Now, for $i\geq3$, we will show that \begin{equation} h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2},\U_{2},\X_{1}}\leqdof0,\label{eq:h(y1_given_u2_x1)_zeroterm} \end{equation} which will complete our proof. For (\ref{eq:h(y1_given_u2_x1)_zeroterm}), similar to the elimination done in (\ref{eq:elim1}), we have \begin{align} & h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2},\U_{2},\X_{1}}\nonumber \\ & \leq h\brac{\rline{\boldsymbol{g}_{21}\brac{\x_{1,1}\x_{2,i}-\x_{2,1}\x_{1,i}}+\x_{1,1}\z_{1,i}-\x_{1,i}\z_{1,1}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{21}\brac{\x_{1,1}\x_{2,2}-\x_{2,1}\x_{1,2}}+\x_{1,1}\z_{1,2}-\x_{1,2}\z_{1,1},\U_{2},\X_{1}}\nonumber \\ & \qquad-\expect{\lgbrac{\abs{\x_{1,1}}}}. \end{align} Now we have \begin{align*} & \boldsymbol{g}_{21}\brac{\x_{1,1}\x_{2,i}-\x_{2,1}\x_{1,i}}+\x_{1,1}\z_{1,i}-\x_{1,i}\z_{1,1}\\ & =\boldsymbol{g}_{21}\brac{\x_{1,1}\u_{2,i}-\u_{2,1}\x_{1,i}}+\brac{\boldsymbol{g}_{21}\brac{\x_{1,1}\x_{\text{p}2,i}-\x_{\text{p}2,1}\x_{1,i}}+\x_{1,1}\z_{1,i}-\x_{1,i}\z_{1,1}} \end{align*} in the entropy expression.
In the conditioning, the term \[ \boldsymbol{g}_{21}\brac{\x_{1,1}\u_{2,2}-\u_{2,1}\x_{1,2}}+\brac{\boldsymbol{g}_{21}\brac{\x_{1,1}\x_{\text{p}2,2}-\x_{\text{p}2,1}\x_{1,2}}+\x_{1,1}\z_{1,2}-\x_{1,2}\z_{1,1}} \] is available, along with $\U_{2}$ and $\X_{1}$. Hence, by elimination, we can get \begin{align} \xi & =\brac{\x_{1,1}\u_{2,2}-\u_{2,1}\x_{1,2}}\brac{\boldsymbol{g}_{21}\brac{\x_{1,1}\x_{\text{p}2,i}-\x_{\text{p}2,1}\x_{1,i}}+\x_{1,1}\z_{1,i}-\x_{1,i}\z_{1,1}}\nonumber \\ & \qquad-\brac{\x_{1,1}\u_{2,i}-\u_{2,1}\x_{1,i}}\brac{\boldsymbol{g}_{21}\brac{\x_{1,1}\x_{\text{p}2,2}-\x_{\text{p}2,1}\x_{1,2}}+\x_{1,1}\z_{1,2}-\x_{1,2}\z_{1,1}} \end{align} in the entropy expression. Let $\xi$ be expanded into a sum-of-products form \begin{align*} \xi & =\sum_{i=1}^{L}\xi_{i}\\ & =\x_{1,1}\u_{2,2}\boldsymbol{g}_{21}\x_{1,1}\x_{\text{p}2,i}+\brac{-\x_{1,1}\u_{2,2}\boldsymbol{g}_{21}\x_{\text{p}2,1}\x_{1,i}}+\cdots \end{align*} where each $\xi_{i}$ is a simple product term. Now, due to the generalized mean inequality \cite[Ch. 3]{means_inequalities_bullen}, we have \begin{align} \abs{\sum_{i=1}^{L}\xi_{i}}^{2} & \leq L\brac{\sum_{i=1}^{L}\abs{\xi_{i}}^{2}}. \end{align} Hence \begin{align} \expect{\abs{\xi}^{2}} & \leq L\brac{\sum_{i=1}^{L}\expect{\abs{\xi_{i}}^{2}}}. \end{align} Now, for example, consider the term $\expect{\abs{\x_{1,1}\u_{2,2}\boldsymbol{g}_{21}\x_{1,1}\x_{\text{p}2,i}}^{2}}$ in the last equation: \begin{align} \expect{\abs{\x_{1,1}\u_{2,2}\boldsymbol{g}_{21}\x_{1,1}\x_{\text{p}2,i}}^{2}} & =\expect{\abs{\x_{1,1}}^{4}}\expect{\abs{\u_{2,2}}^{2}}\expect{\abs{\boldsymbol{g}_{21}}^{2}}\expect{\abs{\x_{\text{p}2,i}}^{2}}\nonumber \\ & =2\times\brac{1-1/\inr}\times\inr\times\brac{1/\inr}\leq2. \end{align} Each of $\expect{\abs{\xi_{i}}^{2}}$ will be bounded by a constant since $\boldsymbol{g}_{21}$ always appears coupled with $\x_{\text{p}2,i}$.
Hence the power scaling $\expect{\abs{\boldsymbol{g}_{21}}^{2}}=\inr$ is canceled by the scaling $\expect{\abs{\x_{\text{p}2,i}}^{2}}=1/\inr$. Hence, by analyzing each of $\expect{\abs{\xi_{i}}^{2}}$ together with maximum entropy results, it can be shown that $\lgbrac{\expect{\abs{\xi}^{2}}}\leqdof0$ and hence $h\brac{\xi}\leqdof0$. Thus (\ref{eq:h(y1_given_u2_x1)_zeroterm}) is proved, which completes the proof of the claim. \section{Proof of Claim \ref{claim:h(Y1|U1,U2)} \label{app:h(Y1|U1,U2)proof}} We have \begin{align} h\brac{\Y_{1}|\U_{1},\U_{2}}= & h\brac{\rline{\boldsymbol{g}_{11}\X_{1}+\boldsymbol{g}_{21}\X_{2}+\Z_{1}\vphantom{a^{a^{a}}}}\U_{1},\U_{2}}\nonumber \\ = & \sum_{i}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\cbrac{\boldsymbol{g}_{11}\x_{1,j}+\boldsymbol{g}_{21}\x_{2,j}+\z_{1,j}}_{j=1}^{i-1},\U_{1},\U_{2}}\nonumber \\ \overset{\brac i}{\geq} & h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\x_{1,1},\x_{2,1},\U_{1},\U_{2}}\nonumber \\ & +h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{1},\U_{2}}\nonumber \\ & +\sum_{i=3}^{T}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\u_{1,i},\u_{2,i},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}\nonumber \\ \overset{\brac{ii}}{\geqdof} & \lgbrac{1+\snr+\inr}+h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{1},\U_{2}}\nonumber \\ & +\brac{T-2}\lgbrac{1+\frac{\snr}{\inr}},\label{eq:h_Y1_given_U1_U2_part1} \end{align} where $\brac i$ is due to the fact that conditioning reduces entropy and Markovity
$\brac{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}}-\brac{\u_{1,i},\u_{2,i},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}-\brac{\cbrac{\boldsymbol{g}_{11}\x_{1,j}+\boldsymbol{g}_{21}\x_{2,j}+\z_{1,j}}_{j=1}^{i-1},\U_{1},\U_{2}}$ and $\brac{ii}$ is using Gaussianity of the terms and using Fact \ref{fact:Jensens_gap}. In $\brac{ii}$ for the last term, we use \begin{align} h\brac{\rline{\boldsymbol{g}_{11}\x_{1,i}+\boldsymbol{g}_{21}\x_{2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\u_{1,i},\u_{2,i},\boldsymbol{g}_{21},\boldsymbol{g}_{11}} & \overset{\brac i}{=}h\brac{\rline{\boldsymbol{g}_{11}\x_{\text{p}1,i}+\boldsymbol{g}_{21}\x_{\text{p}2,i}+\z_{1,i}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{21},\boldsymbol{g}_{11}}\nonumber \\ & =\expect{\lgbrac{\pi e\brac{1+\frac{\abs{\boldsymbol{g}_{11}}^{2}}{\inr}+\frac{\abs{\boldsymbol{g}_{21}}^{2}}{\inr}}}}\nonumber \\ & \overset{\brac{ii}}{\geqdof}\lgbrac{1+\frac{\snr}{\inr}}.\label{eq:h_y1i|u1i,u2i,g21,g11} \end{align} $\brac i$ is by removing $\boldsymbol{g}_{11}\boldsymbol{u}_{1,i}+\boldsymbol{g}_{21}\boldsymbol{u}_{2,i}$ that is available in conditioning and because the private message parts $\x_{\text{p}1,i}$, $\x_{\text{p}2,i}$ are independent of the common message parts $\u_{1,i},\u_{2,i}$. The step $\brac{ii}$ is using Fact \ref{fact:Jensens_gap}. 
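The approximation in (\ref{eq:h_y1i|u1i,u2i,g21,g11}) can be checked by simulation: the gap between $\expect{\lgbrac{1+\abs{\boldsymbol{g}_{11}}^{2}/\inr+\abs{\boldsymbol{g}_{21}}^{2}/\inr}}$ and $\lgbrac{1+\snr/\inr}$ is a bounded constant. A Monte Carlo sketch (the operating point and sample count are illustrative):

```python
import math
import random

# |g11|^2 ~ Exp(mean snr), |g21|^2 ~ Exp(mean inr); compare the exact
# expectation against the closed-form log2(1 + snr/inr) used in the proof.
random.seed(0)
snr, inr = 2.0**20, 2.0**10     # illustrative operating point (alpha = 1/2)
N = 200_000
acc = 0.0
for _ in range(N):
    g11_sq = snr * random.expovariate(1.0)
    g21_sq = inr * random.expovariate(1.0)
    acc += math.log2(1 + g11_sq/inr + g21_sq/inr)
mc = acc / N
closed_form = math.log2(1 + snr/inr)
# mc and closed_form differ only by an O(1) constant, negligible in gDoF
```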
Now, \begin{align} & h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{1},\U_{2}}\nonumber \\ & \geq h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\X_{1},\X_{2},\U_{1},\U_{2}}\nonumber \\ & =h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2},\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\X_{1},\X_{2},\U_{1},\U_{2}}\nonumber \\ & \qquad-h\brac{\rline{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1}\vphantom{a^{a^{a}}}}\X_{1},\X_{2},\U_{1},\U_{2}}\\ & \overset{\brac i}{=}\expect{\lgbrac{\pi e\left|\begin{array}{cc} \snr\abs{\x_{1,2}}^{2}+\inr\abs{\x_{2,2}}^{2}+1 & \snr\x_{1,2}\x_{1,1}^{\dagger}+\inr\x_{2,2}\x_{2,1}^{\dagger}\\ \brac{\snr\x_{1,2}\x_{1,1}^{\dagger}+\inr\x_{2,2}\x_{2,1}^{\dagger}}^{\dagger} & \snr\abs{\x_{1,1}}^{2}+\inr\abs{\x_{2,1}}^{2}+1 \end{array}\right|}}\nonumber \\ & \quad-\expect{\lgbrac{1+\abs{\x_{2,1}}^{2}\inr+\abs{\x_{1,1}}^{2}\snr}}\\ & \geq\expect{\lgbrac{\snr\cdot\inr\brac{\abs{\x_{1,1}}^{2}\abs{\x_{2,2}}^{2}+\abs{\x_{1,2}}^{2}\abs{\x_{2,1}}^{2}-2\text{Re}\brac{\x_{1,2}\x_{1,1}^{\dagger}\x_{2,2}^{\dagger}\x_{2,1}}}}}\nonumber \\ & \quad-\lgbrac{1+\inr+\snr}\\ & =\lgbrac{\snr\cdot\inr}+\expect{\lgbrac{\abs{\x_{1,1}\x_{2,2}-\x_{1,2}\x_{2,1}}^{2}}}-\lgbrac{1+\inr+\snr}\nonumber \\ & \overset{\brac{ii}}{\eqdof}\lgbrac{\frac{\snr\cdot\inr}{1+\inr+\snr}}\label{eq:h_y12_given_y11_u1_u2_secondlaststep}\\ & \overset{}{\eqdof}\lgbrac{1+\min\brac{\snr,\inr}},\label{eq:h_y12_given_y11_u1_u2_part1} \end{align} where $\brac i$ is using the property of Gaussians, $\brac{ii}$ is using Fact \ref{fact:Jensens_gap} on page \pageref{fact:Jensens_gap} and the tower property of expectation for
$\expect{\lgbrac{\abs{\x_{1,1}\x_{2,2}-\x_{1,2}\x_{2,1}}^{2}}}$. Also \begin{align} & h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{1},\U_{2}}\nonumber \\ & \overset{\brac i}{\geq}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{1},\U_{2},\boldsymbol{g}_{11},\boldsymbol{g}_{21}}\nonumber \\ & \overset{\brac{ii}}{=}h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\U_{1},\U_{2},\boldsymbol{g}_{11},\boldsymbol{g}_{21}}\nonumber \\ & \overset{\brac{iii}}{\geqdof}\lgbrac{1+\frac{\snr}{\inr}},\label{eq:h_y12_given_y11_u1_u2_part2} \end{align} where $\brac i$ uses the fact that conditioning reduces entropy, $\brac{ii}$ is due to the Markov chain $\brac{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}}-\brac{\U_{1},\U_{2},\boldsymbol{g}_{21},\boldsymbol{g}_{11}}-\brac{\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{1},\U_{2}}$, and $\brac{iii}$ follows steps similar to those for (\ref{eq:h_y1i|u1i,u2i,g21,g11}). Now, combining (\ref{eq:h_y12_given_y11_u1_u2_part1}) and (\ref{eq:h_y12_given_y11_u1_u2_part2}), we get \begin{equation} h\brac{\rline{\boldsymbol{g}_{11}\x_{1,2}+\boldsymbol{g}_{21}\x_{2,2}+\z_{1,2}\vphantom{a^{a^{a}}}}\boldsymbol{g}_{11}\x_{1,1}+\boldsymbol{g}_{21}\x_{2,1}+\z_{1,1},\U_{1},\U_{2}}\geqdof\lgbrac{1+\frac{\snr}{\inr}+\min\brac{\snr,\inr}}. \end{equation} Hence, substituting the above equation into (\ref{eq:h_Y1_given_U1_U2_part1}), we get \begin{align*} h\brac{\Y_{1}|\U_{1},\U_{2}} & \overset{}{\geqdof}\lgbrac{1+\snr+\inr}+\lgbrac{1+\frac{\snr}{\inr}+\min\brac{\snr,\inr}}+\brac{T-2}\lgbrac{1+\frac{\snr}{\inr}}.
\end{align*} \section{Numerical Calculations for Achievable Rates\label{app:Numerical-Calculation}} Here we provide the calculations required for numerically evaluating the achievable rates given in Table \ref{tab:Comparison-of-rates}. \begin{comment} In the calculations below, the channels are scaled, so that\emph{ }the average power per transmit symbol from each antenna is unity. \end{comment} {} Gaussian codebooks are used in the training-based schemes. \subsection{Training-Based Rate-Splitting Scheme} For the training-based scheme, we send one symbol (of value 1) from transmitter 1 to train two of the channel parameters while keeping transmitter 2 OFF. Then, for training the other two channel parameters, the roles of the transmitters are reversed. After using two symbols for training, we have $\Y_{11,\text{train}}=\boldsymbol{g}_{11}+\boldsymbol{z}_{11}$, $\Y_{12,\text{train}}=\boldsymbol{g}_{21}+\boldsymbol{z}_{21}$ at receiver 1, and the minimum mean squared error (MMSE) estimates are obtained as \begin{align*} \hat{g}_{11} & =\frac{\expect{\abs{\boldsymbol{g}_{11}}^{2}}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}\Y_{11,\text{train}}\\ & =\expect{\abs{\boldsymbol{g}_{11}}^{2}}\frac{\boldsymbol{g}_{11}+\boldsymbol{z}_{11}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}, \end{align*} \begin{align*} \hat{g}_{21} & =\frac{\expect{\abs{\boldsymbol{g}_{21}}^{2}}}{1+\expect{\abs{\boldsymbol{g}_{21}}^{2}}}\Y_{12,\text{train}}\\ & =\expect{\abs{\boldsymbol{g}_{21}}^{2}}\frac{\boldsymbol{g}_{21}+\boldsymbol{z}_{21}}{1+\expect{\abs{\boldsymbol{g}_{21}}^{2}}}. \end{align*} Similar estimates $\hat{g}_{22}$, $\hat{g}_{12}$ are obtained at receiver 2.
The total noise at receiver 1, including the MMSE estimation error, is \begin{align*} N= & \expect{\abs{\boldsymbol{g}_{11}-\expect{\abs{\boldsymbol{g}_{11}}^{2}}\frac{\boldsymbol{g}_{11}+\boldsymbol{z}_{11}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}}^{2}}+\expect{\abs{\boldsymbol{g}_{21}-\expect{\abs{\boldsymbol{g}_{21}}^{2}}\frac{\boldsymbol{g}_{21}+\boldsymbol{z}_{21}}{1+\expect{\abs{\boldsymbol{g}_{21}}^{2}}}}^{2}}+1\\ = & \expect{\abs{\frac{\boldsymbol{g}_{11}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}}^{2}}+\frac{\expect{\abs{\boldsymbol{g}_{11}}^{2}}^{2}}{\abs{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}^{2}}\\ & +\expect{\abs{\frac{\boldsymbol{g}_{21}}{1+\expect{\abs{\boldsymbol{g}_{21}}^{2}}}}^{2}}+\frac{\expect{\abs{\boldsymbol{g}_{21}}^{2}}^{2}}{\abs{1+\expect{\abs{\boldsymbol{g}_{21}}^{2}}}^{2}}+1\\ = & \frac{\expect{\abs{\boldsymbol{g}_{11}}^{2}}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}+\frac{\expect{\abs{\boldsymbol{g}_{21}}^{2}}}{1+\expect{\abs{\boldsymbol{g}_{21}}^{2}}}+1. \end{align*} We assume symmetric statistics so that $\expect{\abs{\boldsymbol{g}_{11}}^{2}}=\expect{\abs{\boldsymbol{g}_{22}}^{2}}$ and $\expect{\abs{\boldsymbol{g}_{21}}^{2}}=\expect{\abs{\boldsymbol{g}_{12}}^{2}}$, and hence the total noise is the same at both receivers.
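The closed-form simplification of the total noise above is easy to verify numerically. A minimal standalone sketch (variable names are mine; `G11` and `G21` stand for $\expect{\abs{\boldsymbol{g}_{11}}^{2}}$ and $\expect{\abs{\boldsymbol{g}_{21}}^{2}}$):

```python
# Total effective noise at receiver 1 after 2-symbol MMSE training:
# N = G11/(1+G11) + G21/(1+G21) + 1, with G11 = E[|g11|^2], G21 = E[|g21|^2].
def total_noise(G11: float, G21: float) -> float:
    # Each trained coefficient contributes an MMSE error variance G/(1+G),
    # on top of the unit-variance thermal noise.
    return G11 / (1.0 + G11) + G21 / (1.0 + G21) + 1.0

# Long-hand form from the intermediate step of the derivation:
def total_noise_longhand(G11: float, G21: float) -> float:
    return (G11 / (1.0 + G11) ** 2 + G11 ** 2 / (1.0 + G11) ** 2
            + G21 / (1.0 + G21) ** 2 + G21 ** 2 / (1.0 + G21) ** 2 + 1.0)

# The two expressions agree for any channel statistics:
for G11, G21 in [(1.0, 0.5), (100.0, 10.0), (1e4, 1e2)]:
    assert abs(total_noise(G11, G21) - total_noise_longhand(G11, G21)) < 1e-9
```

Note that $N \to 3$ as both training SNRs grow, i.e. the estimation error never exceeds one unit of extra noise per trained coefficient.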
Following Theorem 8 of {[}14{]} and Appendix A of \cite{joyson_fading_TCOM}, using rate-splitting with $\lambda=\min\brac{1/\inr,1}$ and using the symmetry of the channel statistics for our case, the rates \begin{align} R_{1},R_{2} & \leq\expect{\lgbrac{1+\abs{\boldsymbol{g}_{11}}^{2}+\lambda\abs{\boldsymbol{g}_{21}}^{2}}}-r,\label{eq:inner_nofb1}\\ R_{1} & +R_{2}\leq\expect{\lgbrac{1+\abs{\boldsymbol{g}_{22}}^{2}+\abs{\boldsymbol{g}_{12}}^{2}}}\nonumber \\ & \quad+\expect{\lgbrac{1+\lambda\abs{\boldsymbol{g}_{11}}^{2}+\lambda\abs{\boldsymbol{g}_{21}}^{2}}}-2r,\label{eq:inner_nofb3}\\ R_{1} & +R_{2}\leq2\expect{\lgbrac{1+\lambda\abs{\boldsymbol{g}_{11}}^{2}+\abs{\boldsymbol{g}_{21}}^{2}}}-2r,\\ R_{1}+2R_{2},2R_{1} & +R_{2}\leq\expect{\lgbrac{1+\abs{\boldsymbol{g}_{11}}^{2}+\abs{\boldsymbol{g}_{21}}^{2}}}\nonumber \\ & \quad+2\expect{\lgbrac{1+\lambda\abs{\boldsymbol{g}_{11}}^{2}+\lambda\abs{\boldsymbol{g}_{21}}^{2}}}-3r\label{eq:inner_nofb6} \end{align} are achievable, where $r=\expect{\lgbrac{1+\lambda\abs{\boldsymbol{g}_{21}}^{2}}}$, assuming perfect channel knowledge.
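Each expectation in these bounds reduces to a one-dimensional average over the channel statistics, so the bounds are straightforward to evaluate by Monte Carlo. A sketch for the first bound, under the illustrative assumption of Rayleigh fading with $\expect{\abs{\boldsymbol{g}_{11}}^{2}}=\snr$ and $\expect{\abs{\boldsymbol{g}_{21}}^{2}}=\inr$ (function and variable names are mine):

```python
import numpy as np

# Monte Carlo evaluation of the perfect-CSI rate-splitting bound
#   R1 <= E[log2(1 + |g11|^2 + lam*|g21|^2)] - r,
# with r = E[log2(1 + lam*|g21|^2)] and lam = min(1/INR, 1).
rng = np.random.default_rng(1)
n = 200_000

def rs_bound_r1(snr: float, inr: float) -> float:
    lam = min(1.0 / inr, 1.0)
    g11_2 = rng.exponential(scale=snr, size=n)  # |g11|^2 samples, mean SNR
    g21_2 = rng.exponential(scale=inr, size=n)  # |g21|^2 samples, mean INR
    r = np.mean(np.log2(1.0 + lam * g21_2))     # common-message penalty term
    return float(np.mean(np.log2(1.0 + g11_2 + lam * g21_2)) - r)

# With lam = 1/INR the scaled interference lam*|g21|^2 has unit mean, so the
# penalty r stays bounded (about a bit) while the first term grows with SNR.
rate = rs_bound_r1(snr=100.0, inr=10.0)
assert rate > 0.0
```

The same pattern evaluates the remaining bounds, and replacing $1$ by $N$ and the true channels by the MMSE estimates gives the training-based versions below.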
With 2 symbols for training and using the MMSE estimates, we have the modified formulas for the achievable rates \begin{align} R_{1},R_{2} & \leq\brac{1-\frac{2}{T}}\brac{\expect{\lgbrac{N+\abs{\hat{\g}_{11}}^{2}+\lambda\abs{\hat{\g}_{21}}^{2}}}-r'},\label{eq:inner_nofb1-1}\\ R_{1} & +R_{2}\leq\brac{1-\frac{2}{T}}\left(\expect{\lgbrac{N+\abs{\hat{\g}_{22}}^{2}+\abs{\hat{\g}_{12}}^{2}}}\right.\nonumber \\ & \quad+\left.\expect{\lgbrac{N+\lambda\abs{\hat{\g}_{11}}^{2}+\lambda\abs{\hat{\g}_{21}}^{2}}}-2r'\right),\label{eq:inner_nofb3-1}\\ R_{1} & +R_{2}\leq\brac{1-\frac{2}{T}}2\brac{\expect{\lgbrac{N+\lambda\abs{\hat{\g}_{11}}^{2}+\abs{\hat{\g}_{21}}^{2}}}-r'},\\ R_{1}+2R_{2},2R_{1} & +R_{2}\leq\brac{1-\frac{2}{T}}\left(\expect{\lgbrac{N+\abs{\hat{\g}_{11}}^{2}+\abs{\hat{\g}_{21}}^{2}}}\right.\nonumber \\ & \quad\left.+2\expect{\lgbrac{N+\lambda\abs{\hat{\g}_{11}}^{2}+\lambda\abs{\hat{\g}_{21}}^{2}}}-3r'\right)\label{eq:inner_nofb6-1} \end{align} with $r'=\expect{\lgbrac{N+\lambda\abs{\hat{\g}_{21}}^{2}}}$. Also note that the $\snr$ for the simulation is the same as $\expect{\abs{\boldsymbol{g}_{11}}^{2}}=\expect{\abs{\boldsymbol{g}_{22}}^{2}}$. \subsection{TDM Scheme} In this case, we operate the first transmitter-receiver pair during half of the time, while the second pair remains OFF. During the other half of the time, the second transmitter-receiver pair operates and the first pair remains OFF. Here, we just have point-to-point channels, and we use one symbol for training each point-to-point channel. For receiver 1 we have the MMSE estimate for the channel as \begin{align*} \hat{g}_{11} & =\frac{\expect{\abs{\boldsymbol{g}_{11}}^{2}}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}\Y_{11,\text{train}}\\ & =\expect{\abs{\boldsymbol{g}_{11}}^{2}}\frac{\boldsymbol{g}_{11}+\boldsymbol{z}_{11}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}.
\end{align*} The total noise at receiver 1, including the MMSE estimation error, is \begin{align*} N_{\text{1,TDM}}=N_{\text{TDM}}= & \expect{\abs{\boldsymbol{g}_{11}-\expect{\abs{\boldsymbol{g}_{11}}^{2}}\frac{\boldsymbol{g}_{11}+\boldsymbol{z}_{11}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}}^{2}}+1\\ = & \expect{\abs{\frac{\boldsymbol{g}_{11}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}}^{2}}+\frac{\expect{\abs{\boldsymbol{g}_{11}}^{2}}^{2}}{\abs{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}^{2}}+1\\ = & \frac{\expect{\abs{\boldsymbol{g}_{11}}^{2}}}{1+\expect{\abs{\boldsymbol{g}_{11}}^{2}}}+1. \end{align*} The terms for receiver 2 are similar. Using the symmetry of the statistics, the achievable rates are calculated as \begin{align*} R_{\text{1}}=R_{\text{2}}=\frac{1}{2}\brac{1-\frac{1}{T}} & \expect{\lgbrac{1+\frac{\abs{\hat{\boldsymbol{g}}_{11}}^{2}}{N_{\text{TDM}}}}}. \end{align*} \bibliographystyle{IEEEtran}
\section{Radiative corrections of order \boldmath$R_\infty\alpha^4$.} For the required relative accuracy of $\sim\!10^{-10}-10^{-11}$, recoil corrections of orders $R_\infty\alpha^4(m/M)$ and higher are small and may be neglected. This allows the calculation of higher-order corrections for the Coulomb three-body system to be reduced to the problem of a bound electron in an external field. The radiative corrections of order $R_\infty\alpha^4$ in the {\em external field} approximation are known in analytic form \cite{SapYen,Eides01}: \begin{equation} \begin{array}{@{}l} \displaystyle E_{se}^{(4)} = \alpha^4 \frac{4\pi}{m_e^2} \left(\frac{139}{128}-\frac{1}{2}\ln{2}\right) \left\langle Z_1^2\delta(\mathbf{r}_1)\!+\!Z_2^2\delta(\mathbf{r}_2) \right\rangle, \\[4mm]\displaystyle E_{anom}^{(4)} = \alpha^2\frac{\pi}{m_e^2} \left[ \left(\frac{\alpha}{\pi}\right)^2 \left( \frac{197}{144}+\frac{\pi^2}{12}-\frac{\pi^2}{2}\ln{2} +\frac{3}{4}\zeta(3) \right) \right] \left\langle Z_1\delta(\mathbf{r}_1)\!+\!Z_2\delta(\mathbf{r}_2) \right\rangle, \\[4mm]\displaystyle E_{vp}^{(4)} = \frac{4\alpha^3}{3m_e^2} \left[\frac{5\pi\alpha}{64}\right] \left\langle Z_1^2\delta(\mathbf{r}_1)\!+\!Z_2^2\delta(\mathbf{r}_2) \right\rangle, \\[4mm]\displaystyle E_{2loop}^{(4)} = \frac{\alpha^4}{m_e^2\pi} \left[ -\frac{6131}{1296}-\frac{49\pi^2}{108}+2\pi^2\ln{2}-3\zeta(3) \right] \left\langle Z_1\delta(\mathbf{r}_1)\!+\!Z_2\delta(\mathbf{r}_2) \right\rangle. \end{array} \end{equation} The last equation includes both the Dirac form factor and the polarization operator contributions. \begin{figure}[t] \caption{Adiabatic "effective" potentials for the relativistic $m\alpha^6$ order correction for the $\mbox{H}_2^+$ molecular ion ($Z_1=Z_2=1$).
Energies are in $\hbox{(atomic units)}\times\alpha^4$.}\label{ef_pot} \begin{center} \vspace{-5mm} \includegraphics[width=70mm]{a6sum.eps} \vspace{-5mm} \end{center} \end{figure} \section{Relativistic corrections of order \boldmath$R_\infty\alpha^4$.} The most problematic contribution of $R_\infty\alpha^4$ order is the relativistic correction for a Dirac electron. It can be obtained within the adiabatic two-center approximation as follows (for details, see \cite{KorJPB07}). We start from the nonrelativistic Schr\"odinger equation with the Hamiltonian: \begin{equation} H_0 = \frac{p^2}{2m}+V, \qquad V=-\frac{Z_1}{r_1}-\frac{Z_2}{r_2}. \end{equation} The total contribution to the energy of a bound electron at the $R_\infty\alpha^4\sim m_ec^2\alpha^6$ order is defined by \begin{equation}\label{E4} \Delta E^{(6)} = \left\langle H_B Q (E_0-H_0)^{-1} Q H_B \right\rangle +\left\langle H^{(6)} \right\rangle. \end{equation} Here $H^{(6)}$ is the effective Hamiltonian for the interaction of an electron with the external field of the two centers at this order, which can be expressed in the form: \begin{equation}\label{h6} \begin{array}{@{}l} \displaystyle H^{(6)} = \frac{p^6}{16m^5} +\frac{(\boldsymbol{\mathcal{E}}_1\!+\!\boldsymbol{\mathcal{E}}_2)^2} {8m^3} -\frac{3\pi}{16m^4} \Bigl\{ p^2\bigl[\rho_1\!+\!\rho_2\bigr]+ \bigl[\rho_1\!+\!\rho_2\bigr]p^2 \Bigr\} +\frac{5}{128m^4}\left(p^4V\!+\!Vp^4\right) -\frac{5}{64m^4}\left(p^2Vp^2\right), \\[3mm]\displaystyle\hspace{80mm} \boldsymbol{\mathcal{E}}_i=-Z_i\mathbf{r}_i/r_i^3, \qquad \rho_i=Z_i\delta(\mathbf{r}_i) \end{array} \end{equation} and $H_B$ is the Breit--Pauli interaction: \begin{equation} H_B = -\frac{p^4}{8m^3} + \frac{\pi}{2m^2}[Z_1\delta(\mathbf{r}_1)+Z_2\delta(\mathbf{r}_2)] + \left( Z_1\frac{[\mathbf{r}_1\times\mathbf{p}]}{2m^2r_1^3}+ Z_2\frac{[\mathbf{r}_2\times\mathbf{p}]}{2m^2r_2^3} \right)\mathbf{s}\>. \end{equation} Both terms in Eq.~(\ref{E4}) are divergent.
In order to remove the infinities, a transformation of the second-order term can be applied which separates out the divergent part: \begin{equation}\label{trans} \left\{ \begin{array}{@{}l} H'_B = H_B+(H_0-E_0)U+U(H_0-E_0)\\[2mm] \displaystyle \left\langle H_B Q (E_0-H_0)^{-1} Q H_B \right\rangle = \left\langle H'_B Q (E_0-H_0)^{-1} Q H'_B \right\rangle \\[2mm]\displaystyle\hspace{20mm} +{\left\langle UH_B\!+\!H_BU \right\rangle -2\left\langle U \right\rangle \left\langle H_B \right\rangle +\left\langle U(H_0-E_0)U \right\rangle} \end{array} \right. \end{equation} with $U=\frac{1}{4m}\left[Z_1/r_1+Z_2/r_2\right]=-\frac{1}{4m}V$. The last three terms of the second equation in (\ref{trans}) can be recast in the form of a new effective interaction: \begin{equation} \begin{array}{@{}l} \displaystyle H'^{(6)} = (UH_B+H_BU)-2U\langle H_B \rangle-U(E_0-H_0)U \\[2mm]\displaystyle\hspace{40mm} = \frac{p^4V\!+\!Vp^4}{32m^4} -\frac{\pi V\!\left[\rho_1\!+\!\rho_2\right]}{4m^3} +\frac{(\boldsymbol{\mathcal{E}}_1\!+\!\boldsymbol{\mathcal{E}}_2)^2} {32m^3} +\frac{V}{2m} \left\langle H_B \right\rangle. \end{array} \end{equation} Here $\rho_i=Z_i\delta(\mathbf{r}_i)$ and $\boldsymbol{\mathcal{E}}_i=-Z_i\mathbf{r}_i/r_i^3$. Taking into account that $\Psi_0$ is a solution of the Schr\"odinger equation $H_0\Psi_0 = E_0\Psi_0$, one may obtain from the above the following finite expression \cite{KorJPB07}: \begin{equation} \begin{array}{@{}l} \displaystyle \Delta E^{(6)} = \left\langle H'_B Q (E_0-H_0)^{-1} Q H'_B \right\rangle +\Bigl\langle H^{(6)} \Bigr\rangle +{\Bigl\langle H'^{(6)} \Bigr\rangle} = \left\langle H'_B Q (E_0-H_0)^{-1} Q H'_B \right\rangle \\[2mm]\displaystyle\hspace{25mm} +\frac{3E_0\left\langle V^2 \right\rangle}{4m^2} -\frac{5E_0^2\left\langle V \right\rangle}{4m^2} -\frac{3\pi E_0\left\langle(\rho_1+\rho_2)\right\rangle}{4m^3} +\frac{\left\langle\mathbf{p}V^2\mathbf{p}\right\rangle}{8m^3} +\frac{\left\langle V \right\rangle\left\langle H_B \right\rangle}{2m} +\frac{E_0^3}{2m^2}.
\end{array} \end{equation} This new expression, along with the modified second-order iteration, can now be calculated numerically. "Effective" potentials $\Delta E^{(6)}(R)$ have been obtained for different bond lengths in \cite{KorJPB07}. Results are shown in Figure~\ref{ef_pot}. Averaging them over the radial wave function of a particular state, one obtains the corresponding contribution of order $R_\infty\alpha^4$ to the energy of that state. Results of the numerical calculation of the relativistic corrections at this order are presented in Tables \ref{H2plus_a6} and \ref{HDplus_a6}. For the transition frequency, this adiabatic approach provides about 3 significant digits. \begin{table}[t] \begin{center} \caption{Relativistic corrections of $R_\infty m\alpha^4$ order (in units $c^4\!\times\!(1\>\mbox{a.u.})$), $\mbox{H}_2^+$.} \label{H2plus_a6} \begin{tabular}{c@{\hspace{5mm}}ccccc} \hline\hline \vrule width0pt height 11pt & $v=0$ & $v=1$ & $v=2$ & $v=3$ & $v=4$ \\ \hline \vrule width0pt height 11pt $L\!=\!0$ & $-$0.042097 & $-$0.042908 & $-$0.043786 & $-$0.044732 & $-$0.045729 \\ $L\!=\!1$ & $-$0.042100 & $-$0.042912 & $-$0.043792 & $-$0.044740 & $-$0.045738 \\ $L\!=\!2$ & $-$0.042107 & $-$0.042922 & $-$0.043805 & $-$0.044757 & $-$0.045756 \\ $L\!=\!3$ & $-$0.042117 & $-$0.042938 & $-$0.043825 & $-$0.044782 & $-$0.045783 \\ $L\!=\!4$ & $-$0.042133 & $-$0.042959 & $-$0.043854 & $-$0.044818 & $-$0.045820 \\ \hline\hline \end{tabular} \end{center} \end{table} \begin{table}[t] \begin{center} \caption{Relativistic corrections of $R_\infty m\alpha^4$ order (in units $c^4\!\times\!(1\>\mbox{a.u.})$), $\mbox{HD}^+$.} \label{HDplus_a6} \begin{tabular}{c@{\hspace{5mm}}ccccc} \hline\hline \vrule width0pt height 11pt & $v=0$ & $v=1$ & $v=2$ & $v=3$ & $v=4$ \\ \hline \vrule width0pt height 11pt $L\!=\!0$ & $-$0.042043 & $-$0.042738 & $-$0.043483 & $-$0.044278 & $-$0.045126 \\ $L\!=\!1$ & $-$0.042045 & $-$0.042741 & $-$0.043487 & $-$0.044284 & $-$0.045132 \\ $L\!=\!2$ &
$-$0.042748 & $-$0.043496 & $-$0.044295 & $-$0.045146 \\ $L\!=\!3$ & $-$0.042058 & $-$0.042759 & $-$0.043510 & $-$0.044312 & $-$0.045167 \\ $L\!=\!4$ & $-$0.042069 & $-$0.042773 & $-$0.043529 & $-$0.044336 & $-$0.045195 \\ \hline\hline \end{tabular} \end{center} \end{table} \section{Higher order radiative corrections.} The electron ground-state wave function may, to a good approximation, be written as $\psi_e(\mathbf{r}_e) = C[\psi_{1s}(\mathbf{r}_1)+\psi_{1s}(\mathbf{r}_2)]$, where $\psi_{1s}$ is the hydrogen ground-state wave function. The most important $R_\infty\alpha^5$ order contributions can therefore be evaluated using this approximate wave function and the expressions: \begin{equation}\label{a5} \begin{array}{@{}l} \displaystyle E_{se}^{(5)} = \alpha^5\sum_{i=1,2} \left\{ \frac{Z_i^3}{m_e^2} \Biggl[ -\ln^2{\frac{1}{(Z_i\alpha)^2}} +A_{61}\ln{\frac{1}{(Z_i\alpha)^2}} +A_{60} \Biggr] \left\langle\delta(\mathbf{r}_i)\right\rangle \right\}, \\[4mm]\displaystyle E_{2loop}^{(5)} = \frac{\alpha^5}{\pi m_e^2} \left[ B_{50} \right] \left\langle Z_1^2\delta(\mathbf{r}_1)\!+\!Z_2^2\delta(\mathbf{r}_2) \right\rangle, \end{array} \end{equation} where the constants $A_{61}$, $A_{60}$, and $B_{50}$ are taken equal to the constants of the $1s$ state of the hydrogen atom: $A_{61}=5.419\dots$ \cite{Lazer60}, $A_{60}=-30.924\dots$ \cite{Pac93}, and $B_{50}=-21.556\dots$ \cite{b50}. It is worth noting that the leading contribution ($R_\infty\alpha^5\ln^2\alpha$) is exact.
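All the bracketed constants above are closed forms and are easy to evaluate directly. A small numerical sketch (my own; the $A_{61}$, $A_{60}$ values are the quoted hydrogen $1s$ constants, and the fine-structure constant is taken as $\alpha^{-1}=137.036$):

```python
import math

# Riemann zeta(3) by direct summation; the tail beyond N is ~ 1/(2 N^2).
zeta3 = sum(1.0 / n**3 for n in range(1, 200_001))  # ~ 1.2020569

# Bracketed constants of the order-alpha^4 radiative corrections:
c_se = 139.0 / 128.0 - 0.5 * math.log(2.0)          # one-loop self-energy
c_anom = (197.0 / 144.0 + math.pi**2 / 12.0
          - (math.pi**2 / 2.0) * math.log(2.0) + 0.75 * zeta3)
c_2loop = (-6131.0 / 1296.0 - 49.0 * math.pi**2 / 108.0
           + 2.0 * math.pi**2 * math.log(2.0) - 3.0 * zeta3)

# Order-alpha^5 one-loop self-energy bracket for Z = 1, using the quoted
# hydrogen 1s constants A61 and A60:
alpha = 1.0 / 137.035999
L = math.log(1.0 / alpha**2)          # ln(1/(Z alpha)^2) for Z = 1
A61, A60 = 5.419, -30.924
se5_bracket = -L**2 + A61 * L + A60   # the exact -ln^2 term dominates

assert se5_bracket < 0.0
```

The negative sign of `se5_bracket` shows that, for $Z=1$, the exact $-\ln^2$ term outweighs the single-log and constant pieces.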
\section{Results and Conclusion} \begin{table} \caption{Summary of contributions to the $(v\!=\!0,L\!=\!0)\!\to\!(v'\!=\!1,L'\!=\!0)$ transition frequency (in MHz).} \label{summary} \begin{center} \begin{tabular}{l@{\hspace{12mm}}d@{\hspace{12mm}}d} \hline\hline \vrule height 10.5pt width 0pt depth 3.5pt & \mbox{H}_2^+ & \mbox{HD}^+ \\ \hline \vrule height 10pt width 0pt $\Delta E_{nr}$ & 65\,687\,511.0686 & 57\,349\,439.9717 \\ $\Delta E_{\alpha^2}$ & 1091.041(03) & 958.152(03) \\ $\Delta E_{\alpha^3}$ & -276.544(02) & -242.125(02) \\ $\Delta E_{\alpha^4}$ & -1.997 & -1.748 \\ $\Delta E_{\alpha^5}$ & 0.120(23) & 0.105(19) \\ \hline \vrule height 10pt width 0pt $\Delta E_{tot}$& 65\,688\,323.688(25) & 57\,350\,154.355(21)\\ \hline\hline \end{tabular} \end{center} \end{table} Various contributions to the frequency interval of the fundamental transition are summarized in Table \ref{summary}. The uncertainties in orders $R_\infty\alpha^2$ and $R_\infty\alpha^3$ are primarily due to numerical uncertainty in the calculation of leading-order terms such as $\langle \mathbf{p}^4\rangle$ in the Breit-Pauli Hamiltonian, or the Bethe logarithm, $\beta(L,v)$ (see Refs.~\cite{HD_BL,H2_BL} for details), and can be improved by more extensive calculations. We estimate the uncertainty due to the finite size of the nuclei as $\sim\!3\cdot10^{-4}$ MHz for these transitions, so it is still negligible for ro-vibrational spectroscopy. For the contribution of order $R_\infty\alpha^5$, the error bars are determined by the total contribution of the terms with coefficients $A$ and $B$ in Eq.~(\ref{a5}). Recently, the $(v,L)\!: (0,2)\!\to\!(4,3)$ ro-vibrational transition of the $\mbox{HD}^+$ ion was precisely measured in an experiment at the University of D\"usseldorf \cite{Sch07}.
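As an arithmetic cross-check of Table \ref{summary}, the quoted totals are simply the sums of the listed contributions (values copied from the table, in MHz):

```python
# Column sums of the summary table: the total transition frequency is the
# sum of the nonrelativistic term and the alpha^2 ... alpha^5 corrections.
h2_terms = [65_687_511.0686, 1091.041, -276.544, -1.997, 0.120]  # H2+
hd_terms = [57_349_439.9717, 958.152, -242.125, -1.748, 0.105]   # HD+

h2_total = sum(h2_terms)
hd_total = sum(hd_terms)

assert abs(h2_total - 65_688_323.688) < 1e-2   # H2+ total from the table
assert abs(hd_total - 57_350_154.355) < 1e-2   # HD+ total from the table
```

Both totals reproduce the tabulated values to the quoted sub-kHz rounding.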
Comparison with the theoretical calculation demonstrates very good agreement: \[ \begin{array}{@{}r@{\,}l} E_{\rm exp}&=214\,978\,560.6(5) \mbox{ MHz} \\[0.5mm] E_{\rm th}&=214\,978\,560.88(7) \mbox{ MHz} \end{array} \] In conclusion, the relativistic corrections of order $R_\infty\alpha^4$ allow the relative uncertainty of the fundamental transition frequency in $\mbox{H}_2^+$ to be reduced to about $3\cdot10^{-9}$, or 0.3 ppb. We expect to achieve further improvement by numerically estimating the coefficients $A_{61}$, $A_{60}$, and $B_{50}$ of Eq.~(\ref{a5}) using the two-center adiabatic (or external field) approximation. That may reduce the final uncertainty by a factor of $5$-$10$ and the relative uncertainty to less than $10^{-10}$. Eventually, this will make the main goal of our studies attainable: improving the determination of the $m_p/m_e$ mass ratio from the ro-vibrational spectroscopy of $\mbox{H}_2^+$ and $\mbox{HD}^+$. \section{Acknowledgement} The author wants to express his gratitude to L.~Hilico and K.~Pachucki for helpful remarks. The support of the Russian Foundation for Basic Research under grant No. 05-02-16618 is gratefully acknowledged.
package group.pals.android.lib.ui.filechooser.utils;

import group.pals.android.lib.ui.filechooser.R;

import android.app.Dialog;
import android.content.Context;
import android.content.Intent;
import android.net.Uri;
import android.view.ContextThemeWrapper;
import android.view.View;
import android.view.Window;
import android.widget.TextView;

/**
 * Something funny :-)
 *
 * @author Hai Bison
 */
public class E {

    /**
     * Shows it!
     *
     * @param context
     *            {@link Context}
     */
    public static void show(Context context) {
        String msg = null;
        try {
            msg = String.format("Hi :-)\n\n" + "%s v%s\n"
                    + "…by Hai Bison Apps\n\n" + "http://www.haibison.com\n\n"
                    + "Hope you enjoy this library.", "android-filechooser",
                    "5.1 beta");
        } catch (Exception e) {
            msg = "Oops… You've found a broken Easter egg, try again later :-(";
        }

        final Context ctw = new ContextThemeWrapper(context,
                R.style.Afc_Theme_Dialog_Dark);
        final int padding = ctw.getResources().getDimensionPixelSize(
                R.dimen.afc_10dp);

        TextView textView = new TextView(ctw);
        textView.setText(msg);
        textView.setPadding(padding, padding, padding, padding);
        textView.setOnClickListener(new View.OnClickListener() {

            @Override
            public void onClick(View v) {
                try {
                    ctw.startActivity(new Intent(Intent.ACTION_VIEW, Uri
                            .parse("http://www.haibison.com")));
                } catch (Throwable t) {
                    /*
                     * Ignore it.
                     */
                }
            }// onClick()
        });

        Dialog dialog = new Dialog(ctw, R.style.Afc_Theme_Dialog_Dark);
        dialog.requestWindowFeature(Window.FEATURE_NO_TITLE);
        dialog.setCanceledOnTouchOutside(true);
        dialog.setContentView(textView);
        dialog.show();
    }// show()
}
Biography

Vocally a bass, Cartagenova was also the interpreter of numerous baritone roles. He is remembered in particular for his interpretive skill. A native of Genoa, nothing is known of Cartagenova's musical training. He made his debut at La Scala in Milan on 23 August 1823 as Ircano in the premiere of Ricciardo e Zoraide with Brigida Lorenzani, followed in September by the role of Elmiro in the premiere of Gioachino Rossini's Otello, conducted by Alessandro Rolla, and in November by Orbazzano in the premiere of Tancredi, conducted by Rolla with Lorenzani. He came into contact with the leading Italian composers of the era, including Saverio Mercadante (whose friend he became, and who engaged him in 1824 for a tour of Spain, Portugal and Italy), Gaetano Donizetti, Vincenzo Bellini and Gioachino Rossini. In Lisbon in 1825 he was Atlante in Mercadante's Violenza e costanza in a private performance, and at the Teatro Nacional de São Carlos he also sang Aliprando in Matilde di Shabran, Assur in Rossini's Semiramide and Count Arnoldo in Mercadante's Elisa e Claudio. From 1833 onward Cartagenova's name was linked to the Teatro alla Scala, although he continued to appear in other Italian and foreign theatres; among these engagements, an important London tour in 1836 is remembered, during which he performed in La straniera at the King's Theatre. He died in Vicenza in 1841.
Roles created

Il Califfo in Rossini's Adina (22 June 1826, Lisbon)
Adolfo in Mercadante's La testa di bronzo (3 December 1827, Lisbon)
Osroas in Mercadante's Adriano in Siria (24 February 1828, Lisbon)
Fayel in Mercadante's Gabriella di Vergy (8 August 1828, Lisbon)
Ordamante in Mercadante's I normanni a Parigi (7 February 1832, Turin)
Filippo Maria Visconti in Bellini's Beatrice di Tenda (16 March 1833, Venice)
Corrado in Mercadante's Emma d'Antiochia (8 March 1834, Venice)
Enrico in Mercadante's La gioventù di Enrico V (25 November 1834, Milan)
Manfredo in Mercadante's Il giuramento (11 March 1837, Milan)
Alcandro in Pacini's Saffo (29 November 1840, Naples)
Here's who we're picking to win the Indy 500 — who are YOU picking? By Jerry Bonkowski | May 27, 2018, 5:00 AM EDT Several members of the NBC Sports motorsports staff have made their predictions on who will win Sunday's 102nd Running of the Indianapolis 500. Of the six voters, two are going with 2017 Verizon IndyCar Series champ Josef Newgarden, while two others are going with Helio Castroneves to win a fourth 500, which would tie him with A.J. Foyt, Al Unser and Rick Mears. Here are our picks. Who are YOU picking? Leigh Diffey: Helio Castroneves — He drives the place better than anyone else. For me (for what its worth) he deserves a fourth 500 ring for the frustration he went through never getting the IndyCar title! Townsend Bell: Josef Newgarden Nate Ryan: Simon Pagenaud — All of his 2018 misfortune on the track has disappeared in May, and it culminates in the biggest victory of his career. Jerry Bonkowski: Helio Castroneves – There's no pressure as he pursues his fourth Indy 500 win. All Helio has to do, as late NFL owner Al Davis would say, is "Just win, baby!" (But if Helio falls short, watch for Marco Andretti) Kyle Lavigne: Josef Newgarden — I think he has everything he needs to get it done this year. Dan Beaver: Robert Wickens – Has driven like anything except a rookie and his second-place finish at Phoenix proves he's just as good on an oval as the road courses. Follow: Jerry Bonkowski
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,434
Hang Loose! I was racing along the beach this morning, faster than usual because there's just always so much to do, and so many places to be, and so many thoughts to think and blah, blah, blah ... when I saw this dude had strung up his hammock between the pillars of the pier. He was just chilling there, reading a book. Relaxing. Hanging loose. As we saunter on into the weekend, let this be a nice reminder that sometimes it's crucial to just slow down and enjoy a moment, turn off your phone, and just chill, thinking about how lucky you are. Happy weekend, Homies! Posted by CJ Gronner at 2:48 PM Iggy Pop And Josh Homme - In Conversation At The Grammy Museum It's been sad times lately, so I jumped at the chance to go listen to something positive, especially when it was one of our favorites, Josh Homme, talking about his new record with Iggy Pop - the wonderful Post Pop Depression - in the intimate Clive Davis Theater at The Grammy Museum. We trekked downtown (and had a great talk about Prince on the train with a stranger), and settled in to hear from two of rock's most interesting characters for the next couple of hours. The event opened with an introduction from Grammy Foundation VP, Scott Goldman, who asked everyone to please silence their favorite Stooges ringtone, and then described Pop as first generation punk rock, a member of The Rock and Roll Hall Of Fame, the bloody, bruised lead of The Stooges, and Homme as the leader of Queens Of The Stone Age (etc, etc, etc), and massively inspired by Pop. As the Davis Theater is pretty small, they weren't going to be putting on a full stage show, so they screened as yet unseen footage recently shot for Austin City Limits. The double whammy of Pop's classic calling card, "Lust For Life" and "Break Into Your Heart" off of the new album. The footage showed how great they already are together - and it was only their second show ever.
It also showed how funny and entertaining Pop is ... which would be proven again throughout the evening. With that, Pop and Homme took the stage to a standing ovation, in matching black leather jackets, both about as cool as you can get. Pop immediately exclaimed, "That was only our second gig! We're so much better now! We're cool." He didn't have to tell us. They got right into it, with Homme explaining that he first got turned on to Pop's music as an 11 year old in San Diego. The album was Raw Power, and he bought it directly because of the cover. He said he played the album until it didn't play anymore, and still didn't really understand it, to which Pop replied, "That's too much record for you, Boy." Homme agreed that it had been ... that he was afraid of it, but also drawn to it, "like a moth to a big old fire" (Homme is not at all afraid of analogies - and they're pretty good ones). That album had been made in 1972, and Pop said, "I thought kids would like it (to laughter, but Homme did!). It had spunk." Homme said he thought it was the most successful marriage of an album title and a sound - Raw Power. "Well, it's my noisiest record," said Pop. "On most of the cuts it's like is it a song or a problem?" That cracked everyone up. Goldman asked how the two guys met, and Pop said it was at the Kerrang Festival in London. "It was Lifetime Achievement time for me," said Pop with a laugh. "And I was asked to leave," chimed in Homme, who apparently had partied a little too hard for the metal fest. They were asked to be in a photo with Marilyn Manson, and Pop was impressed with Homme, "Mainly because he was the only other guy there not in a Satanic space outfit." The next time they met, Pop was to follow QOTSA at a festival, but he didn't really want to because they were GOOD. 
"I was mixed between telling them how good they were and wanting to blow them away, so I went to their dressing room and stuck my head in and said, 'You guys are really good, I gotta go, Fuck off!" The respect was there from the beginning. In speaking to how emotive Post Pop Depression is, Pop explained that in the race for the buck in the music business, "less and less feeling is allowed. There's less happy songs, there's less sad songs ... Like Clockwork (the incredible QOTSA album) really affected me emotionally ... and it was craft. Like Chopin would use in a nocturne. You don't hear that much anymore. I was looking for something I could sing with, and his music gives you that space." Homme followed that by saying, "I think it's important to be a fan. I gave up doing things I don't like. I wanted to be a part of something that connects people." So, once they got to talking about doing a collaboration, Pop sent Homme an entire dossier of material to get to know him by. The two share a love of Germany, so there were German photos and inspiration, there were essays about Pop's sex life, there were poems by Walt Whitman and Pop himself ... all personal glimpses into what makes Pop tick. "It was the first step in being vulnerable," said Homme, "And I started to see the wing span of a human being." "I sent it to him to hold up my end of the bargain," Pop explained. "Josh has a huge pile - he's a great guitarist, a writer, a composer, he has this whole little Motown thing happening in the desert, and I have this little pile - I sing and write - so I wanted to give him something to write about, and establish a common experience before getting into the studio. Like, Josh would already know when I write about Gardenia, because he'd already met her in my sexual essay. I wanted to give him an idea of what was on my mind." What a cool way to go about it, right? "You move at the speed of opportunity, and in a collaboration, you move together," said Homme. 
"You take a real chance. I don't know what it's gonna be, but it's gonna be alright. I'm willing to do whatever is necessary ... if I have to jump off a cliff, then we'll hold hands and jump." This "we're in it all the way together" vibe permeates both the album, and the obvious love and respect these two carry for each other. Homme sent Pop what he called "The Shitty Demos", to which Pop began writing and adding to. "He has such an economy of word choice, and so much color. He has so much color the Skittles people are jealous." Pop said after talking about how much they both love the gay Caberet scene in Berlin, Homme used a word to describe what the album would sound like, and Pop was shocked. He leaned over to whisper it to Homme, who said, "Go ahead, say it," but then they got sidetracked and we never learned what that word was. And I still want to know. "Iggy just turned 69. There's an edge, and most people fall off, but one person doesn't fall off and they have the best view, and that's Iggy." Meaning Iggy has come from the hard living guy cutting himself on stage to be here now, still creating and loving and inspiring. Pop added, "I'm not doing some things anymore, but if I want to put the pedal to the metal for five minutes, look the fuck out!" Yes. Once they got to work, it was on, though they decided not to tell anyone about it. Homme's Dad told him that you shouldn't tell people what you're going to do, you should tell them what you've done, so they just went for it, promising if it was no good they'd just literally bury it in the desert, and no one would ever know. "If no one knows you're making a record, then who are you making it for?", asked Homme. "That's sweet. I make something for you, and you make something for me." Pop added, "I'd be crushed if it wasn't good, but I'd be PERSONALLY crushed." Pop explained that he did two chanson albums in French to get him to here. 
"I'm singing 'La Vie En Rose' in French, and Stooges people online are thinking that now I just want to go and put on my slippers." Not if Homme had anything to say about it. "Every record deserves the chance to take a chance ... but protecting himself is not in Iggy's DNA." He went on to say, "I don't work in a bank. I'm here to take a leap. I can't always figure out how to say something, but I can figure out how to play it. Look, this record might wind up being a coaster for someone, but it will be a tits coaster, I'll tell you that." Truth. For Pop, after having 25 copies of his French albums sell on the counter in a wine shop in Lyon, "I kind of knew it was time to stick one to the motherfuckers ... that's the best way I can put it." Homme was on board for that. "There's an army of us affected and INfected by what Iggy has done, and I won't let that go unnoticed. I'll make tea for that." The reaction they've had from fans has been overwhelming, and Homme said that tonight when they play the Greek Theater here in L.A., "You'll look left, you'll look right, and you'll see this joyous thing, and we'll be up there grinning, and they'll get to show Iggy all this respect they have for him." They discussed how they worked together, and how Homme would agonize over a word and Pop would say, "No, just throw in something terrible and the right word will come." This was an epiphany for Homme, who said they'd been at the Magic Castle the night before, but this idea was the real "Ta Dah!" "This is the best thing I've been a part of," Homme said humbly and clearly appreciatively. When Goldman opined that "Sunday" is the "Hallelujah" moment on the album, Homme answered that he almost kept it for himself, "but I wanted him to know that I'd give him everything. "Sunday" is like, what if at the end of American Valhalla, he makes it to Sunday?" 
"I've got all I need and it's killing me" went the line, but then Josh added, 'killing me and YOU' - and that changed everything," explained Pop, "and the strings at the end are TRAGIC." Give it a listen, they really are. Goldman asked Pop if this album was a summation for him. "I"m summing up my vocation in this role. I hope to survive the experience, and quiet down a bit. You do less ... but I do a lot of other work too, like voiceover, some acting, a radio show, I like to guest on Christmas albums ..." cracked Pop. They then opened it up for some questions from the superfans (which these Grammy events always attract, so you actually learn a LOT), and one guy said that we've lost so many icons lately, and asked what their thoughts were on where we go when we pass. OK. Both Homme and Pop kind of hedged, and then Pop said, "There's a book called Sum with many possible answers to that. I'd suggest that book." - getting them both off of the hook. But then Homme wanted to add his two cents. " I know when I burn wood, it changes to ash, but it's still there. Wherever they go, I hope they're there when I'm there, or I'll be fucking pissed." Me too, Josh. Me too. In closing out the night, Pop said, "The main responsibility is to entertain, so I just want people to enjoy it." Homme's final thought was that, "The Arts are a Swiss Army pleasure device, and every time I just hope it works." It works, as evidenced by the thunderous applause and people back on their feet at the end of the program. What an interesting, great time it was listening to these two cats, both super individually impressive, but more impressive even together, showing what can happen when it's about love, friendship, and respect over the mighty dollar. Get your copy of Post Pop Depression to see what I mean ... available now everywhere. *All photos by Paul Gronnner Photography. 
Posted by CJ Gronner at 5:18 PM 1 comment: Labels: collaboration, conversations, desert sound, Grammy Museum, Iggy Pop, Josh Homme, music, Paul Gronner Photography, Post Pop Depression, Queens Of The Stone Age, Scott Goldman, The Stooges VJAMMing At Hama Sushi There is a fundraiser happening today at Hama Sushi for the VJAMM (Venice Japanese American Memorial Marker) to finally make this happen, as it's been in the works since 2009. The funds raised will allow the Memorial Marker to be placed at the corner of Lincoln and Venice Boulevards, where Japanese residents were forced to board buses to be hauled off to the Manzanar internment camp, where innocent Japanese citizens were held. It is a painful reminder, but a necessary one, so that we may be sure that an insane injustice like that will never occur again. There was a delicious bento box lunch and a program this afternoon with 100% of the proceeds going to VJAMM, and if you go get your sushi tonight at Hama, 10% of all sales will also be donated to VJAMM. Sushi for a cause! I grew up my whole life next to the Kusunoki family back in Minnesota - the kindest, loveliest people I've ever known. They were here in California for Manzanar, and it's still hard to believe that blight on our collective American conscience ever really happened. But it did. And we should never forget. Love and thanks to all who work so hard on this project, and I look so forward to seeing the real memorial unveiled. That 9 foot tall black granite memorial will read: "IN APRIL 1942, DURING WORLD WAR II, MORE THAN A THOUSAND AMERICAN MEN, WOMEN, AND CHILDREN OF JAPANESE ANCESTRY IN VENICE, SANTA MONICA, AND MALIBU REPORTED TO THIS LOCATION AT VENICE AND LINCOLN BOULEVARDS WITH ONLY WHAT THEY COULD CARRY. THE WESTERN DEFENSE COMMAND AND FOURTH ARMY ISSUED CIVILIAN EXCLUSION ORDER NO. 7 WHICH GAVE THEM ONLY DAYS TO DISPOSE OF THEIR PROPERTY AND POSSESSIONS. 
BUSES TRANSPORTED THEM DIRECTLY TO MANZANAR WAR RELOCATION AUTHORITY CAMP IN INYO COUNTY WHERE MANY INTERNEES WERE INCARCERATED FOR MORE THAN THREE YEARS. EXECUTIVE ORDER 9066 HAD EMPOWERED THE UNITED STATES ARMY TO DECLARE AREAS OF WASHINGTON, OREGON, AND CALIFORNIA MILITARILY SENSITIVE, AND FORCED THE REMOVAL OF 120,000 JAPANESE AND AMERICANS OF JAPANESE ANCESTRY TO TEN AMERICAN CONCENTRATION CAMPS AFTER JAPAN ATTACKED THE U. S. NAVAL BASE AT PEARL HARBOR, HAWAII ON DECEMBER 7, 1941, PLUNGING THE U. S. INTO WAR WITH JAPAN. THE FORCED REMOVAL AND IMPRISONMENT OF CITIZENS OF THE U. S. WITHOUT ANY REGARD TO DUE PROCESS OR THE WRIT OF HABEAS CORPUS VIOLATED THEIR RIGHTS UNDER THE U. S. CONSTITUTION. MAY THIS VENICE JAPANESE AMERICAN MEMORIAL MARKER REMIND US TO BE FOREVER VIGILANT ABOUT DEFENDING OUR CONSTITUTIONAL RIGHTS, SO THAT THE POWERS OF GOVERNMENT SHALL NEVER AGAIN PERPETRATE AN INJUSTICE AGAINST ANY GROUP BASED SOLELY ON ETHNICITY, GENDER, SEXUAL ORIENTATION, RACE, OR RELIGION." Never again. Venice, eat at Hama tonight if you can! Thank you. Hama Sushi 213 Windward Avenue - On the circle Labels: fundraisers, Hama Sushi, internment camps, Manzanar, memorials, Social justice, Venice Japanese American Memorial Marker, VJAMM, Windward Circle Venice Love For Prince I'm sure people are going to get sick of me talking about Prince pretty soon, but I'm not all that concerned about it. Mainly because I still just can't believe it. I was thinking about it all again this morning walking along the beach, and my spirits were brightened by the graffiti walls. Artists had remembered Prince here too. They should all go watch Graffiti Bridge now, and get out and do up every bridge around. Please? And thank you. To everyone really, for putting up with me on this. I guess I'm just happy I got to have the experiences I did ... but that sadness comes from knowing there will never be any more. 
A friend of a friend posted a Christmas card he'd received from Prince years ago, where Prince had written (in purple ink, of course), "Peace and Be Wild". You now know my new motto. Peace! And be WILD!! Labels: Graffiti Bridge, memorials, murals, Prince, RIP Prince, sad, street art, Venice Beach, Venice Graffiti Walls A Party For Prince Weekend In Venice I'm physically sore and completely hoarse today from dancing so hard at our Prince party here in Venice all day and all night yesterday. And it's totally worth it. My Minneapolis friends and I out here have been having a very hard time of it after the world lost Prince last week, and it's been torturous to see all the outpouring of love for him back home and not be able to be there with everyone. Like actually super painful, in a way that we had no way of anticipating. So we decided to dance. We decided to sing. We decided to party so hard that our friends back home could hear us ... and we did. My awesome friend Shane is real serious about Prince. He drove out his vast vinyl collection from Minnesota because it was too massive to ship. He generously offered to host a listening/dance party for all of us transplants that are seriously grieving, and the friends who sympathize with us, and it would be an all day bbq affair. A real Housequake. I could already hear the tunes blaring from blocks away when I arrived early to help set up (and watch the Wild lose the last game of the Season - but not before an awesome Prince tribute on the ice!). I almost cried - again - because there was Shane up on a ladder, hanging massive sheets of purple and paisley fabric as our mourning bunting, to set the tone of the day. He meant business. I got out the kids' sidewalk chalk and did my best to draw Prince's symbol to invite the guests in, and then we thought it would be nice if everyone signed their names on the driveway, so we could have a big physical memorial of our own. 
Some might think this is all over the top for a rock star, but then they don't know how Minneapolis feels about our Prince. So we show them. We wear purple. We wear paisley. We cry. We share stories. We DANCE. The kids all got into it, not exactly clear on why the grownups were all so sad about this fun guy with the fun music, but they were happy to wear purple and jump all day (and night) in the trampoline along to the hours and hours of classic Prince hits. People showed up in mostly Prince, purple, paisley, or Minneapolis clothing. I had my First Avenue sweatshirt on, of course, and underneath the shirt they gave me at Paisley Park when I did my college senior project there. It's a simple, boxy, pre-ladies cut shirt, but I'll never get rid of it now, and wore it with great pride yesterday. Folks brought purple potato salad, purple cupcakes, and purple drank. One friend had stopped and had custom purple tear stickers made, so we all walked around like purple gangsters all night. I really appreciated the school spirit for Prince that everyone displayed, with even the most casual fan in attendance decked out in purple and offering their sincere comfort to their clearly upset friends. We told stories, of all the shows we'd seen, and all the Prince sightings we'd had back home. We taught the cleaner song words to the kids and seriously danced our faces off. One song would end, and an even better one would begin, making it impossible to get off the dance floor that was the entire yard. I didn't actually take too many photos of things when they were in full swing, because I was far too busy getting DOWN. Minneapolis came together in Venice, and we really needed that. We needed the solace of people that understand, and share the same super insane crushing sense of loss. No, most of us didn't know him personally, but that doesn't matter. 
We grew up with him in the very fabric of our days in Minneapolis, and for me, he was a big influence on my world views and possibilities. There was never, and will never be, another entertainer like Prince. Period. People came and went as the day went on, but most everyone just stayed and danced. It was next to impossible to walk away from yet another masterpiece being spun, and so we just kept at it. I think there were times when I was actually asleep on my feet, but just needed to listen and move. It was a Sunday night, long after the Purple Rain credits ran on the t.v. inside, and the kids were all spent and long asleep, but still we danced. Monday was looming pretty large, and still the cries for "One more song!" continued, but ultimately ... Life is just a party and parties weren't meant to last. I still really can't believe it's real. THANK YOU to Jenny and Shane for letting us all party like it was 1999 ... and to everyone who was there and understands. It was cathartic, and so, so needed. Minneapolis, I hope you feel the love from absolutely everywhere, and I hope you know that we're all the way there with you in spirit!!! LOVE. *Photos by Paddy Wilkins, Paul Gronner, and me. Labels: dance party, First Avenue, house parties, Housequake, Minneapolis, paisley, Prince, Prince Party, Prince Rogers Nelson, purple, Purple Rain, RIP Prince Australia Luxe Collective - Cool Shoes From Venice! When good people come together, they make good things happen. Stuart Rush is a British gentleman who wound up living in Australia for eight years. One year while there, he was late on getting out Christmas presents to send back home, and quickly snatched up a bunch of pairs of the white-hot Australian-made Ugg boots to gift as something special, with "Love From Australia" (now also their LLC name). The boots were a huge hit back in the U.K., and a friend of Rush's asked him if he could send more over for a boutique. Then more. Then more. 
Rush realized there was something to this business, and decided to start his own line of sheepskin boots, with fancier accents and cool additions like studs and fur, and call it the Australia Luxe Collective. Planet Blue in Santa Monica placed their first big order, and it was enough to encourage Rush to make the move to California to make this company really happen. After a brief stint on Ocean Park, Rush moved to Venice, because he could tell that was what was up. He moved in next door to Crystal Green de Saint-Aignan, who has been a dear friend of mine since back in the Old Brig days. Crystal had a background in fashion, running things for J Brand jeans, and once they got to talking, Rush recognized that she would be a huge asset for ALC. He hired her on as their brand manager, and that's how I got turned on to this cool company. Since their sheepskin boot origins, ALC has branched out into bags, hats, gloves, scarves, jackets ... but also into seriously stylish footwear, like knee-high gladiator sandals just in time for Festival season, and the high heeled leather ones that I've got my eyes on. They are HOT. And now they can finally be yours in Venice, because for the first time in their nine years here, they're having a big sample sale this weekend at their offices on Lincoln Boulevard. A BIG sample sale, like half off. Check out their website in advance and make a plan of attack for when you get there, because everything is cute, and everything will be a steal. Rush feels strongly about Venice, as we all do. "There was never a reason to go anywhere else," he told me. "I'd get back to the beach from driving around and just want to stay. I'm involved in it, it's my vibe ... I don't even think about it. I love the people, the paddle ball courts, the bike path ... and I have bartenders here who look after me." Crucial. 
With the success of Australia Luxe Collective, Rush and de Saint-Aignan are now eager to contribute more to the community, and will soon be launching a new company, Califortunate - a one for one giving company that will donate a ridiculously soft cotton separate for each one sold. They have already sponsored shirts to give to the kids at Westminster Elementary, getting the helping ball rolling. "I feel very fortunate to be in California, but every time I go outside I see the less fortunate," explains Rush. "If I'm fortunate enough to have a business here, out of the cold and dreary of the U.K., then I can do something to help people." Another reason to love these guys. Come say hello this weekend, and meet the wonderful people behind our local Venice purveyor of awesome kicks, and score a great deal while you're at it. 2002 Lincoln Boulevard Saturday and Sunday, 9am - 2pm australialuxeco.com Posted by CJ Gronner at 12:36 PM 4 comments: Labels: Australia Luxe Collective, boots, Califortunate, Crystal Green de Saint-Aignan, fashion, lincoln boulevard, local businesses, Sample Sales, sandals, shoes, Stuart Rush, Ugg boots Prince Rogers Nelson 1958 - 2016 Prince is gone. I just can't believe it. My phone started going off this morning, and I kept ignoring it, trying for a couple more winks after a restless night's sleep. When I looked at all my texts, one after the other read, "Not Prince!" "RIP Prince!" "Thinking of you, so sad about Prince". NO!!!! I immediately burst into tears. It hit me hard, like a toddler cries. I didn't think about it, it just erupted out of me. Not Prince! If you know me at all, you know this awful news is truly devastating to me. Prince is my all time favorite. In fact, Prince probably helped some to make me who I am. You see, growing up in Minneapolis, Prince was IT. Prince was THE coolest. Prince was also kind of forbidden. Because Prince was "Dirty". Prince was shocking. Prince pushed every envelope. 
I was in junior high, and we would listen to Prince and be both thrilled and scandalized at the same time - in a good way. Singing out loud with my cassette Walkman, not realizing that Mom could hear the words I was belting out. This was not music my Mom wanted me to be listening to, and I loved it all the more for it, as all good rebel kids do. Every good D.J. knows that Prince will get everyone on the dance floor, and keep them there. His music is timeless. The music. There is no one better. Prince could play everything, expertly. We would just watch him in awe, and in Minneapolis, we got to watch him a lot. My little friends and I snuck in when they were filming Purple Rain at First Avenue, 'cause we were not about to miss out on that! Life changing. I had a friend in junior high named Molly Larson, only Molly called herself "Princess" - with a star to dot the i. Molly had a tough life and Prince was about the only thing that made her happy. She would doodle his name all over her notebooks, and we would break down the songs' lyrics, right down to the moans. Molly really believed that Prince was going to be her Prince Charming, and swoop in and take her away on his awesome purple motorcycle with the glyph symbol on it. But that never happened. Molly dropped out of school, and I didn't know what happened to her until there was an article in the paper about Molly becoming a teenage prostitute, and being beaten to death on the side of a road. I was so sad for her, and remember being so sad that she never met Prince. His presence was felt even larger. (Prince and I never personally met, but I'll live on a look he once gave me until the day I die.) I saw so many Prince shows over the years, one better than the next. When I went to college at Augsburg, it was the best because our campus was right in the city, and Prince was everywhere. 
Prince's drummer at the time, Michael Bland, also went to Augsburg, and would walk around in his big robes and Pope style hats, and we'd feel somehow closer to the Man. We'd have Prince sightings on his motorcycle blazing through Seven Corners, and it was magic. Prince would announce an arena show the day of, saying just show up with a can of food for entrance, and there would be a sold out show and the city's food shelves re-stocked that same night. When I was at Augsburg, there was no film major yet, so I asked if I could do my Senior Project as an Independent Study, and make a documentary on Paisley Park. The studio's manager at the time, Red White, gave me full access, letting me tour and film all over the studios in Chanhassen. Prince wasn't there that day, at least that I saw, but the magic was felt everywhere. I have no idea where the old VHS tape of that film is, but I gotta find it. I remember sitting on my friend's deck at her house across the lake from Prince's house late one night, when we heard the most astonishingly gorgeous electric guitar solo carry out across the water. Prince, jamming with the windows open in the Summertime ... I'll never forget it. We saw Prince in the rain out here in L.A. at the Hollywood Bowl, one of the best shows I've ever seen in all my days. I saw him at Staples for the Musicology tour, when he gave that cd out with every concert ticket sale. That counted as record sales too, because Prince was also a business genius (we all remember The Artist Formerly Known As Prince. Smart.) 21 Forum Shows! Dance offs at Glam Slam! All those late night sessions at Paisley Park when Prince would jam with visiting artists, and you would just have your mind blown. And nothing will ever beat that Superbowl Halftime show. Nothing. Prince gave us SO much wonderful music, so much artistry, so much to think about. 
He did whatever he wanted artistically, he dressed however he wanted, but he also gave more than anyone even knows, because he never bragged about it. He's probably responsible for setting me on the path to having a major thing for multi-talented, multi-cultural, multi-instrumentalist, multi-genre men ... for better or for worse. I remember hearing about Prince showing up at someone's door to talk about Jehovah's Witness stuff. Can you imagine?! That is the one time I'd let them in and listen. Or would have. NO! NOT PRINCE!!! Last Summer, I was home visiting, and heard Prince was having a party at Paisley Park for the National Association of Black Journalists, of which I am not one. But I am determined. I was GOING. My brother and I took off to Chanhassen, where I proceeded to talk my way in, and single-handedly integrate this event with my brother. Prince didn't play, but when he took the stage right in front of me to speak, I can't explain the feeling that came over me. It was actual electricity, like a charge ran through me as I looked at him in his golden lounging pajamas, talking all smooth. The smoothest. The most mysterious. The most talented ... Prince. The one word name pretty much says it all. If you're not from Minneapolis, you can't really understand how much Prince means to us. He became one of the world's biggest stars, but he was OURS. He never left. He stayed in Minneapolis, and made us all cooler by proxy. He loved Minnesota. He supported its sports teams, he supported its artists. We used to make our Mom take us to Rudolph's Barbecue to eat as kids, just so we could look at Prince's booth, and hope that he would come in while we were there. Every performer that would come to play in Minneapolis would tend to cover Prince, or at least mention him. First Avenue is like a church to us all, ever since Purple Rain, and Prince's star outside on its wall is becoming a massive memorial as we speak. 
The city is in deep mourning, with people gathering at Electric Fetus (where Prince went all the time, as recently as this last Saturday for Record Store Day) to cry and buy music. I've had so many texts, from friends who all feel the same way. Gutted. In fact, one of the first texts I got this morning was from my Mom, feeling for me, but also sad herself. Prince finally won her over. I feel so far away from home, and my people, and know this loss is just as massive and crushing for the entire city as it is for me. I bet the entire city will be bathed in purple tonight, and that Prince's music is all you'll be hearing back there for some time to come. At least we'll always have that ... and now maybe Molly will finally get to meet Prince. I'm just so, so sad. Wow. A world without Prince. Thank you for your lifetime of music, Prince, and for making this Minneapolis girl so happy for so long. Nothing Compares 2 U. Labels: Chanhassen, RIP Prince, First Avenue, genius, legend, Minneapolis, Paisley Park, Prince, Prince Rogers Nelson, Purple Rain Venice Is For Lovers I read the horoscope in the L.A. Times this morning, as one does, and this is what they had to say overall about today for everyone's sign ... "The sun's moving into the sign of sensual awareness will show you the true character of your environment. If you let it happen, this place will be a vibrant influence over the day and your mood. Consider and compare other possible settings. As you do this, the reason you are precisely where you are will become very clear." Then I went on my morning constitutional along the beach, and saw this rad mural by Jonas Never on my way back in front of The Whaler ... NOW I know why we're here! That horoscope might just be on to something. Pay attention to signs, People. Get it, Lovers! *Happy Birthday, Sailene Ossman! Enjoy your day with your lover! 
xxx Posted by CJ Gronner at 2:41 PM 2 comments: Labels: horoscopes, I love you Venice, Jonas Never, LA Times, lovers, murals, signs, street art, The Whaler, Venice Pier A Living Wall This weekend was so gorgeous, it was pretty unbelievable. Everything felt Summery and festive, the beaches were packed, and the winds were tropical. I felt happy just looking out the window. On the way to the beach, I took a new route just for fun, and passed by this incredible living wall surrounding a home on Olive Street. What a great idea, to cover your walls with living plants! The house it surrounds is actually pretty dark and severe looking (and big), but this wall livens up the whole thing, with the added bonus of being natural! I think it's great. I hope you all had a wonderful weekend, wherever you are, and that this new week shows you something surprising and beautiful and inspiring as well. Or even a few somethings! Good luck! Labels: beauty, I love you Venice, Living walls, Olive Avenue, plants, sunshine, walls Make A Magic Bus! I've been noticing a lot of wildly painted vehicles around town lately, and I dig it. I saw this Magic Bus last night, and of course wanted on, but no one seemed to be home. John Lennon, Ozzy, David Bowie and friends all look out from this mural of an automobile, and you just know there's good times aboard. Further along, there was a mobile home painted in a kind of desert-palette Starry Night - also cool. These color-mobiles were both found on the same block ... there are plenty more around, happily. In the beautiful, wacky, creative vortex that is Venice, why not make your transpo look fun too? Why not make things interesting everywhere you look? Why not conduct your own Electric Kool Aid Acid Test on your own Further? Life is short, People. As we cruise on into the weekend, that's a great question ... Why not?! 
Labels: car murals, Color, Electric Kool Aid Acid Test, Fun, Further, I love you Venice, Magic Bus, Painted vehicles, weekend Surf Shop Parties - Starring Natas Kaupas Last Friday was the night for fancy surf shop parties, and the two that I knew of were both going off. We started the fun over at Lone Wolfs Objets d'Surf (you have to call it fancy when they name it that), where the surf shop/studio (Wolf At The Door) folks were hosting a party that featured the band Springtime Carnivore blazing through a set on the outdoor parking lot stage for both the packed crowd and the Friday night traffic on Lincoln Boulevard. It was more fun than we had anticipated, as seemingly ALL our Venice pals had come out of winter hibernation to kick up their heels and raise a glass with their friends while listening to some live music under the stars. But we couldn't stay. We had even bigger fish to fry. You see, pretty much our entire lives my brother Paul and I have held Natas Kaupas in the highest esteem possible. The Dogtown skate legend was our main dude when growing up in Minnesota, far away from the mean streets of Venice and Santa Monica where these guys were changing the world. We loved him, and he soon became a long running example of missing out on something big ... "Oh, you know who you just missed on David Letterman?" "It better not have been Natas Kaupas!" You get it. But we did NOT miss Natas Kaupas last Friday, because he was kicking it over at General Admission, with a party for a shoe collaboration Kaupas did with the Lakai Limited Footwear company. The shoes were pretty fresh, but nothing is as fresh as Natas himself. What a cool guy. Paul and I were beaming as we told him how much we'd admired him growing up, and he just smiled and laughed and clinked our glasses. I imagine he gets this a lot. We were stoked, not only at meeting a hero that totally holds up, but also that we get to live in a place where jamborees like this are the norm. Fun! Thanks! 
Posted by CJ Gronner at 12:55 PM No comments: Labels: Dogtown, General Admission, Lakai Limited Footwear, Lone Wolfs Objets D'Surf, Natas Kaupas, openings, parties, Springtime Carnivore, surf shops, Wolf At The Door Viral: 25 Years From Rodney King - The Wonderfully Important New Show At SPARC There was an opening reception for the wonderful and terribly important new art show at SPARC this past Saturday night, Viral: 25 Years From Rodney King. The show operates from the premise that the video showing Rodney King being beaten by the LAPD in 1991 was truly the world's first viral video, one that put the spotlight on the racial injustice and police corruption in America that only seems to have grown worse in the 25 years since that horror show. SPARC's building was formerly the Venice Police Station, so the artwork displayed inside was especially fitting for this show, as we were all behind the bars of the old jail while observing the awful timeline of this particular nightmare epidemic of trouble in our country. Artists from all over are featured in this group show, curated by Daryl Wells of Art Responders. Wells had a mentally ill brother who had been continually harassed by police and wound up dead in murky circumstances. She introduced the show with an emotional speech about the frustration, pain, anger, and helplessness she had felt, and could only imagine the pain of the families left behind by victims like Trayvon Martin, Eric Garner, Sandra Bland, Tamir Rice, and too many more to mention. The show is interactive, as you walk the timeline around the room from 1991 - 2016, with all the visual art complemented by audio listening stations for music and spoken word poetry. There was a playlist created for the show, with tracks all supremely worthy of inclusion, but the song that kept playing through my head was the one released a day earlier from Ben Harper - "Call It What It Is (Murder)" from his brand new album of the same name. 
In it, Harper name-checks these same murder victims, while pointing out that there are black and white cops, good and bad cops, of course, but let's call it what it really is - Murder. That cops are just getting away with. That MUST stop. That's all I could hear running through my head as I traversed the room ... Call it what it is, call it what it is ... Murder. There is even a virtual reality station where the viewer can experience Perspectives 2: The Misdemeanor, where there are cops and possible perpetrators, and someone gets shot, and how you feel about it depends on your perspective. It's pretty heavy, and really a great tool for putting yourself in either set of shoes. I was moved by a really powerful watercolor from Sanae Robinson depicting the uprising in Ferguson, Missouri. The events in Missouri were really so painful for me to watch going down on t.v., with utter disbelief that we still STILL have to even be having this conversation. WE ARE ALL ONE!!! Why WHY is that so very hard for some to grasp?! Ugh... Robinson's painting really captured that anguish for me. Overstreet Ducasse has several pieces in the show, works done on the target practice charts from his state of Florida, called the Floreada Series. Ducasse carried his art right over on to his jacket, which put an exclamation on his point. This show, as the press release states, "asks vital questions about the last quarter century of developments in our criminal justice system, and how various systemic failures have allowed this phenomenon to frustrate the dream of a post-racial America ... it tackles the intersection of social media, the arts and criminal justice advocacy in order to bring about much-needed dialogue regarding systemic racism and its impact on communities of color." I had just read that when I saw a little girl in the jail cell with her Dad, and almost choked up with the hope that she doesn't have to think about any of this for very much longer. 
That we can all just, as Rodney King implored 25 years ago, GET ALONG. I ran into Francisco Letelier and we talked about the importance of shows like this, but also of getting people to GO to these shows. To keep that dialogue going. To support the artists that are often the loudest voices shouting for change. He told me that this SPARC show will be one of the stops on the upcoming May 15 edition of Art Block - the incredible gathering of our local art and artists that invites you into their studios - for free. I hope everyone will put an asterisk next to SPARC on their maps, as everyone living in America today should see this fantastic show. (But you don't have to wait until May ... go now! Every day!) Viral: 25 Years From Rodney King is on at SPARC through June 3, 2016, 681 Venice Boulevard. Labels: art for social justice, art openings, Art Responders, Daryl Wells, Overstreet Ducasse, Police violence, Social justice, SPARC, Viral: 25 Years From Rodney King, Virtual Reality
AFRICAN FILES - Inform, Educate, promote & Unify Africa

Al-Sisi, President of Egypt, Visits South Sudan
By African Report Files On Nov 28, 2020

His Excellency Abdel Fattah Al-Sisi, President of the Arab Republic of Egypt, was received this morning upon arrival at Juba International Airport by His Excellency Salva Kiir Mayardit, President of the Republic of South Sudan. President Salva Kiir Mayardit welcomed his Egyptian counterpart, His Excellency Abdel Fattah El-Sisi, President of the Arab Republic of Egypt, and his accompanying delegation in his office, where they discussed a wide range of bilateral issues. In the meeting, President Sisi expressed his readiness to support the implementation of the Revitalized Peace Agreement, where required. He also pledged Egypt's full support in alleviating some of the challenges facing South Sudan, such as the impact of falling oil prices, the COVID-19 pandemic and the recent devastating floods across the country. The two leaders agreed on the need to enhance mutual cooperation in the areas of education, healthcare, media, energy, trade and investment, and infrastructural connectivity, especially with respect to road and rail links. President Salva Kiir said the South Sudan Government strongly feels that Egyptian leadership and expertise in these areas can make a difference in our developmental priorities. President Salva Kiir Mayardit and President Abdel Fattah Al-Sisi broadly discussed matters relating to water resources management, especially Egyptian readiness to use their expertise to help mitigate the impact of floods, both in the short and medium term, through dredging water sources and water harvesting methods such as dam construction. On regional matters, President Abdel Fattah Al-Sisi commended South Sudan's mediation of the Sudanese conflict. President Sisi also explained Egypt's position on the Grand Ethiopian Renaissance Dam (GERD).
On his part, President Salva Kiir Mayardit underlined South Sudan's position on the importance of dialogue in dealing with issues affecting regional stability. He also stressed South Sudan's firm commitment to regional solidarity and the responsibility, as Africans, to seek African solutions to African problems. President Salva Kiir Mayardit and President Abdel Fattah Al-Sisi held an extensive meeting with their teams and pledged to work at the official level towards the realization of the goals they set for themselves in the various areas of their discussion. In conclusion, President Salva Kiir Mayardit thanked President Abdel Fattah Al-Sisi for visiting South Sudan and for the important discussions they held. President Abdel Fattah Al-Sisi is on a one-day official visit to South Sudan. – State House Media, South Sudan.
Waiting for the train – A chance encounter taps into bigger things. Click To Enlarge To See My Little Engine. My wife and I have been on the road for the past few weeks. First I gave a presentation in Vermont (go here, but watch out: The file is 15 MB). From there we cut over to Maine, where we have been enjoying the ocean, craft shows, lobster . . . and trains. While my wife has been frequenting craft shows, I have run my model steam engine at a couple of "steam-ups," and visited two steam railroad museums. (The American Dairy Council's slogan used to be "You Never Outgrow Your Need for Milk," but it's really "Trains.") One of the high points of this vacation (for me, anyway) was bringing my little engine to the museum where its big brother resides. Look closely at the photo above, and you can see my engine perched on the running board of its prototype. As I stepped back to take this photograph, a young man stepped forward (into the frame of the photo), staring intently. I waited patiently, but he seemed to be unaware that he was blocking my shot, and he gave no indication of moving. After a minute or two, I asked him politely if he could step back while I took my picture. "Is the train leaving now?" he asked, in a somewhat anxious tone. No, I didn't think so, I told him. "But soon." (And not until after I had taken my picture and removed my locomotive from the running board! I had obtained the engineer's permission before placing my prized possession in such a precarious position). "Is the train leaving now?" he repeated, leaning forward just a bit, arms flexed ever so slightly. Then it clicked. "Hi," I said. "What's your name?" He gave his name – we'll call him Eli (not his real name). I opened up a simple conversation with him: How old are you? Where do you live? Do you like trains? Eli responded politely, frequently interjecting his initial question ("Is the train leaving now?"), to which I repeated the same reassurance: "Not yet, but soon."
After chatting with Eli for several minutes, I packed up my engine and made ready to go. At that point I caught sight of Eli's father, seated nearby. "Thank you," he murmured in a heartfelt tone of voice. Part of me felt like saying "Oh, it was nothing." Or, "I used to do this for a living. I can see what others can't: Eli is a nice young man, who responds to human kindness in his own way, despite his ASD." But I worried that to say those things might be an invasion of the family's privacy, or might reduce the value of my interaction with his son: Dad might see Eli's successful interaction with me as less of an achievement on his son's part, knowing that I was not an "ordinary" stranger. All of this flashed through my mind in the brief second it took for dad and me to exchange glances. So instead of revealing my own identity or background, I said "I enjoyed it as much as Eli!" – which was also true, but only half of the story. What I didn't tell Eli's dad was that I miss patient care, and talking with Eli helped me to fill that void, now that I have retired from clinical practice. Of course, now that I've blogged on this, it's possible that Eli's father will discover who I am. But the conundrum is mine, not his, in any case. And regardless of my training, what I did was the decent thing to do, and Eli did respond appropriately. He moved back when asked without getting agitated, he kept up his end of the conversation, and did his best to navigate the real world. Dad also did his part: observing at a distance, and letting his son test his legs. I'm afraid there are a lot of people who might not "get" Eli, who might have asked him less gently to move out of the way of the picture, or might have shied away from his repeated questioning. Worse still, there are predators out there who ridicule or exploit people like Eli. We are making progress as a society, but we're not there yet. Until the next time. We are making progress as a society, but we're not there yet. 
2 responses to "Waiting for the train – A chance encounter taps into bigger things."
MaryLee Hensley says: Thank you for all you do! Happy Holidays☮
drcoplan says: Thanks, MaryLee
create table m04(id bigint primary key auto_increment, col1 date, col2 date not null, col3 date, col4 date default '2012-12-12');
insert into m04(col2) values('2011-11-11 PM');
select if (col1 is null and col3 is null and col4 = date'2012-12-12', 'ok', 'nok') from m04;

--add default value
alter table m04 modify col1 date default SYSDATE;
alter table m04 modify col1 date default 'SYSDATE';
show columns in m04;
insert into m04(col2, col3) values('2011-11-11 PM', '1976-01-01');
select if ((SYSDATE - col1) = 0, 'ok', 'nok') from m04 where id = 2;

--add default value
alter table m04 modify col2 date default SYSDATE not null;
alter table m04 modify col2 date default 'SYSDATE' not null;
desc m04;
insert into m04(col2, col3) values (default, '1999-09-09');
select if ((SYSDATE - col2) = 0, 'ok', 'nok') from m04 where id = 3;

--add default value
alter table m04 modify col3 date default SYSDATE;
alter table m04 modify col3 date default 'SYSDATE';
show columns in m04;
insert into m04(id) values(default);
select if ((SYSDATE - col3) = 0 and col1 = col2 and col2 = col3, 'ok', 'nok') from m04 where id = 4;

--reset default value
alter table m04 modify col4 date default SYSDATE;
alter table m04 modify col4 date default 'SYSDATE';
describe m04;
insert into m04(col3, col4) values('1945-10-01', default);
select if ((SYSDATE - col4) = 0 and col1 = col2 and col2 = col4 and col3 = '1945-10-01', 'ok', 'nok') from m04 where id = 5;

--set again
alter table m04 modify col4 date default '1999-09-09';
show columns in m04;
insert into m04(col3) values('2010-10-10');
select if (col4 = '1999-09-09' and (SYSDATE - col2) = 0 and col1 = col2, 'ok', 'nok') from m04 where id = 6;

--set again
alter table m04 modify col4 date default SYSDATE;
alter table m04 modify col4 date default 'SYSDATE';
describe m04;
insert into m04(id, col3, col4) values(default, '1888-08-08', default);
select if ((SYSDATE - col4) = 0 and col1 = col2 and col2 = col4 and col3 = '1888-08-08', 'ok', 'nok') from m04 where id = 7;

--set default values of multiple columns
alter table m04 modify col1 date, modify col2 date default '2011-11-11 PM', modify col3 date default SYSTIMESTAMP;
desc m04;
insert into m04 values default;
select if (col1 is null and col2 = '2011-11-11 PM' and SYSDATE = col3 and col3 = col4, 'ok', 'nok') from m04 where id = 8;

alter table m04 modify col1 date default SYSDATE, modify col2 date default SYSDATE not null, modify col3 date default SYSDATE;
show columns in m04;
insert into m04(id) values(null);
select if ((SYSDATE - col1) = 0 and col1 = col2 and col2 = col3 and col3 = col4, 'ok', 'nok') from m04 where id = 9;

drop table m04;
\section{Introduction}
People spend plenty of time indoors, in spaces such as the bedroom, living room, office, and gym. Function, beauty, cost, and comfort are the keys to the redecoration of indoor scenes. Nowadays, proprietors prefer a demonstration of an indoor layout within several minutes. Many online virtual interior tools have therefore been developed to help people design indoor spaces. These tools are faster, cheaper, and more flexible than real redecoration of real-world scenes. This fast demonstration is often based on the auto layout of indoor furniture and a good graphics engine. Machine learning researchers make use of virtual tools to train data-hungry models for the auto layout \cite{Dai_2018_CVPR,Gordon_2018_CVPR}. These models reduce the time needed to lay out furniture from hours to minutes and thus support fast demonstration.

Generative models of indoor scenes are valuable for the auto layout of furniture. The problem of indoor scene synthesis has been studied over the last decade. One family of approaches is object-oriented, in which the objects in the space are represented explicitly \cite{10.1145/2366145.2366154,10.1145/3303766,Qi_2018_CVPR}. The other family of models is space-oriented, in which space is treated as a first-class entity and the occupancy of each point in space is modeled \cite{10.1145/3197517.3201362}. Deep generative models have recently been used for the efficient generation of indoor scenes for auto layout. These deep models further reduce the time from minutes to seconds and increase the variety of the generated layouts. The deep generative models directly produce the layout of the furniture given an empty room. In the real world, a satisfactory layout of furniture requires two key factors: the first is the correct position, and the second is a good size.
However, this family of models only provides an approximate size for the furniture, which is not practical in the real-world industry, as illustrated in Figure \ref{fig1} and Figure \ref{fig2}.

\begin{figure*} \centering \includegraphics[height=6.5cm]{iccv_2021_1_fig1.jpg} \caption{Examples of layouts produced by the state-of-the-art model \cite{10.1145/3306346.3322941}. These layouts are for a bedroom. The first row gives the ground truth layout in the simulator and the real-time renders. The second and third rows show the layouts produced by the state-of-the-art model \cite{10.1145/3306346.3322941}. From the results, the state-of-the-art model can only produce the approximate position and size of the furniture.} \label{fig1} \end{figure*}

\begin{figure*} \centering \includegraphics[height=6.5cm]{iccv_2021_1_fig2.jpg} \caption{Examples of layouts produced by the state-of-the-art model \cite{10.1145/3306346.3322941}. These layouts are for a tatami room. The first row gives the ground truth layout in the simulator and the real-time renders. The second and third rows show the layouts produced by the state-of-the-art model \cite{10.1145/3306346.3322941}. From the results, the state-of-the-art model can only produce the approximate position and size of the furniture.} \label{fig2} \end{figure*}

Prior work neglects the fact that the industrial interior design process is indeed a sequential decision-making process, in which professional designers need to make multiple decisions on the size and position of furniture before they can produce a high-quality design. In practice, professional designers need to be in a real room and improve the furniture layout step by step, obtaining feedback on each decision until a satisfactory design is produced. This industrial process can be naturally modeled as a Markov decision process (MDP).
Reinforcement learning (RL) consists of an agent interacting with the environment in order to learn an optimal policy by trial and error for MDP problems. The past decade has witnessed the tremendous success of deep reinforcement learning in the fields of gaming, robotics and recommendation systems \cite{gibney2016google,schrittwieser2020mastering,silver2017mastering}. Researchers have proposed many useful and practical algorithms, such as DQN \cite{mnih2013playing}, which learns an optimal policy for discrete action spaces, DDPG \cite{lillicrap2015continuous} and PPO \cite{schulman2017proximal}, which train an agent for continuous action spaces, and A3C \cite{mnih2016asynchronous}, designed for large-scale computer clusters. These algorithms remove stumbling blocks in the application of deep RL in the real world.

We highlight our two main contributions towards producing furniture layouts with accurate position and size. First, we develop an indoor scene simulator and formulate this task as a Markov decision process (MDP) problem. Specifically, we define the key elements of an MDP, including the state, action, and reward function, for this problem. Second, we apply a deep reinforcement learning technique to solve the MDP. The proposed method aims to support interior designers in producing decoration solutions in the industrial process. In particular, the proposed approach can produce furniture layouts with good position and size simultaneously.

This paper is organized as follows: related work is discussed in Section 2. Section 3 introduces the problem formulation. The development of the indoor environment simulator and the models for deep reinforcement learning are discussed in Section 4. Experiments and comparisons with state-of-the-art generative models can be found in Section 5. The paper is concluded with discussions in Section 6.
\section{Related Work}
Our work is related to data-hungry methods for synthesizing indoor scenes through the layout of furniture, either unconditionally or partially conditionally.

\subsection{Structured data representation}
Representing scenes as graphs is an elegant methodology, since the layout of furniture in indoor scenes is highly structured. In such a graph, semantic relationships are encoded as edges and objects are encoded as nodes. A small dataset of annotated scene hierarchies has been learned as a grammar for the prediction of hierarchical indoor scenes \cite{10.1145/3197517.3201362}. The generation of scene graphs from images has also been applied, including the use of a scene graph for image retrieval \cite{Johnson_2015_CVPR} and the generation of 2D images from an input scene graph \cite{Johnson_2018_CVPR}. However, this family of structured representations is limited to small datasets. In addition, it is not practical for the auto layout of furniture in the real world.

\subsection{Indoor scene synthesis}
Early work in scene modeling implemented kernels and graph walks to retrieve objects from a database \cite{Choi_2013_CVPR,Dasgupta_2016_CVPR}. Graphical models have been employed to model the compatibility between furniture and input sketches of scenes \cite{10.1145/2461912.2461968}. However, these early methods are mostly limited by the size of the scene; it is therefore hard to produce a good-quality layout for large scenes. With the availability of large scene datasets such as SUNCG \cite{Song_2017_CVPR}, more sophisticated learning methods have been proposed, as reviewed below.

\subsection{Image CNN networks}
An image-based CNN network has been proposed to encode top-down views of input scenes, after which the encoded scenes are decoded for the prediction of object category and location \cite{10.1145/3197517.3201362}. A variational auto-encoder has been applied to the generation of scenes represented as a matrix.
In the matrix, each column represents an object with location and geometry attributes \cite{10.1145/3381866}. A semantically-enriched image-based representation is learned from top-down views of the indoor scenes, and a convolutional object placement prior is trained \cite{10.1145/3197517.3201362}. However, this family of image CNN networks cannot handle situations where the layout differs across rooms of a single type with different dimensions.

\subsection{Graph generative networks}
A significant number of methods have been proposed to model graphs as networks \cite{DBLP:journals/corr/abs-1709-05584,4700287}; here we focus on the family that represents indoor scenes as tree-structured scene graphs. For example, Grains \cite{10.1145/3303766} consists of a recursive auto-encoder network for graph generation and targets the production of different relationships, including surrounding and supporting. Similarly, a graph neural network has been proposed for scene synthesis, where the edges represent spatial and semantic relationships between objects \cite{10.1145/3197517.3201362} in a dense graph. Both relationship graphs and their instantiations are generated for the design of indoor scenes. The relationship graph helps to find symbolic objects and high-level patterns \cite{10.1145/3306346.3322941}. However, this family of models hardly produces accurate sizes and positions for the furniture layout.

\subsection{CNN generative networks}
The layout of indoor scenes has also been explored as a layout-generation problem. Geometric relations between different types of 2D elements of indoor scenes are modeled through the synthesis of layouts, trained with an adversarial network with self-attention modules \cite{DBLP:journals/corr/abs-1901-06767}. A variational autoencoder has been proposed for the generation of stochastic scene layouts with a label prior for each scene \cite{Jyothi_2019_ICCV}.
However, this generation approach is limited to producing similar layouts for a single type of room.

\section{Problem Formulation}
We formulate the process of furniture layout in an indoor scene as a Markov decision process (MDP) augmented with a goal state $G$ that we would like an agent to learn. We define this MDP as a tuple $(S,G,A,T,R,\gamma)$, in which $S$ is the set of states, $G$ is the goal, $A$ is the set of actions, $T$ is the transition probability function, where $T(s,a,s^{'})$ is the probability of transitioning to state $s^{'}$ when action $a$ is taken in state $s$, $R_{t}$ is the reward at timestamp $t$, and $\gamma\in[0,1)$ is the discount rate. The solution to this MDP is a control policy $\pi: S \times G \rightarrow A$ that maximizes the value function $v_{\pi}(s,g):=\mathbb{E}_{\pi}[\sum_{t=0}^{\infty} \gamma^{t} R_{t}|s_{0}=s,g=G]$ for a given initial state $s_{0}$ and goal $g$.

\begin{figure*} \centering \includegraphics[height=8.0cm]{iccv_2021_1_fig3.jpg} \caption{The formulation of the MDP for the layout of furniture in indoor scenes. A simulator is developed to bring the indoor scene from the real-time render into the simulation. Then, deep reinforcement learning is applied to train an agent through the exploration of action, reward, state and optimal policy in the developed simulated environment.} \label{fig3} \end{figure*}

As shown in Figure \ref{fig4}, the state $S$ is a tuple $S=(S_{ws},S_{wins},S_{fs})$, where $S_{ws}$ describes the geometrical position $p$ and size $s$ of the walls $ws$. To be noted, $p$ is the position $(x,y)$ of the center of a wall, and the size $s$ is the width and height of the wall. The geometrical positions $p$ and sizes $s$ of the windows $wins$ and doors $ds$ are defined similarly as the position of the center and the (width, height), as are the geometrical position $p$ and size $s$ of the furniture $fs$.
Therefore, the state $S$ is a tuple $S=(S_{ws},S_{wins},S_{fs})$, where $S_{wins}$ contains $N_{wins}$ windows with corresponding position $p_{i}$ and size $s_{i}$ for each window, $i \in \{1,2,\dots,N_{wins}\}$. Similarly, $S_{ws}$ contains $N_{ws}$ walls with corresponding $p_{i}$ and $s_{i}$ for each wall, $i \in \{1,2,\dots,N_{ws}\}$, and $S_{fs}$ contains $N_{fs}$ pieces of furniture with corresponding $p_{i}$ and $s_{i}$ for each piece, $i \in \{1,2,\dots,N_{fs}\}$. The goal $G$ is represented as the correct position $p$, size $s$ and direction $d$ of the furniture. The correct values come from the designs of professional interior designers that are sold to customers. The action $A$ represents the motion of the furniture towards the correct position $p$.

\begin{figure*} \centering \includegraphics[height=6.5cm]{iccv_2021_1_fig6.jpg} \caption{Evaluation samples of the furniture layout for the bedroom, the bathroom, the tatami room and the kitchen. The first column shows the results of the state-of-the-art model \cite{10.1145/3306346.3322941}. The second column shows the results of the proposed method in the simulation. The third column shows the ground truth layout. The state-of-the-art model is able to predict the approximate size and position of the furniture, while the proposed method is able to predict a more accurate size and position. The prediction is close to the ground truth.} \label{fig6} \end{figure*}

\section{Methods}

\begin{figure*} \centering \includegraphics[height=6.5cm]{iccv_2021_1_fig5.jpg} \caption{The simulator transfers the indoor scenes into simulated indoor scenes. The walls are represented as black patterns. The doors are represented as green patterns. The windows are represented as blue patterns. For the bathroom, the shower is represented as the pink pattern, the toilet is represented as the yellow pattern, the bathroom cabinet is represented as the red pattern.
For the bedroom, the bed is represented as the red pattern and the cabinet as the yellow pattern. For the tatami room, the tatami is represented as the red pattern and the working desk as the yellow pattern. For the kitchen, the sink is represented as the red pattern and the cook-top as the yellow pattern.} \label{fig4} \end{figure*}

\begin{figure} \includegraphics[height=4.0cm]{iccv_2021_1_fig4.jpg} \caption{DQN network architecture combined with the developed simulation in the formulated MDP.} \label{fig5} \end{figure}

In order to solve this formulated MDP problem, we explore and develop the simulated environment, action, reward, agent and the training of the agent in this section, as shown in Figure \ref{fig3} and Figure \ref{fig4}. Briefly, we define all elements of the indoor scene for this formulated MDP problem. Then, a real-time simulator is developed to explore these dynamics. To solve the MDP, we explore the reward, action, agent and policy in the field of reinforcement learning. Finally, a reinforcement learning model is trained to find an optimal policy as a solution to this formulated MDP problem.

\begin{figure*} \centering \includegraphics[height=6.5cm]{iccv_2021_1_fig7.jpg} \caption{Evaluation samples of the furniture layout for the living room, the balcony, the dining room and the study room. The first column shows the results of the state-of-the-art model \cite{10.1145/3306346.3322941}. The second column shows the results of the proposed method in the simulation. The third column shows the ground truth layout. The state-of-the-art model is able to predict the approximate size and position of the furniture, while the proposed method is able to predict a more accurate size and position.
The prediction is close to the ground truth.} \label{fig7} \end{figure*}

\subsection{Environment}
The indoor environment for this defined MDP is implemented as a simulator $F$, where $(S_{next},R)=F(S,A)$ and $A$ is the action taken by the agent in state $S$. The simulator $F$ receives the action $A$ and produces the next state $S_{next}$ and reward $R$. The simulator builds a world of a room containing the walls, windows, doors and furniture. As shown in Figure \ref{fig4}, the walls are visualized as black patterns, the windows as blue patterns, and the doors as green patterns. The furniture is visualized as patterns of different colors; for example, the cabinet is visualized as a yellow pattern and the bed as a red pattern. The simulator receives an action and then produces the reward and the next state. In the next state, the geometrical positions and sizes of walls, doors, and windows are unchanged, while the geometrical position of the furniture is updated following the input action.

\begin{figure*} \centering \includegraphics[height=6.5cm]{iccv_2021_1_fig8.jpg} \caption{Given a bed with random positions, the proposed method is able to produce a good layout of the bed in the bedroom. The first row shows the ground truth layout for a bedroom in the simulation and the corresponding renders. The second row shows the bed in random positions. The third row shows the final layout produced by the proposed method. The fourth row shows the corresponding layout in the renders.} \label{fig8} \end{figure*}

\subsection{Action}
The action space is discrete, consisting of four actions named right, left, below and up: the center of the furniture moves one step right, left, down or up, respectively. Besides, we set another two rules particularly for this simulator. Firstly, in the training process, if the furniture would move beyond the room, the simulator drops this action and then receives a new action.
In the test process, if the furniture moves beyond the room, the simulator stops the action immediately, as shown in Figure \ref{fig3}.

\begin{figure*} \centering \includegraphics[height=6.5cm]{iccv_2021_1_fig9.jpg} \caption{Given a toilet with random positions, the proposed method is able to produce a good layout of the toilet in the bathroom. The first row shows the ground truth layout for a toilet in the simulation and the corresponding renders. The second row shows the toilet in random positions. The third row shows the final layout produced by the proposed method. The fourth row shows the corresponding layout in the renders.} \label{fig9} \end{figure*}

\subsection{Reward}
We define three types of reward for this environment. The first reward function encourages the furniture to move towards the correct position. It is defined as follows:
\begin{equation}
r_{1}=\theta_{1}\,IoU(f_{target},f_{state})
\end{equation}
where $\theta_{1}$ is a positive parameter, $f_{target}$ represents the ground truth position and size of the furniture, and $f_{state}$ represents the current position and size of the furniture in the state. $IoU$ represents the intersection over union between $f_{state}$ and $f_{target}$. The second reward function prevents the furniture from overlapping with other elements, including the walls, windows, doors and other furniture.
It is defined as follows:
\begin{multline*}
r_{2}=\theta_{2}\Big(\sum_{i=0}^{N_{fs}-1}IoU(f_{state},f^{'}_{target})+\sum_{i=0}^{N_{ws}}IoU(f_{state}, w_{target})\\
+\sum_{i=0}^{N_{ds}}IoU(f_{state},d_{target})+\sum_{i=0}^{N_{wins}}IoU(f_{state},wins_{target})\Big)
\end{multline*}
where $\theta_{2}$ is a negative parameter, $f_{state}$ represents the current position and size of the furniture in the state, $f^{'}_{target}$ represents the ground truth positions and sizes of the other furniture in the environment, $w_{target}$ those of the walls, $d_{target}$ those of the doors, and $wins_{target}$ those of the windows. $IoU$ represents the intersection over union between the ground truth object and the predicted object. The third reward function prevents the furniture from moving outside the room. It is defined as follows:
\begin{equation}
r_{3}=\theta_{3}(1-IoU(f_{state},Indoor_{area}))
\end{equation}
where $\theta_{3}$ is a negative parameter, $f_{state}$ represents the current position and size of the furniture in the state, and $Indoor_{area}$ represents the area of the indoor room. $IoU$ represents the intersection over union between the predicted object and the area of the indoor room. The total reward is then the sum of the three terms, $r=\sum_{i=1}^{3}r_{i}$. It is aimed at moving the furniture to the right position with a suitable size, at preventing the furniture from overlapping with walls, doors, windows and other furniture, and at preventing the furniture from moving outside the room.

\subsection{Agent}
Consider the above Markov decision process defined by the tuple $\mu=(S,G,A,T,R,\gamma)$, where the state $S$, goal $G$, action $A$ and reward $R$ are defined as above.
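To make the reward design concrete, the three terms can be sketched with axis-aligned boxes in $(x, y, \text{width}, \text{height})$ form, where $(x, y)$ is the box center. This is a minimal illustrative sketch, not the paper's implementation; the helper names (\texttt{iou}, \texttt{reward}) and the magnitudes chosen for $\theta_{1}$, $\theta_{2}$, $\theta_{3}$ are assumptions.

```python
def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes (x, y, w, h),
    # where (x, y) is the center of the box.
    ax0, ay0 = a[0] - a[2] / 2, a[1] - a[3] / 2
    ax1, ay1 = a[0] + a[2] / 2, a[1] + a[3] / 2
    bx0, by0 = b[0] - b[2] / 2, b[1] - b[3] / 2
    bx1, by1 = b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0


def reward(f_state, f_target, obstacles, room,
           theta1=100.0, theta2=-100.0, theta3=-100.0):
    # r1: encourage overlap between the moved furniture and its target placement.
    r1 = theta1 * iou(f_state, f_target)
    # r2: penalize overlap with walls, doors, windows and other furniture.
    r2 = theta2 * sum(iou(f_state, o) for o in obstacles)
    # r3: penalize placements that leave the indoor area.
    r3 = theta3 * (1.0 - iou(f_state, room))
    return r1 + r2 + r3
```

Under this sketch, a piece of furniture sitting exactly on its target and clear of obstacles receives a large positive reward, while one overlapping a wall or another object is penalized.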
Commonly, for an agent, upon taking any action $a \in A$ at any state $s \in S$, $P(\cdot|s, a)$ defines the probability distribution of the next state and $R(\cdot|s, a)$ is the distribution of the immediate reward. In practice, the state space for this environment is a compact subset $S=\{S_{1},S_{2},\dots,S_{n}\}$, where the walls, windows, doors and other furniture in each state $S_{i}, i\in \{1,\dots,n\}$, remain unchanged; only the furniture moved by the agent changes its position and size. $A=\{a_{1},a_{2},a_{3},a_{4}\}$ has finite cardinality $4$: $a_{1}$ represents moving left, $a_{2}$ moving right, $a_{3}$ moving up, and $a_{4}$ moving down. The reward $R(\cdot|s, a)$ ranges over $[-200,100]$ for any $s \in S$ and $a \in A$. The agent learns a policy $\pi:S \rightarrow P(A)$ for the MDP that maps any state $s \in S$ to a probability distribution $\pi(\cdot|s)$ over $A$. For a given policy $\pi$, starting from the initial state $S_{0}=s$, the actions, rewards, and states evolve as follows: \begin{multline*} (A_{t},R_{t},S_{t+1}):A_{t} \sim \pi(\cdot|S_{t}), R_{t} \sim R(\cdot|S_{t},A_{t}),\\ S_{t+1} \sim P(\cdot|S_{t},A_{t}), t=0,1,\dots,n \end{multline*} and the corresponding value function $V^{\pi}:S \rightarrow \mathbb{R}$ is defined as the cumulative discounted reward obtained by taking the actions according to $\pi$ when starting from a fixed state $s$, that is, \begin{equation} V^{\pi}(s) = \mathcal{E}[\sum_{t=0}^{\infty}\gamma^{t}R_{t}|S_{0}=s] \end{equation} The policy $\pi$ is controlled by the agent, while the functions $P$ and $R$ are defined above by the environment. By the law of iterated expectations, for any policy $\pi$, \begin{equation} V^{\pi}(s) = \mathcal{E}[Q^{\pi}(s,A)|A \sim \pi(\cdot|s)], \forall s \in S \end{equation} where $Q^{\pi}(s,a)$ is the action-value function. The agent is trained to find the optimal policy, which achieves the largest cumulative reward, by dynamically learning from the acquired data.
The optimal action-value function $Q^{*}$ is defined as follows: \begin{equation} Q^{*}(s,a) = \sup_{\pi}Q^{\pi}(s,a), \forall (s,a) \in S \times A \end{equation} where the supremum is taken over all policies. We then train a deep neural network $Q_{\theta}:S \times A \rightarrow \mathbb{R}$ to approximate the optimal action-value function $Q^{*}$, where $\theta$ denotes the network parameters, as shown in Figure \ref{fig5}. Two tricks are applied in training the DQN. Firstly, experience replay is applied: at each time $t$, the transition $(S_{t}, A_{t}, R_{t}, S_{t+1})$ is stored in the replay memory $\mathcal{M}$. Then, a mini-batch of independent samples from $\mathcal{M}$ is drawn to train the neural network via stochastic gradient descent. The goal of experience replay is to obtain uncorrelated samples, which yield accurate gradient estimates for the stochastic optimization problem. Note, however, that transitions in which the moving furniture is outside the room are not stored in $\mathcal{M}$. Secondly, a target network $Q_{\theta^{*}}$ with parameter $\theta^{*}$ is applied in the training. It uses independent samples $\{(s_{i},a_{i},r_{i},s_{i+1})\}_{i\in[n]}$ from the replay memory. The parameter $\theta^{*}$ of the target network is updated once every $T_{target}$ steps by letting $\theta^{*}=\theta$. That is, the target network is held fixed for $T_{target}$ steps and then updated with the current weights of the Q-network.
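As a minimal sketch of these two tricks, the replay buffer and target-network schedule can look like the following. A linear Q-function stands in for the paper's convolutional network, and the learning rate, capacity and sync period are illustrative assumptions, not the paper's hyper-parameters.

```python
import random
from collections import deque

import numpy as np

class ReplayMemory:
    """Fixed-capacity store of (s, a, r, s_next) transitions."""
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # uniform mini-batch, decorrelating consecutive transitions
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

class DQNAgent:
    """Q-learning with a linear Q-function and a periodically synced target."""
    def __init__(self, state_dim, n_actions=4, gamma=0.99, lr=0.01,
                 target_sync=100):
        self.w = np.zeros((n_actions, state_dim))   # Q-network weights (theta)
        self.w_target = self.w.copy()               # target weights (theta*)
        self.gamma, self.lr = gamma, lr
        self.target_sync, self.steps = target_sync, 0

    def q(self, s, target=False):
        w = self.w_target if target else self.w
        return w @ s

    def update(self, batch):
        for s, a, r, s_next in batch:
            # bootstrap the TD target from the frozen target network
            td_target = r + self.gamma * np.max(self.q(s_next, target=True))
            td_error = td_target - self.q(s)[a]
            self.w[a] += self.lr * td_error * s     # SGD on squared TD error
        self.steps += 1
        if self.steps % self.target_sync == 0:      # theta* <- theta
            self.w_target = self.w.copy()
```

The transition filter mentioned in the text (dropping states where the furniture leaves the room) would simply be a condition guarding `ReplayMemory.push`.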
\begin{table}[t] \centering \begin{tabular}{|p{2cm}|p{2cm}|p{2cm}|} \hline \multicolumn{1}{|c|}{}&\multicolumn{2}{c|}{\text{IoU}}\\ \hline \hfil Model &\hfil PlanIT & \hfil Ours\\ \hline \hfil Bathroom &$0.623\pm0.008$ &$0.961\pm 0.014$\\ \hfil Bedroom &$0.647\pm0.009$ &$0.952\pm 0.026$\\ \hfil Study &$0.619\pm0.006$ &$0.957\pm 0.018$ \\ \hfil Tatami &$0.637\pm0.006$ &$0.948\pm 0.049$ \\ \hfil Living-Room &$0.603\pm0.006$ &$0.953\pm 0.037$ \\ \hfil Dining-Room &$0.611\pm0.006$ &$0.949\pm 0.029$ \\ \hfil Kitchen &$0.636\pm0.006$ &$0.953\pm 0.041$ \\ \hfil Balcony &$0.651\pm0.006$ &$0.951\pm 0.035$ \\ \hline \end{tabular} \caption{Comparison with the state-of-the-art model.} \label{table1} \end{table} \begin{table}[t] \centering \begin{tabular}{|p{2cm}|p{2cm}|} \hline \multicolumn{1}{|c|}{}&\multicolumn{1}{c|}{\text{IoU}}\\ \hline \hfil Model & \hfil Ours\\ \hline \hfil Bathroom &$0.953\pm0.012$ \\ \hfil Bedroom &$0.959\pm0.017$ \\ \hfil Study &$0.946\pm0.014$ \\ \hfil Tatami &$0.949\pm0.009$ \\ \hfil Living-Room &$0.956\pm0.018$ \\ \hfil Dining-Room &$0.962\pm0.005$ \\ \hfil Kitchen &$0.957\pm0.015$ \\ \hfil Balcony &$0.961\pm0.007$ \\ \hline \end{tabular} \caption{Evaluation with random initial positions.} \label{table2} \end{table} Both the Q-network and the target network contain three convolution layers and three fully connected~(fc) layers. The convolution layers contain $8$, $16$ and $32$ features, and the fc layers contain $7200$, $512$ and $512$ features. Both the Q-network and the target network are learned in the training process. \section{Evaluation} In this section, we present qualitative and quantitative results demonstrating the utility of our proposed model and the developed simulation environment. Eight main types of indoor rooms are evaluated: the bedroom, the bathroom, the study room, the kitchen, the tatami room, the dining room, the living room and the balcony.
We compare the proposed model with the state-of-the-art models. Besides, for each room type, we also test the performance of the proposed model in the developed environment with $2000$ random starting points. For the comparison, we train on $5k$ samples for each type of room and test on $1k$ samples of the corresponding type. All of the samples come from the designs of professional designers. \subsection{Evaluation metrics} For the task of interior scene synthesis, we apply one metric for the evaluation. For the comparison between the proposed method and the state-of-the-art models, the average IoU is calculated for each room type. It is defined as follows: \begin{equation*} IoU_{average} = \frac{1}{n}\sum_{i=1}^{n} IoU(f^{i}_{pred},f^{i}_{gt}) \end{equation*} where $f^{i}_{gt}$ represents the ground truth position and size of the $i$-th furniture item, and $f^{i}_{pred}$ represents the predicted position and size of the corresponding furniture. $IoU$ denotes the intersection over union between the ground truth object and the predicted object. We compare against the state-of-the-art models for scene synthesis; the results are shown in Figures \ref{fig6} and \ref{fig7}. It can be observed that our model outperforms the state-of-the-art models in two aspects. First, the predicted position of the center of the furniture is more accurate. Second, the predicted size of the furniture matches the ground truth size, while the state-of-the-art model is only able to predict an approximate size. Similarly, Figure \ref{fig8} and Figure \ref{fig9} show samples with different starting positions of the furniture in the indoor room environment, whereas the state-of-the-art model can only predict the position and size of the furniture given an empty room as input. Furthermore, the corresponding quantitative comparison and evaluation are shown in Table \ref{table1}.
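The average IoU metric above can be computed as in this short sketch, assuming each furniture item is an axis-aligned box `(x, y, w, h)`; the helper is illustrative, not the authors' code.

```python
def average_iou(preds, gts):
    """Mean IoU over matched (predicted, ground-truth) furniture boxes.

    Boxes are (x, y, w, h) tuples; preds[i] corresponds to gts[i].
    """
    def iou(a, b):
        ix = max(0.0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union > 0 else 0.0
    return sum(iou(p, g) for p, g in zip(preds, gts)) / len(preds)
```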
In detail, Figure \ref{fig6} shows the predicted layout and the ground truth in both the simulated environment and the rendering environment at the product end for an evaluation sample of the bedroom. The proposed method outperforms the state-of-the-art model in the accuracy of size and position. Similarly, Figure \ref{fig7} shows the predicted layouts of the tatami room, the study room, the dining room and the living room; the proposed method also outperforms the state-of-the-art model in terms of size and position. Besides, Figure \ref{fig8} shows evaluation samples with random starting positions for four types of rooms: the bedroom, the bathroom, the kitchen and the balcony. The proposed method is able to move the furniture from random positions to the ground truth position. Similarly, Figure \ref{fig9} shows evaluation samples with random starting positions for the bathroom. In contrast, the state-of-the-art models are not able to predict the layout of furniture given random initial positions as input. \section{Discussion} In this paper, we formulate deep furniture layout in indoor scenes as a Markov Decision Process~(MDP). We study the MDP formulation for this layout task and develop a simulator for it. Then, the reward, action, policy and agent are studied under the explored MDP formulation. After formulating this layout task as a reinforcement learning problem, we train two models to find an optimal policy. In the evaluation, the proposed method achieves better performance than the state-of-the-art models on the interior layouts dataset, with more accurate positions and sizes of the furniture. However, there are several avenues for future work. Our method is currently limited to moving a single piece of furniture in the simulated environment.
The simulated environment is not driven by a real-time rendering engine; therefore, it does not reach an industrial level. In addition, the simulator supports only a limited number of rooms in the training process. {\small \bibliographystyle{ieee_fullname}
Q: LazyInitializationException: could not initialize proxy - no Session lazy proxy, false fetch join? I have a case where I get LazyInitializationException in my project. It happens here: if (study.getIbId().equals(actor.getRepository().getIbId())) { The actor variable is of type Account and its repository is of type Repository. IbId is of type Long. Account and Repository come from Hibernate. The error comes from getIbId(), which means the Repository object was not hydrated(?). Here is the Account.hbm.xml file: Account: ... </many-to-one> <many-to-one cascade="all" class="com.accelarad.data.mapping.account.Repository" column="REPOSITORY_ID" lazy="proxy" name="repository" unique="true"> ... As you can see, there is a lazy=proxy property. When I change it to lazy=false, I don't get the LazyInitializationException anymore. From what I understand, if lazy=false, the association is eagerly fetched, so it is not efficient to do so. Is there a way to keep lazy=proxy and still load the Repository? What do lazy=${something} and fetch=${something} mean?
EDIT: error log: org.hibernate.LazyInitializationException: could not initialize proxy - no Session at org.hibernate.proxy.AbstractLazyInitializer.initialize(AbstractLazyInitializer.java:147) at org.hibernate.proxy.AbstractLazyInitializer.getImplementation(AbstractLazyInitializer.java:260) at org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer.invoke(JavassistLazyInitializer.java:73) at com.accelarad.data.mapping.account.Repository_$$_jvst9a5_5d.getIbId(Repository_$$_jvst9a5_5d.java) at com.accelarad.smr.widgets.service.impl.ShareImageServiceImpl.isNetworkStudy(ShareImageServiceImpl.java:265) at com.accelarad.smr.widgets.ShareImageController.autoCompleteAccount(ShareImageController.java:231) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) A: There is some information missing, such as how you are managing the session and transactions. The exception tells you just that: there is no session bound to your request. Nevertheless, I'll try to point you in a good direction. First, you will need a SessionFactory object to manage your sessions. It is good practice to encapsulate it in a singleton pattern because Hibernate sessions are not thread-safe.
public class HibernateFactory { private SessionFactory factory; private static HibernateFactory hf; private HibernateFactory() throws HibernateException { /* different versions of hibernate have different ways to build a session factory */ factory = new Configuration().configure().buildSessionFactory(); } /* synchronized so there are no multi-threading problems */ synchronized public static HibernateFactory getInstance() throws HibernateException { if (hf == null) { hf = new HibernateFactory(); } return hf; } /* will open a session and keep it open until closed manually */ public Session getSession() throws HibernateException { return this.factory.openSession(); } /* will open a session and close it automatically after the transaction */ public Session getCurrentSession() throws HibernateException { return this.factory.getCurrentSession(); } public void finalize() throws HibernateException { this.factory.close(); } } So now you can call Session hSession = HibernateFactory.getInstance().getSession(); Then make sure your lazy load happens inside the transaction boundary you define. ... Session session = HibernateFactory.getInstance().getSession(); session.beginTransaction(); if (study.getIbId().equals(actor.getRepository().getIbId())) { ... session.getTransaction().commit(); session.close(); Generally you won't need to explicitly open a transaction if you are just fetching data, but it's good practice anyway. If you do, remember to commit, and roll back if needed. As for your other question on what lazy and fetch mean, refer to: https://docs.jboss.org/hibernate/stable/core.old/reference/en/html/performance.html
\section{Introduction} The escalation of globalization has created great demand for Cross-Lingual Information Retrieval~(CLIR), which has broad applications such as cross-border e-commerce, cross-lingual question answering, and so on \cite{eco,ruckle2019improved, xu2021artificial}. Informally, given a query in one language, CLIR is a document retrieval task that aims to rank the candidate documents in another language according to the relevance between the search query and the documents. Most existing solutions to the CLIR task are built upon machine translation~\cite{dwivedi2016survey} systems~(also known as MT systems). One technical route is to translate either the query or the document into the language of the other side~\cite{dic-trans1,dic-trans2,cor-trans1,doc-trans1}. The other is to translate both the query and the document into the same intermediate language~\cite{kishida2003two}, e.g. English. After aligning the languages of the query and documents, monolingual retrieval is performed to accomplish the task. Hence, the dependence on the performance of the MT systems and the accumulation of translation errors may render these approaches ineffective for CLIR. Recent studies strive to model CLIR with deep neural networks that encode both the query and the document into a shared space rather than using MT systems~\cite{zhang2019improving,share-repre,hui2018co,eco}. Though these approaches achieve some remarkable successes, the intrinsic differences between languages still exist due to the implicit alignment of these methods. Meanwhile, queries are usually short, which leads to a lack of information when matching them with candidate documents. To tackle these issues, we aim to find a ``silver bullet'' that simultaneously performs \emph{explicit alignment} between queries and documents and \emph{broadens} the information of queries. The multilingual knowledge graph~(KG), e.g. Wikidata~\cite{vrandevcic2014wikidata}, is our answer.
As a representative multilingual KG, Wikidata\footnote{\url{https://www.wikidata.org/wiki/Wikidata:Main_Page}} includes more than 94 million entities and 2 thousand kinds of relations, and most of the entities in Wikidata have multilingually aligned names and descriptions\footnote{More than 260 languages are supported now.}. With such an external source of knowledge, we can build an explicit bridge between the source language and the target language on the premise of the given query information. For example, Figure~\ref{fig_intr} shows a query ``\begin{CJK*}{UTF8}{gbsn}新冠病毒\end{CJK*}'' in Chinese (``COVID-19'' in English) and candidate documents in English. Through the multilingual KG, we can link ``\begin{CJK*}{UTF8}{gbsn}新冠病毒\end{CJK*}'' to its aligned entity in English, i.e. ``COVID-19'', and then extend to related neighbors such as ``Fever'', ``SARS-CoV-2'' and ``Oxygen Therapy''. Both the aligned entity and the local neighborhood can help extend the insufficient query and fill in the linguistic gap between the query and the documents. \begin{figure}[t] \centering \setlength{\abovecaptionskip}{2pt} \setlength{\belowcaptionskip}{-5pt} \includegraphics[width=\linewidth]{figs/mmm5-crop.pdf} \caption{A toy example of utilizing the multilingual KG for CLIR. The query is in Chinese and the documents are in English. For the query, we give an English translation for better understanding. The entities are denoted by circles. The dotted black line indicates the description of an entity. The solid black arrows indicate relations between entities. The solid blue arrow shows the related entity of the given query. The hollow arrows indicate the documents of the query.
The entities and corresponding descriptions in the KG are bilingual.} \label{fig_intr} \end{figure} Along this line, we adopt the multilingual KG as an external source to facilitate CLIR and propose a \textbf{HI}erarchical \textbf{K}nowledge \textbf{E}nhancement~(HIKE for short) mechanism to fully integrate the relevant knowledge. Indeed, queries are usually short but rich in entities. HIKE establishes a link between queries and the multilingual KG through the entities mentioned in queries, and makes full use of the semantic information of entities and their neighborhoods in the KG with a hierarchical information fusion mechanism. Specifically, a knowledge-level fusion integrates the information in each individual language in the KG, and a language-level fusion combines the integrated information from different languages. The multilingual KG provides valuable information, which helps to reduce the disparity between different languages and benefits the matching process over queries and documents. To summarize, the contributions are as follows. \begin{itemize} \item We adopt the external multilingual KG not only as an enhancement for sparse queries but also as an explicit bridge mitigating the gap between the query and the document in CLIR. To the best of our knowledge, this is the first work that utilizes a multilingual KG for the neural CLIR task. \item We propose HIKE, which makes full use of the entities mentioned in queries as well as the local neighborhoods in the multilingual KG to improve the performance in CLIR. HIKE contains a hierarchical information fusion mechanism to resolve the sparsity in queries and perform easier matching over the query-document pairs. \item Extensive experiments on a number of benchmark datasets in four languages~(English, Spanish, French, Chinese) validate the effectiveness of HIKE against state-of-the-art baselines.
\end{itemize} \section{Related Work} Current information retrieval models for cross-lingual tasks can be categorized into two groups: (i) translation-based approaches~\cite{nie2010cross, zbib2019neural} and (ii) semantic alignment approaches~\cite{bai2010learning,sokolov2013boosting}. Early works mainly focus on translation-based models. One way is to translate queries into the target language of the documents~\cite{query-trans-1}, or to translate the documents or corpus into the same language as the queries~\cite{doc-trans-1,doc-trans-2}. The other is to translate both queries and documents into the same intermediate language, e.g. English~\cite{kishida2003two}. In both cases, the aim is to simplify the process and use monolingual information retrieval methods to solve the CLIR problem. Recently, with the development of deep neural networks, semantic alignment approaches, which directly tackle CLIR tasks without a translation process, have gained much attention. These methods align queries and documents into the same space with probabilistic or neural network methods and perform query-document matching in the aligned space. \citet{sokolov2013boosting} proposed a method for learning bilingual n-gram correspondences from relevance rankings. \citet{share-repre} presented a simple yet effective method using shared representations across CLIR models trained on different language pairs. The release of BERT~\cite{bert} led to breakthroughs in various NLP tasks~\cite{jiang2020cross}, including document ranking. For example, Contextualized Embeddings for Document Ranking~(CEDR)~\cite{cedr} is an effective method for using BERT to enhance the prevalent neural ranking models, such as KNRM~\cite{knrm}, PACRR~\cite{pacrr} and DRMM~\cite{drmm}. \citet{clirmatrix} utilized a multilingual version of BERT~(a.k.a. multilingual BERT or mBERT) to conduct the CLIR task. These BERT-based neural ranking models achieve state-of-the-art results.
Besides, due to the fast-growing scale of KGs such as Wikidata~\cite{vrandevcic2014wikidata} and DBpedia~\cite{auer2007dbpedia}, some research focuses on using high-quality KGs as extra knowledge to perform the information retrieval task. \citet{word-entity} presented a word-entity duet framework for utilizing KGs in ad-hoc retrieval. The Entity-Duet Neural Ranking Model~(EDRM)~\cite{entityduet}, which introduces KGs to neural search systems, represents queries and documents by their word and entity annotations. Despite the popularity of KGs for information retrieval, works on KGs for CLIR are rarely found. \citet{zhang2016xknowsearch} introduced KGs to CLIR systems using standard similarity measures for document ranking. However, this work does not use neural network models. To the best of our knowledge, ours is the first work that incorporates multilingual KG information into the neural CLIR task. \section{Methodology} In this section, we illustrate the overall framework of our HIKE model, including the model architecture and a detailed description of the model components. \begin{figure*}[t] \centering \setlength{\abovecaptionskip}{2pt} \setlength{\belowcaptionskip}{-10.0pt} \includegraphics[width=\linewidth]{figs/model4-crop.pdf} \caption{The overall framework of HIKE. The left part is the general architecture, and the right part is the detailed illustration. } \label{model} \end{figure*} \subsection{Notations} CLIR is a retrieval task in which search queries and candidate documents are written in different languages. Since search queries are usually short but rich in entities, HIKE establishes a connection between CLIR and the multilingual KG via the entities mentioned in queries, and leverages the KG information through these entities and their local neighborhoods in the KG.
Specifically, for each entity, we obtain the following information from the multilingual KG: (i) the entity label\footnote{In some large-scale KGs like Wikidata~\cite{vrandevcic2014wikidata} and DBpedia~\cite{auer2007dbpedia}, the name of an entity is denoted as its label.}, (ii) the entity description, (iii) the labels of neighboring entities, and (iv) the descriptions of neighboring entities. It is worth noting that all the information in the KG is multilingual, and the information in different languages is aligned. We leverage the above information to facilitate the CLIR task. Given a query $q$ and a document $d$, we denote an entity by $e_q \in \mathcal{E}$ and the $i$-th neighboring entity by $n_{ei} \in \mathcal{E}$, where $\mathcal{E}$ is the entity set of the KG. Both the entity and its neighboring entities carry two kinds of information to incorporate: labels and descriptions. Furthermore, for a specific bilingual information retrieval task, the label and description of $e_q$ are denoted as $l_{e_q}^r$ and $p_{e_q}^r$, respectively. The label and description of $n_{ei}$ are denoted as $l_{n_{ei}}^r$ and $p_{n_{ei}}^r$, where $r \in \{s, t\}$ indicates the source or target language. All of these, including $q$, $d$, $l_{e_q}^r$, $p_{e_q}^r$, $l_{n_{ei}}^r$ and $p_{n_{ei}}^r$, are sequences of tokens. \subsection{Model Architecture} HIKE incorporates the multilingual semantic information of the entities and their local neighborhoods from the KG into the current CLIR model. The overall architecture of HIKE is shown in Figure~\ref{model}. HIKE consists of three modules: an encoder module, a hierarchical information fusion module and a query-document matching module. Specifically, in the encoder module, HIKE utilizes multilingual BERT to embed the queries, documents, and semantic information from the KG into low-dimensional vectors.
The encoder passes the embeddings to the hierarchical information fusion module, which combines the information from the KG into queries and facilitates the matching with documents. In particular, the knowledge-level (first-level) fusion integrates the information in the KG using the multi-head attention mechanism~\cite{attention}. We use two individual knowledge-level fusion modules to extract features from the source and target languages. Then, the language-level~(second-level) fusion integrates the two representations of an entity in the source and target languages through a multi-layer perceptron. After the hierarchical information fusion mechanism, we utilize a matching model to get the relevance score of the query-document pair. The higher the score, the more relevant the query and the document are. \subsection{Encoder} The encoder aims to embed the tokens from queries, documents, entities and neighboring entities. It consists of two parts: the Query and Document Duet Encoder~(QD-Duet-Encoder) and the Knowledge Encoder~(K-Encoder). QD-Duet-Encoder embeds a query-document pair into a $d$-dimensional vector, and K-Encoder transforms the label and description of an entity into another $d$-dimensional vector. \noindent \textbf{QD-Duet-Encoder} concatenates the tokens from queries and documents into one sequence, using [CLS] and [SEP] as meta-tokens. [CLS] is a special symbol added in front of every input example, and [SEP] is a special separator token~\cite{bert}. The encoder then sums the token embedding, segment embedding and positional embedding for each token to get the input embedding, and computes the output embedding that represents the semantic and matching information of a query-document pair. Embedding the query and document together lets the ranking model benefit from deep semantic information from BERT in addition to individual contextualized token matching~\cite{cedr}.
For a given query $q$ and document $d$, the output of QD-Duet-Encoder is shown in Equation (\ref{qd-encode}), where $\bm{v}_{qd}$ is the [CLS] embedding of the output. \begin{equation} \begin{aligned} \bm{v}_{qd} = {\mbox{QD-Duet-Encoder}}( \{\mbox{[CLS]}, q, \mbox{[SEP]}, d\}), \label{qd-encode} \end{aligned} \end{equation} where $\mbox{QD-Duet-Encoder}(\cdot)$ is a multilingual BERT model\footnote{We used BERT-base, multilingual cased.} and $\{\cdot,\cdot\}$ means concatenating two sequences of tokens into one sequence. \noindent \textbf{K-Encoder} aims to embed the knowledge information from entities or neighboring entities in two languages into a feature vector. Inspired by the advantages of embedding the query and document together, we use [CLS] and [SEP] to concatenate the label and the description of an entity to obtain the embedding. Suppose there are $k$ neighboring entities; we denote the set of neighboring entity labels as $\mathcal{N}_l^r = \{l_{n_{e1}}^r, l_{n_{e2}}^r, \cdots, l_{n_{ek}}^r\}$ and the descriptions as $\mathcal{N}_p^r = \{p_{n_{e1}}^r, p_{n_{e2}}^r, \cdots, p_{n_{ek}}^r\}$. All these entities are fed into K-Encoder to compute feature embeddings as \begin{equation} \begin{aligned} \bm{v}_{e_q}^r & = \mbox{K-Encoder}(\{\mbox{[CLS]}, l_{e_q}^r, \mbox{[SEP]}, p_{e_q}^r\}), \\ \bm{v}_{n_{ei}}^r & = \mbox{K-Encoder}(\{\mbox{[CLS]}, l_{n_{ei}}^r, \mbox{[SEP]}, p_{n_{ei}}^r\}), \label{k-encode} \end{aligned} \end{equation} where $i=1,2,\cdots,k$. $\mbox{K-Encoder}(\cdot)$ is also a multilingual BERT, and $r \in \{s, t\}$ denotes the source and target languages, respectively. We sort the neighboring entities in descending order according to their relevance to the central entity and select the top $k$ neighboring entities to obtain $\bm{v}_{n_{ei}}^r$, where $k$ is a hyper-parameter.
Specifically, we first run the popular KG embedding model TransE~\cite{TransE} to get the embeddings of entities, and then calculate the cosine similarity between two entities as the relevance score. $\bm{v}_{e_q}^r$ and $\bm{v}_{n_{ei}}^r$ are the [CLS] embeddings of the entity and the $i$-th neighboring entity, respectively. The set of feature vectors of neighboring entities is $\mathcal{N}^r = \{\bm{v}_{n_{e1}}^r, \bm{v}_{n_{e2}}^r, \cdots,\bm{v}_{n_{ek}}^r\}$. $\bm{v}_{qd}$, $\bm{v}_{e_q}^r$ and $\mathcal{N}^r$ are treated as the inputs of the fusion module in the next subsection. \subsection{Hierarchical Information Fusion} In this section, we detail the hierarchical information fusion module, which is a two-level fusion mechanism comprising knowledge-level fusion and language-level fusion. \noindent \textbf{Knowledge-Level Fusion} contains two modules: a multi-head self-attention mechanism and an information aggregator. With the help of these two modules, our model can learn a wealth of similar semantic information among the entity, its neighboring entities and the query-document pair. In the self-attention mechanism, $\bm{v}_{qd}$, $\bm{v}_{e_q}^r$ and $\mathcal{N}^r$ are gathered together and fed into the attention module to calculate the attention values. The input matrix $\bm{E}^r$ is denoted as: \begin{equation} \begin{aligned} \bm{E}^r = (\bm{v}_{qd} \odot \bm{v}_{e_q}^r \odot \bm{v}_{n_{e1}}^r \odot \bm{v}_{n_{e2}}^r \odot \cdots \odot \bm{v}_{n_{ek}}^r), \label{input} \end{aligned} \end{equation} where $\odot$ is an operation that stacks row vectors into a matrix. $\bm{E}^r$ contains the embeddings from the query, document, entity and the local neighborhood of the entity. To encapsulate more valuable information, we utilize the multi-head attention mechanism~\cite{attention} to learn better latent semantic information.
The self-attention module takes three inputs~(the query, the key, and the value), denoted as $\bm{Q}$, $\bm{K}$, $\bm{V} \in \mathbb{R}^{(2+k) \times d}$ ($d$ is the embedding size), respectively. To be specific, we discuss only the $j$-th head of the multi-head attention mechanism. First, the self-attention model uses each embedding in $\bm{E}^r$ to obtain the query $\bm{Q}^j$, key $\bm{K}^j$ and value $\bm{V}^j$ through a linear transformation layer. Then the model uses each embedding in the query to attend to each embedding in the key through the scaled dot-product attention mechanism \cite{attention} and obtains the attention scores. Finally, the obtained attention scores are applied to the value $\bm{V}^j$ to calculate a new representation $\mbox{Att}(\bm{Q}^j, \bm{K}^j, \bm{V}^j)$, which is formulated as: \begin{equation} \mbox{Att}(\bm{Q}^j, \bm{K}^j, \bm{V}^j) = \mathrm{softmax}(\frac{\bm{Q}^j \cdot (\bm{K}^j)^T}{\sqrt{d}})\cdot \bm{V}^j. \end{equation} Therefore, each row of $\mbox{Att}(\bm{Q}^j, \bm{K}^j, \bm{V}^j)$ is capable of incorporating the semantic information from the rows of $\bm{V}^j$. Furthermore, a layer normalization operation~\cite{ba2016layer} is applied to the output of the attention model to obtain the representation of the $j$-th head $\bm{H}^j = \mathrm{LayerNorm}(\mbox{Att}(\bm{Q}^j, \bm{K}^j, \bm{V}^j))$. Next, we pack the multi-head information using the following operation: \begin{equation} \mbox{Multi-Head}(\bm{Q}, \bm{K}, \bm{V}) = (\bm{H}^1||\bm{H}^2||\cdots|| \bm{H}^m)\bm{W}_H, \end{equation} where $\bm{W}_H \in \mathbb{R}^{md \times d}$ is a parameter matrix and $m$ is the number of heads.
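A schematic NumPy version of the scaled dot-product attention and multi-head packing described above might look like the following; this is a sketch, with the per-head projection matrices passed in explicitly and all sizes illustrative (the paper computes these inside learned BERT-sized layers).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def scaled_dot_attention(Q, K, V):
    d = Q.shape[-1]
    scores = softmax(Q @ K.T / np.sqrt(d))   # (2+k, 2+k) attention weights
    return scores @ V

def multi_head(E, W_q, W_k, W_v, W_h):
    # E: (2+k, d) stacked embeddings [v_qd; v_eq; v_ne1; ...; v_nek].
    # W_q, W_k, W_v: lists of per-head (d, d) projections; W_h: (m*d, d).
    heads = [layer_norm(scaled_dot_attention(E @ Wq, E @ Wk, E @ Wv))
             for Wq, Wk, Wv in zip(W_q, W_k, W_v)]
    return np.concatenate(heads, axis=-1) @ W_h   # (2+k, d)
```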
Accordingly, we obtain the representation after the multi-head attention $\bm{M}^r = ({\bm{v}_{qd}}^{\prime} \odot {\bm{v}_{e_q}^r}^{\prime} \odot {\bm{v}_{n_{e1}}^r}^{\prime} \odot {\bm{v}_{n_{e2}}^r}^{\prime} \odot \cdots \odot {\bm{v}_{n_{ek}}^r}^{\prime}) = \mbox{Multi-Head}(\bm{Q}, \bm{K}, \bm{V}) \in \mathbb{R}^{(2+k)\times d}$, where $r \in \{s, t\}$ denotes that the parameter is for the source and target languages respectively. ${\bm{v}_{qd}}^{\prime}$, ${\bm{v}_{e_q}^r}^{\prime}$ and ${\bm{v}_{n_{ei}}^r}^{\prime} (i = 1,2,\dots,k)$ represent the output vectors of the multi-head self-attention. Finally, we use an information aggregator, which consists of a linear transformation layer as in Equation (\ref{aggre}), to compute the final representation of the knowledge-level features. \begin{equation} \bm{e}_{kg}^r = \mathrm{Tanh}(\bm{W}_K \cdot \mathrm{vec}(\bm{M}^r) +\bm{b}_K), \label{aggre} \end{equation} where $\mathrm{vec}(\cdot)$ is a vectorization function that concatenates the rows of a matrix into a long vector. $\bm{W}_K \in \mathbb{R}^{d \times (2+k)d}$ is a parameter matrix and $\bm{b}_K$ is a $d$-dimension vector. $\bm{e}_{kg}^r$ incorporates the deep semantic information from the KG. \noindent \textbf{Language-Level Fusion} combines the query-document pair information with $\bm{e}_{kg}^s$ and $\bm{e}_{kg}^t$, which are obtained from the knowledge-level fusion. We use $\bm{v}_{qd}$ as guidance in the fusion process, which is denoted by the blue arrow in Figure~\ref{model}. Then these embeddings are combined by a linear transformation layer with $\mathrm{Tanh}$ as the activation function to generate a unified representation: \begin{equation} \bm{e}_{kglang} =\mathrm{Tanh}[\bm{W}_L(\bm{v}_{qd}||\bm{e}^{s}_{kg}||\bm{e}^{t}_{kg}) + \bm{b}_L], \end{equation} where $s$ and $t$ represent the source and target languages. $\bm{W}_L \in \mathbb{R}^{d \times 3d}$ and $\bm{b}_L \in \mathbb{R}^d$ are parameters.
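The aggregator and the language-level fusion reduce to a flatten-project-concatenate pipeline, sketched below. Again this is an illustrative sketch under assumed shapes, with random placeholder weights rather than trained parameters:

```python
# Sketch of the information aggregator e_kg^r = tanh(W_K . vec(M^r) + b_K)
# followed by the language-level fusion over source (s) and target (t).
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 3

def aggregate(M_r, W_K, b_K):
    """Flatten the (2+k) x d attention output row-wise, project to d dims."""
    return np.tanh(W_K @ M_r.reshape(-1) + b_K)

W_K, b_K = rng.normal(size=(d, (2 + k) * d)), rng.normal(size=d)
e_kg_s = aggregate(rng.normal(size=(2 + k, d)), W_K, b_K)  # source language
e_kg_t = aggregate(rng.normal(size=(2 + k, d)), W_K, b_K)  # target language

# Language-level fusion: concatenate v_qd with both knowledge vectors,
# then a tanh-activated linear layer (W_L is d x 3d).
v_qd = rng.normal(size=d)
W_L, b_L = rng.normal(size=(d, 3 * d)), rng.normal(size=d)
e_kglang = np.tanh(W_L @ np.concatenate([v_qd, e_kg_s, e_kg_t]) + b_L)
print(e_kglang.shape)                # -> (8,)
```

The resulting `e_kglang` plays the role of the unified embedding that the matching function consumes.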
$\bm{e}_{kglang}$ is the unified embedding that incorporates the information from queries, documents, and the multilingual KG. \subsection{Matching Function} Finally, HIKE uses the matching function to obtain the score of a query-document pair. Particularly, $\bm{v}_{qd}$ and $\bm{e}_{kglang}$ are concatenated and fed into another linear layer to obtain the ranking score of the query-document pair: \begin{equation} f(q, d) = \mathrm{Softmax}[\bm{W}_S(\bm{v}_{qd}||\bm{e}_{kglang}) + b_S], \end{equation} where $f(q, d)$ is the ranking score between the query and document. $\bm{W}_S \in \mathbb{R}^{1 \times 2d}$ and $b_S \in \mathbb{R}^1$ are parameters, and $\mathrm{Softmax}$ is an activation function that converts the results into a probability over different classes. In the training stage, we use the standard pairwise hinge loss to train the model, as shown in Equation (\ref{loss}). \begin{equation} \mathcal{L} = \sum\limits_{d \in D_q^+} \sum\limits_{d' \in D_q^-}[1 - f(q, d) + f(q, d')]_+. \label{loss} \end{equation} $D_q^+$ and $D_q^-$ are the sets of relevant and irrelevant documents of the query $q$, and $[\cdot]_+ = \text{max}(0, \cdot)$. \section{Experiment Methodology} In this section, we describe the details of our experiments, including the dataset, the multilingual KG, baselines, evaluation metrics and implementation details. \subsection{Dataset} We evaluate the HIKE model on a public CLIR dataset, CLIRMatrix \cite{clirmatrix}. Specifically, we use the MULTI-8 set in CLIRMatrix, in which queries and documents are jointly aligned in 8 different languages. The dataset is mined from 49 million unique queries and 34 billion (query, document, relevance label) triplets. The relevance label $\in \{0, 1, 2, 3, 4, 5, 6\}$ indicates the relevance of the query-document pair: the higher the value, the more relevant the pair. In MULTI-8, queries remain the same no matter what the language of documents is.
For instance, the three language pairs English-Spanish, English-French and English-Chinese in MULTI-8 share the same queries. Furthermore, we choose four widely used languages to conduct the bilingual information retrieval tasks: English~(EN), French~(FR), Spanish~(ES) and Chinese~(ZH). Thus there are 12 language pairs in the dataset for training, validation and testing. The training set of each language pair contains 10,000 queries, while the validation and test sets contain 1,000 queries each. Meanwhile, the number of candidate documents for each query is 100. We use the test1 set in MULTI-8 as our test set to verify the model performance. The statistics of the datasets are summarized in Table \ref{data}. \begin{table}[h] \centering \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.5cm} \begin{tabular}{cccc} \toprule Dataset & train & valid & test \cr \midrule $\{s \rightarrow t\}$ & 10000 & 1000 & 1000 \cr \bottomrule \end{tabular} \caption{Statistics of the datasets. Here $ s, t \in \{\mbox{EN}, \mbox{ES},\mbox{FR},$ $ \mbox{ZH}\}$ and $s \neq t$.} \label{data} \end{table} \begin{table}[h] \centering \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-0.5cm} \fontsize{8}{6}\selectfont \setlength\tabcolsep{3.0pt} \begin{tabular}{lcccccc} \toprule &EN-ES & EN-FR & EN-ZH & ES-EN & ES-FR & ES-ZH \cr \midrule source language & \multicolumn{3}{c}{7.11}&\multicolumn{3}{c}{6.53}\cr \cmidrule(lr){2-4} \cmidrule(lr){5-7} target language & 6.15&6.34&4.86&7.37&6.73&5.21 \cr \bottomrule \toprule & FR-EN & FR-ES & FR-ZH & ZH-EN & ZH-ES & ZH-FR \cr \midrule source language & \multicolumn{3}{c}{6.41}&\multicolumn{3}{c}{4.95} \cr \cmidrule(lr){2-4} \cmidrule(lr){5-7} target language & 7.11&6.19&4.93&7.02&6.13&6.33 \cr \bottomrule \end{tabular} \caption{ Average number of golden neighboring entities. ``Golden'' means the neighboring entities have both the description and the label in a specific language of the queries.
The source language is on the left of the connector ``-'', while the target language is on the right. } \label{know} \end{table} \begin{table*}[t] \centering \setlength{\abovecaptionskip}{0.1cm} \setlength{\belowcaptionskip}{-8pt} \fontsize{8}{7}\selectfont \begin{threeparttable} \begin{tabular}{p{1.8cm}<{\centering}p{1.5cm}p{1.8cm}<{\centering}p{1.8cm}<{\centering}p{1.8cm}<{\centering}p{1.8cm}<{\centering}p{1.8cm}<{\centering}p{1.8cm}<{\centering}} \toprule \multirow{2}[2]{*}{\bf{Language Pair}}& \multirow{2}[2]{*}{\bf Metrics} & \multicolumn{6}{c}{\bf Models} \cr \cmidrule(lr){3-8} & & Vanilla BERT & CEDR-DRMM & CEDR-KNRM & CEDR-PACRR & HIKE$^{-}$ & HIKE \cr \midrule \multirow{3}{*}{EN-ES} & NDCG@1 & 75.82 & 73.55&75.40&77.28&80.05 &\bf 83.81$^*$\cr & NDCG@5 & 80.08&79.19&80.30&80.69& 82.63 &\textbf{84.05}$^*$\cr & NDCG@10 & 83.36&82.55&83.47&83.42& 85.14& \textbf{86.18}$^*$\cr \cmidrule(lr){2-8} \multirow{3}{*}{EN-FR}& NDCG@1 & 76.92&74.63&71.40&78.33&80.05 &\bf 82.93$^*$\cr & NDCG@5 & 78.99&78.27&78.53&80.90&81.21 &\textbf{83.43}$^*$\cr & NDCG@10 & 82.02&81.01&81.89&83.40&83.20 &\bf{85.22}$^*$\cr \cmidrule(lr){2-8} \multirow{3}{*}{EN-ZH}& NDCG@1 & 68.98&70.33&76.60&75.10&72.25 &\bf 78.16$^*$\cr & NDCG@5 & 78.30&78.13&81.35&79.92&78.90 &\textbf{81.86}$^*$\cr & NDCG@10 & 82.32&81.91&84.23&82.71&82.90 &\bf{84.96}$^*$\cr \midrule \multirow{3}{*}{ES-EN}& NDCG@1 & 74.88&70.73&74.05&74.55&76.38 &\bf 80.13$^*$\cr & NDCG@5 & 75.04&72.34&74.58&75.05&75.10 &\textbf{78.34}$^*$\cr & NDCG@10 & 76.09&74.60&75.99&76.44&76.20 &\bf{78.61}$^*$\cr \cmidrule(lr){2-8} \multirow{3}{*}{ES-FR}& NDCG@1 & 67.40&74.97&76.05&77.38&73.97 &\bf 80.21$^*$\cr & NDCG@5 & 72.86&74.65&76.75&76.73&75.18 &\textbf{78.97}$^*$\cr & NDCG@10 & 75.51&76.59&78.20&78.16&77.10 &\bf{79.88}$^*$\cr \cmidrule(lr){2-8} \multirow{3}{*}{ES-ZH}& NDCG@1 & 64.25& 65.00& 69.35& 65.62&65.75 & \bf 70.70$^*$\cr & NDCG@5 & 69.82&68.69&73.16&73.58&70.71 &\textbf{74.75}$^*$\cr & NDCG@10 & 74.08&72.70&75.99&75.85&74.60 
&\bf{77.06}$^*$\cr \midrule \multirow{3}{*}{FR-EN}& NDCG@1 & 71.15&71.28&70.52&76.90&76.23 &\bf 81.03$^*$\cr & NDCG@5 & 72.99&72.82&73.99&76.58&75.37 &\textbf{77.73}$^*$\cr & NDCG@10 & 75.46&75.14&75.58&78.03&76.78 &\bf{78.72}$^*$\cr \cmidrule(lr){2-8} \multirow{3}{*}{FR-ES}& NDCG@1 & 77.01&74.60&74.43&80.85&78.98 &\bf 83.52$^*$\cr & NDCG@5 & 78.18&76.67&77.22&78.89&79.70 &\textbf{80.57}$^*$\cr & NDCG@10 & 79.91&78.41&79.16&80.56&80.81 &\bf{81.69}$^*$\cr \cmidrule(lr){2-8} \multirow{3}{*}{FR-ZH}& NDCG@1 & 63.33&62.37&69.75&65.33&65.37 &\bf 70.78$^*$\cr & NDCG@5 & 71.73&70.65&73.86&67.82&72.34 &\textbf{74.42}$^*$\cr & NDCG@10 & 75.92&74.49&76.89&74.79&76.16 &\bf{77.47}$^*$\cr \midrule \multirow{3}{*}{ZH-EN} & NDCG@1 & 56.63&62.83&60.32&61.53&60.45 &\bf 68.52$^*$\cr & NDCG@5 & 61.69&64.71&64.61&64.53&63.89 &\textbf{68.43}$^*$\cr & NDCG@10 & 64.79&66.99&67.03&66.57&66.43 &\bf{69.72}$^*$\cr \cmidrule(lr){2-8} \multirow{3}{*}{ZH-ES}& NDCG@1 & 54.03&59.95&61.55&60.45&63.33 &\bf 67.88$^*$\cr & NDCG@5 & 61.64&64.53&66.47&65.61&66.16 &\textbf{68.95}$^*$\cr & NDCG@10 & 66.20&67.99&69.30&68.55&69.19 &\bf{71.09}$^*$\cr \cmidrule(lr){2-8} \multirow{3}{*}{ZH-FR}& NDCG@1 & 59.05&53.23&59.97&58.85&59.47 &\bf 65.40$^*$\cr & NDCG@5 & 63.40&61.68&64.81&63.91&64.84 &\textbf{68.07}$^*$\cr & NDCG@10 & 66.97&65.71&68.34&67.27&68.26 &\bf{70.51}$^*$\cr \bottomrule \end{tabular} \end{threeparttable} \caption{NDCG values of baselines and our model. Numbers in the table are in percentages. * marks statistically significant improvements (t-test with p-value $<$ 0.05) compared with the best baseline.} \label{result} \end{table*} \subsection{Knowledge Graph} We use Wikidata~\cite{vrandevcic2014wikidata}, a multilingual KG with entities and relations in a multitude of languages. Up until now, Wikidata contains more than 94 million entities and more than 2000 kinds of relations. 
The related entities of queries are annotated by mGENRE~\cite{decao2020multilingual}, a multilingual entity linking model that achieves high entity-linking accuracy across 105 languages. Table~\ref{know} shows the average number of neighboring entities in each dataset. \subsection{Baselines} To demonstrate the effectiveness of our model, we compare its performance with the following baselines. \begin{itemize} \item Vanilla BERT~\cite{cedr,clirmatrix}: a fine-tuned multilingual BERT model for CLIR. \item CEDR~\cite{cedr}: the contextualized embeddings for document ranking (CEDR) model. This model can be applied to various popular neural ranking models, including KNRM~\cite{knrm}, DRMM~\cite{drmm} and PACRR~\cite{pacrr}, to form CEDR-KNRM/DRMM/PACRR. \item HIKE$^{-}$: a variant of HIKE that concatenates the KG information with the query directly. The difference between HIKE$^{-}$ and HIKE is that HIKE$^{-}$ does not use the hierarchical information fusion mechanism. \end{itemize} \begin{table*}[t] \centering \setlength{\abovecaptionskip}{2pt} \setlength{\belowcaptionskip}{-5.0pt} \fontsize{8}{8}\selectfont \setlength\tabcolsep{3.5pt} \begin{threeparttable} \begin{tabular}{lcccccccccccc} \toprule \multirow{2}{*}{\bf Model } & \multicolumn{3}{c}{\bf EN}& \multicolumn{3}{c}{\bf ES}& \multicolumn{3}{c}{\bf FR}& \multicolumn{3}{c}{\bf ZH} \cr \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} \cmidrule(lr){11-13} & ES & FR & ZH & EN & FR & ZH&EN&ES&ZH&EN&ES&FR \cr \midrule HIKE &\textbf{86.18}&\textbf{85.22}&\textbf{85.30}&\textbf{78.61}&\textbf{79.88}&\textbf{77.06}&\textbf{78.72}&\textbf{81.69}&\textbf{77.47}&\textbf{69.72}&\textbf{71.09}&\textbf{70.51}\cr \midrule HIKE w/o descriptions & 85.39&84.29&84.05&77.27&79.09&76.35&77.79&80.95&76.41&68.69&70.31&69.51 \cr HIKE w/o labels &85.47&84.86&84.81&78.34&79.57&76.38&78.58&81.36&76.71&69.29&70.59&70.34 \cr HIKE w/o neighboring entities &85.33&84.47&84.58&78.03&78.17&76.65&78.15&80.90&76.55&68.65&70.23&69.09\cr
HIKE w/o target language information &84.68&83.98&83.84&77.70&78.39&76.22&77.79&81.18&76.25&68.59&69.94&69.09 \cr \bottomrule \end{tabular} \end{threeparttable} \caption{NDCG@10 of models in the ablation study.} \label{abtest} \end{table*} \begin{figure*}[h] \centering \setlength{\abovecaptionskip}{1pt} \setlength{\belowcaptionskip}{-10.0pt} \includegraphics[width=\linewidth]{figs/para2.png} \caption{NDCG@10 as the number of neighboring entities increases.} \label{para} \end{figure*} \subsection{Evaluation Metrics} Normalized Discounted Cumulative Gain~(NDCG) is adopted for evaluation. We use NDCG@1, NDCG@5 and NDCG@10~(evaluating only the top 1, 5 and 10 returned documents) as the metrics for all language pairs. \subsection{Implementation Details} In the training stage, the number of heads for the multi-head attention mechanism in knowledge-level fusion is set to 6. To reduce GPU memory usage and training time, we precompute and store the embeddings of entity information before training. In total, we extracted 376,785 entities from the KG. We fine-tune only the BERT model to obtain textual representations. The learning rates are divided into two parts: $lr_1$ for BERT and $lr_2$ for the other modules; we set $lr_1$ to 1e-5 and $lr_2$ to 1e-3. We set the number of neighboring entities from the KG to 3. For entities without enough neighboring entities, we duplicate the existing neighboring entities instead. We randomly sample 1600 query-document pairs as our training data per epoch. The maximum number of training epochs is set to 15. \section{Evaluation Results} We conduct three experiments to demonstrate the effectiveness of the HIKE model. \subsection{Ranking Accuracy} Table~\ref{result} summarizes the evaluation results of different cross-lingual retrieval models. From Table~\ref{result}, we have the following findings.
(i) The results indicate that HIKE significantly and consistently outperforms all the baseline models on the 12 language pairs w.r.t.\ all metrics, which demonstrates the effectiveness of the proposed model. (ii) Compared with Vanilla BERT, the improvement of HIKE$^{-}$ demonstrates the usefulness and importance of the KG: the external KG makes up for the deficiency of queries and provides accurate information while ranking the documents. Moreover, HIKE performs better than HIKE$^{-}$, which shows the advantages of our hierarchical fusion mechanism. (iii) Specifically, HIKE achieves substantial improvements in both NDCG@1 and NDCG@5 on most datasets compared with other models, which indicates that the knowledge learned from the entities and neighboring entities is highly related to the task. This result shows that HIKE is capable of ranking the most relevant documents to the top. All these findings prove that KG information and the hierarchical information fusion can facilitate the CLIR task and narrow the gap between different languages. \subsection{Ablation Study} In this section, we conduct an ablation study to verify the effectiveness of the different information used in HIKE. Specifically, we conduct the following experiments: \begin{itemize} \item Remove the labels or descriptions of entities and neighboring entities to verify their effects. \item Remove the information of neighboring entities to study their influence. \item Remove the information of the target language to assess its importance in document ranking. \end{itemize} The results are shown in Table~\ref{abtest}. From the results, we observe that (i) HIKE obtains better ranking performance than all the ablated models, indicating that every part of our model contributes to the ranking performance. (ii) The model without entity labels outperforms the one without entity descriptions.
We conjecture the reason is that the information from entity descriptions is more abundant than that from the labels, and thus provides more beneficial information for the CLIR task. (iii) The model without target language information performs worst in our ablation test. This demonstrates that target language information plays a significant role in the CLIR task, as it establishes an explicit connection between the query in the source language and the documents in the target language. \subsection{The Effect of Neighboring Entity Number} In this subsection, we explore the influence of the number of neighboring entities. We vary the number of neighboring entities from 1 to 7~(with a step size of 2) and conduct the experiments over all datasets. Figure~\ref{para} shows the results, which are divided into four groups according to the source language. Each group contains three different target languages. From the figure, there exists an optimal number of neighbors for each language pair: the model performance first goes up as the number of neighboring entities increases, and beyond the optimal value the performance declines. We conjecture that models with too few neighbors cannot take full advantage of the local neighborhood information in the KG, resulting in weak NDCG@10 values, while too many neighboring entities may bring in unrelated information, leading to unsatisfactory results as well. \section{Conclusion} In this paper, we presented HIKE, a hierarchical knowledge-enhanced model for the CLIR task. HIKE introduces an external multilingual KG into the CLIR task and is equipped with a hierarchical information fusion mechanism to take full advantage of the KG information. Specifically, the knowledge-level fusion integrates the KG information in each language, and the language-level fusion combines the information from both source and target languages.
The multilingual KG is capable of providing valuable information for the CLIR task, which is beneficial to bridge the gap between queries and documents in different languages. Finally, extensive experiments on benchmark datasets clearly validated the superiority of HIKE against various state-of-the-art baselines. \section{Acknowledgments} This work is supported by Alibaba Group through Alibaba Innovative Research Program. The research work is supported by the National Key Research and Development Program of China under Grant No. 2017YFB1002104, the National Natural Science Foundation of China under Grant No. U1836206, 62176014, U1811461, and the China Postdoctoral Science Foundation under Grant No. 2021M703273. Xiang Ao is also supported by the Project of Youth Innovation Promotion Association CAS and Beijing Nova Program Z201100006820062. \bibliographystyle{aaai}
Kevin Jensen

In this podcast Kevin reflects on taking the New Zealand AIDS Memorial Quilt into the community, and designing the quilt website. The recording was made just after the official gifting ceremony of the New Zealand AIDS Memorial Quilt to Te Papa - the national museum of New Zealand. Special thanks to Te Papa for allowing us to record on Te Marae.

Tags: 1990s, 2010s, aids support network, ansett new zealand, aotearoa new zealand, education, hiv/aids, kevin jensen, museum of new zealand te papa tongarewa, nelson, new zealand aids memorial quilt, nicki eddy, peter cuthbert, school, takaka, wellington, west coast

Tags (computer generated): access, aids memorial quilt, ancestors, archives, attack, auckland, board, capital, class, events, family, friends, god, health, hospital, individual, japan, london, love, meetings, memorial, michael bancroft, northland, other, people, quilt, research, romania, running, stuff, support, time, twins, website, work

Record date: 3rd May 2012
Interviewer: Gareth Watkins
Copyright: PrideNZ.com
Location: Museum of New Zealand Te Papa Tongarewa, Wellington
URL: https://www.pridenz.com/aids_memorial_quilt_kevin_jensen.html
A generalization of groups with many almost normal subgroups

Francesco G. Russo
Department of Mathematics, University of Naples Federico II, via Cinthia I-80126, Naples, Italy

Research article. Algebra Discrete Math., 2010, Volume 9, Issue 1, Pages 79–85 (Mi adm21)

Abstract: A subgroup $H$ of a group $G$ is called almost normal in $G$ if it has finitely many conjugates in $G$. A classic result of B. H. Neumann informs us that $|G:\mathbf{Z}(G)|$ is finite if and only if each $H$ is almost normal in $G$. Starting from this result, we investigate the structure of a group in which each non-finitely generated subgroup satisfies a property weaker than being almost normal.

Keywords: Dietzmann classes; anti-$\mathfrak{X}C$-groups; groups with $\mathfrak{X}$-classes of conjugate subgroups; Chernikov groups.

MSC: 20C07, 20D10, 20F24
Revised: 25.02.2010

Citation: Francesco G. Russo, "A generalization of groups with many almost normal subgroups", Algebra Discrete Math., 9:1 (2010), 79–85

URL: http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=adm&paperid=21&option_lang=eng
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Azure.Core.TestFramework;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;
using Microsoft.Azure.WebJobs.Extensions.Storage.Common.Tests;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Azure.WebJobs.Host.Config;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using NUnit.Framework;

namespace Microsoft.Azure.WebJobs.Extensions.Storage.ScenarioTests
{
    public class EventGridBlobTriggerEndToEndTests : LiveTestBase<WebJobsTestEnvironment>
    {
        private const string TestArtifactPrefix = "e2etests";
        private const string EventGridContainerName = TestArtifactPrefix + "eventgrid-%rnd%";
        private const string TestBlobName = "test";
        private readonly string _resolvedContainerName;
        private readonly BlobContainerClient _testContainer;
        private readonly BlobServiceClient _blobServiceClient;
        private readonly RandomNameResolver _nameResolver;

        private const string RegistrationRequest = @"[{
            ""id"": ""09473e51-90aa-4a7b-88f1-039ea0d7ee64"",
            ""topic"": ""/subscriptions/[subId]/resourceGroups/EventGrid/providers/Microsoft.Storage/StorageAccounts/alrodegtest"",
            ""subject"": """",
            ""data"": {
                ""validationCode"": ""F83B17BA-898A-4309-8F89-8BB2B3A06D02"",
                ""validationUrl"": ""https://[region].eventgrid.azure.net:553/eventsubscriptions/eg1/validate?id=F83B17BA-898A-4309-8F89-8BB2B3A06D02&t=2020-12-08T02:28:30.6463986Z&apiVersion=2020-04-01-preview&token=AEvNcDi9Gonj83RQEK4owr6zXA31QPzppkc7BwlvBeI%3d""
            },
            ""eventType"": ""Microsoft.EventGrid.SubscriptionValidationEvent"",
            ""eventTime"": ""2020-12-08T02:28:30.6463986Z"",
            ""metadataVersion"": ""1"",
            ""dataVersion"": ""2""
        }]";

        private const string NotificationRequest = @"[{
            ""topic"":""/subscriptions/[subId]/resourceGroups/EventGrid/providers/Microsoft.Storage/storageAccounts/alrodegtest"",
            ""subject"":""/blobServices/default/containers/sample-workitems/blobs/blob.txt"",
            ""eventType"":""Microsoft.Storage.BlobCreated"",
            ""id"":""e5c50ef5-f01e-0017-048b-d20b04066601"",
            ""data"":{
                ""api"":""PutBlob"",
                ""clientRequestId"":""8dd38cbd-67e6-473e-a64c-e4d715ed0a52"",
                ""requestId"":""e5c50ef5-f01e-0017-048b-d20b04000000"",
                ""eTag"":""0x8D8A0A2FA6E70FF"",
                ""contentType"":""application/octet-stream"",
                ""contentLength"":1,
                ""blobType"":""BlockBlob"",
                ""url"":""https://test.blob.core.windows.net/[blobPathPlaceHolder]"",
                ""sequencer"":""000000000000000000000000000089AE00000000018dd658"",
                ""storageDiagnostics"":{
                    ""batchId"":""aea96df5-b006-0006-008b-d291b0000000""
                }
            },
            ""dataVersion"":"""",
            ""metadataVersion"":""1"",
            ""eventTime"":""2020-12-15T02:41:51.9623179Z""
        }]";

        public EventGridBlobTriggerEndToEndTests()
        {
            _nameResolver = new RandomNameResolver();

            // pull from a default host
            var host = new HostBuilder()
                .ConfigureDefaultTestHost(b =>
                {
                    b.AddAzureStorageBlobs().AddAzureStorageQueues();
                })
                .Build();

            _blobServiceClient = new BlobServiceClient(TestEnvironment.PrimaryStorageAccountConnectionString);
            _resolvedContainerName = _nameResolver.ResolveInString(EventGridContainerName);
            _testContainer = _blobServiceClient.GetBlobContainerClient(_resolvedContainerName);
            Assert.False(_testContainer.ExistsAsync().Result);
            _testContainer.CreateAsync().Wait();
        }

        public IHostBuilder NewBuilder<TProgram>(TProgram program, Action<IWebJobsBuilder> configure = null)
        {
            var activator = new FakeActivator();
            activator.Add(program);
            return new HostBuilder()
                .ConfigureDefaultTestHost<TProgram>(b =>
                {
                    IWebJobsBuilder builder = b.AddAzureStorageBlobs().AddAzureStorageQueues();
                    var ss = builder.Services.BuildServiceProvider();
                })
                .ConfigureServices(services =>
                {
                    services.AddSingleton<IJobActivator>(activator);
                    services.AddSingleton<INameResolver>(_nameResolver);
                });
        }

        [Test]
        public async Task EventGridRequest_Subscription_Succeeded()
        {
            var prog = new EventGrid_Program();
            var host = NewBuilder(prog).Build();
            using (host)
            {
                host.Start();
                HttpResponseMessage response = await SendEventGridRequest(host, RegistrationRequest, "SubscriptionValidation");
                Assert.True(response.StatusCode == HttpStatusCode.OK);
            }
        }

        [Test]
        public async Task EventGridRequest_Notification_Succeeded()
        {
            var blob = _testContainer.GetBlockBlobClient(TestBlobName);
            await blob.UploadTextAsync("0");
            var prog = new EventGrid_Program();
            var host = NewBuilder(prog).Build();
            using (prog._completedEvent = new ManualResetEvent(initialState: false))
            using (host)
            {
                host.Start();
                HttpResponseMessage response = await SendEventGridRequest(host, NotificationRequest.Replace("[blobPathPlaceHolder]", _resolvedContainerName + "/" + TestBlobName), "Notification");
                Assert.True(response.StatusCode == HttpStatusCode.Accepted);
                Assert.True(prog._completedEvent.WaitOne(TimeSpan.FromSeconds(60)));

                // wait for all messages to be processed
                await TestHelpers.Await(() =>
                {
                    var log = host.GetTestLoggerProvider().GetAllLogMessages()
                        .Where(x => x != default && x.FormattedMessage != default)
                        .FirstOrDefault(x => x.FormattedMessage.Contains($"(Reason='New blob detected({BlobTriggerSource.EventGrid})"));
                    return log != null;
                }, 5000, 1000);
            }
        }

        [Test]
        public async Task PageBlob_NotSupported()
        {
            var prog = new EventGrid_PageBlob();
            var host = NewBuilder(prog).Build();
            using (host)
            {
                host.Start();
                await Task.Delay(5000); // Wait until all logs are populated
                var log = host.GetTestLoggerProvider().GetAllLogMessages()
                    .FirstOrDefault(x => x.Level == Microsoft.Extensions.Logging.LogLevel.Error &&
                        x.FormattedMessage == $"PageBlobClient is not supported with {nameof(BlobTriggerSource.EventGrid)}");
                Assert.IsNotNull(log);
            }
        }

        private async Task<HttpResponseMessage> SendEventGridRequest(IHost host, string content, string eventType)
        {
            var configProvidersEnumerator = host.Services.GetServices(typeof(IExtensionConfigProvider)).GetEnumerator();
            while (configProvidersEnumerator.MoveNext())
            {
                if (configProvidersEnumerator.Current is IAsyncConverter<HttpRequestMessage, HttpResponseMessage> convertor)
                {
                    HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, "https://test?functionName=EventGridBlobTrigger");
                    request.Content = new StringContent(content);
                    request.Headers.Add("Aeg-Event-Type", eventType);
                    return await convertor.ConvertAsync(request, CancellationToken.None);
                }
            }
            throw new Exception("IAsyncConverter was not found");
        }

        public class EventGrid_Program
        {
            public ManualResetEvent _completedEvent;

            [FunctionName("EventGridBlobTrigger")]
            public void EventGridBlobTrigger(
                [BlobTrigger(EventGridContainerName + "/{name}", Source = BlobTriggerSource.EventGrid)] string input)
            {
                _completedEvent.Set();
            }
        }

        public class EventGrid_PageBlob
        {
            [FunctionName("EventGridBlobTrigger")]
            public void EventGridBlobTrigger(
                [BlobTrigger(EventGridContainerName + "/{name}", Source = BlobTriggerSource.EventGrid)] PageBlobClient input)
            {
            }
        }
    }
}
Plastic model kit of the Lockheed AC-130 gunship, a heavily armed, long-endurance ground-attack variant of the C-130 Hercules transport fixed-wing aircraft. Developed during the Vietnam War as 'Project Gunship II', it carries a wide array of anti-ground oriented weapons that are integrated with sophisticated sensors, navigation, and fire-control systems. Kit has recessed panel lines, cargo interior, single piece (nacelles, propellers with spinners), engraved canopy glazing and landing lights, optional external fuel tanks, optional undercarriage, cargo hatch with detail, optional beaver tail (early/late), clear display stand. Decals and markings for (4) aircraft: (x2) 'Surprise Package' (late and early); 'Azrael' and Prototype.
STUDY: Kids Who Play Multiple Sports Have Built-In Advantage

Highlands players, graduates, coaches, administrators see good in playing multiple sports

PHOTO: Allen Ramsey, DWCPhoto.com. Highlands senior Bradley Greene grabs the ball in the state tournament a few weeks ago. Greene plays football, basketball and baseball for the Bluebirds.

In an age when athletes are encouraged to focus on one sport, Highlands administrators, coaches, players and former players pointed out reasons for playing more than one. Highlands has seen student-athletes do both this past season and in previous seasons. But Highlands Director of Athletics Matt Haskamp studied Health and Physical Education as a college student and pointed out four main reasons to play more than one sport. The first two are overuse injuries and burnout. "There is research out that shows athletes that focus on single sports are at greater risk of injury due to overuse," Haskamp said. "I also have seen students become burned out with the sport they love and excel at when they focus on it year-round."

A study done by the University of Wisconsin just this month is one of the first of its kind, and it could be a warning for parents in the future. It was published in the American Journal of Sports Medicine this month. The study focused on 302 high school student-athletes. They were put into categories based on their specialization, or the number of sports they participated in. Of the 302, 105 (34.8%) ranked as low specialization, 87 (28.8%) were moderately specialized, and 110 (36.4%) were defined as highly specialized. The research team found that athletes from smaller schools were much more likely to fall into the low specialization category. Only 25% of athletes from smaller schools were classified as highly specialized, while 48% of the students at large schools fell into that category.
It also revealed that athletes considered highly specialized were more than twice as likely as the other categories to report a history of overuse knee and hip injuries. Participating in a single sport for more than eight months during the year was an important factor in the high injury risk of highly specialized athletes. The study comes after some high-profile coaches and athletes spoke to the advantages of having kids play multiple sports. Earlier this year a chart circulated around the internet that showed the percentage of football players at Ohio State who played under coach Urban Meyer. The information showed Meyer's preference to recruit multi-sport players. Via OhioVarsity.com. Haskamp's third point is that kids who play multiple sports gain experience in sport and life. When he coached, Haskamp encouraged his players to play other sports. "That meant they were staying competitive, staying in shape and most likely staying out of trouble," Haskamp said. "I loved the social aspect as well. Students were able to lead in different settings. These multi-sport athletes were able to see competition through different lenses. A student might excel in football, but struggle in baseball. A little adversity can be good as it can show that they need to work extra hard to reach the level of play they want in baseball. I have seen this hard work pay off not only in the sport they may not be great in, but in the sport they excel in as well." Haskamp's final point is limited opportunity. He does not like to see students graduate with regrets of not trying something. Haskamp said he knows a number of people who wish they had played more sports during their high school years. "Once high school is over, the opportunity to compete in competitive sport is drastically diminished," Haskamp said. "Therefore, I hope that our students try as many different things as they can. 
Hopefully, this diverse approach will lend itself to something they can focus on later in life when the opportunities are not as vast." Highlands has had a few student-athletes go as far as playing three sports. Seniors Kyle Finfrock, Kyle Rust and Bradley Greene play football, basketball and baseball with junior Karsen Hunter running track, cross country and playing basketball. Finfrock, Rust and Greene helped the football and baseball teams to region championships last year and Hunter helped the Ladybird cross country team win its fourth straight Class AA state championship and the Ladybird track and field squad to a Class AA state runner-up finish. "It helps keep your options open for college," Finfrock said. "It also keeps you in shape in the offseason for (the other) sports. That's about it." Jared Lorenzen, a 1999 Highlands graduate, played basketball and baseball in addition to dominating on the football field before playing football at the University of Kentucky. Lorenzen helped the Highlands basketball team to a state runner-up finish in 1997 in addition to quarterbacking the Bluebirds to the 1998 Class 3A undefeated state championship. The combined enrollment at Highlands High has been approximately 800 to 1,000 students, allowing for numerous multi-sport athletes. That's where Highlands football Head Coach Brian Weinrich said communicating with other head coaches is vital. For instance, Weinrich and staff would not want to wear players down lifting weights if they are to also participate in a basketball camp later in the day. "If you look at the football roster, the basketball roster, the baseball roster and the track roster, you'll see a lot of the same guys," Weinrich said. "You'll have some guys who are not basketball or baseball players. Not every guy can do all of them. But at Highlands, we are in such a unique situation in 2016 with the numbers we have to cross over. As coaches, we are the biggest supporters of one another. 
We'll do everything we can to help each other. I think Highlands is the exception to all the one-sport specialization out there. We have more two-sport and three-sport athletes than most schools our size." Sophomore Zoie Barth returned to softball this fall and cracked the starting line-up after focusing on basketball in recent seasons. Barth started at third base for the 9th Region champion Highlands softball team and at a guard spot on the basketball team as a freshman. "I'm really glad I decided to play. I really love the girls, the sport and the great time so far," Barth said. "I've gotten close to the girls and our memories are awesome." Barth's head basketball coach Jaime Walz-Richey also played multiple sports in high school in addition to her Hall of Fame basketball career. Richey played varsity volleyball in the eighth and ninth grade, softball from seventh grade through her sophomore year in addition to varsity golf and tennis as a senior. "I tell the kids whatever makes them happy and I believe that each sport can help one another," Richey said. "I also tell them that if they are going to play multiple sports then they will need to put the time into each one. For instance (recent Highlands graduate) Haley Coffey, in the fall, she had fall softball on the weekend but during the week, she made sure to attend basketball workouts so she could excel at both. If you have the work ethic and motivation, you can excel in multi sports. I think if you play multi sports, it gives you a little break from your sport and also you are around other athletes and coaches." Elizabeth Poindexter, a 2003 Highlands graduate, ran track, played soccer and cheered during her high school days. She ended up going to Transylvania University in Lexington on a soccer scholarship. Her younger sisters Pam (a 2006 Highlands graduate) and Victoria (2008) also played multiple sports in high school. 
"The cross-training helps with the other sports because you work different muscles," Elizabeth Poindexter said. "The more toned you are, the fewer injuries you have and it gets you pretty prepared. I wasn't going to give up any of them. Sports can be cliquish at times, but I had really great teammates." Mallory Adler, a 2009 Highlands graduate, played soccer and basketball during her days wearing the Blue and White. Adler was not eligible to play soccer as a sophomore after the family moved in 2006, but she became eligible to play basketball. Adler went on to play both at Gardner-Webb University in North Carolina for a year before focusing on academics. "Soccer and basketball are two completely different sports. They are two completely different types of conditioning," Adler said. "You work different muscles and do different types of agility so I think it helps you be a more well-rounded athlete. It's tough to choose one when you have the love for both. The girls are different. The personalities are different. But I'm glad I got both. It's nice to have two different types of friends. Luckily, we were good in both." The upcoming season promises to see more multi-sport athletes. The official fall practice season begins July 15. Posted by G. Michael Graham at 12:00 AM Labels: Highlands athletics
AnyPath

The AnyPath data type is a text string containing either a full path or a relative path. When specifying a relative path, you can include a long file name with the short file name by separating the short and long names with a vertical bar (|). Note that you cannot specify multiple levels of a directory or fully qualified paths in this way. The path may contain properties enclosed within square brackets ([ ]).

Examples of valid AnyPath data:

• \\server\share\temp
• c:\temp
• \temp
• projec~1|Project Status

Examples of invalid AnyPath data:

• c:\temp\projec~1|c:\temp one\Project Status
• \temp\projec~1|\temp one\Project Status

Source: https://msdn.microsoft.com/en-us/library/aa367562(v=vs.85).aspx
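As a rough illustration of the vertical-bar rule, here is a minimal Python sketch of a validity check. `is_valid_anypath` is our own hypothetical helper, not part of any Windows Installer API, and it only covers the short|long rule (it does not validate property syntax or path characters):

```python
def is_valid_anypath(path: str) -> bool:
    """Rough check of AnyPath's vertical-bar rule: a bar may pair a short
    name with a long name for a single path component only, so no second
    bar, directory separator, or drive letter may follow the bar."""
    if "|" not in path:
        return True  # a plain full or relative path
    long_side = path.split("|", 1)[1]
    return not any(c in long_side for c in "|\\/:")
```

On the examples above, the paths without a bar and `projec~1|Project Status` pass, while the two invalid examples fail because a separator or drive letter appears after the bar.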
\section*{Introduction} Entity linking (Entity Normalization) is the task of mapping entity mentions in text documents to standard entities in a given knowledge base. For example, the word ``Paris'' is \emph{ambiguous}: It can refer either to the capital of France or to a hero of Greek mythology. Now given the text ``Paris is the son of King Priam'', the goal is to determine that, in this sentence, the word refers to the Greek hero, and to link the word to the corresponding entity in a knowledge base such as YAGO \cite{suchanek2007yago} or DBpedia \cite{auer2007dbpedia}. In the biomedical domain, entity linking maps mentions of diseases, drugs, and measures to normalized entities in standard vocabularies. It is an important ingredient for automation in medical practice, research, and public health. Different names of the same entities in Hospital Information Systems seriously hinder the integration and use of medical data. If a medication appears with different names, researchers cannot study its impact, and patients may erroneously be prescribed the same medication twice. The particular challenge of biomedical entity linking is not the ambiguity: a word usually refers to only a single entity. Rather, the challenge is that the surface forms vary markedly, due to abbreviations, morphological variations, synonymous words, and different word orderings. For example, \textit{``Diabetes Mellitus, Type 2''} is also written as \textit{``DM2''} and \textit{``lung cancer''} is also known as \textit{``lung neoplasm malignant''}. In fact, the surface forms vary so much that all the possible expressions of an entity cannot be known upfront. This means that standard disambiguation systems cannot be applied in our scenario, because they assume that all forms of an entity are known. One may think that variation in surface forms is not such a big problem, as long as all variations of an entity are sufficiently close to its canonical form. Yet, this is not the case. 
For example, the phrase \textit{``decreases in hemoglobin''} could refer to at least four different entities in MedDRA, which all look alike: \textit{``changes in hemoglobin''}, \textit{``increase in hematocrit''}, \textit{``haemoglobin decreased''}, and \textit{``decreases in platelets''}. In addition, biomedical entity linking cannot rely on external resources such as alias tables, entity descriptions, or entity co-occurrence, which are often used in classical entity linking settings. For this reason, entity linking approaches have been developed particularly for biomedical entity linking. Many methods use deep learning: the work of \citet{li2017cnn} casts biomedical entity linking as a ranking problem, leveraging convolutional neural networks (CNNs). More recently, the introduction of BERT has advanced the performance of many NLP tasks, including in the biomedical domain \cite{huang2019clinicalbert,lee2020biobert,ji2020bert}. BERT creates rich pre-trained representations on unlabeled data and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks, outperforming many task-specific architectures. However, considering the number of parameters of pre-trained BERT models, the improvements brought by fine-tuning them come with a heavy computational cost and memory footprint. This is a problem for energy efficiency, for smaller organizations, or in poorer countries. In this paper, we introduce a very lightweight model that achieves a performance statistically indistinguishable from the state-of-the-art BERT-based models. The central idea is to use an alignment layer with an attention mechanism, which can capture the similarity and difference of corresponding parts between candidate and mention names. Our model is 23x smaller and 6.4x faster than BERT-based models on average; it is also less than half the size, and more than twice as fast, compared to the lightweight BERT models. Yet, as we show, our model achieves comparable performance on all standard benchmarks. 
Further, we can show that adding more complexity to our model is not necessary: the entity-mention priors, the context around the mention, or the coherence of extracted entities \cite[as used, e.g., in][]{hoffart2011robust} do not improve the results any further. \footnote{All data and code are available at \url{https://github.com/tigerchen52/Biomedical-Entity-Linking}.} \section*{Related Work} In the biomedical domain, much early research focuses on capturing string similarity of mentions and entity names with rule-based systems~\cite{dogan2012inference, kang2013using, d2015sieve}. Rule-based systems are simple and transparent, but researchers need to define rules manually, and these are bound to an application. To avoid manual rules, machine-learning approaches learn suitable similarity measures between mentions and entity names automatically from training sets~\cite{leaman2013dnorm, dougan2014ncbi, ghiasvand2014r, leaman2016taggerone}. However, one drawback of these methods is that they cannot recognize semantically related words. Recently, deep learning methods have been successfully applied to different NLP tasks, based on pre-trained word embeddings, such as word2vec \cite{mikolov2013distributed} and Glove \cite{pennington2014glove}. \citet{li2017cnn} and \citet{wright2019normco} introduce a CNN and RNN, respectively, with pre-trained word embeddings, which casts biomedical entity linking into a ranking problem. However, traditional methods for learning word embeddings allow for only a single context-independent representation of each word. Bidirectional Encoder Representations from Transformers (BERT) address this problem by pre-training deep bidirectional representations from unlabeled text, jointly conditioning on both the left and the right context in all layers. 
\citet{ji2020bert} proposed a biomedical entity normalization architecture by fine-tuning the pre-trained BERT / BioBERT / ClinicalBERT models \cite{devlin2018bert,huang2019clinicalbert,lee2020biobert}. Extensive experiments show that their model outperforms previous methods and advances the state of the art for biomedical entity linking. A shortcoming of BERT is that it needs high-performance machines. \section*{Our Approach} Formally, our inputs are (1) a \emph{knowledge base} (KB), i.e., a list of entities, each with one or more names, and (2) a \emph{corpus}, i.e., a set of text documents in which certain text spans have been tagged as entity mentions. The goal is to link each entity mention to the correct entity in the KB. To solve this problem, we are given a training set, i.e., a part of the corpus where the entity mentions have been linked already to the correct entities in the KB. Our method proceeds in three steps: \begin{description} \item[\textbf{Preprocessing.}] We preprocess all mentions in the corpus and entity names in the KB to bring them to a uniform format. \item[\textbf{Candidate Generation.}] For each mention, we generate a set of candidate entities from the KB. \item[\textbf{Ranking Model.}] For each mention with its candidate entities, we use a ranking model to score each pair of mention and candidate, outputting the top-ranked result. \end{description} \noindent Let us now describe these steps in detail. \subsection*{Preprocessing} We preprocess all mentions in the corpus and all entity names in the KB by the following steps: \textbf{Abbreviation Expansion.} Like previous work~\cite{ji2020bert}, we use the Ab3p Toolkit~\cite{sohn2008abbreviation} to expand medical abbreviations. The Ab3p tool outputs a probability for each possible expansion, and we use the most probable expansion. For example, Ab3p knows that ``DM'' is an abbreviation of ``Diabetes Mellitus'', and so we replace the abbreviation with its expanded term. 
We also expand mentions using the first matching entry from an abbreviation dictionary constructed by previous work \cite{d2015sieve}, which we manually supplement with 20 biomedical abbreviations (such as ``HbA1c'' for glycated hemoglobin). Our dictionary is available in the supplementary material and online. \textbf{Numeral Replacement.} Entity names may contain numerals in different forms (e.g., Arabic, Roman, or spelled out in English). We replace all forms with spelled-out English numerals. For example, ``type \uppercase\expandafter{\romannumeral2} diabetes mellitus'' becomes ``type two diabetes mellitus''. For this purpose, we manually compiled a dictionary of numerals from the corresponding Wikipedia pages. Finally, we remove all punctuation, and convert all words to lowercase. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{picture/model.pdf} \caption{The architecture of our ranking model, with the input mention ``decreases in hemoglobin'' and the input entity candidate ``haemoglobin decreased''.} \label{fig:architecture} \end{figure*} \textbf{KB Augmentation.} We augment the KB by adding all names from the training set to the corresponding entities. For example, if the training set links the mention ``GS'' in the corpus to the entity ``Adenomatous polyposis coli'' in the KB, we add ``GS'' to the names of that entity in the KB. \subsection*{Candidate Generation}\label{sec:cand} Our ranking approach is based on a deep learning architecture that can compute a similarity score for each pair of a mention in the corpus and an entity name in the KB. However, it is too slow to apply this model to all combinations of all mentions and all entities. Therefore, we generate, for each mention $M$ in the corpus, a set $C_M$ of candidate entities from the KB. Then we apply the deep learning method only to the set $C_M$. 
To generate the candidate set $C_M$, we calculate a score for $M$ and each entity in the KB, and return the top-$k$ entities with the highest score as the candidate set $C_M$ (in our experiments, $k=20$). As each entity has several names, we calculate the score of $M$ and all names of the entity $E$, and use the maximum score as the score of $M$ and the entity $E$. To compute the score between a mention $M$ and an entity name $S$, we split each of them into tokens, so that we have $M=\{m_{1}, m_{2},..., m_{|M|}\}$ and $S=\{s_{1}, s_{2},..., s_{|S|}\}$. We represent each token by a vector taken from pre-trained embedding matrix $\mathbf V \in \mathbb{R}^{d\times | V |}$ where $d$ is the dimension of word vectors and $V$ is a fixed-sized vocabulary (details in the section of \nameref{sec:experimental setting}). To take into account the possibility of different token orderings in $M$ and $S$, we design the \emph{aligned cosine similarity} (\textit{ACos}), which maps a given token $m_i \in M$ to the most similar token $s_j \in S$ and returns the cosine similarity to that token: \begin{equation} \textit{ACos}(m_{i}, S) = \max \{ cos(m_{i}, s_{j}) \mid s_{j} \in S \} \end{equation} \noindent The similarity score is then computed as the sum of the aligned cosine similarities. To avoid tending to long text, and to make the metric symmetric, we add the similarity scores in the other direction as well, yielding: \begin{multline} \textit{sim}(M,S) = \frac{1}{\left| M \right| + \left| S \right|} (\sum_{m_{i} \in M} \textit{ACos}(m_{i}, S) \\ + \sum_{s_{j} \in S} \textit{ACos}(s_{j},M)) \end{multline} \noindent We can now construct the candidate set $C_M = \{\langle{}E_{1}, S_{1}\rangle,$ $\langle{}E_{2}, S_{2}\rangle,$ $..., \langle{}E_{k}, S_{k}\rangle\}$ where $E_i$ is the id of the entity, and $S_i$ is the chosen name of the entity. This set contains the top-$k$ ranked entity candidates for each mention $M$. 
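The aligned cosine similarity and the symmetric score can be sketched in numpy, with a mention and an entity name each given as a matrix whose rows are token embeddings (a minimal sketch; the function names are ours, and embeddings are assumed nonzero):

```python
import numpy as np

def acos_sim(m_vecs, s_vecs):
    """ACos: for each token vector (row) in m_vecs, the cosine
    similarity to its most similar token vector in s_vecs."""
    m = m_vecs / np.linalg.norm(m_vecs, axis=1, keepdims=True)
    s = s_vecs / np.linalg.norm(s_vecs, axis=1, keepdims=True)
    return (m @ s.T).max(axis=1)  # shape: (|M|,)

def mention_name_score(m_vecs, s_vecs):
    """Symmetric, length-normalized similarity between a mention M
    and an entity name S: sum of ACos in both directions, divided
    by |M| + |S| to avoid favoring long names."""
    total = acos_sim(m_vecs, s_vecs).sum() + acos_sim(s_vecs, m_vecs).sum()
    return total / (len(m_vecs) + len(s_vecs))
```

Because each token is matched to its most similar counterpart, a name containing the same tokens in a different order scores exactly 1, which is the point of the alignment.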
Specifically, if there are candidates whose score is equal to 1 in this set, we filter out all other candidates whose score is less than 1. \subsection*{Ranking Model} Given a mention $M$ and its candidate set $C_M = \{\langle{}E_{1}, S_{1}\rangle,$ $\langle{}E_{2}, S_{2}\rangle,$ $..., \langle{}E_{k}, S_{k}\rangle\}$, the ranking model computes a score for each pair of the mention and an entity name candidate $S_i$. Figure~\ref{fig:architecture} shows the corresponding neural network architecture. Let us first describe the base model. This model relies exclusively on the text similarity of mentions and entity names. It ignores the context in which a mention appears, or the prior probability of the target entities. To compute the text similarity, we designed the neural network along the lines of the candidate generation: it determines, for each token in the mention, the most similar token in the entity name, and vice versa. In contrast to the candidate generation, we also take character-level information into account here and use an alignment layer to capture the similarity and difference of corresponding parts between mention and entity names. \paragraph{Representation Layer.} As mentioned in the section \nameref{sec:cand}, we represent a mention $M$ and an entity name $S$ by the set of the embeddings of their tokens in the vocabulary $V$. However, not all tokens exist in the vocabulary $V$. To handle out-of-vocabulary words, we adopt a recurrent neural network (RNN) to capture character-level features for each word. This has the additional advantage of learning the morphological variations of words. We use a Bi-directional LSTM (BiLSTM), running a forward and backward LSTM on a character sequence \cite{graves2013speech}. We concatenate the last output states of these two LSTMs as the character-level representation of a word. 
To use both word-level and character-level information, we represent each token of a mention or entity name as the concatenation of its embedding in $V$ and its character-level representation. \paragraph{Alignment Layer.} To counter the problem of different word orderings in the mention and the entity name, we want the network to find, for each token in the mention, the most similar token in the entity name. For this purpose, we adapt the attention mechanisms that have been developed for machine comprehension and answer selection~\cite{chen2016enhanced,wang2016compare}. Assume that we have a mention $M = \{\bar{m}_{1},$ $\bar{m}_{2},$ $..., \bar{m}_{|M|}\}$ and an entity name $S = \{\bar{s}_{1},$ $\bar{s}_{2},$ $..., \bar{s}_{|S|}\}$, which were generated by the Representation Layer. We calculate a $|M|\times|S|$-dimensional weight matrix $W$, whose element $w_{i,j}$ indicates the similarity between the token $i$ of the mention and the token $j$ of the entity name, $w_{ij} = \bar{m}_{i}^{T} \bar{s}_{j}$. Thus, the $i^{th}$ row in $W$ represents the similarity between the $i^{th}$ token in $M$ and each token in $S$. We apply a softmax function on each row of $W$ to normalize the values, yielding a matrix $W'$. We can then compute a vector $\tilde{m}_i$ for the $i^{th}$ token of the mention, which is the sum of the vectors of the tokens of $S$, weighted by their similarity to $\bar{m}_i$: \begin{equation} \tilde{m}_{i} = \sum_{j=1}^{t} w_{ij}' \bar{s}_{j} \end{equation} \noindent This vector ``reconstructs'' $\bar{m}_i$ by adding up suitable vectors from $S$, using mainly those vectors of $S$ that are similar to $\bar{m}_i$. If this reconstruction succeeds (i.e., if $\bar{m}_i$ is similar to $\tilde{m}_i$), then $S$ contained tokens which, together, contain the same information as $\bar{m}_i$. 
To measure this similarity, we could use a simple dot-product. However, this reduces the similarity to a single scalar value, which erases precious element-wise similarities. Therefore, we use the following two comparison functions \cite{tai2015improved,wang2016compare}: \begin{equation} \textit{sub}(\bar{m}_{i}, \tilde{m}_{i}) = (\bar{m}_{i}-\tilde{m}_{i}) \odot (\bar{m}_{i}-\tilde{m}_{i}) \end{equation} \begin{equation} \textit{mul}(\bar{m}_{i}, \tilde{m}_{i}) = \bar{m}_{i} \odot \tilde{m}_{i} \end{equation} \noindent where the operator $\odot$ means element-wise multiplication. Intuitively, the functions $sub$ and $mul$ represent subtraction and multiplication, respectively. The function \emph{sub} has similarities to the Euclidean distance, while \emph{mul} has similarities to the cosine similarity -- while preserving the element-wise information. 
Finally, we obtain a new representation of each token $i$ of the mention by concatenating $\bar{m}_{i}, \tilde{m}_{i}$ and their difference and similarity: \begin{equation} \hat{m}_{i} = [\bar{m}_{i}, \tilde{m}_{i}, \textit{sub}(\bar{m}_{i}, \tilde{m}_{i}), \textit{mul}(\bar{m}_{i}, \tilde{m}_{i}) ] \end{equation} \noindent By applying the same procedure on the columns of $W$, we can compute analogously a vector $\tilde{s}_{j}$ for each token vector $s_j$ of $S$, and obtain the new representation for the $j^{th}$ token of the entity name as \begin{equation} \hat{s}_{j} = [\bar{s}_{j}, \tilde{s}_{j}, \textit{sub}(\bar{s}_{j}, \tilde{s}_{j}), \textit{mul}(\bar{s}_{j}, \tilde{s}_{j}) ] \end{equation} \noindent This representation augments the original representation $\bar{s}_{j}$ of the token by the ``reconstructed'' token $\tilde{s}_{j}$, and by information about how similar $\tilde{s}_{j}$ is to $\bar{s}_{j}$. \paragraph{CNN Layer.} We now have rich representations for the mention and the entity name, and we apply a one-layer CNN on the mention $[\hat{m}_{1}, \hat{m}_{2},..., \hat{m}_{|M|}]$ and the entity name $[\hat{s}_{1}, \hat{s}_{2},..., \hat{s}_{|S|}]$. We adopt the CNN architecture proposed by \cite{kim2014convolutional} to extract n-gram features of each text: \begin{equation} f_{M} = \textit{CNN}([\hat{m}_{1}, \hat{m}_{2},..., \hat{m}_{M}]) \end{equation} \begin{equation} f_{E} = \textit{CNN}([\hat{s}_{1}, \hat{s}_{2},..., \hat{s}_{S}]) \end{equation} \noindent We concatenate these to a single vector $f_{\textit{out}} = [ f_{M}, f_{E} ]$. \paragraph{Output Layer.} We are now ready to compute the final output of our network using a two-layer fully connected neural network: \begin{equation} \Phi ( M, E ) = \textit{sigmoid} (W_{2}~~\textit{ReLU}(W_{1}~f_{\textit{out}} + b_{1} ) + b_{2} ) \end{equation} \noindent where $W_{2}$ and $W_{1}$ are learned weight matrices, and $b_1$ and $b_2$ are bias values. 
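Setting aside the learned representation, CNN, and output layers, the alignment step itself can be sketched with plain numpy for a single mention--name pair (a sketch; the function and variable names are ours):

```python
import numpy as np

def softmax(x, axis):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def align(m_bar, s_bar):
    """m_bar: (|M|, d) mention token vectors, s_bar: (|S|, d) entity-name
    token vectors. Returns the augmented representations hat_m, hat_s."""
    W = m_bar @ s_bar.T                     # w_ij = m_i^T s_j
    m_tilde = softmax(W, axis=1) @ s_bar    # reconstruct each m_i from S
    s_tilde = softmax(W, axis=0).T @ m_bar  # reconstruct each s_j from M
    # concatenate [x, x~, sub(x, x~), mul(x, x~)] per token
    hat_m = np.concatenate(
        [m_bar, m_tilde, (m_bar - m_tilde) ** 2, m_bar * m_tilde], axis=1)
    hat_s = np.concatenate(
        [s_bar, s_tilde, (s_bar - s_tilde) ** 2, s_bar * s_tilde], axis=1)
    return hat_m, hat_s
```

Each output row has dimension $4d$: the original vector, its reconstruction from the other side, and the element-wise \emph{sub} and \emph{mul} comparisons.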
This constitutes our base model, which relies solely on string similarity. We will now see how we can add prior, context, and coherence features. \subsection*{Extra Features}\label{sec:extra} \paragraph{Mention-Entity Prior.} Consider an ambiguous case such as \textit{``You should shower, let water flow over wounds, pat dry with a towel.''} appearing in hospital Discharge Instructions. In this context, the disease name \textit{``wounds''} is much more likely to refer to \textit{``surgical wound''} than \textit{``gunshot wound''}. This prior probability is called the \emph{mention-entity prior}. It can be estimated, e.g., by counting in Wikipedia how often a mention is linked to the page of an entity~\cite{hoffart2011robust}. Unlike DBpedia and YAGO, biomedical knowledge bases generally do not provide links to Wikipedia. Hence, we estimate the mention-entity prior from the training set, as: \begin{equation} \textit{prior}(M,E) = \log \textit{count}(M, E) \end{equation} \noindent where $\textit{count}(M, E)$ is the frequency with which the mention $M$ is linked to the target entity $E$ in the training dataset. To reduce the effect of overly large values, we apply the logarithm. This prior can be added easily to our model by concatenating it in $f_{\textit{out}}$: \begin{equation} f_{\textit{out}} = [ f_{M}, f_{E}, \textit{prior}(M,E) ] \end{equation} \paragraph{Context.} The context around a mention can provide clues about which candidate entity to choose. We compute a context score that measures how relevant the keywords of the context are to the candidate entity name. We first represent the sentence containing the mention by pre-trained word embeddings. We then run a Bi-directional LSTM on the sentence to get a new representation for each word. In the same way, we apply a Bi-directional LSTM on the entity name tokens to get the entity name representation $cxt_{E}$. 
To select keywords relevant to the entity while ignoring noise words, we adopt an attention strategy to assign a weight to each token in the sentence. Then we use a weighted sum to represent the sentence as $cxt_{M}$. The context score is then computed as the cosine similarity between both representations: \begin{equation} \textit{context}(M, E) = \cos (cxt_{M}, cxt_{E}) \end{equation} As before, we concatenate this score to the vector $f_{out}$. \paragraph{Coherence.} Certain entities are more likely to occur together in the same document than others, and we can leverage this tendency to help entity linking. To capture the co-occurrence of entities, we pre-train entity embeddings in such a way that entities that often co-occur have a similar distributed representation. We train these embeddings with Word2Vec~\cite{mikolov2013distributed} on a collection of PubMed abstracts\footnote{ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/}. Since the entities in this corpus are not linked to our KB, we consider every occurrence of an exact entity name as a mention of that entity. Given a mention $M$ and a candidate entity $E$, we compute a coherence score to measure how often the candidate entity co-occurs with the other entities in the document. We first select the mentions around $M$. For each mention, we use the first entity candidate (as given by the candidate selection). This gives us a set of entities $P_{M} = \{ {p}_{1}, {p}_{2},..., {p}_{k}\}$, where each element is a pre-trained entity vector. Finally, the coherence score is computed as: \begin{equation} \textit{coherence}(M, E) = \frac{1}{k} \sum_{i=1}^{k} \cos(p_{i},p_{E}) \end{equation} \noindent where $p_{E}$ is the pre-trained vector of the entity candidate $E$. This score measures how close the candidate entity $E$ is, on average, to the other presumed entities in the document. As before, we concatenate this score to the vector $f_{\textit{out}}$.
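The coherence score boils down to an average cosine similarity between the candidate's embedding and the embeddings of the surrounding presumed entities. A minimal sketch, with toy 2-dimensional vectors in place of the pre-trained entity embeddings:

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def coherence(candidate_vec, neighbour_vecs):
    """Average cosine similarity between a candidate entity embedding and
    the embeddings of the other presumed entities in the document."""
    return sum(cos(candidate_vec, p) for p in neighbour_vecs) / len(neighbour_vecs)

# candidate identical to one neighbour, orthogonal to the other
score = coherence([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```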
More precisely, we pre-trained separate entity embeddings for the three datasets and used the mean value of all entity embeddings to represent missing entities. \subsection*{NIL Problem} The NIL problem occurs when a mention does not correspond to any entity in the KB. We adopt a traditional threshold method, which considers a mention unlinkable if its score is less than a threshold $\tau$. This means that we map a mention to the highest-scoring entity if that score exceeds $\tau$, and to NIL otherwise. The threshold $\tau$ is learned from a training set. For datasets that do not contain unlinkable mentions, we set the threshold $\tau$ to zero. \subsection*{Training} For training, we adopt a triplet ranking loss function to make the score of the positive candidates higher than the score of the negative candidates. The objective function is: \begin{multline} \theta ^{*} = \mathop{\arg\min}_{\theta} \sum_{D \in \mathcal{D}}\sum_{M \in D}\sum_{E^{-} \in C} \\ \max (0, \gamma - \Phi ( M, E^{+} ) + \Phi ( M, E^{-} )) \end{multline} \noindent where $\theta$ stands for the parameters of our model, $\mathcal{D}$ is a training set of documents, and $\gamma$ is the margin parameter. $E^{+}$ and $E^{-}$ represent a positive entity candidate and a negative entity candidate, respectively. Our goal is to find an optimal $\theta$, which makes the score difference between positive and negative entity candidates as large as possible. For this, we need triplets of a mention $M$, a positive example $E^+$, and a negative example $E^-$. The positive example can be obtained from the training set. Negative examples are usually chosen by random sampling from the KB. In our case, we sample the negative example from the candidates that were produced by the candidate generation phase (excluding the correct entity).
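Per triple, the objective reduces to a standard hinge loss that is zero once the positive candidate outscores the negative one by at least the margin. A minimal sketch, with illustrative scores and the margin of $0.1$ used in our experiments:

```python
def triplet_loss(score_pos, score_neg, margin=0.1):
    """Hinge loss on one (mention, positive, negative) triple: zero once
    the positive candidate's score exceeds the negative's by `margin`."""
    return max(0.0, margin - score_pos + score_neg)

# illustrative scores, as produced by the sigmoid output layer
well_separated = triplet_loss(0.9, 0.2)   # no penalty
too_close = triplet_loss(0.5, 0.45)       # penalised: margin not yet reached
```

During training, `score_neg` would come from a negative candidate sampled among the generated candidates, as described above.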
This choice makes the negative examples very similar to the positive example, and forces the process to learn what distinguishes the positive candidate from the others. \section*{Experiments} \begin{table}[b!] \small \begin{tabu} {p{1.3cm} X[c] X[c] X[c] X[c] X[c] X[c]} \toprule &\multicolumn{2}{c}{ShARe/CLEF} &\multicolumn{2}{c}{NCBI} &\multicolumn{2}{c}{ADR} \\ &train &test &train &test &train &test \\ \midrule documents &199 &99 &692 &100 &101 &99 \\ mentions &5816 &5351 &5921 &964 &7038 &6343 \\ NIL &1641 &1750 &0 &0 &47 &18 \\ \midrule concepts &\multicolumn{2}{c}{88140} &\multicolumn{2}{c}{9656} &\multicolumn{2}{c}{23668} \\ synonyms &\multicolumn{2}{c}{42929} &\multicolumn{2}{c}{59280} &\multicolumn{2}{c}{0}\\ \bottomrule \end{tabu} \caption{Dataset Statistics}\label{tab:datasets} \end{table} \ignore{ \begin{table*}[!t] \small \begin{minipage}{.25\linewidth} \caption{Performance of different models. Results in gray are not statistically different from the top result.}\label{tab:performance_comparison} \end{minipage}% \hfill% \begin{minipage}{.72\linewidth}% \begin{threeparttable} \begin{tabular}{cccccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule DNorm \cite{leaman2013dnorm}&-&82.20$\pm$4.05&-\cr UWM \cite{ghiasvand2014r} &89.50$\pm$1.38&-&-\cr Sieve-based Model \cite{d2015sieve}&\cellcolor{lightgray!50}90.75$\pm$1.31&84.65$\pm$3.84&-\cr TaggerOne \cite{leaman2016taggerone}&-&\cellcolor{lightgray!50}88.80$\pm$3.32&-\cr Learning to Rank \cite{xu2017uth_ccb}&-&-&92.05$\pm$1.12\cr CNN-based Ranking \cite{li2017cnn}&\cellcolor{lightgray!50}90.30$\pm$1.33&86.10$\pm$3.63&-\cr BERT-based Ranking \cite{ji2020bert}&\cellcolor{lightgray!50}{\bf91.06$\pm$\bf1.29}&\cellcolor{lightgray!50}89.06$\pm$3.32&\cellcolor{lightgray!50}{\bf93.22$\pm$\bf1.04}\cr Our Base Model &\cellcolor{lightgray!50}90.10$\pm$1.35 &\cellcolor{lightgray!50}89.07$\pm$3.32&\cellcolor{lightgray!50}92.89$\pm$1.06\cr Our Base Model + Extra Features 
&\cellcolor{lightgray!50}90.43$\pm$1.33 &\cellcolor{lightgray!50}{\bf89.59$\pm$3.22}&\cellcolor{lightgray!50}93.00$\pm$1.06\cr \bottomrule \end{tabular} \end{threeparttable} \end{minipage}% \end{table*} } \begin{table*}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule DNorm \cite{leaman2013dnorm}&-&82.20$\pm$3.09&-\cr UWM \cite{ghiasvand2014r} &89.50$\pm$1.02&-&-\cr Sieve-based Model \cite{d2015sieve}&\cellcolor{lightgray!50}90.75$\pm$0.96&84.65$\pm$3.00&-\cr TaggerOne \cite{leaman2016taggerone}&-&\cellcolor{lightgray!50}88.80$\pm$2.59&-\cr Learning to Rank \cite{xu2017uth_ccb}&-&-&92.05$\pm$0.84\cr CNN-based Ranking \cite{li2017cnn}&\cellcolor{lightgray!50}90.30$\pm$1.00&86.10$\pm$2.79&-\cr BERT-based Ranking \cite{ji2020bert}&\cellcolor{lightgray!50}{\bf91.06$\pm$\bf0.96}&\cellcolor{lightgray!50}89.06$\pm$2.63&\cellcolor{lightgray!50}{\bf93.22$\pm$\bf0.79}\cr Our Base Model &\cellcolor{lightgray!50}90.10$\pm$1.00 &\cellcolor{lightgray!50}89.07$\pm$2.63&\cellcolor{lightgray!50}92.63$\pm$0.81\cr Our Base Model + Extra Features &\cellcolor{lightgray!50}90.43$\pm$0.99 &\cellcolor{lightgray!50}{\bf89.59$\pm$2.59}&\cellcolor{lightgray!50}92.74$\pm$0.80\cr \bottomrule \end{tabular} \end{threeparttable} \caption{Performance of different models. Results in gray are not statistically different from the top result.}\label{tab:performance_comparison} \end{table*} \subsection*{Datasets and Metrics. } We evaluate our model on three datasets (shown in Table~\ref{tab:datasets}). The \textbf{ShARe/CLEF} corpus~\cite{pradhan2013task} comprises 199 medical reports for training and 99 for testing. As Table~\ref{tab:datasets} shows, $28.2\%$ of the mentions in the training set and $32.7\%$ of the mentions in the test set are unlinkable. The reference knowledge base used here is the SNOMED-CT subset of the UMLS 2012AA~\cite{bodenreider2004unified}. 
The \textbf{NCBI} disease corpus~\cite{dougan2014ncbi} is a collection of 793 PubMed abstracts partitioned into 693 abstracts for training and development and 100 abstracts for testing. We use the July 6, 2012 version of MEDIC~\cite{davis2012medic}, which contains 9,664 disease concepts. The TAC 2017 Adverse Reaction Extraction (\textbf{ADR}) dataset consists of a training set of 101 drug labels and a test set of 99 drug labels. The mentions have been mapped manually to the MedDRA 18.1 KB, which contains 23,668 unique concepts. Following previous work, we adopt accuracy to compare the performance of different models. \subsection*{Experimental Settings} \label{sec:experimental setting} We implemented our model using Keras, and trained it on a single Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz, using less than 10 GB of memory. Each token is represented by a 200-dimensional word embedding computed on the PubMed and MIMIC-III corpora~\cite{zhang2019biowordvec}. As for the character embeddings, we use a random matrix initialized as proposed in \citet{he2015delving}, with a dimension of $128$. The dimension of the character LSTM is $64$, which yields $128$-dimensional character feature vectors. In the CNN layer, the number of feature maps is $32$, and the filter windows are $[1, 2, 3]$. The dimensions of the context LSTM and the entity embeddings are set to $32$ and $50$, respectively. We adopt a grid search on a hold-out set from the training samples to select the threshold, and find an optimal value of $\tau = 0.75$. During the training phase, we select at most $20$ entity candidates per mention, and the margin parameter of the triplet ranking loss is $0.1$. For the optimization, we use Adam with a learning rate of $0.0005$ and a batch size of $64$. To avoid overfitting, we adopt a dropout strategy with a dropout rate of $0.1$.
\subsection*{Competitors} We compare our model to the following competitors: \textbf{DNorm} \cite{leaman2013dnorm}; \textbf{UWM} \cite{ghiasvand2014r}; \textbf{Sieve-based Model} \cite{d2015sieve}; \textbf{TaggerOne} \cite{leaman2016taggerone}; a model based on \textbf{Learning to Rank} \cite{xu2017uth_ccb}; \textbf{CNN-based Ranking} \cite{li2017cnn}; and \textbf{BERT-based Ranking} \cite{ji2020bert}. \section*{Results} \subsection*{Overall Performance} During the candidate generation, we generate 20 candidates for each mention. The recall of correct entities on the ShARe/CLEF, NCBI, and ADR test datasets is 97.79\%, 94.27\%, and 96.66\%, respectively. We thus conclude that our candidate generation does not eliminate too many correct candidates. Table~\ref{tab:performance_comparison} shows the performance of our model and the baselines. Besides the accuracy, we also compute a binomial confidence interval for each model (at a significance level of 0.02), based on the total number of mentions and the number of correctly mapped mentions. The best results are shown in bold, and all performances that are within the error margin of the best-performing model are shown in gray. We first observe that, for each dataset, several methods perform within the margin of the best-performing model. However, only two models are consistently within the margin across all datasets: BERT and our method. Adding the extra features (prior, context, coherence) to our base model yields a small increase on the three datasets. However, overall, even our base model achieves a performance that is statistically indistinguishable from the state of the art.
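The exact interval construction is not spelled out here; a common choice consistent with the description is the normal-approximation binomial interval, sketched below with hypothetical mention counts:

```python
import math
from statistics import NormalDist

def binomial_ci(correct, total, alpha=0.02):
    """Normal-approximation (Wald) confidence interval for an accuracy
    estimate; alpha=0.02 corresponds to a 98% interval."""
    p = correct / total
    z = NormalDist().inv_cdf(1 - alpha / 2)
    half_width = z * math.sqrt(p * (1 - p) / total)
    return p - half_width, p + half_width

# hypothetical counts: 859 correctly linked mentions out of 964
low, high = binomial_ci(859, 964)
```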
\begin{table}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule - Character Feature &-1.21&-0.31&-0.30\cr - Alignment Layer & \underline{-3.80}&\underline{-4.06}&\underline{-3.17}\cr - CNN Layer &-1.87&-0.93&-0.35\cr \rowcolor{lightgray!50} Our Base Method &90.10&89.07&92.63\cr + Mention-Entity Prior &+0.33&+0.04&+0.03\cr + Context &-0.09&+0.21&-0.24\cr + Coherence &-0.02&+0.27&+0.11\cr \bottomrule \end{tabular} \caption{Ablation study}\label{tab:ablation} \end{threeparttable} \end{table} \begin{table*}[!t] \centering \begin{threeparttable} \begin{tabular}{ccccccc} \toprule \multirow{1}{*}{Model}&Original ADR&10\%&30\%&50\%&70\%&90\%\cr \midrule + Ordering Change &92.63&92.20&92.18&91.95&92.31&92.05\cr + Typo &92.63&92.03&91.61&91.38&91.41&91.13\cr \bottomrule \end{tabular} \caption{Performance in the face of typos: Simulated ADR Datasets}\label{tab:simulate} \end{threeparttable} \end{table*} \ignore{ \begin{table*}[!t] \begin{threeparttable} \begin{tabular}{cccc} \toprule &Sieve-based Model&BERT-based Ranking&Our Base Model\cr \midrule Parameter Numbers &-&110M/340M&6.5M/4.9M/2.3M\cr Abbreviation Expansion Tool &$\checkmark$&$\checkmark$&$\checkmark$\cr Abbreviation Dictionary &$\checkmark$&$\checkmark$&$\checkmark$\cr Numeral Dictionary &$\checkmark$&$\checkmark$&$\checkmark$\cr Synonym Dictionary&$\checkmark$&$\times$&$\times$\cr Spelling Check Dictionary&$\times$&$\checkmark$&$\times$\cr Stemming Tool&$\checkmark$&$\checkmark$&$\times$\cr Information Retrieval Tool &$\times$&$\checkmark$&$\times$\cr \bottomrule \end{tabular} \caption{Model parameter numbers and external resources used.} \end{threeparttable} \end{table*} } \begin{table*}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccccccccc} \toprule Model&Parameters&\multicolumn{2}{c}{ShARe/CLEF}&\multicolumn{2}{c}{NCBI}&\multicolumn{2}{c}{ADR}&Avg&Speedup\cr &&CPU &GPU &CPU &GPU &CPU &GPU && \cr \midrule BERT 
(large)&340M&2230s&1551s&353s&285s&2736s&1968s&1521s&12.3x\cr BERT (base)&110M&1847s&446s&443s&83s&1666s&605s&848s&6.4x\cr TinyBERT$_{6}$&67M&1618s&255s&344s&42s&2192s&322s&796s&6.0x\cr MobileBERT (base)&25.3M&1202s&330s&322s&58s&1562s&419s&649s&4.7x\cr ALBERT (base)&12M&836s&\textbf{129s}&101s&24s&1192s&170s&409s&2.6x\cr Our Base Model&4.6M&\textbf{181s}&131s&\textbf{38s}&\textbf{22s}&\textbf{196s}&\textbf{116s}&\textbf{114s}&-\cr \bottomrule \end{tabular} \caption{Number of model parameters and observed inference time} \label{tab:running time} \end{threeparttable} \end{table*} \subsection*{Ablation Study} To understand the effect of each component of our model, we measured its performance when individual components are removed or added. The results of this ablation study on all three datasets are shown in Table~\ref{tab:ablation}. The gray row shows the accuracy of our base model. The removal of components of the base model is shown above the gray line; the addition of extra features (see \nameref{sec:extra}) below it. If we remove the Alignment Layer (underlined), the accuracy drops the most, by up to 4.06 percentage points. This indicates that the alignment layer effectively captures the similarity of the corresponding parts of mentions and entity names. The CNN Layer extracts the key components of the names, and removing it causes a drop of up to 1.87 percentage points. The character-level feature captures morphological variations, and removing it results in a decrease of up to 1.21 percentage points. Therefore, we conclude that all components of our base model are necessary. Let us now turn to the effect of the extra features of our model. The Mention-Entity Prior brings a small improvement, because it helps with ambiguous mentions, which make up only a small portion of the dataset. The context feature, likewise, achieves a small increase on the NCBI dataset.
On the other datasets, however, the feature has a negative impact. We believe that this is because the documents in the NCBI dataset are PubMed abstracts, which have more relevant and informative contexts. The documents in the ShARe/CLEF and ADR datasets, in contrast, are more like semi-structured text with a lot of tabular data. Thus, the context around a mention in these documents is less helpful. The coherence feature brings only slight improvements. This could be because our method of estimating co-occurrence is rather coarse-grained, and the naive string matching we use may generate errors and omissions. In conclusion, the extra features do bring a small improvement, and they are thus an interesting direction for future work. However, our simple base model is fully sufficient to achieve state-of-the-art performance already. \subsection*{Performance in the Face of Typos} To probe the robustness of our base model, we further evaluate it on simulated ADR datasets. We generate two simulated datasets by randomly adding typos and changing the word order of mention names. As Table~\ref{tab:simulate} shows, the accuracy degrades only slowly as we gradually add typos: corrupting 90\% of the mentions results in a drop of only 1.5 percentage points. This shows that our model deals well with morphological variations of biomedical names. Moreover, word order changes have almost no effect on our base model, which means that it captures the correspondences between mention and entity names. \subsection*{Parameters and Inference Time} To measure the simplicity of our base model, we analyze two aspects: the number of model parameters and the practical inference time. In Table~\ref{tab:running time}, we compare our model with BERT models, including three popular lightweight models: ALBERT~\cite{lan2019albert}, TinyBERT~\cite{jiao2019tinybert}, and MobileBERT~\cite{sun2020mobilebert}.
Although ALBERT's size is close to our model's, its performance is still 2.2 percentage points lower than that of the BERT$_{\textit{BASE}}$ model on average. The second column of the table shows the number of parameters of the different models. Our model uses an average of only 4.6M parameters across the three datasets, which is 1.6x to 72.9x smaller than the other models. The third through tenth columns show the practical inference time of the models on the CPU and the GPU. The CPU is described in \nameref{sec:experimental setting}, and the GPU we used is a single NVIDIA Tesla V100 (32 GB). Our model is consistently the fastest across all three datasets, both on CPU and GPU (except in the fourth column). On average, our model is 6.4x faster than the other BERT models, and it is much lighter on the CPU. \subsection*{Model Performance as Data Grows} In this section, we study how our model performs with an increasing number of training samples, by subsampling the datasets. As shown in Figure~\ref{fig:smalldata}, the performance of our base model keeps growing as we gradually increase the number of training samples. When using 50\% of the training samples, the accuracies on the ShARe/CLEF, NCBI, and ADR datasets are already $0.8342$, $0.8747$, and $0.9106$, respectively. More data leads to better performance, and thus our model is not limited by its expressivity, even though it is very simple. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{picture/model_efficiency.pdf} \caption{Model efficiency on a small amount of data.} \label{fig:smalldata} \end{figure} \section*{Conclusion} In this paper, we propose a simple and lightweight neural model for biomedical entity linking. Our experimental results on three standard evaluation benchmarks show that the model is very effective, and achieves a performance that is statistically indistinguishable from the state of the art.
BERT-based models, e.g., have 23 times more parameters and require 6.4 times more computing time for inference. Future work to improve the architecture can explore \emph{1)} automatically assigning a weight to each word in the mentions and entity names to capture the importance of each word, depending, e.g., on its grammatical role; \emph{2)} Graph Convolutional Networks (GCNs) \cite{kipf2016semi,wu2020dynamic} to capture graph structure across mentions and improve our notion of entity coherence. \goodbreak \section*{Acknowledgments} This project was partially funded by the DirtyData project (ANR-17-CE23-0018-01). \section*{Introduction} Entity linking (also called entity normalization) is the task of mapping entity mentions in text documents to standard entities in a given knowledge base. For example, the word ``Paris'' is \emph{ambiguous}: It can refer either to the capital of France or to a hero of Greek mythology. Given the text ``Paris is the son of King Priam'', the goal is to determine that, in this sentence, the word refers to the Greek hero, and to link the word to the corresponding entity in a knowledge base such as YAGO \cite{suchanek2007yago} or DBpedia \cite{auer2007dbpedia}. In the biomedical domain, entity linking maps mentions of diseases, drugs, and measures to normalized entities in standard vocabularies. It is an important ingredient for automation in medical practice, research, and public health. Different names for the same entities in hospital information systems seriously hinder the integration and use of medical data. If a medication appears with different names, researchers cannot study its impact, and patients may erroneously be prescribed the same medication twice. The particular challenge of biomedical entity linking is not ambiguity: a word usually refers to only a single entity. Rather, the challenge is that the surface forms vary markedly, due to abbreviations, morphological variations, synonymous words, and different word orderings.
For example, \textit{``Diabetes Mellitus, Type 2''} is also written as \textit{``DM2''}, and \textit{``lung cancer''} is also known as \textit{``lung neoplasm malignant''}. In fact, the surface forms vary so much that all the possible expressions of an entity cannot be known upfront. This means that standard disambiguation systems cannot be applied in our scenario, because they assume that all forms of an entity are known. One may think that variation in surface forms is not such a big problem, as long as all variations of an entity are sufficiently close to its canonical form. Yet, this is not the case. For example, the phrase \textit{``decreases in hemoglobin''} could refer to at least four different entities in MedDRA, which all look alike: \textit{``changes in hemoglobin''}, \textit{``increase in hematocrit''}, \textit{``haemoglobin decreased''}, and \textit{``decreases in platelets''}. In addition, biomedical entity linking cannot rely on external resources such as alias tables, entity descriptions, or entity co-occurrence, which are often used in classical entity linking settings. For this reason, entity linking approaches have been developed specifically for the biomedical domain. Many methods use deep learning: the work of \citet{li2017cnn} casts biomedical entity linking as a ranking problem, leveraging convolutional neural networks (CNNs). More recently, the introduction of BERT has advanced the performance of many NLP tasks, including in the biomedical domain \cite{huang2019clinicalbert,lee2020biobert,ji2020bert}. BERT creates rich pre-trained representations on unlabeled data and achieves state-of-the-art performance on a large suite of sentence-level and token-level tasks, outperforming many task-specific architectures. However, considering the number of parameters of pre-trained BERT models, the improvements brought by fine-tuning them come with a heavy computational cost and memory footprint.
This is a problem for energy efficiency, for smaller organizations, and in poorer countries. In this paper, we introduce a very lightweight model that achieves a performance statistically indistinguishable from that of the state-of-the-art BERT-based models. The central idea is to use an alignment layer with an attention mechanism, which can capture the similarities and differences of corresponding parts of candidate and mention names. Our model is 23x smaller and 6.4x faster than BERT-based models on average, and it is less than half the size of even the lightweight BERT models. Yet, as we show, our model achieves comparable performance on all standard benchmarks. Further, we show that adding more complexity to our model is not necessary: the entity-mention priors, the context around the mention, and the coherence of extracted entities \cite[as used, e.g., in][]{hoffart2011robust} do not improve the results any further.\footnote{All data and code are available at \url{https://github.com/tigerchen52/Biomedical-Entity-Linking}.} \section*{Related Work} In the biomedical domain, much early research focuses on capturing the string similarity of mentions and entity names with rule-based systems~\cite{dogan2012inference, kang2013using, d2015sieve}. Rule-based systems are simple and transparent, but researchers need to define the rules manually, and these are bound to a specific application. To avoid manual rules, machine-learning approaches learn suitable similarity measures between mentions and entity names automatically from training sets~\cite{leaman2013dnorm, dougan2014ncbi, ghiasvand2014r, leaman2016taggerone}. However, one drawback of these methods is that they cannot recognize semantically related words. Recently, deep learning methods have been successfully applied to different NLP tasks, based on pre-trained word embeddings, such as word2vec \cite{mikolov2013distributed} and GloVe \cite{pennington2014glove}.
\citet{li2017cnn} and \citet{wright2019normco} introduce a CNN and an RNN, respectively, built on pre-trained word embeddings, casting biomedical entity linking as a ranking problem. However, traditional methods for learning word embeddings allow for only a single context-independent representation of each word. Bidirectional Encoder Representations from Transformers (BERT) addresses this problem by pre-training deep bidirectional representations from unlabeled text, jointly conditioning on both the left and the right context in all layers. \citet{ji2020bert} proposed a biomedical entity normalization architecture that fine-tunes the pre-trained BERT / BioBERT / ClinicalBERT models \cite{devlin2018bert,huang2019clinicalbert,lee2020biobert}. Extensive experiments show that their model outperforms previous methods and advances the state of the art for biomedical entity linking. A shortcoming of BERT, however, is that it requires high-performance machines. \section*{Our Approach} Formally, our inputs are (1) a \emph{knowledge base} (KB), i.e., a list of entities, each with one or more names, and (2) a \emph{corpus}, i.e., a set of text documents in which certain text spans have been tagged as entity mentions. The goal is to link each entity mention to the correct entity in the KB. To solve this problem, we are given a training set, i.e., a part of the corpus where the entity mentions have already been linked to the correct entities in the KB. Our method proceeds in three steps: \begin{description} \item[\textbf{Preprocessing.}] We preprocess all mentions in the corpus and entity names in the KB to bring them to a uniform format. \item[\textbf{Candidate Generation.}] For each mention, we generate a set of candidate entities from the KB. \item[\textbf{Ranking Model.}] For each mention with its candidate entities, we use a ranking model to score each pair of mention and candidate, outputting the top-ranked result. \end{description} \noindent Let us now describe these steps in detail.
\subsection*{Preprocessing} We preprocess all mentions in the corpus and all entity names in the KB by the following steps: \textbf{Abbreviation Expansion.} Like previous work~\cite{ji2020bert}, we use the Ab3p Toolkit~\cite{sohn2008abbreviation} to expand medical abbreviations. The Ab3p tool outputs a probability for each possible expansion, and we use the most probable expansion. For example, Ab3p knows that ``DM'' is an abbreviation of ``Diabetes Mellitus'', and so we replace the abbreviation with its expanded term. We also expand mentions with the first matching entry from an abbreviation dictionary constructed by previous work \cite{d2015sieve}, which we supplement manually with 20 biomedical abbreviations (such as ``HbA1c'' for glycated hemoglobin). Our dictionary is available in the supplementary material and online. \textbf{Numeral Replacement.} Entity names may contain numerals in different forms (e.g., Arabic, Roman, or spelled out in English). We replace all forms with spelled-out English numerals. For example, ``type \uppercase\expandafter{\romannumeral2} diabetes mellitus'' becomes ``type two diabetes mellitus''. For this purpose, we manually compiled a dictionary of numerals from the corresponding Wikipedia pages. Finally, we remove all punctuation, and convert all words to lowercase. \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{picture/model.pdf} \caption{The architecture of our ranking model, with the input mention ``decreases in hemoglobin'' and the input entity candidate ``haemoglobin decreased''.} \label{fig:architecture} \end{figure*} \textbf{KB Augmentation.} We augment the KB by adding all names from the training set to the corresponding entities. For example, if the training set links the mention ``GS'' in the corpus to the entity ``Adenomatous polyposis coli'' in the KB, we add ``GS'' to the names of that entity in the KB.
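The normalization steps above can be sketched in a few lines of Python. The mini numeral dictionary below is illustrative only; the actual dictionaries come from Wikipedia, prior work, and our manual additions:

```python
import re

# illustrative mini-dictionary mapping numeral forms to English words
NUMERALS = {"1": "one", "i": "one", "2": "two", "ii": "two"}

def preprocess(name):
    """Lowercase, strip punctuation, and spell out numerals."""
    name = name.lower()
    name = re.sub(r"[^\w\s]", " ", name)        # punctuation -> whitespace
    tokens = [NUMERALS.get(t, t) for t in name.split()]
    return " ".join(tokens)

normalized = preprocess("Diabetes Mellitus, Type II")
```

Abbreviation expansion (via Ab3p and the dictionary) would run before this step and is omitted here.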
\subsection*{Candidate Generation}\label{sec:cand} Our ranking approach is based on a deep learning architecture that can compute a similarity score for each pair of a mention in the corpus and an entity name in the KB. However, applying this model to all combinations of mentions and entities would be too slow. Therefore, we generate, for each mention $M$ in the corpus, a set $C_M$ of candidate entities from the KB. Then we apply the deep learning method only to the set $C_M$. To generate the candidate set $C_M$, we calculate a score for $M$ and each entity in the KB, and return the top-$k$ entities with the highest scores as the candidate set $C_M$ (in our experiments, $k=20$). As each entity has several names, we calculate the score of $M$ and all names of the entity $E$, and use the maximum as the score of $M$ and the entity $E$. To compute the score between a mention $M$ and an entity name $S$, we split each of them into tokens, so that we have $M=\{m_{1}, m_{2},..., m_{|M|}\}$ and $S=\{s_{1}, s_{2},..., s_{|S|}\}$. We represent each token by a vector taken from a pre-trained embedding matrix $\mathbf V \in \mathbb{R}^{d\times | V |}$, where $d$ is the dimension of the word vectors and $V$ is a fixed-size vocabulary (details in \nameref{sec:experimental setting}). To take into account the possibility of different token orderings in $M$ and $S$, we design the \emph{aligned cosine similarity} (\textit{ACos}), which maps a given token $m_i \in M$ to the most similar token $s_j \in S$ and returns the cosine similarity to that token: \begin{equation} \textit{ACos}(m_{i}, S) = \max \{ \cos(m_{i}, s_{j}) \mid s_{j} \in S \} \end{equation} \noindent The similarity score is then computed as the sum of the aligned cosine similarities.
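The aligned cosine similarity can be sketched in a few lines of Python; the 2-dimensional token vectors below are toys standing in for the pre-trained word embeddings:

```python
import math

def cos(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def acos_sim(m_i, S):
    """Aligned cosine similarity: similarity of the token vector m_i
    to its best-matching token vector in the entity name S."""
    return max(cos(m_i, s_j) for s_j in S)

# toy 2-d token vectors: m_i aligns with the second token of S
S = [[0.0, 1.0], [1.0, 0.0]]
best = acos_sim([1.0, 0.0], S)
```

Because each token is matched to its best counterpart, the measure is insensitive to token order.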
To avoid a bias toward long texts, and to make the metric symmetric, we add the similarity scores in the other direction as well, yielding: \begin{multline} \textit{sim}(M,S) = \frac{1}{\left| M \right| + \left| S \right|} (\sum_{m_{i} \in M} \textit{ACos}(m_{i}, S) \\ + \sum_{s_{j} \in S} \textit{ACos}(s_{j},M)) \end{multline} \noindent We can now construct the candidate set $C_M = \{\langle{}E_{1}, S_{1}\rangle,$ $\langle{}E_{2}, S_{2}\rangle,$ $..., \langle{}E_{k}, S_{k}\rangle\}$, where $E_i$ is the id of the entity, and $S_i$ is the chosen name of the entity. This set contains the top-$k$ ranked entity candidates for each mention $M$. Specifically, if the set contains candidates with a score of exactly 1, we filter out all other candidates with a lower score. \subsection*{Ranking Model} Given a mention $M$ and its candidate set $C_M = \{\langle{}E_{1}, S_{1}\rangle,$ $\langle{}E_{2}, S_{2}\rangle,$ $..., \langle{}E_{k}, S_{k}\rangle\}$, the ranking model computes a score for each pair of the mention and an entity name candidate $S_i$. Figure~\ref{fig:architecture} shows the corresponding neural network architecture. Let us first describe the base model. This model relies exclusively on the text similarity of mentions and entity names. It ignores the context in which a mention appears, as well as the prior probability of the target entities. To compute the text similarity, we designed the neural network along the lines of the candidate generation: it determines, for each token in the mention, the most similar token in the entity name, and vice versa. In contrast to the candidate generation, however, we also take character-level information into account, and we use an alignment layer to capture the similarities and differences of corresponding parts of mention and entity names. \paragraph{Representation Layer.} As mentioned in \nameref{sec:cand}, we represent a mention $M$ and an entity name $S$ by the set of the embeddings of their tokens in the vocabulary $V$.
However, not all tokens exist in the vocabulary $V$. To handle out-of-vocabulary words, we adopt a recurrent Neural Network (RNN) to capture character-level features for each word. This has the additional advantage of learning the morphological variations of words. We use a Bi-directional LSTM (BiLSTM), running a forward and backward LSTM on a character sequence \cite{graves2013speech}. We concatenate the last output states of these two LSTMs as the character-level representation of a word. To use both word-level and character-level information, we represent each token of a mention or entity name as the concatenation of its embedding in $V$ and its character-level representation. \paragraph{Alignment Layer.} To counter the problem of different word orderings in the mention and the entity name, we want the network to find, for each token in the mention, the most similar token in the entity name. For this purpose, we adapt the attention mechanisms that have been developed for machine comprehension and answer selection~\cite{chen2016enhanced,wang2016compare}. Assume that we have a mention $M = \{\bar{m}_{1},$ $\bar{m}_{2},$ $..., \bar{m}_{|M|}\}$ and an entity name $S = \{\bar{s}_{1},$ $\bar{s}_{2},$ $..., \bar{s}_{|S|}\}$, which were generated by the Representation Layer. We calculate a $|M|\times|S|$-dimensional weight matrix $W$, whose element $w_{i,j}$ indicates the similarity between the token $i$ of the mention and the token $j$ of the entity name, $w_{ij} = \bar{m}_{i}^{T} \bar{s}_{j}$. Thus, the $i^{th}$ row in $W$ represents the similarity between the $i^{th}$ token in $M$ and each token in $S$. We apply a softmax function on each row of $W$ to normalize the values, yielding a matrix $W'$. 
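As a sketch, with small random matrices standing in for the token representations, the weight matrix and its row-wise normalization can be computed as follows (the shapes, not the values, are the point here):

```python
import numpy as np

def row_softmax(W):
    # Numerically stable softmax over each row of W.
    e = np.exp(W - W.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy representations: |M| = 2 mention tokens and |S| = 3 entity-name
# tokens, each a 4-dimensional vector (random stand-ins).
rng = np.random.default_rng(0)
M_bar = rng.standard_normal((2, 4))
S_bar = rng.standard_normal((3, 4))

W = M_bar @ S_bar.T       # w_ij = m_i^T s_j, shape (|M|, |S|)
W_prime = row_softmax(W)  # each row now sums to 1
print(W_prime.shape)      # (2, 3)
```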
We can then compute a vector $\tilde{m}_i$ for the $i^{th}$ token of the mention, which is the sum of the vectors of the tokens of $S$, weighted by their similarity to $\bar{m}_i$: \begin{equation} \tilde{m}_{i} = \sum_{j=1}^{t} w_{ij}' \bar{s}_{j} \end{equation} \noindent This vector ``reconstructs'' $\bar{m}_i$ by adding up suitable vectors from $S$, using mainly those vectors of $S$ that are similar to $\bar{m}_i$. If this reconstruction succeeds (i.e., if $\bar{m}_i$ is similar to $\tilde{m}_i$), then $S$ contained tokens which, together, contain the same information as $\bar{m}_i$. \ignore{ it so that we obtain an attention matrix where each element $\alpha_{ij} \in [0, 1]$: \begin{equation} \alpha_{ij} = \frac{exp ( w_{ij} )}{\sum_{k=1}^{t} w_{ik}} \end{equation} while we also apply a softmax function on each column of $W$ to get the attention matrix for $S$: \begin{equation} \beta_{ij} = \frac{exp ( w_{ij} )}{\sum_{k=1}^{t} w_{lj}} \end{equation} After, the alignment representation can be computed as a weighted sum: \begin{align} \tilde{m}_{i} = \sum_{j=1}^{t}\beta_{ij} \bar{s}_{j} &&\text{and}&& \tilde{s}_{j} = \sum_{i=1}^{l}\alpha_{ij} \bar{m}_{i} \end{align} where $\tilde{m}_{i}$ is the most relevant part to $\bar{m}_{i}$ that selected from $ S = \{ \bar{s}_{1}, \bar{s}_{2},..., \bar{s}_{t}\}$. We do the same operation for each word in $S$ to get $\tilde{s}_{j}$. In this step, we can find the corresponding parts of two texts to compare without being influenced by the order of words } To measure this similarity, we could use a simple dot-product. However, this reduces the similarity to a single scalar value, which erases precious element-wise similarities. 
Therefore, we use the following two comparison functions \cite{tai2015improved,wang2016compare}: \begin{equation} \textit{sub}(\bar{m}_{i}, \tilde{m}_{i}) = (\bar{m}_{i}-\tilde{m}_{i}) \odot (\bar{m}_{i}-\tilde{m}_{i}) \end{equation} \begin{equation} \textit{mul}(\bar{m}_{i}, \tilde{m}_{i}) = \bar{m}_{i} \odot \tilde{m}_{i} \end{equation} \noindent where the operator $\odot$ means element-wise multiplication. Intuitively, the functions $sub$ and $mul$ represent subtraction and multiplication, respectively. The function \emph{sub} has similarities to the Euclidean distance, while \emph{mul} has similarities to the cosine similarity -- while preserving the element-wise information. Finally, we obtain a new representation of each token $i$ of the mention by concatenating $\bar{m}_{i}, \tilde{m}_{i}$ and their difference and similarity: \begin{equation} \hat{m}_{i} = [\bar{m}_{i}, \tilde{m}_{i}, \textit{sub}(\bar{m}_{i}, \tilde{m}_{i}), \textit{mul}(\bar{m}_{i}, \tilde{m}_{i}) ] \end{equation} \noindent By applying the same procedure on the columns of $W$, we can compute analogously a vector $\tilde{s}_{j}$ for each token vector $s_j$ of $S$, and obtain the new representation for the $j^{th}$ token of the entity name as \begin{equation} \hat{s}_{j} = [\bar{s}_{j}, \tilde{s}_{j}, \textit{sub}(\bar{s}_{j}, \tilde{s}_{j}), \textit{mul}(\bar{s}_{j}, \tilde{s}_{j}) ] \end{equation} \noindent This representation augments the original representation $\bar{s}_{j}$ of the token by the ``reconstructed'' token $\tilde{s}_{j}$, and by information about how similar $\tilde{s}_{j}$ is to $\bar{s}_{j}$. \paragraph{CNN Layer.} We now have rich representations for the mention and the entity name, and we apply a one-layer CNN on the mention $[\hat{m}_{1}, \hat{m}_{2},..., \hat{m}_{|M|}]$ and the entity name $[\hat{s}_{1}, \hat{s}_{2},..., \hat{s}_{|S|}]$. 
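Before turning to the CNN, the alignment and comparison steps described above can be sketched in a few lines of numpy (toy matrices again; the actual model operates on the BiLSTM-augmented token representations):

```python
import numpy as np

def row_softmax(W):
    e = np.exp(W - W.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def compare(M_bar, S_bar):
    # Soft-align every mention token to the entity-name tokens, then
    # augment it with the reconstruction and the element-wise
    # sub/mul comparison features.
    W_prime = row_softmax(M_bar @ S_bar.T)
    M_tilde = W_prime @ S_bar             # "reconstructed" tokens
    sub = (M_bar - M_tilde) ** 2          # element-wise squared difference
    mul = M_bar * M_tilde                 # element-wise product
    return np.concatenate([M_bar, M_tilde, sub, mul], axis=1)

rng = np.random.default_rng(0)
M_hat = compare(rng.standard_normal((2, 4)), rng.standard_normal((3, 4)))
print(M_hat.shape)  # (2, 16): each token representation is now 4x wider
```

The same function applied to the transposed roles of mention and entity name yields the augmented entity-name tokens.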
We adopt the CNN architecture proposed by \cite{kim2014convolutional} to extract n-gram features of each text: \begin{equation} f_{M} = \textit{CNN}([\hat{m}_{1}, \hat{m}_{2},..., \hat{m}_{|M|}]) \end{equation} \begin{equation} f_{E} = \textit{CNN}([\hat{s}_{1}, \hat{s}_{2},..., \hat{s}_{|S|}]) \end{equation} \noindent We concatenate these into a single vector $f_{\textit{out}} = [ f_{M}, f_{E} ]$. \paragraph{Output Layer.} We are now ready to compute the final output of our network using a two-layer fully connected neural network: \begin{equation} \Phi ( M, E ) = \textit{sigmoid} (W_{2}~~\textit{ReLU}(W_{1}~f_{\textit{out}} + b_{1} ) + b_{2} ) \end{equation} \noindent where $W_{2}$ and $W_{1}$ are learned weight matrices, and $b_1$ and $b_2$ are bias terms. This constitutes our base model, which relies solely on string similarity. We will now see how we can add prior, context, and coherence features. \subsection*{Extra Features}\label{sec:extra} \paragraph{Mention-Entity Prior.} Consider an ambiguous case such as \textit{``You should shower, let water flow over wounds, pat dry with a towel.''} appearing in hospital Discharge Instructions. In this context, the disease name \textit{``wounds''} is much more likely to refer to \textit{``surgical wound''} than to \textit{``gunshot wound''}. This prior probability is called the \emph{mention-entity prior}. It can be estimated, e.g., by counting in Wikipedia how often a mention is linked to the page of an entity~\cite{hoffart2011robust}. Unlike DBpedia and YAGO, biomedical knowledge bases generally do not provide links to Wikipedia. Hence, we estimate the mention-entity prior from the training set, as: \begin{equation} \textit{prior}(M,E) = \log \textit{count}(M, E) \end{equation} \noindent where $\textit{count}(M, E)$ is the frequency with which the mention $M$ is linked to the target entity $E$ in the training dataset. To reduce the effect of overly large values, we apply the logarithm.
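As a minimal sketch (the training annotations below are hypothetical), the mention-entity prior is just a log-count over mention-entity pairs:

```python
import math
from collections import Counter

# Hypothetical training annotations: (mention, linked entity id) pairs.
train = [("wounds", "surgical_wound"), ("wounds", "surgical_wound"),
         ("wounds", "gunshot_wound"), ("fever", "fever")]
counts = Counter(train)

def prior(mention, entity):
    # log of how often `mention` is linked to `entity` in training;
    # unseen pairs get no prior (0.0) -- an assumption of this sketch.
    c = counts[(mention, entity)]
    return math.log(c) if c > 0 else 0.0

print(prior("wounds", "surgical_wound"))  # log(2) ≈ 0.6931
```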
This prior can be added easily to our model by concatenating it in $f_{\textit{out}}$: \begin{equation} f_{\textit{out}} = [ f_{M}, f_{E}, \textit{prior}(M,E) ] \end{equation} \paragraph{Context.} The context around a mention can provide clues on which candidate entity to choose. We compute a context score that measures how relevant the keywords of the context are to the candidate entity name. We first represent the sentence containing the mention by pre-trained word embeddings. We then run a Bi-directional LSTM on the sentence to get a new representation for each word. In the same way, we apply a Bi-directional LSTM on the entity name tokens to get the entity name representation $cxt_{E}$. To select keywords relevant to the entity while ignoring noise words, we adopt an attention strategy to assign a weight for each token in the sentence. Then we use a weighted sum to represent the sentence as $cxt_{M}$. The context score is then computed as the cosine similarity between both representations: \begin{equation} \textit{context}(M, E) = \cos (cxt_{M}, cxt_{E}) \end{equation} As before, we concatenate this score to the vector $f_{out}$. \paragraph{Coherence.} Certain entities are more likely to occur together in the same document than others, and we can leverage this disposition to help the entity linking. To capture the co-occurrence of entities, we pre-train entity embeddings in such a way that entities that often co-occur together have a similar distributed representation. We train these embeddings with Word2Vec~\cite{mikolov2013distributed} on a collection of PubMed abstracts\footnote{ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/}. Since the entities in this corpus are not linked to our KB, we consider every occurrence of an exact entity name as a mention of that entity. Given a mention $M$ and a candidate entity $E$, we compute a coherence score to measure how often the candidate entity co-occurs with the other entities in the document. 
We first select the mentions around $M$. For each mention, we use the first entity candidate (as given by the candidate generation). This gives us a set of entities $P_{M} = \{ {p}_{1}, {p}_{2},..., {p}_{k}\}$, where each element is a pre-trained entity vector. Finally, the coherence score is computed as: \begin{equation} \textit{coherence}(M, E) = \frac{1}{k} \sum_{i=1}^{k} \cos(p_{i},p_{E}) \end{equation} \noindent where $p_{E}$ is the pre-trained vector of the entity candidate $E$. This score measures how close the candidate entity $E$ is, on average, to the other presumed entities in the document. As before, we concatenate this score to the vector $f_{\textit{out}}$. More precisely, we pre-trained separate entity embeddings for the three datasets, and used the mean value of all entity embeddings to represent missing entities. \subsection*{NIL Problem} The NIL problem occurs when a mention does not correspond to any entity in the KB. We adopt a traditional threshold method, which considers a mention unlinkable if its score is less than a threshold $\tau$. This means that we map a mention to the highest-scoring entity if that score exceeds $\tau$, and to NIL otherwise. The threshold $\tau$ is learned from a training set. For datasets that do not contain unlinkable mentions, we set the threshold $\tau$ to zero. \subsection*{Training} For training, we adopt a triplet ranking loss function to make the score of the positive candidates higher than the score of the negative candidates. The objective function is: \begin{multline} \theta ^{*} = \mathop{\arg\min}_{\theta} \sum_{D \in \mathcal{D}}\sum_{M \in D}\sum_{E \in C} \\ \max (0, \gamma - \Phi ( M, E^{+} ) + \Phi ( M, E^{-} )) \end{multline} \noindent where $\theta$ stands for the parameters of our model, $\mathcal{D}$ is a training set containing a certain number of documents, and $\gamma$ is the margin parameter.
$E^{+}$ and $E^{-}$ represent a positive entity candidate and a negative entity candidate, respectively. Our goal is to find an optimal $\theta$, which makes the score difference between positive and negative entity candidates as large as possible. For this, we need triplets of a mention $M$, a positive example $E^+$ and a negative example $E^-$. The positive example can be obtained from the training set. The negative examples are usually chosen by random sampling from the KB. In our case, we sample the negative example from the candidates that were produced by the candidate generation phase (excluding the correct entity). This choice makes the negative examples very similar to the positive example, and forces the process to learn what distinguishes the positive candidate from the others. \section*{Experiments} \begin{table}[b!] \small \begin{tabu} {p{1.3cm} X[c] X[c] X[c] X[c] X[c] X[c]} \toprule &\multicolumn{2}{c}{ShARe/CLEF} &\multicolumn{2}{c}{NCBI} &\multicolumn{2}{c}{ADR} \\ &train &test &train &test &train &test \\ \midrule documents &199 &99 &692 &100 &101 &99 \\ mentions &5816 &5351 &5921 &964 &7038 &6343 \\ NIL &1641 &1750 &0 &0 &47 &18 \\ \midrule concepts &\multicolumn{2}{c}{88140} &\multicolumn{2}{c}{9656} &\multicolumn{2}{c}{23668} \\ synonyms &\multicolumn{2}{c}{42929} &\multicolumn{2}{c}{59280} &\multicolumn{2}{c}{0}\\ \bottomrule \end{tabu} \caption{Dataset Statistics}\label{tab:datasets} \end{table} \ignore{ \begin{table*}[!t] \small \begin{minipage}{.25\linewidth} \caption{Performance of different models. 
Results in gray are not statistically different from the top result.}\label{tab:performance_comparison} \end{minipage}% \hfill% \begin{minipage}{.72\linewidth}% \begin{threeparttable} \begin{tabular}{cccccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule DNorm \cite{leaman2013dnorm}&-&82.20$\pm$4.05&-\cr UWM \cite{ghiasvand2014r} &89.50$\pm$1.38&-&-\cr Sieve-based Model \cite{d2015sieve}&\cellcolor{lightgray!50}90.75$\pm$1.31&84.65$\pm$3.84&-\cr TaggerOne \cite{leaman2016taggerone}&-&\cellcolor{lightgray!50}88.80$\pm$3.32&-\cr Learning to Rank \cite{xu2017uth_ccb}&-&-&92.05$\pm$1.12\cr CNN-based Ranking \cite{li2017cnn}&\cellcolor{lightgray!50}90.30$\pm$1.33&86.10$\pm$3.63&-\cr BERT-based Ranking \cite{ji2020bert}&\cellcolor{lightgray!50}{\bf91.06$\pm$\bf1.29}&\cellcolor{lightgray!50}89.06$\pm$3.32&\cellcolor{lightgray!50}{\bf93.22$\pm$\bf1.04}\cr Our Base Model &\cellcolor{lightgray!50}90.10$\pm$1.35 &\cellcolor{lightgray!50}89.07$\pm$3.32&\cellcolor{lightgray!50}92.89$\pm$1.06\cr Our Base Model + Extra Features &\cellcolor{lightgray!50}90.43$\pm$1.33 &\cellcolor{lightgray!50}{\bf89.59$\pm$3.22}&\cellcolor{lightgray!50}93.00$\pm$1.06\cr \bottomrule \end{tabular} \end{threeparttable} \end{minipage}% \end{table*} } \begin{table*}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule DNorm \cite{leaman2013dnorm}&-&82.20$\pm$3.09&-\cr UWM \cite{ghiasvand2014r} &89.50$\pm$1.02&-&-\cr Sieve-based Model \cite{d2015sieve}&\cellcolor{lightgray!50}90.75$\pm$0.96&84.65$\pm$3.00&-\cr TaggerOne \cite{leaman2016taggerone}&-&\cellcolor{lightgray!50}88.80$\pm$2.59&-\cr Learning to Rank \cite{xu2017uth_ccb}&-&-&92.05$\pm$0.84\cr CNN-based Ranking \cite{li2017cnn}&\cellcolor{lightgray!50}90.30$\pm$1.00&86.10$\pm$2.79&-\cr BERT-based Ranking 
\cite{ji2020bert}&\cellcolor{lightgray!50}{\bf91.06$\pm$\bf0.96}&\cellcolor{lightgray!50}89.06$\pm$2.63&\cellcolor{lightgray!50}{\bf93.22$\pm$\bf0.79}\cr Our Base Model &\cellcolor{lightgray!50}90.10$\pm$1.00 &\cellcolor{lightgray!50}89.07$\pm$2.63&\cellcolor{lightgray!50}92.63$\pm$0.81\cr Our Base Model + Extra Features &\cellcolor{lightgray!50}90.43$\pm$0.99 &\cellcolor{lightgray!50}{\bf89.59$\pm$2.59}&\cellcolor{lightgray!50}92.74$\pm$0.80\cr \bottomrule \end{tabular} \end{threeparttable} \caption{Performance of different models. Results in gray are not statistically different from the top result.}\label{tab:performance_comparison} \end{table*} \subsection*{Datasets and Metrics. } We evaluate our model on three datasets (shown in Table~\ref{tab:datasets}). The \textbf{ShARe/CLEF} corpus~\cite{pradhan2013task} comprises 199 medical reports for training and 99 for testing. As Table~\ref{tab:datasets} shows, $28.2\%$ of the mentions in the training set and $32.7\%$ of the mentions in the test set are unlinkable. The reference knowledge base used here is the SNOMED-CT subset of the UMLS 2012AA~\cite{bodenreider2004unified}. The \textbf{NCBI} disease corpus~\cite{dougan2014ncbi} is a collection of 793 PubMed abstracts partitioned into 693 abstracts for training and development and 100 abstracts for testing. We use the July 6, 2012 version of MEDIC~\cite{davis2012medic}, which contains 9,664 disease concepts. The TAC 2017 Adverse Reaction Extraction (\textbf{ADR}) dataset consists of a training set of 101 labels and a test set of 99 labels. The mentions have been mapped manually to the MedDRA 18.1 KB, which contains 23,668 unique concepts. Following previous work, we adopt accuracy to compare the performance of different models. \subsection*{Experimental Settings} \label{sec:experimental setting} We implemented our model using Keras, and trained our model on a single Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz, using less than 10Gb of memory. 
Each token is represented by a 200-dimensional word embedding computed on the PubMed and MIMIC-III corpora~\cite{zhang2019biowordvec}. As for the character embeddings, we use a random matrix initialized as proposed in \citet{he2015delving}, with a dimension of $128$. The dimension of the character LSTM is $64$, which yields $128$-dimensional character feature vectors. In the CNN layer, the number of feature maps is $32$, and the filter windows are $[1, 2, 3]$. The dimensions of the context LSTM and of the entity embeddings are set to $32$ and $50$, respectively. We adopt a grid search on a hold-out set from the training samples to select the threshold $\tau$, and find an optimal value of $\tau = 0.75$. During the training phase, we select at most $20$ entity candidates per mention, and the margin parameter of the triplet ranking loss is $0.1$. For the optimization, we use Adam with a learning rate of $0.0005$ and a batch size of $64$. To avoid overfitting, we adopt a dropout strategy with a dropout rate of $0.1$. \subsection*{Competitors} We compare our model to the following competitors: \textbf{DNorm} \cite{leaman2013dnorm}; \textbf{UWM} \cite{ghiasvand2014r}; \textbf{Sieve-based Model} \cite{d2015sieve}; \textbf{TaggerOne} \cite{leaman2016taggerone}; a model based on \textbf{Learning to Rank} \cite{xu2017uth_ccb}; \textbf{CNN-based Ranking} \cite{li2017cnn}; and \textbf{BERT-based Ranking} \cite{ji2020bert}. \section*{Results} \subsection*{Overall Performance} During the candidate generation, we generate 20 candidates for each mention. The recall of correct entities on the ShARe/CLEF, NCBI, and ADR test datasets is 97.79\%, 94.27\%, and 96.66\%, respectively. We thus conclude that our candidate generation does not eliminate too many correct candidates. Table~\ref{tab:performance_comparison} shows the performance of our model and the baselines.
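The candidate recall reported above amounts to a simple membership check of the gold entity in the generated top-$k$ candidate list; a sketch with hypothetical inputs:

```python
def candidate_recall(gold_entities, candidate_lists):
    # Fraction of mentions whose gold entity appears among the
    # generated top-k candidates (inputs here are hypothetical).
    hits = sum(1 for gold, cands in zip(gold_entities, candidate_lists)
               if gold in cands)
    return hits / len(gold_entities)

gold = ["E1", "E2", "E3"]
cands = [["E1", "E9"], ["E7", "E2"], ["E5", "E6"]]
print(candidate_recall(gold, cands))  # 2 of 3 mentions covered: 0.666...
```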
Besides accuracy, we also compute a binomial confidence interval for each model (at a confidence level of 0.02), based on the total number of mentions and the number of correctly mapped mentions. The best results are shown in bold text, and all performances that are within the error margin of the best-performing model are shown in gray. We first observe that, for each dataset, several methods perform within the margin of the best-performing model. However, only two models are consistently within the margin across all datasets: BERT and our method. Adding extra features (prior, context, coherence) to our base model yields a small increase on the three datasets. However, overall, even our base model achieves a performance that is statistically indistinguishable from the state of the art. \begin{table}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccc} \toprule \multirow{1}{*}{Model}&ShARe/CLEF&NCBI&ADR\cr \midrule - Character Feature &-1.21&-0.31&-0.30\cr - Alignment Layer & \underline{-3.80}&\underline{-4.06}&\underline{-3.17}\cr - CNN Layer &-1.87&-0.93&-0.35\cr \rowcolor{lightgray!50} Our Base Method &90.10&89.07&92.63\cr + Mention-Entity Prior &+0.33&+0.04&+0.03\cr + Context &-0.09&+0.21&-0.24\cr + Coherence &-0.02&+0.27&+0.11\cr \bottomrule \end{tabular} \caption{Ablation study}\label{tab:ablation} \end{threeparttable} \end{table} \begin{table*}[!t] \centering \begin{threeparttable} \begin{tabular}{ccccccc} \toprule \multirow{1}{*}{Model}&Original ADR&10\%&30\%&50\%&70\%&90\%\cr \midrule + Ordering Change &92.63&92.20&92.18&91.95&92.31&92.05\cr + Typo &92.63&92.03&91.61&91.38&91.41&91.13\cr \bottomrule \end{tabular} \caption{Performance in the face of typos: Simulated ADR Datasets}\label{tab:simulate} \end{threeparttable} \end{table*} \ignore{ \begin{table*}[!t] \begin{threeparttable} \begin{tabular}{cccc} \toprule &Sieve-based Model&BERT-based Ranking&Our Base Model\cr \midrule Parameter Numbers &-&110M/340M&6.5M/4.9M/2.3M\cr Abbreviation 
Expansion Tool &$\checkmark$&$\checkmark$&$\checkmark$\cr Abbreviation Dictionary &$\checkmark$&$\checkmark$&$\checkmark$\cr Numeral Dictionary &$\checkmark$&$\checkmark$&$\checkmark$\cr Synonym Dictionary&$\checkmark$&$\times$&$\times$\cr Spelling Check Dictionary&$\times$&$\checkmark$&$\times$\cr Stemming Tool&$\checkmark$&$\checkmark$&$\times$\cr Information Retrieval Tool &$\times$&$\checkmark$&$\times$\cr \bottomrule \end{tabular} \caption{Model parameter numbers and external resources used.} \end{threeparttable} \end{table*} } \begin{table*}[!t] \centering \small \begin{threeparttable} \begin{tabular}{cccccccccc} \toprule Model&Parameters&\multicolumn{2}{c}{ShARe/CLEF}&\multicolumn{2}{c}{NCBI}&\multicolumn{2}{c}{ADR}&Avg&Speedup\cr &&CPU &GPU &CPU &GPU &CPU &GPU && \cr \midrule BERT (large)&340M&2230s&1551s&353s&285s&2736s&1968s&1521s&12.3x\cr BERT (base)&110M&1847s&446s&443s&83s&1666s&605s&848s&6.4x\cr TinyBERT$_{6}$&67M&1618s&255s&344s&42s&2192s&322s&796s&6.0x\cr MobileBERT (base)&25.3M&1202s&330s&322s&58s&1562s&419s&649s&4.7x\cr ALBERT (base)&12M&836s&\textbf{129s}&101s&24s&1192s&170s&409s&2.6x\cr Our Base Model&4.6M&\textbf{181s}&131s&\textbf{38s}&\textbf{22s}&\textbf{196s}&\textbf{116s}&\textbf{114s}&-\cr \bottomrule \end{tabular} \caption{Number of model parameters and observed inference time} \label{tab:running time} \end{threeparttable} \end{table*} \subsection*{Ablation Study} To understand the effect of each component of our model, we measured the performance of our model when individual components are removed or added. The results of this ablation study on all three datasets are shown in Table~\ref{tab:ablation}. The gray row is the accuracy of our base model. The removal of the components of the base model is shown above the gray line; the addition of extra features (see the section of \nameref{sec:extra}) below. If we remove the Alignment Layer (underlined), the accuracy drops the most, with up to 4.06 percentage points. 
This indicates that the alignment layer can effectively capture the similarity of the corresponding parts of mentions and entity names. The CNN Layer extracts the key components of the names, and removing this part causes a drop of up to 1.87 percentage points. The character-level feature captures morphological variations, and removing it results in a decrease of up to 1.21 percentage points. Therefore, we conclude that all components of our base model are necessary. Let us now turn to the effect of the extra features of our model. The Mention-Entity Prior can bring a small improvement, because it helps with ambiguous mentions, which occupy only a small portion of the dataset. The context feature, likewise, can achieve a small increase on the NCBI dataset. On the other datasets, however, the feature has a negative impact. We believe that this is because the documents in the NCBI datasets are PubMed abstracts, which have more relevant and informative contexts. The documents in the ShARe/CLEF and ADR datasets, in contrast, are more like semi-structured text with a lot of tabular data. Thus, the context around a mention in these documents is less helpful. The coherence feature brings only slight improvements. This could be because our method of estimating co-occurrence is rather coarse-grained, and the naive string matching we use may generate errors and omissions. In conclusion, the extra features do bring a small improvement, and they are thus an interesting direction of future work. However, our simple base model is fully sufficient to achieve state-of-the-art performance already. \subsection*{Performance in the Face of Typos} To reveal how our base model works, we further evaluate it on simulated ADR datasets. We generate two simulated datasets by randomly adding typos and changing word orderings of mention names. 
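The perturbations could be generated along the following lines (a sketch; the exact editing procedure used for the simulated datasets is an assumption here):

```python
import random

def add_typo(mention, rng):
    # Replace one randomly chosen character with a random letter --
    # one simple way to simulate a typo; insertions or deletions
    # would work similarly.
    i = rng.randrange(len(mention))
    return mention[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + mention[i + 1:]

def shuffle_tokens(mention, rng):
    # Randomly permute the token order of a multi-word mention.
    tokens = mention.split()
    rng.shuffle(tokens)
    return " ".join(tokens)

rng = random.Random(0)
print(add_typo("pulmonary edema", rng))
print(shuffle_tokens("adenomatous polyposis coli", rng))
```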
As Table~\ref{tab:simulate} shows, the accuracy does not drop much as we gradually add typos: even with typos in 90\% of the mentions, the accuracy drops by only 1.5 percentage points. This shows that our model can deal well with morphological variations of biomedical names. Moreover, ordering changes have almost no effect on our base model, which means that it can capture the correspondences between mention and entity names. \subsection*{Parameters and Inference Time} To measure the simplicity of our base model, we analyze two dimensions: the number of model parameters and the practical inference time. In Table~\ref{tab:running time}, we compare our model with BERT models, including three popular lightweight models: ALBERT~\cite{lan2019albert}, TinyBERT~\cite{jiao2019tinybert}, and MobileBERT~\cite{sun2020mobilebert}. Although ALBERT's size is close to that of our model, its performance is still 2.2 percentage points lower than the BERT$_{\textit{BASE}}$ model on average. The second column of the table shows the number of parameters of the different models. Our model uses an average of only 4.6M parameters across the three datasets, which is 1.6x to 72.9x smaller than the other models. The third through tenth columns show the practical inference time of the models on the CPU and the GPU. The CPU is described in the \nameref{sec:experimental setting}, and the GPU we used is a single NVIDIA Tesla V100 (32 GB). Our model is consistently the fastest across all three datasets, both on CPU and on GPU (except in the fourth column). On average, our model is 6.4x faster than the other BERT models, and it is much lighter on the CPU. \subsection*{Model Performance as Data Grows} In this section, we study how our model performs with an increasing amount of training samples, by subsampling the datasets. As shown in Figure~\ref{fig:smalldata}, the performance of our base model keeps growing when we gradually increase the number of training samples.
When using 50\% of the training samples, the accuracies on the ShARe/CLEF, NCBI, and ADR datasets are already $0.8342$, $0.8747$, and $0.9106$, respectively. More data leads to better performance, and thus our model is not limited by its expressivity, even though it is very simple. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{picture/model_efficiency.pdf} \caption{Model efficiency on a small amount of data.} \label{fig:smalldata} \end{figure} \section*{Conclusion} In this paper, we propose a simple and lightweight neural model for biomedical entity linking. Our experimental results on three standard evaluation benchmarks show that the model is very effective, and achieves a performance that is statistically indistinguishable from the state of the art. BERT-based models, e.g., have 23 times more parameters and require 6.4 times more computing time for inference. Future work to improve the architecture can explore \emph{1)} automatically assigning a weight to each word in the mentions and entity names to capture the importance of each word, depending, e.g., on its grammatical role; and \emph{2)} Graph Convolutional Networks (GCNs) \cite{kipf2016semi,wu2020dynamic} to capture the graph structure across mentions and improve our notion of entity coherence. \goodbreak \section*{Acknowledgments} This project was partially funded by the DirtyData project (ANR-17-CE23-0018-01).
\section{INTRODUCTION} The presence of an elementary scalar in the Standard Model (SM) provides the most compelling reason to expect new physics at the TeV scale and beyond. While new physics candidates may differ quite significantly with respect to the underlying theory, there could be similarities in the properties of the new particles predicted by these scenarios. For instance, several models are characterized by the presence of a heavy scalar with Higgs-like couplings. One class of models (say, Class A) characterized by these heavy scalars includes the two-Higgs-doublet model, supersymmetric models, \textit{etc.} Alternatively, these scalars can also arise as the Kaluza-Klein (KK) excitations of the bulk Higgs in extra-dimensional models or in supersymmetric extensions of Little Higgs scenarios~\cite{Roy:2005hg, Csaki:2008se}. We refer to this kind of model as Class B. In the event of a discovery of such excited scalars, it is essential to identify the true nature of physics beyond the SM. In this paper we propose an analysis that helps to recognize such distinguishing features.\\ The models are segregated into the two classes A and B introduced earlier. The latter is characterized by the presence of additional vector-like fermions. For instance, the vector-like fermions could correspond to the KK excitations of the fermions in extra-dimensional models~\cite{Agashe:2003zs}, with the lightest one generally corresponding to the top partner, or they can arise in the extended Little Higgs model~\cite{Roy:2005hg, Csaki:2008se}.
Using this difference, we attempt to devise a unique signature which is a trademark of models belonging to Class B.\\ The phenomenology of the heavy top partners at the LHC has been discussed in detail in Refs.~\cite{Gopalakrishna:2013hua, Gopalakrishna:2011ef,DeSimone:2012fs, Contino:2006qr,Vignaroli:2012sf, Anastasiou:2009rv, Kribs:2010ii, Carena:2006jx, Han:2003wu, Matsumoto:2008fq, Buchkremer:2013bha, Banfi:2013yoa, Li:2013xba, Gripaios:2014pqa,Chala:2014mma, Endo:2014bsa, Dolan:2016eki}. Similarly, the phenomenology of a heavy scalar coming from a general Higgs sector or a warped extra dimension at the LHC has been discussed in Refs.~\cite{Dumont:2014wha, Mahmoudi:2016aib}. Recently, ATLAS and CMS have searched for a heavy Higgs-like resonance in the $WW^{*}$, diphoton, $ZZ$ and $hh$ channels~\cite{Aad:2015kna, Aad:2014ioa, ATLAS-CONF-2016-059, ATLAS-CONF-2016-079, ATLAS-CONF-2016-056, Sirunyan:2016cao}. Although translating the maximum observed cross section into an exclusion on the mass of the heavy Higgs is highly model dependent, one can safely assume that a heavy scalar beyond 1 TeV is still compatible with these search limits. These searches for the heavy Higgs as well as the heavy top partners are in general carried out independently of each other. Thus, even if there is a discovery in any of these search modes, it is difficult to pin-point the right class of model. The aim of this paper is to present a unified search strategy for the heavy Higgs scalar and the heavy top-partner. This would eventually serve as a litmus test in distinguishing models belonging to Class A from Class B.\footnote{It is relevant to note at this point that the heavy scalar can be replaced by a particle of any other spin with similar mass, which couples to the vector-like top. The analysis presented in this work proceeds in an exactly analogous fashion.
A detailed discussion to this effect is given in Section \ref{results}.}\\ \section{Model} Consider a simplified model with a heavy Higgs-like scalar ($H_{1}$) and a vector-like gauge singlet fermion ($t'$). The relevant couplings of the scalar $H_{1}$ to $t'$, gluons\footnote{$H_1$ is assumed to be produced through the gluon fusion diagram with SM top quarks propagating in the loop.} and third generation quarks are governed by the following effective Lagrangian: \begin{equation} \mathcal{L}_{NP}\supset G^a_{\mu\nu}G^{a\,\mu\nu}H_{1}+\left(Y_t\bar Q_3H_1t'+Y_t\bar Q_3H_{1} t + Y_t\bar Q_3H t'+M_{t'}\bar t't' + Y_t\bar Q_3H t+\text{h.c.}\right) \label{effective_lagrangian} \end{equation} where $G^a_{\mu\nu}$ is the field strength tensor of the gluons. We have assumed the rest of the vector-like spectrum to be heavy and decoupled from the effective low energy theory. Without loss of generality, we assume the coupling strength of the scalar $H_{1}$ to $Q_3$ and $t'$ to be the same as $Y_t$, the top Yukawa coupling\footnote{The size of this coupling comes into play when considering the branching fraction of $H_{1}\rightarrow t_2~t$. In this scenario we assume a branching fraction of 50\% for this mode. For a warped model the branching fraction is dominated by the $H_{1}\rightarrow t t$ mode, with the gauge boson modes suppressed due to orthonormality. The branching fraction of $H_{1}\rightarrow t t$ can be adjusted by playing with the localization parameter of the bulk scalar. Observation of these edges requires the accumulation of a certain minimum number of signal points. Lower branching fractions would require larger luminosities, primarily due to the lower production cross section of a heavy scalar.}. Since the heavy scalar $H_{1}$ has Higgs-like interactions, its decay to a pair of $t\bar{t},WW,ZZ,hh$ is common to both classes of models under consideration. As mentioned earlier, models belonging to Class B are characterized by the presence of additional vector-like states.
The fourth term in the parenthesis in Eq. \ref{effective_lagrangian} induces a mass-mixing between the SM top and its vector-like counterpart. The mass matrix, in the basis $(t,t')$, parametrizing this mixing is given by: \begin{equation} \mathcal{M}_{tt'} = \begin{bmatrix} \frac{Y_{t} v}{\sqrt{2}} & \frac{Y_{t} v}{\sqrt{2}}\\ 0 & M_{t'} \end{bmatrix} \end{equation} where $v$ represents the vacuum expectation value (vev) of the Higgs. In the presence of this mixing the mass basis is related to the interaction basis as: \begin{equation} \begin{bmatrix} t_1\\ t_2 \end{bmatrix}_{L,R}=\mathcal{O}_{2\times 2}\begin{bmatrix} t\\ t' \end{bmatrix}_{L,R} \end{equation} where $t_1$ is now identified as the SM top and $t_2$ is the heavier partner. $\mathcal{O}$ is the $2\times 2$ rotation matrix that moves from the interaction basis to the mass basis. Henceforth, the top will be denoted as $t$ for convenience. An example of a complete model in this case would be a warped extra dimensional model \cite{Randall:1999ee,Gherghetta:2000qt}. The mass of the heavy partner of the $SU(2)$ singlet top is light, as the corresponding bulk field is localized closer to the IR brane (the bulk localization parameter is $c\sim0$). The doublets do not enjoy a similar localization due to constraints from $Zb\bar{b}$. As a result, the corresponding KK partners are heavier. The masses of the $n=1$ KK partners of the $W$ and $Z$ are significantly heavier due to constraints from precision electroweak and flavour physics. On the other hand, the mass of the $n=1$ KK partner of the Higgs is not as severely constrained and can be as low as 1 TeV. The presence of an additional fermion $t_2$ opens up an additional channel for $H_{1}$ to decay, i.e.\ $H_1\rightarrow tt_2$. For the setup under consideration, $t_2$ can only decay into $t~(b)~+~X$ where $X=W,h,Z$. The branching fraction of $t_2$ decaying to the gauge bosons is governed by its interaction with the scalar degrees of freedom of the Higgs doublet $H$.
Two out of the four degrees of freedom correspond to the longitudinal polarizations of the $W^{\pm}$, while one is that of the $Z$-boson. The remaining one is the SM Higgs boson $h$. Consequently, one can roughly estimate the branching rates to be \begin{equation} B.R(t_2\rightarrow b+W)\sim 50\%;\;\;B.R(t_2\rightarrow t+h)\sim 25\%;\;\;B.R(t_2\rightarrow t+Z)\sim 25\% \end{equation} All the channels corresponding to $H_{1}~\rightarrow~tt_2~\rightarrow~tt(b)X$ as depicted in Fig.~\ref{tbw} are characterized by distinct kinematic endpoints in certain invariant mass distributions. This unique feature not only distinguishes these channels from the SM backgrounds, but also from models belonging to Class A, serving as a smoking gun signal for Class B. Kinematic variables like $M_{T_2}$ have been used in different SUSY searches \cite{Lester:2001zx,Lester:1999tx,Allanach:2000kt,Meade:2006dw,Cheng:2007xv}. We would like to emphasize that our analysis uses the known technique of kinematic edges as a smoking gun for the presence of certain specific models. To this effect, we have constructed a topology with a heavy Higgs and a vector-like top partner. This leads to final states with, for instance, a top, a bottom and a $W$ (for the leading decay mode of $t_2$), all of which are visible and have known masses. As discussed, the invariant mass distributions of the top--bottom and the bottom--lepton (from the $W$) systems exhibit kinematic edges. This `edgy' feature in this particular final state is only a characteristic of models which have a vector-like top partner.
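As a numerical cross-check of the mass mixing described above, the $2\times 2$ matrix $\mathcal{M}_{tt'}$ can be diagonalized by a singular value decomposition, whose singular values are the physical masses $m_{t_1}$ and $m_{t_2}$. The sketch below is not part of the analysis chain; the inputs $Y_t v/\sqrt{2}=174$ GeV and $M_{t'}=700$ GeV are illustrative (BP2-like) choices:

```python
import numpy as np

yv = 174.0    # Y_t * v / sqrt(2) in GeV (illustrative: Y_t ~ 1, v = 246 GeV)
M_tp = 700.0  # vector-like mass M_t' in GeV (illustrative)

# Mass matrix in the (t, t') basis, Eq. (2)
M = np.array([[yv, yv],
              [0.0, M_tp]])

# An SVD M = U diag(m) V^T implements the bi-unitary rotation O_{2x2}
# to the mass basis; the singular values (descending) are the masses.
_, s, _ = np.linalg.svd(M)
m_t2, m_t1 = s  # heavier eigenstate first
```

With these inputs the mixing pushes the eigenvalues apart: $m_{t_1}$ comes out slightly below $Y_t v/\sqrt{2}$ and $m_{t_2}$ slightly above $M_{t'}$, as expected for a seesaw-like off-diagonal entry.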
For a cascade decay $P_{1}~\rightarrow~P_{2}\, d_{1}$ followed by $P_{2}~\rightarrow~d_{2}\,d_{3}$, the upper edge in the invariant mass distribution of the final state particles $d_{1}, d_{2}$ is given by~\cite{Lester:2001zx,Lester:2006yw} \begin{eqnarray} m_{edge}^{2} = m_{d_1}^{2} + m_{d_2}^{2} + \frac{f(m_{P_1}, m_{P_2}, m_{d_1})\, f(m_{P_2}, m_{d_2}, m_{d_3}) - g(m_{P_1}, m_{P_2}, m_{d_1})\, g(m_{d_3}, m_{P_2}, m_{d_2})}{2 m_{P_2}^{2}} \label{sample} \end{eqnarray} where $f(a, b, c)~=~\sqrt{(a - b - c)(a + b - c)(a - b + c)(a + b + c)}$ and \\ $g(a, b, c)~=~a^{2} - b^{2} - c^{2}$.\\ Models like the MSSM, which can also lead to similar final states, do not however exhibit these edges: the final state particles there are uncorrelated, and the corresponding invariant mass distributions instead fall off gradually. The heavy Higgs with a vector-like top is used here only as a toy model. The Higgs partner can be replaced by a $Z'$ or a graviton, and the analysis proceeds similarly. We now study each of these channels and define the invariant mass distributions where the kinematic endpoints can be observed. \begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=8cm]{template_feynman.pdf} \end{tabular} \end{center} \caption{\it{ggF production of $H_{1}$ and its decay to $t~t (b)~X$}} \label{tbw} \end{figure} \subsection{Channel 1 : $H_1\rightarrow tt_2 \rightarrow t b W$} This is the dominant channel, where $H_1$ decays to $t$ and $t_2$, with $t_2$ further decaying to a $b$-quark and a $W$-boson. While the top decays hadronically, we consider the leptonic decay mode of the $W$-boson. The topology is endowed with the following features: \begin{itemize} \item \textbf{Kinematic edge in the $m_{bt}$ distribution:} As discussed, the invariant mass distribution of the top--bottom system is characterized by a kinematic endpoint.
The invariant mass of the top--bottom system is given by \begin{equation} \left(m^{edge}_{tb}\right)^2~=~m_t^2~+~m_{b}^{2}~+~2\left(E_{t}E_{b} - \textbf{P}_{t}\cdot\textbf{P}_{b} \right), \label{edgetb} \end{equation} where $t$ is the top quark originating from the heavy Higgs while the $b$-quark emerges from the decay of $t_2$. The magnitudes of the quark momenta in the rest frame of $t_2$ are given as \begin{eqnarray} p_{t}^2=\frac{m_t^4+m_{t_2}^4+m_{H_1}^4-2\left(m_t^2m_{t_2}^2+m_t^2m_{H_1}^2+m_{t_2}^2m_{H_1}^2\right)}{4m_{t_2}^2}\nonumber\\ p_{b}^2=\frac{m_{t_2}^4+m_{b}^4+m_{W}^4-2\left(m_b^2m_{t_2}^2+m_b^2m_W^2+m_{t_2}^2m_{W}^2\right)}{4m_{t_2}^2} \end{eqnarray} and $E_{i}^2=m_i^2+p_{i}^2$. The invariant mass acquires its maximum value when the angle between the top quark and the bottom quark is $\pi$. The edge of the invariant mass is a function of the masses of $H_1$ and $t_2$, as shown in Eq.~\ref{sample}. The right panel of Fig.~\ref{edges} gives the position of the edge in the $m_{tb}$ distribution as a function of $m_{t_2}$. It is plotted for two different masses of the heavy scalar $H_{1}$: the green curve represents the edge for $m_{H_1}=1100$ GeV and the blue curve is for $1200$ GeV. \begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=9cm]{m2_edge.pdf} \includegraphics[width=9cm]{mtb_edge.pdf} \end{tabular} \end{center} \caption{\it{Variation of the edge of the invariant mass of the two final state quarks with the mass of $t_{2}$ (left figure). The green plot represents the edge for $H_1 \rightarrow t b W$ and the magenta plot is for $H_{1} \rightarrow t t h$. Variation of the edge of $m_{tb}$ with the mass of $t_{2}$ (right figure) for $H_1$ with mass 1.1 TeV (green) and 1.2 TeV (blue).}} \label{edges} \end{figure} It is clear from Fig.~\ref{edges} that the position of the edge is unique to the choice of masses under consideration. At this stage it is important to note that we are limited in our choices of masses for these two particles.
Due to s-channel suppression, the production cross section of the heavy Higgs falls rapidly with increasing mass. Keeping the rate observable therefore forces a relatively light $H_1$, which in turn necessitates a light vector-like quark $(m_{t_2})$, putting it in tension with the searches for third generation vector-like quarks. Thus, we consider the following three benchmark points: \begin{eqnarray} \text{BP1}:~m_{H_1}~&=&~1.2~\text{TeV},~m_{t_2}~=~600~\text{GeV};\nonumber\\ \text{BP2}:~m_{H_1}~&=&~1.1~\text{TeV},~ m_{t_2}~=~700~\text{GeV};\nonumber\\ \text{BP3}:~m_{H_1}~&=&~1.5~\text{TeV},~ m_{t_2}~=~1000~\text{GeV} \label{benchmark} \end{eqnarray} Due to the lower cross section for the heavy Higgs-like scalar, we will consider only the dominant decay mode of $t_{2}$, i.e.\ $bW$, for BP3. The parton-level distribution of $m_{tb}$ is given in the left panel of Fig.~\ref{channel1_parton}. Clearly, the distribution exhibits a kinematic edge for both benchmark points BP1 and BP2 of Eq.~\ref{benchmark}. \begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=9cm]{mtb_parton.pdf} \includegraphics[width=9cm]{mlb_parton.pdf} \end{tabular} \end{center} \caption{\it{Parton-level distributions of $m_{tb}$ (left) and $m_{lb}$ (right) for the two benchmark points, BP1 (red) and BP2 (black)}} \label{channel1_parton} \end{figure} The right panel of Fig.~\ref{channel1_parton} shows the corresponding distribution of $m_{bl}$. \item \textbf{Kinematic edge in the $m_{lb}$ distribution:} In addition to $m_{tb}$, the invariant mass distribution of the bottom quark and the lepton also exhibits a distinct edge, given by \begin{equation} \left(m^{edge}_{bl}\right)^2~=~m_b^2~+~2\left(E_{b}E_{l} + |\textbf{P}_{b}||\textbf{P}_{l}| \right) \label{edgebl} \end{equation} where the lepton ($l$) originates from the $W$ decay.
The magnitudes of the momenta in the rest frame of the $W$ are given as \begin{equation} p_{l}^2=\frac{m_W^2}{4}~,~p_{b}^2=\frac{m_{t_2}^4+m_{b}^4+m_{W}^4-2\left(m_b^2m_{t_2}^2+m_b^2m_W^2+m_{t_2}^2m_{W}^2\right)}{4m_{W}^2} \end{equation} and $E_{i}^2=m_i^2+p_{i}^2$. \item \textbf{Kinematic edge in the $m_{tbl}$ distribution:} For completeness, we also note that the invariant mass of the top, bottom and lepton system shows behaviour similar to $m_{tb}$ and $m_{bl}$. The distribution has an edge at the mass of the heavy resonance, i.e.\ $m_{H_1}$. The distribution does not reveal any information beyond what is obtained from $m_{tb}$ and $m_{bl}$ and hence will not be considered further. \end{itemize} The parton-level plots in Fig.~\ref{channel1_parton} are generated by implementing the Lagrangian given in Eq.~\ref{effective_lagrangian} in FEYNRULES~\cite{Christensen:2008py} and interfacing it with MADGRAPH~\cite{Alwall:2014hca}. For a given benchmark point, say BP2, substitution of the masses in Eq.~\ref{edgetb} gives $m^{edge}_{tb}$ at about 830 GeV, which is roughly the location of the edge in the plot. Similar conclusions hold for the other benchmark points. Given the fact that we are restricted in our choice of the masses of the heavy resonances, Fig.~\ref{edges} can be used to determine the masses of $H_{1}$ and $t_{2}$ exclusively. The mass of $t_2$ determined from Fig.~\ref{edges} can be validated by plotting $m_{bl}$ (right panel of Fig.~\ref{channel1_parton}), which has a kinematic endpoint close to $m_{t_2}$ (at $\sqrt{m_{t_2}^2-m_W^2}$, since the lepton comes from an on-shell $W$). The presence of such kinematic edges is unique to cascade topologies of the form in Fig.~\ref{tbw}. This feature has been used extensively in SUSY searches~\cite{Lester:2006cf} for the heavy neutralino $\chi_2^0$ (equivalent to $H_{1}$ in Fig.~\ref{tbw}), which decays into a di-lepton pair (equivalent to the pair of quarks in Fig.~\ref{tbw}) and missing energy $\chi_1^0$ (equivalent to $X$).
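The endpoint values quoted in this section can be reproduced directly from the momentum formulas above. The following sketch (Python; masses in GeV, with the illustrative inputs $m_t=173$, $m_b=4.8$, $m_W=80.4$) evaluates the $m_{tb}$ and $m_{bl}$ endpoints for the benchmark points:

```python
import math

def f(a, b, c):
    """sqrt of the Kallen function lambda(a^2, b^2, c^2), in linear factors."""
    return math.sqrt((a - b - c) * (a + b - c) * (a - b + c) * (a + b + c))

def mtb_edge(mH1, mt2, mt=173.0, mb=4.8, mW=80.4):
    """Endpoint of m_tb for H1 -> t t2, t2 -> b W, evaluated in the t2
    rest frame with the t and b back to back (opening angle pi)."""
    pt = f(mH1, mt2, mt) / (2.0 * mt2)   # |p_t| in the t2 rest frame
    pb = f(mt2, mb, mW) / (2.0 * mt2)    # |p_b| in the t2 rest frame
    Et = math.hypot(pt, mt)
    Eb = math.hypot(pb, mb)
    return math.sqrt(mt**2 + mb**2 + 2.0 * (Et * Eb + pt * pb))

def mbl_edge(mt2, mb=4.8, mW=80.4):
    """Endpoint of m_bl for t2 -> b W, W -> l nu, in the W rest frame."""
    pl = mW / 2.0                        # massless lepton
    pb = f(mt2, mb, mW) / (2.0 * mW)     # |p_b| in the W rest frame
    Eb = math.hypot(pb, mb)
    return math.sqrt(mb**2 + 2.0 * (Eb * pl + pb * pl))

bp1_tb = mtb_edge(1200.0, 600.0)  # BP1: roughly 1025 GeV
bp2_tb = mtb_edge(1100.0, 700.0)  # BP2: roughly 830 GeV
bp2_bl = mbl_edge(700.0)          # BP2: just below m_t2
```

With these inputs the $m_{tb}$ endpoints land close to the expected values quoted in Table 1, and the $m_{bl}$ endpoint comes out at $\sqrt{m_{t_2}^2-m_W^2}$, slightly below $m_{t_2}$.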
Unlike SUSY, however, the known masses of the final state particles increase the utility of this variable to a far greater effect. The combinations of the invariant mass in this channel are bereft of the combinatorial uncertainties that are typical in SUSY and in the other channels discussed below. \subsection{Channel 2 : $H_1\rightarrow tt_2 \rightarrow t t Z$} We consider the leptonic decay of the $Z$-boson while both tops decay hadronically. Similar to Channel 1, this mode also exhibits the following features: \begin{itemize} \item \textbf{Kinematic edge in the $m_{tt}$ distribution:} The distribution of the invariant mass of the top pair has an edge given by \footnote{The distribution of the invariant mass of the top quark which is the daughter of $t_2$ and one of the leptons will also have a kinematic edge similar to $m_{lb}$. However, unlike for $m_{lb}$, the identity of the top is uncertain, which leads to a combinatorial uncertainty. Additionally, the transverse mass of the top quark and the Z-boson, defined by $m_T=\sqrt{m_Z^2+m_t^2+2\left(E^t_TE^Z_T-\textbf{p}^t_T.\textbf{p}^Z_T\right)}$, also has an edge at $m_{t_2}$.} \begin{equation} \left(m^{edge}_{tt}\right)^2~=~2 m_t^2~+~2\left(E_{t_{a}}E_{t_{b}} + |\textbf{P}_{t_{a}}||\textbf{P}_{t_{b}}| \right) \label{edgett} \end{equation} The magnitudes of the momenta of the top quarks in the rest frame of $t_2$ are given by \begin{eqnarray} p_{t_a}^2=\frac{m_t^4+m_{t_2}^4+m_{H_1}^4-2\left(m_t^2m_{t_2}^2+m_t^2m_{H_1}^2+m_{t_2}^2m_{H_1}^2\right)}{4m_{t_2}^2}\nonumber\\ p_{t_b}^2=\frac{m_t^4+m_{t_2}^4+m_{Z}^4-2\left(m_t^2m_{t_2}^2+m_t^2m_Z^2+m_{t_2}^2m_{Z}^2\right)}{4m_{t_2}^2} \end{eqnarray} where $t_{a,b}$ are the two tops in the event and $E_{i}^2=m_i^2+p_{i}^2$. \end{itemize} \subsection{Channel 3 : $H_1\rightarrow tt_2 \rightarrow t t h$} We consider the $h\rightarrow b\bar b$ decay mode of the Higgs as it is the most dominant. The final state is characterized by pairs of top and bottom quarks.
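The $m_{tt}$ endpoint of Eq.~(\ref{edgett}) applies to both decay chains, with $X=Z$ for Channel 2 and $X=h$ for Channel 3. A minimal numerical sketch (Python; the inputs $m_t=173$ GeV, $m_Z=91.2$ GeV and $m_h=125$ GeV are illustrative):

```python
import math

def f(a, b, c):
    """sqrt of the Kallen function lambda(a^2, b^2, c^2)."""
    return math.sqrt((a - b - c) * (a + b - c) * (a - b + c) * (a + b + c))

def mtt_edge(mH1, mt2, mX, mt=173.0):
    """Endpoint of m_tt for H1 -> t t2, t2 -> t X, in the t2 rest frame."""
    pa = f(mH1, mt2, mt) / (2.0 * mt2)  # top from the H1 decay
    pb = f(mt2, mt, mX) / (2.0 * mt2)   # top from the t2 decay
    Ea = math.hypot(pa, mt)
    Eb = math.hypot(pb, mt)
    return math.sqrt(2.0 * mt**2 + 2.0 * (Ea * Eb + pa * pb))

ch2_bp2 = mtt_edge(1100.0, 700.0, 91.2)   # Channel 2, BP2: close to 847 GeV
ch3_bp2 = mtt_edge(1100.0, 700.0, 125.0)  # Channel 3, BP2: close to 840 GeV
```

The slightly lower Channel 3 endpoint simply reflects the heavier recoiling $h$ compared to the $Z$.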
We consider one of the tops to decay semi-leptonically, which suppresses the multi-jet QCD background. Similar to Channel 2, this topology also exhibits a kinematic edge in the $m_{tt}$ distribution as well as in the transverse mass $M_T$ of the $t,h$ system. While the parton-level results are all promising, it is still challenging to observe the edges at the LHC beneath the SM backgrounds, with proper identification of the top, bottom and Higgs. The rest of the analysis is dedicated to identifying a collider strategy which can closely reproduce the parton-level plots in Fig. \ref{channel1_parton}. \\ \section{Identifying edges at the LHC} The final state particles in a collider environment are typically identified in terms of leptons, photons, $\tau$s and jets. The strength of the analysis lies not only in reproducing the parton-level plots presented earlier but also in its effectiveness in reducing the SM backgrounds. In a scenario where the top quarks, Higgs, $Z$ and $W$-bosons are boosted, their decay products can be captured inside a cone of radius $R$. The criterion to determine $R$ follows from the fact that the mass difference between the heavy Higgs $H_{1}$ and the top partner $t_{2}$ must be significantly greater than the top threshold, i.e. \begin{equation} \Delta m=m_{H_{1}}-m_{t_{2}}\geq 400~\text{GeV}, \end{equation} This ensures that the opening angle ($R$) between the top decay products is \begin{equation} R\sim\frac{2m_{t}}{p_T^{t}}\leq 1.5 \label{radius} \end{equation} The specific choice of our benchmark points in Eq.~\ref{benchmark} warrants such an opening angle. Fig.~\ref{parton} shows the $p_T$ distribution of the top quarks and the Higgs for Channel 3 with the second benchmark point. The $p_{T}$ distribution has a peak at about 350 GeV for the leading top and the Higgs. As a result, the top jet satisfies the condition in Eq.~\ref{radius}. The Higgs, on account of its lighter mass, will also satisfy the criterion in Eq.~\ref{radius}.
The slightly larger choice of $R$ ensures that the constituents of the sub-leading top (for the second and third channel) can be captured inside a jet as well.\\ \begin{figure}[!t] \begin{center} \begin{tabular}{c} \includegraphics[width=8.2cm]{partonpT.pdf} \end{tabular} \end{center} \caption{ $p_T$ distribution of the top and the Higgs at the parton-level. } \label{parton} \end{figure} \subsection{Jet Reconstruction:} The parton-level events for our signal topology are generated with {\tt{MADGRAPH}} at 14 TeV centre of mass energy using the PDF set {\tt{NNLO1}} \cite{Ball:2012cx}. The events are showered and hadronized using {\tt{PYTHIA}}~\cite{Sjostrand:2007gs}. The showered events are then subsequently passed through the {\tt{DELPHES}} detector simulator \cite{deFavereau:2013fsa} using the CMS card. We extract the calorimetric four vectors for each event using the following acceptance criteria: \begin{equation} E_{e-cal}>0.1 ~\text{GeV}\;\;\;;\;\;\;E_{h-cal}>0.5 ~\text{GeV} \end{equation} These calorimetric outputs are clustered using {\tt{FASTJET}}~\cite{Cacciari:2011ma} with the Cambridge--Aachen algorithm~\cite{Dokshitzer:1997in, Bentvelsen:1998ug} to reconstruct fat-jets. The top candidates in the event are identified using substructures of the reconstructed fat-jets, with the jet reconstruction parameter set to $R=1.5$. On account of the large transverse momentum ($p_T$) associated with each event, we require each jet to have a minimum $p_T$ of 50 GeV. The reconstructed `fat' jets are required to have rapidity in the range $[-2.5,2.5]$. All the channels discussed above are characterized by the presence of at least one top\footnote{In Channels 2 and 3 with two tops, only the leading top is tagged using the top tagger.}. There has been tremendous improvement in the techniques of identifying boosted tops using top taggers~\cite{Plehn:2009rk, Kaplan:2008ie}.
We briefly outline the algorithm adopted by us for tagging the top jet: \begin{itemize} \item Scanning through the three leading jets in each event, we identify the top candidate using {\tt{HEPTOPTAGGER}}~\cite{Plehn:2009rk}. \item The fat-jets passing the tagger are subjected to a filtering procedure where the constituents of each jet are reclustered with $R_{filt}=0.3$. \item Out of all the subjets inside a jet, only the $5$ ($n_{filt}$) hardest subjets are retained. \item The invariant mass of the three subjets is required to lie between 150 GeV and 200 GeV. \end{itemize} We now discuss the individual strategies adopted for each decay channel of the $t_2$: $\bullet$ $p p~\rightarrow~H_{1}~\rightarrow~t t_2,~t_2 \rightarrow b W:$ The three leading jets in the event correspond to the top jet, the $b$-jet and the $W$-jet. Only events with a single isolated lepton associated with the decay of the $W$-boson are selected. The leptons are isolated with respect to the fat-jets. For each event with a single lepton (at the parton-level), we construct a cone of $\Delta R=0.3$ around the lepton. The leptons are considered to be isolated if the total energy deposit within this cone is less than $10\%$ of the transverse momentum of the lepton. Since we assume the $W$ to decay leptonically, the $W$-jet can be easily differentiated from the other two by computing the hadronic energy fraction inside the jet, defined as \begin{equation} \theta_J=\frac{1}{E_j^{total}}\sum_iE_i^{h-cal} \end{equation} $E_j^{total}$ is the total energy of the $j^{th}$ jet and $E_i^{h-cal}$ is the energy deposited in the $i^{th}$ $h$-cal cell by a constituent of the $j^{th}$ jet. Since a $W$-like jet is likely to deposit all its energy in the electromagnetic calorimeter, its $\theta_J$ is likely to be close to zero. On the other hand, top-like and $b$-jets deposit most of their energy in the hadronic calorimeter (since we consider the hadronic decay of the top). This leads to larger values of $\theta_J$ for them.
It is convenient to take the logarithm of $\theta_J$, which further accentuates the difference between jets with and without hadronic activity. We identify the $W$-like jet as the fat-jet having minimum hadronic activity. The other two jets are the top-like jet and the $b$-like jet. The distribution plotted in the left panel of Fig.~\ref{thetaJ} shows the comparison of $Log(\theta_J)$ for the $W$-like jet and the other hadronic jets (labeled as Hadron Jet 1(2)) for Channel 1. \begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=8.2cm]{thetaJ_channel1.pdf}&\includegraphics[width=8.2cm]{thetaJ_channel2.pdf} \end{tabular} \end{center} \caption{\it{$\theta_J$ distribution for the three leading jets for Channel 1 (left) and Channel 2 (right)}} \label{thetaJ} \end{figure} As expected, $Log(\theta_J)$ for the top-like and $b$-jets peaks close to zero, while that for the $W$-like jet is large and negative in comparison. The presence of hadronic activity inside a $W$-like jet can be attributed to the fact that these are fat ungroomed jets and are likely to collect stray QCD activity. After the identification of the $W$-jet, we identify the top from the remaining jets through the top-tagger discussed above, while the remaining jet is considered to be the $b$-jet. We find that a cut of $Log(\theta_J)<-0.3$ on the jet identified as $W$-like is useful in suppressing the $t\bar{t}$ + jets background. The distribution of the invariant mass of the lepton and the jet not tagged as the top is plotted in the right panel of Fig.~\ref{channel1}. Using the edge of $m_{lb}$, we can determine the mass of $t_2$. The distribution of the invariant mass of the two completely hadronic jets is given in the left panel of Fig.~\ref{channel1}. Both these distributions are plotted with 150 signal points. Similar distributions can be obtained with far fewer signal points.
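A minimal sketch of this discriminant (Python; the constituent energies are toy values, and a base-10 logarithm is assumed for $Log(\theta_J)$):

```python
import math

def log_theta_j(constituents):
    """Log10 of the hadronic energy fraction theta_J of a fat-jet.
    constituents: list of (E_total, E_hcal) pairs, one per constituent."""
    e_tot = sum(e for e, _ in constituents)
    e_hcal = sum(eh for _, eh in constituents)
    theta = e_hcal / e_tot
    return math.log10(theta) if theta > 0.0 else float("-inf")

# Toy jets: a hadronic (top- or b-like) jet deposits most of its energy
# in the h-cal, while a W(-> l nu)-like fat-jet only collects stray
# QCD activity there.
hadronic_jet = [(50.0, 48.0), (30.0, 29.0)]
w_like_jet = [(60.0, 2.0), (40.0, 1.0)]

passes_w_cut = log_theta_j(w_like_jet) < -0.3  # selection cut from the text
```

With these toy deposits the hadronic jet sits near zero while the $W$-like jet is pushed well below the $-0.3$ cut, mirroring the separation seen in Fig. 5.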
\begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=8.2cm]{mtb.pdf}&\includegraphics[width=8.2cm]{mlb.pdf} \end{tabular} \end{center} \caption{\it{The simulation plots for $m_{tb}$ (left) and $m_{bl}$ (right) for the two benchmark points.}} \label{channel1} \end{figure} The position of the edge obtained using the Monte Carlo simulation closely replicates that obtained using the parton-level information, thus highlighting the strength of our analysis.\newline $\bullet$ $p p~\rightarrow~H_{1}~\rightarrow~t t_2,~t_2 \rightarrow t Z:$ This channel is characterized by the presence of two top quarks which decay hadronically, along with a $Z$ boson which is assumed to decay leptonically. The event is triggered by the presence of two isolated leptons. The $Z$-jet can be distinguished from the top jets by computing the hadronic energy fraction inside the jet, as described above. The right panel of Fig.~\ref{thetaJ} shows the comparison of $Log(\theta_J)$ for the $Z$-like jet and the other hadronic jets (labeled as Hadron Jet 1(2)) for Channel 2. As earlier, we apply a cut of $Log(\theta_J)<-0.3$ on the $Z$-jet. This is followed by tagging one top out of the two remaining jets. The left panel of Fig.~\ref{jetdist} shows the distribution of fat-jet multiplicity. The right panel of Fig.~\ref{jetdist} represents the distribution of $m_{tt}$ constructed out of the filtered hadronic jets (with relatively larger hadronic content). The distribution is plotted with 200 signal points. One of the jets is tagged as the top. The plots for both benchmark points exhibit an edge close to the expected value.\newline \begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=8.2cm]{jets.pdf}&\includegraphics[width=8.2cm]{mtt_channel2.pdf} \end{tabular} \end{center} \caption{The left panel shows the distribution of fat-jet multiplicity for Channel 2 with BP1.
The right panel corresponds to the distribution of $m_{tt}$ for this channel.} \label{jetdist} \end{figure} $\bullet$ $p p~\rightarrow~H_{1}~\rightarrow~t t_2,~t_2 \rightarrow t h:$ The signal is characterized by the presence of two top jets and a Higgs jet. We assume the $h\rightarrow b\bar b$ decay mode of the Higgs. The multi-jet QCD background can be suppressed by assuming that one of the tops decays leptonically. As a result, we select events with a single isolated lepton. Once a top is tagged, we scan over the other leading jets to tag the Higgs using the {\tt{MASSDROP}}~\cite{Butterworth:2008iy} tagger outlined below: \begin{itemize} \item For a given candidate fat-jet $j$, the last stage of clustering is undone and the jet is broken into two subjets $j_1$ and $j_2$ such that $m_{j_1}>m_{j_2}$. \item In the event of a significant mass drop, $m_{j_1}<\mu m_j$, with a not too asymmetric splitting, $y=\frac{min(p^2_{t_{j_1}},p^2_{t_{j_2}})\Delta R^2_{j_1j_2}}{m_j^2}>y_{cut}$, the jet $j$ is considered to be tagged. \item If the second condition is not satisfied, $j$ is redefined to be the subjet $j_1$ and the first step is repeated. \end{itemize} $\mu$ and $y_{cut}$ are real numbers and are chosen to be $\mu=0.67$ and $y_{cut}=0.09$. We retain only those `Higgs-like' jets whose invariant mass lies within a 10 GeV window centered on 125 GeV. The other top can be reconstructed by assuming neutrinos to be the only source of missing energy in the event. We extract the transverse components of the neutrino momentum as the negative of the vector sum of the transverse momenta of all visible particles in an event. The $z$-component of the neutrino momentum is extracted by solving the equation for the $W$-boson mass, $m_w^2=(p_l+p_\nu)^2$, and is given as \begin{equation} p_{\nu z}=\frac{1}{2p_{eT}^2}\left(Dp_{eL}\pm E_e\sqrt{D^2-4p_{eT}^2\slashed{E}_T^2} \right) \end{equation} where $D=m_w^2+2\bar p_{eT}\cdot \slashed{\bar E}_T $ and we assume $D^2-4p_{eT}^2\slashed{E}_T^2 >0$.
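A sketch of this quadratic solution (Python; the lepton is treated as massless and the four-vector inputs below are illustrative, not simulation output):

```python
import math

M_W = 80.4  # GeV

def neutrino_pz(lep, met):
    """Solve m_W^2 = (p_l + p_nu)^2 for the longitudinal neutrino momentum.
    lep = (E, px, py, pz) of the isolated lepton, met = (mex, mey).
    Returns both roots; they coincide when the discriminant vanishes."""
    E, px, py, pz = lep
    mex, mey = met
    pt2 = px * px + py * py
    D = M_W**2 + 2.0 * (px * mex + py * mey)
    disc = D * D - 4.0 * pt2 * (mex * mex + mey * mey)
    root = E * math.sqrt(max(disc, 0.0))  # the text assumes disc > 0
    return ((D * pz + root) / (2.0 * pt2), (D * pz - root) / (2.0 * pt2))

# Illustrative W -> l nu decay boosted along z (gamma = 2): the lepton
# carries (E, px, py, pz) = (80.4, 40.2, 0, 40.2*sqrt(3)) and the
# neutrino supplies all of the missing transverse momentum.
pz_plus, pz_minus = neutrino_pz((80.4, 40.2, 0.0, 40.2 * math.sqrt(3)),
                                (-40.2, 0.0))
```

In this back-to-back configuration the discriminant vanishes, so both roots reproduce the true neutrino $p_z$; in general the twofold ambiguity must be resolved, e.g. by the reconstructed top mass.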
Once the $z$-component of the `neutrino' momentum is identified, we reconstruct the $W$ using the momenta of the isolated lepton and the neutrino. We identify the anti-$k_t$ \cite{Cacciari:2008gp} $b$-tagged jet (reconstructed with $R=0.5$ and $p_T^{min}=50$ GeV) coming from the second top by demanding that the $\Delta R$ between the Higgs-like jet (top-tagged jet) and the $b$-tagged jet is greater than 1.5. Using the $b$-tagged jet and the $W$, we further reconstruct the semi-leptonic top. Fig.~\ref{edge_lepton} gives the $m_{tt}$ distribution (in green) for the signal with the $ttbb$ background superimposed (in blue). Both plots are made with about 35 signal points. The left plot shows a very distinct edge at $\sim 800$ GeV, while the expected edge for $(m_H,m_{t_2})=(1100,700)$ is $839$ GeV. Similar agreement is obtained for the mass combination $(m_H,m_{t_2})=(1200,600)$, where an edge-like feature is seen at $\sim 970$ GeV while the parton-level result is at $1023$ GeV. Thus our simulation can predict the location of the edges to within $10\%$ of the actual value and can serve as a smoking gun for the existence of such topologies. The final state for this channel is exactly similar to the $t\bar{t}h$ process in the SM. In the event of an observation of the latter, it is an irreducible background for the signal topology considered in Channel 3. However, the construction of the $m_{tt}$ invariant mass in the SM would not exhibit an edge-like feature as in the case of our signal and hence can be easily distinguished. In addition, we would like to point out that some of the techniques introduced in this work could be beneficial for probing $t\bar{t}h$ in the SM, especially with a boosted Higgs decaying as $h\rightarrow\gamma\gamma$. The `Higgs jet' could be identified as the one with low $Log(\theta_J)$, similar to the low-hadronic-content jets in Channels 1 and 2.
\begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=8.2cm]{edge_lepton1100.pdf} &\includegraphics[width=8.2cm]{edge_lepton1200.pdf} \end{tabular} \end{center} \caption{ $m_{tt}$ distribution for signal (green) and $ttbb$ background (blue) for BP1 (right) and BP2 (left).} \label{edge_lepton} \end{figure} \section{Results and Discussion:} \label{results} The observation of these distributions requires a certain number of signal events. In accordance with the branching fractions, we demand a minimum of $\sim$ 50, 30 and 30 signal points for Channels 1, 2 and 3 respectively. Table \ref{summary} gives the projected luminosities for the accumulation of these signal points, given the acceptances calculated from our simulation. In computing the projected luminosities we assume a 60$\%$ branching fraction of $H_1$ into $tt_2$. It also gives the predicted and the observed values of the edges for all three channels. \begin{table}[h] \renewcommand{\arraystretch}{1.5} \begin{tabular}{cc|c |c |c |c |c|c|c|} \cline{2-9} &\multicolumn{4}{|c}{BP1( 92 $fb$)}&\multicolumn{4}{|c|}{BP2(129 $fb$)}\\ \cline{1-9} \hline \multicolumn{1}{ |c| }{Channel}&Edge$^{obs}$&Edge$^{exp.}$&Efficiency&Luminosity($fb^{-1}$)&Edge$^{obs}$&Edge$^{exp.}$&Efficiency& Luminosity($fb^{-1}$)\\ \hline \multicolumn{1}{ |c| }{Channel 1($m_{tb}$)}&$\sim$1000&1025 & 0.005&1100 & $\sim$800&830&0.003&1300\\ \hline \multicolumn{1}{ |c| }{Channel 2($m_{tt}$)}&$\sim$1000& 1036 & 0.007&$>3000$ & $\sim$850&847&0.005&3000\\ \hline \multicolumn{1}{ |c| }{Channel 3($m_{tt}$)}&$\sim$970& 1025& 0.0008&$>3000$& $\sim$800&840&0.0008&3000\\ \hline \end{tabular} \caption{Reaches and predictions of the edges for the three different channels.
The cross-sections in brackets correspond to the gluon-gluon fusion production rate of the heavy resonance for the benchmark points at 13 TeV $N^3$LO.} \label{summary} \end{table} The smaller efficiency for Channel 3 can be attributed to the fact that, in addition to the top tagging, we also require the Higgs jet to be tagged. In addition, the leptonic top is reconstructed from the $b$-tagged jet, which must not lie inside either the top-tagged or the Higgs-tagged jet. The efficiencies for the first two channels are on the lower side due to the cut on $\theta_J$. Higher efficiencies can be obtained by relaxing or completely removing this cut. However, the cut is essential for suppressing the $t\bar t+jets$ background, which may otherwise smear the edges. Channel 1 offers the most optimistic scenario, being observable for both benchmark points at the High Luminosity (HL) phase of the LHC. Additionally, this channel is free from combinatorial uncertainties in the construction of the second edge ($m_{lb}$), which complements the $m_{tb}$ distribution. Both these aspects make it an exciting prospect to explore. The analysis from Run-I of the LHC constrains the mass of the top partner to be $>950$ GeV \cite{Aad:2016qpo}. In light of this, we also implement a scenario with $(m_{H_1},m_{t_2})=(1500,1000)$ GeV. Due to the small production cross section of the heavy scalar ($37~fb$) at this mass, only the leading decay mode $t_2\rightarrow Wb$ is relevant in this case. The edges for this benchmark point are given in Fig. \ref{edge_bp3} and Table \ref{summary1} summarizes the corresponding results.
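For completeness, the kinematic endpoint quoted for this benchmark point follows from the standard two-step cascade formula, assuming the chain $H_1\rightarrow t\, t_2$, $t_2\rightarrow W b$ (a sketch in our notation, with energies and momenta evaluated in the $t_2$ rest frame):

\begin{align}
(m_{tb}^{max})^2 &= m_t^2 + m_b^2 + 2\left(E_t E_b + |\vec{p}_t||\vec{p}_b|\right),\\
E_t &= \frac{m_{H_1}^2 - m_{t_2}^2 - m_t^2}{2\, m_{t_2}},\qquad
E_b = \frac{m_{t_2}^2 - m_W^2 + m_b^2}{2\, m_{t_2}}.
\end{align}

For $(m_{H_1},m_{t_2})=(1500,1000)$ GeV this evaluates to $\approx 1103$ GeV, in agreement with the expected edge listed in Table \ref{summary1}.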
\begin{table}[h] \renewcommand{\arraystretch}{1.5} \begin{tabular}{|c|c|c|c |c|c|} \cline{3-6} \multicolumn{2}{c}{} &\multicolumn{2}{|c}{$m_{tb}$}&\multicolumn{2}{|c|}{$m_{bl}$}\\ \cline{1-6} \hline Efficiency&Luminosity($fb^{-1}$)&Edge$^{obs}$&Edge$^{exp.}$&Edge$^{obs}$&Edge$^{exp.}$\\ \hline 0.0044&2500&1100&1103&900&997\\ \hline \end{tabular} \caption{Observed and expected values of the edges of the $m_{tb}$ and $m_{bl}$ invariant mass distributions for the third benchmark point. The luminosity corresponds to the accumulation of 40 signal points.} \label{summary1} \end{table} \begin{figure}[!t] \begin{center} \begin{tabular}{cc} \includegraphics[width=8.2cm]{mtb_bp3.pdf}&\includegraphics[width=8.2cm]{mlb_bp3.pdf} \end{tabular} \end{center} \caption{\it{The simulation plots for $m_{tb}$ (left) and $m_{bl}$ (right) for the third benchmark point $(m_{H_1},m_{t_2})=(1500,1000)$ GeV.}} \label{edge_bp3} \end{figure} The expected reach to accumulate $40$ signal points is about $2.5~ab^{-1}$. It is interesting to stress at this point that this technique is not restricted to the case of heavy scalars. The analysis can be repeated with more massive colored objects (e.g. KK excitations of the gluon), which enjoy two advantages:\\ a) Colored objects have larger production cross sections at heavier masses; a heavy scalar of mass 1.4 TeV has a cross section similar to that of a 3 TeV KK gluon. The edges corresponding to such masses will lie at much heavier scales, resulting in much less smearing.\\ b) It also increases the sensitivity to probe much heavier masses of the vector-like top partner, well beyond the limit possible at the high-luminosity LHC. This opens up many interesting possibilities and will be addressed in an upcoming publication \cite{KKg}.
It is important to note that at this stage the cuts are not tuned to achieve the desired $S/\sqrt{B}\sim 5$ for the leptonic case. They are fashioned to obtain the desired kinematic distributions with enough signal points at much lower luminosities. Observation of these distributions would serve as a smoking gun and motivate tightening the selection around the probable masses to achieve the desired significance. We find that the analysis discussed thus far serves a multi-fold objective: a) Edges are typically constructed out of leptonic final states, which show a sharp feature owing to the precise determination of the lepton momenta. In this work we have demonstrated the construction of edges out of top and bottom jets, which are liable to smearing even for the signal, and achieved a fair degree of success to this effect. The quality of the edges can be improved further by imposing $b$-tagging criteria. b) A definite pointer towards the existence of new physics scenarios. This can be further extended to argue that it is an indicator of non-MSSM scenarios. c) A hint towards the region of parameter space where such new physics resonances can be expected to lie. An analysis of this nature has an extremely wide scope. Looking for the existence of new physics by identifying characteristics unique to it can serve as a trigger which may aid the direct searches in the current and future runs.\\ \textbf{Acknowledgements:} We would like to thank Amit Chakraborty, Monoranjan Guchait and Tuhin Roy for useful discussions. We also thank N. Manglani for collaboration in the preliminary stages of the work. We would also like to thank the Department of Theoretical Physics, TIFR for the use of its computational resources.
<!DOCTYPE html>
<html class='reftest-wait'>
<body onload="document.getElementsByTagName('fieldset')[0].disabled = true; document.documentElement.className='';">
  <fieldset>
    <input type='file'>
    <input type='checkbox'>
    <input type='radio'>
    <input>
    <button>foo</button>
    <textarea></textarea>
    <select><option>foo</option></select>
    <fieldset></fieldset>
  </fieldset>
</body>
</html>
Q: configuration for haproxy in ubuntu?

I'm getting the following problem when I'm configuring haproxy stats:

Job for haproxy.service failed because the control process exited with error code. See "systemctl status haproxy.service" and "journalctl -xe" for details.

Below is my configuration file code:

listen stats 192.168.10.10:1936
    mode http
    log global
    maxconn 10
    clitimeout 100s
    srvtimeout 100s
    contimeout 100s
    timeout queue 100s
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats auth admin:password
    stats uri /haproxy?stats

global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL). This list is from:
    # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
    ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
    ssl-default-bind-options no-sslv3

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend haproxy_in
    bind *:80
    default_backend haproxy_http

backend haproxy_http
    balance roundrobin
    mode http
    server ironman 104.211.241.39:80
    server thor 104.211.246.147:80

A: if you run journalctl -xe or view your log file you would see you have severe config problems:

'listen' cannot handle unexpected argument '192.168.10.10:1936'.
parsing [/etc/haproxy/haproxy.cfg:1] : please use the 'bind' keyword for listening addresses.
Error(s) found in configuration file : /etc/haproxy/haproxy.cfg

Place the keyword "bind" in front of your stats ip:port, i.e.:

listen stats
    bind 192.168.10.10:1936
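Applied to the asker's file, the corrected stats section would look like the following sketch (the address and credentials are the asker's own values; note also that, depending on your HAProxy version, the legacy clitimeout/srvtimeout/contimeout keywords may be rejected as well, so the timeout client/server/connect spellings below are the current forms):

```
listen stats
    bind 192.168.10.10:1936
    mode http
    log global
    maxconn 10
    timeout client 100s
    timeout server 100s
    timeout connect 100s
    timeout queue 100s
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats auth admin:password
    stats uri /haproxy?stats
```

After editing, `haproxy -c -f /etc/haproxy/haproxy.cfg` validates the file before you restart the service.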
What are gastric and duodenal ulcers? Gastric and duodenal ulcers are types of peptic ulcer. The main distinction is that they affect different parts of the digestive tract. A person could have both at the same time. Some causes of peptic ulcers include an excess of stomach acid, bacterial infection, and certain medications. In this article, we look at what gastric and duodenal ulcers are and how a doctor diagnoses them. We also explore their causes and treatments, along with associated symptoms and risk factors. Gastric and duodenal ulcers are peptic ulcers, which are open sores in the lining of the digestive tract. Gastric ulcers form in the lining of the stomach. Duodenal ulcers develop in the lining of the duodenum, which is the upper part of the small intestine. Many people with peptic ulcers rely on medical treatment to relieve their symptoms. Peptic ulcers sometimes heal on their own, but they can recur if a person does not receive treatment. Symptoms of gastric and duodenal ulcers are generally similar. The most common complaint is a burning pain in the stomach. Duodenal ulcers may also cause abdominal pain a few hours after eating. This pain tends to respond well to medications or foods that reduce stomach acid, but as the effects of these wear off, the pain usually returns. Abdominal pain from a duodenal ulcer may be worse when the stomach is empty, for example, between meals, at night, or first thing in the morning. Some people with these ulcers develop intolerances for specific foods. These foods may make a person feel sick, or they may make ulcer-related symptoms worse. Some people with peptic ulcers have no symptoms. A doctor may only discover the ulcer when checking for a different digestive disorder. Anyone with symptoms of peptic ulcers should see a doctor. If symptoms are severe, seek urgent medical attention. Peptic ulcers result from damage or erosion to the protective lining of the digestive tract. Bacterial infections and certain medications can also lead to peptic ulcers. A person has a higher risk of developing a peptic ulcer if they have an overgrowth of Helicobacter pylori (H. pylori) bacteria in the digestive tract. This type of bacterial infection is common. While an H. pylori infection does not cause symptoms in most people, it sometimes irritates the lining of the digestive tract, which can lead to peptic ulcers. Long-term use of certain medications, such as nonsteroidal anti-inflammatory drugs (NSAIDs), can also damage or irritate the lining and increase the risk of peptic ulcers. NSAIDs include many over-the-counter pain relievers, such as ibuprofen (Advil), naproxen (Aleve), and aspirin. Esophageal ulcers are a type of peptic ulcer. They develop between the stomach and the throat. A person's genetics and lifestyle can also increase the risk of developing a peptic ulcer. If close family members have peptic ulcers, a person may be more likely to develop them. Smoking tobacco products can also increase a person's risk. Doctors no longer think that alcohol, spicy foods, or rich foods cause ulcers. However, consuming them may make symptoms worse or slow the healing process. The role of stress in the development of ulcers is uncertain. Some doctors believe that stress is a direct risk factor, while others do not. In one small study, psychological stress increased the risk of developing peptic ulcers. However, the researchers believed that the link was partly indirect, in that stress led to other risk behaviors, such as taking NSAIDs and smoking. Symptoms of peptic ulcers can be similar to those of other conditions, such as gallstones or gastroesophageal reflux disease, which is commonly called GERD. Receiving a correct diagnosis is essential.
A doctor may begin by asking about a person's medical history and current medications. They will also ask about symptoms and the location of any pain. A variety of tests can help confirm a diagnosis. The doctor may test the blood, stool, or breath to check for signs of H. pylori infection. The doctor may also perform an endoscopy to look for ulcers. This involves inserting a thin tube with an attached camera down a person's throat and into the stomach and upper small intestine. In some cases, a doctor may also recommend a barium swallow test. This involves swallowing a liquid that contains barium. The barium helps the doctor see the intestinal tract more clearly on an X-ray of the abdomen. A variety of medications are available to treat stomach ulcers. For most people, treatment will involve taking medications that either reduce the amount of acid in the stomach or protect the lining of the stomach and duodenum. If an H. pylori infection is responsible for the ulcers, a doctor may prescribe antibiotics to kill the bacteria. They may also prescribe medications that help suppress excess stomach acid, such as proton pump inhibitors (PPIs). If other medications, such as NSAIDs, have caused the ulcers, the doctor may prescribe a PPI or review the need for the drug. Some doctors also recommend reducing or better managing levels of stress. Untreated ulcers can cause complications. Rarely, peptic ulcers can lead to a perforation, or hole, in the wall of the stomach or intestine. A perforation can put a person at serious risk of infection in the abdominal cavity. The medical name for this infection is peritonitis. If a person with peptic ulcers experiences sudden abdominal pain that grows worse, they should see a doctor immediately. In addition, ulcers can cause internal bleeding. If this bleeding develops slowly, it can lead to anemia. Symptoms of anemia can include fatigue, pale skin, and shortness of breath. If the bleeding is severe, a person may see blood in vomit or stools.
Anyone with symptoms of severe internal bleeding should seek immediate medical attention. It may not be possible to prevent a peptic ulcer. However, reducing risk, for example by quitting tobacco use and eating a healthful diet, may help. People who use NSAIDs or other medications that can cause peptic ulcers should talk to a doctor about managing their ulcer-related risk. The medical community is not entirely certain how H. pylori spreads. People should protect themselves by cooking foods thoroughly and frequently washing the hands with soap and water. Gastric and duodenal ulcers are both types of peptic ulcer. They can cause pain and other symptoms in the digestive tract. Treatment usually involves addressing the underlying cause and taking appropriate medication, including medicine to reduce stomach acid. If left untreated, these ulcers can cause serious complications. Johnson, Jon. "What are gastric and duodenal ulcers?" Medical News Today. MediLexicon, Intl., 23 Aug. 2018. Web.
MIDA/ENIS Spring School "Contesting Authority: Knowledge, Power and Expressions of Selfhood" From Mon, 02/03/2020 to Sat, 07/03/2020 Venue: Palazzo Pedagaggi, Dipartimento di Scienze politiche e sociali, Università degli Studi di Catania (Italy). Organized by: the Innovative Training Network Mediating Islam in the Digital Age (ITN-MIDA), the European Network for Islamic Studies (ENIS) and the University of Catania. The CSIC institutions that form part of ENIS are: the Instituto de Lenguas y Culturas del Mediterráneo (ILC-CSIC, Madrid), the Escuela de Estudios Árabes (EEA-CSIC, Granada) and the Instituto Milá i Fontanals (IMF-CSIC, Barcelona). The MIDA/ENIS Spring School 2020 addresses two closely interrelated aspects of Islam in the digital age. Firstly, how (past and contemporary) technological revolutions have informed the performance of selfhood (including gender), the modes of engagement with society, and the political consequences of shifting boundaries between public and private spheres. Secondly, it addresses the construction and transformation of religious authority and religious knowledge production, and concomitant questions of legitimacy, power and discipline, under changing circumstances. Presently there is a mushrooming of YouTube channels presenting testimonials and life accounts, Facebook pages providing space for minority groups (e.g. homosexuals or ex-Muslims) that publicise previously hidden aspects of identity, as well as blogs and homemade videos communicating everyday life events, or short clips showing artistic performance in an affordable non-celebrity style, shared with a wide audience. Quite often they contain an (implicit) political statement about the societies in which the expressions are uttered, not only in the message but also in the mere fact of the utterance.
(Young) people in the Muslim world, like elsewhere, share more and more aspects of self, including more intimate and previously hidden ones, or experiences with 'illegality'. These new digital forms of self-expression also entail a claim to space for individualised selfhood. Out of sight of different regimes of surveillance, forms of marginality, secret lives and intimate experiences take on a more public form. With that, they question dominant forms of authority, whether parental, communal, religious or political. The Muslim / Arab world is usually characterised as stressing communal or relational forms of identity and putting less emphasis on individualised selfhood in comparison to the West. The Arab Uprisings first seemed to overturn some deeply rooted forms of authority, including with respect to political power, but now long-established authoritarian forms of power, with their different nuances, appear to be squarely back. Yet several observers notice a 'silent revolution' taking place on an individual level, asserting individual selfhood and rights. Do these new forms of self-narratives and artistic performances offer us insight into the development of new forms of selfhood? What are the most important characteristics and expressive forms of these new forms of selfhood? What are the potential political consequences of new forms of self-understanding and expression? Issues of selfhood and artistic performance are closely linked to questions of legitimacy, power and discipline. Muslims have held varying, sometimes conflicting, views on the extent to which knowledge and authority are exclusive to a single figure or a masculine 'professional' group, or distributed in society, how knowledge should be transmitted and controlled, the literary forms that it should take, and how it should be reproduced.
The widely held assumption that in the pre-digital era Islamic reasoning was a collective matter of established scholars and theology-centred argumentation lacks historical pedigree. The individual as a political subject emerged centuries before the dawn of digital technology. This also questions the assumption that religious authority was uncontested, only to be challenged very recently by the same technological innovations. Questioning 'established' religious authorities and addressing new audiences is as old as Islam. The invention of paper, the rise of literacy and the emergence of 'calligraphic states', and not least the spread of print technology, have had a profound influence on authority and knowledge production, but also generated new expressions of selfhood. Digitisation has intensified this process in an unprecedented way, resulting in the rise of new intellectuals, the feminisation of contestation, the 'democratisation' of knowledge production, the emergence of new audiences and discursive communities, the relocation, subjectivation, and fragmentation of authority, but also in new forms of community building, online and offline. Finally, digitisation has also prompted 'established' religious authorities to reflect upon these newly arising challenges and how to cope with them effectively.
Australian Theatre and Performance Subject ENGL40020 (2015) Contact Hours: A 2-hour seminar per week Total expected time commitment is 170 hours across the semester, including class time. Admission to the Master of Arts & Cultural Management; Postgraduate Diploma in Arts & Cultural Management; fourth year honours, postgraduate certificate or postgraduate diploma in English & Theatre Studies. For the purposes of considering requests for Reasonable Adjustments under the Disability Standards for Education (Cwth 2005), and the Students Experiencing Academic Disadvantage Policy, academic requirements for this subject are articulated in the Subject Description, Subject Objectives, Generic Skills and Assessment Requirements of this entry. Prof Denise Varney dvarney@unimelb.edu.au Australian Theatre and Performance is a study of representative Australian performing arts selected for historical, dramatic, theatrical and cultural significance. Important plays, performance groups, and artists from the 1960s until the present day will be discussed and analysed. Students will read plays, view live works and performance documentation, engage in archival research, and undertake textual and performance analysis. An interdisciplinary approach combining aspects of theatre and performance studies and cultural history will inform the subject. Artistic trends are discussed alongside analysis of social, political and cultural movements and contexts evident in the development and expansion of the national performance scene. Australian Theatre and Performance investigates the contemporary senses of diversity and innovation in the arts as well as examining challenges and the changing cultural landscape.
Upon completion of this subject students will be able to: Apply interdisciplinary methodologies drawing on aesthetic, cultural, social and economic perspectives to inform an understanding of the creation of dramatic literature and performance; Understand how Australian drama is an expression of culture and society; Broaden the understanding of contemporary drama and performance in the contemporary period; and Debate the contribution of the arts to society. An individual research paper on an aspect of the subject 5000 words 100% (due in the examination period). Students are required to attend a minimum of 80% (or 10 out of 12) classes in order to qualify to have their written work assessed. Any student who fails to meet this hurdle without valid reason will not be eligible to pass the subject. All required written work must be submitted in order to pass the subject. Essays submitted after the due date without an extension will be penalised 2% per day. Essays submitted after two weeks of the assessment due date without a formally approved application for special consideration or an extension will only be marked on a pass/fail basis if accepted. A Bovell, When the Rain Stops Falling, Currency Press, 2009. P Cornelius & C Tsiolkas et al, Who's Afraid of the Working Class? In Melbourne Stories: Three Plays, Currency Press, 2001. W Enoch et al, The Seven Stages of Grieving, Playlab Press, 1996. L. Katz, Neighbourhood Watch, Currency Press, 2011. J Kemp, Madeleine, www.australianplays.org. J. Murray-Smith, Honour (1995) Currency Press, 2006. J Romeril, The Floating World, Currency Press, 1975. C Vu, A Story Of Soil, Australian Script Centre, 2002. P. White. A Season at Sarsaparilla, in Collected Plays Volume 1, Currency Press, 2012. D Williamson, The Removalists, Currency Press, 1972. 
Students who successfully complete this subject will be able to: Prepare and present their ideas in both verbal and written mode to an advanced level and in conformity to conventions of academic presentation; Participate in discussion and group activities and be sensitive to the participation of others; Apply creative and critical thinking in the analysis of artistic works; Manage time effectively in the completion of assessment tasks; and Access a broad range of resource material, including traditional text, art works and electronic media. Graduate Diploma in Arts and Cultural Management (Advanced) Postgraduate Diploma in Arts and Cultural Management 100 Point Master of Arts and Cultural Management English and Theatre English and Theatre Studies
The Bulls are missing a big voice while Kris Dunn recovers from injury. Until then, it's on Wendell Carter Jr to lead his team despite his young age. Wise words from an NBA vet-errr, never mind. Those wise words were spoken in the visitor's locker room at the TD Garden in Boston last night by Chicago Bulls rookie Wendell Carter Jr. Yeah, that 19 year-old kid. Remind me who the leader is on this listless team right now? The 4-11 Bulls are looking anywhere and everywhere for answers during a brutal stretch of their schedule; trying to stay afloat while they wait for key pieces like Lauri Markkanen, Kris Dunn, Bobby Portis and Denzel Valentine to get healthy. Zach LaVine is doing everything he can to lead their scoring efforts on a nightly basis. But when LaVine has an off-night, things get ugly rather quickly. That's exactly what happened last night in Boston, as LaVine's 20-point game streak came crashing to a halt and the Bulls mustered just 82 points in 48 minutes. For those keeping score at home, that's 10 points fewer than they surrendered to the Golden State Warriors in 24 minutes of basketball earlier this season. As Will Perdue said, where was the pride of this young and undermanned group that night at the United Center when Klay Thompson went nuts? Why did no one step up and say, "I don't care how, but we do NOT let that guy break the 3-point record in OUR house"? It goes back to the sentiments the Bulls' youngest player expressed after their 11th loss last night. When things take a turn for the worse, they separate instead of coming together. How is it that a 19 year-old rookie sounds like the most mature and insightful person in this locker room? It's a great sign for the future of the Bulls' most recent lottery pick. This kid has "veteran leader" written all over him. But is there no one else to do the leading in a season when his focus should be developing his NBA game? Who might that be? The obvious options are LaVine and Dunn. 
One is clearly the team's best offensive player, while the other anchors the team's defense when healthy. LaVine deserves credit for shattering expectations early this season, scoring at will from all over the floor. But there's little evidence of him being the vocal leader on the floor, and that's not necessarily a bad thing. There's no written rule saying your team's best scorer must also serve as the vocal leader. (Let's remember that a young Derrick Rose was not the leader of the Bulls teams of yesteryear. No, that role belonged to Joakim Noah and Luol Deng.) Honestly, LaVine should get some measure of a pass for not being a vocal leader on the defensive end given everything being asked of him on offense. The front office duo of John Paxson and Gar Forman repeatedly laud Justin Holiday and Robin Lopez as veteran leaders for this very young team. But is there much evidence of them serving those roles on the court? Holiday might be among the team's leaders in minutes played, but that doesn't mean he's leading them in any other way. Lopez has been in and out of the rotation, and it's hard to be a leader if you're not regularly on the floor. Dunn is the vocal leader this team is missing. He spoke during the offseason about wanting to take on that role, and his teammates were ready to fall in line. Bobby Portis praised Dunn for helping him understand his defensive role as a big manning the baseline, recalling Dunn's comparison to a quarterback who can see the whole field. LaVine expressed his gratitude to Dunn for calling out his defensive shortcomings after Dunn suggested to Zach that his length and athleticism should allow him to be just as strong as Dunn on that end of the floor. But while Dunn nurses his sprained MCL, who has taken on that responsibility of holding teammates accountable? Making sure the whole team is communicating during the game and in timeouts? According to Wendell, the answer is sadly: no one. Guess what, youngin? I think we found the voice. 
It's you. Tell your teammates to fall in line until Dunn gets back. Have thoughts on last night's game or Wendell's postgame comments? Comment below or continue the conversation with me on Twitter @Bulls_Peck.
Q: Purchase history with the Google Play Billing Library

Is there a way to obtain a user's entire purchase history for an app, rather than only the most recent purchases returned by queryPurchaseHistoryAsync()? By "history" I mean a record of all of the user's transactions for every paid item the app offers.

A: Judging by the sources, the library itself has no such capability. Instead, you will have to make the requests directly against the market's Service. Such requests return part of the purchase history plus a token for requesting the transactions that did not fit into the response, which lets you retrieve the complete history. Documentation for the relevant method: https://developer.android.com/google/play/billing/billing_reference#getPurchaseHistory Documentation on the request mechanism through the market's Service: http://androiddoc.qiniudn.com/google/play/billing/billing_integrate.html
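The loop implied by the answer — repeatedly re-query with the returned token until none comes back — can be sketched language-agnostically. In this Python sketch, `fetch_page` is a hypothetical stand-in for the actual market service call (on Android, that would be the billing service's getPurchaseHistory, whose response bundle carries the page of purchases and the continuation token):

```python
def fetch_full_history(fetch_page):
    """Accumulate every page of purchase history.

    fetch_page(token) is a hypothetical stand-in for the service call:
    it returns (purchases, next_token), where next_token is None once
    the history is exhausted.
    """
    purchases = []
    token = None
    while True:
        page, token = fetch_page(token)
        purchases.extend(page)
        if token is None:
            return purchases


# Usage with a stubbed three-page history:
PAGES = {None: (["p1", "p2"], "t1"), "t1": (["p3"], "t2"), "t2": (["p4"], None)}

def fake_fetch(token):
    return PAGES[token]

print(fetch_full_history(fake_fetch))  # → ['p1', 'p2', 'p3', 'p4']
```

The same pattern applies regardless of page size: the service decides how many records fit in each response, and the client only terminates when no continuation token is returned.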
Adonisea cupes is a species of moth first described by Augustus Radcliffe Grote in 1875. Adonisea cupes belongs to the genus Adonisea and the family Noctuidae (the owlet moths). No subspecies are listed in the Catalogue of Life. References
# My report writing workflow

Blame Kieran Healy for this. Last night he posted a nice account of his writing workflow, with flow charts, lists of tools, examples of input and output, and, most important, a cogent explanation of how and why he developed his way of working. If you haven't already seen it, go there now—I'll wait.

♫ Tall and tan and young and lovely, the girl from Ipanema goes… ♫

Oh, you're back. Good. Yes, that was nice of him to quote from one of my posts. But let's move on to the blame thing. Prof. Healy's post inspired me to draw up my own workflow for the reports I write.1

There's a lot of stuff in there, but my input comes only at the points where you see my picture. The workflow starts with me writing the text of the report in Markdown format. That file invariably contains references to photographs, drawings, or plots that'll be included in the final output. These are usually prepared before I start writing, but are, more often than not, edited, annotated, or otherwise adjusted as I write the text, because the process of writing gives me new ideas on how to present the information.

The Markdown file is processed by a shell script, md2report, which spits out a LaTeX version of the report. Most of the time this file is ready to be processed by pdflatex, which also gathers in all the image files and creates the PDF output. If the report needs a particularly complex table, or if I need to tweak the spacing somewhere, I'll go in and edit the LaTeX file directly. This is pretty rare, which is why I've grayed out my input to the LaTeX file. The great majority of my reports need no direct LaTeX input from me.

My goal is for almost all of my effort to be spent on the content of the report and almost none on styling and processing. The styles are all defined in auxiliary files and scripts that I wrote long ago. The only processing I do consists of invocations of the two executables on the main spine of the workflow:

    md2report report
    pdflatex report

(They're smart enough to know the file extensions.)

The md2report script is really just a pipeline. It runs the Markdown file through mmmd, which is my fork of Fletcher Penney's MultiMarkdown, and John Gruber's Smarty Pants to produce XHTML. This is then piped through xsltproc to produce LaTeX. The XSL file that defines the transformation, xhtml2article.xsl, is my fork of Fletcher's original. Finally, the LaTeX is piped through a couple of small scripts, addsignature and separateplates, to add some final formatting touches.

I should mention here that I forked MultiMarkdown and the XSL file in 2005 or 2006 so I could include equations in my reports (which MultiMarkdown didn't support at the time) and use a LaTeX style file, report.sty, that I'd developed some years earlier, when I wrote my reports in LaTeX directly.2 If I were starting today, I'd use the current MultiMarkdown (or Pandoc, as Kieran does), but since what I have works, there's no incentive for me to change. My job is to produce reports, not workflows.

I did spend quite a while developing this workflow, but that was many years ago and the effort has paid off several times over. Randall Munroe isn't always right.

1. I'm pretty sure that by linking the image below directly to the original size—which you can get to by clicking on it—I'm violating Flickr's terms of service. I hope you appreciate the risk I'm taking. Also, here's the Flickr image page I'm supposed to link to.
8.4: The general case in three dimensions

After considering these simple examples, it should hopefully be clear how to construct Brillouin Zones. Whilst in two dimensions this geometric method is easy to apply, in three dimensions the lattice cannot be represented on a piece of paper and in general it is much harder to picture the shape of the Brillouin Zones beyond the first. This section considers how the relevant Bragg planes for zone construction may be generated in a systematic fashion.

In vector notation, the equation of a plane may be written as

$$(\mathbf{r}-\mathbf{a}) \cdot \hat{\mathbf{n}} = 0$$

In this expression, $\mathbf{a}$ is a vector from the origin to a specified (but arbitrary) point in the plane, $\mathbf{r}$ is a general point in the plane and $\hat{\mathbf{n}}$ is the unit vector normal to the plane.

For Brillouin Zones it is convenient to choose $\mathbf{a}$ so that it is the perpendicular vector from the origin to the Bragg plane of interest, i.e., a vector of the form

$$\mathbf{a} = \frac{1}{2}\left(h \mathbf{b}_{1} + k \mathbf{b}_{2} + l \mathbf{b}_{3}\right)$$

The unit normal is then given by

$$\hat{\mathbf{n}} = \frac{\mathbf{a}}{|\mathbf{a}|}$$

Letting $h$, $k$ and $l$ be integers (positive or negative) and excluding the point where all three are equal to zero, the relevant Bragg planes may be generated in a systematic fashion. For finding the first three or four zones, it is usually sufficient to have $h$, $k$ and $l$ range between −3 and +3.

Then, given any point in reciprocal space, it may be allocated to a Brillouin zone by determining the number, $N$, of Bragg planes that lie between that point and the origin. The point is then in the $(N+1)$th Brillouin Zone.

Whether a Bragg plane lies between a general point, $\mathbf{r}$, and the origin may be determined quite simply by considering the projection of the position vector of the point in the direction of the unit normal to the Bragg plane. The scalar product

$$\mathbf{a} \cdot (\mathbf{r} - \mathbf{a})$$

is positive if the plane lies between the point and the origin, and negative when the plane does not lie between the point and the origin.

The main complication when extending to Brillouin Zones beyond the first zone is that it is very easy to overlook an important Bragg plane. This can only really be avoided by being careful and systematic. A useful test to indicate if a mistake has been made is to check that all the zones have the same symmetries as the reciprocal lattice itself. For example, the Brillouin Zones for the 2-D square lattice always have fourfold rotational symmetry about the origin. A further useful check is to confirm that the area (in 2D) or volume (in 3D) of each Brillouin zone is the same.

Some examples which show the Brillouin Zones for common 2-D lattices may be found by clicking the links below.
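The plane-counting rule above is easy to mechanize. The following is an illustrative sketch (not part of the original page) for a 2-D square reciprocal lattice; `brillouin_zone` is a hypothetical helper name, and the rule implemented is exactly the one stated in the text: count the Bragg planes with $\mathbf{a} \cdot (\mathbf{r} - \mathbf{a}) > 0$, then the point lies in zone $N+1$.

```python
import math
from itertools import product

def brillouin_zone(r, b1, b2, hk_range=3):
    """Brillouin-zone index (1-based) of a reciprocal-space point r.

    b1, b2 are the reciprocal lattice basis vectors.  For every Bragg
    plane with perpendicular vector a = (h*b1 + k*b2)/2, the plane lies
    between r and the origin iff a . (r - a) > 0; counting N such
    planes puts r in the (N+1)th zone.
    """
    n_between = 0
    for h, k in product(range(-hk_range, hk_range + 1), repeat=2):
        if h == k == 0:
            continue  # exclude the origin of the reciprocal lattice
        ax = 0.5 * (h * b1[0] + k * b2[0])
        ay = 0.5 * (h * b1[1] + k * b2[1])
        if ax * (r[0] - ax) + ay * (r[1] - ay) > 0:
            n_between += 1
    return n_between + 1

# 2-D square lattice, lattice parameter 1: the first zone is |x|, |y| < pi.
b1, b2 = (2 * math.pi, 0.0), (0.0, 2 * math.pi)
print(brillouin_zone((0.1, 0.1), b1, b2))   # point near the origin
print(brillouin_zone((3.3, 0.0), b1, b2))   # just past the first Bragg plane
```

The symmetry check suggested in the text carries over directly: points related by a fourfold rotation, such as (3.3, 0) and (0, 3.3), should report the same zone index.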
Attacks on birthright citizenship — such as the one recently published in the Washington Post by former Trump White House aide Michael Anton, he of the "Flight 93 manifesto" — are nothing new. They bubbled up during the right-wing anti-immigration politics of the George W. Bush era, with a further boost from birtherism. Birthright citizenship is a constitutional right, no less for the children of undocumented persons than for descendants of passengers of the Mayflower. [A]s late as 1957, pro-Southern commentators took the position that the amendment itself, having been imposed upon the South by Yankee scum, was not valid at all. That battle goes on today. It is, and has been for 150 years, a battle for the very heart of the American republic. This helps explain how seriously Epps takes the stakes involved, but his response to Anton went even further, focusing ultimately on the latter's extraordinary claim that President Trump alone, via executive order, could effectively invalidate the amendment with a stroke of a pen. No one but Anton, Epps wrote, "has dared to suggest that a president could void the citizenship clause by executive order." If that happened, he told Salon in a recent interview, "It would be the biggest constitutional crisis of our lifetime." But before exploring what's really at stake, we need to understand just how ridiculous Anton's position really is. The framers of the 14th Amendment added the jurisdiction clause precisely to distinguish between people to whom the United States owes citizenship and those to whom it does not. Freed slaves definitely qualified. The children of immigrants who came here illegally clearly don't. There was of course no remote equivalent in 1868 to the "children of immigrants who came here illegally": The United States did not pass its first restrictive immigration law, the Page Act, until 1875. It's hard to say what Anton thinks he means here: Do we all get to vote on who gets to be a citizen and who doesn't? 
By this nonsensical logic, every newborn American — even a 10th-generation Daughter of the American Revolution — would need the unanimous consent of every existing citizen in order to become a citizen themselves! If that were actually the case, we might end up with no citizens at all! This "intellectual" train wreck shouldn't be seen as an isolated event, or only loosely related to earlier attacks on the 14th Amendment. They are all part of a sweeping, long-standing conservative attack against Reconstruction and the Progressive Era, which has sought in various ways to invalidate, deny or repeal the constitutional amendments passed in those eras, which laid the foundations for modern-day American society. If we want to truly understand the deadly stakes behind Anton's gambit, we first need to take several steps backward and grasp the historical context from which it appears. Conservatives have never been shy about attacking the Great Society programs of the 1960s, and it's well known they want to repeal the New Deal as well. But there's a much broader record of attacks on virtually every expansion of federal power and individual rights since the Civil War. For example, former Rep. Ron Paul's attacks on the income tax reflect one facet of this. Cripple the government's power to tax, and you cripple the government. What could be easier? In 2007, Paul even tried to equate militia-style tax evaders with Martin Luther King Jr., in a since-deleted video. There's an earlier 2004 CNBC video in which Paul says he is "concerned about the way the 16th Amendment was passed" and doesn't think it was "technically correct," trying to deploy minor grammatical disparities to throw out the constitutional amendment that empowered Congress to levy a national income tax. 
There's a whole alternative universe of similarly bogus arguments, many based on earlier amendments, which the 16th Amendment would simply have overridden, if any of those arguments actually held water (which they don't). After Barack Obama's election in 2008, the Tea Party movement sparked a wave of conservative calls to repeal the popular election of senators, enabled by the 17th Amendment. That might seem an odd cause for a "populist" uprising to embrace, until you recall that the Koch brothers had spent nearly a decade trying to get the Tea Party off the ground. Women's suffrage has been attacked, most prominently by Ann Coulter, and echoed by others. Lest it be thought these are all comical gestures, found only on the far-right margins, we should also include the Supreme Court's Shelby County decision of 2013, which effectively invalidated the 15th Amendment's protection of minority voting rights. It casually disregarded Section 2, "The Congress shall have power to enforce this article by appropriate legislation," on the basis of an imaginary "fundamental principle" dreamed up by Chief Justice John "Balls and Strikes" Roberts. These and many more lines of conservative attack reflect a deep hostility to what modern America has become, not just since Lyndon Johnson or Franklin D. Roosevelt, but since Teddy Roosevelt and Abraham Lincoln. [M]any amendments … have been designed to push American government and society in a progressive direction. The Thirteenth Amendment outlaws slavery in sweeping terms; the Fourteenth protects the civil rights and legal equality of citizens; the Fifteenth, Seventeenth, Twenty-Fourth, and Twenty-Sixth Amendments all expand the right to vote and protect it against state interference. The Sixteenth Amendment gives the federal government the power to enact a progressive income tax; the Seventeenth requires that the people, not legislators, choose United States senators. This is a historically predictable development.
In "Revolution and Rebellion in the Early Modern World," Jack Goldstone first presented a comprehensive theoretical framework for understanding the processes involved in state breakdown, based on studies spanning Europe, China and the Middle East from 1500 to 1850. There was, he argued, a single basic process, which "unfolded like a fugue, with a major trend giving birth to four related critical trends that combined for a tumultuous conclusion." The fourth and final trend was the rise of heterodox belief systems in response to the material breakdowns of the other three. The decline of traditional religious denominations, the rise of televangelism and the spread of quasi-religious fake history narratives to reinterpret the U.S. Constitution are all facets of the same phenomena that Goldstone first observed in the early modern world, in a book published 25 years ago. Conservatives enjoy an asymmetric advantage, in part, because they incline toward sweeping narratives, geared toward persuasion, while liberals incline toward practical problem-solving, guided by Enlightenment models of reason. That's the argument of Chris Mooney in "The Republican Brain," also presented by George Lakoff in multiple works. This also reflects the ancient distinction between mythos (finding meaning in the world) and logos (figuring out how things work). Liberals can't simply adopt conservatives' ways of thinking and doing politics, but both sides of this divide have deep roots in human nature, so no rejection of their existing strengths is really needed — only a development of what they've tended to neglect. "Right here in my home state of North Carolina, a white minister and a black minister worked together in 1868 to write the Constitution whose moral language has guided our 21st-century movement," Barber told me in a 2016 interview. "Their language scared extremists then as much as it does now." 
That state-level action reflected a similar national vision that was bound up in the drafting of the 14th Amendment, which has become the subject of intense disinformation on its 150th anniversary. That brings us back to the subject of the 14th Amendment itself, the battle over its meaning and significance, and the clownish but potentially deadly turn it has just taken in our public life. In his discussion with Salon, Epps noted that there's wide diversity within the conservative position as well as some common threads, a point also made in his book. Still, a basic argument can be made about what the 14th Amendment intended and why conservatives continue to argue against it. If you think that the problem was the desire of various communities to treat people as in essence stateless people, not really Americans, you can see that the idea of this paramount national citizenship is the key. It's the centerpiece of section 1 of the 14th Amendment. Everybody is a citizen, that's step one. Step two is privileges or immunities of citizenship, those are national [and] states can't abridge them. Then third and fourth are due process and equal protection for persons. … So you can see that the 14th Amendment is very carefully put together. I think it's perceived by a lot of people today as just a random set of operations, but it's not. The attack on birthright citizenship strikes at the very foundation for all that. "Paramount national uniform citizenship is the keystone of the 14th Amendment, and the reforms made to our Constitution after the Civil War," Epps said. "Notice that the 15th amendment, the right to vote, references 'citizens of the United States.' So the key to the United States as a democratic nation is that there is one citizenship, paramount. It is not a gift of the government, it is a birthright. "In historical terms, the attempts to alter that represent precisely the same impulse that led to Southern segregation," he continued.
"That is, 'You're not serious about that. We can't have everybody be a citizen. We need a caste of people that we are better than, and that we can make use of.' That was what segregation was about, and that's what would happen," he concluded, if the principle of birthright citizenship were eliminated. One thing that gives conservatives a powerful rhetorical advantage is their belief in an overriding purpose driving the Constitution, even if that purpose can seem demented or obscure. But this clashes head-on with the historical reality that the Constitution was and is a practical document shaped by the need to confront practical problems, including new problems that arise from past inadequacies. This is the larger situation we find ourselves in: liberal logos and pragmatism, versus conservative mythos and fantasies of salvation. Anton's call for Trump to abolish birthright citizenship with the stroke of a pen is a "da Vinci Code" move, which could plunge America into a state of political chaos not seen since the Civil War. The progressive mythos we need to fight back is right there before us, in America's generations-long struggle to forge "a more perfect union." E pluribus unum. Its moral core is as simple as the parent-child bond, and the universal desire to make a better world for our children.
package com.miloshpetrov.sol2.game.screens;

import com.miloshpetrov.sol2.SolApplication;
import com.miloshpetrov.sol2.ui.SolLayouts;

public class GameScreens {
  public final MainScreen mainScreen;
  public final MapScreen mapScreen;
  public final MenuScreen menuScreen;
  public final InventoryScreen inventoryScreen;
  public final TalkScreen talkScreen;

  public GameScreens(float r, SolApplication cmp) {
    SolLayouts layouts = cmp.getLayouts();
    RightPaneLayout rightPaneLayout = layouts.rightPaneLayout;
    mainScreen = new MainScreen(r, rightPaneLayout, cmp);
    mapScreen = new MapScreen(rightPaneLayout, cmp.isMobile(), r, cmp.getOptions());
    menuScreen = new MenuScreen(layouts.menuLayout, cmp.getOptions());
    inventoryScreen = new InventoryScreen(r, cmp.getOptions());
    talkScreen = new TalkScreen(layouts.menuLayout, cmp.getOptions());
  }
}
Demo to test dsengine
Earlier this past spring, I started reading Game of Thrones. I was instantly enthralled and completely taken by the storyline and characters! After I read the first 3 books (starting the 4th soon!), I started watching the HBO series and it was amazing to see this world come to life! This cake was for a boyfriend/girlfriend celebrating their birthdays. They're both Game of Thrones fans. He especially loves House Lannister. So my amazingly talented husband drew the House Lannister sigil by hand on MM Fondant with an edible food marker. But the cake wasn't quite done there. The she in the couple likes Game of Thrones enough, but her true love is Harry Potter. So, the cake also has a small nod to Harry Potter in the border, which is meant to mimic the binding of the Harry Potter books (but in House Lannister colors)! I was beyond thrilled about this Game of Thrones/Harry Potter mash up! Especially since I am a huge fan of both! I've never heard of anyone loving the Lannisters (aside from Tyrion) but regardless this cake is so cool!!
# auto-pst-pdf and hyperref conflict

This code doesn't work on TeX Live 2021 or on MiKTeX (MiKTeX 21.8) with pdflatex: the auxiliary compilation fails.

    \documentclass{article}
    \usepackage{pstricks}
    %\usepackage[cleanup={},pspdf={-dALLOWPSTRANSPARENCY -dNOSAFER}]{auto-pst-pdf}%
    \usepackage{auto-pst-pdf}
    \usepackage{hyperref}
    \begin{document}

    Some text
    \begin{pspicture}(0,0)(5,5)
    \psline(0,0)(5,5)
    \end{pspicture}
    Other text

    \end{document}

Just to make explicit the answer that can be found by following Ulrike Fischer's comment, adding the following to the preamble should work (also, I think -dALLOWPSTRANSPARENCY is currently needed in the ps2pdf step, so do not comment it out):

    \makeatletter
    \AtBeginDocument{
    \ifpdf\else

However, it might not behave well in future versions, and it is not enough to make [auto-]pst-pdf work with beamer.
{"url":"https:\/\/mran.microsoft.com\/snapshot\/2020-12-04\/web\/packages\/wwntests\/vignettes\/wwntests_vignette.html","text":"# An Overview of the wwntests Package\n\n#### 2020-05-18\n\nThis package aims to provide a variety of hypothesis tests to be used on functional data, testing assumptions of weak\/strong white noise, conditional heteroscedasticity, and stationarity.\n\nWe draw up some simple expository examples with a sample of Brownian motion curves and a sample of FAR(1,0.75)-IID curves (which are conditionally heteroscedastic).\n\nlibrary(wwntests)\n#> Registered S3 method overwritten by 'quantmod':\n#> method from\n#> as.zoo.data.frame zoo\nset.seed(1234)\nb <- brown_motion(N = 200, J = 100)\nf <- far_1_S(N = 200, J = 100, S = 0.75)\n\nNote that \u2018T\u2019 denotes the number of samples (Brownian motions) and \u2018J\u2019 denotes the number of times each Brownian motion is sampled, henceforth referred to as the resolution of the data.\n\nWe denote a discretely observed functional time series of length $$T$$ by $$\\{X_i(u) : 1 \\le i \\le T, u \\in (0, 1]\\} = (X_i)$$ (the parameter $$i$$ indexes the samples). Each $$X_i$$ is seen as an element of the Hilbert space of real-valued square integrable functions on the interval $$(0,1]$$.\n\n## Single- and Multi-Lag Tests [1]\n\nThe autocovariance function for a given lag h is given by $$\\gamma_h(t,s) = E[(X_0(t) - \\mu_X(t))(X_h(s) - \\mu_X(s))$$. The single-lag test tests the hypothesis $$\\mathscr{H}_{0,h} : \\gamma_h(t,s) = 0$$. Thus, this test is useful to identify correlation at a specified lag.\n\nOn the other hand, the multi-lag test is able to identify correlation over a range of lags. 
It tests the hypothesis $$\\mathscr{H}_{0,K} : \\forall j \\in \\{1, \\ldots, K\\} \\gamma_j(t,s) = 0$$.\n\nThe tests statistics for $$\\mathscr{H}_{0, h}$$ and $$\\mathscr{H}_{0,K}$$ are $Q_{T, h} = T || \\gamma_h ||^2 \\text{ and } V_{T, K} = T \\sum_{h = 1}^K ||\\gamma_h||^2$ For a complete and rigorous treatment of this process, and the theory these two tests, please refer to Kokoszka, Rice, Shang [1].\n\n### Applying the Single-Lag and Multi-Lag Tests to Data\n\nWe try the single-lag tests with a lag of 1, and the multi-lag test with a maximum lag of 10 (note, the default significance level is $$\\alpha = 0.05$$) on our functional Brownian motion and FAR data using the fport_test function and passing the string handles \u2018single-lag\u2019 and \u2018multi-lag\u2019 to the test parameter. For the single-lag test, the lag parameter determines the lag of the of the autocovariance function, and for the multi-lag test, it determines the maximum lag to include in $$V_{T,K}$$ (that is, lag = $$K$$).\n\nfport_test(f_data = b, test = 'single-lag', lag = 1, suppress_raw_output = TRUE)\n#> Single-Lag Test\n#>\n#> null hypothesis: the series is uncorrelated at lag 1\n#> p-value = 0.729104\n#> sample size = 200\n#> lag = 1\nfport_test(f_data = f, test = 'single-lag', lag = 1, suppress_raw_output = TRUE)\n#> Single-Lag Test\n#>\n#> null hypothesis: the series is uncorrelated at lag 1\n#> p-value = 0.000000\n#> sample size = 200\n#> lag = 1\nfport_test(f_data = b, test = 'multi-lag', lag = 10, suppress_raw_output = TRUE)\n#> Multi-Lag Test\n#>\n#> null hypothesis: the series is a weak white noise\n#> p-value = 0.797724\n#> sample size = 200\n#> maximum lag = 10\nfport_test(f_data = f, test = 'multi-lag', lag = 10, suppress_raw_output = TRUE)\n#> Multi-Lag Test\n#>\n#> null hypothesis: the series is a weak white noise\n#> p-value = 0.000000\n#> sample size = 200\n#> maximum lag = 10\n\nWe omit any analysis of results here for the sake of brevity, however, one will see that 
all results are as expected given our knowledge of the underlying data generating processes.\n\n### Visualizing the Single-Lag Test\n\nThe nature of the single-lag test allows for a simple and illustrative visualization. The autocorrelation_coeff_plot plots estimated autocorrelation coefficients, which are defined by $$\\rho_h = \\frac{||\\gamma_h||}{\\int y_0(t,t)\\mu(dt)}$$, over a range of lags. It also plots confidence bounds (for a significance level $$\\alpha$$) for these coefficients under weak white noise (plotted in blue) and strong white noise assumptions (constant, plotted in red). We remark that these bounds should be violated approximately $$\\alpha \\%$$ of the time if the underlying assumptions are satisfied. We plot the single-lag autocorrelation plots for our Brownian motion and FAR data below.\n\nautocorrelation_coeff_plot(f_data = b, K = 20)\n\nautocorrelation_coeff_plot(f_data = f, K = 20)\n\n## The Spectral Density Test [2]\n\nThe single-lag test, and in particular, the multi-lag test, are computationally expensive. Another supported test, referred to by its string handle \u2018spectral\u2019, which is significantly faster. The drawback of this test, is that it is not built for general white noise (e.g.\u00a0functional conditionally heteroscedastic) series. It is based on the spectral density operator $$\\mathscr{F}(\\omega) = \\frac{1}{2\\pi} \\sum_{j \\in \\mathbb{Z}} C(j)e^{-ij\\omega}, \\omega \\in [-\\pi, \\pi]$$, where $$C(j)$$ are the autocovariance operators, $$C(j) = E[X_j \\otimes X_0], j \\in \\mathbb{Z}$$. 
These operators are estimated by $$\\hat{C}_n(j) = \\frac{1}{n} \\sum_{t = j+1}^n u_t \\otimes u_{t-j}, 0 \\le j < n$$ and $$\\hat{\\mathscr{F}}_n(\\omega) = \\frac{1}{2\\pi} \\sum_{|j|<n} k(\\frac{j}{p_n})\\hat{C}_n(j)e^{-ij\\omega}, \\omega \\in [-\\pi, \\pi]$$, where $$k$$ is a user-chosen kernel function and $$p_n$$ is the bandwidth parameter (or lag-window); it may either be a user-inputted positive integer, computed from the sample size via $$p_n = n^{\\frac{1}{2q+1}}$$, or computed via a data-adaptive process (see Characiejus, Rice [2]). Currently supported kernel functions are the Bartlett and Parzen kernels: \\begin{align*} k_B(x) &= \\begin{cases} 1 - |x| & \\text{ for } |x| \\le 1 \\\\ 0 & \\text{ otherwise } \\end{cases} & \\text{(Bartlett)} \\\\ k_P(x) &= \\begin{cases} 1 - 6x^2 + 6|x|^3 & \\text{ for } 0 \\le |x| \\le \\frac{1}{2} \\\\ 2(1 - |x|)^3 & \\text{ for } \\frac{1}{2} \\le |x| \\le 1 \\\\ 0 & \\text{ otherwise } \\end{cases} & \\text{(Parzen)} \\end{align*}\n\nWe then consider the the distance $$Q$$ (in terms of integrated normed error) between the spectral density operator $$\\mathscr{F}(\\omega), \\omega \\in [-\\pi, \\pi]$$ and $$\\frac{1}{2\\pi}C(0)$$: $Q^2 = 2 \\pi \\int_{-\\pi}^{\\pi} || \\mathscr{F}(\\omega) - \\frac{1}{2\\pi}C(0)||_2^2 d \\omega$ The test statistic is: $T_n = T_n(k, p_n) = \\frac{2^{-1} n \\hat{Q}_n^2 - \\hat{\\sigma}_n^4C_n(k)}{||\\hat{C}_n(0)||_2^2 \\sqrt{2D_n(k)}}, n \\ge 1$ , where $$\\hat{\\sigma}^2_n = n^{-1} \\sum_{t=1}^n ||X_t||^2$$, $$C_n(k) = \\sum_{j=1}^{n-1}(1 - \\frac{j}{n})k^2(\\frac{j}{p_n})$$, and $$D_n(k) = \\sum_{j=1}^{n-2} (1 - \\frac{j}{n})(1 - \\frac{j+1}{n})k^4(\\frac{j}{p_n})$$. 
We actually use a power transformation of this test statistic proposed by Chen and Deo [5], but this is quite involved and we will omit this (see [2], [5]).\n\n### Applying the Spectral Density Test to Data\n\nWe apply the spectral density test to our Brownian motion and FAR data with some different parameter configurations for illustration.\n\nfport_test(b, test='spectral', bandwidth = 'static', suppress_raw_output = TRUE)\n#> Spectral Test\n#>\n#> null hypothesis: the series is iid\n#> p-value = 0.926583\n#> sample size = 200\n#> kernel function = Bartlett\n#> bandwidth = 5.848035\n#> bandwidth selection = static\nfport_test(b, test='spectral', kernel = 'Bartlett', bandwidth = 3, suppress_raw_output = TRUE)\n#> Spectral Test\n#>\n#> null hypothesis: the series is iid\n#> p-value = 0.811402\n#> sample size = 200\n#> kernel function = Bartlett\n#> bandwidth = 3.000000\n#> bandwidth selection = 3\nfport_test(f, test='spectral', kernel = 'Parzen', bandwidth = 10, suppress_raw_output = TRUE)\n#> Spectral Test\n#>\n#> null hypothesis: the series is iid\n#> p-value = 0.000000\n#> sample size = 200\n#> kernel function = Parzen\n#> bandwidth = 10.000000\n#> bandwidth selection = 10\nfport_test(f, test='spectral', bandwidth = 'adaptive', alpha = 0.01, suppress_raw_output = TRUE)\n#> Spectral Test\n#>\n#> null hypothesis: the series is iid\n#> p-value = 0.000000\n#> sample size = 200\n#> kernel function = Bartlett\n#> bandwidth = 16.840540\n#> bandwidth selection = adaptive\n\n## Independence Test [3]\n\nPerforms a test for independence and identical distribution of functional observations. The test relies on a dimensional reduction via a projection of the data on the K most important functional principal components. 
The empirical autocovariance operator is given by $C_N(x) = \\frac{1}{N} \\sum_{n=1}^N \\langle X_n x \\rangle X_n, x \\in L^2[0,1)$ , (where $$N$$ is the sample size) and the (empirical) eigenelements of $$C_N$$ are defined by $C_N(v_{j,N}) = \\lambda_j v_{j,N}, j \\ge 1$ Note, the (non-empirical) eigenfunctions $$v_{j}$$ form an orthonormal basis of $$L^2[0,1)$$, and we assume $$\\lambda_{1,N} \\ge \\lambda_{2,N} \\ge \\ldots$$, which are all non-negative. We decompose our functional data into its $$p$$ most important principal components: $X_n(t) = \\sum_{k=1}^{p} X_{k,n} v_{k,N}$ , where $$X_{k,n} = \\int_0^1 X_n(t) v_{k,N}(t)$$ Let $$\\mathbf{C_h}$$ denote the sample autocovariance matrix with entries: $c_h(k,l) = \\frac{1}{N} \\sum_{t = 1}^{N-h} X_{k,t}X_{l, t+h}$ Letting $$r_{f,h}(i,j)$$ and $$r_{b,h}(i,j)$$ denote the $$(i,j)$$ entries of $$\\mathbf{C_0}^{-1} \\mathbf{C_h}$$ and $$\\mathbf{C_h} \\mathbf{C_0}^{-1}$$, respectively, we define the test statistic: $Q_n = N \\sum_{h = 1}^H \\sum_{i,j = 1}^p r_{f,h}(i,j) r_{b,h}(i,j)$ , which, under suitable conditions, converges to a $$\\chi^2_{p^2 H}$$ distribution under the null hypothesis. See Gabrys, Kokoszka [3].\n\n### Applying the Independence Test to Data\n\nThe \u2018components\u2019 parameter (denoted by p above) determines how many functional principal components to use (kept in order of importance, which is determined by the proportion of the variance that each computed component explains). The \u2018lag\u2019 parameter (denoted by H above) determines the maximum lag to consider. 
We apply the independence test to our Brownian motion and FAR data.

fport_test(b, test = 'independence', components = 3, lag = 3, suppress_raw_output = TRUE)
#> Independence Test
#>
#> null hypothesis: the series is iid
#> p-value = 0.820193
#> number of principal components = 3
#> maximum lag = 3

fport_test(f, test = 'independence', components = 16, lag = 10, suppress_raw_output = TRUE)
#> Independence Test
#>
#> null hypothesis: the series is iid
#> p-value = 0.000087
#> number of principal components = 16
#> maximum lag = 10

## General Remarks

### Suppressing Output

The main hypothesis function fport_test, as well as each of the individual test functions, can return two forms of output. In the default configuration, when suppress_raw_output and suppress_print_output are both FALSE, each function first prints to the console the name of the test, the null hypothesis being tested, the p-value of the test, the sample size of the functional data, and additional information that may be unique to the given test. It then returns a list containing the p-value, the value of the test statistic, and the quantile of the respective limiting distribution. Passing suppress_print_output = TRUE causes the function to omit any output to the console, and passing suppress_raw_output = TRUE causes it not to return the list. At most one of these parameters may be TRUE.

## References

[1] Kokoszka P., Rice G., & Shang H.L. (2017). Inference for the autocovariance of a functional time series under conditional heteroscedasticity. Journal of Multivariate Analysis, 162, 32-50. DOI: 10.1016/j.jmva.2017.08.004.

[2] Characiejus V. & Rice G. (2019). A general white noise test based on kernel lag-window estimates of the spectral density operator. Econometrics and Statistics. DOI: 10.1016/j.ecosta.2019.01.003.

[3] Gabrys R. & Kokoszka P. (2007). Portmanteau Test of Independence for Functional Observations.
Journal of the American Statistical Association, 102:480, 1338-1348. DOI: 10.1198/016214507000001111.

[4] Zhang X. (2016). White noise testing and model diagnostic checking for functional time series. Journal of Econometrics, 194, 76-95. DOI: 10.1016/j.jeconom.2016.04.004.

[5] Chen W.W. & Deo R.S. (2004). Power transformations to induce normality and their applications. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 66, 117-130. DOI: 10.1111/j.1467-9868.2004.00435.x.
Jolly: being happy or joyful, cheerful and lively. Without question, the Biblical account of Jesus' birth was a jolly season. The angels celebrated, the shepherds were joyful, Simeon and Anna were cheerful and lively. Jesus the babe, the Savior, the promised King provides for all of us a true reason to be jolly and joyful! Have a jolly and Christ-filled weekend!

There is a brochure we can send out. If anyone is interested, please contact us at acts2day@iphc.org or 405-792-7143. There are other enrollment and beneficiary forms that need to be filled out.
Jamie Marie Martinek Ritter
August 11, 1971 - October 11, 2021
Birthplace: Danville, Illinois
Resided in: San Antonio, Texas

Jamie Marie Martinek Ritter passed away October 11, 2021 in San Antonio, Texas. She was born in Danville, Illinois on August 11, 1971 to Thomas William Martinek Jr. and Shelley Jean Risser Martinek. She spent her childhood years on her family farm in Covington, Indiana, surrounded by her loving friends and family. Her family moved to Austin, Texas in 1984 after losing their home to a fire. Jamie was a true Texan but left a piece of her heart in Indiana.

Jamie was a strong person and lived a full and good life. She worked hard to put herself through college and was an elementary school teacher for many years in Austin and San Antonio. She was loved by her students. She changed careers in 2015 when she took a position as Communications Supervisor for Spectrum Association Management in San Antonio. She loved to write and create and was extremely talented in her work.

Jamie was a beautiful woman who left us way too soon. She was a beloved mother, daughter, sister and friend. We are all heartbroken by her loss. She is survived by her beloved son, Riley Cade Ritter, who was the light of her life, her partner Ivan Villavicencio, both of San Antonio, her parents Thomas William Martinek, Jr. and Shelley Risser Martinek, her sister and brother-in-law Jennifer Jean Martinek Masters and Adam Jones Masters, her brother and sister-in-law Joshua Thomas and Rebecca Martinek, her sister Jessica "Pepper" Leigh Martinek, and her Grandfather Jack Louis Risser, all of Austin. She treasured her beloved nephews and nieces Will and Laura Kate Masters, Jack Masters, Susan Masters, and Eliana Martinek, all of Austin, who all called her "Aunt Mamie," a name she was given by Will when he was small. She will be missed by her many, many aunts and uncles and cousins across the country.

A Celebration of Life will be held Saturday November 6 at 11 a.m.
at Life Austin Chapel, 8901 Highway 71 West, Austin, TX 78735. Memorial contributions can be made to the Riley Ritter Education Fund at Go Fund Me.
\section{2D conformal superintegrability of the 2nd order} Systems of Laplace type are of the form \begin{equation}\label{Laplace} H\Psi\equiv \Delta_n\Psi+V\Psi=0.\end{equation} Here $\Delta_n $ is the Laplace-Beltrami operator on a real or complex conformally flat $nD$ Riemannian or pseudo-Riemannian manifold. (We assume that all functions occurring in this paper are locally analytic, real or complex.) A conformal symmetry of this equation is a partial differential operator $ S$ in the variables ${\bf x}=(x_1,\cdots,x_n)$ such that $[ S, H]\equiv SH-HS=R_{ S} H$ for some differential operator $R_{S}$. A conformal symmetry maps any solution $\Psi$ of (\ref{Laplace}) to another solution. Two conformal symmetries ${ S}, { S}'$ are identified if $S=S'+RH$ for some differential operator $R$, since they agree on the solution space of (\ref{Laplace}). (For short we will say that $S=S', \mod (H)$ and that $S$ is a symmetry if $[S,H]=0,\mod(H)$.) The system is {\it conformally superintegrable} for $n>2$ if there are $2n-1$ functionally independent conformal symmetries, ${ S}_1,\cdots,{ S}_{2n-1}$ with ${ S}_1={ H}$. It is second order conformally superintegrable if each symmetry $S_i$ can be chosen to be a differential operator of at most second order. For $n=2$ the definition must be restricted, since for the potential $V=0$ there is an infinite dimensional space of conformal symmetries; every analytic function induces such symmetries.
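As a concrete illustration of this definition (a worked example added here for clarity), take $n=2$ and $V=0$, so $H=\partial_x^2+\partial_y^2$, and consider the first order operator $S=K_1=(x^2-y^2)\partial_x+2xy\partial_y$. A direct computation gives \[ [\partial_x^2,K_1]=2\partial_x+4x\partial_x^2+4y\partial_x\partial_y,\qquad [\partial_y^2,K_1]=-2\partial_x-4y\partial_x\partial_y+4x\partial_y^2,\] so the first order terms cancel and \[ [K_1,H]=-4x\,H,\] i.e., $K_1$ is a conformal symmetry with $R_{K_1}=-4x$; it is a conformal, but not a true, symmetry, since $R_{K_1}\ne 0$.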
\begin{comment} Indeed necessary and sufficient conditions that $S=u(x,y)\partial_x+v(x,y)\partial_y$ is a 1st order conformal symmetry for $H=\Delta_2$ are that $u$ and $v$ satisfy the Cauchy-Riemann equations \[ \partial_x u=\partial_y v,\ \partial_y u=-\partial_x v.\] \end{comment} However, in this paper we are interested in multiparameter Laplace equations, i.e., those with potentials of the form $V=\sum_{j=0}^sc_jV^{(j)}$ where the set $\{V^{(j)}\}$ is linearly independent, $V^{(0)}=1$ and the $c_j$ are arbitrary parameters. Thus we require that each symmetry be conformal for an arbitrary choice of the parameters $c_j$ and, in particular, for the special case $V=c_0$ where $c_0$ is arbitrary. With this restriction we say that a 2D multiparameter Laplace equation is superintegrable if it admits 3 algebraically independent symmetries. Every $2D$ Riemannian manifold is conformally flat, so we can always find a Cartesian-like coordinate system with coordinates ${\bf x}=(x,y)\equiv (x_1,x_2)$ such that the Laplace equation takes the form \begin{equation}\label{Laplace4} {\tilde H}=\frac{1}{\lambda(x,y)}(\partial_x^2+\partial_y^2)+{\tilde V}({\bf x})=0.\end{equation} However, this equation is equivalent to the flat space equation \begin{equation}\label{Laplace5}{ H}\equiv \partial_x^2+\partial_y^2+ V({\bf x})=0,\quad V({\bf x})=\lambda({\bf x}){\tilde V}({\bf x}).\end{equation} In particular, the conformal symmetries of (\ref{Laplace4}) are identical with the conformal symmetries of (\ref{Laplace5}). Indeed, denoting by $\Lambda$ the operator of multiplication by the function $\lambda(x,y)$ and using the operator identity $[A,BC]=B[A,C]+[A,B]C$ we have \[ [S,H]=[S,\Lambda{\tilde H}] =\Lambda[S,{\tilde H}]+[S,\Lambda]{\tilde H}= \Lambda R{\tilde H}+[S,\Lambda]{\tilde H}=(\Lambda R \Lambda^{-1}+[S,\Lambda]\Lambda^{-1})H.\] Thus without loss of generality we can assume the manifold is flat space with $\lambda\equiv 1$.
Since the Hamiltonians are formally self-adjoint, without loss of generality we can always assume that a 2nd order conformal symmetry $S$ is formally self-adjoint and that a 1st order conformal symmetry $L$ is skew-adjoint: \begin{equation} { S}=\frac{1}{\lambda}\sum ^2_{k,j=1}\partial_k\cdot (\lambda a^{kj}({\bf x}))\partial_j +W({\bf x})\equiv S_0+W,\quad a^{jk}=a^{kj} \label{2ndordersymm} \end{equation} \begin{equation} L=\sum_{k=1}^2\left(a^k({\bf x})\partial_k+\frac{\partial_k(\lambda a^k)}{2\lambda}\right).\label{1stordersymm}\end{equation} \begin{equation}\label{confsym2} [S,H]=(R^{(1)}({\bf x})\partial_x+R^{(2)}({\bf x})\partial_y)H, \end{equation} \begin{equation}\label{confsym1} [L,H]=R({\bf x})H, \end{equation} for some functions $R^{(j)}({\bf x}), R({\bf x})$. Equating coefficients of the partial derivatives on both sides of (\ref{confsym2}), we obtain the conditions \begin{eqnarray}\label{killingtensors} a_i^{ii}&=&2a_j^{ij}+a_i^{jj},\quad i\ne j \end{eqnarray} and \begin{equation} W_j=\sum_{s=1}^2a^{sj} V_s+a_j^{jj}V,\quad j=1,2.\label{potc} \end{equation} (Here a subscript $j$ on $a^{\ell m}$, $V$ or $W$ denotes differentiation with respect to $x_j$.) The requirement that $\partial_{x} W_2=\partial_{y}W_1$ leads from (\ref{potc}) to the second order (conformal) Bertrand-Darboux partial differential equations for the potential: \[a^{12}(V_{11}-V_{22})+(a^{22}-a^{11})V_{12}+(a^{12}_1+a^{22}_2-a^{11}_2)V_1+(a^{22}_1-a^{11}_1-a^{12}_2)V_2\] \begin{equation}\label{BertrandDarboux} +2a^{12}_{12}V=0. \end{equation} Furthermore, we can always add the trivial conformal symmetry $\rho({\bf x})H$ to $S$. Equating coefficients of the partial derivatives on both sides of (\ref{confsym1}), we obtain the conditions \begin{eqnarray}\label{killingvectors} a^2_1+a^1_2=0,\quad \frac{R({\bf x})}{2}=a^1_1=a^2_2,\quad 2a^1_1V+a^1V_1+a^2V_2=0.\label{potc1} \end{eqnarray} In general the spaces of 1st and 2nd order symmetries could be infinite dimensional.
However, the requirement that $H$ have a multiparameter potential reduces the possible symmetries to a finite dimensional space. Indeed each such symmetry must necessarily be a symmetry for the potential $V=c_0$ where $c_0$ is an arbitrary parameter. Thus the conformal Bertrand-Darboux condition for a 2nd order symmetry yields the requirement $\partial_{xy}(a^{11}-a^{22})=0$. Furthermore we can always assume, say, $a^{11}=0$. The result is that the pure derivative terms $S_0$ belong to the space spanned by symmetrized products of the conformal Killing vectors \begin{equation}\label{conformalKV} P_1=\partial_x,\ P_2=\partial_y,\ J=x\partial_y-y\partial_x,\ D=x\partial_x+y\partial_y,\end{equation} \[K_1=(x^2-y^2)\partial_x +2xy\partial_y,\ K_2=(y^2-x^2)\partial_y+2xy\partial_x,\] and terms $g({\bf x})(\partial_x^2+\partial_y^2)$ where $g$ is an arbitrary function. For a given multiparameter potential only a subspace of these conformal tensors occurs. This is for two reasons. First, the conformal Bertrand-Darboux equations restrict the allowed Killing tensors. Second, on the hypersurface ${\cal H}=0$ in phase space all symmetries $g({\bf x}){\cal H}$ vanish, so any two symmetries differing by $g({\bf x}){\cal H}$ can be identified. Similarly the requirement that a 1st order conformal symmetry $L$ be a symmetry for the potential $V=c_0$ leads to the requirements $a^1_x=a^2_y=R=0$ so, in particular, $L$ is a true (not just conformal) symmetry. Therefore its pure derivative part must be a linear combination of the Euclidean Killing vectors $\partial_x,\ \partial_y,\ x\partial_y-y\partial_x$. The following results are easy modifications of results for 3D conformal superintegrable systems proved in \cite{KKMP2011}. We give them for completeness.
For a conformal superintegrable system with three 2nd order symmetries there will be 2 independent conformal Bertrand-Darboux equations (the equation for the symmetry $H$ is trivial) and the assumption of algebraic independence means that we can solve these equations for $V_{22}$ and $V_{12}$: \begin{equation}\label{veqn1a}\begin{array}{lllll} V_{22}&=&V_{11}&+&A^{22}V_1+B^{22}V_2+C^{22}V,\\ V_{12}&=& && A^{12}V_1+B^{12}V_2+C^{12}V\\ \end{array}\end{equation} Here the $A^{ij},B^{ij},C^{ij}$ are functions of $\bf x$ that can be calculated explicitly. Indeed if ${\cal S}_1=\sum_{k,j=1}^2\partial_k\cdot (\ell^{kj}(x,y)\partial_j)+W^{(1)}(x,y)$, ${\cal S}_2=\sum_{k,j=1}^2\partial_k\cdot(b^{kj}(x,y) \partial_j)+W^{(2)}(x,y)$, ${\cal H}$, is a basis for the symmetries then \begin{equation}\label{canoneqns1} A^{12}=\frac{D_{(2)}}{D},\quad A^{22}=\frac{D_{(3)}}{D},\quad B^{12}=-\frac{D_{(0)}}{D},\quad B^{22}=-\frac{D_{(1)}}{D},\end{equation} \begin{equation}\label{canoneqns2} C^{12}=-\frac{D_{(5)}}{D},\quad C^{22}=-\frac{D_{(4)}}{D},\end{equation} \[ D=\det \left(\begin{array}{cc} \ell^{11}-\ell^{22},& \ell^{12}\\ b^{11}-b^{22},& b^{12}\end{array}\right),\quad D_{(0)}=\det \left(\begin{array}{cc} 3\ell^{12}_2,& -\ell^{12}\\ 3b^{12}_2,& -b^{12} \end{array}\right), \] \[ D_{(1)}=\det \left(\begin{array}{cc} 3\ell^{12}_2,& \ell^{11}-\ell^{22}\\ 3b^{12}_2,& b^{11}-b^{22}\end{array}\right),\quad D_{(2)}=\det \left(\begin{array}{cc} 3\ell^{12}_1,& \ell^{12}\\ 3b^{12}_1,& b^{12}\end{array}\right), \] \[ D_{(3)}=\det \left(\begin{array}{cc} 3\ell^{12}_1,& \ell^{11}-\ell^{22}\\ 3b^{12}_1,& b^{11}-b^{22}\end{array}\right),\] \[ D_{(4)}=\det \left(\begin{array}{cc} 2\ell^{12}_{12},& \ell^{11}-\ell^{22}\\ 2b^{12}_{12},& b^{11}-b^{22}\end{array}\right),\ D_{(5)}= \det \left(\begin{array}{cc} 2\ell^{12}_{12},& -\ell^{12}\\ 2b^{12}_{12},& -b^{12} \end{array}\right).\] The functions $A^{22},B^{22},A^{12},B^{12},C^{22},C^{12}$ are defined independent of the choice of basis for the 2nd order
symmetries. \subsection{The integrability conditions for the potential} To determine the integrability conditions for the system (\ref{veqn1a}) we first introduce the dependent variables $Z^{(0)}=V$, $Z^{(1)}=V_1$, $Z^{(2)}=V_2$, $Z^{(3)}=V_{11}$, the vector \begin{equation}\label{wvector1a}{\bf z}^{\rm tr}=( Z^{(0)},Z^{(1)},Z^{(2)}, Z^{(3)}), \end{equation} and the matrices \begin{equation} {\bf A}^{(1)}=\left(\begin{array}{rrrr} 0&1&0&0\\ 0&0&0&1\\ C^{12}&A^{12}&B^{12}&0\\ C^{13}&A^{13}&B^{13}&B^{12}-A^{22}\end{array}\right), \end{equation} \begin{equation} {\bf A}^{(2)}=\left(\begin{array}{rrrr} 0&0&1&0\\ C^{12}&A^{12}&B^{12}&0 \\ C^{22}&A^{22}&B^{22}&1 \\ C^{23}&A^{23}&B^{23}&A^{12}\end{array}\right), \end{equation} where \begin{eqnarray} A^{13}&=&A^{12}_2-A^{22}_1+B^{12}A^{22}+A^{12}A^{12}-B^{22}A^{12}-C^{22},\nonumber\\ B^{13}&=&B^{12}_2-B^{22}_1+A^{12}B^{12}+C^{12},\nonumber\\ C^{13}&=&C^{12}_2-C^{22}_1+A^{12}C^{12}-B^{22}C^{12}+B^{12}C^{22},\nonumber\\ A^{23}&=& A^{12}_1+B^{12}A^{12}+C^{12},\quad B^{23}=B^{12}_1+B^{12}B^{12},\nonumber\\ \quad C^{23}&=&B^{12}C^{12}+C^{12}_1.\nonumber\\ \end{eqnarray} Then the integrability conditions for the system \begin{equation}\label{int21} \partial_{x_j}{\bf z}={\bf A}^{(j )}{\bf z}\qquad j=1,2, \end{equation} must hold. They are \begin{equation}\label{int31} {\bf A}^{(j)}_i-{\bf A}^{(i)}_j={\bf A}^{(i)}{\bf A}^{(j)}-{\bf A}^{(j)}{\bf A}^{(i)}\equiv [{\bf A}^{(i)},{\bf A}^{(j)}]. \end{equation} Suppose the integrability conditions for system (\ref{veqn1a}) are satisfied identically. In this case we say that the potential is {\it nondegenerate}. Otherwise the potential is {\it degenerate}. If $V$ is nondegenerate then at any point ${\bf x}_0$, where the $A^{ij}, B^{ij}, C^{ij}$ are defined and analytic, there is a unique solution $V({\bf x})$ with arbitrarily prescribed values of $V({\bf x}_0)$, $V_1({\bf x}_0)$, $V_2({\bf x}_0)$, $V_{11}({\bf x}_0)$. The points ${\bf x}_0$ are called {\it regular}. 
The points of singularity for the $A^{ij},B^{ij},C^{ij}, D^{ij}$ form a manifold of dimension $<2$. Degenerate potentials depend on fewer parameters. (For example, we could have that the integrability conditions are not satisfied identically. Or a first order conformal symmetry might exist and this would imply a linear condition on the first derivatives of $V$ alone.) Note that for a nondegenerate potential the solution space of (\ref{veqn1a}) is exactly 4-dimensional, i.e. the potential depends on 4 parameters. Degenerate potentials depend on $<$ 4 parameters. Note also that the integrability conditions depend only on the free parts $\ell^{jk},b^{jk}$ of the conformal symmetry basis, not on the potential terms $V,W^{(1)},W^{(2)}$. If the integrability conditions are satisfied identically, then the equations for the potential terms can be solved. \subsection{The conformal St\"ackel transform} We quickly review the concept of the St\"ackel transform \cite{KMP2010} and extend it to conformally superintegrable systems. Suppose we have a second order {\it conformal} superintegrable system \begin{equation}\label{confl} { H}=\frac{1}{\lambda(x,y)}(\partial_{xx}+\partial_{yy})+V(x,y)=0,\quad { H}={ H}_0+V. \end{equation} and suppose $U(x,y) $ is a particular solution of equations (\ref{veqn1a}), nonzero in an open set. The {\it conformal St\"ackel transform} of (\ref{confl}), induced by $U$, is the (Helmholtz) system \begin{equation}\label{helms} {\tilde { H}}=E,\quad {\tilde { H}}=\frac{1}{{\tilde \lambda}}(\partial_{xx}+\partial_{yy})+{\tilde V},\quad {\tilde \lambda}=\lambda U,\ {\tilde V}=\frac{V}{U} \end{equation} \begin{theorem}\label{stackelt} The transformed (Helmholtz) system (\ref{helms}) is {\it truly} superintegrable. \end{theorem} \medskip\noindent {\bf Proof} : Let ${S}={S}_0+W$ be a second order conformal symmetry of $H$ and ${S}_U={S}_0+W_U$ be the special case that is in conformal involution with $\frac{1}{\lambda}(\partial_{xx}+\partial_{yy})+ U$. 
Then $$[ {S}, H]=R_{{ S}_0} H,\quad [{S}_U, H_0+U]=R_{{ S}_0}({ H}_0+U),\quad [S_0,H_0]=R_{S_0}H_0$$ and ${\tilde{ S} }={ S}-\frac{W_U}{U}{ H}$ is a corresponding true symmetry of $\tilde { H}$. Indeed, $$[{\tilde{ S}},{\tilde { H}}]=[{ S},U^{-1} H]-[\frac{W_U}{U} H,\frac{1}{U} H]=U^{-1}R_{{ S}_0}H-U^{-1}[S_0,U]U^{-1}H$$ $$-U^{-1}[W_U,H_0]U^{-1}H=U^{-1}R_{{ S}_0}H-U^{-1}R_{S_0}H=0.$$ This transformation of second order symmetries preserves linear and algebraic independence. Thus the transformed system is Helmholtz superintegrable. $\Box$ Note that if $H\Psi=0$ then ${\tilde S}\Psi =S\Psi$ and $H(S\Psi)=0$ so $S$ and $\tilde S$ agree on the null space of $H$ and they preserve this null space. There is a similar result for first order conformal symmetries $ L$. \begin{corollary} Let $ L$ be a first order conformal symmetry of the superintegrable system (\ref{confl}) and suppose $U({\bf x}) $ is a particular solution of equations (\ref{veqn1a}), nonzero in an open set. Then $ L$ is a true symmetry of the Helmholtz superintegrable system (\ref{helms}): $[{ L},{\tilde { H}}]=0$. \end{corollary} \medskip\noindent {\bf Proof}: By assumption, $[{ L},{ H}]=R_{ L}({\bf x}){ H}=R_{ L}({ H}_0+V)$ where $R_{ L}$ is a function. Thus, $[{ L}, { H}_0]=R_{ L} { H}_0, [{ L},V]=R_{ L} V$, so also $[{ L},U]=R_{L} U$. Then \[[{ L},{\tilde{ H}}]=[{ L},U^{-1} H]=U^{-1}[{ L},{ H}]-U^{-1}[L,U]U^{-1}H\] \[=U^{-1}RH-U^{-1}RUU^{-1}H=U^{-1}RH-U^{-1}RH=0.\] $\Box$ These results show that any second order conformal Laplace superintegrable system admitting a nonconstant potential $U$ can be St\"ackel transformed to a Helmholtz superintegrable system. This operation is invertible, but the inverse is not a St\"ackel transform. By choosing all possible special potentials $U$ associated with the fixed Laplace system (\ref {confl}) we generate the equivalence class of all Helmholtz superintegrable systems (\ref{helms}) obtainable through this process. 
As is easy to check, any two Helmholtz superintegrable systems lie in the same equivalence class if and only if they are St\"ackel equivalent in the standard sense. All Helmholtz superintegrable systems are related to conformal Laplace systems in this way, so the study of all Helmholtz superintegrability on conformally flat manifolds can be reduced to the study of all conformal Laplace superintegrable systems on flat space. \begin{theorem} There is a one-to-one relationship between flat space conformally superintegrable Laplace systems with nondegenerate potential and St\"ackel equivalence classes of superintegrable Helmholtz systems with nondegenerate potential on conformally flat spaces. \end{theorem} Indeed, let \begin{equation}\label{nonconf} (H_1-E_1)\Psi =0,\ (H_2-E_2)\Psi =0,\end{equation} be Schr\"odinger eigenvalue equations where \[ H_j-E_j=\frac{1}{\lambda_j(x,y)}(\partial_{xx}+\partial_{yy}+V^{(j)})-E_j,\quad j=1,2,\] and \begin{equation}\label{Vident}V=V^{(1)}+E_1\lambda_1=V^{(2)}+E_2\lambda_2\end{equation} is a nondegenerate potential for the conformally superintegrable system \begin{equation}\label{confsup}\partial_{xx}+\partial_{yy}+V=0.\end{equation} Suppose $\{ \lambda_1,\lambda_2\}$ is a linearly independent set (otherwise there is nothing to prove). Then we can find a potential basis for $V$ of the form \[ V(x,y)=-E_1\lambda_1(x,y)-E_2\lambda_2(x,y)+k_3U^{(3)}(x,y)+k_4U^{(4)}(x,y)\] \[=-E_1\lambda_1-E_2\lambda_2+{\tilde V}\] where $\{ \lambda_1,\lambda_2,U^{(3)},U^{(4)}\}$ is a linearly independent set. Dividing (\ref{confsup}) by $\lambda_1,\lambda_2$, respectively, we see that systems (\ref{nonconf}) are regular superintegrable with nondegenerate (3-parameter) potentials. Furthermore, multiplying the first system (\ref{nonconf}) by $\lambda_1/\lambda_2$ we see that it is St\"ackel equivalent to the second system.
Conversely, if systems (\ref{nonconf}) are regular superintegrable and equality (\ref{Vident}) holds, then it is easy to verify that system (\ref{confsup}) is conformally superintegrable with nondegenerate (4-parameter) potential. Even for true Helmholtz superintegrable systems there are good reasons to add a seemingly trivial constant to the potentials. Thus, for a St\"ackel transform induced by the function $U^{(1)}$, we can take the original system to have Hamiltonian \begin{equation}\label{parameter} H=H_0+V=H_0+U^{(1)}\alpha_1+U^{(2)}\alpha_2+U^{(3)}\alpha_3+\alpha_4\end{equation} where $\{U^{(1)},U^{(2)},U^{(3)},1\}$ is a basis for the 4-dimensional potential space. A 2nd order symmetry $S$ would have the form \[ S=S_0+W^{(1)}\alpha_1+W^{(2)}\alpha_2+W^{(3)}\alpha_3.\] The St\"ackel transformed Hamiltonian and symmetry take the form \[ {\tilde H}=\frac{1}{U^{(1)}}H_0+\frac{U^{(1)}\alpha_1+U^{(2)}\alpha_2+U^{(3)}\alpha_3+\alpha_4}{U^{(1)}},\ {\tilde S}=S-W^{(1)}{\tilde H}.\] Note that the parameter $\alpha_1$ cancels out of the expression for $\tilde S$; it is replaced by $-\alpha_4$. Now suppose that $\Psi$ is a formal eigenfunction of $H$ (not required to be normalizable): $H\Psi=E\Psi$. If we choose the parameter $\alpha_4=-E$ in (\ref{parameter}) then, in terms of this redefined $H$, we have $H\Psi =0$. It follows immediately that ${\tilde S}\Psi =S\Psi$. Thus, for the 3-parameter system $H'$ and the St\"ackel transform ${\tilde H}'$, \[H'=H_0+V'=H_0+U^{(1)}\alpha_1+U^{(2)}\alpha_2+U^{(3)}\alpha_3,\] \[{\tilde H}'=\frac{1}{U^{(1)}}H_0 +\frac{-U^{(1)}E+U^{(2)}\alpha_2+U^{(3)}\alpha_3}{U^{(1)}},\] we have $H'\Psi=E\Psi $ and ${\tilde H}'\Psi=-\alpha_1\Psi$. It follows that the effect of the St\"ackel transform is to replace $\alpha_1$ by $-E$ and $E$ by $-\alpha_1$.
Further, since $S$ and $\tilde S$ don't depend on the choice of $\alpha_4$, we see that these operators must agree on eigenspaces of $H'$. We know that the symmetry operators of all 2nd order nondegenerate superintegrable systems in 2D generate a quadratic algebra of the form \[{} [R,S_1]=f^{(1)}(S_1,S_2,\alpha_1,\alpha_2,\alpha_3,H'),\ [R,S_2]=f^{(2)}(S_1,S_2,\alpha_1,\alpha_2,\alpha_3,H'),\] \begin{equation}\label{quadratic1} R^2=f^{(3)}(S_1,S_2,\alpha_1,\alpha_2,\alpha_3,H'),\end{equation} where $\{S_1,S_2,H\}$ is a basis for the 2nd order symmetries and $\alpha_1,\alpha_2,\alpha_3$ are the parameters for the potential, \cite{4,5,MPW2013}. It follows from the above considerations that the effect of a St\"ackel transform generated by the potential function $U^{(1)}$ is to determine a new superintegrable system with structure \begin{equation}\label{quadratic2}{} [{\tilde R},{\tilde S}_1]=f^{(1)}({\tilde S}_1,{\tilde S}_2,-{\tilde H}',\alpha_2,\alpha_3,-\alpha_1),\end{equation} \[ [{\tilde R},{\tilde S}_2]=f^{(2)}({\tilde S}_1,{\tilde S}_2,-{\tilde H}',\alpha_2,\alpha_3,-\alpha_1),\] \[ {\tilde R}^2=f^{(3)}({\tilde S}_1,{\tilde S}_2,-{\tilde H}',\alpha_2,\alpha_3,-\alpha_1).\] Of course, the switch of $\alpha_1$ and $H'$ is only for illustration; there is a St\"ackel transform that replaces any $\alpha_j$ by $-H'$ and $H'$ by $-\alpha_j$. Formulas (\ref{quadratic1}) and (\ref{quadratic2}) are just instances of the quadratic algebras of the superintegrable systems belonging to the equivalence class of a single nondegenerate conformally superintegrable Hamiltonian \begin{equation}\label{confham}\hat{H}=\partial_{xx}+\partial_{yy}+\sum_{j=1}^4\alpha_j V^{(j)}(x,y).\end{equation} Let $\hat{S}_1,\hat{S}_2, \hat{H}$ be a basis of 2nd order conformal symmetries of $\hat H$. From the above discussion we can conclude the following.
\begin{theorem} The symmetries of the 2D nondegenerate conformal superintegrable Hamiltonian $\hat H$ generate a quadratic algebra \begin{equation}\label{confquadalg} [{\hat R},{\hat S}_1]=f^{(1)}({\hat S}_1,\hat{S}_2,\alpha_1,\alpha_2,\alpha_3,\alpha_4),\ [{\hat R},{\hat S}_2]=f^{(2)} ({\hat S}_1,{\hat S}_2,\alpha_1,\alpha_2,\alpha_3,\alpha_4),\end{equation} \[ {\hat R}^2=f^{(3)}({\hat S}_1,\hat{S}_2,\alpha_1,\alpha_2,\alpha_3,\alpha_4),\] where $\hat{R}=[{\hat S}_1,\hat{S}_2]$ and all identities hold $\mod({\hat H})$. A conformal St\"ackel transform generated by the potential $V^{(j)}(x,y)$ yields a nondegenerate Helmholtz superintegrable Hamiltonian $\tilde H$ with quadratic algebra relations identical to (\ref{confquadalg}), except that we make the replacements ${\hat S}_\ell\to {\tilde S}_\ell$ for $\ell=1,2$ and $\alpha_j\to -{\tilde H}$. These modified relations (\ref{confquadalg}) are now true identities, not $\mod ({\hat H})$. \end{theorem} Note that expressions (\ref{confquadalg}) define a true quadratic algebra, interpreted $\mod ({\hat H})$. They differ from the quadratic algebra for a Helmholtz system in that the Hamiltonian doesn't appear, whereas there is an extra parameter. The quadratic algebras of all Helmholtz systems obtained from $\hat H$ via conformal St\"ackel transforms follow by simple substitution.
A basis for this subspace is \[ P_1^2+P_2^2\sim 0,\ J^2+D^2\sim 0,\ K_1^2+K_2^2\sim 0,\ \{P_1,K_2\}+2JD\sim 0,\] \[ \{P_1,J\}-\{P_2,D\}\sim 0,\ \{P_1,K_1\}-\{P_2,K_2\}\sim0,\ \{J,K_1\}+\{D,K_2\}\sim 0,\] \[ \{P_1,D\}+\{P_2,J\}\sim 0,\ \{P_1,K_2\}+\{P_2,K_1\}\sim0,\ \{J,K_2\}-\{D,K_1\}\sim 0,\] \[ 4J^2+\{P_1,K_1\}+\{P_2,K_2\}\sim 0.\] Thus $\mod H_0$ the space of 2nd order symmetries is 10-dimensional. \end{comment} \subsection{Contractions of conformal superintegrable systems with potential induced by generalized In\"on\"u-Wigner contractions} The basis symmetries ${\cal S}^{(j)} ={\cal S}^{(j)}_0+W^{(j)}$,\ ${\cal H}={\cal H}_0+V$ of a nondegenerate 2nd order conformally superintegrable system determine a conformal quadratic algebra (\ref{confquadalg}), and if the parameters of the potential are set equal to $0$, the free system $ {\cal S}^{(j)}_0, {\cal H}_0,\ j=1,2$ also determines a conformal quadratic algebra without parameters, which we call a {\it free conformal quadratic algebra}. The elements of this free algebra belong to the enveloping algebra of $so(4,\C)$ with basis (\ref{conformalKV}). Since the system is nondegenerate the integrability conditions for the potential are satisfied identically and the full quadratic algebra can be computed from the free algebra, modulo a choice of basis for the 4-dimensional potential space. Once we choose a basis for $so(4,\C)$, its enveloping algebra is uniquely determined by the structure constants. Structure relations in the enveloping algebra are continuous functions of the structure constants, so a contraction of one $so(4,\C)$ to itself induces a contraction of the enveloping algebras. Then the free conformal quadratic algebra constructed in the enveloping algebra will contract to another free quadratic algebra. (In \cite{KM2014} essentially the same argument was given in more detail for Helmholtz superintegrable systems on constant curvature spaces.) 
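Before specializing, it may help to recall the prototype of such a limit (a standard example included here for orientation): the In\"on\"u-Wigner contraction of $so(3,\C)$, with basis $\{J_1,J_2,J_3\}$ and relations $[J_1,J_2]=J_3$, $[J_2,J_3]=J_1$, $[J_3,J_1]=J_2$, to the Euclidean algebra $e(2,\C)$. Setting $P_1=\epsilon J_1$, $P_2=\epsilon J_2$, $J=J_3$, we find \[ [P_1,P_2]=\epsilon^2 J,\qquad [J,P_1]=P_2,\qquad [J,P_2]=-P_1,\] so as $\epsilon\to 0$ the structure relations converge to those of $e(2,\C)$. The B\^ocher contractions of $so(4,\C)$ considered below are generalized contractions of this kind, implemented by coordinate transformations.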
In this paper we consider a family of contractions of $so(4,\C)$ to itself that we call B\^ocher contractions. All these contractions are implemented via coordinate transformations. Suppose we have a conformal nondegenerate superintegrable system with free generators ${\cal H}_0, {\cal S}^{(1)}_0, {\cal S}^{(2)}_0$ that determines the conformal and free conformal quadratic algebras $Q$ and $Q^{(0)} $ and has structure functions $A^{ij}({\bf x}),\ B^{ij}({\bf x}),\ C^{ij}({\bf x})$ in Cartesian coordinates ${\bf x}=(x_1,x_2)$. Further, suppose this system contracts to another nondegenerate system ${\cal H'}_0, {\cal S'}^{(1)}_0 ,{\cal S'}^{(2)}_0 $ with conformal quadratic algebra ${Q'}^{(0)}$. We show here that this contraction induces a contraction of the associated nondegenerate superintegrable system ${\cal H}={\cal H}_0+V$, ${\cal S}^{(1)}={\cal S}^{(1)}_0+W^{(1)}$, ${\cal S}^{(2)}={\cal S}^{(2)}_0+W^{(2)}$, $Q$ to ${\cal H}'={{\cal H}'}_0+V'$, ${\cal S'}^{(1)}={\cal S'}_0^{(1)}+{W^{(1)}}'$, ${\cal S'}^{(2)}={\cal S'}_0^{(2)}+{W^{(2)}}'$, $Q'$. The point is that in the contraction process the symmetries ${{\cal H}'}_0(\epsilon)$, ${\cal S'}^{(1)}_0(\epsilon)$, $ {\cal S'}^{(2)}_0(\epsilon)$ remain continuous functions of $\epsilon$, linearly independent as quadratic forms, and $\lim_{\epsilon\to 0} {\cal H'}_0(\epsilon)={{\cal H'}}_0$, $\lim_{\epsilon\to 0} {\cal S'}^{(j)}_0(\epsilon)={\cal S'}_0^{(j)}$. Thus the associated functions $A^{ij}(\epsilon), B^{ij}(\epsilon), C^{ij}(\epsilon)$ will also be continuous functions of $\epsilon$ and $\lim_{\epsilon\to 0}A^{ij}(\epsilon)={A'}^{ij}$, $\lim_{\epsilon\to 0}B^{ij}(\epsilon)={B'}^{ij}$, $\lim_{\epsilon\to 0}C^{ij}(\epsilon)={C'}^{ij}$.
Similarly, the integrability conditions for the potential equations \begin{equation}\label{nondegpot2} \begin{array}{lllll} V^{(\epsilon)}_{22}&=& V^{(\epsilon)}_{11}&+&A^{22}(\epsilon) V^{(\epsilon)}_1+B^{22}(\epsilon) V^{(\epsilon)}_2+C^{22}(\epsilon) V^{(\epsilon)},\\ V^{(\epsilon)}_{12}&=& &&A^{12}(\epsilon) V^{(\epsilon)}_1+B^{12}(\epsilon) V^{(\epsilon)}_2+C^{12}(\epsilon) V^{(\epsilon)},\end{array} \end{equation} will hold for each $\epsilon$ and in the limit. This means that the 4-dimensional solution space for the potentials $V$ will deform continuously into the 4-dimensional solution space for the potentials $V'$. Thus the target space of solutions $V'$ (and of the functions $W'$) is uniquely determined by the free quadratic algebra contraction. There is an apparent lack of uniqueness in this procedure, since for a nondegenerate superintegrable system one typically chooses a basis $V^{(j)},\ j=1,\cdots,4$ for the potential space and expresses a general potential as $V=\sum_{j=1}^4a_jV^{(j)}$. Of course the choice of basis for the source system is arbitrary, as is the choice for the target system. Thus the structure equations for the quadratic algebras and the dependence $a_j(\epsilon)$ of the contraction constants on $\epsilon$ will vary depending on these choices. However, all such possibilities are related by a basis change matrix. \section{Tetraspherical coordinates and relations with the 2-sphere and 2D flat space} The tetraspherical coordinates $(x_1,\cdots,x_4)$ satisfy $x_1^2+x_2^2+x_3^2+x_4^2=0$. They are projective coordinates on the null cone and have 3 degrees of freedom. Their principal advantage over flat space Cartesian coordinates is that the action of the conformal algebra (\ref{conformalKV}) and of the conformal group $\sim SO(4,\C)$ is linearized in tetraspherical coordinates. 
\medskip \noindent{\bf Relation to Cartesian coordinates $(x,y)$ and coordinates on the 2-sphere $(s_1,s_2,s_3)$ }: \[ x_1=2XT,\ x_2=2YT,\ x_3=X^2+Y^2-T^2,\ x_4=i(X^2+Y^2+T^2).\] \[ x=\frac{X}{T}=-\frac{x_1}{x_3+ix_4},\ y=\frac{Y}{T}=-\frac{x_2}{x_3+ix_4}, \] \[ x=\frac{s_1}{1+s_3},\ y=\frac{s_2}{1+s_3},\] \[ s_1=\frac{2x}{x^2+y^2+1},\ s_2=\frac{2y}{x^2+y^2+1},\ s_3=\frac{1-x^2-y^2}{x^2+y^2+1},\] \[ H=\partial_{xx}+\partial_{yy}+{\tilde V}=(x_3+ix_4)^2\left(\sum_{k=1}^4\partial_{x_k}^2+V\right) =(1+s_3)^2\left(\sum_{j=1}^3\partial_{s_j}^2+V\right),\] where ${\tilde V}=(x_3+ix_4)^2V$ and \[ (1+s_3)=-i\frac{(x_3+ix_4)}{x_4},\ (1+s_3)^2=-\frac{(x_3+ix_4)^2}{x_4^2},\] \[s_1=\frac{ix_1}{x_4},\ s_2=\frac{ix_2}{x_4},\ s_3=\frac{-ix_3}{x_4}.\] Also, $ \sum_{k=1}^4x_k\partial_{x_k}=0$ and, classically, $\sum_{k=1}^4x_k{p_k}=0$. \noindent {\bf Relation to flat space and 2-sphere 1st order conformal constants of the motion}: We define \[ L_{jk}=x_j\partial_{x_k}-x_k \partial_{x_j}, \ 1\le j,k\le 4,\ j\ne k,\] where $L_{jk}=-L_{kj}$. The generators for flat space conformal symmetries are related to these via \begin{equation}\label{identifications}P_1= \partial_x=L_{13}+iL_{14},\ P_2=\partial_y=L_{23}+iL_{24},\ D=iL_{34},\end{equation} \[ J=L_{12},\ K_1=L_{13}-iL_{14},\ K_2=L_{23}-iL_{24}.\] Here \[ D=x\partial_x+y\partial_y,\ J=x\partial_y-y\partial_x,\ K_1=2xD-(x^2+y^2)\partial_x,\] etc. The generators for $2$-sphere conformal constants of the motion are related to the $L_{jk}$ via \[ L_{12}=J_{12}=s_1\partial_{s_2}-s_2\partial_{s_1},\ L_{13}=J_{13},\ L_{23}=J_{23},\] \[ L_{14}=-i\partial_{s_1},\ L_{24}=-i\partial_{s_2},\ L_{34}=-i\partial_{s_3}.\] Note that in identifying tetraspherical coordinates we can always permute the indices $1,2,3,4$. More generally, we can apply an arbitrary $SO(4,\C)$ transformation to the tetraspherical coordinates, so the above relations between Euclidean and tetraspherical coordinates are far from unique.
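As a consistency check, note that this parametrization automatically satisfies the null cone condition:
\[ x_1^2+x_2^2+x_3^2+x_4^2=4T^2(X^2+Y^2)+(X^2+Y^2-T^2)^2-(X^2+Y^2+T^2)^2 =4T^2(X^2+Y^2)-4T^2(X^2+Y^2)=0.\]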
\medskip \noindent {\bf 2nd order conformal symmetries $\sim H $}: The 11-dimensional space of conformal symmetries $\sim H$ has basis \[L_{12}^2-L_{34}^2,\ L_{13}^2-L_{24}^2,\ L_{23}^2-L_{14}^2,\ L_{12}^2+L_{13}^2+L_{23}^2,\] \begin{equation}\label{congH} L_{12}L_{34}+L_{23}L_{14}-L_{13}L_{24},\end{equation} \[ \{L_{13},L_{14}\}+\{L_{23},L_{24}\}, \ \{L_{13},L_{23}\}+\{L_{14},L_{24}\},\ \{L_{12},L_{13}\}+\{L_{34},L_{24}\}, \] \[ \{L_{12},L_{14}\}-\{L_{34},L_{23}\}, \ \{L_{12},L_{23}\}-\{L_{34},L_{14}\},\ \{L_{12},L_{24}\}+\{L_{34},L_{13}\}. \] All of this becomes much clearer if we make use of the decomposition $so(4,\C)\equiv so(3,\C)\oplus so(3,\C)$ and the functional realization of the Lie algebra. Setting \[ J_1=\frac12(L_{23}-L_{14}),\ J_2=\frac12(L_{13}+L_{24}),\ J_3=\frac12(L_{12}-L_{34}),\] \[ K_1=\frac12(L_{23}+L_{14}),\ K_2=\frac12(L_{13}-L_{24}),\ K_3=\frac12(L_{12}+L_{34}),\] we have \[ [J_i,J_j]=\epsilon_{ijk}J_k,\ [K_i,K_j]=\epsilon_{ijk}K_k,\ [J_i,K_j]=0. \] In terms of the variables $z=x+iy,\ {\bar z}=x-iy$ we have \[ J_1=\frac12(i\partial_z-iz^2\partial_z),\ J_2=\frac12(\partial_z+z^2\partial_z),\ J_3=iz\partial_z,\] \[ K_1=\frac12(-i\partial_{\bar z}+i{\bar z}^2\partial_{\bar z}),\ K_2=\frac12(\partial_{\bar z}+{\bar z}^2\partial_{\bar z}), \ K_3=-i{\bar z}\partial_{\bar z},\] so the $J_i$ operators depend only on the variable $z$ and the $K_j$ operators depend only on the variable $\bar z$. Also \begin{equation}\label{Cas} J_1^2+J_2^2+J_3^2\equiv 0,\ K_1^2+K_2^2+K_3^2\equiv 0.\end{equation} The space of 2nd order elements in the enveloping algebra is thus 21-dimensional and decomposes as $A_z\oplus A_{\bar z}\oplus A_{z{\bar z}}$ where $A_z$ is 5-dimensional with basis $J_1^2$, $J_3^2$, $\{J_1,J_2\}$, $\{J_1,J_3\}$, $\{J_2,J_3\}$,\ $A_{\bar z}$ is 5-dimensional with basis $K_1^2$, $K_3^2$, $\{K_1,K_2\}$, $\{K_1,K_3\}$, $\{K_2,K_3\}$, and $A_{z{\bar z}}$ is 9-dimensional with basis $J_iK_j$, $1\le i,j\le 3$.
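The identities (\ref{Cas}) can be checked directly from this realization. For the $J_i$, writing $J_1=\frac{i}{2}(1-z^2)\partial_z$, $J_2=\frac12(1+z^2)\partial_z$, $J_3=iz\partial_z$ and expanding,
\[ J_1^2+J_2^2+J_3^2=\left[-\tfrac14(1-z^2)^2+\tfrac14(1+z^2)^2-z^2\right]\partial_z^2 +\left[\tfrac{z}{2}(1-z^2)+\tfrac{z}{2}(1+z^2)-z\right]\partial_z=0,\]
and the computation for the $K_i$ in the variable $\bar z$ is identical.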
Note that all of the elements of $A_{z{\bar z}}$ are $\sim H$, whereas none of the nonzero elements of $A_z,A_{\bar z}$ have this property. The 11 elements (\ref{congH}) include the relations (\ref{Cas}). Here, the transposition $J_i\leftrightarrow K_i$ is a conformal equivalence. \subsection{Classification of 2nd order conformally superintegrable systems with nondegenerate potential} With this simplification it becomes feasible to classify all 2nd order conformally superintegrable systems with nondegenerate potential. Since every such system has generators ${ S}^{(1)}={ S}_0^{(1)}+W_1(z,{\bar z})$, ${ S}^{(2)}={ S}_0^{(2)}+W_2(z,{\bar z})$, it is sufficient to classify, up to $ SO(4,\C)$ conjugacy, all free conformal quadratic algebras with generators ${ S}_0^{(1)},\ { S}_0^{(2)}$, $\mod { H}_0$, and then to determine for which of these free conformal algebras the integrability conditions (\ref{int31}) hold identically, so that the system admits a nondegenerate potential ${\tilde V}(z,{\bar z})$ which can be computed. The classification breaks up into the following possible cases: \begin{itemize} \item Case 1: ${ S}_0^{(1)},\ { S}_0^{(2)}\in A_z$. (This is conformally equivalent to ${ S}_0^{(1)},\ { S}_0^{(2)}\in A_{\bar z}$.) The possible free conformal quadratic algebras of this type, classified up to $SO(3,\C)$ conjugacy $\mod J_1^2+J_2^2+J_3^2$, can easily be obtained from the computations in \cite{KM2014}. They are the pairs \begin{enumerate} \item \[J_3^2,\ J_1^2\] \item \[J_3^2,\ \{J_1+iJ_2,J_3\}\] \item \[ J_3^2,\ \{J_1,J_3\} \] \item \[\{J_2,J_2+iJ_1\},\ \{J_2,J_3\}\] \item \[J_3^2,\ (J_1+iJ_2)^2\] \item \begin{equation} \{J_1+iJ_2,J_3\},\ (J_1+iJ_2)^2.\label{conjugacyclasses}\end{equation} \end{enumerate} Checking pairs $1)-5)$ we find that they do not admit a nonzero potential, so they do not correspond to nondegenerate conformal superintegrable systems.
This is in sharp contrast to the results of \cite{KM2014} where for Helmholtz systems on constant curvature spaces there was a 1-1 relationship between free quadratic algebras and nondegenerate superintegrable systems. Pair $6)$, (\ref{conjugacyclasses}), does correspond to a superintegrable system, the singular case ${\tilde V}=f(z)$ where $f(z)$ is arbitrary. (This system is conformally St\"ackel equivalent to the singular Euclidean system $E_{15}$.) Equivalently, the system in $A_{\bar z}$ with analogous $K$-operators yields the potential ${\tilde V}=f({\bar z})$, (\ref{Varb'}). \item Case 2: ${ S}_0^{(1)}={ S}_J^{(1)}+{ S}_K^{(1)},\ { S}_0^{(2)}={ S}_J^{(2)}$ where ${ S}_J^{(1)},\ S_J^{(2)}$ are selected from one of the pairs $1)-6)$ above and ${ S}_K^{(1)}$ is a nonzero element of $A_{\bar z}$. Again there is a conformally equivalent case where the roles of $J_i$ and $K_i$ are switched. To determine the possibilities for ${ S}_K^{(1)}$ we classify the 2nd order elements in the enveloping algebra of $so(3,\C)$ up to $SO(3,\C)$ conjugacy, $\mod K_1^2+K_2^2+K_3^2$. From the computations in \cite{KM2014} we see easily that there are the following representatives for the equivalence classes: \begin{description}\item[a)]\[ K_3^2\] \item[b)]\[K_1^2+aK_2^2,\ a\ne 0,1\] \item[c)]\[(K_1+iK_2)^2\] \item[d)] \[ K_3^2+(K_1+iK_2)^2\] \item[e)] \[ \{K_3,K_1+iK_2\}.\] \end{description} For pairs $1),3),4),5)$ above and all choices $a)-e)$ we find that the integrability conditions are never satisfied, so there are no corresponding nondegenerate superintegrable systems. For pair $2)$, however, we find that any choice $a)-e)$ leads to the same nondegenerate superintegrable system $[2,2]$, (\ref{V[22norm']}). While it appears that there are multiple generators for this one system, each set of generators maps to any other set by a conformal St\"ackel transformation and a change of variable.
For pair $6)$, we find that any choice $a)-e)$ leads to the same nondegenerate superintegrable system $[4]$, (\ref{V[4]norm'}). Again each set of generators maps to any other set by a conformal St\"ackel transformation and a change of variable. \item Case 3: ${ S}_0^{(1)}={ S}_J^{(1)},\ { S}_0^{(2)}={ S}_J^{(2)}+{ S}_K^{(2)}$ where ${ S}_J^{(1)},\ S_J^{(2)}$ are selected from one of the pairs $1)-6)$ above and ${ S}_K^{(2)}$ is a nonzero element of $A_{\bar z}$. Again there is a conformally equivalent case where the roles of $J_i$ and $K_i$ are switched. To determine the possibilities for ${ S}_K^{(2)}$ we classify the 2nd order elements in the enveloping algebra of $so(3,\C)$ up to $SO(3,\C)$ conjugacy, $\mod K_1^2+K_2^2+K_3^2$. They are $a)-e)$ above. For pairs $1)-4),6)$ above and all choices $a)-e)$ the integrability conditions are never satisfied, so there are no corresponding nondegenerate superintegrable systems. For pair $5)$, however, we find that any choice $a)-e)$ leads to the same nondegenerate superintegrable system $[2,2]$, (\ref{V[22norm']}). Again each set of generators maps to any other set (and to any $[2,2]$ generators in Case 2) by a conformal St\"ackel transformation and a change of variable. \item Case 4: ${ S}_0^{(1)}={ S}_J^{(1)},\ { S}_0^{(2)}={ S}_K^{(2)}$ where ${ S}_J^{(1)}$ is selected from one of the representatives $a)-e)$ above and ${ S}_K^{(2)}$ is selected from one of the analogous representatives $a)-e)$ expressed as $K$-operators. We find that each of the 25 sets of generators leads to the single conformally superintegrable system $[0]$, (\ref{V[0]norm'}), and each set of generators maps to any other set by a conformal St\"ackel transformation and a change of variable.
\item Case 5: ${ S}_0^{(1)}={ S}_J^{(1)}+{ S}_K^{(1)},\ { S}_0^{(2)}={ S}_J^{(2)}+{ S}_K^{(2)}$ where ${ S}_J^{(1)},\ S_J^{(2)}$ are selected from one of the pairs $1)-6)$ above and ${ S}_K^{(1)}$, $S_K^{(2)}$ are obtained from ${ S}_J^{(1)}$, $ S_J^{(2)}$, respectively, by replacing each $J_i$ by $K_i$. We find the following possibilities: \begin{description} \item[i)] ${ S}_0^{(1)}=J_1^2+K_1^2,\ { S}_0^{(2)}=J_3^2+K_3^2$. This extends to the system $[1,1,1,1]$, (\ref{V[1111norm']}). \item[ii)] ${ S}_0^{(1)}=J_3^2+K_3^2,\ { S}_0^{(2)}=\{J_3,J_1+iJ_2\}+\{K_3,K_1+iK_2\}$. This extends to the system $[2,1,1]$, (\ref{V211norm'}). \item[iii)] ${ S}_0^{(1)}=J_3^2+K_3^2,\ { S}_0^{(2)}=\{J_1,J_3\}+\{K_1,K_3\}$. This extends to the system $[1,1,1,1]$, (\ref{V[1111norm']}) again, equivalent to the generators $i)$ by a conformal St\"ackel transformation and a change of variable. \item[iv)] ${ S}_0^{(1)}=\{J_1,J_2+iJ_1\}+\{K_1,K_2+iK_1\},\ { S}_0^{(2)}=\{J_2,J_3\}+\{K_2,K_3\}$. This does not extend to a conformal superintegrable system. \item[v)] ${ S}_0^{(1)}=(J_1+iJ_2)^2+(K_1+iK_2)^2,\ { S}_0^{(2)}=J_3^2+K_3^2$. This extends to the system $[2,1,1]$, (\ref{V211norm'}) again, equivalent to the generators $ii)$ by a conformal St\"ackel transformation and a change of variable. \item[vi)] ${ S}_0^{(1)}=\{J_3,J_1+iJ_2\}+\{K_3,K_1+iK_2\},\ { S}_0^{(2)}=(J_1+iJ_2)^2+(K_1+iK_2)^2$, which extends to the system $[3,1]$, (\ref{V[31]norm'}). \end{description} \end{itemize} This completes the classification. \begin{example} We describe how apparently distinct superintegrable systems of a fixed type are actually the same. In Case 2 consider the system with generators $\{J_1+iJ_2,J_3\}+(K_1+iK_2)^2,\ (J_1+iJ_2)^2$.
This extends to the conformally superintegrable system $[4]$ with flat space Hamiltonian operator $H_1=\partial_{z{\bar z}}+ V^{(1)}$ where \[ V^{(1)}=2k_3z{\bar z}+2k_4z+k_3{\bar z}^3+3k_4{\bar z}^2+k_1{\bar z}+k_2.\] The system with generators $\{J_1+iJ_2,J_3\}+K_3^2+(K_1+iK_2)^2,\ (J_1+iJ_2)^2$ again extends to the conformally superintegrable system $[4]$. Indeed, replacing $z,{\bar z}$ by $Z, {\bar Z}$ to distinguish the two systems, we find the 2nd flat space Hamiltonian operator $H_2=\partial_{Z{\bar Z}}+ V^{(2)}$ where \[ V^{(2)}=\frac{c_3\ {\rm arcsinh}^3({\bar Z})+3c_4\ {\rm arcsinh}^2({\bar Z})+(2c_3 Z+c_1)\ {\rm arcsinh}({\bar Z})+2c_4 Z+c_2}{\sqrt{1-{\bar Z}^2}}.\] Now we perform a conformal St\"ackel transform on $H_2$ to obtain the new flat space system \[ {\tilde H}_2=\sqrt{1-{\bar Z}^2}\ \partial_{Z{\bar Z}}+ c_3\ {\rm arcsinh}^3({\bar Z})+3c_4\ {\rm arcsinh}^2({\bar Z})\] \[+(2c_3 Z+c_1)\ {\rm arcsinh}({\bar Z})+2c_4 Z+c_2.\] Making the change of variable ${\bar Z}=\sinh W $, we find \[ {\tilde H}_2= \partial_{ZW}+ c_3 W^3+3c_4W^2+(2c_3 Z+c_1)W+2c_4 Z+c_2.\] Thus, with the identifications $Z=z$, $W={\bar z}$, $c_i=k_i$, we see that $H_1\equiv {\tilde H}_2$. \end{example} \subsection{Relation to separation of variables} B\^ocher's analysis \cite{Bocher} involves symbols of the form $[n_1,n_2,..,n_p]$ where $n_1+...+n_p=4$. These symbols are used to define coordinate surfaces as follows. Consider the quadratic forms \begin{equation}\label{ellipsoidalcoords}\Omega =x^2_1+x^2_2+x^2_3+x^2_4=0,\ \Phi =\frac{x^2_1}{ \lambda -e_1} + \frac{x^2_2}{ \lambda -e_2} + \frac{x^2_3}{ \lambda -e_3} + \frac{x^2_4}{ \lambda -e_4}=0.\end{equation} If $e_1,e_2,e_3,e_4$ are pairwise distinct, the elementary divisors of these two forms are denoted by the symbol $[1,1,1,1]$. 
Given a point in 2D flat space with Cartesian coordinates $(x^0,y^0)$, there corresponds a set of tetraspherical coordinates $(x^0_1,x^0_2,x^0_3,x^0_4)$, unique up to multiplication by a nonzero constant. If we substitute these coordinates into expressions (\ref{ellipsoidalcoords}) we can verify that there are exactly 2 roots $\lambda=\rho,\mu$ such that $\Phi=0$. These are elliptic coordinates. It can be verified that they are orthogonal with respect to the metric $ds^2=dx^2+dy^2$ and that they are $R$-separable for the Laplace equations $(\partial^2_x+\partial^2_y)\Theta=0$ or $(\sum_{j=1}^4\partial_{x_j}^2)\Theta=0$. Now consider the potential \[V_{[1,1,1,1]}=\frac{a_1}{ x^2_1} + \frac{a_2}{ x^2_2} + \frac{a_3}{ x^2_3} + \frac{a_4}{ x^2_4}.\] It turns out to be the only possible potential $V$ such that the Laplace equation $(\sum_{j=1}^4\partial_{x_j}^2+V)\Theta=0$ is $R$-separable in elliptic coordinates for {\it all} choices of the parameters $e_j$. The separation is characterized by 2nd order conformal symmetry operators that are linear in the parameters $e_j$. In particular these symmetry operators span a 3-dimensional subspace, so the system $(\sum_{j=1}^4\partial_{x_j}^2+V_{[1,1,1,1]})\Theta=0$ must be conformally superintegrable.
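The count of exactly 2 roots is a consequence of the null cone condition $\Omega=0$: clearing denominators in $\Phi$ gives the polynomial
\[ \prod_{i=1}^4(\lambda-e_i)\,\Phi=\sum_{i=1}^4 x_i^2\prod_{j\ne i}(\lambda-e_j) =\Big(\sum_{i=1}^4x_i^2\Big)\lambda^3+\cdots,\]
whose cubic term vanishes on the null cone, so $\Phi=0$ reduces to a quadratic equation in $\lambda$ with the two roots $\rho,\mu$.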
We can write this as \[H=(x_3+ix_4)^2(\partial ^2_{x_1}+ \partial ^2_{x_2}+ \partial ^2_{x_3}+ \partial ^2_{x_4} +\frac{a_1}{ x^2_1} +\frac {a_2}{ x^2_2} + \frac{a_3}{ x^2_3} + \frac{a_4}{ x^2_4}),\] or in terms of flat space coordinates $x,y$ as \[ H= \partial_x^2+\partial_y^2+\frac{a_1}{x^2}+\frac{a_2}{y^2}+\frac{4a_3}{(x^2+y^2-1)^2}-\frac{4a_4}{(x^2+y^2+1)^2}.\] For the coordinates $s_i,i=1,2,3$ we obtain \[H=(1+s_3)^2(\partial ^2_{s_1}+\partial ^2_{s_2}+\partial ^2_{s_3} -\frac{a_1}{ s^2_1} - \frac{a_2}{ s^2_2} - \frac{a_3}{ s^2_3} -a_4).\] The coordinate curves are described by $[1,1,1,\stackrel{\infty}{1} ]$ (because we can always transform to equivalent coordinates for which $e_4=\infty$) and the corresponding $H\Theta=0$ system is proportional to $S_9$, the eigenvalue equation for the generic potential on the 2-sphere, which separates variables in elliptic coordinates $s^2_i=\frac{(\rho -e_i)(\mu -e_i)}{ (e_i-e_j)(e_i-e_k)}$ where $(e_i-e_j)(e_i-e_k)\neq 0$ and $i,j,k=1,2,3$. The quantum Hamiltonian when written using these coordinates is equivalent to \[{\cal H}=\frac{1}{ \rho -\mu }[P_\rho^2-P_\mu^2] -\sum ^3_{i=1} a_i\frac{(e_i-e_j)(e_i-e_k)}{ (\rho -e_i)(\mu -e_i)},\] where $P_\lambda=\sqrt{\Pi ^3_{i=1}(\lambda -e_i)}\ \partial_\lambda$. \section{B\^ocher contractions} These are contractions of $so(4,\C)$ to itself that are induced by coordinate transformations on the null cone that B\^ocher used to derive the separable coordinate systems for the flat space Laplace and wave equations, \cite{Bocher,KMR1984}. In the following we shall usually list 6 symmetries for each superintegrable system $[1,1,1,1]-[4]$, which is strictly the case for the analogous systems on the 2-sphere. However, these systems are defined on the null cone, which implies extra constraints. Therefore, instead of 6 linearly independent symmetries we have only 3.
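Note that the flat space form of the $[1,1,1,1]$ Hamiltonian given above follows immediately if we fix the projective gauge $T=1$, so that $x_1=2x$, $x_2=2y$, $x_3=x^2+y^2-1$, $x_4=i(x^2+y^2+1)$ and $x_3+ix_4=-2$:
\[ (x_3+ix_4)^2\frac{a_1}{x_1^2}=\frac{a_1}{x^2},\qquad (x_3+ix_4)^2\frac{a_3}{x_3^2}=\frac{4a_3}{(x^2+y^2-1)^2},\qquad (x_3+ix_4)^2\frac{a_4}{x_4^2}=-\frac{4a_4}{(x^2+y^2+1)^2}.\]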
We start with the potential \begin{equation}\label{V[1111]} V_{[1,1,1,1]}=\frac{a_1}{x_1^2}+\frac{a_2}{x_2^2}+\frac{a_3}{x_3^2}+\frac{a_4}{x_4^2},\end{equation} and the system $[1,1,1,1]$ and use successive B\^ocher contractions to derive the systems $[2,1,1],[2,2],[3,1], [4]$ and $[0]$. \subsection{The $[1,1,1,1]$ to $[2,1,1]$ contraction} If two of the $e_i$ in eqns (\ref{ellipsoidalcoords}) become equal, B\^ocher shows that the process of making $e_1\rightarrow e_2$ together with suitable transformations of the $a_i's$ produces a conformally equivalent $H$. This corresponds to the choice of coordinate curves obtained by the B\^ocher limiting process $[1,1,1,1]\to [2,1,1]$, i.e., \[ e_1=e_2+\epsilon ^2,\ x_1\rightarrow \frac{iy_1}{ \epsilon },\ x_2\rightarrow \frac{y_1}{ \epsilon } + \epsilon y_2, \ x_j\rightarrow y_j,j=3,4,\] which results in the pair of quadratic forms \[\Omega =2y_1y_2+y^2_3+y^2_4=0,\ \Phi =\frac{y^2_1}{ (\lambda -e_2)^2}+\frac{2y_1y_2}{(\lambda -e_2)} + \frac{y^2_3}{ (\lambda -e_3)} +\frac {y^2_4}{ (\lambda -e_4)} =0.\] The coordinate curves with $e_4= \infty $ correspond to cyclides with elementary divisors $[2,1,\stackrel{\infty}{1} ]$, \cite{Bromwich}, i.e., $\Phi =\frac{y^2_1}{ (\lambda -e_2)^2}+\frac{2y_1y_2}{ (\lambda -e_2)} + \frac{y^2_3}{ (\lambda -e_3)}=0$. \begin{comment} Indeed, making the substitution $\lambda=\frac{\alpha \lambda' +\beta}{\gamma\lambda'+\delta}$, $ e_i=\frac{\alpha e_i'+\beta}{\gamma e_i'+\delta}$ we do not change the family of surfaces described (see \cite{Bocher}, page 59). 
In particular the second quadratic form becomes \[ \Phi=\frac{y_1^2(\gamma e_1'+\delta)^2}{(\lambda'-e_1')^2(\alpha\delta-\beta\gamma)}+\frac{2y_1y_2}{\lambda'-e_1'}+\frac{y_3^2} {\lambda'-e_3'}+\frac{y_4^2}{\lambda'-e_4'}=0.\] Now if we let $e_1'=\infty$ we obtain essentially $\Phi=\frac{y_1^2\gamma^2}{(\alpha\delta-\beta\gamma)}+\frac{y_3^2}{\lambda'-e_3'}+\frac{y_4^2}{\lambda'-e_4'}=0$, which means that we have degenerate elliptic coordinates of type 1 in the plane with coordinate curves denoted by $[\stackrel{\infty}{2},1,1]$. If we took $e_4'=\infty$ we would obtain the coordinate curves of degenerate elliptic cordinates on the sphere with coordinate curves denoted by $[2,1,\stackrel{\infty}{1}]$. If we take $e_4'= \infty$ in generic tetracyclic coordinates we obtain elliptic coordinates on the 3-sphere with cordinate curves denoted by $[1,1,1,\stackrel{\infty}{1}]$. Our subsequent studies elaborate on these observations. \end{comment} Note that the composite linear coordinate mapping \[x_1+ix_2=\frac{i\sqrt{2}}{\epsilon}(x'_1+ix'_2)+\frac{i\epsilon}{\sqrt{2}}(x'_1-ix'_2),\ x_1-ix_2=-\frac{i\epsilon}{\sqrt{2}}(x'_1-ix'_2),\] \[x_3=x'_3,\ x_4=x'_4,\] satisfies $\lim_{\epsilon\to 0} \sum_{j=1}^4 x_j^2= \sum_{j=1}^4 {x'}_j^2=0$, and induces a contraction of the Lie algebra $so(4,\C)$ to itself. An explicit computation yields \[ L'_{12}=L_{12},\ L'_{13}=-\frac{i}{\sqrt{2}\ \epsilon}(L_{13}-iL_{23})-\frac{i\ \epsilon}{\sqrt{2}}L_{13},\ L'_{23}=-\frac{i}{\sqrt{2}\ \epsilon}(L_{13}-iL_{23})-\frac{\ \epsilon}{\sqrt{2}}L_{13}\] \[ L'_{34}=L_{34},\ L'_{14}=-\frac{i}{\sqrt{2}\ \epsilon}(L_{14}-iL_{24})-\frac{i\ \epsilon}{\sqrt{2}}L_{14},\ L'_{24}=-\frac{i}{\sqrt{2}\ \epsilon}(L_{14}-iL_{24})-\frac{\ \epsilon}{\sqrt{2}}L_{14}.\] This is the B\^ocher contraction $[1,1,1,1]\to [2,1,1]$. \subsubsection{Conformal St\"ackel transforms of the [1,1,1,1] system} \label{3.1.1} We write the parameters $a_j$ defining the potential $V_{[1,1,1,1]}$ as a vector: $(a_1,a_2,a_3,a_4)$. 
\begin{enumerate} \item The potentials $(1,0,0,0)$, and any permutation of the indices $a_j$, generate conformal St\"ackel transforms to $S9$. \item The potentials $(1,1,0,0)$ and $(0,0,1,1)$ generate conformal St\"ackel transforms to $S7$. \item The potentials $(1,1,1,1)$, $(0,1,0,1)$, $(1,0,1,0)$, $(0,1,1,0)$ and $(1,0,0,1)$ generate conformal St\"ackel transforms to $S8$. \item The potentials $(a_1,a_2,0,0),\ a_1a_2\ne 0,\ a_1\ne a_2$, and any permutation of the indices $a_j$, generate conformal St\"ackel transforms to $D4B$. \item The potentials $(1,1,a,a),\ a\ne 0,1$, and any permutation of the indices $a_j$, generate conformal St\"ackel transforms to $D4C$. \item Each potential not proportional to one of these must generate a conformal St\"ackel transform to a superintegrable system on a Koenigs space in the family $K[1,1,1,1]$. \end{enumerate} Now under the contraction $[1,1,1,1]\to [2,1,1]$ we have \[ V_{[1,1,1,1]} \stackrel{\epsilon\ \to\ 0}{\Longrightarrow}\ V_{[2,1,1]} \] where \begin{equation} \label{V[211]} V_{[2,1,1]}=\frac{b_1}{(x'_1+ix'_2)^2}+\frac{b_2(x'_1-ix'_2)}{(x'_1+ix'_2)^3}+\frac{b_3}{{x'_3}^2}+\frac{b_4}{{x'_4}^2},\end{equation} and \[ a_1=-\frac12(\frac{b_1}{\epsilon^2}+\frac{b_2}{2\epsilon^4}),\ a_2=- \frac{b_2}{4\epsilon^4},\ a_3=b_3,\ a_4=b_4.\] \begin{comment} We established the potential limit by direct computation. However this contraction can be understood in terms of the generic elliptic coordinates.
We put $e_1=0 , e_2=\epsilon ^2$ and $e_3=e_3$ in the coordinates \[s^2_1 = \frac{(x_1-e_1)(x_2-e_1)}{ (e_1-e_2)(e_1-e_3)},\ s^2_2 = \frac{(x_1-e_2)(x_2-e_2)}{ (e_2-e_1)(e_2-e_3)},\ s^2_3 = \frac{(x_1-e_3)(x_2-e_3)}{ (e_3-e_2)(e_3-e_1)}.\] We take as the generic potential $V=\frac{ a_1}{ s^2_1} + \frac{a_2}{ s^2_2} + \frac{a_3}{ s^2_3}$ and subject the coefficents $a_i$ to the contraction transformations \[ a_1\rightarrow -\frac{b_1}{ \epsilon ^2} + \frac{b_2}{\epsilon ^4},\ a_2\rightarrow \frac{b_2}{ \epsilon ^4},\] Under $\epsilon \rightarrow 0$ we obtain \[V\rightarrow \frac{b_1e_3}{ x_1x_2} +b_2 [\frac{-e_3(x_1+x_2)+x_1x_2}{ x^2_1x^2_2}] + \frac{e^2_3a_3}{ (x_1-e_3)(x_2-e_3)}.\] This form of the potential is separable in type $[2,1,1]$ coordinates, as can be seen from the relations \[\frac{(x_1-x_2)}{ x_1x_2} = \frac{1}{ x_2}-\frac {1}{x_1},\ \frac{(x^2_1-x^2_2)}{ x^2_1 x^2_2} = \frac{1}{x^2_2} - \frac{1}{x^2_1},\] \[ \frac{(x_1 -x_2)}{ (x_1-e_3)(x_2-e_3)} =\frac{1}{ x_2-e_3} -\frac{1}{ x_1-e_3}.\] Consider another case, viz $e_1=0,\ e_2=\epsilon$ and $e_3=A\epsilon$. From the transformations \[a_1\rightarrow \frac{c_1}{ \epsilon ^4} + \frac{c_2}{ \epsilon ^6} + \frac{b_3}{ \epsilon ^8},\ a_2\rightarrow \frac{c_2}{ \epsilon ^6(A-1)} + \frac{c_3A^2}{ \epsilon ^8(A-1)^2},\ a_3\rightarrow \frac{c_3}{ \epsilon ^8(A-1)^2},\] we obtain the limit of $V$ as \[V \rightarrow \frac{Ac_1}{ x_1x_2} - \frac{Ac_2(x_1+x_2)}{ x^2_1x^2_2} + \frac{A^2c_3(x^2_1+x_1x_2+x^2_2)}{ x^3_1x^3_2}.\] What we deduce from these examples is that if $e_1$ is a root of the polynomial of the corresponding degenerate elliptic system on the complex sphere and it has multplicity $p$, there are terms in the potential of the form \[\frac{1}{ (x_1-x_2)}[\frac{1}{ (x_1-e_1)^s} -\frac {1}{ (x_2-e_1)^s}],\ s=1\cdots p\] and the transformation of $a_i$ can be determined. 
\end{comment} \begin{examples} Using Cartesian coordinates $x,y$, we consider the Hamiltonian \[ H=\partial^2_x+\partial^2_y+ \frac{a_1}{ x^2} + \frac{a_2}{ y^2} + \frac{4a_3}{ (x^2+y^2-1)^2} - \frac{4a_4}{ (x^2+y^2+1)^2}.\] Multiplying on the left by $x^2$ we obtain \[\hat H=x^2(\partial^2_x+\partial^2_y)+a_1 + a_2 \frac{x^2}{ y^2}+ 4a_3\frac{x^2}{ (x^2+y^2-1)^2} - 4a_4\frac{x^2}{ (x^2+y^2+1)^2},\] the case ${\bf a}=(1,0,0,0)$. This becomes more transparent if we introduce variables $x=e^{-a},y=r$. The Hamiltonian $\hat H$ can be written \[\hat H=\partial^2_a+\partial_a+e^{-2a}\partial^2_r + a_1+ a_2\frac {e^{-2a}}{ r^2} + a_3 \frac{4}{ (e^{-a}+e^a(r^2-1))^2} - a_4\frac{4}{ (e^{-a}+e^a(r^2+1))^2}.\] Recalling horospherical coordinates on the complex 2-sphere, viz. \[s_1=\frac{i}{ 2}(e^{-a}+(r^2+1)e^a),\ s_2=re^a,\ s_3=\frac{1}{ 2}(e^{-a}+(r^2-1)e^a)\] we see that the Hamiltonian $\hat H$ can be written as \[ \hat H=\partial^2_{s_1}+ \partial^2_{s_2}+ \partial^2_{s_3}+ a_1+ \frac{a_2}{ s^2_2} + \frac{a_3}{ s^2_3} + \frac{a_4}{ s^2_1},\] and this is explicitly the superintegrable system $S_9$. Now consider the case ${\bf a}=(0,1,0,1)$ which for $x=e^a\sin\varphi,\ y=e^a\cos\varphi $ and conformal St\"ackel multiplier \[ (\frac{1}{ y^2} - \frac{4}{ (x^2+y^2+1)^2})=e^{-2a}(\frac{1}{ \cos ^2\varphi } -\frac {1}{ \cosh^2a})\] yields the Hamiltonian \[\frac{1}{ (\frac{1}{ \cos ^2\varphi } - \frac{1}{ \cosh^2a})}\left [\partial^2_a+\partial^2_\varphi + \frac{a_1}{ \sin ^2\varphi } + \frac{a_2+a_4}{2}(\frac{1}{ \cos ^2\varphi } + \frac{1}{ \cosh^2a}) + \frac{a_3}{ \sinh^2a}\right]+ \] \[ \frac{a_2-a_4}{ 2},\] which is just $S_8$ in elliptic coordinates of type 1, the coordinates on the 2-sphere being taken as \[s_1+is_2 = \frac{1}{ \cos\varphi \cosh a},\ s_1-is_2 = \frac{\cos\varphi }{ \cosh a} + \frac{\cosh a}{ \cos\varphi } - \frac{1}{ \cos\varphi \cosh a},\ s_3=i\tan\varphi \tanh a,\] where $s^2_1+s^2_2+s^2_3=1$.
Now consider the case ${\bf a}=(1,1,0,0)$, with \[ x=e^{ia/2}\cos b,\ y=e^{ia/2} \sin b.\] If instead we use the variable $B$ where $\sin 2b= \frac{1}{ \cosh B}$ then the Hamiltonian can be written \[\partial^2_B+\tanh B\, \partial_B- \frac{1}{ \cosh^2 B} \partial^2_a + b_1\tanh B + b_2 \frac{1}{\sinh^2a\cosh^2B} + b_3 \frac{1}{ \cosh^2a\cosh^2B} +b_0\] which is directly St\"ackel equivalent to $S_7$. A suitable choice of coordinates on the complex 2-sphere is \[ s_1=\cosh a\cosh B,\ s_2=i\cosh a\sinh B,\ s_3=i\sinh a.\] For the case ${\bf a}=(b_1,b_2,0,0)$ the St\"ackel multiplier (potential that induces the St\"ackel transform) is $b_1/x^2+b_2/y^2$. In terms of the coordinates $x=e^v\cos\theta,y=e^v\sin\theta$ the Hamiltonian takes the form \[ H=\frac{\sin^22\theta}{2[(b_2-b_1)\cos 2\theta+(b_1-b_2)]}\left[\partial_\theta^2+\partial_v^2+k+\frac{a_3}{\sinh^2v}+\frac{a_4}{\cosh^2v}\right]\] for $k$ a parameter. This is equivalent to $D4B$. For the case ${\bf a}=(0,0,b_3,b_4)$ the St\"ackel multiplier is $b_3/(x^2+y^2-1)^2+b_4/(x^2+y^2+1)^2$. In terms of the coordinates $x=-ie^{iu}\cosh v,y=e^{iu}\sinh v$ the Hamiltonian again takes a form equivalent to $D4B$. For the case ${\bf a}=(1,1,a,a)$, using polar coordinates as directly above, we see that the Hamiltonian takes the form \[ H=\frac{1}{[\frac{1}{\sin^22\theta}+\frac{a}{\sinh^22v}]}\left[\partial_\theta^2+\partial_v^2+\frac{a_1}{\cos^2\theta}+\frac{a_2}{\sin^2\theta}+ \frac{a_3}{\sinh^2v}+\frac{a_4}{\cosh^2v}\right],\] equivalent to $D4C$. From these examples we note that it is always possible to choose coordinates for which the entire Hamiltonian is a rational function. 
\end{examples} \subsubsection{[1,1,1,1] to [2,1,1] contraction and St\"ackel transforms}\label{3.1.2} For fixed $A_j$, $B_j$, $D_j$ we have the expansions \[ V^A_{[1,1,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3}{x_3^2}+\frac{A_4}{x_4^2}\] \[ =\frac{A_3}{{x'}_3^2}+\frac{A_4}{{x'}_4^2}+\frac{2(A_2-A_1)\epsilon^2}{(x'_1+ix'_2)^2} +\frac{4A_2(-x'_1+ix'_2)\epsilon^4}{(x'_1+ix'_2)^3}+O(\epsilon^6),\] \[ V^A_{[2,1,1]}=\frac{A_1}{(x_1+ix_2)^2}+\frac{A_2(x_1-ix_2)}{(x_1+ix_2)^3}+\frac{A_3}{x_3^2}+\frac{A_4}{x_4^2}\] \[ =\frac{A_3}{{x'_3}^2}+\frac{A_4}{{x'}_4^2}-\frac{A_1}{2(x'_1+ix'_2)^2}\epsilon^2+\frac{(A_2+2A_1)(x'_1-ix'_2)}{4(x'_1+ix'_2)^3} \epsilon^4+O(\epsilon^6),\] \[ V^A_{[2,2]}=\frac{A_1}{(x_1+ix_2)^2}+\frac{A_2(x_1-ix_2)}{(x_1+ix_2)^3}+\frac{A_3}{(x_3+ix_4)^2}+\frac{A_4(x_3-ix_4)}{(x_3+ix_4)^3}\] \[ =\frac{A_3}{(x'_3+ix'_4)^2}+\frac{A_4(x'_3-ix'_4)}{(x'_3+ix'_4)^3}-\frac{A_1}{2(x'_1+ix'_2)^2}\epsilon^2 \] \[+\frac{(A_2+2A_1)(x'_1-ix'_2)}{4(x'_1+ix'_2)^3}\epsilon^4+O(\epsilon^6),\] \[ V^B_{[3,1]}=\frac{B_1}{(x_1+ix_2)^2}+\frac{B_2x_1}{(x_1+ix_2)^3}+\frac{B_3(4x_3^2+x_4^2)}{(x_1+ix_2)^4}+\frac{B_4}{x_4^2}\] \[=\frac{B_3(4{x'_3}^2+{x'_4}^2)}{(x'_1+ix'_2)^4}+\frac{B_4}{{x'_4}^2}-\frac{(B_2+2B_1)}{4(x'_1+ix'_2)^2}\epsilon^2+O(\epsilon^4),\] \[ V^D_{[4]}=-\frac{D_1}{2(x'_1+ix'_2)^2}\epsilon^2+\frac{i\sqrt{2}}{4}\ \frac{D_2(x'_3+ix'_4)-2D_3(x'_3-ix'_4)}{(x'_1+ix'_2)^3}\epsilon^3\] \[ +\left[ \frac{3D_3(x'_3+ix'_4)^2}{(x'_1+ix'_2)^4}-\frac{(D_1+2D_4)({x'_3}^2+{x'_4}^2)}{2(x'_1+ix'_2)^4}\right]\epsilon^4+O(\epsilon^5),\ ({\rm see}\ (\ref{V[4]})).\] \subsubsection{Conformal St\"ackel transforms of the [2,1,1] system} \label{3.1.3} We write the potential in the normalized form \begin{equation}\label{V211norm} V'_{[2,1,1]}=\frac{a_1}{x_1^2}+\frac{a_2}{x_2^2}+\frac{a_3(x_3-ix_4)}{(x_3+ix_4)^3}+\frac{a_4}{(x_3+ix_4)^2},\end{equation} and designate it via the vector $(a_1,a_2,a_3,a_4)$.
\begin{enumerate} \item The potential $(1,1,0,0)$ generates a conformal St\"ackel transform to $S4$. \item The potentials $(1,0,0,0)$, $(0,1,0,0)$ generate conformal St\"ackel transforms to $S2$. \item The potential $(0,0,0,1)$ generates a conformal St\"ackel transform to $E1$. \item The potential $(0,0,1,0)$ generates a conformal St\"ackel transform to $E16$. \item Potentials $(a_1,a_2,0,0)$, with $ a_1a_2\ne 0$, $a_1\ne a_2$, generate conformal St\"ackel transforms to $D4A$. \item Potentials $(0,0,a_3,a_4)$, with $a_3a_4\ne 0$, generate conformal St\"ackel transforms to $D3B$. \item Potentials $(a,0,0,1)$ and $(0,a,0,1)$ with $a\ne 0$ generate conformal St\"ackel transforms to $D2B$. \item Potentials $(1,1,a,0)$ with $a\ne 0$ generate conformal St\"ackel transforms to $D2C$. \item Each potential not proportional to one of these must generate a conformal St\"ackel transform to a superintegrable system on a Koenigs space in the family $K[2,1,1]$. \end{enumerate} \noindent {\bf Basis of conformal symmetries for original system}: Let $H_0=\sum_{j=1}^4\partial_{x_j}^2$.
A basis is \[ H_0+V_{[1,1,1,1]},\ Q_{12},\ Q_{13},\] where \[Q_{jk}=L_{jk}^2+a_j\frac{x_k^2}{x_j^2}+a_k\frac{x_j^2}{x_k^2},\ 1\le j<k\le 4.\] \noindent {\bf Contraction of basis}: Using the notation of (\ref{V[211]}) we have \[ H_0+V_{[1,1,1,1]}\to H'_0+V_{[2,1,1]},\] \[ Q'_{12}=Q_{12}-\frac{b_1}{2\epsilon^2}-\frac{b_2}{2\epsilon^4}=(L'_{12})^2+b_1(\frac{x_1'-ix_2'}{x_1'+ix_2'})+b_2(\frac{x_1'-ix_2'}{x_1'+ix_2'})^2,\] \[ Q'_{13}=2\epsilon^2 Q_{13}=(L'_{23}-iL'_{13})^2+\frac{b_2{x'_3}^2}{(x'_1+ix'_2)^2}-\frac{b_3(x'_1+ix'_2)^2}{{x'_3}^2},\] If we apply the same $[1,1,1,1]\to [2,1,1]$ contraction to the $[2,1,1]$ system, the system contracts to itself, but with parameters $c_1,\cdots,c_4$ where \[ b_1=-\frac{2c_1}{\epsilon^2},\ b_2=\frac{c_1}{\epsilon^2}+\frac{4c_2}{\epsilon^4},\ b_3=c_3,\ b_4=c_4.\] If we apply the same contraction to the $[2,2]$ system, the system contracts to itself, but with altered parameters, and to $[0]$. If we apply the same contraction to the $[3,1]$ system, the system contracts to $(1)$ or to itself. If we apply the same contraction to the $[4]$ system the system contracts to $(2)$ or to a system with potential \begin{equation}\label{V[0]} V[0]= \frac{c_1}{(x'_1+ix'_2)^2}+\frac{c_2x'_3+c_3x'_4}{(x'_1+ix'_2)^3}+c_4\frac{{x'}_3^2+{x'}_4^2}{(x'_1+ix'_2)^4}.\end{equation} If we apply this same contraction to the $[0]$ system, (\ref{V[0]norm}) it contracts to itself with altered parameters. If we apply this same contraction to the $(1)$ system, (\ref{V(1)norm}) it contracts to $(2)$ or to itself with altered parameters. If we apply this same contraction to the $(2)$, (\ref{V(2)norm}) it contracts to itself with altered parameters. 
\subsubsection{Conformal St\"ackel transforms of the [0] system} We write the potential $V[0]$ in the normalized form \begin{equation}\label{V[0]norm} V'_{[0]}=\frac{c_1}{(x_3+ix_4)^2}+\frac{c_2x_1+c_3x_2}{(x_3+ix_4)^3}+c_4\frac{x_1^2+x_2^2}{(x_3+ix_4)^4},\end{equation} and designate it by the vector $(c_1,c_2,c_3,c_4)$. \begin{enumerate} \item The potentials $(\frac{c_2^2+c_3^2}{4},c_2,c_3,1)$ generate conformal St\"ackel transforms to $E20$. \item The potentials $(c_1,1,\pm i,0)$ generate conformal St\"ackel transforms to $E11$. \item The potential $(1,0,0,0))$ generates a conformal St\"ackel transform to $E3'$. \item Potentials $(c_1,c_2,c_3,0)$, with $c_2^2+c_3^2\ne 0$ generate conformal St\"ackel transforms to $D1C$. \item Potentials $(c_1,c_2,c_3,1)$, with $c_1\ne \frac{c_2^2+c_3^2}{4} $ generate conformal St\"ackel transforms to $D3A$. \item Each potential not proportional to one of these must generate a conformal St\"ackel transform to a superintegrable system on a Koenigs space in the family $K[0]$. 
\end{enumerate} \subsection{[1,1,1,1] to [2,2]:} \[ L'_{12}=L_{12},\ L'_{34}=L_{34}, \ L'_{24}+L'_{13}=L_{24}+L_{13},\] \[ L'_{24}-L'_{13}=(\epsilon^2+\frac{1}{\epsilon^2})L_{13}-\frac{1}{\epsilon^2}(iL_{14}-L_{24}-iL_{23}),\] \[ L'_{23}-L'_{14}=2L_{23}+iL_{13}-iL_{24},\] \[ L'_{23}+L'_{14}=i\left((\epsilon^2-\frac{1}{\epsilon^2})L_{13}+\frac{1}{\epsilon^2}(iL_{14}+L_{24}+iL_{23})\right).\] \noindent Coordinate implementation \[ x_1=\frac{i}{\sqrt{2}\ \epsilon}(x'_1+ix'_2),\ x_2=\frac{1}{\sqrt{2}}\left(\frac{x'_1+ix'_2}{\epsilon}+\epsilon \ (x'_1-ix'_2)\right),\] \[ x_3=\frac{i}{\sqrt{2}\ \epsilon}(x'_3+ix'_4),\ x_4=\frac{1}{\sqrt{2}}\left(\frac{x'_3+ix'_4}{\epsilon}+\epsilon \ (x'_3-ix'_4)\right),\] \noindent {\bf Limit of 2D potential}: \[ V_{[1,1,1,1]} \stackrel{\epsilon\ \to\ 0}{\Longrightarrow}\ V_{[2,2]}\] where \begin{equation}\label{V[22]} V_{[2,2]}=\frac{b_1}{(x'_1+ix'_2)^2}+\frac{b_2(x'_1-ix'_2)}{(x'_1+ix'_2)^3} +\frac{b_3}{(x'_3+ix'_4)^2}+\frac{b_4(x'_3-ix'_4)}{(x'_3+ix'_4)^3},\end{equation} and \[ a_1=-\frac12\frac{b_1}{\epsilon^2}-\frac{b_2}{4\epsilon^4},\ a_2=- \frac{b_2}{4\epsilon^4},\ a_3=-\frac12\frac{b_3}{\epsilon^2}-\frac{b_4}{4\epsilon^4}, \ a_4=- \frac{b_4}{4\epsilon^4}.\] \subsubsection{Conformal St\"ackel transforms of the [2,2] system} We designate the potential (\ref{V[22]}) by the vector $(b_1,b_2,b_3,b_4)$. \begin{enumerate} \item The potential $(0,0,1,0)$ generates a conformal St\"ackel transform to $E8$. \item The potential $(0,0,0,1)$ generates a conformal St\"ackel transform to $E17$. \item Potentials $(1,0,a,0)$ generate conformal St\"ackel transforms to $E7$. \item Potentials $(0,1,0,a)$ generate conformal St\"ackel transforms to $E19$. \item Potentials $(0,0,b_3,b_4)$, with $b_3b_4\ne 0$ generate conformal St\"ackel transforms to $D3C$. \item Potentials $(b_1,b_2,0,0)$ with $b_1b_2\ne 0$ generate conformal St\"ackel transforms to $D3D$.
\item Each potential not proportional to one of these must generate a conformal St\"ackel transform to a superintegrable system on a Koenigs space in the family $K[2,2]$. \end{enumerate} \noindent {\bf Contracted basis}: \[H_0+V_{[1,1,1,1]}\to H'_0+V_{[2,2]},\] \[Q_{12}-\frac{b_2}{2\epsilon^4}-\frac{b_1}{2\epsilon^2}\to Q_1'={ L'}_{12}^2+b_1\frac{x'_1-ix'_2}{x'_1+ix'_2}+b_2\frac{(x'_1-ix'_2)^2}{(x'_1+ix'_2)^2},\] \[ 4\epsilon^4 Q_{13}\to Q_2'=(L'_{13}+iL'_{14}+iL'_{23}-L'_{24})^2-b_2\frac{(x'_3+ix'_4)^2}{(x'_1+ix'_2)^2}-b_4\frac{(x'_1+ix'_2)^2}{(x'_3+ix'_4)^2},\] Note also that \[ \epsilon^2(Q_{23}-Q_{14})\to Q_3'=-\frac{i}{2}\{L'_{14}-L'_{23},iL'_{23}+L'_{13}-L'_{24}+iL'_{14}\}-\frac{b_1}{2}\frac{(x'_3+ix'_4)^2}{(x'_1+ix'_2)^2}\] \[-b_2\frac{(x'_2x'_4 +x'_1x'_3)(x'_3+ix'_4)}{(x'_1+ix'_2)^3}+\frac{b_3}{2}\frac{(x'_1+ix'_2)^2}{(x'_3+ix'_4)^2}+ b_4\frac{(x'_2x'_4 +x'_1x'_3)(x'_1+ix'_2)}{(x'_3+ix'_4)^3}\] If we apply the same $[1,1,1,1]\to [2,2]$ contraction to the $[2,1,1]$ system with potential parameters $k_1,\cdots,k_4$, the system contracts to the $[2,2]$ potential with parameters $b_1,\cdots,b_4$, where \[ k_1=-\frac{2b_1}{\epsilon^2},\ k_2=\frac{4b_3}{\epsilon^4},\ k_3=-\frac{b_2}{2\epsilon^2}-\frac{b_4}{4\epsilon^4},\ k_4=-\frac{b_4}{4\epsilon^4},\] or to a special case of $E15$. If we apply the same contraction to the $[2,2]$ system we recover the same system but with altered parameters, or $[0]$.
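The cancellation of the divergent terms in this contraction can be checked symbolically. The following SymPy sketch (an illustrative check, not part of the original derivation; `X1`, `X2` stand for $x'_1,x'_2$) substitutes the $[1,1,1,1]\to[2,2]$ coordinates and parameter scalings into the first two terms of $V_{[1,1,1,1]}$ and confirms that the $\epsilon\to 0$ limit is the first half of $V_{[2,2]}$; the $(x_3,x_4)$ pair behaves identically with $b_3,b_4$.

```python
import sympy as sp

# [1,1,1,1] -> [2,2] coordinates, first pair only.
ep = sp.symbols('epsilon', positive=True)
X1, X2, b1, b2 = sp.symbols('X1 X2 b1 b2')
z, zb = X1 + sp.I*X2, X1 - sp.I*X2
x1 = sp.I*z/(sp.sqrt(2)*ep)
x2 = (z/ep + ep*zb)/sp.sqrt(2)

# Parameter scalings a_1, a_2 in terms of b_1, b_2, as in the text.
a1 = -b1/(2*ep**2) - b2/(4*ep**4)
a2 = -b2/(4*ep**4)

# The 1/epsilon^2 poles of a1/x1^2 and a2/x2^2 cancel; the epsilon -> 0
# limit should reproduce b1/z^2 + b2*zb/z^3.
V = sp.cancel(a1/x1**2 + a2/x2**2)
V0 = sp.series(V, ep, 0, 1).removeO()
target = b1/z**2 + b2*zb/z**3
ok = sp.simplify(V0 - target) == 0
print(ok)
```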
If we apply the same contraction to the superintegrable $[3,1]$ system in the form \[V[3,1]'=\frac{k_1}{(x_1+ix_2)^2}+\frac{k_2x_3}{(x_1+ix_2)^3}+k_3\frac{(4x_3^2+x_4^2)}{(x_1+ix_2)^4}+\frac{k_4}{x_4^2},\] the system contracts to a special case of $E15$, or to one with potential of the form \begin{equation}\label{V(1)} V(1)=\frac{c_1}{(x'_1+ix'_2)^2}+\frac{c_2}{(x'_3+ix'_4)^2} +c_3\frac{x'_3+ix'_4}{(x'_1+ix'_2)^3}+c_4\frac{(x'_3+ix'_4)^2}{(x'_1+ix'_2)^4}.\end{equation} It admits two first-order symmetries and is St\"ackel equivalent to special cases of the Euclidean superintegrable system $E15$ via transforms $(x'_1+ix'_2)^2$ or $(x'_3+ix'_4)^2$. If we apply the same contraction to the superintegrable $[4]$ system we get a system conformally equivalent to (\ref{V(2)norm}). This admits a first-order symmetry and goes to a special case of $E15$ by a conformal St\"ackel transform. If we apply this same contraction to the $[0]$ system, (\ref{V[0]norm}) it contracts to itself with altered parameters. If we apply this same contraction to the $(1)$ system, (\ref{V(1)norm}) it contracts to itself with altered parameters, or to a special case of $E15$. If we apply this same contraction to the $(2)$ system, (\ref{V(2)norm}) it contracts to itself with altered parameters.
\subsubsection{[1,1,1,1] to [2,2] contraction and St\"ackel transforms} For fixed $A_j$ we have the expansions \[ V^A_{[1,1,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3}{x_3^2}+\frac{A_4}{x_4^2}\] \[ =\frac{2(A_2+A_4-A_1-A_3)\epsilon^2}{(x'_1+ix'_2)^2}+ \left(\frac{4A_4(-x'_3+ix'_4)}{(x'_3+ix'_4)^3}+\frac{4A_2(-x'_1+ix'_2)}{(x'_1+ix'_2)^3}\right)\epsilon^4\] \[ +\left(\frac{6A_4(-x'_3+ix'_4)^2}{(x'_3+ix'_4)^4}+\frac{6A_2(-x'_1+ix'_2)^2}{(x'_1+ix'_2)^4}\right)\epsilon^6+O(\epsilon^8),\] \[ V^A_{[2,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3(x_3-ix_4)}{(x_3+ix_4)^3}+\frac{A_4}{(x_3+ix_4)^2}\] \[ = \left(\frac{2(A_2-A_1)}{(x'_1+ix'_2)^2}-\frac{A_4}{2(x'_3+ix'_4)^2}\right)\epsilon^2\] \[+\left(-\frac{4A_2(x'_1-ix'_2)}{(x'_1+ix'_2)^3}+\frac{(A_3+2A_4)(x'_3-ix'_4)}{4(x'_3+ix'_4)^3}\right)\epsilon^4+O(\epsilon^6),\] \[ V^A_{[2,2]}=\frac{A_1}{(x_1+ix_2)^2}+\frac{A_2(x_1-ix_2)}{(x_1+ix_2)^3} +\frac{A_3}{(x_3+ix_4)^2}+\frac{A_4(x_3-ix_4)}{(x_3+ix_4)^3}\] \[ =-\frac12\left(\frac{A_1}{(x'_1+ix'_2)^2}+\frac{A_3}{(x'_3+ix'_4)^2}\right)\epsilon^2\] \[ +\frac14\left(\frac{(A_2+2A_1)(x'_1-ix'_2)}{(x'_1+ix'_2)^3}+\frac{(A_4+2A_3)(x'_3-ix'_4)}{(x'_3+ix'_4)^3}\right)\epsilon^4+O(\epsilon^6),\] \subsection{[2,1,1] to [3,1]} \noindent {\bf Coordinate implementation}: \[ x_1+ix_2=-\frac{i\sqrt{2}\ \epsilon}{2}x_2' +\frac{(i x'_1- x'_3)}{\epsilon},\] \[x_1-ix_2=-\epsilon \ (x_3'+ix'_1) +\frac{3i\sqrt{2}x'_2}{4\epsilon}+\frac12 \frac{(ix'_1- x'_3)}{\epsilon^3},\] \[ x_3=-\frac12 x'_2-\frac{\sqrt{2}}{2} \frac{(x'_1+i x'_3)}{\epsilon^2},\ x_4=x_4'.\] \[ L'_{24}=\frac{\sqrt{2}i}{2\epsilon}(L_{14}+iL_{24})-L_{34},\ L'_{14}+iL'_{34}=-i\epsilon\ (L_{14}+iL_{24}),\] \[ L'_{14}-iL'_{34}=\frac{1}{\epsilon}\left(iL_{14}(1+\frac{1}{2\epsilon^2})+L_{24}(1-\frac{1}{2\epsilon^2})-\frac{\sqrt{2}}{\epsilon}L_{34}\right),\] \[ L'_{13}=-L_{12}-2\sqrt{2}\ L_{13}\,(\epsilon+2\epsilon^3),\] \[ \ L'_{23}+iL'_{12}=4\epsilon^3 L_{13},\ 
L'_{23}-iL'_{12}=(2\sqrt{2}-\frac{\sqrt{2}}{\epsilon^2})L_{12}\] \[+(8\epsilon^3+4\epsilon-\frac{2}{\epsilon}+\frac{1}{2\epsilon^3})L_{13}+\frac{i}{2\epsilon^3}L_{23}.\] \noindent {\bf Limit of 2D potential}: \[ V_{[2,1,1]} \stackrel{\epsilon\ \to\ 0}{\Longrightarrow}\ V_{[3,1]},\] where \begin{equation}\label{V[31]} V_{[3,1]}=\frac{c_1}{(x'_1+ix'_3)^2}+\frac{c_2x'_2}{(x'_1+ix'_3)^3} +\frac{c_3(4{x'_2}^2+{x'_4}^2)}{(x'_1+ix'_3)^4}+\frac{c_4}{{x'_4}^2},\end{equation} and \[ b_1=\frac{c_3}{\epsilon^6}+\frac{\sqrt{2}\ c_2}{4\epsilon^4}-\frac{c_1}{\epsilon^2},\ b_2=- \frac{c_3}{\epsilon^4}-\frac{\sqrt{2}\ c_2}{2\epsilon^2},\ b_3=\frac{c_3}{4\epsilon^8},\ b_4=c_4.\] \subsubsection{Conformal St\"ackel transforms of the [3,1] system} We write potential $V_{[3,1]}$ in the normalized form \begin{equation}\label{V[31]norm} V'_{[3,1]}=\frac{a_1}{(x_3+ix_4)^2}+\frac{a_2x_1}{(x_3+ix_4)^3} +\frac{a_3(4{x_1}^2+{x_2}^2)}{(x_3+ix_4)^4}+\frac{a_4}{{x_2}^2},\end{equation} and designate it $(a_1,a_2,a_3,a_4)$. \begin{enumerate} \item The potential $(0,0,0,1)$ generates a conformal St\"ackel transform to $S1$. \item The potential $(1,0,0,0)$ generates a conformal St\"ackel transform to $E2$. \item The potential $(a,1,0,0)$ generates a conformal St\"ackel transform to $D1B$. \item The potential $(0,0,1,0)$ generates a conformal St\"ackel transform to $D2A$. \item Each potential not proportional to one of these must generate a conformal St\"ackel transform to a superintegrable system on a Koenigs space in the family $K[3,1]$. 
\end{enumerate} \noindent {\bf Basis of conformal symmetries for original system}: \[ H_0+V_{[2,1,1]},\] \[ Q_{12}=(L_{12})^2+b_1(\frac{x_1-ix_2}{x_1+ix_2})+b_2(\frac{x_1-ix_2}{x_1+ix_2})^2,\] \[ Q_{13}=(L_{23}-iL_{13})^2+\frac{b_2{x_3}^2}{(x_1+ix_2)^2}-\frac{b_3(x_1+ix_2)^2}{{x_3}^2},\] \noindent {\bf Contraction of basis}: \[ H_0+V_{[2,1,1]}\to H_0'+V_{[3,1]},\] \[ Q'_{12}=-2\epsilon^4\ Q_{12}+\frac{c_3}{2\epsilon^4}-c_1=(L'_{12}-iL'_{23})^2+\frac{c_2x_2'}{x_1'+ix_3'} +\frac{4c_3{x_2'}^2}{(x_1'+ix_3')^2},\] \[ Q'_{13}=-\frac{\sqrt{2}}{4}(Q_{13}+2\epsilon^2 Q_{12}-\frac{3c_3}{2\epsilon^6}-\frac{\sqrt{2}\ c_2}{4\epsilon^4}+c_1)=\] \[\frac12\{ L'_{13},L'_{23}+iL'_{12}\} +\frac{ c_1x'_2}{x'_1+ix'_3}+\frac{\ c_2({x'_4}^2+4{x'_2}^2)}{4(x'_1+ix'_3)^2}+\frac{2 c_3x'_2({x'_4}^2+2{x'_2}^2)}{(x'_1+ix'_3)^3},\] If we apply the same $[2,1,1]\to [3,1]$ contraction to the $[1,1,1,1]$ system, the system contracts to the $[3,1]$ potential, but with parameters $c_1,\cdots,c_4$ where \[ a_1=\frac{c_1}{\epsilon^8}+c_2\ (\frac{16}{\epsilon^{10}}+\frac{1}{\epsilon^{12}}),\ a_2=\frac{c_2}{\epsilon^{12}},\ a_3=\frac{c_3}{\epsilon^4}+ \frac{8c_1-512c_2}{\epsilon^6}+\frac{64c_2}{\epsilon^8},\ a_4=c_4.\] If we apply the same contraction to the $[2,2]$ system, the system contracts to one with potential \begin{equation}\label{E3'} V=\frac{c_1}{(x'_1+ix'_3)^2}+\frac{c_2x'_2+c_3x'_4}{(x'_1+ix'_3)^3}+c_4\frac{{x'}_2^2+{x'}_4^2}{(x'_1+ix'_3)^4},\end{equation} where \[ b_1=-\frac{\sqrt{2}\ c_2}{4\epsilon^4}+\frac{c_4}{4\epsilon^6},\ b_2=-\frac{c_4}{4\epsilon^4},\ b_3=\sqrt{2}\ \frac{(-2c_2+ic_3)}{8\epsilon^6}+\frac{c_4}{8\epsilon^8},\] \[ b_4=\frac{c_1}{2\epsilon^4}+\sqrt{2}\ \frac{(-ic_3+c_2)}{8\epsilon^6}-\frac{c_4}{16\epsilon^8}.\] This is conformally equivalent to (\ref{V[0]norm}). If we apply this same contraction to the system with $V[3,1]$ potential, the system contracts to one with $V[3,1]$ potential again, but with different parameters, or to $[0]$. 
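As a plausibility check on the $[2,1,1]\to[3,1]$ coordinate implementation above, the quadratic form $x_1^2+\cdots+x_4^2$ should reduce to ${x'_1}^2+\cdots+{x'_4}^2$ up to terms of order $\epsilon$. A SymPy sketch (illustrative only; `X1`, ..., `X4` stand for $x'_1,\dots,x'_4$):

```python
import sympy as sp

# [2,1,1] -> [3,1] coordinates, entered via x_1 +- i x_2 as in the text.
ep = sp.symbols('epsilon', positive=True)
X1, X2, X3, X4 = sp.symbols('X1 X2 X3 X4')

xp = -sp.I*sp.sqrt(2)*ep*X2/2 + (sp.I*X1 - X3)/ep                  # x_1 + i x_2
xm = (-ep*(X3 + sp.I*X1) + 3*sp.I*sp.sqrt(2)*X2/(4*ep)
      + (sp.I*X1 - X3)/(2*ep**3))                                  # x_1 - i x_2
x3 = -X2/2 - sp.sqrt(2)*(X1 + sp.I*X3)/(2*ep**2)
x4 = X4

# x_1^2 + x_2^2 = (x_1 + i x_2)(x_1 - i x_2); the divergent 1/ep^2 and
# 1/ep^4 pieces cancel against x_3^2.
Q = sp.expand(xp*xm + x3**2 + x4**2)
Q0 = sp.series(sp.cancel(Q), ep, 0, 1).removeO()
ok = sp.simplify(Q0 - (X1**2 + X2**2 + X3**2 + X4**2)) == 0
print(ok)
```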
If we apply this same contraction to the system with $V[4]$ potential, the system contracts to one with potential (\ref{E3'}) again, but with different parameters. If we apply this same contraction to the $[0]$ system, (\ref{E3'}) it contracts to itself with altered parameters. If we apply this same contraction to the $(1)$ system, (\ref{V(1)norm}) it becomes a potential conformally equivalent to (\ref{V(2)norm}). If we apply this same contraction to the $(2)$ system, (\ref{V(2)norm}) it contracts to itself with altered parameters. \subsubsection{[2,1,1] to [3,1] contraction and St\"ackel transforms} For fixed $A_j$, $B_j$ we have the expansions \[ V^A_{[1,1,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3}{x_3^2}+\frac{A_4}{x_4^2}=\] \[ \frac{A_4}{{x'_4}^2}+\frac{2A_3}{(x'_1+ix'_3)^2}\epsilon^4+\left(\frac{16(A_2-A_1)} {(x'_1+ix'_3)^2}-\frac{2\sqrt{2}A_3x'_2}{(x'_1+ix'_3)^3}\right)\epsilon^6+O(\epsilon^8).\] \[ V^A_{[2,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3(x_3-ix_4)}{(x_3+ix_4)^3}+\frac{A_4}{(x_3+ix_4)^2} =\frac{2(A_3+A_4)}{(x'_1+ix'_3)^2}\epsilon^4\] \[+\left(\frac{16(A_2-A_1)}{(x'_1+ix'_3)^2}+\frac{(3A_3+2A_4)\sqrt{2}(-x'_2+2ix'_4)}{(x'_1+ix'_3)^3} +\frac{A_3\sqrt{2}(x'_2+2ix'_4)}{(x'_1+ix'_3)^3}\right)\epsilon^6+O(\epsilon^8).\] \[ V^A_{[2,2]}=\frac{A_1}{(x_1+ix_2)^2}+\frac{A_2(x_1-ix_2)}{(x_1+ix_2)^3}+\frac{A_3}{(x_3+ix_4)^2}+\frac{A_4(x_3-ix_4)}{(x_3+ix_4)^3}\] \[ =-\frac{A_2}{2(x'_1+ix'_3)^2}+\left(-\frac{A_1}{(x'_1+ix'_3)^2}-\frac{3\sqrt{2}A_2x'_2}{(x'_1+ix'_3)^3}\right)\epsilon^2\] \[ +\left(-\frac{\sqrt{2}A_1x'_2}{(x'_1+ix'_3)^3}-\frac{(4{x'_4}^2+19{x'_2}^2)A_2}{(x'_1+ix'_3)^4}\right)\epsilon^4+O(\epsilon^6),\] \[V^B_{[3,1]}=\frac{B_1}{(x_1+ix_3)^2}+\frac{B_2x_2}{(x_1+ix_3)^3}+\frac{B_3(4x_2^2+x_4^2)}{(x_1+ix_3)^4}+\frac{B_4}{x_4^2}\] \[=\frac{B_4}{{x'_4}^2}-16\frac{(B_1+iB_2-4B_3)}{(x'_1+ix'_3)^2}\epsilon^6+O(\epsilon^7),\] \subsection{[1,1,1,1] to [4]:} In this case there is a 2-parameter family of contractions, but all lead to the same
result. Let $A,B$ be constants such that $AB(1-A)(1-B)(A-B)\ne 0$. \noindent Coordinate implementation \[ x_1=\frac{i}{\sqrt{2AB}\ \epsilon^3}(x'_1+ix'_2),\] \[x_2=\frac{(x'_1+ix'_2)+\epsilon^2(x'_3+ix'_4)+\epsilon^4(x'_3-ix'_4)+\epsilon^6(x'_1-ix'_2)}{\sqrt{2(A-1)(B-1)}\ \epsilon^3},\] \[ x_3=\frac{(x'_1+ix'_2)+A\epsilon^2(x'_3+ix'_4)+A^2\epsilon^4(x'_3-ix'_4)+A^3\epsilon^6(x'_1-ix'_2)}{\sqrt{2A(A-1)(A-B)}\ \epsilon^3},\] \[ x_4=\frac{(x'_1+ix'_2)+B\epsilon^2(x'_3+ix'_4)+B^2\epsilon^4(x'_3-ix'_4)+B^3\epsilon^6(x'_1-ix'_2)}{\sqrt{2B(B-1)(B-A)}\ \epsilon^3},\] In this case:{\small \begin{eqnarray} iL'_{14}+iL'_{23}+L'_{13}-L'_{24}&=&-2i\epsilon^4\sqrt{AB(A-1)(B-1)}\ L_{12},\\ iL'_{14}-iL'_{23}-L'_{13}-L'_{24}&=&2i\ \epsilon^2\left(\sqrt{B(A-1)(A-B)}\ L_{13}-\sqrt{AB(A-1)(B-1)}\ L_{12}\right),\nonumber\\ L'_{12}&=& \frac{\sqrt{AB}}{\sqrt{(A-1)(B-1)}}L_{12}+\frac{\sqrt{B}}{\sqrt{(A-1)(A-B)}}L_{13}\nonumber\\ &-&\frac{i\sqrt{A}}{\sqrt{(B-1)(A-B)}}L_{14},\nonumber \\ L'_{34}&=& \frac{\sqrt{B(B-1)}}{\sqrt{A(A-1)}}L_{12}-\frac{\sqrt{B(A-B)}}{\sqrt{(A-1)}}L_{13}+i\frac{\sqrt{(B-1)(A-B)}}{\sqrt{A}}L_{23},\nonumber\\ -iL'_{14}+iL'_{23}-L'_{13}-L'_{24}&=&\frac{2}{\epsilon^2}\left( \frac{i(A+B-1)}{\sqrt{AB(A-1)(B-1)}}L_{12}+\frac{i\sqrt{B}}{\sqrt{(A-1)(A-B)}}L_{13}\right.\nonumber\\ &-&\frac{\sqrt{A}}{\sqrt{B(B-1)(A-B)}}L_{14}+\frac{\sqrt{(B-1)}}{\sqrt{A(A-B)}}L_{23}\nonumber\\ &-&\left.\frac{i\sqrt{(A-1)}}{\sqrt{B(A-B)}}L_{24}\right),\nonumber\\ iL'_{14}+iL'_{23}-L'_{13}+L'_{24}&=&\frac{2i}{\epsilon^4} \left(-\frac{1}{\sqrt{AB(A-1)(B-1)}}(L_{12}+L_{34})\right.\nonumber\\ &+&\frac{i}{\sqrt{A(B-1)(A-B)}}(L_{14}+L_{23})\nonumber\\ &-&\left.\frac{1}{\sqrt{B(A-1)(A-B)}}(L_{13}-L_{24}) \right). 
\nonumber \end{eqnarray}} \noindent {\bf Limit of 2D potential}: \[ V_{[1,1,1,1]} \stackrel{\epsilon\ \to\ 0}{\Longrightarrow}\ V_{[4]},\] where \begin{equation}\label{V[4]} V_{[4]}=\frac{d_1}{(x'_1+ix'_2)^2}+\frac{d_2(x'_3+ix'_4)}{(x'_1+ix'_2)^3}\end{equation} \[+d_3\left(\frac{3({x'_3}+ix'_4)^2}{(x'_1+ix'_2)^4}-2\frac{(x'_1+ix'_2)(x'_3-ix'_4)}{(x'_1+ix'_2)^4}\right)+\] \[d_4\frac{4(x'_1+ix'_2)({x'_1}^2+{x'_2}^2)+2(x'_3+ix'_4)^3 }{(x'_1+ix'_2)^5}.\] and \[ a_1=-\frac{d_4}{4A^2B^2\epsilon^{12}}-\frac{d_3}{2AB^2\epsilon^{10}}-\frac{ d_2}{4AB\epsilon^8}-\frac{d_1}{2AB\epsilon^6},\] \[ a_2=- \frac{d_4}{4(1-A)^2(1-B)^2\epsilon^{12}}+\frac{d_3}{2(1-A)(1-B)^2\epsilon^{10}}-\frac{d_2}{4(1-A)(1-B)\epsilon^8},\] \[ a_3=-\frac{d_4}{4A^2(1-A)^2(A-B)^2\epsilon^{12}},\] \[ a_4=-\frac{d_4}{4B^2(1-B)^2(A-B)^2\epsilon^{12}}-\frac{d_3}{2B^2(1-A)^2(A-B)\epsilon^{10}}.\] \subsubsection{Conformal St\"ackel transforms of the [4] system} We write potential $V_{[4]}$ in the normalized form \begin{equation}\label{V[4]norm} V'_{[4]}=\frac{a_1}{(x_3+ix_4)^2}+a_2\frac{x_1+ix_2}{(x_3+ix_4)^3} +a_3\frac{3(x_1+ix_2)^2-2(x_3+ix_4)(x_1-ix_2)}{(x_3+ix_4)^4}\end{equation} \[+a_4\ \frac{4(x_3+ix_4)(x_3^2+x_4^2)+2(x_1+ix_2)^3}{(x_3+ix_4)^5},\] and designate it $(a_1,a_2,a_3,a_4)$. \begin{enumerate} \item The potentials $(1,a,0,0)$ generate conformal St\"ackel transforms to $E10$. \item The potential $(0,1,0,0)$ generates a conformal St\"ackel transform to $E9$. \item The potential $(0,0,0,1)$ generates a conformal St\"ackel transform to $D1A$. \item Each potential not proportional to one of these must generate a conformal St\"ackel transform to a superintegrable system on a Koenigs space in the family $K[4]$.
\end{enumerate} In these coordinates a basis for the conformal symmetry algebra is $H,Q_1,Q_2$ where \[ Q_1=\frac14(L_{14}+L_{23}-iL_{13}+iL_{24})^2+4a_3(\frac{x_1+ix_2}{x_3+ix_4})+4a_4(\frac{x_1+ix_2}{x_3+ix_4})^2,\] \[ Q_2=\frac12\{L_{23}+L_{14}-iL_{13}+iL_{24},L_{12}+L_{34}\}+\frac14(L_{14}-L_{23}+iL_{13}+iL_{24})^2\] \[+2a_1(\frac{x_1+ix_2}{x_3+ix_4}) +a_2\left(2\frac{x_1-ix_2}{x_3+ix_4}-(\frac{x_1+ix_2}{x_3+ix_4})^2\right)\] \[+2a_3\left(6(\frac{x_1^2+x_2^2}{(x_3+ix_4)^2})-(\frac{x_1+ix_2}{x_3+ix_4})^3\right)\] \[ -4a_4\left((\frac{x_1-ix_2}{x_3+ix_4})^2-3(\frac{(x_1+ix_2)^2(x_1-ix_2)}{(x_3+ix_4)^3})+\frac14(\frac{x_1+ix_2}{x_3+ix_4})^4\right).\] \noindent {\bf Basis of conformal symmetries for original system}: \[ H_0+V_{[1,1,1,1]},\ Q_{12},\ Q_{13},\] where \[Q_{jk}=(x_j\partial_{x_k}-x_k\partial_{x_j})^2+a_j\frac{x_k^2}{x_j^2}+a_k\frac{x_j^2}{x_k^2},\ 1\le j<k\le 4.\] \noindent {\bf Contraction of basis}: \[H_0+V_{[1,1,1,1]}\to H_0'+V_{[4]},\] \[\epsilon^8 Q_{12}\sim \frac{-1}{4(A-1)(B-1)AB}(L'_{13}-L'_{24}+iL'_{23}+iL'_{14})^2\] \[+\frac{4d_3(x'_3+ix'_4)}{AB(A-1)(B-1)(x'_1+ix'_2)}\] \[+\frac{d_4}{4AB(A-1)(B-1)}\left[\frac{(x'_3+ix'_4)^2}{(x'_1+ix'_2)^2}+2\frac{x_3'-ix_4'}{x_1'+ix_2'}\right],\] In this case we do not obtain a basis of symmetries for the $[4]$ system. The basis can be computed from the contracted potential. If we apply the same $[1,1,1,1]\to [4]$ contraction to the $[2,1,1]$ system, the system contracts to a modified $[4]$ potential of the form \[{\tilde V}_{[4]}=\frac{d'_1}{(x'_1+ix'_2)^2}+\frac{d'_2(x'_3+ix'_4)}{(x'_1+ix'_2)^3}\] \[+d'_3\left(\frac{3 ({x'_3}+ix'_4)^2}{(x'_1+ix'_2)^4}-2\lambda \frac{(x'_1+ix'_2)(x'_3-ix'_4)}{(x'_1+ix'_2)^4}\right)+\] \[d'_4\ \left(\frac{4\lambda (x'_1+ix'_2)({x'_1}^2+{x'_2}^2)}{(x'_1+ix'_2)^5}+\frac{2(x'_3+ix'_4)^3 }{(x'_1+ix'_2)^5}\right) ,\] where $\lambda$ is a nonzero function of $A$ and $B$.
However, under an appropriate conformal transformation \[ x_1'+ix_2'\to \mu(x_1'+ix_2'), \ x'_1-ix'_2 \to \mu^{-1}(x_1'-ix_2'),\] we obtain the potential $V_{[4]}$ exactly. If we apply the same contraction to the $[2,2]$ system, the system contracts to \begin{equation}\label{E3p} V=\frac{e_1}{(x'_1+ix'_2)^2}+e_2\frac{(x'_3+ix'_4)}{(x'_1+ix'_2)^3}+e_3\frac{ (x'_3-ix'_4)}{(x'_1+ix'_2)^3} +e_4\frac{({x'}_3^2+{x'}_4^2)}{(x'_1+ix'_2)^4},\end{equation} conformally equivalent to (\ref{V[0]norm}). If we apply the same contraction to the $[3,1]$ system, the system contracts to \[ V=\frac{f_1}{(x'_1+ix'_2)^2}+f_2\frac{(x'_3+ix'_4)}{(x'_1+ix'_2)^3}\] \[+\frac{f_3}{(x'_1+ix'_2)^4}\left[3 \lambda(x'_3+ix'_4)^2+(x'_1+ix'_2)(x'_3-ix'_4)\right] \] \[+\frac{f_4(x'_3+ix'_4)}{(x'_1+ix'_2)^5}\left[\lambda(x'_3+ix'_4)^2+(x'_1+ix'_2)(x'_3-ix'_4)\right],\] where the nonzero scalar $\lambda$ depends on the choice of $A$ and $B$. It can be rescaled to any desired nonzero value by a conformal transform \[ x_1'+ix_2'\to \mu(x_1'+ix_2'), \ x'_1-ix'_2 \to \mu^{-1}(x_1'-ix_2').\] This system is conformally equivalent to (\ref{V[4]norm}) again. If we apply the same contraction to the $[4]$ system, the system contracts to one with potential (\ref{E3p}) again, but with different parameters. If we apply the same contraction to the $[0]$ system (\ref{E3p}) the system contracts to one with potential (\ref{E3p}) again, but with different parameters. If we apply this same contraction to the $(1)$ system, (\ref{V(1)norm}) it becomes a potential conformally equivalent to (\ref{V(2)norm}). If we apply this same contraction to the $(2)$ system, (\ref{V(2)norm}) it contracts to itself with altered parameters. 
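As with the earlier contractions, the $[1,1,1,1]\to[4]$ coordinate implementation can be sanity-checked symbolically: the quadratic form $x_1^2+\cdots+x_4^2$ should reduce to ${x'_1}^2+\cdots+{x'_4}^2$ up to terms of order $\epsilon$, for generic $A,B$. A SymPy sketch (illustrative only; `X1`, ..., `X4` stand for $x'_1,\dots,x'_4$):

```python
import sympy as sp

# [1,1,1,1] -> [4] coordinates with free parameters A, B,
# AB(1-A)(1-B)(A-B) != 0.
ep = sp.symbols('epsilon', positive=True)
A, B = sp.symbols('A B')
X1, X2, X3, X4 = sp.symbols('X1 X2 X3 X4')
z, zb = X1 + sp.I*X2, X1 - sp.I*X2
u, ub = X3 + sp.I*X4, X3 - sp.I*X4

def row(lam, denom):
    """Common pattern of x_2, x_3, x_4 with lambda = 1, A, B respectively."""
    return (z + lam*ep**2*u + lam**2*ep**4*ub + lam**3*ep**6*zb) \
        / (sp.sqrt(2*denom)*ep**3)

x1 = sp.I*z/(sp.sqrt(2*A*B)*ep**3)
x2 = row(1, (A - 1)*(B - 1))
x3 = row(A, A*(A - 1)*(A - B))
x4 = row(B, B*(B - 1)*(B - A))

# All divergent terms (1/ep^6, 1/ep^4, 1/ep^2) cancel by partial-fraction
# identities in A, B; the constant term is the primed quadratic form.
Q = sp.cancel(sp.expand(x1**2 + x2**2 + x3**2 + x4**2))
Q0 = sp.series(Q, ep, 0, 1).removeO()
ok = sp.simplify(Q0 - (X1**2 + X2**2 + X3**2 + X4**2)) == 0
print(ok)
```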
\subsubsection{[1,1,1,1] to [4] contraction and St\"ackel transforms} For fixed $A_j$ we have (in the special case $A=10,B=5$) the expansions \[ V^A_{[1,1,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3}{x_3^2}+\frac{A_4}{x_4^2}\] \[=\frac{4(-5A_1+2A_2+30A_4+3A_3)}{(x'_1+ix'_2)^2}\epsilon^6+\frac{16(-A_2+3A_3-75A_4)(x'_3+ix'_4)}{(x'_1+ix'_2)^3}\epsilon^8+O(\epsilon^{10}).\] \[ V^A_{[2,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3(x_3-ix_4)}{(x_3+ix_4)^3}+\frac{A_4}{(x_3+ix_4)^2}\] \[=\frac{-\frac{4}{127}(135A_1-54A_2+[110+2\sqrt{10}]A_4)-\frac{40}{243}(161+44\sqrt{10})A_3}{(x'_1+ix'_2)^2}\epsilon^6+O(\epsilon^8),\] \subsection{[2,2] to [4]:} \begin{eqnarray} L'_{12}&=&i(1+\frac{2}{\epsilon}-\frac{1}{2\epsilon^2})L_{12}+\frac{1}{\epsilon}(1-\frac{3}{4\epsilon}+\frac{1}{4\epsilon^2})L_{13} +\frac{i}{4\epsilon^2}(3-\frac{1}{\epsilon})L_{14}\nonumber\\ &+&\frac{i}{4\epsilon^2}(3-\frac{1}{\epsilon})L_{23}+(3-\epsilon+\frac{3}{4\epsilon^2}- \frac{1}{4\epsilon^3})L_{24}+i(\frac{3\epsilon}{2}-2+\frac{1}{\epsilon}-\frac{1}{2\epsilon^2})L_{34},\nonumber\\ L'_{12}+iL'_{24}&=&\epsilon(L_{13}-iL_{14}),\\ L'_{13}+iL'_{34}&=&\epsilon(L_{23}-iL_{24}),\nonumber\\ L'_{14}&=&(-1+\epsilon)L_{12}+i(1-\epsilon)L_{13}+(1+\epsilon)L_{14},\nonumber\\ L'_{23}-L'_{14}&=&-L_{14}+L_{23},\nonumber\\ L'_{13}+L'_{24}&=& (\frac12-\frac{1}{\epsilon})L_{12}+\frac{i}{\epsilon}L_{13}+\frac12 L_{14}+\frac12L_{23}+(2+\frac{i}{\epsilon})L_{24}+(\epsilon-\frac12+\frac{1}{\epsilon})L_{34}.\nonumber \end{eqnarray} \noindent {\bf Coordinate implementation}: \[ x_1=\frac{1}{2}(\frac{1}{\epsilon}+\frac{1}{\epsilon^2})(x'_1-ix'_4)+\frac{\epsilon}{2}(x'_1+ix'_4) -(1+\frac{1}{2\epsilon})(x'_2-ix'_3)+\frac{1}{2}(\epsilon-1)(x'_2+ix'_3),\] \[ x_2=\frac{i}{2}(\frac{1}{\epsilon}-\frac{1}{\epsilon^2})(x'_1-ix'_4)-\frac{i\epsilon}{2}(x'_1+ix'_4) -i(1-\frac{1}{2\epsilon})(x'_2-ix'_3)+\frac{i}{2}(\epsilon+1)(x'_2+ix'_3),\] \[ 
x_3=\frac{1}{2}(\frac{1}{\epsilon}-\frac{1}{\epsilon^2})(x'_1-ix'_4)+(-\frac12+\frac{1}{\epsilon})(x'_2-ix'_3),\] \[x_4=\frac{i}{2}(\frac{1}{\epsilon}+\frac{1}{\epsilon^2})(x'_1-ix'_4)-i(\frac12+\frac{1}{\epsilon})(x'_2-ix'_3).\] \noindent {\bf Limit of 2D potential}: \[ V_{[2,2]} \stackrel{\epsilon\ \to\ 0}{\Longrightarrow}\ V'_{[4]},\] \begin{equation}\label{V[4]'} V'_{[4]}=\frac{e_1}{(x'_1-ix'_4)^2}+\frac{e_2(x'_2-ix'_3)}{(x'_1-ix'_4)^3}\end{equation} \[+e_3\left(\frac{3({x'_2}-ix'_3)^2}{(x'_1-ix'_4)^4}+2\frac{(x'_1-ix'_4)(x'_2+ix'_3)}{(x'_1-ix'_4)^4}\right)+\] \[e_4\left(\frac{4(x'_1-ix'_4)({x'_2}^2+{x'_3}^2)+2(x'_2-ix'_3)^3 }{(x'_1-ix'_4)^5} \right) ,\] where \[ b_1=\frac{e_1}{\epsilon^4}+2\frac{e_4}{\epsilon^7},\ b_2=-\frac{e_2}{4\epsilon^6}-\frac{e_3}{2\epsilon^7}-\frac{e_4}{\epsilon^8}, \ b_3=2\frac{e_3}{\epsilon^6}-2\frac{e_4}{\epsilon^7},\ b_4=-\frac{e_2}{4\epsilon^6}+\frac{3e_3}{2\epsilon^7}-\frac{e_4}{\epsilon^8}.\] This is conformally equivalent to $V[4]$. \medskip\noindent {\bf Basis of conformal symmetries for original system}: \[ H_0+V_{[2,2]},\ Q_1,\ Q_3\] \medskip \noindent{\bf Contraction of basis}: \[ H_0+V_{[2,2]}\to H'_0+V'_{[4]},\] \[-4\epsilon^4( Q_1+\frac{k_4}{\epsilon^6}-\frac{k_3}{2\epsilon^5})\to (iL'_{13}-L'_{12}-iL'_{24}-L'_{34})^2\] \[+k_2+4k_3\frac{x'_2-ix'_3}{x'_1-ix'_4}-4k_4\frac{(x'_2-ix'_3)^2}{(x'_1-ix'_4)^2},\] \[ \epsilon^3 (Q_3-\frac{2k_4}{\epsilon^7}+\frac{k_3}{\epsilon^6}+ \frac{k_1}{2\epsilon^4})\to \] \[\frac{i}{2} \{L'_{23}-L'_{14},L'_{12}-iL'_{13}+L'_{24}+L'_{34}\}+k_1\frac{(x'_2-ix'_3)}{(x'_1-ix'_4)}+ k_2\frac{(x'_2-ix'_3)^2}{(x'_1-ix'_4)^2}\] \[ +k_3\frac{3(x'_2-ix'_3)^3+2({x'_2}^2+{x'_3}^2)(x'_1-ix'_4)}{(x'_1-ix'_4)^3}\] \[-2k_4(x'_2-ix'_3)\frac{(x'_2-ix'_3)^3+2({x'_2}^2+{x'_3}^2)(x'_1-ix'_4)}{(x'_1-ix'_4)^4}.\] However, the second limit here is equivalent to the contracted Hamiltonian, not an independent basis element.
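The $[2,2]\to[4]$ coordinate implementation admits the same quadratic-form sanity check: $x_1^2+\cdots+x_4^2$ should reduce to ${x'_1}^2+\cdots+{x'_4}^2$ up to terms of order $\epsilon$. A SymPy sketch (illustrative only; `X1`, ..., `X4` stand for $x'_1,\dots,x'_4$):

```python
import sympy as sp

# [2,2] -> [4] coordinates, written via v = x'_1 - i x'_4, V = x'_1 + i x'_4,
# s = x'_2 - i x'_3, S = x'_2 + i x'_3.
ep = sp.symbols('epsilon', positive=True)
X1, X2, X3, X4 = sp.symbols('X1 X2 X3 X4')
I = sp.I
v, V = X1 - I*X4, X1 + I*X4
s, S = X2 - I*X3, X2 + I*X3

x1 = (1/ep + 1/ep**2)*v/2 + ep*V/2 - (1 + 1/(2*ep))*s + (ep - 1)*S/2
x2 = I*(1/ep - 1/ep**2)*v/2 - I*ep*V/2 - I*(1 - 1/(2*ep))*s + I*(ep + 1)*S/2
x3 = (1/ep - 1/ep**2)*v/2 + (-sp.Rational(1, 2) + 1/ep)*s
x4 = I*(1/ep + 1/ep**2)*v/2 - I*(sp.Rational(1, 2) + 1/ep)*s

# The poles (v^2/ep^3, v*s/ep^2, s^2/ep) of the first pair cancel against
# those of the second pair; the constant term is the primed quadratic form.
Q = sp.expand(x1**2 + x2**2 + x3**2 + x4**2)
Q0 = sp.series(sp.cancel(Q), ep, 0, 1).removeO()
ok = sp.simplify(Q0 - (X1**2 + X2**2 + X3**2 + X4**2)) == 0
print(ok)
```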
\medskip If we apply the $[2,2]\to [4]$ contraction to the $[1,1,1,1]$ system, the system contracts to \[ {V[4]}''=\frac{f_1}{(x'_1-ix'_4)^2}+f_2\frac{(x'_2-ix'_3)}{(x'_1-ix'_4)^3}\] \[+\frac{f_3}{(x'_1-ix'_4)^4} \left[3(x'_2-ix'_3)^2+2(x'_1-ix'_4)(x'_2+ix'_3)\right] \] \begin{equation}\label{V4pp}+\frac{f_4(x'_2-ix'_3)}{(x'_1-ix'_4)^5}\left[(x'_2-ix'_3)^2+2(x'_1-ix'_4)(x'_2+ix'_3)\right],\end{equation} where \begin{eqnarray} b_1&=&\frac{f_1+2f_3}{4\epsilon^4}+\frac{f_2+10f_4}{64\epsilon^6}-\frac{f_3-4f_4}{32\epsilon^7}+\frac{f_4}{32\epsilon^8},\nonumber\\ b_2&=&\frac{f_2+10f_4}{64\epsilon^6}-\frac{f_3+4f_4}{32\epsilon^7}+\frac{f_4}{32\epsilon^8},\nonumber\\ b_3&=& \frac{f_2-16f_3+10f_4}{64\epsilon^6}+\frac{f_3-4f_4}{32\epsilon^7}+\frac{f_4}{32\epsilon^8},\nonumber\\ b_4&=&\frac{f_2+16f_3+10f_4}{64\epsilon^6}+\frac{3f_3+4f_4}{32\epsilon^7}+\frac{f_4}{32\epsilon^8},\nonumber\end{eqnarray} also conformally equivalent to $V_{[4]}$. If we apply the same contraction to the $[2,1,1]$ system, the system contracts to potential $(2)$, or to (\ref{V4pp}) again, except that now \begin{eqnarray} b_1&=&\frac{f_1-f_3}{\epsilon^4}-\frac{2f_2+5f_4}{2\epsilon^5}-\frac{2f_3}{\epsilon^6}+\frac{f_4}{\epsilon^7},\nonumber\\ b_2&=&\frac{3f_3}{2\epsilon^7}-\frac{f_4}{2\epsilon^8},\nonumber\\ b_3&=& \frac{f_2+7f_4}{4\epsilon^5}-\frac{f_2+7f_4}{16\epsilon^6}+\frac{f_3-4f_4}{32\epsilon^7}+\frac{f_4}{32\epsilon^8},\nonumber\\ b_4&=&-\frac{f_2+7f_4}{4\epsilon^5}-\frac{f_2+7f_4}{16\epsilon^6}+\frac{f_3+4f_4}{32\epsilon^7}+\frac{f_4}{32\epsilon^8}.\nonumber\end{eqnarray} If we apply the same contraction to the $[3,1]$ system, the system contracts to a system with potential \begin{equation}\label{V2'} V(2)=\frac{c_1}{(x'_1-ix'_4)^2}+\frac{c_2(x'_2-ix'_3)}{(x'_1-ix'_4)^3}+\frac{c_3(x'_2-ix'_3)^2}{(x'_1-ix'_4)^4} +\frac{c_4(x'_2-ix'_3)^3}{(x'_1-ix'_4)^5}.\end{equation} This system admits a first-order symmetry.
It corresponds to a special case of the flat space superintegrable system $E15$ via the transform $(x'_1-ix'_4)^2$. If we apply the same contraction to the $[4]$ system, the system contracts to a system with potential (\ref{V[4]norm}) again, but with different parameters. If we apply the same contraction to the $[0]$ system (\ref{E3p}) the system contracts to one with potential (\ref{E3p}) again, but with different parameters, or to $(2)$. If we apply this same contraction to the $(1)$ system, (\ref{V(1)norm}) it becomes a potential conformally equivalent to (\ref{V(2)norm}). If we apply this same contraction to the $(2)$ system, (\ref{V(2)norm}) it contracts to itself with altered parameters. \subsubsection{[2,2] to [4] contraction and St\"ackel transforms} For fixed $A_j$ we have the expansions \[ V^A_{[1,1,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3}{x_3^2}+\frac{A_4}{x_4^2} =4\frac{A_1-A_2+A_3-A_4}{(x'_1-ix'_4)^2}\epsilon^4\] \[-8\frac{(A_1+A_2-A_3-A_4)(x'_1-ix'_4)-(A_1-A_2+2A_3-2A_4) (x'_2+ix'_3)}{(x'_1-ix'_4)^3}\epsilon^5\] \[+O(\epsilon^6),\] \[ V^A_{[2,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3(x_3-ix_4)}{(x_3+ix_4)^3}+\frac{A_4}{(x_3+ix_4)^2}= \frac{(4A_1-4A_2+A_4)}{(x'_1-ix'_4)^2}\epsilon^4\] \[-\frac{(8A_1+8A_2+A_3)(x'_1-ix'_4)+4(-2A_1+2A_2-A_4)(x'_2-ix'_3)}{(x'_1-ix'_4)^3}\epsilon^5+O(\epsilon^6),\] \[ V^A_{[2,2]}=\frac{A_1}{(x_1+ix_2)^2}+\frac{A_2(x_1-ix_2)}{(x_1+ix_2)^3} +\frac{A_3}{(x_3+ix_4)^2}+\frac{A_4(x_3-ix_4)}{(x_3+ix_4)^3}\] \[ = \frac{A_1+A_3}{(x'_1-ix'_4)^2}\epsilon^4+\left(\frac{(2A_1+4A_3)(x'_2-ix'_3)}{(x'_1-ix'_4)^3} +\frac{(A_2-A_4)}{(x'_1-ix'_4)^2}\right)\epsilon^5+O(\epsilon^6). \] \subsection{[3,1] to [4]} This specific contraction is not needed, because the $[1,1,1,1]\to[4]$ contraction already takes the system $V[3,1]$ to $V[4]$. \subsection{[2,1,1] to [4]} This specific contraction is not needed, because the $[1,1,1,1]\to[4]$ contraction already takes the system $V[2,1,1]$ to $V[4]$.
\subsection{[1,1,1,1] to [3,1]} \begin{eqnarray} -L'_{12}+iL'_{24}&=& -a\sqrt{2a^2-2}\ \epsilon L_{12}\\ L'_{13}&=& -\frac{i}{\sqrt{a^2-1}}(L_{13}+aL_{12}),\nonumber\\ L'_{14}+iL'_{34}&=&\sqrt{2}\ a\epsilon L_{14},\nonumber\\ -L'_{12}+iL'_{23}&=&i\sqrt{2}a\epsilon L_{23},\nonumber\\ L'_{24}&=&i(\sqrt{a^2-1}\ L_{24}-iaL_{14}),\nonumber\\ -L'_{14}+iL'_{34}&=& \frac{\sqrt{2}}{\epsilon\ a\sqrt{a^2-1}}\left( L_{34}-\sqrt{a^2-1}L_{14}-iaL_{24}\right).\nonumber\end{eqnarray} \noindent {\bf Coordinate implementation}: \[ x_1=\frac{1}{\sqrt{2}\ a\epsilon}(x'_1+ix'_3)+\frac{x'_2}{a}+\frac{a\epsilon}{\sqrt{2}}(x'_1-ix'_3),\] \[x_2=\frac{i(x'_1+ix'_3)}{\sqrt{2a^2-2}\ \epsilon},\] \[ x_3=-\frac{(x'_1+ix'_3)}{\sqrt{2a^2-2}\ a\epsilon}+\frac{\sqrt{a^2-1}}{a} x'_2,\ x_4=x'_4,\] where $a$ is a parameter such that $a(a^2-1)\ne 0$. \noindent {\bf Limit of 2D potential}: \[ V_{[1,1,1,1]} \stackrel{\epsilon\ \to\ 0}{\Longrightarrow}\ V_{[31]},\] where $V[31]$ is given by (\ref{V[31]}) and \[ a_1=\frac{c_1}{2\epsilon^2}+\frac{c_3}{4a^4\epsilon^4},\ a_2=\frac{c_2}{4\sqrt{2}(a^2-1)^2\epsilon^3}+\frac{c_3}{4 (a^2-1)^2\epsilon^4},\] \[ a_3=\frac{c_2}{4\sqrt{2}(a^2-1)^2a^2\epsilon^3}+\frac{c_3}{4 (a^2-1)^2a^4\epsilon^4},\ a_4=c_4.\] \noindent {\bf Basis of conformal symmetries for original system}: \[ H_0+V_{[1,1,1,1]},\ Q_{12},\ Q_{13},\] where \[Q_{jk}=(x_j\partial_{x_k}-x_k\partial_{x_j})^2+a_j\frac{x_k^2}{x_j^2}+a_k\frac{x_j^2}{x_k^2},\ 1\le j<k\le 4.\] \noindent {\bf Contracted basis}: \[H_0+V_{[1,1,1,1]}\to H_0'+V_{[3,1]},\] \[\epsilon^2\left( Q_{12}+\frac{c_3}{2a^2(a^2-1)\epsilon^4} +\frac{\sqrt{2}c_2}{a^2(a^2-1)\epsilon^3}\right)\to -\frac{c_1}{2(a^2-1)}\] \[-\frac{2c_3{x'_2}^2}{a^2(a^2-1)(x'_1+ix'_3)^2}-\frac{c_2}{2a^2(a^2-1)(x'_1+ix'_3)}-\frac{1}{2a^2(a^2-1)}(L'_{12}-iL'_{23})^2,\] \[\epsilon\left(Q_{13}+a^2Q_{12}+\frac{(a^2-1)c_3}{2a^4\epsilon^4}+\frac{\sqrt{2}\ c_2}{8a^2\epsilon^3}+\frac{c_1(a^2-1)}{2\epsilon^2}\right) \to \frac{\sqrt{2}\ c_1 x'_2}{x'_1+ix'_3}\] \[ +\frac{\sqrt{2}\ 
c_2(4{x'_2}^2+{x'_4}^2)}{4(x'_1+ix'_3)^2}+\frac{2\sqrt{2}\ c_3 x'_2(2{x'_2}^2+{x'_4}^2)}{(x'_1+ix'_3)^3} +\frac{i\sqrt{2}}{2}\{ L'_{13},L'_{12}-iL'_{23}\}.\] \medskip If we apply the $[1,1,1,1]\to [3,1]$ contraction to the $[2,1,1]$ system, the system contracts to one with potential $V[3,1]$, but with different parameters, or to $[0]$. If we apply the same contraction to the $[2,2]$ system, the system again contracts to one with potential $V[0]$, but with different parameters. If we apply the same contraction to the $[3,1]$ system, the system contracts to itself, but with different parameters. If we apply the same contraction to the $[4]$ system, the system contracts to the system with potential $V[0]$, (\ref{E3'}), but with altered parameters. If we apply the same contraction to the $[0]$ system, the system contracts to the system with potential $V[0]$, (\ref{E3'}), but with altered parameters. If we apply this same contraction to the $(1)$ system, (\ref{V(1)norm}) it becomes a potential conformally equivalent to (\ref{V(2)norm}). If we apply this same contraction to the $(2)$ system, (\ref{V(2)norm}) it contracts to itself with altered parameters. 
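The $[1,1,1,1]\to[3,1]$ coordinate implementation can also be checked via the quadratic form: $x_1^2+\cdots+x_4^2$ should reduce to ${x'_1}^2+\cdots+{x'_4}^2$ up to terms of order $\epsilon$. A SymPy sketch (illustrative only; it assumes $x_2=i(x'_1+ix'_3)/(\sqrt{2a^2-2}\,\epsilon)$, since the cancellation of the divergent terms requires $x'_3$ rather than $x'_2$ there, and uses $\sqrt{2a^2-2}=\sqrt{2}\sqrt{a^2-1}$):

```python
import sympy as sp

# [1,1,1,1] -> [3,1] coordinates with parameter a, a(a^2-1) != 0.
ep, a = sp.symbols('epsilon a', positive=True)
X1, X2, X3, X4 = sp.symbols('X1 X2 X3 X4')
w, wb = X1 + sp.I*X3, X1 - sp.I*X3
r = sp.sqrt(a**2 - 1)          # sqrt(2a^2-2) = sqrt(2)*r

x1 = w/(sp.sqrt(2)*a*ep) + X2/a + a*ep*wb/sp.sqrt(2)
x2 = sp.I*w/(sp.sqrt(2)*r*ep)  # assumes x'_3, not x'_2, in the numerator
x3 = -w/(sp.sqrt(2)*r*a*ep) + r*X2/a
x4 = X4

# The w^2/ep^2 and w*X2/ep poles cancel among x1^2, x2^2, x3^2.
Q = sp.expand(x1**2 + x2**2 + x3**2 + x4**2)
Q0 = sp.series(sp.cancel(Q), ep, 0, 1).removeO()
ok = sp.simplify(Q0 - (X1**2 + X2**2 + X3**2 + X4**2)) == 0
print(ok)
```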
\subsubsection{[1,1,1,1] to [3,1] contraction and St\"ackel transforms} For fixed $A_j$ we have the expansions \[ V^A_{[1,1,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3}{x_3^2}+ \frac{A_4}{x_4^2}= \] \[ \frac{A_4}{x_4^2}+\frac{2\left(A_2+(A_1-A_2-A_3)a^2+A_3a^4\right)\epsilon^2}{(x_1+ix_3)^2}\] \[+\frac{4\sqrt{2}a^2x_2(A_3-A_1-2A_3a^2+A_3a^4)\epsilon^3}{(x_1+ix_3)^3}\] \[-\frac{4a^2\left(A_1a^2x_1^2+(-3A_1 +3A_3(1-a^2))x_2^2 +A_1a^2x_3^2\right)\epsilon^4}{(x_1+ix_3)^4}+O(\epsilon^5),\] \[ V^A_{[2,1,1]}=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3(x_3-ix_4)}{(x_3+ix_4)^3}+\frac{A_4}{(x_3+ix_4)^2}\] \[ =\frac{2\left(A_1a^2+A_2(1-a^2)+(A_3+A_4)a^2(1-a^2)^2\right)}{(x'_1+ix'_3)^2}\epsilon^2+O(\epsilon^3),\] \[ V^A_{[2,2]}=\frac{A_1}{(x_1+ix_2)^2}+\frac{A_2(x_1-ix_2)}{(x_1+ix_2)^3}+\frac{A_3}{(x_3+ix_4)^2}+\frac{A_4(x_3-ix_4)}{(x_3+ix_4)^3}\] \[ =\frac{k_1A_1+k_2A_2+k_3(A_3+A_4)}{(x'_1+ix'_3)^2}\epsilon^2+O(\epsilon^3),\ k_1,k_2,k_3\ {\rm generic},\] \[ V^A_{[3,1]}=\frac{A_1}{(x_3+ix_4)^2}+\frac{A_2x_1}{(x_3+ix_4)^3}+\frac{A_3(4x_1^2+x_2^2)}{(x_3+ix_4)^4}+\frac{A_4}{x_2^2}\] \[=\left[\frac{2A_1a^2-2A_2a^2\sqrt{a^2-1}+4A_3a^2(3a^2-4)-2A_4}{(x'_1+ix'_3)^2}\right](a^2-1)\epsilon^2+O(\epsilon^4),\] \subsubsection{Conformal St\"ackel transforms of the (1) system} We write potential $V(1)$ in the form \begin{equation}\label{V(1)norm} V(1)=a_1\frac{1}{(x_1+ix_2)^2}+a_2\frac{1}{(x_3+ix_4)^2} +a_3\frac{(x_3+ix_4)}{(x_1+ix_2)^3}+a_4\frac{(x_3+ix_4)^2}{(x_1+ix_2)^4}\end{equation} and designate it $(a_1,a_2,a_3,a_4)$, defining the conformally superintegrable system $[1]$. For every choice of $(a_1,a_2,a_3,a_4)$ the potential $V(1)$ generates a conformal St\"ackel transform to a special case of $E15$, always flat. 
\subsubsection{Conformal St\"ackel transforms of the (2) system} We write potential $V(2)'$ in the normalized form \begin{equation}\label{V(2)norm} V(2)'=a_1\frac{1}{(x_3+ix_4)^2}+a_2\frac{(x_1+ix_2)}{(x_3+ix_4)^3} +a_3\frac{(x_1+ix_2)^2}{(x_3+ix_4)^4}+a_4\frac{(x_1+ix_2)^3}{(x_3+ix_4)^5}\end{equation} and designate it $(a_1,a_2,a_3,a_4)$, defining the conformally superintegrable system $[2]$. For every choice of $(a_1,a_2,a_3,a_4)$ the potential $V(2)'$ generates a conformal St\"ackel transform to a special case of $E15$, always flat. \section{Helmholtz contractions from B\^ocher contractions} We describe how B\^ocher contractions of conformal superintegrable systems induce contractions of Helmholtz superintegrable systems. The basic idea here is that the procedure of taking a conformal St\"ackel transform of a conformal superintegrable system, followed by a Helmholtz contraction yields the same result as taking a B\^ocher contraction followed by an ordinary St\"ackel transform: The diagrams commute. We illustrate with an example. We consider the conformal St\"ackel transforms of the conformal system $[1,1,1,1]$ with potential $V_{[1,1,1,1]}$. The various possibilities are listed in subsection \ref{3.1.1}. Let $H$ be the initial Hamiltonian. In terms of tetraspherical coordinates the conformal St\"ackel transformed potential will take the form \[ V=\frac{\frac{a_1}{x_1^2}+\frac{a_2}{x_2^2}+\frac{a_3}{x_3^2}+\frac{a_4}{x_4^2}}{\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3}{x_3^2}+\frac{A_4}{x_4^2}} =\frac{V_{[1,1,1,1]}}{F({\bf x},{\bf A})},\] where \[ F({\bf x},{\bf A})=\frac{A_1}{x_1^2}+\frac{A_2}{x_2^2}+\frac{A_3}{x_3^2}+\frac{A_4}{x_4^2},\] and the transformed Hamiltonian will be \[{\hat H}=\frac{1}{ F({\bf x},{\bf A})}H,\] where the transform is determined by the fixed vector $(A_1,A_2,A_3,A_4)$. Now we apply the B\^ocher contraction $[1,1,1,1]\to [2,1,1]$ to this system. 
In the limit as $\epsilon\to 0$ the potential $V_{[1,1,1,1]}\to V_{[2,1,1]}$, (\ref{V[211]}), and $H\to H'$, the $[2,1,1]$ system. Now consider \[ F({\bf x}(\epsilon),{\bf A})= V'({\bf x}',A)\epsilon^\alpha+O(\epsilon^{\alpha+1}),\] where the integer exponent $\alpha$ depends upon our choice of $\bf A$. We will provide the theory to show that the system defined by Hamiltonian \[ {\hat H}'=\lim_{\epsilon\to 0}\epsilon^\alpha {\hat H}(\epsilon)=\frac{1}{V'({\bf x}',A)}H'\] is a superintegrable system that arises from the system $[2,1,1]$ by a conformal St\"ackel transform induced by the potential $V'({\bf x}',A)$. Thus the Helmholtz superintegrable system with potential $V=V_{[1,1,1,1]}/F$ contracts to the Helmholtz superintegrable system with potential $V_{[2,1,1]}/V'$. The contraction is induced by a generalized In\"on\"u-Wigner Lie algebra contraction of the conformal algebra $so(4,\C)$. In this case the possibilities for $V'$ can be read off from the expression in subsection \ref{3.1.2}. Then the $V'$ can be identified with a $[2,1,1]$ potential from the list in subsection \ref{3.1.3}. The results follow. For each $\bf A$ corresponding to a constant curvature or Darboux superintegrable system $O$ we list the contracted system $O'$ and $\alpha$. For Koenigs spaces we will not go into detail but merely give the contraction for a ``generic'' Koenigs system: one for which there are no rational numbers $r_j$, not all $0$, such that $\sum_{j=1}^4r_jA_j=0$. This ensures that the contraction is also ``generic''. \begin{example} In Section \ref{3.1.2}, first equation, consider the St\"ackel transform $(1,0,0,0)$, i.e., $1/x_1^2$. The transformed system is \[H=\frac{1}{\frac{1}{x_1^2}}(\sum_{i=1}^4 \partial_{x_i}^2)+ \frac{1}{\frac{1}{x_1^2}}(\frac{a_1}{x_1^2}+\frac{a_2}{x_2^2}+\frac{a_3}{x_3^2}+\frac{a_4}{x_4^2})\] which is $S9$. Now take the $[1,1,1,1]\to [2,1,1]$ B\^ocher contraction, equation (\ref{V[211]}).
The sum of the derivatives in $H$ goes to $\sum_{i=1}^4 \partial_{x'_i}^2$ and the numerator of the potential goes to equation (\ref{V[211]}). However, the denominator $1/x_1^2$ goes as \[1/x_1^2=-2\epsilon^2/(x_1'+ix_2')^2 +O(\epsilon^6)\] from the first equation in Section \ref{3.1.2}, case $A_1=1$, $A_2=0$, $A_3=0$, $A_4=0$. Thus, if we set $H'=\epsilon^2 H$ and go to the limit as $\epsilon \to 0$, we get a contracted system with potential $b_1+b_2(x^2+y^2)+b_3/x^2+b_4/y^2$ in Cartesian coordinates, up to a scalar factor $-2$. This is $E1$. \end{example} \subsection{Contraction [1,1,1,1] to [2,1,1] applied to conformal St\"ackel transforms of system V[1,1,1,1].} \begin{enumerate} \item \[ {\bf A}=(1,0,0,0),\ (0,1,0,0),\quad O=S_9\to O'=E_1,\quad \alpha =2,\] \[ {\bf A}=(0,0,1,0),\ (0,0,0,1)\quad O=S_9\to O'=S_2,\quad \alpha =0,\] \item\[ {\bf A}=(1,1,1,1),\quad O=S_8\to O'=S_4,\quad \alpha =0,\] \[ {\bf A}=(0,1,0,1), \ (1,0,1,0)\quad O=S_8\to O'=S_2,\quad \alpha =0,\] \item \[{\bf A}=(0,0,1,1),\quad O=S_7\to O'=S_4,\quad \alpha =0,\] \[ {\bf A}=(1,1,0,0)\quad O=S_7\to O'=E_{16},\quad \alpha =4,\] \item \[ {\bf A}=(A_1,A_2,0,0),\ (A_1A_2\ne 0,A_1\ne A_2),\ O=D4B\to O'=E_1,\quad \alpha =2,\] \[ (0,0,A_1,A_2), \ O=D4B\to O'=D4A,\quad \alpha =0,\] \[{\bf A}= {\rm all\ other\ permutations}, \ O=D4B\to O'=S_2,\quad \alpha =0,\] \item \[{\bf A}=(1,1,A,A),\ (A,A,1,1),\ A\ne 0,\ O=D4C\to O'=S_4,\quad \alpha =0,\] \[{\bf A}= {\rm all\ other\ permutations}, \ O=D4C\to O'=D4A,\quad \alpha =0,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[1,1,1,1]\to O'=D4A,\quad \alpha=0.\] \end{enumerate} \begin{comment} Already in this example we are able to characterize contractions of Darboux systems in a manner completely analogous to those of constant curvature systems. That wasn't possible before we extended our method to conformally superintegrable systems.
\end{comment} \subsection{Contraction [1,1,1,1] to [2,2] applied to conformal St\"ackel transforms of system V[1,1,1,1].} The target systems are conformal St\"ackel transforms of $V_{[2,2]}$. Partial results are: \begin{enumerate} \item \[ {\bf A}=(1,0,0,0)\ {\rm and\ all\ permutations},\quad O=S_9\to O'=E_7,\quad \alpha =2,\] \item\[ {\bf A}=(1,1,1,1),\ (0,1,1,0)\quad O=S_8\to O'=E_{19},\quad \alpha =4,\] \[ {\bf A}=(0,1,0,1), \ (1,0,1,0)\quad O=S_8\to O'=E_7,\quad \alpha =2,\] \[ {\bf A}=(1,0,0,1)\quad O=S_8\to O'=E_{17},\quad \alpha =2,\] \item \[{\bf A}=(0,0,1,1),\quad O=S_7\to O'=E_{17},\quad \alpha =4,\] \[ {\bf A}=(1,1,0,0)\quad O=S_7\to O'=E_{19},\quad \alpha =4,\] \item \[{\bf A}=(0,0,A_3,A_4),\ A_3A_4\ne 0,A_3\ne A_4,\ {\rm and\ all\ permutations},\] \[O=D4B\to O'=E_7,\ \alpha =2,\] \item \[{\bf A}=(1,1,A,A),\ A\ne 0,\ {\rm and\ all\ permutations},\] \[O=D4C\to O'=E_{19},\quad \alpha =1,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[1,1,1,1]\to O'=E_7,\quad \alpha=2.\] \end{enumerate} Additional results can be obtained for this contraction and the following by permuting the coordinate indices of the image potential before applying the contraction. \subsection{Contraction [1,1,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[1,1,1,1].} The target systems are conformal St\"ackel transforms of $V_{[3,1]}$.
Partial results are (assuming generic $a$): \begin{enumerate} \item \[ {\bf A}=(1,0,0,0), (0,1,0,0),(0,0,1,0),\quad O=S_9\to O'=E_2,\quad \alpha =2,\] \[ {\bf A}=(0,0,0,1),\quad O=S_9\to O'=S_1,\quad \alpha =0,\] \item\[ {\bf A}=(1,1,1,1),(0,1,0,1),(1,0,0,1)\quad O=S_8\to O'=S_1,\quad \alpha =0,\] \[ {\bf A}=(1,0,1,0),(0,1,1,0)\quad O=S_8\to O'=E_2,\quad \alpha =2,\] \item \[{\bf A}=(0,0,1,1),\quad O=S_7\to O'=S_1,\quad \alpha =0,\] \[ {\bf A}=(1,1,0,0)\quad O=S_7\to O'=E_{2},\quad \alpha =2,\] \item {\small \[{\bf A}=(0,0,A_3,A_4),(A_3,0,0,A_4),(0,A_3,0,A_4),\ A_3A_4\ne 0,A_3\ne A_4,\]\[ O=D4B\to O'=S_1,\ \alpha =0,\] \[ {\bf A}=(A_1,A_2,0,0),(A_1,0,A_2,0),(0,A_1,A_2,0),\ A_1A_2\ne 0,A_1\ne A_2,\] \[O=D4B\to O'=E_2,\ \alpha =2,\] } \item \[{\bf A}=(1,1,A,A),\ {\rm and\ all\ permutations},\ A\ne 0,1,\] \[O=D4C\to O'=S_1,\quad \alpha =0,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[1,1,1,1]\to O'=S_1,\quad \alpha=0.\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [4] applied to conformal St\"ackel transforms of system V[1,1,1,1].} The target systems are conformal St\"ackel transforms of $V_{[4]}$.
Partial results are (generic in the parameters $a,b$): \begin{enumerate} \item \[ {\bf A}=(1,0,0,0), \ {\rm and\ all\ permutations},\quad O=S_9\to O'=E_{10},\quad \alpha =6,\] \item\[ {\bf A}=(1,1,1,1),(0,1,0,1),(1,0,0,1),\quad O=S_8\to O'=E_{10},\quad \alpha =6,\] \[ {\bf A}=(1,0,1,0),(0,1,1,0)\quad O=S_8\to O'=E_{10},\quad \alpha =6,\] \item \[{\bf A}=(0,0,1,1),(1,1,0,0),\quad O=S_7\to O'=E_{10},\quad \alpha =6,\] \item {\small \[{\bf A}=(0,0,A_3,A_4), \ {\rm and\ all\ permutations},\ A_3A_4\ne 0,A_3\ne A_4,\] \[ O=D4B\to O'=E_{10},\ \alpha =6,\]} \item \[{\bf A}=(1,1,A,A), \ {\rm and\ all\ permutations},\ A\ne 0,1,\] \[O=D4C\to O'=E_{10},\quad \alpha =6,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[1,1,1,1]\to O'=E_{10},\quad \alpha=6.\] \end{enumerate} \subsection{Contraction [2,2] to [4] applied to conformal St\"ackel transforms of system V[1,1,1,1].} The target systems are conformal St\"ackel transforms of $V_{[4]}$. Partial results are: \begin{enumerate} \item \[ {\bf A}=(1,0,0,0),\ {\rm and\ all\ permutations},\quad O=S_9\to O'=E_{10},\quad \alpha =4,\] \item\[ {\bf A}=(1,1,1,1),\quad O=S_8\to O'=E_{9},\quad \alpha =6,\] \[ {\bf A}=(0,1,0,1),(1,0,1,0),\quad O=S_8\to O'=E_{10},\quad \alpha =4,\] \[ {\bf A}=(0,1,1,0),(1,0,0,1),\quad O=S_8\to O'=E_{9},\quad \alpha =5,\] \item \[{\bf A}=(0,0,1,1),(1,1,0,0),\quad O=S_7\to O'=E_{10},\quad \alpha =5,\] \item {\small\[{\bf A}=(0,0,A_3,A_4),\ {\rm and\ all\ permutations},\ A_3A_4\ne 0,A_3\ne A_4,\] \[ O=D4B\to O'=E_{10},\ \alpha =4,\] \item \[{\bf A}=(1,1,A,A),\ {\rm and\ all\ permutations},\ A\ne 0,1,\] \[O=D4C\to O'=E_{10},\quad \alpha =5,\]} \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[1,1,1,1]\to O'=E_{10},\quad \alpha=4.\] \end{enumerate} Note that, although the values of $\alpha$ differ, the target systems agree with those for $[1,1,1,1]\to [4]$ contractions of $V_{[1,1,1,1]}$, except in the single case $S_8\to E_9$.
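Each exponent $\alpha$ in these tables is obtained exactly as in the worked example above: expand the St\"ackel multiplier $F({\bf x}(\epsilon),{\bf A})$ in $\epsilon$, and read off the order and coefficient of the leading term. The following SymPy sketch automates that bookkeeping; the input series is a toy stand-in mimicking the expansion $1/x_1^2=-2\epsilon^2/(x_1'+ix_2')^2+O(\epsilon^6)$ quoted earlier, with the hypothetical symbol `X` playing the role of a contracted coordinate combination such as $x'_1+ix'_2$:

```python
import sympy as sp

eps, X = sp.symbols('epsilon X')

def leading_order(F, order=12):
    """Return (alpha, Vp) with F = Vp*eps**alpha + O(eps**(alpha+1))."""
    s = sp.expand(sp.series(F, eps, 0, order).removeO())
    poly = sp.Poly(s, eps)                    # coefficients may involve X
    alpha = min(m[0] for m in poly.monoms())  # lowest power of eps present
    return alpha, poly.coeff_monomial(eps**alpha)

# Toy multiplier expansion (hypothetical, standing in for an actual F):
F = -2*eps**2/X**2 + 5*eps**6/X**4
alpha, Vp = leading_order(F)
print(alpha, Vp)  # alpha == 2, Vp == -2/X**2
```

Rescaling the Hamiltonian by $\epsilon^\alpha$ then isolates the coefficient $V'=$ `Vp`, which induces the conformal St\"ackel transform of the contracted system.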
\subsection{Contraction [2,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[1,1,1,1].} The target systems are conformal St\"ackel transforms of $V_{[3,1]}$. Partial results are: \begin{enumerate} \item \[ {\bf A}=(1,0,0,0), (0,1,0,0),\quad O=S_9\to O'=E_{2},\quad \alpha =6,\] \[ {\bf A}= (0,0,1,0),\quad O=S_9\to O'=E_{2},\quad \alpha =4,\] \[ {\bf A}= (0,0,0,1),\quad O=S_9\to O'=S_1,\quad \alpha =0,\] \item\[ {\bf A}=(1,1,1,1),(0,1,0,1),(1,0,0,1)\quad O=S_8\to O'=S_1,\quad \alpha =0,\] \[ {\bf A}=(1,0,1,0),(0,1,1,0)\quad O=S_8\to O'=E_{2},\quad \alpha =4,\] \item \[{\bf A}=(0,0,1,1),\quad O=S_7\to O'=S_1,\quad \alpha =0,\] \[{\bf A}=(1,1,0,0),\quad O=S_7\to O'=E_2,\quad \alpha =8,\] \item {\small \[{\bf A}=(0,0,A_3,A_4),(A_3,0,0,A_4),(0,A_3,0,A_4),\ A_3A_4\ne 0,A_3\ne A_4,\] \[O=D4B\to O'=S_1,\ \alpha =0,\] \[{\bf A}= (A_1,A_2,0,0),(0,A_1,A_2,0)\ A_1A_2\ne 0,A_1\ne A_2,\] \[O=D4B\to O'=E_{2},\quad \alpha =6,\] \[ {\bf A}=(A_1,0,A_3,0),\ A_1A_3\ne 0, A_1\ne A_3,\] \[O=D4B\to O'=E_{2},\quad \alpha =4,\]} \item \[{\bf A}=(1,1,A,A),\ {\rm and\ all\ permutations},\ A\ne 0,1,\] \[O=D4C\to O'=S_1,\quad \alpha =0,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[1,1,1,1]\to O'=S_1,\quad \alpha=0.\] \end{enumerate} Note that, although the values of $\alpha$ differ, the target systems agree with those for $[1,1,1,1]\to [3,1]$ contractions of $V_{[1,1,1,1]}$. \subsection{Contraction [1,1,1,1] to [2,1,1] applied to conformal St\"ackel transforms of system V[2,1,1].} The target systems are conformal St\"ackel transforms of $V_{[2,1,1]}$.
Partial results are: \begin{enumerate} \item \[ {\bf A}=(1,1,0,0),\quad O=S_4\to O'=S_4,\quad \alpha =0,\] \item \[ {\bf A}= (1,0,0,0),(0,1,0,0),\quad O=S_2\to O'=S_{2},\quad \alpha =0,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_1\to O'=E_1,\quad \alpha =2,\] \item \[ {\bf A}=(0,0,1,0),\quad O=E_{16}\to O'=E_{16},\quad \alpha =4,\] \item \[{\bf A}=(A_3,A_4,0,0), (A_3A_4\ne 0,A_3\ne A_4),\quad \] \[O=D4A\to O'=D4A,\quad \alpha =0,\] \item \[{\bf A}=(0,0,A_1,A_2),(A_1A_2\ne 0),\quad O=D3B\to O'=E_1,\quad \alpha =2,\] \item \[{\bf A}=(A,0,0,1),(0,A,0,1)\ A\ne 0,\quad O=D2B\to O'=S_2,\quad \alpha =0,\] \item \[{\bf A}=(1,1,A,0),\ A\ne 0,\ O=D2C\to O'=S_4,\quad \alpha =0,\] \item \[{\bf A}=(A_3,A_4,A_2,A_1),\ O=K[2,1,1]\to O'=S_4,\quad \alpha=0.\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [2,2] applied to conformal St\"ackel transforms of system V[2,1,1].} The target systems are conformal St\"ackel transforms of $V_{[2,2]}$. Partial results are: \begin{enumerate} \item \[ {\bf A}=(1,1,0,0),\quad O=S_4\to O'=E_{19},\quad \alpha =4,\] \item \[ {\bf A}= (1,0,0,0),(0,1,0,0),\quad O=S_2\to O'=E_7,\quad \alpha =2,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_1\to O'=E_8,\quad \alpha =2,\] \item \[ {\bf A}=(0,0,1,0),\quad O=E_{16}\to O'=E_{17},\quad \alpha =4,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0,A_1\ne A_2),\quad\] \[O=D4A\to O'=E_7,\quad \alpha =2,\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad O=D3B\to O'=E_8,\quad \alpha =2,\] \item \[{\bf A}=(A,0,0,1),(0,A,0,1)\ A\ne 0,\quad O=D2B\to O'=E_7,\quad \alpha =2,\] \item \[{\bf A}=(1,1,A,0),\ A\ne 0,\ O=D2C\to O'=E_{19},\quad \alpha =4,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,1,1]\to O'=E_7,\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[2,1,1].} The target systems are conformal St\"ackel transforms of $V_{[3,1]}$.
Generically in $a$, partial results are: \begin{enumerate} \item \[ {\bf A}=(1,1,0,0),\quad O=S_4\to O'=E_{2},\quad \alpha =2,\] \item \[ {\bf A}= (1,0,0,0),(0,1,0,0),\quad O=S_2\to O'=E_2,\quad \alpha =2,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_1\to O'=E_2,\quad \alpha =2,\] \item \[ {\bf A}=(0,0,1,0),\quad O=E_{16}\to O'=E_{2},\quad \alpha =2,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0,A_1\ne A_2),\quad\] \[O=D4A\to O'=E_2,\quad \alpha =2,\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad O=D3B\to O'=E_2,\quad \alpha =2,\] \item \[{\bf A}=(A,0,0,1),(0,A,0,1)\ A\ne 0,\quad O=D2B\to O'=E_2,\quad \alpha =2,\] \item \[{\bf A}=(1,1,A,0),\ A\ne 0,\ O=D2C\to O'=E_{2},\quad \alpha =2,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,1,1]\to O'=E_2,\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [4] applied to conformal St\"ackel transforms of system V[2,1,1].} The target systems are conformal St\"ackel transforms of $V_{[4]}$. Generically in $a$, partial results are: \begin{enumerate} \item \[ {\bf A}=(1,1,0,0),\quad O=S_4\to O'=E_{10},\quad \alpha =6,\] \item \[ {\bf A}= (1,0,0,0),(0,1,0,0),\quad O=S_2\to O'=E_{10},\quad \alpha =6,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_1\to O'=E_{10},\quad \alpha =6,\] \item \[ {\bf A}=(0,0,1,0),\quad O=E_{16}\to O'=E_{10},\quad \alpha =6,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0,A_1\ne A_2),\quad\] \[O=D4A\to O'=E_{10},\quad \alpha =6,\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad O=D3B\to O'=E_{10},\quad \alpha =6,\] \item \[{\bf A}=(A,0,0,1),(0,A,0,1)\ A\ne 0,\quad O=D2B\to O'=E_{10},\quad \alpha =6,\] \item \[{\bf A}=(1,1,A,0),\ A\ne 0,\ O=D2C\to O'=E_{10},\quad \alpha =6,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,1,1]\to O'=E_{10},\quad \alpha=6,\] \end{enumerate} \subsection{Contraction [2,2] to [4] applied to conformal St\"ackel transforms of system V[2,1,1].} The target systems are conformal St\"ackel transforms of $V_{[4]}$.
Partial results are: \begin{enumerate} \item \[ {\bf A}=(1,1,0,0),\quad O=S_4\to O'=E_{10},\quad \alpha =5,\] \item \[ {\bf A}= (1,0,0,0),(0,1,0,0),\quad O=S_2\to O'=E_{10},\quad \alpha =4,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_1\to O'=E_{10},\quad \alpha =4,\] \item \[ {\bf A}=(0,0,1,0),\quad O=E_{16}\to O'=E_{10},\quad \alpha =5,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0,A_1\ne A_2),\quad\] \[O=D4A\to O'=E_{10},\quad \alpha =4,\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad O=D3B\to O'=E_{10},\quad \alpha =4,\] \item \[{\bf A}=(A,0,0,1),(0,A,0,1)\ A\ne 0,\quad\] \[O=D2B\to O'=E_{10}, ({\rm generically})\quad \alpha =4,\] \[\qquad O=D2B\to O'=E_{9}, ({\rm special\ case})\quad \alpha =5,\] \item \[{\bf A}=(1,1,A,0),\ A\ne 0,\ O=D2C\to O'=E_{10}, ({\rm generically})\quad \alpha =5,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,1,1]\to O'=E_{10},\quad \alpha=4,\] \end{enumerate} \subsection{Contraction [2,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[2,1,1].} The target systems are conformal St\"ackel transforms of $V_{[3,1]}$.
Partial results are: \begin{enumerate} \item \[ {\bf A}=(1,1,0,0),\quad O=S_4\to O'=E_{2},\quad \alpha =8,\] \item \[ {\bf A}= (1,0,0,0),(0,1,0,0),\quad O=S_2\to O'=E_{2},\quad \alpha =6,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_1\to O'=E_{2},\quad \alpha =4,\] \item \[ {\bf A}=(0,0,1,0),\quad O=E_{16}\to O'=E_{2},\quad \alpha =4,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0,A_1\ne A_2),\quad\] \[O=D4A\to O'=E_{2},\quad \alpha =6,\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad\] \[O=D3C\to O'=E_{2}, ({\rm generic})\quad \alpha =4,\] \item \[{\bf A}=(A,0,0,1),(0,A,0,1)\ A\ne 0,\quad\] \[O=D2B\to O'=E_{2}, ({\rm generically})\quad \alpha =6,\] \item \[{\bf A}=(1,1,A,0),\ A\ne 0,\ O=D2C\to O'=E_{2},\quad \alpha =4,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,1,1]\to O'=E_{2},\quad \alpha=4,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [2,1,1] applied to conformal St\"ackel transforms of system V[2,2].} The target systems are conformal St\"ackel transforms of $V_{[2,2]}$. Partial results are: \begin{enumerate} \item \[ {\bf A}=(0,0,1,0),\quad O=E_8\to O'=E_{8},\quad \alpha =0,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_{17}\to O'=E_{17},\quad \alpha =0,\] \item \[ {\bf A}= (1,0,A_3,0),\quad O=E_7\to O'=E_{7},\quad \alpha =0, ({\rm generically})\] \item \[ {\bf A}=(0,1,0,A_4),\quad O=E_{19}\to O'=E_{19},\quad \alpha =0, ({\rm generically})\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad O=D3C\to O'=D3C,\quad \alpha =0,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0),\quad O=D3D\to O'=E_{7},\quad \alpha =2,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,2]\to O'=D3C,\quad \alpha=0,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [2,2] applied to conformal St\"ackel transforms of system V[2,2].} The target systems are conformal St\"ackel transforms of $V_{[2,2]}$.
Partial results are: \begin{enumerate} \item \[ {\bf A}=(0,0,1,0),\quad O=E_8\to O'=E_{8},\quad \alpha =2,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_{17}\to O'=E_{17},\quad \alpha =2,\] \item \[ {\bf A}= (1,0,A_3,0),\quad O=E_7\to O'=E_{7},\quad \alpha =2, ({\rm generically})\] \item \[ {\bf A}=(0,1,0,A_4),\quad O=E_{19}\to O'=E_{19},\quad \alpha =4, ({\rm generically})\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad O=D3C\to O'=E_8,\quad \alpha =2,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0),\quad O=D3D\to O'=E_{7},\quad \alpha =2,\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,2]\to O'=E_7,\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[2,2].} The target systems are conformal St\"ackel transforms of $V_{[3,1]}$. Partial results are: \begin{enumerate} \item \[ {\bf A}=(0,0,1,0),\quad O=E_8\to O'=E_{3}',\quad \alpha =2,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_{17}\to O'=E_{3}',\quad \alpha =2, \] \item \[ {\bf A}= (1,0,A_3,0),\quad O=E_7\to O'=E_{3}',\quad \alpha =2, ({\rm generically})\] \item \[ {\bf A}=(0,1,0,A_4),\quad O=E_{19}\to O'=E_{3}',\quad \alpha =2, ({\rm generically})\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad\] \[O=D3C\to O'=E_3',\quad \alpha =2, ({\rm generically})\] \[ \qquad O=D3C\to O'=D1C,\quad \alpha =3, ({\rm special\ case})\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0),\quad\] \[O=D3D\to O'=E_{3}',\quad \alpha =2, ({\rm generically})\] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,2]\to O'=E_3',\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [4] applied to conformal St\"ackel transforms of system V[2,2].} Partial results: \begin{enumerate} \item \[ {\bf A}=(0,0,1,0),\quad O=E_8\to O'=E'_{3},\quad \alpha =6,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_{17}\to O'=E'_{3},\quad \alpha =6, \] \item \[ {\bf A}= (1,0,A_3,0),\quad O=E_7\to O'=E'_{3},\quad \alpha =6, \] \item \[ {\bf A}=(0,1,0,A_4),\quad O=E_{19}\to O'=E'_{3},\quad \alpha =6, \]
\item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad O=D3C\to O'=E'_{3},\quad \alpha =6,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0),\quad O=D3D\to O'=E'_{3},\quad \alpha =6, \] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,2]\to O'=E'_{3},\quad \alpha=6,\] \end{enumerate} \subsection{Contraction [2,2] to [4] applied to conformal St\"ackel transforms of system V[2,2].} The target systems are conformal St\"ackel transforms of $V_{[4]}$. Partial results: \begin{enumerate} \item \[ {\bf A}=(0,0,1,0),\quad O=E_8\to O'=E_{10},\quad \alpha =4,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_{17}\to O'=E_{10},\quad \alpha =5, \] \item \[ {\bf A}= (1,0,A_3,0),\quad O=E_7\to O'=E_{10},\quad \alpha =4, ({\rm generically})\] \[\qquad O=E_7\to O'=E_{9},\quad \alpha =5, ({\rm special\ case})\] \item \[ {\bf A}=(0,1,0,A_4),\quad O=E_{19}\to O'=E_{10},\quad \alpha =5, ({\rm generically})\] \[\qquad O=E_{19}\to O'=E_{9},\quad \alpha =6, ({\rm special\ case})\] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad O=D3C\to O'=E_{10},\quad \alpha =4,\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0),\quad O=D3D\to O'=E_{10},\quad \alpha =4, \] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[2,2]\to O'=E_{10},\quad \alpha=4,\] \end{enumerate} \subsection{Contraction [2,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[2,2].} The target systems are conformal St\"ackel transforms of $V_{[0]}$. Partial results: \begin{enumerate} \item \[ {\bf A}=(0,0,1,0),\quad O=E_8\to O'=E_{3}',\quad \alpha =4,\] \item\[ {\bf A}=(0,0,0,1),\quad O=E_{17}\to O'=E_{3}',\quad \alpha =4, \] \item \[ {\bf A}= (1,0,A_3,0),\quad O=E_7\to O'=E_{3}',\quad \alpha =2, \] \item \[ {\bf A}=(0,1,0,A_4),\quad O=E_{19}\to O'=E_{3}',\quad \alpha =0, \] \item \[{\bf A}=(0,0,A_3,A_4),(A_3A_4\ne 0),\quad\] \[O=D3C\to O'=E_{3}',\quad \alpha =4, ({\rm generically})\] \[\qquad O=D3C\to O'=D1C,\quad \alpha =6, ({\rm special\ case})\] \item \[{\bf A}=(A_1,A_2,0,0), (A_1A_2\ne 0),\quad O=D3D\to O'=E_{3}',\quad \alpha =0, \] \item \[{\bf
A}=(A_1,A_2,A_3,A_4),\ O=K[2,2]\to O'=E_{3}',\quad \alpha=0,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [2,1,1] applied to conformal St\"ackel transforms of system V[3,1].} The target systems are conformal St\"ackel transforms of the singular system $V(1)$. All systems are flat space and St\"ackel equivalent to special cases of $E15$. \subsection{Contraction [1,1,1,1] to [2,2] applied to conformal St\"ackel transforms of system V[3,1].} The target systems are conformal St\"ackel transforms of the singular system $V(1)$. All systems are flat space and St\"ackel equivalent to special cases of $E15$. \subsection{Contraction [1,1,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[3,1].} The target systems are conformal St\"ackel transforms of $V_{[3,1]}$. Partial results are: \begin{enumerate} \item \[ {\bf A}=(0,0,0,1),\quad O=S_1\to O'=E_2,\quad \alpha =2,\] \item\[ {\bf A}=(1,0,0,0),\quad O=E_{2}\to O'=E_{2},\quad \alpha =2, \] \item \[ {\bf A}= (a,1,0,0),\quad O=D1B\to O'=E_{2},\quad \alpha =2, \] \item \[ {\bf A}=(0,0,1,0),\quad O=D2A\to O'=E_2, ({\rm generically})\quad \alpha =2, \] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[3,1]\to O'=E_2,\quad \alpha=2.\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [4] applied to conformal St\"ackel transforms of system V[3,1].} The target systems are conformal St\"ackel transforms of $V_{[4]}$. Partial results are: \begin{enumerate} \item \[ {\bf A}=(0,0,0,1),\quad O=S_1\to O'=E_{10},\quad \alpha =6,\] \item\[ {\bf A}=(1,0,0,0),\quad O=E_{2}\to O'=E_{10},\quad \alpha =6, \] \item \[ {\bf A}= (a,1,0,0),\quad O=D1B\to O'=E_{10},\quad \alpha =6, \] \item \[ {\bf A}=(0,0,1,0),\quad O=D2A\to O'=E_{10}, \quad \alpha =6, \] \item \[{\bf A}=(A_1,A_2,A_3,A_4),\ O=K[3,1]\to O'=E_{10},\quad \alpha=6.\] \end{enumerate} \subsection{Contraction [2,2] to [4] applied to conformal St\"ackel transforms of system V[3,1].} The target systems are conformal St\"ackel transforms of the singular system $V_{(2)}$.
All systems are flat space and St\"ackel equivalent to special cases of $E15$. \subsection{Contraction [2,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[3,1].} The target systems are conformal St\"ackel transforms of $V_{[3,1]}$. Partial results: \begin{enumerate} \item \[ {\bf B}=(0,0,0,1),\quad O=S_1\to O'=S_1,\quad \alpha =0,\] \item\[ {\bf B}=(1,0,0,0),\quad O=E_{2}\to O'=E_{2},\quad \alpha =6, \] \item \[ {\bf B}= (a,1,0,0),\quad O=D1B\to O'=E_{2},\quad \alpha =6, \] \item \[ {\bf B}=(0,0,1,0),\quad O=D2A\to O'=E_2,\quad \alpha =6, \] \item \[{\bf B}=(B_1,B_2,B_3,B_4),\ O=K[3,1]\to O'=S_1,\quad \alpha=0.\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [2,1,1] applied to conformal St\"ackel transforms of system V[4].} The target systems are conformal St\"ackel transforms of $V_{[0]}$. Partial results are: \begin{enumerate} \item \[ {\bf D}=(1,D_2,0,0),\quad O=E_{10}\to O'=E_{3}',\quad \alpha =2,\] \item\[ {\bf D}=(0,1,0,0),\quad O=E_{9}\to O'=E_{11},\quad \alpha =3,\] \item \[ {\bf D}= (0,0,0,1),\quad O=D1A\to O'=E_{20},\quad \alpha =4, \] \item \[{\bf D}=(D_1,D_2,D_3,D_4),\ O=K[4]\to O'=E_3',\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [2,2] applied to conformal St\"ackel transforms of system V[4].} The target systems are conformal St\"ackel transforms of the singular system $V(2)$. All systems are flat space and St\"ackel equivalent to special cases of $E15$. \subsection{Contraction [1,1,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[4].} The target systems are conformal St\"ackel transforms of $V_{[0]}$.
Partial results are: \begin{enumerate} \item \[ {\bf C}=(1,C_2,0,0),\quad O=E_{10}\to O'=E_{3}',\quad \alpha =2,\] \item\[ {\bf C}=(0,1,0,0),\quad O=E_{9}\to O'=E_{3}',\quad \alpha =2,\] \item \[ {\bf C}= (0,0,0,1),\quad O=D1A\to O'=E_{3}',\quad \alpha =2, \] \item \[{\bf C}=(C_1,C_2,C_3,C_4),\ O=K[4]\to O'=E_3',\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [4] applied to conformal St\"ackel transforms of system V[4].} The target systems are conformal St\"ackel transforms of $V_{[0]}$. Partial results are: \begin{enumerate} \item \[ {\bf C}=(1,C_2,0,0),\quad O=E_{10}\to O'=E_{3}',\quad \alpha =6,\] \item\[ {\bf C}=(0,1,0,0),\quad O=E_{9}\to O'=E_{3}',\quad \alpha =6,\] \item \[ {\bf C}= (0,0,0,1),\quad O=D1A\to O'=E_{3}',\quad \alpha =6, \] \item \[{\bf C}=(C_1,C_2,C_3,C_4),\ O=K[4]\to O'=E_{3}',\quad \alpha=6,\] \end{enumerate} \subsection{Contraction [2,2] to [4] applied to conformal St\"ackel transforms of system V[4].} The target systems are conformal St\"ackel transforms of the singular system $V_{(2)}$. All systems are flat space and St\"ackel equivalent to special cases of $E15$. \subsection{Contraction [2,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[4].} The target systems are conformal St\"ackel transforms of $V_{[0]}$. Partial results: \begin{enumerate} \item \[ {\bf C}=(1,C_2,0,0),\quad O=E_{10}\to O'=E_{3}',\quad \alpha =1\ (a\ne0),\ \alpha=0\ (a=0),\] \item\[ {\bf C}=(0,1,0,0),\quad O=E_{9}\to O'=E_{3}',\quad \alpha =1,\] \item \[ {\bf C}= (0,0,0,1),\quad O=D1A\to O'=E_{3}',\quad \alpha =-1, \] \item \[{\bf C}=(C_1,C_2,C_3,C_4),\ O=K[4]\to O'=E_{3}',\quad \alpha=-1,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [2,1,1] applied to conformal St\"ackel transforms of system V[0].} The target systems are conformal St\"ackel transforms of $V_{[0]}$.
Partial results are: \begin{enumerate} \item \[ {\bf C}=((C_2^2+C_3^2)/4,C_2,C_3,1),\quad O=E_{20}\to O'=E_{3}',\] \[(C_2^2+C_3^2\ne 0)\quad \alpha =2,\] \[ \qquad O=E_{20}\to O'=E_{11}, (C_2^2+C_3^2= 0,C_2C_3\ne 0)\quad \alpha =3,\] \[ \qquad O=E_{20}\to O'=D3A, (C_2= C_3 =0)\quad \alpha =4,\] \item\[ {\bf C}=(C_1,1,\pm i,0),\quad O=E_{11}\to O'=E_{3}', (C_1\ne 0)\quad \alpha =2,\] \[\qquad O=E_{11}\to O'=E_{11}, (C_1= 0)\quad \alpha =3,\] \item \[ {\bf C}= (1,0,0,0), \quad O=E_{3}'\to O'=E_{3}',\quad \alpha =2\] \item \[ {\bf C}=(C_1,C_2,C_3,0),\ (C_2^2+C_3^2\ne 0),\quad\] \[O=D1C\to O'=E_{3}', (C_1\ne 0)\quad \alpha =2, \] \[ \qquad O=D1C\to O'=D1C, (C_1= 0)\quad \alpha =3, \] \item \[{\bf C}=(C_1,C_2,C_3,1),\,(4C_1\ne C_2^2+C_3^2),\quad\] \[O=D3A\to O'=E_{3}',\ (C_1\ne 0)\quad \alpha =2,\] \[ \qquad O=D3A\to O'=D1C,\ (C_1= 0)\quad \alpha =3,\] \item \[{\bf C}=(C_1,C_2,C_3,C_4), \ O=K[0]\to O'=E_{3}',\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [2,2] applied to conformal St\"ackel transforms of system V[0].} The target systems are conformal St\"ackel transforms of $V_{[0]}$. 
Partial results are: \begin{enumerate} \item \[ {\bf C}=((C_2^2+C_3^2)/4,C_2,C_3,1),\quad O=E_{20}\to O'=E_{3}',\] \[(C_2^2+C_3^2\ne 0)\quad \alpha =2,\] \[ \qquad O=E_{20}\to O'=E_{20}, (C_3=-iC_2\ne 0)\quad \alpha =4,\] \[ \qquad O=E_{20}\to O'=E_{3}', ( C_3 =iC_2\ne 0)\quad \alpha =2,\] \item\[ {\bf C}=(C_1,1,\pm i,0),\quad O=E_{11}\to O'=E_{3}', (C_1\ne 0,C_3=-i)\quad \alpha =2,\] \[\qquad O=E_{11}\to O'=E_{11}, (C_3= i)\quad \alpha =2,\] \[\qquad O=E_{11}\to O'=E_{11}, (C_1= 0,C_3=-i)\quad \alpha =4,\] \item \[ {\bf C}= (1,0,0,0), \quad O=E_{3}'\to O'=E_{3}',\quad \alpha =2\] \item \[ {\bf C}=(C_1,C_2,C_3,0),\ (C_2^2+C_3^2\ne 0),\quad O=D1C\to O'=D1C, \quad \alpha =2, \] \item \[{\bf C}=(C_1,C_2,C_3,1),\,(4C_1\ne C_2^2+C_3^2),\ O=D3A\to O'=E_{11},\] \[(C_1\ne 0,C_2^2+C_3^2=0)\ \alpha =2,\] \[ \qquad O=D3A\to O'=D1C,\ (C_2^2+C_3^2\ne 0)\quad \alpha =2,\] \item \[{\bf C}=(C_1,C_2,C_3,C_4), \ O=K[0]\to O'=D1C,\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[0].} The target systems are conformal St\"ackel transforms of $V_{[0]}$.
Partial results are: \begin{enumerate} \item \[ {\bf C}=((C_2^2+C_3^2)/4,C_2,C_3,1),\quad O=E_{20}\to O'=E_{3}',\quad \alpha =2,\] \item\[ {\bf C}=(C_1,1,\pm i,0),\quad O=E_{11}\to O'=E_{3}', ({\rm generic})\quad \alpha =2,\] \[\qquad O=E_{11}\to O'=D1C, ({\rm special\ case})\quad \alpha =3,\] \item \[ {\bf C}= (1,0,0,0), \quad O=E_{3}'\to O'=E_{3}',\quad \alpha =2\] \item \[ {\bf C}=(C_1,C_2,C_3,0),\ (C_2^2+C_3^2\ne 0),\quad\] \[O=D1C\to O'=E_3',({\rm generic}) \quad \alpha =2, \] \[ \qquad O=D1C\to O'=D1C,({\rm special\ case}) \quad \alpha =3, \] \item \[{\bf C}=(C_1,C_2,C_3,1),\,(4C_1\ne C_2^2+C_3^2),\] \[O=D3A\to O'=E_{3}',\ ({\rm generic})\ \alpha =2,\] \item \[{\bf C}=(C_1,C_2,C_3,C_4), \ O=K[0]\to O'=E_3',\quad \alpha=2,\] \end{enumerate} \subsection{Contraction [1,1,1,1] to [4] applied to conformal St\"ackel transforms of system V[0].} The target systems are conformal St\"ackel transforms of $V_{[0]}$. Partial results are: \begin{enumerate} \item \[ {\bf C}=((C_2^2+C_3^2)/4,C_2,C_3,1),\quad O=E_{20}\to O'=E_{3}',\quad \alpha =6,\] \item\[ {\bf C}=(C_1,1,\pm i,0),\quad O=E_{11}\to O'=E_{3}', \quad \alpha =6,\] \item \[ {\bf C}= (1,0,0,0), \quad O=E_{3}'\to O'=E_{3}',\quad \alpha =6\] \item \[ {\bf C}=(C_1,C_2,C_3,0),\ (C_2^2+C_3^2\ne 0),\quad O=D1C\to O'=E_3', \quad \alpha =6, \] \item \[{\bf C}=(C_1,C_2,C_3,1),\,(4C_1\ne C_2^2+C_3^2),\ O=D3A\to O'=E_{3}',\ \alpha =6,\] \item \[{\bf C}=(C_1,C_2,C_3,C_4), \ O=K[0]\to O'=E_3',\quad \alpha=6,\] \end{enumerate} \subsection{Contraction [2,2] to [4] applied to conformal St\"ackel transforms of system V[0].} The target systems are conformal St\"ackel transforms of $V_{[0]}$.
Partial results are: \begin{enumerate} \item \[ {\bf C}=((C_2^2+C_3^2)/4,C_2,C_3,1),\quad O=E_{20}\to O'=E_{3}',\quad \alpha =4,\] \item\[ {\bf C}=(C_1,1,\pm i,0),\quad O=E_{11}\to O'=E_{3}', ({\rm generic})\quad \alpha =4,\] \[ \qquad O=E_{11}\to O'=E_{3}', ({\rm special\ case})\quad \alpha =5,\] \item \[ {\bf C}= (1,0,0,0), \quad O=E_{3}'\to O'=E_{3}',\quad \alpha =4\] \item \[ {\bf C}=(C_1,C_2,C_3,0),\ (C_2^2+C_3^2\ne 0),\quad\] \[ O=D1C\to O'=E_3',({\rm generic}) \quad \alpha =4, \] \item \[{\bf C}=(C_1,C_2,C_3,1),\,(4C_1\ne C_2^2+C_3^2),\] \[O=D3A\to O'=E_{3}',\ ({\rm generic})\ \alpha =4,\] \item \[{\bf C}=(C_1,C_2,C_3,C_4), \ O=K[0]\to O'=E_3',\quad \alpha=4,\] \end{enumerate} \subsection{Contraction [2,1,1] to [3,1] applied to conformal St\"ackel transforms of system V[0].} The target systems are conformal St\"ackel transforms of $V_{[0]}$. Partial results: \begin{enumerate} \item \[ {\bf C}=((C_2^2+C_3^2)/4,C_2,C_3,1),\quad O=E_{20}\to O'=E_{3}',\quad \alpha =6,\] \item\[ {\bf C}=(C_1,1,\pm i,0),\quad O=E_{11}\to O'=E_{3}', ({\rm generic})\quad \alpha =6,\] \item \[ {\bf C}= (1,0,0,0), \quad O=E_{3}'\to O'=E_{3}',\quad \alpha =6\] \item \[ {\bf C}=(C_1,C_2,C_3,0),\ (C_2^2+C_3^2\ne 0),\quad\] \[O=D1C\to O'=E_3', ({\rm generic})\quad \alpha =6, \] \item \[{\bf C}=(C_1,C_2,C_3,1),\,(4C_1\ne C_2^2+C_3^2),\] \[O=D3A\to O'=E_{3}',\ ({\rm generic})\ \alpha =6,\] \item \[{\bf C}=(C_1,C_2,C_3,C_4), \ O=K[0]\to O'=E_3',\quad \alpha=6,\] \end{enumerate} \section{Summary of the 8 Laplace superintegrable systems with nondegenerate potentials} All systems are of the form $\left(\sum_{j=1}^4\partial_{x_j}^2+V({\bf x})\right)\Psi=0$, or $\left(\partial_x^2+\partial_y^2+{\tilde V}\right)\Psi=0$ as a flat space system in Cartesian coordinates.
The potentials are: \begin{equation}\label{V[1111norm']} V_{[1,1,1,1]}=\frac{a_1}{x_1^2}+\frac{a_2}{x_2^2}+\frac{a_3}{x_3^2}+\frac{a_4}{x_4^2},\end{equation} \[{\tilde V}_{[1,1,1,1]}=\frac{a_1}{x^2}+\frac{a_2}{y^2}+\frac{4a_3}{(x^2+y^2-1)^2}-\frac{4a_4}{(x^2+y^2+1)^2},\] \begin{equation}\label{V211norm'} V_{[2,1,1]}=\frac{a_1}{x_1^2}+\frac{a_2}{x_2^2}+\frac{a_3(x_3-ix_4)}{(x_3+ix_4)^3}+\frac{a_4}{(x_3+ix_4)^2},\end{equation} \[{\tilde V}_{[2,1,1]}=\frac{a_1}{x^2}+\frac{a_2}{y^2}-a_3(x^2+y^2)+a_4,\] \begin{equation}\label{V[22norm']} V_{[2,2]}=\frac{a_1}{(x_1+ix_2)^2}+\frac{a_2(x_1-ix_2)}{(x_1+ix_2)^3} +\frac{a_3}{(x_3+ix_4)^2}+\frac{a_4(x_3-ix_4)}{(x_3+ix_4)^3},\end{equation} \[{\tilde V}_{[2,2]}=\frac{a_1}{(x+iy)^2}+\frac{a_2(x-iy)}{(x+iy)^3} +a_3-a_4(x^2+y^2),\] \begin{equation}\label{V[31]norm'} V_{[3,1]}=\frac{a_1}{(x_3+ix_4)^2}+\frac{a_2x_1}{(x_3+ix_4)^3} +\frac{a_3(4{x_1}^2+{x_2}^2)}{(x_3+ix_4)^4}+\frac{a_4}{{x_2}^2},\end{equation} \[ {\tilde V}_{[3,1]}=a_1-a_2x +a_3(4x^2+{y}^2)+\frac{a_4}{{y}^2},\] \begin{equation}\label{V[4]norm'} V_{[4]}=\frac{a_1}{(x_3+ix_4)^2}+a_2\frac{x_1+ix_2}{(x_3+ix_4)^3} +a_3\frac{3(x_1+ix_2)^2-2(x_3+ix_4)(x_1-ix_2)}{(x_3+ix_4)^4}\end{equation} \[+a_4\ \frac{4(x_3+ix_4)(x_3^2+x_4^2)+2(x_1+ix_2)^3}{(x_3+ix_4)^5},\] \[ {\tilde V}_{[4]}=a_1-a_2(x+iy) +a_3\left(3(x+iy)^2+2(x-iy)\right) -a_4\left(4(x^2+y^2)+2(x+iy)^3\right),\] \begin{equation}\label{V[0]norm'} V_{[0]}=\frac{a_1}{(x_3+ix_4)^2}+\frac{a_2x_1+a_3x_2}{(x_3+ix_4)^3}+a_4\frac{x_1^2+x_2^2}{(x_3+ix_4)^4},\end{equation} \[ {\tilde V}_{[0]}=a_1-(a_2x+a_3y)+a_4(x^2+y^2),\] \begin{equation}\label{Varb'} V_{arb}=\frac{1}{(x_3+ix_4)^2}f(\frac{-x_1-ix_2}{x_3+ix_4}),\end{equation} \[ {\tilde V}_{arb}=f({x+iy}),\ f\ {\rm arbitrary}\] \begin{equation}\label{V[1]norm'}V(1)=a_1\frac{1}{(x_1+ix_2)^2}+a_2\frac{1}{(x_3+ix_4)^2} +a_3\frac{(x_3+ix_4)}{(x_1+ix_2)^3}+a_4\frac{(x_3+ix_4)^2}{(x_1+ix_2)^4},\end{equation} \[{\tilde V}(1)=\frac{a_1}{(x+iy)^2}+a_2 
-\frac{a_3}{(x+iy)^3}+\frac{a_4}{(x+iy)^4},\] This is a special case of (\ref{Varb'}). \begin{equation}\label{V[2]norm'} V(2)'=a_1\frac{1}{(x_3+ix_4)^2}+a_2\frac{(x_1+ix_2)}{(x_3+ix_4)^3} +a_3\frac{(x_1+ix_2)^2}{(x_3+ix_4)^4}+a_4\frac{(x_1+ix_2)^3}{(x_3+ix_4)^5},\end{equation} \[ {\tilde V}(2)'=a_1+a_2(x+iy) +a_3(x+iy)^2+a_4(x+iy)^3.\] This is a special case of (\ref{Varb'}). \section{Summary of St\"ackel equivalence classes of Helmholtz superintegrable systems} \begin{enumerate} \item{$[1,1,1,1]$}: \[ S9,S8,S7,D4B,D4C, K[1,1,1,1]\] \item{$[2,1,1]$}: \[ S4,S2,E1,E16,D4A,D3B,D2B,D2C,K[2,1,1]\] \item{$[2,2]$}:\[E8,E17,E7,E19,D3C,D3D,K[2,2]\] \item{$[3,1]$}: \[ S1,E2,D1B,D2A,K[3,1]\] \item{$[4]$}: \[E10,E9,D1A,K[4]\] \item{$[0]$}: \[ E20,E11,E3',D1C,D3A,K[0]\] \item{$(1)$}:\[{\rm special\ cases\ of}\ E15\] \item{$(2)$}: \[{\rm special\ cases\ of}\ E15\] \end{enumerate} \subsection{Summary of B\^ocher contractions of Laplace systems}\label{4} This is a summary of the results of applying each of the B\^ocher contractions to each of the Laplace conformally superintegrable systems. {\small \begin{enumerate} \item{$[1,1,1,1]\to [2,1,1]$ contraction}: \[ V_{[1,1,1,1]}\downarrow V_{[2,1,1]};\ V_{[2,1,1]}\downarrow V_{[2,1,1]},V_{[2,2]},V_{[3,1]};\ V_{ [2,2]}\downarrow V_{ [2,2]},V_{[0]};\ V_{[3,1]}\downarrow V_{(1)},V_{[3,1]};\] \[ V_{[4]}\downarrow V_{[0]},V_{(2)};\ V_{[0]}\downarrow V_{[0]};\ V_{(1)}\downarrow V_{(1)},V_{(2)};\ V_{(2)}\downarrow V_{(2)}. \] \item{$[1,1,1,1]\to [2,2]$ contraction}: \[ V_{[1,1,1,1]}\downarrow V_{[2,2]};\ V_{[2,1,1]}\downarrow V_{ [2,2]},\,{\rm\ special\ case\ of\ } E15;\ V_{ [2,2]}\downarrow V_{[2,2]},V_{[0]};\ V_{[3,1]}\downarrow V_{(1)},\, {\rm special\ case\ of\ } E_{15};\] \[V_{[4]}\downarrow V_{ (2)};\ V_{[0]}\downarrow V_{[0]};\ V_{(1)}\downarrow V_{(1)},{\rm\ special\ case\ of\ } E15;\ V_{(2)}\downarrow V_{(2)}. 
\] \item{$[2,1,1]\to [3,1]$ contraction}: \[ V_{[1,1,1,1]}\downarrow V_{[3,1]};\, V_{[2,1,1]}\downarrow V_{[3,1]},V_{[0]};\, V_{[2,2]}\downarrow V_{ [0]},\quad V_{[3,1]}\downarrow V_{[3,1]},V_{[0]};\, V_{[4]}\downarrow V_{[0]};\] \[ V_{ [0]}\downarrow V_{[0]};\, V_{(1)}\downarrow V_{(2)};\, V_{(2)}\downarrow V_{(2)}. \] \item{$[1,1,1,1]\to [4]$ contraction}: \[ V_{[1,1,1,1]}\downarrow V_{[4]};\, V_{[2,1,1]}\downarrow V_{[4]};\, V_{ [2,2]}\downarrow V_{[0]};\, V_{[3,1]}\downarrow V_{[4]};\, V_{[4]}\downarrow V_{[0]},V_{[4]};\, V_{[0]}\downarrow V_{[0]};\] \[V_{(1)}\downarrow V_{ (2)};\, V_{(2)}\downarrow V_{(2)}; \] \item{$[2,2]\to [4]$ contraction}: \[ V_{ [1,1,1,1]}\downarrow V_{[4]};\, V_{[2,1,1]}\downarrow V_{[4]},V_{(2)};\, V_{[2,2]}\downarrow V_{[4]},V_{[0]};\, V_{[3,1]}\downarrow V_{(2)};\, V_{[4]}\downarrow V_{(2)};\] \[ V_{[0]}\downarrow V_{[0]},V_{(2)};\, V_{ (1)}\downarrow V_{(2)};\, V_{(2)}\downarrow V_{(2)}; \] \item{$[1,1,1,1]\to [3,1]$ contraction}: \[ V_{[1,1,1,1]}\downarrow V_{[3,1]},\ V_{[2,1,1]}\downarrow V_{[3,1]},V_{[0]};\, V_{[2,2]}\downarrow V_{ [0]};\, V_{[3,1]}\downarrow V_{[3,1]},V_{[0]};\, V_{[4]}\downarrow V_{[0]},\ V_{[0]}\downarrow V_{[0]},\] \[V_{ (1)}\downarrow V_{(2)},\ V_{(2)}\downarrow V_{(2)}. \] \end{enumerate} } \section{Summary of Helmholtz contractions} The superscript for each targeted Helmholtz system is the value of $\alpha$. In each table, corresponding to a single Laplace equation equivalence class, the top line is a list of the Helmholtz systems in the class, and the lower lines are the target systems under the B\^ocher contraction. 
{\small \bigskip Contractions of systems: \begin{equation}\label{Table1} \begin{array}{clllllll}& $[1,1,1,1]$&{\rm equivalence}&{\rm class\ }\ & {\rm contractions}& &\\ \hline\\ {\rm contraction} &{S_9}&S_7&S_8&D_4B&D_4C&K[1111]\\ \hline\\ {[}1111]\downarrow[211]&E_1^2&S_4^0&S_4^0&E_1^2&S_4^0&D_4A^0\\ &S_2^0&S_2^0&E_{16}^0&D_4A^0&D_4A^0\\ &&&&S_2^0&\\ \hline\\ {[}1111]\downarrow[22]&E_7^2 &E_{19}^4 &E_{17}^4&E_7^2&E_{19}^1&E_7^2\\ &&E_7^2&E_{19}^4\\ & &E_{17}^2 &\\ \hline\\ {[}1111]\downarrow[31]&E_2^2&S_1^0&S_1^0&S_1^0&S_1^0&S_1^0\\ &S_1^0&E_2^2&E_2^2&E_2^2\\ \hline\\ {[}1111]\downarrow[4]&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6\\ \hline\\ {[}22]\downarrow[4]&E_{10}^4&E_9^6&E_{10}^5&E_{10}^4&E_{10}^5&E_{10}^4\\ &&E_{10}^4&-\\ &&E_9^5&\\ \hline\\ {[}211]\downarrow[31]&E_2^6&S_1^0&S_1^0&S_1^0&S_1^0&S_1^0\\ &E_2^4&E_2^4&E_2^8&E_2^6\\ &S_1^0&&&E_2^4\\ \hline \end{array} \end{equation} \begin{equation}\label{Table2} \begin{array}{clllllllll}& $[2,1,1]$&{\rm equivalence}&{\rm class\ }\ & {\rm contractions}& &\\ \hline\\ {\rm contraction} &{S_4}&S_2&E_1&E_{16}&D_4A&D_3B&D_2B&D_2C&K[211]\\ \hline\\ {[}1111]\downarrow[211]&S_4^0&S_2^0&E_1^2&E_{16}^4&D_4A^0&E_1^2&S_2^0&S_4^0&S_4^0\\ &E_{17}^4&E_8^2&E_8^0&E_{17}^0&E_8^2&D_3C^0&E_8^0&E_{17}^0&D_3C^0\\ &S_1^0&S_1^0&E_2^2&E_2^2&S_1^0&E_2^2&S_1^0&S_1^0&S_1^0\\ &&E_2^2&&&&D_1B^3&E_2^2&&\\ \hline\\ {[}1111]\downarrow[22]&E_{17}^4 &E_{8}^2 &E_{8}^2&E_{17}^4&E_{7}^2&E_8^2&E_7^2&E_{19}^4&E_7^2\\ &&&&&E_8^2&E_{17}^2&E_8^2&E_{17}^4&\\ \hline\\ {[}1111]\downarrow[31]&S_1^0&S_1^0&E_2^2&E_2^2&S_1^0&E_2^2&E_1^2&S_1^0&S_1^0\\ &&&&&&D_1B^3&&&\\ &{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2\\ &&&&&D_1C^3&D_1C^3&D_1C^3&&\\ \hline\\ {[}1111]\downarrow[4]&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6\\ &&&&&E_9^8&E_9^8&E_9^8&E_9^8&\\ \hline\\ {[}22]\downarrow[4]&E_{10}^5&E_{10}^4&E_{10}^4&E_{10}^5&E_{10}^4&E_{10}^4&E_{10}^4&E_{10}^4&E_{10}^4\\ 
&&&&&&E_{10}^5&E_{10}^5&&\\ &\qquad {\rm \mbox{St\"{a}ckel}}& {\rm transforms}& {\rm of}& V(2)&\\ \hline\\ {[}211]\downarrow[31]&S_1^0&S_1^0&E_2^6&E_2^8&S_1^0&E_2^6&S_1^0&S_1^0&S_1^0\\ &&E_2^5&&&&&E_2^5&&\\ &{E_3'}^8&{E_3'}^6&{E_3'}^4&{E_3'}^4&{E_3'}^6&{E_3'}^6&{E_3'}^4&{E_3'}^4&{E_3'}^4\\ \hline \end{array} \end{equation} \begin{equation}\label{Table3} \begin{array}{clllllll}& $[2,2]$&{\rm equivalence}&{\rm class\ }\ & {\rm contractions}& &\\ \hline\\ {\rm contraction} &E_8&E_{17}&E_7&E_{19}&D_3C&D_3D&K[22]\\ \hline\\ {[}1111]\downarrow[211]&E_8^0&E_{17}^0&E_7^0&E_{19}^0&D_3C^0&E_7^2&D_3C^0\\ &{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2\\ \hline\\ {[}1111]\downarrow[22]&E_{8}^2 &E_{17}^4 &E_{7}^2&E_{19}^4&E_{8}^2&E_8^2&E_7^2\\ &{E_3'}^2&E_{11}^2&{E_3'}^2&E_{11}^2&E_{11}^2&E_{11}^2&E_{11}^2\\ \hline\\ {[}1111]\downarrow[31]&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2\\ &&&&E_{11}^4,E_{20}^4&D_1C^3&D_1C^3&\\ \hline\\ {[}1111]\downarrow[4]&{E'}_{3}^6&{E'}_{3}^6&{E'}_{3}^6&{E'}_{3}^6&{E'}_{3}^6&{E'}_{3}^6&{E'}_{3}^6\\ &&&E_{11}^8&E_{11}^8&E_{11}^8&E_{11}^8&\\ \hline\\ {[}22]\downarrow[4]&E_{10}^4&E_{10}^5&E_{10}^4&E_{10}^5&E_{10}^4&E_{10}^4&E_{10}^4\\ &&&E_9^5&E_9^6&&&\\ &{E_3'}^2&E_{11}^1&{E_3'}^2&E_{11}^1&E_{11}^1&E_{11}^1&E_{11}^1\\ &&&E_{11}^3&E_{20}^4&&&\\ \hline\\ {[}211]\downarrow[31]&{E'_3}^4&{E'}_3^4&{E'_3}^2&{E_3'}^2&{E'_3}^4&D_1C^2&D_1C^2\\ &{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_{20}}^4&{E_3'}^6&{E_3'}^6&{E_3'}^6\\ &&&&&D_1C^9&&\\ \hline \end{array} \end{equation} \begin{equation}\label{Table4} \begin{array}{clllllll}& $[3,1]$&{\rm equivalence}&{\rm class\ }\ & {\rm contractions}& &\\ \hline\\ {\rm contraction} &S_1&E_{2}&D_1B&D_2A&K[31]\\ \hline\\ {[}1111]\downarrow[211]&\qquad {\rm \mbox{St\"{a}ckel}}& {\rm transforms}& {\rm of}& V(1)&\\ &S_1^0&E_2^2&E_2^2&E_2^2&S_1^0\\ &&&D_1B^3&D_2A^4&\\ \hline\\ {[}1111]\downarrow[22]&\qquad {\rm \mbox{St\"{a}ckel}}& {\rm transforms}& {\rm of}& V(1)\\ \hline\\ 
{[}1111]\downarrow[31]&S_1^0&E_2^2&E_2^2&E_2^2&S_1^0\\ &&&D_1B^3&&&\\ &{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2\\ &&&D_1C^3&&\\ \hline\\ {[}1111]\downarrow[4]&{E}_{10}^6&{E}_{10}^6&{E}_{10}^6&{E}_{10}^6&{E}_{10}^6\\ &&&E_9^8&&\\ \hline\\ {[}22]\downarrow[4]&\qquad {\rm \mbox{St\"{a}ckel}}& {\rm transforms}& {\rm of}& V(2)\\ \hline\\ {[}211]\downarrow[31]&{S_1}^0&{E}_2^6&{E_2}^6&{E_2}^6&{S_1}^0\\ &&E_2^2&S_1^1&S_1^0&\\ &{E_3'}^4&{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^4\\ \hline \end{array} \end{equation} \begin{equation}\label{Table5} \begin{array}{clllllll}& $[4]$&{\rm equivalence}&{\rm class\ }\ & {\rm contractions}& &\\ \hline\\ {\rm contraction} &E_{10}&E_{9}&D_1A&K[4]\\ \hline\\ {[}1111]\downarrow[211]&{E_3'}^2&E_{11}^2&E_{20}^4&{E_3'}^2\\ &&{E_3'}^2&{E_3'}^2&&\\ &&\qquad {\rm \mbox{St\"{a}ckel}}& {\rm transforms}& {\rm of}& V(2)\\ \hline\\ {[}1111]\downarrow[22]&\qquad {\rm \mbox{St\"{a}ckel}}& {\rm transforms}& {\rm of}& V(2)\\ &{E_3'}^2&{E_3'}^2&D_1C^2&D_3A^2\\ \hline\\ {[}1111]\downarrow[31]&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2\\ &E_{11}^2&&&\\ \hline\\ {[}1111]\downarrow[4]&{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^6\\ &E_{11}^8&&&\\ &E_{10}^6&E_{10}^6&E_{10}^6&E_{10}^6\\ &E_9^8&&&\\ \hline\\ {[}22]\downarrow[4]&\qquad {\rm \mbox{St\"{a}ckel}}& {\rm transforms}& {\rm of}& V(2)\\ \hline\\ {[}211]\downarrow[31]&{E_3'}^1&{E_3'}^1&{E_3'}^{-1}&{E_3'}^{-1}\\ &{E_3'}^4&{E_3'}^5&{E_3'}^4&{E_3'}^3\\ &{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^6\\ \hline \end{array} \end{equation} \begin{equation}\label{Table6} \begin{array}{clllllll}& $[0]$&{\rm equivalence}&{\rm class\ }\ & {\rm contractions}& &\\ \hline\\ {\rm contraction} &E_{20}&E_{11}&E_3'&D_1C&D_3A&K[0]\\ \hline\\ {[}1111]\downarrow[211]&{E_3'}^2&{E_{3}'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2\\ &E_{11}^3&E_{11}^3&&D_1C^3&D_1C^3&&\\ \hline\\ {[}1111]\downarrow[22]&E_{11}^2&{E_{11}}^2&{E_3'}^2&E_{11}^2&E_{11}^2&E_{11}^2\\ &&&&&{E_3'}^2&{E_3'}^2\\ \hline\\ 
{[}1111]\downarrow[31]&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2&{E_3'}^2\\ &&&&D_1C^3&D_1C^3&\\ \hline\\ {[}1111]\downarrow[4]&{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^6\\ &E_{11}^8&E_{11}^8&&E_{11}^8&E_{11}^8&\\ \hline\\ {[}22]\downarrow[4]&{E_3'}^4&{E_3'}^4&{E_3'}^4&{E_3'}^4&{E_3'}^4&{E_3'}^4\\ &{E_{11}}^5&E_{11}^5&&E_{11}^5&\\ \hline\\ {[}211]\downarrow[31]&{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^6&{E_3'}^6\\ &&&&D_1C^9&&\\ \hline \end{array} \end{equation} } \section{Acknowledgement} This work was partially supported by a grant from the Simons Foundation (\# 208754 to Willard Miller, Jr).
The Nigerian Institution of Estate Surveyors and Valuers (NIESV) was founded in 1969 by the few qualified General Practice Chartered Surveyors who were trained mainly in the United Kingdom. The Institution was granted government recognition by the enactment of the Estate Surveyors and Valuers (Registration) Act, Decree No. 24 of 1975. The first Annual Conference was held at Ibadan in 1969. The Estate Surveyors and Valuers Registration Board of Nigeria (ESVARBON) is empowered to regulate and control the practice of the profession of Estate Surveying and Valuation in the country. The Institution is affiliated to the International Real Estate Federation (FIABCI), the Commonwealth Association of Surveying and Land Economy (CASLE), the International Federation of Surveyors (FIG), the Royal Institution of Chartered Surveyors (RICS), the Association of Professional Bodies of Nigeria (APBN) and the International Valuation Standards Council (IVSC). The objectives of the Institution, as provided in Chapter 1 (i) of its Constitution, are: to establish a high and reputable standard of professional conduct and practice in the landed profession throughout the Federal Republic of Nigeria; to secure and improve the technical knowledge which constitutes land economy, including valuation or appraisal of real estate and fixtures and fittings thereto (including plant and machinery), land management and development, investment and town planning; and to facilitate the acquisition of such knowledge by working in close collaboration with universities, institutions of higher learning and other professional bodies. Other objectives include: to promote the general interests of the profession and to maintain and extend its usefulness for the public good by advising members of the public, government departments, statutory bodies, local governments, associations, institutions and similar bodies on all matters coming within the scope of the profession; to initiate and consider any legislation relevant to the objects of the Institution; and to endeavour to acquaint the public with the role of the Estate Surveyor and Valuer in the economic development of the country.
Q: Orthogonal Eigenvector Matrices which are Symmetric

What (extra) conditions must be satisfied by a real symmetric matrix $A$ with distinct eigenvalues so that its orthogonal matrix of eigenvectors $V$ can be arranged to also be symmetric? I.e., if $A^T=A=V \Lambda V^T$, where $V^{-1}=V^T$ and $\Lambda$ is a diagonal matrix of (distinct) eigenvalues, what additional condition(s) on $A$ are required so that $V=V^T$?

A: I don't think there's any special characterization of such matrices. As a rule of thumb, the "natural" properties of a matrix are those invariant under change of basis, i.e., those that can be inferred from the eigenvalues alone.
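Neither the question nor the answer includes code, but the setup is easy to probe numerically. The sketch below is my own illustration (the helper `symmetrizable` is not from the thread, and it only explores the sign freedom in the eigenvector columns, not reorderings): it eigendecomposes a random symmetric matrix and checks whether any choice of column signs makes $V$ symmetric.

```python
import numpy as np
from itertools import product

# Build a random real symmetric matrix; generically its eigenvalues are distinct.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
A = (B + B.T) / 2

# Eigendecomposition: A = V @ diag(w) @ V.T with V orthogonal.
w, V = np.linalg.eigh(A)
assert np.allclose(V @ np.diag(w) @ V.T, A)

def symmetrizable(V, tol=1e-10):
    """Return True if some choice of column signs makes V symmetric.

    With distinct eigenvalues each eigenvector column is fixed up to sign
    (ignoring column reordering), so flipping signs exhausts that freedom.
    """
    n = V.shape[1]
    for signs in product([1.0, -1.0], repeat=n):
        W = V * np.array(signs)  # flip the sign of the selected columns
        if np.allclose(W, W.T, atol=tol):
            return True
    return False

print(symmetrizable(V))  # a random symmetric A typically gives False for n >= 3
```

Running this over many random draws suggests that symmetry of $V$ is an exceptional property rather than a generic one, which is consistent with the answer's point that there is no obvious basis-invariant characterization.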
Writer Types Podcast S.W. Lauden Crime + Mystery + Interviews Eryk Pruitt "The Long Dance" Podcast Review July 14, 2018 — 1 Comment Like most podcast listeners way back in 2014, I couldn't get enough of Serial. I wasn't quite as taken by Serial's second season, but plenty of other true crime options started popping up in my feed. A couple of all-time favorites include Dirty John and S-Town. My favorite new true crime podcast is The Long Dance. Here's a description of the 8-part series from the show's website: A disclaimer before we begin. One of the show's creators, Eryk Pruitt, is a friend of mine from the Indie crime fiction community. You'll also hear my voice at the beginning of each episode because Mr. Pruitt and his team were kind enough to let us promote our crime, mystery and thriller fiction podcast, Writer Types, with a quick ad. If all of that will keep you from taking this review seriously, stop reading now—but definitely don't let it stop you from checking out Episode 1 of The Long Dance. I have a feeling you'll be hooked, just like I was. Which brings us to my review… Producing a podcast about an unsolved 46-year-old double homicide seems like a tricky business. Not only is the evidence old or missing, but many of the people you'd naturally want to interview have long since shuffled off this mortal coil. This includes everybody from law enforcement officials, witnesses and suspects, to family and friends connected with the victims. So it's really impressive that Pruitt (pictured at left), Adamek (pictured below) and Kessler manage to weave a captivating tale despite the many roadblocks built up by the sands of time. Even more impressive is their ability to pull the story into the present by essentially re-opening the cold case themselves. In many ways, their dogged persistence is the real engine that moves this gripping story forward. 
This mostly works because of the professional relationship Pruitt and Adamek developed with Major Tim Horne of the Orange County Sheriff's Office. If this story was one of Pruitt's dark rural noir novels, Horne's thorough approach, cautious optimism, and even-keeled demeanor would almost be clichéd—which makes him that much more engaging as the de facto third narrator of the series. That alone is worth investing 8 hours of your podcast-listening time, but it's not even my favorite part of this series. The main thing that kept me bingeing The Long Dance over the course of a couple of days was the well-developed sense of time and place. Durham, North Carolina is not the town it was 46 years ago, and neither are the people most affected by the Mann/McBane murders. Many of the interviews included in the series highlight how the world moves on, even from something as heinous as the brutal murders of a young couple. It's possible to listen to The Long Dance as a straight true crime narrative and you definitely will not be disappointed, but the real triumph of this podcast is the way it showcases the pain, regret and anger that lives on inside the people touched by this almost forgotten tragedy. What We Got Wrong About Ford Fairlane "Do You Remember" Podcast Review Defending Your Influences Dusting Off My Drums Some Songs Make Great Short Stories S.W. Lauden is the Anthony Award-nominated author of the Tommy & Shayna novella, CROSSWISE, and the sequel, CROSSED BONES. His Greg Salem punk rock P.I. series includes BAD CITIZEN CORPORATION, GRIZZLY SEASON and HANG TIME. He is also the co-host of the Writer Types crime, mystery and thriller podcast. Steve lives in Los Angeles. This entry was tagged Drew Adamek, Durham, Eryk Pruitt, North Carolina, Piper Kessler, podcast, podcast review, podcaster, podcasters, podcasting, review, The Long Dance, true crime, Writer Types. Bookmark the permalink. 
2016: Favorite Rock and Roll Reads December 19, 2016 — Leave a comment If you like reading about rock and roll as much as I do, then 2016 was a really good year. Not only were the bookshelves stocked with amazing punk rock non-fiction from the likes of Keith Morris and John Doe, but The Replacements came back into my life in a BIG way. I also discovered a few other crime writers out there who, like me, are using rock and roll as the leaping off point for their violent tales of intrigue, lust and woe. And it was another great year for music-inspired short fiction as well. Here are a few of my favs, in no particular order. Nothing's more rock and roll than a list! TROUBLE BOYS: THE TRUE STORY OF THE REPLACEMENTS—Bob Mehr There have long been theories about why this Minneapolis punk outfit-turned-critical darlings never achieved their long-predicted commercial success. Rumors of self-doubt and self-sabotage were the stuff of legend. This well-researched book sets the record straight in a way that even the most die-hard fans will appreciate. ALL YOUR LIES CAME TRUE—Mike Creeden It's hard to read this high-octane thriller without thinking of your favorite rock and roll duos—Axl Rose/Slash, Mick Jagger/Keith Richards, or David Johansen/Johnny Thunders. Creeden does a great job of wrapping this page-turner in a glittery cape of rock and roll imagery to keep the action pumping. Strong characters, a fast-moving plot, and a killer back story deliver some unexpected twists and turns. This is a dark, but fun read that you won't be able to put down. Read my interview with Mike Creeden. UNDER THE BIG BLACK SUN: A PERSONAL HISTORY OF L.A. PUNK—John Doe & Tom DeSavia This collection of overlapping essays about the first-wave of LA punk is a fascinating look at how legendary scenes are born. 
It's incredible to think that a hundred kids, one apartment building and a handful of clubs gave us decades of great music from bands like X, The Germs, The Go Gos, The Minutemen and The Blasters. It goes by fast, so read it twice. FLIGHT 505—Leslie Bohem A private jet powered by broken dreams, regret and self-delusion. Fame might have eluded Mickey and Al, but that doesn't stop them from getting back in the chase—long after their expiration date. A fun, fast read that brings the 80s LA New Wave scene to life in vivid color, and explores the meaning of success through the perspective of three very different, but hopelessly intertwined characters. A great read for anybody that ever chased the brass ring down Hollywood Blvd. Read my interview with Leslie Bohem. MAMA TRIED—Edited by James Ray Tuck I can't think of a better marriage than the one between crime fiction and outlaw country—and this collection doesn't disappoint. What started out as a random Facebook post according to editor, James Ray Tuck ("Someone should do a crime fiction anthology based on outlaw country songs called MAMA TRIED so I can write a story for it."), turned into one of the best music themed anthologies of 2016. Stand out stories include Eryk Pruitt's "I'm The Only Hell My Mama Ever Raised," Christa Faust's "Truth or Consequences (Waiting' Round to Die)" and Eric Beetner's "Pardon Me (I've Got Someone To Kill)." PEEPLAND—Christa Faust & Gary Phillips I don't always read comic books or graphic novels, but when I do they're about a peepbooth worker and her punk rock ex-partner. The brutal murder of a public access pornographer puts this unlikely duo under fire from criminals, cops, and the city elite, uncovering a web of corruption that leads right to city hall. Christa Faust and Gary Phillips are two of L.A.'s best pulp and noir writers, and Andrea Camerini's artwork in PEEPLAND is fantastic. Read my interview with Christa Faust and Gary Phillips. 
MY DAMAGE: THE STORY OF A PUNK SURVIVOR—Keith Morris & Jim Ruland Keith Morris is a founding member of two groundbreaking SoCal bands, Black Flag and The Circle Jerks (among others). But this well-written book goes beyond those stories to show you his winding path to underground infamy. It's been a strange trip for this soulful punk icon, and it just keeps getting more interesting. CRIME + MUSIC—Edited by Jim Fusilli Jim Fusilli, editor for this fantastic short story collection, starts his forward this way: "I don't suppose it would be much of a surprise to discover that there's a dark and deadly side to the world of popular music." What is surprising about this anthology is the diverse talents of the contributors, including Zoe Sharp, Peter Robinson, Reed Farrel Coleman, Tyler Dilts, Bill Fitzhugh and Erica Wright—among many others. Every one of these stories hums, sings or (in the case of Gary Phillips', "Shaderoc The Soul Shaker") rips your head clean off. DESERT CITY DIVA—Corey Lynn Fayman I came into the Rolly Waters series in this third installment, but had no problem getting acquainted with the character and his San Diego. This book is a romp across a SoCal desert full of paranoid outsiders and lost souls. Love the musical references threaded throughout, and Rolly's ability to solve the action-packed case without constantly waving a gun around or punching through walls. A fast, fun read that will keep you coming back. Read my interview with Corey Lynn Fayman. WAITING TO BE FORGOTTEN: SONGS OF CRIME AND HEARTBREAK INSPIRED BY THE REPLACEMENTS—Edited by Jay Stringer Putting aside my own contribution to this anthology, Jay Stringer has assembled a truly impressive collection of crime and mystery writers including Johnny Shaw, Kristi Belcamino, Josh Stallings, Angel Colon, Jen Conley, Tom Leins, Alex Segura and Mike McCrary—among many others. 
Not to mention, talented contributors like Franz Nicolay (The Hold Steady) and Gorman Bechard (Director of "Color Me Impressed: A Film About The Replacements," and "Every Everything: The Music, Life & Times of Grant Hart"). Read my interview with Jay Stringer. S.W. Lauden's debut novel—about a punk rock musician turned disgraced cop—is called BAD CITIZEN CORPORATION. It was released in October 2015 from Rare Bird Books. The second Greg Salem novel, GRIZZLY SEASON, was published on October 2016. His standalone Tommy Ruzzo novella, CROSSWISE, is available from Down & Out Books. This entry was tagged All Your Lies Came Tru, Bob Mehr, Christa Faust, Corey Lynn Fayman, Crie + Music, Desert City Diva, Down & Out Books, Eric Beetner, Eryk Pruitt, Flight 505, Gary Phillips, Gutter Books, Hard Case Crime, James Ray Tuck, Jay Stringer, Jim Fusilli, Jim Ruland, Keith Morris, Leslie Bohem, Mama Tried, MIke Creeden, My Damage, Peepland, The Replacements, Trouble Boys, Waiting To Be Forgotten. Bookmark the permalink. Guest DJ—Eryk Pruitt Eryk Pruitt is a screenwriter, author, filmmaker and radio host. And starting today he is also a Guest DJ! Check out this amazing playlist featuring everything from Bob Dylan and Lee Hazlewood to Slim Cessna's Auto Club and Sublime. And don't miss our radio/podcast discussion about "Music in Crime Fiction" this Monday, Dec. 14. If you aren't already familiar with Eryk's work, you're missing out. His short fiction has appeared in The Avalon Literary Review, Thuglit, Pulp Modern, and Zymbol, among others. In 2014, his fiction was twice nominated for the Pushcart Prize, and also a finalist for a Derringer Award. His debut novel, DIRTBAGS, and his follow-up novel, HASHTAG, are both available now. He wrote and produced the short film FOODIE which won eight top awards at over sixteen film festivals. Since then, he has written several others, including KEEPSAKE and LIYANA, ON COMMAND. 
Here's an excerpt from an interview I did with Eryk Pruitt earlier this year. How does your approach to short stories differ from your longer works? I can write a first draft for a short story in a day. If I get the kernel of an idea, I can sit down and write and then set it aside and come back and rewrite a couple days later, then do it again… and after a couple weeks I will have a polished, fine-tuned little piece of fiction. That's pretty rewarding. Finishing something is its own reward, and the short story allows you to reward yourself more often than you can with a novel. Find Eryk Pruitt: Website and Amazon Previous Playlists: Guest DJ—Tom Pitts Guest DJ—Craig T. McNeely Guest DJ—Angel Colon Guest DJ—Josh Stallings 29 SoCal Punk Songs S.W. Lauden's debut novel, BAD CITIZEN CORPORATION, is available now from Rare Bird Books. His novella, CROSSWISE, will be published by Down & Out Books in 2016. This entry was tagged #amreading, #amwriting, 280 Steps, crime, Dirtbags, Eryk Pruitt, fiction, film, foodie, Hashtag, Keepsake, Liyana, lyrics, music, On Command, playlist, songs, The Crime Scene. Bookmark the permalink. Recommended Reading 2015 It's that time of the year. I've made a list, checked it a couple dozen times, and now I'm posting it here. This is not a "Best Of" list in the traditional sense. More of a "Man, I read some great books that got published this year!" list. The titles and authors are in no particular order, and there are probably a few I forgot. If you haven't already read these books, you should. Black Friday, Small Business Saturday and Cyber Monday are all great excuses to support your favorite booksellers. As if you need another excuse to buy books. (UPDATE: I've gotten some great suggestions for this list on other platforms. If you want to mention a book I didn't, please leave it in the comments below Because: Conversation! —Thanks!). 
CANARY by Duane Swierczynski CONTENDERS by Erika Krouse STRANGE SHORES by Arnaldur Indridason HOW TO SUCCESSFULLY KIDNAP STRANGERS by Max Booth III THE GIRL ON THE TRAIN by Paula Hawkins RUMRUNNERS by Eric Beetner UNCLE DUST by Rob Pierce THE MAGICIAN'S LAND by Lev Grossman A NEGRO AND AN OFAY by Danny Gardner WORM by Anthony Neil Smith WAYS TO DIE IN GLASGOW by Jay Stringer GO DOWN HARD by Craig Faustus Buck VORTEX by Paul D. Marks NEW YORKED by Rob Hart YOUNG AMERICANS by Josh Stallings THE MAN IN THE WINDOW by Dana King THE CARTEL by Don Winslow Still On The TBR List KILL ME QUICK by Paul D. Brazill HASHTAG by Eryk Pruitt ZERO SAINTS by Gabino Iglesias BULL MOUNTAIN by Brian Panowich THE SUBTLE ART OF BRUTALITY by Ryan Sayles Novellas & Anthologies THE FURY OF BLACKY JAGUAR by Angel Luis Colon DREAMING DEEP by Anonymous-9 THE DEEPENING SHADE by Jake Hinkson REDBONE by Matt Phillips DEAD HEAT WITH THE REAPER by William E. Wallace KNUCKLEBALL by Tom Pitts SAFE INSIDE THE VIOLENCE by Chris Irvin CLEANING UP FINN by Sarah M. Chen GRAVEYARD LOVE by Scott Adlerberg CITY OF ROSE by Rob Hart DECEMBER BOYS by Joe Clifford ROUGH TRADE by Todd Robinson HARD-BOILED HEART by Will Viharo FLOODGATE by Johnny Shaw This entry was tagged #amreading, #amwriting, Angel Luis Colon, Anonymous-9, Anthony Neil Smith, Arnaldur Indridason, Best Of 2015, Brian Panowich, Christopher Irvin, Craig Faustus Buck, crime, Dana King, Danny Gardner, Don Winslow, Duane Swierczynski, Erika Krouse, Eryk Pruitt, fiction, Gabino Iglesias, Jake Hinkson, Jay Stringer, Joe Clifford, Josh Stallings, Lev Grossman, Matt Phillips, Max Booth III, mystery, Paul D. Brazill, Paul D. Marks, Paula Hawkins, publishing, Rob Hart, Rob Pierce, Ryan Sayles, Scott Adlerberg, Todd Robinson, Tom Pitts, Will Viharo, William E. Wallace. Bookmark the permalink. 
Interrogation—Eryk Pruitt/ Noir at the Bar, Bouchercon September 28, 2015 — Link — Leave a comment Who: Eryk Pruitt What: A screenwriter, author and filmmaker living with his wife Lana and cat Busey. His short films FOODIE and LIYANA, ON COMMAND have won several awards at film festivals across the U.S. His fiction appears in The Avalon Literary Review, Pulp Modern, Thuglit and Zymbol, to name a few. In 2015, he was a finalist for the Derringer Award for his short story "Knockout." His novels, DIRTBAGS and HASHTAG, are available in e-book and paperback. He is also the founder of Noir at the Bar, Durham, and organized Noir at the Bar, Raleigh Bouchercon. Where: Durham, N.C. Interview conducted by email. Some questions and answers have been edited. How did you first find out about Noir at the Bar? Did you attend Noir at the Bar events in other cities before you launched the one in Durham? I kept stumbling upon them across the internet and wanted to attend one, possibly get the stones to read at one after a while. I traced them back to Jed Ayres and asked him what Durham had to do to get one, so I could experience it. He said "You got to start one yourself." He helped me find authors who would drive to Durham and it was a blast. We had great readers and afterward, I had a night on the town with Grant Jerkins, Peter Farris and Charles Dodd White, which could not be beat. The next one we did featured eight authors from the immediate area. We had another. I've read in Baltimore and at Shade in New York City. It was my first time up there and man, it was a total hoot. I've never met nicer people. This entry was tagged #amreading, #amwriting, #Bouchercon, Christa Faust, Dirtbags, Ed Kurtz, Eric Beetner, Eryk Pruitt, Hashtag, Jedidiah Ayres, Jen Conley, Joe Clifford, Johnny Shaw, Led Edgerton, Noir at the Bar, North Carolina, publishing, Raleigh, S.W. Lauden, Thomas Pluck, Tom Pitts. Bookmark the permalink. 
Eryk Pruitt's HASHTAG Is Out Today May 26, 2015 — Leave a comment Eryk Pruitt is screenwriter, author and filmmaker living in Durham, NC. His short films FOODIE and LIYANA, ON COMMAND have won several awards at film festivals across the US. His fiction appears in THE AVALON LITERARY REVIEW, PULP MODERN, THUGLIT and ZYMBOL, to name a few. His novel DIRTBAG S was published in April 2014, and his follow-up novel, HASHTAG, was published today by 280 Steps. I was lucky enough to catch up with the author last month between book and film projects, long enough to have him answer a few questions. Here is an excerpt from my interview with Eryk Pruitt. How does your new novel, HASHTAG, differ from your debut novel, DIRTBAGS? HASHTAG, for one, is a little longer. Much like DIRTBAGS, it is told in three parts. It also gets a prologue and an epilogue, which I'm pretty happy about. Our characters get a chance to leave town some in HASHTAG, which is fun. I think there's no place more beautiful, more sinister, more dangerous and more blessed than the American South, and I wanted to take the readers on a little ride, so we manage to get out of Lake Castor. How we do it… well, that's a different story. Was writing a novel easier the second time around? I was fortunate enough to have already written HASHTAG by the time DIRTBAGS was published. However, after having gone through line edits and copy edits, it fiddled with my head during HASHTAG rewrites. I kept rewriting it and rewriting it, and even after 280 Steps took it, I still emailed them and asked if I could rewrite it one more time. Since I've been lucky enough to get some good reactions from people regarding DIRTBAGS, I put a lot of pressure on myself to make a book that people will like. I kind of forgot that I was supposed to have a lot of fun and that's what people will respond to. It took me a while to get that through my thick skull, but I think I've got it down now. 
Have fun while you're writing and everything will be just fine… I hope.

Read the whole INTERVIEW HERE. Buy HASHTAG HERE.

S.W. Lauden is a writer and drummer living in Los Angeles. His short fiction has been accepted for publication by Out of the Gutter, Criminal Element, Akashic Books, Spelk Fiction, Shotgun Honey and Crimespree Magazine. His debut novel, BAD CITIZEN CORPORATION, will be published in 2015. His novella, CROSSWISE, will be published in 2016.
require 'spec_helper'

# rspec-puppet style spec: the catalogue compiled with default parameters
# should contain the 'onion' class.
describe 'onion' do
  context 'with defaults for all parameters' do
    it { should contain_class('onion') }
  end
end
Megachile dariensis is a hymenopteran insect in the family Megachilidae. The scientific name of the species was first validly published in 1965 by Pasteels.
Q: Typescript: Multi-dimensional objects don't recognise property despite being typed

I am trying to type a multi-dimensional array of objects, but my inner object properties are not recognised despite being present in my custom type. I have a members property in my group type, but TypeScript literally says I don't have a property called 'members' (or any of the other properties defined in group) when I try to make the resultObj property of type number or group.

interface membersObj {
    'name': string;
    'regNo': string;
    'age': number;
    'dob': string;
}

type group = {
    'members': Array<membersObj>;
    'oldest': number;
    'sum': number;
    'regNos': Array<number>;
}

interface resultObj {
    [index: string]: group | number;
}

let result: resultObj = {};

let arrayOfDetails: Array<membersObj> = input.map((x: membersObj) => {
    return {
        'name': x.name,
        'regNo': x.regNo,
        'age': baseDate - (new Date(x.dob)).getFullYear(),
        'dob': x.dob
    };
});

arrayOfDetails = arrayOfDetails.sort((a, b) => a.age - b.age);

for (let i = 0; i < arrayOfDetails.length; i++) {
    if (!result['group' + groupNo] || result['group' + groupNo].members.length == 3 ||
        (arrayOfDetails[i].age - result['group' + groupNo]['members'][0]['age']) > 5) {
        groupNo++;
        result.noOfGroups = groupNo;
        result['group' + groupNo] = {
            members: [arrayOfDetails[i]],
            oldest: arrayOfDetails[i].age,
            sum: arrayOfDetails[i].age,
            regNos: [Number(arrayOfDetails[i].regNo)]
        };
    }
}
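A likely explanation, sketched below as an illustration (this explanation and the simplified names are not from the original post): with an index signature of `group | number`, every indexed access has the union type `group | number`, so `.members` is not accessible until the value is narrowed to the object branch of the union.

```typescript
// Minimal reproduction with illustrative names (not the poster's exact types).
type Group = { members: string[] };

interface ResultObj {
  [index: string]: Group | number;
}

const result: ResultObj = { group1: { members: ["Ana"] }, noOfGroups: 1 };

const value = result["group1"]; // inferred type: Group | number
// value.members  <- this line would be a compile error: 'members' does not exist on 'number'

// Narrowing the union first makes the property accessible:
if (typeof value !== "number") {
  console.log(value.members.length); // OK: value is Group inside this branch
}
```

A common workaround is to keep the numeric counter in a separate variable or a separately typed property, so the index signature can be just the object type.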
\section{Introduction} \label{sec:introduction} Astronomy is currently undergoing an open-data revolution led by legacy surveys such as Euclid \citep{Euclid2011}, the Rubin Observatory Legacy Survey of Space and Time \citep{LSST2019}, Planck \citep{Planck2020} and the Laser Interferometer Gravitational Wave Observatory \citep{LIGO2015}. The main barrier to research in such a revolution is access to increasingly sophisticated analysis methods. For example, forward modelling and machine learning have emerged as important techniques for the next generation of surveys. Both of these techniques depend heavily on realistic simulations. In this context, accurate representations of the galaxy populations in the simulation and analysis of ongoing and future large-scale cosmology experiments are essential. In this paper, we study the phenomenology of the evolution of galaxy demographics and implement an accurate prescription based on a quenching model to generate galaxy catalogues. Traditionally the galaxy mass distribution is described by the Schechter function \citep{Schechter_1976}, and two main galaxy populations are distinguished: active and quiescent. The active or star-forming population is composed of galaxies actively forming stars, increasing their stellar mass. Conversely, the quiescent population is made of quenched galaxies which do not form stars (or do so very slowly). Typically two quenching mechanisms transform star-forming galaxies into quiescent objects: mass quenching and satellite quenching (see for example \cite{Peng_2010}). Satellite quenching usually occurs when a subhalo (and its galaxy) enters a denser region of space, e.g. when falling into a parent halo. Physically, this phenomenon has been related to strangulation \citep{Larson_1980, Balogh_2000} and ram pressure stripping \citep{GunnGott_1972}. Likewise, the primary cause of mass quenching is believed to be feedback from active galactic nuclei or supernovae \citep{Fabian_2012}.
However, there exist other potential interpretations that consider these two mechanisms as different manifestations of a common group quenching \citep{Knobel_2015}. In the astrophysical literature, quiescent galaxy samples at low redshift are commonly described by a double Schechter function \citep{LiWhite_2009, Peng_2010, Pozzetti_2010, Baldry_2012, Ilbert_2013, Muzzin_2013, Birrer_2014}, which is the sum of two single Schechter functions. Authors such as \cite{Peng_2010, Peng_2012} empirically connect the star-forming and quiescent mass functions via the quenching phenomenon, i.e. the cessation of star formation. Typically, these findings are drawn from observational data, employing different methods such as the classical $1/V_{max}$ approach \citep{Schmidt_1968}, parametric maximum likelihood methods \citep{Sandage_1979} or non-parametric step-wise-maximum likelihood techniques \citep{Efstathiou_1988}. Alternatively, similar conclusions are drawn from more physical approaches, e.g. \cite{Birrer_2014}. In this work we derive the continuity equations describing the rate of population change at fixed mass and environment. The model distinguishes between satellite galaxies and central galaxies and considers the probability of being satellite quenched, mass quenched and growing in stellar mass. By solving the equations analytically we identify a double Schechter function for the quiescent galaxies and validate our model against the best-fit SDSS DR7 sample from \cite{Weigel_2016}. A similar model was originally described by \cite{Peng_2010} (cf. Figure \ref{fig:quenching}). The authors derived a set of continuity equations for the number of blue galaxies lost per unit time in a given infinitesimal mass bin. In this picture they distinguish between two different channels: \textit{a)} star formation, whereby the galaxy grows in mass and moves to the next bin, and \textit{b)} satellite and/or mass quenching, which leaves the galaxy at fixed mass and moves it to the quiescent sample.
However, it is not clear how their model would account for satellite galaxies and for the incoming blue galaxies that grow from less massive bins. In that sense, their model is incomplete and cannot be implemented within simulation pipelines without additional specification and constraints. In this paper we include satellite galaxies explicitly and present these evolution processes as a continuous Markov chain, accounting correctly for the growth in stellar mass. \begin{figure} \centering \includegraphics[width=0.35\textwidth]{plots/quenching.pdf} \caption{This diagram represents the growth of the blue and quiescent populations described by Peng et al. \citep{Peng_2010}. The axis represents the infinitesimal mass bins, $B$ the active population, $S$ the satellite-quenched and $M$ the mass-quenched galaxies. In every bin, the blue galaxies can leave through the boundaries (hatched green lines) by growing or by becoming quiescent.} \label{fig:quenching} \end{figure} Additionally, our model will be included in the \verb!skypy.galaxies! module of the \verb!SkyPy! library\footnote{\url{https://github.com/skypyproject/skypy.git}} after the publication of this manuscript. \verb!SkyPy! ~\citep{Amara_2021, skypy_collaboration_2020_3755531} is an open-source Python package for simulating the astrophysical sky. It comprises a library of astrophysical models and a command-line script to run end-to-end simulations. With the implementation of our galaxy demographics model, the user can also draw active and quiescent populations. The new functionality and the pipeline to reproduce our results will appear in the examples page of the \verb!SkyPy! documentation\footnote{\url{https://skypy.readthedocs.io/en/stable/}}. In this paper we present the quenching model in Section \ref{sec:the_quenching_model} and identify the solutions of the continuity equations with the star-forming (single) and quiescent (double) Schechter functions.
In Section \ref{sec:schechter_mass_function} we show how the Schechter parameters for quiescent galaxies are related to the star-forming parameters, identifying the satellite-quenched galaxies as a subset of the blue sample. In addition, we present the time evolution of the amplitude of the star-forming Schechter function. Then we validate our results using a fitting curve to the SDSS DR7 sample in Section \ref{sec:validation}. Finally we conclude and introduce our future lines of work in Section \ref{sec:conclusions}. \section{The quenching model} \label{sec:the_quenching_model} In this section we describe the quenching phenomena associated with an increase in stellar mass and an increase in the density of the environment. The mass function describing the number density distribution of galaxies as a function of stellar mass is given by the aforementioned Schechter distribution \citep{Schechter_1976} \begin{equation}\label{eq:schechter} \phi(m, t) = \phi_{*}(t) \left(\frac{m}{m_{*}} \right)^{\alpha} e^{- \frac{m}{m_{*}}} \; , \end{equation} where $\phi_{*}$ is the amplitude of the Schechter function, $\alpha$ is the faint-end slope parameter and $m_{*}$ is the characteristic mass in units of solar masses. Different values of this set of parameters will describe both the star-forming and the quiescent populations. \begin{itemize} \item \textbf{Active galaxies}. Active or blue galaxies correspond to the population that actively forms stellar mass. In the cosmological model, dark matter halos are formed and grow by acquiring smaller halos. These halos host galaxies which grow in stellar mass. As they become more massive, some of these central galaxies start feeling the gravitational pull from other larger objects and turn into satellite galaxies. Therefore there exists a probability of a central galaxy becoming a satellite galaxy, $\eta_{\rm sat}$. This probability depends on mass, i.e. 
more massive galaxies will tend to remain central whereas smaller galaxies will become satellites. This probability is related to the fraction of satellite galaxies \begin{equation}\label{eq:fsat} f_{\rm sat} \equiv \frac{n_{\rm sat}}{n_{\rm total}} \end{equation} with $n_{\rm sat}$ the number density of satellite galaxies and $n_{\rm total}$ the number density of the total galaxy sample of a given mass. The increase in mass is driven by the star formation rate of the long-lived stellar population \begin{equation}\label{eq:sfr} SFR \equiv \frac{dm}{dt} \end{equation} with the specific star-formation rate defined as $sSFR \equiv SFR/ m$. \item \textbf{Mass quenching}. Mass quenching is the cessation of star formation when a blue galaxy reaches a critical mass. Let us consider a galaxy within a constantly accreting halo. At the beginning, both galaxy and halo grow together \citep{Birrer_2014}. Eventually, the galaxy is quenched and stops growing, fixing its stellar mass, even though the halo continues to grow. Note here that galaxy-galaxy mergers will eventually increase the galaxy's stellar mass post-quenching. This process is characterised by the mass-quenching rate, which is the probability of a galaxy being mass quenched per unit time. This transformation rate determines the fraction of active galaxies that are mass-quenched, $f_m$. According to \cite{Peng_2010}, the mass-quenching law is given by \begin{equation}\label{eq:massq} \eta_m = \mu SFR \end{equation} with $\mu = m_{*}^{-1}$ and $SFR$ the star formation rate. Equation \eqref{eq:massq} is valid for all masses and environments and at all epochs. The quenching rate has units of $\mathrm{Gyr}^{-1}$, and its inverse can be interpreted as the typical time a blue galaxy waits before being mass quenched.
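For orientation, consider an illustrative example (the numbers here are chosen for illustration only and are not quoted in the original text): a star-forming galaxy of mass $m = 10^{10}\, M_{\odot}$ with $sSFR = 0.1\, \mathrm{Gyr}^{-1}$, for a characteristic mass $m_{*} = 10^{10.6}\, M_{\odot}$, has
\begin{equation*}
\eta_m = \frac{SFR}{m_{*}} = sSFR\, \frac{m}{m_{*}} = 0.1 \times 10^{-0.6}\; \mathrm{Gyr}^{-1} \simeq 0.025\; \mathrm{Gyr}^{-1} \, ,
\end{equation*}
i.e. a mass-quenching timescale of order $40\,\mathrm{Gyr}$, which shortens rapidly as $m$ approaches $m_{*}$.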
In terms of the Schechter function, we will show that this population presents the same characteristic mass, $m_{*}$, as the blue population but with a faint-end slope parameter which differs from that of the blue population by one unit. This was originally shown by \cite{Peng_2010}.\\ \item \textbf{Satellite quenching}. Satellite quenching is the cessation of star formation potentially due to an increase in the density of the environment. Note that the physical cause and time scales of satellite quenching are still the subject of ongoing debate. As a simplified picture, we assign an instantaneous probability of the in-falling satellite being quenched \citep{Birrer_2014}. The blue survivors are then subject to growth and mass quenching as they become massive. This phenomenon is characterised by the satellite-quenching rate, which is the probability of a blue satellite galaxy being satellite-quenched per unit time. This transformation rate determines the fraction of active galaxies, hosted by subhalos, that are satellite-quenched, $f_{\rho}$. According to \cite{Peng_2010} this quenching rate is given by \begin{equation}\label{eq:envq} \eta_{\rho} = \frac{1}{1 - \epsilon_{\rho}}\frac{\partial \epsilon _{\rho}}{\partial \log \rho} \frac{\partial \log \rho}{\partial t} \end{equation} with $\epsilon_{\rho}$ the quenching efficiency, which depends on the comoving density of the environment, $\rho$. Again, the quenching rate has units of $\mathrm{Gyr}^{-1}$, and its inverse can be interpreted as the typical time a blue galaxy waits before being satellite-quenched. In principle, the fraction of galaxies turning quiescent through satellite quenching should be redshift dependent, although slowly varying \begin{equation}\label{eq:frho_etarho} \partial f_{\rho}/ \partial t = \eta_{\rho} (1 - f_{\rho})\, . \end{equation} For simplicity we take this to be a constant, $0< f_{\rho} < 1$ \citep{Birrer_2014}. For a more realistic scenario refer to \cite{Hartley_2013}.
In terms of the Schechter function, we will show that this quiescent population is essentially a subset of the blue population. They have the same shape, $\alpha$, the same characteristic mass, $m_{*}$, but a lower amplitude \citep{Peng_2010}. \end{itemize} Besides quenching, there exist further complex phenomena in the picture of galaxy evolution, such as galaxy-galaxy merging. The merging of galaxies would impact the massive end of the quiescent sample distribution, increasing the number density of massive red galaxies. The modelling of such processes will be the subject of future work. \subsection{Continuity Equations for Galaxy Demographics} \label{sec:continuity_equations} In this work we interpret galaxy evolution as a Markov process \citep{Birrer_2014}. In Figure \ref{fig:markov} the chain starts with a blue central galaxy, $B_c$, with a probability of becoming a satellite galaxy, $B_s$. The central galaxy could remain active and grow (at a rate determined by the star-formation rate) or could become too massive, with a probability of being mass-quenched, $M_q$. If it becomes a satellite galaxy, it could eventually be satellite quenched, $S_q$. Otherwise, the satellite galaxy could also be mass-quenched or remain active and grow. In our model we consider that once a galaxy has been quenched there is no way for it to become active again. This simple prescription not only has the power to show the connection between the quenching phenomena and the different populations, but also allows us to obtain individual star-formation histories, including for quenched galaxies, which is not possible with the \cite{Peng_2010} formalism alone. We will show at the end of the next section how this translates into a double Schechter function for the quiescent galaxies and how they relate to the active population.
From the above description we derive the equations that govern galaxy evolution \begin{figure} \centering \includegraphics[width=0.3\textwidth]{plots/markov_process.pdf} \caption{This is the Markov chain representing the population change for an infinitesimal time. $B$ represents active galaxies, $B_c$ central galaxies, $B_s$ satellite galaxies, $M_q$ mass quenched galaxies and $S_q$ satellite quenched galaxies. Please refer to the text for a detailed explanation.} \label{fig:markov} \end{figure} \begin{equation}\label{eq:evolution} \begin{split} \left.\frac{d B_c}{dt}\right\vert_{m, \rho} & = \alpha sSFR\, B_c - \eta_m B_c - \eta_{\rm sat} B_c\\ \left.\frac{d B_s}{dt}\right\vert_{m, \rho} & = \alpha sSFR\, B_s - \eta_m B_s - \eta_{\rho} B_s + \eta_{\rm sat} B_c \\ \left.\frac{d M_q}{dt}\right\vert_{m, \rho} & = \eta_m B_c + \eta_m B_s \\ \left.\frac{d S_q}{dt}\right\vert_{m, \rho} & = \eta_{\rho} B_s \end{split} \end{equation} where $B_c$ denotes the number density of central galaxies, $B_s$ satellite galaxies, $M_q$ mass-quenched galaxies and $S_q$ satellite-quenched galaxies. Note that the increase of the number density of active galaxies due to ongoing star formation is characterised by the logarithmic slope of the mass function \eqref{eq:schechter}, $\alpha \equiv d \log \phi / d \log m$, and the sSFR-mass relation, $sSFR$.\\ In order to solve equations \eqref{eq:evolution} analytically we consider the following list of assumptions: \textit{a)} $\alpha$ and $m_{*}$ are constant, \textit{b)} the evolution of the galaxy distribution with time is very slow, \textit{c)} the fraction of satellite galaxies $f_{\rm sat}$ evolves slowly with time and $f_{\rho}$ is constant, and \textit{d)} the initial conditions are set at a time when stellar masses were very low, $m_0 \ll m_{*}$, as was the environmental density.
This choice implies that the initial blue sample was only composed of central galaxies, $B_0 = B_{c0}$ ($B_{s0} = 0$), there existed no quiescent galaxies, $M_{q0} = S_{q0} = 0$, and therefore $f_{sat0} = f_{\rho 0} = 0$. The solutions for the active population read \begin{equation}\label{eq:nblue} \begin{split} B_c(m, t) & = B_{c*}(t) \left( \frac{m}{m_{*}}\right) ^{\alpha} e^{- \frac{m}{m_{*}}} \\ B_s(m, t) & = B_{s*}(t) \left( \frac{m}{m_{*}}\right) ^{\alpha} e^{- \frac{m}{m_{*}}} \end{split} \end{equation} with amplitudes \begin{equation}\label{eq:nblue_amplitude} \begin{split} B_{c*}(t) & = B_0 \left( \frac{m_{*}}{m_0}\right) ^{\alpha} e^{ \frac{m_0}{m_{*}}} e^{- \int_{t_0}^t \eta_{\rm sat} dt'} \\ B_{s*}(t) & = B_{c*}(t) \int_{t_0}^t \eta_{\rm sat} dt' \, . \end{split} \end{equation} Note that when setting $t$ to the initial time we retrieve the expected results $B_c(m, t_0) = B_0$ and $B_s(m, t_0) = 0$. The solutions for the quiescent populations read \begin{equation}\label{eq:nred} \begin{split} M_q(m, t) & \simeq M_{q*}(t) \left( \frac{m}{m_{*}}\right) ^{\alpha + 1} e^{- \frac{m}{m_{*}}} \\ S_q(m, t) & = S_{q*}(t) \left( \frac{m}{m_{*}}\right) ^{\alpha} e^{- \frac{m}{m_{*}}} \end{split} \end{equation} where we have made explicit use of $m_0 \ll m_{*}$ and of the fact that the time evolution of the blue galaxy distribution is very slow. The amplitudes read \begin{equation}\label{eq:nred_amplitude} \begin{split} M_{q*}(t) & = B_{c*}(t) + B_{s*}(t) \\ S_{q*}(t) & = B_{s*}(t) \int_{t_0}^t \eta_{\rho} dt' \end{split} \end{equation} Note that indeed $M_q(m, t_0) \simeq 0$ and $S_q(m, t_0) = 0$. In summary, we showed in this section how the galaxy demographics can be presented as a Markov chain and described by a set of continuity equations that can be solved analytically. \section{The Quiescent Schechter Mass Functions} \label{sec:schechter_mass_function} In this section, we show the relation between the quiescent Schechter parameters and the properties of the blue population.
At the end of the section we study the explicit time dependence of the amplitude of the mass function. \subsection{Reduction of the parameter space} By inspection of equations \eqref{eq:nblue}, it is evident that the star-forming population follows a Schechter mass function \eqref{eq:schechter} \begin{equation}\label{eq:schechter_blue} \phi_{b}(m, t) = B_c(m, t) + B_s(m, t) \end{equation} with $\phi_{*b}(t) = B_{c*}(t) + B_{s*}(t)$ given by \eqref{eq:nblue_amplitude}, $\alpha_{b} = \alpha$ and $m_{*b} = m_{*}$. From equations \eqref{eq:nred} and \eqref{eq:nred_amplitude}, we see that the mass-quenched population has the same amplitude and characteristic mass as the active population but a different faint-end slope \begin{equation}\label{eq:massq_params} \begin{split} \alpha_{m} & = \alpha_{b} + 1 \\ m_{*m} & = m_{*b}\\ \phi_{*m}(t) & \simeq \phi_{*b}(t)\; . \end{split} \end{equation} Likewise, the satellite-quenched galaxies are clearly a subclass of the active population, although with lower amplitude \begin{equation}\label{eq:satq_params} \begin{split} \alpha_{\rho} & = \alpha_{b}\\ m_{*\rho} & = m_{*b}\\ \phi_{*\rho}(t) & = F_{\rho} \phi_{*bs}(t) \end{split} \end{equation} with $\phi_{*bs}(t) = B_{s*}(t)$ in \eqref{eq:nblue_amplitude} and $F_{\rho} \equiv \int_{t_0}^t \eta_{\rho} dt' = \mathrm{ln} (1 / (1- f_{\rho}))$, using equation \eqref{eq:frho_etarho}. These relations reduce the parameter space from nine to five parameters \begin{equation}\label{eq:parameter_space} \begin{Bmatrix} \phi_{*b} & \alpha_b & m_{*b}\\ \phi_{*m} & \alpha_m & m_{*m}\\ \phi_{*\rho} & \alpha_{\rho} & m_{*\rho} \end{Bmatrix} \longrightarrow \begin{Bmatrix} \phi_{*b} & \alpha_b & m_{*b}\\ f_{\rho} & f_{\rm sat} & \end{Bmatrix} \end{equation} with the possibility of reducing it to four parameters $\{\phi_{*b}, \alpha_b, m_{*b}, f_{\rho}\}$ if the separation between satellites and central galaxies is known.
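For completeness, the closed form of $F_{\rho}$ quoted above follows by direct integration of equation \eqref{eq:frho_etarho} with the initial condition $f_{\rho}(t_0)=0$:
\begin{equation*}
F_{\rho} \equiv \int_{t_0}^{t} \eta_{\rho}\, dt' = \int_{0}^{f_{\rho}} \frac{df'_{\rho}}{1 - f'_{\rho}} = -\ln(1 - f_{\rho}) = \ln\!\left(\frac{1}{1-f_{\rho}}\right) .
\end{equation*}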
\subsection{Time evolution of the Schechter Function} The time dependence of the galaxy population is crucial to track galaxy evolution throughout cosmic time. In the literature there exist many empirical expressions and parametrisations of this time dependence. One example of a parametrisation of the time evolution of the amplitude of the Schechter function is the model used by \cite{Herbel} \begin{equation}\label{eq:herbel} \phi_{*}(z) = b e^{az} \end{equation} where $z$ is redshift and $a$ and $b$ are free fitting parameters. As a novelty, we need not perform any parametrisation since we obtain the exact analytical solutions. We can re-write the amplitude of the active population \eqref{eq:schechter_blue} as a function of redshift \begin{equation}\label{eq:amplitude_z} \phi_{*b}(z) = A e^{f(z)} \end{equation} where $A$ is a combination of the prefactors in equations \eqref{eq:nblue_amplitude} and $f(z)$ is the argument of the exponential $ e^{- \int_{t_0}^t \eta_{\rm sat} dt'}$ written in terms of redshift. Had we no information about $f(z)$, a first approach would be a polynomial expansion, retrieving the parametrisation in equation \eqref{eq:herbel} \citep{Herbel}. \\ In this section we showed how these distributions correspond to Schechter functions. We also justified the appearance of the double Schechter function for the quiescent population, demonstrating that the satellite-quenched galaxies are indeed a subset of the blue galaxies and that the mass-quenched galaxies have a different faint-end slope parameter. This connection between the quenching phenomena and the galaxy populations allowed us to reduce the parameter space and derive the analytical time dependence of the amplitude of the Schechter function for the first time in the literature. \section{Validation} \label{sec:validation} In this section we validate our model using the results of the best fit to SDSS DR7 data in \cite{Weigel_2016}.
The authors present a comprehensive method to determine stellar mass functions and apply it to samples in the local universe, in particular to SDSS DR7 data in the redshift range from $0.02$ to $0.06$. Note that we are only comparing our results to a fit. However, this is a reasonable procedure, and we expect the same outcome when matching directly to real data, since the fit is a sufficient representation of the current data. To generate our Figure \ref{fig:weigel} we take their best-fit values of the blue parameters \begin{equation}\label{eq:blue_weigel} \begin{split} \phi_{*b} & = 10^{-2.423}\, h^{-3}\, \mathrm{Mpc}^{-3}\\ \alpha_{b} & = -1.21\\ m_{*b} & = 10^{10.60}M_{\odot} \end{split} \end{equation} and use them in equations \eqref{eq:schechter} and \eqref{eq:schechter_blue}. Then we generate the satellite and central curves according to equation \eqref{eq:nblue}. For simplicity we gather all of the prefactors in the blue amplitudes \eqref{eq:nblue_amplitude} into a single parameter and use the relation between the satellite fraction and the probability of becoming a satellite. Therefore, we can write \begin{equation}\label{eq:nblue_amplitude_validation} \begin{split} B_{c*} & = B(1-f_{\rm sat}) \\ B_{s*} & = B(1-f_{\rm sat}) \ln \left(\frac{1}{ 1-f_{\rm sat}}\right) \, . \end{split} \end{equation} We determine the value of the parameter $B$ by imposing the reasonable condition that the sum of the blue amplitudes \eqref{eq:nblue_amplitude_validation} should equal the total blue sample from \cite{Weigel_2016}. At this point, one could fix the fraction of satellite galaxies to a simple constant. Nonetheless, to make it more realistic and mass dependent, we take the entire sample from \cite{Weigel_2016}, split by central and satellite galaxies (their Figure 16), and calculate the fraction \eqref{eq:fsat}. This is a simple procedure, and one could use a more sophisticated method, but it suffices for our validation purposes. Finally, the fraction of satellite-quenched galaxies needs fine tuning.
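Spelling out the step that fixes $B$ (implicit in the text above): the condition $B_{c*} + B_{s*} = \phi_{*b}$, together with equation \eqref{eq:nblue_amplitude_validation}, gives
\begin{equation*}
B\,(1-f_{\rm sat})\left[1 + \ln\!\left(\frac{1}{1-f_{\rm sat}}\right)\right] = \phi_{*b} \quad\Longrightarrow\quad B = \frac{\phi_{*b}}{(1-f_{\rm sat})\left[1 + \ln\!\left(\frac{1}{1-f_{\rm sat}}\right)\right]}\, ,
\end{equation*}
evaluated with the mass-dependent $f_{\rm sat}$ described above.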
In this case, we simply take a known value from the literature, $f_{\rho} \simeq 0.5$ \citep{Birrer_2014}. With all these ingredients and equations \eqref{eq:nred}, we plot both the active and quiescent galaxy populations (Figure \ref{fig:weigel}). \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{plots/weigel_validation.pdf} \caption{ Model from equations \eqref{eq:nblue} and \eqref{eq:nred} compared to Weigel et al. \citep{Weigel_2016}. From left to right, we plot the different populations: active galaxies, quiescent galaxies and the entire sample. The solid black lines correspond to the best-fitting model from \citep{Weigel_2016}, and the dashed grey lines in the right panel represent the central and satellite populations of the entire sample. In the left panel, the dashed blue lines correspond to the surviving central and satellite galaxies \eqref{eq:nblue}, whereas the solid blue line corresponds to the entire blue sample \eqref{eq:schechter_blue}. In the middle panel, the dashed lines represent the mass-quenched and satellite-quenched galaxy populations \eqref{eq:nred} and the solid red line corresponds to the total quiescent population. In the right panel, the purple line is the total sample using our model. Our simple model successfully produces two quiescent populations linked to different quenching processes, justifying the double Schechter function. Our results are compatible with the best-fit curves from Weigel et al. except for the massive end of the quiescent galaxies, where more complex phenomena need to be considered. } \label{fig:weigel} \end{figure*} Our results are highly compatible with Weigel et al.'s best-fit curves. As expected, the main discrepancy appears at the massive end of the red sample, where other complex phenomena such as galaxy-galaxy merging processes dominate.
All in all, our simple model successfully connects the growth and quenching processes with the different galaxy populations and justifies the appearance of a double Schechter function for the quiescent galaxies: the satellite-quenched galaxies, a subset of the blue population, and the mass-quenched galaxies, with a different faint-end slope parameter. For a more realistic prescription one would need to track the history of the dark matter halos and their galaxies, the merger trees, and their time evolution, in order to know precisely the time- and mass-dependence of the fraction of satellites, as well as the time evolution of the fraction of satellite-quenched galaxies. This will be the subject of future work within the SkyPy collaboration. \section{Conclusions} \label{sec:conclusions} In this paper we focused on the theoretical description of galaxy demographics. The physical scene is set in halos hosting galaxies growing in mass, the so-called active population. As they become more massive, these central galaxies are subject to mass quenching and cease star formation. The survivors can become satellite galaxies, with a probability of being satellite quenched due to the increase in environmental density. Satellite galaxies that survive this phenomenon will continue to grow until eventually becoming mass quenched as they reach a critical mass. In this picture we classified the quiescent population into mass-quenched galaxies and satellite-quenched galaxies. We distinguished between active galaxies (centrals and satellites) and quiescent galaxies (mass-quenched and satellite-quenched), describing the galaxy demographics with a set of continuity equations that we solved analytically. Such equations invoke two quenching mechanisms that transform star-forming galaxies into quiescent objects: mass quenching and satellite quenching.
In this paper we made the necessary specifications and explicitly included satellite galaxies, completing the description of currently published sets of such continuity equations, e.g. \cite{Peng_2010}. This allowed us to provide a more accurate description of galaxy demographics, which is critical for the generation of realistic simulations. From the analytical solutions, we showed that the combination of the two quenching mechanisms produces a double Schechter function for the quiescent population. We demonstrated that the satellite-quenched galaxies are indeed a subset of the active galaxies and that the mass-quenched galaxies have a different faint-end slope parameter. The connection between quenching and galaxy populations significantly reduced the parameter space of our simulations. Instead of nine Schechter parameters, the same samples can be drawn by fixing the three star-forming Schechter parameters plus the fraction of satellite galaxies and the fraction of satellite-quenched galaxies. We then derived the analytical time dependence of the amplitude of the Schechter function for the first time in the literature. Comparison with empirical models showed that the parametrisation used by \cite{Herbel} is a sensible model. We then validated our model against SDSS DR7 data using the best-fitting model for the blue Schechter parameters from \cite{Weigel_2016} in the redshift bin $0.02 < z < 0.06$. We split their samples into centrals and satellites to obtain the fraction of satellite galaxies and used a fixed known value for the fraction of satellite-quenched galaxies. The main discrepancy in our comparison appeared at the massive end of the red sample, where galaxy-galaxy merging effects are believed to dominate. We leave for future work the modelling of a more complex scenario, including galaxy-galaxy merging. Another extension of this work will consider the time dependence of the characteristic mass $m_{*}$ \citep{Herbel}.
Finally, our model will be included in the \verb!skypy.galaxies! module of the \verb!SkyPy! library \citep{Amara_2021, skypy_collaboration_2020_3755531} after the publication of this manuscript. In addition, sonification, the transformation of physical data into sound, is becoming increasingly important to make astronomy accessible to those who are visually impaired, and to enhance visualisations and convey information that visualisation alone cannot (cf. \cite{harrison2021audio}). In this work we also made our main plot available in sound format using the \verb!STRAUSS! software \citep{james_trayford_2021_5776280}. This will be found on the \verb!SkyPy! documentation page. \section*{Acknowledgements} We would like to acknowledge all of the insightful comments from our colleagues in the SkyPy Collaboration, especially I. Harrison, R. Rollins and N. Tessore. We also acknowledge J. Trayford for helping us sonify our results. The preparation of this manuscript was made possible by a number of software packages: \verb!NumPy!, \verb!SciPy! \citep{Scipy_2020}, \verb!Astropy! \citep{Astropy_2018}, \verb!Matplotlib! \citep{Matplotlib_2007}, \verb!IPython/Jupyter! \citep{Jupyter_2007}. We also employed the \verb!STRAUSS! software \citep{james_trayford_2021_5776280} to sonify our main results. \bibliographystyle{mnras}
Q: Chrome DevTools > Service Workers > Testing > Local dev environment serves via http on a LAN

As above, is there an easy way to allow service workers for local development when serving via http on a local network, using Chrome and DevTools? Is there a port # to serve on? Did I miss an option in DevTools to allow a domain? I am aware of this answer from a Google dev suggesting it isn't possible without hacking Chrome, but that answer was given back in 2015 and a lot has happened with PWA/sw.js since then, so I am hoping I'll get lucky! Thanks for any assistance in advance.

A: One of the easiest ways is to install the Web Server for Chrome extension, select the work folder inside the app, and start a web server in seconds to test the service worker.

A: No change. For service workers to work, the LAN needs a valid SSL certificate. Workarounds appear to be:

* serve dev on localhost, if you can;
* as per the link in my question, launch Chrome from the terminal with unsafe flags and a temporary directory (also see this hopefully available link showing an example command); or
* if your dev build output has relative URIs, launch another server on your computer (e.g. Web Server for Chrome) with the relevant LAN dir as source.

I simply went localhost. If a Chrome dev sees this, it might be a handy feature to add an option in DevTools > Application to temporarily permit an unsafe domain (like the flags option). Have fun people.
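For the second workaround, the launch command looks roughly like this (a sketch: the LAN origin and profile directory below are placeholder assumptions, substitute your own address and port; behaviour of the switch can change between Chrome versions):

```shell
# Treat one insecure origin as secure so service workers register over
# plain http. Chrome silently ignores this switch unless a separate,
# throwaway profile directory is also supplied.
google-chrome \
  --unsafely-treat-insecure-origin-as-secure=http://192.168.1.5:8080 \
  --user-data-dir=/tmp/chrome-dev-profile
```

Close all other Chrome windows first, otherwise the flags are ignored and the existing instance is reused.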
from __future__ import print_function

import argparse
import logging
import re  # was missing in the original; used by load_symbol_keys below

import redis

# Logger
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s %(levelname)s: [%(name)s] %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)


def load_symbol_keys(symbol):
    # decode_responses=True makes redis-py return str keys (instead of
    # bytes under Python 3), which the regex below relies on
    symbol_redis = redis.StrictRedis(password='whatever', decode_responses=True)
    if symbol != '*':
        prefix = '{}:*'.format(symbol)
        keys = symbol_redis.keys(prefix)
    else:
        keys = list(filter(lambda x: x != 'progress', symbol_redis.keys('*')))
    logger.info('{} keys'.format(len(keys)))
    # keep only the trailing numeric id of each key
    return list(map(lambda x: re.search(r'\d+$', x).group(), keys))


def extract(keys):
    ret = []
    source_redis = redis.StrictRedis(db=2, password='whatever', decode_responses=True)
    for key in keys:
        ret.append({
            'sent': source_redis.get(key),
            'key': key
        })
    return ret


if __name__ == '__main__':
    # Parser
    parser = argparse.ArgumentParser(description='extract sentiment data')
    group = parser.add_argument_group(title='required arguments')
    group.add_argument('--symbol', required=True, help='symbol to process')
    args = parser.parse_args()
    print(extract(load_symbol_keys(args.symbol)))
River Otter "bird wardens" praised for three decades of rare birdlife conservation Clinton Devon Estates - A Devon landowner has praised a husband and wife for their dedication as River Otter bird wardens for almost 30 years and the role they've played in reviving a rare species of bird. Ever since they moved to East Devon from London in 1990, octogenarians Doug and Joan Cullen have volunteered for the Pebblebed Heaths Conservation Trust, which was set up by landowner Clinton Devon Estates in 2006 to manage the heathland and conserve the surrounding landscape, including the River Otter Estuary. In addition to monitoring bird numbers on the Otter Estuary and advising on additional habitat creation work on the adjacent wetland meadows, the couple's work has included influencing habitat management around Stantyway Farm in Otterton for cirl buntings, a rare farmland bird. The population of the species had declined so much, due to the intensification of farming and loss of habitat, that by 1989 the RSPB estimated that there were just 118 pairs remaining in the whole of Britain, confined only to South Devon. After the couple spotted a pair near the farm around 10 years ago, they instigated a collaborative effort with the then tenant Martin Williams, Cath Jeffs, Cirl Bunting Project Manager and Deborah Deveney, Cirl Bunting Project Officer for the RSPB, and Dr Sam Bridgewater, Head of Wildlife and Conservation for Clinton Devon Estates, owners of the farm, to improve their chances of survival. At the time, the birds hadn't been seen this side of the River Exe Estuary in at least 20 years, but the couple's most recent count in January put the cirl bunting population in Otterton at 28, with another pair spotted over near Weston, Sidmouth.
Helping the cirl buntings has had a knock-on effect: Doug and Joan have also helped boost Otterton's winter and breeding skylark population. Joan, 85, who grew up in Honiton, recalls: "The objective was to keep them there, so the initial focus was how best to improve the habitat. One way was substituting their main food source of arable plant seeds with a millet and canary seed mix. The retention of over-wintering stubble fields, which provide a winter seed food source, the management of broad grassy field margins to ensure summer feeding opportunities, and encouraging dense hedgerows for nesting were important." The collective's efforts have been continued and expanded by farmers Sam and Nell Walker, who took over the tenancy of Stantyway Farm, a 264-acre arable farm which was certified organic last year, when Martin retired. Downstream, along the Lower Otter Estuary at Budleigh Salterton, the couple has helped improve the habitat for migratory birds, including establishing scrapes (water beds of varying depths) and managing large reed beds. A combination of their efforts to improve the habitat and the impact of climate change has resulted in an increase in migratory bird species at the estuary. Doug, 84, who is originally from Hackney, East London, explained: "Our whole aim was to encourage birdlife and wildlife to the area. So we looked at how the landscape could be improved. The scrapes are good because different species like different depths. You could say that it's been landscaped for birds." Joan and Doug walk along the River Otter most days watching out for the birds. Their early work served as a precursor for the evolving Lower Otter Restoration Project, a joint partnership between Clinton Devon Estates and the Environment Agency deemed crucial in restoring the ecological health and inter-tidal habitats of the lower Otter valley and adapting to climate change.
Doug added: "Birdlife and wildlife have increased hugely over the years and we're seeing more rare species. And we're getting more migrants staying the whole year, rather than breeding here and then going." The couple estimate that they've been volunteering for around 35 years in total, starting with the RSPB and the Kent Wildlife Trust when they worked in London, Doug as a printer for the London Evening Standard, and Joan as a manager in the fraud department of a bank. Joan said: "We took early retirement and haven't stopped since. Sometimes I feel we've been busier than when we were working! We were never going to put our feet up. You get stamp collectors, and you get bird collectors. Bird watching is an enjoyable part of life." Dr Bridgewater said: "Doug and Joan have played a significant role in improving the future prospects of the cirl buntings in Devon and the species as a whole. We are most grateful to both of them for being our eyes and ears along the Lower Otter Estuary for almost 30 years; their enthusiasm and dedication has had a direct impact on bird populations there." Ms Jeffs added: "People like Doug and Joan really make a difference and are part of why my job is so rewarding. It has been a pleasure working with them and sharing the ups and downs of nature recovery. Clinton Devon Estates has been brilliant at supporting their tenants to be wildlife friendly and they should be proud of what they are doing for our threatened farmland species."
Astragalus baibutensis is a species of legume that was described by Aleksandr Andrejevitj Bunge. Astragalus baibutensis belongs to the genus Astragalus (the milkvetches) and the family Fabaceae (the legumes). No subspecies are listed in the Catalogue of Life.
Q: Hard drive problem

I have a problem with my laptop that I am unable to solve. I bought this Lenovo E495 laptop with two hard drives:

* 1 SSD with Windows preinstalled
* 1 HD

I installed Ubuntu after a while, with automatic partitioning, and it was installed on the 2nd HD, which is a lot slower. So I tried partitioning the SSD to install it alongside Windows... and I messed up. It wouldn't boot from anything anymore; I tried to follow guides to fix the bootloader but I guess I messed things up even more :) I ended up reinstalling Windows from a recovery disk. But it was installed on the 2nd, slow HD. The SSD is not recognized at all. It's not visible in Disk Management, it's not visible from an Ubuntu live USB, it's not visible in the BIOS, and it's not visible in the diagnostic tool from Lenovo. The only strange thing is that if I press F12, I see which options are available for booting: I see Windows, I see USB and then I see something called Ubuntu. Which I believe was from my test when I messed up? I have no idea if it's the SSD or just an entry somewhere I made by mistake. I am not sure if the situation is clear. Is it a hardware fault? It would be strange because the laptop is new; it was working perfectly until I started installing and uninstalling stuff. If not, is there any kind of advanced tool I could use to check? From the live Ubuntu I tried checking hardware using command-line tools but found no trace of the SSD. Any input appreciated. Thanks a lot
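A few standard commands from the Ubuntu live USB can confirm whether the kernel sees the SSD at all (a sketch: the grep patterns and device names are assumptions and will differ per machine):

```shell
# List every block device the kernel detected; an NVMe SSD normally
# shows up as nvme0n1 even when it has no partitions.
lsblk -o NAME,SIZE,MODEL

# Check whether the SSD controller is visible on the PCIe bus at all.
lspci | grep -i -e nvme -e 'non-volatile' || true

# Kernel boot messages often explain why a drive failed to initialise.
sudo dmesg | grep -i nvme || true
```

If the drive appears in none of these, it is not being detected at the firmware level, which points to a BIOS setting or hardware issue rather than anything the OS reinstalls did.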
If you usually shy away from environmentally friendly or organic products because they are more expensive, take another look. Going green doesn't necessarily have to mean spending more money. Kelli Grant, Senior Consumer Reporter for SmartMoney.com, gives 5 tips on how to be green while keeping your bill lean. First, stop receiving paper bill statements by opting to get them online instead of receiving them by mail. It cuts paper waste and saves trees. There can be better deals too: some insurance companies, including Allstate and Progressive, give a discount of up to 5% on your auto policy for receiving policy documents online. Second, swap out old appliances. Cut your energy bill and greenhouse emissions by upgrading to a new energy-efficient boiler, fridge or washing machine. States are currently offering rebates for trading in old appliances. For example, someone in New York could save $105 on a fridge. Stores and manufacturers are adding to the deals. Check energysavers.gov - some programs have already expired. Third, recycle your electronics. An improperly disposed of desktop computer adds about three pounds of hazardous materials to landfills. Keep in mind many manufacturers will recycle it for free when you buy a new computer from them. There are also third parties such as Gazelle.com that will buy your still-working electronics to resell or use for parts. Fourth, make sure to reuse plastic bags. The Environmental Protection Agency has said that if every person in New York City used one fewer plastic bag each year, we'd save about 5 million pounds of waste. You can also cut your store bills slightly: Target offers a five-cent discount per bag, while CVS offers a $1 credit every four visits. You don't need a pricey reusable bag either. Just keep those from previous visits, or hand carry items. Fifth, check the tire pressure on your vehicle.
Tires that aren't properly inflated increase drag, making your car about 3% less fuel-efficient. That in turn releases more emissions. Check your driver's manual for proper inflation levels and then hit the gas station to re-inflate. By doing this you could cut your gas bill by another $25 to $50 annually. For more information on saving when going green and other consumer tips click here.
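The tire-pressure arithmetic can be sketched as follows (the 3% efficiency figure comes from the tip above; the annual fuel spends are illustrative assumptions, not figures from the article):

```python
# Back-of-envelope check of the tire-pressure savings claim.
# The 3% fuel-efficiency loss is from the article; the spends are assumed.

def annual_tire_savings(annual_fuel_spend, efficiency_loss=0.03):
    """Dollars per year recovered by keeping tires properly inflated."""
    return annual_fuel_spend * efficiency_loss

for spend in (900, 1600):
    print("${:.0f}/yr on gas -> about ${:.0f}/yr saved".format(
        spend, annual_tire_savings(spend)))
```

With assumed spends of $900 and $1,600 a year, the 3% loss works out to roughly $27 and $48, consistent with the $25 to $50 range quoted.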
\section{Introduction} There exist two commonly used (but dual) approaches for investigating the error exponent in the literature, which are closely related to the approaches used in the theory of large deviations \cite{LD,ld98}: \begin{enumerate} \item The first one is based on the method of types. This approach is closely related to Sanov's approach\footnote{The general Sanov approach is applicable to any random variable. Here by Sanov, we mean the approach used for discrete random variables, which is also based on the method of types.} in large deviation (LD) theory. A comprehensive exposition of this approach for the basic problems can be found in \cite{csiszar:book}. \item Gallager's approach \cite{gallager:book}, which is similar in spirit to the Cram\'er approach in LD.\end{enumerate} While Sanov's method is more general than Cram\'er's, the latter has the advantage of being strong enough to find the exact order of the desired probability (such as the probability that a sum of independent r.v.'s deviates from zero). This observation was made by Bahadur and Rao, see \cite{LD}. In the same way, the exact order of the random coding bound has recently been derived in \cite{yucel, scarlett2014mismatched}, using an approach related to Bahadur--Rao. The soft covering lemma (also known as channel resolvability) \cite{han1993approximation,cs-cuff,wyner1975common} is another basic problem with many applications, such as secrecy problems and the simulation of channels. Further, it is in some sense dual to the channel coding problem. Recently, the exact exponent of the soft covering lemma under different measures of closeness has been derived in \cite{parizi, YuTan,yagli}. Although the techniques used in these papers are different, all are based on the method of types, and thus the results are limited to channels with finite alphabets. This motivates us to investigate other techniques, similar to those used by Gallager and Cram\'er.
Fortunately, such an approach yields not merely a different proof of the exact exponent, valid for any channel satisfying a mild regularity condition, but also the exact order of the soft covering lemma. The outline of the paper is as follows: after stating the problem in Section \ref{sec:notation}, we first present a one-shot upper bound for the soft covering problem under the total variation distance in Subsection \ref{subsec}; we then present our main results on the exact order of the soft covering lemma, under both the total variation distance and the relative entropy, in the rest of Section \ref{sec:ex-soft}. Section \ref{sec:analysis} is devoted to the ensemble converse proof of the results, while Section \ref{sec:achievability} is devoted to the achievability proof. \section{Notations and Definitions}\label{sec:notation} We follow closely the notation of Verd{\'u}'s book \cite{IT}, with the exception of using boldface letters to denote vectors (e.g. $\mathbf{x}=(x_1,\cdots,x_n)$). Throughout the paper, the base of $\log$ and $\exp$ is $\mathrm{e}$. Also, we use asymptotic notations such as $O(.),\Theta(.),\Omega(.)$. \begin{definition}{\bf Relative information}. \it Given two measures $P$ and $Q$ on the same probability space such that $P \ll Q$, the relative information $\imath_{P||Q}$ is defined as \[ \imath _ { P \| Q } ( x ) = \log \frac { \mathrm { d } P } { \mathrm { d } Q } ( x ).\] \end{definition} \begin{definition}{\bf Information density.} \it Given a joint distribution $P_{XY}$, the information density is defined as $\imath_{X;Y}(x,y)\triangleq \imath_{P_{XY}||P_X\times P_Y}(x,y)$. Throughout the paper, we omit the subscript whenever it is clear from the context.
\end{definition} \begin{definition}\it For two distributions $P$ and $Q$ such that $P\ll Q$, the relative entropy and the total variation (TV) distance are defined as follows, \begin{align} D(P||Q)&:=\mathbb{E}[\imath_{P||Q}(X)]\\ \|P-Q\|&:=\mathbb{E}[[\exp(\imath_{P||Q}(\overline{X}))-1]_+]=\frac{1}{2}\mathbb{E}[|\exp(\imath_{P||Q}(\overline{X}))-1|] \end{align} where $X\sim P$, $\overline{X}\sim Q$ and $[x]_+\triangleq\max\{0,x\}$. \end{definition} \begin{definition}\it Given $P_{XY}$, the $\alpha$-mutual information $I_{\alpha}(X;Y)$ \cite{IT} is defined by $$I_{\alpha}(X;Y)=\frac{\alpha}{\alpha-1}\log\mathbb{E}[\mathbb{E}^{\frac{1}{\alpha}}[\exp(\alpha\imath_{X;Y}(X;\widetilde{Y}))|\widetilde{Y}]],$$ where $(X,\widetilde{Y})\sim P_X P_Y$. \end{definition} \subsection{Problem Statement} Let $\mathcal{C}=\{X(k)\}_{k=1}^\mathsf{M}$ be a random codebook whose codewords are generated according to $P_X$. Given a channel $P_{Y|X}$, the output distribution $\mathsf{P}_Y$ induced by selecting an index $k$ uniformly from $[1:\mathsf{M}]$ and then transmitting $X(k)$ through the channel is \[ \mathsf{P}_Y(.):=\frac{1}{\mathsf{M}}\sum_{k=1}^{\mathsf{M}}P_{Y|X}(.|X(k)). \] We are interested in evaluating the closeness of the induced distribution $\mathsf{P}_Y$ to $P_Y$, where $P_X\rightarrow P_{Y|X}\rightarrow P_Y$. We use the relative entropy and the total variation distance to measure closeness. \section{Exact Soft Covering Order}\label{sec:ex-soft} \subsection{Gallager-type one-shot upper bound on the TV-distance}\label{subsec} We begin the investigation of the soft covering problem by stating a Gallager-type upper bound on the TV-distance in the one-shot regime.
\begin{thm}\label{thm:TV-one-shot}\it The TV-distance between the induced distribution $\mathsf{P}_Y$ and the desired distribution $P_Y$ is upper bounded by \begin{align} \mathbb{E}[\|\mathsf{P}_{Y}-P_Y\|]&\leq \dfrac{3}{2}\min_{0\le\rho\le\frac{1}{2}}~\mathsf{M}^{-\rho}\exp\left(\rho I_{\frac{1}{1-\rho}}(X;Y)\right) \label{eq:os-tv-e} \end{align} \end{thm} \vskip 2mm \begin{rem}\it The one-shot bound \eqref{eq:os-tv-e} readily implies that the exact exponents of the soft covering lemma for the i.i.d. codebook \cite[Theorem 1]{yagli} and for the constant-composition codebook \cite[Theorem 2]{yagli} are achievable. This follows from the following observations: \begin{itemize} \item i.i.d. codebook. In this case, the codewords are drawn from $P_X^{\otimes n}$ and the channel $P_{Y^n|X^n}=\prod P_{Y|X}$ is memoryless. In this setting, the assertion of \cite[Theorem 1]{yagli} follows from the identity $I_{\frac{1}{1-\rho}}(\mathbf{X};\mathbf{Y})=nI_{\frac{1}{1-\rho}}(X;Y)$. \item Constant-composition codebook. In this case, the codewords are drawn uniformly from the set of all $\mathbf{x}$ with the same type $P_X$, where $P_X$ is an $n$-type. In this setting, the assertion of \cite[Theorem 2]{yagli} follows from the inequality $I_{\frac{1}{1-\rho}}(\mathbf{X};\mathbf{Y})\le nI^c_{\frac{1}{1-\rho}}(X;Y)$, where Csisz\'ar's $\alpha$-mutual information $I^c_{\alpha}(X;Y)$ is defined as \[ I^c_{\alpha}(X;Y)=\inf_{Q_Y}\mathbb{E}[D_{\alpha}(P_{Y|X=X'}||Q_Y)] \] where $X'\sim P_X$. \end{itemize} Moreover, this exponent is achievable for any memoryless channel (not only the finite discrete memoryless one) under the assumption that the r.h.s. of \eqref{eq:os-tv-e} is finite. \end{rem} \begin{rem}\it In a recent work \cite{MBAYF}, Mojahedian et al.
consider the wiretap channel $P_{YZ|X}$ and derive a lower bound on the exponent of the TV-distance between the joint distribution $P_{M\mathbf{Z}}$ of the message $M$ and the eavesdropper's observation $\mathbf{Z}$ and the product distribution $P_MP_{\mathbf{Z}}$, in terms of the Csisz\'ar $\alpha$-mutual information $I_{\alpha}(X;Z)$ (with the same exponent as \eqref{eq:os-tv-e} for the channel $P_{Z|X}$), using a completely different proof.\end{rem} \begin{rem}\it Duality between Gallager's bound for channel coding and the exponent of soft covering. The expression \eqref{eq:os-tv-e} is the same as Gallager's \cite[Eq. 78]{verdu2015alpha} for channel coding, with the exception that $\rho$ is replaced by $-\rho$. \end{rem} \vskip 2mm \begin{IEEEproof} The proof follows from the one-shot bound in \cite[Corollary VII.2]{cs-cuff} with a simple modification. Inequality (106) of \cite{cs-cuff} asserts that \begin{align} & \mathbb{E}[\|\mathsf{P}_{Y}-P_Y\|]\leq \mathbb{P}[\mathcal{F}^c]\nonumber\\&~~~+\frac{1}{2}\mathbb{E}\left[\sqrt{\mathsf{M}^{-1}\mathbb{E}[\exp(\imath(X;Y))\mathbbm{1}\{(X,Y)\in\mathcal{F}\}|Y]}\right]\label{eq:os-tv} \end{align} where $(X,Y)\sim P_{XY}$ and $\mathcal{F}$ is an arbitrary event\footnote{Cuff \cite{cs-cuff} only considered a specific event $\mathcal{F}$ that gives a simple upper bound on the second term in \eqref{eq:os-tv}. However, the analysis is valid for any event $\mathcal{F}$.}. For any $0\leq \lambda\leq 1$, define \begin{align} & \mathcal{F}:=\Big\{(x,y):\exp(\imath(x;y))\nonumber\\ &~~~~~~~~~~~\leq \left(\mathsf{M}\mathbb{E}[\exp(\lambda\imath(X;Y))|Y=y]\right)^{\frac{1}{1+\lambda}}\triangleq \kappa_{\lambda}\Big \} \end{align} For a given $Y=y$, we have \begin{align} \mathbb{P}[\mathcal{F}^c|Y=y]\leq \mathsf{M}^{-\frac{\lambda}{1+\lambda}}\mathbb{E}^{\frac{1}{1+\lambda}}[\exp(\lambda\imath(X;Y))|Y=y]\label{eq:tv-atyp} \end{align} where the inequality follows from the Markov inequality.
Next, consider \begin{align} &\sqrt{\mathsf{M}^{-1}\mathbb{E}[\exp(\imath(X;Y))\mathbbm{1}\{(X,Y)\in\mathcal{F}\}|Y=y]}\nonumber\\ &\le\sqrt{\mathsf{M}^{-1}\kappa_\lambda^{1-\lambda}\mathbb{E}[\exp(\lambda\imath(X;Y))|Y=y]}\\ & = \mathsf{M}^{-\frac{\lambda}{1+\lambda}}\mathbb{E}^{\frac{1}{1+\lambda}}[\exp(\lambda\imath(X;Y))|Y=y]\label{eq:tv-sqrt} \end{align} where we used the definition of $\mathcal{F}$ and $\kappa_\lambda$. Observe that $$\mathbb{E}[\exp(\lambda\imath(X;Y))|Y=y]=\mathbb{E}[\exp((1+\lambda)\imath(X;y))]$$ by a change of measure argument. Using this fact, substituting \eqref{eq:tv-atyp} and \eqref{eq:tv-sqrt} in \eqref{eq:os-tv} and setting $\rho=\frac{\lambda}{1+\lambda}$ imply \eqref{eq:os-tv-e}. \end{IEEEproof} \subsection{Exact order of soft covering under relative entropy} In the rest of the paper, we consider the soft covering problem for the memoryless channel $P_{Y|X}$ in the $n$-shot regime, with a codebook $\mathcal{C}$ consisting of $\mathsf{M}_n=\exp(nR)$ codewords generated according to the i.i.d.\ distribution $P^{\otimes n}_X:=\prod_{k=1}^n P_X$. Here we denote the induced distribution by $\mathsf{P}_{Y^n}$. \begin{thm}\it\label{thm:KL} Suppose that $R>I(X;Y)>0$. Further, assume that the moment generating function of $\imath(X;Y)$ is finite in a neighborhood of the origin, that is, $\mathbb{E}[\exp(\tau\imath(X;Y))]<\infty$ for $\tau$ in a neighborhood of the origin. Let \begin{equation} \tau^*=\arg\max_{0\leq\tau\leq 1} \tau R-\log\mathbb{E}[\exp(\tau\imath_{X;Y}(X;Y))].\label{eq:tau-defn} \end{equation} Then \begin{align} &\mathbb{E}\left[D(\mathsf{P}_{Y^n}||P^{\otimes n}_Y)\right]\nonumber\\&=\left\{\begin{array}{ll} \Theta\left(\frac{\exp(-n\tau^*R)}{\sqrt{n}}\mathbb{E}^n[\exp(\tau^*\imath(X;Y))]\right)&\tau^*<1 \\ \Theta\left({\exp(-nR)}\mathbb{E}^n[\exp(\imath(X;Y))]\right)& \tau^*=1\end{array}\right.
\label{eq:KL-Exact} \end{align} \end{thm} \subsection{Exact order of soft covering under TV distance} \begin{definition}\it A channel $P_{Y|X}$ is said to be singular if $\mathrm{Var}[\imath(X;Y)|Y]=0$, almost surely w.r.t. $P_Y$.\footnote{For the discrete channel, this definition is equivalent to the definition of singular channel in \cite[Definition 1]{yucel}.} Otherwise, the channel is non-singular. \end{definition} \begin{thm}\it\label{thm:exact-tv} Suppose that $R>I(X;Y)>0$ and $\mathbb{E}[\exp(\tau\imath(X;Y))]<\infty$ for $\tau$ in a neighborhood of the origin. Let \begin{align} \rho^*=\arg&\max_{0\leq\rho\leq \frac{1}{2}} \rho (R-I_{\frac{1}{1-\rho}}(X;Y)) \label{eq:rho-defn} \end{align} To state the exact order, we should distinguish between singular and non-singular channels. We have \begin{align} &\mathbb{E}\left[\|\mathsf{P}_{Y^n}-P^{\otimes n}_Y\|\right]\nonumber\\&=\left\{\begin{array}{ll} \Theta\Big({{n^{-\frac{\beta^*}{2}}}}\exp(-{n\rho^*}(R-I_{\frac{1}{1-\rho^*}}(X;Y)))\Big)&\rho^*<\frac{1}{2} \\ \Theta\left(\exp(-\frac{n}{2}(R-I_{2}(X;Y)))\right)& \rho^*=\frac{1}{2} \end{array}\right. \label{eq:TV-Exact} \end{align} where $\beta^*=1-\rho^*$ for the non-singular channels and $\beta^*=1$ for the singular channels. \begin{rem}{\it Duality.} \it Again, the expression \eqref{eq:TV-Exact} is similar to the expression of the exact order of the random coding bound for channel coding \cite{yucel}, except that $\rho$ is replaced by $-\rho$. While the expressions are similar, the proofs are quite different. \end{rem} \end{thm} \section{Exact analysis for Ensemble Converse}\label{sec:analysis} In this section, we present the ensemble converse proof of Theorem \ref{thm:KL} and Theorem \ref{thm:exact-tv}. The proof is divided into four main steps. To make the analysis concise, we utilize the idea of Poissonizing the problem, which has been used in \cite{yagli} to eliminate the correlation between weakly dependent r.v.'s. Using the concentration of the Poisson r.v.
around its mean, we show that the exact order of the relative entropy and of the TV-distance after Poissonization is the same as in the original fixed-rate problem. So it suffices to find the exact order of the Poissonized problem. To evaluate the exact order of the Poissonized problem, we use the thinning property of a certain Poisson random sum to find lower bounds on the desired quantities. Next, we further lower bound these quantities in terms of the moments of a certain r.v. We present these steps in parallel for both the relative entropy and the TV-distance. Then, we continue the analysis separately for these two cases, although the main trick is the same. We use the change of measure trick in the same way as in the converse proof of the Bahadur--Rao theorem \cite{LD} (i.e., the exact order of the probability that a sum deviates from its mean) to find the exact order. \subsection{Poissonization} By Poissonization, we assume that the number of codewords is not fixed, but is a Poisson random variable with mean close to the size of the codebook. More precisely, we assume that the Poisson codebook is $\{\mathbf{X}(k)\}_{k\in\mathbb{N}}$, where the codewords are generated according to $P^{\otimes n}_X$. Further, we assume that \underline{$M$ is a Poisson r.v.} with {\it mean} $\mu_n=2\exp(nR)$. Let $\mathsf{L}_m$ and $\mathsf{V}_m$ be the average of the relative entropy and of the TV distance, respectively, when the number of codewords is $m$, that is \begin{IEEEeqnarray}{rcl} \mathsf{L}_{m}=\mathbb{E}\left[ D\left( \mathsf{P}_{\mathbf{Y}}^{(m)}||P^{\otimes n}_{Y}\right) \right] \label{eq:LM-dfn}\\ \mathsf{V}_{m}=\mathbb{E}\left[ \left\| \mathsf{P}_{\mathbf{Y}}^{(m)}-P^{\otimes n}_{Y}\right\| \right]\label{eq:VM-dfn} \end{IEEEeqnarray} where \( \mathsf{P}_{\mathbf{Y}}^{(m)}(.):=\frac {1}{m}\sum ^{m}_{k=1}P_{\mathbf{Y}|\mathbf{X}}\left( .|\mathbf{X}\left( k\right) \right) \). We will show that $\mathbb{E}[\mathsf{L}_M]$ is a good approximation for $\mathsf{L}_{\exp(nR)}$.
Also, $\mathbb{E}[\mathsf{V}_M]$ is a good approximation for $\mathsf{V}_{\exp(nR)}$. More precisely, we have \begin{lem}\label{le:1} \begin{align} \mathsf{L}_{\exp(nR) }&\ge \mathbb{E}\left[ \mathsf{L}_{M}\right] - nI\left( X;Y\right) \varepsilon_{\frac {1}{2}}^{\mu _{n}}\\ \mathsf{V}_{\exp(nR) }&\ge \mathbb{E}\left[ \mathsf{V}_{M}\right] - \varepsilon_{\frac {1}{2}}^{\mu _{n}} \end{align} where $\varepsilon_{\frac{1}{2}}=\sqrt{2}{\mathrm{e}^{-\frac{1}{2}}}<1$. \end{lem} \begin{IEEEproof} By Lemma \ref{le:mono-f-divergence} in the Appendix \ref{apx:mono-f-divergence}, the sequence $\{\mathsf{L}_m\}$ is decreasing in $m$. Further, $\mathsf{L}_1=\mathbb{E}[D(P_{\mathbf{Y}|\mathbf{X}=\mathbf{X}_1}||P_\mathbf{Y})]=nI(X;Y)$, because $\mathbf{X}_1\sim P_X^{\otimes n}$. Thus, \begin{align} \mathbb{E}\left[ \mathsf{L}_{M}\right] &\leq \mathsf{L}_1\mathbb{P}[M< \exp(nR)]+\mathsf{L}_{\exp(nR)}\mathbb{P}[M\ge \exp(nR)]\\ &\le nI\left( X;Y\right) \varepsilon_{\frac {1}{2}}^{\mu _{n}}+\mathsf{L}_{\exp \left( nR\right) }. \end{align} where the last inequality follows from \cite[Theorem 5.4]{mitz}. Similarly, the sequence $\{\mathsf{V}_m\}$ is decreasing by Lemma \ref{le:mono-f-divergence}, thus \begin{align} \mathbb{E}\left[ \mathsf{V}_{M}\right] &\leq \mathsf{V}_1\mathbb{P}[M< \exp(nR)]+\mathsf{V}_{\exp(nR)}\mathbb{P}[M\ge \exp(nR)]\\ &\le \varepsilon_{\frac {1}{2}}^{\mu _{n}}+\mathsf{V}_{\exp \left( nR\right) }. \end{align} where we have used $\mathsf{V}_1\le 1$. \end{IEEEproof} Let $T$ be a random variable defined by\footnote{It is worth noting that the randomness in $T$ comes from the randomness of the codebook, the Poisson r.v. $M$, and the r.v. $\mathbf{Y}\sim P_Y^{\otimes n}$.} \begin{equation} T=\frac {1}{\mu _{n}}\sum ^{M}_{k=1}\exp \left( \imath\left( \mathbf{X}(k);\mathbf{Y}\right) \right) .
\end{equation} \begin{lem}\label{le:2} \begin{align} \mathbb{E}\left[ \mathsf{L}_{M }\right] &\geq \frac {1}{2}\mathbb{E}\left[T\log {T}\right] -\frac {1}{2\mu _{n}}\left( 1+\mathsf{L}_{2\mu _{n}} \mu _{n}\varepsilon _{\frac{3}{2}}^{\mu _{n}}\right)\\ &= \frac {1}{2}\mathbb{E}\left[T\log {T}\right] -O\left(\exp(-nR)\right) \end{align} where $\varepsilon_{\frac{3}{2}}=\frac{\mathrm{e}^{.5}}{1.5^{1.5}}<1$. \end{lem} \begin{lem}\label{le:TV2} \begin{align} \mathbb{E}\left[ \mathsf{V}_{M }\right] &\geq \frac {1}{4}\mathbb{E}\left[\Big|T-1\Big|\right] -\frac {1}{4\sqrt{\mu _{n}}}-\frac{1}{2} \varepsilon _{\frac{3}{2}}^{\mu _{n}}\\ &= \frac {1}{4}\mathbb{E}\left[\Big|T-1\Big|\right] -O\left(\exp(-n\frac{R}{2})\right). \end{align} \end{lem} The proofs of Lemma \ref{le:2} and Lemma \ref{le:TV2} are provided in the Appendix \ref{apx:relative-entropy} and the Appendix \ref{apx:TV}, respectively. Comparing Lemma \ref{le:1} with Lemmas \ref{le:2} and \ref{le:TV2}, we get \begin{cor}\label{cor:both} \begin{align} \mathsf{L}_{\exp(nR)}&\geq \frac {1}{2}\mathbb{E}\left[T\log {T}\right] -O\left(\exp(-nR)\right),\label{eq:190}\\ \mathsf{V}_{\exp(nR)}&\geq \frac {1}{4}\mathbb{E}\left[\Big|T-1\Big|\right] -O\left(\exp(-n\frac{R}{2})\right).\label{eq:TV-190} \end{align} \end{cor} \subsection{Negligibility of the O-terms in Corollary \ref{cor:both}} We show that the O-terms in \eqref{eq:190} and \eqref{eq:TV-190} are negligible with respect to the exact expressions \eqref{eq:KL-Exact} and \eqref{eq:TV-Exact}, respectively. Thus, it is only required to prove that the exact expressions are lower bounds for the expectation terms in Corollary \ref{cor:both}. Observe that the exact exponent in the exact order \eqref{eq:KL-Exact} satisfies \begin{align} \max_{0\le\tau\leq 1}\tau R-\log\mathbb{E}[\exp(\tau\imath_{X;Y}(X;Y))]&\leq \max_{0\le\tau\leq 1} \tau (R-I(X;Y))\\ &<R \end{align} where we used the Jensen inequality for the concave function $\log x$ and the assumption $I(X;Y)>0$.
Thus the O-term $O\left(\exp(-nR)\right)$ is negligible w.r.t. the exact order \eqref{eq:KL-Exact}. Similarly, the exact exponent in the exact order \eqref{eq:TV-Exact} satisfies \begin{align} \max_{0\leq\rho\leq \frac{1}{2}} \rho (R-I_{\frac{1}{1-\rho}}(X;Y))\leq\max_{0\leq\rho\leq \frac{1}{2}} \rho (R-I(X;Y))<\frac{R}{2}, \end{align} where we used the fact that $I_s(X;Y)$ is an increasing function of $s$. \subsection{Lower bounding using the thinning property of the Poisson random sum} Let $\mathcal{F}$ be an arbitrary event. To obtain a lower bound on $\mathbb{E}[T\log T]$ (for the TV case, $\mathbb{E}[|T-1|]$), we split $T$ into the two parts $T_1$ and $T_2$ defined below, \begin{align} {T}_1&=\frac {1}{\mu _{n}}\sum ^{M}_{k=1}\exp \left( \imath\left( \mathbf{X}(k);\mathbf{Y}\right) \right) \mathbbm{1}\left\{ \left( \mathbf{X}\left( k\right) ,\mathbf{Y}\right) \in \mathcal{F}\right\} \\ {T}_2&=\frac {1}{\mu _{n}}\sum ^{M}_{k=1}\exp \left( \imath\left( \mathbf{X}(k);\mathbf{Y}\right) \right) \mathbbm{1}\left\{ \left( \mathbf{X}\left( k\right) ,\mathbf{Y}\right) \notin \mathcal{F}\right\} \end{align} It is clear that $T=T_1+T_2$. Further, conditioned on any instance $\mathbf{Y}=\mathbf{y}$, the random variables $U_k:=\exp \left( \imath\left( \mathbf{X}(k);\mathbf{Y}\right) \right)$ are i.i.d.
So the thinning property of the Poisson random sum $\sum_{k=1}^M U_k$ (which is proved in Appendix \ref{apx:Poisson-tinning}) shows that $T_1$ and $T_2$ are independent given $\mathbf{Y}=\mathbf{y}$. Moreover, $$\mathbb{E}[T|\mathbf{Y}=\mathbf{y}]=\frac{1}{\mu_n}\mathbb{E}\left[\mathbb{E}\left[\sum_{k=1}^M\exp(\imath(\mathbf{X}_k;\mathbf{y}))\Big|M\right]\right]=\frac{1}{\mu_n}\mathbb{E}\left[M\mathbb{E}\left[\exp(\imath(\mathbf{X}_1;\mathbf{y}))\right]\right]=\frac{\mathbb{E}[M]}{\mu_n}=1.$$ Thus, using Jensen's inequality for the convex function $f(x)=x\log x$, we have \begin{align} \mathbb{E}[T\log T|\mathbf{Y}]&=\mathbb{E}[(T_1+T_2)\log (T_1+T_2)|\mathbf{Y}]\\ &\ge \mathbb{E}\Big[(T_1+\mathbb{E}[T_2|\mathbf{Y}])\log({T}_1+\mathbb{E}[{T}_2|\mathbf{Y}])|\mathbf{Y}\Big]\\ &= \mathbb{E}\Big[(T_1+1-\mathbb{E}[T_1|\mathbf{Y}])\log({T}_1+1-\mathbb{E}[{T}_1|\mathbf{Y}])|\mathbf{Y}\Big].\label{eq:22} \end{align} Similarly, Jensen's inequality for the convex function $f(x)=|x-1|$ implies \begin{equation} \mathbb{E}\left[|T-1|\Big|\mathbf{Y}\right]\ge \mathbb{E}\Big[|T_1-\mathbb{E}[T_1|\mathbf{Y}]|\Big|\mathbf{Y}\Big].\label{eq:V22} \end{equation} \subsection{Useful bounds on $\mathbb{E}[U\log U]$ and $\mathbb{E}[|U-\mathbb{E}[U]|]$ in terms of the moments and their consequences} The following lemma, which is of independent interest, plays a key role in proving the converse for the relative entropy. Its proof is given in Appendix \ref{apx:log}. \begin{lem}\label{le:conv} For a positive random variable $U$ with $\mathbb{E}[U]=1$, we have \begin{equation} \mathbb{E}\left[ U\log U\right]\geq \dfrac {\mathbb{E}\left[ \left( U-1\right) ^{2}\right] ^{2}}{{2}\mathbb{E}\left[ \left( U-1\right) ^{2}\right] +\dfrac {2}{3}\mathbb{E}\left[ \left( U-1\right) ^{3}\right] }. \end{equation} \end{lem} Also, the following lemma is the TV counterpart of the previous lemma.
\begin{lem}\label{le:TV-conv} For any positive random variable $U$, we have \begin{equation} \mathbb{E}\left[ \big|U-\mathbb{E}[U]\big|\right]\geq \sqrt{\dfrac {\mathbb{E}\left[ \left( U-\mathbb{E}[U]\right) ^{2}\right] ^{3}}{\mathbb{E}\left[ \left( U-\mathbb{E}[U]\right) ^{4}\right] }}.\label{eqn:TV-conv} \end{equation} \end{lem} \begin{IEEEproof}For any r.v. $V$, we have \begin{align} \mathbb{E}[|V|]^{\frac{2}{3}}\mathbb{E}[V^4]^{\frac{1}{3}}\geq \mathbb{E}[V^2]\label{eq:Holder-00} \end{align} where we have used H\"{o}lder's inequality. Setting $V\leftarrow U-\mathbb{E}[U]$ and rearranging \eqref{eq:Holder-00} yields \eqref{eqn:TV-conv}. \end{IEEEproof} Using Lemma \ref{le:conv}, we prove the following lemma in Appendix \ref{apx:sub-kl}. \begin{lem}\label{le:4} For any event $\mathcal{F}$, \begin{align} \mathbb{E}[T\log T]\ge&\frac{1}{4}\min\left\{ \frac{1}{\mu_n}\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right]\right.\nonumber\\ &,\left. 3\dfrac{\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right]^2}{\mathbb{E}\left[\exp(2\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right]} \right\} \end{align} where $(\mathbf{X},\mathbf{Y})\sim P_{XY}^{\otimes n}$. \end{lem} Also, using Lemma \ref{le:TV-conv}, we prove the following lemma in Appendix \ref{apx:sub-TV}.
\begin{lem}\label{le:TV-4} For any event $\mathcal{F}$, \begin{align} \mathbb{E}\left[\big|T-1\big|\right] &\ge\mathbb{E}\left[\sqrt{\dfrac {1}{\dfrac{\mathbb{E}\left[\exp(3\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right]}{\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right] ^{3}}+{3}\mu_n\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right]^{-1} }}\,\right] \end{align} where $(\mathbf{X},\mathbf{Y})\sim P_{XY}^{\otimes n}$. \end{lem} \subsection{Large deviation type analysis for the relative entropy} To evaluate the bound in Lemma \ref{le:4}, we use the change-of-measure trick, in the same spirit as the one used in large deviations theory for proving Cram\'er's theorem and its extension by Bahadur and Rao; see \cite{LD}. Define the tilted distribution $P_{X_*Y_*}$ via the following Radon-Nikodym derivative, \begin{equation} \dfrac{\mathrm{d}P_{X_*Y_*}}{\mathrm{d}P_{XY}}(x,y):=\dfrac{\exp(\tau^*\imath(x;y))}{\mathbb{E}[\exp(\tau^*\imath(X;Y))]} \triangleq \dfrac{\exp(\tau^*\imath(x;y))}{S}\label{eqn:tilted} \end{equation} where $\tau^*$ was defined in \eqref{eq:tau-defn}. We consider the cases $\tau^*<1$ and $\tau^*=1$ separately. {\bf Case I: $\tau^*<1$}.
Differentiating the function inside \eqref{eq:tau-defn} and setting the derivative to zero gives \begin{align} R&=\dfrac{\mathbb{E}[\imath_{X;Y}(X;Y)\exp(\tau^*\imath_{X;Y}(X;Y))]}{\mathbb{E}[\exp(\tau^*\imath_{X;Y}(X;Y))]}=\mathbb{E}[\imath_{X;Y}(X_*;Y_*)]. \end{align} Now set \begin{align} &\mathcal{F}:=\{(\mathbf{x},\mathbf{y}):n\mathbb{E}[\imath_{X;Y}(X_*;Y_*)]\le\imath_{\mathbf{X};\mathbf{Y}}(\mathbf{x};\mathbf{y})\nonumber\\ &\qquad\qquad\qquad\qquad\quad\qquad\le n\mathbb{E}[\imath_{X;Y}(X_*;Y_*)]+A\} \end{align} Here we choose the positive constant $A$ large enough that $\mathbb{P}[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}]\geq \frac{C}{\sqrt{n}}$ for some positive constant $C$, where $(\mathbf{X}_*,\mathbf{Y}_*)\sim P_{X_*Y_*}^{\otimes n}$. The existence of such an $A$ is guaranteed by applying the Berry-Esseen CLT to the r.v. $\imath_{\mathbf{X};\mathbf{Y}}(\mathbf{X}_*;\mathbf{Y}_*)=\sum_{i=1}^n \imath_{X;Y}(X_{*,i};Y_{*,i})$. Then, for $\tau^*<1$, we have \begin{align} &\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right]\nonumber\\ &~= S^n\mathbb{E}\left[\exp((1-\tau^*)\imath(\mathbf{X}_*;\mathbf{Y}_*))\mathbbm{1}\{(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\}\right]\label{eq:ch35}\\ &~\geq S^n\exp(n(1-\tau^*)\mathbb{E}[\imath_{X;Y}(X_*;Y_*)])\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\mathbb{P}\left[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\right]\label{eq:ch36}\\ &~= S^n\exp(n(1-\tau^*)R)\mathbb{P}\left[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\right]\label{eq:ch37} \end{align} where \eqref{eq:ch35} follows by change of measure using the definition of $P_{X_*Y_*}$ and \eqref{eq:ch36} follows from the definition of the event $\mathcal{F}$.
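The stationarity identity $R=\mathbb{E}[\imath_{X;Y}(X_*;Y_*)]$ can be checked numerically. The following minimal sketch (the joint pmf and the rate $R$ are illustrative assumptions, not from the paper) solves for the interior $\tau^*$ by bisection on the increasing map $\tau\mapsto\mathbb{E}[\imath\,e^{\tau\imath}]/\mathbb{E}[e^{\tau\imath}]$ and verifies that the mean of the information density under the tilted pmf equals $R$.

```python
import math

# Toy joint pmf on {0,1}x{0,1} with uniform marginals (illustrative assumption).
P = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
i_d = {k: math.log(p / 0.25) for k, p in P.items()}  # i(x;y) with P(x)=P(y)=1/2

def tilted_mean(tau):
    # d/dtau log E[exp(tau i)] = E[i exp(tau i)] / E[exp(tau i)]
    den = sum(p * math.exp(tau * i_d[k]) for k, p in P.items())
    num = sum(p * i_d[k] * math.exp(tau * i_d[k]) for k, p in P.items())
    return num / den

R = 0.30  # chosen so that tilted_mean(0) < R < tilted_mean(1), hence tau* is interior
lo, hi = 0.0, 1.0
for _ in range(80):  # bisection for the stationary point tau*
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if tilted_mean(mid) < R else (lo, mid)
tau_star = 0.5 * (lo + hi)

# tilted pmf P_* and the identity R = E[i(X_*;Y_*)]
S = sum(p * math.exp(tau_star * i_d[k]) for k, p in P.items())
P_star = {k: p * math.exp(tau_star * i_d[k]) / S for k, p in P.items()}
mean_star = sum(q * i_d[k] for k, q in P_star.items())
```

Here `S` plays the role of the normalizer $S=\mathbb{E}[\exp(\tau^*\imath(X;Y))]$ in \eqref{eqn:tilted}.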
Similarly we have \begin{align} &\mathbb{E}\left[\exp(2\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right]\nonumber\\ &~= S^n\mathbb{E}\left[\exp((2-\tau^*)\imath(\mathbf{X}_*;\mathbf{Y}_*))\mathbbm{1}\{(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\}\right]\\ &~\leq S^n\exp((2-\tau^*)(nR+A))\mathbb{P}\left[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\right]\label{eq:ch39} \end{align} Substituting \eqref{eq:ch37} and \eqref{eq:ch39} into Lemma \ref{le:4} implies that, for some $C_1>0$, \begin{align} \mathbb{E}[T\log T]&\geq C_1 S^n\exp(-n\tau^*R)\mathbb{P}[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}]\\ &=\Omega\left(\frac{\exp(-n\tau^*R)}{\sqrt{n}}\mathbb{E}^n[\exp(\tau^*\imath(X;Y))]\right)\label{eq:410} \end{align} Putting this in \eqref{eq:190} concludes the converse proof of Theorem \ref{thm:KL}. {\bf Case II: $\tau^*=1$}. Lemma \ref{le:apx-kl} in Appendix \ref{apx:optimum} implies \begin{align} R&\ge\dfrac{\mathbb{E}[\imath_{X;Y}(X;Y)\exp(\imath_{X;Y}(X;Y))]}{\mathbb{E}[\exp(\imath_{X;Y}(X;Y))]}=\mathbb{E}[\imath_{X;Y}(X_*;Y_*)]\label{eqn:tilt-c} \end{align} where $P_{X_*Y_*}$ is defined by \eqref{eqn:tilted} with $\tau^*=1$. Now set \begin{align} &\mathcal{F}:=\{(\mathbf{x},\mathbf{y}):\imath_{\mathbf{X};\mathbf{Y}}(\mathbf{x};\mathbf{y}) \le n\mathbb{E}[\imath_{X;Y}(X_*;Y_*)]\} \end{align} We have \begin{align} \mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right] &= S^n\mathbb{E}\left[\mathbbm{1}\{(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\}\right]\label{eq:ch-1-35}\\ & =S^n\mathbb{P}\left[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\right]\label{eq:ch-1-37}\end{align} where \eqref{eq:ch-1-35} follows by change of measure using the definition of $P_{X_*Y_*}$.
Similarly we have \begin{align} &\mathbb{E}\left[\exp(2\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right]\nonumber\\ &~= S^n\mathbb{E}\left[\exp(\imath(\mathbf{X}_*;\mathbf{Y}_*))\mathbbm{1}\{(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\}\right]\\ &~\leq S^n\exp(n\mathbb{E}[\imath_{X;Y}(X_*;Y_*)])\mathbb{P}\left[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\right]\label{eq:ch-1-39} \end{align} Substituting \eqref{eq:ch-1-37} and \eqref{eq:ch-1-39} into Lemma \ref{le:4} yields \begin{align} \mathbb{E}[T\log T]&\ge\frac{1}{4}\min\left\{ \frac{1}{\mu_n}\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right],3\dfrac{\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right]^2}{\mathbb{E}\left[\exp(2\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right]} \right\}\\ &\ge\frac{\mathbb{P}\left[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}\right]}{4}S^n\min\left\{ \frac{1}{2}\exp(-nR),3\exp(-n\mathbb{E}[\imath_{X;Y}(X_*;Y_*)]) \right\}\\ &\ge\left(\frac{1}{16}+O\left(\frac{1}{\sqrt{n}}\right)\right)S^n \exp(-nR) \end{align} where the last inequality follows from \eqref{eqn:tilt-c} and the Berry-Esseen approximation $\mathbb{P}[(\mathbf{X}_*,\mathbf{Y}_*)\in\mathcal{F}]=\frac{1}{2}+O(n^{-\frac{1}{2}})$. Putting this in \eqref{eq:190} concludes the converse proof of Theorem \ref{thm:KL}. \subsection{Large deviation type analysis for the TV-distance} To evaluate the bound in Lemma \ref{le:TV-4}, we again use the change-of-measure trick, although it is more involved than the one used for the relative entropy.
Define the tilted conditional distribution $P_{\overline{X}|\overline{Y}}$ and distribution $P_{\overline{Y}}$ via the following Radon-Nikodym derivatives, \begin{align} \frac{\mathrm{d}P_{\overline{X}|\overline{Y}}}{\mathrm{d}P_{X|Y}}(x,y)&:=\frac{\exp\left(\frac{\rho^*}{1-\rho^*}\imath(x;y)\right)}{\mathbb{E}\left[\exp\left(\frac{\rho^*}{1-\rho^*}\imath(X;Y)\right)|Y=y\right]}\label{eqn:mhym-b}\\ \frac{\mathrm{d}P_{\overline{Y}}}{\mathrm{d}P_{Y}}(y) &:=\frac{\mathbb{E}^{{1-\rho^*}}\left[\exp(\frac{\rho^*}{1-\rho^*}\imath(X;Y))|Y=y\right]}{\mathbb{E}\left[\mathbb{E}^{1-\rho^*}\left[\exp(\frac{\rho^*}{1-\rho^*}\imath(X;Y))|Y\right]\right]}, \end{align} where $\rho^*$ was defined in \eqref{eq:rho-defn}. Also, for brevity let \begin{equation} {\sf S}:={\mathbb{E}\left[\mathbb{E}^{1-\rho^*}\left[\exp(\frac{\rho^*}{1-\rho^*}\imath(X;Y))|Y\right]\right]}=\exp\left(\rho^* I_{\frac{1}{1-\rho^*}}(X;Y)\right), \end{equation} where $(X,Y)\sim P_{XY}$. We consider the cases $\rho^*<\frac{1}{2}$ and $\rho^*=\frac{1}{2}$ separately. {\bf Case I: $\rho^*<\frac{1}{2}$}. By Corollary \ref{cor:apx-tv} in Appendix \ref{apx:optimum}, $R$ and $\rho^*$ satisfy the following identity, \begin{align} R&=\frac{1}{1-\rho^*}\mathbb{E}[\imath_{X;Y}(\overline{X};\overline{Y})]~-\mathbb{E}\left[\log\mathbb{E}\left[\exp(\frac{\rho^*}{1-\rho^*}\imath(X;Y))|Y=\overline{Y}\right] \right]\\ &=\mathbb{E}[ Z]\label{eqn:mhym-identity} \end{align} where the r.v.
$Z$ is defined through \begin{align} Z:=&\frac{1}{1-\rho^*}\imath_{X;Y}(\overline{X};\overline{Y})-\log\mathbb{E}\left[\exp(\frac{\rho^*}{1-\rho^*}\imath(X;Y))|Y=\overline{Y}\right].\label{eqn:TV-tilted} \end{align} Also, for $k=1,\cdots,n$, let \begin{align} Z_k:=&\frac{1}{1-\rho^*}\imath_{X;Y}(\overline{X}_k;\overline{Y}_k)-\log\mathbb{E}\left[\exp(\frac{\rho^*}{1-\rho^*}\imath(X;Y))|Y=\overline{Y}_k\right],\label{eqn:mhym-e} \end{align} where $(\overline{X}_1,\overline{Y}_1),\cdots,(\overline{X}_n,\overline{Y}_n)$ are i.i.d. and drawn from $P_{\overline{X},\overline{Y}}$. Now, we compute the expressions appearing in Lemma \ref{le:TV-4} in terms of $Z_1,\cdots,Z_n$. First, consider \begin{align} \mathbb{E}[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}=\mathbf{y}] &=\mathbb{E}\left[\exp (\frac{\rho^*}{1-\rho^*}\imath(\mathbf{X};\mathbf{Y}))\Big|\mathbf{Y}=\mathbf{y}\right]\nonumber\\&\qquad\qquad \mathbb{E}\left[\exp(\frac{1-2\rho^*}{1-\rho^*}\imath_{\mathbf{X};\mathbf{Y}}(\overline{\mathbf{X}};\overline{\mathbf{Y}}))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}\Big|\overline{\mathbf{Y}}=\mathbf{y}\right]\label{eqn:tv-c-m-1}\\ &{=\mathbb{E}^{{2}{(1-\rho^*)}}\left[\exp (\frac{\rho^*}{1-\rho^*}\imath(\mathbf{X};\mathbf{Y}))\Big|\mathbf{Y}=\mathbf{y}\right]}\nonumber\\&~~~~~~~~~~~~~~~~{\mathbb{E}[\exp((1-2\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}=\mathbf{y}]},\label{eqn:tv-c-m-2} \end{align} where \eqref{eqn:tv-c-m-1} follows by change of measure and \eqref{eqn:tv-c-m-2} follows from the definition of $Z_k$.
Similarly, we have \begin{align} \mathbb{E}[\exp(3\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}=\mathbf{y}] &=\mathbb{E}[\exp(\frac{\rho^*}{1-\rho^*}\imath(\mathbf{X};\mathbf{Y}))|\mathbf{Y}=\mathbf{y}]\nonumber\\& ~~~~~~~~~~\mathbb{E}[\exp(\frac{3-4\rho^*}{1-\rho^*}\imath_{\mathbf{X};\mathbf{Y}}(\overline{\mathbf{X}};\overline{\mathbf{Y}}))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}=\mathbf{y}]\\ &=\mathbb{E}^{{4}{(1-\rho^*)}}[\exp (\frac{\rho^*}{1-\rho^*}\imath(\mathbf{X};\mathbf{Y}))|\mathbf{Y}=\mathbf{y}]\nonumber\\&~~~~~~~~~~~\mathbb{E}[\exp((3-4\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}=\mathbf{y}].\label{eqn:tv-c-m-4} \end{align} Using \eqref{eqn:tv-c-m-2} and \eqref{eqn:tv-c-m-4}, the lower bound in Lemma \ref{le:TV-4} simplifies as follows, \begin{align} &\mathbb{E}\left[\left(\dfrac{\mathbb{E}\left[\exp(3\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right]}{\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right] ^{3}}+{3}\mu_n\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right]^{-1} \right)^{-\frac{1}{2}}\right] \nonumber\\ &=\mathbb{E}\Bigg[\mathbb{E}^{{1-\rho^*}}[\exp (\frac{\rho^*}{1-\rho^*}\imath(\mathbf{X};\mathbf{Y}))|\mathbf{Y}] \nonumber\\ &\qquad\qquad\left(\dfrac{\mathbb{E}[\exp((3-4\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}=\mathbf{Y}]}{\mathbb{E}[\exp((1-2\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}=\mathbf{Y}] ^{3}}\right. \nonumber\\ &\left.\qquad\qquad\qquad\qquad\qquad\qquad+\frac{3\mu_n}{\mathbb{E}[\exp((1-2\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}=\mathbf{Y}]} \right)^{-\frac{1}{2}}\Biggr] \label{eqn:1260-0}\\ &=\mathsf{S}^n\mathbb{E}\Bigg[\left(\dfrac{\mathbb{E}[\exp((3-4\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]}{\mathbb{E}[\exp((1-2\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}] ^{3}}\right. \nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\left.+\frac{6{\sf M}_n}{\mathbb{E}[\exp((1-2\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]} \right)^{-\frac{1}{2}}\Biggr] \label{eqn:1260-1}\\ &={\sf S}^n\mathsf{M}_n^{-\rho^*}\mathbb{E}\Bigg[\left(\dfrac{\mathbb{E}[\exp((3-4\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]}{\mathbb{E}[\exp((1-2\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}] ^{3}}\right.\nonumber\\&\left.\qquad\qquad\qquad\qquad+6\mathbb{E}[\exp((1-2\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]^{-1} \right)^{-\frac{1}{2}}\Biggr]\label{eqn:1260} \end{align} where \begin{itemize} \item \eqref{eqn:tv-c-m-2} and \eqref{eqn:tv-c-m-4} yield \eqref{eqn:1260-0}, \item the change of measure $P_Y\rightarrow P_{\overline{Y}}$ implies \eqref{eqn:1260-1}; here, $\mathsf{M}_n:=\exp(nR)$, \item the identity $R=\mathbb{E}[Z]$ gives \eqref{eqn:1260}. \end{itemize} Now ${\sf S}^n{\sf M}_n^{-\rho^*}=\exp(-n\rho^*(R-I_{\frac{1}{1-\rho^*}}(X;Y)))$, which is the exponent appearing in Theorem \ref{thm:exact-tv}.
So it remains to bound the expectation inside \eqref{eqn:1260} to get the desired pre-factor. Let \begin{align} \mathsf{P}=\mathbb{E}&\Bigg[\left(\dfrac{\mathbb{E}[\exp((3-4\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]}{\mathbb{E}[\exp((1-2\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}] ^{3}}\right.\nonumber\\&\left.\qquad\qquad\qquad\qquad+6\mathbb{E}[\exp((1-2\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]^{-1} \right)^{-\frac{1}{2}}\Biggr],\label{eqn:EV-SF-P} \end{align} and set \begin{align} \mathcal{F}:=\left\{-a\le\sum_{k=1}^n(Z_k-\mathbb{E}[Z])\le0\right\},\label{eqn:mathcal-F-dfn} \end{align} where $a$ is a fixed value, large enough that $\mathbb{P}[(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}]\geq\frac{c}{\sqrt{n}}$ for some $c>0$. Then the pre-factor ${\sf P}$ can be lower-bounded as follows, \begin{align} \mathsf{P}&\geq\mathbb{E}\left[\left(\frac{K_1}{\mathbb{P}[\mathcal{F}|\overline{\mathbf{Y}}]^2}+\frac{K_2}{\mathbb{P}[\mathcal{F}|\overline{\mathbf{Y}}]}\right)^{-\frac{1}{2}}\right]\\ &\geq \mathbb{E}\left[\left(\frac{K_1+K_2}{\mathbb{P}[\mathcal{F}|\overline{\mathbf{Y}}]^2}\right)^{-\frac{1}{2}}\right]\\ &=\frac{1}{\sqrt{K_1+K_2}}\mathbb{P}[\mathcal{F}]\\ &\geq \frac{C}{\sqrt{n}}\label{eqn:f-singular-f} \end{align} where $K_1=\exp(3(1-2\rho^*)a)$, $K_2=6\exp((1-2\rho^*)a)$ and $C$ is a positive constant. Finally, \eqref{eqn:f-singular-f} completes the converse proof of Theorem \ref{thm:exact-tv} for the \underline{singular channels} with $\rho^*<\frac{1}{2}$. \subsection*{\underline{Non-Singular channels:}} The choice of $\mathcal{F}$ in \eqref{eqn:mathcal-F-dfn} led to the pre-factor scale $\frac{1}{\sqrt{n}}$, which is optimal for singular channels.
However, there is a gap between it and the optimal pre-factor scale $n^{-\frac{1-\rho^*}{2}}$ for non-singular channels. Here, we perturb the definition of $\mathcal{F}$ to get the optimal pre-factor scale. Let \begin{equation} \mathcal{F}:= \left\{ -(a+\frac{1}{2}\log n)\le\sum_{k=1}^{n}(Z_k-\mathbb{E}[Z])\le-\frac{1}{2}\log n\right\} \end{equation} and \begin{equation} \mathcal{G}:= \left\{\mathbf{y}:~ \begin{aligned} \left|\sum_{k=1}^n\mathbb{E}[Z_k|\overline{Y}_k=y_k]-n\mathbb{E}[Z]\right|&\le\sqrt{n}\\ \left|\sum_{k=1}^n\mathrm{Var}[Z_k|\overline{Y}_k=y_k]-n\mathbb{E}[\mathrm{Var}[Z|\overline{Y}]]\right|&\le\frac{1}{2}n\mathbb{E}[\mathrm{Var}[Z|\overline{Y}]]\\ \left|\sum_{k=1}^n\mathrm{M}_3[Z_k|\overline{Y}_k=y_k]-n\mathbb{E}[\mathrm{M}_3[Z|\overline{Y}]]\right|&\le\frac{1}{2}n\mathbb{E}[\mathrm{M}_3[Z|\overline{Y}]] \end{aligned} \right\}\label{eqn:violating} \end{equation} where for a r.v. $X$, $\mathrm{M}_3[X]\triangleq \mathbb{E}[|X-\mathbb{E}[X]|^3]$. The key property of a non-singular channel is that $Z$ is not a function of $Y$, a.s. $P_Y$. Since $P_Y\ll\gg P_{\overline{Y}}$, it is also not a function of $Y$, a.s. $P_{\overline{Y}}$. As a result, $\mathbb{E}[\mathrm{Var}[Z|\overline{Y}]]$ and $\mathbb{E}[\mathrm{M}_3[Z|\overline{Y}]]$ are strictly positive. \begin{lem}\label{le:lemma9} For large enough $n$, \begin{enumerate} \item There exists a constant $C_0$ such that for any $\mathbf{y}\in\mathcal{G}$, \begin{equation} \mathbb{P}\left[\mathcal{F}|\overline{\mathbf{Y}}=\mathbf{y}\right]\geq \frac{C_0}{\sqrt{n}} \end{equation} \item There exists a constant $C_1$ such that \begin{equation}\mathbb{P}[\overline{\mathbf{Y}}\in\mathcal{G}]\geq C_1.\end{equation} \end{enumerate} \end{lem} The proof of this lemma is relegated to Appendix \ref{apx:lemma8}.
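The first item of Lemma \ref{le:lemma9} is a local-CLT-type estimate: a window of fixed width sitting $\frac{1}{2}\log n$ below the mean of the sum still carries probability of order $\frac{1}{\sqrt{n}}$. This can be illustrated exactly with a toy $\pm1$ lattice stand-in for the centered summands $Z_k-\mathbb{E}[Z]$ (all parameters below are illustrative assumptions, not from the paper).

```python
import math

n, a = 100, 6.0  # illustrative sample size and window width
half_log_n = 0.5 * math.log(n)

# S = sum of n i.i.d. centered +/-1 steps; S = 2K - n with K ~ Binomial(n, 1/2)
def p_binom(k):
    return math.comb(n, k) * 0.5**n

# exact P[-a - (1/2)log n <= S <= -(1/2)log n]
lo, hi = -a - half_log_n, -half_log_n
prob = sum(p_binom(k) for k in range(n + 1) if lo <= 2 * k - n <= hi)
```

The window probability stays above $\frac{1}{\sqrt{n}}$ here, consistent with the $\frac{C_0}{\sqrt{n}}$ lower bound; of course this sketch does not touch the conditional structure handled by the event $\mathcal{G}$.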
Using Lemma \ref{le:lemma9}, the pre-factor $\mathsf{P}$ in \eqref{eqn:EV-SF-P} is lower-bounded as follows, \begin{align} \mathsf{P}&\geq\mathbb{E}\Bigg[\left(\dfrac{\mathbb{E}[\exp((3-4\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]}{\mathbb{E}[\exp((1-2\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}] ^{3}}\right.\nonumber\\ &\left.\qquad\qquad\qquad+6\mathbb{E}[\exp((1-2\rho^*) \sum_{k=1}^n (Z_k-\mathbb{E}[Z]))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]^{-1} \right)^{-\frac{1}{2}}\mathbbm{1}\left\{\overline{\mathbf{Y}}\in\mathcal{G}\right\}\Biggr]\label{eqn:EV-SF-PP}\\ &\geq \mathbb{E}\left[\left(\frac{K_1n^{-\rho^*}}{\mathbb{P}[\mathcal{F}|\overline{\mathbf{Y}}]^2}+\frac{K_2n^{\frac{1}{2}-\rho^*}}{\mathbb{P}[\mathcal{F}|\overline{\mathbf{Y}}]}\right)^{-\frac{1}{2}}\mathbbm{1}\left\{\overline{\mathbf{Y}}\in\mathcal{G}\right\}\right]\label{eqn:1340}\\ &\geq \frac{n^{-\frac{1-\rho^*}{2}}}{\sqrt{K_1C_0^{-2}+K_2C_0^{-1}}} \mathbb{P}\left[\overline{\mathbf{Y}}\in\mathcal{G}\right]\label{eqn:1350}\\ &\geq \frac{C_1}{\sqrt{K_1C_0^{-2}+K_2C_0^{-1}}}n^{-\frac{1-\rho^*}{2}}\label{eqn:1360} \end{align} where $K_1$ and $K_2$ were defined before, \eqref{eqn:1340} follows from the definition of $\mathcal{F}$, \eqref{eqn:1350} follows from the first item of Lemma \ref{le:lemma9}, and \eqref{eqn:1360} follows from the second item of Lemma \ref{le:lemma9}. Putting \eqref{eqn:1360} in \eqref{eqn:1260} concludes the converse proof of Theorem \ref{thm:exact-tv} for the non-singular channels. {\bf Case II: $\rho^*=\frac{1}{2}$}. It is shown in Appendix \ref{apx:optimum} that \begin{align} R&\ge\mathbb{E}[Z]\label{eqn:TV-tilt-c} \end{align} where $Z$ is defined by \eqref{eqn:TV-tilted} with $\rho^*=\frac{1}{2}$.
Now set \begin{align} &\mathcal{F}:=\left\{\sum_{k=1}^nZ_k \le n\mathbb{E}[Z]\right\} \end{align} Then the bound in Lemma \ref{le:TV-4} is further lower-bounded as follows, \begin{align} &\mathbb{E}\left[\left(\dfrac{\mathbb{E}\left[\exp(3\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right]}{\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right] ^{3}}+{3}\mu_n\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right]^{-1} \right)^{-\frac{1}{2}}\right]\nonumber\\ &=\mathsf{S}^n\mathbb{E}\Bigg[\left(\dfrac{\mathbb{E}[\exp(\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]}{\mathbb{E}[\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}] ^{3}}+\frac{6{\sf M}_n}{\mathbb{E}[\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}]} \right)^{-\frac{1}{2}}\Biggr]\label{eqn:similar-1}\\ &\ge{\sf S}^n\mathbb{E}\Bigg[\left(\dfrac{\exp(n\mathbb{E}[Z] )}{\mathbb{P}[(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}|\overline{\mathbf{Y}}] ^{2}}+\dfrac{6\exp(nR )}{\mathbb{P}[(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}|\overline{\mathbf{Y}}] } \right)^{-\frac{1}{2}}\Biggr]\label{eqn:similar-2}\\ &\ge\frac{1}{\sqrt{7}}{\sf S}^n\exp(-n\frac{R}{2})\mathbb{P}[(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}]\label{eqn:similar-3}\\ &\ge C\exp\left(\frac{n}{2}(I_{2}(X;Y)-R)\right)\label{eqn:similar-4} \end{align} where \begin{itemize} \item \eqref{eqn:similar-1} follows from \eqref{eqn:1260-1} (which is valid for any $\rho^*$) with $\rho^*=\frac{1}{2}$, \item the definition of $\mathcal{F}$ gives \eqref{eqn:similar-2}, \item $R\geq\mathbb{E}[Z]$ yields \eqref{eqn:similar-3}, \item the Berry-Esseen approximation
$\mathbb{P}[(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}]=\frac{1}{2}+O(\frac{1}{\sqrt{n}})$ results in \eqref{eqn:similar-4}. \end{itemize} Finally, this concludes the converse proof for $\rho^*=\frac{1}{2}$. \section{Exact Analysis for the Achievability}\label{sec:achievability} \subsection{Relative Entropy} The starting point in the achievability proof is the following well-known upper bound on the relative entropy (a one-shot bound; see \cite[Appendix IV]{hayashi11}, among many others), \begin{equation} \mathbb{E}\left[ D\left( \mathsf{P}_{\mathbf{Y}}||P^{\otimes n}_{Y}\right) \right] \leq \mathbb{E}[\log( 1+\mathsf{M}_n^{-1}\exp(\imath(\mathbf{X};\mathbf{Y})))]\label{eq:hayashi} \end{equation} where $\mathsf{M}_n:=\exp(nR)$. We now proceed to get an almost exact computable expression for the r.h.s. of \eqref{eq:hayashi}. To do this, we prove the following general result, \begin{thm}\it\label{thm:kl-up} Let $(V_1,\cdots,V_n)$ be i.i.d. r.v.'s with $V_i\sim P_V$ and $\mathbb{E}[V]<0$. Further, assume that $V$ has a finite moment generating function in a neighborhood of the origin. Let \begin{equation} \tau^*=\arg\min_{0\le\tau\le1} \log\mathbb{E}[\exp(\tau V)] \end{equation} Then, if $\tau^*<1$, we have for some $C>0$ depending only on $P_V$ (and not on $n$), \begin{align} \mathbb{E}\left[\log\left(1+\exp\left(\sum_{k=1}^nV_k\right)\right)\right] \le\dfrac{C}{\sqrt{n}} \mathbb{E}^n[\exp(\tau^*V)] \end{align} \end{thm} \begin{rem}\it The previous technique \cite{hayashi11} for bounding the l.h.s. of \eqref{eq:hayashi} gives an upper bound with the same exponent but without the pre-factor $\frac{1}{\sqrt{n}}$.
\end{rem} Setting $V_i\leftarrow (\imath_{X;Y}(X_i;Y_i)-R)$ in Theorem \ref{thm:kl-up} implies (notice that $\mathbb{E}[V]=I(X;Y)-R<0$), \begin{align*} \mathbb{E}\left[ D\left( \mathsf{P}_{\mathbf{Y}}||P^{\otimes n}_{Y}\right) \right]=O\left(\frac{\exp(-n\tau^*R)}{\sqrt{n}}\mathbb{E}^n[\exp(\tau^*\imath(X;Y))]\right) \end{align*} where $\tau^*$ was defined in \eqref{eq:tau-defn}. This completes the proof of the achievability for the case $\tau^*<1$. The case $\tau^*=1$ follows from the known result in \cite{hayashi11}. \begin{IEEEproof}[Proof of Theorem \ref{thm:kl-up}] Define the tilted distribution $P_{\overline{V}}$ via the following Radon-Nikodym derivative, \begin{equation} \dfrac{\mathrm{d}P_{\overline{V}}}{\mathrm{d}P_{V}}(v):=\dfrac{\exp(\tau^*v)}{\mathbb{E}[\exp(\tau^*V)]} \triangleq \dfrac{\exp(\tau^*v)}{\mathsf{T}} \end{equation} If $\tau^*<1$, then the following equation holds, \begin{equation} \mathbb{E}\left[\overline{V}\right]=\dfrac{\mathbb{E}[V\exp(\tau^*V)]}{\mathbb{E}[\exp(\tau^*V)]} =\frac{d}{d\tau}\log\mathbb{E}[\exp(\tau V)]\Big|_{\tau^*}=0.\footnote{ It is worth mentioning that if $\mathbb{E}[V]>0$, then any tilted distribution with a positive tilt parameter $\tau$ has a positive mean. This follows from the convexity of the function $f(\tau)= \log\mathbb{E}[\exp(\tau V)]$ and the fact that $f'(\tau)=\mathbb{E}[V_{\tau}]$, where $V_{\tau}$ is a r.v. with the tilted distribution with the parameter $\tau$. } \end{equation} Now we can write, \begin{align} &\mathbb{E}\left[\log\left(1+\exp\left(\sum_{k=1}^nV_k\right)\right)\right]\nonumber\\ &=\mathsf{T}^n\mathbb{E}\left[\exp(-\tau^*\sum_{k=1}^n\overline{V_k})\log\left(1+\exp\left(\sum_{k=1}^n\overline{V_k}\right)\right)\right] \end{align} where $(\overline{V}_1,\cdots,\overline{V}_n)$ are i.i.d. and distributed according to $P_{\overline{V}}$. The equality follows by change of measure.
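This change-of-measure identity can be verified exactly by enumeration for a small toy example. The sketch below (the values, probabilities, and tilt parameter are illustrative assumptions; note the identity holds for any tilt in $(0,1]$, not only the optimizing $\tau^*$) checks it for a two-valued $V$ and $n=6$.

```python
import math
from itertools import product

# Two-valued V with E[V] < 0 (values and probabilities are illustrative assumptions).
vals = [-1.0, 0.5]
probs = [0.7, 0.3]
tau = 0.5  # any tilt parameter works for the identity itself
T = sum(p * math.exp(tau * v) for v, p in zip(vals, probs))        # E[exp(tau V)]
tprobs = [p * math.exp(tau * v) / T for v, p in zip(vals, probs)]  # tilted pmf of V-bar

n = 6  # small enough to enumerate all 2^n outcomes exactly

def expect(pmf, f):
    # exact E[f(V_1 + ... + V_n)] for i.i.d. draws from pmf
    total = 0.0
    for seq in product(range(2), repeat=n):
        pr = math.prod(pmf[i] for i in seq)
        total += pr * f(sum(vals[i] for i in seq))
    return total

lhs = expect(probs, lambda s: math.log(1.0 + math.exp(s)))
rhs = T**n * expect(tprobs, lambda s: math.exp(-tau * s) * math.log(1.0 + math.exp(s)))
```

The two sides agree to machine precision, since the identity is an exact reweighting of each outcome by $\mathsf{T}^n e^{-\tau\sum_k v_k}$.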
\vskip 2mm Let $S_n=\frac{1}{\sqrt{n}}\sum_{k=1}^n \overline{V_k}$ and $g(x):=\exp\left({-\tau^*}x\right)\log\left(1+\exp\left(x\right)\right)$. Then we have, \begin{align} &\mathbb{E}\left[\exp(-\tau^*\sum_{k=1}^n\overline{V_k})\log\left(1+\exp\left(\sum_{k=1}^n\overline{V_k}\right)\right)\right]\nonumber\\ &=\mathbb{E}\left[\exp\left({-\tau^*\sqrt{n}}S_n\right)\log\left(1+\exp\left(\sqrt{n}S_n\right)\right)\right]\nonumber\\ &=\int_{-\infty}^{\infty} g(\sqrt{n}x) dF_{S_n}(x)\\ &=g(\infty)F_{S_n}(\infty)-g(-\infty)F_{S_n}(-\infty)\nonumber\\ &\qquad\qquad-\sqrt{n}\int_{-\infty}^{\infty}F_{S_n}(x)g'(\sqrt{n} x)dx\label{eq:int-1}\\ &=-\sqrt{n}\int_{-\infty}^{\infty}F_{S_n}(x)g'(\sqrt{n} x)dx\label{eq:int-2}\\ &=-\dfrac{1}{\sqrt{n}}\int_{-\infty}^{\infty}\sqrt{n}F_{S_n}\left(\dfrac{x}{\sqrt{n}}\right)g'(x)dx\label{eq:int-3} \end{align} where \eqref{eq:int-1} follows by integration by parts and \eqref{eq:int-2} is due to the fact that $g$ vanishes at $\pm\infty$ (this is true, since $0<\tau^*<1$). Let $\sigma^2:=\mathbb{E}\left[\overline{V}^2\right]$ and $\rho=\mathbb{E}\left[|\overline{V}|^3\right]$. By the Berry-Esseen theorem \cite[Theorem 9.8]{mitz}, \begin{equation}\sup_{x} \left| F_{Y_{\sigma}}(x) - F_{S_n}(x) \right| \leq \dfrac{3\rho}{\sigma^{3}\sqrt{n}}\triangleq \dfrac{C_1}{\sqrt{n}} \end{equation} where $F_{Y_{\sigma}}(x)$ is the c.d.f.
of a mean-zero Gaussian random variable $Y_{\sigma}$ with variance $\sigma^{2}$. Hence \begin{align} &\left|\int_{-\infty}^{\infty}\sqrt{n}F_{S_n}\left(\dfrac{x}{\sqrt{n}}\right)g'(x)dx\right|\\&\le \left|\int_{-\infty}^{\infty}\sqrt{n}F_{Y_\sigma}\left(\dfrac{x}{\sqrt{n}}\right)g'(x)dx\right|+{C_1}\int_{-\infty}^{\infty}|g'(x)|dx\\ &= \left|\int_{-\infty}^{\infty}\sqrt{n}\left(F_{Y_\sigma}\left(\dfrac{x}{\sqrt{n}}\right)-F_{Y_\sigma}(0)\right)g'(x)dx\right|\nonumber\\ &\qquad\qquad\qquad\qquad\qquad\qquad\quad+{C_1}\int_{-\infty}^{\infty}|g'(x)|dx\label{eq:int-4}\\ &\le K\int_{-\infty}^{\infty}|xg'(x)|dx+{C_1}\int_{-\infty}^{\infty}|g'(x)|dx\label{eq:int-5} \end{align} where \eqref{eq:int-4} holds since $\int_{-\infty}^\infty g'(x)dx=0$ (because $|g'|$ is integrable and, again, $g$ vanishes at $\pm\infty$). Here, $K=\frac{1}{\sqrt{2\pi \sigma^2}}$ is an upper bound on $F'_{Y_\sigma}=f_{Y_\sigma}$. It is easy to verify that $g'(x)$ decays exponentially fast at $\pm\infty$. Thus both integrals in \eqref{eq:int-5} are convergent. In summary, we conclude that there exists a constant $C$ depending only on the distribution $P_V$, such that \begin{align} \mathbb{E}\left[\exp\left({-\tau^*\sqrt{n}}S_n\right)\log\left(1+\exp\left(\sqrt{n}S_n\right)\right)\right]\le \dfrac{C}{\sqrt{n}}. \end{align} \end{IEEEproof} \subsection{TV-distance} Theorem \ref{thm:TV-one-shot} implies achievability of the bound \eqref{eq:TV-Exact} without the pre-factor for any $\rho^*$. As a result, it yields the achievability of the bound \eqref{eq:TV-Exact} for the case $\rho^*=\frac{1}{2}$. So it only remains to investigate the case $\rho^*<\frac{1}{2}$.
As in the proof of Theorem \ref{thm:TV-one-shot}, we start with the following $n$-shot version of the upper bound \eqref{eq:os-tv}, \begin{align} \mathbb{E}[\|\mathsf{P}_{\mathbf{Y}}-P_Y^{\otimes n}\|]\leq \mathbb{P}[\mathcal{F}^c]+\frac{1}{2}\mathbb{E}\left[\sqrt{\mathsf{M}_n^{-1}\mathbb{E}[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}]}\right]\label{eq:nshot-tv} \end{align} where ${\sf M}_n=\exp(nR)$. To compute this bound for an appropriate choice of $\mathcal{F}$ (to be determined later), we need the following lemma, \begin{lem}[ {\cite[Lemma 47]{PPV2010}} ]\it\label{thm:BR-type} Let $(V_1,\cdots,V_n)$ be i.i.d. zero-mean r.v.'s with $V_i\sim P_V$. Further, assume that $V$ has a finite third moment $T_3$ and non-zero variance $\sigma^2$. Then, for any $A$, \begin{align} \mathbb{E}\left[\exp\left(- \sum_{k=1}^nV_k\right)\mathbbm{1}\left\{\sum_{k=1}^nV_k\ge A\right\}\right] \le\dfrac{C}{\sqrt{n}} \exp(-A) \end{align} where $C=\frac{2}{\sigma}(\frac{\log 2}{\sqrt{2\pi}}+\frac{12T_3}{\sigma^2})$. \end{lem} We investigate the non-singular and singular channels separately. \subsubsection{Non-singular channels} Recall the definitions of $P_{\overline{X}|\overline{Y}}$, $P_{\overline{Y}}$, $\mathsf{S}$, $Z$, $Z_k$ and $\rho^*$ in equations \eqref{eqn:mhym-b}--\eqref{eqn:mhym-e}. Also, let \begin{equation} \mathcal{F}:=\left\{\sum_{k=1}^n Z_k\le n\mathbb{E}[Z]-\frac{1}{2}\log n\right\} \end{equation} We compute each term of \eqref{eq:nshot-tv} separately.
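The bound of Lemma \ref{thm:BR-type} can be sanity-checked by exact enumeration. The sketch below (a toy symmetric $\pm1$ r.v., an illustrative assumption) evaluates the left-hand side exactly through the binomial law of the sum and compares it with $\frac{C}{\sqrt{n}}\exp(-A)$ for several values of $A$.

```python
import math

# Toy V: +1 or -1 with probability 1/2 each (illustrative assumption).
n = 16
sigma2, T3 = 1.0, 1.0  # Var[V] and E[|V|^3] for this V
sigma = math.sqrt(sigma2)
C = (2 / sigma) * (math.log(2) / math.sqrt(2 * math.pi) + 12 * T3 / sigma2)

def lhs(A):
    # exact E[exp(-sum V) 1{sum V >= A}]: sum V = 2K - n with K ~ Binomial(n, 1/2)
    return sum(math.comb(n, k) * 0.5**n * math.exp(-(2 * k - n))
               for k in range(n + 1) if 2 * k - n >= A)

checks = all(lhs(A) <= C / math.sqrt(n) * math.exp(-A) for A in (0.0, 1.0, 2.0, 3.0))
```

In this small example the exact left-hand side is well below the bound, which is consistent with the $\frac{C}{\sqrt{n}}e^{-A}$ scaling the lemma guarantees in general.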
First consider \begin{align} \mathbb{P}[(\mathbf{X},\mathbf{Y})\in\mathcal{F}^c]&=\mathsf{S}^n\mathbb{E}\bigg[\mathbb{E}^{{\rho^*}}[\exp(\frac{\rho^*}{1-\rho^*}\imath(\mathbf{X};\mathbf{Y}))|\mathbf{Y}=\overline{\mathbf{Y}}]\nonumber\\ &~~~~~~~~~~ \mathbb{E}[\exp(-\frac{\rho^*}{1-\rho^*}\imath_{\mathbf{X};\mathbf{Y}}(\overline{\mathbf{X}};\overline{\mathbf{Y}}))\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\notin\mathcal{F}\}|\overline{\mathbf{Y}}]\bigg] \label{eqn:f-t-1}\\ &={\sf S}^n\mathbb{E}\left[\exp(-{\rho^*}\sum_{k=1}^n Z_k)\mathbbm{1}\left\{\sum_{k=1}^n Z_k\ge n\mathbb{E}[Z]-\frac{1}{2}\log n\right\}\right]\label{eqn:f-t-2}\\ &={\sf S}^n\mathsf{M}_n^{-\rho^*}{\mathbb{E}\left[\exp\left(-{\rho^*}\sum_{k=1}^n (Z_k- \mathbb{E}[Z])\right)\mathbbm{1}\left\{\sum_{k=1}^n (Z_k-\mathbb{E}[Z])\ge -\frac{1}{2}\log n\right\}\right]}\label{eqn:f-t-21}\\ &\leq C_1{\sf S}^n\mathsf{M}_n^{-{\rho^*}}n^{-\frac{1-\rho^*}{2}}\label{eqn:f-t-3} \end{align} where \begin{itemize} \item the change of measure $P_{XY}\rightarrow P_{\overline{X}\overline{Y}}$ implies \eqref{eqn:f-t-1}, \item the definitions of $Z_k$ and $\mathcal{F}$ give \eqref{eqn:f-t-2}, \item the identity $R=\mathbb{E}[Z]$ gives \eqref{eqn:f-t-21}, \item Lemma \ref{thm:BR-type} with $V_k\leftarrow \rho^*(Z_k-\mathbb{E}[Z])$ and $A\leftarrow-\frac{\rho^*}{2}\log n$ yields \eqref{eqn:f-t-3}.
\end{itemize} Next consider the second term of \eqref{eq:nshot-tv}, \begin{align} &\mathbb{E}\left[\sqrt{\mathsf{M}_n^{-1}\mathbb{E}[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}]}\right]\label{eqn:s-t-00}\\ &=\mathbb{E}\bigg[\mathsf{M}_n^{-\frac{1}{2}}\mathbb{E}^{{(1-\rho^*)}}\left[\exp (\frac{\rho^*}{1-\rho^*}\imath(\mathbf{X};\mathbf{Y}))\Big|\mathbf{Y}\right]\mathbb{E}^{\frac{1}{2}}[\exp((1-2\rho^*)\sum_{k=1}^nZ_k)\mathbbm{1}\{(\overline{\mathbf{X}},\overline{\mathbf{Y}})\in\mathcal{F}\}|\overline{\mathbf{Y}}=\mathbf{Y}] \bigg]\label{eqn:s-t-0}\\ &= {\sf S}^n\mathsf{M}_n^{-\frac{1}{2}}\mathbb{E}\left[\sqrt{\mathbb{E}\left[\exp((1-2\rho^*)\sum_{k=1}^n Z_k)\mathbbm{1}\left\{\sum_{k=1}^n Z_k\le n\mathbb{E}[Z]-\frac{1}{2}\log n\right\}\Bigg|\overline{\mathbf{Y}}\right]}~\right]\label{eqn:s-t-1}\\ &\le {\sf S}^n\mathsf{M}_n^{-\frac{1}{2}}\sqrt{\mathbb{E}\left[\exp((1-2\rho^*)\sum_{k=1}^n Z_k)\mathbbm{1}\left\{\sum_{k=1}^n Z_k\le n\mathbb{E}[Z]-\frac{1}{2}\log n\right\}\right]}\label{eqn:s-t-2}\\ &= {\sf S}^n\mathsf{M}_n^{-{\rho^*}} \sqrt{\mathbb{E}\left[\exp\left((1-2\rho^*)\sum_{k=1}^n (Z_k- \mathbb{E}[Z])\right)\mathbbm{1}\left\{\sum_{k=1}^n (Z_k-\mathbb{E}[Z])\le -\frac{1}{2}\log n\right\}\right]}\label{eqn:s-t-3}\\ &\leq C_2{\sf S}^n\mathsf{M}_n^{-\rho^*}n^{-\frac{1-\rho^*}{2}}\label{eqn:s-t-4} \end{align} where \begin{itemize} \item putting \eqref{eqn:tv-c-m-2} in \eqref{eqn:s-t-00} yields \eqref{eqn:s-t-0}, \item change of measure $P_{Y}\rightarrow P_{\overline{Y}}$ and the definitions of $Z_k$ and $\mathcal{F}$, imply \eqref{eqn:s-t-1}, \item Jensen inequality for the concave mapping $x\mapsto \sqrt{x}$ gives \eqref{eqn:s-t-2}, \item the identity $R=\mathbb{E}[Z]$ gives \eqref{eqn:s-t-3}, \item Lemma \ref{thm:BR-type} with $V_k\leftarrow -(1-2\rho^*)(Z_k-\mathbb{E}[Z])$ and $A\leftarrow\frac{1-2\rho^*}{2}\log n$ yields \eqref{eqn:s-t-4}. 
\end{itemize} Finally putting \eqref{eqn:f-t-3} and \eqref{eqn:s-t-4} together gives the desired result for the non-singular channels. \subsubsection{Singular channels} The definition of a singular channel implies that $\imath_{X;Y}(X;Y)$ is a function of $Y$, $P_Y$-almost surely. For brevity, let $\imath_{X;Y}(x;y):=g(y)$. Then, the definitions of $P_{\overline{Y}}$ and $\mathsf{S}$ reduce to \begin{align} \frac{\mathrm{d}P_{\overline{Y}}}{\mathrm{d}P_{Y}}(y) &:=\frac{\exp\left({\rho^*}g(y)\right)}{\mathbb{E}\left[\exp\left({\rho^*}g(Y)\right)\right]} \end{align} \[ {\sf S}:=\mathbb{E}\left[\exp\left({\rho^*}g(Y)\right)\right] \] Also, the identity \eqref{eqn:mhym-identity} reduces to \begin{align} R&=\frac{1}{1-\rho^*}\mathbb{E}[\imath_{X;Y}(\overline{X};\overline{Y})]-\mathbb{E}\left[\log\mathbb{E}[\exp(\frac{\rho^*}{1-\rho^*}\imath(X;Y))|Y=\overline{Y}] \right]\\ &=\mathbb{E}[g(\overline{Y})] \end{align} Set \[\mathcal{F}:=\left\{\mathbf{y}:\imath(\mathbf{x};\mathbf{y})\le n\mathbb{E}[g(\overline{Y})]\right\}=\left\{\mathbf{y}:\sum_{k=1}^ng(y_k)\le n\mathbb{E}[g(\overline{Y})]\right\}, \] where $(\overline{Y}_1,\cdots,\overline{Y}_n)\sim P^{\otimes n}_{\overline{Y}}$. Now, we compute the terms inside \eqref{eq:nshot-tv}.
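The computations below all rest on the exponential-tilting relation $\frac{\mathrm{d}P_{Y}}{\mathrm{d}P_{\overline{Y}}}(y)={\sf S}\exp(-\rho^* g(y))$. As a side check (outside the manuscript), it can be verified exactly on a toy finite alphabet; the pmf, the function $g$ and the value of $\rho$ below are arbitrary illustrative choices, not quantities from this paper.

```python
import math

# Exact n = 1 check of the tilting identity:
# P[g(Y) >= c] = S * E_{Ybar}[exp(-rho*g(Ybar)) 1{g(Ybar) >= c}],
# where dP_Ybar/dP_Y(y) = exp(rho*g(y))/S and S = E[exp(rho*g(Y))].
P = {0: 0.5, 1: 0.3, 2: 0.2}      # toy P_Y on a 3-letter alphabet
g = {0: -1.0, 1: 0.5, 2: 2.0}     # toy surrogate for the information density
rho, c = 0.4, 0.5

S = sum(p * math.exp(rho * g[y]) for y, p in P.items())
P_bar = {y: p * math.exp(rho * g[y]) / S for y, p in P.items()}  # tilted pmf

lhs = sum(p for y, p in P.items() if g[y] >= c)
rhs = S * sum(pb * math.exp(-rho * g[y]) for y, pb in P_bar.items() if g[y] >= c)
assert abs(lhs - rhs) < 1e-12
```

The identity is exact (the tilting factors cancel term by term), which is why no Monte Carlo is needed here.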
First consider, \begin{align} \mathbb{P}[(\mathbf{X},\mathbf{Y})\in\mathcal{F}^c]&=\mathbb{P}\left[\sum_{k=1}^{n}g(Y_k)\ge n\mathbb{E}[g(\overline{Y})]\right]\\ &={\sf S}^n\mathbb{E}\left[\exp\left(-\rho^*\sum_{k=1}^{n}g(\overline{Y}_k)\right)\mathbbm{1}\left\{\sum_{k=1}^{n}g(\overline{Y}_k)\ge n\mathbb{E}[g(\overline{Y})]\right\}\right]\label{eqn:f-t-1-0}\\ &={\sf S}^n\mathsf{M}_n^{-\rho^*}\mathbb{E}\left[\exp\left(-\rho^*\left(\sum_{k=1}^{n}g(\overline{Y}_k)-n\mathbb{E}[g(\overline{Y})]\right)\right)\mathbbm{1}\left\{\sum_{k=1}^{n}g(\overline{Y}_k)\ge n\mathbb{E}[g(\overline{Y})]\right\}\right]\label{eqn:f-t-2-0}\\ &\leq C_1\frac{{\sf S}^n\mathsf{M}_n^{-\rho^*}}{\sqrt{n}}\label{eqn:f-t-3-0} \end{align} where \begin{itemize} \item the change of measure $P_{Y}\rightarrow P_{\overline{Y}}$ implies \eqref{eqn:f-t-1-0}, \item the identity $R=\mathbb{E}[g(\overline{Y})]$ gives \eqref{eqn:f-t-2-0}, \item Lemma \ref{thm:BR-type} with $V_k\leftarrow \rho^*(g(\overline{Y}_k)-\mathbb{E}[g(\overline{Y})])$ and $A=0$ yields \eqref{eqn:f-t-3-0}.
\end{itemize} Next, consider \begin{align} &\mathbb{E}\left[\sqrt{\mathsf{M}_n^{-1}\mathbb{E}[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}]}\right]\nonumber\\ &=\mathbb{E}\left[\sqrt{\mathsf{M}_n^{-1}\mathbb{E}[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{\imath(\mathbf{X};\mathbf{Y})\le n\mathbb{E}[g(\overline{Y})]\}|\mathbf{Y}]}\right]\nonumber\\ &=\mathbb{E}\left[\sqrt{\mathsf{M}_n^{-1}\exp\left(\sum_{k=1}^{n}g(Y_k)\right)\mathbbm{1}\left\{\sum_{k=1}^{n}g(Y_k)\le n\mathbb{E}[g(\overline{Y})] \right\}}\right]\nonumber\\ &={\sf S}^n\mathsf{M}_n^{-\frac{1}{2}}\mathbb{E}\left[\exp\left((\frac{1}{2}-\rho^*)\sum_{k=1}^{n}g(\overline{Y}_k)\right)\mathbbm{1}\left\{\sum_{k=1}^{n}g(\overline{Y}_k)\le n\mathbb{E}[g(\overline{Y})]\right\}\right]\label{eqn:s-t-1-0}\\ &={\sf S}^n\mathsf{M}_n^{-\rho^*}\mathbb{E}\left[\exp\left(\left(\frac{1}{2}-\rho^*\right)\left(\sum_{k=1}^{n}g(\overline{Y}_k)-n\mathbb{E}[g(\overline{Y})]\right)\right)\mathbbm{1}\left\{\sum_{k=1}^{n}g(\overline{Y}_k)\le n\mathbb{E}[g(\overline{Y})] \right\}\right]\label{eqn:s-t-2-0}\\ &\leq C_2\frac{{\sf S}^n\mathsf{M}_n^{-\rho^*}}{\sqrt{n}}\label{eqn:s-t-3-0} \end{align} where \begin{itemize} \item the change of measure $P_{Y}\rightarrow P_{\overline{Y}}$ implies \eqref{eqn:s-t-1-0}, \item the identity $R=\mathbb{E}[g(\overline{Y})]$ gives \eqref{eqn:s-t-2-0}, \item Lemma \ref{thm:BR-type} with $V_k\leftarrow -(\frac{1}{2}-\rho^*)(g(\overline{Y}_k)-\mathbb{E}[g(\overline{Y})])$ and $A=0$ yields \eqref{eqn:s-t-3-0}. \end{itemize} Finally putting \eqref{eqn:f-t-3-0} and \eqref{eqn:s-t-3-0} together gives the desired result for the singular channels.
\appendices \section{Proof of Lemma \ref{le:conv}}\label{apx:log} Using the identity $u\log u+1-u=(u-1)^2\int_0^1 \frac{1-t}{1+t(u-1)}\mathsf{d}t$ together with $\mathbb{E}[U]=1$, we have \begin{IEEEeqnarray}{rCl} \mathbb{E}\left[ U\log U\right] &=&\mathbb{E}\left[ U\log U+1-U\right] \\ &=&\mathbb{E}\left[ \left( U-1\right) ^{2}\int ^{1}_{0}\dfrac {1-t}{1+t\left( U-1\right) }dt\right]\\ & \ge &\dfrac{\left( \mathbb{E}\left[\displaystyle{\int} ^{1}_{0}\left( U-1\right) ^{2}\left( 1-t\right) dt\right] \right) ^{2}}{\mathbb{E}\left[ \displaystyle{\int} ^{1}_{0}\left( 1+t\left( U-1\right) \right) \left( U-1\right) ^{2}\left( 1-t\right) dt\right] }~~~~~\label{eq:cauchy}\\ & =&\dfrac {\dfrac {1}{4}\mathbb{E}\left[ \left( U-1\right) ^{2}\right] ^{2}}{\dfrac {1}{2}\mathbb{E}\left[ \left( U-1\right) ^{2}\right] +\dfrac {1}{6}\mathbb{E}\left[ \left( U-1\right) ^{3}\right] }\label{eq:kl-lower} \end{IEEEeqnarray} where \eqref{eq:cauchy} follows from the Cauchy-Schwarz inequality. \section{Monotonicity of $\mathsf{L}_m$ and $\mathsf{V}_m$}\label{apx:mono-f-divergence} We prove a more general result. Let $f:\mathbb{R}^{\ge0}\rightarrow \mathbb{R}$ be a convex function with $f(1)=0$, and let $D_f(P||Q)$ (defined below) denote the $f$-divergence between $P$ and $Q$, \[ D_f(P||Q):=\mathbb{E}\left[f\left(\dfrac{\mathsf{d}P}{\mathsf{d}Q}(Z)\right)\right] \] where $Z\sim Q$. Let $$\mathsf{L}_m^{(f)}:=\mathbb{E}\left[D_f\left(\mathsf{P}_Y^{(m)}||P_Y\right)\right],$$ where $\mathsf{P}_Y^{(m)}(.):=\frac{1}{m}\sum_{k=1}^m P_{Y|X}(.|X(k))$, in which $(X_1,\cdots,X_m)\sim P_X\otimes\cdots\otimes P_X$. \begin{lem}\label{le:mono-f-divergence} $\mathsf{L}_m^{(f)}$ is a decreasing sequence in $m$. \end{lem} \begin{IEEEproof} Observe that \[ \dfrac{\mathsf{d}\mathsf{P}_Y^{(m)}}{\mathsf{d}P_Y}=\frac{1}{m}\sum_{k=1}^m\exp(\imath_{X;Y}(X(k);Y)) \] Let $$Z_i:=\frac{1}{m-1}\sum_{k\neq i}\exp(\imath_{X;Y}(X(k);Y)).
$$ Then, we have \[ \dfrac{\mathsf{d}\mathsf{P}_Y^{(m)}}{\mathsf{d}P_Y}=\frac{1}{m}\sum_{i=1}^mZ_i\] Using this and the Jensen inequality for the convex function $f$, we get \begin{align} \mathsf{L}_{m}^{(f)}&=\mathbb{E}\left[D_f\left(\mathsf{P}_Y^{(m)}||P_Y\right)\right]\\ &=\mathbb{E}_{(X_1,\cdots,X_m,Y)\sim P_X\otimes\cdots\otimes P_X\otimes P_Y}\left[f\left(\dfrac{\mathsf{d}\mathsf{P}_Y^{(m)}}{\mathsf{d}P_Y}(Y)\right)\right]\\ &=\mathbb{E}\left[f\left(\frac{1}{m}\sum_{i=1}^mZ_i\right)\right]\label{eq:j63}\\ &\leq \frac{1}{m}\sum_{i=1}^m\mathbb{E}\left[f\left(Z_i\right)\right]\label{eq:j64}\\ &=\mathbb{E}[f(Z_m)]\label{eq:j65}\\ &=\mathbb{E}\left[f\left(\dfrac{\mathsf{d}\mathsf{P}_Y^{(m-1)}}{\mathsf{d}P_Y}(Y)\right)\right]\\ &=\mathsf{L}_{m-1}^{(f)} \end{align} where \eqref{eq:j64} follows from Jensen inequality and \eqref{eq:j65} follows from symmetry. \end{IEEEproof} \section{Proof of Lemma \ref{le:2}}\label{apx:relative-entropy} Consider, \begin{align} \mathbb{E}\left[ \mathsf{L}_{M}\right] &=\mathbb{E}\left[ \left( \frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left( \frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\right] \label{eq:PA-1}\\ &\geq \mathbb{E}\left[ \left( \frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left( \frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \mathbbm{1}\{M\le 2\mu_n\}\right]\label{eq:PA-2}\\ &\geq \frac{1}{2\mu_n}\mathbb{E}\left[ \left( \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left( \frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \mathbbm{1}\{M\le 2\mu_n\}\right]\label{eq:PA-3}\\ &= \frac{1}{2\mu_n}\mathbb{E}\left[ \left( \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left( 
\frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \right]\nonumber\\ &\qquad\quad- \frac{1}{2\mu_n}\mathbb{E}\left[ \left( \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left( \frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \mathbbm{1}\{M> 2\mu_n\}\right]\label{eq:PA-4}\\ &=\frac{1}{2\mu_n}\mathbb{E}\left[ \left( \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left( \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \right]-\frac{1}{2\mu_n}\mathbb{E}\left[ M\log M \right]\nonumber\\ &\qquad\quad- \frac{1}{2\mu_n}\mathbb{E}\left[ M\mathsf{L}_M\mathbbm{1}\{M> 2\mu_n\}\right]\label{eq:PA-5}\\ &\geq\frac{1}{2\mu_n}\mathbb{E}\left[ \left( \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left( \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \right]-\frac{1}{2}\left( \log \mu_n+ \frac{1}{\mu_n} \right)\nonumber\\ &\qquad\quad- \frac{1}{2\mu_n}\mathbb{E}\left[ M\mathsf{L}_M\mathbbm{1}\{M> 2\mu_n\}\right]\label{eq:PA-6}\\ &=\frac{1}{2}\mathbb{E}\left[ \left( \frac{1}{\mu_n} \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left(\frac{1}{\mu_n} \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \right]-\frac{1}{2\mu_n}\nonumber\\ &\qquad\quad- \frac{1}{2\mu_n}\mathbb{E}\left[ M\mathsf{L}_M\mathbbm{1}\{M> 2\mu_n\}\right]\label{eq:PA-7}\\ &\geq\frac{1}{2}\mathbb{E}\left[ \left( \frac{1}{\mu_n} \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left(\frac{1}{\mu_n} \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \right]-\frac{1}{2\mu_n}\nonumber\\ &\qquad\quad-
\frac{1}{2\mu_n}\mathsf{L}_{\lceil2\mu_n\rceil}\mathbb{E}\left[ M\mathbbm{1}\{M\geq \lceil2\mu_n\rceil\}\right]\label{eq:PA-8}\\ &\geq\frac{1}{2}\mathbb{E}\left[ \left( \frac{1}{\mu_n} \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log\left(\frac{1}{\mu_n} \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right) \right]-\frac{1}{2\mu_n}\nonumber\\ &\qquad\quad- \frac{1}{2}\mathsf{L}_{\lceil2\mu_n\rceil}\mathbb{P}\left[M\geq 2\mu_n-1\right]\label{eq:PA-9} \end{align} where \begin{itemize} \item Identity \eqref{eq:PA-1} follows from the definition of relative entropy and the definition of $\mathsf{L}_M$ in \eqref{eq:LM-dfn}, \item Equality \eqref{eq:PA-5} follows from the following identity, \begin{align} \mathbb{E}\left[ \left( \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right)\log M \right]&=\mathbb{E}\left[\log M\mathbb{E}\left[ \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \Big|M \right]\right] \nonumber\\ &=\mathbb{E}\left[ M \log M\right] \label{eq:PAE} \end{align} since $\mathbb{E}[\exp( \imath( \mathbf{X}_{k};\mathbf{Y}) )]=1$ for any $k$. \item Inequality \eqref{eq:PA-6} follows from the following inequality for the Poisson r.v. $M$, \begin{align} \mathbb{E}[M\log M]=\mathbb{E}[M\log \mu_n]+\mathbb{E}\left[M\log \frac{M}{\mu_n}\right] \leq\mu_n\log\mu_n+\mathbb{E}\left[M\left(\frac{M}{\mu_n}-1\right)\right]=\mu_n\log\mu_n+1 \end{align} where we used the inequality $\log x\leq x-1$, $\mathbb{E}[M^2]=\mu_n^2+\mu_n$ and $\mathbb{E}[M]=\mu_n$. \item Similar to \eqref{eq:PAE}, equality \eqref{eq:PA-7} follows from the following identity $$\mathbb{E}\left[ \frac{1}{\mu_n} \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) \right]=\frac{1}{\mu_n}\mathbb{E}[M]=1$$ \item Inequality \eqref{eq:PA-8} follows because $\mathsf{L}_k$ is a decreasing sequence, \item Simple algebraic calculation for the Poisson r.v.
$M$ implies \eqref{eq:PA-9}. \end{itemize} Finally, applying the following tail probability of the Poisson r.v. concludes the proof, \begin{align} \mathbb{P}[M\ge 2\mu_n-1]\leq \mathbb{P}\left[M\ge \frac{3}{2}\mu_n\right] \le\varepsilon_{\frac{3}{2}}^{\mu_n}.\label{eq:PA-10} \end{align} \section{Proof of Lemma \ref{le:TV2}}\label{apx:TV} The proof modifies the one given in \cite{yagli}. \begin{IEEEproof} \begin{align} \mathbb{E}\left[ \mathsf{V}_{M}\right] &=\frac{1}{2}\mathbb{E}\left[ \left| \frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right)-1 \right|\right] \label{eq:VA-1}\\ &\geq \frac{1}{2}\mathbb{E}\left[ \left| \frac {1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right)-1 \right| \mathbbm{1}\{M\le 2\mu_n\}\right]\label{eq:VA-2}\\ &\geq \frac{1}{4\mu_n}\mathbb{E}\left[ \left| \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right)-M \right| \mathbbm{1}\{M\le 2\mu_n\}\right]\label{eq:VA-3}\\ &= \frac{1}{4\mu_n}\mathbb{E}\left[ \left| \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right)-M \right| \right]- \frac{1}{4\mu_n}\mathbb{E}\left[ \left| \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right)-M \right| \mathbbm{1}\{M> 2\mu_n\}\right]\label{eq:VA-4}\\ &\ge\frac{1}{4\mu_n}\mathbb{E}\left[ \left| \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) -\mu_n\right| \right]-\frac{1}{4\mu_n}\mathbb{E}\left[ \left|M-\mu_n\right| \right]- \frac{1}{2\mu_n}\mathbb{E}\left[ M\mathsf{V}_M\mathbbm{1}\{M> 2\mu_n\}\right]\label{eq:VA-5}\\ &\ge\frac{1}{4\mu_n}\mathbb{E}\left[ \left| \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right) -\mu_n\right| \right]-\frac{1}{4\sqrt{\mu_n}}- \frac{1}{2\mu_n}\mathbb{E}\left[ M\mathbbm{1}\{M> 2\mu_n\}\right]\label{eq:VA-6}\\ &\ge\frac{1}{4}\mathbb{E}\left[ \left| \frac{1}{\mu_n}\sum ^{M}_{k=1}\exp\left( \imath\left(
\mathbf{X}_{k};\mathbf{Y}\right) \right) -1\right| \right]-\frac{1}{4\sqrt{\mu_n}}-\frac{1}{2} \varepsilon_{\frac{3}{2}}^{\mu_n}\label{eq:VA-7} \end{align} where \begin{itemize} \item Identity \eqref{eq:VA-1} follows from the definition of the total variation distance and the definition of $\mathsf{V}_M$ in \eqref{eq:VM-dfn}, \item Inequality \eqref{eq:VA-5} follows from the triangle inequality and the following identity, \begin{align} \mathbb{E}\left[ \left| \sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right)-M \right| \mathbbm{1}\{M> 2\mu_n\}\right] &=\mathbb{E}\left[ \mathbb{E}\left[\left| \frac{1}{M}\sum ^{M}_{k=1}\exp\left( \imath\left( \mathbf{X}_{k};\mathbf{Y}\right) \right)-1 \right|\Bigg| M \right] M\mathbbm{1}\{M> 2\mu_n\}\right]\nonumber\\ &=2\mathbb{E}[M\mathsf{V}_M\mathbbm{1}\{M>2\mu_n\}] \label{eq:VAE} \end{align} \item Inequality \eqref{eq:VA-6} follows from $\mathbb{E}[|X-\mathbb{E}[X]|]\le \sqrt{\mathrm{Var}[X]}$ for any r.v. $X$, $\mathrm{Var}[M]=\mu_n$ and $\mathsf{V}_k\le 1$ for any $k$. \item The inequality leading to \eqref{eq:VA-7} was already established in \eqref{eq:PA-9} and \eqref{eq:PA-10}.
\end{itemize} \end{IEEEproof} \section{Proofs of Lemma \ref{le:conv} and Lemma \ref{le:TV-conv}} \subsection{Proof of Lemma \ref{le:conv}}\label{apx:sub-kl} Write $T_1=\sum_{k=1}^MZ_k$, where $$Z_k:=\frac {1}{\mu _{n}}\exp \left( \imath\left( \mathbf{X}(k);\mathbf{Y}\right) \right) \mathbbm{1}\left\{ \left( \mathbf{X}\left( k\right) ,\mathbf{Y}\right) \in \mathcal{F}\right\}.$$ Utilizing Lemma \ref{le:conv} with $U\leftarrow T_1+1-\mathbb{E}[T_1|\mathbf{Y}]$ in \eqref{eq:22} yields \begin{align} &\mathbb{E}[T\log T]\ge \mathbb{E}\Big[\mathbb{E}\Big[(T_1+1-\mathbb{E}[T_1|\mathbf{Y}])\log(T_1+1-\mathbb{E}[T_1|\mathbf{Y}])|\mathbf{Y}\Big]\Big]\\ &\ge \mathbb{E}\left[\dfrac {\mathbb{E}\left[ \left(T_1-\mathbb{E}[T_1|\mathbf{Y}]\right) ^{2}|\mathbf{Y}\right] ^{2}}{{2}\mathbb{E}\left[\left(T_1-\mathbb{E}[T_1|\mathbf{Y}]\right) ^{2}|\mathbf{Y}\right] +\dfrac {2}{3}\mathbb{E}\left[ \left(T_1-\mathbb{E}[T_1|\mathbf{Y}]\right) ^{3}|\mathbf{Y}\right] }\right]\\ &= \mathbb{E}\left[\dfrac {\mu_n^2\mathbb{E}\left[Z_1^2|\mathbf{Y}\right] ^{2}}{{2\mu_n}\mathbb{E}\left[Z_1^2|\mathbf{Y}\right] +\dfrac {2}{3}\mu_n\mathbb{E}\left[ Z_1^{3}|\mathbf{Y}\right] }\right]\label{eq:simple-c}\\ &\ge \dfrac {\mu_n\mathbb{E}\left[Z_1^2\right] ^{2}}{{2}(\mathbb{E}\left[Z_1^2\right] +\dfrac {1}{3}\mathbb{E}\left[ Z_1^{3}\right]) }\label{eq:jen-x2y}\\ & \ge\frac{\mu_n}{4}\min\left\{ {\mathbb{E}\left[Z_1^2\right] },3\dfrac {\mathbb{E}\left[Z_1^2\right] ^{2}}{\mathbb{E}\left[ Z_1^{3}\right] }\right\}\label{eq:29} \end{align} where \eqref{eq:simple-c} is a result of simple algebraic calculations using the moments of the Poisson r.v. $M$ and \eqref{eq:jen-x2y} follows by applying the Jensen inequality to the jointly convex function $f(x,y):=\frac{x^2}{y}$.
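The Jensen step \eqref{eq:jen-x2y} uses only the joint convexity of $f(x,y)=x^2/y$ for $y>0$, i.e. $\mathbb{E}[X]^2/\mathbb{E}[Y]\le\mathbb{E}[X^2/Y]$. A minimal numeric illustration (a side check; the four-point distribution below is an arbitrary choice):

```python
# Check E[X]^2 / E[Y] <= E[X^2 / Y] for f(x, y) = x^2 / y, jointly convex on y > 0.
xs = [0.5, 1.0, 2.0, 4.0]   # arbitrary X values
ys = [1.0, 2.0, 0.5, 3.0]   # arbitrary positive Y values
ps = [0.1, 0.4, 0.3, 0.2]   # probabilities of the pairs (x, y)

Ex = sum(p * x for p, x in zip(ps, xs))
Ey = sum(p * y for p, y in zip(ps, ys))
Ef = sum(p * x * x / y for p, x, y in zip(ps, xs, ys))
assert Ex * Ex / Ey <= Ef + 1e-12
```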
Next consider, \begin{align} \mathbb{E}\left[Z_1^2\right]&=\frac{1}{\mu_n^2}\mathbb{E}\left[\exp(2\imath(\mathbf{X}(1);\mathbf{Y}))\mathbbm{1}\{(\mathbf{X}(1),\mathbf{Y})\in\mathcal{F}\}\right]\nonumber\\ &=\frac{1}{\mu_n^2}\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right];\label{eq:eq1}\\ \mathbb{E}\left[Z_1^3\right]&=\frac{1}{\mu_n^3}\mathbb{E}\left[\exp(3\imath(\mathbf{X}(1);\mathbf{Y}))\mathbbm{1}\{(\mathbf{X}(1),\mathbf{Y})\in\mathcal{F}\}\right]\nonumber\\ &=\frac{1}{\mu_n^3}\mathbb{E}\left[\exp(2\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}\right];\label{eq:eq2} \end{align} where the equalities \eqref{eq:eq1} and \eqref{eq:eq2} follow from a change of measure, since $(\mathbf{X}(1),\mathbf{Y})\sim P^{\otimes n}_X P^{\otimes n}_Y$. Substituting \eqref{eq:eq1} and \eqref{eq:eq2} in \eqref{eq:29} concludes the proof. \subsection{Proof of Lemma \ref{le:TV-conv}}\label{apx:sub-TV} Utilizing Lemma \ref{le:TV-conv} with $U\leftarrow T_1$ in \eqref{eq:V22} yields \begin{align} \mathbb{E}[|T-1|]&\ge \mathbb{E}\left[\mathbb{E}\left[|T_1-\mathbb{E}[T_1|\mathbf{Y}]|\Big|\mathbf{Y}\right]\right]\\ &\ge \mathbb{E}\left[\dfrac {\mathbb{E}\left[ \left(T_1-\mathbb{E}[T_1|\mathbf{Y}]\right) ^{2}|\mathbf{Y}\right] ^{3}}{\mathbb{E}\left[\left(T_1-\mathbb{E}[T_1|\mathbf{Y}]\right) ^{4}|\mathbf{Y}\right] }\right]\\ &= \mathbb{E}\left[\dfrac {\mu_n^3\mathbb{E}\left[Z_1^2|\mathbf{Y}\right] ^{3}}{{\mu_n}\mathbb{E}\left[Z_1^4|\mathbf{Y}\right] +{3}\mu_n^2\mathbb{E}\left[ Z_1^{2}|\mathbf{Y}\right]^2 }\right]\label{eq:TV-simple-c} \end{align} where \eqref{eq:TV-simple-c} is a result of simple algebraic calculations using the moments of the Poisson r.v. $M$.
Next consider, \begin{align} \mathbb{E}\left[Z_1^2|\mathbf{Y}\right]&=\frac{1}{\mu_n^2}\mathbb{E}\left[\exp(2\imath(\mathbf{X}(1);\mathbf{Y}))\mathbbm{1}\{(\mathbf{X}(1),\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right]\nonumber\\ &=\frac{1}{\mu_n^2}\mathbb{E}\left[\exp(\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right];\label{eq:TV-eq1}\\ \mathbb{E}\left[Z_1^4|\mathbf{Y}\right]&=\frac{1}{\mu_n^4}\mathbb{E}\left[\exp(4\imath(\mathbf{X}(1);\mathbf{Y}))\mathbbm{1}\{(\mathbf{X}(1),\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right]\nonumber\\ &=\frac{1}{\mu_n^4}\mathbb{E}\left[\exp(3\imath(\mathbf{X};\mathbf{Y}))\mathbbm{1}\{(\mathbf{X},\mathbf{Y})\in\mathcal{F}\}|\mathbf{Y}\right];\label{eq:TV-eq2} \end{align} where the equalities \eqref{eq:TV-eq1} and \eqref{eq:TV-eq2} follow from a change of measure, since $(\mathbf{X}(1),\mathbf{Y})\sim P^{\otimes n}_X P^{\otimes n}_Y$. Substituting \eqref{eq:TV-eq1} and \eqref{eq:TV-eq2} in \eqref{eq:TV-simple-c} concludes the proof.
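The equalities \eqref{eq:eq1}, \eqref{eq:eq2}, \eqref{eq:TV-eq1} and \eqref{eq:TV-eq2} are all instances of the identity $\mathbb{E}_{P_XP_Y}[\exp(k\,\imath(X;Y))]=\mathbb{E}_{P_{XY}}[\exp((k-1)\,\imath(X;Y))]$, which can be checked exactly on a toy joint pmf (the pmf below is an arbitrary illustrative example, not from the paper):

```python
import math

# i(x;y) = log(P(x,y) / (P(x)P(y))); check, for k = 2, 3, that
# E_{PxPy}[exp(k*i)] = E_{Pxy}[exp((k-1)*i)]   (change of measure).
Pxy = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
Px = {0: 0.5, 1: 0.5}
Py = {0: 0.4, 1: 0.6}
info = {xy: math.log(p / (Px[xy[0]] * Py[xy[1]])) for xy, p in Pxy.items()}

for k in (2, 3):
    lhs = sum(Px[x] * Py[y] * math.exp(k * info[(x, y)]) for x in Px for y in Py)
    rhs = sum(p * math.exp((k - 1) * info[xy]) for xy, p in Pxy.items())
    assert abs(lhs - rhs) < 1e-12
```

The identity holds term by term because the likelihood ratio $\mathrm{d}P_{XY}/\mathrm{d}(P_X\otimes P_Y)=\exp(\imath)$ absorbs exactly one power of $\exp(\imath)$.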
{ \section{Proof of Lemma \ref{le:lemma9}}\label{apx:lemma8} \subsection{Proof of item 1} Using the Berry--Esseen theorem for the sum $\sum_{k=1}^n Z_k$, we get the following approximation for any $\mathbf{y}\in\mathcal{G}$, \begin{align} \mathbb{P}\left[\mathcal{F}|\overline{\mathbf{Y}}=\mathbf{y}\right]&\geq \mathbb{P}\left[b_\mathbf{y}\le N\le c_\mathbf{y}\right] -6\dfrac{\sum_{k=1}^n\mathrm{M}_3[Z_k|\overline{Y}_k=y_k]}{\left(\sum_{k=1}^n\mathrm{Var}[Z_k|\overline{Y}_k=y_k]\right)^{\frac{3}{2}}}\\ &\geq \mathbb{P}\left[b_\mathbf{y}\le N\le c_\mathbf{y}\right] -\frac{d}{\sqrt{n}}\label{eqn:BEEEE} \end{align} where $N$ is a standard normal random variable, $$b_\mathbf{y}=\frac{n\mathbb{E}[Z]-\sum_{k=1}^n\mathbb{E}[Z_k|\overline{Y}_k=y_k]-\frac{1}{2}\log n-a}{\sqrt{\sum_{k=1}^n\mathrm{Var}[Z_k|\overline{Y}_k=y_k]}}, ~~~c_\mathbf{y}=\frac{n\mathbb{E}[Z]-\sum_{k=1}^n\mathbb{E}[Z_k|\overline{Y}_k=y_k]-\frac{1}{2}\log n}{\sqrt{\sum_{k=1}^n\mathrm{Var}[Z_k|\overline{Y}_k=y_k]}},$$ and $d=\frac{9\sqrt{8}\mathbb{E}[\mathrm{M}_3[Z|\overline{Y}]]}{\mathbb{E}[\mathrm{Var}[Z|\overline{Y}]]^{\frac{3}{2}}}$.
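The Gaussian interval estimate \eqref{eqn:BE-N} below uses only the fact that the standard normal density on $[b,c]$ is minimized at the endpoint of largest magnitude $\kappa=\max\{|b|,|c|\}$. A quick side check via the error function (the test intervals are arbitrary):

```python
import math

def Phi(x):
    """Standard normal CDF via math.erf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Check P[b <= N <= c] >= (c - b) * exp(-kappa^2/2) / sqrt(2*pi),
# with kappa = max(|b|, |c|): the density on [b, c] is at least its value at kappa.
for b, c in [(-0.3, 0.1), (0.2, 0.9), (-1.5, -0.4)]:
    kappa = max(abs(b), abs(c))
    prob = Phi(c) - Phi(b)
    lower = (c - b) * math.exp(-kappa ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
    assert prob >= lower - 1e-15
```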
Observe that for large enough $n$ $$\max\{|b_{\mathbf{y}}|,|c_\mathbf{y}|\}\leq \sqrt{\frac{2}{\mathbb{E}[\mathrm{Var}[Z|\overline{Y}]]}}\left(1+\frac{\frac{1}{2}\log n+a}{\sqrt{n}}\right)\leq \frac{2}{\sqrt{\mathbb{E}[\mathrm{Var}[Z|\overline{Y}]]}}\triangleq \kappa.$$ Hence \begin{equation} \mathbb{P}\left[b_\mathbf{y}\le N\le c_\mathbf{y}\right]=\int_{b_\mathbf{y}}^{c_\mathbf{y}}\frac{\mathrm{e}^{-\frac{x^2}{2}}}{\sqrt{2\pi}}dx\geq (c_\mathbf{y}-b_\mathbf{y})\frac{\mathrm{e}^{-\frac{\kappa^2}{2}}}{\sqrt{2\pi}}\geq \frac{a\mathrm{e}^{-\frac{\kappa^2}{2}}}{\sqrt{3\pi\mathbb{E}[\mathrm{Var}[Z|\overline{Y}]]}}.\frac{1}{\sqrt{n}}\label{eqn:BE-N} \end{equation} Putting \eqref{eqn:BEEEE} and \eqref{eqn:BE-N} together yields that for large enough $a$, \begin{equation} \mathbb{P}\left[\mathcal{F}|\overline{\mathbf{Y}}=\mathbf{y}\right]\geq \frac{C}{\sqrt{n}} \end{equation} for some constant $C$. \subsection{Proof of item 2} Using the Chebyshev inequality, it is easy to show that the probability of violating the second and the third constraints of \eqref{eqn:violating} is of order $O\left(n^{-1/2}\right)$. Further, the Berry--Esseen theorem implies that the probability of a deviation of order $\sqrt{n}$ from the mean is lower-bounded by some non-zero constant $K$. Thus for large enough $n$, the probability of $\mathcal{G}$ is lower bounded by $\frac{K}{2}$. } \iffalse \begin{IEEEproof} Let $S_n=\frac{1}{\sqrt{n}}\sum_{k=1}^n V_k$.
Then we have, \begin{align} \mathbb{E}&\left[\exp\left(-\beta \sum_{k=1}^nV_i\right)\mathbbm{1}\left\{\sum_{k=1}^nV_i\ge \alpha\log n\right\}\right] =\mathbb{E}\left[\exp\left(-\beta \sqrt{n}S_n\right)\mathbbm{1}\left\{S_n\ge \alpha\dfrac{\log n}{\sqrt{n}}\right\}\right]\\ &=\int_{\alpha\frac{\log n}{\sqrt{n}}}^\infty \exp(-\beta\sqrt{n} x)dF_{S_n}(x)\\ &=-F_{S_n}\left(\alpha\frac{\log n}{\sqrt{n}}\right)\exp\left(-\alpha\beta\log n\right)+\beta\sqrt{n}\int_{\alpha\frac{\log n}{\sqrt{n}}}^\infty \exp(-\beta\sqrt{n} x)F_{S_n}(x)dx\\ &=-F_{S_n}\left(\alpha\frac{\log n}{\sqrt{n}}\right)\exp\left(-\alpha\beta\log n\right)+\frac{\beta}{\sqrt{n}}\int_{\alpha{\log n}}^\infty \sqrt{n}\exp(-\beta x)F_{S_n}(\frac{x}{\sqrt{n}})dx \end{align} \end{IEEEproof} \fi \section{Thinning property of a random Poisson sum}\label{apx:Poisson-tinning} Let $X_1,X_2,\cdots$ be a sequence of i.i.d. random variables with the distribution $P_X$ and characteristic function $\varphi_X(t):=\mathbb{E}[\exp(\mathrm{i}tX)]$, where $X\sim P_X$. Let $M$ be a Poisson r.v. with mean $\mu$, independent of the sequence $(X_1,X_2,\cdots)$. Further, let $\mathcal{F}\subseteq\mathcal{X}$ be a measurable set w.r.t. $P_X$. The following lemma is related to the \emph{thinning} property of the Poisson process \cite{last2017lectures}. \begin{lem} Let $U=\sum_{k=1}^M X_k\mathbbm{1}\{X_k\in\mathcal{F}\}$ and $V=\sum_{k=1}^M X_k\mathbbm{1}\{X_k\notin\mathcal{F}\}$. Then $U$ and $V$ are independent. \end{lem} \begin{IEEEproof} Let $\varphi_{U,V}(s,t):=\mathbb{E}[\exp(\mathrm{i}(sU+tV))]$ be the joint characteristic function of the pair $(U,V)$. It suffices to show \begin{equation} \varphi_{U,V}(s,t)=\varphi_{U}(s)\varphi_{V}(t),~~~~~~ \forall (s,t)\in\mathbb{R}^2.\label{thin-0}\end{equation} The characteristic function of the random sum $U$ is given by (see \cite[Equation 2.4, P.
504]{Feller1971}) \begin{align} \varphi_U(s)=\exp(\mu(\varphi_{X\mathbbm{1}\{X\in\mathcal{F}\}}(s)-1)) \end{align} Observe that \begin{equation} \varphi_{X\mathbbm{1}\{X\in\mathcal{F}\}}(s)=\mathbb{E}[\exp(\mathrm{i}sX\mathbbm{1}\{X\in\mathcal{F}\})]=\mathbb{P}[X\notin\mathcal{F}] +\mathbb{E}[\exp(\mathrm{i}sX)\mathbbm{1}\{X\in\mathcal{F}\}] \end{equation} Hence, \begin{align} \varphi_U(s)=\exp\left(\mu\big(\mathbb{E}[\exp(\mathrm{i}sX)\mathbbm{1}\{X\in\mathcal{F}\}]-\mathbb{P}[X\in\mathcal{F}]\big)\right) \end{align} Similarly, \begin{align} \varphi_V(t)=\exp\left(\mu\big(\mathbb{E}[\exp(\mathrm{i}tX)\mathbbm{1}\{X\notin\mathcal{F}\}]-\mathbb{P}[X\notin\mathcal{F}]\big)\right) \end{align} Thus, \begin{align} \varphi_{U}(s)\varphi_V(t)=\exp\left(\mu\big(\mathbb{E}[\exp(\mathrm{i}sX)\mathbbm{1}\{X\in\mathcal{F}\}]+\mathbb{E}[\exp(\mathrm{i}tX)\mathbbm{1}\{X\notin\mathcal{F}\}]-1\big)\right)\label{thin-1} \end{align} Next consider, \begin{align} \varphi_{U,V}(s,t)&=\mathbb{E}[\exp(\mathrm{i}(sU+tV))]\\ &=\mathbb{E}\left[\exp\left(\mathrm{i}\sum_{k=1}^M X_k(s\mathbbm{1}\{X_k\in\mathcal{F}\}+t\mathbbm{1}\{X_k\notin\mathcal{F}\})\right)\right]\\ &=\exp\left(\mu(\varphi_{X(s\mathbbm{1}\{X\in\mathcal{F}\}+t\mathbbm{1}\{X\notin\mathcal{F}\})}(1)-1)\right)\label{thin-2} \end{align} where we have used again the formula \cite[Equation 2.4, P. 504]{Feller1971} for the Poisson random sum $\sum_{k=1}^M X_k(s\mathbbm{1}\{X_k\in\mathcal{F}\}+t\mathbbm{1}\{X_k\notin\mathcal{F}\})$. Now observe, \begin{align} \varphi_{X(s\mathbbm{1}\{X\in\mathcal{F}\}+t\mathbbm{1}\{X\notin\mathcal{F}\})}(1)&=\mathbb{E}\left[\exp(\mathrm{i}X(s\mathbbm{1}\{X\in\mathcal{F}\}+t\mathbbm{1}\{X\notin\mathcal{F}\}))\right]\\ &=\mathbb{E}\left[\exp(\mathrm{i}sX)\mathbbm{1}\{X\in\mathcal{F}\}\right]+\mathbb{E}\left[\exp(\mathrm{i}tX)\mathbbm{1}\{X\notin\mathcal{F}\}\right]\label{thin-3} \end{align} Substituting \eqref{thin-3} in \eqref{thin-2} and comparing the result with \eqref{thin-1} yield \eqref{thin-0}. 
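Outside the formal proof, the thinning statement can also be illustrated by simulation. A minimal sketch, assuming $X_k\in\{1,2\}$ with $\mathcal{F}=\{1\}$ (so $U$ counts the $1$'s and $V$ is twice the count of $2$'s; thinning makes the two counts independent Poissons). The mean $\mu$, split probability $p$ and sample size are illustrative.

```python
import math
import random

random.seed(1)

mu, p, trials = 2.0, 0.5, 100_000

def poisson(lam):
    """Knuth's Poisson sampler, adequate for small lam."""
    L = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= random.random()
        if prod <= L:
            return k
        k += 1

joint = n1_zero = n2_zero = 0
for _ in range(trials):
    m = poisson(mu)
    xs = [1 if random.random() < p else 2 for _ in range(m)]
    u = sum(1 for x in xs if x == 1)        # U: sum over F = {1}
    v = 2 * sum(1 for x in xs if x == 2)    # V: sum over the complement of F
    n1_zero += (u == 0)
    n2_zero += (v == 0)
    joint += (u == 0 and v == 0)

# Independence predicts P[U=0, V=0] = P[U=0] * P[V=0] (both equal exp(-1) here).
lhs = joint / trials
rhs = (n1_zero / trials) * (n2_zero / trials)
assert abs(lhs - rhs) < 0.01
```

A factorizing joint probability is of course only a necessary condition for independence; the characteristic-function argument above is what establishes it in full.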
\end{IEEEproof} \section{On the optimum values $\tau^*$ and $\rho^*$}\label{apx:optimum} \begin{lem}The mapping $G:[0,1)\rightarrow \mathbb{R}$ defined by \begin{equation} G(\rho):=\log {\mathbb{E}\left[\mathbb{E}^{1-\rho}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right]} \end{equation} where $(X,{Y})\sim P_{XY}$, is convex. \end{lem} \begin{IEEEproof} Let $F(\rho):={\mathbb{E}\left[\mathbb{E}^{1-\rho}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right]}$. It suffices to show that for any $\theta,\alpha,\beta\in[0,1]$, \begin{equation} F(\theta\alpha+\bar{\theta}\beta)\leq F(\alpha)^{\theta}F(\beta)^{\bar{\theta}}, \end{equation} where $\bar{\theta}=1-\theta$. The proof follows from repeatedly applying Jensen inequality, as follows, \begin{align} F(\alpha)^{\theta}F(\beta)^{\bar{\theta}}&= \mathbb{E}^{\theta}\left[\mathbb{E}^{1-\alpha}\left[\exp\left(\frac{\alpha}{1-\alpha}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right]\mathbb{E}^{\bar{\theta}}\left[\mathbb{E}^{1-\beta}\left[\exp\left(\frac{\beta}{1-\beta}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right]\\ &\geq \mathbb{E}\left[\mathbb{E}^{\theta( 1-\alpha)}\left[\exp\left(\frac{\alpha}{1-\alpha}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\mathbb{E}^{\bar{\theta}(1-\beta)}\left[\exp\left(\frac{\beta}{1-\beta}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right]\label{eqn:jen-con-1}\\ &= \mathbb{E}\left[\left(\mathbb{E}^{\frac{\theta( 1-\alpha)}{\theta( 1-\alpha)+\bar{\theta}(1-\beta)}}\left[\exp\left(\frac{\alpha}{1-\alpha}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right.\right.\\&\qquad\qquad\qquad\qquad\left.\left.\mathbb{E}^{\frac{\bar{\theta}(1-\beta)}{\theta( 1-\alpha)+\bar{\theta}(1-\beta)}}\left[\exp\left(\frac{\beta}{1-\beta}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right)^{\theta( 1-\alpha)+\bar{\theta}(1-\beta)}\right]\\ &\geq \mathbb{E}\left[\left(\mathbb{E}\left[\exp\left(\frac{\theta\alpha+\bar{\theta}\beta}{\theta( 
1-\alpha)+\bar{\theta}(1-\beta)}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right)^{\theta( 1-\alpha)+\bar{\theta}(1-\beta)}\right]\label{eqn:jen-con-2}\\ &=F(\theta\alpha+\bar{\theta}\beta) \end{align} where \eqref{eqn:jen-con-1} and \eqref{eqn:jen-con-2} follow from the H\"{o}lder inequality and the fact that the mapping $x\mapsto x^{\theta( 1-\alpha)+\bar{\theta}(1-\beta)}$ is increasing. \end{IEEEproof} \begin{cor} The function $H:[0,1)\rightarrow \mathbb{R}$ defined by \begin{align} H(\rho)&=\frac{d}{d\rho}G(\rho):=\dfrac{1}{F(\rho)} \mathbb{E}\left[\left\{\mathbb{E}^{1-\rho}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right\}\cdot\right.\nonumber\\ &\left.\left\{-\log\mathbb{E}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]+\frac{1}{1-\rho}\cdot\frac{\mathbb{E}\left[{\imath_{X;Y}(X;{Y})}\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|Y\right]}{\mathbb{E}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|Y\right]} \right\}\right] \label{eqn:Derivative} \end{align} is increasing. \end{cor} Let $(X_{\rho},Y_\rho)$ be a pair of tilted random variables defined by the following pair of Radon-Nikodym derivatives, \begin{align} \frac{\mathrm{d}P_{{X}_\rho|{Y}_\rho}}{\mathrm{d}P_{X|Y}}(x,y)&:=\frac{\exp\left(\frac{\rho}{1-\rho}\imath(x;y)\right)}{\mathbb{E}\left[\exp\left(\frac{\rho}{1-\rho}\imath(X;Y)\right)|Y=y\right]}\\ \frac{\mathrm{d}P_{{Y}_\rho}}{\mathrm{d}P_{Y}}(y) &:=\frac{\mathbb{E}^{{1-\rho}}\left[\exp(\frac{\rho}{1-\rho}\imath(X;Y))|Y=y\right]}{\mathbb{E}\left[\mathbb{E}^{1-\rho}\left[\exp(\frac{\rho}{1-\rho}\imath(X;Y))|Y\right]\right]}. \end{align} Also let $Z_\rho$ be an r.v. defined by \begin{align} Z_\rho:=&\frac{1}{1-\rho}\imath_{X;Y}({X}_\rho;{Y}_\rho)-\log\mathbb{E}\left[\exp(\frac{\rho}{1-\rho}\imath(X;Y))|Y={Y}_\rho\right] \end{align} Using these definitions, the r.h.s.
of the derivative \eqref{eqn:Derivative} can be simplified as follows, \begin{align} \dfrac{1}{F(\rho)} \mathbb{E}&\left[\left\{\mathbb{E}^{1-\rho}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right\}.\right.\nonumber\\ &\left.\left\{-\log\mathbb{E}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]+\frac{1}{1-\rho}.\frac{\mathbb{E}\left[{\imath_{X;Y}(X;{Y})}\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|Y\right]}{\mathbb{E}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|Y\right]} \right\}\right] \nonumber\\ =\dfrac{1}{F(\rho)} &\mathbb{E}\left[\left\{\mathbb{E}^{1-\rho}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]\right\}.\right.\nonumber\\ &\left.\qquad\left\{-\log\mathbb{E}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}\right]+\frac{1}{1-\rho}.{\mathbb{E}\left[{\imath_{X;Y}(X_\rho;{Y}_\rho)}\Big|Y_\rho=Y\right]}\right\}\right]\label{eqn:c-o-m-1} \\ =& \mathbb{E}\left[-\log\mathbb{E}\left[\exp\left(\frac{\rho}{1-\rho}\imath_{X;Y}(X;{Y})\right)\Big|{Y}=Y_\rho\right]+\dfrac{1}{1-\rho}.{\mathbb{E}\left[{\imath_{X;Y}(X_\rho;{Y}_\rho)}\Big|Y_\rho\right]}\right]\label{eqn:c-o-m-2}\\ =&\mathbb{E}[Z_\rho] \end{align} where \eqref{eqn:c-o-m-1} and \eqref{eqn:c-o-m-2} follow from the change of measures $P_{X|Y}\rightarrow P_{X_\rho|Y_\rho}$ and $P_Y\rightarrow P_{Y_\rho}$, respectively. Now let $R>I(X;Y)$ be a fixed number and define \begin{equation} \rho^*:=\arg\max_{0\leq\rho\leq\frac{1}{2}} \rho\left(R-I_{\frac{1}{1-\rho}}(X;Y)\right)=\arg\max_{0\leq\rho\leq\frac{1}{2}} \left(\rho R-G(\rho)\right). \end{equation} Since $G$ is convex, the function $f(\rho):=\rho R-G(\rho)$ attains its maximum either at one of the end-points $0,\frac{1}{2}$ or at an interior point $\rho^*$ of the interval $[0,\frac{1}{2}]$ such that $R=H(\rho^*)=\mathbb{E}[Z_{\rho^*}]$. Also $\rho^*\neq 0$, because $f'(0)=R-I(X;Y)>0$.
Therefore, if $\rho^*$ is not an interior point, it must be $\rho^*=\frac{1}{2}$, which happens if $R\geq\mathbb{E}[Z_{\frac{1}{2}}]$, due to the fact that $\mathbb{E}[Z_\rho]=H(\rho)$ is increasing. In summary, we have proved the following. \begin{cor}\label{cor:apx-tv} We have \begin{equation} \rho^*=\left\{\begin{array}{lr} \frac{1}{2}& R\geq\mathbb{E}[Z_{\frac{1}{2}}]\\ t<\frac{1}{2}& R=\mathbb{E}[Z_t] \end{array}\right. \end{equation} \end{cor} A similar argument shows the following counterpart for the optimization in \eqref{eq:tau-defn}. \begin{lem}\label{le:apx-kl} We have \begin{equation} \tau^*=\left\{\begin{array}{lr} 1& R\geq\mathbb{E}[\imath_{X;Y}(X_1;Y_1)]\\ t<1& R=\mathbb{E}[\imath_{X;Y}(X_t;Y_t)] \end{array}\right. \end{equation} where $(X_\tau,Y_\tau)$ is defined via the following Radon-Nikodym derivative, \begin{equation} \dfrac{\mathrm{d}P_{X_\tau Y_\tau}}{\mathrm{d}P_{XY}}(x,y):=\dfrac{\exp(\tau\imath_{X;Y}(x;y))}{\mathbb{E}[\exp(\tau\imath_{X;Y}(X;Y))]} \end{equation} \end{lem} \bibliographystyle{unsrt}
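For completeness, the first-order condition behind Lemma~\ref{le:apx-kl} can be sketched in one line. Assuming the optimization in \eqref{eq:tau-defn} (not reproduced in this appendix) is of the form $\tau^*=\arg\max_{0\leq\tau\leq 1}\left(\tau R-\log\mathbb{E}[\exp(\tau\imath_{X;Y}(X;Y))]\right)$, we have
\begin{align}
\frac{d}{d\tau}\Big(\tau R-\log\mathbb{E}\big[\exp(\tau\imath_{X;Y}(X;Y))\big]\Big)
&= R-\frac{\mathbb{E}\big[\imath_{X;Y}(X;Y)\exp(\tau\imath_{X;Y}(X;Y))\big]}{\mathbb{E}\big[\exp(\tau\imath_{X;Y}(X;Y))\big]}
= R-\mathbb{E}\big[\imath_{X;Y}(X_\tau;Y_\tau)\big],
\end{align}
where the last step is the change of measure $P_{XY}\rightarrow P_{X_\tau Y_\tau}$. The objective is concave, since the log-moment generating function is convex, so either the derivative vanishes at an interior point $t<1$ with $R=\mathbb{E}[\imath_{X;Y}(X_t;Y_t)]$, or the maximum is attained at the end-point $\tau^*=1$, which happens when $R\geq\mathbb{E}[\imath_{X;Y}(X_1;Y_1)]$, exactly as stated in the lemma.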
The average speed at $T_1$ K and the most probable speed at $T_2$ K of $CO_2$ gas is $9\times10^4$ cm/s. Calculate the values of $T_1$ and $T_2$.

(a) $T_1 = 1684.0$ K; $T_2 = 2143.37$ K
(b) $T_1 = 1234$ K; $T_2 = 1298.3$ K
(c) $T_1 = 2378.3$ K; $T_2 = 1369.3$ K
(d) $T_1 = 3456.1$ K; $T_2 = 3139.6$ K

Average speed $= \sqrt{8RT/(\pi m)}$; most probable speed $= \sqrt{2RT/m}$.
The average speed at $T_1$ K equals the most probable speed at $T_2$ K for $CO_2$:
$\sqrt{8RT_1/(\pi m)} = \sqrt{2RT_2/m}$, hence $T_1/T_2 = \pi/4$ ------(i)
Also, for $CO_2$, $u_{mp} = \sqrt{2RT_2/m} = \sqrt{2\times8.314\times10^7\times T_2/44} = 9\times10^4$ cm/s,
which gives $T_2 = 2143.37$ K, and then by eq. (i), $T_1 = 1684.0$ K. The answer is (a).
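The arithmetic can be double-checked with a few lines of Python (CGS units: $R = 8.314\times10^7$ erg mol$^{-1}$ K$^{-1}$, and $M = 44$ g/mol for $CO_2$):

```python
import math

R = 8.314e7   # gas constant in erg/(mol*K) (CGS units)
M = 44.0      # molar mass of CO2 in g/mol
u = 9.0e4     # given speed in cm/s

# Most probable speed: sqrt(2*R*T2/M) = u  =>  T2 = u^2 * M / (2*R)
T2 = u**2 * M / (2 * R)

# Average speed: sqrt(8*R*T1/(pi*M)) = u  =>  T1 = (pi/4) * T2
T1 = (math.pi / 4) * T2

print(round(T2, 2), round(T1, 1))  # 2143.37 and ~1683.4, i.e. option (a) to rounding
```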
Vaissís (in French, Vaychis) is a French commune in the department of Ariège, in the region of Occitania.

References

Communes of the district of Foix
CS Ştiinţa Bacău is a women's volleyball club from Romania, based in Bacău.

Achievements
Romanian Championship: 1998, 2005, 2013, 2014
1994, 1995, 1996, 1999, 2000, 2001, 2004, 2006, 2011, 2012
2007, 2008, 2018
Romanian Cup: 2005, 2006, 2013, 2014, 2015

2013/14 squad
1 Ramona Breban
2 Denisa Rogojinaru
3 Roxana Iosef-Bacșiș
4 Lucía Gaido
5 Mihaela Herlea
6 Monika Potokar
7 Aleksandra Petrović
8 Mihaela Albu
9 Aylín Pereyra
10 Alexandra Sobo
11 Crina Bălțatu
12 Marina Cvetanović (from 06.12.2013 at Polski Cukier Muszynianka Fakro Bank BPS Muszyna)
13 Sabina Miclea-Grigoruță
14 Kim Staelens
15 Emilce Sosa
16 Marina Vujović

External links

Romanian volleyball clubs
Bacău
Q: Set up nginx proxy for react application

I'm trying to create a docker-compose setup with two services, a Spring Boot backend (running on port 8080) and a React frontend served by Nginx. The React app calls backend API endpoints like /api/tests. However, when I run the docker compose and the frontend makes a request, it always fails with a 404 error:

GET http://localhost/api/tests 404 (Not Found)

When I set the frontend dockerfile not to use Nginx, just npm start, it worked fine, but I would prefer using a production build on Nginx.

Current frontend Dockerfile:

FROM node:11.13 as builder
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@2.1.8 -g
COPY ./package-lock.json /usr/src/app/
COPY ./public /usr/src/app/public
COPY ./src /usr/src/app/src
COPY ./nginx.conf /etc/nginx/nginx.conf
RUN npm run build

FROM nginx:1.15.10-alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]

nginx.conf:

server {
    listen 80;

    location / {
        try_files $uri $uri/ /index.html;
        add_header Cache-Control public;
        expires 1d;
    }

    location /api {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://server:8080/;
    }
}

docker-compose:

version: "3"
services:
  server:
    build: test-server/
    expose:
      - 8080
    ports:
      - 8080:8080
  ui:
    build: test-ui/
    expose:
      - 80
    ports:
      - 80:80

The React app has the line "proxy": "http://server:8080" in its package.json. Nginx logs the following error:

2019/04/15 12:50:03 [error] 6#6: *1 open() "/usr/share/nginx/html/api/tests" failed (2: No such file or directory), client: 172.20.0.1, server: localhost, request: "GET /api/tests HTTP/1.1", host: "localhost", referrer: "http://localhost/"

A: I found the problem. In the multi-stage build of the docker image, I accidentally copied the nginx.conf file into the builder image, not the production one. The fixed Dockerfile now looks like this:

# build environment
FROM node:11.13 as builder
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@2.1.8 -g
COPY ./package-lock.json /usr/src/app/
COPY ./public /usr/src/app/public
COPY ./src /usr/src/app/src
RUN npm run build

# production environment
FROM nginx:1.15.10-alpine
COPY --from=builder /usr/src/app/build /var/www
COPY ./nginx.conf /etc/nginx/nginx.conf
CMD ["nginx", "-g", "daemon off;"]

and nginx.conf:

server {
    listen 80;

    include /etc/nginx/mime.types;
    root /var/www;
    index index.html index.htm;

    location /api {
        resolver 127.0.0.11;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://server:8080$request_uri;
    }

    location / {
        try_files $uri $uri/ =404;
    }
}

A: It works fine in dev because you have a webpack dev server proxying your requests to port 8080 (the "proxy": "http://server:8080" line), but this is gone in production builds. Adding $request_uri to your proxy_pass should fix it:

location /api {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://server:8080$request_uri;
}

A: It will fail since you are accessing the API at http://localhost/api/tests, while in your docker-compose file you expose the API on port 8080. So try this one: http://localhost:8080/api/tests

I would suggest you use ENVIRONMENT VARIABLES so you can change it whenever you change your port or something. You can set them from your terminal, e.g.

SET REACT_APP_API_URL=http://localhost:8080

and access them in React via process.env.REACT_APP_API_URL. Or you can even set it in your Dockerfile or docker-compose file.
// GoogleButton Component
/**
 * Component that handles user login / logout via google sign-in
 * @packageDocumentation
 */
import React, { useState, useContext } from 'react';
import { GoogleLogout, GoogleLogin } from 'react-google-login';
import styled from 'styled-components';

import config from '../../config/ConfigLoader';
import { AuthContext } from '../../contexts/AuthContext';

const CustomGoogleLogin = styled(GoogleLogin)`
  width: 100px;
  height: 56px;
  margin-top: auto;
  margin-bottom: auto;
  margin-right: 5px;
`;

const CustomGoogleLogout = styled(GoogleLogout)`
  width: 100px;
  height: 56px;
  margin-top: auto;
  margin-bottom: auto;
  margin-right: 5px;
`;

/** GoogleButton State */
interface State {
  /** True if user is logged in */
  isLoggedIn: boolean;
}

/* Button that handles google login and logout */
export const GoogleButton: React.FC = () => {
  const authContext = useContext(AuthContext);
  const [isLoggedIn, setIsLoggedIn] = useState<State['isLoggedIn']>(false);

  const logIn = (response) => {
    authContext.setTokenId(response.tokenObj.id_token);
    setIsLoggedIn(true);
  };

  const logOut = () => {
    authContext.setTokenId('');
    setIsLoggedIn(false);
  };

  const handleError = () => {};

  return (
    <>
      {!isLoggedIn && (
        <CustomGoogleLogin
          clientId={config.clientId}
          buttonText="Login"
          onSuccess={logIn}
          onFailure={handleError}
          responseType="id_token"
          cookiePolicy={'single_host_origin'}
          isSignedIn={true}
        />
      )}
      {isLoggedIn && (
        <CustomGoogleLogout
          clientId={config.clientId}
          buttonText="Logout"
          onLogoutSuccess={logOut}
          onFailure={handleError}
          cookiePolicy={'single_host_origin'}
          isSignedIn={true}
        />
      )}
    </>
  );
};

export default GoogleButton;
The University of Warwick is a British public university in Coventry, founded in 1965. It is located 13 kilometres north of the town of Warwick.

Among the alumni and academic staff of the University of Warwick are winners of the Nobel Prize, the Turing Award, the Fields Medal, the Richard W. Hamming Medal, the Emmy, the Grammy and the Padma Vibhushan, as well as fellows of the British Academy, the Royal Society of Literature, the Royal Academy of Engineering and the Royal Society. They also include heads of state, government officials, leaders of intergovernmental organisations and the current chief economist of the Bank of England. Warwick researchers have also made significant contributions to the development of penicillin, music therapy, the Washington Consensus, second-wave feminism, computing standards including ISO and ECMA, complexity theory, contract theory, and international political economy as a field of study. It consistently places in the top ten of university rankings, both overall and in the vast majority of subjects taught.

Academic profile
In October 2018, Warwick had 26,531 students, of whom about two-fifths were postgraduates. Around 43% of students come from outside the United Kingdom, and more than 120 countries are represented on campus. The university has 29 academic departments and over 40 research centres and institutes in three faculties: Arts; Science, Engineering, Technology and Mathematics (STEM); and Social Sciences. In October 2018 it employed more than 2,500 academic and research staff.

International partnerships
Students of the University of Warwick can spend a semester or a year abroad at one of many partner universities. International partners include Columbia University, McGill University, Cornell University, UC Berkeley, and Sciences Po Paris.

Rankings
According to the ARWU ranking, the University of Warwick is in the world's top 50 in the following subjects:
10th in mathematics
20th in management
24th in statistics
28th in economics
33rd in political science
According to the subject rankings of the QS World University Rankings, the University of Warwick is ranked:
16th in statistics
19th in mathematics
23rd in English literature
23rd in management
25th in economics and econometrics
38th in philosophy
39th in history
42nd in linguistics and modern languages
47th in finance
48th in sociology
48th in development studies
49th in political science.

The Departments of Economics and Politics at the University of Warwick are considered among the best in the United Kingdom. Both departments took first place in the Good University Guide 2020, ahead of Oxbridge and the LSE. In addition, the Mathematics Department is widely regarded as one of the four best in the United Kingdom, known as COWI (Cambridge, Oxford, Warwick, Imperial); it was ranked 3rd in England by ARWU and 4th by the QS ranking. The Guardian University Guide places Warwick Business School second in the country, just behind Oxford's Saïd Business School. According to the QS ranking, WBS is fourth in the United Kingdom and 23rd in the world. Warwick is listed as one of the ten best universities in the United Kingdom by all three of the most popular rankings. The University of Warwick is a member of the prestigious Russell Group and of the Sutton 13. Warwick was named University of the Year 2015 by The Times. In 2017, Warwick ranked as the second-best university by graduate employment rate, with 97.7% of graduates in employment.

Notable alumni and former academic staff
Oliver Hart, winner of the 2016 Nobel Prize in Economics
Stephen Merchant, actor, screenwriter, director and comedian
Linda Jackson, CEO of Citroën; the CEOs of Aston Martin and Jaguar Land Rover
Guðni Th. Jóhannesson, President of Iceland
Susan Strange, pioneer of the field of international political economy
Luis Arce, President of Bolivia
Sir John Cornforth, winner of the 1975 Nobel Prize in Chemistry
David Li, CEO and chairman of the Bank of East Asia
E. P. Thompson, British historian and writer
Nicholas Stern, former chief economist of the World Bank
George Saitoti, former Vice-President of Kenya and former chairman of the World Bank and the International Monetary Fund
Mahmoud Mohieldin, senior vice-president of the World Bank
Christopher Zeeman, mathematician, one of the founders of the University of Warwick
Estelle Morris, former Education Secretary for the Labour Party
Valerie Amos, former diplomat, the first black person to head a college at the University of Oxford
Andy Haldane, chief economist of the Bank of England
Andrea Leadsom, former Leader of the House of Commons for the Conservative Party and former Secretary of State for Business, Energy and Industrial Strategy.

References

Warwick, University of
\section{Introduction} Material flows are found in many places of the world. This concerns, for example, traffic flows in urban areas or flows of commodities in logistic systems. There is also some similarity with material flows in production or biological systems, from cells through whole organisms up to ecological food chains. Many of these material flows are not of a diffusive nature or taking place in continuous space. They are often directed and organized in networks. In contrast to data flows in information networks, however, material flows obey conservation laws, which can be used to set up equations for material flows in networks. It turns out, however, that this is not a trivial task. While there is already a controversial discussion about the correct equations representing traffic flows along road sections \cite{Daganzo,RMP,mitSchoenhof,Kernerbuch}, their combination in often complex and irregular networks poses further challenges. In particular, there have been several publications on the treatment of the boundary conditions at nodes (connections) of several network links (i.e. road sections) \cite{Hilliges,CellTransmission,Lebacque,control,Klar,Piccoli1,Piccoli2,Herty3,Herty2,Herty1}. Specifically, the modelling of merging and intersecting flows is not unique, as there are many possible forms of organization, including the use of traffic lights. Then, however, the question arises of how these traffic lights should be operated, coordinated, and optimized. In order to address these questions, in Sec.~\ref{Sec2} we formulate a simple model for network flows, which contains the main ingredients of material or traffic flows. Section \ref{Sec3} will then discuss the treatment of diverges, merges, and intersections. Equations for the interaction-dependent permeability at merging zones and intersections will be formulated in Sec.~\ref{Sec4}. We will see that, under certain conditions, they lead to spontaneous oscillations, which have features similar to the operation of traffic lights.
Finally, Sec.~\ref{Sec5} summarizes and concludes this paper. \section{Flows in Networks}\label{Sec2} The following section will start with a summary of the equations derived for traffic flows in networks in a previous paper. These equations are based on the following assumptions: \begin{itemize} \item The road network can be decomposed into road sections of homogeneous capacity (links) and nodes describing their connections. \item The traffic dynamics along the links is sufficiently well described by the Lighthill-Whitham model, i.e. the continuity equation for vehicle conservation and a flow-density relationship (``fundamental diagram''). This assumes adiabatic speed adjustments, i.e. that acceleration and deceleration times can be neglected. \item The parameters of vehicles such as the maximum speed $V_i^0$ and the safe time headway $T$ are assumed to be identical in the same road section, and vehicles that enter a road section first exit it first (FIFO principle). That is, overtaking is assumed to be negligible. \item The fundamental diagram can be well approximated by a triangular shape, with an increasing slope $V_i^0$ at low densities and a decreasing slope $-c$ in the congested regime. This implies two constant characteristic speeds: While $V_i^0$ corresponds to the free speed or speed limit on road section $i$, \begin{equation} - c = -\frac{1}{\rho_{\rm max} T} \end{equation} is the dissolution speed of the downstream front of a traffic jam and the velocity of upstream propagation of perturbations in congested traffic. Here, $\rho_{\rm max}$ denotes the maximum vehicle density in vehicle queues, and $T\approx 1.8$s is the safe time gap between two successive vehicles. \item The vehicle density in traffic jams is basically constant.
\end{itemize} These assumptions may be compensated for by suitable corrections \cite{RMP}, but already the model below displays a rich spectrum of spatio-temporal behaviors and contains the main elements of traffic dynamics we are interested in here. \subsection{Flow Conservation Laws}\label{laws} In the following, we will introduce our equations for traffic flows in networks only briefly, as a detailed justification and derivation has been given elsewhere \cite{JPhysA,TGF03,control}. These equations are also meaningful for pipeline networks \cite{pipelines} (if complemented by equations for momentum conservation), logistic systems \cite{logistics}, or supply networks \cite{supply}. Our notation is illustrated in Fig.~\ref{Fig1}. \par Compared to Ref.~\cite{control}, we will use a simplified notation here.\footnote{The arrival flow $A_j(t)$ has previously been denoted by $Q_j^{\rm arr}(t)$, the potential arrival flow $\widehat{A}_j(t)$ by $Q_j^{\rm arr, pot}(t)$, the departure flow $O_j(t)$ by $Q_j^{\rm dep}(t)$ and the potential departure flow $\widehat{O}_j(t)$ by $Q_j^{\rm dep,pot}(t)$.} The {\em arrival flow} $A_j(t)$ denotes the actual inflow of vehicles into the upstream end of road section $j$, while $O_j(t)$ is the actual {\em departure flow}, i.e. the flow of vehicles leaving road section $j$ at its downstream end. The quantity \begin{equation} \widehat{Q}_j = \left( T + \frac{1}{V_j^0\rho_{\rm max}} \right)^{-1} = \frac{\rho_{\rm max}}{1/c + 1/V_j^0} \end{equation} represents the maximum in- or outflow of road section $j$. All the above quantities refer to flows {\it per lane}. $I_j$ is the number of lanes and $L_j$ the length of road section $j$. $l_j(t)\le L_j$ is the length of the congested area on link $j$ (measured from the downstream end), and $\Delta N_j$ is the number of stopped or delayed vehicles, see Eqs. (\ref{shock}) and (\ref{delayed}).
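As a quick numerical illustration of the maximum flow $\widehat{Q}_j$ and the characteristic speed $c$ (the parameter values below are illustrative assumptions, except for $T\approx 1.8\,$s, which is taken from the text):

```python
# Maximum per-lane flow Q_hat_j = 1 / (T + 1/(V0 * rho_max)) and the
# congested characteristic speed c = 1/(rho_max * T).
# Assumed illustrative values: V0 = 33.3 m/s (~120 km/h) and
# rho_max = 0.15 veh/m (one vehicle per ~6.7 m in a standing queue).
T = 1.8          # safe time gap [s] (from the text)
V0 = 33.3        # free speed V_j^0 [m/s] (assumed)
rho_max = 0.15   # jam density [veh/m] (assumed)

Q_hat = 1.0 / (T + 1.0 / (V0 * rho_max))  # [veh/s per lane]
c = 1.0 / (rho_max * T)                   # [m/s]

# The two expressions for Q_hat given in the text agree:
assert abs(Q_hat - rho_max / (1.0 / c + 1.0 / V0)) < 1e-12

print(round(Q_hat * 3600))  # roughly 1800 veh/h per lane
print(round(c, 2))          # roughly 3.7 m/s (~13 km/h jam-front speed)
```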
With these definitions, we can formulate constraints for the {\em actual} arrival and departure flows, which are given by the {\em potential arrival flows} $\widehat{A}_j(t)$ and the {\em potential departure flows} $\widehat{O}_i(t)$, respectively. \par The actual arrival flow $A_j(t)$ is limited by the maximum inflow $\widehat{Q}_j$, if road section $j$ is not fully congested ($l_j(t) < L_j$). Otherwise (if $l_j=L_j$) it is limited by the actual departure flow $O_j(t-L_j/c)$ a time period $L_j/c$ before, as it takes this time period until the downstream flow value has propagated up to the upstream end of the road section by forward movement of vehicles under congested traffic conditions. This implies \begin{equation} 0 \le A_j(t) \le \widehat{A}_j(t) := \left\{ \begin{array}{ll} \widehat{Q}_j & \mbox{if } l_j(t) < L_j \\ O_j(t-L_j/c) & \mbox{if } l_j(t) = L_j. \end{array} \right. \label{one} \end{equation} Moreover, the potential departure flow $\widehat{O}_i(t)$ of road section $i$ is given by its permeability $\gamma_i(t)$ times the maximum outflow $\widehat{Q}_i$ from this road section, if vehicles are queued up ($\Delta N_i >0$) and waiting to leave. Otherwise (if $\Delta N_i =0$) the outflow is limited by the permeability times the arrival flow $A_i$ a time period $L_i/V_i^0$ before, as this is the time period that entering vehicles need to reach the end of road section $i$ when moving freely at the speed $V_i^0$. This gives the additional relationship \begin{equation} 0 \le O_i(t) \le \widehat{O}_i(t) := \gamma_i(t) \left\{ \begin{array}{ll} A_i(t-L_i/V_i^0) & \mbox{if } \Delta N_i(t) = 0 \\ \widehat{Q}_i & \mbox{if } \Delta N_i(t) > 0 \, . \end{array} \right. \label{two} \end{equation} The permeability $\gamma_i(t)$ for traffic flows at the downstream end of section $i$ can assume values between 0 and 1.
In case of a traffic light, $\gamma_i(t) = 1$ corresponds to a green light for road section $i$, while $\gamma_i(t) =0$ corresponds to a red or amber light. \par Alternatively, and more compactly than Eqs.~(\ref{one}) and (\ref{two}), one can write \begin{equation} \widehat{A}_j(t) = \max\Big[ \widehat{Q}_j \Theta(l_j(t) < L_j),O_j(t-L_j/c)\Big] \label{easy1} \end{equation} and \begin{equation} \widehat{O}_i(t) = \gamma_i(t) \max \Big[\widehat{Q}_i\Theta(\Delta N_i > 0), A_i(t-L_i/V_i^0)\Big] \, , \label{easy2} \end{equation} where the Heaviside function $\Theta$ is 1 if the argument (inequality) has the logical value ``true'', and 0 otherwise. Note that the above treatment of the traffic flow in a road section requires the specification of the boundary conditions only, as we have integrated up Lighthill's and Whitham's partial differential equation over the length of the road section. The dynamics in the inner part of the section can be easily reconstructed from the boundary conditions thanks to the constant characteristic speeds. However, a certain point of the road section may be determined either by the upstream boundary (in the case of free traffic) or by the downstream boundary (if lying in the congested area, i.e. behind the upstream congestion front). Therefore, we have a switching between the influence of the upstream and the downstream boundary conditions, which makes the dynamics both complicated and interesting. This switching results from the maximum functions above and also implies that material flows in networks are described by hybrid equations. Although the dynamics is determined by linear ordinary differential equations in all regimes, the switching between the regimes can imply a complex dynamics and even deterministic chaos \cite{Peters}. \par Complementary to the above equations, we now have to specify the constraints for the nodes, i.e. the connection, merging, diverging or intersection points of the homogeneous road sections.
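In a discrete-time simulation, Eqs.~(\ref{easy1}) and (\ref{easy2}) amount to two simple rules; a minimal sketch in Python (function and variable names are ours, introduced for illustration only):

```python
def potential_arrival(Q_hat_j, l_j, L_j, O_j_delayed):
    # Eq. (easy1): A_hat_j(t) = max[ Q_hat_j * Theta(l_j < L_j), O_j(t - L_j/c) ],
    # where O_j_delayed stands for the departure flow a time period L_j/c earlier.
    return max(Q_hat_j if l_j < L_j else 0.0, O_j_delayed)

def potential_departure(gamma_i, Q_hat_i, dN_i, A_i_delayed):
    # Eq. (easy2): O_hat_i(t) = gamma_i * max[ Q_hat_i * Theta(dN_i > 0), A_i(t - L_i/V_i^0) ],
    # where A_i_delayed stands for the arrival flow a free travel time earlier.
    return gamma_i * max(Q_hat_i if dN_i > 0 else 0.0, A_i_delayed)

# Free flow, no queue, green light: the outflow just follows the delayed inflow.
assert potential_departure(1.0, 0.5, 0.0, 0.3) == 0.3
# Queued vehicles: the section can discharge at its full capacity Q_hat_i.
assert potential_departure(1.0, 0.5, 4.0, 0.3) == 0.5
# Fully congested section (l_j = L_j): the inflow is limited by the earlier outflow.
assert potential_arrival(0.5, 100.0, 100.0, 0.2) == 0.2
```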
Let the ingoing links be denoted by the index $i$ and the outgoing ones by $j$. To distinguish quantities more easily when we insert concrete values $1,2,\dots$ for $i$ and $j$, we mark quantities of outgoing links additionally by a prime (${}'$). \par Due to the condition of flow conservation, the arrival flow into a road section $j$ with $I'_j$ lanes must agree with the sum of the fractions $\alpha_{ij}$ of all outflows $I_iO_i(t)$ turning into road section $j$. Additionally, the arrival flows are limited, i.e. we have \begin{equation} I'_jA'_j(t) = \sum_i I_iO_i(t) \alpha_{ij} \le I'_j\widehat{A}'_j(t) \label{inequal} \end{equation} for all $j$. Of course, the turning fractions $\alpha_{ij}\ge 0$ are normalized due to flow conservation: \begin{equation} \sum_j \alpha_{ij}(t) = 1 \, . \end{equation} \par In the case of no merging flows, Eq.~(\ref{inequal}) simplifies to \begin{equation} I'_jA'_j(t) = I_iO_i(t) \alpha_{ij} \le I'_j\widehat{A}'_j(t) \end{equation} for all $j$. At the same time, $0 \le O_i(t) \le \widehat{O}_i(t)$ must be fulfilled for all $i$. Together, this implies \begin{equation} O_i(t) \le \min \left[ \widehat{O}_i(t) , \min_j \left( \frac{I'_j \widehat{A}'_j}{I_i\alpha_{ij}} \right) \right] \label{nomerge} \end{equation} for all $i$. \par The advantage of the above model is that it contains the most important elements of the traffic dynamics in networks. This includes the transition from free to congested traffic flows due to lack of capacity, the propagation speeds of vehicles and congested traffic, spillover effects (i.e. obstructions when entering fully congested road sections) and, implicitly, load-dependent travel times as well. \subsection{Two Views on Traffic Jams} Let us study the traffic dynamics on the road sections in more detail. Traffic jams can be handled in two different ways: first, by determining the number of cars that are delayed compared to free traffic, or, second, by determining the fronts and ends of traffic jams.
The former method is simpler, but it cannot deal correctly with spill-over effects, which occur when the end of a traffic jam reaches the end of a road section. Therefore, the first method is sufficient only in situations where the spatial capacity of road sections is never exceeded. \subsubsection{Method 1: Number of Delayed Vehicles} The first method just determines the difference between the number $N_i^{\rm in}(t)$ of vehicles that would reach the end of road section $i$ up to time $t$ and the number $N_i^{\rm out}(t)$ of vehicles that actually leave the road section up to this time. $N_i^{\rm in}(t)$ just corresponds to the number of vehicles which have entered the road section up to time $t-L_i/V_i^0$, as $L_i/V_i^0$ is the free travel time. This implies \begin{equation} N_i^{\rm in}(t) = \int\limits_0^t dt' \; A_i(t'-L_i/V_i^0) \, , \end{equation} while the number of vehicles that have actually left the road section up to time $t$ is \begin{equation} N_i^{\rm out}(t) = \int\limits_0^t dt' \; O_i(t') \, . \end{equation} Hence, the number $\Delta N_i(t)$ of delayed vehicles is given by \begin{equation} \Delta N_i(t) = \int\limits_0^t dt' \; [A_i(t'-L_i/V_i^0) - O_i(t')] \ge 0 \, . \end{equation} Alternatively, one can use the following differential equation for the temporal change in the number of delayed vehicles: \begin{equation} \frac{d\,\Delta N_i}{dt} = A_i(t-L_i/V_i^0) - O_i(t) \, . \label{delayed} \end{equation} In contrast, the number of {\em all} vehicles on road section $i$ (independently of whether they are delayed or not) changes in time according to \begin{equation} \frac{dN_i}{dt} = A_i(t) - O_i(t) \, . \end{equation} \subsubsection{Method 2: Jam Formation and Resolution} In our simple macroscopic traffic model, the formation and resolution of traffic jams is described by the shock wave equations, where we have the two characteristic speeds $V_i^0$ (the free speed) and $c$ (the jam resolution speed).
According to the theory of shock waves \cite{LW,Whitham}, the upstream end of a traffic jam, which is located at a place $l_i(t)\ge0$ upstream of the end of road section $i$, is moving at the speed \begin{equation} \frac{dl_i}{dt} = - \frac{A_i\big(t-[L_i-l_i(t)]/V_i^0\big) - O_i\big(t-l_i(t)/c\big) }{\rho_1(t) - \rho_2(t)} \, \label{shock} \end{equation} with the (free) density \begin{equation} \rho_1(t) = A_i\big(t-[L_i-l_i(t)]/V_i^0\big)/V_i^0 \end{equation} immediately before the upstream shock front and the (congested) density \begin{equation} \rho_2(t) = [1-TO_i\big(t-l_i(t)/c\big)]\rho_{\rm max} \end{equation} immediately downstream of it. This is because free traffic is upstream of the shock front, and congested traffic downstream of it (for details see Eqs. (1.6) and (1.4) in Ref.~\cite{control}). In contrast, the downstream front of a traffic jam is moving at the speed \begin{equation} - \frac{0 - O_i\big(t-l_i(t)/c\big) }{\rho_{\rm max} - O_i\big(t-l_i(t)/c\big)/V_i^0 } = \frac{O_i\big(t-l_i(t)/c\big) }{\rho_{\rm max} - O_i\big(t-l_i(t)/c\big)/V_i^0 } \, , \end{equation} since congested traffic with zero flow is upstream of the shock front and free traffic flow occurs downstream of it. \subsubsection{Comparison of the Two Methods}\label{compa} Let us discuss a simple example to make the differences between the two descriptions clearer. For this, we assume that, at time $t=0$, traffic flow on the whole road section $i$ is free, i.e. any traffic jam has resolved and there are no delayed vehicles. The flow shall be stopped by a red traffic light for a time period $t_0$. At time $t=t_0$, the traffic light shall turn green, and the formed traffic jam shall resolve. For the arrival flow, we simply assume a constant value $A_i$, and the road section shall be long enough to take up the forming traffic jam. Moreover, the departure flow shall be $O_i$.
Then, according to method 1, the number of delayed vehicles at time $t_0$ is \begin{equation} \Delta N_i(t_0) = A_i t_0 \, , \end{equation} and it is reduced according to \begin{equation} \Delta N_i(t) = A_i t_0 - (O_i - A_i) (t-t_0) \, . \end{equation} Therefore, any delays are resolved after a time period \begin{equation} t-t_0 = \frac{A_it_0}{O_i - A_i} = \frac{\Delta N_i(t_0)}{O_i -A_i} \, , \label{dis1} \end{equation} i.e. at time \begin{equation} t_2 = t_0 \frac{O_i}{O_i - A_i} \, . \label{acco} \end{equation} Afterwards, $\Delta N_i(t) = 0$. \par In contrast, the end of the traffic jam grows with the speed \begin{equation} \frac{dl_i}{dt} = - \frac{A_i - 0} {A_i/V_i^0 - (1-0)\rho_{\rm max}} = \frac{1}{\rho_{\rm max}/A_i - 1/V_i^0} =: C_i \, . \end{equation} Therefore, we have $l_i(t_0) = C_i t_0$. Surprisingly, this is greater than $\Delta N_i(t_0)/\rho_{\rm max}$, i.e. the expected length of the traffic jam based on the number of delayed vehicles. The reason is that the delay of a vehicle joining the traffic jam at location $x_i = L_i - l_i$ is noticed at the downstream end of the road section only after a time period $l_i/V_i^0$. \par The resolution of the traffic jam starts from the downstream end with the speed \begin{equation} \frac{0-\widehat{Q}_i}{\rho_{\rm max} - \widehat{Q}_i/V_i^0} = \frac{-1}{\rho_{\rm max}/\widehat{Q}_i - 1/V_i^0} = -c \, , \end{equation} if the outflow is free (i.e. $O_i = \widehat{Q}_i$), otherwise with the speed \begin{equation} \frac{0-O_i}{\rho_{\rm max} - (\rho_{\rm max} - O_i/c)} = - c \, , \end{equation} since congested traffic with zero flow and maximum density is upstream of the shock front. \par Obviously, the jam resolution front has caught up with the still growing upstream jam front when $C_it = c(t-t_0)$. Therefore, the jam of density $\rho_{\rm max}$ has disappeared after a time period $t-t_0 = C_it_0/(c - C_i)$, i.e. at time \begin{equation} t_1 = ct_0/(c - C_i) \, .
\end{equation} Surprisingly, it can be shown that $t_1 < t_2$, i.e. the traffic jam resolves before the number of delayed vehicles reaches a value of zero. In fact, it still takes the time $C_it_1/\widehat{V}_i^0$ until the last delayed vehicle has left the road section, where \begin{equation} \widehat{V}_i^0 = \frac{A_i -O_i}{A_i/V_i^0 - (\rho_{\rm max} - O_i/c)} \end{equation} is the speed of the shock front between the free upstream traffic flow and the congested outflow $O_i$, which usually differs from the speed $V_i = O_i/[(1-TO_i)\rho_{\rm max}]$ of outflowing vehicles. For $O_i = \widehat{Q}_i$, we have $\widehat{V}_i^0 = V_i^0$ because of $1/c = \rho_{\rm max}/\widehat{Q}_i - 1/V_i^0$. \par Undelayed traffic starts when this shock front reaches the end of the road section, i.e. at time \begin{equation} t_2 = t_1 \left(1 + \frac{C_i}{\widehat{V}_i^0}\right) = \frac{t_0}{1 - C_i/c} \left( 1 + \frac{C_i(A_i/V_i^0-\rho_{\rm max}) + C_iO_i/c}{A_i - O_i} \right) \, . \end{equation} Inserting $C_i(A_i/V_i^0-\rho_{\rm max}) = - A_i$ eventually gives $t_2 = t_0 O_i/(O_i - A_i)$. This agrees perfectly with the above result for the first method (based on vehicle delays rather than traffic jams). \par In conclusion, both methods of dealing with traffic jams are consistent, and delayed vehicles occur as soon as traffic jam formation begins. However, according to method 1, a queued vehicle at position $x_i = L_i - l_i$ is counted as delayed only after an extra time period $l_i/V_i^0$, and it is likewise counted as undelayed only after the same extra time period. This is because method 1 counts on the basis of vehicle arrivals at the downstream end of road section $i$. \par As it is much simpler to use method 1, based on determining the number of delayed vehicles, than method 2, based on determining the movement of shock fronts, we will use method 1 in the following.
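As a quick plausibility check of method 1, one can integrate the balance equation for the delayed vehicles through a red phase and verify the dissolution time $t_2 = t_0 O_i/(O_i - A_i)$ numerically. The following sketch is entirely our own illustration (not part of the model implementation), with freely chosen parameter values:

```python
# Illustrative sketch: integrate d(Delta N)/dt = A - O for a red light of
# duration t0, followed by a constant outflow O > A, and compare the time at
# which the delays vanish with the analytical result t2 = t0*O/(O - A).
# Since the arrival flow is constant, the shift by L_i/V_i^0 drops out here.
dt = 0.01                      # time step [s]
t0, A, O = 30.0, 0.4, 1.0      # red phase [s]; arrival and departure flows [veh/s]
t, dN = 0.0, 0.0
while True:
    outflow = O if (t >= t0 and dN > 0.0) else 0.0
    dN += (A - outflow) * dt   # delayed-vehicle balance
    t += dt
    if t > t0 and dN <= 0.0:
        break
t2 = t0 * O / (O - A)          # analytical dissolution time: 50 s here
```

For these values the loop terminates at $t\approx 50\,$s, in agreement with the analytical result.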
More specifically, in Eq.~(\ref{one}) we will replace $l_j(t) < L_j$ by $\Delta N_j(t) < N_j^{\rm max} := L_j \rho_{\rm max}$ and $l_j(t) = L_j$ by $\Delta N_j(t) = N_j^{\rm max}$. This corresponds to a situation in which the vehicles would not queue up along the road section, but at the downstream end of the road section, as in a wide parking lot or on top of each other. As long as road section $j$ is not fully congested, this difference does not matter significantly. If it is fully congested, the dynamics will potentially be different, defining a modified model of material network flows. However, both the original and the modified model fulfill the conservation equation and show spill-over effects. \subsubsection{Calculation of Cumulative and Maximum Individual Waiting Times} In Ref. \cite{JPhysA}, we have derived a delay differential equation to determine the travel time $T_i(t)$ of a vehicle entering road section $i$ at time $t$ (see also Refs. \cite{Astarita1995,Astarita2002,Carey2003}): \begin{equation} \frac{dT_i(t)}{dt} = \frac{A_i(t)}{O_i(t+T_i(t))} - 1 \, . \label{travtime} \end{equation} According to this, the travel time $T_i(t)$ increases with time, when the arrival rate $A_i$ at the time $t$ of entry exceeds the departure rate $O_i$ at the leaving time $t+T_i(t)$, while it decreases when it is lower. It is remarkable that this formula does not explicitly depend on the velocities on the road section, but only on the arrival and departure rates. \par Another method to determine the travel times is to integrate over the number of vehicles arriving in road section $i$, \begin{equation} N_i^A(t) = \int\limits_0^t dt' \; A_i(t') = N_i^{\rm in}(t+L_i/V_i^0) \, , \end{equation} and over the number of vehicles leaving it, \begin{equation} N_i^O(t) = \int\limits_0^t dt' \; O_i(t') = N_i^{\rm out}(t) \, , \end{equation} starting at time $t=0$ when there are no vehicles on the road.
If $T'_i(t)$ denotes the time period after which $N_i^O(t+T'_i(t)) = N_i^A(t)$, then $T'_i(t)$ is the travel time of a vehicle entering road section $i$ at time $t$ and \begin{equation} T_i(t) = T'_i(t) - L_i/V_i^0 \end{equation} is its waiting time. \par Another interesting quantity is the cumulative waiting time $T_i^{\rm c}(t)$, which is determined by integrating over the number $\Delta N_i$ of all delayed vehicles. We obtain \begin{eqnarray} T_i^{\rm c}(t) &=& \int\limits_0^t dt' \; \Delta N_i(t') = \int\limits_0^t dt'\; [N_i^{\rm in}(t') - N_i^{\rm out}(t')] \nonumber \\ &=& \int\limits_0^t dt' \int\limits_0^{t'} dt^{\prime\prime} \; [A_i(t^{\prime\prime}-L_i/V_i^0) - O_i(t^{\prime\prime}) ] \end{eqnarray} and the differential equation \begin{equation} \frac{dT_i^{\rm c}(t)}{dt} = \Delta N_i(t) = \int\limits_0^t dt' \; [A_i(t'-L_i/V_i^0) - O_i(t') ] \, . \end{equation} For a constant arrival flow $A_i$ and a red traffic light from $t=0$ to $t=t_0$ (i.e. $O_i(t) =0$), we find \begin{equation} T_i^{\rm c}(t) = \frac{A_i t^2}{2} \, . \end{equation} During this time period, $\Delta N_i(t_0) = A_i t_0$ vehicles accumulate, which gives an average waiting time of \begin{equation} \frac{T_i^{\rm c}(t_0)}{\Delta N_i(t_0)} = \frac{t_0}{2} \end{equation} at the end of the red light. The first vehicle has to wait twice as long, namely, a time period $t_0$. \section{Treatment of Merging, Diverging and Intersection Points}\label{Sec3} While the last section has given general formulas that must be fulfilled at nodes connecting two or more different links, in the following we will give some concrete examples of how to deal with standard elements of street networks. For previous treatments of traffic flows at intersections see, for example, Refs.~\cite{Piccoli1,Hilliges,CellTransmission,Lebacque}.
\begin{figure}[htbp] \begin{center} \includegraphics[width=13cm, angle=0]{crossing} \end{center} \caption[]{Schematic illustration of the (a) diverging, (b) merging, and (c) intersecting flows discussed in this paper.\label{Fig1}} \end{figure} \subsection{Diverging Flows: One Inflow and Several Outflows} In the case of one road section $i$ diverging into several road sections $j$ (see Fig.~\ref{Fig1}a), Eqs.~(\ref{nomerge}) and (\ref{easy1}) to (\ref{inequal}) imply \begin{eqnarray} O_i(t) &\le& \min \left\{ \gamma_i(t) \max \left[\widehat{Q}_i\Theta(\Delta N_i > 0), A_i\left(t-\frac{L_i}{V_i^0}\right)\right] ,\right. \nonumber \\ & & \qquad\quad \left. \min_j \left[ \frac{I'_j}{I_i\alpha_{ij}} \max \left( \widehat{Q}_j \Theta(l_j < L_j), O_j(t- L_j / c) \right) \right] \right\} \end{eqnarray} for all $i$. If we assume that downstream road sections are never completely congested, this simplifies to \begin{equation} O_i(t) = \min \left\{ Q_i, \gamma_i \max \left[\widehat{Q}_i\Theta(\Delta N_i > 0), A_i\left(t-L_i/V_i^0 \right)\right] \right\} \end{equation} with \begin{equation} Q_i = \min_j \left( \frac{I'_j\widehat{Q}_j}{I_i\alpha_{ij}} \right) \, . \end{equation} Otherwise, \begin{equation} Q_i(t) = \min_j \left[ \max\left( \frac{I'_j\widehat{Q}_j}{I_i\alpha_{ij}}\Theta(l_j < L_j) , \frac{I'_jO_j(t-\frac{L_j}{c})}{I_i\alpha_{ij}} \right)\right] \, . \label{otherwise} \end{equation} \subsection{Merging Flows: Two Inflows and One Outflow} We assume a flow $I_1O_1(t)$ that splits into the two flows $I_1O_1(t)\alpha_{11}$ (going straight) and $I_1O_1(t)\alpha_{12}$ (turning right), and a right-turning flow $I_2O_2(t)$ that merges with the flow $I_1O_1(t) \alpha_{11}$, as in turn-right-on-red setups (see Fig.~\ref{Fig1}b). For this situation, we have the equations \begin{eqnarray} I'_1A'_1(t) &=& I_1O_1(t) \alpha_{11} + I_2O_2(t) \le I'_1\widehat{A}'_1(t) \, , \\ I'_2A'_2(t) &=& I_1O_1(t) \alpha_{12} \le I'_2\widehat{A}'_2(t) \, .
\end{eqnarray} One can derive \begin{equation} 0 \le O_1 = \min\Big[ \widehat{O}_1(t), \frac{I'_1\widehat{A}'_1(t)-I_2O_2(t)}{I_1\alpha_{11}}, \frac{I'_2\widehat{A}'_2(t)}{I_1\alpha_{12}}\Big] \end{equation} and \begin{equation} 0 \le O_2 = \min\Big[ \widehat{O}_2(t), \frac{I'_1\widehat{A}'_1(t)-I_1O_1(t)\alpha_{11}}{I_2} \Big] \, . \end{equation} Let us set \begin{equation} O_1 = \min\Big[\widehat{O}_1, \frac{I'_1\widehat{A}'_1(t)}{I_1\alpha_{11}}, \frac{I'_2\widehat{A}'_2(t)}{I_1\alpha_{12}}\Big] \label{speci} \end{equation} and \begin{equation} O_2(O_1) = \min\Big[ \widehat{O}_2(t), \frac{I'_1\widehat{A}'_1(t)-I_1O_1\alpha_{11}}{I_2} \Big] \, . \label{maxi} \end{equation} Then, it can be shown that $O_2(t) \ge 0$ and $O_1(t) \le [I'_1\widehat{A}'_1(t)-I_2O_2(t)]/(I_1\alpha_{11})$, as demanded. If $O_1(t)$ is chosen by a value $\Delta O_1$ smaller than specified in Eq.~(\ref{speci}), but $O_2$ is still set to the maximum related value $O_2(O_1-\Delta O_1)$ according to Eq.~(\ref{maxi}), the overall flow \begin{equation} F=I_1O_1+I_2O_2 \end{equation} is reduced as long as $\alpha_{11} < 1$, since this goes along with additional turning flows (while the number of lanes does not matter!). Therefore, it is optimal to give priority to the outflow $O_1(t)$ according to Eq.~(\ref{speci}) and to add as much outflow $O_2(t)$ as capacity allows. This requires suitable flow control measures; otherwise, the optimum value of the overall flow $F$ cannot be reached. In fact, the merging flow would ``steal'' some of the capacity reserved for the ``main'' flow ($i=1$), which would reduce the possible outflow $O_1(t)$ and potentially cause a breakdown of free traffic flow, as is known from on-ramp areas of freeways \cite{mitTreiber}.
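To illustrate the priority rule, the following sketch (the function name and all numerical values are our own hypothetical choices) evaluates the two minimum formulas above for a single-lane merge:

```python
# Sketch of the merge priority rule: serve the "main" flow O1 at its
# unconstrained maximum first, then fill the remaining capacity with O2.
# Variable names mirror the text: O*_hat are potential outflows, A*p_hat are
# maximum possible arrival flows of the outgoing sections, I* are lane numbers,
# a11/a12 are the turning fractions of flow 1.
def merge_outflows(O1_hat, O2_hat, A1p_hat, A2p_hat,
                   I1, I2, I1p, I2p, a11, a12):
    O1 = min(O1_hat, I1p * A1p_hat / (I1 * a11), I2p * A2p_hat / (I1 * a12))
    O2 = min(O2_hat, (I1p * A1p_hat - I1 * O1 * a11) / I2)
    return O1, O2

# single lanes everywhere; 80% of flow 1 goes straight, 20% turns right
O1, O2 = merge_outflows(O1_hat=1.8, O2_hat=1.0, A1p_hat=2.0, A2p_hat=1.0,
                        I1=1, I2=1, I1p=1, I2p=1, a11=0.8, a12=0.2)
```

Here the main flow is served at its full potential outflow of 1.8, and the merging flow receives the remaining downstream capacity of 0.56.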
\subsection{A Side Road Merging with a Main Road} Compared to the last section, the situation simplifies if we have just a side road or secondary turning flow merging with the flow of a main road, without any turning flow away from the main road. In this case, we have $\alpha_{11}=1$ and $\alpha_{12} =0$, which leaves us with the relationships \begin{equation} O_1 = \min\Big[\widehat{O}_1, \frac{I'_1\widehat{A}'_1(t)}{I_1}\Big] \end{equation} and \begin{equation} O_2(O_1) = \min\Big[ \widehat{O}_2(t), \frac{I'_1\widehat{A}'_1(t)-I_1O_1}{I_2} \Big] \, \end{equation} according to Eqs.~(\ref{speci}) and (\ref{maxi}). \subsection{Intersection-Free Designs of Road Networks} With the formulas for the treatment of merges and diverges in the previous sections, it is already possible to simulate intersection-free designs of urban road networks, which do not need any traffic light control. The most well-known design of intersection-free nodes is the roundabout (see the upper left illustration in Fig.~\ref{free}). It is, however, also possible to construct other intersection-free designs based on subsequent merges and diverges of flows with different destinations. Two examples are presented in Fig.~\ref{free}b and c. \par \begin{figure}[htbp] \begin{center} \includegraphics[width=5.5cm, angle=0]{free1}\hspace*{1cm} \includegraphics[width=6cm, angle=0]{cheese}\\ \includegraphics[width=6cm, angle=0]{hybrid} \end{center} \caption[]{Three examples for intersection-free designs of urban road networks.\label{free}} \end{figure} Although intersection-free designs require the driver to take small detours, such a road network will normally save travel time and fuel, given that the traffic volume is not too low.
This is because intersections then need to be signalized in order to be safe and efficient.\footnote{Of course, a first-come-first-serve or right-before-left rule will be sufficient at small traffic volumes.} Traffic signals, however, imply that vehicles will often be stopped for considerable time intervals. This causes significant delays, at least for vehicles not being served by a green wave. Intersection-free designs, in contrast, do not necessarily require vehicles to stop. Therefore, the average speeds are expected to be higher and the travel times lower than for road networks with intersections. This has significant implications for urban transport planning, if intersections cannot be avoided by bridges or tunnels. \subsection{Two Inflows and Two Outflows} The treatment of intersecting flows is more complicated than the treatment of merges and diverges. Moreover, the resulting flows are uniquely defined only if additional rules are introduced, such as the optimization of the overall flow. Let us here treat the case of an intersection with two inflows and two outflows (see Fig.~\ref{Fig1}c). Equation (\ref{easy1}) implies the inequalities \begin{eqnarray} & & 0 \le I'_1A'_1(t) = I_1O_1(t) \alpha_{11} + I_2O_2(t) \alpha_{21} \le I'_1 \widehat{A}'_1(t) \, , \nonumber \\ & & 0 \le I'_2A'_2(t) = I_1O_1(t) \alpha_{12} + I_2O_2(t) \alpha_{22} \le I'_2 \widehat{A}'_2(t) \label{constraints} \end{eqnarray} with the constraints \begin{eqnarray} & & 0 \le O_1(t) \le \widehat{O}_1(t) \, , \nonumber \\ & & 0 \le O_2(t) \le \widehat{O}_2(t) \, , \label{rectangle} \end{eqnarray} so that $I'_jA'_j(t) \ge 0$ is automatically fulfilled. The constraints (\ref{rectangle}) define a rectangular area of possible $O_i$-values in the $O_1$-$O_2$ plane, where the size of the rectangle varies due to the time-dependence of $\widehat{O}_i(t)$.
The inequalities (\ref{constraints}) can be rewritten as \begin{equation} O_2(t) \le \frac{I'_1\widehat{A}'_1(t) - I_1O_1(t)\alpha_{11}}{I_2\alpha_{21}} =: a_1 - b_1 O_1(t) \, , \label{const1} \end{equation} and \begin{equation} O_2(t) \le \frac{I'_2\widehat{A}'_2(t) - I_1O_1(t)\alpha_{12}}{I_2\alpha_{22}} =: a_2 - b_2 O_1(t) \, . \label{const2} \end{equation} They potentially cut away parts of this rectangle, and the remaining part defines the convex set of feasible points $(O_1,O_2)$ at time $t$. We are interested in identifying the ``optimal'' solution $(O_1^*,O_2^*)$, which maximizes the overall flow \begin{equation} \sum_j I'_j A'_j(t) = \sum_i I_i O_i(t) \, . \end{equation} As this defines a linear optimization problem, the optimal solution corresponds to one of the corners of the convex set of feasible points, namely the one which is touched first by the line \begin{equation} O_2 = \frac{Z - I_1 O_1}{I_2} \, , \label{goal1} \end{equation} when we reduce $Z$ from high to low values. \par Let us, therefore, determine all possible corners of the convex set and the conditions under which they correspond to the optimal solution. We will distinguish the following cases: \begin{itemize} \item[(a)] {\it None} of the boundary lines (\ref{const1}) and (\ref{const2}) corresponding to the equality signs cuts the rectangle defined by $0 \le O_1(t) \le \widehat{O}_1(t)$ and $0 \le O_2(t) \le \widehat{O}_2(t)$ in more than one point. This case applies if $a_1 - b_1\widehat{O}_1 \ge \widehat{O}_2$ and $a_2 - b_2\widehat{O}_1 \ge \widehat{O}_2$, as $a_i\ge 0$ and $b_i \ge 0$ implies that both lines are falling or at least not increasing. Since the line (\ref{goal1}) reflecting the goal function is falling as well, the optimal point is \begin{equation} (O_1^*,O_2^*) = (\widehat{O}_1,\widehat{O}_2) \, , \end{equation} i.e. the outer corner of the rectangle corresponding to the potential or maximum possible departure flows (see Fig.~\ref{Fig2}).
\begin{figure}[htbp] \begin{center} \includegraphics[width=6cm, angle=0]{simplex} \end{center} \caption[]{Illustration of the possible optimal solutions for two intersecting flows (see text for details).\label{Fig2}} \end{figure} \item[(b)] Only {\it one} of the two boundary lines, $O_2(t) = a_1-b_1O_1(t)$ or $O_2(t) = a_2 - b_2O_1(t)$, cuts the rectangle in {\it more} than one point. Let us assume this holds for line $i$, i.e. $a_i - b_i\widehat{O}_1 < \widehat{O}_2$. Then, the left cutting point \begin{equation} \qquad (O_1^{i{\rm l}},O_2^{i{\rm l}}) = \left\{ \begin{array}{ll} \Big((a_i - \widehat{O}_2)/b_i,\widehat{O}_2\Big) & \mbox{if } a_i > \widehat{O}_2 \, ,\\ (0,a_i) & \mbox{otherwise} \end{array}\right. \end{equation} is the optimal point if $I_1/I_2 < b_i$, i.e. if the slope $I_1/I_2$ of the goal function (\ref{goal1}) is smaller than that of the cutting boundary line. Otherwise, if $I_1/I_2 > b_i$, the optimal point is given by the right cutting point \begin{equation} \qquad (O_1^{i{\rm r}},O_2^{i{\rm r}}) = \left\{ \begin{array}{ll} (\widehat{O}_1,a_i-b_i\widehat{O}_1) & \mbox{if } a_i > b_i\widehat{O}_1 \, , \\ (a_i/b_i,0) & \mbox{otherwise} \end{array}\right. \end{equation} (see Fig.~\ref{Fig2}). \item[(c)] If {\it both} boundary lines cut through the rectangle, but one of them lies above the other line, then only the lower line determines the optimal solution, which can be obtained as in case (b). Case (c) occurs if $a_2-b_2O_1^{1{\rm l}} > a_1-b_1O_1^{1{\rm l}}$ and $a_2-b_2O_1^{1{\rm r}} > a_1-b_1O_1^{1{\rm r}}$ (line 1 is the lower one) or if $a_2-b_2O_1^{1{\rm l}} < a_1-b_1O_1^{1{\rm l}}$ and $a_2-b_2O_1^{1{\rm r}} < a_1-b_1O_1^{1{\rm r}}$ (line 2 is the lower one). \item[(d)] The boundary lines cut each other in the inner part of the rectangle. This occurs if $a_1 - b_1\widehat{O}_1 < \widehat{O}_2$ and $a_2 - b_2\widehat{O}_1 < \widehat{O}_2$.
Then, the left-most cutting point $(O_1^{i{\rm l}},O_2^{i{\rm l}})$ is the optimal solution if the slope $I_1/I_2$ of the goal function is smaller than the smallest slope of the two boundary lines, while it is the lower right cutting point $(O_1^{i{\rm r}},O_2^{i{\rm r}})$ if $I_1/I_2$ is greater than the steepest slope of the two boundary lines. Otherwise, the cutting point of the two boundary lines, \begin{equation} (O'_1,O'_2) = \left( \frac{a_2 - a_1}{b_2-b_1} , \frac{a_1b_2 - b_1a_2}{b_2-b_1} \right) \, , \end{equation} is the optimal point (see Fig.~\ref{Fig2}). Mathematically speaking, we have \begin{equation} (O_1^*,O_2^*) = \left\{ \begin{array}{ll} (O_1^{\rm 1l},O_2^{\rm 1l}) & \mbox{if } I_1/I_2 < b_1 < b_2, \\ (O'_1,O'_2) & \mbox{if } b_1 < I_1/I_2 < b_2, \\ (O_1^{\rm 2r},O_2^{\rm 2r}) & \mbox{if } b_1 < b_2 < I_1/I_2, \\ (O_1^{\rm 2l},O_2^{\rm 2l}) & \mbox{if } I_1/I_2 < b_2 < b_1, \\ (O'_1,O'_2) & \mbox{if } b_2 < I_1/I_2 < b_1, \\ (O_1^{\rm 1r},O_2^{\rm 1r}) & \mbox{if } b_2 < b_1 < I_1/I_2. \end{array}\right. \end{equation} \end{itemize} \par It is astonishing that the simple problem of two intersecting traffic flows has so many different optimal solutions, which sensitively depend on the parameter values. These range from situations where both outgoing road sections experience the maximum possible outflows up to situations where the outflow in the system-optimal point becomes zero for one of the road sections. A transition from one optimal solution to another one could easily be triggered by changes in the turning fractions $\alpha_{ij}$ entering the parameters $a_i$ and $b_i$, for example due to time-dependent turning fractions $\alpha_{ij}(t)$. \subsection{Inefficiencies due to Coordination Problems} An interesting question is how to actually establish the flows corresponding to the system optima that were determined in the previous sections on merging and intersecting flows.
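The case distinctions above can also be cross-checked numerically: since the optimum of a linear program is attained in a corner of the feasible polygon, it suffices to enumerate all candidate corner points. The following brute-force sketch is entirely our own (the function name and test values are hypothetical, and $b_1, b_2 > 0$ is assumed, as in the text):

```python
# Brute-force check of the corner-point analysis: maximize I1*O1 + I2*O2 over
# the box [0, O1_hat] x [0, O2_hat] cut by O2 <= a_i - b_i*O1 (i = 1, 2).
def optimal_flows(O1_hat, O2_hat, a1, b1, a2, b2, I1, I2):
    eps = 1e-9
    cand = [(0.0, 0.0), (O1_hat, 0.0), (0.0, O2_hat), (O1_hat, O2_hat)]
    for a, b in ((a1, b1), (a2, b2)):
        # cutting points of each constraint line with the box edges
        cand += [(0.0, a), (a / b, 0.0),
                 (O1_hat, a - b * O1_hat), ((a - O2_hat) / b, O2_hat)]
    if b1 != b2:
        # cutting point of the two constraint lines
        x = (a2 - a1) / (b2 - b1)
        cand.append((x, a1 - b1 * x))
    feasible = [(x, y) for x, y in cand
                if -eps <= x <= O1_hat + eps and -eps <= y <= O2_hat + eps
                and y <= a1 - b1 * x + eps and y <= a2 - b2 * x + eps]
    return max(feasible, key=lambda p: I1 * p[0] + I2 * p[1])

# case (a): both constraints slack -> outer corner of the box
corner = optimal_flows(1.0, 1.0, a1=5.0, b1=1.0, a2=5.0, b2=0.5, I1=1.0, I2=1.0)
# case (d) with b2 < I1/I2 < b1: optimum at the intersection of the two lines
inner = optimal_flows(1.0, 1.0, a1=1.2, b1=1.0, a2=1.0, b2=0.5, I1=0.7, I2=1.0)
```

For the first parameter set the check returns the outer corner $(\widehat{O}_1,\widehat{O}_2)$, for the second the cutting point of the two boundary lines, in agreement with the corresponding rows of the case table.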
Of course, zero flows can be enforced by a red traffic light, while maximum possible flows can be established by a node design giving the right of way to one road (the ``main'' road). However, it is not so easy to support an optimum point corresponding to mixed flows, such as $(O'_1,O'_2)$. That would need quite tricky intersection designs or the implementation of an intelligent transportation system ensuring optimal gap usage, e.g. based on intervehicle communication. Only in special cases could the task be performed by a suitable traffic light control. \par In normal merging or intersection situations, there will always be coordination problems \cite{mitJohansson} when entering or crossing another flow, if the traffic volumes reach a certain level. This will cause inefficiencies in the usage of the available road capacity, i.e. mixed flows will not be able to use the full capacity. Such effects can be modelled by specifying the corresponding permeabilities $\gamma_i(t)$ as a function of the merging flows, particularly the main flow or crossing flow. The deviation of $\gamma_i(t)$ from 1 will then be a measure for the inefficiency. A particularly simple, phenomenological specification would be \begin{equation} \gamma_2(t) = \frac{1}{1+ a\mbox{e}^{b(O_1-O_2)}} \, , \label{phen} \end{equation} where the own outflow $O_2$ supports a high permeability, while the intersecting outflow $O_1$ suppresses it. However, rather than using such a phenomenological approach, the permeability could also be calculated analytically, based on a model of gap statistics, since large enough vehicle gaps are needed to join or cross a flow. Such kinds of calculations have been carried out in Refs.~\cite{analyticJiang,Biometrica,Troutbeck1986,Troutbeck1997}.
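For illustration, the phenomenological specification of the permeability can be evaluated numerically. This is a quick sketch of ours; the parameter values $a=1$ and $b=5$ as well as the flow values are arbitrary choices:

```python
import math

# Evaluate gamma_2 = 1/(1 + a*exp(b*(O1 - O2))) for two illustrative
# situations: a dominant crossing flow O1 and a dominant own flow O2.
def permeability(O1, O2, a=1.0, b=5.0):
    return 1.0 / (1.0 + a * math.exp(b * (O1 - O2)))

suppressed = permeability(O1=1.0, O2=0.2)  # strong crossing flow O1
supported = permeability(O1=0.2, O2=1.0)   # strong own flow O2
```

As intended, a strong crossing flow suppresses the permeability towards 0, while a strong own flow supports a permeability close to 1.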
\section{Towards a Self-Organized Traffic Light Control}\label{Sec4} In Ref.~\cite{control}, it has been pointed out that, for not too small arrival flows, an oscillatory service at intersections reaches higher intersection capacities and potentially shorter waiting times than a first-in-first-out service of arriving vehicles. This is due to the fact that the outflow of queued vehicles is more efficient than waiting for the arrival of other freely flowing vehicles, which have larger time gaps. For similar reasons, pedestrians pass a bottleneck in an oscillatory way \cite{mitMolnar}, and two intersecting flows also tend to organize themselves in an oscillatory way \cite{analyticJiang,TransportationScience}. \par Therefore, using traffic lights at intersections is natural and useful, if they are operated in the right way. But how should the traffic lights be switched optimally? While this is a solvable problem for single traffic lights, the optimal coordination of many traffic lights \cite{Papadimitriou1999} is a really hard (actually NP-hard) problem \cite{Schutter2002}. Rather than solving a combinatorial optimization problem, here we want to suggest a novel approach, which needs further elaboration in the future. The idea is to let the network flows organize themselves, based on suitable equations for the permeabilities $\gamma_i(t)$ as a function of the outflows $O_i(t)$ and the numbers $\Delta N_i(t)$ of delayed vehicles. \par Here, we will study the specification \begin{equation} \gamma_1(t) = \frac{1}{1+ a\mbox{e}^{b(O_2-O_1) - cD}} \label{pre1} \end{equation} and \begin{equation} \gamma_2(t) = \frac{1}{1+ a\mbox{e}^{b(O_1-O_2) + cD}} \, , \label{pre2} \end{equation} which generalizes formula (\ref{phen}).
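Before discussing the properties of this specification in detail, a rough numerical sketch indicates that it indeed produces self-organized switching between the two flows rather than a stationary compromise. The simple Euler scheme below is entirely our own illustration; the parameter values follow the choices made in the text ($a=1$, $c=100$, and $b$ adapted to the inflows), and the exponent is clipped only to avoid numerical overflow:

```python
import math

# Euler sketch of the coupled permeability dynamics: two flows compete for one
# intersection, queues Delta N_i build up, and the logistic permeabilities
# switch the service between the two roads in an oscillatory way.
def logistic(x, a=1.0):
    x = max(-60.0, min(60.0, x))       # clip to avoid overflow in exp()
    return 1.0 / (1.0 + a * math.exp(x))

Q_hat, A1, A2, c = 1.0, 0.3, 0.4, 100.0
b = 500.0 / (Q_hat - (A1 + A2))        # adaptation of b to the inflows
dt, dN1, dN2, O1, O2 = 0.01, 0.0, 0.0, 0.0, 0.0
g1_trace = []
for _ in range(20000):                 # 200 s of simulated time
    D = dN1 - dN2                      # relative queue length
    g1 = logistic(b * (O2 - O1) - c * D)
    g2 = logistic(b * (O1 - O2) + c * D)
    O1 = g1 * (Q_hat if dN1 > 0 else A1)   # outflow: saturation flow if queued
    O2 = g2 * (Q_hat if dN2 > 0 else A2)
    dN1 = max(0.0, dN1 + (A1 - O1) * dt)   # delayed-vehicle balances
    dN2 = max(0.0, dN2 + (A2 - O2) * dt)
    g1_trace.append(g1)
```

In this sketch the recorded permeability $\gamma_1$ alternates between values close to 0 and close to 1, i.e. the two road sections are served in turn.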
While the relative queue length \begin{equation} D(t)=\Delta N_1(t) - \Delta N_2(t) \end{equation} quantifies the pressure to increase the permeability $\gamma_1$ for road section 1, the outflow $O_2(t)$ from road section 2 resists this tendency, while the flow $O_1(t)$ on road section 1 supports the permeability. The increasing pressure eventually changes the resistance threshold and the service priority. An analogous situation applies to the permeability $\gamma_2$ for road section 2, where the pressure corresponds to $-D$, i.e. to the queue length difference with opposite sign. $a$, $b$, and $c$ are non-negative parameters. $a$ may be set to 1, while $c$ must be large enough to establish sharp switching. Here, we have assumed $c=100$. The parameter $b$ allows one to influence the switching frequency $f$, which is approximately proportional to $b$. We have adjusted the frequency $f$ to the cycle time \begin{equation} T^{\rm cyc} = \frac{2\tau}{1-(A_1+A_2)/\widehat{Q}} \, , \label{cycl} \end{equation} which results if the switching (setup) time (``yellow traffic light'') is $\tau = 5$s and a green light is terminated as soon as the queue that formed during the red phase has dissolved.\footnote{If $\Delta T_1$ and $\Delta T_2$ denote the green time periods for the intersecting flows 1 and 2, respectively, the corresponding red time periods for a periodic signal control are $\Delta T_2 + 2\tau$ and $\Delta T_1 + 2\tau$, since the switching setup time of duration $\tau$ occurs twice per cycle. From formula (\ref{dis1}) and with $O_i = \widehat{Q}$ we obtain $\Delta T_1 = (\Delta T_2 +2\tau)A_1/(\widehat{Q}-A_1)$ and $\Delta T_2 = (\Delta T_1 +2\tau)A_2/(\widehat{Q}-A_2)$. Using the definition $T^{\rm cyc} = \Delta T_1 + \tau + \Delta T_2 + \tau$ for the cycle time, we finally arrive at Eq.~(\ref{cycl}).} The corresponding parameter value is \begin{equation} b = \frac{500}{\widehat{Q} - (A_1+A_2)} \, .
\label{choice} \end{equation} Figure \ref{ill} shows a simulation result for $A_1/\widehat{Q} = 0.3$ and $A_2/\widehat{Q} = 0.4$. \par\begin{figure}[htbp] \begin{center} \includegraphics[width=\textwidth, angle=0]{osc} \end{center} \caption[]{Illustration of the dynamics of self-organized oscillations in the permeabilities and the resulting flows for a single intersection with constant inflows (see text for details). Note that the road section with the higher inflow (arrival rate) is served longer, and its queues are shorter (see solid lines).\label{ill}} \end{figure} The properties of the corresponding specification of the permeabilities $\gamma_i(t)$ are as follows: \begin{itemize} \item $\gamma_i(t)$ is non-negative and does not exceed the value 1. \item For the sum of permeabilities and $a\ge 1$, we have \begin{equation} \gamma_1 +\gamma_2 = \frac{2+a(\mbox{e}^E + \mbox{e}^{-E})}{1+ a^2 + a(\mbox{e}^E + \mbox{e}^{-E})} \le 1 \, , \end{equation} where we have introduced the abbreviation \begin{equation} E = b(O_1 - O_2) + c(\Delta N_1 - \Delta N_2) \, . \end{equation} The sum is close to 1 for large absolute values of $E$, while for $E \approx 0$ the overall permeability $\gamma_1 + \gamma_2$ is small. \item For large enough values of $ab$ and for $c, A_1, A_2 >0$, the equations for the permeability do not have a stable stationary solution. This can be concluded from \begin{equation} \frac{dE}{dt} = b \left( \frac{dO_1}{dt} - \frac{dO_2}{dt}\right) + c \left( \frac{d\Delta N_1}{dt} - \frac{d\Delta N_2}{dt}\right) \end{equation} together with \begin{equation} \frac{d\Delta N_i}{dt} = A_i - O_i(t) \end{equation} and \begin{equation} O_i(t) = \gamma_i(t) \max[\widehat{Q}\Theta(\Delta N_i> 0), A_i] \, , \end{equation} see Eqs. (\ref{delayed}) and (\ref{easy2}). As $dD/dt = d\Delta N_1/dt - d\Delta N_2/dt$ varies around zero, the same applies to $D(t)$, which leads to oscillations of the permeabilities $\gamma_i(t)$.
\item With the specification (\ref{choice}) of parameter $b$, the cycle time is approximately proportional to the overall inflow $(A_1+A_2)$. \item The road section with the higher flow gets a longer green time period (see Fig. \ref{ill}). \end{itemize} If the above self-organization of traffic flows is to be transferred to a new principle of traffic light control, phases with $\gamma_i(t) \approx 1$ could be interpreted as green phases and phases with $\gamma_i(t)\approx 0$ as red phases. Inefficient, intermediate switching time periods for certain choices of parameter values could be translated into periods of a yellow traffic light. \section{Summary and Outlook}\label{Sec5} We have presented a simple model for conserved flows in networks. Although our specification has been illustrated for traffic flows in urban areas, similar models are useful for logistic and production systems or even for transport in biological cells or bodies. Our model considers propagation speeds of entities and congestion fronts, spill-over effects, and load-dependent transportation times. \par We have also formulated constraints for network nodes. These constraints contain several minimum and maximum functions, which implies a multitude of possible cases even for relatively simple intersections. It turns out that the arrival and departure flows of diverges have uniquely defined values, while merges or intersections have a set of feasible solutions. This means that the actual result may sensitively depend on the intersection design. For mathematical reasons, we have determined flow-optimizing solutions for two merging and two intersecting flows. However, it is questionable whether these solutions can be established in reality without the implementation of intelligent transport systems facilitating optimal gap usage: In many situations, coordination problems between vehicles in merging or intersection areas cause inefficiencies, which reduce their permeability.
\par In fact, at not too small traffic volumes, it is better to have an oscillation between minimum and maximum permeability values. Therefore, we have been looking for a mechanism producing emergent oscillations between high and low values. According to our proposed specification (which is certainly only one of many possible ones), the transition between high and low permeability was triggered, when the difference between the queue lengths of two traffic flows competing for the intersection capacity exceeded a certain value. The resulting oscillatory service could be used to {\it define} traffic phases. One potential advantage of such an approach would be that the corresponding traffic light control would be based on the self-organized dynamics of the system. Further work in this direction seems very promising. \section*{Acknowledgements} The authors are grateful for partial financial support by the German Research Foundation (research projects He 2789/5-1, 8-1) and by the ``Cooperative Center for Communication Networks Data Analysis'', a NAP project sponsored by the Hungarian National Office of Research and Technology under grant No.\ KCKHA005.
import AdaptableController from './AdaptableController'; import CacheAdapter from '../Adapters/Cache/CacheAdapter'; const KEY_SEPARATOR_CHAR = ':'; function joinKeys(...keys) { return keys.join(KEY_SEPARATOR_CHAR); } /** * Prefix all calls to the cache via a prefix string, useful when grouping Cache by object type. * * eg "Role" or "Session" */ export class SubCache { constructor(prefix, cacheController, ttl) { this.prefix = prefix; this.cache = cacheController; this.ttl = ttl; } get(key) { const cacheKey = joinKeys(this.prefix, key); return this.cache.get(cacheKey); } put(key, value, ttl) { const cacheKey = joinKeys(this.prefix, key); return this.cache.put(cacheKey, value, ttl); } del(key) { const cacheKey = joinKeys(this.prefix, key); return this.cache.del(cacheKey); } clear() { return this.cache.clear(); } } export class CacheController extends AdaptableController { constructor(adapter, appId, options = {}) { super(adapter, appId, options); this.role = new SubCache('role', this); this.user = new SubCache('user', this); } get(key) { const cacheKey = joinKeys(this.appId, key); return this.adapter.get(cacheKey).then(null, () => Promise.resolve(null)); } put(key, value, ttl) { const cacheKey = joinKeys(this.appId, key); return this.adapter.put(cacheKey, value, ttl); } del(key) { const cacheKey = joinKeys(this.appId, key); return this.adapter.del(cacheKey); } clear() { return this.adapter.clear(); } expectedAdapterType() { return CacheAdapter; } } export default CacheController;
Harry Douglas (active 1900s) was an English football outside right who played in the Football League for Middlesbrough and in non-league football for South Bank, Darlington and Bishop Auckland.

Life and career

Douglas was born in Hartlepool. He played football for Northern League club South Bank, and was included in the North Riding FA team for an inter-association match against West Yorkshire in September 1902, before signing for Football League First Division club Middlesbrough in January 1903. The move was controversial: the South Bank club took the matter to the Northern League committee, claiming that proper procedures had not been followed, but the meeting ruled that "there was no question of poaching" and that Douglas had requested a transfer. Meanwhile, he made his Football League debut on 17 January, standing in at outside right for the injured Robert Watson for the visit to West Bromwich Albion; Middlesbrough lost 1–0. Douglas played in three of the next four matches, but those were his last. He had remained an amateur while playing for Middlesbrough, and resumed his Northern League career with Darlington, for whom he scored twice in the 1904–05 FA Cup, and then Bishop Auckland.
require 'nokogiri'

module FoodCritic
  # Helper methods that form part of the Rules DSL.
  module Api
    include FoodCritic::AST
    include FoodCritic::XML
    include FoodCritic::Chef
    include FoodCritic::Notifications

    # Find attribute access by type.
    def attribute_access(ast, options = {})
      options = {:type => :any, :ignore_calls => false}.merge!(options)
      return [] unless ast.respond_to?(:xpath)
      unless [:any, :string, :symbol, :vivified].include?(options[:type])
        raise ArgumentError, "Node type not recognised"
      end
      case options[:type]
      when :any then
        vivified_attribute_access(ast, options[:cookbook_dir]) +
          standard_attribute_access(ast, options)
      when :vivified then
        vivified_attribute_access(ast, options[:cookbook_dir])
      else
        standard_attribute_access(ast, options)
      end
    end

    # Does the specified recipe check for Chef Solo?
    def checks_for_chef_solo?(ast)
      raise_unless_xpath!(ast)
      # TODO: This expression is too loose, but also will fail to match other
      # types of conditionals.
      ! ast.xpath(%q{//if/*[self::aref or self::call]
        [count(descendant::const[@value = 'Chef' or @value = 'Config']) = 2
          and
          ( count(descendant::ident[@value='solo']) > 0
            or count(descendant::tstring_content[@value='solo']) > 0 )
        ]}).empty?
    end

    # Is the [chef-solo-search library](https://github.com/edelight/chef-solo-search)
    # available?
    def chef_solo_search_supported?(recipe_path)
      return false if recipe_path.nil? || ! File.exists?(recipe_path)

      # Look for the chef-solo-search library.
      #
      # TODO: This will not work if the cookbook that contains the library
      # is not under the same `cookbook_path` as the cookbook being checked.
      cbk_tree_path = Pathname.new(File.join(recipe_path, '../../..'))
      search_libs = Dir[File.join(cbk_tree_path.realpath,
        '*/libraries/search.rb')]

      # True if any of the candidate library files match the signature:
      #
      #   class Chef
      #     def search
      search_libs.any? do |lib|
        ! read_ast(lib).xpath(%q{//class[count(descendant::const[@value='Chef']
          ) = 1]/descendant::def/ident[@value='search']}).empty?
      end
    end

    # The name of the cookbook containing the specified file.
    def cookbook_name(file)
      raise ArgumentError, 'File cannot be nil or empty' if file.to_s.empty?
      until (file.split(File::SEPARATOR) & standard_cookbook_subdirs).empty? do
        file = File.absolute_path(File.dirname(file.to_s))
      end
      file = File.dirname(file) unless File.extname(file).empty?
      # We now have the name of the directory that contains the cookbook.
      # We also need to consult the metadata in case the cookbook name has been
      # overridden there. This supports only string literals.
      md_path = File.join(file, 'metadata.rb')
      if File.exists?(md_path)
        name = read_ast(md_path).xpath("//stmts_add/
          command[ident/@value='name']/descendant::tstring_content/@value").to_s
        return name unless name.empty?
      end
      File.basename(file)
    end

    # The dependencies declared in cookbook metadata.
    def declared_dependencies(ast)
      raise_unless_xpath!(ast)

      # String literals.
      #
      #   depends 'foo'
      deps = ast.xpath(%q{//command[ident/@value='depends']/
        descendant::args_add/descendant::tstring_content[1]})

      # Quoted word arrays are also common.
      #
      #   %w{foo bar baz}.each do |cbk|
      #     depends cbk
      #   end
      var_ref = ast.xpath(%q{//command[ident/@value='depends']/
        descendant::var_ref/ident})
      unless var_ref.empty?
        deps += ast.xpath(%Q{//block_var/params/ident#{var_ref.first['value']}/
          ancestor::method_add_block/call/descendant::tstring_content})
      end
      deps.map{|dep| dep['value']}
    end

    # Create a match for a specified file. Use this if the presence of the file
    # triggers the warning rather than content.
    def file_match(file)
      raise ArgumentError, "Filename cannot be nil" if file.nil?
      {:filename => file, :matched => file, :line => 1, :column => 1}
    end

    # Find Chef resources of the specified type.
    # TODO: Include blockless resources
    #
    # These are equivalent:
    #
    #   find_resources(ast)
    #   find_resources(ast, :type => :any)
    #
    # Restrict to a specific type of resource:
    #
    #   find_resources(ast, :type => :service)
    #
    def find_resources(ast, options = {})
      options = {:type => :any}.merge!(options)
      return [] unless ast.respond_to?(:xpath)
      scope_type = ''
      scope_type = "[@value='#{options[:type]}']" unless options[:type] == :any

      # TODO: Include nested resources (provider actions)
      no_actions = "[command/ident/@value != 'action']"
      ast.xpath("//method_add_block[command/ident#{scope_type}]#{no_actions}")
    end

    # Helper to return a comparable version for a string.
    def gem_version(version)
      Gem::Version.create(version)
    end

    # Retrieve the recipes that are included within the given recipe AST.
    #
    # These two usages are equivalent:
    #
    #   included_recipes(ast)
    #   included_recipes(ast, :with_partial_names => true)
    #
    def included_recipes(ast, options = {:with_partial_names => true})
      raise_unless_xpath!(ast)

      filter = ['[count(descendant::args_add) = 1]']

      # If `:with_partial_names` is false then we won't include the string
      # literal portions of any string that has an embedded expression.
      unless options[:with_partial_names]
        filter << '[count(descendant::string_embexpr) = 0]'
      end

      included = ast.xpath(%Q{//command[ident/@value = 'include_recipe']#{filter.join}
        [descendant::args_add/string_literal]/descendant::tstring_content})

      # Hash keyed by recipe name with matched nodes.
      included.inject(Hash.new([])){|h, i| h[i['value']] += [i]; h}
    end

    # Searches performed by the specified recipe that are literal strings.
    # Searches with a query formed from a subexpression will be ignored.
    def literal_searches(ast)
      return [] unless ast.respond_to?(:xpath)
      ast.xpath("//method_add_arg[fcall/ident/@value = 'search' and
        count(descendant::string_embexpr) = 0]/descendant::tstring_content")
    end

    # Create a match from the specified node.
    def match(node)
      raise_unless_xpath!(node)
      pos = node.xpath('descendant::pos').first
      return nil if pos.nil?
      {:matched => node.respond_to?(:name) ? node.name : '',
       :line => pos['line'].to_i, :column => pos['column'].to_i}
    end

    # Does the provided string look like an Operating System command? This is a
    # rough heuristic to be taken with a pinch of salt.
    def os_command?(str)
      str.start_with?('grep ', 'net ', 'which ') or # common commands
        str.include?('|') or     # a pipe, could be alternation
        str.include?('/') or     # file path delimiter
        str.match(/^[\w]+$/) or  # command name only
        str.match(/ --?[a-z]/i)  # command-line flag
    end

    # Read the AST for the given Ruby source file
    def read_ast(file)
      source = if file.to_s.end_with? '.erb'
        Template::ExpressionExtractor.new.extract(
          File.read(file)).map{|e| e[:code]}.join(';')
      else
        File.read(file)
      end
      build_xml(Ripper::SexpBuilder.new(source).parse)
    end

    # Retrieve a single-valued attribute from the specified resource.
    def resource_attribute(resource, name)
      raise ArgumentError, "Attribute name cannot be empty" if name.empty?
      resource_attributes(resource)[name.to_s]
    end

    # Retrieve all attributes from the specified resource.
    def resource_attributes(resource, options={})
      atts = {}
      name = resource_name(resource, options)
      atts[:name] = name unless name.empty?
      atts.merge!(normal_attributes(resource, options))
      atts.merge!(block_attributes(resource))
      atts
    end

    # Resources keyed by type, with an array of matching nodes for each.
    def resource_attributes_by_type(ast)
      result = {}
      resources_by_type(ast).each do |type,resources|
        result[type] = resources.map{|resource| resource_attributes(resource)}
      end
      result
    end

    # Retrieve the name attribute associated with the specified resource.
    def resource_name(resource, options = {})
      raise_unless_xpath!(resource)
      options = {:return_expressions => false}.merge(options)
      if options[:return_expressions]
        name = resource.xpath('command/args_add_block')
        if name.xpath('descendant::string_add').size == 1 and
           name.xpath('descendant::string_literal').size == 1 and
           name.xpath('descendant::*[self::call or self::string_embexpr]').empty?
          name.xpath('descendant::tstring_content/@value').to_s
        else
          name
        end
      else
        # Preserve existing behaviour
        resource.xpath('string(command//tstring_content/@value)')
      end
    end

    # Resources in an AST, keyed by type.
    def resources_by_type(ast)
      raise_unless_xpath!(ast)
      result = Hash.new{|hash, key| hash[key] = Array.new}
      find_resources(ast).each do |resource|
        result[resource_type(resource)] << resource
      end
      result
    end

    # Return the type, e.g. 'package' for a given resource
    def resource_type(resource)
      raise_unless_xpath!(resource)
      type = resource.xpath('string(command/ident/@value)')
      if type.empty?
        raise ArgumentError, "Provided AST node is not a resource"
      end
      type
    end

    # Does the provided string look like ruby code?
    def ruby_code?(str)
      str = str.to_s
      return false if str.empty?
      checker = FoodCritic::ErrorChecker.new(str)
      checker.parse
      ! checker.error?
    end

    # Searches performed by the provided AST.
    def searches(ast)
      return [] unless ast.respond_to?(:xpath)
      ast.xpath("//fcall/ident[@value = 'search']")
    end

    # The list of standard cookbook sub-directories.
    def standard_cookbook_subdirs
      %w{attributes definitions files libraries providers recipes resources
         templates}
    end

    # Template filename
    def template_file(resource)
      if resource['source']
        resource['source']
      elsif resource[:name]
        if resource[:name].respond_to?(:xpath)
          resource[:name]
        else
          "#{File.basename(resource[:name])}.erb"
        end
      end
    end

    # Templates in the current cookbook
    def template_paths(recipe_path)
      Dir[Pathname.new(recipe_path).dirname.dirname + 'templates' + '**/*.erb']
    end

    private

    def block_attributes(resource)
      # The attribute value may alternatively be a block, such as the meta
      # conditionals `not_if` and `only_if`.
      atts = {}
      resource.xpath("do_block/descendant::method_add_block[
        count(ancestor::do_block) = 1][brace_block | do_block]").each do |batt|
        att_name = batt.xpath('string(method_add_arg/fcall/ident/@value)')
        if att_name and ! att_name.empty? and batt.children.length > 1
          atts[att_name] = batt.children[1]
        end
      end
      atts
    end

    # Recurse the nested arrays provided by Ripper to create a tree we can more
    # easily apply expressions to.
    def build_xml(node, doc = nil, xml_node=nil)
      doc, xml_node = xml_document(doc, xml_node)
      if node.respond_to?(:each)
        # First child is the node name
        node.drop(1).each do |child|
          if position_node?(child)
            xml_position_node(doc, xml_node, child)
          else
            if ast_node_has_children?(child)
              # The AST structure is different for hashes so we have to treat
              # them separately.
              if ast_hash_node?(child)
                xml_hash_node(doc, xml_node, child)
              else
                xml_array_node(doc, xml_node, child)
              end
            else
              xml_node['value'] = child.to_s unless child.nil?
            end
          end
        end
      end
      xml_node
    end

    def extract_attribute_value(att, options = {})
      if ! att.xpath('args_add_block[count(descendant::args_add)>1]').empty?
        att.xpath('args_add_block').first
      elsif ! att.xpath('args_add_block/args_add/
          var_ref/kw[@value="true" or @value="false"]').empty?
        att.xpath('string(args_add_block/args_add/
          var_ref/kw/@value)') == 'true'
      elsif ! att.xpath('descendant::assoc_new').empty?
        att.xpath('descendant::assoc_new')
      elsif att.xpath('descendant::symbol').empty?
        if options[:return_expressions] and
          (att.xpath('descendant::string_add').size != 1 or
           ! att.xpath('descendant::*[self::call or
               self::string_embexpr]').empty?)
          att
        else
          att.xpath('string(descendant::tstring_content/@value)')
        end
      else
        att.xpath('string(descendant::symbol/ident/@value)').to_sym
      end
    end

    def node_method?(meth, cookbook_dir)
      chef_dsl_methods.include?(meth) || patched_node_method?(meth, cookbook_dir)
    end

    def normal_attributes(resource, options = {})
      atts = {}
      # The ancestor check here ensures that nested blocks are not returned.
      # For example a method call within a `ruby_block` would otherwise be
      # returned as an attribute.
      #
      # TODO: This may need to be revisited in light of recent changes to the
      # application cookbook which is popularising nested blocks.
      resource.xpath('do_block/descendant::*[self::command or
        self::method_add_arg][count(ancestor::do_block) = 1]').each do |att|
        unless att.xpath('string(ident/@value | fcall/ident/@value)').empty?
          atts[att.xpath('string(ident/@value | fcall/ident/@value)')] =
            extract_attribute_value(att, options)
        end
      end
      atts
    end

    def patched_node_method?(meth, cookbook_dir)
      return false if cookbook_dir.nil? || ! Dir.exists?(cookbook_dir)

      # TODO: Modify this to work with multiple cookbook paths
      cbk_tree_path = Pathname.new(File.join(cookbook_dir, '..'))
      libs = Dir[File.join(cbk_tree_path.realpath, '*/libraries/*.rb')]
      libs.any? do |lib|
        ! read_ast(lib).xpath(%Q{//class[count(descendant::const[@value='Chef']) > 0]
          [count(descendant::const[@value='Node']) > 0]/descendant::def/
          ident[@value='#{meth.to_s}']}).empty?
      end
    end

    def raise_unless_xpath!(ast)
      unless ast.respond_to?(:xpath)
        raise ArgumentError, "AST must support #xpath"
      end
    end

    # XPath custom function
    class AttFilter
      def is_att_type(value)
        return [] unless value.respond_to?(:select)
        value.select{|n| %w{node default override set normal}.include?(n.to_s)}
      end
    end

    def standard_attribute_access(ast, options)
      if options[:type] == :any
        [:string, :symbol].map do |type|
          standard_attribute_access(ast, options.merge(:type => type))
        end.inject(:+)
      else
        type = options[:type] == :string ? 'tstring_content' : options[:type]
        expr = '//*[self::aref_field or self::aref]'
        expr += '[is_att_type(descendant::ident'
        expr += '[not(ancestor::aref/call)]' if options[:ignore_calls]
        expr += "/@value)]/descendant::#{type}"
        expr += "[ident/@value != 'node']" if type == :symbol
        ast.xpath(expr, AttFilter.new).sort
      end
    end

    def vivified_attribute_access(ast, cookbook_dir)
      calls = ast.xpath(%q{//*[self::call or self::field]
        [is_att_type(vcall/ident/@value) or is_att_type(var_ref/ident/@value)]
        [@value='.'][count(following-sibling::arg_paren) = 0]}, AttFilter.new)
      calls.select do |call|
        call.xpath("aref/args_add_block").size == 0 and
          (call.xpath("descendant::ident").size > 1 and
           ! node_method?(call.xpath("ident/@value").to_s.to_sym, cookbook_dir))
      end.sort
    end
  end
end
\section{Introduction} \noindent \underbar{Introduction:} The synthesis of graphene, {\it i.e.}~single layers of carbon atoms in a hexagonal lattice, in 2004, has led to a remarkable body of subsequent work\cite{geim07,blackschaffer14}. One of the key elements of interest has been the Dirac dispersion relation of free electrons in this geometry, allowing the exploration of aspects of relativistic quantum mechanics in a conventional solid. ``Dirac point engineering" has also become a big theme of investigation of fermions confined in hexagonal optical lattices\cite{wunsch08}. It has been natural to ask what the effects of electron-electron interactions are on this unusual noninteracting dispersion relation. Early quantum Monte Carlo (QMC) simulations and series expansion investigations of the Hubbard model on a honeycomb lattice found a critical value of the on-site repulsion $U_c \sim 4t$ for the onset of antiferromagnetic (AF) order at half-filling\cite{paiva05}. This stood in contrast to the extensively studied square lattice geometry for which the perfect Fermi surface nesting and the van Hove singularity of the density of states (DOS) imply $U_c=0$. Subsequent QMC studies refined this value to $U_c \sim 3.87$ and suggested the possibility that a gapped, spin-liquid (resonating valence bond) phase exists between the weak coupling semimetal and strong coupling AF regimes\cite{meng10}, a conclusion further explored in the strong coupling (Heisenberg) limit\cite{clark11}. Yet more recent work challenged this scenario, and pointed instead to a conventional, continuous quantum phase transition (QPT) between the semimetal and AF insulator\cite{sorella12,assaad13,otsuka16}. Equally interesting is the possibility of unusual, topological superconducting phases arising from these spin fluctuations\cite{blackschaffer07,nandkishore12,honerkamp08,wang12,kiesel12,pathak10,ma11,gu13,jiang14,xu16}. Graphene itself is, in fact, only moderately correlated. 
First principles calculations of the on-site Hubbard $U$ yield $U_{00} \sim 9.3$ eV\cite{wehling11}, with a nearest neighbor hopping $t \sim 2.8$ eV, so that $U/t \sim 3.3$ is rather close to (and slightly below) $U_c$. Longer range $U_{01}$ interactions can lead to a rich phase diagram including charge ordered phases\cite{herbut06,honerkamp08}, especially in the semimetal phase where the Coulomb interaction is unscreened. Charge ordering may also arise when electron-phonon coupling (EPC) is taken into account\cite{gruner88,gruner94}. Indeed, considering such coupling would allow an exploration of the effect of other sorts of interactions on the Dirac fermions of graphene, complementing the extensive existing literature on electron-electron repulsion. There are a number of fundamental differences between the two types of correlations. Most significantly, the continuous symmetry of the Hubbard interaction, and of the AF order parameter, precludes a finite-temperature transition in 2D. Therefore the focus is instead on quantum phase transitions. On the other hand, in the Holstein case the charge-density-wave (CDW) order has a one-component order parameter, leading to a transition that breaks a {\it discrete} symmetry and, consequently, a finite critical temperature (in the Ising universality class). Precise QMC values of $T_c$ on a square lattice were only quite recently obtained\cite{costa17,weber18,costa18}. These build on earlier QMC studies of CDW physics in the Holstein model \cite{marsiglio90,vekic92}, and introduce an exact treatment of fluctuations into earlier mean-field calculations\cite{zheng97}.
{\it In this paper we explore the effect of electron-phonon, rather than electron-electron, interactions, on the properties of Dirac fermions, through QMC simulations of the Holstein model}\cite{holstein59} {\it on a honeycomb lattice.} We use the charge structure factor, compressibility, and Binder ratio to evaluate the critical transition temperatures and EPC, leading to a determination of the phase diagram of the model. Taken together, these results provide considerable initial insight into the nature of the CDW transition for Dirac fermions coupled to phonons. \vskip0.03in \noindent \underbar{Model and Methodology:} The Holstein model\cite{holstein59} describes conduction electrons locally coupled to phonon degrees of freedom, \begin{align} \label{eq:Holst_hamil} \nonumber \mathcal{\hat H} = & -t \sum_{\langle \mathbf{i}, \mathbf{j} \rangle, \sigma} \big(\hat d^{\dagger}_{\mathbf{i} \sigma} \hat d^{\phantom{\dagger}}_{\mathbf{j} \sigma} + {\rm h.c.} \big) - \mu \sum_{\mathbf{i}, \sigma} \hat n_{\mathbf{i}, \sigma} \\ & + \frac{1}{2} \sum_{ \mathbf{i} } \hat{P}^{2}_{\mathbf{i}} + \frac{\omega_{\, 0}^{2}}{2} \sum_{ \mathbf{i} } \hat{X}^{2}_{\mathbf{i}} + \lambda \sum_{\mathbf{i}, \sigma} \hat n_{\mathbf{i}, \sigma} \hat{X}_{\mathbf{i}} \,\,, \end{align} where the sums on $\mathbf{i}$ run over a two-dimensional honeycomb lattice (see Fig.\ref{fig1}\,(a)), with $\langle \mathbf{i}, \mathbf{j} \rangle$ denoting nearest neighbors. $d^{\dagger}_{\mathbf{i} \sigma}$ and $d_{\mathbf{i} \sigma}$ are creation and annihilation operators of electrons with spin $\sigma$ at a given site $\mathbf{i}$. The first term on the right side of Eq.\,\eqref{eq:Holst_hamil} corresponds to the hopping of electrons, with chemical potential $\mu$ given by the second term. The phonons are local (dispersionless) quantum harmonic oscillators with frequency $\omega_{0}$, described in the next two terms of Eq.~\eqref{eq:Holst_hamil}. The EPC is included in the final term. 
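As a minimal check of the noninteracting dispersion invoked above, the 2x2 Bloch Hamiltonian of the honeycomb hopping term, $H(k)=\begin{pmatrix}0 & f(k)\\ f^*(k) & 0\end{pmatrix}$, has bands $\pm|f(k)|$, which gives the bandwidth $W=6t$ and a gap that closes at the Dirac point. The sketch below uses one common lattice-vector convention, not one taken from the paper:

```python
import math, cmath

t = 1.0  # hopping sets the energy scale, as in the text

def bands(kx, ky):
    # Bravais vectors a1 = (3/2, sqrt(3)/2), a2 = (3/2, -sqrt(3)/2)
    # in units of the nearest-neighbor distance (a common convention).
    f = t * (1 + cmath.exp(1j * (1.5 * kx + math.sqrt(3) / 2 * ky))
               + cmath.exp(1j * (1.5 * kx - math.sqrt(3) / 2 * ky)))
    return -abs(f), abs(f)

# Band extrema at the zone center: energies -3t and +3t, so W = 6t.
lo, hi = bands(0.0, 0.0)
print(hi - lo)  # 6.0

# Dirac point K = (0, 4*pi/(3*sqrt(3))): the two bands touch, |f(K)| = 0.
print(bands(0.0, 4 * math.pi / (3 * math.sqrt(3)))[1])  # ~0
```

Near $K$ the modulus $|f(k)|$ grows linearly in $|k-K|$, which is the Dirac cone responsible for the linearly vanishing DOS at half-filling.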
The hopping integral ($t=1$) sets the energy scale, with bandwidth $W=6\,t$ for the honeycomb geometry. We use determinant quantum Monte Carlo (DQMC) simulations \cite{blankenbecler81} to investigate the properties of Eq.\eqref{eq:Holst_hamil}. Since the fermionic operators appear only quadratically in the Hamiltonian, they can be traced out, leaving an expression for the partition function which is an integral over the space and imaginary time dependent phonon field. The integrand takes the form of the square of the determinant of a matrix $M$ whose dimension is the spatial lattice size, together with a ``bosonic'' action\cite{creutz81} arising from the harmonic oscillator terms in Eq.\eqref{eq:Holst_hamil}. The square appears because the traces over the up and down fermions are identical, so the minus sign problem is absent at any electronic filling. Nevertheless, we focus on the half-filled case, $\langle \hat n_{{\bf i},\sigma} \rangle=\frac{1}{2}$. This gives us access to the Dirac point where the DOS vanishes linearly. It is also the density for which CDW correlations are most pronounced. It can be shown, using an appropriate particle-hole transformation, that this filling occurs at $\mu = -\lambda^2/\omega_0^2$. We analyze lattices with linear sizes up to $L=8$ (128 sites). With the imaginary-time discretization fixed at $\Delta\tau=1/20$, systematic Trotter errors are smaller than the statistical ones from Monte Carlo sampling. To facilitate the discussion, and eventual comparisons with the square lattice case, we introduce a dimensionless EPC: $\lambda_D = \lambda^2 / (\omega_0^2 \, W)$.
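The two relations above are easy to evaluate directly. The sketch below simply transcribes $\lambda_D = \lambda^2/(\omega_0^2 W)$ and the half-filling condition $\mu = -\lambda^2/\omega_0^2$; the helper names are ours, not the paper's:

```python
def lambda_dimensionless(lam, omega0, W=6.0):
    # lambda_D = lambda^2 / (omega_0^2 * W); W = 6t for the honeycomb lattice
    return lam ** 2 / (omega0 ** 2 * W)

def mu_half_filling(lam, omega0):
    # particle-hole symmetric point: mu = -lambda^2 / omega_0^2
    return -lam ** 2 / omega0 ** 2

# With omega_0 = 1, the coupling lambda_D = 2/3 used in Figs. 1-2
# corresponds to a bare lambda = 2 and mu = -4 at half filling:
lam = 2.0
print(lambda_dimensionless(lam, 1.0))  # 0.666...
print(mu_half_filling(lam, 1.0))       # -4.0
```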
Charge ordering is characterized by the charge-density correlation function, \begin{eqnarray} c({\bf r}) = \big\langle \, \big( \, n_{{\bf i}\uparrow} + n_{{\bf i}\downarrow} \, \big) \big( \, n_{{\bf i+r}\uparrow} + n_{{\bf i+r}\downarrow} \, \big) \, \big\rangle, \end{eqnarray} and its Fourier transform, the CDW structure factor, \begin{eqnarray} S_{\rm cdw} = \sum_{\bf r} (-1)^{\bf r} c({\bf r}) \,. \end{eqnarray} The $(-1)^{\bf r}$ phase picks out the staggered pattern of the charge ordering. The long-range behavior is investigated by performing finite size scaling, and by tracking the evolution of the insulating gap in the CDW phase. \begin{figure}[t] \includegraphics[scale=0.24]{Fig1_v3.pdf} % \caption{ (a) A $4 \times 4$ honeycomb lattice, with the trajectory (red dashed line) corresponding to the horizontal axis of (b), which shows charge correlations $c({\bf r})$ at $\lambda_D=2/3$, $\omega_0=1$, and several temperatures. Here, and in all subsequent figures, when not shown, error bars are smaller than the symbol size.} \label{fig1} \end{figure} \begin{figure}[t] \includegraphics[scale=0.34]{Fig2_v4_2.pdf} % \caption{ (a) The charge structure factor as a function of $\beta$, for different lattice sizes ($L=4$-$8$), and its (b) best data collapse, with the 2D Ising critical exponents, which yields $\beta_c=5.8$. (c) The crossing plot for $S_{\rm cdw}/L^{\gamma/\nu}$, with vertical dashed lines indicating the uncertainty in the critical temperature. Here $\lambda_D=2/3$ and $\omega_0=1$. } \label{fig2} \end{figure} \vskip0.03in \noindent \underbar{Existence of CDW phase:} We first consider the behavior of charge-density correlations when the temperature $T=\beta^{-1}$ is lowered. Figure \ref{fig1}\,(b) displays $c({\bf r})$ along the real space path of Fig. 1(a), for $\lambda_D=2/3$, $\omega_0=1$ and several inverse temperatures $\beta$.
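The staggered structure factor defined above can be sketched on a single classical density snapshot, which makes its saturation value transparent: for a perfect two-sublattice CDW it reaches $N=2L^2$, while a uniform density gives zero. The flattened site list and sublattice labeling below are an illustrative convention, not the simulation's data layout:

```python
def s_cdw(densities, sublattice):
    # S_cdw = (1/N) * sum_{i,j} (-1)^{s_i + s_j} n_i n_j evaluated on one
    # classical density snapshot; the QMC average replaces n_i n_j by <n_i n_j>.
    N = len(densities)
    m = sum((-1) ** s * n for n, s in zip(densities, sublattice))
    return m * m / N

L = 8
N = 2 * L * L                     # two sites per honeycomb unit cell
sub = [i % 2 for i in range(N)]   # alternating A/B sublattice labels

# Perfect staggered CDW (doubly occupied A, empty B): S_cdw saturates at N.
perfect = [2.0 if s == 0 else 0.0 for s in sub]
print(s_cdw(perfect, sub))  # 128.0

# Uniform half filling n_i = 1: no staggered signal.
uniform = [1.0] * N
print(s_cdw(uniform, sub))  # 0.0
```

This is why, in the finite-size analysis below, a size-independent $S_{\rm cdw}$ signals short-ranged correlations while saturation near $N$ signals long-range order.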
When $T$ is high ($\beta = 4$), we find $c({\bf r}) \approx \rho^2=1$, where $\rho$ is the density, indicating an absence of long-range order. However, an enhancement of charge correlations starts to appear at $\beta = 5$, with the emergence of a staggered pattern, which is even more pronounced at lower $T$, $\beta = 6$ and $7.5$. This temperature evolution of real space charge correlations suggests a transition into a CDW phase. A more compelling demonstration of long-range ordering (LRO) is provided by Fig.\,\ref{fig2}\,(a), which exhibits the structure factor $S_{\rm cdw}$ as a function of $\beta$, for different linear sizes $L$. In the disordered phase at high $T$, $c({\bf r})$ is short-ranged and, consequently, $S_{\rm cdw}$ is independent of lattice size $L$. The emergence of a lattice size dependence of $S_{\rm cdw}$, and, ultimately, its saturation at a value not far from $N = 2L^2$, signals the onset temperature of LRO, and a correlation length approaching the lattice size. Figure \ref{fig2}\,(a) shows that a change between these two behaviors occurs around $\beta \sim 5-6$, giving an initial, rough estimate of $\beta_c$. The ground state is obtained for $\beta \gtrsim 8$; for larger values, the density correlations no longer change. The precise determination of the critical temperature $T_c$ is accomplished by performing finite size scaling of these data, using the 2D Ising critical exponents $\gamma=7/4$ and $\nu=1$, as displayed in Fig. 2(b). The best data collapse occurs at $\beta_c = 5.8\,(1)$, consistent with the crossing of $S_{\rm cdw}/L^{\gamma/\nu}$ presented in Fig. 2(c), and also supported by the crossing in the Binder cumulants (see Supplemental Material \cite{sm,binder81}). $T_c$ for the honeycomb lattice is of the same order as that for the square lattice. 
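The collapse and crossing analysis of Fig. 2 rests on the finite-size scaling form $S_{\rm cdw} = L^{\gamma/\nu} F\big((\beta-\beta_c)L^{1/\nu}\big)$ with the 2D Ising exponents $\gamma/\nu = 7/4$ and $1/\nu = 1$: at $\beta_c$ the scaled curves for all $L$ coincide. The toy scaling function below is purely illustrative, not QMC data:

```python
import math

GAMMA_OVER_NU = 7.0 / 4.0
BETA_C = 5.8  # critical inverse temperature found in the text

def toy_s_cdw(beta, L):
    # any smooth scaling function F works for the illustration;
    # the argument is (beta - beta_c) * L^(1/nu) with 1/nu = 1
    x = (beta - BETA_C) * L
    return L ** GAMMA_OVER_NU * (1.0 + math.tanh(x))

# At beta_c the scaled curves S_cdw / L^(gamma/nu) coincide for all L:
scaled = [toy_s_cdw(BETA_C, L) / L ** GAMMA_OVER_NU for L in (4, 6, 8)]
print(scaled)  # [1.0, 1.0, 1.0]

# Away from beta_c they fan out with L, so the curves cross only at beta_c:
print([round(toy_s_cdw(5.0, L) / L ** GAMMA_OVER_NU, 3) for L in (4, 6, 8)])
```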
For the latter at $\omega_0=1$, $\beta_c$ ranges from $\beta_c \sim 16.7$ at $\lambda_D = 0.15$ to $\beta_c \sim 5$ at $\lambda_D = 0.27$ \cite{weber18}, and $\beta_c \sim 6.0$ at $\lambda_D = 0.25$ \cite{costa18,batrouni18}. For the range of EPC shown in Ref.\,\onlinecite{weber18}, $\beta_c$ steadily decreases with increasing $\lambda_D$. A dynamical mean-field theory approach \cite{blawid01,Freericks93} found that there is a minimal $\beta_c$ (maximum in $T_c$) for an optimal coupling strength. This non-monotonicity is also present in the repulsive half-filled 3D Hubbard model; the AF $\beta_{\rm Neel}$ has a minimum at intermediate $U$. We return to this issue in what follows. \begin{figure}[t] \includegraphics[height=2.5in,width=3.5in]{Fig3_v51.pdf} % \caption{ CDW structure factor $S_{\rm cdw}$ as a function of dimensionless coupling $\lambda_D$. $S_{\rm cdw}$ becomes small for $\lambda_D \lesssim 0.25$. For the square lattice, $S_{\rm cdw}$ remains large down to much smaller values of $\lambda_D$. In addition, for the honeycomb (Hc.) lattice $S_{\rm cdw}$ does not change for the two lowest temperatures, whereas $S_{\rm cdw}$ continues to grow at weak coupling for the square (Sq.) lattice. } \label{fig3} \end{figure} \vskip0.03in \noindent \underbar{Finite Critical Coupling:} We investigate next how charge correlations behave as a function of the EPC, and, specifically, the possibility that CDW order does not occur below a critical interaction strength, as is known to be the case for the Hubbard model on a honeycomb lattice. This is a somewhat challenging question, since at weak coupling one expects $T_c \sim \omega_0 \, e^{-1/\lambda_D}$ to become small, necessitating a careful distinction between the absence of a CDW transition and $T_c$ decreasing below the simulation temperature. Figure \ref{fig3} displays the CDW structure factor as a function of $\lambda_D$ at different $T$, on square (open symbols) and honeycomb (filled symbols) lattices, for similar system sizes.
The most noticeable feature is that $S_{\rm cdw}$ appears to vanish for weak coupling, $\lambda_D \lesssim 0.25$, strongly suggesting a finite critical EPC for CDW order on the honeycomb lattice. This is a qualitatively reasonable consequence of the vanishing DOS at half-filling, since having a finite DOS is part of the Peierls' requirement for CDW formation\cite{peierls55,gruner88,gruner94}. To ensure this is not a finite $T$ effect, we contrast this behavior of $S_{\rm cdw}$ with that of the square lattice, for which it is believed that a CDW transition occurs at all nonzero $\lambda_D$ owing to the divergence of the square lattice DOS\cite{weber18}. We note first that $S_{\rm cdw}$ remains large for the square lattice down to values of $\lambda_D$ a factor of $2-3$ below those of the honeycomb lattice. In addition, there is a distinct difference in the $T$ dependence. In the square lattice case, CDW correlations are enhanced as $T$ is lowered. The $S_{\rm cdw}$ curves shift systematically to lower $\lambda_D$ as $\beta$ increases, consistent with order for all nonzero $\lambda_D$. On the other hand, $S_{\rm cdw}$ shows much less $T$ dependence in the honeycomb case, with results from $\beta=12$ to $20$ being almost identical (within error bars). \begin{figure}[t] \includegraphics[scale=0.30]{Fig4_v7.pdf} % \caption{ (a) The charge gap $\Delta_c$ (see text) as a function of $\lambda_D$. (b) The electronic compressibility $\kappa$ as a function of $\lambda_D$ for square (open symbols) and honeycomb (filled symbols) lattices with linear sizes $L=8$ and 6, respectively. } \label{fig4} \end{figure} Further insight into the existence of a critical EPC is provided by the CDW gap, inferred from the plateau in $\rho(\mu)$ via $\Delta_c \equiv \mu(\rho=1+x) - \mu(\rho=1-x)$. Here we choose $x=0.01$; other values of $x$ give qualitatively similar results. Figure \ref{fig4}\,(a) displays $\Delta_c$ for different $\lambda_D$ and fixed $\beta=10$ and $16$.
The gap has a non-monotonic dependence on the EPC, with a maximum at $\lambda_D \approx 0.43$. For smaller EPCs the CDW gap is strongly suppressed. A crossing of the curves occurs at $\lambda_D \sim 0.27$ so that $\Delta_c$ {\it decreases} as $T$ is lowered for $\lambda_D \lesssim 0.27$, consistent with a critical EPC. The compressibility $\kappa=\partial\rho/\partial \mu$ is presented as a function of $\lambda_D$ in Fig.~\ref{fig4}\,(b) for honeycomb and square lattices at several $T$. We have normalized by the noninteracting value $\kappa_{0}$ (evaluated in the thermodynamic limit) to provide a comparison that eliminates trivial effects of the DOS. For the honeycomb lattice, $\kappa/\kappa_0$ shows a sharp increase around $\lambda_D \sim 0.27 \pm 0.01$, consistent with the vanishing of $S_{\rm cdw}$ in Fig.\,\ref{fig3}. Furthermore, $\kappa/\kappa_0$ grows with $\beta$. For the square lattice, $\kappa/\kappa_0$ vanishes down to much smaller $\lambda_D$, behaves more smoothly at the lowest $T$, and is an order of magnitude smaller. Its small residual value is a consequence of the exponentially small CDW ordering temperature as $\lambda_D \rightarrow 0$. Finally, we have obtained $T_c$ for a range of $\lambda_D$ above the critical EPC, yielding the phase diagram in Fig.\,\ref{fig5}. $T_c$ decreases rapidly at $\lambda_D \approx 0.28$. The inset shows the crossing of the invariant correlation ratio $R_c$, a quantity which is independent of lattice size at a quantum critical point (QCP) (see Supplemental Material \cite{sm,binder81}). $T_c$ exhibits a maximum at $\lambda_D \sim 0.4$-$0.5$, which lies close to the coupling for which $\Delta_{\rm cdw}$ is greatest (Fig.\,\ref{fig4}).
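The two diagnostics just discussed follow directly from a measured $\rho(\mu)$ curve: the charge gap is the width of the $\rho = 1$ plateau, $\Delta_c = \mu(\rho=1+x) - \mu(\rho=1-x)$, and the compressibility is its local slope, $\kappa = \partial\rho/\partial\mu$. The sketch below uses a hypothetical piecewise filling curve standing in for QMC data:

```python
import math

def rho_gapped(mu, gap=1.0):
    # toy gapped filling curve: flat plateau rho = 1 inside the gap,
    # unit slope outside
    if abs(mu) < gap / 2:
        return 1.0
    return 1.0 + math.copysign(abs(mu) - gap / 2, mu)

def mu_of_rho(rho, gap=1.0):
    # inverse of rho_gapped (rho = 1 maps to the plateau center)
    if rho == 1.0:
        return 0.0
    return math.copysign(abs(rho - 1.0) + gap / 2, rho - 1.0)

def charge_gap(inverse_filling, x=0.01):
    # Delta_c = mu(rho = 1 + x) - mu(rho = 1 - x), as in the text
    return inverse_filling(1.0 + x) - inverse_filling(1.0 - x)

def kappa(filling, mu, d_mu=1e-4):
    # kappa = d rho / d mu by central finite difference
    return (filling(mu + d_mu) - filling(mu - d_mu)) / (2 * d_mu)

print(charge_gap(mu_of_rho))   # ~1.02: the gap plus the 2x offset, x = 0.01
print(kappa(rho_gapped, 0.0))  # 0.0: incompressible inside the CDW gap
print(kappa(rho_gapped, 2.0))  # ~1.0: compressible outside the gap
```

The vanishing of $\kappa$ on the plateau is exactly the incompressibility of the CDW insulator; the sharp rise of $\kappa/\kappa_0$ below the critical coupling marks the recovery of the semimetal.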
The maximum in $T_c$ reflects a competition between the growth of CDW order with increasing $\lambda_D$ and a reduction as the EPC renormalizes the single-electron mass, yielding a heavy polaron \cite{jeckelmann98,bonca99,romero99,ku02,kornilovitch98,hohenadler04,macridin04,goodvin06,sm}. Unlike CDW order that arises directly from intersite interactions, in the Holstein model it is produced by a second-order process: the lowering of the kinetic energy by virtual hopping between doubly occupied and empty sites. A mass renormalization-driven reduction in this hopping lowers $T_c$. \begin{figure}[t] \includegraphics[scale=0.3]{Fig5_v43.pdf} % \caption{Critical temperature for the CDW transition in the honeycomb Holstein model inferred from finite size scaling analysis in Fig.~2. The inset shows the crossing of the invariant correlation ratio $R_c$ (see text), resulting in the indicated QCP, in good agreement with the value at which an extrapolated $T_c$ would vanish. } \label{fig5} \end{figure} \vskip0.03in \noindent \underbar{Conclusions:} In this paper we have presented DQMC simulations of the Holstein model on a honeycomb lattice. The existence of long-range charge order was established below a finite critical transition temperature in the range $T \sim t/6$, for sufficiently large EPC. $T_c$ is similar for the square and honeycomb lattices, despite the dramatic differences in their noninteracting densities of states: diverging in the former case, and vanishing in the latter. Our data suggest that, as for the honeycomb Hubbard model \cite{paiva05,meng10,clark11,sorella12,assaad13,otsuka16}, the vanishing non-interacting density of states of Dirac fermions gives rise to a minimal coupling $\lambda_D \sim 0.27 \pm 0.01$, only above which does LRO occur.
Thus, although the critical CDW transition temperatures for the two geometries are similar {\it when order occurs}, the Dirac density of states does fundamentally alter the phase diagram by introducing a weak-coupling regime in which order is absent. The 1D Holstein model is also known to have a metallic phase for electron-phonon couplings below a critical value \cite{jeckelmann99,hohenadler17}. This initial study has focused on a simplified model. The phonon spectra of graphene and graphitic materials have been extensively explored \cite{karssemeijer11} and, of course, are vastly more complex than the single optical phonon mode incorporated in the Holstein Hamiltonian. However, as has been recently emphasized \cite{costa18}, including realistic phonon dispersion relations is relatively straightforward in QMC simulations, since the associated modifications affect only the local bosonic portion of the action, and not the computationally challenging fermionic determinants. One important next step will be the study of more complex phonon modes, and the types of electronic order and phase transitions that they induce. Such investigations open the door to examining hexagonal CDW materials like the transition metal dichalcogenides \cite{Zhu2367,PhysRevB.89.235115,PhysRevB.89.165140,PhysRevB.57.13118}. However, their layered structures add considerable challenges to descriptions with simple models. {\it Note added.}---While preparing this manuscript, we learned of a related investigation by Chen \textit{et al.} \cite{Chen18}. \vskip0.03in \noindent \underbar{Acknowledgements:} The work of Y.-X.Z., W.-T.C. and R.T.S. was supported by the Department of Energy under Award No. DE-SC0014671. G.G.B. is partially supported by the French government, through the UCAJEDI Investments in the Future project managed by the National Research Agency (ANR) with Reference No. ANR-15-IDEX-01. N.C.C. was supported by the Brazilian funding agencies CAPES and CNPq.
Q: MediaMuxer writing H264 stream to mpg file

MediaMuxer has been driving me mad for two days & nights now :-(

The situation: I receive an H.264-encoded 1280x720 video stream via UDP. The stream contains NALU 1 - slice and NALU 5 - keyframe (5 is always preceded by NALU 7 - SPS and NALU 8 - PPS). The stream appears to be a stable 30 fps with at least one NALU 5 keyframe per second. Bitrate is variable but less than 4 Mbps. MediaCodec successfully decodes the stream and renders it in a surface view, so that part works well.

But now I need to save the H.264 into a local mpg file. I set up a MediaMuxer with all the MediaFormat information that I have, and feed it the sample data from the stream. Each sample contains one frame (NALU 1 or 5), and the first data sent to the MediaMuxer is a keyframe (NALU 5). The presentation time is calculated from the frame number and frame rate. All involved methods are called from the same thread. But the mpg file is never created.

As you can see in the output below, the data in the ByteBuffer does start with NALU headers followed by varying sizes of data. MediaMuxer seems to "see" frames in the data, as it counts the frames. So what is wrong here?

Minimum API is 21, and I have tested with a Samsung Galaxy S4 running stock Android 5 and a couple of devices running LineageOS Oreo and Nougat.
Here is the code to set up the MediaMuxer:

void setupMuxer(File f) throws IOException {
    if (DEBUG) Log.d(TAG, "Setup Muxer: " + f.getAbsolutePath() + " can write: " + f.canWrite());
    MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, decoderWidth, decoderHeight);
    format.setInteger(MediaFormat.KEY_BIT_RATE, 4000000);
    format.setInteger(MediaFormat.KEY_FRAME_RATE, 29);
    format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
    // sps and pps have been retrieved from the stream's NAL 7/8
    format.setByteBuffer("csd-0", ByteBuffer.wrap(sps));
    format.setByteBuffer("csd-1", ByteBuffer.wrap(pps));
    format.setInteger(MediaFormat.KEY_MAX_INPUT_SIZE, 1920 * 1080);
    muxer = new MediaMuxer(f.getPath(), MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    videoTrack = muxer.addTrack(format);
    muxer.start();
}

This method is called for each (complete) NALU 1 and NALU 5:

void muxFrame(ByteBuffer buf, int frame) {
    MediaCodec.BufferInfo bufferInfo = new MediaCodec.BufferInfo();
    bufferInfo.offset = buf.arrayOffset();
    bufferInfo.size = buf.position() - bufferInfo.offset;
    bufferInfo.flags = (buf.get(4) & 0x1f) == 5 ? MediaCodec.BUFFER_FLAG_KEY_FRAME : 0;
    bufferInfo.presentationTimeUs = computePresentationTime(frame);
    if (DEBUG) Log.d(TAG, "muxFrame frame: " + frame + " size: " + bufferInfo.size
            + " NAL: " + (buf.get(4) & 0x1f) + " Flags: " + bufferInfo.flags
            + " PTS: " + bufferInfo.presentationTimeUs
            + " content: " + BitByteUtil.toByteString(buf.array(), buf.arrayOffset(), 8));
    try {
        muxer.writeSampleData(videoTrack, buf, bufferInfo);
    } catch (Exception e) {
        Log.w(TAG, "muxer failed", e);
    }
}

private static long computePresentationTime(int frameIndex) {
    return 42 + frameIndex * 1000000 / FRAME_RATE;
}

Here is my output if MediaMuxer is stopped after it has consumed 100 NALUs.
05.651 D/VideoDecoderView: Setup Muxer: /storage/emulated/0/Pictures/test.mpg can write: true
05.656 I/MPEG4Writer: limits: 4294967295/0 bytes/us, bit rate: -1 bps and the estimated moov size 3317 bytes
06.263 D/VideoDecoderView: muxFrame frame: 2 size: 7257 NAL: 5 Flags: 1 PTS: 66708 content: 0:000 1:000 2:000 3:001 4:101 5:184 6:000 7:015
06.264 I/MPEG4Writer: setStartTimestampUs: 66708
06.264 I/MPEG4Writer: Earliest track starting time: 66708
06.308 D/VideoDecoderView: muxFrame frame: 3 size: 8998 NAL: 1 Flags: 0 PTS: 100042 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:034 7:020
06.342 D/VideoDecoderView: muxFrame frame: 4 size: 13664 NAL: 1 Flags: 0 PTS: 133375 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:066 7:020
06.375 D/VideoDecoderView: muxFrame frame: 5 size: 13674 NAL: 1 Flags: 0 PTS: 166708 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:098 7:020
06.409 D/VideoDecoderView: muxFrame frame: 6 size: 13772 NAL: 1 Flags: 0 PTS: 200042 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:130 7:020
06.483 D/VideoDecoderView: muxFrame frame: 7 size: 13707 NAL: 1 Flags: 0 PTS: 233375 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:162 7:020
06.520 D/VideoDecoderView: muxFrame frame: 8 size: 13778 NAL: 1 Flags: 0 PTS: 266708 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:194 7:020
06.555 D/VideoDecoderView: muxFrame frame: 9 size: 13743 NAL: 1 Flags: 0 PTS: 300042 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:226 7:020
06.575 D/VideoDecoderView: muxFrame frame: 10 size: 7338 NAL: 5 Flags: 1 PTS: 333375 content: 0:000 1:000 2:000 3:001 4:101 5:184 6:000 7:015
06.593 D/VideoDecoderView: muxFrame frame: 11 size: 9059 NAL: 1 Flags: 0 PTS: 366708 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:034 7:020
06.618 D/VideoDecoderView: muxFrame frame: 12 size: 13587 NAL: 1 Flags: 0 PTS: 400042 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:066 7:020
06.644 D/VideoDecoderView: muxFrame frame: 13 size: 13650 NAL: 1 Flags: 0 PTS: 433375 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:098 7:020
06.671 D/VideoDecoderView: muxFrame frame: 14 size: 13797 NAL: 1 Flags: 0 PTS: 466708 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:130 7:020
.... [snip]
09.620 D/VideoDecoderView: muxFrame frame: 97 size: 7212 NAL: 5 Flags: 1 PTS: 3233375 content: 0:000 1:000 2:000 3:001 4:101 5:184 6:000 7:015
09.661 D/VideoDecoderView: muxFrame frame: 98 size: 8814 NAL: 1 Flags: 0 PTS: 3266708 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:034 7:020
09.692 D/VideoDecoderView: muxFrame frame: 99 size: 13566 NAL: 1 Flags: 0 PTS: 3300042 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:066 7:020
09.737 D/VideoDecoderView: muxFrame frame: 100 size: 13733 NAL: 1 Flags: 0 PTS: 3333375 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:098 7:020
09.771 D/VideoDecoderView: muxFrame frame: 101 size: 13771 NAL: 1 Flags: 0 PTS: 3366708 content: 0:000 1:000 2:000 3:001 4:065 5:224 6:130 7:020
09.775 D/MPEG4Writer: Video track stopping. Stop source
09.775 I/MPEG4Writer: Received total/0-length (100/1) buffers and encoded 100 frames. - Video
09.775 D/MPEG4Writer: Video track source stopping
09.775 D/MPEG4Writer: Video track source stopped
09.775 D/MPEG4Writer: Video track stopped. Stop source
09.775 D/MPEG4Writer: Stopping writer thread
09.776 D/MPEG4Writer: 0 chunks are written in the last batch
09.779 D/MPEG4Writer: Writer thread stopped
09.780 I/MPEG4Writer: Ajust the moov start time from 66708 us -> 66708 us
09.780 D/MPEG4Writer: Video track stopping. Stop source

A: @greeble31: You are right. The first log entry clearly states "Pictures" and not "Videos". I spent hours looking at this problem without noticing a simple cut&paste mistake in my preferences keys. How stupid is that!!?! Note to myself: Coding two days & nights in a row is not heroic but just plain stupid.
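An aside on the posted helper, independent of the path mix-up that turned out to be the culprit: `computePresentationTime` multiplies two `int`s before the division, so `frameIndex * 1000000` wraps around once the frame index exceeds roughly 2148 (about 72 seconds at 30 fps) and the method starts returning negative timestamps. The sketch below is hypothetical code, not from the original post; `PtsDemo`, `ptsInt` and `ptsLong` are names invented for illustration, with `FRAME_RATE = 30` assumed.

```java
// Hypothetical demo of the int overflow latent in computePresentationTime.
public class PtsDemo {
    static final int FRAME_RATE = 30; // assumed frame rate

    // Mirrors the original: int * int overflows before the widening to long.
    static long ptsInt(int frameIndex) {
        return 42 + frameIndex * 1000000 / FRAME_RATE;
    }

    // Widening to long before the multiply keeps the timestamp exact.
    static long ptsLong(int frameIndex) {
        return 42L + frameIndex * 1_000_000L / FRAME_RATE;
    }

    public static void main(String[] args) {
        System.out.println(ptsLong(3));    // 100042, matching the log's PTS for frame 3
        System.out.println(ptsInt(3000));  // negative: 3000 * 1000000 wrapped around
        System.out.println(ptsLong(3000)); // 100000042
    }
}
```

For a short 100-frame test the two versions agree, which is why the overflow would only bite on longer recordings.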
Regina West was a federal electoral district in Saskatchewan, Canada, that was represented in the House of Commons of Canada from 1979 to 1988.

The riding was created in 1976 from parts of the Regina—Lake Centre riding. It consisted of the part of the City of Regina lying west of Albert Street, and adjacent rural areas and Indian Reserves. It was abolished in 1987, when it was redistributed into the Regina—Lumsden and Regina—Wascana ridings. Its representative was Les Benjamin.

See also
- List of Canadian federal electoral districts
- Past Canadian electoral districts
/*
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package com.facebook.presto.verifier.event;

import com.facebook.airlift.event.client.EventField;
import com.facebook.airlift.event.client.EventType;

import javax.annotation.concurrent.Immutable;

import java.util.List;
import java.util.Optional;

import static java.util.Objects.requireNonNull;

@Immutable
@EventType("QueryInfo")
public class QueryInfo
{
    private final String catalog;
    private final String schema;
    private final String originalQuery;
    private final String queryId;
    private final String checksumQueryId;
    private final String query;
    private final List<String> setupQueries;
    private final List<String> teardownQueries;
    private final String checksumQuery;
    private final Double cpuTimeSecs;
    private final Double wallTimeSecs;

    public QueryInfo(String catalog, String schema, String originalQuery)
    {
        this(catalog, schema, originalQuery, Optional.empty(), Optional.empty(), Optional.empty(),
                Optional.empty(), Optional.empty(), Optional.empty(), Optional.empty(), Optional.empty());
    }

    public QueryInfo(
            String catalog,
            String schema,
            String originalQuery,
            Optional<String> queryId,
            Optional<String> checksumQueryId,
            Optional<String> query,
            Optional<List<String>> setupQueries,
            Optional<List<String>> teardownQueries,
            Optional<String> checksumQuery,
            Optional<Double> cpuTimeSecs,
            Optional<Double> wallTimeSecs)
    {
        this.catalog = requireNonNull(catalog, "catalog is null");
        this.schema = requireNonNull(schema, "schema is null");
        this.originalQuery = requireNonNull(originalQuery, "originalQuery is null");
        this.queryId = queryId.orElse(null);
        this.checksumQueryId = checksumQueryId.orElse(null);
        this.query = query.orElse(null);
        this.setupQueries = setupQueries.orElse(null);
        this.teardownQueries = teardownQueries.orElse(null);
        this.checksumQuery = checksumQuery.orElse(null);
        this.cpuTimeSecs = cpuTimeSecs.orElse(null);
        this.wallTimeSecs = wallTimeSecs.orElse(null);
    }

    @EventField
    public String getCatalog() { return catalog; }

    @EventField
    public String getSchema() { return schema; }

    @EventField
    public String getOriginalQuery() { return originalQuery; }

    @EventField
    public String getQueryId() { return queryId; }

    @EventField
    public String getChecksumQueryId() { return checksumQueryId; }

    @EventField
    public String getQuery() { return query; }

    @EventField
    public List<String> getSetupQueries() { return setupQueries; }

    @EventField
    public List<String> getTeardownQueries() { return teardownQueries; }

    @EventField
    public String getChecksumQuery() { return checksumQuery; }

    @EventField
    public Double getCpuTimeSecs() { return cpuTimeSecs; }

    @EventField
    public Double getWallTimeSecs() { return wallTimeSecs; }
}
Q: JSON not in Correct Format

What's wrong with my JSON?

{
  "id": 1,
  "album": "127 Hours",
  "songs": [{
    "id": 1,
    "name": "Never Hear Surf Music Again",
    "duration": "5:52"
  }],
  "id": 2,
  "album": "Adele 21",
  "songs": [{
    "id": 1,
    "name": "Rolling In The Deep",
    "duration": "03:48"
  }]
}

Before using the JSON in my app, I am checking it here: http://json.parser.online.fr/ but I get a JSON Eval for id: 2 only, not for both id: 1 & 2.

A: You can't have a property more than once in an object, and you have the "id" property twice, as well as "album" and "songs". Even if it's a valid (parsable) JSON string, it doesn't store what you want it to store, or more precisely it can't be restored into an object with all those property values. You should make it an array of objects:

[
  {
    "id": 1,
    "album": "127 Hours",
    "songs": [{
      "id": 1,
      "name": "Never Hear Surf Music Again",
      "duration": "5:52"
    }]
  },
  {
    "id": 2,
    "album": "Adele 21",
    "songs": [{
      "id": 1,
      "name": "Rolling In The Deep",
      "duration": "03:48"
    }]
  }
]

A: You have a single top-level object with two mappings for each of the keys id, album and songs. This is not valid - the keys of an object should be unique. You probably need the top level to be either a list of objects

[
  {
    "id": 1,
    "album": "127 Hours",
    "songs": [{
      "id": 1,
      "name": "Never Hear Surf Music Again",
      "duration": "5:52"
    }]
  },
  {
    "id": 2,
    .....

or change the representation slightly and use a top-level object keyed by album ID

{
  "album_1": {
    "title": "127 Hours",
    "songs": [{
      "id": 1,
      "name": "Never Hear Surf Music Again",
      "duration": "5:52"
    }]
  },
  "album_2": {
    "title": "Adele 21",
    ....

Which representation is more appropriate depends on your use case. The list-of-objects representation lets you iterate over the list of albums in a predictable order but means you need a (linear or binary) search to find the album with a particular ID. The top-level object representation lets you find the details for a single album ID more easily, but you're no longer guaranteed to get the albums in the same order when iterating.
A:

{
  "Album list": [
    {
      "id": 1,
      "album": "127 Hours",
      "songs": {
        "id": 1,
        "name": "Never Hear Surf Music Again",
        "duration": "5:52"
      }
    },
    {
      "id": 2,
      "album": "Adele 21",
      "songs": {
        "id": 1,
        "name": "Rolling In The Deep",
        "duration": "03:48"
      }
    }
  ]
}

A: I believe this is how your JSON should be:

[
  {
    "id": 1,
    "album": "127 Hours",
    "songs": [{
      "id": 1,
      "name": "Never Hear Surf Music Again",
      "duration": "5:52"
    }]
  },
  {
    "id": 2,
    "album": "Adele 21",
    "songs": [{
      "id": 1,
      "name": "Rolling In The Deep",
      "duration": "03:48"
    }]
  }
]

A: I edited your question with proper indentation for your JSON. You can now easily see that you have duplicate properties, specifically id, album and songs. In valid JSON you cannot have repeated property names - hence your error. Most likely, you want to have an array of objects rather than everything together. I suspect the following JSON will satisfy your needs:

[
  {
    "id": 1,
    "album": "127 Hours",
    "songs": [{
      "id": 1,
      "name": "Never Hear Surf Music Again",
      "duration": "5:52"
    }]
  },
  {
    "id": 2,
    "album": "Adele 21",
    "songs": [{
      "id": 1,
      "name": "Rolling In The Deep",
      "duration": "03:48"
    }]
  }
]
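The "last value wins" behavior the answers describe can be seen without any JSON library: parsers typically deserialize an object into a map, and a map keeps only the most recent value stored under a repeated key. A hypothetical sketch (class and method names invented for illustration, not from any of the answers):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: storing a duplicated key in a map silently overwrites the earlier
// value, which is why the online parser reported only id 2.
public class DupKeyDemo {
    static Map<String, Object> buildAlbum() {
        Map<String, Object> album = new LinkedHashMap<>();
        album.put("id", 1);              // first occurrence
        album.put("album", "127 Hours");
        album.put("id", 2);              // duplicate key: replaces 1
        album.put("album", "Adele 21");  // duplicate key: replaces "127 Hours"
        return album;
    }

    public static void main(String[] args) {
        Map<String, Object> album = buildAlbum();
        System.out.println(album.get("id"));    // 2
        System.out.println(album.get("album")); // Adele 21
        System.out.println(album.size());       // 2 entries, not 4
    }
}
```

Making the top level an array, as the answers suggest, sidesteps the overwrite because each album lives in its own object.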
Source: https://dl1.cuni.cz/course/view.php?id=11797&lang=ru

## Course calendar

### Quantitative methods setup and materials

Install the package Tidyverse in R using the command (see Wickham and Grolemund 2017: xvi):

install.packages("tidyverse")

Check if the installation was successful and load the package into R:

library(tidyverse)

### 1) Organization, types of arguments, concepts vs. theories

### Concepts and Measures

You will work with 14 Czech administrative units - "kraje".

It is often argued that socio-economic frustration leads to votes for far-left or far-right parties. Let's assume that economic frustration (at the regional level) has three components: unemployment, average salary, and low education (education lower than the high-school "Matura").

Provide at least three alternative operationalizations aggregating the above-mentioned components. Explain briefly the logic behind their construction and try to illustrate the difference in the output classification (measurement) produced by the alternative operationalizations. Briefly defend which operationalization is best.

In the next step, try to conceptualize and operationalize a "far-(right/left)" political party. In this case the eventual classification must be binary (0; 1).

Then take the data from 2021 and measure the "kraje" according to economic frustration, and measure/classify the political parties participating in the elections (with more than 1.5 %).

Plot the putative cause (economic frustration) against the putative effect (share of far-left/right parties).

Provide a brief evaluation of the putative relationship.

Hand in as a PDF (title page and the text with inserted tables and charts) plus Excel (tables with calculations and charts; you may want to use the first sheet as a codebook).

### Building a database

Database: COVID impact

Your task is to build a dataset containing (i) GDP growth predictions of the European Commission (EC) for the EU countries (including the UK) for 2020 and 2021 as issued just before COVID (Feb '20), and (ii) current estimates of the actual GDP growth in 2020 and predictions for 2021 (EC, Feb '21). Add to this dataset the pre-COVID values for three macro-economic indicators: (a) GDP per capita (constant), (b) (total) GDP, (c) gross governmental debt (for 2019). Find the data for the macroeconomic indicators in the IMF WEO database (use the April 2020 edition).

Calculate the COVID-related GDP loss as a percentage (2019 = 100) for 2020 and 2021. For this task, assume that the pre-COVID predictions would have been perfect (had there been no COVID).

Try to plot all the variables. It is possible that some have extremely skewed distributions; in that case, use log or another appropriate transformation.

Hand in as a single Excel document (don't forget to include a proper codebook sheet).

### Analytical description

Our research often starts with a descriptive inquiry that may identify some interesting patterns for further research.

Your (GROUP) task will be to prepare a short report on the COVID impact on the EU (+UK) economies.

1) Introduce briefly the state of the EU members' economies just before COVID (end of 2019 / early 2020). Support your argument with a table and a chart (which might contain more than one data series).

2) Describe briefly the COVID-induced GDP loss over 2020 for the EU economies (UK included). Don't describe all the countries; rather, focus on the general trend (within-EU variance might be interesting here) plus the extreme (or otherwise interesting) cases. Support your description with a table and a chart.

3) This leads us to ask a logical question: what is behind the differences in the COVID-induced GDP loss? So in the third section look at the possible relationship between the GDP loss (dependent variable) and (i) debt (gross governmental debt), (ii) GDP per capita, (iii) COVID deaths for 2020 (see WHO for the data), (iv) democracy. Given the nature of this task, your inquiry here should be centered on charts (simply plot and compare the putative relationships). Provide a brief evaluation of the visual inspection: which factors seem most promising for further research.

4) Provide a brief conclusion.

Don't forget the formal requirements.

Hand in as a Word/PDF document (text plus the tables and charts) and an Excel file (the data from which you will prepare your report).

### Causality

HW 4: Causality. Read ch. 5 and 6 (Gerring & Christenson book).

1) Define a class of events you would like to explain (define your Y). For example: why do some (otherwise comparable) countries spend more or less on defense?

2) Try to generate three alternative theories/explanations (use different frameworks; combinations allowed). You need to define the IVs (conceptualize them and, if needed, provide a brief operationalization) and define the logic of your explanations. So eventually you will define the independent variable (or variables) and the general logic of a mechanism connecting IV and DV. At least one explanation will use two IVs!

3) In the next step, construct the (simplified) counterfactual outputs for alternative values of your IVs, e.g. low X leads to high Y, high X leads to low Y. A 2x2 table could be useful here.

4) Try to find at least two actual empirical cases within the scope of your theory, measure (estimate) the IVs, and propose a hypothesis (regarding the value of your DV). Compare the "predictions" (hypothesized values) for your two cases with the actual situation (actual values of the DV).

5) Summarize your findings into a nice (read "J. Schwabishian") table plus a short text.

Your theories will have to be quite simple (naive if you want) unless you want to spend the next 30 hours on thorough research. If some data for your cases are missing, you can make "an expert estimate" (in that case, mention it).

### Mixed methods

How to select cases from a larger population or from a statistical model.
Q: Windows Server 2008 SBS incorrectly reports that external domains are internal

I'm having a problem with some of our domains on our external web server, which appeared just a couple of days ago. We have several domains on the external web server, but 3 of them are refusing to resolve from our internal network. From outside our internal network those 3 problem domains are fine. The other domains that are on the same external server are resolving as they should. I checked DNSWatch and all the domains are reporting the correct results.

So I ran nslookup with the debug option and I notice that it says:

QUESTIONS: ourdomain.co.uk.wsd.local, type = AAAA, class = IN

The apparent problem being that it thinks that this external domain is in fact an internal one. I tried /flushdns on a workstation, and on the server itself. No dice. I don't think this is the often-reported problem seen when not using forwarders for resolution, because the symptoms don't fit. Am I missing something really obvious?

UPDATE: This issue seems to have disappeared with no action taken. I'm still intrigued by this, as it was originally working, then it stopped, and then it started working again. Problems that disappear by themselves make me nervous. No telling when they will come back.

A: There's enough information missing* to make it somewhat of a puzzle, but I can make an educated guess that you're using split-horizon DNS and there are A records for your external websites on your public DNS server(s) and your internal SBS machine. Simply open up your DNS console on SBS and make A records for those three domains and point them to the associated external IP address for the websites. I suspect that perhaps the external web server was at one time joined to the domain, and that could be why there are .local records for at least one of your websites already in your AD-integrated DNS zone. While you're in the DNS console, check the sanity of the included records.
Compare the records with the computers joined to the domain and see if there are any stale records. See if any computers need to be removed from the domain gracefully, or have been removed physically and you simply need to delete the computer object. Consider turning on aging and scavenging in the DNS, once you're thoroughly aware of what that will do. *(questions about hairpin NAT, external server domain membership, virtual hosts, VPN tunnels, and IPv6 have been excluded for brevity, your mileage may vary, no guarantee of success is issued or implied)
Talks - Four Stages of Vedanta Versus Siddhanta

Trilogy Commentary, MWS Lesson 317

Gurudeva blends the practices of Vedanta and Siddhanta together in his teachings. Charya, kriya, yoga and jnana: the progressive padas of Saiva Siddhanta. "Jnana is the blossoming of wisdom, of enlightened consciousness, of true being." What makes one a jnani, in Siddhanta, is not study: "It is when the yogi's intellect is shattered that he soars into Parasiva and comes out a jnani." Master Course Trilogy, Merging with Siva, Lesson 317.

Taking an excerpt from our Merging with Siva, Lesson 317, which I think is yesterday's, and giving a few comments on it. "For Hindus, the path is seen as divided into four stages or phases of inner development. Some say karma yoga, bhakti yoga, raja yoga and jnana; others say charya, kriya, yoga and jnana." So, I'll add a few comments on how they're different in the popular mind. Charya, kriya, yoga and jnana of course are the practices as they're explained in Saiva Siddhanta. And karma yoga, bhakti yoga, raja yoga and jnana yoga are the practices as they're explained in Vedanta. Gurudeva nicely blends them together in his teachings. But the way the yogas are normally explained, they're not necessarily blended to mean the same thing.

So, one of the basic differences is progressive versus non-progressive. Progressive is like going to school. You go to the first grade, then the second grade, then the third grade, then the fourth grade. It's progressive. You have to have a certain accomplishment in each stage to move on to the next. Otherwise, you don't move on. And non-progressive, of course, means they're not structured like that. Yet another way of looking at it is the ladder. The ladder is very simple. If you want to end up on the fifth rung of the ladder you don't run and jump, right? No, you step on the first, then the second, then the third, then the fourth. And then you get up to the fifth.
So that's the progressive idea. In terms of the yogas, different Hindu groups look at them quite differently, but I think the most general description is this one, which was on the Vedanta Society of Southern California's website for many years before they took it off. "Spiritual aspirants can be broadly classified into four psychological types: the predominantly emotional, the predominantly intellectual, the physically active, and the meditative. There are four primary yogas designated to 'fit' each psychological type. "For the predominantly emotional bhakti yoga is recommended..." Whoa! That's an interesting statement, huh? Certainly not the way we look at it. "...for the predominantly intellectual we have jnana yoga, for the physically active there is karma yoga and for the meditative raja yoga is recommended." Very interesting way of looking at it. Well, that's actually the broadest statement. So you choose the one according to your nature. Very interesting idea.

Another approach is: one of the four yogas is the highest path and therefore should be followed by everyone. A good example of that is the Vaishnava organizations. Of course, Vaishnava organizations would say bhakti yoga is the highest path, right? We should all be performing bhakti yoga. Any other kind of yoga just leads up to the bhakti yoga path, the way they explain it. So for example, Sri Ramanuja just says that: "In preparation for meditation, or the contemplative remembrance of the Divine one should engage in karma yoga." Very simple idea. Then of course we have the Adi Shankara tradition. And in their tradition, of course, everyone needs to work up to jnana yoga. That's the highest yoga. This is found in Adi Shankara's "Vivekachudamani": "Work is for the purification of the mind, not for the perception of Reality. The realization of Truth is brought about by discrimination, not in the least by ten millions of acts." Very nicely said.
So that gives you a sense of how diverse it is, and of course, if someone isn't that knowledgeable it can be kind of confusing. I wrote a Publisher's Desk on that and suggested: well, what do you do if you're confused by the different explanations as to which yoga you should follow? Well, if you have a guru, then however he or she explains it is the correct answer. And if you don't, then Gurudeva's approach would be: you start with karma yoga, master service, and then you move on to bhakti yoga, then eventually you move on to raja yoga. That's Gurudeva's way of structuring it. Well, that's the first difference.

The second difference is that charya, kriya, yoga and jnana are what I call "temple centric." In other words, their practice is really designed for when there's an agamic temple that you can get to on a regular basis. That's how they're structured. So charya, which Gurudeva explains is similar to karma yoga, is explained in the Tirumantiram as follows, and you can see how it's totally temple centric. "To do the simple service of placing the lighted lamps, to collect the flowers from the trees and plants, to coat the ground with cow-dung, being with softened heart, to sweep the floor gently, to praise the Lord, to ring the bells fixed in the temple, to arrange for various kinds of ceremonial bath for the Lord - performance of such deeds related to the temple is the characteristic of charya." So everything there requires the temple. Totally temple centric.

Compare this to the next story, a wonderful story from Swami Sivananda about karma yoga at Mahatma Gandhi's ashram. "Swami Sivananda shares a relevant story of karma yoga training provided by Mahatma Gandhi at his ashram. 'Study the autobiography of Mahatma Gandhiji. He never made any difference between menial service and dignified work. Scavenging and cleaning of the latrine was the highest Yoga for him. This was the highest puja for him. He himself did the cleaning of latrines.
He annihilated the illusory little 'I' through service of various sorts. Many highly educated persons joined his ashram for learning yoga under him. They thought that Gandhiji would teach them yoga in some mysterious manner in a private room and would give lessons on pranayama, meditation, awakening of kundalini, etc.

"They were disappointed when they were asked to clean the latrine first. They left the Ashram immediately. Gandhiji himself repaired his shoes. He himself used to grind flour and take upon his shoulders the work of others also when they were unable to do their allotted work for the day in the Ashram. When an educated person, a new ashramite, felt shy to do the grinding work, Gandhiji himself would do his work in front of him and then the man would do the work himself from that next day willingly."

Swami Sivananda adds the comment: "He who has understood the right significance of karma yoga will take every work as yogic activity or worship of the Lord. There is no menial work in his vision. Every work is puja of Narayana. In the light of karma yoga all actions are sacred. That aspirant who always takes immense delight in doing works which are considered by the worldly man as menial services, and who always does willingly such acts only will become a dynamic yogi. He will be absolutely free from conceit and egoism. He will have no downfall. The canker of pride cannot touch him."

Wonderful story, right? Well, you can see how part of seva is doing humble service. But what's missing in the ashram story? The temple, right? There was no agamic temple, whereas in the agamic practice of charya everything you do at the temple is done as the servant of the Deity. In this example it's in an ashram and there's no temple involved. So, there are differences, but they're both trying to produce the same result: give us humility in the sense of being of service.

Then the last difference is in jnana. Gurudeva brings this up a few times in the Master Course Trilogy.
The difference is between the jnana pada and jnana yoga as it's sometimes interpreted. So, start out here with a quote from Gurudeva, from the same lesson.

"Jnana is the last stage. Most people don't understand jnana. They think it is little more than intellectual study of the path, a simple kind of wisdom. But jnana does not mean simplistic reading of scriptures or understanding of philosophical books and knowing pat answers to stereotyped questions. Jnana is the blossoming of wisdom, of enlightened consciousness, of true being. Jnana is the state of the realized soul who knows Absolute Reality through personal experience, who has reached the end of the spiritual path after many, many lifetimes."

So what Gurudeva's commenting on is that jnana yoga, when it's not approached in its depth, can just be the reading of books. I'm a jnani, I read a book and I can tell you everything that's in it. But, Gurudeva's pointing out, you know, that's not the idea of jnana.

Jnana yoga, when it's approached in its traditional way in Vedanta, is described next here: "Not knowledge in the intellectual sense--but the knowledge of Brahman and Atman and the realization of their unity. Where the devotee of God follows the promptings of the heart, the jnani uses the powers of the mind to discriminate between the real and the unreal, the permanent and the transitory." That's a very nice statement.

And then, when you're following jnana yoga in its traditional way, there are four disciplines: Sravana -- listening to scripture; Manana -- thinking and reflecting; Nididhyasana -- constant and profound meditation; Atma sakshatkara -- direct realization.

So in Siddhanta we don't study scripture for that purpose. We're not using scripture as the basis of discriminating between the Real and the unreal. We're trying to end up in the state of jnana through our practice of raja yoga or the yoga pada.
So the description of that is: "Through yoga one bursts into the superconscious mind, experiencing bliss, all-knowingness and perfect silence. It is when the yogi's intellect is shattered that he soars into Parasiva and comes out a jnani." Nicely said. So, what makes someone a jnani in Siddhanta is not study, it's soaring into Parasiva, and that produces wisdom.

Yogaswami was very outspoken on the idea that the knowledge is within you and you don't need to read a book to obtain the deepest knowledge. So we have a few quotes on that, it's the last idea. Paramaguru Yogaswami often spoke of the superiority of one's inner scripture to any outer writing. He said: "Instead of spending time in book-reading, it is better to spend it in studying yourself. Study is also a kind of yoga." Second quote: "The book is within you. Turn over the leaves and study." Third quote: "Truth is not encompassed by books and learning. You must know yourself by yourself. There is nothing else to be known." And, the last quote: "It must come from within. Don't rely on book knowledge. Trust the self alone."

Thank you very much. Have a wonderful day. Watch for those small incidents that imperceptibly get under your skin and create an eruption a few days later.
New Hampshire Sex Crimes Lawyer

The stigma that follows a sexual assault charge is serious. Sex crimes carry a particular stigma that other crimes do not. The very label "sex offender," regardless of the details of the charge, can determine where a person can work or even live for the rest of their life. Long after a person has paid their debt to society, they can carry the stigma of a sex offense. However, a qualified New Hampshire sex crimes lawyer may prevent that from happening. Contact an experienced defense attorney for help building a strong defense to your sex crimes charges.

Types of Sex Crimes in New Hampshire

Some may be surprised to know what qualifies as a sex crime in New Hampshire. Common sexual offenses in the state of New Hampshire include:

Child pornography and distribution

Even if you are familiar with the terminology, it might be surprising what is considered "exposure," "assault," or "pornography." When facing these confusing legal issues and jargon, and the possibility of a serious criminal conviction, it is advisable to speak with a reputable New Hampshire sex crimes attorney as early as possible. A knowledgeable New Hampshire sex crimes lawyer is available to help navigate their client through the overwhelming, confusing, and troubled waters of a criminal proceeding.

Consequences of Conviction

The penalties for a sex crime conviction, both state-sanctioned and societal, can be far-reaching. In addition to substantial jail or prison sentences, a sex crime conviction may make it impossible for a person to secure a well-paying job, live in certain areas, or even obtain a loan. Hefty fines, probation, and mandatory sex-offender programs may also be involved.

Role of a Sex Crimes Attorney

If charged with a sex crime, individuals should not hesitate to speak with a qualified New Hampshire attorney as quickly as possible.
New Hampshire lawyers take a proactive approach, often assisting their clients before charges are formally brought against them. They are here to protect the rights of their clients from the minute they are brought in for questioning, until the final decision has been reached, and beyond. A lawyer will work to help attain the most positive outcome available for their client. In many cases, charges are dropped or reduced before a client ever has to set foot inside a courtroom. They are experienced trial lawyers who are prepared to meticulously and aggressively defend their client before a judge and jury.

Sex crimes lawyers from New Hampshire are passionate advocates who listen with understanding and fight with tenacity. They will carefully examine the particular nuances of the case and mount a strong defense.

Contact a New Hampshire Sex Crimes Attorney

With careful strategy, sharp minds, and years of experience, skilled attorneys are willing and able to help you. It is the right of any U.S. citizen to have a fair trial regardless of the charges. Do not let society or the judicial system rule the narrative around your life. A licensed attorney, one versed in state, federal, and local law, can help you fight back. Contact a New Hampshire sex crimes lawyer today.

Law Offices of Mark Stevens
5 Manor Pkwy, Salem, NH 03079
Q: Race Condition Vulnerability Lab

I have a lab where I have to use this SeedUbuntu virtual machine for a race condition vulnerability. There is a C program that I need to utilize to create the attack so I can access the shadow file. Here is the link to the lab with the programs and PDF information: http://www.cis.syr.edu/~wedu/seed/Labs/Vulnerability/Race_Condition/

What I am confused about is exactly how to approach this. I can compile and run the vulp.c program and the shell script in separate terminals, but what do I do next? What sort of code or commands do I need to execute? I'm so confused and I would really appreciate any assistance in solving this task! Thank you!

    /* vulp.c */
    #include <stdio.h>
    #include <string.h>   /* needed for strlen() */
    #include <unistd.h>

    #define DELAY 10000

    int main()
    {
        char *fn = "/tmp/XYZ";
        char buffer[60];
        FILE *fp;
        long int i;

        /* get user input */
        scanf("%50s", buffer);

        if (!access(fn, W_OK)) {
            /* simulating delay (note: i^2 is bitwise XOR, not a square) */
            for (i = 0; i < DELAY; i++) {
                int a = i ^ 2;
            }
            fp = fopen(fn, "a+");
            fwrite("\n", sizeof(char), 1, fp);
            fwrite(buffer, sizeof(char), strlen(buffer), fp);
            fclose(fp);
        } else {
            printf("No permission \n");
        }
    }

A: The way I solved it when I was a student at SU: two different shells execute two loops simultaneously. keep_run continuously runs the vulp.c program with an input string passed in a file called FILE. keep_attack creates the file first, then removes it, and then creates a symbolic link to our targeted file that belongs to root.
We need the following order of execution of commands in order for our attack to succeed:

* keep_attack >> touch /tmp/XYZ
* keep_run >> running vulp and checking the file (doing the access command)
* keep_attack >> rm /tmp/XYZ
* keep_attack >> ln -s target_file /tmp/XYZ
* keep_run >> reaching the command that tries to OPEN the file

(Make sure the program has the set-uid bit and is owned by root -- not a setuid program owned by a non-root user.)

A: First, you must compile the vulp.c program:

    gcc vulp.c -o vulp

Next you must change the vulp executable to a set-UID program:

    sudo chown root vulp
    sudo chmod 4755 vulp

Now, notice in the source code of vulp.c that when vulp is run we first must get input from the user. The program stays at the scanf command until input is provided. So there is no access control check until the user provides input; that is, we never get to if(!access(...)) until after input is provided.

Now, before we run vulp, let us first change /tmp/XYZ to point to /dev/null, because we know our regular user account has write access on this file. If we can pass the if(!access(...)) access control test, we can get into the critical section. So the idea is that /tmp/XYZ points to /dev/null during the access control check; we pass the check and get into the critical section. Then you simulate having a slow machine with the DELAY loop. While this loop is executing we want to change our symbolic link /tmp/XYZ to point to /etc/shadow. Thus, when the fopen() call is executed on /tmp/XYZ, we are opening /etc/shadow for write.

To summarize:

Step 1 in terminal 1: ln -sf /dev/null /tmp/XYZ
Step 2 in terminal 2: ./vulp
Step 3 in terminal 2: supply the input that you want written to the /etc/shadow file
Step 4 in terminal 1 (do this quickly after supplying input): ln -sf /etc/shadow /tmp/XYZ

After these steps, whatever you supplied as input in Step 3 will be written to /etc/shadow.
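To make the race window concrete, here is a small, self-contained sketch of the symlink flip the attacker loop performs. All paths here are placeholders under a temp directory (an assumption for safe illustration); the real attack flips /tmp/XYZ between /dev/null and /etc/shadow in a tight loop instead.

```shell
#!/bin/sh
# Illustrative symlink flip: $DIR/XYZ stands in for /tmp/XYZ, and the
# harmless/secret targets stand in for /dev/null and /etc/shadow.
DIR=$(mktemp -d)

ln -sf "$DIR/harmless" "$DIR/XYZ"   # state seen by vulp's access() check
readlink "$DIR/XYZ"

ln -sf "$DIR/secret" "$DIR/XYZ"     # state seen by the later fopen()
readlink "$DIR/XYZ"

rm -rf "$DIR"
```

In the lab, keep_attack simply runs the two ln -sf commands in an endless loop while keep_run reruns ./vulp, until one flip happens to land inside vulp's DELAY window.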
Q: java equivalent of dynamically allocated arrays in C++

I'm learning java after having programmed in C++ for a while, and I'm wondering if you can dynamically allocate an array in java as you do in C++. Say in C++ we do:

    int* array = new int[arraySize]; //allocate an array
    delete[] array; //delete it

Can you do the same in java or is there a java equivalent that basically does the same thing? Thanks!

A: Yes you can, with a small syntax correction:

    int arraySize = 10; // may be resolved at runtime even
    int[] array = new int[arraySize];

A: You can create new arrays using

    int[] myNewArray = new int[myArraySize]; // myArraySize being an int

or use a List like ArrayList, which is resizable. In java, deletion is handled by the garbage collector, so you usually don't call any methods to manually remove your objects. Instead, you can simply change the reference to null:

    myNewArray = null;

The next time the garbage collector runs, it may delete the "array" object. You can also manually notify the garbage collector using

    System.gc();

but you can't be sure your object will be deleted at this time.
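Putting the two answers together, here is a short, self-contained comparison; the class and variable names are just for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class DynamicArrays {
    public static void main(String[] args) {
        // Fixed-size array whose length is only known at runtime:
        // the closest analogue of C++ `new int[arraySize]`.
        int arraySize = Integer.parseInt("10");
        int[] array = new int[arraySize];
        array[0] = 42;
        System.out.println(array.length);   // 10

        // Resizable alternative: ArrayList grows as elements are added.
        List<Integer> list = new ArrayList<>();
        list.add(42);
        list.add(7);
        System.out.println(list.size());    // 2

        // No delete[]: dropping the reference lets the garbage
        // collector reclaim the array at some later, unspecified time.
        array = null;
    }
}
```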
\section{ABSTRACT}\label{sec:intro}
\input{input/abstract}
\section{Interference}\label{sec:interference}
\input{input/interference}
\section{Profitable Algorithm}\label{sec:greedy}
\input{input/greedy}
\section{Environment Implementation}\label{sec:env}
\input{input/env}
\section{Conclusion}\label{sec:conclusion}
\input{input/conclusion}
\bibliographystyle{IEEEbib}

\subsection{Packets}
The packet's fields are listed in Table \ref{table:packet}. This model contains several methods.
\begin{description}
\item[packet step] Changes the packet's current location to the next hop, if one exists. If it reaches its destination, it returns the number of packets received.
\item[get next hop] Returns the next hop station of the packet.
\end{description}

\subsection{Buffers}
\begin{table}[tb]
\small
\centering
\scalebox{0.9}{
\begin{tabular}{l|| cl c l c}
\hline \hline
Field & Purpose \\ \hline
Name & ID of the station. \\
Connection matrix & Information of the topology. \\
Shortest path list & List of the shortest paths in the topology. \\
Out links & Dictionary of mmWave out-going links. \\
Max transceiver & Maximum number of transceivers. \\
Current transceiver & Current number of transceivers. \\
\hline \hline
\end{tabular}}
\caption{Station's fields and purpose.}
\label{table:station}
\end{table}

\begin{table}[tb]
\small
\centering
\scalebox{0.7}{
\begin{tabular}{l|| cl c l c}
\hline \hline
Field & Purpose \\ \hline
Source name & Name of the station the buffer is located in. \\
Out going link to & Name of the station the queue intends to send packets to.\\
Flows & Dictionary containing packets that are waiting in the buffer. \\
Total flows & The number of different flows that the buffer currently holds. \\
Link's maximum capacity & The associated mmWave link's maximum capacity per step.\\
Used bw & Amount of used bandwidth of the current step. \\
Power & The chosen level of power used by the mmWave link \\
 & associated with a source and destination of the buffer.
\\
Current packets & Total number of packets in the buffer. \\
Max packets & Maximum number of packets the buffer can hold before dropping packets. \\
Dropped packets & The number of packets this buffer has dropped during a step.\\
\hline \hline
\end{tabular}}
\caption{Buffer's fields and purpose.}
\label{table:buffer}
\end{table}

The buffer's fields are listed in Table \ref{table:buffer}. This model contains several methods.
\begin{description}
\item[add flow to q] - Adds a packet-based flow to the buffer. If the buffer overflows, packets are dropped.
\item[remove flow from q] - Removes a packet-based flow from the buffer. Additionally, the relevant fields are updated.
\item[zero bw in buffer] - The used bandwidth and dropped packets fields are set to zero.
\item[get total data] - Returns the total number of packets that are in the buffer.
\item[get dropped packets in q] - Returns the total number of packets dropped during the step.
\end{description}

\subsection{Station}
The station's fields are listed in Table \ref{table:station}. This model contains several methods.
\begin{description}
\item[initialize out queues] - Sets the buffers in that station to their default values.
\item[add flow] - Adds a flow into the station's buffers.
\item[is link activated] - Returns true if the link is active for the current step.
\item[remove flow] - Removes a flow from the station's buffer.
\item[update active links] - Activates the mmWave links with the specified power level for each of them.
\item[get buffers observations] - Returns a dictionary containing, for each buffer in the station, a triple: the total number of data packets in the buffer, the percentage load on the buffer (between [0,1]), and the total number of packets dropped by the buffer during the step.
\item[zero bw] - Method zero bw in buffer is called for each of the station's buffers.
\item[get dropped packets] - Returns the sum of dropped packets over the station's buffers by calling "get dropped packets in q."
\end{description}

\subsection{Interference Model}
Section~\ref{sec:interference} explains the environment's interference model. Each instance of the interference model is an $I(l,l')$ matrix of size [number of mmWave links $\times$ number of mmWave links], with link $l$ interfering with every other mmWave link $l'$ by the value in the matrix.

\subsection{Network Topology}
This module handles the task of creating a topology. Python's networkx module is used to generate the topology. The input is a dictionary of dictionaries containing the link weights for mmWave. Each weight is selected at random from a uniform distribution.

We will now describe the environment in which the Deep Reinforcement Learning algorithm will operate, using Stable Baselines3 agents \cite{Raffin_Stable_Baselines3_2020}. Table \ref{table:env} lists the environment's fields. The main methods of the environment are:
\begin{description}
\item[generate demand random] Generates random data for the next episode. SB3's regular reset method makes use of this. It generates (demand matrix, total packets, flow list, and interference) for the following episode.
\item[process flows] Moves packets in the system one step. It decreases the total packet counter until it reaches zero. An episode ends when the number of flows in the system reaches zero.
\item[get dropped packets] Returns the sum of dropped packets for the passing step.
\item[get state observation] Returns the next state's observation.
\item[adopt interference] Before proceeding to the next step, it computes the effective power of each mmWave link using the interference matrix.
\item[update active links and bw] Before the start of the step, the method first zeroes the used bandwidth of the various mmWave links in the system, and then it calls "update interference" to update the next effective power of each mmWave link in the system, according to the chosen actions.
\item[reset] The standard reset method for SB3, inheriting from Gym. For the training phase, this reset generates data at random.
\item[reset custom] A special reset method for evaluating DRL algorithms against the heuristics-based algorithm. This reset uses data from a list that was created at the start of the program's execution, ensuring that the comparison is exact on the same data.
\item[reward] Reward function for SB3 to use.
\item[step] This is the step method that the SB3 library uses, inheriting from Gym. At each step, the environment employs all models and moves packets until they are dropped or reach their final destination.
\item[convert actions to edges] An adapter method for two cases, depending on whether a heuristics-based or DRL algorithm is used. It associates an action with a specific mmWave link in the topology; for example, if the vector's i'th entry is $0.5$, it means that when id=i is translated to edges using the saved dictionary, the specific mmWave link will use a power level of $0.5$.
\end{description}

\begin{table}[tb]
\small
\centering
\scalebox{0.7}{
\begin{tabular}{l|| cl c l c}
\hline \hline
Field & Purpose \\ \hline
Edges to ID & Mapping mmWave links to IDs. \\
ID to edges & Mapping IDs to mmWave links.\\
Step count & Number of steps of the current episode. \\
Net & Topology. \\
Episode count & Number of episodes.\\
Dropped packets & Number of dropped packets for the episode. \\
Global all shortest path & List of lists of shortest paths of the topology. \\
Edges info & Pandas frame of mmWave links with their weights. \\
Interference & The interference model currently being used.
\\
Next episode demand matrix & The demand matrix for the next episode.\\
Routers list & Dictionary of stations.\\
Episodes demand eval & List of data to compare with heuristics-based algorithms.\\
 & Data is (demand matrix, total packets, list of flows and interference). \\
Episodes demand train & Same as above, but for verbose and training. \\
Observation space & Observation space that will be used by SB3. \\
Action space & Action space that will be used by SB3. \\
\hline \hline
\end{tabular}}
\caption{Environment's fields and purpose.}
\label{table:env}
\end{table}
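To make the reset/step contract above concrete, the following is a rough Python sketch of how such a Gym-style environment plugs into SB3. All class, method, and variable names here, and the toy reward, are illustrative assumptions rather than the project's actual code.

```python
import numpy as np

# Minimal sketch of the Gym-style contract described above; episode data
# generation, packet movement, and the reward are stand-ins only.
class MmWaveEnvSketch:
    def __init__(self, num_links):
        self.num_links = num_links
        self.total_packets = 0

    def reset(self):
        # "generate demand random": draw data for the next episode.
        self.total_packets = int(np.random.randint(100, 1000))
        return self._observe()

    def step(self, action):
        # action: one power level in [0, 1] per mmWave link.
        delivered = min(self.total_packets, int(100 * float(np.sum(action))))
        self.total_packets -= delivered
        done = self.total_packets <= 0   # episode ends when no flows remain
        reward = float(delivered)        # toy reward: delivered packets
        return self._observe(), reward, done, {}

    def _observe(self):
        # Placeholder observation, one entry per mmWave link.
        return np.zeros(self.num_links, dtype=np.float32)
```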
# Need help finding the closed form of a sequence based upon the Fibonacci sequence.

I have been given an assignment question that asks for a simple closed form of the following sequence:

$$G_n=\left|\begin{array}{cc} F_n & F_{n+1}\\ F_{n+1} & F_{n+2} \end{array}\right|$$

I have tried taking the determinant, but substituting in the closed form of the Fibonacci sequence leads to nothing simple at all.

Thanks.

• Closed form will work. Or else compute the values for a few $n$, make a conjecture based on the results, and prove it by induction. – André Nicolas Sep 28 '15 at 5:04
• By your expression do you mean the following? $$\left|\begin{array}{cc} F_n & F_{n+1}\\ F_{n+1} & F_{n+2} \end{array}\right|$$ – Samrat Mukhopadhyay Sep 28 '15 at 5:15
• Yes, that is what I mean. Except the first entry is $F_n$ and it is the determinant of the matrix. I do not know how to use LaTeX or the formatting on this site. – Edward Nashton Sep 28 '15 at 5:19
• @EdwardNashton, the $\left|\,\right|$ sign itself denotes the determinant, you do not need to put a $\det$ before it. – Samrat Mukhopadhyay Sep 28 '15 at 5:24
• Oh, OK. Thanks. :) – Edward Nashton Sep 28 '15 at 5:26

Answer:

Using the closed form $F_n=\frac{\alpha^n-\beta^n}{\alpha-\beta}$, where $\alpha=\frac{1+\sqrt{5}}{2}$ and $\beta=\frac{1-\sqrt{5}}{2}$, will work, but only after a long and tedious calculation. A simpler way is to look at it in the following way:
$$G_n=F_{n}F_{n+2}-F_{n+1}^2=F_n(F_n+F_{n+1})-F_{n+1}^2\\
=F_n^2-F_{n+1}(F_{n+1}-F_n)=F_n^2-F_{n+1}F_{n-1}=-G_{n-1}\\
\implies G_n=(-1)^{n-1}G_1=(-1)^{n-1}$$

• Thanks for the reply and the help. Would it be possible to get some clarification on how you got from $F_n^2-F_{n+1}F_{n-1}$ to $-G_{n-1}$, and the lines after that. – Edward Nashton Sep 28 '15 at 5:51
• I know how to get to the final line now and I understand why the solution is what it is, but I am unsure how you go from $G_n$ to $(-1)^{n-1}G_1$ algebraically from $G_n=-G_{n-1}$. – Edward Nashton Sep 28 '15 at 11:19
• Note that $G_n=-G_{n-1}=(-1)^2G_{n-2}=(-1)^3G_{n-3}$ and so on. – Samrat Mukhopadhyay Sep 28 '15 at 16:16
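The closed form $G_n=(-1)^{n-1}$ (a shifted form of Cassini's identity) is also easy to check numerically; a quick sketch:

```python
# Numerical check of G_n = F_n * F_{n+2} - F_{n+1}^2 = (-1)**(n-1),
# with the usual convention F_1 = F_2 = 1.

def fib(n):
    a, b = 0, 1          # F_0 = 0, F_1 = 1
    for _ in range(n):
        a, b = b, a + b
    return a

def G(n):
    return fib(n) * fib(n + 2) - fib(n + 1) ** 2

for n in range(1, 30):
    assert G(n) == (-1) ** (n - 1)
print("verified for n = 1..29")
```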
Q: Android.mk and library project

I have an application and a shared library (composed of Java classes and resources). I'm trying to compile my app directly in a ROM and I need to write the Android.mk. However it's not working as expected. Here are my two Android.mk files:

Library:

    LOCAL_PATH:= $(call my-dir)
    include $(CLEAR_VARS)

    LOCAL_SRC_FILES := $(call all-java-files-under, src)
    LOCAL_MODULE_TAGS := optional
    LOCAL_MODULE := SharedLibrary

    include $(BUILD_JAVA_LIBRARY)

    # additionally, build tests in sub-folders in a separate .apk
    include $(call all-makefiles-under,$(LOCAL_PATH))

App:

    LOCAL_PATH:= $(call my-dir)
    include $(CLEAR_VARS)

    LOCAL_JAVA_LIBRAIRIES = SharedLibrary
    LOCAL_SRC_FILES := $(call all-java-files-under, src)
    LOCAL_PACKAGE_NAME := Pricing

    include $(BUILD_PACKAGE)

    # additionally, build tests in sub-folders in a separate .apk
    include $(call all-makefiles-under,$(LOCAL_PATH))

Where am I wrong?
---
layout: simple
---

# Bulldogs Racing

The Yale Bulldogs Racing team built a gas-electric hybrid racecar that [won first place](https://seas.yale.edu/news-events/news/bulldogs-racing-roars-big-win) at the 2013 Formula Hybrid International competition. I worked on low-voltage electronics, and wrote large portions of the code for the Arduino microcontroller ([Github link](https://github.com/BulldogsRacing/Yale-Formula-Hybrid)) that handled functions like:

* core driving functions: mapping the gas pedal to the throttle, activating the brake light
* displaying data to the driver on a control panel
* managing a power-saving "endurance mode"
* monitoring the high voltage battery system
* wirelessly sending telemetry data
* safety cutoffs in dangerous situations

![](/images/project_images/formula-hybrid/team.jpg)
Helpful stuff for latency testing over a *private* (not connected to the real one) Tor network comprised of Docker containers. Note that the configs in this directory have a setting called TestingTorNetwork: hopefully, this means we stay in our own overlay "LAN".

torrc: tor config *for our test client*, not for ORs or the dirserver
torrc_auth: torrc config for the directory server
torrc_exit_relay: torrc config for an OR relay that is also an exit
torrc_relay: torrc config for a plain old OR
create-container.sh: run outside the container (I'm just using it for reference and not actually running it)
setup-container.sh: run inside the container (less typing!)
latenc: set up and run latency testing over our mini Tor
c/: contains the torperf 'trivsocks' client

*None* of this is supposed to, at any time, connect to the actual big grand Tor network. That said, use at your own risk.
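For context, a private-network torrc for the directory authority might look roughly like the following. The option names are standard tor options, but the specific values and paths are assumptions for illustration, not the contents of torrc_auth:

```text
# Hypothetical minimal torrc for the private test directory authority
# (values and paths are illustrative, not from torrc_auth).
TestingTorNetwork 1
DataDirectory /var/lib/tor-test/auth
AuthoritativeDirectory 1
V3AuthoritativeDirectory 1
ORPort 5000
DirPort 7000
Nickname testAuth
ContactInfo nobody@example.invalid
```

TestingTorNetwork is the key line: it relaxes a batch of consensus-timing and address checks so a handful of containers can bootstrap their own consensus instead of reaching for the real directory authorities.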
binary Golay code

Comment 1. Author: DavidRoberts. Time: Apr 15th 2013 (edited Apr 16th 2013)

Created binary Golay code. The construction is a little involved, and I haven't put it in yet, because I think I can nut out a nicer description. The construction I aim to describe, in slightly different notation and terminology, is in

R. T. Curtis (1976). A new combinatorial approach to M24. Mathematical Proceedings of the Cambridge Philosophical Society, 79, pp 25-42. doi:10.1017/S0305004100052075.

Comment 2. Author: Todd_Trimble. Time: Apr 15th 2013

David, since you are looking at these topics (Golay code, Mathieu groups, Monster group): have you looked at the book by Frenkel, Lepowsky and Meurman, Vertex Operator Algebras and the Monster?

Comment 3. Author: DavidRoberts. Time: Apr 15th 2013

No, I haven't. It's just bits from wikipedia and papers. Thanks for the reference.
Djurgården Hockey played in Division I östra (Division I East) this season, after having been relegated from Elitserien the year before. Djurgården won the regular series with only three lost matches. They knocked out IFK Bäcken and Tingsryds AIF in the playoff to the qualifying series (Kvalserien). In the Kvalserien they lost only one match, and Djurgården were thus qualified to play in Elitserien for the 1977/1978 season.

The young talent Kent Nilsson, who had won the Elitserien scoring race the previous season, chose to play for AIK this season because he wanted to play in Elitserien, something he has later said he regretted.

Regular season

Regular series
GP = games played, V = wins, O = draws, F = losses, GM = goals for, IM = goals against, Poäng = points

Matches in the regular series

Playoff to the qualifying series

The qualifying series (Kvalserien)
Note: The last match between Timrå IK and HV 71 was cancelled, since the series had already been decided.

Matches in the qualifying series

Player statistics

Regular series
Note: SM = games played, M = goals, A = assists, Pts = points, PIM = penalty minutes

Qualifying series

Sources
DIFHockey.se - The 1976/1977 season
Svensk ishockey - Division 1 östra 1976/77

Sport in Sweden 1976
Sport in Sweden 1977
The 1976/1977 ice hockey season by team
1976/1977
package org.xdi.oxauth.uma.service;

import org.slf4j.Logger;
import org.xdi.oxauth.model.config.WebKeysConfiguration;
import org.xdi.oxauth.model.configuration.AppConfiguration;
import org.xdi.oxauth.model.error.ErrorResponseFactory;
import org.xdi.oxauth.model.jwt.Jwt;
import org.xdi.oxauth.model.registration.Client;
import org.xdi.oxauth.model.uma.UmaErrorResponseType;
import org.xdi.oxauth.model.uma.UmaTokenResponse;
import org.xdi.oxauth.model.uma.persistence.UmaPermission;
import org.xdi.oxauth.model.uma.persistence.UmaScopeDescription;
import org.xdi.oxauth.security.Identity;
import org.xdi.oxauth.service.ClientService;
import org.xdi.oxauth.service.external.ExternalUmaRptPolicyService;
import org.xdi.oxauth.service.token.TokenService;
import org.xdi.oxauth.uma.authorization.*;
import org.xdi.oxauth.util.ServerUtil;

import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.inject.Named;
import javax.servlet.http.HttpServletRequest;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Response;
import java.util.*;

/**
 * UMA Token Service
 */
@Named
@Stateless
public class UmaTokenService {

    @Inject
    private Logger log;
    @Inject
    private Identity identity;
    @Inject
    private ErrorResponseFactory errorResponseFactory;
    @Inject
    private UmaRptService rptService;
    @Inject
    private UmaPctService pctService;
    @Inject
    private UmaPermissionService permissionService;
    @Inject
    private UmaValidationService umaValidationService;
    @Inject
    private ClientService clientService;
    @Inject
    private TokenService tokenService;
    @Inject
    private AppConfiguration appConfiguration;
    @Inject
    private WebKeysConfiguration webKeysConfiguration;
    @Inject
    private UmaNeedsInfoService umaNeedsInfoService;
    @Inject
    private ExternalUmaRptPolicyService policyService;
    @Inject
    private UmaExpressionService expressionService;

    public Response requestRpt(
            String grantType,
            String ticket,
            String claimToken,
            String claimTokenFormat,
            String pctCode,
            String rptCode,
            String scope,
            HttpServletRequest httpRequest) {
        try {
            log.trace("requestRpt grant_type: {}, ticket: {}, claim_token: {}, claim_token_format: {}, pct: {}, rpt: {}, scope: {}",
                    grantType, ticket, claimToken, claimTokenFormat, pctCode, rptCode, scope);

            umaValidationService.validateGrantType(grantType);
            List<UmaPermission> permissions = umaValidationService.validateTicket(ticket);
            Jwt idToken = umaValidationService.validateClaimToken(claimToken, claimTokenFormat);
            UmaPCT pct = umaValidationService.validatePct(pctCode);
            UmaRPT rpt = umaValidationService.validateRPT(rptCode);
            Map<UmaScopeDescription, Boolean> scopes = umaValidationService.validateScopes(scope, permissions);

            Client client = identity.getSessionClient().getClient();
            // Reject requests without a valid, enabled session client. (The original
            // check only guarded against a disabled client; a null client would have
            // caused a NullPointerException at client.getClientId() below.)
            if (client == null || client.isDisabled()) {
                throw new UmaWebException(Response.Status.FORBIDDEN, errorResponseFactory, UmaErrorResponseType.DISABLED_CLIENT);
            }

            pct = pctService.updateClaims(pct, idToken, client.getClientId(), permissions); // creates new pct if pct is null in request
            Claims claims = new Claims(idToken, pct, claimToken);

            Map<UmaScriptByScope, UmaAuthorizationContext> scriptMap =
                    umaNeedsInfoService.checkNeedsInfo(claims, scopes, permissions, pct, httpRequest, client);

            if (!scriptMap.isEmpty()) {
                expressionService.evaluate(scriptMap, permissions);
            } else {
                log.warn("There are no policies that protect these scopes. Scopes: " + UmaScopeService.asString(scopes.keySet()) +
                        ". Configuration property umaGrantAccessIfNoPolicies: " + appConfiguration.getUmaGrantAccessIfNoPolicies());
                if (appConfiguration.getUmaGrantAccessIfNoPolicies() != null && appConfiguration.getUmaGrantAccessIfNoPolicies()) {
                    log.warn("Access granted because no policies protect these scopes. Make sure this is intentional behavior.");
                } else {
                    log.warn("Access denied because no policies protect these scopes. Make sure this is intentional behavior.");
                    throw new UmaWebException(Response.Status.FORBIDDEN, errorResponseFactory, UmaErrorResponseType.FORBIDDEN_BY_POLICY);
                }
            }

            log.trace("Access granted.");

            final boolean upgraded;
            if (rpt == null) {
                rpt = rptService.createRPTAndPersist(client.getClientId());
                upgraded = false;
            } else {
                upgraded = true;
            }

            updatePermissionsWithClientRequestedScope(permissions, scopes);
            addPctToPermissions(permissions, pct);
            rptService.addPermissionToRPT(rpt, permissions);

            UmaTokenResponse response = new UmaTokenResponse();
            response.setAccessToken(rpt.getCode());
            response.setUpgraded(upgraded);
            response.setTokenType("Bearer");
            response.setPct(pct.getCode());

            return Response.ok(ServerUtil.asJson(response)).build();
        } catch (Exception ex) {
            log.error("Exception happened", ex);
            if (ex instanceof WebApplicationException) {
                throw (WebApplicationException) ex;
            }
        }

        log.error("Failed to handle request to UMA Token Endpoint.");
        throw new UmaWebException(Response.Status.INTERNAL_SERVER_ERROR, errorResponseFactory, UmaErrorResponseType.SERVER_ERROR);
    }

    private void addPctToPermissions(List<UmaPermission> permissions, UmaPCT pct) {
        for (UmaPermission p : permissions) {
            p.getAttributes().put(UmaPermission.PCT, pct.getCode());
            permissionService.mergeSilently(p);
        }
    }

    private void updatePermissionsWithClientRequestedScope(List<UmaPermission> permissions, Map<UmaScopeDescription, Boolean> scopes) {
        log.trace("Updating permissions with requested scopes ...");
        for (UmaPermission permission : permissions) {
            Set<String> scopeDns = new HashSet<String>(permission.getScopeDns());
            for (Map.Entry<UmaScopeDescription, Boolean> entry : scopes.entrySet()) {
                log.trace("Updating permission with scope: " + entry.getKey().getId() +
                        ", isRequestedScope: " + entry.getValue() + ", permission: " + permission.getDn());
                scopeDns.add(entry.getKey().getDn());
            }
            permission.setScopeDns(new ArrayList<String>(scopeDns));
        }
    }
}
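The updatePermissionsWithClientRequestedScope helper above merges client-requested scope DNs into each permission through a Set, so duplicate DNs are silently dropped before the list is written back. A minimal, standalone sketch of that dedup-and-merge step follows; the class name, method name, and the example DN strings here are illustrative only, not part of the oxAuth API:

```java
import java.util.*;

public class ScopeMergeSketch {

    // Merge requested scope DNs into an existing list, keeping each DN once.
    // LinkedHashSet preserves the original insertion order while deduplicating.
    static List<String> mergeScopeDns(List<String> existing, Collection<String> requested) {
        Set<String> merged = new LinkedHashSet<>(existing);
        merged.addAll(requested); // duplicates of existing DNs are ignored
        return new ArrayList<>(merged);
    }

    public static void main(String[] args) {
        List<String> existing = Arrays.asList("inum=read,ou=scopes", "inum=write,ou=scopes");
        List<String> requested = Arrays.asList("inum=write,ou=scopes", "inum=admin,ou=scopes");
        List<String> result = mergeScopeDns(existing, requested);
        System.out.println(result.size()); // prints 3: the duplicate "write" DN collapses
        System.out.println(result);
    }
}
```

Using a set here is what makes the production helper idempotent: re-running the token request with the same requested scopes leaves the permission's scope DN list unchanged.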
This list of the town presidents of Olten records, in chronological order, the town presidents of the Swiss town of Olten. The town president (until 1993 titled Stadtammann) presides over the five-member town council as a full-time office; the other four members serve part-time. For its first 196 years the town presidency was in Free Democratic (freisinnig) hands. That long run ended only in 2013, when no FDP candidate stood for election. The Free Democratic Party itself had been founded in 1894 in the Olten station buffet. In the period from 1817 (the beginnings of municipal autonomy) to 1830, the Stadtammann and the Statthalter alternated annually in holding the office.

Sources
Stadtarchiv Olten
Staatskalender des Kantons Solothurn
OP-ED: Say it: Hank Aaron is the real home run champion
Kenneth Shouler
New York Daily News (TNS)

Hank Aaron's death last Friday brought forth a steady stream of highlights, respectful reflections and revelations about his baseball career. But one revelation was missing. We waited in vain to see a declarative sentence with just 10 words: Hank Aaron is major league baseball's all-time home run leader.

Those words, whose truth is irrefutable given his non-cheating record of 755 home runs, were neither spoken nor seen in print. Talk about ignoring the elephant in the room.

Pause to take in the full meaning of this. Not one "official" baseball organization had the moxie to declare who the leader was. Not Major League Baseball, not on its website nor on its network channel; not ESPN, not on its website nor its network channel; not the Elias Sports Bureau either, despite its online blare of trumpets: "The Elias Sports Bureau is the Official Statistician of Major League Baseball."

Toss in The Associated Press, which referred to Aaron, sheepishly, as the "longtime home run leader," implying that Aaron held the record only from 1974, when he passed Babe Ruth, until 2007, when Barry Bonds, who surpassed him with the aid of performance-enhancing substances, reached 762.

Since Aaron's death, thousands of articles have been written, television spots aired, interviews conducted, memories divulged, documentaries reprised. Despite enough sports ammunition to make the rubble bounce, not one baseball commentator took a shot. Many of these media outlets told much of the truth about "Hammerin' Hank." Not one of them told the whole truth.

Now we have a historic confluence of three events.
Aaron dies and we behold his life; Bonds came up for a Hall of Fame vote on Tuesday; and his cheating mark of 762 home runs is still held up as the record. He failed to gain Hall of Fame entry again, likely because it is common knowledge that he cheated, as evidenced by his physical changes and an absurd increase in home run production starting at the age of 35. In his first 14 seasons, he hit 420 home runs, an average of 30 per season. Then, from the ages of 35 to 39, he averaged 51.6 home runs per season over a five-year stretch. Since this absurd home run total is not credible and is the very thing keeping him from Hall of Fame induction, why are news organizations not acknowledging that Aaron is the record-holder?

In an email response to my question of whose record is recognized, Major League Baseball Network reporter and commentator Tom Verducci replied, rather cryptically, "Bonds officially holds the record." He added, "But as I said on the air, and as we showed with a Sports Illustrated cover story I wrote from 2007, Aaron remains the people's king, the true champion." This is merely a near-answer, since the "official record-holder" and the "true champion" ought to be the same person.

Not one broadcaster or scribe could bring himself to say that Bonds' total of 762 is untrustworthy due to performance-enhancing drugs. Thus, they couldn't state what follows from that fact: that Aaron's record of 755, a mark he had held since he hit his 715th to surpass Babe Ruth on April 8, 1974, should stand.

— Kenneth Shouler is a philosophy professor at the County College of Morris. He was one of the panelists selected by Major League Baseball to pick its All-Century Team.