Alexandre Jacques François Bertrand, born 25 April 1795 in Rennes, died 22 January 1831, was a French physicist. He was the father of the archaeologist Alexandre Bertrand (1820–1902) and the mathematician Joseph Bertrand (1822–1900). He was also closely associated with the philosopher Pierre Leroux (1798–1871) and the Saint-Simonians. Bertrand is known for his scientific investigations of animal magnetism and somnambulism. In his public lectures on animal magnetism he began as a convinced adherent of the theory, but through experience and reflection he changed his view and became one of its leading critics. From 1825 to 1830, Bertrand published numerous articles in the progressive newspaper Le Globe. Selected bibliography Traité du somnambulisme et des différentes modifications qu'il présente, Paris, Dentu, 1823. Lettres sur les révolutions du globe, Paris, the Bossange brothers, 1824. Lettres sur la physique, Paris, Bossange, 1824 and 1825. De l'extase, Paris, 1826. Du magnétisme en France et des jugements qu'en ont porté les sociétés savantes, Paris, Baillière, 1826. Sources French physicians Saint-Simonians École polytechnique alumni 1795 births 1831 deaths Men
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,614
\section{Introduction} \label{sec:intro} \subsection{Motivation} \label{Motivation} The displacement of conventional synchronous generators by converter-interfaced generation such as solar and wind energy leads to reduced levels of system inertia. Large-scale low-inertia power grids face many challenges, including but not limited to frequency stability, voltage stability, and converter-driven stability \cite{8450880}. While the power system community, including both academia and industry, is working towards addressing the above challenges, an emerging critical issue that, so far, has received little to no attention is frequency quality. The objectives of this paper are to fill this gap and to raise awareness in the community on the topic. \begin{table}[t!] \centering \caption{Frequency quality parameters of the CE and IE/NI \cite{entsoe, eirgrid}.} \label{tab:param} \begin{tabular}{cccccc} \hline Parameter & CE & IE/NI \\ \hline Standard frequency range & $\pm$ 50 mHz & $\pm$ 200 mHz \\ Maximum instantaneous frequency deviation & 800 mHz & 1000 mHz\\ Maximum steady-state frequency deviation & 200 mHz & 500 mHz\\ Time to recover frequency & not used & 1 minute\\ Frequency recovery range & not used & $\pm$ 500 mHz\\ Time to restore frequency & 15 minutes & 15 minutes\\ Frequency restoration range & not used & $\pm$ 200 mHz\\ Alert state trigger time & 5 minutes & 10 minutes\\ Maximum number of minutes & \multirow{2}{*}{15,000} & \multirow{2}{*}{15,000} \\ outside the standard frequency range \\ \hline \end{tabular} \end{table} \begin{figure}[t!] \begin{center} \resizebox{0.945\linewidth}{!}{\includegraphics{albania.pdf}} \caption{Frequency in the CE and IE/NI power systems for 01.01.2021.} \label{fig:albania} \end{center} \vspace*{-0.3cm} \end{figure} \vspace{-2mm} \subsection{Literature Review} \label{sec:literature} Transmission system operators (TSOs) in Europe (including EirGrid and SONI) define frequency quality in terms of different target/defining parameters. Table~\ref{tab:param} shows the main parameters for the Continental European (CE) and Ireland/Northern Ireland (IE/NI) TSOs \cite{entsoe, eirgrid}. Keeping these parameters within limits helps to: (i) better control the operation of the power system and prevent damage to plant and equipment; (ii) keep the electric time on clocks that rely on counting the zero crossings; (iii) maintain the relevance of power system analysis that is generally performed at the nominal frequency; (iv) prevent motors from stalling; and (v) increase the trust of TSO customers and market participants in supply reliability and quality. Note the wider range of most of the parameters of IE/NI compared to the parameters defined for the CE control area. For example, the standard frequency ranges for the CE and IE/NI control areas are $\pm$ 50 mHz and $\pm$ 200 mHz, respectively. Such parameters make sense considering that the CE synchronous area accounts for around 435 GW of peak demand (the largest synchronous electrical grid in the world) compared to 6.9 GW for the All-Island transmission system (AITS), i.e., it is harder to control frequency in a small power system. To better illustrate these differences, Fig.~\ref{fig:albania} shows the frequency traces of the Albanian power system (part of CE) and the AITS (IE/NI) for 01.01.2021. Clearly, and as expected, frequency fluctuates much more in the AITS than in the CE power system.
Recent research has demonstrated that there is an almost linear relationship between renewables penetration and frequency variations \cite{KAZMI2022119565, 8783475, KERCI2020105819, 7891044}. This suggests that there is a need to deploy more and faster reserve resources to deal with the ever-increasing penetration of stochastic and intermittent renewable sources. For example, due to increased intra-interval fluctuations and limited ramping from generators, the frequency deviations in a provincial power system in China increased from 0.019 to 0.032 Hz from 2014 to 2020 (i.e., a 68.4\% increase) \cite{9631168}. In the same vein, reference \cite{9245548} focuses on the issue of frequency quality for Southwest China considering the operation data from asynchronous operation tests and automatic generation control (AGC). It is also worth mentioning that controlling the frequency in power systems with high shares of photo-voltaic (PV) generation is a challenging task because PV power drops much faster than, for example, wind power (e.g., by 60\% of the installed power capacity per minute when a cloud passes) \cite{ZHANG2019809}. A way to meet frequency quality standards in low-inertia systems is for renewable energy sources and emerging technologies such as battery energy storage systems (BESS) to provide frequency support. In this context, references \cite{validation, 8606157} study the participation of 30 MW and 10 MW wind farms in AGC and show their potential by testing the behavior against field measurements and experimental results, respectively. On the other hand, the potential of solar PV to provide AGC services under different conditions (solar resource intensity) is shown in \cite{300mw} through a successful 300 MW power plant test. A BESS is shown to respond well to AGC commands/set-points and to provide secondary frequency regulation in \cite{8935195}. \subsection{Contributions} \label{Contributions} The specific contributions of this paper are the following: \begin{itemize} \item An analysis of frequency quality based on a real-world low-inertia system, namely the AITS. \item A demonstration, through this analysis, that while some frequency quality parameters have improved over recent years, others, such as the standard deviation of the frequency, are deteriorating, with the latter increasing approximately linearly. \item Different solutions to address the recent deterioration in frequency quality in the AITS. \item A study of the effectiveness of AGC in smoothing frequency variations due to wind and load variations and noise. \end{itemize} \subsection{Paper Organization} The remainder of the paper is organized as follows. Section \ref{sec:background} briefly describes the frequency control employed in the AITS. Section \ref{sec:case} provides the results of the frequency quality analysis of the AITS. Section \ref{sec:agc} discusses the effectiveness of AGC in reducing the standard deviation of the frequency through an illustrative example. Finally, conclusions and future work directions are given in Section \ref{sec:conclu}. \section{Frequency Control in the All-Island Transmission System} \label{sec:background} Figure~\ref{fig:control} shows the current frequency control services employed in the AITS \cite{control}. EirGrid and SONI have various frequency services in place to ensure that frequency quality parameters remain within predefined limits. Such services include synchronous inertial response (SIR), fast frequency response (FFR), and primary, secondary, and tertiary operating reserves (POR, SOR and TOR), as well as replacement reserves (RR) and ramping products.
It should be noted that there is no automatic secondary frequency control (i.e., AGC) in the AITS. Instead, manual activations are currently used to regulate system frequency proactively. The vast majority of the contracted volumes of system services procured to date comes from conventional sources \cite{ds3}. However, as the AITS moves towards a reduced number of conventional units online by 2030, other technologies are expected to provide the majority of the services, including BESS (approximately 650 MW installed, mostly used for system services rather than for providing energy to the grid), demand response, wind and solar power. \begin{figure}[t!] \begin{center} \resizebox{0.8\linewidth}{!}{\includegraphics{control.png}} \caption{Frequency control overview in the AITS \cite{control}.} \label{fig:control} \end{center} \vspace*{-0.3cm} \end{figure} Active power control (APC) is another crucial frequency control service impacting frequency quality and currently in place in the AITS. APC is a droop-based frequency control service and is mandatory for all dispatchable wind farms in Ireland \cite{code}. It involves a selectable deadband setting of $\pm$ 200 mHz or $\pm$ 15 mHz. The default value of the deadband is $\pm$ 200 mHz, but when frequency control is challenging, EirGrid remotely changes this deadband to $\pm$ 15 mHz. In this mode, wind farms adjust their output much more dynamically and contribute to the control of system frequency under normal, pre-contingency conditions. In the near future, it is expected that other technologies such as solar power will have APC functionality enabled in order to help maintain the frequency quality parameters. \section{Frequency Quality in the All-Island Transmission System} \label{sec:case} The AITS is a synchronous island currently accommodating up to 75\% non-synchronous generation at any point in time and is relaxing a number of operational constraints, such as the minimum number of conventional generating units (from 8 to 7) and inertia (from 23 GWs to 20 GWs), to further increase the penetration of renewables \cite{policy}. This transition involves dealing with different technical challenges such as ensuring power system stability and security. However, an emerging challenge is that of maintaining frequency quality parameters within acceptable limits. The aim of this section is to analyse, through actual data, the frequency quality in the AITS. \subsection{Frequency Deviations (Nadir/Zenith)} \label{sec:parameters} This section focuses on the evolution of the maximum and minimum frequency deviations in the AITS over recent years. Note that the corresponding limit is 1000 mHz according to Table~\ref{tab:param}. Figure~\ref{fig:minmax} shows the trend of these frequency parameters (i.e., frequency nadir and zenith). It is interesting to note that both parameters are within limits and their performance is improving. This performance is, in particular, strongly related to the reduced number of large generator trippings in the system and the response from wind, high voltage direct current (HVDC) interconnectors, demand response, and BESS. \begin{figure}[t!] 
\begin{center} \resizebox{0.8\linewidth}{!}{\includegraphics{freqminmax.pdf}} \caption{Minimum and maximum frequency deviation evolution in the AITS.} \label{fig:minmax} \end{center} \vspace*{-0.3cm} \end{figure} \subsection{Frequency Standard Deviation} \label{sec:100mhz} Figure~\ref{fig:parameters} shows the evolution of three main parameters, namely the minutes above and below the standard frequency range ($\pm$ 200 mHz) and the standard deviation of the frequency. There are a number of factors that have led to a dramatic improvement in the quality of the parameters during 2009-2018, among others: (i) the change from verbal dispatch to electronic logged dispatch of generation units, leading to improved unit operator response; (ii) newer generating units in the latter years having the latest electronic governor controls; (iii) the retrofit of the electro-mechanical control systems on older generating units with modern electronic controls; (iv) the increase in system inertia as a result of more generating units being online to meet increases in system demand; and (v) the response of HVDC interconnectors and BESS (but also a reduction in the number of large generator trippings). However, there has been an increase in the standard deviation of the frequency during 2019-2021. This is mainly due to: (i) a reduction in regulating resources; (ii) an increasing proportion of the reserves coming from inverter-based resources that are not configured to regulate frequency; and (iii) the aging of the conventional generating portfolio. This trend could continue as more wind and solar power are integrated into the system and the operational policy evolves, i.e., reducing the number of conventional units online and, consequently, also reducing the inertia. \begin{figure}[t!] \begin{center} \resizebox{0.85\linewidth}{!}{\includegraphics{parameters.pdf}} \caption{Evolution of minutes outside the standard frequency range ($\pm$ 200 mHz) and standard deviation of the frequency over the last decade.} \label{fig:parameters} \end{center} \vspace*{-0.3cm} \end{figure} While the EU Network Codes and the synchronous area operational agreement (Table \ref{tab:param}) require EirGrid to keep frequency within its standard range ($\pm$ 200 mHz), the national regulatory body in Ireland has put in place an incentive to keep frequency within an even tighter range, namely $\pm$ 100 mHz, for $\ge$ 98\% of the time \cite{cru}. This is also known as the $\pm$ 100 mHz criteria. Its evolution over the years is shown in Fig.~\ref{fig:100mhz}. A deterioration can be seen in the last year. Table \ref{tab:100mhz} shows the violation minutes for the last 12 months (violations are greater during winter months, mainly due to more wind). This is a concern considering that, compared to 2020, there was less wind available in 2021. \begin{figure}[t!] \begin{center} \resizebox{0.8\linewidth}{!}{\includegraphics{100mhz.pdf}} \caption{Evolution of 100 mHz criteria over the last years in the AITS.} \label{fig:100mhz} \end{center} \vspace*{-0.3cm} \end{figure} \begin{table}[t!] 
\centering \caption{Frequency key performance indicator (KPI) statistics for 2021-2022 ($\pm$ 100 mHz criteria $\ge$ 98\% of time).} \label{tab:100mhz} \begin{tabular}{lccccc} \hline \multirow{2}{*}{Month} & \multirow{2}{*}{Minutes} & KPI \\ & & Violation Minutes \\ \hline August 2021 & 44,640 & 372 \\ September 2021 & 43,200 & 420 \\ October 2021 & 44,640 & 862 \\ November 2021 & 43,200 & 935 \\ December 2021 & 44,640 & 1,803 \\ January 2022 & 44,640 & 1,135 \\ February 2022 & 40,320 & 1,066 \\ March 2022 & 44,640 & 1,078 \\ April 2022 & 43,200 & 411 \\ May 2022 & 44,640 & 433 \\ June 2022 & 43,200 & 333 \\ July 2022 & 44,640 & 230 \\ \hline Total minutes (12 months) & 525,600 & 9,078 \\ \hline Time within KPI limits & & 98.27\% \\ \hline \end{tabular} \end{table} There are a number of potential solutions to address the performance degradation of the frequency regulation in the last years (Figs.~\ref{fig:parameters} and \ref{fig:100mhz}), as follows: (i) increasing the minimum regulating/dynamic reserve requirement (which all currently comes from conventional generation with governor deadbands of $\pm$ 15 mHz) – this would seem a backward step given it will result in additional conventional unit commitment; (ii) narrowing frequency deadbands on BESS/HVDC interconnectors; (iii) updating rules for activation of wind farm APC (including NI wind farms when feasible) so that it is enabled more often, say when at minimum units – readily implementable and proven; (iv) implementing an AGC; and (v) introducing new market products to assist with frequency regulation. In this paper, we explore the effectiveness of one of the options to improve frequency performance, that is, implementing an AGC (see Section~\ref{sec:agc} below). \subsection{Frequency Recovery} \label{sec:slow} This section illustrates the issue of the slow frequency recovery in the AITS. With this aim, Fig.~\ref{fig:ewic} displays the frequency trace for a 2022 event (9th of August) where the largest single infeed (LSI) tripped from 530 MW import. As can be seen, it took the frequency almost 15 minutes to recover to 50 Hz (i.e., time to restore frequency in Table \ref{tab:param}). In particular, it is worth noticing that frequency recovers to around 49.87 Hz in almost 3 minutes but then stays there for a long time. The fast recovery in the 3 minutes is mainly because of FFR from BESS (the majority of BESS installed in AITS have a trigger point of 49.8 Hz). \begin{figure}[t!] \begin{center} \resizebox{0.8\linewidth}{!}{\includegraphics{ewic.pdf}} \caption{LSI trip on August 2022.} \label{fig:ewic} \end{center} \vspace*{-0.3cm} \end{figure} \section{Illustrative example} \label{sec:agc} EU Network Codes and the national energy regulators require EirGrid and SONI to justify the need to install or not an AGC every few years \cite{entsoe}. This section aims to illustrate the AGC performance in terms of long-term frequency quality enhancement using the IEEE 39-bus system. \subsection{Stochastic Long-Term Power System Model} \label{sec:stochastic} Frequency quality is impacted by several dynamical processes starting from fast ones such as the inertial response of synchronous machines, the stochastic wind speed variations, to longer ones, such as primary and secondary frequency controllers of conventional generators. 
To capture and model all of these dynamics, we consider a combined short- and long-term dynamic power system model represented by a set of hybrid non-linear stochastic differential-algebraic equations \cite{6547228}, as follows: \begin{equation} \label{eq:hdae} \begin{aligned} \frac{d}{dt}{\bfg x} &= \bfg f( \bfg x, \bfg y, \bfg u, \bfg z, \frac{d}{dt}{\bfg \eta}) \, , \\ \bfg 0 &= \bfg g(\bfg x, \bfg y, \bfg u, \bfg z, \bfg \eta) \, , \\ \frac{d}{dt}{\bfg \eta} &= \bfg a( \bfg x, \bfg y, \bfg \eta) + \bfg b( \bfg x, \bfg y, \bfg \eta) \, \bfg \zeta \, , \end{aligned} \end{equation} where $\bfg f$ and $\bfg g$ represent the differential and algebraic equations, respectively; $\bfg x$ and $\bfg y$ represent the state and algebraic variables, such as generator rotor speeds and bus voltage angles, respectively; $\bfg u$ represents the inputs, such as the schedules of synchronous generators; $\bfg z$ represents discrete variables; $\bfg \eta$ represents the stochastic characterization of wind speed as well as the volatility of load power consumption; $\bfg a$ and $\bfg b$ are the \textit{drift} and \textit{diffusion} of the stochastic differential equations, respectively; and $\bfg \zeta$ is the white noise. To represent inertial and primary control dynamics, we consider conventional models of synchronous machines (4th-order models) and of their primary controllers, as well as dynamic models of wind power plants (5th-order doubly-fed induction generator) with inclusion of maximum power point tracker, voltage, pitch-angle, and frequency controls \cite{Milano:2010}. With regard to the long-term dynamics, the AGC is implemented as a centralized discrete controller in the control centers of TSOs and updates the power order set-points of dispatchable generators at certain time intervals, for example, every 4 seconds \cite{9361269}. In this paper, we use the standard AGC scheme shown in Fig.~\ref{fig:agc}. The AGC consists of an integrator with gain $K_o$ that aims to nullify the steady-state frequency error, in this case, the difference between the reference frequency $\omega^{\rm ref}$ and the measured frequency $\omega_{\rm CoI}$ (i.e., the center of inertia (CoI)), as follows: \begin{align} \label{eq:agc} \frac{d}{dt}{\Delta p} = K_{o}(\omega^{\rm ref}-\omega_{\rm CoI}) \, , \end{align} where $\Delta p$ is the output of the integrator. To simulate the discrete nature of the AGC, $\Delta p$ is first discretized at given fixed-time intervals and then sent to each turbine governor (TG). These signals ($\Delta p_i$) are proportional to the capacity of the machines and the TG droops ($R_i$) and normalized with respect to the total droop of the system: \begin{align} \label{droop} R_{\rm tot} = \sum_{i=1}^{n_g}R_{i} \, . \end{align} \begin{figure}[t!] \begin{center} \resizebox{0.8\linewidth}{!}{\includegraphics{agc.pdf}} \caption{Standard AGC.} \label{fig:agc} \end{center} \vspace*{-0.3cm} \end{figure} \subsection{Simulation Results} \label{sec:results} The purpose of this section is to simulate the effectiveness of AGC to reduce frequency fluctuations. The example is based on the IEEE 39-bus system and assumes a 25\% wind power penetration (i.e., replace three conventional generators with wind power plants). Two scenarios are considered: (i) impact of stochastic noise (given by both load and wind); and (ii) scenario 1 plus the introduction of wind/load step and ramp variations \cite{KERCI2020105819}. 
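As an illustration of the scheme just described, the following is a minimal numerical sketch of the discrete AGC of~(\ref{eq:agc}). It is not the implementation used for the case study (which is carried out in Dome); the integrator gain, sampling period, droop values, and the noisy frequency trace below are placeholder assumptions, and the proportional sharing rule $\Delta p_i = (R_i/R_{\rm tot})\,\Delta p$ is our reading of the description above rather than a formula taken from the paper.
\begin{verbatim}
import numpy as np

class DiscreteAGC:
    """Integral AGC: d(dp)/dt = Ko*(w_ref - w_coi), sampled every Ts
    seconds and shared among turbine governors in proportion to their
    droops (assumed sharing rule)."""

    def __init__(self, Ko=50.0, Ts=4.0, w_ref=1.0,
                 droops=(0.05, 0.05, 0.04)):
        self.Ko, self.Ts, self.w_ref = Ko, Ts, w_ref
        self.R = np.asarray(droops)
        self.R_tot = self.R.sum()
        self.dp = 0.0        # continuous integrator state
        self.dp_held = 0.0   # zero-order-held value sent to governors
        self._t_last = 0.0

    def step(self, w_coi, t, dt):
        # Integrate the frequency error (continuous part of the AGC).
        self.dp += self.Ko * (self.w_ref - w_coi) * dt
        # Discrete update: refresh the held set-point every Ts seconds.
        if t - self._t_last >= self.Ts:
            self.dp_held = self.dp
            self._t_last = t
        # Per-governor set-points, proportional to the droops.
        return (self.R / self.R_tot) * self.dp_held

# Toy usage: a noisy center-of-inertia frequency around 1 pu.
rng = np.random.default_rng(0)
agc, dt = DiscreteAGC(), 0.1
for k in range(600):                     # 60 s of simulated time
    w_coi = 1.0 + 1e-3 * rng.standard_normal()
    setpoints = agc.step(w_coi, k * dt, dt)
\end{verbatim}
The only design choice worth noting is the zero-order hold between AGC updates, which mimics the 4-second sampling mentioned above while the governors see a piecewise-constant signal.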
All the simulations in this section are performed using the software tool Dome, developed by the last author \cite{6672387}. \begin{figure}[t!] \begin{center} \resizebox{0.8\linewidth}{!}{\includegraphics{noise.pdf}} \caption{Scenario 1: Impact of noise.} \label{fig:noise} \end{center} \vspace*{-0.3cm} \end{figure} \begin{figure}[t!] \begin{center} \resizebox{0.8\linewidth}{!}{\includegraphics{ramp.pdf}} \caption{Scenario 2: Impact of noise and wind/load step and ramp variations.} \label{fig:ramp} \end{center} \vspace*{-0.3cm} \end{figure} Figure~\ref{fig:noise} shows the results of the first scenario with and without the AGC. In this scenario, the inclusion of the AGC does not appear to have any visible impact on frequency fluctuations. This is because the AGC is slow compared to the dynamics of the noise (a stochastic process). On the other hand, Fig.~\ref{fig:ramp} compares the effect of the AGC under both noise and wind/load step and ramp power variations. Since wind/load ramp time scales are closer to that of the AGC, in this case the inclusion of the AGC reduces frequency deviations. Specifically, the standard deviations of the frequency with and without AGC are 0.05395 Hz and 0.0765 Hz, respectively. These results indicate that an AGC implementation may be an option to improve frequency quality in the AITS in the future. \section{Conclusions} \label{sec:conclu} This paper discusses the issue of frequency quality in a real-world low-inertia power system, namely, the AITS. The paper shows that while some frequency quality parameters have dramatically improved (e.g., minutes below and above $\pm$ 200 mHz) over the last decade, others have deteriorated. In particular, the standard deviation of the frequency has increased approximately linearly over the last three years. The paper proposes different solutions to keep frequency within operational limits. The potential effectiveness of one of the proposals, that is, installing an AGC, is demonstrated through an example. It is shown that the AGC is an option to regulate frequency around the target value. Future work will focus on testing the effectiveness of different AGC approaches on a model of the AITS. This work will then feed into the assessment of the range of solutions for managing frequency quality on the AITS.
{ "redpajama_set_name": "RedPajamaArXiv" }
952
The Australia First Movement was a fascist movement, founded in October 1941. It grew out of the Rationalist Association of New South Wales and the Victorian Socialist Party, and was led by former Rhodes scholar Percy Stephensen and Adela Pankhurst. Writers Xavier Herbert and Eleanor Dark were involved with the organisation, which was inspired by the activities of retired businessman, William John Miles, who had campaigned during the 1930s under the "Australia First" slogan. Between 1936 and 1942, Miles published 16 volumes of a newsletter titled The Publicist, to which he contributed. He was a leading member of the Rationalist Association, and used The Publicist as his mouthpiece. Before 1939, it described itself as being "for national socialism" and "for Aryanism; against semitism". In January 1942, the ailing Miles transferred editorship of The Publicist to his co-author Stephensen, and had no involvement in the Australia First Movement, dying later that year. The Australia First Movement has been characterised as anti-Semitic, anti-war and pro-isolationist, and advocated Australia's independence from the British Empire. It attracted the support of the Catholic weekly, The Advocate, as well as the Odinist Alexander Rud Mills. By 1938, those who were later associated with the Australia First Movement were advocating the establishment of a national socialist corporate state and a political alliance with the Axis powers of Germany, Italy and Japan. A number of members came from a far-left background: Stephensen, Pankhurst and Walsh were former Communists. In March 1942, four members of the Australia First Movement in Perth, and sixteen in Sydney, were arrested, based on the suspicion that they would provide help to Japanese invaders. Two were convicted of conspiring to assist the enemy, and others were interned for the duration of the war. Adela Pankhurst, of the famous suffragette family, had visited Japan in 1939 and was arrested and interned in 1942 for her advocacy of peace with Japan. In his official history of Australian involvement in the Second World War, Paul Hasluck criticised those internments as the "grossest infringement of individual liberty made during the war". See also New Guard Centre Party Far-right politics in Australia References Further reading Political history of Australia Political movements in Australia Australian nationalism Fascist movements
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,516
\section{Introduction} The Local Group is an excellent laboratory for studies of galaxy evolution at the highest possible resolution in that it provides us with a wide range of different galaxy types and a variety of environments. Yet the Local Group is a poor group, contains relatively few galaxies, and lacks very massive galaxies, such as large ellipticals. Similar to many other nearby groups, the mass and the luminosity of the Local Group are dominated by two large spirals, the Milky Way and M31. Most of the other Local Group members are dwarf galaxies, and the majority of them are found in close proximity to the two large spirals. Dwarf galaxies are often considered building blocks of more massive galaxies in models of hierarchical structure formation. Dwarf galaxies come in many different flavors and cover a range of masses, luminosities, morphologies, gas content, star formation histories, etc. The distinction between dwarf galaxies and larger galaxies is somewhat fuzzy. The difference is primarily a luminosity difference -- it is customary to call galaxies with absolute magnitudes of $M_V > -18$ dwarf galaxies. Gas-rich dwarfs include dwarf spirals, dwarf irregulars (dIrrs) and blue compact dwarf galaxies, which usually show differing levels of ongoing star formation. Gas-poor dwarfs are primarily dwarf ellipticals (dEs). These can be further subdivided into subtypes such as the more massive, strongly centrally concentrated dwarf ellipticals with higher surface brightness, and the less massive, faint, fairly diffuse dwarf spheroidals (dSphs) (see also Gallagher \& Wyse 1994; Grebel, Gallagher, \& Harbeck 2003). What makes the Local Group special (apart from its being our home) is that here we have the possibility to actually resolve its constituent galaxies into individual stars and to study the properties of these stars. We can use these stars as probes of the past -- they permit us to uncover the evolutionary histories of their host galaxies. Moreover, they permit us to study these evolutionary histories at a level of detail and accuracy that is unmatched by any more distant galaxy, where only the integrated light can be studied. The Local Group is the only place where we can analyze even {\em ancient}\ stars and uncover the {\em early}\ formation history of individual galaxies beyond our own, particularly of the dwarf companions of the Milky Way. To summarize, the Local Group is ideally suited for studies of ``near-field cosmology'', i.e., for studies of galaxy evolution over cosmological epochs based on their resolved stellar fossil record, and for tests of the corresponding cosmological models. \section{Local Group Dwarf Spheroidals} The galaxy census of the Local Group remains uncertain. Within the Local Group's volume as defined by its zero velocity surface of $\sim$ 1 Mpc (Karachentsev et al.\ 2002), we currently know of some 38 probable member galaxies. Some were only recently discovered, and additional faint candidates continue to be found (e.g., Zucker et al.\ 2004a,b). All of the newly discovered dwarfs are dSphs, the least massive, least luminous galaxies known, and thus contribute to the faint end of the galaxy luminosity function. For reviews on Local Group dwarfs, see, e.g., Grebel (1997, 1999, 2000, 2001, 2005), Mateo (1998), and van den Bergh (1999, 2000). DSphs are usually the most numerous type of galaxy in galaxy groups and are characterized by M$_V < -14$ mag, $\mu_V < 22$ mag arcsec$^{-2}$, $M_{\rm HI} < 10^5$ M$_{\odot}$, and $M_{tot} \sim 10^7$ M$_{\odot}$. 
Often their stellar populations are purely old, but mixtures of old and intermediate-age populations are found as well. In dSphs where several populations can be distinguished, the younger and/or more metal-rich populations are more centrally concentrated, indicating extended star formation episodes in the centers of the shallow potential wells of their parent galaxies (Harbeck et al.\ 2001). The gas deficiency of dSphs remains an unsolved puzzle -- dSphs typically contain even less gas than expected from red giant mass loss over time scales of several Gyr. The metallicity--luminosity relations of dSphs and dIrrs show the usual trend of increasing metallicity with increasing galaxy luminosity, but the relations are offset from each other: DSphs have higher mean stellar metallicities at a given optical luminosity, which may indicate more rapid star formation and enrichment at early times as compared to dIrrs (Grebel et al.\ 2003). DSphs do not seem to be supported by rotation and appear to contain large amounts of dark matter. The latter is inferred from the high velocity dispersion and the resulting high mass-to-light ratios derived under the assumption of virial equilibrium. Indirectly, a high dark matter content is also supported by the morphology of some nearby dSphs (Odenkirchen et al.\ 2001) and by the observed lack of a significant depth extent (Klessen, Grebel, \& Harbeck 2003). The radial velocity dispersion profiles of dSphs fall off at large radii (Wilkinson et al.\ 2004), possibly indicating the presence of a kinematically cold stellar population at the outermost radii. If dwarf galaxies in general and dSphs in particular are indeed building blocks of larger galaxies, then today's dwarf population may be considered to be the surviving population of satellites that has not yet been accreted. The most numerous type of dwarfs in galaxy groups, the dSphs, may then be the most pristine members of the original building block population. Studying dSphs may teach us about the properties of objects that presumably were once accreted in large numbers to form galaxies like the Milky Way. Alternatively, it is conceivable that dSphs are in fact not fossil building blocks, but stripped remnants of disrupted and originally much more massive galaxies that have since merged. To find out more about the nature of dSphs and their cosmological significance, we need to understand their past and present-day properties. \section{Dwarf Spheroidals: The Earliest Measurable Epoch of Star Formation and Its Cosmological Implications} Cold dark matter models predict that low-mass systems were the first sites of star formation, possibly as early as a redshift of 30 (e.g., Barkana \& Loeb 2001). Since larger systems form through hierarchical merging of smaller systems, they should contain surviving populations of these early epochs of star formation. Furthermore, several models predict that small galaxies should have formed most of their stars prior to reionization, while reionization would have suppressed further star formation activity. In fact, galaxies less massive than $10^9$ M$_{\odot}$ should have lost their star-forming material during reionization (e.g., Susa \& Umemura 2004). Hence one would expect that low-mass galaxies contain ancient populations, while star formation should have ceased after reionization. These are predictions that can be tested in the dwarfs in our immediate surroundings. The least massive dwarfs, the dSphs, should have been most severely affected. 
Deep color-magnitude diagram data of these dwarfs that reach below the oldest main-sequence turn-offs permit us to carry out relative age dating of their old populations with internal accuracies of fractions of $\sim$1~Gyr, the highest accuracy currently attainable for any method for old stars. Note that this method can only be applied to populations sufficiently numerous to form detectable main-sequence turn-offs. This holds only for old Population II stars, while potential Population III stars are far too few even in our Milky Way. The differential ages of old populations in dwarf galaxies -- either field populations or globular clusters -- can then be compared to the ages of the oldest Galactic globular clusters of similar composition. \begin{figure} \includegraphics[height=3in,width=5.3in]{grebel_fig1.eps} \caption{ Sketch indicating the approximate duration of star formation episodes in dSph galaxies ($\sim 10^7$ M$_{\odot}$). The adopted beginning and end of the reionization epoch are based on results from WMAP and from the Sloan Digital Sky Survey. The predicted cessation of star formation after reionization is not observed. For details, see Grebel \& Gallagher (2004). } \end{figure} Importantly, this method reveals that (1) old populations are ubiquitous (but their fractions vary) and (2) the oldest ages in all of the galaxies studied so far are indistinguishable within the measurement accuracy (see Grebel 2000 and Grebel \& Gallagher 2004 for details). All nearby dwarf galaxies studied in sufficient detail were shown to contain ancient populations. Moreover, these nearby Milky Way companions and the Milky Way itself share a common epoch of star formation for their ancient Population II (within $\sim 1$ Gyr). These observations are consistent with the expectations from the building block scenario. However, the predicted cessation of star formation after reionization, expected to have affected particularly dSphs owing to their low mass, is not observed (Grebel \& Gallagher 2004). Instead, even dSphs entirely dominated by old populations show evidence for star formation extending over many Gyr (Harbeck et al.\ 2001; Ikuta \& Arimoto 2002). In dSphs with a mixture of populations we usually find star formation episodes that lasted many Gyr without being interrupted by a pronounced, long hiatus to the extent that we can measure the duration of star formation (exception: Carina with its episodic star formation). This may mean that the above quoted cosmological models are incorrect and do not properly consider other effects that might prevent complete photoevaporation (e.g., Susa \& Umemura 2004). On the other hand, photoionization is one plausible way to circumvent the substructure problem (e.g., Somerville 2002). Alternatively, it is conceivable that dSphs were once considerably more massive (by roughly a factor of 100), which could also have prevented photoionization squelching. In this case the galaxies observed today as dSphs must have undergone substantial mass loss. \section{Dwarf Spheroidal Star Formation Histories and Abundances} Thanks to deep, high-resolution photometry and synthetic color-magnitude diagram techniques, fairly detailed knowledge of the star formation histories of Local Group dSphs is now available. The resulting picture is one of high complexity: No two dSphs exhibit the same star formation history (Grebel 1997). As mentioned already, all dSphs studied in detail so far were found to contain old Population II stars. 
Some dSphs are dominated by ancient stars, others only have a small old population and a dominant intermediate-age population, and there is one example of a dSph that experienced star formation as recently as a few hundred Myr ago (Fornax, see Grebel \& Stetson 1999). Generally, star formation has proceeded continuously in these galaxies, although the amplitude varied and eventually declined at intermediate or younger ages (e.g., Grebel et al.\ 2003). Only one dSph with clearly episodic star formation is known (Carina, Smecker-Hane et al.\ 1994 and Monelli et al.\ 2003). DSphs with several populations typically show population gradients in the sense that more metal-rich and/or younger populations are more centrally concentrated (Harbeck et al.\ 2001). Substructure of this kind is not necessarily symmetrically distributed (e.g., Stetson et al.\ 1998). While the past decade was mainly one of photometrically derived star formation histories, we are now entering an era where the age-metallicity degeneracy, which is inherent to purely photometric determinations, can be broken by adding spectroscopic abundance information (e.g., Tolstoy et al.\ 2001; Pont et al.\ 2004, Cole et al.\ 2005, Koch et al. in these proceedings). This will ultimately permit us to derive detailed age-metallicity relations for these galaxies. \subsection{Comparing stellar populations and star formation histories} Comparing star formation histories of dwarf galaxies in general and dSphs in particular (e.g., Grebel 1997, 1999), one finds variations in the duration of star formation, in the star formation rates as a function of time, and in the enrichment. In spite of being overall metal-poor, all dSph galaxies that were studied spectroscopically so far show a spread of metallicities of typically 1 dex in [Fe/H] or more (e.g., Shetrone et al.\ 2001, 2003; Bonifacio et al.\ 2004). There appears to be a trend of increasing intermediate-age population fractions with increasing distance from the Milky Way among the Galactic dSphs (van den Bergh 1994; Grebel 1997), which may be due to the environmental impact of the Milky Way. If environment was indeed the governing factor determining the evolution of these low-mass galaxies, then one should expect to find a similar trend among the dSph companions of M31. However, this is not observed. Although M31's dSphs cover a comparable range of distances as their Galactic counterparts, they all appear to be dominated by old populations and lack the indicators of prominent intermediate-age populations present in the more distant Milky Way dSphs (Harbeck et al.\ 2001, 2004, 2005). Considering what we can infer from present-day dSphs about their star formation histories, how do they fit in as potential building blocks? With respect to stellar populations, dSphs dominated by old populations are compatible with the stellar content of the Galactic halo. DSphs with substantial intermediate-age populations seem less likely to have made a major contribution to the build-up of the halo of our Milky Way (Unavane et al.\ 1996). On the other hand, this problem would be diminished if most of the minor merger events took place at very early epochs. Comparing the old, metal-poor stellar populations in M31's dSphs to M31's halo indicates that the dSphs cannot have been primary building blocks of M31's halo since it was found to contain a substantial contribution from intermediate-age, comparatively metal-rich populations (Brown et al.\ 2003). 
An old, metal-poor halo population, however, has been detected as well (Brown et al.\ 2004), and again the population differences would be less severe if most of the dSph accretion had taken place at very early epochs, whereas the remainder of the younger halo of M31 would have been formed through the later accretion of more massive and more evolved galaxies. -- These statements assume that dSphs have not changed appreciably over time (e.g., did not lose substantial amounts of mass) and that their observed stellar content permits one to arrive at a fair representation of their evolutionary history. \begin{figure} \includegraphics[height=3.4in,width=5.3in]{grebel_fig2.eps} \caption{ Mean stellar metallicity of Population II stars versus baryonic luminosity for different classes of dwarf galaxies as indicated in the legend. Note the offset between gas-deficient (filled symbols) and gas-rich (open symbols) dwarfs. At the same galaxy luminosity, the old populations of dSphs are more metal-rich than those of dIrrs. Thus in contrast to dIrrs, dSphs must have experienced comparatively rapid early enrichment. Note the location of the so-called dIrr/dSph transition-type galaxies, which combine population properties of dSphs with ongoing star formation and a measurable gas content in the diagram. Their properties make them plausible dSph progenitors. For details, see Grebel, Gallagher, \& Harbeck (2003). } \end{figure} \subsection{Are DSph Abundance Patterns Consistent with the Building Block Scenario?} During the past few years more and more detailed, high-resolution abundance analyses of individual red giants in dSphs have become available, leading to a growing, yet still limited body of knowledge about their detailed element abundance ratios. In particular, [$\alpha$/Fe] ratios, r- and s-process abundances are being measured. If dSphs were dominant contributors to the build-up of the Galactic halo, their abundance patterns should match those of the halo. However, the existing measurements show pronounced differences to the abundance ratios in our Galactic halo: Dwarfs are characterized by slower star formation rates, leading to reduced [$\alpha$/Fe] ratios at lower metallicity ([Fe/H]) than found in the Galactic halo. This can be interpreted as a signature of a larger contribution of supernovae of Type Ia early on, such that a solar [$\alpha$/Fe] is reached sooner (e.g., Matteucci 2003). In contrast, the Galactic halo experienced comparatively rapid star formation accompanied by gas removal, leading to low metallicities with higher $\alpha$ element ratios. These different properties lead to the conclusion that dSphs cannot have contributed in a major way to the build-up of the Galactic halo (Shetrone et al.\ 2001), unless the majority of the minor merger events occurred at very early epochs when the abundance ratios in the Milky Way and in the dSphs were still very similar. \subsection{Morphological segregation and the metallicity-luminosity relation for dwarf galaxies} Going back to global metallicities (``[Fe/H]''), what can these tell us about galaxy evolution and interrelations between different galaxies? Now we do not consider dSphs as building blocks of larger galaxies, but dSphs as the stripped remnants of initially more massive galaxies. Clearly, dSphs must once have been more massive and considerably more gas-rich in order to have formed the stars we observe in them today. 
Their present-day gas deficiency still lacks a satisfactory explanation (Gallagher et al.\ 2003, Grebel et al.\ 2003). What were the progenitors of dSphs? The first type of galaxies that comes to mind are dIrrs, gas-rich, irregularly shaped dwarfs with ongoing star formation yet also with very old populations. Could dSphs simply be stripped dIrrs? Taken at face value, the morphological segregation observed in the Local Group (as well as in other groups) would seem to support this idea (see, e.g., Fig.\ 1 in Grebel 1995): Gas-deficient dwarf galaxies (dEs, dSphs) are usually found within 300 kpc around more massive galaxies, while gas-rich dwarfs (esp.\ dIrrs) are also (and predominantly) found at larger distances. When plotting distance from the nearest primary vs. H\,{\sc i} content, there is a clear tendency to find dSphs with H\,{\sc i} mass limits below $10^5$ M$_{\odot}$ within 300 kpc, while dIrr galaxies with typical H\,{\sc i} masses $> 10^7$ M$_{\odot}$ tend to lie at distances $> 400$ kpc (Grebel et al.\ 2003, their Fig.\ 3). The proximity to massive galaxies and interactions with these may be an efficient agent in removing material from the dwarfs (e.g., Mayer et al.\ 2001). On the other hand, the luminosity-metallicity relations of dSphs and dIrrs have long been known to differ. While for both classes of galaxies the metallicity increases with luminosity (and hence with mass), the two relations are offset from one another (e.g., Skillman \& Bender 1995) in the sense that dSphs are more metal-rich for a given luminosity. However, the luminosity-metallicity relations are based on different tracers: For dIrrs, usually the present-day oxygen abundances as measured in H\,{\sc ii} regions are used, while for dEs and dSphs, metallicities of old populations (and occasionally oxygen abundances of intermediate-age planetary nebulae) are used. Thus the metallicities of populations of very different ages as well as nebular abundances versus stellar abundances are compared. This mixture of different evolutionary stages and different metallicity indicators is unsatisfactory. Therefore we decided to attempt to compare apples with apples: In order to compare not only mean {\em stellar metallicities}\ in dIrrs and dSphs, but also the metallicities of the {\em same populations} (i.e., of stars of similar age), we chose old Population II giants, which are found in all LG dwarf galaxies. We used (1) {\em old red giants} in dSphs and in the outskirts of dIrrs (where old populations dominate), (2) {\em spectroscopic abundances} wherever available (from our own and literature data), and (3) {\em photometric abundances} elsewhere. The resulting data set may not yet have an ideal degree of homogeneity, but is the best and most comprehensive one currently available (Grebel et al.\ 2003). In the coming years, undoubtedly stellar spectroscopic measurements will also become available for those dwarfs for which we only have photometric estimates at present. Interestingly, even when confining the comparison of luminosity-metallicity relations to old populations, the differences continue to exist. {\em Thus at the same galaxy luminosity, the old populations of dSphs are more metal-rich than those of dIrrs}. This indicates that in contrast to dIrrs, dSphs must have experienced fairly rapid early enrichment (Grebel et al.\ 2003). This and several other factors make dIrrs unlikely progenitors of dSphs. 
If dSphs are stripped remnants of more massive galaxies, then the fact that they do follow a baryonic luminosity-metallicity relation indicates that they must have continued to form stars and to experience enrichment even after the main mass removal occurred. Grebel et al.\ (2003) present a series of arguments for why dIrr/dSph transition-type galaxies appear to be fairly plausible dSph progenitors, suggesting a gentle and slow transition from one kind of low-mass galaxy to another. \section{Harassment and accretion} While we presented arguments against a simple cosmological building block scenario in the preceding paragraphs, there is clear evidence for ongoing harassment and accretion of dwarf galaxies. The most prominent examples of ongoing accretion are the tidal streams of the Sagittarius dSph galaxy (Ibata et al.\ 1994), and the giant stream of metal-rich giants around M31 (Ibata et al.\ 2001). Additional stellar overdensities have been detected in the Milky Way, e.g., the Monoceros feature (Newberg et al.\ 2002; Yanny et al.\ 2003), the Canis Major overdensity (Martin et al.\ 2004), the Triangulum-Andromeda feature (Rocha-Pinto et al.\ 2004) and more substructure near M31 (Zucker et al.\ 2004a). These may be parts of the tidal tails of disrupted dwarfs. The continuation of deep wide-field surveys and the addition of spectroscopic data for phase-space information will ultimately permit us to identify and constrain less pronounced accretion events and their number, providing an important observable for hierarchical structure formation. Evidence for harassment is apparent in the S-shaped surface density profile of the Galactic dSph Ursa Minor (Palma et al.\ 2003) and in the twisted isophotes of the M31 dE companions M32 and NGC\,205 (Choi et al.\ 2002). These and other dSphs may eventually be accreted as well. A crucial bit of information in this context is the knowledge of the orbits of dwarf companions -- something the Gaia mission will help to establish. \section{Concluding remarks} What is the cosmological role of dSphs? With regard to the oldest measurable ages and the earliest epoch of star formation, we find consistency with expectations from cosmological models. There appears to be a common epoch of early star formation in the Milky Way and in its dwarf companions. In contrast to model predictions, the expected cessation of star formation after reionization is, however, not observed in dSphs. The observed population structure in the (very different) halos of M31 and of the Milky Way makes it seem unlikely that (present-day) dSphs played a major role in the build-up of the halo of the two spirals. The large variations in the star formation histories of dSphs and the presence of younger populations than in the Galactic halo can be reconciled with the building block scenario if most of the accretion events occurred very early on. The global metallicities of the Milky Way dSphs are well-matched to those observed in the Galactic halo, but this is not the case for the M31 dSph companions. With regard to detailed chemical element abundance ratios, it is emerging that dSphs cannot have been dominant contributors to halo build-up unless -- again -- the merger events took place at very early times. The differences in the metallicity-luminosity relation of different types of dwarfs seem to exclude the possibility that dSphs are simply stripped dIrrs. 
Both disruption and accretion of dwarf companions are still occurring today, demonstrating that dSphs must have played some role in the growth of larger galaxies. Unfortunately, the number and importance of accretion events remains unclear for either of the two large spirals in the Local Group. In spite of admirable progress, dwarfs remain an evolutionary puzzle. They are excellent and important test cases of cosmological predictions. Regardless of their cosmological importance, however, dwarf galaxies are also interesting in their own right! \begin{acknowledgments} Many thanks to Helmut Jerjen and Bruno Binggeli for a wonderful conference and for their patience while this contribution was finished. I am also indebted to Jay Gallagher for a critical reading of this text. \end{acknowledgments}
{ "redpajama_set_name": "RedPajamaArXiv" }
8,101
{"url":"https:\/\/brilliant.org\/problems\/ques-26\/","text":"# Let's integrate\n\nCalculus Level 1\n\nLet $$a,b,c$$ be non-zero real constants such that $\\displaystyle \\int_{0}^3 (3ax^2+2bx+c) \\, dx=\\displaystyle \\int_{1}^3 (3ax^2+2bx+c) \\, dx.$ What is the value of $$a+b+c$$?\n\n\u00d7","date":"2017-10-18 04:05:58","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.75734943151474, \"perplexity\": 849.5669568359358}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-43\/segments\/1508187822739.16\/warc\/CC-MAIN-20171018032625-20171018052625-00517.warc.gz\"}"}
{"url":"https:\/\/en.wikipedia.org\/wiki\/Orbit_method","text":"Orbit method\n\nIn mathematics, the orbit method (also known as the Kirillov theory, the method of coadjoint orbits and by a few similar names) establishes a correspondence between irreducible unitary representations of a Lie group and its coadjoint orbits: orbits of the action of the group on the dual space of its Lie algebra. The theory was introduced by Kirillov\u00a0(1961, 1962) for nilpotent groups and later extended by Bertram Kostant, Louis Auslander, Lajos Puk\u00e1nszky and others to the case of solvable groups. Roger Howe found a version of the orbit method that applies to p-adic Lie groups. David Vogan proposed that the orbit method should serve as a unifying principle in the description of the unitary duals of real reductive Lie groups.\n\nRelation with symplectic geometry\n\nOne of the key observations of Kirillov was that coadjoint orbits of a Lie group G have natural structure of symplectic manifolds whose symplectic structure is invariant under G. If an orbit is the phase space of a G-invariant classical mechanical system then the corresponding quantum mechanical system ought to be described via an irreducible unitary representation of G. Geometric invariants of the orbit translate into algebraic invariants of the corresponding representation. In this way the orbit method may be viewed as a precise mathematical manifestation of a vague physical principle of quantization. In the case of a nilpotent group G the correspondence involves all orbits, but for a general G additional restrictions on the orbit are necessary (polarizability, integrality, Puk\u00e1nszky condition). This point of view has been significantly advanced by Kostant in his theory of geometric quantization of coadjoint orbits.\n\nKirillov character formula\n\nFor a Lie group ${\\displaystyle G}$, the Kirillov orbit method gives a heuristic method in representation theory. It connects the Fourier transforms of coadjoint orbits, which lie in the dual space of the Lie algebra of G, to the infinitesimal characters of the irreducible representations. The method got its name after the Russian mathematician Alexandre Kirillov.\n\nAt its simplest, it states that a character of a Lie group may be given by the Fourier transform of the Dirac delta function supported on the coadjoint orbits, weighted by the square-root of the Jacobian of the exponential map, denoted by ${\\displaystyle j}$. It does not apply to all Lie groups, but works for a number of classes of connected Lie groups, including nilpotent, some semisimple groups, and compact groups.\n\nSpecial cases\n\nNilpotent group case\n\nLet G be a connected, simply connected nilpotent Lie group. Kirillov proved that the equivalence classes of irreducible unitary representations of G are parametrized by the coadjoint orbits of G, that is the orbits of the action G on the dual space ${\\displaystyle {\\mathfrak {g}}^{*}}$ of its Lie algebra. The Kirillov character formula expresses the Harish-Chandra character of the representation as a certain integral over the corresponding orbit.\n\nCompact Lie group case\n\nComplex irreducible representations of compact Lie groups have been completely classified. They are always finite-dimensional, unitarizable (i.e. admit an invariant positive definite Hermitian form) and are parametrized by their highest weights, which are precisely the dominant integral weights for the group. 
If G is a compact semisimple Lie group with a Cartan subalgebra h then its coadjoint orbits are closed and each of them intersects the positive Weyl chamber h*+ in a single point. An orbit is integral if this point belongs to the weight lattice of G. The highest weight theory can be restated in the form of a bijection between the set of integral coadjoint orbits and the set of equivalence classes of irreducible unitary representations of G: the highest weight representation L(\u03bb) with highest weight \u03bbh*+ corresponds to the integral coadjoint orbit G\u00b7\u03bb. The Kirillov character formula amounts to the character formula earlier proved by Harish-Chandra.","date":"2019-08-21 05:28:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 3, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8911342620849609, \"perplexity\": 204.99682411048016}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027315809.69\/warc\/CC-MAIN-20190821043107-20190821065107-00510.warc.gz\"}"}
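For reference, the formula sketched in the "Kirillov character formula" section above is commonly written (up to normalization conventions, and quoted here from the general literature rather than from this page) as $j(X)^{1/2}\,\chi_\pi(\exp X)=\int_{\mathcal{O}_\pi} e^{i\langle \lambda,X\rangle}\,d\mu(\lambda),$ where $\mathcal{O}_\pi\subset\mathfrak{g}^*$ is the coadjoint orbit attached to the representation $\pi$, $\mu$ is its canonical (Liouville) measure, and $j$ is the Jacobian factor of the exponential map mentioned above.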
Q: Does Twig include a filter for auto-linking text?

Symfony1 had a helper function called auto_link_text(), which parsed a block of text and wrapped all text URLs in <a> tags, automatically populating the href attribute. Does Twig include a function like this? I've looked on Google, and gone through the code, but can't find one. I can obviously code one myself, but don't want to replicate something if it's already there. If I do code one myself, should it be a function or a filter?

A: The other listed "answer" is a little out of date and has issues. This one will work in the latest versions of Symfony and has fewer issues:

use Twig\Extension\AbstractExtension;
use Twig\TwigFilter;

class AutoLinkTwigExtension extends AbstractExtension
{
    public function getFilters()
    {
        return [new TwigFilter('auto_link', [$this, 'autoLink'], [
            'pre_escape' => 'html',
            'is_safe' => ['html'],
        ])];
    }

    public static function autoLink($string)
    {
        $pattern = "/http[s]?:\/\/[a-zA-Z0-9.\-\/?#=&]+/";
        $replacement = "<a href=\"$0\" target=\"_blank\">$0</a>";
        return preg_replace($pattern, $replacement, $string);
    }
}

A: The function doesn't exist in Twig, but you can add your own extensions to Twig:

class AutoLinkTwigExtension extends \Twig_Extension
{
    public function getFilters()
    {
        return array(
            'auto_link_text' => new \Twig_Filter_Method($this, 'auto_link_text', array('is_safe' => array('html'))),
        );
    }

    public function getName()
    {
        return "auto_link_twig_extension";
    }

    public static function auto_link_text($string)
    {
        $regexp = "/(<a.*?>)?(https?)?(:\/\/)?(\w+\.)?(\w+)\.(\w+)(<\/a.*?>)?/i";
        $anchorMarkup = "<a href=\"%s://%s\" target=\"_blank\" >%s</a>";

        preg_match_all($regexp, $string, $matches, \PREG_SET_ORDER);

        foreach ($matches as $match) {
            if (empty($match[1]) && empty($match[7])) {
                $http = $match[2] ? $match[2] : 'http';
                $replace = sprintf($anchorMarkup, $http, $match[0], $match[0]);
                $string = str_replace($match[0], $replace, $string);
            }
        }

        return $string;
    }
}

A: If you are using Twig inside of Symfony2, there's a bundle for that: https://github.com/liip/LiipUrlAutoConverterBundle

If you're using it outside of Symfony2, you could submit a PR to them in order to decouple the bundle and the twig extension!
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,459
\section{INTRODUCTION} \label{sec:intro} \goal{Explain importance of distributed memory machines for large-scale processing.} The scale of today's data processing is growing steadily. For example, the size of Facebook's social graph is many petabytes~\cite{bronson2013tao,Venkataramani:2012:TFS:2213836.2213957} and graphs processed by the well-known HPC benchmark Graph500~\cite{murphy2010introducing} can have trillions of vertices. Efficient analyses of such datasets require distributed-memory (DM) machines with deep \emph{Non-Uniform Memory Access} (NUMA) hierarchies. \goal{State that locks are important in large-scale processing.} Locks are among the most effective synchronization mechanisms used in codes for such machines~\cite{bienia11benchmarking}. On one hand, if used improperly, they may cause deadlocks. Yet, they have intuitive semantics and they often outperform other schemes such as atomic operations~\cite{schweizer2015evaluating} or transactions~\cite{besta2015accelerating}. \goal{Say that deep memory hierarchies are challenging for efficient locks.} Designing efficient locks for machines with deep hierarchical memory systems is challenging. Consider four processes competing for the same lock. Assume that two of them (A and B) run on one socket and the remaining two (C and D) execute on the other one. Now, in a naive lock design oblivious to the memory hierarchy, the lock may be passed between different sockets up to three times, degrading performance (e.g., if the order of the processes entering the critical section (CS) is A, C, B, and D). Recent advances~\cite{Chabbi:2015:HPL:2688500.2688503, Dice:2012:LCG:2145816.2145848} tackle this problem by reordering processes acquiring the lock to reduce inter-socket communication. Here, the order of A, B, C, and D entails only one inter-socket lock transfer, trading fairness for higher throughput. Extending such schemes to DM machines with weak memory models increases complexity. Moreover, expensive inter-node data transfers require more aggressive communication-avoidance strategies than those in intra-node communication~\cite{fompi-paper}. To our best knowledge, no previous lock scheme addresses these challenges. \begin{figure}[!h] \centering \includegraphics[width=0.4\textwidth]{space_5-eps-converted-to.pdf} \caption{The space of parameters of the proposed Reader-Writer lock.} \label{fig:space} \end{figure} \goal{State that we care about RW locks.} Another property of many large-scale workloads is that they are dominated by reads (e.g., they constitute 99.8\% of requests to the Facebook graph~\cite{Venkataramani:2012:TFS:2213836.2213957}). Here, simple locks would entail unnecessary overheads. Instead, the Reader-Writer (RW) lock~\cite{Mellor-Crummey:1991:SRS} can be used to reduce the overhead among processes that only perform reads in the critical section (CS). Initial RW \emph{NUMA-aware} designs have recently been introduced~\cite{Calciu:2013:NRL}, but they do not address DM machines. In this work, we develop a lock that addresses the above challenges. Its core concept is a modular design for adjusting performance to various types of workloads. The lock consists of three key data structures. First, the distributed counter (DC) indicates the number of readers or the presence of a writer in the CS. Second, the distributed queue (DQ) synchronizes writers belonging to a given element of the memory hierarchy (e.g., a rack). 
Finally, the distributed tree (DT) binds together all queues at different levels of the memory hierarchy and synchronizes writers with readers. Each of these three structures offers an adjustable performance tradeoff, enabling high performance in various settings. DC can lower the latency of lock acquire/release performed by either readers or writers, DQ can be biased towards improving either locality or fairness, and DT can increase the throughput of either readers or writers. The values of these parameters constitute a three dimensional space that is illustrated in Figure~\ref{fig:space}. Each point is a specific lock design with selected performance properties. \goal{Advertise RMA and say we'll use it for our locks.} \sloppy Most DM machines offer Remote Direct Memory Access (RDMA)~\cite{recio2007remote}, a hardware scheme that removes the OS and the CPU from the inter-node communication path. RDMA is the basis of many Remote Memory Access (RMA)~\cite{fompi-paper} programming models. Among others, they offer a Partitioned Global Address Space (PGAS) abstraction to the programmer and enable low-overhead direct access to remote memories with put/get communication primitives. RMA principles are used in various HPC languages and libraries: Unified Parallel C (UPC)~\cite{upc}, Fortran 2008~\cite{fortran2008}, MPI-3~\cite{mpi3}, or SHMEM~\cite{shmem}. We will illustrate how to utilize RMA in the proposed locks for DM machines, addressing the above-mentioned challenges. In the following, we use MPI-3 RMA but we keep our protocols generic and we discuss (\cref{sec:discussion}) how other RMA languages and libraries can also be used. \goal{State our contributions} In summary, our key contributions are as follows: \begin{itemize} \item We develop a topology-aware distributed Reader-Writer lock that enables various tradeoffs between fairness, throughput, latency, and locality. \item We offer a topology-aware distributed MCS lock that accelerates the state-of-the-art MPI-3 RMA codes~\cite{fompi-paper}. \item We illustrate that our designs outperform the state-of-the-art in throughput/latency (7.2x/6.8x on average) and that they accelerate distributed hashtables used in key-value (KV) stores or graph processing. \end{itemize} \section{RMA AND LOCKS} \label{sec:background} \goal{Introduce the section} We start by discussing RMA (\cref{sec:background_rma}), our tool to develop the proposed locks. Next, we present traditional (\cref{sec:traditional_locks}) and state-of-the-art (\cref{sec:state-of-the-art_locks}, \cref{sec:distributed_mcs}) locks that we use and extend. \textbf{\textsf{Notation/Naming:}} We denote the number of processes as $P$; we use the notion of a \emph{process} as it occurs frequently in DM codes such as MPI~\cite{mpi3}. Still, our schemes are independent of whether heavyweight processes or lightweight threads are incorporated. Each process has a unique ID called the \emph{rank} $\in \{1, ..., P\}$. A process in the CS is called \emph{active}. A null pointer is denoted as $\emptyset$. Then, $N$ is the number of levels of the memory hierarchy of the used machine. Here, the selection of the considered levels depends on the user. For example, one can only focus on the nodes connected with a network and racks that contain nodes and thus $N=3$ (three levels: the nodes, the racks, and the whole machine). We refer to a single considered machine part (e.g., a node) as an \emph{element}. 
We refer to a node that is a shared-memory cache-coherent domain connected to other such domains with a non-coherent network as a \emph{compute node} (or just \emph{node}). One compute node may contain smaller elements that are cache-coherent and together offer \emph{non-uniform memory access (NUMA)}. We refer to such elements as \emph{NUMA nodes}; an example NUMA node is a socket with a local DRAM. We present symbols used in the paper in Table~\ref{tab:symbols}. \begin{table}[h!] \centering \footnotesize \sf \begin{tabular}{r|l} \toprule $P$ & Number of processes.\\ $p$ & Rank of a process that attempts to acquire/release a lock.\\ $N$ & Number of levels of the considered machine.\\ $N_i$ & Number of machine elements at level~$i$; $1 \leq i \leq N$.\\ $i$ & Index used to refer to the $i$th machine level.\\ $j$ & Index used to refer to the $j$th element at a given machine level.\\ \bottomrule \end{tabular} \caption{Symbols used in the paper.} \label{tab:symbols} \end{table} \subsection{RMA Programming} \label{sec:background_rma} \goal{+ Describe RDMA \& RMA and show they're popular} In RMA programming, processes communicate by directly accessing one another's memories. Usually, RMA is built over OS-bypass RDMA hardware for highest performance. RMA non-blocking \emph{put}s (writes to remote memories) and \emph{get}s (reads from remote memories) offer low latencies, potentially outperforming message passing~\cite{fompi-paper}. Remote \emph{atomics} such as compare-and-swap~\cite{mpi3,Herlihy:2008:AMP:1734069} are also available. Finally, RMA \emph{flushes} ensure the consistency of data by synchronizing respective memories. RDMA is provided in virtually all modern networks (e.g., IBM PERCS~\cite{arimilli2010percs}, IBM's on-chip Cell, InfiniBand~\cite{IBAspec}, iWARP~\cite{iwarp}, and RoCE~\cite{roce}). Moreover, numerous libraries and languages offer RMA features. Examples include MPI-3 RMA~\cite{mpi3}, UPC~\cite{upc}, Titanium~\cite{hilfinger2005titanium}, Fortran 2008~\cite{fortran2008}, X10~\cite{x10}, or Chapel~\cite{chapel}. The number of RMA codes is growing steadily, and RMA itself is being continually enhanced~\cite{besta2014fault, besta2015active}. \textbf{\textsf{RMA Windows:}} In RMA, each process explicitly exposes an area of its local memory as shared. In MPI, this region is called a \emph{window}. Once shared, a window can be accessed with puts/gets/atomics and synchronized with flushes. We will refer to such an exposed memory in any RMA library/language as a window. \textbf{\textsf{RMA Functions:}} We describe the syntax/semantics of the used RMA calls in Listing~\ref{lst:rma_calls}. All \texttt{int}s are 64-bit. For clarity, we also use the \texttt{bool} type and assume it to be an \texttt{int} that can take the 0 (\texttt{false}) or 1 (\texttt{true}) values, respectively. Values returned by \texttt{Get}/\texttt{FAO}/\texttt{CAS} are only valid after the subsequent \texttt{Flush}. The syntax is simplified for clarity: we omit a pointer to the accessed window (we use a single window). We use an \emph{origin}/a \emph{target} to refer to a process that issues or is targeted by an RMA call. \begin{lstlisting}[float=h,label=lst:rma_calls,caption=The syntax/semantics of the utilized RMA calls.] 
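/* Note: in an MPI-3 implementation, these simplified calls roughly map to
 * MPI_Put, MPI_Get, MPI_Accumulate, MPI_Fetch_and_op, MPI_Compare_and_swap,
 * and MPI_Win_flush, issued on a window created with, e.g., MPI_Win_allocate
 * and exposed via a passive-target epoch (MPI_Win_lock_all). */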
/* |\underline{Common parameters:}| $target$: target's rank; $offset$: an offset * into $target$'s window that determines the location of the * targeted data; $op$: an operation applied to a remote piece of * data (either an atomic replace (REPLACE) or a sum (SUM)); * $oprd$: the operand of an atomic operation $op$.*/ /* Place atomically $src\_data$ in $target$'s window.*/ void Put(int src_data, int target, int offset); /* Fetch and return atomically data from $target$'s window.*/ int Get(int target, int offset); /* Apply atomically $op$ using $oprd$ to data at $target$.*/ void Accumulate(int oprd, int target, int offset, MPI_Op op); /* Atomically apply $op$ using $oprd$ to data at $target$ * and return the previous value of the modified data.*/ int FAO(int oprd, int target, int offset, MPI_Op op); /* Atomically compare $cmp\_data$ with data at $target$ and, if * equal, replace it with $src\_data$; return the previous data.*/ int CAS(int src_data, int cmp_data, int target, int offset); /* Complete all pending RMA calls started by the calling process * and targeted at $target$.*/ void Flush(int target); \end{lstlisting} \subsection{Traditional Hardware-Oblivious Locks} \label{sec:traditional_locks} We now present hardware-oblivious locks used in this work. \subsubsection{Reader-Writer (RW) Locks} \label{sec:trad_rw_lock} \goal{Describe RW locks} Reader-Writer (RW) locks~\cite{Courtois:1971:CCL} distinguish between processes that only perform reads when in the CS (\emph{readers}) and those that issue writes (\emph{writers}). Here, multiple readers may simultaneously enter a given CS, but only one writer can be granted access at a time, with no other concurrent readers or writers. RW locks are used in OS kernels, databases, and present in various HPC libraries such as MPI-3~\cite{mpi3}. \subsubsection{MCS Locks} \label{sec:trad_mcs_lock} \goal{Describe MCS locks} Unlike RW locks, the MCS lock (due to Mellor-Crummey and Scott)~\cite{Mellor-Crummey:1991:ASS, Scott:2001:SQS:379539.379566, 292571} does not distinguish between readers or writers. Instead, it only allows one process $p$ at a time to enter the CS, regardless of the type of memory accesses issued by $p$. Here, processes waiting for the lock form a queue, with a process at the head holding the lock. The queue contains a single global pointer to its tail. Moreover, each process in the queue maintains: (1) a local flag that signals if it can enter the CS and (2) a pointer to its successor. To enter the queue, a process $p$ updates both the global pointer to the tail and the pointer at its predecessor so that they both point to $p$. A releasing process notifies its successor by changing the successor's local flag. The MCS lock reduces the amount of coherence traffic that limits the performance of spinlocks~\cite{Anderson:1990:PSL:628891.628973}. Here, each process in the queue spin waits on its local flag that is modified once by its predecessor. \subsection{State-of-the-Art NUMA-Aware Locks} \label{sec:state-of-the-art_locks} We now discuss lock schemes that use the knowledge of the NUMA structure of the underlying machine for more performance. We will combine and extend them to DM domains, and enrich them with a family of adjustable parameters for high performance with various workloads. 
\subsubsection{NUMA-Aware RW Locks} \label{sec:numa_rw_lock} \goal{Describe NUMA-aware RW-locks and explain why research is needed} Many traditional RW locks (\cref{sec:trad_rw_lock}) entail performance penalties in NUMA systems as they usually rely on a centralized structure that becomes a bottleneck and entails high latency when accessed by processes from remote NUMA elements. Calciu et al.~\cite{Calciu:2013:NRL} tackle this issue with a flag on each NUMA node that indicates if there is an active reader on that node. This reduces contention due to readers (each reader only marks a local flag) but may entail additional overheads for writers that check for active readers. \subsubsection{Hierarchical MCS Locks} \label{sec:hierarchical_locks} \goal{Describe hierarchical locks and show why this is needed today} Hierarchical locks tackle expensive lock passing described in~\cref{sec:intro}. They trade fairness for higher throughput by ordering processes that enter the CS to reduce the number of such passings. Most of the proposed schemes address two-level NUMA machines~\cite{Chabbi:2015:HPL:2688500.2688503, Dice:2011:FNL, luchangco2006hclh, Radovic:2003:HBL}. Chabbi et al.~consider a multi-level NUMA system~\cite{Chabbi:2015:HPL:2688500.2688503}. Here, each NUMA hierarchy element (e.g., a socket) entails a separate MCS lock. To acquire the global lock, a process acquires an MCS lock at each machine level. This increases locality~\cite{tate2014programming} but reduces fairness: processes on the same NUMA node acquire the lock consecutively even if processes on other nodes are waiting. \subsection{Distributed RMA MCS Locks} \label{sec:distributed_mcs} \goal{Introduce the section.} Finally, we present a distributed MCS (D-MCS) lock based on an MPI-3 MCS lock~\cite{gropp2014using}. We will use it to accelerate state-of-the-art MPI RMA library foMPI~\cite{fompi-paper} and as a building block of the proposed distributed topology-aware RW and MCS locks (\cref{sec:rw_locks}). \subsubsection{Summary and Key Data Structures} \label{sec:mcs_data} \goal{Tell how we use windows} Here, processes that wait for the D-MCS lock form a queue that may span multiple nodes. Each process maintains several globally visible variables. A naive approach would use one window per variable. However, this would entail additional memory overheads (one window requires $\Omega(P)$ storage in the worst case~\cite{fompi-paper}). Thus, we use one window with different offsets determining different variables: a pointer to the next process in the MCS queue (offset \texttt{NEXT}, initially $\emptyset$) and a flag indicating if a given process has to spin wait (offset \texttt{WAIT}, initially \texttt{false}). A selected process (rank \texttt{tail\_rank}) also maintains a pointer to a process with the queue tail (offset \texttt{TAIL}, initially $\emptyset$). \subsubsection{Lock Protocols} \label{sec:mcs_implementation} We now describe the protocols for acquire/release. We refer to respective variables using their offsets in the window. \textbf{Lock Acquire (Listing~\ref{lst:mcs_acquire})} First, $p$ atomically modifies \texttt{TAIL} with its own rank and fetches the predecessor rank (Line~\ref{line:rma_mcs_fetch_pred}). If there is no predecessor, it proceeds to the CS. Otherwise, it enqueues itself (Line~\ref{line:rma_mcs_enqueue_itself}) and waits until its local \texttt{WAIT} is set to \texttt{false}. \texttt{Flush}es ensure the data consistency. \begin{lstlisting}[float=h,label=lst:mcs_acquire,caption=Acquiring D-MCS.] 
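/* Window layout used in this and the next listing: each process exposes
 * NEXT (the rank of its successor, initially $\emptyset$) and WAIT (a spin-wait flag);
 * the process with rank tail_rank additionally exposes TAIL (the rank of
 * the current queue tail). */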
void acquire() { /* Prepare local fields. */ Put($\emptyset$, $p$, NEXT); Put(true, $p$, WAIT); /* Enter the tail of the MCS queue and get the predecessor. */ int pred = FAO($p$, tail_rank, TAIL, REPLACE);|\label{line:rma_mcs_fetch_pred}| Flush(tail_rank); /* Ensure completion of FAO. */ if(pred != $\emptyset$) { /* Check if there is a predecessor. */ /* Make the predecessor see us. */ Put($p$, pred, NEXT); Flush(pred);|\label{line:rma_mcs_enqueue_itself}| bool waiting = true; do { /* Spin locally until we get the lock. */ waiting = Get($p$, WAIT); Flush($p$); } while(waiting == true); } } \end{lstlisting} \textbf{Lock Release (Listing~\ref{lst:mcs_release})} First, $p$ checks if it has a successor in the queue (Line~\ref{line:rma_mcs_release_check_succ}). If there is none, it atomically verifies if it is still the queue tail (Line~\ref{line:rma_mcs_release_check_tail}); if yes, it sets \texttt{TAIL} to $\emptyset$. Otherwise, $p$ waits for a process that has modified \texttt{TAIL} to update its \texttt{NEXT} field (Lines~\ref{line:rma_mcs_release_wait_start}-\ref{line:rma_mcs_release_wait_end}). If there is a successor, the lock is passed with a single \texttt{Put} (Line~\ref{line:rma_mcs_release_notify}). \begin{lstlisting}[label=lst:mcs_release,float=h,caption=Releasing D-MCS.] void release() { int succ = Get($p$, NEXT); Flush($p$); if(succ == $\emptyset$) { |\label{line:rma_mcs_release_check_succ}| /* Check if we are waiting for the next proc to notify us.*/ int curr_rank = CAS($\emptyset$, $p$, tail_rank, TAIL); |\label{line:rma_mcs_release_check_tail}| Flush(tail_rank); if($p$ == curr_rank) return; /* We are the only process in the queue. */ do { /* Wait for a successor. */ |\label{line:rma_mcs_release_wait_start}| succ = Get($p$, NEXT); Flush($p$); } while (succ == $\emptyset$); |\label{line:rma_mcs_release_wait_end}| } /* Notify the successor. */ Put(0, succ, WAIT); Flush(succ);|\label{line:rma_mcs_release_notify}|} \end{lstlisting} \begin{figure*} \centering \includegraphics[width=1\textwidth]{structures_4-eps-converted-to.pdf} \caption{An example RMA-RW on a three-level system.} \label{fig:structures} \end{figure*} \section{DISTRIBUTED RMA RW LOCKS} \label{sec:rw_locks} We now present a distributed \emph{topology-aware} RW lock (RMA-RW) for scalable synchronization and full utilization of parallelism in workloads dominated by reads. We focus on the RW semantics as the key part of the introduced lock. Symbols specific to RMA-RW are presented in Table~\ref{tab:symbols_rma-rw}. \macb{Lock Abbreviations} We always refer to the proposed topology-aware distributed RW and MCS locks as RMA-RW and RMA-MCS, respectively. Both RMA-RW and RMA-MCS use as their building block a simple distributed topology-oblivious MCS lock (\cref{sec:distributed_mcs}) denoted as D-MCS. \macb{Example} In the whole section, we will use the example shown in~Figure~\ref{fig:structures}. Here, $N=3$ and the considered levels are: compute nodes, racks, and the whole machine. \begin{table}[h!]
\centering \footnotesize \sf \begin{tabular}{r|l} \toprule $T_{DC}$ & The \emph{Distributed Counter} threshold (\cref{sec:counters}).\\ $T_{L,i}$ & The \emph{Locality} threshold at level~$i$ (\cref{sec:qnode}).\\ $T_{R}$ & The \emph{Reader} threshold (\cref{sec:hnode}).\\ $T_{W}$ & The \emph{Writer} threshold; $T_{W} = \prod_{i=1}^{N} T_{L,i}$ (\cref{sec:hnode}).\\ $c(p)$ & Mapping from a process $p$ to its physical counter (\cref{sec:counters}).\\ $e(p,i)$ & Mapping from a process $p$ to its home machine element at level~$i$ (\cref{sec:qnode}).\\ $F_W$ & The fraction of writers in a given workload (the fraction of readers: $1-F_W$).\\ \bottomrule \end{tabular} \caption{Symbols used in RMA-RW.} \label{tab:symbols_rma-rw} \end{table} \subsection{Design Summary and Intuition} \label{sec:concept} As explained in~\cref{sec:intro}, RMA-RW consists of three types of core data structures: distributed queues (DQs), a distributed tree (DT), and a distributed counter (DC). They are illustrated in Figure~\ref{fig:structures}. First, every machine element (at each considered level) has an associated DQ and thus a D-MCS lock \emph{local} to this element (as opposed to the \emph{global} RMA-RW lock). In our example, every node, rack, and the whole machine have their own DQ (and thus a local MCS lock). Note that some DQs that are associated with elements such as nodes are not necessarily distributed, but we use the same name for clarity. Second, all the DQs form a DT that corresponds to the underlying memory hierarchy, with one DQ related to one tree vertex. For example, DQs associated with nodes that belong to a given rack $r$ constitute vertices that are children of a vertex associated with a DQ running on rack $r$. Third, DC counts active readers and writers and consists of several physical counters located on selected processes. DT on its own (without DC and any readers) constitutes RMA-MCS. \macb{Writers } A writer that wants to acquire a lock starts at a leaf of DT located at the lowest level~$N$ (a node in our example). At any level~$i$ ($2 \le i \le N$), it acquires a local D-MCS lock that corresponds to a subtree of D-MCS locks (and thus DQs) rooted at the given element. Here, it may compete with other writers. When it reaches level~1, it executes a different protocol for acquiring the whole RMA-RW lock. Here, it may also compete with readers. RMA-RW's locality-aware design enables a \emph{shortcut}: some writers stop before reaching level~1 and directly proceed to the CS. This happens if a lock is passed within a given machine element. \macb{Readers } Readers do not enter DQs and DT and thus have a single acquire protocol. This design reduces synchronization overhead among readers. \subsection{Key Data Structures} \label{sec:key_data} We now present the key structures in more detail. \subsubsection{Distributed Counter (DC)} \label{sec:counters} \goal{explain how the modes READ and WRITE work} DC maintains the number of active readers or writers. It enables an adjustable performance tradeoff that accelerates readers or writers. For this, one DC consists of multiple physical counters, each maintained by every $T_{DC}$th process; $T_{DC}$ is a parameter selected by the user. To enter the CS, a reader $p$ increments only one associated physical counter while a writer must check each one of them. 
Thus, selecting more physical counters (smaller $T_{DC}$) entails lower reader latency (as each reader can access a counter located on a closer machine element) and lower contention (as each counter is accessed by fewer readers). Yet, higher $T_{DC}$ entails lower latency for a writer that accesses fewer physical counters. A physical counter associated with a reader $p$ is located at a rank $c(p)$; $c(\cdot) \in \{1, ..., P\}$ can be determined at compile- or run-time. In a simple hardware-oblivious scheme, one can fix $c(p) = \left( \left\lceil p / T_{DC} \right\rceil - 1 \right) T_{DC} + 1$, i.e., one physical counter is stored on every $T_{DC}$th rank. For more performance, the user can locate physical counters in a topology-aware way. For example, if the user allocates $x$ processes/node and a node $s$ hosts processes with $x$ successive ranks starting from $(s-1)x+1$, then setting $T_{DC} = kx$ in the above formula results in storing one physical counter every $k$th node. This can be generalized to any other machine element. To increase performance, we implement each physical counter as two 64-bit fields that count the readers (assigned to this counter) that arrived and departed from the CS, respectively. This facilitates obtaining the number of readers that acquired the lock since the last writer and reduces contention between processes that acquire and release the lock. We dedicate one bit of the field that counts arriving readers to indicate whether the CS of RMA-RW is in the \texttt{READ} mode (it contains readers) or the \texttt{WRITE} mode (it contains a writer). \textbf{\textsf{RMA Design of DC:}} Each physical counter occupies two words with offsets \texttt{ARRIVE} (for counting arriving readers) and \texttt{DEPART} (for counting departing readers); physical counters together constitute an RMA window. \subsubsection{Distributed Queue (DQ)} \label{sec:qnode} DQ orders writers from a single element of the machine that attempt to enter the CS. DQs from level~$i$ have an associated threshold $T_{L,i}$ that determines the maximum number of lock passings between writers running on a machine element from this level before the lock is passed to a process from a different element. We use a separate threshold $T_{L,i}$ for each~$i$ because some levels (e.g., racks) may need more locality (a higher threshold) than others (e.g., nodes) due to expensive data transfers. This design enables an adjustable tradeoff between fairness and throughput at each level. DQ extends D-MCS in that the local flag that originally signals whether a process can enter the CS now becomes an integer that carries (in the same RMA operation) the number of past lock acquires within a given machine element. We use this value to decide whether to pass the lock to a different element at a given level~$i$ (if the value reaches $T_{L,i}$) or not (if the value is below $T_{L,i}$). \textbf{\textsf{RMA Design of DQ:}} All DQs at a given level constitute an RMA window. Respective offsets in the window are as follows: \texttt{NEXT} (the rank of the next process in the queue), \texttt{STATUS} (an integer that both signals whether to spin wait and carries the number of past lock acquires in the associated machine element), and \texttt{TAIL} (the rank of the process that constitutes the current tail of the queue). \texttt{TAIL} in the DQ at level~$i$ associated with the $j$th element is stored on a process \texttt{tail\_rank[$i$,$j$]}. \subsubsection{Distributed Tree of Queues (DT)} \label{sec:hnode} DT combines DQs at different memory hierarchy levels into a single structure.
This enables $p$ to make progress in acquiring/releasing RMA-RW by moving from level~$N$ to level~1. Then, at the tree root, writers synchronize with readers. Specifically, the lock is passed from writers to readers (if there are some waiting) when the total number of lock passings between writers reaches a threshold $T_W$. In our design, $T_W = \prod_{i=1}^{N} T_{L,i}$. To avoid starvation of writers, we also introduce a threshold $T_R$ that is the maximum number of readers that can enter the CS consecutively before the lock is passed to a writer (if there is one waiting). Increasing $T_R$ or $T_{W}$ improves the throughput of readers or writers because more processes of a given type can enter the CS consecutively. While climbing up DT, a writer must determine the next DQ (and thus D-MCS) to enter. This information is encoded in a mapping $e(\cdot,\cdot)$ and structure \texttt{tail\_rank[$i$,$j$]}. $e(p,i)$ $\in \{ 1, ..., N_i \}$ returns the ID of a machine element associated with a process $p$ at level~$i$. An expression \texttt{tail\_rank[$i$,$e(p,i)$]} returns the rank of a process that points to the tail of a DQ at level~$i$ within a machine element assigned to $p$. This enables $p$ to enter D-MCS at the next level on the way to the CS. Similarly to $c(p)$, $e(p,i)$ can be determined statically or dynamically. Depending on $T_{L,i}$, some writers do not have to climb all DT levels and can proceed directly to the CS. Thus, we further extend the \texttt{STATUS} field used in DQ with one more special value \texttt{ACQUIRE\_PARENT}. This indicates that $p$ cannot directly enter the CS and should continue up DT. \subsubsection{Discussion on the Status Field} A central part of DQ and DT is the \texttt{STATUS} field that enables processes to exchange various additional types of information in a single RMA communication action, including: (1) if a lock mode changed (e.g., from \texttt{READ} to \texttt{WRITE}), (2) if a given process should acquire a lock at a higher DT level, (3) if a given process can enter the CS, and (4) the number of past consecutive lock acquires. Two selected integer values are dedicated to indicate (1) and (2). All the remaining possible values indicate that the given process can enter the CS (3); at the same time the value communicates (4). \subsection{Distributed Reader-Writer Protocol} We now illustrate how the above data structures play together in the acquire and release protocols. A writer starts at the leaf of DT (level~$N$) both for acquiring and releasing. At any level~$i$ ($2 \le i \le N$), it proceeds up the tree executing a protocol for a partial acquire/release of the respective part of the tree (\cref{sec:writer_acquire_n}, \cref{sec:writer_release_n}). At level~1, it executes a different protocol for locking or releasing the whole lock (\cref{sec:writer_acquire_1}, \cref{sec:writer_release_1}). Readers do not follow such a hierarchy and thus have single acquire (\cref{sec:reader_acquire}) and release (\cref{sec:reader_release}) protocols. \subsubsection{Writer Lock Acquire: Level $N$ to $2$ (Listing~\ref{lst:writer_acquire_n})} \label{sec:writer_acquire_n} \goal{description of writer acquisition} \textbf{\textsf{Intuition:}} $p$ enters the DQ associated with a given level~$i$ and its home element $e(p,i)$; it then waits for the update from its predecessor. If the predecessor does not have to hand over the lock to a process from another element (i.e., has not reached the threshold $T_{L,i}$), the lock is passed to $p$ that immediately enters the CS. 
Otherwise, $p$ moves to level~$i-1$. \noindent \textbf{\textsf{Details:}} $p$ first modifies its \texttt{NEXT} and \texttt{STATUS} to reflect it spin waits at the DQ tail (Lines~\ref{line:writer_acq_n_local1}-\ref{line:writer_acq_n_local2}). Then, it enqueues itself (Line~\ref{line:writer_far_n}). If there is a predecessor at this level, $p$ makes itself visible to it with a \texttt{Put} (Line~\ref{line:writer_put_n}) and then waits until it obtains the lock. While waiting, $p$ uses \texttt{Get}s and \texttt{Flush}es to check for any updates from the predecessor. If the predecessor reached $T_{L,i}$ and released the lock to the parent level, $p$ must itself acquire the lock from level~$i-1$ (Line~\ref{line:writer_acq_n_up}). Otherwise, it can directly enter the CS as the lock is simply passed to it (Line~\ref{line:writer_acq_n_go}). If there is no predecessor at level~$i$, $p$ also proceeds to acquire the lock for level~$i-1$ (Line~\ref{line:writer_acq_n_up}). \begin{lstlisting}[float=h,caption=Acquiring the RMA-RW lock by a writer; levels $N$ to $2$.,label=lst:writer_acquire_n] void writer-acquire<$i$>() { Put($\emptyset$, $p$, NEXT);|\label{line:writer_acq_n_local1}| Put(WAIT, $p$, STATUS); Flush($p$);|\label{line:writer_acq_n_local2}| /* Enter the DQ at level $i$ and in this machine element. */ int pred = FAO($p$, tail_rank[$i$,$e(p,i)$], TAIL, REPLACE);|\label{line:writer_far_n}| Flush(tail_rank[$i$,$e(p,i)$]);|\label{line:writer_flush_n}| if(pred != $\emptyset$) { Put($p$, pred, NEXT); Flush(pred); /* pred sees us. */ |\label{line:writer_put_n}| int status = WAIT; do { /* Wait until pred passes the lock. */ status = Get($p$, STATUS); Flush($p$); } while(status == WAIT); /* Check if pred released the lock to the parent level. This would happen if |$T_{L,i}$| is reached. */ if(status != ACQUIRE_PARENT) { /* |$T_{L,i}$| is not reached. Thus, the lock is passed to $p$ that directly proceeds to the CS. */ return; /* The global lock is acquired. */|\label{line:writer_acq_n_go}| } } /* Start to acquire the next level of the tree.*/ Put(ACQUIRE_START, $p$, STATUS); Flush($p$); writer-acquire<$i-1$>();|\label{line:writer_acq_n_up}|} \end{lstlisting} \subsubsection{Writer Lock Release: Level $N$ to $2$ (Listing~\ref{lst:writer_release_n})} \label{sec:writer_release_n} \textbf{\textsf{Intuition:}} $p$ passes the lock within $e(p,i)$ if there is a successor and $T_{L,i}$ is not yet reached. Otherwise, it releases the lock to the parent level~$i-1$, leaves the DQ, and informs any new successor that it must acquire the lock at level~$i-1$. \noindent \textbf{\textsf{Details:}} $p$ first finds out whether it has a successor. If there is one and $T_{L,i}$ is not yet reached, the lock is passed to it with a \texttt{Put} (Line~\ref{line:release_put_n}). If $T_{L,i}$ is reached, $p$ releases the lock for this level and informs its successor (if any) that it has to acquire the lock at level~$i-1$. If there is no known successor, it checks atomically if some process has already entered the DQ at level~$i$ (Line~\ref{line:release_cas_n}). If so, the releaser waits for the successor to make himself visible before it is notified to acquire the lock at level~$i-1$. \begin{lstlisting}[float=h,caption=Releasing an RMA-RW lock by a writer; levels $N$ to $2$., label=lst:writer_release_n] void writer-release<$i$>() { /* Check if there is a successor and get the local status. 
*/ int succ = Get($p$, NEXT); int status = Get($p$, STATUS); Flush($p$); if(succ != $\emptyset$ && status < $T_{L,i}$) { /* Pass the lock to succ at level i as well as the number of past lock passings within this machine element. */ Put(status + 1, succ, STATUS); Flush(succ); return;|\label{line:release_put_n}| } /* There is no known successor or the threshold at level $i$ is reached. Thus, release the lock to the parent level. */ writer-release<$i-1$>(); if(succ == $\emptyset$) { /* Check if some process has just enqueued itself. */ int curr_rank = CAS($\emptyset$, $p$, tail_rank[$i$,$e(p,i)$], TAIL); |\label{line:release_cas_n}| Flush(tail_rank[$i$,$e(p,i)$]); if($p$ == curr_rank) { return; } do { /* Otherwise, wait until succ makes itself visible. */ succ = Get($p$, NEXT); Flush($p$); } while(succ == $\emptyset$); } /* Notify succ to acquire the lock at level $i-1$. */ Put(ACQUIRE_PARENT, succ, STATUS); Flush(succ); } \end{lstlisting} \subsubsection{Writer Lock Acquire: Level 1 (Listing~\ref{lst:writer_acquire_1})} \label{sec:writer_acquire_1} \textbf{\textsf{Intuition:}} This scheme is similar to acquiring the lock at lower levels (\cref{sec:writer_acquire_n}). However, the predecessor may notify $p$ of the \emph{lock mode change} that enabled readers to enter the CS, forcing $p$ to acquire the lock from the readers. \noindent \textbf{\textsf{Details:}} $p$ first tries to obtain the lock from a predecessor (Lines~\ref{line:writer_acq_1_get_from_pred_start}-\ref{line:writer_acq_1_get_from_pred_end}). If there is one, $p$ waits until the lock is passed. Still, it can happen that the predecessor hands the lock over to the readers (Line~\ref{line:writer_acq_1_mode_change}). Here, $p$ changes the mode back to \texttt{WRITE} before entering the CS (Line~\ref{line:acquire_cacc1_1}); this function checks each counter to verify if there are active readers. If not, it switches the value of each counter to \texttt{WRITE} (see Listing~\ref{lst:manipulate_counters}). If there is no predecessor (Line~\ref{line:writer_acq_1_no_pred}), $p$ tries to acquire the lock from the readers by changing the mode to \texttt{WRITE} (Line~\ref{line:acquire_cacc2_1}). 
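To ease reading of Listing~\ref{lst:manipulate_counters}, recall that the lock mode is encoded in the arrival counter itself: values of \texttt{ARRIVE} that are at least \texttt{INT64\_MAX/2} mark the \texttt{WRITE} mode, while the lower part still carries the number of arrived readers. The following illustrative helper (not part of the lock protocols) makes this encoding explicit:
\begin{lstlisting}
/* Illustration only: test whether a physical counter is in the WRITE mode. */
bool counter_in_WRITE_mode(int arr_cnt) { return arr_cnt >= INT64_MAX/2; }
\end{lstlisting}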
\begin{lstlisting}[float=h,caption=Functions that manipulate counters., label=lst:manipulate_counters] /****** Change all physical counters to the WRITE mode ******/ void set_counters_to_WRITE() { /* To simplify, we use one counter every $T_{DC}$th process.*/ for(int $p$ = 0; $p$ < $P$; $p$ += $T_{DC}$) { /* Increase the arrival counter to block the readers.*/ Accumulate(INT64_MAX/2, $p$, ARRIVE, SUM); Flush($p$); } } /***************** Reset one physical counter *****************/ void reset_counter(int rank) { /* Get the current values of the counters.*/ int arr_cnt = Get(rank, ARRIVE), dep_cnt = Get(rank, DEPART); Flush(rank); /* Prepare the values to be subtracted from the counters.*/ int sub_arr_cnt = -dep_cnt, sub_dep_cnt = -dep_cnt; /* Make sure that the WRITE is reset if it was set.*/ if(arr_cnt >= INT64_MAX/2) { sub_arr_cnt -= INT64_MAX/2; } /* Subtract the values from the current counters.*/ Accumulate(sub_arr_cnt, rank, ARRIVE, SUM); Accumulate(sub_dep_cnt, rank, DEPART, SUM); Flush(rank); } /***************** Reset all physical counters ****************/ void reset_counters() { for(int $p$ = 0; $p$ < $P$; $p$ += $T_{DC}$) { reset_counter($p$); } } \end{lstlisting} \begin{lstlisting}[float=h,caption=Acquiring an RMA-RW lock by a writer; level 1., label=lst:writer_acquire_1] void writer-acquire<1>() { Put($\emptyset$, $p$, NEXT); Put(WAIT, $p$, STATUS);|\label{line:writer_acq_1_get_from_pred_start}| Flush($p$); /* Prepare to enter the DQ.*/ /* Enqueue oneself to the end of the DQ at level 1.*/ int pred = FAO($p$, tail_rank[1,$e(p,1)$], TAIL, REPLACE); Flush(tail_rank[1,$e(p,1)$]); if(pred != $\emptyset$) { /* If there is a predecessor...*/ Put($p$, pred, NEXT); Flush(pred); int curr_stat = WAIT; do { /* Wait until pred notifies us.*/ curr_stat = Get($p$, STATUS); Flush($p$); } while (curr_stat == WAIT); if(curr_stat == MODE_CHANGE) { /* The lock mode changed...*/|\label{line:writer_acq_1_mode_change}| /* The readers have the lock now; try to get it back.*/ set_counters_to_WRITE(); |\label{line:acquire_cacc1_1}| Put(ACQUIRE_START, $p$, STATUS); Flush($p$); } } |\label{line:writer_acq_1_get_from_pred_end}| else { /* If there is no predecessor...*/|\label{line:writer_acq_1_no_pred}| /* Change the counters to WRITE as we have the lock now.*/ set_counters_to_WRITE();|\label{line:acquire_cacc2_1}| Put(ACQUIRE_START, $p$, STATUS); Flush($p$); } } \end{lstlisting} \subsubsection{Writer Lock Release: Level 1 (Listing~\ref{lst:writer_release_1})} \label{sec:writer_release_1} \textbf{\textsf{Intuition:}} $p$ first checks if it has reached $T_{W}$ and if there is a successor waiting at level~1. If any case is true, it passes the lock to the readers and notifies any successor that it must acquire the lock from them. Otherwise, the lock is handed over to the successor. \noindent \textbf{\textsf{Details:}} First, if $T_{W}$ is reached, $p$ passes the lock to the readers by resetting the counters (Line~\ref{line:release_reset_1}). Then, if it has no successor, it similarly enables the readers to enter the CS (Line~\ref{line:release_reset2_1}). Later, $p$ appropriately modifies the tail of the DQ and verifies if there is a new successor (Line~\ref{line:release_cas_1}). If necessary, it passes the lock to the successor with a \texttt{Put} (line \ref{line:release_put_1}) and simultaneously (using \texttt{next\_stat}) notifies it about a possible lock mode change. 
\begin{lstlisting}[float=h,caption=Releasing an RMA-RW lock by a writer; level 1., label=lst:writer_release_1] void writer-release<1>(){ bool counters_reset = false; /* Get the count of consecutive lock acquires (level 1).*/ int next_stat = Get($p$, STATUS); Flush($p$); if(++next_stat == $T_{W}$) { /* Pass the lock to the readers.*/ reset_counters();|\label{line:release_reset_1}|/* See Listing |\ref{lst:manipulate_counters}|.*/ next_stat = MODE_CHANGE; counters_reset = true; } int succ = Get($p$, NEXT); Flush($p$); if(succ == $\emptyset$) { /* No known successor.*/ if(!counters_reset) { /* Pass the lock to the readers.*/ reset_counters(); next_stat = MODE_CHANGE;|\label{line:release_reset2_1}|/* Listing |\ref{lst:manipulate_counters}|.*/ } /* Check if some process has already entered the DQ.*/ int curr_rank = CAS($\emptyset$, $p$, tail_rank[1,$e(p,1)$], TAIL); Flush(tail_rank[1,$e(p,1)$]); if($p$ == curr_rank) { return; } /* No successor...*/ |\label{line:release_cas_1}| do { /* Wait until the successor makes itself visible.*/ succ = Get($p$, NEXT); Flush($p$); } while (succ == $\emptyset$); } /* Pass the lock to the successor.*/ Put(next_stat, succ, STATUS); Flush(succ);|\label{line:release_put_1}| } \end{lstlisting} \subsubsection{Reader Lock Acquire (Listing~\ref{lst:reader_acquire})} \label{sec:reader_acquire} \textbf{\textsf{Intuition:}} Here, $p$ first spin waits if there is an active writer or if $p$'s arrival made its associated physical counter exceed $T_{R}$. Then, it can enter the CS. If $p$ is the first to make its physical counter reach $T_{R}$ and no writer is waiting, it also resets that counter. \noindent \textbf{\textsf{Details:}} In the first part, if the local boolean \texttt{barrier} is set, $p$ spin waits until its physical counter is reset (Line~\ref{line:reader_if}), i.e., until it may again attempt to get the lock. Then, $p$ atomically increments its associated counter and checks whether the count is below $T_{R}$. If yes, the lock mode is \texttt{READ} and $p$ enters the CS. Otherwise, either the lock mode is \texttt{WRITE} or $T_{R}$ is reached. In case of the latter, $p$ checks if there are any waiting writers (Line~\ref{line:reader_acq_check_writers}). If there are none, $p$ resets its physical counter (Line~\ref{line:reader_reset}) and re-attempts to acquire the lock. If there is a writer, $p$ sets the local barrier and waits for the counter to be reset by the writer. \begin{lstlisting}[float=h,caption=Acquiring an RMA-RW lock by a reader., label=lst:reader_acquire] void reader-acquire() { bool done = false; bool barrier = false; while(!done) { int curr_stat = 0; if(barrier) { |\label{line:reader_if}| do { curr_stat = Get($c(p)$, ARRIVE); Flush($c(p)$); } while(curr_stat >= $T_{R}$); } /* Increment the arrival counter.*/ curr_stat = FAO(1, $c(p)$, ARRIVE, SUM); Flush($c(p)$); if(curr_stat >= $T_{R}$) { /* $T_{R}$ has been reached...*/ barrier = true; if(curr_stat == $T_{R}$) {/* We are the first to reach $T_{R}$.*/ /* Pass the lock to the writers if there are any.*/ int curr_tail = Get(tail_rank[1,$e(p,1)$], TAIL);|\label{line:reader_acq_check_writers}| Flush(tail_rank[1,$e(p,1)$]); if(curr_tail == $\emptyset$) { /* There are no waiting writers.*/ reset_counter($c(p)$); barrier = false;|\label{line:reader_reset}|/* Listing |\ref{lst:manipulate_counters}|.*/ } } /* Back off and try again.*/ Accumulate(-1, $c(p)$, ARRIVE, SUM); Flush($c(p)$); } else { done = true; /* Below $T_{R}$ in the READ mode: we hold the lock.*/ } } } \end{lstlisting} \subsubsection{Reader Lock Release (Listing~\ref{lst:reader_release})} \label{sec:reader_release} Releasing a reader lock only involves incrementing the departing reader counter.
\begin{lstlisting}[caption=Releasing an RMA-RW reader lock., label=lst:reader_release] void reader-release() { Accumulate(1, $c(p)$, DEPART, SUM); Flush($c(p)$); } \end{lstlisting} \subsection{Example} \label{sec:example} Consider the scenario from Figure~\ref{fig:structures}. Here, there are three machine levels, 12 readers, and 12 writers ($F_W = 0.5$). \macb{Writer Acquire } Assume a new writer $W_x$ running on a node related to DQ$_{3.1}$ attempts to acquire RMA-RW (Figure~\ref{fig:structures}, Part~5). First, it enters DQ$_{3.1}$ (Listing~\ref{lst:writer_acquire_n}). As this queue is not empty, $W_x$ spins locally (Lines~10-12) until its predecessor $W_9$ modifies $W_x$'s \texttt{STATUS}. Now, if $W_9$ has not yet reached $T_{L,3}$, $W_x$ gets the lock and immediately proceeds to the CS (Lines~15-19). Otherwise, it attempts to move to level~2 by updating its \texttt{STATUS} (Line~22) and calling \texttt{writer-acquire<$i-1$>()}. Thus, it enters DQ$_{2.1}$ and takes the same steps as in DQ$_{3.1}$: it spins locally until $W_4$ changes its \texttt{STATUS} and it either directly enters the CS or it proceeds up to level~1. Assuming the latter, $W_x$ enters DQ$_{1.1}$ and waits for $W_1$ to change its \texttt{STATUS} (Listing~\ref{lst:writer_acquire_1}, Lines~10-12). If \texttt{STATUS} is different from \texttt{MODE\_CHANGE} (Line~17), $W_x$ can enter the CS. Otherwise, the lock was handed over to the readers and $W_x$ calls \texttt{set\_counters\_to\_WRITE()} to change all physical counters to the \texttt{WRITE} mode (Line~15), which blocks new incoming readers. At some point, the readers reach the $T_{R}$ threshold and hand the lock over to $W_x$. \macb{Writer Release } Assume writer $W_x$ occupies the CS and starts to release RMA-RW (Figure~\ref{fig:structures}, Part~6). It begins with level~3 (Listing~\ref{lst:writer_release_n}). Here, it first checks if it has a successor in DQ$_{3.1}$ and if $T_{L,3}$ is not yet reached (Line~5). Its successor is $W_{10}$ and assume that the latter condition is true. Then, $W_x$ passes the lock to $W_{10}$ by updating its \texttt{STATUS} so that it contains the number of lock acquires within the given element. If $T_{L,3}$ is reached, $W_x$ releases the lock at level~2 (Line~12). Here, it repeats all the above steps (its successor is $W_6$) and then starts to release the lock at level~1 (Listing~\ref{lst:writer_release_1}). Here it hands the lock over to the readers if $T_W$ is reached (Lines~5-8). Finally, it notifies its successors at each level ($N$ to~2) to acquire the lock at the parent level (Listing~\ref{lst:writer_release_n}, Line~23). \macb{Reader Acquire } A reader $R_x$ that attempts to acquire RMA-RW first increments $c(R_x)$ (Listing~\ref{lst:reader_acquire}, Line~12) and checks if $T_R$ is reached (in the first attempt Lines~6-8 are skipped). If yes, it sets \texttt{barrier} (Line~14), backs off (Line~24), and reattempts to acquire the lock. In addition, if $R_x$ is the first process to reach $T_R$, it also checks if there are any waiting writers (Lines~15-21). If not, it resets $c(R_x)$ and sets \texttt{barrier} to \texttt{false} so that it can enter the CS even if $T_R$ was reached. Then, it reexecutes the main loop (Line~3); this time it may enter the loop in Lines~6-8 as the lock was handed over to a writer (if $T_R$ was reached). In that case, $R_x$ waits until its $c(R_x)$ is reset (Listing~\ref{lst:reader_acquire}, Lines~6-8). 
\macb{Reader Release } This is a straightforward scenario in which $R_x$ only increments \texttt{DEPART} at $c(R_x)$. \subsection{RMA-RW vs. RMA-MCS} \label{sec:topo_mcs} We also outline the design of RMA-MCS. RMA-MCS consists of DQs and DT but not DC. $T_R$ and $T_W$ are excluded as there are no readers. Similarly, $T_{L,1}$ is not applicable because there is no need to hand the lock to readers. The acquire/release protocols are similar to the ones in Listings~\ref{lst:writer_acquire_n} and~\ref{lst:writer_release_n} for any $i \in \{1, ..., N\}$. \begin{figure*} \centering \subfloat[Latency (LB).]{ \includegraphics[width=0.185\textwidth]{queue_lat_analysis_e_e-eps-converted-to.pdf} \label{fig:mcs_fompi_latency_comp} }\hfill \subfloat[Throughput (ECSB).]{ \includegraphics[width=0.185\textwidth]{queue_thr_empty_analysis_e_e-eps-converted-to.pdf} \label{fig:queue_ecsb_perf} }\hfill \subfloat[Throughput (SOB).]{ \includegraphics[width=0.185\textwidth]{queue_thr_single_read_analysis_e_e-eps-converted-to.pdf} \label{fig:queue_sob_perf} }\hfill \subfloat[Throughput (WCSB).]{ \includegraphics[width=0.185\textwidth]{queue_thr_sleep_in_analysis_e_e-eps-converted-to.pdf} \label{fig:queue_wcsb_perf} }\hfill \subfloat[Throughput (WARB).]{ \includegraphics[width=0.185\textwidth]{queue_thr_sleep_after_analysis_e_e-eps-converted-to.pdf} \label{fig:queue_warb_perf} } \caption{(\cref{sec:eval_mcs_lock}) Performance analysis of RMA-MCS and comparison to the state-of-the-art.} \label{fig:rma_mcs_perf} \end{figure*} \section{CORRECTNESS ANALYSIS} We now discuss how RMA-RW ensures three fundamental correctness properties: mutual exclusion (ME), deadlock freedom (DF), and starvation freedom (SF)~\cite{Herlihy:2008:AMP:1734069}. At the end of this section, we show how we use model checking to verify the design. \subsection{Mutual Exclusion} ME is violated if two writers or a reader and a writer enter the CS concurrently. We now discuss both cases. \textbf{\textsf{Writer \& Writer: }} We distinguish between writers that are in the same DQ (case~A) or in different ones (case~B). In case~A, they operate on the same \texttt{TAIL}. Thus, they could only violate ME if both writers do not see any predecessor. This is prevented by using \texttt{FAO} for atomically modifying \texttt{TAIL}. In case~B, two writers competing in different DQs have a common DQ in DT where they or their predecessors compete for the lock. As above, the MCS lock must be acquired at each DT level. If a predecessor has to compete for the lock, a writer waits until it gets notified by its predecessor and thus does not interfere in the lock acquiring process. \textbf{\textsf{Reader \& Writer: }} A reader and a writer could only be active at the same time if the lock mode is \texttt{READ} and is about to change to \texttt{WRITE}. This is because the reader on its own cannot change the mode and as a consequence cannot acquire a lock while a writer is active. However, a writer can alter the mode to \texttt{WRITE} while a reader is active. This is prevented by a writer that checks each counter again for active readers after changing all of them. \subsection{Deadlock Freedom} Here, we also differentiate two base cases: two writers deadlocking or a reader and a writer deadlocking. \textbf{\textsf{Writer \& Writer}} The only way writers can deadlock is if there is a cycle in a queue. For two writers this means that each becomes the predecessor of the other. Therefore, both wait on the other to get notified.
This cannot happen as the processes use an atomic \texttt{FAO} to obtain their predecessor. As explained, this operation is atomic and thus all uses of \texttt{FAO} can be totally ordered in time, which contradicts the existence of a cycle in the waiting queue. \textbf{\textsf{Reader \& Writer}} A reader may deadlock after $T_{R}$ is reached (case~A) or after the mode changes to \texttt{WRITE} (case~B). In case~A, either there is no writer active and the reader resets the DC, or a writer is waiting and the reader backs off. Thus, the writer changes the mode to \texttt{WRITE} after all readers back off, which happens in finite time. As writers do not deadlock and the last writer changes the mode back to \texttt{READ}, no reader will deadlock in case~B either. \subsection{Starvation Freedom} Finally, we show that no writer or reader can starve. \textbf{\textsf{Writers}} A writer may starve while other writers or readers are active. We prevent it with different thresholds. First, there is $T_{L,i}$ at each DT level~$i$. After reaching $T_{L,i}$, writers in one of the associated DQs at level~$i$ release the lock to the next DQ at the same level. Thus, we only need to show that one DQ is starvation-free, which is already guaranteed by the underlying MCS queue lock design. In addition, there is the $T_{R}$ threshold that regulates the number of lock acquires by readers for one counter before the readers associated with the counter back off. We already showed that the readers make progress. Thus, at some point, all counters have reached $T_R$ and a writer changes the mode to \texttt{WRITE}. \textbf{\textsf{Readers}} There are two ways in which readers could starve. First, other readers are active while processes associated with a certain counter back off to let writers acquire the lock. However, there is the $T_R$ threshold for each counter after which the readers associated with this counter back off. Thus, eventually, all readers wait on the writers to take over. This leads us to the second case where the writers have the lock and do not pass it to the waiting readers. This is not possible since there is the $T_{L,i}$ threshold at each level of the writer hierarchy and after at most $T_{W} = \prod_{i=1}^{N} T_{L,i}$ lock passings between writers the lock goes to the readers; we have also already illustrated that the writers will make progress until this threshold is reached. \subsection{Model Checking} To confirm that RMA-RW provides the desired correctness properties, we also conduct model checking with SPIN~\cite{Holzmann:1997:MCS} (v6.4.5), a software tool for the formal verification of multi-threaded codes. The input to SPIN is constructed in PROMELA, a verification modeling language that allows for the dynamic creation of concurrent processes to model, for example, distributed systems. We evaluate RMA-RW for $N \in \{1, ..., 4\}$ and up to 256 processes. The machine elements on each level of the simulated system have the same number of children. Thus, for $N = 3$ and four subelements per machine element, the system would consist of $4^{3}$ processes. Each process is randomly defined as either a reader or a writer at the beginning; after that, it tries to acquire the lock 20 times. We choose this value as it generates a feasible number of cases that SPIN has to check even for a high count of processes. During the execution of a test, we use a designated process that verifies that the lock is held by either a single writer or only by readers. All the tests confirm mutual exclusion and deadlock freedom.
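The safety property verified by this designated process can be summarized with the following illustrative assertion (the two counts are bookkeeping variables of the verifier, not part of the lock itself):
\begin{lstlisting}
/* Invariant checked during model checking (illustration only): */
assert(writers_in_cs == 0 || (writers_in_cs == 1 && readers_in_cs == 0));
\end{lstlisting}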
\section{EVALUATION} We now illustrate performance advantages of RMA-MCS and RMA-RW over state-of-the-art distributed locks from the foMPI implementation of MPI-3 RMA~\cite{fompi-paper}. \textbf{\textsf{Comparison Targets }} We consider D-MCS and both foMPI locking schemes: a simple spin-lock (\texttt{foMPI-Spin}) that enables mutual exclusion, and an RW lock (\texttt{foMPI-RW}) that provides both shared and exclusive accesses to the CS. \textbf{\textsf{Selection of Benchmarks }} We conduct six series of experiments. The latency benchmark (LB) measures the latency of both acquiring and releasing a lock, an important performance metric in workloads such as real-time queries. Four other analyses obtain throughput under varying conditions and parameters. The empty-critical-section benchmark (ECSB) derives the throughput of acquiring an empty lock with no workload in the CS. The single-operation benchmark (SOB) measures the throughput of acquiring a lock with only a single operation (one memory access) in the CS; it represents irregular parallel workloads such as graph processing with vertices protected by fine-grained locks. Next, the workload-critical-section benchmark (WCSB) covers variable workloads in the CS: each process increments a shared counter and then spins for a random time (1-4$\mu$s) to simulate local computation. The wait-after-release benchmark (WARB) varies lock contention: after release, processes wait for a random time (1-4$\mu$s) before the next acquire. The throughput experiments represent data- and communication-intensive workloads. Finally, we integrate and evaluate the proposed locks with a distributed hashtable (DHT) to cover real codes such as key-value stores. \begin{figure*} \centering \subfloat[(\cref{sec:eval_t_dc}) $T_{DC}$ analysis, SOB, $F_W = 2\%$.]{ \includegraphics[width=0.3\textwidth]{rw_marker_single_read_analysis-eps-converted-to.pdf} \label{fig:rma_rw_T_DC} }\hfill \subfloat[(\cref{sec:eval_t_li}) $\prod_{i=1}^{N} T_{L,i}$ analysis, SOB, $F_W = 25\%$.]{ \includegraphics[width=0.3\textwidth]{rw_writer_t_single_read_analysis_one_lvl-eps-converted-to.pdf} \label{fig:rma_rw_T_Li_prod} }\hfill \subfloat[(\cref{sec:eval_t_li}) $T_{L,i}$ analysis, SOB, $F_W = 25\%$.]{ \includegraphics[width=0.3\textwidth]{rw_writer_t_single_read_analysis_multiple_lvls-eps-converted-to.pdf} \label{fig:rma_rw_T_Li} }\\ \subfloat[(\cref{sec:eval_t_li}) $T_{L,i}$ analysis, LB, $F_W = 25\%$.]{ \includegraphics[width=0.3\textwidth]{rw_writer_t_latency_analysis_multiple_lvls-eps-converted-to.pdf} \label{fig:rma_rw_T_Li_LB} }\hfill \subfloat[(\cref{sec:eval_t_rw}) $T_{R}$ analysis, ECSB, $F_W=0.2\%$.]{ \includegraphics[width=0.3\textwidth]{rw_reader_t_empty_analysis_one_rate-eps-converted-to.pdf} \label{fig:rma_rw_T_R_one_rate} }\hfill \subfloat[(\cref{sec:eval_t_rw}) $T_{R}$ analysis, ECSB, $F_W \in \{2\%, 5\%\}$.]{ \includegraphics[width=0.3\textwidth]{rw_reader_t_empty_analysis_multiple_rates-eps-converted-to.pdf} \label{fig:rma_rw_T_R_m_rates_ecsb} } \caption{Analysis of the performance impact of various thresholds.} \label{fig:rma_rw_thresholds} \vspace{-0.5em} \end{figure*} \textbf{\textsf{Varied Parameters }} To evaluate various scenarios, we vary $T_{DC}$, $T_{L,i}$, and $T_{R}$. Unless stated otherwise, we set the fraction of writers $F_W=0.2\%$ as it reflects Facebook workloads~\cite{Venkataramani:2012:TFS:2213836.2213957}; however, we also evaluate other values.
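For concreteness, the per-process structure of the throughput benchmarks described above can be sketched as follows (an illustrative harness in the pseudocode of the previous listings; \texttt{is\_writer}, \texttt{num\_iters}, \texttt{data\_rank}, and the \texttt{DATA} offset are placeholders, not part of the lock design):
\begin{lstlisting}
/* Sketch of one process in SOB: acquire, one memory access in the CS,
 * release. WCSB/WARB additionally spin for a random 1-4$\mu$s inside or
 * after the CS, respectively; ECSB omits the access altogether. */
for(int it = 0; it < num_iters; it++) {
  if(is_writer) {
    writer-acquire<$N$>();
    Accumulate(1, data_rank, DATA, SUM); Flush(data_rank); /* one write */
    writer-release<$N$>();
  } else {
    reader-acquire();
    int v = Get(data_rank, DATA); Flush(data_rank); /* one read */
    reader-release();
  }
}
\end{lstlisting}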
\textbf{\textsf{Experimentation Methodology }} To calculate the latency, we derive the arithmetic mean of 100,000 operations per process (for each latency benchmark). Throughput is the aggregate count of lock acquires or releases divided by the total time to run a given benchmark. 10\% of the first measurements are discarded (warmup). All time measurements are taken using a high precision \texttt{rdtsc} timer~\cite{hoefler-netgauge-hpcc07}. \textbf{\textsf{Experimental Setup }} We conduct experiments on CSCS Piz Daint (Cray XC30). Each node has an 8-core HT-enabled Intel Xeon E5-2670 CPU with 32 GiB DDR3-1600 RAM. The interconnection is based on Cray's Aries and it implements the Dragonfly topology~\cite{faanes2012cray,kim2008technology}. The batch system is slurm 14.03.7. We use C++ and the GNU 5.2.40 g++ compiler with -O3 optimizations. The utilized Cray DMAPP is 7.0.1-1.0501.8315.8.4.ari. Unless stated otherwise, we use all the compute resources and run one MPI process per one HT resource (16 processes per one compute node). \macb{Machine Model } We consider two levels of the hierarchy: the whole machine and compute nodes, thus $N=2$. \textbf{\textsf{Implementation Details }} We use the \emph{libtopodisc}~\cite{wgropp} library for discovering the structure of the underlying compute nodes and for obtaining MPI communicators that enable communication within each node. We group all the locking structures in MPI allocated windows to reduce the memory footprint~\cite{fompi-paper}. \subsection{Performance Analysis of RMA-MCS} \label{sec:eval_mcs_lock} \goal{Explain results of foMPI} We present the results in Figure~\ref{fig:rma_mcs_perf}. The latency of RMA-MCS is lower than any other target. For example, for $P=1,024$, it is $\approx$10x and $\approx$4x lower than \texttt{foMPI-Spin} and \texttt{D-MCS}, respectively. This is because \texttt{foMPI-Spin} entails lock contention that limits performance. In addition, both \texttt{foMPI-Spin} and \texttt{D-MCS} are topology-oblivious. Then, the throughput analysis confirms the advantages of RMA-MCS across all the considered benchmarks. The interesting spike in ECSB and SOB is because moving from $P=8$ to $P=16$ does not entail inter-node communication, initially increasing RMA-MCS's and D-MCS's throughput. We conclude that RMA-MCS consistently outperforms the original foMPI design and D-MCS. \subsection{Performance Analysis of RMA-RW} \label{sec:eval_rw_lock} We now proceed to evaluate RMA-RW. First, we analyze the impact of various design parameters (Figure~\ref{fig:rma_rw_thresholds}) and then compare it to the state-of-the-art (Figure~\ref{fig:rma_rw_perf_state-of-the-art}). Due to space constraints, we only present a subset of the results, all remaining plots follow similar performance patterns. \subsubsection{Influence of $T_{DC}$} \label{sec:eval_t_dc} We first discuss how different $T_{DC}$ values impact performance. We consider $T_{DC} \in \{1,2,4\}$ (one physical counter on each compute node and every 2nd and 4th compute node, respectively). We also vary the number of counters on one node ($1,2,4,8$). The results are presented in Figure~\ref{fig:rma_rw_T_DC}. First, lower $T_{DC}$ entails more work for writers that must access more counters while changing the lock mode. This limits performance, especially for high $P$, because of the higher total number of counters. 
Larger $T_{DC}$ increases throughput (less work for writers), but at some point (e.g., $P=512$ a counter on every 2nd node) the overhead due to readers (contention and higher latency) begins to dominate. We conclude that selecting the proper $T_{DC}$ is important for high performance of RMA-RW, but the best value depends on many factors and should be tuned for a specific machine. For example, higher $T_{DC}$ might entail unpredictable performance penalties on Cray XE because the job scheduler does not enforce contiguous job allocations~\cite{bhatele2013there}. \subsubsection{Influence of $T_{L,i}$} \label{sec:eval_t_li} \begin{figure*} \centering \subfloat[Latency (LB).]{ \includegraphics[width=0.31\textwidth]{rw_rate_latency_analysis_rw_rate_e-eps-converted-to.pdf} \label{fig:s} }\hfill \subfloat[Throughput (ECSB).]{ \includegraphics[width=0.31\textwidth]{rw_rate_empty_analysis_rw_rate_e-eps-converted-to.pdf} \label{fig:q} }\hfill \subfloat[Throughput (SOB).]{ \includegraphics[width=0.31\textwidth]{rw_rate_single_read_analysis_rw_rate_e-eps-converted-to.pdf} \label{fig:q} } \caption{(\cref{sec:comp_rw_lock}) Performance analysis of RMA-RW and comparison to the state-of-the-art.} \label{fig:rma_rw_perf_state-of-the-art} \end{figure*} \begin{figure*}[t] \centering \vspace{-1.3em} \subfloat[$F_W = 20\%$.]{ \includegraphics[width=0.23\textwidth]{ht_80_rw-eps-converted-to.pdf} \label{fig:ht_rw_80} }\hfill \subfloat[$F_W = 5\%$.]{ \includegraphics[width=0.23\textwidth]{ht_95_rw-eps-converted-to.pdf} \label{fig:ht_rw_95} }\hfill \subfloat[$F_W = 2\%$.]{ \includegraphics[width=0.23\textwidth]{ht_98_rw-eps-converted-to.pdf} \label{fig:ht_rw_98} }\hfill \subfloat[$F_W = 0\%$.]{ \includegraphics[width=0.23\textwidth]{ht_100_rw-eps-converted-to.pdf} \label{fig:ht_rw_100} }\hfill \caption{(\cref{sec:ht_eval}) Performance analysis of a distributed hashtable.} \label{fig:ht_eval} \vspace{-0.5em} \end{figure*} Next, we analyze the performance impact of $T_{L,i}$ in the considered system $i \in \{1,2\}$. We fix $F_W=25\%$ to ensure that there are multiple writers per machine element on each level. We start with various $\prod_{i=1}^{N}T_{L,i}$: the maximal number of writer acquires before the lock is passed to the readers; see Figure~\ref{fig:rma_rw_T_Li_prod}. As expected, smaller product increases throughput because more readers can enter the CS, but reduces fairness as writers wait longer. In the second step, we analyze how varying each $T_{L,i}$ impacts performance. We first fix $\prod_{i=1}^{N}T_{L,i} = 1000$. As $N=2$, we use $T_{L,2} \in (10,25,50)$ and $T_{L,1} \in (100,40,20)$. The outcome is shown in Figure~\ref{fig:rma_rw_T_Li}. When more writers consecutively acquire the lock within one node (higher $T_{L,2}$), the throughput increases. Still, the differences between the considered options are small (up to 25\% of the relative difference), especially for lower $P$. This is because of smaller amounts of inter-node communication. Interestingly, options that increase throughput (e.g., 50-20) also increase latency, see Figure~\ref{fig:rma_rw_T_Li_LB}. We conjecture this is due to improved fairness caused by smaller $T_{L,2}$ (more processes from different nodes can acquire the lock). However, the average latency increases because other writers have to wait for a longer time. 
\begin{table*} \centering \sf \small \begin{tabular}{@{}lllllll@{}} \toprule & \textbf{UPC (standard)~\cite{upc}} & \textbf{Berkeley UPC~\cite{bupc}} & \textbf{SHMEM~\cite{shmem}} & \textbf{Fortran 2008~\cite{fortran2008}} & \textbf{Linux RDMA/IB~\cite{ofed-atomics,IBAspec}} & \textbf{iWARP~\cite{iwarp,sharp2014remote}} \\ \midrule \texttt{Put} & \texttt{UPC\_SET} & \texttt{bupc\_atomicX\_set\_RS} & \texttt{shmem\_swap} & \texttt{atomic\_define} & \texttt{MskCmpSwap} & masked \texttt{CmpSwap} \\ \texttt{Get} & \texttt{UPC\_GET} & \texttt{bupc\_atomicX\_read\_RS} & \texttt{shmem\_mswap} & \texttt{atomic\_ref} & \texttt{MskCmpSwap} & masked \texttt{CmpSwap} \\ \texttt{Accumulate} & \texttt{UPC\_INC} & \texttt{bupc\_atomicX\_fetchadd\_RS} & \texttt{shmem\_fadd} & \texttt{atomic\_add} & \texttt{FetchAdd} & \texttt{FetchAdd} \\ \texttt{FAO (SUM)} & \texttt{UPC\_INC}, \texttt{UPC\_DEC} & \texttt{bupc\_atomicX\_fetchadd\_RS} & \texttt{shmem\_fadd} & \texttt{atomic\_add} & \texttt{FetchAdd} & \texttt{FetchAdd} \\ \texttt{FAO (REPLACE)} & \texttt{UPC\_SET} & \texttt{bupc\_atomicX\_swap\_RS} & \texttt{shmem\_swap} & \texttt{atomic\_define}* & \texttt{MskCmpSwap} & masked \texttt{CmpSwap} \\ \texttt{CAS} & \texttt{UPC\_CSWAP} & \texttt{bupc\_atomicX\_cswap\_RS} & \texttt{shmem\_cswap} & \texttt{atomic\_cas} & \texttt{CmpSwap} & \texttt{CmpSwap} \\ \bottomrule \end{tabular} \caption{Illustration of the feasibility of using libraries/languages other than MPI RMA for RMA-MCS/RMA-RW. \texttt{*} indicates the lack of an atomic swap in Fortran 2008, suggesting that some of RMA-RW protocols that depend on it would have to be adjusted to a different set of available atomics.} \label{tab:other_libs} \vspace{-0.5em} \end{table*} \subsubsection{Influence of $T_{R}$} \label{sec:eval_t_rw} Next, we analyze the impact of $T_{R}$; see Figure~\ref{fig:rma_rw_T_R_one_rate}. We first use $F_W=0.2\%$. The throughput for $T_{R} \in \{$1,000 $;$ 2,000$\}$ drops significantly for $P>512$ due to the higher overhead of writers. Contrarily, increasing $T_{R}$ improves the throughput significantly. This is because the latency of readers is lower than that of writers and a higher $T_{R}$ entails a preference of readers. However, the larger $T_{R}$ the longer the waiting time for writers is. Finally, we analyze the relationship between $T_{R}$ and $F_W$ in more detail; see Figure~\ref{fig:rma_rw_T_R_m_rates_ecsb}. Here, we vary $F_W \in \{2\%, 5\%\}$. The results indicate no consistent significant advantage ($<$1\% of relative difference for most $P$) of one threshold over others within a fixed $F_W$. \subsubsection{Comparison to the State-of-the-Art} \label{sec:comp_rw_lock} We now present the advantages of RMA-RW over the state-of-the-art foMPI RMA library~\cite{fompi-paper}; see Figure~\ref{fig:rma_rw_perf_state-of-the-art}. Here, we consider different $F_W$ rates. As expected, any RW distributed lock provides the highest throughput for $F_W = 0.2\%$. This is because readers have a lower latency for acquiring a lock than writers and they can enter the CS in parallel. The maximum difference between the rates $F_W = 0.2\%$ and $F_W = 2\%$ is 1.8x and between $F_W = 0.2\%$ and $F_W = 5\%$ is 4.4x. We then tested other values of $F_W$ up to 100\% to find out that for $F_W > 30\%$ the throughput remains approximately the same. At such rates, the throughput is dominated by the overhead of writers that enter the CS consecutively. In each case, RMA-RW always outperforms foMPI by $>$6x for $P \ge 64$. 
One reason for this advantage is the topology-aware design. Another is the presence of $T_{L,i}$ and $T_{R}$, which prevent one type of process from dominating the other and thus avoid the resulting performance penalties.

\subsection{Case Study: A Distributed Hashtable}
\label{sec:ht_eval}

We now illustrate how RMA-RW accelerates a distributed hashtable (DHT) that represents irregular codes. Our DHT stores 64-bit integers and consists of parts called local volumes. Each local volume consists of a table of elements and an overflow heap for elements after hash collisions. The table and the heap are constructed with fixed-size arrays. Every local volume is managed by a different process. Inserts are based on atomic CASes. If a collision happens, the losing thread places the element in the overflow heap by atomically incrementing the next free pointer. In addition, a pointer to the last element is also updated with a second CAS. Flushes are used to ensure memory consistency. We present the performance analysis in Figure~\ref{fig:ht_eval}. In the benchmark, $P-1$ processes access a local volume of a selected process with a specified number of inserts and reads targeted at random hashtable elements. We compare the total execution time of foMPI-A (a variant that only synchronizes accesses with CAS/FAO), foMPI-RW, and RMA-RW. For $F_W \in \{2\%,5\%,20\%\}$, RMA-RW outperforms both remaining variants. For $F_W = 0\%$, foMPI-RW and RMA-RW offer comparable performance.

\section{DISCUSSION}
\label{sec:discussion}

\macb{Using Different RMA Libraries/Languages} In our implementation, we use MPI RMA. Still, the proposed schemes are generic and can be implemented using several other existing RMA/PGAS libraries/languages that support the required operations described in Listing~\ref{lst:rma_calls}. We illustrate this in Table~\ref{tab:other_libs} (we omit the distinction between blocking and non-blocking operations as any type can be used in the proposed locks). The analysis indicates that RMA-MCS and RMA-RW can be used not only in traditional HPC domains (by utilizing UPC, SHMEM, or RDMA/IB), but also in TCP/IP-based settings (by using iWARP).

\macb{Selecting RMA-RW Parameters} To set the parameters, we first find an appropriate value for $T_{DC}$. This is because our performance analysis indicates that $T_{DC}$ has, on average, the highest impact on the performance of both readers and writers. Here, our evaluation indicates that placing one counter per compute node results in a reasonable balance between reader throughput and writer latency. In the second step, we further influence the reader/writer performance tradeoff by adjusting $T_R$ and $T_{L,i}$. To reduce the parameter space, we fix $T_W$ as indicated in Table~2. Selecting $T_{L,i}$ depends on the hardware hierarchy and would ideally incorporate several performance tests before fixing the final values. One rule of thumb is to reserve larger values for $T_{L,i}$ associated with components with higher inter-component communication costs, such as racks; this may reduce fairness, but increases throughput.

\section{RELATED WORK}

\textbf{\textsf{Queue-Based Locks}} The well-known traditional examples of this family are CLH~\cite{Craig93buildingfifo, Magnusson:1994:QLC} and MCS~\cite{Mellor-Crummey:1991:ASS}. Yet, they are oblivious to the memory hierarchy and cannot use this knowledge to gain performance.
More recently, Radovic and Hagersten~\cite{Radovic:2003:HBL} proposed a hierarchical backoff lock that exploits memory locality: a thread reduces its backoff delay if another thread from the same cluster owns the lock. This increases the chance of keeping the lock within the cluster, but introduces the risk of starvation. Luchangco et al.~\cite{luchangco2006hclh} improved this scheme by introducing a NUMA-aware CLH queue that ensures no starvation. Yet, it considers only two levels of the memory hierarchy. Chabbi et al.~\cite{Chabbi:2015:HPL:2688500.2688503} generalized it to any number of memory hierarchy levels. Similarly to our scheme, they introduce an MCS lock for each level. Yet, they do not target DM machines. None of these protocols can utilize the parallelism of workloads in which the majority of processes only read the data.

\noindent \textbf{\textsf{RW Locks}} There exist various traditional RW proposals~\cite{Hsieh:1992:PPS, Krieger93afair}. Courtois et al.~\cite{Courtois:1971:CCL} introduced different preference schemes that favor either readers (a reader can enter the CS even if there is a writer waiting) or writers (a writer can enter the CS before waiting readers). Yet, this protocol neither prevents starvation nor scales well. Mellor-Crummey and Scott~\cite{Mellor-Crummey:1991:SRS} extended their MCS lock to distinguish between readers and writers. This algorithm, however, does not scale well under heavy read contention. Next, Krieger et al.~\cite{Krieger93afair} use a doubly-linked list for more flexibility in how processes traverse the queue. Yet, there is still a single point of contention. Hsieh and Weihl \cite{Hsieh:1992:PPS} overcome this by trading writer throughput for reader throughput. In their design, each thread has a private mutex; the readers acquire the lock by acquiring their private mutex, but the writers need to obtain all mutex objects. This introduces a massive overhead for the writers at large thread counts. Other approaches incorporate elaborate data structures like the Scalable Non-Zero Indicator (SNZI) tree~\cite{Lev:2009:SRL} that tracks readers in the underlying NUMA hierarchy for more locality. Yet, writers remain NUMA-oblivious. Calciu et al.~\cite{Calciu:2013:NRL} extend this approach with an RW lock in which both readers and writers are NUMA-aware. This design improves memory locality, but it only considers two levels of a NUMA hierarchy. None of these schemes addresses DM environments.

\noindent \textbf{\textsf{Distributed Locks}} To the best of our knowledge, little research has been conducted on locks for DM systems. Simple spin-lock protocols for implementing MPI-3 RMA synchronization were proposed by Gerstenberger et al.~\cite{fompi-paper}. Some other RMA languages and libraries (e.g., UPC) also offer locks, but they are not RW, their performance is similar to that of foMPI, and they are hardware-oblivious. We conclude that our work offers the first lock for DM systems that exploits the underlying inter-node structure and utilizes the RW parallelism present in various data- and communication-intensive workloads.

\section{CONCLUSION}

Large amounts of data in domains such as graph computations require distributed-memory machines for efficient processing. Such machines are characterized by weak memory models and expensive inter-node communication. These features impact the performance of topology-oblivious locks or completely prevent a straightforward adoption of existing locking schemes for shared-memory systems.
In this work, we propose distributed topology-aware Reader-Writer (RMA-RW) and MCS (RMA-MCS) locks that outperform the state of the art. RMA-RW offers a modular design with three parameters that expose performance tradeoffs in selected parts of the lock. These are: higher lock fairness or better locality, larger throughput of readers or writers, and lower latency of readers or writers. This facilitates performance tuning for a specific workload or environment. RMA-RW could also be extended with adaptive schemes for a runtime selection and tuning of the values of the parameters. This might be used to accelerate dynamic workloads. Microbenchmark results indicate that the proposed locks outperform the state-of-the-art in both latency and throughput. Finally, RMA-RW accelerates a distributed hashtable that represents irregular workloads such as key-value stores.

{\section*{Acknowledgements} This work was supported by Microsoft Research through its Swiss Joint Research Centre. We thank our shepherd Patrick G.~Bridges, the anonymous reviewers, and Jeff Hammond for their insightful comments. We thank the CSCS team for granting access to the Piz Dora and Daint machines, and for their excellent technical support.}

\bibliographystyle{abbrv}
\section{Introduction}\label{intro} We consider the problem of Bayesian variable selection in a generalized linear mixed model with $Y$ a $n$-vector of responses, given a set of $p$ potential fixed regressors \begin{displaymath} g(\mathbb E(Y_i \mid U, \beta))=X_i^T\beta + Z_i^TU, \end{displaymath} where $g$ stands for the link function associated to the model, and $X_i$ and $Z_i$ for the fixed and random effect regressors associated to the $i${th} observation. The parameter $\beta\in\mathbb R^p$ corresponds to the fixed-effect coefficients and the parameter $U$ to the random-effect coefficients. $X$ and $Z$ are known design matrices associated with the fixed and random effects. We consider $K$ random effects, $U=(U_1^T, \cdots, U_K^T)^T$ where each $U_l$ is a vector of size $q_l$, and $\sum_{l=1}^K q_l = q$. In a stochastic search variable selection (SSVS) framework, it is convenient to denote by $\gamma$ the vector of latent variables indicating if a variable is selected or not; that is, $\gamma_j=1$ if $\beta_j \neq 0$ and $\gamma_j=0$ if $\beta_j =0$. We then denote by $\beta_{\gamma}$ the vector of all non-zero elements of $\beta$ and by $\mathbf{X}_{\gamma}$ the design matrix with columns corresponding to the elements of $\gamma$ that are equal to 1.\\ To complete the model, a conventional prior distribution for $\beta_{\gamma}|\gamma$ is a $d_{\gamma}$-dimensional Gaussian distribution, with $d_{\gamma}=\sum_{j=1}^p \gamma_j$, \begin{equation}\label{priorbeta} \beta_{\gamma} |\gamma \sim {\cal N}_{d_{\gamma}}(0, \Sigma_{\gamma}). \end{equation} Concerning the prior covariance matrix $ \Sigma_{\gamma}$, an attractive and standard choice is \begin{equation}\label{sigmagamma} \Sigma_{\gamma} = \tau (\mathbf{X}_{\gamma}' \mathbf{X}_{\gamma})^{-1}. \end{equation} Equations (\ref{priorbeta}) and (\ref{sigmagamma}) correspond to the {\it g}-prior distribution, proposed by \cite{Zellner86} in the case of standard linear models. This prior replicates the covariance structure of the design and enables an automatic scaling based on the data. Up to the scalar $\tau$, the prior covariance matrix is related to the Fisher Information Matrix in the linear model \citep[see for instance][]{ChenIbrahim}. Moreover, it leads to simple expressions of the marginal likelihood, and as pointed out in \cite{GeorgeFoster}, the marginal likelihood becomes a function of both R-square and the number of covariates like in AIC or BIC criteria. The parameter $\tau>0$ is referred to as the variable selection coefficient in \cite{BottoloRichardson}. In the homoscedastic linear model with variance $\sigma^2$, this parameter can be expressed as $\tau = g \sigma^2$. Therefore this prior has been used by many authors in the case of linear models, but also for generalized linear models (see \cite{SabanesHeld}). In case of a binary response variable, this prior is frequently encountered in probit models which are quite practical in a Bayesian setting \citep[see][]{LeeSha,ShaVannucci,ZhouWang1,YangSong}. The choice of the variable selection coefficient $\tau$ can have a great influence on the variable selection process \citep[see][]{GeorgeFoster} and has been considered by many authors. Some of them considered a fixed value for $\tau$. For instance \cite{SmithKohn} suggested to choose $\tau$ between 10 and 100. Another approach is the approach of \cite{GeorgeFoster}, who developed empirical Bayes methods based on the estimation of $\tau$ from its marginal likelihood. 
Other authors proposed to put a hyper-prior distribution on $\tau$, such as \cite{ZellnerSiow}, who used an inverse-gamma distribution $\mathcal{IG}(1/2,n/2)$. But under the Zellner-Siow prior, marginal likelihoods are not available in closed form, and approximations are necessary \citep[see][]{BottoloRichardson}. Note also that the Zellner-Siow prior can be seen as a mixture of \textit{g}-priors. Following this remark, \cite{LiangPaulo} proposed a new family of priors on $\tau$, the \textit{hyper-g prior family}, which leads to new mixtures of $g$-priors: the marginal likelihoods are then available in closed form, but they are not practical to use because they involve hypergeometric functions. Independently but in the same spirit, \cite{CuiGeorge} suggested putting an inverse-gamma prior distribution on $(1+\tau)$ (rather than on $\tau$ like Zellner and Siow), obtaining a family of priors on $\tau$ which contains the \textit{hyper-g prior family} as a special case. \cite{BottoloRichardson} used a similar prior. In the linear regression framework, \cite{CeleuxMarinRobert} and \cite{MarinRobert2007} suggested an improper discrete prior on $\tau$. But this prior is difficult to use in practice because it induces an infinite sum. As a consequence, \cite{CeleuxAnbari2011} proposed a continuous Jeffreys prior on $\tau$, and \cite{GuoSpeckman} showed the consistency of the associated Bayes factors.

Despite the variety of these approaches for choosing the variable selection coefficient $\tau$, a crucial problem remains with priors using the matrix $(\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma})^{-1}$. Indeed, $\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma}$ must be invertible. However, there are two standard cases where $\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma}$ is singular:
\begin{itemize}
\item If the number of observations is smaller than the number of variables in the model, that is, $n < d_{\gamma}$.
\item If some variables are linear combinations of others. In practice, even if $\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma}$ is theoretically invertible, some variables can be highly correlated and $\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma}$ can be computationally singular. This is often the case in high-dimensional genomic datasets, for example. This problem can also be encountered when several datasets are merged: some variables can be collinear or almost collinear if the same variables are present in several datasets under different labels, for instance.
\end{itemize}
In these cases the classical $g$-prior does not work. Concerning the first case, several authors proposed alternative priors. \cite{MaruyamaGeorge} proposed a generalization of the $g$-prior, working with a singular value decomposition of the design matrix $X$. But their approach is valid only in the case of classical linear models. \cite{YangSong} proposed to replace the matrix $(\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma})^{-1}$ in $\Sigma_{\gamma}$ by its Moore-Penrose inverse (see also \cite{West}). However, the computation of the posterior distribution has a technical issue that does not permit the use of an MCMC algorithm (see \cite{CommentYangSong}). Another idea would be to avoid this first case by fixing the number of selected covariates at each iteration, as in \cite{Baragatti1}. This appeared computationally advantageous and it reduced the effect of the variable selection coefficient $\tau$ used in the $g$-prior. But the number of selected variables at each iteration must be arbitrarily fixed.
Moreover, fixing the number of selected covariates is not a solution for the second case, and neither are the priors proposed by \cite{MaruyamaGeorge} and \cite{YangSong}. In the spirit of ridge regression (see \cite{Marquardt}), \cite{GuptaIbrahim} proposed an extension of the $g$-prior, by introducing a ridge parameter. Their prior can be used in both cases, but they did not study the second case, in which some variables are linear combinations of others. Recently, \cite{KwonVannucci2011} proposed a variable selection method which takes into account high correlations between predictors, but again their approach is not valid when some variables are linear combinations of others. More generally, to our knowledge, the problem of variable selection when some variables are linear combinations of others has not been addressed in the literature.

In this paper we develop the idea of \cite{GuptaIbrahim} concerning the introduction of a ridge parameter, and we study the influence of this parameter on the selection of variables. In addition, we suggest a way to choose the associated hyper-parameters: following the original idea of \cite{Zellner86}, which is to keep the covariance structure of the design, we propose to keep the total variance of the data through the trace of $\mathbf{X}^T \mathbf{X}$. We focus on probit models, as studied in \cite{Baragatti1}, \cite{YangSong} and \cite{LeeSha}. The aim is to study the behavior of the variable selection process while using the proposed prior, especially when some variables are linear combinations of others. The proposed approach is applied both to simulated data and to data obtained from Affymetrix microarray experiments. We compare the numerical results with those obtained by a Bayesian Lasso approach (see \cite{ParkCasella2008} and \cite{Hans2009} for a recent review) in the context of probit mixed models.

This paper is organized as follows. In Section \ref{Ridge} the extension of the prior to be used for $\beta_{\gamma}$ is introduced and a choice for the hyper-parameters is suggested. Section \ref{probit} outlines the priors, full conditional distributions and the sampler to be used in the case of a probit mixed model, for both the SSVS approach and a Bayesian Lasso approach. In Section \ref{Results} experimental results are given and a sensitivity analysis is performed. Finally, Section \ref{Discussion} discusses the method.

\section{Introducing a ridge parameter}\label{Ridge}

\subsection{Prior distribution of $\beta$ with a ridge parameter}\label{partpriorbeta}

As previously explained, in the case of singularity of the matrix $\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma}$, the classical $g$-prior cannot be used. \cite{GuptaIbrahim} proposed to use a ridge parameter, denoted $\lambda>0$, by replacing in (\ref{sigmagamma}) the matrix $\tau^{-1} \mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}$ by $\tau^{-1}\big( \mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}+ \lambda I\big)$. Imitating \cite{GuptaIbrahim} we write
\begin{equation}\label{sigmagammaridge}
\Sigma_{\gamma}(\lambda) = (\tau^{-1} \mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}+\lambda I)^{-1},
\end{equation}
and we consider the prior
\begin{equation}\label{priorbetaridge}
\beta_{\gamma} |\gamma \sim {\cal N}_{d_{\gamma}}\big(0, (\tau^{-1} \mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}+\lambda I)^{-1}\big) \qquad \textrm{with} \qquad d_{\gamma}=\sum_{j=1}^p \gamma_j.
\end{equation}
Since $\lambda$ is strictly positive, the matrix $\Sigma_{\gamma}(\lambda)$ is always of full rank and (\ref{priorbetaridge}) can be viewed as a modified form of the $g$-prior, which is a compromise between independence and instability. Indeed, for large values of $\lambda$ and $\tau$, $\Sigma_{\gamma}(\lambda)$ is close to a diagonal matrix, which coincides with the conditionally independent case. Conversely, for small values of $\lambda$ and $\tau$, the term $\tau^{-1} \mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}$ prevails and the inverse of $\tau^{-1} \mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}+\lambda I$ will be unstable if $\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}$ is singular. In that case, the prior distribution (\ref{priorbetaridge}) is close to the $g$-prior case.

\subsection{Calibrating hyper-parameters}\label{calibrate}

Following \cite{Zellner86}, our purpose is to use the design to calibrate the covariance of $\beta_{\gamma}$ with a ridge parameter. Write $\Sigma_{\gamma}(0)=\tau_0 (\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma})^{-1}$, with $\tau_0$ the fixed hyper-parameter used in this classical prior. Using $\Sigma_{\gamma}(\lambda)$ instead of $\Sigma_{\gamma}(0)$ amounts to introducing a perturbation in the classical $g$-prior. An interesting feature of the classical $g$-prior is that the variance-covariance structure of the data is preserved. The ridge parameter prevents us from strictly preserving this structure. However, it is possible to replicate the total variance of the data, which corresponds, up to a normalization, to the trace of $\Sigma_{\gamma}(0)^{-1}$. The constraint used is then
\begin{displaymath}
tr\Big(\Sigma_{\gamma}(0)^{-1}\Big)=tr\Big(\Sigma_{\gamma}(\lambda)^{-1}\Big),
\end{displaymath}
which yields
\begin{equation*}
\tau = \tau_0\Big[1+\displaystyle\frac{\lambda p \tau_0}{tr(\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}) - \lambda p\tau_0}\Big],
\end{equation*}
with the condition $\lambda p\tau_0 \neq tr(\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma})$. Concerning the choice of $\lambda$, in order to take into account the number $p$ of covariates and to reduce the effect of the ridge factor, we suggest taking $\lambda=1/p$, which gives
\begin{equation*}
\tau = \tau_0\Big[1+\displaystyle\frac{\tau_0}{tr(\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}) - \tau_0}\Big],
\end{equation*}
with the condition $\tau_0 \neq tr(\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma})$. The vector $\gamma$ can change from one iteration of the algorithm to another. Therefore we propose to use the complete design matrix $\mathbf{X}$ instead of $\mathbf{X}_{\gamma}$, yielding
\begin{equation}\label{choixtau}
\tau = \tau_0\Big[1+\displaystyle\frac{\tau_0}{tr(\mathbf{X}^T\mathbf{X}) - \tau_0}\Big],
\end{equation}
with $\tau_0 \neq tr(\mathbf{X}^T\mathbf{X})$. In practice, the user only has to choose the parameter $\tau_0$, as $\lambda$ and $\tau$ are then given by $1/p$ and (\ref{choixtau}). Following \cite{SmithKohn}, $\tau_0$ could be chosen between 10 and 100, and not too close to $tr(\mathbf{X}^T\mathbf{X})$. It is of interest to study the influence of the hyper-parameters $\lambda$ and $\tau$. In Section \ref{sensitivity}, it will be shown that these hyper-parameters do not have a large bearing on the results.

\begin{rmk}
The choice $\lambda=1/p$ has the advantage of being automatic and of reducing the influence of the ridge parameter when the number of variables is large.
However it can lead to computational instability if this number is too large and hence $\lambda$ too small, since $\Sigma_{\gamma}(\lambda)$ is then almost singular. In our numerical study we did not encountered this problem for $p$ around $300$. But for very large $p$ we could add a threshold $\epsilon$ and then choose $\lambda=\max(1/p, \epsilon)$. \end{rmk} \section{Illustration trough a mixed probit model}\label{probit} \subsection{The probit mixed model} We consider the problem of variable selection among a set of $p$ potential fixed regressors, in the following probit mixed model \begin{displaymath} P(Y_i=1 \mid U, \beta)=p_i=\Phi(X_i^T\beta + Z_i^TU), \end{displaymath} where $\Phi$ stands for the standard Gaussian cumulative distribution function. Following \cite{AlbertChib} and \cite{LeeSha}, a vector of latent variables $L=(L_1,\ldots,L_n)^T$ is introduced, and we assume that the conditional distribution of $L$ is Gaussian, that is $L \mid U, \beta \sim \mathcal{N}_n(X\beta+ZU, I_n)$, with $I_n$ the identity matrix. We then have \begin{equation}\label{LatentVar} Y_i = \left\{ \begin{array}{rl} 1 & \text{if } L_i>0\\ 0 & \text{if } L_i<0. \end{array} \right. \end{equation} \subsection{Stochastic Search Variable Selection}\label{SSVS} \paragraph{Prior and full conditional distributions}~~\\ We used the following prior distributions, which are classical except the one for $\beta_{\gamma}$: \begin{itemize} \item As explained in Section \ref{Ridge}, we use the prior (\ref{priorbetaridge}) for $\beta_{\gamma}$. \item The $\gamma_j$ are assumed to be independent Bernoulli variables, with \begin{equation}\label{priorgamma} P(\gamma_j=1) =\pi, \qquad 0 \leq \pi \leq 1, \end{equation} as we do not want to use prior knowledge to favor any variables. \item The vector of coefficients associated with the random effects is assumed to be Gaussian and centered, with covariance matrix $D$: \begin{equation}\label{priorU} U|D \sim {\cal N}_q(0,D). \end{equation} We will consider the case where $D$ is a diagonal matrix $D=diag(A_1,\ldots,A_K)$, where $A_l=\sigma_l^2 I_{q_l}$, $l=1,\ldots,K$ and $I_{q_l}$ the identity matrix. The prior distributions for the $\sigma_l^2$ are then Inverse Gamma $\mathcal{IG}amma(a,b)$ ($b$ denoting the scale parameter). In a more general case, if no structure is assumed for the variance-covariance matrix $D$, its prior distribution should be an Inverse-Wishart. \end{itemize} \noindent Most of the full conditional distributions did not depend on the ridge parameter. In particular: \begin{itemize} \item The full conditional distribution of $L$ is given by (see \cite{AlbertChib}): \begin{eqnarray}\label{fullL} L_i|\beta, U, Y_i=1 & \sim & \mathcal{N}(X_i^T\beta+Z_i^TU,1) {\rm \ left \ truncated \ at \ } 0 \\ \nonumber L_i|\beta, U, Y_i=0 & \sim & \mathcal{N}(X_i^T\beta+Z_i^TU,1) {\rm \ right \ truncated \ at \ } 0. \end{eqnarray} \item Defining $W=(Z^TZ+D^{-1})^{-1}$, the full conditional distribution of $U$ is: \begin{equation}\label{fullU} U|L,\beta, D \sim \mathcal{N}_q(WZ^T(L-\mathbf{X}\beta),W). \end{equation} \item The full conditional distribution of the $\sigma_l^2,l=1,\ldots,K$ are Inverse-Gamma: \begin{eqnarray}\label{fullsigma} \sigma_l^2 \mid U_l &\sim& \mathcal{IG}amma\Big(\frac{q_l}{2}+a,\big(\frac 12 U_l^TU_l+b\big)\Big). 
\end{eqnarray} \end{itemize} Only the full conditional distributions of $\beta_{\gamma}$ and $\gamma$ depend on $\lambda$, as follows: \begin{itemize} \item For $\beta_{\gamma}$: \begin{equation}\label{fullbeta} \beta_{\gamma}|L,U,\gamma \sim \mathcal{N}_{d_{\gamma}}(V_{\gamma}\mathbf{X}_{\gamma}^T(L-ZU),V_{\gamma}), \quad \textrm{with} \quad V_{\gamma} = \Big[\frac{(1+\tau)}{\tau}\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma} + \lambda I\Big]^{-1}. \end{equation} \item And for $\gamma$: \begin{eqnarray}\label{fullgamma} \nonumber f(\gamma |L,U,\beta_{\gamma}) & \propto & \frac{(2\pi)^{-\frac{d_{\gamma}}{2}}}{|\Sigma_{\gamma}(\lambda)|^{1/2}} \exp\Big[-\frac{1}{2}\big(\beta_{\gamma}^T V_{\gamma}^{-1}\beta_{\gamma} - (L-ZU)^T \mathbf{X}_{\gamma} \beta_{\gamma} - \beta_{\gamma}^T \mathbf{X}_{\gamma}^T (L-ZU) \big)\Big]\\ & \times & \prod_{j=1}^p\pi_j^{\gamma_j}(1-\pi_j)^{1-\gamma_j}. \end{eqnarray} \end{itemize} \paragraph{The sampler}~~\\ The posterior distribution of $\gamma$ is of particular interest for the variable selection problem. An idea is to use a Gibbs sampler to explore the full posterior distribution and to search for high probability $\gamma$ values. Simulations from all the full conditional distributions can be easily obtained, except for $\gamma$ which full conditional distribution does not correspond to a standard multivariate one. The $\gamma$ vector can be simulated either element by element, or by using a Metropolis-Hastings algorithm. In general, in the case of a high number of variables, the Metropolis-Hastings algorithm is computationally advantageous. Moreover, using a Metropolis-Hastings step in a Gibbs sampler improves the sampler in terms of variance, see \cite{MonteCarloStatMethods}. As a consequence, we decided to use a Metropolis-within-Gibbs algorithm. But even with a Metropolis-Hastings algorithm, the full conditional distribution of $\gamma$ cannot be directly simulated, since it depends on the actual value of $\beta_{\gamma}$. Following \cite{LeeSha} we then used the grouping technique of \cite{Liu}, by considering the parameters $\gamma$ and $\beta_\gamma$ jointly. The advantage of this technique is that the convergence of the Markov chain is improved, and autocorrelations are reduced, see \cite{Liu} and \cite{vanDyk}. Using this technique is equivalent to integrate the full conditional distribution of $\gamma$ in $\beta_{\gamma}$ (see \cite{Baragatti1} for more details). We then obtain: \begin{eqnarray}\label{fullgammaint} \nonumber f(\gamma |L,U) & \propto & \displaystyle\frac{|V_{\gamma}|^{1/2}}{|\Sigma_{\gamma}(\lambda)|^{1/2}}\exp\Big[-\frac{1}{2}(L-ZU)^T (I-\mathbf{X}_{\gamma}V_{\gamma}\mathbf{X}_{\gamma}^T)(L-ZU)\Big] \\ & \times & \displaystyle\prod_{j=1}^p\pi_j^{\gamma_j}(1-\pi_j)^{1-\gamma_j}. \end{eqnarray} Note that setting $\lambda=0$, we can recover the formula corresponding to the classical $g$-prior. \begin{rmk} The influence of $\tau$ appears here through the ratio $R^{1/2}=\left(\frac{|V_{\gamma}|}{|\Sigma_{\gamma}|}\right)^{1/2}$. We can see that $$ \left\{ \begin{array}{ll} {\rm if \ } \tau \rightarrow \infty, & R \rightarrow | \frac{1}{\lambda}\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}+ I|^{-1}, \\ {\rm if \ } \tau \rightarrow 0, & R \rightarrow 1, \\{\rm if \ } \lambda \rightarrow \infty, & R \rightarrow 1, \\ {\rm if \ } \lambda \rightarrow 0, & R \rightarrow (\displaystyle\frac{1}{1+\tau})^{d_{\gamma}/2}. \end{array} \right. 
$$ \end{rmk} The Metropolis-Hastings algorithm used to generate the $\gamma$ vector can be summarized as follows: at iteration $(i+1)$ a candidate $\gamma^*$ is proposed from $\gamma^{(i)}$, and using a symmetric transition kernel the acceptance rate is \begin{equation*} \rho(\gamma^{(i)},\gamma^*) = \min \Bigg\{1,\displaystyle\frac{f(\gamma^*|L,U)}{f(\gamma^{(i)}|L,U)} \Bigg\}, \end{equation*} with \begin{eqnarray}\label{acceptancerate} \nonumber \displaystyle\frac{f(\gamma^*|L,U)}{f(\gamma^{(i)}|L,U)} & = & \displaystyle\left(\frac{|V_{\gamma^*}\Sigma_{\gamma^{(i)}}|}{|\Sigma_{\gamma^*}V_{\gamma^{(i)}}|}\right)^{1/2} \exp\Big\{-\frac{1}{2}(L-ZU)^T (\mathbf{X}_{\gamma^i}V_{\gamma^{(i)}}\mathbf{X}_{\gamma^{(i)}}^T-\mathbf{X}_{\gamma^*}V_{\gamma^*}\mathbf{X}_{\gamma^*}^T)(L-ZU)\Big\} \\ & \times & \displaystyle\prod_{j=1}^p \left(\displaystyle\frac{\pi_j}{1-\pi_j}\right)^{\gamma_j^*-\gamma_j^{(i)}}, \qquad \textrm{if} \qquad \forall j \in \{1,\ldots,p\} \quad \pi_j=\pi. \end{eqnarray} The simplest way to have a symetric transition kernel is to propose a $\gamma^*$ which corresponds to $\gamma^{(i)}$ in which $r$ components have been randomly changed (see \cite{ChipmanGeorge} and \cite{GeorgeMcCulloch97}). \begin{rmk} The influence of $\tau$ appears via the ratio $Q^{1/2} = \left(\frac{|V_{\gamma^*}\Sigma_{\gamma^{(i)}}|}{|\Sigma_{\gamma^*}V_{\gamma^{(i)}}|}\right)^{1/2}$ that satisfies: $$ \left\{ \begin{array}{ll} {\rm if \ } \tau \rightarrow \infty, & Q \rightarrow | \mathbf{X}_{\gamma^*}^T\mathbf{X}_{\gamma^*}+\lambda I|\times | \mathbf{X}_{\gamma^i}^T\mathbf{X}_{\gamma^i}+\lambda I|^{-1}, \\ {\rm if \ } \tau \rightarrow 0, & Q \rightarrow 1, \\{\rm if \ } \lambda \rightarrow \infty, & Q \rightarrow 1, \\ {\rm if \ } \lambda \rightarrow 0, & Q \rightarrow 1. \end{array} \right. $$ \end{rmk} \paragraph{Post-processing}~~\\ The number of iterations of the algorithm is $b+m$, where $b$ corresponds to the burn-in period and $m$ to the observations from the posterior distributions. For selection of variables, the sequence $\{\gamma^{(t)}=(\gamma_1^{(t)},\ldots,\gamma_p^{(t)}),t=b+1,\ldots,b+m\}$ is used. The most relevant variables for the regression model are those which are supported by the data and prior information. Thus they are those corresponding to the $\gamma$ components with higher posterior probabilities, and can be identified as the $\gamma$ components that are most often equal to 1. To decide which variables should be finally selected after a run, a confidence interval based on a Poisson distribution could be used. However we noticed that usually a reasonable number of relevant variables can be isolated from the others using the number of selections. Therefore we suggest to use a box-plot of the number of iterations during which variables were selected. For each run the variables distinguishable from the others can be selected by fixing a threshold: if a variable has been selected during a number of iterations which is higher than this threshold, then the variable is kept in the final selection \subsection{Bayesian Lasso for probit mixed models}\label{lasso} A competing paradigm to the classical SSVS approach is the Bayesian Lasso framework (see \cite{ParkCasella2008}), which is inspired from the frequentist Lasso (\cite{LASSO}). In order to compare the two frameworks, we adapted the Bayesian Lasso to probit mixed models. The Bayesian Lasso has already been used for probit models (\cite{BaeMallick2004}), and also for mixed models (\cite{Legarra2011}). 
Combining these two approaches, we considered a fully Bayesian analysis with the following prior distributions: \begin{itemize} \item[$\bullet$] For each $\beta_j,j=1,\ldots,p$ we consider a Laplace prior: $\mathcal{L}aplace(0,1/\sqrt{\delta})$. This Laplace distribution can be expressed as a scale mixture of normal distributions with independent exponentially distributed variances, see \cite{AndrewsMallows1974}. This prior is then equivalent to $\beta_j \mid \lambda_j \sim \mathcal{N}(0,\lambda_j)$ and $\lambda_j \sim \mathcal{E}xpo(\delta/2)$. Writing $\Lambda=diag(\lambda_1,\ldots,\lambda_p)$, we have $\beta \mid \Lambda \sim \mathcal{N}_p(0,\Lambda)$. \item[$\bullet$] Concerning the random effects $U$ and the variance covariance matrix $D$, we use the same classical priors than in the SSVS approach. \item[$\bullet$] Following \cite{ParkCasella2008}, a hyperprior distribution is put on the Lasso parameter $\delta$: $\delta \sim \mathcal{G}amma(e,f)$, $f$ denoting the scale parameter. On the following experimental results, we found like \cite{YiXu2008} and \cite{LiDasFuLiWu2011} that the posteriors were not too sensitive to the hyperparameters $e$ and $f$, as long as they were small enough so that the hyperprior is sufficiently flat. In practice we used $e=f=1$, but results were similar with $e=f=10$ for instance. \end{itemize} The Bayesian Lasso estimates for the $\beta_j$ are then obtained by a Gibbs sampler using the following posterior distributions: \begin{itemize} \item[$\bullet$] For $L$, $U$ and the $\sigma^2_l,l=1,\ldots,K$, the full conditional distributions are the same than those in the SSVS method, using (\ref{fullL}), (\ref{fullU}) and (\ref{fullsigma}). \item[$\bullet$] For the $\beta$, the posterior is: \begin{equation}\label{fullbetalasso} \beta|L,U,\Lambda \sim \mathcal{N}_{p}(V_{\Lambda}\mathbf{X}^T(L-ZU),V_{\Lambda}) \quad \textrm{with} \quad V_{\Lambda} = \Big[\mathbf{X}^T\mathbf{X}+\Lambda^{-1}]^{-1}. \end{equation} \item[$\bullet$] The posterior distributions for the $1/\lambda_j,j=1,\ldots,p$ are inverse Gaussian: \begin{equation}\label{fulllambdalasso} 1/\lambda_j \mid \beta \sim \mathcal{IG}auss\Big(\frac{\sqrt{\delta}}{\beta_j},\delta\Big). \end{equation} \item[$\bullet$] The posterior for the Lasso parameter $\delta$ is a gamma distribution: \begin{equation}\label{fulldeltalasso} \delta \mid \Lambda \sim \mathcal{G}amma\Big(p+e,\big(\frac{\sum \lambda_j}{2} + \frac1f\big)^{-1}\Big). \end{equation} \end{itemize} \paragraph{Post-processing}~~\\ From the results of the Bayesian Lasso we obtain posterior estimates for the $\beta_j$s and the $\lambda_j$s, and the variables can be selected by three different ways: \begin{enumerate} \item One can select the variables corresponding to an absolute value $|\beta_j|$ higher than a threshold, like \cite{YiXu2008} or \cite{LiDasFuLiWu2011} for instance. \item \cite{BaeMallick2004} among others considered that the variables associated to $\beta_j$s with smaller posterior variances have no effect and should be excluded from the model. Therefore, they proposed to select variables corresponding to high values of $\lambda_j$. \item Finally, the results of the Lasso enable us to obtain posterior credible intervals (CI) for the $\beta_j$s. Hence we can select variables corresponding to a $\beta_j$ with a posterior CI which does not cover 0, see \cite{Kyung2010} for instance. 
\end{enumerate} \section{Experimental results}\label{Results} \subsection{Simulated data}\label{simulateddata} We simulated 200 binary observations and 300 variables, the observations being obtained using a probit mixed model with 5 of these variables and one random effect of length 4. Among the 300 variables, 280 were generated from a uniform on $[-5,5]$ and denoted by $V1,\ldots,V280$. Then 10 variables denoted by $V281,\ldots,V290$ were build to be collinear to the first 10 variables, with a factor 2: for instance $V282=2 \times V2$. One variable was build to be a linear combination of $V1$ and $V2$ ($V291=V1+V2$), and another was build to be a linear combination of $V3$ and $V4$ ($V292=V3-V4$). Finally, 8 variables were build to be linear combinations of variables 5 to 12 and variables 13 to 20 (for instance $V293=V5+V13$). The five variables used to generate the binary observations were the first five: $V1,V2,V3,V4$ and $V5$. The vector of coefficients associated with these variables was $\beta=(1,-1,2,-2,3)$. The first 100 observations were part of the training set, and the last 100 were part of the validation set. In the training and the validation sets, 25 observations were associated with each component of the random effect, whom vector of coefficients was $U=(-3,-2,2,3)$. We had only one random effect and the different components were supposed independent, hence we put $D=\sigma^2 I_4$. \paragraph{SSVS approach using the prior with a ridge parameter}~~\\ The objective was to assess the behavior of the proposed method when some variables are linear combinations of others, and to compare it to the case where no variable is linear combination of others. Therefore we performed 10 runs of the sampler using only the first 280 variables, and 10 runs using the 300 variables. In these two cases and for each run the same parameters were used: 5 variables were initially selected, one component of $\gamma$ was proposed to be changed at each iteration of the Metropolis-Hastings step, the prior of $\sigma^2$ was a $\mathcal{IG}(1,1)$, $\pi_j=5/280$ for all $j$ when 280 variables were kept, $\pi_j=5/300$ for all $j$ when 300 variables were kept, 4000 iterations were performed after a burn-in period of 1000 iterations, and each Metropolis-Hastings step consisted of 500 iterations. We decided to choose $\tau_0=50$, which is a standard choice, see \cite{SmithKohn} for instance. The parameters $\lambda$ and $\tau$ were then chosen as explained in \ref{calibrate}, yielding $\lambda=1/280$ and $\tau=50.01075$ when using 280 variables, and $\lambda=1/300$ and $\tau=50.00885$ when using 300 variables. A final selection was performed for each of the 20 runs. Figure \ref{Boxplot:boxplotSimu} presents two boxplots associated to 280 variables and 300 variables, respectively. For the run with 280 variables there is a gap between the variables $V2,V3,V4$ and $V5$ and the others, hence we selected these four variables. For the run with 300 variables, there is a gap between the variables selected in more than 400 iterations and the others, hence we selected the eight corresponding variables. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.45]{boxplotSimu.eps} \end{center} \caption{\small{Boxplots of the number of selections of a variable after the burn-in period. Each point represents a variable (or several variables if superposed). The left boxplot corresponds to the run 5 with 280 variables. 
The right boxplot corresponds to the run 6 with 300 variables.}} \label{Boxplot:boxplotSimu} \end{figure} Table \ref{tab10runs} gives the variables kept in the final selections of the 20 runs. Among the runs with the first 280 variables, 3 among the 5 variables used to generate the data were in the final selection of almost all runs, and the variables $V4$ was in the final selection of half of the runs. Notice that $V1$ was in none of the final selections. Among the runs with 300 variables, the variables $V1,V2,V3$ and $V5$ were present in most of the final selections, directly or indirectly through linear combinations. Contrarily to the runs with 280 variables, the variables $V4$ or $V284$ were in none of the final selections, while the variables $V1$ and $V281$ were in all the final selections. Concerning $V4$, it was indirectly in all the final selections through $V292$, which is a linear combination of $V3$ and $V4$. Eventually, the final selections of the runs with 300 variables appeared as relevant as the final selections of the runs with 280 variables, despite the fact that some variables were linear combinations of others. \begin{rmk} We obtained similar results with only 500 burn-in iterations and 500 post burn-in iterations, except that the variable $V4$ was in none of the final selections. \end{rmk} \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|} \hline Variables & Number of selections & Number of selections\\ & among the 10 runs & among the 10 runs\\ & with 280 variables & with 300 variables\\ \hline $V1$ & 0 &10\\ $V2$ & 9 & 8\\ $V3$ & 10 & 2\\ $V4$ & 5 & 0\\ $V5$ & 10 & 10\\ \hline $V281=2 \times V1$ & & 10\\ $V282=2 \times V2$ & & 9\\ $V283=2 \times V3$ & & 3\\ $V284=2 \times V4$ & & 0\\ $V285=2 \times V5$ & & 10\\ $V291= V1 + V2$ & & 7\\ $V292= V3 - V4$ & \multirow{-5}{*}{Not available} & 10\\ \hline \end{tabular} \caption{\small{Number of final selections among the 10 runs with the first 280 variables and among the 10 runs with 300 variables, for the variables $V1,V2,V3,V4,V5$ and linear combinations of these variables. No other variable was present in the final selections.}} \label{tab10runs} \end{center} \end{table} \FloatBarrier To assess the relevance of the final selections, predictions were performed. Concerning the runs with 300 variables, we can not fit a model with all the variables in final selections, because some of them are linear combinations of others. In that case we decided to fit a probit mixed model on the training set with the five linearly independent variables $V281,V282,V283, V285$ and $V292$. Sensitivity and specificity results are presented in Table \ref{tabsensispe}. For comparison, using the five variables used to generate the data, we obtained a sensitivity and a specificity equal to 0.94 and 0.89. This is equivalent to results obtained using the selected variables $V281,V282,V283, V285$ and $V292$. \begin{table}[h!] 
\begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{\cellcolor{lightgray}Variables selected among 280} & \multicolumn{3}{c|}{\cellcolor{lightgray}Variables selected among 300} \\ \hline Variables & Sensitivity & Specificity & Variables & Sensitivity & Specificity\\ \hline $V2, V3, V5$ & 0.87 & 0.89 & $V281$, $V282$ & & \\ & & & $V283$, $V285$ & 0.94 & 0.89\\ $V2, V3, V4, V5 $ & 0.93 & 0.96 & and $V292$ & & \\ \hline \end{tabular} \caption{\small{Sensitivity and specificity on the validation dataset.}} \label{tabsensispe} \end{center} \end{table} \FloatBarrier The number of components of $\gamma$ equal to 1 can vary from one iteration to another. Figure \ref{Barplot:nbvarSimuSing} shows, for the 10 runs with 300 variables, the number of iterations of the runs associated with a number of selected variables from 1 to 15. Similar results were obtained for the 10 runs with the first 280 variables. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{nbvarSimuSing-En.eps} \end{center} \caption{\small{Number of iterations of the runs associated with a number of selected variables from 1 to 15. For the 10 runs, there were a total of 40000 post burn-in iterations.}} \label{Barplot:nbvarSimuSing} \end{figure} \FloatBarrier \paragraph{Bayesian Lasso approach}~~\\ Ten runs of the Bayesian Lasso were performed, with 5000 burn-in iterations and 15000 post-burn-in iterations. The results of the 7th run are illustrated in Figure \ref{Lassosimu7}. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.45]{boxplotLassoSimu7.eps} \end{center} \caption{\small{Results of the 7th run of the Bayesian Lasso. On the left are represented the values of the $\beta_j$s as well as the threshold used. Another way to represent these values is by using a boxplot, represented in the middle. On the right are represented the values of the $\lambda_j$s and the threshold used.}} \label{Lassosimu7} \end{figure} \FloatBarrier From this run, using the absolute values of the $\beta_j$s and a treshold of 0.85, the relevant variables $V281$, $V282$, $V283$, $V285$ and $V292$ were selected ($V284$ was indirectly selected by $V292$). However, we can note that the other relevant variables were not selected, and in particular the variables which are linear combinations of the selected variables. Moreover, the non-relevant variable $V25$ was selected. Using the values of the $\lambda_j$s and a threshold of 1, the relevant variables $V281$, $V283$, $V285$ and $V292$ were selected, and the non-relevant variable $V25$ was still selected. Finally, using posterior CI, only the variables $V283$ and $V285$ were selected.\\ The variables selected during the ten runs are given in Table \ref{tablelassosimu}. \begin{table}[h!] 
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\rule[-6pt]{0cm}{20pt} Variables & Using the $|\beta_j|$s & Using the $\lambda_j$s & Using the posterior CI\\
\hline
\rule[-6pt]{0cm}{20pt} selected in 9 runs & $V285$ & $V285$ & $V285$\\
\hline
\rule[-6pt]{0cm}{20pt} selected in 7 runs & $V283,V292$ & $V283$ & \\
\hline
\rule[-6pt]{0cm}{20pt} selected in 6 runs & & $V292$ & $V283$ \\
\hline
\rule[-6pt]{0cm}{20pt} selected in 3 runs & $V282$ & & \\
\hline
\rule[-6pt]{0cm}{20pt} selected in 2 runs & $V281$ & $V281,V282$ & \\
\hline
\rule[-6pt]{0cm}{20pt} & $V25,V208,V209,V250$ & $V25,V250,V87,V127$ & $V250,V87,V127$\\
\rule[-6pt]{0cm}{20pt} \multirow{-2}{*}{selected in 1 run} & $V87,V127,V270$ & $V270$ & \\
\hline
\end{tabular}
\caption{\small{Variables selected during the 10 runs of the Bayesian Lasso approach, using three different methods (see Section \ref{lasso}).}}
\label{tablelassosimu}
\end{center}
\end{table}
\FloatBarrier
Generally, it appeared that using the values of the $\beta_j$s and the $\lambda_j$s enabled us to select more relevant variables than using the posterior CIs, but at the price of also selecting non-relevant variables. Moreover, this Bayesian Lasso approach did not give very stable results. Indeed, some runs gave relevant selections of variables, such as run 1, from which the variables $V281,V282,V283,V285$ and $V292$ were selected, while other runs gave much less relevant selections, such as run 10, from which the variables $V127$ and $V270$ were selected, or run 4, from which the variables $V285,V208,V209$ and $V250$ were selected. In between, some runs selected some of the relevant variables, but not all of them, such as runs 2, 5, 8 and 9, from which the variables $V283,V285$ and $V292$ were selected. Overall, the subsets of selected variables were less stable than those obtained by the SSVS approach using the prior with a ridge parameter, and more ``noise'' was observed, since several non-relevant variables were selected in only one of the ten runs.

\subsection{Illustrations through real data}\label{realdata}

As an illustration, results from Affymetrix microarray experiments on patients with breast cancer were used. The data are those used in \cite{Baragatti1}; see there for more details. Briefly, the patients come from three different hospitals, and the objective was to select some variables (probesets) which are indicative of the activity of the estrogen receptor (ER) gene in breast cancer. The hospital was treated as a random effect in the model, thus accounting for the different experimental conditions between the three hospitals. For each patient, the expressions of 275 probesets were kept, among which some were known to be relevant to explain the ER status (corresponding to variables 148, 260, 263 and 273). We used a training set made of 100 patients, and a validation set of 88 patients. In order to have a potentially singular $\mathbf{X}_{\gamma}^T\mathbf{X}_{\gamma}$ matrix, we added three variables to the data matrix $\mathbf{X}$. These variables were linear combinations of the known relevant variables, hence $\mathbf{X}$ was no longer of full rank: $V276=2 \times V148$, $V277=-V260$ and $V278=V263+V273$. We had only one random effect, which corresponded to the different hospitals. The hospitals are assumed to be independent, hence we put $D=\sigma^2 I_3$.
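As in the simulated example, the variable selection below relies on the Metropolis-within-Gibbs sampler of Section \ref{SSVS}. For concreteness, the following Python sketch illustrates one sweep of this sampler in the special case of a single random effect with $D=\sigma^2 I_q$; it is a schematic transcription of the full conditionals (\ref{fullL}), (\ref{fullU}), (\ref{fullsigma}), (\ref{fullbeta}) and of the collapsed distribution (\ref{fullgammaint}), with function and variable names of our own choosing, and not the exact code used for the experiments.

\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm, invgamma

def log_post_gamma(gamma, L, X, Z, U, tau, lam, pi):
    # Collapsed full conditional log f(gamma | L, U), up to a constant.
    Xg = X[:, gamma]
    d = int(gamma.sum())
    Vg_inv = (1.0 + tau) / tau * Xg.T @ Xg + lam * np.eye(d)
    Sg_inv = Xg.T @ Xg / tau + lam * np.eye(d)
    r = L - Z @ U
    quad = r @ r - r @ Xg @ np.linalg.solve(Vg_inv, Xg.T @ r)
    logdet = 0.5 * (np.linalg.slogdet(Sg_inv)[1]
                    - np.linalg.slogdet(Vg_inv)[1])
    prior = d * np.log(pi) + (gamma.size - d) * np.log(1.0 - pi)
    return logdet - 0.5 * quad + prior

def gibbs_sweep(y, X, Z, state, tau, lam, pi, a=1.0, b=1.0, n_flip=1):
    # One sweep; 'state' holds gamma (boolean), beta_gamma, U, sigma2.
    gamma, beta, U, sigma2 = (state["gamma"], state["beta"],
                              state["U"], state["sigma2"])
    # 1. Latent variables L_i: N(x_i'beta + z_i'U, 1) truncated by y_i.
    mean = X[:, gamma] @ beta + Z @ U
    lo = np.where(y == 1, -mean, -np.inf)
    hi = np.where(y == 1, np.inf, -mean)
    L = mean + truncnorm.rvs(lo, hi, size=len(y))
    # 2. Random effects: U | L, beta ~ N(W Z'(L - X beta), W).
    W = np.linalg.inv(Z.T @ Z + np.eye(Z.shape[1]) / sigma2)
    U = np.random.multivariate_normal(W @ Z.T @ (L - X[:, gamma] @ beta), W)
    # 3. Random-effect variance: inverse gamma.
    sigma2 = invgamma.rvs(len(U) / 2 + a, scale=0.5 * U @ U + b)
    # 4. Metropolis-Hastings step on gamma (beta integrated out); the
    #    symmetric proposal flips n_flip randomly chosen components.
    #    (In practice, several MH iterations are performed per sweep.)
    prop = gamma.copy()
    flip = np.random.choice(gamma.size, size=n_flip, replace=False)
    prop[flip] = ~prop[flip]
    log_ratio = (log_post_gamma(prop, L, X, Z, U, tau, lam, pi)
                 - log_post_gamma(gamma, L, X, Z, U, tau, lam, pi))
    if np.log(np.random.rand()) < log_ratio:
        gamma = prop
    # 5. beta_gamma | gamma from its Gaussian full conditional.
    Xg = X[:, gamma]
    Vg = np.linalg.inv((1.0 + tau) / tau * Xg.T @ Xg
                       + lam * np.eye(int(gamma.sum())))
    beta = np.random.multivariate_normal(Vg @ Xg.T @ (L - Z @ U), Vg)
    return {"gamma": gamma, "beta": beta, "U": U, "sigma2": sigma2}
\end{verbatim}

After the burn-in period, the variables finally retained are those whose components of $\gamma$ are equal to 1 in the largest number of sweeps, as described in the post-processing paragraph of Section \ref{SSVS}.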
\paragraph{SSVS approach using the prior with a ridge parameter}~~\\ We performed 10 runs of the sampler using only the first 275 variables, and 10 runs using all the 278 variables. In these two cases and for each run the same 100 patients and the same parameters were used. As in the previous illustration we chose $\tau_0=50$. The parameters $\lambda$ and $\tau$ were chosen as explained in \ref{calibrate}, yielding $\lambda=1/275$ and $c=50.0009$ when using 275 variables, and $\lambda=1/278$ and $c=50.00088$ when using 278 variables. Figure \ref{Boxplot:boxplotEx} presents a boxplot of a run with 275 variables and a boxplot of a run with 278 variables. Following the same reasoning as in the previous example, six variables were selected from the left boxplot and three from the right boxplot. \begin{figure}[h!] \begin{center} \includegraphics[scale=0.5]{boxplotEx.eps} \end{center} \caption{\small{Boxplots of the number of selections of a variable after the burn-in period. The left boxplot corresponds to the run 3 with 275 variables and the right boxplot corresponds to the run 8 with 278 variables.}} \label{Boxplot:boxplotEx} \end{figure} Table \ref{varkept} gives the variables kept in the final selections of the 20 runs. As in the previous example, the final selections of the runs with 278 variables appeared as relevant as the final selections of the runs with 275 variables, despite the fact that some variables were linear combinations of others. \begin{table}[h!] \vspace{0.5cm} \begin{center} \begin{tabular}{|c|c|c|c|} \hline Variables & Corresponding & Number of selections & Number of selections\\ & probesets & among the 10 runs & among the 10 runs\\ & & with 275 variables & with 278 variables\\ \hline $V260$ & 228241\_at & 10 & 3\\ $V273$ & 205862\_at & 9 & 0\\ $V148$ & 209604\_s\_at & 5 & 1\\ $V263$ & 228554\_at & 10 & 0\\ $V83$ & 203628\_at & 7 & 0\\ $V66$ & 202088\_at & 1 & 0\\ $V212$ & 215157\_x\_at & 1 & 0\\ \hline $V277=-V260$ & collinearity & & 3\\ $V278=V263+V273$ & linear combination & \multirow{-2}{*}{Not available} & 10\\ \hline \end{tabular} \caption{\small{Number of final selections among the 10 runs with the first 275 variables and among the 10 runs with 278 variables, for the different variables and linear combinations. No other variable was present in the final selections.}} \label{varkept} \end{center} \end{table} \FloatBarrier Predictions were also performed. Table \ref{tabSS2} contains sensitivity and specificity results. For comparison, using the four relevant variables $V260,V263,V148$ and $V273$, we obtained a sensitivity equal to 0.94 and a specificity equal to 1. This is equivalent to results obtained using only the two variables $V278$ and $V277$. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{3}{|c|}{\cellcolor{lightgray}Variables selected among 275} & \multicolumn{3}{c|}{\cellcolor{lightgray}Variables selected among 278} \\ \hline Variables & Sensitivity & Specificity & Variables & Sensitivity & Specificity\\ \hline $V260, V273, V263$ & 0.92 & 1 & $V278$ & 0.87 & 0.97\\ & & & $V278$, $V277$ & 0.94 & 1\\ \hline \end{tabular} \caption{\small{Sensitivity and specificity on the validation dataset.}} \label{tabSS2} \end{center} \end{table} \FloatBarrier \paragraph{Bayesian Lasso approach}~~\\ Ten runs of the Bayesian Lasso were performed, with 5000 burn-in iterations and 15000 post-burn-in iterations. The results of the 5th run are illustrated in Figure \ref{LassoExample5}. \begin{figure}[h!] 
\begin{center} \includegraphics[scale=0.45]{boxplotLassoExample5.eps} \end{center} \caption{\small{Results of the 5th run of the Bayesian Lasso. On the left are represented the values of the $\beta_j$s as well as the threshold used. Another way to represent these values is by using a boxplot, represented in the middle. On the right are represented the values of the $\lambda_j$s and the threshold used.}} \label{LassoExample5} \end{figure} \FloatBarrier From this run, using the absolute values of the $\beta_j$s and a threshold of 2.5, the relevant variables $V260$, $V278$ and $V277$ were selected. Moreover, the less relevant variables $V137$, $V159$, $V147$ and $V271$ were also selected. Using the values of the $\lambda_j$s and a threshold of 7, the relevant variables $V260$, $V278$ and $V277$ were selected, and the less relevant variables $V103$, $V137$, $V159$, $V147$ and $V271$ were selected. Finally, using the posterior CIs, no variable was selected.\\ The variables selected during the ten runs are given in Table \ref{tablelassoexample}. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|} \hline \rule[-6pt]{0cm}{20pt} Variables & Using the $|\beta_j|$s & Using the $\lambda_j$s & Using the posterior CIs\\ \hline \rule[-6pt]{0cm}{20pt} selected in 7 runs & $V278$ & $V278$ & \\ \hline \rule[-6pt]{0cm}{20pt} selected in 4 runs & $V277,V260$ & & \\ \hline \rule[-6pt]{0cm}{20pt} selected in 3 runs & & $V260,V263,V277$ & \\ \hline \rule[-6pt]{0cm}{20pt} & $V263,V114,V140,V272$ & $V140,V145,V83$ & $V278,V145,V271,V266$\\ \rule[-6pt]{0cm}{20pt} \multirow{-2}{*}{selected in 2 runs} & $V147,V271,V83$ & & \\ \hline \rule[-6pt]{0cm}{20pt} & $V276,V80,V145,V71$ & $V114,V80,V252,V71$ & $V114,V140,V272,V147$\\ \rule[-6pt]{0cm}{20pt} & $V102,V137,V159,V59$ & $V78,V215,V165,V102$ & $V84,V83,V95,V115$\\ \rule[-6pt]{0cm}{20pt} & $V84,V105,V78,V95$ & $V60,V137,V159,V147$ & $V161,V105,V272,V276$\\ \rule[-6pt]{0cm}{20pt} & $V161,V266,V105,V2$ & $V271,V103,V59,V84$ & \\ \rule[-6pt]{0cm}{20pt} \multirow{-6}{*}{selected in 1 run} & $V161,V266,V105,V2$ & $V106,V2,V105,V266,V161$ & \\ \hline \end{tabular} \caption{\small{Variables selected during the 10 runs of the Bayesian Lasso approach, using three different methods, see Section \ref{lasso}.}} \label{tablelassoexample} \end{center} \end{table} \FloatBarrier The same general remarks as those made for the simulated example apply here (see Section \ref{simulateddata}). \begin{rmk} We compared the SSVS approach using the prior with a ridge parameter with the Bayesian Lasso. Note that in the case of smaller problems, the marginal likelihoods of all the models can be calculated using the method of \cite{Chib1995}, and it is then possible to compare them with the results obtained by the SSVS or the Bayesian Lasso approaches. \end{rmk} \subsection{Sensitivity analysis}\label{sensitivity} Concerning the variable selection coefficient $\tau$, the method of variable selection without the ridge parameter is not sensitive to its value (see \cite{Baragatti1}), but this is mainly due to the fact that the number of variables selected at each iteration of this algorithm was fixed. This is no longer the case for the algorithm proposed in this paper, hence it seems necessary to assess its sensitivity to this parameter. Therefore we studied the influences of $\tau$ and $\lambda$ when they are chosen as proposed in Section \ref{calibrate}, or arbitrarily.
We also looked at the behavior of the algorithm when the values of the $\pi_j$, the prior distribution parameters of $\sigma^2$, and the number of iterations vary.\\ For this sensitivity study we used the example with simulated data (Section \ref{simulateddata}) with 300 variables. The different values of the parameters are presented in Table \ref{Tab:tabsensiridge}. In this table, the number of relevant variables in the final selections of the runs is given, the twelve relevant variables being $V1,V2,V3,V4,V5,V281$, $V282,V283,V284,V285,V291$ and $V292$. The sensitivity was assessed by using the relative weighted consistency measure of \cite{Somol2008}, denoted by $CW_{rel}$. It is a measure evaluating how much the subsets of selected variables from several runs overlap, and it shows the relative amount of randomness inherent in the variable selection process. It takes values between 0 and 1, where 0 represents the outcome of a completely random occurrence of variables in the selected subsets and 1 indicates the most stable variable selection outcome possible. \begin{table}[!h] \hspace{-1cm} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \rowcolor{lightgray} & & & & Value & Prior & Iterations & Nb of & \\ \rowcolor{lightgray} Run & $\tau_0$ & $\tau$ & $\lambda$ & of $\pi_j$ & for & post burn-in & relevant & $CW_{rel}$\\ \rowcolor{lightgray} & & & & $\forall j$ & $\sigma^2$ & (burn-in) & variables & \\ \hline 1 & 10 & 10.00035 (\ref{choixtau}) & & & & & 3 & \\ 2 & 50 & 50.00885 (\ref{choixtau}) & & & & & 8 & \\ 3 & 100 & 100.0354 (\ref{choixtau}) & & & & & 8 & \\ 4 & 1000 & 1003.553 (\ref{choixtau}) & & & & & 8 & \\ 5 & 10000 & 10367.03 (\ref{choixtau}) & \multirow{-5}{*}{$1/p=1/300$} & \multirow{-5}{*}{5/300} & \multirow{-5}{*}{$\mathcal{IG}(1,1)$} & \multirow{-5}{*}{4000 (1000)} & 8 & \multirow{-5}{*}{0.857}\\ \hline 6 & & & $1/p=1/300$ & & & & 8 & \\ 7 & & & $100/p=1/3$ & & & & 8 & \\ 8 & \multirow{-3}{*}{(\ref{choixtau}) not} & & $1$ & & & & 8 & \\ 9 & \multirow{2}{*}{used} & & $10$ & & & & 8 & \\ 10 & & \multirow{-5}{*}{100} & $100$ & \multirow{-3}{*}{5/300} & \multirow{-3}{*}{$\mathcal{IG}(1,1)$} & \multirow{-3}{*}{4000 (1000)} & 3 & \multirow{-5}{*}{0.8}\\ \hline 11 & & 10 & $1/p=1/300$ & & & & 5 & \\ 12 & & 10 & 10 & & & & 3 & \multirow{-2}{*}{0.348}\\ 13 & \multirow{-3}{*}{(\ref{choixtau}) not} & 1000 & $1/p=1/300$ & & & & 0 & (0.639\\ 14 & \multirow{2}{*}{used} & 1000 & 10 & & & & 8 & without \\ 15 & & 100 & $100/p=1/3$ & \multirow{-5}{*}{5/300} & \multirow{-5}{*}{$\mathcal{IG}(1,1)$} & \multirow{-5}{*}{4000 (1000)} & 8 & run 13) \\ \hline 16 & & & & $5/300$ & & & 8 & \\ 17 & & & & $50/300$ & & & 12 & \\ 18 & \multirow{-3}{*}{100} & \multirow{-3}{*}{100.0354 (\ref{choixtau})} & \multirow{-3}{*}{1/p=1/300} & $100/300$ & \multirow{-3}{*}{$\mathcal{IG}(1,1)$} & \multirow{-3}{*}{4000 (1000)} & 12 & \multirow{-3}{*}{0.848}\\ \hline 19 & & & & & $\mathcal{IG}(1,1)$ & & 8 & \\ 20 & & & & & $\mathcal{IG}(2,5)$ & & 8 & \\ 21 & \multirow{-3}{*}{100} & \multirow{-3}{*}{100.0354 (\ref{choixtau})} & \multirow{-3}{*}{1/p=1/300} & \multirow{-3}{*}{5/300} & $\mathcal{IG}(5,2)$ & \multirow{-3}{*}{4000 (1000)} & 8 & \multirow{-3}{*}{1} \\ \hline 22 & & & & & & 500 (500) & 8 & \\ 23 & & & & & & 4000 (1000) & 8 & \\ 24 & \multirow{-3}{*}{100} & \multirow{-3}{*}{100.0354 (\ref{choixtau})} & \multirow{-3}{*}{1/p=1/300} & \multirow{-3}{*}{5/300} & \multirow{-3}{*}{$\mathcal{IG}(1,1)$} & 40000 (10000) & 8 & \multirow{-3}{*}{1}\\ \hline \end{tabular} \caption{\small{Parameters of the runs for the
sensitivity study and associated relative weighted consistency measure of Somol and Novovicova, $CW_{rel}$. }} \label{Tab:tabsensiridge} \end{table} The algorithm was generally not sensitive to the values of the hyper-parameters, since most of the relevant variables were usually selected. The boxplots obtained were often similar to the right boxplot of Figure \ref{Boxplot:boxplotSimu}. In particular, the algorithm was not overly sensitive to the values of $\tau$ and $\lambda$. There was only one run (the 13th) where no variable could really be distinguished from the others, and none of the top-ranked variables was a relevant one, see Figure \ref{Boxplot:boxplotsensibSimu}. This run corresponds to a large $\tau$ and a small $\lambda$. The runs 17 and 18 are also noticeable, as all relevant variables were finally selected, see Figure \ref{Boxplot:boxplotsensibSimu}. They correspond to high values of $\pi_j$, and the cost for these runs was a longer computational time. Finally, we observed that the values of $\tau$ and $\pi_j$ play a role in the number of variables selected at each iteration of the algorithm. The value of $\tau$ modified the distribution of this number, see Figure \ref{nbvarSimuNonSing-c}. Besides, this number increased with the value of $\pi_j$, see Figure \ref{nbvarSimuNonSing-pi}. However, even if the number of variables selected at each iteration of the algorithm was high, it did not influence the final selections of the runs, and it did not influence the number of variables which were distinguishable from the others. \begin{figure}[!h] \begin{center} \includegraphics[scale=0.5]{boxplotsensibSimu.eps} \end{center} \caption{\small{Boxplot of the number of selections of a variable after the burn-in period, for two runs with 300 variables.}} \label{Boxplot:boxplotsensibSimu} \end{figure} \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{nbvarSimuNonSing-c.eps} \end{center} \caption{\small{Number of iterations of the runs 1, 4 and 5 associated with a number of selected variables from 1 to 14. For each run, there were 4000 post burn-in iterations.}} \label{nbvarSimuNonSing-c} \end{figure} \FloatBarrier \begin{figure}[!h] \begin{center} \includegraphics[scale=0.4]{nbvarSimuNonSing-pi.eps} \end{center} \caption{\small{Number of iterations of the runs 16, 17 and 18 associated with a number of selected variables from 1 to 100. For each run, there were 4000 post burn-in iterations.}} \label{nbvarSimuNonSing-pi} \end{figure} \section{Discussion}\label{Discussion} Classical stochastic search variable selection methods often propose the use of the $g$-prior of Zellner. This prior cannot be used if $p>n$, or if some variables are linear combinations of others. In particular, this last case can occur when several datasets with common covariates are merged. The prior for $\beta_{\gamma}$ studied in this manuscript is a possible alternative, and is a reparametrization of the prior of \cite{GuptaIbrahim} in which $\tau$ and $\lambda$ can be chosen independently. In this case the parameter $\tau$ does not influence the coefficient of the identity matrix. Using this prior, a way to jointly choose $\tau$ and $\lambda$ was suggested, and the results obtained on simulated data and on a real dataset were good and stable, whether some variables were linear combinations of others or not.
Moreover, when $\tau$ and $\lambda$ were chosen independently, the proposed method proved to be robust to the choices of these hyper-parameters, as shown in the sensitivity analysis.\\ In practice, even if $\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma}$ is theoretically invertible, some variables can be highly correlated and $\mathbf{X}_{\gamma}^T \mathbf{X}_{\gamma}$ can be computationally singular. Moreover, we do not necessarily know whether some variables are linear combinations of others, so to avoid computational problems we suggest using the prior and the algorithm proposed in this paper in all cases. Once a final selection of variables, denoted by $\gamma +$, is obtained by our algorithm, the rank of the matrix with all the variables finally selected, denoted by $X_{\gamma +}$, should be computed. If this matrix is not of full rank, we can take a submatrix of $X_{\gamma +}$ of full rank as a new data matrix. Note that it is easier to take linearly independent columns of $X_{\gamma +}$ than linearly independent columns of $X$, especially if $p$ is quite large.\\ We compared the results of the proposed SSVS method using a prior with a ridge parameter with results obtained from the competing approach of the Bayesian Lasso. This last approach can also be used when some variables are linear combinations of others or if $p>n$. Moreover, its implementation is quite easy, and does not require Metropolis-Hastings steps. Compared to the SSVS method proposed in \ref{SSVS}, an iteration is then less computationally demanding. However, the Bayesian Lasso approach seems less practical than the SSVS approach. Indeed, in formula (\ref{fullbetalasso}) it is $\mathbf{X}$ which is used and not a submatrix $\mathbf{X}_{\gamma}$ as in the SSVS approach. The computing time to multiply or to invert matrices is then higher, and it could become an issue if $p$ is very large. Besides, on the previous simulation and example, it appeared that more iterations are needed by the Bayesian Lasso compared to the SSVS approach (20000 vs 5000). Concerning the results, it appeared that the runs of the Bayesian Lasso were less stable than those of the SSVS approach, and that more ``noise'' was observed, since several non-relevant or less relevant variables were selected in only one of the ten runs. It is important to note that we adapted a simple Bayesian Lasso approach to probit mixed models, but many extensions of the classical Lasso exist and can be adapted in Bayesian approaches, like the fused Lasso (\cite{Tibshirani2005}), the group Lasso (\cite{YuanLin2006}) or the Elastic Net (\cite{ZouHastie2005}) for instance. Recently, \cite{Kyung2010} showed how to adapt these extensions in Bayesian approaches, and it could be interesting to compare them to the SSVS approach.\\ Several extensions of the proposed SSVS method using a prior with a ridge parameter can be considered in the future. First, in classical cases using the $g$-prior, many authors suggested putting prior distributions on $\tau$, see Section \ref{intro}. Following them, an idea could be to put prior distributions on the hyper-parameters $\tau$ and $\lambda$. However, these authors often used Bayes Factors \citep[see for instance][]{CeleuxMarinRobert} and not a latent $\gamma$ vector as done in this paper. They were then more in the spirit of model selection than in the spirit of variable selection. Finally, it would be interesting to have an unsupervised criterion to decide which variables should be in the final selection of a run.
We suggested representing the vector of the numbers of iterations during which the variables have been selected by a boxplot, and using a threshold to decide which variables should be in the final selection. However, a more formal criterion could be developed.
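As a minimal illustration of this informal criterion, the following sketch (Python; the selection counts are synthetic and purely illustrative, they are not the counts of the runs reported above, and a single run with 4000 post burn-in iterations is assumed) extracts a final selection from the per-variable numbers of selections:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# counts[j] = number of post burn-in iterations in which variable j
# had gamma_j = 1 (synthetic values for one hypothetical run)
rng = np.random.default_rng(1)
counts = rng.poisson(40, size=300)
counts[[280, 281, 282, 284, 291]] = [3500, 3400, 3600, 3300, 3550]

# Boxplot of the counts: frequently selected variables appear as outliers
plt.boxplot(counts)
plt.ylabel("number of selections after burn-in")
plt.savefig("selection_counts_boxplot.png")

# Informal rule: keep the variables whose count exceeds a threshold chosen
# from the boxplot (here, well above the upper whisker)
threshold = 1000
final_selection = np.flatnonzero(counts > threshold) + 1   # back to 1-based labels
print(final_selection)    # e.g. [281 282 283 285 292]
\end{verbatim}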
# Coverage Ratio Definition

### What is a coverage ratio?

A coverage ratio, roughly speaking, is one of a set of measures of a firm's ability to repay debt and meet financial obligations such as paying interest or dividends. The higher the coverage ratio, the easier it should be to pay interest on debt or pay dividends. The trend in coverage ratios over time is also studied by analysts and investors to track the development of a company's financial situation.

### Key points to remember

- Coverage ratios come in many forms and can be used to help identify companies in a potentially problematic financial situation.
- A coverage ratio, roughly speaking, is a measure of a company's ability to repay its debt and meet its financial obligations. The higher the coverage ratio, the easier it should be to pay interest on debt or pay dividends.
- Common coverage ratios include the interest coverage ratio, the debt service coverage ratio and the asset coverage ratio.

### What does a coverage ratio tell you?

Coverage ratios come in many forms and can be used to help identify companies in a potentially troubled financial situation, although low ratios are not necessarily an indication that a company is in financial difficulty. Many factors come into play in determining these ratios, and further analysis of a company's financial statements is often recommended to check the health of a business.

Net income, interest expense, outstanding debt and total assets are just a few examples of the elements of the financial statements that need to be examined. To determine whether the company is still a viable business, it is also necessary to look at liquidity and solvency ratios, which assess a company's ability to pay its short-term debt (i.e. to convert assets into cash).

Investors can use coverage ratios in two ways. First, they can track the evolution of the company's debt situation over time. In cases where the debt service coverage ratio is barely within the acceptable range, it may be wise to look at the company's recent history. If the ratio has been gradually decreasing, it may only be a matter of time before it falls below the recommended figure.

Coverage ratios are also valuable when comparing a company with its competitors. It is imperative to evaluate similar companies, because an acceptable interest coverage ratio in one industry may be considered risky in another. If the company being evaluated seems out of step with its main competitors, this is often a red flag.

Although comparing the coverage ratios of companies in the same industry or sector can provide valuable information about their relative financial situation, it is not as useful to do so between companies in different sectors, because it could be like comparing apples and oranges. Common coverage ratios include the interest coverage ratio, the debt service coverage ratio and the asset coverage ratio. These coverage ratios are summarized below.

### Types of coverage ratios

#### Interest coverage ratio

The interest coverage ratio measures a company's ability to pay the interest expense on its debt. The ratio, also called the times interest earned ratio, is defined as:

Interest coverage ratio = EBIT / Interest expense

where EBIT is earnings before interest and taxes.

An interest coverage ratio of two or more is generally considered satisfactory.

#### Debt service coverage ratio

The debt service coverage ratio (DSCR) measures the extent to which a business is able to pay its entire debt service. Debt service includes all principal and interest payments to be made in the short term. The ratio is defined as:

DSCR = Net operating income / Total debt service

A ratio of one or more indicates that a business generates sufficient income to fully cover its debt payments.

#### Asset coverage ratio

The asset coverage ratio is similar in nature to the debt service coverage ratio, but it examines assets on the balance sheet rather than comparing income to debt levels. The ratio is defined as:

Asset coverage ratio = (Total assets − Short-term liabilities) / Total debt

where total assets means tangible assets such as land, buildings, machinery and inventory.

In general, utilities should have an asset coverage ratio of at least 1.5, and industrial enterprises should have an asset coverage ratio of at least 2.

### Other coverage ratios

Several other coverage ratios are also used by analysts, although they are not as prominent as the three above:

- The fixed-charge coverage ratio measures a firm's ability to cover fixed costs, such as debt payments, interest expense, and equipment rental costs. It shows how well a company's profits can cover its fixed expenses. Banks often take this ratio into account when assessing whether to lend money to a business.
- The loan life coverage ratio (LLCR) is a financial ratio used to estimate the creditworthiness of a business, or the ability of a borrowing company to repay an outstanding loan. The LLCR is calculated by dividing the net present value (NPV) of the money available for debt repayment by the amount of outstanding debt.
- The EBITDA-to-interest coverage ratio is used to assess the financial sustainability of a business by examining whether it is at least profitable enough to pay its interest expense.
- The preferred dividend coverage ratio measures a company's ability to pay its required preferred dividends. Preferred dividend payments are the expected dividend payments to be paid on the preferred shares of the company. Unlike common shares, dividend payments for preferred shares are set in advance and cannot be changed from quarter to quarter. The company is required to pay them.
- The liquidity coverage ratio (LCR) refers to the proportion of highly liquid assets held by financial institutions in order to guarantee their continued ability to meet their short-term obligations. This ratio is essentially a generic stress test which aims to anticipate market-wide shocks and to ensure that financial institutions preserve appropriate capital, in order to avoid any disruption of short-term liquidity that could affect the market.
- The capital loss coverage ratio is the difference between the book value of an asset and the amount received from a sale, relative to the value of the non-performing assets being liquidated. The capital loss coverage ratio is an expression of the amount of transaction assistance provided by a regulator to attract an external investor.

### Example of coverage ratios

To see the potential difference between coverage ratios, consider a fictional company, Cedar Valley Brewing. The company generates quarterly earnings of $200,000 (EBIT is $300,000) with corresponding interest payments of $50,000. Because Cedar Valley did a large portion of its borrowing during a period of low interest rates, its interest coverage ratio looks extremely favorable:

Interest coverage ratio = $300,000 / $50,000 = 6.0

However, the debt service coverage ratio reflects a large principal amount ($140,000) that the company pays each quarter, bringing total debt service to $190,000. The resulting figure of 1.05 leaves little room for error if the company's sales take an unexpected hit:

DSCR = $200,000 / $190,000 = 1.05

Even though the company generates positive cash flow, it looks riskier from a debt perspective once debt service coverage is taken into account.
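A quick way to sanity-check these figures is to compute the two ratios directly; the short Python snippet below simply reproduces the arithmetic of the example above (the input values are taken from the text, nothing else is assumed):

```python
# Worked check of the Cedar Valley Brewing figures above (illustrative only).
ebit = 300_000
interest_expense = 50_000
net_operating_income = 200_000
total_debt_service = 190_000   # principal (140,000) + interest (50,000) due this quarter

interest_coverage = ebit / interest_expense          # 6.0
dscr = net_operating_income / total_debt_service     # about 1.05

print(f"Interest coverage ratio: {interest_coverage:.2f}")
print(f"Debt service coverage ratio: {dscr:.2f}")
```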
Liberty holds off VMI Published Saturday, Jan. 22, 2011, 8:19 am Jesse Sanders had 21 points and 13 assists, David Minaya scored 22 points and the Liberty Flames defeated the VMI Keydets 100-82 in a Big South showdown played at Liberty University in Lynchburg, Va. For the game, Liberty shot 56.3% from the field, including 9 of 11 from three-point range (81.8%). The Flames (14-7, 8-1 Big South) placed five players in double figures, and had 26 assists to VMI's 15. Minaya's point total and Sanders' assist mark were both career highs. VMI (11-8, 4-5 BSC) was paced by Austin Kenon, who had VMI's first eight points of the night and finished with 29. The current Big South Player of the Week hit five three-pointers, tying him with Chavis Holmes '09 for the most made treys in Keydet history (306). The Keydets lost for the first time on the road in Big South action, falling to 4-1 in such contests this season. The Keydets jumped out to the early lead, starting the game on an 8-2 run behind Kenon's strong start. Liberty responded with seven straight points, and took its first lead of the contest at the 16:58 mark, 9-8, but VMI went right back on top moments later when Keith Gabriel hit a three-pointer, making it 11-9, VMI. After an exchange of baskets, Jesse Sanders tied the game with a layup, but Ron Burks gave VMI a 16-13 edge at the 14:14 mark, converting an old-fashioned three-point play. The visitors would hold the lead for the next six minutes, stretching it out to a game-high tying six on a pair of occasions and holding a four-point edge at the 10:20 mark, 24-20. At that point, Liberty launched the game's decisive run, a 7-0 spurt that culminated with five straight points by Minaya, giving the home team a 27-24 lead. VMI would not lead again. After several minutes of back-and-forth action, the Flames' lead hit double-digits for the first time at the 3:07 mark, on yet another basket by Minaya, but VMI scored the next five straight points to cut the margin to five, 38-33. Liberty had the final four points of the half, and took a 42-33 lead to the locker room at intermission. Kenon led VMI at halftime with 16 points, including 4 of 5 from three-point range. The Keydets shot 46.2% from the floor in the opening period, but Liberty was 17 of 31 from the floor (54.8%) to hold the advantage. The teams exchanged two-point possessions for the first four-plus minutes of the second half, until Ron Burks converted a layup with 15:40 left, cutting the Liberty lead to four, 48-44. From there, the home team gradually pulled away, capping their efforts with a 10-0 run. Jeremy Anderson hit back-to-back three-pointers, with the final trey coming at the 11:36 mark, and Liberty held a 17-point cushion, 63-46. VMI could get no closer than 15 the rest of the way, and Liberty went up by a game-high 25 with 1:40 to go, 100-75. The Keydets responded by scoring the final seven points of the game, accounting for the final margin. In addition to Kenon's exploits, the Keydets saw Stan Okoye notch 17 points, while Ron Burks (13) and Keith Gabriel (10) were also in double figures. Kenon added a team-high five assists, while Glasgow chipped in four dimes as well. For Liberty, Evan Gordon and John Brown had 14 points apiece, while Brown, the Big South's leading rebounder, added nine boards to help the Flames hold a 39-26 edge on the glass. VMI freshman center D.J. Covington did not play due to a thumb injury.
Revision History
================

3.7.8-SNAPSHOT
- don't print supervisor names during each schedule cycle

3.7.7-SNAPSHOT
- use partition file with number-of-partitions

3.7.6-SNAPSHOT
- only do the scheduling if supervisors could be found.

3.7.5-SNAPSHOT
- do the regular scheduling also

3.7.4-SNAPSHOT
- new metis file scheduler

3.7.2-SNAPSHOT
- use logger in scheduler

3.7.1-SNAPSHOT
- remove zookeeper map (conflict with zookeeper version) --> we use zookeeper 3.3.3 (for storm 0.8.2)

3.7.0-SNAPSHOT
- scheduler test

3.6.4-SNAPSHOT
- make number of workers configurable again

3.6.2-SNAPSHOT
- Support for distributed processing (multi worker capabilities, TerminationMonitor capabilities)

3.5.0
- version used for SSWS/ISWC'13 submission.

3.5.0-SNAPSHOT
- fixed bug in FileGraphPatternReader where it would emit bindings even though not all triple patterns were matched, but only all variables had been bound.

3.4.2-SNAPSHOT
- json-sendgraph uses real ids and not mapped ones

3.4.1-SNAPSHOT
- graph pattern reader now outputs all matches

3.3.8-SNAPSHOT
- use curator framework to talk to Zookeeper

3.3.6-SNAPSHOT
- fixed bug in recorder that would generate zk-nodes multiple times

3.3.2-SNAPSHOT
- write triple counts to zookeeper

3.2.3-SNAPSHOT
- reverted process method in aggregator

3.2.2-SNAPSHOT
- support for file output in Aggregator

3.1.0-SNAPSHOT
- fromDate and toDate on fileGraphReader

3.0.5-SNAPSHOT
- new program argument "--max-spout-pending"

3.0.4-SNAPSHOT
- increase zookeeper timeout to 30 seconds

3.0.3-SNAPSHOT
- 5k unacked tuples

3.0.2-SNAPSHOT
- copy start and end dates in ExpressionFilterBolt and ExpressionFunctionBolt

3.0.1-SNAPSHOT
- 1 million unacked messages

3.0.0-SNAPSHOT
- Ripped out the "parallelism magic"
- bufferTimeout and waitTimeout configurable

2.5.0-SNAPSHOT
- ScheduledThreadPoolExecutor instead of TimerTask
- TOPOLOGY_MAX_SPOUT_PENDING set to 100k
- 1h ack-timeout
- 10 acker executors
- 15 minutes wait timeout

2.4.1-SNAPSHOT
- 15 seconds waitTimeout

2.4.0-SNAPSHOT
- new AbstractSynchronizedBold implementation using the SortedTimeoutBuffer

2.3.12-SNAPSHOT
- reformat out-of-order message

2.3.8-SNAPSHOT
- graph reader reads one line at the time until it could emit at least one tuple

2.3.7-SNAPSHOT
- print batch date

2.3.6-SNAPSHOT
- Copy bound variables in bindings

2.3.5-SNAPSHOT
- ExpressionFunction back to normal.

2.3.4-SNAPSHOT
- ExpressionFunction just acks everything...

2.3.3-SNAPSHOT
- remove synchronization from TemporalJoinBolt

2.3.1-SNAPSHOT
- print name of source when messages fail

2.3.1-SNAPSHOT
- moved readToLineNo variable to the source
- termination monitoring now supports multiple sources, but still only works correctly if run on only one machine.

2.2.3-SNAPSHOT
- fileSourcePath as variable in graphreader
- new property toLineNo in sourceFile descriptor

2.2.2-SNAPSHOT
- fixed bug: "D" is not a double

2.2.1-SNAPSHOT
- fixed bug in expression filter which would only allow "inherited" variable configuration

2.2.0-SNAPSHOT
- first version of sub graph reader

2.1.5-SNAPSHOT
- don't log acker messages

2.1.4-SNAPSHOT
- using a comparator in ElasticPriorityQueue

2.1.0-SNAPSHOT
- acking facility activated again.
- end of run detection depends on acks now

2.0.0-SNAPSHOT
- removed heartbeat

1.3.4-SNAPSHOT
- concurrent messagerecorder

1.3.3-SNAPSHOT
- value conversion does not rely on exceptions anymore.

1.3.2-SNAPSHOT
- new timeformat and loglevel for TerminationBolt
- don't blow up if storm info is written to zookeeper multiple times

1.3.1-SNAPSHOT
- parallelism support for TripleFilter

1.3.0
- support for n5 files as source
- minAggregator

1.2.0
- store evaluation parameters in google spreadsheet
- counts without heartbeat
- counts with sources

1.1.16-SNAPSHOT
- store evaluation parameters in google spreadsheet
- counts without heartbeat
- counts without sources

1.1.15-SNAPSHOT
- counts without heartbeat
- counts without sources

1.1.13-SNAPSHOT
- counts without heartbeat

1.1.12-SNAPSHOT
- counts with heartbeat
- add watcher over and over again

1.1.11-SNAPSHOT
- counts with heartbeat
- fast and the furious

1.1.9-SNAPSHOT
- counts with heartbeat
- 5 seconds delay

1.1.8-SNAPSHOT
- counts with heartbeat
- cached file source

1.1.7-SNAPSHOT
- counts with heartbeat
- correct usage of MESSAGE_RECORDER_FINISHED_PATH

1.1.6-SNAPSHOT
- counts without heartbeat

1.1.5-SNAPSHOT
- counts with heartbeat

1.1.3-SNAPSHOT
- heartbeat
- new cachedFileSource

1.1.2-SNAPSHOT
- heartbeat

1.1.1-SNAPSHOT
- no heartbeat

1.1.0
- include evaluation and waiter projects into the main project
- no heartbeat

1.0.5
- dummy release (lorenz)

1.0.4
- dummy release (tom)

1.0.3
- include maven repository and scm configuration

1.0.2
- use storm 0.8.2

1.0.0
- master thesis of TH
Still Not 'Lovin' it'? Battle Over McDonald's Back On by: Melissa Reid OHIO CITY--The battle to keep a McDonald's restaurant out of Ohio City is back on. "We want this to be a bicycle-friendly, pedestrian-friendly neighborhood," said Ward 3 Councilman Joe Cimperman. Councilman Cimperman represents some residents strongly opposed to the idea of a McDonald's in Ohio City. "We know that if you do a 2-gauge, 24-hour drive-thru on a side street with schools and playgrounds, you are going to put our children in jeopardy,"said Councilman Cimperman. Concerns have resurfaced after the owner of McDonald's appealed the planning commission's decision back in November. They were denied permission to build a restaurant at the corner of Lorain Road and Fulton Avenue. "By this appeal, they basically said that the planning commission made an arbitrary decision that they didn't have the grounds to make the decision that they did. The truth is the planning commission makes a thousand decisions a year that aren't challenged," said Councilman Cimperman. Residents have voiced concerns about extra traffic, and the potential danger posed to pedestrians, including school children. "We want local businesses around here. And we want the bike path to make it more walkable environment rather than cars backed up because of a drive-thru," said Justin Carson, local business owner. Other residents indicated that a McDonald's did not fit the neighborhood atmosphere of Ohio City. "The people have their own feel here. They kind of stay different than the rest of Cleveland. You can tell with the houses and the businesses. I think it would bring it down a little," said Jiovanni Semidy, local business owner. The owner of the restaurant has said in the past, he was a small businessman who partnered with a big company, and that he would "respect and honor" any neighborhood where he did business. "I don't understand why everyone is so upset. It wouldn't bother me," said Curtis Griffin, resident. But the residents who are bothered plan to have their voices heard by attending the board of zoning appeals meeting Tuesday morning. The meeting is set to get under way Tuesday morning at 9:30 in Cleveland City Hall. READ MORE on the story.
Directions to post image to the web. To put this image into your MySpace or Xanga profile please copy and paste the code below into your profile. Directions to import image to MSN Messenger. 1. Right click the avatar and click "save picture as". 2. When the dialog box opens, browse to the folder where you want to store this image, and then click OK. 3. From within MSN Messenger, you can import your display picture.
\section{Introduction} \label{sec1} The response of quantum systems to external perturbations is a problem of paramount importance in many areas of physics. Many of the properties of complex quantum systems change dramatically when the system is perturbed, generating fundamental phenomena such as quantum phase transitions, irreversibility or dissipation. The present development of experimental techniques in complex quantum systems makes the understanding and characterization of the effect of perturbations highly desirable. A particularly suitable quantity to characterize the effects of perturbations on quantum systems is the local density of states (LDOS). The LDOS, also called strength function, was introduced by Wigner \cite{wigner} to understand the statistical properties of the wave functions of complex quantum systems. The LDOS is the profile of an eigenstate of an unperturbed quantum system over the eigenbasis of its perturbed version. To be more specific, let us consider a system with a one-parameter dependent Hamiltonian $H(k)$ with eigenfrequencies $\omega_j(k)$ and eigenstates $|\psi_{j}(k)\rangle$. The LDOS of an eigenstate $|\psi_{i}(k_0)\rangle$ (that we call unperturbed) is given by \begin{equation} \rho_{i}( \omega,\delta k)= \sum_{j} \vert\langle\psi_{j}(k)\vert\psi_{i}(k_{0})\rangle\vert^{2}\delta(\omega-\omega_{ij}(k,k_0)) , \label{ldos} \end{equation} with $\omega_{ij}(k,k_0)=\omega_i(k_0)-\omega_j(k)$ and $\delta k \equiv k-k_0$ the perturbation strength. Eq. \ref{ldos} shows that the LDOS is a density of states in which the delta functions are weighted by the overlaps between perturbed and unperturbed states. In addition, the LDOS width gives an estimate of how many perturbed states contribute to an unperturbed one. Besides, it is the Fourier transform of the fidelity amplitude (FA) of the state $|\psi_{i}(k_0)\rangle$, \begin{equation} \rho_{i}( \omega,\delta k)= {\cal{F} } [ \langle\psi_{i}(k_0)\vert e^{i H(k) t/\hbar}e^{ -i H(k_0) t/\hbar} \vert \psi_{i}(k_{0})\rangle]. \end{equation} Both the FA and its absolute square value, called the Loschmidt echo, are important measures of sensitivity to perturbations and irreversibility of quantum evolutions \cite{jalabert,prosen-rev,jacquod,diego,scholarpedia}. The LDOS has been considered in many contexts. In a seminal paper, Wigner studied the LDOS in a simple model of banded random matrices \cite{wigner}. Subsequently, many authors have used the LDOS to characterize the structure of the eigenstates of different random matrix models \cite{casati1, casati2, jacquod1}. The LDOS has also been studied in several microscopic systems, such as a Ce atom \cite{Flambaun1}, chaotic billiards \cite{Doron1}, or a particle that evolves in a smooth Hamiltonian \cite{cohen2}. In addition, the LDOS has been studied to characterize the effect of perturbations on the operation of quantum computers in the presence of static imperfections \cite{georgeot,casati3}. It was shown that, depending on the characteristics of the system, the LDOS has several regimes as a function of the perturbation strength $\delta k$. However, all the mentioned studies have revealed a region of perturbation strength in which the LDOS has a Lorentzian shape, usually called the Breit-Wigner distribution. A step forward has been recently made in the understanding of the LDOS for chaotic systems \cite{Natalia}.
Its relation with the FA has been exploited to develop a semiclassical theory of the LDOS for locally perturbed billiards or maps, that is, when the perturbation is concentrated in a small region of the phase space accessible to the system. It was shown that the LDOS has a Lorentzian shape under very general perturbations of arbitrarily high intensity, and a semiclassical expression for its width was derived. This expression only depends on the perturbation, while the properties of the system are taken into account through a uniform measure in phase space. The same results were obtained in a subsequent publication for maps that are globally perturbed, but whose dynamics was assumed to be completely random \cite{Nacho1}. The aim of our study is to test the validity of the semiclassical theory of Refs. \cite{Natalia, Nacho1} in quantum maps when the perturbation is applied over the whole phase space and the dynamics of the classical map is not completely random. We also consider perturbations that act in a region of the phase space. We study the behavior of the LDOS for maps with different degrees of chaoticity and different perturbation strengths. For this purpose we consider two of the most paradigmatic systems of quantum chaos studies: the perturbed cat map and the Harper map. We show that the semiclassical approximation of the width of the LDOS works very well even for systems with mixed dynamics in which chaos coexists with regular islands. The predicted Lorentzian shape of the LDOS is observed for highly chaotic maps or when the perturbation is strong enough. The paper is organized as follows. In Sec. \ref{secmaps} we introduce the dynamical systems that we have used for the numerical study, the cat and the Harper maps. In this section, we describe the main characteristics of the classical and quantum dynamics of the maps. Sec. \ref{sec2} is devoted to presenting the semiclassical theory of the LDOS \cite{Natalia, Nacho1}. The starting point of this theory is a semiclassical approximation of the fidelity amplitude called the dephasing representation \cite{vanicek}. In Sec. \ref{results} we study the behavior of the LDOS for the systems in several situations and test the validity of the semiclassical theory. We consider various degrees of chaoticity and perturbation strengths. We also compare the cases of local and global perturbations. Finally, we conclude with a summary of our results and some final remarks in Sec. \ref{conclu}. \section{Systems: maps on a torus} \label{secmaps} A usual procedure to understand complex behavior is to consider very simple systems in which such phenomena are observed. The simplest dynamical systems which develop all types of complexity are abstract maps. Due to their simplicity, classical and quantum maps have been very important in the development of classical and quantum chaos \cite{Hannay, Balazs-Voros, Keating}. Furthermore, many quantum maps have been implemented experimentally in previous studies \cite{Exp 1,Exp 2, Exp 3}. In this paper we have used maps acting on a torus phase space of area ${\cal{A} }= 1$. In particular we have considered the well known cat and Harper maps. These maps possess all the essential ingredients of chaotic and mixed dynamics and are extremely simple from a numerical point of view. The cat maps are linear automorphisms of the torus that exhibit hard chaos.
Anosov's theorem \cite{Anosov} establishes that the cat maps are structurally stable, that is, the orbits of a slightly perturbed map are conjugate to those of the unperturbed map by a homeomorphism. A perturbation of a cat map can be represented by matrices acting on the coordinates \begin{equation}\left[ \begin{array}{c} q^{'} \\ p^{'} \end{array} \right] = G \left[ \begin{array}{c} q \\ p \end{array} \right] + \left[ \begin{array}{c} 0 \\ 1 \end{array} \right] \epsilon (q, k) \hspace{0.5cm} \left( \mathrm{mod}\; 1\right),\label{Gatos}\end{equation} where $G$ is a $2 \times 2$ matrix with integer elements chosen such that $\mathrm{Tr} \left(G \right)> 2$ and $\det(G)=1$, so that the maps are hyperbolic and conservative. We consider a perturbation \begin{equation} \epsilon (q, k) = ( k/2 \pi)[\cos(2\pi q)-\cos(4\pi q)], \label{catpert} \end{equation} with the perturbation strength $k<0.11$ to satisfy the Anosov theorem \cite{Matos,Anosov}. To take into account different degrees of chaoticity, in this paper we have considered the following matrices $G$, \begin{displaymath} G_1= \left( \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right) , \;\;\;\;\;\;\; G_2= \left( \begin{array}{cc} 80 & 1 \\ 6399 & 80 \end{array} \right). \end{displaymath} The corresponding Lyapunov exponents, which determine the rate of exponential divergence of classical trajectories, are $\lambda_1\approx 0.96$ and $\lambda_2\approx 5.07$. We note that $\lambda$ is approximately uniform over the whole phase space and nearly independent of $k$ \cite{Ares2}. Perturbed cat maps do not capture all the possible motions of Hamiltonian systems. The most common situation is a mixture of regular islands interspersed with chaotic regions. To consider this general situation, the model that we have chosen to study is the Harper map in the unit square \cite{Harper}, \begin{eqnarray} q^{'} &=& q- k\sin(2\pi p) \qquad ({\rm mod}\; 1), \nonumber \\ p^{'} &=& p+ k\sin(2\pi q^{'}) \qquad ({\rm mod}\; 1), \label{eq:1} \end{eqnarray} where $k$ is a parameter that controls the behavior of the system. This map can be understood as the stroboscopic version of the flow corresponding to the (kicked) Hamiltonian \begin{equation} H(p,q,t) = -\frac{1}{2\pi} \cos(2\pi p) - \frac{k}{2\pi} \cos(2\pi q) \sum_n \delta(t-n k). \label{eq:2} \end{equation} This is an approximate Hamiltonian for the motion of an electron in a crystal under the action of an external field. The Harper map presents a mixed dynamics that depends on the parameter $k$. Fig.~\ref{fighc} shows some phase space pictures for this model as an example of the underlying classical dynamics. As can be seen in Fig. \ref{fighc} (left panel), the system presents a mixed dynamics with regions of regularity around the origin and the corners coexisting with chaos, in agreement with the KAM theorem \cite{Anosov}. When the parameter $k=200$, the islands are so small that they cannot be observed without a finer resolution [see Fig. \ref{fighc} (right panel)]. \begin{figure}[h] \begin{center} \resizebox{0.9\linewidth}{!}{\includegraphics[width=8cm, angle=0]{HARPERCLASICO.ps}} \caption{ \label{fighc} Classical phase space of the Harper map for $k= 0.3$ (left panel) and $k= 200$ (right panel). See text for details.} \end{center} \end{figure}
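The classical portraits of Fig.~\ref{fighc} can be generated by direct iteration of the map (\ref{eq:1}). The following minimal sketch (in Python; the number of initial conditions and the number of iterations are illustrative choices and not necessarily those used for the figure) produces such phase-space plots:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def harper_step(q, p, k):
    # One iteration of the classical Harper map on the unit torus:
    # q' = q - k sin(2 pi p)  (mod 1),  p' = p + k sin(2 pi q')  (mod 1)
    q = (q - k * np.sin(2 * np.pi * p)) % 1.0
    p = (p + k * np.sin(2 * np.pi * q)) % 1.0
    return q, p

def phase_portrait(k, n_orbits=200, n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    q, p = rng.random(n_orbits), rng.random(n_orbits)
    qs, ps = [], []
    for _ in range(n_steps):
        q, p = harper_step(q, p, k)
        qs.append(q.copy())
        ps.append(p.copy())
    return np.concatenate(qs), np.concatenate(ps)

for k in (0.3, 200.0):
    q, p = phase_portrait(k)
    plt.figure()
    plt.plot(q, p, ",")
    plt.xlabel("q")
    plt.ylabel("p")
    plt.title("Harper map, k = %g" % k)
plt.show()
\end{verbatim}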
The quantization on the torus implies that the wave function should be periodic in both the position and momentum representations. If in the coordinate and momentum representations the wave function has period 1 with spacing $1/N$, it follows that $1= 2 \pi \hbar N$. Then, we have a Hilbert space of dimension $N$ for a fixed value of $\hbar$. As $N$ takes increasing values, we reach the semiclassical limit. The position basis $\{q_i\}_{i=0}^{N-1}$ (with $q_{i} = i/N$) and the momentum basis $\{p_i\}_{i=0}^{N-1}$ (with $p_i = i/N$) are related by the discrete Fourier transform. In this setting a quantum map is simply a unitary $U$ acting on an $N$-dimensional Hilbert space, and the evolution after $n$ steps is given by $U^n$. There is no general method for map quantization. For the perturbed cat map we have considered the quantization based on the classical propagator of Refs. \cite{Hannay,Matos}. In this case, the matrix elements of the propagator in the position basis are \begin{eqnarray} \ U^{C}_{k}(q',q)=\sqrt{\frac{N}{ig_{12}}} \exp \left[ \frac{i\pi N}{g_{12}}(g_{11}q^{2}-2q'q+g_{22}q'^{2}) \right] \nonumber\\ \times \exp \left[ \frac{i k N }{2\pi}(\sin(2\pi q)-\frac{1}{2}\sin(4\pi q)) \right], \label{pertq1} \end{eqnarray} where $g_{i,j}$ are the elements of the matrix $G$ and we have used $g_{12}=1$. For the Harper map \cite{Harper}, the matrix elements of the evolution operator in the mixed basis of position and momenta are \begin{equation} \label{eq:harp} U^{H}_{k}(q,p)=e^{i N k \cos (2 \pi q)} e^{i N k \cos (2 \pi p)}. \end{equation} \section{Semiclassical theory of the LDOS of chaotic maps} \label{sec2} The LDOS as defined in Eq. \ref{ldos} depends on the characteristics of the state $|\psi_i(k_0)\rangle$. To avoid any dependence on the particular characteristics of this state, an average over unperturbed states is performed. Due to the finite number of states in quantum maps, we average over the whole Hilbert space. Thus, the averaged LDOS $\rho (\omega,\delta k)$ is \begin{equation} \rho (\omega,\delta k) =\frac{1}{N}\sum_{i=1}^{N} \rho_{i} (\omega,\delta k) . \label{LDOSA} \end{equation} The inverse Fourier transform of Eq. \ref{LDOSA}, the so called average fidelity amplitude (AFA), is the starting point of a semiclassical approximation of the LDOS, \begin{eqnarray} \overline{ O(t,\delta k) } =\frac{1}{N} \sum_{i} \langle \psi_i(k_0) | e^{iH(k)t/\hbar} e^{-iH(k_0)t/\hbar} | \psi_i(k_0) \rangle . \label{transformada} \end{eqnarray} To evaluate Eq. \ref{transformada} we have used the so called dephasing representation, a semiclassical formulation of the fidelity amplitude which avoids the usual trajectory-search problem of standard semiclassics \cite{vanicek}. One of the forms of the FA obtained using the dephasing representation is \begin{eqnarray} O_{\phi}(t,\delta k) = \int W_{\phi}(q,p) e^{-i\Delta S_{t}(q,p,\delta k)/\hbar} dqdp , \end{eqnarray} where $\Delta S_{t}(q,p,\delta k)$ is the action difference evaluated along the unperturbed orbit starting at $(q,p)$ and evolved up to time $t$, and $W_{\phi}(q,p)$ is the Wigner function of the initial state $|\phi \rangle$. Then, \begin{eqnarray} \overline{O(t,\delta k)} = \int W(q,p) e^{-i\Delta S_{t}(q,p,\delta k)/\hbar} dqdp , \end{eqnarray} where $W(q,p)= (1/N) \sum W_i(q,p) $, with $W_i(q,p)$ being the Wigner function of $\vert \psi_i (k_0)\rangle$. For chaotic systems, the mean value of the Wigner function over a basis of eigenstates is approximately a uniform distribution, so $W(q,p)=1/V$ where $V$ is the volume of the phase space. Therefore, \begin{equation} \overline{ O(t, \delta k) } = \frac{1}{V} \int e^{-i\Delta S_{t}(q,p,\delta k)/\hbar} dqdp . \label{O(t)semi}
\end{equation} Time is discrete in maps, so from now on we use the integer $n$ to count time steps, and $V$ is the area of the phase space, which in our case is equal to unity. In order to solve Eq. \ref{O(t)semi} for maps we need to assume that trajectories become uncorrelated between two successive hits in the perturbed region. This approximation is valid when the perturbation acts on an infinitesimal portion of phase space \cite{Natalia,Goussev,Nacho2} or if the unperturbed dynamics of the system is completely random \cite{Nacho1}. Here we have considered the second case, the $\lambda \rightarrow \infty$ limit, by assuming that the dynamics is purely random. This evolution is completely stochastic in the sense that there is no correlation between different times of the evolution. Then, to compute $\overline{ O(n, \delta k) }$, we have divided the phase space into $N_c$ cells. The probability to jump from one cell to any other in phase space is uniform. Therefore it is straightforward to show that the mean FA is \begin{eqnarray} \overline{ O(n,\delta k) } &=& \frac{1}{N_c^{n}}\sum_{j_1} ... \sum_{j_n} e^{-i(\Delta S_{j_1} + ... + \Delta S_{j_n})/\hbar} \nonumber\\ &=& \left( \frac{1}{N_c}\sum_{j} e^{-i \Delta S_{j}/\hbar} \right) ^n , \end{eqnarray} where $\Delta S_{j_p}$ is the action difference evaluated in the cell $j_p$ visited at time $p$. The continuous limit is approached when $N_c\rightarrow \infty$, resulting in \begin{equation} \overline{ O(n,\delta k) } = \left( \int e^{-i \Delta S (q,p,\delta k) /\hbar } dqdp \right)^n , \label{integral} \end{equation} where $\Delta S(q,p,\delta k)$ is the action difference after one step of the map. The exponential decay of Eq. \ref{integral} can be rewritten as \begin{equation} \overline{ O(n,\delta k) } = e^{- \Gamma n +i \varphi n } \label{n}, \end{equation} with \begin{eqnarray} \Gamma &=& -\ln \left( \left|\int e^{-i \Delta S (q,p,\delta k)/\hbar} dq dp \right| \right) \label{1} \end{eqnarray} and \begin{eqnarray} \varphi &=& \arg \left(\int e^{-i \Delta S (q,p,\delta k)/\hbar} dq dp \right) . \label{2} \end{eqnarray} We note that $\Gamma$ and $\varphi$ depend on the perturbation strength $\delta k$. Now, we obtain the semiclassical expression for the average LDOS by the inverse Fourier transform of Eq. \ref{n}, \begin{equation} \rho_{sc}(\omega,\delta k) = {\cal{F}}^{-1}_{[\bar{O}]}(\omega,\Gamma,\varphi) = \frac{\Gamma }{\pi[(\omega-\varphi)^2 + \Gamma^2]} . \end{equation} The phase $\varphi$ determines the location of the center of the Lorentzian function and $\Gamma$ its width. Finally, we have to take into account the fact that the spectrum of a map is periodic because the phase space is compact. This periodicity changes the form of the LDOS into a periodized Lorentzian function \begin{eqnarray} {\rho}_{sc}(\omega,\delta k) &=& L^{(p)}(\omega,\Gamma,\varphi) \nonumber\\ &=& \sum _{j=-\infty} ^{\infty} \frac{\Gamma} {\pi[(\omega -\varphi- 2\pi j)^2+\Gamma^2]} . \label{LorPeriod} \end{eqnarray} The same semiclassical expressions for the LDOS were obtained in Refs. \cite{Natalia,Nacho2} when the perturbation acts in a region of the phase space of area $\alpha \rightarrow 0$. \begin{figure} \begin{center} \includegraphics[width=9.0cm, angle=0]{imf.eps} \caption{\label{FASE1}\footnotesize (Color online) Width $\sigma$ vs. $\Gamma$ for a periodized Lorentzian function [Eq. \ref{LorPeriod}]. The limit of $\sigma$ for $\Gamma \rightarrow \infty$, which corresponds to a constant LDOS, is also plotted with a red dotted line.
} \end{center} \end{figure} A quantity of physical interest is the width $\sigma$ of the LDOS, which is a measure of the number of perturbed states that are needed to describe an unperturbed one. Therefore, this quantity offers clear information about the effect of perturbations on a quantum system. Moreover, the width of the LDOS determines, for some regimes of the perturbation, the rate of fidelity decay under imperfect motion reversal (the Loschmidt echo). There are different ways of determining the width of a distribution. In our case we take the distance around the average value of the LDOS that contains 70\% of the probability. That is, \begin{equation} \int^{\langle \omega \rangle + \sigma}_{\langle \omega \rangle - \sigma} \rho(\omega, \delta k) d \omega = 0.7 , \label{ANCH} \end{equation} where \begin{equation} \langle \omega \rangle= \int^{\pi}_{-\pi} \omega \, \rho(\omega, \delta k) d \omega . \end{equation} We show in Fig. \ref{FASE1} the relation between the width $\sigma$ and $\Gamma$ for the periodized Lorentzian function of Eq. \ref{LorPeriod}. \section{Results} \label{results} \begin{figure} \begin{center} \includegraphics[width=9.0cm, angle=0]{FASEvsPERTGATO.ps} \caption{\label{FASEGATO}\footnotesize (Color online) $\Gamma$ and the phase $\varphi$ as a function of the scaled perturbation $\chi$ for the perturbed cat map. } \end{center} \end{figure} The main interest of a semiclassical theory is to describe quantum mechanical quantities using classical information. In this section we show the behavior of the LDOS for the quantum maps presented before and test the validity of the semiclassical approximation of the LDOS described in the previous section. The aim of this section is to compare the approximated $\rho_{sc}$ and $\sigma_{sc}$ with the corresponding exact quantum values. The latter are numerically computed by diagonalization of the evolution operators of Eqs. \ref{pertq1} and \ref{eq:harp}. The semiclassical approximation of the LDOS is completely determined by $\Gamma$ and $\varphi$, which are obtained from the calculation of the integral of Eq. \ref{integral}. To avoid the dependence of the results on the dimension of the Hilbert space $N$, we have considered all the studied quantities as a function of the scaled strength of the perturbation \begin{equation} \chi \equiv (k-k_{0})/(2 \pi \hbar) =\delta k N. \end{equation} In all the calculations included in this section the number of states of the Hilbert space is set to $N=2000$. \subsection{Perturbed cat map} The action difference for one iteration of the perturbed cat map described in Sec. \ref{secmaps} is given by \begin{equation} \Delta S (q,p,\delta k)= \left( \frac{\delta k}{4 \pi^2} \right) \left[ \sin(2\pi q)-\frac{1}{2} \sin(4 \pi q) \right] . \label{ACCIONGATO} \end{equation} Using Eqs. \ref{ACCIONGATO}, \ref{1} and \ref{2} we compute $\Gamma$ and $\varphi$. In Fig. \ref{FASEGATO} we plot $\Gamma$ and $\varphi$ for the perturbed cat map as a function of the scaled perturbation strength $\chi$. We can see that for the perturbation of Eq. \ref{catpert}, $\varphi$ takes only two possible values, either $0$ or $\pi$. \begin{figure} \begin{center} \includegraphics[width=9cm, angle=0]{ANCHOGATOlw2.ps} \caption{\label{ANCHO}\footnotesize (Color online) Width $\sigma$ of the LDOS as a function of the scaled perturbation strength $\chi=(k-k_0) N$ for the cat map with $G_1$ ({\large $\circ$}) and $G_2$ ({\large $\times$}). The red solid line is the semiclassical approximation of $\sigma$.
The number of states of the Hilbert space is $N=2000$ and $k_0=0.01$. We indicate with arrows the perturbation strengths of the LDOS displayed in Fig. \ref{LDOS}. } \end{center} \end{figure} We first compare the semiclassical approximation of the width of the LDOS with the corresponding quantum value. For this purpose the width of the LDOS has been computed for the cat map using $k_0=0.01$ to avoid all the arithmetic peculiarities of the cat map ($k=0$), which account for its non-generic spectral statistics \cite{Keating2}. In Fig. \ref{ANCHO} the width of the LDOS is shown for the cat maps with $G_1$ and $G_2$. The semiclassical approximation $\sigma_{sc}$, plotted as a solid line, works extremely well for both cat maps in the whole range of considered perturbations. The width of the LDOS $\sigma$ for the cat maps has two clearly different regimes [Fig. \ref{ANCHO}]. For small perturbation strengths, $\chi \lesssim 10$, it presents a quadratic behavior that is usually called the Fermi Golden Rule regime. Conversely, for greater strengths the width is an oscillating function. In order to understand the behavior of $\sigma_{sc}$ when $\chi \rightarrow \infty$ we have used the stationary phase approximation to solve the integral of Eq. \ref{O(t)semi}, obtaining \begin{displaymath} \Gamma \rightarrow -\log[1/\sqrt{\chi}] \hspace{0.5cm} \mbox{for} \hspace{0.3cm} \chi \rightarrow \infty , \end{displaymath} therefore the width $\sigma_{sc} \rightarrow 0.7 \pi$, which corresponds to a uniform distribution. \begin{figure} \begin{center} \includegraphics[width=8.5cm, angle=0]{fig5.ps} \caption{\label{LDOS}\footnotesize (Color online) LDOS $\rho$ (points) and its semiclassical approximation (solid line). (a) cat map of Eq. \ref{Gatos} with $G=G_2$ for a scaled perturbation strength $\chi=159.6$ (main plot), $\chi=14.4$ (left inset) and $\chi=18.0$ (right inset). (b) cat map of Eq. \ref{Gatos} with $G=G_1$ for $\chi=159.6$, $\chi=14.4$ (left inset) and $\chi=18.0$ (right inset).} \end{center} \end{figure} At this point we would like to see how well the semiclassical approximation of the LDOS can describe the complete distribution. We have therefore computed the quantum LDOS for several values of the perturbation strength for the cat maps $G_1$ and $G_2$. In Fig. \ref{LDOS} we compare the LDOS with its semiclassical approximation for the perturbations indicated in Fig. \ref{ANCHO} with arrows. Fig. \ref{LDOS}(a) corresponds to the most chaotic case, $G_2$. In the main plot $\chi=159.6$, in the left inset $\chi=14.4$ and in the right inset $\chi=18$. We can see that the semiclassical approximation works very well for all the perturbations, that is, the LDOS is a periodized Lorentzian function irrespective of the perturbation strength. The left and right insets correspond to approximately the same width of the distribution, but $\varphi=0$ in the left panel and $\varphi=\pi$ in the right one, so in the latter case the periodized Lorentzian is centered at $\omega=\pi$. As can be seen in Fig. \ref{FASEGATO}, near $\chi \approx 15$ the phase $\varphi$ has a discontinuity and jumps from $0$ to $\pi$; for this reason the center of the LDOS changes from $\omega=0$ to $\omega=\pi$. A similar behavior occurs at the other discontinuities of $\varphi$, near $\chi \approx 50$ and $70$. In Fig. \ref{LDOS}(b) the results for the cat map with the matrix $G_1$ are shown. In the main panel of the figure, we show that for large perturbation strengths, beyond the quadratic regime ($\chi=159.6$), the LDOS is well described by the semiclassical Lorentzian distribution.
Conversely, for smaller perturbation strengths the LDOS does not show a Lorentzian behavior [see inset of Fig. \ref{LDOS}(b)]. To understand this behavior we show in Fig. \ref{odet} the mean value of the fidelity amplitude $\overline{O(n)}$ for $\chi=14.4$ for both cat maps, with $G_1$ ({\large $\circ$}) and $G_2$ ({\large $\times$}), which corresponds to the inverse Fourier transform of the LDOS plotted in the left insets of Fig. \ref{LDOS}(a) and (b). We see that in the case in which the LDOS is not a periodized Lorentzian function the corresponding $\overline{O(n)}$ has a large revival (at $n=4$). This kind of behavior, known as survival collapse, after which the largest revivals appear, was observed in a spin chain \cite{revivals} and can be the cause of non-Markovian quantum evolutions \cite{non-markovian}. \begin{figure} \begin{center} \includegraphics[width=8.0cm, angle=0]{fig6.ps} \caption{ \label{odet}(Color online) Mean value of the fidelity amplitude $\overline{O(n)}$ for the cat map with $G_1$ ({\large $\circ$}) and $G_2$ ({\large $\times$}) with perturbation strength $\chi=14.4$. The exponential decay given by $\exp(-\Gamma n)$ is plotted with a solid blue line. We also plot with a dotted red line the exponential decay given by $\exp(-\lambda n/2)$, with $\lambda$ the Lyapunov exponent of the cat map with $G_1$. } \end{center} \end{figure} We now test the validity of the semiclassical approximation of the LDOS for a local perturbation. To this end the perturbation is applied in a $q$ strip from $q_0=0.25$ to $q_1=0.46$, so the area of the perturbed region is $\alpha=\Delta q \Delta p=q_1-q_0=0.21$. In Fig. \ref{FASEGATOCORTADO} we show $\Gamma$ and $\varphi$ as a function of the scaled perturbation strength $\chi$, computed using Eqs. \ref{1} and \ref{2}. We can see that for this local perturbation $\varphi$ is an oscillating function, so the semiclassical approximation of the LDOS is a periodized Lorentzian function with an oscillating mean value. In Fig. \ref{FASEGATOCORTADO} (top panel) the mean value of the exact LDOS is also plotted with ({\large $\Box$}), showing that the semiclassical $\varphi$ describes this quantity very well. \begin{figure} \begin{center} \includegraphics[width=8.0cm, angle=0]{fig-gamma-cortado.ps} \caption{\label{FASEGATOCORTADO}\footnotesize (Color online) $\Gamma$ and the phase $\varphi$ as a function of the scaled perturbation $\chi$ for the cat map when the perturbation is applied in a $q$ strip from $q_0=0.25$ to $q_1=0.46$. The mean value of the quantum LDOS is also plotted with ($\Box$). } \end{center} \end{figure} The LDOS is also very well approximated by the semiclassical LDOS for all the perturbation strengths that we have studied. In Fig. \ref{fig-ldos-cut} we show the LDOS for $\chi=8$, when the width grows quadratically (FGR regime), and for $\chi=28$, when the width shows an oscillating behavior. The semiclassical approximation is plotted with a solid line. In the inset of Fig. \ref{fig-ldos-cut} we show the width of the LDOS for this local perturbation and its semiclassical approximation. We can clearly see that $\sigma_{sc}$ works very well for local perturbations. It is noteworthy that all the calculations for local perturbations were done using the map $G_1$, showing that when the perturbation is applied in a small region of the phase space a lower degree of chaoticity is needed for the semiclassical LDOS to be accurate.
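As an illustration of how the semiclassical quantities used throughout this section can be evaluated in practice, the following minimal Python sketch (a schematic reconstruction, not the code used to produce the figures) computes $\Gamma$ and $\varphi$ from Eqs. \ref{1} and \ref{2} for the globally perturbed cat map, using the one-step action difference of Eq. \ref{ACCIONGATO}; since $\chi=\delta k/(2\pi\hbar)$, the phase entering the integral is $\Delta S/\hbar = (\chi/2\pi)[\sin(2\pi q)-\frac{1}{2}\sin(4\pi q)]$. The width $\sigma_{sc}$ is then obtained from the periodized Lorentzian of Eq. \ref{LorPeriod} through the 70\% criterion of Eq. \ref{ANCH} (by periodicity, $\sigma$ does not depend on $\varphi$). The grid sizes, the truncation of the periodization and the sample values of $\chi$ are arbitrary choices made for the example.
\begin{verbatim}
import numpy as np

def gamma_phi(chi, nq=4096):
    # One-step phase-space average of exp(-i Delta S / hbar), Eqs. (1)-(2).
    # For this perturbation Delta S depends only on q, so the p integral is trivial.
    q = (np.arange(nq) + 0.5) / nq
    dS = (chi / (2.0 * np.pi)) * (np.sin(2*np.pi*q) - 0.5*np.sin(4*np.pi*q))
    I = np.mean(np.exp(-1j * dS))
    return -np.log(np.abs(I)), np.angle(I)

def width_sigma(gamma, jmax=100, nw=40001):
    # Width of the periodized Lorentzian from the 70% criterion of Eq. (ANCH).
    w = np.linspace(-np.pi, np.pi, nw)
    rho = sum(gamma / (np.pi * ((w - 2*np.pi*j)**2 + gamma**2))
              for j in range(-jmax, jmax + 1))
    rho /= np.trapz(rho, w)              # normalize over one period
    for s in np.linspace(0.0, np.pi, 2000):
        inside = np.abs(w) <= s
        if np.trapz(rho[inside], w[inside]) >= 0.7:
            return s
    return np.pi

for chi in (1.0, 5.0, 20.0, 100.0):
    g, phi = gamma_phi(chi)
    print(chi, g, phi, width_sigma(g))
\end{verbatim}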
\begin{figure} \begin{center} \includegraphics[width=9.5cm, angle=0]{fig8.eps} \caption{\label{fig-ldos-cut}\footnotesize (Color online) LDOS $\rho$ for a local perturbation. The cat map with $G_1$ is perturbed from $q_0=0.25$ to $q_1=0.46$. The scaled perturbation strength is $\chi=8$ ($\bigtriangleup$) and $\chi=28$ ($\Box$). The semiclassical approximation of the LDOS is plotted with a red solid line. Inset: Width $\sigma$ of the LDOS as a function of the scaled perturbation strength $\chi$ ($\circ$) and, with a red solid line, the semiclassical approximation.} \end{center} \end{figure} \subsection{Harper map} We have studied the LDOS of the Harper map using the evolution operator of Eq. \ref{eq:harp} with $k=k_0+ \delta k$. The parameter $\delta k$ is the perturbation strength and, as for the cat map, the scaled perturbation strength is $\chi=\delta k N$. In this case the action difference for one iteration of the Harper map is given by \begin{equation} \Delta S (q,p,\delta k)= \left( \frac{\delta k}{2 \pi} \right) \left[ \cos(2\pi p)+ \cos(2 \pi q') \right] , \label{ACCIONHARPER} \end{equation} where $q'$ is given by Eq. \ref{eq:1}. We have considered as unperturbed system the cases with $k_0=0.30$ [mixed dynamics, Fig. \ref{fighc}(left panel)] and $k_0=200$ [chaotic dynamics, Fig. \ref{fighc}(right panel)]. Using Eqs. \ref{1}, \ref{2} and \ref{LorPeriod} we compute $\Gamma$, $\varphi$ and the corresponding semiclassical approximation of the LDOS. In Fig. \ref{GAMAHARPER} we show $\Gamma$ as a function of the scaled perturbation strength $\chi$. For the action difference of the Harper map [Eq. \ref{ACCIONHARPER}] we have obtained $\varphi=0$. \begin{figure} \begin{center} \includegraphics[width=9.0cm, angle=0]{GAMAHARPER.ps} \caption{\label{GAMAHARPER}\footnotesize (Color online) $\Gamma$ as a function of the scaled perturbation strength $\chi$ for the Harper map. } \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=9cm, angle=0]{ANCHOHARPERlw2.ps} \end{center} \caption{(Color online) Width $\sigma$ of the LDOS as a function of the scaled perturbation strength $\chi$ for the Harper map with $k_0=0.3$ ({\large $\circ$}) and $k_0=200$ ({\large $\times$}). The semiclassical approximation $\sigma_{sc}$ is plotted as a red line. We indicate with arrows the perturbation strengths of the LDOS displayed in Fig. \ref{LDOSHARPER}. } \label{HARPERWIDTH} \end{figure} In Fig. \ref{HARPERWIDTH} we show the width of the LDOS for the Harper map and the corresponding semiclassical approximation. When the dynamics of the Harper map is completely chaotic, the semiclassical $\sigma_{sc}$ works well, as expected. Surprisingly, the semiclassical approximation works reasonably well even for mixed dynamics. This agreement is more noticeable for larger $\chi$. The explanation of this unexpected behavior is as follows. Eq. \ref{integral} is exact for one time step ($n=1$), and if the perturbation strength is large enough the fidelity amplitude decays in this short time. Therefore, this short-time decay determines the width of its Fourier transform, which is the LDOS. In Fig. \ref{LDOSHARPER} we show the LDOS for the Harper map. Although the semiclassical width of the LDOS $\sigma_{sc}$ works well for mixed dynamics, the complete distribution is not well reproduced by a periodized Lorentzian distribution. This is shown in the inset of Fig. \ref{LDOSHARPER}(b) for the Harper map with $k_0=0.3$ and $\chi=1.7$. If the perturbation strength is larger [Fig.
\ref{LDOSHARPER}(b) (main plot)] the semiclassical theory works reasonably well, but the quantum LDOS is a more fluctuating function than in the chaotic case [see Fig. \ref{LDOSHARPER}(a)]. As expected, the semiclassical theory works well for the case of $k_0=200$, in which the Harper map is fully chaotic [Fig. \ref{LDOSHARPER}(a)]. \begin{figure} \begin{center} \includegraphics[width=8.5cm, angle=0]{fig11.ps} \caption{\label{LDOSHARPER}(Color online) $\rho$ (points) and its semiclassical approximation $\rho_{sc}$ (solid line) for the Harper map. (a) $k_0=200$. In the main plot $\chi=17$ and in the inset $\chi=1.8$. In both plots it is seen that the semiclassical LDOS describes the full quantum result. (b) $k_0=0.3$ (mixed dynamics). In the main plot $\chi=17$ and in the inset $\chi=1.8$.} \end{center} \end{figure} \section{Conclusions} \label{conclu} The reaction of a system to perturbations is a fundamental problem in quantum mechanics. In this paper we have made a detailed analysis of the response to perturbations of some of the simplest quantum systems that can have complex classical dynamics. For this reason, we have studied the LDOS in the perturbed cat map, a completely chaotic system, and in the Harper map, which has mixed dynamics. Our fundamental goal was to discuss the validity of a semiclassical theory of the LDOS that has been recently developed \cite{Natalia,Nacho1}. This theory is based on the relation of the LDOS with the fidelity amplitude, a measure of irreversibility and sensitivity to perturbations of quantum systems. Furthermore, it uses the dephasing representation of the fidelity amplitude, a semiclassical formulation that avoids the usual problems of semiclassical theories. The main assumption of the semiclassical theory of the LDOS is that the trajectories get uncorrelated after one step of the map. This condition is fulfilled if the dynamics is completely random or when the perturbation is applied in an infinitesimal region of the phase space. Due to the fact that these conditions are not strictly achieved in actual dynamical systems, we tested the validity of such a semiclassical theory of the LDOS. We have analyzed various situations: local and global perturbations, and we have also varied the degree of chaoticity. We show that the LDOS is very well described by its semiclassical expression when the map is highly chaotic, either if the perturbation is localized in phase space or when the perturbation strength is large enough. We remark that in these cases the semiclassical LDOS completely reproduces the quantum version without any fitting parameters. We have also studied the case of mixed dynamics and, surprisingly enough, our results show that the semiclassical width of the LDOS describes that of the full quantum result even in this case. We would like to highlight that our results could be of importance in the study of the LDOS of billiards. Indeed, the behavior of a billiard system has many resemblances to that of maps. For example, the classical dynamics of a billiard can be described by a map on the boundary. Quantum billiards are realistic systems that can be constructed in experimental setups of various kinds. In fact, there are microwave, acoustic and optical cavities. The semiclassical approximation of the width of the LDOS has been successfully applied to billiards that have been perturbed both locally \cite{Goussev, Goussev2} and globally \cite{Natalia}. However, in these works the behavior of the whole distribution was not properly discussed.
Further insight into the LDOS of these systems will be part of future studies. \section{Acknowledgements} The authors acknowledge support from CONICET (PIP-6137), UBACyT (X237, 20020100100741, 20020100100483) and ANPCyT (1556). We would like to thank Ignacio Garc\'{\i}a Mata for useful discussions.
{ "redpajama_set_name": "RedPajamaArXiv" }
3,172
Q: how to use spring annotations

I have the @Controller, @Service and @Repository classes. The application works fine, but I think I'm not using the annotations properly for the "entity" and "repository" classes. I'm actually not using a db (not even an in-memory db) and don't intend to. I'm currently annotating the repository with @Repository and the entities with @Service, and this is my concern: am I doing this correctly? How should I design and use the Spring annotations to wire the entity and repository classes to the service if I don't want to persist the data? Currently it looks like this:

Service class

    @Service
    public class ServiceClass {

        @Autowired
        RepositoryClass repositoryClass;

        public ServiceClass(RepositoryClass repositoryClass) {
            this.repositoryClass = repositoryClass;
        }
    }

Repository class

    @Repository
    public class RepositoryClass {

        @Autowired
        private Entity entity;

        public RepositoryClass(Entity entity) {
            this.entity = entity;
        }
    }

Entity class

    @Service
    public class Entity {

        private Map<String, List<Integer>> entityMap;

        public Entity(Map<String, List<Integer>> entityMap) {
            this.entityMap = entityMap;
        }
    }

A: Annotating an entity class with @Service is wrong. A class annotated with @Service is usually stateless, and for that reason, there is usually only one object of such a class. A class annotated with @Entity is usually stateful, and for that reason, there are usually many objects of such a class. An example scenario is a simple news service:

* There is one NewsService that contains interesting code to fetch news from the repository.
* For each news item, there is a NewsEntity object, holding the data of the individual news item.

A: You can't use @Entity from data-jpa w/o setting up some db, so your entities don't need any annotation. They aren't beans that you need to wire in anywhere. But the general idea that 'when you don't know' it's probably a service is a good one. XD The other annotations are right. You can basically annotate them with @Component, @Service, @Config, @Repository ... It wouldn't break your code; the names are mostly just there to be more clear for the people working on the code.

A: The question is: what's the role of each class here. Usually a repository is the point to access data. An entity is a data object, not a logic component, so it is usually created and managed by the repository (in this example), not by Spring. It's hard to firmly say anything with only that code (no information about how every component is used), but I would remove the @Service from the Entity class. The other classes are ok with those annotations.
{ "redpajama_set_name": "RedPajamaStackExchange" }
637
\section{Introduction} \label{sintro} The present work is part of a larger project to study the galactic plane (ISOGAL, Omont \& Blommaert~\cite{O97}; P\'erault et al.~\cite{Pea96}). During the ISO mission, the ISOGAL consortium observed with ISOCAM at 15 and 7 $\mu$m selected parts of the galactic plane (about 18 sq.deg. distributed along the inner galactic disk) in order to study the stellar populations in the inner galaxy, with a sensitivity and resolution two orders of magnitude better than IRAS. The main scientific goal of the ISOGAL project was the study of the distribution and properties of the AGB stars. However, the survey is unbiased, with the only exception of excluding from the surveyed area strong IRAS sources (with 12 $\mu$m flux densities greater than 6-10 Jy) in order to avoid saturation effects. Thus the survey data can be used to study any other type of mid-IR source present in the galactic plane, as for instance the less numerous HII regions associated with young massive stars. For a proper identification of source types, the ISOGAL results need to be compared with observations at other wavelengths. In particular, for the study of AGB stars comparisons with near IR observations, taken primarily with DENIS (Epchtein~\cite{E98}), are useful. For the study of HII regions comparisons with radio continuum surveys are more appropriate. A large fraction of the northern sky galactic fields covered by ISOGAL has already been observed at 6~cm (5~GHz) with the VLA (see Becker et al. 1994 and references therein), and a comparison of the two surveys is underway. However, these radio observations terminate at $l=+40^{\circ}$ and there were no high frequency (e.g.\ $\ge$5~GHz) radio continuum observations for the ISOGAL field at $l=+45^{\circ}$. Observations at lower frequencies, such as the 1.4 GHz NRAO VLA Sky Survey (NVSS -- Condon et al.~\cite{Cea98}), are inadequate to detect the younger and more dense compact HII regions, which may be optically thick at 1.4 GHz. Given our interest, within the ISOGAL team, in studying the young massive stars, we decided to observe the $l=+45^{\circ}$ field at high frequencies with the VLA, to provide a data base comparable to that of Becker et al.\ (1994). In order to obtain radio spectral index information we covered at 6 and 3.6~cm an area slightly larger than the $l=+45^{\circ}$ ISOGAL field. The selection of the ISOGAL galactic plane fields does not follow any {\it ad hoc} criterion, but is based on symmetrically spaced samples on both sides of the Galactic Center, with the spacing increasing with distance from the Galactic Center. The $l=+45^{\circ}$ field happens to be located tangent to a spiral arm of our Galaxy, the Scutum arm (see e.g. Kurtz et al. 1994). Inspection of the 4.875 GHz galactic plane survey of Altenhoff et al.~(\cite{Aea78}) shows that there is very weak diffuse galactic background emission in this direction. Only 7 sources of the Altenhoff et al. catalogue fall in our surveyed area or at its borders (see Table~\ref{talt}). One of these (44.786--0.490) is partly outside our surveyed area. Most of these sources are associated with bright IRAS point sources and have not been covered by the ISOCAM observations, except for 45.202--0.441 and 45.341--0.370. In this work we present the radio observations and discuss the comparison with other radio surveys and with IRAS data. Comparison with ISOGAL data, as well as with dedicated J, H, K observations of the same field taken with TIRGO, will be the subject of future works.
\section{Observations and data reduction} \label{sobs} The ISOGAL field centered at $l=+45^\circ$, $b=0^\circ$ was observed at 6~cm (4.9~GHz) and 3.6~cm (8.5~GHz) using the NRAO\footnote{The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under agreement by the Associated Universities, Inc.} Very Large Array (VLA) in the C configuration on 1997 August 5 (8~hours). At 6~cm the observational setup was similar to that used by Becker et al.~(\cite{Bea94}), the only differences being that our pointing centers are more closely packed and, due to the peculiar geometry of the sub-fields observed with ISO, we covered the field by scanning strips at constant galactic longitude, which required a total of 31 pointings; our integration time per position was 300~s. At 3.6~cm we used a similar pointing scheme but scaled due to the smaller primary beam. The observing time per position was reduced to 210~s, and the entire field was mapped with 74 pointings. Eight pointings were observed at 3.6~cm during a 1~hour test run on 1997 July~4; however, due to an error in the observing schedule, only some of the pointings fell in our survey region. For the sake of completeness we will also report the results for the 3 pointings outside our formal survey region that form a spur in position angle 30 degrees. Due to the ill-determined primary beam correction and the rapid loss of sensitivity far from the pointing center, we searched for sources only in the area where the primary beam attenuation is less than a factor of 3. With this constraint, we covered an area of $\sim$0.620~sq.~deg. at 6~cm, and $\sim$0.525~sq.~deg. at 3.6~cm. In Fig.~\ref{fcover} we show all the pointing positions: the small grey circles represent the VLA primary beam HPBW at 3.6~cm (4.9$^\prime$), while the larger black circles represent those at 6~cm (8.6$^\prime$). The dotted line shows the boundaries of the area covered at both wavelengths ($\sim$0.493~sq.~deg.), which includes the ISOGAL sub-fields; the dashed lines mark the boundary of the field observed at 6 and/or 3.6~cm ($\sim$0.652~sq.~deg.). \begin{figure} \centerline{\psfig{figure=ds1710f01.eps,height=7cm}} \caption[]{\label{fcover} At each pointing position a circle with diameter equal to the VLA primary beam FWHM is shown. Grey circles represent 3.6~cm pointings, black circles 6~cm pointings. The dotted line marks the boundaries of the area observed at both frequencies, the dashed line encompasses the area observed at either of the two bands. In both cases we considered only the area where the primary beam attenuation is less than a factor of 3. Axes are galactic longitude and latitude (degrees).} \end{figure} \begin{figure*} \centerline{\psfig{figure=ds1710f02a.eps,height=9cm,angle=-90} \hskip 0.4cm \hskip 0.4cm \psfig{figure=ds1710f02b.eps,height=9cm,angle=-90}} \vskip 0.4cm \caption[]{\label{frms}Computed noise maps for the 3.6 and 6~cm observations (left and right, respectively). Dotted and dashed lines as in Fig.~\ref{fcover}. The on-axis noise level in the black areas can be as high as 8~mJy/beam.} \end{figure*} Frequent observations of the quasar 1922$+$155 were used for gain and phase calibration, while the flux density scale was determined by observations of 3C286. The calibration is expected to be accurate within 10\%.
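To make the area selection reproducible, the following minimal Python sketch (a schematic illustration, not the actual reduction pipeline) approximates the VLA primary beam with a Gaussian of the FWHM quoted above and keeps only the positions where the attenuation is less than a factor of 3; the same ingredients, combined with the measured on-axis noise of each field, are what is needed to build the sensitivity maps discussed in the next subsection. The Gaussian beam shape, the "best pointing" combination rule, and the pointing list and grid in the example are assumptions made for the illustration.
\begin{verbatim}
import numpy as np

FWHM_ARCMIN = {"6cm": 8.6, "3.6cm": 4.9}   # primary beam FWHM quoted in the text

def attenuation(r_arcmin, band):
    # Gaussian approximation of the primary beam response (1.0 on axis).
    sigma = FWHM_ARCMIN[band] / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * (r_arcmin / sigma) ** 2)

def rms_map(l_grid, b_grid, pointings, band):
    # Mosaic rms map: at each pixel take the best of the overlapping fields,
    # i.e. the lowest (on-axis rms)/attenuation; pixels where the attenuation
    # exceeds a factor of 3 in every field are blanked.
    rms = np.full(l_grid.shape, np.inf)
    for l0, b0, rms_onaxis in pointings:   # (gal. long., gal. lat., mJy/beam)
        r = 60.0 * np.hypot((l_grid - l0) * np.cos(np.radians(b_grid)),
                            b_grid - b0)
        a = attenuation(r, band)
        usable = a > 1.0 / 3.0
        rms = np.where(usable, np.minimum(rms, rms_onaxis / a), rms)
    return np.where(np.isfinite(rms), rms, np.nan)

# placeholder example: a ~10 arcsec grid around a single hypothetical pointing
l, b = np.meshgrid(np.linspace(44.9, 45.1, 73), np.linspace(-0.1, 0.1, 73))
print(np.nanmin(rms_map(l, b, [(45.0, 0.0, 0.13)], "6cm")))
\end{verbatim}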
We imaged all the fields using the AIPS IMAGR task with natural weighting. The resulting synthesized beam varied somewhat from field to field, depending on the hour angle at which each field was observed; typical FWHM values are $\sim 6^{\prime\prime}$ at 6~cm and $\sim 3^{\prime\prime}$ at 3.6~cm. \subsection{Sensitivity} Due to the VLA primary beam attenuation and the different noise values in the various fields, the sensitivity of our observations is not expected to be uniform across the observed region. Using our knowledge of the VLA primary beam attenuation pattern and the measured on-axis rms level in each of the observed fields, we computed the sensitivity maps for our survey at 3.6 and 6~cm (see also Zoonematkermani et al.~\cite{Zea90} and Becker et al.~\cite{Bea94}). The measured on-axis noise level in the maps is generally $\sim 0.12$--$0.15$~mJy/beam at both frequencies, with the exception of some fields close to the bright complexes located at $l\sim 45^\circ\!.10$, $b\sim0^\circ\!.13$ ($\alpha(2000)=19^h13^m27^s$ $\delta(2000)=10^\circ 53^\prime35^{\prime\prime}$) and $l\sim 45^\circ\!.45$, $b\sim0^\circ\!.06$ ($\alpha(2000)=19^h14^m21^s$ $\delta(2000)=11^\circ 09^\prime13^{\prime\prime}$), which have a higher noise level (in the range 1--8~mJy/beam) due to residual phase and amplitude errors. The computed rms maps are shown in Fig.~\ref{frms}; the area of each pixel ($10^{\prime\prime}\times 10^{\prime\prime}$) corresponds to $\sim$3.5 beams at 6~cm and $\sim$14 beams at 3.6~cm. As seen from Fig.~\ref{frms}, most of the area covered by our survey has an rms sensitivity less than 0.3~mJy/beam at both frequencies. In Fig.~\ref{frmscum} we show the cumulative distributions of the pixel values in the rms maps; more than 85\% of the surveyed area has an rms value less than 0.5~mJy/beam. \begin{figure} \centerline{\psfig{figure=ds1710f03.eps,width=8.8cm}} \caption[]{\label{frmscum}Cumulative distributions of the noise values in the maps of Figure~\ref{frms}. The grey line is for 3.6~cm data, the black line for 6~cm data.} \end{figure} \subsection{Source extraction} \begin{figure*} \centerline{\psfig{figure=ds1710f04a.eps,height=8.5cm,angle=-90} \psfig{figure=ds1710f04b.eps,height=8.5cm,angle=-90} \psfig{figure=ds1710f04c.eps,height=8.5cm,angle=-90}} \caption[]{\label{fpos} a) Positions of the detected sources at 3.6~cm (pluses) and 6~cm (empty circles), larger symbols represent extended sources; b) VLA 20~cm surveys: sources from Zoonematkermani et al.~(\cite{Zea90}; ZGS) are shown as pluses, grey contours show the NVSS image of the region; c) the positions of the IRAS--PSC2 sources inside our extended survey area are shown as plus symbols; filled squares show the five sources which satisfy the Wood \& Churchwell~(\cite{WC89}) color criteria.} \end{figure*} All images were inspected by means of contour plots and greyscale display to find sources. The images were then corrected for primary beam attenuation using the AIPS task PBCOR before source extraction. The J2000.0 positions, peak and integrated flux densities and the sizes of the detected sources at both frequencies are listed in Table~\ref{tsrc}. In general, all the reported detections have a signal-to-noise ratio greater than five in at least one of the two bands. The names assigned to the sources have been derived from their galactic coordinates, as in Becker et al.~(\cite{Bea94}). We arbitrarily divided the sources into two categories: 1) compact sources and 2) extended sources.
In the first group are all the unresolved sources or the sources with deconvolved sizes of the same order or smaller than the synthesized beam FWHM. All the extended sources have sizes much greater than the synthesized FWHM, and thus they may be partially resolved out by the interferometer. The flux densities (and sizes) reported in Table~\ref{tsrc} for these sources should be considered as lower limits. We shall see in the following that this arbitrary distinction based on observational considerations reflects also some intrinsic physical difference between the two groups. At large distances from the pointing center, the correction factor due to the VLA primary beam attenuation may be large, and hence the source flux density could be ill-determined. In Table~\ref{tsrc} the source that has the maximum correction factor applied is source \#5, which is close to the edge of the surveyed area and has correction factors $\sim$2.1 at 6~cm and $\sim$2.5 at 3.6~cm. All other sources, with the exception of \#22 and \#29 at 6~cm, have correction factors lower than 2.0. The positions of all the detected sources within our surveyed region are shown in Fig.~\ref{fpos}~a), where pluses represent 3.6~cm sources (21), and circles represent 6~cm sources (29), and larger symbols represent extended sources. Contour plots for all the detected sources are shown in the Appendix. \begin{table*} \caption[]{\label{tsrc}Detected radio sources} \begin{tabular}{llccrrrrrrl} \hline & & & & \multicolumn{3}{c}{ -------------- 6~cm -------------- } & \multicolumn{3}{c}{ ------------ 3.6~cm ------------ } & \\ \# &Name$^a$& $\alpha$ & $\delta$ & \multicolumn{1}{c}{F$_p^b$} & \multicolumn{1}{c}{F$_i^b$} & \multicolumn{1}{c}{d$^b$} & \multicolumn{1}{c}{F$_p^b$} & \multicolumn{1}{c}{F$_i^b$} & \multicolumn{1}{c}{d$^b$} & Shown in\\ &&(2000)&(2000)&\multicolumn{1}{c}{(mJy/beam)}&\multicolumn{1}{c}{(mJy)}&\multicolumn{1}{c}{($^{\prime\prime}$)}&\multicolumn{1}{c}{(mJy/beam)}&\multicolumn{1}{c}{(mJy)}&\multicolumn{1}{c}{($^{\prime\prime}$)}&Figure$^c$\\ \hline 1&G044.841$+$0.554&19:11:24.65&$+$10:50:20.7& 1.4$\pm$ 0.2& 1.6&0 & 0.8$\pm$ 0.2& 1.1&1&\ref{pcfig1}\\ 2&G044.854$+$0.519&19:11:33.51&$+$10:50:02.8& 0.7$\pm$ 0.2& 0.7&1 & 0.6$\pm$ 0.2& 1.6&3&\ref{pcfig1}\\ 3&G044.837$+$0.506&19:11:34.52&$+$10:48:46.0&$<$ 0.3& -- & -- & 0.7$\pm$ 0.2& 0.7&0&\ref{pcfig2}\\ 4&G044.873$+$0.520&19:11:35.64&$+$10:51:05.3& 0.9$\pm$ 0.2& 1.4&5&$<$ 0.6 & -- & -- &\ref{pcfig2}\\ 5&G044.792$+$0.380&19:11:56.70&$+$10:42:53.5& 2.5$\pm$ 0.2& 3.2&0 & 1.7$\pm$ 0.2& 2.1&0&\ref{pcfig1}\\ 6&G045.129$+$0.550&19:11:58.07&$+$11:05:32.9& 0.8$\pm$ 0.2& 1.2&0 & 0.5$\pm$ 0.2& 0.5&0&\ref{pcfig1}\\ 7&G044.878$+$0.403&19:12:01.41&$+$10:48:09.0& 0.6$\pm$ 0.2& 0.9&4&$<$ 0.6 & -- & -- &\ref{pcfig2}\\ 8&G044.908$+$0.295&19:12:28.20&$+$10:46:43.2& 1.5$\pm$ 0.4& 1.6&1 & 2.0$\pm$ 0.3& 2.5&1&\ref{pcfig1}\\ 9&G044.965$+$0.284&19:12:37.07&$+$10:49:27.1& 18.4$\pm$ 0.6& 20.1&1 & 18.5$\pm$ 0.4& 18.1&0&\ref{pcfig1}\\ 10&G045.027$+$0.123&19:13:18.94&$+$10:48:16.5& 9.0$\pm$ 0.8& 10.1&0 & 7.2$\pm$ 1.1& 6.6&0&\ref{pcfig1}\\ 11&G045.070$+$0.132&19:13:21.87&$+$10:50:49.0& 63.2$\pm$ 3.4& 270$^d$&6 & 57.6$\pm$ 3.7& 106.6&2&\ref{exfig1}\\ 12&G045.072$+$0.132&19:13:22.08&$+$10:50:53.2& 128.7$\pm$ 3.6& 270$^d$&0 & 307.3$\pm$ 3.9& 326.1&0&\ref{exfig1}\\ 13&G045.118$+$0.143&19:13:24.90&$+$10:53:41.1& 24.0$\pm$ 3.6& 172.0&6 &$<$24.0& -- & -- &\ref{exfig2}\\ 14&G045.101$+$0.122&19:13:27.67&$+$10:52:09.6& 16.9$\pm$ 3.4& 34.0&5&$<$ 24.0 & -- & -- &\ref{pcfig2}\\ 
15&G045.123$+$0.132&19:13:27.91&$+$10:53:36.3& 1436.1$\pm$ 3.4&2905.8&5 &1431.6$\pm$ 17.2&3294.7&3&\ref{exfig2}\\ 16&G045.133$+$0.133&19:13:28.81&$+$10:54:09.8& 24.0$\pm$ 3.6& 88.0&4 &$<$24.0& -- & -- &\ref{exfig2}\\ 17&G045.130$+$0.131&19:13:28.83&$+$10:53:56.1& 37.2$\pm$ 3.6& 91.0&4 &38$\pm$8.0& 77 & 3 &\ref{exfig2}\\ 18$^e$&G045.455$+$0.060&19:14:21.29&$+$11:09:12.3& --$^f$& --$^f$ & --$^f$ & 195.0$\pm$ 4.2&1050.0$^e$&6&\ref{exfig3}\\ 19&G045.466$+$0.045&19:14:25.66&$+$11:09:26.1& --$^f$& --$^f$ & --$^f$ & 87.2$\pm$ 4.6& 105.4&1&\ref{pcfig2}\\ 20&G045.026$-$0.227&19:14:34.64&$+$10:38:28.7& 0.9$\pm$ 0.2& 0.8&0&$<$ 0.6 & -- & -- &\ref{pcfig2}\\ 21&G044.958$-$0.270&19:14:36.02&$+$10:33:37.7& 1.0$\pm$ 0.2& 1.2&0&$<$ 0.6 & -- & -- &\ref{pcfig2}\\ 22&G045.333$-$0.113&19:14:44.71&$+$10:57:56.7& 4.1$\pm$ 0.5& 4.4&0 & 3.6$\pm$ 0.4& 4.3&1&\ref{pcfig1}\\ 23&G045.339$-$0.183&19:15:00.62&$+$10:56:17.6& 2.1$\pm$ 0.5& 3.3&3&$<$ 0.9 & -- & -- &\ref{pcfig2}\\ 24&G045.337$-$0.185&19:15:00.87&$+$10:56:09.7& 1.3$\pm$ 0.5& 1.4&0&$<$ 0.9 & -- & -- &\ref{pcfig2}\\ 25&G044.996$-$0.446&19:15:18.35&$+$10:30:44.3& 7.5$\pm$ 0.1& 7.8&0 & 4.2$\pm$ 0.3& 4.6&0&\ref{pcfig1}\\ 26&G045.333$-$0.322&19:15:29.98&$+$10:52:08.0& 1.3$\pm$ 0.3& 2.3&3&$<$ 0.6 & -- & -- &\ref{pcfig2}\\ 27&G044.967$-$0.544&19:15:36.29&$+$10:26:30.6& 6.5$\pm$ 0.2& 6.9&1& --$^f$ & --$^f$ & --$^f$ &\ref{pcfig2}\\ 28&G044.995$-$0.555&19:15:41.95&$+$10:27:38.0& 1.1$\pm$ 0.2& 1.0&0& --$^f$ & --$^f$ & --$^f$ &\ref{pcfig2}\\ 29&G045.007$-$0.614&19:15:56.12&$+$10:26:38.1& 3.4$\pm$ 0.2& 2.8&0& --$^f$ & --$^f$ & --$^f$ &\ref{pcfig2}\\ \hline \multicolumn{9}{c}{Extended sources}\\ \hline 30&G045.066$+$0.138&19:13:20.5&$+$10:50:50& 39.5$\pm$ 3.0& 348.6&26 & 14.9$\pm$ 2.0& 433.0&26&\ref{exfig1}\\ 31&G045.134$+$0.145&19:13:26.5&$+$10:54:20& 73.0$\pm$ 3.0&1960.0&48 & 60.2$\pm$ 8.0&1727.7&48&\ref{exfig2}\\ 32&G045.479$+$0.130&19:14:08.8&$+$11:12:28& --$^f$& --$^f$ & --$^f$ & 37.2$\pm$ 2.0&1500.0&30&\ref{exfig3}\\ 33$^e$&G045.455$+$0.059&19:14:21.3&$+$11:09:10& --$^f$& --$^f$ & --$^f$ & 65.0$\pm$ 2.0&3450.0$^e$&47&\ref{exfig3}\\ 34&G045.190$-$0.439&19:15:39.0&$+$10:41:15& 7.6$\pm$ 0.3& 95.6&36 & 3.5$\pm$ 0.2& 69.7&36&\ref{exfig4}\\ \hline \end{tabular} \vskip 0.3cm $^a$) Derived from galactic coordinates, as in Becker et al.~(\cite{Bea94})\\ $^b$) F$_p\equiv$ peak flux density; F$_i\equiv$ integrated flux density; d$\equiv$ size (deconvolved).\\ $^c$) Contour plots for all the detected sources are reported in Appendix.\\ $^d$) Sources \#11 and \#12 are blended together at 6~cm, the separation of the two contribution to the integrated flux is very uncertain, thus we report the integrated flux of both components together..\\ $^e$) Source \#18 is inside the extended source \#33. The integrated flux density of the compact component has been subtacted from the total integrated flux density, the resulting flux has been assigned to source \#33.\\ $^f$) Not observed. \end{table*} \section{Comparison with other surveys} \label{sres} \subsection{VLA 20~cm surveys} The observed field is included in the VLA 20~cm galactic plane survey (ZGS; Zoonematkermani~\cite{Zea90}) and in the NRAO-VLA Sky Survey (NVSS; Condon et al.~\cite{Cea98}). 
Both these surveys used the VLA at 20~cm (1.4~GHz); however, the ZGS used the moderately extended B array and has a typical peak flux density sensitivity of 25~mJy/beam and a synthesized beam of $\sim 5^{\prime\prime}$ (FWHM), while the NVSS used the most compact D array with a flux density limit of $\sim 2.5$~mJy/beam ($\sim 0.5$~mJy/beam rms) and an angular resolution of $\sim 45^{\prime\prime}$. Given the relatively low sensitivity, and the similar ($u,v$) sampling with our 6~cm observations, we expect to detect all the ZGS sources in our maps (see also Becker et al.~\cite{Bea94}). On the other hand, due to the much higher sensitivity of the NVSS and its ability to detect extended structures, many of the fainter 20 cm sources with non-thermal spectral indexes and/or sizes greater than 10$^{\prime\prime}$ will not be detectable in our observations. In Fig.~\ref{fpos}~b) we show the positions of all the ZGS sources (11 -- pluses) overlaid on the contour plot of the NVSS image of our survey region. In Table~\ref{tass} the results of the correlation between our catalogue and the 20~cm surveys are presented. The relevant parameters (names, positions, flux densities and sizes) of the 20~cm sources are from the published catalogues (Zoonematkermani et al.~\cite{Zea90} for the ZGS and the deconvolved data from the fits catalogue available on the World Wide Web at {\tt http://www.nrao.edu} in October~1998 for the NVSS). The matching criterion used is positional coincidence: ZGS sources are considered to be associated with our sources if the positional difference is less than half a beamwidth for point sources, or if the source position falls within the boundary of one of our extended sources; NVSS sources are considered to be associated if one of our point sources falls inside of, or if the boundaries of one of our extended sources overlap with, the deconvolved size of the 20~cm source. As expected, all the ZGS sources in our surveyed field do have a counterpart at 6~cm. In one case (source \#32 in our list), we considered two ZGS sources as being part of the same (extended) 6~cm source. In Table~\ref{tass}, columns~1 and~2 report the numbers and names of our sources from Table~\ref{tsrc}, columns~3 to 6 the names, peak and integrated flux densities, and sizes of the ZGS sources, columns~7 to 10 the names, integrated flux densities and deconvolved sizes of the NVSS sources, and column~11 the IRAS source names (see Sect. 3.4). In general, given the higher sensitivity of the NVSS and its ability to detect extended sources that might be resolved out in the ZGS, we expect that all the ZGS sources in our field should be detected in the NVSS as well. The only possible exception is that of very compact high surface brightness sources close to or inside large low surface brightness sources with high integrated flux. There are 3 ZGS sources without an NVSS counterpart, one (045.129$+$0.131, associated with our \#17) is indeed inside the bright complex shown in Fig.~\ref{exfig2}, and thus may be missing from the NVSS catalogue due to confusion. Similarly, the one associated with our \#19 could be undetected in the NVSS due to its proximity to the extended source \#33. Both \#17 and \#19 have thermal spectral indexes (see below and Table~\ref{tspecind}) and we do not expect them to be variable at 20~cm.
On the other hand, the ZGS source associated with \#29 should have been detected in the NVSS, thus for this source, given also its non-thermal spectral index, the only viable explanation for the NVSS non-detection is variability at 20~cm. Finally, there is a very bright ($\sim$280~mJy), unresolved, NVSS source which is undetected in the ZGS and in our survey. This source (clearly visible in Fig.~\ref{fpos}~b) at $l\sim 45.35$, $b\sim -$0.22) is the high energy source G1915$+$105 (Mirabel \& Rodr\'{\i}guez~\cite{MR94}). At radio wavelengths is known to be highly variable, with flux densities at 1.4~GHz that can be as high as $\sim 1$~Jy at the peak of radio bursts and below the mJy level during quiescence (Rodr\'{\i}guez \& Mirabel~\cite{RM99}). \begin{table*} \caption[]{\label{tass}Associated ZGS, NVSS and IRAS--PSC2 sources} \begin{tabular}{rllrrrlrrrl} \hline && \multicolumn{4}{c}{ -------------------------- ZGS -------------------------- } & \multicolumn{4}{c}{ --------------------- NVSS$^a$ --------------------- } & IRAS\\ \#&Name& Name & \multicolumn{1}{c}{F$_p$} & \multicolumn{1}{c}{F$_i$} & \multicolumn{1}{c}{d} & Name$^a$ &\multicolumn{1}{c}{F$_i$} & \multicolumn{1}{c}{Size$^b$} & \multicolumn{1}{c}{p.a.$^b$} & Name \\ &&&\multicolumn{1}{c}{(mJy/b)}&\multicolumn{1}{c}{(mJy)}&\multicolumn{1}{c}{($^{\prime\prime}$)}&(NVSS J)&\multicolumn{1}{c}{(mJy)}&\multicolumn{1}{c}{($^{\prime\prime}\times^{\prime\prime}$)}&\multicolumn{1}{c}{($^\circ$)}&\\ \hline 5&G044.792$+$0.380& & & & &191156$+$104256&2.8&129$\times$55&0& \\ 9&G044.965$+$0.284& & & & &191236$+$104930&3.3&58$\times$48&0& \\ 14&G045.101$+$0.122&045.101$+$0.121& 41& 49& 2.2&191327$+$105217&58.2&64$\times$21&22& \\ 15&G045.123$+$0.132&045.123$+$0.132& 287&1468&10.4&191327$+$105338&1540.4&21$\times$15&58& 19111$+$1048\\ 17&G045.130$+$0.131&045.129$+$0.131& 22& 93& 9.1& & & & &\\ 19&G045.466$+$0.045&045.466$+$0.046& 16& 21& 2.6& & & & &\\ 25&G044.996$-$0.446&044.995$-$0.445& 21& 22& 0.0&191518$+$103042&20.4&22$\times$29&83& \\ 27&G044.967$-$0.544&044.967$-$0.543& 21& 23& 1.7&191536$+$102630&14.2&32$\times$27&0& \\ 29&G045.007$-$0.614&045.007$-$0.614& 12& 10& 0.0& & & & &\\ 30&G045.066$+$0.138& & & & &191320$+$105054&401.9&31$\times$19&$-$83&19110$+$1045\\ 31&G045.134$+$0.145&045.134$+$0.143& 48&2245&34.8&191326$+$105422&2445.3&46$\times$42&53&\\ 32&G045.479$+$0.130&045.477$+$0.130& 97&1222&17.0&191408$+$111229&1672.6&33$\times$14&$-$33&19117$+$1107\\ & &045.480$+$0.136& 62& 653&15.2& & & & & \\ 33&G045.455$+$0.059&045.454$+$0.060& 167&2207&17.5&191421$+$110913&4771.5&41$\times$36&$-$20&19120$+$1103\\ 34&G045.190$-$0.439& & & & &191539$+$104123&50.8&20$\times$17&$-$41&19132$+$1035\\ \hline \end{tabular} \vskip 0.3cm $^a$) In this table, the ``NVSS J'' prefixes have been omitted from the names of the NVSS sources.\\ $^b$) Deconvolved major, minor axes and position angle (see Cotton et al.~\cite{Cea98}).\\ \end{table*} In Table~\ref{tspecind}, columns~2 to 6, we report the radio continuum spectral indexes ($\alpha$, defined as $F_\nu\sim\nu^\alpha$) as calculated from our integrated flux densities and the ZGS and NVSS integrated flux densities. It should be noted that all extended sources are probably partially resolved out in the higher resolution surveys, particularly in our 3.6~cm images, and thus some of the measured spectral indexes are probably lower limits due to the missing flux at high frequency. 
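For reference, the spectral indexes and uncertainties listed in Table~\ref{tspecind} follow directly from the definition $F_\nu\sim\nu^\alpha$; a minimal Python sketch of the calculation (a schematic illustration only, with the 6~cm flux uncertainty in the example chosen arbitrarily, since only its relative error matters here) is:
\begin{verbatim}
import numpy as np

def spectral_index(f1, sig1, nu1, f2, sig2, nu2):
    # alpha and its uncertainty for F_nu ~ nu^alpha, from two integrated
    # flux densities (same units) at frequencies nu1 and nu2 (same units).
    alpha = np.log(f2 / f1) / np.log(nu2 / nu1)
    sigma = np.hypot(sig1 / f1, sig2 / f2) / abs(np.log(nu2 / nu1))
    return alpha, sigma

# e.g. source #15 between the NVSS (1.4 GHz) and 6 cm (4.9 GHz), adopting a
# 10% error for the NVSS integrated flux density and an illustrative 7 mJy
# uncertainty for the 6 cm one: this gives alpha ~ +0.51 +/- 0.08.
print(spectral_index(1540.4, 154.0, 1.4, 2905.8, 7.0, 4.9))
\end{verbatim}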
\begin{figure} \centerline{\psfig{figure=ds1710f05.eps,width=8.5cm}} \caption[]{\label{fspec}Comparison between the high and low frequency spectral indexes. Top panel: sources in our survey detected at 20~cm in the NVSS. Bottom panel: sources detected in the ZGS. In both panels the dotted line represent equal spectral indexes.} \end{figure} In Fig.~\ref{fspec} we compare the high frequency spectral indexes (those calculated between 3.6 and 6~cm) with the low frequency ones (calculated between 6 and 20~cm), only the sources inside the area observed both at 3.6 and 6~cm have been considered. A 10\% error has been assumed for all ZGS and NVSS integrated flux densities (this may be a slight underestimate of the true error for the faintest sources in these surveys). In the upper panel we show the comparison for sources detected in the NVSS and in the lower panel that for sources detected in the ZGS. We find very good agreement between the high frequency and the low frequency spectral indexes for ZGS sources. This is probably due to the matched beams of the observations. In contrast, for NVSS sources, the spread between low and high frequency spectral indexes is much greater. There are two possible explanations for this: 1) the increased sensitivity to extended structures of the NVSS and 2) the greater sensitivity of the NVSS with respect to the ZGS. The increased sensitivity allows for the detection in the NVSS of some thermal sources that are optically thick at low frequency and become optically thin at high frequency (this is probably the case for \#9 and \#34). \begin{table*} \caption[]{\label{tspecind}Radio continuum spectral indexes and IRAS fluxes for the detected sources} \begin{tabular}{lrrrrrrrrrl} \hline \# &$\alpha_{3.6-6}$&$\alpha_{\rm 6-ZGS}$&$\alpha_{\rm 3.6-ZGS}$&$\alpha_{\rm 6-NVSS}$&$\alpha_{\rm 3.6-NVSS}$&F$_{12\mu\rm m}$&F$_{25\mu\rm m}$&F$_{60\mu\rm m}$&F$_{100\mu\rm m}$& ID\\ &&&&&&(Jy)&(Jy)&(Jy)&(Jy)&\\ \hline 01 & $-$1.29$\pm$0.90 & -- & -- & -- & -- &&&&&\\ 02 & $-$0.13$\pm$1.10 & -- & -- & -- & -- &&&&& Cand~HII\\ 03 & $>+$1.01 & -- & -- & -- & -- &&&&&Cand~HII\\ 04 & $<-$1.50 & -- & -- & -- & -- &&&&\\ 05 & $-$0.73$\pm$0.62 & -- & -- & $+$0.10$\pm$0.19 & $-$0.16$\pm$0.17 &&&&\\ 06 & $-$1.07$\pm$1.38 & -- & -- & -- & -- &&&&\\ 07 & $<-$0.06 & -- & -- & -- & -- &&&&\\ 08 & $+$0.93$\pm$0.90 & -- & -- & -- & -- &&&&&Cand~HII\\ 09 & $-$0.18$\pm$0.16 & -- & -- & $+$1.45$\pm$0.12 & $+$0.95$\pm$0.07 &&&&&Cand~HII\\ 10 & $-$0.77$\pm$0.76 & -- & -- & -- & -- &&&&\\ 11$^a$ & $+$0.86$\pm$0.30$^a$ & -- & -- & -- & -- &&&&&Cand~HII\\ 12$^a$ & $+$0.86$\pm$0.30$^a$ & -- & -- & -- & -- &&&&&Cand~HII\\ 13 & --$^b$ & -- & -- & -- & -- &&&&\\ 14 & --$^b$ & $-$0.31$\pm$0.33 & --$^b$ & $-$0.43$\pm$0.31 & --$^b$ &&&&\\ 15 & $+$0.23$\pm$0.04 & $+$0.59$\pm$0.09 & $+$0.47$\pm$0.07 & $+$0.51$\pm$0.08 & $+$0.42$\pm$0.06 &250&1400&5900&7500&HII\\ 16 & --$^b$ & -- & -- & -- & -- &&&&\\ 17 & $-$0.29$\pm$0.45 & $-$0.02$\pm$0.19 & $-$0.11$\pm$0.13 & -- & -- &&&&&Cand~HII\\ 18$^c$ & -- & -- & -- & -- & -- &&&&&HII$^c$\\ 19 & -- & -- & $+$0.94$\pm$0.11 & -- & -- &&&&&Cand~HII\\ 20 & $<-$0.46 & -- & -- & -- & -- &&&&\\ 21 & $<-$1.21 & -- & -- & -- & -- &&&&\\ 22 & $-$0.06$\pm$0.67 & -- & -- & -- & -- &&&&&Cand~HII\\ 23 & $<-$2.33 & -- & -- & -- & -- &&&&\\ 24 & $<-$0.62 & -- & -- & -- & -- &&&&\\ 25 & $-$0.94$\pm$0.23 & $-$0.90$\pm$0.11 & $-$0.91$\pm$0.12 & $-$0.77$\pm$0.10 & $-$0.83$\pm$0.11 &&&&\\ 26 & $<-$2.38 & -- & -- & -- & -- &&&&\\ 27 & -- & $-$1.03$\pm$0.13 & -- & $-$0.57$\pm$0.12 & -- &&&&\\ 28 
& -- & -- & -- & -- & -- &&&&\\ 29 & -- & $-$1.09$\pm$0.17 & -- & -- & -- &&&&&Variable?\\ 30 & $+$0.39$\pm$0.36 & -- & -- & $-$0.11$\pm$0.16 & $+$0.04$\pm$0.11 &58&490&$<$5900&$<$7500&HII\\ 31 & $-$0.26$\pm$0.36 & $-$0.10$\pm$0.17 & $-$0.15$\pm$0.12 & $-$0.16$\pm$0.16 & $-$0.19$\pm$0.11 &&&&&HII\\ 32 & -- & -- & $-$0.13$\pm$0.12 & -- & $-$0.06$\pm$0.11 &37&303&2600&$<$7900&HII\\ 33$^c$ & -- & -- & $+$0.42$\pm$0.10 & -- & $-$0.03$\pm$0.10 &79&640&5300&7900&HII$^c$\\ 34 & $-$0.57$\pm$0.37 & -- & -- & $+$0.51$\pm$0.16 & $+$0.18$\pm$0.11 &6.9&34&280&490&HII\\ \hline \end{tabular} \vskip 0.3cm $^a$) Sources \#11 and \#12 are blended together at 6~cm, the separation of the two contribution to the integrated flux is very uncertain, thus we calculated the spectral index using the integrated flux of both components together\\ $^b$) For these sources, due to the confusion and noise in the 8.4~GHz map it is difficult to obtain a reliable estimate of the upper limit on the spectral index\\ $^c$) Source \#18 is inside the extended source \#33. The total integrated flux density (4.5~Jy) has been used to determine the spectral indexes (reported only for \#33). \end{table*} \subsubsection{NVSS sources undetected at high frequency} Most of the NVSS sources in our field (48) are not detected at 6 and/or 3.6~cm. We believe that in most cases the negative spectral index, rather than the different ($u,v$) coverage between the observations, is the main reason for the non-detection at high frequency. The most plausible explanation is that a large fraction of these NVSS sources are extragalactic objects, with a possible contamination from faint planetary nebulae. \begin{figure} \centerline{\psfig{figure=ds1710f06.eps,width=8.5cm}} \caption[]{\label{fnvsslf}Top panel: differential luminosity functions of all (All; thin continuous line) and not detected at high frequency (NHF; thick continuous line) NVSS sources inside our field, and of NVSS sources from two 0.652~sq.deg. areas close to the northern (NGP; dotted line) and southern (SGP; dashed line) galactic poles. Bottom panel: cumulative luminosity functions for the same sources shown in the upper panel.} \end{figure} To check whether the 20~cm flux distribution and source count for the NVSS sources not detected at high frequency are consistent with the population of extragalactic radio sources, we extracted from the NVSS the sources in two areas toward the galactic poles, each of the two with the same extent of our surveyed region. The number of sources extracted toward the northern and southern galactic poles are 36 and 27, respectively, these numbers compare relatively well with the 37 NVSS sources without high frequency counterpart in our field. As additional check, in Figure~\ref{fnvsslf}, we show the differential and cumulative luminosity functions for the sources in our field and those in the areas toward the galactic poles. The luminosity function of all the sources in our field (thin line) show an excess of bright sources with respect to the galactic poles, this excess disappears if we plot only the sources without a high frequency counterpart (thick line). This effect is more clear in the cumulative luminosity function plot (Fig.~\ref{fnvsslf}, lower panel). More quantitatively, the Kolmogorov-Smirnov test on the cumulative luminosity functions gives a probability lower than 40\% that the NVSS sources in the Galactic poles samples and those in our field are drawn from the same distribution. 
This probability rises above 80\% if we remove from our sample the sources detected at high frequency and the well-known galactic high energy source G1915$+$105. \subsection{Effelsberg 5~GHz survey} \label{salt} As mentioned in Sec.~\ref{sintro}, our surveyed region has been covered by the Altenhoff et al.~(\cite{Aea78}) 5~GHz (6~cm) single dish survey. The names and peak flux densities of the seven single dish sources inside or partially within our survey boundaries are listed in Table~\ref{talt}. In the same table, for each source, we report the integrated flux densities of our VLA 6~cm sources within the Effelsberg beam (2.6$^\prime$). \begin{table} \caption[]{\label{talt}Comparison between single dish and VLA 5~GHz sources.} \begin{tabular}{lrrl} \hline \multicolumn{2}{c}{Effelsberg}&VLA&\\ Name & F$_p$ & F$_i$ & Source ID from\\ & (Jy) & (Jy) & Table~\ref{tsrc} \\ \hline 44.786$-$0.490 & 0.2 & -- & Not detected, high rms\\ 45.066$-$0.135 & 0.7 & 0.62 & 11, 12, and 30\\ 45.125$+$0.136 & 5.8 & 5.25 & 13--17, and 31\\ 45.202$-$0.411 & 0.2 & 0.096 & 34\\ 45.341$-$0.370 & 0.2 & 0.002 & 26$^a$ \\ 45.451$+$0.060 & 6.4 & -- & Not mapped at 6~cm\\ 45.475$+$0.130 & 2.1 & -- & Not mapped at 6~cm\\ \hline \end{tabular} \vskip 0.3cm $^a$) This source is known to be variable (e.g. Harmon et al.~\cite{Hea97}). \end{table} For one of the single dish sources (44.786$-$0.490) the peak is outside our survey area. We do not detect this source at either 6 or 3.6~cm, probably because it is resolved out in our interferometric observations. The last two sources in Table~\ref{talt} are in the region covered only at 3.6~cm; they have been detected at this wavelength and correspond to sources [18+19+33] and 32 in Table~\ref{tsrc}. The other four sources have been detected in our 6~cm observations, and our integrated flux densities are in reasonable agreement with the single dish ones, except for 45.341$-$0.370 (our source 26), which is known to be a highly variable source (see e.g. Harmon et al.~\cite{Hea97}). Somewhat surprisingly, in our VLA 6~cm images we recover almost all the single dish flux for the extended complexes 45.066$-$0.135 and 45.125$+$0.136, while about half of the single dish flux is recovered for 45.202$-$0.411. \subsection{IRAS Point Sources Catalogue} To search for far infrared (FIR) counterparts to our detected radio sources, we extracted from the IRAS-PSC2 (Beichman et al.~\cite{Bea88}) catalogue all the sources inside our survey area. In Figure~\ref{fpos}~c) we show the positions of all (43) IRAS point sources inside the observed field. We could find an IRAS counterpart within 100$^{\prime\prime}$ only for 5 of our 3.6 and/or 6~cm sources. In all five cases, the IRAS error ellipse contains the radio continuum source or overlaps with the boundaries of the extended radio sources. In fact, in all five cases the distance between the peak of the radio continuum source and the nominal IRAS position is less than 30$^{\prime\prime}$. The study of the IRAS color-color diagram is a powerful tool to investigate the nature of the FIR sources. Different types of objects tend to populate different parts of the color-color planes. In Fig.~\ref{firascc} we show three of the color-color diagrams that can be constructed using the four IRAS fluxes, and that have been shown to be able to separate different types of galactic sources (e.g.
Eder, Lewis \& Terzian~\cite{ELT88}; Pottasch et al.~\cite{Pea88}; WC89 White, Becker \& Helfand~\cite{WBH91}). In each diagram the contour plots represent the normalized surface density of the colors ([$\lambda_i$,$\lambda_j$]$\equiv log_{10}(F_{\lambda_i}/F_{\lambda_j})$) of IRAS-PSC2 sources within the inner galactic plane, defined as: $|l|\le 90^\circ$, $|b|\le0^\circ\!\!.65$. \begin{figure*} \centerline{\psfig{figure=ds1710f07a.eps,width=5.5cm,angle=-90} \hskip 0.4cm \psfig{figure=ds1710f07b.eps,width=5.5cm,angle=-90} \hskip 0.4cm \psfig{figure=ds1710f07c.eps,width=5.5cm,angle=-90}} \caption[]{\label{firascc} In each diagram: [$\lambda_i$,$\lambda_j$]$\equiv\log_{10}(F_{\lambda_i}/F_{\lambda_j})$; the contour plots represent the normalized surface density of IRAS-PSC2 sources in the region $|l|\le 90^\circ$, $|b|\le0^\circ\!\!.65$. Black filled circles show the colors of the 43 sources in our surveyed region (one source detected only at 100~$\mu$m is not present in the first two plots). Many of the sources only have upper limits at one or more IRAS bands, the colors for these sources are either upper, lower limits, or are undetermined. We have not marked these sources with special symbols as we have not corrected the color-color surface density contours for upper limits. The five IRAS sources with a radio continuum counterpart are marked with plus symbols. } \end{figure*} We note that the 43 IRAS sources in our field tend to populate the color-color planes in the same fashion as the entire inner galactic plane sample (contour plots), which, of course, is what we expected. It is remarkable, however, that all, and only, the IRAS sources detected in radio continuum (marked with plus symbols in the figure) lie in well-defined, low-density parts of the planes. This is the part of the color-color planes where ultra compact HII (UCHII) regions are expected to be found (WC89; Wood \& Churchwell~\cite{WC89b}; Kurtz, Churchwell \& Wood~\cite{KCW94}; White et al.~\cite{WBH91}; Becker et al.~\cite{Bea94}). \section{Discussion} \label{sdis} \subsection{Classification of radio sources} We shall now classify the sources detected in our survey and reported in Table 1 according to morphology, spectral index and coincidence with IRAS sources. Five complexes of sources have all the specifications for being classified as thermal galactic HII regions. They include all the extended sources plus some additional small diameter source in the same area, more precisely [11 + 12 + 30], [13 + 15 + 16 + 17 + 31],~[34],~[18 + 33] and [32] (numbers are from Table~\ref{tsrc}). All these complexes coincide with corresponding sources in the Altenhoff et al.~\cite{Aea78} survey (see Sect.~\ref{salt}) and are now resolved in much more detail. Morphologically they show the classical aspect of a cluster of HII regions, of which G9.62+0.19 is a typical example (Testi et al.~\cite{TFPR98}; \cite{THKR99}), i.e. several sources of different angular sizes (from unresolved to several tens of arcsec) are clustered in the same area. The continuum sources may represent independent UCHII regions born in the same star forming complex but presently observed in different evolutionary phases with the unresolved sources being the youngest and the more extended sources more evolved. Six of the small diameter sources (2, 3, 8, 9, 19, 22) can be classified as ``candidate HII region'' according to their spectral index. No IRAS source is associated with them, but their radio flux is rather low. 
Non-detection at far infrared wavelengths could be either due to the intrinsic weakness of some of these sources or, most probably, due to the incompleteness of the IRAS-PSC in the galactic plane (see also Becker et al.~\cite{Bea94}). The remaining 15 sources (4, 5, 6, 7, 10, 14, 20, 21, 23, 24, 25, 26, 27, 28 and 29) can be classified from the spectral index as non-thermal (probably extragalactic) sources. Only five of these have been detected at 20 cm. These have in general greater integrated flux densities at 6 cm than those not detected at 20 cm (the mean 6~cm flux densities of the two groups are 10 and 2 mJy, respectively), so that the effect can be simply explained as due to our higher sensitivity at 6 cm. All 15 sources have been detected at 6~cm and 4 of them at 3.6~cm as well. Given the area observed at 6~cm (0.620~sq.~deg.) and that observed at 3.6~cm (0.525~sq.deg.), the number of extragalactic sources above the 1~mJy threshold, which we can assume as a mean detection limit for our survey, can be estimated from deep VLA surveys. Following Fomalont et al.~(\cite{Fea91}), at 6~cm we expect 15 extragalactic sources above our 1~mJy threshold, while at 3.6~cm the number is reduced to 9 sources for the same threshold (Windhorst et al.~\cite{Wea93}). Given the small number statistics, these numbers are in relatively good agreement with the source counts in our surveyed area. Becker et al.~(\cite{Bea94}) estimated a total of $\sim 100$ planetary nebulae (PNs) down to a flux limit of $\sim 2.5$~mJy in their 6~cm survey of 50~sq.deg. of the inner galactic plane. This number corresponds to less than 2 PNs expected down to the same flux level in our 6~cm survey region. Thus the contamination from PNs in our source lists should be very small. \subsection{IRAS ``UCHII-type'' sources} In Sect. 3.4 it was pointed out that all the IRAS sources with a corresponding radio counterpart in our survey (5 out of 43) satisfy the color-color criteria to be classified as UCHII regions (WC89). However, with the possible exception of the double source G045.070$+$0.132 and G045.072$+$0.132 (11 and 12), none of the radio sources within 100$^{\prime\prime}$ from the IRAS-PSC position can be classified as a {\it bona fide} UCHII region using the usual definition (Wood \& Churchwell~\cite{WC89b}; Kurtz et al.~\cite{KPPIV99}). The radio continuum sources are extended (non-homogeneous) HII regions, with emission peaks inside them that may appear as UCHII regions when observed with an extended VLA configuration. A typical example is G045.455$+$0.060, which appears as a compact source inside the extended HII region G045.455$+$0.059 (see Figure~\ref{exfig3}); this source has the appearance of an UCHII region in the Wood \& Churchwell~(\cite{WC89b}) survey (their source G45.45$+$0.06). The VLA high frequency and high resolution surveys of IRAS selected UCHII candidates are all biased to the detection of only the most compact and dense ionized gas, due to the spatial filtering of the interferometer, and are unable to detect the extended components. Our results agree with those of Kurtz et al.~(\cite{K99}) and show that, when observed with sufficient sensitivity to extended structures, most, if not all, the IRAS selected UCHII candidates do have extended radio components.
This implies that samples of IRAS-PSC sources selected with the WC89 criteria are contaminated by a {\it substantial} number of {\it older} more extended HII regions (see also Codella, Felli \& Natale~\cite{CFN94}; Ramesh \& Sridharan~\cite{RS97}; Kurtz et al.~\cite{K99}). The number of UCHII regions estimated from the color selected IRAS-PSC counts may be, consequently, overestimated by a large factor. If most IRAS-WC89 sources are indeed associated with extended HII rather than UCHII regions, the lifetime of the IRAS-WC89 color phase of UCHII/HII regions may be much longer than estimated from the early high resolution radio surveys. Consequently, the estimated lifetime of the UCHII phase for O-type young stars is probably much shorter than previously thought (see also Ramesh \& Sridharan~\cite{RS97}). Additionally, we find 6 UCHII candidates in our radio continuum survey without an associated IRAS source. As discussed by Becker et al.~(\cite{Bea94}), this is probably due to the confusion limit of the PSC on the galactic plane, and the generally lower radio luminosity of these sources. However, we note that in our field {\it only} unresolved thermal radio sources are not present in the IRAS-PSC, while {\it all} resolved HII regions are detected in the far-infrared. Incidentally, we note that all the compact thermal radio sources in our survey not associated with IRAS PSC sources are fainter at centimeter wavelengths than those detected in the far infrared, and thus they may be associated with stars of type later than O. However, without knowing the distances it is impossible to draw a final conclusion. In our surveyed region, the percentage of IRAS sources satisfying the WC89 color criteria is 5$/$43 ($\sim$12\%). This is consistent with the percentage found across the entire inner galactic plane ($|l|\le 90^\circ$, $|b|\le 0^\circ\!.6$, $\sim 8$\%). The fraction of WC89 sources in the IRAS-PSC database drops to much lower values outside the inner galactic plane (WC89).

\subsection{Continuum emission from the H$_2$O maser}

During an incomplete low spatial resolution (2$^\prime$) single dish survey of the $l=+45^{\circ}$ field in the H$_2$O 22~GHz maser line, a new maser was detected. The masing gas is probably coincident with a 15 $\mu$m source (F$_{15{\mu}\rm m}$ = 370 mJy) located at $\alpha(2000)=19^{\rm h}12^{\rm m}46^{\rm s}$ $\delta(2000)=10^\circ45^\prime30^{\prime\prime}$, and was interpreted as a candidate young stellar object (Testi et al.~\cite{Tea97}). Therefore, it was interesting to see if any radio continuum emission from an associated UC HII region could be detected. From a careful inspection of the area around the maser, no radio continuum emission was seen above the (local) 3$\sigma$ level (0.6~mJy/beam at 3.6~cm and 1.2~mJy/beam at 6~cm). With the young stellar object hypothesis in mind, there are two possible explanations: 1) the putative UCHII region is intrinsically too weak to be detected, or absent because the exciting star is of late spectral type; or 2) there is an UCHII region, but it is in such an early evolutionary phase that it is optically thick even at 3.6 cm. The lack of radio continuum emission close to H$_2$O masers in high luminosity star forming regions has been amply demonstrated by a radio continuum survey of a large number of masers, which showed that many masers associated with high luminosity sources do not have any close-by radio continuum source (Tofani et al.~\cite{TFTH95}).
Subsequent molecular observations of the masers without continuum emission have indeed confirmed that these are associated with very young star forming regions, since in all cases a hot molecular core was found at the same position (Cesaroni et al.~\cite{CFW99}). To settle the nature of the new maser/15~$\mu$m source, molecular observations in high density tracers are needed, as well as an estimate of its luminosity.

\section{Conclusions}
\label{scon}

The unbiased radio continuum survey of the ISOGAL field at $l=+45^{\circ}$ has resolved the structure of five thermal extended complexes and discovered 21 additional small diameter sources, six of which are candidate HII regions. Comparison with the IRAS PSC shows that all 5 of the extended thermal sources have corresponding FIR emission and that the colors of these sources satisfy the WC89 color criteria for UCHII. Our sources, however, are {\it not} UCHII regions, but are more evolved extended HII regions. This result is consistent with the results of earlier single dish surveys (Codella et al.~\cite{CFN94}) and of a recent survey for extended emission around IRAS-selected UCHII regions (Kurtz et al.~\cite{K99}). We conclude that UCHII counts based on IRAS selected samples are overestimated by a large factor; consequently, the estimated lifetime of the UCHII phase may be substantially reduced, removing the so-called lifetime problem for UCHII regions. The percentage of IRAS sources associated with HII regions is $\sim$10\% in our field, which seems to be a general property of IRAS sources in the galactic plane.

\begin{acknowledgements}
Support from the CNR-NATO Advanced Fellowship program and from NASA's {\it Origins of Solar Systems} program (through grant NAGW--4030) is gratefully acknowledged.
\end{acknowledgements}
{ "redpajama_set_name": "RedPajamaArXiv" }
5,669
This is a list of the rulers of the Brazilian state of Minas Gerais. Minas Gerais initially formed part of the captaincy of Rio de Janeiro, São Paulo and Minas Gerais. By Royal Charter of 9 November 1709 the captaincy of São Paulo e Minas de Ouro was created, separate from that of Rio de Janeiro. By a decree of King João V of 2 December 1720, it was separated from São Paulo. It became a province of Brazil on 28 February 1821 and, after the proclamation of the new regime in 1889, a constituent state of the United States of Brazil (later the Federative Republic of Brazil).

The state of Minas Gerais, as in a republic, is governed by three branches: the executive, represented by the governor; the legislative, represented by the Legislative Assembly of the State of Minas Gerais (ALMG); and the judiciary, represented by the Court of Justice of the State of Minas Gerais and other courts and judges. In addition to the three branches, the state also allows popular participation in government decisions through referendums and plebiscites. The current constitution of the state of Minas Gerais was promulgated in 1989, together with the changes introduced by later constitutional amendments. Executive power is centralised in the Governor of the State, who is elected by universal suffrage in a direct, secret vote for terms of up to four years and may be re-elected for one further term. The office is held by Romeu Zema, a member of the Partido Novo, with Paulo Brant as vice-governor. Ouro Preto was the capital of Minas Gerais between 1721 and the end of the 19th century; in 1897, however, the seat of government was transferred to the newly founded city of Belo Horizonte, because the old Vila Rica could not accommodate the economic and population growth. On that occasion the Palácio da Liberdade was built as the first seat of the Minas Gerais government in Belo Horizonte; since 2010 the seat of government has been the Palácio Tiradentes, located in the Cidade Administrativa de Minas Gerais. The Minas Gerais legislature is unicameral, consisting of the Legislative Assembly of the State of Minas Gerais, which sits in the Palácio da Inconfidência and is made up of 77 deputies elected every four years. In the National Congress, Minas Gerais is represented by three senators and 55 federal deputies. The judiciary has the function of judging, according to the laws created by the legislature and the rules of the Brazilian constitution, and is composed of appellate judges (desembargadores), judges and ministers. Currently, the highest court of the Minas Gerais judiciary is the Court of Justice of Minas Gerais (TJMG). According to the Superior Electoral Court (TSE), in November 2013 the state had a registered electorate representing 10.6% of the country's voters, making it the second-largest electoral college in Brazil.

Rulers of the colonial period (1553 – 1822)

Captaincy of Minas Gerais

In the midst of Brazil's so-called "gold cycle", the captaincy of Minas Gerais was created in 1720. It originated from the splitting of the captaincy of São Paulo e Minas de Ouro. Its capital was Vila Rica (present-day Ouro Preto). Roughly a hundred years later, on 28 February 1821, the captaincy of Minas was transformed into a province, which would become the present state of Minas Gerais with the Proclamation of the Republic. Owing to the gold found in its territory, in the first half of the 18th century Minas Gerais was the economic centre of the colony, with rapid population growth.
This migratory flow had begun at the end of the previous century, when gold was found in the Serra do Sabarabuçu and in the Carmo and Tripuí streams. In 1696 the settlement of Nossa Senhora do Ribeirão do Carmo was founded, which in 1711 became the first town (vila) of Minas Gerais (the present-day municipality of Mariana). The discovery of gold also brought conflicts, such as the War of the Emboabas (1707–1710) and the Revolt of Felipe dos Santos (1720). At the height of gold mining in Minas, 500,000 enslaved Black Africans were brought into the captaincy to do the work of extraction and farming. More than 30% of the population consisted of enslaved people. The Africans known as "Minas", from Ghana, were the most sought after for the diggings, since they had already done this work in Africa, while those from Angola and Mozambique were used in farming. The decline of gold production began around 1750. Portugal needed to increase its revenue and raised taxes, which caused the popular revolt that resulted in the Inconfidência Mineira in 1789.

Rulers of the imperial period (1822 – 1889)

Legend

Rulers of the republican period (1889 – present)

At the beginning of the republican period, governors were called "presidents", a term that lasted until the Revolution of 1930. From 1930 onwards, the head of the state government was appointed by the federal government, and the term used to refer to him was "interventor". The term "governor" appears in the first State Constitution of Minas Gerais, of 1890, but in the following year a new State Constitution was drawn up, in which the term "governor" was replaced by "president". The State Constitution of Minas Gerais of 1891 retained the term "president". Only in 1947, when the first head of the state government was elected after the Vargas dictatorship, did the holder of the office come to be called "governor", which is how the rulers of Minas Gerais have been designated to the present day.

See also

List of governors of the federative units of Brazil
History of Minas Gerais
Minas Gerais
Governors
{ "redpajama_set_name": "RedPajamaWikipedia" }
971
\section{Introduction} 2D supramolecular self-assembly is a powerful tool for creating novel structures from the bottom up~\cite{whitelam15,whitesides02}. A range of systems can exhibit self-assembly given the right conditions: monolayers have been created at near-full packing density~\cite{beton10}, directional halogen bonds allow pyrene derivatives to form molecular networks on gold~\cite{pham14}, and ordered assemblies of millimetre-sized polymers at the perfluorodecalin-water interface~\cite{bowden97} are also possible via self-assembly. The necessary conditions for self-assembly of a given molecular network will depend on several different factors; for example, the combination of both dynamics and hydrogen bonding is critical for an explanation of the assembly of 1,4-substituted benzenediamine nanostructures on a gold substrate~\cite{haxton13}. Cyclohexa-m-phenylene will form different molecular networks, depending on whether it is adsorbed onto copper, silver, or gold substrates~\cite{bieri10}. It has been shown that changing the solvent, temperature, or molecular unit of a supramolecular network can alter the effective intermolecular interaction strength, leading to supramolecular networks with different structure, yet with molecular units of the same shape~\cite{betonAnother11}. Furthermore, it has been recently demonstrated that large chiral domains can be formed from achiral molecules~\cite{hu16}. Since many self-assembled systems have been shown to acquire useful or unusual properties, there is significant value in the rational design of self-assembled molecular networks. A variety of techniques have been implemented to create self-assembled structures, such as deposition of molecules in ultra-high vacuum conditions, use of scanning tunnelling microscopy to probe the structure, and the creation of regular molecular patterns via annealing~\cite{beton11,pham14}. Another avenue of self-assembly research is in DNA origami, where 2D crystalline arrays of origami tiles have been built~\cite{liu10}, and complex nano-scale shapes have been created out of self-assembled DNA strands~\cite{wei12}. Design principles have an advantage over the trial-and-error approach; for example, it can be predicted that 1,4,5,6-naphthalenetetracarboxylic diimide-melamine adsorbed molecules will not assemble a honeycomb array because of the periodicity of the underlying silver-silicon substrate~\cite{beton11}. This kind of reasoning can be used to inform the choice of new experiments. Aside from space-filling tilings, several studies have explored rational design of shapes and structures from site-specific interactions, or by entropic self-ordering~\cite{barnes09}. This includes complexity analysis of minimal sets of building blocks~\cite{ahnert10}, the use of patchy particles to form quasicrystals~\cite{doye12}, and self-assembly of charged soft dumbbells~\cite{sciortino13}. Self-assembly principles are also critical for the production of many 3D structures, such as viral capsids~\cite{grime16} and supramolecular coordination complexes~\cite{cook13}. However, the design of specific interactions to assemble more complicated structures, say of lower symmetry, or with particular wallpaper groups~\cite{schattschneider78}, remains a significant challenge. When molecular tilings are fully-packed, lattice models can be used effectively, since the molecules become interlocked.
In particular, the statistical mechanics of dimer packings on the square lattice has been extensively studied since the work of Fisher~\cite{fisher61} and Kasteleyn~\cite{kasteleyn61,kasteleyn63}, who gave an exact expression for the partition function of the purely entropic model (zero interaction energy) in terms of Pfaffians, and this work was expanded by others~\cite{rokhsar88,moessner01}. Subsequent studies have been primarily entropic or with achiral nearest neighbour aligning interactions~\cite{kundu14,ramola15}. Theoretical tools can be used to describe the relevant behaviour of self-assembled molecular systems. Molecular dynamics simulations are used for realistic time evolution of self-assembly~\cite{martsinovich10}, partition functions encode the thermodynamics of assembled systems~\cite{jankowski11}, and investigation of assembly pathways provides new methods to generate ordered structures~\cite{nguyen11}. Furthermore, mathematical principles offer critical understanding, such as height functions to provide efficient computational sampling~\cite{korn04}, topological characterisation of the dynamic connectivity of states, and phase transition analysis to describe the temperature dependence of system properties~\cite{kundu13}. However, prediction of experimental results remains a challenge. We propose a simple analogue model system for molecular networks and use it to find likely ordered configurations, and conversely, to determine conditions which would favour desired structures. An important method is the characterisation of possible energy assignments via nearest-neighbour interactions, which Istrail et al.~\cite{istrail00} have introduced to identify rhombus motifs on a hexagonal lattice. We use this method to show how to design for the symmetry group of the molecular network, the degeneracy of the ground state, and robustness to entropic transitions, as dependent on the form of nearest-neighbour interactions and in the presence of simple kinetics. In this paper, we focus on the domino system, putting emphasis on the design of ground states with desired properties. Many of the methods generalise in a straightforward way for other polyomino shapes. Enumeration of configurations with various symmetries gives design principles for the wallpaper group of an ordered polyomino configuration, including the chirality of the polyomino packing. Degeneracy of the ground state is likewise an important quantity, since in experimental applications, a unique ground state with known properties is often preferred. Furthermore, we describe how to design for robustness of the ground state, both in terms of local kinetic moves (for dominoes), and for entropic phase transitions (for polyominoes). \section{Methods} To represent high-density packings of molecules adsorbed to a substrate, we use fully-packed polyomino configurations on a 2D square lattice. Furthermore, we consider edge-specific intermolecular interactions (see, for example, Figure~\ref{hull4by4}(a)) and describe the energy landscape in terms of the geometry of the state space. This identifies the lattice configurations that can be designed as ground states or as excited states. The nature of each ground state can also be used to indicate its robustness to an entropically driven phase transition. Our principal method is to create a complete construction of configuration space, and to represent it in a natural way.
The polyominoes interact through site specific interactions with energies $\varepsilon_{AB}$ between faces of type $A$ and $B$, so that the total energy of a particular configuration is determined by the number of interactions $n_{AB}$ of each type. A system made up of one kind of polyomino with $Q$ different face types has $Q(Q+1)/2$ different pairs of face types and hence this many interaction parameters $\varepsilon_{AB}$. Therefore, the general Hamiltonian for the system is of the form \begin{equation} H = \sum_{A=1}^{Q} \sum_{B\geq A}^{Q} n_{AB} \varepsilon_{AB} \end{equation} However, for a fully-packed system, there are packing constraints that reduce the effective dimension of the problem. There is only one type of polyomino in the system, and each face of each polyomino is in contact with a face of another polyomino. This gives a set of $Q$ linear equations relating the number of ``bonds'' to number of faces, which is in turn related to the number of plaquettes ($NM$ on an $N\times M$ lattice) of the periodic lattice containing that configuration \begin{equation} NM = n_{AA} + \sum_{B=1}^{Q} n_{AB} \end{equation} We can use these equations for each face type $A$, giving $Q$ equations to eliminate $Q$ interaction counts. All of the same-face interactions $n_{AA}$ can be eliminated to leave only $Q(Q-1)/2$ independent interaction parameters, after suitable redefinition of the parameters $\varepsilon_{AB}$. Similarly, configurations can be classified by a $Q(Q-1)/2$ component vector of interaction counts. For the specific case of dominoes, and allowing for chiral interactions, there are 3 face types and, therefore, 6 different possible pairs of faces, but only 3 of the interaction counts are linearly independent, giving a 3D parameter space. We can choose our 3 interaction counts to be $n_{ab}, n_{ac}$ and $n_{bc}$ as shown in Figure~\ref{hull4by4}(a), thereby giving the chiral-interaction domino Hamiltonian \begin{equation} \label{eq:Hamiltonian} H = n_{ab} \varepsilon_{ab} + n_{ac} \varepsilon_{ac} + n_{bc} \varepsilon_{bc} . \end{equation} Individual configurations (indexed by $i$), can be labelled by the values of these interaction counts $\vec{n}^{i}$ and associated energy $E^{i} = \vec{n}^{i}\cdot \vec{\varepsilon}$. Each configuration therefore exists as a point in an abstract space ($\vec{n}$-space), with directions being the counts for each independent interaction. In general, there will be many distinct domino configuations with the same value of $\vec{n}$, meaning that they have equal energy under any choice of $\vec{\varepsilon}$. Using the Dancing Links algorithm~\cite{knuth00}, we have exhaustively enumerated all possible fully packed configurations of these dominoes on an $N\times M$ periodic square lattice, up to $N=M=8$. In contrast to previous studies of tetromino fluids~\cite{barnes09,woszczyk15}, we are interested in the crystalline fully-packed configurations. The system of fully-packed dominoes has an equivalent representation as a height model~\cite{henley97,kenyon01}. In the height representation, adjacent vertices of the underlying square lattice are given integer values such that moving anticlockwise around an even plaquette will decrease the height by 3 when crossing a domino, and increase the height by 1 otherwise. This gives a unique representation for each domino configuration as a set of vertex heights, apart from a constant shift to the heights at all vertices. 
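As an aside, the bookkeeping behind Eq.~(\ref{eq:Hamiltonian}) and the identification of candidate designable ground states used in the Results below can be sketched in a few lines of Python. This is only an illustrative sketch, not the code used in this work: the interaction-count vectors $(32,32,32)$, $(34,20,24)$ and $(20,34,24)$ are taken from the $8\times 8$ results quoted later in the text, the remaining rows are made up, and the convex hull is computed with SciPy.

\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

# Each row is (n_ab, n_ac, n_bc) for one fully packed configuration.
n_vecs = np.array([
    [32, 32, 32],
    [34, 20, 24],
    [20, 34, 24],
    [ 0,  0, 32],
    [16, 16,  8],
    [24, 24, 16],
])

def energy(n_vecs, eps):
    """E_i = n_ab*eps_ab + n_ac*eps_ac + n_bc*eps_bc (the domino Hamiltonian)."""
    return n_vecs @ np.asarray(eps, dtype=float)

eps = (-1.0, -1.0, -1.0)                 # favours all three mixed-face contacts
E = energy(n_vecs, eps)
print("ground-state count vector:", n_vecs[np.argmin(E)])

# Candidate designable ground states are the vertices of the convex hull
# of the count vectors in n-space (requires non-coplanar points in 3D).
hull = ConvexHull(n_vecs)
print("vertex (designable) states:", n_vecs[hull.vertices])
\end{verbatim}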
For a periodic domino configuration, the change in height has a mean value along both vertical and horizontal directions, which we will refer to as the height change per plaquette (a 2D vector). Furthermore, the coarse-grained properties of this height model can be used to show that for one set of interaction parameters on the infinite lattice~\cite{alet05}, the system undergoes a Berezinskii-Kosterlitz-Thouless-type phase transition. For each configuration, we employ a discrete space algorithm to identify the wallpaper group of a tiling. This is based on the work in Ref.\citenum{schattschneider78}, however our algorithm is optimised for configurations which have an underlying lattice and is therefore ideal for polyomino tilings. Polyomino tilings on a square lattice cannot have hexagonal symmetry and, therefore, can be classified into one of 12 of the full 17 wallpaper groups. The translational symmetry must include that of the original lattice, but may be larger. Any additional symmetries correspond to a subgroup of $\mathbb{Z}_N\times\mathbb{Z}_M$ and contain at least one subgroup of prime order. If a prime $p$ divides only one of $N$ or $M$, then there will be one subgroup with that order. If $p$ divides both $N$ and $M$, there will be $p\!+\!1$ subgroups. Checking such subgroups for each prime which divides $NM$ is the necessary and sufficient condition to check that there are no extra translational symmetries. This greatly increases the efficiency of the algorithm over a naive implementation. We also check for rotations, glides, or mirror symmetries. If we find there are no extra symmetries, the algorithm is finished, the wallpaper group is p1. If there are symmetries in the configuration, we first find the shortest two linearly independent translation vectors of the periodic tiling. This also tells us the lattice type of the pattern (square, rectangular, rhombic or oblique), and the size of the primitive cell. For each of the lattice types, there are a number of compatible wallpaper groups. This is identified by searching for the appropriate discrete symmetries. \section{Results} \subsection{Design of Crystalline Ground States} The energy landscape of domino packings can be understood by visualising the set of all possible configurations in the $\vec{n}$-space. In this space, planes normal to the vector $\vec{\varepsilon}$ are necessarily iso-energetic. Extremal values of the scalar projection of $\vec{n}^{i}$ onto $\vec{\varepsilon}$ correspond to configurations with maximal or minimal energies. Hence configurations that lie on the boundary of the convex hull of points in this space represent packings that are ground states for an appropriately chosen $\vec{\varepsilon}$. The domino configurations that can be made unique energetic ground states are those corresponding uniquely to a vector $\vec{n}$ that is a vertex of the convex hull, provided this vertex corresponds to a single lattice configuration. \begin{figure}[h] \includegraphics[width=\linewidth]{hull4by4.pdf} \caption{\label{hull4by4}Configurations of dominoes on the $4 \times 4$ lattice. (a) Example domino collection, with faces $a$-$c$ labelled and coloured rectangles representing interactions between domino faces. (b) Points corresponding to configurations in $\vec{n}$-space, and the boundary of the convex hull is shown as dark lines. Vertex points are red, other boundary points are black, and interior points are blue. 
The red arrow gives an example vector of interaction parameters, such that the top-right configuration is the ground state. First excited states are given by the two points which the dashed line passes through. Also shown are the specific patterns for the vertex configurations, along with their wallpaper group and height change per plaquette.}
\end{figure}

An illustrative example is the $4\times 4$ lattice on which there are 13 possible configurations, some of which are displayed in Figure~\ref{hull4by4}(b). Of the full set of configurations, there is one with 4-fold rotational symmetry, 11 with 2-fold symmetry, and one without rotational symmetry. On the $4\times 4$ lattice the packings are further restricted by the condition $n_{ab}=n_{ac}=\frac{NM}{2}-n_{aa}$ so that the allowed configurations all lie in a 2D plane of the full configuration space. The hull is a quadrilateral so that there are four vertex ground states, depicted as red points in Figure~\ref{hull4by4}(b). The corresponding configurations happen to be unique to their position in $\vec{n}$-space, and have wallpaper groups pmm, p4g, pgg, and cmm (going anticlockwise from upper-right in Figure~\ref{hull4by4}(b)). These 4 configurations are therefore the possible non-degenerate ground states. For example, using a vector of interaction parameters $\vec{\varepsilon}=(-1,-1,-1)$, the p4g configuration can be designed as the ground state. This ground state has 4-fold rotation symmetry, primitive cell of area 8, and is non-degenerate for this combination of interaction parameters. However, if instead the interaction vector is $\vec{\varepsilon}=(-1,-1,-2)$, the ground state will be made up of 3 configurations, including the p4g configuration, which is now degenerate under this choice of interaction parameters.

\begin{figure}[h]
\includegraphics[width=\linewidth]{firstexcited}
\caption{\label{firstexcited}Sequence of local rearrangements of domino configurations on the $4 \times 4$ lattice, starting from the all-aligned configuration, and then moving to low-energy excitations. Arrows indicate local rearrangements. $E_{rel}$ is the energy relative to the all-aligned configuration, under the choice $\varepsilon_{bc}=\varepsilon_{aa}=-1$ and all other components zero. This is the same as the example system given in Figure~\ref{hull4by4}, and here we visually give an example of an energy barrier for local rearrangement to a first excited state. Also, there is no set of local rearrangements that connects the lower-left configuration to the all-aligned configuration.}
\end{figure}

\subsection{Kinetically Restricted Design} One can introduce kinetics into the model via local moves, which in this case take the form of rotations of adjacent domino pairs (each of these moves is called a flip). As viewed from the height model representation, the height change per plaquette gives the criteria for domino configurations to be connected via flips, as described in Ref.\citenum{saldanha95}. For the four previously mentioned vertex states, the height change per plaquette is (respectively) $(0,0)$, $(0,0)$, $(1,1)$, and $(2,0)$, as shown in Figure~\ref{hull4by4}(b). To illustrate the concept of local moves, the domino pattern in the top left can be reached from the pattern in the top right by a sequence of flips. This sequence of local rearrangements is guaranteed to exist, since the two configurations have the same height change per plaquette of $(0,0)$.
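A minimal sketch (again, not the code used in this work) of this elementary flip move is given below; the representation of a configuration as a dictionary mapping each cell to the cell occupied by the other half of its domino is an assumption made only for this illustration.

\begin{verbatim}
def try_flip(partner, i, j, N, M):
    """Attempt to flip the two dominoes covering the 2x2 block whose
    top-left cell is (i, j) on an N x M periodic lattice.  `partner`
    maps each cell (row, col) to its domino partner cell.  Returns True
    and updates `partner` in place if the block holds two parallel dominoes."""
    a, b = (i, j), (i, (j + 1) % M)
    c, d = ((i + 1) % N, j), ((i + 1) % N, (j + 1) % M)
    if partner[a] == b and partner[c] == d:      # two horizontal dominoes
        partner[a], partner[c] = c, a            # rotate them to vertical
        partner[b], partner[d] = d, b
        return True
    if partner[a] == c and partner[b] == d:      # two vertical dominoes
        partner[a], partner[b] = b, a            # rotate them to horizontal
        partner[c], partner[d] = d, c
        return True
    return False

# Example: a 2x2 block covered by two horizontal dominoes.
conf = {(0, 0): (0, 1), (0, 1): (0, 0), (1, 0): (1, 1), (1, 1): (1, 0)}
print(try_flip(conf, 0, 0, N=2, M=2))            # True; dominoes are now vertical
\end{verbatim}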
The orange arrow in Figure~\ref{hull4by4}(b) is an example vector for $\vec{\varepsilon}$, and in this case, the lowest value of $E = \vec{n} \cdot \vec{\varepsilon}$ corresponds to the configuration in the upper-right. Taking the vector of interaction parameters $\vec{\varepsilon}$ to have unit magnitude, the transition temperature to the higher entropy macrostate is $3.205$, in natural units. Furthermore, the two blue points on the boundary of the convex hull shown in Figure~\ref{hull4by4}(b) (close to the upper-right) constitute the first excited states of the Hamiltonian. These points are iso-energetic because they have equal scalar projection onto the example $\vec{\varepsilon}$ vector. Each of these two points corresponds to two domino configurations, but not all of these configurations can be reached from the ground state by local rearrangements~\cite{henley97}. Consequently, using only local moves, some configurations are completely inaccessible from the ground state, while others can only be reached by going through higher-energy configurations, as seen in Figure~\ref{firstexcited}. For a domino configuration on a rectangular cell with periodic boundaries, not all configurations are connected by flips, which naturally gives a partition of configurations into sets of dynamically connected configurations. Given two dynamically connected configurations, there are generally several possible ways to make sequences of flips to transform from one configuration to the other. The transformation between them has an effective energy barrier as great as the lowest energy sequence of local rearrangements between them. In addition to energetic design, it is possible to design ground state configurations that are kinetically isolated, meaning they cannot be converted into any other configuration via local moves, as they occupy a partition of size 1. For example, choosing $\vec{\varepsilon}=(-1,-1,1)$, we get the Herringbone pattern as unique ground state, which is shown in the lower-left of Figure~\ref{hull4by4}(b). For a system that allows only local rearrangements, this state will be topologically protected, and the system will not be able to explore the phase space of configurations. More generally, a given configuration will only be connected to some subset of the other configurations via local moves~\cite{saldanha95}. With the exception of the extremal case, each connected component is exactly the set of configurations with the same height change per plaquette. This separates configurations according to kinetic accessibility, as can be seen in Figure~\ref{densityandconnections}(b), where each connected component is depicted separately, for the $8\times 8$ lattice. If only local rearrangements are possible after adsorption, then the system is trapped in the connected component of the initial configuration, thereby giving the system a new set of possible effective ground states (shown in Figure~\ref{densityandconnections}(b)), which are not necessarily the true thermal ground state. This is a consequence of the fully-packed behaviour, which causes local moves to be non-ergodic in the full space of configurations. \begin{figure*}[t] \includegraphics[width=\linewidth]{densityandconnections} \caption{\label{densityandconnections}(a) Points correspond to the configurations in $\vec{n}$-space which exist on the boundary of the convex hull of the set of configurations which can be made by dominoes on an $8\times 8$ lattice. 
The vertex points on the achiral plane are the same as for the $N=M=4$ case. Away from the achiral plane, the convex hull has 12 vertices corresponding to 24 configurations: 6 p1 and 18 p2. Configurations not on the achiral plane have cyclic (chiral) point groups. The line $n_{ab}=n_{ac},n_{bc}=0$ consists of the configurations with maximal $L^{1}$ norm of height change per plaquette, and the line $n_{ab}=n_{ac}=0$ contains the configurations which have only dominoes of one orientation. Also, the log of density of states is visualised as a heat map, with dark colours signifying high density. (b) The 7 non-trivial and inequivalent connected components of the $8\times 8$ lattice are shown in $\vec{n}$-space, with vertex points in red and other boundary points in black. A shadow is given to show the convex hull of the set of all domino configurations on the $8\times 8$ lattice. Alongside each of the 7 connected components is given the height change per plaquette.}
\end{figure*}

\subsection{Chiral versus Achiral Configurations} Configurations on the $4\times 4$ lattice are all achiral, in the sense that it is not possible to favour one configuration over its mirror image energetically. This is a consequence of the additional condition $n_{ab}=n_{ac}=\frac{NM}{2}-n_{aa}$ on the number of interactions of each type. This is lifted for any larger square lattice and the set of possible configurations defines a 3D region in the $\vec{n}$-space. The convex hull for $N=M=8$ is shown in Figure~\ref{densityandconnections}(a); this lattice has 1,224,518 configurations, divided into 1,551 different position vectors in $\vec{n}$-space. Packings that also arise on the smaller $4\times 4$ lattice lie in the plane $n_{ab}=n_{ac}$, a plane of mirror symmetry for all configurations. In comparison to the $4 \times 4$ lattice, the majority of configurations on the $8 \times 8$ lattice lie strictly within the boundary of the convex hull of points in $\vec{n}$-space. There are 12,237 boundary configurations, meaning that about 1\% of configurations of the $8\times 8$ lattice can be designed as ground state; of these configurations, only 4 achiral configurations and 2 chiral pairs can be designed as unique lowest-energy states (a strong limitation on the possible choices for the design of unique ground states). Since we have an enumeration of states, we can design for desired properties by choosing the ground state which has the favoured properties. For example, on the $8 \times 8$ lattice, the vertex state $\vec{n} = (32,32,32)$ corresponds to exactly one configuration, which has 4-fold rotational symmetry. It is possible to design for 4-fold symmetry by choosing an appropriate combination of interaction energies to make this configuration the unique energetic ground state. One such choice would be $\vec{\varepsilon}=(-1,-1,-1)$, but generally, $\vec{\varepsilon}$ can be chosen as any solution to the set of linear inequalities which define the energy of the ground state to be lower than the energies of all other states. Conversely, one could design for a ground state with no rotational symmetry by finding a vertex state which corresponds to only configurations without rotational symmetry. For the $8 \times 8$ lattice, there are only two such vertex states: $\vec{n}=(34,20,24)$ and its chiral twin $\vec{n}=(20,34,24)$. The permitted periodicity of the configuration is important to this problem.
For the $4 \times 4$ lattice, all ground states have some rotational symmetry, so it is not possible to design for a ground state without rotational symmetry in that case. The degree of handedness~\cite{efrati14} of a domino packing can be given a qualitative value, using the distance from the achiral plane, defined as $|n_{ab}-n_{ac}|/\sqrt{2}$. To design an extremally chiral configuration, the vector of interaction energies should have the direction orthogonal to this plane, $\vec{\varepsilon}=(-1,1,0)$, so that the system will show a strong preference for one type of chiral interaction compared to the other. This is not the only possible choice for the vector of interaction energies that gives rise to a chiral ground state. For the $8\times 8$ lattice, 13.8\% of the interaction parameter space corresponds to chiral ground states. However, to design for a non-degenerate chiral ground state, only 3.7\% of parameter space is available. This would require a greater control of the interaction parameters, which may be more challenging experimentally. To design for a very high degeneracy ground state, which is not simply the highest entropy macrostate, it is also necessary to have precise control of the interaction parameters. A striking feature of the boundary of the convex hull for $N=M=8$ is the presence of faces with many distinct states, corresponding to ground states with high degeneracy for energy vectors $\vec{\varepsilon}$ perpendicular to either of these faces. There are two such faces, distinguished by their chirality. The equation for the normal of one of these faces is $NM=n_{ab}+n_{bc}$, meaning that the direction of the vector of interaction parameters has zero freedom: it must be chosen precisely. Also, it is possible to traverse the $\vec{n}$-space of this face by local rearrangements, meaning that this ground state is highly degenerate, even when considering kinetic constraints. However, it is not possible to reach all configurations which lie on this face, since there are some points that correspond to both locally accessible and inaccessible configurations.

\subsection{Robustness Against Temperature-Induced Phase Transitions} We now turn our attention to the selection of interactions that encode particular phase transitions. The set of parameters $\vec{\varepsilon}$ defines a density of states in energy as the number of configurations associated with points $\vec{n}$ in planes orthogonal to the energy vector; this density typically increases away from the extremal boundary configurations toward the centre of the convex hull (Figure~\ref{heatmapdos}(a) and (c)), leading to temperature-induced phase transitions. A peak in heat capacity indicates the temperature at which this transition takes place. In fact, in the particular case of $\varepsilon_{bc}=-1$ and all other components zero, if the periodicity constraint is removed, it is already established that a Berezinskii-Kosterlitz-Thouless phase transition occurs~\cite{alet05}.

\begin{figure}[h]
\includegraphics[width=\linewidth]{thirdfigure}
\caption{\label{heatmapdos}The set of configurations considered here are the domino configurations which fit into an 8-by-8 lattice. For (a) and (b) we confine $\vec{\varepsilon}$ to the achiral plane, and for (c) and (d) we set $\varepsilon_{bc}=0$. Shown in (a) and (c) is the degeneracy of configurations; the colour scale goes from low (blue) to high (red) and represents the logarithm of the degeneracy.
(b) and (d) give a heat capacity contour map, against energy interaction strengths, where the variation of energy with temperature is taken in the radial direction from the origin.} \end{figure} In the general case, one can map heat capacity as a function of the magnitude and direction of the $\vec{\varepsilon}$ vector. In Figure~\ref{heatmapdos}(b), heat capacity is shown as a function of the $\vec{\varepsilon}$ vector, where we have taken the chiral interaction energy $\varepsilon_{ab}-\varepsilon_{ac}$ to be zero. The four outer regions of Figure~\ref{heatmapdos}(b) correspond to the four vertex configurations, given by the vertices in Figure~\ref{heatmapdos}(a), which visualises the projection of the density of states onto the achiral plane. Importantly, the peak of the curve (as seen going radially outward) in Figure~\ref{heatmapdos}(b) gives an indication of the transition from ground state to the higher entropy macrostate. We can design a ground state of high degeneracy by choosing interactions to be $\vec{\varepsilon}=(-1,-1,-2)$, thereby selecting the edge $n_{bc}=NM-(n_{ab}+n_{ac})/2$ as ground state, which has high degeneracy, as can be seen in Figure~\ref{heatmapdos}(a), since the points on the upper-left boundary have fairly high degeneracy. As shown in Figure~\ref{heatmapdos}(b), this choice corresponds to a relatively small heat capacity peak, which is between the parallel peaks in the lower-right quadrant. The transition away from the ground state occurs (for the $\vec{\varepsilon}$ given above, and natural units) at temperature 1.51104. This transition temperature is unusually high due to the high degeneracy of the ground state. A further example is a transition that breaks chirality, as shown in Figure~\ref{heatmapdos}(d) for the subspace $\varepsilon_{bc}=0$. The upper-left and lower-right regions in Figure~\ref{heatmapdos}(d) are the left and right handed chiral ground states. The two other predominant regions of the phase diagram are the upper-right region, corresponding to all configurations of full orientational order, and the lower-left region, which corresponds to the collection of configurations with zero dominoes end-to-end. These regions indicate ground states which could be realised over a range of experimental parameter values. The heat capacity peak in the chiral direction $\vec{\varepsilon}=(-1,1,0)$ is less sharp than a generic direction, due to there being several chiral states of similar energy, thereby giving a smoother transition between the (chiral) ground state and disordered state. The ground state established by $\vec{\varepsilon}=(-1,1,0)$ consists of 7 chiral configurations, and the transition away from this ground state occurs at temperature 0.787068 (in natural units). \begin{figure}[h] \includegraphics[width=0.8\linewidth]{polyominofigure} \caption{\label{polyominofigure}The convex hull boundary of the set of linear trimers that can be made on a $9\times 9$ lattice, plotted in $\vec{n}$-space for non-chiral interactions. Red points show vertices and black points are boundary points of the convex hull. The colours purple, green and brown have been used to indicate points on 3 different faces of the convex hull.} \end{figure} Keeping the above examples in mind, some general rules become apparent for the design of ground states that are robust to temperature-induced phase transitions. 
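Before stating those rules, we note that transition temperatures of this kind can be estimated directly from the enumerated density of states. The following is a minimal sketch (not the code used in this work, and with made-up degeneracies purely for illustration), taking $k_B=1$ as in the natural units used throughout.

\begin{verbatim}
import numpy as np

# Map from interaction-count vector (n_ab, n_ac, n_bc) to its degeneracy.
# These entries are invented for illustration only.
counts = {(32, 32, 32): 1, (30, 30, 28): 40,
          (24, 24, 20): 5000, (16, 16, 12): 200000}

def heat_capacity(counts, eps, T):
    """C(T) = (<E^2> - <E>^2) / T^2 with k_B = 1."""
    n = np.array(list(counts.keys()), dtype=float)
    g = np.array(list(counts.values()), dtype=float)
    E = n @ np.asarray(eps, dtype=float)
    w = g * np.exp(-(E - E.min()) / T)   # shift by E.min() for numerical stability
    Z = w.sum()
    e1 = (w * E).sum() / Z
    e2 = (w * E**2).sum() / Z
    return (e2 - e1**2) / T**2

eps = (-1.0, -1.0, -1.0)
Ts = np.linspace(0.2, 6.0, 200)
C = [heat_capacity(counts, eps, T) for T in Ts]
print("estimated transition temperature:", Ts[int(np.argmax(C))])
\end{verbatim}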
When the vector of interaction parameters has greater magnitude, the interactions are overall stronger, which raises the transition temperature to the high-entropy macrostate. Also important is the direction of the vector of interaction parameters, since this will determine the smoothness of the transition from the ground state to the high-entropy macrostate, as well as influencing the temperature at which this transition occurs. Robustness against transition to the high-entropy macrostate is also strongly affected by the kinetic restrictions we mentioned previously. Due to these kinetic restrictions, the ground state configurations with greater height change per plaquette are able to access a smaller set of excited states via local rearrangements. In particular, for configurations with extremal height change per plaquette, there is no possibility for rearrangement via domino pair flips. \subsection{Polyominoes} Many of the methods and techniques used here will translate directly to other kinds of polyomino, for example longer rectangular polyominoes~\cite{kundu14} or T-tetrominoes~\cite{korn04}. For general polyomino shapes a convex hull construction can be used to identify possible ground states. As a brief illustration we show in Figure~\ref{polyominofigure} the convex hull for non-chiral linear trimers. Packing constraints have been used to reduce the effective dimension of the problem and properties of the configurations have been computed, such as the wallpaper groups, using direct generalisations of methods from the domino system. Similar to domino configurations, we find that for linear trimers, there are possible ground states with especially high degeneracy. These states correspond to highly populated faces of the convex hull in $\vec{n}$-space and, again, it is possible to find local move sets which traverse the faces. Specifically, for linear trimers on the $9\times 9$ lattice, three highly degenerate faces of the convex hull meet at one vertex state. The three faces are shown by differently coloured individual states on each face in Figure~\ref{polyominofigure}. All three faces are close to being parallel, such that only 1.28\% of interaction parameter space lies in the region between the three highly degenerate states. This means that each of these faces can be realised as the lowest-energy state under similar interaction parameters, resulting in sensitive dependence on the form of the nearest-neighbour interactions. Each of these strongly degenerate ground states has a high transition temperature to the high-entropy macrostate, analogous to the domino system. For example, the vector of interaction parameters $\vec{\varepsilon}=(0,0,1)$ corresponds to a highly degenerate ground state with a transition temperature of 1.50625, whereas the choice $\vec{\varepsilon}=(0,-1,0)$ corresponds to a low-degeneracy ground state with a transition temperature of 0.73366. In the case of linear trimers, there is also a concept of a height function~\cite{jacobsen07}. However, the question of connectivity of configurations via local moves is not as straightforward as for dominoes. \section{Discussion and Conclusions} We have shown how properties of periodic 2D supramolecular networks can be selected in terms of nearest-neighbour interactions between individual molecules. We describe details of the relationships between the properties of the molecular network that forms and the parameters that brought it about.
Furthermore, we demonstrate how these relationships can be used as design principles for informing the conditions needed to create a given desired property. A range of properties have been described in terms of design, such as: degeneracy and symmetry of the ground state, chirality, kinetic trapping, and entropic robustness. These properties depend strongly on the type of nearest-neighbour interactions, motivating visualisation of the set of configurations in $\vec{n}$-space as a design tool. In particular, we have found the possibility of high-degeneracy and kinetically trapped ground states. Many experiments have previously demonstrated ordered domains at very high packing fraction~\cite{hu16,bieri10,beton11,pham14,betonAnother11}, and our results indicate new directions in which to take these kinds of experiment. There are many directions for future work. We have only demonstrated fully-packed configurations, for which there already exists a significant literature on the corresponding lower-density systems. The calculation of a convex hull to determine ground states becomes much more difficult in higher dimensions, which restricts the number of different interaction types we can meaningfully consider simultaneously. An approach based on sampling of periodic configurations, rather than enumeration, could bring greater insight. For an algorithm performing depth-first enumeration, it is possible to modify the algorithm to perform Markovian dynamics on the partially completed configuration space, such that completed configurations are sampled with equal probability. This would make it possible to investigate configurations of larger periodicity. The principles and methods described here will hopefully provide inspiration for the design of self-assembling monolayers. In this paper, we have kept primarily to domino configurations, but the methods presented are general for systems of any other type of polyomino. However, only some polyominoes have a currently known height function~\cite{pak00,jacobsen07,korn04}. For those that do have a height function, this does not always give a sufficient condition for configurations to be connected by local moves. Nonetheless, it may be possible to use those height functions to look for necessary conditions for connectivity by local moves. Another complication related to general polyominoes is that the $\vec{n}$-space of other polyominoes can be of dimension higher than three, making the convex hull more difficult to visualise. In terms of design, we would like to stress that it is the convex hull of configurations in $\vec{n}$-space which gives the relevant information on the possible ground states of the system, under very general nearest-neighbour interactions. \begin{acknowledgement} This work was partially supported by the UK EPSRC through Grant No. EP/I01358X/1 (JN and GPA). \end{acknowledgement} \input{joelarxiv.bbl} \end{document}
{ "redpajama_set_name": "RedPajamaArXiv" }
2,470
Charles Prosper Eugène Schneider, also known as Eugène Schneider II (French: Eugène II Schneider; 29 October 1868 – 17 November 1942), was a French industrialist, head of Schneider-Creusot and other works in France, politician and inventor. In 1923, he was awarded the John Fritz Medal. Biography Early life Schneider was born on October 29, 1868, in Le Creusot, rural France. His father, Henri Schneider, was a businessman and politician. His paternal grandfather, Eugène Schneider, was the co-founder of Schneider-Creusot with his grand-uncle Adolphe Schneider in 1836. He grew up at the Château de la Verrerie in Le Creusot. Career Schneider was appointed as co-chairman of Schneider-Creusot in 1896. He became its sole chairman in 1898. The company dominated the steel and armaments sector of France and much of central Europe. He served on the boards of directors of the Crédit Lyonnais, Chemins de fer de Paris à Lyon et à la Méditerranée, the Société Métallurgique de Normandie and the Banque de l'Union Parisienne. He also served as the chairman of the Banque de l'union européenne industrielle et financière. He joined the Popular Liberal Action, a center-right political party. He served as a member of the French Chamber of Deputies for that party from 1889 to 1910. He also served as the Mayor of Le Creusot from 1896 to 1900. He was a member of the Académie des Sciences Morales et Politiques. In 1917 he accepted the presidency of the British Iron and Steel Institute, a position he occupied for two years. In 1919 he was sent on a mission to America by the French government and whilst there was awarded the Gold Medal of the Mining and Metallurgical Society of America. In 1930 he was awarded, like his father in 1889, the Bessemer Gold Medal for services to the steel industry. He died in Paris in 1942, only weeks after the Le Creusot factory was demolished by the RAF in World War II. Personal life He married Antoinette de Rafélis de Saint-Sauveur, an heiress to the Château d'Apremont-sur-Allier. They had three sons, Charles, Henri-Paul and Jean, and a daughter, Marie-Zélie, also known as May, who became the Duchess of Brissac by marriage. He died in Paris on November 17, 1942. Legacy His statue, designed by sculptor Paul Landowski, stands on the Boulevard Henri-Paul Schneider (named after his son) in Le Creusot. Patents C.P.E. Schneider. "Patent US713691 - Recoil apparatus for guns." 1901. C.P.E. Schneider. "Patent US716114 - Apparatus for sighting guns," 1902. C.P.E. Schneider. "Patent US800021 - Apparatus for loading ordnance," 1905. C.P.E. Schneider. "Patent US896669 - Breech-operating mechanism for ordnance," 1907. C.P.E. Schneider. "Patent US946402 - Sighting apparatus for guns," 1910. References External links 1868 births 1942 deaths People from Le Creusot Mayors of places in Bourgogne-Franche-Comté Popular Liberal Action politicians Members of the 7th Chamber of Deputies of the French Third Republic Members of the 8th Chamber of Deputies of the French Third Republic Members of the 9th Chamber of Deputies of the French Third Republic 20th-century French businesspeople French corporate directors 20th-century French inventors John Fritz Medal recipients Foreign associates of the National Academy of Sciences Bessemer Gold Medal Grand Officers of the Order of the White Lion
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,227
{"url":"https:\/\/questions.examside.com\/past-years\/jee\/question\/plet-fx--sin-left-pi-over-6sin-left-pi-jee-advanced-mathematics-complex-numbers-ryvbxlzkg9raparr","text":"NEW\nNew Website Launch\nExperience the best way to solve previous year questions with mock tests (very detailed analysis), bookmark your favourite questions, practice etc...\n1\n\n### JEE Advanced 2015 Paper 1 Offline\n\nMCQ (More than One Correct Answer)\n\nLet $$f(x) = \\sin \\left( {{\\pi \\over 6}\\sin \\left( {{\\pi \\over 2}\\sin x} \\right)} \\right)$$ for all $$x \\in R$$ and g(x) = $${{\\pi \\over 2}\\sin x}$$ for all x$$\\in$$R. Let $$(f \\circ g)(x)$$ denote f(g(x)) and $$(g \\circ f)(x)$$ denote g(f(x)). Then which of the following is\/are true?\n\nA\nRange of f is $$\\left[ { - {1 \\over 2},{1 \\over 2}} \\right]$$.\nB\nRange of f $$\\circ$$ g is $$\\left[ { - {1 \\over 2},{1 \\over 2}} \\right]$$.\nC\n$$\\mathop {\\lim }\\limits_{x \\to 0} {{f(x)} \\over {g(x)}} = {\\pi \\over 6}$$.\nD\nThere is an x$$\\in$$R such that (g $$\\circ$$ f)(x) = 1.\n\n## Explanation\n\n(a) $$f(x) = \\sin \\left[ {{\\pi \\over 6}\\sin \\left( {{\\pi \\over 2}\\sin x} \\right)} \\right],\\,x \\in R$$\n\n$$= \\sin \\left( {{\\pi \\over 6}\\sin \\theta } \\right),\\,\\theta \\in \\left[ { - {\\pi \\over 2},{\\pi \\over 2}} \\right]$$,\n\nwhere, $$\\theta = {\\pi \\over 2}\\sin x$$\n\n$$= \\sin \\alpha ,\\alpha \\in \\left[ { - {\\pi \\over 6},{\\pi \\over 6}} \\right]$$,\n\nwhere, $$\\alpha = {\\pi \\over 6}\\sin \\theta$$\n\n$$\\therefore$$ $$f(x) \\in \\left[ { - {1 \\over 2},{1 \\over 2}} \\right]$$\n\nHence, range of $$f(x) \\in \\left[ { - {1 \\over 2},{1 \\over 2}} \\right]$$\n\nSo, option (a) is correct.\n\n(b) $$f\\{ g(x)\\} = f(t),t \\in \\left[ { - {\\pi \\over 2},{\\pi \\over 2}} \\right]$$\n\n$$\\Rightarrow f(t) \\in \\left[ { - {1 \\over 2},{1 \\over 2}} \\right]$$\n\n$$\\therefore$$ Option (b) is correct.\n\n(c) $$\\mathop {\\lim }\\limits_{x \\to 0} {{f(x)} \\over {g(x)}}$$\n\n$$= \\mathop {\\lim }\\limits_{x \\to 0} {{\\sin \\left[ {{\\pi \\over 6}\\sin \\left( {{\\pi \\over 2}\\sin x} \\right)} \\right]} \\over {{\\pi \\over 2}(\\sin x)}}$$\n\n$$= \\mathop {\\lim }\\limits_{x \\to 0} {{\\sin \\left[ {{\\pi \\over 6}\\sin \\left( {{\\pi \\over 2}\\sin x} \\right)} \\right]} \\over {{\\pi \\over 6}\\sin \\left( {{\\pi \\over 2}\\sin x} \\right)}}\\,.\\,{{{\\pi \\over 6}\\sin \\left( {{\\pi \\over 2}\\sin x} \\right)} \\over {\\left( {{\\pi \\over 2}\\sin x} \\right)}}$$\n\n$$= 1 \\times {\\pi \\over 6} \\times 1 = {\\pi \\over 6}$$\n\n$$\\therefore$$ Option (c) is correct.\n\n(d) $$g\\{ f(x)\\} = 1$$\n\n$$\\Rightarrow {\\pi \\over 2}\\sin \\{ f(x)\\} = 1$$\n\n$$\\Rightarrow \\sin \\{ f(x)\\} = {2 \\over \\pi }$$ ..... (i)\n\nBut, $$f(x) \\in \\left[ { - {1 \\over 2},{1 \\over 2}} \\right] \\subset \\left[ { - {\\pi \\over 6},{\\pi \\over 6}} \\right]$$\n\n$$\\therefore$$ $$\\sin \\{ f(x)\\} \\in \\left[ { - {1 \\over 2},{1 \\over 2}} \\right]$$ ..... (ii)\n\n$$\\Rightarrow \\sin \\{ f(x)\\} \\ne {2 \\over \\pi }$$, [from Eqs. (i) and (ii)]\n\ni.e. No solution.\n\n$$\\therefore$$ Option (d) is not correct.\n\n2\n\n### JEE Advanced 2014 Paper 1 Offline\n\nMCQ (More than One Correct Answer)\nLet $$f:\\left( { - {\\pi \\over 2},{\\pi \\over 2}} \\right) \\to R$$ be given by $$f(x) = {[\\log (\\sec x + \\tan x)]^3}$$. 
Then,\nA\nf(x) is an odd function\nB\nf(x) is a one-one function\nC\nf(x) is an onto function\nD\nf(x) is an even function\n\n## Explanation\n\n$$f:\\left( { - {\\pi \\over 2},{\\pi \\over 2}} \\right) \\to R$$\n\n$$f(x) = {[\\log (\\sec x + \\tan x)]^3}$$\n\n$$f( - x) = {[\\log (\\sec x - \\tan x)]^3}$$\n\n$$= {\\left[ {\\log \\left( {{{(\\sec x - \\tan x)(\\sec x + \\tan x)} \\over {\\sec x + \\tan x}}} \\right)} \\right]^3}$$\n\n$$= {\\left[ {\\log \\left( {{1 \\over {\\sec x + \\tan x}}} \\right)} \\right]^3} = {[ - \\log (\\sec x + \\tan x)]^3}$$\n\n$$= - {[\\log (\\sec x + \\tan x)]^3} = - f(x)$$\n\n$$\\therefore$$ f is an odd function. (a) is correct and (d) is not correct.\n\nAlso,\n\n$$f'(x) = 3{[\\log (\\sec x + \\tan x)]^2}\\,.\\,{{\\sec x\\tan x + {{\\sec }^2}x} \\over {\\sec x + \\tan x}}$$\n\n$$= 3\\sec x{[\\log (\\sec x + \\tan x)]^2} > 0\\,\\forall x \\in \\left( { - {\\pi \\over 2},{\\pi \\over 2}} \\right)$$\n\n$$\\therefore$$ f is increasing on $$\\left( { - {\\pi \\over 2},{\\pi \\over 2}} \\right)$$\n\nWe know that strictly increasing function is one one.\n\n$$\\therefore$$ f is one one\n\n$$\\therefore$$ (b) is correct.\n\n$$(\\sec x + \\tan x) = \\tan \\left( {{\\pi \\over 4} + {\\pi \\over 2}} \\right)$$\n\nas $$x \\in \\left( { - {\\pi \\over 2},{\\pi \\over 2}} \\right)$$, then\n\n$$0 < \\tan \\left( {{\\pi \\over 4} + {\\pi \\over 2}} \\right) < \\infty$$\n\n$$0 < \\sec x + \\tan x < \\infty$$\n\n$$\\Rightarrow - \\infty < \\ln (\\sec x + \\tan x) < \\infty$$\n\n$$- \\infty < {[\\ln (\\sec x + \\tan x)]^3} < \\infty$$\n\n$$\\Rightarrow - \\infty < f(x) < \\infty$$\n\nRange of f(x) is R and thus f(x) is an onto function.\n\n$$\\therefore$$ (c) is correct.\n3\n\n### JEE Advanced 2014 Paper 1 Offline\n\nMCQ (More than One Correct Answer)\nFor every pair of continuous function f, g : [0, 1] $$\\to$$ R such that max {f(x) : x $$\\in$$ [0, 1]} = max {g(x) : x $$\\in$$ [0, 1]}. The correct statement(s) is (are)\nA\n[f(c)]2 + 3f(c) = [g(c)]2 + 3g(c) for some c $$\\in$$ [0, 1]\nB\n[f(c)]2 + f(c) = [g(c)]2 + 3g(c) for some c $$\\in$$ [0, 1]\nC\n[f(c)]2 + 3f(c) = [g(c)]2 + g(c) for some c $$\\in$$ [0, 1]\nD\n[f(c)]2 = [g(c)]2 + 3g(c) for some c $$\\in$$ [0, 1]\n\n## Explanation\n\nSuppose f(x) is maximum at c1 and g(x) is maximum at c2. When f(x) is maximum g(x) may or may not be maximum.\n\nTherefore, in the function h(x) = f(x) $$-$$ g(x), we get\n\n$$h({c_1}) = f({c_1}) - g({c_1}) \\ge 0$$ and $$h({c_2}) = f({c_2}) - g({c_2}) \\ge 0$$.\n\nTherefore, h(x) = 0 for some c $$\\in$$ [0, 1].\n\nTherefore, $$h(c) = 0 \\Rightarrow f(c) - g(c) = 0$$.\n\nTherefore, $$f(c) = g(c)$$.\n\nOption (a) $$\\Rightarrow {f^2}(c) - {g^2}(c) + 3[f(c) - g(c)] = 0$$ which is true from Eq. (i).\n\nOption (d) $$\\Rightarrow {f^2}(c) - {g^2}(c) = 0$$ which is true from Eq. (i)\n\nNow, if we take\n\nf(x) = 1 and g(x) = 1, $$\\forall$$x $$\\in$$[0, 1]\n\nOption (b) and (c) does not hold. Hence, option (a) and (d) are correct.\n4\n\n### IIT-JEE 2012 Paper 2 Offline\n\nMCQ (More than One Correct Answer)\n\nLet $$f:( - 1,1) \\to R$$ be such that $$f(\\cos 4\\theta ) = {2 \\over {2 - {{\\sec }^2}\\theta }}$$ for $$\\theta \\in \\left( {0,{\\pi \\over 4}} \\right) \\cup \\left( {{\\pi \\over 4},{\\pi \\over 2}} \\right)$$. 
Then the value(s) of $$f\\left( {{1 \\over 3}} \\right)$$ is(are)\n\nA\n$$1 - \\sqrt {{3 \\over 2}}$$\nB\n$$1 + \\sqrt {{3 \\over 2}}$$\nC\n$$1 - \\sqrt {{2 \\over 3}}$$\nD\n$$1 + \\sqrt {{2 \\over 3}}$$\n\n## Explanation\n\n$$f(\\cos 4\\theta ) = {2 \\over {2 - {{\\sec }^2}\\theta }}$$\n\nLet $$\\cos 4\\theta = t$$\n\n$$\\Rightarrow 2{\\cos ^2}2\\theta - 1 = t \\Rightarrow {\\cos ^2}2\\theta = {2 \\over 3}$$\n\nFor $$t = {1 \\over 3}$$ we have $${\\cos ^2}2\\theta = {2 \\over 3}$$\n\n$$\\cos 2\\theta = \\sqrt {{2 \\over 3}}$$ or $$\\cos 2\\theta = - \\sqrt {{2 \\over 3}}$$\n\n$$f(\\cos 4\\theta ) = {2 \\over {2 - {1 \\over {{{\\cos }^2}\\theta }}}} = {{2{{\\cos }^2}\\theta } \\over {2{{\\cos }^2}\\theta - 1}} = {{1 + \\cos 2\\theta } \\over {\\cos 2\\theta }} = 1 + {1 \\over {\\cos 2\\theta }}$$\n\nHence, $$f\\left( {{1 \\over 3}} \\right) = 1 + \\sqrt {{3 \\over 2}}$$ or $$1 - \\sqrt {{3 \\over 2}}$$\n\n### Joint Entrance Examination\n\nJEE Main JEE Advanced WB JEE\n\n### Graduate Aptitude Test in Engineering\n\nGATE CSE GATE ECE GATE EE GATE ME GATE CE GATE PI GATE IN\n\nNEET\n\nClass 12","date":"2022-06-28 09:55:40","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9195881485939026, \"perplexity\": 3249.454868102634}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656103360935.27\/warc\/CC-MAIN-20220628081102-20220628111102-00599.warc.gz\"}"}
null
null
You will also want to mix the base coat for larger holes, so an electric drill and paint-mixing attachment work best. Whether you are using drywall compound, spackling or painters' putty, all of them tend to shrink as they dry, so you'll need to repeat the process several times before the hole is properly filled. If the crack extends through the seam's paper tape, or if the tape has pulled loose from the wall, use a razor knife to cut the tape about 6 to 12 inches from each end of the damage (images 1 and 2). Remove the tape but be careful not to tear away the drywall's paper covering. Labor setup time, mobilization time and minimum hourly costs are commonly included for small drywall repair jobs. Unpredictable events can do serious damage to the drywall and plaster in your house or workplace. Hairline cracks in drywall, particularly at the top corners of windows and doors, are indicators that the wall framing has settled or moved just a little—a common situation and one that's easy to repair. Fast-drying compound can be used in some cases so that the repair can be completed more quickly. To cut the drywall, you can either cut it with the drywall saw or use a blade knife to score and snap it, scoring the front using the blade knife and a straight edge, then snapping it into two pieces. So, begin around the edge of the damaged area and just cut a square to remove the damaged drywall. The biggest mistake most people make when taping their own walls is trying to make it perfect on the first coat. The cost of related materials and supplies typically required to repair drywall includes fasteners, seam tape, exterior corner beads and topping compound. Measure the hole, and visit your local hardware store or home improvement center for a kit. Loose paper and gypsum will cause the drywall compound not to bond strongly and may create a bubble you will see only after coating with compound. The kits typically have a reinforced center panel surrounded by self-sticking tape. Home renovation and building can be a scary prospect for many people, particularly if they want to try to do some of the work themselves in order to cut down on the costs. Once the installer has fully installed their first Drywall Repair Plug, most future Drywall Repair Plugs are installed in 5-6 minutes. A widespread drywall problem, especially in newer homes, is "nail pops," or nail heads that pull away from the wooden studs and protrude through the drywall tape or paint. If this is the case in your project, do the same; it will save you $10 or so on buying drywall. A hired drywall repair specialist, though, would save you time which you could choose to spend doing more important things. Search YouTube and watch the video on repairing holes in drywall (sheet rock). Within all sections we provide detailed project costs and information for all main types of drywall. If there are no electrical or plumbing lines present, use a drywall saw to punch a hole through the drywall along your line. Then repeat the process, each time spreading it a bit further out from the edges of the patch. Repairing a hole in your drywall may seem like a challenge if you've never done it before. Drywall installation is among the most common home improvement projects many homeowners find themselves faced with.
When you've accidentally banged some furniture into the wall, had to move that picture a few too many times, or moved some fixtures and plugs around, you are often left with holes in the drywall. If you are a DIY enthusiast or motivated to save money on the repair, these drywall repair tips will help. Imagine your child playing in the room and banging the door against the wall a little too hard; it can damage the drywall. This tool is especially useful for any drywall repair job in a finished area of your home. Decide whether you want to repaint the entire wall or simply touch up patched areas. In this regard, you should get in touch with a reliable and skilled drywall repair contractor who can handle any situation arising out of the blue and who makes you feel that your home is in safe hands. Although some products claim to be a permanent repair, no other product on the market is really made of drywall. It is almost inevitable that you will need to patch or repair drywall somewhere in your home. Standard drywall joint compound is the original product for finishing drywall seams and nail holes. Use an electronic stud finder to mark the studs behind the damaged drywall, then use a drywall saw or reciprocating saw to cut away the drywall. Drywall repair isn't something most people look forward to. Although it is comparatively simple in concept, if you have ever done it then you know that the dry time of the mud and all the dust created by sanding can turn the task into a big hassle. Crease a length of paper tape down the middle lengthwise, then embed it into the compound. Outer drywall corners are reinforced with metal or plastic edging, known as corner bead. Contact Match All Drywall Repair LLC, in Mesa, AZ, an experienced drywall contractor with work you can depend on. We provide excellent remodeling and renovation work, together with home repairs. The key to this sort of repair is to make sure your drywall patch is the same thickness as the drywall used in your wall. Fifth: insert the Drywall Repair Plug into the hole again and, from the center of the plug, begin squeezing out the excess compound with the putty knife toward the outer edges of the paper. Most small repairs in which a patch of drywall is used require 3 to 6 hours of labor.
{ "redpajama_set_name": "RedPajamaC4" }
4,350
Bunk beds are all about combining an enjoyable, playful vibe with space-saving solutions that help make the most of the available room. With space becoming such an essential commodity in modern homes, it is well worth your time to think vertically! Part of many amazing kids' rooms all over the planet, bunk beds bring with them a multitude of advantages. Yet modern bunk beds need not be confined to the kids' room only.
{ "redpajama_set_name": "RedPajamaC4" }
9,902
Government to create new head of resilience role

New strategy aims to improve transparency and accountability over resilience work

By Tevye Markson

The government will create a new head of resilience role to oversee departments' emergency planning work and improve cross-government working, as part of a new strategy announced by the Cabinet Office.

The head of resilience will guide best practice, encourage adherence to standards, and set guidance, with the aim of improving transparency and accountability, the Cabinet Office said. Lead government departments will continue to take responsibility for individual national security risk assessments, with the head of resilience providing leadership for this system.

The government said it will clarify roles and responsibilities across government for each risk, but has only set itself a target of 2025 to do this. This review will aim to avoid a repeat of the Covid pandemic, where "treating [it] as a health emergency meant that there was limited planning outside of the healthcare sector", the framework states.

The head of resilience will complement the existing role of the National Security Advisor. Devolved administrations will retain control over resilience, with the new head of resilience working with them in partnership.

To strengthen accountability, the government has committed to delivering an annual statement to parliament on civil contingencies risk and the UK government's performance on resilience. The Cabinet Office said the strategy will make resilience a national endeavour for the first time in what it dubbed a "whole of society" approach.

"Resilience has long been part of the UK's approach to national security, but in an increasingly integrated world in which we cannot predict or prevent all of the challenges ahead, we need to refresh our approach – that's why we are making resilience a national endeavour, so that as a country we are prepared for the next crisis, whatever it may be," Cabinet Office minister Oliver Dowden said.

The plan also includes:
- Growing the government's advisory groups made up of experts, academics and industry experts to inform risk planning and provide external challenge
- Creating a new sub-committee of the National Security Council to specifically consider issues relating to resilience
- Creating a UK Resilience Academy, built out from the Emergency Planning College, to make world class professional training available to all that need it
- Strengthening Local Resilience Forums in England by working across three key pillars of reform – leadership, accountability, and integration of resilience into the UK's levelling up mission

The framework follows the commitment made in last year's Integrated Review to strengthen the government's approach to resilience. Since then, the Cabinet Office has created a dedicated COBR unit to continue to lead the government's response to emergencies and established a Resilience Directorate to take a more strategic approach to national resilience and drive work across the system to strengthen it.

The government has also set up the National Situation Centre to bring data, analysis and insight together, boosting the government's ability to identify, monitor and manage risks. During the extreme heat in July, SitCen worked with partners to identify vulnerable groups and locations, enabling responders to target support effectively.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,398
from __future__ import print_function import re from streamlink import PluginError from streamlink.compat import urlparse, parse_qsl, urlencode, urlunparse from streamlink.plugin import Plugin from streamlink.plugin.api import http from streamlink.plugin.api import validate from streamlink.stream import HLSStream class SRGSSR(Plugin): url_re = re.compile(r"""https?://(?:www\.)? (srf|rts|rsi|rtr)\.ch/ (?: play/tv| livestream/player| live-streaming| sport/direct/(\d+)- )""", re.VERBOSE) api_url = "http://il.srgssr.ch/integrationlayer/1.0/ue/{site}/video/play/{id}.json" token_url = "http://tp.srgssr.ch/akahd/token" video_id_re = re.compile(r'urn(?:%3A|:)(srf|rts|rsi|rtr)(?:%3A|:)(?:ais(?:%3A|:))?video(?:%3A|:)([^&"]+)') video_id_schema = validate.Schema(validate.transform(video_id_re.search)) api_schema = validate.Schema( { "Video": { "Playlists": { "Playlist": [{ "@protocol": validate.text, "url": [{"@quality": validate.text, "text": validate.url()}] }] } } }, validate.get("Video"), validate.get("Playlists"), validate.get("Playlist")) token_schema = validate.Schema({"token": {"authparams": validate.text}}, validate.get("token"), validate.get("authparams")) @classmethod def can_handle_url(cls, url): return cls.url_re.match(url) is not None def get_video_id(self): parsed = urlparse(self.url) qinfo = dict(parse_qsl(parsed.query or parsed.fragment.lstrip("?"))) site, video_id = None, None url_m = self.url_re.match(self.url) # look for the video id in the URL, otherwise find it in the page if "tvLiveId" in qinfo: video_id = qinfo["tvLiveId"] site = url_m.group(1) elif url_m.group(2): site, video_id = url_m.group(1), url_m.group(2) else: video_id_m = http.get(self.url, schema=self.video_id_schema) if video_id_m: site, video_id = video_id_m.groups() return site, video_id def auth_url(self, url): parsed = urlparse(url) path, _ = parsed.path.rsplit("/", 1) token_res = http.get(self.token_url, params=dict(acl=path + "/*")) authparams = http.json(token_res, schema=self.token_schema) existing = dict(parse_qsl(parsed.query)) existing.update(dict(parse_qsl(authparams))) return urlunparse(parsed._replace(query=urlencode(existing))) def _get_streams(self): site, video_id = self.get_video_id() if video_id and site: self.logger.debug("Found {0} video ID {1}", site, video_id) try: res = http.get(self.api_url.format(site=site, id=video_id)) except PluginError: return for stream_info in http.json(res, schema=self.api_schema): for url in stream_info["url"]: if stream_info["@protocol"] == "HTTP-HLS": for s in HLSStream.parse_variant_playlist(self.session, self.auth_url(url["text"])).items(): yield s __plugin__ = SRGSSR
{ "redpajama_set_name": "RedPajamaGithub" }
5,640
Apixaban () – a multifunctional organic chemical compound and anticoagulant drug, a direct and reversible inhibitor of factor Xa, used to prevent thrombosis in patients from high-risk groups. Mechanism of action Apixaban is a direct and reversible inhibitor of factor Xa and does not require antithrombin for its anticoagulant effect. By inhibiting factor Xa it prevents the generation of thrombin and thereby the formation of a clot. Apixaban affects the results of coagulation tests such as prothrombin time (PT), international normalized ratio (INR) and activated partial thromboplastin time (APTT); however, this effect is small and highly variable. Uses prevention of venous thromboembolic events in adult patients after elective hip or knee replacement surgery, prevention of stroke and systemic embolism in adult patients with non-valvular atrial fibrillation and at least one risk factor, such as a previous stroke or transient ischaemic attack (), age ≥ 75 years, arterial hypertension, diabetes, or symptomatic heart failure (NYHA class ≥ II). Apixaban is included in the World Health Organization's Model List of Essential Medicines () (2019). Apixaban is authorized for marketing in Poland (2020). Adverse effects Apixaban may cause the following adverse effects in more than 1% of patients: anemia, bleeding (including gastrointestinal bleeding, nosebleeds, haematuria and subcutaneous haematoma) and nausea. Dosage The recommended dosing of apixaban: for the prevention of stroke and systemic embolism in adult patients with non-valvular atrial fibrillation it is 5 mg twice daily, whereas in patients with two of the following health problems: age above 80 years, body weight below 60 kg, or plasma creatinine above 1.5 mg/dl, it is 2.5 mg twice daily; for the prevention of venous thromboembolism after hip or knee replacement surgery it is 2.5 mg twice daily, for 32 to 38 days and 10 to 14 days respectively. Apixaban dosing is independent of ethnic origin. References Anticoagulants Ethers with a substituted phenyl group Heterocyclic nitrogen compounds Delta-lactams Medicines on the WHO Model List of Essential Medicines Tetrahydropyridines
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,647
'use strict'; /*! * Module dependencies. */ const SchemaType = require('../schematype'); const MongooseError = require('../error/index'); const SchemaStringOptions = require('../options/SchemaStringOptions'); const castString = require('../cast/string'); const utils = require('../utils'); const populateModelSymbol = require('../helpers/symbols').populateModelSymbol; const CastError = SchemaType.CastError; let Document; /** * String SchemaType constructor. * * @param {String} key * @param {Object} options * @inherits SchemaType * @api public */ function SchemaString(key, options) { this.enumValues = []; this.regExp = null; SchemaType.call(this, key, options, 'String'); } /** * This schema type's name, to defend against minifiers that mangle * function names. * * @api public */ SchemaString.schemaName = 'String'; /*! * Inherits from SchemaType. */ SchemaString.prototype = Object.create(SchemaType.prototype); SchemaString.prototype.constructor = SchemaString; SchemaString.prototype.OptionsConstructor = SchemaStringOptions; /*! * ignore */ SchemaString._cast = castString; /** * Get/set the function used to cast arbitrary values to strings. * * ####Example: * * // Throw an error if you pass in an object. Normally, Mongoose allows * // objects with custom `toString()` functions. * const original = mongoose.Schema.Types.String.cast(); * mongoose.Schema.Types.String.cast(v => { * assert.ok(v == null || typeof v !== 'object'); * return original(v); * }); * * // Or disable casting entirely * mongoose.Schema.Types.String.cast(false); * * @param {Function} caster * @return {Function} * @function get * @static * @api public */ SchemaString.cast = function cast(caster) { if (arguments.length === 0) { return this._cast; } if (caster === false) { caster = v => { if (v != null && typeof v !== 'string') { throw new Error(); } return v; }; } this._cast = caster; return this._cast; }; /** * Attaches a getter for all String instances. * * ####Example: * * // Make all numbers round down * mongoose.Schema.String.get(v => v.toLowerCase()); * * const Model = mongoose.model('Test', new Schema({ test: String })); * new Model({ test: 'FOO' }).test; // 'foo' * * @param {Function} getter * @return {this} * @function get * @static * @api public */ SchemaString.get = SchemaType.get; /*! * ignore */ SchemaString._checkRequired = v => (v instanceof String || typeof v === 'string') && v.length; /** * Override the function the required validator uses to check whether a string * passes the `required` check. * * ####Example: * * // Allow empty strings to pass `required` check * mongoose.Schema.Types.String.checkRequired(v => v != null); * * const M = mongoose.model({ str: { type: String, required: true } }); * new M({ str: '' }).validateSync(); // `null`, validation passes! * * @param {Function} fn * @return {Function} * @function checkRequired * @static * @api public */ SchemaString.checkRequired = SchemaType.checkRequired; /** * Adds an enum validator * * ####Example: * * var states = ['opening', 'open', 'closing', 'closed'] * var s = new Schema({ state: { type: String, enum: states }}) * var M = db.model('M', s) * var m = new M({ state: 'invalid' }) * m.save(function (err) { * console.error(String(err)) // ValidationError: `invalid` is not a valid enum value for path `state`. 
* m.state = 'open' * m.save(callback) // success * }) * * // or with custom error messages * var enum = { * values: ['opening', 'open', 'closing', 'closed'], * message: 'enum validator failed for path `{PATH}` with value `{VALUE}`' * } * var s = new Schema({ state: { type: String, enum: enum }) * var M = db.model('M', s) * var m = new M({ state: 'invalid' }) * m.save(function (err) { * console.error(String(err)) // ValidationError: enum validator failed for path `state` with value `invalid` * m.state = 'open' * m.save(callback) // success * }) * * @param {String|Object} [args...] enumeration values * @return {SchemaType} this * @see Customized Error Messages #error_messages_MongooseError-messages * @api public */ SchemaString.prototype.enum = function() { if (this.enumValidator) { this.validators = this.validators.filter(function(v) { return v.validator !== this.enumValidator; }, this); this.enumValidator = false; } if (arguments[0] === void 0 || arguments[0] === false) { return this; } let values; let errorMessage; if (utils.isObject(arguments[0])) { values = arguments[0].values; errorMessage = arguments[0].message; } else { values = arguments; errorMessage = MongooseError.messages.String.enum; } for (let i = 0; i < values.length; i++) { if (undefined !== values[i]) { this.enumValues.push(this.cast(values[i])); } } const vals = this.enumValues; this.enumValidator = function(v) { return undefined === v || ~vals.indexOf(v); }; this.validators.push({ validator: this.enumValidator, message: errorMessage, type: 'enum', enumValues: vals }); return this; }; /** * Adds a lowercase [setter](http://mongoosejs.com/docs/api.html#schematype_SchemaType-set). * * ####Example: * * var s = new Schema({ email: { type: String, lowercase: true }}) * var M = db.model('M', s); * var m = new M({ email: 'SomeEmail@example.COM' }); * console.log(m.email) // someemail@example.com * M.find({ email: 'SomeEmail@example.com' }); // Queries by 'someemail@example.com' * * @api public * @return {SchemaType} this */ SchemaString.prototype.lowercase = function(shouldApply) { if (arguments.length > 0 && !shouldApply) { return this; } return this.set(function(v, self) { if (typeof v !== 'string') { v = self.cast(v); } if (v) { return v.toLowerCase(); } return v; }); }; /** * Adds an uppercase [setter](http://mongoosejs.com/docs/api.html#schematype_SchemaType-set). * * ####Example: * * var s = new Schema({ caps: { type: String, uppercase: true }}) * var M = db.model('M', s); * var m = new M({ caps: 'an example' }); * console.log(m.caps) // AN EXAMPLE * M.find({ caps: 'an example' }) // Matches documents where caps = 'AN EXAMPLE' * * @api public * @return {SchemaType} this */ SchemaString.prototype.uppercase = function(shouldApply) { if (arguments.length > 0 && !shouldApply) { return this; } return this.set(function(v, self) { if (typeof v !== 'string') { v = self.cast(v); } if (v) { return v.toUpperCase(); } return v; }); }; /** * Adds a trim [setter](http://mongoosejs.com/docs/api.html#schematype_SchemaType-set). * * The string value will be trimmed when set. 
* * ####Example: * * var s = new Schema({ name: { type: String, trim: true }}) * var M = db.model('M', s) * var string = ' some name ' * console.log(string.length) // 11 * var m = new M({ name: string }) * console.log(m.name.length) // 9 * * @api public * @return {SchemaType} this */ SchemaString.prototype.trim = function(shouldTrim) { if (arguments.length > 0 && !shouldTrim) { return this; } return this.set(function(v, self) { if (typeof v !== 'string') { v = self.cast(v); } if (v) { return v.trim(); } return v; }); }; /** * Sets a minimum length validator. * * ####Example: * * var schema = new Schema({ postalCode: { type: String, minlength: 5 }) * var Address = db.model('Address', schema) * var address = new Address({ postalCode: '9512' }) * address.save(function (err) { * console.error(err) // validator error * address.postalCode = '95125'; * address.save() // success * }) * * // custom error messages * // We can also use the special {MINLENGTH} token which will be replaced with the minimum allowed length * var minlength = [5, 'The value of path `{PATH}` (`{VALUE}`) is shorter than the minimum allowed length ({MINLENGTH}).']; * var schema = new Schema({ postalCode: { type: String, minlength: minlength }) * var Address = mongoose.model('Address', schema); * var address = new Address({ postalCode: '9512' }); * address.validate(function (err) { * console.log(String(err)) // ValidationError: The value of path `postalCode` (`9512`) is shorter than the minimum length (5). * }) * * @param {Number} value minimum string length * @param {String} [message] optional custom error message * @return {SchemaType} this * @see Customized Error Messages #error_messages_MongooseError-messages * @api public */ SchemaString.prototype.minlength = function(value, message) { if (this.minlengthValidator) { this.validators = this.validators.filter(function(v) { return v.validator !== this.minlengthValidator; }, this); } if (value !== null && value !== undefined) { let msg = message || MongooseError.messages.String.minlength; msg = msg.replace(/{MINLENGTH}/, value); this.validators.push({ validator: this.minlengthValidator = function(v) { return v === null || v.length >= value; }, message: msg, type: 'minlength', minlength: value }); } return this; }; /** * Sets a maximum length validator. * * ####Example: * * var schema = new Schema({ postalCode: { type: String, maxlength: 9 }) * var Address = db.model('Address', schema) * var address = new Address({ postalCode: '9512512345' }) * address.save(function (err) { * console.error(err) // validator error * address.postalCode = '95125'; * address.save() // success * }) * * // custom error messages * // We can also use the special {MAXLENGTH} token which will be replaced with the maximum allowed length * var maxlength = [9, 'The value of path `{PATH}` (`{VALUE}`) exceeds the maximum allowed length ({MAXLENGTH}).']; * var schema = new Schema({ postalCode: { type: String, maxlength: maxlength }) * var Address = mongoose.model('Address', schema); * var address = new Address({ postalCode: '9512512345' }); * address.validate(function (err) { * console.log(String(err)) // ValidationError: The value of path `postalCode` (`9512512345`) exceeds the maximum allowed length (9). 
* }) * * @param {Number} value maximum string length * @param {String} [message] optional custom error message * @return {SchemaType} this * @see Customized Error Messages #error_messages_MongooseError-messages * @api public */ SchemaString.prototype.maxlength = function(value, message) { if (this.maxlengthValidator) { this.validators = this.validators.filter(function(v) { return v.validator !== this.maxlengthValidator; }, this); } if (value !== null && value !== undefined) { let msg = message || MongooseError.messages.String.maxlength; msg = msg.replace(/{MAXLENGTH}/, value); this.validators.push({ validator: this.maxlengthValidator = function(v) { return v === null || v.length <= value; }, message: msg, type: 'maxlength', maxlength: value }); } return this; }; /** * Sets a regexp validator. * * Any value that does not pass `regExp`.test(val) will fail validation. * * ####Example: * * var s = new Schema({ name: { type: String, match: /^a/ }}) * var M = db.model('M', s) * var m = new M({ name: 'I am invalid' }) * m.validate(function (err) { * console.error(String(err)) // "ValidationError: Path `name` is invalid (I am invalid)." * m.name = 'apples' * m.validate(function (err) { * assert.ok(err) // success * }) * }) * * // using a custom error message * var match = [ /\.html$/, "That file doesn't end in .html ({VALUE})" ]; * var s = new Schema({ file: { type: String, match: match }}) * var M = db.model('M', s); * var m = new M({ file: 'invalid' }); * m.validate(function (err) { * console.log(String(err)) // "ValidationError: That file doesn't end in .html (invalid)" * }) * * Empty strings, `undefined`, and `null` values always pass the match validator. If you require these values, enable the `required` validator also. * * var s = new Schema({ name: { type: String, match: /^a/, required: true }}) * * @param {RegExp} regExp regular expression to test against * @param {String} [message] optional custom error message * @return {SchemaType} this * @see Customized Error Messages #error_messages_MongooseError-messages * @api public */ SchemaString.prototype.match = function match(regExp, message) { // yes, we allow multiple match validators const msg = message || MongooseError.messages.String.match; const matchValidator = function(v) { if (!regExp) { return false; } const ret = ((v != null && v !== '') ? regExp.test(v) : true); return ret; }; this.validators.push({ validator: matchValidator, message: msg, type: 'regexp', regexp: regExp }); return this; }; /** * Check if the given value satisfies the `required` validator. The value is * considered valid if it is a string (that is, not `null` or `undefined`) and * has positive length. The `required` validator **will** fail for empty * strings. * * @param {Any} value * @param {Document} doc * @return {Boolean} * @api public */ SchemaString.prototype.checkRequired = function checkRequired(value, doc) { if (SchemaType._isRef(this, value, doc, true)) { return !!value; } // `require('util').inherits()` does **not** copy static properties, and // plugins like mongoose-float use `inherits()` for pre-ES6. const _checkRequired = typeof this.constructor.checkRequired == 'function' ? this.constructor.checkRequired() : SchemaString.checkRequired(); return _checkRequired(value); }; /** * Casts to String * * @api private */ SchemaString.prototype.cast = function(value, doc, init) { if (SchemaType._isRef(this, value, doc, init)) { // wait! 
we may need to cast this to a document if (value === null || value === undefined) { return value; } // lazy load Document || (Document = require('./../document')); if (value instanceof Document) { value.$__.wasPopulated = true; return value; } // setting a populated path if (typeof value === 'string') { return value; } else if (Buffer.isBuffer(value) || !utils.isObject(value)) { throw new CastError('string', value, this.path); } // Handle the case where user directly sets a populated // path to a plain object; cast to the Model used in // the population query. const path = doc.$__fullPath(this.path); const owner = doc.ownerDocument ? doc.ownerDocument() : doc; const pop = owner.populated(path, true); const ret = new pop.options[populateModelSymbol](value); ret.$__.wasPopulated = true; return ret; } const castString = typeof this.constructor.cast === 'function' ? this.constructor.cast() : SchemaString.cast(); try { return castString(value); } catch (error) { throw new CastError('string', value, this.path); } }; /*! * ignore */ function handleSingle(val) { return this.castForQuery(val); } function handleArray(val) { const _this = this; if (!Array.isArray(val)) { return [this.castForQuery(val)]; } return val.map(function(m) { return _this.castForQuery(m); }); } SchemaString.prototype.$conditionalHandlers = utils.options(SchemaType.prototype.$conditionalHandlers, { $all: handleArray, $gt: handleSingle, $gte: handleSingle, $lt: handleSingle, $lte: handleSingle, $options: String, $regex: handleSingle, $not: handleSingle }); /** * Casts contents for queries. * * @param {String} $conditional * @param {any} [val] * @api private */ SchemaString.prototype.castForQuery = function($conditional, val) { let handler; if (arguments.length === 2) { handler = this.$conditionalHandlers[$conditional]; if (!handler) { throw new Error('Can\'t use ' + $conditional + ' with String.'); } return handler.call(this, val); } val = $conditional; if (Object.prototype.toString.call(val) === '[object RegExp]') { return val; } return this._castForQuery(val); }; /*! * Module exports. */ module.exports = SchemaString;
{ "redpajama_set_name": "RedPajamaGithub" }
2,910
First off, let me apologize for being late with the post. The last 24 hours have been interesting and not necessarily in a good way. No, nothing dire. Just serious enough to be of concern and to cause some major readjustments to how things are done on the personal side of life. It wasn't unexpected but, no matter how much you prepare yourself, it can still be a punch in the gut. Basically, we received confirmation yesterday that my almost 87-year-old mother needs to have a complete shoulder replacement after taking a fall this summer. The fall didn't cause the problem. It simply made a problem Mom didn't know she had bad enough it can no longer be ignored. Fortunately, we live in a day and age where this sort of surgery is not the big risk it used to be. Hell, they aren't even using the "standard" replacement procedure on her. Because of the type of injury, as well as her age, they are going to do what they call a "reverse replacement". It's an amazing procedure on so many different levels. There is no getting around the fact Mom will be 87 by the time she has the surgery next month. That means there are serious complications possible, everything from how she handles the anesthesia to how she manages the recovery and rehab. I won't lie and say there's no fear because there is. But, we both have a great deal of confidence in her surgeon and in her PCP, who we will be seeing in 10 days for her pre-op workup. I know neither will let her undergo the surgery if there is any real risk. Not that it stops the child in me from screaming in terror in the back of my mind. Add to that the changes in lifestyle both of us are having to undergo both leading up to the surgery and for the months after it. I'm late doing the post this morning because Mom decided I've been right to worry about her driving. I don't want her to stop her volunteering activities because she looks forward to them. Plus, she is active and healthy–save for the bum wing. Last night, she was still insisting she could drive to today's volunteer gig. This morning, well, let's say she changed her mind. This is just a preview of what the three months or so after the surgery will be like. The first 10 days post-op, Mom will be unable to do much of anything for herself. That's not only because of potential pain, etc., but also because her right arm will be secured to her side and Mom is very, very right-handed. She won't be able to drive for months. It will probably be 2 months post-op before she can do a lot of things she is used to doing and even then she will be limited. That means I have to step in and do not just the chores around the house she's been doing but will have to be there to help her with simple things like dressing for a while. Think about how that would impact any of us. My mother is a strong-willed, proud woman and does not like being helpless. So there will be a mental aspect involved in all this as well–for both of us. Now what does this have to do with writing? It means I am going to have to adapt. In a lot of ways, it will be like when I was a single mother with a small child at home. Instead of sitting down after my morning coffee and working a regular "work day", I'm going to have to grab writing time when I can. I foresee getting up early and going to bed late. There will be naps when I can grab them. Writing will be done in waiting rooms and whenever I can grab a minute or two. It means making sure I have the appropriate apps on phone, tablet and laptops and that all of them are set to sync with one another.
This is where I love the fact Word and Scrivener are now much easier to sync between machines than they used to be. It means making sure I always have the current projects queued up on whatever machines are with me. It also means making sure I have a simple pad and pen with me because pulling out electronics isn't always feasible. It means taking care of myself, physically and emotionally. The last thing either of us needs right now is me getting sick or so stressed I'm not able to do what needs to be done. It means getting organized. ACK! Most of all, it means finding time to write to keep sane. What that probably means is less gaming, although that has gone down drastically the last month or so as I've been working on multiple writing projects at the same time. It means, in other words, being proactive and that isn't always easy. So if I come here and simply gaze at the lint in my belly button, knock me up the side of the head. For now, I need to sit down and start making lists about what needs to be done to get the house–and the family–ready for Mom's surgery. Then it is time to write. My goal is to get at least four hours a day of work in once she has her surgery. It might not be all at once. In fact, I know it won't be. But that is the minimum I can do and still come close to hitting my general deadlines. That's the goal, now to see if I can meet it. Excuse me now while I go do a primal scream or two. Then it's work–after more coffee–before going to pick Mom up in a couple of hours. Fingers crossed I manage to keep my sanity between now and the first of the year. Setting any kind of writing goal under these circumstances is impressive. Good luck! FWIW, one thing I've discovered in a lifetime of family crises is that the first few days are the worst, because it's not humanly possible to predict all the problems and their solutions. By a week to ten days post-surgery you may be into a routine that leaves adequate space for writing time. Or not. Either way, the first week is only made worse if you torment yourself with, "If this happens every day, I'll never be able to work." It won't, and you will. Be sure and let us know which happens. You might try talking your stories (plots, characters, whatever) over with her? I've known adult writers who entertained their babies by talking over their stories with them, and I suspect having someone who could actually talk back might also work. Help her avoid boredom, and help you get the stories going? Or is mixing caretaking and storytelling a no-no? Primal screams are good. Hang in there. I'll add you and your mom to my prayer list. If posts are late, I doubt anyone would squawk much under the circumstances. Wishing you and your mom all the best, Amanda. Putting a like on seems almost wrong, but my intention is to wish all the luck you need in your trying time, and the best possible outcome for your Mum. There are services (around here, SF Bay Area) that take unwell seniors to medical appointments and pick them up again. We needed wheelchair carrying, and that taxi guy was a godsend. Check for such things in your area. They can be a big help when you need to be in three places at once. We brought a recorder to all doctor appts, so the non-local kids could know exactly what we heard and saw. The doctors didn't mind. When Mom got hit by a car she went into a convalescent home after her surgery to pin the leg bones back together. Having PT on site helped a lot. Your mother won't need that level of care?
That's… mildly surprising, considering the long recovery and the rehab required. If you find out she does, not all convalescent homes are equal, so check reviews. If you can afford it, either a temporary stay in high quality assisted living where physical therapy for rehab is available*, or getting one of those home-health-services to send people to your place to help for a few hours a day may save your sanity. And your relationship with your mother. Sometimes it's easier to be helpless with strangers. And just having contact with people outside the immediate family is healthy. It was the minor day-to-day things that got to us: showering, nail trimming, bathroom stuff. *the one Mom went into after the stroke rented rooms and provided services for people out of the 'acute' category of convalescent care, but who still needed more care than family could easily handle. We've got great services around here and she has good insurance. Fortunately. We're playing that part by ear. The doctor doesn't seem to think she'll need it but we'll evaluate and re-evaluate as we get closer to surgery and then immediately afterwards. Don't be a hero. There are a number of companies out there that can provide help for these situations; I got one of them for my mother when she was sickest. I hear you, Charlie. You may need to pound it into my head some but it is on the list of things to do if necessary. Right now, the doctor doesn't think it will be. I know it is not the same thing… But whey protein helped my middle-aged arm heal faster than my bone doctor expected, and there are plenty of studies about how it helps bones and muscles grow back. So if your mom can do dairy stuff, think about it. I just did a lot of washcloth baths until my arm got better, although I also did full showers with my arm bagged in plastic. One-armed shampooing is annoying but doable. A washable chair in the shower is a common hack. We are looking at all options like the whey thing. As for showering, once she feels she is able to–and the doctor clears it–her shower has a built-in seat which will help. Our biggest "discussion" right now is on whether or not to install a bar next to the toilet to help her get up. I want to. She is resisting. We finally agreed to disagree until we talked with a friend who is a PT and with the doctor again.
People she worked with who are younger than she is and retired after she did are now "older" than she because they retired into nothingness, if that makes sense. She'd seen it happen before. So before she retired, she made sure she already had volunteer activities, etc., lined up. Ah! Brings back memories! Both fond and sad. May I recommend shirts a size or two too big, so they're easier to get in and out of? Elastic waist pants? Slip on or velcro shoes? Sturdy slippers that won't make tripping easier? We've been looking at and discussing the clothing option. The elastic pants aren't difficult. She already has some of those. Shoes, well, we'll argue about them later. It's been the shirts that she and I have been butting heads over. I finally had to remind her that I've been through shoulder surgery and having my arm secured across my stomach for 3 weeks. So I know what I'm talking about. I found some tops in her closet she'll be able to wear and I've convinced her that getting some really big men's tee shirts for around the house is perfectly fine. Now to get them ordered before she changes her mind. If she's not averse to shirts with sf/fantasy imagery on them, I've got a number of 4XL and 5XL shirts I'd like to get rid of. They're leftovers from designs our screenprinters discontinued, and I'm having real trouble getting them to sell. If sf/fantasy t-shirts aren't her cuppa, no problem. But if you think it would work, let me know and we can work something out. Hugs. My husband is getting a knee replacement in November, and I'm already considering how surgery and recovery will affect our lives and business activities. So I'll be keeping you and. your mom in mind as the two of you go through this difficult period, and hoping everything turns out fine.
{ "redpajama_set_name": "RedPajamaC4" }
8,536
\section{Introduction} \label{sec:intro} \vspace{-3pt} A ground penetrating radar (GPR, hereafter) is used for probing the underground by transmitting radio waves in the subsurface and recording the backscattered reflections. The interest in GPR is due to its ability to reveal buried objects non-invasively and detect non-metallic scatterers with increased sensitivity to dielectric contrast \cite{jol2008ground,persico2014introduction}. This sensing technique is, therefore, attractive for several applications such as geophysics, archaeology, forensics, and defense (see e.g. \cite{daniels2005ground,jol2008ground} for some surveys). In this work, our focus is detection of buried landmines. It is one of the most extensively investigated GPR applications due to its obvious security and humanitarian importance. Mine detection GPR usually operates in L-band ($1$-$2$ GHz) with ultra-wideband (UWB) transmit signals that allow resolving small targets ($5$-$10$ cm diameter) at shallow depths ($~15$-$30$ cm) \cite{yarovoy2002ultra, giovanneschi2013parametric}. In such situations, GPR target recognition suffers from signal distortion due to inhomogeneous soil clutter, surface roughness and antenna ringing. Moreover, the constituting material of many models of landmines is largely plastic and has a very weak response to radar signals due to its low dielectric contrast with respect to the soil \cite{daniels2005ground,bruschini2016survey}. A variety of signal processing algorithms have been proposed for detection of low metal-content landmines in realistic scenarios; approaches based on feature extraction and classification are found to be the most effective (see e.g. \cite{ratto2011exploiting,gonzalez2013combined,torrione2014histograms,giannakis2016model}), yet false-alarm rates remain very high. Further, a high-resolution GPR has long scan times thereby making the data acquisition by a portable instrument very cumbersome \cite{suksmono2010compressive}. In order to reduce the scan time or number of measurements, an emerging trend in GPR research \cite{gurbuz2012compressive,krueger2014compressive} is to employ the recently proposed compressed sensing (CS) framework \cite{eldar2012compressed,eldar2015sampling}. In CS, a signal can be reconstructed using a reduced number of samples w.r.t. the the Nyquist rate requirements, provided the signal is sparse in some domain.However, unlike point scatterers, the mine echoes are spatially extended and the resulting GPR received signal is not sparse in conventional range-time and frequency domains \cite{shao2013sparse}. Therefore, our immediate goal is to find an efficient sparse representation (SR) which accurately represents the scattering behaviors related to soil type and targets. This has been shown to improve the classifier performance in discriminating mines from clutter \cite{shao2013sparse, giovanneschi2015preliminary}. In SR, the signal-of-interest is transformed to a domain where the signal can be expressed as a linear combination of only a few columns or \textit{atoms} of the \textit{dictionary} matrix \cite{elad2006image}. When it is inefficient to pre-define the dictionary to contain arbitrary basis (e.g. Fourier or wavelets), the usual resort is to \textit{learn} the dictionary from previous measurements. Dictionary learning (DL) techniques aim to create adapted dictionaries which provide the sparsest reconstruction for given training-sets, i.e., a representation with a minimum number of constituting atoms. 
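To make the sparse representation model above concrete, the toy snippet below (not part of the original study; the dictionary here is random rather than learned, and all sizes and variable names are illustrative) builds a signal that is dense in time yet exactly $K$-sparse over a given dictionary:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
m, n, K = 512, 1024, 4            # signal length, number of atoms, sparsity (toy values)

D = rng.standard_normal((m, n))   # stand-in dictionary, atoms as columns
D /= np.linalg.norm(D, axis=0)    # normalize each atom to unit norm

x = np.zeros(n)                   # K-sparse coefficient vector
support = rng.choice(n, size=K, replace=False)
x[support] = rng.standard_normal(K)

y = D @ x                         # synthetic "range profile": a combination of only
                                  # K atoms, i.e. sparse in the dictionary domain
\end{verbatim}
A learned dictionary plays the role of $\mathbf{D}$ here; the point is simply that sparsity is defined with respect to the dictionary, not the raw time samples.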
Classical DL algorithms such as Method of Optimal Directions (MOD) \cite{engan1999method} and K-SVD \cite{elad2006image} operate in batches - dealing with the entire training set in each iteration. Although extremely successful, these methods are computationally demanding and not scalable to high-dimension training sets. An efficient alternative is the online dictionary learning (ODL) algorithm \cite{mairal2009online} that converges fast, processes small sets, and can infer the dictionary from large or time-varying training sets \cite{naderahmadian2016correlation}. Improved DL methods can aid in better target identification and subsequent reduction in GPR measurements through CS-based design. To this end, our work focuses on hitherto unexamined application of DL towards GPR-based landmine classification. Only one previous study has employed DL (K-SVD) using GPR signals \cite{shao2013sparse}, for identifying bedrock features. We propose employing ODL and then use the coefficients of the resulting sparse vectors as input to a Support Vector Machine (SVM) classifier to distinguish mines from clutter. Our comparison of ODL and K-SVD using real data from L-band GPR shows that ODL enjoys distinct advantage in speed and low false-alarm rates. Fast ODL computations pave the way towards cognitive GPR operation, wherein the system uses previous measurements to optimize the processing performance and is capable of sequential sampling adaptation based on the learned dictionary \cite{guerci2010cognitive,mishra2016cognitive,mishra2017performance}. In the next section, we describe the GPR system and the data sets. In Section III, we introduce our technique for GPR target identification, with particular focus on ODL. Section IV presents classification and reconstruction results using real radar data. We provide concluding remarks in Section V. \vspace{-6pt} \section{System and field tests} \label{sec:sys_meth} \vspace{-3pt} We used a commercial GPR system and carried out the field campaign at Leibniz Institute for Applied Geophysics (LIAG), Hannover (Germany) \cite{gonzalez2013combined}. We now provide details for the system and data. \vspace{-8pt} \subsection{L-band GPR} \label{subsec:gpr} The GPR system (see Fig. \ref{fig:GPR} inset) is an impulse radar with central frequency of 2 GHz.The frequency of the pulse repetition (PRF) and the sampling of the receiver ADC is of $1$ MHz. Table \ref{tbl:techparams} lists the salient technical parameters of the system. The scan rate of the system is $\sim1$ m/s with sampling resolution of 1 cm towards the perpendicular broadside (or X direction) and $4$ cm towards the cross-beam (Y direction). The radar uses a $8$ cm$\times8$ cm dual bow-tie dipole antenna for both transmit (Tx) and receive (Rx) sealed in a metallic shielding and an internal absorber. The raw data consists of samples of complex envelope of the received signal. \begin{table}[t] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[scale=0.2]{figs/GPR_TestField2.pdf} \captionof{figure}{Test site (red) with buried mines. 
Inset shows GPR system.} \label{fig:GPR} \end{minipage}\hfill \begin{minipage}[b]{0.47\linewidth} \centering \begin{tabular}{ l | l } \hline \noalign{\vskip 1pt} Parameter & Value\\[1pt] \hline \hline \noalign{\vskip 1pt} Operating frequency & 2 GHz\\[1pt] PRF & 1 MHz\\[1pt] Pulse length & 0.5 ns\\[1pt] Sampling time & 25 ps\\[1pt] Spatial sampling & 1 cm\\[1pt] Cross resolution & 4 cm\\[1pt] Antenna height & 5-9 cm\\[1pt] Samples/A-scan & 512\\ \hline \hline \end{tabular} \caption{GPR parameters} \label{tbl:techparams} \end{minipage} \vspace{-14pt} \end{table} \vspace{-8pt} \subsection{Mines data} \label{subsec:minesdata} The testbed was a grassy, moderately rough surface containing landmine simulants of different sizes that in the order of decreasing size are PMN/PMA2, ERA and T72, all buried at a depth of $5$-$10$ cm \cite{gonzalez2013accurate}. During the field tests, the GPR scanned different $1$ m$\times1$ m sections of the test-bed. The soil texture was sandy and highly inhomegeneous (due to the presence of material such as organic matter and stones), thereby leading to a high variability in the electrical parameters. We measured the dielectric constant at three different locations of the testbed with a Time Domain Reflectometer (TDR) to obtain an estimate of its mean value and variability. The average value oscillated between 4.6 and 10.1 with $15\%$ standard deviation and correlation length \cite{gonzalez2013combined} of $20$ cm. Such big variations in soil compositions pose difficulties in mine detection with existing methods. \vspace{-6pt} \section{GPR Target Identification Method} \label{sec:methodology} \vspace{-3pt} We now describe our method for dictionary learning and classification for GPR target identification. The literature for DL and SVM is extensive, and hence we only summarize these methods here. In the following, we use boldface lowercase and uppercase letters for vectors and matrices, respectively. \vspace{-8pt} \subsection{Dictionary learning} A dictionary learning algorithm finds an over-complete dictionary $\mathbf{D} \in \mathbb{R}^{m\times n}$, $m<n$ that can sparsely represent measurements $\mathbf{y}\in \mathbb{R}^m$. For the training data $\mathbf{Y} = [\mathbf{y}_1, \cdots, \mathbf{y}_L]$, we call $\mathbf{X} = [\mathbf{x}_1, \cdots, \mathbf{x}_L] \in \mathbb{R}^{n\times L}$ a sparse representation of $\mathbf{Y}$ over $\mathbf{D} = [\mathbf{d}_1, \cdots, \mathbf{d}_n]$, if $\mathbf{Y} \simeq \mathbf{D}\mathbf{X}$. Here, $\mathbf{d}_j$, $j=1,\cdots,n$ is called an atom of the dictionary. Each of the vectors $\mathbf{x}_i$ is a sparse representation of $\mathbf{y}_i$ with only $K$ nonzero entries. A tractable formulation of this problem is the following non-convex optimization\par\noindent\small \begin{flalign} \label{eq:csrecoverl1_1} & \underset{\mathbf{D},\mathbf{X}}{\text{minimize}}\phantom{1}\left\Vert \mathbf{Y}-\mathbf{D}\mathbf{X}\right\Vert _{F}\nonumber\\ & \text{subject to}\phantom{1} \left\Vert\mathbf{x}_i\right\Vert_0 \le K,\:\forall 1\le i \le L, \end{flalign}\normalsize where $||\cdot||_F$ denotes Frobenius norm. Since both $\mathbf{D}$ and $\mathbf{X}$ are unknown, commonly this is turned into a two-step convex problem that alternately minimizes $\mathbf{X}$ (\textit{sparse coding step}) \cite{aharon2006k} and $\mathbf{D}$ (\textit{dictionary update step}). The popular K-SVD algorithm sequentially updates all the atoms for each alternation between the aforementioned two steps. 
This is a \textit{batch} algorithm that uses the entire training data for updating the dictionary at each alternation. The ODL also updates the entire dictionary sequentially, but uses one element of training data at a time for the gradient descent-based dictionary update step. For the sparse coding step, K-SVD employs Orthogonal Matching Pursuit (OMP) with the formulation: \par\noindent\small \begin{flalign} \label{eq:omp} & \underset{\mathbf{x}_i}{\text{minimize}}\phantom{1}\left\Vert\mathbf{x}_i\right\Vert_0\nonumber\\ & \text{subject to}\phantom{1} \left\Vert \mathbf{y}_i-\mathbf{D}\mathbf{x}_i\right\Vert^2_{2} \le \alpha,\:\forall 1\le i \le L, \end{flalign}\normalsize where $\alpha$ is the maximum residual error used as a stopping criterion. The ODL, on the other hand, uses Cholesky-based implementation of the LARS-LASSO algorithm \cite{osborne2000new}. The latter solves a $\ell_1$-regularized least-squares problem:\par\noindent\small \begin{align} \label{eq:lars} \mathbf{x}_T \triangleq \underset{\mathbf{x}\in \mathbb{R}^n}{\mathrm{min}} \frac{1}{2}||\mathbf{y}_T - \mathbf{D}_{T-1}\mathbf{x}||^2_2 + \lambda||x||_1, \end{align}\normalsize where the subscripts $T-1$ and $T$ denote last and current iterations. \vspace{-6pt} \subsection{Signal classification} We use SVM to classify the sparsely represented GPR range profiles. Given a pre-defined collection of labeled observations, namely a ``classification set'', SVM searches for a functional $f:\mathbf{R}^n \rightarrow \mathbf{R}$ that maps any given observation $\mathbf{x}_i$ to a class $c\in \mathbb{R}$. In our work, the classification set is a labeled collection of GPR range profiles of targets/clutter which have been sparsely decomposed using the learned dictionary $\mathbf{D}$. We refer the reader to \cite{chang2011libsvm} for details of SVM. Briefly, SVM transforms the data into a high dimensional feature space where it is easier to separate between different classes. The kernel function that we use to compute the high dimensional operations in the feature space, is the Radial Basis Function (RBF) To optimally select the SVM input parameters, we arranged the original classification set into training and validation vectors in $\nu$ different ways ($\nu$-fold cross-validation with $\nu$=10) to arrive at a certain mean cross-classification accuracy of the validation vectors. \vspace{-6pt} \section{Experimental Results and Analysis} \label{sec:results} \vspace{-3pt} We divided the entire LIAG data into three different sets: the training set to learn the dictionary, the classification set for the SVM classifier and the test set. We devised a statistical approach to select the best parameters for DL with GPR mines data, and then use the resultant dictionary for the classification procedure. \vspace{-6pt} \subsection{Dictionary comparison} \label{subsec:compexp} \begin{figure}[!t] \centering \includegraphics[scale=0.25]{figs/distributions.pdf} \caption{\scriptsize{Normalized histograms of similarity measure. Here, $T = 40$, $K=512$, $\lambda = 0.1$, and $\alpha = 0.1$.}\vspace{-9pt}} \label{fig:odlKsvdDist} \end{figure} The training set $\mathbf{Y}$ consisted of more than two thousand range profiles that were randomly selected from a collection of GPR responses in proximity of the buried targets; these areas belonged to different surveys of the test site. 
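For readers who wish to prototype a comparable pipeline, both steps described above are available in standard libraries; the sketch below uses scikit-learn's \texttt{MiniBatchDictionaryLearning} (an online dictionary learning scheme in the spirit of \cite{mairal2009online}) and \texttt{OrthogonalMatchingPursuit}. It is an illustrative sketch only: the paper's own implementation and parameter choices are not reproduced, and the training array is a random placeholder standing in for the real range profiles.
\begin{verbatim}
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
Y_train = rng.standard_normal((500, 512))    # placeholder for L x m range profiles

# Online dictionary learning; 'lasso_lars' mirrors the l1-regularized coding step.
odl = MiniBatchDictionaryLearning(n_components=512, alpha=0.1,
                                  transform_algorithm='lasso_lars',
                                  random_state=0)
codes = odl.fit(Y_train).transform(Y_train)  # sparse codes, one row per profile
D = odl.components_.T                        # learned dictionary, atoms as columns

# Sparse coding of a single profile with OMP at a fixed sparsity (the OMP step above).
y = Y_train[0]
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=6, fit_intercept=False).fit(D, y)
y_hat = D @ omp.coef_
print(np.linalg.norm(y - y_hat) / np.linalg.norm(y))   # relative residual
\end{verbatim}
In this sketch the dictionary happens to be square ($512$ atoms of length $512$); an overcomplete dictionary is obtained simply by increasing \texttt{n\_components}.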
In order to compare the dictionaries obtained from ODL and K-SVD, we use a \textit{similarity measure} that quantifies the closeness of the reconstructed vectors $\mathbf{\hat{y}}_i$, obtained using the sparse coefficients and the learned dictionary $\mathbf{D}$, to the original training set vectors $\mathbf{y}_i$. Let the cross-correlation between the two vectors be $\mathbf{r}_{\mathbf{y}_i,\mathbf{\hat{y}}_i}(m) = \sum\limits_{n=-\infty}^{+\infty}\mathbf{y}_i(n)\mathbf{\hat{y}}_i(n+m)$. The normalized cross-correlation is\par\noindent\small \begin{align} \overline{\mathbf{r}_{\mathbf{y}_i,\mathbf{\hat{y}}_i}}(m) = \frac{\mathbf{r}_{\mathbf{y}_i,\mathbf{\hat{y}}_i}(m)}{\sqrt{\mathbf{r}_{\mathbf{y}_i,\mathbf{y}_i}(0)\mathbf{r}_{\mathbf{\hat{y}}_i,\mathbf{\hat{y}}_i}(0)}} . \end{align}\normalsize We define the similarity measure $s_i$ to be the maximum of the absolute value of the normalized cross-correlation between $\mathbf{y}_i$ and $\mathbf{\hat{y}}_i$: $s_i = \max_m|\overline{\mathbf{r}_{\mathbf{y}_i,\mathbf{\hat{y}}_i}}(m)|$. The set of similarity measure values $\{s_i\}_{i=1}^{N}$ forms an empirical probability density function (epdf) $p_{s_{DL}}$, where the subscript DL represents the method used (K-SVD or ODL). As an example, Fig. \ref{fig:odlKsvdDist} shows the epdfs for the dictionaries learned for the GPR mines data. We note that $p_{s_{ODL}}$ is more skewed towards unity than $p_{s_{KSVD}}$, demonstrating a higher similarity of the reconstructions from the learned ODL dictionary to the original training set. To arrive at optimum parameter values affecting these epdfs, we now use statistical metrics. \vspace{-8pt}
\subsection{Parameter analysis} \label{subsec:paranal}
\begin{figure}[!t]
\centering
\subfloat[Coefficient of variation]{%
\includegraphics[width=3.9cm]{figs/CV_mod.pdf}%
\label{fig:cov}%
}\qquad
\subfloat[Kolmogorov-Smirnov distance]{%
\includegraphics[width=4.0cm]{figs/kstest2_mod.pdf}%
\label{fig:ksd}%
}
\caption{\scriptsize{Statistics of similarity measure distributions for $\lambda=0.1$ and $\alpha=0.1$ and varying number of iterations and trained atoms.}\vspace{-14pt}} \label{fig:cvks} \end{figure}
The parameters which predominantly affect the results of the ODL and K-SVD algorithms are the number of iterations ($T$), the number of trained atoms $K$, the regularization parameter $\lambda$ in the sparse decomposition step (\ref{eq:lars}), and the error parameter $\alpha$ used to sparsely decompose the training set via OMP. The epdf $p_s$ is then a function of these four parameters. For a given DL method, our goal is to compare the epdfs of the similarity measure obtained by varying these parameters, and to arrive at the thresholds of parameter values after which the changes in $p_{s_{DL}}$ are only incremental. We are looking for a set of values $\{T, K, \lambda, \alpha\}$ for which $p_{s_{DL}}$ is skewed towards unity and has small variance. The individual comparisons of the mean ($\mu$) and standard deviation ($\sigma$), as used in previous GPR DL studies \cite{shao2013sparse}, are not sufficient to quantify the observed dispersion in the epdfs obtained by varying any of the parameter values. We therefore compare both statistics simultaneously by using the coefficient of variation, $CV = \sigma/\mu$; in our analysis, it represents the extent of variability in relation to the mean of the similarity values. Figure \ref{fig:cov} shows the variation of CV for ODL and K-SVD for different values of trained atoms and fixed values of $\lambda=0.1$ and $\alpha=0.1$.
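The similarity measure itself is straightforward to evaluate; the short sketch below (NumPy, with illustrative array names, not the original processing chain) computes $s_i$ from the full cross-correlation sequence normalized by the zero-lag autocorrelations, and the normalized histogram of the resulting values gives the epdf of Fig.~\ref{fig:odlKsvdDist}.

\begin{verbatim}
import numpy as np

def similarity(y, y_hat):
    # Max of the absolute normalized cross-correlation between y and its reconstruction.
    r = np.correlate(y, y_hat, mode="full")               # r(m) over all lags m
    norm = np.sqrt(np.dot(y, y) * np.dot(y_hat, y_hat))   # sqrt(r_yy(0) * r_yhyh(0))
    return np.max(np.abs(r)) / norm

def similarity_epdf(Y, D, X, bins=50):
    # Y: training profiles (columns); X: their sparse codes over the learned dictionary D.
    s = np.array([similarity(Y[:, i], D @ X[:, i]) for i in range(Y.shape[1])])
    pdf, edges = np.histogram(s, bins=bins, range=(0.0, 1.0), density=True)
    return s, pdf, edges
\end{verbatim}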
For ODL, after $40$ iterations the variation in the epdf is negligible, while the impact of increasing the number of trained atoms grows. We note that K-SVD does not show any such trend. Our second metric to compare the distributions of the similarity measure obtained by successive changes in parameter values is the Kolmogorov-Smirnov (K-S) distance \cite{chakravarti1967handbook}, which is the maximum distance between two given empirical cumulative distribution functions (ecdf). Larger values of the K-S distance indicate that the samples are drawn from different underlying distributions. Suppose $P_{s_1}$ and $P_{s_2}$ are the ecdfs of the same length corresponding to the epdfs $p_{s_1}$ and $p_{s_2}$, respectively. Then the K-S distance $D(P_{s_1}, P_{s_2})$ is\par\noindent\small \begin{align} D(P_{s_1}, P_{s_2}) = \sup_{1 \le i \le N}|P_{s_1}(i) - P_{s_2}(i)|, \end{align}\normalsize where $\sup$ denotes the supremum over all distances. Figure \ref{fig:ksd} shows the variation of the K-S distance for the same parameter values as used for the CV evaluation in Fig. \ref{fig:cov}. We kept the ecdf corresponding to $K=128$ atoms and $T=1$ iteration as a reference. We then computed the K-S distance of the ecdfs obtained with other parameter values with respect to this reference. As the parameter values move away from this reference, the K-S distance increases. However, for both DL methods, we notice the following trend: as the number of iterations increases, the difference in K-S distance between successive iterations becomes negligible, and the distance is influenced mostly by the number of trained atoms. The K-S distance quantifies the difference between the ODL and K-SVD distributions rather than stating which one is better. Combining this information with Fig. \ref{fig:cov}, it is evident that ODL has a lower CV and is also more robust to parameter changes than K-SVD. \vspace{-8pt}
\subsection{Classification and computational efficiency} \label{subsec:compeff} After examining the thresholds at which the CV and K-S distance stabilize, we selected the following values for dictionary learning: $T = 40$, $K=512$, $\lambda = 0.1$, and $\alpha = 0.1$. For this set of values, Fig. \ref{fig:odlKsvdDist} shows the epdfs of the similarity measures for ODL and K-SVD. In a landmine clearance campaign, these parameters will be learned offline and will remain fixed during the classification procedure. The test set contains two different surveys for each type of mine class, for a total of five thousand range profiles. Once the test set is sparsely decomposed using the learned dictionary, we estimated the mean sparsity to be $\approx 4$ for ODL and $\approx 6$ for K-SVD. In order to find the best functional for the SVM classifier, we conducted a cross-validation on the sparsely decomposed classification set using a set of different RBF kernel parameters. After obtaining the functional, we used it to classify the sparsely decomposed test set. Figure \ref{fig:classResult} shows the raw data for PMA/PMN mines at $15$ cm depth and the classification maps obtained with a learned dictionary using the ODL and K-SVD methods. It is clear that target and clutter recognition are drastically improved using a dictionary learned with ODL when compared with K-SVD. \begin{figure}[!t] \centering \includegraphics[scale=0.25]{figs/plot_full_class_final.pdf} \caption{(a) Raw-data for PMA/PMN (warmer values indicate presence of mines).
Classification map obtained with (b) K-SVD and (c) ODL.\vspace{-10pt}} \label{fig:classResult} \end{figure} Table \ref{tbl:perf} compares the classification performance of the two DL methods. Using accurate ground truth information, we defined \textit{target halos} as the boundaries of the buried landmines. Let the number of pixels and the declared mine pixels inside the target halo be $N_t$ and $N_m$, respectively. Similarly, we denote the number of true and declared clutter pixels outside the target halo by $N_c$ and $N_d$, respectively. Then, the probabilities of correct classification ($P_{CC}$) for mines and clutter are \begin{align} P_{CC_{\text{mines}}} = \frac{N_m}{N_t},\:\:\textrm{and}\:\:P_{CC_{\text{clutter}}} = \frac{N_d}{N_c}. \end{align} The $P_{CC}$, being the output of a classifier, should not be mistaken for the radar's probability of detection $P_d$, which is the result of a detector. A detector would declare the presence of a mine even when only a few pixels inside the halo have been declared as mine. Since $P_{CC}$ takes into account the entire target halo, it provides a fair and accurate evaluation of the classification result. The execution times for the sparse decomposition and classification steps were identical ($0.26$ s and $0.14$ s, respectively) for both the ODL and K-SVD methods. However, for the DL update, ODL took only $0.46$ s, more than sixteen times faster than K-SVD ($8.09$ s). \begin{table}[t] \centering \begin{tabular}{ l | l | l | l | l} \hline \noalign{\vskip 1pt} \multirow{ 2}{*}{} & \multicolumn{4}{c}{$P_{CC}$}\\ \cline{2-5} \noalign{\vskip 1pt} & Clutter & PMN/PMA2 & ERA & T72\\[1pt] \hline \hline \noalign{\vskip 1pt} K-SVD & 0.824 & 0.810 & 0.666 & 0.750 \\[1pt] ODL & 0.923 & 0.950 & 0.833 & 0.750 \\ \hline \hline \end{tabular} \caption{\small{Performance of ODL and K-SVD for mine-detection GPR. The ODL and K-SVD results are identical for the smallest mine T72 because only a few pixels are considered for T72 classification.}\vspace{-14pt}} \label{tbl:perf} \end{table} \vspace{-6pt}
\section{Summary} \label{sec:summary} In this work, we proposed ODL for sparse decomposition of GPR-based mine data. Our results indicate near-real-time execution times using ODL, high clutter rejection, and improved classifier performance. These characteristics open interesting opportunities for future cognitive operation of GPR. For example, in a realistic landmine-clearance campaign, an operator could gather the training measurements over a safe area next to the contaminated site, possibly burying some landmine simulants in it in order to have a faithful representation of the soil/target interaction beneath the surface. In other words, our work allows the operator to ``calibrate'' the acquisition by providing a good training set to learn the dictionary. \newcommand{\setlength{\itemsep}{0.001 em}}{\setlength{\itemsep}{0.001 em}} \bibliographystyle{IEEEtran} \scriptsize{
{ "redpajama_set_name": "RedPajamaArXiv" }
8,143
Q: Definition of magnetic field Can we define magnetic field at a point as: Force on a point magnetic north pole at that point divided by its pole strength. Anything wrong in this definition? (The concept of point magnetic pole is an idealization as point magnetic poles don't exist in nature and also there are no magnetic monopoles observed in nature.) Question edited: So does the above definition mean $\vec{B}$ or $\vec{H}$? I think my definition does not represent $\vec{H}$ because we have not divided $\mu_0$ anywhere. So it represents $\vec{B}$ in free space. Am I correct? A: Your definition is exactly that of magnetic field strength, H, on the old cgs system. The oersted was its unit, a magnetic field strength of 1 dyne per unit pole. [A unit pole was such that if two unit poles were placed 1 cm apart in vacuo, there would be a force of 1 dyne between them.] The study of electromagnetism was built on this foundation, and produced the edifice of electromagnetic theory that we have today – even though we may now choose to define things differently. So one answer to your question would be that there's nothing wrong with the 'per unit pole' definition. It has delivered the goods. Another view is that it's odd to use the non-existent monopole as the basis for electromagnetism. In fact the subject used to be taught using long ball-ended (dumb-bell shaped) magnets. The ball at the 'North' (or South) end produced a magnetic field in the surrounding air radially outwards (or inwards) varying with an inverse square law. So the balls seemed to be behaving as monopoles. But there's a very important caveat… The net magnetic flux from the North pole ball (unlike from a North monopole) is zero! This is because as much flux approaches the ball through the 'bar' of the magnet as leaves through the air, and as much flux leaves the South pole ball through the bar of the magnet as enters through the air. One of the consequences is that we have to be especially careful studying magnetic materials using this approach. Arguably an approach based on Ampère's current loops or the corresponding quantum concept is less contrived. A: Instead of magnetic North Pole, you could refer to a moving charge with a known velocity as a test particle. This avoids the issue of monopoles you mention.
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,048
Say it aint so! Did Michelle Phan get plastic surgery?! Before and after photos reveal that the makeup queen likely got jaw surgery, a nose job, and double eyelid surgery. Many fans feel betrayed that she used plastic surgery to change her look when all she does is preach makeup, makeup, MAKEUP! In the before photo she has a round jaw, while in the after photo her face is heart shaped. People don't realize just how significant a part of your face the jaw is. All of the South Korean plastic surgery that you may have seen or read about is totally predicated on jaw surgery. It looks like Michelle Phan had bone shaved from the sides of her jaw along with a small chin implant. It's also rumored that she had double eyelid surgery. It's popular now to get your eyes widened for that anime look. It looks like her nose is wider in the before picture (again, her eyes are much smaller too). If Michelle Phan had a nose job, the doctor removed cartilage from the bridge and tip. Her nose looks much thinner and smoother now. Is it still just makeup???
{ "redpajama_set_name": "RedPajamaC4" }
4,194
\section{Introduction} \label{Sec:Introduction} Wave-particle interactions are a nonlinear phenomenon \cite{Shukla1986, Elskens2003, Mendonca2001}, presenting regular and chaotic trajectories in phase space \cite{Escande1985, Escande1982, Lichtenberg1992}. Resonant islands can be used for particle acceleration \cite{Pakter1995, deSousa2010, deSousa2012, deSousa2018}, whereas chaotic orbits are responsible for particle heating \cite{Karney1978, CorreaSilva2013}. The linear regime for wave-particle interactions is well known, but many of its nonlinear aspects remain unclear. This type of interaction is a fundamental process in plasmas \cite{Shukla1986, Elskens2003, Fisch1987, Berk1992}, particle beams and accelerators \cite{Shukla1986, Davidson2001, Edwards2004}. In particular, wave-particle interactions are the basis for electromagnetic radiation amplifiers, such as free electron lasers \cite{Shukla1986}, gyrotrons \cite{Gilmour2011}, traveling wave tubes \cite{Pierce1950, Gilmour1994, Gilmour2011} (TWTs), etc. TWTs are vacuum electron devices \cite{Faillon2008, Gilmour2011} that present a broad bandwidth with a rather simple design. Industrial TWTs range from 2 to 30~cm in length, and are mainly used as signal amplifiers for wireless communications \cite{Minenna2019EPJH}, such as space telecommunication. On the other hand, longer TWTs (some meters long) can be used for basic plasma physics research \cite{Dimonte1977, Dimonte1978, Tsunoda1987, Tsunoda1991, Hartmann1995, Guyomarch1996, Doveil2005PRL, Doveil2005PPCF, MacorThesis2007, Doveil2011} since the equations that describe the TWT \cite{Pierce1950, Nordsieck1953, Tien1956, Gilmour1994} are the same as those characterizing the beam-plasma instability \cite{ONeil1971, ONeil1972} in the small cold beam limit \cite{Dimonte1977, Dimonte1978}. Electromagnetic radiofrequency (rf) waves in the TWT propagate through a slow wave structure (SWS) and interact with an electron beam in a vacuum environment. Thus, it is possible to experimentally mimic a beam-plasma system without the effects caused by the background plasma, and we are able to properly identify the effects due to the beam dynamics. These characteristics make the TWT an extremely useful device to simulate one-dimensional beam-plasma systems, which represent a paradigm for instabilities in wave-particle interactions. The first TWT used for plasma physics research was described by Dimonte and Malmberg \cite{Dimonte1977, Dimonte1978}. It was 3 meters long, built at the University of California in San Diego. The second research TWT \cite{Guyomarch1996, Doveil2005PRL, Doveil2005PPCF, Chandre2005, Doveil2006, MacorThesis2007, MacorEPJD2007, Doveil2011}, with 4 meters in length, was located at PIIM Laboratory, Aix-Marseille University (former Universit\'e de Provence). Both devices were helix TWTs \cite{Pierce1950, Gilmour1994, Gilmour2011}, with the helix supported by three alumina rods. In this paper, we present the third TWT specially designed to simulate beam-plasma systems. This TWT is also located at PIIM Laboratory. It allows a great control of the waves and beam parameters, and contains a measurement system that provides information about both the waves and the beam. The TWT presents an upgraded SWS with the helix rigidly wrapped in a dielectric polyimide tape, which guarantees a more precise helix pitch along the whole device length. This reduces the uncertainty of the experimental data and allows us to work with arbitrary waveforms. 
Furthermore, the wave phase velocity is lower in the upgraded TWT. Resonant electrons also move slower and the interaction time between waves and particles is longer, resulting in the appearance of a great variety of nonlinear effects. \begin{figure*}[!tb] \centering \includegraphics{FIG1-eps-converted-to.pdf} \caption{(Color online) (a) TWT structure (Reproduced from Ref. \onlinecite{Doveil2006}): (1) helix; (2) electron gun; (3) trochoidal velocity analyzer; (4) movable antenna; (5) glass vacuum tube; (6) slotted rf ground cylinder; (7) main magnetic coil; (8) resistive rf termination to reduce reflections. (b) Picture of the TWT showing its magnetic coils: (7) main coil producing the magnetic field $B_z$ to confine the beam, and (9) rectangular coils generating $B_x$ and $B_y$ for beam tilt correction. (c) Helix wrapped in a dielectric polyimide tape.} \label{fig:TWTStructure} \end{figure*} All these features of the upgraded TWT make new experiments possible, among which we may cite the use of pulsed beams \cite{MacorNPCS2007}, the experimental investigation of self-consistent effects \cite{Tennyson1994, Elskens2003, del-Castillo2002, Doveil2011}, and the quasilinear theory predictions \cite{Vedenov1962, Drummond1962, Vedenov1963, Tsunoda1987, Tsunoda1991, Hartmann1995, Elskens2007, Elskens2010, Elskens2010AAP, Besse2011, Elskens2012}. The upgraded TWT will also provide important experimental data for the validation of numerical codes \cite{Andre2013, Minenna2018, Minenna2019IEEE} that simulate wave-particle interactions in periodic structures. These new experiments and numerical simulations are important for plasma physics studies, but also contribute to the improvement of industrial devices. We introduce a theoretical model describing the electromagnetic field through the upgraded SWS. We determine the dispersion relation, phase and group velocities, and we show that the theoretical parameters agree very well with the experimental data. We obtain experimentally the damping caused by the helix wire in the wave amplitude, and the voltage standing wave ratio (VSWR) that accounts for wave reflections inside the device. With these parameters, we completely characterize wave propagation in the upgraded TWT. The interaction between waves and electrons is defined by the interaction impedance, or coupling impedance. We obtain the impedance both theoretically and experimentally with a very good agreement. The impedance decreases rapidly with the wave frequency, indicating a more efficient coupling for frequencies below 20~MHz. We also investigate nonlinear effects occurring in the TWT. When the beam is emitted with initial velocity slightly higher than the wave phase velocity, electrons and wave enter in resonance. The wave receives momentum and energy from the beam, and its amplitude increases. This is the mechanism used by industrial TWTs to amplify telecommunication signals \cite{Minenna2019EPJH}. The TWT at PIIM Laboratory is 2 to 3 times longer than the length necessary for waves to saturate. After saturation, the beam electrons are trapped by the wave and form bunches that move back and forth in the wave potential, making the wave amplitude oscillate along the device. We determine the wave growth coefficient and saturation amplitude. When the beam current is small, these parameters follow the predictions of the linear theory, proving that the wave saturates as a result of the development of electron bunches that are trapped in the wave potential. 
For higher values of current, we show that the growth coefficient and saturation amplitude deviate from the linear predictions due to nonlinear space charge effects caused by the repulsive electrostatic force among the beam electrons. Another nonlinear effect analyzed in this paper is the modulation of the electron beam. An initially monokinetic beam gets modulated by the wave, and presents two distinct energy peaks at the end of the TWT. The difference between the two energy peaks provides a linear approximation, without damping effects, for the wave amplitude. We show that modulation occurs for electrons emitted with initial velocity both lower or higher than the wave phase velocity. The paper is organized as follows. The experimental setup for the upgraded TWT is described in Section \ref{Sec:ExperimentalSetup}. In Section \ref{Sec:ColdParameters}, we develop the theoretical model for waves propagating in the TWT. We determine the theoretical and experimental dispersion relation, phase and group velocities, and we obtain experimentally the damping coefficient and VSWR. Section \ref{Sec:InteractionParameters} presents linear and nonlinear effects arising from the beam-wave interaction, including the modulation of the electron beam, the wave growth and saturation, the development of electrons bunches and the consequent oscillations in the wave amplitude. We calculate the four Pierce linear parameters \cite{Pierce1950, Gilmour1994, Guyomarch1996} that define the linear regime of TWTs. We show that the gain and space charge parameters increase with the beam current, meaning that nonlinear effects become important and the linear predictions lose accuracy for sufficiently high currents. In Section \ref{Sec:Conclusions}, we draw our conclusions and perspectives for the upgraded TWT. \section{Experimental setup} \label{Sec:ExperimentalSetup} At PIIM Laboratory, we use a 4 meters long TWT specially conceived to study wave-particle interactions with applications in plasma physics. In the TWT, an electron beam moves in the axial direction, and it interacts with electromagnetic waves propagating through a helix waveguide. Near the axis, the magnetic field generated by the wave is negligible, and the electric field presents only longitudinal components, i.e.\ in the TWT axial direction. Therefore, electrons on the axis experience an electrostatic field as those observed in plasmas, which makes the TWT an ideal device to investigate wave-particle interactions in plasmas. Furthermore, the TWT at PIIM Laboratory is long enough for nonlinear effects to take place \cite{Chandre2005, MacorThesis2007}. The TWT can thus be used to mimic a one-dimensional beam-plasma experiment, with the advantages that it is much less noisy than any plasma and the medium supporting the waves is always in its linear regime. The main components of the TWT are an electron gun, a trochoidal energy analyzer, and a SWS formed by a helix, where electromagnetic waves propagate \cite{Gilmour1994, Chandre2005}. In the TWT, it is possible to control several parameters with great accuracy. We use an arbitrary waveform generator that controls the number of modes produced, as well as the frequency, amplitude and phase of each individual mode. The electron beam is produced in such a way that we are able to determine its current, energy, and energy distribution function. Figure~\ref{fig:TWTStructure} shows a schematic representation of the TWT at PIIM Laboratory. 
The most important part of the equipment is the SWS (labeled as (1) in Figure~\ref{fig:TWTStructure}(a)). It is composed of a 4 meters long helix made of a 0.5~mm diameter beryllium copper (BeCu) wire with a radius $a = 16.355$~mm. The helix has a small pitch $p = 1.0$~mm, so that waves traveling at the speed of light along the helix wire have a much smaller phase velocity along the axis. It guarantees that the waves interact resonantly with an electron beam propagating along the TWT axis. In previous research TWTs \cite{Dimonte1977, Dimonte1978, Tsunoda1987, Tsunoda1991, Guyomarch1996, Doveil2005PRL, Doveil2005PPCF, Chandre2005, Doveil2006, MacorThesis2007, MacorEPJD2007, Doveil2011}, the helix was held inside a glass tube by three alumina rods. In this upgraded version of the PIIM TWT, the helix is wrapped in and rigidly held by a dielectric polyimide tape (Figure~\ref{fig:TWTStructure}(c)), which ensures a nearly constant helix pitch along its full length, $0 \leq z \leq 4$~m. As we show throughout the paper, the results obtained with the upgraded helix are much more precise than those generated by the previous PIIM TWT. The experimental errors observed for the upgraded TWT are mainly due to the resolution of the equipment used for diagnostics and for launching the waves and the beam. The helix is inserted into a glass vacuum tube with a resistive rf termination at each end to reduce wave reflections. The glass tube is evacuated by two ion pumps, one at each end of the device. The pressure inside the tube is typically on the order of $10^{-8}$~Torr. A good vacuum is necessary to prevent the electron beam from creating ions and forming a plasma in the system. The glass vacuum tube is enclosed by an axially slotted cylinder with $R_5 = 57.5$~mm of radius that defines the rf ground. The TWT also contains four movable antennas capacitively coupled to the helix through the glass vacuum tube. Some of the antennas emit the waves produced by the arbitrary waveform generator. The other antennas move along the slotted cylinder to receive the spectrum of waves after interaction with the electron beam. A triode (labeled as (2) in Figure~\ref{fig:TWTStructure}(a)) is located in one of the TWT extremities. It is used as an electron gun to produce a quasi-monoenergetic beam. The triode is composed of a heated cathode, a grid, and an anode with a small hole that determines the beam diameter (3~mm). The electron beam propagates along the axis of the SWS, and it is confined by an axial magnetic field $B_z$ generated by the main coil (Figure~\ref{fig:TWTStructure}(b)) that reaches a maximum value of 500~G. Two rectangular coils produce lower intensity magnetic fields, $B_x$ and $B_y$, on the order of 1~G for beam tilt correction. A trochoidal energy analyzer \cite{Guyomarch2000} (labeled as (3) in Figure~\ref{fig:TWTStructure}(a)) is located in the other extremity of the TWT. The energy analyzer gives us the distribution function of energy in the beam with a resolution sharper than 0.5~eV. A small fraction ($\sim 0.5\%$) of the electrons passes through a hole in the center of the frontal collector, and it is decelerated by four electrodes. The electrons are then selected by the drift velocity caused by the presence of an electric field perpendicular to the magnetic field. Using this technique, it is possible to directly measure the current collected through a tiny off-axis hole, which gives us the time-averaged axial energy distribution of the beam \cite{Guyomarch2000}.
\section{Wave propagation in the TWT} \label{Sec:ColdParameters} In this section, we analyze wave propagation in the TWT in absence of the electron beam. This propagation is characterized by the cold parameters: amplitude of the electromagnetic field generated by the waves through the SWS, dispersion relation, phase and group velocities, wave damping caused by the helix wire, and voltage standing wave ratio (VSWR) caused by wave reflections. We use Maxwell's equations to determine theoretically the electromagnetic field, dispersion relation, phase and group velocities. We compare the theoretical predictions with experimental data, and find an excellent agreement. The damping coefficient and the VSWR are obtained experimentally with great accuracy. \subsection{Theoretical model} \label{Sec:ColdParametersTheory} A wave propagating at the speed of light $c$ along the helix wire has a much smaller velocity $v_z$ in the axial direction of the TWT. For a helix of radius $a$ and pitch $p$, we define $\tan\psi = p/(2 \pi a)$. The axial velocity $v_z$ may be approximated as $v_z = c \sin\psi$, which corresponds to $2.92 \times 10^6$~m/s for the upgraded TWT of PIIM Laboratory, with $\tan \psi = 0.00973 = 1/102.76$. The actual wave phase velocity $v_{\varphi}$ along the $z$ direction also depends on the other elements that compose the SWS, and is obtained through the dispersion relation calculated in this section. The propagating wave generates electric and magnetic fields in the SWS given by Maxwell's equations in Heaviside-Lorentz units \cite{Spohn2004} \begin{gather} \label{eq:MaxwellEquations} \begin{aligned} \vec{\nabla} \cdot \vec{E} &= 0, \quad \quad & \vec{\nabla} \times \vec{E} = - \frac{1}{c} \frac{\partial \vec{B}}{\partial t} , \\* \vec{\nabla} \cdot \vec{B} &= 0, \quad \quad & \vec{\nabla} \times \vec{B} = \frac{\epsilon (r)}{c} \frac{\partial \vec{E}}{\partial t} , \end{aligned} \end{gather} where $\epsilon (r)$ is the dielectric constant of the medium through which the electromagnetic wave propagates. Considering a plane wave for which $\vec{E}, \, \vec{B} \, \sim \, {\mathrm e}^{{\mathrm i} (kz - \omega t)}$, we calculate the components of the electromagnetic field in cylindrical coordinates and obtain the solution to equations (\ref{eq:MaxwellEquations}): \begin{gather} \label{eq:EMField} \begin{aligned} E_{z,j} &= \left[ {C_j I_0 \left( {\gamma _j r} \right) + D_j K_0 \left( {\gamma _j r} \right) } \right] {\mathrm e}^{{\mathrm i} (kz - \omega t)} , \\ B_{z,j} &= \left[ {A_j I_0 \left( {\gamma _j r} \right) + B_j K_0 \left( {\gamma _j r} \right) } \right] {\mathrm e}^{{\mathrm i} (kz - \omega t)} , \end{aligned} \end{gather} where $I_0$ and $K_0$ are modified Bessel functions, $\omega$ is the angular frequency of the wave, $k$ is the (longitudinal) wavenumber, $\gamma _j$ is the (transversal) propagation constant of each medium given by \begin{equation} \label{eq:Gamma} \gamma _j ^2 = k^2 - \epsilon _j \left( {\frac{\omega }{c}} \right) ^2 , \end{equation} and the index $j$ indicates the medium through which the electromagnetic field propagates: $j=1$ vacuum (from the axis $r=0$~mm to the helix wire, which has an average radius $a = 16.355$~mm), $j=2$ dielectric tape (internal radius $R_1 = a$, external radius $R_2 = 16.515$~mm), $j=3$ vacuum ($R_2 < r < R_3$), $j=4$ glass vacuum tube (internal radius $R_3 = 17.1$~mm, external radius $R_4 = 22.25$~mm), $j=5$ air ($R_4 < r < R_5 = 57.5$~mm, with $R_5$ corresponding to the internal radius of the rf ground cylinder). 
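Before imposing the boundary conditions, the helix geometry quoted at the start of this subsection can be checked numerically. The following short Python sketch (illustrative only, with the constants rounded as in the text; it is not part of the analysis code) reproduces the quoted pitch angle and axial velocity.

\begin{verbatim}
import numpy as np

c = 3.0e8                    # speed of light [m/s]
p, a = 1.0e-3, 16.355e-3     # helix pitch and radius [m]

tan_psi = p / (2.0 * np.pi * a)          # ~0.00973, i.e. ~1/102.76
v_z = c * np.sin(np.arctan(tan_psi))     # ~2.92e6 m/s, as quoted above
\end{verbatim}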
In our model, we assume that all these structures are concentric. We consider the helix as an infinitely thin, perfect conductor. It means that the electric field is null and the magnetic field is continuous inside the helix \cite{Jackson1999}: \begin{gather} \label{eq:BoundaryConditions_r_a_E0} r = a \quad \Rightarrow \quad \left\{ \begin{aligned} & E_{z,1} \sin\psi + E_{\theta ,1} \cos\psi = 0 , \\* & E_{z,2} \sin\psi + E_{\theta ,2} \cos\psi = 0 , \\* & B_{z,1} \sin\psi + B_{\theta ,1} \cos\psi \\* & \quad = B_{z,2} \sin\psi + B_{\theta ,2} \cos\psi . \end{aligned} \right. \end{gather} Moreover, the electric field components perpendicular to the radial direction are continuous \cite{Jackson1999} \begin{gather} \label{eq:BoundaryConditions_r_a_EC} r = a \quad \Rightarrow \quad \left\{ \begin{aligned} E_{z,1} &= E_{z,2} , \\* E_{\theta ,1} &= E_{\theta ,2} . \end{aligned} \right. \end{gather} The rf ground cylinder is also considered a perfect conductor. Therefore, we have \begin{gather} \label{eq:BoundaryConditions_r_R5} r = R_5 \quad \Rightarrow \quad \left\{ \begin{aligned} E_{z,5} &= 0 , \\* E_{\theta ,5} &= 0 . \end{aligned} \right. \end{gather} On the surface that separates two dielectric media, and in the case this surface does not contain localized electric charges or superficial currents, the components of the electric and magnetic fields are related as \cite{Jackson1999} \begin{widetext} \begin{gather} \label{eq:BoundaryConditions_r_Rj} r = R_j \,(j = 2,3,4) \, \quad \Rightarrow \quad \left\{ \begin{aligned} \epsilon _j E_{r,j} &= \epsilon _{j+1} E_{r, j+1} , \quad \quad & E_{\theta ,j} = E_{\theta ,j+1}, \quad \quad E_{z,j} &= E_{z,j+1} , \\* B_{r,j} &= B_{r, j+1}, \quad \quad & \frac{B_{\theta ,j}}{\mu _j} = \frac{B_{\theta , j+1}}{\mu _{j+1}}, \quad \quad \frac{B_{z,j}}{\mu _j} &= \frac{B_{z, j+1}}{\mu _{j+1}} , \end{aligned} \right. \end{gather} \end{widetext} where $\mu _j$ is the magnetic permeability of medium $j$. We use the values $\epsilon _1 = \epsilon _3 = 1$ for the vacuum, $\epsilon _2 \cong 3.40$ for the dielectric polyimide tape at 1~MHz, $\epsilon _4 \cong 4.55$ for the Pyrex tube at 1~MHz, $\epsilon _5 \cong 1$ for the air, and $\mu _j \cong 1$ for all the dielectric materials in the SWS. From equations (\ref{eq:EMField})-(\ref{eq:BoundaryConditions_r_Rj}), we calculate the coefficients of the electromagnetic field. For region $j=1$ that contains the helix axis, $B_1 = D_1 = 0$ to avoid divergences in (\ref{eq:EMField}). The first equation in (\ref{eq:BoundaryConditions_r_a_E0}) determines the ratio \begin{equation} \label{eq:A1/C1} \frac{A_1}{C_1} = {\mathrm i} \frac{c \gamma_1}{\omega} \frac{I_0 (a \gamma_1)}{I_1 (a \gamma_1)} \tan \psi . \end{equation} From the other equations in (\ref{eq:BoundaryConditions_r_a_E0})-(\ref{eq:BoundaryConditions_r_Rj}), we obtain all the coefficients $A_j$, $B_j$, $C_j$ and $D_j$, with $2 \leq j \leq 5$, proportional to $C_1$. \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{FIG2-eps-converted-to.pdf} \caption{(Color online) Amplitude of each component of the (a) electric ($\vec{E}$) and (b) magnetic ($\vec{B}$) fields through a radial plane in the SWS. The wave that generates the field has a frequency 30~MHz.} \label{fig:EMField} \end{figure} Figure~\ref{fig:EMField} displays the amplitude of each component of the $\vec{E}$, $\vec{B}$ fields, with the normalization $C_1 = 1$~statV/cm $\cong 30$~V/mm, for a propagating wave with frequency 30~MHz. 
In the figure, it is possible to observe the behavior of the electromagnetic field through each individual component of the SWS. Near the axis ($r = 0$~mm), the electric and magnetic fields present only longitudinal components $E_z$, $B_z$. Electrons propagating along the TWT axis interact with an electrostatic field similar to the ones observed in plasmas. The total amplitudes of the electric and magnetic fields reach their maximum value close to the helix ($r = a = 16.355$~mm). For the electric field, the radial component is the most important one near the helix. On the other hand, the radial and axial components of the magnetic field have comparable amplitudes. For both the electric and the magnetic field, the amplitude of the tangential component is null throughout the SWS. In the region near the rf ground cylinder ($r = R_5 = 57.5$~mm), all the fields components decay to zero. Numerical simulations show that the maximum value of the total electric and magnetic fields, $|\vec{E}| / C_1$ and $|\vec{B}| / C_1$, increases with the wave frequency. However, the qualitative behavior of the fields remains the same as in Figure~\ref{fig:EMField}. The dispersion relation is obtained from the equation that ensures the continuity of the magnetic field at the helix wire: \begin{equation} \label{eq:DispRelTransc} \left( {\dfrac{\omega }{c \gamma _2 \tan \psi }} \right) ^2 = \dfrac{\dfrac{Y_2 }{Y_1 } - \dfrac{\gamma _1 }{\gamma _2 } \dfrac{I_0 \left( {a \gamma _1 } \right) }{I_1 \left( {a \gamma _1 } \right) }}{\epsilon _2 \dfrac{Y_4 }{Y_3 } - \dfrac{\gamma _2 }{\gamma _1 } \dfrac{I_1 \left( {a \gamma _1 } \right) }{I_0 \left( {a \gamma _1 } \right) }} , \end{equation} with $Y_j$ a function of the wave and SWS parameters. When the phase velocity $v_{\varphi} = \omega / k$ is much smaller than the speed of light, we may approximate \begin{equation} \label{eq:GammaAppro} \gamma _j \approx k . \end{equation} Considering approximation (\ref{eq:GammaAppro}) in (\ref{eq:DispRelTransc}) yields \begin{equation} \label{eq:DispRel} \omega = ck \tan \psi \left( {\dfrac{\dfrac{Y_2 }{Y_1 } - \dfrac{I_0 \left( {ak} \right) }{I_1 \left( {ak} \right) }}{\epsilon _2 \dfrac{Y_4 }{Y_3 } - \dfrac{I_1 \left( {ak} \right) }{I_0 \left( {ak} \right) }}} \right) ^{1/2} . \end{equation} From the dispersion relation (\ref{eq:DispRel}), we obtain the phase velocity $v_{\varphi }$ and the group velocity $v_{\mathrm g}$: \begin{equation} \label{eq:vp_vg} v_{\varphi } = \frac{\omega }{k} , \quad \quad \quad v_{\mathrm g} = \frac{\partial \omega }{\partial k} . \end{equation} \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{FIG3-eps-converted-to.pdf} \caption{(a) Phase and (b) group velocities as a function of the wave frequency.} \label{fig:PhaseGroupVelocities} \end{figure} Figure~\ref{fig:PhaseGroupVelocities} shows the phase and group velocities as a function of the wave frequency $f$. The phase velocity decreases rapidly for frequencies between 0 and 40~MHz, and it is almost constant above 50~MHz. The group velocity also decreases rapidly for small frequencies. It presents a minimum around 27~MHz, and it increases again after this point. \subsection{Experimental data} \label{Sec:ColdParametersExperiments} The antennas in the SWS are capacitively coupled to the helix through the glass vacuum tube. The signal is emitted by one of the antennas located near the electron gun. Another antenna moving axially along the TWT receives the signal that propagates along the helix wire. 
The temporal signal received by the moving antenna is registered by an oscilloscope and part of it is shown in Figure~\ref{fig:TemporalSignal} for a propagating wave at 30~MHz. The black line in Figure~\ref{fig:TemporalSignal} indicates the theoretical phase velocity $v_{\varphi }$ obtained from expression (\ref{eq:vp_vg}), and it agrees very well with the experimental data. The wave propagates all along the 4 meters TWT with the same phase velocity. The temporal signal shown in Figure~\ref{fig:TemporalSignal} is registered by the oscilloscope for some given positions along the TWT (usually 900 -- 1800 different positions). For each position, we perform a Fast Fourier Transform (FFT) of the temporal signal to obtain its amplitude and phase. Figure~\ref{fig:DirRefWaveAmplitude} shows the experimental wave amplitude $V$ and phase $\varphi$ as a function of the axial position $z$ along the device for the temporal signal of Figure~\ref{fig:TemporalSignal}. In panel~(a), we notice the presence of 60~MHz harmonics for $z > 1600$~mm. Panel~(b) shows that the wave propagates along the 4 meters TWT with a uniformly varying phase, as predicted by the experimental wavenumber ($\varphi_{\mathrm{pred}} = k_{\mathrm{exp}}z$). \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{FIG4.jpg} \caption{(Color online) Temporal signal registered by an oscilloscope for a propagating wave at 30~MHz. The black solid line indicates the theoretical phase velocity.} \label{fig:TemporalSignal} \end{figure} The experimental wavelength is obtained by interferometry. For each frequency, the interferometer multiplies the signal emitted by one of the antennas with the signal received by another antenna (as in Figure~\ref{fig:TemporalSignal}). We register the product of the signals as a function of the axial position of the receiver antenna. Through a numerical procedure, we determine the maxima of the registered signal, which gives us the averaged wavelength. The error is estimated as the standard deviation of the data points. The experimental wavelength can also be obtained through the FFT of the temporal signal. We determine the maximum points of the wave amplitude as shown in Figure~\ref{fig:DirRefWaveAmplitude}(a), and calculate the average wavelength. In both cases, the experimental wavelength agrees very well with the theoretical prediction (\ref{eq:DispRel}), with less than $1\%$ of difference. Figure~\ref{fig:DispRelDampingStandingWave}(a) shows the theoretical dispersion relation given by equation (\ref{eq:DispRel}) (blue solid curve) and the experimental points (full red circles) obtained by interferometry. The dispersion relation of the TWT closely resembles that of a finite radius, finite temperature plasma \cite{Malmberg1969}, but, unlike a plasma, the helix does not add appreciable noise. Figure~\ref{fig:DispRelDampingStandingWave}(a) also shows the experimental points obtained for the previous version of the SWS (green open squares). By comparing the error bars for the two sets of experimental points, we observe that the upgraded helix and measurement system are much more precise than the previous one, which enables us not only to obtain more accurate experimental data, but also to carry out experiments that require a fine adjustment of the parameters. Furthermore, waves propagating along the upgraded SWS present a lower phase velocity. Electrons resonantly interacting with the wave move slower, resulting in a longer interaction time and the appearance of more nonlinear effects along the TWT. 
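The amplitude and phase extraction described above can be sketched as follows (Python, with illustrative variable names; this is only a schematic of the processing, not the acquisition code): a single-bin discrete Fourier transform of each recorded trace at the drive frequency gives the local amplitude and phase, and the experimental wavenumber follows from the slope of the unwrapped phase versus axial position, $\varphi_{\mathrm{pred}} = k_{\mathrm{exp}}z$.

\begin{verbatim}
import numpy as np

def amplitude_phase(trace, dt, f0):
    # Single-bin discrete Fourier transform of one trace at the drive frequency f0.
    t = np.arange(trace.size) * dt
    c = 2.0 * np.sum(trace * np.exp(-2j * np.pi * f0 * t)) / trace.size
    return np.abs(c), np.angle(c)

def wavenumber(traces, z, dt, f0):
    # traces: one recorded time series per axial antenna position z[i].
    phase = np.unwrap(np.array([amplitude_phase(s, dt, f0)[1] for s in traces]))
    slope, _ = np.polyfit(z, phase, 1)    # phi(z) ~ +/- k_exp * z + const
    return abs(slope)                     # k_exp [rad/m]; wavelength = 2*pi / k_exp
\end{verbatim}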
In Figures \ref{fig:TemporalSignal} and \ref{fig:DirRefWaveAmplitude}(a), we observe a standing wave pattern, especially at the end of the device. Resistive rf terminations (labeled as (8) in Figure~\ref{fig:TWTStructure}(a)) are placed on both extremities of the glass tube to reduce wave reflections. However, residual reflections at the extremities and irregularities of the helix and glass tube generate a standing wave in the TWT. \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{FIG5-eps-converted-to.pdf} \caption{(Color online) (a) Experimental wave amplitude (green solid curve), and decomposition of total signal in two parts: propagating (red dashed curve) and reflected (blue dot-dashed curve) waves. (b) Experimental phase (green solid curve) and value predicted from the experimental wavenumber (red dashed curve). The wave amplitude and phase were obtained for the temporal signal in Figure~\ref{fig:TemporalSignal}.} \label{fig:DirRefWaveAmplitude} \end{figure} The voltage standing wave ratio (VSWR) is defined as \begin{equation} \label{eq:VSWR} \text{VSWR} = \frac{V_{\max{}}}{V_{\min{}}} , \end{equation} where $V_{\max{}}$ and $V_{\min{}}$ are, respectively, the local maximum and minimum values of the wave amplitude. In Figure~\ref{fig:DirRefWaveAmplitude}(a), we identify the maximum (red dots) and minimum (blue dots) points of the standing wave (green solid curve). With these points, we calculate the average VSWR along the TWT, as shown in Figure~\ref{fig:DispRelDampingStandingWave}(c). The error bars represent the standard deviation. For a propagating wave, i.e.\ no reflections, $\text{VSWR} = 1$. In the TWT, the VSWR varies between 1 and 2. At the maximum points of the standing wave, the propagating and reflected waves are in phase and they interact constructively, so that $V_{\max{}} = V_{\mathrm{prop}} + V_{\mathrm{refl}}$. On the other hand, the waves are out of phase and they interact destructively at the minimum points: $V_{\min{}} = V_{\mathrm{prop}} - V_{\mathrm{refl}}$. Using this procedure, we decompose the total signal (green solid curve) in two parts representing the propagating (red dashed curve) and reflected (blue dot-dashed curve) waves, as can be seen in Figure~\ref{fig:DirRefWaveAmplitude}(a). Figure~\ref{fig:DirRefWaveAmplitude}(a) shows that the wave is damped and its amplitude decreases along the TWT. In the absence of beam, the amplitude of the propagating wave varies as $V_{\mathrm{prop}} \sim {\mathrm e}^{- k_{\mathrm d} z}$. Thus, from the propagating wave we obtain the experimental damping coefficient $k_{\mathrm d}$ of the helix for different wave frequencies, as shown in Figure~\ref{fig:DispRelDampingStandingWave}(b). By decomposing the signal and identifying the propagating wave, we determine the damping coefficient with great accuracy as shown by the small error bars in the figure. For the upgraded SWS, the damping coefficient is proportional to the wave frequency. \begin{figure*}[!tb] \centering \includegraphics[width=1\linewidth]{FIG6-eps-converted-to.pdf} \caption{(Color online) (a) Theoretical dispersion relation (blue solid curve) and experimental data (full red circles) for the upgraded SWS, and experimental points (green open squares) for the previous SWS. 
(b) Damping coefficient and (c) average voltage standing wave ratio (VSWR) as a function of wave frequency for the upgraded SWS.} \label{fig:DispRelDampingStandingWave} \end{figure*} \section{Beam-wave interaction} \label{Sec:InteractionParameters} The interaction between waves and beam in the TWT is mainly characterized by the interaction impedance. We obtain the experimental impedance and show that it agrees with the theoretical predictions. The TWT at PIIM Laboratory is long enough to allow the appearance of nonlinear effects \cite{Chandre2005, MacorThesis2007}. In this section, we describe linear and nonlinear phenomena arising from the beam-wave interaction such as modulations in the electron distribution function, wave growth and saturation, and the development of electron bunches that alter the wave amplitude. \subsection{Interaction impedance} \label{Sec:Impedance} The interaction impedance, also known as coupling impedance, characterizes the coupling between the electron beam and the wave electric field $E_z$ in the direction the beam propagates. The interaction impedance $Z_0$ is calculated theoretically as \begin{equation} \label{eq:Z0} Z_0 = \frac{\left\langle {E_z^2} \right\rangle_{\mathrm b} }{2k^2 P} . \end{equation} $\left\langle {E_z^2} \right\rangle_{\mathrm b}$ is the average value of $E_z^2$ over the transversal section $A_{\mathrm b} = \pi r_{\mathrm b} ^2$ of the electron beam with radius $r_{\mathrm b} = 1.5$~mm: \begin{equation} \label{eq:Ez2} \left\langle {E_z^2} \right\rangle_{\mathrm b} = \frac{1}{A_{\mathrm b} } \int {E_z^2 {\mathrm d} A_{\mathrm b} } . \end{equation} $P$ is the total wave power inside the rf ground cylinder given by \begin{equation} \label{eq:Ptotal} P = \frac{1}{2} \text{Re} \left[ {\int {(\vec{S} \cdot \hat{z}) {\mathrm d} A_{\mathrm c}}} \right] , \end{equation} with $\vec{S} = (\vec{E} \times \vec{B}) / \mu _0$ the Poynting vector, $\mu _0$ the permeability of free space, and $A_{\mathrm c}$ the transversal section of the rf ground cylinder. \begin{figure}[!tb] \centering \includegraphics[width=0.55\linewidth]{FIG7-eps-converted-to.pdf} \caption{(Color online) $V_{{\mathrm b} \varphi}$ voltage for an electron beam with initial velocity $v_{{\mathrm b} 0}$ equal to the wave phase velocity $v_{\varphi}$.} \label{fig:BeamVoltage} \end{figure} To obtain the experimental interaction impedance, we need to define the voltage $V_{{\mathrm b} \varphi}$, which corresponds to the voltage applied to the electrons to create a beam with initial velocity $v_{{\mathrm b} 0}$ equal to the wave phase velocity $v_\varphi$. The beam voltage $V_{{\mathrm b} \varphi}$ is given by \begin{equation} \label{eq:VbPhi} V_{{\mathrm b} \varphi} = \frac{m_{\mathrm e} v_\varphi^2}{2e} , \end{equation} where $m_{\mathrm e}$ is the electron mass, and $e$ is the elementary charge. Figure~\ref{fig:BeamVoltage} presents the beam voltage $V_{{\mathrm b} \varphi}$ as a function of the wave frequency. The beam voltage decreases rapidly for frequencies below 40~MHz, and remains almost constant for higher frequencies. The blue solid curve was obtained from the theoretical dispersion relation using expressions (\ref{eq:DispRel}), (\ref{eq:vp_vg}) and (\ref{eq:VbPhi}). The red dots correspond to the experimental data in Figure~\ref{fig:DispRelDampingStandingWave}(a). 
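As an illustration of Eq.~(\ref{eq:VbPhi}), the following short evaluation (Python, with textbook constants; the input phase velocity of $2.59 \times 10^6$~m/s is simply the value implied by the $V_{{\mathrm b} \varphi} \cong 19.1$~V quoted in the next subsection for a 30~MHz wave) is consistent with Figure~\ref{fig:BeamVoltage}.

\begin{verbatim}
M_E, Q_E = 9.109e-31, 1.602e-19        # electron mass [kg], elementary charge [C]

def beam_voltage(v_phi):
    # Accelerating voltage giving a beam velocity equal to the wave phase velocity.
    return M_E * v_phi**2 / (2.0 * Q_E)

print(beam_voltage(2.59e6))            # ~19.1 V at the 30 MHz operating point
\end{verbatim}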
\begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{FIG8-eps-converted-to.pdf} \caption{Wave amplitude (30~MHz, $V_{{\mathrm b} \varphi} = 19.1$~V) after interaction with an electron beam ($V_{{\mathrm b} 0} = 15$~V, $I_{\mathrm b} = 2$~$\upmu$A). The Kompfner dip is located at $z_{{\mathrm K} {\mathrm d}} = 783.1$~mm.} \label{fig:KompfnerDip} \end{figure} The original wave emitted by one of the TWT antennas produces a modulation in the electron beam, which in turn generates a second wave. The second wave induces another modulation in the beam, which produces a third wave. This process continues and generates a hierarchy of waves propagating in the TWT. However, when the wave has a small amplitude and the beam current $I_{\mathrm b}$ is low, the beam-wave interaction is well described by Pierce's three-wave model \cite{Pierce1950, Gilmour1994, Guyomarch1996}. In this case, and if the electrons initial velocity $v_{{\mathrm b} 0}$ is lower than the wave phase velocity $v_\varphi$, it is possible to find values of beam current and voltage for which the three waves interfere destructively in such a way that the total wave amplitude becomes null for a given position $z_{{\mathrm K} {\mathrm d}}$ (known as Kompfner dip) along the TWT axis. Figure~\ref{fig:KompfnerDip} shows a Kompfner dip observed for a wave emitted with $f = 30$~MHz, which corresponds to $V_{{\mathrm b} \varphi} \cong 19.1$~V. The electron beam was emitted with $V_{{\mathrm b} 0} = 15$~V and $I_{\mathrm b} = 2$~$\upmu$A. For this configuration, the total wave amplitude is null at $z_{{\mathrm K} {\mathrm d}} = 783.1$~mm. The conditions to observe a null total wave amplitude were first described by Kompfner \cite{Kompfner1950}, and complemented later by Johnson \cite{Johnson1955}. We use the conditions and expressions described in these references, Pierce's three-wave model, and the parameters $V_{{\mathrm b} 0}$, $I_{\mathrm b} $, $z_{{\mathrm K} {\mathrm d}}$ obtained for the TWT to determine the experimental interaction impedance for the upgraded SWS. Figure~\ref{fig:InteractionImpedance} depicts the theoretical impedance (blue solid curve) calculated from expression (\ref{eq:Z0}), and the experimental values (red dots) obtained through the Kompfner dip method. Once again, theoretical and experimental values present a very good agreement. It shows the robustness of the theoretical model described in Section \ref{Sec:ColdParametersTheory}, and the accuracy of the experimental measurements for the upgraded version of the SWS and data acquisition system. The electric and magnetic fields ($|\vec{E}|$, $|\vec{B}|$) present a peak near the helix, as can be seen in Figure~\ref{fig:EMField}. The peak value increases with the wave frequency, whereas the ($|\vec{E}|$, $|\vec{B}|$) values remain approximately constant near the TWT axis where the electron beam propagates. This means that the electromagnetic field gets more concentrated near the helix, and far from the beam, for higher frequencies, which results in a lower impedance. Figure~\ref{fig:InteractionImpedance} shows that the interaction impedance strongly decreases with the wave frequency, indicating that the coupling between particles and waves is less efficient for wave frequencies above 20~MHz. \begin{figure}[!tb] \centering \includegraphics[width=0.63\linewidth]{FIG9-eps-converted-to.pdf} \caption{(Color online) Theoretical and experimental interaction impedance as a function of wave frequency. 
Error bars are within the marker size.} \label{fig:InteractionImpedance} \end{figure} \subsection{Electron velocity distribution and wave amplitude} \label{Sec:VelocityDistribution} \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{FIG10-eps-converted-to.pdf} \caption{Electron distribution function showing the current collected by the trochoidal analyzer as a function of the electrons voltage at the end of the TWT. In panel~(a) the beam was emitted with $I_{\mathrm b} = 1.24$~$\upmu$A and $V_{{\mathrm b} 0} = 16$~V. For panel~(b) $I_{\mathrm b} = 0.18$~$\upmu$A and $V_{{\mathrm b} 0} = 22$~V. In both panels, the beam interacts with a 30~MHz wave ($V_{{\mathrm b} \varphi} = 19.1$~V) propagating through the SWS.} \label{fig:ElectronDistributionFunction} \end{figure} When waves interact with an electron beam, nonlinear effects take place such as the modulation of the beam. Figure~\ref{fig:ElectronDistributionFunction} shows the distribution function at the end of the TWT for two beams interacting with a 30~MHz wave ($V_{{\mathrm b} \varphi} = 19.1$~V). The electron gun generates a monokinetic beam with $V_{{\mathrm b} 0} = 16$~V in panel~(a), and $V_{{\mathrm b} 0} = 22$~V in panel~(b). After interacting with the wave along the device, the electron beams present distribution functions with peaks for two different values of voltage. For panel~(a), $V_{{\mathrm b} 0} = 16$~V, the peaks are centered around $V_{{\mathrm b} -} = 14$~V and $V_{{\mathrm b} +} = 18$~V, and the distribution function exhibits a local minimum for 16~V. This means that some electrons received energy from the wave reaching 18~V, while other electrons lost energy to the wave and were slowed down to 14~V. In panel~(b), the peaks of the distribution function are centered at $V_{{\mathrm b} +} = 22$~V, which is the initial beam voltage $V_{{\mathrm b} 0}$, and $V_{{\mathrm b} -} = 19$~V, corresponding to the voltage $V_{{\mathrm b} \varphi}$ of an electron beam propagating at the wave phase velocity for a 30~MHz wave. The distribution functions in Figure~\ref{fig:ElectronDistributionFunction} are characteristic of beam modulation caused by its interaction with an electromagnetic wave. The difference between peaks in the distribution function can be used to estimate the wave amplitude $V_0$ disregarding the damping caused by the helix wire: \begin{eqnarray} \label{eq:V0} V_0 & = & \frac{m_{\mathrm e}}{2e} (v_{{\mathrm b} +} - v_{{\mathrm b} -}) |v_{{\mathrm b} 0} - v_{\varphi}| \nonumber \\* & = & (\sqrt{V_{{\mathrm b} +}} - \sqrt{V_{{\mathrm b} -}}) |\sqrt{V_{{\mathrm b} 0}} - \sqrt{V_{{\mathrm b} \varphi}}| , \end{eqnarray} where the velocities are obtained from $v_{\mathrm b} = \sqrt{2eV_{\mathrm b} /m_{\mathrm e}}$. Using the linear approximation (\ref{eq:V0}), we estimate the wave amplitude as $V_0 = 0.18$~V for Figure~\ref{fig:ElectronDistributionFunction}(a), and $V_0 = 0.11$~V in Figure~\ref{fig:ElectronDistributionFunction}(b). Note that this amplitude is estimated directly from the wave effect on the beam, whereas the amplitudes recorded by the oscilloscope (in our other figures) are rescaled by the measurement chain. 
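Equation~(\ref{eq:V0}) can be checked directly against the peak voltages read off Figure~\ref{fig:ElectronDistributionFunction}; the short sketch below (Python, with the values discussed above) reproduces the quoted estimates.

\begin{verbatim}
import numpy as np

def wave_amplitude(V_plus, V_minus, V_b0, V_b_phi):
    # Linear, damping-free estimate of the wave amplitude from the beam modulation peaks.
    return (np.sqrt(V_plus) - np.sqrt(V_minus)) * abs(np.sqrt(V_b0) - np.sqrt(V_b_phi))

print(wave_amplitude(18.0, 14.0, 16.0, 19.1))   # ~0.18 V, panel (a)
print(wave_amplitude(22.0, 19.0, 22.0, 19.1))   # ~0.11 V, panel (b)
\end{verbatim}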
\subsection{Wave growth and saturation} \label{Sec:WaveGrowth} \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{FIG11-eps-converted-to.pdf} \caption{Wave amplitude (30~MHz) after interaction with an electron beam ($I_{\mathrm b} = 326$~$\upmu$A) emitted with higher velocity than the wave phase velocity.} \label{fig:ElectronPackets} \end{figure} The linear and nonlinear interaction between waves and particles can also produce the wave growth observed in Figure~\ref{fig:ElectronPackets}. Wave growth occurs for electron beams emitted with initial velocity $v_{{\mathrm b} 0}$ slightly higher than the wave phase velocity $v_\varphi$. In the beginning of the interaction process, the wave receives momentum and energy from the beam and its amplitude increases, as can be seen for $0 < z < 1500$~mm in Figure~\ref{fig:ElectronPackets}. This is the operation mechanism for industrial TWTs used as signal amplifiers \cite{Minenna2019EPJH}. The TWT at PIIM Laboratory is long enough for us to observe the development of electron bunches for $z > 1500$~mm, i.e.\ after the wave amplitude saturates. The electrons are trapped by the wave, moving back and forth in its potential. As a result of momentum and energy conservation, the wave amplitude oscillates along the TWT. The interaction between wave and electrons introduces noise in the signal, as shown in Figure~\ref{fig:ElectronPackets}, but the wave phase remains well defined. To determine the wave growth coefficient, we begin by measuring the wave amplitude in absence of a beam (this is what we call signal 1). This signal presents effects related only to the SWS, such as the damping caused by the helix wire and the coupling between the helix and the receiving antenna. We then measure the wave amplitude in the presence of an electron beam (signal 2). Signal 2 contains effects related to both the SWS and the beam-wave interaction. By subtracting signal 1 from signal 2, we eliminate the influences caused by the SWS, and obtain a final signal that presents effects produced only by the beam-wave interaction. In the growth region of final signal ($0 < z < 1500$~mm in Figure~\ref{fig:ElectronPackets} for example), the wave amplitude grows exponentially along the TWT as $V_{\mathrm{final}} \sim {\mathrm e}^{k_{\mathrm g} z}$, with $k_{\mathrm g}$ the growth coefficient. Since the final signal contains only the effects caused by the beam-wave interaction, it enables us to determine the growth coefficient with great accuracy. \begin{figure}[!tb] \centering \includegraphics[width=1\linewidth]{FIG12-eps-converted-to.pdf} \caption{(Color online) (a) Growth coefficient and (b) saturation amplitude as a function of beam current for a wave emitted with 30~MHz.} \label{fig:GrowthSaturation} \end{figure} Figure~\ref{fig:GrowthSaturation}(a) displays the growth coefficient $k_{\mathrm g}$ as a function of the beam current $I_{\mathrm b}$ for a wave emitted at 30~MHz. The growth coefficient increases with the beam current, but it tends to a constant value for $I_{\mathrm b} \gtrsim 150$~$\upmu$A. The experimental data (red dots) for $I_{\mathrm b} < 150$~$\upmu$A agree with the theoretical prediction \cite{Pierce1950, Gilmour1994, Guyomarch1996} (blue solid curve), which estimates an increase in the growth coefficient proportional to $I_{\mathrm b} ^{1/3}$. The saturation amplitude $V_{\mathrm{sat}}$ is the maximum amplitude reached by the wave at the end of the first growth stage. 
It is determined from signal 2, and corresponds to $z_{\mathrm{sat}} \sim 1500$~mm in Figure~\ref{fig:ElectronPackets}. The saturation amplitude varies with the beam current, as can be seen in Figure~\ref{fig:GrowthSaturation}(b). As well as the growth coefficient, the saturation amplitude increases for beam currents below 150~$\upmu$A, and tends to a constant value for $I_{\mathrm b} \gtrsim 150$~$\upmu$A. When the wave saturates due to the development of electron bunches that are trapped by the wave potential, $V_{\mathrm{sat}}$ increases \cite{Dimonte1978, Guyomarch1996} with the beam current proportionally to $I_{\mathrm b} ^{2/3}$. The experimental data (red dots) in Figure~\ref{fig:GrowthSaturation}(b) agree very well with the theoretical prediction (blue solid curve), indicating that waves in the TWT saturate because of the nonlinear development of electron bunches along the device. \subsection{Pierce linear parameters} \label{Sec:PierceParams} \begin{figure*}[!tb] \centering \includegraphics[width=1\linewidth]{FIG13-eps-converted-to.pdf} \caption{Pierce linear parameters: (a) gain, (b) detuning, (c) damping, and (d) space charge as a function of beam current for a 30~MHz wave and a beam emitted with $V_{{\mathrm b} 0} = 24$~V.} \label{fig:PierceParameters} \end{figure*} The linear regime of interaction between waves and beam in the TWT is completely characterized by four parameters \cite{Pierce1950, Gilmour1994, Guyomarch1996}, known as Pierce linear parameters. The gain parameter $C$ defines the wave gain as it interacts with the beam along the device: \begin{equation} \label{eq:PierceParametersGain} C^3 = \frac{I_{\mathrm b} Z_0}{4V_{{\mathrm b} 0}} . \end{equation} The detuning parameter $b$ measures the normalized difference between the initial beam velocity and the wave phase velocity in the absence of electrons: \begin{equation} \label{eq:PierceParametersDetuning} b = \frac{v_{{\mathrm b} 0} - v_{\varphi}}{C v_{\varphi}} . \end{equation} The damping parameter $d$ is the damping rate of the SWS in the absence of electrons normalized with the wave frequency, initial beam velocity and gain parameter: \begin{equation} \label{eq:PierceParametersDamping} d = \dfrac{ k_{\mathrm d} }{C \omega / v_{{\mathrm b} 0}} . \end{equation} The space charge parameter $QC$ accounts for the repulsive electrostatic force between the beam electrons. It also takes into account the TWT geometry. Birdsall and Brewer \cite{Birdsall1954} calculated $QC$ as \begin{equation} \label{eq:PierceParametersSpaceCharge} QC = \dfrac{1}{4C^2} \left( {\dfrac{{\omega _{\mathrm q}} / \omega}{1 + {\omega _{\mathrm q}} / \omega}} \right) ^2 , \end{equation} where $\omega _{\mathrm q} = P_{\mathrm q} \omega _{{\mathrm p} {\mathrm b}}$, with $\omega _{{\mathrm p} {\mathrm b}} = (1/r_{\mathrm b} ) \sqrt{e I_{\mathrm b} / (\pi \epsilon_0 m_{\mathrm e} v_{{\mathrm b} 0})}$ the beam plasma frequency, $\epsilon_0$ the vacuum permittivity, $P_{\mathrm q} = \left( {1 + R_{\mathrm q} ^2} \right) ^{- 1/2}$ the plasma frequency reduction factor due to the finite geometry of the beam \cite{Branch1955, Guyomarch1996}, $R_{\mathrm q} = v_{{\mathrm b} 0} \tau / (\omega r_{\mathrm b}) $, and $\tau$ a geometric factor of unitary order that varies slowly as a function of $\omega r_{\mathrm b} / v_{{\mathrm b} 0}$. 
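To make the two procedures described above concrete, two short Python sketches follow. The first illustrates the extraction of the growth coefficient from the amplitude profiles measured with and without the beam (Section~\ref{Sec:WaveGrowth}); the arrays \texttt{z\_mm}, \texttt{signal1} and \texttt{signal2} are placeholders for the measured data, not actual measurements.
\begin{verbatim}
import numpy as np

def growth_coefficient(z_mm, signal1, signal2, z_min=0.0, z_max=1500.0):
    # signal1: amplitude without beam (SWS effects only)
    # signal2: amplitude with beam (SWS + beam-wave interaction)
    v_final = signal2 - signal1          # beam-wave interaction only
    mask = (z_mm >= z_min) & (z_mm <= z_max) & (v_final > 0)
    # log-linear fit of V_final ~ exp(k_g z) in the growth region
    k_g, _ = np.polyfit(z_mm[mask], np.log(v_final[mask]), 1)
    return k_g                            # in 1/mm if z_mm is in mm
\end{verbatim}
The second sketch evaluates the Pierce parameters of expressions (\ref{eq:PierceParametersGain})--(\ref{eq:PierceParametersSpaceCharge}); the interaction impedance \texttt{Z0}, the cold damping rate \texttt{k\_d} and the beam radius \texttt{r\_b} are placeholders to be replaced by the measured values, and the geometric factor uses the empirical expression for $\tau$ quoted just below.
\begin{verbatim}
import math

E = 1.602176634e-19      # elementary charge [C]
ME = 9.1093837015e-31    # electron mass [kg]
EPS0 = 8.8541878128e-12  # vacuum permittivity [F/m]

def beam_velocity(V):
    return math.sqrt(2.0 * E * V / ME)

def pierce_parameters(I_b, V_b0, V_phi, f, Z0, k_d, r_b):
    omega = 2.0 * math.pi * f
    v_b0, v_phi = beam_velocity(V_b0), beam_velocity(V_phi)
    C = (I_b * Z0 / (4.0 * V_b0)) ** (1.0 / 3.0)          # gain
    b = (v_b0 - v_phi) / (C * v_phi)                      # detuning
    d = k_d * v_b0 / (C * omega)                          # damping
    omega_pb = math.sqrt(E * I_b / (math.pi * EPS0 * ME * v_b0)) / r_b
    tau = 1.3 * omega * r_b / v_b0 + 0.7228               # empirical fit
    R_q = v_b0 * tau / (omega * r_b)
    P_q = 1.0 / math.sqrt(1.0 + R_q ** 2)
    x = P_q * omega_pb / omega                            # omega_q / omega
    QC = (x / (1.0 + x)) ** 2 / (4.0 * C ** 2)            # space charge
    return C, b, d, QC

def current_for_gain(C, V_b0, Z0):
    # Inverse of the gain relation: beam current at which C reaches a value.
    return 4.0 * V_b0 * C ** 3 / Z0
\end{verbatim}
With $0.1<C<0.2$, the last function gives the beam-current range beyond which the linear theory is expected to lose accuracy, as discussed below.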
Figure~\ref{fig:PierceParameters} shows Pierce's linear parameters obtained from expressions (\ref{eq:PierceParametersGain})-(\ref{eq:PierceParametersSpaceCharge}) as a function of beam current for a 30~MHz wave, a constant beam voltage $V_{{\mathrm b} 0} = 24$~V, and $\tau = 1.3\, \omega r_{\mathrm b} / v_{{\mathrm b} 0} + 0.7228$ for the TWT. As expected, the gain parameter increases with the beam current. It means the wave extracts more energy and momentum from the beam, resulting in a higher growth coefficient and saturation amplitude as shown in Figure~\ref{fig:GrowthSaturation}. The detuning and damping parameters, on the other hand, decrease with the current. In panel~\ref{fig:PierceParameters}(d), we observe that the space charge parameter increases with the beam current. For sufficiently high values of current, the electrostatic force acting on the beam electrons increases, the nonlinear effects caused by the beam space charge become important and the predictions of the linear theory lose accuracy. This is the case for the growth coefficient and saturation amplitude in Figure~\ref{fig:GrowthSaturation}, which deviate from the theoretical prediction for currents above 150~$\upmu$A. The linear theory is valid for small enough values of the gain parameter, i.e.\ $C \ll 1$. For the upgraded TWT, we can estimate the beam current threshold for which the linear theory loses accuracy by considering $0.1<C<0.2$ in expression (\ref{eq:PierceParametersGain}), with $V_{{\mathrm b} 0}$ slightly higher than $V_{{\mathrm b} \varphi}$. Experimentally, we observe that the growth coefficient and saturation amplitude deviate from the linear predictions and tend to a constant value for $I_{\mathrm b} \gtrsim 150$~$\upmu$A for different values of wave frequency. \section{Conclusions} \label{Sec:Conclusions} We analyzed the propagation of electromagnetic waves and electron beams, as well as their interaction, in an upgraded helix TWT. We presented a theoretical model describing the electromagnetic field through the SWS, and obtained the theoretical dispersion relation, phase and group velocities, and interaction impedance. We showed that the predicted theoretical parameters agree very well with the experimental data. It demonstrates the robustness of the model, as well as the good performance of the experimental device for its operating frequency range. We also studied the nonlinear effects that take place in the TWT due to the beam-wave interaction. For an initially monokinetic beam, the distribution function gets modulated by the wave, presenting two peaks with different energies at the end of the device. Another nonlinear effect occurs when the beam presents an initial velocity slightly higher than the wave phase velocity. In this case, the beam electrons resonantly interact with the wave. They transfer energy and momentum to the wave and its amplitude increases. After saturation, the wave amplitude oscillates along the TWT as the electrons form bunches that move back and forth in the wave potential. We determined the wave growth coefficient and saturation amplitude as a function of the beam current. For sufficiently low values of current, we showed that these parameters increase with the current according to the linear prediction. Nonlinear effects are caused by the repulsive electrostatic force acting on the beam electrons. Such effects become important for high currents, and the growth coefficient and saturation amplitude deviate from the linear predictions, tending to a constant value. 
The upgraded TWT of PIIM Laboratory presents a new configuration for the SWS. Usually, the helix in the SWS is held by three alumina rods. In the upgraded TWT, the helix is held by a dielectric polyimide tape rigidly wrapped all around the helix. This guarantees a more precise helix pitch along the 4-meter device, resulting in more accurate experimental measurements, and the possibility of working with different waveforms. Furthermore, waves propagating in the upgraded TWT present a lower phase velocity. This increases the interaction time for electrons resonantly interacting with the wave, and a variety of nonlinear effects can be observed. All these features will allow us to perform new experiments to simulate wave-particle interactions in plasmas. Among the new experiments, we may cite the use of a pulsed beam \cite{MacorNPCS2007} instead of a continuous one, experiments to analyze the synergy between chaos and self-consistent effects \cite{Tennyson1994, Elskens2003, del-Castillo2002, Doveil2011}, and experiments to study the effects produced by the magnetic fields on the beam dynamics. In this paper, we considered the interaction of waves with a cold beam, and showed that waves saturate due to trapping of the beam electrons in the wave potential. In previous TWTs, experiments with a warm beam revealed that saturation in this case is caused by chaotic diffusion of the electrons in the broad spectrum excited by the beam \cite{Tsunoda1987, Tsunoda1991, Hartmann1995}. In a future work, we will use the upgraded TWT to carry out experiments with warm beams to investigate the predictions of the quasilinear theory \cite{Vedenov1962, Drummond1962, Vedenov1963, Tsunoda1987, Tsunoda1991, Hartmann1995, Elskens2007, Elskens2010, Elskens2010AAP, Besse2011, Elskens2012}. Finally, this TWT may also be used to benchmark numerical models. Electromagnetic PIC (particle-in-cell) codes used for TWT simulations are usually too slow because of the great number of degrees of freedom to be considered. For this reason, PIC codes are not suitable for industrial applications that require faster simulations. As an alternative, a new time-domain code has been developed: DIMOHA \cite{Andre2013, Minenna2018, Minenna2019IEEE} (DIscrete MOdel with Hamiltonian Approach). The new code combines a Hamiltonian approach, which guarantees that conservation laws are respected, with an $N$-body description that drastically reduces the number of degrees of freedom. These characteristics allow DIMOHA to simulate nonlinear effects in TWTs much faster than traditional PIC codes \cite{Minenna2019IEEE}, enabling its use for industrial applications. DIMOHA simulates wave-particle interactions in periodic structures, and it has already been validated against industrial helix and folded waveguide TWTs \cite{Minenna2019IEEE} ($2-15$~cm long) and against the frequency-domain equivalent circuit Pierce model \cite{Minenna2019PhysScr}. The code will be upgraded to simulate long devices used for research in plasma physics, such as the 4-meter device at PIIM Laboratory. The experimental data obtained with the upgraded TWT will be used to validate the numerical results. \begin{acknowledgments} We thank D.~Guyomarc'h, J.-P.~Busso, J.-B.~Faure, V.~Long and J.-F.~Pioche for technical support with the experimental device, and Thales for contributing to the device upgrade. We acknowledge financial support from the scientific agencies: S\~ao Paulo Research Foundation (FAPESP) under Grants No. 2013/01335-6, No. 2011/20794-6, No. 2015/05186-0 and No.
2018/03211-6, Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N{\'\i}vel Superior (CAPES) under Grants No. 88887.307684/2018-00 and No. 88881.143103/2017-01, and Comit\'e Fran\c{c}ais d'\'Evaluation de la Coop\'eration Universitaire et Scientifique avec le Br\'esil (COFECUB) under Grant No. 40273QA-Ph908/18. \end{acknowledgments} \section*{Data Availability} The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Q: Crash on open of Android App with AdMob

I have developed an Android app with libGDX and have added AdMob to it, but when I open the APK on an Android device it crashes, stating that the process has stopped. Here is my code in the Android project.

    public class MainActivity extends AndroidApplication {
        protected AdView adview;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            AndroidApplicationConfiguration cfg = new AndroidApplicationConfiguration();
            cfg.useGL20 = true;

            final TelephonyManager tm = (TelephonyManager) getBaseContext().getSystemService(Context.TELEPHONY_SERVICE);
            String deviceid = tm.getDeviceId();

            RelativeLayout layout = new RelativeLayout(this);
            getWindow().setFlags(WindowManager.LayoutParams.FLAG_FULLSCREEN, WindowManager.LayoutParams.FLAG_FULLSCREEN);
            getWindow().clearFlags(WindowManager.LayoutParams.FLAG_FORCE_NOT_FULLSCREEN);
            setContentView(graphics.getView(), createLayoutParams());

            View gameview = initializeForView(new BalloonBreakout(), false);

            adview = new AdView(this);
            adview.setAdSize(AdSize.BANNER);
            adview.setAdUnitId("ca-app-pub-6258330641042393/6188790266");
            adview.loadAd(new AdRequest.Builder().addTestDevice(deviceid).build());

            RelativeLayout.LayoutParams adparams = new RelativeLayout.LayoutParams(Gdx.graphics.getWidth(), Gdx.graphics.getHeight() / 14);
            adparams.addRule(RelativeLayout.ALIGN_PARENT_BOTTOM);
            adparams.addRule(RelativeLayout.ALIGN_PARENT_RIGHT);

            layout.addView(gameview);
            layout.addView(adview, adparams);
            setContentView(layout);
        }
    }

I'm not sure why it's crashing; any help would be greatly appreciated.

UPDATE: I've got the error log here, but I don't know why I'm getting a null pointer.

    E( 3875) Caused by: java.lang.NullPointerException (AndroidRuntime)
    E( 3875) at com.sevenbit.Balloon_Breakout.MainActivity.onCreate(MainActivity.java:37) (AndroidRuntime)

Line 37 is:

    setContentView(graphics.getView(), createLayoutParams());

However, when I deleted this line, I still got a null pointer at the same place, line 37. Any ideas?

A: Your problem (like all the other libGDX posts I have seen lately) is that libGDX's AndroidApplication#initializeForView calls Activity#setContentView to set its own layout (which it shouldn't). You then call setContentView with your own layout. It crashes presumably because some libGDX code assumes that its layout object has been loaded, but it's not there because you have replaced it with your own.

A: You should post or look at your stack trace. That will tell you which line your code is failing at. Otherwise people will have to guess from looking at your code. The problem may be anywhere.
\section{Introduction} The Apollo-type near-Earth asteroid (4179) Toutatis was originally discovered on 10 February 1934 and remained a lost asteroid until it was once again detected by C. Pollas and colleagues on 4 January 1989 in Caussols, France. From a dynamical viewpoint, the asteroid moves on an approximately 4:1 resonant orbit at a large eccentricity with the Earth and has passed through a close encounter with the Earth every four years since 1992 \citep{Whipple1993,Krivova1994}. Dating back to the decadal years of near-Earth flybys of the asteroid, the radar observations obtained from Arecibo and Goldstone reveal that Toutatis appears to be an irregularly shaped asteroid with two distinct lobes \citep{Ostro1995,Hudson1998,Ostro1999,Ostro2002,Hudson2003}. Various types of ground-based observations indicate that Toutatis is a tumbling, non-principal axis (NPA)-rotating small body \citep{Hudson1995,Ostro1999}. These effects, as observed in Earth-approaching flybys, have also been reported based on optical observations and extensive radar measurements \citep{Spencer1995, Hudson1995,Ostro1999,Takahashi2013}. The first near-Earth flyby for Toutatis occurred in December 1992 at a distance of 0.242 AU, when the asteroid again came into view. Optical observations were gathered from at least 25 sites around the world through an international campaign. The observed rotational light curves of Toutatis appeared to be highly unusual, with a large amplitude and a non-periodic long rotation period. Subsequently, \citet{Spencer1995} reported two major periods of complex rotation of approximately 7.3 and 3.1 days, as estimated from analysis of the data. In addition, they reported that Toutatis was the first asteroid to show strong photometric evidence of complex rotation. However, the authors did not clarify this complex rotation phenomenon. Furthermore, radar observations were performed by Goldstone in California and by Arecibo Observatory during Toutatis' approach in 1992. The delay-Doppler images achieved a spatial resolution of 19 meters in range and 0.15 millimeters per second in radial velocity. \citet{Ostro1995} suggested a rotational period between 4 and 5 days based on these radar data. According to the investigations of \citet{Burns1971,Burns1973}, the damping timescale for the slow non-principal axis rotation of Toutatis exceeds the age of the solar system. Thus, \citet{Ostro1995} noted that the spin state of Toutatis may be primordial. However, recent investigation suggests that YORP effects may slow the spin states of asteroids. Thus, Toutatis' spin state remains a mystery. Based on these high-resolution delay-Doppler radar observations, \citet{Hudson1995} used a least-squares estimation to calculate Toutatis' three-dimensional shape, spin states, and moment-of-inertia ratios. They showed that the dimensions along the three principal axes are 1.92, 2.40 and 4.60 kilometers and that Toutatis rotates in a long-axis mode. The two major periods were found to be 5.41 days for the rotation about the long principal axis and 7.35 days for the long-axis precession about the angular momentum vector. The results derived from the radar data were inconsistent with the solutions presented by \citet{Spencer1995}. Moreover, \citet{Hudson1998} adopted the published optical light curves \citep{Spencer1995} and a radar-derived shape and spin-state model \citep{Hudson1995} to estimate the Hapke parameters of Toutatis. 
The Hapke photometric model was applied, and a $\chi_{2}$ minimization proposed by \citet{Hudson1997} was performed. The synthetic light curves that were generated based on their model provided a good fit to the optical data, with an rms residual of 0.12 mag. They showed that the combination of the optical data and radar observations led to an estimation of the spin-state parameters for Toutatis that was superior to the radar-derived outcomes. The two parameters describing the moment-of-inertia ratios were determined to be 3.22 and 3.02, respectively. Based on the triaxial ellipsoid shape and spin state given by \citet{Ostro1995}, \citet{Kryszczynska1999} presented the results of modeling the light curve variations of this unusual rotating asteroid by numerically integrating Euler's equation in combination with the explicit expression for an asteroid's brightness as a function of Euler angles. They achieved good agreement between the observed and calculated light curves. They emphasized that the light curves of Toutatis were dominated by the precession effect and by the superposition of precession and rotation, which resulted in an unapparent relationship between the rotation period alone and the light curves. This understanding yielded an appropriate explanation for the inconsistency between the rotational period of Toutatis determined from optical data \citep{Spencer1995} and that determined from radar observations \citep{Hudson1995}. During the 1996 near-Earth approach, Toutatis was observed by the Goldstone 8510-MHz radar system. Based on the physical model derived from the observations of the 1992 approach, \cite{Ostro1999} analyzed the radar measurements and refined the estimates of the spin state of Toutatis. The combination of optical and radar data was proven to better predict the orientational sequence displayed in the images captured in 1996. After refinement, the two periods of Toutatis were updated and estimated to be $5.376\pm0.001$ days for the rotation about the long principal axis and $7.420\pm0.005$ days for the uniform precession of the long principal axis about the angular momentum vector. These two parameters yielded moment-of-inertia ratios of $3.22\pm0.01$ and $3.09\pm0.01$. Thus, the orientation at the 2004 approach could be predicted in both inertial and geocentric coordinate systems. \citet{Scheeres2000} determined that mutual gravitational interactions between an asteroid and a planet or another asteroid can play a significant role in shaping the asteroid's spin state. They analyzed the interactions of a sphere with an arbitrary mass and with Toutatis based on the radar-derived shape model. The results thus obtained could partially explain the phenomenon of Toutatis' current unusual rotational state. It was demonstrated that the tumbling spin state of Toutatis might have been caused by near-Earth flybys over its lifetime. This hypothesis enabled the estimation of the mass distribution and moment-of-inertia for Toutatis \citep{Busch2012}, thereby allowing the likely internal structure to be inferred. Using radar observations of five flybys from 1992 to 2008, \citet{Takahashi2013} modeled the rotational dynamics and estimated Toutatis' spin-state parameters using the least-squares method. They calculated the Euler angles, angular velocities, and moment-of-inertia ratios as well as the center-of-mass (COM)-center-of-figure (COF) offset. 
By directly relating the COM-COF offset and the moment-of-inertia ratios to the spherical harmonic coefficients of the first- and second-degree gravity potential, they could determine the driving force of the external torque due to an external spherical body and evaluate the spin state. The terrestrial and solar tidal torques were considered in their dynamical models, and all aforementioned parameters were included in the variable state vector to be estimated in the study. Furthermore, the spin states and uncertainties were propagated to the 2012 flyby epoch. On 13 December 2012, the first space-borne close observation of Toutatis was achieved by the second Chinese lunar probe, Chang'e-2, at a distance of $770\pm120~(3\sigma)$ meters from Toutatis' surface \citep{Huang2013}. Optical images of the asteroid were acquired by one of the onboard engineering cameras during the outbound flyby. Through analysis of over 400 images, \citet{Huang2013} estimated Toutatis' osculating orbit, its dimensions along the major axes, and its orientations. The highest resolution of the images was better than 3 meters. New discoveries were made, including the presence of a giant depression at the large end, a sharply perpendicular silhouette near the neck region, and direct evidence of boulders and regoliths. The geological features suggest that Toutatis may have a rubble-pile structure. The physical length and width were determined to be $4.75\times1.95~\rm{km}\pm~10\%$, respectively, and the direction of the $+z$ axis was calculated to be (234.1$^\circ$, 60.7$^\circ$). They showed that the bifurcated configuration may indicate that Toutatis is of contact binary origin and that it is composed of two major lobes (head and body). In this work, we perform an extensive investigation of the optical images of Toutatis captured by Chang'e-2, and we determine the orientation of the asteroid at the flyby epoch. In combination with radar observations (\citet{Takahashi2013} and references therein), we estimate the rotational parameters of Toutatis. Moreover, the solar and terrestrial tidal torques are considered in the establishment of the rotational dynamics model. The torque due to the misalignment of the center of mass and the origin of the body-fixed frame is evaluated to be insignificant at first order \citep{Hudson2003,Busch2012,Busch2014}. Furthermore, we incorporate the external gravitational tidal effects from the Moon and Jupiter in our dynamical model. Compared with the previous prediction \citep{Takahashi2013}, our results for Toutatis' orientation, derived for Chang'e-2's flyby epoch from both radar data and optical images, demonstrate good consistency with the observational results of the spacecraft \citep{Huang2013,Zou2014}. Our simulations reproduce the trajectory of the long axis in space, with a precession amplitude of approximately $60^{\circ}$. This high amplitude of Toutatis' precession is supportive of its tumbling attitude as observed from Earth. The characteristics of the angular momentum variations is investigated in detail, and the variation induced by the near-Earth flyby in 2004 is estimated to be $0.03\%$. The orientation of its angular momentum in space is found to be described by $\lambda_{H}=180.2^{+0.2^\circ}_{-0.3^\circ}$ and $\beta_{H}=-54.75^{+0.15^\circ}_{-0.10^\circ}$, and therefore, this orientation has remained nearly constant over the past two decades. The rotational periods are estimated from the simulations to be 5.38 and 7.40 days for the rotation and precession, respectively. 
These values are in good agreement with the work of \citet{Ostro1999}. This work is structured as follows: Section 2 presents the observational data, which comprise ground-based measurements and optical images acquired by Chang'e-2. In this section, we also analyze the optical data to derive the orientation of Toutatis at the flyby epoch. In Section 3, we model the rotational dynamics of Toutatis based on Euler's equation. The least-squares and multiple shooting methods are employed to fit the variable state vector and the corresponding results. The simulation results are presented in Section 4. Finally, we conclude by discussing the innovations of our investigation compared with previous works. \section{Observations} As described above, Toutatis progrades on an approximately 4:1 resonant eccentric trajectory with the Earth. Orbital determination and rotational parameters for Toutatis have been documented since the asteroid began to be continually observed in 1992. Since that time, ground-based observations have been performed for its every near-miss of Earth. As is well known, on 13 Dec 2012, Chang'e-2 completed the first successful close flyby of Toutatis and acquired numerous images of this asteroid \citep{Huang2013}. Using the released data from the Minor Planet Center and hundreds of optical observations from the ground-based observational campaign that lasted from July to December of 2012, the orbital determination of Toutatis was precisely achieved within uncertainties on the order of several kilometers, and the orbital parameters at the flyby epoch were calculated to be $a$=2.5336 AU, $e$=0.6301, $i$=0.4466$^\circ$, $\Omega$=124.3991$^\circ$, $\omega$=278.6910$^\circ$ and $M$=6.7634$^\circ$. Hence, the initial orbit can be integrated to calculate the relative positions of Toutatis with respect to the Sun, Earth, Moon and other major planets in the solar system, which are required for computing the external torques from the solar tides, the terrestrial tides and the gravitational tides from other bodies. The positions of the major planets and the Moon are calculated based on the DE405 ephemerides released by JPL \footnotemark[1]. The gravitation of the Sun, the major planets and 67 asteroids in the main belt as well as post-Newtonian effects are considered in the dynamical model to achieve the orbital integration of Toutatis. In addition, Chebyshev polynomial fitting is numerically implemented to obtain the position of the asteroid at any given epoch. \footnotetext[1]{{\citet{Takahashi2013} used the DE430 planetary ephemeris which is more accurate for fitting the observation data of Toutatis. However, the position offset between DE405 and DE430 is simply a few kilometers that will induce systematic errors of approximately 10 parts per million into our torque calculations, which is too small to change the conclusions of this work.}} \subsection{Radar Measurements} Radar measurements of Toutatis acquired by Goldstone and Arecibo from 1992 to 2008 are used to solve for the asteroid's rotational parameters. \citet{Takahashi2013} presented 33 sets of radar observations. Together with the orientation obtained by Chang'e-2 at the flyby epoch (see Section 2.2), we have 33 sets of ground-based observation outcomes, including Euler angles, angular velocities and one space-borne orientation parameter. The observational data are summarized in Table 1. 
The observational errors of the radar data are estimated to be between $3^{\circ}$ and $15^{\circ}$ \citep{Takahashi2013} for the Euler angles and between 2$^{\circ}$ day$^{-1}$ and 10$^{\circ}$ day$^{-1}$ for the components of angular velocity; these errors are taken into account in our fitting. \begin{table*} \caption{Observational data for Toutatis from 1992 to 2012 (\citet{Takahashi2013} and references therein).} \begin{tabular}{cccccccc} \hline Year & Month & Date & Hour & Minute & Second & Euler angles ($^\circ$) & Angular velocity ($^\circ$/day) \\ \hline 1992 & 12 & 2 & 21 & 40 & 0 & (122.2, 86.5, 107.0) & (-35.6, 7.2, -97.0) \\ 1992 & 12 & 2 & 19 & 30 & 0 & (86.3, 81.8, 24.5) & (-16.4, -29.1,-91.9) \\ 1992 & 12 & 4 & 18 & 10 & 0 & (47.8, 60.7, 284) & (29.1, -23.2, -97.8) \\ 1992 & 12 & 5 & 18 & 50 & 0 & (14.6, 39.4, 207.1) & (33.3, 8.2, -92.9) \\ 1992 & 12 & 6 & 17 & 30 & 0 & (331.3, 23.7, 151.6) & (6.6, 34.5, -95.8) \\ 1992 & 12 & 7 & 17 & 20 & 0 & (222.5, 25.4, 143.9) & (12.8, 25.4, -104.1) \\ 1992 & 12 & 8 & 16 & 40 & 0 & (169.8, 45.5, 106.9) & (-31.1 -21.9, -97.7) \\ 1992 & 12 & 9 & 17 & 50 & 0 & (137.3, 71.3, 22.3) & (11.8, -36.9, -94.9) \\ 1992 & 12 & 10 & 17 & 20 & 0 & (103.1, 85.2, 292.6) & (35.8, -8.9, -97.9) \\ 1992 & 12 & 11 & 9 & 40 & 0 & (77, 85.7, 225.5) & (31, 17, -96.3) \\ 1992 & 12 & 12 & 9 & 20 & 0 & (42.8, 70.2, 133.2) & (-1.3, 37, -95.9) \\ 1992 & 12 & 13 & 8 & 10 & 0 & (13.7, 44.4, 51.9) & (-38.3, 17.9, -97.3) \\ 1992 & 12 & 14 & 7 & 50 & 0 & (323.7, 14, 0) & (-70.5, -50.6, -91.1) \\ 1992 & 12 & 15 & 7 & 50 & 0 & (193.2, 24.4, 21.4) & (22.1, -26.6, -96.6) \\ 1992 & 12 & 16 & 7 & 10 & 0 & (165.1, 46.4, 310.6) & (33.4, -3.4, -93.7) \\ 1992 & 12 & 17 & 6 & 49 & 0 & (130.6, 76.1, 234.9) & (12.6, 33.9, -94) \\ 1992 & 12 & 18 & 7 & 9 & 0 & (91.6, 81.6, 142.4) & (-24.3, 29.6, -102) \\ 1996 & 11 & 25 & 19 & 48 & 0 & (130.5, 78.9, 143.2) & (-32, 16.4, -98.2) \\ 1996 & 11 & 26 & 17 & 51 & 0 & (94.2, 88.1, 57.7) & (-30.6, -18.7, -91.5) \\ 1996 & 11 & 27 & 17 & 34 & 0 & (60.4, 81.2, 320.9) & (10.7, -36.8, -94.7) \\ 1996 & 11 & 29 & 15 & 37 & 0 & (349.3, 30, 168) & (23.1, 28.9, -98.3) \\ 1996 & 11 & 30 & 15 & 47 & 0 & (250.3, 14.2, 166.9) & (-18.6, 32.1, -94.9) \\ 1996 & 12 & 1 & 14 & 23 & 0 & (180.4, 37.6, 139.3) & (-38.7, -0.5, -98.1) \\ 1996 & 12 & 2 & 13 & 43 & 0 & (146.7, 64, 64.9) & (-12.6, -34.8, -97.9) \\ 1996 & 12 & 3 & 12 & 20 & 0 & (116.7, 81.4, 340.4) & (24.3, -28.2, -98.1) \\ 2000 & 11 & 4 & 17 & 6 & 0 & (110, 88.5, 30) & (0, -32.5, -98.9) \\ 2000 & 11 & 5 & 18 & 1 & 0 & (70.6, 84, 281) & (34.5, -17.2, -97.9) \\ 2004 & 10 & 7 & 13 & 56 & 0 & (79.19, 85.3, 365.2) & (-2.5, -35.4, -109) \\ 2004 & 10 & 8 & 14 & 4 & 0 & (44.9, 72.5, 263.1) & (32.4, -18.1, -97.9) \\ 2004 & 10 & 9 & 13 & 57 & 0 & (12.8, 47.3, 181.4) & (29.7, 22.8, -98.1) \\ 2004 & 10 & 10 & 13 & 17 & 0 & (327.7, 20.4, 124.1) & (-10.7, 34.7, -97.3) \\ 2008 & 11 & 22 & 10 & 54 & 0 & (119.5, 90.7, 92) & (118.1, 90.4, 93.6) \\ 2008 & 11 & 23 & 10 & 45 & 0 & (86.2, 85, 0.3) & (-0.4, -36.2, -98.9) \\ 2012 & 12 & 13 & 8 & 30 & 0 & (-20.1, 27.6, 42.2) & \\ \hline \end{tabular} \end{table*} \subsection{Observations by Chang'e-2} As mentioned previously, Chang'e-2 captured Toutatis' silhouette via one of the onboard engineering cameras at the time when the asteroid was approaching the Earth in December 2012. The camera has a lens with a 54-mm focal length and a 1024-by-1024-pixel CMOS detector. The field of view of the camera is $7.2^\circ$ by $7.2^\circ$. 
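As a simple consistency check of the imaging geometry, the field of view and detector format quoted above fix the pixel scale at any given range. A minimal sketch (in Python, for illustration only) is:
\begin{verbatim}
import math

def pixel_scale(distance_m, fov_deg=7.2, n_pixels=1024):
    # ground sampling distance = range * (per-pixel angular size),
    # in the small-angle approximation
    return distance_m * math.radians(fov_deg) / n_pixels

print(pixel_scale(67.7e3))   # ~8.3 m per pixel at a range of 67.7 km
\end{verbatim}
At the 67.7~km range of the first panoramic image discussed below, this gives approximately 8.3~m per pixel, consistent with the quoted image resolution.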
The images were acquired during the outbound flyby of Chang'e-2 because of the large Sun-Toutatis-Chang'e-2 phase angle on the inbound route \citep{Huang2013,Zhao2014a}. The imaging of Toutatis lasted approximately 25 minutes. \citet{Huang2013} reported the first panoramic image of the asteroid in the sequence, which was acquired at a distance of 67.7 km from Toutatis at a resolution of 8.30 m, as shown in Figure 1a. \begin{figure*} \includegraphics[scale=0.70]{fig1.eps} \centering \caption{Images of Toutatis. a: First panoramic image captured by Chang'e-2. b: Best-matching attitude of the radar model with the optical image shown in Fig. 1a. c: Illustration of the graphical frame of the camera. d: Combination and comparison of the radar model with the optical results. } \end{figure*} \subsubsection{Attitude matching} The 3D shape model derived from delay-Doppler radar imaging, as presented in Figure 1b, is used to discern the attitude of Toutatis. The latest radar-derived shape model \citep{Busch2012} was constructed based on additional radar measurements performed by Goldstone in 2000 and by Arecibo in 2004 and 2008, and in this work, this model is employed in combination with the optical images from Chang'e-2's flyby to match the spin state of Toutatis. In general, the attitude of a rigid body in space can be determined from its rotations about the three axes of an orthogonal coordinate system. As Figure 2 shows, to obtain the attitude of Toutatis, we define the axes $l_{1}$, $l_{2}$ and $l_{3}$ as follows: the mutually perpendicular axes $l_{1}$ and $l_{2}$ extend through the center of Toutatis' shape and along the directions of the long and short axes in the image, respectively. In addition, $l_{3}$ is perpendicular to the image plane through the intersection of $l_{1}$ and $l_{2}$, and thus, these axes form a right-handed coordinate system. Considering the render and the orientation of the camera's optical axis \citep{Zhao2014a}, the 3D radar-derived shape model of Toutatis can be rotated at an interval of $1^{\circ}$ for each of the three Euler angles about the three principal axes of its body-fixed frame to match its attitude to that shown in the optical images. As shown in Figure 2, we choose three criteria related to the optical image to determine whether the rotated model is consistent with the optical results from Chang'e-2's view direction: (1) the slope of the long axis, represented by the red line in Figure 2a; (2) the ratio of the long axis to the short axis, indicated by the green line; and (3) the obvious topography on the neck area connecting the two major lobes of the asteroid, as shown in Figure 2b and 2c. These features of the optical image can be reproduced by rotating the 3D radar shape model about its three principal axes. Figure 1b shows the best approximation of the attitude of the radar model to that indicated by the optical image shown in Figure 1a. The two images are quantitatively compared in Table 2. \begin{table} \begin{minipage}{90mm} \caption{Comparison of optical and radar image results.} \begin{tabular}{lll} \hline Property & Optical Image & Radar Image \\ \hline Slope & 1.386 & 1.340 \\ Ratio of length to width & 2.52 & 2.08 \\ \hline \label{comp} \end{tabular} \end{minipage} \end{table} Between the optical image and the model, the difference in the slope of $l_{1}$ is not obvious; however, the deviation in the ratio of the length to the width appears to be significant.
\citet{Zou2014} suggested using a render frame with the lighting of the radar model and combining multiple optical images using computer graphics methods, which may yield a likely explanation for the higher value of the length-to-width ratio obtained for the optical images acquired by Chang'e-2. Using the optical images from Chang'e-2, \citet{Bu2015} rotated the radar-derived model and retrieved an orientation with respect to the graphical frame described by direction cosine angles of $(126.13\pm0.29^{\circ}, 122.98\pm0.21^{\circ}, 126.63\pm0.46^{\circ})$. In accordance with the definition of the cosine angles given by \citet{Bu2015}, we calculate this set of cosine angles to be $(130.3\pm1.0^{\circ}, 134.78\pm1.0^{\circ}, 106.95\pm1.0^{\circ})$. The two sets of results are quite similar to each other. However, we should note that the results presented here neglect the attitude of the camera in the inertial frame. \begin{figure*} \includegraphics[scale=0.33]{fig2.eps} \centering \caption{Properties of an optical image of Toutatis. a: The red line and green line represent the lengths of the image of the asteroid along the $l_{1}$ and $l_{2}$ directions, respectively; the ratio between these lengths is one of the properties that is used to characterize the image. The blue line represents the horizontal line, which can be used to determine the slope of $l_{1}$. b and c: The area within the green circle represents an obvious characteristic topological feature that can be used to derive information concerning the rotation about $l_{1}$. } \end{figure*} \subsubsection{Euler angles} Because of the fixed direction of the camera's optical axis, Chang'e-2 maintained a nearly constant attitude throughout the shooting process. Figure 3 depicts the spacecraft's body-fixed frame, where $\vec{l}$ is the direction of the camera's optical axis and the corresponding unit vector in the spacecraft's body-fixed frame is $(-0.06976, 0.9976, 0)$. The location relationship represents a transformation from Chang'e-2's body-fixed frame to the graphical frame shown in Figure 1c. In combination with the attitude information of the spacecraft, we can establish a relationship between the graphical frame and the inertial coordinate system. As Figure 1c shows, the left portion of Toutatis is blocked by the solar panel on the spacecraft. The unit vectors of the x and y axes, which represent the horizontal and vertical axes in the graphical frame, are determined to be $(0.824,0.540,-0.173)$ and $(0.344,-0.233,0.910)$, respectively, in the J2000 equatorial coordinate system \citep{Huang2013b}. \begin{figure*} \includegraphics[scale=0.50]{fig3.eps} \centering \caption{a. Graphical frame. b. Spacecraft's body-fixed frame.} \end{figure*} By merging the rotated radar-derived shape model and the optical image, Figure 1d illustrates the similarities and differences between the two models. In combination with the conversion relationship among the asteroid's body-fixed frame, the graphical frame, and the inertial coordinate system, we can obtain the matrix describing the transformation from the J2000 ecliptic coordinate system to the asteroid's body-fixed system expressed in terms of 3-1-3 Euler angles as below: \begin{equation} \vec{R}=R_z(42.2^\circ)R_x(27.6^\circ)R_z(-20.1^\circ)\vec{r}, \end{equation} where $R_x$ and $R_z$ are the standard rotation matrices for right-handed rotations around the $X$ and $Z$ axes, respectively. 
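For reference, a short Python sketch of the composition in equation (1) is given below; it assumes the standard coordinate-frame (passive) rotation matrices, and the third row of the product gives the direction of the body $+z$ axis in the J2000 ecliptic frame, which comes out near $(249.9^\circ, 62.4^\circ)$ and is consistent with the orientation estimate quoted in the next paragraph.
\begin{verbatim}
import numpy as np

def Rz(theta_deg):
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[ c,   s,  0.0],
                     [-s,   c,  0.0],
                     [0.0, 0.0, 1.0]])

def Rx(theta_deg):
    c, s = np.cos(np.radians(theta_deg)), np.sin(np.radians(theta_deg))
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,  c,   s ],
                     [0.0, -s,   c ]])

# Eq. (1): J2000 ecliptic frame -> body-fixed frame at the flyby epoch
M = Rz(42.2) @ Rx(27.6) @ Rz(-20.1)

# Body +z axis expressed in the ecliptic frame (third row of M)
z_body = M[2]
lon = np.degrees(np.arctan2(z_body[1], z_body[0])) % 360.0
lat = np.degrees(np.arcsin(z_body[2]))
print(lon, lat)   # ~249.9, ~62.4 degrees
\end{verbatim}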
The coordinate transformation and the corresponding Euler angles will be briefly introduced in the next section. The orientation of the principal axis is then obtained with respect to the attitude of the radar model, which is estimated to be ($249.87 \pm 1^\circ, 62.43 \pm1^\circ$) in the J2000 ecliptic coordinate system \citep{Zhao2014b}. Corresponding errors arise from the matching process, the uncertainties of the radar-derived shape model and the attitude uncertainties of the spacecraft \citep{Huang2013}. \section{Numerical Models} \subsection{Dynamical Model} The rotation matrix shown in equation ($1$), which is composed of the three 3-1-3 Euler angles $\vec\alpha=(\alpha,\beta,\gamma)$, maps the information for conversion from the inertial coordinate system to the body-fixed frame, as shown in Figure 4. The body-fixed frame is generated by applying the following rotation sequence of the yaw, pitch and roll angles: a rotation of $\alpha$ around the z axis, then a rotation of $\beta$ around the x axis, and finally, a rotation of $\gamma$ around the z axis from the inertial frame. \begin{figure*} \includegraphics[scale=0.48]{fig4.eps} \centering \caption{Coordinate system transformation relationship in terms of 3-1-3 Euler angles. Subscripts i and b indicate the inertial coordinate system and the body-fixed frame, respectively. The three angles $\alpha, \beta,$ and $\gamma $ form a 3-1-3 set of Euler angles. } \end{figure*} The Euler angles describe the orientation of a rigid body in space at a specific time, and their variations represent spin states. Let the vector $\vec{\omega}$ define the instantaneous rotational velocity in the body-fixed frame. Then, the set of kinematic differential equations for the 3-1-3 Euler angles is as follows: \begin{eqnarray} \dot{\vec{\alpha}}=\frac{1}{\sin\beta} \left(\begin{array}{ccc} \sin\gamma & \cos\gamma & 0 \\ \cos\gamma\sin\beta & -\sin\gamma\sin\beta & 0\\ -\sin\gamma\cos\beta & -\cos\gamma\cos\beta & \sin\beta \end{array}\right)\vec{\omega} \nonumber \\ =[B(\vec{\alpha})]\vec{\omega} ~~. \end{eqnarray} The time derivative of the Euler angles encounters a singularity at $\beta=0^{\circ}$ or $\beta=180^{\circ}$, where the first and third rotation axes coincide and $\alpha$ and $\gamma$ describe rotations in the same plane. As derived from the Euler equation, Euler's rotational equation of motion describes the time derivative of the angular velocities: \begin{equation} [I]\dot{\vec{\omega}}=-[\tilde{\omega}][I]\vec{\omega}+\vec{L} ~~, \end{equation} where the moment-of-inertia matrix $[I]$ is constant, symmetric and given in the body-fixed frame. It has dimensions of $[3\times3]$ and can be defined in terms of six quantities. $\vec{L}$ represents the external torques acting on the dynamical system, and the matrix $[\tilde\omega]$ has the following form: \begin{equation} [\tilde\omega]= \left(\begin{array}{ccc} 0 & -\omega_{3} & \omega_{2} \\ \omega_{3} & 0 & -\omega_{1}\\ -\omega_{2} & \omega_{1} & 0 \end{array}\right) ~~. \end{equation} Then, the time derivative of the angular velocities is expressed as follows: \begin{equation} \dot{\vec{\omega}}=[I]^{-1}(-[\tilde{\omega}][I]\vec{\omega}+\vec{L}) ~~. \end{equation} To calculate the external torque exerted by a spherical body, we assume that $\vec{r}'$ is the vector of an infinitesimal mass element $(dm)$ of the asteroid relative to the origin. Figure 5 presents a schematic diagram of the effect exerted by a spherical perturbing body on an irregularly shaped rigid body.
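Before turning to the derivation of the external torques, the following minimal Python sketch shows how equations (2) and (5) would be advanced numerically, together with the second-degree tidal torque whose explicit form, equation (20), is obtained at the end of this subsection. All numerical values are placeholders: the inertia matrix is a diagonal matrix normalized to the long-axis moment (assuming the body $z$ axis is the long axis and using the moment-of-inertia ratios quoted in the Introduction), angles are in radians, and the singular configurations $\beta=0^{\circ}$ or $180^{\circ}$ are not handled.
\begin{verbatim}
import numpy as np

def euler_angle_rates(angles, omega):
    # Eq. (2): time derivatives of the 3-1-3 Euler angles (alpha, beta, gamma)
    _, beta, gamma = angles
    sb, cb = np.sin(beta), np.cos(beta)
    sg, cg = np.sin(gamma), np.cos(gamma)
    B = np.array([[ sg,       cg,      0.0],
                  [ cg * sb, -sg * sb, 0.0],
                  [-sg * cb, -cg * cb,  sb]]) / sb
    return B @ omega

def omega_rates(omega, I, L_ext):
    # Eq. (5): domega/dt = I^(-1) * (-omega x (I omega) + L)
    return np.linalg.solve(I, -np.cross(omega, I @ omega) + L_ext)

def gravity_gradient_torque(GM_s, r_vec, I):
    # Eq. (20): L2 = 3 G M_s / r^5 * (r x (I r)); r_vec is the position of
    # the perturbing body expressed in the body-fixed frame.
    r = np.linalg.norm(r_vec)
    return 3.0 * GM_s / r**5 * np.cross(r_vec, I @ r_vec)

def rhs(t, state, I, L_ext):
    # state = (alpha, beta, gamma, omega1, omega2, omega3)
    angles, omega = state[:3], state[3:]
    return np.concatenate([euler_angle_rates(angles, omega),
                           omega_rates(omega, I, L_ext)])

# Placeholder principal moments (dimensionless ratios, long axis along z)
I_example = np.diag([3.22, 3.09, 1.0])
\end{verbatim}
In a full integration, \texttt{L\_ext} would be the sum of such torques from the Sun, the Earth, the Moon and Jupiter, each evaluated with its own position vector rotated into the body-fixed frame at every time step.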
According to the definition of angular momentum $\vec{H}$, we have \begin{equation} \frac{\rm{d}\vec{H}}{\rm{d}t}=\frac{\rm{d}}{\rm{d}t}\int dm~\vec{r}'\times\dot{\vec{r}'} =\int \vec{r}'\times(d\vec{F}_{G}/dm-\ddot{\vec{B}})dm ~~, \end{equation} where $\ddot{\vec{B}}$ is the acceleration of the origin in the inertial coordinate system and $d\vec{F}_{G}$ is the gravitational attraction experienced by an arbitrary mass element. When the center of mass of the rigid body is chosen as the origin $\bar{O}$, we have $\int dm \vec{r}'\times\ddot{\vec{B}}=0$. Therefore, the torques exerted on the rigid body are written in the form \citep{Schaub2009} \begin{equation} \vec{L}_{c}=\int_{M}\vec{r}'\times d\vec{F}_{G} ~~. \end{equation} Otherwise, if the vector of $(dm)$ relative to the center of mass is $\vec{r}_{c}$, equation (6) has the following form: \begin{eqnarray} \frac{\rm{d}\vec{H}}{\rm{d}t} =\int\vec{r}'\times(d\vec{F}_{G}/dm-\ddot{\vec{B}})dm \nonumber \\ =\int(\vec{r}_{c}-\vec{r}_{cm})\times(d\vec{F}_{G}/dm-\ddot{\vec{B}})dm ~~, \end{eqnarray} where $\vec{r}_{cm}=(x_{cm},y_{cm},z_{cm})^T$ is the vector of the center of figure with respect to the center of mass (the COM-COF offset). Thus, equation (6) can be expressed as follows: \begin{equation} \vec{L}=\int_{M}\vec{r}'\times d\vec{F}_{G}+M\vec{r}_{cm}\times\ddot{\vec{B}} ~~, \end{equation} where $M$ is the total mass of the asteroid. Let $\vec{r}$ be the position vector of the spherical body relative to the asteroid. $\vec{R}=\vec{r}'-\vec{r}$ denotes the position of a body element measured from the spherical body. Consequently, $d\vec{F}_{G}$ has the following form: \begin{equation} d\vec{F}_{G}=-\frac{GM_{s}}{R^{3}}\vec{R}dm ~~, \end{equation} where $M_s$ is the mass of the external spherical body. Hence, the integral in equation (9) can be expressed as \begin{equation} \vec{L}=-GM_{s} \int_{M}\vec{r}'\times \frac{\vec{R}}{R^{3}}dm +M\vec{r}_{cm}\times\ddot{\vec{B}}~~. \end{equation} \begin{figure*} \includegraphics[scale=0.40]{fig5.eps} \centering \caption{External torque of a spherical body on an irregularly shaped rigid body. } \end{figure*} As Figure 5 shows, under the assumption that $\vec{r}\gg\vec{r}'$, the first-order expansion of $\frac{\vec{R}}{R^{3}}$ has the following form: \begin{equation} \frac{\vec{R}}{R^{3}}=-[1+\frac{3(\vec{r}\cdot\vec{r}')}{r^{2}}] \frac{\vec{r}}{r^{3}}+\frac{\vec{r}'}{r^{3}} ~~. \end{equation} Substituting (12) into equation (9) allows the moment from the perturbing body to be divided into two components: \begin{equation} \vec{L}=\vec{L}_{1}+\vec{L}_{2} ~~, \end{equation} where \begin{equation} \vec{L}_{1}=GM_{s}\int_{M}\vec{r}'\times\frac{\vec{r}}{r^{3}}dm+M\vec{r}_{cm}\times\ddot{\vec{B}} ~~, \end{equation} \begin{equation} \vec{L}_{2}=\frac{3GM_{s}}{r^{5}}\int_{M} (\vec{r}\cdot\vec{r}')(\vec{r}'\times\vec{r})dm ~~. \end{equation} If the origin $\bar{O}$ is the center of mass of the irregularly shaped rigid body, then we obviously have $\vec{r}_{cm}\equiv0$ and $\vec{L}_{1}\equiv0$. Otherwise, truncated at the lowest order, we have \begin{eqnarray} \ddot{\vec{B}}=\frac{GM_{s}}{M}\int_{M}\frac{\vec{r}}{r^{3}}dm ~~. \end{eqnarray} If perturbing bodies other than the Sun are considered, then equation (16) should incorporate their contributions. Substituting equation (16) into equation (14), we obtain \begin{eqnarray} \vec{L}_{1}=GM_{s}\int_{M}(\vec{r}'+\vec{r}_{cm}) \times\frac{\vec{r}}{r^{3}}dm \nonumber \\ =GM_{s}\int_{M}\vec{r}_{c}\times\frac{\vec{r}}{r^{3}}dm=0 ~~. 
\end{eqnarray} Essentially, the torques imposed by the COM-COF offset are far too small to contribute in the first-order approximation. Furthermore, the change in angular momentum that results from $L_{1}$ is not an intrinsic feature of the rigid body; rather, it is determined solely by the selection of the base point when solving for the rotational dynamics. \citet{Takahashi2013} expressed the second-degree potential as follows: \begin{equation} U_{2}=\frac{G}{2r^{3}}I_{T}-\frac{3G}{2r^{5}}\vec{r}[I]\vec{r} ~~, \end{equation} where $I_{T}$ is the trace of $[I]$. The partial derivative of the potential can be derived as follows: \begin{equation} \frac{\partial{U_{2}}}{\partial\vec{r}} =-\frac{3G}{2r^{5}}I_{T}\vec{r}+\frac{15G}{2r^{7}}(\vec{r}[I]\vec{r})\vec{r} -\frac{3G}{r^{5}}[I]\vec{r} ~~; \end{equation} therefore, the deduced moment can be expressed in the following form: \begin{equation} L_{2}=\frac{3GM_{s}}{r^{5}}[\tilde{r}][I]\vec{r} ~~. \end{equation} Our second-degree expansion of the moment given in equation (15) is consistent with the result represented by equation (20), which indicates that these tidal torques are caused by the oblateness of the rigid body. \subsection{Numerical Method} The least-squares and multiple shooting methods are used to fit the observational data and to simulate the propagation of the rotational parameters. The dynamical equation of the state vector, which is composed of three Euler angles and three angular velocities, is written as follows: \begin{equation} \dot{X}=F(X,t) ~~, \end{equation} where $ X=(\alpha,\beta,\gamma,\omega_{1},\omega_{2},\omega_{3})^T $; $\alpha$, $\beta$, and $\gamma$ are the 3-1-3 Euler angles, and $\omega_{1}$, $\omega_{2}$, and $\omega_{3}$ are the three components of angular velocity. The previous results for $I_{xx}$, $I_{yy}$, $I_{zz}$, $I_{xy}$, $I_{xz}$, and $I_{yz}$ are used in our calculations \citep{Takahashi2013}. $F$ is the function that represents the time derivative of $X$ and the dynamical model. In the first-order approximation, the dynamical model has the following form: \begin{equation} \dot{X}=A(X,t)X(t) ~~, \end{equation} where $A(X,t)$ is the dynamical matrix. The transition matrix $\Phi$ is defined as follows: \begin{equation} \Phi(t_{1},t_{2})=\frac{\partial x(t_{2})}{\partial x(t_{1})} ~~. \end{equation} Based on the nominal orbit, we establish the relationship between the state vectors at two specific times, $t_{1}$ and $t_{2}$, as follows: \begin{equation} X(t_{2})=\Phi(t_{1},t_{2})X(t_{1}) ~~. \end{equation} Then, the transition matrix can be obtained by integrating the differential equation \begin{equation} \dot\Phi(t_{0},t)=A(t,X)\Phi(t_{0},t) ~~. \end{equation} The observational equation for the measurements $Y$ is \begin{equation} Y_{i}=Z(X_{i},t_{i})+\epsilon_{i} ~~, \end{equation} where $Z$ represents the observational model, the subscript $i$ indicates the sequence of the observational data and $\epsilon$ represents the observational uncertainties. In the first-order approximation, the partial derivatives of the observational quantities with respect to the variables form the relationships between them, and we have \begin{equation} Y_{i}=\frac{\partial{Z}}{\partial{X}}|_{x=x_{i}}X_{i}+\epsilon_{i} =\frac{\partial{Z}}{\partial{X}}|_{x=x_{i}}\Phi(t_{0},t_{i})X(t_{0})+\epsilon_{i}~~, \end{equation} where $t_{i}$ is the observation epoch of the data set. The least-squares method adopted herein is similar to that of \citet{Takahashi2013}.
The cost function is defined as follows: \begin{eqnarray} J= \frac{1}{2}\sum_{i}\left(Y_{i}-H_{i}X(t_{0})\right)^{T}W_{i}\left(Y_{i}-H_{i}X(t_{0})\right) \nonumber \\ +\frac{1}{2}\left(\bar{X}(t_{0})-X(t_{0})\right)^{T}\bar{P}^{-1} \left(\bar{X}(t_{0})-X(t_{0})\right) ~~, \end{eqnarray} where $H_{i}=\frac{\partial{Z}}{\partial{X}}|_{x=x_{i}}\Phi(t_{0},t_{i})$, $W$ is the weighting matrix, and $P$ is the covariance matrix. The bars over $X$ and $P$ represent the a priori values deduced through estimation or from previous results. The normal equation that computes the correction to the variables can then be expressed as follows: \begin{equation} \Delta X(t_{0})= \left(\sum_{i} H_{i}^{T}W_{i}H_{i}+\bar{P}^{-1}\right)^{-1}\left(\sum_{i} H_{i}^{T}W_{i}Y_{i}+\bar{P}^{-1}\bar{X}(t_{0})\right) ~~. \end{equation} The RKF78 integrator can be used to integrate the rotational equations. The initial step size is set to approximately $10^{-9~\circ}$ for the Euler angles and $10^{-8~\circ}/day$ for the rotational velocities. In the integration, an adaptive step size is used to numerically solve the equations. The truncation errors for the Euler angles and rotational velocities are set to $10^{-8~\circ}$ and $10^{-7~\circ}/day$, respectively. \section{Results} To better understand the rotational dynamics of Toutatis, we performed a large number of numerical simulations based on Chang'e-2's observations and ground-based radar measurements, using our dynamical models described above. In the following, we will present the major outcomes for the spin states, rotational period and variation in angular momentum of Toutatis. In this work, the initial variables for Toutatis' spin state that were used in our numerical simulations were adopted from \citet{Takahashi2013} for the epoch $t_{0}$ (17:49:47 UTC on 9 Nov 1992). The orientation of the angular momentum and the rotational periods were also calculated in the study. In our calculation, we balanced the weights of the optical data based on the uncertainties of the observations. Consequently, we derived new solutions for the spin state of Toutatis, which are summarized in Table 3. \begin{table*} \caption{Spin-state parameters of Toutatis derived from our numerical simulations.} \begin{tabular}{ll} \hline Property & Value\\ \hline Simulated solutions at $t_{0}$ & $\alpha=147.5^\circ,\beta=63.9^\circ,\gamma=241.5^\circ$ \\ &$\omega_{1}=14.5^\circ/day,\omega_{2}=33.7^\circ/day, \omega_{3}=-98.5^\circ/day$\\ Results at flyby epoch & $\alpha=-3.65^\circ,\beta=43.62^\circ,\gamma=24.7^\circ$ \\ Orientation of angular momentum & $\lambda_{H}=180.2^{+0.2^\circ}_{-0.3^\circ}, \beta_{H}=-54.75^{+0.15^\circ}_{-0.10^\circ}$\\ Rotational/precession period & $5.38$ days, $7.40$ days\\ \hline \end{tabular} \end{table*} The residuals of the Euler angles (34 sets) and angular velocity (33 sets) were normalized with respect to the maximum radar observational errors ($15^{\circ}$ for the Euler angles and $10^{\circ}$/day for the angular velocities) and are shown in Figure 6. Because the previous prediction of the orientation at the Chang'e-2 flyby epoch based on the radar-derived results differs from that observed by Chang'e-2, the use of the optical data might have degraded the convergence of the simulation algorithm. Therefore, the magnitude of the residuals is larger than that observed in the results from the radar data \citep{Takahashi2013}. The residual errors in the simulations were normalized with respect to the observational uncertainties.
Because of the inconsistency of the observational data, the magnitudes of the residuals are slightly higher than those of the previous results. However, all deviations lie within the $3\sigma$ region. The largest bias is found in the roll angle, which exhibits a remarkable difference between the prediction obtained from the radar measurements and the authentic spin state of Toutatis that is directly indicated by Chang'e-2's observations at the flyby epoch. \begin{figure*} \includegraphics[scale=0.65]{fig6.eps} \centering \caption{Residuals of rotational parameters with respect to the maximum radar observational errors ( $15^{\circ}$ for the Euler angles and $10^{\circ}$/day for the angular velocities). Left panels: residuals of the Euler angles. Right panels: residuals of the angular velocity components.} \end{figure*} \subsection{Spin States} According to the numerical results derived from radar observations collected before 2008 \citep{Busch2012}, we considered a render effect for Toutatis and generated a predicted imaging outcome prior to Chang'e-2's flyby, as shown in Figure 7a \citep{Busch2012,Zhao2014a}. In addition, based on the images acquired by Chang'e-2, we corrected the attitude of Toutatis by rotating the radar-derived shape model (see Section 2.2.1) to search for a good match with the Chang'e-2 images acquired at the flyby epoch (Fig. 7b), which provide the only space-borne optical data regarding Toutatis' orientation. Furthermore, the present simulations yielded another solution for Toutatis' attitude during the near-Earth flyby in 2012. Figure 7c shows the outcomes derived from our rotational model using space- and ground-based observations. In comparison with the results obtained from the optical images (Fig. 7b), the radar-derived results (Fig. 7a) exhibit a dramatic deviation in the roll angle; hence, these results yield a different profile of the asteroid. The simulation results derived from our dynamical model (Fig. 7c) differ from those of Figure 7b with a pitch angle bias of within $20^\circ$. Thus, we may safely conclude that our outcomes represent a good improvement in the understanding of Toutatis' spin state. \begin{figure*} \includegraphics[scale=0.35]{fig7.eps} \centering \caption{Comparison of Toutatis' orientation. a: The left panel shows the results derived from \citet{Takahashi2013}'s work, with an uncertainty of dozens of degrees. b: The middle panel shows the results calculated from optical images acquired by Chang'e-2 and by rotating the radar shape model. c: The right panel presents the outcomes derived from our rotational model using space- and ground-based observations, which are close to those obtained from the optical images acquired by Chang'e-2.} \end{figure*} The orientation of the long axis in the inertial frame likely reflects the precession of Toutatis. Based on the dynamical model of rotation, we calculated the variation in the direction of the long axis. Figure 8 shows the trajectories of the long axis with respect to the J2000 ecliptic coordinate system in a unit sphere over the past two decades. The motions of the long axis are projected onto the X-Y, X-Z and Y-Z planes (see Figs. 8a, 8b and 8c, respectively). The figure reveals that the long-axis motion of the asteroid has remained ellipsoidal in the X-Y and Y-Z planes, whereas it has rectilinearly precessed in the X-Z plane. All curves lie outside the ecliptic plane, implying that the small lobe of Toutatis is always located above the large end from a viewpoint close to the ecliptic. 
Moreover, the orientation of the center axis of precession of Toutatis can be approximately determined from Figure 8, and the derived spherical coordinates can be estimated to be $(-0.2^\circ,54.6^\circ)$ in the ecliptic coordinate system. As a result of Toutatis' clockwise rotation and precession, the center axis of precession points nearly along the opposite direction to the angular momentum (see Section 4.3). The amplitude of the precession is approximately $60^{\circ}$, which may shed light on the significantly different attitudes of the asteroid that have been observed from Earth. \begin{figure*} \includegraphics[scale=0.40]{fig8.eps} \centering \caption{Trajectories in the J2000 ecliptic coordinate system of the long axis of Toutatis in a unit sphere. Panels a, b, and c show the trajectories in the X-Y, X-Z, and Y-Z planes, respectively, and panel d shows the motion of the long axis in 3D.} \end{figure*} \subsection{Angular Momentum} Considering the radar-derived shape model and the components of the inertial matrix inferred by \citet{Takahashi2013}, we will now explore the angular momentum of Toutatis induced by various external gravitational torques. Figure 9 shows the variations in the external gravitational torques acting on the spin state of Toutatis from 1992 to 2012. The solar torque is on the order of $10^{-10}-10^{-7}$, indicating that its value is 2-3 orders higher at the perigee than at the apogee. Its periodic variation is clearly associated with Toutatis' orbital period. The variation tendencies of the gravitational torques arising from the Earth and Moon are similar, as shown by the red and blue curves, respectively. There is a $10^{-2}$ difference in the orders of these torques because of the magnitudes of the masses of the bodies from which they arise. The periods of both torques are consistent with that of the black curve because of Toutatis' resonance orbit with the Earth. At present, Toutatis is also in a 3:1 mean motion resonance orbit with Jupiter \citep{Whipple1993}; thus, one entire period of the torque induced by Jupiter is displayed by the green curve \citep{Busch2014}. \begin{figure*} \includegraphics[scale=0.55]{fig9.eps} \centering \caption{Variations in external gravitational torques. The black, red, blue, and green curves represent the torques arising from the Sun, the Earth, the Moon, and Jupiter, respectively. } \end{figure*} Based on the rotational dynamical equation and the integrated orbit, the overall influence of these external torques on the variation in the magnitude of Toutatis' rotational angular momentum from 1992 to 2012 was normalized with respect to the initial magnitude $H_{0}$ \citep{Takahashi2013}, as shown in Figure 10a. The terrestrial tidal torque (see the red curve in Fig. 10b) causes a considerable change in angular momentum when the asteroid approaches Earth at the perihelion or during the Earth flyby that occurs every four years. The most significant change, with a variation in angular momentum magnitude on the order of 0.03\%, occurred in 2004 as a result of Toutatis passing the Earth within 4.02 lunar distances. Similarly, the tendency of the effect of the lunar torque is consistent with that of the terrestrial torque, as shown in Figure 10c. The solar tides always have a predominant influence on the rotational variation. However, the terrestrial tides also play an important role in the variation in angular momentum during Toutatis' regular nearby visits to Earth. 
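The relative importance of the perturbers can be anticipated from the $GM_{s}/r^{3}$ scaling of the torque in equation (20). The short sketch below, with approximate values only, compares the Sun near Toutatis' perihelion with the Earth at the 4.02-lunar-distance encounter of 2004; the two scale factors become comparable for the brief duration of the flyby, which is why the encounter leaves a visible signature even though the solar torque dominates the cumulative evolution.
\begin{verbatim}
GM_SUN   = 1.327e20   # m^3 s^-2
GM_EARTH = 3.986e14   # m^3 s^-2
AU       = 1.496e11   # m
LD       = 3.844e8    # mean Earth-Moon distance, m

def tidal_scale(GM, r):
    # GM / r^3, the scale factor of the torque in Eq. (20)
    return GM / r**3

# Sun near perihelion, q = a(1 - e) ~ 0.94 AU with the elements of Section 2
print(tidal_scale(GM_SUN, 0.94 * AU))
# Earth at the closest 2004 approach, ~4.02 lunar distances
print(tidal_scale(GM_EARTH, 4.02 * LD))
\end{verbatim}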
Figure 10d shows the influence on the angular momentum exerted by Jupiter. The order of magnitude of this effect slightly changed after the 2004 near-Earth flyby, and the amplitude continually remains lower than that of the terrestrial torque. \begin{figure*} \includegraphics[scale=0.45]{fig10.eps} \centering \caption{Variation in the angular momentum of Toutatis from 1992 to 2012.} \end{figure*} As our simulation results indicate, the angular momentum orientation of Toutatis is determined to be described by ($\lambda_{H}=180.2^{+0.2^\circ}_{-0.3^\circ}$ and $\beta_{H}=-54.75^{+0.15^\circ}_{-0.10^\circ}$) and has remained nearly unchanged in space over the past two decades. Figure 11 shows the variations in Toutatis' angular momentum orientation from 1992 to 2012 in the J2000 ecliptic frame. The amplitude of this change is shown to be less than one degree in both longitude and latitude. Jumps in the angular momentum orientation occur at the perihelion of each orbit. A small change in behavior is evident in Figure 11b as a result of the 2004 near-Earth flyby, consistent with Figure 10b. The variation with solar distance that is apparent in Figure 11c and 11d indicates that the solar and terrestrial torques predominantly affect the rotational motion of the asteroid. The misalignment of the curves in Figure 11d is also a result of the 2004 near-Earth flyby. Figure 11e and 11f show the angular momentum orientations of 33 sets of radar observations. Compared with the numerical results, the observation data fall within a reasonable error range, with the exception of a few points at large bias. \begin{figure*} \includegraphics[scale=0.60]{fig11.eps} \centering \caption{Variations in Toutatis' angular momentum orientation from 1992 to 2012. Panels a and b show the variations in longitude and latitude versus time. Panels c and d show the change in longitude and latitude with the distance of Toutatis from the Sun. Panels e and f show the corresponding results from radar observations.} \end{figure*} \subsection{Rotational Period} The motions of the short or middle axis reflect the status of the rotation about the long axis, whereas the long axis' motion represents precession. To calculate the two periods associated with the spin states of Toutatis, we determined the latitudinal variations of the asteroid's long and middle axes in the J2000 ecliptic frame, as shown in Figure 12. We applied Fourier transform to analyze the periods of the two oscillation parameters and found that they are 5.38 days for the rotation about the principal axis and 7.40 days for the precession of the principal axis. These results are in good agreement with the previous results \citep{Ostro1999}. Let $\beta_{X}$ and $\beta_{Z}$ indicate the latitudes of the asteroid's long and middle axes, respectively, in the J2000 ecliptic coordinate system. Figures 12 and 13 show the latitudinal variations of these axes during the 1992 and 1996 flybys, respectively. Additionally, the numerical results (represented by dotted lines in Figs. 12 and 13) that were calculated from our dynamical model are found to be in good agreement with the radar observations within the error bars (marked by stars) \citep{Takahashi2013}, as listed in Table 4. Hence, we may conclude that the orientation parameters of Toutatis obtained from our investigation are very reliable. This evidence provides further confirmation that our proposed rotational model can be used to correctly evaluate the spin status of Toutatis or other asteroids. 
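For readers who wish to repeat this type of period analysis, the short Python sketch below estimates the dominant period of a uniformly sampled latitude series with a discrete Fourier transform. It is a minimal illustration rather than the actual pipeline used here; the sampling interval and the input file are placeholders.
\begin{verbatim}
import numpy as np

def dominant_period(signal, dt_days):
    # period (in days) of the strongest spectral peak of a uniformly sampled series
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                        # remove the constant offset
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(sig.size, d=dt_days)  # cycles per day
    k = np.argmax(spec[1:]) + 1                   # skip the zero-frequency bin
    return 1.0 / freqs[k]

# hypothetical input: latitude of one body axis sampled every 0.05 day
beta = np.loadtxt("beta_axis.txt")
print(dominant_period(beta, dt_days=0.05))
# a peak near 5.38 d or 7.40 d is expected, depending on which axis is used
\end{verbatim}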
\begin{figure*} \includegraphics[scale=0.50]{fig12.eps} \centering \caption{Latitudinal variations of Toutatis' long axis $\beta_{z}$ and middle axis $\beta_{x}$ in the J2000 inertial coordinate system in 1992.} \end{figure*}
\begin{figure*} \includegraphics[scale=0.50]{fig13.eps} \centering \caption{Latitudinal variations of Toutatis' long axis $\beta_{z}$ and middle axis $\beta_{x}$ in the J2000 inertial coordinate system during the near-Earth flyby in 1996.} \end{figure*}
\section{Conclusions and Discussions}
In this work, we apply the observations collected during Chang'e-2's outbound flyby to model the rotational dynamics and determine the spin state of Toutatis. Based on the flyby images, we utilize the radar-derived shape model to calculate Toutatis' orientation at the flyby epoch; in this way, the 3-1-3 Euler angles are estimated to be $-20.1^\circ\pm1^\circ$, $27.6^\circ\pm1^\circ$, and $42.2^\circ\pm1^\circ$. Consequently, our results have greatly improved the estimation of the orientation parameters of Toutatis with respect to the previous predictions.
In combination with ground-based observations, we investigated the evolution of the spin parameters using numerical simulations. In addition to the solar and terrestrial torques, the tidal effects arising from the Moon and Jupiter were extensively considered in our dynamical model. The magnitude and influence of these gravitational torques were analyzed in this work. The solar tide appears to always be the dominant torque acting on the angular momentum of Toutatis. Furthermore, the contribution to the external gravitational torque due to the COM-COF offset appears to be negligible in the first-order approximation. We also found that the closest near-Earth flyby, at 4.02 lunar distances, resulted in a 0.03\% change in the magnitude of the angular momentum of Toutatis. The dynamical influence exerted by Saturn on the angular momentum was also assessed in further simulations and found to be approximately two orders of magnitude smaller than that of Jupiter. Hence, we can safely conclude that Saturn plays a minor role in the variation of Toutatis' angular momentum.
The attitude at the Chang'e-2 flyby epoch that was derived from the numerical simulations yielded a better approximation to the optical results than that previously obtained from radar data alone. The largest deviation in the Euler angles is observed in the pitch angle, with a bias of less than $20^\circ$. Uncertainties arising from observational errors and data processing were considered in the simulations. The inconsistency among the different types of observational data may have led to higher residuals compared with previous results. Simulations based solely on radar observations were also performed, and the corresponding rms magnitude was much lower. However, the higher-accuracy dynamical model combined with the various types of observations yielded a solution that is highly consistent with the optical images acquired during the Chang'e-2 flyby in 2012.
The precession of Toutatis was investigated by considering the motion of its long axis. The trajectory of the long axis in the inertial frame was found to be approximately circular, with a center axis pointing along $(-0.2^\circ, 54.6^\circ)$. The precession amplitude was estimated to be up to $60^\circ$, which may be responsible for the significantly different attitudes of the asteroid as observed by ground-based facilities.
Moreover, by exploring the motions of the long axis and middle axis, we determined the rotation periods of Toutatis using Fourier analysis. The two major periods were found to be 5.38 days for the principal-axis rotation and 7.40 days for the precession, in agreement with the results reported by \citet{Ostro1999}. Toutatis' angular momentum orientation was determined to be $\lambda_{H}=180.2^{+0.2^\circ}_{-0.3^\circ}$ and $\beta_{H}=-54.75^{+0.15^\circ}_{-0.10^\circ}$, and this orientation has remained nearly unchanged for the last two decades. Because of the increasing magnitudes of the solar and terrestrial torques, tiny jumps in the angular momentum orientation occur at perihelion in each orbital period. However, the dynamical effects caused by the near-Earth flyby in 2004 slightly changed the latitude of Toutatis' angular momentum orientation. Overall, our simulation results are in good agreement with previous radar observations. In summary, based on the combination of Chang'e-2's observations and radar data, our investigation offers an improved understanding of the rotational dynamics of Toutatis.
\section*{Acknowledgements}
The authors gratefully acknowledge M. W. Busch and Y. Takahashi for their helpful discussions and suggestions. This work is financially supported by the National Natural Science Foundation of China (Grants No. 11303103, 11273068, 11473073), the Strategic Priority Research Program-The Emergence of Cosmological Structures of the Chinese Academy of Sciences (Grant No. XDB09000000), the innovative and interdisciplinary program of CAS (Grant No. KJZD-EW-Z001), the Natural Science Foundation of Jiangsu Province (Grant No. BK20141509), and the Foundation of Minor Planets of Purple Mountain Observatory.
{ "redpajama_set_name": "RedPajamaArXiv" }
7,888
/* eslint-disable no-param-reassign, prefer-arrow-callback, func-names */
import { Meteor } from '../global.js';
import getCollection from '../getCollection.js';

// Keeps the `emails` field of the `people` collection lowercase and flags documents
// whose addresses collide with another person's by setting `emailConflict: true`.
// Uses collection-hooks style before/after callbacks; server only.
export default function manageEmails() {
  if (Meteor.isServer) {
    // Normalize addresses on insert and flag the new document if any address is
    // already claimed by another, non-conflicted person.
    getCollection('people').before.insert(function (userId, doc) {
      if (doc.emails) {
        doc.emails = doc.emails.map(e => e.toLowerCase());
        if (getCollection('people').findOne({
          emails: { $in: doc.emails },
          emailConflict: { $ne: true },
        })) {
          doc.emailConflict = true;
        }
      }
    });

    // Re-evaluate the conflict flag whenever the emails field is set or unset.
    getCollection('people').before.update(function (userId, doc, fieldNames, modifier) {
      let emailConflict = false;
      if (modifier.$set && modifier.$set.emails) {
        modifier.$set.emails = modifier.$set.emails.map(e => e.toLowerCase());
        if (getCollection('people').findOne({
          emails: { $in: modifier.$set.emails },
          emailConflict: { $ne: true },
          _id: { $ne: doc._id },
        })) {
          emailConflict = true;
        }
        if (emailConflict) {
          if (!modifier.$set) modifier.$set = {};
          modifier.$set.emailConflict = true;
        } else {
          if (!modifier.$unset) modifier.$unset = {};
          modifier.$unset.emailConflict = null;
        }
      } else if (modifier.$unset && modifier.$unset.emails) {
        if (!modifier.$set) modifier.$set = {};
        modifier.$set.emailConflict = true;
      }
    });

    // After an update that changed the email list, touch every document that shared
    // the old addresses so the before.update hook above recomputes their conflict flag.
    getCollection('people').after.update(function (userId, doc) {
      if (
        JSON.stringify([...(this.previous.emails || [])].sort()) !==
        JSON.stringify([...(doc.emails || [])].sort())
      ) {
        const oldConflicts = getCollection('people').find({
          emails: { $in: this.previous.emails || [] }, // guard against a missing previous value
        }).fetch();
        (oldConflicts || []).forEach(p =>
          getCollection('people').update({ _id: p._id }, { $set: { emails: p.emails } }),
        );
      }
    });

    // After a removal, touch the remaining documents that shared the removed
    // addresses so their conflict flags are recomputed as well.
    getCollection('people').after.remove(function (userId, doc) {
      if (doc.emails) {
        const conflicts = getCollection('people').find({ emails: { $in: doc.emails } }).fetch();
        (conflicts || []).forEach(p =>
          getCollection('people').update({ _id: p._id }, { $set: { emails: p.emails } }),
        );
      }
    });
  }
}
{ "redpajama_set_name": "RedPajamaGithub" }
7,388
Q: Eclipse 'Quick Fix' Will not Open I recently discovered that I need to perform a 'quick fix' on a mavenised project I have. I was able to perform this quick fix at work, however when I right click on my home laptop, the quick fix window does not open. I am using Eclipse Luna for Mac (whereas at work I have Eclipse Keplar - Windows). I cannot force the quick fix window to open by using the short-cut keys - nor can I access it via the 'edit --> quick fix' menu option. I need to access quick fix and select 'Exclude the associated raw classpath entry from the set of potential publish/export dependencies.' Any help would be greatly appreciated :) Heres the log from eclipse: !ENTRY org.eclipse.ui 4 0 2015-05-19 20:32:45.263 !MESSAGE Unhandled event loop exception !STACK 0 org.eclipse.e4.core.di.InjectionException: org.eclipse.core.commands.ExecutionException at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:62) at org.eclipse.e4.core.internal.di.InjectorImpl.invokeUsingClass(InjectorImpl.java:247) at org.eclipse.e4.core.internal.di.InjectorImpl.invoke(InjectorImpl.java:229) at org.eclipse.e4.core.contexts.ContextInjectionFactory.invoke(ContextInjectionFactory.java:132) at org.eclipse.e4.core.commands.internal.HandlerServiceHandler.execute(HandlerServiceHandler.java:149) at org.eclipse.core.commands.Command.executeWithChecks(Command.java:499) at org.eclipse.core.commands.ParameterizedCommand.executeWithChecks(ParameterizedCommand.java:508) at org.eclipse.e4.core.commands.internal.HandlerServiceImpl.executeHandler(HandlerServiceImpl.java:210) at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem.executeItem(HandledContributionItem.java:825) at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem.handleWidgetSelection(HandledContributionItem.java:701) at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem.access$6(HandledContributionItem.java:685) at org.eclipse.e4.ui.workbench.renderers.swt.HandledContributionItem$4.handleEvent(HandledContributionItem.java:613) at org.eclipse.swt.widgets.EventTable.sendEvent(EventTable.java:84) at org.eclipse.swt.widgets.Display.sendEvent(Display.java:4188) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1467) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1490) at org.eclipse.swt.widgets.Widget.sendEvent(Widget.java:1475) at org.eclipse.swt.widgets.Widget.notifyListeners(Widget.java:1279) at org.eclipse.swt.widgets.Display.runDeferredEvents(Display.java:4031) at org.eclipse.swt.widgets.Display.readAndDispatch(Display.java:3658) at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine$9.run(PartRenderingEngine.java:1151) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.e4.ui.internal.workbench.swt.PartRenderingEngine.run(PartRenderingEngine.java:1032) at org.eclipse.e4.ui.internal.workbench.E4Workbench.createAndRunUI(E4Workbench.java:148) at org.eclipse.ui.internal.Workbench$5.run(Workbench.java:636) at org.eclipse.core.databinding.observable.Realm.runWithDefault(Realm.java:332) at org.eclipse.ui.internal.Workbench.createAndRunWorkbench(Workbench.java:579) at org.eclipse.ui.PlatformUI.createAndRunWorkbench(PlatformUI.java:150) at org.eclipse.ui.internal.ide.application.IDEApplication.start(IDEApplication.java:135) at org.eclipse.equinox.internal.app.EclipseAppHandle.run(EclipseAppHandle.java:196) at org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.runApplication(EclipseAppLauncher.java:134) at 
org.eclipse.core.runtime.internal.adaptor.EclipseAppLauncher.start(EclipseAppLauncher.java:104) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:380) at org.eclipse.core.runtime.adaptor.EclipseStarter.run(EclipseStarter.java:235) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:648) at org.eclipse.equinox.launcher.Main.basicRun(Main.java:603) at org.eclipse.equinox.launcher.Main.run(Main.java:1465) Caused by: org.eclipse.core.commands.ExecutionException at org.eclipse.ui.internal.views.markers.QuickFixHandler.execute(QuickFixHandler.java:128) at org.eclipse.ui.internal.handlers.HandlerProxy.execute(HandlerProxy.java:294) at org.eclipse.ui.internal.handlers.E4HandlerProxy.execute(E4HandlerProxy.java:90) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.eclipse.e4.core.internal.di.MethodRequestor.execute(MethodRequestor.java:55) ... 40 more Caused by: java.lang.reflect.InvocationTargetException at org.eclipse.jface.operation.ModalContext.runInCurrentThread(ModalContext.java:479) at org.eclipse.jface.operation.ModalContext.run(ModalContext.java:374) at org.eclipse.jface.dialogs.ProgressMonitorDialog.run(ProgressMonitorDialog.java:527) at org.eclipse.ui.internal.progress.ProgressManager$RunnableWithStatus.run(ProgressManager.java:1380) at org.eclipse.swt.custom.BusyIndicator.showWhile(BusyIndicator.java:70) at org.eclipse.ui.internal.progress.ProgressManager$5.run(ProgressManager.java:1214) at org.eclipse.swt.widgets.Synchronizer.syncExec(Synchronizer.java:187) at org.eclipse.ui.internal.UISynchronizer.syncExec(UISynchronizer.java:156) at org.eclipse.swt.widgets.Display.syncExec(Display.java:4721) at org.eclipse.ui.internal.progress.ProgressManager.runInUI(ProgressManager.java:1211) at org.eclipse.ui.internal.progress.WorkbenchSiteProgressService.runInUI(WorkbenchSiteProgressService.java:396) at org.eclipse.ui.internal.views.markers.QuickFixHandler.execute(QuickFixHandler.java:124) ... 47 more Caused by: java.lang.NullPointerException at org.eclipse.ui.internal.ide.registry.MarkerHelpRegistry.getResolutions(MarkerHelpRegistry.java:254) at org.eclipse.ui.internal.views.markers.QuickFixHandler$1.run(QuickFixHandler.java:88) at org.eclipse.jface.operation.ModalContext.runInCurrentThread(ModalContext.java:466) ... 58 more
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,856
{"url":"https:\/\/stats.stackexchange.com\/questions\/68915\/k-means-clustering-for-usage-profiling","text":"# K-means clustering for usage profiling\n\nI am trying to use k-means clustering to profile mobile device usage behaviour for IT users. My data consists of different system and user level variable\/readings like number of calls\/sms, cpu\/memory usage, number of users and system applications\/services etc. The readings are taken every 5 minutes.\n\nThe idea I have is to use say 1 month's data for training, i.e. clustering, and then use the future data to compare with existing clusters and try to find (dis)similarity between the two. The assumption is different users will have different usage; hence readings from USER B will not fit into clusters from USER A.\n\nNow two questions I have:\n\n1. After training (clustering), how do I compare new data with existing clusters to determine (dis)similarity, i.e. new data belongs to same user or not? I am thinking of finding nearest cluster and then checking if the point lies within this cluster's boundary.\n\n2. I am using Silhouettes plot to determine the clustering quality. I get some negative values e.g see. Should I be concerned? or is it normal to have some negative values?\n\n\u2022 xIs this based on the silhouette measure of cohesion? If yes its values lie between 0-1. The diagram produced in SPSS is far clearer than this one shown. Sep 1, 2013 at 21:01\n\nHave you validated your results in any way?\n\nIt seems that you want to do unsupervised classification. That usually doesn't really work too well, in particular for this kind of data and with this method. K-means is more a vector quantization method than meant to find how clusters are separated. I.e. it will - always - discretize your data into $k$ groups, even when there is no separating gap inbetween!\n\nA negative value means that the record is more similar to the records of its neighboring cluster than to other members of its own cluster.\n\nThis seems to be what is happening here. K-means breaks apart data that should be in the same cluster.\n\nBut my larger concern is that your data may be inappropriate for k-means. K-means minimizes the within-cluster-sum-of-squares (WCSS). But given that your axes are from different domains, they do not necessarily have the same scale. K-means implicitely assumes squared Euclidean distance (which is the sum-of-squares) and this may be an inappropriate measure of similarity for your data, in particular without extensive preprocessing. You could try the following approach:\n\n1. define an appropriate measure of similarity for your data. Spend a lot of effort here!\n2. use metric learning techniques (e.g. non-metric multidimensional scaling) to obtain a vector space where Euclidean distance is appropriate\n3. run k-means in this projected data\n4. to assign a new observation to the clusters, apply the same preprocessing as in 1), then the same projection as in 2) and then assign it to the nearest mean in 3)\n\nA common failure with k-means is to run it on your data without first checking that this is appropriate; that the dimensions bear the same amount of relevant information on the same scale. The simplest heuristic is to use whitening but more often than not (e.g. when having discrete or binary attributes) this will not be enough.\n\nBut even with all these efforts, k-means may still fail badly. Because it assumes clusters have the same \"diameter\". 
So if one of your users has a very narrow usage profile (always using the webbrowser only, with a single tab), and the other has a very wide usage (word, browser, email, ...) all open at the same time or not, k-means may just be based on the wrong assumptions: clusters in k-means are expected to have the same diameter.\n\n\u2022 The input variables are all scaled between 0-100. The CPU and memory usage was already percentage, I used max-values for other variables in the dataset & converted them to percentages. Now would you consider z-score as a better option to scale variables to a range, rather than converting them to percentages... Secondly, do you consider SOM as a better option than k-means, because with SOM SOM the neural networks would deal with parameter's weighting and normalization is just question of initialization. Sep 3, 2013 at 10:27","date":"2022-07-05 09:46:23","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5419671535491943, \"perplexity\": 992.7729661259157}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-27\/segments\/1656104542759.82\/warc\/CC-MAIN-20220705083545-20220705113545-00184.warc.gz\"}"}
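To make the suggested workflow more concrete, here is a minimal scikit-learn sketch of the basic pipeline discussed above (standardize, cluster, inspect silhouettes, then assign new observations to the nearest centroid). The feature matrix, file names and the choice of $k$ are placeholders, and the metric-learning step recommended in the answer is deliberately omitted for brevity.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples

# X_train: rows = 5-minute readings, columns = features (calls, CPU %, memory %, ...)
X_train = np.loadtxt("user_a_readings.csv", delimiter=",")   # hypothetical file

scaler = StandardScaler().fit(X_train)                        # z-score each feature
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scaler.transform(X_train))

# negative silhouettes indicate points closer to a neighbouring cluster
sil = silhouette_samples(scaler.transform(X_train), km.labels_)
print("fraction of negative silhouettes:", np.mean(sil < 0))

# assign a new reading to the nearest trained centroid and measure how far it falls
x_new = scaler.transform(np.loadtxt("new_reading.csv", delimiter=",").reshape(1, -1))
label = km.predict(x_new)[0]
dist = np.linalg.norm(x_new - km.cluster_centers_[label])
```

Comparing `dist` against the typical within-cluster distances of the training data is one simple way to decide whether a new reading plausibly belongs to the same user.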
null
null
Nice 5 Kitchen Safety Rules #0 - Cooking Is Fun P 14 Our Safety Rules Katherine Of was posted on July 14, 2018 at 9:59 pm. If you want to use this picture as your desktop background, you may tap the download link below, or right-click on the image above and tap "Save Image As" to save the Nice 5 Kitchen Safety Rules #0 - Cooking Is Fun P 14 Our Safety Rules Katherine Of, or tap the "Set Desktop Background As" option if your internet browser supports it. If you were not able to find the picture you are looking for, use the "Search Column" at the top right or browse the other image wallpapers. This picture has a file size of 240kB and a resolution of 669x1024. Nice 5 Kitchen Safety Rules #0 - Cooking Is Fun P 14 Our Safety Rules Katherine Of has been seen by 36 users and is an attachment from 5 Kitchen Safety Rules.
{ "redpajama_set_name": "RedPajamaC4" }
5,945
Patient Experience and Engagement, Day One: Congress Registration and Morning Coffee; Congress Welcome and Opening Remarks. KEYNOTE: Advance Health Care Experiences Through Meaningful Connections. Hear how leaders from providers and plans are engaging consumers in the digital age. Patty Blake President, Senior Products Tufts Health Plan Patty joined Tufts Health Plan in 1993 to develop and launch the company's Medicare business. Under her leadership, Tufts Health Plan established itself as a market leader and became the number one Medicare Advantage plan in Massachusetts. Patty is responsible for all aspects of Tufts Health Plan's senior business including strategy, product development, network management, care management, marketing and sales. Tufts Health Plan Medicare Preferred is consistently ranked among the top Medicare Advantage plans in the country by the Centers for Medicare and Medicaid Services. Tufts Medicare Preferred HMO and Senior Care Options earned 5 stars out of a possible 5 from CMS for 2016, 2017 and 2018. Prior to joining Tufts Health Plan, Patty spent seven years with PacifiCare Health Systems in California. She was executive director for Secure Horizons USA and was regional director of health services, responsible for provider network performance, network contracting and development and utilization management. Patty began her career in managed care in sales and provider contracting. Patty serves on the board of directors of the Massachusetts/New Hampshire chapter of the Alzheimer's Association. Patty received an M.B.A. from the U.C.L.A. Anderson School of Management, an M.A. in health care management from the U.C.L.A. School of Public Health and a B.S. in public health from the University of Massachusetts at Amherst. Kathy Klingler Senior Vice President; Chief Marketing Officer, Marketing, Digital Strategy, and Consumer Experience Blue Cross and Blue Shield of Massachusetts Kathy is Senior Vice President and Chief Marketing Officer for Blue Cross Blue Shield of Massachusetts, the Commonwealth's largest private health plan and one of the largest independent, not-for-profit Blue Cross Blue Shield plans in the country. We serve nearly 3 million members and are rated among the nation's best health plans for overall member satisfaction and quality. Kathy leads our brand, consumer experience, digital strategy and marketing teams, who are focused on the diverse needs of our customers, members, and key business partners. Through the transformation and alignment of these teams, she is responsible for delivering innovations in marketing and communications that meet the ever-changing needs of the consumer in a digitally-evolving marketplace. She is responsible for managing our brand and ensuring our brand strategy aligns to our corporate mission of becoming a more consumer-centric company focused on access, affordability and unparalleled consumer experience. She also leads our market insights strategies, including our unique listening community and our Net Promoter Score research, to ensure our customers and members have a voice in driving future advancements that matter. Kathy joined the company in February 2017, bringing with her over 25 years of experience transforming marketing organizations across multiple industries.
Before joining Blue Cross, Kathy served as SVP and Chief Marketing Officer at Kaplan Higher Education, where she was responsible for aligning the national brand to the company's student focused strategy and commitment to improving graduation and job placement. She led the "digital first" marketing approach focused on student acquisition, retention, engagement aligned to the lifetime value of a student. Kathy was also the EVP and Chief Marketing Officer of Santander Bank, where she led brand management, marketing, communications, sponsorships, community affairs and public relations for the U.S. division. Kathy led the bank-wide rebranding from Sovereign Bank to Santander and the platform to drive differentiation in the future. Prior to that, Kathy held senior marketing positions for several financial services organizations, including Fidelity Investments, John Hancock Financial Services and was a senior consultant with KPMG in both the United States and London. She serves as a member of the Executive Board of Horizons for Homeless Children, Executive Board of the Ad Club of Massachusetts and the Board of WGBH. Kevan Mabbutt Senior Vice President and Chief Consumer Officer Intermountain Healthcare Kevan is a global leader in customer experience with an impressive track record of inspiring and developing best-in-class consumer outcomes for brands like Disney, Discovery Channel, and the BBC. He is superbly qualified to lead efforts to identify what customers and patients need and expect, and evolve capabilities to create and deliver consistent, customer-centered, digitally enabled experiences for them. Kevan was previously at The Walt Disney Company, where he served as the Global Head of Consumer Insight, based in Los Angeles, California. He led the transformation and development of Disney's theme park, cruise line, resort, retail, and digital experiences in the U.S., Europe, and Asia. He was instrumental in defining and optimizing the guest experience at Disney's first theme park in mainland China (Shanghai Disney Resort opened in 2016). He also oversaw the expansion of Disney, Pixar, Marvel, and Star Wars brands globally. Prior to Disney, Kevan held global marketing and analytics leadership roles at Discovery Channel and the BBC and served as a consultant to media companies in the Middle East and Europe. He was also a member of Deloitte's consumer practice. In that role he developed strategies to attract, retain, and satisfy customers in the banking, transportation, and retail sectors. Randal Weber, MD, FACS Chief Patient Experience Officer University of Texas MD Anderson Cancer Center Randal Weber Randal S. Weber, M.D., F.A.C.S., is an internationally recognized surgeon and expert in the treatment of patients with head and neck cancer. He is the immediate past chairman of the Department of Head and Neck Surgery, a position he held for over 14 years. He has a joint appointment as Professor, Department of Radiation Oncology, at The University of Texas MD Anderson Cancer Center in Houston, Texas, and is Adjunct Professor, Department of Otolaryngology-Head and Neck Surgery, at Baylor College of Medicine in Houston. He is the recipient of the John Brooks Williams and Elizabeth Williams Distinguished University Chair in Cancer Medicine. A leader in healthcare initiatives to improve cancer care, Dr. 
Weber has been instrumental in the efforts to improve the quality of care and the outcomes achieved through the establishment of performance-driven processes and the adherence to evidence-based treatment guidelines for patients with head and neck cancer. His leadership efforts in promoting quality cancer care that is value driven have been instrumental in creating a national agenda to improve head and neck cancer care. In addition to maintaining a busy clinical schedule, he remains closely involved in the professional development and education of head and neck surgical oncology fellows. He remains active in clinical research investigating new treatment approaches for patients with head and neck cancers and is a pioneer in the use of organ-sparing oncologic strategies. Highly sought after for his expertise and professional insights, Dr. Weber has been the guest lecturer and visiting professor on more than 200 occasions both in the United States and around the world, in addition to leading numerous courses and seminars. Dr. Weber was honored as the Hayes Martin Lecturer and recipient of the Distinguished Service Award at the April 2011 meeting of the American Head and Neck Society. He has served as President of the Society of University Otolaryngologists–Head and Neck Surgeons, the American Radium Society, and the American Head and Neck Society. He is the past President of the American Board of Otolaryngology and past Chair of the Head and Neck Surgery Committee of the Radiation Therapy Oncology Group. Dr. Weber is a prolific author with over 400 publications that include scientific articles, book chapters, and textbooks. He is the immediate past Editor in Chief of Head & Neck: Journal for the Sciences and Specialties of the Head and Neck, a position he held for 13 years. He also serves on the editorial board of several scientific journals. On September 1, 2016, Dr. Weber assumed the role of Chief Patient Experience Officer for MD Anderson Cancer Center and will continue an active head and neck surgical practice. Networking and Refreshment Break KEYNOTE: Leverage Emerging Technology to Positively Impact Lives Technology is a vital part of an organization's capabilities and success in leveraging emerging technologies can improve access to health care services and information, as well as drive affordability. Discover how deploying technology and data-driven solutions will positively impact consumers' lives, improve health care experiences, and drive meaningful change in the health care industry. Steve Betts Senior Vice President; Chief Information Officer As senior vice president and chief information officer at Health Care Service Corporation (HCSC), Steve Betts is responsible for deploying technology and data-driven solutions that will positively impact members' lives and drive meaningful change in the health care industry. He recognizes technology as a vital part of the organization's capabilities and success leveraging digital and emerging technologies to help improve access to health care services and information, as well as drive affordability, for HCSC's more than 15 million members across its five health plan states. Since joining HCSC in 2014, Steve has transformed the organization to exemplify the agility and skills needed to serve HCSC's members. 
To help encourage a start-up mentality and help employees identify, test and accelerate new products, ideas and emerging technologies, Steve oversaw the implementation of an in-house innovation incubator powered by a team of user experience researchers, designers and developers. He fosters a purpose-driven culture and frequently leads employee hackathons to find new solutions to business and consumer challenges. Steve has also revitalized the organization's data and analytics framework to curate and leverage insights that can help improve members' health care experiences. He was also a central force in modernizing HCSC's workspaces to encourage collaboration and facilitated the development of a tool that helps employees identify and understand the impact of emerging technologies to the business. Prior to joining HCSC, Steve served as global chief information officer for Aon PLC. Through his leadership, technology played an increasing role in Aon's go-to-market strategy. He led the IT integration programs for the two largest acquisitions in Aon's history and also spearheaded a multi-year rationalization program which simplified Aon's application portfolio and core technology footprint. Steve is board chair of Lumity, a Chicago-based organization dedicated to exposing students to careers in science, technology, engineering and math (STEM) fields. His work with Lumity is a key component of his efforts to help grow the pipeline of future tech workers, particularly in health care, by getting students excited about STEM careers. Steve received his Bachelor of Science in Mathematics and Management Science from the University of Manchester in the United Kingdom. KEYNOTE: Apply Out-Of-Industry Tactics to Thrive Amid Disruption As the health care industry goes through digital transformations and move to value-based care, organizations need to redesign their workflow to connect employee-driven change and technology to better understand consumer needs. Discover how other industries are leveraging technology advancements and human-centered change to deliver exceptional experiences Apply successes and failures learned from other industries, who have navigated disruptive transformations, to health care Neil Gomes, MBA, M.Ed., CSM, CSPO Executive Vice President, Chief Digital Officer Thomas Jefferson University and Jefferson Health Neil Gomes Neil Gomes (B.Sc., MBA/MMS, M.Ed, ABD) is the Chief Digital Officer & Executive Vice President for Technology Innovation and Consumer Experience at Thomas Jefferson University and Jefferson Health System. Neil has worked for the $100+ billion, Fortune 500 Tata Group of Companies where he played a leadership role in building the intrapreneurial startup, Tata Interactive Systems, from 60 employees to the world's largest custom e-learning development firm in less than two years. Neil left the Tata Group to complete his M.Ed. in Instructional Design at the University of South Florida (USF) whilst progressively working toward the position of Director of eTeaching and Technology and then the Director of Instructional Design and Training at USF Health. While at USF, Neil's leadership and entrepreneurial acumen helped grow a strategic team of application developers, instructional and multimedia designers, and project managers that generated over $1.5 million in annual auxiliary revenue from research and external development projects while growing online student enrollment from 200 enrollments in 2002 to approx. 
200,000+ enrollments a year by 2012, generating nearly $30 million in revenue each year. While at USF, Neil also began working toward his Ph.D., is currently a Ph.D. Candidate (ABD), has authored research articles, book chapters, and delivered several formal research presentations. At Jefferson, Neil founded the Digital Innovation and Consumer Experience (DICE) Group and drives consumer-focused digital innovation in healthcare and education via teams of digital consumer experience specialists; application, platform, machine learning, and IoT developers; simulation and UI/UX designers; trainers; documentation and support specialists; instructional/e-learning designers; and process designers. Neil also helps define innovation strategy and programs via Jefferson's Innovation Team. He helped secure a $15+ million donor grant from the Bernie Marcus Foundation to develop a high-tech, consumer-centric, integrative health center at Jefferson and has also launched several pioneering collaborations with partners such as Google, Apple, Adobe, SAP, and IBM Watson. Neil serves as Associate Editor of the Journal for Healthcare Transformation and is a contributor toward the book: We CAN fix Healthcare, the Future is NOW. Neil is also a speaker, agile aficionado, and digital innovation evangelist. Track Chairs' Remarks Jannienne Jones Verse Vice President Brand Management; Associate Vice President, System Marketing and Brand Management Jannienne Jones Verse is Vice President Brand Management with Geisinger. Focused on the build of innovative and metrics-based brand initiatives, Jones Verse's work crosses the disciplines of marketing, advertising, media planning and buying, design, digital engagement, research and business operations. She walks the path of servant leadership, blending her passion for elevating teams to new levels of success with years of experience in brand architecture, technology and strategy. Jones Verse began her career in corporate communications, evolving into strategic marketing with retail technology leader AT&T. She aligned operations across three states, increasing AT&T's footprint in those markets by 64% during her tenure. Jones Verse then propelled international software giant Reynolds & Reynolds to next level B2B integrated marketing, with sales campaigns exceeding quarterly goals by the millions. In the healthcare arena, she leveraged talents in consumer journey mapping and community engagement strategies for Premier Health and Vidant Health. Jones Verse holds many professional distinctions, among them being named one of the nation's most talented marketing professionals and 2017 honoree as a Most Powerful & Influential Woman of Pennsylvania. Recognition for outstanding excellence in medical advertising and noted as an Emerging Executive Leader for exceptional business acumen are additional accomplishments. Jones Verse currently serves as Co-Chair for Geisinger's Employee Resource Group BOLD (Black Outreach and Leadership Development). Additionally, she is a dedicated member of the board with the National Diversity Council and holds board membership with the YMCA in her home community. When asked of her greatest accomplishments, Jones Verse states family, and the ability to positively impact others. She views it an esteemed honor to serve as a youth mentor, develop youth leadership programs, and to share her insights and experiences via lectureship with university students, professionals and others. 
Designing and facilitating personal brand workshops, as well as blogging on the topic, are also areas of concentration and joy. The Buckeye State native lived her formative years abroad. She earned academic awards and a Bachelor of Arts, Mass Communications at Wright State University. Jones Verse also achieved a Master's in Business Administration, Marketing Concentration, Summa Cum Laude, at the University of Phoenix. She lives in Central PA, and enjoys time with her family, creative projects and travel. Master Social Media Marketing to Build Engagement With Consumers Understand how your organization can drive effective social media integration on all platforms to showcase success, optimize engagement, reach the right audiences, as well as acquire new prospective patients and physicians. Develop strategies to ensure tone and voice are consistent across social media Explore ways to craft branded multimedia and repurpose content to meet the needs of different social media platforms Elizabeth Whittington Director of Digital Media Elizabeth Whittington is the director of digital media at St. Jude Children's Research Hospital, where she leads a successful team in digital content strategy, social media, marketing and physician communications. She is committed to raising the awareness of the exceptional clinical care and scientific research of the institution through public relations, marketing and digital strategy. During her time at St. Jude, she managed the successful launch of St. Jude Progress, an institutional blog focused on science and medicine, which was recently named a FOLIO finalist. She has guided digital content strategy for high-profile institutional initiatives, including St. Jude Global and the Graduate School of Biomedical Sciences. In 2015, she hired the institution's first dedicated social media person to develop and cultivate channels to engage audiences in scientific research and medicine. Prior to St. Jude, Elizabeth was web managing editor of CURE magazine for six years, which included a redesign and brand expansion of CURE Media Group. Elizabeth was responsible for the digital strategy for CURE Media Group, which included curetoday.com, microsites, emails, a guest blog network, social media and engagement. She led CMG to several digital awards, including the 2010 FOLIO: award for Best Consumer Health and Fitness Magazine Website. In 2012, she earned a fellowship to Medicine in the Media: The Challenge of Reporting on Medical Research, sponsored by the National Institutes of Health's Office of Disease Prevention in Bethesda, Maryland. Currently serving as Communications Co-Chair of the Public Relations Society of America Memphis Chapter, she is also a member of the American Marketing Association. She has a B.S. in biology and recently earned her MBA. Elizabeth enjoys running, spinning and has a slight addiction to coffee and business books. Harness the Power of Video for Your Marketing Strategy With the rise of video consumption across social and digital platforms, it is no longer a nice to have, but a need to have in your marketing strategy. Use video to tell your story, reach new audiences, educate consumers and build your brand. 
Understand how to make video a part of your well-rounded marketing strategy Examine different tools and effects in video creation to catch consumers' attention and drive desired results Utilize Influencer Marketing to Help Promote Your Organization and its Services Build meaningful relationships with influencers, such as celebrities or community leaders, to help promote your organizations' brand and mission. Step through opportunities for asset co-creation, channel optimization strategies, and ways to leverage complementary partner brands to build good will Learn how influencers can help guide consumer choice, raise awareness of services, build trust, and enhance your community outreach David Simpkins Vice President, Marketing and Communications, National Capital Region Vice President, Marketing and Communications - National Capital Region, Community Division, Johns Hopkins Medicine. In this role, David oversees marketing and communications for Johns Hopkins Health System community division hospitals, physicians, and outpatient centers including Howard County General Hospital, Sibley Memorial Hospital, Suburban Hospital and the Johns Hopkins Health Care and Surgery Center in Bethesda, Maryland. Prior to joining Johns Hopkins Medicine, David worked for the American Cancer Society in a variety of roles including national vice president for strategic communications and the senior vice president of market strategy and health equity for the South Atlantic Division. He has more than 25 years of professional experience working both in academic and community hospitals and healthcare systems, including vice president of planning, marketing, and business development for Saint Agnes Hospital (Ascension Health), Baltimore Maryland and vice president of marketing for Holy Cross Hospital (Trinity Health), Silver Spring, Maryland. Develop Comprehensive Digital Strategies to Engage Across Platforms with Your Consumers Online presence is key to reach your consumer in the digital era. However, creating cohesion between various platforms can be expensive and time consuming. Discover how to create platforms for all areas of engagement within budget and how to evaluate their coordination. Use analytics to examine how well your platforms perform and to gain market insights Create digital platforms that engage with a variety of audiences to build your brand and create the best user experience Tom Neumann Executive Director, Marketing, Content and Creative Services Tom Neumann is executive director of content & creative services at Cleveland Clinic, leading a team that is responsible for the creation of engaging health information content. Additionally, Neumann manages the creative services team which encompasses graphic design, medical illustration and photography. Prior to his work at Cleveland Clinic, Neumann served as chief marketing officer at Akron General Health System, associate vice president for communications and marketing at Kent State University, and led marketing efforts at Summa Health System and Medical Mutual of Ohio. He began his career as a video producer/director, and is a graduate of Ohio University with a degree in Communications with an M.B.A. from Kent State University. Amanda Todorovich Senior Director, Marketing, Content and Creative Services Amanda Todorovich is the senior director of content & creative services at Cleveland Clinic. She manages a team of writers, designers, digital engagement strategists and project managers to serve enterprise content needs both on- and off-line. 
Her team is responsible for the #1 most-visited hospital blog in the country, Health Essentials (health.clevelandclinic.org). Amanda joined Cleveland Clinic in February 2013, after serving for four years as chief content officer and co-founder of MedCity News. With 20 years of storytelling experience, Amanda is passionate about finding innovative ways to leverage every piece of content her team produces. She was chosen as the 2016 Content Marketer of the Year by the Content Marketing Institute, named to the 2018 Marketo Fearless 50 and honored as a 2017 Boldest Healthcare Brand Marketer finalist. Networking and Cocktail Reception Elizabeth Attaya Email: Elizabeth.Attaya@worldcongress.com
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,045
Q: Showing that $f(a)=f'(a) = 0$ if and only if $f = (x-a)^2q$.

This was a problem that I just cannot figure out: Let $F$ be a field, $a\in F$ and $f\in F[x]$. Show that $f(a)=f'(a)=0$ if and only if $f=(x-a)^2q$ for some $q\in F[x]$. Here $f'$ refers to the derivative of $f$.

A: The if part is obvious. For the only if part, since $f(a)=0$, there exists $g\in F[x]$ with $f=(x-a)g$. Now, $f'(x)=g(x)+(x-a)g'(x)$. So, $f'(a)=0$ implies $g(a)=0$. Thus, again, there exists $q \in F[x]$ with $g(x)=(x-a)q(x)$. Therefore, $f=(x-a)^2q$.

A: Hint $\ $ Wlog, by a shift, assume the root is $\rm\: a = 0.$ Notice $\rm\ \ x^2\: |\ f(x)$ $\rm\ \iff\ x\ |\ f(x)\ $ and $\rm\ x\ \bigg|\ \dfrac{f(x)}{x}$ $\rm\ \iff\ f(0) = 0\ $ and $\rm\ x\ \bigg|\ \dfrac{f(x)-f(0)}x\iff \dfrac{f(x)-f(0)}x\bigg|_{\:x\:=\:0} =\: 0$ $\rm\ \iff\ f(0) = 0\ $ and $\rm\ f'(0) = 0$
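As a concrete sanity check of the statement (this example is an addition, not part of the original answers), take $f=(x-1)^2(x+2)=x^3-3x+2$: then $f(1)=0$ and $f'(x)=3x^2-3$ gives $f'(1)=0$, matching the factor $(x-1)^2$. The same check can be run with a small Python/SymPy sketch:

from sympy import symbols, expand, diff, factor
x = symbols('x')
f = expand((x - 1)**2 * (x + 2))             # x**3 - 3*x + 2
print(f.subs(x, 1), diff(f, x).subs(x, 1))   # both print 0
print(factor(f))                             # (x - 1)**2*(x + 2)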
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,418
Raiko Iwanow Daskalow (born 9 December 1886 in Bjala Tscherkwa; died 26 August 1923 in Prague) was a Bulgarian politician and diplomat.

Life
Daskalow studied public finance. In 1918 he took part in the Wladaja uprising and proclaimed the republic in Radomir on 27 September 1918. After the uprising was put down, he fled to Greece. Having been amnestied in 1919, he returned to Bulgaria. He was one of the leading figures of the Bulgarian Agrarian National Union and served from 1920 to 1923 as finance minister in the Bulgarian government led by the Agrarian Union. In May 1923 he became the Bulgarian envoy in Prague, where he fell victim to an assassination on 26 August 1923.

Literature
Daskalow, Raiko Iwanow. In: Taschenlexikon Bulgarien, Bibliographisches Institut Leipzig 1983, page 51.

External links

Finance ministers of Bulgaria
Bulgarian ambassadors to Czechoslovakia
Bulgarians
Born 1886
Died 1923
Men
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,788
Temple City's summer concert series to go virtual this year
By Pierce Singgih | psinggih@scng.com | San Gabriel Valley Tribune
PUBLISHED: July 4, 2020 at 6:36 a.m. | UPDATED: July 4, 2020 at 6:37 a.m.
The show goes on for Temple City's 24th annual concert series, but this summer, the show will be virtual because of the new coronavirus pandemic. According to a news release, this year, the city will host four virtual musical performances through its social media accounts on Facebook, Instagram and YouTube. The shows will also be broadcast and replayed on Spectrum Channel 3 where available. The first show starts 7 p.m. Wednesday, July 8. Subsequent shows will be held at the same time on the following three Wednesdays until July 29.
"The pandemic has been hard on everyone," Mayor Tom Chavez said in a statement. "While we've had to adjust this year's summer concert series to the new normal, bringing music into people's homes is a great joy and stress reliever."
Here's the concert schedule:
Beatles tribute: 7 p.m. Wednesday, July 8. Get ready for nostalgia as this tribute band performs the Beatles' top hits using vintage instruments and amplifiers to replicate the sound of a true Beatles performance.
The Country Club Band featuring Amanda Kate: 7 p.m. Wednesday, July 15. This band can play just about anything, from classic country and Top 40 hits to jazz and beyond.
Cold Duck: 7 p.m. Wednesday, July 22. Cold Duck heats up dance floors with sizzling covers of Top 40, R&B and Latin hits.
Soto: 7 p.m. Wednesday, July 29. Emerging as one of the most electric purveyors of funk, R&B and dance music, Soto offers soulful ballads, Latin rhythms and dynamic choreographed stage shows.
For more information, visit the city's website or call 626-579-0461.
Pierce Singgih is a reporter for the San Gabriel Valley Tribune, covering Azusa, West Covina, El Monte and everything in between. Previously, he interned for the Los Angeles Daily News. He graduated with a B.A. in Journalism in 2019 from Biola University, where he served as the Editor-in-Chief of the Chimes Newspaper. Outside of journalism, Pierce is an avid street photographer and enjoys supporting local coffee shops.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
513
{"url":"http:\/\/love2d.org\/forums\/viewtopic.php?p=238853&sid=ce5f86c29505db267feab12bbc3a4a7a","text":"## Any idea in implementing steering behaviour\n\nGeneral discussion about L\u00d6VE, Lua, game development, puns, and unicorns.\nGunroar:Cannon()\nParty member\nPosts: 203\nJoined: Thu Dec 10, 2020 1:57 am\n\n### Any idea in implementing steering behaviour\n\nI wanted to m\u00e4ke smooth pathfinding movement like in \"the escapists\". I used A* and made the object move to a node in the path(by small dt steps in update) until the position is met then move on to the next node in the path. But the check makes the object stop before getting there and then start again so it still looks blocky.\nThen I thought of getting only the needed nodes(nodes that object can reach in a straight line) and using the same technique but now the stopping would be less noticeable...but I had trouble implementing it so I found https:\/\/gamedev.stackexchange.com\/ques ... 1596#81596 and I still couldn't implement it(I'm like that sometimes ) but then I heard about steering behaviour and thought that's where I should go, looked at https:\/\/www.nonsequitoria.com\/2521\/ and https:\/\/natureofcode.com\/book\/chapter- ... us-agents\/...but still had trouble implementing it (despite code examples but I don't know maybe right now I'm to impatient ).\n\nSo long story short(and that was pretty long) does anyone have any idea of how to do\/an example of a lua implementation of steering behaviour(without love.physics, I just use bump.lua).\np.s. if anyone has any ideas on the corner-to-corner path thing(the first one with a link I failed at )that's welcome too.\nme: I don't always code but when I do it's done flawlessly.\nalso me:\n\nCode: Select all\n\n function Gunroar:Cannon()\nfor x, enemy in ipairs(self.allEnemies) do\nself:Cannon(enemy)\nend\nend\n\nCode: Select all\n\nLua Error: [file Gunroar.lua]:18: C stack overflow\n\nCarotino\nProle\nPosts: 3\nJoined: Thu Feb 18, 2021 9:00 pm\n\n### Re: Any idea in implementing steering behaviour\n\nHi.\nRight now I've found only this boids example in my disk, not done by me.\n\nI actually implemented the algorithms from the original 80's paper years ago.\n\nWhat I did then was having basically a point with a mass and a direction. You apply a force to direct it towards its next destination. When you start getting near it you select the next destination point. By applying forces your point will naturally change orientation smoothly, also depending on the mass of the point.\nThere could be many forces applying to the point: following others points in proximity, avoiding them, running away, looking for something.\nI had a whole fish simulation with different roles (predators, prey, leaders, followers...). It was nice because after a while it started having a life of its own. Fishes started to school together (is it a sound verb? English is not my native tongue), then a predator made them fly in all directions, then a small fish found a leader and started following it, or they simply started to flock together, simultaneously following the others and trying to be no too close. 
All this sort of things.\nAttachments\nboids2.zip\nGunroar:Cannon()\nParty member\nPosts: 203\nJoined: Thu Dec 10, 2020 1:57 am\n\n### Re: Any idea in implementing steering behaviour\n\nCarotino wrote: Thu Feb 18, 2021 9:57 pm ...\nThere could be many forces applying to the point: following others points in proximity, avoiding them, running away, looking for something.\nI had a whole fish simulation with different roles (predators, prey, leaders, followers...). It was nice because after a while it started having a life of its own. Fishes started to school together (is it a sound verb? English is not my native tongue), then a predator made them fly in all directions, then a small fish found a leader and started following it, or they simply started to flock together, simultaneously following the others and trying to be no too close. All this sort of things.\nThnx a lot!\nYou used the school verb properly (I think ).\nYour examples was just what I needed ...but now I just want to ask about implementing a way for it to avoid obstacles which I once saw is possible in steering behaviour?\nme: I don't always code but when I do it's done flawlessly.\nalso me:\n\nCode: Select all\n\n function Gunroar:Cannon()\nfor x, enemy in ipairs(self.allEnemies) do\nself:Cannon(enemy)\nend\nend\n\nCode: Select all\n\nLua Error: [file Gunroar.lua]:18: C stack overflow\n\ntogFox\nParty member\nPosts: 112\nJoined: Sat Jan 30, 2021 9:46 am\n\n### Re: Any idea in implementing steering behaviour\n\nI asked a similar question on avoidance here:\n\nhttps:\/\/love2d.org\/forums\/viewtopic.php?f=4&t=90297\n\nI love vectors and there is a very simple way to do path avoidance in my open world (no tiles).\nGunroar:Cannon()\nParty member\nPosts: 203\nJoined: Thu Dec 10, 2020 1:57 am\n\n### Re: Any idea in implementing steering behaviour\n\ntogFox wrote: Mon Feb 22, 2021 10:43 pm I asked a similar question on avoidance here:\n\nhttps:\/\/love2d.org\/forums\/viewtopic.php?f=4&t=90297\n\nI love vectors and there is a very simple way to do path avoidance in my open world (no tiles).\n\nSeems like it would work, though it doesn't use steering directly . Though I can think of a way to affect the vector given of by the steering, so...seems like it would work.\nHeheh, though(me not being very good at math) doesn't really understand the weighted part:\nXii wrote: Sat Feb 13, 2021 3:40 pm If the number of entities isn't too high, what you can do is consider the distances and directions (vectors, essentially) towards each and every other entity.\n\nBegin with a vector pointing straight to the goal, or something. Then, loop over all opponents. Subtract from the initial vector the vector towards each opponent, weighted by the inverse square [\\b]of the distance to said opponent. Then have the entity move towards the final resulting vector. 
This creates avoidance behavior in real-time.

Could you/someone elaborate, please?

Nikki

### Re: Any idea in implementing steering behaviour

https://natureofcode.com/book/chapter-6 ... us-agents/
This book is very nice and free.

By the way, the weighing part is something like this in pseudocode:
0.4 * path following + 0.6 * obstacle avoidance = the force you want to apply this frame

Oh, and obstacle avoidance is basically just calculating a force in the opposite direction.
(But do have a look at the book, it's very nice; it's by the same author as the Coding Train youtube channel (https://thecodingtrain.com/CodingChalle ... paths.html).)

Gunroar:Cannon()

### Re: Any idea in implementing steering behaviour

Nikki wrote: Tue Feb 23, 2021 5:23 pm
btw the weighing part is something like this in pseudocode
0.4 * pathfollowing + 0.6 * obstacle avoidance = the force you want to apply this frame
...

Thnx, I'll try to read that book (though I've seen it before and gone past it, but thnx for pointing it out again)... but... bear with me... I still don't get that weighted part. I mean, I checked online before asking and I saw sigma signs, the inverse square law, and a lot of stuff.
So it just surprises me that it's that simple and that it has definite variables like 0.6.
I really want to just find out how to get the inverse square of a distance and then weigh it against the initial subtracted vector.

Xii

### Re: Any idea in implementing steering behaviour

Gunroar:Cannon() wrote: Tue Feb 23, 2021 11:03 pm
I really want to just find out how to get the inverse square of a distance and then weigh it against the initial subtracted vector.

A point in 2-dimensional space is defined by two coordinates, x and y.
The distance between two points is:

Code: Select all

-- returns the distance between points x1,y1 and x2,y2
function distance(x1, y1, x2, y2)
    return math.sqrt(((x2-x1)^2)+((y2-y1)^2))
end

The square of a number n is n^2, or n*n (n raised to the 2nd power, or multiplied by itself).
The root of a squared number n, i.e. the square root of n, is math.sqrt(n).
The inverse of a number n is 1/n.
Therefore, the inverse square root of a number n is 1/math.sqrt(n).
Putting it all together, the inverse square root of distance is therefore:

Code: Select all

function inv_sqrt_distance(x1, y1, x2, y2)
    return 1/math.sqrt(((x2-x1)^2)+((y2-y1)^2))
end

To weigh a vector by it is to multiply its components by it.
So if I have a vector {x,y}, the weighted vector is {x*d, y*d} where d is the inverse square distance.

Oh, and to subtract one vector from another is to subtract the matching components:

Code: Select all

vector_one = {x1, y1}
vector_two = {x2, y2}
subtracted_vector = {vector_one[1]-vector_two[1], vector_one[2]-vector_two[2]}

What I like to do myself is use these...:

Code: Select all

-- returns the direction from point x1,y1 to x2,y2 (in radians)
function direction(x1, y1, x2, y2)
    return math.atan2(y2-y1, x2-x1)
end

-- returns the distance between points x1,y1 and x2,y2
function distance(x1, y1, x2, y2)
    return math.sqrt(((x2-x1)^2)+((y2-y1)^2))
end

-- returns the point located at x,y moved in direction by distance
function transposition(x, y, direction, distance)
    return distance*math.cos(direction)+x, distance*math.sin(direction)+y
end

First, the basic behavior; moving the actor towards the goal at some constant speed, making sure not to overshoot it:

Code: Select all

local dir = direction(actor_x, actor_y, goal_x, goal_y)
local dist = math.min(actor_speed, distance(actor_x, actor_y, goal_x, goal_y))
actor_x, actor_y = transposition(actor_x, actor_y, dir, dist)

Second, the avoidance behavior; moving the actor away from an obstacle at full speed:

Code: Select all

local dir = direction(obst_x, obst_y, actor_x, actor_y) -- note we reversed the coordinates:
-- before we got the direction from actor to goal, now we get the direction from obstacle to actor
actor_x, actor_y = transposition(actor_x, actor_y, dir, actor_speed)

Third, weighted avoidance.
We multiply the speed by the inverse square root of the distance:

Code: Select all

local dir = direction(obst_x, obst_y, actor_x, actor_y)
local dist = 1/math.sqrt(distance(obst_x, obst_y, actor_x, actor_y)) * obst_radius
local speed = math.min(actor_speed, dist)  -- cap the avoidance speed at actor_speed
actor_x, actor_y = transposition(actor_x, actor_y, dir, speed)

obst_radius is a value >= 1 that makes the avoidance behavior stronger the bigger it is. If the distance between actor and obstacle is less than or equal to this radius, the actor will flee at full speed. The further away the actor is from the obstacle, the slower it will avoid it.

Finally, we combine everything:

Code: Select all

local goal_dir = direction(actor_x, actor_y, goal_x, goal_y)
local goal_dist = math.min(actor_speed, distance(actor_x, actor_y, goal_x, goal_y))

local avoid_dir = direction(obst_x, obst_y, actor_x, actor_y)
local avoid_dist = 1/math.sqrt(distance(obst_x, obst_y, actor_x, actor_y)) * obst_radius
local avoid_speed = math.min(actor_speed, avoid_dist)  -- cap the avoidance speed at actor_speed

local vector_x, vector_y = transposition(actor_x, actor_y, goal_dir, goal_dist) -- towards goal
vector_x, vector_y = transposition(vector_x, vector_y, avoid_dir, avoid_speed)  -- away from obstacle

local dir = direction(actor_x, actor_y, vector_x, vector_y)
local dist = distance(actor_x, actor_y, vector_x, vector_y)
-- finally move the actor:
actor_x, actor_y = transposition(actor_x, actor_y, dir, math.min(dist, actor_speed))

...I think I got that right, anyway.

Note that this combined behavior will always move towards the goal, while also avoiding the obstacle. Which means that in theory, if the obstacle is exactly between the actor and the goal, the actor will stand still, not smart enough to go around it.
Gunroar:Cannon()

### Re: Any idea in implementing steering behaviour

Xii wrote: Wed Feb 24, 2021 6:59 pm
... (Xii wrote a bunch of stuff, actually.)

Haha, thnx for the 'from the very beginning' explanation (it now ensures that there's no way for me to not understand anymore).
As for it not being able to move around obstacles... I really hope that my AI doesn't need to be that smart (*sweating smiley emoticon*).
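To tie the thread together, below is one way the pieces above could be combined into a single per-frame steering force, roughly following Nikki's weighted-sum pseudocode. It is an untested sketch with made-up names (actor, goal, obstacles, seekWeight, avoidWeight), not code from any of the posters; whether the avoidance term falls off as 1/d, 1/(d*d) or 1/sqrt(d) (Xii's earlier quote says "inverse square" while his code uses the inverse square root) is really just a tuning choice.

Code: Select all

-- Untested sketch: weighted goal-seeking + obstacle avoidance, frame-rate independent.
-- obstacles = { {x=..., y=..., radius=...}, ... }   (illustrative structure)
local function steeringForce(actor, goal, obstacles, seekWeight, avoidWeight)
    -- 1) seek component: unit vector towards the goal
    local gx, gy = goal.x - actor.x, goal.y - actor.y
    local gd = math.sqrt(gx*gx + gy*gy)
    local sx, sy = 0, 0
    if gd > 0 then sx, sy = gx/gd, gy/gd end

    -- 2) avoidance component: sum of pushes away from each obstacle,
    --    each scaled down with distance (1/d here; use 1/(d*d) for a sharper falloff)
    local ax, ay = 0, 0
    for _, o in ipairs(obstacles) do
        local ox, oy = actor.x - o.x, actor.y - o.y
        local d = math.sqrt(ox*ox + oy*oy)
        if d > 0 then
            local push = o.radius / d          -- about 1 at the obstacle edge, fades with distance
            ax, ay = ax + ox/d * push, ay + oy/d * push
        end
    end

    -- 3) weighted sum, e.g. seekWeight = 0.4 and avoidWeight = 0.6 as in Nikki's pseudocode
    return sx*seekWeight + ax*avoidWeight,
           sy*seekWeight + ay*avoidWeight
end

-- Typical use inside love.update(dt):
-- local fx, fy = steeringForce(player, nextNode, walls, 0.4, 0.6)
-- player.vx = player.vx + fx * player.accel * dt
-- player.vy = player.vy + fy * player.accel * dt
-- player.x  = player.x  + player.vx * dt
-- player.y  = player.y  + player.vy * dt

As Xii notes, a pure weighted sum can still deadlock when an obstacle sits exactly on the line to the goal; nudging the avoidance push slightly sideways (perpendicular to the obstacle direction) is a common way around that.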
Raven Tribune – Complete News World

India: 18 killed in hospital fire – Corona patients die in their beds

Keith Wise, May 1, 2021 (2 min read)

Every day India reaches new highs in infections and deaths, and more and more dramatic accidents are occurring in completely overcrowded hospitals. A fire at a clinic in the western Indian state of Gujarat claimed the lives of at least 18 corona patients early on Saturday morning (local time). The Times of India reports that locals and firefighters rescued another 50 people from the four-story hospital in Bharuch. They were taken to other hospitals.

[Photo: The fire is spreading in Indian hospitals]

Pictures show the remains of those who burned alive on stretchers and beds. The fire broke out in a Covid ward holding 19 patients and was extinguished about an hour later. The cause of the fire has not yet been determined.

On April 23, at least 13 corona patients died in a fire in an intensive care unit in Virar, north of Mumbai. Fires are common in India, including in hospitals; the reason is often poor or outdated equipment in the buildings.

[Photo: Ventilators in the corona ward of the clinic in western India]

India is the first country in the world to register more than 400,000 new coronavirus infections in a single day. During the same period, 3,523 people died in connection with Covid-19, according to Health Ministry figures. In the South Asian country of more than 1.3 billion people, hospitals and crematoriums have been overcrowded for days and there is a shortage of medical oxygen.

A total of 401,993 new coronavirus infections were reported, the ministry said; it was the ninth day in a row of comparably high figures. More than 19 million people in the country have been infected since the onset of the epidemic. With a total of 211,853 deaths, India ranks fourth in the world, behind the United States, Brazil and Mexico.

[Photo: Locals and rescue workers rescued about 50 people]

According to the government's plan, all adults are to be vaccinated from this Saturday. However, many Indian states reported that their vaccine supplies had run out or were already exhausted. So far, less than ten percent of people in India have received at least one dose of the vaccine; about two percent are fully vaccinated.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,692
{"url":"https:\/\/lamington.wordpress.com\/2009\/05\/29\/the-strengthened-hanna-neumann-conjecture\/","text":"The (strengthened) Hanna Neumann\u00a0Conjecture\n\nA few days ago, Joel Friedman posted a paper on the arXiv purporting to give a proof of the (strengthened) Hanna Neumann conjecture, a well-known problem in geometric group theory.\n\nSimply stated, the problem is as follows.\n\nConjecture (Hanna Neumann): Let $F$ be a free group, and let $G$ and $H$ be finitely generated subgroups. For a subgroup $E$ of $F$, let $\\rho(E) = \\max(\\text{rank}(E)-1,0)$. Then there is an inequality $\\rho(G \\cap H) \\le \\rho(G)\\rho(H)$.\n\nThis conjecture was further strengthened by Walter Neumann (her son):\n\nConjecture (strengthened Hanna Neumann): With notation above, there is an inequality $\\sum_x \\rho(G \\cap xHx^{-1}) \\le \\rho(G)\\rho(H)$ where the sum is taken over $x \\in H\\backslash F \/ G$, i.e. the double coset representatives.\n\nNotice by the way that since any free group embeds into $F_2$, the free group of rank $2$, one can assume that $F$ has rank $2$ above. This fact is implicit in the discussion below.\n\nFriedman\u2019s paper seems to be very carefully written, and contains some new ideas (which I do not yet really understand), namely an approach using sheaf theory. But in this post I want to restrict myself to some simple (and probably well-known) geometric observations.\n\nThe first step is to reduce the problem to a completely graph-theoretic one, following Stallings; in fact, Benson Farb tells me that he thinks this reduction was known to Stallings, or at least to Dicks\/Formanek (and in any case is very close to some ideas Stallings and Gersten introduced to study the problem; more on that in a later post). Friedman makes the following definition:\n\nDefinition: Let $\\mathcal{G}$ be a finite group and $g_1,g_2 \\in \\mathcal{G}$ be two elements (that do not necessarily generate $\\mathcal{G}$). The directed Cayley graph $C$ is the graph with vertex set $\\mathcal{G}$ and with a directed edge from $v$ to $vg_i$ labeled $i$ for each $v \\in \\mathcal{G}$ and $i=1,2$.\n\nIn other words, $C$ is a graph whose edges are oriented and labeled with either $1$ or $2$ in such a way that each vertex has at most one outgoing and one incoming edge with each label, and such that there is a transitive (on the vertices) free action of a group $\\mathcal{G}$ on $C$. (Note: for some reason, Friedman wants his group to act on the right, and therefore has directed edges from $v$ to $g_iv$, but this is just a matter of convention).\n\nFor any finite graph $K$, not necessarily connected, let $\\rho(K) = \\sum_j \\max(0,-\\chi(K_j))$; i.e. $\\rho(K) = \\sum_j \\rho(\\pi_1(K_j))$ where the sum is taken over the connected components $K_j$ of $K$. Friedman shows (but this reduction is well-known) that the SHNC is equivalent to the following graph-theoretic inequality:\n\nTheorem: The SHNC is equivalent to the following statement. For any graph $C$ as above, and any two subgraphs $K,K'$ we have $\\sum_{g \\in \\mathcal{G}} \\rho(K \\cap gK') \\le \\rho(K)\\rho(K')$.\n\nThe purpose of this blog entry is to show that there is a very simple proof of this inequality when $\\rho$ is replaced with $-\\chi$. This is not such a strange thing to do, since $\\rho$ and $-\\chi$ are equal for graphs without acyclic components (i.e. without components that are trees), and for \u201crandom\u201d graphs $K,K'$ one does not expect the difference between $\\rho$ and $-\\chi$ to be very big. 
The argument proceeds as follows. Suppose $K$ has $v$ vertices and $e_1,e_2$ edges of kind $1,2$ respectively, and define $v',e_1',e_2'$ similarly for $K'$. Then\n\n\u2022 $(-\\chi(K))(-\\chi(K')) = (v-e_1-e_2)(v'-e_1'-e_2')$\n\nOn the other hand, since Euler characteristic is local, we just need to count how many vertices and edges of each kind turn up in each $K \\cap gK'$. But this is easy: every vertex of $K$ is equal to exactly one translate of every vertex of $K'$, and similarly for edges of each kind. Hence\n\n\u2022 $\\sum_g -\\chi(K \\cap gK') = e_1e_1' + e_2e_2' - vv'$\n\nSo the inequality one wants to show is $e_1e_1' + e_2e_2' - vv' \\le (v-e_1-e_2)(v'-e_1'-e_2')$ which simplifies to\n\n\u2022 $v(e_1' + e_2') + v'(e_1 + e_2) \\le 2vv' + e_1e_2' + e_2 e_1'$\n\nOn the other hand, each graph $K,K'$ has at most two edges at any vertex with either label, and therefore we have inequalities $0 \\le e_1,e_2 \\le v, 0 \\le e_1',e_2' \\le v'$. Subject to these constraints, the inequality above is straightforward to prove. To see this, first fix some non-negative values of $v,v'$ and let $X$ be the four-dimensional cube of possible values of $e_1,e_2,e_1',e_2'$. Since both sides of the inequality are linear as a function of each $e_i$ or $e_i'$, if the inequality is violated at any point in $X$ one may draw a straight line in $X$ corresponding to varying one of the co-ordinates (e.g. $e_1$) while keeping the others fixed, and deduce that the inequality must be violated on one of the faces of $X$. Inductively, if the inequality is violated at all, it is violated at a vertex of $X$, which may be ruled out by inspection; qed.\n\nThis argument shows that the whole game is to understand the acyclic components of $K \\cap gK'$; i.e. those which are topologically trees, and therefore contribute $0$ to $\\rho$, but $-1$ to $-\\chi$.\n\nIncidentally, for all I know, this simple argument is explicitly contained in either Stallings\u2019 or Gersten\u2019s paper (it is surely not original in any case). If a reader can verify this, please let me know!\n\nUpdate: Walter Neumann informs me that this observation (that the inequality is true with $-\\chi$ in place of $\\rho$) is in his paper in which he introduces the SHNC! He further shows in that paper that for \u201cmost\u201d $G$, the SHNC is true for all $H$.\n\nUpdate (6\/29): Warren Dicks informs me that he was not aware of the reduction of SHNC to the graph-theoretic formulation described above. Friedman\u2019s webpage acknowledges the existence of an error in the paper, and says that he is working to correct it. One problem that I know of (discovered mostly by my student Steven Frankel) concerns the commutativity of the diagram on page 10.\n\nUpdate (10\/22): It has been a few months since I last edited this page, and Joel Friedman has not updated either the arXiv paper, or the statement on his webpage that he is \u201ctrying to fix the error\u201d. Since wikipedia mentions Friedman\u2019s announcement, I thought it would be worth going on record at this point to say that Friedman\u2019s arXiv paper (version 1 \u2014 the only version at the point I write this) is definitely in error, and that I believe the error is fundamental, and cannot be repaired (this is not to say that the paper does not contain some things of interest (it does), or that Friedman does not acknowledge the error (he does), just that it is worth clearing up any possible ambiguity about the situation for readers who are wondering about the status of the SHNC). 
The problem is the \u201cnot entirely standard\u201d (quote from Friedman\u2019s paper) diagrams, like the one on page 10. In particular, the claimed proof of Theorem 5.6, that the projections constructed in Lemma 5.5 (by a very general dimension counting argument) fit into a diagram with the desired properties is false. Any construction of projections satisfying the desired properties must be quite special. Nevertheless, one can certainly still define Friedman\u2019s sheaf $\\mathcal{K}$, and ask whether it has $\\rho(\\mathcal{K})=0$ (in Friedman\u2019s sense); this would, as far as I can tell, prove SHNC; however, I do not know of any reason why it should hold (or whether there are any counterexamples, which might exist even if SHNC is true).\n\nThis entry was posted in Commentary, Groups and tagged , , . Bookmark the permalink.","date":"2016-08-29 17:58:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 81, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9081225395202637, \"perplexity\": 201.60658138099834}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-36\/segments\/1471982290497.47\/warc\/CC-MAIN-20160823195810-00217-ip-10-153-172-175.ec2.internal.warc.gz\"}"}
Clervaux is a municipality in the north of Luxembourg, in the canton of Clervaux. It is known above all for the photographic exhibition The Family of Man, which is on display in the local castle.

Notable people
 Edward Steichen (1879–1973) – American photographer, creator of The Family of Man exhibition
 Halldór Kiljan Laxness (1902–1998) – Icelandic novelist, poet, playwright, essayist and translator, Nobel Prize laureate

External links
 Official website of the town
 Tourist information

Geography of Luxembourg
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,118
{"url":"https:\/\/www.student-circuit.com\/studentzone\/power-and-power-factor-in-ac-circuits\/","text":"StudentZone\n\n# Power and power factor in AC circuits\n\nAfter the introduction of the SMU ADALM1000 lets continue with the ninth part of the series with some small, basic measurements.\n\nBy Doug Mercer and Antoniu Miclaus, Analog Devices\n\nObjective: In this lab activity you will determine real, reactive, and apparent power in RC, RL, and RLC circuits. You will also determine the amount of capacitance that is required to correct the power factor in a series RL circuit.\n\nBackground: For time varying voltages and currents, the power delivered to a given load also varies with time. This time, varying power is called instantaneous power. The power at any instant in time can be either positive or negative. That is to say, power is going into the load and being dissipated as heat or stored in the load as energy when positive and coming out of the load (from the stored energy in the load) when negative. The real (or actual) power delivered to the load is the average value of the instantaneous power.\n\nFor ac sinusoidal voltages and currents, the real power (P), in units of watts, dissipated in an RC, RL, or RLC load circuit is dissipated in only the resistance part. There is no real power dissipated in an ideal reactive element like a capacitor or inductor. In a reactive element, energy is stored during half of the ac cycle and released (sourced) during the other half of the cycle. The power in a reactive element is referred to as reactive power (Q) and has the units of var (volt-ampere reactive).\n\nThe real power (P) dissipated in a load can be calculated as follows:\n\n$P={I}^{2}R$\n\nWhere R is the resistive part of the load and I is the (true) rms current.\n\nThe reactive power in a load can be calculated as follows:\n\n$Q={I}^{2}x$\n\nWhere X is the reactance of the load and I is the ac rms current.\n\nWhen a load has an ac rms voltage (V) across it and an ac rms current (I) through it, the apparent power (S) is the product of the rms voltage and rms current in volt-amperes (VA). The apparent power can be calculated as follows:\n\n$S=VI$\n\nIf the load has both resistive and reactive parts, apparent power represents neither real power nor reactive power. It is called apparent power because it uses the same equation as dc power but does not take into account the possible phase difference between the voltage and current waveforms.\n\nA power triangle (vector diagram) can be drawn using the real, reactive, and apparent power. The real power is along the horizontal axis, the reactive power is along the vertical axis, and the apparent power forms the hypotenuse of the triangle as shown in Figure 1.\n\nUsing geometry, S can be calculated by:\n\n$S=\\sqrt{\\left({p}^{2}+{Q}^{2}}\\right)$\n\nThe cosine of angle \u03b8 is defined as the power factor (pf). The power factor is the ratio of the real power (P) to the apparent power (S) and is calculated as follows:\n\n$pf\u2013\\mathrm{cos}\\left(\\theta \\right)=\\frac{P}{S}=\\frac{P}{\\left(VI\\right)}$\n\nWhere \u03b8 is the phase difference between the voltage waveform (across the load) and the current waveform (through the load). 
The real power can also be found from the apparent power by multiplying the apparent power by the power factor:

$P = S \times pf = VI\cos(\theta)$

The reactive power in an RC circuit, as in Figure 2, can be calculated using:

$Q = V_C I = I^2 X_C$

Where VC is the rms voltage across the capacitor, I is the rms capacitor current, and XC is the capacitive reactance.

The reactive power in an RL circuit, as in Figure 3, can be calculated using:

$Q = V_L I = I^2 X_L$

Where VL is the rms voltage across the inductor, I is the rms inductor current, and XL is the inductive reactance.

The reactive power in an RLC circuit, as in Figure 4, can be calculated using:

$Q = V_X I = I^2 X$

Where VX = VC – VL is the rms voltage across the combined total reactance, I is the rms current in the reactance, and X = XC – XL is the combined total reactance. The rms voltage across the total reactance is equal to the difference between the capacitor voltage (VC) and the inductor voltage (VL), because the voltages have a 180° phase difference (they are out of phase) with each other.

#### Power factor correction

Power factor correction is generally required for inductive loads like large ac motors. Because a power factor of 1 (unity) requires less peak current, it is advantageous to compensate for the inductance, bringing the power factor as close to unity as possible. By doing this we make the real power close to being equal to the apparent power (VI). The power factor is corrected by connecting a capacitor in parallel with the inductive load.

To find the correct capacitor value required (Figure 5), first we need to know the reactive power of the original RL circuit. This is done by drawing the power triangle and solving for the reactive power. The power triangle can be drawn from the real power, the apparent power, and the power factor angle, θ. Once the reactive power for the original load circuit has been found, the capacitive reactance, XC, needed to correct the power factor can be calculated as follows:

$Q = \frac{V^2}{X_C}$

Where V is the rms voltage across the RL circuit. Rearranging:

$X_C = \frac{V^2}{Q}$

With a value for XC, the required capacitance can be found based on the frequency (F) as follows:

$X_C = \frac{1}{2\pi F C}$

Rearranging:

$C = \frac{1}{2\pi F X_C}$

With the correct capacitor connected in parallel with the RL load (motor), the power factor will be close to unity, that is, the voltage and current are in phase with each other, and the real power will be nearly equal to the apparent power.

#### Materials

• Solderless breadboard and jumper wires
• One 47 Ω resistor
• One 100 Ω resistor
• One 10 μF capacitor
• One 47 mH inductor

#### Directions for the RC circuit

Construct the RC circuit shown in Figure 2 on your solderless breadboard with the component values R1 = 100 Ω and C1 = 10 μF. Three connections to the ALM1000 are required as shown by the green boxes. Open the ALICE oscilloscope software.

#### Procedure

On the right-hand side of the main scope window, enter 2.5 for the CA-V and CB-V offset adjustment.
In this experiment we need to apply ac signals (\u00b1voltage) across the load and we are referencing all the measurements to the 2.5 V common rail. Also enter 0 for the CH-A and CH-B vertical position settings (along bottom of scope window). The vertical scales should now be centered on 0 and go from \u20132.5 to +2.5. Set the CA-I vertical scale to 5 mA\/Div.\n\nSet the Channel A AWG Min value to 1.08 V and the max value to 3.92 V to apply a 2.84 V p-p, 1 V rms sine wave centered on 2.5 V as the input voltage to the circuit. Set the frequency to 250 Hz and the phase to 90\u00b0. From the AWG A Mode drop-down menu, select SVMI mode. From the AWG A Shape drop-down menu, select Sine. From the AWG B Mode drop-down menu, select Hi-Z mode.\n\nFrom the ALICE Curves drop-down menu, select CA-V, CA-I, and CB-V for display. From the Trigger drop-down menu, select CA-V and Auto Level.\n\nThis configuration uses the oscilloscope to look at the ac voltage and current signals driving the circuit on Channel A and the voltage across the resistance on Channel B. The voltage across the capacitor is simply the difference between Channel A and Channel B (select CAV \u2013 CBV from the Math drop-down menu). Make sure you have checked the Sync AWG selector.\n\nThe software can calculate the rms values for the Channel A voltage and current waveforms, as well as the Channel B voltage waveform. In addition, the software also calculates the rms value of the point-by-point difference between the Channel A and Channel B voltage waveforms. In this experiment this will be the rms value of the voltage across the capacitor. To display these values, select RMS and CA-CB RMS under -CA-V- and RMS under the -CA-I- sections of the Meas CA drop-down menu. Select RMS under -CB-V- section of the Meas CB drop-down menu. You may also wish to display the max (or positive peak) values for CA-V, CA-I, and CB-V.\n\nClick on the Run button. Adjust the time base until you have more than two cycles of the sine wave on the display grid. Set the Hold Off to 4.0 ms. You should see four traces: the Channel A voltage, Channel B voltage, Channel A current, and CA-CB voltage Math trace. Because 100 \u03a9 was chosen for the resistor and the vertical scale for the current is 5 mA\/Div, the trace of the current in the resistor will fall right on top of the trace for voltage across the resistor, Channel B, with its vertical scale set to 0.5 V\/Div (0.5 mA time 100 \u03a9 = 0.5 V).\n\nRecord the rms value for the voltage across the total RC circuit (CHA V RMS), the rms value for the current through R1, which is also the current in Channel A in this series circuit (CHA I RMS), the rms value for the voltage across the resistor (CHB V RMS), and the rms value for the voltage across the capacitor (A-B RMS).\n\nBased on these values, calculate the real power (P) for the RC circuit. Calculate the reactive power (Q). Calculate the apparent power (S).\n\nBased on your calculated values for P, Q, and S, draw the power triangle, as in Figure 1. Determine the power factor (pf) and \u03b8 for the RC circuit.\n\nThe oscilloscope traces are displaying the time relationship between the voltage (the green Channel A voltage trace) and current (the cyan Channel A current trace). Using the display markers or the time cursor, measure the time difference between the zero crossings of the two traces and, from that, the phase angle between them. 
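(The procedure above leaves the conversion implicit, so as a reminder: for a sinusoid of frequency F, a time shift of Δt between the zero crossings corresponds to a phase angle θ = 360° × F × Δt. At the 250 Hz test frequency used here, a measured shift of 0.4 ms, for example, would give θ = 360 × 250 × 0.0004 = 36° and hence pf = cos(36°) ≈ 0.81.)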
Use this angle (\u03b8) to calculate the power factor.\n\nHow does this compare to the value you obtained from the P, Q, and S and the power triangle? Is the power factor lagging or leading and why?\n\n#### Directions for the RL circuit\n\nFirst measure the dc resistance of the 47 mH inductor using the dc ohmmeter tool in ALICE. The total series resistance of the RL circuit will be the inductor resistance plus the 47 \u03a9 external resistor R1. The total resistance will need to be factored into your calculations for the real and reactive power.\n\nConstruct the RL circuit shown in figure 5 on your solderless breadboard with the component values R1 = 47 \u03a9 and L1 = 47 mH.\n\n#### Procedure\n\nClick on the Run button. Adjust the time base until you have more than two cycles of the sine wave on the display grid. Set the Hold Off to 4.0 ms. You should see four traces: the Channel A voltage, Channel B voltage, Channel A current, and CA-CB voltage Math trace.\n\nRecord the rms value for the voltage across the total RL circuit (CHA V RMS), the rms value for the current through R1, which is also the current in channel A in this series circuit (CHA I RMS), the rms value for the voltage across the resistor (CHB V RMS), and the rms value for the voltage across the inductor (A-B RMS).\n\nBased on these values, calculate the real power (P) for the RL circuit. Calculate the reactive power (Q). Calculate the apparent power (S).\n\nBased on your calculated values for P, Q, and S, draw the power triangle, as in Figure 1. Determine the power factor (pf) and \u03b8 for the RL circuit.\n\nThe oscilloscope traces are displaying the time relationship between the voltage (the green Channel A voltage trace) and current (cyan channel A current trace). Using the display markers or time cursor measure the time difference between the zero crossings of the two traces and from that the phase angle between them. Use this angle (\u03b8) to calculate the power factor.\n\nHow does this compare to the value you obtained from the P, Q, and S and the power triangle? Is the power factor lagging or leading and why?\n\nDirections for the RLC circuit\n\nConstruct the RLC circuit shown in Figure 7(a) on your solderless breadboard with the component values R1 = 47 \u03a9, C1 = 10 \u03bcF, and L1 = 47 mH.\n\n#### Procedure\n\nFor the RLC circuit you will need measurements for the ac rms voltage across each element. In the configuration shown in Figure 7(a), with Channel B connected to the junction of C1 and L1, we can get the rms voltage across C1 from the difference between the CA and CB waveforms. With Channel B connected to the junction of L1 and R1, we can get the rms voltage across R1 directly from the CB waveform. Record the rms value for the voltage across the total RLC circuit (CHA V RMS), the rms value for the current through R1, which is also the current in Channel A in this series circuit (CHA I RMS), the rms value for the voltage across the resistor (CHB V RMS) and the rms value for the voltage across the capacitor (A-B RMS) when CHB is connected to the junction of C1 and L1 and the combined reactance of L1 and C1 when CHB is connected to the junction of L1 and R1.\n\nWe still need the rms voltage across the inductor L1. By swapping the order of the components in this series connected circuit, as shown in Figure 7(c), we do not change the total overall impedance of the load circuit. 
However, we can now obtain the rms voltage across L1 from the difference between the CA and CB waveforms as we did with the capacitor in Figure 7(a). Record the rms value for the voltage across the total RLC circuit (CHA V RMS), the rms value for the current through R1, which is also the current in Channel A in this series circuit (CHA I RMS), the rms value for the voltage across the resistor (CHB V RMS), and the rms value for the voltage across the inductor (A-B RMS). Check that the value across the total circuit as well as the current through the load and the value across R1 is the same as what was measure for Figure 7(a). Why is this true?\n\nBased on these values calculate the real power (P) for the RLC circuit. Calculate the reactive power (Q) for the combined LC reactance and the L and C individually. Calculate the apparent power (S).\n\nIncrease the frequency of Channel A from 250 Hz to 500 Hz and remeasure the rms voltages for the RLC circuit. How has that changed the real, reactive, and apparent power? Is the load current lagging or leading and why?\n\nDecrease the frequency of Channel A from to 125 Hz and remeasure the rms voltages for the RLC circuit. How has that changed the real, reactive, and apparent power? Is the load current lagging or leading and why?\n\nDirections for power factor correction\n\nThe circuit shown in Figure 8 for the power factor correction exercise is the same as Figure 6 with the addition of capacitor C1 in parallel with L1.\n\nBased on your measurements from Figure 5 and the equations in the power factor correction section in the background information for this lab activity, calculate the appropriate value for C1 at 250 Hz. Use the closest standard value (or parallel combination of standard values) capacitor for C1.\n\n#### Procedure\n\nAs you did for the simple RL circuit record the rms value for the voltage across the total RL circuit (CHA V RMS), the rms value for the current through R1, which is also the current in Channel A in this series circuit (CHA I RMS), the rms value for the voltage across the resistor (CHB V RMS), and the rms value for the voltage across the inductor (A-B RMS).\n\nBased on these values, calculate the real power (P) for the RL circuit. Calculate the reactive power (Q). Calculate the apparent power (S).\n\nBased on your calculated values for P, Q, and S, draw the power triangle, as in Figure 1. Determine the power factor (pf) and \u03b8 for the pf corrected RL circuit. Compare this pf to the one you calculated for just the RL load circuit. How close was the calculated capacitor value to the optimal value needed to make the pf equal to unity? Explain any differences.\n\n### Appendix\n\n#### Using other component values\n\nIt is possible to substitute other component values in cases where the specified values are not readily available. The reactance of a component (XC or XL) scales with frequency. For example, if 4.7 mH inductors are available rather than the 47 mH called for, all that is needed to do is increase the test frequency from 250 Hz to 2.5 kHz. The same would be true when substituting a 1.0 \u03bcF capacitor for the 10.0 \u03bcF capacitor specified.\n\n#### Using the RLC impedance meter tool\n\nALICE desktop includes an impedance analyzer\/RLC meter that can be used to measure the series resistance (R) and reactance (X). As part of this lab activity it might be informative to use this tool to measure the components R, L, and C used to confirm your test results.\n\n### Questions\n\n1. 
In general, which is the effect of improving the power factor?\n2. Which is the most common way of improving it?\n\n### Notes\n\nAs in all the ALM labs, we use the following terminology when referring to the connections to the ALM1000 connector and configuring the hardware. The green shaded rectangles indicate connections to the ADALM1000 analog I\/O connector. The analog I\/O channel pins are referred to as CA and CB. When configured to force voltage\/measure current, \u2013V is added (as in CA-V) or when configured to force current\/measure voltage, \u2013I is added (as in CA-I). When a channel is configured in the high impedance mode to only measure voltage, \u2013H is added (as in CA-H).\n\nScope traces are similarly referred to by channel and voltage\/current, such as CA-V and CB-V for the voltage waveforms, and CA-I and CB-I for the current waveforms.\n\nWe are using the ALICE Rev 1.1 software for those examples here. File: alice-desktop-1.1-setup.zip.\n\nThe ALICE desktop software provides the following functions:\n\n\u2022 A 2-channel oscilloscope for time domain display and analysis of voltage and current\n\u2022 The 2-channel arbitrary waveform generator (AWG) controls.\n\u2022 The X and Y display for plotting captured voltage and current vs. voltage and current data, as well as voltage waveform histograms.\n\u2022 The 2-channel spectrum analyzer for frequency domain display and analysis of voltage waveforms.\n\u2022 The Bode plotter and network analyzer with built-in sweep generator.\n\u2022 An impedance analyzer for analyzing complex RLC networks and as an RLC meter and vector\n\u2022 A dc ohmmeter measures unknown resistance with respect to known external resistor or known internal 50 \u03a9.\n\u2022 Board self-calibration using the AD584 precision 2.5 V reference from the ADALP2000 analog parts kit.\n\u2022 ALICE M1K voltmeter.\n\u2022 ALICE M1K meter source.\n\u2022 ALICE M1K desktop tool.","date":"2023-03-31 13:46:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 13, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6578003168106079, \"perplexity\": 1240.7790116754477}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296949642.35\/warc\/CC-MAIN-20230331113819-20230331143819-00734.warc.gz\"}"}
\section{Introduction} \label{sec:intro} The solar atmosphere is a highly dynamic and structured plasma that is able to support a wide variety of magneto-acoustic waves and oscillations. Each layer of the solar atmosphere, from the photosphere to the corona, is magnetically connected to the others via the all pervading magnetic field. The omnipresence of the waves throughout the atmosphere is becoming well documented as new and exciting techniques are being developed to help observe and study the waves (see, e.g. \citealp{BANETAL2007}; \citealp{TOMetal2007}). After transverse coronal loop oscillations were first observed by TRACE in $1998$ (\citealp{ASCetal1999}; \citealp{NAKetal1999}), the phenomenon became one of the hot topics within solar physics. In the first theoretical interpretation of these oscillations, a coronal loop was modelled as a straight magnetic cylinder with the density constant inside and outside. Since then, a number of more complicated and realistic models have been considered. For a recent review on the theory of transverse oscillations of a coronal loop see, e.g., \cite{RUDERD2009}. Although the transverse coronal loop oscillations are interesting on their own, their main importance is related to the fact that they are a powerful tool of coronal seismology. \cite{NAKOFM2001} demonstrated this by using the observations of transverse coronal loop oscillations to estimate the magnitude of the magnetic field in the corona, while \cite{ANDetal2005a} suggested to use these observations to estimate the atmospheric scale height in the corona. In this paper we continue to study the transverse oscillations of coronal loops. Coronal loops with elliptical cross-sections and a \emph{constant} density profile have been studied previously in both cold (\citealp{RUD2003}) and { finite-}\/$\beta$ (\citealp{ERDMOR2009}) plasmas. Now, we consider oscillations of loops with the density \emph{varying} along the loop and a constant elliptic cross-section. The paper is organized as follows. In the next section we formulate the problem. In Sect.~\ref{sec:derivation} we derive the governing equations for non-axisymmetric oscillations of a coronal loop with an elliptic cross-section in the thin tube approximation. In Sect.~\ref{sec:seismology} we study the implication of our analysis on coronal seismology. Section~\ref{sec:summary} contains the summary of the obtained results and our conclusions. \section{Problem formulation} \label{sec:formulation} We model a coronal loop as a straight magnetic tube with an elliptical cross-section. The cold plasma approximation is used. The density varies along the tube, while the cross-section remains constant. In Cartesian coordinates $x,\,y,\,z$ the loop axis coincides with the $z$\/-axis. The equilibrium magnetic field is given by $\vec{B} = B\hat{\vec{z}}$\/, where $B$ is constant and $\hat{\vec{z}}$ is the unit vector in the $z$\/-direction. The plasma motion is governed by the linearised ideal MHD equations, \begin{equation} \frac{\partial^2\vec{\xi}}{\partial t^2} = \frac1{\mu_0\rho}(\nabla\times\vec{b})\times\vec{B}, \label{eq:ideal_v} \end{equation} \begin{equation} \vec{b} = \nabla\times(\vec{\xi}\times\vec{B}). \label{eq:ideal_b} \end{equation} Here $\vec{\xi}$ is the plasma displacement, $\vec{b}$ the magnetic field perturbation, $\rho(z)$ the equilibrium density, and $\mu_0$ the magnetic permeability of free space; $\rho(z) = \rho_{\rm i}(z)$ inside the tube and $\rho(z) = \rho_{\rm e}(z)$ outside the tube. 
\begin{figure} \centering \includegraphics[scale=0.7]{ellipcoord2.eps} \caption{Sketch showing the elliptical coordinate system used to describe the loop cross-section. { The open and closed curves show the $s$ and $\varphi$ coordinate lines respectively. The thick closed curve shows the tube boundary.}}\label{fig:1} \end{figure} Let us introduce the elliptic coordinates $s$ and $\varphi$ in the $xy$\/-plane (see Fig.~\ref{fig:1}). The Cartesian coordinates are expressed in terms of elliptic coordinates as \begin{equation} x = \sigma\cosh s\cos\varphi, \qquad y = \sigma\sinh s\sin\varphi, \label{eq:ellip_coord} \end{equation} where $\sigma$ is a quantity with the dimension of length, $s$ varies from $0$ to $\infty$\/, and $\varphi$ from $-\pi$ to $\pi$\/. In the elliptic coordinates the equation of the tube boundary is $s = s_0$\/. Then the large and small half-axes of the tube elliptic cross-section are in the $x$ and $y$\/-direction, and they are given by \begin{equation} a = \sigma\cosh s_0, \qquad b = \sigma\sinh s_0 . \label{eq:axes} \end{equation} At the tube boundary the normal component of the displacement, $\xi_s$\/, and the magnetic pressure perturbation, $P = \vec{b}\cdot\vec{B}/\mu_0$\/, has to be continuous, \begin{equation} [\hspace*{-0.7mm}[\xi_s]\hspace*{-0.7mm}] = 0, \quad [\hspace*{-0.7mm}[P]\hspace*{-0.7mm}] = 0 \quad \mbox{at} \quad s = s_0, \label{eq:jumps} \end{equation} where $[\hspace*{-0.7mm}[f]\hspace*{-0.7mm}]$ indicates the jumps of function $f$ across the boundary defined as \begin{equation} [\hspace*{-0.7mm}[f]\hspace*{-0.7mm}] = \lim_{\varepsilon\to 0} [f(s + \varepsilon) - f(s - \varepsilon)]. \label{eq:jump-def} \end{equation} The magnetic field lines at the loop foot points are frozen in the dense photospheric plasma, so that \begin{equation} \vec{\xi} = 0 \quad \mbox{at} \quad z = \pm L/2, \label{eq:frozen-xi} \end{equation} where $L$ is the loop length. It follows from Eq.~(\ref{eq:ellip_coord}) that the points with the elliptical coordinates $s = 0$, $\varphi = \varphi_0$\/, and $s=0$, $\varphi = -\varphi_0$ are the same point in the $xy$\/-plane. This implies that $P$ and $\xi_s$ have to satisfy the boundary conditions \begin{equation} P(0,\varphi) = P(0,-\varphi), \qquad \xi_s(0,\varphi) = -\xi_s(0,-\varphi). \label{eq:reg_pxi} \end{equation} Equations (\ref{eq:ideal_v}) and (\ref{eq:ideal_b}) together with the boundary conditions (\ref{eq:jumps}), (\ref{eq:frozen-xi}) and (\ref{eq:reg_pxi}) will be used in the next section to derive the governing equations for non-axisymmetric oscillations in the thin tube approximation. \section{Derivation of governing equations} \label{sec:derivation} The analysis in this section is similar to one used by \cite{DYMRUD2005} to derive the governing equation for a thin tube with a circular tube cross-section. We begin by noting that, in accordance with Eq.~(\ref{eq:ideal_v}), $\xi_z = 0$. 
The system of Eqs.~(\ref{eq:ideal_v}) and (\ref{eq:ideal_b}) can then be transformed to \begin{equation} \frac{\partial^2\vec{\xi}}{\partial t^2} = -\frac1\rho\nabla_\perp P + \frac B{\mu_0\rho}\frac{\partial\vec{b}_\perp}{\partial z}, \label{eq:transf_v} \end{equation} \begin{equation} \vec{b}_\perp = B\frac{\partial\vec{\xi}}{\partial z}, \label{eq:transf_b} \end{equation} \begin{equation} P = -\rho v_A^2\nabla\cdot\vec{\xi}, \label{eq:P2xi} \end{equation} where $v_A$ is the Alfv\'en speed defined by $v_A^2 = B^2/\mu_0\rho$\/, and the operator $\nabla_\perp$ and component of the magnetic field perturbation perpendicular to the $z$\/-axis are given by \begin{equation} \nabla_\perp = \nabla - \hat{\vec{z}}\frac\partial{\partial z}, \qquad \vec{b}_\perp = \vec{b} - \vec{b}\cdot\hat{\vec{z}}. \label{perp} \end{equation} Eliminating $\vec{b}_\perp$ from Eqs.~(\ref{eq:transf_v}) yields \begin{equation} \frac{\partial^2\vec{\xi}}{\partial t^2} - v_A^2\frac{\partial^2\vec{\xi}}{\partial z^2} = -\frac1\rho\nabla_\perp P. \label{eq:momentum} \end{equation} Taking the divergence of this equation and using Eq.~(\ref{eq:P2xi}) we arrive at the equation for $P$\/, \begin{equation} \frac{\partial^2 P}{\partial t^2} - v_A^2\frac{\partial^2 P}{\partial z^2} = v_A^2\nabla_\perp^2 P. \label{eq:Pfull} \end{equation} Now we use the thin tube approximation. To do this we note that the characteristic spatial scale in the $z$\/-direction is $L$\/, and the characteristic time of the problem is $L/\bar{v}_A$\/, where $\bar{v}_A$ is a typical value of Alfv\'en speed. In what follows we only consider the perturbations that decay at the distance of a few $a$ from the tube. Then the characteristic spatial scale in the $x$ and $y$\/-direction is $a$\/. It follows from this analysis that the ratio of the left-hand side of Eq.~(\ref{eq:Pfull}) to its right-hand side is of the order of $(a/L)^2 \ll 1$, so that we can neglect the left-hand side. Then, using the expression for $\nabla_\perp^2$ in the elliptical coordinates (e.g. \citealp{KornKorn}), we obtain the equation for $P$ in the thin tube approximation, \begin{equation} \frac{\partial^2 P}{\partial s^2} + \frac{\partial^2 P}{\partial\varphi^2} = 0. \label{eq:Papprox} \end{equation} The solution to this equation has to satisfy the first regularity condition in Eq.~(\ref{eq:reg_pxi}), and the second boundary condition in Eq.~(\ref{eq:jumps}). Using Eq.~(\ref{eq:momentum}) we rewrite the second regularity condition in terms of $P$\/, \begin{equation} \frac{\partial P(s,\varphi)}{\partial s}\bigg|_{s=0} = -\frac{\partial P(s,-\varphi)}{\partial s}\bigg|_{s=0}. \label{eq:reg_derP} \end{equation} To derive the governing equations for non-axisymmetric tube oscillations we solve Eqs.~(\ref{eq:momentum}) and (\ref{eq:Papprox}) inside and outside the tube, and then match the two solutions at the tube boundary. It is straightforward to obtain the general solution to Eq.~(\ref{eq:Papprox}) inside the tube satisfying the regularity conditions Eqs.~(\ref{eq:reg_pxi}) and (\ref{eq:reg_derP}), \begin{equation} P^{\rm i} = \sum_{n=1}^\infty\left[C_n^{\rm i}\cosh(ns)\cos(n\varphi) + D_n^{\rm i}\sinh(ns)\sin(n\varphi)\right], \label{eq:p_int} \end{equation} where $C_n^{\rm i}$ and $D_n^{\rm i}$ are arbitrary functions of $t$ and $z$\/. The solution outside the tube has to decay as $s \to \infty$\/. 
Hence, its general form is \begin{equation} P^{\rm e} = \sum_{n=1}^\infty e^{-ns}\left[C_n^{\rm e}\cos(n\varphi) + D_n^{\rm e}\sin(n\varphi)\right], \label{eq:p_ext} \end{equation} where once again $C_n^{\rm e}$ and $D_n^{\rm e}$ are arbitrary functions of $t$ and $z$\/. Substituting Eqs.~(\ref{eq:p_int}) and (\ref{eq:p_ext}) in Eq.~(\ref{eq:momentum}) and using the expression for $\nabla_\perp$ in the elliptical coordinates (e.g. \citealp{KornKorn}), \begin{equation} \nabla_\perp = \frac1{\sigma\Theta} \left(\hat{\vec{s}}\frac\partial{\partial s} + \hat{\vec{\varphi}}\frac\partial{\partial\varphi}\right), \quad \Theta = (\sinh^2 s + \sin^2\varphi)^{1/2}, \label{eq:nabla2coord} \end{equation} where $\hat{\vec{s}}$ and $\hat{\vec{\varphi}}$ are the unit vectors in the $s$ and $\varphi$\/-direction, we obtain the expressions for $\xi_s$ inside and outside the tube, \begin{equation} \xi_s^{\rm i} = \frac1{\sigma\Theta}\sum_{n=1}^\infty \left[F_n^{\rm i}\sinh(ns)\cos(n\varphi) + G_n^{\rm i}\cosh(ns)\sin(n\varphi)\right], \label{eq:xi_int} \end{equation} \begin{equation} \xi_s^{\rm e} = \frac1{\sigma\Theta}\sum_{n=1}^\infty e^{-ns} \left[F_n^{\rm e}\cos(n\varphi) + G_n^{\rm e}\sin(n\varphi)\right]. \label{eq:xi_ext} \end{equation} In these equations $F_n^{\rm i}$\/, $G_n^{\rm i}$\/, $F_n^{\rm e}$ and $G_n^{\rm e}$ are functions of $t$ and $z$\/. They are related to the functions $C_n^{\rm i}$\/ $D_n^{\rm i}$\/, $C_n^{\rm e}$ and $D_n^{\rm e}$ by \begin{equation} \frac{\partial^2 F_n^{\rm i}}{\partial t^2} - v_{A\rm i}^2\frac{\partial^2 F_n^{\rm i}}{\partial z^2} = -\frac{C_n^{\rm i}}{\rho_{\rm i}}, \label{eq:FCint} \end{equation} \begin{equation} \frac{\partial^2 G_n^{\rm i}}{\partial t^2} - v_{A\rm i}^2\frac{\partial^2 G_n^{\rm i}}{\partial z^2} = -\frac{D_n^{\rm i}}{\rho_{\rm i}}, \label{eq:GDint} \end{equation} \begin{equation} \frac{\partial^2 F_n^{\rm e}}{\partial t^2} - v_{A\rm e}^2\frac{\partial^2 F_n^{\rm e}}{\partial z^2} = \frac{C_n^{\rm e}}{\rho_{\rm e}}, \label{eq:FCext} \end{equation} \begin{equation} \frac{\partial^2 G_n^{\rm e}}{\partial t^2} - v_{A\rm e}^2\frac{\partial^2 G_n^{\rm e}}{\partial z^2} = \frac{D_n^{\rm e}}{\rho_{\rm e}}. \label{eq:GDext} \end{equation} Substituting Eqs.~(\ref{eq:p_int}) and (\ref{eq:p_ext}) in the second boundary condition in Eq.~(\ref{eq:jumps}) we obtain \begin{equation} C_n^{\rm i}\cosh(ns_0) = e^{-ns_0}C_n^{\rm e}, \quad D_n^{\rm i}\sinh(ns) = e^{-ns_0}D_n^{\rm e} . \label{eq:CDint-ext} \end{equation} Substituting Eqs.~(\ref{eq:xi_int}) and (\ref{eq:xi_ext}) in the first boundary condition in Eq.~(\ref{eq:jumps}) yields \begin{equation} F_n^{\rm i}\sinh(ns_0) = e^{-ns_0}F_n^{\rm e}, \quad G_n^{\rm i}\cosh(ns) = e^{-ns_0}G_n^{\rm e} . \label{eq:FGint-ext} \end{equation} Eliminating $C_n^{\rm i}$\/, $C_n^{\rm e}$ and $F_n^{\rm e}$ from Eqs.~(\ref{eq:FCint}), (\ref{eq:FCext}), (\ref{eq:CDint-ext}) and (\ref{eq:FGint-ext}) we obtain the equation for $F_n^{\rm i}$\/, \begin{equation} \frac{\partial^2 F_n}{\partial t^2} - c_{n\rm c}^2\frac{\partial^2 F_n}{\partial z^2} = 0, \quad c_{n\rm c}^2 = \frac{B^2_0[1+\tanh(ns_0)]} {\mu_0[\rho_{\rm i}+\rho_{\rm e}\tanh(ns_0)]}, \label{eq:govF} \end{equation} where we have dropped the superscript `i'. 
Eliminating $D_n^{\rm i}$\/, $D_n^{\rm e}$ and $G_n^{\rm e}$ from Eqs.~(\ref{eq:GDint}), (\ref{eq:GDext}), (\ref{eq:CDint-ext}) and (\ref{eq:FGint-ext}) we obtain the equation for $G_n^{\rm i}$\/, \begin{equation} \frac{\partial^2 G_n}{\partial t^2} - c_{n\rm s}^2\frac{\partial^2 G_n}{\partial z^2} = 0, \quad c_{n\rm s}^2 = \frac{B^2_0[1+\tanh(ns_0)]} {\mu_0[\rho_{\rm i}\tanh(ns_0)+\rho_{\rm e}]}, \label{eq:govG} \end{equation} where we have once again dropped the superscript `i'. It follows from Eqs.~(\ref{eq:frozen-xi}) and (\ref{eq:xi_int}) that $F_n$ and $G_n$ have to satisfy the boundary conditions \begin{equation} F_n = 0, \quad G_n = 0 \quad \mbox{at} \quad z = \pm L/2. \label{eq:frozen-FG} \end{equation} In Eqs.~(\ref{eq:govF}) and (\ref{eq:govG}) $n = 1$ corresponds to kink modes, and $n > 1$ to fluting modes. In the elliptical coordinates the loop axis ($x=y=0$) is defined by $s = 0$ and $\varphi = \pi/2$\/. It follows from Eq.~(\ref{eq:xi_int}) that the kink mode described by Eq.~(\ref{eq:govF}) does not displace the loop axis in the $s$\/-direction which, at the loop axis, coincides with the $y$\/-direction. Hence, the loop axis displacement is in the $x$\/-direction, i.e. this mode is polarised in the direction of the larger axis of the tube cross-section. The kink mode described by Eq.~(\ref{eq:govG}) displaces the loop axis in the $s$\/-direction. It is straightforward to show that it does not displace it in the $\varphi$\/-direction which, at the loop axis, coincides with the $x$\/-direction. Hence, the loop axis displacement is in the $y$\/-direction, i.e. this mode is polarised in the direction of the smaller axis of the tube cross-section. When the density is constant, we can use Eqs.~(\ref{eq:govF}) and (\ref{eq:govG}) with the boundary conditions Eq.~(\ref{eq:frozen-FG}) to recover the results obtained by \cite{RUD2003}. Let us look for the eigenmodes and restrict the analysis to the fundamental modes in the $z$\/-direction. This implies that we take $F_n$ and $G_n$ proportional to $e^{-i\omega t}\cos(\pi z/L)$. Then we immediately obtain that the eigenfrequencies of the boundary value problem defined by Eq.~(\ref{eq:govF}) and the boundary conditions (\ref{eq:frozen-FG}) are given by \begin{equation} \omega_{n\rm c}^2 = \frac{\pi^2 c_{n\rm c}^2}{L^2} = \frac{\pi^2 B^2_0[1+\tanh(ns_0)]} {\mu_0 L^2[\rho_{\rm i}+\rho_{\rm e}\tanh(ns_0)]}, \quad n = 1,2,\dots, \label{eq:Feigen} \end{equation} and the eigenfrequencies of the boundary value problem defined by Eq.~(\ref{eq:govG}) and the boundary conditions (\ref{eq:frozen-FG}) are given by \begin{equation} \omega_{n\rm s}^2 = \frac{\pi^2 c_{n\rm s}^2}{L^2} = \frac{\pi^2 B^2_0[1+\tanh(ns_0)]} {\mu_0 L^2[\rho_{\rm i}\tanh(ns_0)+\rho_{\rm e}]}. \quad n = 1,2,\dots \label{eq:Geigen} \end{equation} In particular, the squares of eigenfrequencies of the kink modes are given by \begin{equation} \omega_{1\rm c}^2 = \frac{\pi^2 B^2_0(a + b)} {\mu_0 L^2(a\rho_{\rm i} + b\rho_{\rm e})}, \quad \omega_{1\rm s}^2 = \frac{\pi^2 B^2_0(a + b)} {\mu_0 L^2(b\rho_{\rm i} + a\rho_{\rm e})}. \label{eq:kink-eigen} \end{equation} It is straightforward to see that the eigenfrequencies satisfy \begin{equation} \omega_{1\rm c} < \omega_{2\rm c} < \dots < \omega_{2\rm s} < \omega_{1\rm s}. 
\label{eq:order} \end{equation} \section{Implication on coronal seismology} \label{sec:seismology} After \cite{VERetal2004} reported two cases of observations of the transverse coronal loop oscillations where, in addition to the fundamental harmonic, the first overtone was also observed, \cite{ANDetal2005a} suggested that observations of this nature could be used to estimate the scale height in the solar corona. \cite{ANDetal2005a} assumed that an oscillating loop has a half-circle shape and a circular cross-section, and that it is in the vertical plane. They also assumed that the atmosphere is isothermal. In that case, the dependence of the plasma density on $z$ is given by \begin{equation} \rho_{\rm e} = \rho_{\rm f}\exp\left(-\frac L{\pi H}\cos\frac{\pi z}L\right), \quad \rho_{\rm i} = \zeta\rho_{\rm e}, \label{eq:density} \end{equation} where $H$ is the atmospheric scale height, $\rho_{\rm f}$ the plasma density at the loop foot points outside the loop, and $\zeta > 1$ a constant. \cite{ANDetal2005a} calculated the ratio of frequencies of the first overtone and fundamental mode and found that this ratio is a monotonically decreasing function of the parameter $L/H$\/. Hence, if we know the ratio of frequencies and $L$\/, we can determine $H$\/. For a recent review of coronal seismology using kink oscillation overtones see \cite{ANDetal2009}. A very important question is how robust this method is. \cite{DYMRUD2006b} and \cite{MORERD2009} have found that the account of the loop shape can moderately affect the estimates of the atmospheric scale height. \cite{RUD2007} has shown that the twist of magnetic field lines in the loop can be safely neglected when estimating the atmospheric scale height in the corona. \cite{ROBetal2010} found that the estimates of the atmospheric scale height obtained using the two-thread model are exactly the same as those obtained using the model of a monolithic coronal loop with a circular cross-section of constant radius. Recently \cite{RUD2010} showed that the account of stationary time-independent siphon flows in coronal loops has little influence on the estimates of the coronal scale height found using the frequency ratio. On the other hand, \cite{RUDetal2008} and \cite{VERERDJES2008} found that the account of the loop expansion can strongly affect these estimates. In this section we study what effect the elliptic cross-section has on the estimates of the coronal scale height. As we have already seen, when a loop has an elliptic cross-section, its kink oscillations are polarised along the axes of the cross-section. The kink mode polarised in the direction of the larger axis is described by Eq.~(\ref{eq:govF}) with $n = 1$, while the kink mode polarised in the direction of the smaller axis is described by Eq.~(\ref{eq:govG}) with $n = 1$. Let us consider the solutions to these equations in the form of eigenmodes and take $F_1$ and $G_1$ proportional to $\exp(-i\omega t)$.
Using Eq.~(\ref{eq:density}) we obtain \begin{equation} c_{1\rm c}^2 = \frac{B^2_0(a+b)}{\mu_0\rho_{\rm f}(a\zeta + b)} \exp\left(\frac L{\pi H}\cos\frac{\pi z}L\right), \label{eq:phase-speed-c} \end{equation} \begin{equation} c_{1\rm s}^2 = \frac{B^2_0(a+b)}{\mu_0\rho_{\rm f}(b\zeta + a)} \exp\left(\frac L{\pi H}\cos\frac{\pi z}L\right) \label{eq:phase-speed-s} \end{equation} Then, introducing \begin{equation} \Omega_{\rm c}^2 = \frac{\mu_0\rho_{\rm f}(a\zeta + b)\omega^2}{B^2_0(a+b)}, \quad \Omega_{\rm s}^2 = \frac{\mu_0\rho_{\rm f}(b\zeta + a)\omega^2}{B^2_0(a+b)}, \label{eq:omega-scale} \end{equation} we reduce Eqs.~(\ref{eq:govF}) and (\ref{eq:govG}) with $n = 1$ to \begin{equation} \frac{d^2 U}{dz^2} + \Omega^2 U \exp\left(\frac L{\pi H}\cos\frac{\pi z}L\right) = 0, \label{eq:govern-scale} \end{equation} where either $U = F_1$ and $\Omega = \Omega_{\rm c}$\/, or $U = G_1$ and $\Omega = \Omega_{\rm s}$\/, and $U$ satisfies the boundary conditions $U = 0$ at $z = \pm L/2$\/. Since Eq.~(\ref{eq:govern-scale}) does not contain $a$ and $b$\/, the eigenvalues of the boundary value problem for $U$ are independent of $a$ and $b$\/. In particular, they are the same as those for a loop with the circular cross-section. Since $$ \frac{\Omega_{2\rm c}}{\Omega_{1\rm c}} = \frac{\Omega_2}{\Omega_1}, \qquad \frac{\Omega_{2\rm s}}{\Omega_{1\rm s}} = \frac{\Omega_2}{\Omega_1}, $$ it follows that we obtain the same estimates of the atmospheric scale height no matter if we use the observation of the kink oscillations polarised in the direction of the larger or smaller axis. The estimates are also independent of $a$ and $b$ and are the same as those obtained for a loop with the circular cross-section. \section{Summary and conclusions} \label{sec:summary} In this paper we have studied non-axisymmetric oscillations of straight magnetic loops with a constant elliptic cross-section and density varying along the loop. We derived the governing equations for kink and fluting modes in the thin tube approximation. All these equations are similar to the equation describing kink oscillations of a straight tube with the circular cross-section. We found that there are two kink modes, one polarised in the direction of larger axis of the elliptic cross-section, and the other polarised in the direction of smaller axis. The frequencies of fundamental mode and overtones of these two kinds of kink oscillation are different. However, the ratio of frequencies of the first overtone and the fundamental mode is the same for both kink oscillations, and it is independent of the ratio of the ellipse half-axes $a/b$\/. This result implies that we obtain the same estimates of the atmospheric scale height no matter if we use the observation of the kink oscillations polarised in the direction of larger or smaller axis. The estimates are also the same as those obtained for a loop with the circular cross-section. This demonstrates that the model shows a very robust nature when considering a static plasma. However, if the plasma in the loops is dynamic (i.e. time dependent) then the ability of the static model to provide accurate estimates may become questionable (see e.g. \citealp{MORERD2009b}). \begin{acknowledgements} The authors thank the Science and Technology Facilities Council (STFC), UK for the financial support they received. \end{acknowledgements} \bibliographystyle{aa}
{ "redpajama_set_name": "RedPajamaArXiv" }
6,859
Subachoque is a municipality and town of Colombia in the Western Savanna Province, part of the department of Cundinamarca. The municipality is situated on the Bogotá savanna, with its urban centre lying northwest of the capital Bogotá. Subachoque is part of the Metropolitan Area of Bogotá and borders Zipaquirá, Tabio and Tenjo in the east, Zipaquirá and Pacho in the north, San Francisco and Supatá in the west and Madrid and El Rosal in the south. Subachoque is composed of 17 subdivisions: Altania, Canica Alta, Canica Baja, Cascajal, El Guamal, El Pantano, El Páramo, El Tobal, Galdámez, La Cuesta, La Pradera, La Unión, La Yegüera, Llanitos, Rincón Santo, Santa Rosa, Tibagota, El Valle. Etymology The name Subachoque comes from Chibcha and means either "Work of the Sun" or "Farmfields of the front". History In the times before the Spanish conquest, the area of Subachoque formed part of the Muisca Confederation, a loose confederation of different rulers of the Muisca. Subachoque was ruled by the zipa based in Bacatá. Modern Subachoque was founded on March 16, 1774 by the priest Jacinto Roque Salgado. After the Spanish Crown granted lands and permission to Spanish families to colonize the area, the indigenous peoples who lived in the area at the time were relocated to other parts of Colombia, or executed if they refused to move. Subachoque is one of the few towns in Colombia whose inhabitants are descended almost exclusively from Spanish or other European ancestors. During the iron production that took place at La Pradera between 1850 and the early 1900s, the arrival of North Americans, Britons, French and Germans left a small mark on the population of the town. The Subachoque area was also the battleground of the Battle of Campo Amalia, also known as the Battle of Subachoque, in 1861. Economy The main economic activities of Subachoque are agriculture, livestock farming and small-scale mining. The most important agricultural products cultivated are potatoes, carrots, peas and fruits such as peaches, pears, strawberries and apples. Geology The Subachoque Formation is named after Subachoque. Born in Subachoque Nemesio Camacho, businessman and politician Gallery References Municipalities of Cundinamarca Department Populated places established in 1774 1774 establishments in the Spanish Empire
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,933
package suonos.models.music; /** * Meta data for a track. * * @author anthony */ public class TrackMetaData { /** * The track album Id. */ private String albumId; /** * Track file name. Eg "18. The Beat - Ranking Full Stop.ogg". There is no * path name. */ private String trackFileName; /** * The track Id */ private String trackId; /** * The track rating */ private int rating; public String getTrackId() { return trackId; } public void setTrackId(String trackId) { this.trackId = trackId; } public int getRating() { return rating; } public void setRating(int rating) { this.rating = rating; } public String getTrackFileName() { return trackFileName; } public void setTrackFileName(String trackFileName) { this.trackFileName = trackFileName; } public String getAlbumId() { return albumId; } public void setAlbumId(String albumId) { this.albumId = albumId; } }
{ "redpajama_set_name": "RedPajamaGithub" }
1,190
Q: How to have bootstrap tab links link to another partial file in ruby on rails to display content? I'm currently trying to incorporate Bootstrap tabs into my app. When I click on each tab, I want the content to also be different. However, I'm not sure how to do so with the rails language. Currently, I'm trying to render my partial like this: < ul class="nav nav-tabs justify-content-center"> <li class="nav-item"> <a class="nav-link active" href="#">Active</a> </li> <li class="nav-item"> <a class="nav-link" href="#">Link</a> <%= render :partial => "teams_list", locals: {teams: @teams} %> </li> </ul> Any help would be appreciated! A: I used bootstrap for getting nav-tabs. It's a very easy implementation. I showed some static data here. You can easily use ruby code here. And I used js for selecting the first tab initially. You can easily check it by removing it from the code snippet. $(function () { $('#myTab li:first-child a').tab('show') }) <!doctype html> <html lang="en"> <head> <!-- Required meta tags --> <meta charset="utf-8"> <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no"> <!-- Bootstrap CSS --> <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css" integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous"> </head> <body> <div class="container tab-bar"> <ul class="nav nav-tabs" id="myTab" role="tablist"> <li class="nav-item"> <a class="nav-link" id="tab1-tab" data-toggle="tab" href="#tab1" role="tab" aria-controls="tab1" aria-selected="false">By Tab1</a> </li> <li class="nav-item"> <a class="nav-link" id="tab2-tab" data-toggle="tab" href="#tab2" role="tab" aria-controls="tab2" aria-selected="true">Tab2</a> </li> </ul> <div class="tab-content"> <div class="tab-pane" id="tab1" role="tabpanel" aria-labelledby="tab1-tab"> <div class="container"> <div class="row p-2"> <div class="col-sm-3 col-md-2 p-2">Tab1--Child1</div> <div class="col-sm-3 col-md-2 p-2">Tab1--Child2</div> <div class="col-sm-3 col-md-2 p-2">Tab1--Child3</div> </div> </div> </div> <div class="tab-pane" id="tab2" role="tabpanel" aria-labelledby="tab2-tab"> <div class="container"> <div class="row p-2"> <div class="col-sm-3 col-md-2 p-2">Tab2--Child1</div> <div class="col-sm-3 col-md-2 p-2">Tab2--Child2</div> <div class="col-sm-3 col-md-2 p-2">Tab2--Child3</div> <div class="col-sm-3 col-md-2 p-2">Tab2--Child4</div> <div class="col-sm-3 col-md-2 p-2">Tab2--Child5</div> <div class="col-sm-3 col-md-2 p-2">Tab2--Child6</div> </div> </div> </div> </div> </div> <!-- Optional JavaScript --> <!-- jQuery first, then Popper.js, then Bootstrap JS --> <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js" integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN" crossorigin="anonymous"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js" integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q" crossorigin="anonymous"></script> <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js" integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl" crossorigin="anonymous"></script> </body> </html> I am mentioning the ruby code inside a tab using partial. 
<div class="tab-content"> <div class="tab-pane" id="tab1" role="tabpanel" aria-labelledby="tab1-tab"> <div class="container"> <div class="row p-2"> <% @teams.each do |team| %> <div class="col-sm-3 col-md-2 p-2"> <%= render partial: 'team', locals:{team:team} %> </div> <% end %> </div> </div> </div> </div>
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,415
Q: Can I configure WordPress to use postfix without a plugin? I am developing a WordPress site on a vagrant box and have installed postfix in order to test email notifications. At this guest OS (Ubuntu) level, I am able to end a test email: echo "Test mail from postfix" | mail -s "Test Postfix" my@email.address This works and I receive the email. As far as I know postfix uses the sendmail binary and so I would expect WordPress to send emails successfully. However, my contact form notifications are not being received. Is there a way to check/debug email sending in WordPress or verify which mail function it is using? UPDATE So after a bit of digging I discovered that wp_mail() uses PHPMailer. If I debug the wp_mail() function using this test script I see that PHPMailer is throwing an exception: Could not instantiate mail function. // Set $to as the email you want to send the test to $to = "myreal@email.address.com"; // No need to make changes below this line // Email subject and body text $subject = 'wp_mail function test'; $message = 'This is a test of the wp_mail function: wp_mail is working'; $headers[] = 'From: Me Myself <myreal@email.address.com>'; // Load WP components, no themes define('WP_USE_THEMES', false); require('wp/wp-load.php'); // Call the wp_mail function, display message based on the result. if( wp_mail( $to, $subject, $message, $headers ) ) { // the message was sent... echo 'The test message was sent. Check your email inbox.'; } else { // the message was not sent... echo 'The message was not sent!'; }; A: WordPress uses the wp_mail() function to send mail. It says on the Codex article there that: For this function to work, the settings SMTP and smtp_port (default: 25) need to be set in your php.ini file. Also be sure to check that your contact form is sending the required parameters to the wp_mail() function. The required parameters are included on the page in the link above.
{ "redpajama_set_name": "RedPajamaStackExchange" }
518
<?xml version="1.0" encoding="iso-8859-1"?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en"> <head> <title>Template Handler</title> <meta http-equiv="Content-Type" content="text/html;charset=iso-8859-1"/> <meta name="title" content="Template Handler"/> <meta name="generator" content="Org-mode"/> <meta name="generated" content="2014-11-12 19:50:36 CST"/> <meta name="author" content="Jesse Gumm (@jessegumm)"/> <meta name="description" content=""/> <meta name="keywords" content=""/> <style type="text/css"> <!--/*--><![CDATA[/*><!--*/ html { font-family: Times, serif; font-size: 12pt; } .title { text-align: center; } .todo { color: red; } .done { color: green; } .tag { background-color: #add8e6; font-weight:normal } .target { } .timestamp { color: #bebebe; } .timestamp-kwd { color: #5f9ea0; } .right {margin-left:auto; margin-right:0px; text-align:right;} .left {margin-left:0px; margin-right:auto; text-align:left;} .center {margin-left:auto; margin-right:auto; text-align:center;} p.verse { margin-left: 3% } pre { border: 1pt solid #AEBDCC; background-color: #F3F5F7; padding: 5pt; font-family: courier, monospace; font-size: 90%; overflow:auto; } table { border-collapse: collapse; } td, th { vertical-align: top; } th.right { text-align:center; } th.left { text-align:center; } th.center { text-align:center; } td.right { text-align:right; } td.left { text-align:left; } td.center { text-align:center; } dt { font-weight: bold; } div.figure { padding: 0.5em; } div.figure p { text-align: center; } div.inlinetask { padding:10px; border:2px solid gray; margin:10px; background: #ffffcc; } textarea { overflow-x: auto; } .linenr { font-size:smaller } .code-highlighted {background-color:#ffff00;} .org-info-js_info-navigation { border-style:none; } #org-info-js_console-label { font-size:10px; font-weight:bold; white-space:nowrap; } .org-info-js_search-highlight {background-color:#ffff00; color:#000000; font-weight:bold; } /*]]>*/--> </style> <LINK href="../stylesheet.css" rel="stylesheet" type="text/css" /> <script type="text/javascript"> <!--/*--><![CDATA[/*><!--*/ function CodeHighlightOn(elem, id) { var target = document.getElementById(id); if(null != target) { elem.cacheClassElem = elem.className; elem.cacheClassTarget = target.className; target.className = "code-highlighted"; elem.className = "code-highlighted"; } } function CodeHighlightOff(elem, id) { var target = document.getElementById(id); if(elem.cacheClassElem) elem.className = elem.cacheClassElem; if(elem.cacheClassTarget) target.className = elem.cacheClassTarget; } /*]]>*///--> </script> </head> <body> <div id="preamble"> </div> <div id="content"> <h1 class="title">Template Handler</h1> <p><a href="http://nitrogenproject.com">Home</a> | <a href="../index.html">Getting Started</a> | <a href="../api.html">API</a> | <a href="../elements.html">Elements</a> | <a href="../actions.html">Actions</a> | <a href="../validators.html">Validators</a> | <a href="../handlers.html"><b>Handlers</b></a> | <a href="../config.html">Configuration Options</a> | <a href="./advanced.html">Advanced Guides</a> | <a href="../troubleshooting.html">Troubleshooting</a> | <a href="../about.html">About</a> </p> <div id="table-of-contents"> <h2>Table of Contents</h2> <div id="text-table-of-contents"> <ul> <li><a href="#sec-1">1 Template Handler</a></li> </ul> </div> </div> <div id="outline-container-1" class="outline-2"> <h2 id="sec-1"><span 
class="section-number-2">1</span> Template Handler</h2> <div class="outline-text-2" id="text-1"> <p> Overview of what this handler does </p> </div> <div id="outline-container-1-1" class="outline-3"> <h3 id="sec-1-1"><span class="section-number-3"></span> Behavior Functions</h3> <div class="outline-text-3" id="text-1-1"> </div> <div id="outline-container-1-1-1" class="outline-5"> <h5 id="sec-1-1-1"><span class="section-number-5"></span> <code>init(Config, State)</code></h5> <div class="outline-text-5" id="text-1-1-1"> <p> Initialize the handler </p> <ul> <li><i>Return Value</i> - <code>{ok, NewState}</code> </li> </ul> </div> </div> <div id="outline-container-1-1-2" class="outline-5"> <h5 id="sec-1-1-2"><span class="section-number-5"></span> <code>finish(Config, State)</code></h5> <div class="outline-text-5" id="text-1-1-2"> <p> Clean up the handler </p> <ul> <li><i>Return Value</i> - <code>{ok, NewState}</code> </li> </ul> </div> </div> <div id="outline-container-1-1-3" class="outline-5"> <h5 id="sec-1-1-3"><span class="section-number-5"></span> <code>function(Arg1, Arg2)</code></h5> <div class="outline-text-5" id="text-1-1-3"> <p> Overview of this function </p> <ul> <li><code>Arg1</code> - Description of Arg1 </li> <li><code>Arg2</code> - Description of Arg2 </li> <li><i>Return Value</i> - Description of the return value </li> </ul> </div> </div> </div> <div id="outline-container-1-2" class="outline-3"> <h3 id="sec-1-2"><span class="section-number-3"></span> Example</h3> <div class="outline-text-3" id="text-1-2"> <p> Here is the complete text of the default template handler </p> <pre class="src src-erlang"></pre> </div> </div> <div id="outline-container-1-3" class="outline-3"> <h3 id="sec-1-3"><span class="section-number-3"></span> See Also</h3> <div class="outline-text-3" id="text-1-3"> <ul> <li><a href="../handlers.html">Handler Overview</a> </li> <li><a href="../api.html#sec-X">API: Template</a> </li> </ul> </div> </div> </div> </div> <div id="postamble"> <p class="date">Date: 2014-11-12 19:50:36 CST</p> <p class="author">Author: Jesse Gumm (@jessegumm)</p> <p class="creator">Org version 7.8.02 with Emacs version 23</p> <a href="http://validator.w3.org/check?uri=referer">Validate XHTML 1.0</a> </div><h2>Comments</h2> <b>Note:</b><!-- Disqus does not currently support Erlang for its syntax highlighting, so t-->To specify <!--Erlang--> code blocks, just use the generic code block syntax: <pre><b>&lt;pre&gt;&lt;code&gt;your code here&lt;/code&gt;&lt;/pre&gt;</b></pre> <br /> <br /> <div id="disqus_thread"></div> <script type="text/javascript"> /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */ var disqus_shortname = 'nitrogenproject'; // required: replace example with your forum shortname var disqus_identifier = 'html/handlers/template.html'; //This will be replaced with the path part of the url /* * * DON'T EDIT BELOW THIS LINE * * */ (function() { var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true; dsq.src = 'http://' + disqus_shortname + '.disqus.com/embed.js'; (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq); })(); </script> <noscript>Please enable JavaScript to view the <a href="http://disqus.com/?ref_noscript">comments powered by Disqus.</a></noscript> <a href="http://disqus.com" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a> </body> </html>
{ "redpajama_set_name": "RedPajamaGithub" }
290
\section{Omitted Details for Section~\ref{sec:motivatinguni}}\label{sect:recurreldetails} \lstset{language=prog} \lstset{tabsize=3} \newsavebox{\progsearch} \begin{lrbox}{\progsearch} \begin{lstlisting}[mathescape] $\mathsf{randsearch}(ar,i,j,d)$ { 1: if ($i=j$ and $ar[i]\ne d$) 2: return $-1$; 3: else if ($i=j$ and $ar[i]=d$) 4: return $i$; 5: else 6: $k\leftarrow \mathrm{uniform}(i,j)$; 7: if ($ar[k]=d$) 8: return $k$; 9: else if ($ar[k]<d$ and $k<j$) 10: return $\mathsf{randsearch}(ar, k+1, j,d)$; 11: else if ($ar[k]>d$ and $i<k$) 12: return $\mathsf{randsearch}(ar,i,k-1,d)$; 13: else 14: return $-1$; end if end if } \end{lstlisting} \end{lrbox} \begin{figure} \centering \usebox{\progsearch} \caption{Sherwood's {\sc Randomized-Search}} \label{fig:randsearch} \end{figure} \noindent{\bf Example~\ref{ex:randsearch}.} [{\sc Randomized-Search}] Consider Sherwood's {\sc Randomized-Search\ } algorithm (cf.~\cite[Chapter~9]{McConnellbook}) depicted in Fig.~\ref{fig:randsearch}. The algorithm checks whether an integer value $d$ is present within the index range $[i,j]$ ($0\le i\le j$) in an integer array $ar$ which is sorted in increasing order and is without duplicate entries. The algorithm outputs either the index for $d$ in $ar$ or $-1$ meaning that $d$ is not present in the index range $[i,j]$ of $ar$. The description of the pseudo-code is as follows. The first four lines deal with the base case when there is only one index in the index range. The remaining lines deal with the recursive case: in line 6, an index $k$ is uniformly sampled from $\{i,i+1,\dots,j\}$; line 7--8 check whether $k$ is the output; line 9--12 perform the recursive calls depending on whether $ar[k]<d$ or not; finally, line 13--14 handle the case when $d<ar[i]$ or $d>ar[j]$. Let $T:\Nset\rightarrow\Nset$ be the function such that for any $n\in\Nset$, we have $T(n)$ is the supremum of the expected execution times upon all inputs $(ar,i,j)$ with $j-i+1=n$. We derive a recurrence relation for $T$ as follows. Let $n\in\Nset$ and $(ar,i,j), d$ be any input such that $n=j-i+1$. We clarify two cases below: \begin{enumerate} \item there exists an $i\le k^*< j$ such that $ar[k^*]\le d < ar[k^*+1]$, where $ar[j+1]$ is interpreted as $\infty$ here; \item $ar[j]\le d$ or $d< ar[i]$. \end{enumerate} In both cases, we have $T(1)=1$. In Case 1, we deduce from the pseudo-code in Fig.~\ref{fig:randsearch} that \begin{displaymath} T(n)\le 6+\frac{1}{n}\cdot \max\limits_{1\le \ell^*< n} \left(\displaystyle\sum_{\ell=1}^{\ell^*} T(n-\ell)+ \displaystyle\sum_{\ell=\ell^*+1}^{n} T(\ell-1)\right) \end{displaymath} for all $n\ge 2$, where the maximum ranges over all $\ell^*:=k^*-i+1$'s. In Case 2, similarly we deduce that \begin{displaymath} T(n)\le 6+\frac{1}{n}\cdot\max\left\{\displaystyle\sum_{\ell=1}^{n-1} T(n-\ell), ~~\displaystyle\sum_{\ell=2}^{n} T(\ell-1)\right\} \end{displaymath} Thus a preliminary version $G'$ of the recurrence relation is $\mathrm{T}(1)=1$ and \[ \mathrm{T}(n)=6+\frac{1}{n}\cdot\max\limits_{1\le \ell^*< n} \left(\displaystyle\sum_{\ell=1}^{\ell^*} \mathrm{T}(n-\ell)+ \displaystyle\sum_{\ell=\ell^*+1}^{n} \mathrm{T}(\ell-1)\right) \\ \] for all $n\ge 2$. Let $T':\Nset\rightarrow\Rset$ be the unique solution to $G'$. Then from the fact that $T'(2)\ge T'(1)$, by induction $T'$ is monotonically increasing.
Thus the maximum \[ \max\limits_{1\le \ell^*<n} \left(\displaystyle\sum_{\ell=1}^{\ell^*} T'(n-\ell)+ \displaystyle\sum_{\ell=\ell^*+1}^{n} T'(\ell-1)\right) \] is attained at $\ell^*=\left\lfloor\frac{n}{2}\right\rfloor$ for all $n\ge 2$. Then $G'$ is transformed into our final recurrence relation as follows: \[ \begin{cases} \mathrm{T}(\mathfrak{n})=6+\frac{1}{\mathfrak{n}}\cdot\left( \displaystyle\sum_{\mathfrak{j}=\left\lceil\frac{\mathfrak{n}}{2}\right\rceil}^{\mathfrak{n}-1}\mathrm{T}(\mathfrak{j})+ \displaystyle\sum_{\mathfrak{j}=\left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})\right) \\ \mathrm{T}(1)=1 \end{cases}. \] We note that the worst-case complexity for this algorithm is $\Theta(n)$.\qed \lstset{language=prog} \lstset{tabsize=3} \newsavebox{\progsort} \begin{lrbox}{\progsort} \begin{lstlisting}[mathescape] $\mathsf{quicksort}(ar,i,j)$ { 1: if ($i<j$) 2: $k\leftarrow \mathrm{uniform}(i,j)$; 3: $m\leftarrow \mathsf{pivot}(ar,i,j,ar[k])$; 4: if ($i\le m-1$) 5: $\mathsf{quicksort}(ar,i,m-1)$; end if 6: if ($m+1\le j$) 7: $\mathsf{quicksort}(ar,m+1,j)$; end if end if } \end{lstlisting} \end{lrbox} \begin{figure} \centering \usebox{\progsort} \caption{Randomized {\sc Quick-Sort}} \label{fig:quicksort} \end{figure} \noindent{\em Example \ref{ex:quicksort}.}[{\sc Quick-Sort}] Consider the {\sc Quick-Sort} algorithm~\cite[Chapter~7]{DBLP:books/daglib/0023376} depicted in Fig.~\ref{fig:quicksort}, where every input $(ar,i,j)$ is assumed to satisfy that $0\le i\le j$ and $ar$ is an array of integers which does not contain duplicate numbers. The description of the pseudo-code is as follows: first, line 2 samples an integer uniformly from $\{i,\dots, j\}$; then, line 3 calls a subroutine $\mathsf{pivot}$ which (i) rearranges $ar$ such that integers in $ar$ which are less than $ar[k]$ come first, then $ar[k]$, and finally integers in $ar$ greater than $ar[k]$, and (ii) outputs the new index $m$ of $ar[k]$ in $ar$; and finally, lines 4--7 handle recursive calls to sub-arrays. From the pseudo-code, the following recurrence relation is easily obtained: \[ \mathrm{T}(\mathfrak{n})=2\cdot\mathfrak{n}+ 2\cdot (\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j}))/{\mathfrak{n}} \] where $\mathrm{T}(\mathfrak{n})$ represents the maximal expected execution time where $\mathfrak{n}$ is the array length and the execution time of {\em pivoting} is represented by $2\cdot \mathfrak{n}$. 
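As a quick numerical sanity check (this is only an illustration and not part of the synthesis algorithms $\mbox{\sl UniDec}$ and $\mbox{\sl UniSynth}$), the recurrence can be unrolled directly. The following Python sketch assumes the base case $\mathrm{T}(1)=1$ (the base value is not fixed by the recurrence above) and compares $\mathrm{T}(\mathfrak{n})$ with the candidate bound $d\cdot\mathfrak{n}\cdot\ln{\mathfrak{n}}+1$, where $d=4.051$ is the coefficient reported for this example with $\epsilon=0.01$ in Table~\ref{tbl:detailedexperiments}:
\begin{lstlisting}[language=Python]
import math

# Unroll T(n) = 2*n + (2/n) * sum_{j=1}^{n-1} T(j) with assumed base case
# T(1) = 1, and check the candidate bound T(n) <= d*n*ln(n) + c up to n_max.
def unroll_and_check(n_max, d=4.051, c=1.0):
    T = [0.0, 1.0]          # T[0] is unused; T[1] = 1 (assumed base case)
    prefix = T[1]           # running sum T(1) + ... + T(n-1)
    for n in range(2, n_max + 1):
        T.append(2.0 * n + 2.0 * prefix / n)
        prefix += T[n]
        assert T[n] <= d * n * math.log(n) + c
    return T

T = unroll_and_check(10**5)
print(T[10**5] / (10**5 * math.log(10**5)))  # about 3.65, slowly increasing towards 4
\end{lstlisting}
The printed ratio stays below the synthesised coefficient, consistent with the $\mathcal{O}(\mathfrak{n}\cdot\ln{\mathfrak{n}})$ expected-runtime bound for {\sc Quick-Sort}.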
We note that the worst-case complexity for this algorithm is $\Theta(n^2)$.\qed \lstset{language=prog} \lstset{tabsize=3} \newsavebox{\progdiameter} \begin{lrbox}{\progdiameter} \begin{lstlisting}[mathescape] $\mathsf{diameter}(S)$ { 1: if ($|S|=1$) 2: return 0; else 3: $p\leftarrow \mathrm{uniform}(S)$; 4: $d\leftarrow \max_{p'\in S} \mbox{\sl dist}(p,p')$ 5: $U\leftarrow \bigcap_{p'\in S}\{p''\in\Rset^3 \mid \mbox{\sl dist}(p'',p')\le d\}$ 6: $S'\leftarrow S\backslash U$ 7: if ($S'=\emptyset$) 8: return $d$ 9: else 10: return $\mathsf{diameter}(S')$ end if end if } \end{lstlisting} \end{lrbox} \lstset{language=prog} \lstset{tabsize=3} \newsavebox{\progselect} \begin{lrbox}{\progselect} \begin{lstlisting}[mathescape] $\mathsf{quickselect}(ar,i,j,d)$ { 1: if ($i=j$) return $a[i]$; 2: else 3: $k\leftarrow \mathrm{uniform}(i,j)$; 4: $m\leftarrow \mathsf{pivot}(ar,i,j,ar[k])$; 5: if ($m-i+1=d$) 6: return $ar[m]$; 7: else if ($m-i+1<d$) 8: return $\mathsf{quickselect}(ar,m+1,j,d)$; 9: else if ($m-i+1>d$) 10: return $\mathsf{quickselect}(ar,i,m-1,d)$; end if end if } \end{lstlisting} \end{lrbox} \begin{figure} \begin{minipage}{0.55\textwidth} \centering \usebox{\progselect} \caption{Randomized {\sc Quick-Select}} \label{fig:quickselect} \end{minipage} \begin{minipage}{0.45\textwidth} \centering \usebox{\progdiameter} \caption{{\sc Diameter-Computation}} \label{fig:diameter} \end{minipage} \end{figure} \noindent{\bf Example~\ref{ex:quickselect}.} [{\sc Quick-Select}] Consider the {\sc Quick-Select} algorithm (cf.~\cite[Chapter~9]{DBLP:books/daglib/0023376}) depicted in Fig.~\ref{fig:quickselect} which upon any input $(ar,i,j)$ and $d$ such that $0\le i\le j$, $1\le d\le j-i+1$ and $ar$ contains no duplicate integers, finds the $d$-th largest integer in $ar$. Note that for an array of size $n$, and $d=n/2$, we have the {\sc Median-Find} algorithm. The description of the pseudo-code is as follows: line 1 handles the base case; line 3 starts the recursive case by sampling $k$ uniformly from $\{i,\dots,j\}$; line 4 rearranges $ar$ and returns an $m$ in the same way as $\mathsf{pivot}$ in {\sc Quick-Sort} (cf. Example~\ref{ex:quicksort}); line 5 handles the case when $ar[k]$ happens to be the $d$-th largest integer in $ar$; and finally, line 7--10 handle the recursive calls. Let $T:\Nset\rightarrow\Nset$ be the function such that for any $n\in\Nset$, we have $T(n)$ is the supremum of the expected execution times upon all inputs $(ar,i,j)$ with $j-i+1=n$. By an analysis on where the $d$-th largest integer lies in $ar$ which is similar to the analysis on $d$ in Example~\ref{ex:randsearch}, a preliminary recurrence relation is obtained such that $\mathrm{T}(1)=1$ and \[ \mathrm{T}(n)=4+2\cdot n+\frac{1}{n}\cdot\max\limits_{1\le \ell^*\le n} \left(\displaystyle\sum_{\ell=1}^{\ell^*-1} \mathrm{T}(n-\ell)+ \displaystyle\sum_{\ell=\ell^*+1}^{n} \mathrm{T}(\ell-1)\right). \\ \] By similar monotone argument in Example~\ref{ex:randsearch}, the maximum of the right-hand-side expression above is attained at $\ell^*=\left\lfloor\frac{n+1}{2}\right\rfloor$ for all $n\ge 2$. 
By the fact that $\left\lfloor\frac{n+1}{2}\right\rfloor=\left\lceil \frac{n}{2}\right\rceil$ for all $n\ge 2$, the following recurrence relation is obtained: \[ \begin{cases} \mathrm{T}(\mathfrak{n})=4+2\cdot\mathfrak{n}+ \frac{1}{\mathfrak{n}}\cdot \left(\displaystyle\sum_{\mathfrak{j}=\left\lfloor \frac{n}{2}\right\rfloor+1}^{n-1} \mathrm{T}(\mathfrak{j})+ \displaystyle\sum_{\mathfrak{j}=\left\lceil \frac{n}{2}\right\rceil}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})\right)\\ \mathrm{T}(1)=1 \end{cases} \] To fit our univariate recurrence expression, we use over-approximation, and the final recurrence relation for this example is \[ \begin{cases} \mathrm{T}(\mathfrak{n})\!=\!4+2\cdot\mathfrak{n}+ \frac{1}{\mathfrak{n}}\cdot \left(\displaystyle\sum_{\mathfrak{j}=\left\lfloor \frac{n}{2}\right\rfloor}^{n-1} \mathrm{T}(\mathfrak{j})+ \displaystyle\sum_{\mathfrak{j}=\left\lceil \frac{n}{2}\right\rceil}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})\right)\\ \mathrm{T}(1)=1 \end{cases}. \] We note that the worst-case complexity for this algorithm is $\Theta(n^2)$.\qed \noindent{\em Example~\ref{ex:diameter}.}[{\sc Diameter-Computation}] Consider the {\sc Diameter-Computation} algorithm (cf.~\cite[Chapter 9]{DBLP:books/cu/MotwaniR95}) to compute the diameter of an input finite set $S$ of three-dimensional points. A pseudo-code to implement this is depicted in Fig.~\ref{fig:diameter}. The description of the pseudo-code is as follows: line 1--2 handle the base case; line 3 samples a point $p$ uniformly from $S$; line 4 calculates the maximum distance in $S$ from $p$; line 5 calculates the intersection of all balls centered at points in $S$ with uniform radius $d$; line 6 calculates the set of points outside $U$; lines 7--8 handle the situation $S'=\emptyset$ which implies that $d$ is the diameter; lines 9--10 handle the recursive call to $S'$. Due to uniform choice of $p$ at line 3, the size of $S'$ is uniformly distributed in $[0,|S|-1]$; it then follows a pivoting (similar to that in Example~\ref{ex:quickselect} and Example~\ref{ex:quicksort}) by line $5$ w.r.t.\ the linear order over $\{\max_{p'\in S}{\mbox{\sl dist}(p,p')}\mid p\in S\}$. Lines 5--6 can be done in $\mathcal{O}(|S|\cdot \log{|S|})$ time for Euclidean distance, and $\mathcal{O}(|S|)$ time for $L_1$ metric~\cite{DBLP:books/cu/MotwaniR95}. Depending on the Euclidean or the $L_1$ metric we obtain two different recurrence relations. For the Euclidean metric we have the following relation: \[ \mathrm{T}(\mathfrak{n})=2+\mathfrak{n}+ 2\cdot \mathfrak{n}\cdot\ln{\mathfrak{n}} + (\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j}))/{\mathfrak{n}} ; \] with the execution time for lines 5--6 being taken to be $2\cdot \mathfrak{n}\cdot\ln{\mathfrak{n}}$, and for the $L_1$ metric we have the following relation: \[ \mathrm{T}(\mathfrak{n})=2+\mathfrak{n}+ 2\cdot \mathfrak{n} + (\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j}))/{\mathfrak{n}} \\ \] with the execution time for lines 5--6 being taken to be $2\cdot \mathfrak{n}$.
We note that the worst-case complexity for this algorithm is as follows: for Euclidean metric it is $\Theta(n^2 \cdot \log n)$ and for the $L_1$ metric it is $\Theta(n^2)$.\qed \lstset{language=prog} \lstset{tabsize=3} \newsavebox{\progsortselect} \begin{lrbox}{\progsortselect} \begin{lstlisting}[mathescape] $\mathsf{sortbyselect}(ar,i,j)$ { 1: if ($i<j$) 2: $m\leftarrow \mathsf{quickselect}(ar,i,j,\lfloor\frac{j-i+1}{2}\rfloor)$; 3: if ($i< m-1$) 4: $\mathsf{sortbyselect}(ar,i,m-1)$; end if 5: if ($m+1<j$) 6: $\mathsf{sortbyselect}(ar,m+1,j)$; end if end if } \end{lstlisting} \end{lrbox} \begin{figure} \centering \usebox{\progsortselect} \caption{Sorting with {\sc Quick-Select}} \label{fig:sortselect} \end{figure} \noindent{\em Example \ref{ex:sortselect}.}[Sorting with {\sc Quick-Select}] Consider a sorting algorithm depicted in Fig.~\ref{fig:sortselect} which selects the median through the {\sc Quick-Select} algorithm. The recurrence relation is directly obtained as follows: \[ \mathrm{T}(\mathfrak{n})=4+ T^*(\mathfrak{n})+\mathrm{T}\left(\lfloor{\mathfrak{n}}/{2}\rfloor\right)+\mathrm{T}\left(\lceil{\mathfrak{n}}/{2}\rceil\right) \] where $T^*(\centerdot)$ is an upper bound on the expected running time of {\sc Quick-Select} (cf. Example~\ref{ex:quickselect}). We note that the worst-case complexity for this algorithm is $\Theta(n^2)$.\qed \section{Omitted Details for Section~\ref{sec:motivatingbi}}\label{app:motivatingbi} \noindent{\bf Example~\ref{ex:coupon}.}[{\sc Coupon-Collector}] Consider the {\sc Coupon-Collector} problem~\cite[Chapter~3]{DBLP:books/cu/MotwaniR95} with $n$ different types of coupons ($n\in\Nset$). The randomized process proceeds in rounds: at each round, a coupon is collected uniformly at random from the coupon types (i.e., each coupon type is collected with probability $\frac{1}{n}$); and the rounds continue until all the $n$ types of coupons are collected. We model the rounds as a recurrence relation with two variables $\mathfrak{n},\mathfrak{m}$, where $\mathfrak{n}$ represents the total number of coupon types and $\mathfrak{m}$ represents the remaining number of uncollected coupon types. The recurrence relation is as follows: \[ \mathrm{T}(\mathfrak{n},1)=\mathfrak{n}\cdot 1; \qquad \mathrm{T}(\mathfrak{n},\mathfrak{m})=\mathfrak{n}/{\mathfrak{m}}+ \mathrm{T}(\mathfrak{n},\mathfrak{m}-1) \] where $\mathrm{T}(\mathfrak{n},\mathfrak{m})$ is the expected number of rounds, $\frac{\mathfrak{n}}{\mathfrak{m}}$ represents the expected number of rounds to collect a new (i.e., not-yet-collected) coupon type when there are still $\mathfrak{m}$ type of coupons to be collected, and $\mathfrak{n}$ (for $\mathrm{T}(\mathfrak{n},1)$) represents the expected number of rounds to collect a new coupon type when there is only one new coupon type to be collected. We note that the worst-case complexity for this process is $\infty$.\qed \noindent{\bf Example~\ref{ex:channel}.}[{\sc Channel-Conflict Resolution}] We consider two network scenarios in which $n$ clients are trying to get access to a network channel. This problem is also called the {\sc Resource-Contention Resolution}~\cite[Chapter~13]{Kleinbergbook}. In this problem, if more than one client tries to access the channel, then no client can access it, and if exactly one client requests access to the channel, then the request is granted. While centralized deterministic algorithms exist (such as Round-Robin) for the problem, to be implemented in a distributed or concurrent setting, randomized algorithms are necessary. 
\smallskip\noindent{\em Distributed setting.} In the distributed setting, the clients do not share any information. In this scenario, in each round, every client requests an access to the channel with probability $\frac{1}{n}$. We are interested in the expected number of rounds until every client gets at least one access to the channel. At each round, let $m$ be the number of clients who have not got any access. Then the probability that a new client (from the $m$ clients) gets the access is $m\cdot \frac{1}{n}\cdot (1-\frac{1}{n})^{n-1}$. Thus, the expected rounds that a new client gets the access is $\frac{n}{m}\cdot \frac{1}{(1-\frac{1}{n})^{n-1}}$. Since the sequence $\left\{(1-\frac{1}{n})^{n-1}\right\}_{n\in\Nset}$ converges decreasingly to $\frac{1}{e}$ when $n\rightarrow\infty$, this expected time is no greater than $e\cdot\frac{n}{m}$. Then for this scenario, we obtain an over-approximating recurrence relation \[ \mathrm{T}(\mathfrak{n},1)=\mathfrak{n}\cdot 1; \qquad \mathrm{T}(\mathfrak{n},\mathfrak{m})=(\mathfrak{n}\cdot{e})/{\mathfrak{m}}+ \mathrm{T}(\mathfrak{n},\mathfrak{m}-1) \] for the expected rounds until which every client gets at least one access to the channel. Note that in this setting no client has any information about any other client. \smallskip\noindent{\em Concurrent setting.} In the concurrent setting, the clients share one variable, which is the number of clients which has not yet been granted access. Also in this scenario, once a client gets an access the client does not request for access again. Moreover, the shared variable represents the number of clients $m$ that have not yet got access. In this case, in reach round a client that has not access to the channel yet, requests access to the channel with probability $\frac{1}{m}$. Then the probability that a new client gets the access becomes $m\cdot \frac{1}{m}\cdot (1-\frac{1}{m})^{m-1}$. It follows that the expected time that a new client gets the access becomes $\frac{1}{(1-\frac{1}{m})^{m-1}}$ which is smaller than $e$. Then for this scenario, we obtain an over-approximating recurrence relation \[ \mathrm{T}(\mathfrak{n},1)=1\cdot 1; \qquad \mathrm{T}(\mathfrak{n},\mathfrak{m})=1\cdot e+ \mathrm{T}(\mathfrak{n},\mathfrak{m}-1) \] We also note that the worst-case complexity for both is $\infty$.\qed \section{Details for Overapproximations}\label{app:overapprox} To prove results for overapproximations for recurrence expressions, we need the following well-known theorem. \begin{theorem}[Taylor's Theorem (with Lagrange's Remainder){~\cite[Chapter 6]{BasicCalculus}}] For any function $f:[a,b]\rightarrow \Rset$ ($a,b\in\Rset$ and $a<b$), if $f$ is ($k+1$)-order differentiable, then for all $x\in [a,b]$, there exists a $\xi\in (a,x)$ such that \[ f(x)=\left(\sum_{j=0}^k \frac{f^{(j)}(a)}{j!}\cdot (x-a)^j\right)+\frac{f^{(k+1)}(\xi)}{(k+1)!}\cdot (x-a)^{k+1}~~. \] \end{theorem} We also recall that \[ \sum_{j=1}^{\infty}\frac{1}{j^2}=\frac{\pi^2}{6}\mbox{ and }\sum_{j=1}^{\infty}\frac{1}{j^3}=\alpha \] where $\alpha$ is the Ap\'{e}ry's constant which lies in $[1.2020,1.2021]$. Moreover, we have the following result using integral-by-part technique and Newton-Leibniz Formula. 
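For intuition (this closed form is not needed for the analysis), both recurrences can be unrolled explicitly, exactly as the {\sc Coupon-Collector} recurrence above unrolls to $\mathrm{T}(\mathfrak{n},\mathfrak{m})=\mathfrak{n}\cdot\left(1+\sum_{k=2}^{\mathfrak{m}}\frac{1}{k}\right)\le \mathfrak{n}\cdot(1+\ln{\mathfrak{m}})$. For the distributed and the concurrent setting we obtain \[ \mathrm{T}(\mathfrak{n},\mathfrak{m})=\mathfrak{n}+e\cdot\mathfrak{n}\cdot\sum_{k=2}^{\mathfrak{m}}\frac{1}{k}\le \mathfrak{n}\cdot(1+e\cdot\ln{\mathfrak{m}}) \qquad\mbox{and}\qquad \mathrm{T}(\mathfrak{n},\mathfrak{m})=1+e\cdot(\mathfrak{m}-1), \] respectively, which agrees with the $\mathfrak{n}\cdot\ln{\mathfrak{m}}$ and $\mathfrak{m}$ bounds synthesised for these two scenarios in Table~\ref{tbl:detailedexperiments} (rows {\sc Res.~A} and {\sc Res.~B}).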
\begin{lemma}\label{lemm:basiccalculus} For all $a,b\in (0,\infty)$ such that $a<b$, the following assertions hold: \begin{eqnarray*} \displaystyle (1)\ \int_a^b \frac{1}{x}\,\mathrm{d}x = \ln{x}{\Big|}_{a}^b \enspace ; \displaystyle ~~~(2)\ \int_a^b \ln{x}\,\mathrm{d}x = \left(x\cdot\ln{x}-x\right){\Big|}_{a}^b \enspace\\[1ex] \displaystyle (3)\ \int_a^b x\cdot\ln{x}\,\mathrm{d}x = \left(\frac{1}{2}\cdot x^2\cdot\ln{x}-\frac{1}{4}\cdot x^2\right){\Big|}_{a}^b~~ \enspace. \end{eqnarray*} \end{lemma} Furthermore, we need the following simple lemmas. The following lemma provides a tight approximation for floored expressions, the proof of which is a simple case distinction between even and odd cases. \begin{lemma}\label{lemm:flooroverapprox} For all natural numbers $n$, we have $\frac{n-1}{2}\le \left\lfloor\frac{n}{2}\right\rfloor\le \frac{n}{2}\le \left\lceil\frac{n}{2}\right\rceil \le \frac{n+1}{2}$~~. \end{lemma} The following lemma handles over-approximation of simple summations. \begin{lemma}\label{lemm:sumoverapprox} For any natural number $n\ge 2$ and real number $c$, one has that $\frac{\sum_{j=1}^{n-1} c}{n}\le c\mbox{ and }\frac{\left( \sum_{j=\left\lceil\frac{\mathfrak{n}}{2}\right\rceil}^{n-1} c+ \sum_{j=\left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor}^{n-1} c\right)}{n}\le c$~~. \end{lemma} Then we prove the following two propositions. \textbf{Proposition~\ref{prop:lnflooroverapprox}.} For any natural number $n\ge 2$, we have \[ (1)\ \ln{n}-\ln{2}-\frac{1}{n-1}\le \ln{\left\lfloor \frac{n}{2}\right\rfloor}\le \ln{n}-\ln{2} \enspace ; \] \[ (2)\ \ln{n}-\ln{2}\le \ln{\left\lceil \frac{n}{2}\right\rceil}\le \ln{n}-\ln{2}+\frac{1}{n}~~. \] \begin{proof} Let $n\ge 2$ be a natural number. The first argument comes from the facts that \[ \ln{\left\lfloor \frac{n}{2}\right\rfloor}\le \ln{\frac{n}{2}}=\ln{n}-\ln{2} \] and \begin{align*} \ln{\left\lfloor \frac{n}{2}\right\rfloor} & \ge \ln{\frac{n-1}{2}} \\ & = \ln{\frac{n}{2}}-\left(\ln{\frac{n}{2}}-\ln{\frac{n-1}{2}}\right) \\ & = \ln{n}-\ln{2}-\frac{1}{2}\cdot \frac{1}{\xi_n}~~\left(\xi_n\in\left(\frac{n-1}{2},\frac{n}{2}\right)\right)\\ & \ge \ln{n}-\ln{2}-\frac{1}{n-1} \end{align*} where we use the fact that \[ \left\lfloor \frac{n}{2}\right\rfloor\ge \frac{n-1}{2} \] and $\xi_n$ is obtained from Taylor's Theorem. The second argument comes from the facts that \[ \ln{\left\lceil \frac{n}{2}\right\rceil}\ge \ln{\frac{n}{2}}=\ln{n}-\ln{2} \] and \begin{align*} \ln{\left\lceil \frac{n}{2}\right\rceil} & \le \ln{\frac{n+1}{2}}\\ & =\ln{\frac{n}{2}}+\left(\ln{\frac{n+1}{2}}-\ln{\frac{n}{2}} \right)\\ & =\ln{n} -\ln{2} + \left(\ln{\frac{n+1}{2}}-\ln{\frac{n}{2}} \right)\\ & =\ln{n}-\ln{2}+\frac{1}{2}\cdot \frac{1}{\xi'_n}\quad \left(\xi'_n\in \left(\frac{n}{2}, \frac{n+1}{2}\right)\right)\\ & \le \ln{n}-\ln{2}+\frac{1}{n} \end{align*} where the first inequality is due to the fact that \[ \left\lceil \frac{n}{2}\right\rceil\le \frac{n+1}{2} \] and $\xi'_n$ is obtained from Taylor's Theorem.\qed \end{proof} \textbf{Proposition~\ref{prop:nminusoneoverapprox}.} For any natural number $n\ge 2$, we have \[ \ln{n}-\frac{1}{n-1}\le\ln{(n-1)}\le \ln{n}-\frac{1}{n} . 
\] \begin{proof} The lemma follows directly from the fact that \[ \ln{n}-\ln{(n-1)}=\frac{1}{\xi} \] for some $\xi\in (n-1,n)$, which can be obtained through Taylor's Theorem.\qed \end{proof} \noindent\textbf{Proposition~\ref{prop:integralapproximation}.} For any natural number $n\geq 2$, we have: \begin{equation}\label{eq:reciprocalapprox} \int_1^n \frac{1}{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \frac{1}{j}\in \left[-0.7552,-\frac{1}{6}\right] \end{equation} \begin{equation}\label{eq:logarithmapprox} \int_1^n \ln{x}\,\mathrm{d}x-\left(\sum_{j=1}^{n-1} \ln{j}\right) - \frac{1}{2}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x\in \left[-\frac{1}{12}, 0.2701\right] \end{equation} \begin{equation}\label{eq:xlogxapprox} \int_1^n x\cdot \ln{x}\,\mathrm{d}x-\left(\sum_{j=1}^{n-1} j\cdot\ln{j}\right)-\frac{1}{2}\cdot\int_1^n \ln{x}\,\mathrm{d}x+\frac{1}{12}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x-\frac{n-1}{2}\in \left[-\frac{19}{72},0.1575\right]. \end{equation} \begin{proof} Let $n$ be a natural number such that $n\ge 2$. We first estimate the difference \[ \int_1^n \frac{1}{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \frac{1}{j}~~. \] To this end, we deduce the following equalities: \begin{align*} &\int_1^n \frac{1}{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \frac{1}{j} \\ = &\sum_{j=1}^{n-1}\int_j^{j+1} \left[\frac{1}{x}-\frac{1}{j}\right]\,\mathrm{d}x \\ = &\sum_{j=1}^{n-1}\int_0^{1} \left[\frac{1}{j+x}-\frac{1}{j}\right]\,\mathrm{d}x \\ = & \sum_{j=1}^{n-1}\int_0^{1} \left[-\frac{1}{j^2}\cdot x +\frac{1}{\xi^3_{j,x}}\cdot x^2\right]\,\mathrm{d}x\\ = & -\frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^2}\right)+ \sum_{j=1}^{n-1}\int_0^{1}\frac{1}{\xi^3_{j,x}}\cdot x^2\,\mathrm{d}x~~,\\ \end{align*} where $\xi_{j,x}$ is a real number in $(j, j+x)$ obtained from Taylor's Theorem with Lagrange's Remainder. The first and fourth equalities come from the linear property of Riemann Integral; the second one follows from the variable substitution $x'=x-j$; the third one follows from Taylor's Theorem. Using the fact that $\xi_{j,x}\in (j, j+1)$, one obtains that \begin{equation}\label{eq:reciprocalupperapprox} \int_1^n \frac{1}{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \frac{1}{j} \le -\frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^2}\right) + \frac{1}{3}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^3}\right) \end{equation} and \begin{equation}\label{eq:reciprocallowerapprox} \int_1^n \frac{1}{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \frac{1}{j} \ge -\frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^2}\right) + \frac{1}{3}\cdot \left(\sum_{j=2}^{n}\frac{1}{j^3}\right). \end{equation} Then (\ref{eq:reciprocalapprox}) follows from the facts that \begin{align*} \int_1^n \frac{1}{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \frac{1}{j} & \le \sum_{j=1}^{n-1}\left(-\frac{1}{2\cdot j^2} + \frac{1}{3\cdot j^3}\right)\\ & \le -\frac{1}{2\cdot 1^2} + \frac{1}{3\cdot 1^3}\\ & =-\frac{1}{6} \end{align*} and \begin{align*} \int_1^n \frac{1}{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \frac{1}{j} & \ge \sum_{j=1}^{n-1}\left(-\frac{1}{2\cdot j^2} + \frac{1}{3\cdot j^3}\right)-\frac{1}{3}+\frac{1}{3\cdot n^3}\\ & \ge -\frac{\pi^2}{12}+\frac{\alpha}{3}-\frac{1}{3}\\ & \ge -0.7552 \end{align*} where in both situations we use the fact that $2\cdot j^2\le 3\cdot j^3$ for all $j\in\Nset$. Then we consider the difference \[ \int_1^n \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \ln{j}~. 
\] First, we derive that \begin{align*} &\int_1^n \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \ln{j} \\ = &\sum_{j=1}^{n-1}\int_j^{j+1} \left[\ln{x}-\ln{j}\right]\,\mathrm{d}x \\ = &\sum_{j=1}^{n-1}\int_0^{1} \left[\ln{(j+x)}-\ln{j}\right]\,\mathrm{d}x \\ = & \sum_{j=1}^{n-1}\int_0^{1} \left[\frac{1}{j}\cdot x -\frac{1}{2\cdot \xi^2_{j,x}}\cdot x^2\right]\,\mathrm{d}x\\ = & \frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j}\right)- \sum_{j=1}^{n-1}\int_0^{1}\frac{1}{2\cdot \xi^2_{j,x}}\cdot x^2\,\mathrm{d}x \\ \end{align*} where $\xi_{j,x}$ is a real number in $(j, j+1)$ obtained from Taylor's Theorem. Using the fact that $\xi_{j,x}\in (j, j+1)$, one can obtain that \begin{align}\label{eq:logarithmupperapprox} & \int_1^n \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \ln{j} \\ \le & \frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j}\right)-\frac{1}{6}\cdot\sum_{j=2}^{n}\frac{1}{j^2}\nonumber \\ \le & \frac{1}{2}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x + \frac{1}{4}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^2}\right) - \frac{1}{6}\cdot \left(\sum_{j=2}^{n}\frac{1}{j^3}\right)\nonumber\\ &\qquad{}-\frac{1}{6}\cdot\sum_{j=2}^{n}\frac{1}{j^2} \nonumber \\ = & \frac{1}{2}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x + \sum_{j=1}^{n-1}\left(\frac{1}{12\cdot j^2}-\frac{1}{6\cdot j^3}\right)\nonumber\\ & \qquad{}+\frac{1}{3}-\frac{1}{6\cdot n^3}-\frac{1}{6\cdot n^2} \nonumber \end{align} where the second inequality follows from Inequality~(\ref{eq:reciprocallowerapprox}), and \begin{align}\label{eq:logarithmlowerapprox} & \int_1^n \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \ln{j} \\ \ge & \frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j}\right)-\frac{1}{6}\cdot\sum_{j=1}^{n-1}\frac{1}{j^2}\nonumber\\ \ge & \frac{1}{2}\cdot\int_1^n \frac{1}{x}\,\mathrm{d}x+\frac{1}{12}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^2}\right) - \frac{1}{6}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^3}\right)\nonumber\\ = & \frac{1}{2}\cdot\int_1^n \frac{1}{x}\,\mathrm{d}x + \sum_{j=1}^{n-1}\left(\frac{1}{12\cdot j^2}-\frac{1}{6\cdot j^3}\right)\nonumber \end{align} where the second inequality follows from Inequality~(\ref{eq:reciprocalupperapprox}). Then from Inequality~(\ref{eq:logarithmupperapprox}) and Inequality~(\ref{eq:logarithmlowerapprox}), one has that \begin{align*} & \int_1^n \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \ln{j} \\ \le & \frac{1}{2}\cdot\int_1^n \frac{1}{x}\,\mathrm{d}x+\sum_{j=1}^{\infty}\left(\frac{1}{12\cdot j^2}-\frac{1}{6\cdot j^3}\right)+\frac{1}{3} \\ \le & \frac{1}{2}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x+\frac{\pi^2}{72}-\frac{\alpha}{6}+\frac{1}{3} \\ \le & \frac{1}{2}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x+0.2701 \end{align*} and \begin{align*} & \int_1^n \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \ln{j} \\ \ge & \frac{1}{2}\cdot\int_1^n \frac{1}{x}\,\mathrm{d}x+\left(\frac{1}{12}-\frac{1}{6}\right)\\ \ge & \frac{1}{2}\cdot\int_1^n \frac{1}{x}\,\mathrm{d}x -\frac{1}{12} \end{align*} where in both situations we use the fact that $12\cdot j^2\le 6\cdot j^3$ for all $j\ge 2$. The inequalities above directly imply the inequalities in (\ref{eq:logarithmapprox}). Finally, we consider the difference \[ \int_1^n x\cdot \ln{x}\,\mathrm{d}x-\sum_{j=m}^{n-1} j\cdot\ln{j}~~. 
\] Following similar approaches, we derive that for all natural numbers $n\ge 2$, \begin{align*} &\int_1^n x\cdot \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} j\cdot\ln{j} \\ = &\sum_{j=1}^{n-1}\int_j^{j+1} \left[x\cdot \ln{x}-j\cdot\ln{j}\right]\,\mathrm{d}x \\ = &\sum_{j=1}^{n-1}\int_0^{1} \left[(j+x)\cdot \ln{(j+x)}-j\cdot\ln{j}\right]\,\mathrm{d}x \\ = & \sum_{j=1}^{n-1}\int_0^{1} \left[(\ln{j}+1)\cdot x +\frac{1}{2\cdot \xi_{j,x}}\cdot x^2\right]\,\mathrm{d}x\\ = & \frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\ln{j}\right)+\frac{n-1}{2}+ \sum_{j=1}^{n-1}\int_0^{1}\frac{1}{2\cdot \xi_{j,x}}\cdot x^2\,\mathrm{d}x \\ \end{align*} where $\xi_{j,x}\in (j, j+1)$. Thus, one obtains that \begin{align}\label{eq:xlogxupperapprox} & \int_1^n x\cdot \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} j\cdot\ln{j} \le \\ & \qquad\frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\ln{j}\right)+\frac{n-1}{2}+ \frac{1}{6}\cdot\sum_{j=1}^{n-1}\frac{1}{j}\nonumber \end{align} and \begin{align}\label{eq:xlogxlowerapprox} &\int_1^n x\cdot \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} j\cdot\ln{j}\ge \\ &\qquad\frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\ln{j}\right)+\frac{n-1}{2}+\frac{1}{6}\cdot\sum_{j=1}^{n-1}\frac{1}{j}-\frac{1}{6}+\frac{1}{6\cdot n}.\nonumber \end{align} By plugging Inequalities in~(\ref{eq:reciprocallowerapprox}) and~(\ref{eq:logarithmlowerapprox}) into Inequality~(\ref{eq:xlogxupperapprox}), one obtains that \begin{align*} & \int_1^n x\cdot \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} j\cdot\ln{j} \\ \le & \frac{1}{2}\cdot\left[\int_1^n \ln{x}\,\mathrm{d}x-\frac{1}{2}\cdot\int_1^n \frac{1}{x}\,\mathrm{d}x - \sum_{j=1}^{n-1}\left(\frac{1}{12\cdot j^2}-\frac{1}{6\cdot j^3}\right)\right] \\ & {}+\frac{n-1}{2}\\ & {}+\frac{1}{6}\cdot\left[\int_1^n \frac{1}{x}\,\mathrm{d}x +\frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^2}\right) - \frac{1}{3}\cdot \left(\sum_{j=2}^{n}\frac{1}{j^3}\right)\right] \\ \le & \frac{1}{2}\cdot\int_1^n \ln{x}\,\mathrm{d}x-\frac{1}{12}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x+\frac{n-1}{2}\\ & {}+\sum_{j=1}^{n-1}\left(\frac{1}{24\cdot j^2}+\frac{1}{36\cdot j^3}\right)+\frac{1}{18}-\frac{1}{18\cdot n^3} \\ \le & \frac{1}{2}\cdot\int_1^n \ln{x}\,\mathrm{d}x-\frac{1}{12}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x+\frac{n-1}{2}+\frac{\pi^2}{144}+\frac{\alpha}{36}+\frac{1}{18} \\ \le & \frac{1}{2}\cdot\int_1^n \ln{x}\,\mathrm{d}x-\frac{1}{12}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x+\frac{n-1}{2} + 0.1575 \end{align*} for all natural numbers $n\ge 2$. 
Similarly, by plugging Inequalities in (\ref{eq:reciprocalupperapprox}) and (\ref{eq:logarithmupperapprox}) into Inequality~(\ref{eq:xlogxlowerapprox}), one obtains \begin{align*} & \int_1^n x\cdot \ln{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} j\cdot\ln{j} \\ \ge & \frac{1}{2}\cdot\left[\int_1^n \ln{x}\,\mathrm{d}x-\frac{1}{2}\cdot\int_1^n \frac{1}{x}\,\mathrm{d}x - \sum_{j=1}^{n-1}\left(\frac{1}{12\cdot j^2}-\frac{1}{6\cdot j^3}\right)-\frac{1}{3}\right] \\ & {}+\frac{n-1}{2}\\ & {}+\frac{1}{6}\cdot\left[\int_1^n \frac{1}{x}\,\mathrm{d}x +\frac{1}{2}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^2}\right) - \frac{1}{3}\cdot \left(\sum_{j=1}^{n-1}\frac{1}{j^3}\right)\right]-\frac{1}{6} \\ \ge & \frac{1}{2}\cdot\int_1^n \ln{x}\,\mathrm{d}x-\frac{1}{12}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x+\frac{n-1}{2}\\ & {}+\sum_{j=1}^{n-1}\left(\frac{1}{24\cdot j^2}+\frac{1}{36\cdot j^3}\right)-\frac{1}{3} \\ \ge & \frac{1}{2}\cdot\int_1^n \ln{x}\,\mathrm{d}x-\frac{1}{12}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x+\frac{n-1}{2}+\frac{1}{24}+\frac{1}{36}-\frac{1}{3}\\ = & \frac{1}{2}\cdot\int_1^n \ln{x}\,\mathrm{d}x-\frac{1}{12}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x+\frac{n-1}{2} -\frac{19}{72} \end{align*} Then the inequalities in (\ref{eq:xlogxapprox}) are clarified.\qed \end{proof} \noindent{\em Example~\ref{ex:overapprox}.} Consider the summation \[ \displaystyle\sum_{j=\left\lceil\frac{n}{2}\right\rceil}^{n-1}\ln{j}+ \displaystyle\sum_{j=\left\lfloor\frac{n}{2}\right\rfloor}^{n-1} \ln{j}\quad (n\ge 4). \] By Proposition~\ref{prop:integralapproximation}, we can over-approximate it as \[ 2\cdot\left(\Gamma_{\ln{\mathfrak{n}}}\left(n\right)+\frac{1}{12}\right) -\left(\Gamma_{\ln{\mathfrak{n}}}\left(\left\lceil\frac{n}{2}\right\rceil\right)+\Gamma_{\ln{\mathfrak{n}}}\left(\left\lfloor\frac{n}{2}\right\rfloor\right)-0.5402\right) \] which is equal to \begin{align*} & 2\cdot n\cdot\ln{n}-2\cdot n-\ln{n}-\left\lceil\frac{n}{2}\right\rceil\cdot\ln{\left\lceil\frac{n}{2}\right\rceil}-\left\lfloor\frac{n}{2}\right\rfloor\cdot\ln{\left\lfloor\frac{n}{2}\right\rfloor}\\ &{}+\left\lceil\frac{n}{2}\right\rceil+\left\lfloor\frac{n}{2}\right\rfloor+\frac{\ln{\left\lfloor\frac{n}{2}\right\rfloor}}{2}+\frac{\ln{\left\lceil\frac{n}{2}\right\rceil}}{2}+\frac{1}{6}+0.5402. \end{align*} Then using Proposition~\ref{prop:lnflooroverapprox}, we can further obtain the following over-approximation \begin{align*} & 2\cdot n\cdot\ln{n}-2\cdot n-\ln{n}+0.7069-\frac{n}{2}\cdot\left(\ln{n}-\ln{2}\right)-\frac{n-1}{2}\cdot\left(\ln{n}-\ln{2}-\frac{1}{n-1}\right)\\ &{}+\frac{n+1}{2}+\frac{n}{2}+\frac{\ln{n}-\ln{2}}{2}+\frac{\ln{n}-\ln{2}+\frac{1}{n}}{2} \end{align*} which is roughly $n\cdot\ln{n}-(1-\ln{2})\cdot n+\frac{1}{2}\cdot\ln{n}+0.6672+\frac{1}{2\cdot n}$.\qed \section{Proofs for Sect.~\ref{sect:unisynth}}\label{app:unisynth} \noindent\textbf{Lemma~\ref{lemm:unitrans}.} Let $\mathfrak{f}\in\{\ln{\mathfrak{n}},\mathfrak{n},\mathfrak{n}\cdot\ln{\mathfrak{n}}\}$ and $c$ be a constant. For all univariate recurrence expressions $\mathfrak{g}$, there exists pseudo-polynomials $p$ and $q$ such that coefficients (i.e., $a_i,b_i$'s in~(\ref{eq:pseudopoly})) of $q$ are all non-negative, $C_q>0$ and the following assertion holds: for all $d>0$ and for all $n\ge 2$, with $h=d\cdot \mathsf{Subst}({\mathfrak{f}})+c$, the inequality $\mathsf{OvAp}(\mathfrak{g}, h)(n)\le h(n)$ is equivalent to $d\cdot p(n)\ge q(n)$. \begin{proof} From Definition~\ref{def:unioverapprox}, $n\mapsto n\cdot(n-1)\cdot\mathsf{OvAp}(\mathfrak{g}, h)(n)$ is a pseudo-polynomial. 
Simple rearrangement of terms in inequality $\mathsf{OvAp}(\mathfrak{g}, h)(n)\le h(n)$ gives the desired pseudo-polynomials. Moreover, the fact that all coefficients in $\mathfrak{g}$ (from~(\ref{eq:unirecurrel})) are positive, is used to derive that all coefficients of $q$ are non-negative and $C_q>0$.\qed \end{proof} \noindent\textbf{Proposition~\ref{prop:unisufflarge}.} Let $p,q$ be pseudo-polynomials such that $C_q>0$ and all coefficients of $q$ are non-negative. Then there exists a real number $d>0$ such that $d\cdot p(n)\ge q(n)$ for sufficiently large $n$ iff $\mathrm{deg}(p)\ge \mathrm{deg}(q)$ and $C_p>0$. \begin{proof} We present the two directions of the proof. (``\emph{If}'':) Suppose that $\mathrm{deg}(p)\ge \mathrm{deg}(q)$ and $C_p>0$. Then the result follows directly from the facts that (i) $\frac{q(n)}{p(n)}>0$ for sufficiently large $n$ and (ii) $\lim\limits_{n\rightarrow\infty}\frac{q(n)}{p(n)}$ exists and is non-negative. (``\emph{Only-if}'':) Let $d$ be a positive real number such that $d\cdot p(n)\ge q(n)$ for sufficiently large $n$. Then $C_p>0$, or otherwise $d\cdot p(n)$ is either constantly zero or negative for sufficiently large $n$. Moreover, $\mathrm{deg}(p)\ge \mathrm{deg}(q)$, since otherwise $\lim\limits_{n\rightarrow\infty}\frac{q(n)}{p(n)}=\infty$.\qed \end{proof} \noindent\textbf{Proposition~\ref{prop:unisufflargeN}.} Consider two univariate pseudo-polynomials $p,q$ such that $\mathrm{deg}(p)\ge \mathrm{deg}(q)$, all coefficients of $q$ are non-negative and $C_p,C_q>0$. Then given any $\epsilon\in (0,1)$, \[ \frac{q(n)}{p(n)}\le \frac{\mathbf{1}_{\mathrm{deg}(p)=\mathrm{deg}(q)}\cdot \frac{C_q}{C_p}+\epsilon}{1-\epsilon} \] for all $n\ge N_{\epsilon,p,q}$ (for $N_{\epsilon,p,q}$ of Definition~\ref{def:unisuffN}). \begin{proof} Let $p,q$ be given in Definition~\ref{def:unisuffN}. Fix an arbitrary $\epsilon\in (0,1)$ and let $N_{\epsilon,p,q}$ be given in Definition~\ref{def:unisuffN}. Then for all $n\ge N_{\epsilon,p,q}$, (i) both $p(n),q(n)$ are positive and (ii) \begin{align*} \frac{q(n)}{\overline{p}(n)} &\le \sum_{i=1}^{k'} a'_i\cdot \frac{N^{i}\cdot \ln{N}}{\overline{p}(N)}+\sum_{i=1}^{\ell'} b'_i\cdot \frac{N^{i}}{\overline{p}(N)}\\ &\le \mathbf{1}_{\mathrm{deg}(p)=\mathrm{deg}(q)}\cdot\frac{C_q}{C_p}+\epsilon \end{align*} and \begin{align*} \frac{p(n)}{\overline{p}(n)} &\ge 1-\left[-1+\sum_{i=1}^{k} |a_i|\cdot \frac{N^{i}\cdot \ln{N}}{\overline{p}(N)}+\sum_{i=1}^{\ell} |b_i|\cdot \frac{N^{i}}{\overline{p}(N)}\right]\\ &\ge 1-\epsilon. \end{align*} It follows that for all $n\ge N_{\epsilon,p,q}$, \[ \frac{q(n)}{p(n)}\le \frac{\mathbf{1}_{\mathrm{deg}(p)=\mathrm{deg}(q)}\cdot \frac{C_q}{C_p}+\epsilon}{1-\epsilon}~~. \] The desired result follows.\qed \end{proof} \noindent\textbf{Theorem~\ref{thm:soundnessunidec}.}[Soundness for $\mbox{\sl UniDec}$] If $\mbox{\sl UniDec}$ outputs ``$\mbox{\sl yes}$'', then there exists a univariate guess-and-check function in form~(\ref{eq:uniguess}) for the inputs $G$ and $\mathfrak{f}$. The algorithm is a linear-time algorithm in the size of the input recurrence relation. \begin{proof} From Definition~\ref{def:uniguess} and the special form (\ref{eq:uniguess}) for univariate guess-and-check functions, a function in form (\ref{eq:uniguess}) which satisfies the inductive argument of Definition~\ref{def:uniguess} can be modified to satisfy also the base condition of Definition~\ref{def:uniguess} by simply raising $d$ to a sufficiently large amount. 
Then the correctness of the algorithm follows from Theorem~\ref{thm:uniguess} and the sufficiency of Proposition~\ref{prop:unisufflarge}. Furthermore, the algorithm runs in linear time since the transformation from the inequality $\mathsf{OvAp}(\mathfrak{g}, h)(n)\le h(n)$ into $d\cdot p(n)\ge q(n)$ (cf. Lemma~\ref{lemm:unitrans}) takes linear time in the size of the input recurrence relation. \qed \end{proof} \smallskip \noindent\textbf{Theorem~\ref{thm:soundnessunisynth}.}[Soundness for $\mbox{\sl UniSynth}$] If the algorithm $\mbox{\sl UniSynth}$ outputs a real number $d$, then $d\cdot \mathsf{Subst}(\mathfrak{f})+c$ is a univariate guess-and-check function for $G$. \begin{proof} Directly from the construction of the algorithm, Theorem~\ref{thm:uniguess}, Proposition~\ref{prop:unisufflarge} and Proposition~\ref{prop:unisufflargeN}.\qed \end{proof} \section{Detailed Experimental Results}\label{app:experiments} The detailed experimental results are given in Table~\ref{tbl:detailedexperiments}. We use $\checkmark$ to represent $\mbox{\sl yes}$ and $\times$ for $\mbox{\sl fail}$. In addition to Table~\ref{tab:experiments}, we include values for $N_{\epsilon,p,q}$ in Definition~\ref{def:unisuffN}. For the separable bivariate examples, recall that $n$ does not change, and in these examples, the reduction to the univariate case is the function of $m$. \begin{table*} \centering \begin{tabular}{ |c|c|c|c|c|c|c| } \hline \multirow{2}{*}{\sc Program} & \multirow{2}{*}{\sc $\mathfrak{f}$} & \multirow{2}{*}{\sc UniDec } & \multicolumn{4}{|c|}{\sc UniSynth(\checkmark)}\\ \cline{4-7} & & & $\epsilon$ & $N_{\epsilon,p,q}$ & $d$ & $d_{100}$ \\ \hline \hline \multirow{4}{*}{\sc R.-Sear.} & \multirow{4}{*}{\sc $\ln \mathfrak{n}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $13$ & $40.107$ & \multirow{4}{*}{\sc $15.129$ } \\ \cline{4-6} &&& $0.3$ & $25$ & $28.363$ & \\ \cline{4-6} &&& $0.1$ & $97$ & $21.838$ & \\ \cline{4-6} &&& $0.01$ & $1398$ & $19.762$ &\\ \hline \hline \multirow{6}{*}{\sc Q.-Sort} & $\ln \mathfrak{n}$ & $\times$ & \multirow{2}{*}{\sc -} & \multirow{2}{*}{\sc -} & \multirow{2}{*}{\sc -} &\multirow{2}{*}{\sc -} \\ \cline{2-3} & $\mathfrak{n}$ & $\times$ & & & & \\ \cline{2-7} & \multirow{4}{*}{\sc $\mathfrak{n}\ln \mathfrak{n}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $10$ & $9.001$ & \multirow{4}{*}{\sc $3.172$ } \\ \cline{4-6} &&& $0.3$ & $21$ & $6.143$ & \\ \cline{4-6} &&& $0.1$ & $91$ & $4.556$ & \\ \cline{4-6} &&& $0.01$ & $1458$ & $4.051$ &\\ \hline \hline \multirow{5}{*}{\sc Q.-Select} & $\ln \mathfrak{n}$ & $\times$ & - & - & - & - \\ \cline{2-7} & \multirow{4}{*}{\sc $\mathfrak{n}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $33$ & $17.001$ & \multirow{4}{*}{\sc $7.909$ } \\ \cline{4-6} &&& $0.3$ & $54$ & $11.851$ & \\ \cline{4-6} &&& $0.1$ & $160$ & $9.001$ & \\ \cline{4-6} &&& $0.01$ & $1600$ & $8.091$ &\\ \hline \hline \multirow{6}{*}{\sc Diam. A} & $\ln \mathfrak{n}$ & $\times$ & \multirow{2}{*}{\sc -} & \multirow{2}{*}{\sc -} & \multirow{2}{*}{\sc -} & \multirow{2}{*}{\sc -} \\ \cline{2-3} & $\mathfrak{n}$ & $\times$ & & & & \\ \cline{2-7} & \multirow{4}{*}{\sc $\mathfrak{n}\ln \mathfrak{n}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $3$ & $9.001$ & \multirow{4}{*}{\sc $4.525$ } \\ \cline{4-6} &&& $0.3$ & $3$ & $6.143$ & \\ \cline{4-6} &&& $0.1$ & $4$ & $4.556$ & \\ \cline{4-6} &&& $0.01$ & $4$ & $4.525$ &\\ \hline \hline \multirow{5}{*}{\sc Diam. 
B} & $\ln \mathfrak{n}$ & $\times$ & - & - & - & - \\ \cline{2-7} & \multirow{4}{*}{\sc $\mathfrak{n}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $9$ & $13.001$ & \multirow{4}{*}{\sc $5.918$ } \\ \cline{4-6} &&& $0.3$ & $14$ & $9.001$ & \\ \cline{4-6} &&& $0.1$ & $40$ & $6.778$ & \\ \cline{4-6} &&& $0.01$ & $400$ & $6.071$ &\\ \hline \hline \multirow{6}{*}{\sc Sort-Sel.} & $\ln \mathfrak{n}$ & $\times$ & \multirow{2}{*}{\sc -} & \multirow{2}{*}{\sc -} & \multirow{2}{*}{\sc -} & \multirow{2}{*}{\sc -} \\ \cline{2-3} & $\mathfrak{n}$ & $\times$ & & & & \\ \cline{2-7} & \multirow{4}{*}{\sc $\mathfrak{n}\ln \mathfrak{n}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $18$ & $50.052$ & \multirow{4}{*}{\sc $16.000$} \\ \cline{4-6} &&& $0.3$ & $29$ & $24.852$ & \\ \cline{4-6} &&& $0.1$ & $87$ & $17.313$ & \\ \cline{4-6} &&& $0.01$ & $866$ & $16.000$ & \\ \hline \hline \multirow{4}{*}{\sc Coupon} & \multirow{4}{*}{\sc $\mathfrak{n}\cdot\ln \mathfrak{m}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $2$ & $3.001$ & \multirow{4}{*}{\sc $0.910$ } \\ \cline{4-6} &&& $0.3$ & $2$ & $1.858$ & \\ \cline{4-6} &&& $0.1$ & $2$ & $1.223$ & \\ \cline{4-6} &&& $0.01$ & $2$ & $1.021$ &\\ \hline \hline \multirow{4}{*}{\sc Res. A} & \multirow{4}{*}{\sc $\mathfrak{n}\cdot\ln \mathfrak{m}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $2$ & $6.437$ & \multirow{4}{*}{\sc $2.472$ } \\ \cline{4-6} &&& $0.3$ & $2$ & $4.312$ & \\ \cline{4-6} &&& $0.1$ & $2$ & $3.132$ & \\ \cline{4-6} &&& $0.01$ & $2$ & $2.756$ &\\ \hline \hline \multirow{5}{*}{\sc Res. B} & $\ln \mathfrak{m}$ & $\times$ & - & - & - & -\\ \cline{2-7} &\multirow{4}{*}{\sc $\mathfrak{m}$} & \multirow{4}{*}{\sc \checkmark} & $0.5$ & $2$ & $6.437$ & \multirow{4}{*}{\sc $2.691$ } \\ \cline{4-6} &&& $0.3$ & $2$ & $4.312$ & \\ \cline{4-6} &&& $0.1$ & $2$ & $3.132$ & \\ \cline{4-6} &&& $0.01$ & $2$ & $2.756$ &\\ \hline \end{tabular} \caption{Detailed experimental results where all running times (averaged over $5$ runs) are less than $0.02$ seconds (between $0.01$ and $0.02$ seconds).} \label{tbl:detailedexperiments} \end{table*} \section{Expected-Runtime Analysis}\label{sect:expruntime} \vspace{-1em} We focus on synthesizing logarithmic, linear, and almost-linear asymptotic bounds for recurrence relations. Our goal is to decide and synthesize asymptotic bounds in the simple form: $d\cdot \mathfrak{f}+\mathfrak{g}, \mathfrak{f}\in\{\ln{\mathfrak{n}},\mathfrak{n},\mathfrak{n}\cdot\ln{\mathfrak{n}}\}$ . Informally, $\mathfrak{f}$ is the major term for time complexity, $d$ is the coefficient of $\mathfrak{f}$ to be synthesized, and $\mathfrak{g}$ is the time complexity for the base case specified in (\ref{eq:unirecurrel}) or (\ref{eq:birecurrel}). \smallskip\noindent{\bf Univariate Case:} The algorithmic problem in univariate case is as follows: \begin{compactitem} \item {\em Input:} a univariate recurrence relation $G$ taking the form (\ref{eq:unirecurrel}) and an expression $\mathfrak{f}\in\{\ln{\mathfrak{n}},\mathfrak{n},\mathfrak{n}\cdot\ln{\mathfrak{n}}\}$. \item {\em Output: Decision problem.} Output ``$\mbox{\sl yes}$'' if $T_G \in \mathcal{O}(\mathsf{Subst}(\mathfrak{f}))$, and ``$\mbox{\sl fail}$'' otherwise. \item {\em Output: Quantitative problem.} A positive real number $d$ such that \begin{equation}\label{eq:uniguess} T_G(n) \leq d\cdot \mathsf{Subst}(\mathfrak{f})(n)+c \end{equation} for all $n \geq 1$, or ``$\mbox{\sl fail}$'' otherwise, where $c$ is from (\ref{eq:unirecurrel}). 
\end{compactitem} \begin{remark} First note that while in the problem description we consider the form $\mathfrak{f}$ as part of the input for simplicity, since there are only three possibilities we can simply enumerate them, and thus have only the recurrence relation as input. Second, in the algorithmic problem above, w.l.o.g., we consider that every $\mathfrak{e}$ in (\ref{eq:unirecurrel}) or (\ref{eq:birecurrel}) involves at least one $\mathrm{T}(\centerdot)$-term and one non-$\mathrm{T}(\centerdot)$-term; this is natural since, for algorithms with recursion, at least one $\mathrm{T}(\centerdot)$-term should be present for the recursive call and at least one non-$\mathrm{T}(\centerdot)$-term for the non-recursive base step. \qed \end{remark} \smallskip\noindent{\bf Bivariate Case:} The bivariate-case problem is an extension of the univariate one, and hence the problem definitions are similar, and we present them succinctly below. \begin{compactitem} \item {\em Input:} a bivariate recurrence relation $G$ taking the form (\ref{eq:birecurrel}) and an expression $\mathfrak{f}$ (similar to the univariate case). \item {\em Output: Decision problem.} Output ``$\mbox{\sl yes}$'' if $T_G \in \mathcal{O}(\mathsf{Subst}(\mathfrak{f}))$, and ``$\mbox{\sl fail}$'' otherwise; \item {\em Output: Quantitative problem.} A positive real number $d$ such that $T_G(n,m) \leq d\cdot \mathsf{Subst}(\mathfrak{f})(n,m) +c\cdot\mathsf{Subst}(\mathfrak{h})(n)$ for all $n,m \geq 1$, or ``$\mbox{\sl fail}$'' otherwise, where $c,\mathfrak{h}$ are from (\ref{eq:birecurrel}). Note that in the expression above the term $\mathfrak{b}$ does not appear as it can be captured with $\mathfrak{f}$ itself. \end{compactitem} Recall that in the above algorithmic problems obtaining the finite behaviour of the recurrence relations is easy (through evaluation of the recurrences using dynamic programming), and the interesting aspect is to decide the asymptotic infinite behaviour. \section{Related Work} \vspace{-1em} Automated program analysis is a very important problem with a long tradition~\cite{DBLP:journals/cacm/Wegbreit75}. The following works consider various approaches for automated worst-case bounds~\cite{DBLP:conf/aplas/HoffmannH10,DBLP:conf/esop/HoffmannH10,DBLP:conf/popl/HofmannJ03,DBLP:conf/esop/HofmannJ06,DBLP:conf/csl/HofmannR09,DBLP:conf/fm/JostLHSH09,DBLP:conf/popl/JostHLH10,Hoffman1,DBLP:conf/icfp/AvanziniLM15,DBLP:conf/se/SinnZV16} for amortized analysis, and the SPEED project~\cite{SPEED1,SPEED2,DBLP:conf/cav/GulavaniG08} for non-linear bounds using abstract interpretation. All these works focus on the worst-case analysis, and do not consider expected-runtime analysis. Our main contribution is automated analysis of recurrence relations. Approaches for recurrence relations have also been considered in the literature. Wegbreit~\cite{DBLP:journals/cacm/Wegbreit75} considered solving recurrence relations through either simple difference equations or generating functions. Zimmermann and Zimmermann~\cite{Zimmermann1989} considered solving recurrence relations by transforming them into difference equations. Grobauer~\cite{DBLP:conf/icfp/Grobauer01} considered generating recurrence relations from DML for the worst-case analysis. Flajolet~\emph{et al.}~\cite{DBLP:journals/dam/FlajoletGT92} considered allocation problems. Flajolet~\emph{et al.}~\cite{DBLP:journals/tcs/FlajoletSZ91} considered solving recurrence relations for randomization of combinatorial structures (such as trees) through generating functions.
The COSTA project~\cite{DBLP:journals/entcs/AlbertAGGPRRZ09,DBLP:conf/sas/AlbertAGP08,DBLP:conf/esop/AlbertAGPZ07} transforms Java bytecode into recurrence relations and solves them through ranking functions. Moreover, The PURRS tool~\cite{BagnaraPZZ05} addresses finite linear recurrences (with bounded summation), and some restricted linear infinite recurrence relations (with unbounded summation). Our approach is quite different because we consider analyzing recurrence relations arising from randomized algorithms and expected-runtime analysis through over-approximation of unbounded summations through integrals, whereas previous approaches either consider recurrence relations for worst-case bounds or combinatorial structures, or use generating functions or difference equations to solve the recurrence relations. For intraprocedural analysis ranking functions have been widely studied~\cite{BG05,DBLP:conf/cav/BradleyMS05,DBLP:conf/tacas/ColonS01,DBLP:conf/vmcai/PodelskiR04,DBLP:conf/pods/SohnG91,DBLP:conf/vmcai/Cousot05,DBLP:journals/fcsc/YangZZX10,DBLP:journals/jossac/ShenWYZ13}, which have then been extended to non-recursive probabilistic programs as ranking supermartingales~\cite{SriramCAV,HolgerPOPL,DBLP:conf/popl/ChatterjeeFNH16,DBLP:conf/cav/ChatterjeeFG16,ChatterjeeNZ2017,DBLP:journals/corr/ChatterjeeF17}. Such approaches are related to almost-sure termination, and not deriving optimal asymptotic expected-runtime bounds (such as $\mathcal{O}(\log n)$, $\mathcal{O}(n \log n)$). Proof rules have also been considered for recursive (probabilistic) programs in~\cite{DBLP:journals/fac/Hesselink94,JonesPhdThesis,DBLP:conf/lics/OlmedoKKM16}, but these methods cannot be automated and require manual proofs. \vspace{-1.5em} \section{Conclusion} \vspace{-1em} In this work we considered efficient algorithms for automated analysis of randomized recurrences for logarithmic, linear, and almost-linear bounds. Our work gives rise to a number of interesting questions. First, an interesting theoretical direction of future work would be to consider more general randomized recurrence relations (such as with more than two variables, or interaction between the variables). While the above problem is of theoretical interest, most interesting examples are already captured in our class of randomized recurrence relations as mentioned above. Another interesting practical direction would be automated techniques to derive recurrence relations from randomized recursive programs. \vspace{-1.5em} \subsubsection*{Acknowledgements} We thank all reviewers for valuable comments. The research is partially supported by Vienna Science and Technology Fund (WWTF) ICT15-003, Austrian Science Fund (FWF) NFN Grant No. S11407-N23 (RiSE/SHiNE), ERC Start grant (279307: Graph Games), the Natural Science Foundation of China (NSFC) under Grant No. 61532019 and the CDZ project CAP (GZ 1023). \section{Experimental Results}\label{sect:experiments} \vspace{-1em} We consider the classical examples illustrated in Section~\ref{sec:motivatinguni} and Section~\ref{sec:motivatingbi}. In Table~\ref{tab:experiments} for experimental results we consider the following recurrence relations $G$: {\sc R.-Sear.} corresponds to the recurrence relation (\ref{eq:relrandsearch}) for Example~\ref{ex:randsearch}; {\sc Q.-Sort} corresponds to the recurrence relation (\ref{eq:relquicksort}) for Example~\ref{ex:quicksort}; {\sc Q.-Select} corresponds to the recurrence relation (\ref{eq:relquickselect}) for Example~\ref{ex:quickselect}; {\sc Diam. A} (resp. 
{\sc Diam. B}) corresponds to the recurrence relation (\ref{eq:reldiametera}) (resp. the recurrence relation (\ref{eq:reldiameterb})) for Example~\ref{ex:diameter}; {\sc Sort-Sel.} corresponds to recurrence relation (\ref{eq:relsortselect}) for Example~\ref{ex:sortselect}, where we use the result from setting $\epsilon=0.01$ in {\sc Q.-Select}; {\sc Coupon} corresponds to the recurrence relation (\ref{eq:relcoupon}) for Example~\ref{ex:coupon}; {\sc Res. A} (resp. {\sc Res. B}) corresponds to the recurrence relation (\ref{eq:relresourcea}) (resp. the recurrence relation (\ref{eq:relresourceb})) for Example~\ref{ex:channel}. In the table, $\mathfrak{f}$ specifies the input asymptotic bound, the $\epsilon$ and $\mbox{\sl Dec}$ entries specify whether we use the decision algorithm $\mbox{\sl UniDec}$ or the synthesis algorithm $\mbox{\sl UniSynth}$ with the given $\epsilon$ value, and $d$ gives the value synthesized w.r.t.\ the given $\epsilon$ ($\checkmark$ for $\mbox{\sl yes}$). We describe $d_{100}$ below. We need approximations for constants such as $e$ and $\ln{2}$, and use the interval $[2.7182,2.7183]$ (resp., $[0.6931, 0.6932]$) for tight approximation of $e$ (resp., $\ln{2}$). \smallskip\noindent{\em The value $d_{100}$.} For our synthesis algorithm we obtain the value $d$. The optimal value of the constant associated with the asymptotic bound, denoted $d^*$, is defined as follows. For $z\geq 2$, let $d_{z}:=\max\left\{\frac{T_G(n)-c}{\mathsf{Subst}(\mathfrak{f})(n)}\mid 2\le n\le z\right\}$ ($c$ is from (\ref{eq:unirecurrel})). Then the sequence $d_z$ is increasing in $z$, and its limit is the optimal constant, i.e., $d^* =\lim_{z \to \infty} d_z$. We consider $d_{100}$ as a lower bound on $d^*$ to compare against the value of $d$ we synthesize. In other words, $d_{100}$ is the minimal value such that (\ref{eq:uniguess}) holds for $1\le n\le 100$, whereas for $d^*$ it must hold for all $n$, and hence $d^* \geq d_{100}$. Our experimental results show that the $d$ values we synthesize for $\epsilon=0.01$ are quite close to the optimal values. We performed our experiments on Intel(R) Core(TM) i7-4510U CPU, 2.00GHz, 8GB RAM. All numbers in Table~\ref{tab:experiments} are over-approximated up to $10^{-3}$, and the running time of each experiment is less than $0.02$ seconds. From Table~\ref{tab:experiments}, we can see that the optimal $d$ values are effectively over-approximated. For example, for {\sc Quick-Sort} (Eq.~(\ref{eq:relquicksort})) (i.e., {\sc Q.-Sort} in the table), our algorithm detects $d=4.051$ and the optimal one lies somewhere in $[3.172, 4.051]$. The experimental results show that we obtain the results extremely efficiently (less than $1/50$-th of a second). For further details see Table~\ref{tbl:detailedexperiments} in Appendix~\ref{app:experiments}. \begin{table}[h] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Recur. Rel. & $\mathfrak{f}$ & $\epsilon,\mbox{\sl Dec} $ & $d$ & $d_{100}$ & Recur. Rel.
& $\mathfrak{f}$ & $\epsilon,\mbox{\sl Dec} $ & $d$ & $d_{100}$ \\ \hline \multirow{5}{*}{{\sc R.-Sear.}} & \multirow{5}{*}{$\ln{\mathfrak{n}}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$15.129$} & \multirow{5}{*}{{\sc Sort-Sel.}} & \multirow{5}{*}{$\mathfrak{n}\cdot\ln{\mathfrak{n}}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$16.000$} \\ \cline{3-4}\cline{8-9} & & $0.5$ & $40.107$ & & & & $0.5$ & $50.052$ & \\ \cline{3-4}\cline{8-9} & & $0.3$ & $28.363$ & & & & $0.3$ & $24.852$ & \\ \cline{3-4}\cline{8-9} & & $0.1$ & $21.838$ & & & & $0.1$ & $17.313$ & \\ \cline{3-4}\cline{8-9} & & $0.01$ & $19.762$ & & & & $0.01$ & $16.000$ & \\ \hline \multirow{5}{*}{{\sc Q.-Sort}} & \multirow{5}{*}{$\mathfrak{n}\cdot\ln{\mathfrak{n}}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$3.172$} & \multirow{5}{*}{{\sc Coupon}} & \multirow{5}{*}{$\mathfrak{n}\cdot\ln{\mathfrak{m}}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$0.910$} \\ \cline{3-4}\cline{8-9} & & $0.5$ & $9.001$ & & & & $0.5$ & $3.001$ & \\ \cline{3-4}\cline{8-9} & & $0.3$ & $6.143$ & & & & $0.3$ & $1.858$ & \\ \cline{3-4}\cline{8-9} & & $0.1$ & $4.556$ & & & & $0.1$ & $1.223$ & \\ \cline{3-4}\cline{8-9} & & $0.01$ & $4.051$ & & & & $0.01$ & $1.021$ & \\ \hline \multirow{5}{*}{{\sc Q.-Select}} & \multirow{5}{*}{$\mathfrak{n}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$7.909$} & \multirow{5}{*}{{\sc Res. A}} & \multirow{5}{*}{$\mathfrak{n}\cdot\ln{\mathfrak{m}}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$2.472$} \\ \cline{3-4}\cline{8-9} & & $0.5$ & $17.001$ & & & & $0.5$ & $6.437$ & \\ \cline{3-4}\cline{8-9} & & $0.3$ & $11.851$ & & & & $0.3$ & $4.312$ & \\ \cline{3-4}\cline{8-9} & & $0.1$ & $9.001$ & & & & $0.1$ & $3.132$ &\\ \cline{3-4}\cline{8-9} & & $0.01$ & $8.091$ & & & & $0.01$ & $2.756$ &\\ \hline \multirow{5}{*}{{\sc Diam. A}} & \multirow{5}{*}{$\mathfrak{n}\cdot\ln{\mathfrak{n}}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$4.525$} & \multirow{5}{*}{{\sc Res. B}} & \multirow{5}{*}{$\mathfrak{m}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$2.691$} \\ \cline{3-4}\cline{8-9} & & $0.5$ & $9.001$ & & & & $0.5$ & $6.437$ & \\ \cline{3-4}\cline{8-9} & & $0.3$ & $6.143$ & & & & $0.3$ & $4.312$ & \\ \cline{3-4}\cline{8-9} & & $0.1$ & $4.556$ & & & & $0.1$ & $3.132$ & \\ \cline{3-4}\cline{8-9} & & $0.01$ & $4.525$ & & & & $0.01$ & $2.756$ & \\ \hline \multirow{5}{*}{{\sc Diam. B}} & \multirow{5}{*}{$\mathfrak{n}$} & $\mbox{\sl UniDec}$ & $\mbox{\sl \checkmark}$ & \multirow{5}{*}{$5.918$} & \multirow{5}{*}{{-}} & \multirow{5}{*}{-} & - & - & \multirow{5}{*}{-} \\ \cline{3-4}\cline{8-9} & & $0.5$ & $13.001$ & & & & - & - &\\ \cline{3-4}\cline{8-9} & & $0.3$ & $9.001$ & & & & - & - &\\ \cline{3-4}\cline{8-9} & & $0.1$ & $6.778$ & & & & - & - &\\ \cline{3-4}\cline{8-9} & & $0.01$ & $6.071$ & & & & - & - & \\ \hline \end{tabular} \caption{Experimental results where all running times (averaged over $5$ runs) are less than $0.02$ seconds, between $0.01$ and $0.02$ in all cases.} \label{tab:experiments} \end{table} \subsection{Discussion} \vspace{-0.5em} We discuss some aspects of the recurrence relations we consider and possible generalizations. 
\begin{compactenum} \item {\em Recurrence relations.} We note that the recurrence relations we consider capture various classical randomized programs, ranging from sorting ({\sc Quick-Sort}) and searching ({\sc Randomized-Search}) to {\sc Resource-Allocation} and {\sc Coupon-Collector}, which are basic in several analyses. Thus the recurrence relations we consider indeed capture a diverse range of widely used programs such as sort, search, etc. \item {\em Generalization of recurrence relations.} Our setting of recurrence relations can also be easily generalized. For example, consider a restricted non-uniform sampling that has one uniform probability value for the first half of the array, and a different uniform value for the second half. Then we can truncate the summation (for expected-time complexity) into two halves and apply our methods. \item {\em Generalization of bounds.} In this work we focus on almost-linear bounds. Our approach could be extended to upper bounds such as $\mathcal{O}\left(n^k\right)$ or $\mathcal{O}\left(n^k\cdot \log n\right)$ ($k\ge 2$) since the definite integral over these functions can be obtained in closed form through the integration-by-parts technique, and can be approximated efficiently as in Proposition~\ref{prop:integralapproximation}. \end{compactenum} While our techniques can be generalized to other recurrence relations and bounds, in this work we focus on the recurrence relations and bounds above for two reasons: (a)~most of the classical examples do not require such generalizations; and (b)~the simpler classes allow a clear exposition of the core ideas. \section{Guess-and-Check Functions}\label{sect:mfunc} We follow the standard guess-and-check technique to solve simple recurrence relations. Below we first fix a univariate recurrence relation $G$ taking the form (\ref{eq:unirecurrel}). \smallskip \begin{definition}[Univariate Guess-and-Check Functions]\label{def:uniguess} Let $G$ be a univariate recurrence relation taking the form (\ref{eq:unirecurrel}). A function $h:\Nset\rightarrow\Rset$ is a \emph{guess-and-check} function for $G$ if there exists a natural number $N\in\Nset$ such that \begin{compactitem} \item {\em (Base Condition)} $T_G(n)\le h(n)$ for all $1\le n\le N$, and \item {\em (Inductive Argument)} $\mathsf{Subst}(\mathfrak{e},h) (n)\le h(n)$ for all $n> N$. \end{compactitem} \end{definition} By an easy induction on $n$ (starting from the $N$ specified in Definition~\ref{def:uniguess}) we obtain the following result. \medskip \begin{theorem}[Guess-and-Check, Univariate Case]\label{thm:uniguess} If a function $h:\Nset\rightarrow\Rset$ is a \emph{guess-and-check} function for a univariate recurrence relation $G$ taking the form (\ref{eq:unirecurrel}), then $T_G(n)\le h(n)$ for all $n\in\Nset$. \end{theorem} We do not explicitly present the definition for guess-and-check functions in the bivariate case, since we will present a reduction of the analysis of separable bivariate recurrence relations to that of the univariate ones (cf. Section~\ref{sect:bisynth}). \begin{comment} \begin{definition}[Bivariate Guess-and-Check Functions]\label{def:biguess} Let $G$ be a bivariate recurrence relation taking the form (\ref{eq:birecurrel}).
A function $h:\Nset\times\Nset\rightarrow\Rset$ is a \emph{guess-and-check} function for $G$ if there exists a natural number $N\in\Nset$ such that \begin{compactitem} \item {\em (Base Condition)} $T_G(n,m)\le h(n,m)$ for all $n\in\Nset$ and $1\le m\le N$, and \item {\em (Inductive Argument)} $\mathsf{Subst}(\mathfrak{e}, h)(n,m)\le h(n,m)$ for all $n\in\Nset$ and $m> N$. \end{compactitem} \end{definition} \smallskip \begin{theorem}[Guess-and-Check, Bivariate Case] If a function $h:\Nset\times\Nset\rightarrow\Rset$ is a \emph{guess-and-check} function for a bivariate recurrence relation $G$ taking the form (\ref{eq:birecurrel}), then $T_G(n,m)\le h(n,m)$ for all $n,m\in\Nset$. \end{theorem} \begin{proof} By an easy induction on $m$ (starting from the $N$ specified in Definition~\ref{def:biguess}). \end{proof} \end{comment} \section{Introduction} \vspace{-1em} \noindent{\em Static analysis for quantitative bounds.} Static analysis of programs aims to reason about programs without running them. The most basic properties for static analysis are qualitative properties, such as safety, termination, and liveness, which for every trace of a program give a Yes or No answer (such as assertion violation or not, termination or not). However, with the recent interest in the analysis of resource-constrained systems, such as embedded systems, as well as in performance analysis, quantitative performance characteristics are necessary. For example, the qualitative problem of termination asks whether a given program always terminates, whereas the quantitative problem asks to obtain precise bounds on the number of steps, and is thus a more challenging problem. Hence the problem of automatically reasoning about resource bounds (such as time complexity bounds) of programs is of both significant theoretical and practical interest. \smallskip\noindent{\em Worst-case bounds.} The worst-case analysis of programs is the fundamental problem in computer science, which is the basis of algorithms and complexity theory. However, manual proofs of worst-case analysis can be tedious and also require non-trivial mathematical ingenuity, e.g., the book {\em The Art of Computer Programming} by Knuth presents a wide range of involved techniques to derive such precise bounds~\cite{DBLP:books/aw/Knuth73a,DBLP:books/aw/Knuth73}. There has been a considerable research effort for automated analysis of such worst-case bounds for programs, see~\cite{SPEED1,SPEED2,Hoffman1,Hoffman2} for excellent expositions on the significance of deriving precise worst-case bounds and the automated methods to derive them. For the worst-case analysis there are several techniques, such as worst-case execution time analysis~\cite{DBLP:journals/tecs/WilhelmEEHTWBFHMMPPSS08}, resource analysis using abstract interpretation and type systems~\cite{SPEED2,DBLP:journals/entcs/AlbertAGGPRRZ09,DBLP:conf/popl/JostHLH10,Hoffman1,Hoffman2}, ranking functions~\cite{BG05,DBLP:conf/cav/BradleyMS05,DBLP:conf/tacas/ColonS01,DBLP:conf/vmcai/PodelskiR04,DBLP:conf/pods/SohnG91,DBLP:conf/vmcai/Cousot05,DBLP:journals/fcsc/YangZZX10,DBLP:journals/jossac/ShenWYZ13}, as well as recurrence relations~\cite{DBLP:conf/icfp/Grobauer01,DBLP:journals/entcs/AlbertAGGPRRZ09,DBLP:conf/sas/AlbertAGP08,DBLP:conf/esop/AlbertAGPZ07}.
\smallskip\noindent{\em Expected-runtime bounds.} While several works have focused on deriving worst-case bounds for programs, quite surprisingly little work has been done to derive precise bounds for expected-runtime analysis, with the exception of~\cite{DBLP:journals/tcs/FlajoletSZ91}, which focuses on randomization in combinatorial structures (such as trees). This is despite the fact that expected-runtime analysis is an equally important pillar of theoretical computer science, both in terms of theoretical and practical significance. For example, while for real-time systems with hard constraints worst-case analysis is necessary, for real-time systems with soft constraints the more relevant information is the expected-runtime analysis. Below we highlight three key significance of expected-runtime analysis. \begin{compactenum} \item {\em Simplicity and desired properties:} The first key aspect is {\em simplicity}: often much simpler algorithms (thus simple and efficient implementations) exist for expected-runtime complexity as compared to worst-case complexity. A classic example is the {\sc Selection} problem that given a set of $n$ numbers and $0\leq k \leq n$, asks to find the $k$-th largest number (eg, for median $k=n/2$). The classical linear-time algorithm for the problem (see~\cite[Chapter~9]{DBLP:books/daglib/0023376}) is quite involved, and its worst-case analysis to obtain linear time bound is rather complex. In contrast, a much simpler algorithm exists (namely, {\sc Quick-Select}) that has linear expected-runtime complexity. Moreover, randomized algorithms with expected-runtime complexity enjoy many desired properties, which deterministic algorithms do not have. A basic example is {\sc Channel-Conflict Resolution} (see Example~\ref{ex:channel}, Section~\ref{sec:motivatingbi}) where the simple randomized algorithm can be implemented in a distributed or concurrent setting, whereas deterministic algorithms are quite cumbersome. \item {\em Efficiency in practice:} Since worst-case analysis concerns with corner cases that rarely arise, many algorithms and implementations have much better expected-runtime complexity, and they perform extremely well in practice. A classic example is the {\sc Quick-Sort} algorithm, that has quadratic worst-case complexity, but almost linear expected-runtime complexity, and is one of the most efficient sorting algorithms in practice. \item {\em Worst-case analysis ineffective:} In several important cases the worst-case analysis is completely ineffective. For example, consider one of the textbook stochastic process, namely the {\sc Coupon-Collector} problem, where there are $n$ types of coupons to be collected, and in each round, a coupon type among the $n$ types is obtained uniformly at random. The process stops when all types are collected. The {\sc Coupon-Collector} process is one of the basic and classical stochastic processes, with numerous applications in network routing, load balancing, etc (see~\cite[Chapter 3]{DBLP:books/cu/MotwaniR95} for applications of {\sc Coupon-Collector} problems). For the worst-case analysis, the process might not terminate (worst-case bound infinite), but the expected-runtime analysis shows that the expected termination time is $\mathcal{O}(n \cdot \log n)$. \end{compactenum} \smallskip\noindent{\em Challenges.} The expected-runtime analysis brings several new challenges as compared to the worst-case analysis. 
First, for the worst-case complexity bounds, the most classical characterization for analysis of recurrences is the {\em Master Theorem} (cf.~\cite[Chapter~1]{DBLP:books/daglib/0023376}) and Akra-Bazzi's Theorem~\cite{DBLP:journals/coap/AkraB98}. However, the expected-runtime analysis problems give rise to recurrences that are not characterized by these theorems since our recurrences normally involve an unbounded summation resulting from a randomized selection of integers from $1$ to $n$ where $n$ is unbounded. Second, techniques like ranking functions (linear or polynomial ranking functions) cannot derive efficient bounds such as $\mathcal{O}(\log n)$ or $\mathcal{O}(n \cdot \log n)$. While expected-runtime analysis has been considered for combinatorial structures using generating function~\cite{DBLP:journals/tcs/FlajoletSZ91}, we are not aware of any automated technique to handle recurrences arising from randomized algorithms. \smallskip\noindent{\em Analysis problem.} We consider the algorithmic analysis problem of recurrences arising naturally for randomized recursive programs. Specifically we consider the following: \begin{compactitem} \item We consider two classes of recurrences: (a)~{\em univariate} class with one variable (which represents the array length, or the number of input elements, as required in problems such as {\sc Quick-Select, Quick-Sort} etc); and (b)~{\em separable bivariate} class with two variables (where the two independent variables represent the total number of elements and total number of successful cases, respectively, as required in problems such as {\sc Coupon-Collector, Channel-Conflict Resolution}). The above two classes capture a large class of expected-runtime analysis problems, including all the classical ones mentioned above. Moreover, the main purpose of expected-runtime analysis is to obtain efficient bounds. Hence we focus on the case of logarithmic, linear, and almost-linear bounds (i.e., bounds of form $\mathcal{O}(\log n)$, $\mathcal{O}(n)$ and $\mathcal{O}(n \cdot \log n)$, respectively, where $n$ is the size of the input). Moreover, for randomized algorithms, quadratic bounds or higher are rare. \end{compactitem} Thus the main problem we consider is to automatically derive such efficient bounds for randomized univariate and separable bivariate recurrence relations. \smallskip\noindent{\em Our contributions.} Our main contribution is a sound approach for analysis of recurrences for expected-runtime analysis. The input to our problem is a recurrence relation and the output is either logarithmic, linear, or almost-linear as the asymptotic bound, or fail. The details of our contributions are as follows: \begin{compactenum} \item {\em Efficient algorithm.} We first present a linear-time algorithm for the univariate case, which is based on simple comparison of leading terms of pseudo-polynomials. Second, we present a simple reduction for separable bivariate recurrence analysis to the univariate case. Our efficient (linear-time) algorithm can soundly infer logarithmic, linear, and almost-linear bounds for recurrences of one or two variables. \item {\em Analysis of classical algorithms.} We show that for several classical algorithms, such as {\sc Randomized-Search, Quick-Select, Quick-Sort, Coupon-Collector, Channel-Conflict Resolution} (see Section~\ref{sec:motivatinguni} and Section~\ref{sec:motivatingbi} for examples), our sound approach can obtain the asymptotically optimal expected-runtime bounds for the recurrences. 
In all the cases above, either the worst-case bounds (i)~do not exist (e.g., {\sc Coupon-Collector}), or (ii)~are quadratic when the expected-runtime bounds are linear or almost-linear (e.g., {\sc Quick-Select, Quick-Sort}); or (iii)~are linear when the expected-runtime bounds are logarithmic (e.g., {\sc Randomized-Search}). Thus in cases where the worst-case bounds are either not applicable, or grossly overestimate the expected-runtime bounds, our technique is both efficient (linear-time) and can infer the optimal bounds. \item {\em Implementation.} Finally, we have implemented our approach, and we present experimental results on the classical examples to show that we can efficiently achieve the automated expected-runtime analysis of randomized recurrence relations. \end{compactenum} \noindent{\em Novelty and technical contribution.} The key novelty of our approach is an automated method to analyze recurrences arising from randomized recursive programs, which are not covered by Master theorem. Our approach is based on a guess-and-check technique. We show that by over-approximating terms in a recurrence relation through integral and Taylor's expansion, we can soundly infer logarithmic, linear and almost-linear bounds using simple comparison between leading terms of pseudo-polynomials. \section{Recurrence Relations}\label{sec:recurrel} \vspace{-1em} We present our mini specification language for recurrence relations for expected-runtime analysis. The language is designed to capture running time of recursive randomized algorithms which involve (i)~only one function call whose expected-runtime complexity is to be determined, (ii)~at most two integer parameters, and (iii)~involve randomized-selection or divide-and-conquer techniques. We present our language separately for the univariate and bivariate cases. In the sequel, we denote by $\Nset$, $\Nset_0$, $\Zset$, and $\Rset$ the sets of all positive integers, non-negative integers, integers, and real numbers, respectively. \vspace{-1.5em} \subsection{Univariate Randomized Recurrences}\label{sect:univariate} \vspace{-1em} Below we define the notion of univariate randomized recurrence relations. First, we introduce the notion of univariate recurrence expressions. Since we only consider single recursive function call, we use `$\mathrm{T}$' to represent the (only) function call. We also use `$\mathfrak{n}$' to represent the only parameter in the function declaration. \smallskip\noindent{\bf Univariate recurrence expressions.} The syntax of \emph{univariate recurrence expressions} $\mathfrak{e}$ is generated by the following grammar: \begin{align*} \mathfrak{e} & ::= c\mid \mathfrak{n}\mid \ln{\mathfrak{n}} \mid \mathfrak{n}\cdot \ln{\mathfrak{n}}\mid \frac{1}{\mathfrak{n}}\mid \mathrm{T}\left(\mathfrak{n}-1\right) \mid \mathrm{T}\left(\left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor\right) \mid \mathrm{T}\left(\left\lceil\frac{\mathfrak{n}}{2}\right\rceil\right)\\ &\mid \frac{\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})}{\mathfrak{n}}\mid \frac{1}{\mathfrak{n}}\cdot\left( \textstyle\sum_{\mathfrak{j}=\left\lceil\mathfrak{n}/2\right\rceil}^{\mathfrak{n}-1}\mathrm{T}(\mathfrak{j})+ \textstyle\sum_{\mathfrak{j}=\left\lfloor\mathfrak{n}/{2}\right\rfloor}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})\right)\mid c\cdot \mathfrak{e}\mid \mathfrak{e}+\mathfrak{e} \end{align*} where $c\in [1,\infty)$ and $\ln(\centerdot)$ represents the natural logarithm function with base $e$. 
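\smallskip\noindent{\em Illustration (encoding the grammar).} For concreteness, the univariate recurrence expressions generated by the grammar above can be encoded as small abstract syntax trees, as in the following Python sketch. The sketch is purely illustrative and not part of the formal development: the class names, the string tags chosen for the $\mathrm{T}(\centerdot)$-terms, and the example expression at the end are our own assumptions.
\begin{verbatim}
# Illustrative sketch (our own encoding, not part of the formal development).
# One constructor per production of the univariate grammar; Scale and Add
# close the expressions under c*e and e+e.
from dataclasses import dataclass
from typing import Union

@dataclass
class Const:            # the constant c (with c >= 1)
    value: float

@dataclass
class NonRec:           # a non-T term: "n", "log" (ln n), "nlog" (n*ln n), "inv" (1/n)
    kind: str

@dataclass
class RecTerm:          # a T(.)-term, tagged by its shape:
    kind: str           # "dec"        ~ T(n-1)
                        # "floor_half" ~ T(floor(n/2)), "ceil_half" ~ T(ceil(n/2))
                        # "avg_all"    ~ (sum_{j=1}^{n-1} T(j)) / n
                        # "avg_halves" ~ (sum_{j=ceil(n/2)}^{n-1} T(j)
                        #                 + sum_{j=floor(n/2)}^{n-1} T(j)) / n

@dataclass
class Scale:            # c * e
    factor: float
    expr: "Expr"

@dataclass
class Add:              # e + e
    left: "Expr"
    right: "Expr"

Expr = Union[Const, NonRec, RecTerm, Scale, Add]

# Example: the recurrence expression 2*n + 2*(sum_{j=1}^{n-1} T(j))/n.
example_expr: Expr = Add(Scale(2.0, NonRec("n")),
                         Scale(2.0, RecTerm("avg_all")))
\end{verbatim}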
Informally, $\mathrm{T}(\mathfrak{n})$ is the (expected) running time of a recursive randomized program which involves only one recursive routine indicated by $\mathrm{T}$ and only one parameter indicated by $\mathfrak{n}$. Then each $\mathrm{T}(\centerdot)$-term in the grammar has a direct algorithmic meaning: \begin{compactitem} \item $\mathrm{T}\left(\mathfrak{n}-1\right)$ may mean a recursion to a sub-array with length decremented by one; \item $\mathrm{T}\left(\left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor\right)$ and $\mathrm{T}\left(\left\lceil\frac{\mathfrak{n}}{2}\right\rceil\right)$ may mean a recursion related to a divide-and-conquer technique; \item finally, $\frac{\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})}{\mathfrak{n}}\mbox{ and }\frac{1}{\mathfrak{n}}\cdot\left( \sum_{\mathfrak{j}=\left\lceil\frac{n}{2}\right\rceil}^{\mathfrak{n}-1}\mathrm{T}(\mathfrak{j})+ \sum_{\mathfrak{j}=\left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})\right)$ may mean a recursion related to a randomized selection of an array index. \end{compactitem} \smallskip\noindent{\em Substitution.} Consider a function $h:\Nset\rightarrow\Rset$ and univariate recurrence expression ${\mathfrak{e}}$. The {\em substitution function}, denoted by $\mathsf{Subst}({\mathfrak{e}},h)$, is the function from $\Nset$ into $\Rset$ such that the value for $n$ is obtained by evaluation through substituting $h$ for $\mathrm{T}$ and $n$ for $\mathfrak{n}$ in ${\mathfrak{e}}$, respectively. Moreover, if $\mathfrak{e}$ does not involve the appearance of `$\mathrm{T}$', then we use the abbreviation $\mathsf{Subst}({\mathfrak{e}})$ i.e., omit $h$. For example, (i)~if ${\mathfrak{e}}= \mathfrak{n} + \mathrm{T}(\mathfrak{n}-1)$, and $h: n \mapsto n\cdot \log n$, then $\mathsf{Subst}({\mathfrak{e}},h)$ is the function $n \mapsto n+ (n-1)\cdot \log (n-1)$, and (ii)~if ${\mathfrak{e}}= 2\cdot \mathfrak{n}$, then $\mathsf{Subst}({\mathfrak{e}})$ is $n \mapsto 2n$. \smallskip\noindent{\bf Univariate recurrence relation.} A {\em univariate recurrence relation} $G=(\mathsf{eq}_1,\mathsf{eq}_2)$ is a pair of equalities as follows: \begin{equation}\label{eq:unirecurrel} \mathsf{eq}_1: \ \mathrm{T}(\mathfrak{n})=\mathfrak{e}; \qquad \qquad \mathsf{eq}_2: \ \mathrm{T}(1)=c \end{equation} where $c\in (0,\infty)$ and $\mathfrak{e}$ is a univariate recurrence expression. For a univariate recurrence relation $G$ the {\em evaluation sequence} $\mathsf{Eval}(G)$ is as follows: $\mathsf{Eval}(G)(1)=c$, and for $n \geq 2$, given $\mathsf{Eval}(G)(i)$ for $1\leq i < n$, for the value $\mathsf{Eval}(G)(n)$ we evaluate the expression $\mathsf{Subst}(\mathfrak{e},\mathsf{Eval}(G))$, since in $\mathfrak{e}$ the parameter $\mathfrak{n}$ always decreases and is thus well-defined. \smallskip\noindent{\em Finite vs infinite solution.} Note that the above description gives a computational procedure to compute $\mathsf{Eval}(G)$ for any finite $n$, in linear time in $n$ through dynamic programming. The interesting question is to algorithmically analyze the infinite behavior. A function $T_G:\Nset\rightarrow\Rset$ is called a solution to $G$ if $T_G(n)=\mathsf{Eval}(G)(n)$ for all $n \geq 1$. The function $T_G$ is unique and explicitly defined as follows: (1)~\emph{Base Step.} $T_G(1):=c$; and (2)~\emph{Recursive Step.} $T_G(n):=\mathsf{Subst}(\mathfrak{e},T_G)(n)$ for all $n\ge 2$. The interesting algorithmic question is to reason about the asymptotic infinite behaviour of $T_G$. 
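\smallskip\noindent{\em Illustration (evaluating $\mathsf{Eval}(G)$).} The dynamic-programming evaluation mentioned above is straightforward to implement. The following Python sketch is only illustrative: it assumes the concrete recurrence $\mathrm{T}(\mathfrak{n})=2\cdot\mathfrak{n}+2\cdot(\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1}\mathrm{T}(\mathfrak{j}))/\mathfrak{n}$ with $\mathrm{T}(1)=1$ (the {\sc Quick-Sort} recurrence~(\ref{eq:relquicksort}) below), and the function name is our own.
\begin{verbatim}
import math

# Illustrative sketch: bottom-up evaluation of Eval(G)(1..n_max) for
#   T(n) = 2*n + (2/n) * sum_{j=1}^{n-1} T(j),   T(1) = 1.
# A running prefix sum keeps the overall cost linear in n_max.
def eval_recurrence(n_max: int) -> list:
    T = [0.0] * (n_max + 1)          # index 0 is unused
    T[1] = 1.0                       # base step
    prefix = T[1]                    # maintains sum_{j=1}^{k-1} T[j]
    for k in range(2, n_max + 1):
        T[k] = 2.0 * k + 2.0 * prefix / k   # recursive step
        prefix += T[k]
    return T

if __name__ == "__main__":
    T = eval_recurrence(100)
    # Empirical constant for the n*ln(n) bound over 2 <= n <= 100, i.e.,
    # the quantity d_100 reported in the experimental results.
    d_100 = max((T[n] - 1.0) / (n * math.log(n)) for n in range(2, 101))
    print(T[100], d_100)
\end{verbatim}
Such a finite evaluation only yields empirical lower bounds on the constants of asymptotic bounds (e.g., the quantity $d_{100}$ from the experimental results); the algorithms developed in the sequel instead certify a bound for all $n$.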
\vspace{-1.5em} \subsection{Motivating Classical Examples}\label{sec:motivatinguni} \vspace{-1em} In this section we present several classical examples of randomized programs whose recurrence relations belong to the class of univariate recurrence relations described in Section~\ref{sect:univariate}. We provide the pseudocode and the derivation of the recurrence relations for the examples of this section in Appendix~\ref{sect:recurreldetails}. Moreover, in all cases the base step is $\mathrm{T}(1)=1$, hence we discuss only the recursive case. \begin{example}[{\sc Randomized-Search}]\label{ex:randsearch} Consider Sherwood's {\sc Randomized-Search} algorithm (cf.~\cite[Chapter~9]{McConnellbook}). The algorithm checks whether an integer value $d$ is present within the index range $[i,j]$ ($0\le i\le j$) in an integer array $ar$ which is sorted in increasing order and is without duplicate entries. The algorithm outputs either the index for $d$ in $ar$ or $-1$, meaning that $d$ is not present in the index range $[i,j]$ of $ar$. The recurrence relation for this example is as follows: \begin{equation}\label{eq:relrandsearch} \textstyle\mathrm{T}(\mathfrak{n})=6+\frac{1}{\mathfrak{n}}\cdot\big( \sum_{\mathfrak{j}=\left\lceil\mathfrak{n}/{2}\right\rceil}^{\mathfrak{n}-1}\mathrm{T}(\mathfrak{j})+ \sum_{\mathfrak{j}=\left\lfloor\mathfrak{n}/{2}\right\rfloor}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})\big) \end{equation} We note that the worst-case complexity for this algorithm is $\Theta(n)$.\qed \end{example} \begin{example}[{\sc Quick-Sort}]\label{ex:quicksort} Consider the {\sc Quick-Sort} algorithm~\cite[Chapter~7]{DBLP:books/daglib/0023376}. The recurrence relation for this example is: \begin{equation}\label{eq:relquicksort} \textstyle\mathrm{T}(\mathfrak{n})=2\cdot\mathfrak{n}+ 2\cdot (\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j}))/{\mathfrak{n}} \end{equation} where $\mathrm{T}(\mathfrak{n})$ represents the maximal expected execution time, $\mathfrak{n}$ is the array length, and the execution time of {\em pivoting} is represented by $2\cdot \mathfrak{n}$. We note that the worst-case complexity for this algorithm is $\Theta(n^2)$.\qed \end{example} \begin{example}[{\sc Quick-Select}]\label{ex:quickselect} Consider the {\sc Quick-Select} algorithm (cf.~\cite[Chapter~9]{DBLP:books/daglib/0023376}). The recurrence relation for this example is \begin{equation}\label{eq:relquickselect} \textstyle\mathrm{T}(\mathfrak{n})\!=\!4+2\cdot\mathfrak{n}+ \frac{1}{\mathfrak{n}}\cdot \left(\sum_{\mathfrak{j}=\left\lfloor \mathfrak{n}/2\right\rfloor}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})+ \sum_{\mathfrak{j}=\left\lceil \mathfrak{n}/2\right\rceil}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})\right) \end{equation} We note that the worst-case complexity for this algorithm is $\Theta(n^2)$.\qed \end{example} \begin{example}[{\sc Diameter-Computation}]\label{ex:diameter} Consider the {\sc Diameter-Computation} algorithm (cf.~\cite[Chapter 9]{DBLP:books/cu/MotwaniR95}) to compute the diameter of an input finite set $S$ of three-dimensional points. Depending on the Euclidean or the $L_1$ metric we obtain two different recurrence relations.
For the Euclidean metric we have the following relation: \begin{equation}\label{eq:reldiametera} \textstyle\mathrm{T}(\mathfrak{n})=2+\mathfrak{n}+ 2\cdot \mathfrak{n}\cdot\ln{\mathfrak{n}} + (\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j}))/{\mathfrak{n}} ; \end{equation} and for the $L_1$ metric we have the following relation: \begin{equation}\label{eq:reldiameterb} \textstyle\mathrm{T}(\mathfrak{n})=2+\mathfrak{n}+ 2\cdot \mathfrak{n} + (\sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j}))/{\mathfrak{n}} \end{equation} We note that the worst-case complexity for this algorithm is as follows: for the Euclidean metric it is $\Theta(n^2 \cdot \log n)$ and for the $L_1$ metric it is $\Theta(n^2)$.\qed \end{example} \begin{example}[Sorting with {\sc Quick-Select}]\label{ex:sortselect} Consider a sorting algorithm which selects the median through the {\sc Quick-Select} algorithm. The recurrence relation is directly obtained as follows: \begin{equation}\label{eq:relsortselect} \textstyle\mathrm{T}(\mathfrak{n})=4+ T^*(\mathfrak{n})+\mathrm{T}\left(\lfloor{\mathfrak{n}}/{2}\rfloor\right)+\mathrm{T}\left(\lceil{\mathfrak{n}}/{2}\rceil\right) \end{equation} where $T^*(\centerdot)$ is an upper bound on the expected running time of {\sc Quick-Select} (cf. Example~\ref{ex:quickselect}). We note that the worst-case complexity for this algorithm is $\Theta(n^2)$.\qed \end{example} \vspace{-1.5em} \subsection{Separable Bivariate Randomized Recurrences}\label{sec:bivariate} \vspace{-1em} We consider a generalization of the univariate recurrence relations to a class of bivariate recurrence relations called \emph{separable bivariate recurrence relations}. Similar to the univariate situation, we use `$\mathrm{T}$' to represent the (only) function call and `$\mathfrak{n}$', `$\mathfrak{m}$' to represent the two integer parameters, respectively. \smallskip\noindent{\bf Separable Bivariate Recurrence Expressions.} The syntax of \emph{separable bivariate recurrence expressions} is illustrated by $\mathfrak{e},\mathfrak{h}$ and $\mathfrak{b}$ as follows: \begin{align*} \mathfrak{e} & ::= \mathrm{T}\left(\mathfrak{n}, \mathfrak{m}-1\right) \mid \mathrm{T}\left(\mathfrak{n},\left\lfloor{\mathfrak{m}}/{2}\right\rfloor\right) \mid \mathrm{T}\left(\mathfrak{n},\left\lceil{\mathfrak{m}}/{2}\right\rceil\right)\\ & \mid \frac{\sum_{\mathfrak{j}=1}^{\mathfrak{m}-1} \mathrm{T}(\mathfrak{n},\mathfrak{j})}{\mathfrak{m}} \mid \frac{1}{\mathfrak{m}}\cdot\left( \textstyle\sum_{\mathfrak{j}=\left\lceil {\mathfrak{m}}/{2}\right\rceil}^{\mathfrak{m}-1}\mathrm{T}(\mathfrak{n},\mathfrak{j})+ \textstyle\sum_{\mathfrak{j}=\left\lfloor {\mathfrak{m}}/{2}\right\rfloor}^{\mathfrak{m}-1} \mathrm{T}(\mathfrak{n},\mathfrak{j})\right)\mid c\cdot \mathfrak{e}\mid \mathfrak{e}+\mathfrak{e} \\ \mathfrak{h} & ::= c\mid \ln{\mathfrak{n}}\mid \mathfrak{n}\mid \mathfrak{n}\cdot\ln{\mathfrak{n}}\mid c\cdot\mathfrak{h}\mid \mathfrak{h}+\mathfrak{h}\quad\mathfrak{b} ::= c\mid \frac{1}{\mathfrak{m}} \mid \ln{\mathfrak{m}}\mid \mathfrak{m}\mid \mathfrak{m}\cdot\ln{\mathfrak{m}}\mid c\cdot \mathfrak{b}\mid \mathfrak{b}+\mathfrak{b} \end{align*} The differences from the univariate case are that (i) we have two independent parameters $\mathfrak{n},\mathfrak{m}$, (ii) $\mathfrak{e}$ now represents an expression composed of only $\mathrm{T}$-terms, and (iii) $\mathfrak{h}$ (resp. $\mathfrak{b}$) represents arithmetic expressions for $\mathfrak{n}$ (resp. for $\mathfrak{m}$).
This class of separable bivariate recurrence expressions (often, for brevity, bivariate recurrence expressions) assigns a dominant role to $\mathfrak{m}$ and a minor role to $\mathfrak{n}$, and is intended to model randomized algorithms where some parameter (to be represented by $\mathfrak{n}$) does not change value. \smallskip\noindent{\em Substitution.} The notion of substitution is similar to the univariate case. Consider a function $h:\Nset\times\Nset\rightarrow\Rset$, and a bivariate recurrence expression ${\mathfrak{e}}$. The {\em substitution function}, denoted by $\mathsf{Subst}({\mathfrak{e}},h)$, is the function from $\Nset\times\Nset$ into $\Rset$ such that $\mathsf{Subst}({\mathfrak{e}},h)(n,m)$ is the real number evaluated through substituting $h,n,m$ for $\mathrm{T},\mathfrak{n},\mathfrak{m}$, respectively. The substitution for $\mathfrak{h},\mathfrak{b}$ is defined in a similar way, with the difference that they both induce a univariate function. \smallskip\noindent{\bf Bivariate recurrence relations.} We consider {\em bivariate recurrence relations} $G=(\mathsf{eq}_1,\mathsf{eq}_2)$ consisting of two equalities of the following form: \begin{equation}\label{eq:birecurrel} \mathsf{eq}_1: \ \mathrm{T}(\mathfrak{n},\mathfrak{m})=\mathfrak{e}+\mathfrak{h}\cdot\mathfrak{b}; \quad \qquad \mathsf{eq}_2: \ \mathrm{T}(\mathfrak{n},1)=\mathfrak{h}\cdot c \end{equation} where $c\in(0,\infty)$ and $\mathfrak{e},\mathfrak{h},\mathfrak{b}$ are from the grammar above. \smallskip\noindent{\em Solution to bivariate recurrence relations.} The evaluation of a bivariate recurrence relation is similar to the univariate case. The unique solution $T_G:\Nset\times\Nset\rightarrow\Rset$ to a recurrence relation $G$ taking the form (\ref{eq:birecurrel}) is a function defined recursively as follows: (1)~\emph{Base Step.} $T_G(n,1):=\mathsf{Subst}({\mathfrak{h}})(n)\cdot c$ for all $n\in\Nset$; and (2)~\emph{Recursive Step.} $T_G(n,m):=\mathsf{Subst}({\mathfrak{e}},T_G)(n,m)+\mathsf{Subst}(\mathfrak{h})(n)\cdot\mathsf{Subst}(\mathfrak{b})(m)$ for all $n\in\Nset$ and $m\ge 2$. Again the interesting algorithmic question is to reason about the infinite behaviour of $T_G$. \vspace{-1.5em} \subsection{Motivating Classical Examples}\label{sec:motivatingbi} \vspace{-0.5em} In this section we present two classical examples of randomized algorithms where the randomized recurrence relations are bivariate. We put the detailed illustration for these two examples in Appendix~\ref{app:motivatingbi}. \begin{example}[{\sc Coupon-Collector}]\label{ex:coupon} Consider the {\sc Coupon-Collector} problem~\cite[Chapter~3]{DBLP:books/cu/MotwaniR95} with $n$ different types of coupons ($n\in\Nset$). The randomized process proceeds in rounds: at each round, a coupon is collected uniformly at random from the coupon types; the rounds continue until all the $n$ types of coupons are collected. We model the rounds as a recurrence relation with two variables $\mathfrak{n},\mathfrak{m}$, where $\mathfrak{n}$ represents the total number of coupon types and $\mathfrak{m}$ represents the remaining number of uncollected coupon types. The recurrence relation is as follows: \begin{equation}\label{eq:relcoupon} \mathrm{T}(\mathfrak{n},1)=\mathfrak{n}\cdot 1; \qquad \mathrm{T}(\mathfrak{n},\mathfrak{m})=\mathfrak{n}/{\mathfrak{m}}+ \mathrm{T}(\mathfrak{n},\mathfrak{m}-1) \end{equation} where $\mathrm{T}(\mathfrak{n},\mathfrak{m})$ is the expected number of rounds.
We note that the worst-case complexity for this process is $\infty$.\qed \end{example} \begin{example}[{\sc Channel-Conflict Resolution}]\label{ex:channel} We consider two network scenarios in which $n$ clients are trying to get access to a network channel. This problem is also called the {\sc Resource-Contention Resolution}~\cite[Chapter~13]{Kleinbergbook}. In this problem, if more than one client tries to access the channel, then no client can access it, and if exactly one client requests access to the channel, then the request is granted. In the distributed setting, the clients do not share any information. In this scenario, in each round, every client requests an access to the channel with probability $\frac{1}{n}$. Then for this scenario, we obtain an over-approximating recurrence relation \begin{equation}\label{eq:relresourcea} \mathrm{T}(\mathfrak{n},1)=\mathfrak{n}\cdot 1; \qquad \mathrm{T}(\mathfrak{n},\mathfrak{m})=(\mathfrak{n}\cdot{e})/{\mathfrak{m}}+ \mathrm{T}(\mathfrak{n},\mathfrak{m}-1) \end{equation} for the expected rounds until which every client gets at least one access to the channel. In the concurrent setting, the clients share one variable, which is the number of clients which has not yet been granted access. Also in this scenario, once a client gets an access the client does not request for access again. For this scenario, we obtain an over-approximating recurrence relation \begin{equation}\label{eq:relresourceb} \mathrm{T}(\mathfrak{n},1)=1\cdot 1; \qquad \mathrm{T}(\mathfrak{n},\mathfrak{m})=1\cdot e+ \mathrm{T}(\mathfrak{n},\mathfrak{m}-1) \end{equation} We also note that the worst-case complexity for both the scenarios is $\infty$.\qed \end{example} \section{The Synthesis Algorithm}\label{sect:synalg} \vspace{-1em} In this section, we present our algorithms to synthesize asymptotic bounds for randomized recurrence relations. \smallskip\noindent{\em Main ideas.} The main idea is as follows. Consider as input a recurrence relation taking the form (\ref{eq:unirecurrel}) and an univariate recurrence expression $\mathfrak{f}\in\{\ln{\mathfrak{n}}, \mathfrak{n},\mathfrak{n}\cdot\ln{\mathfrak{n}}\}$ which specifies the desired asymptotic bound. We first define the standard notion of a guess-and-check function which provides a sound approach for asymptotic bound. Based on the guess-and-check function, our algorithm executes the following steps for the univariate case. \begin{compactenum} \item First, the algorithm sets up a scalar variable $d$ and then constructs the template $h$ to be $n\mapsto d\cdot \mathsf{Subst}(\mathfrak{f})(n)+c$ for a univariate guess-and-check function. \item Second, the algorithm computes an over-approximation $\mathsf{OvAp}(\mathfrak{e}, h)$ of $\mathsf{Subst}(\mathfrak{e}, h)$ such that the over-approximation $\mathsf{OvAp}(\mathfrak{e}, h)$ will involve terms from $\mathfrak{n}^k,\ln^\ell{\mathfrak{n}}$ (for $k,\ell\in\Nset_0$) only. Note that $k,\ell$ may be greater than $1$, so the above expressions are not necessarily linear (they can be quadratic or cubic for example). \item Finally, the algorithm synthesizes a value for $d$ such that $\mathsf{OvAp}(\mathfrak{e},h)(n)\le h(n)$ for all $n\ge 2$ through truncation of $[2,\infty)\cap\Nset$ into a finite range and a limit behaviour analysis (towards $\infty$). \end{compactenum} Our algorithm for bivariate cases is a reduction to the univariate case. \smallskip\noindent{\bf Guess-and-Check functions.} We follow the standard guess-and-check technique to solve simple recurrence relations. 
Below we first fix a univariate recurrence relation $G$ taking the form (\ref{eq:unirecurrel}). By an easy induction on $n$ (starting from the $N$ specified in Definition~\ref{def:uniguess}) we obtain Theorem~\ref{thm:uniguess}. \begin{definition}[Univariate Guess-and-Check Functions]\label{def:uniguess} Let $G$ be a univariate recurrence relation taking the form (\ref{eq:unirecurrel}). A function $h:\Nset\rightarrow\Rset$ is a \emph{guess-and-check} function for $G$ if there exists a natural number $N\in\Nset$ such that: (1) {\em (Base Condition)} $T_G(n)\le h(n)$ for all $1\le n\le N$, and (2) {\em (Inductive Argument)} $\mathsf{Subst}(\mathfrak{e},h) (n)\le h(n)$ for all $n> N$. \end{definition} \begin{theorem}[Guess-and-Check, Univariate Case]\label{thm:uniguess} If a function $h:\Nset\rightarrow\Rset$ is a \emph{guess-and-check} function for a univariate recurrence relation $G$ taking the form (\ref{eq:unirecurrel}), then $T_G(n)\le h(n)$ for all $n\in\Nset$. \end{theorem} We do not explicitly present the definition for guess-and-check functions in the bivariate case, since we will present a reduction of the analysis of separable bivariate recurrence relations to that of the univariate ones (cf. Section~\ref{sect:bisynth}). \smallskip\noindent{\bf Overapproximations for Recurrence Expressions.} We now develop tight overapproximations for logarithmic terms. In principle, we use Taylor's Theorem to approximate logarithmic terms such as $\ln{(n-1)},\ln{\lfloor\frac{n}{2}\rfloor}$, and integrals to approximate summations of logarithmic terms. All the results below are technical and depend on basic calculus (the detailed proofs are in Appendix~\ref{app:overapprox}). \begin{proposition}\label{prop:lnflooroverapprox} For all natural numbers $n\ge 2$: \[ (1)\ \ln{n}-\ln{2}-\frac{1}{n-1}\le \ln{\left\lfloor \frac{n}{2}\right\rfloor}\le \ln{n}-\ln{2}; (2)\ \ln{n}-\ln{2}\le \ln{\left\lceil \frac{n}{2}\right\rceil}\le \ln{n}-\ln{2}+\frac{1}{n}~~. \] \end{proposition} \begin{proposition}\label{prop:nminusoneoverapprox} For all natural numbers $n\ge 2$: $\ln{n}-\frac{1}{n-1}\le\ln{(n-1)}\le \ln{n}-\frac{1}{n}$~~. \end{proposition} \begin{proposition}\label{prop:integralapproximation} For all natural numbers $n\geq 2$: \begin{compactitem} \item $\int_1^n \frac{1}{x}\,\mathrm{d}x-\sum_{j=1}^{n-1} \frac{1}{j}\in \left[-0.7552,-\frac{1}{6}\right]$; \item $\int_1^n \ln{x}\,\mathrm{d}x-\left(\sum_{j=1}^{n-1} \ln{j}\right) - \frac{1}{2}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x\in \left[-\frac{1}{12}, 0.2701\right]$; \item $\int_1^n x\cdot \ln{x}\,\mathrm{d}x-\left(\sum_{j=1}^{n-1} j\cdot\ln{j}\right)-\frac{1}{2}\cdot\int_1^n \ln{x}\,\mathrm{d}x+\frac{1}{12}\cdot \int_1^n \frac{1}{x}\,\mathrm{d}x-\frac{n-1}{2}\in \left[-\frac{19}{72},0.1575\right]$. \end{compactitem} \end{proposition} Note that Proposition~\ref{prop:integralapproximation} is non-trivial since it approximates summations of reciprocal and logarithmic terms up to a constant deviation. For example, one may approximate $\sum_{j=1}^{n-1} \ln{j}$ directly by $\int_1^n \ln{x}\,\mathrm{d}x$, but this direct approximation deviates by up to a logarithmic term, in contrast to the constant deviation in Proposition~\ref{prop:integralapproximation}. From Proposition~\ref{prop:integralapproximation}, we establish a tight approximation for summations of logarithmic or reciprocal terms. \begin{example}\label{ex:overapprox} Consider the summation $\sum_{j=\left\lceil\frac{n}{2}\right\rceil}^{n-1}\ln{j}+ \sum_{j=\left\lfloor\frac{n}{2}\right\rfloor}^{n-1} \ln{j}\quad (n\ge 4)$.
By Proposition~\ref{prop:integralapproximation}, we can over-approximate it as \[ 2\cdot\left(\Gamma_{\ln{\mathfrak{n}}}\left(n\right)+\frac{1}{12}\right) -\left(\Gamma_{\ln{\mathfrak{n}}}\left(\left\lceil\frac{n}{2}\right\rceil\right)+\Gamma_{\ln{\mathfrak{n}}}\left(\left\lfloor\frac{n}{2}\right\rfloor\right)-0.5402\right) \] where $\Gamma_{\ln{\mathfrak{n}}}(n) := \int_1^n\ln{x}\,\mathrm{d}x-\frac{1}{2}\cdot\int_1^n\frac{1}{x}\,\mathrm{d}x = n\cdot\ln{n}-n-\frac{\ln{n}}{2}+1$. By using Proposition~\ref{prop:lnflooroverapprox}, the above expression is roughly $n\cdot\ln{n}-(1-\ln{2})\cdot n+\frac{1}{2}\cdot\ln{n}+0.6672+\frac{1}{2\cdot n}$ (for details see Appendix~\ref{app:overapprox}).\qed \end{example} \begin{remark}\label{rmk:exthigerdegree} Although we do approximation for terms related to only almost-linear bounds, Proposition~\ref{prop:integralapproximation} can be extended to logarithmic bounds with higher degree (e.g., $n^3\ln n$) since integration of such bounds can be obtained in closed forms.\qed \end{remark} \vspace{-1.5em} \subsection{Algorithm for Univariate Recurrence Relations}\label{sect:unisynth} \vspace{-1em} We present our algorithm to synthesize a guess-and-check function in form~(\ref{eq:uniguess}) for univariate recurrence relations. We present our algorithm in two steps. First, we present the decision version, and then we present the quantitative version that synthesizes the associated constant. The two key aspects are over-approximation and use of pseudo-polynomials, and we start with over-approximation. We relegate some technical details to Appendix~\ref{app:unisynth}. \begin{definition}[Overapproximation]\label{def:unioverapprox} Let $\mathfrak{f}\in\{\ln{\mathfrak{n}},\mathfrak{n},\mathfrak{n}\cdot\ln{\mathfrak{n}}\}$. Consider a univariate recurrence expression $\mathfrak{g}$, constants $d$ and $c$, and the function $h= d \cdot \mathsf{Subst}(\mathfrak{f}) + c$. We define the {\em over-approximation function}, denoted $\mathsf{OvAp}(\mathfrak{g},h)$, recursively as follows. \begin{itemize} \item {\em Base Step A.} If $\mathfrak{g}$ is one of the following: $c', \mathfrak{n}, \ln{\mathfrak{n}}, \mathfrak{n}\cdot \ln{\mathfrak{n}},\frac{1}{\mathfrak{n}}$, then $\mathsf{OvAp}(\mathfrak{g},h):=\mathsf{Subst}({\mathfrak{g}})$. \item {\em Base Step B.} If $\mathfrak{g}$ is a single term which involves $\mathrm{T}$, then we define $\mathsf{OvAp}(\mathfrak{g},h)$ from over-approximations Proposition~\ref{prop:lnflooroverapprox}--~\ref{prop:integralapproximation}. In details, $\mathsf{OvAp}(\mathfrak{g},h)$ is obtained from $\mathsf{Subst}(\mathfrak{g},h)$ by first over-approximating any summation through Proposition~\ref{prop:integralapproximation} (i.e., through those $\Gamma_{(\centerdot)}$ functions defined below Proposition~\ref{prop:integralapproximation}), then over-approximating any $\ln{(\mathfrak{n}-1)}, \left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor, \left\lceil \frac{\mathfrak{n}}{2}\right\rceil, \ln{\left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor}, \ln{\left\lceil \frac{\mathfrak{n}}{2}\right\rceil}$ by Proposition~\ref{prop:lnflooroverapprox} and Proposition~\ref{prop:nminusoneoverapprox}. The details of the important over-approximations are illustrated explicitly in Table~\ref{tbl:unioverapprox}. \item {\em Recursive Step.} We have two cases: (a)~If $\mathfrak{g}$ is $\mathfrak{g}_1+\mathfrak{g}_2$, then $\mathsf{OvAp}(\mathfrak{g},h)$ is $\mathsf{OvAp}(\mathfrak{g}_1,h)+\mathsf{OvAp}(\mathfrak{g}_2,h)$. 
(b)~If $\mathfrak{g}$ is $c'\cdot\mathfrak{g}'$, then $\mathsf{OvAp}(\mathfrak{g},h)$ is $c'\cdot\mathsf{OvAp}(\mathfrak{g}',h)$. \end{itemize} \end{definition} \begin{table} \caption{Illustration for Definition~\ref{def:unioverapprox} where the notations are given in the top-left corner.} \label{tbl:unioverapprox} \centering \scalebox{0.82}{ \begin{tabular}{|c|c|c|c|} \hline Notation & Expression & $\mathfrak{f}$, $\mathrm{T}$-term & Over-approximation \\ \hline $\mathfrak{e}_1$ & $\mathrm{T}(\mathfrak{n}-1)$ & $\ln{\mathfrak{n}}$, $\mathfrak{e}_1$ & $\ln{\mathfrak{n}}-\frac{1}{\mathfrak{n}}$\\ \hline $\mathfrak{e}_2$ & $\mathrm{T}\left(\left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor\right)$ & $\ln{\mathfrak{n}}$, $\mathfrak{e}_2$ & $\ln{\mathfrak{n}}-\ln{2}$ \\ \hline $\mathfrak{e}_3$ & $\mathrm{T}\left(\left\lceil\frac{\mathfrak{n}}{2}\right\rceil\right)$ & $\ln{\mathfrak{n}}$, $\mathfrak{e}_3$ & $\ln{\mathfrak{n}}-\ln{2}+\frac{1}{\mathfrak{n}}$ \\ \hline $\mathfrak{e}_4$ & $\frac{1}{\mathfrak{n}}\cdot \sum_{\mathfrak{j}=1}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})$ & $\ln{\mathfrak{n}}$, $\mathfrak{e}_4$ & $\ln{\mathfrak{n}}-1-\frac{\ln{\mathfrak{n}}}{2\cdot\mathfrak{n}} +\frac{13}{12}\cdot\frac{1}{\mathfrak{n}}$\\ \hline $\mathfrak{e}_5$ & $\frac{1}{\mathfrak{n}}\cdot\left(\sum_{\mathfrak{j}=\left\lceil\frac{\mathfrak{n}}{2}\right\rceil}^{\mathfrak{n}-1}\mathrm{T}(\mathfrak{j})+ \sum_{\mathfrak{j}=\left\lfloor\frac{\mathfrak{n}}{2}\right\rfloor}^{\mathfrak{n}-1} \mathrm{T}(\mathfrak{j})\right)$ & $\ln{\mathfrak{n}}$, $\mathfrak{e}_5$ & $\ln{\mathfrak{n}}-(1-\ln{2})+\frac{\ln{\mathfrak{n}}}{2\cdot \mathfrak{n}}+\frac{0.6672}{\mathfrak{n}}+\frac{1}{2\cdot \mathfrak{n}^2} $ \\ \hline \hline $\mathfrak{f}$, $\mathrm{T}$-term & Over-approximation & $\mathfrak{f}$, $\mathrm{T}$-term & Over-approximation \\ \hline $\mathfrak{n}$, $\mathfrak{e}_1$ & $\mathfrak{n}-1$ & $\mathfrak{n}\cdot\ln{\mathfrak{n}}$, $\mathfrak{e}_1$ & $\mathfrak{n}\cdot\ln{\mathfrak{n}}-\ln{\mathfrak{n}}-1+\frac{1}{\mathfrak{n}}$ \\ \hline $\mathfrak{n}$, $\mathfrak{e}_2$ & $\frac{\mathfrak{n}}{2}$ & $\mathfrak{n}\cdot\ln{\mathfrak{n}}$, $\mathfrak{e}_2$ & $\frac{1}{2}\cdot\mathfrak{n}\cdot \ln{\mathfrak{n}}-\frac{\ln{2}}{2}\cdot\mathfrak{n}$ \\ \hline $\mathfrak{n}$, $\mathfrak{e}_3$ & $\frac{\mathfrak{n}+1}{2}$ & $\mathfrak{n}\cdot\ln{\mathfrak{n}}$, $\mathfrak{e}_3$ & $\frac{\mathfrak{n}\cdot \ln{\mathfrak{n}}}{2}-\frac{\ln{2}}{2}\cdot \mathfrak{n}+\frac{1-\ln{2}}{2}+\frac{\ln{\mathfrak{n}}}{2}+\frac{1}{2\cdot\mathfrak{n}}$ \\ \hline $\mathfrak{n}$, $\mathfrak{e}_4$ & $\frac{\mathfrak{n}-1}{2}$ & $\mathfrak{n}\cdot\ln{\mathfrak{n}}$, $\mathfrak{e}_4$ & $\frac{\mathfrak{n}\cdot\ln{\mathfrak{n}}}{2}-\frac{\mathfrak{n}}{4}-\frac{\ln{\mathfrak{n}}}{2}+\frac{\ln{\mathfrak{n}}}{12\cdot\mathfrak{n}}+\frac{0.5139}{\mathfrak{n}}$ \\ \hline \multirow{2}{*}{$\mathfrak{n}$, $\mathfrak{e}_5$} & \multirow{2}{*}{$\frac{3}{4}\cdot \mathfrak{n}-\frac{1}{4\cdot \mathfrak{n}}$} & \multirow{2}{*}{$\mathfrak{n}\cdot\ln{\mathfrak{n}}$, $\mathfrak{e}_5$} & $\frac{3}{4}\cdot \mathfrak{n}\cdot \ln{\mathfrak{n}}-0.2017\cdot \mathfrak{n}-\frac{1}{2}\cdot \ln{\mathfrak{n}}$ \\ & & & $-0.2698+\frac{\ln{\mathfrak{n}}}{8\cdot\mathfrak{n}}+\frac{1.6369}{\mathfrak{n}}+\frac{1}{2\cdot\mathfrak{n}\cdot(\mathfrak{n}-1)}+\frac{1}{4\cdot \mathfrak{n}^2}$ \\ \hline \end{tabular} } \vspace{-1em} \end{table} \begin{example}\label{ex:reloverapprox} Consider the recurrence relation for Sherwood's {\sc Randomized-Search} (cf.~(\ref{eq:relrandsearch})). 
Choose $\mathfrak{f}=\ln{\mathfrak{n}}$ and then the template $h$ becomes $n\mapsto d\cdot \ln{n}+1$. From Example~\ref{ex:overapprox}, we have that the over-approximation for $6+\frac{1}{\mathfrak{n}}\cdot\left( \sum_{\mathfrak{j}=\left\lceil\frac{\mathfrak{n}}{2}\right\rceil}^{\mathfrak{n}-1}\mathrm{T}(\mathfrak{j})+ \sum_{\mathfrak{j}=\left\lfloor\frac{\mathfrak{\mathfrak{n}}}{2}\right\rfloor}^{\mathfrak{\mathfrak{n}}-1} \mathrm{T}(\mathfrak{j})\right)$ when $n\ge 4$ is $7+ d\cdot \left[\ln{n}-(1-\ln{2})+\frac{\ln{n}}{2\cdot n}+\frac{0.6672}{n}+\frac{1}{2\cdot n^2}\right]$ (the second summand comes from an over-approximation of $\frac{1}{\mathfrak{n}}\cdot\left( \sum_{\mathfrak{j}=\left\lceil\frac{\mathfrak{n}}{2}\right\rceil}^{\mathfrak{n}-1}d\cdot \ln{\mathfrak{j}}+ \sum_{\mathfrak{j}=\left\lfloor\frac{\mathfrak{\mathfrak{n}}}{2}\right\rfloor}^{\mathfrak{\mathfrak{n}}-1} d\cdot \ln{\mathfrak{j}}\right)$).\qed \end{example} \begin{remark} Since integrations of the form $\int x^k\ln^l x\,\mathrm{d}x$ can be calculated in closed forms (cf. Remark~\ref{rmk:exthigerdegree}), Table~\ref{tbl:unioverapprox} can be extended to logarithmic expressions with higher order, e.g., $\mathfrak{n}^2\ln \mathfrak{n}$.\qed \end{remark} \smallskip\noindent{\em Pseudo-polynomials.} Our next step is to define the notion of (univariate) pseudo-polynomials which extends normal polynomials with logarithm. This notion is crucial to handle inductive arguments in the definition of (univariate) guess-and-check functions. \begin{definition}[Univariate Pseudo-polynomials] A univariate pseudo-polynomial (w.r.t logarithm) is a function $p:\Nset\rightarrow\Rset$ such that there exist non-negative integers $k,\ell \in \Nset_0$ and real numbers $a_i,b_i$'s such that for all $n\in\Nset$, \vspace{-0.5em} \begin{equation}\label{eq:pseudopoly} \textstyle p(n)=\sum_{i=0}^{k} a_i\cdot n^{i}\cdot \ln{n}+\sum_{i=0}^{\ell} b_i\cdot n^{i}. \end{equation} \vspace{-1em} \end{definition} W.l.o.g, we consider that in the form (\ref{eq:pseudopoly}), it holds that (i) $a^2_k+b^2_\ell\ne 0$, (ii) either $a_k\ne 0$ or $k=0$, and (iii) similarly either $b_\ell\ne 0$ or $\ell=0$. \smallskip\noindent{\em Degree of pseudo-polynomials.} Given a univariate pseudo-polynomial $p$ in the form (\ref{eq:pseudopoly}), we define the \emph{degree} $\mathrm{deg}(p)$ of $p$ by: $\mathrm{deg}(p)= k+\frac{1}{2}$ if $k\ge \ell\mbox{ and }a_k\ne 0$ and $\ell$ otherwise. Intuitively, if the term with highest degree involves logarithm, then we increase the degree by $1/2$, else it is the power of the highest degree term. \smallskip\noindent{\em Leading term $\overline{p}$.} The \emph{leading term} $\overline{p}$ of a pseudo-polynomial $p$ in the form (\ref{eq:pseudopoly}) is a function $\overline{p}:\Nset\rightarrow\Rset$ defined as follows: $\overline{p}(n)=a_{k}\cdot n^{k}\cdot \ln{n} \mbox{ if }k\ge \ell\mbox{ and }a_k\ne 0$; and $b_{\ell}\cdot n^{\ell} \mbox{ otherwise }$; for all $n\in\Nset$. Furthermore, we define $C_p$ to be the (only) coefficient of $\overline{p}$. With the notion of pseudo-polynomials, the inductive argument of guess-and-check functions can be soundly transformed into an inequality between pseudo-polynomials. \begin{lemma}\label{lemm:unitrans} Let $\mathfrak{f}\in\{\ln{\mathfrak{n}},\mathfrak{n},\mathfrak{n}\cdot\ln{\mathfrak{n}}\}$ and $c$ be a constant. 
For all univariate recurrence expressions $\mathfrak{g}$, there exists pseudo-polynomials $p$ and $q$ such that coefficients (i.e., $a_i,b_i$'s in~(\ref{eq:pseudopoly})) of $q$ are all non-negative, $C_q>0$ and the following assertion holds: for all $d>0$ and for all $n\ge 2$, with $h=d\cdot \mathsf{Subst}({\mathfrak{f}})+c$, the inequality $\mathsf{OvAp}(\mathfrak{g}, h)(n)\le h(n)$ is equivalent to $d\cdot p(n)\ge q(n)$. \end{lemma} \begin{remark}\label{rem:unitrans} In the above lemma, though we only refer to existence of pseudo-polynomials $p$ and $q$, they can actually be computed in linear time, because $p$ and $q$ are obtained by simple rearrangements of terms from $\mathsf{OvAp}(\mathfrak{g}, h)$ and $h$, respectively. \end{remark} \begin{example}\label{ex:inequality} Let us continue with Sherwood's {\sc Randomized-Search}. Again choose $h=d\cdot\ln{\mathfrak{n}}+1$. From Example~\ref{ex:reloverapprox}, we obtain that for every $n\ge 4$, the inequality \begin{align*} d\cdot\ln{n}+1\ge 7+ d\cdot \left[\ln{n}-(1-\ln{2})+\frac{\ln{n}}{2\cdot n}+\frac{0.6672}{n}+\frac{1}{2\cdot n^2}\right] \end{align*} resulting from over-approximation and the inductive argument of guess-and-check functions is equivalent to $d\cdot\left[(1-\ln{2})\cdot n^2-\frac{n\cdot\ln{n}}{2}-0.6672\cdot n-\frac{1}{2}\right]\ge 6\cdot n^2$.\qed \end{example} As is indicated in Definition~\ref{def:uniguess}, our aim is to check whether $ \mathsf{OvAp}(\mathfrak{g}, h)(n)\le h(n)$ holds for sufficiently large $n$. The following proposition provides a sufficient and necessary condition for checking whether $d\cdot p(n)\ge q(n)$ holds for sufficiently large $n$. \begin{proposition}\label{prop:unisufflarge} Let $p,q$ be pseudo-polynomials such that $C_q>0$ and all coefficients of $q$ are non-negative. Then there exists a real number $d>0$ such that $d\cdot p(n)\ge q(n)$ for sufficiently large $n$ iff $\mathrm{deg}(p)\ge \mathrm{deg}(q)$ and $C_p>0$. \end{proposition} Note that by Definition~\ref{def:uniguess} and the special form (\ref{eq:uniguess}) for univariate guess-and-check functions, a function in form (\ref{eq:uniguess}) needs only to satisfy the inductive argument in order to be a univariate guess-and-check function: once a value for $d$ is synthesized for a sufficiently large $N$, one can scale the value so that the base condition is also satisfied. Thus from the sufficiency of Proposition~\ref{prop:unisufflarge}, our decision algorithm that checks the existence of some guess-and-check function in form (\ref{eq:uniguess}) is presented below. Below we fix an input univariate recurrence relation $G$ taking the form (\ref{eq:unirecurrel}) and an input expression $\mathfrak{f}\in\{\ln{\mathfrak{n}},\mathfrak{n},\mathfrak{n}\cdot\ln{\mathfrak{n}}\}$. \textbf{Algorithm} $\mbox{\sl UniDec}$: Our algorithm, namely $\mbox{\sl UniDec}$, for the decision problem of the univariate case, has the following steps. \begin{compactenum} \item {\em Template.} The algorithm establishes a scalar variable $d$ and sets up the template $d\cdot \mathfrak{f}+c$ for a univariate guess-and-check function. \item {\em Over-approximation.} Let $h$ denote $d \cdot \mathsf{Subst}(\mathfrak{f}) +c$. The algorithm calculates the over-approximation function $\mathsf{OvAp}(\mathfrak{e},h)$, where $\mathfrak{e}$ is from (\ref{eq:unirecurrel}). 
\item {\em Transformation.} The algorithm transforms the inequality $\mathsf{OvAp}(\mathfrak{e},h)(n) \le h(n) ~~(n\in\Nset)$ for inductive argument of guess-and-check functions through Lemma~\ref{lemm:unitrans} equivalently into $d\cdot p(n)\ge q(n)~~(n\in\Nset)$, where $p,q$ are pseudo-polynomials obtained in linear-time through rearrangement of terms from $\mathsf{OvAp}(\mathfrak{e},h)$ and $h$ (see Remark~\ref{rem:unitrans}). \item {\em Coefficient Checking.} The algorithm examines cases on $C_p$. If $C_p> 0$ and $\mathrm{deg}(p) \ge \mathrm{deg}(q)$, then algorithm outputs ``$\mbox{\sl yes}$'' meaning that ``there exists a univariate guess-and-check function''; otherwise, the algorithm outputs ``$\mbox{\sl fail}$''. \end{compactenum} \begin{theorem}[Soundness for $\mbox{\sl UniDec}$]\label{thm:soundnessunidec} If $\mbox{\sl UniDec}$ outputs ``$\mbox{\sl yes}$'', then there exists a univariate guess-and-check function in form~(\ref{eq:uniguess}) for the inputs $G$ and $\mathfrak{f}$. The algorithm is a linear-time algorithm in the size of the input recurrence relation. \end{theorem} \begin{example} Consider Sherwood's {\sc Randomized-Search} recurrence relation (cf.~(\ref{eq:relrandsearch})) and $\mathfrak{f}=\ln{\mathfrak{n}}$ as the input. As illustrated in Example~\ref{ex:reloverapprox} and Example~\ref{ex:inequality}, the algorithm asserts that the asymptotic behaviour is $\mathcal{O}(\ln{n})$.\qed \end{example} \begin{remark} From the tightness of our over-approximation (up to only constant deviation) and the sufficiency and necessity of Proposition~\ref{prop:unisufflarge}, the $\mbox{\sl UniDec}$ algorithm can handle a large class of univariate recurrence relations. Moreover, the algorithm is quite simple and efficient (linear-time). However, we do not know whether our approach is complete. We suspect that there is certain intricate recurrence relations that will make our approach fail. \end{remark} \noindent{\bf Analysis of examples of Section~\ref{sec:motivatinguni}.} Our algorithm can decide the following optimal bounds for the examples of Section~\ref{sec:motivatinguni}. \begin{compactenum} \item For Example~\ref{ex:randsearch} we obtain an $\mathcal{O}(\log n)$ bound (recall worst-case bound is $\Theta(n)$). \item For Example~\ref{ex:quicksort} we obtain an $\mathcal{O}(n\cdot\log n)$ bound (recall worst-case bound is $\Theta(n^2)$). \item For Example~\ref{ex:quickselect} we obtain an $\mathcal{O}(n)$ bound (recall worst-case bound is $\Theta(n^2)$). \item For Example~\ref{ex:diameter} we obtain an $\mathcal{O}(n\cdot\log n)$ (resp. $\mathcal{O}(n)$) bound for Euclidean metric (resp. for $L_1$ metric), whereas the worst-case bound is $\Theta(n^2\cdot\log n)$ (resp. $\Theta(n^2)$). \item For Example~\ref{ex:sortselect} we obtain an $\mathcal{O}(n\cdot\log n)$ bound (recall worst-case bound is $\Theta(n^2)$). \end{compactenum} In all cases above, our algorithm decides the asymptotically optimal bounds for the expected-runtime analysis, whereas the worst-case analysis grossly over-estimate the expected-runtime bounds. \smallskip\noindent{\bf Quantitative bounds.} Above we have already established that our linear-time decision algorithm can establish the asymptotically optimal bounds for the recurrence relations of several classical algorithms. We now take the next step to obtain even explicit quantitative bounds, i.e., to synthesize the associated constants with the asymptotic complexity. 
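\smallskip\noindent{\em Illustration of the decision step.} Before turning to the synthesis of constants, the following sketch (in Python) renders the comparison behind the {\em Coefficient Checking} step of $\mbox{\sl UniDec}$. The encoding of a pseudo-polynomial $p(n)=\sum_{i} a_i\cdot n^{i}\cdot \ln{n}+\sum_{i} b_i\cdot n^{i}$ as a pair of coefficient lists, as well as all names, are our own choices for illustration; the test succeeds iff $C_p>0$ and $\mathrm{deg}(p)\ge \mathrm{deg}(q)$, assuming $q$ has non-negative coefficients and $C_q>0$.
\begin{verbatim}
import math

def degree(a, b):
    # deg(p) = k + 1/2 if the leading term carries ln(n), else l.
    k = max((i for i, c in enumerate(a) if c != 0), default=0)
    l = max((i for i, c in enumerate(b) if c != 0), default=0)
    return k + 0.5 if (a and k >= l and a[k] != 0) else l

def leading(a, b):
    # Coefficient C_p of the leading term of p.
    k = max((i for i, c in enumerate(a) if c != 0), default=0)
    l = max((i for i, c in enumerate(b) if c != 0), default=0)
    return a[k] if (a and k >= l and a[k] != 0) else b[l]

def unidec_check(p, q):
    # Some d > 0 with d*p(n) >= q(n) for all sufficiently large n
    # exists iff C_p > 0 and deg(p) >= deg(q).
    return leading(*p) > 0 and degree(*p) >= degree(*q)

# Instance from Sherwood's Randomized-Search:
#   d*[(1 - ln 2)*n^2 - (n*ln n)/2 - 0.6672*n - 1/2] >= 6*n^2
p = ([0.0, -0.5, 0.0], [-0.5, -0.6672, 1.0 - math.log(2.0)])
q = ([0.0, 0.0, 0.0], [0.0, 0.0, 6.0])
print(unidec_check(p, q))   # True: the O(ln n) bound is certified
\end{verbatim}
The example instance is the inequality derived above for Sherwood's {\sc Randomized-Search}; the quantitative step described next additionally computes an explicit value of $d$.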
To this end, we first construct an explicit threshold for ``sufficiently large numbers'' and then show in Proposition~\ref{prop:unisufflargeN} that $N_{\epsilon,p,q}$ is indeed the threshold we need.
\begin{definition}[Threshold $N_{\epsilon,p,q}$ for Sufficiently Large Numbers]\label{def:unisuffN}
Let $p,q$ be two univariate pseudo-polynomials
$p(n)=\sum_{i=0}^{k} a_i\cdot n^{i}\cdot \ln{n}+\sum_{i=0}^{\ell} b_i\cdot n^{i}$~,
$q(n)=\sum_{i=0}^{k'} a'_i\cdot n^{i}\cdot \ln{n}+\sum_{i=0}^{\ell'} b'_i\cdot n^{i}$
such that $\mathrm{deg}(p)\ge \mathrm{deg}(q)$ and $C_p,C_q>0$. Then given any $\epsilon\in (0,1)$, the number $N_{\epsilon,p,q}$ is defined as the smallest natural number $N$ such that both $x$ and $y$ (defined below) are smaller than $\epsilon$:
\begin{compactitem}
\item $x=-1+\sum_{i=0}^{k} |a_i|\cdot \frac{N^{i}\cdot \ln{N}}{\overline{p}(N)}+\sum_{i=0}^{\ell} |b_i|\cdot \frac{N^{i}}{\overline{p}(N)}$ ;
\item $y=-\mathbf{1}_{\mathrm{deg}(p)=\mathrm{deg}(q)}\cdot\frac{C_q}{C_p}+\sum_{i=0}^{k'} |a'_i|\cdot \frac{N^{i}\cdot \ln{N}}{\overline{p}(N)}+\sum_{i=0}^{\ell'} |b'_i|\cdot \frac{N^{i}}{\overline{p}(N)}$ .
\end{compactitem}
where $\mathbf{1}_{\mathrm{deg}(p)=\mathrm{deg}(q)}$ equals $1$ when ${\mathrm{deg}(p)=\mathrm{deg}(q)}$ and $0$ otherwise.
\end{definition}
\begin{proposition}\label{prop:unisufflargeN}
Consider two univariate pseudo-polynomials $p,q$ such that $\mathrm{deg}(p)\ge \mathrm{deg}(q)$, all coefficients of $q$ are non-negative and $C_p,C_q>0$. Then given any $\epsilon\in (0,1)$, $\frac{q(n)}{p(n)}\le \frac{\mathbf{1}_{\mathrm{deg}(p)=\mathrm{deg}(q)}\cdot \frac{C_q}{C_p}+\epsilon}{1-\epsilon}$ for all $n\ge N_{\epsilon,p,q}$ (for $N_{\epsilon,p,q}$ of Definition~\ref{def:unisuffN}).
\end{proposition}
With Proposition~\ref{prop:unisufflargeN}, we describe our algorithm $\mbox{\sl UniSynth}$, which outputs explicitly a value for $d$ (in (\ref{eq:uniguess})) if $\mbox{\sl UniDec}$ outputs yes. Below we fix an input univariate recurrence relation $G$ taking the form (\ref{eq:unirecurrel}) and an input expression $\mathfrak{f}\in\{\ln{\mathfrak{n}},\mathfrak{n},\mathfrak{n}\cdot\ln{\mathfrak{n}}\}$. Moreover, the algorithm takes $\epsilon\in(0,1)$ as another input, which is a parameter controlling the threshold for the finite-range check: a smaller $\epsilon$ leads to a larger threshold, and vice versa. Thus we provide a flexible algorithm, as the threshold can be varied with the choice of $\epsilon$.
\textbf{Algorithm} $\mbox{\sl UniSynth}$: Our algorithm for the quantitative problem has the following steps:
\begin{compactenum}
\item {\em Calling $\mbox{\sl UniDec}$.} The algorithm calls $\mbox{\sl UniDec}$; if it returns ``$\mbox{\sl fail}$'', then return ``$\mbox{\sl fail}$'', otherwise execute the following steps. Obtain the inequality $d\cdot p(n)\ge q(n)~~(n\in\Nset)$ from the transformation step of $\mbox{\sl UniDec}$.
\item {\em Variable Solving.} The algorithm calculates $N_{\epsilon, p,q}$ for the given $\epsilon\in(0,1)$ by, e.g.,
repeatedly increasing $n$ (see Definition~\ref{def:unisuffN}) and outputs the value of $d$ as the least number such that the following two conditions hold: (i)~for all $2\le n< N_{\epsilon, p,q}$, we have $\mathsf{Eval}(G)(n)\le d\cdot \mathsf{Subst}({\mathfrak{f}})(n)+c$ (recall $\mathsf{Eval}(G)(n)$ can be computed in linear time), and (ii)~we have $d\ge \frac{\mathbf{1}_{\mathrm{deg}(p)=\mathrm{deg}(q)}\cdot \frac{C_q}{C_p}+\epsilon}{1-\epsilon}$. \end{compactenum} \begin{theorem}[Soundness for $\mbox{\sl UniSynth}$]\label{thm:soundnessunisynth} If the algorithm $\mbox{\sl UniSynth}$ outputs a real number $d$, then $d\cdot \mathsf{Subst}(\mathfrak{f})+c$ is a univariate guess-and-check function for $G$. \end{theorem} \begin{example} Consider the recurrence relation for Sherwood's {\sc Randomized-Search} (cf.~(\ref{eq:relrandsearch})) and $\mathfrak{f}=\ln{\mathfrak{n}}$. Consider that $\epsilon:=0.9$. From Example~\ref{ex:reloverapprox} and Example~\ref{ex:inequality}, the algorithm establishes the inequality $d\ge \frac{ 6}{(1-\ln{2})-\frac{\ln{n}}{2\cdot n}-\frac{0.6672}{n}-\frac{1}{2\cdot n^2}}$ and finds that $N_{0.9,p,q}=6$. Then the algorithm finds $d=204.5335$ through the followings: (a) $\mathsf{Eval}(G)(2)=7\le d\cdot \ln{2}+1$; (b) $\mathsf{Eval}(G)(3)=11\le d\cdot \ln{3}+1$; (c) $\mathsf{Eval}(G)(4)=15\le d\cdot \ln{4}+1$; (d) $\mathsf{Eval}(G)(5)=17.8\le d\cdot \ln{5}+1$; (e) $d\ge \frac{\frac{6}{1-\ln{2}}+0.9}{1-0.9}$. Thus, by Theorem~\ref{thm:uniguess}, the expected running time of the algorithm has an upper bound $204.5335\cdot \ln{n}+1$. Later in Section~\ref{sect:experiments}, we show that one can obtain a much better $d=19.762$ through our algorithms by choosing $\epsilon:=0.01$, which is quite good since the optimal value lies in $[15.129, 19.762]$ (cf. the first item {\sc R.-Sear.} in Table~\ref{tab:experiments}).\qed \end{example} \vspace{-1.5em} \subsection{Algorithm for Bivariate Recurrence Relations}\label{sect:bisynth} \vspace{-1em} In this part, we present our results for the separable bivariate recurrence relations. The key idea is to use separability to reduce the problem to univariate recurrence relations. There are two key steps which we describe below. \smallskip\noindent{\em Step~1.} The first step is to reduce a separable bivariate recurrence relation to a univariate one. \begin{definition}[From $G$ to $\Uni{G}$] Let $G$ be a separable bivariate recurrence relation taking the form~(\ref{eq:birecurrel}). The univariate recurrence relation $\Uni{G}$ from $G$ is defined by eliminating any occurrence of $\mathfrak{n}$ and replacing any occurrence of $\mathfrak{h}$ with $1$. \end{definition} Informally, $\Uni{G}$ is obtained from $G$ by simply eliminating the roles of $\mathfrak{h},\mathfrak{n}$. The following example illustrates the situation for {\sc Coupon-Collector} example. \begin{example} Consider $G$ to be the recurrence relation (\ref{eq:relcoupon}) for {\sc Coupon-Collector} example. Then $\Uni{G}$ is as follows: $\mathrm{T}(\mathfrak{n})=\frac{1}{\mathfrak{n}}+ \mathrm{T}(\mathfrak{n}-1)$ and $\mathrm{T}(1)=1$. \qed \end{example} \smallskip\noindent{\em Step~2.} The second step is to establish the relationship between $T_G$ and $T_{\Uni{G}}$, which is handled by the following proposition, whose proof is an easy induction on $m$. 
\begin{proposition}\label{prop:reduction} For any separable bivariate recurrence relation $G$ taking the form (\ref{eq:birecurrel}), the solution $T_G$ is equal to $(n,m)\mapsto \mathsf{Subst}(\mathfrak{h})(n) \cdot T_{\Uni{G}}(m)$.
\end{proposition}
\smallskip\noindent{\em Description of the algorithm.} With Proposition~\ref{prop:reduction}, the algorithm for separable bivariate recurrence relations is straightforward: simply compute $\Uni{G}$ for $G$ and then call the algorithms for the univariate case presented in Section~\ref{sect:unisynth}.
\smallskip\noindent{\bf Analysis of examples in Section~\ref{sec:motivatingbi}.} Our algorithm can decide the following optimal bounds for the examples of Section~\ref{sec:motivatingbi}.
\begin{compactenum}
\item For Example~\ref{ex:coupon} we obtain an $\mathcal{O}(n\cdot \log m)$ bound, whereas the worst-case bound is $\infty$.
\item For Example~\ref{ex:channel} we obtain an $\mathcal{O}(n\cdot\log m)$ bound for the distributed setting and an $\mathcal{O}(m)$ bound for the concurrent setting, whereas the worst-case bounds are both $\infty$.
\end{compactenum}
Note that for all our examples, $m \leq n$, and thus we obtain $\mathcal{O}(n\cdot \log n)$ and $\mathcal{O}(n)$ upper bounds for expected-runtime analysis, which are the asymptotically optimal bounds. In all cases above, the worst-case analysis is completely ineffective as the worst-case bounds are infinite. Moreover, consider Example~\ref{ex:channel}, where the optimal number of rounds is $n$ (i.e., one process every round, which centralized Round-Robin schemes can achieve). The randomized algorithm, with one shared variable, is a decentralized algorithm that achieves an expected number of rounds in $\mathcal{O}(n)$ (i.e., the optimal asymptotic expected-runtime complexity).
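\smallskip\noindent{\em A numerical check of the reduction.} The following small sketch (in Python; the chosen values of $n,m$, the tolerance and all names are ours) checks Proposition~\ref{prop:reduction} on the distributed {\sc Channel-Conflict Resolution} recurrence~(\ref{eq:relresourcea}), reading off $\mathsf{Subst}(\mathfrak{h})(n)=n$ from that recurrence as an assumption for illustration; $\Uni{G}$ is then $\mathrm{T}(1)=1$, $\mathrm{T}(\mathfrak{m})=e/\mathfrak{m}+\mathrm{T}(\mathfrak{m}-1)$.
\begin{verbatim}
import math

def T_biv(n, m):
    # Direct unfolding of T(n,1) = n, T(n,j) = n*e/j + T(n,j-1).
    t = float(n)
    for j in range(2, m + 1):
        t += n * math.e / j
    return t

def T_uni(m):
    # Uni(G): T(1) = 1, T(j) = e/j + T(j-1)  (n eliminated, h := 1).
    t = 1.0
    for j in range(2, m + 1):
        t += math.e / j
    return t

n, m = 50, 30
print(abs(T_biv(n, m) - n * T_uni(m)) < 1e-9)  # True: T_G(n,m) = n*T_Uni(m)
print(T_uni(m))  # equals e*(H_m - 1) + 1, i.e. Theta(log m) growth
\end{verbatim}
The direct unfolding agrees with $n\cdot T_{\Uni{G}}(m)$, and $T_{\Uni{G}}(m)$ grows harmonically, consistent with the $\mathcal{O}(n\cdot\log m)$ bound stated above.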
{ "redpajama_set_name": "RedPajamaArXiv" }
7,881
Q: request()->validate() does not work the second time around n laravel 8 I have a method to store and validate data from POST request, the first method works fine and validated successfully. public function saveClientInfo(){ ClientInfo::create($this->validateClientInfo()); } public function validateClientInfo(){ return request()->validate([ 'code' => 'required', 're' => ['required','unique:client_infos'], 'name' => 'required', 'contact' => 'required', 'email' => 'required', 'property' => 'required', 'status' => 'required' ]); } Here in this second method with the same structure for saving and validating the requests, it doesn't work I'm very intrigue as to why. public function loanStatus(){ PaymentInfo::create($this->validatePay()); } public function validatePay(){ return request()->validate([ 'payment_type' => 'required', 'terms' => 'required', 'amount_due' => 'required', 'from' => 'required', 'to' => 'required' ]); } By the way they're in the same controller so I dont get nauseous trying to figure out what is the difference of both as to where I went wrong. NOTE: the error it gives is Route GET is not Supported suggested Method POSTS some sort of like this, that's why it's confusing enough already I checked the form and routes and I used post so why this error is showing so I figure it must be the request validate part. Blade File <form action="/proceed/loan-status" method="POST"> @csrf <input type="hidden" name="re" value="{{ $re }}"> <div class="row"> <div class="p-2"> <input type="text" class="form-control" name="payment-type" placeholder="Payment Type..."> @if($errors->has('payment_type'))       <p class="alert text-danger">{{ $errors->first('payment_type') }}</p> @endif </div> <div class="p-2"> <input type="text" class="form-control" name="terms" placeholder="Terms..."> </div> <div class="p-2"> <input type="number" min="2" max="" step="any" class="form-control" name="amount" placeholder="Amount Due..."> </div> <div class="p-2"> <input type="text" class="form-control" name="from" placeholder="From..."> </div> <div class="p-2"> <input type="text" class="form-control" name="to" placeholder="To..."> </div> </div> </div> <div class="card-footer d-flex justify-content-between"> <button type="submit" class="btn btn-success btn-sm">Save</button> <a href="/" class="btn btn-danger btn-sm">Cancel</a> </div> </form> web.php Route::post('/proceed/loan-status', 'ClientInfoController@loanStatus'); A: If your Laravel version is 8, change the route sintax Route::post('/proceed/loan-status', ClientInfoController::class,'loanStatus')->name('proceed.loan-status'); and in the blade form change the action to <form action="{{ route('proceed.loan-status') }}" method="POST"> Also, I suggest you use validation request like Laravel doc, it's a better coded controller and adjusted to patterns. Changing that, you must have something like public function loanStatus(StorePaymentInfoRequest $request){ PaymentInfo::create($request->validated()); } and in the StorePaymentInfoRequest /** * Determine if the user is authorized to make this request. * * @return bool */ public function authorize() { return true; } /** * Get the validation rules that apply to the request. * * @return array */ public function rules() { return [ 'payment_type' => 'required', 'terms' => 'required', 'amount_due' => 'required', 'from' => 'required', 'to' => 'required' ]; } A: Please use the laravel's request validation. php artisan make:request FormRequest It will generate a file that you can use. 
https://medium.com/@kamerk22/the-smart-way-to-handle-request-validation-in-laravel-5e8886279271 This article is a good example.
A: The error looks like you have defined a POST route in web.php but the form is submitting a GET request (or vice versa), so it throws that error. You need to change the form method to POST to match the route: <form method="POST"> Or, if you still want to use a GET request in the form, then you need to update the route from post to get.
A: In your routes file, write it like this (replace the placeholder names with your own): Route::post('/route-name', [ControllerName::class, 'functionName'])->name('route-name.functionName');
{ "redpajama_set_name": "RedPajamaStackExchange" }
9,452
Thélod is a French commune located in the department of Meurthe-et-Moselle in the Grand Est region. In 2007 it had 263 inhabitants.

Demographics

Population
In 2007 the de facto population of Thélod was 263 people. There were 100 households, of which 16 were single-person households (8 men living alone and 8 women living alone), 28 were couples without children, 40 were couples with children and 16 were single-parent families with children.
The population has evolved according to the following chart:
Census population

Housing
In 2007 there were 106 dwellings: 99 were primary residences, 6 were second homes and 1 was vacant. 102 were houses and 4 were flats. Of the 99 primary residences, 89 were occupied by their owners and 10 were rented and occupied by tenants; 1 had two rooms, 9 had three, 15 had four and 74 had five or more. 87 dwellings had at least one parking space. 35 households had one car and 58 had two or more.

Population pyramid
The population pyramid by age and sex in 2009 was:

Economy
In 2007 the working-age population was 187 people, of whom 149 were active and 38 were inactive. Of the 149 active people, 145 were employed (77 men and 68 women) and 4 were unemployed (1 man and 3 women). Of the 38 inactive people, 17 were retired, 13 were students and 8 were classified as "other inactive".

Income
In 2009 Thélod had 100 tax households comprising 266.5 persons, and the median annual taxable income per person was 22,484 €.

Economic activities
Of the 10 establishments present in 2007, 1 belonged to a company manufacturing other industrial products, 3 to construction companies, 3 to companies in trade and automobile repair, 1 to a transport company and 2 to service companies.
Of the 3 establishments providing services to individuals in 2009, 1 was a bricklayer, 1 a plasterer-painter and 1 a carpentry workshop.
Of the 2 commercial establishments present in 2009, 1 was a fishmonger and 1 a clothing shop.
In 2000 there were 4 farms in Thélod.

Nearest towns
The following diagram shows the nearest towns.

References
Résumé statistique: summary sheet of statistical data for Thélod at INSEE.
Évolution et structure de la population: sheet with detailed data for Thélod at INSEE.
France par commune: detailed data for all the municipalities of France, accessible from the map.

Municipalities of Meurthe-et-Moselle
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,786
Home to diverse and vibrant terrain, the desert region is known for offering a variety of natural landscapes. From blooming Joshua Trees to rolling boulder hills, there's no scarcity of inspiration for creating your own desert landscape at home. With temperatures rising and summer quickly approaching, now is the best time to give your home's landscape an upgrade and we're here to make your next desert landscaping project even easier. As you begin to plan your summer projects, consider these landscaping tips. When preparing for new desert landscaping projects, one of the biggest factors to keep in mind is texture. How will the plants and rocks fit the style of your home? Plants with strong textural contrasts add flavor and interest to your garden. You can combine plants with varying leaf sized and structural patterns. Utilizing unplanted ground space to leave room for other plants to stand out can also be beneficial. Plants with strong textures can easily be paired with small decorative rock and gravel. Desert color palettes are important for existing natural vegetation. Dry climates are often home to brown and gray hues. When landscaping, think of how to introduce colors to your home strategically. Warm yellow and red plants like aloe or ice plants, can be used as focal points. Contrasting colors can also create a unique balance and flow throughout your front or backyard. Keep in mind the leaves of trees, boulder rock colors, stems of succulents, and bark of trees. Railroad ties are a unique element to include in your landscape and can be multifaceted in use. They can also influence the design of your yard, because of their preserved rustic look. Ties can be used to with a combination of sand, brick, large rocks, or concrete. Design a terraced backyard garden utilizing multiple two-tie retaining walls. You can also cut ties into any length to serve as in-ground steps, to give your steps an extra boost, and bed borders. They are generally used as retaining walls. Their strength can resist decay from the dirt and moisture against them and are often inexpensive. Crushed granite gravel and cacti create the perfect balance of a simple yet eye-catching desert landscape. Granite gravel is similar to decomposed granite but a bit fuller in texture and size. This rock is the best choice for walkways, patios, and setting off xeric plants. For a more refined or contemporary look, add cacti. Cacti are commonly known as the most drought-tolerant plants. Their varying structural shapes leave room for growth and can improve your desert landscape. You can create a visual oasis with little to no water. Arranging boulders and compacted cacti in randomly spaced groups can be shown as landscape artwork. Create a background, give them a focal point, and play off the colors. Preparing For Desert Landscaping with Whitewater Rock & Co. Now that you've got a list of desert landscaping ideas, the next is purchasing the materials. Whitewater Rock & Co.is here to aid the process of your project. We'll help your vision come to life with a wide assortment of decorative rock and stone, along with decorative backyard accessories. Our team provides quality service needs and consulting for the best landscape options.
{ "redpajama_set_name": "RedPajamaC4" }
5,945
Pismo Święte Starego i Nowego Testamentu (The Holy Scripture of the Old and New Testament) is a series of Polish translations of the individual books of the Bible together with scholarly commentary and excursuses. It is known as the "KUL Commentaries" because most of its authors were associated with the Catholic University of Lublin (KUL). The translations were published by Pallottinum. The volumes with translations of the books of the New Testament appeared between 1959 and 1979. The volumes with translations of the books of the Old Testament appeared from 1962 to 2020. In 2020 the final volume, the Book of Sirach, was published with a commentary by Hugolin Langkammer.

The originators of the Old Testament commentary series were Stanisław Łach and Stanisław Styś (after Styś's death, Lech Stachowiak joined the editorial team). After the deaths of Stanisław Łach (1983) and Lech Stachowiak (1997), Ryszard Rubinkiewicz (d. 2011) was responsible for the series, and in its final years, until its successful completion, Mirosław Wróbel.

According to the preface to the first volume, the commentary was to be based on solid scholarly foundations, yet addressed not only to biblical scholars and theologians but also to the clergy and the Catholic intelligentsia at large. The editors admitted that they had difficulty gathering a sufficient number of collaborators, since there were fewer specialists in the Old Testament. After completion, the text of the translations was to be published together with a shorter commentary, but this never happened.

Each volume contains a detailed commentary, sometimes many times larger than the text itself. The scholarly apparatus is complemented by a bibliography, indexes and maps. Selected topics were additionally treated in the form of excursuses.

The series was received positively by Cardinal Stefan Wyszyński, who in his dedication to the first volume wrote of the need felt by society as well as the high level of preparation of the biblical scholars for this work. In a survey of Polish biblical scholars conducted in 1999 by the Catholic Information Agency (KAI), the translation won in two categories, fidelity and care, and took second place in the overall ranking, second only to the Biblia poznańska (Poznań Bible).

The tradition of the KUL Commentaries is continued by the Biblia lubelska (Lublin Bible). Some of the volumes published within the Biblia Lubelska were written by the same authors, and the text of the translations is similar.

List of individual volumes

Pismo Święte Nowego Testamentu
Scholarly editors of the translation: Feliks Gryglewicz, Eugeniusz Dąbrowski. The series is also known as: Pismo Święte Nowego Testamentu w 12 tomach (The Holy Scripture of the New Testament in 12 volumes).

Pismo Święte Starego Testamentu
Scholarly editors of the translation: Lech Stachowiak, Stanisław Łach.

Notes
References
Catholic translations of the Bible
Polish translations of the Bible
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,914
Q: Laptop power usage explanation I have an ASUS ROG STRIX G15. I've killed most of the ASUS software, and am using Throttlestop to control CPU usage. * *I've undervolted by 75mv *I've set the display to 60Hz (rather than the 144Hz it is capable of) *I've set brightness to minimum *I've removed all peripherals. *I have 2 M.2 SSD and no spinning drives Throttlestop reports the CPU package as using 2watts of power, and the GPU is inactive. Yet Throttlestop still reports the total power usage as around 35watts, which means my battery only lasts just over an hour. Setting brightness to max increases power usage by 10watts. The screenshot below shows it bursting to 55watts of usage when I changed to the Performance profile. Battery profile reports 35-40watts of usage. How can I find what is using the power?
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,656
Dr Cain Zhang Chancellor's Postdoctoral Research Fellow, School of Electrical and Data Engineering Core Member, Global Big Data Technologies Ting.Zhang@uts.edu.au Dr Ting (Cain) Zhang received a Bachelor Degree in electronics engineering and Information Science from University of Science and Technology of China (UTSC), Hefei, China, in 2007, and a Ph.D. Degree in Microelectronics and Solid-state Electronics from Unversity of Chinese Academy of Science in 2013. From 2013 to 2016, he was a Postdoctoral Research Fellow in Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia. From 2016 to 2017, he was a Postdoctoral Research Fellow with University of Technology Sydney (UTS). From 2017, he has been a Chancellor's Postdoctoral Research Fellow with UTS. His research interests are in the areas of wireless components, devices, circuits and systems for wireless communications, from microwave to THz frequencies. Gao, X, Du, J, Zhang, T & Guo, YJ 2019, 'High-T c Superconducting Fourth-Harmonic Mixer Using a Dual-Band Terahertz On-Chip Antenna of High Coupling Efficiency', IEEE Transactions on Terahertz Science and Technology, vol. 9, no. 1, pp. 55-62.View/Download from: UTS OPUS or Publisher's site © 2019 IEEE. This paper presents a dual-band on-chip antenna-coupled high-T c superconducting (HTS) Josephson-junction subterahertz (THz) fourth-harmonic mixer. The antenna utilizes a couple of different structured twin slots to enable the resonant radiations at two frequencies, and integrates a well-designed coplanar waveguide network for achieving good radiation coupling and signal isolation characteristics. The electromagnetic simulations show that coupling efficiencies as high as -4 and -3.5 dB are achieved for the 160- and 640-GHz operating frequency bands, respectively. Based on this dual-band antenna, a 640-GHz HTS fourth-harmonic mixer is developed and characterized in a range of operating temperatures. The mixer exhibits a measured conversion gain of around -18 dB at 20 K and -22 dB at 40 K, respectively. The achieved intermediate frequency bandwidth is larger than 23 GHz. These are the best results reported for HTS harmonic mixers at comparable sub-THz frequency bands to date. Gao, X, Zhang, T, Du, J & Guo, YJ 2018, 'Design, modelling and simulation of a monolithic high-T c superconducting terahertz mixer', Superconductor Science and Technology, vol. 31, no. 11.View/Download from: UTS OPUS or Publisher's site © 2018 IOP Publishing Ltd. This paper presents a novel concept and design of a full monolithic integrated high-T c superconducting (HTS) Josephson junction terahertz (THz) harmonic mixer coupled with a circularly polarized (CP) antenna. The fully on-chip mixer device is very compact in size and utilizes the CP antenna to enhance the polarization orientation flexibility in coupling THz radiation. Electromagnetic simulations are carried out to optimize the coupling efficiency and axial ratio of the THz CP antenna, and the signal transmission and isolation characteristics of the monolithic circuit. An equivalent circuit model of the HTS THz mixer is then established and simulation is performed based on our previously measured step-edge Josephson junction characteristics to evaluate the device performance and validate the concept of design. The results show that a superior performance could be achieved from such a monolithic HTS mixer device, which is significantly better than any HTS THz harmonic mixers reported to date. 
Zhang, T, Gao, X, Du, J & Guo, YJ 2018, 'Full em Design Method for HTS MMIC Josephson Mixers', IEEE Transactions on Applied Superconductivity, vol. 28, no. 4.View/Download from: UTS OPUS or Publisher's site © 2002-2011 IEEE. We report the full electromagnetic (EM) design and simulation method, and applied it to develop a 34-GHz high-temperature superconducting (HTS) microwave monolithic integrated circuit Josephson mixer. The mixer is modeled in EM simulation software, high-frequency simulation structural simulator, with the junction area modeled as an excitation port with frequency-dependent impedance. Impedance matching between the junction and RF/IF ports is then optimized accordingly. Module design is carried out for the optimized HTS Josephson mixer, and the cavity resonance issue is investigated and eliminated. The HTS mixer module was experimentally developed and measured to verify the simulation. The measured frequency response of the conversion gain agrees with the simulation results of combined RF and IF transmission loss. Du, J, Pegrum, CM, Gao, X, Weily, AR, Zhang, T, Guo, YJ & Foley, CP 2017, 'Harmonic Mixing Using a HTS Step-Edge Josephson Junction at 0.6 THz Frequency', IEEE Transactions on Applied Superconductivity, vol. 27, no. 4, pp. 1-5.View/Download from: UTS OPUS or Publisher's site © 2002-2011 IEEE. A high-temperature superconducting (HTS) terahertz (THz) heterodyne mixer based on a thin-film antenna-coupled YBa 2 Cu 3 O 7- x step-edge Josephson junction is presented. The frequency down-conversion from 0.6 THz to an intermediate frequency (IF) of 2 GHz was achieved using high-order harmonic mixing of a local oscillator (LO), thus removing the need to use a second THz source as the LO pumping source. The DC and RF characteristics of the harmonic mixer as well as the relationship of the IF output power versus the harmonic number were experimentally studied and compared with simulated results. Most of our measurements were made at 40 K, but we also observed stable harmonic mixing at 77 K which we believe has not been reported previously in HTS junction mixers. Du, J, Weily, AR, Gao, X, Zhang, T, Foley, CP & Guo, YJ 2017, 'HTS step-edge Josephson junction terahertz harmonic mixer', Superconductor Science and Technology, vol. 30, no. 2.View/Download from: UTS OPUS or Publisher's site © 2016 Federal Australian Crown copyright. A high-temperature superconducting (HTS) terahertz (THz) frequency down-converter or mixer based on a thin-film ring-slot antenna coupled YBa 2 Cu 3 O 7-x (YBCO)/MgO step-edge Josephson junction is reported. The frequency down-conversion was achieved using higher order harmonics of an applied lower frequency (19-40 GHz) local oscillator signal in the Josephson junction mixing with a THz signal of over 600 GHz, producing a 1-3 GHz intermediate frequency signal. Up to 31st order of harmonic mixing was obtained and the mixer operated stably at temperatures up to 77 K. The design details of the antenna, HTS Josephson junction mixer, the matching and isolation circuits, and the DC and RF performance evaluation are described in this paper. Gao, X, Du, J, Zhang, T & Guo, YJ 2017, 'Noise and conversion performance of a high- Tcsuperconducting Josephson junction mixer at 0.6 THz', Applied Physics Letters, vol. 111, no. 19, pp. 1-5.View/Download from: UTS OPUS or Publisher's site © 2017 AU-Crown. 
This letter presents both theoretical and experimental investigations on the noise and conversion performance of a high-T c superconducting (HTS) step-edge Josephson-junction mixer at the frequency of 0.6 THz and operating temperatures of 20-40 K. Based on the Y-factor and U-factor methods, a double-sideband noise temperature of around 1000 K and a conversion gain of -3.5 dB were experimentally obtained at 20 K. At the temperature of 40 K, the measured mixer noise and conversion efficiency are around 2100 K and -10 dB, respectively. The experimental data are in good agreement with the numerical analysis results using the three-port model. A detailed performance comparison with other reported HTS terahertz mixers has confirmed the superior performance of our presented mixer device. Gao, X, Du, J, Zhang, T, Jay Guo, Y & Foley, CP 2017, 'Experimental Investigation of a Broadband High-Temperature Superconducting Terahertz Mixer Operating at Temperatures Between 40 and 77 K', Journal of Infrared, Millimeter and Terahertz Waves, vol. 38, no. 11, pp. 1357-1367.View/Download from: UTS OPUS or Publisher's site © 2017, Springer Science+Business Media, LLC. This paper presents a systematic investigation of a broadband thin-film antenna-coupled high-temperature superconducting (HTS) terahertz (THz) harmonic mixer at relatively high operating temperature from 40 to 77 K. The mixer device chip was fabricated using the CSIRO established step-edge YBa 2 Cu 3 O 7-x (YBCO) Josephson junction technology, packaged in a well-designed module and cooled in a temperature adjustable cryocooler. Detailed experimental characterizations were carried out for the broadband HTS mixer at both the 200 and 600 GHz bands in harmonic mixing mode. The DC current-voltage characteristics (IVCs), bias current condition, local oscillator (LO) power requirement, frequency response, as well as conversion efficiency under different bath temperatures were thoroughly investigated for demonstrating the frequency down-conversion performance. Gao, X, Zhang, T, Du, J, Weily, AR, Guo, YJ & Foley, CP 2017, 'A wideband terahertz high-T c superconducting Josephson-junction mixer: electromagnetic design, analysis and characterization', Superconductor Science and Technology, vol. 30, no. 9, pp. 1-9.View/Download from: UTS OPUS or Publisher's site © 2017 IOP Publishing Ltd. This paper presents a wideband terahertz (THz) mixer based on a thin-film antenna-coupled high-temperature superconducting (HTS) YBa 2 Cu 3 O 7-x (YBCO) step-edge Josephson junction. The HTS mixer enables the flexible harmonic mixing operation at multiple THz bands with the same microwave local oscillator (LO) source, and features very wide intermediate-frequency or instantaneous bandwidth. In order to optimize the frequency down-conversion performance of the mixer, systematic electromagnetic design and analysis have been carried out to improve the power coupling of THz radiation as well as wideband transmission of microwave signals. Experimental characterization of a fabricated device prototype has demonstrated that the mixer exhibits good performance at both the 200 GHz and 600 GHz bands. Detailed measurement results including the DC characteristics, LO pumping requirement, frequency response, mixing linearity and conversion gain are presented in this paper. Zhang, T, Gao, X, Wang, W, Du, J, Pegrum, C & Guo, YJ 2017, 'A 36 GHz HTS MMIC Josephson mixer - Simulation and measurement', IEEE Transactions on Applied Superconductivity, vol. 27, no. 
4.View/Download from: UTS OPUS or Publisher's site © 2002-2011 IEEE. Modeling, simulation, and measurement of a compact 36 GHz high-temperature superconducting (HTS) monolithic Josephson junction mixer are presented in this paper. A full HTS microwave monolithic integrated circuit (MMIC) simulation was carried out for the circuit combining HTS passive devices and the Josephson junction. Optimal impedance matching and bias conditions were investigated, and the circuit layout was designed accordingly. The HTS circuit has a compact dimension of 5 × 4 × 0.3 mm 3 , including filters, resonators, and impedance matching circuits. The HTS MMIC mixer was fabricated and packaged with an LNA to realize a receiver front end with a total dimension of 28 × 25 × 15 mm 3 . Measurement result showed an overall conversion gain around 35 dB, with local oscillator driving power around -45 dBm at operating temperature of 40 K. Zhang, T, Pegrum, C, Du, J & Guo, YJ 2017, 'Simulation and measurement of a Ka-band HTS MMIC Josephson junction mixer', Superconductor Science and Technology, vol. 30, no. 1, pp. 1-8.View/Download from: UTS OPUS or Publisher's site © 2016 IOP Publishing Ltd. We report modeling and simulation results for a Ka band high-temperature superconducting (HTS) monolithic microwave integrated circuit (MMIC) Josephson junction mixer. A Verilog-A model of a Josephson junction is established and imported into the system simulator to realize a full HTS MMIC circuit simulation containing the HTS passive circuit models. Impedance matching optimization between the junction and passive devices is investigated. Junction DC I-V characteristics, current and local oscillator bias conditions and mixing performance are simulated and compared with the experimental results. Good agreement is obtained between the simulation and measurement results. Zhang, T, Cai, Z, Yang, Y, Bao, J & Wang, Y 2017, 'Compact tunable lowpass filter with sharp roll-off and low insertion loss', Microwave and Optical Technology Letters, vol. 59, no. 10, pp. 2619-2623.View/Download from: UTS OPUS or Publisher's site © 2017 Wiley Periodicals, Inc. A novel continuously tunable lowpass filter (LPF) with compact size, sharp roll-off and low insertion loss is presented in this paper. The filter employs two varactor diodes, a pair of open-ended coupled lines and a U-shape step impedance line (SIL) with an open-ended stub loaded at the center of the SIL to form a very compact layout. The odd- and even-mode analysis and equivalent circuit model are demonstrated for estimation of the transmission characteristics. Tuning the DC voltage applied on the varactor diodes, the varactor capacitance accordingly changes leading to a varying cutoff frequency f c . The measured results show that the achieved 3-dB f c tuning range is 60.6% (1.15–2.15 GHz). The measured insertion loss (IL) and roll-off rate are 0.2-0.4 dB and 50–73 dB/GHz, respectively. The overall size of the LPF is only 0.005λ g 2 , which shows a competitive advantage comparing with the state-of-the-art work. Pegrum, C, Zhang, T, Du, J & Guo, YJ 2016, 'Simulation of HTS Josephson Mixers', IEEE Transactions on Applied Superconductivity, vol. 26, no. 3.View/Download from: UTS OPUS or Publisher's site © 2016 IEEE. The Commonwealth Scientific and Industrial Research Organization has developed superconducting microwave monolithic integrated circuit (MMIC) mixers using step-edge Josephson junctions and on-chip filters, made from YBaCuO on MgO substrates. 
Integration into an MMIC results in a compact and efficiently coupled structure. These have been shown to have outstanding conversion efficiency, dynamic range, and linearity. We report here a range of simulations of this type of mixer. We have mainly used Josephson simulators and analyze the data in both the time and frequency domains. More recently, we have also used microwave simulators incorporating a novel Verilog-A Josephson junction model that we have developed. We have looked at the interactions of junction bias current, local oscillator power, and radio-frequency input power with conversion efficiency, dynamic range, and linearity. Good agreement is found overall with measurements. Du, J, Wang, J, Zhang, T, Bai, D, Guo, YJ & He, Y 2015, 'Demonstration of a portable HTS MMIC microwave receiver front-end', IEEE Transactions on Applied Superconductivity, vol. 25, no. 3.View/Download from: UTS OPUS or Publisher's site © 2014 IEEE. We report the first demonstration of a portable HTS monolithic microwave integrated circuit (MMIC) receiver front-end module operating on a commercial mini cryocooler. The HTS circuit consists of a step-edge junction mixer and a number of HTS filters fabricated on a single MgO substrate. The HTS MMIC circuit is integrated with the mini cryocooler. The sample vacuum chamber, cold-head, compressor and cooling fans are all packed into one customer-designed portable box of approximately 350 mm × 350 mm × 250 mm in dimension. The HTS Josephson junction-based microwave circuit operated successfully in the cryocooler unshielded without observable performance degradation. The design and implementation of the compact unit and performance evaluation of a HTS MMIC frequency down-converter are presented. Zhang, T, Du, J, Wang, J, Bai, D, Guo, YJ & He, Y 2015, '30 GHz HTS receiver front-end based on monolithic Josephson mixer', IEEE Transactions on Applied Superconductivity, vol. 25, no. 3.View/Download from: Publisher's site © 2014 IEEE. A compact, high-gain, and low-noise Ka band HTS Josephson junction-based receiver front-end module for wireless communication is presented in this paper. The front-end module consists of biasing circuits, a semiconductor low-noise amplifier, and a monolithic HTS circuit consisting of an Josephson mixer, bandpass and lowpass filters. The semiconductor LNA in the first stage is applied to achieve a low noise figure of the whole front-end module. Integration of the Josephson mixer with a number of HTS passive components on a single chip improves the coupling efficiency and reduces the connection losses between the components. The total dimension of the packaged front-end module is below 25 mm × 20 mm × 15 mm, which is very compact. Measurement result shows that the front-end module has an overall conversion gain around 40 dB, and a low noise figure close to 0 dB with an LO driving power around -38 dBm at 40 K. Du, J, Bai, DD, Zhang, T, Jay Guo, Y, He, YS & Pegrum, CM 2014, 'Optimised conversion efficiency of a HTS MMIC Josephson down-converter', Superconductor Science and Technology, vol. 27, no. 10.View/Download from: UTS OPUS or Publisher's site © 2014 IOP Publishing Ltd. A high-Tc superconducting (HTS) monolithic microwave integrated circuit (MMIC) Josephson down-converter that approaches zero conversion loss is reported. 
The all-HTS YBa2Cu3O7-x thin-film circuit consists of a step-edge Josephson junction mixer, a 10-12 GHz bandpass filter for the RF input, a lowpass filter for the IF output and a resonant strip line for local oscillator isolation; all are integrated on a single 10 mm × 20 mm MgO substrate. The DC characteristics of the junction and its mixing properties have been experimentally studied and compared to the results of (a) a single Josephson mixer without the on-chip HTS filters, and (b) our previously reported MMIC down-converter which had very different junction characteristics. The Josephson junction parameters are analysed to give insight into their effect on the mixer performance. Du, J, Zhang, T, Guo, YJ & Sun, XW 2013, 'A high-temperature superconducting monolithic microwave integrated Josephson down-converter with high conversion efficiency', Applied Physics Letters, vol. 102, no. 21.View/Download from: UTS OPUS or Publisher's site A compact high-T c superconducting monolithic microwave integrated circuit Josephson down-converter is presented. The circuit consists of a single Josephson junction mixer, a bandpass filter, a lowpass filter, and a resonator for local oscillator fabricated on a single 10 mm × 20 mm chip of YBa 2 Cu 3 O 7-x film on MgO substrate. The down-converter demonstrates superior performance in terms of conversion efficiency, dynamic range, linearity, and low local oscillator power with stable operation from 20 to 77 K. A maximum conversion gain of -4.7 dB was measured at 20 K and -12.8 dB at 70 K. © 2013 Crown. Zhang, T, Du, J, Guo, YJ & Sun, X 2013, 'A 7-8.5 GHz high performance MMIC HTS josephson mixer', IEEE Microwave and Wireless Components Letters, vol. 23, no. 8, pp. 427-429.View/Download from: UTS OPUS A low-loss, low power consumption monolithic high-temperature superconducting (HTS) Josephson junction mixer at 7-8.5 GHz is presented. The mixer consists of a HTS YBa 2Cu 3O 7-x (YBCO) bandpass filter for RF input, a lowpass filter for IF output and a LO resonator integrated with a single Josephson junction. All the passive and active devices are fabricated on a 20 mmtimes 10 mm MgO substrate. Measurement result shows a conversion gain of -7 dB at 40 K, and -4.7 dB at 20 K. The IF output versus the RF input exhibits a wide linear range of conversion gains. The mixer has an extremely low LO power requirement at -32 dBm and a 50 nW power consumption. © 2001-2012 IEEE. Zhang, T, Du, J, Guo, YJ & Sun, X 2013, 'A compact HTS bandpass microstrip filter with novel coupling structure for on-chip integration', Physica C: Superconductivity and its Applications, vol. 495, pp. 69-73.View/Download from: UTS OPUS or Publisher's site A compact low-complexity high-selectivity high-temperature superconducting (HTS) microstrip bandpass filter is presented in this paper, which consists of only three half-wavelength resonators. A novel coupling scheme is used to provide a pair of transmission zeros outside the passband, so that the selectivity of the filter is improved. The filter is fabricated on an MgO substrate with YBa 2 Cu 3 O 7-x (YBCO) coating. Measurement result shows an in-band insertion loss at 0.5 dB, a sharp slope, and a stopband rejection better than 20 dB. The compactness and high-selectivity features make the filter suitable for on-chip integration of HTS receiver front-ends. © 2013 Published by Elsevier B.V. All rights reserved. 
Du, J, MacFarlane, JC, Pegrum, CM, Zhang, T, Cai, Y & Guo, YJ 2012, 'A self-pumped high-temperature superconducting Josephson mixer: Modelling and measurement', Journal of Applied Physics, vol. 111, no. 5.View/Download from: UTS OPUS or Publisher's site We have recently developed a high-temperature superconducting (HTS) Josephson self-pumped mixer with an on-chip heterodyne local oscillator. The device is based on HTS step-edge junction technology and a resistive- superconducting quantum interference device (RSQUID) configuration. The heterodyne local oscillator and mixer output are frequency-tunable from below 10 MHz to 5 GHz by a control current. The performance of the autonomous Josephson mixer-local oscillator has been experimentally evaluated in terms of the current-voltage characteristics, intermediate frequency (IF)-tunable bandwidth, operation range, linearity, bias current, and temperature dependence of the IF output (or mixer conversion efficiency). We find the results are in good overall agreement with numerical simulation. © 2012 American Institute of Physics. Du, J, MacFarlane, JC, Zhang, T, Cai, Y & Guo, YJ 2012, 'Self-pumped HTS Josephson heterodyne tunable mixer', Superconductor Science and Technology, vol. 25, no. 2.View/Download from: UTS OPUS or Publisher's site Experimental evaluation of a high-temperature superconducting Josephson heterodyne mixer based on a resistive-SQUID configuration is reported. The device consists of two YBa 2 Cu 3 O 7-x step-edge Josephson junctions connected via a small resistor in an otherwise superconducting loop. It has been previously shown to generate a heterodyne oscillation, which is frequency-tunable by a control current through the resistor. Under certain conditions, this device can operate as a frequency-tunable heterodyne mixer (down-converter) in the presence of an RF signal. In this paper, we describe the operation of the autonomous Josephson mixerlocal oscillator device and present the experimental results on the mixer performances in terms of the junction currentvoltage characteristics, the frequency tunability, linearity, and dynamic range as well as their temperature dependence for signal frequencies from 1 to 5GHz. © 2012 IOP Publishing Ltd. Du, J, Zhang, T, MacFarlane, JC, Guo, YJ & Sun, XW 2012, 'Monolithic high-temperature superconducting heterodyne Josephson frequency down-converter', Applied Physics Letters, vol. 100, no. 26.View/Download from: UTS OPUS or Publisher's site A monolithic microwave integrated circuit (MMIC) frequency down-converter based on a compact high-T c superconducting (HTS) device is demonstrated. The on-chip integrated HTS down-converter consists of a 7-9 GHz bandpass filter for RF input, a lowpass filter for intermediate frequency output, and a self-pumped Josephson heterodyne mixer. All the above passive and active components are fabricated on a single 10 mm × 20 mm chip of YBa 2 Cu 3 O 7-x film on MgO substrate. Characterization of this MMIC HTS down-converter in terms of frequency response, conversion gain, frequency-tuneability, bias dependence, dynamic range, linearity, and intrinsic noise are presented in this paper. © 2012 Crown. Yang, Z, Liu, J, Liang, X-Q, Jiang, Y, Zhang, T, Han, B, Sun, F-X & Liu, L 2012, 'Two novel 2D metal-organic frameworks based on biphenyl-2,2 ',6,6 '-tetracarboxylic acid: Synthesis, structures and luminescent properties', INORGANIC CHEMISTRY COMMUNICATIONS, vol. 16, pp. 
92-94.View/Download from: UTS OPUS or Publisher's site Zhang, T, Du, J, Guo, YJ & Sun, X 2012, 'Design and integration of HTS filters with a Josephson device', Superconductor Science and Technology, vol. 25, no. 10.View/Download from: UTS OPUS or Publisher's site A high-temperature superconducting (HTS) Josephson frequency down-converting module is demonstrated. An HTS monolithic frequency down-converting circuit and a biasing circuit board for the Josephson device are packaged into the module. The monolithic circuit consists of HTS filters and a Josephson oscillator-mixer device, integrated on a single 10mm×20mm chip of Y Ba 2 Cu 3 O 7x (YBCO) film on MgO substrate. A compact, low-loss HTS step-impedance low-pass filter was designed for the intermediate frequency (IF) output port. The modeling, simulation and measurement results of the HTS low-pass filter are presented in this paper. The frequency response and dynamic range of the on-chip integrated HTS down-converting module are also described. © 2012 IOP Publishing Ltd. Zhang, T, Du, J, Guo, YJ & Sun, XW 2012, 'On-chip integration of HTS bandpass and lowpass filters with Josephson mixer', Electronics Letters, vol. 48, no. 12, pp. 729-731.View/Download from: UTS OPUS or Publisher's site A compact high-T c superconducting (HTS) monolithic downconverter is presented. The HTS passive and active devices are integrated on one single chip to achieve compactness and high coupling efficiency. The downconverter consists of a HTS YBa 2 Cu 3 O 7-x (YBCO) bandpass filter for RF input, a lowpass filter for IF output, and a self-pumped step-edge Josephson heterodyne mixer fabricated on a 20 ×10mm MgO substrate. Characterisations of the HTS filters and the frequency response of the on-chip integrated downconverter are reported. The results demonstrate the potential of the HTS downconverter for applications in wireless communications. © 2012 The Institution of Engineering and Technology. Gao, X, Du, J, Zhang, T & Guo, YJ 2018, '0.34-THz High-Temperature Superconducting Josephson-Junction Mixer with Superior Noise and Conversion Performance', International Conference on Infrared, Millimeter, and Terahertz Waves, IRMMW-THz, International Conference on Infrared, Millimeter, and Terahertz Waves, IEEE, Nagoya, Japan.View/Download from: UTS OPUS or Publisher's site © 2018 IEEE. We present, in this work, a new thin-film antenna-coupled high-temperature superconducting (HTS) Josephson-junction terahertz (THz) mixer that demonstrates superior performance at frequencies around 0.34 THz. A novel dual-meander-slot thin-film antenna is designed to significantly improve the antenna-junction impedance matching and thus more efficient coupling of the THz signal power. Theoretical and experimental investigations are carried out to evaluate the mixer performance. This mixer can be applied to the sensitive THz wireless receivers. 
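Several of the entries above rely on time-domain Josephson simulations of such mixers. As a rough illustration only, and not the specific simulators or Verilog-A junction model referred to in the first entry of this list, a single junction driven by a DC bias, an LO tone, and a weak RF tone can be integrated with the standard resistively-shunted-junction (RSJ) equation in normalized units; all parameter values below are arbitrary example choices.

import numpy as np

# RSJ model in normalized units:
#   d(phi)/d(tau) = i_dc + i_lo*sin(w_lo*tau) + i_rf*sin(w_rf*tau) - sin(phi)
# Currents are in units of Ic, time in units of hbar/(2*e*Ic*R),
# and the junction voltage is v = d(phi)/d(tau) in units of Ic*R.
i_dc, i_lo, i_rf = 1.1, 0.3, 0.05        # example bias, LO and RF drive levels
w_lo, w_rf = 0.50, 0.55                  # normalized LO and RF angular frequencies
dt, n_steps = 0.01, 200_000

phi = 0.0
v = np.empty(n_steps)
for n in range(n_steps):
    tau = n * dt
    drive = i_dc + i_lo * np.sin(w_lo * tau) + i_rf * np.sin(w_rf * tau)
    dphi = drive - np.sin(phi)           # normalized junction voltage at this step
    phi += dphi * dt                     # simple forward Euler integration
    v[n] = dphi

# Frequency-domain view: the down-converted IF product appears at |w_rf - w_lo| = 0.05
spectrum = np.abs(np.fft.rfft(v * np.hanning(n_steps)))
freqs = np.fft.rfftfreq(n_steps, d=dt) * 2 * np.pi
if_bin = np.argmin(np.abs(freqs - abs(w_rf - w_lo)))
print("IF component amplitude:", spectrum[if_bin])

Sweeping the bias current and LO amplitude in such a model and reading off the IF component qualitatively reproduces the dependence of conversion efficiency on bias and LO power discussed in the abstracts above.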
Gao, X, Zhang, T, Du, J & Guo, YJ 2018, '300-GHz Dual-Beam Frequency-Selective On-Chip Antenna for High-T-c Superconducting Receivers', 2018 INTERNATIONAL SYMPOSIUM ON ANTENNAS AND PROPAGATION (ISAP), International Symposium on Antennas and Propagation (ISAP), IEEE, Busan, SOUTH KOREA.View/Download from: UTS OPUS Gao, X, Du, J, Zhang, T & Guo, YJ 2017, 'Design of a Monolithic-Integrated Circularly-Polarized Antenna-Coupled High-T-c Superconducting Terahertz Harmonic Mixer', PROCEEDINGS OF THE 2017 IEEE-APS TOPICAL CONFERENCE ON ANTENNAS AND PROPAGATION IN WIRELESS COMMUNICATIONS (APWC), IEEE-APS Topical Conference on Antennas and Propagation in Wireless Communications (APWC), IEEE, Verona, ITALY, pp. 324-325.View/Download from: UTS OPUS Gao, X, Du, J, Weily, AR, Zhang, T, Foley, CP & Guo, YJ 2016, 'Broadband Antenna-Coupled High-Temperature Superconducting Josephson-Junction Mixer for Terahertz Communication Applications', Infrared, Millimeter, and Terahertz waves (IRMMW-THz), 2016 41st International Conference on, International Conference on Infrared, Millimeter, and Terahertz waves, IEEE, Copenhagen.View/Download from: UTS OPUS or Publisher's site This paper presents a broadband terahertz (THz) high-temperature superconducting (HTS) mixer based on a log-periodic antenna-coupled YBa2Cu3O7-x (YBCO) step-edge Josephson junction. The THz thin-film antenna, as well as the microwave coupling circuits, have been carefully designed to optimize the power transmission from and into the junction while realizing good isolation between the DC bias, local-oscillator (LO) and intermediate-frequency (IF) ports. This mixer device has been fabricated, packaged and characterized to demonstrate a frequency down-conversion capability with a view to potential application in THz wireless communication systems. Ji, LY, Fu, G, Gong, SX, Zhang, T, Guo, YJ, Qin, P-Y & Ding, C 2015, 'Pattern Reconfigurable Fabry-Perot Cavity Antenna', Proceedings of the 2015 International Symposium on Antennas and Propagation (ISAP), IEEE International Symposium on Antennas and Propagation, IEEE, Hobart, AUSTRALIA.View/Download from: UTS OPUS A newly designed pattern reconfigurable Fabry-Perot cavity antenna is presented in this paper. The reconfigurability is achieved by employing a phased array with a reconfigurable feed network as the source of the FPC antenna. The design can switch its main beam direction between ™10° and 10° with respect to the broadside direction from 5.36 GHz to 5.76 GHz. The realized gain of the proposed antenna is over 11.6 dBi. Good agreement between the simulated and measured results is achieved. Du, J, Zhang, T & Guo, YJ 2014, 'Novel high-Tc superconducting devices for wireless communications and imaging', 2014 International Workshop on Antenna Technology: Small Antennas, Novel EM Structures and Materials, and Applications, iWAT 2014, International Workshop on Antenna Technology, IEEE, Sydney, NSW, Australia, pp. 122-125.View/Download from: UTS OPUS or Publisher's site © 2014 IEEE. High-T c superconducting (HTS) materials have ultralow surface resistance values at microwave frequencies, which have been applied to make high-Q resonators and 'super filters' with narrow-bandwidth, low insertion losses, and superior out-of-band rejections. The second important property of HTS materials is related to low-noise Josephson junctions made from the HTS thin films. In recent years, novel nonlinear high-frequency devices, most of them exploiting the unique features of the AC Josephson effect, have been developed. 
Applications of the HTS devices based on Josephson junctions have been extended from lower electromagnetic bands (microwave) into mm-wave and terahertz, regions. An overview of CSIRO's recent research activities and achievements in developing novel HTS devices for applications to wireless communication and imaging is presented in this paper. Zhang, T, Cai, Y, Du, J, Guo, YJ & Sun, XW 2011, 'A compact high-Tc superconducting quarter-wavelength SIR bandpass filter', 2011 International Conference on Applied Superconductivity and Electromagnetic Devices, ASEMD 2011, IEEE International Conference on Applied Superconductivity and Electromagnetic Devices, IEEE, Sydney, NSW, Australia, pp. 123-126.View/Download from: UTS OPUS or Publisher's site A compact high-temperature superconducting bandpass filter for wireless communications is proposed in this paper. The filter consists of four quarter-wavelength short-circuited stepped impedance resonators (SIR). A pair of transmission zeros outside the passband are introduced into the design by evaluating coupling coefficients between the four resonators for high selectivity within the frequency band of interest. The use of short-circuited quarter-wavelength SIRs has resulted in a higher spurious passband than conventional half-wavelength coupled-line filters. Based on the theoretical calculations, the filter has a compact size and exhibits a well suppressed stopband. The filter is fabricated on an MgO substrate with YBCO superconductor coating. Simulation and measurement results are presented. © 2011 IEEE. Ultra-sensitive Wideband Mm-wave HTS Receiver for Future Spectrum Sensing
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,435
Friday, June 7, 2019, 5 to 11 p.m. Saturday, June 8, 2019, 11 a.m. to 11 p.m. Sunday, June 9, 2019, 11 a.m. to 7 p.m. The Texas Folklife Festival is an immersive cultural experience, surrounding visitors with some 250 participating organizations representing more than 40 cultural groups; 6 stages of music and entertainment; a menu of some 100 authentic food items; and the skills of some 60 artisans. It's an opportunity to try Korean barbecue, listen to a bluegrass band, learn to hula with Hawaiian dancers, pick up an ax and split shingles with the pioneers, and experience for yourself just what it means to be a Texan.
{ "redpajama_set_name": "RedPajamaC4" }
799
package volume

// ----------------------------------------------------------------------------
// DO NOT EDIT THIS FILE
// This file was generated by `swagger generate operation`
//
// See hack/swagger-gen.sh
// ----------------------------------------------------------------------------

// VolumesCreateBody volumes create body
// swagger:model VolumesCreateBody
type VolumesCreateBody struct {

	// Name of the volume driver to use.
	// Required: true
	Driver string `json:"Driver"`

	// A mapping of driver options and values. These options are passed directly to the driver and are driver specific.
	// Required: true
	DriverOpts map[string]string `json:"DriverOpts"`

	// A mapping of arbitrary key/value data to set on the volume.
	// Required: true
	Labels map[string]string `json:"Labels"`

	// The new volume's name. If not specified, Docker generates a name.
	// Required: true
	Name string `json:"Name"`
}
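For reference, a minimal Python sketch of the JSON body this struct describes when serialized; the field names follow the struct's JSON tags above, while the volume name, driver, options, and labels shown are arbitrary examples rather than defaults.

import json

# Payload matching VolumesCreateBody; keys come from the struct's JSON tags.
payload = {
    "Name": "app-data",                                   # example volume name
    "Driver": "local",                                    # example driver
    "DriverOpts": {"type": "tmpfs", "device": "tmpfs"},   # example driver-specific options
    "Labels": {"com.example.purpose": "demo"},            # arbitrary metadata
}

print(json.dumps(payload, indent=2))  # JSON body for a volume-create request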
{ "redpajama_set_name": "RedPajamaGithub" }
360
Q: Children of an HTML element get a different background when a line break occurs

Why isn't the background color of all the children the same as the parent's when a line break separates the two? For example, with a font size of 20 it works perfectly:

<div style="background-color: rgba(0,0,0,0.5);font-size: 20px;"><span style="text-align:left; color: blue;">Roberto#3<span style="float:right;">18:21</span></span></div>

but with a font size of 130 it does not:

<div style="background-color: rgba(0,0,0,0.5);font-size: 130px;"><span style="text-align:left; color: blue;">Roberto#3<span style="float:right;">18:21</span></span></div>

How can this be solved simply?

A: The content is wider than the element containing it, and because the inner span is floated it "overflows" outside of its container. Adding overflow: auto; to the containing element fixes the problem:

<div style="background-color: rgba(0,0,0,0.5);font-size: 130px; overflow: auto;"> <span style="text-align:left; color: blue;">Roberto#3<span style="float:right;">18:21</span></span></div>
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,898
The conveyancing process need not be tiring. Thousands of home buyers and sellers every month choose Fridaysmove on the basis of an unbeatable service and great value fees. Our conveyancing solicitors will help you to move house faster. For standard sale or purchase conveyancing, your fixed fee quote for conveyancing in Thamesmead confirms all applicable solicitor's fees. In addition, No Sale, No Fee protection is included with all quotes, at no additional cost. You can get a conveyancing quote to verify what your fees will be, or contact a member of our conveyancing team on 0330 660 0286 if you have any questions or would like to get things underway. It will pay to always get a few solicitors quotes if looking for cheap Thamesmead solicitors. Also, always read the official terms of business for additional fees . We use the lowest cost CQS qualified home moving lawyers in Greenwich. Furthermore, in addition to cheap rates, levels of service are unparalleled and highly recommended. It is always really traumatic when a home move falls through at the last hour. Many issues can end up leading to the conveyancing not exchanging, e.g. the chain collapsing or an adverse survey report. With our No Completion, No Fee assurance, home movers do not pay for any legal fees if your sale or purchase fails to proceed. Any Thamesmead solicitors deposit can be transferred to your alternative property or sale and you will just need to cover any third party disbursements incurred, such as searches. While it is true that there is wide variance in the quality of conveyancing between firms, service levels, speed or cost have little to do with whether a solicitor markets their services online. Although some home buyers and sellers like to visit their conveyancing solicitor in person, it is rarely required. A proactive conveyancer should help to move you faster than a lawyer you need to visit in person. Remortgage conveyancing solicitors who work with Fridaysmove work on behalf of owners who wish to arrange a remortgage for their house or flat, and charge very competitive solicitor's fees, and hidden charges. Our solicitors will be able to act on behalf of all the building societies and lenders nationally. The proactive attitude employed by our conveyancing lawyers should increase the chance that completions will occur faster, so you will take advantage of your new rate earlier. Legal charges are divided into a couple of discrete parts referred to as disbursements and fees. Conveyancing disbursements are essentially third party charges sustained by the Thamesmead conveyancer on your behalf such as lease enlargement indemnity insurance or stamp duty. Solicitors conveyancing fees are the sums that the customer pays to the Thamesmead conveyancer for the legal work. The best way to comparing Thamesmead conveyancing solicitors costs is to read the conveyancing solicitors contractual terms. Additional fees could include fees for exchanging contracts and completing the transaction within 5 days of each other. The shortage of existing housing stock is encouraging the construction of more new build property. Pros when looking at new build houses and flats include being chain-free, but cons may include time spent snagging, and problems that can arise as a result. In some cases, new build homes are marketed with free or discounted legal fees, while it may be the case that developer recovers the cost of this 'offer' elsewhere, it is vital that completely independent legal advice is sought from a conveyancing solicitor. 
Settling for the developer's preferred solicitor may create a conflict of interest.
{ "redpajama_set_name": "RedPajamaC4" }
3,299
Dinner last night: Spaghetti carbonara and a bottle of Chianti. My reaction: Bleh. Back when I enjoyed stuffing an entire pouch of Big League Chew in my mouth, I may have enjoyed these. Dinner last night: Pizza with copious garlic and crushed red pepper. My reaction: A little mouthwashy, for obvious reasons, but effective. Hers: Tepid, but the situation improved after I clicked on the Today show. Thanks, Matt. Dinner last night: Pakistani takeout. My reaction: Like a cinnamon Altoid, only creepier. Comes in miniature version of groovy tin. For those who like to brag that they never watch television (that five-hour-a-day Law & Order habit notwithstanding), a 42-inch flat screen spins around to reveal this beautiful stainless-steel bookcase. Fill it with your rare-book collection and no one ever has to know what a huge Sam Waterson fan you are. Built into the center of this circular sofa is an LCD ceiling projector. The outside ring is a series of benches where viewers recline and face the ceiling. We're still trying to figure out how everyone can see Rocky II right side up--difficult, perhaps, but worth the effort to have a giant red circle in your living room. Never mind the mirrors, the space over your bed can serve a much more honorable purpose: TV screen. With an LCD projector in the headboard that shoots an image upward, this low-to-the-ground bed lets you watch people far more attractive than yourself get busy on the ceiling. DON'T LOOK DOWN For anyone who's ever been unable to glance at his watch because it might suggest that he has something better to do than listen to a complete rundown of the company-wide sexual-harassment policy (again!), the Tissot Silen-T is a miracle. With a touch, its "silent vibration" helps you keep track of the time without hurting the HR director's feelings. $425; 800-284-7768. --L. I.
{ "redpajama_set_name": "RedPajamaC4" }
7,640
package de.reneruck.android.beaconscanner;

import android.os.AsyncTask;
import android.os.Handler.Callback;
import android.util.Log;

/**
 * Background task that writes a sequence of BLE characteristics one at a time,
 * waiting for the service to signal that the previous write has completed
 * before issuing the next one.
 */
public class WriteCharacteristicAsync extends AsyncTask<CharacteristicWriteData, Boolean, Void> {

	private static final String TAG = "WriteCharacteristicAsync";

	private HRPService service;
	private Callback callback;

	public WriteCharacteristicAsync(HRPService service, Callback callback) {
		this.service = service;
		this.callback = callback;
	}

	@Override
	protected Void doInBackground(CharacteristicWriteData... params) {
		int currentPos = 0;
		boolean isDone = false;
		while (!isDone) {
			// Busy-wait until the service reports that the previous write has finished.
			if (HRPService.sendDone) {
				HRPService.sendDone = false;
				boolean result = service.writeCharacteristic(params[currentPos]);
				if (!result) {
					Log.e(TAG, "Error while writing characteristic " + params[currentPos].getCharacteristic());
					// Release the flag so the loop can move on to the next write.
					HRPService.sendDone = true;
				}
				currentPos++;
				if (currentPos == params.length) {
					isDone = true;
				}
			}
		}
		return null;
	}

	@Override
	protected void onPostExecute(Void result) {
		Log.d(TAG, "Async Task DONE!");
		// Notify the caller that all writes have been attempted.
		this.callback.handleMessage(null);
		super.onPostExecute(result);
	}
}
{ "redpajama_set_name": "RedPajamaGithub" }
2,025
Fantastic investment opportunity in highly sought-after North Mayfair! This 4 bedroom, 2 full bath charming bungalow has so much potential! The main floor has 2 bedrooms, a full bath, a kitchen and a terrific attached sun room. The second level has an additional two bedrooms, full bath and even a second kitchen...WOW! It has been rented out for many years with the owner living on the lower level. Thus two families could live here if desired, further adding to the money-making potential! There is also a huge basement and a nice-sized two car garage. Can also be purchased as a tear-down since a very expensive new home could be built There are houses in the area listed right now for over $900,000 so this opportunity is an amazing for investors! Huge extended lot gives lots of options!! Ideal location near everything. Better hurry, this one is priced to sell very fast!!! Look no further! FRESHLY PAINTED Open, bright and sun-filled 2 bedroom condo. Features ample sized rooms, updated bath and kitchen with stainless steel appliances, freshly refinished stunning hardwood floors. The building has Storage and personal washer/dryer in the basement. WHY RENT WHEN YOU CAN OWN THIS CLEAN, CONVENIENT, AND EFFICIENT CONDO! 15 minute walk to CTA/blue line and 5 minute drive to freeway. very nice 2 bedroom 2 bath condo. Wee taken care of by owner. Great location closed to express way, airport, shopping and restaurants. Check out this 3 bed 2.5 bath Georgian on large lot. Updated throughout. This house offers 3 levels of living. Hardwood floors, each level has bathroom, finished basement, ss appliances, central heat/ac, side driveway, large outdoor pool, custom built deck and many other recent updates. Turn key and move right in. Close to shopping, dining, expressway and public trans. Construction complete! Contemporary new construction SFH in top rated Beaubien School! Bright & spacious combined living/dining w/gas fireplace. Chef's kitchen w/SS Bosch appl, shaker cabinetry, under cab lighting, island w/bar seating, quartz counters, & walk-in pantry. Adjoining family room w/oversized windows & exit to rear yard. Sunny master suite features two large closets & spa like bath w/frameless glass shower, designer tile, dual vanity, & heated flooring. Two addtl bedrooms, full guest bath w/tub, and laundry with utility sink on upper level. Finished lower level w/nice ceiling heights & great natural light, large rec room w/wet bar & wine fridge, 4th bedroom, full bath & 2nd laundry hook up. Warm hardwood floors, 8' doors, skylights, & wiring for sound. Enclosed backyard w/nice green space, stamped concrete patio, & 2 car garage. Great neighborhood! Easy access to I90, parks, and area conveniences. Less than 10 min walk to Blue line & Metra. Beautiful home by top developer! Charming English Tudor completely rehabbed to perfection. Feels like a brand new home with all new kitchen, baths, hardwood floors, doors, windows, trim, mechanicals. Nothing has been overlooked.Walk into an open living and dining room with bay windows. Enjoy amazing natural light due to additional windows being added. Kitchen has beautiful white high gloss cabinets. stainless appliances quartz countertops and room for a breakfast table. All baths with modern vanities, led mirrors, quartz counters and gorgeous fixtures. Glass staircase leads to upstairs bedrooms with walk-in closets, new carpet and spacious feel. Upstairs bath with double vanity, a window and linen closet space. Basement has family room, 4th bedroom and full bath. 
Utility area open for work space or storage, brand new washer and dryer included. Entertain and play in your back yard all summer long! 2 car garage is newer. Walk to the Gladstone Metra, or jump on the highway for an easy commute. Walk to park. HIGHEST AND BEST OFFERS DUE by 11PM on Saturday, 4/20. Super cute 4-bedroom, 2 bath Jefferson Park corner home with unique layout in great condition. Upper level master suite includes full bath & enclosed balcony. Formal dining room off living room. Refinished hardwood floors in living/dining and bedrooms on main level. Newer roof and separate heating/ac units. Attached 1.5-car garage with fenced driveway make for wonderful play area or room for many cars. Full unfinished basement for excellent storage. Good height attic makes for realistic expansion opportunity. 0.6 mile walk to Jefferson Park Metra and Blue Line CTA. Excellent selection of restaurants within 1 mile and a new Starbucks 1 block away. Short walk to Jefferson Park playlot and tennis courts and a Jewel at the end of the block. Easy access to 90-94. ELEGANT, NICELY UPGRADED ENGLISH , LOCATED ON A BEAUTIFUL BLOCK WITH SIMILAR ATTRACTIVE HOMES. SPECIOUS LIVING ROOM , NICE KITCHEN OPEN TO LIVING ROOM , SS APPLIANCES. 2 BEDROOMS AND FULL BATH ON FIRST FLOOR. LARGE BEDROOMS AND FULL BATH UPSTAIRS WITH NICE CLOSET SPACE. WOOD BURNING FIREPLACE IN MASTER BEDROOM. FULL FINISHED BASEMENT WITH RECREATION ROOM,EXTRA BEDROOM , SEPARATE LAUNDRY ROOM AND EXTRA BEDROOM. NEWER ROOF -2016, NEWER WINDOWS -2016, NEW A/C -2017. DUAL ZONED - 2 A/C AND 2 HEATING SYSTEM . OVERHEAD SWR. ALL FURNITURE STAYS EXCLUDED GLASS COFFEE TABLE FROM LIVING ROOM. NEWER 2.5 CAR GARAGE - 2016. FENCED YARD FOR YOUR PRIVACY. MUST SEE !!! Calling all Developers and Rehabbers. Large Home in Mayfair, Long Time Owner, home is clean but needs work, good room sizes. Great Neighborhood, Lots of New Construction on the Block. Fantastic Yard and Perennial Garden. Newer Garage, Large Unfinished Basement Waiting for your ideas. Close to Transportation, Blue Line, Metra, Parks and shopping. Third bedroom is Currently being used as an Office. Home sold As-Is. Looking for a home with lots of room? This immaculate Portage Park/Jefferson Township 1-1/2 story brick and frame home is ideal for a large family, extended family, or in-law arrangement. This property includes separate front and rear entrances, a large fenced-in yard with 8' x 20' patio, and a garage with alley access. The home includes 5 large bedrooms, 2 full baths, hardwood floors, custom bedroom closets, and a full basement. Recently installed 50 gallon hot water tank (2017), gas-forced air heat with humidifier, vinyl double hung windows, roof (2010), and front yard clean-out installed in 2013. Walking distance to church, schools, restaurants, shopping and transportation. This well-maintained, professionally landscaped ranch home has 3br/2ba, central air conditioning and hardwood floors. A full, finished walkout basement and one car garage.near highway.. 
NORTH MAYFAIR- HISTORIC CHICAGO BUNGALOW- SPACIOUS WELL MAINTAINED BRICK BUNGALOW OFFERS VINTAGE DETAIL STYLED FOR TODAY- TRADITIONAL LIVING ROOM WITH ARTIFICIAL FIREPLACE, BUILT IN BOOKCASE, CROWN MOLDING AND ORIGINAL SCONCES- FORMAL DINING ROOM-HARDWOOD FLOORS- NATURAL WOODWORK- EAT IN KITCHEN HAS NEW OAK FLOOR, MARBLE BACKSPLASH & FULL PANTRY- UPDATED BATHROOMS- GOOD SIZE BEDROOMS- SUNROOM WITH NEW VINYL PLANKING FLOOR- HUGE 2ND FLOOR MASTER BEDROOM, STUDY, 4TH BEDROOM- WALK IN CLOSET- VAULTED CEILINGS- SKYLIGHTS- WINDOW SEATS- BUILT IN A/C UNIT- OVERSIZED FULL BATH- KNOTTY PINE RECREATION ROOM WITH DRY BAR & ARTIFICIAL FIREPLACE- UNFINISHED BASEMENT 38X15- LAUNDRY AREA 12X9- ROOF TEAR OFF 2013- REPLACEMENT WINDOWS- COPPER PLUMBING- GLASS BLOCK WINDOWS- CONCRETE PATIO- NEW 2C GARAGE 2015- WALK TO 1+ RATED SCHOOL- GREAT PARK DISTRICT WITH POOL, TENNIS COURTS & LAGOON- ENJOY NORTH BRANCH BIKE TRAIL THRU FOREST PRESERVES- EASY PUBLIC TRANSPORTATION & EXPRESSWAY ACCESS. Specious 2 bedroom, 1.5 bath condo. Central Air, balcony, plenty of closet space, all hardwood floors. Remodeled kitchen and bathrooms. Laundry and large storage in the basement. Newer windows, freshly painted. Walking distance to Blue Line and Metra. Minutes to I90/I94. Close to local parks, restaurants and shopping. Assigned parking space. FABULOUS UNIT IN GREAT CHICAGO LOCATION. LARGE LIVING ROOM WITH HARD WOOD FLOORS, RECESSED LIGHTING AND BIG PICTURE WINDOW. UPDATED KITCHEN WITH TILE BLACKSPASH, UNDERMOUNT SINK AND GRANITE COUNTERS. NICE SIZE BEDROOMS. SECOND FLOOR CORNER UNIT. CLOSE TO SHOPS, RESTAURANTS, TRANSPORTATION AND EXPRESSWAY. Jefferson Park/ Portage Park area large 2 bedroom bright condo with balcony overlooking courtyard. New balcony door. Lots of sunlight in this top floor unit. Eat-in kitchen with new back splash. Combination living room/dining room; 2 spacious bedrooms, updated bathroom. Radiant heated floors keep you cozy in the winter! Lots of closets and storage, Well maintained building with coin laundry, assigned parking space, & storage unit. Close to Jefferson Park terminal & expressway. Located on a lovely residential block. One of a kind home in Jefferson Park!! Stunning open floor with brand new 2 floor addition in prime location. ( HIGHLY RATED Beaubien Elementary School ) BRAND NEW decks, 2 car garage , new heating, electric and plumbing. This spacious High-end luxury home features hardwood flooring throughout the main level. The open living room flows seamlessly through the dining room to the immaculate kitchen with quartz counter-tops and stainless steel appliances. Lots of natural light. The full finished basement with wet bar and full bathroom is perfect for entertaining! Walking distance to Metra/Blue Line El station, Frank J. Wilson Park, stores and minutes to expressway access make this a choice location. Hurry before it's gone!!! COZY RANCH IN DUNHAM PARK 2 BEDROOMS AND 1 BATH MAIN LEVEL, PLUS 3RD BEDROOM IN BASEMENT , NICE FLOOR PLAN. CLOSE TO SCHOOLS,2 CAR GARAGE,NEWER ROOF GARAGE AND HOUSE AUGUST 2016, NEEDS SOME TOUCH UPS.SHOWING SHORT NOTICE OK.
{ "redpajama_set_name": "RedPajamaC4" }
1,793
package com.html5parser.classes; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import java.util.Stack; import org.w3c.dom.Document; import org.w3c.dom.Element; import com.html5parser.classes.token.TagToken; import com.html5parser.classes.token.TagToken.Attribute; import com.html5parser.insertionModes.Initial; import com.html5parser.interfaces.IInsertionMode; import com.html5parser.parseError.ParseError; import com.html5parser.parseError.ParseErrorType; public class ParserContext { // TODO extra validation when emit tokens. see tokenization in spec /* * Tokenizer context */ private TokenizerContext tokenizerContext = new TokenizerContext(); /* * Insertion modes */ private IInsertionMode insertionMode = new Initial(); private IInsertionMode originalInsertionMode; // private IInsertionMode currentTemplateInsertionMode;is the last ins. mode // pushed onto the stack /* * Stacks */ private Stack<Element> openElements = new Stack<Element>(); private Stack<ParseError> parseErrors = new Stack<ParseError>(); private Stack<IInsertionMode> templateInsertionModes = new Stack<IInsertionMode>(); /* * Flags */ private boolean flagScripting = false; private boolean flagForceQuirks = false; private boolean flagParserPause = false; private boolean flagFramesetOk = false; private boolean flagStopParsing = false; private boolean flagReconsumeToken = false; private boolean flagFosterParenting = false; private boolean flagHTMLFragmentParser = false; /* * Others */ private ArrayList<Element> activeFormattingElements = new ArrayList<Element>(); // private Element currentNode; //is the last element pushed onto the stack // of open elements private Element adjustedCurrentNode; private Element headElementPointer; private Element formElementPointer; /* * Document */ Document doc; public TokenizerContext getTokenizerContext() { return tokenizerContext; } public void setTokenizerContext(TokenizerContext value) { this.tokenizerContext = value; } public IInsertionMode getInsertionMode() { return insertionMode; } public void setInsertionMode(IInsertionMode value) { this.insertionMode = value; } /* * public IInsertionMode getInsertionMode() { return IInsertionMode; } * * public void setInsertionMode(IInsertionMode IInsertionMode) { * this.insertionMode = IInsertionMode; } */ public IInsertionMode getOriginalInsertionMode() { return originalInsertionMode; } public void setOriginalInsertionMode(IInsertionMode originalInsertionMode) { this.originalInsertionMode = originalInsertionMode; } public IInsertionMode getCurrentTemplateInsertionMode() { return templateInsertionModes.peek(); } public Stack<Element> getOpenElements() { return openElements; } public void setOpenElements(Stack<Element> openElements) { this.openElements = openElements; } public Stack<ParseError> getParseErrors() { return parseErrors; } public void setParseErrors(Stack<ParseError> parseErrors) { this.parseErrors = parseErrors; } public void addParseErrors(ParseErrorType parseErrorType) { parseErrors.push(new ParseError(parseErrorType, this)); } public void addParseErrors(ParseErrorType parseErrorType, String message) { parseErrors.push(new ParseError(parseErrorType, message)); } public Stack<IInsertionMode> getTemplateInsertionModes() { return templateInsertionModes; } public void setTemplateInsertionModes( Stack<IInsertionMode> templateInsertionModes) { this.templateInsertionModes = templateInsertionModes; } public boolean isFlagScripting() { return flagScripting; } public void setFlagScripting(boolean 
flagScripting) { this.flagScripting = flagScripting; } public boolean isFlagForceQuirks() { return flagForceQuirks; } public void setFlagForceQuirks(boolean flagForceQuirks) { this.flagForceQuirks = flagForceQuirks; } public boolean isFlagParserPause() { return flagParserPause; } public void setFlagParserPause(boolean flagParserPause) { this.flagParserPause = flagParserPause; } public boolean isFlagFramesetOk() { return flagFramesetOk; } public void setFlagFramesetOk(boolean flagFramesetOk) { this.flagFramesetOk = flagFramesetOk; } public ArrayList<Element> getActiveFormattingElements() { return activeFormattingElements; } public void setActiveFormattingElements( ArrayList<Element> activeFormattingElements) { this.activeFormattingElements = activeFormattingElements; } public Element getCurrentNode() { return openElements.peek(); } public Element getAdjustedCurrentNode() { // The adjusted current node is the context element if the stack of open // elements has only one element in it and the parser was created by the // HTML fragment parsing algorithm; otherwise, the adjusted current node // is the current node. if (this.flagHTMLFragmentParser && this.openElements.size() == 1) // TODO return context node return openElements.peek(); else if (openElements.size() > 0) return openElements.peek(); else return null; } public void setAdjustedCurrentNode(Element adjustedCurrentNode) { this.adjustedCurrentNode = adjustedCurrentNode; } public Element getHeadElementPointer() { return headElementPointer; } public void setHeadElementPointer(Element headElementPointer) { this.headElementPointer = headElementPointer; } public Element getFormElementPointer() { return formElementPointer; } public void setFormElementPointer(Element formElementPointer) { this.formElementPointer = formElementPointer; } public boolean isFlagStopParsing() { return flagStopParsing; } public void setFlagStopParsing(boolean flagStopParsing) { this.flagStopParsing = flagStopParsing; } public boolean isFlagReconsumeToken() { return flagReconsumeToken; } public void setFlagReconsumeToken(boolean flagReconsumeToken) { this.flagReconsumeToken = flagReconsumeToken; } public Document getDocument() { return doc; } public void setDocument(Document doc) { this.doc = doc; } // Remove duplicate attributes and generate parse errors public void validateAttributeNames() { List<Attribute> attributes = ((TagToken) this.tokenizerContext .getCurrentToken()).getAttributes(); final List<Attribute> setToReturn = new ArrayList<>(); final Set<String> set1 = new HashSet<String>(); for (Attribute att : attributes) { if (set1.add(att.getName())) { setToReturn.add(att); } else { this.parseErrors.push(new ParseError( ParseErrorType.DuplicatedAttributeName, att.getName())); } } ((TagToken) this.tokenizerContext.getCurrentToken()) .setAttributes(setToReturn); } public boolean isFlagFosterParenting() { return flagFosterParenting; } public void setFlagFosterParenting(boolean flagFosterParenting) { this.flagFosterParenting = flagFosterParenting; } public boolean openElementsContain(String elementName) { List<Element> list = new ArrayList<Element>(); list.addAll(openElements); int n = openElements.size(); boolean flag = false; for (int i = 0; i < n; i++) { Element element = list.get(i); if (element.getNodeName().equals(elementName)) { flag = true; } } return flag; } public boolean isFlagHTMLFragmentParser() { return flagHTMLFragmentParser; } public void setFlagHTMLFragmentParser(boolean flagHTMLFragmentParser) { this.flagHTMLFragmentParser = flagHTMLFragmentParser; } 
}
{ "redpajama_set_name": "RedPajamaGithub" }
3,337
\section{Introduction} \label{sec:intro}
One of the many surprises revealed by ESA's Rosetta spacecraft was the complex landscape of comet 67P/Churyumov-Gerasimenko. The surface is rich in landforms comparable to what is usually found on larger planetary bodies. Rosetta has mapped the comet extensively and its morphology has been described in many publications: \cite{thomas2015, elmaarry2015a, elmaarry2016, giacomini2016, birch2017}. Among all morphological features, we focus our interest on the near-vertical walls of cliffs and pits, interpreted to result from surface collapse \citep{vincent2015b}, and which clearly display ongoing regressive erosion due to continuing activity/thermal stress \citep{vincent2016a} or sudden outbursts \citep{vincent2016b}. Cliffs on 67P/Churyumov-Gerasimenko were not unexpected. Similar features have been observed previously on most other nuclei visited by spacecraft: 19P/Borrelly \citep{britt2004}, 81P/Wild 2 \citep{brownlee2004}, and 9P/Tempel 1 \citep{thomas07}, but Rosetta provided the opportunity to look at these features in greater detail, and for an extensive period of time. We could for instance characterize the boulder size distribution in the cliffs' taluses \citep{pajola2015, pajola2016a}, and link observed collapses to activity \citep{vincent2016b, pajola2017}. In this work, we investigate the relation between cliffs or other vertical features and the erosional rates and material strengths. While we do not yet understand how cliffs are formed on a comet, the simple fact that they exist puts constraints on the material strength. Indeed, even in a very low gravity environment (typically $2.10^{-4}~m.s^{-2}$, see section \ref{sec:gravity}), cliffs without strength would naturally collapse under their own weight in a few minutes \citep{jeffreys1952}. As cliffs were clearly stable for at least the two-year time span of the Rosetta mission, the material properties must be sufficient to ensure their existence. The surface strength on comet 67P has been investigated in localized areas and values have been published in several papers. For instance, \cite{vincent2015b} constrained the strength of material surrounding active pits, interpreted as sinkholes; \cite{groussin2015a} measured the strength of stable overhangs in selected areas of the comet; \cite{biele2015} and \cite{spohn2015} computed local strength respectively from the Philae lander bounce on Agilkia, and the MUPUS measurements at Abydos. All authors agree on a typical tensile strength in the range 10-100 Pa, and a compressive strength in the kPa range for the dusty layer, up to a couple of MPa for the underlying consolidated material. While these different studies are converging, their scope was limited to very specific regions of the comet and may not fully describe the material. Additionally, strength alone may not be the main driver for the topography, as evolutionary processes can play a significant role. Therefore, our aim is to derive global statistics on the topography across the entire surface of 67P and link this to our current understanding of material strength and the variable evolutionary history of the nucleus.
\section{Data and methods}
\subsection{Shape model}
Our analysis is based on the most accurate 3-dimensional reconstruction of 67P's nucleus topography, obtained by photogrammetry. The data set and technique are described in \cite{preusker2015} for the Northern hemisphere.
This paper uses a new version of the 3d shape ("cg-dlr\_spg-shap7-v1.0\_500Kfacets.ply"), representing the complete nucleus, and presented in \cite{preusker2017}. The full resolution model comprises about 22 million vertices arranged in 44 million triangular facets. Vertex positions have a typical spacing of 1-2~m and 1-sigma accuracy of 0.2-0.3~m. The typical uncertainty in the facet orientation is in the order of 2-5$^{\circ}~$. Processing such a large data set is computationally prohibitive, while the full resolution is not necessary for our analysis the typical feature size is larger than 10 meters. We therefore based this study on a decimated version of the same shape model, with about 250 000 vertices and 500 000 facets. On average, vertices are separated by a distance of about 15~m. \subsection{Gravity}\label{sec:gravity} In order to define which structures are actually cliffs, we need first to estimate the surface effective gravity, the combination of gravitational acceleration and centrifugal force due to the rotation. On a body such as 67P ($1.5~km$ mean radius, $1 \times 10^{13}~kg$ mass, 12.4 hours rotation period), the mean gravity is in the order of $2 \times 10^{-4} m.s^{-2}$ and the centrifugal force is about $3.10^{-5}~m.s^{-2}$. Hence, the centrifugal force opposes gravity with a relative magnitude of up to 15\% and must be accounted for in our calculation. Gravity values are obtained for each facet using the classical \cite{werner1996} approach. Because gravity calculation on a convex body is non-trivial, we also compared our results with an alternative model by \cite{cheng2012}. For the 500k facets shape, the absolute difference between the two gravity models is in the range [$1.9 \times 10^{-7} to 3.7 \times 10^{-6}~m.s^{-2}$], i.e. less than 1\% of the effective gravity. As both methods use an independent approach, we are therefore convinced that we have calculated a reliable approximation of the gravity vector on each facet.\\ We note that using a simpler model (two central masses and the ellipsoid parameters described in \cite{jorda2016}) is not sufficient. While the gravity obtained agrees with the more advanced models for most of the surface, we found that the simple model leads to anomalously large gravity values (greater than twice the expected figure) for about 8\% of the facets, especially in highly concave areas, such as the Hapi/Hathor region. \subsection{Slopes and automatic detection of cliffs} We also measure the effective surface slope, defined as the angle between the surface normal vector and the opposite of the local gravity. A slope of 0$^{\circ}~$ is flat with respect to gravity, while a slope of 90$^{\circ}~$ describes a cliff. Using this measure of the slope as input, we developed an algorithm to automate the detection of all topographic features relevant to this study. It works in three consecutive steps: \begin{enumerate} \item We isolate facets of the shape model having a slope larger than 60$^{\circ}~$. This arbitrary value is taken as a very conservative maximum angle of repose on 67P. It is twice the value measured for granular material in granular flows observed in various regions of the comet (30$^{\circ}~$, \cite{vincent2016a}). Using a high angle ensures that none of the selected areas contain loose dust. Additionally, it prevents us from selecting artefacts. Indeed, by reducing the number of facets, the decimation process from 44 million to 500 thousand facets smooths out features with a size comparable to the facet length. 
For instance, a boulder of 15 m height may end up being described with one vertex only and show lateral slopes close to 45$^{\circ}~$. The choice of this slope angle limit effectively defines the lowest height that can be detected: \mbox{height $>$ min(length) $\times$ sin(slope) $=~13~m$.} \item We then grouped together neighbouring high slopes by geographic location. Starting with the facet identified in step one as having a slope $>$60$^{\circ}~$, we then find all the neighbouring facets that match that criterion and group them into a unique set. We then iterate over these newly added facets until there are no more remaining neighbours with $>$60$^{\circ}~$ slopes to be added to the current set. We select another cliff not yet in a set and repeat the process. Thus, we end up with a separation of all cliffs as independent entities with no feature being identified twice. Figure \ref{fig:cliffs_3d} shows a 3d visualization of the identified cliffs. Our algorithm properly separates features that belong to the same morphological region. For instance the inner walls of a pit are grouped together, while the facets surrounding a large outcrop are similarly grouped. \item For each topographic feature identified in this way, we extract and save parameters that can be used for further investigation: average 3d position on the shape model, local gravity, slope, height, area. The height is defined relative to the local gravity: We first project the three-dimensional positions of all vertices in a set (that is all facets describing a feature) onto the local gravity vector. We then define the height as the altitude difference between the highest and lowest point of the set, after projection. \end{enumerate} This algorithm produces very reliable results. When comparing with images, we find that it catches all features that were already visually identified as cliffs, but can also isolate large boulders, outcrops, and overhangs. Table \ref{tab:cliffs} summarizes the output of our automatic detection. \begin{table} \caption{Output of the automatic cliff detection algorithm. A file with all results (cliff position, local gravity, height, slope, and area) is provided as supplementary material.} \label{tab:cliffs} \centering \begin{tabular}{l r} \hline\hline \multicolumn{2}{c}{cg-dlr\_spg-shap7-v1.0\_500Kfacets}\\ Facets & 499 902\\ Slopes > 60$^{\circ}~$ & 78 528\\ Independent cliffs & 2 633\\ Minimum height & 13 m\\ Maximum height & 621 m\\ \hline \end{tabular} \end{table} \begin{figure*} \centering \includegraphics[width=0.49\hsize]{figures/CG_slopes_whitebg.png} \includegraphics[width=0.49\hsize]{figures/CG_cliffs_whitebg.png} \caption{Visual representation of the automatic detection of topographic features of interest. Left panel shows the effective gravitational slope for each facet of the shape model (accounting for gravity + centrifugal force). Right panels shows cliffs, i.e. independent sets of connected facets with a slope larger than 60$^{\circ}~$. Colors indicate different cliffs.} \label{fig:cliffs_3d} \end{figure*} \section{Results}\label{sec:results} \subsection{Global size distribution} Out of the 499 902 facets of the shape model, our algorithm extracted 2 633 independent "cliffs", defined as connected facets with a slope angle larger than 60$^{\circ}~$. Their geographic distribution is shown in Figure \ref{fig:map_cliffs}. This corresponds to 15.04\% of the total nucleus surface area. 
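The counts reported here are obtained with the three-step procedure of the previous section, which can be summarized by the following schematic implementation (in Python; the input array names are placeholders and the actual processing chain used for this work differs in its details):
\begin{verbatim}
import numpy as np

def detect_cliffs(vertices, faces, normals, gravity, neighbours, limit_deg=60.0):
    """Schematic three-step cliff detection (placeholder inputs, simplified).

    vertices   : (Nv, 3) vertex positions of the shape model [m]
    faces      : (Nf, 3) vertex indices of each triangular facet
    normals    : (Nf, 3) outward facet normals
    gravity    : (Nf, 3) effective gravity (gravitation + centrifugal) per facet
    neighbours : list of the facet indices adjacent to each facet
    """
    # Step 1: gravitational slope = angle between the facet normal and -gravity
    n_hat = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    g_hat = -gravity / np.linalg.norm(gravity, axis=1, keepdims=True)
    cosang = np.einsum('ij,ij->i', n_hat, g_hat)
    slope = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    steep = slope > limit_deg

    # Step 2: region growing, grouping connected steep facets into independent cliffs
    cliffs, seen = [], np.zeros(len(faces), dtype=bool)
    for seed in np.where(steep)[0]:
        if seen[seed]:
            continue
        stack, members = [seed], []
        seen[seed] = True
        while stack:
            f = stack.pop()
            members.append(f)
            for nb in neighbours[f]:
                if steep[nb] and not seen[nb]:
                    seen[nb] = True
                    stack.append(nb)
        cliffs.append(members)

    # Step 3: height = extent of the cliff vertices projected onto the local gravity
    heights = []
    for members in cliffs:
        verts = vertices[np.unique(faces[members])]
        g_local = g_hat[members].mean(axis=0)
        proj = verts @ g_local
        heights.append(proj.max() - proj.min())
    return cliffs, np.array(heights)
\end{verbatim}
With the $\sim$15~m facet spacing of the decimated model and the 60$^{\circ}~$ threshold, such a procedure cannot detect features lower than about 13~m, consistent with Table \ref{tab:cliffs}.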
The smallest cliff detected on this shape model is 13~m high (constrained by the facet size and slope angle), while the tallest is 621~m. \begin{figure*} \centering \includegraphics[width=\hsize]{figures/map_height} \caption{Cliff heights, shown as coloured dots on a shaded map of effective slope (white=flat surface, black=high slope).} \label{fig:map_cliffs} \end{figure*} The size distribution does not show any preferred height, but rather a power law, as shown in Fig. \ref{fig:cumul_plaw}. When plotting the cumulative distribution of cliff heights, we find that the lower 99.3\% of the distribution (height < 300~m) can be described with a power law index equal to $-1.69 \pm 0.02$, while the remaining 0.7\% are better represented by a power law index of $-3.46 \pm 0.15$. The largest cliffs are mostly located in Hathor region, the area of the small lobe facing the larger component. This region oversees Hapi, the interface between both lobes of the nucleus, and has been described previously as one large cliff (\cite{thomas2015}), 900~m tall. Because its size is comparable to the small lobe itself, the gravity vector changes across the region and our automatic algorithm separates Hathor into a few distinct entities, shown in Fig. \ref{fig:Hathor}. For this reason, it is not clear whether the size distribution we observe in Hathor hints at distinct physical properties, or is rather an artefact of our definition of what a cliff is on this comet. It is interesting to note that if 67P is the result of a gentle merge between two smaller bodies as described in \cite{davidsson2016}, Hathor is effectively the former surface of the small lobe, and therefore not a cliff \textit{per se}. The significance of the power law distribution and what the different power index could mean for Hathor's material properties will be discussed in section \ref{sec:plaw}. \begin{figure} \centering \includegraphics[width=\hsize]{figures/cumul_power_law.png} \caption{Cumulative distribution of cliff height on 67P/Churyumov-Gerasimenko. The vertical axis gives the percentage of cliffs taller than the height given on the horizontal axis. For instance, only 10\% of the cliffs are larger than 70~m.} \label{fig:cumul_plaw} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\hsize]{figures/NAC_2014-08-28T12_42_54_F22.jpg}\\~\\ \includegraphics[width=0.8\hsize]{figures/cliffs_Hathor.jpg} \caption{The largest cliffs on 67P are all located in Hathor. Top panel: OSIRIS image NAC\_2014-08-28T12.42.54.563Z\_ID30\_1397549800\_F22. Bottom panel: simulated view, colors represent the facet pertaining to cliffs taller than 250~m. Both OSIRIS image and simulated view have been rotated and aligned with the local gravity.} \label{fig:Hathor} \end{figure} \subsection{Regional variations}\label{sec:regions} \subsubsection{North vs South} Several authors have pointed at the dichotomy between 67P's hemisphere, in terms of morphology \citep{thomas2015,elmaarry2016} or composition \citep{luspaykuti2015}. This dichotomy is largely explained by seasonal effects, as the southern hemisphere experiences significantly more erosion than the North \citep{keller2015a}. In addition to insolation, gas driven dust transport leads to a massive mantling of the Northern hemisphere \citep{lai2016} which smooths out the topography. Is this evolutionary dichotomy also present in the distribution of high slopes? Figure \ref{fig:lat_hist} shows the distribution of cliff densities (in number per $km^2$) and surface fraction as a function of the latitude. 
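The binning behind this figure is straightforward; a schematic version is sketched below (in Python, with randomly generated placeholder arrays standing in for the actual cliff catalogue and facet list, and a total surface area of order 50~km$^2$ assumed only for illustration):
\begin{verbatim}
import numpy as np

# Placeholder inputs standing in for the real catalogue and shape model:
rng = np.random.default_rng(0)
lat_cliff  = rng.uniform(-90, 90, 2633)          # latitude of each cliff centre [deg]
area_cliff = rng.uniform(1e-4, 1e-2, 2633)       # area of each cliff [km^2]
lat_facet  = rng.uniform(-90, 90, 500000)        # latitude of each facet [deg]
area_facet = np.full(500000, 50.0 / 500000)      # facet areas [km^2], ~50 km^2 in total

bands = np.arange(-90, 91, 10)
for lo, hi in zip(bands[:-1], bands[1:]):
    band_area = area_facet[(lat_facet >= lo) & (lat_facet < hi)].sum()
    in_band   = (lat_cliff >= lo) & (lat_cliff < hi)
    density   = in_band.sum() / band_area               # cliffs per km^2
    fraction  = area_cliff[in_band].sum() / band_area   # fraction covered by cliffs
    print(f"{lo:+03d}..{hi:+03d} deg  {density:6.1f} /km^2  {100 * fraction:5.1f} %")
\end{verbatim}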
While there is no major difference in the absolute number of cliffs between the two hemispheres (50.3\% of all cliffs are in the North, 49.7\% in the South), we do observe significant variations in the local distribution: \begin{itemize} \item Northern cliffs are more likely to be found at higher latitudes (> 45$^{\circ}~$). This corresponds mainly to the Seth/Hathor regions on the big lobe, which displays some of the most dramatic topographic variations, e.g. the deep active pits presented in \cite{vincent2015b}. \item On the contrary, southern cliffs are distributed mainly around the mid latitudes (-20$^{\circ}~$ to -60$^{\circ}~$), which marks the transition area between several morphological regions \cite{elmaarry2016}. This latitude band was also identified by \cite{vincent2016b} as the preferred location for southern outbursts, many of them likely related to the sudden collapse of existing cliffs. \item The mean density of cliffs is 5\% higher in the Southern hemisphere, but the cliffs area is proportionally larger in the North. This is effectively a quantitative measure of the surface roughness at the scale of 10-100s~m. Indeed, cliffs are more densely distributed in the South than in the dust covered North, but southern cliffs are also less high and will not tend to create large continuous walls like the ones found at high Northern latitude. \end{itemize} In short: the southern regions of 67P's nucleus are rougher than the northern ones at a 10~m scale, but the North is rougher at a 100~m scale. This dichotomy is a consequence of the strong seasonal differences between the two hemispheres. Fig. \ref{fig:north_south} shows the cumulative size distribution of cliff heights for both hemispheres. We find that the northern power index ($-1.64 \pm 0.02$) is close to the mean value of the comet, while the southern distribution shows a steeper slope ($-1.86 \pm 0.04$). Because of the different insolation patterns between both hemispheres, it is tempting to interpret this difference in power index as a signature of the surface erosional rates, or how much time the comet has spent in the inner Solar System. We develop this argument further in section \ref{sec:erosion}. \begin{figure} \centering \includegraphics[width=\hsize]{figures/cliff_density_area.png} \caption{For each 10$^{\circ}~$ of latitude, the left side of this plot represents the number of cliffs per square kilometre, and the right side shows how much of the area of a given latitude band is covered by cliffs. The left side can be interpreted as a measure of the roughness in the 10~m scale, while the right side is more sensitive to features in the 100~m range and beyond. Overall, this plots show that the southern hemisphere is rougher at small scales, but displays less dramatic topographic changes than the northern one.} \label{fig:lat_hist} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{figures/cplaw_north_south.png} \caption{Cumulative size distribution of cliff height on the Northern and Southern regions of the nucleus. The southern distribution is steeper and the change of slope takes place at a lower height than on the north.} \label{fig:north_south} \end{figure} \subsubsection{Big lobe vs Small lobe} The origin of 67P is debatable, where different publications have argued for either a primordial object \citep{davidsson2016} or a re-accreted collisional fragment \citep{rickman2015}. All authors, however, agree that 67P is very likely the result of a low-speed merger collision between two small bodies. 
Those objects are effectively the lobes of the comet as we see it today. In our data set, the separation between the two lobes is purely geometric. \cite{preusker2015} have defined in 3D the limits of the small lobe (SL), neck region (NR) and big lobe (BL) with a set of two planes (BL-NR) and (SL-NR) which separate the shape in three entities. Vertices of the shape model belong to one component or the other depending on their position with respect to these planes. Because this definition was proposed before the Southern hemisphere was fully observed, the planes end up attributing parts of the lobes to the neck region. We correct for this by using only one separation, defined as the mean plane between the two cuts previously defined. In the Cheops-reference frame of the comet \citep{preusker2015}, a point $P[x,y,z]$ belongs to the separation plane if its coordinates satisfy the relation: $$1.706 x -0.846 y + 0.536 z -1.289 = 0 $$ A visualization of this separation is shown in Fig. \ref{fig:lobes}. ~\\ We looked at the distribution of cliff heights across both lobes and summarized these results in Fig. \ref{fig:lobes_data}. We find that the big lobe follows the same trend as described earlier, with the distribution akin to a double power law (kink at 300~m). The main power index is equal to $-1.81 \pm 0.04$. The small lobe however, has a much poorer fit. The distribution can roughly be approximated with a similar set of power laws, but it is clear from Fig. \ref{fig:lobes_data} that this is not the best model. We note an excess in both the 10-20~m cliffs and the 100-200~m cliffs. This may relate to an intrinsic difference between both lobes, although it is perhaps more easily explained by a different evolution process for two main reasons: \begin{itemize} \item{As explained earlier, some areas in Hathor, Anuket, and Neith are formerly the original surface of the small lobe (admittedly now considerably eroded). Hence, the features pertaining to this regions that we identified as the largest cliffs now could have been flat plains when considering solely the gravity of the smaller lobe. With that in mind, it can be that those features have experienced a very different history than the smaller cliffs in other areas, and were not born as cliffs sensu stricto.} \item{The Wosret region on the southern small lobe has a very peculiar morphology. It is extremely flat and dominated by long fractures, and devoid of any significant dust cover \citep{elmaarry2016}. Because of its location and orientation, Wosret is permanently illuminated with a Sun at zenith at perihelion. Therefore it is potentially the most eroded region of the comet, explaining why it is so flat.} \end{itemize} We conclude that the difference in size distribution of cliffs between the two lobe is probably not a meaningful way to assess differences in physical properties. It is, however, a good description of the different erosional history of both lobes. \begin{figure} \centering \includegraphics[width=0.8\hsize]{figures/lobes.jpg} \caption{3D visualization of the separation between "big lobe" and "small lobe"} \label{fig:lobes} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{figures/cplaw_big_small.png} \caption{Cumulative size distribution of cliff height on the big lobe and small lobe. Both distributions can be approximated with a double power law. 
The transition from one power index to the next takes place at a lower height on the big lobe.} \label{fig:lobes_data} \end{figure} \section{Discussion}\label{sec:discussion} \subsection{Power law distribution}\label{sec:plaw} The fact that the cliff size distribution follows a power law is not surprising, as power laws are ubiquitous in measurements of natural phenomena. Specifically in planetary science, power laws are used to describe, for instance, the size distribution of craters or boulders on rocky surfaces. On 67P, we measured a cumulative power index of $-3.6 \pm 0.2$ for boulders larger than 7m \citep{pajola2015}, $-2.05 \pm 0.25$ for the diameter of circular features \citep{ip2016}, and $-2.8 \pm 0.2$ for pebbles in the Agilkia region \citep{mottola2015}. The resolution of the images acquired by previous missions was not sufficient to provide an exhaustive measure of the topography, but some features (e.g. pits and boulders) have been catalogued and are listed in Table \ref{tab:all_bodies}. \begin{table*} \caption{Power indices (slope of the cumulative size distribution in log-log space) as measured on cometary features. The power law for circular depressions on comet 81P/Wild 2 is not provided explicitly by Basilevsky \& Keller (2006); we re-calculated it from their Fig. 10.} \label{tab:all_bodies} \centering \begin{tabular}{l l l l} \hline\hline Comet & Feature & Power index & Reference\\ 81P & pit diameters & $-1.60 \pm 0.15$ & \cite{basilevsky2006} \\ 9P & pit diameters & $-2.24 \pm 0.09$ & \cite{belton2013}\\ 103P & boulders >10m & $-2.7 \pm 0.2 $ & \cite{pajola2016c}\\ 67P & boulders >7m & $-3.6 \pm 0.2 $ & \cite{pajola2015}\\ 67P & pit diameters & $-2.05 \pm 0.25$ & \cite{ip2016}\\ 67P & cliff heights & $-1.69 \pm 0.02$ & this study\\ \hline \end{tabular} \end{table*} Although it is not well understood why such distributions should be power laws, this behaviour is generally interpreted as a signature of scale invariance \citep{turcotte1986, newman2006}. Power-law distributions are alternatively found in the literature as descriptions of fractal structures and are characterized by their fractal Hausdorff dimension $d$ \citep{hausdorff1918}. Fractal dimension and power index relate to each other through the equation: $$ d = 1 + |p_{index}| $$ where $p_{index}$ is the power slope of the \textit{cumulative} size distribution. On 67P, an average $p_{index}$ of -1.69 for cliffs between 13 and 300~m is therefore equivalent to a fractal Hausdorff dimension $d = 2.69$. Hence, a pure mathematical approximation of the first 300~m of the comet could be an object such as the Level 4 Menger sponge \citep{menger1928}, which has a Hausdorff dimension $\simeq 2.73$ and a porosity of 70\%. Indeed, 67P's porosity is in the range 70-75\%, according to \cite{jorda2016} and \cite{paetzold2016}. This may prove useful when developing further models of the top 300 metres of the surface (Fig. \ref{fig:menger_comet}).
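For reference, the dimension and porosity quoted above follow directly from the construction of the sponge: at every iteration each cube is divided into 27 sub-cubes, of which 20 are kept, so that
$$ d_{\mathrm{Menger}} = \frac{\log 20}{\log 3} \simeq 2.727 , $$
while the volume retained after four iterations is $(20/27)^4 \simeq 0.30$, i.e. a porosity of about 70\%.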
\begin{figure} \centering \includegraphics[width=\hsize]{figures/menger_comet.jpg} \caption{The \textit{Level 4 Menger sponge} (left panel) is a mathematical object with a fractal dimension and porosity similar to those of the top $\sim 100$~m of the comet surface, which is marked by large depressions, cliffs, and sharp topographic variations, as can be seen in this OSIRIS NAC image of the Seth region (NAC\_2014-09-22T14.49.49.332Z\_ID30\_1397549400\_F22).} \label{fig:menger_comet} \end{figure} ~\\ In terms of geophysical processes, this means that cliff formation and fragmentation tend to follow existing planes of failure which can be found at all scales. Additionally, \cite{turcotte1986} have shown that for general fragmenting processes, this fractal dimension is a measure of how efficiently the existing fractures will resist fragmentation. Stronger materials will have a larger fractal dimension. In other words, the stronger the material, the steeper the power law. When applied to 67P, this observation means that as the comet crumbles, its individual fragments tend to become more resistant to subsequent failures and it may be easier for erosion processes to break down a large cliff than to fragment small boulders. The kink in the size distribution at large heights is difficult to explain. We rule out observation bias because we certainly cannot have missed features of a few hundred meters in size after having mapped 100\% of the nucleus surface several times over more than 2 years. We see two potential explanations for the larger power index: \begin{itemize} \item A steeper power slope typically indicates that more erosion/fragmentation took place. Therefore, our observations could mean that cliffs larger than 300~m break up into smaller ones more efficiently than smaller features. As cliff size is a function of the ratio between gravity and cohesion, it means that cliffs larger than this limiting height might be at the edge of where gravity starts to overcome tensile strength. Hence, the amplitude of the perturbation which may trigger the collapse will be lower than for smaller cliffs. However, this effectively defines a lower limit of 2~Pa for the material cohesion, at least an order of magnitude lower than the tensile strength derived from pit collapses \citep{vincent2015a} and overhangs \citep{groussin2015a} in the same regions. Therefore it is unlikely that these large cliffs are significantly weaker than other features. \item Rather than invoking heterogeneity in the material properties, one may instead consider insolation conditions. For example, Hathor and Sobek, the two main locations for high cliffs, display very unusual erosion patterns due to their geographic position on the comet (inside large concavities). Hence, it is quite possible that erosion did not affect the cliff size distribution in these areas in the same way it modified the other regions. \end{itemize} We note that the kink appears at different heights depending on the region. While this may reflect different regional history, it is more probably due to the very small number of tall cliffs over the surface (18 out of 2633), which does not allow us to properly constrain the height at which this kink occurs. \subsection{Correlation between Surface Erosion and Power Index}\label{sec:erosion} We suggested in section \ref{sec:regions} that the different size distributions between the hemispheres reflect the erosional history of the surface.
In order to investigate this more thoroughly, we performed an orbital integration of the received insolation for the whole surface and derived an orbital erosion rate according to thermal model B in \cite{keller2015a}. More specifically, this model approximates the surface with a porous ice layer covered with a $50\mu m$ layer of small ($5\mu m$) aggregates of dust. This layer affects the heat transfer and, consequently, the sublimation of water ice. The erosion thus calculated considers only the water mass loss and is therefore a lower limit of the average erosion. Despite these simplifications, the results are consistent with observations of activity, erosion, and change of rotation period of the nucleus \citep{keller2015b}, and with other published models such as \cite{lai2016}. This approach gives us a way to account for the high non-linearity of sublimation and mass loss on the comet. This is important to consider, as although the northern and southern latitudes receive about the same amount of energy per orbit, the southern hemisphere receives all its energy in only 8 months, when close to the Sun, and therefore the erosion of the southern surface is much more dramatic than in the North. Having a model for the orbital erosion rate, we divided the nucleus surface into 6 regions with increasing erosion rates, and comparable areas and numbers of cliffs ($\simeq$400/region). These areas are presented in Fig. \ref{fig:map_erosion}, top panel. For each region we calculated the power index of the cumulative distribution of cliff heights, as done before on the larger scale. Results are plotted in Figure \ref{fig:map_erosion}, bottom panel. We find a remarkable correlation (confidence 99\%) between both variables, confirming our intuition that the size distribution of cliffs is steeper for larger erosion rates. We interpret our results as a fundamental property of erosion processes on 67P. Rather than simply losing mass, the nucleus topography is actually eroded down into ever smaller fragments that remain mostly in the regions where they were formed. The higher the erosion rate, the higher the probability of finding only small cliffs. This is particularly visible when comparing, for instance, a region like Seth (north), which is rich in cliffs and pits with depths >150m, with the southern Wosret, which is almost completely flat. It is, however, important to remember that correlation does not imply causality and one cannot assume that the linear relation between erosion rate and topographic size distribution is a physical law. We can say, though, that the correlation suggests that all erosional processes (activity, thermo-mechanical stresses, gravity, ...) modify the surface in a way that is directly related to how strongly and how quickly the solar insolation is delivered to specific regions. It is not clear how far this crumbling process goes as, for instance, comet pieces with a size below a few decimetres are blown away from the nucleus by activity \citep{agarwal2016}. We note, nonetheless, that the size distribution of boulders (cumul. $p_{index} = -3.6$) \citep{pajola2015,pajola2016b} and grains ejected from the comet (cumul. $p_{index} = -3$ to $-2.7$) \citep{fulle2016} is much steeper than that of the cliffs, which is compatible with our interpretation that boulders and dust are the end product of erosion.
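As a side note, the per-region fit and the correlation described above can be sketched in a few lines of Python. The snippet below is purely illustrative: the erosion rates and cliff heights are synthetic placeholders (constructed so that steeper distributions accompany higher erosion rates), not the measured catalogue or the output of the thermal model.
\begin{verbatim}
import numpy as np

def cumulative_power_index(heights, h_min=13.0, h_max=300.0):
    # Slope of the cumulative height distribution N(>h) in log-log space.
    h = np.sort(np.asarray(heights, dtype=float))
    n_cum = np.arange(len(h), 0, -1)
    sel = (h >= h_min) & (h <= h_max)
    slope, _ = np.polyfit(np.log10(h[sel]), np.log10(n_cum[sel]), 1)
    return slope

rng = np.random.default_rng(1)
erosion_rate = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])   # hypothetical bins
cliffs_per_bin = [13.0 * (1.0 + rng.pareto(a, size=400))   # synthetic heights
                  for a in np.linspace(1.5, 2.3, erosion_rate.size)]

p_indices = np.array([cumulative_power_index(c) for c in cliffs_per_bin])
r = np.corrcoef(erosion_rate, p_indices)[0, 1]
print(np.round(p_indices, 2), round(r, 3))
\end{verbatim}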
To be exhaustive, we must also mention that although this power law evolution from shallow to steep curves seems linear for cliffs, it is not at all certain that it continues in this way for smaller blocks. Indeed, \cite{pajola2015} have shown that while most boulder size distributions follow a $p_{index}$ = -3.6, there are some areas of the nucleus with much shallower power laws ($p_{index}$ = -2, or even -1). Small objects are much more sensitive to local conditions and are certainly affected by different erosion processes than the cliffs. \begin{figure} \centering \includegraphics[width=\hsize]{figures/orbital_erosion.png}\\ \includegraphics[width=\hsize]{figures/erosion_pindex_correlation.png} \caption{Top panel: Topographic map of the surface, shaded with the orbital erosion rate. Note that the equirectangular map projection makes the northernmost and southernmost regions appear larger than they are in reality. Bottom panel: power index of the cliff distribution as a function of the erosion rate. The dotted line is a linear fit to all points (correl. coeff. r=-0.993).} \label{fig:map_erosion} \end{figure} \subsection{A general evolution model} If we rewind this evolution process, can we define a primitive topographic distribution, i.e., what does a non-eroded comet look like? We must first define what is meant by primitive in the context of cometary surfaces. In our current model of the Solar System, comets are formed beyond 30 AU and may experience a certain number of collisions in their original environment, enhanced by the migration of giant planets. The details of this early phase are still an open question; see \cite{rickman2015} and \cite{davidsson2016} for a discussion on the potential implications of various scenarios. After this initial formation phase, comets mostly remain far from the Sun for billions of years until a favourable gravitational pull brings them back to closer heliocentric distances. Because of the low energy available, and the low density of objects at large distances from the Sun, it is likely that a cometary nucleus evolves only very little during this phase and its surface is representative of what the comet looked like shortly after formation. Once a comet enters the inner Solar System, the situation changes dramatically, especially for Jupiter Family comets which have a small perihelion distance (e.g. 1.2 AU for 67P). We estimate that the lifetime of a comet in such an orbit is of the order of a few tens of thousands of years, during which the surface will be completely transformed by the solar insolation. Reconstructing cometary orbits is notoriously difficult because of the chaotic nature of such integrations (small variations in initial conditions can lead to vastly different orbits when accounting for the gravity of all planets), but current models agree that 67P has only recently been put on its current orbit (most likely in 1959, see \cite{ip2016}). Before that time it orbited beyond the snow line, and therefore the least eroded regions on the surface are very likely to be close to their primitive state. In the previous section we have shown how erosion affected 67P's surface: the cumulative size distribution of cliffs in the least eroded regions is characterized by a power law with $p_{index} \simeq -1.5$, while the most eroded regions have a $p_{index} \simeq -2.3$.
Because of the orbital considerations expressed above, and because the most eroded areas show very little topography, we consider these two boundaries as realistic assumptions as to what the cliff height distribution should be on a very primitive and on a very eroded object, knowing that these can only be qualitative bounds until we have visited more comets. From these two extreme size distributions, we propose the following evolution model: We start with a km-size body already formed; we do not consider the original accretion itself. During that formation phase, or shortly after, the topography is created with rather violent processes such as large outbursts, impacts, or self-reorganization of the nucleus constituents. These effects leave behind large topographic features on the scale of several hundred meters. The cumulative distribution of these heights is quite shallow with a power index equal to or above -1.5. As the comet enters the inner Solar System, and gets eroded by activity and insolation, the topography crumbles, and the power law steepens, down to a power index equal to or lower than -2.3. Beyond that, the topography is erased and only boulders, pebbles, and dust remain. Constraining the limit at which the transition from cliffs to boulders takes place may provide important clues about the material properties. However, it also requires a precise mapping of boulder distributions as a function of erosion rate and a better understanding of the fragmentation processes, which are beyond the scope of this article. This steepening of the power law may be partially balanced, or even counteracted, by dust transport. We know from observations \citep{thomas2015,hu2017} and modelling \citep{thomas2015b,lai2016} that at least one meter of dust is deposited on regions north of +30$^{\circ}~$ of latitude, when ejected from the southern areas at perihelion. This amounts to at least 10 metres since 67P entered the inner Solar System. This deposition would preferentially erase the smaller cliffs, and therefore make the power law shallower. Therefore the smallest power index in Figure \ref{fig:map_erosion} may not be fully representative of a primordial surface. Hence, we postulate that the original surface is more likely to look like regions of 67P that are at the same time poorly eroded and at the edge of the dust blanket (roughly between latitudes +20$^{\circ}~$ and +30$^{\circ}~$). This would correspond to the sharp cliffs/pits in the Seth region on the big lobe, or the rim of the Hatmehit basin on the small lobe. \subsection{Comparison to other bodies} These results allow us to directly compare 67P with other comets. As Table \ref{tab:all_bodies} shows, 67P's power index for cliff heights is similar to what has been measured on 81P/Wild 2, but shallower than on 9P/Tempel 1. This is fairly consistent with observations of active pits measured by \cite{vincent2015b}, which concluded that deep pits are most likely to be found in comets that have only recently entered the inner Solar System. Smaller features like boulders appear towards the end of this crumbling erosion, and therefore should display a steeper power law, which is observed on 67P and 103P. Our model suggests that 67P and 81P have experienced a similar level of erosion, while comets like 9P or the hyper-active 103P are more eroded. This is in agreement with our understanding of the dynamical history of these objects: both 67P and 81P, for instance, have entered the inner Solar System only recently \citep{brownlee2004,krolikowska2003,ip2016}.
\cite{birch2017} reached a similar conclusion on the primitive state of 67P, from their analysis of several types of morphological features. We note that one must be cautious with such comparisons as observations of other comets were acquired at much lower resolution and often describe the diameter of features rather than their height. Nonetheless, height is typically a linear function of the feature breadth (i.e. crater depth/diameter = 0.2 on most solar system bodies) and therefore should share the same power law, but this is not guaranteed. Indeed, large boulders on 67P appear less spherical than small ones (height<diameter), and pits show at least two populations with different depth-to-diameter ratios, dominated by the eroded population \citep{vincent2015b}. We summarize our concept of cometary surface evolution in Figure \ref{fig:comet_evolution}, setting the boundaries for primordial and eroded comet topographies at p-indices -1.5 and -2.3. These values are not yet well constrained and require that more comets be characterized. The model is, however, qualitatively useful as it shows that a measure of the topography can provide a direct link to the level of evolution of the surface, as crater size distributions are, for instance, used on rocky bodies. The two boundaries can be interpreted as follows: \begin{itemize} \item The higher boundary (p-index $\simeq$ -1.5) defines a primordial cometary surface, shortly after formation. It reflects the events that originally shaped the topography and could provide insight into, for instance, the size and velocity distribution of small impactors in the primordial Kuiper Belt, or the intensity of early cometary outbursts. This is not an exhaustive list of potential processes; the exploration of more comets, but also of Trojans and KBOs, may help us constrain this limit. \item The lower boundary is more related to intrinsic properties of the cometary material. In essence, it describes the erosion limit at which a topographic feature cannot keep its core constituents together any more, and breaks apart into boulders, pebbles, and dust. \end{itemize} \begin{figure} \centering \includegraphics[width=\hsize]{figures/comet_evolution.png} \caption{A model of cometary evolution. The boundaries between the different regimes are not fully constrained and should be considered qualitatively only until more comets have been characterized. The data points describe the average cumulative power index of the topographic height distribution for 4 comets and are indicative of the progression of erosion on these bodies. Because 103P is too active to sustain much topography, the number given here describes the size distribution of boulders. A full list of power laws considered in this paper is given in Table \ref{tab:all_bodies}.} \label{fig:comet_evolution} \end{figure} \section{Conclusion} We have performed an unbiased statistical analysis of the distribution of large-scale topographic features on comet 67P/Churyumov-Gerasimenko. We find that: \begin{itemize} \item The cliff size distribution follows a power law with an average cumulative $p_{index} = -1.69 \pm 0.02$. This slope varies from region to region, and correlates well with the orbital erosion rate of the surface. The more eroded the area, the steeper the power law. \item This observation can be generalized to other comets.
We argue that topography provides a direct measure of a comet's erosional history: primordial cometary surfaces are characterized by the presence of large cliffs, while eroded cometary surfaces are broken into smaller blocks. \item The power law of the topography cumulative height distribution can be used as a measure of how primitive a comet nucleus is, in a similar fashion as crater counts are used to date rocky surfaces. \item Our measurements suggest that the p-index of topographic height on a comet that has recently entered the Inner Solar system will be around -1.5. Dynamically older comets will display a larger power index, up to about -2.3. \item Topographic features which lay outside this size distribution may be the signature of some local heterogeneity in the material properties, but most likely encountered very unusual erosion patterns due to their geographic position on the comet. \end{itemize} \section*{acknowledgements} OSIRIS was built by a consortium led by the Max Planck Institut f\"ur Sonnensystemforschung, G\"ottingen, Germany, in collaboration with CISAS, University of Padova, Italy, the Laboratoire d'Astrophysique de Marseille, France, the Instituto de Astrofisica de Andalucia, CSIC, Granada, Spain, the Scientific Support Office of the European Space Agency, Noordwijk, The Netherlands, the Instituto Nacional de Tecnica Aeroespacial, Madrid, Spain, the Universidad Politecnica de Madrid, Spain, the Department of Physics and Astronomy of Uppsala University, Sweden, and the Institut f\"ur Datentechnik und Kommunikationsnetze der Technischen Universit\"at Braunschweig, Germany. The support of the national funding agencies of Germany (DLR), France (CNES), Italy (ASI), Spain (MINECO), Sweden (SNSB), and the ESA Technical Directorate is gratefully acknowledged. We thank the Rosetta Science Ground Segment at ESAC, the Rosetta Mission Operations Centre at ESOC and the Rosetta Project at ESTEC for their outstanding work enabling the science return of the Rosetta Mission. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 686709. This research has made use of NASA's Astrophysics Data System Bibliographic Services. The image of a Menger sponge was retrieved from \mbox{https://en.wikipedia.org/wiki/Menger\_sponge}. Tri-dimensional visualizations in this paper are provided by the software \textit{shapeViewer} \mbox{(www.comet-toolbox.com)}. \bibliographystyle{mnras}
{ "redpajama_set_name": "RedPajamaArXiv" }
6,978
Franz Staeble (1876–1950) was a German physicist (optician) and entrepreneur in the photographic industry.
He received his doctorate in 1901 from the Ludwig-Maximilians-Universität München with a thesis on the investigation of surfaces whose lines of curvature remain lines of curvature under orthogonal projection onto another surface. The referee was Carl Louis Lindemann, the co-referee Gustav A. Bauer.
Together with Erwin Lihotzky, Staeble developed further his 1907 considerations on the optical analysis of spherically uncorrected optical systems. These considerations led in 1919 to the formulation of the requirement known today as the Staeble-Lihotzky condition.
On 5 May 1908 in Munich, Franz Staeble founded, together with Alfred Neumann and O. Jaeger, the Staeble-Werk, which was relocated to Schongau in 1944. Together with his co-partner A. Neumann he published Das photographische Objektiv. Seine Beurteilung und Ausnutzung, whose third, improved edition had to be issued as early as 1924.
Works by Franz Staeble
Franz Staeble: Untersuchung der Flächen, deren Krümmungs-Linien bei orthogonaler Projection auf eine andere Fläche wieder Krümmungs-Linien werden. Inaugural dissertation at the LMU München. Printed by C. Wolf, 1901.
Franz Staeble: Über den Zusammenhang von Komma und Sinusbildung bei sphärisch nicht korrigierten Systemen. In: Zeitschrift für Instrumentenkunde. 1907, pp. 241–249.
Alfred Neumann, Franz Staeble: Das photographische Objektiv. Seine Beurteilung und Ausnutzung. First edition. Ed. Liesegangs Verlag M. Eger, 1909 (Photographischer Bücherschatz, volume 8) (2nd edition 1919, third improved edition 1924)
Franz Staeble: Isoplanatische Korrektion und Proportionalitätsbedingung. In: Münchner Sitzungsberichte. 1919, pp. 163–196.
Franz Staeble: Die Seidel'schen Bildfehler bei Beschränkung auf die erste Potenz der Linsendicken. In: Abhandlungen der Bayerischen Akademie der Wissenschaften. München 1935, 32 pp.
External links
References
Physicist (20th century)
German
Born 1876
Died 1950
Male
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,579
Rose McConnell Long, born in Greensburg (Indiana) and died in Boulder (Colorado), was an American politician. A member of the Democratic Party, she was a senator from Louisiana between 1936 and 1937. She was the wife of the politician Huey Long.
Biography
Bibliography
"Rose McConnell Long", in Women in Congress, 1917-2006, Washington, D.C., Government Printing Office, 2006.
Appendices
Notes and references
Sources
Related articles
List of United States senators from Louisiana
United States Senate
Women in the United States Senate
External links
Born in April 1892
Born in Decatur County (Indiana)
Died in May 1970
Died at age 78
Died in Boulder
Female United States senator
United States senator from Louisiana
Spouse of a political figure
Woman pioneer in politics
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,381
Los hombres malo is an album by the Outlaws, released by Arista Records in 1982. The record was recorded at Axis Sound Studio in Atlanta, Georgia, and at the Chicago Recording Company in Chicago, Illinois (United States).
Track listing
Side A
Side B
The track All Roads is credited in some sources to Sammy Hagar and Jim Peterik, in others to Freddie Salem, and in still others to Rick Cua and Sammy Hagar.
Personnel
Hughie Thomasson - Fender guitar, banjo, vocals, backing vocals and chorus
Freddie Salem - Gibson guitar, vocals, backing vocals and chorus
Rick Cua - eight-string Kramer electric bass, backing vocals and chorus
David Dix - drums, percussion (Zildjian cymbals and Remo heads)
Guests
Gary Lyons - keyboards, producer, backing vocals and chorus
Dave The Professor Lane - fiddle
Carol Bristow - backing vocals and chorus
Lu Moss - backing vocals and chorus
Notes
External links
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,887
Q: gcc -Wl options: cannot pass a comma inside I want to increase my stack and heap commit size in a PE i386 file.
If I do:
gcc -Wl,--stack,100000000,10000 -o stack.exe stack.c 
it does not work.

A: You should use the -Xlinker option, which is equivalent to the -Wl option:
gcc -Xlinker --stack=100000000,10000 -o stack.exe stack.c 
Unlike -Wl, which splits its argument at every comma, -Xlinker passes its argument to the linker unchanged, so the comma inside the value is preserved.
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,128
\section{Introduction} Let $(X, \mathbf{Z})$ be a random vector in $\mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+1}$, $d \ge 1$. We assume that $(X, \mathbf{Z})$ has a joint density on $\mathbb{R}^{d+1}$. If we want to predict $X$ using $\mathbf{Z}$ we usually formulate the following regression problem: \begin{eqnarray}\label{eq:RegMdl} X = m(\mathbf{Z}) + \epsilon, \end{eqnarray} where $m(\mathbf{z}) = \mathbb E(X|\mathbf{Z} = \mathbf{z})$ is the conditional mean of $X$ given $\mathbf{Z} = \mathbf{z}$ and $\epsilon := X - m(\mathbf{Z})$ is the {\it residual} (although $\epsilon$ is usually called the error, and its estimate the residual, for this paper we feel that the term residual is more appropriate). Typically we further assume that the residual $\epsilon$ is {\it independent} of $\mathbf{Z}$. However, intuitively, we are just trying to break the information in $(X,\mathbf{Z})$ into two parts: a part that contains all relevant information about $X$, and the ``residual'' (the left over) which does not have anything to do with the relationship between $X$ and $\mathbf{Z}$. In this paper we address the following question: given any random vector $(X, \mathbf{Z})$ how do we define the notion of a ``residual'' of $X$ on $\mathbf{Z}$ that matches with the above intuition? Thus, formally, we want to find a function $\varphi: \mathbb{R}^{d+1} \to \mathbb{R}$ such that the residual $\varphi(X, \mathbf{Z})$ satisfies the following two conditions: \begin{enumerate} \item[(C.1)] $\;\;\;\;\;$ the residual $\varphi(X, \mathbf{Z})$ is independent of the predictor $\mathbf{Z}$, i.e., \begin{eqnarray*}\label{eq:Indep} \varphi(X, \mathbf{Z}) \perp \! \! \! \perp \mathbf{Z}, \qquad \mbox{and } \end{eqnarray*} \item[(C.2)] $\;\;\;\;\;$ the information content of $(X, \mathbf{Z})$ is the same as that of $( \varphi(X, \mathbf{Z}), \mathbf{Z} )$, i.e., \begin{equation}\label{eq:Info} \sigma(X, \mathbf{Z}) = \sigma( \varphi(X, \mathbf{Z}), \mathbf{Z} ), \end{equation} where $\sigma(X, \mathbf{Z})$ denotes the $\sigma$-field generated by $X$ and $\mathbf{Z}$. We can also express~\eqref{eq:Info} as: there exists a measurable function $h : \mathbb{R}^{d+1} \to \mathbb{R} $ such that \begin{equation}\label{eq:GenX} X = h(\mathbf{Z}, \varphi(X, \mathbf{Z})); \end{equation} see e.g., Theorem 20.1 of~\cite{Bill95}. \end{enumerate} In this paper we propose a notion of a residual that satisfies the above two conditions, under any joint distribution of $X$ and $\mathbf{Z}$. We investigate the properties of this notion of residual in Section~\ref{sec:NPResid}. We show that this notion indeed reduces to the usual residual (error) in the multivariate normal regression model. Further, we use this notion of residual to develop a test for conditional independence. Suppose now that $(X,Y,\mathbf{Z})$ has a joint density on $\mathbb{R} \times \mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+2}$. The assumption of conditional independence means that $X$ is independent of $Y$ given $\mathbf{Z}$, i.e., $X \perp \! \! \! \perp Y |\mathbf{Z}$. Conditional independence is an important concept in modeling causal relations (\cite{dawid79}, \cite{Pearl00}), in graphical models (\cite{Lauritzen96}; \cite{koller09}), in economic theory (see \cite{Chiappori00}), and in the literature of program evaluations (see \cite{Heckman97}) among other fields. 
Traditional methods for testing conditional independence are either restricted to the discrete case (\cite{Lauritzen96}; \cite{Agresti02}) or impose simplifying assumptions when the random variables are continuous (\cite{Lawrance76}). However, recently there have been a few nonparametric testing procedures proposed for testing conditional independence without assuming a functional form between the distributions of $X,Y$, and $\mathbf{Z}$. \cite{SuWhite07} consider testing conditional independence based on the difference between the conditional characteristic functions, while \cite{SuWhite08} use the Hellinger distance between the conditional densities of $X$ given $(Y, \mathbf{Z})$ and of $X$ given $\mathbf{Z}$ to test for conditional independence. A test based on estimation of the maximal nonlinear conditional correlation is proposed in \cite{Huang10}. \cite{B11} develops a test based on the partial copula. \cite{KerCondInd07} propose a measure of conditional dependence of random variables, based on normalized cross-covariance operators on reproducing kernel Hilbert spaces; \cite{Z12} propose another kernel-based conditional independence test. \cite{poczos12} extend the concept of distance correlation (developed by \cite{SzekelyRizzoBakirov07} to measure dependence between two random variables or vectors) to characterize conditional dependence. \cite{SR14} investigate a method that is easy to compute and can capture non-linear dependencies but does not completely characterize conditional independence; also see~\cite{GW12} and the references therein. In Section~\ref{sec:TestCondInd} we use the notion of residual defined in Section~\ref{sec:NPResid} to show that the conditional independence between $X$ and $Y$ given $\mathbf{Z}$ is equivalent to the mutual independence of three random vectors: the residuals of $X$ on $\mathbf{Z}$ and $Y$ on $\mathbf{Z}$, and $\mathbf{Z}$. We reduce this testing of mutual independence to a one-sample multivariate goodness-of-fit test. We further propose a modification of the easy-to-implement \textit{energy} statistic-based method (\cite{SzekelyRizzo05}; also see \cite{SzekelyRizzo13}) to test the goodness-of-fit; see Section~\ref{sec:TestMutInd}. In Section~\ref{sec:sub_test_cond} we use our notion of nonparametric residual and the proposed goodness-of-fit test to check the null hypothesis of conditional independence. Moreover, we describe a bootstrap scheme to approximate the critical value of this test. In Section \ref{sec:simul} we compare the finite sample performance of the procedure proposed in this paper with that of other available methods in the literature through a simulation study. We end with a brief discussion, Section~\ref{sec:Disc}, where we point to some open research problems and outline an idea, using the proposed residuals, to define (and test) a nonparametric notion of partial correlation. \section{A nonparametric notion of residual}\label{sec:NPResid} Conditions (C.1)--(C.2) do not necessarily lead to a unique choice for $\varphi$. To find a meaningful and unique function $\varphi$ that satisfies conditions (C.1)--(C.2) we impose the following natural restrictions on $\varphi$. We assume that \begin{enumerate} \item[(C.3)] $\;\;\;\;\;$ $x \mapsto \varphi(x,\mathbf{z})$ is strictly increasing in its support, for every fixed $\mathbf{z} \in \mathbb{R}^d$. \end{enumerate} Note that condition (C.3) is a slight strengthening of condition (C.2). Suppose that a function $\varphi$ satisfies conditions (C.1) and (C.3).
Then any strictly monotone transformation of $\varphi(\cdot, \mathbf{z})$ would again satisfy (C.1) and (C.3). Thus, conditions (C.1) and (C.3) do not uniquely specify $\varphi$. To handle this identifiability issue, we replace condition (C.1) with (C.4), described below. First observe that, by condition (C.1), the conditional distribution of the random variable $\varphi(X, \mathbf{Z})$ given $\mathbf{Z} = \mathbf{z}$ does not depend on $\mathbf{z} $. We assume that \begin{enumerate} \item[(C.4)] $\;\;\;\;\;$ $\varphi(X, \mathbf{Z})| \mathbf{Z} = \mathbf{z}$ is uniformly distributed, for all $\mathbf{z} \in \mathbb{R}^d$. \end{enumerate} Condition (C.4) is again quite natural -- we usually assume that the residual has a fixed distribution, e.g., in regression we assume that the (standardized) residual in normally distributed with zero mean and unit variance. Note that condition (C.4) is slightly stronger than (C.1) and will help us uniquely identify $\varphi$. The following result shows that, indeed, under conditions (C.3)--(C.4), a unique $\varphi$ exists and gives its form. \begin{lemma}\label{lem:NPError} Let $F_{X|\mathbf{Z}}(\cdot| \mathbf{z})$ denote the conditional distribution function of $X|\mathbf{Z} = \mathbf{z}$. Under conditions (C.3) and (C.4), we have a unique choice of $\varphi(x, \mathbf{z})$, given by \begin{eqnarray*} \varphi(x, \mathbf{z}) = F_{X|\mathbf{Z}}(x| \mathbf{z}). \end{eqnarray*} Also, $h(\mathbf{z}, u)$ can be taken as \begin{eqnarray}\label{eq:InvCondDist} h(\mathbf{z}, u) =F^{-1}_{X|\mathbf{Z}}(u|\mathbf{z}). \end{eqnarray} \end{lemma} \begin{proof} Fix $\mathbf{z}$ in the support of $\mathbf{Z}$. Let $u \in (0, 1)$. Let us write $\varphi_\mathbf{z}(x) = \varphi(x, \mathbf{z})$. By condition (C.4), we have $\mathbb P[ \varphi(X, \mathbf{Z}) \le u | \mathbf{Z} = \mathbf{z} ] = u$. On the other hand, by (C.3), $$\mathbb P[ \varphi(X, \mathbf{Z}) \le u | \mathbf{Z} = \mathbf{z} ] = \mathbb P[ X \le \varphi_\mathbf{z}^{-1}(u) | \mathbf{Z} = \mathbf{z} ] = F_{X|\mathbf{Z}}( \varphi_\mathbf{z}^{-1}(u) | \mathbf{z}) .$$ Thus, we have $$ F_{X|\mathbf{Z}}( \varphi_\mathbf{z}^{-1}(u) | \mathbf{z}) = u, \ \ \mbox{ for all } u \in (0,1), $$ which is equivalent to $ \varphi_\mathbf{z}(x) = F_{X|\mathbf{Z}}(x| \mathbf{z})$. Let $h$ be as defined in~\eqref{eq:InvCondDist}. Then, $$ h(\mathbf{z}, \varphi(x, \mathbf{z})) = F^{-1}_{X|\mathbf{Z}}( \varphi(x, \mathbf{z}) |\mathbf{z}) = F^{-1}_{X|\mathbf{Z}}( F_{X|\mathbf{Z}}(x| \mathbf{z}) |\mathbf{z}) = x, $$ as required. \end{proof} Thus from the above lemma, we conclude that in the nonparametric setup, if we want to have a notion of a residual satisfying conditions (C.3)--(C.4) then the residual has to be $F_{X|\mathbf{Z}}(X| \mathbf{Z})$. The following remarks are in order now. \begin{remark} Let us first consider the example when $(X, \mathbf{Z})$ follows a multivariate Gaussian distribution, i.e., $$ \begin{pmatrix} X \\ \mathbf{Z}\end{pmatrix} \sim N \left ( \begin{pmatrix} \mu_1 \\ \bm{\mu}_2 \end{pmatrix}, \Sigma := \begin{pmatrix} \sigma_{11}& \bm{\sigma}_{12}^\top \\ \bm{\sigma}_{12} & \Sigma_{22} \end{pmatrix} \right), $$ where $\mu_1 \in \mathbb{R}$, $\mu_2 \in \mathbb{R}^d$, $\Sigma$ is a $(d+1) \times (d+1)$ positive definite matrix with $\sigma_{11} > 0$, $\sigma_{12} \in \mathbb{R}^{d \times 1}$ and $\Sigma_{22} \in \mathbb{R}^{d \times d}$. 
Then the conditional distribution of $X$ given $\mathbf{Z} = \mathbf{z}$ is $N(\mu_1 + \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{z} - \bm{\mu}_2), \sigma_{11} - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} \bm{\sigma}_{12} )$. Therefore, we have the following representation in the form of~\eqref{eq:RegMdl}: $$ X = \mu_1 + \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{Z} - \bm{\mu}_2) + \Big( X - \mu_1 - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{Z} - \bm{\mu}_2) \Big) $$ where the usual residual is $X - \mu_1 - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{Z} - \bm{\mu}_2)$, which is known to be independent of $\mathbf{Z}$. In this case, using Lemma~\ref{lem:NPError}, we get $$ \varphi(X, \mathbf{Z}) = \Phi \left(\frac{ X - \mu_1 - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} (\mathbf{Z} - \bm{\mu}_2) }{\sqrt{\sigma_{11} - \bm{\sigma}_{12}^\top \Sigma_{22}^{-1} \bm{\sigma}_{12}} } \right),$$ where $\Phi(\cdot)$ is the distribution function of the standard normal distribution. Thus $\varphi(X, \mathbf{Z})$ is just a fixed strictly increasing transformation of the usual residual, and the two notions of residual essentially coincide. \\ \end{remark} \begin{remark} The above notion of residual does not extend so easily to the case of discrete random variables. Conditions (C.1) and (C.2) are equivalent to the fact that $\sigma(X, \mathbf{Z})$ factorizes into two sub $\sigma$-fields as $\sigma(X, \mathbf{Z}) = \sigma( \varphi(X, \mathbf{Z}) ) \otimes \sigma(\mathbf{Z} )$. This may not be always possible as can be seen from the following simple example. Let $(X, Z)$ take values in $\{0, 1\}^2$ such that $\mathbb P[X = i, Z =j] >0$ for all $i, j \in \{0, 1\}$. Then it can be shown that such a factorization exists if and only if $X$ and $Z$ are independent, in which case $\varphi(X, Z) = X$. \\ \end{remark} \begin{remark} Lemma~\ref{lem:NPError} also gives an way to generate $X$, using $\mathbf{Z}$ and the residual. We can first generate $\mathbf{Z}$, following its marginal distribution, and an independent random variable $U \sim \mathcal{U}(0,1)$ (here $\mathcal{U} (0,1)$ denotes the Uniform distribution on $(0,1)$) which will act as the residual. Then~\eqref{eq:GenX}, where $h$ is defined in~\eqref{eq:InvCondDist}, shows that we can generate $X = F^{-1}_{X|\mathbf{Z}}(U|\mathbf{Z})$. \\ \end{remark} In practice, we need to estimate the residual $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ from observed data, which can be done both parametrically and non-parametrically. If we have a parametric model for $F_{X|\mathbf{Z}}(\cdot|\cdot)$, we can estimate the parameters, using e.g., maximum likelihood, etc. If we do not want to assume any structure on $F_{X|\mathbf{Z}}(\cdot|\cdot)$, we can use any nonparametric smoothing method, e.g., standard kernel methods, for estimation; see~\cite{B11} for such an implementation. We will discuss the estimation of the residuals in more detail in Section~\ref{sec:NPEst}. \section{Conditional independence}\label{sec:TestCondInd} Suppose now that $(X,Y,\mathbf{Z})$ has a joint density on $\mathbb{R} \times \mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+2}$. In this section we state a simple result that reduces testing for the conditional independence hypothesis $H_0: X \perp \! \! \! \perp Y |\mathbf{Z}$ to a problem of testing mutual independence between three random variables/vectors that involve our notion of residual. We also briefly describe a procedure to test the mutual independence of the three random variables/vectors (see Section~\ref{sec:TestMutInd}). 
We start with the statement of the crucial lemma. \begin{lemma}\label{lem:CondInd} Suppose that $(X,Y,\mathbf{Z})$ has a continuous joint density on $\mathbb{R}^{d+2}$. Then, $X \perp \! \! \! \perp Y |\mathbf{Z}$ if and only if $F_{X|\mathbf{Z}}(X|\mathbf{Z}), F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ and $\mathbf{Z}$ are mutually independent. \end{lemma} \begin{proof} Let us make the following change of variable $$ (X,Y,\mathbf{Z}) \mapsto (U,V,\mathbf{Z}) := (F_{X|\mathbf{Z}}(X), F_{Y|\mathbf{Z}}(Y), \mathbf{Z}).$$ The joint density of $(U,V,\mathbf{Z})$ can be expressed as \begin{equation}\label{eq:trans} f_{(U,V,\mathbf{Z})}(u,v,\mathbf{z}) = \frac{f(x,y,\mathbf{z})}{f_{X|\mathbf{Z}=\mathbf{z}}(x) f_{Y|\mathbf{Z}=\mathbf{z}}(y)} = \frac{f_{(X, Y)|\mathbf{Z}=\mathbf{z}}(x, y)f_\mathbf{Z}(\mathbf{z})}{f_{X|\mathbf{Z}=\mathbf{z}}(x) f_{Y|\mathbf{Z}=\mathbf{z}}(y)}, \end{equation} where $x = F_{X|\mathbf{Z}=\mathbf{z}}^{-1}(u)$, and $y = F_{Y|\mathbf{Z}=\mathbf{z}}^{-1}(v)$. Note that as the Jacobian matrix is upper-triangular, the determinant is the product of the diagonal entries of the matrix, namely, $f_{X|\mathbf{Z} = \mathbf{z}}(x)$, $f_{Y|\mathbf{Z}=\mathbf{z}}(y)$ and $1$. If $X \perp \! \! \! \perp Y |\mathbf{Z}$ then $f_{(U,V,\mathbf{Z})}(u,v,\mathbf{z})$ reduces to just $f_\mathbf{Z}(\mathbf{z})$, for $u, v \in (0,1)$, from the definition of conditional independence, which shows that $U,V,\mathbf{Z}$ are independent (note that it is easy to show that $U,V$ are marginally $\mathcal{U}(0,1)$, the Uniform distribution on $(0,1)$). Now, given that $U,V,\mathbf{Z}$ are independent, we know that $f_{(U,V,\mathbf{Z})}(u,v,\mathbf{z}) = f_\mathbf{Z}(\mathbf{z})$ for $u, v \in (0,1)$, which from (\ref{eq:trans}) easily shows that $X \perp \! \! \! \perp Y |\mathbf{Z}$. \end{proof} $\vspace{0.000in}$ \begin{remark}\label{rem:berg} Note that the joint distribution of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ is known as the \textit{partial copula}; see e.g.,~\cite{B11}.~\cite{B11} developed a test for conditional independence by testing mutual independence between $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$. However, as the following example illustrates, the independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ is not enough to guarantee that $X \perp \! \! \! \perp Y |\mathbf{Z}$. Let $W_1, W_2, W_3$ be i.i.d.~$\mathcal{U}(0,1)$ random variables. Let $X = W_1+W_3$, $Y =W_2$ and $Z = \mathrm{mod}(W_1 + W_2, 1)$, where `$\mathrm{mod}$' stands for the modulo (sometimes called modulus) operation that finds the remainder of the division $W_1 + W_2$ by 1. Clearly, the random vector $(X, Y, Z)$ has a continuous density on $[0,2] \times [0,1]^2$. Note that $Z$ is independent of $W_i$, for $i = 1,2$, and that $W_3$ is independent of $(W_1, W_2)$. Hence, $X, Y$ and $Z$ are pairwise independent. Thus, $F_{X|\mathbf{Z}}(X) = F_X(X)$ and $F_{Y|\mathbf{Z}}(Y) = F_Y(Y)$, where $F_X$ and $F_Y$ are the marginal distribution functions of $X$ and $Y$, respectively. From the independence of $X$ and $Y$, $F_X(X)$ and $F_Y(Y)$ are independent. On the other hand, the value of $W_1$ is clearly determined by $Y$ and $Z$, i.e., $W_1 = Z-Y$ if $Y \le Z$ and $W_1 = Z-Y+1$ if $Y>Z$. Consequently, $X$ and $Y$ are not conditionally independent given $Z$. To see this, note that for every $z \in (0,1)$, $$\mathbb E[ X| Y, Z=z ] = \left\{ \begin{array}{ll} z-Y + 0.5 & \mbox{if $Y \le z$}\\ z - Y +1 + 0.5& \mbox{if $Y > z$,}\end{array} \right.$$ which obviously depends on $Y$.
In Remark~\ref{Bergsma2} we illustrate this behavior with a finite sample simulation study. \\ \end{remark} \begin{remark} We can extend the above result to the case when $X$ and $Y$ are random vectors in $\mathbb{R}^p$ and $\mathbb{R}^q$, respectively. In that case we define the conditional multivariate distribution transform $F_{X|\mathbf{Z}}$ by successively conditioning on the co-ordinate random variables, i.e., if $X = (X_1,X_2)$ then we can define $F_{X|\mathbf{Z}}$ as $(F_{X_2|X_1,\mathbf{Z}}, F_{X_1|\mathbf{Z}})$. With this definition, Lemma~\ref{lem:CondInd} still holds. \\ \end{remark} To use Lemma~\ref{lem:CondInd} to test the conditional independence between $X$ and $Y$ given $\mathbf{Z}$, we need to first estimate the residuals $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ from observed data, which can be done by any nonparametric smoothing procedure, e.g., standard kernel methods (see Section~\ref{sec:NPEst}). Then, any procedure for testing the mutual independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z}), F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ and $\mathbf{Z}$ can be used. In this paper we advocate the use of the {\it energy} statistic (see \cite{RizzoSzekely10}), described briefly in the next subsection, to test the mutual independence of three or more random variables/vectors. \subsection{Testing mutual independence of three or more random vectors with known marginals}\label{sec:TestMutInd} Testing independence of two random variables (or vectors) has received much recent attention in the statistical literature; see e.g.,~\cite{SzekelyRizzoBakirov07}, \cite{KerIndepALT05}, and the references therein. However, testing the mutual independence of three or more random variables is more complicated and we could not find any easily implementable method in the statistical literature. In this sub-section, we test the mutual independence of three or more random variables (vectors) with known marginals by converting the problem to a one-sample goodness-of-fit test for multivariate normality. In the following we briefly describe our procedure in the general setup. Suppose that we have $r \ge 3$ continuous random variables (or vectors) $V_1, \ldots, V_r$ and we want to test their mutual independence. We assume that we know the marginal distributions of $V_1, \ldots, V_r$; without loss of generality, we can assume that $V_i$'s are standard Gaussian random variables (vectors). We write $T:= (V_1, V_2, \ldots, V_r) \in \mathbb{R}^k$ and introduce $T_{\text{ind}} := (V_1^*, V_2^*, \ldots, V_r^*)$ where $V_j^*$ is an i.i.d.~copy of $V_j$, $j=1,2, \ldots, r$, but in $T_{\text{ind}}$ the coordinates, $V_1^*, V_2^*, \ldots, V_r^*$, are independent. To test the mutual independence of $V_1, V_2, \ldots, V_r$ all we need to test now is whether $T$ and $T_{\text{ind}}$ are identically distributed. 
If we observe a sample from $T$, we can test for the equality of distributions of $T$ and $T_{\text{ind}}$ through a one-sample goodness-of-fit test for the standard multivariate normal distribution, i.e., $$H_0: T \sim N(\textbf{0},\textbf{I}_{k\times k}),$$ as $T_{\text{ind}}\sim N(\textbf{0},\textbf{I}_{k\times k})$, where $\textbf{I}_{k \times k}$ is the identity matrix of order $k$ and $\textbf{0} := (0, \ldots, 0) \in \mathbb{R}^{k}.$ In this paper we consider the following {\it energy} statistic (see~\cite{SzekelyRizzo05} and \cite{RizzoSzekely10}) \begin{equation}\label{eq:EStat} \Lambda(T) = 2 \mathbb E \|T - T_{\text{ind}}\| - \mathbb E \|T - T'\| - \mathbb E \|T_{\text{ind}} - T_{\text{ind}}'\|, \end{equation} where $T'$ and $T_{\text{ind}}'$ are i.i.d.~copies of $T$ and $T_{\text{ind}}$, respectively ($\|\cdot\|$ denotes the Euclidean norm). Note that $\Lambda(T)$ is always nonnegative, and equals 0 if and only if $T$ and $T_{\text{ind}}$ are identically distributed, i.e., if and only if $V_1, V_2, \ldots, V_r$ are mutually independent (see Corollary 1 of~\cite{SzekelyRizzo05}). Suppose now that we observe $n$ i.i.d.~samples $T_1, \ldots, T_n$ of $T$. The (scaled) sample version of the energy statistic for testing the goodness-of-fit hypothesis is \begin{equation}\label{eq:teststat} \mathcal{E}_n(T_1,\ldots, T_n) :=2 \sum_{i=1}^n \mathbb E \|T_i-T_\text{ind}\| - \frac{1}{n} \sum_{i=1}^n\sum_{j=1}^{n} \|T_i-T_j\|- n \mathbb E \|T_\text{ind}-T^\prime_\text{ind}\|. \end{equation} Note that the first expectation in the above display is with respect to $T_\text{ind}$. Under the null hypothesis of mutual independence, the test statistic $\mathcal{E}_n(T_1,\ldots, T_n)$ has a limiting distribution, as $n \rightarrow \infty,$ while under the alternative hypothesis $\mathcal{E}_n(T_1,\ldots, T_n)$ tends to infinity; see Section 4 of \cite{SzekelyRizzo05} and Section 8 of \cite{SzekelyRizzo13} for detailed discussions. Thus any test that rejects the null for large values of $\mathcal{E}_n(T_1,\ldots, T_n)$ is consistent against general alternatives. As $T_\text{ind}$ and $T^\prime_\text{ind}$ are i.i.d.~$N(\textbf{0}, \textbf{I}_{k\times k})$ random variables, the statistic $\mathcal{E}_n(T_1,\ldots, T_n)$ is easy to compute: $$\mathbb E\|T_\text{ind}-T_\text{ind}^\prime\| =\sqrt{2}\mathbb E \|T_\text{ind}\|= 2 \frac{\Gamma \big(\frac{d+3}{2}\big)}{\Gamma \big( \frac{d+2}{2}\big)}$$ and for any $a\in \mathbb{R}^{d+2}$, we have $$\mathbb E\|a-T_\text{ind}\| =\frac{\sqrt{2}\Gamma \big(\frac{d+3}{2}\big)}{\Gamma \big( \frac{d+2}{2}\big)} + \sqrt{\frac{2}{\pi}} \sum_{k=0}^\infty \frac{(-1)^k}{k!\, 2^k} \frac{|a|^{2k+2}}{(2k+1)(2k+2)} \frac{\Gamma \big( \frac{d+3}{2}\big)\Gamma \big( k+\frac{3}{2}\big)}{\Gamma \big( k+\frac{d}{2}+2\big)}.$$ The expression for $\mathbb E\|a-T_\text{ind}\|$ follows from the discussion in \cite{Zacks81} (see page 55). See the source code ``energy.c'' in the \textit{energy} package of the R language (\cite{Rlang}) for a fast implementation of this; also see \cite{SzekelyRizzo13}. \subsection{Testing conditional independence} \label{sec:sub_test_cond} In this sub-section we use Lemma \ref{lem:CondInd} and the test for mutual independence proposed in the previous sub-section (Section~\ref{sec:TestMutInd}) to test for the conditional independence of $X$ and $Y$ given $\mathbf{Z}.$ We start with a simple lemma. \begin{lemma} \label{lem:CondIndeqiv} Suppose that $(X,Y,\mathbf{Z})$ has a continuous joint density on $\mathbb{R}^{d+2}$. Then $X \perp \! \! \!
\perp Y |\mathbf{Z}$ if and only if $$W:=(F_{X|\mathbf{Z}}(X|\mathbf{Z}), F_{Y|\mathbf{Z}}(Y|\mathbf{Z}), F_\mathbf{Z}(\mathbf{Z})) \sim \mathcal{U}([0,1]^{d+2}),$$ where $F_\mathbf{Z}(\mathbf{z}) = \left(F_{Z_d|Z_{d-1},\ldots, Z_1}(z_d|z_{d-1},\ldots, z_1), \ldots, F_{Z_2|Z_1}(z_2|z_1), F_{Z_1}(z_1)\right),$ $\mathbf{Z} = (Z_1,\ldots, Z_d),$ $\textbf{z}=(z_1,\ldots, z_d),$ and $\mathcal{U}([0,1]^{d+2})$ denotes the Uniform distribution on $[0,1]^{d+2}$. \end{lemma} \begin{proof} Note that by Lemma~\ref{lem:CondInd}, $X \perp \! \! \! \perp Y |\mathbf{Z}$ if and only if $F_{X|\mathbf{Z}}(X|\mathbf{Z}),$ $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ and $\mathbf{Z}$ are mutually independent. Furthermore, note that $F_{X|\mathbf{Z}}(X|\mathbf{Z}),$ $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ are i.i.d.~$\mathcal{U}(0,1)$ random variables. Thus the proof of the lemma will be complete if we show that $F_\mathbf{Z}(\mathbf{Z}) \sim \mathcal{U}([0,1]^d)$. As each of $F_{Z_d|Z_{d-1},\ldots, Z_1}(Z_d|Z_{d-1},\ldots, Z_1), \ldots, F_{Z_2|Z_1}(Z_2|Z_1),$ and $F_{Z_1}(Z_1)$ is a $\mathcal{U}(0,1)$ random variable, it is enough to show that they are mutually independent. For simplicity of notation, we will only prove the independence of $F_{Z_2|Z_1}(Z_2|Z_1)$ and $F_{Z_1}(Z_1)$; the independence of the other terms can be proved similarly. Note that \begin{align*} \mathbb P(F_{Z_2|Z_1}(Z_2|Z_1) \le z_2 | F_{Z_1}(Z_1)=z_1) ={}& \mathbb P(F_{Z_2|Z_1}(Z_2|Z_1) \le z_2 | Z_1=F_{Z_1}^{-1}(z_1))\\ ={}&\mathbb P\Big(Z_2 \le F_{Z_2|Z_1}^{-1}\big(z_2| F_{Z_1}^{-1}(z_1)\big) \Big| Z_1=F_{Z_1}^{-1}(z_1)\Big)\\ ={}&F_{Z_2|Z_1} \Big(F_{Z_2|Z_1}^{-1}\big(z_2| F_{Z_1}^{-1}(z_1)\big) |F_{Z_1}^{-1}(z_1)\Big)\\ ={}&z_2. \end{align*} As the conditional distribution of $F_{Z_2|Z_1}(Z_2|Z_1)$ given $ F_{Z_1}(Z_1) = z_1$ does not depend on $z_1$, we have that $F_{Z_2|Z_1}(Z_2|Z_1)$ and $F_{Z_1}(Z_1)$ are independent. \end{proof} Let us now assume $X \perp \! \! \! \perp Y |\mathbf{Z}$ and define \begin{equation*} \label{eq:T_def} W:=\left(F_{X|\mathbf{Z}}(X|\mathbf{Z}), F_{Y|\mathbf{Z}}(Y|\mathbf{Z}), F_{Z_d|\mathbf{Z}_{-d}}(Z_d|\mathbf{Z}_{-d}), \ldots, F_{Z_2|Z_1}(Z_2|Z_1), F_{Z_1}(Z_1)\right). \end{equation*} By Lemma~\ref{lem:CondIndeqiv}, we have \begin{equation*} \label{eq:eq_dist} W\stackrel{\mathcal D}{=} (U_1, \dots, U_{d+2}), \end{equation*} where $U_1, U_2, \ldots, U_{d+2}$ are i.i.d.~$\mathcal{U}(0,1)$ random variables. An equivalent formulation is \begin{equation} \label{eq:mvn} H_0: T:= \Phi^{-1} (W) \stackrel{\mathcal D}{=} N(\textbf{0}, \textbf{I}_{(d+2) \times (d+2)}), \end{equation} where $\Phi$ is the distribution function corresponding to the standard Gaussian random variable, and for any $\textbf{a} \in \mathbb{R}^{d+2}$, $\Phi^{-1} (\textbf{a}) := (\Phi^{-1}(a_1), \ldots, \Phi^{-1}(a_{d+2})).$ We observe i.i.d.~data $\{(X_i,Y_i,\mathbf{Z}_i): i = 1,\ldots, n\}$ from the joint distribution of $(X,Y,\mathbf{Z})$ and we are interested in testing $X \perp \! \! \! \perp Y |\mathbf{Z}$. Suppose first that the distribution functions $F_{X| \mathbf{Z}}(\cdot|\cdot), F_{Y| \mathbf{Z}}(\cdot|\cdot),$ and $F_{\mathbf{Z}}(\cdot)$ are known. Then we have an i.i.d.~sample $T_1,\ldots, T_n$ from $T$, where \begin{equation} \label{eq:data_ver} T_i:=\Phi^{-1}(F_{X|\mathbf{Z}}(X_i|\mathbf{Z}_i), F_{Y|\mathbf{Z}}(Y_i|\mathbf{Z}_i), F_{\mathbf{Z}}(\mathbf{Z}_i)). \end{equation} Now we can use the test statistic \eqref{eq:teststat} to test the hypothesis of conditional independence.
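As an illustration of how this statistic can be evaluated, the following Python sketch computes $\mathcal{E}_n$ for a sample $T_1, \ldots, T_n$; it is not the implementation used for the results reported later, and it approximates the two expectations against $T_\text{ind}$ by Monte Carlo draws from the standard multivariate normal distribution instead of the closed-form expressions given in Section~\ref{sec:TestMutInd}.
\begin{verbatim}
import numpy as np

def energy_statistic(T, n_mc=20000, seed=0):
    # T: (n, k) array holding T_1,...,T_n; expectations against
    # T_ind ~ N(0, I_k) are approximated by Monte Carlo.
    rng = np.random.default_rng(seed)
    n, k = T.shape
    T_ind = rng.standard_normal((n_mc, k))

    # 2 * sum_i E||T_i - T_ind||
    term1 = 2.0 * sum(np.mean(np.linalg.norm(T_ind - t, axis=1)) for t in T)
    # (1/n) * sum_{i,j} ||T_i - T_j||
    term2 = np.linalg.norm(T[:, None, :] - T[None, :, :], axis=2).sum() / n
    # n * E||T_ind - T_ind'||
    half = n_mc // 2
    term3 = n * np.mean(np.linalg.norm(T_ind[:half] - T_ind[half:2*half],
                                       axis=1))
    return term1 - term2 - term3

# toy usage: 200 observations in k = d + 2 = 3 dimensions drawn under the null
T = np.random.default_rng(1).standard_normal((200, 3))
print(energy_statistic(T))
\end{verbatim}
A faster exact evaluation can use the series expansion quoted in Section~\ref{sec:TestMutInd} or the \textit{energy} package in R.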
As the true conditional distribution functions $F_{X| \mathbf{Z}}, F_{Y| \mathbf{Z}},$ and $F_{\mathbf{Z}}$ are unknown, we can replace them by their estimates $\widehat F_{X|\mathbf{Z}}, \widehat F_{Y|\mathbf{Z}},$ and $\widehat F_{\mathbf{Z}}$, respectively, where $\widehat F_\mathbf{Z} (\mathbf{z}) =\left( \widehat F_{Z_d|Z_{d-1},\ldots, Z_1}(z_d|z_{d-1},\ldots, z_1), \ldots,\widehat F_{Z_2|Z_1}(z_2|z_1), \widehat F_{Z_1}(z_1)\right)$; see Section \ref{sec:NPEst} for more details on how to compute these estimates. Let us now define \begin{equation} \label{eq:data_hat_ver} \widehat T_i:=\Phi^{-1}(\widehat F_{X|\mathbf{Z}}(X_i|\mathbf{Z}_i), \widehat F_{Y|\mathbf{Z}}(Y_i|\mathbf{Z}_i), \widehat F_{\mathbf{Z}}(\mathbf{Z}_i)), \end{equation} for $i = 1, 2,\ldots, n.$ We will use \begin{equation} \label{eq:en_hat} \widehat{\mathcal{E}_n}:= \mathcal{E}_n(\hat{T}_1, \ldots, \hat{T}_n) \end{equation} to test the hypothesis of conditional independence. \subsubsection{Approximating the asymptotic distribution through bootstrap} The limiting behavior of $\mathcal{E}_n$ is not very useful in computing the critical value of the test statistic $\widehat{\mathcal{E}_n}$ proposed in the previous sub-section. In a related but slightly different problem studied in~\cite{sen14}, it was shown that the analogous versions of $\mathcal{E}_n$ and $\widehat{\mathcal{E}_n}$ have very different limiting distributions. In independence testing problems it is quite standard and natural to approximate the critical value of the test, under $H_0$, by using a permutation test; see e.g.,~\cite{SzekelyRizzo09}, \cite{gretton07}. However, in our problem, as we use $\hat{T}_i$ instead of $T_i$, the permutation test is not valid; see~\cite{sen14}. In this sub-section, we propose a bootstrap procedure to approximate the distribution of $\widehat{\mathcal{E}_n}$, under the null hypothesis of conditional independence. We now describe the bootstrap procedure. Let $\mathbb{P}_{n,\mathbf{Z}}$ be the empirical distribution of $\mathbf{Z}_1, \ldots,\mathbf{Z}_n$. \begin{enumerate}[label=\bfseries Step \arabic*:] \item Generate an i.i.d.~sample $\{U_{i,1}^*, U_{i,2}^*, \mathbf{Z}^*_{n,i}\}_{ 1 \le i \le n}$ of size $n$ from the measure $\mathcal{U}(0,1) \times \mathcal{U}(0,1) \times \mathbb{P}_{n,\mathbf{Z}}$; recall that $\mathcal{U}(0,1)$ denotes the Uniform distribution on $(0,1).$ \item The bootstrap sample is then $\{X^*_{n,i}, Y^*_{n,i}, \mathbf{Z}^*_{n,i}\}_{ 1 \le i \le n},$ where \begin{equation} X^*_{n,i} := \widehat{F}^{-1}_{X|Z}(U_{i,1}^*|\mathbf{Z}_{n,i}^*) \qquad \text{and} \qquad Y^*_{n,i} := \widehat{F}^{-1}_{Y|Z}(U_{i,2}^*|\mathbf{Z}_{n,i}^*). \end{equation} \item Use the bootstrap sample $\{X^*_{n,i}, Y^*_{n,i}, \mathbf{Z}^*_{n,i}\}_{ 1 \le i \le n}$ to get smooth estimators $\widehat F^*_{X|\mathbf{Z}}, \widehat F^*_{Y|\mathbf{Z}},$ and $\widehat F^*_{\mathbf{Z}}$ of $F_{X| \mathbf{Z}}, F_{Y| \mathbf{Z}},$ and $F_{\mathbf{Z}}$; see Section \ref{sec:NPEst} for a discussion on smooth estimation of the conditional distribution functions. \item Compute the bootstrap test statistic $\mathcal{E}^*_n:= \mathcal{E}_n(\widehat{T}^*_1, \ldots, \widehat{T}^*_n) $ where {\small \begin{equation} \widehat{T}^*_i= \Phi^{-1} \big(\widehat F^*_{X|\mathbf{Z}}(X^*_{n,i}|\mathbf{Z}_{n,i}^*), \widehat F^*_{Y|\mathbf{Z}}(Y^*_{n,i}|\mathbf{Z}^*_{n,i}), \widehat F^*_{\mathbf{Z}}(\mathbf{Z}^*_{n,i})\big).
\end{equation} } \end{enumerate} We can now approximate the distribution of $\widehat{\mathcal{E}_n}$ by the conditional distribution of $\mathcal{E}_n^*$ given the data $\{X_i, Y_i,\mathbf{Z}_i\}_{ 1 \le i \le n}.$ In Section \ref{sec:simul} we study the finite sample performance of the above procedure through a simulation study and illustrate that our procedure indeed yields a valid test for conditional independence. \begin{remark} In steps 1 and 2 above, we generate the bootstrap sample from the approximated joint distribution of $(X, Y, \mathbf{Z})$ under the null hypothesis of conditional independence. In steps 3 and 4 we mimic the evaluation of the test statistic $\widehat{\mathcal{E}_n}$ using the bootstrap sample. This is an example of a model based bootstrap procedure.~\cite{sen14} prove the consistency of a similar bootstrap procedure in a related problem. As the sample size increases the approximated joint distribution of $(X, Y, \mathbf{Z})$ (under $H_0$) would converge to the truth and the bootstrap distribution would replicate the distribution of $\widehat{\mathcal{E}_n}$. \end{remark} \subsection{Nonparametric estimation of the residuals}\label{sec:NPEst} In this sub-section we discuss procedures to nonparametrically estimate $ F_{X| \mathbf{Z}}, F_{Y| \mathbf{Z}},$ and $F_{\mathbf{Z}}$ given data $\{X_i, Y_i,\mathbf{Z}_i\}_{ 1 \le i \le n}.$ The nonparametric estimation of the conditional distribution functions would involve smoothing. In the following we briefly describe the standard approach to estimating the conditional distribution functions using kernel smoothing techniques (also see~\cite{LeeLeePark06}, \cite{YuJones98}, and \cite{HallWolffYao99}). For notational simplicity, we restrict to the case $d=1$, i.e., $\mathbf{Z}$ is a real-valued random variable. Given an i.i.d.~sample of $\{(X_i,Z_i): i = 1,\ldots, n\}$ from $f_{X,Z}$, the joint density of $(X,Z)$, we can use the following kernel density estimator of $f_{X,Z}$: $$ \widehat f_n(x,z) = \frac{1}{n h_{1,n} h_{2,n}} \sum_{i=1}^n k \left( \frac{x - X_i}{h_{1,n}} \right) k \left( \frac{z - Z_i}{h_{2,n}} \right)$$ where $k$ is a symmetric probability density on $\mathbb{R}$ (e.g., the standard normal density function), and $h_{i,n}, i=1,2$, are the smoothing bandwidths. It can be shown that if $n h_{1,n} h_{2,n} \rightarrow \infty$ and $\max\{h_{1,n}, h_{2,n}\} \rightarrow 0$, as $n \rightarrow \infty,$ then $\widehat f_n(x,z) \stackrel{P}{\rightarrow} f_{X,Z}(x,z)$. In fact, the theoretical properties of the above kernel density estimator are very well studied; see e.g., \cite{FG96} and \cite{EM05} and the references therein. For the convenience of notation, we will write $h_{i,n}$ as $h_i$, $i=1,2$. 
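As a small illustration (a sketch only; the implementation used later in the paper relies on the \texttt{np} package in R), the estimator $\widehat f_n$ defined above can be coded directly, with the bandwidths $h_1$ and $h_2$ assumed to be given, e.g.\ chosen by the cross-validation discussed below.
\begin{verbatim}
# Sketch of the bivariate kernel density estimator f_hat_n(x, z) defined above
# (d = 1, standard normal kernel k); the bandwidths h1, h2 are assumed given.
import numpy as np
from scipy.stats import norm

def f_hat(x, z, X, Z, h1, h2):
    """Evaluate f_hat_n at a single point (x, z) from data arrays X and Z."""
    n = len(X)
    return np.sum(norm.pdf((x - X) / h1) * norm.pdf((z - Z) / h2)) / (n * h1 * h2)
\end{verbatim}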
The conditional density of $X$ given $Z$ can then be estimated by $$\widehat f_{X|Z}(x|z) = \frac{\widehat f_n(x,z)}{\widehat f_Z(z)} = \frac{\frac{1}{n h_{1} h_{2}} \sum_{i=1}^n k \left( \frac{x - X_i}{h_{1}} \right) k \left( \frac{z - Z_i}{h_{2}} \right)}{\frac{1}{n h_{2}} \sum_{i=1}^n k \left( \frac{z - Z_i}{h_{2}} \right)}.$$ Thus the conditional distribution function of $X$ given $Z$ can be estimated as $$ \widehat F_{X|Z}(x|z) = \frac{\int_{-\infty}^x \widehat f_n(t,z) \; dt}{\widehat f_Z(z)} = \frac{\frac{1}{n h_{2}} \sum_{i=1}^n K \left( \frac{x - X_i}{h_{1}} \right) k \left( \frac{z - Z_i}{h_{2}} \right)}{\frac{1}{n h_{2}} \sum_{i=1}^n k \left( \frac{z - Z_i}{h_{2}} \right)} = \sum_{i=1}^n w_i(z) K \left( \frac{x - X_i}{h_{1}} \right) $$ where $K$ is the distribution function corresponding to $k$ (i.e., $K(u) = \int_{-\infty}^u k(v) \; dv$) and $w_i(z) = \frac{\frac{1}{n h_{2}} k \left( \frac{z - Z_i}{h_{2}} \right)}{\frac{1}{n h_{2}} \sum_{j=1}^n k \left( \frac{z - Z_j}{h_{2}} \right)}$ are weights that sum to one for every $z$. The least-squares cross-validation method proposed in \cite{hall2004cross} can be used to find the optimal choices for $h_1$ and $h_2.$ For general $d$, the optimal parameters must satisfy $h_1 \sim n^{-2/(d+4)}$ and $h_2 \sim n^{-1/(d+4)};$ see Section 6.2 of \cite{LiRacine07} and \cite{lilira13} for a thorough discussion.

\begin{remark}\label{Bergsma2} Now we provide empirical evidence for the failure of the test proposed in~\cite{B11} in the example discussed in Remark~\ref{rem:berg}. We plot (see Figure~\ref{fig:berg}) the histogram of $p$-values obtained from the proposed test (see Section~\ref{sec:sub_test_cond}) and that of the $p$-values obtained from testing the independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ (using their estimates $\widehat F_{X|\mathbf{Z}}(\cdot|\cdot)$ and $\widehat F_{Y|\mathbf{Z}}(\cdot|\cdot)$). We use the distance covariance test statistic (see \citet{SzekelyRizzoBakirov07}) to test for the independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$. Figure~\ref{fig:berg} demonstrates that a test for mutual independence of $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ can fail to capture the conditional dependence between $X$ and $Y$ given $\mathbf{Z}$. \end{remark}

\begin{figure}[h!] \includegraphics[scale=.8]{berg_1000_cv_all.pdf} \caption{Histograms of $p$-values (estimated using 1000 bootstrap samples) over $1000$ independent replications. Here, for $i=1,\ldots,200$, $\{X_i,Y_i,Z_i\}$ are i.i.d.~samples from the example discussed in Remark \ref{rem:berg}.} \label{fig:berg} \end{figure}

\section{Simulation}\label{sec:simul} We now investigate the finite sample performance of the testing procedure developed in this paper through a simulation study. We also compare the performance of our testing procedure to those proposed in \cite{KerCondInd07} and \cite{Z12}. We denote the testing procedure proposed in \cite{KerCondInd07} by $CI_{perm}$ and use $KCI$ to denote the kernel-based conditional independence test proposed in \cite{Z12}. To illustrate and compare the performance of different testing procedures, we consider the following sampling scenario borrowed from \cite{Z12}.
Let us assume that $X$ and $Y$ are only dependent on $Z_1$ (the first coordinate of $\mathbf{Z}$) and that all other conditioning variables are independent of $X,Y,$ and $Z_1.$ We assume that $\mathbf{Z} \sim N_d(\textbf{0}, \sigma^2_z \textbf{I}_{d\times d})$, $X:= W+ Z_1+ \epsilon,$ and $Y:= W+ Z_1+ \epsilon^\prime,$ where $\epsilon, \epsilon^\prime,$ and $W$ are three independent mean zero Gaussian random variables. Moreover, we assume that $\epsilon, \epsilon^\prime,$ and $W$ are independent of $\mathbf{Z},$ $var(\epsilon)=var(\epsilon^\prime)=\sigma^2_E,$ and $var(W)= \sigma^2_W,$ where for any real random variable $V$, $var(V)$ denotes its variance. Note that $X \perp \! \! \! \perp Y |\mathbf{Z}$ if and only if $\sigma_W=0.$ In our finite sample simulations we fixed $\sigma_E= 0.3$ and $\sigma_z=0.2$. We generate $500$ i.i.d.~samples $\{X_i, Y_i, \mathbf{Z}_i\}_{1 \le i \le 500}$ for each of $d=1, 3,$ and $5$ and for different values of $\sigma_W.$ For each such sample, we use 1000 bootstrap replicates to estimate the $p$-value of the proposed test procedure. We have used the ``\texttt{np}'' package (see \cite{np}) in R (\cite{R}) to estimate the conditional distribution functions, with the tuning parameters chosen using least-squares cross-validation (see Section~\ref{sec:NPEst}). In Figure \ref{fig:power_curve} we plot the power (estimated using 500 independent experiments) of the testing procedure proposed in Section \ref{sec:sub_test_cond} along with those of $CI_{perm}$ and $KCI$ as $\sigma_W$ increases from $0$ to $0.25$, for dimensions $1, 3,$ and $5$. We fix the significance level at $0.05$.

\begin{figure}[h!] \includegraphics[width=.65\paperwidth]{Final_fig_2.pdf} \caption{The power (at significance level $0.05$) of the three testing procedures for sample size $n=500$ as the dimension $d$ and $\sigma_W$ increase.} \label{fig:power_curve} \end{figure}

The distribution of the $KCI$ test statistic under the null hypothesis of conditional independence is estimated with a Monte Carlo procedure suggested in \cite{Z12}. To implement the $CI_{perm}$ and the $KCI$ testing procedures, we have used the MATLAB source codes provided in \cite{Z12}; the source code can be found at \url{http://people.tuebingen.mpg.de/kzhang/KCI-test.zip}. The R language codes used to implement our procedure are available at \url{http://stat.columbia.edu/~rohit/research.html}. Observe that for $CI_{perm}$, the probability of type I error is much greater than the significance level for $d=3$. Furthermore, for $d=5$, it fails to detect the alternative for all values of $\sigma_W$. The performance of $CI_{perm}$ is sensitive to the dimension of the conditioning variable. The probability of type I error for both the proposed and the $KCI$ testing procedures is around the specified significance level. Moreover, the powers of $KCI$ and the proposed test increase to $1$ as $\sigma_W$ increases. Overall, we think that for this simulation scenario the $KCI$ method has the best performance.

\section{Discussion}\label{sec:Disc} Given a random vector $(X, \mathbf{Z})$ in $\mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+1}$ we have defined the notion of a nonparametric residual of $X$ on $\mathbf{Z}$ as $F_{X|\mathbf{Z}}(X|\mathbf{Z})$, which is always independent of the predictor $\mathbf{Z}$. We have studied the properties of the nonparametric residual and showed that it indeed reduces to the usual residual in a multivariate normal regression model.
However, nonparametric estimation of $F_{X|\mathbf{Z}}(\cdot|\cdot)$ requires smoothing techniques, and hence suffers from the curse of dimensionality. A natural way of mitigating this curse of dimensionality could be to use dimension reduction techniques in estimating the residual $F_{X|\mathbf{Z}}(X|\mathbf{Z})$. Another alternative would be to use a parametric model for the conditional distribution function. Suppose now that $(X,Y,\mathbf{Z})$ has a joint density on $\mathbb{R} \times \mathbb{R} \times \mathbb{R}^d = \mathbb{R}^{d+2}$. We have used this notion of residual to show that the conditional independence between $X$ and $Y$, given $\mathbf{Z}$, is equivalent to the mutual independence of the residuals $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$ and the predictor $\mathbf{Z}$. We have used this result to propose a test for conditional independence, based on the energy statistic.

We can also use these residuals to define a nonparametric notion of partial correlation. The partial correlation of $X$ and $Y$ measures the degree of association between $X$ and $Y$, removing the effect of $\mathbf{Z}$. In the nonparametric setting, this reduces to measuring the dependence between the residuals $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$. We can use distance covariance (\cite{SzekelyRizzoBakirov07}), or any other measure of dependence, for this purpose. We can also test for zero partial correlation by testing for the independence of the residuals $F_{X|\mathbf{Z}}(X|\mathbf{Z})$ and $F_{Y|\mathbf{Z}}(Y|\mathbf{Z})$. \newline \noindent {\bf Acknowledgements:} The second author would like to thank Arnab Sen for many helpful discussions, and for his help in writing parts of the paper. He would also like to thank Probal Chaudhuri for motivating the problem. The research of the second and third authors is supported by the National Science Foundation. \bibliographystyle{elsarticle-harv}
Ankit Security And Vision Pvt Ltd is located in New Delhi, Delhi, India. Although it is not the only firm dealing in cameras, it ranks among the better ones, and dealing in cameras is not unusual for a company based in Delhi. Cameras are not its only product line; control systems are another area of expertise. Alarms and panels can also be obtained from Ankit Security And Vision Pvt Ltd, and the product range extends to barriers, CCTV, outdoor equipment and access controls.
Q: liquidfun 1.1.0 ndk-build 2 compile errors

Alright, I'm trying to build liquidfun for the first time and I'm having trouble getting past compilation errors. I'm not too savvy with C/C++, so I'm not sure how to fix these so I can build. The error is:

[armeabi-v7a] Compile++ arm : liquidfun <= b2ParticleSystem.cpp
jni/../Box2D/Particle/b2ParticleSystem.cpp:2734:2: error: ignoring return value of function declared with warn_unused_result attribute [-Werror,-Wunused-result]
std::remove_if(m_bodyContactBuffer.Begin(),
^~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.
make: *** [obj/local/armeabi-v7a/objs/liquidfun/Box2D/Particle/b2ParticleSystem.o] Error 1

I was able to find a solution for a shifting error that I encountered, but unfortunately nothing on this issue. I was hoping someone could help me get this thing built so I can start playing with this engine.

A bit more detail:
LiquidFun version: 1.1.0
Build instructions: LiquidFun build instructions for Android

Any help would be greatly appreciated.

A: I've solved this by searching for answers on GitHub. The warn_unused_result error can be resolved by adding (void) at the beginning of line 2734:

(void)std::remove_if(m_bodyContactBuffer.Begin(),

Source: https://github.com/google/liquidfun/issues/70
This is a Japanese political party founded by the incumbent Governor of Tokyo, Yuriko Koike, on 25 September 2017. On 7 May 2018, a right-wing faction that opposed the merger with the Democratic Party decided to re-establish the party under the same name but with a new leadership and a conservative ideology.

History
The party was created with a view to the Japanese general election of 22 October 2017, following Koike's success in the 2017 Tokyo prefectural election of 3 July 2017, in which her local party Tomin First no Kai displaced the governing Liberal Democratic Party (LDP). Among its objectives, it seeks administrative reform, the release of government information into the public domain, the replacement of the Abenomics policy, and constitutional reform.

Owing to the party's poor performance in the 2017 election, Koike resigned from the party presidency on 14 November 2017.

References

External links
Official website (in Japanese)

Political parties of Japan
Political parties founded in 2017
Conservative parties
# How do I know when to use "the" versus "a" versus "∅" as an article on a noun?

With proper nouns, we don't use the except for river names, newspaper names, etc.

I want to know why we use the with White House. I mean, under which rule can we categorize it? What could other similar examples for that rule be?

In general, we tend to use the as an article for nouns and proper nouns where it is clear from context that only one thing belongs to that description (or when we are talking about the archetypal thing of a set of things in the abstract). Otherwise we would normally use a to signify that we mean a single element out of a group of things that all fit the description.

## General rules for articles

For example:

The President of the United States is Barack Obama.

There is only one sitting US President, so the is the correct article to use here, but...

President Obama is a Democratic President.

There is more than one President from the Democratic Party, so a fits better here.

The most populous country is China.

*We use the here because there is only one "most populous country"*.

The President lives in the White House

Although there are many houses that are white, and there are even several copies of the famous White House, the context here makes it pretty clear which one we're talking about, and there's no ambiguity. Similarly it is clear which President we are talking about from context, and there's only one that fits the context.

This isn't even limited to proper nouns:

I'm going to turn on the TV

Although there are many TVs in the world, it is clear from context which one I mean.

I am looking forward to the launch of the PlayStation 4

*The second the here is talking about the singular product called "PlayStation 4", of which there is one (even though there are many units that will be sold), and the first the refers to the singular event which is the launch of that product*.

I own a PlayStation 3

Since I own only one of many PlayStation3 units

## Rules for omitting articles

Here are some rules of thumb that will help you get by.

1. Items with numerical count or position normally have articles omitted (more detail)

I came first in the pie-making contest!

(X) I came a first in the pie-making contest!

(X) I came the first in the pie-making contest!

Here are six apples.

(X) Here are a six apples.

(X) Here are the six apples.

Unless you are specifically referring to this group, which should be clear from context:

Here are the six apples [that we were talking about earlier].

2: Avoid putting adjectives on noun-phrases with omitted articles, but if you do prefix with an adjective, the article must be reintroduced:

Buckingham Palace is world-famous!

We visited the world-famous Buckingham Palace yesterday!

3: Never omit an article if the noun-phrase contains of:

The British Parliament sits in the Palace of Westminster.

The British parliament sits in Westminster Palace.

4: Always omit articles when talking about people by name.

(X) I love the Oprah Winfrey.

I love Oprah Winfrey.

But not, if the name of the individual is just part of a noun-phrase that isn't a person:

I love the Oprah Winfrey show

(X) I love Oprah Winfrey show

5: Omit articles when talking about companies by name or the buildings named after them.

She works at Microsoft.

(X) She works at a Microsoft.

(X) She works at the Microsoft.

When there are many places that share the same company name (e.g. McDonalds, Starbucks), then you can either omit the article, or use a. You can also always use the when talking about a specific place that is clear from context.

I work at Starbucks. (good)

(X) I work at the Starbucks. (not acceptable without context).

I work at the Starbucks [that we were talking about before].

I work at a Starbucks. (acceptable).

6: When talking about the singular names of most continents, territories, islands, settlements (including cities, towns, ports, villages, forts, and garrisons), states, lakes, waterfalls, bays, mountains, languages, sports, academic subjects, or street names we omit the article:

I visited London, which is in Europe.

(X) I visited the London last year.

We went to the Himalayas to climb Everest!

We'll meet at the top of Victoria Street and then we'll go to Brixton later.

The Great Lakes is a collection of lakes, so has an article.

But there are exceptions that must be learned by rote, such as

The International Criminal Court is based in the Hague.

I grew up in the Bronx.

7: We usually omit the article when talking about countries:

Some countries omit the, some don't. By default, the article is omitted, but it is reintroduced if:

• The country name is derived from a plural:

The United States, the US (but not America), The United Kingdom, the UK (but not Great Britain, England etc), the USSR (but not Russia), The United Arab Emirates

The Azores, the Canaries, the Falklands, the Galapagos, the Bahamas, the Dao Yu Islands

The Philippines, the Netherlands.

The Russian Federation, The British Empire, The Roman Empire.

• Remember: if the country is being given its full name including style of government, it regains an article by rule (3) above.

The People's Republic of China (but "I visited China")

The Islamic Republic of Iran (but "Iran is a beautiful country").

And there are a whole bunch of random exceptions:

The Vatican has an article, but Vatican City doesn't.

The Gambia uses an article, although it is sometimes used without one.

The Democratic Republic of the Congo is often (unofficially) called "The Congo".

Before 1991, Ukraine was referred to as the Ukraine (short for "the Ukrainian Soviet Socialist Republic"), but now has no article. In the mid 20th Century, "The Argentine" became known as Argentina.

We used to say "The Lebanon" (from the literal translation of HaLevanon), but now "Lebanon" is normally given with no article.

8: Some nouns are used without articles to indicate that they are being used in idiomatic form:

She went to bed (means she went to her bed in order to sleep)

She went to work (she went to her workplace to perform her job).

She went to school/university/college (means she attended her school. Also variants such as Summer school and Sunday school can be used this way.)

She went to church (means she attended church. Other religious places cannot be used this way. Articles are normally used for "He went to the temple/cathedral/mosque/synagogue etc")

She went to war (means she went to fight as part of the military abroad).

She went to prison/jail (means she was convicted of a crime, and was incarcerated).

She went to hospital (she was admitted to hospital. esp. British English).

She went to court (she was brought before a judicial court, either as the plaintiff or as the defendant, or possibly as a lawyer or judge presiding).

These nouns can be used with articles when the idiomatic form is not wanted and we want to refer to the noun in it's "ordinary" form:

The bed is broken.

The church is next to the synagogue.

The war in Iraq started in 2003.

I went to the prison in order to better understand the psyche of the inmates. (as opposed to I went to prison in order to..., which would imply that the speaker was incarcerated, rather than just visited).

I went to the hospital on Tuesday to pick up Auntie May.

These forms just need to be learnt.

9: Most names which are constructed from a possessive form of a person, city, county or suburb drop the article.

We visited Beeston Castle the other day!

Nelson's Column is 169 ft 3 in tall!

Queen Elizabeth I is buried at Westminster Abbey.

Kensington Gardens are very famous.

Seattle Tower is more commonly known as "The Space Needle".

Munich Cathedral is very impressive.

I'm writing a letter to Westminster City Council.

Sydney Opera house is amazing!

We visited St. Paul's Cathedral on Tuesday.

But

The Science Museum and the Natural History Museum are right next to each other.

The Tate Modern is quite expensive, but well worth the trip.

The British Library is free.

10: Nouns which are personal qualities are usually given with no article (see StoneyB's answer here)

He certainly has talent. (He is talented)

Nobody denies her courage. (She is courageous)

Sartorius lacks generosity. (Sartorius is not generous)

11: Most other nouns take an article, and the ones that don't tend to inconsistently have their articles used or not used by native speakers, so don't worry too much about them, and learn them by rote as and when they come up.

For example:

The Wikipedia article on Hagia Sophia uses the article "the" for Hagia Sophia 23 times but omits it 22 times.

"Taj Mahal" normally takes an article, but the official website uses both forms.

The Washington Monument / The Lincoln Memorial normally (but do not always) take articles.

(X) Signifies poor usage.

• on that last point, a fairly good rule of thumb is that you use an article for a country name if that name includes a common noun that means a particular type of country/government/land mass, e.g. "the People's Republic of China", "the Union of Soviet Socialist Republics", "the United Arab Emirates", "the Turks and Caicos Islands", etc. (China is a bit wierd, though, as the short form of it's name drops the article, but probably easier to memorize the exceptions.) – KutuluMike Sep 18 '13 at 14:38
• Additional note on the last one: "the Ukraine" used to be common, so you'll probably hear it on occasion, but as @MichaelEdenfield notes, the correct usage has changed. – Izkata Sep 18 '13 at 15:07
• In the OPs question he states the rule he was taught as "we don't use 'the' except for river names, newspaper names, etc" The big catch there is the "etc". The list of exactly when we use "the" and when we don't is not -- to my mind at least -- particularly consistent or obvious. It's just a set of conventions that you have to learn. Like why do we use "the" with names of rivers but not with names of lakes? I have no idea. That's just the convention. – Jay Sep 18 '13 at 20:21
• Ha ha! 20 edits! That's what you get for trying to write a complete treatise on the definite article. :^) This is a noble effort, though. – J.R. Sep 19 '13 at 12:51
• Great Lakes are, not *Great Lakes is, surely. Fun fact: in the US there's geographical variation as to whether numbered roads take an article. Where I grew up (near the Great Lakes, incidentally), we'd say "driving on 280" or "take 355 north", but in Southern California people tend to say "the 60" or "take the 101 north". – snailcar Sep 22 '13 at 7:04

Sometimes there is no fixed rule. The Purdue OWL gives some guidelines about "Geographical" uses, but it doesn't say much about buildings and landmarks. However, the White House isn't the only building that uses the definite article; that practice seems to be rather common:

• The White House
• The Pentagon
• The Empire State Building
• The Space Needle
• The Colosseum
• The Leaning Tower of Piza
• The Eiffel Tower
• The Taj Mahal
• The Pyramids

but:

• Buckingham Palace
• Westminster Abbey
• St Basil's Cathedral
• Burj Khalifa

Sometimes the article seems to be optional:

• (The) Sydney Opera House
• (The) Sears Tower

Matt has covered it very well. Let me just add:

Don't get caught up on the fact that "White House" could be read as a description, that is, a house that is white. There are many words and phrases out there that have a general meaning, but that become proper names in one particular context. When we say "The president lives in the White House", we don't mean simply that he lives in some house that happens to be white, but rather that he lives in a particular building which is called by the name "White House". Similarly, if I say, "General Jones works in the Pentagon", I am not using the word "pentagon" in its general meaning as a geometrical shape, but rather to a particular building that is called by this name. I live near the Great Lakes. I'm sure there are many other lakes in the world that are great, but these particular lakes are named "the Great Lakes". This isn't limited to place names. There might be many companies in Britain that do something related to broadcasting, but if I say "the British Broadcasting Company" I am referring to one particular company that goes by that name. Etc.

As I understand it, there are two possible rules that can lead to this "the" before the phrase "White House".

1. When the speaker/writer believes that the hearer/reader knows exactly what he is referring to, e.g: The White House is white. Both the speaker and the reader know exactly which "White House" the speaker is talking about.

2. When there is only one thing referred by "the". Such as The Earth, The Sun, The 44th president of America and by extension The White House. I'm guessing there is only one White House on this planet.

• On second thought, I think you logic is not too foolproof. E.g., there is only one Mount Everest, but we don't use 'the' with it. – Ramit Sep 18 '13 at 10:14
• Yeah, cause Mount Everest is a single mountain, which English rule doesn't put any article in front of it. But if that is a mountain range like himalyas, that will be the Himalyas. For more proof you can take a look at this: more info, straightly on point 5. – Safira Sep 18 '13 at 10:23
• I am actually looking for the comparison b/w White House and Mount Everest. Both are unique on the earth (aren't they), yet one uses a determiner and one doesn't. – Ramit Sep 18 '13 at 10:37
• It is not true that there is just a White House: In Moscow, there is a government building that is called White House. – kiamlaluno Sep 18 '13 at 12:37
• That's a very nice logic. Does it work in reverse as well? Like, Himalyas is not derived either, still it uses an article. – Ramit Sep 18 '13 at 18:50

There is another usage worth mentioning here. Sometimes we use "the" not because there is only one thing we could be talking about, but because we want to emphasize that the one we're talking about is the one that is familiar to everyone.

For example, if I'm introduced to somebody named Brian Wilson, I might ask, "Are you the Brian Wilson?", with emphasis on the. I'm asking if he is indeed the singer from the Beach Boys, and not some other not-so-famous person who is also named Brian Wilson.

(Half of my question is answered by Safira and half by Matt, so I can't accept either of the answers. I am collaborating both the answers here so that I can accept one answer to my question.)

Rule: Out of similar things, if one is exceptionally well recognized, we use the determiner the with it.

Examples:

1. There are many white houses, but the one that belongs to the US President is second to none in popularity. So, the White House.

2. Each planet has moon(s). But the one we see every night is the earth's only natural moon. So, the moon.

3. Each solar system has sun(s) (the star(s) around which planets move). But we are normally interested in the star that gives us light during the day. So, the sun.

• The White House is not the most popular white house; it is just a building that is called White House. There is the White House in the USA, and there is the White House in Moscow; even in the USA, there is more than one building called White House. – kiamlaluno Sep 19 '13 at 12:31
• @kiamlaluno - Spot on. If they had decided to name the White House "the Presidential Manor" instead, we'd still call it the Presidential Manor. Still, the Photon makes an interesting point about the Brian Wilson. – J.R. Sep 19 '13 at 12:47
Long live an astronomer from Sainte-Victoire-de-Sorel …

Astronomer, black hole specialist, artist, and mountaineer: at only 42 years old, Stephanie Juneau already has a full biography. Born in Sainte-Victoire-de-Sorel, where she lived throughout her youth, she now thrives and lives out her passions on the other side of the border. "I really like what I do. I grew up in Sainte-Victoire in the countryside and always found myself outside running barefoot. I loved that freedom. I find it a little bit in Tucson, Arizona, but differently," says Ms. Juneau.

Glorious journey

Stephanie Juneau has had a full career. After Sainte-Victoire Elementary School, high school at Fernand-Lefebvre, and Natural Sciences at the Cégep de Sorel-Tracy, she went on to study Physics at McGill University. After her first year at university, an internship brought her to Victoria, British Columbia. There she got her first real taste of astronomy. "At the Astrophysical Institute in Victoria, when I got there, I didn't even know what a supernova was (laughs). I was a curious girl, I noticed the moon and stars, but I didn't even have a telescope. Except that I really wanted to know how it worked, I wanted to understand everything. It was perfect there, with an observatory on a hill. Just trying that and doing professional research with the computer, it turned me on," she says.

After her internship, Stéphanie Juneau returned to McGill for a second year in Physics. Her second internship took place at the same institute, but in Edinburgh, Scotland. "I was involved in designing a spectrometer, GMOS (Gemini Multi-Object Spectrograph), that is still in operation today in Hawaii. I took care of the last stage, which is making sure everything works. It was really rewarding. I even took a year off after the training period to focus on GMOS design," she explains.

Several moves

After that extra year in the United Kingdom, the Sainte-Victoire native completed her bachelor's and master's degrees at the Université de Montréal. She made trips to the Victoria observatory again, where she discovered mountain climbing. "I used to do it a lot in British Columbia, but it's not perfect with the weather. When it came time to apply for my PhD, I wanted to find a place where I could combine my passions. Either I was back in the UK or the US, but I chose Arizona. I visited, and the landscape spoke to me right away!" she said. So Stephanie Juneau moved there in 2005 and completed her PhD in 2011. For her postdoctoral work, she chose to settle in France. "I lived in Paris for five years and could have stayed there for the rest of my career! But I missed North America, which is why I returned to Arizona after that. My passion for climbing was strong and I really liked Tucson. If I could make a copy of myself, one of them would live in France and the other in Tucson (laughs)."

Full days

Today, the 42-year-old works full-time as an astronomer at NOIRLab, where she specializes in black holes. "This is one of the first topics that made me think about astronomy. A black hole is not like anything you find on Earth. I like to visualize it as a challenge. We say a black hole, but it is not a hole; it is a mass that grows larger as it absorbs matter due to gravity. It has an enigmatic aspect, and I love mystery, but not the science-fiction kind. I love real puzzles!" she says.

Stephanie Juneau thrives in all areas of her life. In addition to her job, somewhat altered by the pandemic since she has to work from home, she devotes her evenings and weekends to her passions: visual art and rock climbing. "The mountain connects me with nature. I have a 9,000-foot mountain a 45-minute drive from my house, and the advantage here is that you can do it 12 months a year. It's very hot in summer, so I climb near the top. Then in winter, I start from the bottom of the mountain."

And what about art in all of this? She describes her artistic practice as a language. "My art is my language! It's a way to express yourself. In fact, it's the only way I know of expressing myself. I visualize certain emotions, and certain expressions are reflected in my work." Stephanie Juneau has also taken some technical courses at the University of Arizona to improve her skills. She sees a close relationship with her scientific work. "Science and art go hand in hand for me. Art made me understand things in physics and vice versa. For example, I took a long time to finish a more complex canvas, and one teacher asked me to take it out because it clashed with the rest, even though it had taken so long to finish. When writing a science article, I had once spent a long time on a passage that had nothing to do with the rest. So I chose to remove or edit it, even though it took a long time to do. Art gave me a lesson in life and brought me a lot in science."

Before the pandemic, Stephanie Juneau returned to the area at least once a year to visit her family members who, for the most part, still live in the region, including her father in Sainte-Victoire and her mother in Saint-Robert. She also enjoys visiting her 91-year-old grandmother, who lives alone in her home in the area. "It is one of my favorite places to go when I go to Quebec," she concluded.
package Team4450.Robot9.Tower;

import Team4450.Lib.*;
import Team4450.Robot9.*;

import edu.wpi.first.wpilibj.*;
import edu.wpi.first.wpilibj.smartdashboard.SmartDashboard;

public class Shooter
{
    private Robot robot;
    private Talon launchMotor1;
    private Talon launchMotor2;
    private TowerControl towerControl;
    private FestoDA hoodPiston;

    Shooter(Robot robot)
    {
        try
        {
            this.robot = robot;
            towerControl = robot.towerControl;
            launchMotor1 = towerControl.launchMotor1;
            launchMotor2 = towerControl.launchMotor2;
            hoodPiston = towerControl.shootPiston;
        }
        catch (Exception e) { e.printStackTrace(Util.logPrintStream); }
    }

    /**
     * Allows you to manually control when the shooter motors spin up.
     * @param onOrOff Sets the motor On (True) or Off (False)
     */
    public void manualFire(boolean onOrOff)
    {
        if (onOrOff)
        {
            SmartDashboard.putBoolean("ShooterMotor", true);
            launchMotor1.set(1);
            launchMotor2.set(1);
        }
        else if (!onOrOff)
        {
            SmartDashboard.putBoolean("ShooterMotor", false);
            launchMotor1.set(0);
            launchMotor2.set(0);
            robot.headLight.set(Relay.Value.kOff);
        }
    }

    /**
     * This fires the launcher on the robot.
     */
    public void fire()
    {
        towerControl.pickupPiston.SetA();
        towerControl.belt.set(1);   //TODO Check this number
        Timer.delay(1);             //TODO Check this number
        towerControl.belt.set(0);
        manualFire(false);
    }

    /**
     * This adjusts the angle of the shooter tube. Accepts the strings 'retract' or 'extend'
     * @param Position The position to put the shooter hood in. Accepts 'retract' or 'extend'
     */
    public void adjustAngle(String Position)
    {
        if (Position == "retract")
        {
            hoodPiston.SetA();
            return;
        }

        if (Position == "extend")
        {
            hoodPiston.SetB();
            return;
        }
    }
}
CYPRUS: Creditors who have locus standi to petition the winding up of a Cypriot company

Section 213 of the Cyprus Companies Law, CAP. 113 ("CAP. 113"), sets out the persons who have locus standi to petition for the winding up of a company and provides, inter alia, for "…..the creditor or creditors including any contingent or prospective creditors ………..".

The terms "contingent creditor" and "prospective creditor" have not been defined in CAP. 113, and for their interpretation guidance must be sought from Cypriot and common law cases. In the English case STONEGATE SECURITIES LTD -V- GREGORY (1980) 1 Ch. 576, Lord Justice Buckley, interpreting Section 224 of the English Companies Act 1948 – which is identical to Section 213 of CAP. 113 – stated that "………the expression "contingent creditor" means a creditor in respect of a debt, which will only become due, in an event, which may, or may not occur ………….". Concerning the expression "prospective creditor" contained in Section 224 of the English Companies Act 1948, Lord Justice Buckley stated that "……….a prospective creditor is a creditor in respect of a debt, which will certainly become due in the future, either on some date, which has been already determined, or some date determinable by reference to future events …………..".

In the light of the above, the following creditors have locus standi to petition the winding up of a Cypriot company:

(a) a creditor whose claim is for a sum of money presently and unconditionally due and payable;

(b) a contingent creditor, being a creditor in respect of a debt which will only become due in an event which may, or may not, occur; and

(c) a prospective creditor, being a creditor in respect of a debt which will certainly become due in the future, either on some future date which has already been determined, or on some date determinable by reference to future events.
Extensor tendon irritation is one of the most common complications following volar locking plate osteosynthesis (VLPO) for distal radius fractures. It is most likely caused by distal screws protruding the dorsal cortex. Shorter distal screws could avoid this, yet the influence of distal screw length on the primary stability in VLPO is unknown. The aim of this study was to compare 75 to 100 % distal screw lengths in VLPO. A biomechanical study was conducted on 11 paired fresh-frozen radii. HRpQCT scans were performed to assess bone mineral density (BMD) and bone mineral content (BMC). The specimens were randomized pair-wise into two groups: 100 % (group A) and 75 % (group B) unicortical distal screw lengths. A validated fracture model for extra-articular distal radius fractures (AO-23 A3) was used. Polyaxial volar locking plates were mounted, and distal screws was inserted using a drill guide block. For group A, the distal screw tips were intended to be flush or just short of the dorsal cortex. In group B, a target screw length of 75 % was calculated. The specimens were tested to failure using a displacement-controlled axial compression test. Primary biomechanical stability was assessed by stiffness, elastic limit, and maximum force as well as with residual tilt, which quantified plastic deformation. Nine specimens were tested successfully. BMD and BMC did not differ between the two groups. The mean distal screw length of group A was 21.7 ± 2.6 mm (range: 16 to 26 mm), for group B 16.9 ± 1.9 mm (range: 12 to 20 mm). Distal screws in group B were on average 5.6 ± 0.9 mm (range: 3 to 7 mm) shorter than measured. No significant differences were found for stiffness (706 ± 103 N/mm vs. 660 ± 124 N/mm), elastic limit (177 ± 25 N vs. 167 ± 36 N), maximum force (493 ± 139 N vs. 471 ± 149 N), or residual tilt (7.3° ± 0.7° vs. 7.1° ± 1.3°). The 75 % distal screw length in VLPO provides similar primary stability to 100 % unicortical screw length. This study, for the first time, provides the biomechanical basis to choose distal screws significantly shorter then measured. Recent studies have reported complication rates following volar locking plate osteosynthesis (VLPO) for distal radius fractures of up to 18 % [1, 2]. Two of the most common complications are extensor tendon irritation and attritional tendon ruptures [1, 3, 2]. These are attributable either to direct damage during the operation (drilling, depth gauge) or secondary due to dorsodistal screw protrusion [4–6]. Dorsal screw protrusion might be an avoidable complication, especially for extra-articular fractures (AO-23 A3), which are the most common ones [7, 8]. The AO Foundation as well as Campbell's Operative Orthopaedics recommends using distal screw length 2 to 4 mm shorter than measured. However, the effect of shorter distal screws on the primary stability of the VLPOs remains unclear. Preliminary data on synthetic bones indicates that 75 % distal screw length provides comparable primary stability to 100 % unicortical screw length . Shorter distal screws are the most promising approach to avoid dorsal screw protrusion. Therefore, it is indispensable to investigate the effect of distal screw length on the primary stability of VLPO. Consequently, the aim of this study was to compare 75 to 100 % distal screw lengths in VLPO using human fresh-frozen radii and an established biomechanical fracture model for extra-articular distal radius fractures (AO-23 A3). 
The study's null hypothesis was that unicortical 100 % distal screw lengths provide superior primary stability compared to 75 % distal screw lengths in VLPO. This biomechanical study was conducted on fresh-frozen human radii using a validated fracture model for extra-articular distal radius fractures (AO-23 A3). The local ethics committee approved the study (LMU #409-13). The outcome parameters of interest were stiffness, elastic limit, maximum force, and residual tilt of the distal fragment. Eleven paired fresh-frozen radii were obtained from the Centre of Anatomy and Cell Biology, Medical University of Vienna, Austria. Radii were randomized pair-wise, side alternating into a 100 % unicortical distal screw length group (group A) and a 75 % distal screw length group (group B). They were then cut to 14-cm length. High-resolution peripheral quantitative computer tomography scans (HRpQCT, XtremeCT, Scanco Medical AG, Switzerland) were performed. Radii presenting previous fractures, severe osteoarthritis, or bone lesions were excluded. Bone mineral density (BMD) and bone mineral content (BMC) were computed to assess possible group differences. The general preparation has been outlined in detail previously . In brief, the radii were cleaned of all soft tissue and multidirectional, angular stable volar plates (APTUS 2.5 ADAPTIVE TriLock Distal Radius Plate, A-4750.61, Medartis Inc., Basel, Switzerland) were mounted just proximal to the watershed line. The plates were fixed to the radius shaft using four bicortical locking screws (Fig. 1C, screws 9, 10, 12, and 13). A drill guide block (Medartis A-2723 01/02) was mounted onto the distal plate, which assured uniform distal screw orientation. Following drilling, distal screw length was measured. Distal locking screw lengths were chosen according to the previously defined groups. For group A (100 %), the screw tips were intended to be flush or just short of the dorsal cortex (Fig. 1A). In group B (75 %), a target screw length of 75 % was calculated and rounded to the next available screw length (Fig. 1B, C, screws 1–5 and 8). Screws were available in 2-mm increments. Following distal screw insertion, a 10-mm dorsal wedge osteotomy simulating a dorsally unstable fracture was performed using an oscillating handsaw. The osteotomy location resembled the in vivo fracture location and was chosen based on previous studies [14, 13]. Care was taken to completely separate the volar cortex (1-mm gap). Each specimen was then embedded using two custom-made aluminium jigs. The load axis was defined proximally by the medullary canal and distally slightly dorsoradial to the centre of the crista subdividing the fossa lunata and scaphoidea. The proximal 40 mm of the shaft and a shallow edge of the distal articular surface of the radii were embedded in polyurethane (PUR, FDW HG, Austria) (Fig. 2B). A proximal constrained setup was used (Fig. 2). The embedded specimens were remounted to the aluminium jigs (Fig. 2A(A1)) and aligned within the material testing machine (Fig. 2A(A2); Zwick-Modell Z010/TN2A; Zwick GmbH & Co. KG, Ulm, Deutschland). Load was applied distally through a 32-mm metal sphere, which enabled free rotation of the distal fragment. It was centred in a centring bore to ensure consistent loading conditions (Fig. 2B). Three markers of a CMS20S ultrasound motion tracking system (Zebris Medical GmbH, Isny im Allgäu, Germany) were mounted to measure residual tilt of the distal fragment (Fig. 2A(A3)). 
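To make the group B screw length selection described above concrete, the following small Python sketch (ours, not the authors' analysis code; the catalogue of available lengths and the nearest-length rounding rule are assumptions for illustration) shows how a measured length could be mapped to its 75 % target in 2-mm increments.

# Illustrative sketch of the group B rule: take 75 % of the measured length and
# round to the nearest available screw length (2-mm increments assumed here).
def target_screw_length(measured_mm, available=range(10, 30, 2)):
    target = 0.75 * measured_mm
    return min(available, key=lambda length: abs(length - target))

print(target_screw_length(22))  # e.g. a 22 mm measurement -> 16 mm screw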
Specimens were tested to failure using a displacement controlled axial compression test. Following preconditioning to exclude settling effects (preload: 10 N; preconditioning: 10 cycles, 0.2mm displacement, 1 mm/s), the specimens were loaded at 1 mm/s until either a 20 % force drop or 3mm displacement was reached [13, 15]. Photographs and radiographs were taken before and after testing. Primary biomechanical stability was assessed by stiffness, elastic limit, and maximum force. These were calculated from the load-displacement curves. Data analysis was conducted automatically in Python using custom scripts as outlined in Fig. 3a. The elastic range was defined as the data range until the coefficient of determination reached its maximum (R 2 > 0.998). The elastic limit corresponded the last data point of the elastic range. Stiffness was defined as the slope of the regression line within the elastic range. Maximum force was defined as the force where the slope of the tangent line dropped below 95 % of the stiffness. In one case, the slope did not reach this threshold and the global maximum force was chosen. Residual tilt was determined using the motion tracking system to quantify the overall plastic deformation. It was defined as the angle between the initial and final testing position of the distal jig and assessed by rigid registration of the initial and final marker positions (Fig. 3b). In addition to standard descriptive statistics, independent sample t tests were conducted for all biomechanical parameters. Normality and equality of variances for those parameters were tested using the Shapiro-Wilk and F test, respectively. Screw length measurements were not normally distributed and analysed using the Mann-Whitney U test. A Bonferroni correction was applied with an adapted level of significance of 0.0125 to account for multiple testing. Two specimens were excluded, one due to previous fracture and one because of misalignment during testing. The mean age of the remaining nine pairs was 85.6 ± 11.1 years. Four donors were female. BMD and BMC did not differ between the two groups. Table 1 shows distal screw length details and statistics for each distal screw separately. Distal screw lengths were significantly greater in group A (21.7 ± 2.6 mm; range: 16 to 26 mm) compared to group B (16.9 ± 1.9 mm; range: 12 to 20 mm). In group B, screws were on average chosen 5.6 ± 0.9 mm (range: 3 to 7 mm) shorter than measured. The analysis of the biomechanical outcome parameters revealed no differences between the 75 and 100 % distal screw length group for any parameter (Table 2). Therefore, the null hypothesis had to be rejected. Additional comparison between left and right as well as female and male radii revealed no significant differences for all parameters except a greater residual fragment tilt in female specimen (7.9 ± 0.8° vs. 6.6 ± 0.7°; p = 0.001). Extensor tendon irritation and attritional tendon ruptures are two of the most common complications following VLPO. Both can be caused by distal screws protruding the dorsal cortex [4–6]. Shorter distal screws can preclude dorsal screw protrusion [9, 10]. This biomechanical study demonstrated that 75 % distal screw lengths provides similar primary stability to 100 % screw lengths in a cadaver model. The authors are only aware of two studies, investigating the effect of distal screw length on the primary stability of VLPO, both with inherent limitations. Greenberg et al. 
presented an abstract at the Annual Meeting of the AAOS comparing three different distal screw lengths: 75 %, 100 % unicortical, and bicortical. Three fresh-frozen radii were tested per group. No details were given on the biomechanical setup. No group differences were found. The small sample size and the missing information on the setup hinder data interpretation. Wall et al. compared 50, 75, and 100 % unicortical distal screw lengths in synthetic radii. No significant differences between 100 and 75 % distal screw length were reported. However, these conclusions are limited due to the use of synthetic radii in an inadequate fracture model. In general, the validity of a biomechanical study relies on the test setup used. We tried to apply a best-evidence setup based on previous experiments and literature [15, 13]. Previous setups vary in almost every aspect, i.e. boundary conditions, the fracture model, and the specimens used [17–20]. All of these have a pronounced impact on the biomechanical outcome parameters. One of these varying parameters is the location of the osteotomy mimicking dorsally unstable distal radius fractures. Its impact on the biomechanical outcome parameters has been highlighted recently . Wall et al. removed a 10-mm dorsal wedge based 10 mm proximal to Lister's tubercle [21, 19, 18]. Previous studies have removed similar sized wedges 10 to 25 mm proximal to the articular surface [22–26]. The herein applied standardized fracture model [15, 13] bases the osteotomy location on a radiographic study, which has analysed the in vivo distal fracture location in distal radius fractures . We believe that the use of a standardized fracture model [15, 13] is a strength of our study. Another decisive parameter for the validity of a biomechanical study is the type of specimen tested. Wall et al. chose a sawbone model (#1027-130, Sawbones; Pacific Laboratories Inc., Vashon, WA, USA), which, although applied in previous studies [27, 21, 28], is not recommended for biomechanical testing by the manufacturers as it does not replicate structural properties of bone. Moreover, a previous study reported a significantly different biomechanical behaviour compared to fresh-frozen radii . Consequently, the use of paired fresh-frozen radii is another strength of this study. A further advantage is the use of paired samples, which allows pair-wise, side-alternating randomization. This ensures a high homogeneity for morphometric and structural parameters. The results of our study are corroborated by comparison to literature. As outlined above, the biomechanical setups published for distal radius fractures vary significantly. This not only alters the biomechanical behaviour of the mode, which consequently leads to diverging results, but also hampers inter-study comparison. Still, similar maximum force values were reported in previous studies applying a comparable setup [19, 29]. Moreover, the herein observed maximum force values exceeded 250 N for both groups, which is usually considered the maximum force occurring during rehabilitation [30–32]. Although various biomechanical parameters associated to failure of the osteosynthesis have been assessed, the actual failure mode has not. Possible failure modes include screw-bone, screw-plate, or plate failure. These could be influenced by distal screw length. First, shorter screws reduce the screw-bone contact area, which might increase the local damage around the screws during loading and therefore influence total plastic deformation. 
In this study, residual tilt was chosen as a surrogate parameter to quantify total plastic deformation . Other studies attempted to quantify residual deformation by the displacement at the fracture gap or along the loading axis . Both parameters are considered less reliable than residual tilt due to their dependence on the specimen's geometry. The herein observed gender differences could be associated to gender differences in BMC or bone geometry. Second, shorter distal screws reduce the screws' lever arm acting on the plate. This could have an impact on the screw-plate interface. Screw-plate failure, i.e. screw push-out, is a known complication following polyaxial VLPO [36, 37]. To our best knowledge, no biomechanical study has yet analysed this failure mode. In order to get a first insight, we conducted pre- and post-testing lateral radiographs and photographs to visually evaluate screw push-out (Additional file 1). For group A, five screw push-outs (screws 1 (×1), 5 (×2), 8 (×2)) occurred in three specimens. For group B, two screw push-outs (screws 5 (×2)) occurred in two specimens. Still, screw-plate failure is not only influenced by screw length, but by various parameters, including screw orientation and bone quality. Computational analyses are needed to assess the actual load distribution within the screw-plate construct. This would help to optimize the actual load distribution and thereby increase the construct's overall stability. A further limitation might be the used axial loading protocol, as it does not account for all loading conditions during early rehabilitation. Although few authors conducted specific bending and torsion tests , most biomechanical distal radius fracture studies applied axial compression testing. Constrained axial compression also results in considerable shear forces and moments and is therefore believed to simulate all relevant forces occurring within the construct [39, 40]. Moreover, while some studies applied fatigue testing [39, 11], our goal was the assessment of primary stability, following previous studies [34, 17, 13]. Finally, the influence of distal screw length was only assessed for the most common distal radius fracture (AO-23 A3) using a biomechanical fracture model. Whether this concept can be adapted to fractures in vivo and intra-articular distal radius fractures (AO-23 C) has yet to be evaluated. This biomechanical study was able to demonstrate that 75 % distal screw length can provide similar primary stability as unicortical 100 % distal screw length in VLPO. This study, for the first time, provides the biomechanical basis to choose distal screws significantly shorter then measured. Future clinical studies are required to validate this approach in vivo and investigate on the possible reduction of dorsal screw protrusion incidences and subsequent extensor tendon problems. We would like to thank Medartis Inc. who provided the osteosynthetic material. The study was funded by a research grant of the Medical University of Munich (LMU, FöFoLe #828). We would especially like to thank Mr. Dipl. Ing. Christian Schröder for his advice and help in manufacturing the testing setup. Additional file 1: Illustration of screw push-out (black arrows). A) Specimen prior to testing; B) specimen after testing; 1) photographs; 2) radiographs. Medartis Inc. provided the osteosynthetic material. None of the authors is linked to Medartis (no competing interests) nor has Medartis Inc. been involved in the planning and execution of the study. 
The study was funded by a research grant of the Medical University of Munich (LMU, FöFoLe #828). The authors declare that they have no competing interests. SFB initiated the study, wrote the ethics proposal, conducted the preparation, and wrote the manuscript. AS conducted the biomechanical study and helped with the data analysis and manuscript preparation. HT organized and prepared the specimens and contributed to the study design and preparation of the manuscript. WM helped design the study and prepare the ethics proposal and advised on the statistics and manuscript preparation. DP advised on the biomechanical testing protocol, data processing, and analysis. YC helped to design the study and conduct the testing, provided the biomechanical testing environment, and helped with the manuscript preparation. All authors proofread and approved the final manuscript. Rausch S, Schlonski O, Klos K, Gras F, Gueorguiev B, Hofmann GO et al. Volar versus dorsal latest-generation variable-angle locking plates for the fixation of AO type 23C 2.1 distal radius fractures: a biomechanical study in cadavers. Injury. 2012. doi:10.1016/j.injury.2012.08.048.
{ "redpajama_set_name": "RedPajamaC4" }
8,710
{{Infobox tennis biography | name = Katrina M. Adams | image= Katrina Adams.jpg | country = United States | residence = Yonkers, New York United States | birth_date = | birth_place = Chicago, Illinois | height = 5'5| turnedpro = 1988 | retired = 1999 | plays = Right-handed (two-handed backhand) | careerprizemoney = $1,294,235 | singlesrecord = 182–194 | singlestitles = 1 ITF | highestsinglesranking = No. 67 (May 8, 1989) | AustralianOpenresult = 3R (1992) | FrenchOpenresult = 1R (1988, 1989, 1992, 1996) | Wimbledonresult = 4R (1988) | USOpenresult = 3R (1995) | doublesrecord = 419–226 | doublestitles = 20 WTA, 7 ITF | highestdoublesranking = No. 8 (August 14, 1989) | AustralianOpenDoublesresult = QF (1992) | FrenchOpenDoublesresult = QF (1988, 1989, 1992, 1993, 1995, 1996) | WimbledonDoublesresult = SF (1988) | USOpenDoublesresult = QF (1991, 1994) }} Katrina M. Adams (born August 5, 1968) is an American tennis executive and former professional tennis player from Chicago. She was president and CEO of the United States Tennis Association and chair of the US Open, as well as the chair of the International Tennis Federation Fed Cup and Gender Equality in Tennis committees. As a player, Adams was a doubles specialist, reaching the quarterfinal stage or better at all four Grand Slams as well as achieving a career-high doubles ranking of no. 8 (August 1989). Her book, Own the Arena: Getting Ahead, Making a Difference, and Succeeding as the Only One was published in 2021. Early life Adams joined a tennis program on Chicago's West Side when she was six years old. She attended Whitney Young High School, becoming Illinois High School Association the first Chicago Public School and first African American singles champion in 1983 and 1984. While attending Northwestern University, she won the National Collegiate Athletic Association (NCAA) doubles title with Diane Donnelly in 1987, and was twice voted All-American. Results Adams won seven of her 20 WTA doubles titles between 1987 and 1996 partnering Zina Garrison, including the 1988 World Doubles Championships. Her best Grand Slam singles result was in the 1988 Wimbledon Championships when she reached the fourth round, losing to Chris Evert 5–7, 6–3, 6–0. The same year, she was Wimbledon doubles semifinalist with Zina Garrison. Awards Adams twice won the annual WTA Player Service Award in 1996 and 1997. Post-retirement Adams has been a television commentator for the Tennis Channel since 2003, a regular contributor to CBS Sports Network all-female sports panel We Need to Talk'' and is also an executive director of the Harlem Junior Tennis and Education Program. In January 2015, Adams became President, Chairman and CEO of the United States Tennis Association, becoming the first former professional tennis player, first African-American and the youngest person to serve as President in the 135-year history of the organisation. In 2016, Adams became Chairperson of the International Tennis Federation (ITF) Fed Cup committee, which governs the Fed Cup. Adams also serves on the board of directors for the International Tennis Hall of Fame. 
WTA Tour finals Singles 2 (0–2) Doubles 36 (22–14) ITF Finals Singles (1–1) Doubles (8–3) Performance timelines Singles Doubles References External links 1968 births Living people African-American female tennis players American female tennis players Northwestern Wildcats women's tennis players Sportspeople from Bradenton, Florida People from White Plains, New York Tennis people from Florida Tennis players from Chicago Tennis people from New York (state) Whitney M. Young Magnet High School alumni African-American sports executives and administrators American sports executives and administrators African-American tennis coaches Universiade medalists in tennis Tennis commentators Women tennis executives Universiade bronze medalists for the United States Medalists at the 1987 Summer Universiade Medalists at the 1991 Summer Universiade Medalists at the 1993 Summer Universiade 21st-century African-American people 21st-century African-American women 20th-century African-American sportspeople 20th-century African-American women
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,487
{"url":"https:\/\/twodee.org\/blog\/16908","text":"# teaching machines\n\n## Flatcaps in Libigl\n\nMadeup\u2019s dowel solidifier has one job: thicken a sequence of line segments into a solid. But what if the sequence isn\u2019t a polyline, but rather a branching structure like a tree or a fork? One could model each branch as a separate dowel and hope that nobody looks too closely at the joints, but that\u2019s not really a solution. Good news, though! I may have found a better one.\n\nTo subtract and join meshes, Madeup depends on libigl. A few months ago, Alec Jacobson announced that his team had added wire mesh generation to libigl. I used this feature to add a new solidifier to Madeup for branching structures. I named it network.\n\nEven on the first run, it was quite obvious that my use case was a bit different than the libigl team\u2019s. Normal people generate wire meshes for closed surfaces, in which each vertex is incident on at least a few faces. Here I was with a network of line segments, with a lot of dangling terminal vertices. The result wouldn\u2019t make it through airport security:\n\nI wanted the caps to be flat, so I asked the libigl folks how much work that might be. They actually responded and were reasonable human beings, so I dove into the source and figured out how to suppress the pointiness for terminal vertices. I like this much better:\n\nThey even approved my pull request. But really, it was a push request.","date":"2021-04-18 10:42:58","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.4432260990142822, \"perplexity\": 1287.7216583565935}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-17\/segments\/1618038476606.60\/warc\/CC-MAIN-20210418103545-20210418133545-00192.warc.gz\"}"}
null
null
<?xml version="1.0" encoding="UTF-8"?> <definitions id="definitions" xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:camunda="http://camunda.org/schema/1.0/bpmn" targetNamespace="Examples"> <error id="myError" /> <process id="testThrowErrorWithoutErrorCode" isExecutable="true"> <startEvent id="theStart" /> <sequenceFlow id="flow1" sourceRef="theStart" targetRef="theEnd" /> <endEvent id="theEnd"> <errorEventDefinition errorRef="myError" /> </endEvent> </process> </definitions>
{ "redpajama_set_name": "RedPajamaGithub" }
1,408
passed on orally through songs and stories, as well as through actions and observations. and other variations in the natural world. in migration patterns. Key natural resources are growing scarce and more difficult to reach. Arctic people and their way of life. 1. Gather and share information on greenhouse gas emissions, national policies, and best practices. impacts, including the provision of financial and technological support to developing countries. 3. Cooperate in preparing for adaptation to the impacts of climate change. that the rest of the world's governments are ignoring their plight in the face of climate change.
{ "redpajama_set_name": "RedPajamaC4" }
921
{"url":"http:\/\/math.stackexchange.com\/questions\/153472\/calculating-the-surface-area-of-sphere-above-a-plane","text":"# Calculating the surface area of sphere above a plane\n\nHow do I calculate the surface area of the unit sphere above the plane $z=\\frac12$?\n\nEDIT: I have been attempting things and I am thinking about parameterizing this... While I know that surface area is given by the double integral of the cross products of partial derivatives of the new parameters, I don't know what to set them to.. (sorry I'm not good with the fancy notation)\n\n-\nIf you just want a formula, Wikipedia has it. MathWorld has a derivation as well. \u2013\u00a0 Rahul Jun 3 '12 at 22:15\n\nThe circumference of an infinitesimal ring of the unit sphere between $z$ and $z+\\mathrm dz$ is $2\\pi\\sqrt{1-z^2}$, and its width is $\\mathrm dz\/\\sqrt{1-z^2}$. Thus its surface area is $2\\pi\\,\\mathrm dz$. That is, the surface area of a slab of the unit sphere between two $z$ coordinates (or in fact between any two parallel planes) is simply $2\\pi$ times the difference of the $z$ coordinates (or, generally, the distance between the two planes). Thus the surface area of the slab of the unit sphere between $z=1\/2$ and $z=1$ is $2\\pi\\cdot(1-1\/2)=\\pi$.\n\n-\n\nSo if this is your paramterization $$X\\left(u,v\\right)=\\left(\\begin{array}{c} r\\sin u\\cos v\\\\ r\\sin u\\sin v\\\\ r\\cos u \\end{array}\\right)$$ these are the elements of tangent space (partial derivatives wrt $u$ and $v$ respectively): $$X_{u}=\\left(\\begin{array}{c} r\\cos u\\cos v\\\\ r\\cos u\\sin v\\\\ -r\\sin u \\end{array}\\right)$$ $$X_{v}=\\left(\\begin{array}{c} -r\\sin u\\sin v\\\\ r\\sin u\\cos v\\\\ 0 \\end{array}\\right)$$ Then by direct calculation: $$\\left|X_{u}\\times X_{v}\\right|=\\left|\\begin{array}{ccc} i & j & k\\\\ r\\cos u\\cos v & r\\cos u\\sin v & -r\\sin u\\\\ -r\\sin u\\sin v & r\\sin u\\cos v & 0 \\end{array}\\right|=\\left|\\left(r^{2}\\sin^{2}u\\cos v\\right)i+\\left(-r^{2}\\sin^{2}u\\sin v\\right)j+\\left(r^{2}\\sin u\\cos u\\right)k\\right|=r^{2}\\sin u$$ The area of half a sphere is found as follows: $$A=r^2\\int_0^{\\pi}\\int_0^{\\pi}\\sin ududv=2\\pi r^2$$\n\n-\nso your approach is similar to ananda's below, but you parameterized everything into polar coordinates first right? \u2013\u00a0 Mike Jun 3 '12 at 22:30\nalso, what exactly (intuitively) is the cross product doing? and why are the limits of the integral just 0 to pi and not to 2pi? \u2013\u00a0 Mike Jun 3 '12 at 22:32\naha found that it represents the area of a parallelogram! \u2013\u00a0 Mike Jun 3 '12 at 22:39\nyes, exactly. also if you need a slice above $z=1\/2$, then $u$ ranges from 0 to $\\arccos{\\frac{1}{2}}$. I wrongly assumed you were looking for the surface of half a sphere \u2013\u00a0 Valentin Jun 3 '12 at 22:45\n\nSurface area is given by\n\n$$\\iint_R \\left| \\vec r_u \\times \\vec r_v \\right| \\ dA$$\n\nwhere $\\vec r(u,v)$ is the parametrization of the surface. 
We can rewrite this as (derivation shown here: http:\/\/tutorial.math.lamar.edu\/Classes\/CalcIII\/SurfaceIntegrals.aspx):\n\n$$\\iint_D \\sqrt{ \\left(\\frac{\\partial z}{\\partial x}\\right)^2 + \\left(\\frac{\\partial z}{\\partial y}\\right)^2 + 1} \\ dA$$\n\nfor a function $z = f(x,y)$ where $D$ is the projection of the surface onto the xy-plane.\n\nSince we are only concerned with the portion of the unit sphere above $z = 0$, we can write it as\n\n$$z = \\sqrt{1-x^2-y^2}$$\n\nComputing the partial derivatives with respect to $x$ and $y$,\n\n$$\\frac{\\partial z}{\\partial x} = \\frac{-x}{\\sqrt{1-x^2-y^2}} \\rightarrow \\left(\\frac{\\partial z}{\\partial x}\\right)^2 = \\frac{x^2}{1-x^2-y^2}$$\n\n$$\\frac{\\partial z}{\\partial y} = \\frac{-y}{\\sqrt{1-x^2-y^2}} \\rightarrow \\left(\\frac{\\partial z}{\\partial y}\\right)^2 = \\frac{y^2}{1-x^2-y^2}$$\n\nSubstituting these into our expression for surface area,\n\n$$\\iint_D \\sqrt{ \\frac{x^2}{1-x^2-y^2} + \\frac{y^2}{1-x^2-y^2} + 1} \\ dA$$\n\nwhich simplifies to (omitting a bit of algebra)\n\n$$\\iint_D \\frac{1}{\\sqrt{1-x^2-y^2}} \\ dA$$\n\nObserve that $D$ (the projection of our surface into the xy-plane) is given by\n\n$$z = \\sqrt{1-x^2-y^2}$$\n\n$$\\frac{1}{2} = \\sqrt{1-x^2-y^2}$$\n\n$$\\frac{1}{4} = 1-x^2-y^2$$\n\n$$x^2+y^2 = \\frac{3}{4}$$\n\nwhich is a circle of radius $\\frac{\\sqrt{3}}{2}$. The integral over $D$ is easiest done in polar coordinates. I'll assume you know how to do that and omit the computation.\n\n$$\\int_{0}^{2\\pi} \\int_{0}^{\\frac{\\sqrt{3}}{2}} \\frac{1}{\\sqrt{1-r^2}} \\ r \\ dr \\ d\\theta$$\n\n$$= \\pi$$\n\n-\n\nWe will basically project the part of the unit sphere above $z=\\frac1 2$ onto $xy$ plane. I will assume that $\\int \\int_s||\\frac {\\partial r } {\\partial x }\\times \\frac {\\partial r } {\\partial y }|| dy dx$ Now $r= f(x,y,z) = f(x,y,z(x,y))$. So $\\frac {\\partial r } {\\partial x }=f(1,0,\\frac {\\partial z } {\\partial x})$ and $\\frac {\\partial r } {\\partial y }=f(0,1,\\frac {\\partial z } {\\partial y})$. so $||\\frac {\\partial r } {\\partial x }\\times \\frac {\\partial r } {\\partial y }||=$ $({\\frac {\\partial z } {\\partial x}}^2+{\\frac {\\partial z } {\\partial y}}^2+1)^{1\/2}$\n\nso now you have just find the derivatives and plug in . and the limit of the integral will be around the circle $x^2+y^2=3\/4$. you can use polar co-ordinates . let me know if u have doubts , i think the answer will be $3\/2$ times $\\pi$.\n\n-\nSo, z=sqrt(1-x^2-y^2), right and I need to take the partial w.r.t. x and y, respectively? what do you mean by the limit of the integral though? \u2013\u00a0 Mike Jun 3 '12 at 22:12\nyes , after that u have to evaluate double integral for which u need the limits . Can you see what the region $S$ looks like ? its just a circle . \u2013\u00a0 Theorem Jun 3 '12 at 22:15\nok I get a very messy formula for the cross product: sqrt($(x+y+1-x^2-y^2)\/(1-x^2-y^2)$)... I take that this is right and I set x^2+y^2=r^2? the limits would then be r=0 to r=sqrt(3\/4) and \\$\\theta from 0 to 2pi? how did you \"guess\" the answer so quickly when this is so complicated thought? 
\u2013\u00a0 Mike Jun 3 '12 at 22:28","date":"2015-03-30 16:14:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9583756327629089, \"perplexity\": 334.7261368698716}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-14\/segments\/1427131299496.98\/warc\/CC-MAIN-20150323172139-00259-ip-10-168-14-71.ec2.internal.warc.gz\"}"}
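For completeness, the polar integral that the answers leave to the reader works out as follows (spotting the antiderivative $-\sqrt{1-r^2}$, or substituting $u=1-r^2$):

$$\int_{0}^{2\pi}\int_{0}^{\sqrt{3}/2}\frac{r}{\sqrt{1-r^2}}\,dr\,d\theta = 2\pi\Big[-\sqrt{1-r^2}\Big]_{0}^{\sqrt{3}/2} = 2\pi\left(1-\tfrac{1}{2}\right) = \pi,$$

in agreement with the slab argument in the first answer ($2\pi$ times the slab height $1-\tfrac{1}{2}$).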
null
null
We do suffer from the over-application of stereotypes in the IT world, don't we? For most people, Moss and Roy in 'The IT Crowd' are pretty much what is expected when you say you work in the tech sector. It is frustrating at times, but at the risk of being a bit contentious, is it also a little bit of our own fault? We see many IT specialists here at Proactive.IT and one thing that unites them, regardless of their area of expertise, is that they are uniformly dedicated. You simply cannot be in the IT sector and not be totally committed to your job and the wider industry, because it moves too fast. The constant change in the mechanics of a technology-based employment arena requires that those within it are deeply embedded in it. As a result, we live, breathe and talk IT, and it is with the latter that we perhaps occasionally need to apply a little more empathy in the workplace when dealing with less IT-literate people. Acronyms need to be demystified. While most people will think to explain the lesser-known or more specialist acronyms because they stand out and are often a focus, we tend to notice the more common and older ones less. There is no guarantee the listener will know what they mean, so they need explaining, or at least checking that people are familiar with them. There is no point in explaining your IT strategy if the listener is completely baffled by a common term such as SQL. Take a moment to check if they know it means Structured Query Language and how that relates to the conversation. Repeat, give time and reinforce. Remember you have been using the language of IT for years. To you, it is second nature and it flows as easily as a multi-linguist switching tongues. Take the time to repeat information, preferably in a different way, give time for the information to be absorbed, and reinforce what it means. Think like a doctor. When you visit a health professional, they will usually take the time to make sure you understand. Leaflets and other information materials are written in plain language for the same reason. Their aim is to inform you about your condition in a way you will understand. Let people play with things. Most people learn best by doing, so give people access to the tech and explain while they are using it. We all want to be heard and be recognised for our expertise and skills, but unless you are communicating effectively, you are not really being seen for your talents. In the workplace, and worse still in an interview setting, you want to be heard and not baffle the listener.
{ "redpajama_set_name": "RedPajamaC4" }
9,226
Q: How do I dynamically position a swf in flex relative to its parent container? How do I position a child relative to its containing box? The Code: <mx:Script> <![CDATA[ thisMap.scaleX = scaleFactor; thisMap.scaleY = scaleFactor; thisMap.x = thisMap.x - thisMap.mouseX * 1.3; thisMap.y = thisMap.y - thisMap.mouseY * 1.3; ]]> </mx:Script> <mx:Box id="mapHolder" x="0" y="30"> <mx:SWFLoader id="thisMap" source="MyWorld1.swf" /> </mx:Box> To access thisMap it is not necessary to go through mapHolder. A: A box is a container that manages positioning for you in a horizontal or vertical stack (see VBox or HBox). It looks like you are trying to position thisMap as if it was in a canvas. Try <mx:Script> <![CDATA[ thisMap.scaleX = scaleFactor; thisMap.scaleY = scaleFactor; thisMap.x = thisMap.x - thisMap.mouseX * 1.3; thisMap.y = thisMap.y - thisMap.mouseY * 1.3; ]]> </mx:Script> <mx:Canvas id="mapHolder" x="0" y="30"> <mx:SWFLoader id="thisMap" source="MyWorld1.swf" /> </mx:Canvas> Also i'm not sure how this will look by positioning it relative to the mouse coordinates... it looks like you are trying to achieve some sort of drag functionality. However the canvas should solve the original problem. A: I am creating a zoom functionality. When the user clicks thisMap, it zooms into the point that was clicked. I scale this map and adjust for the scaling and centering by offsetting its x, y position. This works fine in the main app, but I can't figure out how to put the swf in some kind of container.
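For reference, the offsets in the script above follow from a short calculation: to zoom by a factor $s$ about the point under the cursor, with $p$ the cursor position in the map's unscaled local coordinates, the map's position must satisfy $x' + s\,p = x + p$, i.e. $x' = x - (s-1)\,p$. The hard-coded 1.3 is therefore consistent with a scaleFactor of 2.3 (an assumption; the value is not shown in the post), and for any other scale factor the multiplier should be scaleFactor - 1 rather than a constant, provided mouseX/mouseY are read in the unscaled coordinate space.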
{ "redpajama_set_name": "RedPajamaStackExchange" }
248
IBCOPP ;ALB/NLR - LIST INS. PLANS BY CO. (DRIVER) ; 08-SEP-94 ;;Version 2.0 ; INTEGRATED BILLING ;**28,62**; 21-MAR-94 ; EN ; Describe report W !!?5,"This report will generate a list of insurance plans by company." W !?5,"It will help you identify duplicates and verify patient coverage." W !?5,"You must select one, many (up to 20) or all of the insurance companies;" W !?5,"anywhere from one to all of the plans under each company; and whether to" W !?5,"include the patient policies (subscribers) under each plan. The number of" W !?5,"plans you select is independent for each company you are including, but" W !?5,"subscriber selection is the same (all or none) for all companies and" W !?5,"plans within a report. Regardless of how you run the report, the" W !?5,"number of subscribers per plan will be included.",!! ; ; Prompt user to select report type, insurance companies, plans ; ; Output from user selections: ; ; IBAPA=0 -- list insurance plans by company ; IBAPA=1 -- list Insurance plans by company with subscriber information ; IBAI=0 -- user selects insurance companies ; IBAI=1 -- run report for all insurance companies with plans ; IBAPL=0 -- whether some or all ins. co's., user selects plans (may be ; all for certain companies, some for other companies) ; IBAPL=1 -- whether some or all ins. co's., run report for all plans ; associated with those co's. ; S IBAPA=$$SELR^IBCOPP1 I IBAPA<0 G ENQ S IBAI=$$SELI^IBCOPP1 I IBAI<0 G ENQ S IBAPL=$$SELP^IBCOPP1 I IBAPL<0 G ENQ ; ; obtain plans for selected insurance companies ; I IBAI,IBAPL G DEVICE D START I IBQUIT G ENQ I '$D(^TMP("IBINC",$J)) W !!,"No plans selected!" G ENQ ; DEVICE ; Ask user to select device ; W !!,"*** You will need a 132 column printer for this report. ***",! S %ZIS="QM" D ^%ZIS G:POP ENQ I $D(IO("Q")) D G ENQ .S ZTRTN="^IBCOPP2",ZTDESC="IB - LIST OF PLANS BY INSURANCE COMPANY" .F I="^TMP(""IBINC"",$J,","IBAPA","IBAI","IBAPL" S ZTSAVE(I)="" .D ^%ZTLOAD K IO("Q") D HOME^%ZIS .W !!,$S($D(ZTSK):"This job has been queued as task #"_ZTSK_".",1:"Unable to queue this job.") .K ZTSK,IO("Q") ; ; Compile and print report ; U IO D ^IBCOPP2 ; ENQ K DIRUT,DIROUT,DUOUT,DTOUT,IBAPA,IBAI,IBAPL,IBQUIT,X,Y,^TMP("IBINC",$J) Q ; ; START ; Gather plans for all selected companies. S (IBCT,IBQUIT)=0 K ^TMP("IBINC",$J) ; ; - allow user selection of companies if required I 'IBAI D I Y<0 S IBQUIT=1 G STARTQ .S DIC="^DIC(36,",DIC("S")="I $D(^IBA(355.3,""B"",Y))" .S VAUTSTR="insurance company",VAUTNI=2,VAUTVB="VAUTI",VAUTNALL=1 .D FIRST^VAUTOMA K DIC,VAUTSTR,VAUTNI,VAUTVB,VAUTNALL Q:Y<0 .S IBCNS="" F S IBCNS=$O(VAUTI(IBCNS)) Q:IBCNS="" S ^TMP("IBINC",$J,$E(VAUTI(IBCNS),1,25),IBCNS)="" I IBAPL G STARTQ ; ; - gather all companies if required I IBAI S A=0 F S A=$O(^IBA(355.3,"B",A)) Q:'A S ^TMP("IBINC",$J,$E($P($G(^DIC(36,A,0)),"^"),1,25),A)="" ; ; - gather plans for selected companies S IBIC="" F S IBIC=$O(^TMP("IBINC",$J,IBIC)) Q:IBIC=""!IBQUIT D .S IBCNS="" F S IBCNS=$O(^TMP("IBINC",$J,IBIC,IBCNS)) Q:IBCNS=""!(IBQUIT) D ..S IBCT=IBCT+1 W !!,"Insurance Company # "_IBCT_": "_IBIC ..D OK^IBCNSM3 Q:IBQUIT I 'IBOK K ^TMP("IBINC",$J,IBIC,IBCNS) S IBAI=0 Q ..W " ...building a list of plans..." ..K IBSEL,^TMP($J,"IBSEL") D LKP^IBCNSU2(IBCNS,1,1,.IBSEL,0,1) Q:IBQUIT ..I '$O(^TMP($J,"IBSEL",0)) K ^TMP("IBINC",$J,IBIC,IBCNS) S IBAI=0 Q ..; ..; - set plans into an array ..S IBPN=0 F S IBPN=$O(^TMP($J,"IBSEL",IBPN)) Q:'IBPN S ^TMP("IBINC",$J,IBIC,IBCNS,IBPN)="" ; STARTQ K IBCNS,IBIC,IBJJ,IBCT,IBLCT,IBOK,IBPN,IBSEL,VAUTI,VAUTP,^TMP($J,"IBSEL") Q
{ "redpajama_set_name": "RedPajamaGithub" }
6,073
After the market devastation we (individual investors) have experienced the past six months it is apparent to me the uptick rule needs to be reinstated. Not a modified change but a full reinstatement as it existed prior to July 2006. It is obvious to me the change was made not for the individual investor but the short sellers. I know I am not a professional or for that matter even a novice investor. I am someone like many of my friends and family who have a vested interest in the market for savings and/or retirement. So please on behalf of us and millions like us please reinstate the uptick rule.
{ "redpajama_set_name": "RedPajamaC4" }
5,170
Eric Hanson Dr. Eric L. Hanson earned his undergraduate degree from the University of Illinois. He received his medical degree in 1994 from Tufts University School of Medicine in Boston, MA, graduating with the distinction of Alpha Omega Alpha. He then completed a transitional internship at the Naval Medical Center in San Diego, CA, followed by a four-year military commitment with the US Navy in Groton, CT as an Undersea Medical Officer. While serving the diving and submariner community, CDR Hanson received many medals and commendations, including those honoring his service as a US Navy Diver, as well as his medical support for the Salvage Operation of TWA Flight 800 off the Long Island coast. Dr. Hanson completed his dermatology residency at the University of North Carolina at Chapel Hill where he was named Chief Resident in 2002. Dr. Hanson is a board certified dermatologist and is actively affiliated with the American Board of Dermatology, American Academy of Dermatology, and the American Society of Mohs Surgeons. Dr. Hanson commutes to work on foot or by bicycle. Many weekends are spent with his wife, proudly watching their son Ethan compete at swim meets, and keeping up with their daughter Jane Catherine. He enjoys playing the guitar and brewing his own beer.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,209
In The Great Workplace 2.0, a "Participant" is defined as anyone who touches an organization and has an opportunity to help it achieve its Purpose and goals. It includes EMPLOYEES, VENDORS, board of directors, community and Management. Keep in mind here: HONEY attracts. Regular beatings, lies and misdirection do nothing to create TRUST. TRUST is a CORE value for The Great Workplace 2.0. It is part of Transparent Integrity. It is at the heart of Collaboration. In this case study, the organization has a mammoth turnover rate. At one point it exceeded 90% in 3 months of new hires. It didn't really matter who was hired to make their product. That person was not good enough, fast enough or motivated enough. The fact was (Is): There was no true training (nor onboarding, nor motivation, nor support, nor incentives, nor guidance). New line people are/were given about 30 minutes of training (show and tell) then put on the line's HARDEST job. The Hardest Job, not one where the new hire could learn. Then inevitably, they were fired for being slow, stupid or unmotivated. Each new hire is subjected to verbal abuse by a line manager. No other line person is allowed to help the newbie (I have seen a new person try to do the job while two experienced guys stood watching her fail). The "Team Leaders" are really glorified parts handlers (I watched one that same evening leave the newbie to fail, while he went to get more parts, that couldn't be used until the newbie learned the job). The abuse all starts when a new person reports to HR. The HR secretary acts like she just ate a sour grape: no smile, negative attitude, physical demeanor that says she is bothered by anyone's presence, and annoyed that you were there. The HR Manager would rarely leave her desk, and was NEVER seen on the shop floor (amazing how one could know what is going on out there). She speaks "down" to everyone and anyone: She sounds like a kindergarten teacher, except most teachers pull you up to their level, not insult you by speaking as though you don't understand English. Great first impression, isn't it?
{ "redpajama_set_name": "RedPajamaC4" }
4,115
Motorola edge+ vs Apple iPhone 14 Pro Max Specs Comparison Compare phone and tablet specifications of up to three devices at once. Comparison mode: Specs Size Buy from $700 Buy from $29 2340 x 1080 pixels, 19.5:9 ratio, 385 PPI Screen-to-body: HDR support, Scratch-resistant glass (Corning Gorilla Glass 5), Ambient light sensor, Proximity sensor HDR support, Oleophobic coating, Scratch-resistant glass (Ceramic Shield), Ambient light sensor, Proximity sensor System chip: Qualcomm Snapdragon 865 SM8250 (7 nm) Apple 16 Bionic (4 nm) Octa-core, 2840 MHz, Kryo 585, 64-bit Hexa-core Internal storage: 256GB (UFS 3.0), not expandable 128GB, not expandable Device type: Android (11, 10) iOS (16.x) Not user replaceable Battery life test results: Motorola TurboPower, Qi wireless charging, Reverse wireless charging Fast charging, Qi wireless charging, MagSafe wireless charging Max charge speed: Wired: 18.0W; Wireless: 15.0W Wireless: 15.0W Quad camera 108 MP (OIS, PDAF) 48 MP (OIS, PDAF) Aperture size: F1.8; Sensor size: 1/1.33"; Pixel size: 0.8 μm Aperture size: F1.8; Focal length: 24 mm; Pixel size: 2.44 μm Second camera: 8 MP (Telephoto, OIS, PDAF) 12 MP (Telephoto, OIS, PDAF) Optical zoom: 3.0x; Aperture size: F2.4; Focal Length: 81 mm; Pixel size: 1 μm Optical zoom: 3.0x; Aperture size: F2.8; Focal Length: 77 mm Third camera: 16 MP (Ultra-wide, Autofocus) 12 MP (Ultra-wide, PDAF) Aperture size: F2.2; Focal Length: 13 mm; Pixel size: 1 μm Aperture size: F2.2; Focal Length: 13 mm; Pixel size: 1.4 μm Fourth camera: ToF 3D depth sensing Video recording: 3840x2160 (4K UHD) (30 fps), 1920x1080 (Full HD) (120 fps) 3840x2160 (4K UHD) (60 fps), 1920x1080 (Full HD) (240 fps), 1280x720 (HD) (30 fps) Time-lapse video, Hyperlapse, EIS OIS, HDR 25 MP (HDR, Slow-motion videos) 12 MP (Autofocus, HDR) Video capture: 3840x2160 (4K UHD) (24 fps) 6.34 x 2.81 x 0.38 inches (161.07 x 71.38 x 9.6 mm) 6.33 x 3.05 x 0.31 inches (160.7 x 77.6 x 7.85 mm) 7.16 oz (203.0 g) Back: Glass (Corning Gorilla Glass 5); Frame: Aluminum Back: Glass; Frame: Stainless steel Yes; IP68 Biometrics: In-screen fingerprint 3D Face unlock Right: Volume control, Lock/Unlock key Left: Volume control, Other; Right: Lock/Unlock key Smoky sangria, Thunder grey Space Black, Silver, Gold, Deep Purple Carrier locked: n2, n5, n66, n260, n261, Sub-6, mmWave n1, n2, n3, n4, n5, n7, n8, n12, n14, n20, n25, n28, n30, n38, n40, n41, n48, n66, n70, n77, n78, n79, n258, n260, n261, SA, NSA, Sub-6, mmWave LTE (FDD): Bands 1(2100), 2(1900), 3(1800), 4(AWS-1), 5(850), 7(2600), 8(900), 12(700 a), 13(700 c), 17(700 b), 20(800 DD), 28(700 APT), 66(AWS-3) Bands 1(2100), 2(1900), 3(1800), 4(AWS-1), 5(850), 7(2600), 8(900), 12(700 a), 13(700 c), 14(700 PS), 17(700 b), 18(800 Lower), 19(800 Upper), 20(800 DD), 25(1900+), 26(850+), 28(700 APT), 30(2300 WCS), 32(1500 L-band), 66(AWS-3), 71(600) LTE (TDD): Bands 46, 48(3600) Bands 34(2000), 38(2600), 39(1900+), 40(2300), 41(2600+), 46, 48(3600) UMTS: Bands 1(2100), 2(1900), 5(850), 8(900) Bands 1(2100), 4(1700/2100), 5(850), 8(900) Data Speed: LTE-A Pro Cat 20 (2000/150 Mbit/s), HSDPA+ (4G) 42.2 Mbit/s, HSUPA 5.76 Mbit/s LTE-A, HSDPA+ (4G) 42.2 Mbit/s SIM type: HD Voice: VoLTE: No 3.5mm jack Earpiece, Multiple speakers Screen mirroring: Wireless screen share Additional microphone(s): for Noise cancellation Connectivity & Features 802.11 a, b, g, n, ac, ax (Wi-Fi 6), dual-band; Wi-Fi Direct, Hotspot 802.11 a, b, g, n, ac, ax (Wi-Fi 6); Wi-Fi Direct, Hotspot Type-C (reversible), USB 3.1 GPS, A-GPS, Glonass, Galileo, Cell ID, 
Wi-Fi positioning GPS, A-GPS, Glonass, Galileo, BeiDou, QZSS, Emergency SOS via satellite (SMS sending/receiving) Accelerometer, Gyroscope, Compass, Hall (for flip covers), Barometer Accelerometer, Gyroscope, Compass, Barometer, LiDAR scanner NFC, Ultra Wideband (UWB) Hearing aid compatible: M3, T4 Regulatory Approval FCC approval: FCC ID value: IHDT56YJ2 Measured SAR: Head: 0.53 W/kg Simultaneous Transmission: Wireless Router: This is the official Motor​ola edge+ User Guide in English provided from the manufacturer. Charger, USB Type-C cable, Guides, SIM tool Phone, USB-C to Lightning Cable, Documentation Officially announced:
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,309
Q: Please Explain Baby Rudin Theorem 1.20 (b) By Using Statements in Baby Rudin I have a question about Baby Rudin Theorem 1.20 (b). I have checked other Q&As on this theorem on Math Stack Exchange (and I can understand the theorem), but those answers did not explain the statements as written in Baby Rudin (such as the Theorem 1.20 (b) proof). The theorem states: "If $x\in\mathbb{R}$, $y\in\mathbb{R}$, and $x<y$, then there exists a $p\in\mathbb{Q}$ such that $x<p<y$". The proof is as follows. Since $x<y$, we have $y-x>0$, and the Archimedean property furnishes a positive integer $n$ such that $n(y-x)>1$. Apply the Archimedean property again to obtain positive integers $m_1$ and $m_2$ such that $m_1>nx$, $m_2>-nx$. Then $-m_2<nx<m_1$. Hence there is an integer $m$ (with $-m_2\leq m\leq m_1$) such that $m-1\leq nx<m$. If we combine these inequalities, we obtain $nx<m\leq 1+nx<ny$. Since $n>0$, it follows that $x<\frac{m}{n}<y$. This proves the theorem, with $p=m/n$. I want to know how Rudin obtains the statement "Hence there is an integer $m$ (with $-m_2\leq m\leq m_1$) such that $m-1\leq nx<m$". A: You have positive integers $m_1$ and $m_2$ such that $-m_2 \lt nx \lt m_1$. You can split this range of length $m_1+m_2$ into $m_1+m_2$ intervals of length $1$, and $nx$ will be in one of them. So $nx$ falls in one of the intervals $[-m_2,-m_2+1), [-m_2+1,-m_2+2), \ldots, [-1,0), [0,1), \ldots, [m_1-2,m_1-1), [m_1-1, m_1)$. If you prefer, one of the following statements is true: $-m_2 \le nx \lt -m_2+1, \ldots, -1 \le nx \lt 0, \ldots, m_1-1 \le nx \lt m_1$. So there is an integer $m$ with $-m_2 \le m \le m_1$ such that $nx$ is in $[m-1, m)$, i.e. such that $m-1 \le nx \lt m$.
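A concrete instance may make the interval argument easier to see (an illustrative example, not taken from Rudin): take $x=\sqrt{2}$ and $y=\tfrac{3}{2}$, so $y-x\approx 0.086$. The Archimedean property lets us pick $n=12$, since $12(y-x)\approx 1.03>1$. Then $nx=12\sqrt{2}\approx 16.97$ lies in the unit interval $[16,17)$, so $m=17$ is the integer with $m-1\le nx<m$. The combined inequality $nx<m\le 1+nx<ny$ reads $16.97<17\le 17.97<18$, and dividing by $n=12$ gives the rational $p=\tfrac{17}{12}\approx 1.4167$, which indeed satisfies $\sqrt{2}<p<\tfrac{3}{2}$.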
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,417
Place an order for 60.00лв. + Buy COGNIHELTH or another product from our Promo Selection to get free shipping. Place an order for 50.00лв. + Buy COGNIHELTH or another product from our Promo Selection to get free shipping. Place an order for 40.00лв. + Buy COGNIHELTH or another product from our Promo Selection to get free shipping.
{ "redpajama_set_name": "RedPajamaC4" }
7,912
Preview Night on Thursday, April 25th, from 4:00 p.m. to 7:00 p.m. The Library relies in great part on donations to stock our book sales and we ask you to consider your library next time you have books, DVDs, CDs and other media you are no longer using. If you are ready to let go of some of yours, bring them to the library so they can be part of this sale. Other folks will enjoy them and all proceeds will go to the Weatherford Public Library.
{ "redpajama_set_name": "RedPajamaC4" }
8,536
Force-assisted lifting of the cargo floor when loading the storage area underneath. A cargo floor with gas spring assist can be operated with one hand. During closing, the gate movement will be softly dampened. In combination with a lock assist on the gas spring, the cargo floor will be safely held open. Stabilus gas springs and dampers are flexible and absolutely maintenance-free.
{ "redpajama_set_name": "RedPajamaC4" }
7,937
Why pension funds in Iran could face bankruptcy tsunami Years of mismanagement and populist legislation, coupled with an increasingly ageing population, have driven Iranian pension funds to the brink of mass bankruptcy. Elderly women sit on a park bench in Tehran, June 9, 2009. - REUTERS/Raheb Homavandi Navid Kalhor @@navid_kalhor Tehran, IRAN — One of the key underlying economic challenges of Iran is the current underfunding of its pension system. Make no mistake: Pension reform must be a priority for the country. Figures posted by the Ministry of Cooperative, Labor and Social Welfare show that "a major portion of pension funds have either gone completely bust or are among the ones relying tremendously on state resources," reported Eqtesad News on Jan. 23. The most important issues are the imposition of financial obligations by consecutive administrations over the past four decades as well as the approval of populist legislation by various parliaments, and particularly under the previous conservative government. Combined with the pension funds' own financial problems, the situation is truly dire. By politically influencing the pension reserves, consecutive governments have mismanaged the assets owned by these funds. At present, most of them are accumulating large, unsustainable and unfunded pension liabilities. In this vein, problems concerning equity, efficiency and management are pervasive. In an interview with business weekly Tejarat-e-Farda on Oct. 8, 2016, Mohsen Riazi, the deputy of the social and economic planning office at the Social Security Organization (SSO), said, "Policies such as early retirement in difficult and hazardous occupations and renovation of industries that have played a great role in the financial misbalance of the largest pension fund in the country [belonging to the SSO] date back to the Reformist government [1997-2005]." Riazi also reiterated that the Health Reform Scheme modeled on Obamacare that is implemented by President Hassan Rouhani or increasing retirees' monthly pension payments by 800,000 rials ($25) at the end of the first term of Mahmoud Ahmadinejad's presidency (2005-2009) could be referred to as "embittering policies" exerted on the SSO, shrinking its meager financial resources to a minimum. Another issue aggravating the dire situation of pension funds is the protective laws ratified by lawmakers who consistently have passed populist bills supposedly in support of vulnerable groups in society. Parliament Speaker Ali Larijani, who is also the deputy head of the parliamentary economic commission, has addressed parliamentarians' approval of bills relevant to pension funds, saying, "In the previous [parliamentary] terms, occasionally the members of parliament conducting their duties as representatives suffered from politicization and made rash decisions." He added, "This brought about a crisis for pension funds, and sadly, the resulting actions of the preceding parliament [2012-2016] are not very defensible." Indeed, in a session held on Jan. 15, members of parliament — without counseling appropriate expert agencies such as the parliament's Research Center or conducting any other conventional investigations — voted for a bill on the early retirement of working women who have merely contributed payments to their respective pension funds for 20 years, without pushing for any age limit. This is while no actuarial calculations of such a decision and its likely impact on fund assets have ever been carried out. 
That being said, preliminary estimates indicate that the move will cost pension funds some 1,000 trillion rials ($31 billion) collectively over five years alone. Minister of Cooperatives, Labor and Social Welfare Ali Rabiee has expressed his concerns over the alarming conditions of pension funds. Addressing parliament on June 14, 2016, he said, "The total budget deficit of pension and insurance funds stood at about 360 trillion rials [$11 billion]" through the end of the Iranian calendar year 1394 (ending March 19, 2016). He added, "Casting a blind eye on fund-holding principles caused the deficit to grow from approximately 42 trillion rials [$1.3 billion] in 2005 to 600 trillion rials in 2013, adding to the government's debt, and continuing the same trajectory, the deficit is climbing exponentially." Meanwhile, the demographic transition Iran is experiencing suggests that the number of people paying social security premiums will gradually decline while those entitled to the benefits of yearslong contributions to the pension system will increase. Indeed, it is estimated that the percentage of over-65-year-olds in Iran will reach 10%, 15% and 25% in 2021, 2036 and 2049, respectively. The disproportion between contributions received and benefits paid by pension funds severely damages their financial capability. Needless to say, the promised benefits are not in line with retirement rules and contribution rates. As is the case in Iran, basing pension entitlements on the final two years of earnings rather than the average of pay over one's lifetime appears unfair and might be open to abuse. This has worsened the standard for the replacement ratio, meaning a person's gross income after retirement divided by his or her gross income before retirement. It is noteworthy that the replacement ratio in Iran is already one of the highest in the world. According to Esmail Gorjipour, the head of the social security office at the Ministry of Cooperatives, Labor and Social Welfare, the ratio was 85% of the final salary at SSO and 87% in the Civil Servants Pension Fund (CSPF) in the Iranian year ending March 20, 2015. In addition, the support ratio — the number of people aged 15-64 per person aged 65 or above — which normally should be around 5, is 0.93 for CSPF and 4.57 for SSO. These ratios are indicative of an imminent crisis for a majority of pension funds in Iran. Many pundits have warned about the disastrous consequences of any deferral in taking proper and immediate action to help pension funds out of crisis, as they believe the funds are already on the brink of bankruptcy. Indeed, the time for change is now, as correctly advised by an in-depth report on pension funds in the region conducted by the World Bank in 2005. The recommended changes and reforms, nonetheless, should be executed by introducing systematic and/or parametric modifications to pension funds. If not, the odds are that Iran will have to pay a heavy penalty by exposing itself to the risk of social and political upheaval in the foreseeable future.
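To make the two ratios concrete (the salary figure below is purely illustrative): the replacement ratio is pension income divided by pre-retirement income, so an 85% ratio means a retiree whose final salary was 100 million rials draws about 85 million rials in pension; the support ratio is the number of people aged 15-64 per person aged 65 or above, so the CSPF figure of 0.93 means fewer than one working-age person per elderly person, far below the benchmark of roughly 5, while the SSO's 4.57 sits only just under that benchmark.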
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,807
Protease-independent action of tissue plasminogen activator in brain plasticity and neurological recovery after ischemic stroke Hongjian Pu, Yejie Shi, Lili Zhang, Zhengyu Lu, Qing Ye, Rehana K. Leak, Fei Xu, Shubei Ma, Hongfeng Mu, Zhishuo Wei, Na Xu, Yuguo Xia, Xiaoming Hu, T. Kevin Hitchens, Michael V.L. Bennett, Jun Chen Emerging evidence suggests that tissue plasminogen activator (tPA), currently the only FDA-approved medication for ischemic stroke, exerts important biological actions on the CNS besides its well-known thrombolytic effect. In this study, we investigated the role of tPA on primary neurons in culture and on brain recovery and plasticity after ischemic stroke in mice. Treatment with recombinant tPA stimulated axonal growth in culture, an effect independent of its protease activity and achieved through epidermal growth factor receptor (EGFR) signaling. After permanent focal cerebral ischemia, tPA knockout mice developed more severe sensorimotor and cognitive deficits and greater axonal and myelin injury than wild-type mice, suggesting that endogenously expressed tPA promotes long-term neurological recovery after stroke. In tPA knockout mice, intranasal administration of recombinant tPA protein 6 hours poststroke and 7 more times at 2 d intervals mitigated white matter injury, improved axonal conduction, and enhanced neurological recovery. Consistent with the proaxonal growth effects observed in vitro, exogenous tPA delivery increased poststroke axonal sprouting of corticobulbar and corticospinal tracts, which might have contributed to restoration of neurological functions. Notably, recombinant mutant tPA-S478A lacking protease activity (but retaining the EGF-like domain) was as effective as wild-type tPA in rescuing neurological functions in tPA knockout stroke mice. These findings demonstrate that tPA improves long-term functional outcomes in a clinically relevant stroke model, likely by promoting brain plasticity through EGFR signaling. Therefore, treatment with the protease-dead recombinant tPA-S478A holds particular promise as a neurorestorative therapy, as the risk for triggering intracranial hemorrhage is eliminated and tPA-S478A can be delivered intranasally hours after stroke. https://doi.org/10.1073/pnas.1821979116 Axonal sprouting Diffusion tensor imaging Epidermal growth factor Oxygen–glucose deprivation Protease-inactive tPA Dive into the research topics of 'Protease-independent action of tissue plasminogen activator in brain plasticity and neurological recovery after ischemic stroke'. Together they form a unique fingerprint. Tissue Plasminogen Activator Medicine & Life Sciences 100% Neuronal Plasticity Medicine & Life Sciences 89% Peptide Hydrolases Medicine & Life Sciences 78% Knockout Mice Medicine & Life Sciences 18% Pyramidal Tracts Medicine & Life Sciences 16% ErbB Receptors Medicine & Life Sciences 12% Intranasal Administration Medicine & Life Sciences 8% Pu, H., Shi, Y., Zhang, L., Lu, Z., Ye, Q., Leak, R. K., Xu, F., Ma, S., Mu, H., Wei, Z., Xu, N., Xia, Y., Hu, X., Kevin Hitchens, T., Bennett, M. V. L., & Chen, J. (2019). Protease-independent action of tissue plasminogen activator in brain plasticity and neurological recovery after ischemic stroke. Proceedings of the National Academy of Sciences of the United States of America, 116(18), 9115-9124. https://doi.org/10.1073/pnas.1821979116 Protease-independent action of tissue plasminogen activator in brain plasticity and neurological recovery after ischemic stroke. 
/ Pu, Hongjian; Shi, Yejie; Zhang, Lili; Lu, Zhengyu; Ye, Qing; Leak, Rehana K.; Xu, Fei; Ma, Shubei; Mu, Hongfeng; Wei, Zhishuo; Xu, Na; Xia, Yuguo; Hu, Xiaoming; Kevin Hitchens, T.; Bennett, Michael V.L.; Chen, Jun. In: Proceedings of the National Academy of Sciences of the United States of America, Vol. 116, No. 18, 30.04.2019, p. 9115-9124. https://doi.org/10.1073/pnas.1821979116
Funding Information: ACKNOWLEDGMENTS. We thank Lesley M. Foley for assistance with the MRI experiments and Patricia Strickler for administrative support. This project was supported by NIH Grants NS095671 (to J.C.), NS036736 and NS095029 (to M.V.L.B. and J.C.), and US Department of Veterans Affairs (VA) Merit Review BX002495 (to J.C.). J.C. is the Richard King Mellon Professor of Neurology and a recipient of a VA Senior Research Career Scientist Award. M.V.L.B. is the Sylvia and Robert S. Olnick Professor of Neuroscience. H.P. was supported by the American Heart Association Grant 17POST33661207. Publisher Copyright: © 2019 National Academy of Sciences. All rights reserved.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,168