3.1.1 Vibration of an ideal stretched string

The idealised model of a stretched string assumes that it is perfectly flexible, and stretched between two fixed points. It is allowed to vibrate transversely with a (small) displacement from equilibrium given by $w(x,t)$. Suppose the string has tension $T$, mass per unit length $m$, and length $L$. To obtain the equation of motion, consider a small element of the string between positions $x$ and $x+\delta x$ as sketched below. (Note that the displacement of the string is hugely exaggerated in the plot, for clarity.)

Newton's law for this small element requires

$$m~\delta x \frac{\partial^2 w}{\partial t^2} = -T \sin \theta_1 + T \sin\theta_2 \tag{1}$$

But the angles $\theta_1$ and $\theta_2$ are both very small, so

$$\sin \theta_1 \approx \theta_1 \approx \tan \theta_1 = \left[ \frac{\partial w}{\partial x} \right]_x \tag{2}$$

and

$$\sin \theta_2 \approx \theta_2 \approx \tan \theta_2 = \left[ \frac{\partial w}{\partial x} \right]_{x + \delta x}. \tag{3}$$

Thus

$$m \frac{\partial^2 w}{\partial t^2} \approx T \left[ \dfrac{\left[ \frac{\partial w}{\partial x} \right]_{x + \delta x} - \left[ \frac{\partial w}{\partial x} \right]_{x}}{\delta x} \right] \rightarrow T \dfrac{\partial^2 w}{\partial x^2} \tag{4}$$

as $\delta x \rightarrow 0$. Thus the equation of motion is

$$m \frac{\partial^2 w}{\partial t^2} - T \dfrac{\partial^2 w}{\partial x^2} = 0. \tag{5}$$

If the motion were not free because a force $f(x,t)$ per unit length were applied to the string, $f$ would replace the zero on the right-hand side of this equation.

A vibration mode of the string is a free motion in which all points move sinusoidally at some frequency $\omega$. So to find the natural frequencies and vibration modes, we need to look for solutions of the form

$$w(x,t) = u(x) e^{i \omega t} \tag{6}$$

(remembering that we really mean "the real part of" this complex expression). Substituting in eq. (5) then gives

$$T \dfrac{d^2u}{dx^2} + m \omega^2 u = 0. \tag{7}$$

This is the simple harmonic equation, so we already know that the general solution is

$$u(x) = A \cos kx + B \sin kx \tag{8}$$

where $A$ and $B$ are arbitrary constants, and

$$k^2 = \dfrac{m \omega^2}{T} = \dfrac{\omega^2}{c^2} \tag{9}$$

where $c=\sqrt{T/m}$.

The quantity $k$ is called the wavenumber and is a kind of "spatial frequency". It bears the same relation to the wavelength $\lambda$ as the frequency $\omega$ bears to the period of oscillation $\tau$: $\omega=2 \pi/\tau$, and $k=2 \pi/\lambda$.

One interpretation of this solution is that sinusoidal waves can propagate along the string in either direction, and $c$ is the speed of these waves. To see this, it is better to use the complex form of the general solution:

$$u(x) = A' e^{ikx} + B' e^{-ikx} \tag{10}$$

where $A'$ and $B'$ are new arbitrary constants. The time-varying solution then looks like

$$w(x,t) = A' e^{i(\omega t + kx)} + B' e^{i(\omega t - kx)} \tag{11}$$

$$= A' e^{i\omega (t + x/c)} + B' e^{i\omega (t - x/c)}. \tag{12}$$

Now we have to satisfy the boundary conditions at the ends of the string: we are assuming that $u = 0$ at both ends, which we can take to be at the positions $x = 0$ and $x = L$. From $u(0) = 0$ we can deduce from equation (8) that $A = 0$. Now to satisfy $u(L) = 0$ we require

$$B \sin kL = 0 \tag{13}$$

so that the only allowed values of $k$ are

$$k = n \pi/L \mathrm{~~for~~} n = 1, 2, 3, \ldots \tag{14}$$

From eq. (9) these correspond to allowed values of the frequency $\omega$. So we have a sequence of mode shapes

$$u_n(x) = \sin (n \pi x/L) \tag{15}$$

with corresponding natural frequencies

$$\omega_n = n \pi c/L \mathrm{~~for~~} n = 1, 2, 3, \ldots \tag{16}$$
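As a quick numerical illustration of the relations $c=\sqrt{T/m}$ and $\omega_n = n \pi c/L$, the sketch below computes the first few natural frequencies. The tension, mass per unit length and length are made-up example values, not taken from the text:

```python
import math

def wave_speed(T, m):
    # transverse wave speed c = sqrt(T/m), for tension T (N)
    # and mass per unit length m (kg/m)
    return math.sqrt(T / m)

def natural_frequencies(T, m, L, nmax):
    # omega_n = n*pi*c/L for n = 1, 2, ..., nmax (rad/s)
    c = wave_speed(T, m)
    return [n * math.pi * c / L for n in range(1, nmax + 1)]

# illustrative string: T = 100 N, m = 0.01 kg/m, L = 0.65 m
omegas = natural_frequencies(100.0, 0.01, 0.65, 5)
f1 = omegas[0] / (2 * math.pi)  # fundamental frequency in Hz
```

Note that the modes form an exact harmonic series ($\omega_n = n\,\omega_1$), which is a special property of the ideal, perfectly flexible string.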
Tanz im August is a festival for contemporary dance in Berlin. The festival, today presented by HAU Hebbel am Ufer, was founded in 1988 by Nele Hertling in what was then West Berlin. Every year in August, Tanz im August brings renowned companies, new choreographies, aesthetics and formats from around the world to Berlin as part of a festival lasting several weeks. A further focus is the realisation of new projects by Berlin artists, collaboration with international guests, and the co-production of world premieres and German premieres. The festival uses the three stages of HAU Hebbel am Ufer and a changing set of further venues across Berlin, including the Haus der Berliner Festspiele, the Sophiensæle, the Radialsystem, the Volksbühne Berlin, the Deutsches Theater Berlin, the Kindl-Zentrum für zeitgenössische Kunst and the Schaubühne am Lehniner Platz.
Tanz im August is funded by the Hauptstadtkulturfonds (Capital Cultural Fund).
For the 2023 edition, the dance curator Ricardo Carmona was appointed artistic director.
Productions
For the reduced 33rd edition in 2021, the festival made increased use of outdoor locations due to the ongoing COVID-19 pandemic, such as the Freilichtbühne Weißensee, the stage in the Gärten der Welt and a bumper-car rink at the Haus der Statistik. In addition, the MaHalla in Oberschöneweide was initially added as a new venue. HAU2 and the Haus der Berliner Festspiele were unavailable to the festival due to renovations.
Due to the COVID-19 pandemic, the 32nd edition in 2020 took place mostly as an online festival.
The 31st edition in 2019 comprised 70 performances of 31 productions at eleven different venues, featuring 160 artists from 15 countries.
Numerous audience formats are fixed components of the festival programme. Since 2016, the "Bibliothek im August" in HAU2 has made available books, recommended by the festival artists, that have shaped their work and thinking.
Since 2014, Tanz im August has published the accompanying festival magazine im August.
Artistic direction since 1989
1989–2003: Nele Hertling
2003–2007: Ulrike Becker, Matthias Lilienthal, Bettina Masuch, André Thériault
2007–2008: Ulrike Becker, Matthias Lilienthal, Marion Ziemann, Bettina Masuch, André Thériault
2009–2013: Ulrike Becker, Pirkko Husemann, Matthias Lilienthal, André Thériault, Marion Ziemann
2013: Bettina Masuch
2014–2022: Virve Sutinen
from 2023: Ricardo Carmona
Artists of the festival
Artists and companies presented at Tanz im August include Dominique Bagouet, Tanztheater Wuppertal Pina Bausch, Jérôme Bel, Bruno Beltrão / Grupo de Rua, Rosemary Butcher, Roberto Castello, Nora Chipaumire, Michael Clark Company, Cloud Gate Dance Theatre, Martha Graham Dance Company, Trajal Harrell, Deborah Hay, Anne Teresa De Keersmaeker / Rosas, La Ribot, Faustin Linyekula, Constanza Macras/Dorky Park, Lemi Ponifasio, Eszter Salamon, Wayne McGregor, Meg Stuart and Sasha Waltz.
History
Retrospectives since 2015
Since 2015, Tanz im August has presented a retrospective of an important living artist every two years, each accompanied by a publication on their life and work.
2015 – Rosemary Butcher
2017 – La Ribot
2019 – Deborah Hay
2022 – Cristina Caprioli / ccap
30th anniversary
In 2018, Tanz im August celebrated its 30th anniversary. In addition to the festival events, the journalist Claudia Henne worked to compile the festival's production history in an archive. It is now available in the Tanz im August online magazine and on the festival website.
Rådhusparken ("Town Hall Park") is a small inner-city park in the Swedish city of Jönköping.
North of the park lies the southern shore of Lake Vättern. To the east runs the canal connecting the Munksjön with the Vättern.
The park was laid out in the 1860s; an old castle complex had previously stood on the site. The park takes its name from the town hall, built in 1861 as a school building and used as the town hall since 1914. The park was redesigned in the 1920s. Its centre is dominated by a large fountain, and it has an old stock of trees. The park also contains several monuments, among them one to Johan Björnsson Printz (1592–1663), governor of the Swedish colony of New Sweden in North America and later governor of Jönköping.
using System;
using HipchatApiV2;
using HipchatApiV2.Requests;
using Xunit;
namespace IntegrationTests
{
// Integration tests for HipchatClient.SendNotification.
// A temporary test room is created per test and deleted on Dispose.
[Trait("SendNotification", "")]
public class SendRoomNotification : IDisposable
{
private readonly int _existingRoomId;
private readonly HipchatClient _client;
public SendRoomNotification()
{
HipchatApiConfig.AuthToken = TestsConfig.AuthToken;
_client = new HipchatClient();
_existingRoomId = TestHelpers.GetARoomId(_client,"Send Notification Test Room");
}
[Fact(DisplayName = "Can send a room notification", Skip = "Setup auth token")]
public void CanSendRoomNotification()
{
var sendMessageResult = _client.SendNotification(_existingRoomId, "Test message");
Assert.True(sendMessageResult);
}
public void Dispose()
{
_client.DeleteRoom(_existingRoomId);
}
}
}
\section{Introduction}
Rare earth manganites $R$MnO$_{3}$ ($R$ = Gd, Tb, Dy, Ho) with orthorhombically distorted perovskite structure have been attracting a lot of attention due to a variety of unusual physical properties including a potentially interesting cross coupling of magnetism and ferroelectricity (FE).\cite{most,kimura,goto,lorenz} These so called multiferroics are of particular interest for understanding the fundamental physical links between spin, charge and lattice degrees of freedom that give rise to magnetoelectric coupling, as well as because of the promising possibility of using these coupled order parameters in novel device applications by controlling the material's polarization state with either electric or magnetic field.\cite{kimura3,fiebig:review,yama}
While it has been clearly shown that the ferroelectricity in these compounds is magnetically driven, the roles of the magnetic 3$d$ and 4$f$ ions were found to be rather different. As commonly accepted, the origin of multiferroicity in $R$MnO$_{3}$ is a result of a cycloidal Mn-magnetic ordering, with the inverse Dzyaloshinsky-Moriya interaction being the driving force of the polar lattice distortions.\cite{kenz,katsura:057205,mostovoy:067601,dagotto,xiang,malash} The spontaneous electric polarization can therefore be expressed via the $m_{y}$ and $m_{z}$ components of the Mn-spins of the cycloid and the magnetic propagation vector, {\mbox{\boldmath$\tau$}}, as $\mathbf{P}_{s}\propto m_{y}m_{z}(\mathbf{e_x}\times\mbox{\boldmath$\tau$})$, where $\mathbf{e_x}$ is the unit vector along the axis of rotation of the Mn-spins.\cite{mostovoy:067601} The magnetic rare earths, in turn, may determine the polarization direction via their interaction with the Mn-spins. In compounds with rare earths showing strong magnetic anisotropy, the polarization is oriented along the $c$-axis, $\mathbf{P}\|c$~for $R=$ Tb and Dy, while in those with nonmagnetic (Eu,Y) or less anisotropic Gd, $\mathbf{P}\|a$.\cite{hemb,kimura3} In some cases, they even contribute to the magnitude of the polarization, as has been shown for DyMnO$_{3}$.\cite{goto,prok} A necessary condition for this is the ordering of the rare earths with the same propagation vector as Mn, a behavior now confirmed for $R$MnO$_{3}$ with $R$ = Gd, Tb, Dy and Ho above their own ordering temperatures, $T_{\rm N}^{\rm R} < 10$ K.\cite{kenz,feyerherm,munoz} Here, the Mn-magnetic sublattice polarizes the $R$-spins, and the induced $R$-moment is determined by the strength of the exchange interaction between the Mn- and $R$-ions, $J_{\rm Mn-R}$.
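To make the geometry concrete, the direction (though not the magnitude) of $\mathbf{P}_{s}$ follows from the cross product alone. A minimal sketch, with Cartesian $x,y,z$ mapped to the $a,b,c$ axes; the numerical values of $\tau$, $m_y$ and $m_z$ below are purely illustrative:

```python
# direction of P ~ m_y * m_z * (e_x x tau) for a bc-plane cycloid
def cross(u, v):
    # 3D cross product of two vectors given as 3-tuples
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

e_x = (1.0, 0.0, 0.0)    # spin-rotation axis of the cycloid, along a
tau = (0.0, 0.28, 0.0)   # propagation vector along b* (illustrative value)
my, mz = 3.9, 2.5        # cycloid components (illustrative, in mu_B)

P = tuple(my * mz * c for c in cross(e_x, tau))
# P has only a c-component: a bc-plane cycloid propagating along b
# gives P || c, consistent with the text
```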
As proposed in Ref.~\cite{prok2}, the ordering of the $R$-spins below $T_{\rm N}^{\rm R}$ (often referred to as $individual$) depends strongly on the relative strengths of the exchange interactions between the Mn- and $R$-ions, $J_{\rm Mn-R}$, and between the rare earths themselves, $J_{\rm R-R}$.
A relatively weak $J_{\rm Mn-R}$ will not significantly affect the $R$-ordering, as found for $R$ = Dy, which orders on its own ($\tau^{\rm Dy}=\frac{1}{2}\neq\tau^{\rm Mn}$) below 6 K.\cite{feyerherm,prok} On the other hand, a strong $J_{\rm Mn-R}$
will force the $R$-ordering to the same periodicity as Mn down to the lowest temperatures, as in the case of $R=$ Ho ($\tau^{\rm Ho} = \tau^{\rm Mn}$).\cite{munoz,brinks}
The most interesting case, however, is the intermediate coupling regime observed in TbMnO$_{3}$. Here, the Tb- and Mn-orderings remain coupled down to the lowest temperatures through the harmonic coupling of their wave vectors, $3 \tau^{\rm Tb} - \tau^{\rm Mn} = 1$. This results from the adjustment of the Ising-like Tb spins to the periodic Mn ordering, which minimizes the system's energy and leads to a rather complex Tb-spin structure at low temperatures.\cite{prok2} This intermediate coupling regime should be very sensitive to any variation of $J_{\rm Mn-R}$ and, correspondingly, strong effects on the Tb-magnetic ordering are expected.\cite{prok2}
In this paper, we report combined neutron diffraction, x-ray resonant magnetic scattering, and single-crystal magnetization and dielectric measurements on substituted TbMn$_{1-x}$Ga$_{x}$O$_{3}$ ($x$ = 0, 0.04, 0.1) compounds. Substitution of the Mn$^{3+}$ ion by the closest isovalent but nonmagnetic ion, Ga$^{3+}$, was chosen to provide minimal disturbance of the lattice and to effectively vary the total $J_{\rm Mn-Mn}$ and, consequently, $J_{\rm Mn-R}$, while keeping $J_{\rm R-R}$ constant. We show that, while the crystal structure remains the same for all compositions, Ga for Mn substitution leads to a linear decrease of $T_{\rm N}^{\rm Mn}$ and $\tau^{\rm Mn}$, reflecting the intended decrease of $J_{\rm Mn-Mn}$ and the increase of the Mn-O-Mn bond angles. At the same time, we observe a strong suppression (for $x = 0.04$) and disappearance (for $x \geq 0.1$) of both the induced and the individual Tb-magnetic ordering. This behavior confirms that the exchange field $J_{\rm Mn-Tb}$ from the Mn has a strong influence on the Tb-magnetic ordering in the full temperature range below the ferroelectric transition and actually stabilizes the Tb-magnetic ground state.
\section{Experiment}
Polycrystalline TbMn$_{1-x}$Ga$_{x}$O$_{3}$ ($x$ = 0, 0.04, 0.1) samples were prepared from a mixture of Tb$_4$O$_7$ (4N), Mn$_2$O$_3$ (3N) and Ga$_2$O$_3$ (4N) using standard solid state reaction. The lowest content of Ga ($x\backsimeq$ 4\%) was chosen such as to obtain a homogeneous distribution of Ga within the sample. The highest content ($x\backsimeq$ 10\%) was determined by our ability to obtain a single phase sample at this composition without significant changes in its crystal structure and magnetic properties. The single crystals were grown in argon atmosphere by the floating zone technique in the IR-heated image furnace (NEC) equipped with two halogen lamps (500W) and double ellipsoidal mirrors. The growth and the feed rates were maintained to obtain a stable molten zone.
The crystal structure and magnetic ordering of TbMn$_{1-x}$Ga$_{x}$O$_{3}$ were investigated between 2 and 300~K by neutron diffraction (ND) on powder and single-crystal samples using the high-resolution E9 diffractometer ($\lambda = $1.797~\AA\ and 2.816~\AA) and the E1 thermal triple-axis spectrometer operated in two-axis mode ($\lambda = $2.44~\AA), respectively, at the Helmholtz-Zentrum Berlin (HZB). The data were analyzed with the FullProf refinement package.\cite{carvajal} Single-crystal x-ray resonant magnetic scattering (XRMS) measurements on TbMn$_{1-x}$Ga$_{x}$O$_{3}$ were conducted at the 7~T multipole-wiggler beamline MAGS, also at the HZB.\cite{feyerherm,dudzik}
Physical characterization of the samples was performed on both single-crystal and polycrystalline samples. The single crystals were aligned along their principal directions by x-ray Laue diffraction and cut into orthorhombic shapes with each face normal (within 2$^{\circ}$) to a principal crystallographic direction.
The temperature dependence of the low- and high-field magnetization along the three main crystallographic directions was measured on TbMn$_{1-x}$Ga$_{x}$O$_{3}$ ($x$ = 0, 0.04, 0.1) in the VSM mode of a Physical Properties Measurement System (Quantum Design PPMS). The dielectric measurements were performed in a 14.5 T Oxford Instruments cryomagnet equipped with a $^3$He Heliox insert. Opposite faces of the plate-like samples were sputtered with gold and placed between two gold-covered copper plates. The top plate and the sample were pressed against the fixed and thermalized bottom plate by a soft spring. The capacitance of such a plate capacitor, with the sample as dielectric, was measured by an Andeen-Hagerling 2500 A capacitance bridge working at frequencies of 1 and 20 kHz.
\section{Results and discussion}
\subsection{Crystal structure}
\begin{table*}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
Temperature & \multicolumn{3}{c}{300 K}& \multicolumn{3}{c}{12 K}\\ \hline
$x$ & 0 & 0.04 & 0.10 & 0 & 0.04 & 0.10 \\ \hline
a (\AA) & 5.30159(4) & 5.30049(7) & 5.29981(7) & 5.31606(4) & 5.31406(6) & 5.31268(7) \\
b (\AA) & 5.85557(4) & 5.83318(7) & 5.81546(7) & 5.82304(5) & 5.80384(8) & 5.78569(7) \\
c (\AA) & 7.40003(6) & 7.41209(9) & 7.42329(9) & 7.38443(7) & 7.39939(8) & 7.41051(11) \\ \hline
x(Tb) & -0.0150(3) & -0.0160(4) & -0.0162(4)& -0.0162(4) & -0.0159(4) & -0.017(4)\\
y(Tb) & 0.0814(3) & 0.0801(3) & 0.0796(4)& 0.0789(3) & 0.0790(3) & 0.0778(4)\\
x(O$_{1}$) & 0.1055(4) & 0.1064(5) & 0.1050(5)& 0.1064(4) & 0.1058(4) & 0.1059(5)\\
y(O$_{1}$) & 0.4658(4) & 0.4673(4) & 0.4667(5)& 0.4666(4) & 0.4686(4) & 0.4678(4)\\
x(O$_{2}$) & 0.7035(3) & 0.7038(4) & 0.7029(4)& 0.7037(3) & 0.7038(3) & 0.7046(4)\\
y(O$_{2}$) & 0.3276(3) & 0.3267(3) & 0.3251(3)& 0.3265(3) & 0.3247(3) & 0.3235(4)\\
z(O$_{2}$) & 0.0515(2) & 0.0514(2) & 0.0515(2)& 0.0515(2) & 0.0519(2) & 0.0519(2)\\
Occ(Ga) & 0 & 0.044(2) & 0.104(6) & 0 & 0.044 & 0.104 \\ \hline
$\tau^{\rm Mn}$ ($\mathbf{b}^{*}$) & & & & 0.2754(2) & 0.2651(3) & 0.2463(5) \\
$m_{\rm x}^{\rm Mn}$ & & & & & & \\
$m_{\rm y}^{\rm Mn}$ & & & & 3.95(4) & 3.30(4) & 3.19(8) \\
$m_{\rm z}^{\rm Mn}$ & & & & 2.46(10) & 1.86(15) & 0(1) \\
$\tau^{\rm Tb}$ ($\mathbf{b}^{*}$) / 2 K & & & & 3/7 & & \\
$m_{\rm x}^{\rm Tb}$ / 2 K & & & & 5.79(6) & & \\
$m_{\rm y}^{\rm Tb}$ / 2 K & & & & 3.11(8) & & \\
$m_{\rm z}^{\rm Tb}$ / 2 K & & & & 0 & & \\ \hline
R$_{\rm nucl}$ & 4.67 & 3.83 & 3.79 & 4.14 & 3.26 & 5.47 \\
R$_{\rm mag}^{\rm Mn}$ & & & & 6.99 & 6.47 & 7.0 \\
R$_{\rm mag}^{\rm Tb}$ & & & & 13.7 & & \\
$\chi^{2}$ & 1.97 & 3.17 & 2.62 & 1.69 & 3.49 & 3.33 \\
\end{tabular}
\caption{Crystal and magnetic structure parameters for TbMn$_{1-x}$Ga$_{x}$O$_{3}$ ($x$ = 0, 0.04, 0.1) obtained from the Rietveld refinements of powder ND at room and low temperatures using 1.797~\AA\ and 2.816~\AA. The atom positions are given for $Pbnm$ settings, where Mn occupies the $4b$ (1/2 0 0), Tb and O$_{1}$ the $4c$ ($x$ $y$ 1/4), and O$_{2}$ the $8d$ ($x$ $y$ $z$) Wyckoff positions. The low-temperature data for all samples are given for 12 K, except for the Tb-magnetic structure parameters, which are given for 2 K; this is due to difficulties in fitting the Mn-magnetic reflections, which overlap with the Tb-magnetic peaks below 8 K.} \label{table1}
\end{ruledtabular}
\end{center}
\end{table*}
Starting with the structural analysis, powder ND measurements at room temperature show that all TbMn$_{1-x}$Ga$_{x}$O$_{3}$ samples are single phase
and crystallize with the orthorhombically distorted perovskite structure (space group $Pbnm$). Ga shares the 4$b$ atomic position (Wyckoff notation) with Mn, and its refined content is in good agreement with the nominal one (Table~\ref{table1}). We would like to emphasize that, despite the close scattering lengths of Tb (7.38 fm) and Ga (7.29 fm), the sharp difference between those of Mn (-3.73 fm) and Ga allows us to determine their occupancies, confirming that all Ga is accounted for on the Mn site.
With increasing Ga content up to 10\% (Fig.~\ref{fig1}a-b), the lattice contracts linearly along the $b$-direction, $\Delta b/b \sim 0.7\%$, and expands along the $c$-axis, $\Delta c/c \sim 0.3\%$, while leaving the $a$-parameter almost constant, $\Delta a/a < 0.1\%$. The resulting decrease in volume is about 0.4$\%$, approaching the volume of the heavier-$R$ compound DyMnO$_{3}$. The anisotropic change of the lattice, however, leads to an increase of the Mn-O-Mn bond angle (by $\sim 0.4^\circ$), as if moving the system towards the lighter-$R$ compound GdMnO$_{3}$. The disturbance of the original TbMnO$_{3}$ lattice is also reflected in changes of the Mn-O and Tb-O interatomic distances, both with maximal $\Delta d/d \sim 1.1\%$, as presented in Table~\ref{table2}.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=85mm]{prokhnenko_tbmngao3_fig1.eps}
\caption{(Color online) Magneto-structural parameters of TbMn$_{1-x}$Ga$_{x}$O$_{3}$ compounds as determined from ND: lattice parameters and volume at RT (a)-(b); magnetic propagation vector at 2 K, Mn N\'{e}el temperature and ferroelectric transition temperature (c); FWHM of the Tb-magnetic reflections and correlation length of the Tb clusters at 2 K (d), as a function of refined Ga content. The coherence length has been estimated using the Scherrer formula. Cross symbols in panel (d) correspond to the instrument resolution at the position of the strongest Tb-magnetic reflection.} \label{fig1}
\end{center}
\end{figure}
\begin{table}
\begin{center}
\begin{ruledtabular}
\begin{tabular}{cccc}
$x$ & 0 & 0.04 & 0.10 \\ \hline
Mn-O$_{1}$ (\AA) & 1.9431(7) & 1.9463(8) & 1.9471(8) \\
Mn-O$_{2}$ (\AA) & 1.9069(16) & 1.9060(20) & 1.9131(20) \\
Mn-O$_{2}$ (\AA) & 2.2331(17) & 2.2235(18) & 2.2084(18) \\ \hline
$\widehat{MnO_{2}Mn}$ ($^{\circ}$) & 144.99(7) & 145.15(8) & 145.22(8) \\
$\widehat{MnO_{1}Mn}$ ($^{\circ}$) & 144.39(3) & 144.38(3) & 144.77(3) \\ \hline
Tb-O$_{1}$ (\AA) & 3.661(3) & 3.633(3) & 3.622(4) \\
Tb-O$_{1}$ (\AA) & 2.340(3) & 2.350(3) & 2.341(4) \\
Tb-O$_{1}$ (\AA) & 3.203(3) & 3.198(3) & 3.189(3) \\
Tb-O$_{1}$ (\AA) & 2.274(3) & 2.269(3) & 2.276(3) \\
Tb-O$_{2}$ (\AA) & 2.542(2) & 2.538(2) & 2.535(3) \\
Tb-O$_{2}$ (\AA) & 2.570(2) & 2.578(2) & 2.582(2) \\
Tb-O$_{2}$ (\AA) & 2.317(2) & 2.311(2) & 2.311(3) \\
Tb-O$_{2}$ (\AA) & 3.666(2) & 3.655(2) & 3.648(3) \\
\end{tabular}
\caption{Selected distances and angles from Rietveld refinements of powder ND data for TbMn$_{1-x}$Ga$_{x}$O$_{3}$ ($x$ = 0, 0.04, 0.1) at RT.}
\label{table2}
\end{ruledtabular}
\end{center}
\end{table}
\subsection{Magnetic and dielectric properties}
Turning to the magnetic properties, in Fig.~\ref{MvsT-data} we present the temperature dependence of the magnetization of TbMn$_{1-x}$Ga$_{x}$O$_{3}$ single crystals measured along the $a$- and $b$-crystallographic directions with $H$ = 5~kOe and along $c$ with 1~kOe. We start with the magnetization curves measured with $H\| c$, as they show anomalies corresponding to the onset of both Mn- and Tb-magnetic order. As previously reported for the $x =0$ compound, $M_{c}$ presents three sharp anomalies at $T_{\rm N}^{\rm Mn} = 42$~K, $T_{\rm FE} = 27$~K and $T_{\rm N}^{\rm Tb}$ $ = 7$~K, attributed to the Mn N\'{e}el transition, the transition to the Mn-cycloidal state associated with the development of ferroelectricity,
and the Tb N\'{e}el transition, respectively.\cite{kimura} The 4$\%$ Ga-containing compound preserves all three transitions. The first two are shifted down to 39~K and 25~K, whereas the last anomaly is strongly reduced and has a maximum around 4.5~K. The 10\% Ga sample shows only one sharp transition at $T_{\rm N}^{\rm Mn}=36$~K and a broad anomaly around 15 K, which might reflect the remnant of the ferroelectric transition at $T_{\rm FE}$. Any signature of $T_{\rm N}^{\rm Tb}$ ~is clearly absent at this composition.
\begin{figure}
\begin{center}
\includegraphics[width=90mm]{prokhnenko_tbmngao3_fig2.eps}
\caption{(a) Temperature dependence of the magnetic d.c. susceptibility for TbMn$_{1-x}$Ga$_{x}$O$_{3}$ ($x=$ 0, 0.04 and 0.1) single crystals measured at 1 kOe along the $c$-direction. The insets show the d.c. susceptibility of the same single crystals measured along the $a$- and $b$-crystallographic directions with $H=$ 5 kOe. Data have been collected on heating after previous zero-field cooling down to $T=2$ K.} \label{MvsT-data}
\end{center}
\end{figure}
The magnetization curves along the $a$- and $b$-axes look very similar for all compositions, showing mainly paramagnetic Curie-Weiss behavior of the Tb-moments down to $T_{\rm N}^{\rm Tb}$, as shown in the insets of Fig.~\ref{MvsT-data}. The anomaly corresponding to the onset of Tb-magnetic ordering at $T_{\rm N}^{\rm Tb}$~shifts to lower temperatures and becomes broader with increasing Ga content.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=85mm]{prokhnenko_tbmngao3_fig3.eps}
\caption{Temperature evolution of the relative dielectric constant ($\epsilon^{r}_{c}$) of TbMn$_{1-x}$Ga$_{x}$O$_{3}$ with $x$ = 0, 0.04 and 0.1, measured on cooling at a frequency of 20 kHz with the electric field along the $c$-direction. The data for $x=0$ are extracted from Ref.\cite{kimura3}.}
\label{fig5}
\end{center}
\end{figure}
Having observed a rather similar magnetic behavior of the Mn-sublattice in the pure and the substituted TbMnO$_{3}$ compounds, we performed dielectric measurements on the $x = $ 4\% and 10\% samples. Impedance spectroscopy measurements from 1 kHz up to 20 kHz reveal no relaxor behavior for any compound. Similar to TbMnO$_{3}$, the 4\% and 10\% Ga samples present a strong anisotropy and an anomaly in the dielectric constant along the $c$- and $a$-directions at 40~K. In TbMnO$_{3}$ the ferroelectric transition is characterized by a lambda-shaped anomaly in the $c$-direction at 27~K (Fig.~\ref{fig5}).\cite{kimura3} The dielectric measurements along the $c$-direction for the 4\% and 10\% samples show similar anomalies shifted to lower temperatures. They appear at 24~K and 15~K, respectively, in perfect agreement with $T_{\rm FE}$ determined from the magnetization measurements. The height of the lambda-shaped anomaly of the dielectric constant ($\delta\epsilon$) is reduced with increasing Ga content, from $\delta\epsilon \approx 4$ for $x=$ 0 down to 1.8 and 1 for $x=$ 4\% and 10\%, respectively. All these features suggest that at the respective $T_{\rm FE}$ the compounds undergo a ferroelectric transition that becomes less pronounced with increasing Ga-doping.
The results of the magnetization and dielectric-constant measurements as a function of temperature are summarized in Fig.~\ref{fig1}c. It shows that the substitution of nonmagnetic Ga$^{3+}$ for magnetic Mn$^{3+}$ reduces $T_{\rm N}^{\rm Mn}$ linearly, with a slope of $-0.6$~K per 1$\%$ of Ga. This is direct evidence of an effective reduction of the strength of the Mn-Mn interaction, $J_{\rm Mn-Mn}$, and, consequently, we suggest that $J_{\rm Mn-Tb}$ should be reduced as well. $T_{\rm FE}$ is also reduced, by $-1.2$ K per 1$\%$ of Ga.
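As a quick consistency check, the quoted slope follows from a least-squares line through the three N\'{e}el temperatures given above (42, 39 and 36 K for $x$ = 0, 4\% and 10\%):

```python
# least-squares slope of T_N^Mn versus Ga content (in %), pure Python
x = [0.0, 4.0, 10.0]      # Ga content (%)
tn = [42.0, 39.0, 36.0]   # T_N^Mn (K), values quoted in the text

n = len(x)
xbar = sum(x) / n
tbar = sum(tn) / n
slope = (sum((xi - xbar) * (ti - tbar) for xi, ti in zip(x, tn))
         / sum((xi - xbar) ** 2 for xi in x))
# slope comes out close to -0.6 K per 1% Ga, as stated in the text
```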
The most surprising effect, however, is that the Ga for Mn substitution affects the anomalies at $T_{\rm N}^{\rm Tb}$: the larger $x$, the smaller and broader the anomaly corresponding to the transition to Tb-magnetic ordering becomes (see Fig.~\ref{MvsT-data}).
Further, we studied the magnetic-field response of the Ga-substituted compounds. Fig.~\ref{MvsH} shows isothermal magnetization curves as a function of magnetic field for all samples. The data for the undoped TbMnO$_{3}$ compound are in good agreement with previously published results.\cite{kimura3} For $H\|a$ at 2 K (Fig.~\ref{MvsH}a), two magnetic transitions are observed at $H_{\rm c}\sim$ 1.7 T and 9 T. The critical fields of all meta-magnetic transitions, $H_{\rm c}$, have been defined by the intersection point between two lines: the first being a linear fit of $M(H)$ before the anomaly, the second passing through the inflection point of the anomaly, as shown, for example, in Fig.~\ref{MvsH}d. The low-field metamagnetic transition is a transition to the field-forced ferromagnetic state associated with the Tb-sublattice. Similar meta-magnetic rare-earth behavior is reported for the isostructural TbAlO$_3$ compound.\cite{holmes} We note that the magnetization is not saturated up to 14~T and that the extracted value of the Tb-moment, $\sim$ 6.5~$\mu_{\rm B}$, is rather low. All this suggests that the Tb moments do not become fully ferromagnetically ordered above 2~T. Instead, because of their strong Ising anisotropy, they presumably arrange in a noncollinear (F$_x$~C$_y$~0) pattern (Bertaut notation)\cite{Bertaut} with $m_x \sim$ 6.5~$\mu_{\rm B}$. Assuming the theoretical 9~$\mu_{\rm B}$/Tb$^{3+}$, this would correspond to a 36$^\circ$ canting from the $a$-axis, in good agreement with reported data on the Tb-anisotropy.\cite{Quezel,bielen} The much less pronounced second transition corresponds to the flop of the electric polarization from $\mathbf{P}\|c$\space to $\mathbf{P}\|a$, presumably caused by the flop of the Mn cycloidal plane.\cite{kimura3,nadir2}
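The two-straight-line criterion for $H_{\rm c}$ described above can be sketched numerically: fit a line to $M(H)$ below the anomaly, take the tangent at the inflection point (maximum slope), and intersect the two lines. The smoothed-step $M(H)$ curve below is synthetic, purely to illustrate the construction, not measured data:

```python
# H_c from the intersection of a pre-anomaly linear fit and the
# tangent at the inflection point of a metamagnetic step
import math

# synthetic curve: linear background plus a smoothed step at H = 2 T
H = [i * 0.001 for i in range(4001)]                  # 0 .. 4 T
M = [0.1 * h + 1.5 * (1 + math.tanh((h - 2.0) / 0.3)) for h in H]

# line 1: least-squares fit of M(H) for H < 1 T (before the anomaly)
pts = [(h, m) for h, m in zip(H, M) if h < 1.0]
n = len(pts)
hbar = sum(p[0] for p in pts) / n
mbar = sum(p[1] for p in pts) / n
a1 = (sum((p[0] - hbar) * (p[1] - mbar) for p in pts)
      / sum((p[0] - hbar) ** 2 for p in pts))
b1 = mbar - a1 * hbar

# line 2: tangent at the inflection point (maximum numerical slope)
slopes = [(M[i + 1] - M[i - 1]) / (H[i + 1] - H[i - 1])
          for i in range(1, len(H) - 1)]
i0 = 1 + max(range(len(slopes)), key=slopes.__getitem__)
a2 = slopes[i0 - 1]
b2 = M[i0] - a2 * H[i0]

Hc = (b2 - b1) / (a1 - a2)   # intersection of the two lines, in T
```

Note that this criterion places $H_{\rm c}$ at the onset of the anomaly, slightly below the inflection field itself.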
Both meta-magnetic transitions are observed in the $x=4$\% and $x=10$\% samples.
The field-forced Tb-ferromagnetic ordering is observed at slightly changed critical fields of 1.8 T and 2.4 T for $x=4$\% and $x=10$\%, respectively, while the second (polarization flop) transition is found at a higher field, $H_{\rm c}=11$ T, for $x=4$\% (see the inset of Fig.~\ref{MvsH}(a)). A change in slope is also observed for $x=10$\% around 14 T, suggesting that the second transition might occur at even higher fields.
For $H\|b$\space at 2 K, two meta-magnetic transitions are also observed in TbMnO$_{3}$, in good agreement with published data.\cite{kimura3} The first one, at 1.7 T, is a transition to a magnetic state where Tb orders with $\mbox{\boldmath$\tau$}=$ 1/3 $\mathbf{b}^{*}$.\cite{nadir} The second transition is observed at around 4.9 T. Based on the data on TbAlO$_3$,\cite{holmes} we suppose that the Tb-spins undergo a transition to a noncollinear (C$_x$~F$_y$~0) arrangement in this state. According to Ref.\cite{nadir2}, this transition is associated with the flop of the Mn-cycloidal plane from $bc$ to $ab$, causing the electric polarization flop from $\mathbf{P}\|c$\space to $\mathbf{P}\|a$.\cite{kimura3,nadir2} Both meta-magnetic transitions are observed in the $x=4$\% sample. They are shifted to lower critical fields of 1.0 T and 4.6 T, respectively. Only one clearly separated transition, with $H_{\rm c}=4.1$ T, is observed for the $x=$ 10\% sample.
For $H\|c$, TbMnO$_{3}$ shows one metamagnetic transition, as visualized in Fig.~\ref{MvsH}(c-d) for 2 and 16 K, respectively. It corresponds to the transition from the cycloidal Mn-magnetic structure to a simple canted antiferromagnetic structure and, simultaneously, from the ferroelectric to the paraelectric state.\cite{ArgyriouHc,kimura3} Both substituted compounds show similar metamagnetic transitions at lower critical fields, Fig.~\ref{MvsH}(c-d).
In summary, bulk characterization of the Ga-substituted compounds revealed rather similar magneto-electric phenomena in all the studied samples. The only crucial differences concern the Tb-magnetic ordering, where a strong suppression of the relevant anomalies has been observed. To clarify what causes these changes, a microscopic view of the Mn- and Tb-magnetic orders in the Ga-substituted samples has been taken.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=98mm]{prokhnenko_tbmngao3_fig4.eps}
\caption{Magnetization isotherms of TbMn$_{1-x}$Ga$_{x}$O$_{3}$ ($x=$ 0, 0.04 and 0.1) single crystals with magnetic field along the $a$- (a), $b$- (b) and $c$- (c) crystallographic directions at $T=2$ K. The inset in panel a) shows a zoomed part of the $\mathbf{H}\|a$\space magnetization curve. Panel d) presents the magnetization isotherm at $T=16$ K for $\mathbf{H}\|c$. After each hysteresis loop the samples have been heated up to 160 K and then zero-field cooled down to the target temperature.}
\label{MvsH}
\end{center}
\end{figure}
\subsection{Magnetic ordering}
Microscopic insight into the magnetic ordering in TbMn$_{1-x}$Ga$_{x}$O$_{3}$ was obtained using ND on powder (Fig.~\ref{fig3}) and single-crystal samples (Fig.~\ref{tauvsT}). XRMS has been used to isolate the Tb-magnetic signal when it contributed to the same reflections as Mn. Cooling the samples below $T_{\rm N}^{\rm Mn}$, we found a set of magnetic satellites in Brillouin zones ($hkl$) with extinction conditions $h+k=even$, $l=odd$, known as A-type reflections.\cite{Bertaut} They arise from the ordering of the Mn-spins, which is incommensurate (ICM) and characterized by the propagation vector $\mbox{\boldmath$\tau$}^{\rm Mn}$ $=(0~\tau^{\rm Mn}_y~0)$. The temperature dependence of the propagation vector is shown in Fig.~\ref{tauvsT}a. It decreases linearly with decreasing temperature from $T_{\rm N}^{\rm Mn}$ down to $T_{\rm FE}$ for all compositions. Below $T_{\rm FE}$, $\tau$ stays constant.
With increasing Ga-content $\tau^{\rm Mn}$ decreases (Fig.~\ref{fig1}c), which is directly related to the change of the Mn-O-Mn bond angle. One also observes a reduction of the overall magnetic satellite intensities, as expected from the reduced total magnetization.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=85mm]{prokhnenko_tbmngao3_fig5.eps}
\caption{(Color online) Neutron diffraction patterns of TbMn$_{1-x}$Ga$_{x}$O$_{3}$ compounds as measured with E9 diffractometer ($\lambda = $2.816~\AA) at 2 K.}
\label{fig3}
\end{center}
\end{figure}
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=85mm]{prokhnenko_tbmngao3_fig6.eps}
\caption{a) Temperature evolution of the propagation vector $\tau^{\rm Mn}$ of
TbMn$_{1-x}$Ga$_{x}$O$_{3}$ with $x$ = 0, 0.04, 0.1, collected with the E1 triple axis spectrometer. Panels b-d) represent the temperature evolution of the integrated intensity of the G-type (0 1-$\tau$ 1), A-type (0 $\tau$ 1) and A-type (0 2-$\tau$ 1) incommensurate magnetic reflections, respectively. The inset in panel b) zooms in on the temperature evolution of the integrated intensity of the G-type reflection around $T_{\rm FE}$. The data have been collected on heating.}
\label{tauvsT}
\end{center}
\end{figure}
The A-type satellites are present for all studied compositions in the temperature range from $T_{\rm N}^{\rm Mn}$ down to 2~K, showing that the main features of the Mn magnetic structures stay similar for all compounds.
Using representation analysis,\cite{Bertaut} the Mn antiferromagnetic ordering below $T_{\rm N}^{\rm Mn}$ can be described by a single component of the irreducible representation $\Gamma_{3}(0, A_{y},0)$. This gives rise to an amplitude-modulated AF ordering of the Mn spins along the $b$-direction below $T_{\rm N}^{\rm Mn}$.\cite{Quezel}
Below $T_{\rm FE}$, the Mn-magnetic structure of TbMnO$_3$ has been described by a linear combination of two irreducible representations, $\Gamma_{3}\oplus\Gamma_{2}$, where $\Gamma_{2}$ also has an A-component along the $c$-axis, $(0, 0, A_{z})$.\cite{kenz} The resulting structure is characteristic of the ferroelectric phase: a cycloid $m$(Mn)=$(0,m_{y},m_{z})$ in which the Mn-spins rotate within the $bc$-plane.
The fact that the coupling between the $m_z$ Fourier components is also of A-type complicates the structure analysis, as both $m_y$ and $m_z$ contribute to the same A-type reflections. However, following the temperature dependence of a single magnetic reflection often helps to reveal an anomaly at the temperature ($T_{\rm FE}$ in our case) at which the second magnetic component develops. Indeed, the temperature evolution of the integrated intensity of the selected (0, 2-$\tau^{\rm Mn}$, 1) and (0, $\tau^{\rm Mn}$, 1) reflections, plotted in Fig.~\ref{tauvsT}c,d, shows an inflection point at 27.5 K for the $x=0$ compound, in good agreement with $T_{\rm FE}$.
Similar considerations applied to the Ga-doped samples allow us to conclude that their Mn-structures have a second magnetic component of A-type developing below
25~K and 17~K for the $x=$ 0.04 and 0.1 Ga-doped samples, respectively. Both temperatures are again in perfect agreement with the $T_{\rm FE}$ determined from the bulk measurements. The arrows in Fig.~\ref{tauvsT}(c) show that the relative amount of the spin component along the $c$-direction decreases with increasing Ga-content.
The results of the magnetic-structure powder refinement, assuming cycloidal magnetic ordering $m$(Mn)=$(0,m_{y},m_{z})$, are presented in Table~\ref{table1}.
Coupling between the Tb- and Mn-magnetic orders manifests itself below $T_{\rm N}^{\rm Mn}$, where the Mn-spin lattice polarizes the rare-earth magnetic sublattice.\cite{kenz,nadir-review} This is reflected in the appearance of G-, C- and F-type reflections that become especially strong below $T_{\rm FE}$. Fig.~\ref{tauvsT}b shows the temperature dependence of a selected G-type reflection for all compositions. Unlike the A-type reflections, this
one has a pronounced concave shape, characteristic of induced order. With increasing Ga-content the intensity of the induced reflections drastically decreases. Since the intensity is proportional to the squared magnetic moment, ${\bf M}_{\perp}({\bf Q})^2$, one can conclude that the effective field acting on the Tb-moments from the Mn-spin structure is indeed reduced upon Ga-substitution.
Below $T_{\rm N}^{\rm Tb}=$ 7 K, undoped TbMnO$_3$ shows very intense magnetic satellites appearing at positions characterized by a propagation vector $\tau^{\rm Tb}$= 0.426(2) $\mathbf{b}^{*}$ ($\sim 3/7$) and its odd harmonics (Fig.~\ref{fig3}). Quezel $et~al.$\cite{Quezel} proposed a sine-wave structure with the Tb-moments lying along two directions symmetric with respect to the $b$-axis (57$^\circ$) within the $ab$-plane. According to the results of our previous work,\cite{prok2} in the one-dimensional case the Tb-magnetic structure can be understood as being built from $\uparrow \uparrow \downarrow \downarrow$-blocks in the $ab$-plane,
with some irregularities to adjust the Ising-like Tb spins to the periodic Mn ordering (Fig.~\ref{structure}a). Taking into account data on the rare-earth behavior in the isostructural 3$d$-less aluminates $R$AlO$_3$\cite{bielen,bouree}~and in similar manganites with $R=$ Dy and Ho,\cite{prok,munoz} which all show the same type of G$_x$A$_y$ ordering with $\tau=0$ and $\tau=1/2$, respectively, one can make this model more realistic. Indeed, taking the G$_x$A$_y$-coupling to preserve the Tb-anisotropy
and keeping the same periodicity as in Fig.~\ref{structure}a, one obtains the structure visualized in Fig.~\ref{structure}b. This structural model has been used to fit our powder diffraction data, and the results of the refinement are presented in Table~\ref{table1}. One has to stress, however, that this solution is not unique and the real structure might be even more complex. On the other hand, it accounts for both the observed harmonic clamping of the Tb- and Mn-propagation vectors at low temperatures\cite{prok2} and the Tb-anisotropy.\cite{bielen}
One of the most striking effects of Ga for Mn substitution appears below $T_{\rm N}^{\rm Tb}$, where magnetic ordering of the Tb-sublattice is expected (Fig.~\ref{fig3}). Already the powder ND pattern of the 4$\%$ substituted compound loses all the features of long-range Tb magnetic ordering: the Tb superlattice reflections are replaced by broad, low-intensity peaks visible predominantly at the low-angle positions of the strongest Tb-peaks. With increasing Ga-content the absolute intensity of the Tb peaks drops and their width increases (Fig.~\ref{fig1}d). Altogether, the observed behavior reflects the destruction of the long-range Tb-magnetic ordering and its replacement by short-range magnetic clusters with a correlation length of less than 25~\AA\ (Fig.~\ref{fig1}d).
The same effect is found by XRMS at the Tb L$_3$ absorption edge. In particular, XRMS data on TbMn$_{1-x}$Ga$_{x}$O$_{3}$ single crystals showed the presence of induced Tb magnetic ordering with the Mn propagation vector $\tau^{\rm Mn}$=0.262 $\mathbf{b}^{*}$~at $2 < T <28$~K for $x=4$\%. The observed magnetic reflections are about 30 times weaker than those of pure TbMnO$_3$ and exist down to the lowest temperatures, where no additional reflections corresponding to Tb long-range magnetic ordering (i.e. with $\mathbf{\tau^{Tb}}$) have been observed. For $x=10$\%, XRMS found no Tb-magnetic reflections in the full temperature range below $T_{\rm FE}$. Single-crystal ND, however, reveals some anomalies in the temperature dependence of the Mn propagation vector and in the integrated intensity and width of the ICM reflections. At 17~K on heating, $\tau$ suddenly decreases (Fig.~\ref{tauvsT}a) and, simultaneously, the width of all the Mn ICM reflections gets smaller, indicating a sudden change of the Mn-Mn vs. Mn-Tb magnetic correlations.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=88mm]{prokhnenko_tbmngao3_fig7.eps}
\caption{Magnetic unit cell of the Tb-sublattice in the $ab$-plane (in the adjacent planes along the $c$-axis the magnetic moments are antiparallel): a) the structure proposed by the one-dimensional ANNNI Ising model from Ref.~\cite{prok2}; b) a magnetic structure model taking into account the Tb-anisotropy.}
\label{structure}
\end{center}
\end{figure}
\section{Discussion}
Based on the results of the magnetic, dielectric, neutron and x-ray diffraction measurements, a magnetic phase diagram of the TbMn$_{1-x}$Ga$_{x}$O$_{3}$ compounds with $x$ = 0, 0.04, 0.1 has been constructed (Fig.~\ref{Diagram}). For small Ga doping there is no dramatic change in the magnetic order of the system.\footnote{The minor changes in the Mn-magnetic order allow us to neglect another unavoidable effect of Ga for Mn substitution: the disturbance of the Mn orbital order.}
From the dielectric measurements along the $c$-axis we see that Ga doping also does not change the ferroelectric state. However, substitution provides a platform for studying the interplay between the Mn- and Tb-magnetic states.
\begin{figure}[bt!]
\begin{center}
\includegraphics[width=75mm]{prokhnenko_tbmngao3_fig8.eps}
\caption{Magnetic phase diagram of TbMn$_{1-x}$Ga$_{x}$O$_{3}$ compounds with $x$ = 0, 0.04, 0.1 for magnetic fields applied along the $\mathbf{H}\|a$~(a), $\mathbf{H}\|b$~(b) and $\mathbf{H}\|c$~(c) axes. The squares, circles and triangles represent the data obtained by measurements of the magnetization and dielectric constant for the $x=0$, 4 and 10\% samples, respectively. P-M/PFe denotes the paramagnetic and paraelectric phase, while LW-AFM/PE corresponds to the phase where Mn has a long-wavelength antiferromagnetic ordering and the state is still paraelectric. The phases Ia, Ib and Ic are identical and correspond to the $bc$-cycloidal Mn-ordering accompanied by $P_{\rm S}$ along the $c$-direction. IIa is the field-forced ferromagnetic ordering of Tb, while IIb is an antiferromagnetic ordering of Tb with $\tau=\frac{1}{3}$. The IIIa and IIIb phases denote the $ab$-cycloidal Mn-ordering with $P_{\rm S}$~along the $a$-direction (for $x=0$, at least). The IIc zone is the paraelectric phase with a simple Mn-antiferromagnetic ordering. The dashed lines are the expected phase boundaries for the Ga-containing compounds. The dielectric phase boundary for $x=0$ has been extracted from Ref.~\cite{kimura3}.}
\label{Diagram}
\end{center}
\end{figure}
According to the phase diagram in Fig.~\ref{Diagram}, all the magnetic phases persist for the $x=4$\% and 10\% compounds. The main differences are found in the magnetic state of Tb and in the critical temperatures and fields of the corresponding phase transitions. Let us start with the Tb-magnetic ordering and the corresponding effects of Ga for Mn substitution. Our results unambiguously show that $J_{\rm Mn-Tb}$ is an important ingredient of the Tb-magnetic order both below and above $T_{\rm N}^{\rm Tb}$.
The induced Tb-magnetic ordering ($\tau^{\rm Tb} = \tau^{\rm Mn}$ above $T_{\rm N}^{\rm Tb}$) should be strongly affected by Ga for Mn substitution, which reduces $J_{\rm Mn-Tb}$. This finds direct confirmation in our data, as both $T_{\rm FE}$ and the intensity of the induced Tb-magnetic peaks (i.e. the averaged induced moment) are significantly reduced upon Ga substitution.
Below $T_{\rm N}^{\rm Tb}$, when $J_{\rm Tb-Tb}$ competes with $J_{\rm Mn-Tb}$, the system minimizes its energy through the matching of the Tb- and Mn-wave vectors, $3 \tau^{\rm Tb} - \tau^{\rm Mn} = 1$.\cite{prok2} The resulting Tb-magnetic structure is presented in Fig.~\ref{structure}. Clearly, this magnetic regime is also expected to be strongly affected by the Ga for Mn substitution. Our data strongly support this picture, showing that the substitution of magnetic by nonmagnetic ions in one of the sublattices can change the magnetic ordering in the other sublattice. Here, the Tb-magnetic ordering becomes short-ranged in a very narrow range 0 $< x\leq $ 0.04 of Ga for Mn substitution. First of all, such a low critical concentration explains the scatter in the published data on Tb-ordering,\cite{Quezel,kajimoto,kenz} since a small non-stoichiometry in the samples would have a significant effect on it. Secondly, the disappearance of Tb long-range magnetic ordering proves that the Mn exchange fields are involved in the Tb magnetic ordering in TbMnO$_3$ even below $T_{\rm N}^{\rm Tb}$.
Ideally, by reducing $J_{\rm Mn-Tb}$ one could expect to obtain an independent Tb-magnetic ordering determined only by $J_{\rm Tb-Tb}$. Since TbGaO$_3$ does not exist, one can again look at the 3$d$-less TbAlO$_3$ and DyAlO$_3$, both showing the same type of G$_x$A$_y$ ordering in the $ab$-plane.\cite{bielen,bouree} The Mn-containing DyMnO$_3$ and HoMnO$_3$ show the same type of magnetic ordering, but with a doubling of the unit cell along the $b$-direction.\cite{prok,munoz} Extending the same considerations to TbMnO$_3$, one can expect that the strong $J_{\rm Mn-Tb}$ prevents Tb from having a regular $\tau^{\rm Tb}$=1/2 $\mathbf{b}^{*}$~structure, presumably of G$_x$A$_y$-type (in the one-dimensional case it would simply correspond to $\uparrow \uparrow \downarrow \downarrow$), and that reducing $J_{\rm Mn-Tb}$ would recover it.
This, however, does not happen, since no long-range Tb-order has been observed at low temperatures.
We believe the reason is that Ga for Mn substitution does not correspond to a homogeneous reduction of the Mn-Tb exchange, but rather to pronounced local effects at the sites occupied by Ga.
In this case one has to distinguish between Tb ions that are nearest neighbors of Ga and those that are not. The former are released from the Mn-exchange fields, as seen from the significantly reduced average induced moment below $T_{\rm FE}$, and can form their own $\tau^{\rm Tb}$=1/2 $\mathbf{b}^{*}$~ordering below 7 K. The latter, however, follow the Mn periodicity above 7 K and, consequently, keep the '$3 \tau^{\rm Tb} - \tau^{\rm Mn} = 1$'-state below 7 K (Fig.~\ref{structure}b). As a result, one can still expect a kind of G$_x$A$_y$ ($\uparrow \uparrow \downarrow \downarrow$) order preserved on a local scale, while the overall periodicity is completely broken.
This model finds strong support in the low-field data on doped TbMn$_{1-x}$Ga$_{x}$O$_{3}$ with $\mathbf{H}\|a$\space and $\mathbf{H}\|b$\space (Fig.~\ref{Diagram}). Indeed, these data show no principal differences from the undoped compound. With $\mathbf{H}\|a$\space and $\mathbf{H}\|b$, the low-field phase boundary stays almost unaffected (except for $\mathbf{H}\|b$~in the $x=10$\% sample, where the transition becomes very smooth). Keeping in mind that this transition in TbMnO$_{3}$ corresponds to the magnetic-field-forced noncollinear F$_x$C$_y$ or C$_x$F$_y$ arrangement for $\mathbf{H}\|a$~and $\mathbf{H}\|b$, respectively, one finds it consistent with a Tb-ordering of G$_x$A$_y$ ($\uparrow \uparrow \downarrow \downarrow$) type preserved on a local scale in the Ga-substituted compounds. The high-field boundaries for $\mathbf{H}\|a$ and $\mathbf{H}\|b$, both increasing upon Ga substitution, are more difficult to interpret. Intuitively, one would expect a reduction of both critical fields. However, to answer this question one has to know the magnetic and electric states of these compounds above these critical fields. This issue lies outside the scope of the present publication.
Finally, we summarize the effects of Ga for Mn substitution on the Mn-sublattice. Since it is the Mn-sublattice that has been diluted, one can straightforwardly interpret the changes directly involving the Mn-magnetism. First, we note the reduction of $T_{\rm N}^{\rm Mn}$ and of the total magnetization, reflecting the reduced $J_{\rm Mn-Mn}$.
Neutron diffraction confirms that the magnetic structures observed below these temperatures are similar, while the dielectric measurements show anomalies pointing to the ferroelectric character of the transitions at $T_{\rm FE}$ for all samples. Among the magnetization isotherms, one can look at the $\mathbf{H}\|c$~data, as the Tb-contribution there is minimal. The critical field $H_{\rm c}$ along the $c$-axis in TbMnO$_{3}$ corresponds to the phase transition from the ICM Mn-cycloid to a simple commensurate antiferromagnetic ordering. Substituting nonmagnetic Ga for Mn should result in a reduction of $H_{\rm c}$, as observed in the experiment (see Fig.~\ref{MvsH}). Generally, one can conclude that all the changes in the magnetic properties of the Mn-sublattice are mainly of quantitative character and in agreement with an effective reduction of $J_{\rm Mn-Mn}$.
\section{Conclusions}
Ga for Mn substitution in multiferroic TbMnO$_{3}$ has been performed in order to investigate experimentally the influence of the Mn-magnetic ordering on the Tb-magnetic sublattice. Single- and polycrystalline TbMn$_{1-x}$Ga$_{x}$O$_{3}$ ($x$ = 0, 0.04, 0.1) compounds have been synthesized and characterized by powder and single-crystal neutron diffraction, x-ray resonant magnetic scattering, single-crystal magnetization and dielectric measurements. The results show that light Ga for Mn substitution does not qualitatively change the magnetoelectric properties of the Mn-sublattice, while it significantly affects the Tb-magnetic ordering. The latter loses its long-range character within a small range of Ga-concentration while preserving the magnetic structure locally. Thus, our work proves that there is a strong influence of Mn-magnetism on the Tb-magnetic ordering down to the lowest temperatures. The whole ordering scheme of Tb in multiferroic TbMnO$_{3}$ results from the competing $J_{\rm Mn-Tb}$ and $J_{\rm Tb-Tb}$ exchange interactions, verifying our theoretical model reported in Ref.~\cite{prok2}.
\section{Acknowledgements}
The authors have benefited from discussions with D. Khomskii, M. Mostovoy and K. Prokes. This work was supported by the Hytrain project of the Marie Curie Research Training Network funded under the EC's 6th Framework Human Resources. D.N.A. thanks the Deutsche Forschungsgemeinschaft for financial support under Contract No. AR 613/1-1.
\section{Introduction}
In 2017, the editor of the Astrophysics Source Code Library (ASCL),\footnote{\url{https://ascl.net}} Alice Allen, was searching NASA's Astrophysics Data System (ADS)\footnote{\url{https://ui.adsabs.harvard.edu/}} to see what she could learn about software and software use in astrophysics research. As a result of a 2015 ADASS BoF session, in which a suggestion was made for ADS to include software in its categorization of entries \citep{2017ASPC..512..675A}, ADS had added a document type value for software \citep{2019ASPC..521..737T}, so she was experimenting with that and did a search for ``doctype:software keyword:NASA.'' There were zero hits (Figure \ref{fig:noresults}). This was perhaps not surprising, since ``doctype:software'' was relatively new, but it prompted Allen to experiment further, using NASA software as the object of her search. This experimentation made it clear that it was difficult to search for and find NASA software in ADS. Exacerbating the situation is the need to search multiple NASA sites, such as \url{https://heasarc.gsfc.nasa.gov/docs/software.html} and \url{https://software.nasa.gov}, to discover NASA research software. Thus, a project idea arose: make it easier to find software developed at or paid for by a funding organization in the ASCL by using keywords, and pass this information to ADS, which already ingests the ASCL and classifies its entries as the ``software'' document type.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.8\linewidth]{P10-212_f1.eps}
\end{center}
\caption{Screenshot of results for a 2017 ADS search for NASA software}
\label{fig:noresults}
\end{figure}
During a NASA Astrophysics Data Analysis Program (ADAP) funding call, we proposed looking for NASA astrophysics research software on NASA sites such as \url{code.nasa.gov} and \url{software.nasa.gov}, adding these codes to the ASCL if an entry for the software did not already exist, and tagging the new or existing ASCL entries with the keyword ``NASA'' and, when appropriate, with the names of the missions for which the software was written or used. Our proposal was accepted; this poster presents the work we have done and our results so far.
\section{Changes to the ASCL}
The ASCL is very careful about the quantity of metadata it stores, as the more metadata is included in the resource, the more maintenance is needed to keep the records up-to-date \citep{2015JORS....3E..15A}; as a result, the Library did not have a keyword field. The NASA ADAP project required that the ASCL be modified to add this field. Not only did this require changes to our database structure, it also required changes to our input and editing forms and code entry display screens. Further, we had to ensure that information we stored in the keyword field would flow to, and be used by, ADS, Web of Science, and other indexers that ingest our records. The designer and developer of the ASCL, Judy Schmidt, was responsible for making these changes to the ASCL infrastructure; she also created reporting mechanisms to easily view the ``NASA'' tagged software on the ASCL. Siddha Mavuram, a University of Maryland computer science major with an interest in astronomy, was hired to develop software and tracking methods for mining NASA software sites. He also developed an API for the ASCL \citep{P9-103_adassxxx} to provide a way to gather the ASCL's information for NASA software programmatically. Peter Teuben provided additional programming and directed Mavuram's activities, and Robert Nemiroff, the founder of the ASCL, provided guidance and served as a sounding board for infrastructure changes.
\section{Tagging existing ASCL records, searching NASA code sites}
Searching for suitable software took place in two phases; first, Allen looked at existing ASCL entries to see which were hosted on NASA websites or indicated funding from NASA and tagged those entries with ``NASA'' and, if appropriate, mission keywords. She then started mining NASA code sites for research software that meets the ASCL's criteria,\footnote{https://ascl.net/wordpress/submissions/editiorial-policy/} aided by tools developed by Teuben and Mavuram; this work is ongoing and will continue through the end of the project.
\section{Results}
Keywords to associate NASA and NASA missions with software now appear on ASCL (Figure \ref{fig:dipsASCLrecordscreenshot}) and ADS records.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.8\linewidth]{P10-212_f2.eps}
\end{center}
\caption{ASCL record showing keywords for NASA and NASA missions}
\label{fig:dipsASCLrecordscreenshot}
\end{figure}
We can find NASA astronomy research software on the ASCL with a keyword search. Because ADS picks up ASCL keywords when it ingests the ASCL's records, it is also possible to search ADS for this software by using ``doctype:software keyword:NASA'' as the search terms. ASCL entries are citable and ADS tracks citations to ASCL entries.
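The same "doctype:software keyword:NASA" search can also be scripted against the public ADS search API rather than the web UI. The sketch below only builds the request (it does not perform it); it assumes you have an ADS API token, and the field list passed in `fl` is just an illustrative choice:

```python
# Sketch: build an ADS API request equivalent to the UI search
# 'doctype:software keyword:NASA'. Assumes an ADS API token; endpoint
# and parameter names follow the public ADS search API.
from urllib.parse import urlencode

ADS_API = "https://api.adsabs.harvard.edu/v1/search/query"

def build_software_query(keyword, token, rows=25):
    """Return (url, headers) for an ADS software-keyword search."""
    params = {
        "q": 'doctype:software keyword:"%s"' % keyword,
        "fl": "bibcode,title,citation_count",  # fields to return
        "rows": rows,
    }
    url = "%s?%s" % (ADS_API, urlencode(params))
    headers = {"Authorization": "Bearer %s" % token}
    return url, headers
```

The resulting `(url, headers)` pair can then be passed to any HTTP client, e.g. `requests.get(url, headers=headers)`.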
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.8\linewidth]{P10-212_f3.eps}
\end{center}
\caption{Screenshot of 2020 ADS search results for NASA software}
\label{fig:results}
\end{figure}
As a result, NASA can see what research has been enabled by making its software public. Through the number of citations (Figure \ref{fig:results}), NASA also has a measure (albeit an incomplete one) of the impact funding software development can have on scientific discovery.
The changes we have made to the ASCL as a result of this project also provide an opportunity to improve discovery of other institutional software, such as that funded by the Heidelberg Institute for Theoretical Studies, as shown in Figure \ref{fig:HITS}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\linewidth]{P10-212_f4.eps}
\end{center}
\caption{Screenshot of 2020 ADS search results for HITS software}
\label{fig:HITS}
\end{figure}
\section{Conclusion}
Discovery of organization-funded or -written software can be improved by leveraging the ASCL and ADS through the use of keywords. This discoverability can improve use of the software, and also provides information as to the impact the software may have on scientific discovery by linking the software to research which used the software.
\acknowledgements This project was funded by NASA award NNH17ZDA001N-ADAP. ASCL is supported by Michigan Technological University, University of Maryland College Park, and Heidelberg Institute for Theoretical Studies.
Q: Facebook Connect: capturing user data with django-profiles and django-socialregistration Either my google searching has completely left me or there's hardly any documentation/tutorials for django-socialregistration. Too bad, because it seems like a nice enough app. Through some trial-and-error, I have managed to get it mostly running on my site.
My question, using django-socialregistration how do I request permission for the facebook user's full name, current city and date of birth and store it in my UserProfile table (which is my AUTH_PROFILE_MODULE for django-profiles) in Django upon registration? Also, how do I post to the user's wall from Django once the connection is made?
Currently, when I click the "Connect with Facebook" button the facebook connection is made, a new Django user is created and the user is logged in with that Django account. However, no UserProfile is created and no facebook profile data is saved.
Any facebook connect gurus out there want to help the Django pony fly to Facebookland?
Setup:
- Django 1.2.1
- Python 2.5.2
- django-socialregistration 0.4.2
- django-registration 0.7
- django-profiles 0.2
"Kind sir, can you please help me find the magical Facebookland?"
A: In facebook_js.html you need to adjust the following line, by uncommenting items that you need to get from FB:
FB.login(handleResponse/*,{perms:'publish_stream,sms,offline_access,email,read_stream,status_update,etc'}*/);
Then, in FacebookMiddleware you can extract that data from fb_user, like this:
facebook.GraphAPI(fb_user['access_token']).get_object('me')
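The `get_object('me')` call above returns a plain Python dict of the user's Graph API fields. A minimal sketch of turning that dict into arguments for a profile record (the `UserProfile` field names `full_name`, `current_city` and `date_of_birth` here are hypothetical, taken from the question; adjust them to match your model):

```python
# Sketch: map a Graph API "me" response onto hypothetical UserProfile fields.
# The "name"/"location"/"birthday" keys follow Graph API conventions of the
# time; which ones are present depends on the permissions you requested.
from datetime import datetime

def profile_kwargs_from_graph(me):
    """Turn facebook.GraphAPI(...).get_object('me') output into keyword
    arguments suitable for UserProfile.objects.create(user=user, **kwargs)."""
    kwargs = {
        'full_name': me.get('name', ''),
        # "location" comes back as a sub-dict: {'id': ..., 'name': ...}
        'current_city': (me.get('location') or {}).get('name', ''),
    }
    # Facebook sends birthdays as MM/DD/YYYY (only if the user granted
    # the birthday permission and exposes the full date).
    birthday = me.get('birthday')
    if birthday:
        try:
            kwargs['date_of_birth'] = datetime.strptime(birthday, '%m/%d/%Y').date()
        except ValueError:
            pass  # e.g. "MM/DD" when the user hides the birth year
    return kwargs
```

In a real project you would call something like `UserProfile.objects.create(user=request.user, **profile_kwargs_from_graph(me))` from an overridden socialregistration setup view or a `post_save` signal on `User`.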
A: FWIW, I just found this moderately helpful nugget from the app author buried in the "Issues" section on github:
question from "tolano":
I have a profile model associated with the users, and everytime the user is created the profile should be created also. Should we create a new custom setup view for this purpose?
I'm finding several problems because the documentation is poor. Thank you very much.
answer from "flashingpumpkin":
Yes. Ideally you'll overwrite the setup view with your own. An easier method to adjust what is done on user creation is to pass a custom form into the setup view. You'll do that by overriding the standard url.
A: Here's another relevant nugget (source: http://github.com/flashingpumpkin/django-socialregistration/issues/closed#issue/7) Enough of these and this page will become the de facto django-socialregistration documentation ;)
question from "girasquid":
Maybe I'm just missing something, but I'm stuck here - is there a way to 'connect' accounts on other sites to an already-existing user?
For example, I've already signed up on Really Awesome Website, so I don't need to sign up again - but I'd like to connect my Facebook and Twitter accounts so that I can sign in with those as well.
Is there a way to do this already? If there isn't...how would I do it?
answer from "flashingpumpkin":
Yes there is. Just use the same template tags for Facebook Connect as you would for registration. Depending on if the user is already logged in or not it will create just the FacebookProfile object and link it to the existing user - or create both, the User object and the FacebookProfile object.
Have a look here:
http://github.com/flashingpumpkin/django-socialregistration/blob/master/socialregistration/templates/socialregistration/facebook_button.html
and
http://github.com/flashingpumpkin/django-socialregistration/blob/master/socialregistration/templatetags/facebook_tags.py
Technology and media are constantly changing. Internet speeds get faster, owning multiple internet-enabled devices is now commonplace, watching TV on the go is rampant and programming has never been richer and more exciting. And it's all thanks to the innovators who come and go, those who challenged convention, shared big ideas, and sparked inspiration, forever changing how people think about and use technology. But how does one continue to foster a crop of big influencers and leaders in an age that can often seem much more fragmented and fickle and when change is constant?
The Cable Center, the nonprofit, educational organization serving the cable industry since 1985, launched a pilot program for emerging leaders in cable earlier this Fall in Denver with this very idea in mind. With the goal being to "pass on the industry's pioneering spirit to a new generation of innovators," the Community of Innovators program aims to not only tell the story of the cable industry (which is why The Cable Center was founded), but to also look to the future.
"A group of us put our heads together a few years ago to think about ways to not only further the cable industry's unprecedented legacy of innovation but to connect a new generation of young people directly to our history by developing a relevant, hands-on and future-focused program," said Bridget Baker, who serves on the board of The Cable Center. Baker, CEO of Baker Media and a 23-year veteran of NBCUniversal and co-founder of CNBC, serves as an "Innovation Laureate"—a mentor to program participants. "CableLabs is certainly focused on technical innovations, and creativity can bubble up anywhere, but The Cable Center is uniquely positioned to preserve and promote industry contributions as a whole and to shine a light on and link together a wider community," Baker further explained.
Joel Tyus, a participant in the 10-week Intrapreneurship Academy that piloted this past Fall and who has worked in the cable industry for the past eight years, said that the program helped him personally to grow projects within his company, Evolution Digital. Tyus is a senior product manager, and previously worked at Time Warner Cable. Participants and mentors in the program were a mix of cable providers, vendors, and programmers from all around the U.S. "The cable industry has been really good to me, and I've enjoyed upward mobility," said Tyus. The program hopes to inspire this type of motivation and mobility in rising leaders in several ways, such as by helping students deliver a big project or idea pitch to their companies by the conclusion of the program.
Many of the guest speakers throughout the program also served as mentors to students, helping them deliver pitches to their respective employers. Made up of four components, the Community of Innovators aims to invest in cable's future through its support of the "Innovation Laureates" like Baker; the 10-week Intrapreneurship Academy; Startup Weeks--competitions that encourage local entrepreneurs to connect with cable industry leaders; and The Mavericks Lecture Series--a program where industry leaders share their insights at universities nationwide.
The Cable Center looks to share the program's results in the near future, and to secure funding to ensure the program's growth and success in the long-term.
"The cable industry has always been very innovative, and the whole idea behind [the program] is to give participants the power to take risks. The Cable Center itself is a nice asset to have because you can see the history of cable. The museum shows where cable has been and primes people for the future to innovate within cable," said Tyus. | {
"redpajama_set_name": "RedPajamaC4"
} | 1,576 |
The Vrije Vrouwen Vereeniging (Free Women's Association) was a women's rights organisation active in the Netherlands from 1889. It was one of the leading nationwide women's organizations of the 19th-century Dutch women's movement. Its purpose was to work for the equality of men and women in education, professional life, law, and politics, and it thereby also worked for women's suffrage, though this was not its main goal.
The organisation was co-founded by Wilhelmina Drucker. One of its most famous actions, which has also been referred to as the highlight of the women's movement in the Netherlands in the 19th century, was the National Exhibition of Women's Labour at the Hague in 1898.
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,577 |
The Beia River is a river in Romania, a tributary of the Paloşul River, located in Braşov County.
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,258 |
Q: Generating xls. There is a table:
ID, Name, Email, Date, Time, Points
Question: how do I save all of this to xls with PHP and give it to the user for download?
A: *
*Importing and exporting data with PHPExcel - the library is heavy, of course, but powerful.
*PHP, making friends of PHP and Excel - here is a description of another library.
A: *
*There is a library for this, called "Spreadsheet_Excel_Writer".
*It is also possible to create the file without third-party libraries, in a simplified form, i.e. no cell coloring, font sizes or formulas. You simply write some data into the required cells. See here.
A: If you don't need formatting (fitting column widths, bold, color, etc.), you can simply take the required cells and write them out to any file as rows, or into a variable that you then serve to the user as a file. In each row, separate your fields with semicolons. Give the file the CSV extension.
If you do need formatting, take a third-party library for working with Excel. I, for example, prefer PHPExcel (as an example, here is a simple description of filling in a file).
A: If XLSX (for recent versions of Office) works for you, it is just XML, which is not a problem to generate. That is, take a ready-made .xlsx, look at what needs to be added when one more row is inserted, and generate away...
A: Want simple saving to XLS?
Just write all the info you need into a file as an HTML table and give the file the .xls extension. Instant happiness :)
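The library-free CSV route suggested in the answers above can be sketched as follows; this is an illustrative Python translation of the idea (hypothetical file and column names), not code from the thread:

```python
import csv

# Columns from the question: ID, Name, Email, Date, Time, Points
rows = [
    ["ID", "Name", "Email", "Date", "Time", "Points"],
    [1, "Alice", "alice@example.com", "2015-01-02", "10:00", 42],
    [2, "Bob", "bob@example.com", "2015-01-03", "11:30", 17],
]

# Semicolon-separated rows, as the answer suggests; spreadsheet
# applications open such a .csv file directly.
with open("report.csv", "w", newline="", encoding="utf-8") as f:
    csv.writer(f, delimiter=";").writerows(rows)
```

The same trick works from PHP by joining the fields with `;` and serving the result under a `.csv` filename.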
"redpajama_set_name": "RedPajamaStackExchange"
} | 3,282 |
{"url":"https:\/\/wechoosethemoon.es\/2015\/10\/08\/landsat-ndvi-leaflet\/","text":"# Satellite crop health: open source toolchain\n\nJorge Garc\u00eda T\u00edscar | October 08, 2015\n\nSatellite imagery is becoming an increasingly important tool for farmers everywhere, allowing them to monitor the health of their crops. It used to be, however, a very expensive service, out of reach for most people. Open data and open source are here to radically change that.\n\nWhile extremely useful to monitor the health and growth of crops, satellite information is difficult to acquire and process. Until now, the only option was to resort to expensive commercial services.\n\nHowever, recent open data and open source initiatives are working to enhance the access to this information. In this post I show how open source tools can be used to access, process and redistribute satellite imagery as an accessible web map.\n\n## The basics\n\nLet\u2019s start from the beginning. How exactly does satellite imagery help us to monitor crop health? One of the most commonly used metrics is called Normalized Difference Vegetation Index (NDVI). But how does it work?\n\nLive green plants get their energy from light. They don\u2019t use, however, the full spectrum of solar radiation: their chlorophyll absorbs visible light (0.4 to 0.7 \u00b5m wavelengths) while the cellular structure of their leaves rejects near-infrared light (from 0.7 to 1.1 \u00b5m) since photon energy at that band is not useful to synthesize organic molecules and would only serve to overheat the plant.\n\nThis means that the presence of healthy plants on the ground will affect the reflected sunlight, deepening the difference between visible content (VIS) and near-infrared content (NIR) of that sunlight. 
We can then devise a simple metric to quantify this relation, the NDVI:\n\n%%% \\text{NDVI}=\\frac{(\\text{NIR}-\\text{VIS})}{(\\text{NIR}+\\text{VIS})} %%%\n\nNote: many other, more sophisticated metrics exist for monitoring vegetation, but agricultural science is not the point of this post!\n\nNow, if only we had something in the sky to measure these reflected sunlight bands, right? Fortunately, several satellites are dedicated to Earth Observation. They employ a wide array of sensors, from cameras to radars to chemical analyzers. We are interested in those with certain characteristics:\n\n\u2022 They\u2019re capable of measuring VIS and NIR light bands\n\u2022 Their imagery is made freely available as open data\n\u2022 An open source toolchain exists to process that imagery\n\u2022 Image resolution is high enough to distinguish particular fields\n\nAs you can imagine, few satellites meet all these requirements. But some do! The obvious candidate is NASA \/ USGS Landsat 8, one of the first whose data was made openly available. It is expected that ESA Sentinel-2 will shortly begin to provide regular data as well, but tools are not ready yet.\n\n## The satellite: Landsat 8\n\nLandsat 8 is the last bird in a long and successful family of civilian Earth Observation satellites of the United States. It carries various instruments, but for now we\u2019re interested in its Operational Land Imager, which can be regarded as a very fancy and expensive multispectral camera.\n\nBands 4 and 5 of this camera capture the VIS and NIR information that we need, respectively. Resolution at these bands is 30m per pixel, a lot less than the sub-meter imagery of Google Maps, but enough for most crop fields. Band information for Sentinel-2, Landsat 8 and 7 is presented in this useful chart:\n\nThe satellite has a revisit time of about 16 days, so don\u2019t expect daily updates. 
However, once Sentinel-2A and Sentinel-2B are available, it is expected that useful data will be available every 3-5 days.\n\n## The toolchain\n\nData from Landsat 8 is distributed in huge scenes, numbered by path and row. A map is available here, but we can search and download them directly on Libra, a website built by Development Seed, who also develops the first tool of our open source toolchain:\n\nNote: to illustrate this post, we will observe crops in the southeast region of Spain: Albacete, Alicante, Valencia and Castell\u00f3n.\n\nInstructions for installing this nice open source tool are available here. It has a wide array of functionality, being able to search, download, and process imagery.\n\n#### Searching for suitable imagery\n\nThe first step must be to search for suitable satellite images of the selected area. For instance, if we only wanted scenes containing Albacete, we could search for coordinates [39\u00ba, -1.86\u00ba]:\n\nThis will result in a JSON output where each element is a scene, including cloud cover percentage, date of acquisition, path and row, satellite ID, scene ID, and a handy thumbnail:\n\nWe can filter these results by any parameter, including date, cloud cover, etc. An alternate way of finding suitable imagery is the Libra tool mentioned above, which relies on this same tool.\n\nNote: However, landsat-util search allows us to set a cron job to update our imagery as we wish!\n\nAs an illustrative example, two scenes have been selected. Contrast has been slightly enhanced, but these are the two thumbnails accessible from the search results:\n\nOnce we have selected the scenes that we want to download (LC81990332015158LGN00 and LC81990322015222LGN00 in this case), we download them using their sceneID parameter:\n\nCaution! These scenes are big (700MB - 1GB each) so mind disk space. 
You can also download only the NIR and red VIS bands using a \"--bands 45\" parameter in landsat-util.\n\n#### Computing NDVI\n\nEach downloaded scene is saved to a folder, defaulting to ~\/landsat\/processed\/[sceneID] and containing a GeoTIFF image for each band. We are interested in bands 4 (red VIS) and 5 (NIR). Recent versions of landsat-util include a routine to automatically compute the NDVI that we must call for each scene:\n\nThis will result in a GeoTIFF image [sceneID]_NDVI.TIFF in each scene folder. The images are colorized according to this colormap developed by @cfastie at Public Lab. Values close to 1 indicate high vegetation density, while 0 and less indicate no vegetation at all. For instance, we can zoom to see the NDVI map of the city of Valencia:\n\n### Join and tile: GDAL\n\nIn order to produce a continuous webmap, images must be stitched together and then divided into a lot of small tiles that can be loaded by the browser. We will use some functions from GDAL, the Geospatial Data Abstraction Library, which was installed as a requirement of landsat-util.\n\n#### Joining images\n\nTwo options are available: we can actually join the images to form a very big image or we can astutely create a \u201cvirtual dataset\u201d or VRT: a simple text file that references all the individual images but can be viewed and processed as one image, as seen on the right:\n\nNote that we have selected projected coordinate system EPSG:3857, suitable for web maps, and instructed the tool to look in each subfolder ** for images containing a *NDVI* string.\n\n#### Dividing the image into tiles\n\nOnce we have a virtual dataset file NDVImap.vrt, we need to divide it into several small tiles at each zoom level. We can use the standard GDAL-provided utility called gdal2tiles.py, but an improved parallelized version is available here.\n\nCaution! Processing huge maps will take a long time. 
We have restricted zoom levels to 8-12 and selected 8 processors, but be careful with the parallelized fork as I've experienced missing tiles when testing it.\n\nNow we have a directory structure in the form Z\/X\/Y.png. The only remaining thing is to render these tiles in a webpage. That\u2019s where the last piece of our open source toolchain enters.\n\n### Render the webmap: LeafletJS\n\nLeaflet is a very powerful mapping JavaScript library. A lot of things can be achieved playing with it, but on this occasion we will just use it to render the tiles that we\u2019ve generated. We start by including the library and the stylesheet into our webpage head:\n\nThen, we have to include a <div> element that will contain our web map:\n\nAnd now include a script to render the tiles as a map layer:\n\nAnd this is the end result: a scrollable web map showing NDVI data from Landsat. Of course, this could be customized by adding a search function, cadastral boundaries for a certain property (in places where they\u2019re available as open data), etc., but the main work is done: from the raw satellite data to the web.\n\nNote: in order to save space, this is a very cropped map of Albacete!\n\n## Conclusions & further work\n\nThrough this long and hopefully-not-so-boring post I\u2019ve tried to describe the open source tools that I\u2019ve found most useful when trying to test the feasibility of turning open satellite data into web maps.\n\nAs stated in the introduction, the reason behind this effort is that satellite data is difficult to work with, and most people (I\u2019m thinking farmers in remote areas) often do not have the capabilities (computing power, bandwidth, spare time to learn) required to make use of that data. 
It is thus interesting to explore ways to process relevant data into a form that allows easy consumption and use, and probably websites are a good medium to disseminate this information.\n\nHowever, none of this is useful if it is only done once, for the purpose of monitoring is to visualize change. As the satellite revisits the location, more and more imagery is captured. It is thus important to create ways of automatically regenerating the map each time new suitable data is available in order to ensure that the information is up to date.\n\nNow, thanks to the work of people like the Development Seed team, open source tools are available to programmatically search and acquire new images from Landsat 8. Hopefully, this trend will expand to other satellites like Sentinel-2, or even platforms like the ISS that are being fitted with cameras and other sensors, and the data will reach those that need it the most.","date":"2017-04-26 05:51:56","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3033182621002197, \"perplexity\": 2223.968499234979}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, 
\"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-17\/segments\/1492917121165.73\/warc\/CC-MAIN-20170423031201-00510-ip-10-145-167-34.ec2.internal.warc.gz\"}"} | null | null |
Route 143, officially Ruta Nacional Secundaria 143, is a national road of Costa Rica located in the provinces of Alajuela and Guanacaste.
Description
In the province of Alajuela, the route crosses the canton of Guatuso (the districts of San Rafael and Cote).
In the province of Guanacaste, the route crosses the canton of Tilarán (the district of Arenal).
See also
Highways of Costa Rica
Anexo:Red Vial Nacional de Costa Rica
"redpajama_set_name": "RedPajamaWikipedia"
} | 274 |
package org.apache.hadoop.utils;
import java.util.concurrent.Callable;
/**
* A task thread to run by {@link BackgroundService}.
*/
public interface BackgroundTask<T> extends Callable<T> {

  /** Returns the priority of this task. */
  int getPriority();
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 5,206 |
\section*{Table of Contents}
\vspace{2em}
\noindent Due to length restrictions, we could not include descriptions of the state-of-the-art techniques, datasets, hyper-parameters, and additional results in the main manuscript. However, to keep the overall manuscript self-contained, we include the following in the supplementary material:
\begin{itemize}
\item Section 1: Description of competing methods.
\item Section 2: Detailed description of benchmark datasets used in our experiments.
\item Section 3: Training procedure and implementation of proposed, and competing methods.
\item Section 4: Additional results.
\item Source code link: \href{https://github.com/mdca-loss/MDCA-Calibration}{MDCA Official PyTorch Github}
\end{itemize}
\section{Competing Methods and hyperparameters used}
\label{sec:compMethods}
In this section, we provide a brief description of each compared method along with the hyperparameter settings used in training.
\begin{itemize}
%
\item For \texttt{Brier Score} \cite{brierloss} we train on the squared error between the predicted probability vector and the one-hot target vector.
%
\item \texttt{Label Smoothing} \cite{labelsmoothinghelp}, takes the form $\texttt{LS} = - \sum_{i=1}^N \sum_{j=1}^K q_{i,j} \log (\hat{p}_{i,j})$ where $\hat{p}_{i,j}$ is the predicted confidence score for sample $i$, for class $j$. Similarly, we define soft target vector, \textbf{${q_{i}}$}, for each sample $i$, such that $q_{i,j}=\alpha/(K-1)$ if $j \neq y_i$, else $q_{i,j} = (1-\alpha)$. Here $\alpha$ is a hyper-parameter. We trained using $\alpha=0.1$, and refer to label smoothing as \texttt{LS} in the results.
%
\item \texttt{Focal loss} \cite{ogfocalloss} is defined as $\texttt{FL} = - \sum_{i=1}^N (1-\hat{p}_{i,y_i})^{\gamma} \log (\hat{p}_{i,y_i})$, where $\gamma$ is a hyper-parameter. We trained using $\gamma \in \{1,2,3\}$, and report it as \texttt{FL} in the results.
%
\item For DCA \cite{dcapaper}, we train on the following loss: $\texttt{NLL} + \beta \cdot \texttt{DCA}$, where \texttt{DCA} is as defined in \cite{dcapaper}, and $\beta$ is a hyper-parameter. We train varying $\beta \in \{1,5,10,15,20,25\}$ as performed in \cite{dcapaper}. DCA results are reported under the name \texttt{DCA}.
%
\item We use \texttt{MMCE} \cite{kumarpaper} as a regularizer along with \texttt{NLL}. We use the weighted \texttt{MMCE} loss in our experiments with $\lambda \in \{2,4\}$.
%
\item For \texttt{FLSD} \cite{focallosspaper}, we train with $\gamma=3$.
\end{itemize}
For each of the above methods, we report the result of the best performing trained model according to the accuracy obtained on the validation set.
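To make the loss definitions above concrete, here is a minimal NumPy sketch of the label-smoothing targets and the per-sample focal-loss term (illustrative helper names; the experiments themselves use PyTorch implementations):

```python
import numpy as np

def smooth_targets(y, K, alpha=0.1):
    """Soft target vector q_i as defined above: 1 - alpha on the
    true class y, and alpha / (K - 1) on every other class."""
    q = np.full(K, alpha / (K - 1))
    q[y] = 1.0 - alpha
    return q

def focal_loss(p_true, gamma=3.0):
    """Per-sample focal loss: -(1 - p_y)^gamma * log(p_y), where
    p_true is the predicted confidence of the correct class."""
    return -((1.0 - p_true) ** gamma) * np.log(p_true)
```

With $\gamma = 0$ the focal term reduces to the standard \texttt{NLL} of the correct class.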
\section{Dataset description}
\label{sec:dataset-desc}
We have used the following datasets in our experiments:
\begin{enumerate}
\item \textbf{CIFAR10} \cite{krizhevsky2009learning}: This dataset has $60,000$ color images of size $32 \times 32$ each, equally divided into $10$ classes. The pre-divided train set comes with $50,000$ images and the test set has around $10,000$ images. Using the policy defined above, we have a train/val/test split having $45000/5000/10000$ images respectively.
%
\item \textbf{CIFAR100} \cite{krizhevsky2009learning}: This dataset comprises $60,000$ colour images of size $32 \times 32$ each, this time equally divided into $100$ classes. The pre-divided train set again comes with $50,000$ images and the test set has around $10,000$ images. We have a train/val/test split having $45000/5000/10000$ images respectively.
%
\item \textbf{SVHN} \cite{netzer2011reading} : The Street View House Number (SVHN) is a digit classification benchmark dataset that contains $600000$ $32\times 32$ RGB images of printed digits (from $0$ to $9$) cropped from pictures of house number plates. The cropped images are centered on the digit of interest, but nearby digits and other distractors are kept in the image. SVHN comes with a training set ($73257$ images) and a test set ($26032$ images). We randomly sample $10\%$ of the training set to use as a validation set.
%
\item \textbf{Tiny-ImageNet} \cite{imagenetpaper} : It is a subset of the ImageNet dataset containing $64 \times 64$ RGB images. It has $200$ classes with each class having $500$ images. The validation set contains $50$ images per class. We use the provided validation set as the test set for our experiments.
%
\item \textbf{20 Newsgroups} \cite{20newsgroup}: It is a popular text classification dataset containing $20,000$ news articles, categorised evenly into $20$ different newsgroups based on their content.
%
\item \textbf{Mendeley V2} ~\cite{kermany2018labeled}: Inspired by \cite{dcapaper}, we use this medical dataset. The dataset contains OCT (optical coherence tomography) images of the retina and pediatric chest X-ray images. However, we only use the chest X-ray images in our experiments. The chest X-ray images come with a pre-defined train/test split having $4273$ pneumonia images and $1583$ normal images of the chest.
%
\item \textbf{PACS dataset} \cite{pacspaper}: We use this dataset to study calibration under domain shift. The dataset comprises a total of $9991$ images spread across $4$ different domains with $7$ classes each. The domains are, namely, Photo, Art, Sketch and Cartoon. We fine-tune the ResNet-18 model, pretrained on the ImageNet dataset, on the Photo domain using various competing techniques and test on the other three domains to measure how calibration holds under a domain shift. Following \cite{pacspaper}, we also divide the training set of the Photo domain into a $9:1$ train/val split.
%
\item \textbf{Rotated MNIST Dataset}: This dataset is also used for domain shift experiments. Inspired by \cite{domain-drift-calibration-posthoc}, we create 5 different test sets, namely $\{M_{15}, M_{30}, M_{45}, M_{60}, M_{75}\}$. Domain drift is introduced in each $M_{x}$ by rotating the images in the MNIST test set by $x$ degrees counter-clockwise.
\item \textbf{Segmentation Datasets- PASCAL VOC 2012 \cite{pascal-voc-2012}}: This is a segmentation dataset consisting of images with pixel-level annotations. There are 21 classes overall (one background class and 20 foreground object classes). The dataset is divided into \textit{Train} (1464 images), \textit{Val} (1449 images) and \textit{Test} (1456 images) sets. We, however, only make use of the \textit{Train} and the \textit{Val} sets. Models are trained on the \textit{Train} set and the evaluation is reported on the \textit{Val} set.
\end{enumerate}
\section{Training Procedure and Implementation}
\label{sec:trainProc}
\myfirstpara{Backbone Architecture}
We use ResNet \cite{resnetpaper} backbones for most of our experiments. For training on the CIFAR10 and CIFAR100 datasets we used ResNet-32 and ResNet-56. For SVHN we used ResNet-20 and ResNet-56. Following \cite{dcapaper}, for the Mendeley V2 dataset, we use a ResNet-50 architecture pre-trained on the ImageNet dataset \cite{imagenetpaper}. We use the backbone as a fixed feature extractor and add a $1 \times 1$ convolutional layer and two fully connected layers on top of the feature extractor. Segmentation experiments make use of DeepLabV3+ \cite{deeplabv3Plus} based on the Xception65 \cite{xception} backbone.
\mypara{Train Parameters}
For all our experiments we used a single Nvidia 1080 Ti GPU. We trained on CIFAR10 for a total of $160$ epochs with an initial learning rate of $0.1$, reduced by a factor of $10$ at the $80^{th}$ and $160^{th}$ epochs. The training batch size was kept at $128$, and the DNN was optimized using Stochastic Gradient Descent (SGD) with momentum $0.9$ and weight decay $0.0005$. Furthermore, we augment the images using random center crops and horizontal flips. We use the same parameters for CIFAR100, except that we train for $200$ epochs; the learning rate is again reduced by a factor of $10$, this time at epochs $100$ and $150$. For Tiny-ImageNet, we followed the same training procedure as \cite{focallosspaper}. For SVHN, we keep the same training procedure as above except for the number of epochs: we train for $100$ epochs, reducing the learning rate by a factor of $10$ at epochs $50$ and $70$. We do not augment the images when training on SVHN. We use the PyTorch framework for all our implementations. Our repository is inspired by \url{https://github.com/bearpaw/pytorch-classification}. We also take help from the official implementation of \cite{focallosspaper} to implement some of the baseline methods. For the segmentation experiments we make use of the following repository: \url{https://github.com/LikeLy-Journey/SegmenTron}. We train the segmentation models for $120$ epochs using the SGD optimizer with a warm-up LR scheduler. To train with focal loss, we used $\gamma = 3$. The rest of the parameters for the optimizer and the scheduler were kept the same as provided by the SegmenTron repository.
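The step-decay schedules described above all follow the same rule: divide the learning rate by $10$ each time a milestone epoch is passed. A dependency-free sketch (hypothetical function name; milestones shown for the CIFAR100 schedule):

```python
def step_lr(epoch, base_lr=0.1, milestones=(100, 150), factor=0.1):
    """Learning rate under step decay: base_lr multiplied by `factor`
    once for every milestone epoch already passed."""
    passed = sum(epoch >= m for m in milestones)
    return base_lr * (factor ** passed)
```

PyTorch's \texttt{MultiStepLR} scheduler implements the same rule.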
\begin{figure}
\centering
\includegraphics[width =0.45\textwidth]{figs/FlDCAvsFlMdcaCIFAR10Resnet32.pdf}
\caption{Comparison of Reliability diagrams of: (left) \texttt{NLL+DCA} vs \texttt{NLL+MDCA} and (right) \texttt{FL+DCA} vs. \texttt{FL+MDCA}. We use ResNet-32 trained on CIFAR10 dataset for comparison.}
\label{fig:DCAvsMDCA}
\end{figure}
\begin{figure}
\centering
\includegraphics[width =0.45\textwidth]{figs/betaVariation.pdf}
\caption{Effect of $\beta$ on Accuracy and SCE ($10^{-3}$) on \texttt{FLL+MDCA} on the ResNet-20 model trained on SVHN dataset.}
\label{fig:EffectofbetaOnSCE}
\end{figure}
\mypara{Optimizing Hyper-parameter $\beta$ for MDCA}
We vary the hyper-parameter $\beta \in \{0.25, 0.5, 1, 5, 10, \dots, 50\}$ in our experiments. Figure \ref{fig:EffectofbetaOnSCE} shows how calibration error and accuracy are affected when we increase $\beta$ for different model-dataset pairs. We see a general trend that calibration is best when $\beta$ is close to $1$; as we increase it, both calibration and accuracy degrade (accuracy decreases and the SCE score increases).
\mypara{Post-Hoc Calibration Experiments}
For comparison with post-hoc techniques, we set aside $10\%$ of the training data as a validation set (hold-out set) to conduct post-hoc calibration. For Temperature Scaling (\texttt{TS}), we perform a grid search between $0$ and $10$ with a step-size of $0.1$ to find the optimal temperature value that gives the least \texttt{NLL} on the hold-out set. For Dirichlet Calibration (\texttt{DC}), we attach a single-layer neural network at the end of the DNN and use ODIR \cite{Dirichlet} regularization on its weights. We train on the hold-out set keeping the rest of the DNN weights frozen except the newly added layer. We again use grid search to find the optimal hyper-parameters $\lambda$ and $\mu$ that give the least \texttt{NLL} on the hold-out set. We vary $\lambda \in \{ 0, 0.01, 0.1, 1, 10, 0.005, 0.05, 0.5, 5, 0.0025, 0.025, 0.25, 2.5 \}$ and $\mu \in \{0, 0.01, 0.1, 1, 10\}$.
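The temperature-scaling grid search described above can be sketched as follows (illustrative NumPy version with hypothetical helper names; the grid starts at $0.1$ since $T=0$ is degenerate):

```python
import numpy as np

def nll(logits, labels, T):
    """Mean negative log-likelihood of the temperature-scaled softmax."""
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)  # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def fit_temperature(logits, labels):
    """Grid search T over (0, 10] in steps of 0.1, keeping the value
    with the least NLL on the hold-out set."""
    grid = [round(0.1 * i, 1) for i in range(1, 101)]
    return min(grid, key=lambda T: nll(logits, labels, T))
```

Only the scalar $T$ is fitted; the network weights stay frozen, so accuracy is unchanged.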
\input{sections/tab_ablation_cifar_svhn.tex}
\input{sections/tab_pacs_all_comparison.tex}
\input{sections/tab_scores_rot_mnist.tex}
\mypara{Domain Shift Experiments}
For the PACS dataset, we use the official PyTorch ResNet-18 model pre-trained on the ImageNet dataset. We re-initialized its last fully connected layer to accommodate $7$ classes, and finally fine-tuned it on the Photo domain. We use the SGD optimizer with the same momentum and weight decay values as for CIFAR10/100, described earlier. The training batch size was fixed at $256$, and the model was trained for $30$ epochs with an initial learning rate of $0.01$, reduced by a factor of $10$ at epoch $20$. Training parameters were chosen to give the best performing model, i.e., the one with the maximum accuracy on the Photo-domain validation set.
For Rotated MNIST, we used the PyTorch framework to generate rotated images. We train the ResNet-20 model on the standard MNIST train set for $30$ epochs with a learning rate of $0.1$ for the first $20$ epochs and $0.01$ for the last $10$ epochs. The rest of the details, such as batch size and optimizer, remain the same as in the CIFAR10/100 experiments. We did not augment any images, and selected the training parameters such that the model gives the best accuracy on the validation set.
For 20 Newsgroups, we train the Global Pooling Convolutional Network \cite{global-pool-cnn} using the ADAM optimizer, with learning rate $0.001$, and the default values of betas at $0.9$ and $0.999$ respectively. We used \texttt{GloVe} word embeddings \cite{glove-vectors} to train the network. We trained the model for $50$ epochs.
\section{Additional Results}
\label{sec:addnlRes}
We report the following additional results:
\input{sections/tab_rotated_mnist.tex}
\input{sections/tab_pacs_ablation.tex}
\begin{enumerate}[label*=\arabic*.]
\item \textbf{Class-j-ECE score:} In Table 3 of the main manuscript we reported the Class-j-ECE score for SVHN. In Table \ref{tab:classECE_CIFAR10} here, we provide additional results for CIFAR10.
\item \textbf{Comparison with other auxiliary losses}: Table 6 in the main manuscript showed how the proposed MDCA can be used along with \texttt{NLL}, \texttt{LS} \cite{originallabelsmoothing}, and \texttt{FL} \cite{ogfocalloss} to improve the calibration performance without sacrificing accuracy. In Tab. \ref{tab:AblCIFARnSVHN} here, we show a similar comparison for other competitive approaches, namely \texttt{DCA} \cite{dcapaper} and \texttt{MMCE} \cite{kumarpaper}. Using \texttt{MDCA} gives better calibration than other competitive approaches.
\item \textbf{Calibration performance under dataset drift:} A model trained using our proposed loss gives better calibration under dataset drift as well. Table 4 in the main manuscript showed an SCE score comparison on PACS. We give a more detailed comparison here in Tab. \ref{tab:rmnist-pacs-SCE-ece-top1-all_methods}, which also shows top-1 accuracy and ECE. We repeat the SCE numbers from the main manuscript for completeness.
Tab. \ref{tab:SCE-rot-MNIST} shows the corresponding numbers for Rotated MNIST.
Just as we showed that using \texttt{MDCA} in conjunction with \texttt{NLL}, \texttt{LS} \cite{originallabelsmoothing}, and \texttt{FL} \cite{ogfocalloss} gives the best calibration performance, we show that this remains true in the dataset drift case. Tab. \ref{tab:rot-mnist-ece-top1-ablation} and Tab. \ref{tab:ece-top1-rot-pacs-ablation} show the comparison on the Rotated MNIST and PACS datasets respectively.
\item \textbf{Reliability Diagrams:} Fig. 2 in the main manuscript showed reliability and confidence plots for \texttt{MDCA} used with \texttt{NLL} and \texttt{LS} respectively. We show similar plots for \texttt{MDCA+FL} in Fig. \ref{fig:RelDigFLMDCA_vs_FL}.
%
\end{enumerate}
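For reference, the Class-j-ECE metric reported in these tables can be computed along the following lines; this is an illustrative NumPy sketch of binned per-class calibration error (hypothetical function name), not the exact evaluation code:

```python
import numpy as np

def class_j_ece(probs, labels, j, n_bins=15):
    """Binned calibration error for class j: the count-weighted average
    of |empirical frequency - mean confidence| over equal-width bins of
    the predicted probability for class j."""
    conf = probs[:, j]                  # predicted probability of class j
    hits = (labels == j).astype(float)  # 1 where the true class is j
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    err = 0.0
    for k in range(n_bins):
        lo, hi = edges[k], edges[k + 1]
        mask = (conf >= lo) & (conf <= hi) if k == 0 else (conf > lo) & (conf <= hi)
        if mask.any():
            err += mask.mean() * abs(hits[mask].mean() - conf[mask].mean())
    return err
```

Averaging \texttt{class\_j\_ece} over all classes $j$ yields an SCE-style summary score.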
\begin{figure}[t]
\centering
\includegraphics[width =\linewidth]{figs/flRel.pdf}
\caption{Reliability diagrams (a,b) and confidence histograms (c,d) of \texttt{FL} trained model compared against MDCA regularized version (\texttt{FL+MDCA}). We use ResNet-32 trained on CIFAR10 dataset for comparison.}
\label{fig:RelDigFLMDCA_vs_FL}
\end{figure}
\input{sections/tab_classwise_ece_cifar10.tex}
\clearpage
\newpage
{\small
\bibliographystyle{ieee_fullname}
\section{Why learnable calibration is appealing?}
\begin{itemize}
\item No extra hold-out set is required.
\item Post-hoc calibration does not perform feature learning [cite liang et al].
\item A DNN should be able to calibrate itself without any post-processing.
\end{itemize}
\textbf{Overfitting the loss but not the accuracy:} Note that the NLL is minimised when, for each training sample $i$, $\hat{p}_{i,y_i} = 1$, whereas the classification error is minimised when $\hat{p}_{i,y_i} > \hat{p}_{i,y}$ for all $y \neq y_i$. This means that even when the classification error is 0, the NLL can remain positive, and the optimisation algorithm can still try to reduce it to 0 by further increasing $\hat{p}_{i,y_i}$ for each sample $i$. This is the reason for overfitting and over-confidence.
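A tiny numeric sketch of this effect (illustrative only): both predictions below already classify the sample correctly, yet the NLL keeps shrinking as the winning probability is pushed towards 1, so the optimiser keeps sharpening predictions whose classification error is already zero:

```python
import math

def nll(p_true):
    """Negative log-likelihood of the probability assigned to the true class."""
    return -math.log(p_true)

# Both predictions put the highest mass on the correct class,
# so the classification error is 0 in both cases ...
moderate = nll(0.6)    # correct and moderately confident
extreme = nll(0.999)   # correct and extremely confident

# ... yet NLL still rewards pushing the confidence towards 1,
# reaching 0 only at full (over-)confidence.
assert extreme < moderate
assert nll(1.0) == 0.0
```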
Why does miscalibration happen in the first place? Define NLL and its issues; the roles of network depth, width, and weight decay.
Cluster the calibration methods.
\ramya{We consider a set of $N$ training samples, ${\mathcal{D}}_\text{in}^\text{train} = \{(x_n, y_n)\}^N_{n=1}$, drawn independently from a probability distribution ${\mathcal{P}}_{X,Y}$. Here, $X\in{\mathcal{X}}$ is a random variable defined on the image space, and $Y\in{\mathcal{Y}} = \{1,\ldots,K\}$ is a random variable representing its label. A classifier $f_{\theta}:\mathcal{X} \rightarrow \mathcal{Y}$ is trained on the in-distribution samples, whose inputs follow the marginal distribution ${\mathcal{P}}_X$ of $X$ derived from the joint distribution ${\mathcal{P}}_{X,Y}$. Here $\theta$ refers to the model parameters learnt by the neural network.}
Modern Deep Neural Networks (DNNs) are often miscalibrated, i.e., they make overconfident or underconfident predictions. DNN accuracies often do not align with the predicted probabilities, posing reliability concerns for decision making in safety-critical pipelines. To this end, we propose an effective calibration technique for image classifiers, with the aim that DNN predictions model the real world as closely as possible, i.e., the confidence scores reflect the true probability distributions of the outputs and carry an interpretable meaning. Many existing works calibrate only the predicted confidence score of a DNN, without taking into account the underlying class-wise probability distribution in a multi-class classification setting. This ignores calibration for classes other than the predicted class with the highest confidence, which could possibly be incorrect. Such approaches focus on minimising the Expected Calibration Error (ECE), rather than the Static Calibration Error (SCE), which is more suitable in a multi-class setting. To address this, we propose a trainable approach to calibrate DNNs using a novel differentiable auxiliary loss, \textbf{M}ulti-class \textbf{D}ifference in \textbf{C}onfidence and \textbf{A}ccuracy (\textbf{MDCA}), which can be combined with loss terms such as cross entropy, label smoothing, or Dirichlet calibration. We demonstrate that MDCA improves calibration over recent state-of-the-art trainable methods on standard benchmark datasets and performs on par with post-hoc calibration techniques.
\begin{itemize}
\item Gneiting, T., Balabdaoui, F., and Raftery, A. E. Probabilistic forecasts, calibration and sharpness. Journal of the Royal Statistical Society, 2007.
\item ECE: Naeini, M. P., Cooper, G., and Hauskrecht, M. Obtaining well calibrated probabilities using Bayesian binning. AAAI Conference on Artificial Intelligence, 2015.
\item Conformal prediction: Shafer, G. and Vovk, V. A tutorial on conformal prediction. Journal of Machine Learning Research, 2008.
\item DeGroot, M. H. and Fienberg, S. E. The comparison and evaluation of forecasters. Journal of the Royal Statistical Society, 1983.
\end{itemize}
Accuracy under distribution shift: designing models based on in-distribution performance likely also benefits their out-of-distribution performance. A model's predictions can be too certain (overconfident), optimally confident, or too uncertain (underconfident).
\begin{itemize}
\item Dusenberry, M. W., Tran, D., Choi, E., Kemp, J., Nixon, J., Jerfel, G., Heller, K., and Dai, A. M. Analyzing the role of model uncertainty for electronic health records. ACM Conference on Health, Inference, and Learning (ACM CHIL), pp. 204--213, 2020.
\item Lee, K., Lee, H., Lee, K., and Shin, J. Training confidence-calibrated classifiers for detecting out-of-distribution samples. arXiv preprint arXiv:1711.09325, 2017.
\item Kuleshov, V., Fenner, N., and Ermon, S. Accurate uncertainties for deep learning using calibrated regression. arXiv preprint arXiv:1807.00263, 2018.
\end{itemize}
We examine increasing the importance of omitted predictions via label noise. As label noise increases, trained models become less confident in their predictions. Secondary and tertiary predictions are correct with higher frequency, and so for many samples the prediction with the correct label is not included in ECE's measure of calibration error. We find that ECE becomes a worse approximation of the calibration error as relevant predictions not included in the ECE metric become common.
Tsoukalas, A., Albertson, T., and Tagkopoulos, I. From data to optimal decision making: A data-driven, probabilistic machine learning approach to decision support for patients with sepsis. JMIR Medical Informatics, 3(1):e11, 2015.
\begin{itemize}
\item Meta-learning for calibration: a simple post-training rescaling of the logits (temperature scaling) works relatively well; to find the optimal temperature, a conjugate-gradient solver or a naive line search is used on a trained model without affecting model accuracy. Meta-learning the calibration is an efficient strategy, as we do not need to backpropagate through many inner-loop steps or retrain the model from scratch for each update of meta-knowledge.
\item Kumar et al. (2018) propose a kernel-based measure of calibration that they use as regularization during training of neural networks.
\item Mukhoti et al. (2020) show that focal loss, a relatively simple weighted alternative to cross entropy, can be used to train well-calibrated neural networks. They also note that models trained with the negative log-likelihood loss overfit to datasets, rendering their predictions over-confident and less trustworthy.
\item The classic Brier score (Brier, 1950), the squared error between the softmax probability vector and the ground-truth one-hot encoding, has also been shown to work well. Similarly, label smoothing (M\"uller et al., 2019) has been shown to improve model calibration.
\item Recent techniques like temperature scaling (TS) and label smoothing (LS) are effective in obtaining a well-calibrated model by smoothing logits and hard labels with scalar factors, respectively. However, a uniform TS or LS factor may not be optimal for calibrating models trained on a long-tailed dataset, where the model produces overly confident probabilities for high-frequency classes.
\end{itemize}
\begin{table}[t]
\resizebox{0.5 \textwidth}{!}{%
\begin{tabular}{cc|cc|cc|cc}
\toprule
\multirow{2}{*}{\textbf{Method}} &
\multirow{2}{*}{\textbf{Post Hoc}} &
\multicolumn{2}{c}{\textbf{CIFAR10}} &
\multicolumn{2}{c}{\textbf{CIFAR100}} &
\multicolumn{2}{c}{\textbf{SVHN}} \\
&
&
\textbf{RN34} &
\textbf{RN56} &
\textbf{RN34} &
\textbf{RN56} &
\textbf{RN20} &
\textbf{RN56} \\ \midrule
\multirow{3}{*}{Brier} & None & 6.61 & 5.44 & 1.97 & 1.86 & 2.46 & 2.18 \\
& TS & 4.37 & 3.94 & 1.78 & 1.68 & 2.46 & 3.88 \\
& DC & 3.89 & 4.83 & 1.89 & 1.80 & 1.91 & 2.11 \\ \midrule
\multirow{3}{*}{NLL} & None & 8.68 & 7.12 & 2.40 & 2.50 & 3.43 & 3.84 \\
& TS & 3.99 & 3.25 & 1.56 & 1.49 & 4.13 & 4.16 \\
& DC & 3.07 & 4.98 & 2.17 & 1.91 & 2.48 & 2.69 \\ \midrule
\multirow{3}{*}{LS} & None & 14.09 & 12.55 & 1.99 & 1.73 & 18.80 & 21.08 \\
& TS & 6.13 & 4.49 & 1.91 & 1.67 & 2.35 & 3.12 \\
& DC & 5.96 & 5.34 & 2.05 & 1.98 & 2.36 & 2.81 \\ \midrule
\multirow{3}{*}{FL} & None & 4.61 & 4.19 & 1.83 & 1.89 & 3.04 & 7.85 \\
& TS & 3.77 & 4.19 & 1.82 & 1.62 & 3.04 & 2.72 \\
& DC & 4.69 & 5.48 & 2.02 & 2.02 & 2.28 & 3.36 \\ \midrule
\multirow{3}{*}{MMCE} & None & 8.17 & 9.12 & 2.34 & 2.35 & 9.18 & 9.69 \\
& TS & 4.68 & 4.05 & 1.56 & 1.61 & 5.01 & 3.74 \\
& DC & 8.10 & 6.26 & 2.02 & 1.95 & 4.70 & 5.11 \\ \midrule
\multirow{3}{*}{DCA} & None & 8.41 & 7.60 & 2.82 & 2.87 & 4.29 & 2.16 \\
& TS & 3.44 & 3.00 & 1.58 & 1.56 & 2.18 & 4.29 \\
& DC & 4.39 & 4.20 & 2.18 & 2.06 & 2.21 & 2.95 \\ \midrule
\multirow{3}{*}{FLSD} & None & 9.49 & 7.71 & 1.77 & 1.71 & 18.98 & 26.15 \\
& TS & 4.05 & 3.27 & 1.77 & 1.71 & 3.75 & 4.41 \\
& DC & 5.51 & 5.62 & 2.06 & 2.01 & 3.50 & 4.31 \\ \midrule
\rowcolor{LightCyan}
\multirow{3}{*}{FL+MDCA} & None & \textbf{3.22} & \textbf{2.93} & \textbf{1.72} & \textbf{1.60} & \textbf{1.90} & \textbf{1.51} \\
& TS & 3.24 & 2.93 & 1.77 & 1.60 & 1.90 & 5.00 \\
& DC & 5.33 & 3.81 & 2.07 & 1.87 & 2.38 & 2.72 \\ \bottomrule
\end{tabular}%
}
\caption{Ablation of trainable methods with post-hoc calibration. Observe that our proposed method reports the least SCE (\%) in comparison to any trainable/post-hoc method combination. Note TS: Temperature scaling~\cite{platt1999probabilistic}; DC: Dirichlet Calibration~\cite{Dirichlet}.}
\label{tab:TrainVsPH}
\end{table}
\begin{table*}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccccccc|c}
\toprule
\textbf{Dataset} & \textbf{Model} & \textbf{Cross Entropy} & \textbf{Focal Loss \citeyearpar{ogfocalloss}} & \textbf{LS \citeyearpar{labelsmoothinghelp}} & \textbf{Brier Score \citeyearpar{brierloss}} & \textbf{DCA \citeyearpar{dcapaper}} & \textbf{MMCE \citeyearpar{kumarpaper}} & \textbf{FLSD \citeyearpar{focallosspaper}} & \textbf{FL+MDCA (ours)} \\ \midrule
\multirow{2}{*}{CIFAR10} & ResNet34 & 8.68 & 4.60 & 14.08 & 6.60 & 8.41 & 8.17 & 9.48 & \textbf{3.22} \\
& ResNet56 & 7.11 & 4.18 & 12.54 & 5.44 & 7.59 & 9.11 & 7.71 & \textbf{2.93} \\ \midrule
\multirow{2}{*}{CIFAR100} & ResNet34 & 3.03 & 1.83 & 1.99 & 1.97 & 2.82 & 2.79 & 1.77 & \textbf{1.72} \\
& ResNet56 & 2.50 & 1.66 & 1.73 & 1.86 & 2.77 & 2.35 & 1.71 & \textbf{1.60} \\ \midrule
\multirow{2}{*}{SVHN} & ResNet20 & 3.43 & 2.54 & 18.80 & 2.12 & 4.29 & 9.18 & 18.98 & \textbf{1.90} \\
& ResNet56 & 3.84 & 7.85 & 21.08 & 2.18 & 2.16 & 9.69 & 26.15 & \textbf{1.51} \\ \midrule
Mendeley V2 & ResNet50 & 131.2 & 108.3 & 103.8 & 117.6 & 145.1 & 130.4 & 104.3 & \textbf{85.68} \\ \midrule
Kather5000 & ResNet34 & 31.70 & 41.69 & 46.63 & 32.76 & \textbf{27.13} & 47.35 & 57.87 & 41.39 \\ \midrule
Tiny-ImageNet & ResNet34 & 1.91 & 1.19 & 1.38 & 1.53 & 2.11 & 1.62 & 1.18 & \textbf{1.17 } \\ \midrule
20 Newsgroups & Global-Pool CNN & 602.68 & 729.39 & 988.42 & 725.82 & 719.83 & 731.31 & 940.70 & \textbf{487.82 } \\ \bottomrule
\end{tabular}%
}
\caption{SCE (scaled by $10^{-3}$) values (with $M=15$ bins) comparing various competing methods and our proposed method across well-known classification benchmarks and DNN architectures. We outperform all the baselines except on a few occasions where our method yields a score comparable to the best-performing one.}
\label{tab:sce-m15}
\end{table*}
\begin{table*}[h]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{ccccccccc|c}
\toprule
\textbf{Dataset} & \textbf{Model} & \textbf{Cross Entropy} & \textbf{Focal Loss \citeyearpar{ogfocalloss}} & \textbf{LS \citeyearpar{labelsmoothinghelp}} & \textbf{Brier Score \citeyearpar{brierloss}} & \textbf{DCA \citeyearpar{dcapaper}} & \textbf{MMCE \citeyearpar{kumarpaper}} & \textbf{FLSD \citeyearpar{focallosspaper}} & \textbf{FL+MDCA (ours)} \\ \midrule
\multirow{2}{*}{CIFAR10} & ResNet34 & 4.25 & 1.76 & 6.28 & 2.92 & 4.00 & 3.31 & 4.41 & \textbf{0.93} \\
& ResNet56 & 3.27 & 1.11 & 5.38 & 2.17 & 3.38 & 3.71 & 3.49 & \textbf{0.70} \\ \midrule
\multirow{2}{*}{CIFAR100} & ResNet34 & 12.45 & 1.62 & 2.09 & 5.32 & 11.31 & 11.09 & 1.69 & \textbf{1.49} \\
& ResNet56 & 9.32 & 2.29 & 8.94 & 4.69 & 9.29 & 8.61 & 1.90 & \textbf{0.72} \\ \midrule
\multirow{2}{*}{SVHN} & ResNet20 & 1.64 & 0.89 & 8.88 & \textbf{0.45} & 2.02 & 4.34 & 9.37 & 0.47 \\
& ResNet56 & 1.82 & 3.89 & 10.00 & 0.66 & 0.49 & 4.48 & 13.23 & \textbf{0.23} \\ \midrule
Mendeley V2 & ResNet50 & 4.78 & 8.17 & \textbf{2.68} & 3.75 & 8.29 & 3.45 & 9.64 & 4.81 \\ \midrule
Kather5000 & ResNet34 & 7.14 & 13.62 & 15.36 & 8.64 & \textbf{4.49} & 16.63 & 22.03 & 13.45 \\ \midrule
Tiny-ImageNet & ResNet34 & 14.91 & 2.26 & 5.96 & 7.79 & 17.40 & 9.71 & \textbf{1.91} & 1.99 \\ \midrule
20 Newsgroups & Global-Pool CNN & 14.78 & 13.35 & \textbf{3.45} & 13.71 & 15.30 & 12.69 & 4.52 & 16.55 \\ \bottomrule
\end{tabular}%
}
\caption{ ECE (\%) values ($M=15$ bins) comparing competing and our method across various datasets and DNN architectures.}
\label{tab:ece-m15}
\end{table*}
\section{Conclusion \& Future work}
\label{concAndLimitations}
We have presented a train-time technique for calibrating the predicted confidence values of a \textsc{dnn}\xspace based classifier. Our approach combines standard classification loss functions with our novel auxiliary loss, Multi-class Difference of Confidence and Accuracy (\textsc{mdca}\xspace). Our proposed loss function, when combined with focal loss, yields the least calibration error among both trainable and post-hoc calibration methods. We also show promising results on long-tailed datasets, under natural/synthetic dataset drift, on semantic segmentation, and on a natural language classification benchmark. In the future we would like to investigate the role of class hierarchies to develop cost-sensitive calibration techniques.
\section{Dataset and Evaluation}
\label{sec:exp_setup}
\myfirstpara{Datasets}
We validate our technique on well-known benchmark datasets for image classification, semantic segmentation, and natural language processing (NLP). For each of the datasets: CIFAR10/100 \cite{krizhevsky2009learning}, SVHN \cite{netzer2011reading}, Mendeley V2 \cite{kermany2018labeled}, Tiny-ImageNet \cite{imagenetpaper} and 20-Newsgroups \cite{20newsgroup}, we have separate train and test sets. The train set is further split into two mutually exclusive sets: (a) a training set containing $90\%$ of the samples, and (b) a validation set containing the remaining $10\%$. We use the validation set as the hold-out set for post-hoc calibration. This split has been kept consistent throughout our experimentation. See the supplementary material for a detailed description of the datasets, \textsc{dnn}\xspace architectures, and training procedure.
\mypara{Evaluation}
We report the calibration measures \textsc{sce}\xspace, \textsc{ece}\xspace, and class-$j$-\textsc{ece}\xspace, along with the test error, to study calibration performance. We observe that we achieve near-perfect calibration using our technique without any significant drop in accuracy. We also visualize the calibration using reliability diagrams (please see the supplementary material for a detailed description of reliability diagrams).
\mypara{Compared Techniques}
We compare our method against models trained with Cross-Entropy (\textsc{nll}\xspace), Label Smoothing (\textsc{ls}\xspace) \cite{originallabelsmoothing}, \texttt{\textsc{dca}\xspace} \cite{dcapaper}, Focal Loss (\textsc{fl}\xspace) \cite{ogfocalloss}, Brier Score (\textsc{bs}\xspace) \cite{brierloss}, \textsc{flsd}\xspace \cite{focallosspaper} as well as \textsc{mmce}\xspace \cite{kumarpaper}. For details on individual methods and their training specifics, please refer to the supplementary.
\subsection{Discussion}
\label{subsec:disc}
Using a train-time calibration method helps in achieving well-calibrated models, and with a post-hoc calibration method we demonstrate how the calibration can be enhanced further. It is desirable to calibrate a \textsc{dnn}\xspace at train time, so as to reduce the reliance on a hold-out set for fine-tuning the calibration. We have seen through the above results that $\mathcal{L}_{MDCA}$ helps in doing so. In particular, our approach is promising and more practical in the low-data regime, without the luxury of access to a validation set, making it deployment friendly except for the minor cost borne for the retraining procedure. \textcolor{red}{Needs cross-verification: the reason why FL + MDCA works well under class imbalance is that FL pays attention to classes with low label frequency (minority classes), and MDCA also gives more weightage to minority classes.}
Further, in our exhaustive ablation study we pair various off-the-shelf classification losses ($\mathcal{L}_C$) with the competing calibration-inducing regularizers \cite{dcapaper, kumarpaper} and then with MDCA. Table \ref{tab:sce-m15} shows that, regardless of what $\mathcal{L}_C$ is, we consistently get the lowest SCE and ECE values with a minor or no dip in accuracy.
\section{Introduction}
Deep Neural Networks (\textsc{dnn}s\xspace) have shown promising results for various pattern recognition tasks in recent years. In a classification setting, with input $\mathbf{x} \in \mathcal{X}$, and label $y \in {\mathcal{Y}} = \{1, \ldots, K\}$, a \textsc{dnn}\xspace typically outputs a \emph{confidence} score vector $\mathbf{s} \in \mathbb{R}^K$. The vector, $\mathbf{s}$, is also a valid probability vector, and each element of $\mathbf{s}$ is assumed to be the predicted confidence for the corresponding label. It has been shown in recent years that the confidence vector, $\mathbf{s}$, output by a \textsc{dnn}\xspace is often poorly calibrated \cite{guo2017calibration, minderer2021revisiting}. That is:
\begin{equation}
\mathbb{P} \Big( \widehat{y}=y^* ~ \big\vert ~ \mathbf{s}[\widehat{y}] \Big) ~ \neq ~ \mathbf{s}[\widehat{y}],
\end{equation}
where $\widehat{y}$ and $y^*$ are the predicted and true label respectively for a sample. E.g., if a \textsc{dnn}\xspace predicts the class ``truck'' for an image with score 0.7, then the network is calibrated if the probability that the image actually contains a truck is 0.7. If the probability is lower, the network is said to be over-confident, and under-confident if the probability is higher. For a pixel-wise prediction task like semantic segmentation, we would like to calibrate the prediction for each pixel. Similarly, we would like calibration to hold not only for the predicted label, i.e. $\widehat{y} = \argmax{y \in {\mathcal{Y}}} \mathbf{s}[y]$, but for the whole vector $\mathbf{s}$ (all labels), i.e., $\forall y \in {\mathcal{Y}}$.
One of the main reasons for miscalibration is the specific training regimen used. Most modern \textsc{dnn}s\xspace, when trained for classification in a supervised learning setting, are trained using one-hot encoded labels that have all the probability mass centered on one class; the training labels are thus zero-entropy signals that admit no uncertainty about the input \cite{thulasidasan2019mixup}. The \textsc{dnn}\xspace is thus trained to become overconfident. Besides creating a general distrust in the model predictions, miscalibration is especially problematic in safety critical applications, such as self-driving cars \cite{grigorescu2020survey}, legal research \cite{yu2019s} and healthcare \cite{chromosome, Dusenberry_2020}, where giving the correct confidence for a predicted label is as important as the correct label prediction itself.
Researchers have tried to address miscalibration by learning a post-hoc transformation of the output vector so that the confidence of the predicted label matches the likelihood of the label for the sample \cite{hinton2015distilling,DBLP:journals/corr/HendrycksG16c}. Since such techniques focus on the predicted label only, they could end up calibrating only the label which has maximum confidence for each sample. Hence, in a multi-class setting, the labels with non-maximal confidence scores remain uncalibrated. This makes any post-processing for label refinement, such as posterior inference using MRF-MAP \cite{deeplabv1}, ineffective.
In this paper we argue for calibration at train time. Unlike post-hoc calibration techniques that use limited parameters\footnote{For example, Temperature scaling (\textsc{ts}\xspace) uses a single global scalar, $T$; and Dirichlet Calibration (\texttt{DC}) uses $\mathcal{O}(K^2)$ hyper-parameters for $K$ classes to calibrate the model output}, a train-time strategy allows exploiting the millions of learnable parameters of the \textsc{dnn}\xspace itself, thus providing flexible learning better suited to image- and pixel-specific transformations for model calibration. Our experiments under domain shift, and for a dense prediction task (semantic segmentation), show the strength of the approach.
Armed with the above insight, we propose a novel auxiliary loss function: \textbf{M}ulti-class \textbf{D}ifference in \textbf{C}onfidence and \textbf{A}ccuracy (\textsc{mdca}\xspace). The proposed loss function is designed to be used during the training stage in conjunction with other application-specific loss functions, and overcomes the non-differentiability of the loss functions proposed in earlier methods. Though we do not advocate it, the proposed technique is complementary to the post-hoc techniques, which may still be used after training if a separate hold-out dataset is available for exploitation. Since ours is a train-time calibration approach, it implies good regularization for the predictions. We show that models trained using our loss function remain calibrated even under domain shift.
\mypara{Contributions} We make the following key contributions:
\begin{enumerate*}[label=\textbf{(\arabic*)}]
\item A trainable \textsc{dnn}\xspace calibration method with inclusion of a novel auxiliary loss function, termed \textsc{mdca}\xspace, that takes into account the entire confidence vector in a multi-class setting. Our loss function is differentiable and can be used in conjunction with any existing loss term. We show experiments with Cross-Entropy, Label Smoothing \cite{labelsmoothinghelp}, and Focal Loss \cite{ogfocalloss}.
\item Our approach is on par with post-hoc methods \cite{Dirichlet,guo2017calibration} without the need for a hold-out set, making deployment more practical (see \cref{tab:TrainVsPH}).
\item Our loss function is a powerful regularizer, maintaining calibration even under domain/dataset drift and dataset imbalance, which we demonstrate on \textsc{pacs}\xspace \cite{pacspaper}, Rotated MNIST \cite{lecun2010mnist}, and imbalanced \textsc{cifar}\xspace 10 datasets.
\item Although the focus is primarily on image classification, our experiments on multi-class semantic segmentation show that our technique outperforms \textsc{ts}\xspace based calibration and Focal Loss \cite{ogfocalloss}. We also show the effectiveness of our approach on a natural language classification task on the 20 Newsgroups dataset~\cite{Lang95}.
\end{enumerate*}
\section{Proposed Methodology}
\label{sec:Propmethodology}
\mypara{Calibration}
A calibrated classifier outputs confidence scores that match the empirical frequency of correctness. If a calibrated model predicts an event with confidence $0.7$, then $70\%$ of the time the event transpires. If the empirical occurrence of the event is $<70\%$ the model is overconfident, and if the empirical probability is $>70\%$ the model is under-confident. Formally, we define calibration in a classical supervised setting as follows. Let ${\mathcal{D}}=\langle(x_i,y_i^*)\rangle_{i=1}^N$ denote a dataset consisting of $N$ samples from a joint distribution ${\mathcal{D}}({\mathcal{X}},{\mathcal{Y}})$, where for each sample $x_i \in {\mathcal{X}}$ is the input and $y_i^* \in {\mathcal{Y}} = \{1, 2, ..., K\}$ is the ground-truth class label. Let $\mathbf{s}_i = f_\theta(x_i) \in \mathbb{R}^K$ be the confidence vector that a \textsc{dnn}\xspace, $f$, with model parameters $\theta$ predicts on a given input $x_i$, so that $\mathbf{s}_i[y]$ is the confidence predicted for class $y$. The class, $\widehat{y}_i$, predicted by $f$ for a sample $x_i$ is computed as:
\begin{equation}
\widehat{y}_i = \argmax{y \in {\mathcal{Y}}} \mathbf{s}_i[y].
\end{equation}
The confidence for the predicted class is correspondingly computed as $\widehat{s}_i = \max_{y\in {\mathcal{Y}}} \mathbf{s}_i[y]$. A model is said to be \emph{perfectly calibrated} \cite{guo2017calibration} when, for each sample $(x, y^*) \in {\mathcal{D}}$ and each class $y$:
\begin{equation}
\mathbb{P}(y = y^* ~ | ~ \mathbf{s}[y] = s ) = s.
\label{equ:calib2}
\end{equation}
Note that perfect calibration requires every score value (and not only $\widehat{s}$) to be calibrated. On the other hand, most calibration techniques focus only on the predicted class. That is, they only ensure that: $\mathbb{P}(\widehat{y}_i = y_i^* ~ | ~ \widehat{s}_i) = \widehat{s}_i$.
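To make the definition concrete, the following toy simulation (an illustrative sketch, not code from the paper) draws correctness events with probability equal to the reported confidence; for a perfectly calibrated score of $0.7$, the empirical accuracy then lands near $70\%$:

```python
import numpy as np

# Simulate a perfectly calibrated predictor: every sample is reported
# with confidence 0.7 and is actually correct with probability 0.7.
rng = np.random.default_rng(0)
n = 100_000
reported_confidence = 0.7
correct = rng.random(n) < reported_confidence

# Empirical frequency of correctness among samples scored 0.7;
# for a calibrated model this is close to the reported 0.7.
empirical_accuracy = correct.mean()
```

An overconfident model would report 0.7 while its empirical accuracy sits below that value; an under-confident model, above it.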
\mypara{Expected Calibration Error (\textsc{ece}\xspace)}
\textsc{ece}\xspace is calculated by computing a weighted average of the differences in the confidence of the predicted class, and the accuracy of the samples, predicted with a particular confidence score \cite{ecepaper}:
\begin{equation}
\textsc{ece}\xspace = \sum_{i=1}^M \frac{|B_i|}{N} \Big\lvert A_i - C_i \Big\rvert.
\label{equ:ece}
\end{equation}
Here $N$ is the total number of samples, and the weighting is done on the basis of the fraction of samples in a given confidence bin/interval. Since the confidence values lie in a continuous interval, for the computation of \textsc{ece}\xspace we divide the confidence range $[0,1]$ into $M$ equidistant bins, where the $i^\text{th}$ bin covers the interval $(\frac{i-1}{M}, \frac{i}{M}]$, and $B_i$ denotes the set of samples whose predicted confidence falls in the $i^\text{th}$ bin. Further, $A_i =\frac{1}{|B_i|}\sum_{j\in B_i} \mathbb{I} (\hat{y}_j=y_j^*)$ denotes the accuracy of the samples in bin $B_i$, and $C_i=\frac{1}{|B_i|}\sum_{j \in B_i} \widehat{s}_j$ is their average predicted confidence. The evaluation of \textsc{dnn}\xspace calibration via \textsc{ece}\xspace suffers from the following shortcomings: (a) \textsc{ece}\xspace does not measure the calibration of all score values in the confidence vector, and (b) the metric is not differentiable, and hence cannot be incorporated as a loss term during training. Specifically, the non-differentiability arises from binning samples into the bins $B_i$.
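As a concrete sketch, the metric can be computed with a few lines of NumPy (a hypothetical illustration of the definition above, not the evaluation code used for the reported numbers):

```python
import numpy as np

def expected_calibration_error(scores, labels, M=15):
    """ECE over M equal-width confidence bins.

    scores: (N, K) array of predicted probability vectors.
    labels: (N,) array of ground-truth class indices.
    """
    conf = scores.max(axis=1)        # predicted-class confidence
    pred = scores.argmax(axis=1)     # predicted class
    correct = (pred == labels)
    N = len(labels)
    ece = 0.0
    for i in range(1, M + 1):
        # the i-th bin covers the interval ((i - 1)/M, i/M]
        in_bin = (conf > (i - 1) / M) & (conf <= i / M)
        if not in_bin.any():
            continue
        A_i = correct[in_bin].mean()               # bin accuracy
        C_i = conf[in_bin].mean()                  # bin average confidence
        ece += in_bin.sum() / N * abs(A_i - C_i)
    return ece
```

The hard bin-membership test `in_bin` is exactly the step through which no gradient can flow, which is why \textsc{ece}\xspace cannot serve directly as a training loss.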
\mypara{Maximum Calibration Error (\textsc{mce}\xspace)}
\textsc{mce}\xspace is defined as the maximum absolute difference between the average accuracy and average confidence of each bin:
\[
\textsc{mce}\xspace = \max_{i \in \{1,\ldots,M\}} \big\lvert A_i - C_i \big\rvert.
\]
The \textit{max} operator ends up pruning a lot of useful information about calibration, making the metric not-so-popular. However, it does represent a statistical value that can be used to discriminate large differences in calibration.
\mypara{Static Calibration Error (\textsc{sce}\xspace)}
\textsc{sce}\xspace is a recently proposed metric to measure calibration by \cite{nixon2019measuring}:
\begin{equation}
\textsc{sce}\xspace = \frac{1}{K} \sum_{i=1}^M \sum_{j=1}^K \frac{|B_{i,j}|}{N} \big\lvert A_{i,j} - C_{i,j} \big\rvert,
\label{equ:sce}
\end{equation}
where $K$ denotes the number of classes, and $B_{i,j}$ denotes the set of samples of the $j^\text{th}$ class in the $i^\text{th}$ bin. Further, $A_{i,j} = \frac{1}{|B_{i,j}|} \sum_{k \in B_{i,j}} \mathbb{I} (j = y_{k})$ is the accuracy for the samples of the $j^\text{th}$ class in the $i^\text{th}$ bin, and $C_{i,j} = \frac{1}{|B_{i,j}|} \sum_{k \in B_{i,j}} \mathbf{s}_k[j]$ is the average confidence for the $j^\text{th}$ class in the $i^\text{th}$ bin. \textbf{Classwise-\textsc{ece}\xspace} \cite{Dirichlet} is another metric for measuring calibration in a multi-class setting, but it is identical to Static Calibration Error (\textsc{sce}\xspace). It is easy to see that \textsc{sce}\xspace is a simple class-wise extension of \textsc{ece}\xspace. Since \textsc{sce}\xspace takes into account the whole confidence vector, it allows us to measure the calibration of the non-predicted classes as well. Note that, similar to \textsc{ece}\xspace, the metric \textsc{sce}\xspace is also non-differentiable, and cannot be used as a loss term during training.
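The class-wise extension amounts to repeating the \textsc{ece}\xspace-style binning over every column of the score matrix, not just the argmax. A hypothetical NumPy sketch (for illustration only):

```python
import numpy as np

def static_calibration_error(scores, labels, M=15):
    """SCE: class-wise extension of ECE over M equal-width bins.

    scores: (N, K) predicted probability vectors; labels: (N,) ground truth.
    """
    N, K = scores.shape
    sce = 0.0
    for j in range(K):                 # every class, not just the argmax
        s_j = scores[:, j]             # confidence assigned to class j
        hit = (labels == j)            # indicator I(j == y_k)
        for i in range(1, M + 1):
            in_bin = (s_j > (i - 1) / M) & (s_j <= i / M)
            if not in_bin.any():
                continue
            A_ij = hit[in_bin].mean()  # class-j accuracy in bin i
            C_ij = s_j[in_bin].mean()  # class-j average confidence in bin i
            sce += in_bin.sum() / N * abs(A_ij - C_ij)
    return sce / K
```

Because every column of the score matrix is binned, miscalibration of the non-predicted classes also contributes to the error, unlike with \textsc{ece}\xspace.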
\mypara{Class-$j$-\textsc{ece}\xspace}
\cite{Dirichlet} propose evaluating the calibration error of each class independently of the other classes. This allows one to capture the contribution of a single class $j$ to the overall \textsc{sce}\xspace (or classwise-\textsc{ece}\xspace) error. We refer to this metric as class-$j$-\textsc{ece}\xspace in our results/discussion.
\subsection{Proposed Auxiliary loss: MDCA}
We propose a novel multi-class calibration technique using the proposed auxiliary loss function. The loss function is inspired by \textsc{sce}\xspace \cite{nixon2019measuring} but avoids the non-differentiability caused by the binning $B_{i,j}$ in \cref{equ:sce} \cite{dcapaper}. Our calibration technique is \textbf{independent} of the binning scheme/bins. This is important because, as \cite{widmann2019calibration} and \cite{kumar2019verified} have highlighted, the binning scheme leads to underestimated calibration errors. We name our loss function \textit{Multi-class Difference of Confidence and Accuracy (\textsc{mdca}\xspace)}, and apply it to each \textbf{mini-batch} during training. The loss is defined as follows:
\begin{equation}
{\mathcal{L}}_\textsc{mdca}\xspace = \frac{1}{K} \sum_{j=1}^K \Big\lvert \frac{1}{N_b}\sum_{i=1}^{N_b} \mathbf{s}_i[j] - \frac{1}{N_b} \sum_{i=1}^{N_b} q_{i}[j] \Big\rvert,
\label{equ:mdca}
\end{equation}
where $q_{i}[j]=1$ if label $j$ is the ground truth label for sample $i$, i.e. $j=y^*_i$, else $q_{i}[j]=0$.
Note that the second term inside $|\cdot|$ corresponds to the average count of samples in a mini-batch containing $N_b$ training samples. Since the average count is a constant, the learning gradients depend solely on the first term, which represents the confidence assigned by the \textsc{dnn}\xspace. $K$ denotes the number of classes. $\mathcal{L}_\textsc{mdca}\xspace$ is computed on a mini-batch, and the modulus operation ($|\cdot|$) implies that the summations are not interchangeable
\footnote{Note that $\mathcal{L}_{\textsc{mdca}\xspace}$ may appear similar to the $\mathcal{L}_{1}$ loss due to the usage of the modulus in both. However, the two loss functions are very different. Mathematically, $\mathcal{L}_1 = \frac{1}{K\cdot {N_{b}}} \sum_{j=1}^K \sum_{i=1}^{N_{b}} \Big\lvert \textbf{s}_i[j] - q_{i}[j] \Big\rvert$, whereas ${\mathcal{L}}_\textsc{mdca}\xspace$ is as given in \cref{equ:mdca}. The two terms inside the modulus of the ${\mathcal{L}}_\textsc{mdca}\xspace$ loss represent mean statistics for a particular class $j$ (motivated by our objective of class-wise calibration), whereas, in the case of $\mathcal{L}_{1}$, the modulus operates on a single sample.
}.
Further, $\mathbf{s}_i[j]$ represents the confidence score assigned by a \textsc{dnn}\xspace to the $j^\text{th}$ class for the $i^\text{th}$ sample in the mini-batch.
Note that ${\mathcal{L}}_\textsc{mdca}\xspace$ is differentiable, whereas the loss given by \textsc{dca}\xspace \cite{dcapaper} involves the accuracy over the mini-batch and is non-differentiable.
The differentiability of our loss function ensures that it can be easily used in conjunction with other application-specific loss functions as follows:
\begin{equation}
\mathcal{L}_\text{total} = \mathcal{L}_{C} + \beta \cdot \mathcal{L}_\textsc{mdca}\xspace,
\label{equ:totalLoss}
\end{equation}
where $\beta$ is a hyperparameter controlling the relative importance with respect to the application-specific losses, and is typically found using a validation set. $\mathcal{L}_{C}$ is a standard classification loss, such as Cross Entropy, Label Smoothing~\cite{originallabelsmoothing}, or Focal loss~\cite{ogfocalloss}. Our experiments indicate that the proposed \textsc{mdca}\xspace loss in conjunction with focal loss gives the best calibration performance.
Ideally, to achieve \emph{confidence calibration}, we want the average prediction confidence to equal the accuracy of the model. In \emph{multi-class calibration}, however, we want the average prediction confidence of every class $k_i$ to match its average occurrence in the data distribution. $\mathcal{L}_\textsc{mdca}\xspace$ explicitly captures this idea for every mini-batch, i.e., we want $\tilde{s}[k_i] \approx \tilde{q}[k_i]$, where $\tilde{s}[k_i]$ and $\tilde{q}[k_i]$ are the average prediction confidence and the average count of class $k_i$ in a mini-batch, respectively. Any deviation from this is penalized by $\mathcal{L}_\textsc{mdca}\xspace$.
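\cref{equ:mdca} and \cref{equ:totalLoss} translate directly into a few lines of PyTorch. The following is a minimal sketch (the function names are ours for illustration; the official implementation is linked in the supplementary):

```python
import torch
import torch.nn.functional as F

def mdca_loss(logits, targets):
    """L_MDCA (Eq. mdca): per class j, the absolute difference between the
    mini-batch mean of the softmax confidences s_i[j] and the mini-batch
    mean of the one-hot counts q_i[j], averaged over the K classes."""
    probs = F.softmax(logits, dim=1)                       # s_i
    k = logits.shape[1]
    onehot = F.one_hot(targets, num_classes=k).float()     # q_i
    # the count term is a constant w.r.t. the network weights,
    # so gradients flow only through the confidence term
    return torch.abs(probs.mean(dim=0) - onehot.mean(dim=0)).mean()

def total_loss(logits, targets, beta=1.0):
    """L_total (Eq. totalLoss) with cross-entropy as the classification loss."""
    return F.cross_entropy(logits, targets) + beta * mdca_loss(logits, targets)
```

Because every operation is differentiable, the term can be dropped into any existing training loop alongside the classification loss.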
\section{Related Work}
\label{sec:relWorks}
Techniques for calibrating \textsc{dnn}s\xspace can be broadly classified into train-time calibration, post-hoc calibration, and calibration through Out-Of-Distribution ({OOD\xspace}) detection. Train-time calibration integrates model calibration into the training procedure, while a post-hoc calibration method utilizes a hold-out set to tune the calibration measures. On the other hand, learning to reject {OOD\xspace} samples (at train-time or post-hoc) mitigates overconfidence and thus calibrates \textsc{dnn}s\xspace.
\mypara{Train-Time Calibration}
One of the earliest train-time methods proposes the Brier Score for calibrating binary probabilistic forecasts \cite{brierloss}. \cite{guo2017calibration} show that models trained with Negative Log-Likelihood (\textsc{nll}\xspace) tend to be over-confident and empirically demonstrate a disconnect between \textsc{nll}\xspace and accuracy. Specifically, the overconfident scores necessitate re-calibration. A common calibration approach is to use additional loss terms alongside the \textsc{nll}\xspace loss: \cite{pereyra2017regularizing} use entropy as a regularization term, whereas M{\"u}ller {et al.\xspace} \cite{labelsmoothinghelp} propose Label Smoothing (\textsc{ls}\xspace) \cite{originallabelsmoothing} on soft targets, which aids in improving calibration. Recently, \cite{focallosspaper} showed that focal loss \cite{ogfocalloss} can implicitly calibrate \textsc{dnn}s\xspace by reducing the KL-divergence between the predicted and target distributions whilst increasing the entropy of the predicted distribution, thereby preventing the model from becoming overconfident. Liang {et al.\xspace} \cite{dcapaper} have proposed an auxiliary loss term, \textsc{dca}\xspace, which is added to Cross-Entropy to help calibrate the model. The \textsc{dca}\xspace term penalizes the model when the cross-entropy loss is reduced but the accuracy remains the same, i.e., when over-fitting occurs. \cite{kumarpaper} propose \textsc{mmce}\xspace, an auxiliary loss term for calibration computed using a reproducing kernel in a Hilbert space~\cite{rkhskernel}. Maro{\~n}as {et al.\xspace} \cite{maronas2021calibration} analyse MixUp \cite{mixup-augmentation} data augmentation for calibrating \textsc{dnn}s\xspace and conclude that MixUp does not necessarily improve calibration.
\mypara{Post-Hoc Calibration}
Post-hoc calibration techniques calibrate a model using a hold-out training set, which is usually the validation set. Temperature Scaling (\textsc{ts}\xspace) smooths the logits to calibrate a \textsc{dnn}\xspace. Specifically, \textsc{ts}\xspace is a variant of Platt scaling \cite{platt1999probabilistic} that divides the logits by a scalar $T > 0$, learnt on a hold-out training set, prior to taking the softmax. The downside of using \textsc{ts}\xspace is a reduction in the confidence of every prediction, including the correct ones. A more general version of \textsc{ts}\xspace transforms the logits using matrix scaling; the matrix $M$ is learnt on the hold-out set similarly to \textsc{ts}\xspace. Dirichlet Calibration~(\textsc{dc}\xspace) uses Dirichlet distributions to extend the binary Beta-calibration \cite{beta-cal-paper} method to the multi-class setting. \textsc{dc}\xspace is easy to implement as an extra layer in a neural network acting on log-transformed class probabilities, learnt on a hold-out set. Meta-calibration \cite{bohdal2021meta} proposes differentiable \textsc{ece}\xspace-driven calibration to obtain well-calibrated and high-accuracy models. Islam {et al.\xspace} \cite{islamclass} propose class-distribution-aware \textsc{ts}\xspace and \textsc{ls}\xspace for post-hoc calibration; they use a class-distribution-aware vector for \textsc{ts}\xspace/\textsc{ls}\xspace to fix overconfidence. Ding {et al.\xspace} \cite{local-TS} propose a spatially localized calibration approach for semantic segmentation.
\mypara{Calibration Through {OOD\xspace} Detection}
Hein {et al.\xspace} \cite{hein2019relu} show that one of the main reasons behind overconfidence in \textsc{dnn}s\xspace is the use of the ReLU activation, which gives high-confidence predictions when the input sample is far away from the training data. They propose data augmentation using adversarial training, which enforces low-confidence predictions for samples far away from the training data. Guo {et al.\xspace} \cite{guo2017calibration} analyze the effect of the width and depth of a \textsc{dnn}\xspace, batch normalization, and weight decay on calibration. Karimi {et al.\xspace} \cite{ood-spectral} use spectral analysis on the initial layers of a \textsc{cnn}\xspace to detect {OOD\xspace} samples and calibrate the \textsc{dnn}\xspace. We refer the reader to \cite{hendrycks2018deep, devries2018learning, padhy2020revisiting, meronen2020stationary} for other representative works on calibrating a \textsc{dnn}\xspace through {OOD\xspace} detection.
\section{Results}
\label{subsec:results}
\myfirstpara{Experiments with Application Specific Loss Functions}
Our loss is meant to be used in conjunction with another application-specific loss function to help improve the calibration performance of a model. Common application-specific losses include the cross-entropy loss (\textsc{nll}\xspace), which minimizes the negative log-likelihood score of the ground-truth label in the predicted confidence vector. Focal Loss (\textsc{fl}\xspace) \cite{ogfocalloss} has been proposed to improve training in the presence of many easy negatives and fewer hard negatives, whereas Label Smoothing (\textsc{ls}\xspace) \cite{originallabelsmoothing} introduces another term in the \textsc{nll}\xspace to smooth the predictions of a model. We add the proposed \textsc{mdca}\xspace to each of these loss terms and measure the calibration performance of a model (in terms of \textsc{ece}\xspace and \textsc{sce}\xspace scores) before and after adding our loss. \cref{tab:sel-config} shows the result. We refer to configurations using our technique as \textbf{``*+\textsc{mdca}\xspace''}, where * refers to \textsc{nll}\xspace/\textsc{ls}\xspace/\textsc{fl}\xspace. For each combination we use a relative weight of $\beta \in \{1,5,10,15,20,25\}$ for $\mathcal{L}_\textsc{mdca}\xspace$, and report the calibration performance of the most accurate model on the validation set. Our experiments suggest that setting $\beta < 1$ does not have a strong regularizing effect. For $\mathcal{L}_\textsc{ls}\xspace$ we use $\alpha=0.1$, and for $\mathcal{L}_\textsc{fl}\xspace$ we use $\gamma \in \{1,2,3\}$. Please refer to \cite{originallabelsmoothing} and \cite{ogfocalloss} for the interpretation of $\alpha$ and $\gamma$, respectively. \cref{tab:sel-config} shows that the proposed \textsc{mdca}\xspace loss improves the calibration performance of all the above application-specific loss functions across multiple datasets and architectures. We also note that \textsc{fl}+\textsc{mdca}\xspace gives the best calibration performance.
We use this loss configuration in our experiments hereafter.
\input{sections/tab_classwise_ece}
\input{sections/fig_confidence_reliability_plots.tex}
\mypara{Calibration Comparison with \textsc{sota}\xspace}
\cref{tab:sce-all-methods} compares the calibration performance of our method with recent \textsc{sota}\xspace methods. We note that calibration using our method improves both the \textsc{sce}\xspace and \textsc{ece}\xspace scores on all the datasets and across different architectures.
\mypara{Class-Conditioned Calibration Error}
The current state-of-the-art focuses on calibrating the predicted label only, which leaves some of the minority classes uncalibrated. One of the benefits of our approach is better calibration for all classes, not only the predicted one. To demonstrate the effectiveness of our method, we report class-$j$-\textsc{ece}\xspace \% values of all the competing methods against our method, using a ResNet-20 model trained on the \textsc{svhn}\xspace dataset. \cref{tab:classwiseECE} shows the result. Our method gives the best scores on all but 3 out of 10 classes, where it is second-best. Class-wise reliability diagrams (c.f. \cref{fig:teaser}) reinforce a similar conclusion. We show results on the \cifar10 dataset in the supplementary.
\mypara{Test Error}
\cref{tab:sce-all-methods} also shows the Test Error (\textsc{te}\xspace) obtained by a model trained using our method and other \textsc{sota}\xspace approaches. We note that using our proposed loss, a model achieves the best calibration performance without sacrificing prediction accuracy (Test Error).
\myfirstpara{Mitigating Under/Over-Confidence}
\cref{tab:sel-config} and \cref{tab:sce-all-methods} already show that our method improves over \textsc{sota}\xspace in terms of \textsc{sce}\xspace and \textsc{ece}\xspace scores. However, the tables do not indicate whether the methods correct for over-confidence or under-confidence. We show the reliability diagrams (\cref{fig:rel_conf_plots}) for a ResNet-32 model trained on \cifar10. The uncalibrated model is overconfident (\cref{fig:rel_conf_plots_a}), which is rectified after calibrating with our method (\cref{fig:rel_conf_plots_b}). We also show confidence plots in the figure, with colored dashed lines indicating the average confidence of the predicted label and the accuracy. The accuracy is lower than the average confidence in the uncalibrated confidence plot (\cref{fig:rel_conf_plots_c}), indicating an overconfident model. After calibrating with our method, the two dashed lines almost overlap, indicating near-perfect calibration (\cref{fig:rel_conf_plots_d}). Similarly, the second row of \cref{fig:rel_conf_plots} shows that a model trained solely with \textsc{ls}\xspace is under-confident, while a model trained with \textsc{ls}\xspace along with \textsc{mdca}\xspace is confident and calibrated.
\input{sections/fig_misclassification_conf.tex}
\mypara{Confidence Values for Incorrect Predictions}
The focus of the discussion so far has been on the confidence value for a class being consistent with the likelihood of that class for the sample. Here, we analyze the confidence values our method produces when the prediction is incorrect. \cref{fig:missclassified-conf-hist} shows the confidence-value histogram of all the incorrect predictions made by a ResNet-32 model trained on the \cifar10 dataset using \textsc{nll}\xspace vs. \textsc{mdca}\xspace-regularized \textsc{nll}\xspace. It is clear that our calibration reduces the confidence of mis-predictions. The same is also evident from \cref{fig:teaser} shown earlier.
\input{sections/tab_dataset_drift.tex}
\input{sections/tab_dataset_imbalance.tex}
\mypara{Calibration Performance under Dataset Drift}
Tomani {et al.\xspace} \cite{domain-drift-calibration-posthoc} show that \textsc{dnn}s\xspace are over-confident and highly uncalibrated under dataset/domain shift. Our experiments show that a model trained with \textsc{mdca}\xspace fares well in terms of calibration performance even under non-semantic/natural domain shift. We use two datasets: (a) \textsc{pacs}\xspace \cite{pacspaper} and (b) Rotated \textsc{mnist}\xspace, inspired by \cite{can-u-trust}. The datasets are benchmarks for synthetic non-semantic shift and natural rotations, respectively. Dataset specifics and the training procedure are provided in the supplementary. \cref{tab:datasetdrift} shows that our method achieves the best average \textsc{sce}\xspace value across all the domains in \textsc{pacs}\xspace. A similar trend is observed on the Rotated \textsc{mnist}\xspace dataset (see supplementary), where our method achieves the lowest average \textsc{sce}\xspace value across all rotation angles.
\mypara{Calibration Performance on Imbalanced Datasets}
Real-world datasets are often skewed and exhibit long-tail distributions, where a few classes dominate over the rare ones. To study the effect of class imbalance on calibration quality, we conduct the following experiment: we introduce a deliberate imbalance in the \cifar10 dataset to force a long-tail distribution, as detailed in \cite{skew}. \cref{tab:sce-data-imbalance} shows that a model trained with our method has the best calibration performance in terms of \textsc{sce}\xspace score across all imbalance factors. We observe that the \textsc{svhn}\xspace dataset already has an imbalance factor of $2.7$, and hence we create no artificial imbalance in this dataset for the experiment.
The efficacy of our approach on imbalanced data is due to the regularization provided by \textsc{mdca}\xspace, which penalizes the difference between average confidence and average count even for the non-predicted classes, thereby benefiting the minority classes.
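A common way to build such a long-tailed variant of a balanced dataset is to subsample each class along an exponential profile so that the head and tail classes differ by the desired imbalance factor. The sketch below is our illustration of this scheme and is an assumption about the construction (see \cite{skew} for the exact protocol used):

```python
import numpy as np

def long_tail_indices(labels, imbalance_factor, rng=None):
    """Subsample a balanced dataset into a long-tailed one: class c keeps
    n_max * (1/imbalance_factor)**(c/(K-1)) samples, so the most frequent
    class has imbalance_factor times the samples of the rarest one.
    Assumes K >= 2 classes."""
    rng = rng or np.random.default_rng(0)
    labels = np.asarray(labels)
    classes = np.unique(labels)
    k = len(classes)
    n_max = np.bincount(labels).max()
    keep = []
    for c_idx, c in enumerate(classes):
        n_c = int(n_max * (1.0 / imbalance_factor) ** (c_idx / (k - 1)))
        idx = np.flatnonzero(labels == c)
        keep.append(rng.choice(idx, size=min(n_c, len(idx)), replace=False))
    return np.concatenate(keep)
```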
\input{sections/tab_post_hoc.tex}
\mypara{Our Approach + Post-hoc Calibration}
We study the combined effect of post-hoc calibration methods, namely Temperature Scaling (\textsc{ts}\xspace) \cite{platt1999probabilistic} and Dirichlet Calibration (\textsc{dc}\xspace) \cite{Dirichlet}, applied over various train-time calibration methods including ours (\textsc{fl}+\textsc{mdca}\xspace). \cref{tab:TrainVsPH} shows the results. We observe that while \textsc{ts}\xspace and \textsc{dc}\xspace improve the performance of other competitive methods, our method outperforms them even without using any post-hoc method. On the other hand, the performance of our method either remains the same or slightly decreases after the application of post-hoc methods. We speculate that this is because our method already calibrates the model to near perfection: on performing \textsc{ts}\xspace, we observe optimal temperature values of $T \approx 1$, implying that there is little scope for \textsc{ts}\xspace to improve on top of it. Thus, any further attempt to spread the confidence predictions using \textsc{ts}\xspace or \textsc{dc}\xspace negatively affects the confidence quality.
\input{sections/tab_segm_results.tex}
\myfirstpara{Calibration Results for Semantic Segmentation}
One of the major advantages of our technique is that it allows the billions of weights of a \textsc{dnn}\xspace model to be used for calibration. This is in contrast to other calibration approaches, which are severely constrained in terms of the parameters available for tuning. For example, in \textsc{ts}\xspace one has a single temperature parameter to tune, which makes it hard for \textsc{ts}\xspace to provide image- and pixel-specific confidence transformations for calibration. To highlight the pixel-specific calibration aspect of our technique, we experiment on the semantic segmentation task, which can be seen as pixel-level classification. For the experiment, we train a DeepLabV3+ \cite{deeplabv3Plus} model with a pre-trained Xception65 \cite{xception} backbone on the \textsc{pascal-voc}\xspace 2012 \cite{pascal-voc-2012} dataset. We compare the performance of our method against \textsc{nll}\xspace, \textsc{fl}\xspace, and \textsc{ts}\xspace (post-hoc calibration). Please refer to the supplementary for more details on the training. \cref{tab:segmentation-table} shows the results. We see a significant drop in both \textsc{sce}\xspace and \textsc{ece}\xspace with our method (\textsc{fl}+\textsc{mdca}\xspace) compared to \textsc{fl}\xspace (a $2\times$ drop in \textsc{sce}\xspace and a 40\% decrease in \textsc{ece}\xspace). Our method also outperforms \textsc{ts}\xspace (after training with \textsc{nll}\xspace) by $23.6\%$.
\section{Ablation Study}
\begin{figure}[t]
\begin{center}
\includegraphics[width =\linewidth]{figs/batchUpdated.pdf}
\caption{Effect of different batch sizes on calibration performance metrics (\textsc{ece}\xspace/\textsc{sce}\xspace/Accuracy) while training a ResNet-32 model with \textsc{mdca}\xspace on the \cifar10 dataset. The calibration performance drops with larger batch sizes because SGD optimization is more effective in a small-batch regime \cite{Keskar2016}: a larger batch degrades the quality of the model, as measured by its ability to generalize. The performance degradation is also consistent with a model trained solely with \textsc{fl}\xspace on a large batch size.}
\label{fig:abl_batchsize}
\end{center}%
\end{figure}
\mypara{Effect of Batch Size}
\cref{fig:abl_batchsize} shows the effect of different batch sizes on the calibration performance. We vary the batch size exponentially and observe that a model trained with \textsc{mdca}\xspace achieves the best calibration performance around a batch size of 64 or 128. As we decrease (or increase) the batch size, we see a degradation in calibration, though the drop is not significant.
\begin{figure}[t]
\begin{center}
\includegraphics[width =\linewidth ]{figs/convergenceUpdated.pdf}
\caption{Comparison of \textsc{ece}\xspace/\textsc{sce}\xspace at various epochs for \textsc{mdca}\xspace, \textsc{mmce}\xspace, and \textsc{dca}\xspace. Though \textsc{mmce}\xspace and \textsc{dca}\xspace directly optimize for \textsc{ece}\xspace, their loss functions are not differentiable, and hence these techniques are not able to reduce \textsc{ece}\xspace as much as \textsc{mdca}\xspace. The differentiability of its loss function allows \textsc{mdca}\xspace to reduce \textsc{ece}\xspace better even though it does not directly optimize it. We use a learning-rate decay of $1/10$ at epochs $50$ and $70$. Please refer to the supplementary for the details of the experiment.}
\label{fig:abl_ece_convergence}
\end{center}%
\end{figure}
\mypara{Comparison of \textsc{ece}\xspace/\textsc{sce}\xspace Convergence with \textsc{sota}\xspace}
In previous sections, we compared the \textsc{ece}\xspace scores of \textsc{mdca}\xspace with other contemporary trainable calibration methods such as \textsc{mmce}\xspace \cite{kumarpaper} and \textsc{dca}\xspace \cite{dcapaper}. Many of these methods explicitly aim to reduce \textsc{ece}\xspace scores. While \textsc{mdca}\xspace does not directly optimize \textsc{ece}\xspace, we see in our experiments that it achieves better \textsc{ece}\xspace scores at convergence. We speculate that this is due to the differentiability of the \textsc{mdca}\xspace loss, which helps optimize it better using backpropagation. To verify this hypothesis, we plot the \textsc{ece}\xspace convergence of the various methods in \cref{fig:abl_ece_convergence}.
\section*{Table of Contents}
\vspace{2em}
\noindent Due to restrictions on the length, we could not include descriptions of the state-of-the-art techniques, datasets, hyper-parameters, and additional results in the main manuscript. However, to keep the overall manuscript self-contained, we include the following in the supplementary material:
\begin{itemize}
\item Section 1: Description of competing methods.
\item Section 2: Detailed description of benchmark datasets used in our experiments.
\item Section 3: Training procedure and implementation of proposed, and competing methods.
\item Section 4: Additional results.
\item Source code link: \href{https://github.com/mdca-aux-loss/MDCA-Calibration}{MDCA Official PyTorch Github}
\end{itemize}
\section{Competing Methods and hyperparameters used}
\label{sec:compMethods}
In this section, we provide a brief description of each compared method along with the hyper-parameter settings used in training.
\begin{itemize}
%
\item For the \texttt{Brier Score} \cite{brierloss}, we train on the loss defined as the squared error between the predicted probability vector and the one-hot target vector.
%
\item \texttt{Label Smoothing} \cite{labelsmoothinghelp} takes the form $\texttt{LS} = - \sum_{i=1}^N \sum_{j=1}^K q_{i,j} \log (\hat{p}_{i,j})$, where $\hat{p}_{i,j}$ is the predicted confidence score of sample $i$ for class $j$. We define the soft target vector $q_{i}$ for each sample $i$ such that $q_{i,j}=\alpha/(K-1)$ if $j \neq y_i$, and $q_{i,j} = (1-\alpha)$ otherwise. Here $\alpha$ is a hyper-parameter. We trained using $\alpha=0.1$, and refer to label smoothing as \texttt{LS} in the results.
%
\item \texttt{Focal loss} \cite{ogfocalloss} is defined as $\texttt{FL} = - \sum_{i=1}^N (1-\hat{p}_{i,y_i})^{\gamma} \log (\hat{p}_{i,y_i})$, where $\gamma$ is a hyper-parameter. We trained using $\gamma \in \{1,2,3\}$, and report it as \texttt{FL} in the results.
%
\item For DCA \cite{dcapaper}, we train on the following loss: $\texttt{NLL} + \beta \cdot \texttt{DCA}$, where \texttt{DCA} is as defined in \cite{dcapaper}, and $\beta$ is a hyper-parameter. We train varying $\beta \in \{1,5,10,15,20,25\}$ as performed in \cite{dcapaper}. DCA results are reported under the name \texttt{DCA}.
%
\item We use \texttt{MMCE} \cite{kumarpaper}, as a regularizer along with \texttt{NLL}. We use the weighted \texttt{MMCE} loss in our experiments with $\lambda \in \{2,4\}$.
%
\item For \texttt{FLSD} \cite{focallosspaper}, we train with $\gamma=3$.
\end{itemize}
For each of the above methods, we report the result of the best performing trained model according to the accuracy obtained on the validation set.
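The \texttt{LS} and \texttt{FL} formulas above can be written down directly. The following is a minimal numpy sketch of both losses as defined in the bullets (the function names are ours for illustration; `p` holds the predicted probability vectors $\hat{p}_{i,j}$):

```python
import numpy as np

def focal_loss(p, y, gamma=3.0):
    """FL = -sum_i (1 - p_{i,y_i})^gamma * log(p_{i,y_i});
    gamma=0 recovers the plain negative log-likelihood."""
    p_y = p[np.arange(len(y)), y]
    return -np.sum((1.0 - p_y) ** gamma * np.log(p_y))

def label_smoothing_loss(p, y, alpha=0.1):
    """LS: cross-entropy against soft targets q with q_{i,y_i} = 1 - alpha
    and alpha/(K-1) on the remaining classes."""
    n, k = p.shape
    q = np.full((n, k), alpha / (k - 1))
    q[np.arange(n), y] = 1.0 - alpha
    return -np.sum(q * np.log(p))
```

Note how the $(1-p_y)^\gamma$ factor in \texttt{FL} down-weights easy (high-confidence) samples, which is the mechanism behind its implicit calibration effect discussed in the main paper.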
\section{Dataset description}
\label{sec:dataset-desc}
We have used the following datasets in our experiments:
\begin{enumerate}
\item \textbf{CIFAR10} \cite{krizhevsky2009learning}: This dataset has $60,000$ color images of size $32 \times 32$, equally divided into $10$ classes. The pre-divided train set comes with $50,000$ images and the test set has $10,000$ images. Using the policy defined above, we have a train/val/test split of $45,000/5,000/10,000$ images, respectively.
%
\item \textbf{CIFAR100} \cite{krizhevsky2009learning}: This dataset comprises $60,000$ colour images of size $32 \times 32$, this time equally divided into $100$ classes. The pre-divided train set again comes with $50,000$ images and the test set has $10,000$ images. We have a train/val/test split of $45,000/5,000/10,000$ images, respectively.
%
\item \textbf{SVHN} \cite{netzer2011reading}: The Street View House Numbers (SVHN) dataset is a digit-classification benchmark that contains $600,000$ $32\times 32$ RGB images of printed digits (from $0$ to $9$) cropped from pictures of house number plates. The cropped images are centered on the digit of interest, but nearby digits and other distractors are kept in the image. SVHN comes with a training set ($73,257$ images) and a testing set ($26,032$ images). We randomly sample $10\%$ of the training set to use as a validation set.
%
\item \textbf{Tiny-ImageNet} \cite{imagenetpaper}: It is a subset of the ImageNet dataset containing $64 \times 64$ RGB images. It has $200$ classes, each with $500$ images. The validation set contains $50$ images per class. We use the provided validation set as the test set in our experiments.
%
\item \textbf{20 Newsgroups} \cite{20newsgroup}: It is a popular text classification dataset containing $20,000$ news articles, categorised evenly into $20$ different newsgroups based on their content.
%
\item \textbf{Mendeley V2}~\cite{kermany2018labeled}: Inspired by \cite{dcapaper}, we use this medical dataset. The dataset contains OCT (optical coherence tomography) images of the retina and pediatric chest X-ray images; we only use the chest X-ray images in our experiments. The chest X-ray images come with a pre-defined train/test split containing $4,273$ pneumonia images and $1,583$ normal chest images.
%
\item \textbf{PACS dataset} \cite{pacspaper}: We use this dataset to study calibration under domain shift. The dataset comprises a total of $9,991$ images spread across $4$ different domains with $7$ classes each. The domains are Photo, Art, Sketch, and Cartoon. We fine-tune a ResNet-18 model, pre-trained on the ImageNet dataset, on the Photo domain using the various competing techniques, and test on the other three domains to measure how well calibration holds under domain shift. Following \cite{pacspaper}, we also divide the training set of the Photo domain into a $9:1$ train/val split.
%
\item \textbf{Rotated MNIST Dataset}: This dataset is also used for the domain-shift experiments. Inspired by \cite{domain-drift-calibration-posthoc}, we create 5 different test sets, namely $\{M_{15}, M_{30}, M_{45}, M_{60}, M_{75}\}$. Domain drift is introduced in each $M_{x}$ by rotating the images in the MNIST test set by $x$ degrees counter-clockwise.
\item \textbf{Segmentation Datasets - PASCAL VOC 2012 \cite{pascal-voc-2012}}: This is a segmentation dataset consisting of images with pixel-level annotations. There are 21 classes overall (one background class and 20 foreground object classes). The dataset is divided into \textit{Train} ($1,464$ images), \textit{Val} ($1,449$ images), and \textit{Test} ($1,456$ images) sets. We, however, only make use of the \textit{Train} and \textit{Val} sets: models are trained on the \textit{Train} set and the evaluation is reported on the \textit{Val} set.
\end{enumerate}
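The Rotated MNIST test sets $M_x$ described above can be generated with an image-rotation routine. Below is a minimal sketch using \texttt{scipy.ndimage.rotate} for illustration (our experiments use the PyTorch framework for this; the helper name is ours):

```python
import numpy as np
from scipy.ndimage import rotate

def make_rotated_test_sets(images, angles=(15, 30, 45, 60, 75)):
    """Build the drifted test sets M_x by rotating every image by x degrees;
    reshape=False keeps the original 28x28 canvas, order=1 uses bilinear
    interpolation."""
    return {
        a: np.stack([rotate(img, angle=a, reshape=False, order=1)
                     for img in images])
        for a in angles
    }
```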
\section{Training Procedure and Implementation}
\label{sec:trainProc}
\myfirstpara{Backbone Architecture}
We use a ResNet \cite{resnetpaper} backbone for most of our experiments. For training on the CIFAR10 and CIFAR100 datasets we use ResNet-32 and ResNet-56. For SVHN we use ResNet-20 and ResNet-56. Following \cite{dcapaper}, for the Mendeley V2 dataset we use a ResNet-50 architecture pre-trained on the ImageNet dataset \cite{imagenetpaper}. We use the backbone as a fixed feature extractor and add a $1 \times 1$ convolutional layer and two fully connected layers on top of the feature extractor. Segmentation experiments make use of DeepLabV3+ \cite{deeplabv3Plus} based on the Xception65 \cite{xception} backbone.
\mypara{Train Parameters}
For all our experiments we use a single Nvidia 1080 Ti GPU. We train on CIFAR10 for a total of $160$ epochs with a learning rate of $0.1$, reduced by a factor of $10$ at the $80^{th}$ and $160^{th}$ epochs. The training batch size is kept at $128$, and the DNN is optimized using Stochastic Gradient Descent (SGD) with momentum $0.9$ and weight decay $0.0005$. Furthermore, we augment the images using random center crops and horizontal flips. We use the same parameters for CIFAR100, except that we train for $200$ epochs, with the learning rate again reduced by a factor of $10$, this time at epochs 100 and 150. For Tiny-ImageNet, we follow the same training procedure as \cite{focallosspaper}. For SVHN, we keep the same training procedure as above except for the number of epochs: we train for $100$ epochs, with the learning rate reduced by a factor of $10$ at epochs $50$ and $70$. We do not augment the images when training on SVHN. We use the PyTorch framework for all our implementations. Our repository is inspired by \url{https://github.com/bearpaw/pytorch-classification}. We also use the official implementation of \cite{focallosspaper} to implement some of the baseline methods. For the segmentation experiments we make use of the following repository: \url{https://github.com/LikeLy-Journey/SegmenTron}. We train the segmentation models for 120 epochs using the SGD optimizer with a warm-up LR scheduler. To train with focal loss, we use $\gamma = 3$. The rest of the parameters for the optimizer and the scheduler are kept the same as provided by the SegmenTron repository.
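The CIFAR10 optimizer and schedule described above map to a few lines of PyTorch. This is a sketch of the stated configuration (the helper name is ours; milestones and decay factor follow the text):

```python
import torch

def make_optimizer_and_scheduler(model):
    """CIFAR10 setup from the text: SGD with lr 0.1, momentum 0.9,
    weight decay 5e-4; lr decayed by 10x at epochs 80 and 160."""
    opt = torch.optim.SGD(model.parameters(), lr=0.1,
                          momentum=0.9, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(
        opt, milestones=[80, 160], gamma=0.1)
    return opt, sched
```

In the training loop, `sched.step()` is called once per epoch after the optimizer steps.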
\begin{figure}
\centering
\includegraphics[width =0.45\textwidth]{figs/FlDCAvsFlMdcaCIFAR10Resnet32.pdf}
\caption{Comparison of Reliability diagrams of: (left) \texttt{NLL+DCA} vs \texttt{NLL+MDCA} and (right) \texttt{FL+DCA} vs. \texttt{FL+MDCA}. We use ResNet-32 trained on CIFAR10 dataset for comparison.}
\label{fig:DCAvsMDCA}
\end{figure}
\begin{figure}
\centering
\includegraphics[width =0.45\textwidth]{figs/betaVariation.pdf}
\caption{Effect of $\beta$ on Accuracy and SCE ($10^{-3}$) for \texttt{FL+MDCA} with the ResNet-20 model trained on the SVHN dataset.}
\label{fig:EffectofbetaOnSCE}
\end{figure}
\mypara{Optimizing Hyper-parameter $\beta$ for MDCA}
We vary the hyper-parameter $\beta \in \{0.25, 0.5, 1, 5, 10, \dots, 50\}$ in our experiments. Figure \ref{fig:EffectofbetaOnSCE} shows how the calibration error and accuracy are affected as we increase $\beta$ for different model-dataset pairs. We see a general trend that calibration is best when $\beta$ is close to $1$; as we increase it, both calibration and accuracy start to degrade (the accuracy decreases and the SCE score increases).
\mypara{Post-Hoc Calibration Experiments}
For comparison with post-hoc techniques, we set aside $10\%$ of the training data as a validation (hold-out) set to conduct post-hoc calibration. For Temperature Scaling (\texttt{TS}), we perform a grid search over the range $0$ to $10$ with a step size of $0.1$ to find the optimal temperature value that gives the least \texttt{NLL} on the hold-out set. For Dirichlet Calibration (\texttt{DC}), we attach a single-layer neural network at the end of the DNN and use ODIR \cite{Dirichlet} regularization on its weights. We train on the hold-out set, keeping all the DNN weights frozen except those of the newly added layer. We again use grid search to find the optimal hyper-parameters $\lambda$ and $\mu$ that give the least \texttt{NLL} on the hold-out set. We vary $\lambda \in \{ 0, 0.01, 0.1, 1, 10, 0.005, 0.05, 0.5, 5, 0.0025, 0.025, 0.25, 2.5 \}$ and $\mu \in \{0, 0.01, 0.1, 1, 10\}$.
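The \texttt{TS} grid search above can be sketched as follows. This is our illustrative implementation of the stated procedure (the grid starts at $0.1$ rather than $0$ since $T=0$ is undefined; the function name is ours):

```python
import numpy as np

def fit_temperature(logits, labels, t_grid=np.arange(0.1, 10.01, 0.1)):
    """Grid-search T in (0, 10] with step 0.1, minimizing the NLL of the
    temperature-scaled softmax on the hold-out set."""
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels)
    best_t, best_nll = 1.0, np.inf
    for t in t_grid:
        z = logits / t
        # log-softmax for numerical stability of the NLL
        log_probs = z - np.log(np.sum(np.exp(z), axis=1, keepdims=True))
        nll = -np.mean(log_probs[np.arange(len(labels)), labels])
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

When the hold-out predictions are already well calibrated, the search returns $T \approx 1$, matching the observation in the main paper that \texttt{TS} has little room to improve on \textsc{mdca}\xspace-trained models.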
\input{sections/tab_ablation_cifar_svhn.tex}
\input{sections/tab_pacs_all_comparison.tex}
\input{sections/tab_scores_rot_mnist.tex}
\mypara{Domain Shift Experiments}
For the PACS dataset, we use the official PyTorch ResNet-18 model pre-trained on the ImageNet dataset. We re-initialize its last fully connected layer to accommodate 7 classes, and fine-tune the model on the Photo domain. We use the SGD optimizer with the same momentum and weight-decay values as for CIFAR10/100, described earlier. The training batch size is fixed at $256$, and the model is trained for 30 epochs with an initial learning rate of 0.01, reduced by a factor of 10 at epoch 20. Training parameters are chosen to give the best-performing model, i.e., the one with maximum accuracy on the Photo-domain validation set.
For Rotated MNIST, we use the PyTorch framework to generate rotated images. We train the ResNet-20 model on the standard MNIST train set for 30 epochs, with a learning rate of 0.1 for the first 20 epochs and 0.01 for the last 10 epochs. The remaining details, such as batch size and optimizer, are the same as in the CIFAR10/100 experiments. We do not augment any images, and select the training parameters such that the model gives the best accuracy on the validation set.
For 20 Newsgroups, we train the Global Pooling Convolutional Network \cite{global-pool-cnn} for $50$ epochs using the ADAM optimizer, with a learning rate of $0.001$ and the default beta values of $0.9$ and $0.999$. We use \texttt{GloVe} word embeddings \cite{glove-vectors} to train the network.
\section{Additional Results}
\label{sec:addnlRes}
We report the following additional results:
\input{sections/tab_rotated_mnist.tex}
\input{sections/tab_pacs_ablation.tex}
\begin{enumerate}[label*=\arabic*.]
\item \textbf{Class-j-ECE score:} In Table 3 of the main manuscript we reported the Class-j-ECE score for SVHN. In Table \ref{tab:classECE_CIFAR10} here, we provide additional results for CIFAR10.
\item \textbf{Comparison with other auxiliary losses}: Table 6 in the main manuscript showed how the proposed MDCA can be used along with \texttt{NLL}, \texttt{LS} \cite{originallabelsmoothing}, and \texttt{FL} \cite{ogfocalloss} to improve the calibration performance without sacrificing accuracy. In Tab. \ref{tab:AblCIFARnSVHN} here, we show a similar comparison for other competitive approaches, namely \texttt{DCA} \cite{dcapaper} and \texttt{MMCE} \cite{kumarpaper}. Using \texttt{MDCA} gives better calibration than these competing approaches.
\item \textbf{Calibration performance under dataset drift:} A model trained using our proposed loss also gives better calibration under dataset drift. Table 4 in the main manuscript showed the SCE score comparison on PACS. We give a more detailed comparison here in Tab. \ref{tab:rmnist-pacs-SCE-ece-top1-all_methods}, which additionally reports top-1 accuracy and ECE. We repeat the SCE numbers from the main manuscript for completeness.
Tab. \ref{tab:SCE-rot-MNIST} shows the corresponding numbers for Rotated MNIST.
Just as we showed that using \texttt{MDCA} in conjunction with \texttt{NLL}, \texttt{LS} \cite{originallabelsmoothing}, and \texttt{FL} \cite{ogfocalloss} gives the best calibration performance, we show that this remains true under dataset drift. Tab. \ref{tab:rot-mnist-ece-top1-ablation} and Tab. \ref{tab:ece-top1-rot-pacs-ablation} show the comparison on the Rotated MNIST and PACS datasets respectively.
\item \textbf{Reliability Diagrams:} Fig. 2 in the main manuscript showed reliability and confidence plots for \texttt{MDCA} used with \texttt{NLL} and \texttt{LS} respectively. We show similar plots for \texttt{MDCA+FL} in Fig. \ref{fig:RelDigFLMDCA_vs_FL}.
%
\end{enumerate}
\begin{figure}[t]
\centering
\includegraphics[width =\linewidth]{figs/flRel.pdf}
\caption{Reliability diagrams (a,b) and confidence histograms (c,d) of the \texttt{FL}-trained model compared against its MDCA-regularized version (\texttt{FL+MDCA}). We use ResNet-32 trained on the CIFAR10 dataset for comparison.}
\label{fig:RelDigFLMDCA_vs_FL}
\end{figure}
\input{sections/tab_classwise_ece_cifar10.tex}
\clearpage
\newpage
{\small
\bibliographystyle{ieee_fullname}
Eyvind Gunnar Olrik (born 2 October 1866 in Frederiksberg, died 21 December 1934 in the same place) was a Danish jurist, son of Henrik Olrik and brother of Benedicte, Dagmar, Hans, Axel and Jørgen Olrik.
Career
He was the son of the painter Henrik Olrik and Hermina Valentiner. Olrik graduated from the Metropolitanskolen in 1883 and took his law degree at the University of Copenhagen in 1888. The same year he became recording clerk at the Copenhagen Criminal and Police Court; he was a lawyer's clerk in Copenhagen 1889-95, assessor in the Audit Office of the Conscription Service 1891-92 and in the Ministry of Justice in 1893. In 1905 he became head clerk and, later the same year, assessor at the Copenhagen Criminal and Police Court, and in the same year assessor at the Landsover- samt Hof- og Stadsret; in 1917 he became assessor at the Supreme Court (Højesteret) and, after the Administration of Justice Act came into force in 1919, received the title of judge at that court. In 1931 Olrik retired from the state service.
Olrik was extensively and valuably active in the field of criminal law, discussing among other things the questions of criminal responsibility (1897), habitual criminals (1893) and physicians' duty of confidentiality (1905), and he published annotated editions of the Penal Code (1902 and 1912). From its foundation in 1913 he was co-editor of Nordisk Tidsskrift for Strafferet, and together with C.D. Rump he published Systematisk Oversigt over Domme i kriminelle Sager for 1895-1914 (1916) and for 1915-24 (1925). He was also a member of the Penal Code Commission of 1917 and of the Military Penal Code Commission of 1918.
Other offices
From 1908 to 1932 Olrik was a member of the Overværgeråd (the national child-welfare board), and 1908-16 chairman of its department for Jutland cases. In 1915 he became deputy examiner at the law examinations of the University of Copenhagen and, the same year, permanent examiner; in 1923 he became chairman of the examiners, but was relieved of the office in 1930. From 1923 to 1928 he was a member of the Overfredningsnævn (the national conservation board).
Honours
Olrik was made a Knight of the Order of the Dannebrog on 13 February 1913, Dannebrogsmand on 28 June 1920, Commander of the 2nd Class on 17 March 1923 and Commander of the 1st Class on 26 September 1929.
Marriages
On 16 December 1891, in the Church of Our Lady (Vor Frue Kirke), he married Margrethe Christiane Sophie Laura Olsen (born 17 April 1862 in Holstebro, died 1 February 1949 in Copenhagen), daughter of the jurist Alfred Peter Olsen. The marriage was dissolved in 1923.
On 30 June 1923, in Nødebo Church, he married Ida Kali (born 3 September 1884 in Frederikshavn), head of office at Samfundet og Hjemmet for Vanføre, daughter of the merchant and consul Peter Julius Frederik Kali and Amalia Frederikke Winding.
Together with his second wife he founded the grant Højesteretsdommer Eyvind Olrik og hustru Ida Olriks Legat, which supports able and needy law students at the University of Copenhagen. The grant is now part of Københavns Universitets Fælleslegater (the university's joint endowments).
Olrik is buried in Frederiksberg.
Depictions
Drawings by Henrik Olrik 1874, 1876 and 1877; pastels by the same, including 1888 and 1889
Drawing by Hans Bendix
Photographs by, among others, Valdemar Poulsen, and a group photograph of Axel, Eyvind and Hans Olrik by Ludvig Grundtvig (The Royal Danish Library)
References
Sources
A. Falk-Jensen and H. Hjorth-Nielsen: Candidati og Examinati juris 1736-1936, Candidati politices 1852-1936, Candidati actuarii 1922-1936, vol. III, Copenhagen: G.E.C. Gad 1954-1959, pp. 288-289.
Olrik, 6. Eyvind Gunnar in Nordisk familjebok (second edition, supplement, 1925)
External links
People in Dansk Biografisk Leksikon
People in Kraks Blå Bog (deceased)
Supreme Court judges from Denmark
Non-fiction writers from Denmark
Danish-language writers from Denmark
People from Frederiksberg
Danes of the 19th century
Danes of the 20th century
Eyvind
Commanders of the 1st Class of the Order of the Dannebrog
Grant founders from Denmark
Philanthropists from Denmark
High Court judges from Denmark
Judges of the Criminal and Police Court
Students of the Metropolitanskolen
Q: How to display a List<DropdownMenuItem<String>> when using a List.generate method? [flutter] I have a DropdownButton widget -
List<DropdownMenuItem<String>> dayValue;
String monthDay;
Widget buildDropdownButtonDay() {
return new DropdownButton<String>(
hint: Text('Choose'),
onChanged: (String changedValue) {
monthDay=changedValue;
setState(() {
monthDay = changedValue;
// print(newValue);
});
},
value: monthDay,
items: dayValue
);
}
I am using a List.generate method in initState to generate some values to the list dayValue like this -
void initState() {
// numberontroller.text = 1.toString();
dayValue = List<DropdownMenuItem<String>>.generate(31, (int index) => DropdownMenuItem(child: Text("Day ${index + 1}"),));
super.initState();
}
When I click on the DropdownButton, the values are listed successfully, but when I select a value from this dropdown, the value property still displays "Day 1".
Could I get a suggestion on how to set the value property of a DropdownButton widget when using the List.generate method?
A: Add the value property in DropdownMenuItem.
dayValue = List<DropdownMenuItem<String>>.generate(
    31,
    (int index) => DropdownMenuItem(
          value: "Day ${index + 1}", // added
          child: Text("Day ${index + 1}"),
        ));
super.initState();
A: You forgot to give each DropdownMenuItem a value; that is why the issue is occurring. Just add the value parameter while generating the list of dropdown items:
dayValue = List<DropdownMenuItem<String>>.generate(31, (int index) =>
    DropdownMenuItem(value: "Day ${index + 1}", child: Text("Day ${index + 1}")));
import React from 'react';
import './App.css';
// rerender is assumed to be a re-render helper (e.g. wrapping ReactDOM.render)
// exported from index.js.
import rerender from './index';

// Module-level counter: instead of component state, the whole tree is
// re-rendered manually after each increment.
let counter = 0;

const incrementCounter = () => {
  counter++;
  rerender(Counter(), document.getElementById('root'));
};
const Counter=()=>(
<div className="App">
<header className="App-header">
{counter}
</header>
<p className="App-intro">
<button onClick={incrementCounter}>Click</button>
</p>
</div>
);
export default Counter;
\section{Introduction}
Over the past few years, deep reinforcement learning (RL) has shown huge success in solving tasks such as playing arcade games \citep{mnih2015human} and manipulating robotic arms \citep{levine2016end}. Recent advances in neural networks allow RL agents to learn control policies from raw pixels without feature engineering by human experts. However, most deep RL methods focus on solving problems in either simulated physics environments, where the inputs to the agents are joint angles and velocities, or simulated video games, where the inputs are rendered graphics. Agents trained in such simulated environments have little knowledge about the rich semantics of the world.
The World Wide Web (WWW) is a rich repository of knowledge about the real world. To navigate in this complex web environment, an agent needs to learn about the semantic meaning of texts, images and the relationships between them. Each action corresponds to interacting with the Document Object Model (DOM) from tree-structured HTML. Tasks like finding a friend on a social network, clicking an interesting link, and rating a place on Google Maps can be framed as accessing a particular DOM element and modifying its value with the user input.
In contrast to Atari games, the difficulty of web tasks comes from their diversity, large action space, and sparse reward signals. A common solution for the agent is to mimic the expert demonstration by imitation learning in the previous works \citep{shi2017world, liu2018reinforcement}. \citet{liu2018reinforcement} achieved state-of-the-art performance with very few expert demonstrations in the MiniWoB \citep{shi2017world} benchmark tasks, but their exploration policy requires constrained action sets, hand-crafted with expert knowledge in HTML.
In this work, our contribution is to propose a novel architecture, DOM-Q-NET, that parametrizes factorized Q functions for web navigation, which can be trained to match or outperform existing work on MiniWoB without using any expert demonstration. Graph Neural Network \citep{scarselli2009graph, li2015gated, DBLP:journals/corr/KipfW16} is used as the main backbone to provide three levels of state and action representations.
In particular, our model uses the neural message passing and the readout \citep{gilmer2017neural} of the \textit{local} DOM representations to produce \textit{neighbor} and \textit{global} representations for the web page. We also propose to use three separate multilayer perceptrons (MLP) \citep{mlp} to parametrize a factorized Q function for different action categories: ``click'', ``type'' and ``mode''. The entire architecture is fully differentiable, and all of its components are jointly trained.
Moreover, we evaluate our model on multitask learning of web navigation tasks, and demonstrate the transferability of learned behaviors on the web interface. To our knowledge, this is the first instance that an RL agent solves multiple tasks in the MiniWoB at once. We show that the multi-task agent achieves an average of 2x sample efficiency comparing to the single-task agent.
\section{Background}
\subsection{Representing web pages using DOMs}
The Document Object Model (DOM) is a programming interface for HTML documents and it defines the logical structure of such documents. DOMs are connected in a tree structure, and we frame web navigation as accessing a DOM and optionally modifying it by the user input. As an elementary object, each DOM has a ``tag'' and other attributes such as ``class'', ``is focused'', similar to the \textit{object} in Object Oriented Programming. Browsers use those attributes to render web pages for users.
\subsection{Reinforcement learning}
In the traditional reinforcement learning setting, an agent interacts with an infinite-horizon, discounted Markov Decision Process (MDP) to maximize its total discounted future rewards. An MDP is defined as a tuple $(\mathcal{S}, \mathcal{A}, T, R, \gamma)$ where $\mathcal{S}$ and $\mathcal{A}$ are the state space and the action space respectively, $T(s'|s,a)$ is the transition probability of reaching state $s' \in \mathcal{S}$ by taking action $a \in \mathcal{A}$ from $s\in\mathcal{S}$, $R$ is the immediate reward by the transition, and $\gamma$ is a discount factor.
The Q-value function for a tuple of actions is defined to be $ Q^{\pi}(s,a) = \mathbb{E}[\sum^T_{t=0}\gamma^t r_t|s_0=s, {a_0}={a}]$, where T is the number of timesteps till termination. The formula represents the expected future discounted reward starting from state $s$, performing action $a$ and following the policy until termination. The optimal Q-value function $Q^*(s,a) = max_\pi Q^\pi(s,a), \forall s \in \mathcal{S}, a \in \mathcal{A}$ \citep{sutton1998introduction} satisfies the Bellman optimality equation $Q^*(s, a)=\mathbb{E}_{s'}[r+\gamma \max_{a'\in \mathcal{A}}Q^*(s', a')]$.
\subsection{Graph neural networks}
For an undirected graph $G=(V, E)$, the Message Passing Neural Network (MPNN) framework \citep{gilmer2017neural} formulates two phases of the forward pass to update the node-level feature representations $h_{v}$, where $v\in V$, and graph-level feature vector $\hat{y}$.
The message passing phase updates hidden states of each node by applying a vertex update function $U_t$ over the current hidden state and the message, $h^{t+1}_v=U_t(h^t_v, m^{t+1}_v)$, where the passed message $m_v^{t+1}$ is computed as $m_v^{t+1}=\sum_{\omega\in N(v)}M_t(h_v^t,h_w^t, e_{vw})$. $N(v)$ denotes the neighbors of $v$ in $G$, and $e_{vw}$ is an edge feature. This process runs for T timesteps. The readout phase uses the readout function R, and computes the graph-level feature vector $\hat{y}=R({h_v^T|v\in G})$.
\subsection{Reinforcement Learning with Graph Neural Networks}
There has been work in robot locomotion that uses graph neural networks (GNNs) to model the physical body \citep{wang2018nervenet, gnn_relational}. NerveNet demonstrates that policies learned with GNN transfers better to other learning tasks than policies learned with MLP \citep{wang2018nervenet}. It uses GNNs to parametrize the entire policy whereas DOM-Q-NET uses GNNs to provide representational modules for factorized Q functions. Note that the graph structure of a robot is static whereas the graph structure of a web page can change at each time step. Locomotion-based control tasks provide dense rewards whereas web navigation tasks are sparse reward problems with only 0/1 reward at the end of the episode. For our tasks, the model also needs to account for the dependency of actions on goal instructions.
\subsection{Previous Work on RL on web interfaces}
\citet{shi2017world} constructed benchmark tasks, Mini World of Bits (MiniWoB), that consist of many toy tasks of web navigation. This environment provides both the image and HTML of a web page. Their work showed that the agent using the visual input cannot solve most of the tasks, even given the demonstrations. Then \citet{liu2018reinforcement} proposed DOM-NET architecture that uses a series of attention between DOM elements and the goal. With their workflow guided-exploration that uses the formal language to constrain the action space of an agent, they achieved state-of-the-art performance and sample efficiency in using demonstrations. Unlike these previous approaches, we aim to tackle web navigation without any expert demonstration or prior knowledge.
\section{Neural DOM Q Network}
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{MainFigures/WOB.pdf}
\end{center}
\caption{Given the web page on the right, its DOM tree representation is shown as a graph where each DOM represents a node from $V$. Different colors indicate different tag attributes of DOMs. DOMs are embedded as a local module, $\local{{\bm{e}}}$, and propagated by a GNN to produce a neighbor module, $\neighbor{{\bm{e}}}$. The global module, $\glob{{\bm{e}}}$, is aggregated from the neighbor module. The $Q_{dom}$ stream uses all three modules whereas $Q_{token}$ and $Q_{mode}$ streams only use the global module. Here, Q values of the `submit' and `sr' token are computed by $Q_{dom}$ and $Q_{token}$ respectively.}
\label{fig:domqnet}
\end{figure}
Consider the problem of navigating through multiple web pages or menus to locate a piece of information. Let $V$ be the set of DOMs in the current web page. There are often multiple goals that can be achieved in the same web environment. We consider goals that are presented to the agent in the form of a natural language sentence, e.g. ``Select sr and click Submit'' in Figure \ref{fig:domqnet} and ``Use the textbox to enter Kanesha and press Search, then find and click the 9th search result'' in Figure \ref{fig:demo}. Let $\mathcal{G}$ represent the set of word tokens in the given goal sentence. The RL agent will only receive a reward if it successfully accomplishes the goal, so it is a sparse reward problem. The primary means of navigation are through interaction with the buttons and the text fields on the web pages.
There are two major challenges in representing the state-action value function for web navigation: the action space is enormous, and the number of actions can vary drastically between the states. We propose DOM-Q-NET to address both of the problems in the following.
\subsection{Action space for web navigation}
In contrast to typical RL tasks that require choosing only one action $a$ from an action space, $\mathcal{A}$, such as choosing one from all combinations of controller's joint movements for Atari \citep{mnih2015human}, we frame acting on the web with three distinct categories of actions:
\begin{itemize}
\item {DOM selection} $\dom{a}$ chooses a single DOM in the current web page, $\dom{a} \in V$. The DOM selection covers the typical interactive actions such as clicking buttons or checkboxes as well as choosing which text box to fill in the string input.
\item {Word token selection} $\token{a} \in \mathcal{G}$ picks a word token from the given goal sentence to fill in the selected text box. The assumption that the typed string comes from the goal instruction aligns with previous work \citep{liu2018reinforcement}.
\item Mode $\mode{a}\in\{\text{click}, \text{type}\}$ tells the environment whether the agent's intention is to ``click'' or ``type'' when acting in the web page. $\mode{a}$ is represented as a binary action.
\end{itemize}
At each time step, the environment receives a tuple of actions, namely $a = (\dom{a}, \token{a}, \mode{a})$, though it does not process $\token{a}$ unless $\mode{a}=type$.
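The handling of the action tuple can be illustrated with a toy environment step. Everything below (the page-as-dict representation and the placeholder click/type effects) is a hypothetical sketch, not the MiniWoB API:

```python
def env_step(dom_values, a_dom, a_token, a_mode, goal_tokens):
    """Illustrative step for a toy page represented as {dom_index: text}.
    The token action a_token is only consumed when a_mode is 'type';
    otherwise the selected DOM is clicked and the token is ignored."""
    page = dict(dom_values)
    if a_mode == "type":
        page[a_dom] = goal_tokens[a_token]   # type the chosen goal token
    else:
        page[a_dom] = "<clicked>"            # click the chosen DOM
    return page
```

For the goal "Use the textbox to enter Kanesha and press Search", a typing step would select the textbox DOM, the token "Kanesha", and mode "type", while the subsequent button press would use mode "click" with an arbitrary token.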
\begin{figure}[h]
\begin{center}
\includegraphics[width=\linewidth]{MainFigures/demo.pdf}
\end{center}
\caption{A successful trajectory executed by our model for \textit{search-engine}. $S_i$ is the state, and $A_i=(a_{dom}, a_{token}, a_{mode})$ is a tuple of actions for the three distinct categories of actions at timestep i. DOM($x$) represents the index of the corresponding element $x$ in the web page.}
\label{fig:demo}
\end{figure}
\subsection{Factorized Q function}
One way to represent the state-action value function is to consider all the permutations of $\dom{a}$ and $\token{a}$. For example, \citet{mnih2015human} considers the permutations of joystick direction and button clicking for Atari. For MiniWoB, this introduces an enormous action space with size $|V|\times|\mathcal{G}|$. The number of DOMs and goal tokens, $|V|$ and $|\mathcal{G}|$, can reach up to 60 and 18, and the total number of actions become over $1,000$ for some hard tasks.
To reduce the action space, we consider a factorized state-action value function where the action values of $\dom{a}$ and $\token{a}$ are \textbf{independent} of each other. Formally, we define the optimal Q-value function as the sum of the individual value functions of the three action categories:
\begin{gather}
Q^*(s, a) = Q^*(s, \dom{a}, \token{a}, \mode{a}) = Q^*(s, \dom{a}) + Q^*(s, \token{a}) + Q^*(s, \mode{a}).
\end{gather}
Under the independence assumption, we can find the optimal policy by selecting the greedy actions w.r.t. each Q-value function individually. Therefore, the computation cost for the optimal action of the factorized Q function is linear in the number of DOM elements and the number of word tokens rather than quadratic.
\begin{align}
a^* = \left(\argmax_{\dom{a}}Q^*(s, \dom{a}), \argmax_{\token{a}}Q^*(s, \token{a}), \argmax_{\mode{a}}Q^*(s, \mode{a})\right)
\end{align}
\subsection{Learning state-action embeddings of web pages}
Many actions on the web, such as clicking different checkboxes and filling in unseen types of forms, share similar tag or class attributes. Our goal is to design a neural network architecture that effectively captures such invariance for web pages, yet is flexible enough to deal with the varying numbers of DOM elements and goal tokens at different time steps. Furthermore, when locating a piece of information on the web, an agent needs to be aware of both the local information, e.g. the name of a button and its surrounding texts, and the global information, e.g. the general theme, of the web page. The cues for clicking a particular button in a menu are likely scattered across the page.
To address the above problem, we propose a GNN-based RL agent that computes the factorized Q-value for each DOM in the current web page, called DOM-Q-NET as shown in Figure~\ref{fig:domqnet}. It uses additional information of tree-structured HTML to guide the learning of state-action representations, embeddings ${\bm{e}}$, which is shared among factorized Q networks. Explicitly modeling the HTML tree structure provides the relational information among the DOM elements to the agent. Given a web page, our model learns a concatenated embedding vector ${\bm{e}}^i = \left[\local{{\bm{e}}^i}, \neighbor{{\bm{e}}^i}, \glob{{\bm{e}}}\right]$ using the low-level and high-level modules that correspond to node-level and graph-level outputs of the GNN.
\textbf{Local Module}
\label{module:local}
$\local{{\bm{e}}}^i$ is the concatenation of each embedded attribute ${\bm{e}}_{Attr}$ of the DOM $v^i$, which includes the tag, class, focus, tampered, and text information of the DOM element. In particular, we use the maximum cosine similarity between the text and each goal token to measure the soft alignment of the DOM $v^i$ with the $j^{th}$ word embedding, $\goal{{\bm{e}}}^j$, in the goal sentence. \citet{liu2018reinforcement} uses the exact alignment to obtain tokens that appear in the goal sentence, but our method can detect synonyms that are not exactly matched.
\begin{align}
\displaystyle \local{{\bm{e}}}^i=\left[{\bm{e}}_{Attr}^i, \max_j\left(cos({{\bm{e}}}_{Attr}^i, \goal{{\bm{e}}}^j)\right)\right]
\end{align}
This provides the unpropagated action representation of clicking each DOM, and is the \textit{skip connection} of GNNs.
\textbf{Neighbor Module}
\label{module:neighbor}
$\neighbor{{\bm{e}}}^i$ is the node representation that incorporates the neighbor context of the DOM $v^i$ using a graph neural network. The model performs the message passing between the nodes of the tree with the weights ${\bm{w}}_{GNN}$. The local module is used to initialize this process. ${\bm{m}}^t$ is an intermediate state for each step of the message passing, and we adopt Gated Recurrent Units \citep{DBLP:journals/corr/ChoMGBSB14} for the nonlinear vertex update \citep{li2015gated}. This process is performed for T steps to obtain the final neighbor embeddings.
\begin{gather}
{\bm{m}}^{i, t+1}_{neighbor}=\sum_{k \in N(i)}{w}_{GNN} {\bm{e}}^{k, t}_{neighbor}, \quad \displaystyle {\bm{e}}_{neighbor}^{i,0}={\bm{e}}_{local}^{i}, \\ {\bm{e}}_{neighbor}^{i,t+1}=GRU({\bm{e}}_{neighbor}^{i,t}, {\bm{m}}_{neighbor}^{i,t+1}), \quad {\bm{e}}_{neighbor}^{i} = {\bm{e}}_{neighbor}^{i,T}
\end{gather}
By incorporating the context information, this module contains the state representation of the current page, and the propagated action representation of clicking the particular DOM, so the Q-value function can be approximated using only this module.
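As a structural sketch of the neighbor module, the toy function below runs $T$ rounds of message passing over a DOM tree given as an adjacency list. Note that the learned weight matrix and the GRU vertex update of the paper are replaced here by a scalar weight and a tanh update, so this only illustrates the propagation pattern, not the learned model:

```python
import math

def neighbor_embeddings(adj, e_local, T=3, w=0.5):
    """T rounds of message passing over the DOM tree.
    adj: {node: [neighbor nodes]}, e_local: {node: embedding (list)}.
    Messages are weighted sums of neighbor states; the GRU vertex
    update is approximated by tanh(h + m) for simplicity."""
    h = {v: list(e) for v, e in e_local.items()}
    for _ in range(T):
        msgs = {v: [w * sum(h[u][d] for u in adj[v])
                    for d in range(len(h[v]))] for v in adj}
        h = {v: [math.tanh(h[v][d] + msgs[v][d])
                 for d in range(len(h[v]))] for v in adj}
    return h
```

After a few rounds, each node's embedding mixes in information from nodes several hops away, which is why deeper DOM trees (e.g. \textit{social-media}) need more propagation steps.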
\textbf{Global Module}
\label{module:global}
$\glob{{\bm{e}}}$ is the high-level feature representation of the entire web page after the readout phase. It is used by all three factorized Q networks. We investigate two readout functions to obtain such global embedding with and without explicitly incorporating the goal information.
1) We use \textit{max-pooling} to aggregate all of the DOM embeddings of the web page.
\begin{align}
\glob{{\bm{e}}}=\textrm{maxpool}\left(\left\{ \left[\local{{\bm{e}}^{i}}, \neighbor{{\bm{e}}^{i}}\right]|v^i\in V\right\}\right)
\end{align}
2)\label{main:global} We use \textit{goal-attention} with the goal vector as an attention query. This is in contrast to \citet{velickovic2017graph} where the attention is used in the message passing phase, and the query is not a task dependent representation. To have the goal vector $h_{goal}$, each goal token $e_{token}$ is concatenated with the one-hot positional encoding vector $e_{pos}$, as shown in Figure~\ref{fig:domqnet}. Next, the position-wise feed-forward network with ReLU activation is applied to each concatenated vector before max-pooling the goal representation. Motivated by \citet{vaswani2017attention}, we use scaled dot product attention with local embeddings as keys, and neighbor embeddings as values. Note that ${\bm{E}}_{local}$ and ${\bm{E}}_{neighbor}$ are packed representations of $({\bm{e}}_{local}^1,...,{\bm{e}}_{local}^V)$ and $({\bm{e}}_{neighbor}^1,...,{\bm{e}}_{neighbor}^V)$ respectively, where ${\bm{E}}_{local} \in \mathbb{R}^{(V, d_k)}$, ${\bm{E}}_{neighbor} \in \mathbb{R}^{(V, d_k)}$, and $d_k$ is the dimension of text token embeddings.
\begin{align}
{\bm{e}}_{attn}=\text{softmax}(\frac{{\bm{h}}_{goal}{\bm{E}}_{local}^T}{\sqrt{d_k}}){\bm{E}}_{neighbor},
\quad
{\bm{e}}_{global\_attn}=\left[{\bm{e}}_{global}, {\bm{e}}_{attn}\right]
\end{align}
The illustrative diagram is shown in Appendix \ref{appendix:goalattn}, and a simpler method of concatenating the node-level feature with the goal vector is shown in Appendix \ref{appendix:goal-encoder}.
This method is also found to be effective in incorporating the goal information, but the size of the model increases.
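As a minimal illustration of the goal-attention readout, the sketch below computes scaled dot-product attention with the goal vector as the query, local embeddings as keys and neighbor embeddings as values, in pure Python (the vectors in the test are illustrative):

```python
import math

def goal_attention(h_goal, E_local, E_neighbor):
    """Scaled dot-product attention: query = goal vector,
    keys = local embeddings, values = neighbor embeddings."""
    d_k = len(h_goal)
    scores = [sum(q * k for q, k in zip(h_goal, key)) / math.sqrt(d_k)
              for key in E_local]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]
    dim = len(E_neighbor[0])
    e_attn = [sum(a * v[d] for a, v in zip(attn, E_neighbor))
              for d in range(dim)]
    return attn, e_attn
```

A DOM whose local embedding aligns with the goal receives most of the attention mass, so its neighbor embedding dominates the attended global vector.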
\textbf{Learning} The Q-value function of choosing the DOM is parametrized by a two-layer MLP, $\dom{Q}^i = MLP({\bm{e}}^i;\dom{w})$, where it takes the concatenation of DOM embeddings ${\bm{e}}^i = \left[\local{{\bm{e}}^i}, \neighbor{{\bm{e}}^i}, \glob{{\bm{e}}}\right]$ as the input. Similarly, the Q-value functions for choosing the word token and the mode are computed using $MLP(\token{{\bm{e}}}, \glob{{\bm{e}}}; \token{w})$ and $MLP(\glob{{\bm{e}}}; \mode{w})$ respectively. See Figure~\ref{fig:domqnet}.
All the model parameters including the embedding matrices are learned from scratch. Let $\displaystyle \theta=(E, {w}_{GNN}, \dom{w}, \token{w}, \mode{w})$ be the model parameters including the embedding matrices, the weights of a graph neural network, and weights of the factorized Q-value function. The model parameters are updated by minimizing the squared TD error \citep{sutton1988learning}:
\begin{align}
\min_\theta \mathbb{E}_{(s,a,r,s') \sim \text{replay}}[\left(y^{DQN}-Q(s, \dom{a};\theta) - Q(s, \token{a};\theta) - Q(s, \mode{a};\theta)\right)^2],
\end{align}
where the transition pairs $(s,a,r,s')$ are sampled from the replay buffer and $ y^{DQN}$ is the factorized target Q-value with the target network parameters $\theta^-$ as in the standard DQN algorithm.
\begin{align}
y^{DQN} = r+\gamma\left(\max_{\dom{a'}}Q(s', \dom{a}';\theta^-) + \max_{\token{a'}}Q(s', \token{a}';\theta^-) + \max_{\mode{a'}}Q(s', \mode{a}';\theta^-)\right)
\end{align}
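A sketch of the factorized target computation, assuming the three Q-value lists come from the target network evaluated at the next state (terminal handling added for completeness):

```python
def factorized_td_target(r, gamma, q_dom_next, q_token_next, q_mode_next,
                         done=False):
    """y = r + gamma * (max Q_dom + max Q_token + max Q_mode),
    with terminal states bootstrapping to zero."""
    if done:
        return r
    return r + gamma * (max(q_dom_next) + max(q_token_next)
                        + max(q_mode_next))
```

Since the heads are independent, the max over the joint action space equals the sum of the per-head maxima, so the target never requires enumerating $|V|\times|\mathcal{G}|$ tuples.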
\subsection{Multitask Learning for Transferring Learned Behaviours}
To assess the effectiveness of transferring learned behaviours and of solving multiple tasks with our model, we train a single agent acting in multiple environments. Transitions from different tasks are collected in a shared replay buffer, and the network is updated after performing an action in each environment. See Alg.~\ref{alg:multitask} for details.
\section{Experiments}
We first evaluate the generalization capability of the proposed model for large action space by comparing it against previous works. Tasks with various difficulties, as defined in Appendix \ref{appendix:taskdiff}, are chosen from MiniWoB. Next, we investigate the gain in sample efficiency with our model from multitask learning. We perform an ablation study to justify the effectiveness of each representational module, followed by the comparisons of gains in sample efficiency from goal-attention in multitask and single task settings. Hyperparameters are explained in Appendix \ref{appendix:hyperparams}.
\subsection{DOM-Q-NET Benchmark MiniWoB}
We use the Q-learning algorithm, with four components of Rainbow \citep{hessel2017rainbow}, to train our agent because web navigation tasks are sparse-reward problems, and off-policy learning with a replay buffer is more sample-efficient. The four components are DDQN \citep{van2016deep}, prioritized replay \citep{schaul2015prioritized}, multi-step learning \citep{sutton1988learning}, and NoisyNet \citep{fortunato2017noisy}. To align with the settings used by \citet{liu2018reinforcement}, we consider the tasks that only require clicking DOM elements and typing strings. The agent receives a reward of +1 if the task is completed correctly, and 0 otherwise. We perform $T=3$ steps of \textit{neural message passing} for all the tasks except \textit{social-media}, for which we use $T=7$ steps to address the large DOM space.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{MainFigures/Bar1.png}
\label{fig:barchart1}
\end{subfigure}
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{MainFigures/Bar2.png}
\label{fig:barchart}
\end{subfigure}
\caption{Performance comparisons of DOM-Q-NET with \citet{shi2017world, liu2018reinforcement}}\label{fig:compare}
\end{figure}
\textbf{Evaluation metric:} We plot the moving average of rewards for the last 100 episodes during training. We follow previous works \citep{shi2017world, liu2018reinforcement} and report the success rate, which is the percentage of test episodes that end with reward +1. Each reported success rate is the average of 4 different runs, and Appendix \ref{appendix:setup} explains our experiment protocol.
\textbf{Results:}
Figure~\ref{fig:compare} shows that DOM-Q-NET reaches a 100\% success rate for most of the tasks selected by \citet{liu2018reinforcement}, except for \textit{click-widget}, \textit{social-media}, and \textit{email-inbox}. Our model still reaches an 86\% success rate for \textit{social-media}, and the use of goal-attention enables the model to solve \textit{click-widget} and \textit{social-media} with a 100\% success rate. We did not use any prior knowledge, such as constraints on the action set during exploration, pre-defined fields of the goal, or expert demonstrations. In particular, our model solves a long-horizon task, \textit{choose-date}, that previous works were unable to solve even with demonstrations. This task requires many similar actions and has a large action space. Even with imitation learning or guided exploration, the neural network needs to learn a representation that generalizes to unseen, diverse DOM states and actions, which our model demonstrates.
\subsection{Multitask}
Two metrics are used for comparing the sample efficiency of multitask and single-task agents.
\begin{itemize}
\item $M_{total}$ for the multitask agent: total number of frames observed upon solving all the tasks.\\
$M_{total}$ for single-task agents: sum of the numbers of frames observed for solving each task.
\item $M_{task}$: number of frames observed for solving a specific task.
\end{itemize}
We trained a multitask agent that solves 9 tasks with about 2x sample efficiency, using roughly $M_{total}=63000$ frames, whereas the single-task agents use $M_{total}=127000$ frames combined. Figure~\ref{fig:results} shows the plots for 6 of the 9 tasks. In particular, \textit{login-user} and \textit{click-checkboxes} are solved with 40000 fewer frames using multitask learning, but such gains are less obvious when the task is simple, as in the case of \textit{navigate-tree}.
\begin{figure}
\centering
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{Multitask6/ENTERTEXT_MULTI.pdf}
\label{fig:entertext1}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{Multitask6/NAVIGATE.pdf}
\label{fig:navigate1}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{Multitask6/EnterPassword.pdf}
\label{fig:enterpassword1}
\end{subfigure}
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{Multitask6/Click-Option.pdf}
\label{fig:clickoption1}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{Multitask6/ClickCheckboxes.pdf}
\label{fig:clickcheckboxes1}
\end{subfigure}
~
\begin{subfigure}[b]{0.3\textwidth}
\includegraphics[width=\textwidth]{Multitask6/LoginUser.pdf}
\label{fig:loginuser1}
\end{subfigure}
\caption{Multitask Comparisons: 9-multitask DOM-Q-NET with goal-attention consistently has better sample efficiency. Results for other tasks are shown in Appendix \ref{appendix:multitask}. \ g\_a=goal-attention.}\label{fig:results}
\end{figure}
Next, we include two hard tasks, shown in Figure~\ref{fig:multitaskhard}. Compared to the $M_{total}=477000$ frames needed by single-task agents to solve 11 tasks, the multitask agent has observed only $M_{total}=29000\times 11=319000$ frames when the last task, \textit{social-media}, is solved, as shown in Figure~\ref{fig:multitaskhard}. Additionally, the plots indicate that multitask learning with simpler tasks makes more efficient use of the observed frames on hard tasks, achieving a better $M_{task}$ than multitask learning with only those two tasks. These results indicate that our model enables positive transfer of learned behaviours between distinct tasks.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{MultitaskHard2/test_social.pdf}
\label{fig:socialmedia1}
\end{subfigure}
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{MultitaskHard2/test_search.pdf}
\label{fig:searchengine1}
\end{subfigure}\vspace{-3pt}
\caption{Comparisons in sample efficiency for 2 hard tasks, \textit{social-media} (left) and \textit{search-engine} (right), by multitask learning. \textit{9\_multitask} refers to the tasks discussed in Figure~\ref{fig:results} }\label{fig:multitaskhard}
\end{figure}
\subsection{Ablation Study on the DOM Representation Modules}
\label{sec:ablation}
We perform ablation experiments to justify the effectiveness of using each module for the $Q_{dom}$ stream. We compare the proposed model against three discounted versions that omit some modules for computing $Q_{dom}$: (a) ${\bm{e}}_{dom}=\local{{\bm{e}}}$, (b) ${\bm{e}}_{dom}=\neighbor{{\bm{e}}}$, (c) ${\bm{e}}_{dom}=[\local{{\bm{e}}}^T, \neighbor{{\bm{e}}}^T]^T$.
Figure~\ref{fig:modules} shows the two tasks chosen. The failure case for \textit{click-checkboxes} shows that DOM selection without the neighbor module simply does not work, because many DOM elements have the same attributes and thus exactly the same representations despite differences in context. \citet{liu2018reinforcement} addressed this issue by hand-crafting the message passing. The faster convergence of the full DOM-Q-NET to the optimal behaviour indicates the limitation of the neighbor module alone, and shows how the global and local modules provide shortcuts to the high-level and low-level representations of the web page.
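The ablated variants correspond to dropping parts of the concatenated DOM embedding ${\bm{e}}_{dom}$. The sketch below uses plain lists as vectors; the dimensions and module names are illustrative only.

```python
def dom_embedding(e_local, e_neighbor, e_global,
                  modules=("local", "neighbor", "global")):
    # e_dom is the concatenation of whichever module outputs are enabled;
    # the ablations (a)-(c) in the text correspond to subsets of `modules`.
    parts = {"local": e_local, "neighbor": e_neighbor, "global": e_global}
    vec = []
    for name in ("local", "neighbor", "global"):
        if name in modules:
            vec.extend(parts[name])
    return vec
```

For example, variant (c) in the text is `modules=("local", "neighbor")`, i.e. the concatenation $[\local{{\bm{e}}}^T, \neighbor{{\bm{e}}}^T]^T$ without the global module.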
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{AblationStudy/ClickCheckboxes.pdf}
\label{fig:ablationclickcheckboxes1}
\end{subfigure}
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{AblationStudy/LoginUser.pdf}
\label{fig:ablationloginuser1}
\end{subfigure}\vspace{-3pt}
\caption{Ablation experiments for l=Local, n=Neighbor, g=Global modules. dom\_q\_net - g is the DOM-Q-NET without the global module. dom\_q\_net -l - g is the DOM-Q-NET with only neighbor module. dom\_q\_net-n-g is the DOM-Q-NET with only local module.}\label{fig:modules}
\end{figure}
\subsection{Effectiveness of Goal-Attention}
Most of the MiniWoB tasks have only one desired control policy, such as ``put a query word in the search box, and find the matched link'', where the word tokens for the query and the link align with specific DOM elements. Hence, our model solves most of the tasks without feeding the goal representation to the network, with exceptions like \textit{click-widget}.
Appendix \ref{appendix:benchmark} shows comparisons of the model with different goal encoding methods, including goal-attention. In some tasks, the effect of goal-attention is not obvious. However, Figure~\ref{fig:goal-attn} shows that the gain in sample efficiency from using goal-attention is considerable in the multitask learning setting, and much bigger than the gain in the single-task setting. This indicates that, when solving multiple tasks, the agent successfully learns to pay attention to different parts of the DOM tree given different goal instructions.
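Goal-attention can be sketched as a softmax weighting of DOM element scores by their similarity to a goal vector. This is a simplified stand-in for the paper's module, using dot-product scores over plain list vectors.

```python
import math

def goal_attention(goal, dom_embeddings):
    # Score each DOM element by its dot product with the goal vector, then
    # normalize with a softmax; returns one attention weight per element.
    scores = [sum(g * e for g, e in zip(goal, emb)) for emb in dom_embeddings]
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [x / total for x in exps]
```

Elements whose embeddings align with the goal receive larger weights, which is the mechanism that lets different goal instructions highlight different parts of the DOM tree.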
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{Multitask_g_a/EnterPasswordMultiGAJustification.pdf}
\label{fig:enterpasswordmultiga}\
\end{subfigure}
~
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{Multitask_g_a/LoginUser_MultiGAJustification.pdf}
\label{fig:loginusermultiga}
\end{subfigure}
\caption{Effects of goal-attention for single and multi-task learning (g\_a=goal attention)}\label{fig:goal-attn}
\end{figure}
\section{Discussion}
We propose a new architecture for parameterizing factorized Q functions using goal-attention, local word embeddings, and a graph neural network. We contribute to the formulation of web navigation with this model.
Without any demonstrations, it solves relatively hard tasks with large action spaces and transfers learned behaviours via multitask learning, two important capabilities for web navigation. For future work, we will investigate exploration strategies for tasks like \textit{email-inbox}, where the environment does not provide a simple instance of the task that the agent can use to generalize learned behaviours. \citet{liu2018reinforcement} demonstrated an interesting way to guide exploration. Another direction is to reduce the computational cost of evaluating the Q value for each DOM element.
Finally, we intend to apply our methods to using search engines. Tasks like question answering could benefit from the ability of an agent to query a search engine, navigate the results page, and obtain relevant information for solving the desired goal. The ability to query and navigate search results could also be used to bootstrap agents in realistic environments to obtain task-oriented knowledge and improve sample efficiency. \\\\
\textbf{Acknowledgement}: We acknowledge using the segment tree implementation from Dopamine \citep{DBLP:journals/corr/abs-1812-06110dopamine} for this project.\\\\
\textbf{Reproducibility}: Our code and demo are available at \href{https://github.com/Sheng-J/DOM-Q-NET}{https://github.com/Sheng-J/DOM-Q-NET} and \href{https://www.youtube.com/channel/UCrGsYub9lKCYO8dlREC3dnQ}{https://www.youtube.com/channel/UCrGsYub9lKCYO8dlREC3dnQ} respectively.
Q: Redundant files and folders on a Jekyll site I am learning Jekyll with GitHub. When I run:
jekyll new portfolio
It works properly. Let's say portfolio is my root project folder. When I copied my favicon to the root folder, the file was automatically copied to the _site folder. Then I created my new theme in the root folder:
jekyll new minimalist-design
This gives me redundant folders and files: the theme is created in both the root folder and the _site folder. It also gives me another .git folder inside my theme folder.
Why is this happening?
Here's the full tree from my root folder
portfolio/
├──.gitignore
├──about.md
├──Gemfile
├──Gemfile.lock
├──index.md
├──LICENSE
├──README.md
├──_config.yml
│
├──.git
│ └──//list of repository files
│
├──.sass-cache
│ ├──_base.scssc
│ ├──_layout.scssc
│ ├──_syntax-highlighting.scssc
│ └──minima.scssc
│
├──minimalist-design
│ ├──//It's my created theme folder
│ ├──.gitignore
│ ├──Gemfile
│ ├──LICENSE.txt
│ ├──minimalist-design.gemspec
│ ├──README.md
│ │
│ ├──.git
│ │ └──//Another repository folder????
│ ├──assets
│ ├──_includes
│ ├──_layouts
│ │ ├──default.html
│ │ ├──page.html
│ │ └──post.html
│ │
│ └──_sass
│
├──_posts
│ └──2016-12-29-welcome-to-jekyll.markdown
│
└──_site
├──feed.xml
├──feed.xslt.xml
├──index.html
├──LICENSE
├──README.md
│
├──about
│ └──index.html
│
├──assets
│ └──main.css
│
├──jekyll
│
└──minimalist-design
├──//Another my theme folder again????
├──Gemfile
├──LICENSE.txt
├──minimalist-design.gemspec
└──README.md
A: From documentation about _site directory:
This is where the generated site will be placed (by default) once Jekyll is done transforming it. It's probably a good idea to add this to your .gitignore file.
In other words, every file you put in your root folder will be copied into _site when you run one of the Jekyll build commands (e.g. jekyll build or jekyll serve), since _site is where the built files are generated.
For ordinary files that are not parsed and interpreted (e.g. files without Liquid code, or files that are not posts in Markdown format), the files are simply copied without transformation, as you noticed.
The .git folders in the subdirectories indicate they are independent git repositories: you can check and see their contents are different.
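Following the documentation's advice, a typical root `.gitignore` for a Jekyll site excludes the generated output and build caches. These entries are the common defaults, so adjust them to your setup:

```
_site/
.sass-cache/
.jekyll-cache/
.jekyll-metadata
```

The nested theme should also live in its own repository outside the site root (or be excluded the same way), so Jekyll does not copy it into _site on every build.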
import xml.etree.ElementTree as ET
from programy.parser.template.nodes.base import TemplateNode
from programy.parser.template.nodes.learn import TemplateLearnNode
from programy.parser.template.nodes.word import TemplateWordNode
from test.parser.template.base import TemplateTestsBaseClass
class TemplateLearnNodeTests(TemplateTestsBaseClass):
def test_node(self):
root = TemplateNode()
self.assertIsNotNone(root)
learn = TemplateLearnNode()
self.assertIsNotNone(learn)
learn._pattern = ET.fromstring("<pattern>HELLO LEARN</pattern>")
learn._topic = ET.fromstring("<topic>*</topic>")
learn._that = ET.fromstring("<that>*</that>")
learn._template = TemplateWordNode("LEARN")
root.append(learn)
self.assertEqual(1, len(root.children))
resolved = root.resolve(self.bot, self.clientid)
self.assertIsNotNone(resolved)
self.assertEqual("", resolved)
def test_to_xml(self):
root = TemplateNode()
learn = TemplateLearnNode()
learn._pattern = ET.fromstring("<pattern>HELLO LEARN</pattern>")
learn._topic = ET.fromstring("<topic>*</topic>")
learn._that = ET.fromstring("<that>*</that>")
learn._template = TemplateWordNode("LEARN")
root.append(learn)
xml = root.xml_tree(self.bot, self.clientid)
self.assertIsNotNone(xml)
xml_str = ET.tostring(xml, "utf-8").decode("utf-8")
self.assertEqual("<template><learn><pattern>HELLO LEARN</pattern><topic>*</topic><that>*</that><template>LEARN</template></learn></template>", xml_str)
Honours
Olympic Games
Melbourne 1956: gold in freestyle wrestling, heavyweight.
Rome 1960: silver in freestyle wrestling, heavyweight.
Tokyo 1964: bronze in freestyle wrestling, heavyweight.
World Championships
Karlsruhe 1955: bronze in Greco-Roman wrestling, heavyweight.
Istanbul 1957: gold in freestyle wrestling, heavyweight.
Budapest 1958: bronze in Greco-Roman wrestling, heavyweight.
Tehran 1959: silver in freestyle wrestling, heavyweight.
Yokohama 1961: silver in freestyle wrestling, heavyweight; silver in Greco-Roman wrestling, heavyweight.
Sofia 1963: bronze in freestyle wrestling, heavyweight.
Helsingborg 1963: bronze in Greco-Roman wrestling, heavyweight.
World Cup
Istanbul 1956: gold in Greco-Roman wrestling, heavyweight; silver in freestyle wrestling, heavyweight.
Sofia 1958: bronze in freestyle wrestling, heavyweight.
Mediterranean Games
Barcelona 1955: gold in Greco-Roman wrestling, heavyweight.
Beirut 1959: gold in freestyle wrestling, heavyweight.
Naples 1963: gold in freestyle wrestling, heavyweight.
Balkan Championships
Burgas 1960: silver in freestyle wrestling, heavyweight.
Jambol 1965: silver in Greco-Roman wrestling, heavyweight.
Knights need more to click in NRL: Brown
Matt Encarnacion
Sunday, 24 March 2019 12:37 am
Newcastle are taking longer than expected to gel after a second straight NRL loss. Credit: AAP
Newcastle coach Nathan Brown has called for patience with his new-look NRL side after it went agonisingly close to posting consecutive wins to start the season.
The Knights went toe-to-toe with Penrith for most of the contest on Saturday but were outdone in a handful of key moments to lose by just two points.
"Were they that much better than us? No," Brown said.
"I thought there was a couple of times there where if individuals worked a touch harder, we might be sitting here with a smile on our face.
"The dog and the bone I reckon - they got the bone a little bit more."
There were glimpses of brilliance from their spine in attack, including a mouth-watering two-man cutout from star five-eighth Kalyn Ponga for a first-half try.
But they also missed a handful of genuine chances, with Ponga, Connor Watson and Edrick Lee all getting over the line only to be denied.
And while Brown liked most of his team's defence, he was upset with the way they conceded two soft tries to Isaah Yeo and Frank Winterstein.
He urged for patience from supporters as the Knights continue to integrate a host of new faces into a side widely tipped to break their finals drought this year.
Major signing David Klemmer was arguably Newcastle's best, carrying the ball for 211 metres after going close to cracking the 200-metre barrier in round one.
"It's so early in the year. This time last week people were talking about how the Cowboys and Brisbane were no good," Brown said.
"And then we saw what happened from round one to round two.
"The reality for us is we've got seven or eight new faces in our team. And building what we're building, it was never going to be perfect earlier, that's for sure."
Dutch Ambassador Discusses Anne Frank and Europe's Refugee Crisis
By Natalie Liu
Dutch ambassador Henne Schuwer, left, with recipients of the 2017 Anne Frank Award: the Reverend Leo J. O'Donovan, center, on behalf of the Jesuit Refugee Service (JRS), and Robert Quinn, right, founding executive director of Scholars at Risk
WASHINGTON - As wars and rumors of war haunt halls of states, large numbers of people are fleeing their native lands to nations that are more prosperous and better governed.
Meanwhile, each destination country is faced with the challenges an influx of migrants and refugees brings. The fact that a few of them might carry out terrorist attacks adds a disquieting element to the already anxious mix.
Against this backdrop, a recent conversation with Netherlands' ambassador to the United States was markedly level-headed.
Dr. Otto Frank holds the Golden Pan award, given for the sale of 1 million copies of "The Diary of Anne Frank," June 14, 1971 in London. Dr. Frank was the only family member to survive World War II.
Anne Frank 'symbolic for us'
Henne Schuwer, a career diplomat from the Kingdom of the Netherlands, was getting ready to present the Anne Frank Award to two U.S.-based global organizations whose missions are to alleviate the sufferings of refugees and help scholars threatened with persecution.
"Anne Frank was a refugee, let's start with that," said Schuwer, as he began a conversation.
FILE - This is an undated photo of Anne Frank from the Anne Frank Center, USA. (AP Photo/Anne Frank Center)
Frank, a German-born Jewish girl who, along with her family, moved to the Netherlands when the Nazis gained control of Germany, later went into hiding when Germans invaded the Netherlands. She and her family survived in hiding for more than two years, thanks to help from non-Jewish Dutch citizens before they were discovered and sent to concentration camps, where she died.
The diary she kept while in hiding would emerge, posthumously, as one of the most important testimonials of the brutality of World War II.
Anne Frank: The Diary of a Young Girl is a very moving account, the Dutch diplomat says, especially "because you know how it ends, what really became of her; if you read the book with that knowledge, it can be very, very emotional."
The Netherlands Embassy initiated the Anne Frank award in 2014. This year, the honorees were the Jesuit Refugee Service and New York-based Scholars at Risk, recognized for their "activism, for not walking away from problems of the world."
The Anne Frank award is "symbolic for us," and stands for "the effort we as a country put in to defend human rights," Schuwer says.
That effort — and the spirit behind the effort — remain strong today, he adds.
'We're used to immigration'
"We are a very densely populated country, with more than 17 million people, in an area as big as Maryland, but we are very aware that we have built our society, from the 15th and 16th centuries onwards, as a welcoming, tolerant society" that has "prospered and profited from welcoming people with different skills and different ideas," Schuwer says.
The ambassador acknowledges that taking in large groups of refugees fleeing wars poses a different set of challenges than does playing host to philosophers in the ranks of John Locke and René Descartes in the 17th century fleeing from England and France.
"We're used to immigration, though I would not conceal that what's happening at the moment, especially in Europe, is a lot of immigration, a lot of people who want to come in, and we have difficulties now in integrating them," he says.
Social contract: a two-way street
One gets the sense, in the course of the interview, that to integrate or not is a question of "to be or not to be" magnitude. Further, to ensure or at least enable the "to be," both the receiving country's government and society and aspiring new citizens will have to commit to a binding and ultimately mutually beneficial "social contract."
On the one hand, giving migrant and refugee children a fair chance at education and subsequently employment and advancement opportunities to fulfill their ambitions is key to gaining productive citizens versus a migrant population that remains on the fringe of society harboring resentment. On the other hand, the responsibility of integrating into the "brave new world" also lies on the shoulders of the migrants themselves.
"It's wonderful that you come, it's wonderful that we can welcome you, but in the end if you come to the Netherlands and you want to live here, you have to integrate in Dutch society, including language and customs … you have to contribute to what we try to achieve," Schuwer says.
He would, however, cut aspiring immigrant citizens some slack with regard to Dutch language skills: "You can hardly force everybody to talk perfect Dutch because the language, especially pronunciation, is quite difficult!"
When it is brought to his attention that the queen, an Argentine native, has managed to master her adopted country's native tongue, the ambassador jokingly responded that the queen was bestowed with unique incentives.
Perfect Dutch accent aside, the ambassador held his ground concerning culture and customs: "Our customs are our customs." Just as importantly, old and newcomers alike are obligated to abide by "rules of the land."
Resources directed toward integration
The fact that many of the terrorist attacks in the past few years in Europe have been homegrown worries Schuwer.
"Luckily we haven't had major terrorist attacks in the Netherlands," he said, quickly adding: "Knock on wood — as I'm saying it, I realize that maybe I shouldn't be saying it, as I wouldn't want to jinx it!"
The Dutch government, Schuwer said, puts an enormous amount of resources toward helping newly arrived immigrants to integrate, including making sure of a strong, grassroots-level neighborhood police presence.
"Lots of our policemen are on bikes and they walk through neighborhoods," he said. These policemen, he said, "know their neighbors and neighborhoods" and are ready to engage. "If you have a problem, I'm your neighborhood policeman and more than that, I can also be your counselor, I can show you the way, and so on."
Integration, ultimately, isn't a label federal or national governments can simply declare; instead, it's a process that can only be achieved at the grassroots level, the diplomat said.
Citizen like any other
"If we acknowledge your refugee status, we'll provide you housing in the beginning, you have to work. At a certain moment, you go out of refugee status, you go out of government-supported housing, you pay for your own room, and you'll be a citizen like any other, Dutch citizen," Schuwer said, summarizing his country's approach toward refugees.
A citizen "like any other" may be the epitome of humanitarian welcome that a democratic society can extend to individuals fleeing from wars and some other situation. Yet for those in the process, it might seem elusive, if not impossible. Ambassador Schuwer points to the example of Ahmed Aboutaleb to illustrate that it is possible, and then some.
FILE - Rotterdam Mayor Ahmed Aboutaleb poses during a meeting with other mayors to push for local actions to fight climate change on the margins of the United Nations Climate Change Conference, in Paris, Dec. 3, 2015.
Role model
"We now have a Moroccan mayor of Rotterdam, who's wildly popular, and in his second term," he said. "He's the classic example of somebody who came at a young age with his parents to the Netherlands … as you say in America, he made it!"
Ahmed Aboutaleb, born in 1961, arrived in the Netherlands at age 15 for family reunification; he holds dual Dutch and Moroccan citizenship. His official biography says that "obtaining Dutch citizenship entails a responsibility to respect and uphold those values [enshrined in the Dutch Constitution] and to take part in building the We Society."
The mayor is equally proud of his Arab heritage and has translated works of the Syrian-born poet Ali Ahmad Said Esber, known by the pen name Adonis or Adunis, into Dutch.
As Ambassador Schuwer spoke about the mayor of Rotterdam, he seemed not only proud of the mayor's individual achievement, but also proud of what the Dutch society has been able to accomplish, looking toward the future with hope rather than fear.
Kick back your heels while you taste this refreshing shake, reminiscent of the laid-back Florida Keys.
In blender, combine ice cream, milk, limeade concentrate, and lime peel and blend until mixture is smooth and frothy. Pour into 2 tall glasses.
\section*{Acknowledgments}
\begin{footnotesize}
\setlength{\bibsep}{0.0pt}
Cryptozoic Will Showcase Games at Origins Game Fair 2018
By Shahriar Fouladi
Latest Rick and Morty Tabletop Games, Pantone™: The Game, DC Spyfall, and Wallet Are Among 2018 Releases to Be Demoed at Cryptozoic's Booth #C201
Lake Forest, CA – June 6, 2018 – Cryptozoic Entertainment, leading creator of board games, trading cards, and physical and digital collectibles, today announced that it will demo and sell current and upcoming games at Origins Game Fair, June 13-17 at the Greater Columbus Convention Center in Columbus, Ohio.
At Booth #C201, Cryptozoic will demo Rick and Morty: The Pickle Rick Game, Pantone™: The Game, DC Spyfall, and much more. It will also conduct tournaments for the popular DC Deck-Building Game and Epic Spell Wars™ series ahead of new releases later this year. In addition, Cryptozoic will sell several games, including Rick and Morty: The Ricks Must Be Crazy Multiverse Game, Million Dollars, But… The Game, GKR: Heavy Hitters, and The Walking Dead: No Sanctuary — The Board Game, as well as limited prerelease quantities of Wallet and DC Deck-Building Game Crossover Pack 7: New Gods for the first time anywhere.
"What makes Origins special is that it is for the most important part of the hobby, the gamers," said Adam Sblendorio, Vice President of Creative at Cryptozoic. "It's an amazing opportunity for our designers to interact face-to-face with the dedicated players out there and demo our newest games for them. We're excited to see how they react to our Rick and Morty games, Wallet, Pantone™: The Game, and everything else we have coming this year. We'll have some tournaments and fun surprises too!"
Cryptozoic will feature upcoming and already released 2018 games, including:
Rick and Morty: The Ricks Must Be Crazy Multiverse Game: In this engine-building game, 2-4 players take on the roles of Rick, Morty, Zeep, and Kyle as they introduce Power-making technology to different worlds, and then try to get that Power before their opponents. Based on the Rick and Morty episode "The Ricks Must Be Crazy," gameplay takes place in four "'Verses" with unique attributes: the Rickverse, Microverse, Miniverse, and Teenyverse. Players spend Actions to build Power Supplies and Contraptions and move to different 'Verses. At the end of each round, Power generates from the bottom 'Verse up, and players utilize it to play One-Shot abilities, use Character Abilities, and power-up their Contraptions.
Rick and Morty: The Pickle Rick Game: This intense 1-2 player game is based on the hugely popular "Pickle Rick" episode of Adult Swim's Rick and Morty. One player plays as Pickle Rick as he tries to escape a heavily armed compound, while the other player takes on the roles of both the Russians and Jaguar as they try to stop him. The Pickle Rick player uses weapons cards to dole out damage and Air Vents to get out of jams as he or she tries to get to the Rooftop. The game includes both Pickle Rick and Jaguar miniatures that are moved across a dynamic board made up of tiles that are constantly being added, rotated, and flipped. Adding to the off-the-wall fun is the game's packaging: It looks like a pickle!
Pantone™: The Game: In this easy-to-learn competitive party game, 2-20 players try to recognize characters from pop culture who are represented only by abstract arrangements of colors, inspired by Pantone™, the world's leading color expert. After the Artist player completes his or her representation of a character, the other players take turns trying to guess who it is. If no one can guess the character during a round, a Hint is given at the start of the next round, with each Hint reducing the number of points awarded. The game was designed by Scott Rogers (Rayguns and Rocketships, the God of War video game series).
Million Dollars, But... The Game: Based on Rooster Teeth's wildly popular web series Million Dollars But…, the addictive, easy-to-learn game has players put their morals and imagination to the test, posing the question, "What would you do for a million dollars?" In the game, 2-6 players (or more) each take four black Trigger Cards and four gold Rule Cards at the start of every round. Players each then create a scenario—using one Trigger and one Rule card—that proposes a situation in which someone gets a million dollars, but must do something distasteful in return. The winner of each round is the player who creates the scenario that the judge decides he or she would not do. The partnership between Cryptozoic and Rooster Teeth will make the game available at hobby stores and other retailers nationwide.
The Walking Dead: No Sanctuary — The Board Game — The 1-4 player game redefines the survival horror genre with gameplay that emulates the group dynamics from the hit AMC series, as one player acts as the Leader and the other players decide whether to support his or her choices. Designed by the award-winning team of Adam and Brady Sadler, the cooperative game requires players to work together to win as a group, as they take on a multitude of different enemies—both dead and alive—in Scenarios taken directly from episodes of The Walking Dead. The over 50 highly detailed miniature figures include Walkers and fan-favorite Survivors, like Rick, Glenn, Shane, and Daryl.
DC Deck-Building Game Crossover Pack 7: New Gods: The expansion for the popular DC Deck-Building Game series allows players to play as popular New Gods characters: Orion, Mister Miracle, Big Barda, Kalibak, Granny Goodness, and Darkseid. When using this pack, players try to conquer the opposing faction's Homeworld—New Genesis or Apokolips. Each faction has a stack of three Homeworld cards, each one more difficult to defeat than the previous. Cards with the keyword "Protector" reduce the Power of conquer attempts, potentially thwarting them and extending the battle even more.
DC Spyfall: In this new variation on the social deduction game Spyfall, 3-8 players take on the roles of DC's greatest Super Heroes as they have a secret meeting at an iconic location, such as the Batcave, Daily Planet, or the Fortress of Solitude. The twist is that one of them is secretly the Joker in disguise. In the intense 8-minute rounds, the non-Joker players ask questions and give answers to deduce which one of them is the Clown Prince of Crime without giving away the location, while the Joker player tries to figure out the location before his or her identity is revealed! The game offers several innovations, including Ability cards and modes in which one player is Harley Quinn and all players have Joker cards. Cryptozoic is offering a case purchase promotion, allowing hobby retailers that purchase a case of six games to receive six promo Swamp Thing Location decks, while quantities last (Bundle Code: DCSPYS).
Wallet: In this exciting party game, 2-7 players become colorful characters and compete to find money and an ID that will convince the cops of their innocence. Holding multiple IDs and several types of currency will result in the player getting locked up for sure. The game is played using an actual wallet with multiple pockets and zippered areas that allow players to bluff their opponents or lead them into traps. Each fast-paced game lasts only 10 minutes, and an entire match is over in just 30 minutes.
The Arrival: Created by famed designer Martin Wallace, the strategy board game puts 2-4 players in the roles of Tribe Leaders as they vie with each other and the demon-like Fomori for control of ancient Ireland. The unique gameplay includes two different victory conditions as determined by whether the Tribe Leaders or the Fomori control more of the map at the end of play. Cryptozoic's wide release of the acclaimed game, previously released in limited quantities at the Essen Game Fair in 2016, upgrades the art, improves various components, and adds a new Advanced Game variant.
About Cryptozoic
Since 2010, Cryptozoic Entertainment has been dedicated to the concept of "Fans First," striving to develop the most creative and sought-after products for pop culture enthusiasts worldwide. As an entertainment company with a diverse portfolio of licensed and original IPs, its catalog covers a broad spectrum of tabletop games and collectibles. The passionate team at Cryptozoic aims to inspire gamers and collectors all around the globe, while bringing fans together as part of the Cryptozoic community. Visit www.cryptozoic.com for more information about product releases, events, and news.
press@cryptozoic.com
Cryptozoic.com - facebook.com/cryptozoic - twitter.com/cryptozoic instagram.com/cryptozoicentertainment
© 2018 Cryptozoic Entertainment. | 25351 Commercentre Drive Suite 250 Lake Forest, CA 92630. All Rights Reserved.
Cryptozoic News
Shahriar Fouladi
Shahriar has been working as a writer and editor for Cryptozoic since 2014. When not berating people about incorrect usage of commas, he enjoys staring at walls, eating fruit, and telling toddlers he's taller than them. He also likes superheroes, comic books, TV shows, and Alfred Hitchcock movies. In his free time, you can usually find him hanging out with his wife Laura, enacting some sort of mischief together.
Q: OpenCover missing PDBs exception I am working on a Windows Phone 8.1 app. This app has its unit tests implemented using the MSTest framework. To run the tests, we need to use vstest.console.exe and also generate an .appx file for the unit test project. Now I need to use OpenCover to analyze the tests and get a coverage report.
I am following this tutorial but so far I can't get it working.
As per the tutorial, I have created a batch file which contains the following line:
vstest.console.exe myApp_1.0.0.0_x86_Debug.appx /Settings:C:\Test\Test.runsettings /logger:trx
I then call OpenCover using the following command:
OpenCover.Console.exe -target:C:\Test\myBat.bat -register -output:out.xml
but this results in the missing PDBs exception. The above command actually kicks off all the tests, and I can see that vstest.console has created a trx file and that all tests pass, but no report is generated by OpenCover.
I have tried to use the following command as well:
OpenCover.Console.exe -target:C:\Test\myBat.bat -register -output:out.xml -targetdir:<TargetDir>
In the TargetDir field I have tried giving the path of myProject\obj\x86\Debug - as this contains PDB files. After this did not work I tried giving TargetDir the path of myProject\AppPackages\myProject_x86_Debug_Test - this contains both appx and appxsym files. Finally, I tried copying all files from the Debug folder into the app packages folder and that did not work as well.
I am guessing that OpenCover isn't yet ready for providing coverage for windows phone apps. If OpenCover supports Windows Phone Apps then I would like to know how and if there is anything wrong in my approach.
A: I currently have the same problem, so I can't provide the answer yet. But have you tried -register:user instead of -register?
OpenCover.Console.exe -target:C:\Test\myBat.bat -register:user -output:out.xml
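One more thing worth trying, offered as an unverified sketch rather than a known fix: OpenCover also has a -searchdirs switch for pointing at alternative PDB locations, which can help when the symbols sit in obj\x86\Debug rather than next to the target. The directory paths below are placeholders for your own layout:

```shell
OpenCover.Console.exe -target:C:\Test\myBat.bat -register:user -searchdirs:C:\Test\myProject\obj\x86\Debug -output:out.xml
```

If the PDBs OpenCover finds do not match the instrumented assemblies (for example, because the appx was built from a different output), coverage will still come back empty, so make sure the search directory holds the same build that produced the appx.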
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 251 |
(a) The Beraisa lists all the cases of a Hedyot purchasing from Hekdesh, the first of which is 'Moshcho be'Manah ve'Lo Hispik li'Fedoso ad she'Amad be'Masayim'.
What does the Tana mean by 'li'Fedoso'?
... in the reverse case, where at the time of the Meshichah the article was worth two hundred Zuz, and the price dropped to a Manah before he managed to pay for it?
(c) How much is he obligated to pay if he redeemed the article for two hundred Zuz, but by the time he made the Meshichah, the price had dropped to a Manah?
(d) In the reverse case, where he redeemed the article for a Manah and the price rose to two hundred Zuz before he managed to make a Meshichah, he only pays a Manah.
Why is that? Why do we not apply the principle 'Lo Yehei Ko'ach Hedyot Chamur me'Hekdesh'?
(a) The Tana of our Mishnah states 'Kol Mitzvos ha'Ben al ha'Av, Anashim Chayavin, ve'Nashim Peturos'.
Why can the Tana not be referring to the Mitzvos of a son towards his father?
(b) Does the Tana comment on the Mitzvos of a son towards his father?
(c) From which Mitzvos Asei is a woman exempt?
(d) From which three Mitzvos Lo Sa'aseh is she also exempt?
(a) From where do we know that a woman is obligated to respect her parents?
(b) According to the Tana Kama of the Beraisa, which three obligations does a father have towards his son, besides the Mitzvos of Milah and Pidyon ha'Ben?
(c) 'Yesh Omrim (Rebbi Nasan), Af Lehasito ba'Mayim'.
Why should Chazal have obligated a father to teach his son to swim?
(d) What does Rebbi Yehudah say about someone who does not teach his son a trade?
... in Vayeira "Vayamal Avraham es Yitzchok B'no"?
... in Lech Lecha "Himol Lachem Kol Zachar"?
... in Vayeira "Ka'asher Tzivah *Oso* Elokim"?
(b) If neither the father nor Beis-Din circumcised him, then he becomes obligated to circumcise himself when he grows up.
What is the difference between their obligation and his?
(c) We know that this set of Halachos was not confined to Avraham Avinu, but applies to all generations from the Lashon "Tzivah", and from Tana de'Bei Rebbi Yishmael.
What does Tana de'Bei Rebbi Yishmael say about the word "Tzav"?
... in Devarim "ve'Tzav es Yehoshua ve'Chazkeihu ve'Amtzei'hu"?
... in Sh'lach-Lecha "Min ha'Yom Asher Tzivah Hashem va'Hal'ah le'Doroseichem"?
(a) We learn that a father is obligated to redeem his firstborn son from the Pasuk in Ki Sisa "Kol Bechor Banecha Tifdeh".
... the Pasuk in Korach "Ach *Padoh Tifdeh* es Bechor ha'Adam"?
... "Tifdeh" "Tipadeh" (from the fact that "Tifdeh" is written without a 'Yud' [see Rashi 18b. DH 'li'Mesores'])?
(b) And from where do we know that a father is not obligated to redeem his daughter?
(a) According to the Tana Kama of the Beraisa, if a man is faced with the Mitzvah of redeeming both himself and his son, then redeeming himself takes precedence.
What does Rebbi Yehudah say? What reason does he give for this?
(b) Rebbi Yirmiyah qualifies this Machlokes.
In which case does even Rebbi Yehudah concede that the father's redemption takes precedence?
(c) Then how does he establish the Machlokes? What is its basis?
... the Rabbanan, is precedence given to himself?
... Rebbi Yehudah, is precedence now given to his son?
(a) If someone is confronted with the Mitzvah of redeeming his son and going to Yerushalayim for Yom-Tov (together with his Olas Re'iyah), and he can only afford one of them, the Tana Kama of another Beraisa obligates him to give precedence to redeeming his son.
What does Rebbi Yehudah say? What reason does he give?
(b) How do the Rabbanan counter this?
(c) What do we learn from the Pasuk in Ki Sisa "*Kol* Bechor Banecha Tifdeh"?
(d) Why is this not obvious, bearing in mind that the Torah in Bo refers to Bechor as "Peter Rechem"?
... in Eikev "ve'Limadtem Osam es Beneichem"?
... in Va'eschanan "u'Lemadtem Osam, u'Shemartem La'asosam"?
... "ve'Limadtem" 'u'Lemadtem' (from the fact that the Torah omits a 'Yud')?
(b) And from where do we know that a father is not obligated to learn with his daughter?
(c) The Tana Kama of the Beraisa rules that where both the father and the son wish to learn, and there is only sufficient sustenance to allow one to go, then the father takes precedence.
(a) What happened when Rav Acha B'rei de'Rav Ya'akov sent his son Rav Ya'akov to Abaye's Yeshivah to learn and he returned for Bein ha'Zemanim?
What gave Rav Acha second thoughts?
(b) Why did Abaye instruct the Talmidim not to invite Rav Acha bar Ya'akov?
(c) What happened that night?
(d) What was Rav Acha's reaction to this?
(a) The Sugya discusses the order of priorities between learning Torah and marriage. The Tana of the Beraisa give precedence to Torah study.
How does he himself qualify this?
(b) Rav Yehudah Amar Shmuel gives precedence to marriage.
On what grounds does Rebbi Yochanan seemingly object to Shmuel's ruling?
(c) We conclude however, that they do not argue, because 'Ha Lan, ve'Ha Lehu'.
(a) What was Rav Huna's response when Rav Chisda boasted to him about Rav Hamnuna's greatness?
(b) Why did Rav Hamnuna not cover his head with a Sudar?
(c) How did Rav Huna react when Rav Hamnuna told him that he was not married? What did he subsequently tell him?
(d) Rav Huna follows his own reasoning.
What does Rav Huna say about someone who reaches the age of twenty and is not yet married?
(a) Rava corroborates this with a statement from Tana de'Bei Rebbi Yishmael.
What does Tana de'Bei Rebbi Yishmael say about someone who reaches the age of twenty and does not marry?
(b) To what did Rav Chisda attribute the fact that he was superior to his colleagues?
(c) What did he mean by that?
(d) What did he say would have happened had he married at fourteen?
# Vandermonde polynomial

In algebra, the Vandermonde polynomial of an ordered set of n variables $X_1,\dots, X_n$, named after Alexandre-Théophile Vandermonde, is the polynomial:

$$V_n = \prod_{1\le i<j\le n} (X_j - X_i).$$

(Some sources use the opposite order $(X_i - X_j)$, which changes the sign $\binom{n}{2}$ times: thus in some dimensions the two formulae agree in sign, while in others they have opposite signs.)

It is also called the Vandermonde determinant, as it is the determinant of the Vandermonde matrix.

The value depends on the order of the terms: it is an alternating polynomial, not a symmetric polynomial.

## Alternating

The defining property of the Vandermonde polynomial is that it is alternating in the entries, meaning that permuting the $X_i$ by an odd permutation changes the sign, while permuting them by an even permutation does not change the value of the polynomial; in fact, it is the basic alternating polynomial, as will be made precise below.

It thus depends on the order, and is zero if two entries are equal. This also follows from the formula, but is also a consequence of being alternating: if two variables are equal, then switching them leaves the value unchanged yet, by the alternating property, also inverts its sign, yielding $V_n = -V_n$, and thus $V_n = 0$ (assuming the characteristic is not 2; otherwise being alternating is equivalent to being symmetric).

Conversely, the Vandermonde polynomial is a factor of every alternating polynomial: as shown above, an alternating polynomial vanishes if any two variables are equal, and thus must have $(X_i - X_j)$ as a factor for all $i \neq j$.

### Alternating polynomials

Thus, the Vandermonde polynomial (together with the symmetric polynomials) generates the alternating polynomials.

## Discriminant

Its square is widely called the discriminant, though some sources call the Vandermonde polynomial itself the discriminant.

The discriminant (the square of the Vandermonde polynomial: $\Delta = V_n^2$) does not depend on the order of the terms, as $(-1)^2 = 1$, and is thus an invariant of the unordered set of points.

If one adjoins the Vandermonde polynomial to the ring of symmetric polynomials in n variables $\Lambda_n$, one obtains the quadratic extension $\Lambda_n[V_n]/\langle V_n^2 - \Delta\rangle$, which is the ring of alternating polynomials.

## Vandermonde polynomial of a polynomial

Given a polynomial, the Vandermonde polynomial of its roots is defined over the splitting field; for a non-monic polynomial with leading coefficient a, one may define the Vandermonde polynomial as

$$V_n = a^{n-1}\prod_{1\le i<j\le n} (X_j - X_i)$$

(multiplying by a power of the leading coefficient) to accord with the discriminant.

## Generalizations

Over arbitrary rings, one instead uses a different polynomial to generate the alternating polynomials; see (Romagny, 2005).

### Weyl character formula

(a vast generalization)

The Vandermonde polynomial can be considered a special case of the Weyl character formula, specifically the Weyl denominator formula (the case of the trivial representation) of the special unitary group $SU(n)$.
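The product formula and the determinant description can be checked numerically. A small sketch using only the Python standard library (the sample points are arbitrary choices, not from the text):

```python
from itertools import permutations
from math import prod

def vandermonde_poly(xs):
    # Product formula: V_n = prod over i < j of (x_j - x_i).
    n = len(xs)
    return prod(xs[j] - xs[i] for i in range(n) for j in range(i + 1, n))

def det(m):
    # Leibniz-formula determinant; fine for the tiny matrices used here.
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # Sign of the permutation via its inversion count.
        inversions = sum(perm[i] > perm[j]
                         for i in range(n) for j in range(i + 1, n))
        total += (-1) ** inversions * prod(m[i][perm[i]] for i in range(n))
    return total

xs = [1, 2, 4]
# Vandermonde matrix with rows (1, x_i, x_i^2, ...).
vmatrix = [[x ** j for j in range(len(xs))] for x in xs]

print(vandermonde_poly(xs))         # 6 = (2-1)(4-1)(4-2)
print(det(vmatrix))                 # 6, agreeing with the product formula
print(vandermonde_poly([2, 1, 4]))  # -6: one transposition flips the sign
```

Swapping any two points negates the value while leaving its square (the discriminant) unchanged, matching the alternating property described above.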
Q: Run python script at reception of email I own a shared hosting which can run anacrontab. I would like to run a python script when I receive an email on that server.
Is anacrontab enough?
Or would using a client such as Gmail be better?
A: import imapclient, pyzmail, html2text

def latestMail():
    imapObj = imapclient.IMAPClient('imap.yourServer.com', ssl=False)
    imapObj.login('imapUser', 'imapPass')
    imapObj.select_folder('Inbox', readonly=False)
    UIDs = imapObj.search(criteria='ALL', charset=None)
    rawMessages = imapObj.fetch(UIDs[0], ['BODY[]', 'FLAGS'])
    message = pyzmail.PyzMessage.factory(rawMessages[UIDs[0]]['BODY[]'])
    return message

def parser(message):
    if message.text_part is not None and message.html_part is not None:
        multipart = True
    else:
        multipart = False
    if message.text_part is not None:
        try:
            body = message.text_part.get_payload().decode(message.text_part.charset)
        except TypeError:
            body = message.text_part.get_payload()
    if message.html_part is not None and multipart is False:
        try:
            body = html2text.html2text(message.html_part.get_payload().decode(message.html_part.charset))
        except Exception:
            raise SystemExit
    return body

try:
    message = latestMail()
    clean = parser(message)
    print clean
except IndexError:
    print "No messages left"
    raise SystemExit(0)
except Exception as e:
    print e
Crontab config:
HOME=/var/www/html/whatever
* * * * * root /var/www/html/whatever/myMailChecker.py
Conclusion:
This will check your IMAP server's Inbox every minute and parse the content of your mail; you can then do whatever you want afterwards, like create a new entry in your MySQL table with the mail content, or run another script if clean is not None, etc.
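As a side note, and not part of the original answer: if you would rather avoid the third-party imapclient/pyzmail/html2text stack, the standard library's email package can do the same body extraction (Python 3 shown; the raw bytes below are only a stand-in for what imaplib's fetch() would return from your server):

```python
import email
from email import policy

# Stand-in for the raw RFC 822 bytes an IMAP fetch would return.
raw = (
    b"From: alice@example.com\r\n"
    b"To: bob@example.com\r\n"
    b"Subject: hello\r\n"
    b"Content-Type: text/plain; charset=utf-8\r\n"
    b"\r\n"
    b"This is the body.\r\n"
)

# policy.default yields EmailMessage objects with the modern API.
msg = email.message_from_bytes(raw, policy=policy.default)

# get_body() walks multipart messages; prefer plain text over HTML.
body_part = msg.get_body(preferencelist=("plain", "html"))
body = body_part.get_content()

print(msg["Subject"])  # hello
print(body.strip())    # This is the body.
```

The same get_body()/get_content() calls handle multipart messages too, so the manual text_part/html_part juggling above becomes unnecessary.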
Q: querydsl how to build a predicate to compare columns with each other I want to do the following:
select column1, column2 from table where column1 = column2
How do I do this in Querydsl? I have the code below, but it binds key=value
A: public static List<BooleanExpression> getAddressBooleanExpressions(String type, Class<?> clazz) {
    List<BooleanExpression> booleanExpressions = new ArrayList<BooleanExpression>();
    PathBuilder<?> entityPath = new PathBuilder(clazz, type);
    BooleanExpression expression1 = entityPath.get("column1").eq(entityPath.get("column2"));
    BooleanExpression expression2 = entityPath.get("columnA").eq(entityPath.get("columnB"));
    BooleanExpression expression3 = entityPath.get("columnX").eq(entityPath.get("columnZ"));
    BooleanExpression expression4 = entityPath.get("physicalAddress").eq(entityPath.get("mailingAddress"));
    booleanExpressions.add(expression1);
    booleanExpressions.add(expression2);
    booleanExpressions.add(expression3);
    booleanExpressions.add(expression4);
    return booleanExpressions;
}
Batareyka () is a settlement in Temryuk District of Krasnodar Krai, Russia. It is part of the Zaporozhskoye rural settlement.
External links
Batareyka (settlement)
Settlements of Krasnodar Krai
Heinrich Arnhold, in full Heinrich Gustav Arnhold (born July 22, 1885, in Dresden; died October 10, 1935, in Dresden), was a German banker, art collector, patron of the arts, and Esperantist.
Life and work
Arnhold came from the Dresden banking family; Georg Arnhold was his father, Max Arnhold his uncle. Both had built up Bankhaus Gebrüder Arnhold into the largest private bank in Saxony.
His upbringing was shaped by the upper-middle-class and progressive atmosphere of his parental home. In 1907 he attended the 16th World Peace Congress in Munich with his father, a patron of the German Peace Society. Inspired by Bertha Suttner, he learned Esperanto at an early age. His teacher was the Dresden Esperanto poet Marie Hankel. In 1908 he was one of the organizers of the 4th Esperanto World Congress in Dresden. From 1911 to 1914 he was 1st chairman of the Saxon Esperanto Federation, from 1912 also treasurer of the German Esperanto Federation.
Heinrich Arnhold studied law and received his doctorate in law from the University of Leipzig in 1908. He joined the bank as a partner in 1910.
In 1914 he married Lisa, née Mattersdorff (1890-1972), also from a Dresden banking family. The couple had five children: Ruth, later married Steiner (1914-2001), Sigrid, later married Edwards († before 1992), Rainer († 1993), Esther, later married Seligmann (1918 - May 5, 2000), and Heinrich-Hartmut (Henry H. Arnhold). The family home on Tiergartenstraße was a social gathering place and the site of Arnhold's discussion evenings. In 1927, Arnhold had the garden of the house redesigned by Erwin Barth. Heinrich and Lisa Arnhold built up a significant collection of modern art as well as, with encouragement from the family's investments in the porcelain and ceramics industry, an extensive collection of Meissen porcelain.
Arnhold was a co-founder of the Society of Sponsors and Friends of the Dresden University of Technology (Gesellschaft der Förderer und Freunde der Technischen Hochschule Dresden, GFF) and was named its honorary senator for his long-standing commitment to the university.
Nazi era
When the Nazis rose to power in 1933, Arnhold, his family and his business were persecuted because of his Jewish heritage.
At first he tried to resist the growing persecution of the Jews through legal means and petitions. Arnhold suffered strokes in 1934 and 1935, as a result of which he died; his death was later recognized as persecution-related. With the help of her brother-in-law Kurt Arnhold, his wife Lisa managed to bring the family and the porcelain collection to safety, first in Switzerland and then via Portugal and Brazil to the USA.
According to Artdaily, Arnhold acquired Emil Nolde's "Buchsbaumgarten" in the sale of the Ismar Littmann collection at the Berlin auction house Max Perl in February 1935.
Legacy
His son, Henry H. Arnhold, escaped German-occupied Norway for the United States in 1941. There, during World War II, he served in U.S. Army intelligence as one of the Ritchie Boys. After the war he joined the family's now New York-based firm Arnhold and S. Bleichroeder, becoming its non-executive chairman in the 1970s.
Since 2001, his son Henry Arnhold has supported a lecture and discussion series, the Lisa and Heinrich Arnhold Lecture, organized jointly by Dresden Heritage e.V. and the American Academy in Berlin. The events take place twice a year.
Writings
Das Stimmrecht lombardierter Aktien. Leipzig, Univ., Diss., 1908
Literature
Heike Biedermann: Die Sammlung Lisa und Heinrich Arnhold. In: Von Monet bis Mondrian. 2006, S. 101–111
Maureen Cassidy-Geiger (Hrsg.): The Arnhold collection of Meissen porcelain 1710-50. [Exhibition The Arnhold Collection of Meissen Porcelain, 1710–50, March 25 - June 29, 2008]. New York: Frick Collection 2008, ISBN 978-1-904832-44-7, ISBN 978-0-912114-39-2
Simone Lässig: Nationalsozialistische "Judenpolitik" und jüdische Selbstbehauptung vor dem Novemberpogrom. Das Beispiel der Bankiersfamilie Arnhold. In: Reiner Pommerin (Hrsg.): Dresden unterm Hakenkreuz. (Dresdner historische Studien 3) Weimar/Köln/Wien etc.: Böhlau 1998, ISBN 9783412111977, S. 129–192.
External links
Arnhold Family Collection im Leo Baeck Institute (digitalisiert)
References
1935 deaths
1885 births
Esperantists
Patrons of the arts
German art collectors
German bankers
Businesspeople from Dresden
German people of Jewish descent
Gamers are extremely passionate about getting better at their favorite games, and they don't mind spending money to do it. I know because I am one of them!
This means high profit potential.
Comes with one year free hosting & SSL Certificate Already Installed + Lifetime Support and Step by Step Success Guide.
Considerations of Doing Business with the DoD
Managing supply chain risk is an area of Federal acquisition that has gained greater attention in the last few years due to an improved understanding of the potential risks to Government systems. Conducting business with the DoD or the U.S. Government calls for an understanding of the source of this focus, the latest policy developments, and the information you may be asked to provide to verify the security of your supply chain or to assist in the program protection planning of a Government system. If you operate in the technology space, especially as it relates to a defense weapons system or other important Government system, you must be very cautious about foreign capital and the potential or perceived influence of that capital.
Defense Science Board Study
A major development in Government officials' understanding of the supply chain risks posed by adversarial foreign governments was a 2017 Defense Science Board study titled "Task Force on Cyber Supply Chain". In it, the Board found that the United States' weapons systems are at risk from the malicious insertion of defects or malware into microelectronics and embedded software, and from the exploitation of latent vulnerabilities in these systems.
They recommended that the DoD take various actions, to include:
Expanding vulnerability assessments
Improving detection and reporting
Enhancing program protection planning
Improving timeliness of supplier vetting
Considering cybersecurity impact of COTS products and components
Establishing sustainment program protection plans for fielded systems
Collecting and acting on parts vulnerabilities
Executive Order Report
Another threat to the DoD supply chain was illuminated in a September 2018 report to the President titled "Assessing and Strengthening the Manufacturing and Defense Industrial Base and Supply Chain Resiliency of the United States." In it, the authors highlighted that the Chinese Communist Party's "Made in China 2025" initiative was targeting key innovative technology sectors in the United States, such as artificial intelligence; quantum computing; robotics; autonomous and new energy vehicles; high-performance medical devices; high-tech ship components; and other emerging industries that are all critical to national defense.
The report noted that China relies on both legal and illicit means, including foreign direct and venture investments, open source collection, human collectors, espionage, cyber operations, and the evasion of U.S. export control restrictions to acquire intellectual property and critical technologies. In terms of foreign direct investment, the Chinese government allocated $46 billion in 2016 for U.S. technology sectors. This represented triple the previous year and a tenfold increase from 2011. The below figures illustrate how China is targeting key technology sectors with its state-supported foreign direct investment.
Congressional Action
In response to these threats and the increasing potential that U.S. firms may trade managerial control in return for Chinese Communist Party financial investments, the Foreign Investment Risk Review Modernization Act (FIRRMA) of 2018 was passed by Congress and signed into law by the President on August 13, 2018. This act strengthened and modernized the capabilities of the Committee on Foreign Investment in the United States (CFIUS) to address national security concerns more effectively, including broadening the authorities of the President and CFIUS to review and to take action to address any national security concerns arising from certain non-controlling investments and real estate transactions involving foreign persons.
Under FIRRMA, a U.S. company that "produces, designs, tests, manufactures, fabricates or develops one or more critical technologies" may undergo heightened scrutiny by CFIUS. Fourteen broad categories of technology are expected to be the focus, including:
Position, Navigation, and Timing (PNT) Technology
Microprocessor Technology
Advanced Computing Technology
Data Analytics Technology
Quantum Information and Sensing Technology
Logistics Technology
Additive Manufacturing (e.g., 3-D printing)
Brain-Computer Interfaces
Advanced Surveillance Technologies
To trigger the mandatory CFIUS filing requirement under current regulations, the U.S. business that receives foreign investment must also operate in one of 27 high-risk industries identified by North American Industry Classification System (NAICS) code, including:
Aircraft Manufacturing
Aircraft Engine and Engine Parts Manufacturing
Alumina Refining and Primary Aluminum Production
Ball and Roller Bearing Manufacturing
Computer Storage Device Manufacturing
Electronic Computer Manufacturing
Guided Missile and Space Vehicle Manufacturing
Guided Missile and Space Vehicle Propulsion Unit and Propulsion Unit Parts Manufacturing
Military Armored Vehicle, Tank, and Tank Component Manufacturing
Nuclear Electric Power Generation
Other Basic Inorganic Chemical Manufacturing
Other Guided Missile and Space Vehicle Parts and Auxiliary Equipment Manufacturing
Petrochemical Manufacturing
Powder Metallurgy Part Manufacturing
Power, Distribution, and Specialty Transformer Manufacturing
Primary Battery Manufacturing
Radio and Television Broadcasting and Wireless Communications Equipment Manufacturing
Research and Development in Nanotechnology
Research and Development in Biotechnology (except Nanobiotechnology)
Secondary Smelting and Alloying of Aluminum
Search, Detection, Navigation, Guidance, Aeronautical, and Nautical System and Instrument Manufacturing
Semiconductor and Related Device Manufacturing
Semiconductor Machinery Manufacturing
Storage Battery Manufacturing
Telephone Apparatus Manufacturing
Turbine and Turbine Generator Set Units Manufacturing
Check out this CFIUS tool that helps you assess potential CFIUS filing obligations or compliance with law (Note: this does not constitute endorsement of the firm).
The CFIUS Process
The Committee on Foreign Investment in the United States (CFIUS) is a Federal interagency committee chaired by the Secretary of the Treasury. Additional members of CFIUS include the Secretaries of Homeland Security, Commerce, Defense, State, Energy, and Labor, the Attorney General, the Director of National Intelligence, the U.S. Trade Representative, and the Director of the Office of Science and Technology Policy.
Image Credit: Latham & Watkins, LLP
CFIUS was created in 1988 by the Exon-Florio Amendment to the Defense Production Act of 1950. CFIUS' authorizing statute was amended by the Foreign Investment and National Security Act (FINSA) of 2007.
This statutory framework authorizes the President of the United States (through CFIUS) to review "any merger, acquisition, or takeover … by or with any foreign person which could result in foreign control of any person engaged in interstate commerce in the United States."
CFIUS' role is to evaluate whether and to what extent such transactions could impact U.S. national security. If a transaction could pose a risk to U.S. national security, the President may suspend or prohibit the transaction, or impose conditions on it.
A CFIUS determination that a transaction could threaten or impair national security does not necessarily mean that the transaction will not be allowed to move forward. Indeed, very few transactions have ever been rejected outright through the CFIUS process (although from time to time parties have withdrawn their transactions from review and terminated those transactions when the likelihood of an unsuccessful outcome became clear). In many cases, CFIUS can clear a transaction subject to conditions designed to mitigate the perceived risks to U.S. national security the transaction otherwise would pose. (Source: Latham & Watkins, LLP Overview of the CFIUS Process)
Information Technology Executive Order 13873
In Executive Order 13873 of May 15, 2019, titled "Securing the Information and Communications Technology and Services Supply Chain", the President identified that "foreign adversaries are increasingly creating and exploiting vulnerabilities in information and communications technology and services, which store and communicate vast amounts of sensitive information, facilitate the digital economy, and support critical infrastructure and vital emergency services, in order to commit malicious cyber-enabled actions, including economic and industrial espionage against the United States and its people."
To counteract this threat, the President prohibited any acquisition, importation, transfer, installation, dealing in, or use of any information and communications technology or service where the transaction involves information and communications technology or services designed, developed, manufactured, or supplied by persons owned by, controlled by, or subject to the jurisdiction or direction of a foreign adversary, and the transaction:
Poses an undue risk of sabotage to or subversion of the design, integrity, manufacturing, production, distribution, installation, operation, or maintenance of information and communications technology or services in the United States;
Poses an undue risk of catastrophic effects on the security or resiliency of United States critical infrastructure or the digital economy of the United States; or
Otherwise poses an unacceptable risk to the national security of the United States or the security and safety of United States persons.
In February 2015, the Director of National Intelligence testified that cyber threats to U.S. national and economic security are increasing in frequency, scale, sophistication, and severity of impact. The Council of Economic Advisers, an agency within the Executive Office of the President, estimates that malicious cyber activity cost the U.S. economy between $57 billion and $109 billion in 2016, with some estimates putting that number much higher. Cybersecurity is a critical aspect of weapons systems design, and over the last few years its criticality has gained much more attention.
However, it has also been realized that it is not enough to just secure the weapons system. U.S. adversaries have been able to exploit vulnerabilities across the larger DoD supply chain by stealing intellectual property and potentially compromising important systems. These vulnerabilities can be found in contractor facilities, including design, development, and production environments, networks, supply chains, and personnel, which can act as cyber pathways for adversarial actors to access Government program organizations or fielded systems to steal, alter, or destroy system functionality, information, or technology.
A 2015 Government Accountability Office (GAO) report highlighted that not all cyber intrusions result from intentional attacks; some stem from unintentional threats "caused by, among other things, defective computer or network equipment, and careless or poorly trained employees." The report also noted that cyber incidents are on the rise, which demands a Government response.
To ensure that the larger Defense Industrial Base (DIB) supply chain has the appropriate level of cybersecurity to stem the loss of Controlled Unclassified Information (CUI), the Undersecretary of Defense for Acquisition and Sustainment (USD A&S) initiated the CMMC initiative. The intent is to incorporate CMMC into the Defense Federal Acquisition Regulation Supplement (DFARS), building on clauses already in existence (such as DFARS 252.204-7012), and to eventually use it as a requirement for contract award. To date, OMB has not instituted the DFARS revision, so there are no mandatory CMMC requirements currently in place. If implemented as envisioned, CMMC will encompass multiple maturity levels that range from "Basic Cybersecurity Hygiene" to "Advanced/Progressive."
Note that CMMC certification does not denote achievement of supply chain security nor represent the entirety of cybersecurity measures that your company might want/need to adopt.
Image Credit: CMMC Model v1.0 Public Briefing, 31 January 2020
Points of Clarification
If your company solely produces Commercial-Off-the-Shelf (COTS) products, a CMMC certification will not be required.
CMMC applies only to your unclassified networks that handle, process, and/or store Federal Contract Information (FCI) or CUI.
If your company does not possess CUI but possesses FCI, it must be certified at a minimum of CMMC Level 1.
The level of certification required will be driven by individual programs as captured in the Requests for Information or Proposals submitted by the Government.
The certification will be conducted by CMMC Third Party Assessment Organizations (C3PAOs) under the auspices of the CMMC Accreditation Body (AB) (cmmcab.org). The CMMC AB is establishing a CMMC Marketplace where DIB companies will be able to select one of the approved C3PAOs and schedule a CMMC assessment for a specific level.
A CMMC certificate will be valid for three years.
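The points of clarification above can be summarized as a small decision rule. The sketch below is a hypothetical helper, not official CMMC tooling; in particular, the exact level required for CUI is program-dependent, and Level 3 is used here only as the level commonly associated with full NIST SP 800-171 coverage under CMMC v1.0:

```python
from typing import Optional

def minimum_cmmc_level(produces_only_cots: bool,
                       handles_fci: bool,
                       handles_cui: bool) -> Optional[int]:
    """Minimum CMMC level implied by the points of clarification above;
    None means no certification is required."""
    if produces_only_cots:
        return None  # COTS-only producers do not need certification
    if handles_cui:
        return 3     # program-dependent; Level 3 assumed here for CUI
    if handles_fci:
        return 1     # FCI alone requires at least Level 1
    return None      # neither FCI nor CUI on unclassified networks

print(minimum_cmmc_level(False, True, False))  # 1
```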
Expected Costs
To be determined, but expected CMMC assessment costs will depend on several factors, including the CMMC level sought, the complexity of your company's network, and other market forces.
CMMC Specific Processes and Practices By Level
Your company will be expected to demonstrate both the institutionalization of the processes and the implementation of the practices to certify at that level. Source: CMMC Appendices
Supply Chain Expectations
To this end, when competing for and negotiating a DoD contract or agreement, you will increasingly notice that there is a heightened focus from Government program officials on ensuring that the product you are providing has a secure supply chain both for hardware and software. You may notice this in a variety of ways:
You will likely be requested to obtain a Cybersecurity Maturity Model Certification in order to bid on certain Federal contracts.
You may be requested to provide details about any ownership or control your company may have from countries that are unfriendly or adversarial to the United States.
You may be requested to identify contractual obligations, technical or research agreements, or other information exchange efforts that you have with countries that are unfriendly or adversarial to the United States.
You may be requested to identify component suppliers or original equipment manufacturer vendors with headquarters, manufacturing, distribution, or RDT&E facilities within countries that are unfriendly or adversarial to the United States.
You may be requested to identify the corporate and technical processes you use to mitigate the risk of malicious insertion or compromise in your products' design, manufacturing, and distribution supply chain.
You may be requested to explain the supply chain process you employ to mitigate counterfeit parts being included in systems delivered to the U.S. Government.
You may be requested to identify the system design principles you follow to ensure that hardware or software is not configured in such a way that an adversary might initiate network connections and/or data transmissions to an external network or initiate a catastrophic failure.
You may be asked to provide illumination of your supply chain through the provision of a hardware/software Bill of Materials.
You may be requested to explain any significant malicious network intrusions or significant data breaches your company experienced that led to loss of client data or intellectual property and what actions are being taken to avoid any reoccurrences.
You may be requested to explain how your company incorporates resilient design methods that include rapid isolation of subsystems based on detection of aberrant behavior.
You may be requested to explain how your system design includes active search and continuous automated monitoring to detect system failure.
You may be requested to explain how your company applies technical methods to identify discrepancies in software, including code and firmware, to screen for malicious code and hardware abnormalities.
You may be asked to assist in the development and update of Program Protection Plans (PPPs) for defense acquisition programs. A PPP is a single source document used to coordinate and integrate all protection efforts for a major defense acquisition program, and is used to manage risks to advanced technology and mission-critical system functionality from foreign collection, design vulnerability, supply chain exploit/insertion, and battlefield loss throughout the acquisition lifecycle.
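As an illustration of the Bill of Materials request above, a minimal hardware/software BOM entry might look like the following. All supplier and part names are invented for the example, and the layout does not follow any particular SBOM standard (such as SPDX or CycloneDX):

```json
{
  "component": "flight-data-recorder",
  "supplier": "Example Avionics Inc.",
  "country_of_origin": "US",
  "subcomponents": [
    { "part": "MCU-1000", "type": "hardware",
      "supplier": "Example Semiconductor Ltd.", "country_of_origin": "TW" },
    { "part": "rtos-kernel 4.2", "type": "software",
      "supplier": "Example RTOS Co.", "license": "proprietary" }
  ]
}
```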
Q: Angular2 one way databinding in input field

Very simple, I have a component containing a string.
I need to show this string in an input field.
I thought I needed it like this, but it doesn't work:
import {Component} from "@angular/core";

@Component({
  selector: 'my-component',
  template: `
    <input type="text" ([ngModel])="myVar" >
  `
})
export class MyComponent {
  myVar: string = "Hello World";

  constructor() {
  }
}
A: ([ngModel]) syntax is wrong, it should be [(ngModel)]. Notice the order of the parens.
Mnemonic is banana in the box. [ looks like a box and ( looks like a banana.
Also you seem to have a typo in the path to your template file, it says ./my-compoent instead of ./my-component which is what the file is probably named.
A: The solution is:
([ngModel])="myVar" => [(ngModel)]="myVar"
Also add a name attribute to the input: name="myVar".
Try it like this:
import { Component } from "@angular/core";

@Component({
  selector: 'my-component',
  templateUrl: './my-compoent.template.html'
})
export class MyComponent {
  myVar: string = "Hello World";

  constructor() { }
}
In the HTML you need to add a name attribute to the input:
<input type="text" name="myVar" [(ngModel)]="myVar" >
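Since the question asks specifically about one-way databinding, it is worth adding that the "banana in a box" is only needed for two-way binding. For one-way binding from the component to the view you can drop the parentheses (both forms below are standard Angular syntax; note that any use of ngModel, including [(ngModel)], also requires FormsModule to be imported in your NgModule):

```html
<!-- one-way via the DOM value property; no FormsModule needed -->
<input type="text" [value]="myVar">

<!-- one-way via ngModel; typing in the box does not write back to myVar -->
<input type="text" [ngModel]="myVar">
```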
Source: https://www.openfoam.com/documentation/guides/latest/api/inversePointDistanceDiffusivity_8H_source.html

The open source CFD toolbox
inversePointDistanceDiffusivity.H

/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | www.openfoam.com
     \\/     M anipulation  |
-------------------------------------------------------------------------------
    Copyright (C) 2011-2012 OpenFOAM Foundation
-------------------------------------------------------------------------------
License
    This file is part of OpenFOAM.

    OpenFOAM is free software: you can redistribute it and/or modify it
    under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
    ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
    FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
    for more details.

    You should have received a copy of the GNU General Public License
    along with OpenFOAM.  If not, see <http://www.gnu.org/licenses/>.

Class
    Foam::inversePointDistanceDiffusivity

Description
    Inverse distance to the given patches motion diffusivity.

SourceFiles
    inversePointDistanceDiffusivity.C

\*---------------------------------------------------------------------------*/

#ifndef inversePointDistanceDiffusivity_H
#define inversePointDistanceDiffusivity_H

#include "uniformDiffusivity.H"
#include "wordRes.H"

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

namespace Foam
{

/*---------------------------------------------------------------------------*\
              Class inversePointDistanceDiffusivity Declaration
\*---------------------------------------------------------------------------*/

class inversePointDistanceDiffusivity
:
    public uniformDiffusivity
{
    // Private data

        //- Patches selected to base the distance on
        wordRes patchNames_;


    // Private Member Functions

        //- No copy construct
        inversePointDistanceDiffusivity
        (
            const inversePointDistanceDiffusivity&
        ) = delete;

        //- No copy assignment
        void operator=(const inversePointDistanceDiffusivity&) = delete;


public:

    //- Runtime type information
    TypeName("inversePointDistance");


    // Constructors

        //- Construct for the given fvMesh and data Istream
        inversePointDistanceDiffusivity(const fvMesh& mesh, Istream& mdData);


    //- Destructor
    virtual ~inversePointDistanceDiffusivity() = default;


    // Member Functions

        //- Correct the motion diffusivity
        virtual void correct();
};


// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

} // End namespace Foam

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

#endif

// ************************************************************************* //
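For context, motion diffusivity models like this one are typically selected at run time from a case's dynamicMeshDict. The fragment below is a sketch of such a selection; the choice of motion solver and the patch name movingPatch are assumptions for the example, not part of the header above:

```
dynamicFvMesh       dynamicMotionSolverFvMesh;

motionSolverLibs    ("libfvMotionSolvers.so");

motionSolver        displacementLaplacian;

displacementLaplacianCoeffs
{
    // TypeName("inversePointDistance") above is the run-time selection key;
    // the word list names the patches the point distance is measured to.
    diffusivity     inversePointDistance (movingPatch);
}
```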
Jules Jean Germain Besnard, born in Paris in June 1889 and died in November 1958 at the retirement home of Nogent-sur-Marne, was a French master potter.

Biography

The son of Albert Besnard, he was a member of the committee of the Société des artistes décorateurs; he exhibited at the Salon des Tuileries from 1925 and at the Salon d'automne from 1926.

His works are held, among others, by the Musée du Luxembourg, the Musée des beaux-arts de Lyon, and the Musée des arts décoratifs.

His ashes rest in the columbarium of Père-Lachaise.
\section{Introduction} \label{section:introduction}
\emph{Optimal control} theories are developed to find control inputs for a plant such that its states and inputs optimize a particular objective\cite{anderson1990optimal}. An optimal control (OC) system typically includes system dynamics and an objective function to be optimized given a task specification. The system dynamics and objective function can determine an optimal controller for a given OC system. In order to employ a pre-designed OC system in practice, one often allows tunable parameters appearing in one or more of its dynamic model, objective function, or the optimal controller (if it is available) to tune the OC system to meet additional performance requirements or even to minimize an additional performance index.
\emph{Tuning} of OC systems refers to the adjustment of a tunable parameter in the control system to further minimize such an additional performance index while retaining optimality (through some adjustment of the optimal control for the original performance index to reflect the tunable parameter adjustment).
Such an additional performance index commonly involves a scalar \emph{loss function} defined in terms of a system's instantaneous states/inputs to serve as a criterion to evaluate the system's performance, and could reflect stability, fast and smooth set-point tracking, robustness for disturbance rejection, mission changes or newly arisen safety constraints. Tuning OC systems is critical in adapting OC to different application scenarios and the hope is that it can be achieved without re-designing the OC system from the beginning.
Tuning of OC system is in the tradition of what has become known as neighboring extremal optimal control (NEOC). Reference \cite{bryson2018applied} (the first edition of which originally appeared in 1975) treats the following problem. Suppose an open-loop optimal control is known for a nonlinear system with prescribed initial condition, and suppose the initial condition is then varied by a small amount; how can one obtain (easily) a corresponding small variation to the control to maintain optimality? The answer rests on what is known as the theory of the second variation, and boils down to solving a time-varying linear-quadratic optimal control problem with parameters derived from the original problem and its optimum trajectory. From this consideration of perturbations of the initial condition, attention moved to perturbations of other aspects of optimal control problems, including parameters in the loss function. Thus reference \cite{rehbock1992computational} considers a nonlinear optimal control problem in which there are one or more scalar parameters which potentially can vary. With a solution available for one set of parameter values, an algorithm is given whereby the gradient of the optimal index with respect to those parameters is computed. It is a variant on that provided in \cite{bryson2018applied} for initial condition perturbation, on a time-varying linear-quadratic foundation. In a further example, \cite{fisher1995neighbouring} indicates how NEOC can work in the presence of control constraints.
Building on the early focus on small variations in initial conditions or parameters, the paper \cite{jiang2015optimal} considers a nonlinear optimal control problem where a parameter undergoes a significant change from the value used to compute the optimal control. It is shown how a modified optimal control can be computed using multiple applications of the neighboring extremal method, each corresponding to a distinct point on a homotopic path, and also importantly demonstrates the possibility that a change of extremal due to an infinitesimal change in the parameter can be discontinuous, though the performance index value may be continuous across the change.
By and large, these papers all work with continuous time systems, but unsurprisingly the ideas carry over to discrete time \cite{ghaemi2009neighboring}. Very recently the authors of \cite{jin2020pontryagin} have developed a framework for tuning of an OC system based on differentiating Pontryagin's Maximum Principle corresponding to the OC system. Different from classical research in tuning parameters in controllers \cite{kazantzis2005optimal}, objective functions (known as learning from demonstrations \cite{jin2019inverse,jin2021inverse,jin2021distributed}), or system dynamics (known as system identification \cite{schon2011system, abraham2019active}), the work in \cite{jin2020pontryagin} allows tunable parameters to appear in controllers, objective functions, and system dynamics. In this paper we aim to further extend the result in \cite{jin2020pontryagin} from tuning of an OC system to cooperative tuning of multiple coupled multi-agent OC systems.
By working as a cohesive whole, a \emph{multi-agent system} can usually accomplish complicated missions well beyond capabilities of individual subsystems \cite{mou2015distributed,wang2019scalable}. But there is little work on the problem of \emph{cooperative tuning of multi-agent optimal control systems (CT-MAOCS)}. The scenario to be considered envisages individual agents in which a certain adjustable parameter appears in each, and such that optimal controls (based on an agent-specific performance index) for each agent can be computed using the individual parameter and performance index.
Figure \ref{fig/multi_agent_problem} illustrates the arrangement. CT-MAOCS can be applied to multi-agent consensus problems where the shared information is a tunable parameter in the OC system of each agent. Parameter tuning needs to achieve a consensus while the optimal trajectory of each agent needs to satisfy a specific task specification under this consensus. An example treated in a later section is the synchronous multi-agent rendezvous problem \cite{lin2007rendezvous}, in which agents should determine their own optimal trajectories such that the rendezvous takes place at a certain specified time. The challenge in solving the problem of CT-MAOCS is twofold: first, each individual loss function $L_i$ is expressed using an explicit function both of the parameter $\boldsymbol{\theta}_i$ and the trajectory of the associated OC system, which makes the whole optimization problem at least a bi-level optimization; second, the optimization goal involves not just the minimization of each agent's individual loss $L_i$ but that of the team-average loss, for which all tunable parameters need to be adjusted cooperatively. The main contribution of this paper is the development of a distributed framework to solve the problem of CT-MAOCS, which comes from a combination of a consensus-based distributed rule for multi-agent optimization in \cite{nedic2009distributed} and a gradient generator in \cite{jin2020pontryagin}.
\begin{figure}[th]
\centering
\includegraphics[width=0.4\textwidth]{problem_bw.png}
\caption{Cooperative Tuning of Multi-Agent Optimal Control Systems (CT-MAOCS)}
\label{fig/multi_agent_problem}
\end{figure}
\noindent \emph{Notations.} Let $|\mathcal{N}|$ denote the cardinality of a set $\mathcal{N}$. Let $(\cdot)^\prime$ denote the Hermitian transpose. Let $||\cdot||$ denote the Euclidean norm. For a square matrix $A \in \mathbb{R}^{n \times n}$, let $\text{Tr}(A)$ denote the trace of $A$.
Let $\text{col}\{ \boldsymbol{v}_1, \cdots, \boldsymbol{v}_a \}$ denote a column stack of elements $\boldsymbol{v}_1, \cdots, \boldsymbol{v}_a $, which may be scalars, vectors or matrices,
i.e. $\text{col}\{ \boldsymbol{v}_1, \cdots, \boldsymbol{v}_a \} \triangleq {\matt{{\boldsymbol{v}_1}^{\prime} & \cdots & {\boldsymbol{v}_a}^{\prime}}}^{\prime}$.
Let $\frac{\partial\boldsymbol{g}_t}{\partial \boldsymbol{x}_t} \in \mathbb{R}^{n \times m}$ denote the Jacobian matrix of a function $\boldsymbol{g}: \mathbb{R}^n \mapsto \mathbb{R}^m$ with respect to $\boldsymbol{x} \in \mathbb{R}^n$ evaluated at $\boldsymbol{x}_t$, i.e., $\frac{\partial\boldsymbol{g}_t}{\partial \boldsymbol{x}_t}=\frac{\partial \boldsymbol{g}(\boldsymbol{x})}{\partial \boldsymbol{x}}\rvert_{\boldsymbol{x}=\boldsymbol{x}_t}$.
\section{Problem Formulation} \label{section:problem_formulation}
Consider a multi-agent system consisting of $N$ agents labeled as $\mathcal{V}=\{1,\cdots,N\}$. At time $k$, each agent $i$ can receive information from its neighbor set, denoted by $\mathcal{N}_i(k)$. $\mathbb{G}_k = \{ \mathcal{V}, \mathcal{E}_k \}$ denotes the directed graph such that a directed edge from $j$ to $i$ is in $\mathcal{E}_k $ if and only if $j\in \mathcal{N}_i(k)$.
Suppose each agent $i$ is an optimal control system with a tunable parameter $\boldsymbol{\theta}_i \in \mathbb{R}^r$ denoted by $\boldsymbol{\mathcal{S}}_i(\boldsymbol{\theta}_i)$.
The open-loop system dynamics of agent $i$, i.e. $\boldsymbol{\mathcal{S}}_i(\boldsymbol{\theta}_i)$, are described by
$$\boldsymbol{x}_{i,t+1}=\boldsymbol{f}_i(\boldsymbol{x}_{i,t}, \boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i),$$ where $t=0,\cdots, T$ denotes the discrete time index, $\boldsymbol{x}_{i,t} \in \mathbb{R}^{n}$ denotes agent-$i$'s state at time $t$, $\boldsymbol{u}_{i,t} \in \mathbb{R}^{m}$ denotes agent-$i$'s optimal control input \footnote{ Optimal control inputs in this paper will be taken to be open-loop time functions rather than controls generated by a feedback law.} at time $t$, and $\boldsymbol{f}_i:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^r\mapsto\mathbb{R}^{n}$ denotes the nonlinear dynamics of agent-$i$, which is assumed to be twice-differentiable.
The open-loop control $\boldsymbol{u}_{i,t}$ is taken to be an optimal control, determined in the following way. Associated with agent $i$ is an objective function denoted by
$$J_i = \sum\nolimits_{t=0}^{T-1}c_{i,t}(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i)+\textcolor{black}{h_i(\boldsymbol{x}_{i,T},\boldsymbol{\theta}_i)},$$
where $c_{i,t}:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^r\mapsto\mathbb{R}$ and $h_i:\mathbb{R}^{n}\times\mathbb{R}^r\mapsto\mathbb{R}$ denoting the running and final cost, respectively.
Then under a given initial condition $\boldsymbol{x}_{i,0}$, the optimal control for agent $i$ can be determined by
\begin{mini}|s|
{\substack{\boldsymbol{x}_{i,1:T}, \\ \boldsymbol{u}_{i,0:T-1}}}{ J_i = \sum\nolimits_{t=0}^{T-1}c_{i,t}(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i)+\textcolor{black}{h_i(\boldsymbol{x}_{i,T},\boldsymbol{\theta}_i)} \label{oc}}
{}{}
\addConstraint{ \boldsymbol{x}_{i,t+1} =\boldsymbol{f}_i(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i)}
\addConstraint{\forall t=0,\cdots,T-1 \ \text{with given } \boldsymbol{x}_{i,0},}
\end{mini}
Here $\boldsymbol{x}_{i,0:T} \triangleq \text{col}\{\boldsymbol{x}_{i,0},\cdots,\boldsymbol{x}_{i,T}\} \in \mathbb{R}^{n(T+1)}$ denotes all the states from time $t=0$ to $T$; similarly $\boldsymbol{u}_{i,0:T-1} \triangleq \text{col}\{\boldsymbol{u}_{i,0},\cdots,\boldsymbol{u}_{i,T-1}\} \in \mathbb{R}^{mT}$; the optimal control will be denoted by $\boldsymbol{u}^*_{i,0:T-1}$. Given a particular value of $\boldsymbol{\theta}_i$, the inputs $\boldsymbol{u}_{i,0:T-1}$ in (\ref{oc}) are designed to minimize the objective function $J_i$. For notational simplicity, let
$$ \boldsymbol{\xi}_i({{{\boldsymbol{\theta}}_i}}) \triangleq \text{col}\{\boldsymbol{x}_{i,0:T}, \boldsymbol{u}_{i,0:T-1} \} \in \mathbb{R}^{a}$$
denote the \emph{trajectory} of $\boldsymbol{\mathcal{S}}_i({\boldsymbol{\theta}}_i)$ given $\boldsymbol{\theta}_i$, where $a = (T+1)n+Tm$.
We assume, as is common, that any necessary smoothness and similar conditions for a well-defined unique solution to exist are fulfilled. Since this optimal trajectory depends on the parameter $\boldsymbol{\theta}_i$, $\boldsymbol{\xi}_i$ can also be viewed as a function of $\boldsymbol{\theta}_i$, i.e. $\boldsymbol{\xi}_i: \mathbb{R}^r \mapsto \mathbb{R}^a$.
Let $L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)$ denote a scalar function, which is an additional `partial' performance index or loss function (independent of $J_i$) used as a contribution to a group or team-average performance index or loss function, and depending on agent $i$'s trajectory $\boldsymbol{\xi}_i({\boldsymbol{\theta}_i})$ as well as the associated performance index; thus ultimately, $L_i$ is just a function of $\boldsymbol{\theta}_i$ because $\boldsymbol{\xi}_i({\boldsymbol{\theta}_i})$ is in effect determined by the minimization of $J_i$. Correspondingly, the global average $\frac{1}{N} \sum_{i=1}^{N}L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)$ evaluates the performance of the whole multi-agent system. Note that the objective function $J_i$ reflects a task specification that is only related to agent-$i$, whereas the loss function $L_i$ indicates a new task specification which might also be related to other agents.
\textcolor{black}{
The \textbf{problem of interest} is to develop an iterative rule for each agent $i$ to update $\boldsymbol{\theta}_i$ such that all the $\boldsymbol{\theta}_i$ reach a consensus at a common parameter $\boldsymbol{\theta}^*$, which minimizes the global average loss, i.e.
\begin{argmini}|s|
{{\{\boldsymbol{\theta}_i\}}_{i=1}^N}{\frac{1}{N} \sum_{i=1}^{N} L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)}
{\label{problem_interest}}{{\{\boldsymbol{\theta}^*\}}_{i=1}^N =}
\addConstraint{\boldsymbol{\xi}_i \text{ obtained by (\ref{oc}) under } \boldsymbol{\theta}_i.}
\end{argmini}}
Note that the notation $\boldsymbol{\xi}_i$ is shorthand for the whole optimal trajectory (input and state) of agent-$i$ from $t=0$ to $t=T$.
\section{Main Results} \label{section:method}
The challenge in solving the cooperative tuning problem of multi-agent optimal control systems in (\ref{problem_interest}) is twofold: first, each $L_i$ here is a function of the trajectory of a dynamical system $\boldsymbol{\mathcal{S}}_i(\boldsymbol{\theta}_i)$, which makes the whole optimization problem at least a bi-level optimization; second, the optimization goal is not just the minimization of each system's own loss but the team-average loss, for which all tunable parameters need to be adjusted cooperatively. Motivated by these two challenges, in this section we will develop a method to solve (\ref{problem_interest}) by a combination of consensus-based distributed optimization in \cite{nedic2009distributed} and a gradient generator in \cite{jin2020pontryagin}.
\subsection{Consensus-based Distributed Optimization} \label{subsec:optimization_update}
We first suppose the
gradient $\frac{d L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i}$
is available for each agent $i$ (we shall explain in the next subsection how it can be obtained). Then the problem in (\ref{problem_interest}) becomes a standard consensus-based multi-agent optimization as follows:
\begin{mini}|s|
{ \boldsymbol{\theta}_1,..., \boldsymbol{\theta}_N }{ \frac{1}{N} \sum_{i=1}^{N} L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i) \label{problem_consensus}}
{}{}
\addConstraint{ \boldsymbol{\theta}_1=\cdots = \boldsymbol{\theta}_N .}
\end{mini}
Let $k=0,1, \cdots$ denote the iteration index for adjustment of tunable parameters. Let $\boldsymbol{\theta}_i(k) \in \mathbb{R}^r$ denote agent $i$'s tunable parameter at iteration $k$.
At iteration $k$, the optimal control sequence $\boldsymbol{u}_{i,0:T-1}^*(k)$ for agent-$i$ is computed once based on \eqref{oc} with current parameter $\boldsymbol{\theta}_i(k)$. Then an updated value of $\boldsymbol{\theta}_i(k+1)$ will be computed using $\boldsymbol{\theta}_i(k)$ and the optimal control sequence $\boldsymbol{u}_{i,0:T-1}^*(k)$ together with other information from agent-$i$'s neighbors.
We employ the following \emph{consensus-based gradient-descent update} proposed in \cite{nedic2009distributed, nedic2001incremental}:
\begin{equation} \label{update_rule}
\boldsymbol{\theta}_i(k+1) = \sum_{j \in \mathcal{N}_i(k)} w_{ij}(k) \boldsymbol{\theta}_j(k) - \eta(k) \frac{dL_i}{d\boldsymbol{\theta}_i(k) }.
\end{equation}
Here, $$\frac{dL_i}{d\boldsymbol{\theta}_i(k)}=\frac{dL_i}{d \boldsymbol{\theta}_i }|_{ \boldsymbol{\theta}_i=\boldsymbol{\theta}_i(k)} $$ is the gradient of agent-$i$'s local loss $L_i$ with respect to $\boldsymbol{\theta}_i$ evaluated at $\boldsymbol{\theta}_i = \boldsymbol{\theta}_i(k)$, with the optimal control sequence $\boldsymbol{u}_{i,0:T-1}^*(k)$ used in the evaluation; further, $\eta(k) > 0$ is a diminishing step size such that
\begin{equation} \label{eq:step_size}
\lim_{k \to \infty} \eta(k) = 0,\ \sum_{k=0}^{\infty} \eta(k) = \infty, \ \sum_{k=0}^{\infty} \eta(k)^2 < \infty,
\end{equation}
and $w_{ij}(k)$ are non-negative weights.
Let $W(k) \in \mathbb{R}^{N \times N}$ be the matrix whose $ij$-th entry is $w_{ij}(k)$ if $j\in \mathcal{N}_i(k)$ and 0 otherwise. As in \cite{nedic2009distributed, nedic2001incremental}, we make the following assumption.
\begin{assum} \label{Assum_consensus}
$W(k)$ is doubly stochastic for all $k=0,1,\cdots$. There exist positive integers $\tau$ and $l$ such that the union of the graphs $\mathbb{G}_{kl+1+\tau}, \mathbb{G}_{kl+2+\tau}, \cdots, \mathbb{G}_{(k+1)l+\tau}$ is strongly connected.
\end{assum}
By the analysis and results in \cite{nedic2009distributed, nedic2001incremental}, one directly has the following lemma.
\begin{lem} \label{lemma:multi_agent_optimization} \cite{nedic2001incremental}
\textcolor{black}{
Assume that the gradient $\frac{d L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i}$ is known for each agent $i$, and each $L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)$ is convex in $\boldsymbol{\theta}_i$.
Then the distributed update (\ref{update_rule}) under Assumption \ref{Assum_consensus} with a step size satisfying \eqref{eq:step_size} drives $\boldsymbol{\theta}_i(k) \to \boldsymbol{\theta}^*$ as $k \to \infty$ for all $i \in \mathcal{V}$, where $\boldsymbol{\theta}^*$ minimizes the global average, i.e.
\begin{argmini}|s|
{\boldsymbol{\theta}_1, \cdots, \boldsymbol{\theta}_N} {\frac{1}{N} \sum_{i=1}^{N} L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i).}
{\label{consensus}}{ {\{\boldsymbol{\theta}^*\}}_{i=1}^N =}
\end{argmini}}
\end{lem}
Note that \cite{nedic2001incremental} proves Lemma \ref{lemma:multi_agent_optimization} when the step size satisfies \eqref{eq:step_size} and \cite{nedic2009distributed} proves Lemma \ref{lemma:multi_agent_optimization} when the step size is a positive constant.
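To make the update (\ref{update_rule}) concrete, the following sketch runs it on a toy problem in which simple quadratic losses stand in for the optimal-control losses $L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)$; the targets, the uniform averaging matrix, and the schedule $\eta(k)=1/(k+1)$ are illustrative assumptions, not part of the setup above.

```python
import numpy as np

def consensus_gradient_step(theta, W, grads, eta):
    # theta_i(k+1) = sum_j w_ij(k) theta_j(k) - eta(k) * dL_i/dtheta_i(k)
    return W @ theta - eta * grads

# Toy stand-in for the local losses: L_i(theta) = ||theta - c_i||^2,
# whose team average is minimized at the mean of the hypothetical targets c_i.
np.random.seed(0)
N, r = 4, 2
c = np.random.randn(N, r)            # hypothetical local targets
theta = np.random.randn(N, r)        # theta_i(0), one row per agent
W = np.full((N, N), 1.0 / N)         # a doubly stochastic weight matrix
for k in range(2000):
    eta = 1.0 / (k + 1)              # diminishing step size: eta -> 0,
    grads = 2.0 * (theta - c)        #   sum eta = inf, sum eta^2 < inf
    theta = consensus_gradient_step(theta, W, grads, eta)
```

In this toy case the rows of `theta` reach consensus at the mean of the targets, i.e., at the minimizer of the average loss.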
{\color{black}
Lemma \ref{lemma:multi_agent_optimization} assumes that the gradient $\frac{d L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i}$ is available to each agent $i$ at every iteration $k=0,1,\cdots$. From the chain rule, one has
\begin{equation} \label{derivative_chain_rule}
\frac{d L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i} = \frac{\partial L_i (\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{\partial \boldsymbol{\xi}_i} \frac{\partial \boldsymbol{\xi}_i ({\boldsymbol{\theta}}_i)}{\partial \boldsymbol{\theta}_i} + \frac{\partial L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i},
\end{equation}
where the partial derivatives $\frac{\partial L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{\partial \boldsymbol{\xi}_i}$ and $\frac{\partial L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}$ are known. The main challenge here comes from the fact that agent $i$ does not have an analytical relation between its system trajectory $\boldsymbol{\xi}_i$ and the parameter $\boldsymbol{\theta}_i$, and thus does not know $\frac{\partial \boldsymbol{\xi}_i ({\boldsymbol{\theta}}_i)}{\partial \boldsymbol{\theta}_i}$, i.e., the partial derivative of a trajectory $\boldsymbol{\xi}_i$ with respect to the parameter $\boldsymbol{\theta}_i$. In the following, we will borrow a result from \cite{jin2020pontryagin} to develop a \emph{gradient generator} which computes the exact value for $\frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}$.
}
\subsection{Gradient Generator} \label{subsec:grad_generator}
This subsection introduces the gradient generator for computing $\frac{\partial \boldsymbol{\xi}_i( \boldsymbol{\theta}_i) }{\partial \boldsymbol{\theta}_i}$ at each iteration $k$. For simplicity of notation, we use $\boldsymbol{\theta}_i$ to denote $\boldsymbol{\theta}_i(k)$ in this section.
Given the optimal control (\ref{oc}), one has the following \emph{Hamiltonian} associated with $\boldsymbol{\mathcal{S}_i}(\boldsymbol{\theta}_i)$ for all $t=0, \cdots, T-1$,
\begin{equation}\label{Hamil}
H_{i,t}=c_{i,t}(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t},\boldsymbol{\theta}_i)+\boldsymbol f_i(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t},\boldsymbol{\theta}_i)^\prime\boldsymbol{\lambda}_{i,t+1}.
\end{equation}
Here, $\boldsymbol{\lambda}_{i,t} \in \mathbb{R}^{n}, \ t=1,\cdots,T$, denotes the Lagrangian multiplier associated with the equality constraint representing the dynamics $\boldsymbol{x}_{i,t+1}=\boldsymbol{f}_i(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i)$.
By the definition of $\boldsymbol{\xi}_i$, we have
\begin{equation*}
\frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i} = \matt{ \frac{\partial \boldsymbol{x}_{i,0:T}}{\partial \boldsymbol{\theta}_i} \\ \frac{\partial \boldsymbol{u}_{i,0:T-1}}{\partial \boldsymbol{\theta}_i} }.
\end{equation*}
Let
\begin{equation*}
X_{i,t} \triangleq \frac{\partial \boldsymbol{x}_{i,t}}{\partial \boldsymbol{\theta}_i} \in \mathbb{R}^{n \times r}, \ U_{i, t} \triangleq \frac{\partial \boldsymbol{u}_{i,t}}{\partial \boldsymbol{\theta}_i} \in \mathbb{R}^{m \times r}.
\end{equation*}
Note that $X_{i,0} = \frac{\partial \boldsymbol{x}_{i,0}}{\partial \boldsymbol{\theta}_i} = \boldsymbol{0}$ because $\boldsymbol{x}_{i,0}$ in (\ref{oc}) is given.
The tool for computing the gradient $\frac{\partial \boldsymbol{\xi}_i( \boldsymbol{\theta}_i) }{\partial \boldsymbol{\theta}_i}$ involves a linear quadratic control system $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ given as follows:
\begin{mini}|s|
{\substack{X_{i,1:T},\\U_{i,0:T-1}}} {\bar{J}_i = \text{Tr}\sum_{t=0}^{T-1}\Bigg(\frac{1}{2}\small
\begin{bmatrix}
{X}_{i,t} \\
{U}_{i,t}
\end{bmatrix}^\prime
\bar{Q}_{i,t}
\begin{bmatrix}
{X}_{i,t} \\
{U}_{i,t}
\end{bmatrix}+
\bar{R}_{i,t}^\prime
\begin{bmatrix}
{X}_{i,t} \\
{U}_{i,t}
\end{bmatrix}\Bigg) \label{auxiliary_lqr_system}}{}{}
\breakObjective{\ \ \ \ \ \ + \text{Tr}\left(\frac{1}{2}{X}_{i,T}^\prime \, H_{i,T}^{xx} \,{X}_{i,T}+ (H_{i,T}^{x\theta})^\prime\,{X}_{i,T}\right)}
\addConstraint{X_{i,t+1} = F_{i,t} X_{i,t} + G_{i,t} U_{i,t} + E_{i,t} \ \text{with} \ X_{i,0}=\boldsymbol{0}}
\addConstraint{\bar{Q}_{i,t} = \begin{bmatrix}
H_{i,t}^{xx} & H_{i,t}^{xu} \\
H_{i,t}^{ux}& H_{i,t}^{uu}
\end{bmatrix},
\ \bar{R}_{i,t} = \begin{bmatrix}
H_{i,t}^{x\theta} \\
H_{i,t}^{u\theta}
\end{bmatrix}.}
\end{mini}
The coefficients in (\ref{auxiliary_lqr_system}) are defined as follows:
\begin{flalign}
&F_{i,t}=\dfrac{\partial \boldsymbol{f}_i}{\partial \boldsymbol{x}_{i,t}}, \ G_{i,t}=\dfrac{\partial \boldsymbol{f}_i}{\partial \boldsymbol{u}_{i,t}}, \ E_{i,t}=\dfrac{\partial \boldsymbol{f}_i}{\partial \boldsymbol{\theta}_i} \label{matFGEHx} & \\
&H_{i,t}^{xx}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{x}_{i,t}\partial \boldsymbol{x}_{i,t}}, \ H_{i,t}^{ux}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{u}_{i,t} \partial \boldsymbol{x}_{i,t}}={(H_{i,t}^{xu})}^\prime, \label{matHux_and_Hxe} & \\
&H_{i,t}^{uu}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{u}_{i,t}\partial \boldsymbol{u}_{i,t}}, \ H_{i,t}^{x\theta}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{x}_{i,t}\partial \boldsymbol{\theta}_i}, \ H_{i,t}^{u\theta}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{u}_{i,t}\partial \boldsymbol{\theta}_i}, \label{matHuu_and_Hue} & \\
&H_{i,T}^{xx}=\dfrac{\partial^2 h_i}{\partial \boldsymbol{x}_{i,T}\partial \boldsymbol{x}_{i,T}}, \ H_{i,T}^{x\theta}=\dfrac{\partial^2 h_i}{\partial \boldsymbol{x}_{i,T}\partial \boldsymbol{\theta}_i} \label{matHT}
\end{flalign}
which are known based on the trajectory $\boldsymbol{\xi}_i(\boldsymbol{\theta}_i)$ and the trajectory of Lagrangian multipliers $\boldsymbol{\lambda}_{i,1:T}$. By the discrete-time version of Pontryagin's Maximum Principle \cite{jin2020pontryagin}, the Lagrangian multipliers $\boldsymbol{\lambda}_{i,1:T}$ can be obtained by iterating (\ref{lagrangian_1}) and (\ref{lagrangian_2}) given $\boldsymbol{\xi}_i(\boldsymbol{\theta}_i)$:
\begin{flalign}
&\boldsymbol{{\lambda}}_{i,T} = \frac{\partial h_i}{\partial\boldsymbol{{x}}_{i,T}}, \label{lagrangian_1} & \\
&\boldsymbol{\lambda}_{i,t} \triangleq \dfrac{\partial H_{i,t}}{\partial \boldsymbol{{x}}_{i,t}} = \dfrac{\partial c_{i,t}}{\partial \boldsymbol{{x}}_{i,t}}+\dfrac{\partial \boldsymbol{f}_i^\prime}{\partial \boldsymbol{{x}}_{i,t}}\boldsymbol{{\lambda}}_{i,t+1}, \ t=T-1,\cdots, 1. \label{lagrangian_2}
\end{flalign}
In practice, many nonlinear optimization solvers, such as IPOPT \cite{wachter2006implementation}, can return the value of Lagrangian multipliers after a constrained nonlinear program is solved.
Note that $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ is of the linear quadratic regulator (LQR) form \cite{anderson1990optimal}, and its system dynamics and control objective are determined purely by the trajectory $\boldsymbol{\xi}_i(\boldsymbol{\theta}_i)$ from $\boldsymbol{\mathcal{S}}_i(\boldsymbol{\theta}_i)$. We call $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ the \emph{gradient generator} because of the following lemma:
\begin{lem} \label{lemma:stationary} \cite[Lemma~5.1]{jin2020pontryagin} Let $\{ X_{i,0:T}^{*}, U_{i,0:T-1}^{*} \}$ be a stationary solution to (\ref{auxiliary_lqr_system}). Then
\smallskip
\begin{equation} \label{stationary_eq}
\matt{ X_{i,0:T}^* \\ U_{i,0:T-1}^* } = \matt{ \frac{\partial \boldsymbol{x}_{i,0:T}}{\partial \boldsymbol{\theta}_i} \\ \frac{\partial \boldsymbol{u}_{i,0:T-1}}{\partial \boldsymbol{\theta}_i} } = \frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}.
\end{equation}
\end{lem}
By a stationary solution we mean that $\{ X_{i,0:T}^{*}, U_{i,0:T-1}^{*} \}$ may be a saddle point or a minimum of (\ref{auxiliary_lqr_system}).
As long as $\{ X_{i,0:T}^{*}, U_{i,0:T-1}^{*} \}$ is a stationary solution to (\ref{auxiliary_lqr_system}), i.e., the gradient of the objective in (\ref{auxiliary_lqr_system}) vanishes, it equals $\frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}$ exactly. Since $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ is a linear quadratic control system, we can compute $\{X_{i,0:T}^*, U_{i,0:T-1}^*\}$ by the following lemma:
\begin{lem} \label{lemma:compute} \cite[Lemma~5.2]{jin2020pontryagin} $\{X_{i,0:T}^*, U_{i,0:T-1}^*\}$ can be obtained by the following recursions for $t=T-1, \cdots, 0$:
\begin{equation} \label{recursion_eq}
\begin{aligned}
P_{i,t} &= Q_{i,t} + A_{i,t}^{\prime}{(I+P_{i,t+1}R_{i,t})}^{-1}P_{i,t+1}A_{i,t}, \\
W_{i,t} &= A_{i,t}^{\prime}{(I+P_{i,t+1}R_{i,t})}^{-1}(W_{i,t+1}+P_{i,t+1}M_{i,t}) + N_{i,t},
\end{aligned}
\end{equation}
where $P_{i,T}=H_{i,T}^{xx}$, $W_{i,T}=H_{i,T}^{x\theta}$; $I$ is the identity matrix; $A_{i,t} \triangleq F_{i,t}-G_{i,t}{(H_{i,t}^{uu})}^{-1}H_{i,t}^{ux}$, $R_{i,t} \triangleq G_{i,t}{(H_{i,t}^{uu})}^{-1}G_{i,t}^{\prime}$, $M_{i,t} \triangleq E_{i,t}-G_{i,t}{(H_{i,t}^{uu})}^{-1}H_{i,t}^{u\theta}$, $Q_{i,t} \triangleq H_{i,t}^{xx}-H_{i,t}^{xu}{(H_{i,t}^{uu})}^{-1}H_{i,t}^{ux}$, $N_{i,t} \triangleq H_{i,t}^{x\theta}-H_{i,t}^{xu}{(H_{i,t}^{uu})}^{-1}H_{i,t}^{u\theta}$. Further, $\{X_{i,0:T}^*, U_{i,0:T-1}^*\}$ can be computed by iterating the following equations from $t=0$ to $T-1$ with $X_{i,0} = \boldsymbol{0}$:
\begin{equation} \label{recursion_solution:1}
\begin{aligned}
U_{i,t} = &- {(H_{i,t}^{uu})}^{-1}\Big( H_{i,t}^{ux}X_{i,t}+H_{i,t}^{u\theta} + G_{i,t}^{\prime}(I+P_{i,t+1} \cdot \\
& {R_{i,t})}^{-1} \cdot (P_{i,t+1}A_{i,t}X_{i,t}+P_{i,t+1}M_{i,t}+W_{i,t+1}) \Big),
\end{aligned}
\end{equation}
\begin{equation} \label{recursion_solution:2}
X_{i,t+1}=F_{i,t}X_{i,t}+G_{i,t}U_{i,t}+E_{i,t}.
\end{equation}
\end{lem}
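To illustrate Lemma \ref{lemma:compute}, the sketch below implements the backward recursion (\ref{recursion_eq}) and the forward pass (\ref{recursion_solution:1})--(\ref{recursion_solution:2}) for time-invariant coefficient matrices, and checks the output on a scalar linear-quadratic toy problem; the dynamics $x_{t+1}=ax_t+bu_t+\theta$, the costs, and the horizon are illustrative assumptions chosen so that the exact derivative is available by other means.

```python
import numpy as np

def gradient_generator(F, G, E, Hxx, Hux, Huu, Hxt, Hut, HxxT, HxtT, T):
    """Recursions of the lemma with time-invariant coefficient matrices.
    Returns X_t = dx_t/dtheta (t=0..T) and U_t = du_t/dtheta (t=0..T-1)."""
    n, r = E.shape
    Huu_inv = np.linalg.inv(Huu)
    A = F - G @ Huu_inv @ Hux
    R = G @ Huu_inv @ G.T
    M = E - G @ Huu_inv @ Hut
    Q = Hxx - Hux.T @ Huu_inv @ Hux
    Nm = Hxt - Hux.T @ Huu_inv @ Hut
    I = np.eye(n)
    P, Wm = [None] * (T + 1), [None] * (T + 1)
    P[T], Wm[T] = HxxT, HxtT
    for t in range(T - 1, -1, -1):        # backward sweep for P_t, W_t
        inv = np.linalg.inv(I + P[t + 1] @ R)
        P[t] = Q + A.T @ inv @ P[t + 1] @ A
        Wm[t] = A.T @ inv @ (Wm[t + 1] + P[t + 1] @ M) + Nm
    X, U = [np.zeros((n, r))], []
    for t in range(T):                    # forward rollout for U_t, X_{t+1}
        inv = np.linalg.inv(I + P[t + 1] @ R)
        Ut = -Huu_inv @ (Hux @ X[t] + Hut
                         + G.T @ inv @ (P[t + 1] @ (A @ X[t] + M) + Wm[t + 1]))
        U.append(Ut)
        X.append(F @ X[t] + G @ Ut + E)
    return X, U

# Scalar check: x_{t+1} = a x_t + b u_t + theta, stage cost x_t^2 + u_t^2,
# terminal cost x_T^2.  Everything is linear-quadratic in (x, u, theta), so
# dx_t/dtheta equals the exact difference of the optimal trajectories at
# theta = 1 and theta = 0 (no finite-difference error).
a, b, x0, T = 0.9, 0.5, 1.0, 4

def solve_oc(theta):
    # Stack the cost as ||B u + d||^2 over u = (u_0, ..., u_{T-1}).
    B, d = np.zeros((2 * T, T)), np.zeros(2 * T)
    for t in range(1, T + 1):             # rows for x_1, ..., x_T
        d[t - 1] = a ** t * x0 + sum(a ** (t - 1 - s) * theta for s in range(t))
        for s in range(t):
            B[t - 1, s] = a ** (t - 1 - s) * b
    B[T:, :] = np.eye(T)                  # rows for u_0, ..., u_{T-1}
    u = np.linalg.lstsq(B, -d, rcond=None)[0]
    x = [x0]
    for t in range(T):
        x.append(a * x[t] + b * u[t] + theta)
    return np.array(x), u

one = np.ones((1, 1))
X, U = gradient_generator(F=a * one, G=b * one, E=one, Hxx=2 * one,
                          Hux=0 * one, Huu=2 * one, Hxt=0 * one,
                          Hut=0 * one, HxxT=2 * one, HxtT=0 * one, T=T)
x_one, u_one = solve_oc(1.0)
x_zero, u_zero = solve_oc(0.0)
```

Because the toy problem is exactly linear-quadratic, the second-order expansion underlying the gradient generator is exact, and the recursion output matches the difference of the two optimal trajectories.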
\begin{rem}
$H_{i,t}^{uu}$ in (\ref{recursion_solution:1}) for all $t = 0, \cdots, T-1$ is invertible if the second-order optimality sufficient condition of (\ref{oc}) is satisfied (as proved in Lemma 1 and Theorem 1 in \cite{jin2021safe}).
See \cite[Lemma~A.2]{jin2021safe} for further details about the second-order sufficient condition.
This is because when the condition holds, the Hessian matrix of the Hamiltonian in (\ref{Hamil}), $\matt{ H_{i,t}^{xx} & H_{i,t}^{xu} \\ H_{i,t}^{ux} & H_{i,t}^{uu} }$, is positive definite for all $t = 0, \cdots, T-1$, which implies that $H_{i,t}^{uu}$ is positive definite and hence invertible in (\ref{recursion_solution:1}). In this case, the stationary solution is globally unique. If the second-order optimality sufficient condition does not hold, the recursions in Lemma \ref{lemma:compute} cannot be used to compute a stationary solution to (\ref{auxiliary_lqr_system}); nevertheless, one can compute a stationary solution with a gradient-descent method \cite{boyd2004convex}.
\end{rem}
\subsection{The Framework for Cooperative Tuning of Multi-Agent Optimal Control}
To sum up, we employ the following framework for cooperative tuning of multi-agent optimal control, i.e., to solve the problem in (\ref{problem_interest}). The framework combines the consensus-based gradient-descent algorithm in (\ref{update_rule}) with the gradient generator in (\ref{auxiliary_lqr_system}), as shown in Fig. \ref{fig/framework}.
\begin{figure}[h]
\centering
\includegraphics[width=0.43\textwidth]{algorithm_framework_consensus_bw.png}
\caption{The framework for cooperative tuning}
\label{fig/framework}
\end{figure}
By Lemma \ref{lemma:multi_agent_optimization}, Lemma \ref{lemma:stationary} and Lemma \ref{lemma:compute}, one has the following main result.
\begin{thm} \label{theorem:1} Suppose that Assumption \ref{Assum_consensus} holds and the distributed update (\ref{update_rule}) is applied to (\ref{problem_interest}), where $\frac{d L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i}$ is computed by the chain rule in (\ref{derivative_chain_rule}) and $\frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}$ is obtained from the gradient generator (\ref{auxiliary_lqr_system}). Then $\boldsymbol{\theta}_i(k) \to \boldsymbol{\theta}^*$ as $k \to \infty$ for all $i \in \mathcal{V}$, where $\boldsymbol{\theta}^*$ solves the problem in (\ref{problem_interest}).
\end{thm}
\subsection{Constraints in Optimal Control}
In the optimal control problem \eqref{oc}, one can add inequality constraints that represent safety requirements. With the interior-point method \cite{fiacco1990nonlinear}, one defines a logarithmic barrier function for each inequality constraint together with a barrier parameter. The constrained optimization problem can then be rewritten as an unconstrained one whose objective is the original objective minus the barrier parameter times the sum of the barrier functions. Hence, one can formulate a similar gradient generator for this new optimal control problem, whose associated Hamiltonian also includes the inequality constraints. See \cite{jin2021safe} for details.
\section{Simulation} \label{section:example}
This section applies the proposed cooperative tuning framework to a synchronous multi-agent rendezvous problem \cite{lin2007rendezvous}. Suppose there are $N$ mobile robots (or agents), each determining an optimal trajectory via its own optimal control problem. The rendezvous should take place at a certain specified time (i.e., the end of the trajectory), while the desired rendezvous location for each agent is unspecified; it is initialized randomly and viewed as a tunable parameter in each agent's OC system.
Given a particular value of the tunable parameter for each agent, the determination of the optimal trajectory is made independently of the other agents.
At the first iteration ($k=0$), each agent determines an optimal trajectory under an initial parameter $\boldsymbol{\theta}_i(0)$. At each subsequent iteration, the agents share and update their parameters, cooperatively minimizing the global loss function by individually minimizing their own local loss functions. All agents should eventually reach a consensus on the parameter and hence rendezvous at a single, initially unspecified location.
Agent-$i$'s dynamics are modeled by the following unicycle model \cite[Chapter~13]{lavalle2006planning}:
\begin{equation}\label{cov_model}
\dot{\boldsymbol{x}}_i = \begin{bmatrix}
\dot{p}_{x,i} \\
\dot{p}_{y,i} \\
\dot{\psi}_i
\end{bmatrix} = \boldsymbol{f}_c(\boldsymbol{x}_i, \boldsymbol{u}_i) = \begin{bmatrix}
u_{v,i} \cos(\psi_i) \\
u_{v,i} \sin(\psi_i) \\
u_{\omega, i}
\end{bmatrix},
\end{equation}
where $\boldsymbol{x}_i \in \mathbb{R}^3$ is agent-$i$'s state, $\boldsymbol{u}_i = \text{col}\{ u_{v,i}, \ u_{\omega,i} \} \in \mathbb{R}^2$ is agent-$i$'s control input, $p_{x,i} \in \mathbb{R}$ and $p_{y,i} \in \mathbb{R}$ are position coordinates, $\psi_i \in \mathbb{R}$ is the heading angle, $u_{v,i}$ is the velocity input, and $u_{\omega,i}$ is the angular velocity input. Define
\begin{equation}
p: \boldsymbol{x}_i \in \mathbb{R}^3 \mapsto \boldsymbol{p}_i \in \mathbb{R}^2
\end{equation}
as the static mapping from agent-$i$'s state to its position $\boldsymbol{p}_i = \text{col}\{p_{x,i}, \ p_{y,i}\} \in \mathbb{R}^2$.
The optimal control for agent-$i$ is written as
\begin{mini}|s|
{\substack{\boldsymbol{x}_{i,1:T}, \\ \boldsymbol{u}_{i,0:T-1}}}{J_i(\boldsymbol{x}_{i,0:T}, \boldsymbol{u}_{i,0:T-1}, \boldsymbol{\theta}_i) \label{multi_agent_rendezous}}
{}{}
\addConstraint{ \boldsymbol{x}_{i,t+1} = \boldsymbol{x}_{i,t} + {\Delta} \cdot \boldsymbol{f}_c(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}) }
\addConstraint{\forall t=0,\cdots,T-1 \ \text{with given } \boldsymbol{x}_{i,0},}
\end{mini}
where $\boldsymbol{\theta}_i \in \mathbb{R}^2$ is the tunable parameter for agent-$i$, $\Delta > 0$ is a constant arising in the discrete-time Euler approximation of the differential equation (\ref{cov_model}), and the objective function $J_i(\boldsymbol{x}_{i,0:T}, \boldsymbol{u}_{i,0:T-1}, \boldsymbol{\theta}_i)$ is defined by
\begin{equation} \label{simulation_cost_function}
J_i= \sum_{t=0}^{T-1} \Big[ 2||\boldsymbol{p}(\boldsymbol{x}_{i,t}) - \boldsymbol{\theta}_i ||^2 + ||\boldsymbol{u}_{i,t}||^2 \Big]
+ 5||\boldsymbol{p}(\boldsymbol{x}_{i,T}) - \boldsymbol{\theta}_i||^2.
\end{equation}
The local loss function for agent-$i$ is defined by
\begin{equation} \label{example_local_loss}
L_i(\boldsymbol{\xi}_{i}, \boldsymbol{\theta}_i) = 100||\boldsymbol{p}(\boldsymbol{x}_{i,T}) - \boldsymbol{\theta}_i||^2
\end{equation}
where $\boldsymbol{\xi}_i \triangleq \text{col}\{\boldsymbol{x}_{i,0:T}, \boldsymbol{u}_{i,0:T-1} \} \in \mathbb{R}^{5T+3}$, and $\boldsymbol{x}_{i,T}$ is the $(T+1)$-th state block of $\boldsymbol{\xi}_i$. The global loss function is $\frac{1}{N} \sum_{i=1}^N L_i$. Note that the weighting coefficients in (\ref{simulation_cost_function}) and (\ref{example_local_loss}) are essentially arbitrary.
Section \ref{section:problem_formulation} mentions the difference between an objective function $J_i$ and a local loss function $L_i$ in general.
In this specific example, $J_i$ (through its inclusion of the term $||\boldsymbol{u}_{i,t}||^2$) requires that each agent's trajectory stay as close as possible to the desired rendezvous location over the whole horizon while maintaining small energy consumption, whereas $L_i$ requires only that the end of the trajectory be as close as possible to the desired rendezvous location, since energy use and proximity to the rendezvous point before the end time are irrelevant to the global objective.
\begin{figure*}[h]
\centering
\begin{subfigure}{.329\textwidth}
\centering
\includegraphics[width=\linewidth]{Figure_1.png}
\caption{The trajectory before iteration}
\label{fig:traj_before}
\end{subfigure}
\hfill
\begin{subfigure}{.329\textwidth}
\centering
\includegraphics[width=\linewidth]{Figure_4.png}
\caption{The trajectory after 30 iterations}
\label{fig:traj_after}
\end{subfigure}
\hfill
\begin{subfigure}{.329\textwidth}
\centering
\includegraphics[width=\linewidth]{Figure_3.png}
\caption{The loss and parameter error}
\label{fig:loss_traj_}
\end{subfigure}
\caption{ Simulation results for the multi-agent rendezvous problem under a periodic graph with 5 agents. The blue dots are the initial positions. The red stars are the desired terminal positions $\boldsymbol{\theta}_i$. The blue lines are the optimal trajectories generated by the optimal controls given $\boldsymbol{\theta}_i$. The top plot in (c) shows the relative loss over iterations, i.e., the current loss divided by the initial loss. The bottom plot in (c) shows the total disagreement of the parameters $\boldsymbol{\theta}_i$ among all agents over iterations, i.e., $\sum_{i=1}^N \sum_{j=1}^N ||\boldsymbol{\theta}_i - \boldsymbol{\theta}_j||^2$. }
\label{fig:loss_traj}
\end{figure*}
\subsection{Simulation Result} \label{subsec:simulation_result}
\begin{figure}[h]
\centering
\includegraphics[width=0.43\textwidth]{simulation_graph.png}
\caption{Periodic time-varying graph $\mathbb{G}_k$, $k = 0, 1, 2, \cdots$}
\label{fig:graph}
\end{figure}
The other parameters used in the following simulation are: $N=5$, $T=60$, $\Delta=0.1$s, $\eta(k) = 0.1 \ \forall k \geq 0$. A periodic time-varying graph $\mathbb{G}_k$ is defined in Fig. \ref{fig:graph}. The weight matrix $W(k)$ is defined by the Metropolis weights \cite{xiao2005scheme}. The initial states $\boldsymbol{x}_{i,0}$ and parameters $\boldsymbol{\theta}_i(0)$ are generated randomly.
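The Metropolis weights can be computed from the adjacency structure alone. The sketch below follows the standard rule $w_{ij} = 1/(1+\max\{d_i, d_j\})$ for neighbors $j \neq i$ and $w_{ii} = 1 - \sum_{j \neq i} w_{ij}$, which yields a doubly stochastic matrix as required by Assumption \ref{Assum_consensus}; the 5-agent ring graph used here is a hypothetical stand-in, not the graph of Fig. \ref{fig:graph}.

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis weights for a symmetric 0/1 adjacency matrix (no self-loops):
    w_ij = 1/(1+max(d_i, d_j)) for neighbors; w_ii makes each row sum to 1."""
    N = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W

# Hypothetical ring graph on 5 agents for illustration.
adj = np.zeros((5, 5), dtype=int)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[(i + 1) % 5, i] = 1
W = metropolis_weights(adj)
```

For an undirected graph, the resulting matrix is symmetric with non-negative entries, so row-stochasticity immediately gives double stochasticity.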
As shown in Figs. \ref{fig:loss_traj}(a) and \ref{fig:loss_traj}(b), the tunable parameters $\boldsymbol{\theta}_i$ are initialized at different positions. As the iteration count $k$ increases, the $\boldsymbol{\theta}_i(k)$ converge to a common point, so that the agents rendezvous with each other. In Fig. \ref{fig:loss_traj}(c), the loss decreases as the parameter disagreement $\sum_{i=1}^N \sum_{j=1}^N ||\boldsymbol{\theta}_i - \boldsymbol{\theta}_j||^2$ decreases significantly, and both eventually converge.
\section{Conclusion} \label{section:discussion}
This paper has developed a framework that combines consensus-based distributed optimization with a gradient generator to solve the problem of cooperative tuning of multi-agent optimal control systems.
Future work includes the development of a gradient estimator based on trajectory segments of optimal control systems, the extension of the result to optimal control systems with infinite time horizons, and the employment of other gradient-descent algorithms, such as Nesterov's Accelerated Gradient \cite{sutskever2013importance}.
\bibliographystyle{ieeetr}
\section{Introduction} \label{section:introduction}
\emph{Optimal control} theories are developed to find control inputs for a plant such that its states and inputs optimize a particular objective \cite{anderson1990optimal}. An optimal control (OC) system typically includes system dynamics and an objective function to be optimized given a task specification. The system dynamics and objective function determine an optimal controller for a given OC system. To employ a pre-designed OC system in practice, one often introduces tunable parameters into one or more of its dynamic model, objective function, or optimal controller (if available), so that the OC system can be tuned to meet additional performance requirements or even to minimize an additional performance index.
\emph{Tuning} of OC systems refers to the adjustment of a tunable parameter in the control system to further minimize such an additional performance index while retaining optimality (through some adjustment of the optimal control for the original performance index to reflect the tunable parameter adjustment).
Such an additional performance index commonly involves a scalar \emph{loss function} defined in terms of a system's instantaneous states/inputs, serving as a criterion to evaluate the system's performance; it could reflect stability, fast and smooth set-point tracking, robustness for disturbance rejection, mission changes, or newly arising safety constraints. Tuning OC systems is critical in adapting OC to different application scenarios, and the hope is that it can be achieved without re-designing the OC system from scratch.
Tuning of OC systems is in the tradition of what has become known as neighboring extremal optimal control (NEOC). Reference \cite{bryson2018applied} (the first edition of which originally appeared in 1975) treats the following problem. Suppose an open-loop optimal control is known for a nonlinear system with a prescribed initial condition, and suppose the initial condition is then varied by a small amount; how can one easily obtain a corresponding small variation to the control that maintains optimality? The answer rests on what is known as the theory of the second variation, and boils down to solving a time-varying linear-quadratic optimal control problem with parameters derived from the original problem and its optimum trajectory. From this consideration of perturbations of the initial condition, attention moved to perturbations of other aspects of optimal control problems, including parameters in the loss function. Thus reference \cite{rehbock1992computational} considers a nonlinear optimal control problem in which there are one or more scalar parameters which can potentially vary. With a solution available for one set of parameter values, an algorithm is given whereby the gradient of the optimal index with respect to those parameters is computed. It is a variant of the algorithm provided in \cite{bryson2018applied} for initial-condition perturbation, on a time-varying linear-quadratic foundation. In a further example, \cite{fisher1995neighbouring} indicates how NEOC can work in the presence of control constraints.
Building on the early focus on small variations in initial conditions or parameters, the paper \cite{jiang2015optimal} considers a nonlinear optimal control problem where a parameter undergoes a significant change from the value used to compute the optimal control. It is shown how a modified optimal control can be computed using multiple applications of the neighboring extremal method, each corresponding to a distinct point on a homotopic path; importantly, the paper also demonstrates that the change of extremal due to an infinitesimal change in the parameter can be discontinuous, even though the performance index value may be continuous across the change.
By and large, these papers all work with continuous-time systems, but unsurprisingly the ideas carry over to discrete time \cite{ghaemi2009neighboring}. Very recently the authors of \cite{jin2020pontryagin} have developed a framework for tuning of an OC system based on differentiating Pontryagin's Maximum Principle corresponding to the OC system. Different from classical research on tuning parameters in controllers \cite{kazantzis2005optimal}, objective functions (known as learning from demonstrations \cite{jin2019inverse,jin2021inverse,jin2021distributed}), or system dynamics (known as system identification \cite{schon2011system, abraham2019active}), the work in \cite{jin2020pontryagin} allows tunable parameters to appear in controllers, objective functions, and system dynamics. In this paper we aim to further extend the result in \cite{jin2020pontryagin} from tuning of a single OC system to cooperative tuning of multiple coupled multi-agent OC systems.
By working as a cohesive whole, a \emph{multi-agent system} can usually accomplish complicated missions well beyond the capabilities of its individual subsystems \cite{mou2015distributed,wang2019scalable}. But there is little work on the problem of \emph{cooperative tuning of multi-agent optimal control systems (CT-MAOCS)}. The scenario to be considered envisages individual agents, each containing an adjustable parameter, such that each agent's optimal control (based on an agent-specific performance index) can be computed from its own parameter and performance index.
Figure \ref{fig/multi_agent_problem} illustrates the arrangement. CT-MAOCS can be applied to multi-agent consensus problems where the shared information is a tunable parameter in the OC system of each agent. Parameter tuning needs to achieve a consensus, while the optimal trajectory of each agent needs to satisfy a specific task specification under this consensus. An example treated in a later section is the synchronous multi-agent rendezvous problem \cite{lin2007rendezvous}, in which agents determine their own optimal trajectories such that the rendezvous takes place at a certain specified time. The challenge in solving the problem of CT-MAOCS is twofold: first, each individual loss function $L_i$ is an explicit function of both the parameter $\boldsymbol{\theta}_i$ and the trajectory of the associated OC system, which makes the whole optimization problem at least a bi-level optimization; second, the optimization goal involves not just the minimization of each agent's individual loss $L_i$ but the team-average loss, for which all tunable parameters need to be adjusted cooperatively. The main contribution of this paper is the development of a distributed framework to solve the problem of CT-MAOCS, which combines a consensus-based distributed rule for multi-agent optimization in \cite{nedic2009distributed} with a gradient generator in \cite{jin2020pontryagin}.
\begin{figure}[th]
\centering
\includegraphics[width=0.4\textwidth]{problem_bw.png}
\caption{Cooperative Tuning of Multi-Agent Optimal Control Systems (CT-MAOCS)}
\label{fig/multi_agent_problem}
\end{figure}
\noindent \emph{Notations.} Let $|\mathcal{N}|$ denote the cardinality of a set $\mathcal{N}$. Let $(\cdot)^\prime$ denote the transpose. Let $||\cdot||$ denote the Euclidean norm. For a square matrix $A \in \mathbb{R}^{n \times n}$, let $\text{Tr}(A)$ denote the trace of $A$.
Let $\text{col}\{ \boldsymbol{v}_1, \cdots, \boldsymbol{v}_a \}$ denote a column stack of elements $\boldsymbol{v}_1, \cdots, \boldsymbol{v}_a $, which may be scalars, vectors or matrices,
i.e. $\text{col}\{ \boldsymbol{v}_1, \cdots, \boldsymbol{v}_a \} \triangleq {\matt{{\boldsymbol{v}_1}^{\prime} & \cdots & {\boldsymbol{v}_a}^{\prime}}}^{\prime}$.
Let $\frac{\partial\boldsymbol{g}_t}{\partial \boldsymbol{x}_t} \in \mathbb{R}^{n \times m}$ denote the Jacobian matrix of a function $\boldsymbol{g}: \mathbb{R}^n \mapsto \mathbb{R}^m$ with respect to $\boldsymbol{x} \in \mathbb{R}^n$ evaluated at $\boldsymbol{x}_t$, i.e., $\frac{\partial\boldsymbol{g}_t}{\partial \boldsymbol{x}_t}=\frac{\partial \boldsymbol{g}(\boldsymbol{x})}{\partial \boldsymbol{x}}\rvert_{\boldsymbol{x}=\boldsymbol{x}_t}$.
\section{Problem Formulation} \label{section:problem_formulation}
Consider a multi-agent system consisting of $N$ agents labeled $\mathcal{V}=\{1,\cdots,N\}$. Each agent $i$ can receive information from its neighbor set at iteration $k$, denoted by $\mathcal{N}_i(k)$. $\mathbb{G}_k = \{ \mathcal{V}, \mathcal{E}_k \}$ denotes the directed graph such that a directed edge from $j$ to $i$ is in $\mathcal{E}_k $ if and only if $j\in \mathcal{N}_i(k)$.
Suppose each agent $i$ is an optimal control system with a tunable parameter $\boldsymbol{\theta}_i \in \mathbb{R}^r$ denoted by $\boldsymbol{\mathcal{S}}_i(\boldsymbol{\theta}_i)$.
The open-loop system dynamics of agent $i$, i.e. $\boldsymbol{\mathcal{S}}_i(\boldsymbol{\theta}_i)$, are described by
$$\boldsymbol{x}_{i,t+1}=\boldsymbol{f}_i(\boldsymbol{x}_{i,t}, \boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i),$$ where $t=0,\cdots, T$ denotes the discrete time index, $\boldsymbol{x}_{i,t} \in \mathbb{R}^{n}$ denotes agent-$i$'s state at time $t$, $\boldsymbol{u}_{i,t} \in \mathbb{R}^{m}$ denotes agent-$i$'s optimal control input \footnote{ Optimal control inputs in this paper will be taken to be open-loop time functions rather than controls generated by a feedback law.} at time $t$, and $\boldsymbol{f}_i:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^r\mapsto\mathbb{R}^{n}$ denotes the nonlinear dynamics of agent-$i$, which is assumed to be twice-differentiable.
The open-loop control $\boldsymbol{u}_{i,t}$ is taken to be an optimal control obtained in the following way. Associated with agent $i$ is an objective function denoted by
$$J_i = \sum\nolimits_{t=0}^{T-1}c_{i,t}(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i)+\textcolor{black}{h_i(\boldsymbol{x}_{i,T},\boldsymbol{\theta}_i)},$$
where $c_{i,t}:\mathbb{R}^{n}\times\mathbb{R}^{m}\times\mathbb{R}^r\mapsto\mathbb{R}$ and $h_i:\mathbb{R}^{n}\times\mathbb{R}^r\mapsto\mathbb{R}$ denote the running and final costs, respectively.
Then under a given initial condition $\boldsymbol{x}_{i,0}$, the optimal control for agent $i$ can be determined by
\begin{mini}|s|
{\substack{\boldsymbol{x}_{i,1:T}, \\ \boldsymbol{u}_{i,0:T-1}}}{ J_i = \sum\nolimits_{t=0}^{T-1}c_{i,t}(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i)+\textcolor{black}{h_i(\boldsymbol{x}_{i,T},\boldsymbol{\theta}_i)} \label{oc}}
{}{}
\addConstraint{ \boldsymbol{x}_{i,t+1} =\boldsymbol{f}_i(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i)}
\addConstraint{\forall t=0,\cdots,T-1 \ \text{with given } \boldsymbol{x}_{i,0},}
\end{mini}
Here $\boldsymbol{x}_{i,0:T} \triangleq \text{col}\{\boldsymbol{x}_{i,0},\cdots,\boldsymbol{x}_{i,T}\} \in \mathbb{R}^{n(T+1)}$ denotes all the states from time $t=0$ to $T$; similarly $\boldsymbol{u}_{i,0:T-1} \triangleq \text{col}\{\boldsymbol{u}_{i,0},\cdots,\boldsymbol{u}_{i,T-1}\} \in \mathbb{R}^{mT}$; the optimal control will be denoted by $\boldsymbol{u}^*_{i,0:T-1}$. Given a particular value of $\boldsymbol{\theta}_i$, the inputs $\boldsymbol{u}_{i,0:T-1}$ in (\ref{oc}) are designed to minimize the objective function $J_i$. For notational simplicity, let
$$ \boldsymbol{\xi}_i({{{\boldsymbol{\theta}}_i}}) \triangleq \text{col}\{\boldsymbol{x}_{i,0:T}, \boldsymbol{u}_{i,0:T-1} \} \in \mathbb{R}^{a}$$
denote the \emph{trajectory} of $\boldsymbol{\mathcal{S}}_i({\boldsymbol{\theta}}_i)$ given $\boldsymbol{\theta}_i$, where $a = (T+1)n+Tm$.
We assume, as is common, that any necessary smoothness and similar conditions for a well-defined unique solution to exist are fulfilled. Since this optimal trajectory depends on the parameter $\boldsymbol{\theta}_i$, $\boldsymbol{\xi}_i$ can also be viewed as a function of $\boldsymbol{\theta}_i$, i.e. $\boldsymbol{\xi}_i: \mathbb{R}^r \mapsto \mathbb{R}^a$.
Let $L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)$ denote a scalar function, an additional `partial' performance index or loss function (independent of $J_i$) that contributes to a group or team-average performance index and depends on agent $i$'s trajectory $\boldsymbol{\xi}_i({\boldsymbol{\theta}_i})$ as well as on the parameter itself; ultimately, $L_i$ is just a function of $\boldsymbol{\theta}_i$ because $\boldsymbol{\xi}_i({\boldsymbol{\theta}_i})$ is in effect determined by the minimization of $J_i$. Correspondingly, the global average $\frac{1}{N} \sum_{i=1}^{N}L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)$ evaluates the performance of the whole multi-agent system. Note that the objective function $J_i$ reflects a task specification related only to agent-$i$, whereas the loss function $L_i$ encodes a new task specification which might also involve other agents.
\textcolor{black}{
The \textbf{problem of interest} is to develop an iterative rule for each agent $i$ to update $\boldsymbol{\theta}_i$ such that all the $\boldsymbol{\theta}_i$ reach a consensus at a common parameter $\boldsymbol{\theta}^*$, which minimizes the global average loss, i.e.
\begin{argmini}|s|
{{\{\boldsymbol{\theta}_i\}}_{i=1}^N}{\frac{1}{N} \sum_{i=1}^{N} L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)}
{\label{problem_interest}}{{\{\boldsymbol{\theta}^*\}}_{i=1}^N =}
\addConstraint{\boldsymbol{\xi}_i \text{ obtained by (\ref{oc}) under } \boldsymbol{\theta}_i.}
\end{argmini}}
Note that the notation $\boldsymbol{\xi}_i$ is shorthand for the whole optimal trajectory (input and state) of agent-$i$ from $t=0$ to $t=T$.
\section{Main Results} \label{section:method}
The challenge in solving the cooperative tuning problem of multi-agent optimal control systems in (\ref{problem_interest}) is twofold: first, each $L_i$ here is a function of the trajectory of a dynamical system $\boldsymbol{\mathcal{S}}_i(\boldsymbol{\theta}_i)$, which makes the whole optimization problem at least a bi-level optimization; second, the optimization goal is not just the minimization of each system's own loss but of the team-average loss, for which all tunable parameters need to be adjusted cooperatively. Motivated by these two challenges, in this section we develop a method to solve (\ref{problem_interest}) by combining the consensus-based distributed optimization of \cite{nedic2009distributed} with the gradient generator of \cite{jin2020pontryagin}.
\subsection{Consensus-based Distributed Optimization} \label{subsec:optimization_update}
We first suppose the
gradient $\frac{d L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i}$
is available for each agent $i$ (we shall explain in the next subsection how it can be obtained). Then the problem in (\ref{problem_interest}) becomes a standard consensus-based multi-agent optimization as follows:
\begin{mini}|s|
{ \boldsymbol{\theta}_1,..., \boldsymbol{\theta}_N }{ \frac{1}{N} \sum_{i=1}^{N} L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i) \label{problem_consensus}}
{}{}
\addConstraint{ \boldsymbol{\theta}_1=\cdots = \boldsymbol{\theta}_N .}
\end{mini}
Let $k=0,1, \cdots$ denote the iteration index for adjustment of tunable parameters. Let $\boldsymbol{\theta}_i(k) \in \mathbb{R}^r$ denote agent $i$'s tunable parameter at iteration $k$.
At iteration $k$, the optimal control sequence $\boldsymbol{u}_{i,0:T-1}^*(k)$ for agent-$i$ is computed once based on \eqref{oc} with current parameter $\boldsymbol{\theta}_i(k)$. Then an updated value of $\boldsymbol{\theta}_i(k+1)$ will be computed using $\boldsymbol{\theta}_i(k)$ and the optimal control sequence $\boldsymbol{u}_{i,0:T-1}^*(k)$ together with other information from agent-$i$'s neighbors.
We in fact employ the following \emph{consensus-based gradient-descent update} proposed by \cite{nedic2009distributed, nedic2001incremental}:
\begin{equation} \label{update_rule}
\boldsymbol{\theta}_i(k+1) = \sum_{j \in \mathcal{N}_i(k)} w_{ij}(k) \boldsymbol{\theta}_j(k) - \eta(k) \frac{dL_i}{d\boldsymbol{\theta}_i(k) }.
\end{equation}
Here, $$\frac{dL_i}{d\boldsymbol{\theta}_i(k)}=\frac{dL_i}{d \boldsymbol{\theta}_i }|_{ \boldsymbol{\theta}_i=\boldsymbol{\theta}_i(k)} $$ is the gradient of agent-$i$'s local loss $L_i$ with respect to $\boldsymbol{\theta}_i$ evaluated at $\boldsymbol{\theta}_i = \boldsymbol{\theta}_i(k)$, with the optimal control sequence $\boldsymbol{u}_{i,0:T-1}^*(k)$ used in evaluating the gradient; further, $\eta(k) > 0$ is a diminishing step size such that
\begin{equation} \label{eq:step_size}
\lim_{k \to \infty} \eta(k) = 0,\ \sum_{k=0}^{\infty} \eta(k) = \infty, \ \sum_{k=0}^{\infty} \eta(k)^2 < \infty,
\end{equation}
and $w_{ij}(k)$ are non-negative weights.
Let $W(k) \in \mathbb{R}^{N \times N}$ be the matrix whose $ij$-th entry is $w_{ij}(k)$ if $j\in \mathcal{N}_i(k)$ and $0$ otherwise. As in \cite{nedic2009distributed, nedic2001incremental}, we make the following assumption.
\begin{assum} \label{Assum_consensus}
$W(k)$ is doubly stochastic for all $k=0,1,\cdots$. There exist positive integers $\tau$ and $l$ such that the union of the graphs $\mathbb{G}_{kl+1+\tau}, \mathbb{G}_{kl+2+\tau}, \cdots, \mathbb{G}_{(k+1)l+\tau}$ is strongly connected.
\end{assum}
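One standard way to satisfy the doubly stochastic requirement of Assumption \ref{Assum_consensus} on an undirected graph is the Metropolis rule \cite{xiao2005scheme} used later in the simulation. The sketch below constructs such weights for a made-up five-node tree (the edge list is purely illustrative) and checks the row and column sums:

```python
import numpy as np

# Metropolis weights on an undirected graph: for each edge (i,j),
# w_ij = 1/(1 + max(d_i, d_j)), and the self-weight absorbs the
# remainder so every row sums to 1. The resulting W is symmetric,
# hence doubly stochastic. The edge list is a hypothetical example.
edges = [(0, 1), (0, 2), (0, 3), (3, 4)]   # a small tree (illustrative)
N = 5
deg = np.zeros(N, dtype=int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1

W = np.zeros((N, N))
for i, j in edges:
    w = 1.0 / (1 + max(deg[i], deg[j]))
    W[i, j] = W[j, i] = w
for i in range(N):
    W[i, i] = 1.0 - W[i].sum()

print(W.sum(axis=0))   # each column sums to 1
print(W.sum(axis=1))   # each row sums to 1
```

Since the construction only uses local degree information, each agent can compute its own row of $W(k)$ from messages exchanged with its neighbors.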
By the analytical proofs and results in \cite{nedic2009distributed, nedic2001incremental}, one directly has the following lemma.
\begin{lem} \label{lemma:multi_agent_optimization} \cite{nedic2001incremental}
\textcolor{black}{
Assume that the gradient $\frac{d L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i}$ is known for each agent $i$, and each $L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)$ is convex in $\boldsymbol{\theta}_i$.
Then the distributed update (\ref{update_rule}) with Assumption \ref{Assum_consensus} and step size $\eqref{eq:step_size}$ drives $\boldsymbol{\theta}_i(k) \to \boldsymbol{\theta}^*$ as $k \to \infty$ for all $i \in \mathcal{V}$ and $\boldsymbol{\theta}^*$ minimizes the global average, i.e.
\begin{argmini}|s|
{\boldsymbol{\theta}_1, \cdots, \boldsymbol{\theta}_N} {\frac{1}{N} \sum_{i=1}^{N} L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i).}
{\label{consensus}}{ {\{\boldsymbol{\theta}^*\}}_{i=1}^N =}
\end{argmini}}
\end{lem}
Note that \cite{nedic2001incremental} proves Lemma \ref{lemma:multi_agent_optimization} when the step size satisfies \eqref{eq:step_size} and \cite{nedic2009distributed} proves Lemma \ref{lemma:multi_agent_optimization} when the step size is a positive constant.
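As a concrete illustration of the update (\ref{update_rule}) with the step-size condition (\ref{eq:step_size}), the following sketch runs the rule on a toy problem of our own with scalar quadratic losses $L_i(\theta) = \frac{1}{2}(\theta - a_i)^2$ over a fixed ring graph with Metropolis weights; the targets $a_i$ and the initialization are made up, and the fixed strongly connected graph trivially satisfies Assumption \ref{Assum_consensus}. The team-average loss is minimized at the mean of the $a_i$:

```python
import numpy as np

# Toy run of the consensus-based gradient-descent update on scalar
# quadratic losses L_i(theta) = 0.5*(theta - a_i)^2; the team-average
# loss is minimized at mean(a). Targets and initialization are made up.
N = 5
a = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # hypothetical local targets
theta = np.array([5.0, -3.0, 0.5, 2.0, 7.0])   # arbitrary initialization

# Fixed ring graph with Metropolis weights: every node has degree 2,
# so each neighbor weight is 1/3 and the self-weight is 1/3 as well;
# the resulting W is doubly stochastic.
W = np.zeros((N, N))
for i in range(N):
    W[i, (i - 1) % N] = W[i, (i + 1) % N] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

for k in range(5000):
    eta = 1.0 / (k + 1)              # diminishing step size
    grad = theta - a                 # local gradients dL_i/dtheta_i
    theta = W @ theta - eta * grad   # consensus mixing + gradient step

print(theta)   # every entry is close to mean(a) = 2.0
```

Each agent only uses its neighbors' parameters and its own local gradient, which is what makes the update distributed.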
{\color{black}
Lemma \ref{lemma:multi_agent_optimization} hypothesizes that the gradient $\frac{d L_i(\boldsymbol{\xi}_i(\boldsymbol{\theta}_i), \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i}$ is available to each agent $i$ for all iterations $k=0,1,\cdots$. From the chain rule, one has
\begin{equation} \label{derivative_chain_rule}
\frac{d L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i} = \frac{\partial L_i (\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{\partial \boldsymbol{\xi}_i} \frac{\partial \boldsymbol{\xi}_i ({\boldsymbol{\theta}}_i)}{\partial \boldsymbol{\theta}_i} + \frac{\partial L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i},
\end{equation}
where the partial derivatives $\frac{\partial L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{\partial \boldsymbol{\xi}_i}$ and $\frac{\partial L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}$ are known. The main challenge here comes from the fact that agent $i$ does not have an analytical relation between its system trajectory $\boldsymbol{\xi}_i$ and the parameter $\boldsymbol{\theta}_i$, and thus does not know $\frac{\partial \boldsymbol{\xi}_i ({\boldsymbol{\theta}}_i)}{\partial \boldsymbol{\theta}_i}$, i.e., the partial derivative of a trajectory $\boldsymbol{\xi}_i$ with respect to the parameter $\boldsymbol{\theta}_i$. In the following, we will borrow a result from \cite{jin2020pontryagin} to develop a \emph{gradient generator} which computes the exact value for $\frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}$.
}
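To make the decomposition (\ref{derivative_chain_rule}) concrete, the sketch below evaluates it on a contrived example where $\boldsymbol{\xi}_i(\boldsymbol{\theta}_i)$ is available in closed form (in the actual problem it is produced implicitly by the OC solver) and checks the result against finite differences; all functions here are hypothetical:

```python
import numpy as np

# Chain-rule composition dL/dtheta = dL/dxi * dxi/dtheta + dL/dtheta
# on a made-up case where the "trajectory" map xi(theta) is explicit.
def xi(theta):                       # hypothetical trajectory map R^2 -> R^3
    return np.array([theta[0] ** 2, theta[0] * theta[1], np.sin(theta[1])])

def L(xi_val, theta):                # hypothetical loss L_i(xi, theta)
    return 0.5 * xi_val @ xi_val + theta @ theta

theta = np.array([0.7, -0.3])
x = xi(theta)

# Pieces of the chain rule:
dL_dxi = x                                        # dL/dxi as a row vector
dxi_dtheta = np.array([[2 * theta[0], 0.0],
                       [theta[1], theta[0]],
                       [0.0, np.cos(theta[1])]])  # Jacobian of xi w.r.t. theta
dL_dtheta_partial = 2 * theta                     # explicit dependence on theta

total = dL_dxi @ dxi_dtheta + dL_dtheta_partial   # total derivative

# Finite-difference check of the total derivative
eps = 1e-6
fd = np.array([(L(xi(theta + eps * e), theta + eps * e)
                - L(xi(theta - eps * e), theta - eps * e)) / (2 * eps)
               for e in np.eye(2)])
print(total, fd)   # the two should agree closely
```

In the CT-MAOCS setting the only missing piece of this composition is the Jacobian of the trajectory, which the gradient generator of the next subsection supplies.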
\subsection{Gradient Generator} \label{subsec:grad_generator}
This subsection introduces the gradient generator for computing $\frac{\partial \boldsymbol{\xi}_i( \boldsymbol{\theta}_i) }{\partial \boldsymbol{\theta}_i}$ at each iteration $k$. For simplicity of notation, we use $\boldsymbol{\theta}_i$ to denote $\boldsymbol{\theta}_i(k)$ in this section.
Given the optimal control (\ref{oc}), one has the following \emph{Hamiltonian} associated with $\boldsymbol{\mathcal{S}_i}(\boldsymbol{\theta}_i)$ for all $t=0, \cdots, T-1$,
\begin{equation}\label{Hamil}
H_{i,t}=c_{i,t}(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t},\boldsymbol{\theta}_i)+\boldsymbol f_i(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t},\boldsymbol{\theta}_i)^\prime\boldsymbol{\lambda}_{i,t+1}.
\end{equation}
Here, $\boldsymbol{\lambda}_{i,t} \in \mathbb{R}^{n}, \ t=1,\cdots,T$ denotes the Lagrangian multiplier associated with the equality constraint representing the dynamics $\boldsymbol{x}_{i,t+1}=\boldsymbol{f}_i(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}, \boldsymbol{\theta}_i)$.
By the definition of $\boldsymbol{\xi}_i$, we have
\begin{equation*}
\frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i} = \matt{ \frac{\partial \boldsymbol{x}_{i,0:T}}{\partial \boldsymbol{\theta}_i} \\ \frac{\partial \boldsymbol{u}_{i,0:T-1}}{\partial \boldsymbol{\theta}_i} }.
\end{equation*}
Let
\begin{equation*}
X_{i,t} \triangleq \frac{\partial \boldsymbol{x}_{i,t}}{\partial \boldsymbol{\theta}_i} \in \mathbb{R}^{n \times r}, \ U_{i, t} \triangleq \frac{\partial \boldsymbol{u}_{i,t}}{\partial \boldsymbol{\theta}_i} \in \mathbb{R}^{m \times r}.
\end{equation*}
Note that $X_{i,0} = \frac{\partial \boldsymbol{x}_{i,0}}{\partial \boldsymbol{\theta}_i} = \boldsymbol{0}$ because $\boldsymbol{x}_{i,0}$ in (\ref{oc}) is given.
The tool for computing the gradient $\frac{\partial \boldsymbol{\xi}_i( \boldsymbol{\theta}_i) }{\partial \boldsymbol{\theta}_i}$ involves a linear quadratic control system $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ given as follows:
\begin{mini}|s|
{\substack{X_{i,1:T},\\U_{i,0:T-1}}} {\bar{J}_i = \text{Tr}\sum_{t=0}^{T-1}\Bigg(\frac{1}{2}\small
\begin{bmatrix}
{X}_{i,t} \\
{U}_{i,t}
\end{bmatrix}^\prime
\bar{Q}_{i,t}
\begin{bmatrix}
{X}_{i,t} \\
{U}_{i,t}
\end{bmatrix}+
\bar{R}_{i,t}^\prime
\begin{bmatrix}
{X}_{i,t} \\
{U}_{i,t}
\end{bmatrix}\Bigg) \label{auxiliary_lqr_system}}{}{}
\breakObjective{\ \ \ \ \ \ + \text{Tr}\left(\frac{1}{2}{X}_{i,T}^\prime \, H_{i,T}^{xx} \,{X}_{i,T}+ (H_{i,T}^{x\theta})^\prime\,{X}_{i,T}\right)}
\addConstraint{X_{i,t+1} = F_{i,t} X_{i,t} + G_{i,t} U_{i,t} + E_{i,t} \ \text{with} \ X_{i,0}=\boldsymbol{0}}
\addConstraint{\bar{Q}_{i,t} = \begin{bmatrix}
H_{i,t}^{xx} & H_{i,t}^{xu} \\
H_{i,t}^{ux}& H_{i,t}^{uu}
\end{bmatrix},
\ \bar{R}_{i,t} = \begin{bmatrix}
H_{i,t}^{x\theta} \\
H_{i,t}^{u\theta}
\end{bmatrix}.}
\end{mini}
The coefficients in (\ref{auxiliary_lqr_system}) are defined as follows:
\begin{flalign}
&F_{i,t}=\dfrac{\partial \boldsymbol{f}_i}{\partial \boldsymbol{x}_{i,t}}, \ G_{i,t}=\dfrac{\partial \boldsymbol{f}_i}{\partial \boldsymbol{u}_{i,t}}, \ E_{i,t}=\dfrac{\partial \boldsymbol{f}_i}{\partial \boldsymbol{\theta}_i} \label{matFGEHx} & \\
&H_{i,t}^{xx}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{x}_{i,t}\partial \boldsymbol{x}_{i,t}}, \ H_{i,t}^{ux}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{u}_{i,t} \partial \boldsymbol{x}_{i,t}}={(H_{i,t}^{xu})}^\prime, \label{matHux_and_Hxe} & \\
&H_{i,t}^{uu}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{u}_{i,t}\partial \boldsymbol{u}_{i,t}}, \ H_{i,t}^{x\theta}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{x}_{i,t}\partial \boldsymbol{\theta}_i}, \ H_{i,t}^{u\theta}=\dfrac{\partial^2 H_{i,t}}{\partial \boldsymbol{u}_{i,t}\partial \boldsymbol{\theta}_i}, \label{matHuu_and_Hue} & \\
&H_{i,T}^{xx}=\dfrac{\partial^2 h_i}{\partial \boldsymbol{x}_{i,T}\partial \boldsymbol{x}_{i,T}}, \ H_{i,T}^{x\theta}=\dfrac{\partial^2 h_i}{\partial \boldsymbol{x}_{i,T}\partial \boldsymbol{\theta}_i} \label{matHT}
\end{flalign}
which are known based on the trajectory $\boldsymbol{\xi}_i(\boldsymbol{\theta}_i)$ and the trajectory of Lagrangian multipliers $\boldsymbol{\lambda}_{i,1:T}$. By the discrete-time Pontryagin's Maximum Principle \cite{jin2020pontryagin}, the Lagrangian multipliers $\boldsymbol{\lambda}_{i,1:T}$ can be obtained by iteratively computing (\ref{lagrangian_1}) and (\ref{lagrangian_2}) given $\boldsymbol{\xi}_i(\boldsymbol{\theta}_i)$:
\begin{flalign}
&\boldsymbol{{\lambda}}_{i,T} = \frac{\partial h_i}{\partial\boldsymbol{{x}}_{i,T}}, \label{lagrangian_1} & \\
&\boldsymbol{\lambda}_{i,t} \triangleq \dfrac{\partial H_{i,t}}{\partial \boldsymbol{{x}}_{i,t}} = \dfrac{\partial c_{i,t}}{\partial \boldsymbol{{x}}_{i,t}}+\dfrac{\partial \boldsymbol{f}_i^\prime}{\partial \boldsymbol{{x}}_{i,t}}\boldsymbol{{\lambda}}_{i,t+1}, \ t=T-1,\cdots, 1. \label{lagrangian_2}
\end{flalign}
In practice, many nonlinear optimization solvers, such as IPOPT \cite{wachter2006implementation}, can return the value of Lagrangian multipliers after a constrained nonlinear program is solved.
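The recursions (\ref{lagrangian_1})--(\ref{lagrangian_2}) can be checked on a toy problem. The sketch below (our own scalar example, not taken from the cited work) solves a small linear-quadratic instance of (\ref{oc}) directly as a quadratic program, runs the backward costate recursion, and verifies the Pontryagin stationarity condition $\partial H_{i,t}/\partial \boldsymbol{u}_{i,t} = 0$ at the optimum:

```python
import numpy as np

# Scalar toy OC problem: minimize sum_{t<T} 0.5*(x_t^2 + u_t^2) + 0.5*x_T^2
# subject to x_{t+1} = x_t + u_t with x_0 = 1 and horizon T = 3.
T, x0 = 3, 1.0

# Direct solve: x = c + S u, so QP stationarity gives (S'S + I) u = -S'c.
S = np.tril(np.ones((T + 1, T)), k=-1)        # x_t = x0 + sum_{s<t} u_s
c = x0 * np.ones(T + 1)
u = np.linalg.solve(S.T @ S + np.eye(T), -S.T @ c)
x = c + S @ u

# Backward costate recursion: lam_T = dh/dx_T, lam_t = dc_t/dx_t + f_x' lam_{t+1}
lam = np.zeros(T + 1)
lam[T] = x[T]                                  # terminal condition
for t in range(T - 1, 0, -1):
    lam[t] = x[t] + lam[t + 1]                 # here f_x = 1, dc_t/dx_t = x_t

# PMP stationarity dH/du_t = u_t + lam_{t+1} should vanish at the optimum
print([u[t] + lam[t + 1] for t in range(T)])
```

This also illustrates why solvers such as IPOPT can return the multipliers: they are a by-product of the optimality conditions of the underlying nonlinear program.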
Note that $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ is of the linear quadratic regulator (LQR) form \cite{anderson1990optimal}, and the system dynamics and control objective in $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ are purely determined by the trajectory $\boldsymbol{\xi}_i(\boldsymbol{\theta}_i)$ from $\boldsymbol{\mathcal{S}}_i(\boldsymbol{\theta}_i)$. We also call $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ the \emph{gradient generator} because of the following lemma:
\begin{lem} \label{lemma:stationary} \cite[Lemma~5.1]{jin2020pontryagin} Let $\{ X_{i,0:T}^{*}, U_{i,0:T-1}^{*} \}$ be a stationary solution to (\ref{auxiliary_lqr_system}). Then
\smallskip
\begin{equation} \label{stationary_eq}
\matt{ X_{i,0:T}^* \\ U_{i,0:T-1}^* } = \matt{ \frac{\partial \boldsymbol{x}_{i,0:T}}{\partial \boldsymbol{\theta}_i} \\ \frac{\partial \boldsymbol{u}_{i,0:T-1}}{\partial \boldsymbol{\theta}_i} } = \frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}.
\end{equation}
\end{lem}
By a stationary solution we mean that $\{ X_{i,0:T}^{*}, U_{i,0:T-1}^{*} \}$ might be a saddle point or a minimizer of (\ref{auxiliary_lqr_system}).
However, as long as $\{ X_{i,0:T}^{*}, U_{i,0:T-1}^{*} \}$ is a stationary solution to (\ref{auxiliary_lqr_system}), i.e. the gradient of the objective in (\ref{auxiliary_lqr_system}) vanishes there, $\{X_{i,0:T}^*, U_{i,0:T-1}^*\}$ is exactly $\frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}$. Since $\overline{\boldsymbol{\mathcal{S}}}_i(\boldsymbol{\theta}_i)$ is a linear quadratic control system, we can compute $\{X_{i,0:T}^*, U_{i,0:T-1}^*\}$ by the following lemma:
\begin{lem} \label{lemma:compute} \cite[Lemma~5.2]{jin2020pontryagin} $\{X_{i,0:T}^*, U_{i,0:T-1}^*\}$ can be obtained by the following recursions for $t=T-1, \cdots, 0$
\begin{equation} \label{recursion_eq}
\begin{aligned}
P_{i,t} &= Q_{i,t} + A_{i,t}^{\prime}{(I+P_{i,t+1}R_{i,t})}^{-1}P_{i,t+1}A_{i,t}, \\
W_{i,t} &= A_{i,t}^{\prime}{(I+P_{i,t+1}R_{i,t})}^{-1}(W_{i,t+1}+P_{i,t+1}M_{i,t}) + N_{i,t},
\end{aligned}
\end{equation}
where $P_{i,T}=H_{i,T}^{xx}$, $W_{i,T}=H_{i,T}^{x\theta}$; $I$ is the identity matrix; $A_{i,t} \triangleq F_{i,t}-G_{i,t}{(H_{i,t}^{uu})}^{-1}H_{i,t}^{ux}$, $R_{i,t} \triangleq G_{i,t}{(H_{i,t}^{uu})}^{-1}G_{i,t}^{\prime}$, $M_{i,t} \triangleq E_{i,t}-G_{i,t}{(H_{i,t}^{uu})}^{-1}H_{i,t}^{u\theta}$, $Q_{i,t} \triangleq H_{i,t}^{xx}-H_{i,t}^{xu}{(H_{i,t}^{uu})}^{-1}H_{i,t}^{ux}$, $N_{i,t} \triangleq H_{i,t}^{x\theta}-H_{i,t}^{xu}{(H_{i,t}^{uu})}^{-1}H_{i,t}^{u\theta}$. Further, $\{X_{i,0:T}^*, U_{i,0:T-1}^*\}$ can be computed by iteratively computing the following equations from $t=0$ to $T-1$ with $X_{i,0} = \boldsymbol{0}$:
\begin{equation} \label{recursion_solution:1}
\begin{aligned}
U_{i,t} = &- {(H_{i,t}^{uu})}^{-1}\Big( H_{i,t}^{ux}X_{i,t}+H_{i,t}^{u\theta} + G_{i,t}^{\prime}(I+P_{i,t+1} \cdot \\
& {R_{i,t})}^{-1} \cdot (P_{i,t+1}A_{i,t}X_{i,t}+P_{i,t+1}M_{i,t}+W_{i,t+1}) \Big),
\end{aligned}
\end{equation}
\begin{equation} \label{recursion_solution:2}
X_{i,t+1}=F_{i,t}X_{i,t}+G_{i,t}U_{i,t}+E_{i,t}.
\end{equation}
\end{lem}
\begin{rem}
$H_{i,t}^{uu}$ in (\ref{recursion_solution:1}) for all $t = 0, \cdots, T-1$ is invertible if the second-order optimality sufficient condition of (\ref{oc}) is satisfied (as proved in Lemma 1 and Theorem 1 in \cite{jin2021safe}).
See \cite[Lemma~A.2]{jin2021safe} for further details about the second-order sufficient condition.
This is because when the condition holds, the Hessian matrix of the Hamiltonian in (\ref{Hamil}), $\matt{ H_{i,t}^{xx} & H_{i,t}^{xu} \\ H_{i,t}^{ux} & H_{i,t}^{uu} }$, is positive definite for all $t = 0, \cdots, T-1$. In particular, $H_{i,t}^{uu}$ is positive definite for all $t = 0, \cdots, T-1$, i.e. $H_{i,t}^{uu}$ in (\ref{recursion_solution:1}) is invertible. In this case, the stationary solution is the unique global solution. If the second-order optimality sufficient condition does not hold, then one cannot use the recursions in Lemma \ref{lemma:compute} to compute a stationary solution to (\ref{auxiliary_lqr_system}); nevertheless, one can compute a stationary solution with a gradient-descent-based method \cite{boyd2004convex}.
\end{rem}
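To illustrate Lemma \ref{lemma:compute}, the following sketch instantiates the recursions on a scalar toy problem of our own: minimize $\sum_{t=0}^{T-1} \frac{1}{2}(x_t^2+u_t^2) + \frac{1}{2}x_T^2$ subject to $x_{t+1}=x_t+u_t+\theta$ with $x_0=1$. The problem is linear-quadratic in $(x,u)$ and linear in $\theta$, so all Hamiltonian Hessians are constants ($H^{xx}=H^{uu}=1$, $H^{xu}=H^{x\theta}=H^{u\theta}=0$, $F=G=E=1$), and the derivative of the optimal trajectory with respect to $\theta$ can also be obtained in closed form for comparison:

```python
import numpy as np

# Gradient-generator recursions on a scalar toy OC problem:
# minimize sum_{t<T} 0.5*(x_t^2 + u_t^2) + 0.5*x_T^2
# subject to x_{t+1} = x_t + u_t + theta, x_0 = 1.
T = 3

# Derived coefficients (all scalars for this instance):
A, R, M, Q, Nc = 1.0, 1.0, 1.0, 1.0, 0.0

# Backward Riccati-like recursion with P_T = 1, W_T = 0
P = np.zeros(T + 1); Wm = np.zeros(T + 1)
P[T] = 1.0
for t in range(T - 1, -1, -1):
    P[t] = Q + A * P[t + 1] / (1.0 + P[t + 1] * R) * A
    Wm[t] = A / (1.0 + P[t + 1] * R) * (Wm[t + 1] + P[t + 1] * M) + Nc

# Forward pass: X_t = dx_t/dtheta, U_t = du_t/dtheta, starting from X_0 = 0
X = np.zeros(T + 1); U = np.zeros(T)
for t in range(T):
    U[t] = -(P[t + 1] * (A * X[t] + M) + Wm[t + 1]) / (1.0 + P[t + 1] * R)
    X[t + 1] = X[t] + U[t] + 1.0     # F X + G U + E

# Ground truth by differentiating the closed-form QP solution:
# x = c(theta) + S u with c_t = x_0 + t*theta, so
# du*/dtheta = -(S'S + I)^{-1} S' dc/dtheta.
S = np.tril(np.ones((T + 1, T)), k=-1)
dc = np.arange(T + 1, dtype=float)               # dc_t/dtheta = t
dU = np.linalg.solve(S.T @ S + np.eye(T), -S.T @ dc)
dX = S @ dU + dc

print(np.max(np.abs(U - dU)), np.max(np.abs(X - dX)))   # both tiny
```

The recursion reproduces the analytically differentiated optimal trajectory, which is exactly the role the gradient generator plays inside the cooperative-tuning loop.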
\subsection{The Framework for Cooperative Tuning of Multi-Agent Optimal Control}
To sum up, we employ the following framework for cooperative tuning of Multi-Agent Optimal Control, i.e. to solve the problem in (\ref{problem_interest}). This framework is based on a combination of the consensus-based gradient descent algorithm in (\ref{update_rule}) and the gradient generator in (\ref{auxiliary_lqr_system}), as shown in Fig. \ref{fig/framework}.
\begin{figure}[h]
\centering
\includegraphics[width=0.43\textwidth]{algorithm_framework_consensus_bw.png}
\caption{The framework for cooperative tuning}
\label{fig/framework}
\end{figure}
By Lemma \ref{lemma:multi_agent_optimization}, Lemma \ref{lemma:stationary} and Lemma \ref{lemma:compute}, one has the following main result.
\begin{thm} \label{theorem:1} Suppose that Assumption \ref{Assum_consensus} holds. The distributed update (\ref{update_rule}) is utilized for (\ref{problem_interest}), where $\frac{d L_i(\boldsymbol{\xi}_i, \boldsymbol{\theta}_i)}{d \boldsymbol{\theta}_i}$ is computed by the chain rule in (\ref{derivative_chain_rule}) and $\frac{\partial \boldsymbol{\xi}_i(\boldsymbol{\theta}_i)}{\partial \boldsymbol{\theta}_i}$ is obtained by the gradient generator (\ref{auxiliary_lqr_system}). One has all $\boldsymbol{\theta}_i(k) \to \boldsymbol{\theta}^*$ as $k \to \infty$ for all $i \in \mathcal{V}$ where $\boldsymbol{\theta}^*$ solves the problem in (\ref{problem_interest}).
\end{thm}
\subsection{Constraints in Optimal Control}
In the optimal control problem \eqref{oc}, one can add inequality constraints that represent safety constraints. With the interior-point method \cite{fiacco1990nonlinear}, one can define a logarithmic barrier function for each inequality constraint and a barrier parameter. Then the constrained optimization problem can be written as an unconstrained one, where the new objective function is the original one minus the summation of all the barrier functions. Hence, one can formulate a similar gradient generator for this new optimal control problem. The Hamiltonian associated with this new problem also includes the inequality constraint. See \cite{jin2021safe} for details.
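As a minimal sketch of this reformulation (with a made-up scalar cost and bound), consider a single decision variable $u$ subject to $u < u_{\max}$:

```python
import numpy as np

# Log-barrier handling of one inequality constraint: the original cost
# 0.5*(u - 2)^2 has its unconstrained minimizer at u = 2, which is
# infeasible; subtracting mu*log(u_max - u) keeps iterates interior.
# The cost, bound, and barrier parameter are all hypothetical.
u_max, mu = 1.0, 0.1

def barrier_objective(u):
    # original cost + barrier; finite only on the interior u < u_max
    return 0.5 * (u - 2.0) ** 2 - mu * np.log(u_max - u)

# Locate the barrier minimizer on a fine grid of the feasible interior;
# analytically it solves (2 - u)(u_max - u) = mu, i.e. u ~ 0.9084 here.
grid = np.linspace(-1.0, u_max - 1e-6, 100000)
u_star = grid[np.argmin(barrier_objective(grid))]
print(u_star)   # strictly inside the feasible set, near 0.9084
```

Driving $\mu \to 0$ pushes the barrier minimizer toward the true constrained optimum, which is the standard interior-point continuation.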
\section{Simulation} \label{section:example}
This section applies the proposed cooperative tuning framework to a synchronous multi-agent rendezvous problem \cite{lin2007rendezvous}. Suppose there are $N$ mobile robots (agents), each of which determines an optimal trajectory via its own optimal control problem. The rendezvous should take place at a specified time (the end of the trajectory), while the rendezvous location is unspecified; each agent's desired rendezvous location is initialized randomly and viewed as the tunable parameter in its OC system.
Given a particular value of the tunable parameter for each agent, the determination of the optimal trajectory is made independently of the other agents.
At the first iteration ($k=0$), each agent determines an optimal trajectory under an initial parameter $\boldsymbol{\theta}_i(0)$. Then at each iteration, agents share and update their parameters, cooperatively minimizing a global loss function by individually minimizing their own local loss functions. All agents should eventually reach a consensus on the parameter and hence rendezvous at a single, initially unspecified location.
Agent-$i$'s dynamics are modeled by the following unicycle model \cite[Chapter~13]{lavalle2006planning}:
\begin{equation}\label{cov_model}
\dot{\boldsymbol{x}}_i = \begin{bmatrix}
\dot{p}_{x,i} \\
\dot{p}_{y,i} \\
\dot{\psi}_i
\end{bmatrix} = \boldsymbol{f}_c(\boldsymbol{x}_i, \boldsymbol{u}_i) = \begin{bmatrix}
u_{v,i} \cdot \text{cos}(\psi_i) \\
u_{v,i} \cdot \text{sin}(\psi_i) \\
u_{\omega, i}
\end{bmatrix},
\end{equation}
where $\boldsymbol{x}_i \in \mathbb{R}^3$ is agent-$i$'s state, $\boldsymbol{u}_i = \text{col}\{ u_{v,i}, \ u_{\omega,i} \} \in \mathbb{R}^2$ is agent-$i$'s control input, $p_{x,i} \in \mathbb{R}$ and $p_{y,i} \in \mathbb{R}$ are position coordinates, $\psi_i \in \mathbb{R}$ is the heading angle, $u_{v,i}$ is the velocity input, and $u_{\omega,i}$ is the angular velocity input. Define
\begin{equation}
p: \boldsymbol{x}_i \in \mathbb{R}^3 \mapsto \boldsymbol{p}_i \in \mathbb{R}^2
\end{equation}
as the static mapping from agent-$i$'s state to its position $\boldsymbol{p}_i = \text{col}\{p_{x,i}, \ p_{y,i}\} \in \mathbb{R}^2$.
The optimal control for agent-$i$ is written as
\begin{mini}|s|
{\substack{\boldsymbol{x}_{i,1:T}, \\ \boldsymbol{u}_{i,0:T-1}}}{J_i(\boldsymbol{x}_{i,0:T}, \boldsymbol{u}_{i,0:T-1}, \boldsymbol{\theta}_i) \label{multi_agent_rendezous}}
{}{}
\addConstraint{ \boldsymbol{x}_{i,t+1} = \boldsymbol{x}_{i,t} + {\Delta} \cdot \boldsymbol{f}_c(\boldsymbol{x}_{i,t},\boldsymbol{u}_{i,t}) }
\addConstraint{\forall t=0,\cdots,T-1 \ \text{with given } \boldsymbol{x}_{i,0},}
\end{mini}
where $\boldsymbol{\theta}_i \in \mathbb{R}^2$ is the tunable parameter for agent-$i$, $\Delta > 0$ is a constant arising in the discrete time Euler approximation of the differential equation (\ref{cov_model}), and the objective function $J_i(\boldsymbol{x}_{i,0:T}, \boldsymbol{u}_{i,0:T-1}, \boldsymbol{\theta}_i)$ is defined by
\begin{equation} \label{simulation_cost_function}
J_i= \sum_{t=0}^{T-1} \Big[ 2||\boldsymbol{p}(\boldsymbol{x}_{i,t}) - \boldsymbol{\theta}_i ||^2 + ||\boldsymbol{u}_{i,t}||^2 \Big]
+ 5||\boldsymbol{p}(\boldsymbol{x}_{i,T}) - \boldsymbol{\theta}_i||^2.
\end{equation}
The local loss function for agent-$i$ is defined by
\begin{equation} \label{example_local_loss}
L_i(\boldsymbol{\xi}_{i}, \boldsymbol{\theta}_i) = 100||\boldsymbol{p}(\boldsymbol{x}_{i,T}) - \boldsymbol{\theta}_i||^2
\end{equation}
where $\boldsymbol{\xi}_i \triangleq \text{col}\{\boldsymbol{x}_{i,0:T}, \boldsymbol{u}_{i,0:T-1} \} \in \mathbb{R}^{5T+3}$ (here $n=3$ and $m=2$), and $\boldsymbol{x}_{i,T}$ is the $(T+1)$-th block of $\boldsymbol{\xi}_i$. The global loss function is $\frac{1}{N} \sum_{i=1}^N L_i$. Note that the weighting coefficients in (\ref{simulation_cost_function}) and (\ref{example_local_loss}) are essentially arbitrary.
Section \ref{section:problem_formulation} mentions the difference between an objective function $J_i$ and a local loss function $L_i$ in general.
In this specific example, $J_i$ (through the term $||\boldsymbol{u}_{i,t}||^2$) requires each agent's trajectory to stay as close as possible to the desired rendezvous location over the whole horizon while keeping energy consumption small, whereas $L_i$ only requires the end of the trajectory to be as close as possible to the desired rendezvous location; energy use and proximity to the rendezvous point before the end time are irrelevant to the global objective.
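The pieces of this example can be sketched as follows; the control sequence is an arbitrary guess rather than an optimal control, and serves only to show how the Euler-discretized dynamics in (\ref{multi_agent_rendezous}) and the costs (\ref{simulation_cost_function}) and (\ref{example_local_loss}) are evaluated:

```python
import numpy as np

# Euler-discretized unicycle dynamics and evaluation of the objective
# J_i and local loss L_i for one agent. The straight-line control
# sequence below is a hypothetical guess, not an optimal control.
Delta, T = 0.1, 60
theta_i = np.array([1.0, 1.0])               # candidate rendezvous point

def step(x, u):
    px, py, psi = x
    v, w = u
    return np.array([px + Delta * v * np.cos(psi),
                     py + Delta * v * np.sin(psi),
                     psi + Delta * w])

def J_and_L(x0, U, theta):
    J, x = 0.0, x0
    for t in range(T):
        J += 2.0 * np.sum((x[:2] - theta) ** 2) + np.sum(U[t] ** 2)
        x = step(x, U[t])
    J += 5.0 * np.sum((x[:2] - theta) ** 2)       # terminal cost h_i
    L = 100.0 * np.sum((x[:2] - theta) ** 2)      # local loss L_i
    return J, L, x

x0 = np.array([0.0, 0.0, 0.0])               # start at origin, heading +x
U = np.tile([0.25, 0.0], (T, 1))             # drive straight at 0.25 m/s
J, L, x_final = J_and_L(x0, U, theta_i)
print(x_final[:2])                           # ends at (1.5, 0), missing theta_i
```

An OC solver would instead choose $U$ to minimize $J_i$ for the given $\boldsymbol{\theta}_i$, after which $L_i$ and its gradient feed the cooperative tuning loop.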
\begin{figure*}[h]
\centering
\begin{subfigure}{.329\textwidth}
\centering
\includegraphics[width=\linewidth]{Figure_1.png}
\caption{The trajectory before iteration}
\label{fig:traj_before}
\end{subfigure}
\hfill
\begin{subfigure}{.329\textwidth}
\centering
\includegraphics[width=\linewidth]{Figure_4.png}
\caption{The trajectory after 30 iterations}
\label{fig:traj_after}
\end{subfigure}
\hfill
\begin{subfigure}{.329\textwidth}
\centering
\includegraphics[width=\linewidth]{Figure_3.png}
\caption{The loss and parameter error}
\label{fig:loss_traj_}
\end{subfigure}
\caption{ The simulation results for a multi-agent rendezvous problem given a periodic graph with 5 agents. The blue dots are the initial positions. The red stars are the desired terminal positions $\boldsymbol{\theta}_i$. The blue lines are the optimal trajectories generated by the optimal controls given $\boldsymbol{\theta}_i$. The top plot in (c) shows the relative loss over iterations, i.e., the current loss divided by the initial loss. The bottom plot in (c) shows the total parameter disagreement among all agents over iterations, i.e., $\sum_{i=1}^N \sum_{j=1}^N ||\boldsymbol{\theta}_i - \boldsymbol{\theta}_j||^2$. }
\label{fig:loss_traj}
\end{figure*}
\subsection{Simulation Result} \label{subsec:simulation_result}
\begin{figure}[h]
\centering
\includegraphics[width=0.43\textwidth]{simulation_graph.png}
\caption{Periodic time-variant graph $\mathbb{G}_k$, $q = 0, 1, 2, \cdots$}
\label{fig:graph}
\end{figure}
The remaining simulation parameters are: $N=5$, $T=60$, $\Delta=0.1$s, $\eta(k) = 0.1 \ \forall k \geq 0$. A periodic time-variant graph $\mathbb{G}_k$ is defined in Fig. \ref{fig:graph}. The weight matrix $W(k)$ is defined by Metropolis weights \cite{xiao2005scheme}. The initial state $\boldsymbol{x}_{i,0}$ and parameter $\boldsymbol{\theta}_i(0)$ are generated randomly.
As shown in Figs. \ref{fig:loss_traj}(a) and \ref{fig:loss_traj}(b), the tunable parameters $\boldsymbol{\theta}_i$ are initialized at different positions at the first iteration. As the iteration index $k$ increases, the $\boldsymbol{\theta}_i(k)$ converge to a common point, so the agents rendezvous with one another. In Fig. \ref{fig:loss_traj}(c), the loss decreases as the parameter error $\sum_{i=1}^N \sum_{j=1}^N ||\boldsymbol{\theta}_i - \boldsymbol{\theta}_j||^2$ decreases significantly, and finally both the loss and the parameter error converge.
\section{Conclusion} \label{section:discussion}
This paper has developed a framework that combines consensus-based distributed optimization with a gradient generator to solve the problem of cooperative tuning of multi-agent optimal control systems.
Future work includes the development of a gradient estimator based on trajectory segments of optimal control systems, the extension of the results to optimal control systems with an infinite time horizon, and the employment of other gradient-descent algorithms, such as Nesterov's Accelerated Gradient \cite{sutskever2013importance}.
\bibliographystyle{ieeetr}
package com.google.gerrit.server.project;

import com.google.gerrit.extensions.api.projects.BranchInfo;
import com.google.gerrit.extensions.registration.DynamicMap;
import com.google.gerrit.extensions.restapi.AcceptsCreate;
import com.google.gerrit.extensions.restapi.BadRequestException;
import com.google.gerrit.extensions.restapi.ChildCollection;
import com.google.gerrit.extensions.restapi.IdString;
import com.google.gerrit.extensions.restapi.ResourceNotFoundException;
import com.google.gerrit.extensions.restapi.RestView;
import com.google.gerrit.reviewdb.client.RefNames;
import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.Singleton;

import org.eclipse.jgit.lib.Constants;

import java.io.IOException;
import java.util.List;

/** REST collection of the branches of a project. */
@Singleton
public class BranchesCollection implements
    ChildCollection<ProjectResource, BranchResource>,
    AcceptsCreate<ProjectResource> {
  private final DynamicMap<RestView<BranchResource>> views;
  private final Provider<ListBranches> list;
  private final CreateBranch.Factory createBranchFactory;

  @Inject
  BranchesCollection(DynamicMap<RestView<BranchResource>> views,
      Provider<ListBranches> list, CreateBranch.Factory createBranchFactory) {
    this.views = views;
    this.list = list;
    this.createBranchFactory = createBranchFactory;
  }

  @Override
  public RestView<ProjectResource> list() {
    return list.get();
  }

  @Override
  public BranchResource parse(ProjectResource parent, IdString id)
      throws ResourceNotFoundException, IOException, BadRequestException {
    String branchName = id.get();
    // Expand short names to full ref names; HEAD is passed through as-is.
    if (!branchName.equals(Constants.HEAD)) {
      branchName = RefNames.fullName(branchName);
    }
    List<BranchInfo> branches = list.get().apply(parent);
    for (BranchInfo b : branches) {
      if (branchName.equals(b.ref)) {
        return new BranchResource(parent.getControl(), b);
      }
    }
    throw new ResourceNotFoundException();
  }

  @Override
  public DynamicMap<RestView<BranchResource>> views() {
    return views;
  }

  @SuppressWarnings("unchecked")
  @Override
  public CreateBranch create(ProjectResource parent, IdString name) {
    return createBranchFactory.create(name.get());
  }
}
\section{Introduction}
\label{intro}
Research on directed transport at the microscopic level has benefitted
enormously from the interaction and mutual stimulation of applied
studies close to experimentally accessible phenomena and theoretical
analyses, the paradigm case being the concept of Brownian motors
\cite{Ast94,BHK94,BRH96,Ast97} as it emerged from the discovery of
motor molecules generating directed transport in the living cell
\cite{Lei94,Spu94}. Notwithstanding, the distance between the two
endeavours remains large. An example is the degree of structural
complexity, reflected in the number of degrees of freedom involved, of
the models considered on either side: While realistic
molecular-dynamics simulations of biomolecules typically include the
full set of hundreds or thousands of nuclear freedoms, plus possibly
some of the surrounding solvent (for a review see, e.g.,
\cite{WD&01}), the key notion of ratchets has been developed
considering mere point particles in one-dimensional periodic
potentials \cite{AB96,Rei02}.
Efforts are under way on both sides to bring these ends closer to one
another. On the one hand, in molecular biophysics, the concept of
functional degree of freedom \cite{TS03,NS06} has been conceived for
large-scale conformational changes on long timescales that are
immediately responsible, e.g., for the catalytic activity of a
protein, and can be identified on basis of objective criteria such as
normal-mode analysis \cite{MG&76,Har84,BK85} to establish a hierarchy
among the modes from slow to fast and from collective (global) to
microscopic. On the other hand, elementary ratchet models are being
endowed one by one with additional freedoms to study their possible
r\^ole in directed transport \cite{Mat02}.
The present work is intended to lay another stepping stone on the way
from abstract ratchet models towards biophysical realism, by
incorporating a few crucial features of molecular motors: We devise a
model that includes an internal freedom to represent some
functionally relevant conformational change. The concept of molecular
\textit{combustion} motor is incorporated replacing the familiar
periodic external driving by a chemical freedom as coherent energy
source which maintains the system far from thermal equilibrium
\cite{KB00}. The ubiquitous r\^ole of thermal fluctuations is modeled,
as usually, by random forces. At the same time, we avoid the detailed
reconstruction of any specific system such as ${\rm F}_1$-ATPase
\cite{Kin98,Min03,AM08} or even their reduction to ``mechanical toy
models'' \cite{PO95} which typically still are tailored to represent a
certain (species of) motor molecule and therefore involve some
contingency. This allows us to study the effect of internal freedoms
on general grounds, applicable to a broad class of molecules and even
beyond the biophysical realm. In this way, we take up themes set by
pioneering works such as that of Kaneko \textit{et al.} \cite{NK03} on
molecular machines with internal freedoms and that of Mateos on ratchets
based on non-point particles \cite{Mat02}. At the same time, focussing
on the biological context, we leave aside quantum effects which
probably are of minor relevance for transport in the living cell, yet
may be crucial in artificial nanodevices \cite{KLH05}.
The dynamics of our model is simulated in the underdamped regime,
allowing for complex deterministic motion, and in the presence of
thermal noise, analyzed as to transport mechanisms involving the
internal freedom as essential element. We find evidence, above all,
that it serves as a temporary energy storage which partially decouples
directed transport from the energy source and thus decorrelates the
discrete steps in the external freedom from the likewise approximately
quantized discharges of chemical energy into the system---certainly
desirable features from the point of view of robustness, efficiency,
and versatility of a molecular motor. Preliminary results of this work
have been published (in Spanish) in Ref.~\cite{DN09}.
Our model is motivated and constructed in Sec.~\ref{model}. Section
\ref{dynamics} provides a survey of its dynamical behaviour in
different parameter regimes. Transport properties and the r\^ole of
the internal freedom are discussed in Sec.~\ref{transport}. We
conclude in Sec.~\ref{end} with some remarks on open ends.
\section{Model}
\label{model}
We aim to construct a minimal model of a molecular combustion
motor that goes beyond the familiar ratchet scheme in two essential
points: (i) it includes an internal freedom to represent a functional
(slow, conformational) mode of the molecule, and (ii) it incorporates
the injection of chemical energy, e.g., through hydrolysis of ATP, as
autonomous dynamics of a chemical degree of freedom corresponding to
the reaction coordinate underlying that process \cite{KB00}. The
formulation of our model is inspired in various respects by the
example of F1-ATPase, a prototypical rotational molecular motor
\cite{Kin98,Min03,AM08}, but does not intend its reconstruction in any
biophysical detail, i.e., is to be considered an impressionist view,
at most, of that molecule.
More precisely, we require the model to comply with the following
conditions:
\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate}
\item \textit{External freedom} The coordinate $x_{\rm ex}$ is
equivalent to the single freedom of the usual point-particle ratchets.
It will be subjected to a standard ratchet potential, periodic with
period $X_{\rm ex} = 2\pi$ but breaking invariance under parity
$x_{\rm ex} \to -x_{\rm ex}$, which can be considered either as cyclic
(angle) or as extended. To model the conversion of internal kinetic
energy into directed transport, it will be coupled to the internal
freedom, but also to the chemical freedom from which it receives
coherent energy.
\item \textit{Internal freedom} Following Kaneko \textit{et al.}
\cite{NK03}, we model the internal freedom $x_{\rm in}$ like a
pendulum, with a smooth periodic potential, period $X_{\rm in} =
2\pi$, which however lacks the intentional asymmetry of the ratchet
potential proper.
\item \textit{Chemical freedom} In order to reproduce the entropic
bias of the ``combustion'' reaction, we impose a non-zero mean
gradient on the otherwise periodic dependence of the potential on the
chemical coordinate $x_{\rm ch}$, i.e., $V(x_{\rm ex},x_{\rm
in},x_{\rm ch}+X_{\rm ch}) = V(x_{\rm ex},x_{\rm in},x_{\rm ch})
- E_{\rm ch}$, $E_{\rm ch}$ denoting the net energy gain of the
(hydrolysis etc.) reaction per molecule. To keep the behaviour of this
coordinate close to an approximately periodic energy injection, even
in presence of the backaction from the external freedom, we further
choose a large inertia $m_{\rm ch}$.
\end{enumerate}
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.3,angle=0 ]{fig1.pdf}
\caption{Potential (\protect\ref{pottotal}) as a function of
coordinates $x_{\rm ex}$ and $x_{\rm ch}$ for the external and the
chemical freedoms, resp. Greylevel code ranges from black (low) to
white (high).
}\label{potexch}
\end{center}
\end{figure}
To implement these requirements, we compose the full potential as
follows:
\begin{equation} \label{pottotal}
\begin{split}
V(x_{\rm ex},x_{\rm in},x_{\rm ch}) =
V_{\rm ei}(x_{\rm ex},x_{\rm in})+V_{\rm in}(x_{\rm in}) \\
+ V_{\rm ec}(x_{\rm ex},x_{\rm ch})+V_{\rm ch}(x_{\rm ch}),
\end{split}
\end{equation}
where
\begin{align} \label{potextint}
V_{\rm ei}(x_{\rm ex},x_{\rm in}) =&
-\frac{1+\varepsilon_{\rm ei}\sin(x_{\rm in})}
{1+\varepsilon_{\rm ei}} \notag\\
&\times \frac{\sin(x_{\rm ex})+A\sin(2x_{\rm ex})}{2f_A},\\
V_{\rm in}(x_{\rm in}) =& -K\sin(x_{\rm in}), \label{potint}\\
V_{\rm ec}(x_{\rm ex},x_{\rm ch}) =& -\varepsilon_{\rm ec}\sin(x_{\rm ch})
\sin(x_{\rm ex}-\delta), \label{potextchem}\\
V_{\rm ch}(x_{\rm ch}) =& -\frac{E_{\rm ch}}{2\pi} x_{\rm ch}.
\label{potchem}
\end{align}
The parameter $A$ determines the asymmetry of the ratchet potential
\cite{Fla00}. The coupling between external and internal and between
external and chemical freedoms is controlled, respectively, by
$\varepsilon_{\rm ei}$ and $\varepsilon_{\rm ec}$. We plot the
potential (\ref{pottotal}) as a function of external
and chemical coordinates in Fig.~\ref{potexch}. Diagonal channels
corresponding to a rigid association of one hydrolysis reaction per
spatial step in $x_{\rm ex}$ can be discerned (cf.\ Ref.~\cite{KB00}).
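For concreteness, the potential (\ref{pottotal})--(\ref{potchem}) can be transcribed directly as follows (a Python sketch; the normalization constant $f_A$ is treated here as a free parameter set to 1, and the remaining values follow Tab.~\ref{paramval}).

```python
import numpy as np

# Direct transcription of the potential, Eqs. (1)-(5); f_A is treated as a
# free normalization constant (set to 1 here).
A, K = 0.25, 0.1
eps_ei, eps_ec = 1.0, 1.0
delta, E_ch, f_A = 4.53, 100.0, 1.0

def V(x_ex, x_in, x_ch):
    V_ei = (-(1 + eps_ei * np.sin(x_in)) / (1 + eps_ei)
            * (np.sin(x_ex) + A * np.sin(2 * x_ex)) / (2 * f_A))
    V_in = -K * np.sin(x_in)
    V_ec = -eps_ec * np.sin(x_ch) * np.sin(x_ex - delta)
    V_ch = -E_ch / (2 * np.pi) * x_ch
    return V_ei + V_in + V_ec + V_ch
```

Advancing $x_{\rm ch}$ by one period $2\pi$ lowers the potential by exactly $E_{\rm ch}$, the tilt that injects chemical energy, while the dependence on $x_{\rm ex}$ remains strictly periodic.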
Moreover, to represent the fast internal as well as ambient degrees of
freedom in a collective manner, we include dissipation terms for
$x_{\rm ex}$ and $x_{\rm ch}$ corresponding to velocity-proportional
(Ohmic) friction. Working on the microscopic level, we complement them
by random forces representing thermal noise, with autocorrelation
functions related to dissipation rates through the
fluctuation-dissipation theorem \cite{Rei65}. All in all, we thus
arrive at a set of coupled stochastic differential equations
\cite{Oks98},
\begin{equation} \label{eqsmotion}
\begin{split}
m_{\rm ex} \ddot{x}_{\rm ex} =&
-m_{\rm ex}\gamma_{\rm ex} \dot{x}_{\rm ex} +
\frac{1+\varepsilon_{\rm ei}\sin(x_{\rm in})}
{1+\varepsilon_{\rm ei}} \\
&\times \frac{\cos(x_{\rm ex})+2A\cos(2x_{\rm ex})}{2f_A} \\
&+\varepsilon_{\rm ec}\sin(x_{\rm ch})\cos(x_{\rm ex}-\delta)
+\xi(t),\\
m_{\rm in} \ddot{x}_{\rm in} =&
\,K\cos(x_{\rm in}) \\
& +\frac{\varepsilon_{\rm ei} \cos(x_{\rm in})}
{1+\varepsilon_{\rm ei}}\,\,
\frac{\sin(x_{\rm ex})+A\sin(2x_{\rm ex})}{2f_A},\\
m_{\rm ch} \ddot{x}_{\rm ch} =&
-m_{\rm ch}\gamma_{\rm ch} \dot{x}_{\rm ch} +
\frac{E_{\rm ch}}{2\pi} \\
& +\varepsilon_{\rm ec}\cos(x_{\rm ch}) \sin(x_{\rm ex}-\delta).
\end{split}
\end{equation}
The random force $\xi(t)$ is defined by $\langle \xi(t)\rangle = 0$ and
$\langle \xi(t)\xi(0)\rangle = 2m_{\rm ex} \gamma_{\rm ex}kT\delta(t)$ at
temperature $T$. The parameters $\gamma_{\rm ex}$ and $\gamma_{\rm ch}$
are the friction coefficients for the external and the chemical
freedoms, respectively.
The numerical results presented in the following have been obtained by
solving Eqs.~(\ref{eqsmotion}) by Conventional Brownian Dynamics, see
Refs.~\cite{Oks98,BH98,BH99}. Typical parameter values used in our
simulations are summarized in Tab.~\ref{paramval}.
\begin{table}[h!]
\begin{center}
\begin{tabular}{lllllll}\hline
{\bf Parameter}&$m_{\rm ex}$&$m_{\rm in}$&$m_{\rm ch}$
&$K$&$\varepsilon_{\rm ec}$&$\varepsilon_{\rm ei}$
\\ \hline
{\bf Value}&$1$&$0.1$&$100$ &$0.1$&$1.0$&$1.0$
\\ \hline
\end{tabular}
\vspace*{0.4cm}
\begin{tabular}{lllllll}\hline
{\bf Parameter}&$A$&$kT$&$E_{\rm ch}$&$\delta$ &$\gamma_{\rm ex}$
&$\gamma_{\rm ch}$
\\ \hline
{\bf Value}&$0.25$&$1$&$100$&$4.53$&$0.1$&$1$
\\ \hline
\end{tabular}
\caption{Default values for the parameters of our model used wherever
not indicated otherwise.} \label{paramval}
\end{center}
\end{table}
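As an illustration of the numerical scheme, a bare-bones Euler--Maruyama step for Eqs.~(\ref{eqsmotion}) might look as follows. This is a sketch only, not the integrator of Refs.~\cite{Oks98,BH98,BH99}; $f_A$ is again taken as 1, and the parameter values follow Tab.~\ref{paramval}.

```python
import numpy as np

# Euler-Maruyama sketch of the equations of motion. The thermal force on
# x_ex has variance 2 m_ex gamma_ex kT / dt per step (fluctuation-
# dissipation); x_in is undamped and x_ch is noise-free, as in the model.
m = np.array([1.0, 0.1, 100.0])      # m_ex, m_in, m_ch
gam = np.array([0.1, 0.0, 1.0])      # gamma_ex, (no damping on x_in), gamma_ch
A, K, eps_ei, eps_ec = 0.25, 0.1, 1.0, 1.0
delta, E_ch, kT, f_A = 4.53, 100.0, 1.0, 1.0

def forces(x):
    x_ex, x_in, x_ch = x
    f_ex = ((1 + eps_ei * np.sin(x_in)) / (1 + eps_ei)
            * (np.cos(x_ex) + 2 * A * np.cos(2 * x_ex)) / (2 * f_A)
            + eps_ec * np.sin(x_ch) * np.cos(x_ex - delta))
    f_in = (K * np.cos(x_in)
            + eps_ei * np.cos(x_in) / (1 + eps_ei)
            * (np.sin(x_ex) + A * np.sin(2 * x_ex)) / (2 * f_A))
    f_ch = E_ch / (2 * np.pi) + eps_ec * np.cos(x_ch) * np.sin(x_ex - delta)
    return np.array([f_ex, f_in, f_ch])

def step(x, v, dt, rng):
    xi = np.array([np.sqrt(2 * m[0] * gam[0] * kT / dt)
                   * rng.standard_normal(), 0.0, 0.0])
    v = v + dt * (forces(x) / m - gam * v + xi / m)
    return x + dt * v, v

rng = np.random.default_rng(1)
x, v = np.zeros(3), np.zeros(3)
for _ in range(1000):
    x, v = step(x, v, 1e-3, rng)
```

Note that the large inertia $m_{\rm ch}=100$ together with the constant tilt $E_{\rm ch}/2\pi$ already produces the slow, steady advance of the chemical coordinate that the model relies on.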
\section{Dynamics and phase-space structure}
\label{dynamics}
The principal parameters determining the behaviour of our model are
$A$ (asymmetry of the ratchet potential), $\varepsilon_{\rm ei}$
(coupling external to internal freedom), $\varepsilon_{\rm ec}$
(coupling external to chemical freedom), $\gamma_{\rm ex}$,
$\gamma_{\rm ch}$ (friction) and temperature $T$. This quite
high-dimensional parameter space is delimited by the following
asymptotics:
\begin{enumerate}
\item \textit{Symmetric external potential} For $A \to 0$, the
simultaneous presence of symmetry-related counterpropagating
trajectory pairs identically cancels transport.
\item \textit{Internal freedom decoupled} In the case $\varepsilon_{\rm
ei} = 0$, the system reduces to a point-particle ratchet, broadly
documented in the literature in all its facets (overdamped,
underdamped with nontrivial dynamics beyond mere relaxation,
Hamiltonian), see \cite{Rei02}.
\item \textit{Chemical freedom decoupled} As we are dealing with an
autonomous system without external driving, decoupling the chemical
freedom deprives the ratchet of its energy source, so that transport
dies out. Even in the Hamiltonian case, a driving is necessary for
directed transport to occur, since there is then no other means to
break time-reversal invariance (apart from magnetic fields which
however play no r\^ole in a biochemical context).
\item \textit{Hamiltonian dynamics} In the limit $\gamma_{\rm
ex},\gamma_{\rm ch} \to 0$ of vanishing friction, the system becomes a
Hamiltonian ratchet with internal freedom. However, in the absence of
an asymmetric external driving or a magnetic field, the system is then
time-reversal invariant and directed currents are excluded.
\end{enumerate}
In the following, we consider in more detail a crucial aspect of the
dynamics, the structure of the attractor and its connectivity along
the lattice as a function of the ``driving force'', i.e., the coupling
to the chemical freedom, as well as friction and noise strength.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig2a.pdf}
\includegraphics[width=0.4\textwidth]{fig2b.pdf}
\caption{Attractor of the system (\protect\ref{eqsmotion}) projected
onto the plane $(p_{\rm ex},x_{\rm ex})$, for $\varepsilon_{\rm ec} =
1.0$ (a) and $7.1$ (b). The dotted line indicates the surface of section
underlying Fig.~\protect\ref{attrachem} below.
}\label{attracphase1}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig3a.pdf}
\includegraphics[width=0.4\textwidth]{fig3b.pdf}
\caption{As Fig.~\protect\ref{attracphase1} but for $\varepsilon_{\rm
ec} =6.0$ (a) and $2.5$ (b).
}\label{attracphase2}
\end{center}
\end{figure}
Figure \ref{attracphase1} shows the attractor, as a converged
trajectory projected onto the $(p_{\rm ex},x_{\rm ex})$-plane, for
$\varepsilon_{\rm ec} = 1.0$ (panel a) and $\varepsilon_{\rm ec} = 7.1$
(b). While for low coupling we discern a limit cycle, restricted to a
unit cell of the potential, for strong coupling it has given way to a
strange attractor which connects across the unit cell boundaries along
the entire lattice.
These two crucial ``crises'', though, the emergence of a strange
attractor and its change of topology from isolated to connected, are
independent of one another. This is evidenced in Fig.~\ref{attracphase2}a,
analogous to Fig.~\ref{attracphase1} but at intermediate values of the
coupling, where we observe a strange attractor still bounded within a
unit cell. Conversely, Fig.~\ref{attracphase2}b shows a limit cycle
extending periodically along the lattice.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig4.pdf}
\caption{Threshold value of $\varepsilon_{\rm ec}^*$ for the
transition from localized to extended attractors as a function of the
bath temperature $T$.
}\label{attracnoise}
\end{center}
\end{figure}
Besides their fundamental r\^ole in directed transport through the
``rectification of noise'' outside equilibrium, thermal fluctuations
affect the dynamics in a more conventional way by softening attractor
structures and thus contributing to a merger of attractors along the
lattice. Figure \ref{attracnoise} clearly demonstrates how noise tends
to lower the threshold $\varepsilon_{\rm ec}^*$ in terms of the
coupling to the chemical freedom for the transition to transport along
the system.
\section{Directed transport} \label{transport}
We here measure directed transport in terms of the mean velocity of
the external freedom,
\begin{equation} \label{current}
I=\left<\dot{x}_{\rm ex}\right> =
\lim_{t\to \infty} \frac{1}{t}
\left<x_{\rm ex}(t)-x_{\rm ex}(0)\right>,
\end{equation}
taking averages, e.g., over initial conditions within the same basin
of attraction or over realizations of a stochastic force.
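In practice, the limit in (\ref{current}) is approximated by a finite-time average, e.g.\ as below (Python; the synthetic drifting trajectory is a stand-in for actual simulation output).

```python
import numpy as np

def mean_current(x_ex, dt):
    """Finite-time estimate of I = <x_ex-dot>: net displacement / elapsed time."""
    return (x_ex[-1] - x_ex[0]) / (dt * (len(x_ex) - 1))

# Synthetic stand-in trajectory: drift 0.5 plus a bounded oscillation.
t = np.arange(10001) * 1e-2
traj = 0.5 * t + 0.1 * np.sin(t)
I = mean_current(traj, 1e-2)  # approaches the drift 0.5 for long times
```

Bounded oscillations within a unit cell drop out of the estimate as $t \to \infty$; only the net drift survives.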
The generation of non-zero currents in ratchet-like systems
depends on the following independent general necessary conditions:
\renewcommand{\theenumi}{\roman{enumi}}
\begin{enumerate}
\item \label{asymmetry} The breaking of all binary symmetries
involving an inversion of momentum, such as in particular time
reversal and parity \cite{Fla00}.
\item \label{rotation} The existence of trajectories extending over
different unit cells of the potential, or, if the external freedom
is cyclic, corresponding to rotational as opposed to librational
motion.
\end{enumerate}
Concerning condition (\ref{asymmetry}), in the absence of an external driving
or a coupling to a magnetic field, it is only the friction terms in
the equations of motion which break time-reversal invariance. It is
evident from Fig.~\ref{currentfric} that there is no transport for
vanishing friction $\gamma_{\rm ex} = 0$, and likewise for vanishing
asymmetry parameter $A = 0$ (not shown).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth, angle=0]{fig5.pdf}
\caption{Mean current (\protect\ref{current}) as a function of the
friction coefficient $\gamma_{\rm ex}$ for the external degree of
freedom, for $\varepsilon_{\rm ec} = 7.08$ (full line) and 5.35
(dashed).
}\label{currentfric}
\end{center}
\end{figure}
As to (\ref{rotation}), in the presence of friction, the existence of
persistent transporting trajectories is not guaranteed, in contrast
to the case of Hamiltonian ratchets where there is always an asymptotic
regime of quasi-free motion at high energies \cite{SDK05}. As discussed
above, the topology of attractors depends sensitively on couplings,
friction parameters, noise strengths etc. In the subsequent paragraphs,
we consider in detail the influence of these parameters on the
generation of directed currents in the system.
\subsection{Resonance between chemical and external
freedoms} \label{resonance}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth, angle=0]{fig6.pdf}
\caption{Mean current (\protect\ref{current}) as a function of the
coupling $\varepsilon_{\rm ec}$ between external and chemical freedom,
for different values of the coupling between external and internal
freedom, $\varepsilon_{\rm ei} = 0$ (full line), $0.5$ (bold dashed),
$1$ (light dashed).
}\label{currentchem}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth, angle=0]{fig7.pdf}
\caption{Attractor of the system (\protect\ref{eqsmotion}) in terms of
the surfaces of section indicated in Figs.~\protect\ref{attracphase1}
and \protect\ref{attracphase2} as a function of the
coupling $\varepsilon_{\rm ec}$. Upper part: Mean current
(\protect\ref{current}) (bold dashed) as a function of the same
parameter.
}\label{attrachem}
\end{center}
\end{figure}
In the absence of an external periodic driving, our model lacks a
precise clock that would lead to clear-cut resonances. However, the
high inertia of the chemical freedom and the
corresponding relatively regular injection of energy ``quanta''
define an approximate time scale that gives rise to resonance-like
phenomena. This is obvious from Fig.~\ref{currentchem}, where marked
peaks of the current occur at approximately equidistant values of the
external-to-chemical coupling, $\varepsilon_{{\rm ec},n} = n
\varepsilon_{{\rm ec},1}$, $n = 1,2,\ldots$, with $\varepsilon_{{\rm
ec},1} \approx 1.8$ in this case. For this value, there is a 1:1
resonance between the two coupled freedoms, i.e., in terms of
Fig.~\ref{potexch}, the system is moving along diagonal valleys in the
two-dimensional array of potential maxima.
However, this resonant behaviour does not amount to simple
oscillatory motion in $x_{\rm ex}$. As is evident from
Fig.~\ref{attrachem}, where we plot a section of the attractor,
cf.\ Fig.~\ref{attracphase1}, vs.\ $\varepsilon_{\rm ec}$, the peaks of
the current coincide precisely with intervals in $\varepsilon_{\rm
ec}$ where the attractor is fractal. What is more, the sharp
definition without shoulders of these peaks can be related to abrupt
changes in the attractor structure from limit cycle to chaotic and
back.
\subsection{Effects of noise} \label{noise}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth, angle=0]{fig8.pdf}
\caption{Mean current (\protect\ref{current}) as a function of the
bath temperature $T$. Inset: enlargement of the low-temperature regime
$0 \leq T \leq 1.5$.
}\label{currenttemp}
\end{center}
\end{figure}
Figure \ref{currenttemp} shows the current vs.\ noise strength in
terms of bath temperature $T$. As would be expected, too strong noise
washes out the structure of the attractor, in particular its
asymmetry, a necessary condition for directed transport to occur. The
inset reveals a more interesting phenomenon: For the other parameter
values in question kept constant, there is no extended attractor in
the absence of thermal fluctuations, so that the current vanishes at
$T = 0$. As a consequence, there exists a maximum, albeit broad, of
the current at a bath temperature $T \approx 1$, in analogy to
stochastic resonance \cite{GH&98}.
\subsection{R\^ole of the internal freedom} \label{internal}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig9.pdf}
\caption{Kinetic energy contained in the chemical (upper graph),
internal (middle) and external (lower graph) freedoms as a function of
time. The figure shows a particular case. Time dependence is slightly
smoothed to suppress irrelevant fluctuations. Arrows indicate typical
energy-transfer events from chemical to internal (dashed, green) and
from internal to external freedom (full, red). Small-scale
oscillations in the lower graph correspond to unit transport steps in
$x_{\rm ex}$. See text.
}\label{entrans}
\end{center}
\end{figure}
A first indication of the influence of the internal freedom can be
observed in Fig.~\ref{currentchem}, comparing the width of the current
peaks for different values of the external-internal coupling. They
systematically grow broader with increasing
$\varepsilon_{\rm ei}$. This means that off resonance, directed
transport is supported or even enabled in the first place by the
presence of the internal freedom.
Further insight into the underlying mechanism is provided by
Fig.~\ref{entrans}, where we plot the kinetic energy in the three
respective freedoms as a function of time. It shows a clear tendency
for a temporary storage of energy in the internal freedom: The system
receives energy almost periodically and in nearly equal portions from
the chemical freedom. It tends to accumulate in the internal freedom
(green arrow) till a threshold is reached, whereupon it is discharged
(red arrow) almost completely to the external freedom, giving rise to
a burst of transport steps. This ``toilet-flush'' behaviour is
well-known from neuron firing \cite{HH52}.
Comparing with Fig.~\ref{attrachem}, we observe moreover that the
onset of current with increasing $\varepsilon_{\rm ei}$ also
coincides with a structural change of the attractor from limit cycle
to strange. To be sure, two autonomous degrees of freedom are
sufficient to allow for chaotic behaviour. Notwithstanding, we can
conclude that another relevant r\^ole of the internal freedom is to
induce a richer, more stochastic dynamics of the system.
\section{Conclusion} \label{end}
The present work aims to reduce the gap between abstract models of
directed transport and detailed studies of specific motor molecules by
constructing a system that endows ratchets with a few essentials of
combustion motors, an internal degree of freedom and a coupling to an
autonomous chemical freedom as energy source instead of a
periodic external force, but ignoring the specifics of particular
(classes of) molecules. This allows us to analyze in
general terms the r\^ole of the internal freedom in the generation of
transport. We find convincing evidence that it serves as an interface
which effectively decouples the output of mechanical work in the form
of transport steps from the energy input through hydrolysis or similar
reactions. We can observe directly how energy is being stored
temporarily in the internal degree of freedom before being channeled
to the external freedom. In addition, the internal freedom induces
transitions towards a more irregular dynamics of the external freedom,
which also contributes to the generation of directed currents.
These results direct attention to processes of energy transfer and
dissipation within the molecule, and in particular indicate the
functional advantage of a cascaded energy redistribution through
intermediate steps \cite{Gar97}, refining the simple picture of an
external (transport) degree of freedom coupled directly to a
structureless bath. Even within the scope of our three-dimensional
model, such questions could be studied by changing the configuration
of couplings, e.g., coupling the external to the chemical freedom only
through the internal one instead of internal--external--chemical as in
the present work.
Subtler effects of noisy nonlinear dynamics like stochastic resonance
\cite{GH&98} have not been considered in depth. In the context of our
model, it is expected to occur in a parameter regime of weak noise
close to a resonance of the chemical with the external freedom, cf.\
Sec.~\ref{resonance}, lowering the threshold to directed transport as
a function of the coupling to the chemical freedom. Quantum effects
have not been taken into account, either. We expect relevant
modifications of the picture, if any, for the dynamics of the chemical
degree of freedom and the injection of chemical energy into functional
modes where the discretization of energy and quantum coherence might
play a major r\^ole.
\section*{Acknowledgements}
We would like to thank Camilo Aponte, Alfonso Leyva, and Jos\'e Daniel
Mu{\~n}oz for inspiring discussions and valuable bibliographical
information. One of us (TD) gratefully acknowledges financial support
by Colciencias and by Volkswagen Foundation during preparation of this
work and thanks for the hospitality extended to him by Rensselaer
Polytechnic Institute (Troy, NY, USA).
\section{Introduction}
\subsection{Main results}
This paper introduces a new kind of average-case isoperimetric inequality. Given
a $k$-cycle $Z$ on $([0,1]^n,\partial[0,1]^n)$, in any of a number of geometric
measure theory senses, its \emph{filling volume} $FV(Z)$ is the minimal mass of a
chain whose boundary is $Z$. The well-known Federer--Fleming isoperimetric
inequality \cite{FF} states that for all $k$-cycles $Z$,
\[FV(Z) \leq C_{n,k}\mass(Z)^{\frac{k+1}{k}}\quad\text{and}\quad
FV(Z) \leq C_{n,k}\mass(Z).\footnote{The first inequality dominates when
$\mass(Z)\ll 1$, the second when $\mass(Z)\gg 1$.}\]
However, one might expect that \emph{most} cycles of given mass are much easier
to fill.
Unfortunately, as explained to the author by Robert Young, a geometrically
meaningful probability measure on the space of all cycles of mass $\leq N$ may be
too much to ask for. The issue is one of picking a scale: say we are trying to
build a random 1-Lipschitz curve in a finite-dimensional space. If the curve is
to fluctuate randomly at scale $\varepsilon$, then over time $1$ it will only travel a
distance on the order of $\sqrt{\varepsilon}$. Thus there is no way of ensuring
random behavior in a scale-free way. This idea of decomposing a finite-mass
cycle into pieces at different scales can be made precise using the notion of a
\emph{corona decomposition}, as in \cite{Jones} (in dimension 1) and \cite{Young}
(in higher dimensions).
On the other hand, there are a number of ways of, and a number of motivations
for, building random ``space-filling'' cycles of mass $O(N)$ which look
essentially trivial on balls of radius $N^{-1/n}$. Our main theorems characterize
three models of this form which exhibit similar isoperimetric behavior, and we
hypothesize that this behavior should be generic for models whose randomness
occurs mainly at large scales---an idea which may have a precise Fourier-analytic
formulation.
This isoperimetric behavior is described in codimension $d$ by the function
\[\AKT_d(N)=\begin{cases}
\sqrt{N} & \text{if }d=1 \\
\sqrt{N\log N} & \text{if }d=2 \\
N^{(d-1)/d} & \text{if }d \geq 3. \\
\end{cases}\]
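In code form this is a trivial transcription (the natural logarithm is assumed in the $d=2$ case):

```python
import math

def akt(d, N):
    """The filling-volume growth rate AKT_d(N) in codimension d."""
    if d == 1:
        return math.sqrt(N)
    if d == 2:
        return math.sqrt(N * math.log(N))
    return N ** ((d - 1) / d)
```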
\begin{thmA} \label{sphere}
Let $Z$ be a $k$-cycle on $S^n$ obtained by sampling $N$ oriented great
$k$-spheres independently from the uniform distribution on the oriented
Grassmannian $\widetilde\Gr_{k+1}(\mathbb{R}^{n+1})$. Then there are constants
$C>c>0$ depending on $n$ and $k$ such that
\begin{equation} \label{AKTstats:E}
c\AKT_{n-k}(N)<\mathbb{E}(FV(Z))<C\AKT_{n-k}(N).
\end{equation}
Moreover, $FV(Z)$ is concentrated around its mean: there are constants
$C_1,C_2>0$ depending on $n$ and $k$ such that
\begin{equation} \label{AKTstats:V}
\text{for every $r>0$, }
\mathbb{P}[\lvert FV(Z)-\mathbb{E}(FV(Z))\rvert \geq r]
\leq C_1\exp(-C_2\sqrt{N}r).
\end{equation}
\end{thmA}
From \eqref{AKTstats:V} we see that the spread of the distribution around the
mean is at most on the order of $\sqrt{N}$; in codimensions $d=n-k \geq 2$, this
is small compared to the mean:
\[\text{for every $\varepsilon>0$, }
\frac{\lvert FV(Z)-\mathbb{E}(FV(Z))\rvert}{\mathbb{E}(FV(Z))}
\leq \varepsilon \text{ with high probability as }N \to \infty.\]
If a model of random codimension-$d$ cycles of mass $O(N)$ satisfies
\eqref{AKTstats:E} and \eqref{AKTstats:V}, we say it
\emph{exhibits AKT statistics}, in honor of Ajtai, Koml\'os, and Tusn\'ady, who
discovered this phenomenon in the setting $k=0$.
By rescaling the picture, we can make this result more interpretable. Let
$R=N^{1/(n-k)}$. Then the corresponding process in the $n$-sphere of radius $R$
generates a cycle of mass $\Theta(R^n)$ which is evenly spread throughout the
sphere, so that a unit ball intersects one of the great $k$-spheres of $Z$ on
average. With that rescaling, the mass of an optimal filling becomes
\[\left\{\begin{array}{l l}
\Theta(R^n\sqrt{R}) & \text{if }k=n-1 \\
\Theta(R^n\sqrt{\log R}) & \text{if }k=n-2 \\
\Theta(R^n) & \text{if }k \leq n-3. \\
\end{array}\right.\]
Informally speaking, to meet its match, the average point in $Z$ has to travel a
distance $\Theta(\sqrt{R})$ (in codimension 1), $\Theta(\sqrt{\log R})$ (in
codimension 2), or $\Theta(1)$ (otherwise) times the distance to its closest
neighbor.
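These rates can be read off from Theorem \ref{sphere}: rescaling by $R$
multiplies the mass of a $(k+1)$-dimensional filling by $R^{k+1}$, and
$N=R^{n-k}$. In codimension $2$, for example,
\[R^{k+1}\AKT_2(N)=R^{k+1}\sqrt{R^2\log(R^2)}
=\Theta(R^{k+2}\sqrt{\log R})=\Theta(R^n\sqrt{\log R}),\]
and the other two cases follow by the same substitution.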
We also prove similar results for the cube:
\begin{thmA} \label{cube}
Let $Z$ be a relative $k$-cycle on $([0,1]^n,\partial[0,1]^n)$ obtained by
sampling $N$ planes independently from the uniform distribution on the space
$Y$ of oriented $k$-planes which intersect $[0,1]^n$ nontrivially. Then $Z$
exhibits AKT statistics.
\end{thmA}
There may be reasonable disagreement as to which distribution is the uniform one
in this context; to prove \eqref{AKTstats:E} it suffices to require that it be
uniform on each subset of $Y$ (isometric to two copies of a polytope) consisting
of parallel planes, but to prove \eqref{AKTstats:V} we also need to assume that
it behaves reasonably with respect to the manifold structure on $Y$ (for example,
has a positive density or has finite support).
In fact, the only thing used here about $Z$ is that almost all of its ``slices''
along coordinate $k$-planes consist of $O(N)$ independent uniformly distributed
points. This means that there are a number of other possible models that can be
fit into this framework. However, the following requires a separate proof:
\begin{thmA} \label{knot}
Let $\{M_N\}$ be a sequence of $k$-dimensional oriented pseudomanifolds with
$N$ vertices and at most $L$ simplices incident to any given simplex. Let $Z$
be a $k$-cycle on $[0,1]^n$ obtained by sending each vertex of $M_N$ to a
uniformly random point in $[0,1]^n$ and extending linearly. Then $Z$ exhibits
AKT statistics.
\end{thmA}
In the context of this theorem, the constants in \eqref{AKTstats:E} depend on
$n$, $k$, and $L$, but not on the shapes of the pseudomanifolds (which can
therefore also be randomized). The case $k=1$, $n=3$ describes the ``random
jump'' model of random knots and links introduced by Millett \cite{Millett}.
Moreover, by a theorem of Hardt and Simon \cite{HS}, the optimal filling of such
a knot or link (after a slight rounding of corners) is a $C^1$ embedded surface.
In particular:
\begin{cor}
For some $C>c>0$, the minimal Seifert surface of a knot produced using $N$
random jumps has area between $c\sqrt{N\log N}$ and $C\sqrt{N\log N}$ with
high probability.
\end{cor}
\subsection{Motivation}
The methods we use to prove Theorems \ref{sphere}, \ref{cube}, and \ref{knot} can
be easily extended to other i.i.d.\ samples of simple shapes on various spaces.
However, the investigation is mainly motivated by the desire to analyze
topological invariants of random geometric objects such as links and maps.
Models of such objects tend to produce random cycles which are similarly trivial
at small scales, but are otherwise more difficult to work with.
\subsubsection*{Random knots and links}
There have been a number of proposed models of random knots and links; see
\cite{EZ} for a detailed survey. Several of these models are ``spatial'' in the
sense that they produce random knotted curves in space, and one supposes that
these may exhibit AKT statistics for filling area. As mentioned above, we show
this for Millett's random jump model, but it may also be true for random
polygonal walks with shorter segments as well as random grid walks, perhaps with
some restrictions on segment length.
Given two random curves in a certain model, one may want to understand the
distribution of their linking number. Since this will usually be zero on
average, the first interesting question is about the second moment. Linking
number can be computed as the intersection number of one curve with a filling of
the other, thus one may expect that two random curves of length $N$ which exhibit
AKT statistics have expected squared linking number $\sim N\log N$ or
$\sim N\sqrt{\log N}$.
However, this is not the case for the Millett model: the second moment of the
linking number between two random jump curves of length $N$ is $\sim N$
\cite{ABDKS,FlKo}. Similarly, one may take the setup of Theorem \ref{sphere} for
$k=1$ and $n=3$ as a model of a random link and try to understand the
\emph{total linking number}, that is, the sum of the signed linking numbers of
all pairs of circles. This is then the intersection number of the chain with its
own filling. Here it is easy to see that the second moment of the distribution
is once again $\sim N$.
In both cases, this seeming incongruity perhaps boils down once again to the
issue of multiple scales: random jump curves and great circles only ``see'' the
largest scales, but the lower bound on filling volume in codimension 2 comes from
looking on many different scales at once. One may perhaps get a different answer
most easily by analyzing the linking number of an asymmetric model: a random jump
curve and a random walk of total length $N$ made of smaller segments.
In \cite{Tanaka,Marko}, the second moment is computed for the linking number of
two random walks; normalizing so that these walks have length $N$ and expected
diameter $1$, this second moment again becomes $\sim N$. In this model, however,
randomness happens at scale $\sim 1/N$, so it is not expected to exhibit AKT
statistics.
\subsubsection*{Random maps}
Another way of producing a random (framed) link is as the preimage of a generic
point under a random map $f:S^3 \to S^2$. In fact, the self-linking number of
this link is the Hopf invariant of the map, which is itself a natural subject for
investigation since it is a complete topological invariant of such maps.
One natural model of $L$-Lipschitz random maps is a uniformly random
\emph{simplicial} map from a triangulation of $S^3$ at scale $\sim L$ to a
tetrahedron. The maximal self-linking number of such a map is $\Theta(L^4)$; on
the other hand, the heuristics above would suggest that the second moment of the
linking number of the random model is between $L^3$ and $L^3\log L$.
These ideas may have applications in topological and geometric data analysis;
see \cite{FKW}.
\subsection{Methods}
The $k=0$ cases of Theorems \ref{sphere} and \ref{cube} are, up to minor
adjustments, a classical theorem in combinatorial probability:
\begin{thm}[Ajtai, Koml\'os, and Tusn\'ady \cite{AKT}] \label{thm:AKT}
Let $\{X_1,\ldots,X_N\}$ and $\{Y_1,\ldots,Y_N\}$ be two sets of independent,
uniformly distributed random points in $[0,1]^d$, and let $L$ be the
\emph{transportation cost} between $\{X_i\}$ and $\{Y_i\}$, that is, the total
length of an optimal matching. Then there are constants $0<c_d<C_d$
such that with high probability,
\[c_d\AKT_d(N)<L<C_d\AKT_d(N).\]
\end{thm}
Since the original geometric proof in \cite{AKT} of the most subtle case $d=2$,
this and related results have been proved many other times, often by applying
Fourier analysis; see \cite{BobL} for further references and \cite{TalBk} for a
detailed treatment of certain analytic approaches. Another beautiful geometric
proof of the upper bound on the sphere is due to Holden, Peres, and Zhai
\cite{HPZ}.
The proofs of Theorems \ref{sphere} and \ref{cube} in general are obtained by
applying the $k=0$ results to $(n-k)$-dimensional slices of the cube and sphere.
This is why the results depend only on the codimension, and why codimension 2 is
critical. The lower bound in \eqref{AKTstats:E} is
obtained directly by integrating the lower bounds on these slices. The upper
bound is obtained via a dual result on differential forms; this kind of technique
was already used in \cite{AKT} for the proof of the lower bound for the square.
Finally, \eqref{AKTstats:V} is proved using the notion of concentration of
measure due originally to Gromov and Milman \cite{GroMi}; see \cite{Ledoux} for
an extensive modern treatment.
Theorem \ref{knot} is proved similarly, except that slices no longer consist of
i.i.d.\ points. Even this small amount of dependence complicates the argument
considerably. We use ad hoc combinatorial arguments to overcome this, but one
might hope to generalize, for example by applying a variant of Stein's method, to
a version of Theorem \ref{thm:AKT} in the presence of dependence (one approach,
which only gives upper bounds, is discussed in \cite[\S5]{BobL}).
\subsection*{Structure of the paper}
Section \ref{S2} introduces necessary ideas and results from geometric measure
theory, and Section \ref{S3} discusses the classical AKT theorem. In Sections
\ref{S:upper} and \ref{S:lower}, the upper and lower bounds in Theorems
\ref{sphere} and \ref{cube} are proved using tools that may generalize to other
models of random cycles. In Section \ref{S:knot}, we discuss the extra ideas
needed to prove Theorem \ref{knot}. Finally, Section \ref{S:concentration}
discusses the concentration of the distributions in these theorems around their
mean.
\subsection*{Acknowledgements}
I would like to thank Matthew Kahle and Robert Young for a large number of
helpful discussions over a span of three years. Yevgeny Liokumovich provided a
crucial reference; Shmuel Weinberger asked a question which inspired Theorem
\ref{knot} and gave other helpful comments. I was partially supported by NSF
individual grant DMS-2001042.
\section{Definitions and preliminaries} \label{S2}
\subsection{Cycles and currents}
There are a number of useful ways to define chains and cycles from the point of
view of topology and geometric measure theory. Algebraic topology typically uses
singular $k$-chains: formal linear combinations of continuous maps from the
$k$-simplex to a topological space $X$ (``singular simplices''). We will usually
restrict our attention to Lipschitz simplices (that is, requiring the maps to be
Lipschitz) on a Riemannian manifold $M$. By Rademacher's theorem, a Lipschitz
simplex $\sigma:\Delta^k \to M$ is differentiable almost everywhere and so has a
well-defined volume or \emph{mass},
\[\mass(\sigma)=\int_{\Delta^k} \sigma^*d\vol_M.\]
We can then extend by linearity to define the mass of a Lipschitz chain.
A more general notion of chain is that of a normal current. A $k$-dimensional
\emph{current} on a manifold $M$ is simply a functional on (smooth) differential
forms, which we think of as integration over the current. For example:
\begin{itemize}
\item Every Lipschitz chain $T$ defines a current via
$\omega \mapsto \int_T \omega$.
\item Every compactly supported $(n-k)$-form $\alpha \in \Omega^{n-k}(M)$ defines
a current via $\omega \mapsto \int_M \alpha \wedge \omega$.
\end{itemize}
The boundary operator is defined via Stokes' theorem: for a current $T$,
\[\partial T(\omega)=T(d\omega).\]
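For example, if $T_\alpha$ is the $k$-current defined by a compactly supported
form $\alpha \in \Omega^{n-k}(M)$ as above, then for every
$\omega \in \Omega^{k-1}(M)$,
\[\partial T_\alpha(\omega)=\int_M \alpha \wedge d\omega
=(-1)^{n-k+1}\int_M d\alpha \wedge \omega,\]
so that $\partial T_\alpha=\pm T_{d\alpha}$.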
The \emph{mass} of a $k$-current $T$ on $M$, which agrees with the same notion on
Lipschitz chains, is defined to be
\[\mass(T)=\sup \{T(\omega) : \omega \in \Omega^k(M)\text{ and }
\lVert\omega\rVert_\infty \leq 1\}.\]
Here $\lVert\omega\rVert_\infty$ is the supremum of the values of $\omega$ on all
frames of unit tangent vectors. For a general current, the mass of course need not
be finite. A current $T$ is \emph{normal} if $T$ and $\partial T$ both have finite
mass; in particular, any cycle (current with empty boundary) of finite mass is
normal.
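For example, if $T$ is given by integration over a compact oriented
$k$-submanifold $X \subset M$ (possibly with boundary), then evaluating $T$
against forms which restrict to the volume form of $X$ shows that
$\mass(T)=\vol_k(X)$; since $\partial T$ is integration over $\partial X$, such a
$T$ is always normal.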
\subsection{Fillings and duality}
Now, if $S$ is a normal current such that $\partial S=T$, we call it a
\emph{filling} of $T$. The \emph{filling volume} of $T$ is
\[FV(T)=\inf \{\mass(S) \mid \partial S=T\},\]
which is always finite by the work of Federer and Fleming. The following is an
instance of the Hahn--Banach theorem:
\begin{prop} \label{duality}
Let $M$ be a manifold. Then a normal $k$-current $T$ in $M$ with
$\partial T=0$ has a filling of mass $c$ if and only if for every
$\omega \in \Omega^k(M)$ with $\lVert d\omega\rVert_\infty \leq 1$,
$\int_T \omega \leq c$.
More generally, for any closed set $A \subset M$, write $\Omega^k(M,A)$ for the
vector space of forms whose restriction to $A$ is zero. Let $T$ be a normal
$k$-current with $\partial T$ supported on $A$, that is, such that
$\int_{\partial T} \alpha=0$ for any $(k-1)$-form $\alpha \in \Omega^{k-1}(M,A)$.
Then $T$ has a filling relative to $A$ (that is, a $(k+1)$-current $S$ such
that $\partial S-T$ is supported on $A$) of mass $c$ if and only if for every
$\omega \in \Omega^k(M,A)$ with $\lVert d\omega\rVert_\infty \leq 1$,
$\int_T \omega \leq c$.
\end{prop}
In other words, the filling volume of a cycle $T$ can be redefined as
\[FV(T)=\sup \bigl\{\textstyle{\int_T \omega} \mid \omega \in \Omega^k(M)
\text{ such that }\lVert d\omega \rVert_\infty \leq 1\bigr\}\]
in both the absolute and the relative case. Our proofs of the upper bounds in
Theorems \ref{sphere} and \ref{cube} will be based on this proposition rather
than constructing fillings directly.
Of course, knowing that a nice Lipschitz cycle has a filling which is a normal
current is not very satisfying---after all, normal currents can still be very
strange. Luckily, given a normal current filling, we can upgrade it to a
Lipschitz chain (at the cost of multiplying the mass by a constant) using the
following classical theorem:
\begin{thm}[{Federer--Fleming deformation theorem \cite[Thm.~5.5]{FF}}]
There is a constant $\rho(k,n)=2n^{2k+2}$ such that the following holds. Let
$T$ be a normal current in $\mathbf{N}_k(\mathbb{R}^n)$. Then for every
$\varepsilon>0$ we can write $T=P+Q+\partial S$, where
\begin{enumerate}
\item $\mass(P) \leq \rho(k,n)\mass(T)$.
\item $\mass(Q) \leq \varepsilon\rho(k,n)\mass(\partial T)$.
\item $\mass(S) \leq \varepsilon\rho(k,n)\mass(T)$.
\item $P$ is a polyhedral cycle which can be expressed as an
$\mathbb{R}$-linear combination of $k$-cells in the cubical unit
lattice in $\mathbb{R}^n$.
\item If $T$ is a Lipschitz chain, then so are $Q$ and $S$.
\item If $\partial T$ is a Lipschitz chain, then so is $Q$.
\end{enumerate}
\end{thm}
If $T$ is a normal current filling a Lipschitz chain $\partial T$, then $P+Q$ is
a Lipschitz chain filling $\partial T$ whose mass is greater than that of $T$ by
at most a multiplicative constant $\rho(k,n)$ (plus an error that vanishes as
$\varepsilon \to 0$).
It is not hard to upgrade the deformation theorem to manifolds, although the
resulting constants will depend on the manifold and its metric; see for example
\cite[Theorem 10.3.3]{ECHLP}.
\subsection{Slicing}
An important property of normal currents, introduced in \cite[\S3]{FF}, is the
ability to take ``slices'' by hyperplanes. We follow the exposition of F.~Morgan
\cite[4.11]{Morgan}, who follows Federer \cite[\S4.2.1]{FedBk}.
Let $T$ be a normal $k$-current on a manifold $M$. Given a Lipschitz function
$u:M \to \mathbb{R}$, for almost all $r \in \mathbb{R}$ there is a
$(k-1)$-current $T \cap \{u(x)=r\}$ with the following properties:
\begin{enumerate}
\item If $T$ is defined by integration over a (rectifiable) set $X \subset M$,
  then $T \cap \{u(x)=r\}$ is defined by integration over $X \cap \{u(x)=r\}$.
\item $\partial T \cap \{u(x)=r\}=\partial(T \cap \{u(x)=r\})$.
\item $\mass(T) \geq \frac{1}{\Lip u}\int_{-\infty}^\infty
\mass(T \cap \{u(x)=r\})dr.$
\end{enumerate}
In particular, by inductively slicing in different directions, we get the
following:
\begin{prop} \label{slice-coarea}
Let $1 \leq k \leq m \leq n$, and let $T$ be an $m$-dimensional normal current on
$[0,1]^n$. Given $\vec x=(x_1,\ldots,x_k) \in \mathbb{R}^k$, let $P_{\vec x}$ be
the plane $\{\vec x\} \times \mathbb{R}^{n-k}$. Then for almost all $\vec x$,
there is a well-defined $(m-k)$-dimensional current $T \cap P_{\vec x}$ such
that
\begin{gather*}
\partial(T \cap P_{\vec x})=\partial T \cap P_{\vec x} \\
\mass(T) \geq \int_{[0,1]^k} \mass(T \cap P_{\vec x})d\vec x.
\end{gather*}
\end{prop}
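As a sanity check, take $T$ to be the current given by integration over
$[0,1]^n$ itself (so $m=n$): each slice $T \cap P_{\vec x}$ is integration over
$\{\vec x\} \times [0,1]^{n-k}$, and
\[\mass(T)=1=\int_{[0,1]^k} \mass(T \cap P_{\vec x})\,d\vec x,\]
so the coarea inequality holds with equality in this case.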
\section{A variation on the Ajtai--Koml\'os--Tusn\'ady theorem} \label{S3}
The results of this paper are a generalization of Theorem \ref{thm:AKT}.
Properly speaking, the theorem of Ajtai, Koml\'os, and Tusn\'ady \cite{AKT} covers
the case $d=2$; their paper also asserts the case $d \geq 3$, which is easier and
was later proved and extended in several directions by Talagrand
\cite{TalHiD,TalHiD2}. The $d=1$ case is elementary; its proof, along with a vast
array of strengthenings and generalizations, can be found in \cite{BobLMem} by
Bobkov and Ledoux. Here we need a slight variation.
\begin{thm} \label{AKT:boundary}
Generate a cycle $Z$ of mass $N$ in $C_0([0,1]^n,\partial[0,1]^n)$ by selecting
$N$ independent, uniformly distributed points in $[0,1]^n \times \{+1,-1\}$.
Then there are constants $0<c_n<C_n$ such that
\[c_n\AKT_n(N) \leq \mathbb{E}(FV(Z)) \leq C_n\AKT_n(N).\]
\end{thm}
\begin{rmk} \label{AKT:diffeo}
Suppose that $D$ is a Riemannian ball diffeomorphic to $[0,1]^n$, equipped with
its Riemannian volume form. Then by the main theorem of \cite{BMPR}
(extending results of
Moser \cite{Moser} and Banyaga \cite{Banyaga}), there is a diffeomorphism
between the two which multiplies the volume form by a constant. Therefore
Theorem \ref{AKT:boundary} also holds with respect to Lebesgue measure on $D$,
with constants $0<c_D<C_D$ depending on the ratio of the volumes and the
bilipschitz constant of this diffeomorphism.
Moreover, given a smooth family of Riemannian balls, \cite{BMPR} indicates that
there is a smooth family of such diffeomorphisms. Therefore, if the family is
compact, one can find uniform constants for the whole family.
\end{rmk}
\begin{proof}
There are two differences here from the results as they are typically presented
in the probability literature, where the problem consists of matching two sets
of random points of the same cardinality: first, the number of positive and
negative points may not match; second, we are allowed to match points to the
boundary as well as to points of the opposite orientation.\footnote{In fact, a
somewhat similar, but more complicated modification was studied by Shor
\cite{Shor}.} We briefly explain how to modify the original proofs to deal
with this.
Clearly, the possibility of matching to the boundary cannot make the upper
bounds worse. Let's say without loss of generality there are more positive
points. To obtain the upper bound for $n \geq 2$, we may simply ignore some
arbitrary set of ``extra'' positive points, matching all the others first. By
the central limit theorem, the expected number of extra points is
$O(\sqrt{N})$, so the extra mass generated by matching them all to the boundary
of the cube does not change the asymptotic answer.
For the lower bound in the case $n=2$, we use the same stratagem of ignoring
the ``extra'' points to create a new cycle $Z'$. From the original proof in
\cite{AKT}, we know that there is a 1-Lipschitz function
$f:[0,1]^2 \to \mathbb{R}$ which is zero on $\partial[0,1]^2$ and such that
$\int_{Z'} f \geq c\sqrt{N\log N}$ with high probability. Since with high
probability the number of extra points is $\ll\sqrt{N\log N}$, and the values of
$f$ lie between $-1/2$ and $1/2$, we also know that
$\int_Z f \geq c\sqrt{N\log N}$ with high probability.
The lower bound in the case $n \geq 3$ is easy to see: conditional on any
distribution of the positive points, most negative points will be at distance
$\geq cN^{-1/n}$ from every positive point and the boundary, where $c>0$ is a
constant depending on $n$.
In the case $n=1$, the filling is unique up to a constant: the unique filling
$F$ supported away from zero has density $\int_0^x Z$ at $x \in [0,1]$. We use
arguments found in \cite[\S3]{BobLMem} to give estimates on
$\mathbb{E}(\mass F)$.
The upper bound is a simple calculation:
\begin{align*}
\mathbb{E}(\mass F) &= \int_0^1
\mathbb{E}\bigl(\bigl\lvert\textstyle{\int_0^x Z}\bigr\rvert\bigr) dx \\
&\leq \int_0^1 \sqrt{\Var(\textstyle{\int_0^x Z})}dx
=\int_0^1 \sqrt{Nx}\,dx=\frac{2\sqrt{N}}{3}.
\end{align*}
The lower bound comes from the following classical fact, found in
\cite{BobLMem} as Lemma 3.4:
\begin{lem}
Given independent mean zero random variables $\xi_1,\ldots,\xi_N$,
\[\mathbb{E}\biggl(\biggl\lvert \sum_{k=1}^N \xi_k\biggr\rvert\biggr) \geq
\frac{1}{2\sqrt 2}\mathbb{E}\biggl(\biggl(
\sum_{k=1}^N \xi_k^2\biggr)^{1/2}\biggr).\]
\end{lem}
Let $(X_k,\sigma_k) \in [0,1] \times \{+1,-1\}$ be the $k$th chosen point.
Then applying the lemma to $\xi_k=\sigma_k \chi_{\{X_k \leq x\}}$, we get
\begin{align*}
\mathbb{E}\bigl(\bigl\lvert\textstyle{\int_0^x Z}\bigr\rvert\bigr)
&\geq \frac{1}{2\sqrt{2}}\mathbb{E}\biggl(\biggl(
\sum_{k=1}^N \xi_k^2\biggr)^{1/2}\biggr) \\
&\geq \frac{1}{2\sqrt{2}}\biggl(
\sum_{k=1}^N (\mathbb{E}(|\xi_k|))^2\biggr)^{1/2}=\frac{1}{2\sqrt 2}\sqrt{N}x,
\end{align*}
and therefore $\mathbb{E}(\mass F) \geq \sqrt{N/32}$.
\end{proof}
\section{Proof of the upper bound} \label{S:upper}
To prove the upper bound in Theorems \ref{sphere} and \ref{cube}, we will use
Stokes' theorem; that is, we use the fact that for a cycle $Z \in C_k(M,A)$,
\begin{equation} \label{eqn:sup}
FV(Z)=\sup\left\{\textstyle{\int_Z} \alpha : \alpha \in \Omega^k(M,A)
\text{ such that }\lVert d\alpha \rVert_\infty=1\right\}.
\end{equation}
In fact, since $Z$ is a cycle, $\int_Z \alpha$ only depends on $\omega=d\alpha$.
To bound this quantity, we first note that any $\omega \in \Omega^{k+1}([0,1]^n)$
can be decomposed into a sum of ``basic'' forms of the form
\[\omega_I(x)dx_{i_1} \wedge \cdots \wedge dx_{i_{k+1}},\]
where $\omega_I$ is a function $\mathbb{R}^n \to \mathbb{R}$, for each subset
$\{i_1,\ldots,i_{k+1}\} \subset \{1,\ldots,n\}$.
\begin{lem} \label{lem:coIP}
For any exact form $\omega \in \Omega^{k+1}([0,1]^n,\partial [0,1]^n)$
(resp., $\omega \in \Omega^{k+1}([0,1]^n)$), there is
a form $\alpha \in \Omega^k([0,1]^n,\partial [0,1]^n)$ (resp.,
$\alpha \in \Omega^k([0,1]^n)$) given by
\[\alpha=\sum_{\substack{I \subset [n]\\|I|=k}} \alpha_I(x)dx_I,\]
such that $d\alpha=\omega$, and for each $I$,
$\lVert \alpha_I \rVert_{\Lip} \leq C_{n,k}\lVert\omega\rVert_\infty$.
\end{lem}
\begin{proof}
We prove this by induction on $n$ and $k$, keeping $n-k$ constant. The case
$k=0$ is clear, since then $\omega$ is the zero function.
To do the inductive step in the relative case, we follow the usual proof of the
Poincar\'e lemma with compact support, following \cite[\S1.4]{BottTu}. Fix a
smooth bump function $\varepsilon:[0,1] \to [0,1]$ which is 0 near 0 and 1 near 1.
By applying the lemma one dimension lower, we get a form
$\eta \in \Omega^{k-1}([0,1]^{n-1},\partial [0,1]^{n-1})$ with
$d\eta=\int_0^1 \omega$ and
$\lVert \eta_I \rVert_{\Lip} \leq C_{n-1,k-1}\lVert\omega\rVert_\infty$. Then
$\omega=d\alpha$ for
\[\alpha={\textstyle \int_0^t \omega}
-\varepsilon(x_n)\pi^*({\textstyle \int_0^1\omega})
-d\varepsilon(x_n) \wedge \pi^*\eta,\]
where $\pi$ is the projection to the $(n-1)$-cube along $x_n$. Notice that
\[\alpha_I=\begin{cases}
-{\displaystyle\frac{d\varepsilon}{dx}}\eta_{I \setminus \{n\}} & \text{if }n \in I \\
\int_0^t \omega_{I \cup \{n\}}-\varepsilon(x_n)\pi^*(\int_0^1 \omega_{I \cup \{n\}}) &
\text{otherwise.}
\end{cases}\]
This gives us a bound on the $\lVert\alpha_I\rVert_{\Lip}$ in terms of the
$\lVert\omega_I\rVert_{\Lip}$ and $\lVert\eta_I\rVert_{\Lip}$ as well as the
derivatives of $\varepsilon$.
For the non-relative version, we follow the same proof, mutatis mutandis,
taking
\[\alpha={\textstyle \int_0^t \omega}-\pi^*\eta. \qedhere\]
\end{proof}
\begin{thm} \label{upperE}
Let $Z$ be a random $k$-cycle in $([0,1]^n,\partial[0,1]^n)$ or in $[0,1]^n$
such that for some $M>0$,
\begin{equation} \label{E-condition}
\mathbb{E}(FV(Z \cap P)) \leq M
\end{equation}
for almost all coordinate $(n-k)$-planes $P$ (in particular, $Z \cap P$ is
almost always a well-defined zero-cycle). Then
\begin{equation}
\mathbb{E}(FV(Z)) \leq {n \choose k}C_{n,k}M,
\end{equation}
where $C_{n,k}$ is the constant from Lemma \ref{lem:coIP}.
\end{thm}
\begin{proof}
For a $k$-form $\alpha$, $\int_Z \alpha$ depends only on $d\alpha$. Therefore
to estimate \eqref{eqn:sup} it is enough to show that for any $(k+1)$-form
$\omega$ with $\lVert\omega\rVert_\infty=1$, there is a $k$-form $\alpha$ such
that $d\alpha=\omega$ and $\int_Z \alpha \leq C_{n,k}M$.
By Lemma \ref{lem:coIP}, we can choose
\[\alpha=\sum_{\substack{I \subset [n]\\|I|=k}} \alpha_I(x)dx_I\]
such that $\lVert d\alpha_I \rVert_\infty \leq C_{n,k}$ for every $I$. Then for
$\alpha$ ranging over all these choices of antidifferentials,
\begin{align*}
FV(Z) &= \sup_\alpha \int_Z \alpha
= \sup_\alpha \sum_{\substack{I \subset [n]\\|I|=k}} \int_{\mathbb{R}^{I^c}}
\int_{Z \cap P_u} \alpha_Idu \\
&\leq \sum_{\substack{I \subset [n]\\|I|=k}} \int_{\mathbb{R}^{I^c}}
\left(\sup_\alpha\int_{Z \cap P_u} \alpha_I\right)du
\leq \sum_{\substack{I \subset [n]\\|I|=k}}
\int_{\mathbb{R}^{I^c}} C_{n,k}FV(Z \cap P_u)du.
\end{align*}
Therefore
\[\mathbb{E}(FV(Z)) \leq \sum_{\substack{I \subset [n]\\|I|=k}} \int_{\mathbb{R}^{I^c}}
C_{n,k}\mathbb{E}(FV(Z \cap P_u))du \leq {n \choose k}C_{n,k}M. \qedhere\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{cube}, upper bound.]
Let $Z$ be a cycle in $C_k([0,1]^n,\partial[0,1]^n)$ obtained by sampling $N$
i.i.d.\ planes from a distribution on the set of oriented $k$-planes which
intersect nontrivially with $[0,1]^n$, such that the distribution is uniform
(with respect to Lebesgue measure on the corresponding polytope in
$\mathbb{R}^{n-k}$) on each set of parallel planes.
This condition clearly implies that for every coordinate $(n-k)$-plane $P$,
$Z \cap P$ consists of at most $N$ i.i.d.\ positive and negative points with
probability 1. Then Theorem \ref{AKT:boundary} implies that
\eqref{E-condition} holds for $Z$ with $M=C_{n-k}\AKT_{n-k}(N)$, so that
\[\mathbb{E}(FV(Z)) \leq 2{n \choose k}C_{n,k}C_{n-k}\AKT_{n-k}(N). \qedhere\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{sphere}, upper bound.]
We use the fact that the transverse intersection of an oriented great
$k$-sphere with an oriented great $(n-k)$-sphere is a pair of antipodal points
with opposite orientations. Therefore, if $Z$ is a cycle obtained by sampling
$N$ oriented great $k$-spheres independently from the uniform distribution,
then for any great $(n-k)$-sphere $P$, with probability 1
\[Z \cap P=\sum_{i=1}^N [x_i]-[-x_i]\]
where the $x_i$ are i.i.d.\ uniform points on $S^{n-k}$.
Consider $S^n$ as a subset of $\mathbb{R}^{n+1}$, with standard unit basis
vectors $e_0,\ldots,e_n$. Let $K_i^\pm$ be the preimage of the cube $[-R,R]^n$
under central projection (that is, projection along lines through the origin)
to the plane $x_i=\pm 1$. If $R$ is large enough, the interiors of the
$K_i^\pm$ cover $S^n$. Each $K_i^+$ is disjoint from its antipodal set
$K_i^-$; therefore for any great $(n-k)$-sphere $P$, with probability 1
$Z \cap P \cap K_i^\pm$ consists of i.i.d.\ uniform points. By Remark
\ref{AKT:diffeo}
\[\mathbb{E}(FV(Z \cap P \cap K_i^\pm)) \leq C_{n,k}\AKT_{n-k}(N),\]
where $Z \cap P \cap K_i^\pm$ is considered as a cycle
in $C_0(P \cap K_i^\pm, \partial(P \cap K_i^\pm))$.
Note that central projection sends great spheres to hyperplanes. Therefore, by
Theorem \ref{upperE}, for each $i$ and sign,
\begin{equation} \label{on-patch}
\mathbb{E}(FV(Z \cap K_i^\pm)) \leq C_{n,k}\AKT_{n-k}(N).
\end{equation}
Fix a partition of unity $\{\varphi_i^\pm\}$ subordinate to $\{K_i^\pm\}$ which is
invariant with respect to the antipodal involution, that is, such that
$\varphi_i^+(x)=\varphi_i^-(-x)$. To prove the theorem, it is enough, given a
$(k+1)$-form $\omega \in \Omega^{k+1}(S^n)$ with $\lVert\omega\rVert_\infty=1$, to
show that for some (and therefore every) $\alpha \in \Omega^k(S^n)$ with
$d\alpha=\omega$,
\[\int_Z \alpha \leq C_{n,k}\AKT_{n-k}(N).\]
But note that
\[\int_Z \alpha=\sum_{i=0}^n \Bigl(\int_{Z \cap K_i^+} \varphi_i^+\alpha+
\int_{Z \cap K_i^-} \varphi_i^-\alpha\Bigr).\]
It therefore suffices to find an antidifferential and a bound separately for
each $\varphi_i^\pm \omega$, so \eqref{on-patch} proves the theorem.
\end{proof}
\section{Proof of the lower bound} \label{S:lower}
\begin{thm} \label{lowerE}
Let $Z$ be a random Lipschitz $k$-cycle in $([0,1]^n,\partial[0,1]^n)$ such
that for almost every coordinate $(n-k)$-plane
\[P_{\vec x}=\{(x_1,\ldots,x_k)\} \times [0,1]^{n-k} \subset [0,1]^n,\]
the slice $Z \cap P_{\vec x}$ satisfies
\[\mathbb{E}(FV(Z \cap P_{\vec x})) \geq p(\vec x),\]
where $p:[0,1]^k \to [0,\infty)$ is an $L^1$ function. Then
\[\mathbb{E}(FV(Z)) \geq \int_{[0,1]^k} p(\vec x)d\vec x.\]
\end{thm}
\begin{proof}
Fix $\varepsilon>0$, and let $U$ be a normal current filling $Z$ such that
$\mass(U) \leq FV(Z)+\varepsilon$. Then for almost all $P_{\vec x}$, there is a slice
$U \cap P_{\vec x}$ which fills $Z \cap P_{\vec x}$, and
\[\mass(U \cap P_{\vec x}) \geq FV(Z \cap P_{\vec x}).\]
By Prop.~\ref{slice-coarea}
\[FV(Z)+\varepsilon \geq \mass(U) \geq \int_{[0,1]^k} \mass(U \cap P_{\vec x})d\vec x
\geq \int_{[0,1]^k} p(\vec x)d\vec x.\]
Since this is true for every $\varepsilon>0$, the proof is complete.
\end{proof}
\begin{proof}[Proof of Theorem \ref{cube}, lower bound.]
Let $Z$ be a cycle in $C_k([0,1]^n,\partial[0,1]^n)$ obtained by sampling $N$
i.i.d.\ planes from a distribution on the set of oriented $k$-planes which
intersect nontrivially with $[0,1]^n$, such that the distribution is uniform
(with respect to Lebesgue measure on the corresponding polytope in
$\mathbb{R}^{n-k}$) on each set of parallel planes. Assume, perhaps by
permuting coordinates, that this distribution is not concentrated on planes of
the form
\[P_{\vec x}=\{(x_1,\ldots,x_k)\} \times \mathbb{R}^{n-k}.\]
As in the proof of the upper bound, it follows that for every $P_{\vec x}$,
$Z \cap P_{\vec x}$ consists of i.i.d.\ positive and negative points with
probability 1. Moreover, the probability of a random plane $P$ intersecting
$P_{\vec x}$ inside $[0,1]^n$ depends only on the direction of $P$ and not on
$\vec x$. Thus
\[\mathbb{E}(\mass(Z \cap P_{\vec x})) \geq cN,\]
where $c$ depends on the distribution but not on $\vec x$.
Hence by Theorems \ref{AKT:boundary} and \ref{lowerE},
\[\mathbb{E}(FV(Z)) \geq \frac{1}{2}c_{n-k}\AKT_{n-k}(cN). \qedhere\]
\end{proof}
\begin{proof}[Proof of Theorem \ref{sphere}, lower bound.]
Let $Z$ be a cycle in $C_k(S^n)$ obtained by sampling $N$ independent uniformly
distributed great $k$-spheres. It suffices to show that for some compact
submanifold $K \subset S^n$,
\[\mathbb{E}(FV(Z \cap K)) \geq c\AKT_{n-k}(N),\]
where $Z \cap K$ is considered as a cycle in $C_k(K,\partial K)$.
Recall that for any great $(n-k)$-sphere $P$, with probability 1
\[Z \cap P=\sum_{i=1}^N [x_i]-[-x_i]\]
where the $x_i$ are i.i.d.\ uniform points on $S^{n-k}$. Let $T \subset S^n$ be
a great $k$-sphere and $T'$ the $(n-k)$-sphere consisting of points farthest
from $T$. We use $N_\varepsilon(U)$ to indicate the $\varepsilon$-neighborhood of the set
$U$; then $S^n \setminus N_{\pi/4}(T')=\overline{N_{\pi/4}(T)}$ deformation
retracts to $T$ along the orthogonal retraction
$\rho:\overline{N_{\pi/4}(T)} \to T$. We let $K=\rho^{-1}(K')$ where $K'$ is
some closed ball in $T$ which does not contain any pair of antipodal points.
Notice that $K$ is foliated by equal-volume patches of great $(n-k)$-spheres
$P_u$ which retract to points $u \in K'$, and likewise contains no pair of
antipodal points. By Remark \ref{AKT:diffeo}, there is a bilipschitz
diffeomorphism from $K$ to $[0,1]^n$ which sends each $P_u$ to a plane of the
form
\[\{(x_1,\ldots,x_k)\} \times \mathbb{R}^{n-k}\]
in a volume-preserving way (up to a constant). Therefore, for each $u \in K'$,
\[\mathbb{E}(FV(Z \cap P_u \cap K)) \geq c\AKT_{n-k}(cN),\]
and applying Theorem \ref{lowerE}, we obtain the result.
\end{proof}
\section{Proof of Theorem \ref*{knot}} \label{S:knot}
Before we prove Theorem \ref{knot}, we must give a more precise statement.
By an \emph{oriented $k$-pseudomanifold} we mean a $k$-dimensional simplicial
complex $M$ with the following properties:
\begin{itemize}
\item It is \emph{pure}, i.e.\ every simplex is contained in a $k$-dimensional
simplex.
\item Every $k$-simplex comes with an orientation such that the sum of all the
oriented $k$-simplices is a cycle in $C_k(M)$.
\end{itemize}
Note that this is considerably wider than the usual definition: it is just enough
so that if $M$ is equipped with the standard simplexwise metric, any Lipschitz
map from $M$ to a metric space $X$ defines a Lipschitz $k$-cycle in $X$.
We say $M$ has \emph{geometry bounded by} $L$ if every $k$-simplex intersects at
most $L$ others.
With these definitions, we restate Theorem \ref{knot}:
\begin{thm*}
Let $M$ be an oriented $k$-pseudomanifold with $N$ vertices and geometry
bounded by $L$. Let $Z$ be a $k$-cycle on $[0,1]^n$ obtained by sending each
vertex of $M$ to a uniformly random point in $[0,1]^n$ and extending linearly.
Then there are constants $C>c>0$ depending on $n$ and $k$ such that
\begin{equation} \label{AKTstats:knot}
cL^{-1}\AKT_{n-k}(N)<\mathbb{E}(FV(Z))<CL\AKT_{n-k}(N).
\end{equation}
\end{thm*}
The concentration result will be proved in the next section.
As with Theorem \ref{cube}, the proof is a direct application of Theorems
\ref{upperE} and \ref{lowerE}. To apply these theorems, we need to understand
the filling volumes of slices of $Z$, which is more complicated in this case
because while the points are identically distributed, they are not entirely
independent. This is achieved for the upper bound in Lemma \ref{knot:upper} and
for the lower bound in Theorem \ref{knot:lower}. Together, these complete the
proof.
In this section as before, fix the notation
\[P_{\vec x}=\{\vec x\} \times [0,1]^{n-k} \subset [0,1]^n, \qquad
\vec x \in [0,1]^k.\]
We start by analyzing the slice $Z \cap P_{\vec x}$.
\begin{lem} \label{slice-properties}
Let $\vec x \in [0,1]^k$. Then the slice $Z \cap P_{\vec x}$ is the sum of $N$
random $0$-chains $\zeta_1,\ldots,\zeta_N$ which are identically distributed on
$\{\pm[y] : y \in [0,1]^{n-k}\} \cup \{0\}$ according to a distribution
$\mu_{\vec x}$ depending on $k$ and $\vec x$. Moreover:
\begin{enumerate}[(i)]
\item $\mu_{\vec x}$ is invariant with respect to the involution sending a chain
$\zeta$ to $-\zeta$;
\item $\mu_{\vec x} \leq C(n,k)\mu_{\text{Lebesgue}}$ on $[0,1]^{n-k}$;
\item Each $\zeta_i$ is independent of all but at most $L$ other $\zeta_j$.
\end{enumerate}
\end{lem}
Since the distribution of $Z$ is invariant under permuting coordinates, this
holds for any $(n-k)$-dimensional slice in a coordinate direction.
\begin{proof}
The distribution in question is the intersection of a random linear $k$-simplex
in $[0,1]^n$ with $P_{\vec x}$. Property (i) is obvious from this, and (iii)
follows since a pair of $\zeta_i$ are independent whenever the two
corresponding simplices do not intersect. To see (ii), consider the function
\[F:\bigl(([0,1]^n)^{k+1},\mu_{\text{Lebesgue}}\bigr) \to
\bigl(\{\pm[y] : y \in [0,1]^{n-k}\} \cup \{0\},\mu_{\vec x}\bigr)\]
sending each linear $k$-simplex to its intersection with $P_{\vec x}$. This
function is measure-preserving by definition, and its restriction to
\[K=F^{-1}\{[y] : y \in [0,1]^{n-k}\}\]
is 1-Lipschitz. Therefore, by the coarea formula, the density function of
$\mu_{\vec x}$ is given by the $[n(k+1)-(n-k)]$-dimensional Hausdorff measure of
point preimages. Thus it is enough to bound $H_{(n+1)k}(F^{-1}(\vec y))$ for
each $\vec y$.
Let $T$ be the set of linear $k$-simplices with vertices in
$[-1,1]^n$ which pass through $\vec 0$, and let
\[T_{\vec x}=(T+(\vec x,\vec 0)) \cap ([0,1]^k \times [-1,1]^{n-k})^{k+1}.\]
(Here each vertex is translated by $(\vec x,\vec 0)$.) Notice that
\[F^{-1}(\vec y) \subset T_{\vec x}+(\vec 0,\vec y).\]
All these translates are disjoint and their union is a subset of
$([0,1]^k \times [-1,2]^{n-k})^{k+1}$. Therefore, again by the coarea formula,
\[H_{(n+1)k}(T_{\vec x}) \leq 3^{(k+1)(n-k)}.\]
This completes the proof of (ii).
\end{proof}
Condition (iii) gives a dependency graph of degree $\leq L$ between the
$\zeta_i$. This graph has an $(L+1)$-coloring, giving a partition of
$\{1,\ldots,N\}$ into $L+1$ disjoint subsets $I_1,\ldots,I_{L+1}$ such that for
$i \in I_j$, the $\zeta_i$ are i.i.d.
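This $(L+1)$-coloring is just the standard greedy bound for graphs of maximum degree $L$: processing vertices in any order, a vertex with at most $L$ neighbors always has a free color among $L+1$. A minimal illustrative sketch (not part of the paper):

```python
import random

def greedy_coloring(adj):
    """Properly color a graph given as {vertex: set_of_neighbors}.
    If every vertex has degree <= L, at most L + 1 colors are used."""
    colors = {}
    for v in adj:
        used = {colors[u] for u in adj[v] if u in colors}
        c = 0
        while c in used:   # at most deg(v) <= L colors can be blocked
            c += 1
        colors[v] = c
    return colors

# A random dependency graph with maximum degree 4, standing in for the
# dependency graph between the zeta_i.
random.seed(0)
adj = {v: set() for v in range(60)}
for _ in range(200):
    u, v = random.sample(range(60), 2)
    if len(adj[u]) < 4 and len(adj[v]) < 4:
        adj[u].add(v)
        adj[v].add(u)
colors = greedy_coloring(adj)
```

The assertions below hold for any vertex order, which is why the coloring can be taken with the color classes ordered by size, as used later in the lower-bound argument.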
The following lemma gives the upper bound to plug into Theorem \ref{upperE}.
\begin{lem} \label{knot:upper}
For every $\vec x \in [0,1]^k$,
\[\mathbb{E}(FV(Z \cap P_{\vec x})) \leq (L+1)C_{n,k}\AKT_{n-k}(N).\]
\end{lem}
\begin{proof}
Write $\nu_{\vec x}$ for the probability measure on $[0,1]^{n-k}$ given by
\[\nu_{\vec x}(A)=\frac{\mu_{\vec x}\{[\vec y] : \vec y \in A\}}
{\mu_{\vec x}\{[\vec y] : \vec y \in [0,1]^{n-k}\}}.\]
We first note that the upper bound from the original AKT theorem holds for each
$\nu_{\vec x}$: if $\zeta$ is a random 0-cycle in $[0,1]^{n-k}$ with $N$ positive
and $N$ negative points distributed according to $\nu_{\vec x}$, then by
\cite[equation (12)]{BobL}, for some constant $C_{n-k}$ independent of the
measure,
\begin{equation} \label{uniform-AKT}
\mathbb{E}(FV(\zeta)) \leq C_{n-k}\AKT_{n-k}(N).
\end{equation}
To reduce to this situation, we note that while $Z \cap P_{\vec x}$ is a cycle
(and therefore has an equal number of negative and positive points),
\[Z(\vec x,j)=\sum_{i \in I_j} \zeta_i\]
may not be. We produce cycles $\tilde Z(\vec x,j)$ for $j=1,\ldots,L+1$ by
adding up to $N$ additional i.i.d.~points distributed according to
$\nu_{\vec x}$. We add each point to $\tilde Z(\vec x,j)$ for two different
$j$, with opposite signs, so that
\[\sum_{j=1}^{L+1} \tilde Z(\vec x,j)=\sum_{j=1}^{L+1} Z(\vec x,j)=
Z \cap P_{\vec x}.\]
Each $\tilde Z(\vec x,j)$ is a 0-cycle consisting of at most $N$ positive and
$N$ negative i.i.d.~points. Therefore, by \eqref{uniform-AKT},
\[\mathbb{E}(FV(Z \cap P_{\vec x})) \leq (L+1)C_{n,k}\AKT_{n-k}(N).\qedhere\]
\end{proof}
The lower bound is somewhat more difficult and forces us to go back to the
original proof of the AKT theorem in various dimensions. We start by estimating
the relationship between correlated points in $Z \cap P_{\vec x}$. For this we
need to better understand the function $F$ and set $T$ defined in the proof of
Lemma \ref{slice-properties}.
\begin{lem} \label{Jacobian}
Given a linear $k$-simplex $\Delta \in ([0,1]^n)^{k+1}$ such that at least one
of its $(k-1)$-faces is at distance at least $r$ from $P_{\vec x}$,
\[\sqrt{\det\bigl((DF_\Delta)^{T}DF_\Delta\bigr)} \geq C_{n,k}r^{n-k}.\]
\end{lem}
\begin{figure}
\begin{tikzpicture}[scale=3]
\draw (0,-0.2) -- (0,1) -- (1.5,1) -- (1.5,-0.2) -- cycle;
\draw[very thick] (0,0.6) node[anchor=east] {$P_{\vec x}$} -- (1.5,0.6);
\coordinate (face) at (1.1,0.75);
\coordinate (pt) at (0.6,0);
\coordinate (new-pt) at (0.9,0);
\draw (new-pt) -- (face) -- (pt);
\tikzstyle{dott}=[circle,scale=0.4];
\node[dott,draw=black] (intersection) at (1,0.6) {};
\draw[dashed,->] (pt) node[dott,fill=black]{} -- (0.87,0)
node[midway,anchor=north] {$\Delta y_i$};
\draw[dashed] (pt) -- +(0,0.75) -- (face) -- +(0,-0.15);
\draw[snake=brace,raise snake=2pt]
(pt) -- +(0,0.75) node[midway,anchor=east,inner sep=7pt] {$D \leq \sqrt{k}$};
\node[right] at (1.1,0.675) {$d \geq r$};
\end{tikzpicture}
\caption{When a vertex moves by $\Delta y_i$, $\Delta \cap P_{\vec x}$ moves by
$(d/D)\Delta y_i$.} \label{fig}
\end{figure}
\begin{proof}
From Figure \ref{fig}, we see that for every $1 \leq i \leq n-k$, there
is a unit vector $\vec v$ such that $DF_\Delta(\vec v)=c\vec e_i$, for
$c>r/\sqrt{k}$. Therefore
\[\sqrt{\det\bigl((DF_\Delta)^{T} DF_\Delta\bigr)}
\geq \left(\frac{r}{\sqrt{k}}\right)^{n-k}. \qedhere\]
\end{proof}
For the next lemma, we recall some facts about the set $T$. First,
$T_{\vec x}+(0,y)$ and $T_{\vec x}+(0,y')$ are disjoint if $y \neq y'$, and
\[T_{\vec x}+\{0\} \times [0,1]^{n-k} \subseteq
([0,1]^k \times [-1,1]^{n-k})^{k+1}.\]
(Here the notation $T_{\vec x}+V$ refers to the translates of every simplex in
$T_{\vec x}$ by every point in $V$, in other words the Minkowski sum of
$T_{\vec x}$ with the diagonal
$\{(y,\ldots,y) : y \in V\} \subset \mathbb{R}^{n(k+1)}$.)
In fact, this containment still holds if we intersect both sides with
$U \times [-1,1]^{(n-k)(k+1)}$, where $U \subseteq [0,1]^{k(k+1)}$ is any subset.
\begin{lem} \label{not-too-close}
Let $Q \subseteq [1/4,3/4]^{n-k}$ be an open set, and suppose
$\vec x \in [1/4,3/4]^k$. Given a linear $k$-simplex $\Delta$ with vertices in
$[0,1]^n$, let $\rho(\Delta)$ be the distance from the plane spanned by the
first $k$ vertices to $\{0\} \times [0,1]^{n-k}$. Then
\[\mathbb{P}\bigl[\rho(\Delta) \leq r \mid F(\Delta)=[y],y \in Q\bigr]
\leq C_{n,k}r.\]
\end{lem}
\begin{proof}
Note first that $F^{-1}\{[y]: y \in Q\} \subseteq T_{\vec x}+Q$, and in
particular
\[\{\Delta: \rho(\Delta) \leq r\text{ and }F(\Delta)=[y],y \in Q\} \subseteq
\{\Delta \in T_{\vec x}: \rho(\Delta) \leq r\}+Q.\]
Choosing $k$ points randomly induces a probability measure on the set of
$(k-1)$-planes in $\mathbb{R}^k$ whose density at a plane $P$ is proportional
to $\vol(P \cap [0,1]^k)^k$. This density is bounded above by some $C_{n,k}$,
and so the set of planes whose distance from $\vec x$ is at most $r$ has
measure $\leq C_{n,k}r$. Therefore
\[\mathbb{P}\bigl[\rho(\Delta) \leq r\text{ and }F(\Delta)=[y],y \in Q\bigr]
\leq 3^{(k+1)(n-k)}C_{n,k}r\vol Q.\]
Thus it suffices to find a lower bound on $\vol(F^{-1}\{[y]: y \in Q\})$ in
terms of $\vol Q$. Here we use the restrictions on $Q$ and $\vec x$: clearly,
\[F^{-1}\{[y]:y \in Q\} \supseteq
[T_{\vec x} \cap ([-1/4,1/4]^n+\{(\vec x,\vec 0)\})]+Q.\]
The volume of this set is independent of $\vec x$, depends linearly on
$\vol Q$, and is easily seen to be positive.
\end{proof}
\begin{lem} \label{correlation}
Assume that $\vec x \in [1/4,3/4]^k$. Let $0<r<1$ and let $\zeta$ and $\zeta'$
be random chains which encode the intersection with $P_{\vec x}$ of two
intersecting $k$-simplices $\Delta$ and $\Delta'$ of $M$. Let
$Q \subset [1/4,3/4]^{n-k}$ be a cube of side length $\ell$. Then
\begin{equation} \label{eqn:correlation}
\mathbb{P}\bigl[\zeta'=\pm[y'], y' \in Q \mid \zeta=[y], y \in Q\bigr] \leq
C_{n,k}\sqrt{\ell}.
\end{equation}
\end{lem}
\begin{rmk}
A more careful analysis based on the same principle shows that
\[\mathbb{P}\bigl[\zeta'=\pm[y'], y' \in Q \mid \zeta=[y], y \in Q\bigr] \leq
\begin{cases}
C_{n,k}\ell \lvert\log \ell\rvert & \text{if }n-k=1, \\
C_{n,k}\ell & \text{otherwise}.
\end{cases}\]
\end{rmk}
\begin{proof}
We give the proof in the case that $\Delta$ and $\Delta'$ share a $(k-1)$-face
$\Delta_0$; the general case is similar. Order the vertices so that the last
is not shared between the two simplices, and call the images of those vertices
$w$ and $w' \in [0,1]^n$.
Suppose first that $n-k \geq 2$. By Lemma \ref{not-too-close},
\begin{equation} \label{closer}
\mathbb{P}\bigl[\rho(\Delta_0) \leq \sqrt{\ell} \mid \zeta=[y],y \in Q\bigr]
\leq C_{n,k}\sqrt{\ell}.
\end{equation}
Now fix $\Delta_0$ with $\rho(\Delta_0) \geq \sqrt{\ell}$, and let
$U(\Delta_0,Q) \subseteq [0,1]^n$ be the set of points $w'$ such that
$\zeta'=\pm[y']$ with $y' \in Q$. Note that $U(\Delta_0,\{z\})$ is contained
in a $k$-plane and hence its $k$-dimensional Hausdorff measure is at most some
$C_{n,k}$. So by Lemma \ref{Jacobian} and the coarea formula,
\[\left(\frac{\ell}{k}\right)^{\frac{n-k}{2}}\vol(U(\Delta_0,Q)) \leq
C_{n,k}\vol(Q)\]
and therefore
\[\vol(U(\Delta_0,Q)) \leq C_{n,k}\ell^{\frac{n-k}{2}}.\]
Integrating this over all possible values of $\Delta_0$, we see that
\begin{equation} \label{farther}
\mathbb{P}\bigl[\zeta'=\pm[y'], y' \in Q \mid
\zeta=[y], y \in Q, \rho(\Delta_0) \geq \sqrt{\ell}\bigr]
\leq C_{n,k}\ell^{\frac{n-k}{2}}.
\end{equation}
Together, \eqref{closer} and \eqref{farther} imply \eqref{eqn:correlation}.
\end{proof}
Finally we have the tools we need to prove the lower bound for Theorem
\ref{knot}.
\begin{thm} \label{knot:lower}
For every $\vec x \in [1/4,3/4]^k$,
\[\mathbb{E}(FV(Z \cap P_{\vec x})) \geq c_{n,k}L^{-2}\AKT_{n-k}(N).\]
\end{thm}
\begin{proof}
Write $d=n-k$. The arguments for $d=1$, $d=2$, and $d \geq 3$ are distinct,
but broadly similar to each other. We therefore start by outlining the
common argument and then describe the details for each case which can be
plugged into it.
Let $\zeta_i$ be the chain-valued random variables corresponding to
intersections of $k$-simplices of $Z$ with $P_{\vec x}$. Recall that
$\zeta_i=\pm [v_i]$ or $\{0\}$, with $v_i \in [0,1]^d$ identically distributed
according to a density which is bounded below on $[1/4,3/4]^d$. Moreover, this
bound is uniform with respect to $\vec x \in [1/4,3/4]^k$. Thus, following
Remark \ref{AKT:diffeo}, there is a uniformly bilipschitz family of
diffeomorphisms
\[\varphi_{\vec x}:[1/4,3/4]^d \to [0,1]^d\]
which send this density to a constant times the standard volume form. In
particular, Lemma \ref{correlation} still holds for the $\zeta_i$ after
applying the diffeomorphism. We now write $\zeta_i$ for
$\varphi_{\vec x}(\zeta_i)$.
In each case, consider an $(L+1)$-coloring $I_1,\ldots,I_{L+1}$ of the
dependency graph between the $\zeta_i$, with the colored subsets ordered from
largest to smallest. Note that each of the $\zeta_i$ is correlated with
$\zeta_{i'}$ for at most $L$ values of $i'$.
We write $Z_j=\sum_{i \in I_j} \zeta_i$. The rough plan is as follows: in each
case, we show that $Z_1$ is hard to fill by constructing a Lipschitz function
$f$ such that
\[\int_{Z_1} f \geq \frac{c_{n,k}}{L}\AKT_{n-k}(N)\Lip f
\text{ with high probability.}\]
Then we show that $\mathbb{E}(\int_{Z_j} f)$ for each $j \neq 1$ is very close
to zero.
In each case, the function $f$ is constructed as a sum of simple functions.
Given a cube $Q \subseteq [0,1]^d$, let $\Delta_Q:[0,1]^d \to \mathbb{R}$ be
the function supported on $Q$ whose graph is a symmetric pyramid with base $Q$
and height 1. We also write $\Delta^r_v$ for $\Delta_Q$, where $Q$ is the cube
in the lattice of side length $r$ which contains $v \in [0,1]^d$. The function
$f$ will consist of a
sum of scaled copies of $\Delta_Q$, each reflecting the ``imbalance'' of
positive and negative points on the cube $Q$. The main difference between
different dimensions is the scale of these cubes: for $d \geq 3$, the cubes are
at the smallest scale (comparable to the average distance between neighboring
points), for $d=1$ they are at the largest scale (comparable to 1), and for
$d=2$ we use cubes at many scales, as in the original proof of \cite{AKT}.
In each case, we construct $f$ by means of an auxiliary function
\[g(x)=\sum_{i \in I_1} g_{\zeta_i}(x)\]
(in the case $d=2$, each $\zeta_i$ is associated to many summands $g^r_{\zeta_i}$
at different scales, which we consider separately). This $g$ may not satisfy
the desired upper bound on the Lipschitz constant, so we remove some of the
summands where they are too concentrated to produce $f$. Whenever $\zeta_i$ is
independent from $\zeta_{i'}$,
$\mathbb{E}\bigl(\int_{\zeta_{i'}} g_{\zeta_i}(x)\bigr)=0$, and so for every
$j \neq 1$ we can write
\[\bigl\lvert\mathbb{E}\bigl({\textstyle\int_{Z_j}} g\bigr)\bigr\rvert
\leq \sum_{i \in I_j} \sum \{\bigl\lvert\mathbb{E}\bigl(
{\textstyle\int_{\zeta_i}}g_{\zeta_{i'}}\bigr)\bigr\rvert
\mid \zeta_{i'}\text{ is correlated with }\zeta_i\}.\]
By Lemma \ref{correlation}, this correlation is not too high, and therefore we
can bound each of the summands. By the construction of $f$, this also bounds
$\bigl\lvert\mathbb{E}\bigl(\int_{Z_j} f\bigr)\bigr\rvert$.
If we tune everything correctly, we get that for each $j \neq 1$,
\[\bigl\lvert\mathbb{E}\bigl({\textstyle\int_{Z_j}} f\bigr)\bigr\rvert
\leq \frac{1}{2L}\mathbb{E}\bigl({\textstyle\int_{Z_1}} f\bigr),\]
giving a lower bound on $\mathbb{E}\bigl(\int_Z f\bigr)$.
\subsection*{Case $d \geq 3$}
We split the cube $[0,1]^d$ into $\sim N$ subcubes of side length
$r \approx N^{-1/d}$, and let
\begin{align*}
g(x) &= \sum_{i \in I_1} g_i(x)=
\sum_{i \in I_1} \pm r\Delta^r_{v_i}(x)\text{ where }\zeta_i=\pm[v_i], \\
f(x) &= \sum_{t_1,\ldots,t_d=0}^{r^{-1}-1}
\sgn\Bigl(\int_{Z_1}\chi_{Q_r(t_1,\ldots,t_d)}\Bigr) r\Delta_{Q_r(t_1,\ldots,t_d)}(x),
\end{align*}
where $Q_r(t_1,\ldots,t_d)$ is the cube with side length $r$ whose vertex
closest to the origin is $(rt_1,\ldots,rt_d)$. Note that $f$ is $2$-Lipschitz.
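The pyramid bumps are easy to make concrete. The sketch below (an illustration in hypothetical notation, not the paper's) implements $\Delta_Q$ for a cube $Q$; since $\Delta_Q$ has slope $2/\text{side}$ in each coordinate, the scaled copy $r\Delta_{Q}$ over a cube of side $r$ is $2$-Lipschitz, and the copies over disjoint lattice cubes sum to the $2$-Lipschitz function $f$.

```python
def pyramid(x, corner, side):
    """Delta_Q: tent function of height 1 over the cube Q = corner + [0, side]^d,
    vanishing on the boundary of Q. Its slope is 2/side, so side * Delta_Q
    is 2-Lipschitz."""
    center = [c + side / 2.0 for c in corner]
    m = max(abs(xi - ci) for xi, ci in zip(x, center))
    return max(0.0, 1.0 - 2.0 * m / side)
```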
We can think of the number of points landing in each subcube as $N$ independent
$\lambda=1$ Poisson processes which we stop once their sum reaches roughly
\[\mathbb{P}(\zeta_i \neq 0)\lvert I_1 \rvert.\]
By the law of large numbers, the stopping time will be very close to
\[t=N^{-1}\mathbb{P}(\zeta_i=\pm[y], y \in [0,1]^d)\lvert I_1 \rvert\]
and very nearly $Nte^{-t}$ of the subcubes will contain exactly one point. Of
these, with high probability, at least $(1/2) \cdot 3^{-d} \cdot Nte^{-t}$ will
be contained in the middle third of their subcube. Therefore,
\[\int_{Z_1} f \geq \frac{c_{n,k}}{L} \cdot N^{\frac{d-1}{d}}
\text{ with high probability}.\]
On the other hand, given $j \neq 1$, $i \in I_j$, and $i' \in I_1$ such that
$\zeta_i$ is correlated with $\zeta_{i'}$, Lemma \ref{correlation} tells us
that
\[\bigl\lvert\mathbb{E}\bigl(\textstyle{\int_{\zeta_i}} g_{i'}\bigr)\bigr\rvert
\leq C_{n,k}r^{3/2}\]
and therefore
\[\bigl\lvert\mathbb{E}\bigl(\textstyle{\int_{Z_j}} f\bigr)\bigr\rvert \leq
\sum_{i \in I_j} LC_{n,k}r^{3/2} \leq LC_{n,k}N^{\frac{d-1}{d}-\frac{1}{2d}}.\]
Since this is small compared to $\int_{Z_1} f$, this shows that
\[\mathbb{E}\Bigl(\int_{Z \cap P_{\vec x}} f\Bigr)
\geq \frac{c_{n,k}}{L}N^{\frac{d-1}{d}}\Lip f\qquad\text{for large enough }N.\]
\subsection*{Case $d=2$}
This case is broadly similar, but we build the function $f$ in a more
complicated way, following the original proof of \cite{AKT}. For an integer
$r$, let $Q^r_{st}$ be the square of side length $2^{-r}$ whose lower left
corner is at $(s\cdot 2^{-r},t\cdot 2^{-r})$, and write
$\Delta^r_{st}=\Delta_{Q^r_{st}}$. We write
\begin{align*}
g(x,y) &= \sum_{r=1}^{0.1\log N} \sum_{s,t=0}^{2^r-1} g^r_{st}(x,y)=
\sum_{r=1}^{0.1\log N} \sum_{s,t=0}^{2^r-1} \Delta^r_{st}(x,y)
\int_{Z_1} \Delta^r_{st} \\
&= \sum_{r=1}^{0.1\log N} \sum_{i \in I_1} g^r_{\zeta_i}(x,y) \\
&= \sum_{r=1}^{0.1\log N} \sum_{i \in I_1} \Delta^r_{v_i}(x,y)\int_{\zeta_i}
\Delta^r_{v_i}\qquad\text{where }\zeta_i=\pm[v_i].
\end{align*}
Notice that $\int_{Z_1} g^r_{st}$ is always nonnegative: roughly speaking, it
measures the square of the ``imbalance'' of positive and negative points in
$Q^r_{st}$. In particular, it's not hard to see that
$\mathbb{E}(\int_{Z_1} g^r_{st})=c_{n,k} \cdot 2^{-2r}N$, and therefore
$\mathbb{E}(\int_{Z_1} g)=c_{n,k}N\log N$.
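To see where the $2^{-2r}$ comes from: $\int_{Z_1} g^r_{st}=(\int_{Z_1}\Delta^r_{st})^2$ is the square of a sum of mean-zero i.i.d.\ terms $\pm\Delta^r_{st}(v_i)$, so its expectation is $\lvert I_1\rvert$ times the second moment of one term, which by scaling is $2^{-2r}\int_{[0,1]^2}\Delta^2$, and for the unit-square pyramid $\int\Delta^2=1/6$ exactly. A hedged numeric check of that last constant (illustrative only, not from the paper):

```python
def tent(x, y):
    """Delta for the unit square: height-1 pyramid, 1 - 2*max(|x-1/2|, |y-1/2|)."""
    return max(0.0, 1.0 - 2.0 * max(abs(x - 0.5), abs(y - 0.5)))

def second_moment(n=500):
    """Midpoint-rule estimate of the integral of Delta^2 over [0,1]^2.
    The exact value, by the layer-cake formula, is
    int_0^1 2t(1-t)^2 dt = 1/6."""
    h = 1.0 / n
    return sum(tent((i + 0.5) * h, (j + 0.5) * h) ** 2
               for i in range(n) for j in range(n)) * h * h
```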
On the other hand, the derivative of $g$ is $O(\sqrt{N \log N})$ on average,
but can be much larger in some places. To remedy this, Ajtai, Koml\'os, and
Tusn\'ady introduced a ``stopping time'' rule, building $f$ as the sum of some,
but not all of the $g^r_{st}$. We do not need to give the exact definition,
remarking only that the function $f$ satisfies
\begin{align}
\mathbb{E}\bigl(\textstyle{\int_{Z_1}} f\bigr)
&\geq \frac{c_{n,k}}{L}N\log N \label{2:int} \\
\Lip f &\leq C_{n,k}\sqrt{N \log N}. \label{2:Lip}
\end{align}
Now, for $j \neq 1$,
\[\bigl\lvert\mathbb{E}\bigl({\textstyle\int_{Z_j}} f\bigr)\bigr\rvert
\leq \sum_{i \in I_j} \sum_{r=1}^{0.1\log N} \sum \bigl\{\bigl\lvert\mathbb{E}\bigl(
{\textstyle\int_{\zeta_i}} g^r_{\zeta_{i'}}\bigr)\bigr\rvert
\mid \zeta_{i'}\text{ is correlated with }\zeta_i\bigr\}.\]
By Lemma \ref{correlation}, the value of each term of this triple sum is
$O(2^{-r/2})$. Therefore,
\[\bigl\lvert\mathbb{E}\bigl({\textstyle\int_{Z_j}} f\bigr)\bigr\rvert
\leq C_{n,k}LN \sum_{r=1}^{0.1\log N} 2^{-r/2} \leq (1+\sqrt{2})C_{n,k}LN.\]
Combining this with \eqref{2:int} and \eqref{2:Lip}, we see that
\[\mathbb{E}\Bigl(\int_{Z \cap P_{\vec x}} f\Bigr)
\geq \frac{c_{n,k}}{L}\sqrt{N\log N}\Lip f\qquad\text{for large enough }N.\]
\subsection*{Case $d=1$}
We split the interval $[0,1]$ into $R$ equal regions, with $R$ to be determined
later. Write $\Delta_s=\Delta_{[s/R,(s+1)/R]}$, and let
\[g(x)=\sum_{s=0}^{R-1} \Delta_s(x){\textstyle\int_{Z_1}}
\chi_{\left[\frac{s}{R},\frac{s+1}{R}\right]}=\sum_{i \in I_1} g_{\zeta_i}(x)=
\sum_{i \in I_1} \pm\Delta^{1/R}_{y_i}(x)\qquad\text{where }\zeta_i=\pm[y_i].\]
We obtain the desired function $f$ by replacing
$\int_{Z_1} \chi_{\left[\frac{s}{R},\frac{s+1}{R}\right]}$ with
\[h_s=\sgn\biggl(\int_{Z_1} \chi_{\left[\frac{s}{R},\frac{s+1}{R}\right]}\biggr)
\min\biggl\{\biggl\lvert\int_{Z_1} \chi_{\left[\frac{s}{R},\frac{s+1}{R}\right]}
\biggr\rvert, C_{n,k}\sqrt{\frac{N}{R}}\biggr\}\]
for some sufficiently large $C_{n,k}$. Then $\Lip f \leq C_{n,k}\sqrt{NR}$ and
$\int_{Z_1} f \geq \frac{c_{n,k}}{L}N$.
On the other hand, for $j \neq 1$ and any $i \in I_j$ and $i' \in I_1$ such
that $\zeta_i$ and $\zeta_{i'}$ are correlated, by Lemma \ref{correlation},
$\bigl\lvert\mathbb{E}\bigl({\textstyle\int_{\zeta_i}} g_{\zeta_{i'}}\bigr)
\bigr\rvert \leq C_{n,k}R^{-1/2}$, and therefore
\[\bigl\lvert\mathbb{E}\bigl({\textstyle\int_{Z_j}} f\bigr)\bigr\rvert
\leq C_{n,k}LNR^{-1/2}.\]
For some large enough $R$, depending on $n$ and $k$ but not on $N$,
\[\bigl\lvert\mathbb{E}\bigl({\textstyle\int_{Z_j}} f\bigr)\bigr\rvert
\leq \frac{1}{2L^2}\mathbb{E}\bigl({\textstyle\int_{Z_1}} f\bigr).\]
Thus $\mathbb{E}\bigl(\int_{Z \cap P_{\vec x}} f\bigr) \geq \frac{c_{n,k}}{L}\sqrt{N}\Lip f$,
completing the proof.
\end{proof}
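The $d=1$ case rests on the elementary form of the filling volume on an interval: for a balanced signed point set, the minimal filling $1$-chain has multiplicity equal to the running charge, so $FV$ is the integral of its absolute value. A sketch for absolute $0$-cycles (ignoring the cheaper relative-boundary discharge available in $C_0([0,1],\partial[0,1])$):

```python
def filling_volume_1d(points):
    """FV of a balanced 0-cycle sum_i s_i [x_i] on [0,1], given as a list of
    (position, sign) pairs: the integral over t of the absolute running charge
    |#{positives <= t} - #{negatives <= t}|."""
    total = 0.0
    charge = 0
    prev = 0.0
    for pos, sign in sorted(points):
        total += abs(charge) * (pos - prev)
        charge += sign
        prev = pos
    total += abs(charge) * (1.0 - prev)  # charge ends at 0 for a balanced cycle
    return total
```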
\section{Concentration of measure} \label{S:concentration}
In this section, we show that when $n-k \geq 2$, the size of the filling tends to
concentrate around its mean. That is, we show that \eqref{AKTstats:V} holds in
the case of Theorems \ref{sphere}, \ref{cube}, and \ref{knot}. We first prove
this in the case of Theorem \ref{AKT:boundary}. The main tool is the
concentration of measure in high-dimensional balls, an idea due to Gromov and
Milman \cite{GroMi} and of wide importance in probability theory \cite{Ledoux}.
We follow the exposition due to Bobkov and Ledoux \cite[\S7.1]{BobLMem} which
covers the 1-dimensional case; the higher-dimensional cases are essentially the
same, although they do not seem to appear explicitly in the literature.
\begin{thm} \label{concentration}
Let $Z$ be a random cycle in $C_0([0,1]^n,\partial[0,1]^n)$ as in Theorem
\ref{AKT:boundary}. Then for every $r>0$,
\[\mathbb{P}[\lvert FV(Z)-\mathbb{E}(FV(Z))\rvert \geq r] \leq
C_1\exp\bigl(-C_2r/\sqrt{N}\bigr)\]
for universal constants $C_1,C_2>0$.
\end{thm}
In particular, the standard deviation of $FV(Z)$ is at most $O(\sqrt{N})$. In
other words, for $n \geq 2$, $FV(Z)/\mathbb{E}(FV(Z))$ converges to 1 as
$N \to \infty$.
\begin{proof}
Equip $X=C_0([0,1]^n,\partial[0,1]^n)$ with the metric
\[d_{FV}(Z,Z')=FV(Z-Z')\]
and let $E=[-1,1] \times [0,1]^{n-1}$. Define $\zeta_0:E \to X$ by
\[\zeta_0(\pm x_1,x_2,\ldots,x_n)=\pm[(x_1,x_2,\ldots,x_n)]\]
and $\zeta:(E^N,d_{\text{Eucl}}) \to X$ by
\[\zeta(v_1,\ldots,v_N)=\sum_{i=1}^N \zeta_0(v_i).\]
This map is $\sqrt{N}$-Lipschitz since when every point moves by a tiny amount
$\varepsilon$, the distance is $\sqrt{N}\varepsilon$ in the domain and at most $N\varepsilon$ in the
range.
Define the \emph{concentration function} of a metric measure space $(M,d,\mu)$
of total measure 1 to be
\[\conc_M(r)=\sup\{1-\mu(N_r(A)) \mid \mu(A) \geq 1/2\}, \qquad r>0,\]
where $N_r(A)$ is the $r$-neighborhood of the set $A$. The key observation of
Gromov and Milman \cite[Thm.~4.1]{GroMi} is that
\[\conc_M(r) \leq \frac{3}{4}e^{-\ln(3/2)\lambda_1 r},\]
where $\lambda_1$ is the first nonzero eigenvalue of the Laplacian on $M$.
Since the spectrum of a product of manifolds is the sum of its spectra,
$\lambda_1$ is constant on powers of $M$. The map $\zeta$
is measure-preserving, so it follows that
\[\conc_X(r) \leq \frac{3}{4}\exp\bigl(-\ln(3/2)\lambda_1 r/\sqrt{N}\bigr).\]
Therefore, for any $1$-Lipschitz function $u:X \to \mathbb{R}$,
\[\mathbb{P}[|u(Z)-\operatorname{median}(u)| \geq r] \leq
\frac{3}{2}\exp\bigl(-\ln(3/2)\lambda_1 r/\sqrt{N}\bigr).\]
By Chebyshev's inequality, the same, modulo constants, holds for the mean (see
also \cite[Prop.~1.10]{Ledoux}).
\end{proof}
To adapt this proof for the case of Theorems \ref{sphere}, \ref{cube}, and
\ref{knot}, we just have to change the space $E$: take
\begin{align*}
E_{\text{sphere}} &= \widetilde{Gr}_k(\mathbb{R}^n) &&
\text{for Theorem \ref{sphere}} \\
E_{\text{cube}} &= \{\text{affine $k$-planes }P \subset \mathbb{R}^n
\mid P \cap [0,1]^n \neq \emptyset\} &&
\text{for Theorem \ref{cube}} \\
E_{\text{knot}} &= [0,1]^n &&
\text{for Theorem \ref{knot}}.
\end{align*}
In the first two cases, the map $\zeta$ is constructed as before. In the last
case, for a $k$-pseudomanifold $M$ with vertex set $M^0$,
$\zeta_M:E_{\text{knot}}^{\lvert M^0 \rvert} \to Z_k([0,1]^n)$ sends
$(v_1,\ldots,v_{|M^0|})$ to the image of the linear immersion of $M$ with vertices
$(v_1,\ldots,v_{|M^0|})$.
In each case, it is easy to see that if the space of $k$-cycles is given the
filling volume metric, then $\zeta$ is $\sqrt{N}$-Lipschitz. Therefore, the rest
of the proof is identical to that of Theorem \ref{concentration}.
\bibliographystyle{amsalpha}
/*
* Swaggy Jenkins
*
* Jenkins API clients generated from Swagger / Open API specification
*
* The version of the OpenAPI document: 1.5.1-pre.0
* Contact: blah@cliffano.com
* Generated by: https://github.com/openapitools/openapi-generator.git
*/
using NUnit.Framework;
using System;
using System.Linq;
using System.IO;
using System.Collections.Generic;
using Org.OpenAPITools.Api;
using Org.OpenAPITools.Model;
using Org.OpenAPITools.Client;
using System.Reflection;
using Newtonsoft.Json;
namespace Org.OpenAPITools.Test
{
/// <summary>
/// Class for testing BranchImpl
/// </summary>
/// <remarks>
/// This file is automatically generated by OpenAPI Generator (https://openapi-generator.tech).
/// Please update the test case below to test the model.
/// </remarks>
public class BranchImplTests
{
// TODO uncomment below to declare an instance variable for BranchImpl
//private BranchImpl instance;
/// <summary>
/// Setup before each test
/// </summary>
[SetUp]
public void Init()
{
// TODO uncomment below to create an instance of BranchImpl
//instance = new BranchImpl();
}
/// <summary>
/// Clean up after each test
/// </summary>
[TearDown]
public void Cleanup()
{
}
/// <summary>
/// Test an instance of BranchImpl
/// </summary>
[Test]
public void BranchImplInstanceTest()
{
// TODO uncomment below to test "IsInstanceOf" BranchImpl
//Assert.IsInstanceOf(typeof(BranchImpl), instance);
}
/// <summary>
/// Test the property 'Class'
/// </summary>
[Test]
public void ClassTest()
{
// TODO unit test for the property 'Class'
}
/// <summary>
/// Test the property 'DisplayName'
/// </summary>
[Test]
public void DisplayNameTest()
{
// TODO unit test for the property 'DisplayName'
}
/// <summary>
/// Test the property 'EstimatedDurationInMillis'
/// </summary>
[Test]
public void EstimatedDurationInMillisTest()
{
// TODO unit test for the property 'EstimatedDurationInMillis'
}
/// <summary>
/// Test the property 'FullDisplayName'
/// </summary>
[Test]
public void FullDisplayNameTest()
{
// TODO unit test for the property 'FullDisplayName'
}
/// <summary>
/// Test the property 'FullName'
/// </summary>
[Test]
public void FullNameTest()
{
// TODO unit test for the property 'FullName'
}
/// <summary>
/// Test the property 'Name'
/// </summary>
[Test]
public void NameTest()
{
// TODO unit test for the property 'Name'
}
/// <summary>
/// Test the property 'Organization'
/// </summary>
[Test]
public void OrganizationTest()
{
// TODO unit test for the property 'Organization'
}
/// <summary>
/// Test the property 'Parameters'
/// </summary>
[Test]
public void ParametersTest()
{
// TODO unit test for the property 'Parameters'
}
/// <summary>
/// Test the property 'Permissions'
/// </summary>
[Test]
public void PermissionsTest()
{
// TODO unit test for the property 'Permissions'
}
/// <summary>
/// Test the property 'WeatherScore'
/// </summary>
[Test]
public void WeatherScoreTest()
{
// TODO unit test for the property 'WeatherScore'
}
/// <summary>
/// Test the property 'PullRequest'
/// </summary>
[Test]
public void PullRequestTest()
{
// TODO unit test for the property 'PullRequest'
}
/// <summary>
/// Test the property 'Links'
/// </summary>
[Test]
public void LinksTest()
{
// TODO unit test for the property 'Links'
}
/// <summary>
/// Test the property 'LatestRun'
/// </summary>
[Test]
public void LatestRunTest()
{
// TODO unit test for the property 'LatestRun'
}
}
}
About us | Twelfth Night Club, Inc.
What is Twelfth Night Club, Inc.?
Twelfth Night Club, Inc. is the oldest club for professional women of the theater in the United States.
It was founded in 1891 by Alice Fischer (who married the Actor William Harcourt). She envisioned a sister organization to the Lambs and the Players. Its original name was the F.A.D.'s, which stood for fencing, acting and dancing. It was Daniel Frohman who suggested "The Twelfth Night Club" would be a more appropriate name for a club whose members were all Broadway leading ladies. This was the name under which we were incorporated in 1893. We still have our original charter from New York State.
Our archives contain many autographed photographs of past members and guests of Twelfth Night: our first President, Viola Allen (shown above), and Laurette Taylor, May Robson, Blanche Ring, Louise Drew, Minnie Maddern Fiske, Julia Marlowe, Helen Modjeska, Sarah Bernhardt, Helen Bonfils, Maida Reade and Helen Hayes, to name a few.
There were also many great men of the American theater who were friends of Twelfth Night: Daniel Frohman, Edwin Booth, John Drew, John Barrymore, David Belasco, Joseph Jefferson, Chauncey Alcott and David Warfield.
Today, our members work in all areas of theater, TV and film. We are established as a tax-exempt, not-for-profit organization dedicated to raising money for theater-related charities as well as providing our members with a warm and relaxed atmosphere to enjoy the company of other theater professionals.
We are an all-volunteer organization, existing completely through contributions and the dues of our members and Gentleman Friends. Almost every month we hold a Sunday afternoon meeting with a special theme or celebrity guest. Anniversary Party guests in recent years have included: Rosemary Harris, Angela Lansbury, Frances Sternhagen, Estelle Parsons, Ellen Burstyn, Linda Lavin, Lee Grant, Bebe Neuwirth, Sally Struthers, Don Correa and Sandy Duncan.
Our Players and Playwrights group continues to hold readings and staged productions. Membership participation at all levels (writing, performing, directing and production) is encouraged and welcomed.
Currently The Actor's Fund of America is generously providing us space to hold our functions. But we are still searching for a permanent home where we can display our extensive memorabilia of the American Theater, and continue as a support network of encouragement and companionship for Actresses in the 21st century.
If you are interested in learning more about The Twelfth Night Club, Inc. please write to us. Contributions in any amount to help further our charity work are welcome, as are prospective members.
As an active member in the 70s-80s I have fond memories of meetings and teas (and a few good pictures... I think I was host, or whatever we called it, for Maureen Stapleton and for Hermione Gingold). I was in NYC for five years, from Vancouver BC, doing research on David Belasco and his era and sending theatre reviews to a Vancouver radio station and a few spots for CBC radio.
I became a member after interviewing the wonderful, late, Madge West who urged, and sponsored, my involvement. Madge had been a child actress for Belasco and appeared with his big star Blanche Bates in the early 1900s (I have accurate data). I just came across their pic and wonder if you would like it for your archives. They were both 12th Nighters. Madge was still working in TV in the 80s.
I'm trying to recall names of my fellow members and only one comes quickly to mind: Maddie Fetterman, who died several years ago.
I also wanted to check regarding a lithograph of La Revue Blanche by Toulouse-Lautrec which I purchased at one of the club's sales around 1980. I just discovered it with some Belasco program repros. Not sure if it's a real lithograph or a member's rendition!
# Calculate $\underset{x\rightarrow7}{\lim}\frac{\sqrt{x+2}-\sqrt[3]{x+20}}{\sqrt[4]{x+9}-2}$

$$\underset{x\rightarrow7}{\lim}\frac{\sqrt{x+2}-\sqrt[3]{x+20}}{\sqrt[4]{x+9}-2}$$

Here I've tried multiplying by $\sqrt[4]{x+9}+2$ and a few other methods.

Thanks in advance for solutions / hints using simple methods.

Edit: Please don't use l'Hôpital's rule. We are before derivatives and don't know how to use it correctly yet. Thanks!

- Why does it seem no one can ever use L'Hôpital? Poor Guillaume... – David Mitra Jan 11 '13 at 20:04
- Because it's just too easy. He's not solving an important problem, he's practicing. Plus he hasn't learned it yet. – Git Gud Jan 11 '13 at 20:05
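One elementary route, in the spirit the asker requested (this is a sketch of a standard technique, not an answer from the original thread): each root minus its value at $x=7$ contains the factor $x-7$, by the identity $a^n-b^n=(a-b)(a^{n-1}+a^{n-2}b+\dots+b^{n-1})$.

```latex
% Values at x=7: \sqrt{9}=3, \sqrt[3]{27}=3, \sqrt[4]{16}=2. Factor x-7 out:
\begin{align*}
\sqrt{x+2}-3 &= \frac{x-7}{\sqrt{x+2}+3},\\
\sqrt[3]{x+20}-3 &= \frac{x-7}{(x+20)^{2/3}+3(x+20)^{1/3}+9},\\
\sqrt[4]{x+9}-2 &= \frac{x-7}{(x+9)^{3/4}+2(x+9)^{1/2}+4(x+9)^{1/4}+8}.
\end{align*}
% Write the numerator as (\sqrt{x+2}-3)-(\sqrt[3]{x+20}-3), cancel x-7, let x->7:
\[
\lim_{x\to7}\frac{\sqrt{x+2}-\sqrt[3]{x+20}}{\sqrt[4]{x+9}-2}
= \frac{\frac{1}{6}-\frac{1}{27}}{\frac{1}{32}}
= 32\cdot\frac{7}{54}
= \frac{112}{27}.
\]
```

At $x=7$ the three denominators evaluate to $6$, $27$ and $32$ respectively, which is where the final fractions come from.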
List of British Jewish entertainers
This is a list of British Jewish entertainers (actors, directors, screenwriters, musicians and others) from the United Kingdom and its predecessor states who are or were Jewish or of Jewish descent.
Film actors
* Felix Aylmer
* Alfie Bass, obituary, "Jewish Chronicle", 24/7/1987 p14
* Claire Bloom [http://www.amazon.com/gp/product/0316093831] , actress
*Helena Bonham Carter (1966 - ) Academy-Award nominated English film/television actress [Bonham Carter - [http://www.contactmusic.com/news.nsf/article/carter%20too%20jewish%20for%20jewish%20role_1011743] "CARTER TOO JEWISH FOR JEWISH ROLE... British actress HELENA BONHAM CARTER stunned director PAUL WEILAND with her Jewish accent on the set of their new movie SIXTY SIX and even had to be asked to tone down her impersonation The PLANET OF THE APES actress, who has Austrian Jewish roots, stars as social-climber ESTHER REUBEN in the British film and reveals it wasn't hard to do because of her family upbringing She says, "People always think I'm indelibly English, but actually I was brought up in Golders Green (north west London) and there's tons of Jewish blood on my mother's side" [http://www.dailyrecord.co.uk/news/tm_headline=i-m-not-sienna---i-m-the-antichrist-of-fashion-&method=full&objectid=17985637&siteid=66633-name_page.html] "Helena abandons her old-fashioned style, playing a North London Jewish mother... "My mum was so pleased I was doing this film," she laughs. "I am half French, half Spanish and Jewish - but I'm always seen as very British. I'm finally getting in touch with my Jewish roots.""]
* Bernard Bresslaw, actor
* Eleanor Bron [http://www.jewishbookweek.com/archive/listbycontributor.php?Contributors=Eleanor+Bron] , actress and name inspiration for "Eleanor Rigby"
* Katrin Cartlidge [http://movies.yahoo.com/shop?d=hc&id=1800022292&cf=biog&intl=us] , actress (Jewish mother)
* Joan Collins ["The Express", 23 May 2003 (Paul Callan): "She was born Joan Henrietta Collins, a nice Jewish girl from Bayswater"] actress
* Marty Feldman [http://www.mynewham.co.uk/newham/celebs&gossip-marty_feldman.htm] , comic actor
* Fenella Fielding ["Jewish Chronicle", 24/10/2003 p35: "(Noel) Coward was less complimentary about (Maureen) Lipman's fellow Jewish stage star Fenella Fielding"]
* Laurence Harvey [http://www.cinemas-online.co.uk/website/soapbox.phtml?localpage=features/domino/index] , actor (Lithuanian-born)
* Leslie Howard [http://www.filmsofthegoldenage.com/foga/1999/winter99/lesliehoward.shtml] , actor
* Jason Isaacs [http://www.jewishjournal.com/home/searchview.php?id=2347] , actor
* Sid James [http://www.museum.tv/archives/etv/J/htmlJ/jamessid/jamessid.htm] , comic actor,(South African born)
* Tony Jay (1933 - 2006) English/American actor ["Tony Jay - Obituary", "The Jewish Chronicle", 22 December 2006, p.26]
* David Kossoff [http://www.jewish-theatre.com/visitor/article_display.aspx?articleID=1296] , actor and stage monologuist
* Miriam Margolyes [http://www.jewish-theatre.com/visitor/article_display.aspx?articleID=397] , actress
* Jessie Matthews (1907 - 1981) English dancer, singer and actress
* Ron Moody [http://www.brainyquote.com/quotes/quotes/r/ronmoody260295.html] , actor (Fagin in film musical "Oliver")
* Anthony Newley (1931 - 1999) English actor, singer & songwriter
*Sophie Okonedo (1969 - ) Academy Award-nominated actress ("Hotel Rwanda") [http://www.interfaithfamily.com/site/apps/nl/content2.asp?c=ekLSK5MLIrG&b=297399&ct=3420185]
* Nathalie Press [http://www.dailyjews.com/articles/195_song_of_songs_review.htm] , actress
* Daniel Radcliffe (1989 - ) English actor ("Harry Potter") [Alex Kasriel and Emily Rhodes, "A nice Jewish wizard", "The Jewish Chronicle", 22 December 2006, p.2]
* Antony Sher [http://www.imdb.com/name/nm0792029/] , actor
* Ione Skye [http://www.yogajournal.com/views/301.cfm] , actress (UK-born; Jewish mother)
* Janet Suzman [http://www.bbc.co.uk/religion/programmes/belief/scripts/beliefsuzman.html] , actress
* Elizabeth Taylor [http://www.adherents.com/people/pt/Elizabeth_Taylor.html] [http://www.amuseum.org/jahf/nomination/elvis_article.html] , actress (English-born; Convert to Judaism)
* Rachel Weisz [http://bangitout.com/tribeca2003/shape.html] , Oscar-winning actress
* Sam Wanamaker [http://www.wpi.edu/Academics/Depts/HUA/TT/Globe/32.html] , actor
* Zoe Wanamaker [http://www.zoewanamaker.com/bio/biomain.html] , actress
* David Warner, actor best known as "Jennings" in The Omen
* Naomi Westerman [http://theaion.nytka.org/naomiwesterman/naomiwesterman.html] , actress
* Henry Woolf [ [http://orangecow.org/pythonet/rwt/rwtguide.html] "Woolf is angry at always having to be the little guy, and Jewish at that." Accessed 27 October 2006.
"Jewish Chronicle", March 17 2000 p.43: "Home in Homerton was next door to a local Moseleyite. "My first memory at five years old," says Woolf, "is her hitting me over the head with a tennis racket. I said 'What did you do that for?' She said, 'It's nothing personal, it's because you're Jewish.' I understand that she had done it for ideological reasons."] , actor
Television actors
* Sacha Baron Cohen [http://www.macuser.co.uk/macuser/news/81327/sacha-baron-cohen-shut-down-by-kazakhstan.html] , British comedian, notable for his comedy characters Ali G and Borat; the latter is portrayed as extremely anti-Semitic.
* Steven Berkoff (1937 - ) actor, writer and director
* Lionel Blair [http://www.somethingjewish.co.uk/articles/359_lionel_blair.htm] , TV entertainer
* Stephen Fry [http://observer.guardian.co.uk/uk_news/story/0,6903,1499581,00.html] , comedian & actor (Jewish mother)
* Henry Goodman ["Jewish Chronicle", 28/09/2005, "Diary" p.66, "Could there a hint of racial stereotyping in the Almeida's decision to cast two Jewish actors — Ronni Ancona and Henry Goodman — in its upcoming production of The Hypochondriac?"] , actor
* Lesley Joseph [http://72.14.207.104/search?q=cache:Bh674M6_amoJ:www.totallyjewish.com/lifestyle/culture/%3Fdisp_feature%3DIINXNk+%22Lesley+Joseph%22+Jewish&hl=en] , Dorian in Birds of a Feather
* Miriam Karlin (1925 - ) actress ("The Rag Trade")
* Paul Kaye (1965 - ) comedian and writer
*Robert Kazinsky, television actor ("EastEnders") [Kazinsky - [http://www.totallyjewish.com/news/national/?content_id=3613] "Jewish Actor Joins EastEnders... The 22-year-old actor – who can speak Hebrew and who starred in a mobile phone commercial in Israel..."; mentioned as one of several Jewish actors on "EastEnders" at [http://www.ynetnews.com/articles/0,7340,L-3296127,00.html] ]
* Felicity Kendal [http://www.telegraph.co.uk/arts/main.jhtml?xml=/arts/2002/01/15/bteli.xml] , actress (convert to Judaism)
* Maureen Lipman (1946 - ) film, television & theatre actress
* Kay Mellor [http://www.somethingjewish.co.uk/articles/801_sj_super_7.htm] , actress & scriptwriter
* Warren Mitchell [http://theavengers.tv/forever/pnote-mitchell.htm] , Alf Garnett in Till Death Us Do Part
* Tracy-Ann Oberman, actress: "Jewish Chronicle", 30 June 2006 p36: "Tribal beat: Showbiz Jews in the news"
* Andrew Sachs (1930 - ) German-born English actor, Manuel in Fawlty Towers
* Emma Samms [http://72.14.207.104/search?q=cache:TwX3OuXzwXEJ:www.joancollinsfanclub.com/Welcome_Page/Contents/Dynasty/Fallon_Colby_-_Emma_Samms/fallon_colby_-_emma_samms.html+%22Emma+Samms%22+jewish&hl=en] , TV actress
*Georgia Slowe actress [Slowe - [http://www.totallyjewish.com/news/national/?content_id=4268] "another Jewish actor... Georgia Slowe"] Perdita in Emmerdale
TV and Radio Presenters
* Dani Behr (1971 - ) TV presenter, actress and singer ["Variety Club", "The Jewish Chronicle" colour supplement "350 years", 15 December 2006, pp.28-29]
*Benjamin Cohen [http://www.channel4.com/news/about_us/meet-the-team/benjamin-cohen.html] , Channel 4 News reporter and presenter
* Vanessa Feltz [http://www.somethingjewish.co.uk/articles/1612_why_did_vanessa_do_i.htm] , TV presenter
* Alex Kramer [http://www.ukgameshows.com/page/index.php/Alex_Kramer] , TV presenter
* Jerry Springer [http://wcpo.com/wcpo/localshows/iteam/11072004_springer.html] , TV presenter (UK-born)
* Sharon Osbourne, (Jewish father) - wife of Ozzy Osbourne, former talk show host, and star of "The Osbournes" [ [http://www.forward.com/issues/2002/02.11.29/featherman.html] "Sharon Osbourne elaborated on her background in a November 16 interview with The Scotsman daily. She is described as "the daughter of the infamously hard-nosed music promoter Don Arden, who shaped the careers of Gene Vincent, The Small Faces" and the Electric Light Orchestra. "People expected Ozzy to have a big- [bosomed] , blonde trophy wife," she said, "and instead he's got me, a short, fat, hairy half Jew. I had a lot to fight against in this industry."]
* Esther Rantzen [http://www.norwood.org.uk/news_f_annual_dinner3.htm] , presenter of "That's Life!", founder of ChildLine
* Gaby Roslin [http://www.somethingjewish.co.uk/articles/513_jewish_showbiz_news.htm] , TV presenter
* Natasha Kaplinsky [http://www.nationalarchives.gov.uk/familyhistory/bbc/series-four/natasha-kaplinsky.asp] , TV presenter, Newsreader
Directors/producers/executives
* Jenny Abramsky [http://www.jewishprblog.com/2005/11/jewish_cares_wo.html] , BBC executive
* Gerry Anderson [http://www.televisionheaven.co.uk/hisanderson.htm] , producer and puppeteer
* Daniel M. Angel [Obituary, "Jewish Chronicle", February 18, 2000, p.27: "He belonged to the West London Synagogue"] , film producer
* Sir Michael Balcon [http://www.somethingjewish.co.uk/articles/280_the_history_of_jewis.htm] , producer
* Sidney Bernstein [Encyclopaedia Judaica, 2nd ed] , cinema owner & founder of Granada Television
* Bernard Delfont [http://www.eastlondonhistory.com/bernard%20delfont.htm] , impresario
* Oscar Deutsch [http://www.guardian.co.uk/arts/critic/feature/0,1169,717532,00.html] , founder of Odeon Cinemas
* David Elstein [http://www.guardian.co.uk/religion/Story/0,,200475,00.html Guardian, Saturday October 23, 1999] , founder of Channel 5
* Stephen Frears [http://www.jewishjournal.com/home/searchview.php?id=7499] , director (Jewish mother; Frears only discovered this as an adult)
* Jonathan Glazer [http://www.jewishjournal.com/home/preview.php?id=7021] , director
* Leslie Grade [http://news.bbc.co.uk/1/hi/uk/236393.stm] , executive
* Lord Grade [http://www.space1999.net/~catacombs/main/crguide/vcp.html] , executive
* Michael Grade [http://www.liberaljudaism.org/news_newchairman.htm] , ITV Chairman
* Sir Jeremy Isaacs [http://www.guardian.co.uk/religion/Story/0,,200475,00.html Guardian, Saturday October 23, 1999] , TV executive
* Henry Jaglom [http://www.jewishsf.com/content/2-0-/module/displaystory/story_id/27761/format/html/displaystory.html] , director (UK-born)
* Sir Alexander Korda [http://www.jewishcomment.com/cgibin/news.cgi?id=14&command=shownews&newsid=518] , director & producer
* Zoltan Korda [http://www.jewishcomment.com/cgibin/news.cgi?id=14&command=shownews&newsid=518] , director
* Mike Leigh [http://film.guardian.co.uk/news/story/0,12589,1384893,00.html] , director
* Richard Lester [http://www.jewhoo.com/] , director
* Jonathan Lynn [http://www.jewishsf.com/bk030117/etceleb.shtml] , director
* Sam Mendes [http://famous.heebz.com/direct1.html] , director (Jewish mother)
* Emeric Pressburger [http://www.powell-pressburger.org/Reviews/Emeric/EnglandAndExile.html] , Oscar winning screenwriter, director & producer
* Irving Rapper ["Irving Rapper, the Oscar-winning American-Jewish film director" "Jewish Chronicle", 10 Feb 1961 p30] , Oscar-winning film director; born in Britain
* Karel Reisz [http://www.guardian.co.uk/obituaries/story/0,3604,849066,00.html] , director
* John Schlesinger [http://www.adherents.com/people/ps/John_Schlesinger.html] , director
* Vivian Van Damm [ [http://arts.guardian.co.uk/filmandmusic/story/0,,1644514,00.html] : "Van Damm, who came from a middle-class London family of Dutch Jewish origin"]
* Michael Winner [http://www.foyles.co.uk/foyles/display.asp?K=700000000091392&TAG=&CID=] , director
* Alan Yentob [http://www.jewishobserver-la.com/WorldNews.html] , BBC executive
Comedians
* Simon Amstell [http://www.somethingjewish.co.uk/articles/956_sj_super_7.htm] , comedian:"Eagle-eyed viewers of Channel 4's music show Popworld might have spotted its Jewish presenter Simon Amstell"
* Ronni Ancona ["Jewish Chronicle", 28/09/2005, "Diary" p.66, "Could there a hint of racial stereotyping in the Almeida's decision to cast two Jewish actors — Ronni Ancona and Henry Goodman — in its upcoming production of The Hypochondriac?"] , impressionist
* David Baddiel [http://www.bbc.co.uk/history/familyhistory/wdytya_celeb_gallery_07.shtml]
* Sacha Baron Cohen [http://observer.guardian.co.uk/comment/story/0,6903,656096,00.html] , impressionist, Ali G and Borat
* Arnold Brown [http://www.jeremyhicks.com/arnoldbrown/biog.htm]
* Sam Costa, comedian: "Jewish Chronicle", 2/10/1981 p24 (obituary); "He was a very proud Jew"
* Ben Elton [http://film.guardian.co.uk/Feature_Story/feature_story/0,4120,326394,00.html] , comedian & writer
* Marty Feldman [http://www.mynewham.co.uk/newham/celebs&gossip-marty_feldman.htm] , comedian and actor
* Bud Flanagan [http://www.dailyjews.com/articles/24_maybe_it_s_because_i.htm] , comedian & actor
* Paul Kaye, comedian & actor, Dennis Pennis (The Herald (Glasgow); 07/05/05; Andy Dougan; p. 8)
* Matt Lucas [http://www.totallyjewish.com/entertainment/TJ_gold/?content_id=300]
* Bernard Manning [http://news.bbc.co.uk/1/hi/entertainment/6766611.stm] comedian and nightclub owner
* Denis Norden [http://www.stopthebnp.org.uk/index.php?location=news&art=274] , scriptwriter and radio & TV personality
* Jerry Sadowitz [http://enjoyment.independent.co.uk/theatre/interviews/story.jsp?story=520008]
* Alexei Sayle [http://ironbark.bendigo.latrobe.edu.au/~agre3/alexei/]
* Peter Sellers [http://www.somethingjewish.co.uk/articles/1312_golden_nominations_f.htm] , comedian & actor
* Bernie Winters [http://www.theboard.org.uk/index.php?cat=9]
* Mike Winters [http://www.theboard.org.uk/index.php?cat=9]
Theatre
* Jacob Adler [Adler, Jacob, A Life on the Stage: A Memoir, translated and with commentary by Lulla Rosenfeld, Knopf, New York, 1999, ISBN 0-679-41351-0.] , Yiddish actor
* Alain Boublil [http://www.jinfo.org/Composers.html] , author and lyricist
* Peter Brook [http://www.jewish-theatre.com/visitor/article_display.aspx?articleID=667] , director
* Maria Friedman [http://www.aboutmaria.com/i-womanshourapril02.html] , musical theatre actress
* Hermione Gingold [ [http://www.projectshaw.com/AboutGTG.php] : "she was the daughter of an upper-class Austrian born Jewish financier Lionel Gingold and English-born Kate Walters."; Oxford Dictionary of National Biography: "Her mother was Jewish."] , actress
* Henry Goodman [http://www.time.com/time/europe/magazine/article/0,13005,901020325-218410,00.html] , actor
* Augustus Harris [http://www.jewishencyclopedia.com/view.jsp?artid=299&letter=H] , actor and theatre manager; son of Augustus Glossop Harris
* Nicholas Hytner [http://www.guardian.co.uk/arts/features/story/0,11710,926534,00.html] , director
* Jonathan Miller [http://www.granta.com/extracts/159] , director
* Robert Rietti [http://www.mesorahcenter.com/about.html] , actor
* Anthony Sher [http://www.jewishbookweek.com/2005/130305i.php] , actor
* Meier Tzelniker [ [http://ajcarchives.org/AJC_DATA/Files/1953_13_SouthAfrica.pdf] "the distinguished Jewish actor, Meier Tzelniker" Accessed 16 Dec 2006] , Yiddish actor
* Sam Wanamaker [http://www.wpi.edu/Academics/Depts/HUA/TT/Globe/32.html] , actor - "The Globe Theatre" project
Radio broadcasters
* Jenny Abramsky, BBC Director of Radio
* Rabbi Lionel Blue [http://www.bbc.co.uk/religion/programmes/sog/blue1.shtml] , radio broadcaster
* Jono Coleman [http://www.somethingjewish.co.uk/articles/1612_why_did_vanessa_do_i.htm] , radio broadcaster
* David Prever radio broadcaster
* Mark Damazer, Controller BBC Radio 4 and BBC 7 [http://www.somethingjewish.co.uk/articles/1201_sj_super_7.htm]
* Sir Clement Freud (1973) [http://www.hampsteadtheatre.com/prod-productions_details.asp?pid=35] , radio broadcaster
* David Jacobs, radio broadcaster (JYB 2005 p256)
* Ludwig Karl Koch [Oxford Dictionary of National Biography: "Being a Jew, Koch's life under the Nazi regime became increasingly intolerable"] , broadcaster and sound recordist
* Robin Lustig, radio broadcaster, BBC Radio 4
* Mike Mendoza [https://www.jta.org/page_view_story.asp?strwebhead=It%92s+the+Jewish+hour%2C+chap&intcategoryid=2&SearchOptimize=Jewish+News] , TalkSport Radio
* Charlie Wolf [http://72.14.207.104/search?q=cache:dGok2vWzd3cJ:www.totallyjewish.com/lifestyle/features/%3Fdisp_feature%3DNGai8M+%22Charlie+Wolf%22+Jewish&hl=en] , TalkSport Radio
Popular musicians
* Larry Adler [Oxford Dictionary of National Biography] , harmonica player (American born; naturalised British)
* Ambrose, bandleader (Obituary: "Jewish Chronicle" 18/6/1971 p35)
* Craig David, singer and songwriter
* Nicole, Natalie Appleton & Melanie Blatt [http://www.jewsrock.org/index.cfm?fuseaction=challah.view&page=A] , members of All Saints
* Stanley Black [CD sleeve note cited at [http://shopping.yahoo.com/p:Music%20Of%20A%20People%20%2F%20Spirit%20Of%20A%20People:1922088075] : "Born Solomon Schwartz on June 14, 1913 in Whitechapel within London's Jewish East End, he was directly descended from Polish and Rumanian Jews."] , bandleader
* Marc Bolan [http://www.jmi.org.uk/performance/] , member of T. Rex
* Elkie Brooks, singer; "Jewish Chronicle" 14/2/1992 p10:"Elkie Brooks and Graham Gouldman are two Jewish pop star graduates of Sedgley Park Primary School, Prestwich"
* Ian Broudie [http://www.somethingjewish.co.uk/articles/329_ian_broudie.htm] , member of The Lightning Seeds
* Pete Burns [http://www.somethingjewish.co.uk/articles/1702_vote_for_pete_s_sake.htm] of Dead or Alive (German Jewish mother)
* Ben Butcher, Reading-born singer
* Johnny Clegg [http://www.popmatters.com/columns/sassen/021002.shtml] , UK-born South African musician
* Alma Cogan [http://www.jmi.org.uk/performance/] , singer
* Mike D'Abo, former lead singer of Manfred Mann and sang on their hit "The Mighty Quinn"
* Leonard Feather [Oxford Dictionary of National Biography: "He was brought up in a strictly conformist upper-middle-class Jewish family"] writer on jazz, jazz pianist and composer,
* Victor Feldman [http://www.ronniescotts.co.uk/ronnie_scotts/ronniescotts/147/15.htm] , jazz musician
* Justine Frischmann [http://72.14.207.104/search?q=cache:uOmlwnRIaZ8J:totallyjewish.com/lifestyle/features/%3Fdisp_feature%3DhXuxXd+%22Justine+Frischmann%22+Jewish&hl=en&gl=ca&ct=clnk&cd=6] , member of Elastica
* Graham Gouldman, Lol Creme & Kevin Godley [http://www.johnbruinsma.nl/gginterview.html] , members of 10cc. Gouldman wrote many 1960s hits, such as "Bus Stop" and "Look Through Any Window" for The Hollies; "Heart Full of Soul", "For Your Love" and "Evil-Hearted You" for The Yardbirds; and "No Milk Today" for Herman's Hermits.
* Benny Green [Oxford Dictionary of National Biography: "Here he grew into the streetwise but sentimental cockney-Jewish character"] , saxophonist and broadcaster
* Mick Green [http://www.jewhoo.com/] , guitarist for Johnny Kidd and the Pirates
* Peter Green, guitarist Fleetwood Mac (Celmins 2003)
* Terry Hall, lead vocalist and songwriter of The Specials, Fun Boy Three and The Colourfield (mother is of German Jewish descent).
* Steffan Halperin, drummer for Klaxons [http://www.somethingjewish.co.uk/articles/2488_mercury_jewish_winne.htm]
* Dick James [http://www.jewishtribalreview.org/beatles.htm] , singer and music publisher
* Mick Jones, guitarist, vocalist The Clash (mother is of Russian Jewish descent).
* Laurence Juber [http://www.laurencejuber.com/] , Guitarist, former member of Wings
* Jason Kay [http://www.jewsrock.org/index.cfm?fuseaction=challah.view&page=J] , member of Jamiroquai
* Mark Knopfler, guitarist, singer and songwriter
* Paul Kossoff [http://www.jewish-theatre.com/visitor/article_display.aspx?articleID=1296] , member of Free, son of actor David Kossoff
*Keith Levene, guitarist; founding member of The Flowers of Romance (with Sid Vicious), The Clash and Public Image Ltd (with John Lydon) (Jewish father) [http://www.fodderstompf.com/ARCHIVES/INTERVIEWS/nme780.htm]
* Joe Loss [http://www.jmwc.org/announcements/2007/03/_study_jewish_m.html] , bandleader
* Manfred Mann [http://www.sdjewishjournal.com/stories/sept05_3.html] , R&B keyboardist
* George Michael [http://www.washingtonjewishweek.com/main.asp?SectionID=4&SubSectionID=4&ArticleID=6508&TM=113.596] , singer, songwriter, former member of Wham (Jewish mother).
* Jon Moss [http://www.jewsrock.org/index.cfm?fuseaction=challah.view&page=C] , member of Culture Club
* Keith Reid & Matthew Fisher [http://www.sdjewishjournal.com/stories/sept05_3.html] , founding members of Procol Harum
* Gavin Rossdale [http://www.jewsrock.org/index.cfm?fuseaction=challah.view&page=B] , member of Bush
* Ronnie Scott [http://users.rcn.com/jazzinfo/0105/0105BR.html]
* Helen Shapiro [http://www.mannamusic.co.uk/walkingback/walkingback.htm] , singer
* John Silver former Genesis member
* Rachel Stevens [http://www.somethingjewish.co.uk/articles/1233_rachel_stevens_inter.htm] , singer & former member of S Club 7
* Joss Stone [http://www.liberaljudaism.org/education_anglo_jewry.htm] , singer/songwriter
* Lewis Taylor [http://www.guardian.co.uk/friday_review/story/0,3605,358476,00.html] , singer/songwriter
* Frankie Vaughan [http://myweb.tiscali.co.uk/britmusical/frankievaughantimes.htm] , singer
* Warren Wald, pop idol contestant
* Louise Wener [http://vu.morrissey-solo.com/sleeper/2000/music/disc/misc/mcqueen.htm] , singer with group Sleeper & novelist
* Amy Winehouse [http://www.totallyjewish.com/entertainment/TJ_gold/?content_id=294] , singer/songwriter
* Sister Bliss, real name Ila Ben-Tovim, born in Haifa, Israel
Producers/managers
* Don Arden [http://english.sem40.ru/jewish_fortune/8369/] , music promoter and former Black Sabbath manager
* Chris Blackwell [http://www.somethingjewish.co.uk/articles/381_evan_and_jaron.htm] , founder of Island Records (Jewish mother; raised Jewish)
* Brian Epstein [http://news.bbc.co.uk/1/hi/entertainment/725873.stm] , manager of the Beatles
* Harvey Goldsmith [http://totallyjewish.com/lifestyle/features/?disp_feature=THBXeN] , rock impresario
* Trevor Horn [http://www.telegraph.co.uk/arts/main.jhtml?xml=/arts/2004/11/01/bmwk30.xml] , founder of ZTT Records (not ethnically Jewish, but attends synagogue)
* Nathan Joseph, founder of Transatlantic Records [Obituary, "Jewish Chronicle", Nov 11 2005, p.27]
* Malcolm McLaren [http://www.jewsrock.org/index.cfm?fuseaction=words.view&wordid=11A21299-D20E-4019-90F0B8DCDB4853A5] , manager of the Sex Pistols (Jewish mother; raised Jewish)
* Daniel Miller [http://www.imomus.com/index33.html] , founder of Mute Records
* Mark Ronson D.J/ Producer [http://www.jewtastic.com/posts/18556]
* Andrew Loog Oldham [http://www.guardian.co.uk/Archive/Article/0,4273,4021812,00.html] , manager of the Rolling Stones (raised by Jewish mother)
* Paul Stein-Dunville [http://members.iinet.net.au/~maggra/cgs-dunville.htm] , classical guitarist
Classical musicians
* Gerald Abraham [Encyclopaedia Judaica] , musicologist
* Elias Parish Alvars [ [http://www.jewishencyclopedia.com/view.jsp?artid=78&letter=P JewishEncyclopedia] ] , composer
* John Barnett [Oxford Dictionary of National Biography: "the eldest son of a German Jewish diamond merchant"] , composer
* Alvise Bassano, musician [http://www.hoasm.org/IVM/BassanoAlvise.html]
* Anthony Bassano, musician [http://www.hoasm.org/IVM/BassanoAlvise.html]
* Baptista Bassano, musician [http://www.hoasm.org/IVM/BassanoAlvise.html]
* Julius Benedict, composer [http://www.jinfo.org/Conductors.html]
* Maria Bland, singer [Concise Dictionary of National Biography: "daughter of Italian Jews named Romanzini"]
* John Braham [http://www.jmi.org.uk/performance/unchartered07.html] , singer
* Norbert Brainin [http://www.schillerinstitute.org/programs/program_brainin_6_6_90.html] , violinist
* Giacobbe Cervetto [Concise Dictionary of National Biography: "an Italian Jew"; lived in London for over 40 years] , cellist
* Harriet Cohen [http://www.jinfo.org/Pianists.html] , pianist
* Michael Costa [http://www.jinfo.org/Conductors.html] , conductor and composer
* Frederic Hymen Cowen [http://www.jewishencyclopedia.com/view.jsp?artid=842&letter=C] , composer
* Solomon Cutner [http://members.macconnect.com/users/j/jimbob/classical/solomon.html] , known as Solomon, pianist
* Jacqueline du Pré [http://www.jinfo.org/Cellists.html] , cellist (convert to Judaism)
* Harry Farjeon [Oxford Dictionary of National Biography, art. "Farjeon, Benjamin"] , composer (Jewish father)
* Gerald Finzi [http://www.jochnowitz.net/Essays/Bruno.html] , composer
* Norma Fisher, [http://www.jmi.org.uk/jminews/jminews_3.html] , pianist
* Benjamin Frankel [http://www.jewishsf.com/content/2-0-/module/displaystory/story_id/6201/edition_id/115/format/html/displaystory.html] , composer
* Walter Goehr [http://www.jmi.org.uk/jminews/jminews_5.html] , composer
* Alexander Goehr [Jewish Chronicle, July 13 2001 p.25 "two Jewish composers, Alexander Goehr and Robert Saxton"] , composer, his son
* Berthold Goldschmidt [Oxford Dictionary of National Biography: "His was a cultured, musical Jewish family"] , composer
* George Henschel [http://www.jinfo.org/Conductors.html] , singer & conductor
* Myra Hess [http://www.jinfo.org/Pianists.html] , pianist
* Gerard Hoffnung [http://www.chrisbeetles.com/pictures/artists/Hoffnung_Gerard/Hoffnung_Gerard.htm] , musicologist
* Steven Isserlis [http://www.jinfo.org/Cellists.html] , cellist
* Hans Keller [Oxford Dictionary of National Biography: "he described himself as an 'unpious Jew'"] , musicologist
* Isidore de Lara [http://www.jewishencyclopedia.com/view.jsp?artid=73&letter=L&search=de%20lara] , composer
* Yehudi Menuhin [http://www.amazon.com/gp/product/1555534651] , Lord Menuhin of Stoke d'Abernon; conductor & violinist (UK-based)
* Benno Moiseiwitsch [http://www.jinfo.org/Pianists.html] , pianist
* Isaac Nathan [Oxford Dictionary of National Biography: "born in Canterbury, Kent, of Polish–Jewish descent. His parents intended him to become a rabbi"]
* Yfrah Neaman [http://members.aol.com/violinwettbewerb/Neaman_engl.htm] , violinist & teacher
* Michael Nyman [http://www.jinfo.org/Composers.html] , composer
* Murray Perahia [http://www.jinfo.org/Pianists.html] , pianist (UK-based)
* Landon Ronald [http://www.jinfo.org/Conductors.html] , conductor & composer
*Henry Russell (musician), pianist, baritone singer and composer.
* Robert Saxton [Jewish Chronicle, July 13 2001 p.25 "two Jewish composers, Alexander Goehr and Robert Saxton"]
* Rudolf Schwarz ["Jewish Chronicle", February 16, 2007, p.14: "he carried on as the sole Jewish conductor of the Kulturbund"] , conductor
* Sir Georg Solti [http://www.jinfo.org/Conductors.html] , conductor
* Walter Susskind (1913 - 1980) [ [http://www.bach-cantatas.com/Bio/Susskind-Walter.htm Bach cantatas site] "The distinguished Czech-born English conductor" [http://www.lakeplacidfilmforum.com/html/films.html Lake Placid Film Forum] "Walter Susskind, a German Jew" Both accessed 4 Jan 2007] , conductor
* Richard Tauber, singer and composer (naturalised British citizen, 1940) ["The Penguin Dictionary of Musical Performers", Arthur Jacobs, ISBN 0-14-051160-1, "Under threat as a Jew from Nazi persecution, settled in Britain, 1938."]
* Lionel Tertis [http://www.jinfo.org/Violinists.html] , violist
* Simon Waley Waley [Concise Dictionary of National Biography: "a leading member of the London Jews"] , musician
* Egon Wellesz [http://www.highbeam.com/library/doc0.asp?DOCID=1G1:111223932&num=7] , composer
* Benjamin Zander [http://www.benjaminzander.com/news/detail.asp?id=236] , music director
Songwriters and lyricists
* Lionel Bart [http://myweb.lsbu.ac.uk/~stafflag/lionelbart.html] , musical writer
* Don Black [http://enjoyment.independent.co.uk/theatre/interviews/article180741.ece] , lyricist
*Graham Gouldman, songwriter of 1960s hits for The Hollies, The Yardbirds and Herman's Hermits
* Eric Maschwitz [http://72.14.207.104/search?q=cache:MasTddAEw0gJ:www.irdp.co.uk/GIELGUD/valbbc9.htm+%22Eric+Maschwitz%22+Jewish&hl=en] , lyricist, writer & broadcaster
* Mitch Murray
* Monty Norman [http://www.totallyjewish.com/news/national/?content_id=2033] , lyricist, composer & singer (creator of "The James Bond Theme")
* David Rose [http://www.jewhoo.com/get_entry_information.asp?p_category_id=&p_parent_id=] , songwriter & composer
* Jule Styne [http://www.jewhoo.com/editor/columns/121499.html] , songwriter (UK-born)
Dance
* Celia Franca [Obituary, "Jewish Chronicle", Apr 13 2007, p.20] , ballerina
* Alicia Markova [http://www.findarticles.com/p/articles/mi_qn4158/is_20041203/ai_n12819919] , ballerina
* Marie Rambert [ [http://www.musikinesis.com/October06%20article.htm] : "She was Jewish" Accessed 9 Feb 2007] , ballerina
* Lotte Berk, dancer and health guru [Oxford Dictionary of National Biography: "the only daughter of Jewish parents"]
* Caprice Bourret [http://www.newstatesman.com/200003060014] , model (American born & raised)
* David Bret [http://www.davidbret.com] biographer and chansonnier (Jewish father)
* Sharon Osbourne - wife of Ozzy Osbourne, former talk show host, and star of "The Osbournes" [ [http://www.forward.com/issues/2002/02.11.29/featherman.html] "Sharon Osbourne elaborated on her background in a November 16 interview with The Scotsman daily. She is described as "the daughter of the infamously hard-nosed music promoter Don Arden, who shaped the careers of Gene Vincent, The Small Faces" and the Electric Light Orchestra. "People expected Ozzy to have a big- [bosomed] , blonde trophy wife," she said, "and instead he's got me, a short, fat, hairy half Jew. I had a lot to fight against in this industry."]
* Hedi Stadlen [http://www.timesonline.co.uk/article/0,,60-983745,00.html] , musicologist, philosopher and Communist.
* JYB = Jewish Year Book
* TimesAd: "The Times", 6/7/06 p34: "A Call by Jews in Britain" (advert signed by 300 British Jews)
* [http://www.jinfo.org Jinfo]
* Lists of Jews
* List of British Jews
# WHO STOLE HALLOWEEN?
**MARTHA FREEMAN**
Copyright © 2005 by Martha Freeman
All Rights Reserved
HOLIDAY HOUSE is registered in the U.S. Patent and Trademark Office.
www.holidayhouse.com
ISBN 978-0-8234-2438-2 (ebook)
ISBN 978-0-8234-2681-2 (ebook)
Library of Congress Cataloging-in-Publication Data
Freeman, Martha, 1956–
Who stole Halloween? / by Martha Freeman
p. cm.
Summary: When eleven-year-old Alex and his friend Yasmeen investigate
the disappearance of cats in their neighborhood, they stumble onto
a larger mystery involving a haunted house and a ghostly cat.
ISBN 0-8234-1962-2 (hardcover)
[1. Cats—Fiction. 2. Kidnapping—Fiction. 3. Ghosts—Fiction.
4. Haunted houses—Fiction. 5. Halloween—Fiction.
6. Mystery and detective stories.] I. Title.
PZ7.F87496Wj 2005 [Fic]–dc22 2004060560
ISBN 978-0-8234-1962-3 (hardcover)
ISBN 978-0-8234-2170-1 (paperback)
For my neighbors
in State College, Pennsylvania.
You are always an inspiration.
## Chapter One
Cats make excellent friends—except for one thing. They are bad explainers. Yasmeen says this is because a cat's whole vocabulary is only meow, purr, and hiss. She says meow, purr, and hiss are inadequate for good explanations.
Yasmeen is my best friend who happens to be a girl. She is smarter than me, but this time she's wrong. When he feels like it, my cat can tell me a lot with only a lazy blink or a quick swish of his tail. The trouble is that most of the time he doesn't feel like it.
The real reason cats are bad explainers is simple: They are too impatient. The way a cat figures, if he understands something, you should understand it, too. And if you don't, then you are not worth his trouble.
I was thinking these thoughts on a gray and spooky October afternoon, the kind when the trees look sort of like skeletons and the shadows look like ghosts. Yasmeen and I were running side by side, chasing my cat, Luau.
So far, Luau had not bothered to explain where he was going or why, or whether we were supposed to follow him or what.
"What's your theory?" Yasmeen asked me. "What's he up to?"
Yasmeen is tall, skinny, and fast, while I am none of the above. I was struggling to keep up, gasping for breath. "I only hope . . . it's not over . . . to St. Bernard's," I said. "That place . . . gives me the creeps."
St. Bernard's is an old church near my street. Behind it is a just-as-old cemetery. I had hardly finished saying "the creeps" when Luau made a right turn and loped through the cemetery gate.
I swear, sometimes my cat has a nasty sense of humor.
Yasmeen laughed. "He's going to St. Bernard's all right." Then she ran ahead of me through the gate, warbling like some soprano werewolf, waving her arms over her head.
Being cool the way I am, I ignored her behavior. Unfortunately, I was so busy ignoring her behavior that I didn't see a broken headstone and I tripped.
" _Oh!_ Oh, shoot—Alex, are you okay? Oh my gosh, you're bleeding!" Yasmeen had run back and knelt next to me. "I have Band-Aids," she said.
My hands hurt, but surprise stifled my tears. "You have Band-Aids?"
"I started keeping them in my pocket for emergencies," she said. "It's a crazy world, Alex. Anything might happen."
Yasmeen dabbed my scratches with antiseptic wipes—she had those, too—and smoothed on three Band-Aids. I expected Luau to be gone by the time she was done, but when I stood up, I spotted him sitting by a statue of a grumpy-looking angel, washing his face.
"I don't get your feline," Yasmeen said.
"You don't think maybe he's doing his ace-detective thing again?" I asked.
Yasmeen grinned. "I hope so."
Luau seemed to be totally focused on personal hygiene, so, all sneaky, we crept toward him. We were about ten feet away when he looked up at us, which meant, _Oh, come on, guys_ — _as if I didn't see you stalking me! I'm a_ cat _! We invented stalking!_
Then he took one more swipe at his ear and bounded away.
Where was he going? It wasn't so long ago that my ace-detective cat had helped Yasmeen and me solve a mystery. Now he was so stuck-up he expected us to follow him anywhere, even into a deep, dark cemetery.
The wind made the dry leaves dance and rearranged the clouds. It also gave me goose bumps. Or was it being in a cemetery a week before Halloween that did that? Sometimes my imagination gets carried away. Everywhere Yasmeen and I ran, we were stomping on dead people, weren't we? And where there are dead people, there are ghosts and ghouls and zombies.
_"There!"_ Yasmeen said. She stopped under an oak tree and pointed at Luau. By now, he had doubled back and was sitting next to a big, elaborate headstone beside the grumpy angel. It wasn't the stone that caught my attention, though. What I noticed was what was stuck to the back of it—some kind of flyer with a picture. Why would somebody attach a flyer to a headstone, anyway?
Luau stretched and swished his tail and looked at us, which meant, _Why don't you read me what it says?_
If I had been by myself, I would have called Luau to come, then turned around and gone home. But Yasmeen was never going to let me get away with that. She just loves a mystery, the stranger the better. And guess what? The flyer on the gravestone was the start of another big mystery, one that would get me, and Yasmeen, and especially Luau into grave, grave trouble.
## Chapter Two
Yasmeen was disappointed.
"A flyer posted on a gravestone— _that_ would have been mysterious," she said. "But I guess it was only the wind holding it there. It must've blown through the fence or something."
She held the paper up. Under a photocopied picture of a sleek black cat were the words:
**Please bring back Halloween!**
**Beloved pet, last seen October 22.**
**Call Kyle Richmond.**
**No questions asked!**
Then there was a phone number and an address on Groundhog Drive.
"Isn't that near Ari's house?" Yasmeen asked me.
"Yeah," I said, "and I think I know Kyle from school—who he is anyway. Uh, can we go now?" The sun had sunk behind Mt. Lyon, and the light was fading fast. You can imagine how eager I was to be in a graveyard in the dark. "Come on, Luau. You ready?"
Luau side-rubbed my leg and looked up at me, which meant, _Can I have a ride, please? All that running has left me exhausted_. I picked him up and heaved him over my shoulder, which isn't as easy as it sounds. Luau is one of those big-shouldered, muscley cats. He's not fat, but he weighs a ton.
We started walking. Luau purred. Yasmeen lectured: "There's no such thing as ghosts, you know. They are merely figments of a vivid imagination."
Yasmeen talks like that a lot. Her mom is a librarian, and her dad is an English professor. Her family lives next door to mine, so we've been friends since we were babies. It's only because I've had so much practice that I, a regular kid, can even understand her.
"That's your opinion," I said. "But plenty of people have seen ghosts. Plus there's that house on Main Street; everyone knows it's haunted."
By now we were walking back through the cemetery gate. The moon had come out, and three bats flitted overhead.
"The Harvey house?" Yasmeen shook her head. "Mr. and Mrs. Blanco bought that, did you know? I bet they never have seen any ghosts there—and neither have I."
Mr. and Mrs. Blanco live on the same street as Yasmeen and me, Chickadee Court. "Are the Blancos moving?" I asked.
"Uh-uh," Yasmeen said. "They didn't buy the house to live in. They're opening some kind of fancy store. My dad calls it a health boutique."
I laughed. "Makes perfect sense. A boo-tique!"
Yasmeen didn't laugh.
"It's a joke," I explained. "Ghosts? Boo?"
"I get it," Yasmeen said.
"Then you should have laughed," I said, "to be polite."
"Ha-ha," Yasmeen said.
"Thank you," I said.
Luau shifted his weight, and his whiskers tickled my ear. Only two blocks and we'd be home. My arms looked forward to putting him down. But Yasmeen had another idea. "Let's do some detecting," she said.
_"No."_
"Oh, come on," she said. "Just a teensy-weensy bit of detecting. _Harmless_ detecting. I promise."
This was not a promise I could trust. And I definitely did _not_ want to get involved in another mystery.
Still, I couldn't help but wonder what Yasmeen was thinking. So I asked her, and she answered with a question: "Didn't you notice something unusual about the flyer? Aside from its being on the gravestone, I mean. Here, look."
I studied the paper for a few seconds. "Well, the wording is kind of weird," I said. "What kind of kid says 'beloved'? Oh—and it doesn't say 'LOST.' Most flyers like this say 'LOST' at the top in big letters."
Yasmeen nodded. "Let's stop off at the address on the flyer—at Kyle's house," she said. "It's not that far. Let's ask him if there was something strange about the cat's disappearance. I don't know why exactly, but I have this funny feeling."
"What did you have for lunch?" I asked her.
"Ha-ha," she said.
## Chapter Three
At Kyle's front door I shifted Luau on my shoulder and used my elbow to ring the bell. After a minute we heard footsteps inside, and then a boy older than Yasmeen and me answered. I recognized him from school, but Yasmeen asked, "Are you Kyle? From the flyer about the cat?"
The boy nodded. He was as tall and thin as Yasmeen, and he had brown eyes like hers, but his skin was as paper-pale as hers is cocoa-dark. He looked sad, and I wondered if he was sad about his cat or just sad in general.
"Halloween is a black cat," he said, "not an orange tiger like this guy. But thanks for trying."
It took a second before I realized Kyle thought we had found Luau and mistaken him for his own missing cat, Halloween. "We know this one's not yours," I said, "because he's mine. But my friend here—her name's Yasmeen—wants to ask you a couple of questions."
"We're detecting," Yasmeen said.
" _She_ is detecting," I corrected. "I am holding the cat."
"Don't you go to my school?" Kyle asked.
"I'm Alex," I said, "in Mrs. Timmons' class. We live over on Chickadee."
"What do you want to know?" Kyle asked.
Yasmeen got right to the point. "You didn't put 'LOST' on the flyer. Was there a reason?"
Kyle nodded. "Halloween isn't lost. Someone stole her."
"That's terrible!" Yasmeen said.
Without thinking, I clutched Luau tighter. Then I forgot I wasn't detecting, and I asked, "How do you know?"
Before Kyle could answer, a little girl came running down the stairs behind him, only stopping when she crashed into his knees. _"Pow! Got you!"_ she said to Kyle, then she looked up at us. "Who are . . . ? Hey, wait! I've seen you before. At school!"
"Not me," I said, but Yasmeen was nodding.
"Yup, I know you, too," she said. "You're Cammie. You go to preschool with my little brother."
Cammie smiled. "His name is Jeremiah. He is really weird."
Yasmeen nodded again. "Got that right."
"Why are you here?" Cammie asked.
"About Kyle's cat," Yasmeen said, "Halloween."
Cammie scowled. "Kyle is an old foo-foo head. He was _so mean_ —"
_"Mom!"_ Kyle hollered before Cammie could finish. When nobody answered, he said, "Excuse me a sec." Then he scooped up Cammie, who was wiggling and yelling, and carried her away.
"I'm sorry," he said when he came back. "She's, well . . . you know. Little kids."
Yasmeen said, "I know," but I didn't say anything because, actually, I don't know. Except for Luau, I'm an only child, and cats never act crazy the way kids do. "Anyway," Yasmeen returned to being a detective, "are you sure somebody stole Halloween?"
"I'm sure," Kyle said, "because I saw it happen. It was late at night. Something woke me, and I looked out the window. I saw Halloween out here on the porch. There was a moon, but no other light. I couldn't see very well, but I definitely saw someone stroke Halloween and then grab her."
"Did you run after him?" I asked.
Kyle shook his head. "I wish I had, but I was so surprised and—I guess—scared."
"Was it a grown-up?" Yasmeen asked.
"I think so," Kyle said. "But I don't know for sure if it was a man or a woman or . . ."
Like I said, Kyle was pale in the first place. But now—was it my imagination? Or did he get even paler?
"Or what?" I asked.
Kyle smiled, but it was a sick, embarrassed smile. "You'll think I'm crazy," he said.
"Try us," Yasmeen said.
Kyle took a breath. "Or a ghost," he said.
Yasmeen and I looked at each other because, of course, we _did_ think he was crazy. Kyle laughed a nervous laugh, then he shrugged and said, "It was dark."
"Whatever it was," Yasmeen said, "which way did, uh . . . _it_ run with your cat?"
"Toward the cemetery, but I don't know after that. He was fast. Even if I had tried, I couldn't have caught him."
"Did you tell your parents?" Yasmeen asked.
Kyle nodded. "I woke them up, but they thought I was dreaming. They said, 'You just wait, she'll be home in the morning.' "
"Sounds like parents," I said. "Did you call the police?"
"My parents did," Kyle said. "A guy came. I don't remember his name exactly. Pickles or something."
"Officer Krichels," I said. I know all the police officers because my mom's one, too, a detective.
"That's it," said Kyle. "He wrote everything down, but it's not like he expected it to do any good. You could tell."
"That was Friday—yesterday?" Yasmeen said.
Kyle nodded. "Halloween's been missing since Thursday night."
"Has anyone phoned you?" I asked. "Anyone who saw the flyer, I mean?"
"No." Kyle looked sadder than ever. "Poor cat. She's a good one, too. She never hunts birds, only mice, and she always comes when I call. Plus she's funny. Her meow is all gruff and squeaky—like a rusty old hinge."
Kyle sighed, and for a second we stood there feeling sad together. Then out of nowhere Yasmeen said, "Don't worry, Kyle. We'll find your cat."
Kyle looked at us. "You _will_?"
I looked at Yasmeen. "We _will_?"
_"Why did you tell him that?"_ I asked Yasmeen as soon as we were on the sidewalk.
"I couldn't help it, Alex," she said. "He looked so miserable."
"Not as miserable as he's gonna look when we don't find his cat!" I said.
"So we'll find his cat," Yasmeen said. "How hard can it be? We have a witness."
"Some witness," I said. "He thinks he saw a ghost! Besides, by now, how do we know the poor cat's even"—I put a hand over Luau's ears so he couldn't hear—" _alive_?"
## Chapter Four
My shoulder was half-numb by the time I set Luau down at home. But did Luau even _mrrrf_ his chauffeur a thank-you? He did not. Instead, tail in the air, he went to the kitchen to check out the action in his food dish.
Meanwhile, I could hear my parents upstairs. What were they laughing at, anyway?
"Hello?" I called.
More laughter. Then my dad answered, "Come on up, Alex. Get a load of your mom."
Luau followed me up the stairs to their bedroom. When I saw them, I thought they both had gone crazy. Mom was wearing what looked like black-and-white striped pajamas with a matching hat. Dad had on a police uniform that was too big for him. But the totally weirdest part was they were attached to each other with handcuffs.
"For once I'm the cop in the family," Dad said. "And she's my prisoner. Get it?"
"It was his idea," Mom said.
There is something freaky about seeing your parents in costume—like you want to ask, what happened to my _real_ parents?
"You better go get ready, too," Dad said.
"Ready for . . . ? Oh!" Then I remembered the party. It made me feel better to realize _why_ they were dressed up.
"The world's first-ever costume baby shower." Mom shook her head. "Leave it to Marjie Lee to come up with a harebrained idea—"
"Was it Marjie's idea?" Dad said. "I thought the hostess was that goofy friend of hers, the one that lives around the corner—what's her name?"
"You're probably right," Mom said. "Everybody calls her 'Miss' Deirdre because she teaches preschool. She's eccentric, but she's supposed to be a wonderful teacher. Anita Popp told me there's a waiting list to get into that school."
"I've never been to a baby shower," I said.
"They used to be women-only," Dad said. "But here in the new millennium, men and children have to go, too."
" _Have_ to?" Mom repeated.
" _Get_ to," Dad said quickly. "I meant _get_ to."
"You mean just like here in the new millennium, women _get_ to have careers?" Mom said.
Dad looked surprised. "You love your career," he said, "don't you?"
"Some days more than others," Mom said, "same as you love being home some days more than others."
"I had a good day," Dad said. "I did the grocery shopping, made the ears for Alex's costume, and fixed the leaky toilet in the downstairs bathroom. I guess your day wasn't so hot, though?"
"No, it sure wasn't," Mom said.
"What happened?" I asked her.
"Two missing cats," she said.
"You're kidding," I said, "because—"
Dad interrupted. "Is that all that went wrong?" he asked. "You seem pretty upset."
Luau bumped Dad's leg, which meant, _What could be more upsetting than missing cats?_
"It's not only the cats that got to me. It's where they were missing from." Mom paused, remembering something unpleasant. "I'll spare you the details, but it was a real strange coincidence. Two houses, opposite sides of town, but in both cases the cat owners seemed to me to be . . . how do I put it delicately? Negligent?"
"What's _negligent_?" I asked.
"Irresponsible. Like they didn't take such good care of their cats, didn't feed them well. I guess the bottom line is that they didn't seem like very nice people. And, I don't know, seeing animals treated badly? It's upsetting."
"How do you know they were bad cat owners?" I asked.
"There were other pets, too," Mom said. "A dog at one of the houses was chained to a tree—you could see its ribs, poor guy. At the other house there were some guinea pigs. . . ." Mom wrinkled up her nose. "Like I said, I'll spare you the details."
Dad said it seemed odd that people would call the police about cats they didn't even bother to care for properly. Mom was nodding. "I thought so, too," she said. "And if the cats had simply disappeared, neither owner would've bothered, I don't think. But the cats didn't just disappear. Both owners claim somebody sneaked onto their property at night, grabbed their cats, and ran."
"That's just what happened to Kyle who lives over on Groundhog!" I said, and then I explained about the flyer and visiting Kyle's house.
"You say Fred Krichels was already out there to talk to them?" Mom said. "Then I'd better sit down with him and compare notes. I hope it's not that Halloween business starting up again. I thought that old ghost story was forgotten by now."
"What old ghost story?" I asked, remembering what Kyle had said.
_"Hey."_ Dad looked at his watch. "We don't have time for ghost stories—not if there's going to be any food left at the party. Alex, run along and get dressed. _Scoot!_ "
All this time we'd been talking, Mom and Dad were still attached with the handcuffs. Now when Dad said "scoot," he made a sweeping motion that yanked Mom's arm along for the ride.
_"Ow!"_ Mom said.
_"Ow!"_ Dad rubbed his shoulder. "Uh, sorry."
"Give me the key," Mom said. "I still have to do my makeup."
" _You_ have the key," Dad said.
"Since when does the prisoner keep the key?" Mom asked.
"Oh, come on," Dad jangled the handcuffs. "Stop clowning, honey, and unlock them."
Mom looked at him. "Me?" she said. "Clowning?"
Dad made a face. "Uh-oh."
## Chapter Five
Our neighborhood is big on celebrations. We have a Christmas party and a Fourth of July picnic. We have an Easter egg hunt and a Passover dinner. We celebrate St. Patrick's Day and Chinese New Year.
And when there's something special like a new baby coming, there's a party for that, too.
The Lees live right next door—the other side from Yasmeen's family—but even so, we were late getting to their house. My parents hadn't found the handcuff key, and it took my mom a long time to do her makeup left-handed and attached to Dad. As we walked in the door they were both crabby and blaming each other.
Mrs. Ryan spotted them first and laughed. She was dressed like a little girl going to a party: short skirt, ankle socks, and a big bow in her hair. This made sense because Mrs. Ryan teaches first grade.
"Well, aren't you two _cute_?" she said. "Whose idea was it—don't tell me. Dan's? Am I right?"
Mom took a deep breath; Dad smiled an uncomfortable smile.
"What's wrong?" Mrs. Ryan said.
"Nothing," Mom said. "Everything is ducky."
"Noreen," Mrs. Ryan said to my mom, "I have known you for ten years, and something is much less than ducky. Wait a second—don't tell me—you've lost the key!"
I had to hand it to Mrs. Ryan. Not much gets past a first-grade teacher. Unfortunately, it did not improve my parents' mood when the next thing she did was crumple up in a laughing fit.
"Bill!" she called to her husband between cackles. "Come over here. You won't believe it!"
This was a bad time to hang out with my parents, so I aimed for the living room. The lights were dim there, and the whole place felt haunted. In the corners were gauzy spider-webs loaded with black plastic spiders. Fake bats dangled from the ceiling on elastic threads, so when you bumped them, they bounced. The food was creepy-looking, too: eyeball appetizers, hot dogs that looked like bloody fingers, Jell-O in the shape of a brain, a cake with a cardboard dagger stuck in it.
There were a lot of people around the food table, but Yasmeen was easy to spot. Her costume was bright yellow and black stripes; she was a bumblebee.
Before I could tell her about the two new catnappings, she frowned and said, "Okay, I give up. What are you supposed to be?"
"What do I look like?" I answered.
"A boy in orange sweat pants and an orange sweatshirt that doesn't exactly match, and you have two construction paper triangles on your head," she said.
I turned around and showed her my tail. "I'm Luau!" I said. _"Duh!"_
"Where are the claws?" Yasmeen said. "The sharp teeth? The intelligent expression?"
"Ha-ha," I said, and bit down on a taco chip.
Besides Yasmeen and Jeremiah and me, five other kids live on our street. There's Toby Lee, who is not quite three and about to be bugged by a new baby. There's Michael Jensen, who is rich and smart and, Yasmeen tells me, "really cute." There's Michael's little brother, Billy, who is always listening to his new iPod, so it's like he doesn't live here among us but in some other dimension. And there are the Sikora kids, Sophie and Byron. Sophie is the bad kid in the neighborhood, a year younger than Yasmeen and me, big for her age and spoiled. She can't walk into a room without breaking something, and she talks all the time. Her brother, Byron, is as quiet as wallpaper—I guess because Sophie has never let him talk.
All us kids were hanging out by the food, of course. Michael was dressed as Superman, and, wouldn't you know, Billy was a CD sandwich. His mom had covered two giant cardboard disks with foil, then suspended them on straps over his shoulders. I tried to tell him, "Good costume!" but he had his headphones on and couldn't hear me. Sophie was dressed as an angel, which had to be somebody's idea of a joke.
"You ought to see Mrs. Lee." Michael blew up his cheeks and stretched out his arms. "Her costume's a pumpkin, and she didn't have to use padding."
"Let's go," said Yasmeen.
The family room was crowded. Mrs. Lee sat in a big chair in the corner. Michael was right—she made a very convincing pumpkin. Next to her was her friend, Deirdre, the preschool teacher. Only she didn't look like her usual ditzy, cheerful self. She was wearing some kind of spooky gray costume with a gray wig and ghoulish black-and-gray face makeup. She was knitting with rainbow yarn.
"What do you think she's making?" I asked. "It sure is teeny."
"A sweater for the baby, I guess," Yasmeen said.
Somebody came up behind me and tapped my head with a big fist. I knew without turning around it was Bub.
"Hey," I said, and elbowed him in the belly. It was the easiest place because so much of Bub is belly.
"What's this in your hair?" he asked. "Oh, now I see, orange cat ears. You're supposed to be Luau, is that it?"
"I'm glad somebody understands," I said. Then I took a good look at him and laughed. Bub is an old guy who lives by himself at the end of our street. Some of the neighbors say he's original, and some of them say he's a slob. For the party he had dressed in red long johns, which are like old-fashioned one-piece pajamas that button up the front. He had a mask on, too, but he had pulled it up over his head, so I couldn't see what it was.
"What are _you_ supposed to be?" I said.
He pulled the mask down over his face.
"You're a _fish_?" I said.
"I'm a red herring!" he said. Then he laughed and laughed.
In mystery stories a red herring is a clue that points to the wrong person. Bub loves mysteries. When he's not watching old mystery movies, he's reading old mystery books.
"What's he laughing at?" Yasmeen asked.
"Himself, as usual," I said.
"What've you been up to lately?" Bub asked me.
I told him about Kyle's missing cat, Halloween, and then I told him and Yasmeen what my mom had said—that two more cats were missing, too.
"Two more?" Yasmeen said. "Now we've got an even better mystery to solve!"
"Can't we leave it to the police?" I said. "My mom's a really good detective."
"And so are you," Yasmeen said. "It must be genetic."
I knew Yasmeen was buttering me up so I'd help her. Even so, it was nice, not to mention totally rare, to get a compliment from her. Meanwhile, Bub thought it would be great if Yasmeen and I went to work on another mystery, and he offered to help.
"Maybe you can," I said. "Mom said something about the missing cats being connected to a Halloween story. Do you know anything about that?"
Bub nodded. "I think I know what she's talking about. It has to do with the old Harvey house downtown, the one the Blancos put all that work into."
"The one that's _haunted_ ," I said, looking at Yasmeen.
"That's how the story goes," Bub said. "Supposed to be that the ghost has it in for cats—black cats in particular. It's been years now, but I can remember cats disappearing around Halloween time and the Harvey ghost taking the blame."
"There's no such thing as ghosts," Yasmeen said.
Bub shrugged. "I don't know if there is or there isn't. But if you want to know more about the story, we have an authority nearby—Jonathan Stone. He was born here in town—knows where all the bodies are buried, so to speak."
Mr. Stone also lives on our street. He's an older guy. His wife is dead, and his kids are grown-up.
"Have you seen him tonight?" Yasmeen looked around.
Bub shook his head no. "He's not much for parties."
This was true. In fact, Yasmeen and I used to be afraid of him. But then last year when he caught us trespassing in his yard, he didn't yell, he invited us in, served us hot chocolate, and even gave us a really important clue to the mystery we were working on. That was the first one Luau, Yasmeen, and I solved, and it turned out to be pretty scary, as well as confusing. Somebody had been stealing pieces of our neighborhood's annual Twelve-Days-of-Christmas display.
"You know who else is missing tonight?" Bub asked. "The father."
It was my turn to look around. "Mr. Lee?"
"Ah-yup," said Bub. "I hear a business deal came up, and he's out of town."
This was no surprise. Mr. Lee works all the time, same as my dad did before he quit to be a househusband.
Now Miss Deirdre stood up and clacked her knitting needles to attract everyone's attention.
"Boys and girls?" she said. Then she looked all embarrassed and shook her head. "Sorry," she said. "It's force of habit. What I _meant_ to say was welcome!"
She said a few more smiley words about the wonders of new babies and moms and all that. Then it was time for presents.
The first one was a baby monitor, one of those walkie-talkie things. You put the microphone by the crib so you can hear on the receiver if the baby fusses or burps or tries to escape. Bub had never seen one, so I explained.
Bub shook his head. "I never knew a baby that had trouble making itself heard."
Next, Mrs. Lee opened a tiny outfit with bears on it, and all the moms in the room said, _"Awww."_ After that came a blanket with pictures of sailboats. Then another baby monitor. This one was what my dad would call high-tech, everything really small and shiny.
After a while, I learned something about baby showers: The presents are boring. About the only interesting one was a teddy bear that played music by Mozart. It came from Mr. and Mrs. Sikora, who explained that classical music makes babies smart.
"If that's true, they must have forgotten to plug in Sophie's bear," Yasmeen whispered.
I laughed, but Bub shook his head. "You kids are wrong about her. She's rambunctious, but she's smart as a whip. When my doorbell busted, who do you think rewired it?"
Yasmeen and I looked at each other. Was it possible Sophie was some kind of genius with electronic stuff?
Or maybe this was another one of Bub's famous jokes.
Anyway, after that, Mrs. Lee opened a battery-powered wastebasket for smelly diapers, and Yasmeen and I decided we couldn't take any more. Back in the living room, I dared her to eat one of the hot-dog fingers, but she couldn't, and it turned out neither could I. We took the dagger out of the cake instead, and shared a big piece.
"After church tomorrow," Yasmeen said, "we'll look for clues."
"I don't have time for detecting tomorrow," I protested. "I have homework."
Yasmeen ignored my argument. "The thief was in a hurry. People in a hurry drop things. I bet anything there's a clue. Don't worry," she said. "This case will be easy to solve. I swear."
## Chapter Six
"So what are you proposing?" my mom asked my dad. We were home after Mrs. Lee's shower. Their door was closed, but I could hear them from the hallway. "Are we supposed to sleep like this?"
"Look at it this way," Dad said. "It's going to make a very funny story one day."
"Who would we tell?" Mom said. "Thanks to Beth Ryan, we're the laughingstocks of the neighborhood now!"
I knocked on their door.
"Come in," Dad said. When I did, I saw they were standing as far apart as two people handcuffed together can stand.
"Can I help?" I asked.
"No," Mom said.
_"Honey,"_ Daddy said.
"Sorry," Mom said. "That wasn't fair. I'm not mad at you, Alex. I'm mad at _him_."
"Go ahead and look around," Dad told me. "It seems like we've eyeballed every cranny, but metal keys don't vaporize. It has to be somewhere."
Luau was right behind me, nose in the air like maybe he was trying to smell the key. I shook my parents' bedspread, opened bureau drawers, crawled around on the rug.
Luau, meanwhile, leaped onto my dad's bedside table, sat down, and watched me. Then he pulled one of his favorite tricks, one he usually uses for waking me in the middle of the night. He batted things onto the floor. The alarm clock. Two books. A magazine. A seashell from our vacation last summer.
A key.
I reached down for it. "Does this look familiar?" I asked.
"The key!" Dad said.
Mom smiled. "Where was it?"
I took a deep breath and tried to speak in my best let's-all-remain-calm voice. "On your bedside table, Dad."
"I looked there!" Dad said.
"Well, you didn't look very hard," Mom said.
"Well, possibly if you hadn't been dragging me toward the bathroom so you could do your _makeup_. . . ."
I unlocked the handcuffs for them. They shook out their arms and rubbed their shoulders but never stopped arguing.
"You really must have your eyesight checked, Dan," Mom said. "You know, at your age—"
" _My_ age?" my dad said. "You've got six months on me, Noreen."
Luau gave me a look that meant, _Cats have excellent eyesight, in case you didn't know_. Then he jumped to the floor and padded out the door toward my room. I followed.
"Good night, honey, and thanks!" my mom called.
"Yeah, Alex, thanks!" Dad called.
Don't thank me, I thought. Thank Luau.
The next day was Sunday. I slept late, ate my bagel and cream cheese, then played Lousy Luigi Brothers on the PlayCube. It was looking like pretty much a perfect day—the kind when you never get out of your pajamas—until Dad said, "Don't I remember something about math homework?"
And Mom said, "The day's half gone and you're not even dressed, Alex? You're squandering daylight!"
When Mom makes one of her "squandering daylight" speeches, resistance is futile. So I pulled on sweatpants and a T-shirt that didn't smell too bad.
The math homework turned out to be easy. When that was done and Yasmeen still hadn't called, I hoped that maybe she had forgotten all about detecting.
Yeah, right.
At three o'clock she knocked on the door.
"Sorry I'm late," she said.
"That's totally okay," I said.
"Mom and Dad were hosting the fellowship hour after church, so we had to clean up. It took forever. The people at our church can really put it away, that's what my dad says."
"It's probably too late to do any detecting now, right?" I said.
"What do you mean?" Yasmeen said. "There's plenty of light left. Come on. We'll go over to the cemetery and walk from there back to Kyle's house. Bring the ace detective, too. Since we're on the trail of a catnapper, he's going to want to help."
## Chapter Seven
Yasmeen, Luau, and I have solved one whole mystery together. So I guess I can't claim to be an expert. But here is something I think I know. A lot of the time, solving mysteries is unexciting.
I mean, in the movies there are explosions and car chases and women wearing bathing suits. In real life it's more like you look around, you ask questions, and you think hard.
Anyway, unexciting is definitely how it was that Sunday afternoon. Yasmeen and I walked at the speed of snails from the cemetery gate to Kyle's house and back again. By the fence we found an empty beer can. On the sidewalk we found a gum wrapper. Next to an old green car we found a grocery receipt. Yasmeen, who was wearing yellow rubber gloves, carefully saved each in a plastic bag.
"What's with the gloves?" I asked her.
"So we can preserve the catnapper's fingerprints," she said.
"But we don't have a way to analyze fingerprints," I said.
"Your mom does."
"Right, Yasmeen," I said. "She's gonna get the whole FBI crime lab involved to find a missing cat."
" _Three_ missing cats."
"We don't even know if the others are connected to this one!"
"Oh, come on, Alex. Do you think there's more than one thief grabbing cats in the middle of the night?"
"How do I know? Maybe it's a coincidence. Anyway, the circumstances in the other cases were different. My mom said those owners were negligent, didn't care that much about their cats. Does Kyle seem negligent to you?"
"No," Yasmeen admitted. "But that just makes it more mysterious, right?"
Luau did not turn out to be keen on detecting, even though the case was catnapping. What he wanted instead was regular napping, and the cemetery didn't disturb his dreams either. While Yasmeen and I collected our useless clues, he slept in a cozy spot by a headstone. We were about ready to give up when he strolled toward us, tail swishing, nose in the air.
"He smells something," I told Yasmeen.
"Does it have anything to do with Kyle's cat?" Yasmeen asked.
"More likely with some tasty rodent."
Luau sniffed for a few seconds, then he walked down the sidewalk and stopped next to the old green car. I could see he wanted to get under it from the curb, but the car was parked too close, so there wasn't space. He did a quick ear swipe and looked back at me, which meant, _Take a look under there, why don't you? Something smells_ very _interesting_.
I crouched and peered into the darkness.
"What do you see?" Yasmeen asked.
"Nothing," I said, then, "Oh . . . wait. There is something. It's round." I reached and brushed it with my fingertips. "I need a stick—do you see one?"
What Yasmeen found was more like a branch. It was awkward, but I managed to bump it against the thing till I had moved it over to the side.
"Gloves!" Yasmeen said, but by then I had already grabbed the thing. Any catnapper prints were now mixed up with mine.
In daylight our mysterious object seemed to be a handkerchief wrapped around a ball of crinkly stuff. I held it up for Yasmeen to see. "It's a sachet," she said. "You know, you put them in drawers to make your clothes smell good."
Okay. But then why was Luau acting crazy—mewing pathetically and trying to climb me like a tree?
"Can he have it?" I asked.
Yasmeen said why not, so I tossed it on the ground. Luau pounced, then looked around like he thought for sure someone must want to steal such a marvelous prize.
"No, really, Luau. It's all yours," I said. "Enjoy."
Luau is ordinarily a very dignified pet. But whatever this stuff was, it brought out his inner kitten. Clutching the ball between his paws, he rolled onto his back and thumped at it with his hind feet, finally tossing it into the air. Then—and I never knew he was this coordinated—he caught it in his mouth and rolled over and over with it till you'd swear he had to be dizzy.
And that's when— _duh_ , Alex—I realized what the white ball was made of. I opened my mouth to say the word, but Yasmeen beat me to it: "Catnip!"
## Chapter Eight
Was the catnip a clue?
Or a coincidence?
Yasmeen and I had a lot to discuss that night, so I got permission to eat over at her house. The only trouble with having dinner there is that her parents are so strict. Grace before dinner. Cloth napkins. And no matter what kind of mushy, mysterious green stuff a kid finds on his plate, he is expected to eat it.
"Alex?" Mrs. Popp, Yasmeen's mom, looked up at me after we'd all said amen. "Would you like to start the conversation?"
When I was little, Yasmeen's parents scared me. By now, though, I've figured out that they're okay, they even like me—as long as I remember to speak in complete sentences.
"Sure, Mrs. Popp," I said. "Yasmeen and I have had an interesting afternoon."
"Tell us about it, Alex," Yasmeen's dad said.
So—between small bites of some mysterious meat—I told them. In a way, it was nice to be telling the story now because for once Yasmeen didn't interrupt. At Yasmeen's house you don't dare interrupt.
". . . a sachet Yasmeen called it." I was almost done. "But then we both realized, because of how crazy Luau was acting, that it had to be catnip. After that, we brought it home. We're still trying to figure out what it means."
For about a minute Jeremiah, Yasmeen's little brother, had been shaking his head and looking gloomy. Actually, he looks gloomy most of the time.
"Do you have something to contribute, Jeremiah?" Yasmeen's mom asked.
"Uh-oh," said Jeremiah.
"Why do you say that?" asked Mrs. Popp.
"Because somebody's a litterbug," said Jeremiah. "Miss Deirdre tells us _never_ be a litterbug. And I never will."
"Admirable, Jeremiah," said Professor Popp. "What else does Miss Deirdre tell you?"
"Put the play dough back in the bag or it will dry out," he said. "Drink your milk, unless you're allergic. Oh—and always be kind to animals. She says that a lot."
Professor Popp said, "Excellent advice," and he sounded serious, but he might have been kidding. Professor Popp has an English accent because he grew up on some island I can never remember; to me he always sounds serious.
Jeremiah nodded. "Miss Deirdre knows everything," he said.
"Everything?" asked Mrs. Popp.
Jeremiah nodded again.
"There's one thing I bet she doesn't know," Yasmeen said. "She doesn't know who stole Halloween."
"So you two children are at it again, eh?" said Professor Popp. "Playing detective? I must say I think the catnip is a clue. Could the thief have dropped it?"
"That's what I think," said Yasmeen. "The thief carried it so Halloween would like him—so she'd go with him and not complain."
"That's reasonable," said Mrs. Popp, "if we can associate the word _reasonable_ with someone who steals cats. What kind of person would do such a thing?"
"A wacko!" said Jeremiah.
Professor Popp arched his eyebrows. "Jere _mi_ ah?"
"Sorry," Jeremiah said. "A nut case?"
Mrs. Popp pursed her lips and shook her head.
This time Jeremiah thought for a few seconds. Then he said, "A lunatic."
His parents looked at one another. "Better," they agreed.
"Did you know the word _lunatic_ comes from _luna_ —the Latin word for moon?" Mrs. Popp asked. "A lunatic was thought to be somebody influenced by the moon."
"You mean like werewolves?" I asked.
Yasmeen laughed. "So now you think it was a werewolf who stole Halloween?"
Jeremiah shook his head again. "Uh-oh."
"You don't even believe in werewolves," I reminded Yasmeen, "or ghosts either."
"But ghosts are real," said Jeremiah, "aren't they?"
"No," said his mom.
"Possibly," said his dad. "You know, I've done a bit of research on ghost stories. Every culture has them. Is that coincidence?"
"Oh, Derek, for goodness sake," said Mrs. Popp. "When people don't understand something, they invent a supernatural explanation. There are many mysteries in the world, but one thing is certain: Ghosts exist _only_ in the imagination."
## Chapter Nine
There is something strange when you look into a mystery: It sort of takes over your brain and even your sleep. That night I dreamed we found a whole bunch of clues, but most of them turned into fish and swam away. The only one that didn't was a little slip of white paper with writing on it.
The dream woke me at six, and I couldn't fall back to sleep. Luau was awake, too, lying on my feet, blinking at me and purring, which meant, _I love you, Alex, I love you so—especially when you give me catnip_.
Down the hall I could hear my mom in the shower. It was Monday. She worked an early shift. This would be my best chance to talk to her.
I went down to the kitchen and poured myself a bowl of Pirate Berry Crunch. Mom came down a couple of minutes later. When she saw me, she jumped.
"What on earth are you doing up?" she asked.
"Sorry," I said. "I couldn't sleep."
The coffeemaker was burbling. Dad measures out the grounds and water the night before, then sets a timer so it's ready when Mom gets up. I used to think this was nice of him, but Mom says he only does it so he can sleep in without feeling guilty. Now she poured herself a mug and sat down across from me at the table.
"Is something wrong?" she asked.
"Just the missing cats," I said. "I can't stop thinking about them—Kyle's especially." Then I told her about my dream and about finding the catnip under the car. I told her what Bub said about a ghost story, too.
Mom nodded. "We've been lucky the last few years. No cats stolen at all. But before that, I remember several incidents. People with a sick sense of humor stole them and blamed the ghost. Once there was a ransom note. Another time somebody deposited two in the cellar at the Harvey house. It was vacant then. Luckily, the cats made plenty of noise, and a neighbor heard them. The cats were pretty hungry by the time we found them."
"Kyle said the thief might have been a ghost," I told her.
Mom laughed and shook her head. "Right, honey. And the tooth fairy robs banks in her off-hours."
I laughed, too. Then I told her Mr. Stone was supposed to be the expert on the old ghost story.
Mom said that didn't surprise her, then she looked at her watch and stood up. "I've got a seven o'clock meeting. We're planning our patrols for Halloween night."
"But you haven't eaten breakfast," I protested.
"There'll be doughnuts at the meeting."
"You won't let _me_ eat doughnuts for breakfast," I pointed out. "You say they're bad for me."
"I'm right, too," she said, "as usual." She put her mug in the dishwasher, then ducked into the downstairs bathroom. When she came out, her police uniform was buttoned up and her lips were pink.
"Go get 'em, Mom," I said.
"I will, honey." She started down the hallway to the garage, then paused. "What's Kyle-over-on-Groundhog's last name?" she asked.
"Richmond," I told her.
Mom nodded. "I'll talk to Fred Krichels today, take a look at his report. It seems likely the catnapping incidents are related, don't you think? And maybe you and Yasmeen could get Mr. Stone to tell you that ghost story. Who knows? It might help us solve the case."
I was surprised, and kind of flattered, that Mom had asked for our help. "Sure," I said. "So you don't mind if Yasmeen and I try to find Kyle's cat?"
Mom smiled. "I don't mind," she said. "But this time, Alex, _please_ be more careful. No death-defying midnight runs through the neighborhood. Deal?"
"Cross my heart," I said.
## Chapter Ten
Dad came down about fifteen minutes later. I was clean and dressed and full of cereal. I was reading the sports section. Dad was as surprised as Mom, but he didn't jump. Instead, he asked about my spelling test.
"Oh, _no_!" I said. "I was going to study last night, but then I went to the Popps. . . . Do we have time to go over the words?"
"Hand me the list," Dad said.
I pulled it out of my backpack. Dad held it close to his face, then he stretched out his arms and held it far away. He opened his eyes wide. He squinted.
"Can't you read it?" I asked.
"Of course I can read it," he said. "First word: _glamorous_."
_"Glamorous?"_ I shook my head. "That's not one of our words."
"Sure it is," said Dad. "I mean"—he moved the paper away again—"I think it is."
I took the list back. "Dad, the word is _generous_."
Dad shrugged. " _Glamorous, generous_ —the rule is the same: _O_ before _U_ except after moo."
"Ha-ha, Dad." I slid the list into my backpack. Yasmeen could quiz me on the way to school.
Dad frowned and rubbed his eyes. "Maybe I should make that phone call after all," he said.
"To the eye doctor you mean?"
"Oh, no." Dad shook his head. "I don't care what your mom says, it's not serious enough for an M.D. But Eric Blanco's got that new store downtown, I think I told you? It's one of those health-organic-type stores. Five-dollar zucchinis, tea bags from Tibet, vitamin Q. . . ."
"In the Harvey house," I said. "Mom and I were just talking about that place. But I don't understand. What do five-dollar zucchinis have to do with your eyes?"
"Oh, it's probably a lot of hooey," Dad said. "But Eric claims he's got some miracle pills—vitamin A it must be. He says if I take them, my eyesight will be as good as Luau's."
I couldn't believe my dad. Miracle pills? Why didn't he just get glasses like all the other old people?
"You know Eric sells pumpkins, too," Dad said. "Organic, homegrown, all that stuff. What do you say we go over there before dinner? I've got that PTA meeting, but after that we could go get the raw materials for our jack-o'-lantern."
"Can Yasmeen come?" I asked.
"Sure," Dad said, "and speaking of Yasmeen . . ."
She was knocking at the front door, same as she does every day. That meant it had to be precisely 7:45. Yasmeen is never late to pick me up for school.
"Want to come pumpkin shopping with us?" Dad asked her.
"At the haunted house," I added.
Yasmeen said probably—she'd have to check with her dad. Then she adjusted the straps on my backpack, and we headed out the door.
It is a two-block walk from my house to College Springs Elementary School—one block to Bub's at the end of Chickadee Court and one along Groundhog Drive to the school. For the first block I filled Yasmeen in on what my mom had said about stolen cats and how Mom wanted us to get the ghost story from Mr. Stone. For the second Yasmeen quizzed me on spelling words. We are in different rooms this year, so we figured we'd meet at lunch to plan our next move.
But our next move came to us.
Yasmeen and I had just sat down in the cafeteria when Kyle came over to our table. We were shocked. At our school it's strange for a kid in a higher grade to talk to a kid in a lower one. It's more than strange, it's like totally uncool for a kid in a higher grade to risk this at lunch—when his friends are bound to see. Whatever Kyle wanted, it had to be really important.
"Uh . . . I came to ask you . . . ," he began, and if it's possible, he looked more miserable than before, ". . . uh, I mean, everything's okay now. . . . You don't need to get my cat back."
## Chapter Eleven
Yasmeen dropped her sandwich she was so surprised. Me—I almost choked on my Chips Ahoy!
"What?" Yasmeen said. "She came back on her own, you mean?"
Kyle shook his head no. "I wish, but that's not it. I'm just saying—of course I _want_ her back. She was like my best friend . . . but I don't want you to help get her for me."
"Why not?" I said. "We already found out some stuff."
"What?" Now Kyle looked scared. "What have you found out?"
"Nothing," Yasmeen said.
I looked at her. " _Nothing_? That's not—"
Yasmeen interrupted me with a kick. While I rubbed my shin and tried to figure out what I was missing, she said, "Nothing that was any help. Don't worry about it, Kyle. If you don't want me to bring your cat back, I won't."
Kyle was already standing up and looking around—wondering which of his friends had seen him and how much he was going to suffer for talking to us.
"Thanks," he said. "I appreciate it. I know maybe it seems weird, but . . ." He shrugged, turned, and walked away.
When he was out of earshot, I let Yasmeen have it. "What was that about? I hope you're carrying your famous Band-Aids because I need one where you _assaulted_ me!"
"I didn't hurt you," Yasmeen said, then she thought again. "Did I? Roll up the leg of your jeans and let me look."
"Oh, right, Doctor Popp," I said. "In the middle of the cafeteria at lunchtime, I'm going to show you my shin."
"Suit yourself," she said, and took a bite of her sandwich. Meanwhile, our friend Russell came over and sat down. He had a tray full of cafeteria delights.
"What is that?" Yasmeen asked him.
Russell took a bite. "I'm not sure, but it tastes good. Hey—that was a hard spelling test, huh? I think I got 0 out of 20."
Actually, I had thought the test was okay. But it would sound like bragging to tell Russell that now. And with him here, Yasmeen and I couldn't really talk about the missing cats either. So instead, we acted like regular, normal, everyday kids—instead of hardworking detectives—and talked about regular, normal, everyday stuff like trick-or-treating and kickball and video games.
In class after lunch we had time to work on our relief maps of Mexico. While I mixed up the dough ingredients and stirred the paint, I thought about poor Kyle and his missing cat and his strange request. I also tried to figure out how come Yasmeen had kicked me. I mean, she wanted to shut me up, but why?
Anyway, it wasn't so smart to think about the case while I worked, because the dough came out all runny and my green paint looked blue. Mrs. Timmons asked me if I was feeling okay and even put her hand on my forehead like maybe I had a fever. Mrs. Timmons likes me because we have cats in common. She has a white one with blue eyes and an orange tiger like Luau, which is something everyone in our room knows because she is always brushing white and orange fur off her clothes.
"Don't forget to put your dough away in a Ziploc," she reminded the class when we were done working. I smiled because it made me think of Miss Deirdre and Jeremiah's preschool. I guess when it comes to dough, you never grow up.
Yasmeen met me at the door of the classroom after school.
"There was something strange about the way Kyle was talking," she said. "Did you notice? He was so nervous."
I slung my backpack over my shoulders, and we started walking down the hall. "Sure, I saw he was scared," I said. "So what, though? Why not tell him about the catnip? Why not tell him what my mom said?"
"I don't know how to explain this," she said, "but something told me not to trust him—like an instinct."
We walked out the front door of the school and into the daylight. It was a perfect fall afternoon—blue sky, white clouds, fiery leaves.
"Come on, Yasmeen," I said. "You don't think he stole his own cat, do you?"
Yasmeen shook her head like she was trying to straighten out her thoughts. "That doesn't make sense, does it?" she said. "But wait. What about this? What if he made up the story about seeing the thief?"
"And he doesn't want us to do any detecting because he's afraid we'll find out," I said.
Yasmeen nodded. "Exactly."
"But why would he do that?" I asked.
"Maybe something else happened to Halloween," Yasmeen said. "Maybe Halloween wasn't supposed to go outside, and Kyle let her out, and she got hit by a car or something else bad, and now Kyle is afraid he'll get in trouble."
That was a pretty smart guess, I thought. Lots of people keep their cats inside to protect them. I could imagine a kid inventing a story to stay out of trouble and then getting scared someone would find out.
But I would never tell Yasmeen I thought she was smart. She already thinks she is plenty smart. So I just said, "Yeah, maybe. Anyway, now we'll never know for sure."
"What do you mean?" Yasmeen asked.
"You told Kyle we would quit detecting," I said.
Yasmeen shook her head and grinned. "No, I didn't."
I thought back to what she had said in the cafeteria. "You told him you wouldn't bring Halloween back," I said.
Yasmeen nodded. "But I never said _you_ wouldn't."
## Chapter Twelve
Yasmeen and I had just turned the corner onto Chickadee Court when we spotted a police car parked in front of Bub's house.
That sounds scary.
But it wasn't.
There's a police car there a lot. Officer Krichels is Bub's friend. He likes to stop off for soup.
I looked at Yasmeen. "Are you thinking what I'm thinking?"
She nodded. "We need to talk to Officer Krichels. But what time are we going to get the pumpkins?"
"Dad said the PTA meeting will go till five," I said.
Yasmeen smiled. "I can be a little late getting home, too. My dad won't be back with Jeremiah till four."
A minute later, Bub opened the door for us and bowed. " _Bienvenue_ , Madame, Mess-yer. Zee potage of zee day eez vee-shee-swaz." He was speaking in a French accent that even I knew was a bad French accent. It cracked us up. "Zat just means zee zoup de po-tay-toe. Eet's zupposed to be cold, but who likes zee cold zoup? I serve my vee-shee-swaz fresh from zee stove."
"Sounds great," I said, "Mess-yer."
Bub brought Yasmeen and me bowls of white soup sprinkled with flakes of green stuff. Officer Krichels was just getting ready to leave.
"Can we ask you something?" I asked him.
"Your mom told me you were interested in the missing cats," he said. "It's a bad time of year for it, you know. Better keep a close eye on Luau."
Yasmeen said, "Did you notice anything odd about Kyle, the kid you talked to on Groundhog Drive?"
Officer Krichels scratched his chin. "Can't say I did. Kind of a Gloomy Gus, but his cat was gone, so who could blame him? Now, that pip-squeak sister o' his, _she_ was somethin'."
"Did she say anything?" Yasmeen asked.
"The pip-squeak?" said Officer Krichels. "You couldn't shut her up! I didn't pay much attention on account of how she wasn't a credible witness. That means someone you can believe."
Officer Krichels is nice, but he treats all kids like they're two years old. Sometimes, like now, Yasmeen gets impatient.
"I _know_ what a credible witness is," she said. "Do you remember _anything_ the little sister told you?"
Officer Krichels had his hand on Bub's doorknob. "Bunch o' nonsense. Something about how her rotten big brother tortured the cat. . . ." Officer Krichels shrugged. "You know siblings—they're always out to get each other."
Officer Krichels saluted Bub. "Great soup today, like always."
Bub was sitting at the table with us, his hands clasped over his belly. He nodded at his friend, "See ya tomorrow. I'm thinkin' black bean."
The instant the door closed, Yasmeen burst. _"I cannot believe him!"_
_"Yasmeen,"_ I said, warning her.
But Bub just laughed. "One of the sweetest guys I know," he said, "but genius is not one of his attributes."
"I suppose now we should talk to Kyle's little sister," I said, "to Cammie."
Yasmeen picked up her soup bowl and gulped the last bit. "I don't see how we can do that," she said, "without Kyle finding out we're still detecting."
It was getting close to four. Yasmeen went to call her dad and ask if she could go with Dad and me to the Harvey house to get pumpkins. I told Bub the whole story, and to my surprise he picked up right away on something my mom had mentioned.
"Ransom note," he repeated. "I bet you dollars to doughnuts Kyle got a ransom note. It happens all the time in books. The detectives are working their tails off, and suddenly whoever it was hired 'em calls and tells 'em to quit. In this case, the catnapper told Kyle not to try to find his cat, just pay the money. That's why Kyle talked to you today. That's why he looked so scared."
## Chapter Thirteen
Mr. and Mrs. Blanco must have worked really hard to fix up the Harvey house because it hardly even looked haunted anymore. The paint was fresh, and the twisted rungs of the black metal fence by the sidewalk had been straightened out. All the little frame-doodads around the porch and windows had been repaired and nailed back into place. From the sidewalk I could see for the first time that this pretty much used to be a mansion compared with the other houses on Main Street. I guess it was built by somebody rich.
Yasmeen, my dad, and I opened the gate—which didn't even squeak—and walked through the front yard toward the porch. There were pumpkins on either side of the walk. The house was brightly lit, and there was a new purple sign:
**HARVEY HOUSE HEALTH BOUTIQUE
NATURAL FOODS AND FIBERS, VITAMINS,
AND HOMESPUN REMEDIES
EVERYTHING FOR YOUR GOOD HEALTH**
"What's a homespun remedy?" I asked.
Dad scratched his head. "Eric Blanco explained the theory to me on the phone," he said, "but to tell the truth, I don't get all of it. The gist seems to be that sometimes weaknesses can be repaired through the 'introduction of offsetting substances.' "
Yasmeen and I looked at each other. _Huh?_
Dad laughed. "Let's say you want to build muscles. The homespun idea would be that you swallow a tonic made from something strong—like an ox."
"You mean drink ox blood?" I shuddered. "I think I'd rather do push-ups."
"I'm not much for push-ups," Dad said. "And who knows? Maybe it works."
Mr. Blanco met us at the front door of the store. "Welcome, neighbors!" he said, then he looked at Dad. "You here for my eyesight pills?"
Dad smiled. "Frankly, I'm still skeptical. But we know for sure we're in the market for pumpkins."
"We've got plenty of pumpkins," Mr. Blanco said, "and all of them certified organic. You kids want to pick out a couple of good ones while the old fogies talk?"
Yasmeen and I went back out into the yard to look at the selection. I am not a big shopper. Right away I noticed a pumpkin that was more or less round and pretty big. It didn't have any rough brown places or dots, either.
"This one's good," I said.
Yasmeen examined it. "It has a big green spot," she said.
"Only on one side," I said. "We can cut it out for the nose or something."
Yasmeen said she was going to keep looking, which meant picking up every single pumpkin, turning it over and over, then shaking her head and setting it back down.
"Do you think Bub's right?" I asked her, "about the ransom note?"
"But what about what Officer Krichels told us?" Yasmeen said. "Maybe Kyle was afraid we would find out that he tortured his cat, and that's why he called us off."
"I think the ransom note is more likely. To me it seems like Kyle really liked that cat."
Yasmeen had picked up a small pumpkin and now held it under her arm. "What about this? The catnapper was misinformed. He thought Kyle loved his cat enough to pay ransom, but he didn't really."
My head was spinning, which is precisely what I don't like about detecting—too much brain work. I nodded at the pumpkin Yasmeen was holding. "Is that the one you want?" I asked.
"It's perfect," she said.
I thought it was way too small, but I didn't want to encourage more shopping. "You're absolutely right," I said.
A thump on the porch startled me, but it was only Dad. He held up a bag for me to see. "You'll never believe it," he said. "Organic marshmallows!"
"Are regular marshmallows _in_ organic?" Yasmeen asked.
"Got me," said Dad. "I'm just telling you what it says on the bag. Why don't you take these over to Mr. Stone? He's the one who loves marshmallows, right?"
"Served with hot chocolate," I said, "and a ghost story."
The sky had been clear a minute ago, but now I felt a gust of cold wind and heard a rumble like thunder.
Dad checked the sky, too. "Weather looks iffy all of a sudden," he said. "Let's take your pumpkins in and pay up."
Yasmeen followed me into the Harvey house. It was bright and cheery inside, with hand-painted signs, bins of vegetables and grains, shelves of vitamin-type bottles, a rack of spices and herbs in little plastic bags, books, and a cold case with yogurt and juices. The cash register was behind a counter near the door. On the counter was a basket of white things that reminded me of Luau's catnip sachet. I was about to inspect one when Mr. Blanco said, "That will be forty-two dollars and ninety-seven cents, Dan."
I said, "For pumpkins and marshmallows?"
" _And_ eyesight pills," Mr. Blanco said.
Dad handed over his credit card. "For that price, they'd better work," he said.
Mr. Blanco smiled. "As I explained, this is just enough for a few days, Dan. I'll call when I get a fresh batch."
Yasmeen said how nice the store was, and Mr. Blanco thanked her. Then I asked about the ghost. Did he know the house was supposed to be haunted?
Before Mr. Blanco could answer, I heard a throaty howl that seemed to come from every direction at once. I gave Yasmeen a What-the-heck? look, and the next thing a flash of light turned her face all eerie blue, sick, and scared. The gust of wind, the howl, the flash—and suddenly a _crack_ like thunder splitting a tree trunk an inch from my ear . . . then a sizzle of electricity, and everything went black.
## Chapter Fourteen
I held tight to my pumpkin, like it might turn out to be some kind of protection from supernatural forces. My dad put his hand on my shoulder. "Alex? Yazzie?" Even with him there, I could feel my heart pounding and hear Yasmeen breathing fast, like she was scared.
The next sound in the dark was Mr. Blanco. He was _laughing_ and at the same time rustling around behind the counter. " _There_ ," he said, and a lantern came on. "Sorry about that, Dan . . . , kids," he said. "It happens now and again. I think it's that same ghost you were asking about, Alex."
"You're kidding," said Dad. "Aren't you?"
Mr. Blanco bent down and fooled with some switches behind him on the wall. After a few seconds the overhead lights blazed back on. He turned toward us again and shrugged. "Tell you the truth, I don't know if I'm kidding. All I know is this is the fourth time it's happened just that way—wind, howl, flash, thunder, and out go the lights. It's a bother, but it doesn't seem to be dangerous. The only trouble is it scares the customers—some customers."
"I'm not scared," Dad said, but I noticed his face looked whiter than usual.
"I am!" I said.
"You don't really believe in ghosts, do you?" Yasmeen asked Mr. Blanco.
"Seems like it's more that the ghost believes in _me_ ," said Mr. Blanco. "Besides, have you got a better explanation?"
Yasmeen usually has all the answers. Now she opened her mouth like she was going to fill us in, but then she closed it again. "No," she said. "I don't."
At home there was a message on the answering machine. It was from Billy Jensen telling us that Marjie Lee had had a baby girl at six that morning. It might seem weird that a first-grader would be making that kind of phone call, but in our neighborhood it made total sense. Billy Jensen loves to spread news.
I told Dad about the baby, then I phoned Mr. Stone to ask if he would tell us the famous ghost story.
"Oh, you kids aren't interested in an old chestnut like that," he said.
Mr. Stone can be what my dad calls "difficult" and my mom calls "ornery."
"We really _do_ want to hear it, Mr. Stone," I persisted. "Oh—and I forgot to mention, Dad bought you a bag of fancy marshmallows, too. They came from Mr. Blanco's new store downtown."
"A present for _me_?" Mr. Stone said, and I could hear the smile in his voice. "Tomorrow after school then. Three-thirty? I'll make hot chocolate."
Dad called me for dinner as soon as I hung up. I sat down at the table in the kitchen and poured myself a glass of milk. Luau sauntered in and glanced at his food dish in the corner. No luck there, so he decided to check out my food dish—my dinner plate, I mean. He jumped into an empty chair and peeked over the edge of the table. He was hoping for fish sticks or tuna casserole, but we were having macaroni-and-cheese from a box with a side of sliced apple.
Luau swished his tail a couple of times and looked at me, which meant, _I never cease to be amazed at the strange foods you humans eat_. Then he stepped into my lap, circled, and curled up for a nap.
Dad had just served his own plate when we heard the whir and squeak of the garage door opening. "Glory be." Dad looked at his watch. "Mom's home early."
Two sticky bites later, she walked into the kitchen looking tired.
"Another bad day?" Dad asked her.
Mom nodded and sank into a chair. Dad popped up and got her a plate of food. Mom thanked him, but didn't eat. Instead, she rested her head on her hand and stared at her macaroni.
"What happened?" I asked her.
She didn't look up. "Two more missing cats."
"Really?" I shifted my legs, which woke Luau. "Then I'd better call Yasmeen."
Dad put his hand on my shoulder. "Detecting can wait, Alex. It's rare that we're all together."
Mom insisted she wasn't hungry, but Dad folded his arms across his chest and said, "Noreen, I want you to eat that macaroni—every bite!"
Mom sampled a single elbow, then two, then finally a regular forkful. Soon her macaroni was gone, and Dad brought her a second serving.
"I guess I forgot to eat today—after my doughnut breakfast, that is," Mom said.
"Well, no wonder you're a basket case." Dad put her plate back in front of her. "And eat your apple, too, honey. It's good for you."
"I don't like apples," Mom said.
"Oh, for heaven's sake," Dad said, " _everybody_ likes apples."
I thought of something—not about apples, about cats. "Were these cats taken from 'negligent' owners, too?"
Mom nodded. "Pretty bad."
"Did the owners see the thieves?"
"One was asleep. The other thought she saw . . ." Mom shook her head.
"Saw what?" I asked.
"Thought she saw a ghost. Honestly, some of the people in this town. They are _so_ superstitious."
"But, Mom," I said, "that's what Kyle said, too. I don't know. Maybe . . . ?"
Mom looked at me. "Sweetheart, I have enough to worry about putting bad _people_ in jail. If I have to worry about bad ghosts, too, well . . . I'll be seeking a new line of work."
Mom sounded so exhausted that I didn't want to ask her anything else. "Yasmeen and I are going to get the whole ghost story from Mr. Stone tomorrow," I said.
"That's good, honey," Mom said. "If this is all a Halloween prank, maybe it will shed some light. So far, though, I don't see a connection to the Harvey house."
"Speaking of the Harvey house," Dad said, and he told Mom about buying the pumpkin and the lights going out. I noticed he didn't say anything about his new pills, so I didn't say anything either.
Full of macaroni, Mom cheered up some and asked if there was anything new with Yasmeen's and my detecting. If I told her we were annoyed with Officer Krichels for not listening to Kyle's little sister, she would think I was dissing a fellow police officer. So instead, I stuck to what Kyle said in the cafeteria and how Bub thought maybe Kyle had received a ransom note.
"Ransom note?" she said. "Hmmmm. Then I guess maybe tomorrow I should go on over to Kyle's house myself. Fred Krichels might have missed something."
"That is a really, _really_ good idea," I said.
## Chapter Fifteen
After dinner, Dad and I planned to carve the jack-o'-lantern. When I stood up I deprived Luau of his bed, also known as my lap. So Luau would forgive me, I put a cat treat on the floor for him. Luau watched it for a few seconds. It didn't move, so he sneaked up to it, wiggled his rump, and pounced.
Dad shook his head. "For a cat who is so smart sometimes, he sure is stupid other times."
We talked about school while Dad got out newspaper, a big spoon, a marker, a carving knife, a paring knife, and a bowl—in other words, jack-o'-lantern tools. My job was the gooshy one—scoop out the seeds and the stringy orange crud, and then put them in the bowl. Boy, was I glad to wash my hands when that was done.
"Scary or funny this year?" Dad asked me.
"Funny," I said, and I drew a face that had extra-wide nostrils. That way I'd be sure to cut out all of the green spot. While I was drawing, Mom came in. She was wearing ratty pink sweats and the fuzzy slippers I had given her for her birthday. Sometimes I wonder what bad guys would think if they saw her like that.
"Can I help?" she asked.
"You can separate the seeds from the goop," Dad said, "so we can roast them."
Mom poked the contents of the bowl with her fingertip and made a face. "How come I always get the glamour jobs?"
Dad kissed her cheek and said, "Because you are such a glamour-puss."
Mom rolled her eyes, but then she went ahead and dunked her hands into the bowl and started picking out seeds. Meanwhile, Dad and I took turns using the paring knife to carve the face. When we were done, we lit the stumpy little candle inside the jack-o'-lantern and turned off the lights.
" _Oooooh_ ," Mom said, like she does every year. "We have two real _artists_ in the family."
"Thanks, honey." Dad put his arm around her.
"Can we put him on the front step now?" I asked.
Dad shook his head no. "Halloween's Friday," he said. "You can wait four days."
Walking to school the next morning, Yasmeen and I came to a significant conclusion about who stole Halloween: We didn't have the faintest idea.
Was it the same person who stole the other four cats?
Did Kyle make the whole thing up?
Was there a ransom note like Bub thought?
Yasmeen said we only knew one thing for sure: Ghosts had nothing to do with it.
I didn't tell her, but I wasn't even positive about that.
School did nothing to cheer us up. We hardly said a word on our way to Mr. Stone's house that afternoon. Inside, I pulled the fancy organic marshmallows out of my backpack. They were slightly smooshed after spending so much time with my math book and my social studies binder.
Mr. Stone smiled. "Thank you, Alex. And be sure to thank your dad, too. Let's try them right out, shall we?"
I could smell the hot chocolate on the stove.
Mr. Stone's house is pretty big, and he has lived there all by himself since his wife died. Most of the house seems kind of cold and deserted, but the kitchen is warm. That's where Yasmeen and I always sit when we visit.
Now he poured a mug of hot chocolate for each of us. "You kids don't really want to hear—" he began.
I cut him off. "We _do_ really want to hear."
"My mom told me that this is a story from your childhood," Yasmeen said.
"Gracious, Miss Popp, how old does your mother think I am?" Mr. Stone said. "This story comes from my _grandfather's_ childhood. It was my grandfather who told my dad and my dad who told me." Mr. Stone shifted in his chair like Luau does when he's settling in for a while. He took a sip of cocoa.
"My father," he said, "was a minister, accustomed to giving sermons, and he had quite the flair for the dramatic, something I fear that I lack. Every year at Halloween he'd gather us kids around and start this story the same way: 'Wisps of cloud obscured the moon that Halloween night, the night old man Harvey met his maker, murdered by his very own cat.' "
## Chapter Sixteen
Yasmeen and I looked at each other, then spoke at the same time: "His _cat_?"
Mr. Stone nodded. "A big cat, black as midnight, with eyes as green and bright as emeralds. A smart cat, too! He was known throughout the town for his intelligence. One time a child fell through the ice, and the cat stayed on the bank howling till someone came to the rescue. Another time Mrs. Harvey couldn't find her diamond necklace—the Harveys were the richest people in town—and who led her directly to it? That big black cat."
"But why would a cat want to murder its owner?" I asked. I admit I was thinking of Luau. He's smart, too. Didn't he find the key to the handcuffs? Maybe he was a descendant of Old Man Harvey's cat. Maybe I should watch my back.
Mr. Stone continued: "The way my dad told it, Old Man Harvey was rich for three simple reasons: He worked hard. He was greedy. And he was mean. The only person he cared about was Marianne Harvey, his wife. Supposedly, she was a great beauty, and he wooed her for a long time, showering her with extravagant gifts like that necklace. Her sister—she married my grandmother's cousin—always said that Marianne got married against her better judgment. She was finally so sick of being pestered that she said yes. Besides, in those days a girl didn't have so many options."
Mr. Stone took more marshmallows from the bag and put them in our mugs. "The scary part's coming," he said. "You'll need your strength. Now, as I was saying, Mr. Harvey adored his wife and didn't give a fig about anybody else."
"How did he feel about his cat?" I asked.
Mr. Stone looked at me. "I am pretty sure, Mr. Parakeet, that when my dad used to tell this story, we kids didn't ask questions."
"Sorry," I said.
"Well, one day—it was in October, not so very long before Halloween—Mr. Harvey didn't come in to work. Now, I know you're going to ask, Mr. Parakeet, so I'll go ahead and tell you: Mr. Harvey owned a dry goods store, the first in College Springs, and it was unlike the old miser to miss a workday. Along about noon one of his employees, a fellow by the name of Floyd, went to the house to check up on him.
"Floyd rang the bell. No answer. Floyd knocked on the door. No answer. Floyd called out." Mr. Stone looked at us.
_"No answer!"_ Yasmeen and I chorused.
Mr. Stone nodded. "That's right. Now Floyd was worried. He probably ought to have run for the authorities. But he was a strong and steady fellow, and he decided first to have a look around on his own. As luck would have it, the parlor window was open a crack, and Floyd wedged his fingers under, pushed the window up, and climbed inside."
Mr. Stone paused and shook his head mournfully. I wanted to ask about ten questions—like how old was Floyd? and where was the dry goods store?—but I clamped my lips together and kept quiet.
"Well," Mr. Stone sighed, " _what_ a sight in that parlor, that same parlor where only the day before Mrs. Harvey had entertained ladies for tea. Tables were overturned; lamps and precious gewgaws were shattered—you'd have thought a typhoon had passed through. But it was on the silk-brocaded chaise that poor Floyd beheld the most awful sight of all, a sight that would have stopped any but the stoutest heart, the strangled, lifeless body of—"
"Mr. Harvey!" I said.
Mr. Stone closed his mouth and narrowed his eyes. "No, smarty-pants, _not_ Mr. Harvey. _Mrs_. Harvey."
"But you _said_ —" I started to argue.
"I am not done yet," said Mr. Stone. "Here." He held out the bag of marshmallows to me. "Stick a couple in your mouth and keep them there. Now," he went on, "where was I? Oh, yes—and this is one of the queer parts—that big black cat was curled up in her lap, and later Floyd told people it was as though the cat was trying to bring back the warmth to his mistress's cold, dead body."
When Mr. Stone paused to sip his hot chocolate, neither Yasmeen nor I said a word. We were too caught up in the spookiness.
"Two days later," Mr. Stone continued, "Marianne Harvey was buried—right here at St. Bernard's, by the way, the marker is there for all to see—and her grieving husband wept at the graveside. Mr. Harvey told the police he had been unexpectedly called over the mountain to Belleburg the morning of the murder. While he was gone, he said, some thief must have broken in and surprised his wife.
"Well, the thief was never caught. In fact, no one ever saw hide nor hair of any thief. Add to that the fact that Mr. Harvey was not a popular man, and you can infer the rumors that flew. Some people speculated that Marianne Harvey was miserable in her marriage, that her husband had mistreated her, that she had had a sweetheart and when Mr. Harvey found out, he killed her in a jealous rage. Some speculated that it was poor stouthearted Floyd himself who was the sweetheart. But if there was evidence one way or the other, I never heard about it. And in those days no one had the guts to stand up to the richest man in town.
"A few days passed, and the weather grew colder. Finally, it was Halloween night. Wisps of cloud obscured the full moon. A gentleman walking home from a local tavern passed the Harvey house and heard a ruckus inside. Now, this gentleman had been at the tavern for some hours, and so not everyone credited his account with perfect accuracy. What he claimed he heard were three sounds at once—a mountain lion's scream, the howl of a madman, and the rough-and-tumble of a barroom brawl. This cater-wumpus lasted perhaps one minute. And then there was an eerie silence.
"Not being so stouthearted as Floyd, the fellow hightailed it to the courthouse, which in those days was also the headquarters for the police. And so it was an officer of the law who opened the parlor door at the Harvey house on Halloween night and found the mangled corpse of . . ."
Yasmeen said, "Mr. Harvey."
Mr. Stone nodded. "It was a grisly scene. Mr. Harvey had locked up the parlor after his wife died there, but he or someone else had opened it up that night. There was blood everywhere—streaking the rugs and the walls, splattered on the ceiling. And the body"—Mr. Stone shuddered as if he had seen it himself, which I guess he had in his imagination—"it was unrecognizable, just as though some beast of the jungle had wrought revenge."
"Where was the cat?" Yasmeen asked.
Mr. Stone nodded. "Well you might ask," he said. "There had been a fire in the fireplace, and a few hot embers remained. The cat was on the hearth, absorbing the last of the warmth and cleaning something red and sticky from its paws."
## Chapter Seventeen
_"Ewwwww!"_ Yasmeen and I said.
Mr. Stone smiled and folded his hands in front of him on the table. He looked pleased with himself. "That's the story, just as my dad told it. I'm surprised I still remember."
"But what about the ghost?" I asked.
Mr. Stone got up from the table and cleared our cups. "Ah yes, the ghost," he said. "It seems that old man Harvey's ghost still haunts the mansion he built for himself and his bride, and those who have lived there since have spent many a sleepless night."
"I've heard cats in College Springs often get catnapped around Halloween," I said. "And sometimes whoever it is blames the ghost. This year there are five missing cats already."
"Really?" said Mr. Stone. "That's a shame. It would seem the Harvey ghost is not entirely rational. Having been killed by his wife's cat, he seeks revenge on _all_ cats."
Yasmeen looked disgusted. "You don't really believe in ghosts, do you, Mr. Stone?"
"The older I get, the more I find the world to be mysterious," Mr. Stone said.
"In the story, what happened to the poor cat? Marianne Harvey's cat?" I asked.
"The 'poor cat'?" Mr. Stone said. "The 'poor cat' was a bloodthirsty killer!"
"But it doesn't sound like his victim, Mr. Harvey, was a very nice man," Yasmeen said.
" _Or_ a very nice ghost," I said.
"We don't know for certain what kind of man Mr. Harvey was," Mr. Stone said.
Yasmeen disagreed. "The cat knew," she said.
I looked at Yasmeen. "It seems kind of strange that you're totally ready to accept a cat witnessing a murder and getting revenge, but you're totally rejecting the idea of ghosts."
"What's so strange about it?" Yasmeen said. "I don't believe in ghosts. I do believe in cats."
Mr. Stone didn't give me time to puzzle that one out. "As the story goes," he said, "Marianne Harvey's cat suffered the sorry fate that is common to unwanted felines—he was put in a sack with a great number of rocks and thrown into a pool of water, in this case the Harveys' well. People said his howling was enough to freeze your blood."
Yasmeen and I both felt better when we left Mr. Stone's house. It couldn't have been the gory ghost story that cheered us up. It must have been the hot chocolate and marshmallows.
"Let's go back to St. Bernard's," I suggested, "to see where Marianne Harvey is buried."
"I can't," Yasmeen said. "I'm going over to see the Lees' new baby. My whole family has to. But—I know, Alex—why don't you go over to the cemetery? Maybe what's going on _is_ a Halloween prank, and somebody's eventually going to blame the whole thing on the ghost. You might notice something new at the cemetery."
This time it was me who opened my mouth and closed it again. I never thought of going to the cemetery _alone_. But Yasmeen already had plenty of reasons to call me a wimp. If I refused to go, she'd have plenty plus one.
"No problem," I said, trying to sound like I meant it. "I'll call you after dinner." Then I turned around and started walking toward St. Bernard's, all the time thinking, "Provided the ghosts don't get me first."
## Chapter Eighteen
The last time I had paid a visit to my local graveyard, my cat had paused to do a little personal grooming beside a statue of a grumpy angel. As it turned out, that angel was Marianne Harvey's grave marker.
Actually, the angel was pretty close to the gate, but that day I turned right when I walked in, and I wound around searching among a lot of other headstones before I came to it. By the time I did, the light was almost gone, and I had to stare to read the inscription:
**MARIANNE MCCLELLAN HARVEY
BORN JULY 2, 1854
DIED OCTOBER 28, 1879
IN DEATH, THE ETERNAL WIFE.**
It was dark and cold. I was in a cemetery. The leafless trees looked sharp and thorny against the rising moon. Can you blame me for feeling creeped out?
And that inscription didn't help. It was like it condemned poor Marianne to be stuck with her murderous husband forever.
Mr. Stone had said Mr. Harvey was buried next to Marianne, but searching still took me a few minutes. In the end, I had to brush away dirt to read the inscription. When I did, it was even stranger than his wife's.
**GILMORE SAMUEL HARVEY
BORN DECEMBER 2, 1836
DIED OCTOBER 31, 1879
SO SHALL THE RIGHTEOUS
ESCAPE THE GRAVE.**
Now not only was I creeped out, I had something to think about. Maybe this was crazy, but it almost felt like that one was trying to tell me something. But what?
A cold gust made me shiver, and I noticed the bats were out again. If there was ever a moment for ghosts and vampires and werewolves to appear in a regular kid's life, this was it.
I started to run. I didn't get very far.
That night after dinner I called Yasmeen to fill her in. I swear, even over the phone line, I could hear her shake her head, exasperated. "That's why I carry Band-Aids and antiseptic," she said.
I touched my forehead to see if it still hurt. It did. I think it was Dad's scrubbing that inflicted most of the damage, but it hadn't been such a hot idea to run into the tree in the first place.
"Anybody would've been scared," I said. "Anybody would've run."
"Anybody would not have run into a _tree_ ," she said. "It takes the distinctive talents of my next-door neighbor Alex Parakeet to do that."
"Can we change the subject?" I said.
"Absolutely," Yasmeen said. "The new subject is how you're going to help me do Mrs. Lee a favor."
"That wasn't the new subject I was thinking of," I said, "but what favor?"
"We're supposed to return one of the baby monitors—the fancy one from Mrs. Jensen. Marjie Lee says it's too powerful. She keeps picking up cell phone conversations, and it's embarrassing."
"But why are _you_ doing this?" I asked her.
" _We_ are doing this because my mom volunteered us," Yasmeen said. "Come on, Alex. It's only over to Biggest Buy-Buy. We can walk there after school."
There was no way carrying one baby monitor required two people. But there was also no way I was going to get out of this if Yasmeen had made up her mind. So I said, "Sure. Now can we talk about my subject?"
"Sure," Yasmeen said.
"I told you about the gravestones, what they said?"
"Right," said Yasmeen.
"Well, didn't it seem strange to you—especially Mr. Harvey's?"
"It's unusual," she agreed, "but every Christian believes Jesus rose from the grave so that we will, too. Isn't that all he was saying?"
Something hit me. "Wait a second. Isn't that all _who_ was saying?"
"Who else are we talking about?" Yasmeen said. "Gilmore Harvey."
"Gilmore Harvey wrote what it said on Marianne's headstone. He was there to do it after she died. But _when_ did he write his own?" I asked. "He died all of a sudden. It's not like he had time to be composing his own—what do you call it? An epo—?"
"An epitaph," Yasmeen said slowly, like she was thinking as she spoke. "So unless he had it ready to go in advance, he didn't write it. Someone else did."
"Someone else," I repeated, "but who?"
"I don't know," she said, still like she was thinking. But then her voice changed. "Look, Alex, this is all ancient history, right? It's not helping us find the missing cats."
"You sent me to the cemetery!" I protested.
"That was because I thought you might find a clue to what's going on in this century—the twenty-first century, not the nineteenth. I think we better forget about the cemetery for now. Don't you want to hear about the baby? And Mr. Lee was even there."
"Amazing."
"That's what my mom said. You know what's kind of a weird coincidence? The baby's room is all decorated with pictures of cats—big ones like lions and cheetahs and lynxes. Mrs. Lee told us it's because of Mr. Lee's business."
"What is his business anyway?" I asked. "All I know is that nobody ever sees him."
"His business is exotic pets," Yasmeen said. "He travels all over the world buying and selling. His customers are super-rich people who want something unusual."
"Pets?" I said. "Yasmeen, what if . . . ?"
"What if what?"
"What if Mr. Lee has something to do with the missing cats?"
"You aren't listening, Alex. No offense to Luau, but there is nothing exotic about a house cat."
"Not here in Pennsylvania," I said, "but maybe somewhere house cats are exotic, or—what about this? What if he _does_ something to them to make them exotic?"
There was a pause, and I could hear Yasmeen breathing. Then she said, "No. No way. If you ever got a chance to talk to Mr. Lee, you'd see. He's nice, really."
My head hurt. And arguing with Yasmeen would only make it worse. So I didn't. But all the same, this is what I was thinking: Was Mr. Lee really the nice guy she thought he was? Or could he be a serial catnapper?
## Chapter Nineteen
Mom walked into the family room as I was hanging up the phone. She was just getting home and still had her uniform on. She tried to smile at me and say, "Hi, honey," but she was yawning, so her face got twisted and her words came out, "Hi-yuh-ee." Then she took a good look and woke right up. "What on earth happened to your _head_?" she asked.
I touched the bandage. "Little accident. I'm okay."
"Did your dad clean it up?" she asked.
" _Oh_ , yeah," I said. "I think he used steel wool."
Mom looked sad. "I wish I had been home to do it, but somebody's got to make College Springs safe for decent people—and decent cats."
"Anything new?" I asked.
"Another cat is missing," Mom said.
"Another negligent owner?" I asked.
Mom dropped into the big, comfy chair, closed her eyes, and nodded. "I may never figure this one out, but at least you got a new vocabulary word."
"And did this one see the thief in action?"
"Saw something, but no good description," Mom said. "I swear, whoever this is moves like a ghost."
My ears pricked up. "A ghost?" I said. "See, Mom. Maybe it really is—"
Mom silenced me with a look. Obviously, she did not want to hear any more from me about ghosts. Should I tell her my suspicion about Mr. Lee? But I didn't think she'd appreciate me suspecting our next-door neighbor without an atom of evidence either. So I asked a different question. "Did you have a chance to talk to Kyle's family?"
"For quite a while," she said. "They were a positive joy after the other folks I've been visiting lately. Except that boy is morbid, don't you think? I asked what he does for fun, and he said, 'I visit the cemetery across the street.' "
"Did you notice anything else about Kyle?" I asked Mom. "Like was he—I dunno— _scared_ of you or anything?"
I was thinking of how nervous he had seemed in the cafeteria when he told Yasmeen and me to stop detecting. If it scared him for _us_ to investigate Halloween's disappearance, wouldn't he be terrified by a police detective asking questions?
"He did seem anxious," Mom said. "But it fit in with him being an odd kind of kid. What did Fred call him? A Gloomy Gus?"
"What else did you find out?" I asked.
"That Fred Krichels was right about something else," Mom said, "that little sister of his—Cammie. I think I am now a leading authority on the life of Cammie. She's making a unicorn out of play dough at preschool. Her favorite song is 'The Cat Came Back.' " Mom shook her head and laughed. "Yah-yak-yak—gosh, a kid like that can get on your nerves!"
Shoot, I thought. Was my mom as bad as Officer Krichels? People who are little and annoying are not necessarily dumb, too. Mom pulled her notebook out of her back pocket and flipped through the pages.
"Here it is," Mom said. "According to Cammie, Kyle _tortured_ the poor cat." She read from the notebook: " 'He always went around yanking Halloween's ears and pouring poison in them.' "
"What?" I tried to picture pale, sad-faced Kyle hurting a fly, let alone his own cat.
Mom laughed, which wasn't precisely what I expected when she had just told me about a kid torturing a cat. "Alex," she said, "haven't you ever yanked on Luau's ears and poured poison in them?"
I was shocked. "Of course not. Luau's my buddy!"
"Oh yes?" She was still smiling. "Let me ask you something else. Do the words _ear mites_ ring a bell?"
_Ohhhh_. Now I got it. Ear mites are tiny, itchy bugs. If your cat gets them, it goes crazy trying to scratch, so the vet gives you a bottle of eardrops. When I gave them to Luau, he hated it—kept trying to wriggle away while I held tight.
"Cammie must have seen Kyle treating the ear mites and thought he was torturing his cat," I said.
Mom nodded. "Plus she's a typical kid, loved tattling on her big brother. I double-checked with his parents. They even showed me the bottle from the vet."
Good old Mom. She had solved one mystery, at least. Kyle did love his cat. You would never go to the trouble of "yanking its ears and pouring poison in them" if you didn't.
## Chapter Twenty
The next day turned out to be one of those unusual ones where everything we did in school actually required the use of my brain. That meant I didn't have a chance to think about who stole Halloween—not to mention five other cats—till Yasmeen and I were on our way home.
As we turned the corner onto Chickadee Court my stomach rumbled. Dad hadn't made it to the grocery store yesterday, so instead of a sandwich there was a Ziploc bag of Pirate Berry Crunch in my lunch. And Pirate Berry Crunch just doesn't stick with you.
Thinking of soup, I said, "What if we talk to Bub again?"
Yasmeen was hungry, too. "Good idea."
Bub had another guest in his living room when we walked in. This one was curled up on the recliner with his head resting on the remote. On TV was a black-and-white movie with the sound turned down. In it a pretty lady on scaffolding was trying to fix a big dinosaur skeleton.
Bub nodded at the set. " _Bringing Up Baby_ ," he said. " 'Baby' is a leopard—and from the feline point of view, a dish. Luau purrs every time she comes on."
Luau heard his name, stretched, and rolled over, exposing his tummy. I tickled him, and he _mrrrrow_ ed and batted at my hand, which meant, _Please, Alex, not in front of the neighbors!_
Bub served us lentil soup and sat down at the head of the table. "How's the other guy look?" he asked me.
It took me a second to realize he was talking about the Band-Aid on my forehead. "The other guy was a tree," I said.
Bub nodded thoughtfully. "There's been a lotta that lately," he said, "trees attacking innocent kids. I saw it on Fox."
Bub tried to keep his face straight but couldn't. He laughed and laughed, which made me laugh, too. Yasmeen shook her head like we were a couple of kindergartners. Finally Bub wiped the tears from his face with a paper towel and asked us how the case was going.
"We're kind of at a dead end," Yasmeen said. Then she told him about the missing cats with their negligent owners and about Halloween's ear mites. She did not tell him about Mr. Lee, I noticed. She still thought I was crazy to suspect he might be stealing cats for his exotic pet business.
"Still no sign of a ransom note, though?" Bub said.
"Mom said Kyle seemed anxious," I told him. "But he didn't say anything about a ransom note. And I guess none of the other cat owners did either."
Yasmeen and I were finishing our soup when we heard the doorbell ring. Officer Krichels? Al, the delivery man? It could even have been Dad. "Come on in, it's open!" Bub called, and into Bub's house walked the last person we wanted to see, Sophie Sikora.
I looked at Yasmeen, who looked at me, and our identical expressions said, _Oh, no_.
On her way in, Sophie bumped the recliner, which made Luau _mrrrrrow_. When she got to where we were sitting at the table, she ran into that, too. It's a good thing our bowls were almost empty, or there would have been a couple of soup tsunamis right into our laps.
"Hey, Bub!" Sophie greeted us. "Hey, Yazzie and Al! What's the haps?"
Nobody calls me "Al." And my dad is the only person allowed to call Yasmeen "Yazzie." We both opened our mouths to set Sophie straight, but she kept right on talking. "_I_ just fixed Billy's remote control jeep," she said. "Mrs. Jensen paid me, too. It was a whole _lot_ she paid me, but I can't tell you how much, because you'd be _so_ jealous, and my mom says I shouldn't brag, even though my dad says it's okay provided you have something to brag _about_, like I do, because I'm so good at fixing stuff, so how much she paid me was ten dollars."
Bub set a bowl of soup in front of her, and she started shoveling. For a moment I understood what my mom means when she talks about "blessed silence." But soon she slurped the last of her soup and started in again.
"Did you notice how Bub's doorbell worked so good when I rang it?" she said. "I'm the one that fixed it. Bub didn't pay me though, but I'm not saying I care, because some people don't have money like the Jensens do. My family has a lot of money because my dad earns thousands and thousands every year—I forget how many thousands—only my mom says the Jensens have an ungodly amount of money. _Ungodly_ is a word that means 'even more than us.' My mom also says—"
"Sophie?" Bub's expression was patient.
"Yes, Bub?"
"Sometimes the things mom says are best left with mom."
It took a second for that to sink in. Then Sophie said, "You mean I should shut the heck up?"
Bub nodded.
Sophie shrugged. "Okay."
I got up and took Yasmeen's and my bowls to the kitchen. When I came back to the table, Bub was twiddling his thumbs, a sure sign he was thinking.
"Have you got an idea about the case?" I asked him.
He nodded. "Ah, yup. But I don't know what good it's gonna do you. Think a bit—except for Kyle's, what do the missing cats have in common?"
"The owners were negligent," Yasmeen said.
Bub nodded. "So it seems like your thief is particularly after cats that aren't well taken care of. Now, why would that be?"
Of course I knew about the coincidence. But I hadn't thought much about what it might mean. And there was the problem that Kyle's cat Halloween didn't fit the pattern. Maybe because Halloween was stolen by a different thief? But then I thought of something else. "Kyle's little sister didn't think _he_ was taking good care of Halloween either!" I said. "Maybe she told other people, and—"
Bub nodded. "She was happy enough to tell the police."
"So," Yasmeen said, "it might be that the thief isn't really _stealing_ cats. Maybe he's _rescuing_ cats. Maybe he's a good guy, not a bad guy."
"I don't know about that," Bub said. "Stealing is stealing even if your motives are good. Think what would happen if everybody took it into their heads to 'rescue' other folks' possessions."
My mom would have to work even more overtime, I thought, and I was about to say so, but the phone rang and Bub got up from the table to answer it.
It is funny how sometimes one thing leads to another. Later, we found out it was Jo, Bub's niece, on the phone. Jo is a student at the university. The dryer in her dorm was broken. She called to ask if she could use Bub's.
If the dorm dryer hadn't broken, Jo wouldn't have called. If Jo hadn't called, Bub never would have left us alone when he did.
And if Bub hadn't left us alone, Yasmeen would have told _him_ her idea, and he would have said it was too risky, and we would have forgotten about it.
So in a way, everything that happened next was because the dryer in Jo's dorm at the university broke down two days before Halloween.
## Chapter Twenty-one
Yasmeen's crazy idea was this: Spread the word that Luau was a neglected cat, too, a cat that badly needed rescuing. College Springs is a dinky town. If we told enough people, pretty soon the thief would hear about it—same as he must have heard about Kyle "torturing" Halloween. When that happened, the thief would go after Luau.
"And that's when we get him!" Yasmeen said.
There was a pause, and during the pause I expected her to say, "Ha-ha."
Only she didn't.
So finally I had to say, _"What?"_
And Sophie said, "Wow, Yasmeen. I never saw before why people said you were smart, but now I finally see because that is just _so smart_—"
"Hey—aren't your lips supposed to be zipped?" I said.
"Don't be rude, Alex," Yasmeen said. "Thank you, Sophie."
"Oh, that's great, now you're ganging up on me, not to mention poor, innocent Luau. . . ."
My cat had been peacefully watching the glamorous leopard on TV, but now his ears perked up and he said, "_mrrrrf_," which meant, _Did I hear my name mentioned?_
"I admit the plan still has some bugs that need working out . . . ," Yasmeen said.
"No, it doesn't," I said. "No bugs because no plan. Not gonna happen."
"Listen a minute," said Yasmeen.
"No."
_"Seriously."_
_"No."_
"We could fix it so Luau isn't in any danger," she said. "We could be _really_ careful."
I crossed my arms over my chest and shook my head. Luau, meanwhile, jumped off the recliner and walked toward us. I expected him to hide under my chair—seeking protection from the crazy person with the crazy plan—but Luau, that traitor, jumped into Yasmeen's lap instead.
"See?" she said. "He's volunteering."
"You get down from there!" I said.
Sophie interrupted. "I could keep him safe if I had the right equipment. I could 'wire' him like the FBI does. You know, hide a radio transmitter on his body so we could hear whatever was happening to him—"
"That is totally insane," I protested. "I mean, apart from everything else, don't you geniuses see the obvious problem? People wear clothes. Cats don't. Where are you going to hide a transmitter?"
"A collar would be enough," Sophie said. "If the transmitter is small, it could dangle from it. Are there stores for teensy transmitters? I bet I could take something apart. Like a wireless phone? Or a walkie-talkie? It has to use radio waves—"
As soon as Sophie said it, I remembered Yasmeen already had precisely the right source for such a transmitter. It was at her house, waiting to go back to Biggest Buy-Buy. Would Yasmeen remember it, too? I tried mental telepathy: Forget, forget, forget. . . .
It didn't work.
"The baby monitor!" Yasmeen said. "Mrs. Lee says it's _too_ powerful! Plus it's really small. I've got it at home. Instead of returning it, we can sort of, you know, _borrow_ it."
" _Steal_ it, you mean," I said.
"We'll return it later—"
"After Sophie takes it apart?" I said.
Yasmeen shrugged. "We are not talking about Humpty-Dumpty, Alex. We are talking about simple electronics. After she takes it apart, she'll put it back together."
I never officially changed my mind and agreed to go along with this nutso plan. But at some point it became unavoidable, like a thunderstorm when the clouds bunch up. And when Bub came back from talking to Jo, I didn't tell him what was going on. Instead, the three of us—Yasmeen, Sophie, and I—looked at each other and it instantly became a kids-against-the-grown-ups alliance. I have noticed that this happens sometimes—usually when kids are about to do something totally clever that they know is also totally stupid.
Later, we finalized our plans. Sophie is the most spoiled kid on Chickadee Court, which for once was coming in handy. She was pretty sure her mom would buy her the collar if she said it was for one of her millions of stuffed animals. Meanwhile, Yasmeen would bring the baby monitor to Sophie's right away so she could work on modifying it for its new purpose. The big problem was that the transmitter's signal would need to be amplified. Sophie had an idea for doing this, but she wasn't sure it would work.
"There's one more job," Yasmeen told Sophie. "And it's important. You have to tell everybody how badly Alex treats Luau."
"But everybody knows about Luau and me," I said. "Who would believe I treat him bad?"
_"Badly,"_ Yasmeen said. "And we've been over this. There's only one person who has to believe it, and that's the catnapper. We just start the rumor and wait. I know who I'm calling—Billy Jensen."
I didn't say anything to my parents about the plan. I wasn't sure it was going to happen, for one thing. Telling them could wait. But there was someone I needed to consult right away. At bedtime he was sitting on my pillow with his favorite possession, the white ball of catnip.
"Was Yasmeen right today, Luau?" I asked him. "Were you volunteering when you jumped into her lap? You know it could be dangerous. You could end up catnapped yourself."
Luau slithered beneath my sheet and blanket, purring. It took me a minute to understand, but when I did, I had to laugh.
"Luau Kitty," I said, "goes _undercover_."
## Chapter Twenty-two
I could have told you about seventy-five reasons this plan of Yasmeen's was bad. But there was one I never thought of—its effect on _me_. Billy Jensen was totally true to his reputation as the biggest blabbermouth in first grade, if not the entire school. He wasted no time spreading the rumor about how I mistreat Luau.
And Yasmeen hadn't left it at plain old "mistreat" either. She provided _details_. Supposedly, I buy Luau dog food instead of cat food because it's cheaper, and I make him sleep in the garage no matter how freezing it gets.
The next day, Thursday, it seemed like half the school wasn't talking to me. I even caught Mrs. Timmons glaring at me once, at the same time she brushed a few white cat hairs off her shoulder. One girl, a second-grader, hissed and clawed the air when she passed me in the hall.
"I'm really sorry, but it won't be for long," Yasmeen told me at lunch. We were the only ones at the table because no one else wanted to sit with me. "And for now, you should be glad it's going so well."
"I hope Sophie works fast," I said.
"I talked to Sophie this morning after recess," Yasmeen said. "She pitched one of her famous fits, and her mom went straight to the pet store for a cat collar. Sophie says those fits never fail. She thinks she can do the work today after school. The monitor should be ready by Halloween—tomorrow."
I swallowed the last bite of my peanut butter sandwich and gulped some milk. "I think we should go over the plan again," I said. "I'm not sure I've got it totally straight."
Yasmeen nodded. "The catnapper usually strikes around midnight. So tomorrow after trick-or-treating, you'll put Luau's new collar on him and let him out."
"Right," I said.
"You've got the cat bed ready, right?"
"I can put it on the front step. For the catnapper it'll be like an invitation," I said.
"Good," Yasmeen said. "But just in case there's a problem, you'll have the baby monitor, the receiver half."
"But that's only for emergencies," I said.
"If everything goes the way I think it will," Yasmeen said, "we won't even have to use it."
"So Luau's safely in his bed . . . ," I said, "and then comes the hard part."
Yasmeen nodded. "You _have to_ stay awake until 3 A.M., watching out the window—making sure Luau's okay. Then, assuming he is, I take over. You'll know it's okay to go to sleep when I blink the lights in my bedroom. That means I'm on duty."
"And if Luau's not still safe—if I see somebody in the yard . . .?"
"Shine a light!" Yasmeen said, just the way she would in church.
I laughed. "And what do you think will happen then," I asked, "when I _shine a light_?"
"I _think_ whoever it is will drop Luau and run, but by then we will have seen him."
"Or her," I said.
Yasmeen nodded. "And anyway, if he or she doesn't drop Luau, we've got the transmitter in the collar. What we hear will tell us where Luau is, and we'll rescue him."
When I got home from school, my dad was in the kitchen making dinner. Usually a gourmet dinner by Dad means using stuff out of two cardboard boxes instead of one, but now he was looking at an actual cookbook.
"Check it out." Dad pointed a wooden spoon at a pan on the stove. "Plus—look at this: _Fresh_ vegetables."
I looked over his shoulder and saw two onions and some broccoli. Yuck.
"I'm making stir-fry," Dad said. "I borrowed the cookbook from Marjie Lee." He measured a spoonful of soy sauce into a bowl and frowned. "Doesn't seem like very much."
I looked at the recipe. "It says one quarter _cup_ , Dad."
Dad said, "I knew that."
"So I guess the pills aren't helping your eyes any," I said.
"In fact, I think my vision's better," Dad said. "But this print is so darned small, isn't it? Which reminds me, Alex—sometime before dinner, would you run over to Mr. Blanco's store and pick up the rest of my pills?"
I said sure, thinking I hadn't had a chance to ask Mr. Blanco what he knew about the ghost story. Dad rinsed the broccoli and began cutting it up.
"How come you borrowed a cookbook?" I asked him.
"I noticed it on the shelf when I went over to see the baby. Marjie said go ahead and take it. I've been thinking I should get more serious about cooking—especially vegetables. They're good for eyesight, too, you know."
I got a handful of cookies out of the cupboard and sat down to eat them at the kitchen table. If Dad was going to get serious about vegetables, I'd better fortify myself. "What's the baby like?" I asked him.
"Scrunched-up face on one end, diaper on the other," Dad said. "It's a while before they get cute."
"What did they name her?"
Dad smiled. "Marjie Lee can't decide, which is _so_ like her. For now, they're calling her Boopsie."
"That's awful!" I said. "Doesn't Mr. Lee have an idea what to name her?"
"Who knows?" Dad said. "The man is practically a ghost—nobody ever sees him. You just hear tell he's been around."
My ears pricked up. Mr. Lee was like a ghost? Maybe he really _was_ the catnapper!
But I didn't want to say that. Mr. Lee was a neighbor, and my reasons for suspecting him were lame. Reason one: He was in a business that had to do with animals, and cats are missing, and cats are animals. Reason two: He is never around, which makes him seem mysterious, and the thief is also mysterious.
There was something more as well, though. I didn't know what to call it. _Instinct_ maybe? My instinct told me not to trust Mr. Lee.
I tried to be subtle. "You didn't notice anything unusual at the Lees' house when you were there, did you, Dad? Like new _pets_ maybe?"
"Isn't a new baby enough?" Dad said.
I tried again. "So, uh, what do you know about Mr. Lee? I mean, what kind of a guy is he? What about his business?"
Dad looked over his shoulder at me. "Why this sudden neighborly interest, Alex?"
I swallowed the last bite of cookie. "I don't know," I lied. "Just curiosity is all."
Dad squinted at the recipe again. Then he measured a spoonful of oil and poured it into the pan. "Well, Alex," he said. "You know what they say. 'Twas curiosity that killed the cat."
## Chapter Twenty-three
The day had started out sunny, but by now a silver sheet of clouds had drifted in, and the air felt cold. To keep warm, I ran partway to Mr. Blanco's store at the Harvey house. Walking up the path, I noticed most of the pumpkins were gone from the front yard. Inside, the lights were bright. The store seemed to be open, but there was no one around.
"Hello? Mr. Blanco?" I called.
"Hello!" came a voice. "Who's there?"
I still didn't see anyone, and I couldn't figure out where the voice was coming from. Knowing this was the Harvey house—the famous _haunted_ Harvey house—I felt a little weird conversing with a voice that didn't have a body.
"It's me, Alex Parakeet! Is that you, Mr. Blanco?"
"I think so," the voice said, "but I'm so covered with dust and cobwebs I can't be sure. Hang on. I'll be up in a minute."
Oh, that's right. Mom had mentioned that there was a cellar. That's where the neighbors found the starving cats that time. Sure enough, a moment later Mr. Blanco emerged from a doorway in a back corner. With one hand he was carrying a big black book—about twice the size of a photo album—and some old yellow newspapers. With the other hand he was wiping cobwebs from his face. I hoped no spiders had hitched a ride in his hair.
"Did you come for the rest of the pills?" he asked. "I've got a new batch up by the register. And maybe you'd like some ointment for that bruise on your forehead? It looks painful."
"It doesn't hurt anymore," I said, "but thanks."
"How are the pills working out for your dad?" Mr. Blanco asked.
I followed him to the front of the store. He dropped the book and the newspapers onto the counter. They landed with a bump and a poof of dust.
"Dad thinks they're helping," I said, "but I'm not sure."
"They're made in small batches," Mr. Blanco said, "which is why your dad had to wait for these." He pulled the yellow bottle of pills from underneath the counter. "Anything else I can get you?"
I looked around and noticed the white balls I had seen last time, the ones that looked like the catnip sachet we found under the car. _Shoot!_ I had meant to ask about them then, but when the ghost howled and turned out the lights, I totally forgot.
"Are these catnip?" I asked.
Mr. Blanco nodded. "One of my most popular sellers. Cat owners are crazy people—have you noticed? Oh—sorry, Alex. Present company excepted."
"I guess you wouldn't remember any particular person who bought one of these catnip things?"
"Do you have somebody in mind?" Mr. Blanco asked.
I wanted to say, yeah—I have in mind your basic catnapper. Do you have any catnapper customers? But instead, I explained that Yasmeen and I had found one on Groundhog Drive by St. Bernard's.
"You wanted to return it to the rightful owner, is that it?" Mr. Blanco said. "Well, in that neighborhood, I'd say it was probably Kyle Richmond. Talk about your crazy cat-owners." Mr. Blanco shook his head.
"Kyle?" I felt crushed. If the catnip was Kyle's, it wasn't a clue at all.
"Come to think of it," Mr. Blanco said, "he's another kid that's seen the ghost, same as you and Yasmeen. He was in Sunday afternoon to buy catnip and started asking questions about the ghost story."
"Mr. Stone told Yasmeen and me his dad's version of the story," I said. "I even went over to the cemetery to see the grave markers. I was hoping you might know more."
"That's one reason I've been digging around this house," Mr. Blanco said, "to find out what really happened. Living across from the cemetery, Kyle's interested in ghosts, too. Anyway, we were talking when the usual ruckus kicked up. He was outta here before the lights blinked. Poor kid—he hasn't been back."
I said that was too bad, then I asked about the stuff Mr. Blanco had brought up from the cellar. "Does this have to do with the ghost story?"
He pushed the book and the newspapers toward me. "They're from around the time the murders took place," he said. "And yesterday I found something strange, too. We're still remodeling, you know, and I punched through a wall in the room I think must have been the parlor."
"That's where Mr. Stone told us the bodies were found," I said, "Marianne Harvey's first, and then, on Halloween night, her husband's."
Mr. Blanco nodded. "Well, punching through that wall, darned if I didn't find an old fireplace. And it must've been covered over in a hurry, too, because there were still traces of burned junk there."
"Junk?" I said. "Like somebody burned trash in the fireplace?"
Mr. Blanco shrugged. "I wouldn't have thought so—not in an upscale house like this. Let me show you." He pulled a plastic bag out of a drawer, then laid it on the counter beside the papers and the book. The bag's contents were black and dusty, but after a minute I realized they were burned fragments of cloth—someone's clothes maybe.
"Can I open the bag?" I said.
"Not in here, if you don't mind," Mr. Blanco said. "It makes a heck of a mess. You probably think I'm crazy saving it at all, huh?"
"No, I don't," I said. "Ever since Yasmeen dragged me into that mystery last Christmas, I understand how detecting takes over your brain."
Mr. Blanco agreed. "The more I find, the more I wonder. For example, I grant it's a good story, but does it really seem likely that a house cat could kill a human being?"
Mr. Blanco kept talking, but I didn't hear what he said. Was I imagining it? Or did I feel a gust of cold air?
"And then there's this stuff here," Mr. Blanco was saying. "Who would have been burning clothes in the fireplace, and why?"
I tried to ignore the goose bumps prickling my arms. "But how do you know the burned clothes have anything to do with the murders?" I asked. "Haven't a lot of people lived in this house?"
"Quite a few," he said. "But the clothes almost have to come from around the same time. I have some old photos of the house, and that fireplace has been walled up since before the turn of the twentieth century."
I was going to ask him what other stuff he had found, but an unearthly howl interrupted me—like a giant-size cat with its tail pinched under a rocking chair. I looked at Mr. Blanco expecting to see fear in his face, but he only sighed and shook his head. "Here we go again."
## Chapter Twenty-four
The howl, the lightning, the crack of thunder, and finally the lights blacking out. The ghost had paid another visit to the Harvey House Health Boutique, but it wasn't so scary this time, I guess because I kind of knew what to expect. It seems like a lot of what's scary in the world is the possibility of the _un_expected. Anyway, now I thought the ghostly activity was more like a signal than a threat.
But a signal for what?
After Mr. Blanco turned the lights back on, he gave me one of the catnip balls for Luau "because you're a good customer," and he let me borrow the dusty newspapers and the big black book. "I won't have time to look at them tonight, anyway," he said. "You being an experienced detective and all, maybe you'll figure out what really happened."
Fat chance, I thought. I can't even solve the case of a cat stolen last week—how could I expect to figure out anything new about murders from the nineteenth century?
At home, Dad thanked me for the pills and Luau thanked me for the catnip. Then I went over to Yasmeen's to report the latest and ask for her help.
"What's that dusty old junk?" Yasmeen wanted to know when I dropped the book and the newspapers on the coffee table in her family room.
I told her Mr. Blanco had found them. "Now he wants me to look through them and help figure out the truth about the ghost," I explained, "but I need your superior brainpower."
Yasmeen loves to be told how superior her brainpower is, so she said sure—even though she doesn't believe in ghosts.
Then I told her the bad news about the catnip we found, that it was probably Kyle's.
Yasmeen sighed and shook her head. "So our one and only clue isn't?"
Brains are peculiar things. I guess because we were talking about clues, mine suddenly filled up with that dream from a few days ago, the one where all the clues turned into fish and swam away—all except one slip of paper.
"Yasmeen," I said, "what did you do with the other clues?"
"Aren't you listening?" she asked. "We don't _have_ any other clues."
"I mean the other stuff we found when we found the catnip," I said.
"Oh—the beer can and junk," she said. "In my room. I didn't think we should throw anything out till we were done with our detecting."
"Let's take another look," I said. "We can check out Mr. Blanco's stuff later. The ghost has been around for more than one hundred years—he isn't going anywhere."
Upstairs, Yasmeen retrieved the bag from a shelf. "Besides the can, there's a gum wrapper and some kind of receipt."
"That was it." I took the bag from her. "A grocery receipt."
We looked at each other.
_"A grocery receipt!"_ Yasmeen conked her head with her fist. "We must be the _stupidest_ smart kids yet!"
The receipt was from the Smartt Mart on Northernmost Parkway. Unfortunately, it didn't have a credit card number or a name. But there was a list of what had been purchased and the date, October 22, the same day Halloween disappeared.
"What kind of recipe uses ten boxes of salt and twenty pounds of flour?" I asked Yasmeen.
"I don't know." Yasmeen made a face. "But I wouldn't want to eat it."
"Besides that, there're five packages of food coloring—"
"Assorted colors," Yasmeen cut in. She was looking over my shoulder.
" _And_ ten twenty-pound bags of cat food."
Yasmeen moaned. "We've had this for four days, and we never even looked at it."
"Yup, we're idiots, all right," I said. "We were totally focused on catnip, and this was right in front of our faces."
Looking at the receipt, Yasmeen asked, "Is that a good brand of cat food?"
"The kind the vet likes." I nodded.
"So it looks like we're right about that, at least," Yasmeen said. "The catnapper _likes_ cats; he probably sees himself as a cat rescue squad."
I went back to studying the receipt. Flour, salt, and food coloring. What could a person do with that? I knew some animals—like cows and horses—need extra salt, but I never heard of cats needing it. And anyway, it didn't explain the flour or the food coloring.
Professor Popp's voice interrupted my thinking. "Wash your hands for dinner, children. Alex? Would you care to join us?"
I remembered Dad's stir-fry. "Yes!" I said. "Yes, I would!"
I was afraid Dad would argue when I phoned to ask permission, but he said actually it might be better if I ate at Yasmeen's house. "That way if my stir-fry turns out to be lethal, you can say nice things at the funeral," he said.
"I'm sure your stir-fry will be delicious," I lied. "Is it okay if I'm home by eight? Yasmeen and I have some stuff to do after dinner."
"See you then," Dad said.
Dinner at the Popps' house was ham, mashed potatoes, and unidentifiable mushy green stuff. I cleaned my plate.
Mrs. Popp smiled at me. "It's a pleasure to feed such an appreciative guest."
"It's a pleasure to be here," I said sincerely.
Meanwhile, Jeremiah was trying the oldest trick in the book, talking to avoid eating. "Are you sure this is human food?" He poked the green stuff with his fork. "Because it looks like the food Miss Deirdre feeds Arnold."
"Jere-_mi_-ah?" Mrs. Popp said, warning him.
"Arnold is our class pet," Jeremiah explained to me.
"I believe he is a hamster," Professor Popp added.
"Uh-oh," Jeremiah said. "Did this ham come from a hamster?"
Professor Popp shook his head. "No hamsters were harmed in the making of this meal. So please finish your dinner, Jeremiah. It's good for you. Your body converts food to energy so you can play and jump and run around."
"Arnold runs around," Jeremiah said. "He has one of those running wheels. _Squeak! Squeak! Squeak!_ Miss Deirdre has to take him home after school because he bothers the other people in the building."
"Vegetables are good for Arnold, too," Mrs. Popp said. "They're good for all God's creatures."
"What about ghosts?" Jeremiah asked. "Are vegetables good for ghosts?"
"There are no such things," Mrs. Popp said.
"Daddy believes in them," Jeremiah said.
"Do you, Daddy?" Yasmeen asked.
"I have studied the matter a little," he said, "and while I am skeptical, I am not necessarily an unbeliever."
"Oh, piffle!" said Mrs. Popp, and she stood up to clear the table.
"Can I ask you something," I said to Professor Popp, "since you've studied about ghosts?"
"You _may_." Professor Popp nodded.
"Why do they come back?" I asked. "I mean, not everybody who dies becomes a ghost, right? If they did, we'd be bumping into ghosts every minute."
"Most cultures believe that the shade, or ghost, has some unfinished work to attend to," Professor Popp said. "Often the deceased person has been accused of something unfairly, and its ghost seeks justice."
I thought about that. "So if a ghost was haunting somebody, then maybe the somebody should help the ghost out," I said.
Professor Popp wanted to know if I had something particular in mind, so I explained about Mr. Blanco and the Harvey house.
"Have you noticed a pattern to the ghostly appearances?" Professor Popp asked me.
"Come to think of it," I said, "it kind of seems like they happen when people are talking about the ghost story."
"Then perhaps," Professor Popp said, "there's something in the story that the ghost doesn't like."
Yasmeen stacked my plate on top of her plate and Jeremiah's plate on top of mine. "So if that's true, it means Mr. Harvey _didn't_ murder his wife," she said. "And his ghost won't settle down until he's proved innocent."
"I thought you didn't believe in ghosts," I said.
"Of course I don't," Yasmeen said. "But if there were ghosts, which there aren't, that would be the logical conclusion."
## Chapter Twenty-five
After Yasmeen and I were done in the kitchen, there was time to take a look at the old black book and the newspapers from Mr. Blanco. In the family room Professor Popp was sitting on the sofa reading a yellow sheet of writing paper.
"I hope you don't mind." He looked up at us. "I was curious and opened this old ledger book. When I did, the stationery fluttered out. The handwriting is faded, but it seems to be a page from a _billet-doux_."
"A what?" I said.
"Oh, Daddy, how _romantic_!" Yasmeen said. "Let me see!"
Professor Popp handed her the paper. "_Billet-doux_ is French for 'sweet note,'" he told me. "In English, a love letter. Where did you get all this?"
I told him, and he nodded. "It's a ledger book, quite a useful document for a historian." I didn't understand, so he explained that a careful man like Mr. Harvey would have written an entry for everything he bought and everything he earned in a ledger book. Professor Popp flipped through several pages. There were entries for lots of different purchases—big amounts for stuff like bricks and lumber, small amounts for flour, lamp oil, and ink.
"Are you sure this was Mr. Harvey's book?" I asked.
Professor Popp turned to the inside front cover. There, in spidery black writing, were the words: "Gilmore Samuel Harvey, July 1, 1877–"
"There's no ending date," I said.
"I noticed that, too," said Professor Popp. "Apparently his work was interrupted."
"What's the last entry?" I asked.
We paged through till we found it: On October 28, 1879, Samuel Harvey had purchased a "traveling portmanteau" from R. J. McClanahan's store for $3.50.
"What's a portmanteau?" I asked.
"Suitcase," Yasmeen said, without looking up from the page she was reading.
"I guess he never got to use it," I said. "He died on October thirty-first—I've seen his grave."
Yasmeen sighed a huge sigh, and when I looked at her face, it had this gross, dreamy expression. Usually I can forget that Yasmeen's a girl, but sometimes it is hard.
"This is so romantic!" she said. "Should I read it to you?"
"No," I said.
"Oh, come on," she said, _"please."_
"Give it over, and I'll read it myself," I said.
"It's only a piece of a longer letter," Professor Popp said. "I looked for more, but this is all that's here."
Frowning, Yasmeen handed me the sheet of stationery. It was so old it crinkled like it might shatter into confetti. I don't know how, exactly, but I could see right away that the writing was feminine. The letters were large and round, much different from Mr. Harvey's spidery black scrawl.
This must have been a middle page because it started midsentence:
. . . _be with you always, dearest Floyd, but you know our circumstances make it impossible. We are star-crossed like the tragic lovers of yore. If only I had met you sooner, if only my parents had been less bent on marrying off their old-maid daughter to a wealthy man, if only I had been a woman of some means of my own—were any of these "if onlys" satisfied, then I might have been your own Marianne. Alas, this will never_ . . .
I looked up at Yasmeen. She still had the dreamy expression. "Listen, Yasmeen, this is really important, right? It confirms part of the story."
Yasmeen nodded. "It's from Marianne Harvey to stouthearted Floyd. It must be."
"Who?" Professor Popp said. "What?"
We explained that Mrs. Harvey was beautiful, that people said she had a sweetheart, and the sweetheart was the same guy who found her body, Floyd. Professor Popp nodded. "This would seem to confirm that there was a romance, and further, it would seem that she was calling an end to it. But I believe there may also be something more."
"What?" Yasmeen said.
"Consider _where_ I found the letter," said Professor Popp. "It was tucked in the pages of Gilmore Harvey's ledger book. I haven't had a chance to look closely, but as far as I can tell, his is the only writing in the book. Do you see what I'm getting at?"
I nodded. "Gilmore Harvey was never supposed to see this letter—but he did, and that means he knew his wife had a sweetheart. And if he knew—well, then I guess it's the way the rumors said. That could be a reason to kill her."
"But now I'm not so sure he _did_ kill her," Yasmeen said. "I mean—if Dad's right that ghosts come back seeking justice, what justice is his ghost seeking now?"
"Then there's the cat," I said. "Marianne's smart black cat, who supposedly killed Gilmore Harvey."
"It was a _cat_ that killed Mr. Harvey?" Mr. Popp said.
"That's how the story goes," Yasmeen said. "Killed him in revenge after Mr. Harvey killed Marianne. And after that the cat was killed, too—drowned in the Harveys' well."
Mr. Popp shook his head. "Quite a gothic tale," he said. "But whatever the truth may be, you're not going to learn it on a school night."
I picked up the book and the newspapers. "Thanks for dinner," I said, "and for helping us."
On the short walk back home, my head was spinning. I was thinking about the receipt. I was thinking about the ledger book. But most of all I was thinking about that love letter.
Of course, knowing what happened to her, I felt totally terrible for beautiful Marianne Harvey. I mean, there she was, stuck with an old, ugly, mean husband and in love with a young guy who worked for him. I guess she must have been really unhappy. At the same time, though, hadn't she been unfair to her husband? I mean, once you get married, you aren't supposed to have sweethearts anymore.
But from the letter, it sounded like she was calling off the romance. So maybe she was trying to be good after all.
Luau met me at the front door and side-rubbed my leg, which meant, _Greetings, Alex. Believe it or not, my food bowl is empty!_ I bent down and tickled him under his chin. "You know what, Luau?" I said. "You're lucky to be a cat."
Luau closed his eyes and purred, which meant, _And you're lucky to be a kid_.
## Chapter Twenty-six
Apparently, the stir-fry did not kill my dad; he was standing in the kitchen.
"How did it taste?" I asked.
He pointed at the garbage disposal. "Rest in peace," he said. "But I'm pretty sure my culinary skills will improve when my eyes do. I took a double dose of the pills."
"Is that a good idea?" I asked. "What if you get X-ray vision?"
Dad threw a dish towel over his shoulder like a cape. " _Super_ dad!"
I nodded. "Could come in handy solving crimes."
"Which reminds me," Dad said, "how goes the case of the missing cats?"
"Well, so far today, Yasmeen and I have realized that we're idiots," I said.
"That's not necessarily bad," Dad said. "Often, the first step toward wisdom is to recognize one's own foolishness."
"Is that from a fortune cookie?" I asked.
Dad said it might be, or he might have made it up. "When you get to be my age, it's not only your eyes that fail, it's your memory, too."
"You're not old, Dad," I said, which made him grin.
"Keep picking up your cues, Alex. Otherwise I'll have to hire a new sidekick."
I told Dad good night and took the old newspapers up to my room. Like the love letter, they were crinkly and yellow. It was weird to think how long they'd been around. Not a single person mentioned in them was still alive.
The two newspapers on the top were from 1876.
Then there was one from 1877. In them were articles about new buildings going up, streets being laid out, businesses opening. Most of the stuff was pretty boring.
And then I found it.
Page one, November 3, 1879.
**HARVEY RITES TOMORROW
AT ST. BERNARD'S**
I guess the newspaper reporters had already written about the murder itself because this article mostly talked about plans for the funeral and how important Mr. Harvey's business was. Toward the end the article reviewed the "peculiar circumstances" under which the body was found. From what this said, it looked like Mr. Stone's version of the story actually was right. The big black cat was found in the parlor with the body, the body had been so badly mauled it was "unrecognizable," and Marianne Harvey had been strangled in the same room only two days before.
The last sentence read:
_So bizarre and bloody a tragedy has never yet been heard of in the brief history of our fair town nor yet for many miles around_.
I flipped through the rest of the papers quickly, but there was nothing else from 1879. I was about to turn out my light when I spotted a little tiny article at the bottom of the front page, easy to miss because the headline was small—like nobody thought it was important at the time.
And the way it turned out later, nobody in all the years since had thought it was important either.
Not till I did. But first there was the case of the missing cats to solve.
## Chapter Twenty-seven
"Stouthearted Floyd disappeared?" Yasmeen repeated. We were on our way to school the next morning, Halloween day. "Right after Gilmore Harvey's body was found?"
"That's what the old newspaper said: 'One Floyd Anderson, an employee of Mr. Gilmore Harvey's dry goods emporium, was reported missing by his friends and colleagues.' "
Yasmeen thought for a minute. "Well, I suppose that might make sense," she said. "Probably he was afraid people would find out about him and Marianne Harvey. Probably he was afraid the police would suspect him of killing her husband, so he left town."
"Maybe," I said, "or maybe he was just so sad about her being dead that he ran away."
Sometimes, I swear, I can hear the wheels whirring in Yasmeen's head. This was one of those times. "What does it mean?" she muttered.
"I know," I said. "It's like the explanation is dangling right above us, but we can't jump high enough to reach it."
"It's been bad enough trying to find Halloween," Yasmeen said, "then you had to go and introduce a whole separate mystery."
"That is so unfair," I said. "You're the one who likes detecting. I never told Kyle we'd find his cat for him."
"Well, that, at least, we are going to do," she said. "Tonight—with a little help from Luau and Sophie."
Halloween used to be a fun day at school. We would have a costume parade and a class party. The teachers would read Halloween stories. But that all ended a couple of years ago when some parents complained that Halloween celebrates wickedness. Since then, October 31 has been a regular day like every other regular day. And this year it was worse than that, at least for Alex Parakeet, notorious abuser of his own cat. Only a person who has been hated by everyone in his entire school knows how bad that day was for me, knows how much I'd like to just forget it.
The only reason I even survived is I knew Monday would be better. Once Kyle's cat was home, Yasmeen would tell Billy Jensen the truth, Billy Jensen would tell the entire population of our town, and my life would go back to normal.
The three-o'clock bell finally rang, and I shot down the hallway and out the front doors. It was a few minutes before Sophie and Yasmeen came out to meet me. Sophie had something to show us: the radio collar for the undercover kitty. She smiled a huge smile when she pulled it out of her backpack.
"Try it out," she said, and handed me a receiver the size of a cell phone. Then she ran ahead a little ways and stopped. "Switch it on!" she called. I pushed the button on the side. There were crackles and hisses, then Sophie's voice, kind of scratchy but plenty loud: "Can you hear me?"
"I can't believe it." I looked at Yasmeen. "She _is_ a genius!"
Sophie was running back toward us by now and heard me. _"Duh,"_ she said.
It annoys Yasmeen to discuss somebody else's genius, so she changed the subject. "What about the batteries?" she asked. "How long will they last?"
"Yeah, that might be a problem," Sophie said. "The one in the collar won't last that long. There's no way for a cat to switch it off, plus it's small. The ones in the receiver will probably go quite a while, but you might as well turn it off till we need it."
" _If_ we need it," I said.
"Which we won't," Yasmeen said.
For once, Mom was home when I got there. She was in the family room, taking a break before going out on her Halloween patrol. I asked her whether any more cats were missing, and I was really glad when she said no.
"How was school today?" she asked.
The truthful answer would have been, "Terrible." But I didn't want to say that, because I didn't want her asking a bunch of questions. So I tried to think of something good and said, "I finished my map finally."
Mom smiled. "School hasn't changed in some ways," she said. "We made relief maps, too. I remember the dough left your hands all dried out."
I nodded. "Because there's so much salt in it. Salt, and flour, and . . . _Oh, my gosh_."
I guess my face must have gone funny because Mom said, "Alex, are you okay?"
I nodded. I stood up. I said, "Kind of, Mom. I'm kind of okay. But I've got to call Yasmeen right now. I think I've just figured the whole thing out: I think I know the identity of the catnapper!"
## Chapter Twenty-eight
At the time I was ticked off at Mom for not taking my bright idea more seriously. But now I see it was probably good that she didn't—at least not right then. Instead of letting me call Yasmeen and instead of phoning dispatch and having Officer Krichels go arrest my new prime suspect, she sat me back down and made me explain what I had figured out.
When I got to the part about the grocery receipt, she said, "Wait one minute, Alex. Let's think this through, shall we? Chances are Mrs. Timmons _did_ buy salt-dough ingredients. But there are other classes at your school making relief maps, right? So other teachers may have bought the ingredients as well. Are all those teachers catnappers?"
I sighed. Till that moment, I had never realized the journey from genius to idiot could be made so quickly. "Probably not," I said.
Mom got out her notebook and a pen. "Tell me the items on the receipt again," she said, and wrote down each one. Then, probably just to make me feel better, she asked, "Have you found out anything else?"
I thought of telling her about Prime Suspect No. 2—the mysterious Mr. Lee. But I didn't even have a grocery receipt's worth of evidence against him. So I just shook my head no.
"I'm frustrated, too," Mom said. "In fact, I'm almost hoping something happens tonight, something to blow the lid off the case once and for all."
Of course I knew something _was_ going to happen that night. But I couldn't tell Mom about it. She would have put a stop to the whole thing, probably would even have called Yasmeen's parents and Sophie's parents, not to mention Mrs. Lee because it was her baby monitor we kind of borrowed when we were supposed to be taking it back to the store.
Yasmeen had said she was going to rest up this afternoon since we'd be awake practically all night. That sounded like a good idea to me, too. But before going upstairs, I took Luau's bed from its usual corner and put it out on the front porch so we'd be ready after trick-or-treating.
In my room, Luau was napping with his new catnip ball tucked under his ear like a pillow. I had the collar and the receiver in my backpack.
"Hey, lazy," I said. "Wake up."
Luau opened one eye and _mrrfed_ , which meant, _You know, Alex, to us cats the word "lazy" is a compliment_. Then he saw the collar in my hand and stretched to give it a sniff. Usually he hates collars and tries to shake them off, but when I buckled this one around his neck, he sat up tall like a person proud of new clothes. I took a good look at the microphone, which was dangling from the ring where you were supposed to put a license. It looked like some fancy electronic beeper to scare birds. At least I hoped it did.
"Are you ready to catch a catnapper?" I asked Luau.
He bumped his head against my head, which meant, _Undercover Kitty at your disposal, sir_.
"Good man," I told him. "Now, move over."
A little before six, I was in the bathroom making final adjustments to my ears when someone rang the doorbell. On my way to answer it, I stopped in my bedroom to get the receiver, which I put in my sweatpants pocket. Luau followed me down the stairs and slipped outside when I opened the front door.
It was Yasmeen.
"You're early," I said. "Where's Jeremiah?"
"Coming." She was looking at her feet. "But first I wanted to warn you about something."
I had a bad feeling. "What?"
"It isn't just me and Jeremiah who are going trick-or-treating with you."
"Oh, no," I said. "Not . . ."
Yasmeen nodded, still looking at her feet. "Sophie asked and—after all she did—what was I supposed to say?"
I took a deep breath and let it out. "Okay," I said. "We'll deal."
A second later, Jeremiah and Sophie appeared on the sidewalk, walking toward our house. Sophie was wearing her angel costume, which featured real feathers and a light-up halo. Jeremiah was going as a peanut butter sandwich.
"Isn't that what he went as last year?" I asked Yasmeen, but she said no, last year he had been a jar of peanut butter.
"Trick or treat!" they shouted when they got to the door.
Mom came up behind me in the front hall. "Don't you guys look great!" she said.
"My costume's from a catalog," Sophie answered. "I got to pick whatever one I liked. I liked this because it was the most expensive. Do you want to know how much it cost?"
"Not especially," Mom said. "Come on in, though. Would you prefer Tootsie Pops or pretzels?"
"Dumb question, Mom," I said.
Jeremiah and Yasmeen each took a Tootsie Pop and said thank you. Sophie took a handful of Tootsie Pops and forgot to say thank you.
Mom gave her a look. "You know, Sophie," she said, "there are going to be a lot more trick-or-treaters tonight."
"I know." Sophie nodded. "That's why I always go early, 'cause if you're late, you might get stuck with pretzels."
Whatever Mom wanted to say, I didn't want to hear. "Can we go now?" I asked quickly.
Mom nodded. "Trick-or-treating ends at eight, so you'll be back at a few minutes after, okay? I'll be working, but Dad will be home. And don't go beyond St. Bernard's on one side or the school on the other."
Just like I expected, things started out bad. Yasmeen, Jeremiah, and I _always_ turn right at our front gate and hit the Blancos first. But Sophie said the Blancos had seaweed lollipops this year and we should skip them altogether. We were on the sidewalk in front of my house arguing when Mom called to us: "One more thing! The most important thing!"
I expected "Watch for catnappers" or "Be careful crossing streets," but what she actually said was, "Be sure to save me anything with coconut!"
We skipped the Blancos. At the Dagostinos we got peanut butter cups, and I gave mine to Jeremiah.
Mrs. Lee had forgotten to buy candy so she gave us each a handful of loose change. It must have come from the bottom of her purse because it had lint on it.
Mr. Stone gave us packets of cocoa mix.
And Bub had sugarless gum because, he said, he didn't want us kids to end up fat like him.
By the time we turned the corner onto Groundhog Drive, Sophie had accidentally kicked over two jack-o'-lanterns, but she was remembering to say thank you. Meanwhile, Jeremiah had taken a break from worrying to say he liked the light-up halo because cars could see it.
In other words things were going surprisingly well, which meant, as Jeremiah could have told you, that something was about to go wrong.
We had just turned onto Ari's front walk when Yasmeen said, "Hey, did I see your feline going outside when I came up to your door?"
I said, "Yeah, I guess. He likes to see what the squirrels are up to."
"Was that a good idea? Letting him out early?" Yasmeen asked.
"The catnapper never shows up till at least midnight, I thought," Sophie said.
"Catnapper?" Jeremiah said.
"Never mind," Yasmeen said. "Did you bring the receiver with you?"
"In my pocket," I said.
"Why don't you turn it on? Maybe we can figure out what the feline is doing."
Sophie said the batteries ought to be okay, so I held the receiver to my ear and pressed the button. At first there was only static. But then, suddenly, out of the tiny speaker blared a voice so loud it startled me, and I bobbled the receiver but caught it again. "Poor, pretty kitty. Are you cold? Yes, you are. Come on, pretty kitty, and I'll take you home."
"Whoa, it really does work!" said Yasmeen. "That's amazing!"
"But who was that talking?" I said.
"Your mom," said Sophie. "Wasn't it?"
Yasmeen laughed. "Mrs. Parakeet doesn't talk like—"
More noise from the speaker interrupted her. It sounded all rumbly like a car engine, but that wasn't quite right. It was a sound I recognized, though . . . what was it?
"Ohhh!" I laughed. "Luau's purring!"
"Whoever it is must've picked him up," said Yasmeen.
"Would somebody please tell me—" Jeremiah started to say.
But Sophie interrupted. "Wait a sec. If that lady who just picked up Luau _isn't_ Alex's mom, then who is she?"
It hit me like a rock, and judging from Sophie's and Yasmeen's faces, it didn't feel so good to them either. Here we were grubbing for crunch bars door-to-door while, a couple of blocks away, my cat was in the process of being catnapped.
Talk about lousy cat owners. For this I'd make the Guinness Book.
## Chapter Twenty-nine
"Will somebody please tell me—" Jeremiah tried again.
"It's the catnapper!" Sophie snapped. "The catnapper's a _girl_ , and she's got Luau!"
Yasmeen is not real calm in a crisis. "Okay, okay," she said, but her voice had turned all breathless, the way voices do when you panic. "Everybody let's just sit down over here on the curb. Everybody, let's just try to keep calm. We don't know it's the catnapper. The catnapper _always_ strikes after midnight. This is probably just some harmless lady taking Luau home. . . ."
The monitor crackled again, which—mercifully—shut Yasmeen up, and through the speaker came the sound of a car starting.
"Oh, my gosh!" I moaned. "She could be taking him anywhere. Sophie, quick, what do you think the range of the baby monitor is now?"
Sophie shook her head. "Hard to tell. I tested it to about a half mile; by then it was getting faint."
"So now," Yasmeen's panicky voice had gone all quiet and pathetic, "we are just going to sit here on the curb in front of Ari's house and listen to Luau driving out of our lives forever like Halloween and those other cats, and it's all my fault, me with my brilliant plan, and—"
"No, it's not your fault," I said. "It's mine for letting him out. I never figured he'd be in danger so early—"
"Jeez, you two!" Sophie was disgusted. "Some kind of detectives you are. One little setback and you give up! Well, I'm not giving up. What about you, Jeremiah?"
Jeremiah just shook his head. "Uh-oh," he said.
Without really noticing, we had been hearing the hum of the car engine through the speaker. Now, abruptly, it stopped, and then we heard the voice again, singsongy, but farther away, so we couldn't make out words. I wished to heck the monitor didn't distort sound so much. There was something familiar about the voice—it seemed like one I had heard before.
Rustling, bumping, slamming . . . what did all the sounds mean? Going from the car to the house? And then a double _thump_ that maybe meant Luau was jumping to the ground. And finally something I recognized for sure, meowing. Lots of meowing. Was it one cat? Two cats? I couldn't tell.
"Is that Luau?" Yasmeen asked me. She was whispering.
"Uh-uh," I whispered. "Luau's meow is more drawn-out—you know."
Sophie said, "Plus the volume is wrong. I mean, Luau's mouth is only a couple of inches from the mike. Any noise he makes will be so loud it'll distort—probably be more like a shriek."
Sitting on the curb, I could feel the cold from the concrete rising right up my backbone. A group of kids I didn't know ran past on their way to Ari's. I envied them. They didn't have anything more important to worry about than whether Nestle's Crunch is better than Hershey's Krackel.
Without thinking, I turned to Sophie. "What do we do now?"
Sophie was decisive. "Sit and listen until we hear something to tell us where they are. Remember—they can't be too far away. If they were, we wouldn't be able to hear them."
"It would be good if you had taught Luau to talk," Jeremiah said. "Then he could just whisper a street number into the mike."
Yasmeen started to answer, but a blast of sound from the monitor interrupted her—and almost blew out all eight of our eardrums.
" _That_ was Luau's meow!" Sophie said.
"What was he saying?" Yasmeen asked.
"I can't understand him," I said. "I don't think it was an address."
"Say it again, Luau." Yasmeen spoke into the receiver like it worked both ways.
The chorus of meowing continued, with Luau apparently keeping quiet this time. I thought I could pick out at least two other cats. One had a small, high-pitched meow, while the other's was gruff and squeaky, like a rusty hinge.
A rusty hinge. Why did that seem familiar? Had somebody said something once about a cat . . .?
"Hey—wait!" I said. "I _know_ one of those cats! It's Halloween!"
Yasmeen frowned. "What do you mean?" she said. "You never even met Halloween."
"But remember," I said, "the time we went over to Kyle's? He said Halloween had a funny meow. And that's it—I know it is."
Yasmeen nodded slowly, then faster, as if one piece of good news was just what she needed to shake her into action. "I do remember," she said.
"We're going to get them back," I said.
Yasmeen nodded, and Sophie smiled. "Well, that's better. Give me over the receiver a sec, Alex. I want to take a look at—" Sophie grabbed for it, but at the same time I heard the voice again and jerked it back to listen. I don't know whose fault it was—well, yes, I do, it was Sophie's—but next thing the receiver dropped to the street, bounced once, and broke in two.
## Chapter Thirty
For a moment we all stood staring at the broken, silent receiver. Then Jeremiah said, "Nice move, foofoo-heads."
"We can fix it—maybe," Sophie said. "All I need is—"
I looked up at her, and it was like my body decided to spit out all the anger and worry and frustration that had built up since we started trying to find Halloween. _"You broke it!"_ I shouted. "We _never_ should've let you help!"
Yasmeen put her hand on my shoulder. "Alex," she said, "that wasn't fair."
I shook her hand off. "Leave me alone," I said. "Anyway,"—my face was wet with tears and snot. I sniffed some back and wiped the rest—"it's too late."
They all looked at me.
"Because of what the catnapper was saying," I went on, "right before it broke."
"What was she saying?" Yasmeen asked.
It was hard even to get the words out—they were that creepy. "She said,"—I took a breath, and my voice shook—" 'C'mere, kitty, this won't hurt. You're just going night-night now.' "
"Oh, no," Yasmeen said.
Sophie picked up the broken receiver and studied it. "Nothing inside's busted, I don't think. If I had some way to stick it together, I could probably get it working."
"There's no time!" I said. "Luau's in la-la land by now, and who knows what awful thing the catnapper is doing to him!"
While I was talking, Yasmeen was tugging the bottom of her fat bumblebee costume up over her waist, trying to reach something in the pocket of the jeans underneath. If this hadn't been one of the two or three most terrible moments of my life, I would have cracked up because she looked extremely ridiculous.
"What are you doing?" I said.
Her answer was to brandish the Band-Aids she always keeps with her. "Will these work?" she asked Sophie.
"Yeah—yeah, probably. Give 'em over." Sophie grabbed and a second later had ripped three Band-Aids open. Their wrappers fluttered to earth, and Jeremiah, who would never be a litterbug, retrieved them. Meanwhile, Sophie wrapped the Band-Aids around the receiver and pressed the switch. Instantly, there was a jumble of squawks, shrieks, and hisses.
What was going on?
"Maybe the receiver's still broken?" I said.
Sophie shook her head. "No, it's working okay. Whatever we're hearing"—she took an anxious breath—"that's what's happening to Luau."
The noises were awful. Bumping, crashing, glass breaking. The constant squawk and hiss. Every once in a while, like an exclamation point, the earsplitting shriek that was Luau's meow.
It took Jeremiah to point out something totally obvious: "That doesn't sound like sleeping."
Relief flooded over me. "Of course!" I said. "That's what's happening—Luau is running from her. We're hearing the chase scene!"
"Run, Luau, run!" Yasmeen said, and Sophie and Jeremiah joined in.
"Shhh," I said. "Listen. What's that? A new noise . . ." This one was a rhythmic squeak. It had started slow— _squeak_ , pause, _squeak_ , pause, _squeak_ , and then gotten faster— _squeak_ , _squeak_ , _squeak_ , and now _sque-sque-sque_. . . . This was so hard, trying to understand what was going on only from sounds. This sound was familiar, but what did it remind me of? I scrinched up my eyes and concentrated, looking for the matching memory in my brain. I couldn't find it.
But Jeremiah could. "That's Arnold," he said simply.
Sophie, Yasmeen, and I looked at him, not understanding. "Jeremiah?"
"You know, our class pet. The hamster. That's him. That's his wheel."
## Chapter Thirty-one
In your whole life you have never seen a cat, a bumblebee, an angel, and a peanut butter sandwich run as fast as we did. In fact, I was standing, out of breath, on Miss Deirdre's front porch before I had time to think about what I was supposed to do when I got there.
Sophie still had the receiver—now so close to the transmitter, the sound was really clear. Knowing at last who the catnapper was, I couldn't believe I hadn't recognized her singsongy voice sooner.
She was speaking now: "That's it, kitty cat. Stay right there. Now I've got you."
"Ring the bell! Ring the bell! Hurry!" I said.
Sophie punched the button with her fist. The result was weird: We heard the bell ring indoors through the receiver at the same time we heard it ring outdoors in real life. And then we heard Miss Deirdre say, " _Drat_ —what a time for trick-or-treaters!"
"Turn off the receiver!" Yasmeen whispered to Sophie, who quickly pressed the button.
And then the door opened.
I'm not sure what I expected. I guess I thought Miss Deirdre would look all of a sudden gigantic, or monstrous, or scary. But instead, standing in the doorway smiling brightly, she was plain old Miss Deirdre, the ditzy preschool teacher, Marjie Lee's nice friend who lived around the corner. It was totally hard to get that she was also the person who stole all those cats, who stole _my_ cat, the person who a few minutes ago was threatening to send Luau "night-night."
"My, aren't the four of you darling!" she said to us. "Isn't that my little Jeremiah? And your sister and her friend, too! And what's your name, dear?"
"Sophie Sikora," Sophie answered.
"Well, you're a dear little angel, and I bet you're hoping for some treats, aren't you?"
"Yes, ma'am," said Sophie. "Only . . . I hope you don't mind, but Jeremiah's got to use the bathroom really, _really_ bad. Can we come in?"
Jeremiah looked up at Sophie. "No, I don't have to—"
To shut him up, Sophie patted his head, only it was more like she thumped him. "I know it's embarrassing, Jeremiah, but she's your teacher, right? She knows about this kind of junk." Then she smiled up at Miss Deirdre, whose own smile was all of a sudden pretty fake looking.
"Uhhhh . . . ," Miss Deirdre said. "Well, of course, I _am_ a child development professional, but my house is at sixes and sevens right now, and—"
"That's okay. My mother"—using Jeremiah as a battering ram, Sophie pushed past Miss Deirdre and into the house, talking all the while—"is a terrible housekeeper! Do you have cats? We have a cat. And her fur . . ."
Yasmeen and I, full of admiration, couldn't do anything but follow.
Inside, Miss Deirdre's smile disappeared and her eyes darted corner to corner. From the stand next to the front door, she pulled a big black umbrella, then held it by her side.
Sophie pretended not to notice anything strange. "Bathroom's uh . . . _that way_ , huh?" she said, and shoved Jeremiah ahead of her.
_"Wait! No!"_ Miss Deirdre said.
Sophie paid no attention to her, just kept walking toward the back of the house. Most of the lights were off, but there was a room on the right that was all lit up. Through the doorway I could see it didn't have regular furniture in it but counters and stools, and on the counters were glittering glass containers of all sizes. I didn't see more, because Miss Deirdre dashed ahead of us and slammed the door shut.
"The bathroom," she said, shooing us back toward the front door, "is down _that_ hallway and—"
She never finished giving directions. From below us came a for-real caterwauling like you wouldn't believe. And leading the chorus was a familiar voice, my own Luau, the undercover kitty: _It's about time you got here, guys! We felines in the basement could use a bit of rescue!_
Miss Deirdre's rosy cheeks went pale, but she wasn't giving in. "Only my kitties." She tried to smile. "You'll just excuse me a minute, children, while I gather them up? You see, they're not well socialized. I wouldn't want any precious children to be scratched."
"That's okay," Sophie said. "We _love_ cats."
Desperate, Miss Deirdre ceased to be the so-sweet preschool teacher. Her eyes flared, and she held up the umbrella like a weapon. "You two stop _now_. No more nonsense."
The change in Miss Deirdre even intimidated Sophie. She stopped in her tracks and pulled Jeremiah close to protect him.
Would Miss Deirdre really have conked Sophie with the umbrella?
Would she have catnapped _us_?
Or would Sophie have displayed unexpected martial arts skills that saved the day?
I will never know because two things happened, one right after the other.
The first cracked all us kids up—and you can't simultaneously battle a catnapping preschool teacher and crack up.
Through a doorway at the far end of the house came six cats, single file. The last in line was Luau, looking like his big-shouldered, muscley self. The others—well, they were the hilarious part. Each one was wearing his own little rainbow sweater, for one thing. And underneath, from tip of nose to tip of tail, each was as bald and pink as a watermelon jelly bean.
The other thing that happened was Mom. Lights flashing and siren blaring, she pulled up outside—with Officer Krichels right behind her.
## Chapter Thirty-two
No surprise that Mom had a lot of questions for Miss Deirdre. But Miss Deirdre wouldn't talk without a lawyer. So Officer Krichels drove her downtown to the police station. Then Mom called Sophie's parents and the Popps.
"They're fine," she said into the phone. "I'll bring them home as soon as animal control comes for the cats."
While we waited for Mom, Yasmeen, Sophie, Jeremiah, and I sat on the sofa in Miss Deirdre's family room, bald kitties draped all over us, snuggling for warmth. At first, it had been more than creepy to touch these strange alien creatures in their fuzzy rainbow sweaters, but now we were getting used to it. If you focused on their eyes, you could almost remember they were cats.
"I have thought and thought, and I still can't figure out what Miss Deirdre was doing," Yasmeen said. "Why did she shave them?"
"Why did she steal them in the first place?" Sophie said.
"I'm just glad nobody's pushing me around anymore," Jeremiah said, glaring at Sophie.
"I'm sorry, kid, but it was an emergency," she said.
"I thought you were really brave," I told her.
Sophie looked at me, like she expected me to say more.
"And I am really sorry," I said. "I didn't mean it when I said we shouldn't have let you help. I was just so frustrated. I thought I was never going to get Luau back."
Luau twisted in my lap and looked up at me, which meant, _That would be enough to drive anyone over the edge_.
Sophie looked like she didn't think I was quite sorry enough. "Okay, I guess," she said. "But it was a terrible thing to say after all the work I did. And I had to spend my own money at the hardware store, too. I had to buy—" She started to detail the teensy-weensy parts she had purchased to transform the baby monitor, and all their prices. It was not very interesting, so I interrupted her.
"Did anybody else notice that weird room? Miss Deirdre sure closed the door fast."
Yasmeen moved the kitties on her lap aside, stood up, and grinned. "Who else wants to take a look?"
"I'm in," I said. "Mom's still on the phone. Don't touch anything, Sophie."
The room was toward the back of the house. I pulled my sleeve over my hand so I wouldn't get fingerprints on the knob. When I opened the door, the lights were still on.
"What _is_ all this stuff?" I asked.
Yasmeen was looking around, shaking her head. "I know what it looks like," she said. "A laboratory. My aunt works in one at the hospital." She pointed at a machine that looked like a mini-merry-go-round. "That's a centrifuge," she said. "And this one is an autoclave—for sterilizing test tubes and stuff."
"What I don't get is why a preschool teacher would have a room like this in her house," Sophie said. "It's like she was a mad scientist or something."
"She made really good play dough," Jeremiah said.
I walked farther inside. On one counter I found Ziplocs full of dried green stuff like the herbs Bub keeps for soup. On a shelf above these were three larger Ziplocs full of something that looked like white fur. Next to these was a cardboard box labeled GEL CAPSULES and a neat row of yellow pill bottles.
"These are like the ones my dad got from Mr. Blanco," I said.
Then I looked again. Were they _like_ the ones my dad got from Mr. Blanco? Or were they the _very same ones_ my dad got from Mr. Blanco?
A second later, I had my answer. "What does this say?" Jeremiah held up a white label printed with black letters.
Yasmeen read over his shoulder. "HOMESPUN REMEDIES—EYESIGHT."
And suddenly the whole thing made sense—more or less. We were standing in the lab where my dad's eyesight pills were manufactured. And what were they manufactured from? Cat fur, that's what! Miss Deirdre was stealing cats, shaving them, and using their fur as an ingredient in the pills. Cats have great eyesight, so the homespun idea would be that a dose of their fur would improve people's eyes, too.
It was a pretty lame idea, and I couldn't say I was real surprised that Dad's eyes were as bad as ever. I smiled when I thought of what he would say when he found out his miracle pills were full of cat fur.
In the cafeteria the other day, Yasmeen had promised Kyle that she wouldn't bring his cat, Halloween, back to him. So when we went to Kyle's front door later that night, I held Halloween and Yasmeen stood innocently beside us. Sophie and Jeremiah had already gone home. Mom was in her police car waiting for us at the curb.
"Are you sure this is a good idea?" Yasmeen asked. "I mean, Kyle seems like kind of a delicate kid. Seeing his cat in this condition could give him a heart attack."
Halloween meowed her rusty-hinge meow. Ugly as she was, she did seem to be a good cat. I wondered if she knew how she looked and if she cared.
"I don't know about Kyle," I said, "but after what she's been through, this poor kitty shouldn't have to go to the pound overnight. She deserves to be home."
The door opened and a lady—Kyle's mom, I guess—was on the other side. "It's awfully late for trick-or-treating," she said. Then she spotted Halloween and shuddered. _"Oh, dear,"_ she said. "A pet rat wearing a _sweater_!"
Kyle came up behind her then, and right behind him—what a surprise—was Cammie. Kyle looked at Halloween, looked at me, looked at Halloween, and then eagerly reached for her. "What happened to you, pal?" he said as he pulled the cat close.
Yasmeen couldn't believe it. "How did you even recognize her?" she asked.
"A man knows his own cat," Kyle said.
"Oh, my gracious, don't tell me that poor, hideous creature is—" his mom said.
"Halloween!" Cammie hollered. "Cool! Can we take the sweater off? I always wanted to know what cats look like naked!"
## Chapter Thirty-three
Mom didn't ask us a lot of questions Halloween night. She was concentrating on Miss Deirdre and the bald kitties. But when she and I were sitting at the breakfast table the next morning, it all came out—how we had borrowed the baby monitor and Sophie had turned it into a wire for Luau, how we had used Luau as catnapper bait.
Like I predicted, Mom was not totally thrilled with our methods.
"The baby monitor didn't belong to you in the first place," she said.
"I know that, Mom."
"And what right had you to put your cat at risk?" Mom nodded at Luau. He had been snoozing on his cushion under the counter, but when he heard _cat_ , he looked up. "A poor, dumb animal," Mom went on, "who can't speak for himself."
"A poor, dumb animal?" I said. "Mom, he volunteered!"
Mom sipped her coffee. "Right," she said.
Luau interrupted with a meow. He had padded over to sit by Mom's chair. Now he leaped lightly into her lap and started circling.
"He's telling you it was his idea," I said.
"Oh, is he?" Mom said. "I thought he was telling me there's a new box of cat treats in the cupboard."
"I don't see how he can say it any more clearly. He _liked_ being the undercover kitty. He was proud to serve his fellow felines."
Mom stroked Luau. "Have it your way," she said. "But about that baby monitor, how much have you got saved?"
Oh, this was just great. Here Yasmeen and I had solved the crime, caught the catnapper, returned Kyle's cat—and instead of getting a reward, it was going to cost me cold, hard cash.
I sighed. "If I use my birthday money," I said, "I've probably got enough to pay Mrs. Lee back."
"Good."
It seemed like a smart idea to change the subject. So I asked Mom what she had found out from Miss Deirdre. "Was I right?" I said. "Was she making those pills for Mr. Blanco?"
Mom nodded. "She's an animal lover big-time, and she got interested in these alternative kinds of cures after she read some book about homespun remedies. Her idea was that she could do well by doing good."
"What does that mean?"
"She thought she could rescue neglected cats and make money at the same time," Mom said.
"Wasn't she scared when she was stealing the cats?" I asked. "She almost got caught a couple of times."
Mom smiled. "That was her big inspiration. You know, I think maybe Miss Deirdre was such a successful teacher because she's a kid at heart. For example, she loved to play dress-up."
"You mean she had a costume for catnapping?"
"More like a disguise," Mom said. "She knew how the Harvey ghost was supposed to steal cats at Halloween. So she decided to confuse matters by transforming _herself_ into a ghost. And I don't mean she wore some cheesy old sheet either. She had gray face makeup and a veil, and her dress was more like a gauzy gray gown."
"Doesn't sound good for fast getaways," I said.
"She made it short so she wouldn't trip," Mom said. "She wore gray tights and running shoes with it. I guess she was pretty proud of the costume. Even though she was sitting in a police station, she wanted to tell me all about it."
I said I thought it was too bad Miss Deirdre had turned out to be a bad guy. "She's good at a lot of things. How did she make the pills, anyway?"
"Apparently, she collected the fur, bleached it white, ground it into a powder, and put the powder into the gel capsules."
"Did she really think they would work?" I asked. "I mean— _ick_ —swallowing cat fur? Poor Dad!"
Mom laughed. "Well, she added herbs, so it was tasty cat fur, at least, and clean, too. But your dad told me early this morning he's pretty embarrassed. In fact, he's at the eye doctor now."
"Is Mr. Blanco going to get in trouble?" I asked. "It seems like maybe there would be a law against selling pills made out of cat fur."
"The district attorney says it's not the kind of consumer fraud case he's used to," Mom said, "so he's still looking into it. There was no real harm done, so my bet is Mr. Blanco will get off with a slap on the wrist."
"The D.A. is going to slap Mr. Blanco's wrist?"
"Not literally, Alex," Mom said. "It just means the punishment won't be too severe. At the very least, Mr. Blanco will have to return Daddy's money and promise to be more careful about what he sells in the future." She took another sip of coffee and stretched. Luau had to hold tight to keep from falling out of her lap. "You know," she said, "I'm going downtown to question Miss Deirdre again later, and there are still a few things I don't understand."
"No prob, Mom. After talking to Kyle last night, I've got it all figured out."
"In that case," she said, "why did Kyle call you and Yasmeen off the case? Was Bub right? Was there a ransom note?"
I shook my head no. "It was the ghost," I said.
"You mean Miss Deirdre dressed up," Mom said.
"No," I said, "I mean the _real_ ghost. Kyle was at the Harvey house on Sunday buying catnip. He hoped maybe he could use it to lure Halloween home. Anyway, when he was there, the ghost of Gilmore Harvey started making noise—"
"The ghost of Gilmore Harvey started making noise?" Mom repeated.
"The ghost makes noise, Mom. Trust me. Anyway, Kyle became convinced it was the ghost who stole Halloween."
Mom said that was no wonder. "He's a morbid kid, anyway, and he saw Miss Deirdre in full regalia when she stole his cat."
I nodded. "Anyway, when that happened, he was afraid it was a warning that he should stop looking. He didn't want anything bad to happen to Yasmeen and me, so he called us off, too."
"He's a Gloomy Gus, all right," Mom said. "And I guess that also explains why he put the LOST flyer in the cemetery in the first place."
"You got it," I said. "He was hoping the ghost would see it and return the cat. Hey—but can I ask _you_ something? How did _you_ figure out about Miss Deirdre—that she was the catnapper? It was sure lucky you and Officer Krichels arrived when you did."
"Luck nothing." Mom smiled. "It was superior police work and my brilliant powers of deduction."
"That's what I meant to say. So how did you do it?"
"The grocery receipt," she said. "Mrs. Timmons isn't the only one who makes salt dough. I reread my notes from questioning Kyle. Cammie told me she had just made a unicorn out of play dough at school. Play dough, salt dough . . . It seemed like it was worth asking Miss Deirdre a few questions at least."
"And the next thing you knew, you were organizing a bald-cat rescue mission."
"Righty-o," Mom said, "and this morning I've got a date with a catnapper and her lawyer. Your dad, on the other hand, will be spending a pleasant day cleaning the basement. Care to join him?"
I had kind of thought since Yasmeen and I solved the crime and all, maybe we could celebrate. I mean, weren't we sort of heroes? But apparently, Mom didn't see it that way. She probably wondered why I sounded sarcastic when I answered her. "You know I'd love to, Mom, but I have some errands to run."
"What errands?" she asked.
I told her Yasmeen and I were going over to Mr. Blanco's store to return the ledger book and the old newspapers, but on the way we were stopping at Bub's. We wanted to fill him in on who stole Halloween and see if he had any ideas about the other mystery: What had really happened on that Halloween more than a century ago? Was it true Gilmore Harvey was murdered by his very own cat?
"I thought you and Yasmeen were done with mysteries for a while," Mom said.
"We were till last night," I said, "but solving one kind of gives you energy for another."
"I know what you mean. And anyway, Dad will be happy to save you one of the grungier jobs. We wouldn't want you to feel left out."
Instead of answering, I retrieved my coat from the front hall. Mom started upstairs to get dressed, but she stopped halfway. "Speaking of breakfast," she said, "did you get any coconut candy last night?"
"Sorry, Mom," I said. "We weren't out long enough. But you're a grown-up. You can buy all the coconut candy you want."
"That would be cheating," Mom said. "Oh—and one more thing."
I pushed open the front door. "Yeah?"
"You and Yasmeen did okay, Alex. For kids, I mean."
## Chapter Thirty-four
Bub was in the kitchen chopping celery for soup when Yasmeen and I arrived a little later. "I hear you two caught a catnapper last night," he said. "Not too high and mighty this morning to help a fella do dishes, I hope?"
I was thinking "congratulations" might be nice, or simply, "I bet that kid was happy to get his cat back." But praise is not Bub's style, just like it's not my mom's. So I took a dishrag and turned on the water while Yasmeen pulled a drying towel out of the drawer. While we worked, we told Bub the latest about Miss Deirdre and the pills. Then we told him we had brought over the evidence from the other mystery—the case of the Harvey house ghost. When the last pan was clean, Bub put the lid on the soup pot and lowered the heat. Then we went into the front room and sat down at the dining room table, with the old ledger book, the billet doux, and the newspapers laid out in front of us.
Bub said he didn't know how it worked in real life, that I'd have to ask my mom about that, but in books and movies when the detectives were stuck, they usually reviewed the story one more time.
"It's worth a try," I said.
Bub nodded. "Okay, then. It's 1879 and the richest guy in town, one Gilmore Harvey, finds out his beautiful young wife has a sweetheart, Floyd. So—in the proverbial jealous rage—he kills her."
"Right before Halloween," Yasmeen added.
Bub nodded. "Then on Halloween night itself, someone—or some _thing_ —kills Mr. Harvey."
"Right," I said. "And when the police find his body, they find Mrs. Harvey's cat at the same time—"
"Licking something red and sticky from his paws." Yasmeen made a face.
"So they blame the cat for killing Mr. Harvey," Bub said. "Now, here, come to think of it, I have a question. Did they blame the cat just because of the red stuff on his paws? Or was there some other reason?"
"It's just like you always say, Bub: means, motive, opportunity. The cat was there in the room, so that's opportunity. The cat wanted revenge for the death of his mistress, so that's motive. And as for means—well, apparently, the cat had some mighty big claws."
"According to Mr. Stone," Yasmeen said, "the body was so badly mauled it was like an attack by a 'jungle beast.' You couldn't even tell who the person was anymore."
Something about what Yasmeen just said struck me funny, like it was illogical. It was a few seconds before I realized what. "If you couldn't recognize the body . . . ," I said slowly, "how did anybody even know the body _was_ Mr. Harvey?"
"Well, they knew because . . ." Yasmeen said, and then she stopped. "I don't know how they knew."
"In those days they wouldn't have the chemical tests they do now," Bub said. "They probably would have identified him by his clothes."
Clothes, I thought. Hadn't somebody said something about clothes not so long ago?
When it came to me—and when at the same time a lot of other things made sense, too—I was so excited I jumped out of my chair: _"The burned clothes in the parlor fireplace!"_
Yasmeen was annoyingly calm. "What burned clothes in what parlor fireplace?"
"I never told you," I said, "because it didn't seem important compared with the ledger and the love letter and all."
"Tell me now," Yasmeen said, and I explained how Mr. Blanco had found the old fireplace behind a wall, how he had saved the burned-up contents in a Ziploc bag, how it looked like maybe somebody had burned clothes in there. "Listen," I said finally. "I don't think it was Mr. Harvey at all who died on Halloween night. I think it was stouthearted Floyd. And the cat wasn't the killer either. Mr. Harvey was."
Bub offered to drive Yasmeen and me to the Harvey house on his way to the grocery store, but he needed to make out his shopping list first, and we didn't want to wait. "Promise me a full report," he said. And out the door we ran.
At the Harvey house, Mrs. Blanco was working the cash register. She didn't say hello. She started apologizing, but I was so focused on getting hold of that Ziploc bag that I didn't understand. Then I remembered the cat pills. In fact, I was kind of mad at her and Mr. Blanco for selling them, but I didn't think they deserved worse than what my mom had called a slap on the wrist either.
"Mrs. Blanco?" I interrupted her mid-sorry. "Yasmeen and I actually came about something different. Would it be okay if we took a look at the plastic bag of black stuff from the fireplace? We can take it out in the yard, if that's okay."
The bag was still behind the counter, and Mrs. Blanco was very happy to hand it over. I think she would have handed over the money from the cash register, too, if I had asked her—anything to show how sorry she was. "You'll need these," she said, handing us each a pair of rubber gloves. "We keep them for scrubbing. Oh—and take some newspaper. You can pour the contents out onto it."
It was a cool day, so we sat in the sunshine on the grass where the pumpkins had been. I put on my gloves, and Yasmeen put on hers. We looked at each other, then I picked up the bag, unzipped it, and dumped it out.
"Yuk," Yasmeen said, and I sneezed. Black dust floated all around us, and the burned smell was terrible—even after more than a century. It didn't take us long to get over the _yuk_ , though, because what was inside the bag was interesting. Poking around among the black lumps, we found pieces of leather that might have come from someone's shoes, several blackened pieces of cloth, and a hard, round thing, heavier than the leather, that we couldn't identify at all.
"Rub it," Yasmeen said. "See if any of the black will come off."
I tore a page from the newspaper, wadded it up, spit on it, and started to rub. I expected Yasmeen to be grossed out by my spit, but she wasn't, which shows that her curiosity was pretty overwhelming.
Feeling a little like Aladdin with his lamp, I rubbed—and in a short while, I could see that the thing was made of metal, and in another short while that it was silver. In the sunshine it winked at me.
"It's a pocket watch!" Yasmeen said.
"The back of one, I think. The glass and the face must have burned up—or melted."
"Keep rubbing!" Yasmeen said, as if I needed to be told. "Wouldn't it be cool if—"
I nodded and finished her thought. "If there was writing on it or something. Like: 'To my darling Floyd, Yours always, Marianne.' "
"Does it say that?"
"No," I said, and Yasmeen's face fell. "But take a look at what it does say."
The surface of the watch was pretty clean, but the three letters etched into the metal had stayed grimy black, easy to read.
"Who's F.A.S.?" Yasmeen said.
"I don't know," I said. "I mean the _F_ could be Floyd, but his last name was Anderson— _A_ , not _S_."
Yasmeen took the watch from me, studied it for a minute, and started laughing. That was weird by itself, but then she said, "My parents have these fancy towels they put out for guests," which was totally weird.
"Are you feeling okay?" I asked.
She ignored my question. "See here how the _A_ in the middle is so much bigger than the other two letters?" she asked. "That's how it is on the towels, too—only on the towels it's the _P_ for Popp, the last name. Do you get it? It's how old-fashioned monograms work. So the _S_ on the right would've been Floyd's middle name. And the big _A_ is for Anderson." Yasmeen nodded. "This is his monogram all right, and that makes this his watch, too."
## Chapter Thirty-five
Back inside, Mrs. Blanco asked if we had learned anything.
"The watch belonged to Floyd after all," I said.
I could see from the way she nodded that Mrs. Blanco had no idea what I was talking about. "Why don't you two wash your hands and go tell that to Mr. Blanco?" she said. "He's working in the attic. I'm sure he'd be interested."
We had to go up two flights of stairs, the second one narrow, creaky, and dark. We could hear a roar above us. What in the world was that? Had the ghost learned a new trick?
A trapdoor led to the attic itself. Feeling apprehensive, I pushed up on it and climbed through with Yasmeen behind me. We saw right away that the roar was only a vacuum cleaner. Mr. Blanco was on his knees suctioning out the vents around the edge of the roof. Leaves were flying everywhere. I had to tap him on the shoulder before he noticed us.
Like his wife, he was totally apologetic: "Deirdre told me she was getting those pills from a pharmaceutical company. I had no idea she was cooking them up in her spare bedroom!"
"I hope that all works out okay," I said. "But what we really came to tell you is that Yasmeen and I figured out the mystery of the Harvey house ghost."
If I do say so myself, Yasmeen and I did a smooth job telling the story. It might not have been as elegant and spooky as Mr. Stone's version, but maybe if it were told over and over for one hundred years, it would be. We started the same way Bub had earlier. In 1879 a rich man named Gilmore Harvey found out his beautiful wife, Marianne, had a sweetheart, an employee of his named Floyd Anderson. In a jealous rage Mr. Harvey killed his wife a few nights before Halloween, then blamed her death on a burglar. The body was discovered by Floyd. So far, this was the same as Mr. Stone's version. But then the truth and the story diverged.
"Probably Mr. Harvey was afraid the police would catch him eventually, so he came up with a clever plan. He decided to fake his own death," I said. "First, he bought a suitcase. Then, somehow he got Floyd to come over on Halloween night. I guess that wouldn't have been hard—he was Floyd's boss. And when Floyd did, he killed him. Then . . ." I shuddered. This part was so grisly I didn't want to think about it. "Then he put his own clothes on Floyd and made it look like a wild animal had attacked him with its claws."
"A wild animal or a wild cat wanting revenge," Yasmeen said, "Marianne's pet cat."
" 'Black as midnight, with eyes as green and bright as emeralds,' " I quoted from Mr. Stone's story, " 'found by the hearth, cleaning something red and sticky from its paws.' "
Yasmeen sighed. "Poor cat."
"Yeah," I nodded. "Poor cat. Mr. Harvey _framed_ him! I guess he hated the cat like he hated Marianne. He must have even put blood on the cat's paws, so naturally, any cat would lick it off. And when the police found the body, that's what the cat was doing."
"It looked like the cat was the killer," Yasmeen said, "and I guess with its being Halloween and everything, such a supernatural kind of a story seemed more believable than it would've any other day."
"So the clothes I found in the fireplace?" Mr. Blanco said.
"They were Floyd's," Yasmeen said, and she showed what was left of the pocket watch to Mr. Blanco.
"Gilmore Harvey burned them, only he was in a hurry and didn't do that great a job," I said. "After that, he took his new portmanteau and he left town—never to return."
"There's one other thing, though." Yasmeen looked around the attic. It was dusty and dim, with crates and boxes everywhere. I bet there were a zillion clues to a zillion mysteries in that attic. But as far as what happened in the Harvey house—that one we had solved, hadn't we?
"There's no other thing," I said. "We have it all figured out. We are great detectives." Heck, if no one else was going to make a fuss over what a good job we'd done, I would do it myself.
"Oh, yes, there is," Yasmeen said. "The ghost. Dad said ghosts usually come back because there's unfinished business, some kind of injustice. But that doesn't fit, does it? Mr. Harvey got his revenge, _and_ he got away with it. For that matter, his ghost, if he's got one, wouldn't be hanging around here. It would be hanging around wherever Mr. Harvey himself ended up dying, wouldn't it?"
"So what are you saying?" I said. "It's _not_ Mr. Harvey's ghost that haunts the Harvey house?"
"That's what I'm saying," she said.
"Then whose ghost is—" I started to ask, but Mr. Blanco interrupted me.
"Uh, kids?" he said. "Actually, about the ghost. There isn't one."
"What?" Yasmeen and I said at the same time. "But we've seen it," I said. "Well, heard it anyway."
"Sorry to disappoint you," Mr. Blanco said, "but today's the first day I've come up here into the attic to work, and you wouldn't believe the funky stuff—electrical wiring like nobody ever saw. And with the vents half blocked and the wind blowing through. Well, I'm not surprised we've been hearing noises downstairs."
"You mean the yowling?" I said.
"Now that I've taken a look around up here, it's all easily explained," he said. "No need to bring in the spirit world. The wind blows a certain way through these half-clogged vents—it whistles, and this attic becomes an echo chamber. Plus the wires up here get blown around, too. And that disrupts the electrical connections."
"So the lights flash," I said, "and then they go out?"
"Exactly," he said. "Anyway, I'm thinking the ghost is gone for good. I'm getting these vents cleared today, and I've got an electrician coming on Monday."
Yasmeen made a face. She wasn't buying Mr. Blanco's explanation. It was funny that till lately she never believed in ghosts. Now all of a sudden she seemed kind of attached to them. "I have a different theory," she said.
Mr. Blanco smiled. "Okay, shoot."
"There's a ghost all right, but it's not Mr. Harvey," she said.
"Floyd then?" I said. "Or Marianne?"
Yasmeen shook her head. "What happened to them was tragic, but not like my dad described—not with unfinished business. Think about it. In the whole story, who is it that got the worst deal? Who is it that was executed for a crime he didn't commit?"
All of a sudden it was obvious. And I was going to say it, too— _the cat_. But I never got the chance, and neither did Yasmeen. This time it wasn't some puny draft but an Arctic blast that shot through the attic, kicking the leaves and dust into a whirlwind as turbulent as a minitornado, knocking the boxes around so that they wobbled, they clattered, they fell and broke open. Then there was a flash of blue-green light, a crack like cannon fire, and finally a howl like the most enormous cat in the universe had had its tail pinched by the most enormous rocking chair.
The whole thing lasted only a few seconds, but it was an overwhelming few seconds, and after the dust and leaves had settled, after the light had returned to its usual dimness, after the howling's echo had subsided—we three were left looking at each other, blinking, our pulses racing.
When I could breathe—and my heart had slowed to something like normal—I said, "I think you're right, Mr. Blanco. I think, as of now, the ghost is gone for good."
## Chapter Thirty-six
Walking home from the Harvey house, Yasmeen had a goofy idea. "Let's go visit Marianne," she said, "and stouthearted Floyd. Let's pay our respects."
"Not St. Bernard's," I whined, "not again."
"Oh, come on. After all, today's All Saints' Day! The first of November—the Day of the Dead in Latin American countries, the day you honor the ancestors."
"How do you _know_ this stuff?" I asked.
Yasmeen shrugged. "You pick up a lot when you read the encyclopedia. Hurry now—I'll race ya."
"I hate racing!" But I took off running, anyway.
A few minutes later, all out of breath, we were standing in front of Marianne Harvey's grumpy angel at St. Bernard's cemetery. Yasmeen nodded at the markers. "The inscriptions make sense now," she said. "Gilmore Harvey must have written them. He _was_ trying to tell us something."
I read the two stones again. Gilmore Harvey's: SO SHALL THE RIGHTEOUS ESCAPE THE GRAVE.
And Marianne's: IN DEATH, THE ETERNAL WIFE.
Yasmeen sighed. "It's all so sad, like Romeo and Juliet." Her voice sounded peculiar, so I looked over. Wouldn't you know there was a tear on her cheek? I shook my head, disgusted. Girls, I thought.
Then I tried to talk, and my voice came out sounding like a frog. "We know it's you down there, Floyd. Rest in peace, pal."
I called Dad from Yasmeen's. "Be home at five-thirty," he told me. "Oh—and bring Yasmeen, why don't you? I've . . . uh, I've got something I need to give her."
Hanging out at the Popps is not usually that fun, because they don't even have video games and all the snacks are healthy. But—honestly? I was trying to avoid cleaning the basement. Usually I wouldn't mind that much, but now I was feeling sort of cheated. I mean, we had hardly gotten to trick-or-treat, I was going to wind up paying for the baby monitor out of my own savings, and worst of all, nobody seemed to care that we had solved the great catnapping caper.
My house was really quiet when Yasmeen and I walked through the front door.
"Luau?" I called. "Dad? Mom?"
I looked at Yasmeen, and she shrugged.
"Hello?" I called again.
"Hello?" my dad's voice answered. "That you, kids? I'm in the basement. Bring down the mop and you can help me with this floor, okay?"
Great. He had saved me a grungy job, just like Mom promised.
The mop was in the closet at the top of the basement stairs. "You don't have to help," I told Yasmeen. "It's not your basement."
"I don't mind," she said.
"How come there aren't any lights on down here?" Dragging the mop behind me, I felt my way down the stairs. "Dad?" I called. "How come there aren't any—"
"Oh, you want _lights_!" Dad said, and with that they flashed on, illuminating our totally clean basement, which contained all our neighbors on Chickadee Court, plus Kyle and Cammie and Officer Krichels and a bunch more people I couldn't even take in at once—all under a big banner that read, CONGRATULATIONS, SUPERSLEUTHS!
It is lucky that even though I am not the world's toughest kid, my heart is pretty strong. Otherwise it would have stopped cold. That's how shocked I was.
With everyone watching and smiling and making thumbs-up signs, I looked over at my best friend who happens to be a girl. "Yasmeen?" I said.
She was grinning. "What do you know?" she said. "They do care after all."
It was a good party. There was a big cake with a picture of a black cat on it in frosting. My dad was wearing his new glasses, which my mom said made him look distinguished, but he said made him look old. I met Boopsie. She had drool all over her face and her eyes kept crossing. Mr. Lee was holding her, and seeing him reminded me that for a while I had thought he might be the catnapper—mostly because some instinct told me he was suspicious. It was instinct that had made Yasmeen mistrust Kyle, too, I remembered. It seemed like maybe sometimes instinct steered you pretty far wrong.
Anyway, Mr. Lee offered to let me hold Boopsie, but I said, no thanks, I couldn't hold a new baby and a mop at the same time. "Did you name her yet?" I asked.
Mr. Lee smiled. "Yes, we did, and I hope you're pleased."
"Me? Why would I be pleased? I mean, I'm sure you picked a good name and—"
_"Alex,"_ Mr. Lee said.
"Yes?" I said.
Mr. Lee laughed. "No, I mean Alex is what we've named her."
"How sweet!" Yasmeen said.
"Sweet," I echoed, and I tried to smile, but really I didn't think the neighborhood needed another Alex, especially a drooling girl Alex who couldn't even keep her eyes pointed the same direction.
"Don't worry." Bub had been standing behind me, and now he spoke so Mr. Lee couldn't hear. "I don't think anybody's gonna get the two of you confused."
Sophie was at the party, too, of course. Everybody was congratulating her on being so clever and brave, which she deserved I guess. But it didn't make her any less obnoxious. She had brought the baby monitor with her, closed up in its original box again. She said we could take it back to Best Buy-Buy now.
"You mean it works like it used to?" I asked. "That is so great. I thought I was going to have to pay for it."
"Oh, yeah, it works," Sophie said, "better than ever."
"What do you mean 'better than ever'?" Yasmeen asked.
Sophie reached into her jeans pocket and pulled out a handful of metal pieces. "I couldn't figure out where these ones were supposed to go, so I left them out," she said. "Probably they weren't that important. Only now besides cell phone conversations, the receiver picks up TV, too—soap operas and talk shows and junk. That makes it better than a plain old baby monitor, my mom says. My mom says the company ought to pay me for improving it. I wonder how much they'll pay me? My mom says she'll write a letter—"
Yasmeen and I left her and went to get punch. I don't think she cared that we were gone. She turned to poor Mrs. Blanco, who happened to be standing there, and kept right on talking.
"I'll help you pay for the baby monitor," Yasmeen said.
"Thanks," I said. Then I dipped her a Styrofoam cup of punch, and she dipped me one, and we held our cups up and touched them together. It was a toast except we didn't say anything. We didn't have to. I was going to take a drink when I felt something brush between my legs. Something that was furry and had a tail.
"Oh, sorry, Luau. Didn't mean to leave you out," I said.
"Does he even like punch?" Yasmeen asked me.
"He likes tuna juice."
"It's not the same thing," Yasmeen said.
"Either way, we should toast him, too," I said. "I guess now you'll admit that cats can be pretty good explainers."
"What do you mean?" Yasmeen asked.
"I mean it was Luau who told us where Halloween was," I said.
"Get out!" she said. "It was dumb feline luck!"
I tried being calm and reasonable. "Think about it, Yasmeen. Why were we hearing Arnold's wheel in the first place? Luau went over by the hamster cage on purpose. He knew somebody would recognize that noise."
Yasmeen gave me a look.
"You don't believe me? Let's ask him," I said. "Luau, did you lead us to Miss Deirdre on purpose?"
Luau sat down, swiped a paw over his right ear, and swished his tail.
I shrugged. "There you have it."
"Have _what_?" Yasmeen said.
"He says he did it on purpose. He says he knew Jeremiah was trick-or-treating with us, and he knew Jeremiah would recognize that noise."
Yasmeen rolled her eyes. "Ask him if it was on purpose that he picked such a totally wacko owner who thinks he can talk to cats!" she said.
"Okay," I said. "Luau, was it on purpose that you picked such a totally wacko owner who thinks he can talk to cats?"
But Luau was too impatient to continue the conversation. He just looked at me and blinked, which was Luau's way of saying _ha-ha_.
Q: C++ primitive datatypes: how to read an unsigned 30-bit value

I have an array of unsigned chars. Basically I have an array of bits.
I know that the first 16 bits correspond to an unsigned integer, and I retrieve its value using (u16)(*(buffer + 1) << 8 | *buffer).
Then comes a data type called u30 which is described as follows:
u30 - variable-length encoded 30-bit unsigned integer value. The variable encoding for u30 uses one to five bytes, depending on the magnitude of the value encoded. Each byte contributes its low seven bits to the value. If the high (8th) bit of a byte is set, then the next byte is also part of the value.
I don't understand this description: it says u30 (thirty!) and then it says 1 to 5 bytes? I also have another data type called s24 - a three-byte signed integer value.
How should one read (retrieve the values of) such non-typical data types? Any help will be appreciated.
Thanks a lot!
A: unsigned int val;
int i = 0;

val = buf[i] & 0x7F;                    /* low 7 bits of the first byte */
while (buf[i++] & 0x80)                 /* high bit set: another byte follows */
{
    val |= (buf[i] & 0x7F) << (i * 7);  /* i has already been incremented here */
}
A: Assuming I understand correctly (always a questionable matter), the following will read the values. It starts at position zero in this example (i would need to be offset by the actual position in the buffer):
unsigned int val;
unsigned char buf[300];
int i;
int shift;

i = 0;
buf[0] = 0x81;
buf[1] = 0x3;
val = 0;
shift = 0;
do
{
    val |= (0x7F & buf[i]) << shift;
    shift += 7;
    i++;
} while ((buf[i-1] & 0x80) && (i < 5));
printf("Val = %u\n", val);
A: The encoding format description is somewhat informal perhaps, but should be enough. The idea is that you read one byte (call it x), take its lowest 7 bits x & 0x7F, and at the same time check whether its highest bit is set. You'll need to write a small loop that merges the 7-bit groups into a uint variable until the current byte no longer has its highest bit set.
You will have to figure out whether to merge the new bits at the high end or the low end of the number (a = (a << 7) | (x & 0x7F)). For that you need one test sequence for which you know the correct output.
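To make the merging direction concrete, here is a minimal self-contained sketch of my reading of the format (assumptions: least-significant 7-bit group first, at most five bytes with only 2 data bits taken from the fifth; the `decode_s24` helper for the three-byte signed type likewise assumes the same little-endian layout as your u16 read):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of a u30 reader: each byte contributes its low 7 bits,
   least-significant group first; a set high bit means another
   byte follows.  At most 5 bytes are read, and only 2 data bits
   of the fifth byte are used, capping the result at 30 bits. */
static uint32_t decode_u30(const uint8_t *buf, size_t *pos)
{
    uint32_t val = 0;
    for (int i = 0; i < 5; i++) {
        uint8_t b = buf[(*pos)++];
        uint8_t mask = (i == 4) ? 0x03 : 0x7F;  /* cap at 30 bits */
        val |= (uint32_t)(b & mask) << (7 * i);
        if (!(b & 0x80))
            break;                              /* no continuation bit */
    }
    return val;
}

/* Sketch of an s24 reader (assumed little-endian): three bytes,
   sign-extended from bit 23 into a 32-bit int. */
static int32_t decode_s24(const uint8_t *p)
{
    int32_t v = p[0] | ((int32_t)p[1] << 8) | ((int32_t)p[2] << 16);
    if (v & 0x800000)
        v |= (int32_t)0xFF000000;               /* propagate the sign bit */
    return v;
}
```

For example, the bytes 0x81 0x03 decode to (0x01) | (0x03 << 7) = 385, and the s24 bytes FF FF FF decode to -1.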
A: To read the variable-length 30-bit value, you could do something like this:
const unsigned char HIGH_BIT = 0x80;
const unsigned char DATA_MASK = 0x7F;
const unsigned char LAST_MASK = 0x03;  // only need 2 bits of the last byte

unsigned char tmpValue = 0;   // tmp holder for the value of a byte
int value = 0;                // holder for the actual value
unsigned char* ptr = buffer;  // assume buffer is at the start of the 30 bit number

for(int i = 0; i < 5; i++)
{
    if(i == 4)
    {
        tmpValue = LAST_MASK & *ptr;
    }
    else
    {
        tmpValue = DATA_MASK & *ptr;
    }
    value |= tmpValue << (7 * i);
    if(!(HIGH_BIT & *ptr))
    {
        break;
    }
    if(i != 4)
    {
        ++ptr;
    }
}
buffer = ptr;  // advance the buffer afterwards
@Mark: your answer was posted while I was typing this, and would work except for the high byte. The value is only 30 bits, so only the first 2 bits of the high byte are used for the value, while you are using all 7 data bits of that byte.
\section{Introduction}
Spanish is one of the most widely spoken languages.
This fact has drawn the attention of the NLP community to the development of resources for that language.
As a result, some pre-trained Spanish language models \cite{etcheverry-wonsever-2016-spanish,che-etal-2018-towards,canete2020spanish,gut2021spanish} have been released in recent years driven by self-supervised approaches.
This proliferation of Spanish language models increases the need for annotated datasets to evaluate them. Consequently, Spanish datasets for a wide variety of independent tasks have been proposed \cite{a-garcia-cumbreras-etal-2006-bruja,cruz2008clasificacion,artetxe-etal-2020-translation,huertastato2021silt}.
However, little effort has been put into creating benchmarks that allow models to be evaluated systematically and fairly.
Recently, \newcite{canete2020spanish} presented the GLUES benchmark, a compilation of natural language understanding tasks in Spanish.
This benchmark aims to evaluate the performance of models by fine-tuning them to a target task \cite{wang-etal-2018-glue}.
In contrast, another methodology, known as probing tasks, aims to assess whether the resulting representations of the models are general-purpose \cite{conneau-etal-2018-cram}.
A probing task is designed in such a way as to isolate some linguistic phenomena, and a classifier is used on top of the representations to verify if the model has encoded the linguistic phenomena in question.
This type of representation evaluation for Spanish language models is generally carried out using a cross-lingual setting \cite{ravishankar-etal-2019-probing,10.1162/coli_a_00376}. However, these benchmarks only focus on assessing word representations or basic linguistic knowledge.
To fill this gap, we introduce two evaluation benchmarks for Spanish sentence representations. On the one hand, the \textbf{Spanish SentEval}, inspired by SentEval \cite{conneau-kiela-2018-senteval}, aims to evaluate representations of independent sentences.
Unlike previous work focused on probing tasks for basic linguistic properties \cite{ravishankar-etal-2019-probing,10.1162/coli_a_00376}, our benchmark comprises four sets of sentence classification tasks with realistic texts from different domains.
On the other hand, the \textbf{Spanish DiscoEval}, inspired by DiscoEval \cite{chen-etal-2019-evaluation}, focuses on the evaluation of discourse knowledge in sentence representations.
Evaluating discourse involves analyzing a sentence in the context in which it is located. For this reason, we include five sets of tasks based on sentence ordering, discourse relations, and discourse coherence.
The overall objective of both benchmarks is to avoid unnecessary re-implementations and the use of multiple evaluation schemes, thus allowing a comparable and fair assessment between models.
Furthermore, we compare publicly available Spanish sentence encoders on our Spanish SentEval and Spanish DiscoEval to demonstrate their strengths and weaknesses. The results and subsequent analysis expose the Spanish language models' current capabilities, showing that there is still room to improve them in future work.
Our code and datasets are available for future experimentation and replicability at \url{https://github.com/OpenCENIA/Spanish-Sentence-Evaluation}.
\section{Sentence Evaluation}
SentEval was introduced by \newcite{conneau-kiela-2018-senteval} as a tool for evaluating the quality of universal sentence representations.
It encompasses a standard pipeline evaluation that uses the representations generated by sentence encoders as features in various downstream tasks.
Specifically, SentEval includes stand-alone sentence and sentence pair tasks modeled by classification or regression. For comparison purposes, this framework consists of simple predefined neural architectures to avoid shifting the burden of modeling to their optimization process.
For our Spanish SentEval, we adopt the original pipeline and include datasets equivalent to those in English.
Below we describe each task and dataset included in our Spanish version. Additionally, basic statistics for each dataset are shown in \hyperref[appendix:a]{Appendix A}.
As proposed in \newcite{chen-etal-2019-evaluation}, we use $[\cdot, \cdot, \cdots]$ to denote concatenation of vectors, $\odot$ for element-wise multiplication, and $| \cdot |$ for element-wise absolute value.
\subsection{Sentence Classification (SC)}
Sentence classification is one of the most common NLP tasks, with applications ranging from document classification to sentiment analysis. Because of its inherent simplicity, the task offers a straightforward way to evaluate sentence-level representations.
For our version, we include a set of binary and multiclass datasets that cover various types of sentence classification tasks.
For sentiment analysis, we include MuchoChine (MC) \cite{cruz2008clasificacion}, a movie review dataset, and TASS 2020 \cite{VegaDCAMZCACCM20} tasks 1 and 2 consisting of polarity and emotion classification \cite{plaza-del-arco-etal-2020-emoevent}, respectively.
Figure~\ref{fig.sc} shows an example of a MC positive sentiment sentence.
Other types of text classification datasets that we include are the FilmAffinity corpus (FAC) \cite{7360018} for subjective/objective classification and the Spanish QC dataset (SQC) \cite{a-garcia-cumbreras-etal-2006-bruja} for question-type.
For all of these tasks, the input to the classifier is the representation of the sentence.
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
• Una historia policiaca que Scorsese la transforma en una memorable muestra del genero.
\end{minipage}
}
\vspace{-0.2cm}
\caption{SC example. The sentence belongs to the MC dataset and shows a \textit{positive} sentiment.}
\vspace{-0.5cm}
\label{fig.sc}
\end{center}
\end{figure}
\subsection{Sentence Pair Classification (SPC)}
In sentence pair classification, each example in a dataset has two sentences along with the appropriate target, and the aim is to model the textual interaction between them.
We consider entailment and paraphrasing tasks for our Spanish benchmark.
For the entailment task, we include two datasets.
The first is the recently released SICK-es \cite{huertastato2021silt} for entailment (SICK-es-E), which was constructed by translating and manually curating the English SICK dataset into Spanish.
Due to the lack of NLI tasks in Spanish, the second dataset was constructed using XNLI \cite{conneau-etal-2018-xnli} and esXNLI \cite{artetxe-etal-2020-translation}. Specifically, we use the XNLI test set as the training set, the XNLI development set as the development set, and the esXNLI set as the test set. We will refer to this as NLI-es (example shown in Figure~\ref{fig.spc}).
For the paraphrasing task, we use PAWS-X \cite{yang-etal-2019-paws}, a cross-lingual paraphrase identification dataset with high lexical overlap. We only use the Spanish text, naming it PAWS-es for ease of reference.
Like English SentEval, we encode the two sentences and use $[|x_{1} - x_{2}|, x_{1} \odot x_{2}]$ as input to the classifier.
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
\textit{\textbf{Premise}}: Y yo estaba bien, ¡y eso fue todo! \newline
\textit{\textbf{Hypothesis}}: Después de que dije que sí, terminó.
\end{minipage}
}
\vspace{-0.2cm}
\caption{Example of SPC from NLI-es. The two sentences show an \textit{entailment}.}
\vspace{-0.5cm}
\label{fig.spc}
\end{center}
\end{figure}
\subsection{Semantic Similarity (SS)}
This task consists of scoring a pair of sentences based on their degree of similarity, even if they are not exact matches.
There are two common approaches to evaluating this task and we include them in our Spanish SentEval.
The first requires training a model on top of the sentence embeddings. For this approach, we use the SICK-es \cite{huertastato2021silt} for relatedness (SICK-es-R).
The second assesses sentence pairs using an unsupervised approach. For this case, we include the Spanish track of STS tasks 2014 \cite{agirre-etal-2014-semeval}, 2015 \cite{agirre-etal-2015-semeval} and 2017 \cite{cer-etal-2017-semeval}.
All of these datasets consist of a pair of sentences labeled with a similarity score between 0 and 5; an example is shown in Figure~\ref{fig.ss}.
The objective is to evaluate whether the cosine similarity of two sentence representations correlates with a human-labeled similarity score through Pearson and Spearman correlations.
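Concretely, the unsupervised score for a sentence pair is the cosine similarity of the two embeddings (the standard definition, stated here for completeness):
\[
\operatorname{sim}(x_1, x_2) = \frac{x_1 \cdot x_2}{\lVert x_1 \rVert \, \lVert x_2 \rVert},
\]
which is then correlated with the human-labeled scores over the whole dataset.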
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
• Un perro está con un juguete. \newline
• Un perro tiene un juguete.
\end{minipage}
}
\vspace{-0.2cm}
\caption{Example of the STS task. The two sentences are similar with a \textit{score} of 4.8 out of 5.}
\vspace{-0.5cm}
\label{fig.ss}
\end{center}
\end{figure}
\subsection{Linguistic Probing Tasks (LPT)}
Some sentence classification tasks are complex and make it difficult to infer what kind of information is present in the representations.
This prompted the creation of X-Probe \cite{ravishankar-etal-2019-probing}, a multilingual benchmark of nine probing tasks to evaluate individual linguistic properties.
These tasks were designed to evaluate surface information (SentLen, WC), syntactic information (BiShift, TreeDepth), and semantic information (Tense, SubjNum, ObjNum, SOMO, CoordInv).
The first group covers superficial tasks that could be solved simply by looking at the sentence tokens. The second tests whether the embeddings are sensitive to the syntactic properties of the sentences. The third assesses the semantic understanding of the embeddings.
We include all the proposed probing tasks in Spanish from X-Probe. We refer to the original paper \cite{ravishankar-etal-2019-probing} for further information.
The input to the classifier is the representation of the sentence, and the output can be binary or multiclass.
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
• En enero participó en la infructuosa defensa de Forlí frente a César Borgia.
\end{minipage}
}
\vspace{-0.2cm}
\caption{Example of LPT. The task consists of Tense classification. In this case the sentence is in \textit{past tense}.}
\vspace{-0.5cm}
\label{fig.pt}
\end{center}
\end{figure}
\section{Discourse Evaluation}
DiscoEval originally proposed by \newcite{chen-etal-2019-evaluation} includes tasks to evaluate discourse-related knowledge in pretrained sentence representations.
DiscoEval adopts the SentEval pipeline with fixed standard hyperparameters to avoid discrepancies.
For our Spanish version of DiscoEval, we closely follow the original construction and evaluation methodology. Specifically, DiscoEval includes supervised sentence and sentence-group classification tasks modeled with logistic regression classifiers.
Our datasets were constructed from multiple domains encompassing a wide diversity of text sources.
Below we describe the tasks and dataset constructions.
Statistics for each dataset are shown in \hyperref[appendix:a]{Appendix A}.
\subsection{Sentence Position (SP)}
SP seeks to assess the ability of a model to order ideas in a paragraph.
This dataset is constructed by taking five consecutive sentences from a given corpus and randomly moving one of these five sentences to the first position.
The task consists of predicting the proper location of the first sentence. There are five classes: class 1 means that the first sentence is already in the correct position, while classes 2 to 5 indicate that the first sentence actually belongs at that position in the paragraph.
We create three Spanish versions of different domains for this task using:
the first five sentences of Wikipedia articles\footnote{We use the latest Spanish Wikipedia articles dump (\href{https://dumps.wikimedia.org/eswiki/latest/}{dumps.wikimedia.org/eswiki/latest/})}, Chilean university thesis abstracts\footnote{We collected abstracts from public repositories of the Pontificia Universidad Católica de Chile (\href{https://repositorio.uc.cl/}{repositorio.uc.cl}) and Universidad de Chile (\href{https://repositorio.uchile.cl/}{repositorio.uchile.cl}).}, and news articles in Spanish from the MLSUM dataset \cite{scialom-etal-2020-mlsum}.
Figure \ref{fig.sp} shows an example of this task for the thesis dataset. The first sentence should be in the second position among these sentences. To make correct predictions, the model needs to be aware of both typical orderings of events and how events are described in language. In the example shown, the model needs to understand that the objective of the thesis has to be described before the main findings of the study.
As proposed by \newcite{chen-etal-2019-evaluation} to train the classifier for this task, we first encode the five sentences into vector representations $x_i$. As input to the classifier, we concatenate $x_1$ and $x_1 - x_i$ for $2 \leq i \leq 5$: $[x_1, x_1 - x_2, x_1 -x_3 , x_1 - x_4 , x_1 - x_5]$. The output is between $1$ and $5$, which indicates the target position of the first sentence.
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
\small
1) Se encontró que la adición de nanopartículas de sílice aumenta la rigidez del material. \circled{2} \newline
2) El objetivo de este trabajo es estudiar el efecto de la incorporación de nanopartículas de sílice en la rigidez de material. \newline
3) Las Nanopartículas de sílice fueron sintetizadas utilizando el método sol-gel. \newline
4) Las Nanopartículas de menor tamaño tienen un mayor efecto sobre las propiedades del material. \newline
5) La rigidez del material aumentó hasta en un 80\% con la adición de 30\% de nanopartículas de silice.
\end{minipage}
}
\vspace{-0.2cm}
\caption{SP example of thesis domain. The number inside the circle shows the correct position of the first sentence. This sentence belongs in the 2nd place.}
\vspace{-0.5cm}
\label{fig.sp}
\end{center}
\end{figure}
\subsection{Binary Sentence Ordering (BSO)}
BSO is a binary classification task to determine if the order of two sentences is correct.
This task aims to assess the ability of sentence representations to capture local discourse coherence.
This data comes from the same three domains as the SP task. However, in this case, we only take the first two sentences of each text.
Figure \ref{fig.bso} provides an example from the Spanish Wikipedia. The order of the sentences is incorrect as the ``Neue Pinakothek'' museum should be mentioned before describing the art found inside.
In order to find the incorrect ordering in this example, the sentence representations need to be able to provide information if one sentence comes after or before the separation.
As in the English DiscoEval, to create the model inputs used to train the classifiers, we concatenate the embeddings generated by the sentence encoder for both sentences with their element-wise difference: $[x_1, x_2, x_1-x_2]$.
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
1) Se centra en el Arte europeo del siglo XIX. \newline
2) El Neue Pinakothek es un museo de arte situado en Múnich, Alemania.
\end{minipage}
}
\vspace{-0.2cm}
\caption{Example from the Wikipedia domain of the BSO task. The sequence is in the wrong order.}
\vspace{-0.5cm}
\label{fig.bso}
\end{center}
\end{figure}
\subsection{Discourse Coherence (DC)}
The Discourse Coherence (DC) task is a sentence disentanglement task proposed to determine if a sequence of six sentences forms a coherent paragraph. We create three versions of this task, two from open-domain dialogue datasets and the other from Wikipedia articles. Given six coherent contiguous sentences, we randomly replace one of them with a sentence from another sequence. Note that we choose the sentence to replace uniformly among positions 2-5. We generate balanced datasets with coherent (positive) and non-coherent (negative) instances, which results in a binary classification task.
For the open-domain dialogue dataset, we use the OpenSubtitles\footnote{\href{http://www.opensubtitles.org/}{http://www.opensubtitles.org/}} corpus \cite{lison-tiedemann-2016-opensubtitles2016} and the Gutenberg Dialogue dataset \cite{csaky-recski-2021-gutenberg}. OpenSubtitles is a large corpus, so we randomly retrieve some dialogues and create the splits. In the case of the Gutenberg Dialogue, we use the original splits provided by the author. For the Wikipedia domain, we take only one coherent text from each article. Then we randomly create the splits. In all cases, we discard paragraphs with fewer than six sentences and we select the negative sample from other dialogues or articles in the corresponding domain. Figure~\ref{fig.dc} shows a dialog to which the second sentence does not belong.
Like English DiscoEval, we encode the six sentences as vector representations and concatenate them ($[x_1, x_2, x_3, x_4, x_5, x_6]$) as input to the classifier.
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
1) ¡nicolás, ha llegado tu hora! \newline
2) \textbf{recuerdo que en la galería obscura me ofrecísteis vuestra casa.} \newline
3) no; prefiero fumarme una pipa. \newline
4) ¿dónde está tu pipa? \newline
5) en el chaleco. \newline
6) bien; aquí la tienes.
\end{minipage}
}
\vspace{-0.2cm}
\caption{Example of DC from the Gutenberg. The sentence in bold does not belong to the dialogue.}
\vspace{-0.5cm}
\label{fig.dc}
\end{center}
\end{figure}
\subsection{Sentence Section Prediction (SSP)}
SSP is a task to determine the section of a given sentence. This is based on the fact that the writing style can vary throughout a document, showing distinct patterns.
The English DiscoEval originally used abstracts and other sections of scientific papers to build the dataset. For our Spanish version, we use news articles instead. A news article usually has a headline, a single sentence that presents the main idea of the article; a subhead, a group of sentences that helps to encapsulate the entire piece or informs the reader about the topic; and a body that tells the entire story \cite{van1983discourse}.
We rely on the MLSUM dataset \cite{scialom-etal-2020-mlsum}, which consists of news articles that have the structure mentioned above.
We use subhead and body sentences because the former summarizes the entire article, while the latter uses broader wording. Figure~\ref{fig.ssp} shows an example of each style.
We randomly sample one sentence from the subhead as a positive instance and one sentence from the body as a negative sample.
The task is a binary classification that takes the representation of the sentence as input.
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
\textit{\textbf{Subhead}}: Los Reyes presiden este sábado el desfile de las Fuerzas Armadas \newline
\textit{\textbf{Body}}: Sevilla acoge este sábado el tradicional desfile de las Fuerzas Armadas, que estará presidido por los Reyes de España
\end{minipage}
}
\vspace{-0.2cm}
\caption{Examples of SSP. One sentence is from the subhead, while the other is from the body of a news.}
\vspace{-0.5cm}
\label{fig.ssp}
\end{center}
\end{figure}
\subsection{Discourse Relations (DR)}
A direct way to test discourse knowledge is to predict the relations between sentences, which is why the RST Discourse Treebank \cite{carlson-etal-2001-building} was used in previous work \cite{ferracane-etal-2019-evaluating,chen-etal-2019-evaluation}.
We consider the RST Spanish Treebank \cite{da-cunha-etal-2011-development} for our Spanish version, which consists of an annotated corpus with rhetorical relations.
According to RST \cite{MANN1988}, a text can be segmented into Elementary Discourse Units (EDUs) linked by means of nucleus-satellite (NS) or multi-nuclear (NN) rhetorical relations.
In the first, the satellite provides additional information about the nucleus, on which it depends (e.g., Fondo, Condición). In the second, several nuclear elements are connected at the same level, so no element depends on any other (e.g., Unión, Lista).
For instance, Figure~\ref{fig.rst} shows an example with an NS and an NN relation. A relation can take multiple units, so like \newcite{chen-etal-2019-evaluation}, we rely on right-branching trees for non-binary relations to binarize the tree structure, and we use the 29 coarse-grained relations defined by \newcite{da-cunha-etal-2011-development}. We adopt the originally proposed training and testing splits.
\begin{figure}[!h]
\begin{center}
\fbox
{
\begin{minipage}{0.45\textwidth}
\begin{center}
\begin{tikzpicture}[scale = 0.6, sibling distance=10em]
\node {NS-Elaboración}
child { child { node {1} } }
child { node {NN-Lista}
child { node {2} }
child { node {3} } };
\end{tikzpicture}
\end{center}
\vspace{-0.4cm}
\small
[Se presentan algunos comentarios al Allgemeine Naturgeschichte und Theorie des Himmels escrito por Emmanuel Kant y publicado en 1755,]$_1$ [obra donde el pensador de Koenisberg dio a conocer sus principales ideas cosmológicas.]$_2$ [En particular se reseña su explicación cualitativa de cómo a partir de un material primordial tenue y difuso, la fuerza gravitacional produjo los cuerpos que forman el Sistema Solar.]$_3$
\end{minipage}
}
\vspace{-0.2cm}
\caption{An RST Spanish Treebank tree with nucleus-satellite (NS) and multi-nuclear (NN) relations.}
\vspace{-0.5cm}
\label{fig.rst}
\end{center}
\end{figure}
\begin{table*}[]
\centering
\begin{tabular}{l|cccc|ccccc|}
\cline{2-10}
& \multicolumn{4}{c|}{\textbf{SentEval}} & \multicolumn{5}{c|}{\textbf{DiscoEval}} \\ \hline
\multicolumn{1}{|l|}{\textbf{Models}} & \multicolumn{1}{c|}{\textbf{SC}} & \multicolumn{1}{c|}{\textbf{SPC}} & \multicolumn{1}{c|}{\textbf{SS}} & \textbf{LPT} & \multicolumn{1}{c|}{\textbf{SP}} & \multicolumn{1}{c|}{\textbf{BSO}} & \multicolumn{1}{c|}{\textbf{DC}} & \multicolumn{1}{c|}{\textbf{SSP}} & \textbf{DR} \\ \hline
\multicolumn{1}{|l|}{Sent2Vec} & \multicolumn{1}{c|}{\underline{75.11}} & \multicolumn{1}{c|}{59.51} & \multicolumn{1}{c|}{\textbf{76.05}} & 66.89 & \multicolumn{1}{c|}{36.49} & \multicolumn{1}{c|}{54.92} & \multicolumn{1}{c|}{55.77} & \multicolumn{1}{c|}{70.88} & 36.69 \\
\multicolumn{1}{|l|}{ELMo} & \multicolumn{1}{c|}{71.50} & \multicolumn{1}{c|}{\textbf{61.62}} & \multicolumn{1}{c|}{62.06} & \underline{69.90} & \multicolumn{1}{c|}{37.13} & \multicolumn{1}{c|}{55.13} & \multicolumn{1}{c|}{58.68} & \multicolumn{1}{c|}{72.60} & 45.14 \\ \hline
\multicolumn{1}{|l|}{ELECTRA} & \multicolumn{1}{c|}{62.80} & \multicolumn{1}{c|}{51.40} & \multicolumn{1}{c|}{42.07} & 64.20
& \multicolumn{1}{c|}{38.56} & \multicolumn{1}{c|}{56.85} & \multicolumn{1}{c|}{55.18} & \multicolumn{1}{c|}{76.22} & 37.59 \\
\multicolumn{1}{|l|}{RoBERTa-BNE} & \multicolumn{1}{c|}{72.51} & \multicolumn{1}{c|}{54.57} & \multicolumn{1}{c|}{41.34} & 68.22 & \multicolumn{1}{c|}{\underline{41.82}} & \multicolumn{1}{c|}{57.02} & \multicolumn{1}{c|}{56.31} & \multicolumn{1}{c|}{76.83} & 39.21 \\
\multicolumn{1}{|l|}{BERTIN} & \multicolumn{1}{c|}{73.54} & \multicolumn{1}{c|}{55.47} & \multicolumn{1}{c|}{32.53} & 67.72 & \multicolumn{1}{c|}{41.66} & \multicolumn{1}{c|}{56.66} & \multicolumn{1}{c|}{55.54} & \multicolumn{1}{c|}{\textbf{78.42}} & 45.86 \\
\multicolumn{1}{|l|}{BETO} & \multicolumn{1}{c|}{\textbf{76.34}} & \multicolumn{1}{c|}{58.17} & \multicolumn{1}{c|}{55.37} & 69.38 & \multicolumn{1}{c|}{41.43} & \multicolumn{1}{c|}{\underline{57.53}} & \multicolumn{1}{c|}{\underline{60.89}} & \multicolumn{1}{c|}{75.33} & \underline{47.84} \\
\multicolumn{1}{|l|}{mBERT} & \multicolumn{1}{c|}{70.47} & \multicolumn{1}{c|}{\underline{60.05}} & \multicolumn{1}{c|}{\underline{67.77}} & \textbf{71.41} & \multicolumn{1}{c|}{\textbf{43.21}} & \multicolumn{1}{c|}{\textbf{57.97}} & \multicolumn{1}{c|}{\textbf{63.45}} & \multicolumn{1}{c|}{\underline{77.80}} & \textbf{51.08} \\ \hline
\end{tabular}
\vspace{-0.2cm}
\caption{Results for Spanish SentEval and Spanish DiscoEval by group. The best performing model is in bold, and the runner up method is underlined.
The reported numbers are accuracy, except SS, which is Pearson's correlation.
}
\vspace{-0.2cm}
\label{table:results}
\end{table*}
To evaluate the representations, we first encode all EDUs. We then use averaged EDU representations of subtrees as inputs. Formally, the input to the classifier is $[x_{\scriptsize\textit{left}}, x_{\scriptsize\textit{right}}, x_{\scriptsize\textit{left}} \odot x_{\scriptsize\textit{right}}, |x_{\scriptsize\textit{left}}- x_{\scriptsize\textit{right}}|]$ and the label is the relation of the node. $x_{\scriptsize\textit{left}}$ and $x_{\scriptsize\textit{right}}$ are vectors of the left and right subtrees respectively. For instance, the input for the label ``NS-Elaboración'' from Figure~\ref{fig.rst} is $x_{\scriptsize\textit{left}}=x_1$ and $x_{\scriptsize\textit{right}}=\frac{x_2+x_3}{2}$.
\section{Experiments}
\subsection{Setup}
\paragraph{Parameters} We adopt and adapt the original implementation of SentEval and DiscoEval for our Spanish version, so the same hyperparameters can be set.
We use the PyTorch version of the classifiers, Adam optimizer with a batch size of 64, and 4 training epochs for all of the experiments.
\paragraph{Datasets}
We tokenize each dataset with the spaCy tokenizer \cite{spacy2} and save all files using a common file format with UTF-8 encoding.
\subsection{Models}
To the best of our knowledge, we benchmark all of the main Spanish sentence encoders available to date: Sent2Vec\footnote{\href{https://github.com/BotCenter/spanish-sent2vec}{https://github.com/BotCenter/spanish-sent2vec}} \cite{pagliardini-etal-2018-unsupervised}, a bilinear model, and ELMo \cite{che-etal-2018-towards}, which is based on bidirectional RNNs. Among more recent models based on the Transformer \cite{NIPS2017_3f5ee243}, we evaluate BETO \cite{canete2020spanish}, the Spanish version of BERT \cite{devlin-etal-2019-bert}, as well as BERTIN\footnote{\href{https://huggingface.co/bertin-project}{https://huggingface.co/bertin-project}} and RoBERTa-BNE \cite{gut2021spanish}, two Spanish versions of the RoBERTa model \cite{liu2020roberta}. Finally, we include ELECTRA\footnote{\href{https://chriskhanhtran.github.io/posts/electra-spanish/}{https://chriskhanhtran.github.io/posts/electra-spanish/}} \cite{Clark2020ELECTRA:}, which was trained on a small amount of data as part of a tutorial.
We also include the multilingual BERT (mBERT) for further comparison.
We use the base version for all models except ELECTRA, which is a small version.
For evaluating Sent2Vec and ELMo, we use their final representation.
For the Transformer-based models, as proposed by \newcite{chen-etal-2019-evaluation}, we use the average of each layer's special tokens \texttt{[CLS]} as the sentence representation.
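The layer-averaged \texttt{[CLS]} pooling described above can be sketched as follows (pure Python for illustration; in practice the per-layer hidden states would come from a Transformer run with hidden-state outputs enabled):

```python
def cls_sentence_embedding(hidden_states):
    """Average the [CLS] (position-0) vector over all layers.

    hidden_states: list of per-layer token matrices, each of shape
    [seq_len][dim] (layers 1..12 for a base-size model).
    """
    cls_vectors = [layer[0] for layer in hidden_states]  # [CLS] is token 0
    n = len(cls_vectors)
    return [sum(vals) / n for vals in zip(*cls_vectors)]

# Two toy layers, each with two tokens of dimension 2
layers = [
    [[1.0, 2.0], [9.0, 9.0]],  # layer 1: [CLS] vector, then one word token
    [[3.0, 4.0], [9.0, 9.0]],  # layer 2
]
sentence_vec = cls_sentence_embedding(layers)  # -> [2.0, 3.0]
```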
\begin{figure*}[!ht]
\centering
\begin{minipage}{0.45\textwidth}
\includegraphics[width=8cm]{images/SC.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\includegraphics[width=8cm]{images/SPC.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\includegraphics[width=8cm]{images/SS.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\includegraphics[width=8cm]{images/SP.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\includegraphics[width=8cm]{images/BSO.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\includegraphics[width=8cm]{images/DC.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\includegraphics[width=8cm]{images/SSP.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\includegraphics[width=8cm]{images/DR.pdf}
\end{minipage}
\hfill
\begin{minipage}{0.60\textwidth}
\centering
\includegraphics[width=11cm]{images/legend.png}
\end{minipage}
\vspace{-0.2cm}
\caption{Performance of DiscoEval and SentEval tasks using the \texttt{[CLS]} representation of layers 1 through 12.}
\vspace{-0.2cm}
\label{layerExperiments}
\end{figure*}
\subsection{Results}
Table \ref{table:results} shows the results of the experiments for all of the Spanish SentEval and Spanish DiscoEval tasks averaged for all of the datasets used for each of the tasks (for detailed results, see \hyperref[appendix:b]{Appendix B}).
It can be seen that, in general, the evaluation of all the language models' latent representations in both the Spanish SentEval and Spanish DiscoEval tasks show a similar behavior compared to their English language representations counterparts \cite{conneau-kiela-2018-senteval,chen-etal-2019-evaluation}.
Regarding Spanish SentEval, for the SC tasks, the latent representation of the BETO model surpasses Sent2Vec, the second best, by a 1.63 percentage difference (pd). This can be explained by BETO having been trained on general-domain sentences, yielding a representation capable of generalizing to any domain in the classification task. In general, SPC shows worse results than SC in terms of accuracy for all language models, where the best achieves an accuracy of 61.62\%. In the SPC task, the representation learned by ELMo surpasses the second-best representation (mBERT) by 2.6 pd. The SS task shows that the Sent2Vec representation outperforms the other representations by more than 12 pd in terms of Pearson's correlation, indicating that this representation can distinguish whether a pair of sentences is semantically similar better than the representation learned by mBERT, which was not initially trained for this particular task. Finally, for the LPT task, the representation learned by mBERT outperforms the BETO representation by 2.2 pd, showing that training a multilingual language model such as mBERT can yield richer sentence representations for a task that is more challenging than standard text classification.
Concerning the Spanish DiscoEval set of tasks, the SP task arises as one of the most challenging tasks, where mBERT, which is the best performing learned representation, reaches only a 43.21\% accuracy, improving by 3.33 pd over RoBERTa-BNE, the runner up model. We observe a similar pattern in the BSO task since the representation learned by mBERT outperforms the second-best model representation by a low margin of 0.7 pd, showing that training a Transformer-based model in multiple languages can obtain a richer representation for the task of ordering two sentences in a paragraph. A similar behavior is observed for the DC task, where the mBERT representation outperforms the runner-up method by more than 4 pd. The SSP task results indicate that the BERTIN learned representation surpasses mBERT by a small margin of 0.7 pd. Finally, for the RST task, mBERT representation shows the best performance in terms of accuracy compared to other language models, outperforming the BETO representation by 6.7 pd.
In general, it is observed that mBERT learns the best representation in most of the DiscoEval tasks. The exception is SSP, where BERTIN is best, but its difference in accuracy from mBERT is small.
These results provide evidence that mBERT learns better representations when trained with multiple languages, allowing it to outperform other models on most probing tasks.
\subsection{Further Analysis}
In this section, we perform a per-layer performance analysis of the representations learned by transformer-based models.
These experiments allow verifying which layers are more transferable for downstream tasks. Figure~\ref{layerExperiments} shows the results for all SentEval and DiscoEval groups.
It can be seen that the best performance fluctuates across the last layers, primarily between layers 10 and 12. Moreover, all representations perform well on the early layers for the SSP task, with accuracy levels near 0.7, indicating that it is relatively straightforward. In contrast, none of the representations yields competitive performance on the SP task, reaching a maximum accuracy only slightly above 0.4, which suggests that they are ineffective at locating the position of a sentence in a discourse. Something similar occurs with the DR task, where all representations achieve accuracy close to 0.5 in the last layers, showing that discovering relations between elements of discourse is non-trivial for the learned latent representations.
Regarding the impact of the training data, we see that representations generalize better when trained on multiple languages than when using only Spanish text. Evidence of this is the performance of the representations learned by mBERT, which in most cases outperform those of the other models on several probing tasks. However, the BETO representation beats those learned by mBERT in the last layers for SC, SS, and SSP, suggesting that for these tasks, training only on Spanish texts is more important for obtaining an informative latent representation.
Another factor that positions mBERT and BETO as the two best-performing representations is that both were trained on more data, which explains their advantage over ELECTRA and BERTIN, which were trained on less data. Interestingly, the representations learned by RoBERTa-BNE do not achieve the expected performance compared to the other representations, particularly in the early layers, and in the last layers on tasks such as DC, DR, SSP, SC, and SPC.
\section{Related Work}
\subsection{Language Model Evaluations}
Prior work on evaluating language models follows at least four approaches.
The first focuses on evaluating the adaptability of a language model to a new domain through fine-tuning. GLUE \cite{wang-etal-2018-glue} and SuperGLUE \cite{NEURIPS2019_4496bf24} are examples of this approach that include several downstream tasks.
The second involves evaluating the generalization of text representations by incorporating a classifier for downstream tasks on top of them. Following this approach, SentEval \cite{conneau-kiela-2018-senteval} and DiscoEval \cite{chen-etal-2019-evaluation} include tasks at the sentence and discourse level.
The third focuses on stress tests \cite{naik-etal-2018-stress,aspillaga-etal-2020-stress,araujo-etal-2021-stress} that seek to assess the ability of language models to adapt to cases designed to confuse them.
The fourth evaluates models from a linguistic perspective \cite{warstadt-etal-2019-investigating,10.1162/tacl_a_00298,puccetti-etal-2021-bert} in order to elucidate their actual linguistic capacities or knowledge.
The aforementioned benchmarks are scarce for languages other than English. This, in fact, is the case for Spanish. For instance, regarding the adaptability evaluation for Spanish models, \newcite{canete2020spanish} recently proposed GLUES, a Spanish version of GLUE.
In the case of representation evaluation, most of the work is in a cross-linguistic setting for word \cite{10.1162/coli_a_00376}, sentence \cite{ravishankar-etal-2019-probing} and discourse \cite{koto-etal-2021-discourse} evaluations. For this reason and following the motivation of works such as RuSentEval \cite{mikhailov-etal-2021-rusenteval}, we provide SentEval and DiscoEval in Spanish, which consists of tasks originally created with texts in Spanish and aimed at evaluating models of that language.
\subsection{Sentence Encoders}
Pre-trained self-supervised language models have become the de facto sentence encoders.
Early work in deep learning introduced ELMo \cite{peters-etal-2018-deep}.
With this model, sentence representations are produced by mean-pooling all contextualized word representations.
After the Transformer model \cite{NIPS2017_3f5ee243}, several models were proposed \cite{devlin-etal-2019-bert,liu2020roberta,Clark2020ELECTRA:}. These BERT-type models produce sentence representations using a special token \texttt{[CLS]}.
More recently, some models \cite{lee-etal-2020-slm,iter-etal-2020-pretraining,araujo-etal-2021-augmenting} have been proposed to improve discourse-level representations by incorporating additional components or mechanisms into the vanilla BERT.
Furthermore, due to the success of deep learning sentence encoders, some Spanish models were released.
\newcite{che-etal-2018-towards} released ELMo for many languages, including Spanish. BETO \cite{canete2020spanish}, the Spanish version of BERT \cite{devlin-etal-2019-bert}, was trained on a large Spanish corpus. RoBERTa-BNE \cite{gut2021spanish}, the Spanish version of the RoBERTa model \cite{liu2020roberta}, was trained on a corpus of crawled \textit{.es} domains.
\section{Conclusion}
We introduce Spanish SentEval and Spanish DiscoEval, two test suites for evaluating stand-alone and discourse-aware sentence representations.
Like the English versions, our work aims to evaluate the representations of current and future Spanish language models.
Our benchmarks consist of a single pipeline that attempts a fair and less cumbersome assessment across multiple tasks with text from different domains.
As future work, more tasks could be included in these benchmarks. Likewise, other types of evaluations such as stress or linguistic tests could be carried out to evaluate the actual capacities of the language models taking into account the peculiarities of the Spanish language.
\section{Acknowledgements}
We want to thank the authors of the datasets for providing us access to their data.
This work was supported by the National Center for Artificial Intelligence CENIA FB210017, Basal ANID. Felipe Bravo-Marquez was supported by ANID FONDECYT grant 11200290, U-Inicia VID Project UI-004/20, and the ANID Millennium Science Initiative Program, Code ICN17\_002.
\section{Bibliographical References}\label{reference}
\bibliographystyle{lrec2022-bib}
Q: Create a link with independent text Is it possible to create an intra-document link with sphinx, such that the displayed text is independent of the link and destination?
Currently, I make intra-document links like so:
.. _Label_For_Section:
===============
Name Of Section
===============
The link :ref:`Label_For_Section` is rendered as "Name Of Section".
The link Label_For_Section_ is rendered as "Label_For_Section".
What I want is a way to have a link where the destination text, the link label, and the displayed link text can all be different strings. E.g. a link to a section called "A", with a label ".. _B:", which is rendered as "C".
Note
I noticed that other kinds of links (eg external hyperlinks) are similarly constrained, and I figure the solutions may look similar, however I am looking specifically for a solution for intra-document links.
A: See Cross-referencing arbitrary locations, specifically the ref role.
:ref:`Link title <label-name>`
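Putting this together with the example from the question, the section title, the label, and the displayed text can all differ:

```rst
.. _Label_For_Section:

===============
Name Of Section
===============

Elsewhere in the document:

:ref:`Custom Link Text <Label_For_Section>`
```

This renders as "Custom Link Text" while linking to the "Name Of Section" section via the label.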
Roma Dhamanaskar
The Pandemic Frontlines: Centring the Workers Who Stand Between Us & COVID-19
Content warning: workplace violence, abuse of long-term care workers
Illustrations by Alex McPhail, BA (Hons)
Hi, my name is Roma (she/her) and I am the author of this post! I am a cisgender, heterosexual, non-disabled, South Asian woman. I currently reside on the traditional territories of the Haudenosaunee, Anishinabewaki, Neutral, Mississaugas of the First Credit Nation, and Mississauga Peoples. This unceded land of Treaty 3 3/4 and Treaty 8 is currently known as Burlington, Ontario. The post you will now read is going to examine the impact of COVID-19 on frontline workers, the majority of whom are women. While I do currently volunteer at a hospice which is considered an essential service, I am allowed to choose my shifts, I do not depend on this work for income, and I would not be required to work if there was a COVID-19 outbreak in the hospice. As such, I do not consider myself a frontline worker during the pandemic. This post includes research on racialized, Indigenous, immigrant, and low-income workers. My intention is not to speak on behalf of these groups, but to bring more attention to the scientifically-backed research and experiences of communities that are historically and currently under-researched. As a researcher and writer who is passionate about health justice, I am committed to responsible allyship. I encourage feedback on this post should I misrepresent any of the communities I attempt to amplify. I hope you find this post helpful and engaging! Happy reading!
In this post we will discuss workers on the frontlines of the COVID-19 pandemic. The topics addressed include:
Overrepresentation of women in essential fields
Burnout and exhaustion of essential workers
Abuse of LTC workers
Risks of contracting COVID-19
Globally, across 104 countries, women comprise 70% of healthcare and social care workers, and are earning 11% less than men (1*). This wage gap is even greater for racialized groups. In Canada in 2016, white women earned 67 cents per every dollar earned by white men while Racialized women earned just 59 cents per every dollar earned by white men (2*). This is despite the fact that Racialized women have higher participation rates in the paid workforce than white women. Women are not only negatively impacted by increases in unemployment and unpaid care work during COVID-19, but they are also overrepresented in fields that are considered "essential" and are on the frontlines of the pandemic. Despite providing the majority of this essential work during COVID-19, women's voices are sorely lacking in policy response due to their underrepresentation in leadership and decision-making positions (3*, 4*).
In Canada, over 56% of all women workers are in jobs within the 5C's, that is, caring, clerical, catering, cashiering, and cleaning (5*). Women are the majority of workers in long-term care (LTC) homes, which have been hit hardest by the pandemic. For example, 90% of Personal Support Workers (PSWs), who do the majority of care work in LTC, are women (6*, 7*). The majority of workers who disinfect our hospitals and offices, places we are avoiding as we work-from-home and social distance, are women (7*). These jobs are insufficiently paid and experience higher risks of COVID-19 exposure (8-9). These jobs are also absolutely essential during this time for the health and safety of our communities.
Burnout and exhaustion are of particular concern to essential workers during this time. A survey of 1,381 care aides across LTC facilities in Canada reported that a majority of these workers are women (10*). Half of all care aides in this survey were born outside Canada and did not speak English as a first language. Burnout is a significant issue in this type of work due to long hours, insufficient wages, lack of on-the-job training, having little time to perform tasks, and dealing with dementia-related responsive behaviours from residents. Burnout is also a threat to the quality of care and the health of staff, especially with the added mental strain of COVID-19 and increased caregiving duties at home due to lockdown policies (11*).
A high proportion of PSWs in Canada are women who are immigrants and women who identify with a racialized group. A survey of 364 PSWs across Ontario found that 96% of respondents were women, of which the majority identified as Black or Filipino and 5% identified as Indigenous (12*). While PSWs experience burnout and exhaustion for similar reasons as healthcare aides, almost 29% of PSWs in this survey reported working more than one job. Among respondents, the top reasons given for considering leaving this type of work are low wages and dissatisfaction with working conditions: 65% indicate that their pay is too low, 45% say benefits are poor, and 40% cite job security as a problem.
Abuse of PSWs and nurses is also a pervasive issue that is too often swept under the rug and seen as "part of the job." A 2019 CUPE Ontario poll of 1,223 LTC staff found that 62% of PSWs and nurses experience at least one incidence of physical violence per week (13-14). 69% of racialized and Indigenous staff report regular harassment related to their identity. Many LTC workers are too afraid to report this violence due to fear of being blamed for the incident, being written-up, and even losing their job (15*). "What did you do to trigger the resident?" is a common question that is asked if they do report. A lot of the violence gets rationalized because residents are sick and oftentimes have dementia, but this does not mean that such experiences are not traumatizing for workers. "I'm not the same nurse I used to be. There are such lasting effects. It's not just over when the bruises heal" (15*).
Violence from residents is often caused by resident fear, confusion, and agitation (15*). Such emotions may be exacerbated by the rampant spread of COVID-19 in LTC and the constantly changing policies as a result. Structural causes for violence against workers in LTC include systemic underfunding, lack of training, lack of recognition of the seriousness of the issue, and lack of public awareness about the issue (15*). However, the more insidious fact is that violence against LTC workers is just one symptom of an institution that undervalues its workers. Our frontline LTC workers may be dealing with more violence in the workplace, while also trying to care for residents and prevent the spread of infection in these high-spread settings. We need to recognize the essential service that LTC homes and workers provide and adequately support the people who live and work there.
Workers like care aides and PSWs are also not able to physically distance at the workplace and work in settings where COVID-19 outbreaks are more common (16-17). This means that a large proportion of Canadian women, especially those working in LTC, are taking significant personal risks of contracting COVID-19. The fact that these jobs are relatively lower paid within healthcare, highly strenuous, and disproportionately filled by racialized workers means that while these women may be "essential workers," we are treating them as disposable.
In a New York Times article highlighting the plight of cleaners and janitors in the United States (U.S.), several women describe the risk they are taking and fear they experience doing this essential work under dismal working conditions (9). One janitor, Ms. Deborah Santamaria, says that gloves were her only personal protective equipment to clean a building that was later found to have a confirmed case of COVID-19.
'"I felt as if I didn't matter" she says' (9).
Another cleaner, Ms. Elizabeth Carrion, says she was asked to reclean a floor with a new disinfectant only to be later told that people on that floor may have been exposed to the virus.
The article suggests a lack of transparency in communicating risk to low-income essential workers, a lack of protective measures, and a lack of training to follow new COVID-19 procedures for employee health and safety. Notably, both of these essential workers interviewed for the article are immigrants to the U.S.. Women who identify as immigrants and belong to racialized groups are more likely to work in low-income positions which are associated with poor working conditions. A quote by Ms. Carrion sums up the sentiment that essential workers, especially those who are low-income, structurally vulnerable, and women, feel as though they take on disproportionate risks of COVID-19 infection:
'"We should all be valued the same," she said. "So who guarantees my safety?"' (9)
We suggest the following Calls to Action to ensure that we are taking care of the women on the frontlines of COVID-19, who are taking care of our communities:
Increased research investigating who is working on the frontlines of the COVID-19 pandemic, their experiences, and their needs
Ensure that such research is disaggregated by race, immigration status, and gender identity to get a better idea how different groups are uniquely impacted by working on the frontlines
Ensure that such research is looking at the impact of COVID-19 and relevant policies on the mental and physical wellbeing of essential workers
Ensure that such research translates to policy change to address the unique burdens faced by essential workers (negative mental health, workplace violence, lack of safeguards and personal protective equipment, and more)
Increased investment for care work, such as LTC, PSWs, care aides, nursing, custodial work, etc.
Ensure that women on the frontlines of the crisis are being fairly compensated for their work (wages, benefits, job security).
Increase wages for PSWs, LTC staff, and custodial workers
Address the gender wage gap
Address the wage gap wherein Women of Colour earn less than white women
Extend paid sick leave for women who do end up getting infected with COVID-19 so they do not have to worry about income
Ensure job security wherein women who get sick or have to take time off to care for their families or themselves are not worried about losing their jobs
Provide childcare benefits for women with children
Make sure women on the frontlines have sufficient access to personal protective equipment and vaccinations so these workers are not taking a disproportionate burden of COVID-19 infection risk.
Improve working conditions in LTC facilities and other types of care facilities. Workers should not feel rushed with residents or that they have to provide less quality care to meet demands.
Ensure that care workers are getting sufficient job training, so they feel confident in performing their daily tasks.
Address employee burnout for those working in frontline positions and work towards ameliorating the factors that contribute to job dissatisfaction such as low wages, long hours, and lack of job security.
Recognize the workplace violence that takes place in LTC homes. Provide emotional support to workers who are experiencing trauma as a result of this violence. Address workplace violence in these settings by increasing funding, having more staff per resident, and promoting workers to report these incidents.
Recognize that racialized women and women who speak English as a second language are overrepresented in care work. Conduct research to determine why these groups of women are overrepresented in this work and enact policies to support BIPOC and immigrant workers in these positions.
Increase investments in childcare, so that women on the frontlines do not face greater burnout due to increased unpaid, care work at home.
Facilitate women, especially women of colour, in leadership positions (within unions, advisory councils, businesses, government office) to ensure that feminist polices are enacted to help women during this time.
This involves encouraging women to join leadership positions, listening and amplifying their voice within these roles, and implementing the policies they suggest
Find out how women and gender-diverse leaders have gained leadership during COVID-19 and what we can learn from them.
*These sources do not specify the gender identity of the women included. Historical representation leads us to believe only cisgender women were included.
If you have any feedback on this post or any of the content created by missINFORMED, please reach out to us at info@missinformed.ca. We appreciate and welcome all feedback as we are committed to continuous growth and improvement of our organization.
1. Boniol M, McIsaac M, Xu L, Wuliji T, Diallo K, & Campbell J. Gender equity in the health workforce: Analysis of 104 countries. World Health Organization; 2019 Mar. Available from: https://apps.who.int/iris/bitstream/handle/10665/311314/WHO-HIS-HWF-Gender-WP1-2019.1-eng.pdf?sequence=1&isAllowed=y.
2. Block S, Galabuzi GE, & Tranjan R. Canada's colour coded income inequality. Canadian Centre for Policy Initiatives; 2019 Dec. Available from: http://www.policyalternatives.ca/sites/default/files/uploads/publications/National%20Office/2019/12/Canada%27s%20Colour%20Coded%20Income%20Inequality.pdf.
3. Sharma V, Scott J, Kelly J, & VanRooyen MJ. Prioritizing vulnerable populations and women on the frontlines: COVID-19 in humanitarian contexts. International Journal of Health Equity. 2020;19(66). Available from: https://doi.org/10.1186/s12939-020-01186-4
4. van Daalen KR, Bajnoczki C, Chowdhury M, Dada S, Khorsand P, & Socha A. Symptoms of a broken system: The gender gaps in COVID-19 decision-making. BMJ Global Health. 2020;5. Available from: http://dx.doi.org/10.1136/bmjgh-2020-003549
5. Moyser M. Women and paid work. Statistics Canada; 2017 Mar. Available from: https://www150.statcan.gc.ca/n1/pub/89-503-x/2015001/article/14694-eng.htm.
6. Statistics Canada. Data tables: 2016 census. Available from https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/dt-td/Rp-eng.cfm?TABID=2&Lang=E&APATH=3&DETAIL=0&DIM=0&FL=A&FREE=0&GC=0&GID=1325190&GK=0&GRP=1&PID=110698&PRID=10&PTYPE=109445&S=0&SHOWALL=0&SUB=0&Temporal=2017&THEME=124&VID=0&VNAMEE=&VNAMEF=&D1=0&D2=0&D3=0&D4=0&D5=0&D6=0.
7. Scott K. COVID-19 crisis response must address gender fault lines. Canadian Centre for Policy Alternatives; 2020 Mar. Available from: https://behindthenumbers.ca/2020/03/20/covid-19-crisis-response-must-address-gender-faultlines/.
8. Mojtehedzadeh S. Cleaners are on the front lines of the COVID-19 crisis. But many work with little protection for less than minimum wage — and they're scared. Toronto Star; 2020 Mar 24. Available from https://www.thestar.com/business/2020/03/24/cleaners-are-on-the-front-lines-of-the-covid-19-crisis-but-many-work-with-little-protection-for-less-than-minimum-wage-and-theyre-scared.html.
9. Eligon J & Bowles N. They clean the buildings workers are fleeing. But who's protecting them? The New York Times; 2020 Mar 19. Available from: https://www.nytimes.com/2020/03/18/us/coronavirus-janitors-cleaners.html?auth=login-email&login=email&smid=nytcore-ios-share.
10. Estabrooks CA, Squires JE, Carleton HL, Cummings GG, & Norton PJ. Who is looking after mom and dad? Unregulated workers in Canadian long-term care homes. Canadian Journal on Aging. 2015;34(1): 47-59. Available from: https://doi.org/10.1017/S0714980814000506
11. Oxfam Canada. 71 per cent of Canadian women feeling more anxious, depressed, isolated, overworked or ill because of increased unpaid care work caused by COVID-19: Oxfam survey. 2020 Jun. Available from: https://www.oxfam.ca/news/71-per-cent-of-canadian-women-feeling-more-anxious-depressed-isolated-overworked-or-ill-because-of-increased-unpaid-care-work-caused-by-covid-19-oxfam-survey/.
12. Lum J, Sladek J, & Ying A. Ontario personal support workers in home and community care: CRNCC/PSNO survey results. Ryerson University; 2010 Dec. Available from: https://www.ryerson.ca/content/dam/crncc/knowledge/infocus/factsheets/InFocus-Ontario%20PSWs%20in%20Home%20and%20Community%20Care.pdf.
13. Canadian Union of Public Employees. Bloodied, broken and burned out: 88% of long-term care staff experience violence. 2019 Mar. Available from: https://cupe.on.ca/bloodied-broken-and-burned-out-88-of-long-term-care-staff-experience-violence/.
14. Moran P. 'Seen as part of the job': Ontario nurses, PSWs report 'pervasive' abuse in long-term care facilities. CBC; 2019 Mar 26. Available from https://www.cbc.ca/radio/thecurrent/the-current-for-march-26-2019-1.5071560/seen-as-part-of-the-job-ontario-nurses-psws-report-pervasive-abuse-in-long-term-care-facilities-1.5071566.
15. Brophy J, Keith M, & Hurley M. Breaking point: Violence against long-term care staff. New Solutions A Journal of Environmental and Occupational Health Policy. 2019;29(1): 10-35. Available from: https://doi.org/10.1177/1048291118824872.
16. ONET Online. Work context – physical proximity. 2020 Nov. Available from: https://www.onetonline.org/find/descriptor/result/4.C.2.a.3?a=1.
17. Gamio L. The workers who face the greatest coronavirus risk. The New York Times. 2020 Mar 15. Available from: https://www.nytimes.com/interactive/2020/03/15/business/economy/coronavirus-worker-risk.html.
What are Canadian vaccine passports?
Houselessness during COVID-19: When Social Distancing Isn't An Option
The Importance of Healthcare Services for Survivors of Domestic Violence during COVID-19
Q: PyCharm is not installing packages So I am trying to create a chatbot in PyCharm, so I set up all the code, and when I try to run it, it gives me an error message saying can't find 'main' module in 'C:\Users
So I tried fixing it by setting up a new folder called virtual environment, yet it gives me the same error.
On the off chance it doesn't give me that error, it tells me it cannot import chatterbot, so I go to the project interpreter to try and install it, and it gives me the error message
Error occurred when installing package 'Chatterbot'. I don't get why it does this while chatterbot-corpus installs perfectly fine.
I tried, as I said, making a new virtual environment, but that didn't work, so after switching to the new virtual environment I made and trying to install chatterbot manually, nothing worked and nothing showed up. So I went back to the original environment that was created when I started this project. But the same thing keeps happening: it just won't install, even though other packages install with no problem. I looked this problem up and found one post saying it might be setuptools' fault, so I downgraded to the version they said would work, but to no avail. It always comes up as Error occurred while installing package 'Chatterbot'. I don't know if it's something wrong with the files on my computer or the way I'm trying to install chatterbot. The major problem that really stumps me is that this seems to be a problem for only chatterbot and nothing else, and I cannot figure out for the life of me why.
Dynamo post shutout in home win over Crew
Sports // Dynamo
Jason McDaniel April 27, 2019 Updated: April 27, 2019 10:48 p.m.
Houston Dynamo forward Mauro Manotas (9) celebrates after scoring a goal in the second half during the MLS game between the Real Salt Lake and the Houston Dynamo at BBVA Compass Stadium in Houston, TX on Saturday, March 2, 2019. The game ended with a score of 1-1.
Photo: Tim Warner/Contributor
After a disappointing trip to Los Angeles, the Dynamo returned home - and quickly returned to form.
Forward Mauro Manotas snapped a four-match scoreless streak and the Dynamo bounced back from their first loss with a 2-0 win over the Columbus Crew on Saturday at BBVA Compass Stadium.
"We came out with personality, we came out similar," Dynamo coach Wilmer Cabrera said. "We haven't changed, and that's the most important thing. We don't want to change.
"Winning, losing or tying, we have to be solid — we have to be the same."
The win sent the Dynamo to 5-1-1, with 16 points, in front of 15,557 fans.
Both teams entered the match with 13 points, leaving them in fifth place in their respective conferences. But the Crew's record included three more losses — now four — all in the last two weeks after they also started the season 4-1-1. They've been outscored 7-1 in the four-match losing streak.
The Dynamo were coming off a 2-1 loss to the Galaxy, who received a Diego Polenta goal in the 85th minute. But after waiting until the second half to find a goal in L.A., they struck early at home.
Alberth Elis' aggressive play was the catalyst.
After a giveaway by the Crew (4-5-1), the Honduran beat his man on the right side, then delivered a pass to Manotas, who was able to get just enough of his right foot on the ball as he flashed by to deliver a third-minute goal — his first in Major League Soccer play since March 9 against Montreal.
"When you score in the third minute of the game, it's always positive," Cabrera said.
Manotas nearly added a second goal less than a minute later, but Crew keeper Zach Steffen delivered a diving save.
The 23-year-old has three goals and four assists this season.
"(Manotas) wasn't scoring (the previous four matches), but he was assisting, which is something he wasn't used to doing, so he's growing," Cabrera said. "He's getting more mature, and (Saturday) he scored, and also had three or more chances, where he could have had a hat trick in the first half."
Still, the damage was done — just like the last time Columbus visited Compass.
Romell Quioto tallied in the second minute in that March 11, 2017, match, Elis scored in the 35th minute, and the Dynamo rolled to a 3-1 win. This time, an Elis assist — his fifth of the season — sparked the scoring.
The Dynamo attempted four more shots in the first 45 minutes, and six more in the box, but led 1-0 at the break.
"We knew we couldn't give them the ball the whole time, otherwise they're going to get motivated, and they're talented, and so they're going to be feeling comfortable," Cabrera said. "We did a good job of trying to apply pressure, trying to alternate the pressure, high pressure, with also trying to move the ball."
The Dynamo looked to add to their lead in the 51st minute. The Crew's Waylon Francis drew a yellow card after taking down Tomas Martinez 20 yards out, setting up a direct kick just outside the penalty box, but Memo Rodriguez's attempt bounced off the crossbar.
Not to worry — another chance bounced their way three minutes later. After Steffen came off his line to knock away a Manotas shot across the goal from the right side, Tomas Martinez slammed home the rebound with a left foot for a 2-0 advantage in the 55th minute.
"That gave us the confidence to finish the game," Cabrera said. "We closed the game very well. It was important for us to close the game, to feel like we can have a clean sheet — the first one of the season — so that is also very positive for us, especially for the boys."
The Dynamo have allowed only eight goals through seven matches.
Joe Willis supplied four saves, including one fully extended in the 79th minute, and the Dynamo staved off 11 corner kicks.
"We are getting more solid defensively," Cabrera said. "We are playing more with the same guys, and they're getting to know each other.
"So far, we've been conceding too many goals on PKs, set pieces, and (Saturday) we received some corners, but we were sharp. We were good, and that is important, because we're making adjustments."
Jason McDaniel is a freelance writer.
\section{Introduction}
As has become increasingly clear over the last decades, the study of the
three-dimensional spin-dependent partonic structure of the nucleon in SIDIS processes requires a full understanding of the hadronization process that follows the hard lepton-quark scattering. So far most SIDIS measurements have been analyzed in the current fragmentation region (CFR), where an adequate theoretical formalism based on distribution and fragmentation functions has been established (see for example Ref.~\cite{Bacchetta:2006tn}). However, to avoid misinterpretations, the factorized approach to the SIDIS description in the target fragmentation region (TFR)
also has to be explored. The corresponding theoretical basis -- the fracture-function formalism -- was established in Ref.~\cite{Trentadue:1993ka}
for the unpolarized cross-section integrated over the hadron transverse momentum. Recently this approach was generalized~\cite{Anselmino:2011ss} to the spin and transverse momentum dependent (STMD) case.
We consider the process (adopting the same notations as in
Ref.~\cite{Anselmino:2011bb})
\begin{equation}\label{sidis-tfr}
l(\ell,\lambda) + N(P,S) \to l(\ell') + h(P_h) + X(P_X)
\end{equation}
with the hadron $h$ produced in the TFR. We use the standard DIS notations
and in the $\gamma^*-N$ c.m. frame we define the $z$-axis along the direction
of $\Vec q$ (the virtual photon momentum) and the $x$-axis along ${\Vec \ell}_T$,
the lepton transverse momentum. The kinematics of the produced hadron is defined by the variable $\zeta = P_h^-/P^- \simeq E_h/E$ and its transverse momentum
$\Vec P_{h\perp}$ (with magnitude $P_{h\perp}$ and azimuthal angle $\phi_h$).
Assuming TMD factorization the cross-section of the process (\ref{sidis-tfr})
can be written as
\begin{equation}\label{sidis-tfr-cs}
\frac{\D\sigma^{l(\ell,\lambda)+N(P_N,S) \to l(\ell')+h(P)+X}}
{\D x_B \, \D Q^2 \, \D \zeta \, \D^2 \Vec{P}_{h\perp} \, \D \phi_S} =
{\cal M} \otimes \frac{\D \sigma^{\ell(l,\lambda)+q(k,s) \to \ell(l')+q(k',s')}}
{\D Q^2}\,,
\end{equation}
where $\phi_S$ is the azimuthal angle of the nucleon transverse polarization.
The STMD fracture function ${\cal M}$ has a clear probabilistic meaning:
it is the conditional probability to produce a hadron $h$ in the TFR when the
hard scattering occurs on a quark $q$ from the target nucleon $N$. The
expression of the non-coplanar polarized lepton-quark hard
scattering cross-section can be found in Ref.~\cite{Kotzinian:1994dv}.
The most general expressions of the LO STMD fracture functions for unpolarized ($\mathcal{M}^{[\gamma^-]}$), longitudinally polarized ($\mathcal{M}^{[\gamma^-\gamma_5]}$) and transversely polarized ($\mathcal{M}^{[\I \, \sigma^{i -} \gamma_5]}$) quarks are introduced in the expansion of the leading-twist
projections as~\cite{Anselmino:2011ss,Anselmino:2011bb}:
\begin{eqnarray}
\mathcal{M}^{[\gamma^-]} &=& \hat{u}_1
+ \frac{ {\Vec P}_{h\perp} \times {\Vec S}_\perp}{m_h} \, \hat{u}_{1T}^h
+ \frac{ {\Vec k}_\perp \times {\Vec S}_\perp}{m_N} \, \hat{u}_{1T}^{\perp}
+ \frac{S_\parallel \, ( {\Vec k}_\perp \times {\Vec P}_{h\perp})}{m_N \, m_h}
\, \hat{u}_{1L}^{\perp h} \label{up-frf} \\
\mathcal{M}^{[\gamma^-\gamma_5]} & = &
S_\parallel \, \hat{l}_{1L}
+ \frac{{\Vec P}_{h\perp} \cdot {\Vec S}_\perp}{m_h} \, \hat{l}_{1T}^h
+ \frac{ {\Vec k}_\perp \cdot {\Vec S}_\perp}{m_N} \, \hat{l}_{1T}^{\perp}
+ \frac{ {\Vec k}_\perp \times {\Vec P}_{h\perp}}{m_N \, m_h} \,
\hat{l}_1^{\perp h} \label{lp-frf} \\
\mathcal{M}^{[\I \, \sigma^{i -} \gamma_5]} & = & S_\perp^i \, \hat{t}_{1T}
+ \frac{S_\parallel \, P_{h\perp}^i}{m_h} \, \hat{t}_{1L}^h
+ \frac{S_\parallel \, k_\perp^i}{m_N} \, \hat{t}_{1L}^{\perp}
\nonumber \\
& & + \, \frac{( {\Vec P}_{h\perp} \cdot {\Vec S}_\perp)
\, P_{h\perp}^i}{m_h^2} \, \hat{t}_{1T}^{hh}
+ \frac{( {\Vec k_\perp} \cdot {\Vec S}_\perp)
\, k_\perp^i}{m_N^2} \, \hat{t}_{1T}^{\perp \perp}
\nonumber \\
& & + \, \frac{({\Vec k}_\perp \cdot {\Vec S}_\perp)
\, P_{h\perp}^i - ( {\Vec P}_{h\perp} \cdot {\Vec S}_\perp)
\, k_\perp^i }{m_N m_h} \, \hat{t}_{1T}^{\perp h}
\nonumber \\
& & + \, \frac{\epsilon_{\perp}^{ij} \, P_{h\perp j}}{m_h}
\, \hat{t}_1^h
+ \frac{\epsilon_{\perp}^{ij} \, k_{\perp j}}{m_N}
\, \hat{t}_1^{\perp}\,,
\label{tp-frf}
\end{eqnarray}
where ${\Vec k}_\perp$ is the quark transverse momentum and by the vector
product of two-dimensional vectors ${\bf a}$ and ${\bf b}$ we mean the
pseudo-scalar quantity $ {\bf a} \times {\bf b} =
\epsilon^{i j} \, a_i b_j = a b \, \sin (\phi_b - \phi_a)$.
All fracture functions depend on the scalar variables $x_B, k_\perp^2, \zeta,
P_{h\perp}^2$ and ${\Vec k}_\perp \cdot {\Vec P}_{h\perp}$.
For the production of a spinless hadron in the TFR one has~\cite{Anselmino:2011ss}:
\bq
& & \hspace{1.5cm}\frac{\D\sigma^{\ell(l,\lambda) + N(P_N,S) \to
\ell(l') + h(P)+X}}{\D x_B \, \D y \, \D \zeta \, \D^2{\Vec P}_{h\perp}\,
\D\phi_S} =
\frac{\alpha_{\rm em}^2}{Q^2 y} \,
\\
&& \hspace{-0.5cm} \times \Bigg \{
\left [1 +(1-y)^2 \right ]
\, \sum_a e_a^2 \,
\left [\tilde{u}_1(x_B, \zeta, P_{h\perp}^2)
- S_T \, \frac{P_{h\perp}}{m_h}
\, \tilde{u}_{1T}^h(x_B, \zeta, P_{h\perp}^2) \, \sin (\phi_h - \phi_S)
\right ]
\nonumber \\
& &
\hspace{0.cm} + \,
\lambda \, y \, (2 - y )
\sum_a e_a^2 \, \left [
S_L \, \tilde{l}_{1L} (x_B, \zeta, P_{h\perp}^2)
+ \, S_T\, \frac{P_{h\perp}}{m_h}
\, \tilde{l}_{1T}^h (x_B, \zeta, P_{h\perp}^2) \, \cos (\phi_h - \phi_S)
\right ] \Bigg \}
\nonumber \,, \label{cross1}
\eq
where the ${\Vec k}_\perp$-integrated fracture functions are given as
\bq
&& \> \tilde{u}_1(x_B, \zeta, P_{h\perp}^2)
= \int \!\! \D^2 {\Vec k}_\perp \, \hat{u}_1 \,,
\quad
\tilde{u}_{1T}^h(x_B, \zeta, P_{h\perp}^2)
= \int \!\! \D^2 {\Vec k}_\perp \, \big( \hat{u}_{1T}^h
+ \frac{m_h}{m_N}
\frac{{\Vec k}_\perp \cdot {\Vec P}_{h\perp}}{P_{h\perp}^2}
\, \hat{u}_{1T}^{\perp} \big )\, ,
\label{intm2}
\nonumber \\
&& \quad \tilde{l}_{1L}(x_B, \zeta, P_{h\perp}^2)
= \int \!\! \D^2 {\Vec k}_\perp \, \hat{l}_{1L} \,,
\quad
\tilde{l}_{1T}^h(x_B, \zeta, P_{h\perp}^2)
= \int \!\! \D^2 {\Vec k}_\perp \, \big( \hat{l}_{1T}^h +
\frac{m_h}{m_N}
\frac{{\Vec k}_\perp \cdot {\Vec P}_{h\perp}}{P_{h\perp}^2}
\, \hat{l}_{1T}^{\perp} \big )\,.
\label{intdeltam2}
\eq
We see that single-hadron production in the TFR of SIDIS does not provide access to all fracture functions. At LO the cross-section with unpolarized leptons
contains only the Sivers-like single-spin azimuthal asymmetry.
\section{Double hadron leptoproduction (DSIDIS)}
In order to have access to all fracture functions one has to ``measure'' the
transverse polarization of the scattered quark, for example by exploiting the Collins effect~\cite{Collins:1992kk} -- the azimuthal correlation of the fragmenting quark transverse polarization, ${\Vec s}'_T$, with the produced hadron transverse momentum, ${\Vec p}_\perp$:
\be
D(z,{\Vec p}_\perp) = D_1(z, p_\perp^2) + \frac{{\Vec p}_\perp \times
{\Vec s}'_T}{m_h}H_1^\perp(z, p_\perp^2)\, ,
\ee
where $s'_T=D_{nn}(y)\,s_T$ and $\phi_{s'}=\pi-\phi_s$ with
$D_{nn}(y)= [2(1-y)]/[1+(1-y)^2]\>$.
Let us consider a double hadron production process (DSIDIS)
\begin{equation}\label{dsidis}
l(\ell) + N(P) \to l(\ell') + h_1(P_1) + h_2(P_2) + X
\end{equation}
with (unpolarized) hadron 1 produced in the CFR ($x_{F1}>0$) and hadron 2 in the TFR ($x_{F2}<0)$, see Fig.~1. For hadron $h_1$ we will use the ordinary scaled variable $z_1 = P_1^+/k'^+ \simeq P{\cdot}P_1/P{\cdot}q$ and its transverse
momentum ${\Vec P}_{1\perp}$ (with magnitude $P_{1\perp}$ and azimuthal angle
$\phi_1$) and for hadron $h_2$ the variables $\zeta_2 = P_2^-/P^- \simeq E_2/E$
and ${\Vec P}_{2\perp}$ ($P_{2\perp}$ and $\phi_2$).
\begin{figure}[h!]
\begin{center}
\label{fig:sidis-assoc}
\includegraphics[height=.25\textheight]{sidis-assoc1}
\caption{DSIDIS description in factorized approach at LO.}
\end{center}
\end{figure}
In this case the LO expression for the DSIDIS cross-section includes all fracture functions:
\bq \label{cs-2h}
&& \hspace{1.cm}
\frac{\D\sigma^{l(\ell,\lambda)+N(P,S) \to l(\ell')+h_1(P_1)+h_2(P_2)+X}}
{\D x \, \D y \, \D z_1 \, \D\zeta_2 \, \D^2 {\Vec P_{1\perp}} \,
\D^2 {\Vec P_{2\perp}} \, \D \phi_S} =
\frac{\alpha^2\,x_B}{Q^4 \, y}\left[ 1+(1-y)^2 \right] \times
\\ &&
\bigg(\mathcal{M}^{[\gamma^-]}_{h_2} \otimes D_{1q}^{h_1} + \lambda \,
D_{ll}(y)\, \mathcal{M}^{[\gamma^-\gamma_5]}_{h_2} \otimes D_{q}^{h_1}
+ \mathcal{M}^{[\I \, \sigma^{i -} \gamma_5]}_{h_2} \otimes
\frac{{\Vec p}_\perp \times {\Vec s}'_T}{m_{h_1}}H_{1q}^{\perp h_1} \bigg) =
\nonumber \\
&& \hspace{-0.2cm}
\frac{\alpha^2\,x_B}{Q^4 \, y}\left[ 1+(1-y)^2 \right]
\left(\sigma_{UU} + S_\parallel \,\sigma_{UL} + S_\perp \,\sigma_{UT}+
\lambda \, D_{ll} \, \sigma_{LU} + \lambda \,S_\parallel D_{ll}\,\sigma_{LL}
+\lambda \, S_\perp D_{ll}\,\sigma_{LT} \right)\, ,
\nonumber
\eq
where $D_{ll}(y) = y(2-y)/[1+(1-y)^2]$.
\section{DSIDIS cross-section integrated over ${\Vec P_{2\perp}}$}
If we integrate the fracture matrix over ${\Vec P}_{2\perp}$
we are left with eight $k_{\perp}$-dependent fracture functions:
\bq
\int \D^2 {\Vec P_{2\perp}} \, \mathcal{M}^{[\gamma^-]}
&=&
u_1 + \frac{{\Vec k_\perp} \times
\Vec S_{\perp}}{m_N} \, u_{1T}^{\perp} \>, \label{v1v2_tilde} \\
\int \D^2 {\Vec P_{2\perp}} \, \mathcal{M}^{[\gamma^- \gamma_5]}
&=&
S_{\parallel} \, l_{1L}
+ \frac{{\Vec k_\perp} \cdot \Vec S_{\perp}}{m_N} \, l_{1T} \>,
\label{a1a2_tilde} \\
\int \D^2 {\Vec P_{2\perp}} \, \mathcal{M}^{[\I \, \sigma^{i -} \gamma_5]}
&=& S_{\perp}^i \, t_{1T}
+ \frac{S_{\parallel} \, k_{\perp}^i}{m_N} \,
t_{1L}^{\perp}
+ \frac{k_{\perp}^i ({\Vec k_\perp} \cdot \Vec S_{\perp})}{m_N^2}
\, t_{1T}^{\perp}
+ \frac{\epsilon_{\perp}^{ij} k_{\perp j}}{m_N}
\, t_1^{\perp}
\nonumber \\
&=& S_{\perp}^i \, t_1
+ \frac{S_{\parallel} \, k_{\perp}^i}{m_N} \,
t_{1L}^{\perp}
+ \frac{(k_{\perp}^i k_{\perp}^j - \frac{1}{2} {\Vec k_\perp}^2 \delta_{ij}) \,
S_{\perp}^j}{m_N^2} \, t_{1T}^{\perp}
+ \frac{\epsilon_{\perp}^{ij} k_{\perp j}}{m_N}
\, t_1^{\perp}\,,
\label{t1t2_tilde}
\eq
where $ t_{1} \equiv t_{1T}
+ ({\Vec k_\perp}^2/2 m_N^2) \, t_{1T}^{\perp}$.
We have removed the hat to denote the ${{\Vec P}_{2\perp}}$--integrated
fracture functions, for example:
\be
t_1 (x_B, {\Vec k_\perp}^2, \zeta) =
\int \D^2 {\Vec P_{2\perp}}
\left \{ \hat{t}_{1T} + \frac{{\Vec k_\perp}^2}{2m_N^2} \,
\hat{t}_{1T}^{\perp\perp} + \frac{\Vec P_{2\perp}^2}{2m_2^2} \,
\hat{t}_{1T}^{hh} \right \}.
\ee
The complete expressions for the other seven ${\Vec P_{2\perp}}$--integrated
fracture functions are presented in Ref.~\cite{Anselmino:2011bb}.
These ${\Vec P_{2\perp}}$--integrated fracture functions are perfectly
analogous to those describing single-hadron leptoproduction
in the CFR~\cite{Bacchetta:2006tn}, the correspondence being:
Fracture Functions $\Rightarrow$ Distribution Functions.
Thus we can use the procedure of Ref.~\cite{Bacchetta:2006tn} to obtain
the final expression of the cross section as
\bq
& &
\frac{\D \sigma}{\D x_B \, \D y \, \D z_1 \, \D \zeta_2 \,
\D \phi_1 \, \D P_{T1}^2 \, \D \phi_S} =
\frac{\alpha_{\rm em}^2}{ x_B \, y \, Q^2} \left \{
\left (1 - y + \frac{y^2}{2} \right ) \, \mathcal{F}_{UU, T}
+ (1 - y) \, \cos 2 \phi_1 \, \mathcal{F}_{UU}^{\cos 2 \phi_1} \right.
\nonumber \\
& & \hspace{1.3cm} + \, S_\parallel \,
(1 - y) \, \sin 2 \phi_1 \, \mathcal{F}_{UL}^{\sin 2 \phi_1}
+ S_\parallel \, \lambda
\, y \, \left (1 - \frac{y}{2} \right ) \, \mathcal{F}_{LL}
\nonumber \\
& & \hspace{1.3cm} + \, S_T \,
\left (1 - y + \frac{y^2}{2} \right ) \, \sin (\phi_1 - \phi_S) \,
\mathcal{F}_{UT}^{\sin (\phi_1 - \phi_S)}
\nonumber \\
& & \hspace{1.3cm} + \, S_T \, (1 -y) \, \sin (\phi_1 + \phi_S) \, \mathcal{F}_{UT}^{\sin (\phi_1 +
\phi_S)} + S_T \, (1 - y ) \, \sin (3 \phi_1 - \phi_S) \,
\mathcal{F}_{UT}^{\sin (3 \phi_1 - \phi_S)}
\nonumber \\
& & \hspace{1.3cm} + \left. S_T \, \lambda
\, y \left (1 - \frac{y}{2} \right ) \, \cos (\phi_1 - \phi_S) \,
\mathcal{F}_{LT}^{\cos (\phi_1 - \phi_S)}
\right \}
\label{sidiscs_lt}
\eq
where the structure functions are given by the same convolutions as in~\cite{Bacchetta:2006tn} with the replacement of the TMDs with the
${\Vec P_{2\perp}}$--integrated fracture and fragmentation functions:
$f \to u, g \to l$ and $h \to t$.
\section{DSIDIS cross-section integrated over ${{\bf P}_{T1}}$}
If one integrates the DSIDIS cross-section over ${\Vec P_{1\perp}}$ and the
quark transverse momentum, only one fragmentation function, $D_1$, survives,
which couples to the unpolarized and the longitudinally polarized
${\Vec k}_\perp$--integrated fracture functions:
\bq
\int \D^2 {\Vec k_{\perp}} \, \mathcal{M}^{[\gamma^-]}
&=& \tilde{u}_1 (x_B, \zeta_2, P_{2\perp}^2)
+ \frac{{\Vec P_{2\perp}}\times \Vec S_T}{m_2}
\, \tilde{u}_{1T}^h (x_B, \zeta_2, P_{2\perp}^2),
\label{intkfrag1} \\
\int \D^2 {\Vec k_{\perp}} \,
\mathcal{M}^{[\gamma^- \gamma_5]}
&=& S_\parallel \, \tilde{l}_{1L} (x_B, \zeta_2, P_{2\perp}^2)
+ \frac{{\Vec P_{2\perp}} \cdot \Vec S_T}{m_2}
\, \tilde{l}_{1T}^h (x_B, \zeta_2, P_{2\perp}^2),
\label{intkfrag2}
\eq
where the fracture functions with a tilde (which means integration
over the quark transverse momentum) are as in Eqs.~(\ref{intdeltam2}).
The final result for the cross section is~\cite{Anselmino:2011bb}
\bq
& & \frac{\D \sigma}{\D x_B \, \D y \, \D z_1 \, \D \zeta_2 \,
\D \phi_2 \, \D P_{2 \perp}^2 \, \D \phi_S} =
\frac{\alpha_{\rm em}^2}{y \, Q^2} \,
\left \{ \left (1 - y + \frac{y^2}{2} \right ) \right.
\nonumber \\
& & \hspace{1cm}
\times \, \sum_a e_a^2 \,
\left [ \tilde{u}_1(x_B, \zeta_2, P_{2\perp}^2)
- S_T \, \frac{P_{2\perp}}{m_2}
\, \tilde{u}_{1T}^h (x_B, \zeta_2, P_{2\perp}^2) \, \sin (\phi_2 - \phi_S)
\right ]
\nonumber \\
& & \hspace{1cm} + \,
\lambda \, y \, \left (1 - \frac{y}{2}\right )
\sum_a e_a^2 \,
\left [ \STRUT
S_\parallel \, \tilde{l}_{1L} (x_B, \zeta_2, P_{2\perp}^2)
\right.
\nonumber \\
& & \hspace{1cm}
+ \, \left. \left.
S_T \, \frac{P_{2\perp}}{m_2}
\, \tilde{l}_{1T}^h (x_B, \zeta_2, P_{2\perp}^2) \, \cos (\phi_2 - \phi_S)
\right ] \right \} D_1 (z) .
\label{crossintk}
\eq
As in the case of single-hadron production \cite{Anselmino:2011ss}, there
is a Sivers-type modulation $\sin (\phi_2 - \phi_S)$, but no Collins-type
effect.
\section{Examples of unintegrated cross-sections: beam spin asymmetry}
We show here explicit expressions only for $\sigma_{UU}$ and $\sigma_{LU}$\footnote{Expressions for other terms are available in~\cite{Kotzinian:DIS2011}.}
\bq\label{s_uu}
\sigma_{UU} = F_0^{{\hat u} \cdot D_1}
& - & D_{{nn}} \Bigg[\frac{P_{{1\perp}}^2 }{m_1 m_N}\, F_{{kp1}}^{{\hat t}^\perp \cdot H_1^\perp}\,{\cos}(2 \phi _1)
+ \frac{P_{{1\perp}} P_{{2\perp}} }{m_1 m_2}\, F_{{p1}}^{{\hat t}^h
\cdot H_1^\perp}\, {\cos}(\phi _1+\phi _2)\nonumber \\
& + & \left(\frac{P_{{2\perp}}^2 }{m_1 m_N}\, F_{{kp2}}^{{\hat t}^\perp
\cdot H_1^\perp} + \frac{P_{{2\perp}}^2 }{m_1 m_2}\, F_{{p2}}^{{\hat t^h}
\cdot H_1^\perp}\right)\, {\cos}(2 \phi _2)\Bigg].
\eq
\be
\sigma_{LU} = -\frac{ P_{{1\perp}} P_{{2\perp}}}{m_2 m_N} F_{{k1}}^{{\hat l}^{\perp h}\cdot D_1} \, \sin(\phi _1-\phi _2)
\, ,
\ee
where the structure functions $F_{...}^{...}$ are specific convolutions~\cite{Kotzinian:DIS2011, abk3} of fracture and fragmentation functions depending on $x, z_1, \zeta_2, P_{1\perp}^2, P_{2\perp}^2, {\Vec P}_{1\perp} \cdot
{\Vec P}_{2\perp}$.
We notice the presence of terms similar to the Boer-Mulders term appearing in the usual CFR of SIDIS. What is new in DSIDIS is the LO beam spin SSA, absent in the CFR of SIDIS.
We further notice that the DSIDIS structure functions may in principle depend on the relative azimuthal angle of the two hadrons, due to the presence of the last term among their arguments: ${\Vec P}_{1\perp} \cdot {\Vec P}_{2\perp} =
P_{1\perp} P_{2\perp}\cos(\Delta \phi)$ with $\Delta \phi=\phi_1-\phi_2$.
This term arises from ${\Vec k}_\perp \cdot {\Vec P}_\perp$ correlations in the STMD fracture functions and can generate a long-range correlation between hadrons produced in the CFR and TFR. In practice it is convenient to choose $\Delta \phi$ and $\phi_2$ as the independent azimuthal angles.
Let us finally consider the beam spin asymmetry defined as
\be
A_{LU}(x, z_1, \zeta_2, P_{1\perp}^2, P_{2\perp}^2, \Delta \phi) =
\frac{\int \D \phi_2 \, \sigma_{LU}}{\int \D \phi_2 \, \sigma_{UU}}=
\frac{-\frac{ P_{{1\perp}} P_{{2\perp}}}{m_2 m_N} F_{{k1}}^{{\hat l}^{\perp h}\cdot D_1} \, \sin(\Delta \phi)}{F_0^{{\hat u} \cdot D_1}}\,\cdot
\ee
If one keeps only the linear terms of the corresponding fracture function
expansion in series of ${\Vec P}_{1\perp} \cdot {\Vec P}_{2\perp}$ one obtains
the following azimuthal dependence of DSIDIS beam spin asymmetry:
\be
A_{LU}(x, z_1, \zeta_2, P_{1\perp}^2, P_{2\perp}^2) =
a_1 \sin(\Delta \phi) + a_2 \sin(2\Delta \phi)
\ee
with the amplitudes $a_1,a_2$ independent of azimuthal angles.
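This two-harmonic structure can be made explicit. Writing schematically, in our own shorthand, $F_{{k1}}^{{\hat l}^{\perp h}\cdot D_1} \simeq F^{(0)} + F^{(1)} \cos(\Delta\phi)$ for the linearized expansion, and neglecting the analogous $\Delta\phi$ dependence of the denominator, the numerator of $A_{LU}$ contains
\be
\sin(\Delta\phi)\left[F^{(0)} + F^{(1)}\cos(\Delta\phi)\right]
= F^{(0)}\,\sin(\Delta\phi) + \frac{1}{2}\,F^{(1)}\,\sin(2\Delta\phi)\,,
\ee
so that $a_1 \propto F^{(0)}$ and $a_2 \propto F^{(1)}/2$.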
We stress that ideal opportunities to test the predictions of the present
approach to DSIDIS will be offered by the JLab 12 upgrade, currently in
progress, and by the EIC facilities, now in the planning phase.
The Portuguese euro coins feature three different designs, one for each of the three coin series. In reality, the three designs are very similar, since they all contain the old royal symbols and seals, set within a circle of seven castles and five coats of arms together with the word "Portugal". The designs, the work of Vitor Manuel Fernandes dos Santos, also include the 12 stars of the European Union flag and the year of minting.
National side
Number of coins minted
€2 commemorative coins
Related articles
Euro
Euro coins
Other projects
External links
Engine Gaming and Media Announces the Appointment of Stu Porter to the Board of Directors
[January 13, 2022]
Engine Gaming and Media, Inc. ("Engine" or the "Company"; NASDAQ: GAME; TSX-V: GAME), an esports/sports social gaming, influencer marketing, and next-generation media solutions company, today announced the appointment of highly respected private equity fund manager Stuart "Stu" Porter to its Board of Directors, effective January 17th.
"We are excited to have Stu join Engine's board as an independent director," said Tom Rogers, Engine's Executive Chairman. "Stu is one of Engine's largest investors who brings more than three decades of experience as a highly successful investor who will provide very valuable perspective for the Engine board. I have found Stu's insights to be extremely helpful as we continue to navigate the public markets, and look forward to relying on his service as we continue to grow Engine's various businesses."
Lori Conkling is resigning as a director due to expanded responsibility at YouTube that presents a potential conflict of interest in terms of the areas she oversees. Conkling stated, "While my time on the Board was short in tenure, I remain a long-term champion of the company."
About Stuart "Stu" Porter
Stu Porter founded Denham Capital in 2004 and is its Chief Executive Officer and Chief Investment Officer. Mr. Porter holds a Bachelor of Arts from the University of Michigan and a Master of Business Administration from the University of Chicago Booth School of Business.
Mr. Porter brings three plus decades of experience evaluating, investing and advising companies. Additionally, Mr. Porter has significant global experience, managing offices in London and Perth Australia for Denham Capital as well as deploying investment capital across more than 75 portfolio companies in Africa, Australasia, and North and South America. In Mr. Porter's previous roles as a founding partner of Sowood Capital Management LP and Vice President and Portfolio Manager at Harvard Management Company, Inc., as well as roles at Bacon Investments, J. Aron, a division of Goldman Sachs, and Carill, he oversaw both trading and investment portfolios in energy in both the public and private sectors.
About Engine Gaming and Media, Inc.
Engine Gaming and Media, Inc. is traded publicly under the ticker symbol (NASDAQ: GAME) (TSX-V: GAME). Engine provides premium social sports and esports gaming experiences, as well as unparalleled data analytics, marketing, advertising, and intellectual property to support its owned and operated direct-to-consumer properties while also providing these services to enable its clients and partners. The Company's subsidiaries include Stream Hatchet, the global leader in gaming video distribution analytics; Sideqik, a social influencer marketing discovery, analytics, and activation platform; Eden Games, a premium motorsport video game developer and publisher across console and mobile gaming; WinView Games, a social predictive play-along gaming platform for viewers to play while watching live events; UMG, an end-to-end competitive esports platform powering and broadcasting major esports events, as well as daily community tournaments, matches, and ladders; and Frankly Media, a digital publishing platform used to create, distribute and monetize content across all digital channels. Engine Media generates revenue through a combination of direct-to-consumer and subscription fees, streaming technology and data SaaS-based offerings, programmatic advertising, and sponsorships.
Cautionary Statement on Forward-Looking Information
This news release contains forward-looking statements and forward-looking information within the meaning of applicable securities laws (together, "forward-looking information"). Forward-looking information involves known and unknown risks, uncertainties and other factors which may cause the actual results, performance or achievements of Engine to be materially different from any future results, performance or achievements expressed or implied by the forward-looking information. Often, but not always, forward-looking information can be identified by the use of words such as "plans", "expects" or "does not expect", "is expected", "estimates", "intends", "anticipates" or "does not anticipate", or "believes", or variations of such words and phrases or state that certain actions, events or results "may", "could", "would", "might" or "will" be taken, occur or be achieved. In respect of the forward-looking information contained herein, Engine has provided such information in reliance on certain assumptions that management believed to be reasonable at the time. Forward-looking information involves known and unknown risks, uncertainties and other factors which may cause the actual results, performance or achievements stated herein to be materially different from any future results, performance or achievements expressed or implied by forward-looking information. Actual results could differ materially from those currently anticipated due to a number of factors and risks. Accordingly, readers should not place undue reliance on forward-looking information contained in this news release.
The forward-looking information contained in this news release is made as of the date of this release and, accordingly, is subject to change after such date. Engine does not assume any obligation to update or revise any forward-looking information, whether written or oral, that may be made from time to time by Engine or on its behalf, except as required by applicable law.
Interview with ABP
Interview with InterCall
Interview with Kabira
Innovation 2020: The Connected Service Technician
Risk to Consumers, Public Safety and National Security
Anytime content access and secure collaboration within and outside the company
AstriCon
MSP Expo Track A
Session Details TBA | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 5,180 |
Q: Scroll to right of div I am creating a chat using Ajax requests and I'm trying to get the messages div to scroll to the right, without much luck.
I am wrapping everything in this div:
#scroll {
width: 500px;
overflow: scroll;
}
Is there a way to keep it scrolled to the right by default using JS?
Is there a way to keep it scrolled to the right after an ajax request?
A: // Scroll to the right edge of the container (the #scroll div from the question)
const container = document.querySelector('#scroll')
container.scrollLeft = container.scrollWidth
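That sets the position once, but it doesn't by itself answer the "after an ajax request" part: new messages will arrive with the view left wherever it was. A common chat pattern, sketched below, is to record whether the user was already at (or near) the right edge before the update and only snap back if they were, so you don't yank the view away while they're reading older messages. `shouldStickToEnd` and the `slack` tolerance are illustrative names, not a standard API:

```typescript
// Decide whether a horizontally scrolled container should snap back to its
// right edge after new content is appended. Classic chat behavior: only
// follow the end if the user was already at (or near) the end.
function shouldStickToEnd(
  scrollLeft: number,  // element.scrollLeft before the update
  clientWidth: number, // visible width of the container
  scrollWidth: number, // total scrollable width before the update
  slack: number = 5    // pixel tolerance for "close enough to the end"
): boolean {
  return scrollLeft + clientWidth >= scrollWidth - slack;
}

// Browser usage after each Ajax update (appendNewMessages is a placeholder
// for whatever your success handler does to the DOM):
//
//   const el = document.querySelector('#scroll') as HTMLElement;
//   const wasAtEnd = shouldStickToEnd(el.scrollLeft, el.clientWidth, el.scrollWidth);
//   appendNewMessages(el);
//   if (wasAtEnd) el.scrollLeft = el.scrollWidth;
```

The decision function is kept pure (numbers in, boolean out) so it can be exercised without a DOM; only the two commented lines touch the element.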
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,574 |
Home staging – what's to know?
» Home improvement ideas » Home staging – what's to know?
If you're thinking about selling your property, you need to know that there are some preparations to make. In order to attract potential buyers, you'll need to increase the comfort of your home, which means doing a little bit more than a full clean-up. This is called home staging, and here you will find all the necessary information on how to do it properly. Additionally, we will share some tips and tricks that will help you do it quickly and affordably. But the most important thing is to make your home look like a perfect opportunity for its new residents. You need to be aware that selling your place will take some time, and it will not happen overnight. Buying a house is not a small thing, and people want to buy only the most suitable one. Therefore, you need to give them what they want.
One of the most important things you need to do in order to pull off successful home staging is to learn what buyers are looking for. Even though you can't meet everyone's requests, try to satisfy at least the major requirements. There are numerous ways to accomplish this. One is to do some online research, as there are many forums and web pages related to this topic. Next, you can ask for advice from a professional home stager. Even if you do not want to spend money on hiring such a professional, you can pay a consultation fee and get all the necessary and relevant information. If anyone knows how to stage your home properly, they do. Finally, you can always go and visit some homes that are on sale and check out how others set up their places.
Home staging 101 – learn what buyers want and give it to them.
In case any of the parts listed above need to be fixed, make sure you do it at short notice. You can't put your home on sale if there is any major problem left unfixed. Even if you need to pay a bit more to make this happen quickly, do not be cheap. This is very important, and you could have a hard time selling your house without taking care of such things.
It is time for action. As you probably know, the first impression is the most important, so try as hard as you can. Therefore, you need to take care of your exterior. This means that you should clean your outside walls, decorate your garden, and remove any garbage from your front and backyard. A good thing to do is to place a "for sale" sign in front of your home to attract more buyers. Remember to keep your yard clean at all times during the sale.
Make sure people know your home is up for sale!
Even if you like how your rooms look, it is probably not what buyers want to see. All your premises should look like something out of a magazine. Try to remove as many of your belongings as possible, and leave only representative items. In case you do not have anywhere to store your things, there is good storage space at your disposal that you can use. So, any extra sofas, tables, pillows, etc. should be removed from your rooms. You can use ideas from the internet to organize your rooms for visits. If you see that the current setting is not attracting buyers, reorganize the rooms from time to time. In the end, you will find the best composition that will sell your place.
What will attract potential buyers the most are details. So focus on them. Vases with flowers, candle holders, pictures, anything you can think of. These things are sending a message that your home is warm and beautiful, and that is what people are looking for in the end. Try to find some useful ideas for your walls, as colors are very important for the home interior. Remember all the time to see how people react to these small things, and redecorate whenever it is necessary. If you communicate with potential buyers, you will be able to react properly in order to sell your home fast and for a good price.
Focus on the details in order to send a positive vibe to potential buyers.
As you can see, home staging is not something you can do in a day. Hence, get ready to spend a couple of days completing this task. More importantly, be ready to spend some money as well, as these additions will not be cheap. But they will pay off eventually. The most important thing is to attract as many potential buyers as possible. Ask them for their opinion and try to adjust your home along the way.
Keep in mind that home staging is not a renovation, and you are not doing it for yourself. Even if something is not to your liking, the only important thing is that the people who will buy your home like it. We all have different tastes, and you will have an opportunity to set up your new home to suit your needs. Finally, if you are moving to Florida, make sure you hire the best movers in the market, and there is no better than purpleheartmovinggroup.com. As you know, reputable movers are necessary for every relocation, so you must make sure you book the ones you want in due time. There is no time to be wasted! | {
"redpajama_set_name": "RedPajamaC4"
} | 7,499 |
Q: TypeScript example of AngularJS $routeProvider module I am currently working on a simple couchapp that uses AngularJS and have decided to use TypeScript
I am basing it off of the AngularJS angular-phonecat tutorial. I have most of the application converted to idiomatic TypeScript. I have based this off of the pwalat / Piotr.Productlist bits, however they only cover Controllers and Models.
I am struggling to figure out how to create the correct TypeScript equivalent for the angular router $routeProvider
// app/js/app.js
angular.module('phonecat', []).
config(['$routeProvider', function($routeProvider) {
$routeProvider.
when('/phones', {templateUrl: 'partials/phone-list.html', controller: PhoneListCtrl}).
when('/phones/:phoneId', {templateUrl: 'partials/phone-detail.html', controller: PhoneDetailCtrl}).
otherwise({redirectTo: '/phones'});
}]);
I know it needs to be a module {} of some sort?
A: The angular-vs team has some stuff that is really worth looking at:
https://github.com/kriasoft/angular-vs/blob/master/App/Scripts/App.ts
...in this case, they do a sort of cast on the initial string in the any[] array passed to config, which avoids the trouble that TypeScript seems to have figuring out which version of config to use (.config(<any>'$routeProvider'...).
Example:
angular
.module('phonecat', [])
.config([
<any>'$routeProvider',
function ($routeProvider: angular.RouteProvider) {
$routeProvider
.when('/phones', { templateUrl: 'partials/phone-list.html', controller: PhoneListCtrl })
.when('/phones/:phoneId', { templateUrl: 'partials/phone-detail.html', controller: PhoneDetailCtrl })
.otherwise({ redirectTo: '/phones' });
}
]);
...I should mention that I installed the AngularJS TypeScript declarations from here:
http://nuget.org/packages/Schmulik.AngularTS
..and then reference them at the top of my routing file:
/// <reference path="../AngularTS/angular.d.ts" />
/// <reference path="../AngularTS/ng/route.d.ts" />
A: You can also use ng.route.IRouteProvider
A: I found the answer here: http://morrisdev.com/2016/01/angular-controllers-with-typescript/
interface IRouteParams extends ng.route.IRouteParamsService {
userId: number;
}
A: I haven't looked up AngularJS to find out all the details, but hopefully this will help you with the process of declaring an existing library, which is more useful than just giving you an AngularJS definition.
I have created these definitions based on your usage. The important bits are...
*
*The Angular definition describes the functions you can expect to call on an instance of an Angular class.
*The RouteProvider definition describes the functions you can expect to call on an instance of a RouteProvider class
*I then declare angular and $routeProvider and tell the compiler they are instances of the classes defined in the previous steps.
Disclaimer: because I don't know what the arguments in your example represent I have used names like param1 but these should be updated to reflect what is actually expected, for example filePath: string - or whatever it actually is.
Here is the example:
// I don't know what the param names should be, so they
// need to be changed to sensible names
declare class Angular {
module (param1: string, param2: any[]) : Angular;
config(param1: any[]) : Angular;
}
declare class RouteProvider {
when(param1: string, param2: any) : RouteProvider;
otherwise(param1: any) : RouteProvider;
}
declare var angular: Angular;
declare var $routeProvider: RouteProvider;
// These are just because I don't know where you define them...
declare var PhoneDetailCtrl: any;
declare var PhoneListCtrl: any;
function myFunction ($routeProvider: RouteProvider) {
$routeProvider.
when('/phones', {templateUrl: 'partials/phone-list.html', controller: PhoneListCtrl}).
when('/phones/:phoneId', {templateUrl: 'partials/phone-detail.html', controller: PhoneDetailCtrl}).
otherwise({redirectTo: '/phones' });
}
angular
.module('phonecat', [])
.config(['$routeProvider', myFunction]);
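If you want to sanity-check that fluent when()/otherwise() shape without pulling in Angular at all, a throwaway mock is enough. Everything below (MockRouteProvider, RouteConfig, the routes/fallback fields) is illustrative scaffolding, not part of Angular or its typings; the point is just that each method returns `this`, which is what makes the chained declaration style type-check:

```typescript
// Minimal stand-in for $routeProvider, only to demonstrate the fluent
// when()/otherwise() chaining that the declarations above describe.
interface RouteConfig {
  templateUrl?: string;
  controller?: any;
  redirectTo?: string;
}

class MockRouteProvider {
  routes: { [path: string]: RouteConfig } = {};
  fallback: RouteConfig | null = null;

  when(path: string, config: RouteConfig): MockRouteProvider {
    this.routes[path] = config;
    return this; // returning `this` is what enables the chaining
  }

  otherwise(config: RouteConfig): MockRouteProvider {
    this.fallback = config;
    return this;
  }
}

// Same routing table as the phonecat example, against the mock:
const rp = new MockRouteProvider()
  .when('/phones', { templateUrl: 'partials/phone-list.html' })
  .when('/phones/:phoneId', { templateUrl: 'partials/phone-detail.html' })
  .otherwise({ redirectTo: '/phones' });
```

The same return-`this` convention is why the real declarations give `when` and `otherwise` a return type of the provider class itself.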
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,954 |
BlackBerry May Lay Off 40% of Staff This Year
By Seth Fiegerman 2013-09-18 18:29:04 UTC
Just hours after BlackBerry unveiled a new smartphone, more bad news has emerged about the company.
BlackBerry is said to be planning to lay off as much as 40% of its staff by the end of the year to cut costs and better position itself to compete in the smartphone market, according to a report in The Wall Street Journal. The company had more than 12,000 employees by last count, meaning that as many as 5,000 workers may be in danger of losing their jobs in the next couple of months.
BlackBerry declined to comment on the report. "We will not comment on rumors and speculation," a rep for the company said in a statement provided to Mashable. "As previously stated, we are in the second phase of our transformation plan. Organizational moves will continue to occur to ensure we have the right people in the right roles to drive new opportunities in mobile computing."
See also: Who Would Buy BlackBerry?
Earlier this summer, BlackBerry confirmed laying off 250 employees and reports suggested that more layoffs might be coming. The company also laid off 5,000 employees during the 2012 fiscal year.
The smartphone maker has struggled to compete against businesses like Apple and Samsung in recent years. BlackBerry is now in the process of looking for a potential buyer, with the possibility that it may be sold for parts or spin out certain divisions like BBM into new businesses.
BlackBerry stock was down more than 2% on the day following the report.
Image: Ethan Miller/Getty
Topics: BlackBerry, Business | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 6,377 |
{"url":"https:\/\/chemistry.stackexchange.com\/questions\/85389\/when-an-iodide-iodate-solution-is-acidified-with-h2so4-instead-of-thiosulfate-w","text":"# When an iodide\/iodate solution is acidified with H2SO4 instead of thiosulfate, why should it be titrated immediately?\n\nMy class is currently performing a reaction to determine the oxidizing power (i.e. the amount of hypochlorite) of household bleach. The general process is as follows:\n\nStandardize thiosulfate to be used as a titrant of $\\ce{I2}$ solution.\n\n\u2022 An excess of I- ions are allowed to react with a known amount of $\\ce{KIO3}$ in acidic solution.\n\u2022 The iodine formed from this reaction is titrated with thiosulfate using starch as an indicator of triiodide anion.\n\u2022 The relation between moles $\\ce{KIO3}$ and thiosulfate is used to determine molarity of thiosulfate.\n\nAnalysis of oxidizing capacity of liquid bleach\n\n\u2022 Excess iodide ion solution added to bleach.\n\n\u2022 Iodide ions are oxidized to I2 after this solution is acidified simultaneously with the reduction of hypochlorite.\n\n\u2022 The iodine that is formed is then titrated with the above standardized thiosulfate solution.\n\n\u2022 The indicator (starch) is not added until the dark-brownish color of the iodine has changed to pale yellow. 
When it is added, the solution color changes to blue black.\n\n\u2022 The end point of titration is indicated when the solution becomes colorless.\n\n\u2022 (Note that if the starch is added too soon in the titration, the formation of the blue-black complex is not easily reversed, making the end point very slow and difficult to detect.\n\nPertinent reactions are given below\n\nReactions Pertaining to Titration\n\n$\\ce{HOCl + 2I- + H+ -> I2 + Cl- + H2O}$\n\n$\\ce{2S2O3^-- + I2 -> 2I- + S4O6^--}$\n\n$\\ce{HOCl + 2S2O3^-- + H+ -> Cl- + S4O6^-- + H2O}$ [the addition of the above two reactions]\n\n$\\ce{I3- + starch <-> starch*I3-(complex)}$ [blue-black in solution]\n\nReactions pertaining to the standardization of thiosulfate\n\n$\\ce{IO3- + 5I- +6H+ -> 3I2 + 3H2O}$\n\n$\\ce{3I2 + S2O3^-- -> 6I- + 3S4O6^--}$\n\n## The question\n\nWe were told that, during the standardization process, if the iodate-iodide solution is acidified with $\\ce{H2SO4}$, it should be titrated immediately. My question is why? Using thiosulfate, this is not the case, correct? Why does this change when sulfuric acid is used in its stead?\n\nAfter acidification with $\\ce{H2SO4}$, starch starts to hydrolyze to its component sugars. 
Lugol's iodine does not react with those sugars to form the blue complex.\nFor what we thought was $\\ce{H2O}$\nWas $\\ce{H2SO4}$.","date":"2020-02-20 21:32:04","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8013545870780945, \"perplexity\": 4089.334395472576}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-10\/segments\/1581875145282.57\/warc\/CC-MAIN-20200220193228-20200220223228-00105.warc.gz\"}"} | null | null |
This week's guests include members of the St. Cloud State softball and baseball teams. Both teams already have gotten their seasons underway.
At noon Wednesday, the weekly SCSU Sports Chat takes place.
The SCSU softball team is 3-1 after playing four games last weekend at the Husky Dome. SCSU beat Nebraska-Kearney (9-1), lost to Missouri Western State (9-8) and swept Wisconsin-Parkside (9-1 and 8-0) to open the season. This is Paula U'Ren's 22nd season as the Huskies' head coach.
Also on the SCSU Sports Chat is the St. Cloud State baseball team. The Huskies are 3-0 after beating No. 6 Texas-Kingsville (7-2), Tarleton State (9-1) and No. 20 Central Missouri (7-3) at the 2019 Houston Winter Invitational at Minute Maid Park in Houston, Texas. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,874 |
{"url":"http:\/\/mathhelpforum.com\/advanced-applied-math\/217987-rockets-fuel-expulsion-print.html","text":"# Rockets and fuel expulsion\n\n\u2022 Apr 22nd 2013, 07:05 AM\ncarla1985\nRockets and fuel expulsion\nReally struggling with this one. In class we used velocity not speed for starters and had $m$ and $m+\\delta t$. I cant figure out how to relate it to this question at all. If someone could point me in the right direction that would be great.\n\nA car is propelled by a rocket engine along a smooth straight horizontalroad. Initially, the car is at rest and has mass M0. The spent fuel is expelledwith a constant speed c relative to the car. Show that the kinetic energy ofthe car when all the fuel has been used up is\n\n$\\frac{1}{2}M_1c^2(ln(\\frac{M_0}{M_1}))^2$\n\nWhat proportion of the initial mass M0 should the initial mass of fuel be inorder to maximise the kinetic energy of the car when all the full has been usedup?\n\n[Hint for last part: note that the kinetic energy will be zero if either M1 = 0 orM1 = M0, so to find the value of M1 for which the kinetic energy is maximisedas M1 varies between 0 and M0, you need to differentiate the kinetic energywith respect to M1]\n\u2022 Apr 22nd 2013, 11:40 AM\nebaines\nRe: Rockets and fuel expulsion\nYou can use impulse and momentum principles:\n\n$m \\frac {dv}{dt} = -C \\frac {dm}{dt}$\n\nRearrange and integrate:\n\n$-C \\frac {dm } m = dv$\n\n$-C \\ln (m) = v$\n\n$\\Delta V = C \\ln (\\frac {M_0}{M_1} )$ where $M_0$ = initial total mass of ship plus fuel, and $M_1$ = final mass of ship after all fuel is spent.\n\nNow that you have a value for $\\Delta V$, you can calculate its final kinetic energy.\n\nTo answer the second part it would be best to introduce a variable for the ratio of the ship's mass to the ship + fuel mass. 
Let $k = \\frac {M_1}{M_0}$, and the KE equation becomes\n\n$KE = \\frac 1 2 k M_0 C^2 (\\ln (\\frac 1 k))^2$\n\nNow find the value of k that maximizes KE by setting $\\frac {KE}{dk} = 0$.\n\u2022 Apr 22nd 2013, 12:05 PM\ncarla1985\nRe: Rockets and fuel expulsion\nQuote:\n\nOriginally Posted by ebaines\nYou can use impulse and momentum principles:\n\n$m \\frac {dv}{dt} = -C \\frac {dm}{dt}$\n\nRearrange and integrate:\n\n$-C \\frac {dm } m = dv$\n\n$-C \\ln (m) = v$\n\n$\\Delta V = C \\ln (\\frac {M_0}{M_1}$ where [tex] M_0 {\/tex] = initial total mass of ship plus fuel, and $M_1$ = final mass of ship after all fuel is spent.\n\nNow that you have a value for $\\Delta V$, you can calculate its final kinetic energy.\n\nTo answer the second part it would be best to introduce a variable for the ratio of the ship's mass to the ship + fuel mass. Let $k = \\frac {M_1}{M_0}$, and the KE equation becomes\n\n$KE = \\frac 1 2 k M_0 C^2 (\\ln (\\frac 1 k))^2$\n\nNow find the value of k that maximizes KE by setting $\\frac {KE}{dk} = 0$.\n\nHi, thank you very much for the reply. I have completed the first part and understand the secong (although I didnt think of a substitution) but I'm struggling to differentiate the equation with respect to m1\n\u2022 Apr 22nd 2013, 12:58 PM\nebaines\nRe: Rockets and fuel expulsion\nIf you treat $M_1$ as a variable and $M_0$ as a constant, then you can proceed as follows:\n\n$KE = \\frac {C^2} 2 M_1 (\\ln(\\frac {M_0}{M_1}))^2$\n\nNow recall that the derivarive of $f(x)g(x) = f(x)g'(x) + f'(x)g(x)$, and also that the dierivative of $\\ln(\\frac A x) = \\frac {-1} x$. So this becomes:\n\n$\\frac {d(KE)}{dM_1} = \\frac {C^2} 2 M_1 2 (\\ln(\\frac {M_0}{M_1})) (\\frac {-1}{M_1}) + \\frac {C^2} 2 (\\ln(\\frac {M_0}{M_1}))^2$\n\nSet this equal to zero and solve for $M_1$. 
What do you get?\n\u2022 Apr 22nd 2013, 01:35 PM\ncarla1985\nRe: Rockets and fuel expulsion\nIget $M_1$ to be $M_0$ or $\\frac{M_0}{e^2}$ does that mean the energy is max when its at $\\frac{M_0}{e^2}$?\n\u2022 Apr 23rd 2013, 05:07 AM\nebaines\nRe: Rockets and fuel expulsion\nYes, that's correct. And so the proportion of M_0 that should be fuel is $\\frac {(M_0-M_1)}{M_0} = 1-\\frac 1 {e^2}$.","date":"2016-10-25 03:17:51","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 31, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.848989725112915, \"perplexity\": 674.730561040795}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2016-44\/segments\/1476988719877.27\/warc\/CC-MAIN-20161020183839-00552-ip-10-171-6-4.ec2.internal.warc.gz\"}"} | null | null |
Christianity is a major world religion based on the life and teachings of Jesus the Christ. Christianity teaches both propositional and relational concepts.
Christianity is a religion based on the life and teachings of Jesus the Christ. Christianity teaches doctrines that can be affirmed or denied (propositions) and is also a plan for reconciliation between God and people (relational).
Jesus of Nazareth is the Jewish Messiah or Christ (Greek: Christos / Χριστός). He fulfilled Old Testament laws and prophecies, died on a Roman cross in 1st-century Palestine, and was resurrected on the third day. Over 500 witnesses saw Jesus after his resurrection, and then he ascended into heaven (1 Corinthians 15:3-6).
In this post, I will explore Christianity as one of the world's major religions. It consists of both propositional and relational aspects. Christianity teaches doctrines that can be affirmed or denied (propositions) and is also a plan for reconciliation between God and people (relational).
Let's review the relational part first.
"I suppose there is an element in religion of telling people how to live, but it isn't men telling people how to live. It is God telling people how to get right with him. Christianity is not first and foremost a religion. It is first and foremost news. It's news.
It's like we're in a war, in a concentration camp, and suddenly you're hearing on the smuggled-in radio that the troops of deliverance have landed in helicopters five miles away. They're conquering everything in their path and they're just about to get to the gate and open the doors. And having lived all your life in this concentration camp, you're now going to be set free.
Piper is right. Christianity may contain a series of moral pronouncements, but that's not the core message. The essential element of Christianity is good news or "gospel". It is the message that God the Father planned when "Christ Jesus came into the world to save sinners" (1 Timothy 1:15).
Reconciliation between God and people is the relational part of Christianity.
However, the Christian faith is also propositional. In other words, specific beliefs can be affirmed or denied. Sets of beliefs are often summarized in creeds throughout church history and more recently in statements of faith. The goal of those statements and creeds is to summarize the Bible's teachings on particular subjects.
As a Reformed Baptist, I would affirm all of these lines with a couple reservations. For one, it is uncertain if Christ "descended into hell" after the crucifixion. Jesus certainly died, but where he went before the resurrection is debatable in the Scriptures. Secondly, the "holy catholic church" is not to be understood as the Roman Catholic Church. Protestants and Catholics understand that line differently. Aside from those qualifications, the Apostles Creed stands as an excellent overview of basic Christian doctrine which is the propositional part of Christianity.
Christianity is a major world religion. But as for how many Christians there are, there are two ways to answer that question.
For one, there is a statistical answer based on population research and the most liberal definitions of "Christian". According to the International Business Times, quoting Pew Research, in 2010 there were 2.2 billion professing Christians around the world. That was roughly ⅓ of the total population on the globe.
But these are the most generous estimates and also count 78% of the US population as Christian. This figure is hard to reconcile with the fact that only 43% attended church on a weekly basis in the same year (Gallup). So the statistical answer of how many Christians there are is what theologians refer to as the "visible church".
The second way to answer the question is to ask how many actual Christians there are, that is, how many people are genuine followers of Jesus. Jesus himself said, "Not everyone who says to me, 'Lord, Lord,' will enter the kingdom of heaven, but the one who does the will of my Father who is in heaven" (Matthew 7:21).
The Scriptures warn us that many who say they are Christians are not actually Christians. They have not been reconciled to God, but somehow think they are fine by virtue of their family heritage, country of origin, or other factors. But they are wrong. How many actual Christians there are is what theologians refer to as the "invisible church". Only God knows that number (2 Timothy 2:19).
Matt Slick (CARM.org) on What is Christianity?
Jason Malec (ExploreGod.com) on What is Christianity?
Todd Clay is a husband, dad, and a Christian (Reformed Baptist). He enjoys researching about everyday, complex, and sometimes obscure theological issues in every field of knowledge and tries to make things easy to understand. He is married and has 4 children, one of whom (Knox) is now with the Lord. Todd holds a BA in history from the University of Texas at Austin and an MA in Theological Studies from Southern Seminary in Louisville, Kentucky. | {
"redpajama_set_name": "RedPajamaC4"
} | 1,864 |
/*!
* Parsley.js
* Version 2.8.0 - built Wed, Sep 13th 2017, 11:04 pm
* http://parsleyjs.org
* Guillaume Potier - <guillaume@wisembly.com>
* Marc-Andre Lafortune - <petroselinum@marc-andre.ca>
* MIT Licensed
*/
// The source code below is generated by babel as
// Parsley is written in ECMAScript 6
//
var _slice = Array.prototype.slice;
var _slicedToArray = (function () { function sliceIterator(arr, i) { var _arr = []; var _n = true; var _d = false; var _e = undefined; try { for (var _i = arr[Symbol.iterator](), _s; !(_n = (_s = _i.next()).done); _n = true) { _arr.push(_s.value); if (i && _arr.length === i) break; } } catch (err) { _d = true; _e = err; } finally { try { if (!_n && _i['return']) _i['return'](); } finally { if (_d) throw _e; } } return _arr; } return function (arr, i) { if (Array.isArray(arr)) { return arr; } else if (Symbol.iterator in Object(arr)) { return sliceIterator(arr, i); } else { throw new TypeError('Invalid attempt to destructure non-iterable instance'); } }; })();
var _extends = Object.assign || function (target) { for (var i = 1; i < arguments.length; i++) { var source = arguments[i]; for (var key in source) { if (Object.prototype.hasOwnProperty.call(source, key)) { target[key] = source[key]; } } } return target; };
function _toConsumableArray(arr) { if (Array.isArray(arr)) { for (var i = 0, arr2 = Array(arr.length); i < arr.length; i++) arr2[i] = arr[i]; return arr2; } else { return Array.from(arr); } }
(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory(require('jquery')) : typeof define === 'function' && define.amd ? define(['jquery'], factory) : global.parsley = factory(global.jQuery);
})(this, function ($) {
'use strict';
var globalID = 1;
var pastWarnings = {};
var Utils = {
// Parsley DOM-API
// returns object from dom attributes and values
attr: function attr(element, namespace, obj) {
var i;
var attribute;
var attributes;
var regex = new RegExp('^' + namespace, 'i');
if ('undefined' === typeof obj) obj = {};else {
// Clear all own properties. This won't affect prototype's values
for (i in obj) {
if (obj.hasOwnProperty(i)) delete obj[i];
}
}
if (!element) return obj;
attributes = element.attributes;
for (i = attributes.length; i--;) {
attribute = attributes[i];
if (attribute && attribute.specified && regex.test(attribute.name)) {
obj[this.camelize(attribute.name.slice(namespace.length))] = this.deserializeValue(attribute.value);
}
}
return obj;
},
checkAttr: function checkAttr(element, namespace, _checkAttr) {
return element.hasAttribute(namespace + _checkAttr);
},
setAttr: function setAttr(element, namespace, attr, value) {
element.setAttribute(this.dasherize(namespace + attr), String(value));
},
getType: function getType(element) {
return element.getAttribute('type') || 'text';
},
generateID: function generateID() {
return '' + globalID++;
},
/** Third party functions **/
deserializeValue: function deserializeValue(value) {
var num;
try {
return value ? value == "true" || (value == "false" ? false : value == "null" ? null : !isNaN(num = Number(value)) ? num : /^[\[\{]/.test(value) ? JSON.parse(value) : value) : value;
} catch (e) {
return value;
}
},
// Zepto camelize function
camelize: function camelize(str) {
return str.replace(/-+(.)?/g, function (match, chr) {
return chr ? chr.toUpperCase() : '';
});
},
// Zepto dasherize function
dasherize: function dasherize(str) {
return str.replace(/::/g, '/').replace(/([A-Z]+)([A-Z][a-z])/g, '$1_$2').replace(/([a-z\d])([A-Z])/g, '$1_$2').replace(/_/g, '-').toLowerCase();
},
warn: function warn() {
var _window$console;
if (window.console && 'function' === typeof window.console.warn) (_window$console = window.console).warn.apply(_window$console, arguments);
},
warnOnce: function warnOnce(msg) {
if (!pastWarnings[msg]) {
pastWarnings[msg] = true;
this.warn.apply(this, arguments);
}
},
_resetWarnings: function _resetWarnings() {
pastWarnings = {};
},
trimString: function trimString(string) {
return string.replace(/^\s+|\s+$/g, '');
},
parse: {
date: function date(string) {
var parsed = string.match(/^(\d{4,})-(\d\d)-(\d\d)$/);
if (!parsed) return null;
var _parsed$map = parsed.map(function (x) {
return parseInt(x, 10);
});
var _parsed$map2 = _slicedToArray(_parsed$map, 4);
var _ = _parsed$map2[0];
var year = _parsed$map2[1];
var month = _parsed$map2[2];
var day = _parsed$map2[3];
var date = new Date(year, month - 1, day);
if (date.getFullYear() !== year || date.getMonth() + 1 !== month || date.getDate() !== day) return null;
return date;
},
string: function string(_string) {
return _string;
},
integer: function integer(string) {
if (isNaN(string)) return null;
return parseInt(string, 10);
},
number: function number(string) {
if (isNaN(string)) return null;
return parseFloat(string);
},
'boolean': function _boolean(string) {
return !/^\s*false\s*$/i.test(string);
},
object: function object(string) {
return Utils.deserializeValue(string);
},
regexp: function regexp(_regexp) {
var flags = '';
// Test if RegExp is literal, if not, nothing to be done, otherwise, we need to isolate flags and pattern
if (/^\/.*\/(?:[gimy]*)$/.test(_regexp)) {
// Replace the regexp literal string with the first match group: ([gimy]*)
// If no flag is present, this will be a blank string
flags = _regexp.replace(/.*\/([gimy]*)$/, '$1');
// Again, replace the regexp literal string with the first match group:
// everything excluding the opening and closing slashes and the flags
_regexp = _regexp.replace(new RegExp('^/(.*?)/' + flags + '$'), '$1');
} else {
// Anchor regexp:
_regexp = '^' + _regexp + '$';
}
return new RegExp(_regexp, flags);
}
},
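// Illustrative examples (not part of the original source) of the parse helpers:
//   Utils.parse.date('2024-02-29')  // => Date object (valid leap day)
//   Utils.parse.date('2024-02-30')  // => null (rejected by the round-trip check)
//   Utils.parse.regexp('/\\d+/i')   // => /\d+/i (literal form: flags preserved)
//   Utils.parse.regexp('\\d+')      // => /^\d+$/ (bare pattern gets anchored)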
parseRequirement: function parseRequirement(requirementType, string) {
var converter = this.parse[requirementType || 'string'];
if (!converter) throw 'Unknown requirement specification: "' + requirementType + '"';
var converted = converter(string);
if (converted === null) throw 'Requirement is not a ' + requirementType + ': "' + string + '"';
return converted;
},
namespaceEvents: function namespaceEvents(events, namespace) {
events = this.trimString(events || '').split(/\s+/);
if (!events[0]) return '';
return $.map(events, function (evt) {
return evt + '.' + namespace;
}).join(' ');
},
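// Illustrative example (not part of the original source):
//   Utils.namespaceEvents('focus blur', 'Parsley')  // => 'focus.Parsley blur.Parsley'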
difference: function difference(array, remove) {
// This is O(N^2), should be optimized
var result = [];
$.each(array, function (_, elem) {
if (remove.indexOf(elem) == -1) result.push(elem);
});
return result;
},
// Alter-ego to native Promise.all, but for jQuery
all: function all(promises) {
// jQuery treats $.when() and $.when(singlePromise) differently; let's avoid that and add spurious elements
return $.when.apply($, _toConsumableArray(promises).concat([42, 42]));
},
// Object.create polyfill, see https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/create#Polyfill
objectCreate: Object.create || (function () {
var Object = function Object() {};
return function (prototype) {
if (arguments.length > 1) {
throw Error('Second argument not supported');
}
if (typeof prototype != 'object') {
throw TypeError('Argument must be an object');
}
Object.prototype = prototype;
var result = new Object();
Object.prototype = null;
return result;
};
})(),
_SubmitSelector: 'input[type="submit"], button:submit'
};
// All these options could be overridden and specified directly in DOM using
// `data-parsley-` default DOM-API
// eg: `inputs` can be set in DOM using `data-parsley-inputs="input, textarea"`
// eg: `data-parsley-stop-on-first-failing-constraint="false"`
var Defaults = {
// ### General
// Default data-namespace for DOM API
namespace: 'data-parsley-',
// Supported inputs by default
inputs: 'input, textarea, select',
// Excluded inputs by default
excluded: 'input[type=button], input[type=submit], input[type=reset], input[type=hidden]',
// Stop validating field on highest priority failing constraint
priorityEnabled: true,
// ### Field only
// identifier used to group together inputs (e.g. radio buttons...)
multiple: null,
// identifier (or array of identifiers) used to validate only a select group of inputs
group: null,
// ### UI
// Enable/disable error messages
uiEnabled: true,
// Key events threshold before validation
validationThreshold: 3,
// Focused field on form validation error. 'first'|'last'|'none'
focus: 'first',
// event(s) that will trigger validation before first failure. eg: `input`...
trigger: false,
// event(s) that will trigger validation after first failure.
triggerAfterFailure: 'input',
// Class that would be added on every failing validation Parsley field
errorClass: 'parsley-error',
// Same for success validation
successClass: 'parsley-success',
// Return the `$element` that will receive these above success or error classes
// Could also be (and given directly from DOM) a valid selector like `'#div'`
classHandler: function classHandler(Field) {},
// Return the `$element` where errors will be appended
// Could also be (and given directly from DOM) a valid selector like `'#div'`
errorsContainer: function errorsContainer(Field) {},
// ul elem that would receive errors' list
errorsWrapper: '<ul class="parsley-errors-list"></ul>',
// li elem that would receive error message
errorTemplate: '<li></li>'
};
var Base = function Base() {
this.__id__ = Utils.generateID();
};
Base.prototype = {
asyncSupport: true, // Deprecated
_pipeAccordingToValidationResult: function _pipeAccordingToValidationResult() {
var _this = this;
var pipe = function pipe() {
var r = $.Deferred();
if (true !== _this.validationResult) r.reject();
return r.resolve().promise();
};
return [pipe, pipe];
},
actualizeOptions: function actualizeOptions() {
Utils.attr(this.element, this.options.namespace, this.domOptions);
if (this.parent && this.parent.actualizeOptions) this.parent.actualizeOptions();
return this;
},
_resetOptions: function _resetOptions(initOptions) {
this.domOptions = Utils.objectCreate(this.parent.options);
this.options = Utils.objectCreate(this.domOptions);
// Shallow copy of ownProperties of initOptions:
for (var i in initOptions) {
if (initOptions.hasOwnProperty(i)) this.options[i] = initOptions[i];
}
this.actualizeOptions();
},
_listeners: null,
// Register a callback for the given event name
// The callback is called with the context as both the first argument and `this`
// The context is the current parsley instance, or window.Parsley if global
// A return value of `false` will interrupt the calls
on: function on(name, fn) {
this._listeners = this._listeners || {};
var queue = this._listeners[name] = this._listeners[name] || [];
queue.push(fn);
return this;
},
// Deprecated. Use `on` instead
subscribe: function subscribe(name, fn) {
$.listenTo(this, name.toLowerCase(), fn);
},
// Unregister a callback (or all if none is given) for the given event name
off: function off(name, fn) {
var queue = this._listeners && this._listeners[name];
if (queue) {
if (!fn) {
delete this._listeners[name];
} else {
for (var i = queue.length; i--;) if (queue[i] === fn) queue.splice(i, 1);
}
}
return this;
},
// Deprecated. Use `off`
unsubscribe: function unsubscribe(name, fn) {
$.unsubscribeTo(this, name.toLowerCase());
},
// Trigger an event of the given name
// A return value of `false` interrupts the callback chain
// Returns false if execution was interrupted
trigger: function trigger(name, target, extraArg) {
target = target || this;
var queue = this._listeners && this._listeners[name];
var result;
var parentResult;
if (queue) {
for (var i = queue.length; i--;) {
result = queue[i].call(target, target, extraArg);
if (result === false) return result;
}
}
if (this.parent) {
return this.parent.trigger(name, target, extraArg);
}
return true;
},
asyncIsValid: function asyncIsValid(group, force) {
Utils.warnOnce("asyncIsValid is deprecated; please use whenValid instead");
return this.whenValid({ group: group, force: force });
},
_findRelated: function _findRelated() {
return this.options.multiple ? $(this.parent.element.querySelectorAll('[' + this.options.namespace + 'multiple="' + this.options.multiple + '"]')) : this.$element;
}
};
var convertArrayRequirement = function convertArrayRequirement(string, length) {
var m = string.match(/^\s*\[(.*)\]\s*$/);
if (!m) throw 'Requirement is not an array: "' + string + '"';
var values = m[1].split(',').map(Utils.trimString);
if (values.length !== length) throw 'Requirement has ' + values.length + ' values when ' + length + ' are needed';
return values;
};
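// Illustrative example (not part of the original source):
//   convertArrayRequirement('[4, 20]', 2)  // => ['4', '20'] (still strings; parsed later)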
var convertExtraOptionRequirement = function convertExtraOptionRequirement(requirementSpec, string, extraOptionReader) {
var main = null;
var extra = {};
for (var key in requirementSpec) {
if (key) {
var value = extraOptionReader(key);
if ('string' === typeof value) value = Utils.parseRequirement(requirementSpec[key], value);
extra[key] = value;
} else {
main = Utils.parseRequirement(requirementSpec[key], string);
}
}
return [main, extra];
};
// A Validator needs to implement the methods `validate` and `parseRequirements`
var Validator = function Validator(spec) {
$.extend(true, this, spec);
};
Validator.prototype = {
// Returns `true` iff the given `value` is valid according to the given requirements.
validate: function validate(value, requirementFirstArg) {
if (this.fn) {
// Legacy style validator
if (arguments.length > 3) // If more args than value, requirement, instance...
requirementFirstArg = [].slice.call(arguments, 1, -1); // Skip first arg (value) and last (instance), combining the rest
return this.fn(value, requirementFirstArg);
}
if (Array.isArray(value)) {
if (!this.validateMultiple) throw 'Validator `' + this.name + '` does not handle multiple values';
return this.validateMultiple.apply(this, arguments);
} else {
var instance = arguments[arguments.length - 1];
if (this.validateDate && instance._isDateInput()) {
arguments[0] = Utils.parse.date(arguments[0]);
if (arguments[0] === null) return false;
return this.validateDate.apply(this, arguments);
}
if (this.validateNumber) {
if (isNaN(value)) return false;
arguments[0] = parseFloat(arguments[0]);
return this.validateNumber.apply(this, arguments);
}
if (this.validateString) {
return this.validateString.apply(this, arguments);
}
throw 'Validator `' + this.name + '` only handles multiple values';
}
},
// Parses `requirements` into an array of arguments,
// according to `this.requirementType`
parseRequirements: function parseRequirements(requirements, extraOptionReader) {
if ('string' !== typeof requirements) {
// Assume requirement already parsed
// but make sure we return an array
return Array.isArray(requirements) ? requirements : [requirements];
}
var type = this.requirementType;
if (Array.isArray(type)) {
var values = convertArrayRequirement(requirements, type.length);
for (var i = 0; i < values.length; i++) values[i] = Utils.parseRequirement(type[i], values[i]);
return values;
} else if ($.isPlainObject(type)) {
return convertExtraOptionRequirement(type, requirements, extraOptionReader);
} else {
return [Utils.parseRequirement(type, requirements)];
}
},
// Defaults:
requirementType: 'string',
priority: 2
};
var ValidatorRegistry = function ValidatorRegistry(validators, catalog) {
this.__class__ = 'ValidatorRegistry';
// Default Parsley locale is en
this.locale = 'en';
this.init(validators || {}, catalog || {});
};
var typeTesters = {
email: /^((([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+(\.([a-z]|\d|[!#\$%&'\*\+\-\/=\?\^_`{\|}~]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])+)*)|((\x22)((((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(([\x01-\x08\x0b\x0c\x0e-\x1f\x7f]|\x21|[\x23-\x5b]|[\x5d-\x7e]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(\\([\x01-\x09\x0b\x0c\x0d-\x7f]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF]))))*(((\x20|\x09)*(\x0d\x0a))?(\x20|\x09)+)?(\x22)))@((([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|\d|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))\.)+(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])|(([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])([a-z]|\d|-|_|~|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])*([a-z]|[\u00A0-\uD7FF\uF900-\uFDCF\uFDF0-\uFFEF])))$/i,
// Follow https://www.w3.org/TR/html5/infrastructure.html#floating-point-numbers
number: /^-?(\d*\.)?\d+(e[-+]?\d+)?$/i,
integer: /^-?\d+$/,
digits: /^\d+$/,
alphanum: /^\w+$/i,
date: {
test: function test(value) {
return Utils.parse.date(value) !== null;
}
},
url: new RegExp("^" +
// protocol identifier
"(?:(?:https?|ftp)://)?" + // ** mod: make scheme optional
// user:pass authentication
"(?:\\S+(?::\\S*)?@)?" + "(?:" +
// IP address exclusion
// private & local networks
// "(?!(?:10|127)(?:\\.\\d{1,3}){3})" + // ** mod: allow local networks
// "(?!(?:169\\.254|192\\.168)(?:\\.\\d{1,3}){2})" + // ** mod: allow local networks
// "(?!172\\.(?:1[6-9]|2\\d|3[0-1])(?:\\.\\d{1,3}){2})" + // ** mod: allow local networks
// IP address dotted notation octets
// excludes loopback network 0.0.0.0
// excludes reserved space >= 224.0.0.0
// excludes network & broadcast addresses
// (first & last IP address of each class)
"(?:[1-9]\\d?|1\\d\\d|2[01]\\d|22[0-3])" + "(?:\\.(?:1?\\d{1,2}|2[0-4]\\d|25[0-5])){2}" + "(?:\\.(?:[1-9]\\d?|1\\d\\d|2[0-4]\\d|25[0-4]))" + "|" +
// host name
'(?:(?:[a-z\\u00a1-\\uffff0-9]-*)*[a-z\\u00a1-\\uffff0-9]+)' +
// domain name
'(?:\\.(?:[a-z\\u00a1-\\uffff0-9]-*)*[a-z\\u00a1-\\uffff0-9]+)*' +
// TLD identifier
'(?:\\.(?:[a-z\\u00a1-\\uffff]{2,}))' + ")" +
// port number
"(?::\\d{2,5})?" +
// resource path
"(?:/\\S*)?" + "$", 'i')
};
typeTesters.range = typeTesters.number;
// See http://stackoverflow.com/a/10454560/8279
var decimalPlaces = function decimalPlaces(num) {
var match = ('' + num).match(/(?:\.(\d+))?(?:[eE]([+-]?\d+))?$/);
if (!match) {
return 0;
}
return Math.max(0,
// Number of digits right of decimal point.
(match[1] ? match[1].length : 0) - (
// Adjust for scientific notation.
match[2] ? +match[2] : 0));
};
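// Illustrative examples (not part of the original source):
//   decimalPlaces(0.05)    // => 2
//   decimalPlaces('1e-3')  // => 3 (the exponent shifts the decimal point)
//   decimalPlaces(100)     // => 0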
// parseArguments('number', ['1', '2']) => [1, 2]
var ValidatorRegistry__parseArguments = function ValidatorRegistry__parseArguments(type, args) {
return args.map(Utils.parse[type]);
};
// operatorToValidator returns a validating function for an operator function, applied to the given type
var ValidatorRegistry__operatorToValidator = function ValidatorRegistry__operatorToValidator(type, operator) {
return function (value) {
for (var _len = arguments.length, requirementsAndInput = Array(_len > 1 ? _len - 1 : 0), _key = 1; _key < _len; _key++) {
requirementsAndInput[_key - 1] = arguments[_key];
}
requirementsAndInput.pop(); // Get rid of `input` argument
return operator.apply(undefined, [value].concat(_toConsumableArray(ValidatorRegistry__parseArguments(type, requirementsAndInput))));
};
};
var ValidatorRegistry__comparisonOperator = function ValidatorRegistry__comparisonOperator(operator) {
return {
validateDate: ValidatorRegistry__operatorToValidator('date', operator),
validateNumber: ValidatorRegistry__operatorToValidator('number', operator),
requirementType: operator.length <= 2 ? 'string' : ['string', 'string'], // Support operators with 1 or 2 requirements
priority: 30
};
};
ValidatorRegistry.prototype = {
init: function init(validators, catalog) {
this.catalog = catalog;
// Copy prototype's validators:
this.validators = _extends({}, this.validators);
for (var name in validators) this.addValidator(name, validators[name].fn, validators[name].priority);
window.Parsley.trigger('parsley:validator:init');
},
// Set new messages locale if we have dictionary loaded in ParsleyConfig.i18n
setLocale: function setLocale(locale) {
if ('undefined' === typeof this.catalog[locale]) throw new Error(locale + ' is not available in the catalog');
this.locale = locale;
return this;
},
// Add a new messages catalog for a given locale. Set locale for this catalog if set === `true`
addCatalog: function addCatalog(locale, messages, set) {
if ('object' === typeof messages) this.catalog[locale] = messages;
if (true === set) return this.setLocale(locale);
return this;
},
// Add a specific message for a given constraint in a given locale
addMessage: function addMessage(locale, name, message) {
if ('undefined' === typeof this.catalog[locale]) this.catalog[locale] = {};
this.catalog[locale][name] = message;
return this;
},
// Add messages for a given locale
addMessages: function addMessages(locale, nameMessageObject) {
for (var name in nameMessageObject) this.addMessage(locale, name, nameMessageObject[name]);
return this;
},
// Add a new validator
//
// addValidator('custom', {
// requirementType: ['integer', 'integer'],
// validateString: function(value, from, to) {},
// priority: 22,
// messages: {
// en: "Hey, that's no good",
// fr: "Aye aye, pas bon du tout",
// }
// })
//
// Old API was addValidator(name, function, priority)
//
addValidator: function addValidator(name, arg1, arg2) {
if (this.validators[name]) Utils.warn('Validator "' + name + '" is already defined.');else if (Defaults.hasOwnProperty(name)) {
Utils.warn('"' + name + '" is a restricted keyword and is not a valid validator name.');
return;
}
return this._setValidator.apply(this, arguments);
},
hasValidator: function hasValidator(name) {
return !!this.validators[name];
},
updateValidator: function updateValidator(name, arg1, arg2) {
if (!this.validators[name]) {
Utils.warn('Validator "' + name + '" is not defined yet.');
return this.addValidator.apply(this, arguments);
}
return this._setValidator.apply(this, arguments);
},
removeValidator: function removeValidator(name) {
if (!this.validators[name]) Utils.warn('Validator "' + name + '" is not defined.');
delete this.validators[name];
return this;
},
_setValidator: function _setValidator(name, validator, priority) {
if ('object' !== typeof validator) {
// Old style validator, with `fn` and `priority`
validator = {
fn: validator,
priority: priority
};
}
if (!validator.validate) {
validator = new Validator(validator);
}
this.validators[name] = validator;
for (var locale in validator.messages || {}) this.addMessage(locale, name, validator.messages[locale]);
return this;
},
getErrorMessage: function getErrorMessage(constraint) {
var message;
// Type constraints are a bit different, we have to match their requirements too to find right error message
if ('type' === constraint.name) {
var typeMessages = this.catalog[this.locale][constraint.name] || {};
message = typeMessages[constraint.requirements];
} else message = this.formatMessage(this.catalog[this.locale][constraint.name], constraint.requirements);
return message || this.catalog[this.locale].defaultMessage || this.catalog.en.defaultMessage;
},
// Kind of light `sprintf()` implementation
formatMessage: function formatMessage(string, parameters) {
if ('object' === typeof parameters) {
for (var i in parameters) string = this.formatMessage(string, parameters[i]);
return string;
}
return 'string' === typeof string ? string.replace(/%s/i, parameters) : '';
},
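// Illustrative example (not part of the original source):
//   formatMessage('Must be between %s and %s.', [3, 7])  // => 'Must be between 3 and 7.'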
// Here is the Parsley default validators list.
// A validator is an object with the following key values:
// - priority: an integer
// - requirement: 'string' (default), 'integer', 'number', 'regexp' or an Array of these
// - validateString, validateMultiple, validateNumber: functions returning `true`, `false` or a promise
// Alternatively, a validator can be a function that returns such an object
//
validators: {
notblank: {
validateString: function validateString(value) {
return (/\S/.test(value)
);
},
priority: 2
},
required: {
validateMultiple: function validateMultiple(values) {
return values.length > 0;
},
validateString: function validateString(value) {
return (/\S/.test(value)
);
},
priority: 512
},
type: {
validateString: function validateString(value, type) {
var _ref = arguments.length <= 2 || arguments[2] === undefined ? {} : arguments[2];
var _ref$step = _ref.step;
var step = _ref$step === undefined ? 'any' : _ref$step;
var _ref$base = _ref.base;
var base = _ref$base === undefined ? 0 : _ref$base;
var tester = typeTesters[type];
if (!tester) {
throw new Error('validator type `' + type + '` is not supported');
}
if (!tester.test(value)) return false;
if ('number' === type) {
if (!/^any$/i.test(step || '')) {
var nb = Number(value);
var decimals = Math.max(decimalPlaces(step), decimalPlaces(base));
if (decimalPlaces(nb) > decimals) // Value can't have too many decimals
return false;
// Be careful of rounding errors by using integers.
var toInt = function toInt(f) {
return Math.round(f * Math.pow(10, decimals));
};
if ((toInt(nb) - toInt(base)) % toInt(step) != 0) return false;
}
}
return true;
},
requirementType: {
'': 'string',
step: 'string',
base: 'number'
},
priority: 256
},
pattern: {
validateString: function validateString(value, regexp) {
return regexp.test(value);
},
requirementType: 'regexp',
priority: 64
},
minlength: {
validateString: function validateString(value, requirement) {
return value.length >= requirement;
},
requirementType: 'integer',
priority: 30
},
maxlength: {
validateString: function validateString(value, requirement) {
return value.length <= requirement;
},
requirementType: 'integer',
priority: 30
},
length: {
validateString: function validateString(value, min, max) {
return value.length >= min && value.length <= max;
},
requirementType: ['integer', 'integer'],
priority: 30
},
mincheck: {
validateMultiple: function validateMultiple(values, requirement) {
return values.length >= requirement;
},
requirementType: 'integer',
priority: 30
},
maxcheck: {
validateMultiple: function validateMultiple(values, requirement) {
return values.length <= requirement;
},
requirementType: 'integer',
priority: 30
},
check: {
validateMultiple: function validateMultiple(values, min, max) {
return values.length >= min && values.length <= max;
},
requirementType: ['integer', 'integer'],
priority: 30
},
min: ValidatorRegistry__comparisonOperator(function (value, requirement) {
return value >= requirement;
}),
max: ValidatorRegistry__comparisonOperator(function (value, requirement) {
return value <= requirement;
}),
range: ValidatorRegistry__comparisonOperator(function (value, min, max) {
return value >= min && value <= max;
}),
equalto: {
validateString: function validateString(value, refOrValue) {
var $reference = $(refOrValue);
if ($reference.length) return value === $reference.val();else return value === refOrValue;
},
priority: 256
}
}
};
var UI = {};
var diffResults = function diffResults(newResult, oldResult, deep) {
var added = [];
var kept = [];
for (var i = 0; i < newResult.length; i++) {
var found = false;
for (var j = 0; j < oldResult.length; j++) if (newResult[i].assert.name === oldResult[j].assert.name) {
found = true;
break;
}
if (found) kept.push(newResult[i]);else added.push(newResult[i]);
}
return {
kept: kept,
added: added,
removed: !deep ? diffResults(oldResult, newResult, true).added : []
};
};
UI.Form = {
_actualizeTriggers: function _actualizeTriggers() {
var _this2 = this;
this.$element.on('submit.Parsley', function (evt) {
_this2.onSubmitValidate(evt);
});
this.$element.on('click.Parsley', Utils._SubmitSelector, function (evt) {
_this2.onSubmitButton(evt);
});
// UI could be disabled
if (false === this.options.uiEnabled) return;
this.element.setAttribute('novalidate', '');
},
focus: function focus() {
this._focusedField = null;
if (true === this.validationResult || 'none' === this.options.focus) return null;
for (var i = 0; i < this.fields.length; i++) {
var field = this.fields[i];
if (true !== field.validationResult && field.validationResult.length > 0 && 'undefined' === typeof field.options.noFocus) {
this._focusedField = field.$element;
if ('first' === this.options.focus) break;
}
}
if (null === this._focusedField) return null;
return this._focusedField.focus();
},
_destroyUI: function _destroyUI() {
// Reset all event listeners
this.$element.off('.Parsley');
}
};
UI.Field = {
_reflowUI: function _reflowUI() {
this._buildUI();
// If this field doesn't have an active UI, don't bother doing anything
if (!this._ui) return;
// Diff between two validation results
var diff = diffResults(this.validationResult, this._ui.lastValidationResult);
// Then store current validation result for next reflow
this._ui.lastValidationResult = this.validationResult;
// Handle valid / invalid / none field class
this._manageStatusClass();
// Add, remove, update error messages
this._manageErrorsMessages(diff);
// Triggers impl
this._actualizeTriggers();
// If field is not valid for the first time, bind keyup trigger to ease UX and quickly inform user
if ((diff.kept.length || diff.added.length) && !this._failedOnce) {
this._failedOnce = true;
this._actualizeTriggers();
}
},
// Returns an array of field's error message(s)
getErrorsMessages: function getErrorsMessages() {
// No error message, field is valid
if (true === this.validationResult) return [];
var messages = [];
for (var i = 0; i < this.validationResult.length; i++) messages.push(this.validationResult[i].errorMessage || this._getErrorMessage(this.validationResult[i].assert));
return messages;
},
// It's a goal of Parsley that this method is no longer required [#1073]
addError: function addError(name) {
var _ref2 = arguments.length <= 1 || arguments[1] === undefined ? {} : arguments[1];
var message = _ref2.message;
var assert = _ref2.assert;
var _ref2$updateClass = _ref2.updateClass;
var updateClass = _ref2$updateClass === undefined ? true : _ref2$updateClass;
this._buildUI();
this._addError(name, { message: message, assert: assert });
if (updateClass) this._errorClass();
},
// It's a goal of Parsley that this method is no longer required [#1073]
updateError: function updateError(name) {
var _ref3 = arguments.length <= 1 || arguments[1] === undefined ? {} : arguments[1];
var message = _ref3.message;
var assert = _ref3.assert;
var _ref3$updateClass = _ref3.updateClass;
var updateClass = _ref3$updateClass === undefined ? true : _ref3$updateClass;
this._buildUI();
this._updateError(name, { message: message, assert: assert });
if (updateClass) this._errorClass();
},
// It's a goal of Parsley that this method is no longer required [#1073]
removeError: function removeError(name) {
var _ref4 = arguments.length <= 1 || arguments[1] === undefined ? {} : arguments[1];
var _ref4$updateClass = _ref4.updateClass;
var updateClass = _ref4$updateClass === undefined ? true : _ref4$updateClass;
this._buildUI();
this._removeError(name);
// Edge case possible here: removing a standard Parsley error that is still failing in this.validationResult,
// but that's highly improbable, since manually removing an error Parsley handles correctly makes no sense.
if (updateClass) this._manageStatusClass();
},
_manageStatusClass: function _manageStatusClass() {
if (this.hasConstraints() && this.needsValidation() && true === this.validationResult) this._successClass();else if (this.validationResult.length > 0) this._errorClass();else this._resetClass();
},
_manageErrorsMessages: function _manageErrorsMessages(diff) {
if ('undefined' !== typeof this.options.errorsMessagesDisabled) return;
// Case where the errorMessage option configures a single field error message, regardless of which validators fail
if ('undefined' !== typeof this.options.errorMessage) {
if (diff.added.length || diff.kept.length) {
this._insertErrorWrapper();
if (0 === this._ui.$errorsWrapper.find('.parsley-custom-error-message').length) this._ui.$errorsWrapper.append($(this.options.errorTemplate).addClass('parsley-custom-error-message'));
return this._ui.$errorsWrapper.addClass('filled').find('.parsley-custom-error-message').html(this.options.errorMessage);
}
return this._ui.$errorsWrapper.removeClass('filled').find('.parsley-custom-error-message').remove();
}
// Show, hide, update failing constraints messages
for (var i = 0; i < diff.removed.length; i++) this._removeError(diff.removed[i].assert.name);
for (i = 0; i < diff.added.length; i++) this._addError(diff.added[i].assert.name, { message: diff.added[i].errorMessage, assert: diff.added[i].assert });
for (i = 0; i < diff.kept.length; i++) this._updateError(diff.kept[i].assert.name, { message: diff.kept[i].errorMessage, assert: diff.kept[i].assert });
},
_addError: function _addError(name, _ref5) {
var message = _ref5.message;
var assert = _ref5.assert;
this._insertErrorWrapper();
this._ui.$errorsWrapper.addClass('filled').append($(this.options.errorTemplate).addClass('parsley-' + name).html(message || this._getErrorMessage(assert)));
},
_updateError: function _updateError(name, _ref6) {
var message = _ref6.message;
var assert = _ref6.assert;
this._ui.$errorsWrapper.addClass('filled').find('.parsley-' + name).html(message || this._getErrorMessage(assert));
},
_removeError: function _removeError(name) {
this._ui.$errorsWrapper.removeClass('filled').find('.parsley-' + name).remove();
},
_getErrorMessage: function _getErrorMessage(constraint) {
var customConstraintErrorMessage = constraint.name + 'Message';
if ('undefined' !== typeof this.options[customConstraintErrorMessage]) return window.Parsley.formatMessage(this.options[customConstraintErrorMessage], constraint.requirements);
return window.Parsley.getErrorMessage(constraint);
},
_buildUI: function _buildUI() {
// UI could be already built or disabled
if (this._ui || false === this.options.uiEnabled) return;
var _ui = {};
// Give field its Parsley id in DOM
this.element.setAttribute(this.options.namespace + 'id', this.__id__);
/** Generate important UI elements and store them in this **/
// $errorClassHandler is the $element that would receive the parsley-error and parsley-success classes
_ui.$errorClassHandler = this._manageClassHandler();
// $errorsWrapper is the element (a ul by default) that will contain the field's errors; it is appended into $errorsContainer
_ui.errorsWrapperId = 'parsley-id-' + (this.options.multiple ? 'multiple-' + this.options.multiple : this.__id__);
_ui.$errorsWrapper = $(this.options.errorsWrapper).attr('id', _ui.errorsWrapperId);
// ValidationResult UI storage to detect what has changed between two validations, and update DOM accordingly
_ui.lastValidationResult = [];
_ui.validationInformationVisible = false;
// Store it in this for later
this._ui = _ui;
},
// Determine which element will have `parsley-error` and `parsley-success` classes
_manageClassHandler: function _manageClassHandler() {
// Class handler can be given directly as a valid selector string through Parsley options or DOM API
if ('string' === typeof this.options.classHandler && $(this.options.classHandler).length) return $(this.options.classHandler);
// Class handled could also be determined by function given in Parsley options
var $handlerFunction = this.options.classHandler;
// It might also be the function name of a global function
if ('string' === typeof this.options.classHandler && 'function' === typeof window[this.options.classHandler]) $handlerFunction = window[this.options.classHandler];
if ('function' === typeof $handlerFunction) {
var $handler = $handlerFunction.call(this, this);
// If this function returned a valid existing DOM element, go for it
if ('undefined' !== typeof $handler && $handler.length) return $handler;
} else if ('object' === typeof $handlerFunction && $handlerFunction instanceof jQuery && $handlerFunction.length) {
return $handlerFunction;
} else if ($handlerFunction) {
Utils.warn('The class handler `' + $handlerFunction + '` does not exist in DOM nor as a global JS function');
}
return this._inputHolder();
},
_inputHolder: function _inputHolder() {
// if simple element (input, textarea, select...) it will perfectly host the classes and precede the error container
if (!this.options.multiple || this.element.nodeName === 'SELECT') return this.$element;
// But if multiple element (radio, checkbox), that would be their parent
return this.$element.parent();
},
_insertErrorWrapper: function _insertErrorWrapper() {
var $errorsContainer = this.options.errorsContainer;
// Nothing to do if already inserted
if (0 !== this._ui.$errorsWrapper.parent().length) return this._ui.$errorsWrapper.parent();
if ('string' === typeof $errorsContainer) {
if ($($errorsContainer).length) return $($errorsContainer).append(this._ui.$errorsWrapper);else if ('function' === typeof window[$errorsContainer]) $errorsContainer = window[$errorsContainer];else Utils.warn('The errors container `' + $errorsContainer + '` does not exist in DOM nor as a global JS function');
}
if ('function' === typeof $errorsContainer) $errorsContainer = $errorsContainer.call(this, this);
if ('object' === typeof $errorsContainer && $errorsContainer.length) return $errorsContainer.append(this._ui.$errorsWrapper);
return this._inputHolder().after(this._ui.$errorsWrapper);
},
_actualizeTriggers: function _actualizeTriggers() {
var _this3 = this;
var $toBind = this._findRelated();
var trigger;
// Remove Parsley events already bound on this field
$toBind.off('.Parsley');
if (this._failedOnce) $toBind.on(Utils.namespaceEvents(this.options.triggerAfterFailure, 'Parsley'), function () {
_this3._validateIfNeeded();
});else if (trigger = Utils.namespaceEvents(this.options.trigger, 'Parsley')) {
$toBind.on(trigger, function (event) {
_this3._validateIfNeeded(event);
});
}
},
_validateIfNeeded: function _validateIfNeeded(event) {
var _this4 = this;
// For keyup, keypress, keydown, input... events that could be a little bit obtrusive
// do not validate if val length < min threshold on first validation. Once the field has been validated once and info
// about success or failure has been displayed, always validate with this trigger to reflect every validation change.
if (event && /key|input/.test(event.type)) if (!(this._ui && this._ui.validationInformationVisible) && this.getValue().length <= this.options.validationThreshold) return;
if (this.options.debounce) {
window.clearTimeout(this._debounced);
this._debounced = window.setTimeout(function () {
return _this4.validate();
}, this.options.debounce);
} else this.validate();
},
_resetUI: function _resetUI() {
// Reset all event listeners
this._failedOnce = false;
this._actualizeTriggers();
// Nothing to do if UI never initialized for this field
if ('undefined' === typeof this._ui) return;
// Reset all errors' li
this._ui.$errorsWrapper.removeClass('filled').children().remove();
// Reset validation class
this._resetClass();
// Reset validation flags and last validation result
this._ui.lastValidationResult = [];
this._ui.validationInformationVisible = false;
},
_destroyUI: function _destroyUI() {
this._resetUI();
if ('undefined' !== typeof this._ui) this._ui.$errorsWrapper.remove();
delete this._ui;
},
_successClass: function _successClass() {
this._ui.validationInformationVisible = true;
this._ui.$errorClassHandler.removeClass(this.options.errorClass).addClass(this.options.successClass);
},
_errorClass: function _errorClass() {
this._ui.validationInformationVisible = true;
this._ui.$errorClassHandler.removeClass(this.options.successClass).addClass(this.options.errorClass);
},
_resetClass: function _resetClass() {
this._ui.$errorClassHandler.removeClass(this.options.successClass).removeClass(this.options.errorClass);
}
};
var Form = function Form(element, domOptions, options) {
this.__class__ = 'Form';
this.element = element;
this.$element = $(element);
this.domOptions = domOptions;
this.options = options;
this.parent = window.Parsley;
this.fields = [];
this.validationResult = null;
};
var Form__statusMapping = { pending: null, resolved: true, rejected: false };
Form.prototype = {
onSubmitValidate: function onSubmitValidate(event) {
var _this5 = this;
// This is a Parsley generated submit event, do not validate, do not prevent, simply exit and keep normal behavior
if (true === event.parsley) return;
// If we didn't come here through a submit button, use the first one in the form
var submitSource = this._submitSource || this.$element.find(Utils._SubmitSelector)[0];
this._submitSource = null;
this.$element.find('.parsley-synthetic-submit-button').prop('disabled', true);
if (submitSource && null !== submitSource.getAttribute('formnovalidate')) return;
window.Parsley._remoteCache = {};
var promise = this.whenValidate({ event: event });
if ('resolved' === promise.state() && false !== this._trigger('submit')) {
// All good, let event go through. We make this distinction because browsers
// differ in their handling of `submit` being called from inside a submit event [#1047]
} else {
// Rejected or pending: cancel this submit
event.stopImmediatePropagation();
event.preventDefault();
if ('pending' === promise.state()) promise.done(function () {
_this5._submit(submitSource);
});
}
},
onSubmitButton: function onSubmitButton(event) {
this._submitSource = event.currentTarget;
},
// internal
// _submit submits the form, this time without going through the validations.
// Care must be taken to "fake" the actual submit button being clicked.
_submit: function _submit(submitSource) {
if (false === this._trigger('submit')) return;
// Add submit button's data
if (submitSource) {
var $synthetic = this.$element.find('.parsley-synthetic-submit-button').prop('disabled', false);
if (0 === $synthetic.length) $synthetic = $('<input class="parsley-synthetic-submit-button" type="hidden">').appendTo(this.$element);
$synthetic.attr({
name: submitSource.getAttribute('name'),
value: submitSource.getAttribute('value')
});
}
this.$element.trigger(_extends($.Event('submit'), { parsley: true }));
},
// Performs validation on fields while triggering events.
// @returns `true` if all validations succeed, `false`
// if a failure is immediately detected, or `null`
// if dependent on a promise.
// Consider using `whenValidate` instead.
validate: function validate(options) {
if (arguments.length >= 1 && !$.isPlainObject(options)) {
Utils.warnOnce('Calling validate on a parsley form without passing arguments as an object is deprecated.');
var _arguments = _slice.call(arguments);
var group = _arguments[0];
var force = _arguments[1];
var event = _arguments[2];
options = { group: group, force: force, event: event };
}
return Form__statusMapping[this.whenValidate(options).state()];
},
whenValidate: function whenValidate() {
var _Utils$all$done$fail$always,
_this6 = this;
var _ref7 = arguments.length <= 0 || arguments[0] === undefined ? {} : arguments[0];
var group = _ref7.group;
var force = _ref7.force;
var event = _ref7.event;
this.submitEvent = event;
if (event) {
this.submitEvent = _extends({}, event, { preventDefault: function preventDefault() {
Utils.warnOnce("Using `this.submitEvent.preventDefault()` is deprecated; instead, call `this.validationResult = false`");
_this6.validationResult = false;
} });
}
this.validationResult = true;
// fire validate event to eventually modify things before every validation
this._trigger('validate');
// Refresh form DOM options and form's fields that could have changed
this._refreshFields();
var promises = this._withoutReactualizingFormOptions(function () {
return $.map(_this6.fields, function (field) {
return field.whenValidate({ force: force, group: group });
});
});
return (_Utils$all$done$fail$always = Utils.all(promises).done(function () {
_this6._trigger('success');
}).fail(function () {
_this6.validationResult = false;
_this6.focus();
_this6._trigger('error');
}).always(function () {
_this6._trigger('validated');
})).pipe.apply(_Utils$all$done$fail$always, _toConsumableArray(this._pipeAccordingToValidationResult()));
},
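// Example (illustrative, not part of the library; `#signup-form` is a
// hypothetical selector): `whenValidate` returns a promise, so callers can
// react to the overall form result of a bound form instance:
//
//   $('#signup-form').parsley().whenValidate()
//     .done(function () { /* every field passed */ })
//     .fail(function () { /* at least one field failed */ });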
// Iterate over refreshed fields, and stop on first failure.
// Returns `true` if all fields are valid, `false` if a failure is detected
// or `null` if the result depends on an unresolved promise.
// Prefer using `whenValid` instead.
isValid: function isValid(options) {
if (arguments.length >= 1 && !$.isPlainObject(options)) {
Utils.warnOnce('Calling isValid on a parsley form without passing arguments as an object is deprecated.');
var _arguments2 = _slice.call(arguments);
var group = _arguments2[0];
var force = _arguments2[1];
options = { group: group, force: force };
}
return Form__statusMapping[this.whenValid(options).state()];
},
// Iterate over refreshed fields and validate them.
// Returns a promise.
// A validation that immediately fails will interrupt the validations.
whenValid: function whenValid() {
var _this7 = this;
var _ref8 = arguments.length <= 0 || arguments[0] === undefined ? {} : arguments[0];
var group = _ref8.group;
var force = _ref8.force;
this._refreshFields();
var promises = this._withoutReactualizingFormOptions(function () {
return $.map(_this7.fields, function (field) {
return field.whenValid({ group: group, force: force });
});
});
return Utils.all(promises);
},
refresh: function refresh() {
this._refreshFields();
return this;
},
// Reset UI
reset: function reset() {
// Form case: emit a reset event for each field
for (var i = 0; i < this.fields.length; i++) this.fields[i].reset();
this._trigger('reset');
},
// Destroy Parsley instance (+ UI)
destroy: function destroy() {
// Field case: emit destroy event to clean UI and then destroy stored instance
this._destroyUI();
// Form case: destroy all its fields and then destroy stored instance
for (var i = 0; i < this.fields.length; i++) this.fields[i].destroy();
this.$element.removeData('Parsley');
this._trigger('destroy');
},
_refreshFields: function _refreshFields() {
return this.actualizeOptions()._bindFields();
},
_bindFields: function _bindFields() {
var _this8 = this;
var oldFields = this.fields;
this.fields = [];
this.fieldsMappedById = {};
this._withoutReactualizingFormOptions(function () {
_this8.$element.find(_this8.options.inputs).not(_this8.options.excluded).each(function (_, element) {
var fieldInstance = new window.Parsley.Factory(element, {}, _this8);
// Only add valid and not excluded `Field` and `FieldMultiple` children
if (('Field' === fieldInstance.__class__ || 'FieldMultiple' === fieldInstance.__class__) && true !== fieldInstance.options.excluded) {
var uniqueId = fieldInstance.__class__ + '-' + fieldInstance.__id__;
if ('undefined' === typeof _this8.fieldsMappedById[uniqueId]) {
_this8.fieldsMappedById[uniqueId] = fieldInstance;
_this8.fields.push(fieldInstance);
}
}
});
$.each(Utils.difference(oldFields, _this8.fields), function (_, field) {
field.reset();
});
});
return this;
},
// Internal only.
// Looping on a form's fields to do validation or similar
// will trigger reactualizing options on all of them, which
// in turn will reactualize the form's options.
// To avoid calling actualizeOptions so many times on the form
// for nothing, _withoutReactualizingFormOptions temporarily disables
// the method actualizeOptions on this form while `fn` is called.
_withoutReactualizingFormOptions: function _withoutReactualizingFormOptions(fn) {
var oldActualizeOptions = this.actualizeOptions;
this.actualizeOptions = function () {
return this;
};
var result = fn();
this.actualizeOptions = oldActualizeOptions;
return result;
},
// Internal only.
// Shortcut to trigger an event
// Returns true iff event is not interrupted and default not prevented.
_trigger: function _trigger(eventName) {
return this.trigger('form:' + eventName);
}
};
var Constraint = function Constraint(parsleyField, name, requirements, priority, isDomConstraint) {
var validatorSpec = window.Parsley._validatorRegistry.validators[name];
var validator = new Validator(validatorSpec);
priority = priority || parsleyField.options[name + 'Priority'] || validator.priority;
isDomConstraint = true === isDomConstraint;
_extends(this, {
validator: validator,
name: name,
requirements: requirements,
priority: priority,
isDomConstraint: isDomConstraint
});
this._parseRequirements(parsleyField.options);
};
var capitalize = function capitalize(str) {
var cap = str[0].toUpperCase();
return cap + str.slice(1);
};
Constraint.prototype = {
validate: function validate(value, instance) {
var _validator;
return (_validator = this.validator).validate.apply(_validator, [value].concat(_toConsumableArray(this.requirementList), [instance]));
},
_parseRequirements: function _parseRequirements(options) {
var _this9 = this;
this.requirementList = this.validator.parseRequirements(this.requirements, function (key) {
return options[_this9.name + capitalize(key)];
});
}
};
var Field = function Field(field, domOptions, options, parsleyFormInstance) {
this.__class__ = 'Field';
this.element = field;
this.$element = $(field);
// Set parent if we have one
if ('undefined' !== typeof parsleyFormInstance) {
this.parent = parsleyFormInstance;
}
this.options = options;
this.domOptions = domOptions;
// Initialize some properties
this.constraints = [];
this.constraintsByName = {};
this.validationResult = true;
// Bind constraints
this._bindConstraints();
};
var parsley_field__statusMapping = { pending: null, resolved: true, rejected: false };
Field.prototype = {
// # Public API
// Validate field and trigger some events for mainly `UI`
// @returns `true`, an array of the validators that failed, or
// `null` if validation is not finished. Prefer using whenValidate
validate: function validate(options) {
if (arguments.length >= 1 && !$.isPlainObject(options)) {
Utils.warnOnce('Calling validate on a parsley field without passing arguments as an object is deprecated.');
options = { options: options };
}
var promise = this.whenValidate(options);
if (!promise) // If excluded with `group` option
return true;
switch (promise.state()) {
case 'pending':
return null;
case 'resolved':
return true;
case 'rejected':
return this.validationResult;
}
},
// Validate field and trigger some events for mainly `UI`
// @returns a promise that succeeds only when all validations do
// or `undefined` if field is not in the given `group`.
whenValidate: function whenValidate() {
var _whenValid$always$done$fail$always,
_this10 = this;
var _ref9 = arguments.length <= 0 || arguments[0] === undefined ? {} : arguments[0];
var force = _ref9.force;
var group = _ref9.group;
// do not validate a field if not the same as given validation group
this.refresh();
if (group && !this._isInGroup(group)) return;
this.value = this.getValue();
// Field Validate event. `this.value` could be altered for custom needs
this._trigger('validate');
return (_whenValid$always$done$fail$always = this.whenValid({ force: force, value: this.value, _refreshed: true }).always(function () {
_this10._reflowUI();
}).done(function () {
_this10._trigger('success');
}).fail(function () {
_this10._trigger('error');
}).always(function () {
_this10._trigger('validated');
})).pipe.apply(_whenValid$always$done$fail$always, _toConsumableArray(this._pipeAccordingToValidationResult()));
},
hasConstraints: function hasConstraints() {
return 0 !== this.constraints.length;
},
// An empty optional field does not need validation
needsValidation: function needsValidation(value) {
if ('undefined' === typeof value) value = this.getValue();
// If a field is empty and not required, it is valid
// Except if `data-parsley-validate-if-empty` explicitly added, useful for some custom validators
if (!value.length && !this._isRequired() && 'undefined' === typeof this.options.validateIfEmpty) return false;
return true;
},
_isInGroup: function _isInGroup(group) {
if (Array.isArray(this.options.group)) return -1 !== $.inArray(group, this.options.group);
return this.options.group === group;
},
// Just validate field. Do not trigger any event.
// Returns `true` iff all constraints pass, `false` if there are failures,
// or `null` if the result can not be determined yet (depends on a promise)
// See also `whenValid`.
isValid: function isValid(options) {
if (arguments.length >= 1 && !$.isPlainObject(options)) {
Utils.warnOnce('Calling isValid on a parsley field without passing arguments as an object is deprecated.');
var _arguments3 = _slice.call(arguments);
var force = _arguments3[0];
var value = _arguments3[1];
options = { force: force, value: value };
}
var promise = this.whenValid(options);
if (!promise) // Excluded via `group`
return true;
return parsley_field__statusMapping[promise.state()];
},
// Just validate field. Do not trigger any event.
// @returns a promise that succeeds only when all validations do
// or `undefined` if the field is not in the given `group`.
// The argument `force` will force validation of empty fields.
// If a `value` is given, it will be validated instead of the value of the input.
whenValid: function whenValid() {
var _this11 = this;
var _ref10 = arguments.length <= 0 || arguments[0] === undefined ? {} : arguments[0];
var _ref10$force = _ref10.force;
var force = _ref10$force === undefined ? false : _ref10$force;
var value = _ref10.value;
var group = _ref10.group;
var _refreshed = _ref10._refreshed;
// Recompute options and rebind constraints to have latest changes
if (!_refreshed) this.refresh();
// do not validate a field if not the same as given validation group
if (group && !this._isInGroup(group)) return;
this.validationResult = true;
// A field without constraint is valid
if (!this.hasConstraints()) return $.when();
// Value could be passed as argument, needed to add more power to 'field:validate'
if ('undefined' === typeof value || null === value) value = this.getValue();
if (!this.needsValidation(value) && true !== force) return $.when();
var groupedConstraints = this._getGroupedConstraints();
var promises = [];
$.each(groupedConstraints, function (_, constraints) {
// Process one group of constraints at a time, we validate the constraints
// and combine the promises together.
var promise = Utils.all($.map(constraints, function (constraint) {
return _this11._validateConstraint(value, constraint);
}));
promises.push(promise);
if (promise.state() === 'rejected') return false; // Interrupt processing if a group has already failed
});
return Utils.all(promises);
},
// @returns a promise
_validateConstraint: function _validateConstraint(value, constraint) {
var _this12 = this;
var result = constraint.validate(value, this);
// Map false to a failed promise
if (false === result) result = $.Deferred().reject();
// Make sure we return a promise and that we record failures
return Utils.all([result]).fail(function (errorMessage) {
if (!(_this12.validationResult instanceof Array)) _this12.validationResult = [];
_this12.validationResult.push({
assert: constraint,
errorMessage: 'string' === typeof errorMessage && errorMessage
});
});
},
// @returns Parsley field computed value that could be overridden or configured in DOM
getValue: function getValue() {
var value;
// Value could be overridden in DOM or with explicit options
if ('function' === typeof this.options.value) value = this.options.value(this);else if ('undefined' !== typeof this.options.value) value = this.options.value;else value = this.$element.val();
// Handle wrong DOM or configurations
if ('undefined' === typeof value || null === value) return '';
return this._handleWhitespace(value);
},
// Reset UI
reset: function reset() {
this._resetUI();
return this._trigger('reset');
},
// Destroy Parsley instance (+ UI)
destroy: function destroy() {
// Field case: emit destroy event to clean UI and then destroy stored instance
this._destroyUI();
this.$element.removeData('Parsley');
this.$element.removeData('FieldMultiple');
this._trigger('destroy');
},
// Actualize options and rebind constraints
refresh: function refresh() {
this._refreshConstraints();
return this;
},
_refreshConstraints: function _refreshConstraints() {
return this.actualizeOptions()._bindConstraints();
},
refreshConstraints: function refreshConstraints() {
Utils.warnOnce("Parsley's refreshConstraints is deprecated. Please use refresh");
return this.refresh();
},
/**
* Add a new constraint to a field
*
* @param {String} name
* @param {Mixed} requirements optional
* @param {Number} priority optional
* @param {Boolean} isDomConstraint optional
*/
addConstraint: function addConstraint(name, requirements, priority, isDomConstraint) {
if (window.Parsley._validatorRegistry.validators[name]) {
var constraint = new Constraint(this, name, requirements, priority, isDomConstraint);
// if a constraint with this name already exists, delete it and push the new version
if ('undefined' !== typeof this.constraintsByName[constraint.name]) this.removeConstraint(constraint.name);
this.constraints.push(constraint);
this.constraintsByName[constraint.name] = constraint;
}
return this;
},
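// Example (illustrative, not part of the library; `#email` is a hypothetical
// selector): constraints can be added from JS on top of DOM-declared ones, and
// re-adding an existing name replaces it, as implemented above:
//
//   $('#email').parsley()
//     .addConstraint('required', true)
//     .addConstraint('maxlength', 128);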
// Remove a constraint
removeConstraint: function removeConstraint(name) {
for (var i = 0; i < this.constraints.length; i++) if (name === this.constraints[i].name) {
this.constraints.splice(i, 1);
break;
}
delete this.constraintsByName[name];
return this;
},
// Update a constraint (Remove + re-add)
updateConstraint: function updateConstraint(name, parameters, priority) {
return this.removeConstraint(name).addConstraint(name, parameters, priority);
},
// # Internals
// Internal only.
// Bind constraints from config + options + DOM
_bindConstraints: function _bindConstraints() {
var constraints = [];
var constraintsByName = {};
// clean all existing DOM constraints to only keep javascript user constraints
for (var i = 0; i < this.constraints.length; i++) if (false === this.constraints[i].isDomConstraint) {
constraints.push(this.constraints[i]);
constraintsByName[this.constraints[i].name] = this.constraints[i];
}
this.constraints = constraints;
this.constraintsByName = constraintsByName;
// then re-add Parsley DOM-API constraints
for (var name in this.options) this.addConstraint(name, this.options[name], undefined, true);
// finally, bind special HTML5 constraints
return this._bindHtml5Constraints();
},
// Internal only.
// Bind specific HTML5 constraints to be HTML5 compliant
_bindHtml5Constraints: function _bindHtml5Constraints() {
// html5 required
if (null !== this.element.getAttribute('required')) this.addConstraint('required', true, undefined, true);
// html5 pattern
if (null !== this.element.getAttribute('pattern')) this.addConstraint('pattern', this.element.getAttribute('pattern'), undefined, true);
// range
var min = this.element.getAttribute('min');
var max = this.element.getAttribute('max');
if (null !== min && null !== max) this.addConstraint('range', [min, max], undefined, true);
// HTML5 min
else if (null !== min) this.addConstraint('min', min, undefined, true);
// HTML5 max
else if (null !== max) this.addConstraint('max', max, undefined, true);
// length
if (null !== this.element.getAttribute('minlength') && null !== this.element.getAttribute('maxlength')) this.addConstraint('length', [this.element.getAttribute('minlength'), this.element.getAttribute('maxlength')], undefined, true);
// HTML5 minlength
else if (null !== this.element.getAttribute('minlength')) this.addConstraint('minlength', this.element.getAttribute('minlength'), undefined, true);
// HTML5 maxlength
else if (null !== this.element.getAttribute('maxlength')) this.addConstraint('maxlength', this.element.getAttribute('maxlength'), undefined, true);
// html5 types
var type = Utils.getType(this.element);
// Small special case here for HTML5 number: integer validator if step attribute is undefined or an integer value, number otherwise
if ('number' === type) {
return this.addConstraint('type', ['number', {
step: this.element.getAttribute('step') || '1',
base: min || this.element.getAttribute('value')
}], undefined, true);
// Regular other HTML5 supported types
} else if (/^(email|url|range|date)$/i.test(type)) {
return this.addConstraint('type', type, undefined, true);
}
return this;
},
// Internal only.
// Field is required if it has a `required` constraint whose value is not `false`
_isRequired: function _isRequired() {
if ('undefined' === typeof this.constraintsByName.required) return false;
return false !== this.constraintsByName.required.requirements;
},
// Internal only.
// Shortcut to trigger an event
_trigger: function _trigger(eventName) {
return this.trigger('field:' + eventName);
},
// Internal only
// Handles whitespace in a value
// Use `data-parsley-whitespace="squish"` to auto squish input value
// Use `data-parsley-whitespace="trim"` to auto trim input value
_handleWhitespace: function _handleWhitespace(value) {
if (true === this.options.trimValue) Utils.warnOnce('data-parsley-trim-value="true" is deprecated, please use data-parsley-whitespace="trim"');
if ('squish' === this.options.whitespace) value = value.replace(/\s{2,}/g, ' ');
if ('trim' === this.options.whitespace || 'squish' === this.options.whitespace || true === this.options.trimValue) value = Utils.trimString(value);
return value;
},
_isDateInput: function _isDateInput() {
var c = this.constraintsByName.type;
return c && c.requirements === 'date';
},
// Internal only.
// Returns the constraints, grouped by descending priority.
// The result is thus an array of arrays of constraints.
_getGroupedConstraints: function _getGroupedConstraints() {
if (false === this.options.priorityEnabled) return [this.constraints];
var groupedConstraints = [];
var index = {};
// Create array unique of priorities
for (var i = 0; i < this.constraints.length; i++) {
var p = this.constraints[i].priority;
if (!index[p]) groupedConstraints.push(index[p] = []);
index[p].push(this.constraints[i]);
}
// Sort them by priority DESC
groupedConstraints.sort(function (a, b) {
return b[0].priority - a[0].priority;
});
return groupedConstraints;
}
};
var parsley_field = Field;
var Multiple = function Multiple() {
this.__class__ = 'FieldMultiple';
};
Multiple.prototype = {
// Add new `$element` sibling for multiple field
addElement: function addElement($element) {
this.$elements.push($element);
return this;
},
// See `Field._refreshConstraints()`
_refreshConstraints: function _refreshConstraints() {
var fieldConstraints;
this.constraints = [];
// Select multiple special treatment
if (this.element.nodeName === 'SELECT') {
this.actualizeOptions()._bindConstraints();
return this;
}
// Gather all constraints for each input in the multiple group
for (var i = 0; i < this.$elements.length; i++) {
// Check if element has not been dynamically removed since last binding
if (!$('html').has(this.$elements[i]).length) {
this.$elements.splice(i, 1);
continue;
}
fieldConstraints = this.$elements[i].data('FieldMultiple')._refreshConstraints().constraints;
for (var j = 0; j < fieldConstraints.length; j++) this.addConstraint(fieldConstraints[j].name, fieldConstraints[j].requirements, fieldConstraints[j].priority, fieldConstraints[j].isDomConstraint);
}
return this;
},
// See `Field.getValue()`
getValue: function getValue() {
// Value could be overridden in DOM
if ('function' === typeof this.options.value) return this.options.value(this);else if ('undefined' !== typeof this.options.value) return this.options.value;
// Radio input case
if (this.element.nodeName === 'INPUT') {
var type = Utils.getType(this.element);
if (type === 'radio') return this._findRelated().filter(':checked').val() || '';
// checkbox input case
if (type === 'checkbox') {
var values = [];
this._findRelated().filter(':checked').each(function () {
values.push($(this).val());
});
return values;
}
}
// Select multiple case
if (this.element.nodeName === 'SELECT' && null === this.$element.val()) return [];
// Default case that should never happen
return this.$element.val();
},
_init: function _init() {
this.$elements = [this.$element];
return this;
}
};
var Factory = function Factory(element, options, parsleyFormInstance) {
this.element = element;
this.$element = $(element);
// If the element has already been bound, returns its saved Parsley instance
var savedparsleyFormInstance = this.$element.data('Parsley');
if (savedparsleyFormInstance) {
// If the saved instance has been bound without a Form parent and there is one given in this call, add it
if ('undefined' !== typeof parsleyFormInstance && savedparsleyFormInstance.parent === window.Parsley) {
savedparsleyFormInstance.parent = parsleyFormInstance;
savedparsleyFormInstance._resetOptions(savedparsleyFormInstance.options);
}
if ('object' === typeof options) {
_extends(savedparsleyFormInstance.options, options);
}
return savedparsleyFormInstance;
}
// Parsley must be instantiated with a DOM element or jQuery $element
if (!this.$element.length) throw new Error('You must bind Parsley on an existing element.');
if ('undefined' !== typeof parsleyFormInstance && 'Form' !== parsleyFormInstance.__class__) throw new Error('Parent instance must be a Form instance');
this.parent = parsleyFormInstance || window.Parsley;
return this.init(options);
};
Factory.prototype = {
init: function init(options) {
this.__class__ = 'Parsley';
this.__version__ = '2.8.0';
this.__id__ = Utils.generateID();
// Pre-compute options
this._resetOptions(options);
// A Form instance is obviously a `<form>` element but also every node that is not an input and has the `data-parsley-validate` attribute
if (this.element.nodeName === 'FORM' || Utils.checkAttr(this.element, this.options.namespace, 'validate') && !this.$element.is(this.options.inputs)) return this.bind('parsleyForm');
// Every other element is bound as a `Field` or `FieldMultiple`
return this.isMultiple() ? this.handleMultiple() : this.bind('parsleyField');
},
isMultiple: function isMultiple() {
var type = Utils.getType(this.element);
return type === 'radio' || type === 'checkbox' || this.element.nodeName === 'SELECT' && null !== this.element.getAttribute('multiple');
},
// Multiple fields are a real nightmare :(
// Maybe some refactoring would be appreciated here...
handleMultiple: function handleMultiple() {
var _this13 = this;
var name;
var multiple;
var parsleyMultipleInstance;
// Handle multiple name
this.options.multiple = this.options.multiple || (name = this.element.getAttribute('name')) || this.element.getAttribute('id');
// Special select multiple input
if (this.element.nodeName === 'SELECT' && null !== this.element.getAttribute('multiple')) {
this.options.multiple = this.options.multiple || this.__id__;
return this.bind('parsleyFieldMultiple');
// Else for radio / checkboxes, we need a `name` or `data-parsley-multiple` to properly bind it
} else if (!this.options.multiple) {
Utils.warn('To be bound by Parsley, a radio, a checkbox and a multiple select input must have either a name or a multiple option.', this.$element);
return this;
}
// Remove special chars
this.options.multiple = this.options.multiple.replace(/(:|\.|\[|\]|\{|\}|\$)/g, '');
// Add proper `data-parsley-multiple` to siblings if we have a valid multiple name
if (name) {
$('input[name="' + name + '"]').each(function (i, input) {
var type = Utils.getType(input);
if (type === 'radio' || type === 'checkbox') input.setAttribute(_this13.options.namespace + 'multiple', _this13.options.multiple);
});
}
// Check here if we don't already have a related multiple instance saved
var $previouslyRelated = this._findRelated();
for (var i = 0; i < $previouslyRelated.length; i++) {
parsleyMultipleInstance = $($previouslyRelated.get(i)).data('Parsley');
if ('undefined' !== typeof parsleyMultipleInstance) {
if (!this.$element.data('FieldMultiple')) {
parsleyMultipleInstance.addElement(this.$element);
}
break;
}
}
// Create a secret Field instance for every multiple field. It will be stored in `data('FieldMultiple')`
// And will be useful later to access classic `Field` stuff while being in a `FieldMultiple` instance
this.bind('parsleyField', true);
return parsleyMultipleInstance || this.bind('parsleyFieldMultiple');
},
// Return proper `Form`, `Field` or `FieldMultiple`
bind: function bind(type, doNotStore) {
var parsleyInstance;
switch (type) {
case 'parsleyForm':
parsleyInstance = $.extend(new Form(this.element, this.domOptions, this.options), new Base(), window.ParsleyExtend)._bindFields();
break;
case 'parsleyField':
parsleyInstance = $.extend(new parsley_field(this.element, this.domOptions, this.options, this.parent), new Base(), window.ParsleyExtend);
break;
case 'parsleyFieldMultiple':
parsleyInstance = $.extend(new parsley_field(this.element, this.domOptions, this.options, this.parent), new Multiple(), new Base(), window.ParsleyExtend)._init();
break;
default:
throw new Error(type + ' is not a supported Parsley type');
}
if (this.options.multiple) Utils.setAttr(this.element, this.options.namespace, 'multiple', this.options.multiple);
if ('undefined' !== typeof doNotStore) {
this.$element.data('FieldMultiple', parsleyInstance);
return parsleyInstance;
}
// Store the freshly bound instance in a DOM element for later access using jQuery `data()`
this.$element.data('Parsley', parsleyInstance);
// Tell the world we have a new Form or Field instance!
parsleyInstance._actualizeTriggers();
parsleyInstance._trigger('init');
return parsleyInstance;
}
};
var vernums = $.fn.jquery.split('.');
if (parseInt(vernums[0]) <= 1 && parseInt(vernums[1]) < 8) {
throw "The loaded version of jQuery is too old. Please upgrade to 1.8.x or better.";
}
if (!vernums.forEach) {
Utils.warn('Parsley requires ES5 to run properly. Please include https://github.com/es-shims/es5-shim');
}
// Inherit `on`, `off` & `trigger` to Parsley:
var Parsley = _extends(new Base(), {
element: document,
$element: $(document),
actualizeOptions: null,
_resetOptions: null,
Factory: Factory,
version: '2.8.0'
});
// Supplement Field and Form with Base
// This way, the constructors will have access to those methods
_extends(parsley_field.prototype, UI.Field, Base.prototype);
_extends(Form.prototype, UI.Form, Base.prototype);
// Inherit actualizeOptions and _resetOptions:
_extends(Factory.prototype, Base.prototype);
// ### jQuery API
// `$('.elem').parsley(options)` or `$('.elem').psly(options)`
$.fn.parsley = $.fn.psly = function (options) {
if (this.length > 1) {
var instances = [];
this.each(function () {
instances.push($(this).parsley(options));
});
return instances;
}
// Return undefined if applied to non existing DOM element
if (this.length == 0) {
return;
}
return new Factory(this[0], options);
};
// ### Field and Form extension
// Ensure the extension is now defined if it wasn't previously
if ('undefined' === typeof window.ParsleyExtend) window.ParsleyExtend = {};
// ### Parsley config
// Inherit from ParsleyDefault, and copy over any existing values
Parsley.options = _extends(Utils.objectCreate(Defaults), window.ParsleyConfig);
window.ParsleyConfig = Parsley.options; // Old way of accessing global options
// ### Globals
window.Parsley = window.psly = Parsley;
Parsley.Utils = Utils;
window.ParsleyUtils = {};
$.each(Utils, function (key, value) {
if ('function' === typeof value) {
window.ParsleyUtils[key] = function () {
Utils.warnOnce('Accessing `window.ParsleyUtils` is deprecated. Use `window.Parsley.Utils` instead.');
return Utils[key].apply(Utils, arguments);
};
}
});
// ### Define methods that forward to the registry, and deprecate all access except through window.Parsley
var registry = window.Parsley._validatorRegistry = new ValidatorRegistry(window.ParsleyConfig.validators, window.ParsleyConfig.i18n);
window.ParsleyValidator = {};
$.each('setLocale addCatalog addMessage addMessages getErrorMessage formatMessage addValidator updateValidator removeValidator hasValidator'.split(' '), function (i, method) {
window.Parsley[method] = function () {
return registry[method].apply(registry, arguments);
};
window.ParsleyValidator[method] = function () {
var _window$Parsley;
Utils.warnOnce('Accessing the method \'' + method + '\' through Validator is deprecated. Simply call \'window.Parsley.' + method + '(...)\'');
return (_window$Parsley = window.Parsley)[method].apply(_window$Parsley, arguments);
};
});
// ### UI
// Deprecated global object
window.Parsley.UI = UI;
window.ParsleyUI = {
removeError: function removeError(instance, name, doNotUpdateClass) {
var updateClass = true !== doNotUpdateClass;
Utils.warnOnce('Accessing UI is deprecated. Call \'removeError\' on the instance directly. Please comment in issue 1073 as to your need to call this method.');
return instance.removeError(name, { updateClass: updateClass });
},
getErrorsMessages: function getErrorsMessages(instance) {
Utils.warnOnce('Accessing UI is deprecated. Call \'getErrorsMessages\' on the instance directly.');
return instance.getErrorsMessages();
}
};
$.each('addError updateError'.split(' '), function (i, method) {
window.ParsleyUI[method] = function (instance, name, message, assert, doNotUpdateClass) {
var updateClass = true !== doNotUpdateClass;
Utils.warnOnce('Accessing UI is deprecated. Call \'' + method + '\' on the instance directly. Please comment in issue 1073 as to your need to call this method.');
return instance[method](name, { message: message, assert: assert, updateClass: updateClass });
};
});
// ### PARSLEY auto-binding
// Prevent it by setting `ParsleyConfig.autoBind` to `false`
if (false !== window.ParsleyConfig.autoBind) {
$(function () {
// Works only on `data-parsley-validate`.
if ($('[data-parsley-validate]').length) $('[data-parsley-validate]').parsley();
});
}
var o = $({});
var deprecated = function deprecated() {
Utils.warnOnce("Parsley's pubsub module is deprecated; use the 'on' and 'off' methods on parsley instances or window.Parsley");
};
// Returns an event handler that calls `fn` with the arguments it expects
function adapt(fn, context) {
// Store to allow unbinding
if (!fn.parsleyAdaptedCallback) {
fn.parsleyAdaptedCallback = function () {
var args = Array.prototype.slice.call(arguments, 0);
args.unshift(this);
fn.apply(context || o, args);
};
}
return fn.parsleyAdaptedCallback;
}
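A standalone sketch of the `adapt()` pattern above (renamed to avoid clashing with Parsley's own property): the wrapper is cached on the callback itself, so repeated calls return the identical function, which is what makes later unbinding possible.

```javascript
// Wrap fn so the event target is prepended to its arguments; cache the
// wrapper on fn so the exact same function can later be passed to `off`.
function adaptDemo(fn, context) {
  if (!fn._adapted) {
    fn._adapted = function () {
      var args = Array.prototype.slice.call(arguments, 0);
      args.unshift(this); // the event target becomes the first argument
      fn.apply(context || null, args);
    };
  }
  return fn._adapted;
}

var seen = [];
var cb = function (target, payload) { seen.push(payload); };
var handler = adaptDemo(cb);
handler.call('fakeTarget', 42);
// adaptDemo(cb) === handler, so unbinding can find the same wrapper
```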
var eventPrefix = 'parsley:';
// Converts 'parsley:form:validate' into 'form:validate'
function eventName(name) {
if (name.lastIndexOf(eventPrefix, 0) === 0) return name.substr(eventPrefix.length);
return name;
}
// $.listen is deprecated. Use Parsley.on instead.
$.listen = function (name, callback) {
var context;
deprecated();
if ('object' === typeof arguments[1] && 'function' === typeof arguments[2]) {
context = arguments[1];
callback = arguments[2];
}
if ('function' !== typeof callback) throw new Error('Wrong parameters');
window.Parsley.on(eventName(name), adapt(callback, context));
};
$.listenTo = function (instance, name, fn) {
deprecated();
if (!(instance instanceof parsley_field) && !(instance instanceof Form)) throw new Error('Must give Parsley instance');
if ('string' !== typeof name || 'function' !== typeof fn) throw new Error('Wrong parameters');
instance.on(eventName(name), adapt(fn));
};
$.unsubscribe = function (name, fn) {
deprecated();
if ('string' !== typeof name || 'function' !== typeof fn) throw new Error('Wrong arguments');
window.Parsley.off(eventName(name), fn.parsleyAdaptedCallback);
};
$.unsubscribeTo = function (instance, name) {
deprecated();
if (!(instance instanceof parsley_field) && !(instance instanceof Form)) throw new Error('Must give Parsley instance');
instance.off(eventName(name));
};
$.unsubscribeAll = function (name) {
deprecated();
window.Parsley.off(eventName(name));
$('form,input,textarea,select').each(function () {
var instance = $(this).data('Parsley');
if (instance) {
instance.off(eventName(name));
}
});
};
// $.emit is deprecated. Use jQuery events instead.
$.emit = function (name, instance) {
var _instance;
deprecated();
var instanceGiven = instance instanceof parsley_field || instance instanceof Form;
var args = Array.prototype.slice.call(arguments, instanceGiven ? 2 : 1);
args.unshift(eventName(name));
if (!instanceGiven) {
instance = window.Parsley;
}
(_instance = instance).trigger.apply(_instance, _toConsumableArray(args));
};
var pubsub = {};
$.extend(true, Parsley, {
asyncValidators: {
'default': {
fn: function fn(xhr) {
// By default, only status 2xx are deemed successful.
// Note: we use status instead of state() because responses with status 200
// but invalid messages (e.g. an empty body for content type set to JSON) will
// result in state() === 'rejected'.
return xhr.status >= 200 && xhr.status < 300;
},
url: false
},
reverse: {
fn: function fn(xhr) {
// If reverse option is set, a failing ajax request is considered successful
return xhr.status < 200 || xhr.status >= 300;
},
url: false
}
},
addAsyncValidator: function addAsyncValidator(name, fn, url, options) {
Parsley.asyncValidators[name] = {
fn: fn,
url: url || false,
options: options || {}
};
return this;
}
});
Parsley.addValidator('remote', {
requirementType: {
'': 'string',
'validator': 'string',
'reverse': 'boolean',
'options': 'object'
},
validateString: function validateString(value, url, options, instance) {
var data = {};
var ajaxOptions;
var csr;
var validator = options.validator || (true === options.reverse ? 'reverse' : 'default');
if ('undefined' === typeof Parsley.asyncValidators[validator]) throw new Error('Calling an undefined async validator: `' + validator + '`');
url = Parsley.asyncValidators[validator].url || url;
// Fill current value
if (url.indexOf('{value}') > -1) {
url = url.replace('{value}', encodeURIComponent(value));
} else {
data[instance.element.getAttribute('name') || instance.element.getAttribute('id')] = value;
}
// Merge options passed in from the function with the ones in the attribute
var remoteOptions = $.extend(true, options.options || {}, Parsley.asyncValidators[validator].options);
// Any `$.ajax(options)` option can be overridden or extended directly from the DOM via `data-parsley-remote-options`
ajaxOptions = $.extend(true, {}, {
url: url,
data: data,
type: 'GET'
}, remoteOptions);
// Generate store key based on ajax options
instance.trigger('field:ajaxoptions', instance, ajaxOptions);
csr = $.param(ajaxOptions);
// Initialise query cache
if ('undefined' === typeof Parsley._remoteCache) Parsley._remoteCache = {};
// Try to retrieve stored xhr
var xhr = Parsley._remoteCache[csr] = Parsley._remoteCache[csr] || $.ajax(ajaxOptions);
var handleXhr = function handleXhr() {
var result = Parsley.asyncValidators[validator].fn.call(instance, xhr, url, options);
if (!result) // Map falsy results to rejected promise
result = $.Deferred().reject();
return $.when(result);
};
return xhr.then(handleXhr, handleXhr);
},
priority: -1
});
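The `{value}` substitution branch inside `validateString` above can be isolated as a small sketch (`buildRemoteRequest` is a hypothetical helper name, not part of Parsley's API):

```javascript
// If the url contains "{value}", inline the (encoded) value; otherwise
// send it as request data keyed by the field's name — mirroring the
// if/else branch in validateString above.
function buildRemoteRequest(url, value, fieldName) {
  var data = {};
  if (url.indexOf('{value}') > -1) {
    url = url.replace('{value}', encodeURIComponent(value));
  } else {
    data[fieldName] = value;
  }
  return { url: url, data: data };
}

var req = buildRemoteRequest('/check/{value}', 'a b', 'email');
// req.url → '/check/a%20b', req.data → {}
```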
Parsley.on('form:submit', function () {
Parsley._remoteCache = {};
});
Base.prototype.addAsyncValidator = function () {
Utils.warnOnce('Accessing the method `addAsyncValidator` through an instance is deprecated. Simply call `Parsley.addAsyncValidator(...)`');
return Parsley.addAsyncValidator.apply(Parsley, arguments);
};
// This is included with the Parsley library itself,
// thus there is no use in adding it to your project.
Parsley.addMessages('en', {
defaultMessage: "This value seems to be invalid.",
type: {
email: "This value should be a valid email.",
url: "This value should be a valid url.",
number: "This value should be a valid number.",
integer: "This value should be a valid integer.",
digits: "This value should be digits.",
alphanum: "This value should be alphanumeric."
},
notblank: "This value should not be blank.",
required: "This value is required.",
pattern: "This value seems to be invalid.",
min: "This value should be greater than or equal to %s.",
max: "This value should be lower than or equal to %s.",
range: "This value should be between %s and %s.",
minlength: "This value is too short. It should have %s characters or more.",
maxlength: "This value is too long. It should have %s characters or fewer.",
length: "This value length is invalid. It should be between %s and %s characters long.",
mincheck: "You must select at least %s choices.",
maxcheck: "You must select %s choices or fewer.",
check: "You must select between %s and %s choices.",
equalto: "This value should be the same."
});
Parsley.setLocale('en');
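The `%s` placeholders in the catalog above are filled in by the registry's `formatMessage`; a minimal stand-in (assumed behavior, not Parsley's real implementation) looks like this:

```javascript
// Replace each %s in order with the next requirement value.
function formatMessageDemo(message, args) {
  args = Array.isArray(args) ? args : [args];
  var i = 0;
  return message.replace(/%s/g, function () { return String(args[i++]); });
}

formatMessageDemo("This value should be between %s and %s.", [1, 10]);
// → "This value should be between 1 and 10."
```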
/**
* inputevent - Alleviate browser bugs for input events
* https://github.com/marcandre/inputevent
* @version v0.0.3 - (built Thu, Apr 14th 2016, 5:58 pm)
* @author Marc-Andre Lafortune <github@marc-andre.ca>
* @license MIT
*/
function InputEvent() {
var _this14 = this;
var globals = window || global;
// Slightly odd way to construct our object. This way methods are force bound.
// Used to test for duplicate library.
_extends(this, {
// For browsers that do not support isTrusted, assumes event is native.
isNativeEvent: function isNativeEvent(evt) {
return evt.originalEvent && evt.originalEvent.isTrusted !== false;
},
fakeInputEvent: function fakeInputEvent(evt) {
if (_this14.isNativeEvent(evt)) {
$(evt.target).trigger('input');
}
},
misbehaves: function misbehaves(evt) {
if (_this14.isNativeEvent(evt)) {
_this14.behavesOk(evt);
$(document).on('change.inputevent', evt.data.selector, _this14.fakeInputEvent);
_this14.fakeInputEvent(evt);
}
},
behavesOk: function behavesOk(evt) {
if (_this14.isNativeEvent(evt)) {
$(document) // Simply unbinds the testing handler
.off('input.inputevent', evt.data.selector, _this14.behavesOk).off('change.inputevent', evt.data.selector, _this14.misbehaves);
}
},
// Bind the testing handlers
install: function install() {
if (globals.inputEventPatched) {
return;
}
globals.inputEventPatched = '0.0.3';
var _arr = ['select', 'input[type="checkbox"]', 'input[type="radio"]', 'input[type="file"]'];
for (var _i = 0; _i < _arr.length; _i++) {
var selector = _arr[_i];
$(document).on('input.inputevent', selector, { selector: selector }, _this14.behavesOk).on('change.inputevent', selector, { selector: selector }, _this14.misbehaves);
}
},
uninstall: function uninstall() {
delete globals.inputEventPatched;
$(document).off('.inputevent');
}
});
}
var inputevent = new InputEvent();
inputevent.install();
var parsley = Parsley;
return parsley;
});
//# sourceMappingURL=parsley.js.map
// Validation errors messages for Parsley
// Load this after Parsley
Parsley.addMessages('id', {
defaultMessage: "tidak valid",
type: {
email: "email tidak valid",
url: "url tidak valid",
number: "nomor tidak valid",
integer: "integer tidak valid",
digits: "harus berupa digit",
alphanum: "harus berupa alphanumeric"
},
notblank: "tidak boleh kosong",
required: "tidak boleh kosong",
pattern: "tidak valid",
min: "harus lebih besar atau sama dengan %s.",
max: "harus lebih kecil atau sama dengan %s.",
range: "harus dalam rentang %s dan %s.",
minlength: "terlalu pendek, minimal %s karakter atau lebih.",
maxlength: "terlalu panjang, maksimal %s karakter atau kurang.",
length: "panjang karakter harus dalam rentang %s dan %s",
mincheck: "pilih minimal %s pilihan",
maxcheck: "pilih maksimal %s pilihan",
check: "pilih antar %s dan %s pilihan",
equalto: "harus sama"
});
Parsley.setLocale('id');
(function (factory) {
if (typeof define === 'function' && define.amd) {
// AMD. Register as an anonymous module.
define(['jquery'], factory);
} else if (typeof exports === 'object') {
// Node/CommonJS
factory(require('jquery'));
} else {
// Browser globals
factory(jQuery);
}
}(function ($) {
var ua = navigator.userAgent,
iPhone = /iphone/i.test(ua),
chrome = /chrome/i.test(ua),
android = /android/i.test(ua),
caretTimeoutId;
$.mask = {
//Predefined character definitions
definitions: {
'9': "[0-9]",
'a': "[A-Za-z]",
'*': "[A-Za-z0-9]"
},
autoclear: true,
dataName: "rawMaskFn",
placeholder: '_'
};
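Each definition above maps a mask character to a character class. A sketch of how a mask string compiles into per-position tests (simplified — the real plugin also handles `?` for optional trailing parts):

```javascript
var definitions = { '9': "[0-9]", 'a': "[A-Za-z]", '*': "[A-Za-z0-9]" };

// Definition characters compile to a RegExp; literal characters to null.
function compileMask(mask) {
  return mask.split('').map(function (c) {
    return definitions[c] ? new RegExp(definitions[c]) : null;
  });
}

var tests = compileMask('99/99');
// tests[0].test('5') → true; tests[2] is null (the literal '/')
```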
$.fn.extend({
//Helper Function for Caret positioning
caret: function(begin, end) {
var range;
if (this.length === 0 || this.is(":hidden") || this.get(0) !== document.activeElement) {
return;
}
if (typeof begin == 'number') {
end = (typeof end === 'number') ? end : begin;
return this.each(function() {
if (this.setSelectionRange) {
this.setSelectionRange(begin, end);
} else if (this.createTextRange) {
range = this.createTextRange();
range.collapse(true);
range.moveEnd('character', end);
range.moveStart('character', begin);
range.select();
}
});
} else {
if (this[0].setSelectionRange) {
begin = this[0].selectionStart;
end = this[0].selectionEnd;
} else if (document.selection && document.selection.createRange) {
range = document.selection.createRange();
begin = 0 - range.duplicate().moveStart('character', -100000);
end = begin + range.text.length;
}
return { begin: begin, end: end };
}
},
unmask: function() {
return this.trigger("unmask");
},
mask: function(mask, settings) {
var input,
defs,
tests,
partialPosition,
firstNonMaskPos,
lastRequiredNonMaskPos,
len,
oldVal;
if (!mask && this.length > 0) {
input = $(this[0]);
var input = $(this[0]);
var fn = input.data($.mask.dataName);
return fn ? fn() : undefined;
}
settings = $.extend({
autoclear: $.mask.autoclear,
placeholder: $.mask.placeholder, // Load default placeholder
completed: null
}, settings);
defs = $.mask.definitions;
tests = [];
partialPosition = len = mask.length;
firstNonMaskPos = null;
mask = String(mask);
$.each(mask.split(""), function(i, c) {
if (c == '?') {
len--;
partialPosition = i;
} else if (defs[c]) {
tests.push(new RegExp(defs[c]));
if (firstNonMaskPos === null) {
firstNonMaskPos = tests.length - 1;
}
if(i < partialPosition){
lastRequiredNonMaskPos = tests.length - 1;
}
} else {
tests.push(null);
}
});
return this.trigger("unmask").each(function() {
var input = $(this),
buffer = $.map(
mask.split(""),
function(c, i) {
if (c != '?') {
return defs[c] ? getPlaceholder(i) : c;
}
}),
defaultBuffer = buffer.join(''),
focusText = input.val();
function tryFireCompleted(){
if (!settings.completed) {
return;
}
for (var i = firstNonMaskPos; i <= lastRequiredNonMaskPos; i++) {
if (tests[i] && buffer[i] === getPlaceholder(i)) {
return;
}
}
settings.completed.call(input);
}
function getPlaceholder(i){
if(i < settings.placeholder.length)
return settings.placeholder.charAt(i);
return settings.placeholder.charAt(0);
}
function seekNext(pos) {
while (++pos < len && !tests[pos]);
return pos;
}
function seekPrev(pos) {
while (--pos >= 0 && !tests[pos]);
return pos;
}
function shiftL(begin,end) {
var i,
j;
if (begin<0) {
return;
}
for (i = begin, j = seekNext(end); i < len; i++) {
if (tests[i]) {
if (j < len && tests[i].test(buffer[j])) {
buffer[i] = buffer[j];
buffer[j] = getPlaceholder(j);
} else {
break;
}
j = seekNext(j);
}
}
writeBuffer();
input.caret(Math.max(firstNonMaskPos, begin));
}
function shiftR(pos) {
var i,
c,
j,
t;
for (i = pos, c = getPlaceholder(pos); i < len; i++) {
if (tests[i]) {
j = seekNext(i);
t = buffer[i];
buffer[i] = c;
if (j < len && tests[j].test(t)) {
c = t;
} else {
break;
}
}
}
}
function androidInputEvent(e) {
var curVal = input.val();
var pos = input.caret();
if (oldVal && oldVal.length && oldVal.length > curVal.length ) {
// a deletion or backspace happened
checkVal(true);
while (pos.begin > 0 && !tests[pos.begin-1])
pos.begin--;
if (pos.begin === 0)
{
while (pos.begin < firstNonMaskPos && !tests[pos.begin])
pos.begin++;
}
input.caret(pos.begin,pos.begin);
} else {
var pos2 = checkVal(true);
var lastEnteredValue = curVal.charAt(pos.begin);
if (pos.begin < len){
if(!tests[pos.begin]){
pos.begin++;
if(tests[pos.begin].test(lastEnteredValue)){
pos.begin++;
}
}else{
if(tests[pos.begin].test(lastEnteredValue)){
pos.begin++;
}
}
}
input.caret(pos.begin,pos.begin);
}
tryFireCompleted();
}
function blurEvent(e) {
checkVal();
if (input.val() != focusText)
input.change();
}
function keydownEvent(e) {
if (input.prop("readonly")){
return;
}
var k = e.which || e.keyCode,
pos,
begin,
end;
oldVal = input.val();
//backspace, delete, and escape get special treatment
if (k === 8 || k === 46 || (iPhone && k === 127)) {
pos = input.caret();
begin = pos.begin;
end = pos.end;
if (end - begin === 0) {
begin=k!==46?seekPrev(begin):(end=seekNext(begin-1));
end=k===46?seekNext(end):end;
}
clearBuffer(begin, end);
shiftL(begin, end - 1);
e.preventDefault();
} else if( k === 13 ) { // enter
blurEvent.call(this, e);
} else if (k === 27) { // escape
input.val(focusText);
input.caret(0, checkVal());
e.preventDefault();
}
}
function keypressEvent(e) {
if (input.prop("readonly")){
return;
}
var k = e.which || e.keyCode,
pos = input.caret(),
p,
c,
next;
if (e.ctrlKey || e.altKey || e.metaKey || k < 32) {//Ignore
return;
} else if ( k && k !== 13 ) {
if (pos.end - pos.begin !== 0){
clearBuffer(pos.begin, pos.end);
shiftL(pos.begin, pos.end-1);
}
p = seekNext(pos.begin - 1);
if (p < len) {
c = String.fromCharCode(k);
if (tests[p].test(c)) {
shiftR(p);
buffer[p] = c;
writeBuffer();
next = seekNext(p);
if(android){
//Path for CSP Violation on FireFox OS 1.1
var proxy = function() {
$.proxy($.fn.caret,input,next)();
};
setTimeout(proxy,0);
}else{
input.caret(next);
}
if(pos.begin <= lastRequiredNonMaskPos){
tryFireCompleted();
}
}
}
e.preventDefault();
}
}
function clearBuffer(start, end) {
var i;
for (i = start; i < end && i < len; i++) {
if (tests[i]) {
buffer[i] = getPlaceholder(i);
}
}
}
function writeBuffer() { input.val(buffer.join('')); }
function checkVal(allow) {
//try to place characters where they belong
var test = input.val(),
lastMatch = -1,
i,
c,
pos;
for (i = 0, pos = 0; i < len; i++) {
if (tests[i]) {
buffer[i] = getPlaceholder(i);
while (pos++ < test.length) {
c = test.charAt(pos - 1);
if (tests[i].test(c)) {
buffer[i] = c;
lastMatch = i;
break;
}
}
if (pos > test.length) {
clearBuffer(i + 1, len);
break;
}
} else {
if (buffer[i] === test.charAt(pos)) {
pos++;
}
if( i < partialPosition){
lastMatch = i;
}
}
}
if (allow) {
writeBuffer();
} else if (lastMatch + 1 < partialPosition) {
if (settings.autoclear || buffer.join('') === defaultBuffer) {
// Invalid value. Remove it and replace it with the
// mask, which is the default behavior.
if(input.val()) input.val("");
clearBuffer(0, len);
} else {
// Invalid value, but we opt to show the value to the
// user and allow them to correct their mistake.
writeBuffer();
}
} else {
writeBuffer();
input.val(input.val().substring(0, lastMatch + 1));
}
return (partialPosition ? i : firstNonMaskPos);
}
input.data($.mask.dataName,function(){
return $.map(buffer, function(c, i) {
return tests[i]&&c!=getPlaceholder(i) ? c : null;
}).join('');
});
input
.one("unmask", function() {
input
.off(".mask")
.removeData($.mask.dataName);
})
.on("focus.mask", function() {
if (input.prop("readonly")){
return;
}
clearTimeout(caretTimeoutId);
var pos;
focusText = input.val();
pos = checkVal();
caretTimeoutId = setTimeout(function(){
if(input.get(0) !== document.activeElement){
return;
}
writeBuffer();
if (pos == mask.replace("?","").length) {
input.caret(0, pos);
} else {
input.caret(pos);
}
}, 10);
})
.on("blur.mask", blurEvent)
.on("keydown.mask", keydownEvent)
.on("keypress.mask", keypressEvent)
.on("input.mask paste.mask", function() {
if (input.prop("readonly")){
return;
}
setTimeout(function() {
var pos=checkVal(true);
input.caret(pos);
tryFireCompleted();
}, 0);
});
if (chrome && android)
{
input
.off('input.mask')
.on('input.mask', androidInputEvent);
}
checkVal(); //Perform initial check for existing values
});
}
});
}));
Despite major pushes from Florida State, Kansas and Kentucky (among others), it had been widely reported in recent weeks that Georgia was the team to beat. "Tom Crean is a great dude, and I believe in him coaching me and developing me," Edwards said. He says his decision was influenced by "everybody showing love" on his unofficial visit to a Georgia game.
Shooting guard Anthony Edwards of Atlanta, Georgia, has decided to stay in his home state.
Crean, in his first year of coaching the Bulldogs, has deep ties to Dwyane Wade, whom he coached at Marquette from 2001 to 2003; he also coached Victor Oladipo at Indiana from 2010 to 2013.
Top-five senior Anthony Edwards announced his commitment to Georgia on Monday morning, becoming the Bulldogs' highest-ranked recruit of the ESPN recruiting era. Edwards, a blue-chip prospect who is projected to go No. 1 overall in the 2020 NBA Draft according to NBADraft.net, is vertically gifted at the 2 guard position, with pro comparisons to James Harden.
module Approvals
module Reporters
class HtmlImageReporter
include Singleton
def working_in_this_environment?
true
end
def report(received, approved)
display html(received, approved)
end
def html(received, approved)
template(File.expand_path(received), File.expand_path(approved))
end
def display(page)
filename = "#{Approvals.tmp_path}tmp-#{rand(Time.now.to_i)}.html"
File.open(filename, 'w') do |file|
file.write page
end
system("open #{filename}")
end
private
def template(received, approved)
<<-HTML.gsub(/^\ {8}/, '').chomp
<html><head><title>Approval</title></head><body><center><table style="text-align: center;" border="1"><tr><td><img src="file://#{received}"></td><td><img src="file://#{approved}"></td></tr><tr><td>received</td><td>approved</td></tr></table></center></body></html>
HTML
end
end
end
end
# Not as easy as it looks!

Algebra Level 3

$\large \sin\frac{2\pi}{7} + \sin \frac{4\pi}{7} + \sin\frac{8\pi}{7}$

What is the value of the expression above?
INDIANAPOLIS -- Steven Adams scored 23 points on 11-of-16 shooting and grabbed 13 rebounds to lead the Oklahoma City Thunder to a 100-95 victory over the Indiana Pacers on Wednesday night.
The Thunder (13-14) registered its first two-game season sweep of Indiana since 2012-13 and snapped the Pacers' four-game winning streak.
In his return to Indiana, Paul George was booed loudly when introduced and every time he touched the ball throughout the game.
George, a four-time All-Star, was traded to Oklahoma City in exchange for Victor Oladipo and Domantas Sabonis after he told the Pacers he didn't plan to re-sign with the team following the 2017-18 season. George was limited to 12 points on 3-of-14 shooting Wednesday.
Despite shooting 3 of 17 from the field, Thunder guard Russell Westbrook delivered his ninth triple-double of the season with 10 points, 17 rebounds and 12 assists.
Oladipo led the Pacers (16-12) with 19 points, but had a rough shooting night, going 9 of 26 from the floor. Bojan Bogdanovic added 15 for the Pacers and Thaddeus Young had 11 points, 10 rebounds and seven steals.
Oladipo's driving layup sliced Indiana's deficit to 96-94 with 1:07 to go. Following a miss by Carmelo Anthony, Young was initially called for an offensive foul, but the call was overturned. Young then hit the first of two free throws to make it 96-95. A put-back by Alex Abrines gave the Thunder a 98-95 edge with 15.2 seconds remaining.
George came up with a steal and hit both free throws to make it 100-95 with 10.7 seconds left.
Indiana erased a five-point halftime deficit by opening the third quarter with a 11-2 spurt to take a 57-53 lead.
Oklahoma City eventually regained the advantage on a 3-pointer by George at 65-63 with 5:18 to go and led 73-69 after three quarters.
The Thunder sank 12 of 23 shots in the second quarter to produce a 29-19 edge in the period en route to a 51-46 lead at halftime.
Lance Stephenson sank a jumper at the first quarter buzzer to give the Pacers a 27-22 lead. The Thunder made just 8 of 22 shots (36.4 percent) in the opening quarter.
NOTES: Pacers G Darren Collison returned to the starting lineup after missing Sunday's game with a sore left knee. Collison scored 14 points Wednesday. ... Pacers F Thaddeus Young has scored in double figures in 23 games, including seven straight and 12 of 13. ... NBA commissioner Adam Silver attended the game. Silver announced earlier in the day that Indianapolis will host the 2021 All-Star Game at Bankers Life Fieldhouse. ... Entering the game, Oklahoma City led the NBA in steals per game (10.0) and opponent turnovers per game (18.0). The Thunder had 10 steals and the Pacers 13 turnovers on Wednesday night.
if (Posts.find().count() === 0) {
var now = new Date().getTime();
// Users
var tomId = Meteor.users.insert({
profile: { name: 'Tom Coleman' }
});
var tom = Meteor.users.findOne(tomId);
var sachaId = Meteor.users.insert({
profile: { name: 'Sacha Greif' }
});
var sacha = Meteor.users.findOne(sachaId);
// Posts
var telescopeId = Posts.insert({
title: 'Introducing Telescope',
url: 'http://sachagreif.com/introducing-telescope/',
userId: sacha._id,
author: sacha.profile.name,
createdAt: now - 7 * 3600 * 1000,
commentsCount: 2
});
Comments.insert({
postId: telescopeId,
userId: tom._id,
author: tom.profile.name,
createdAt: now - 5 * 3600 * 1000,
body: 'Interesting project Sacha, can I get involved?'
});
Comments.insert({
postId: telescopeId,
userId: sacha._id,
author: sacha.profile.name,
createdAt: now - 3 * 3600 * 1000,
body: 'You sure can Tom!'
});
Posts.insert({
title: 'Meteor',
url: 'http://meteor.com',
userId: tom._id,
author: tom.profile.name,
createdAt: now - 10 * 3600 * 1000,
commentsCount: 0
});
Posts.insert({
title: 'The Meteor Book',
url: 'http://themeteorbook.com',
userId: tom._id,
author: tom.profile.name,
createdAt: now - 12 * 3600 * 1000,
commentsCount: 0
});
}
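The fixtures above denormalize `commentsCount` onto each post. A quick sketch (hypothetical helper, not part of the fixture file) of how that counter relates to the comment documents:

```javascript
// Recompute a post's commentsCount from a plain array of comment documents.
function commentsCountFor(postId, comments) {
  return comments.filter(function (c) { return c.postId === postId; }).length;
}

var comments = [
  { postId: 'telescope' },
  { postId: 'telescope' }
];
commentsCountFor('telescope', comments); // → 2, matching the seeded value
```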
Q: ActionController::ParameterMissing in Rails 4 (param is missing or the value is empty)
I tried to add a second Devise model namespaced to 'affiliates':
namespace :affiliates do
devise_for :account, controllers: { registrations: 'accounts/registrations' }
end
In registrations_controller I have this:
class Accounts::RegistrationsController < Devise::RegistrationsController
layout 'agent_sign_up'
def new
cookies.signed[:signup_affiliate] = JSON.generate({
level: 'affiliate',
sponsoring_affiliate: 'TEMP'
})
cookie = JSON.parse(cookies.signed[:signup_affiliate])
@account_subscription_level = cookie['level']
@affiliate = cookie['sponsoring_affiliate']
super
end
def create
super
end
private
def sign_up_params
params.require(:account).permit(:sponsoring_affiliate, :email, :password, :password_confirmation)
end
end
I get the following error when trying to register a new affiliate:
ActionController::ParameterMissing in Accounts::RegistrationsController#create
param is missing or the value is empty: account
Extracted source (around line #21):
private
def sign_up_params
params.require(:account).permit(:sponsoring_affiliate, :email, :password, :password_confirmation)
end
end
But my LOG shows all of the params being processed:
Started POST "/affiliates/account" for 127.0.0.1 at 2014-10-07 14:04:52 -0400
Processing by Accounts::RegistrationsController#create as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"Q9dbDloMiYzlMqHLJlAz0QamnT3hRiOB8xh9/UhLG+o=", "affiliates_account"=>{"email"=>"test@gmail.com", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "account_subscription_level"=>"affiliate", "sponsoring_affiliate"=>"TEMP"}, "commit"=>"Sign up"}
Completed 500 in 1ms
Why am I getting this error? Can anyone help me debug? Thank you!
A: Your sign_up_params method requires params[:account], but the form is posting params[:affiliates_account] (as your log shows) — the parameter key is prefixed with the route namespace. Either change params.require(:account) to params.require(:affiliates_account), or adjust the form so it posts the attributes under the :account key.
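A plain-Ruby sketch of the mismatch (keys and values taken from the log above; a plain Hash stands in for Rails' ActionController::Parameters, so this is illustrative only):

```ruby
# The posted params, per the log: the nested hash sits under
# "affiliates_account", not "account".
params = {
  "affiliates_account" => {
    "email"                => "test@gmail.com",
    "password"             => "secret",
    "sponsoring_affiliate" => "TEMP"
  }
}

# Requiring "account" fails because that key is absent...
require_failed = begin
  params.fetch("account")
  false
rescue KeyError
  true
end

# ...while requiring the namespaced key succeeds.
account_params = params.fetch("affiliates_account")
permitted = account_params.slice("email", "password", "sponsoring_affiliate")
```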
Q: Wiremock query parameter JSON stub file
I'm trying to mock a query parameter using a WireMock JSON stub file.
It works when I do it this way:
{
"request": {
"method": "GET",
"url": "/posts?id=1"
},
//...
}
However when I change my query parameter to use the dedicated field like this it doesn't work anymore :
{
"request": {
"method": "GET",
"urlPath": "/posts",
"queryParameters": {
"id": {
"equalTo": "1"
}
}
},
//...
}
Any idea why ?
The test request looks like http://some-host/posts?id=1
A: You can try urlPathPattern instead of urlPath.
As noted in the WireMock docs, urlPath is for an exact match, while urlPathPattern is for a regex match.
So if you use urlPathPattern, your query parameters will be matched.
{
"request": {
"method": "GET",
"urlPathPattern": "/posts",
"queryParameters": {
"id": {
"equalTo": "1"
}
}
},
//...
}
A: The issue is that urlPath doesn't work with queryParameters — and this is simply expected behavior. :-/ I found a Q&A on the topic at the WireMock GitHub repo; according to @tomakehurst's answer, queryParameters will match if you use urlPathPattern.
A: This works for me: change "urlPath" to "urlPathPattern", but be careful structuring the JSON. urlPath is an exact-match pattern, while urlPathPattern does regex matching on the path, with queryParameters then matched separately.
{
"request": {
"urlPathPattern": "/posts",
"method": "GET",
"queryParameters": {
"id": {
"equalTo": "1"
}
}
},
"response": {
"status": 200,
"body":"This is successful"
}
}
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,798 |
Sauvetat-de-Savères is a town and commune in France, in the Nouvelle-Aquitaine region, in the Lot-et-Garonne department.
According to data from 1990, the commune had 312 inhabitants, with a population density of 45 people/km² (among the 2,290 communes of Aquitaine, Sauvetat-de-Savères ranks 868th by population and 1,285th by area).
Bibliography
Towns in the Lot-et-Garonne department | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 8,696 |
package branscha.scripty.spec.map;
import branscha.scripty.parser.Context;
import branscha.scripty.parser.Eval;
/**
* Fetch an argument from the argument array at the specified position. Position 0 is always the name of the command.
* These mappings are automatically created by the {@link branscha.scripty.spec.args.ArgListBuilder} when the argument
* list descriptions are converted to runtime information, and these are used by the {@link CmdMethodInjector} to inject
* the requested arguments in command parameters.
*/
public class ArrayIndexMapping implements ArgMapping {
public static final String ERR010 = "ArrayIndexMapping/010: Error fetching argument %d from argument array.";
private int index;
public ArrayIndexMapping(int index) {
this.index = index;
}
@Override
public Object map(Eval eval, Context ctx, Object args)
throws ArgMappingException {
try {
return ((Object[]) args)[index];
}
catch (Exception e) {
throw new ArgMappingException(String.format(ERR010, index));
}
}
}
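Since ArrayIndexMapping ultimately just indexes into an Object[], its behavior can be shown with a self-contained sketch (the class and method names below are illustrative stand-ins, not part of the Scripty library):

```java
// Standalone illustration of what ArrayIndexMapping.map() does: fetch one
// positional argument from an Object[] where index 0 is the command name.
public class ArrayIndexMappingDemo {

    static Object fetch(Object[] args, int index) {
        try {
            return args[index];
        } catch (Exception e) {
            // Mirrors ERR010: a bad index (or null array) becomes one error.
            throw new IllegalArgumentException(String.format(
                    "Error fetching argument %d from argument array.", index));
        }
    }

    public static void main(String[] unused) {
        Object[] args = {"print", "hello", 42}; // args[0] is the command name
        System.out.println(fetch(args, 1));     // prints: hello
        System.out.println(fetch(args, 2));     // prints: 42
    }
}
```

In the real class, the {@code ArgListBuilder} creates one such mapping per declared parameter position, and {@code CmdMethodInjector} invokes them to bind arguments to command-method parameters.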
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,748 |
{"url":"http:\/\/forum.nasaspaceflight.com\/index.php?PHPSESSID=k22vmhc5irvr78mhvjdhhmnb11&topic=43385.300","text":"#### dustinthewind\n\n\u2022 Full Member\n\u2022 Posts: 611\n\u2022 U.S. of A.\n\u2022 Liked: 246\n\u2022 Likes Given: 267\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #300 on: 09\/23\/2017 12:35 AM \u00bb\n\nTime as the Fourth dimension confuses me. In the three spatial dimensions, A co-ordinate (x,y,z) can be defined regardless of the inertial reference frame you are in. Yet, by the same rules, time cannot be given a particular co-ordinate point, as there is no \"Universal Time\", and ll time is relative to the particular inertial reference frame from which it is being measured.\n\nHow can you define a co-ordinate system without any fixed co-ordinates (along the time axis)?\n\nIt isn't really that confusing.\u00a0 Time is variable in the rate at which is passes.\u00a0 Distance is also variable in the rate at which is passes.\u00a0 As the universe has freedom in dimension at which objects move through space it also has some freedom to change the rate at which time passes.\n\nDid you know the magnetic field would not exist were it not for this freedom?\u00a0 Let us say the current in an observing magnet travels in circles.\u00a0 Current is made up of charges and one charge in the observing magnet sees the current circling in the other magnet.\u00a0 This observing charge sees current in the opposite magnet is moving in the opposite direction as it is.\u00a0 By relativity charges moving in the opposite direction are slowed in time so move slower.\u00a0 This observing charge sees other charges in the other magnet as moving with it.\u00a0 These charges moving with the observing charge by relativity move faster in time, so these charges spend less time existing where they move faster and more time where they move slower.\n\nOnly the negative charges are moving in the circle so it is the negative charges that spend more time where time is 
perceived to be slower.\u00a0 The positive charges are not moving in a circle so they remain evenly distributed around the cricle.\u00a0 This creates a dipole field but this dipole field changes depending on the observing charge.\n\nhttp:\/\/www.spacetimetravel.org\/tompkins\/node7.html\n\nThis is exactly what a magnetic field is, a dipole electric field that changes depending on the observing charge.\u00a0 Magnetic field lines are the electric potential lines.\u00a0 A velocity dependent (direction and magnitude) dipole electric field - perpendicular to the potential lines.\u00a0 The magnetic field is actually a relativistic electric field.\u00a0 The magnetic field describes the relativistic aspects of the electric field and the standard electric field is used to describe the non-relativistic aspects.\n\nNow is there some deeper meaning to the rate of time passing being a variable?\u00a0 Possibly but the fact that it has a degree of freedom makes it an extra dimension.\n\u00ab Last Edit: 09\/23\/2017 12:44 AM by dustinthewind \u00bb\n\n#### WarpTech\n\n\u2022 Full Member\n\u2022 Posts: 1301\n\u2022 Do it!\n\u2022 Vista, CA\n\u2022 Liked: 1350\n\u2022 Likes Given: 1813\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #301 on: 09\/23\/2017 12:49 AM \u00bb\n\nThe issue is about \"Reciprocity\" not symmetry.\n\nThis page, https:\/\/en.wikipedia.org\/wiki\/Time_dilation has a well written section on Reciprocity in SR, \"velocity time dilation\". Then in the next section on Gravitational time dilation it says;\n\n\"Contrarily to velocity time dilation, in which both observers measure the other as aging slower (a reciprocal effect), gravitational time dilation is not reciprocal. This means that with gravitational time dilation both observers agree that the clock nearer the center of the gravitational field is slower in rate, and they agree on the ratio of the difference.\"\n\nThe Hafele and Keating experiment supports no-reciprocity. 
Even the Michelson-Moorley experiment does not demonstrate reciprocity. So far, I have not found any experimental evidence to support a reciprocity effect. Gravitational time dilation is the result of the Equivalence principle alone, not any particular solution of GR.\n\nReciprocity is a requirement of the Lorentz transformation and the Lorentz group, it is assumed in its derivation. Reciprocity is what forces the paradox to happen. Reciprocity is not one of the postulates of SR. It is an assumption that reciprocity is required for \"The laws of physics to remain invariant in all inertial reference frames\". However, it is trivial to show that the law of physics remain unchanged, even when the flat Minkowski metric is transformed by a constant coefficient \"A\", in such a way that;\n\nds2 = (1\/A)*c2dt2 - A*(dx2 + dy2 + dz2)\n\nResulting in a scaled system of units where \"force\" is an invariant wrt the constant \"A\". Leaving all physical laws and experimental data, including EM fields unchanged, but the transformation from A=1 to A>1 is not reciprocal. In the latter time is slow, in the former it's not. No reciprocity, no paradox. IMO this too is the result of the Equivalence principle, when one body accelerates to velocity v=0.6c and the other does not. The end result is not reciprocal.\n\nAs far as I'm concerned, until someone shows evidence of reciprocity, it is by no means proven.\n\nI guess I know Puthoff wrote this equation this way. 
https:\/\/scholar.google.com\/scholar?cluster=17157422968110203841&hl=en&as_sdt=0,26\n\nds2 = (1\/K)*c2dt2 - K*(dx2 + dy2 + dz2)\n\nwhere A=K\n\nI am not sure why he converts c_o*t_o to c*t\/K when it seems the conversion should be K*c=c_o and t*sqrt(K)=t_o so c_o*t_o=c*t*K .\n\nHowever, I found it a bit easier to think of it in terms of the shrunken ruller.\u00a0 For the distance that light traverses over a time in a polarized vacuum greater than 1 or K>1 .\u00a0 (c2\/K2)(t2*K)=(c*t)2\/K .\u00a0 In that space a person with a ruler measures light but their ruler shrinks by Puthoff's equatons such that dx2\/K is the non local length of the persons modified ruler.\u00a0 As a result their ruler scales exactly with the distance traversed by light so that they measure the same exact local speed of light as a person with a non-contracted ruler.\n\nThis of course scales the metric such that the metric near gravitational sources shrinks.\u00a0 The gradient in the metric forces a curvature on space and time.\n...\n\nHi Dustin,\n\nI used \"A\" to represent a constant, because \"K = K(x,y,z,t)\" should be reserved as a variable function of the coordinates. Some people around here don't like it when a redefine letters of the alphabet from one post to another.\n\nLook at it this way;\n\nds2 = (1\/K)*c2dt2 - K*(dx2 + dy2 + dz2) = c2dt02 - (dx02 + dy02 + dz02)\n\nThen you should see that space-time interval, ds2 does not change from flat space-time when Puthoff transforms length and time. It only affects the rulers and clocks.\n\nWhat I'm trying to figure out is; that when accelerations are involved to change relative velocities, the equivalence principle breaks reciprocity in SR. The object has actually changed its relative potential, just like it would in a gravitational field. It is obvious when we use a turntable to do the experiment, but when objects are moving toward or away from each other, it's not so clear. 
What I need most is just more time to relax and think about this stuff. It's not high on my priority list right now.\n\u00ab Last Edit: 09\/23\/2017 12:59 AM by WarpTech \u00bb\n\n#### dustinthewind\n\n\u2022 Full Member\n\u2022 Posts: 611\n\u2022 U.S. of A.\n\u2022 Liked: 246\n\u2022 Likes Given: 267\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #302 on: 09\/23\/2017 01:14 AM \u00bb\n\nThe issue is about \"Reciprocity\" not symmetry.\n\nThis page, https:\/\/en.wikipedia.org\/wiki\/Time_dilation has a well written section on Reciprocity in SR, \"velocity time dilation\". Then in the next section on Gravitational time dilation it says;\n\n\"Contrarily to velocity time dilation, in which both observers measure the other as aging slower (a reciprocal effect), gravitational time dilation is not reciprocal. This means that with gravitational time dilation both observers agree that the clock nearer the center of the gravitational field is slower in rate, and they agree on the ratio of the difference.\"\n\nThe Hafele and Keating experiment supports no-reciprocity. Even the Michelson-Moorley experiment does not demonstrate reciprocity. So far, I have not found any experimental evidence to support a reciprocity effect. Gravitational time dilation is the result of the Equivalence principle alone, not any particular solution of GR.\n\nReciprocity is a requirement of the Lorentz transformation and the Lorentz group, it is assumed in its derivation. Reciprocity is what forces the paradox to happen. Reciprocity is not one of the postulates of SR. It is an assumption that reciprocity is required for \"The laws of physics to remain invariant in all inertial reference frames\". 
However, it is trivial to show that the law of physics remain unchanged, even when the flat Minkowski metric is transformed by a constant coefficient \"A\", in such a way that;\n\nds2 = (1\/A)*c2dt2 - A*(dx2 + dy2 + dz2)\n\nResulting in a scaled system of units where \"force\" is an invariant wrt the constant \"A\". Leaving all physical laws and experimental data, including EM fields unchanged, but the transformation from A=1 to A>1 is not reciprocal. In the latter time is slow, in the former it's not. No reciprocity, no paradox. IMO this too is the result of the Equivalence principle, when one body accelerates to velocity v=0.6c and the other does not. The end result is not reciprocal.\n\nAs far as I'm concerned, until someone shows evidence of reciprocity, it is by no means proven.\n\nI guess I know Puthoff wrote this equation this way. https:\/\/scholar.google.com\/scholar?cluster=17157422968110203841&hl=en&as_sdt=0,26\n\nds2 = (1\/K)*c2dt2 - K*(dx2 + dy2 + dz2)\n\nwhere A=K\n\nI am not sure why he converts c_o*t_o to c*t\/K when it seems the conversion should be K*c=c_o and t*sqrt(K)=t_o so c_o*t_o=c*t*K .\n\nHowever, I found it a bit easier to think of it in terms of the shrunken ruller.\u00a0 For the distance that light traverses over a time in a polarized vacuum greater than 1 or K>1 .\u00a0 (c2\/K2)(t2*K)=(c*t)2\/K .\u00a0 In that space a person with a ruler measures light but their ruler shrinks by Puthoff's equatons such that dx2\/K is the non local length of the persons modified ruler.\u00a0 As a result their ruler scales exactly with the distance traversed by light so that they measure the same exact local speed of light as a person with a non-contracted ruler.\n\nThis of course scales the metric such that the metric near gravitational sources shrinks.\u00a0 The gradient in the metric forces a curvature on space and time.\n...\n\nHi Dustin,\n\nI used \"A\" to represent a constant, because \"K = K(x,y,z,t)\" should be reserved as a variable function of 
the coordinates. Some people around here don't like it when a redefine letters of the alphabet from one post to another.\n\nLook at it this way;\n\nds2 = (1\/K)*c2dt2 - K*(dx2 + dy2 + dz2) = c2dt02 - (dx02 + dy02 + dz02)\n\nThen you should see that space-time interval, ds2 does not change from flat space-time when Puthoff transforms length and time. It only affects the rulers and clocks.\n\nWhat I'm trying to figure out is; that when accelerations are involved to change relative velocities, the equivalence principle breaks reciprocity in SR. The object has actually changed its relative potential, just like it would in a gravitational field. It is obvious when we use a turntable to do the experiment, but when objects are moving toward or away from each other, it's not so clear. What I need most is just more time to relax and think about this stuff. It's not high on my priority list right now.\n\nDoesn't the speed of light co=K*c such that c2ot2o = K*c2t2\u00a0 Why not use co?\n\nI have noticed during acceleration the tilting of the light cone happens during acceleration symbolizing travel through time as one passes through space.\u00a0 Their space axis now passes through time - their time axis now passes through space (space\/time).\u00a0 The space axis always tilting up in time in the direction of travel.\u00a0 Time travel always being into the future it is the individual that accelerates that ages slower and travels into the future.\n\nI would be curious however to explore 2 individuals who exist traveling through the universe at c\/8.\u00a0 One leaves their sibling at c\/8 and decelerates so their light cone is now normal traveling away from their sibling at c\/8 but stationary w.r.t. 
the light cone (he should technically age faster now).\u00a0 - their light cone is not tilted while his siblings remains at c\/8.\u00a0 However, now to get back the sibling that left must accelerate to exceed c\/8.\u00a0 Now the sibling who left should technically age slower during this part of the trip.\u00a0 I suppose the answer should be in the math.\n\nI would suppose that is just one perspective from a set frame.\u00a0 When they meet up their ages should be the same in all frames suggesting relativity may some what hide the concept of a frame with out a tilted light cone.\n\nEdit: I think I see what is going on with Puthoff's S^2= metric.\u00a0 By using c instead of cK this equation describes the difference in distance slow light would traverse in the slower time as opposed to the distance the normal light traverses in slower time.\n\nMistake made fixed to show correct equation:\n\nc(K)2dt(K)2 - (x[K]2+y[K]2+z[K]2) = c2dt2\/K - (x2+y2+z2)*K =\nc(K)2dt(K)2 - c2dt(K)2 = c2dt2\/K - c2dt2\n\n\u00ab Last Edit: 09\/25\/2017 01:15 AM by dustinthewind \u00bb\n\n#### WarpTech\n\n\u2022 Full Member\n\u2022 Posts: 1301\n\u2022 Do it!\n\u2022 Vista, CA\n\u2022 Liked: 1350\n\u2022 Likes Given: 1813\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #303 on: 09\/23\/2017 03:52 AM \u00bb\n\nLook at it this way;\n\nds2 = (1\/K)*c2dt2 - K*(dx2 + dy2 + dz2) = c2dt02 - (dx02 + dy02 + dz02)\n\nThen you should see that space-time interval, ds2 does not change from flat space-time when Puthoff transforms length and time. It only affects the rulers and clocks.\n\nDoesn't the speed of light co=K*c such that c2ot2o = K*c2t2\u00a0 Why not use co?\n\n...\n\nThe coordinate speed of light is found by setting ds2 = 0. The coordinate speed of light in the x direction would be;\n\ncK = c\/K = dx\/dt\n\nIf you do it this way, you don't need c0. I find it confuses a lot of people if you us c = c0\/K. 
It's best to be specific, or define a different variable, cK for the coordinate speed.\n\n#### WarpTech\n\n\u2022 Full Member\n\u2022 Posts: 1301\n\u2022 Do it!\n\u2022 Vista, CA\n\u2022 Liked: 1350\n\u2022 Likes Given: 1813\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #304 on: 09\/25\/2017 12:10 AM \u00bb\n....\n\nc2dt(K)2 - (x[K]2+y[K]2+z[K]2) = c2dt2\/K - (x2+y2+z2)*K\n\nThis is okay, but...\n\nc2dt2\/K - c(K)2dt(K)2 = c2dt2\/K - c2dt2K\n\nThis makes no sense. c(K)2dt(K)2 = (cdt)2\/K3,\nassuming c(K) = c\/K.\n\n#### dustinthewind\n\n\u2022 Full Member\n\u2022 Posts: 611\n\u2022 U.S. of A.\n\u2022 Liked: 246\n\u2022 Likes Given: 267\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #305 on: 09\/25\/2017 12:16 AM \u00bb\n\nTime as the Fourth dimension confuses me. In the three spatial dimensions, A co-ordinate (x,y,z) can be defined regardless of the inertial reference frame you are in. Yet, by the same rules, time cannot be given a particular co-ordinate point, as there is no \"Universal Time\", and ll time is relative to the particular inertial reference frame from which it is being measured.\n\nHow can you define a co-ordinate system without any fixed co-ordinates (along the time axis)?\n\nMy apologies.\u00a0 I think I see what your talking about now.\u00a0 I am not sure this will help but maybe the magnetic field is a good way to visualize multiple probabilities in time.\u00a0 The magnetic field being velocity dependent (direction and magnitude.) .\u00a0 Relative velocity also has connections with changes in time.\u00a0 For a magnetic field it seems to represent multiple probabilities that might exist simultaneously.\u00a0 I.e. 
a dipole relativistic electric field that changes depending on the observer.\u00a0 This field appears to accommodate all observers\u00a0 such that a particular observation collapses that probability and gives an actual observation.\n\nThis field of probability accommodates all directions x,y,z velocity, and maybe charge.\u00a0 It does almost seem like multiple dimensions of possibility depending on the observer though unlike quantum I guess it isn't quite as random.\u00a0 Unless the observers location and momentum were uncertain dx*dp.\n\nI have a tendency to want to think of quantum mechanics as a field of possibilities that exist simultaneously.\u00a0 Then when an observer interacts the field of probabilities collapses.\u00a0 Similar to the universe stitching up some uncertainty in time as to what actually happens.\u00a0 This to me almost suggest Wheeler-Feynman absorber theory or something similar to it.\u00a0 https:\/\/en.wikipedia.org\/wiki\/Wheeler%E2%80%93Feynman_absorber_theory\n\n#### dustinthewind\n\n\u2022 Full Member\n\u2022 Posts: 611\n\u2022 U.S. of A.\n\u2022 Liked: 246\n\u2022 Likes Given: 267\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #306 on: 09\/25\/2017 12:57 AM \u00bb\n....\n\nc2dt(K)2 - (x[K]2+y[K]2+z[K]2) = c2dt2\/K - (x2+y2+z2)*K\n\nThis is okay, but...\n\nc2dt2\/K - c(K)2dt(K)2 = c2dt2\/K - c2dt2K\n\nThis makes no sense. c(K)2dt(K)2 = (cdt)2\/K3,\nassuming c(K) = c\/K.\n\nAck! 
your right, I had it backwards.\u00a0 I should have written c(K)2dt(K)2 - c2dt(K)2 = c2dt2\/K - c2dt2K\n\nemphases on the 2nd speed of light being not a function of K.\n\nAre you sure c(K)2dt(K)2 = (cdt)2\/K3 ?\n\n$\\begin{matrix}&space;\\Delta&space;t(K)=&space;\\Delta&space;t&space;\\sqrt{K}&space;\\,\\,\\,\\,&space;,&space;&&space;\\Delta&space;r(K)=&space;\\frac{\\Delta&space;r}{\\sqrt{K}}&space;\\,\\,\\,\\,,&space;&&space;\\frac{\\Delta&space;r(K)}{\\Delta&space;t(K)}=\\frac{\\Delta&space;r}{\\Delta&space;t\\,K}=c(K)&space;\\\\&space;c(K)^{2}dt(K)^{2}=&space;c^{2}\\,dt^2\\,\\frac{K}{K^{2}}&space;=&space;\\frac{c^{2}dt^{2}}{K}\\,\\,\\,\\,&space;&&space;&&space;\\end{matrix}$\n\u00ab Last Edit: 09\/25\/2017 01:16 AM by dustinthewind \u00bb\n\n#### WarpTech\n\n\u2022 Full Member\n\u2022 Posts: 1301\n\u2022 Do it!\n\u2022 Vista, CA\n\u2022 Liked: 1350\n\u2022 Likes Given: 1813\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #307 on: 09\/25\/2017 04:16 AM \u00bb\n....\n\nc2dt(K)2 - (x[K]2+y[K]2+z[K]2) = c2dt2\/K - (x2+y2+z2)*K\n\nThis is okay, but...\n\nc2dt2\/K - c(K)2dt(K)2 = c2dt2\/K - c2dt2K\n\nThis makes no sense. c(K)2dt(K)2 = (cdt)2\/K3,\nassuming c(K) = c\/K.\n\nAck! 
your right, I had it backwards.\u00a0 I should have written c(K)2dt(K)2 - c2dt(K)2 = c2dt2\/K - c2dt2K\n\nemphases on the 2nd speed of light being not a function of K.\n\nAre you sure c(K)2dt(K)2 = (cdt)2\/K3 ?\n\n$\\begin{matrix}&space;\\Delta&space;t(K)=&space;\\Delta&space;t&space;\\sqrt{K}&space;\\,\\,\\,\\,&space;,&space;&&space;\\Delta&space;r(K)=&space;\\frac{\\Delta&space;r}{\\sqrt{K}}&space;\\,\\,\\,\\,,&space;&&space;\\frac{\\Delta&space;r(K)}{\\Delta&space;t(K)}=\\frac{\\Delta&space;r}{\\Delta&space;t\\,K}=c(K)&space;\\\\&space;c(K)^{2}dt(K)^{2}=&space;c^{2}\\,dt^2\\,\\frac{K}{K^{2}}&space;=&space;\\frac{c^{2}dt^{2}}{K}\\,\\,\\,\\,&space;&&space;&&space;\\end{matrix}$\n\nBy your own equation above; dt(K)2 = dt2\/K.\n\n#### dustinthewind\n\n\u2022 Full Member\n\u2022 Posts: 611\n\u2022 U.S. of A.\n\u2022 Liked: 246\n\u2022 Likes Given: 267\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #308 on: 09\/29\/2017 05:29 AM \u00bb\n\nHi Dustin,\n\nI used \"A\" to represent a constant, because \"K = K(x,y,z,t)\" should be reserved as a variable function of the coordinates. Some people around here don't like it when a redefine letters of the alphabet from one post to another.\n\nLook at it this way;\n\nds2 = (1\/K)*c2dt2 - K*(dx2 + dy2 + dz2) = c2dt02 - (dx02 + dy02 + dz02)\n\nThen you should see that space-time interval, ds2 does not change from flat space-time when Puthoff transforms length and time. It only affects the rulers and clocks.\n\nWhat I'm trying to figure out is; that when accelerations are involved to change relative velocities, the equivalence principle breaks reciprocity in SR. The object has actually changed its relative potential, just like it would in a gravitational field. It is obvious when we use a turntable to do the experiment, but when objects are moving toward or away from each other, it's not so clear. What I need most is just more time to relax and think about this stuff. 
It's not high on my priority list right now.\n\nI get it now.\u00a0 The dt and other dx are a functions of K where they are being operated on to put them in the non-variant form dto and dxo\n\nSomething i notice that was interesting was that distance dx^2+dy^2+dz^2 being a ruler could be substituted by the speed of light over a passage of time.\n\nWhen I played with the math:\n$\\begin{matrix}&space;S^2=c^{2}dt[K]^{2}\/K-K\\left(dx[K]^{2}+dy[K]^{2}+dz[K]^{2}\\right)&space;\\\\&space;S^2=c^{2}dt[K]^{2}\/K-K\\left(c[K]^{2}dt[K]^{2}\\right)&space;\\\\&space;S^2=\\left(\\frac{ct[K]}{\\sqrt{K}}-\\sqrt{K}c[K]t[K]\\right)\\left(\\frac{ct[K]}{\\sqrt{K}}+\\sqrt{K}c[K]t[K]\\right)&space;\\\\&space;\\frac{i \\psi}{dt[K]}=\\left(\\frac{ic\\psi}{S\\sqrt{K}}\\pm&space;\\frac{\\sqrt{K}ic[K]\\psi}{S}\\right)&space;\\end{matrix}$\n\nI may not have it quite right but it looks like we get something that almost looks like for the space term dx+dy+dz a retarded wave multiplied by an advanced wave.\u00a0 Or maybe a positive index wave multiplied by a negative index wave.\u00a0 This reminds me of Heidi Fearn's discussion a bit.\n\nNot sure it would really indicate a retarded wave but it seemed interesting. Also interesting reguarding ftl communications if retarded waves can really exist but I am sure nature some how excludes their actual use for that.\n\nEdit: Ok, I do like the standing waves a bit better and what Heidi describes does seem a lot like a standing wave.\u00a0 Standing waves do have the backwards propagating wave if allowed the time.\u00a0 Not sure it takes into account all the quantum phenomena. 
It might.\u00a0 She mentions the advanced and retarded waves concept she used to explain quantum phenomena in a paper about the \"quantum eraser\" mentioned at time stamp 25:20\n\u00ab Last Edit: 09\/30\/2017 05:28 AM by dustinthewind \u00bb\n\n#### WarpTech\n\n\u2022 Full Member\n\u2022 Posts: 1301\n\u2022 Do it!\n\u2022 Vista, CA\n\u2022 Liked: 1350\n\u2022 Likes Given: 1813\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #309 on: 09\/29\/2017 05:35 PM \u00bb\n\nHi Dustin,\n\nI used \"A\" to represent a constant, because \"K = K(x,y,z,t)\" should be reserved as a variable function of the coordinates. Some people around here don't like it when a redefine letters of the alphabet from one post to another.\n\nLook at it this way;\n\nds2 = (1\/K)*c2dt2 - K*(dx2 + dy2 + dz2) = c2dt02 - (dx02 + dy02 + dz02)\n\nThen you should see that space-time interval, ds2 does not change from flat space-time when Puthoff transforms length and time. It only affects the rulers and clocks.\n\nWhat I'm trying to figure out is; that when accelerations are involved to change relative velocities, the equivalence principle breaks reciprocity in SR. The object has actually changed its relative potential, just like it would in a gravitational field. It is obvious when we use a turntable to do the experiment, but when objects are moving toward or away from each other, it's not so clear. What I need most is just more time to relax and think about this stuff. 
It's not high on my priority list right now.\n\nI get it now.\u00a0 The dt and other dx are a functions of K where they are being operated on to put them in the non-variant form dto and dxo\n\nSomething i notice that was interesting was that distance dx^2+dy^2+dz^2 being a ruler could be substituted by the speed of light over a passage of time.\n\nWhen I played with the math:\n$\\begin{matrix}&space;S^2=c^{2}dt[K]^{2}\/K-K\\left(dx[K]^{2}+dy[K]^{2}+dz[K]^{2}\\right)&space;\\\\&space;S^2=c^{2}dt[K]^{2}\/K-K\\left(c[K]^{2}dt[K]^{2}\\right)&space;\\\\&space;S^2=\\left(\\frac{ct[K]}{\\sqrt{K}}-\\sqrt{K}c[K]t[K]\\right)\\left(\\frac{ct[K]}{\\sqrt{K}}+\\sqrt{K}c[K]t[K]\\right)&space;\\\\&space;\\frac{i \\psi}{dt[K]}=\\left(\\frac{ic\\psi}{S\\sqrt{K}}\\pm&space;\\frac{\\sqrt{K}ic[K]\\psi}{S}\\right)&space;\\end{matrix}$\n\nI may not have it quite right but it looks like we get something that almost looks like for the space term dx+dy+dz a retarded wave multiplied by an advanced wave.\u00a0 Or maybe a positive index wave multiplied by a negative index wave.\u00a0 This reminds me of Heidi Fearn's discussion a bit. ...\n\nNot sure it would really indicate a retarded wave but it seemed interesting. Also interesting reguarding ftl communications if retarded waves can really exist but I am sure nature some how excludes their actual use for that.\n\nIt appears you're still doing the math wrong by confusing dt with dt0 and dx with dx0, etc... That's why you're getting weird results. Your second line is simply the flat metric, no dependence on K at all since they all cancel out. The 3rd and 4th line are just confused...\n\nI don't need advanced waves. I consider them simply as partial reflections in the polarizable vacuum. An outgoing EM wave leaving a gravitational field is red-shifted because it's losing energy to partial reflections as the refractive index changes. 
These partial reflected waves behave the same as advanced waves would, and cause the same effects, or so I believe.\n\n\u00ab Last Edit: 09\/29\/2017 05:35 PM by WarpTech \u00bb\n\n#### KelvinZero\n\n\u2022 Senior Member\n\u2022 Posts: 3571\n\u2022 Liked: 483\n\u2022 Likes Given: 124\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #310 on: 10\/03\/2017 09:56 AM \u00bb\nHey I think you guys should check the OP and maybe self moderate.\n\nThe goal is to explain how paradoxes are avoided. The paradoxes are simple so the answers should be similarly simple.\n\nIf you are hoping to sell someone a ticket on your FTL, and they ask what they actually experience, a face full of math are not going to be convincing.\n\nThe topic also is not about arguing an FTL theory is actually part of this universe. It is purely about whether you can explain what behaviour you are even talking about when you say \"FTL\".\n\nWe have basically one \"Good enough for Science fiction\" solution for avoiding FTL paradoxes at the moment: Treating the CMB rest frame as special or, I think equivalently, defining instantaneous according to the CMB temperature: for example, right now the CMB temperature is 2.725\u00b0. So long as any FTL trip takes you to a point where the universe is older, in the sense that the CMB temperature is even colder, I believe no paradox is possible.\n\nParallel universes are often brought up as a solution for FTL and time travel in general. Papers postulating parallel universes don't add anything to the conversation unless they describe what you actually experience when you try to implement a paradox.\n\nI have not yet seen a clear description of what a universe with FTL made possible by parallel universes looks like. 
All I can surmise is that people who jump into their FTL ships drunk and shouting about what a bastard their granddad was tend to never be seen again, but also strangers occasionally pop in and kill innocent people for things they purportedly would have done in the future. Since every action has an aspect of this paradox, I expect you would have a universe where there is an entirely fuzzy relationship between people who enter FTL and people who exit it. Sometimes there are similarities. Sometimes people vanish. Sometimes people with no histories appear. It is not very satisfactory.\n\n#### RSE\n\n\u2022 Member\n\u2022 Posts: 7\n\u2022 Plano, TX\n\u2022 Liked: 1\n\u2022 Likes Given: 0\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #311 on: 11\/28\/2017 08:16 PM \u00bb\nOn the OP original question, I think both WarpTech and meberbs are correct \u2013 to a certain extent. It is a matter of reference frame perspective.\n\nLet me provide the following thought problem, to explain this.\n\nWe have two communication stations, one on Earth and one circling Sirius at 8.7 light years away. They are continuously transmitting data to each other. According to GR, each perceived the incoming transmission from the other planet as having occurred 8.7 years ago. Logs are kept for 20 years at both ends.\n\nSo far, so good.\n\nSomebody builds some form of FTL ship on earth. It travels to Sirius, .7 years, with a complete log of the earth transmissions from before the departure date, back 20 years. (I am not looking at how, or the perspective if the occupants. One set of headaches at a time, please.)\n\nThe ship popped into existence at Sirius. Since it went FTL, did it travel backwards in time? It depends on your reference frame. According to the Earth reference frame, a ship left, period. No paradox. According to the Sirius reference frame, it did travel backwards in time, per GR, but it travelled backwards in time from the future! 
It went backwards in time from exactly 8 years in the future, with information from the future, by Sirius'es reference frame. Sirius now knows, in advance, what the signals are going to be incoming for the next 8 years. Once again, no paradox, and the backward in time requirement of GR is upheld.\n\nThe ship loads the tapes from Sirius for Sirius'es last 20 years of transmissions. It heads back to Earth, with the same .7 travel time.\n\nDoes it arrive before it left? Absolutely not! It arrives 1.4 years after it left, (.7 going, .7 coming back), with the next 8 years of Sirius'es transmissions, in hand. Once again the ship travelled backwards in time, from the future, from Earth's reference frame. No paradox. (The only way the ship could arrive before it left is if it took negative duration for the trip, i.e., if it arrived at Sirius before it left Earth, by Earth's own reference frame. That truly would be time travel. . .\n\u00ab Last Edit: 11\/28\/2017 08:27 PM by RSE \u00bb\n\n#### meberbs\n\n\u2022 Full Member\n\u2022 Posts: 1373\n\u2022 Liked: 1251\n\u2022 Likes Given: 317\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #312 on: 11\/28\/2017 08:45 PM \u00bb\n(I am not looking at how, or the perspective if the occupants. One set of headaches at a time, please.)\nAgreed, significant confusion comes from doing that, and it is irrelevant at the moment.\n\nDoes it arrive before it left? Absolutely not! It arrives 1.4 years after it left, (.7 going, .7 coming back), with the next 8 years of Sirius'es transmissions, in hand. Once again the ship travelled backwards in time, from the future, from Earth's reference frame. No paradox.\nDescribing a situation that doesn't involve a paradox does not mean that there are no situations that involve a paradox.\n\n(The only way the ship could arrive before it left is if it took negative duration for the trip, i.e., if it arrived at Sirius before it left Earth, by Earth's own reference frame. 
That truly would be time travel. . .\nBut there are many reference frames equally valid as the Earth frame that do see the trip as having a negative duration. You have provided no reason why someone in one of those equally valid frames passing by Sirius could not just use the same type of FTL drive in their frame to go to Earth carrying all of the records from the other ship. Since in that frame the Earth ship arrived at Sirius before it left, this ship can go forward in time while it travels FTL to Earth and still arrive before the Earth ship left. I did all of the relevant calculations earlier for this type of situation.\n\n#### WarpTech\n\n\u2022 Full Member\n\u2022 Posts: 1301\n\u2022 Do it!\n\u2022 Vista, CA\n\u2022 Liked: 1350\n\u2022 Likes Given: 1813\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #313 on: 11\/28\/2017 10:29 PM \u00bb\n(I am not looking at how, or the perspective if the occupants. One set of headaches at a time, please.)\nAgreed, significant confusion comes from doing that, and it is irrelevant at the moment.\n\nDoes it arrive before it left? Absolutely not! It arrives 1.4 years after it left, (.7 going, .7 coming back), with the next 8 years of Sirius'es transmissions, in hand. Once again the ship travelled backwards in time, from the future, from Earth's reference frame. No paradox.\nDescribing a situation that doesn't involve a paradox does not mean that there are no situations that involve a paradox.\n\n(The only way the ship could arrive before it left is if it took negative duration for the trip, i.e., if it arrived at Sirius before it left Earth, by Earth's own reference frame. That truly would be time travel. . .\nBut there are many reference frames equally valid as the Earth frame that do see the trip as having a negative duration. 
You have provided no reason why someone in one of those equally valid frames passing by Sirius could not just use the same type of FTL drive in their frame to go to Earth carrying all of the records from the other ship. Since in that frame the Earth ship arrived at Sirius before it left, this ship can go forward in time while it travels FTL to Earth and still arrive before the Earth ship left. I did all of the relevant calculations earlier for this type of situation.\n\nIn SR, time dilation as observed by two observers moving at constant relative velocity wrt each other is \"reciprocal\", meaning each observer sees the other's clock running slow. In GR, time dilation as observed by two observers at rest at different gravitational potentials is not reciprocal. The observer at a lower altitude sees the clock at a higher altitude run \"fast\" not slow. This is also the case when one observer is circling around the other observer at a constant angular speed, while the observer at the center is at rest in an inertial frame (feels no forces). The observer feeling the force pulling him in a circular motion, is observed to have a slower clock than the observer at the center. It is not reciprocal.\n\nTo my knowledge, there have been no experiments, no tests of SR that verify\/prove reciprocity. It is a prediction of the mathematics when objects are moving toward or away from each other, but there is no physical evidence which proves it. Experiments so far have only shown non-reciprocity in the results. Meberbs assumes reciprocity is real and his assertions are based on this assumption. 
I for one do not agree.\n\n\u00ab Last Edit: 11\/28\/2017 10:31 PM by WarpTech \u00bb\n\n#### meberbs\n\n\u2022 Full Member\n\u2022 Posts: 1373\n\u2022 Liked: 1251\n\u2022 Likes Given: 317\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #314 on: 11\/29\/2017 12:00 AM \u00bb\nIn SR, time dilation as observed by two observers moving at constant relative velocity wrt each other is \"reciprocal\", meaning each observer sees the other's clock running slow. In GR, time dilation as observed by two observers at rest at different gravitational potentials is not reciprocal. The observer at a lower altitude sees the clock at a higher altitude run \"fast\" not slow. This is also the case when one observer is circling around the other observer at a constant angular speed, while the observer at the center is at rest in an inertial frame (feels no forces). The observer feeling the force pulling him in a circular motion, is observed to have a slower clock than the observer at the center. It is not reciprocal.\nIt does not matter that the gravitational\/acceleration effect on time dilation is not symmetric. General relativity still has time dilation due to relative velocities, and that part is still symmetric, and still means that FTL allows time travel.\n\nAlso, your description of someone circling someone at rest is incomplete. (For clarity I will refer to person A as the one in the center and B as circling.) A sees B's clock run slower, due to the fact that B is both moving and accelerating. 
B will not see A's clock running faster by the same amount, because the velocity portion of the time dilation is symmetric, and B sees A's clock moving at the difference between the acceleration and velocity effects.\n\nThis kind of thing is seen in GPS satellites where the clock speedup of the satellites from being further out of Earth's gravity well is reduced by the slowdown due to their relative velocity.\n\nTo my knowledge, there have been no experiments, no tests of SR that verify\/prove reciprocity. It is a prediction of the mathematics when objects are moving toward or away from each other, but there is no physical evidence which proves it. Experiments so far have only shown non-reciprocity in the results. Meberbs assumes reciprocity is real and his assertions are based on this assumption. I for one do not agree.\nFalse. Maybe you missed the last post on this topic that you never replied to. I am not basing this on any kind of assumption without experimental support. You simply cannot explain the experimental results without reciprocity.\n\n#### WarpTech\n\n\u2022 Full Member\n\u2022 Posts: 1301\n\u2022 Do it!\n\u2022 Vista, CA\n\u2022 Liked: 1350\n\u2022 Likes Given: 1813\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #315 on: 11\/29\/2017 02:41 AM \u00bb\nTo my knowledge, there have been no experiments, no tests of SR that verify\/prove reciprocity. It is a prediction of the mathematics when objects are moving toward or away from each other, but there is no physical evidence which proves it. Experiments so far have only shown non-reciprocity in the results. Meberbs assumes reciprocity is real and his assertions are based on this assumption. I for one do not agree.\nFalse. Maybe you missed the last post on this topic that you never replied to. I am not basing this on any kind of assumption without experimental support. You simply cannot explain the experimental results without reciprocity.\nFalse. 
Time dilation is caused by damping of the quantum wave functions as explained in my paper, published in the proceedings from Estes Park, last year. As long as you ignore this and the references it contains, there is no point in responding to you.\n\nSimply put; A gravitational field around a planet size object has a greater damping factor near the surface than at higher altitude. A test clock in motion relative to this gravitational source, will have a higher damping factor than\u00a0 clock a rest relative to the source. As long as you consider the vacuum field and its relative damping factor, there is no time travel. Clocks tick at different rates due to the relative damping factor. Lorentz Transformations are only a description of what is observed due to the c being a local constant, they are not the cause of it.\n\u00ab Last Edit: 11\/29\/2017 02:45 AM by WarpTech \u00bb\n\n#### meberbs\n\n\u2022 Full Member\n\u2022 Posts: 1373\n\u2022 Liked: 1251\n\u2022 Likes Given: 317\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #316 on: 11\/29\/2017 06:40 AM \u00bb\nFalse. Time dilation is caused by damping of the quantum wave functions as explained in my paper, published in the proceedings from Estes Park, last year. As long as you ignore this and the references it contains, there is no point in responding to you.\nYou asked for experiments that show that time dilation in relativity works the way that relativity says it does. You have been provided with those experiments and explanations of them but you continue to ignore the results.\u00a0 You have not demonstrated any way that your claims that special relativity is wrong can be consistent with the results of these experiments.\n\nLorentz Transformations are only a description of what is observed due to the c being a local constant, they are not the cause of it.\nCause is irrelevant for this discussion, only the resulting behavior. 
I don't think you ever gave a clear answer to whether your theory produces the same results as General Relativity or not (at least at macroscopic scales that are relevant to this discussion). If it is the same results, then there is no need to discuss your theory, standard GR works perfectly well for this discussion. If not, then before you insist on discussing your theory, you need to work out how it can somehow still explain the experimental results that were listed.\n\nRemember that I already demonstrated in this thread that your explanation of the \"twin paradox\" was inconsistent. When you did not understand a basic part of special relativity, I am not sure why you would think that your theory of quantum gravity would be consistent.\n\n#### WarpTech\n\n\u2022 Full Member\n\u2022 Posts: 1301\n\u2022 Do it!\n\u2022 Vista, CA\n\u2022 Liked: 1350\n\u2022 Likes Given: 1813\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #317 on: 11\/29\/2017 02:25 PM \u00bb\nFalse. Time dilation is caused by damping of the quantum wave functions as explained in my paper, published in the proceedings from Estes Park, last year. As long as you ignore this and the references it contains, there is no point in responding to you.\nYou asked for experiments that show that time dilation in relativity works the way that relativity says it does. You have been provided with those experiments and explanations of them but you continue to ignore the results.\u00a0 You have not demonstrated any way that your claims that special relativity is wrong can be consistent with the results of these experiments.\n\nLorentz Transformations are only a description of what is observed due to the c being a local constant, they are not the cause of it.\nCause is irrelevant for this discussion, only the resulting behavior. 
I don't think you ever gave a clear answer to whether your theory produces the same results as General Relativity or not (at least at macroscopic scales that are relevant to this discussion). If it is the same results, then there is no need to discuss your theory, standard GR works perfectly well for this discussion. If not, then before you insist on discussing your theory, you need to work out how it can somehow still explain the experimental results that were listed.\n\nRemember that I already demonstrated in this thread that your explanation of the \"twin paradox\" was inconsistent. When you did not understand a basic part of special relativity, I am not sure why you would think that your theory of quantum gravity would be consistent.\n\nThe consistency of my \"model\" with GR is definitively spelled out with equations and examples in my paper, in the proceedings from Estes Park. I am not going to re-write it here for your convenience!!! Until you take the 20 minutes to read it, I ask that you stop the derogatory comments about a paper you have not read.\n\nAs to demonstrating consistency with the experiments, that will require another paper. It's not something I can explain in detail on a forum. I have better things to do with my time than write papers for someone who refuses to read them.\n\n#### RSE\n\n\u2022 Member\n\u2022 Posts: 7\n\u2022 Plano, TX\n\u2022 Liked: 1\n\u2022 Likes Given: 0\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #318 on: 11\/29\/2017 03:36 PM \u00bb\nMeberbs, I would like to \u201cshow my work\u201d in my analysis.\n\n1. At earth time \u201ct\u201d, there is a continuous stream of data going to Sirius. It arrives at Sirius 8.7 years later. This is standard Relativity.\n\n2. At earth time \u201ct prime\u201d, which is exactly 8.7 years after earth time \u201ct\u201d, a FTL ship set out for Sirius. 
Its cargo is a complete transcript of all the transmission from earth to Sirius for 20 years, up to the moment that the FTL ship starts. The ship arrives at Sirius, .7 years from when it left earth.\n\n3 At Sirius, the transcripts are matched up. They are found to be in sync, up to the current transmissions being received at Sirius, but the transmissions on the ship now have an extra 8 years worth of data, that has not been received by Sirius yet. (Earth time \u201ct prime\u201d - .7 years of transit equal 8 years) Under Relativity, where did this data come from?\n\n4.\u00a0 From Sirius's reference frame, under Relativity, it could have only come from the future. 8 years from the future, to be precise. How could it get from the future to Sirius, at the time the data arrived at Sirius? It has to travel backward in time, from the future, exactly 8 years. This is the calculation required from Relativity, as you point out.\n\n5. Here is where it gets tricky. Where is the future starting point? From Sirius's reference frame, it is 8 years in the future from the point where the ship, bearing the data, arrived. Yet the arrival time for the ship, at Sirius, is actually \u201cin the future\u201d to start with. Let me explain.\n\nThe data being received at Sirius from earth's beam is from the past. Exactly 8.7 years in the past. So when you match up the data, from the FTL ship, it is being done at a time 8 years in the future from when the data was sent from earth, via beam (at time \u201ct\u201d). It can't be any earlier, as the data to match with (from the earth beam) would not have arrived at Sirius yet for comparison. So upon the ship arriving at Sirius, it did so .7 years after it left earth at \u201ct prime\u201d, by Sirius's reference frame. It has to, for the comparison to match. (The ship left earth at \u201ct prime\u201d 8.7 years after the signals to be compared are transmitted. It takes 8.7 years for them to arrive from the transmission. 
For them to match, both data records have to be at the same place, at the same time, in order to be matched.) Note, 8 years in the \u201cfuture\u201d, from Sirius's reference frame, coincides with the departure of the ship, minus the transit time, in earth's reference frame.\n\n6. From earth's reference frame, the results is very simple. The ship disappears. Period. There is no way to observe the ship, as any method of observation is limited by c. Assuming that the FTL ship arrived at Sirius, after .7 years transit, it could not be observed on earth for 8.7 +.7 years.\n\n7. On the return to earth, everything applies the same way. The ship leaves Sirius, (with the Sirius's data transcripts) and returns to earth. It takes .7 years transit time. The ship is now perceived from earth's reference frame as returning from \u201cthe future\u201d, coming backwards in time, as required by Relativity. It has 8 years of future data, just like when it arrived at Sirius. But it does not arrive before it left, because it left in the Sirius's reference frame, which would have to be perceived from the earth's reference frame as \u201cthe future\u201d, the the amount of travel backwards in time cannot exceed the difference between the \u201cdistance\u201d minus the transit time required.\n\nMeberbs, I went to all this detail to try to determine exactly where the \u201cpoint of asymmetry\u201d between the two viewpoints arises. Thank you for your time.\n\n#### meberbs\n\n\u2022 Full Member\n\u2022 Posts: 1373\n\u2022 Liked: 1251\n\u2022 Likes Given: 317\n##### Re: Any resolutions to FTL paradoxes?\n\u00ab Reply #319 on: 11\/29\/2017 04:19 PM \u00bb\nThe consistency of my \"model\" with GR is definitively spelled out with equations and examples in my paper, in the proceedings from Estes Park. I am not going to re-write it here for your convenience!!! 
Until you take the 20 minutes to read it, I ask that you stop the derogatory comments about a paper you have not read.\nThe answer is either a yes or a no. If you don't show that the answer is in general a \"yes\" then the answer is no.\n\nAs to demonstrating consistency with the experiments, that will require another paper. It's not something I can explain in detail on a forum. I have better things to do with my time than write papers for someone who refuses to read them.\nWhat you are saying here is that it is not consistent with general relativity, because if it was, you wouldn't need another paper to show its consistency with the listed experiments. Since you need another paper to do so, it is clear that the paper you have written does not answer the questions I asked. (Anyway, I have skimmed it, but it looked like you didn't actually answer the questions I have, which your statements here now confirm)\n\nA paper showing that your model is consistent with basic tests of relativity is something you should want to write anyway if you actually care about your theory. 
The fact that I showed your explanation of a basic application of relativity (the \"twin paradox\") was inconsistent, means that you need to go back and update your model to account for what you learned in that discussion, and until you have done so, I don't know why I should spend time reviewing something that has known flaws.\n\nTags:","date":"2017-12-12 06:30:14","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 4, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6549243927001953, \"perplexity\": 1255.6937822670604}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-51\/segments\/1512948515309.5\/warc\/CC-MAIN-20171212060515-20171212080515-00140.warc.gz\"}"} | null | null |
\section{Introduction}
\label{sec:introduction}
\emph{Low-density parity-check} (LDPC) codes were first introduced by Gallager in 1962 and rediscovered in 1995 by MacKay~\cite{gallager1962low, davey1998low}. These codes are a family of block codes with good theoretical and practical properties and have since been used in a wide variety of wireless communication systems including the 5G new radio (NR) technology standard.
Central to the design of LDPC codes is a sparse matrix that describes dependencies (connections) between information bits and redundant check bits. The matrix translates to a \emph{factor} graph (also known as a \emph{Tanner} graph) representation of the code and is used in the encoding and decoding procedures. The factor graph consists of \emph{variable} nodes, which represent all bits in a codeword, and \emph{check} nodes that represent the parity check factors. The decoding procedure is applied to the factor graph directly and uses an iterative \emph{message passing} algorithm that passes messages between variable nodes and check nodes to update the received bit values. The basic method of decoding uses the \emph{sum-product} algorithm, which is equivalent to the \emph{loopy belief propagation} (LBP) algorithm used for performing generic inference tasks on \emph{probabilistic graphical models} (PGMs)~\cite{mackay2003information, koller2009probabilistic}.
Factor graphs are popular throughout the PGM literature, arguably due to their simple construction, which entails a decoupling of all random variables from their factors. However, a joint probability distribution can be factorised into groups of random variables by considering how factors are manipulated during the inference process. This can lead to better-performing PGMs according to~\cite{koller2009probabilistic}. We are reminded of a meditation written by John Donne containing the well-known phrase ``\emph{No man is an island entire of itself}''. In relation to our study, this phrase guards against ``individualising'' the variable nodes from the full joint distribution, since doing so removes important inter-dependencies between bits that are useful during the decoding procedure. To explore the alternative, we consider a more general probabilistic framework called \emph{cluster} graphs, which pass joint distributions as messages between nodes instead of univariate distributions.
In general, PGMs with cycles (or loops) are not guaranteed to converge to exact solutions, and for interesting problems (such as LDPC codes) conversion to tree-structured graphs is intractable. While cluster graphs are not necessarily tree structured, they must satisfy specific structural constraints, such as the \emph{running intersection property} (RIP), which ensure accurate inference when using a message passing approach~\cite{koller2009probabilistic}. The \emph{layered trees running intersection property} (LTRIP) algorithm developed in~\cite{streicher2017graph} compiles cluster graphs from a set of input factors and guarantees the RIP, which is a step towards tree-structured PGMs.
In~\cite{streicher2017graph,streicher2021strengthening}, the authors found that cluster graphs outperformed factor graphs in inference tasks related to solving complex Sudoku puzzles. Our study extends the application domain of the LTRIP algorithm, which we use to compile a cluster graph of an LDPC code. We highlight the LDPC code's performance by investigating the computational benefits and accuracy improvements compared to a factor graph.
\textbf{Our contribution}: We view LDPC codes more generally as PGMs and represent them as cluster graphs compiled by a general-purpose algorithm called LTRIP developed in~\cite{streicher2017graph}. To the best of our knowledge, this is the first LDPC code represented as a cluster graph, which may be due to a lack of available algorithms that can construct valid cluster graphs. We develop a message passing schedule, since the message order in cluster graphs can influence convergence speed, accuracy, and the computational cost of inference. We demonstrate (1) computational benefits of a cluster graph approach over a standard factor graph approach, (2) convergence benefits, and (3) more accurate marginal probabilities that improve the bit error performance of the LDPC code.
This paper is structured as follows: In Section~\ref{sec:ldpc-as-pgms}, we explain how LDPC codes are represented in a more general PGM framework. Section~\ref{sec:contrasting} compares factor graphs to cluster graphs. A message schedule developed for the cluster graph is discussed in Section~\ref{sec:message-passing-schedule}. The results are shown in Section~\ref{sec:results}, and finally our conclusions and future work are presented in Section~\ref{sec:conclusion}.
\section{Low-density parity-check codes as PGMs}
\label{sec:ldpc-as-pgms}
This section provides an overview of error correction using LDPC codes and introduces a more general representation of LDPC codes using cluster graphs instead of factor graphs. We end the section by explaining our message passing approach.
\subsection{Error correction with LDPC codes}
\emph{Low-density parity-check} (LDPC) codes are a family of linear block codes used as an error correction measure in digital communication systems. A noisy transmission link may cause bit errors that corrupt transmitted messages. LDPC codes use an encoding procedure that adds parity check bits to the message bits that help detect and fix bit errors at the receiver.
A check bit's value is determined by applying an even parity constraint to a subset of message bits, which ensures that the check bit together with the subset of message bits contain an even number of ones. An LDPC code comprises multiple even parity constraints, and some constraints may include other check bits to enhance the code's overall protection capability. The original message bits together with the encoded check bits are known as a \emph{codeword}. The ratio between the number of message bits $K$ and the total number of codeword bits $N$ is known as the \emph{code rate} $\frac{K}{N}$, which expresses the portion of useful data sent to a receiver. Lower code rate LDPC codes are more robust to bit errors, but exhibit a lower data transfer rate.
From a decoding perspective, a valid codeword must satisfy all parity check constraints originally imposed by the encoder. This can be verified by checking that the modulo-2 sum (XOR) of each check bit and all its bit dependencies equals zero. A group of such checks, called a \emph{syndrome}, can indicate which parity check constraints are not satisfied (typically using matrix notation). Together, the parity check constraints form a sparse system of linear equations that should be solvable using deductive logic. Intuitively, bits that are known to be correct (indicated by the syndrome) can be back-substituted into other parity check constraints to find correct values for other bits. Decoding in this way can be done mathematically using methods such as Gaussian elimination, but is computationally infeasible for large LDPC codes.
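The syndrome check described above can be sketched in a few lines. We assume an illustrative parity-check matrix $H$ for a Hamming (7,4) code; the exact bit dependencies chosen here are our own assumption for the sketch.

```python
import numpy as np

# Illustrative parity-check matrix for a Hamming (7,4) code: rows are the
# parity check constraints, columns are codeword bits b0..b6. The exact
# bit dependencies are an assumption for this sketch.
H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def syndrome(codeword):
    """Modulo-2 sum of each check bit with its bit dependencies."""
    return H @ codeword % 2

valid = np.zeros(7, dtype=int)   # the all-zero codeword satisfies every check
assert not syndrome(valid).any()

corrupted = valid.copy()
corrupted[2] ^= 1                # flip bit b2
s = syndrome(corrupted)          # non-zero entries flag unsatisfied constraints
```

Because $b_2$ participates in the first two constraints, both of their syndrome entries become non-zero, pointing the decoder at the constraints that a flipped bit violates.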
Some bits must be shared between parity check constraints, which creates necessary associations among the constraints but at the same time may introduce inaccuracies when decoding the received codeword. This is more commonly referred to as \emph{loops} or \emph{cycles} and manifests when a similar set of bits is shared among more than one parity check constraint. A well designed LDPC code does not contain small cycles, and its codewords remain sufficiently separable from one another even after sustaining severe bit errors.
A parity check constraint is better known as a parity check \emph{factor}. An example of such a factor, from a Hamming (7,4) code, with bit dependencies $b_0,b_1,b_2,$ and $b_4$ is shown in Table~\ref{tab:partiy-table}. This is known as a discrete table factor. Note that we assign ones (absolute certainty) to the joint states where the bits contain an even number of ones, with zero (no possibility) given otherwise. In this case, the check bit is $b_4$ and its value is determined from message bits $b_0,b_1,$ and $b_2$ during encoding.
\begin{table}[htb]
\centering
\caption{A non-normalised discrete table factor representing a parity check factor.}
\vspace{5mm}
\begin{tabular}{ |c|c|c|c|c| }
\hline
$b_{0}$ & $b_{1}$ & $b_{2}$ & $b_{4}$ & $\phi(b_{0},b_{1},b_{2},b_{4})$ \\
\hline\hline
0 & 0 & 0 & 0 & 1 \\
\hline
0 & 0 & 1 & 1 & 1 \\
\hline
0 & 1 & 0 & 1 & 1 \\
\hline
\multicolumn{4}{|c|}{\vdots} & \multicolumn{1}{|c|}{\vdots} \\
\hline
1 & 1 & 1 & 1 & 1 \\
\hline
\multicolumn{4}{|c|}{elsewhere} & \multicolumn{1}{|c|}{0} \\
\hline\hline
\end{tabular}
\label{tab:partiy-table}
\end{table}
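The table factor above can be generated programmatically by enumerating all joint states and keeping those with even parity. The helper name \texttt{parity\_factor} below is our own, hypothetical choice, not part of any decoding library.

```python
from itertools import product

# Build the discrete table factor of the parity check: potential 1 for
# every joint state of (b0, b1, b2, b4) with an even number of ones,
# and potential 0 elsewhere.
def parity_factor(num_bits=4):
    return {bits: 1 if sum(bits) % 2 == 0 else 0
            for bits in product((0, 1), repeat=num_bits)}

phi = parity_factor()
assert phi[(0, 0, 0, 0)] == 1   # even number of ones -> allowed
assert phi[(0, 0, 1, 1)] == 1
assert phi[(1, 0, 0, 0)] == 0   # odd parity -> impossible
assert sum(phi.values()) == 8   # half of the 16 joint states are valid
```

As the final assertion shows, exactly half of the $2^4 = 16$ joint states satisfy the even parity constraint, which is what makes the factor (and the parity check matrix) sparse in its allowed configurations.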
Apart from the received codeword, a decoder also requires knowledge of the codeword construction (i.e., the bit dependencies of all parity check factors) and some iterative message passing algorithm that performs the deductive logic. In this study, we consider a different representation of LDPC codes that is motivated in the following section.
\subsection{Representation of LDPC codes}
In the channel coding literature, LDPC codes are represented by a bipartite graph called a \emph{Tanner} graph. A Tanner graph has two distinct sets of nodes with edges between them that express relationships between codeword bits and parity check constraints. We use a Hamming (7,4) block code to illustrate this representation in Figure~\ref{fig:hamming-graphs}(a). We denote the message bit sequence as $b_0,...,b_3$, the parity check bits as $b_4,...,b_6$, and the parity check factors as $\phi(b_0,b_1,b_2,b_4)$, $\phi(b_0,b_2,b_3,b_5)$, and $\phi(b_0,b_1,b_3,b_6)$. In a Tanner graph, the codeword bits are called \emph{bit nodes} (bottom circles) and the parity check factors are called \emph{check nodes} (top squares). An edge connects a bit node to a check node if that bit is included in the parity check factor. A sparse parity check matrix $H$ is used to indicate the edge connections. A decoder proceeds by passing messages along the edges of the graph between the bit nodes and check nodes, which makes this representation intuitive and practical from both an encoding and decoding perspective.
A Tanner graph is closely related to a \emph{factor} graph described in the PGM literature~\cite{koller2009probabilistic,bishop2006pattern}. A factor graph is an undirected graph with variable nodes and factor nodes, which expresses a non-normalised joint probability distribution (also known as a potential function) as a product of factors. Factor nodes are connected to variable nodes if a factor depends on a variable. Its construction is simply a decoupling of all random variables from their factors. A joint probability distribution can, however, be factorised differently by considering how factors are manipulated during the inference process~\cite{koller2009probabilistic}. We consider a more general probabilistic framework, called \emph{cluster} graphs, of which a factor graph representation is a special case. The factor graph in Figure~\ref{fig:hamming-graphs}(b) is presented using cluster graph notation, and is also known as a Bethe graph. Note the structural similarity between graphs (a) and (b). Cluster graphs have two types of nodes: a cluster node (ellipse) is a set of random variables, and a sepset (short for ``separation set'') node (square) is a set of random variables shared between a pair of clusters. Each parity check cluster contains a discrete table factor with its own scope of variables, as shown previously in Table~\ref{tab:partiy-table}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{figure1.png}
\caption{(a) A conventional Tanner graph representation of the Hamming (7,4) code, with (b) a factor graph representation (using cluster graph notation), and (c) a cluster graph representation compiled using the LTRIP algorithm.}
\label{fig:hamming-graphs}
\end{figure}
A cluster graph must satisfy specific structural constraints that ensure accurate inference when using a message passing approach. A key requirement is the \emph{running intersection property} (RIP)~\cite{koller2009probabilistic}. A path between clusters, with respect to a given variable, requires all clusters and sepsets along the path to contain that variable. The RIP then states that any pair of clusters sharing a common variable must have a unique path (i.e., without loops) between those clusters. The Tanner graph in (a) and the factor graph in (b) satisfy the RIP due to their star-like topology. Note that all single variable clusters in (b) must contain factors with uniform distributions so that the graph-based representation corresponds to the original product of factors.
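The RIP can be verified mechanically: for every variable, the clusters and sepsets containing it must form a single tree (connected and loop-free). The following checker is purely illustrative; the function name and the data layout (a list of cluster scopes plus a mapping from cluster pairs to sepset variables) are our own assumptions.

```python
# Illustrative checker for the running intersection property (RIP).
# `clusters` is a list of variable sets; `edges` maps a cluster pair
# (i, j) to the set of variables in its sepset.
def satisfies_rip(clusters, edges):
    for v in set().union(*clusters):
        nodes = {i for i, c in enumerate(clusters) if v in c}
        v_edges = [e for e, sep in edges.items() if v in sep]
        if len(v_edges) != len(nodes) - 1:   # a tree has |nodes| - 1 edges
            return False
        reached = {next(iter(nodes))}        # check connectivity by flooding
        changed = True
        while changed:
            changed = False
            for i, j in v_edges:
                if (i in reached) != (j in reached):
                    reached |= {i, j}
                    changed = True
        if reached != nodes:
            return False
    return True

# A two-cluster chain sharing 'b' satisfies the RIP:
assert satisfies_rip([{'a', 'b'}, {'b', 'c'}], {(0, 1): {'b'}})
# A loop of three clusters all sharing 'a' violates it:
assert not satisfies_rip([{'a'}, {'a'}, {'a'}],
                         {(0, 1): {'a'}, (1, 2): {'a'}, (0, 2): {'a'}})
```

The edge-count test rules out loops and the flood fill rules out disconnected pieces; together they amount to requiring a unique path, per variable, between any two clusters that share it.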
For exact inference, a graph needs to have a tree structure (i.e., without any loops whatsoever), which can be constructed for small PGMs (such as the Hamming (7,4) code) using the \emph{junction tree} algorithm. However, obtaining optimal trees with this algorithm is NP-hard for large PGMs, in our case larger LDPC codes. The cluster graph in (c) is compiled using a general purpose cluster graph construction algorithm termed the \emph{layered trees running intersection property} (LTRIP) algorithm developed in~\cite{streicher2017graph}. This algorithm proceeds in layers by considering each variable in a separate layer. For each variable, it determines an optimal tree structure over all clusters. The sepsets between pairs of clusters are then merged across all layers to form the final sepsets. The resulting cluster graph satisfies the RIP and allows richer information content to be shared between clusters, since more than one variable can be embedded in a sepset. The resultant graph structure will, in general, contain loops, i.e., it is not necessarily a tree structure. Whereas inference on tree-structured graphs is exact, inference on loopy graphs only approximates the true marginal distributions -- we refer the reader to~\cite{streicher2017graph} for more detail regarding the LTRIP algorithm.
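The layered idea can be sketched in a much-simplified form: connect the clusters containing each variable with a tree (here simply a chain, rather than the optimised per-variable tree of the real algorithm), then merge the per-variable edges into joint sepsets. This is emphatically \emph{not} the LTRIP algorithm itself, only an illustration of how multivariate sepsets arise from merging layers; the cluster scopes below follow our Hamming (7,4) example.

```python
from collections import defaultdict

# Simplified sketch of the layered construction: one "layer" per variable,
# a chain over the clusters containing that variable, then merge the
# per-variable edges into joint sepsets.
def layered_sepsets(clusters):
    sepsets = defaultdict(set)                 # (i, j) -> sepset variables
    for v in sorted(set().union(*clusters)):
        containing = [i for i, c in enumerate(clusters) if v in c]
        for i, j in zip(containing, containing[1:]):
            sepsets[(i, j)].add(v)             # chain the clusters in layer v
    return dict(sepsets)

# The three parity check clusters of the Hamming (7,4) example:
clusters = [{'b0', 'b1', 'b2', 'b4'},
            {'b0', 'b2', 'b3', 'b5'},
            {'b0', 'b1', 'b3', 'b6'}]
edges = layered_sepsets(clusters)
# After merging, some sepsets contain more than one variable, e.g. the
# edge between clusters 0 and 1 carries {'b0', 'b2'}.
```

Even this crude chain construction yields multivariate sepsets such as $\{b_0, b_2\}$, which no univariate factor graph message can carry; the real LTRIP algorithm additionally optimises the per-variable trees before merging.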
There are two notable differences between graph (b) and (c). Firstly, the factor graph has more nodes and edges than the cluster graph, which will require more computation during message passing. Secondly, all sepsets in the factor graph are univariate while some sepsets in the cluster graph contain joint probability distributions. Useful correlations between variables may be lost in the factor graph while a cluster graph can preserve them, making messages more informative. We describe our message passing approach in the following section.
\subsection{Message passing approach}
The sum-product algorithm is the standard decoding algorithm for LDPC codes~\cite{mackay2003information,johnson2006introducing}. In the PGM literature, the sum-product algorithm is more commonly known as \emph{loopy belief propagation} (LBP), and is used more generally to perform inference on graphical models. Another decoding approach is the \emph{maximum a posteriori probability} (MAP) estimate~\cite{koller2009probabilistic}, which gives the most likely joint assignment over all possible codewords; this is not the same as maximising the individual bit marginals. Our study uses a variant of LBP called \emph{loopy belief update} (LBU), also known as the Lauritzen-Spiegelhalter algorithm~\cite{lauritzen1988local}. We briefly point out the main differences between LBP and LBU:
\begin{itemize}
\item LBU uses cluster beliefs and sepset beliefs to express message passing,
\item with LBU, cluster beliefs are updated with messages from all their connections and can be more informative compared to LBP,
\item with LBU, sepsets are used to update a target cluster belief, and require only one sepset divide compared to two message divides in LBP.
\end{itemize}
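As a minimal, concrete illustration of the LBU update (a sketch, not our actual implementation), the snippet below represents cluster and sepset beliefs as Python dicts over binary assignments; the target belief absorbs the new sepset belief with a single divide by the old one, as listed above:

```python
def marginalize(table, vars_, keep):
    """Sum a discrete table {assignment tuple: value} onto the variables in keep."""
    idx = [vars_.index(v) for v in keep]
    out = {}
    for assign, p in table.items():
        key = tuple(assign[i] for i in idx)
        out[key] = out.get(key, 0.0) + p
    return out

def lbu_update(src_table, src_vars, tgt_table, tgt_vars, sep_vars, old_sep):
    """One LBU step: marginalise the source belief onto the sepset, then
    update the target belief with a single divide by the old sepset belief."""
    new_sep = marginalize(src_table, src_vars, sep_vars)
    idx = [tgt_vars.index(v) for v in sep_vars]
    new_tgt = {}
    for assign, p in tgt_table.items():
        key = tuple(assign[i] for i in idx)
        new_tgt[assign] = p * new_sep[key] / old_sep[key]
    return new_tgt, new_sep

# Two toy clusters over binary variables joined by the sepset {x1}.
src = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}      # scope (x0, x1)
tgt = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}  # scope (x1, x2)
old = {(0,): 1.0, (1,): 1.0}                                    # initial uniform sepset
new_tgt, new_sep = lbu_update(src, ("x0", "x1"), tgt, ("x1", "x2"), ("x1",), old)
```

Because only the old sepset belief is divided out, one divide per edge suffices, whereas LBP divides out two messages per edge.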
The next section compares the performance of a factor and cluster graph representation of the Hamming (7,4) code.
\section{Contrasting factor graphs to cluster graphs}
\label{sec:contrasting}
We discussed the structural differences between the factor graph and cluster graph, and pointed out conceptual advantages of a cluster graph. Using the Hamming (7,4) code, we now consider the computational cost of message passing between the two different representations.
The factor graph in Figure~\ref{fig:hamming-graphs}(b) has 12 edges emanating from the parity check clusters. During a forward sweep (top to bottom) of message passing, each edge corresponds to a message (sepset belief) and requires $2^4 - 2^1 = 14$ additions to perform marginalisation from a parity cluster belief. The total number of additions required for all edges is $14 \times 12 = 168$. These messages are absorbed into the single variable target clusters, which requires $12 \times 2^1 = 24$ multiplications, completing the forward sweep. The backward sweep (bottom to top) requires $12 \times 2^4 = 192$ multiplications to absorb the updated single variable messages into the parity check clusters (marginalisations are not required due to the univariate sepsets). Message cancellation, for the forward and backward sweeps, requires $2 \times (12 \times 2^1) = 48$ multiplications (actually divisions, but counted as multiplications).
A forward sweep (from left to right) for the cluster graph in Figure~\ref{fig:hamming-graphs}(c) requires $(2^4 - 2^2) + (2^4 - 2^1) + (2^4 - 2^2) = 38$ additions for marginalising the sepset beliefs. Absorbing the sepset beliefs into the target clusters requires $3 \times 2^4 = 48$ multiplications. The backward sweep (right to left) requires the same number of additions and multiplications as the forward sweep. The total number of message cancellation multiplications is $2 \times ((2^2) + (2^1) + (2^2)) = 20$. Table~\ref{tab:computation} compares the total number of operations required for one iteration of message passing between the factor graph and cluster graph. This indicates a computational advantage for the cluster graph representation.
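These tallies follow mechanically from the edge counts and sepset sizes; the short script below reproduces the table values, taking the cluster graph's sepset cardinalities as 2, 1 and 2 variables as stated above:

```python
# Factor graph: 12 edges from the parity check clusters, parity scope 4 bits,
# univariate sepsets.
edges, parity, sep = 12, 4, 1
fg_adds = edges * (2**parity - 2**sep)            # forward marginalisations: 168
fg_mults = (edges * 2**sep                        # forward absorption: 24
            + edges * 2**parity                   # backward absorption: 192
            + 2 * edges * 2**sep)                 # message cancellation: 48

# Cluster graph: three sepsets of 2, 1 and 2 variables between scope-4 clusters.
sepsets = [2, 1, 2]
cg_adds = 2 * sum(2**parity - 2**s for s in sepsets)                      # both sweeps: 76
cg_mults = 2 * len(sepsets) * 2**parity + 2 * sum(2**s for s in sepsets)  # 96 + 20
```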
\begin{table}[htb]
\centering
\caption{Computational cost difference between a factor graph and cluster graph representation of the Hamming (7,4) code for one iteration of message passing.}
\begin{tabular}[t]{ lcc }
& Total additions: & Total multiplications: \\
\hline\hline
Factor graph & $168$ & $264$ \\
\hline
Cluster graph & $76$ & $116$ \\
\hline\hline
\end{tabular}
\label{tab:computation}
\end{table}
We now compare the error correction capability of the two graphs using \emph{binary phase-shift keying} (BPSK) signal modulation over an AWGN channel without fading. For this channel model, the received bit values $x_n$ are Gaussian distributed conditioned on $b_n$. We assume a transmitted bit has unit energy and unit noise variance, which fixes the mean and variance parameters of each Gaussian distribution.
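The per-bit evidence entering the conditional Gaussian clusters can be sketched as below; the mapping $b_n = 0 \mapsto +1$, $b_n = 1 \mapsto -1$ is a common BPSK convention we assume here, since the text does not fix it:

```python
import math

def bit_likelihoods(x, sigma2=1.0):
    """Normalised P(b | x) for BPSK over AWGN under a uniform prior,
    assuming b=0 -> +1 and b=1 -> -1 (a convention, not fixed by the text)."""
    def gauss(r, mu):
        return math.exp(-(r - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
    l0, l1 = gauss(x, +1.0), gauss(x, -1.0)
    z = l0 + l1
    return l0 / z, l1 / z

p0, p1 = bit_likelihoods(0.8)   # a received value near +1 favours b=0
```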
The \emph{bit error ratio} (BER) performance of the factor graph and cluster graph is compared to exact inference in Figure~\ref{fig:hamming-results}(a). The cluster graph performs closer to the optimal BER curve compared to the factor graph.
\begin{figure}[h]
\centering
\begin{subfigure}[t]{0.325\textwidth}
\includegraphics[scale=0.52]{figure2.pdf}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.325\textwidth}
\raisebox{1.6mm}{\includegraphics[scale=0.52]{figure3.pdf}}
\caption{}
\end{subfigure}
\begin{subfigure}[t]{0.325\textwidth}
\includegraphics[scale=0.52]{figure4.pdf}
\caption{}
\end{subfigure}
\caption{(a) BER comparison between exact inference, and 25 iterations of message passing in a factor graph and cluster graph, (b) Kullback-Leibler measure between exact inference marginals and marginals obtained from a factor graph and cluster graph, (c) percentage of instances requiring more than one iteration for successful decoding using a factor graph vs. a cluster graph.}
\label{fig:hamming-results}
\end{figure}
We use the Kullback-Leibler (KL) divergence $D_{KL}(P||Q)$ to evaluate the similarity between marginal distributions $P$ obtained from exact inference and approximate marginal distributions $Q$ obtained from the factor and cluster graph respectively. For each codeword we record the sum of the per-bit KL divergences; the distribution of this statistic is shown in (b) for each graph. Comparing median statistics, the marginal distributions obtained from the cluster graph are more similar to those obtained from exact inference than the factor graph's are. The KL divergence between the cluster graph marginals and the exact inference marginals also varies less than that of the factor graph. We also capture the number of iterations until successful decoding and compare the percentage of instances in which the decoder requires more than one iteration, shown in (c). The factor graph requires more than one iteration more often than the cluster graph, and the difference is more pronounced at lower SNRs.
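The per-codeword statistic in (b), a sum of per-bit KL divergences between exact and approximate marginals, can be computed as follows (the list-of-lists layout is illustrative, not our implementation's data structure):

```python
import math

def kl(p, q, eps=1e-12):
    """D_KL(P || Q) for discrete distributions given as equal-length lists."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q) if pi > 0)

def codeword_kl(exact, approx):
    """Sum of per-bit KL divergences over one codeword's marginals."""
    return sum(kl(p, q) for p, q in zip(exact, approx))

# Two-bit toy codeword: exact marginals vs. slightly different approximations.
total = codeword_kl([[0.9, 0.1], [0.2, 0.8]], [[0.85, 0.15], [0.25, 0.75]])
```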
These results indicate that a cluster graph is computationally more efficient, and has better BER performance and convergence compared to the factor graph. The next section describes our message passing schedule.
\section{Message passing schedule}
\label{sec:message-passing-schedule}
A message passing schedule is an important consideration for loopy graphs, since the message order can influence convergence speed, accuracy, and the computational cost of inference. In loopy graphs, information can propagate from a cluster and continue along a path that eventually ends at the same cluster without traversing the same edge twice. Although not empirically verified here, these feedback loops (or cycles) may reinforce inaccurate cluster beliefs, causing self-fulfilling belief updates that affect the LDPC decoder's performance. This problem is more prominent in LDPC codes with small feedback loops, as described in~\cite{johnson2006introducing}. Taking this into consideration, our message passing schedule: (1) uses a structured schedule with a fixed computational cost, (2) aims to minimise the effect of loops, and (3) aims to minimise the computational cost of inference.
The schedule is determined by first identifying the larger parity check clusters in the graph. Following Figure~\ref{fig:ldpc16}, we select the larger clusters $\phi_0$ (with cardinality 7) and $\phi_3$ (with cardinality 6). The message schedule starts with the selected clusters as initial sources and proceeds by visiting all their neighbouring clusters, which become the immediate next layer of clusters. A set of available clusters is kept to ensure that clusters from previous layers are not revisited, which helps minimise the effect of loops. We repeat this procedure to add subsequent layers of clusters until all clusters are included. This procedure isolates the initially selected large parity check clusters from the rest of the clusters. The idea is to keep the expensive clusters at the final layer so that the smaller (less expensive) parity clusters, in preceding layers, can resolve most of the uncertainty about the even parity states. When the larger parity clusters get updated, some of the even parity states in their discrete tables may have zero probability, and these are removed by our software implementation. This further reduces a large parity cluster's computational footprint.
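The layering itself amounts to a breadth-first traversal seeded at the selected large clusters, with an available set preventing revisits. A minimal sketch (the adjacency is an illustrative toy graph, not the code of Figure~\ref{fig:ldpc16}):

```python
def build_layers(adjacency, seeds):
    """Breadth-first layering: the seed clusters form layer 0; each subsequent
    layer is the not-yet-visited neighbourhood of the previous layer."""
    layers, visited = [sorted(seeds)], set(seeds)
    while True:
        frontier = sorted({n for c in layers[-1] for n in adjacency[c]} - visited)
        if not frontier:
            return layers
        visited.update(frontier)
        layers.append(frontier)

# Illustrative toy adjacency over cluster ids; cluster 0 plays the role of a
# large parity cluster.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
layers = build_layers(adj, [0])
# The forward sweep then runs from the last layer back towards the seeds,
# keeping the expensive seed clusters until the end.
```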
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figure5.pdf}
\caption{A PGM of an irregular (16,8) LDPC code with conditional Gaussian clusters linked to the smaller parity check clusters and larger clusters at the bottom.}
\label{fig:ldpc16}
\end{figure}
The observed conditional Gaussian clusters are coupled to the parity check clusters in the layer furthest away from the initial isolated group of large clusters -- starting with the smallest parity clusters. This avoids expensive computation between the observed random variables and the larger parity clusters (even though we only need to multiply them in once). The isolated parity check clusters make up the bottom layer in Figure~\ref{fig:ldpc16}. The smaller parity check clusters in the top layer are given priority in terms of their connectivity to conditional Gaussian clusters. Note that we do not link conditional Gaussian clusters to parity clusters in intermediate layers. We avoid this so that evidence enters the graph from one end (the top) and updates the latent clusters one layer at a time. If the first layer of parity check clusters does not contain all the unique bits, the following layers are utilised until all conditional Gaussian clusters are connected.
Once the observed conditional Gaussian clusters have updated the first parity cluster layer, they are not needed further during message passing. Message passing continues towards the final parity cluster layer, which we refer to as the forward sweep. The backward sweep returns in the opposite direction, concluding one iteration of message passing.
The following settings and software implementations apply to our inference approach:
\begin{itemize}
\item inference stops when the decoded bit message equals the original bit message, or when a maximum number of iterations is reached,
\item all discrete table factors support sparse representations to reduce memory resources,
\item zero probability states in discrete tables are removed during inference.
\end{itemize}
The following section shows the performance comparison between a factor and cluster graph representation of a larger LDPC code.
\section{Experimental investigation}
\label{sec:results}
The LDPC code used in our study is constructed from the 5G new radio (NR) standard. We use base graph 2 with size (42, 52) and expansion factor 2~\cite{bae2019overview}. The resultant $H$ matrix is shortened to a code rate of $0.5$, giving $K = 20$ message bits and a codeword length of $N = 40$ bits. We use BPSK modulation over an AWGN channel and assume a transmitted bit has unit energy and unit noise variance. The cluster graph is compiled from the parity check factors using the LTRIP algorithm, and the message schedule is initialised by clusters with cardinality 8 and 10 (the largest factors), which forms the bottom layer of the PGM (see the example in Section~\ref{sec:message-passing-schedule}).
\subsection{Purpose of experiment}
The purpose of the experiment is to test whether a cluster graph representation of an LDPC code provides (1) BER performance advantages, and (2) computational advantages in terms of the number of message passing iterations required for successful decoding (which is not necessarily convergence). We compare the cluster graph to a conventional factor graph (or Tanner graph).
We generate random bit messages that are encoded using the $H$ matrix to produce LDPC packets (or codewords). Bit values in a packet are modulated using BPSK and random noise is added to the resultant signal values using a zero mean Gaussian distribution. The channel noise variance can be translated to a rate-compensated SNR given by $\textrm{SNR}_{\textrm{dB}} = 10 \log_{10}(\frac{E_b}{2R\sigma^{2}})$, where $R$ is the code rate and $E_b$ the energy per bit~\cite{mackay2003information}. We sweep the SNR from 0 to 8 dB over 36 equidistant instances. For each instance, 1 million LDPC packets are simulated. Our experiment measures the number of bit errors divided by the total number of transferred bits for each SNR instance. We also capture the number of message passing iterations required to decode a packet. The maximum number of message passing iterations is set to 35 for both the factor graph and cluster graph. The factor and cluster graphs receive exactly the same encoded and modulated LDPC packets to ensure a like-for-like comparison.
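The mapping between channel noise variance and the rate-compensated SNR above, and its inverse (used to generate the 36 equidistant SNR points), can be written directly from the formula:

```python
import math

def snr_db(sigma2, rate, eb=1.0):
    """Rate-compensated SNR in dB: 10*log10(Eb / (2*R*sigma^2))."""
    return 10.0 * math.log10(eb / (2.0 * rate * sigma2))

def sigma2_from_snr_db(snr, rate, eb=1.0):
    """Inverse mapping: the noise variance that realises a target SNR in dB."""
    return eb / (2.0 * rate * 10.0 ** (snr / 10.0))

points = [i * 8.0 / 35 for i in range(36)]                 # 36 equidistant SNRs, 0-8 dB
variances = [sigma2_from_snr_db(s, rate=0.5) for s in points]
```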
\subsection{Results and interpretation}
The results are shown in Figure~\ref{fig:results}. The BER comparison between a factor and cluster graph is shown on the left. The cluster graph outperforms the factor graph over the entire SNR spectrum with a more pronounced difference at higher SNRs. The cluster graph also outperforms the factor graph when comparing the average number of message passing iterations required by the decoder (shown on the right). The difference is more pronounced at lower SNRs when more iterations are required for decoding and the cluster graph maintains a $14\%$ improvement up to $\approx3$ dB SNR.
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{figure6.png}
\caption{Results showing the BER and message passing iterations comparison between a factor graph and cluster graph representation of an irregular (40,20) LDPC code.}
\label{fig:results}
\end{figure}
We also note that the cluster graph's expected behaviour is stable across the entire SNR spectrum, similar to the factor graph.
\section{Conclusion and future work}
\label{sec:conclusion}
Our view is that cluster graphs can play an important role in channel coding research due to their superior performance compared to factor graphs (or Tanner graphs). A factor graph representation of an LDPC code is a specific factorisation of the code's full joint probability distribution, and a special case when viewed more generally as a PGM. Nevertheless, factor graphs are intuitive, easy to construct, and turn out to be practically useful. Constructing valid cluster graphs, on the other hand, is not trivial. However, the authors in~\cite{streicher2017graph,streicher2021strengthening} developed a general purpose algorithm called LTRIP, which we used to construct a cluster graph of an LDPC code's parity check factors.
While tree-structured PGMs produce exact marginals (resulting in the optimal BER performance), LDPC codes are loopy and the RIP ensures that the approximated marginals are close to the exact marginals (discussed in Section~\ref{sec:contrasting}). A factor graph representation of LDPC codes also satisfies the RIP; however, messages passed between the single variable clusters and parity check clusters are constrained to univariate distributions, which disregards important dependencies between bits. Cluster graphs compiled with the LTRIP algorithm are not limited to univariate messages and carry richer information content, resulting in faster convergence and better BER performance.
We demonstrated that a cluster graph representation of a Hamming code and LDPC code outperforms its commonly used factor graph representation, and in future work we may look at applying this to other error-correcting codes such as Polar codes.
\bibliographystyle{IEEEtran}
With the Historic District, some 1,020 homes in the proposed area will be protected from demolition. That is significant protection.
I urge you to protect and preserve the treasured community of Eastmoreland.
## pancakeslover one year ago
A president, treasurer, and secretary, all different, are to be chosen from a club consisting of 10 people. How many different choices of officers are possible if

1. pancakeslover

a) there are no restrictions? b) A and B will not serve together? c) C and D will serve together or not at all? d) E must be an officer? e) F will only serve if he is the president?

2. kropot72

a) If there are no restrictions the number of choices is given by 10P3.

3. pancakeslover

@kropot72 do you know the answers to the rest?

4. wio

For b), my strategy would be to find how many ways A and B will serve together and subtract that from the total.

5. wio

For c) you are doing a) again but with 8 people to find out how many ways neither C nor D serves, then adding the number of ways C and D serve together (counted as for A and B in b)).

6. wio

d) is very simple: the number of ways E will serve.

7. wio

e) is the number of ways F doesn't serve, plus the number of ways to assign treasurer and secretary to the remaining 9 people.

8. wio

Could you explain that better? I'm not sure how you got that.

9. wio

For b) I'm getting: $^{10}P_{3} - (^{3}C_{2} \times ^{8}P_{1})$

10. wio

$^3C_2$ - ways to assign A and B a position
$^8P_1$ - ways to fill the remaining position
$^3C_2\times ^8P_1$ - ways A and B will serve together
$^{10}P_3-(^3C_2\times ^8P_1)$ - ways A and B will not serve together

11. wio

c) C and D will serve together or not at all?
$^3C_2\times ^8P_1$ - ways C and D will serve together
$^8P_3$ - ways neither C nor D will serve
$(^3C_2\times ^8P_1) +^8P_3$ - ways C and D will serve together or not at all

12. wio

d) E must be an officer
$^3C_1$ - ways E can be an officer
$^9P_2$ - ways to fill the two remaining positions
$^3C_1 \times ^9P_2$ - ways E is an officer

13. wio

e) F will only serve if he is the president?
$^1C_1$ - ways F is president
$^9P_2$ - ways to fill the remaining two seats
$^9P_3$ - ways F will not serve
$(\ ^1C_1\times\ ^9P_2\ )+^9P_3$ - ways F is president or does not serve

14. pancakeslover

thanks @wio!
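The counts can be checked by brute-force enumeration (member names A-J are hypothetical stand-ins for the 10 people). Note the enumeration gives 672 for (b), which suggests the $^3C_2 \times ^8P_1$ pair terms in the thread need an extra factor of 2 for the two orderings of the pair within its chosen positions:

```python
from itertools import permutations

people = "ABCDEFGHIJ"                       # 10 club members (hypothetical names)
slates = list(permutations(people, 3))      # (president, treasurer, secretary)

a = len(slates)                                               # no restrictions
b = sum(1 for s in slates if not ("A" in s and "B" in s))     # A and B not together
c = sum(1 for s in slates if ("C" in s) == ("D" in s))        # C, D together or not at all
d = sum(1 for s in slates if "E" in s)                        # E must be an officer
e = sum(1 for s in slates if "F" not in s or s[0] == "F")     # F serves only as president
```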
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Data;
using System.Windows.Documents;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Shapes;
using Stamper.DataAccess;
namespace Stamper.UI.Windows
{
    public partial class UpdateAvailableWindow : Window
    {
        public UpdateAvailableWindow(string newVersion)
        {
            InitializeComponent();
            PInvokeHelper.DisableMinimizeButton(this);
            PInvokeHelper.DisableMaximizeButton(this);
            CurrentVersion.Text = $"Current version: {SettingsManager.Version}";
            AvailabeVersion.Text = $"Available version: {newVersion}";
        }

        private void CancelButton_OnClick(object sender, RoutedEventArgs e)
        {
            Close();
        }

        private void DontNotifyButton_OnClick(object sender, RoutedEventArgs e)
        {
            SettingsManager.IgnoreUpdates = true;
            Close();
        }

        private void UpdateButton_OnClick(object sender, RoutedEventArgs e)
        {
            System.Diagnostics.Process.Start("https://github.com/Jameak/Stamper/releases");
            Close();
        }
    }
}
Boise State University Theses and Dissertations
Polyhedral+Dataflow Graphs
Eddie C. Davis, Boise State University
Date of Final Oral Examination (Defense)
Type of Culminating Activity
Degree Title
Doctor of Philosophy in Computing
Major Advisor
Catherine R.M. Olschanowsky, Ph.D.
Elena Sherman, Ph.D.
Steven Cutchin, Ph.D.
Donna Calhoun, Ph.D.
This research presents an intermediate compiler representation that is designed for optimization, and emphasizes the temporary storage requirements and execution schedule of a given computation to guide optimization decisions. The representation is expressed as a dataflow graph that describes computational statements and data mappings within the polyhedral compilation model. The targeted applications include both the regular and irregular scientific domains.
The intermediate representation can be integrated into existing compiler infrastructures. A specification language implemented as a domain specific language in C++ describes the graph components and the transformations that can be applied. The visual representation allows users to reason about optimizations. Graph variants can be translated into source code or other representations. The language, intermediate representation, and associated transformations have been applied to improve the performance of differential equation solvers, sparse matrix operations, tensor decomposition, and structured multigrid methods.
10.18122/td/1654/boisestate
Davis, Eddie C., "Polyhedral+Dataflow Graphs" (2020). Boise State University Theses and Dissertations. 1654.
Numerical Analysis and Scientific Computing Commons, Programming Languages and Compilers Commons
Q: Graphics header fails to compile? I downloaded the graphics.h and libbgi.a files and did exactly as the instructions provided, and still it doesn't work! I've set the linker parameters as well, and I must say that this works in C++ mode, but not in C mode. I'm using Dev-C++ 4.9.9.2. Here is what I get:
Can anyone explain the problem for me?
P.S.: The header code is
// The winbgim library, Version 6.0, August 9, 2004
// Written by:
// Grant Macklem (Grant.Macklem@colorado.edu)
// Gregory Schmelter (Gregory.Schmelter@colorado.edu)
// Alan Schmidt (Alan.Schmidt@colorado.edu)
// Ivan Stashak (Ivan.Stashak@colorado.edu)
// Michael Main (Michael.Main@colorado.edu)
// CSCI 4830/7818: API Programming
// University of Colorado at Boulder, Spring 2003
// ---------------------------------------------------------------------------
// Notes
// ---------------------------------------------------------------------------
// * This library is still under development.
// * Please see http://www.cs.colorado.edu/~main/bgi for information on
// * using this library with the mingw32 g++ compiler.
// * This library only works with Windows API level 4.0 and higher (Windows 95, NT 4.0 and newer)
// * This library may not be compatible with 64-bit versions of Windows
// ---------------------------------------------------------------------------
// ---------------------------------------------------------------------------
// Macro Guard and Include Directives
// ---------------------------------------------------------------------------
#ifndef WINBGI_H
#define WINBGI_H
#include <windows.h> // Provides the mouse message types
#include <limits.h> // Provides INT_MAX
#include <sstream> // Provides std::ostringstream
// ---------------------------------------------------------------------------
// ---------------------------------------------------------------------------
// Definitions
// ---------------------------------------------------------------------------
// Definitions for the key pad extended keys are added here. When one
// of these keys are pressed, getch will return a zero followed by one
// of these values. This is the same way that it works in conio for
// dos applications.
#define KEY_HOME 71
#define KEY_UP 72
#define KEY_PGUP 73
#define KEY_LEFT 75
#define KEY_CENTER 76
#define KEY_RIGHT 77
#define KEY_END 79
#define KEY_DOWN 80
#define KEY_PGDN 81
#define KEY_INSERT 82
#define KEY_DELETE 83
#define KEY_F1 59
#define KEY_F2 60
#define KEY_F3 61
#define KEY_F4 62
#define KEY_F5 63
#define KEY_F6 64
#define KEY_F7 65
#define KEY_F8 66
#define KEY_F9 67
// Line thickness settings
#define NORM_WIDTH 1
#define THICK_WIDTH 3
// Character Size and Direction
#define USER_CHAR_SIZE 0
#define HORIZ_DIR 0
#define VERT_DIR 1
// Constants for closegraph
#define CURRENT_WINDOW -1
#define ALL_WINDOWS -2
#define NO_CURRENT_WINDOW -3
// The standard Borland 16 colors
#define MAXCOLORS 15
enum colors { BLACK, BLUE, GREEN, CYAN, RED, MAGENTA, BROWN, LIGHTGRAY, DARKGRAY,
LIGHTBLUE, LIGHTGREEN, LIGHTCYAN, LIGHTRED, LIGHTMAGENTA, YELLOW, WHITE };
// The standard line styles
enum line_styles { SOLID_LINE, DOTTED_LINE, CENTER_LINE, DASHED_LINE, USERBIT_LINE };
// The standard fill styles
enum fill_styles { EMPTY_FILL, SOLID_FILL, LINE_FILL, LTSLASH_FILL, SLASH_FILL,
BKSLASH_FILL, LTBKSLASH_FILL, HATCH_FILL, XHATCH_FILL, INTERLEAVE_FILL,
WIDE_DOT_FILL, CLOSE_DOT_FILL, USER_FILL };
// The various graphics drivers
enum graphics_drivers { DETECT, CGA, MCGA, EGA, EGA64, EGAMONO, IBM8514, HERCMONO,
ATT400, VGA, PC3270 };
// Various modes for each graphics driver
enum graphics_modes { CGAC0, CGAC1, CGAC2, CGAC3, CGAHI,
MCGAC0 = 0, MCGAC1, MCGAC2, MCGAC3, MCGAMED, MCGAHI,
EGALO = 0, EGAHI,
EGA64LO = 0, EGA64HI,
EGAMONOHI = 3,
HERCMONOHI = 0,
ATT400C0 = 0, ATT400C1, ATT400C2, ATT400C3, ATT400MED, ATT400HI,
VGALO = 0, VGAMED, VGAHI,
PC3270HI = 0,
IBM8514LO = 0, IBM8514HI };
// Borland error messages for the graphics window.
#define NO_CLICK -1 // No mouse event of the current type in getmouseclick
enum graph_errors { grInvalidVersion = -18, grInvalidDeviceNum = -15, grInvalidFontNum,
grInvalidFont, grIOerror, grError, grInvalidMode, grNoFontMem,
grFontNotFound, grNoFloodMem, grNoScanMem, grNoLoadMem,
grInvalidDriver, grFileNotFound, grNotDetected, grNoInitGraph,
grOk };
// Write modes
enum putimage_ops{ COPY_PUT, XOR_PUT, OR_PUT, AND_PUT, NOT_PUT };
// Text Modes
enum horiz { LEFT_TEXT, CENTER_TEXT, RIGHT_TEXT };
enum vertical { BOTTOM_TEXT, VCENTER_TEXT, TOP_TEXT }; // middle not needed other than as separator
enum font_names { DEFAULT_FONT, TRIPLEX_FONT, SMALL_FONT, SANS_SERIF_FONT,
GOTHIC_FONT, SCRIPT_FONT, SIMPLEX_FONT, TRIPLEX_SCR_FONT,
COMPLEX_FONT, EUROPEAN_FONT, BOLD_FONT };
// ---------------------------------------------------------------------------
// ---------------------------------------------------------------------------
// Structures
// ---------------------------------------------------------------------------
// This structure records information about the last call to arc. It is used
// by getarccoords to get the location of the endpoints of the arc.
struct arccoordstype
{
int x, y; // Center point of the arc
int xstart, ystart; // The starting position of the arc
int xend, yend; // The ending position of the arc.
};
// This structure defines the fill style for the current window. Pattern is
// one of the system patterns such as SOLID_FILL. Color is the color to
// fill with
struct fillsettingstype
{
int pattern; // Current fill pattern
int color; // Current fill color
};
// This structure records information about the current line style.
// linestyle is one of the line styles such as SOLID_LINE, upattern is a
// 16-bit pattern for user defined lines, and thickness is the width of the
// line in pixels.
struct linesettingstype
{
int linestyle; // Current line style
unsigned upattern; // 16-bit user line pattern
int thickness; // Width of the line in pixels
};
// This structure records information about the text settings.
struct textsettingstype
{
int font; // The font in use
int direction; // Text direction
int charsize; // Character size
int horiz; // Horizontal text justification
int vert; // Vertical text justification
};
// This structure records information about the viewport
struct viewporttype
{
int left, top, // Viewport bounding box
right, bottom;
int clip; // Whether to clip image to viewport
};
// This structure records information about the palette.
struct palettetype
{
unsigned char size;
signed char colors[MAXCOLORS + 1];
};
// ---------------------------------------------------------------------------
// ---------------------------------------------------------------------------
// API Entries
// ---------------------------------------------------------------------------
#ifdef __cplusplus
extern "C" {
#endif
// Drawing Functions
void arc( int x, int y, int stangle, int endangle, int radius );
void bar( int left, int top, int right, int bottom );
void bar3d( int left, int top, int right, int bottom, int depth, int topflag );
void circle( int x, int y, int radius );
void cleardevice( );
void clearviewport( );
void drawpoly(int n_points, int* points);
void ellipse( int x, int y, int stangle, int endangle, int xradius, int yradius );
void fillellipse( int x, int y, int xradius, int yradius );
void fillpoly(int n_points, int* points);
void floodfill( int x, int y, int border );
void line( int x1, int y1, int x2, int y2 );
void linerel( int dx, int dy );
void lineto( int x, int y );
void pieslice( int x, int y, int stangle, int endangle, int radius );
void putpixel( int x, int y, int color );
void rectangle( int left, int top, int right, int bottom );
void sector( int x, int y, int stangle, int endangle, int xradius, int yradius );
// Miscellaneous Functions
int getdisplaycolor( int color );
int converttorgb( int color );
void delay( int msec );
void getarccoords( arccoordstype *arccoords );
int getbkcolor( );
int getcolor( );
void getfillpattern( char *pattern );
void getfillsettings( fillsettingstype *fillinfo );
void getlinesettings( linesettingstype *lineinfo );
int getmaxcolor( );
int getmaxheight( );
int getmaxwidth( );
int getmaxx( );
int getmaxy( );
bool getrefreshingbgi( );
int getwindowheight( );
int getwindowwidth( );
int getpixel( int x, int y );
void getviewsettings( viewporttype *viewport );
int getx( );
int gety( );
void moverel( int dx, int dy );
void moveto( int x, int y );
void refreshbgi(int left, int top, int right, int bottom);
void refreshallbgi( );
void setbkcolor( int color );
void setcolor( int color );
void setfillpattern( char *upattern, int color );
void setfillstyle( int pattern, int color );
void setlinestyle( int linestyle, unsigned upattern, int thickness );
void setrefreshingbgi(bool value);
void setviewport( int left, int top, int right, int bottom, int clip );
void setwritemode( int mode );
// Window Creation / Graphics Manipulation
void closegraph( int wid=ALL_WINDOWS );
void detectgraph( int *graphdriver, int *graphmode );
void getaspectratio( int *xasp, int *yasp );
char *getdrivername( );
int getgraphmode( );
int getmaxmode( );
char *getmodename( int mode_number );
void getmoderange( int graphdriver, int *lomode, int *himode );
void graphdefaults( );
char *grapherrormsg( int errorcode );
int graphresult( );
void initgraph( int *graphdriver, int *graphmode, char *pathtodriver );
int initwindow( int width, int height, const char* title="Windows BGI", int left=0, int top=0, bool dbflag=false, bool closeflag=true );
int installuserdriver( char *name, int *fp ); // Not available in WinBGI
int installuserfont( char *name ); // Not available in WinBGI
int registerbgidriver( void *driver ); // Not available in WinBGI
int registerbgifont( void *font ); // Not available in WinBGI
void restorecrtmode( );
void setaspectratio( int xasp, int yasp );
unsigned setgraphbufsize( unsigned bufsize ); // Not available in WinBGI
void setgraphmode( int mode );
void showerrorbox( const char *msg = NULL );
// User Interaction
int getch( );
int kbhit( );
// User-Controlled Window Functions (winbgi.cpp)
int getcurrentwindow( );
void setcurrentwindow( int window );
// Double buffering support (winbgi.cpp)
int getactivepage( );
int getvisualpage( );
void setactivepage( int page );
void setvisualpage( int page );
void swapbuffers( );
// Image Functions (drawing.cpp)
unsigned imagesize( int left, int top, int right, int bottom );
void getimage( int left, int top, int right, int bottom, void *bitmap );
void putimage( int left, int top, void *bitmap, int op );
void printimage(
const char* title=NULL,
double width_inches=7, double border_left_inches=0.75, double border_top_inches=0.75,
int left=0, int top=0, int right=INT_MAX, int bottom=INT_MAX,
bool active=true, HWND hwnd=NULL
);
void readimagefile(
const char* filename=NULL,
int left=0, int top=0, int right=INT_MAX, int bottom=INT_MAX
);
void writeimagefile(
const char* filename=NULL,
int left=0, int top=0, int right=INT_MAX, int bottom=INT_MAX,
bool active=true, HWND hwnd=NULL
);
// Text Functions (text.cpp)
void gettextsettings(struct textsettingstype *texttypeinfo);
void outtext(char *textstring);
void outtextxy(int x, int y, char *textstring);
void settextjustify(int horiz, int vert);
void settextstyle(int font, int direction, int charsize);
void setusercharsize(int multx, int divx, int multy, int divy);
int textheight(char *textstring);
int textwidth(char *textstring);
extern std::ostringstream bgiout;
void outstream(std::ostringstream& out=bgiout);
void outstreamxy(int x, int y, std::ostringstream& out=bgiout);
// Mouse Functions (mouse.cpp)
void clearmouseclick( int kind );
void clearresizeevent( );
void getmouseclick( int kind, int& x, int& y );
bool ismouseclick( int kind );
bool isresizeevent( );
int mousex( );
int mousey( );
void registermousehandler( int kind, void h( int, int ) );
void setmousequeuestatus( int kind, bool status=true );
// Palette Functions
palettetype *getdefaultpalette( );
void getpalette( palettetype *palette );
int getpalettesize( );
void setallpalette( palettetype *palette );
void setpalette( int colornum, int color );
void setrgbpalette( int colornum, int red, int green, int blue );
// Color Macros
#define IS_BGI_COLOR(v) ( ((v) >= 0) && ((v) < 16) )
#define IS_RGB_COLOR(v) ( (v) & 0x03000000 )
#define RED_VALUE(v) int(GetRValue( converttorgb(v) ))
#define GREEN_VALUE(v) int(GetGValue( converttorgb(v) ))
#define BLUE_VALUE(v) int(GetBValue( converttorgb(v) ))
#undef COLOR
int COLOR(int r, int g, int b); // No longer a macro
#ifdef __cplusplus
}
#endif
// ---------------------------------------------------------------------------
#endif // WINBGI_H
A: If it works in C++ mode, but not in C mode, that tells me it's probably a C++ library. The fact that you get syntax errors in C mode, but not in C++ mode, lends credence to this hypothesis.
Assuming this is what you're using (you should have specified that in your question), that's clearly C++ only, so no wonder you can't compile it as C code. You're hosed if you need to be able to use that library for C code. Still, you can write C-like code and just compile as C++ if you need. I just don't understand why you need to use a C++ library, but also need to compile as C and not C++.
Still, I'm only assuming that's the header file you're trying to include. Without further information on what you're trying to do, and exactly which library you're trying to use, I can only assume what the problem is.
EDIT: Based on the header you posted, it appears at first glance to be C compatible, except for including <sstream>. I have no idea if that's necessary or not. Someone with more experience might be a better resource to determine if that specific library would work in C.
Eghipatrush or Yeghipatrush (in Armenian; until 1945 Tanjrlu, then until 1992 Mravyan) is a rural community in the Aragatsotn marz of Armenia. It had inhabitants in 2009.
Church
The locality has a Surp Astvatsatsin ("Holy Mother of God") church with a gavit, the ruins of a basilica, and several khachkars, including a khachkar chapel.
Notes and references
Rural community in Aragatsotn
GENTORE FACT SHEET
BROCHURES AND BANNERS
GenTORE Stakeholders
GenTORE engages stakeholders from industry including breeding associations, trans-national organizations, farm management and veterinary advisory services and farm technology companies. These stakeholders, directly engaged with GenTORE from the start, represent a core that will be enlarged with other stakeholders who are keen to bring their input and to be recipients of new technologies and data.
The inclusion of stakeholders from the beginning of the project is a major plus, as it will bring in many more ideas and views. It also means all outputs will be fit for purpose and immediately relevant to what is happening in daily practice on beef and dairy farms. They will also be more user-friendly and thus adopted rapidly and seamlessly by farmers and breeders, also in the long term.
Join GenTORE Stakeholder Platform
Enter our Stakeholder Platform on Facebook to participate, read and discuss anything related to the GenTORE project and interact with other stakeholders!
Subscribe to our mailing list for most recent news and events!
GenTORE Glossary
Check our glossary defining relevant terms within the project!
In addition, GenTORE will count on its multi-actor partners and two stakeholder levels:
stakeholder level 2 includes stakeholders actively involved in the project (i.e., those who will be consulted during project orientation),
stakeholder level 3 includes the stakeholders who will be the final users of the project results. These level 3 stakeholders include both policy makers and the relevant media.
Project partners and actively involved stakeholders (i.e., levels 1 and 2) will profit from working cohesively towards a more resilient and efficient beef and dairy cattle production sector.
Potential users
The direct end users of GenTORE outcomes are:
scientists from academic and industrial sectors who are potential users of the developed GenTORE tools, algorithms and methods,
companies involved in cattle breeding and/or precision farming based in Europe,
companies providing technology (equipment, software),
farm advisors (extension workers and veterinarians), and
farm managers.
GenTORE brings together multidisciplinary scientific expertise in genomics, environmental assessment, nutritional physiology, health management, precision livestock farming, mathematical modelling, and socio-economics; partners and stakeholders representing breeding organisations, farm technology companies, farm and veterinary advisory services, and farm sectors (organic, grazing, etc.); and a unique data basis including more than 1 million genotypes.
Stakeholders Platform (SP)
Stakeholders platform (SP) is an advisory body constituted of a group of persons and organisations representatives that express a stake or view at a certain moment of the project and are willing to share these with the project partners during stakeholder meetings and consultations. They will play a key role in the dissemination and exploitation activities of the project.
This group will have a flexible membership and will include representatives from all GenTORE targeted audiences. The platform could be extended with additional stakeholders to have larger round table discussions.
Among the stakeholder platform members there are three degrees of involvement.
The first level includes partners actively involved in GenTORE WPs. A second level of stakeholders includes those actively involved and consulted, while the third level is constituted by the final users of the project results.
The stakeholders platform includes core groups from the L2, the stakeholders committee (SC) and the scientific advisory board (SAB).
Stakeholders Committee (SC)
The Stakeholders committee (SC) will provide external points of view on the work in GenTORE to the Executive Committee (Ex.Com) and the General Assembly (GA). It will be used for feedback and input on the project results applicability and exploitation.
It will be informed and consulted on a regular basis (every six months at minimum) by the researchers in GenTORE. The stakeholders committee will have the right to review and provide their opinion on the project results before they are submitted and disseminated.
Scientific Advisory Board (SAB)
The scientific advisory board (SAB) is a consulting committee to the Ex Com to advise about the progress of research WP. The SAB consists of 3 internationally renowned scientists skilled in at least one of the domains covered by GenTORE.
The SAB also has a view of current trends and potential innovation aspects of interest to the beef and dairy production sector. It is expected to make suggestions to the Ex Com on specific topics that could open new application areas. Meetings of the SAB will be scheduled yearly.
For some time now, Jehovah's Witnesses have been in noticeable decline. It is not only the child abuse within the worldwide brotherhood that has contributed to Jehovah's Witnesses being seen less as a religion than as an organization that protects child molesters. The well-founded accusation of serial murder through the use of a human blood ban, which does not even exist, also seems to have an enlightening effect on people. The demand of the people, and/or their seducibility, actually seems to be diminishing, because the ranks of the Watchtower proclaimers are becoming ever thinner. This concerns both quality and quantity.
There is a decrease in numbers and a decrease in level among Jehovah's Witnesses. Those who persevere in proclaiming the Jehovah kingdom today are recruited from a clientele that has always been easily recognizable as a group of meaningless followers. Jehovah's Witnesses who set the tone, take initiative, and draw the attention of passers-by to themselves have become scarce. They are replaced by those who have always stood in the second row. They now have the chance to carry on the hopeless war under the responsibility that has fallen to them.
That's how I see the overall situation. But this must not lead to the conclusion that the Watchtower Society cannot deal with it. The Watchtower Society has always experienced such phases and is used to them. Business as usual is the permanent motto (there is never any new light here).
Yesterday and today I was in Düsseldorf, and only one Jehovah's Witness could be seen; the experience was strange. The man stood hidden next to an advertising pillar and had a fit of shyness, like one who suddenly realized he was stark naked. Behind me, passers-by smiled and laughed as they read my signs, and the Jehovah's Witness didn't even know which direction to look for a way out.
After a few seconds the man went away and stood up again at the next corner. So he had at least shaken off the screaming and laughing passers-by. He looked at me from his new stand like a smuggler with transparent suitcases. That was a really strange experience. The man was mentally incapable of grasping the implications of the news, but he reacted from the cerebellum. The whole thing really had something of the scenery fly and fly swatter. The fly gets faster, but she somehow suspects that she doesn't have much of a chance.
Today's reconnaissance mission in Düsseldorf had a completely new "quality". It was no longer about the confrontation of a manipulated liar with the truth, but about the simple vegetative interaction between cerebellum with tie and collar and a religion. The level of Jehovah's Witnesses has dropped very low lately. When people stay away, you take what you can get. After all, all Jehovah's Witnesses are skilled workers for the Annunciation of the Kingdom of Jehovah. The expertise is free. You only have to learn to say yes to everything for more than three years, then you get your certificate. The rest is the result of the psycho pressure generated in the Watchtower literature and the meetings.
Aaaah! And then suddenly someone like that stands in front of you with logical statements on signs and spoils your show!
The poor man then began a conversation with a garbage man in front of Düsseldorf main station. "You do the same thing every day! Always the same!" The peak of this conversation was after the garbage man finally listened: "They're all stupid! And that one belongs to it!"
The garbage man's behavior was wise. He didn't close the lid of his garbage can, but allowed a Turkish man to add his garbage with both hands and so the whole thing gained something of an idyll. The idyll at Düsseldorf Central Station! And I was allowed to experience it.
The meaning of the lives of those who have lived here for a long time is not recognizing the Watchtower Society as Murder.Org and the meaning of the lives of those who have not lived here for so long is wearing headscarves and looking without understanding. It is hopeless!
But the Jehovah's Witness's shirt was a nice white with blue stripes.
Source: https://www.physicsforums.com/threads/finding-angle-b-w-line-equation-and-tangent.462269/

# Finding angle b/w line equation and tangent

Greetings to all. :)
This is my first time posting here, so if I do anything wrong, just tell me. :)
On to the question.

This question came in 3 parts. I've done the first 2 parts but have no clue on the third. There are two equations (below) and a constant k.

Relevant equations and data:

. Curve: xy = 12
. Line 'l': 2x + y = k
. k = 10
. One of the points of intersection is P(2,6).

The task is to find the angle (in degrees) between 'l' and the tangent to the curve at P.

I really don't have any idea how this is to be done. Maybe there's some formula that must be followed, but I don't know what it could be.

Thanks for any help. :)

tiny-tim (Homework Helper): Greetings SolCon! Welcome to PF!

Hint: the tangent of the tangent (i.e. the tangent of the angle between the tangent and the x axis) is dy/dx.

Use more smilies!

HallsofIvy (Homework Helper): As tiny-tim says, the tangent of the angle between a curve (the tangent line to the curve) and the x-axis is the derivative, dy/dx, evaluated at that point. The tangent of the angle between a line and the x-axis is, of course, the slope.

To find the tangent of the angle between two lines (the tangent line of the curve and the given line, say) use the fact that
$$\tan(\theta_1 - \theta_2) = \frac{\tan\theta_1 - \tan\theta_2}{1 + \tan\theta_1\tan\theta_2}.$$

You can also use the dot product. The vector <1, m> lies in the same direction as the line y = mx + b, or as the tangent line to a curve with derivative dy/dx = m, and the dot product of two vectors u and v is given by $|u||v|\cos\theta$, where $\theta$ is, of course, the angle between them.

SolCon: Thanks for the replies. :)

Okay, I used the dy/dx method to get the 'm' values, but they seem to be incorrect. For the equation xy = 12, I got the point (-1,12), and from the equation 2x + y = k (10 here), I got (6,-2). So I applied them in the slope formula (y2 - y1)/(x2 - x1) and got -2 as m1 and 1/2 as m2. I plugged them into the equation provided by HallsofIvy but did not get the right answer. What am I doing wrong?

tiny-tim: Hi SolCon! What point? (I'm missing the point!)

HallsofIvy: If 2x + y = 10, then y = 10 - 2x, so xy = 12 becomes $x(10 - 2x) = 10x - 2x^2 = 12$, or $x^2 - 5x + 6 = (x - 2)(x - 3) = 0$, so the two points of intersection are (2, 6) and (3, 4). You gave (2, 6) in your original post, so I have no idea why you are looking at (-1, 12). (Did you mean (1, 12)? (-1, 12) doesn't even satisfy xy = 12.) To find the angle between the line and the curve, you have to look at a point of intersection, so that there is an angle between them. The "slope" calculated between a point on the line and a point on the curve is meaningless.

From xy = 12, y = 12/x, so $y' = -12/x^2$. At x = 2, y = 6, we have y' = -3, so the tangent line there is y = -3(x - 2) + 6, or y = -3x + 11. At x = 3, y = 4, y' = -4/3, so the tangent line is y = (-4/3)(x - 3) + 4, or y = (-4/3)x + 8.

You want to find the angle between the lines y = -3x + 11 and y = -2x + 10 at x = 2, or between y = (-4/3)x + 8 and y = -2x + 10 at x = 3.
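Putting the numbers together (a worked check, not from the original thread, using the angle-difference formula quoted above): at P(2, 6) the tangent slope is m1 = -3 and the line slope is m2 = -2, so

$$\tan\theta = \left|\frac{m_1 - m_2}{1 + m_1 m_2}\right| = \left|\frac{-3 - (-2)}{1 + (-3)(-2)}\right| = \frac{1}{7}, \qquad \theta = \arctan\tfrac{1}{7} \approx 8.13^\circ.$$

At the other intersection (3, 4), m1 = -4/3 gives $\tan\theta = \left|\frac{-4/3 - (-2)}{1 + 8/3}\right| = \frac{2/3}{11/3} = \frac{2}{11}$, so $\theta \approx 10.3^\circ$.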
That's a far cry from resurrecting dead souls, and an even further cry from boxing.
"To examine the causes of life, we must first have recourse to death."
Those rare moments when I'm not knee-deep in boxing, I like to engage in something different. For some, that's an excuse to tweet from dusk to dawn. For others, it's an excuse to read.
I sometimes read boxing books, or reread boxing books. But I like, as a rule, to read about subjects far removed from the sweet science.
Currently I'm reading "The Lady and Her Monsters: A Tale of Dissections, Real-Life Dr. Frankensteins, and the Creation of Mary Shelley's Masterpiece" by Roseanne Montillo. It is not a best-seller, which is a fair indication, in my opinion, that it might actually be of worth.
The book explores in grisly detail the scientific/medical environment in 19th century Europe, when reanimation seemed, at least to some, a real possibility. Reading about raising the dead might not be everyone's cup of tea, but that's why there are soft drinks.
In Chapter 3, Making Monsters, the author digs deep into the subject of grave robbing in England in the early 1800s. However ghoulish that may sound, it wasn't about boxing and for that I was grateful.
Ben Crouch was the leader of the most famous gang in that period. He was a foul-mouthed former pugilist whose physical strength was an asset when it came to digging out corpses, but also to bullying others intending to enter the business. He was also a crook who would wait until his mates were drunk before dividing the take. With the advantage of sobriety, he managed to keep a larger share of the profit without anyone being able to tell. If someone pointed out that fact, the muscular Crouch didn't waste a minute but carefully landed a bejeweled fist (he was fond of wearing thick rings and bracelets) over the opponent's mouth, as if engaged in one of his former fights.
That's a far cry from resurrecting dead souls, and an even further cry from the fight game.
Times have changed….day in and day out I see good looking women walking hand in hand with creatures every bit as "handsome" as Lon Chaney was as the Phantom….it's called the "new normal".
Just want to tip my hat to the superb Dwight Frye (Fritz—not Igor!—in "Frankenstein" and Renfield in "Dracula").
By the way, Colin Clive's "In the name of God, now I know what it feels like to be God" was only recently restored.
The timing of the article does seem rather judicious does it not…?
Ooh, that's rather clever, Lee!
Is this article your roundabout way of saying that Deontay Wilder needs to step it up a 'tad' Robert?
A little gem, Robert, that speaks to my fascination with murder and all things grisly.
\section{Introduction}
Relativistic fluid dynamical models have played a key role in our current
understanding of the \textit{nearly perfect fluid} behavior displayed by the
Quark-Gluon Plasma (QGP) formed in heavy ion collisions \cite{sQGP}. In
these models, exact energy-momentum conservation, $\nabla _{\mu }\hat{T}^{\mu \nu }=0$, is supplied by another phenomenological dynamical equation
for the macroscopic shear stress tensor, $\pi ^{\mu \nu }$, which is defined
as
\begin{equation}
\pi ^{\mu \nu }\equiv T^{\mu \nu }-\varepsilon \,u^{\mu }u^{\nu }+P\,\Delta
^{\mu \nu }, \label{definepi}
\end{equation}
where $T^{\mu \nu }\equiv \langle \hat{T}^{\mu \nu }\rangle $, $\varepsilon $
is the local energy density, $P$ is the local pressure, $u^{\mu }$ is the
local fluid 4-velocity, and $\Delta ^{\mu \nu }=g^{\mu \nu }-u^{\mu }u^{\nu
} $ is a spatial projector (our metric signature is $+$, $-$, $-$, $-$),
i.e., $u^{\mu }\Delta _{\mu \nu }=0$ (we shall not consider here the
contribution from the bulk viscous pressure or the effects from nonzero
baryonic density). Since we are going to consider only conformal fluids in
this paper, the trace of $\pi ^{\mu \nu }$ is equal to zero. In fact, in
terms of the doubly symmetric and traceless projection operator, $\Delta
^{\mu \nu \alpha \beta }=\left( \Delta ^{\mu \alpha }\Delta ^{\nu \beta
}+\Delta ^{\mu \beta }\Delta ^{\nu \alpha }\right) /2-\Delta ^{\mu \nu
}\Delta ^{\alpha \beta }/3$, one can see that $\Delta _{\alpha \beta }^{\mu
\nu }\pi ^{\alpha \beta }=\pi ^{\mu \nu }$ and $u_{\mu }\pi ^{\mu \nu }=0$.
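As a quick consistency check (a sketch added here, assuming the Landau matching condition $u_{\mu }T^{\mu \nu }=\varepsilon \,u^{\nu }$, which is implicit in the decomposition (\ref{definepi})), transversality follows directly from the definition:
\begin{equation*}
u_{\mu }\pi ^{\mu \nu }=u_{\mu }T^{\mu \nu }-\varepsilon \left( u_{\mu
}u^{\mu }\right) u^{\nu }+P\,u_{\mu }\Delta ^{\mu \nu }=\varepsilon
\,u^{\nu }-\varepsilon \,u^{\nu }+0=0,
\end{equation*}
where we used $u_{\mu }u^{\mu }=1$ and $u_{\mu }\Delta ^{\mu \nu }=0$.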
In relativistic fluids causality is intimately connected to stability \cite{hiscock,us} and Israel and Stewart \cite{IS} were among the first to
understand that the characteristic times within which fluid dynamical
dissipative currents, such as $\pi ^{\mu \nu }$, relax towards their
asymptotic Navier-Stokes values cannot be arbitrarily small. The so-called
Israel-Stewart (IS) equations for the shear stress tensor are
relaxation-type equations (in terms of the co-moving derivative $d/d\tau
\equiv u^{\mu }\nabla _{\mu }$) of the following general form
\begin{equation}
\tau _{1}\,\dot{\pi}^{\langle \mu \nu \rangle }+\pi ^{\mu \nu }=2\eta \sigma
^{\mu \nu }+\ldots \text{ }, \label{IS}
\end{equation}
where $\tau _{1}$ is the relaxation time coefficient, $\eta $ is the shear
viscosity, $\sigma ^{\mu \nu }\equiv \nabla _{\perp }^{\left\langle \mu
\right. }u^{\left. \nu \right\rangle }$ is the shear tensor, $\nabla _{\perp
}^{\mu }=\Delta _{\nu }^{\mu }\nabla ^{\nu }$, $\dot{A}^{\langle \mu \nu
\rangle }=\Delta _{\alpha \beta }^{\mu \nu }dA^{\alpha \beta }/d\tau $, and
the dots denote nonlinear terms involving $\pi ^{\mu \nu }$ and gradients of
$T$ and $u^{\mu }$ \cite{IS,dkr}. The major step taken by Israel and Stewart
in \cite{IS} was to realize that causality demands that the dissipative
currents obey dynamical equations of motion (which introduce the linear
transport coefficient $\tau _{1}$) that describe their transient dynamics
towards their respective asymptotic relativistic Navier-Stokes solution. A
few years earlier, Kadanoff and Martin \cite{kadanoffmartin} argued that a
similar relaxation transport coefficient should appear in the description of
spin diffusion in a way that is consistent with well-known sum rules.
Using the 14-moments method, it is possible to show \cite{IS,dkr} that in
relativistic gases $\tau _{1}$ is of the order of the microscopic collision
time. Therefore, one expects that in physical situations where the time
variation of the fluid flow is comparable to this microscopic scale, the
fluid is in the transient regime where the relaxation dynamics described by
\tau _{1}$ becomes important. For dilute gases, such as air, under normal
circumstances the collision time is orders of magnitude smaller than the
typical time variation of the flow. However, it is possible to create
physical systems under which the flow of a fluid varies in a timescale of
the order of the mean free time (such as in microflows \cite{microflow}).\
Given the rapid expansion experienced by the QGP formed in heavy ion
collisions, it is reasonable to investigate the dependence of hydrodynamic
predictions on the actual value of $\tau _{1}$ (see, for example, \cite{arXiv:1101.2442}).
If the dynamical properties of the strongly-coupled QGP can be (at least
qualitatively) understood using $\mathcal{N}=4$ Supersymmetric Yang Mills
(SYM) theory, we shall see in this paper that this would imply that the
relaxation equations for $\pi ^{\mu \nu }$ commonly used in numerical
simulations must be replaced by new equations for transient dynamics
involving second-order comoving derivatives of $\pi ^{\mu \nu }$.
\section{Non-Hydrodynamic Poles and Transient Fluid Dynamics}
It is well-known that retarded correlators can have singularities such as
simple poles and also branch cuts. Of particular relevance for fluid
dynamics are the so-called \textquotedblleft hydrodynamic\textquotedblright\
poles, $\omega _{0}(\mathbf{k})$, which appear in retarded correlators of
conserved currents \cite{forster}. The $\mathbf{k}$ dependence of these
modes can be used to obtain the corresponding diffusion transport
coefficient $D$ associated with a given conserved quantity through the
relation $\lim_{\mathbf{k}\rightarrow 0}\omega _{0}(\mathbf{k})\sim -iD\mathbf{k}^{2}+\ldots $ \cite{forster}. The hydrodynamic modes are
characterized by the momentum dependence $\sim -i\mathbf{k}^{2}$ and,
consequently, vanish in the limit $\mathbf{k}\rightarrow 0$. Because of
their appearance in Navier-Stokes theory, the existence of hydrodynamic
modes is quite often taken as evidence for fluid dynamical behavior.
Furthermore, modes that do not share this behavior, i.e., modes in which $\lim_{\mathbf{k}\rightarrow 0}\,\,\omega _{n}\left( \mathbf{k}\right) \neq 0$,
are known as \textquotedblleft non-hydrodynamic modes\textquotedblright .
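As a concrete illustration (a sketch added here, not part of the original argument), the hydrodynamic pole of simple diffusion follows from $\partial _{t}n=D\nabla ^{2}n$: a plane-wave perturbation $\delta n\sim e^{-i\omega t+i\mathbf{k}\cdot \mathbf{x}}$ yields $-i\omega =-D\mathbf{k}^{2}$, i.e., $\omega _{0}(\mathbf{k})=-iD\mathbf{k}^{2}$, which indeed vanishes as $\mathbf{k}\rightarrow 0$.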
Surprisingly enough, a stable theory of \textit{relativistic} fluid dynamics
cannot be formulated using only hydrodynamic modes. In order to obtain a
causal and stable theory of fluid dynamics, the shear stress tensor has to
be promoted to an independent dynamical variable, as in Israel and Stewart
theory, e.g. Eq.\ (\ref{IS}). Since the shear stress tensor is not a
conserved quantity, when it becomes an independent dynamical variable
non-hydrodynamic modes must appear in the theory. In the case of
Israel-Stewart theory, the non-hydrodynamic modes describe the relaxation of
the dissipative currents towards their respective Navier-Stokes asymptotic
solutions and they can be directly related to the relaxation times \cite{artigao}.
Clearly, in order to fully describe the complicated many-body dynamics of an
interacting system down to arbitrarily small time scales, all the infinite
number of non-hydrodynamic modes must be taken into account. For
sufficiently long times, however, only the slowest modes should contribute
significantly and one should be able to systematically neglect the effect of
faster modes to the system's dynamics (at infinite times, no
non-hydrodynamic mode should be required at all). This type of truncation
should be possible at sufficiently long times and does not depend on whether
or not the non-hydrodynamic poles are parametrically separated (as long as
the distribution of poles is discrete).
As a matter of fact, it was shown in \cite{artigao} that the \textit{long-distance, long-time linearized} dynamics of the shear stress tensor in
any system that can be described via the Boltzmann equation \textit{must}
follow the general ansatz from Israel and Stewart, Eq.\ (\ref{IS}). In other
words, at long times $\pi ^{\mu \nu }$ must obey a first-order differential
equation in the comoving derivative that describes how it relaxes towards
its steady state Navier-Stokes value. This means that in relativistic dilute
gases the relaxation time can be extracted directly from the microscopic
theory and does not necessarily have to be considered as a regulator for the
gradient expansion, as advocated in \cite{BRSSS} (for a recent discussion
see \cite{Denicol:2011ef}). We remark that this relaxation behavior at long
times should not be taken for granted: it is a direct consequence of the
fact that the retarded Green's function associated with shear stress tensor,
obtained via the linearized Boltzmann equation, is a meromorphic function
where all the poles lie on the (negative) imaginary axis and, in this case,
for times $t\,\omega _{1}(0)\gg 1$ one obtains that $\tau _{1}=-i/\omega
_{1}(0)$ \cite{artigao}, with $\omega _{1}$ being the non-hydrodynamic mode
with smallest frequency.
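To illustrate this identification (a sketch added here, not part of the original derivation), consider a homogeneous ($\mathbf{k}=0$) perturbation $\delta \pi ^{\mu \nu }\sim e^{-i\omega t}$ of Eq.\ (\ref{IS}) around global equilibrium, where $\sigma ^{\mu \nu }$ and the nonlinear terms vanish. One then finds
\begin{equation*}
\left( 1-i\omega \tau _{1}\right) \delta \pi ^{\mu \nu }=0\quad
\Longrightarrow \quad \omega =-i/\tau _{1},
\end{equation*}
i.e., exponential relaxation $\delta \pi ^{\mu \nu }\sim e^{-t/\tau _{1}}$, consistent with $\tau _{1}=-i/\omega _{1}(0)$.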
The near-equilibrium dynamics of weakly-coupled QCD at very large $T$ is
expected to be described by a Boltzmann equation involving quark and gluon
scattering \cite{amy}. Therefore, according to the general discussion in
\cite{artigao}, one should expect that deep in the deconfined phase long
distance, long time disturbances of the shear stress tensor follow IS-type
equations although the correct value of $\tau _{1}$ in this case, as
determined by the first non-hydrodynamic pole of the retarded Green's
function, has not yet been computed.
Sufficiently below the phase transition, say $T\sim 130$ MeV, lattice data
\cite{newfodor} for QCD thermodynamics seems to be consistent with the
predictions from non-interacting hadron resonance models. Thus, it is
natural to assume that at low $T$ QCD behaves as a weakly coupled gas of
hadrons and resonances, which then implies that a description via the
Boltzmann equation (including different particle species) may be
appropriate. Using the formalism derived in \cite{artigao}, it is possible
to show that the total shear stress tensor in this case, which is a sum over
all the hadronic species, follows an IS-like equation of motion with a
single relaxation time coefficient given by the slowest non-hydrodynamic
mode. Therefore, we expect that in QCD the non-hydrodynamic pole of the
retarded Green's function closest to the origin should be a purely imaginary
number at very high or low $T$ and in the transient regime one will find
relaxation equations for $\pi ^{\mu \nu }$.
However, it is clear that the surprising \textquotedblleft perfect
fluid\textquotedblright\ character of the QGP appears not at low or high
temperatures (where $\eta /s\sim 1$) but rather near the phase transition,
say $T\sim 160-200$ MeV, where the number of degrees of freedom increases
very rapidly and $\eta /s$, the figure of merit for perfect fluid behavior,
is expected to be rather small \cite{lowetas}. Given that the correct
physical mechanism that leads to the perfect fluid behavior of the QGP is
not yet known and that the relevant 't Hooft coupling near the phase
transition is sizable, $\lambda _{QCD}\equiv g_{QCD}^{2}\,N_{c}\sim 10$ for
$N_{c}=3$ and $\alpha _{QCD}\sim 0.3$, we find it useful to investigate what the
AdS/CFT correspondence \cite{maldacena} has to say about the fluid dynamical
equations for the shear stress tensor in strongly-coupled gauge theories
\textit{in the transient regime}. In other words, how do strongly coupled
plasmas (which possess gravity duals) relax towards their asymptotic,
universal Navier-Stokes solution? This question will be answered in the
following sections.
\section{Weyl invariance and the equations of motion for the $\mathcal{N}=4$
SYM fluid in the transient regime}
Recently, a new way to derive the equations of motion of relativistic fluid
dynamics based on Weyl invariance was put forward by Baier \textit{et al} in
Ref.\ \cite{BRSSS}. The main idea is to use the fact that the dynamics of
conformal plasmas (with equations of motion involving less than 4
derivatives) should be invariant under Weyl transformations in which the
metric changes as $g_{\mu \nu }\rightarrow g_{\mu \nu }(x)\,e^{-2\Omega (x)}$,
and $\Omega (x)$ is an arbitrary scalar function. Since the energy
momentum tensor scales classically, it is easy to prove that under a Weyl
transformation it scales homogeneously with conformal weight equal to 6,
i.e., $T^{\mu \nu }\rightarrow e^{6\Omega }T^{\mu \nu }$. The basic hydrodynamic
variables change under Weyl transformations as follows: from $u_{\mu }u^{\mu
}=1$ one obtains that $u^{\mu }\rightarrow e^{\Omega }\,u^{\mu }$ and, since
the ideal energy-momentum tensor has conformal weight equal to 6, the
temperature scales as $T\rightarrow e^{\Omega }T$ and, thus, the dissipative
part of the energy-momentum tensor also transforms homogeneously, i.e., $\pi
^{\mu \nu }\rightarrow e^{6\Omega }\pi ^{\mu \nu }$.
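As a quick consistency check of these scalings, the sketch below (in sympy) verifies symbolically that the ideal part of the energy-momentum tensor, $T_{ideal}^{\mu \nu }=(\varepsilon +P)u^{\mu }u^{\nu }-Pg^{\mu \nu }$ with $\varepsilon =3P$ and $P\propto T^{4}$, indeed acquires an overall factor $e^{6\Omega }$; the explicit substitution rules written in the comments are the assumed Weyl transformations quoted above.

```python
import sympy as sp

Omega, P, u_mu, u_nu, g_inv = sp.symbols("Omega P u_mu u_nu g_inv", positive=True)
w = sp.exp(Omega)

# Assumed scaling rules quoted in the text:
#   u^mu -> e^{Omega} u^mu,  T -> e^{Omega} T (so P ~ T^4 -> e^{4 Omega} P),
#   g_{mu nu} -> e^{-2 Omega} g_{mu nu} (so g^{mu nu} -> e^{2 Omega} g^{mu nu})
def weyl(expr):
    return expr.subs({u_mu: w*u_mu, u_nu: w*u_nu, P: w**4*P, g_inv: w**2*g_inv},
                     simultaneous=True)

# Ideal conformal stress tensor (schematic):
#   T^{mu nu} = (epsilon + P) u^mu u^nu - P g^{mu nu},  with epsilon = 3P
T_ideal = (3*P + P)*u_mu*u_nu - P*g_inv

ratio = sp.simplify(weyl(T_ideal) / T_ideal)
print(ratio)  # exp(6*Omega)
```

The same bookkeeping applied to $\pi ^{\mu \nu }$ alone gives the weight quoted in the text, since the dissipative part must scale like the full tensor.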
It is important to notice, however, that while $T$ and $u^{\mu }$ scale
homogeneously, their conventional spacetime covariant derivative does not.
Things get significantly easier with the aid of the Weyl covariant
derivative defined in \cite{Loganayagam:2008is}, which acts on the basic
hydrodynamic variables as follows
\begin{eqnarray}
D_{\alpha }T &=&\nabla _{\alpha }T-T\,\dot{u}_{\alpha }+\frac{u_{\alpha
}\,\theta \,T}{3}, \notag \\
D_{\alpha }u^{\nu } &=&\nabla _{\alpha }u^{\nu }-u_{\alpha }\dot{u}^{\nu }-
\frac{\Delta _{\alpha }^{\nu }\theta }{3}\,,
\end{eqnarray}
where $\theta \equiv \nabla _{\nu }u^{\nu }$. In this case, these
derivatives are homogeneous under Weyl transformations, i.e., $D_{\alpha
}T\rightarrow e^{\Omega }D_{\alpha }T$ and $D_{\alpha }u^{\nu }\rightarrow
e^{\Omega }D_{\alpha }u^{\nu }$. Defining the comoving Weyl invariant
derivative as $D\equiv u_{\mu }D^{\mu }$, one can see that $Du^{\mu }=0$,
$D_{\mu }u^{\mu }=0$, and $DT=\dot{T}+T\theta /3$. Moreover, since $\pi ^{\mu
\nu }$ is traceless and of conformal weight equal to 6, it is possible to
show that $D_{\alpha }\pi ^{\alpha \beta }=\nabla _{\alpha }\pi ^{\alpha
\beta }$ and $\Delta _{\nu }^{\mu }\nabla _{\alpha }\pi ^{\alpha \nu
}=D_{\perp \alpha }\pi ^{\alpha \mu }+u^{\mu }\pi _{\alpha \beta }\sigma
^{\alpha \beta }$, and we can now write the general conservation laws for a
conformal fluid (where $\varepsilon =3p$) as
\begin{eqnarray}
DP &=&\frac{\pi _{\alpha \beta }\sigma ^{\alpha \beta }}{3}, \notag \\
D_{\perp }^{\mu }P &=&D_{\perp \alpha }\pi ^{\alpha \mu }+u^{\mu }\pi
_{\alpha \beta }\sigma ^{\alpha \beta }, \label{cem5}
\end{eqnarray}
where $DP=\dot{P}+4P\,\theta /3$ and $D_{\perp }^{\mu }P=\nabla _{\perp
}^{\mu }P-4P\,\dot{u}^{\mu }$.
\subsection{The Gradient Expansion Approach in Conformal Fluid Dynamics}
In order to solve the conservation laws (\ref{cem5}) we must provide the
equation satisfied by the shear stress tensor. One possibility to derive
such equation is via the gradient expansion in which $\pi ^{\mu \nu }$ is
assumed to be solely expressed in terms of $P$ (or temperature), $u^{\mu }$
and their gradients. In this framework, it is possible to express $\pi ^{\mu
\nu }$ as a controlled expansion in powers of gradients of $P$ and $u^{\mu
}$,
\begin{equation}
\pi ^{\mu \nu }=\eta _{1}\Pi _{1}^{\mu \nu }+\eta _{2}\Pi _{2}^{\mu \nu
}+\cdots , \label{Gshear}
\end{equation}
where the quantities $\Pi _{1}^{\mu \nu }$ and $\Pi _{2}^{\mu \nu }$
correspond to terms of first and second order in gradients of $P$ and
$u^{\mu }$, respectively, and the dots denote possible terms with higher
order derivatives.
This derivative expansion is controlled by a small parameter called the
Knudsen number, $\mathrm{Kn}=\ell _{\mathrm{micro}}/L_{\mathrm{macro}}$,
which is basically the ratio between a microscopic length scale (e.g., the
inverse temperature for conformal fluids or the mean free path for gases)
$\sim \ell _{\mathrm{micro}}$ and the overall macroscopic length scale of
the fluid $\sim L_{\mathrm{macro}}$ (the inverse of the gradient of velocity
or temperature). The term $\Pi _{1}^{\mu \nu }$ is assumed proportional to
the gradient of a macroscopic variable and should be of order $\sim
L_{\mathrm{macro}}^{-1}$. Every additional derivative brings in another
inverse power of $L_{\mathrm{macro}}$ and, thus, $\Pi _{n}^{\mu \nu }\sim
L_{\mathrm{macro}}^{-n}$. The microscopic scale $\ell _{\mathrm{micro}}$ is
contained in the coefficients $\eta _{i}$. Up to some overall power of $\ell
_{\mathrm{micro}}$ (which restores the correct scaling dimension), $\eta
_{n}\sim \ell _{\mathrm{micro}}^{n}$. Therefore, the terms $\Pi _{1}^{\mu
\nu }$ and $\Pi _{2}^{\mu \nu }$ multiplied by their corresponding
coefficients in Eq.\ (\ref{Gshear}) are of order $\mathrm{Kn}$ and
$\mathrm{Kn}^{2}$, respectively. Subsequent terms would be of higher order
in $\mathrm{Kn}$ and, therefore, when the system exhibits a clear separation
between $\ell _{\mathrm{micro}}$ and $L_{\mathrm{macro}}$, i.e., when
$\mathrm{Kn}\ll 1$, it is possible to truncate this expansion. Ideal fluid
dynamics corresponds to the zeroth order truncation of this series, i.e.,
when no terms are included at all.
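As a rough numerical illustration of this power counting (the two length scales below are arbitrary assumptions chosen only for this sketch), the relative magnitude of the $n$-th order term of the expansion scales as $\mathrm{Kn}^{n}$:

```python
# Illustrative (assumed) scales: ell_micro ~ 1/T, L_macro ~ inverse gradient scale
ell_micro = 0.2
L_macro = 2.0

Kn = ell_micro / L_macro

# Relative magnitude of the n-th order term, eta_n * Pi_n ~ Kn^n:
term_sizes = {n: Kn**n for n in range(1, 4)}
print(Kn, term_sizes)
```

For $\mathrm{Kn}\ll 1$ each successive order is suppressed by another power of $\mathrm{Kn}$, so the truncation error at order $n$ is set by the first omitted power.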
The first term in the gradient expansion can be obtained by constructing all
possible tensors that can be made using first order derivatives of $P$ and
$u^{\mu }$. These can be easily obtained and are
\begin{equation}
D_{\mu }P\text{ and }D_{\mu }u_{\nu }\text{. } \label{gradients}
\end{equation}
Next, one has to build, using these gradients, tensors that have the same
properties satisfied by $\pi ^{\mu \nu }$ and, at the same time, transform
homogeneously under Weyl transformations. Here, the only possibility is
the shear tensor, which can be written in terms of the Weyl derivative of
$u^{\mu }$ as
\begin{equation}
\sigma ^{\mu \nu }=\Delta _{\alpha \beta }^{\mu \nu }\nabla _{\perp
}^{\alpha }u^{\beta }=\frac{D^{\mu }u^{\nu }+D^{\nu }u^{\mu }}{2}.
\end{equation}
Therefore, the most general equation allowed by symmetry that can be
satisfied by $\pi ^{\mu \nu }$, up to first order in $\mathrm{Kn}$, is
\begin{equation*}
\pi ^{\mu \nu }=\eta _{1}\sigma ^{\mu \nu },
\end{equation*}
which corresponds to the relativistic Navier-Stokes theory, with $\eta
_{1}/2 $ being identified as the shear viscosity coefficient $\eta $. It is
easy to see that the shear tensor scales as $\sigma ^{\mu \nu }\rightarrow
e^{3\Omega }\sigma ^{\mu \nu }$ and, therefore, $\eta \sim T^{3}$.
In the framework of the gradient expansion, relativistic Navier-Stokes
theory can be extended by including terms of second order in gradients of $P$
and $u^{\mu }$. In order to do so, one has to obtain all the possible terms
involving Weyl derivatives of $P$ and $u^{\mu }$ that contribute to $\Pi
_{2}^{\mu \nu }$. All the \textit{independent} terms of second order in
gradients of pressure $P$ and $u^{\mu }$ that are symmetric, transverse,
traceless, and that transform homogeneously under Weyl transformations are,
\begin{eqnarray}
\mathcal{O}_{1}^{\mu \nu } &=&D\sigma ^{\langle \mu \nu \rangle }=\dot{\sigma}
^{\langle \mu \nu \rangle }+\sigma ^{\mu \nu }\theta /3, \notag \\
\mathcal{O}_{2}^{\mu \nu } &=&R^{\left\langle \mu \nu \right\rangle
}+2u_{\alpha }R^{\alpha \left\langle \mu \nu \right\rangle \beta }u_{\beta },
\notag \\
\mathcal{O}_{3}^{\mu \nu } &=&\sigma _{\lambda }^{\langle \mu }\sigma ^{\nu
\rangle \lambda },~~~\mathcal{O}_{4}^{\mu \nu }=\sigma _{\lambda }^{\langle
\mu }\Omega ^{\nu \rangle \lambda },~~~\mathcal{O}_{5}^{\mu \nu }=\Omega
_{~\lambda }^{\langle \mu }\Omega ^{\nu \rangle \lambda }, \label{1}
\end{eqnarray}
where $\Omega ^{\mu \nu }=(\nabla _{\perp }^{\mu }u^{\nu }-\nabla _{\perp
}^{\nu }u^{\mu })/2$ is the vorticity operator, $R^{\mu \nu }$ is the Ricci
tensor, and $R^{\mu \nu \alpha \beta }$ is the Riemann tensor. All the terms
above have conformal weight 4 and were first found and listed in Ref.\
\cite{BRSSS}. Note that terms such as $\Delta _{\alpha \beta }^{\mu \nu
}D_{\perp }^{\alpha }D_{\perp }^{\beta }P$, $\Delta _{\alpha \beta }^{\mu
\nu }(D_{\perp }^{\alpha }P)(D_{\perp }^{\beta }P)$, and $\Delta _{\alpha
\beta }^{\mu \nu }(DP)\sigma ^{\alpha \beta }$ contribute only to
$\mathcal{O}(\mathrm{Kn}^{3})$, as can be seen by substituting the leading
order relation $\pi ^{\mu \nu }\sim \sigma ^{\mu \nu }$ together with the
general conservation laws (\ref{cem5}).
Therefore, the most general equation allowed by symmetry that can be
satisfied by $\pi ^{\mu \nu }$, up to second order in $\mathrm{Kn}$, is
\begin{equation}
\pi ^{\mu \nu }=2\eta \sigma ^{\mu \nu }-\sum_{i=1}^{5}\,2\eta \,b_{i}\,
\mathcal{O}_{i}^{\mu \nu }+\mathcal{O}(\mathrm{Kn}^{3}). \label{2}
\end{equation}
The 6 coefficients $\eta $ and $b_{i}$ can be calculated via Kubo formulas
for the correlators of the energy-momentum tensor derived using metric
perturbations. In strongly-coupled $\mathcal{N}=4$ SYM theory, all the
coefficients above (including those associated with nonlinear terms) were
determined using the AdS/CFT correspondence
\cite{BRSSS,Moore:2010bu,Arnold:2011ja}. For instance, for strongly-coupled SYM
one finds $\eta (2\pi T)/P_{0}=2$ ($P_{0}\sim T^{4}$ is the pressure at
equilibrium) and $b_{1}(2\pi T)=2-\ln 2$ \cite{BRSSS}.
\subsection{Going Beyond the Gradient Expansion via the Inclusion of
Transient Effects in Relativistic Fluid Dynamics}
Relativistic Navier-Stokes theory and its extensions via the gradient
expansion are hindered by acausal behavior which complicates their usage in
relativistic problems \cite{hiscock}. In Ref. \cite{BRSSS}, a stable and
causal fluid-dynamical theory was obtained from the gradient expansion by
substituting in all second-order terms the \textquotedblleft
inverted\textquotedblright\ first-order solution, $\sigma ^{\mu \nu }=\pi
^{\mu \nu }/(2\eta )$. Then, the following equation of motion for $\pi ^{\mu
\nu }$ appears
\begin{eqnarray}
b_{1}D\pi ^{\left\langle \mu \nu \right\rangle }+\pi ^{\mu \nu } &=&2\eta
\sigma ^{\mu \nu }-\,2\eta b_{2}\,\mathcal{O}_{2}^{\mu \nu }-\,2\eta b_{3}\,
\tilde{\mathcal{O}}_{3}^{\mu \nu } \notag \\
&&-\,2\eta b_{4}\,\tilde{\mathcal{O}}_{4}^{\mu \nu }-\,2\eta b_{5}\,\mathcal{
O}_{5}^{\mu \nu }, \label{BRSSS}
\end{eqnarray}
where $\tilde{\mathcal{O}}_{3,4}^{\mu \nu }$ corresponds to $\mathcal{O}
_{3,4}^{\mu \nu }$ with the substitution $\sigma ^{\mu \nu }\rightarrow \pi
^{\mu \nu }/(2\eta )$ and we used that $DT\sim \mathcal{O}(\mathrm{Kn}^{2})$.
Note, however, that in order to render the gradient expansion stable, the
shear stress tensor had to be promoted to an independent dynamical variable.
On the other hand, Eq. (\ref{2}) was proved to be the most general equation
allowed by symmetry \textit{only} when $\pi ^{\mu \nu }$ was \textit{not} an
independent dynamical variable. Therefore, for causal theories of fluid
dynamics the analysis first proposed in Ref. \cite{BRSSS} and reproduced in
the previous section has to be revisited.
In this section, we use Weyl invariance to obtain the full set of nonlinear
differential equations that describe a conformal fluid when the shear stress
tensor is a dynamical variable. Now, the idea is to extend Navier-Stokes
theory by including all possible terms that can be constructed from
gradients of $P$, $u^{\mu }$, \textit{and} $\pi ^{\mu \nu }$ that are
symmetric, transverse, traceless, and that transform homogeneously under
Weyl transformations. Then, in addition to the terms constructed in the
previous section, we can also build new terms, e.g.
\begin{eqnarray}
&&D\pi ^{\langle \mu \nu \rangle },\text{ }D^{2}\pi ^{\langle \mu \nu
\rangle },\cdots \\
&&\text{ }\pi _{\alpha }^{\langle \mu }\sigma ^{\nu \rangle \,\alpha },\text{
}\pi _{\alpha }^{\langle \mu }\Omega ^{\nu \rangle \,\alpha },\text{ }\pi
_{\alpha }^{\langle \mu }\pi ^{\nu \rangle \,\alpha },\cdots .
\end{eqnarray}
Note, however, that the shear stress tensor can no longer be expressed in
terms of a series in powers of Knudsen number. By including terms of the
form, e.g. $D\pi ^{\langle \mu \nu \rangle }$ and $D^{2}\pi ^{\langle \mu
\nu \rangle }$, the shear stress tensor satisfies a partial differential
equation and its relation with the Knudsen number is dynamical, as happens
with Israel-Stewart theory, and not algebraic, as occurs in the gradient
expansion. We organize the most general equation of motion for $\pi ^{\mu
\nu }$ in the following form
\begin{eqnarray}
&&\cdots +\chi _{2}D^{2}\pi ^{\langle \mu \nu \rangle }+\chi _{1}\,D\pi
^{\langle \mu \nu \rangle }+\pi ^{\mu \nu } \notag \\
&=&2\eta \sigma ^{\mu \nu }+e_{1}\pi _{\alpha }^{\langle \mu }\sigma ^{\nu
\rangle \,\alpha }+e_{2}\pi _{\alpha }^{\langle \mu }\pi ^{\nu \rangle
\,\alpha }+e_{3}\pi _{\alpha }^{\langle \mu }\Omega ^{\nu \rangle \,\alpha
}-\sum_{i=1}^{5}\,2\eta \,c_{i}\,O_{i}^{\mu \nu }+\cdots , \label{Eq(3)}
\end{eqnarray}
where the dots denote additional possible terms. Note that
\begin{eqnarray}
D\pi ^{\langle \mu \nu \rangle } &=&\dot{\pi}^{\langle \mu \nu \rangle }+
\frac{4}{3}\pi ^{\mu \nu }\theta \,, \notag \\
D^{2}\pi ^{\langle \mu \nu \rangle } &=&\ddot{\pi}^{\langle \mu \nu \rangle
}-2u_{\rho }\dot{\pi}^{\rho \langle \mu }\dot{u}^{\nu \rangle }+\frac{20}{9}
\pi ^{\mu \nu }\theta ^{2}+3\theta \,\dot{\pi}^{\langle \mu \nu \rangle }+
\frac{4}{3}\pi ^{\mu \nu }\dot{\theta}\,. \label{newtermorder2}
\end{eqnarray}
The truncation of\ Eq. (\ref{Eq(3)}) is not trivial, as was discussed in
Ref. \cite{artigao}. The terms on the right hand side serve as source terms
for the shear stress tensor while the terms on the left hand side describe
the relaxation/oscillation of the shear stress tensor when perturbed by
gradients. The right hand side of Eq. (\ref{Eq(3)}), i.e., the source terms,
can be organized as a series in Knudsen number and the so-called inverse
\textquotedblleft Reynolds number\textquotedblright\ $\mathrm{Re}^{-1}\equiv
|\pi ^{\mu \nu }\pi _{\mu \nu }|^{1/2}/P_{0}$ \cite{gabrielboltzmann}. Since
$\pi ^{\mu \nu }$ is an independent dynamical variable, the inverse Reynolds
number can be considered as an independent small parameter that gives
additional information on how equilibrium is approached. Therefore, it is
possible to systematically organize the source terms of the equation of
motion as an expansion in powers of both $\mathrm{Kn}$ and $\mathrm{Re}^{-1}$.
In this case, the terms $e_{1}\pi _{\alpha }^{\langle \mu }\sigma ^{\nu
\rangle \,\alpha }$ and $e_{3}\pi _{\alpha }^{\langle \mu }\Omega ^{\nu
\rangle \,\alpha }$, the term $e_{2}\pi _{\alpha }^{\langle \mu }\pi ^{\nu
\rangle \,\alpha }$, and the terms $\eta \,c_{i}\,O_{i}^{\mu \nu }$, are all
the possible terms of order $\mathcal{O}(\mathrm{Re}^{-1}\mathrm{Kn})$,
$\mathcal{O}(\mathrm{Re}^{-2})$, and $\mathcal{O}(\mathrm{Kn}^{2})$,
respectively. If we wish to describe the source terms only up to order
$\mathcal{O}(\mathrm{Re}^{-2},\mathrm{Re}^{-1}\mathrm{Kn},\mathrm{Kn}^{2})$,
they are enough.
The truncation of the left hand side is more complicated since it cannot be
organized as an algebraic series in powers of small quantities, such as
Knudsen number or inverse Reynolds number. However, the order of the
differential equation in the comoving derivative on the left hand side is
equal to the number of non-hydrodynamic modes included in the dynamical
description of the system. For example, if we include only the first
comoving derivative of $\pi ^{\mu \nu }$, e.g. $D\pi ^{\langle \mu \nu
\rangle }$, we have only one non-hydrodynamic mode, while if we also include
the second order comoving derivative we would have two non-hydrodynamic
modes.
The main purpose of the gradient expansion is to correct Navier-Stokes
theory in cases where the Knudsen number is not very small, i.e., the
microscopic scale is no longer very separated from the macroscopic scales of
interest. By including second order gradients of $P$ and $u^{\mu }$, the
Navier-Stokes theory is extended to describe the dynamics at larger
wavenumbers or smaller wavelengths. However, if the separation between the
microscopic and macroscopic scales is no longer optimal, it is not enough to
extend the applicability of the theory to describe higher wavenumbers, but
one should also extend it to describe higher frequencies. This is the role
played by the left hand side of Eq. (\ref{Eq(3)}). When more comoving
derivatives of $\pi ^{\mu \nu }$ are included, more non-hydrodynamic modes
are introduced in the theory, and a description of higher frequencies is
obtained. Therefore, the structure of the left hand side of Eq. (\ref{Eq(3)})
has to be determined by carefully matching the modes introduced in the
macroscopic theory with the modes of the underlying microscopic theory,
taking into account the relevant frequencies in the macroscopic domain.
Also, one has to make sure that such matching can be done, i.e., that the
modes included in the macroscopic theory exist in the microscopic one.
For dilute gases described by the Boltzmann equation, all the
non-hydrodynamic modes lie on the imaginary axis in the complex $\omega
$--plane (in the limit of vanishing wavenumber) \cite{artigao}. In the
long-time limit it is only necessary to include the non-hydrodynamic mode
with the smallest frequency (at zero wavenumber) and the equation of motion
for $\pi ^{\mu \nu }$ with source terms up to order
$\mathcal{O}(\mathrm{Re}^{-2},\mathrm{Re}^{-1}\mathrm{Kn},\mathrm{Kn}^{2})$ becomes
\begin{equation}
\tau _{1}\,D\pi ^{\langle \mu \nu \rangle }+\pi ^{\mu \nu }=2\eta \sigma
^{\mu \nu }+e_{1}\pi _{\alpha }^{\langle \mu }\sigma ^{\nu \rangle \,\alpha
}+e_{2}\pi _{\alpha }^{\langle \mu }\pi ^{\nu \rangle \,\alpha }+e_{3}\pi
_{\alpha }^{\langle \mu }\Omega ^{\nu \rangle \,\alpha
}-\sum_{i=1}^{5}\,2\eta \,c_{i}\,O_{i}^{\mu \nu }\text{.}
\label{finalequations1derivative}
\end{equation}
One should remark that, in general, the coefficients $c_{i}$ in the
equation above are different from the $b_{i}$ in Eq.\ (\ref{2}). One can
see that the transient theory defined in Eq.\
(\ref{finalequations1derivative}) reduces to the well known result
(\ref{2}) in the limit of vanishing relaxation time. In this limit, we can
obtain an asymptotic solution for $\pi ^{\mu \nu }$ by substituting the
first order solution $\pi ^{\mu \nu }\sim 2\eta \sigma ^{\mu \nu }$ into
all terms in Eq.\ (\ref{finalequations1derivative}), which then implies
that, asymptotically, $\tau _{1}\,D\pi ^{\langle \mu \nu \rangle }\sim
2\eta \tau _{1}\,D\sigma ^{\langle \mu \nu \rangle
}+\mathcal{O}(\mathrm{Kn}^{3})$. In fact, one can relate the new
coefficients with those in Eq.\ (\ref{2}) as follows: $b_{1}=\tau
_{1}+c_{1}$, $b_{2}=c_{2}$, $b_{3}=c_{3}-e_{1}-e_{2}$,
$b_{4}=c_{4}-e_{3}$, and $b_{5}=c_{5}$. Therefore, Eq.\
(\ref{finalequations1derivative}) leads to the appropriate asymptotic
limit up to $\mathcal{O}(\mathrm{Kn}^{2})$. Also, one can show that the
general theory (in flat spacetime) obtained from the Boltzmann equation
using the moments method, as recently derived in \cite{gabrielboltzmann},
has the exact same form as (\ref{finalequations1derivative}) in the
conformal limit (massless limit and cross section $\sigma \sim 1/T^{2}$).
Note also that Eq.\ (\ref{BRSSS}) can be seen as a particular case of our
transient theory in which $e_{1}=c_{1}=c_{3}=c_{4}=0$.
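The coefficient matching quoted above can be checked with a small symbolic bookkeeping sketch (sympy). The substitutions in the comments are the leading-order relations used in the text; working in units where $2\eta =1$ is an additional normalization assumption made here only to keep the quadratic term simple.

```python
import sympy as sp

# O1..O5 stand for the tensors O_i^{mu nu}; sigma for sigma^{mu nu}.
sigma, O1, O2, O3, O4, O5 = sp.symbols("sigma O1 O2 O3 O4 O5")
tau1, e1, e2, e3 = sp.symbols("tau1 e1 e2 e3")
c1, c2, c3, c4, c5 = sp.symbols("c1:6")

# Leading-order (Navier-Stokes) substitutions, in units where 2*eta = 1:
#   pi ~ sigma,  D pi^{<mu nu>} ~ D sigma^{<mu nu>} = O1,
#   pi.sigma ~ O3,  pi.pi ~ O3,  pi.Omega ~ O4
Dpi_lo, pi_sigma, pi_pi, pi_Omega = O1, O3, O3, O4

# Transient equation: tau1*D pi + pi = sigma + e1 pi.sigma + e2 pi.pi
#                                      + e3 pi.Omega - sum_i c_i O_i
rhs = sigma + e1*pi_sigma + e2*pi_pi + e3*pi_Omega \
      - (c1*O1 + c2*O2 + c3*O3 + c4*O4 + c5*O5)
pi_asymptotic = sp.expand(rhs - tau1*Dpi_lo)

# Comparing with the gradient expansion pi = sigma - sum_i b_i O_i,
# b_i is minus the coefficient of O_i in the asymptotic solution:
b = {i: -pi_asymptotic.coeff(O) for i, O in enumerate([O1, O2, O3, O4, O5], 1)}
print(b)
```

The resulting dictionary reproduces the relations above: $b_{1}=\tau _{1}+c_{1}$, $b_{2}=c_{2}$, $b_{3}=c_{3}-e_{1}-e_{2}$, $b_{4}=c_{4}-e_{3}$, and $b_{5}=c_{5}$.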
Equation (\ref{Eq(3)}) can be extended by including one more comoving
derivative of $\pi ^{\mu \nu }$,
\begin{eqnarray}
&&\chi _{2}D^{2}\pi ^{\langle \mu \nu \rangle }+\chi _{1}\,D\pi ^{\langle
\mu \nu \rangle }+\pi ^{\mu \nu }=2\eta \sigma ^{\mu \nu }+e_{1}\pi _{\alpha
}^{\langle \mu }\sigma ^{\nu \rangle \,\alpha }+e_{2}\pi _{\alpha }^{\langle
\mu }\pi ^{\nu \rangle \,\alpha }+e_{3}\pi _{\alpha }^{\langle \mu }\Omega
^{\nu \rangle \,\alpha } \notag \\
&-&\sum_{i=1}^{5}\,2\eta \,c_{i}\,O_{i}^{\mu \nu
}+\sum_{i=1}^{5}\,f_{i}\,\pi _{\rho }^{\langle \mu }O_{i}^{\nu \rangle
\,\rho }-\xi \,D^{\langle \mu }D_{\lambda }\pi ^{\nu \rangle \,\lambda }.
\label{finalequations2derivative}
\end{eqnarray}
where $D^{\langle \mu }D_{\lambda }\pi ^{\nu \rangle \,\lambda }$ is the
only source term found of conformal weight 8,
\begin{equation}
D^{\langle \mu }D_{\lambda }\pi ^{\nu \rangle \,\lambda }=\Delta _{\alpha
\beta }^{\mu \nu }(\nabla ^{\alpha }-6\dot{u}^{\alpha })\,\nabla _{\lambda
}\pi ^{\lambda \beta }\,. \label{newterm2derivativesspace}
\end{equation}
Above, we included source terms of order $\mathcal{O}(\mathrm{Re}^{-1}
\mathrm{Kn})$, $\mathcal{O}(\mathrm{Re}^{-2})$, $\mathcal{O}(\mathrm{Kn}
^{2})$, and $\mathcal{O}(\mathrm{Re}^{-1}\mathrm{Kn}^{2})$. In principle, we
could have also included source terms of order $\mathcal{O}(\mathrm{Re}
^{-3})$ and $\mathcal{O}(\mathrm{Kn}^{3})$, but this is time consuming and
beyond the scope of this paper. Again, it is easy to see that the theory
displayed above reproduces the $\mathcal{O}(\mathrm{Kn}^{2})$ gradient
expansion obtained in \cite{BRSSS}, since all the new terms included are of
order $\mathcal{O}(\mathrm{Kn}^{3})$ when the asymptotic solution $\pi ^{\mu
\nu }\sim 2\eta \sigma ^{\mu \nu }$ is substituted.
It is interesting to observe that, since the transient theories in
(\ref{finalequations1derivative}) and (\ref{finalequations2derivative})
reduce to (\ref{2}), several results previously derived using (\ref{2}) are
automatically valid also for the transient theories derived here. For
instance, the expansion around $k\rightarrow 0$ for the sound mode present
in the theories defined via Eqs.\ (\ref{finalequations1derivative}),
(\ref{finalequations2derivative}), and (\ref{2}) is
\begin{equation}
\omega _{sound}(k)=\pm \frac{k}{\sqrt{3}}-i\,k^{2}\frac{\eta }{6P_{0}}\pm
\frac{k^{3}}{6\sqrt{3}}\left( \frac{b_{1}\,\eta }{P_{0}}-\frac{\eta ^{2}}{
4P_{0}^{2}}\right) +\mathcal{O}(k^{4})\,. \label{soundmode}
\end{equation}
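For orientation, the dispersion relation above can be evaluated numerically using the strongly coupled SYM values quoted earlier, $\eta (2\pi T)/P_{0}=2$ and $b_{1}(2\pi T)=2-\ln 2$. The sketch below works in units of $2\pi T$; the chosen value of $k$ is an arbitrary illustration.

```python
import math

# Strongly coupled N=4 SYM inputs (in units of 2*pi*T): eta/P0 = 2, b1 = 2 - ln 2
eta_over_P0 = 2.0
b1 = 2.0 - math.log(2.0)

def omega_sound(k, branch=+1):
    """Small-k expansion of the sound mode through O(k^3), Eq. (soundmode)."""
    real = branch * (k / math.sqrt(3.0)
                     + (k**3 / (6.0 * math.sqrt(3.0)))
                       * (b1 * eta_over_P0 - eta_over_P0**2 / 4.0))
    imag = -(k**2) * eta_over_P0 / 6.0
    return complex(real, imag)

# The two branches share the same damping rate and propagate in opposite
# directions, as the ± structure of the expansion requires:
print(omega_sound(0.1), omega_sound(0.1, branch=-1))
```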
As mentioned above, the transient theory obtained from the Boltzmann
equation taking into account only the slowest non-hydrodynamical mode
assumes the form of Eq.\ (\ref{finalequations1derivative}). We shall see in
the following that for the strongly coupled SYM fluid the transient theory
assumes the form displayed in Eq.\ (\ref{finalequations2derivative}), i.e.,
it includes at least the 2 slowest non-hydrodynamic modes.
\section{Non-Hydrodynamic Poles in an $\mathcal{N}=4$ SYM Plasma}
The analytical properties of thermal retarded correlators at strong coupling
and their calculation via the AdS/CFT correspondence \cite{maldacena} have
been discussed in the literature in great detail
\cite{Son:2007vk,Policastro:2002se}. Through the duality \cite{Kovtun:2005ev},
the poles of the retarded thermal 2-point function of $\hat{T}^{xy}$
correspond to the quasinormal frequencies in the so-called scalar channel,
which amounts to solving the equation of motion for a massless scalar field
minimally coupled to gravity in the bulk \cite{Kovtun:2004de}. It was found
that for strongly coupled $\mathcal{N}=4$ SYM theory the poles always come
in pairs with the same imaginary part and opposite real parts
\cite{Starinets:2002br}, which is very different from the weak coupling
behavior inferred from the Boltzmann equation \cite{artigao} (see Fig.\ 1).
It would be interesting to check if this retarded correlator in
\textit{weakly coupled} $\mathcal{N}=4$ SYM indeed possesses poles only on
the imaginary axis. As
shown in \cite{Kovtun:2005ev}, due to rotational invariance, the three
independent Green's functions that describe the fluctuations become
identical at zero wavenumber and, thus, in this limit the non-hydrodynamical
poles in all of these channels become the same.
In the supergravity approximation, the retarded correlator of the glueball
operator $\mathrm{Tr}\hat{F}^{2}$ in strongly coupled $\mathcal{N}=4$ SYM is
also found by solving the same equation of motion for a minimally coupled
scalar in the bulk. This occurs because both operators have their UV scaling
dimension equal to 4, which corresponds to a massless scalar field in the
bulk \cite{maldacena}. The poles of this glueball correlator determine the
masses and the decay properties of the glueballs at finite $T$. It is clear
in this context, however, that the fundamental mode of this glueball
correlator must indeed be doubly degenerate. This occurs because the mass of
the state, which is nonzero, only appears as $m^{2}$ (i.e., the poles have
opposite real parts) and in the deconfined phase glueballs must decay (hence
the poles must have a nonzero imaginary part). Therefore, while the
numerical value of the poles of the $\mathrm{Tr}\hat{F}^{2}$ (or
$\hat{T}^{xy}$) correlator will change according to the conformal theory used in the
calculation, the general structure discussed above concerning the
distribution of the poles in the complex plane should remain valid for
conformal plasmas. Note, however, that while in the supergravity
approximation the singularities of the Green's functions do not depend on
the 't Hooft coupling or $N_{c}$, the analytical properties of the Green's
functions are expected to change outside the supergravity limit (see, for
instance, the discussion in \cite{Hartnoll:2005ju}). It would be interesting
to investigate the analytical structure of this correlator in non-conformal
plasmas described by bottom-up gravity theories constructed to mimic some
properties displayed by QCD at nonzero temperature \cite{nonconformal}.
\begin{figure}[tbh]
\hspace{-0.0cm} \includegraphics[width=8.0cm]{PolesPicture.eps} \vspace{-0.0cm}
\vspace{-0.3cm}
\caption{{\protect\small Analytic structure of the retarded Green's function
associated with shear stress in weakly coupled theories (a), based on the
Boltzmann equation, and strongly coupled theories (b), based on the AdS/CFT
duality. }}
\label{OnePole}
\end{figure}
A crucial insight derived in \cite{artigao}, which will be used extensively
here, was that the zero wavenumber limit of the poles of the Green's
function determines the coefficients of the linear terms in the equations of
motion of relativistic dissipative fluid dynamics.
\section{Linearized Fluid Dynamic Equations for the $\mathcal{N}=4$ SYM
Plasma}
We choose to derive the macroscopic equations of motion for $\pi ^{\mu \nu }$
via linear response to small metric perturbations, as discussed in
\cite{Son:2007vk}. The linear transport coefficients are determined from
perturbations $h^{\mu \nu }$ of the metric tensor $g^{\mu \nu }=\eta ^{\mu
\nu }+h^{\mu \nu }$ in the gauge theory. While this method can be equally
used in weak and strongly coupled gauge theories, we will focus on the
results for strongly-coupled $\mathcal{N}=4$ SYM. Within linear response,
the variation of the energy-momentum tensor $T^{\mu \nu }$ due to the metric
perturbations is
\begin{equation}
\delta T^{\mu \nu }\left( X\right) =\frac{1}{2}\int_{-\infty }^{\infty
}d^{4}X^{\prime }\,G_{R}^{\mu \nu \alpha \beta }\left( X-X^{\prime }\right)
\,h_{\alpha \beta }\left( X^{\prime }\right) ,
\end{equation}
where $G_{R}^{\mu \nu \alpha \beta }\left( X-X^{\prime }\right) $ is the
retarded Green's function, whose properties will be obtained from the
AdS/CFT correspondence. We consider only the following metric perturbation,
$h_{xy}=h_{xy}\left( t,z\right) $, with all other components of the metric
tensor left unperturbed \cite{BRSSS}. In this case, all the other components
of $\delta T^{\mu \nu }$ decouple from the $xy$ component and we obtain the
following expression for $\delta T^{xy}$
\begin{equation}
\delta T^{xy}(t,z)=\int_{-\infty }^{\infty }dt^{\prime }\,dz^{\prime
}\,G_{R}^{xyxy}\left( t-t^{\prime };\,z-z^{\prime }\right) h_{xy}\left(
t^{\prime },z^{\prime }\right) .
\end{equation}
The energy-momentum tensor $T^{\mu \nu }$ is assumed to have the traditional
fluid-dynamical structure shown in Eq.\ (\ref{definepi}), which then implies
that
\begin{equation}
\delta T^{xy}\equiv T^{xy}(\eta ^{\mu \nu }+h^{\mu \nu })-T^{xy}(\eta ^{\mu
\nu })=-P_{0}\,h^{xy}+\delta \pi ^{xy}\;,
\end{equation}
where $\delta \pi ^{xy}$ is the $xy$ component of the shear stress tensor
created by the metric perturbations and $P_{0}$ is the pressure of the
unperturbed state. We set the shear stress tensor of the unperturbed state
to zero. Using the energy-momentum equations of motion we arrive at the
following equation \cite{artigao}
\begin{equation*}
\delta \pi ^{xy}=P_{0}\,h^{xy}+\int_{-\infty }^{\infty }dt^{\prime
}\,dz^{\prime }\,G_{R}^{xyxy}\left( t-t^{\prime };\,z-z^{\prime }\right)
h_{xy}\left( t^{\prime },z^{\prime }\right) .
\end{equation*}
or, equivalently, in Fourier space, $\delta \pi ^{xy}(t,z)=\left[ 1/(2\pi
)^{2}\right] \int_{-\infty }^{\infty }d\omega \,dk\,e^{-i\omega
t+ikz}\,\delta \tilde{\pi}^{xy}(\omega ,k)$ where
\begin{equation}
\delta \tilde{\pi}^{xy}(\omega ,k)=\tilde{G}_{R}(\omega ,k)\,\tilde{h}
_{xy}(\omega ,k)\;, \label{linearresponsemetric}
\end{equation}
with $\tilde{G}_{R}(\omega ,k)=-P_{0}+\tilde{G}_{R}^{xyxy}(\omega ,k)$. Note
that $\tilde{G}_{R}(\omega ,k)$ has the same analytic structure as $\tilde{G}
_{R}^{xyxy}(\omega ,k)$ because $P_{0}$ does not depend on $\omega $ and $k$.
Given that $\tilde{G}_{R}(\omega ,k)$ is a meromorphic function with only
non-hydrodynamic poles, in the case of homogeneous relaxation (where $k=0$)
one can formally write \cite{artigao}
\begin{equation}
\tilde{G}_{R}(\omega )=F(\omega )+\frac{1}{4\pi \,i}\sum_{n=1}^{\infty
}\left( \frac{f_{n}(\omega )}{\omega -\omega _{n}(0)}+\frac{f_{n}^{\ast
}(-\omega ^{\ast })}{\omega +\omega _{n}^{\ast }(0)}\right) , \label{GReq1}
\end{equation}
where $F(\omega )$ and $f_{n}(\omega )$ are analytical functions (and we
used that $\tilde{G}_{R}^{\ast }(\omega )=\tilde{G}_{R}(-\omega ^{\ast })$).
Performing the Fourier transform and picking up the residues one finds
(using that $\tilde{h}^{\ast }(\omega )=\tilde{h}(-\omega ^{\ast })$)
\begin{equation}
\delta \pi ^{xy}(t)=P_{0}\,h^{xy}(t)+\theta (t)\sum_{n=1}^{\infty
}\,|f_{n}(\omega _{n}(0))\,\tilde{h}_{xy}(\omega _{n}(0))|\,e^{-\Gamma
_{n}t}\,\cos (\Omega _{n}t+\delta _{n}), \label{equation2}
\end{equation}
where $\delta _{n}$ is a constant phase shift. Clearly, one must be careful
when dealing with the representation above because, in general, the sum in
Eq.\ (\ref{GReq1}) may not converge. Since there are only a few explicit
examples where all the poles and residues are known analytically, the
convergence properties of the sum employed in $\tilde{G}_{R}$ are, in
general, not known. However, note that in the equation for $\delta \pi ^{xy}(t)$
derived above, $\tilde{h}$ enters in the coefficients of the sum. Therefore,
the sum in Eq.\ (\ref{equation2}) may converge as long as the metric
disturbance varies \textit{sufficiently slowly} in time, i.e., $\tilde{h}$
goes to zero fast enough for $\omega \neq 0$. Note, however, that this is
indeed the case we are interested in since we want to study the response of
the system to an external agent (the metric variations) that varies
sufficiently slowly in time (near the fluid regime). Thus, it is simple to
show that \textit{under these conditions}, only the first two poles ($n=1$
in the sum, i.e., two distinct time scales) will contribute at
\textit{sufficiently long} (though finite) times $t\,\Gamma _{1}\gg 1$.
Including only the two slowest non-hydrodynamic modes close to the origin in
the complex $\omega $--plane, we obtain the following linearized equation of
motion for $\delta \pi ^{xy}$ in an AdS/CFT configuration (see Fig.\ 1-b)
directly from Eq.\ (\ref{linearresponsemetric}),
\begin{eqnarray}
\left[ \Phi _{2}(0)\partial _{t}^{2}+\Phi _{1}(0)\partial _{t}+1\right]
\delta \pi ^{xy} &=&C_{1}(0)\dot{h}_{xy}-\left( \frac{\partial
_{k}^{2}C_{0}(k)}{2}\right) \Big |_{k=0}\partial _{z}^{2}h(t,z) \notag \\
&&+\left[ C_{2}(0)+C_{1}(0)\Phi _{1}(0)\right] \ddot{h}(t,z)+\mathcal{O}(
\dddot{h}(t,z),\partial _{z}^{4}h(t,z)), \label{ISmetric}
\end{eqnarray}
where we define
\begin{equation}
C_{p}(\mathbf{k})=\frac{i^{p}}{p!}\partial _{\omega }^{p}\tilde{G}_{R}(\omega ,\mathbf{k})\Big|_{\omega =0}\,,\text{ }\Phi _{1}(k)=-\frac{i}{\omega _{1}(k)}-\frac{i}{\omega _{2}(k)},\text{ }\Phi _{2}(k)=\frac{-1}{\omega _{1}(k)\omega _{2}(k)}.
\end{equation}
Note that, due to the symmetries of the retarded Green's function, it was
not possible to include only one non-hydrodynamic mode, as was possible in
the Boltzmann equation and happened in Israel-Stewart theory, since the
first two poles are symmetric relative to the imaginary $\omega $ axis and
are equally distant from the origin in the complex $\omega $--plane. Eq.\
(\ref{ISmetric}) came directly from the underlying \textit{microscopic}
theory since our starting point was Eq. (\ref{linearresponsemetric}). We
assumed in the derivation of Eq.\ (\ref{ISmetric}) that the Green's function
contains only non-hydrodynamic poles (that are functions of $\mathbf{k}^{2}$
due to rotational invariance), $C_{0}(\mathbf{0})=\tilde{G}_{R}(0,\mathbf{0})=0$, and we also limited ourselves to displaying only the terms containing at
most 2 derivatives.
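The qualitative content of a relaxation equation of this type (two complex-conjugate poles producing damped, oscillatory relaxation toward the quasi-static response) can be illustrated with a toy numerical integration. The coefficient values below are purely illustrative placeholders, not the $\mathcal{N}=4$ SYM values:

```python
# Toy integration of a linearized relaxation equation of the form
#   [Phi_2 d^2/dt^2 + Phi_1 d/dt + 1] dpi(t) = S(t)
# for a constant source switched on at t = 0. phi1 and phi2 are
# illustrative placeholders (NOT the N = 4 SYM values); the point is
# only that the pair of complex-conjugate poles yields damped,
# oscillatory relaxation toward the quasi-static answer dpi = S.
phi1, phi2 = 1.0, 0.3
S = 1.0
dt, n_steps = 1.0e-3, 40000

dpi, dpi_dot = 0.0, 0.0
for _ in range(n_steps):
    dpi_ddot = (S - dpi - phi1 * dpi_dot) / phi2
    dpi_dot += dt * dpi_ddot   # semi-implicit Euler update
    dpi += dt * dpi_dot

print(round(dpi, 3))  # relaxes toward the quasi-static value S
```

With $\Phi_1^2 < 4\Phi_2$ the characteristic roots are complex, so the transient is oscillatory before it dies out on the time scale set by the real part of the poles.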
It is easy to show that the \textit{macroscopic} theory in Eq.\ (\ref{finalequations2derivative}), when linearized via metric perturbations,
becomes (note that due to these metric perturbations the shear tensor
becomes $\sigma ^{xy}\sim \partial _{t}h^{xy}$)
\begin{equation}
\left[ \chi _{2}\partial _{t}^{2}+\chi _{1}\partial _{t}+1\right] \delta \pi
^{xy}(t,z)=\eta \partial _{t}h(t,z)+\eta c_{2}\partial _{z}^{2}h(t,z)+\eta
(c_{1}+c_{2})\partial _{t}^{2}h(t,z)\,. \label{macro1}
\end{equation}
Note that the coefficient $\xi $ in Eq.\ (\ref{finalequations2derivative})
does not appear in this case because of the specific way we chose to disturb
the metric. However, it can be shown by calculating the general dispersion
relations for the sound and shear channels for a conformal fluid that a term
like $D^{\langle \mu }D_{\lambda }\pi ^{\nu \rangle \,\lambda }$ cannot
appear and therefore we must take $\xi =0$ \cite{Footnote2}.
We can now match the long distance, long time limit of the microscopic
theory in Eq.\ (\ref{ISmetric}) to the macroscopic theory in (\ref{macro1})
to derive the well-known Kubo formula $\eta =C_{1}(0)=i\partial _{\omega }\tilde{G}_{R}(0)$ and, thus, we recover that $\eta /s=1/(4\pi )$ \cite{Kovtun:2004de} in the supergravity limit. Moreover, $\chi _{1,2}=\Phi
_{1,2}(0)$, $2\eta c_{2}=-\left( \partial _{k}^{2}C_{0}(k)\right) \Big|_{k=0}=-\left( \partial _{k}^{2}G_{R}(0,k)\right) \Big|_{k=0}$ and $2\eta
(c_{1}-\chi _{1})=(\partial _{k}^{2}-\partial _{\omega }^{2})G_{R}(\omega ,k)\Big|_{\omega =k=0}$. Using the results for the Taylor expansion of
$G_{R}^{xyxy}$ derived in \cite{BRSSS} and the calculation of the poles at
zero wavenumber from \cite{Starinets:2002br}, one obtains the following
values for the transport coefficients in $\mathcal{N}=4$ SYM, $\chi _{1}\sim
0.63/(2\pi T)$ and $\chi _{2}\sim 0.23/(2\pi T)^{2}$, $c_{2}=1/(2\pi T)$ and
$c_{1}=\chi _{1}-(2-\ln 2)/(2\pi T)$. Thus, all the coefficients associated
with the linear terms in the transient theory in Eq.\ (\ref{finalequations2derivative}) have been determined. There are, however, still
10 coefficients in the transient theory that remain to be computed: the $e_{i}$'s, $f_{i}$'s, $c_{3}$ and $c_{4}$. Since they correspond to nonlinear
terms, they cannot be determined from linear response theory.
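For orientation, the quoted coefficient values can be evaluated numerically at a given temperature; the helper below is just a transcription of the formulas above (natural units):

```python
import math

# Transcription of the N = 4 SYM transport coefficients quoted above:
#   chi_1 ~ 0.63/(2 pi T),  chi_2 ~ 0.23/(2 pi T)^2,
#   c_2   = 1/(2 pi T),     c_1   = chi_1 - (2 - ln 2)/(2 pi T),
# as functions of the temperature T (natural units).
def sym_transport_coefficients(T):
    x = 2.0 * math.pi * T
    chi1 = 0.63 / x
    chi2 = 0.23 / x ** 2
    c2 = 1.0 / x
    c1 = chi1 - (2.0 - math.log(2.0)) / x
    return {"chi1": chi1, "chi2": chi2, "c1": c1, "c2": c2}

coeffs = sym_transport_coefficients(T=0.3)  # T in, e.g., GeV
```

Note that since $2-\ln 2\approx 1.31 > 0.63$, the coefficient $c_{1}$ comes out negative at any temperature.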
As was mentioned before, since the transient theory derived in this paper
\textit{automatically} reduces to the asymptotic theory derived in \cite{BRSSS}, several results derived within that theory are contained within the
general transient theory displayed in (\ref{finalequations2derivative}). For
instance, results derived within the fluid-gravity correspondence \cite{Bhattacharyya:2008jc} or the Bjorken expanding systems studied in \cite{Janik:2006ft} can be readily recovered. In fact, in the case of a Bjorken
expanding system, the transient theory defined by Eq.\ (\ref{finalequations2derivative}) should give the same expressions obtained from
the Burnett-like theory in Eq.\ (\ref{2}) at sufficiently large times.
\section{Final Comments}
In summary, in this paper we derived the most general equation of motion
(see Eq.\ (\ref{finalequations2derivative})) compatible with conformal
invariance satisfied by the shear stress tensor of strongly coupled
$\mathcal{N}=4$ SYM theory in the \textit{transient regime} at $\mathcal{O}(
\mathrm{Re}^{-2},\mathrm{Kn}^{2},\mathrm{Re}^{-1}\mathrm{Kn}^{2})$. This
equation contains 17 transport coefficients (of which 7 were determined in
this paper) and it describes the \textit{transient regime} experienced by
the fluid as it evolves towards its universal asymptotic solution given by
the gradient expansion computed to $\mathcal{O}(\mathrm{Kn}^{2})$ in \cite{BRSSS}. Equation\ (\ref{finalequations2derivative}) is a second-order
differential equation with respect to proper time for $\pi ^{\mu \nu }$ and,
as such, it is structurally different from the relaxation-type equations
expected to describe the transient fluid dynamics of weakly-coupled systems
\cite{artigao}.
An important point concerns the stability of Eq.\ (\ref{finalequations2derivative}) with respect to hydrostatic equilibrium. As
mentioned above, the stability of transient, relativistic fluid dynamics is
a nontrivial problem and, in fact, so far only the stability conditions for
relaxation-type equations have been checked \cite{us}. The inclusion of
additional time derivatives affects the previous studies and, thus, one must
generalize these calculations to verify under which conditions Eq.\ (\ref{finalequations2derivative}) describes a stable and causal fluid.
The novel transient physics contained in Eq.\ (\ref{finalequations2derivative}) may shed some light on the description of
the early time dynamics of the strongly coupled QGP formed in
ultrarelativistic heavy ion collisions. The authors thank H.~Niemi, H.
Warringa, G.~Torrieri, and D.~Rischke for discussions. We thank the
Helmholtz International Center for FAIR within the framework of the LOEWE
program for support. J.~N. thanks Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq) and Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP) for support.
Big changes ahead for the Winter Blast festival in downtown Detroit
The annual snow-and-ice celebration is expanding to four weekends at Campus Martius in January and February, with a rotating schedule of activities.
Brian McCollum, Detroit Free Press Published 10:31 a.m. ET Nov. 28, 2018 | Updated 11:08 a.m. ET Nov. 28, 2018
Winter Blast is expanding to four weekends in January and February 2019. (Photo: Elaine Cromie, Detroit Free Press)
After 13 years of occupying a single spot on the annual calendar, Winter Blast is growing into a four-weekend event in downtown Detroit.
Quicken Loans has signed on as title sponsor for the 2019 edition of the festival, now to be known as Quicken Loans Winter Blast Weekends. Organizers were scheduled to unveil full details and a new logo during a media event Wednesday morning near Campus Martius Park, traditional hub of Winter Blast.
The event is also dropping its admission charge ($3 at recent installments), although some activities will still require individual fees.
The remodeled fest will run the weekends of Jan. 19 and Jan. 25 — concurrent with the North American International Auto Show — then skip Super Bowl weekend before resuming Feb. 8 and Feb. 15.
Activities will rotate throughout the stretch: Free ice skating at the Campus Martius rink will be available on opening and closing weekends, for instance, while skiing and snowboarding experiences will be showcased on the second weekend. The fest's popular, 30-foot-high Winter Slide will appear the weekend of Feb. 8, and a zip line ride will arrive on the closing weekend.
Other Winter Blast staples — marshmallow roasts, ice sculpting, local bands — will feature throughout. The music lineup is expected to be announced in December, and a full schedule will be posted at winterblast.com.
In tandem with Kroger's "Zero Hunger, Zero Waste" program, participating food trucks are securing their spots by donating $500 of food, which will be distributed to area shelters by Forgotten Harvest.
Unlike previous years, Woodward Avenue will remain open during the festival.
Other Winter Blast Weekends beneficiaries will include the Special Olympics and Matrix Human Services.
Winter Blast was launched in 2006 as part of the Super Bowl XL activities in Detroit. The festival is produced by Jon Witz, the promoter behind Arts, Beats & Eats in Royal Oak and the Blue Water Fest in Port Huron.
Quicken Loans Winter Blast Weekends, 2019 schedule
(Provided by organizers; subject to change)
Jan. 19-21, presented by Soaring Eagle Casino & Resort
(Saturday, 11 a.m.-11 p.m.; Sunday, 11 a.m.-9 p.m.; Monday 11 a.m.-8 p.m.)
Free ice skating at Campus Martius Rink (normally $10 per adult)
Food truck rally – courtesy of Kroger
Marshmallow roasting – courtesy of the Detroit Downtown Development Authority (DDA)
Ice sculptures – courtesy of US Ice
Local music showcase
Saturday night only: DJ showcase from the Movement festival
Food Truck Rally: Beans & Cornbread, Taste of Nawlins, Chick a D, Fortune Cookin', Imperial Ferndale
(Friday, 4 p.m.-11 p.m.; Saturday, 11 a.m.–11 p.m.; Sunday, 11 a.m.–8 p.m.)
City Slopes (skiing and snowboarding exhibition) – courtesy of Boyne Mountain and Boyne Highlands
Family activities – courtesy of Chemical Bank
Marshmallow roasting – courtesy of the DDA
Strolling entertainment
Food Truck Rally: Hero or Villain, The Nosh Pit, Soaring Eagle Cuisine Machine
Feb. 8-10, presented by Delta Dental
(Friday, 4 p.m.-11 p.m.; Saturday, 11 a.m.-11 p.m.; Sunday, 11 a.m.-8 p.m.)
Special Olympics Polar Plunge
Food truck rally – courtesy of The Kroger Co.
Winter slide
Food Truck Rally: Beans & Cornbread, Bigalora, Hero or Villain, Buffy's Mexicasian Grill, The Nosh Pit, Soaring Eagle Cuisine Machine
Feb. 15-17, presented by Delta Dental
Free skating at Campus Martius Rink (normally $10 per person)
Food Truck Rally: Imperial Ferndale, Chick a D, The Monkey Truck
Read or Share this story: https://www.freep.com/story/entertainment/2018/11/28/winter-blast-weekends-2019-festival-schedule/2134110002/
Ugo Vetere (Regio de Calabria, 1924 – Viterbo, 2 April 2013) was an Italian public official and politician.
Biography
He was born in Regio de Calabria on 24 April 1924. He joined the Italian Communist Party.
He was a Senator, a member of the Italian Parliament, and mayor of Rome.
He died on 2 April 2013, after a long illness.
References
20th-century Italian politicians
Italian Communist Party politicians
Senators of Italy
Mayors of Rome
Born in Regio de Calabria
Deaths in Viterbo
exports.setupMemcached = require('./memcached');
Q: Only root can log in to MariaDB on CentOS 7 I just installed CentOS 7 and a LAMP stack. However, I can only log in to MariaDB as root even though there is another user. I've been using MySQL for a couple of decades but I can't figure this out.
Basically everything seems to be all right, and exactly the same user configuration worked fine with MySQL on the previous CentOS 6.10.
This other user (i.e. John) can't log in to MariaDB from the CLI, nor can he log in from phpMyAdmin locally or remotely. Only the root user can access databases.
I've tried a third user, with the same result.
All three have the following hosts with the same privileges:
%, localhost, ::1 and "host.domain.fi". And yes, I have executed the "FLUSH PRIVILEGES" SQL command.
I've even tried super-complex passwords, with no help. The only response is "ERROR 1045 (28000): Access denied for user 'test'@'localhost' (using password: YES)".
The only difference in the creation of the users is that the root user was created with mysql_secure_installation and the others with phpMyAdmin.
Any ideas? This is obviously quite a hazard for the secure use of my MariaDB.
A: I do not see in your question any reference to the GRANT command you used to create the user. Did you use it? If not, try entering with root and then execute the following:
GRANT ALL ON your_db_name.* TO 'test'@'localhost';
\section{PROBLEMS AND PROGRAM}
The finite temperature transition of QCD can be seen
as a change in the structure of the hadrons and as a symmetry
breaking transition -- a change in the structure of the vacuum
(we shall take the most economical attitude that deconfinement and
chiral symmetry restoration are related).
These phenomena are observed
differently and carry complementary information. We aim at a correlated analysis involving hadronic correlators and
the vacuum structure including
field and density correlations,
both non-trivial questions.
To understand the hadronic phenomenology at $T>0$ (see, e.g.\cite{JSQM96}) we need to describe the dominant
low energy structure in each channel. We must be prepared to
cope with: wide structure replacing the well defined,
$T=0$ pole; the difficulty of separating this structure from the
rest of the spectrum; non-isotropic dispersion law -- all
intrinsic (and relevant)
physical aspects. Due to
breaking of the Lorentz invariance at $T>0$ we must
study the general correlators. However, in lattice calculations
the finite time extension $l_{\tau} = 1/T$
prevents the disappearance of the higher
excitations in the time propagation and
more refined analyses are required. Likewise,
for the description of the vacuum structure one must disentangle the physically relevant
structure from UV fluctuations. We use a ``gold - washing" cooling algorithm with nearly scale invariant instantons above a short range cut-off $\rho_0 \simeq 2.3a$ \cite{MNP}. Since instanton -- anti-instanton (IA) pairs annihilate in any cooling their study is more sophisticated.
To allow for high $T$ with large $N_{\tau}$ but moderate
$N_{\sigma}$ and $\beta$ we
use anisotropic lattices:
\begin{equation}
a_{\sigma}/a_{\tau} = \xi >1, \ \
T = \left( \xi / N_{\tau} \right) a_{\sigma}^{-1}
\end{equation}
\noindent This ensures a fine discretization
of the time axis and thus more detailed information. It also
allows a fine variation of the temperature at fixed
$\beta$.
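As a numerical illustration, since $l_{\tau}=1/T$ implies $T=1/(N_{\tau}a_{\tau})=\xi/(N_{\tau}a_{\sigma})$, a short script can tabulate the temperatures reached for several time extents (the values of $\xi$ and $a_{\sigma}$ below are purely illustrative placeholders):

```python
# Temperature on an anisotropic lattice: T = xi / (N_tau * a_sigma),
# following from l_tau = 1/T together with a_tau = a_sigma / xi.
# xi and a_sigma are illustrative placeholders, not fitted values.
xi = 4.0
a_sigma = 2.0  # spatial cutoff in GeV^-1 (hypothetical)

temps = {n_tau: xi / (n_tau * a_sigma) for n_tau in (24, 20, 18, 16)}
# A shorter time extent gives a higher temperature, in fine steps,
# while beta (and hence a_sigma) stays fixed.
```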
We approach these questions first in quenched QCD,
which shows a deconfining transition.
But since chirality directly involves the quarks, in their absence the
temperature effects on the
hadronic correlators and on the topological structure
may be delayed or modified. Therefore the final
aim must be a full QCD analysis.
\section{ANISOTROPIC LATTICE STUDIES}
\noindent {\it Calibration and scaling:} Anisotropic lattices \cite{FKa} are introduced with help of a (bare) coupling anisotropy $\gamma$,
i.e. for QCD with plaquette action:
\begin{equation}
S_{YM}=-{{\beta} \over 3}\left\lbrack{1\over\gamma }Re\hbox{Tr}\left
(P_{\sigma\sigma}\right)+\gamma Re\hbox{Tr}
\left(P_{\sigma \tau}\right)\right\rbrack\
\end{equation}
\noindent with $P_{\mu \nu} = W_{\mu \nu}(11)$. Euclidean
symmetry should be recovered for physical quantities when expressed
using the cut off anisotropy $\xi$ \cite{FKa,BKNS}:
\begin{equation}
F_n^{\sigma}(z) = F_n^{\tau}(t = \xi z)
\end{equation}
\noindent This fixes $\xi$ given $\gamma$ (``calibration").
If the action leads to strong artifacts, this relation cannot
be fulfilled simultaneously for all observables.
In a
first analysis of these effects
we use the {\it tree level improved} actions defined with
$P$ in Eq.(2) given now by the sum of loops:
\begin{eqnarray}
P_{\mu \nu} = c_0 W_{\mu \nu}(11) +
c_1 W_{\mu \nu}(12)+ c_2 W_{\mu \nu}(22)
\end{eqnarray}
\noindent with the $W(12)$ loop averaged over directions
(it is easy to see that at tree level the bare anisotropy
affects all loops similarly). We compare (1): Wilson action,
(2): L\"uscher - Weisz Symanzik action $c_0 =5/3,\ c_1=-1/12$
and (3): the ``square" Symanzik action $c_0=16/9,\ c_1=-1/9,\ c_2=1/144$ \cite{PvB}. In Table I we present $SU(2)$ results
on $8^3\times24$ lattices
at $\gamma = 3$ and $\beta = 2.339,\ 1.768$ and $ 1.772$
(4000, 4000 and 2000 configurations respectively,
separated by 10 sweeps
after 10000 thermalization sweeps; $\chi^2$ cannot be compared between the different actions). The parameters are
chosen such as to have the same cut off $a_{\sigma}$
corresponding to $\beta=2.25$ for the Wilson action at
$\gamma=1$.
(Notice that for the Wilson action
$\Lambda (\gamma=3) \simeq 0.8 \Lambda (\gamma=1)$ \cite{FKa}.)
We fit Eq. (3) choosing for $F_n^{\mu}(m_{\mu})$ planar Wilson loop ratios
\begin{equation}
R_{n_{\sigma}}^{\mu}(m_{\mu}) \equiv
W_{\sigma\mu}(n_{\sigma},m_{\mu})/W_{\sigma\mu}(n_{\sigma}-1,m_{\mu}).
\end{equation}\smallskip
\noindent for $m_{\mu}= 1,\ldots,N_{\mu}/2 + 1,\ \mu = \sigma, \tau$. For the Wilson action $\xi_{pert.} \simeq 3.3$, hence we have rather large non-perturbative corrections. The tree level improved actions already seem to reduce both
the non-perturbative effects and the lattice artifacts.
Results for $SU(3)$ and for non-planar loops and physical isotropy checks for instantons on anisotropic lattices will be reported elsewhere.
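The calibration logic of Eqs.\ (3)--(5) can be checked on synthetic data: for Wilson loops obeying a pure area law (an idealization -- real data also carry perimeter terms and statistical noise), the slopes of the logarithmic ratios recover the input anisotropy exactly:

```python
import math

# Synthetic check of the calibration condition F_sigma(z) = F_tau(xi z):
# take Wilson loops obeying a pure area law (idealized),
#   W_ss(n, m)  = exp(-K n m)             spatial-spatial, units a_sigma^2
#   W_st(n, mt) = exp(-K n mt / xi_true)  spatial-temporal, a_tau = a_sigma/xi
# and recover xi from the log-ratios of Eq. (5). K, xi_true and n are
# illustrative placeholders.
K, xi_true, n = 0.12, 3.45, 4

def w_ss(n_s, m_s):
    return math.exp(-K * n_s * m_s)

def w_st(n_s, m_t):
    return math.exp(-K * n_s * m_t / xi_true)

r_sigma = w_ss(n, 1) / w_ss(n - 1, 1)  # spatial ratio, one lattice step
r_tau = w_st(n, 1) / w_st(n - 1, 1)    # temporal ratio, one lattice step

xi_est = math.log(r_sigma) / math.log(r_tau)
print(round(xi_est, 2))  # recovers xi_true
```

In practice the ratios are fitted over a range of loop sizes, precisely because the small loops are contaminated by short-distance artifacts (cf. the large $\chi^2$ of $R_2$ in Table I).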
\noindent {\it Deconfining transition:} Once the calibration has been performed at $T=0$ it is assumed that $\xi$
does not depend on the lattice size, and therefore we can increase the
temperature by reducing $N_{\tau}$.
\vskip0.5cm
\hbox to \hsize{\hfil\vbox{\offinterlineskip
\halign{&\vrule#&\ $#{\vrule height 13pt depth 1.4pt width 0pt}$\hfil\ \cr
\noalign{\hrule}
&action&&R_2&&R_3&&R_4&&R_5&&\cr
\noalign{\hrule}
&(1):\ \xi&&4.02(6)&&3.90(2)&&3.89(2)&&3.93(4)&&\cr
&\chi^2/d.f.&&86&&2&&.4&&.2&&\cr
\noalign{\hrule}
&(2):\ \xi&&3.52(3)&&3.44(1)&&3.43(2)&&3.42(2)&&\cr
&\chi^2/d.f.&&33&&1&&.3&&.2&&\cr
\noalign{\hrule}
&(3):\ \xi&&3.51(1)&&3.46(2)&&3.46(3)&&3.45(4)&&\cr
&\chi^2/d.f.&&36&&10&&3&&1.3&&\cr
\noalign{\hrule}}}\hfil}
\vskip3mm
{\narrower{\noindent Table I: Cut off anisotropy for $SU(2)$.}\par}
\vskip5mm
\noindent In Fig. 1 we show the Polyakov loop
susceptibility as a function of $\gamma$ for three lattice lengths $N_{\tau}=20,18,16$ at $\beta=5.68$ (pure $SU(3)$ theory
with Wilson action, $N_{\sigma}=8$).
\begin{figure}[htb]
\vspace{3.34cm}
\special{psfile=fig1.ps voffset=-85 hoffset=-22 vscale=31.0 hscale=36.0}
\caption{Polyakov loop susceptibility $\chi$ {\it vs} $\gamma$.}
\label{fig 1}
\end{figure}
\noindent {\it Mesonic correlators near $T_c$:} Methods for projecting
onto the low energy part must be carefully defined, since a simple reweighting
of the spectral function may deform a wide peak (while it would not change a
pole). Using the quark propagators $S$ we measure the ``wave function"
\begin{eqnarray}
F^{\Gamma}({\bf P},x,t) = \sum_{\bf z}\hbox{e}^{i{\bf Pz}}
\sum_{{\bf y_1},{\bf y_2}} w({\bf y_1},{\bf y_2}) \times \nonumber \\
\langle Tr \left[\Gamma S({\bf y_1},0; {\bf z}, t)
\Gamma S({\bf y_2},0; {\bf z}+x, t)\right]\rangle
\end{eqnarray}
\noindent Here $\Gamma$ defines the channel ($\pi,\ \rho$) and
$ w({\bf y_1},{\bf y_2})$ is the source, to be determined iteratively,
aimed at optimizing the signal of the ground state. Pure $SU(3)$,
$12^3\times N_{\tau}$ lattices at $\beta=5.68$ and $\gamma=4$
are used (Wilson action). At
this $\gamma$ one finds $T\simeq 0.93 T_c,\ 1.03 T_c$ and $1.15 T_c$
with $N_{\tau}=20,\ 18$ and $16$, respectively (see Fig. 1).
The calibration was done with $N_{\tau}=72$ ($T \simeq 0$)
and yielded $\xi\simeq 5.9$ from Wilson loops.
On this $T \simeq 0$ lattice we measured also quenched
pion propagators in space and time
directions. They are found to show
the same cut off anisotropy $\xi=5.9$
for $\gamma_F \equiv \kappa_{\tau} / \kappa_{\sigma} = 5.4$ in the fermionic action for Wilson quarks ($\kappa$: the hopping parameter). About 20 configurations at $N_{\tau}=20$
and $18$ have been analyzed, $N_{\tau}=16$ is under way. We pursue a number of strategies:\par
\noindent (a) Using $F^{\Gamma}({\bf 0},x,t)$ from a simple source like point (``$pp$": $w({\bf y_1},{\bf y_2})= \delta_{y_1 0}\delta_{y_2 0}$) or wall (``$ww$":
$w({\bf y_1},{\bf y_2})= 1$) we fit an ansatz with three poles
corresponding to the ground state exp$(-ax^p)$ and two radial excitations \cite{OSA}.
We obtain in this way the wave function parameters $a,\ p$ and a first
estimation of the lowest mass. These $a,\ p$ are then used to project onto the
ground state at the sink.\par
\noindent (b) Using the same $a,\ p$ a new, shell model type source is constructed
by smearing one (``$ep$") or both (``$ee$") quark propagators with exp$(-ax^p)$.\par
\noindent (c) ``Effective" masses are extracted by fitting a cosh around each
t for $F^{\Gamma}({\bf 0},0,t)$ for the various sources and sinks.\par
\noindent (d) We make an analysis of $F^{\Gamma}({\bf 0},0,t)$ corresponding to binning of
the spectral function, again using the various sources and sinks and
checking the stability of the low energy structure.\par
\noindent (e) We analyze $F^{\Gamma}({\bf P},0,t)$ for the dispersion law.
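Strategy (c) has a simple closed-form check: for a single-state correlator the local $\cosh$ mass is $t$-independent, i.e.\ a flat plateau. A minimal sketch on noiseless toy data (the parameters are illustrative only):

```python
import math

# Local cosh effective mass, as in strategy (c): for a single-state
# correlator C(t) = A cosh(m (t - N/2)), the identity
#   C(t-1) + C(t+1) = 2 C(t) cosh(m)
# means the mass solving (C(t-1) + C(t+1)) / (2 C(t)) = cosh(m_eff)
# reproduces m at every t -- a flat plateau (toy, noiseless data;
# A, m, N below are illustrative).
A, m, N = 2.0, 0.25, 24

def corr(t):
    return A * math.cosh(m * (t - N / 2))

def effective_mass(t):
    return math.acosh((corr(t - 1) + corr(t + 1)) / (2.0 * corr(t)))

plateau = [effective_mass(t) for t in range(1, N)]
```

With real data, excited states and a finite width deform the plateau, which is exactly why the source dependence seen in Fig.\ 2 is informative.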
This program is presently in work. Partial results from analyses on
smaller lattices have been presented before \cite{OSA}. Now we show in Fig. 2
the effective mass plots for various sources and the wave functions
$F^{\Gamma}({\bf 0},x,t)$ for the ``$ep$" source. There is a strong dependence of the effective mass on the sources, even in the region where it
seems to saturate. This signals a low energy structure of
significant width. A second observation is that there seems to be little change, both in the effective mass and in the
wave function, inside few percents around
$T_c$. Typical wave function parameters are $a=0.4,\ p=1.35$. However, before interpreting these results we want to do the
full analysis, also of further quantities such as the scalar propagator
and the condensate, and at a higher temperature $N_{\tau}=16$.
Also a signal from free quark propagation, again a wide structure in the spectrum, may show up above $T_c$.
\begin{figure}[htb]
\vspace{7.45cm}
\special{psfile=fig2a.ps voffset= 50 hoffset=-26 vscale=31.0 hscale=36.0}
\special{psfile=fig2b.ps voffset= 50 hoffset=79 vscale=31.0 hscale=36.0}
\special{psfile=fig2c.ps voffset=-70 hoffset=-26 vscale=31.0 hscale=36.0}
\special{psfile=fig2d.ps voffset=-70 hoffset=79 vscale=31.0 hscale=36.0}
\caption{Pseudo-scalar effective mass {\it vs} $t$ (upper plots:
``$pp$" circles,
``$ep$" diamonds, ``$ee$" squares, ``$ww$" crosses) and ``$ep$" wave function {\it vs} x, $t=1,3,5,7,10(9)$ (diamonds, bars, squares, crosses, hexagons: lower plots), at
$N_{\tau}=20\ (0.93T_c)$ and $N_{\tau} = 18\ (1.03T_c)$.
No reliable estimation for the effective mass errors is available; they may
be compatible with the ``$ep$" - ``$ee$" difference.}
\label{fig 2}
\end{figure}
{\bf Acknowledgments}: We are indebted to Fujitsu Ltd. for offering us
computing facilities. IOS thankfully acknowledges support from DFG.
Q: How can I load the images with reduced quality/resolution? I'm using the following script to turn image links into images on Excel sheet. It works great however I have to pull about 1000 images on a spreadsheet for a catalog. When that many images are on the workbook it lags so much even though the images appear tiny (but with original quality). So I'm wondering if anyone can help me adjust my script so the image quality or resolution is reduced. So basically turn a full size image into a small thumbnail that won't take up too much to load. Please let me adjust the resolution so I can test it myself. Here's my code. Appreciate any help!
Option Explicit
Dim rng As Range
Dim cell As Range
Dim Filename As String
Sub URLPictureInsert()
Dim theShape As Shape
Dim xRg As Range
Dim xCol As Long
On Error Resume Next
Application.ScreenUpdating = False
Set rng = ActiveSheet.Range("A1:B500")
For Each cell In rng
Filename = cell
If InStr(UCase(Filename), "JPG") > 0 Then
ActiveSheet.Pictures.Insert(Filename).Select
Set theShape = Selection.ShapeRange.Item(1)
If theShape Is Nothing Then GoTo isnill
xCol = cell.Column + 1
Set xRg = Cells(cell.Row, xCol)
With theShape
.LockAspectRatio = msoFalse
.Width = 100
.Height = cell.Height
.Top = cell.Top + 1
.Left = cell.Left + 1
End With
isnill:
Set theShape = Nothing
Range("A1").Select
End If
Next
Application.ScreenUpdating = True
Debug.Print "Done " & Now
End Sub
Wilfred of Ivanhoe
Religion: Christian (medieval)
Name: Ivanhoe
Alter Ego: Wilfred of Ivanhoe
Other Names: Sir Wilfred of Ivanhoe; Desdichado; Disinherited One; Sir Ivanhoe of Rotherwood
Classification: hero
Publisher(s): Archibald Constable and Company
First Appearance: Ivanhoe (1820)
First Appearance (Additional Details): (comics) New Fun #1 (Feb. 1935): "Episode 1"
Creators: Sir Walter Scott, Lauderdale Maitland (actor)
Number of Appearances: 111
Comic Book Appearances: 49
TV, Film Appearances: 60
Prose/Text Book/Story Appearances: 1
Enemy of: Brian de Bois-Guilbert
Romantic Interest: Rowena
Family/Relatives: Cedric the Saxon (father), Rowena (wife)
Occupation: aristocrat, knight
Nation: Scotland, United Kingdom
Birth Place: United Kingdom
This character is in the following 12 stories which have been indexed by this website:
The Big Book of Fun Comics
The Big Book of Fun Comics (Nov. 1935): "Episode 1" (lead character)
Ivanhoe (1820) (lead character)
More Fun #7 (Jan. 1936): "Episode 7" (lead character)
New Fun
New Fun #1 (Feb. 1935): "Episode 1" (lead character)
New Fun #2 (Mar. 1935): "Episode 2" (lead character)
New Fun #3 (Apr. 1935): "Episode 3" (lead character)
New Fun #4 (May 1935): "Episode 4" (lead character)
New Fun #5 (Aug. 1935): "Episode 5" (lead character)
New Fun #6 (Oct. 1935): "Episode 6" (lead character)
- http://en.wikipedia.org/wiki/Ivanhoe
- http://www.comicvine.com/ivanhoe/4005-54152/
- http://comicbookdb.com/character.php?ID=7405
- https://www.comics.org/issue/85/
extern crate serde;
extern crate serde_json;
use self::serde::ser::{Serialize, Serializer};
use self::serde::de::{Deserialize, Deserializer, Visitor, MapVisitor, SeqVisitor, Error};
use self::serde::de::value::{ValueDeserializer, SeqVisitorDeserializer, MapVisitorDeserializer};
use self::serde_json::Map;
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct Resource(pub Map<String, Value>);
#[derive(Debug, PartialEq, Deserialize)]
pub struct Keyword {
#[serde(skip_serializing_if="Option::is_none")]
pub ns: Option<String>,
pub name: String,
}
impl Serialize for Keyword {
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error>
where S: Serializer
{
let num_fields = if self.ns.is_some() { 3 } else { 2 };
let mut map = serializer.serialize_map(Some(num_fields)).unwrap();
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "kw"));
try!(serializer.serialize_map_key(&mut map, "name"));
try!(serializer.serialize_map_value(&mut map, &self.name));
match self.ns {
Some(ref v) => {
try!(serializer.serialize_map_key(&mut map, "ns"));
try!(serializer.serialize_map_value(&mut map, v));
}
None => {}
}
serializer.serialize_map_end(map)
}
}
#[derive(Debug, PartialEq, Deserialize)]
pub struct Number(pub String);
impl Serialize for Number {
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error>
where S: Serializer
{
let mut map = serializer.serialize_map(Some(2)).unwrap();
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "num"));
try!(serializer.serialize_map_key(&mut map, "val"));
try!(serializer.serialize_map_value(&mut map, &self.0));
serializer.serialize_map_end(map)
}
}
#[derive(Debug, PartialEq)]
pub enum MemberKey {
Keyword(Keyword),
Number(Number),
}
impl Serialize for MemberKey {
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error>
where S: Serializer
{
match self {
&MemberKey::Keyword(ref v) => Keyword::serialize(v, serializer),
&MemberKey::Number(ref v) => Number::serialize(v, serializer),
}
}
}
impl Deserialize for MemberKey {
fn deserialize<D>(deserializer: &mut D) -> Result<Self, D::Error>
where D: Deserializer
{
struct FieldVisitor;
impl Visitor for FieldVisitor {
type Value = MemberKey;
fn visit_map<V>(&mut self, mut visitor: V) -> Result<MemberKey, V::Error>
where V: MapVisitor
{
let mut v: Option<String> = None;
let mut t: Option<String> = None;
let mut ns: Option<String> = None;
while let Some(key) = try!(visitor.visit_key()) {
let key: String = key;
match &key as &str {
"name" => {
let val: String = try!(visitor.visit_value());
v = Some(val);
}
"type" => {
let val: String = try!(visitor.visit_value());
t = Some(val);
}
"ns" => {
let val: String = try!(visitor.visit_value());
ns = Some(val);
}
_ => {}
}
}
try!(visitor.end());
match t {
Some(ch) => {
match ch.as_str() {
"kw" => {
Ok(MemberKey::Keyword(Keyword {
name: v.unwrap(),
ns: ns,
}))
}
"num" => Ok(MemberKey::Number(Number(v.unwrap()))),
_ => panic!(),
}
}
None => panic!(),
}
}
}
deserializer.deserialize_struct_field(FieldVisitor)
}
}
#[derive(Debug, PartialEq, Serialize, Deserialize)]
pub struct Member {
pub key: MemberKey,
pub val: Pattern,
}
#[derive(Debug, PartialEq)]
pub enum Value {
Pattern(Pattern),
ComplexValue {
traits: Option<Vec<Member>>,
val: Option<Pattern>,
def: Option<i8>,
},
}
impl Serialize for Value {
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error>
where S: Serializer
{
match *self {
Value::Pattern(ref v) => {
match v {
&Pattern::Simple(ref v) => v.serialize(serializer),
&Pattern::Complex(ref v) => {
let mut map = try!(serializer.serialize_map(Some(1)));
try!(serializer.serialize_map_key(&mut map, "val"));
try!(serializer.serialize_map_value(&mut map, &v));
serializer.serialize_map_end(map)
}
}
}
Value::ComplexValue { ref val, ref traits, ref def } => {
let mut num_fields = 1;
if val.is_some() {
num_fields += 1
};
if def.is_some() {
num_fields += 1
};
let mut map = try!(serializer.serialize_map(Some(num_fields)));
if let &Some(ref v) = val {
try!(serializer.serialize_map_key(&mut map, "val"));
try!(serializer.serialize_map_value(&mut map, &v));
}
try!(serializer.serialize_map_key(&mut map, "traits"));
try!(serializer.serialize_map_value(&mut map, traits));
if let &Some(ref d) = def {
try!(serializer.serialize_map_key(&mut map, "def"));
try!(serializer.serialize_map_value(&mut map, d));
}
serializer.serialize_map_end(map)
}
}
}
}
impl Deserialize for Value {
fn deserialize<D>(deserializer: &mut D) -> Result<Self, D::Error>
where D: Deserializer
{
struct FieldVisitor;
impl Visitor for FieldVisitor {
type Value = Value;
fn visit_str<E>(&mut self, value: &str) -> Result<Self::Value, E>
where E: Error
{
let mut deserializer = value.into_deserializer();
Deserialize::deserialize(&mut deserializer).map(Value::Pattern)
}
fn visit_seq<V>(&mut self, visitor: V) -> Result<Self::Value, V::Error>
where V: SeqVisitor
{
let mut deserializer = SeqVisitorDeserializer::new(visitor);
Deserialize::deserialize(&mut deserializer).map(Value::Pattern)
}
fn visit_map<V>(&mut self, mut visitor: V) -> Result<Self::Value, V::Error>
where V: MapVisitor
{
let mut val: Option<Pattern> = None;
let mut traits: Option<Vec<Member>> = None;
let mut def: Option<i8> = None;
while let Some(key) = try!(visitor.visit_key()) {
let key: String = key;
match &key as &str {
"val" => {
val = Some(visitor.visit_value()?);
}
"traits" => {
traits = Some(visitor.visit_value()?);
}
"def" => {
def = Some(visitor.visit_value()?);
}
_ => {}
}
}
try!(visitor.end());
Ok(Value::ComplexValue {
val: val,
traits: traits,
def: def,
})
}
}
deserializer.deserialize_struct_field(FieldVisitor)
}
}
#[derive(Debug, PartialEq)]
pub enum Expression {
ExternalArgument(String),
EntityReference(String),
Number(Number),
CallExpression {
name: Box<Expression>,
args: Vec<Expression>,
},
SelectExpression {
exp: Box<Expression>,
vars: Vec<Member>,
def: Option<i8>,
},
KeyValueArgument { name: String, val: Box<Expression> },
Member {
obj: Box<Expression>,
key: MemberKey,
},
FunctionCall(String),
Pattern(String),
}
impl Serialize for Expression {
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error>
where S: Serializer
{
match self {
&Expression::ExternalArgument(ref name) => {
let mut map = try!(serializer.serialize_map(Some(2)));
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "ext"));
try!(serializer.serialize_map_key(&mut map, "name"));
try!(serializer.serialize_map_value(&mut map, name));
serializer.serialize_map_end(map)
}
&Expression::EntityReference(ref name) => {
let mut map = try!(serializer.serialize_map(Some(2)));
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "ref"));
try!(serializer.serialize_map_key(&mut map, "name"));
try!(serializer.serialize_map_value(&mut map, name));
serializer.serialize_map_end(map)
}
&Expression::Number(ref val) => Number::serialize(val, serializer),
&Expression::SelectExpression { ref exp, ref vars, .. } => {
let mut map = try!(serializer.serialize_map(Some(3)));
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "sel"));
try!(serializer.serialize_map_key(&mut map, "exp"));
try!(serializer.serialize_map_value(&mut map, exp));
try!(serializer.serialize_map_key(&mut map, "vars"));
try!(serializer.serialize_map_value(&mut map, vars));
serializer.serialize_map_end(map)
}
&Expression::CallExpression { ref name, ref args } => {
let mut map = try!(serializer.serialize_map(Some(3)));
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "call"));
try!(serializer.serialize_map_key(&mut map, "name"));
try!(serializer.serialize_map_value(&mut map, name));
try!(serializer.serialize_map_key(&mut map, "args"));
try!(serializer.serialize_map_value(&mut map, args));
serializer.serialize_map_end(map)
}
&Expression::KeyValueArgument { ref name, ref val } => {
let mut map = try!(serializer.serialize_map(Some(3)));
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "kv"));
try!(serializer.serialize_map_key(&mut map, "name"));
try!(serializer.serialize_map_value(&mut map, name));
try!(serializer.serialize_map_key(&mut map, "val"));
try!(serializer.serialize_map_value(&mut map, val));
serializer.serialize_map_end(map)
}
&Expression::Member { ref obj, ref key } => {
let mut map = try!(serializer.serialize_map(Some(3)));
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "mem"));
try!(serializer.serialize_map_key(&mut map, "obj"));
try!(serializer.serialize_map_value(&mut map, obj));
try!(serializer.serialize_map_key(&mut map, "key"));
try!(serializer.serialize_map_value(&mut map, key));
serializer.serialize_map_end(map)
}
&Expression::FunctionCall(ref name) => {
let mut map = try!(serializer.serialize_map(Some(2)));
try!(serializer.serialize_map_key(&mut map, "type"));
try!(serializer.serialize_map_value(&mut map, "fun"));
try!(serializer.serialize_map_key(&mut map, "name"));
try!(serializer.serialize_map_value(&mut map, name));
serializer.serialize_map_end(map)
}
&Expression::Pattern(ref val) => serializer.serialize_str(val),
}
}
}
enum ExpressionName {
String(String),
Expression(Expression),
}
impl Deserialize for ExpressionName {
fn deserialize<D>(deserializer: &mut D) -> Result<Self, D::Error>
where D: Deserializer
{
struct FieldVisitor;
impl Visitor for FieldVisitor {
type Value = ExpressionName;
fn visit_str<E>(&mut self, v: &str) -> Result<Self::Value, E>
where E: Error
{
let mut deserializer = v.into_deserializer();
Deserialize::deserialize(&mut deserializer).map(ExpressionName::String)
}
fn visit_map<V>(&mut self, visitor: V) -> Result<Self::Value, V::Error>
where V: MapVisitor
{
let mut deserializer = MapVisitorDeserializer::new(visitor);
Deserialize::deserialize(&mut deserializer).map(ExpressionName::Expression)
}
}
deserializer.deserialize_struct_field(FieldVisitor)
}
}
enum ExpressionValue {
String(String),
Expression(Expression),
}
impl Deserialize for ExpressionValue {
fn deserialize<D>(deserializer: &mut D) -> Result<Self, D::Error>
where D: Deserializer
{
struct FieldVisitor;
impl Visitor for FieldVisitor {
type Value = ExpressionValue;
fn visit_str<E>(&mut self, v: &str) -> Result<Self::Value, E>
where E: Error
{
let mut deserializer = v.into_deserializer();
Deserialize::deserialize(&mut deserializer).map(ExpressionValue::String)
}
fn visit_map<V>(&mut self, visitor: V) -> Result<Self::Value, V::Error>
where V: MapVisitor
{
let mut deserializer = MapVisitorDeserializer::new(visitor);
Deserialize::deserialize(&mut deserializer).map(ExpressionValue::Expression)
}
}
deserializer.deserialize_struct_field(FieldVisitor)
}
}
impl Deserialize for Expression {
fn deserialize<D>(deserializer: &mut D) -> Result<Self, D::Error>
where D: Deserializer
{
struct FieldVisitor;
impl Visitor for FieldVisitor {
type Value = Expression;
fn visit_map<V>(&mut self, mut visitor: V) -> Result<Self::Value, V::Error>
where V: MapVisitor
{
let mut name: Option<ExpressionName> = None;
let mut t: Option<String> = None;
let mut exp: Option<Expression> = None;
let mut vars: Option<Vec<Member>> = None;
let mut args: Option<Vec<Expression>> = None;
let mut val: Option<ExpressionValue> = None;
let mut obj: Option<Box<Expression>> = None;
let mut k: Option<MemberKey> = None;
while let Some(key) = visitor.visit_key::<String>()? {
match key.as_str() {
"type" => t = Some(visitor.visit_value()?),
"name" => name = Some(visitor.visit_value()?),
"exp" => exp = Some(visitor.visit_value()?),
"vars" => vars = Some(visitor.visit_value()?),
"args" => args = Some(visitor.visit_value()?),
"val" => val = Some(visitor.visit_value()?),
"obj" => obj = Some(visitor.visit_value()?),
"key" => k = Some(visitor.visit_value()?),
_ => {}
}
}
visitor.end()?;
let t = match t {
Some(t) => t,
None => visitor.missing_field("type")?,
};
match t.as_str() {
"ext" => {
let name = match name {
Some(name) => {
match name {
ExpressionName::String(v) => v,
ExpressionName::Expression(_) => return Err(Error::custom("\"name\" must be a string for \"ext\"")),
}
}
None => visitor.missing_field("name")?,
};
Ok(Expression::ExternalArgument(name))
}
"ref" => {
let name = match name {
Some(name) => {
match name {
ExpressionName::String(v) => v,
ExpressionName::Expression(_) => return Err(Error::custom("\"name\" must be a string for \"ref\"")),
}
}
None => visitor.missing_field("name")?,
};
Ok(Expression::EntityReference(name))
}
"sel" => {
let exp = match exp {
Some(exp) => exp,
None => visitor.missing_field("exp")?,
};
let vars = match vars {
Some(vars) => vars,
None => visitor.missing_field("vars")?,
};
Ok(Expression::SelectExpression {
exp: Box::new(exp),
vars: vars,
def: None,
})
}
"call" => {
let name = match name {
Some(name) => {
match name {
ExpressionName::Expression(v) => v,
ExpressionName::String(_) => return Err(Error::custom("\"name\" must be an expression for \"call\"")),
}
}
None => visitor.missing_field("name")?,
};
let args = match args {
Some(args) => args,
None => visitor.missing_field("args")?,
};
Ok(Expression::CallExpression {
name: Box::new(name),
args: args,
})
}
"fun" => {
let name = match name {
Some(name) => {
match name {
ExpressionName::String(v) => v,
ExpressionName::Expression(_) => return Err(Error::custom("\"name\" must be a string for \"fun\"")),
}
}
None => visitor.missing_field("name")?,
};
Ok(Expression::FunctionCall(name))
}
"num" => {
let val = match val {
Some(val) => {
match val {
ExpressionValue::String(v) => v,
ExpressionValue::Expression(_) => return Err(Error::custom("\"val\" must be a string for \"num\"")),
}
}
None => visitor.missing_field("val")?,
};
Ok(Expression::Number(Number(val)))
}
"kv" => {
let name = match name {
Some(name) => {
match name {
ExpressionName::String(v) => v,
ExpressionName::Expression(_) => return Err(Error::custom("\"name\" must be a string for \"kv\"")),
}
}
None => visitor.missing_field("name")?,
};
let val = match val {
Some(val) => {
match val {
ExpressionValue::Expression(v) => v,
ExpressionValue::String(v) => Expression::Pattern(v),
}
}
None => visitor.missing_field("val")?,
};
Ok(Expression::KeyValueArgument {
name: name,
val: Box::new(val),
})
}
"mem" => {
let obj = match obj {
Some(obj) => obj,
None => visitor.missing_field("obj")?,
};
let k = match k {
Some(k) => k,
None => visitor.missing_field("key")?,
};
Ok(Expression::Member { obj: obj, key: k })
}
_ => Err(Error::custom(format!("unknown expression type: {}", t))),
}
}
}
deserializer.deserialize_struct_field(FieldVisitor)
}
}
#[derive(Debug, PartialEq)]
pub enum PatternElement {
TextElement(String),
PlaceableElement(Vec<Expression>),
}
impl Serialize for PatternElement {
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error>
where S: Serializer
{
match self {
&PatternElement::TextElement(ref v) => serializer.serialize_str(v),
&PatternElement::PlaceableElement(ref v) => {
let mut state = try!(serializer.serialize_seq(Some(v.len())));
for e in v {
try!(serializer.serialize_seq_elt(&mut state, e));
}
serializer.serialize_seq_end(state)
}
}
}
}
impl Deserialize for PatternElement {
fn deserialize<D>(deserializer: &mut D) -> Result<Self, D::Error>
where D: Deserializer
{
struct FieldVisitor;
impl Visitor for FieldVisitor {
type Value = PatternElement;
fn visit_str<E>(&mut self, v: &str) -> Result<Self::Value, E>
where E: Error
{
Ok(PatternElement::TextElement(v.to_owned()))
}
fn visit_seq<V>(&mut self, visitor: V) -> Result<Self::Value, V::Error>
where V: SeqVisitor
{
let mut deserializer = SeqVisitorDeserializer::new(visitor);
Deserialize::deserialize(&mut deserializer).map(PatternElement::PlaceableElement)
}
}
deserializer.deserialize_struct_field(FieldVisitor)
}
}
#[derive(Debug, PartialEq)]
pub enum Pattern {
Simple(String),
Complex(Vec<PatternElement>),
}
impl Serialize for Pattern {
fn serialize<S>(&self, serializer: &mut S) -> Result<(), S::Error>
where S: Serializer
{
match self {
&Pattern::Simple(ref v) => serializer.serialize_str(v),
&Pattern::Complex(ref v) => {
let mut state = try!(serializer.serialize_seq(Some(v.len())));
for e in v {
try!(serializer.serialize_seq_elt(&mut state, e));
}
serializer.serialize_seq_end(state)
}
}
}
}
impl Deserialize for Pattern {
fn deserialize<D>(deserializer: &mut D) -> Result<Self, D::Error>
where D: Deserializer
{
struct FieldVisitor;
impl Visitor for FieldVisitor {
type Value = Pattern;
fn visit_str<E>(&mut self, v: &str) -> Result<Self::Value, E>
where E: Error
{
Ok(Pattern::Simple(v.to_owned()))
}
fn visit_seq<V>(&mut self, visitor: V) -> Result<Self::Value, V::Error>
where V: SeqVisitor
{
let mut deserializer = SeqVisitorDeserializer::new(visitor);
Deserialize::deserialize(&mut deserializer).map(Pattern::Complex)
}
}
deserializer.deserialize_struct_field(FieldVisitor)
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,960 |
Those both sound great and wouldn't that be horrible not to know if your memories were real or imagined? And then to be locked up in a mental hospital!! Excellent review on these!
Great review. I'll have to look for these, I'm such a fantasy fan. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,259 |
<?php $__env->startSection('content'); ?>
<div class="mainbody container-fluid">
<div class="row">
<?php echo $__env->make('partials.breadcrumbs', ['icon' => 'user', 'breadcrumb' => link_to('user', 'Users', ['style' => 'color:#4f9fcf']) . ' / ' . $user->username], array_except(get_defined_vars(), array('__data', '__path')))->render(); ?>
<?php echo e(Form::hidden('username', $user->username)); ?>
<div class="col-lg-3 col-md-3">
<div class="panel panel-default">
<div class="panel-body">
<div class="media">
<div align="center">
<div style="background-image: url('<?php echo e($user->profile_photo_path ? $user->ProfilePhotoBase64() :
URL::asset($user->gender == 'male' ? 'images/male.png' : 'images/female1.jpg')); ?>');"
class="avatar picture">
</div>
</div>
<div class="media-body">
<hr>
<h3><strong>Bio</strong></h3>
<?php echo nl2br(e($user->bio)); ?>
<hr>
<h3><strong>Location</strong></h3>
<p>Earth</p>
<hr>
<h3><strong>Gender</strong></h3>
<p><?php echo e(ucfirst($user->gender)); ?></p>
<hr>
<h3><strong>Birthday</strong></h3>
<p><?php echo e($user->dob); ?></p>
</div>
</div>
</div>
</div>
</div>
<div class="col-lg-9 col-md-9 col-sm-12 col-xs-12">
<div class="panel panel-default">
<div class="panel-body">
<span>
<h1 class="panel-title pull-left" style="font-size:30px;"><?php echo e($user->name); ?></h1>
<div class="dropdown pull-right">
<?php if(Auth::user()->username != $user->username): ?>
<?php if($is_following): ?>
<?php echo delete_form(['user.{user}.requests.destroy', $user->username, encrypt($user->id)], 'Unfollow', 'btn btn-danger hidden-xs'); ?>
<?php else: ?>
<?php echo e(Form::open(['route' => ['user.{user}.requests.store', $user->username], 'name' => 'user_form', 'id' => 'user_form'])); ?>
<?php echo e(Form::button('Follow',['type' => 'submit','class' => 'btn btn-success hidden-xs'])); ?>
<?php echo e(Form::close()); ?>
<?php endif; ?>
<?php endif; ?>
</div>
</span>
<br><br><hr>
<span class="pull-left">
<a href="#posts" class="btn btn-link" style="text-decoration:none;">
<i class="fa fa-weixin fa-lg" aria-hidden="true"></i> Posts <span class="badge"><?php echo e($postsCount); ?></span>
</a>
<a data-toggle='modal' data-target='#followingModal' onclick="setType('following');" class="btn btn-link" style="text-decoration:none;">
<i class="fa fa-fw fa-users" aria-hidden="true"></i> Following <span class="badge"><?php echo e($followingCount); ?></span>
</a>
<a data-toggle='modal' data-target='#followingModal' onclick="setType('followers');" class="btn btn-link" style="text-decoration:none;">
<i class="fa fa-pied-piper-alt" aria-hidden="true"></i> Followers <span class="badge"><?php echo e($followersCount); ?></span>
</a>
</span>
<?php /*<span class="pull-right">*/ ?>
<?php /*<a href="#" class="btn btn-link" style="text-decoration:none;"><i class="fa fa-lg fa-at" aria-hidden="true" data-toggle="tooltip" data-placement="bottom" title="Mention"></i></a>*/ ?>
<?php /*<a href="#" class="btn btn-link" style="text-decoration:none;"><i class="fa fa-lg fa-envelope-o" aria-hidden="true" data-toggle="tooltip" data-placement="bottom" title="Message"></i></a>*/ ?>
<?php /*<a href="#" class="btn btn-link" style="text-decoration:none;"><i class="fa fa-lg fa-ban" aria-hidden="true" data-toggle="tooltip" data-placement="bottom" title="Ignore"></i></a>*/ ?>
<?php /*</span>*/ ?>
</div>
</div>
<hr>
<div class="panel panel-default">
<div class="panel-body">
<h3><strong>New Post</strong></h3>
<?php if(Auth::user()->username == $user->username || $is_following): ?>
<?php echo e(Form::open(['route' => ['user.{user}.post.store', $user->username, $user], 'name' => 'user_form', 'id' => 'user_form'])); ?>
<?php echo e(Form::bsTextArea('post', null, ['maxlength' => '255', 'rows' => '2', 'placeholder' => 'Leave a comment...'])); ?>
<?php echo e(Form::bsButtonRight('<span class="glyphicon glyphicon-send"></span> Post', ['type' => 'submit', 'class' => 'btn btn-default'])); ?>
<?php echo e(Form::close()); ?>
<?php endif; ?>
</div>
</div>
<div id="posts">
<?php echo $__env->make('post.single', ['user' => $user, 'posts' => $posts], array_except(get_defined_vars(), array('__data', '__path')))->render(); ?>
</div>
</div>
</div>
</div>
<?php echo $__env->make('modal-box.followers-modal', ['username' => $user->username], array_except(get_defined_vars(), array('__data', '__path')))->render(); ?>
<?php echo $__env->make('modal-box.delete-modal', ['username' => $user->username], array_except(get_defined_vars(), array('__data', '__path')))->render(); ?>
<?php $__env->stopSection(); ?>
<?php $__env->startSection('javascript_bottom'); ?>
<script src='<?php echo e(URL::asset('js/comments.js')); ?>'></script>
<?php $__env->stopSection(); ?>
<?php echo $__env->make('layouts.app', array_except(get_defined_vars(), array('__data', '__path')))->render(); ?> | {
"redpajama_set_name": "RedPajamaGithub"
} | 8,137 |
La svolta is a single by the Italian rapper Carl Brave, released on 15 April 2022.
Description
The single is part of the soundtrack of the film of the same name directed by Riccardo Antonaroli.
Music video
The music video, directed by Roberto Cinardi, was published on 20 April 2022 on the singer's YouTube channel.
Tracks
Notes
External links | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,152 |