{"url":"https:\/\/mattermodeling.stackexchange.com\/questions\/6588\/how-to-do-a-proper-relaxation-of-the-multicomponent-structure","text":"# How to do a proper relaxation of a multicomponent structure?\n\nI have a beta-Ti (space group 229) structure with 16 atoms, and I prepared 15 disordered TiNb structures, each with 5 Nb atoms, using VASPKIT. However, before the geometry optimization I found out that the symmetry of all the multicomponent structures had decreased to monoclinic. Is that OK?\n\nMoreover, is ISIF=2 the right choice for the relaxation in VASP? If the symmetry has decreased, how can I calculate the correct elastic constants for my cubic TiNb structure? I am a bit confused.\n\nThis is my POSCAR file for pure Ti:\n\n 1.00000000000000\n6.6399998665000002 0.0000000000000000 0.0000000000000000\n0.0000000000000000 6.6399998665000002 0.0000000000000000\n0.0000000000000000 0.0000000000000000 6.6399998665000002\nTi\n16\nDirect\n0.0000000000000000 0.0000000000000000 0.0000000000000000\n0.0000000000000000 0.0000000000000000 0.5000000000000000\n0.0000000000000000 0.5000000000000000 0.0000000000000000\n0.0000000000000000 0.5000000000000000 0.5000000000000000\n0.5000000000000000 0.0000000000000000 0.0000000000000000\n0.5000000000000000 0.0000000000000000 0.5000000000000000\n0.5000000000000000 0.5000000000000000 0.0000000000000000\n0.5000000000000000 0.5000000000000000 0.5000000000000000\n0.2500000000000000 0.2500000000000000 0.2500000000000000\n0.2500000000000000 0.2500000000000000 0.7500000000000000\n0.2500000000000000 0.7500000000000000 0.2500000000000000\n0.2500000000000000 0.7500000000000000 0.7500000000000000\n0.7500000000000000 0.2500000000000000 0.2500000000000000\n0.7500000000000000 0.2500000000000000 0.7500000000000000\n0.7500000000000000 0.7500000000000000 0.2500000000000000\n0.7500000000000000 0.7500000000000000 0.7500000000000000\n\nThis is one of the Nb-substituted Ti POSCAR files prepared by VASPKIT:\n\n This file is generated by VASPKIT code\n 6.6399998665000002 0.0000000000000000 0.0000000000000000\n0.0000000000000000 6.6399998665000002 0.0000000000000000\n0.0000000000000000 0.0000000000000000 6.6399998665000002\nNb Ti\n5 11\nDirect\n0.0000000000000000 0.5000000000000000 0.0000000000000000\n0.5000000000000000 0.5000000000000000 0.0000000000000000\n0.2500000000000000 0.2500000000000000 0.2500000000000000\n0.2500000000000000 0.2500000000000000 0.7500000000000000\n0.7500000000000000 0.2500000000000000 0.7500000000000000\n0.0000000000000000 0.0000000000000000 0.0000000000000000\n0.0000000000000000 0.0000000000000000 0.5000000000000000\n0.0000000000000000 0.5000000000000000 0.5000000000000000\n0.5000000000000000 0.0000000000000000 0.0000000000000000\n0.5000000000000000 0.0000000000000000 0.5000000000000000\n0.5000000000000000 0.5000000000000000 0.5000000000000000\n0.2500000000000000 0.7500000000000000 0.2500000000000000\n0.2500000000000000 0.7500000000000000 0.7500000000000000\n0.7500000000000000 0.2500000000000000 0.2500000000000000\n0.7500000000000000 0.7500000000000000 0.2500000000000000\n0.7500000000000000 0.7500000000000000 0.7500000000000000\n\n\u2022 I have a couple of questions. What is the composition of the alloy: is it TiNb (50% Nb) or Ti11Nb5? Why is your initial random structure monoclinic? Did you allow the box to change during random alloy generation? Aug 20 at 12:22\n\u2022 Is it monoclinic before relaxation or after relaxation? Aug 20 at 12:32\n\u2022 @pranavkumar, I used different compositions of TiNb (35, 40, 45, and 50% Nb). 
I did the relaxation of pure beta Ti with the 16 atoms and checked the symmetry; it was fine, because the system is stable. After that, I used the \"Advanced Structure Models\" feature of VASPKIT to prepare the substituted (TiNb) structures. (I don't know how to fix the symmetry during the substitution in VASPKIT; I think it is impossible.) Because this substitution broke the symmetry, it became lower without any calculation. Then I used ISIF=2 for the optimization. Aug 20 at 14:29\n\u2022 This is not the correct way to create a random disordered solid solution, as Brandon also mentioned in his answer. The best way is to use ATAT to create a special quasirandom structure (SQS). Use this link cniu.me\/2017\/08\/05\/SQS.html to constrain your geometry so the box size doesn't change. For further chat, use the VASP chat room chat.stackexchange.com\/rooms\/109983\/vasp Aug 20 at 15:30\n\u2022 I have created one random SQS for testing; kindly check chat.stackexchange.com\/rooms\/109983\/vasp Aug 20 at 15:50","date":"2021-10-16 03:32:57","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8979249000549316, \"perplexity\": 11046.776333090196}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323583408.93\/warc\/CC-MAIN-20211016013436-20211016043436-00719.warc.gz\"}"}
Q: What is wrong with code added to functions.php to selectively show styles based on login state

When I try to add the following code to functions.php, I keep getting errors (the site goes down, which seems to indicate a PHP error). I've stared at this code forever and can't figure out why it should cause an error when added to functions.php. Any suggestions for what I should change? Thanks!

    function hide_prompt() {
        if ( is_user_logged_in() ) {
            echo '<style> .app { display: none!important; } </style>';
        }
    }
    add_action( 'wp_footer', 'hide_prompt' );

A: You are using the wrong hook, and you are doing it the wrong way; sorry to be so direct. Echoing a stylesheet into the footer is poor style and a common beginner mistake. Have a look at how to load scripts and styles correctly. You need to add that style rule to the HTML head of your login page. Since you want to edit the login form, the correct hook to use is login_head.

See Codex: Login Hooks. Also see: Login Head.

Something like this (functions.php):

    function se_css_output_hide_prompt() { ?>
        <style type="text/css" id="se-answer-customized-css">
        <?php // switch to PHP to check the login status; add the CSS rule only when true
        if ( is_user_logged_in() ) : ?>
            .app { display: none!important; }
        <?php endif; ?>
        </style>
    <?php }
    add_action( 'login_head', 'se_css_output_hide_prompt' );

This CSS is added right before the closing of the HTML head, very close to your markup, so the !important is probably no longer necessary if you do it this way.
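For completeness, here is a minimal sketch of the enqueue-based route the answer alludes to ("load scripts and styles correctly"), for hiding the element on regular front-end pages rather than on the login screen. The handle name se-hide-app and the register-an-empty-handle trick are illustrative choices, not something from the original thread; the WordPress functions used (is_user_logged_in, wp_register_style, wp_enqueue_style, wp_add_inline_style) are standard core API:

    // functions.php: hide .app for logged-in visitors via the enqueue API.
    function se_enqueue_hide_app_css() {
        // Only add the rule for logged-in visitors.
        if ( ! is_user_logged_in() ) {
            return;
        }
        // Register an empty style handle (src = false) so an inline rule
        // can be attached to it; the handle name is arbitrary.
        wp_register_style( 'se-hide-app', false );
        wp_enqueue_style( 'se-hide-app' );
        wp_add_inline_style( 'se-hide-app', '.app { display: none; }' );
    }
    add_action( 'wp_enqueue_scripts', 'se_enqueue_hide_app_css' );

Because this prints the rule inside the head via the normal style pipeline, it loads in the proper order and the !important hack should not be needed.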
{ "redpajama_set_name": "RedPajamaStackExchange" }
\section{Introduction} \label{intro} Via spin-orbit coupling, an electric field or a temperature gradient applied to a two-dimensional electron gas can generate both spin currents and spin polarisations, even in the absence of magnetic fields. In particular, the generation of spin currents by an electric field or by a temperature gradient is known as the spin Hall effect and the spin Nernst effect, respectively. In fact, ``spintronics'' and ``spin caloritronics'' have been fast-growing fields in recent years, experimentally as well as theoretically. We investigate these phenomena by means of a generalized Boltzmann equation which takes into account spin-orbit coupling of both intrinsic and extrinsic origin \cite{gorini2010,raimondi2012}. The spin Hall and spin Nernst effects are illustrated in Figs.\ \ref{she} and \ref{spin-nernst}. \begin{figure}[b] \begin{center} \includegraphics[width=0.8\columnwidth]{figs/she.pdf} \caption{Illustration of the spin Hall effect in comparison with the Hall effect and the anomalous Hall effect. Here ${\bf j}_c$ denotes the charge current, and $\bf M$ the magnetization.} \label{she} \end{center} \end{figure} \begin{figure}[b] \begin{center} \includegraphics[width=0.6\columnwidth]{figs/spin-nernst.pdf} \caption{Illustration of the spin Nernst effect, with $\partial_x T$ denoting the spatial gradient of the temperature, and $j_y^z$ the spatial-y-component of the spin current with polarization in z-direction.} \label{spin-nernst} \end{center} \end{figure} For the thermally induced spin currents (transverse to the gradient) and polarisations, the interplay between intrinsic and extrinsic mechanisms is shown to be critical. The relation between spin currents and spin polarisations is non-trivially affected by the thermal gradient \cite{toelle2014}. It is argued that for room-temperature experiments the $T$-dependence of electron-phonon scattering dominates over scattering at (static) defects. For example, we find that the spin Hall conductivity is practically independent of temperature for $T$ above the Debye temperature \cite{gorini2015}. For details, see \cite{gorini2010,raimondi2012,toelle2014,gorini2015} and further references therein. Our focus in this note is on two-dimensional systems, though some expressions are easily generalized to three dimensions \cite{toelle2014}. In the next section, we briefly summarize the basic elements of the kinetic theory (Sec.\ \ref{kinetic}); then we present in Sec.\ \ref{thermo} the results for the spin thermoelectric transport coefficients. Section \ref{pss} is devoted to room-temperature phonon skew scattering. In the final section, Sec.\ \ref{sum}, we give a brief summary. \section{Kinetic theory} \label{kinetic} In order to set the stage, let us briefly discuss the standard model Hamiltonian for conduction electrons in a parabolic band \cite{shytov2006}: \begin{equation} \label{model1} \hat H_0^{\rm el} = \frac{p^2}{2m} - \frac{\alpha}{\hbar}\mbox{\boldmath $\sigma$}\times\hat{\bf z}\cdot{\bf p} + V_{\rm imp}({\bf r}) - \frac{\lambda^2}{4\hbar}\mbox{\boldmath $\sigma$}\times\nabla V_{\rm imp}({\bf r})\cdot{\bf p} \, . \end{equation} The static lattice potential $V_{\rm crys}({\bf r})$ does not appear explicitly here, since its effects have been incorporated in the effective mass $(m_0\rightarrow m)$ and the effective Compton wavelength $(\lambda_0\rightarrow\lambda)$ \cite{winklerbook,Handbook}. 
Above, ${\bf \hat z}$ is the unit vector pointing towards the metal-substrate interface, whereas ${\bf p}, {\bf r}$ can be either vectors in the $x$-$y$ plane for strictly 2D films, or also have a $z$-component for thicker, 3D systems. The second term on the r.h.s.\ is the Bychkov-Rashba \cite{bychkov1984} intrinsic spin-orbit coupling due to structure symmetry breaking, characterized by the coupling constant $\alpha$. We recall that the intrinsic band splitting due to this term, denoted by $\Delta$, is given by $2\alpha k_F$, where $k_F$ is the Fermi wavevector. The random impurity potential $V_{\rm imp}({\bf r})$ enters directly and through the fourth term, which represents the extrinsic spin-orbit interaction. In the strictly 2D limit the Hamiltonian \eqref{model1} was used to study the spin Hall \cite{hankiewicz2008,raimondi2009,raimondi2012,gorini2012} and Edelstein effect \cite{raimondi2012} in the presence of both intrinsic and extrinsic mechanisms at $T=0$. Such mechanisms were shown {\it not} to be simply additive, and their interplay leads to a nontrivial behavior \cite{raimondi2009,raimondi2012}. The free phonon part of the Hamiltonian, $H_0^{\rm ph}$, which is of the standard form \cite{agdbook,rammerbook1}, has to be added to \eqref{model1}, as well as the electron-phonon interaction. The latter, conveniently presented in second-quantized form, is given by (see, in particular, section 13 in \cite{agdbook}, and chapter 10.7 in \cite{rammerbook1}) \begin{equation} \label{model2} \hat H^{\rm el-ph} = g \int \mathrm{d}{\bf r} \, \hat \varphi ({\bf r}) \hat\psi^\dag_\sigma ({\bf r}) \hat\psi_\sigma ({\bf r}) \, , \end{equation} where summation over the spin index $\sigma$ is implied. Correspondingly, we have to include $g \hat \varphi ({\bf r})$ also in the (second-quantized form of the) last term on the r.h.s.\ of \eqref{model1}. The relevant self-energy diagrams are shown in Fig.\ \ref{diagrams}, l.h.s., for impurity scattering, and for electron-phonon scattering, r.h.s. (For the moment, skew scattering is not considered; see Sec.\ \ref{pss}.) It is to be expected that the latter dominates for high temperature, i.e., for $T$ above the Debye temperature $T_D$. The kinetic (Boltzmann-like) equation for the $2\times2$ distribution function $f_{{\bf p}}=f^0+\mbox{\boldmath $\sigma$} \cdot {\bf f}$, where $f^0$ and $\bf f$ are the charge and spin distribution functions, respectively, reads \cite{gorini2010,raimondi2012,toelle2014} \begin{equation} \label{boltzmann} \partial_t f_{{\bf p}} + \tilde{\nabla} \cdot \left[ \frac{{\bf p}}{m} f_{{\bf p}} + \Delta {\bm j}_{\rm sj} \right] + \frac{1}{2} \left\lbrace{{\boldsymbol {\mathcal F}}}\cdot{\nabla}_{{\bf p}} , f_{{\bf p}} \right\rbrace= I_{0}+I_{\rm sj}+I_{\rm EY} \, , \end{equation} where we introduced the covariant spatial derivative and the $SU(2)$ Lorentz force due to the Bychkov-Rashba spin-orbit coupling: \begin{eqnarray} \tilde{\nabla}{}&{}={}&{}\nabla+\frac{i}{\hbar}\left[ {\boldsymbol {\mathcal A}}^a\frac{\sigma^a}{2},\cdot \right] \, ,\\ {\boldsymbol {\mathcal F}}{}&{}={}&{}- \frac{{\bf p}}{m} \times \boldsymbol{\mathcal{B}}^a \frac{\sigma^a}{2} \, ,\\ \mathcal{B}_i^a{}&{}={}&{}-\frac{1}{2\hbar}\varepsilon_{ijk} \varepsilon^{abc}\mathcal{A}_j^b\mathcal{A}_k^c \, . \end{eqnarray} A summation over identical indices is implied unless stated otherwise. Note that an external magnetic field is not included in these equations. 
The term $\Delta {\bm j}_{\rm sj}$ in \eqref{boltzmann} is a correction to the current due to side-jumps, given by \begin{equation} \Delta {\bm j}_{\rm sj} = \frac{\lambda^2}{8\hbar \tau} \Big\langle \big\{ \left( {\bf p}'-{\bf p} \right) \times \mbox{\boldmath $\sigma$} , f_{{\bf p}'} \big\} \Big\rangle_{\hat{{\bf p}}'} \, , \label{SJcorr} \end{equation} where $\langle \dots \rangle_{\hat{{\bf p}}'}$ denotes the angular average. The collision operators are not explicitly presented here. We only note that $I_0$ contains the standard terms describing momentum relaxation due to electron-impurity and electron-phonon scattering; the total momentum relaxation rate is denoted by $1/\tau$. For $T \gtrsim T_D$ the latter is similar to the former, since in this limit electron-phonon scattering essentially is elastic, allowing for a simple addition of the corresponding rates (Matthiessen's rule). In the case of dominant electron-phonon scattering, the high-$T$ momentum relaxation rate therefore is given by \cite{zimanbook2,rammerbook1} \begin{equation} \frac{\hbar}{\tau} \simeq 2\pi \, (N_0 g^2) \, k_B T \; , \;\; T \gtrsim T_D \, , \end{equation} where $N_0$ denotes the density of states at the Fermi surface (per spin and volume). This high-$T$-expression is known to be a good approximation even below $T_D$ (see, e.g., chapter IX, {\S} 5 in \cite{zimanbook2}). The last two terms on the r.h.s.\ of \eqref{boltzmann}, $I_{\rm sj}$ and $I_{\rm EY}$, describe side-jump processes and Elliott-Yafet spin relaxation, respectively, cf.\ \cite{raimondi2012,toelle2014}; see also Fig.\ \ref{diagrams}. \begin{figure} \includegraphics[width=0.8\columnwidth]{figs/diagrams.pdf} \caption{Relevant self-energy contributions which determine the collision operators in the Boltzmann equation. The arrowed lines represent the electron Green's function in Keldysh space, a cross (dot) the impurity (electron-phonon) vertex. The wavy line denotes the phonon propagator, and a box around a vertex the spin-orbit coupling.} \label{diagrams} \end{figure} From the distribution functions, the relevant physical quantities can be calculated. Here we present only the expressions for the y-spin polarization and the z-polarized spin current flowing along the y-direction: \begin{equation} s^y=\int \frac{\mathrm{d} {\bf p}}{(2\pi\hbar)^2} f^y = \int \mathrm{d} \epsilon_{{\bf p}} N_0 \langle f^y \rangle \, , \label{spin} \end{equation} \begin{equation} j_y^z=\mathrm{Tr} \, \frac{\sigma^z}{2} \int \frac{\mathrm{d} {\bf p}}{(2\pi\hbar)^2}\left[\frac{p_y}{m} f_{{\bf p}}+ \frac{\lambda^2}{8\hbar\tau}\left\{ \left( {\bf p} \times \mbox{\boldmath $\sigma$} \right)_y , f_{{\bf p}} \right\} \right] \, . \label{spin-current} \end{equation} The second term on the r.h.s.\ of \eqref{spin-current} is due to side-jumps, cf.\ \eqref{boltzmann}. Due to the Bychkov-Rashba term, a non-trivial relation between $s^y$ and $j_y^z$ is found: \begin{equation} \partial_t s^y + \frac{2m\alpha}{\hbar} j_y^z = - \int \mathrm{d} \epsilon_{{\bf p}} \frac{N_0}{\tau_{s}} \langle f^y \rangle \, . \label{non-trivial} \end{equation} Here we introduced the Elliott-Yafet spin relaxation rate \cite{elliott1954,yafet63} (or spin flip rate, hence the subscript $s$): \begin{equation} \frac{1}{\tau_{s}} = \frac{1}{\tau} \cdot \left(\frac{\lambda k}{2} \right)^4 \, , \end{equation} where $k \simeq k_F$. 
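For later reference we note an immediate consequence of the two expressions above (a simple combination of them, not an additional input): for dominant electron-phonon scattering, inserting the high-$T$ momentum relaxation rate into the Elliott-Yafet rate gives \begin{equation} \frac{\hbar}{\tau_{s}} \simeq 2\pi \, (N_0 g^2) \, k_B T \left( \frac{\lambda k_F}{2} \right)^4 \; , \;\; T \gtrsim T_D \, , \end{equation} i.e., the spin-flip rate inherits the linear $T$-dependence of $1/\tau$, so that $\tau_{s} \sim T^{-1}$; this scaling is used in Sec.\ \ref{thermo}. 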
For an electric field the r.h.s.\ of \eqref{non-trivial} reduces to $-s^y/\tau_{s}$, but it does \emph{not} for a thermal gradient: in the latter case, the energy dependence of $1/\tau_s$ is found to be crucial \cite{toelle2014}. The spin relaxation due to intrinsic spin-orbit coupling, named after Dyakonov and Perel \cite{dyakonov1971}, is characterized by the following rate: \begin{equation} \label{dp-rate} \frac{1}{\tau_{\rm DP}} = \left( \frac{2m\alpha}{\hbar^2} \right)^2 D \, , \end{equation} where $D$ is the diffusion constant. This expression applies in the ``dirty'' limit, $\Delta \tau / \hbar \lesssim 1$. More generally, $D$ has to be replaced by $D/[1 + (\Delta \tau /\hbar)^2]$. Considering small variations of the temperature and a small electric field, the Boltzmann equation can be linearized, and solved for the non-equilibrium part of the distribution function. Integral expressions for the transport coefficients follow; selected results are discussed in the following sections. \section{Spin thermoelectric effects} \label{thermo} Efficient heat-to-spin conversion is the central goal of spin caloritronics \cite{bauer2012}. When considering metallic systems, two phenomena stand out in this field: the spin Nernst effect \cite{tauber2012,borge2013} and thermally-induced spin polarizations \cite{pang2010,dyrdal2013}. They consist in the generation of, respectively, a spin current or a spin polarization transverse to an applied temperature gradient. Note that in \cite{toelle2014}, and in this section, skew scattering is not taken into account. We note, in particular, that the spin Nernst conductivity $\sigma^{\rm sN}$ was recently investigated on the basis of ab initio methods \cite{tauber2012}, predicting a linear $T$-dependence. As mentioned above, we consider the Boltzmann equation linearized in the temperature gradient and the electric field. Hence the ``drive term'' is proportional to \begin{equation} \partial_x f^{\rm eq}= \left( \frac{\epsilon_{{\bf p}}-\epsilon_F}{T} \partial_x T + eE_x \right) \left( - \frac{\partial f^{\rm eq}}{\partial \epsilon_{{\bf p}}} \right) \, , \end{equation} where $f^{\rm eq}$ is the Fermi function. The transport coefficients of interest are defined as follows: \begin{eqnarray} s^y {}&{} = {}&{} P_{\rm sE} E_x + P_{\rm sT} \partial_x T \, , \\ j^z_y{}&{} = {}&{} \sigma_{\rm sE} E_x + \sigma_{\rm sT} \partial_x T \, . \end{eqnarray} Here we have chosen a symmetric notation with respect to the subscripts ``sE'' and ``sT''; of course, $\sigma_{\rm sE}$, usually denoted as $\sigma^{\rm sH}$, is the spin Hall conductivity. We obtain the following results (within the Sommerfeld expansion, $k_B T \ll \epsilon_F$; see \cite{toelle2014}): for the Edelstein polarization coefficient we find \begin{equation} P_{\rm sE} = -\frac{2m\alpha}{\hbar^2} \tau_{s} \cdot \sigma^{\rm sH} \, , \end{equation} while the spin Hall conductivity is given by \begin{equation} \sigma^{\rm sH}=\frac{1}{1 + \tau_{s}/\tau_{\rm DP}} \left(\sigma_{\rm int}^{\rm sH}+\sigma_{\rm sj}^{\rm sH}\right) \, ; \end{equation} furthermore, \begin{equation} P_{\rm sT} = -S_0 \epsilon_F [{P_{\rm sE}}({\cal E})]^\prime_{\epsilon_F} \; , \;\; \sigma_{\rm sT} = -S_0\epsilon_F [{\sigma^{\rm sH}}({\cal E})]^\prime_{\epsilon_F} \, . \end{equation} Here $S_0 = - (\pi^2 k_B/3e) k_B T \, [\ln \sigma ({\cal E})]^\prime_{\epsilon_F}$ is the standard expression for the thermopower, and the prime denotes differentiation with respect to energy; cf.\ chapter 7.9 in \cite{zimanbook1}. 
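For orientation, we sketch the (standard) Sommerfeld step behind these Mott-like formulas, cf.\ chapter 7.9 in \cite{zimanbook1}: for any transport kernel $g({\cal E})$ that is smooth at the Fermi energy, \begin{equation} \int \mathrm{d} \epsilon_{{\bf p}} \left( - \frac{\partial f^{\rm eq}}{\partial \epsilon_{{\bf p}}} \right) \frac{\epsilon_{{\bf p}}-\epsilon_F}{T} \, g(\epsilon_{{\bf p}}) \simeq \frac{\pi^2 k_B^2 T}{3} \, [g({\cal E})]^\prime_{\epsilon_F} \, . \end{equation} Applied to the thermal part of the drive term, this is the origin of the energy derivatives $[\dots]^\prime_{\epsilon_F}$ appearing in $P_{\rm sT}$ and $\sigma_{\rm sT}$. 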
In addition (see \cite{raimondi2012} and references therein): \begin{equation} \label{sigma-int} \sigma_{\rm int}^{\rm sH} = (e/4\pi\hbar)(\tau/\tau_{\rm DP}) = (e/2\pi\hbar^3) (\alpha k_F \tau )^2 \, , \end{equation} and \cite{Engel05,Tse06} \begin{equation} \label{sigma-sj} \sigma_{\rm sj}^{\rm sH} = e n \lambda^2 /4\hbar = e (\lambda k_F)^2 / 8\pi\hbar \; . \end{equation} For the second equality in these equations, we used \eqref{dp-rate}, as well as $n = k_F^2 / 2\pi$ and $D = v_F^2 \tau / 2$. Finally, we consider concrete situations of experimental interest, namely (i) the thermal Edelstein effect, and (ii) the spin Nernst effect, both for the case of open-circuit conditions along the x-direction: this implies $j_x=0$, and hence $E_x=S\partial_x T$, with the following results: \begin{equation} \mathrm{(i)}: \quad s^y = \mathcal{P}^t \partial_x T \; , \;\; \mathcal{P}^t = S P_{\rm sE} + P_{\rm sT} \, , \end{equation} and \begin{equation} \mathrm{(ii)}: \quad j_y^z = \sigma^{\rm sN} \partial_x T \; , \;\; \sigma^{\rm sN}= S \sigma^{\rm sH} + \sigma_{\rm sT} \, . \end{equation} In both cases, we can identify ``electrical'' and ``thermal'' contributions. A non-linear $T$-dependence follows from the fact that for high temperature $1/\tau \sim T$, hence $\tau_s \sim T^{-1}$ and $\tau_{\rm DP} \sim T$; see Figs.\ \ref{PtE_1} and \ref{sigmasN_1} for representative examples. \begin{figure} \includegraphics[width=0.8\columnwidth]{figs/PtE_1.pdf} \caption{Thermal Edelstein polarization coefficient, ${\cal P}^t$, versus $T/T_r$, in units of $S_{0,r} (2m\alpha\tau_r/\hbar^2) (e/8\pi\hbar)$, split into its thermal and electrical contributions. The Elliott-Yafet spin relaxation is chosen as $\tau/\tau_{s} = 0.01$; in addition, $\tau_{{s},r}/\tau_{{\rm DP},r} = 1$, i.e., intrinsic and extrinsic spin relaxation are assumed to be of the same order of magnitude. $T_r$ denotes the temperature scale (with the subscript $r$ referring to room temperature). Adapted from \cite{toelle2014}.} \label{PtE_1} \end{figure} \begin{figure} \includegraphics[width=0.8\columnwidth]{figs/sigmasN_1.pdf} \caption{Spin Nernst conductivity in units of the ``universal'' value of the intrinsic spin Hall conductivity times the Seebeck coefficient at room temperature, $S_{0,r} \cdot e/8\pi\hbar$, versus $T/T_r$, for $\tau/\tau_{s} =0.01$ and $\tau_{{s},r}/\tau_{{\rm DP},r} = 1$. $T_r$ denotes the temperature scale (with the subscript $r$ referring to room temperature). Adapted from \cite{toelle2014}.} \label{sigmasN_1} \end{figure} Summarizing this section, symmetric Mott-like formulas for the current- or thermally-induced spin polarization (Edelstein effect, thermal Edelstein effect) and for the spin Hall and Nernst coefficients have been derived. The $T$-dependence of the transport coefficients is non-trivially affected by the competition between intrinsic and extrinsic spin-orbit coupling mechanisms, $\tau_{\rm DP}$ versus $\tau_{s}$. In the diffusive regime the relaxation times have different $T$-dependences, which ultimately causes a non-linear $T$-behavior. The non-linearity is in general stronger for the thermal Edelstein effect, and, especially in the spin Nernst case, it becomes weaker with decreasing intrinsic spin-orbit coupling strength. \section{Phonon skew scattering} \label{pss} In a variety of metallic systems, the spin Hall effect is known to rely on Mott skew scattering. In this section its high-$T$ counterpart, phonon skew scattering (pss), is investigated. 
One of the corresponding self-energy diagrams is shown in Fig.\ \ref{diagram-pss}. As a central result, the pss spin Hall conductivity is found to be practically $T$-independent for $T$ above the Debye temperature $T_D$ \cite{gorini2015}. \begin{figure} \includegraphics[width=0.4\columnwidth]{figs/diagram-pss2.pdf} \caption{Phonon skew scattering self-energy; $g$ denotes the electron-phonon vertex, and $\Lambda$ the parameter describing the strength of the anharmonic lattice contribution, i.e., 3-phonon processes. Note that $\Lambda$ is proportional to the negative of the Gr\"uneisen parameter $\gamma$, namely $\Lambda = -\gamma / (\rho^{1/2} v_s)$, where $\rho$ and $v_s$ are the ionic mass density and the sound velocity, respectively.} \label{diagram-pss} \end{figure} As discussed in detail in \cite{gorini2015}, a certain $T=0 \rightarrow T>T_D$ correspondence lets us immediately turn known $T=0$ results into their $T>T_D$ counterparts, with the result that the full expression for the high-$T$ spin Hall conductivity is structurally similar to the $T=0$ expressions appearing in \cite{raimondi2012}. Explicitly, for a 2D homogeneous bulk system: \begin{equation} \label{sHfull} \sigma^{\rm sH} = \frac{1}{1+\tau_{s}/\tau_{\rm DP}}\left(\sigma^{\rm sH}_{\rm int} + \sigma^{\rm sH}_{\rm sj} + \sigma^{\rm sH}_{\rm ss}\right) \, , \end{equation} where the intrinsic part of the spin Hall conductivity and the side-jump contribution were introduced in the previous section, see \eqref{sigma-int} and \eqref{sigma-sj}. In addition, for $T=0$ where phonons can be neglected, one finds \cite{Engel05,Tse06} \begin{equation} \label{ss0} \sigma_{\rm ss, 0}^{\rm sH} = 2\pi \left(\frac{\lambda k_F}{4}\right)^2\frac{en}{m} \, N_0 v_0 \tau \, , \end{equation} with $N_0=m/2\pi\hbar^2$, and $v_0$ the scattering amplitude. Here $\tau$ is the {\em impurity} scattering time. We emphasize that the side-jump spin Hall conductivity is independent of the scattering mechanism (at least in simple parabolic bands), whereas the skew scattering contribution is proportional to $\tau$, i.e., to the Drude conductivity $\sigma_D = e^2n\tau/m = en\mu$ ($e>0$). Considering the phonon skew scattering self-energy in detail, a major simplification arises in the high-$T$ limit, where the ``greater'' and ``lesser'' phonon Green's functions can be approximated by their classical limits. Hence we find that the pss self-energy has the standard form due to the coupling to an external field, whose role is played here by a quantity which we denoted by $\mathbb{D}$ \cite{gorini2015}. Exploiting the fact that the phonon energies ($\sim \hbar\omega_D$) are small compared to $\hbar\omega \sim k_B T$, one finds \begin{equation} \mathbb{D}_{123} \approx -3\Lambda g^3 (k_B T)^2 \, . \end{equation} The $T=0\rightarrow T>T_D$ correspondence for skew scattering thus explicitly reads \begin{equation} \label{correspondence_3} n_i v_0^3 \rightarrow -3\Lambda g^3 (k_B T)^2 \, , \end{equation} where $n_i$ is the density of impurities. This yields at once \cite{raimondi2012,gorini2015} \begin{equation} \label{ssT3} \sigma_{\rm ss}^{\rm sH} = -3\left(\frac{\lambda k_F}{4}\right)^2 \frac{en}{m} \frac{\hbar \Lambda}{g} \, . \end{equation} This shows that the pss spin Hall conductivity at high temperature is $T$-independent; in particular, it does {\em not} scale as the mobility (which was suggested in earlier works \cite{hankiewicz2006,vila2007,vignale2010,niimi2011,isasa2014}, based on the $T=0$ expressions). Note that $\Lambda < 0$ \cite{zimanbook2}. 
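For the reader's convenience, the algebra behind \eqref{ssT3} can be made explicit (our sketch; the only extra ingredient is the golden-rule form of the impurity scattering rate, $\hbar/\tau = 2\pi N_0 n_i v_0^2$, the static-disorder analogue of the phonon rate quoted in Sec.\ \ref{kinetic}, used to eliminate $v_0$). Rewriting \eqref{ss0} as \begin{equation} \sigma_{\rm ss, 0}^{\rm sH} = 2\pi \left(\frac{\lambda k_F}{4}\right)^2 \frac{en}{m} \, \frac{2\pi N_0^2 \tau^2}{\hbar} \, n_i v_0^3 \, , \end{equation} and applying the correspondence \eqref{correspondence_3} together with $\hbar/\tau \simeq 2\pi (N_0 g^2) k_B T$, the factors of $(k_B T)^2$ cancel against $\tau^2$, and \eqref{ssT3} follows at once. 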
\begin{figure}[t] \includegraphics[width=0.8\columnwidth]{figs/conjectureT2.pdf} \caption{Conjectured temperature dependences of the spin Hall angle, $\theta^{\rm sH} \equiv e\sigma^{\rm sH} / \sigma$, based on the assumption $\theta^{\rm sH}_0 > \theta^{\rm sH}_{T\sim T_D} > 0$ together with the result \eqref{theta-high}.} \label{conjecture} \end{figure} Bearing in mind the relations $j_y^z = \sigma^{\rm sH} E_x$ and $j_x = \sigma E_x$ for the spin and the charge current, respectively, the (dimensionless) spin Hall angle is defined by $\theta^{\rm sH} = e j_y^z / j_x$, i.e., $\theta^{\rm sH} = e \sigma^{\rm sH} / \sigma$. For an estimate, consider first $T=0$ and the dirty limit (see above), $\alpha k_F \tau / \hbar < 1$. Dropping all numerical factors and $\hbar$'s, we find: \begin{equation} \theta^{\rm sH}_0 \sim \frac{1/\tau}{\epsilon_F} \cdot \frac{(\alpha k_F \tau)^2 + (\lambda k_F)^2 + (\lambda k_F)^2 (\epsilon_F \tau) (N_0 v_0)}{1 + (\alpha\tau /\lambda)^2 / (\lambda k_F)^2} \, , \end{equation} where the three terms in the numerator correspond to the three terms displayed on the r.h.s.\ of \eqref{sHfull}. Since $N_0 v_0 \sim 1/2$, it appears that the skew scattering term dominates over the side-jump contribution. (Note, however, that $v_0$ can be of either sign; for the following discussion, we assume $v_0 > 0$.) Obviously, a more quantitative estimate is difficult since the parameters are material-dependent and generally not precisely known. In order to proceed, let us assume that intrinsic spin-orbit coupling is small, $\alpha < \lambda^2 k_F / \tau$. Then we may neglect the corresponding terms in the numerator and the denominator, with the result \begin{equation} \theta^{\rm sH}_0 \sim (\lambda k_F)^2 \cdot N_0 v_0 \, . \end{equation} Since $\tau$ decreases with increasing temperature, this approximation improves with increasing $T$, and we find from \eqref{ssT3} the following estimate: \begin{equation} \label{theta-high} \theta^{\rm sH}_T \sim - (\lambda k_F)^2 \cdot \Lambda / (g \tau) \sim T \, . \end{equation} In particular, we realize that $\theta^{\rm sH}_0 > \theta^{\rm sH}_{T\sim T_D}$. Thus the $T$-dependence of the spin Hall angle is non-monotonic; see Fig.\ \ref{conjecture}. \section{Discussion} \label{sum} In our recent works \cite{toelle2014,gorini2015}, we have been able to extend the kinetic theory of spin-orbit coupled electron (or hole) systems to finite (room) temperature, where momentum relaxation is dominated (in most cases) by electron-phonon scattering. The calculations are simplified in the high-$T$ limit, $T > T_D$, where electron-phonon processes are elastic. This is particularly useful for the phonon skew scattering contribution to the spin Hall conductivity. We conjecture that the $T$-dependence of the spin Hall angle, for weak intrinsic spin-orbit coupling, may become non-monotonic. \acknowledgments{We are grateful to Gurucharan Vijay Karnad and Mathias Kl\"aui for stimulating discussions. Financial support from the Deutsche Forschungsgemeinschaft through SFB 698 and TRR 80 is gratefully acknowledged.}
{ "redpajama_set_name": "RedPajamaArXiv" }
Legend has it that Papa J's downtown location, situated in one of Pittsburgh's oldest standing buildings, is haunted. I guess it used to be a brothel. They've been in business at their downtown location for over fifteen years! I'm not sure if I believe in ghosts. My rational mind says no. If ghosts are real, where's the proof? Certainly someone would be able to accurately document and capture some type of paranormal experience in a clear and convincing manner, right? I think I believe in ghosts insofar as other people believe in ghosts. Their belief that ghosts exist creates a reality in which ghosts exist. Objectivity is passé.

I've visited Papa J's twice in the last week. I was hoping to scare myself, and I was successful. The chicken marsala was so bad - it literally scared me. A thin watery sauce and chicken that tasted at least a day old, with a rubbery texture and awful flavor. The chicken on my pizza, pictured above, wasn't any better.

However, I shouldn't run my mouth about the pizza (lest the ghosts get me). The pizza at Papa J's was good. In particular I thought the cheese and toppings, other than the chicken, were very flavorful and well presented. I had the chicken pesto pizza with fresh tomato, red onion and toasted pine nuts. The crust was well prepared, and I believe they said it was made in house. I was actually amazed by the amount of pine nuts on the pizza. Pine nuts are expensive! I would wager that the pine nuts on this pizza cost more than all of the other ingredients combined. It is always nice to see a place that doesn't skimp on toppings to try and save a buck.

Overall, I really liked this bar. The bar itself is a very unique place with a lot of personality. (Dark wood interior with intricate fixtures and many smaller intimate rooms separated by a central bar.) Other than the chicken, I felt no feelings of discomfort or paranormal activity while visiting Papa J's. The bartenders are a lot of fun, and the patrons can get a little rowdy; they were basically falling out of their chairs on my first visit. I have to say that I would go back to this bar over other bars with better specials and better food. I would prefer to sit at Papa J's over Easy Street any day of the week. Personality goes a long way. If you visit Papa J's, skip the entrees, try the pizza and stay for the potential ghost sightings. If you don't manage to scare yourself with the cuisine, just be patient and the patrons will take care of the rest.
{ "redpajama_set_name": "RedPajamaC4" }
{"url":"http:\/\/bartenderswhimsy.com\/flzn21xr\/aluminum-%2B-oxygen-%3D-aluminum-oxide-9fc814","text":"What's the first thing that comes to mind when you hear the word 'aluminum'? Perhaps aluminum foil, which we use to cover our oven-cooked dishes. Aluminum oxide is probably not as familiar, but it's useful too.\n\nAluminum owes its corrosion resistance to a thermodynamically stable oxide film, aluminum oxide (Al2O3). Aluminum oxidation happens faster than that of steel, because aluminum has a really strong affinity for oxygen; rather than flaking away, though, the oxide just forms a hard, whitish-colored surface skin, and when the surface aluminum atoms have all bonded with oxygen the oxidation process stops. The corrosion of aluminum in cookware is prevented in the same way: the aluminum metal reacts with oxygen gas in the air, producing a protective coat of aluminum oxide. Aluminium is a highly reactive metal, so as-received metal always has a surface oxide present, and the thickness of the oxide film may vary depending upon any previous cleaning treatment. Aluminium oxide may be hydrated, with hydroxide species present, and these may react with atmospheric CO2 to form surface carbonates. For removing the oxide, nitric and sulfuric deoxidizing solutions are usually interchangeable or paired, so I will only discuss nitric-based deoxidizers, which are more common, and chromic-acid based ones.\n\nAluminum compounds have proven useful for thousands of years. The Babylonians used aluminum compounds in fabric dyes, cosmetics, and medicines, and Persian potters made their strongest vessels from clay that contained aluminum oxide. However, it was not until the early nineteenth century that aluminum was identified as an element and isolated as a pure metal. In nature, aluminum exists in various compounds; bauxite is a rock-hard mineral used in producing aluminum. Aluminum oxide, also known as alumina, exists naturally as corundum or bauxite; it is amphoteric in nature and is used in various chemical, industrial and commercial applications, for example as a popular abrasive for etching and finishing, as an adsorbent, desiccant or catalyst for organic reactions, and as an indirect additive used in food contact substances. Aluminum-air batteries (Al-air batteries) produce electricity from the reaction of oxygen in the air with aluminium; this has restricted their use to mainly military applications, but an electric vehicle with aluminium batteries has the potential for up to eight times the range of a lithium-ion battery, with a significantly lower \u2026 In one sensor design, an aluminum layer on a ceramic support is anodized to form a thin porous layer of aluminum oxide; the aluminum oxide is then coated with a thin, permeable layer of gold, and the gold and the aluminium layers form the sensor electrodes.\n\nChemists write the chemical reaction between aluminum and oxygen as: 4Al + 3O2 = 2Al2O3. As a word equation: aluminium metal + oxygen gas = solid aluminium oxide. This is a synthesis reaction (A + B \u2192 AB). Formula name equation: Aluminum + Oxygen \u2192 Aluminum Oxide; balanced chemical equation: 4Al + 3O2 \u2192 2Al2O3. So there is a 4 to 3 ratio here of aluminum to oxygen. The molar mass of aluminum is 26.98 g\/mol, the molar mass of aluminum oxide is 101.96 g\/mol, and the mass percent of aluminum in aluminum oxide is 52.93 percent. Another balanced example, the decomposition of aluminum carbonate: 1 Al2(CO3)3 \u2192 1 Al2O3 + 3 CO2 (aluminum: 2 \u2192 2; carbon: 3 \u2192 3).\n\nSome worked stoichiometry examples:\n\n1. Aluminum reacts with oxygen to form aluminum oxide. (a) How many moles of O2 are needed to react with 1.44 mol of aluminum? Set up the proportion 3\/4 = x\/1.44 and solve: x = 1.08 mol O2. (b) How many moles of aluminum oxide can be made if 5.23 mol Al completely react? Here 2\/4 = x\/5.23, so x = 2.62 mol Al2O3.\n\n2. How many grams of Al2O3 are formed when 23.6 g of Al reacts completely with oxygen? 23.6 g of Al is 0.875 mol, which yields 0.437 mol of Al2O3, i.e. 44.6 grams.\n\n3. 10.0 g of aluminum reacted in excess oxygen can produce up to 18.9 g of aluminum oxide; the gram-to-gram arithmetic is written out after this record.\n\n4. Practice Exercise 9.2: How many moles of aluminum oxide will be produced from 0.50 mol of oxygen? From the 2:3 mole ratio, 0.50 mol O2 \u00d7 2\/3 = 0.33 mol Al2O3.\n\n5. A limiting-reagent exercise: a mixture of 82.49 g of aluminum (26.98 g\/mol) and 117.65 g of oxygen is allowed to react via 4Al(s) + 3O2(g) \u2192 2Al2O3(s); here aluminum is the limiting reagent.\n\nThe Lewis dot structure for an ionic compound such as aluminum oxide, Al2O3, can be derived as well: each aluminum atom gives up its three valence electrons, and each oxygen atom takes up two.\n\nFinally, the classic thermite demonstration: the reaction of iron(III) oxide and aluminum is initiated by heat released from a small amount of starter mixture. This reaction is an oxidation-reduction reaction, a single replacement reaction, producing great quantities of heat (flame and sparks) and a stream of molten iron and aluminum oxide which pours out of a hole in the bottom of the pot into sand. (Because the reaction is quite exothermic, the metallic iron is molten at first; it cools and freezes eventually.) A related aluminum-fueled reaction heats the inside of solid rocket boosters to 5,800 \u00b0F, causing the two gases to expand rapidly.","date":"2021-05-15 10:33:38","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.3080322742462158, \"perplexity\": 3807.8445927586276}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243991801.49\/warc\/CC-MAIN-20210515100825-20210515130825-00327.warc.gz\"}"}
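The gram-to-gram conversion referenced in item 3 of the record above, written out explicitly (a minimal worked step; the molar masses, 26.98 g/mol for Al and 101.96 g/mol for Al2O3, are the ones stated in the record):

$$ 10.0\ \mathrm{g\ Al} \times \frac{1\ \mathrm{mol\ Al}}{26.98\ \mathrm{g\ Al}} \times \frac{2\ \mathrm{mol\ Al_2O_3}}{4\ \mathrm{mol\ Al}} \times \frac{101.96\ \mathrm{g\ Al_2O_3}}{1\ \mathrm{mol\ Al_2O_3}} \approx 18.9\ \mathrm{g\ Al_2O_3} \, . $$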
\section{Introduction} Due to its small production cross-section ($\sim 1\%$ of the total Higgs boson cross-section), the SM Higgs boson production in association with a pair of top quarks, \ensuremath{t\bar{t}H}, has not yet been observed. A measurement of the \ensuremath{t\bar{t}H}\ production rate would provide a direct determination of the Yukawa coupling of the top quark to the Higgs boson and is instrumental in determining ratios of Higgs boson couplings in a model-independent way. The \ensuremath{t\bar{t}H}\ production can be studied in a variety of final state topologies, depending on the top quark decay topology and the Higgs boson decay mode. In the following, we review the searches for the \ensuremath{t\bar{t}H}\ production in the diphoton (\ensuremath{t\bar{t}H(H \to \gamma \gamma ) }), multilepton (\ensuremath{t\bar{t}H(H \to WW, ZZ, \tau\tau ) }) and \ensuremath{t\bar{t}H(H \to b\bar{b})}\ decay channels, performed using up to $13.3$~fb$^{-1}$ of $pp$ collisions at $\sqrt{s}$ = 13 TeV collected with the ATLAS detector~\cite{bib:atlas} at the LHC. \section{\ensuremath{t\bar{t}H(H \to \gamma \gamma ) }\ analysis overview} The Higgs boson decay into two photons ($H \to \gamma \gamma$) is a particularly attractive way to study the properties of the Higgs boson. Despite the small branching ratio, a reasonably large signal yield can be obtained thanks to the high photon reconstruction and identification efficiency in ATLAS. Furthermore, due to the excellent photon energy resolution of the ATLAS calorimeter, the signal manifests itself as a narrow peak in the diphoton invariant mass spectrum on top of a smoothly falling background, and the Higgs boson signal yield can be measured using an appropriate fit. The events selected in the diphoton baseline region are split into exclusive categories that are optimised for the best separation of the Higgs boson production processes. Two \ensuremath{t\bar{t}H}\ categories are defined to select events in which at least one top quark decays leptonically or in which both top quarks decay hadronically. The choice of the background function and the estimation of the potential bias are carried out using data in dedicated control regions, where at least one of the two photons is required either to fail the tight identification (while still passing the loose identification) or to fail the isolation criteria. Figure~\ref{fig:Plots1} presents the invariant mass distributions in the two different event categories. The \ensuremath{t\bar{t}H(H \to \gamma \gamma ) }\ production cross section is measured to be: $ \sigma_{\ensuremath{t\bar{t}H}} \times \textrm{Br} (H \to \gamma \gamma) = -0.3^{+1.4}_{-1.1}$ fb, where the total uncertainty is dominated by statistics. 
\begin{figure*}[h] \centering \begin{overpic}[height=0.27\textwidth]{ttHgg_fig3} \put(95,-7){(a)} \end{overpic} \qquad \qquad \begin{overpic}[height=0.27\textwidth]{ttHgg_fig4} \put(95,-7){(b)} \end{overpic} \caption{Diphoton invariant mass spectrum in the \ensuremath{t\bar{t}H(H \to \gamma \gamma ) }\ search: (a) hadronic and (b) leptonic top decay modes~\cite{bib:ttH_gg}.} \label{fig:Plots1} \end{figure*} \section{\ensuremath{t\bar{t}H(H \to WW, ZZ, \tau\tau ) }\ analysis overview} The search for \ensuremath{t\bar{t}H}\ production using final states with multiple leptons, primarily targeting the decays $H \to WW$ and $H \to \tau\tau$, is performed with a dedicated cut-and-count analysis in four final states, categorised by the number and flavour of leptons: two same-charge light leptons with no hadronically-decaying $\tau$-lepton candidate ($2 \ell 0\tau_{\rm{had}}$), two same-charge light leptons with one hadronically-decaying $\tau$-lepton candidate ($2 \ell 1\tau_{\rm{had}}$), three light leptons ($3 \ell$), and four light leptons ($4 \ell$). The backgrounds for this search are categorised into those in which all the selected leptons are produced in decays of electroweak bosons or $\tau$-leptons (prompt leptons) and those in which at least one lepton arises from another source. In the latter case, the leptons arise from hadron decays or photon conversions (non-prompt), other interactions in detector material (charge mis-reconstruction or fake), or improper reconstruction of other particle species (fake). These backgrounds are estimated with a combination of simulation and data-driven techniques, and uncertainties in the non-prompt estimates have the largest effects on the background estimates. The best-fit value of the signal strength $\mu_{\ensuremath{t\bar{t}H}}$, combining all channels, is found to be $2.5 ^{+1.3}_{-1.1}$. Figure~\ref{fig:Plots2} shows the lepton flavour composition of the events in the $2 \ell 0\tau_{\rm{had}}$, $2 \ell 1\tau_{\rm{had}}$ and $3 \ell$ signal regions. \begin{figure*}[t] \centering \begin{overpic}[height=0.27\textwidth]{ttHmulti_fig1} \put(75,-7){(a)} \end{overpic} \qquad \begin{overpic}[height=0.27\textwidth]{ttHmulti_fig2} \put(75,-7){(b)} \end{overpic} \qquad \begin{overpic}[height=0.27\textwidth]{ttHmulti_fig3} \put(75,-7){(c)} \end{overpic} \caption{ Lepton flavour composition in the (a) $2 \ell 0\tau_{\rm{had}}$, (b) $2 \ell 1\tau_{\rm{had}}$ and (c) $3 \ell$ signal regions. The hatched region shows the total uncertainty on the background plus SM signal prediction in each bin~\cite{bib:ttH_multi}. } \label{fig:Plots2} \end{figure*} \section{\ensuremath{t\bar{t}H(H \to b\bar{b})}\ analysis overview} The \ensuremath{t\bar{t}H(H \to b\bar{b})}\ search is designed for the $H \to b\bar{b}$ decay mode and uses events with at least one top quark decaying to an electron or muon. In order to take advantage of the higher jet and $b$-jet multiplicity of the \ensuremath{t\bar{t}H}\ signal process, the events are classified into exclusive regions based on the number of jets and the number of $b$-tagged jets. The regions where \ensuremath{t\bar{t}H}\ is enhanced relative to the backgrounds are referred to as signal regions, and a two-stage multivariate technique is used to separate the signal from the background, the latter being dominated by $t\bar{t}$+jets production. The remaining regions are taken as control regions, allowing a tighter constraint of backgrounds and systematic uncertainties in a combined fit with the signal regions. 
\section{\ensuremath{t\bar{t}H(H \to b\bar{b})}\ analysis overview} The \ensuremath{t\bar{t}H(H \to b\bar{b})}\ search is designed for the $H \to b\bar{b}$ decay mode and uses events with at least one top quark decaying to an electron or muon. In order to take advantage of the higher jet and $b$-jet multiplicity of the \ensuremath{t\bar{t}H}\ signal process, the events are classified into exclusive regions based on the number of jets and the number of $b$-tagged jets. The regions where the \ensuremath{t\bar{t}H}\ signal is enhanced relative to the backgrounds are referred to as signal regions, and a two-stage multivariate technique is used to separate the signal from the background, the latter being dominated by $t\bar{t}$+jets production. The remaining regions are taken as control regions, allowing tighter constraints on the backgrounds and systematic uncertainties in a combined fit with the signal regions. Dedicated systematic uncertainties affecting the modelling of the $t\bar{t}$+(heavy flavour) jets background are considered. \begin{figure*}[hb] \centering \begin{overpic}[height=0.27\textwidth]{ttHbb_fig1} \put(75,-7){(a)} \end{overpic} \qquad \begin{overpic}[height=0.27\textwidth]{ttHbb_fig2} \put(75,-7){(b)} \end{overpic} \qquad \begin{overpic}[height=0.27\textwidth]{ttHbb_fig3} \put(75,-7){(c)} \end{overpic} \caption{Comparison between data and prediction for the multivariate discriminant in the most signal-like region in the (a) single-lepton and (b) dilepton regions. (c) Post-fit yields of signal and total background per bin, ordered by log$(S/B)$, for all bins used in the combined fit of the single lepton and dilepton channels~\cite{bib:ttH_bb}. } \label{fig:Plots3} \end{figure*} Figures~\ref{fig:Plots3}(a-b) show the distributions of the multivariate discriminant in the most signal-like single-lepton and dilepton regions, respectively. Figure~\ref{fig:Plots3}(c) shows the data compared to the post-fit prediction in each analysis bin considered, ordered by the log$(S/B)$ of the bins. The observed data are consistent both with the background-only hypothesis and with the Standard Model \ensuremath{t\bar{t}H}\ prediction. The observed combined signal strength $\mu_{\ensuremath{t\bar{t}H}}$ is found to be $2.1 ^{+1.0}_{-0.9}$. \section{Combination and conclusions} The combination of the \ensuremath{t\bar{t}H}\ searches in the diphoton, multilepton, and \ensuremath{b\bar{b}}\ decay channels is performed using up to 13.3~fb$^{-1}$ of $pp$ collisions at $\sqrt{s}$ = 13 TeV collected with the ATLAS detector at the LHC. The combined \ensuremath{t\bar{t}H}\ signal strength ($\sigma / \sigma_{\rm{SM}}$) is found to be $1.8 \pm 0.7$, which corresponds to an observed significance of $2.8 \sigma$, where $1.8 \sigma$ would be expected in the presence of SM \ensuremath{t\bar{t}H}\ production. A summary of the observed \ensuremath{t\bar{t}H}\ signal strengths is presented in Figure~\ref{fig:Plots4}(a). Upper limits on the \ensuremath{t\bar{t}H}\ signal strength for the individual analyses as well as for their combination are presented in Figure~\ref{fig:Plots4}(b). All three analyses are within $1.5 \sigma$ of the central value, and the largest systematic uncertainty contribution is related to the $t\bar{t}+b/c$ modelling uncertainties affecting the \ensuremath{t\bar{t}H(H \to b\bar{b})}\ analysis. The sensitivity of the combination exceeds the Run 1 ATLAS expected significance of $1.5 \sigma$. \begin{figure*}[h] \centering \begin{overpic}[height=0.27\textwidth]{combo1} \put(95,-7){(a)} \end{overpic} \qquad \qquad \begin{overpic}[height=0.27\textwidth]{combo2} \put(95,-7){(b)} \end{overpic} \caption{(a) Summary of the observed \ensuremath{t\bar{t}H}\ signal strength measurements from the individual analyses and for their combination, assuming $\ensuremath{m_{\rm H}} = 125$~GeV. (b) Upper limits on the \ensuremath{t\bar{t}H}\ signal strength for the individual analyses as well as their combination at 95\% CL~\cite{bib:ttH_combo}.} \label{fig:Plots4} \end{figure*} Given the encouraging results obtained to date by the ATLAS experiment in the searches for SM \ensuremath{t\bar{t}H}\ production, the observation of \ensuremath{t\bar{t}H}\ production is expected by the end of Run 2 at the LHC and would be one of the main highlights of the ATLAS physics program.
{ "redpajama_set_name": "RedPajamaArXiv" }
3,399
Developing physical toughness is not a simple task for many people, as it requires a certain mental fortitude to accomplish in addition to the development of physical traits. In order to be considered physically tough you need to have strength, endurance and the attitude to get you through difficult situations. There are different ways to learn physical toughness, but hands-on experiences combined with strength and endurance training are generally the best and easiest ways to learn.

A solid strength-training program is a good place to start on your journey toward physical toughness. Strength training not only builds the muscles that you need to be tough, but also it provides you with plenty of opportunities to push and challenge yourself. A weight-training program, either with free weights or machines, done three or more days per week is a good place to start. You can alternate between upper-body and lower-body exercises, add additional weight to work harder and do multiple sets and repetitions to help build your endurance.

Developing endurance is a huge part of becoming physically tough. You need to be able to continue on physically even when your body is tired and you feel like giving up, no matter what type of physical activity you are engaged in. Gain and practice endurance by placing yourself in adverse physical conditions while you train. Train in bad weather or venture off the beaten path to new terrain. Try doing workouts that you do not enjoy, as this will eventually provide you with the ability to endure physically difficult situations that are not optimal for you and that are considered tough for most people.

Your attitude and level of mental toughness is key to developing serious physical toughness, so spend some time focusing on that aspect of yourself. This can be done by visualizing success in the area you are training in and working on having a can-do attitude where you do not give up easily. Talk to people who have accomplished things that you admire physically, read uplifting stories and books about physical toughness and set manageable yet challenging physical goals for yourself. Work on building your self-confidence by doing activities that make you feel good, proud and accomplished.

It is hard to know how tough you are until you are placed in a situation that challenges you. An important part of becoming physically tough is to try different types of situations that you consider to be hard, uncomfortable or even impossible. There are many examples of hands-on experiences that you could try, such as entering a marathon, fighting another person in a boxing match or hiking a difficult mountain. The important part is to set a goal, work hard to achieve it and then put what you learned into practice to test yourself.

Reference: Running Times: How Tough Are You?
{ "redpajama_set_name": "RedPajamaC4" }
6,832
{"url":"https:\/\/www.groundai.com\/project\/revisit-of-local-x-ray-luminosity-function-of-active-galactic-nuclei-with-the-maxi-extragalactic-survey\/","text":"Revisit of Local X-ray Luminosity Function of Active Galactic Nuclei with the MAXI Extragalactic Survey\n\n# Revisit of Local X-ray Luminosity Function of Active Galactic Nuclei with the MAXI Extragalactic Survey\n\nYoshihiro Ueda \u2003\u2003 11affiliation: Department of Astronomy, Kyoto University, Oiwake-cho, Sakyo-ku, Kyoto 606-8502 Kazuo Hiroi \u2003\u2003 11affiliation: Department of Astronomy, Kyoto University, Oiwake-cho, Sakyo-ku, Kyoto 606-8502 Naoki Isobe \u2003\u2003 11affiliation: Department of Astronomy, Kyoto University, Oiwake-cho, Sakyo-ku, Kyoto 606-8502 22affiliation: Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), 3-1-1 Yoshino-dai, Chuo-ku, Sagamihara, Kanagawa 252-5210 Masaaki Hayashida \u2003\u2003 11affiliation: Department of Astronomy, Kyoto University, Oiwake-cho, Sakyo-ku, Kyoto 606-8502 Satoshi Eguchi \u2003\u2003 11affiliation: Department of Astronomy, Kyoto University, Oiwake-cho, Sakyo-ku, Kyoto 606-8502 33affiliation: National Astronomical Observatory of Japan, 2-21-1, Osawa, Mitaka City, Tokyo 181-8588 Mutsumi Sugizaki \u2003\u2003 44affiliation: MAXI team, Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 Nobuyuki Kawai \u2003\u2003 55affiliation: Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 Hiroshi Tsunemi \u2003\u2003 66affiliation: Department of Earth and Space Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 Tatehiro Mihara \u2003\u2003 44affiliation: MAXI team, Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 Masaru Matsuoka \u2003\u2003 44affiliation: MAXI team, Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 77affiliation: ISS Science Project Office, Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), 2-1-1 Sengen, Tsukuba, Ibaraki 305-8505 Masaki Ishikawa \u2003\u2003 88affiliation: School of Physical Science, Space and Astronautical Science, The graduate University for Advanced Studies (Sokendai), Yoshinodai 3-1-1, Chuo-ku, Sagamihara, Kanagawa 252-5210 Masashi Kimura \u2003\u2003 66affiliation: Department of Earth and Space Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 Hiroki Kitayama \u2003\u2003 66affiliation: Department of Earth and Space Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 Mitsuhiro Kohama \u2003\u2003 77affiliation: ISS Science Project Office, Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), 2-1-1 Sengen, Tsukuba, Ibaraki 305-8505 Takanori Matsumura \u2003\u2003 99affiliation: Department of Physics, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551 Mikio Morii \u2003\u2003 55affiliation: Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 Yujin E. 
Nakagawa \u2003\u2003 1010affiliation: Research Institute for Science and Engineering, Waseda University, 17 Kikui-cho, Shinjuku-ku, Tokyo 162-0044 Satoshi Nakahira \u2003\u2003 44affiliation: MAXI team, Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 Motoki Nakajima \u2003\u2003 1111affiliation: School of Dentistry at Matsudo, Nihon University, 2-870-1 Sakaecho-nishi, Matsudo, Chiba 101-8308 Hitoshi Negoro \u2003\u2003 1212affiliation: Department of Physics, Nihon University, 1-8-14 Kanda-Surugadai, Chiyoda-ku, Tokyo 101-8308 Motoko Serino \u2003\u2003 44affiliation: MAXI team, Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 Megumi Shidatsu \u2003\u2003 11affiliation: Department of Astronomy, Kyoto University, Oiwake-cho, Sakyo-ku, Kyoto 606-8502 Tetsuya Sootome \u2003\u2003 44affiliation: MAXI team, Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 Kousuke Sugimori \u2003\u2003 55affiliation: Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 Fumitoshi Suwa \u2003\u2003 1212affiliation: Department of Physics, Nihon University, 1-8-14 Kanda-Surugadai, Chiyoda-ku, Tokyo 101-8308 Takahiro Toizumi \u2003\u2003 55affiliation: Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 Hiroshi Tomida \u2003\u2003 77affiliation: ISS Science Project Office, Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), 2-1-1 Sengen, Tsukuba, Ibaraki 305-8505 Yohko Tsuboi \u2003\u2003 99affiliation: Department of Physics, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551 Shiro Ueno \u2003\u2003 77affiliation: ISS Science Project Office, Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), 2-1-1 Sengen, Tsukuba, Ibaraki 305-8505 Ryuichi Usui \u2003\u2003 55affiliation: Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 Takayuki Yamamoto \u2003\u2003 44affiliation: MAXI team, Institute of Physical and Chemical Research (RIKEN), 2-1 Hirosawa, Wako, Saitama 351-0198 Kazutaka Yamaoka \u2003\u2003 1313affiliation: Department of Physics and Mathematics, Aoyama Gakuin University,\n5-10-1 Fuchinobe, Chuo-ku, Sagamihara, Kanagawa 252-5258\nKyohei Yamazaki\n99affiliation: Department of Physics, Chuo University, 1-13-27 Kasuga, Bunkyo-ku, Tokyo 112-8551 Atsumasa Yoshida \u2003\u2003 1313affiliation: Department of Physics and Mathematics, Aoyama Gakuin University,\n5-10-1 Fuchinobe, Chuo-ku, Sagamihara, Kanagawa 252-5258\nand the MAXI team\n###### Abstract\n\nWe construct a new X-ray (2\u201310 keV) luminosity function of Compton-thin active galactic nuclei (AGNs) in the local universe, using the first MAXI\/GSC source catalog surveyed in the 4\u201310 keV band. The sample consists of 37 non-blazar AGNs at , whose identification is highly () complete. We confirm the trend that the fraction of absorbed AGNs with \u00a0 cm rapidly decreases against luminosity (), from 0.730.25 at \u00a0= erg s to 0.12 at \u00a0= erg s. The obtained luminosity function is well fitted with a smoothly connected double power-law model whose indices are (fixed) and below and above the break luminosity, ergs s, respectively. While the result of the MAXI\/GSC agrees well with that of HEAO-1 at \u00a0 erg s, it gives a larger number density at the lower luminosity range. 
Comparison between our luminosity function in the 2–10 keV band and that in the 14–195 keV band obtained from the Swift/BAT survey indicates that the averaged broad-band spectra in the 2–200 keV band should depend on luminosity, with a flatter effective photon index at lower luminosities and a steeper one at higher luminosities. This trend is confirmed by the correlation between the luminosities in the 2–10 keV and 14–195 keV bands in our sample. We argue that there is no contradiction between the luminosity functions above and below 10 keV once this effect is taken into account.

Keywords: catalogs -- surveys -- galaxies: active -- X-rays: galaxies

## 1 Introduction

The tight correlation between the mass of a supermassive black hole (SMBH) in a galactic center and that of the bulge found in the local universe (Magorrian et al. 1998; Ferrarese & Merritt 2000; Gebhardt et al. 2000; Marconi & Hunt 2003; Häring & Rix 2004; Hopkins et al. 2007; Kormendy & Bender 2009; Gültekin et al. 2009) leads to the idea of a "co-evolution" of SMBHs and galaxies. Thus, understanding the growth of SMBHs is a fundamental issue in elucidating the cosmological history of the universe in relation to the evolution of galaxies. The key objects for studying this are active galactic nuclei (AGNs), the phenomena in which the SMBH gains mass by accreting gas.

The most basic observational quantity describing the cosmological evolution of AGNs is the luminosity function (LF), the number density per comoving volume as a function of luminosity and redshift. To derive the AGN LF, a statistically well-defined sample (usually a flux-limited one) with complete identification, obtained by unbiased surveys, is required. Hard X-ray observations at energies above a few keV provide the most efficient and complete surveys to detect the whole AGN population, including obscured (so-called "type 2") AGNs, the major class in this population (e.g., see Gilli et al. 2007), thanks to the strong penetrating power of hard X-rays against absorption by the surrounding material and to the little contamination from stars in the host galaxy. In the past several years, combinations of hard X-ray surveys above 2 keV with different survey depths have revealed the evolution of the LF of the AGNs constituting the major part of the cosmic X-ray background (CXB), which gives strong constraints on the scenario of SMBH growth (e.g., Ueda et al. 2003; La Franca et al. 2005; Barger et al. 2005; Silverman et al. 2008; Ebrero et al. 2009; Yencho et al. 2009; Aird et al. 2010).

Establishing the X-ray LF of AGNs in the local universe is of great importance among these efforts, since it gives the reference point for any evolution model. While Ueda et al. (2003), Silverman et al. (2008), Ebrero et al. (2009), and Yencho et al. (2009) found that luminosity-dependent density evolution (LDDE; this term was originally introduced by Miyaji et al. 2000) best describes the X-ray AGN LF above 2 keV, Aird et al. (2010) recently suggested that a luminosity and density evolution (LADE) model, in which the "shape" of the LF is constant over the whole redshift range, gives a similarly good fit to their data. Ueda et al.
(2003) employed the local AGN LF in the 2–10 keV band based on the HEAO-1 all-sky survey, using the sample of 49 AGNs compiled by Shinozaki et al. (2006). Sazonov & Revnivtsev (2004) [footnote 1] also determined the local AGN LF in the 3–20 keV band from the RXTE slew survey (Revnivtsev et al., 2004), whose integrated volume emissivity corrected for incompleteness is, however, found to be a factor of ~2 smaller than the HEAO-1 result converted into the same energy band.

[Footnote 1: The AGN X-ray fluxes (and consequently luminosities) used by Sazonov & Revnivtsev (2004) were underestimated by a factor of 1.4 due to an error in the count-rate-to-flux conversion (Sazonov et al., 2008). In this paper, we correct for this error whenever we refer to the results of Sazonov & Revnivtsev (2004).]

More recently, hard X-ray surveys above 10 keV performed by the Swift and INTEGRAL satellites have also determined the local AGN LF, in the 14–195 keV or 15–55 keV band (Swift; Tueller et al. 2008; Burlon et al. 2011) and in the 20–40 keV or 17–60 keV band (INTEGRAL; Beckmann et al. 2006b; Sazonov et al. 2007), respectively. The advantage of these surveys is the small bias against heavily obscured AGNs, although the observed fraction of Compton-thick objects, with an absorption column density of N_H > 10^24 cm^-2, is found to be as small as 5 percent of the total sample (Tueller et al. 2008; Burlon et al. 2011). It is found that the shape of the AGN LF above 10 keV as determined by Swift/BAT looks significantly different from the Shinozaki et al. (2006) result if the luminosity is simply converted into the other band by assuming a typical AGN spectrum (characterized by a power law with a photon index of 1.8). The reason for this discrepancy has not been understood yet.

Thus, it is timely to revisit the local X-ray AGN LF below 10 keV with a new, independent survey, in order to check the consistency with the previous works and resolve the apparent contradictions among them. The Monitor of All-sky X-ray Image (MAXI) mission on the International Space Station (Matsuoka et al., 2009), currently in orbit, provides a valuable opportunity for this. Hiroi et al. (2011) produced the first source catalog of the MAXI Gas Slit Camera (GSC; Mihara et al. 2011; Sugizaki et al. 2011) at high Galactic latitudes, by compiling the data in the 4–10 keV band accumulated during the first 7 months since the start of its nominal operation. The catalog contains 51 AGNs detected with a significance above 7σ, consisting of 39 Seyfert galaxies and 12 blazars. In this paper, we constrain the local AGN LF in the 4–10 keV band by using only the non-blazar AGNs (i.e., Seyferts) in the Hiroi et al. (2011) catalog. We also determine the intrinsic distribution of the absorption column density of AGNs (the so-called N_H function) in the local universe using the same sample, and compare it with previous results. Section 2 briefly describes the source sample and their X-ray spectral properties in terms of an absorption column density and a photon index determined from various observatories. Section 3 describes the analysis method and the obtained results. We discuss the implications in Section 4. The cosmological parameters (H_0, Ω_m, Ω_Λ) = (70 km s^-1 Mpc^-1, 0.3, 0.7) are adopted throughout the paper.
The "log" symbol represents the base-10 logarithm, while "ln" represents the natural logarithm.

## 2 Sample

To investigate the local LF and N_H function of AGNs, we collect the 37 non-blazar AGNs from the Hiroi et al. (2011) catalog that constitute a statistically unbiased sample detected in the 4–10 keV band over an area of 34,000 deg^2. Here we exclude Cen A, as well as ESO 509–066, which has double nuclei and may be contaminated by nearby sources (see Table 1 in Hiroi et al. 2011). The four "confused" sources are ignored even if they contain contributions from AGNs (like NGC 6814); this does not affect our results. As noted in Hiroi et al. (2011), the list of the X-ray brightest AGNs in the whole sky has changed significantly from that of the HEAO-1 survey performed 30 years ago; among the 39 MAXI/GSC-detected AGNs, only 17 objects are listed both in the sample of Piccinotti et al. (1982) and in that used by Shinozaki et al. (2006). The flux limit of the MAXI sample corresponds to 1.2 mCrab in the 4–10 keV band. Figure 1 shows the log N – log S relation (integral form) of these AGNs in the 4–10 keV band, obtained by using the area curve presented in Figure 9 of Hiroi et al. (2011).

Table 1 summarizes the AGN list, where the first to sixth columns give the catalog source number, MAXI source name, counterpart, optical type, 4–10 keV flux, and redshift, respectively. Although it is known that using spectroscopic redshifts to estimate the distances of very nearby objects is subject to uncertainties due to galaxy proper motions, we adopt these values for consistency with the analysis of the Swift/BAT AGNs in Tueller et al. (2008). We confirm that even if we instead adopt redshifts corrected for the infall into the Virgo cluster by Mould et al. (2000) to calculate the luminosities of the most nearby AGNs, our results for both the LF and the N_H function are little affected. Here we only distinguish between two optical classes, "AGN1" (Seyfert 1.0–1.5) and "AGN2" (Seyfert 1.8–2.0), for simplicity. We can regard this AGN sample as nearly complete (99.3%), because 142 out of the total 143 X-ray sources are identified. The statistical flux errors are small for all the objects, and hence they are not taken into account in the following analysis.

To compare our result with previous works easily, we construct the AGN LF in terms of the "intrinsic" 2–10 keV luminosity corrected for absorption (i.e., before absorption) at the source frame (hereafter L_X). Since we have the count rate in the 4–10 keV band from the MAXI/GSC survey, it is necessary to convert it to L_X by using the spectral information as well as the redshift of each source. Fortunately, we are able to find results of spectral fits in the 0.2–10 (or 0.5–10) keV band in the literature for 33 (out of 37) AGNs, obtained from data of either ASCA, XMM-Newton, BeppoSAX, Swift/XRT, or Suzaku. The spectral quality is sufficiently good in most cases, and hence we neglect their errors in the following analysis. The best-fit photon index, the absorption column density N_H (at the source frame), and the luminosity calculated from these parameters (L_X) are listed in the 7th to 9th columns of Table 1, respectively, together with the reference for the spectral parameters (10th column).
In the conversion from the MAXI/GSC count rate into L_X, we consider a reflection component from cold, optically thick matter (Magdziarz & Zdziarski, 1995) with a solid angle of 2π, as adopted in Ueda et al. (2003), although this does not affect our result for the LF. For the remaining four targets [footnote 2], we perform the same image analysis of the MAXI/GSC data in the 2–4 keV band as that in Hiroi et al. (2011) to obtain the hardness ratio between the 2–4 keV and 4–10 keV count rates. We first calculate the corresponding photon index without considering any absorption; if the resulting index is flat, we instead derive an absorption column density at the source redshift assuming an intrinsic power law. The photon indices and N_H values with statistical errors estimated in this way are also listed in Table 1 for these 4 targets. Figure 2 shows the redshift (z) versus luminosity (L_X) plot for our sample. The open and filled circles correspond to sources with column densities below and above log N_H = 22, respectively. The optical type-2 AGNs are further marked with diagonal crosses.

[Footnote 2: IRAS 05078+1626, 2MASX J09235371–3141305, 4C +18.51, 1RXS J213623.1–622400.]

## 3 Analysis and Results

### 3.1 Analysis Method

Our goal is to determine both the N_H function and the absorption-corrected 2–10 keV LF of X-ray AGNs in the local universe. The calculation follows the same procedure as presented in Ueda et al. (2003), to which we refer the reader for details; the same notation convention is adopted in this paper. The N_H function, f(L_X, z; N_H), represents the probability of finding an AGN with an absorption column density between log N_H and log N_H + dlog N_H at a given luminosity L_X and redshift z. For convenience, we assign log N_H = 20 to AGNs without any significant absorption, and consider only the range log N_H = 20–24, since no Compton-thick AGNs are present in the current sample. It is normalized as

    ∫_{20}^{24} f(L_X, z; N_H) d log N_H = 1.    (1)

The LF, dΦ(L_X, z)/d log L_X, in units of Mpc^-3, is defined so that it gives the comoving space density of all (Compton-thin) AGNs in a luminosity range between log L_X and log L_X + dlog L_X at a redshift z.

From the list of L_X and N_H values in our sample, the best-fit parameters are searched for by minimizing the likelihood estimator defined as

    L = −2 Σ_i ln [ N(N_H,i, L_X,i, z_i) / ∭ N(N_H, L_X, z) d log N_H d log L_X dz ],    (2)

where the suffix i denotes each object. The term N(N_H, L_X, z) represents the expected number of sources from the survey,

    N(N_H, L_X, z) = f(L_X, z; N_H) × dΦ(L_X, z)/d log L_X × d_A(z)^2 (1+z)^3 c (dτ/dz) A(N_H, L_X, z),    (3)

where d_A(z) is the angular distance, dτ/dz the differential look-back time, and A the survey area, given as a function of flux, which is calculated from N_H, L_X, and z. The minimization process is carried out with the MINUIT software package. The error on a single parameter can be estimated from the parameter range that increases the L value by 1. The fit applied to the unbinned data here cannot estimate the normalization of the LF. Hence, we determine it so that the expected source number agrees with the observed one, and estimate its relative uncertainty only from the Poisson error.
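Schematically, the estimator of equation (2) could be evaluated as in the following Python sketch. This is illustrative only, not the analysis code: model_N is a stand-in for the expected-number density of equation (3), and the integration grids and data format are assumptions.

```python
import numpy as np

def neg2lnL(data, model_N, nh_grid, lx_grid, z_grid):
    """Schematic version of equation (2):
    L = -2 * sum_i ln[ N(NH_i, LX_i, z_i) / (triple integral of N) ].
    model_N(log_nh, log_lx, z) stands in for equation (3);
    data is a list of (log_nh, log_lx, z) tuples, one per AGN."""
    # Triple integral of N over log NH, log LX and z (trapezoidal rule)
    grid = model_N(nh_grid[:, None, None],
                   lx_grid[None, :, None],
                   z_grid[None, None, :])
    total = np.trapz(np.trapz(np.trapz(grid, z_grid), lx_grid), nh_grid)
    return -2.0 * sum(np.log(model_N(nh, lx, z) / total)
                      for nh, lx, z in data)
```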
### 3.2 N_H Function

To avoid coupling between the N_H function and the LF, we determine them step by step in the same way as Ueda et al. (2003), considering the small sample size. First, we constrain the N_H function by adopting the "delta-function" approximation for the LF, based only on the sample list. It reduces formula (2) to a simpler form (see equation (6) in Ueda et al. (2003)) in which the intrinsic N_H distribution can be evaluated directly from its observed N_H histogram by taking into account the N_H dependence of the survey area. For the four sources whose absorptions are estimated from the hardness ratios of the MAXI data, and which hence have non-negligible statistical errors, we take the uncertainties in N_H into account by introducing the "N_H response matrix function", as done in Ueda et al. (2003) (see their Section 4.1).

As for the shape of the N_H function, we adopt a modified version of that used in Ueda et al. (2003). The differences from Ueda et al. (2003) are that (1) we allow the value of the N_H function in the lowest column-density bin to be smaller than that at higher columns, and (2) we assign 4 discrete bins of the same width between log N_H = 20–24 for simplicity, considering the practical difficulty of determining N_H to better than the adopted bin width for objects without good X-ray spectral data. We define the absorption fraction ψ as the fraction of AGNs with log N_H = 22–24 among those with log N_H = 20–24, given as a function of luminosity. Its possible redshift dependence is ignored, because our sample consists only of local AGNs. The form of the N_H function is expressed differently in two regimes,

    for ψ(L_X) < (1+ε)/(3+ε),    (4)

and

    for ψ(L_X) ≥ (1+ε)/(3+ε).    (5)

Here ε defines the ratio of the N_H function in log N_H = 23–24 to that in log N_H = 22–23. It is fixed at 1.3 (instead of 1.7 as adopted in Ueda et al. (2003)), according to the observed N_H distribution in the Swift/BAT 9-month survey (23/18; Tueller et al. 2008), which agrees with the more recent result by Burlon et al. (2011). In the former case (equation (4)), the N_H function is flat above log N_H = 21, while in the latter case (equation (5)), the value in log N_H = 21–22 is taken to be the mean of those at log N_H = 20–21 and log N_H = 22–23. The maximum absorption fraction is ψ_max, corresponding to the case where the N_H function vanishes at log N_H = 20–21.

Figure 2 clearly shows that X-ray absorbed AGNs are mostly found in the lower luminosity range. This confirms the trend found in many previous works (e.g., Ueda et al. 2003; Hasinger 2008) that the absorption fraction decreases with increasing AGN luminosity. Thus, following Ueda et al. (2003), we model the absorption fraction as a linear function of log L_X between the maximum value (see above) and a minimum value, which is taken to be 0.1. It is represented as

    ψ(L_X) = min[ ψ_max, max( ψ_44 − β (log L_X − 44), 0.1 ) ],    (6)

where ψ_44 and β are the free parameters to be determined through the likelihood fit.

Table 2 summarizes the best-fit parameters of the N_H function and their errors. Figure 3(a) plots the "intrinsic" N_H function (corrected for the observation bias) for the total sample (upper), for low luminosities (middle), and for high luminosities (lower). The dependence of the absorption fraction on the luminosity is obvious. The best-fit model of the N_H function calculated at the mean L_X value in each region is overplotted.
Figure 3(b) shows the "observed" histograms of N_H for these 3 luminosity ranges, on which those predicted from the best-fit model are superposed.

### 3.3 Luminosity Function

Using the N_H function obtained above, we finally determine the local AGN LF by a maximum-likelihood fit according to formula (2). We adopt the smoothly connected double power-law model, one of the most standard descriptions of X-ray AGN LFs, given as

    dΦ(L_X, z=0)/d log L_X = A [ (L_X/L*)^γ1 + (L_X/L*)^γ2 ]^(−1).    (7)

To implement the effect of cosmological evolution, we introduce an evolution factor of the form (1+z)^p1,

    dΦ(L_X, z)/d log L_X = dΦ(L_X, 0)/d log L_X × (1+z)^p1,    (8)

where we fix p1 based on the result obtained for the LDDE model in Ueda et al. (2003). Note that at low redshifts and low luminosities, their LDDE model is identical to the pure density evolution model represented above.

Due to the limited sample size, we find it difficult to constrain the three free parameters of the LF (γ1, L*, and γ2) simultaneously. Hence, we fix the power-law slope γ1 in the low-luminosity range at three different values: a fiducial value, the best fit obtained from the Swift/BAT 9-month survey (Tueller et al. 2008), and the best fit from the LADE model in Aird et al. (2010). The results of the likelihood fit for these three cases are summarized in Table 2. Figure 4 plots the best-fit local LF determined from the MAXI survey for one of these cases (black curve). The data points are calculated by the N_obs/N_mdl method (Miyaji et al., 2001), with statistical errors attached according to the formula by Gehrels (1986). We find that the local AGN emissivity in the 2–10 keV band integrated over the log L_X = 41–47 range (in erg s^-1 Mpc^-3) is close to that obtained by Shinozaki et al. (2006) [footnote 3], but significantly larger than that of Sazonov & Revnivtsev (2004), converted from the 3–20 keV band LF by assuming a power-law photon index of 1.7.

[Footnote 3: It is larger than the value presented in Section 6.1 of Shinozaki et al. (2006). This is because, while Shinozaki et al. (2006) calculated the volume emissivity using the observed (i.e., absorbed) fluxes of each AGN, we here calculate it for intrinsic (de-absorbed) luminosities by integrating the analytical expression of the LF.]
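To make the parameterizations of equations (6) and (7) concrete, here is a minimal Python sketch. The parameter values are placeholders chosen for illustration (several of the best-fit numbers are not recoverable from this version of the text), apart from log L* = 43.3, which follows the conclusion; this is not the fitting code used in the paper.

```python
import numpy as np

def absorption_fraction(log_lx, psi44=0.5, beta=0.3, psi_max=0.8, psi_min=0.1):
    """Equation (6): psi(L_X) = min[psi_max, max(psi44 - beta*(log L_X - 44), psi_min)].
    psi44, beta and psi_max are placeholder values, not the best fits."""
    return np.minimum(psi_max,
                      np.maximum(psi44 - beta * (log_lx - 44.0), psi_min))

def local_lf(log_lx, norm=1e-5, log_lstar=43.3, gamma1=0.8, gamma2=2.0):
    """Equation (7): smoothly connected double power law, dPhi/dlog L_X in Mpc^-3.
    norm, gamma1 and gamma2 are placeholders; log_lstar = 43.3 follows the conclusion."""
    x = 10.0 ** (log_lx - log_lstar)
    return norm / (x ** gamma1 + x ** gamma2)

log_lx = np.linspace(41.0, 46.0, 11)
print(absorption_fraction(log_lx))  # fraction of absorbed (log N_H = 22-24) AGNs
print(local_lf(log_lx))             # comoving space density per dex of luminosity
```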
## 4 Discussion

We revisit the local X-ray luminosity function of non-blazar AGNs, together with the absorption distribution function, based on the first source catalog of the on-going MAXI/GSC extragalactic survey. In spite of the fact that the current MAXI/GSC source sample is smaller than those from the HEAO-1 (49 AGNs in Shinozaki et al. 2006) and RXTE (76 AGNs in Sazonov & Revnivtsev 2004) all-sky surveys performed in similar energy bands, it has some advantages for firmly establishing the statistical properties of AGNs below 10 keV, in the following respects. (1) The sample is highly complete (99.3% = 142/143, or 97% = 37/38 in the worst case). (2) Since we have adopted a relatively conservative significance threshold in the source selection of our catalog, our sample is less subject to flux uncertainties at the faintest end and is considered to be free from Eddington bias, as verified in the log N – log S relation (Figure 1). This could actually be a problem in the sample of Shinozaki et al. (2006), who had to correct for such biases by simulation (see their Appendix). (3) The AGN fluxes are determined from data collected over many scans (15 per day), and hence can be regarded as long-term averaged fluxes, less affected by short-term variability than those obtained from a few snapshot observations. (4) The 4–10 keV energy band is more suitable for detecting obscured AGNs (except for Compton-thick ones), thus reducing the observational biases in the N_H function determination. It is expected that the MAXI/GSC 4–10 keV sample has characteristics intermediate between surveys in softer X-rays below 4 keV and hard X-ray surveys above 10 keV.

In fact, we find that the "observed" fraction of absorbed AGNs with log N_H ≥ 22, 32% (=12/37), is higher than the HEAO-1 (20% = 10/49) and RXTE (22% = 17/76) results, while it is lower than that obtained from the Swift/BAT survey above 15 keV, 49% (=42/86; Tueller et al. 2008). We obtain the intrinsic N_H distribution by correcting for the observational bias that AGNs with heavier absorption are harder to detect due to the reduction of their count rates. The overall shape combined from both the low- and high-luminosity samples (Figure 3) is well consistent with the N_H distribution obtained from the Swift/BAT survey, which shows an almost flat distribution above log N_H = 21 with a weak peak in the log N_H = 20–21 bin (Tueller et al., 2008). We also confirm the strong dependence of the absorption fraction on the X-ray luminosity. The best-fit formula describing this relation is slightly different from that in Ueda et al. (2003), who included much fainter AGN samples in the analysis: the slope β of the absorption fraction with respect to log L_X is steeper than in Ueda et al. (2003), and the fraction reaches a higher maximum value at low luminosities. Such a sharp (even sharper) change of the absorption fraction with luminosity around log L_X ≈ 43.5 is also found in the Swift/BAT sample (Tueller et al. 2008; Burlon et al. 2011). The statistical uncertainty is quite large at present, however. It is of great importance to investigate the redshift evolution of the N_H function and of the relation between absorption fraction and luminosity by using much larger samples, which is left for future work.

To compare with past results obtained from surveys in similar energy bands, we overlay the best-fit LFs obtained by Shinozaki et al. (2006) and Sazonov & Revnivtsev (2004) as the thin solid (red) and thin dashed (cyan) curves, respectively, in Figure 4. The RXTE LF is converted from the 3–20 keV band into the 2–10 keV band by assuming a photon index of 1.7 and our adopted cosmological parameters (H_0 = 70 km s^-1 Mpc^-1, instead of the value adopted in Sazonov & Revnivtsev 2004). The systematic uncertainties due to the choice of photon index are small in this case (the luminosity changes only slightly for indices within the range 1.7–2.0). Here, the normalization of the LF is corrected by the maximum incompleteness factor, assuming that the unidentified targets are all AGNs whose luminosity and redshift distributions are the same as those of the identified sample. As already reported by Sazonov & Revnivtsev (2004), the RXTE result lies significantly below the HEAO-1 result, by a factor of ~2. The origin of this discrepancy is unclear, but we do not pursue it further in this paper.

As clearly seen in Figure 4, our MAXI LF is closer to the HEAO-1 LF than to the RXTE LF.
In particular, it is in good agreement with the HEAO-1 result in the high-luminosity range above log L_X = 43.5. However, the MAXI LF gives a larger number density at lower luminosities, by a factor of 2–3. Assuming a similar slope of the LF in the low-luminosity range, the MAXI LF favors a smaller break luminosity, log L* = 42.9–43.9, than the best-fit HEAO-1 value, though consistent within the statistical errors. The discrepancy can be partially explained if the absorption fraction in this low-luminosity range was underestimated in the previous work. In fact, according to the best-fit N_H function, the absorption fraction at log L_X = 42.5 is estimated to be larger in our work than in Ueda et al. (2003). Due to the coupling with the N_H function in constraining the LF parameters through the maximum-likelihood fit (see equation (2)), the estimated LF of all AGNs with log N_H = 20–24 would become smaller if a lower absorption fraction were assumed in the N_H function, because it is hard to detect objects with large column densities in a 2–10 keV band survey, and their space density can only be constrained by extrapolation from the lower column-density range.

Comparison of our AGN LF in the 2–10 keV band with the hard X-ray (>10 keV) LFs determined from the Swift/BAT and INTEGRAL surveys provides insights into the broad-band properties of local AGNs. Since the fraction of Compton-thick AGNs in those hard X-ray surveys is negligibly small, we can directly compare them with our result obtained for Compton-thin AGNs. In Figure 4, we also plot the best-fit form of the LF by Tueller et al. (2008), converting the luminosity from the 14–195 keV band to the 2–10 keV band. In this case, the assumed spectrum strongly affects the result. We adopt two photon indices, a flatter one (thick dot-dashed, magenta) and a steeper one (thick dashed, blue). Obviously, the shape of the LF is not the same between these bands if a single spectral index is assumed for all AGNs. In the low-luminosity range, the normalizations of the two LFs become consistent with each other for the flatter index, while at higher luminosities the conversion with the steeper index gives a better agreement.

This result indicates that the averaged shape of the broad-band X-ray spectra of these AGNs depends on luminosity, in the sense that more luminous AGNs show a steeper slope in the 2–200 keV range on average. To confirm this picture, we make a correlation plot of the luminosities in the 2–10 keV and 14–195 keV bands using our MAXI sample (Figure 5). Here the hard X-ray luminosities are taken from the Swift/BAT 22-month catalog, except for two AGNs that are not detected there: 2MASX J18470283–7831494, for which we refer to the Swift/BAT 58-month catalog, and 4C +18.51, which is not yet detected in the 58-month data and has only a flux upper limit in the 14–195 keV band. We plot two lines corresponding to the same two power-law photon indices as above (solid, magenta; dashed, blue). The trend that AGNs with lower luminosities have flatter slopes is indeed seen.

We have shown that the luminosity dependence of the averaged broad-band X-ray spectra makes the direct comparison of LFs constructed in different energy bands (below and above 10 keV) not straightforward; hence, their apparent difference in shape is not a "contradiction".
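To make the band-conversion arithmetic used above explicit, the following sketch computes the luminosity ratio between two bands for a simple power-law spectrum; the photon-index values in the example are placeholders, not the ones used in the paper.

```python
from math import log

def band_lum_ratio(gamma, band1, band2):
    """Ratio L(band1)/L(band2) for a power-law photon spectrum
    dN/dE ~ E^-gamma, i.e. an energy-flux integrand ~ E^(1-gamma)."""
    def energy_integral(lo, hi):
        if abs(gamma - 2.0) < 1e-9:
            return log(hi / lo)   # special case: the integral of E^-1
        return (hi ** (2.0 - gamma) - lo ** (2.0 - gamma)) / (2.0 - gamma)
    return energy_integral(*band1) / energy_integral(*band2)

# e.g. converting a 14-195 keV luminosity into the 2-10 keV band:
for gamma in (1.8, 2.0):          # placeholder photon indices
    print(gamma, band_lum_ratio(gamma, (2.0, 10.0), (14.0, 195.0)))
```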
This effect must be taken into account when one constructs an LF in a uniform way by compiling results of X-ray surveys performed in different energy bands. There are two possible explanations for the luminosity dependence of the 2–200 keV spectra. First, recent studies of nearby AGNs have suggested that the intrinsic power-law components of Seyfert 1 galaxies are steeper than those of Seyfert 2 galaxies (e.g., Malizia et al. 2003; Beckmann et al. 2006a; Tueller et al. 2008). Because of the strong dependence of the absorption fraction on luminosity, we mostly detect type 1 AGNs in surveys of the high-luminosity range, leading to the trend we see. The second possibility is the effect of a reflection component, which could be more significant at lower luminosities, as implied by the "X-ray Baldwin" effect (Iwasawa & Taniguchi, 1993). The presence of a reflection hump in the spectra increases the observed flux in the hard X-ray band, peaking at around 30 keV, and hence the apparent slope over the 2–200 keV band becomes flatter. To distinguish these two effects, systematic studies of the broad-band X-ray spectra of both type 1 and type 2 AGNs over various luminosity ranges are necessary.

## 5 Conclusion

We have constructed the local AGN X-ray luminosity function, utilizing our new sample consisting of 37 non-blazar AGNs detected in the 4–10 keV band in the first MAXI/GSC source catalog by Hiroi et al. (2011). The sample is highly complete and is less subject to uncertainties in the measured fluxes compared with past all-sky survey missions above 2 keV. The conclusions of our work are summarized as follows.

• We strongly confirm the trend that there are more absorbed AGNs at lower luminosities. The fraction of absorbed AGNs with log N_H = 22–24 among those with log N_H = 20–24, corrected for the observational biases, changes from 0.73±0.25 at log L_X = 42–43.5 to 0.12 at log L_X = 43.5–45.5. The estimated absorption distribution (N_H function) is consistent with the Swift/BAT and INTEGRAL results obtained above 10 keV.

• The shape of the intrinsic luminosity function of Compton-thin AGNs can be fitted with a smoothly connected double power law. For a fixed slope γ1 in the lower luminosity range, we obtain a break luminosity of log L* = 43.3±0.4 and a higher-luminosity slope γ2. The break luminosity is somewhat smaller than the HEAO-1 result. The emissivity integrated over log L_X = 41–47 is only slightly larger than the previous estimate from HEAO-1. The space density agrees with the HEAO-1 result at high luminosities but is larger in the lower luminosity range. This may be partially explained by the smaller bias against absorption in our survey in the 4–10 keV band, which leads to a better estimate of the N_H function.

• We compare our AGN luminosity function in the 2–10 keV band with those derived above 10 keV, converting the luminosities under the assumption of a single power-law spectrum. We find that the space densities match each other for a flatter photon index at low luminosities, while they do so for a steeper index at higher luminosities. This suggests a luminosity dependence of the averaged broad-band X-ray spectra over the 2–200 keV band. The trend is indeed confirmed by the luminosity correlation between the MAXI and Swift/BAT data in our sample.
This work is partially supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Grant-in-Aid No. 19047001, 20041008, 20244015, 20540237, 21340043, 21740140, 22740120, 23000004, 23540265, and by the Global-COE programs from MEXT "The Next Generation of Physics, Spun from Universality and Emergence" and "Nanoscience and Quantum Physics".

## References

• Aird, J., et al. 2010, MNRAS, 401, 2531
• Barger, A. J., et al. 2005, AJ, 126, 632
• Baumgartner, W. H., et al. 2011, ApJS, submitted
• Beckmann, V., Gehrels, N., Shrader, C. R., & Soldi, S. 2006a, ApJ, 638, 642
• Beckmann, V., Soldi, S., Shrader, C. R., Gehrels, N., & Produit, N. 2006b, ApJ, 652, 126
• Burlon, D., Ajello, M., Greiner, J., Comastri, A., Merloni, A., & Gehrels, N. 2011, ApJ, 728, 58
• Ebrero, J., et al. 2009, A&A, 493, 55
• Ferrarese, L., & Merritt, D. 2000, ApJL, 539, L9
• Gebhardt, K., et al. 2000, ApJL, 539, L13
• Gehrels, N. 1986, ApJ, 303, 336
• Gilli, R., Comastri, A., & Hasinger, G. 2007, A&A, 463, 79
• Gültekin, K., et al. 2009, ApJ, 698, 198
• Häring, N., & Rix, H. 2004, ApJL, 604, L89
• Hasinger, G. 2008, A&A, 490, 905
• Hiroi, K., et al. 2011, PASJ, in press (arXiv:1108.5516)
• Hopkins, P. F., Hernquist, L., Cox, T. J., Robertson, B., & Krause, E. 2007, ApJ, 669, 67
• Iwasawa, K., & Taniguchi, Y. 1993, ApJL, 413, L15
• Kormendy, J., & Bender, R. 2009, ApJL, 691, L142
• La Franca, F., et al. 2005, ApJ, 635, 864
• Magdziarz, P., & Zdziarski, A. A. 1995, MNRAS, 273, 837
• Magorrian, J., et al. 1998, AJ, 115, 2285
• Malizia, A., Bassani, L., Stephen, J. B., Di Cocco, G., Fiore, F., & Dean, A. J. 2003, ApJL, 589, L17
• Marconi, A., & Hunt, L. K. 2003, ApJL, 589, L21
• Matsuoka, M., et al. 2009, PASJ, 61, 999
• Mihara, T., et al. 2011, PASJ, in press (arXiv:1103.4224)
• Miyaji, T., Hasinger, G., & Schmidt, M. 2000, A&A, 353, 25
• Miyaji, T., Hasinger, G., & Schmidt, M. 2001, A&A, 369, 49
• Mould, J. R., et al. 2000, ApJ, 529, 786
• Piccinotti, G., Mushotzky, R. F., Boldt, E. A., Holt, S. S., Marshall, F. E., Serlemitsos, P. J., & Shafer, R. A. 1982, ApJ, 253, 485
• Revnivtsev, M., Sazonov, S., Jahoda, K., & Gilfanov, M. 2004, A&A, 418, 927
• Risaliti, G., Bianchi, S., Matt, G., Baldi, A., Elvis, M., Fabbiano, G., & Zezas, A. 2005, ApJL, 630, L129
• Sambruna, R. M., et al. 2011, ApJ, 734, 105
et al.\u00a02011, \\apj, 734, 105\n\u2022 Sazonov & Revnivtsev (2004) Sazonov, S. & Revnivtsev, M.\u00a02004, \\aap, 423, 469\n\u2022 Sazonov et al. (2007) Sazonov, S., Revnivtsev, M., Krivonos, R., Churazov, E., Sunyaev, R.\u00a02007, \\aap, 462, 57\n\u2022 Sazonov et al. (2008) Sazonov, S., Krivonos, R., Revnivtsev, M., Churazov, E., Sunyaev, R.\u00a02008, \\aap, 482, 517\n\u2022 Shinozaki et al. (2006) Shinozaki, K., Miyaji, T., Ishisaki, Y., Ueda, Y., & Ogasaka, Y.\u00a02006, \\aj, 131, 2843\n\u2022 Silverman et al. (2008) Silverman, J.D. et al. 2008, \\apj, 679, 118\n\u2022 Sugizaki et al. (2011) Sugizaki, M., et al.\u00a02011, \\pasj, in press (arXiv:1102.0891)\n\u2022 Iwasawa & Taniguchi (1993) Iwasawa, K.\u00a0& Taniguchi, Y.\u00a01993, \\apjl, 413, L15\n\u2022 Tueller et al. (2008) Tueller, J., Mushotzky, R.\u00a0F., Barthelmy, S., Cannizzo, J.\u00a0K., Gehrels, N., Markwardt, C.\u00a0B., Skinner, G.\u00a0K., & Winter, L.\u00a0M.\u00a02008, \\apj, 681, 113\n\u2022 Tueller et al. (2010) Tueller, J., et al.\u00a02010, \\apjs, 186, 378\n\u2022 Ueda et al. (2003) Ueda, Y., Akiyama, M., Ohta, K., Miyaji, T.\u00a02003, \\apj, 598, 886\n\u2022 Winter et al. (2009) Winter, L.\u00a0M., Mushotzky, R.\u00a0F., Reynolds, C.\u00a0S., & Tueller, J.\u00a02009, \\apj, 690, 1322\n\u2022 Winter & Mushotzky (2010) Winter, L.\u00a0M.\u00a0& Mushotzky, R.\u00a0F.\u00a02010, \\apj, 719, 737\n\u2022 Yencho et al. (2009) Yencho, B., Barger, A.\u00a0J., Trouille, L., Winter, L.\u00a0M.\u00a02009, \\apj, 698, 380\nYou are adding the first comment!\nHow to quickly get a good reply:\n\u2022 Give credit where it\u2019s due by listing out the positive aspects of a paper before getting into which changes should be made.\n\u2022 Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements.\n\u2022 Your comment should inspire ideas to flow and help the author improves the paper.\n\nThe better we are at sharing our knowledge with each other, the faster we move forward.\nThe feedback must be of minimum 40 characters and the title a minimum of 5 characters","date":"2020-06-02 18:18:34","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8612228035926819, \"perplexity\": 4819.853322954286}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-24\/segments\/1590347425481.58\/warc\/CC-MAIN-20200602162157-20200602192157-00582.warc.gz\"}"}
<?xml version="1.0" encoding="UTF-8"?> <!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to You under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. --> <!-- Default production configuration is asnychronous logging --> <Configuration> <Appenders> <Console name="STDOUT" target="SYSTEM_OUT"> <PatternLayout> <Pattern> %maxLen{%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) [%notEmpty{c:%X{collection}}%notEmpty{ s:%X{shard}}%notEmpty{ r:%X{replica}}%notEmpty{ x:%X{core}}] %c{1.} %m%notEmpty{ =>%ex{short}}}{10240}%n </Pattern> </PatternLayout> </Console> <RollingRandomAccessFile name="MainLogFile" fileName="${sys:solr.log.dir}/solr.log" filePattern="${sys:solr.log.dir}/solr.log.%i" > <PatternLayout> <Pattern> %maxLen{%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) [%notEmpty{c:%X{collection}}%notEmpty{ s:%X{shard}}%notEmpty{ r:%X{replica}}%notEmpty{ x:%X{core}}] %c{1.} %m%notEmpty{ =>%ex{short}}}{10240}%n </Pattern> </PatternLayout> <Policies> <OnStartupTriggeringPolicy /> <SizeBasedTriggeringPolicy size="32 MB"/> </Policies> <DefaultRolloverStrategy max="10"/> </RollingRandomAccessFile> <RollingRandomAccessFile name="SlowLogFile" fileName="${sys:solr.log.dir}/solr_slow_requests.log" filePattern="${sys:solr.log.dir}/solr_slow_requests.log.%i" > <PatternLayout> <Pattern> %maxLen{%d{yyyy-MM-dd HH:mm:ss.SSS} %-5p (%t) [%notEmpty{c:%X{collection}}%notEmpty{ s:%X{shard}}%notEmpty{ r:%X{replica}}%notEmpty{ x:%X{core}}] %c{1.} %m%notEmpty{ =>%ex{short}}}{10240}%n </Pattern> </PatternLayout> <Policies> <OnStartupTriggeringPolicy /> <SizeBasedTriggeringPolicy size="32 MB"/> </Policies> <DefaultRolloverStrategy max="10"/> </RollingRandomAccessFile> </Appenders> <Loggers> <!-- Use <AsyncLogger/<AsyncRoot and <Logger/<Root for asynchronous logging or synchonous logging respectively --> <AsyncLogger name="org.apache.hadoop" level="warn"/> <AsyncLogger name="org.apache.solr.update.LoggingInfoStream" level="off"/> <AsyncLogger name="org.apache.zookeeper" level="warn"/> <!-- HttpSolrCall adds markers denoting the handler class to allow fine grained control, metrics are very noisy so by default the metrics handler is turned off to see metrics logging set DENY to ACCEPT --> <AsyncLogger name="org.apache.solr.servlet.HttpSolrCall" level="info"> <MarkerFilter marker="org.apache.solr.handler.admin.MetricsHandler" onMatch="DENY" onMismatch="ACCEPT"/> </AsyncLogger> <AsyncLogger name="org.apache.solr.core.SolrCore.SlowRequest" level="info" additivity="false"> <AppenderRef ref="SlowLogFile"/> </AsyncLogger> <AsyncLogger name="org.eclipse.jetty.deploy" level="warn"/> <AsyncLogger name="org.eclipse.jetty.webapp" level="warn"/> <AsyncLogger name="org.eclipse.jetty.server.session" level="warn"/> <AsyncRoot level="info"> <AppenderRef ref="MainLogFile"/> <AppenderRef ref="STDOUT"/> </AsyncRoot> </Loggers> </Configuration>
{ "redpajama_set_name": "RedPajamaGithub" }
7,844
<?php

require_once 'PDB/Common.php';

/**
 * PDB_sqlite driver for PDB
 *
 * @category  DB
 * @package   PDB
 * @author    Ian Eure <ian@digg.com>
 * @copyright 2008 (c) Digg.com
 * @license   http://tinyurl.com/42zef New BSD License
 * @version   Release: @package_version@
 * @link      http://pear.php.net/package/PDB
 */
class PDB_sqlite extends PDB_Common
{
    /**
     * Parsed DSN object, lazily populated by {@link getDSN()}
     *
     * @var stdClass|null
     */
    protected $dsnObject = null;

    /**
     * Get DSN as an stdClass
     *
     * @return stdClass The DSN info in an stdClass
     */
    public function getDSN()
    {
        if ($this->dsnObject === null) {
            $this->dsnObject = new stdClass;
            // Split "type:dbname" (e.g. "sqlite:/path/to/db") into its two parts
            list($this->dsnObject->type, $this->dsnObject->dbname) =
                explode(':', $this->dsn, 2);
        }
        return $this->dsnObject;
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
1,318
Actual income values are incorrect for the deal when there is negative turnover.

1. Create a deal with a purchase base from Jan 1 to Dec 31, for example.
2. Create and receive a PO.
3. Run the Nightly batch and Deal Month End for Jan.
4. After the 1st deal month end, the actual income is calculated based on the positive value and is correct.
5. Create an RTV transaction so that the total turnover for the supplier becomes negative, and ship the RTV.
6. Run the Nightly batch and Deal Month End for Feb.
7. After the 2nd deal month end, the actual income is not calculated correctly.
{ "redpajama_set_name": "RedPajamaC4" }
33
\section{Appendix} \paragraph{PDDL} \label{PDDL} The Planning Domain Definition Language (PDDL) \cite{McDermott1998PDDLthePD} is a language family that allows us to define a planning problem. PDDL is an action-centred language, inspired by the well-known STRIPS formulation of planning problems \cite{fikes1971strips}. Mathematically, a STRIPS instance is a quadruple $\langle F,A,I,G\rangle$, in which each component has the following meaning: \begin{itemize} \item \textit{F} -- a set of facts; the possible states of the world are the subsets of $F$, i.e., $2^F$. \item \textit{A} -- a set of actions. Each action $a \in A$ consists of a set of preconditions $pre(a)$, add effects $add(a)$, and delete effects $del(a)$. Applying $a$ is possible in a state $s$ where $pre(a) \subseteq s$, and results in the state $s[\langle a \rangle] = (s\setminus del(a)) \cup add(a)$. \item $\textit{I} \subseteq F$ is the initial state of the world. \item $G$ -- the goal of the problem. The goal $G$ is a set of facts $G \subseteq F$. A state $s$ satisfies the goal if $G \subseteq s$. \end{itemize} A plan $\pi$ is a sequence of actions. $\pi = \langle a_0, a_1, \dots , a_n\rangle$ is applicable from state $s_0$ if $a_0$ is applicable at $s_0$ and $\langle a_1, \dots , a_n\rangle$ is applicable from $s_1 := s_0[\langle a_0 \rangle]$. We denote the state reached by following plan $\pi$ from state $s$ by $s[\pi]$, and say that a plan $\pi$ achieves a goal $G$ if $G \subseteq s[\pi]$. The PDDL language generalizes the STRIPS setting into a domain description and a problem description. The domain description contains the definitions of object types and predicates, as well as the actions' preconditions and effects. These elements are the aspects that do not change regardless of what specific situation we are trying to solve. On the other hand, the PDDL problem description is more specific and defines exactly what objects exist in the scene, what their current states are, and what the goal is. For example, let us assume that the world contains apples, tomatoes, cucumbers, knives, and tables in three different colors: blue, yellow, and green. The actions that can be performed on an object are: pickup, put, and slice. A problem file can then be described as follows: the current scene contains two apples, three tomatoes, two knives, a yellow table, and a blue table. In the initial state, all the objects (besides the blue table) are on the yellow table. The goal is to put a slice of an apple on the blue table. Many PDDL tools have been developed over the years. One major category is PDDL planners, which read PDDL files (domain and problem) and use them to find a sequence of actions that solves the problem. Another tool is a plan validator \cite{1374201}, which checks whether a given plan solves a specific PDDL problem.
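To make the STRIPS semantics above concrete, the following minimal Python sketch (illustrative only; the facts and actions are invented for the apple-and-tables example and are not taken from ALFRED or from our PDDL files) implements action applicability, state transition, and goal checking:
\begin{verbatim}
from typing import FrozenSet, List, NamedTuple

State = FrozenSet[str]

class Action(NamedTuple):
    name: str
    pre: FrozenSet[str]     # preconditions pre(a)
    add: FrozenSet[str]     # add effects add(a)
    delete: FrozenSet[str]  # delete effects del(a)

def applicable(s: State, a: Action) -> bool:
    return a.pre <= s       # a is applicable in s iff pre(a) is a subset of s

def apply(s: State, a: Action) -> State:
    return (s - a.delete) | a.add   # s[<a>] = (s \ del(a)) U add(a)

def achieves(init: State, plan: List[Action], goal: FrozenSet[str]) -> bool:
    s = init
    for a in plan:
        if not applicable(s, a):
            return False    # the plan is not applicable from this state
        s = apply(s, a)
    return goal <= s        # the goal is achieved iff G is a subset of s[plan]

# Hypothetical mini-problem in the spirit of the example above:
pickup = Action("pickup-apple",
                frozenset({"on(apple,yellow-table)"}),
                frozenset({"holding(apple)"}),
                frozenset({"on(apple,yellow-table)"}))
put = Action("put-apple",
             frozenset({"holding(apple)"}),
             frozenset({"on(apple,blue-table)"}),
             frozenset({"holding(apple)"}))
init: State = frozenset({"on(apple,yellow-table)"})
goal = frozenset({"on(apple,blue-table)"})
print(achieves(init, [pickup, put], goal))  # prints: True
\end{verbatim}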
Tuning the models on the Task, Relations, and Task+Relations input types took 2, 2, and 3 hours with the T5-base model, and 1, 4, and 5 hours with the GPT-2-medium model, respectively. \subsection{Results} \paragraph{Goal predicates} In this section, we provide the full results of the \textit{Goal Predicate} task. Table \ref{goal_table_appendix} includes all metrics and reported scores, divided into two language models and three input directives. The scores in Table \ref{goal_table_appendix} fall into two categories: strict and permissive. Predicate, Arg1, and Arg2 are per-element accuracy measures. F\_Predicate and F\_Seq represent the ratio of correct full predicate pairs (or triples, when the predicate type is $on$) and full predicate sequences, respectively. As shown in the table, for both models the best results are achieved with the task description and the objects' relations as input. In addition, T5 outperforms GPT-2 on every input type, achieving almost 90\% accuracy on every measure. Table \ref{goal_examples_appendix} contains new and unseen directive types and the corresponding goals predicted by our T5 model (which was trained on the Task + Relations input). The left column holds the new task inputs for the model; each task was paired with the objects' relations in the scene. These directives differ from the common tasks of ALFRED and are intended to test the robustness of the model.
\begin{table}[t] \caption{\textit{Goal Predicates} precision accuracy scores.} \centering \def\arraystretch{1.1}%
\begin{tabular}{c c c c c c c c} \toprule
Scoring Type & Model & Input & Predicate & Arg1 & Arg2 & F\_Predicate & F\_Seq\\ \midrule
& & Task & 0.80 & 0.77 & 0.79 & 0.76 & 0.66\\
& GPT-2 & Relations & 0.02 & 0.01 & 0.02 & 0.02 & 0.01\\
& & Task + Relations & 0.73 & 0.71 & 0.81 & 0.70 & 0.74\\
Strict & & & & & & & \\
& & Task & 0.89 & 0.86 & 0.84 & 0.85 & 0.78\\
& T5 & Relations & 0.09 & 0.08 & 0.05 & 0.08 & 0.04\\
& & Task + Relations & \textbf{0.92} & \textbf{0.89} & \textbf{0.88} & \textbf{0.89} & \textbf{0.85}\\ \midrule
& & Task & 0.80 & 0.81 & \textbf{0.89} & 0.80 & 0.72\\
& GPT-2 & Relations & 0.02 & 0.01 & 0.02 & 0.02 & 0.01\\
& & Task + Relations & 0.73 & 0.74 & 0.83 & 0.73 & 0.77 \\
Permissive & & & & & & & \\
& & Task & 0.89 & 0.89 & 0.85 & 0.89 & 0.84\\
& T5 & Relations & 0.09 & 0.09 & 0.05 & 0.09 & 0.04\\
& & Task + Relations & \textbf{0.92} & \textbf{0.92} & \textbf{0.89} & \textbf{0.92} & \textbf{0.88}\\ \bottomrule \\
\end{tabular} \label{goal_table_appendix} \end{table}
\begin{table}[t] \caption{Goal prediction examples} \def\arraystretch{1.1}%
\begin{tabular}{l l} \toprule
Input Text & Goal Predicates \\ \midrule
Put either tomato or potato or lettuce \\on the counter. & on lettuce countertop, cold lettuce \\ \midrule
Put either tomato or potato on the counter, \\\textbf{avoid using lettuce}. & on \textbf{potato} countertop, cold \textbf{potato}\\ \midrule
Put a baking tool on the counter. & on spatula pan, on pan countertop \\\midrule
Place two vegetables in the drawer. & on potato drawer, two\_task\\\midrule
Put any type of cutlery on the counter. & sliced spoon, on spoon cup, on cup countertop\\\midrule
Put some element in the fridge.
& sliced potato, on potato fridge, cleaned potato \\ \bottomrule \end{tabular} \label{goal_examples_appendix} \end{table}
\begin{table} \caption{\textit{Valid Robot Plan} accuracy scores.} \centering \def\arraystretch{1.1}%
\begin{tabular}{c c c c} \toprule
Model & Input & Valid\_Plan\_Orig\_Goal & Valid\_Plan\_Pred\_Goal \\ \midrule
& Task & 0.78 & 0.69\\
GPT-2 & Relations & 0.00 & 0.00\\
& Task + Relations & 0.89 & 0.83\\ \midrule
& Task & 0.72 & 0.77\\
T5 & Relations & 0.13 & 0.59\\
& Task + Relations & \textbf{0.91} & \textbf{0.97}\\ \bottomrule
\end{tabular} \label{valid_robot_plan} \end{table}
\paragraph{Valid robot plans} Table \ref{valid_robot_plan} contains the scores on the valid-plans task. The Valid\_Plan\_Orig\_Goal score is the ratio of valid plans that achieve the \textbf{original} goal, while the Valid\_Plan\_Pred\_Goal score reflects a similar ratio with respect to the \textbf{predicted} goal predicates. While GPT-2 achieves better results on the original goal predicates, T5 is more accurate with respect to the predicted ones. This difference may stem from the fact that GPT-2 struggles to predict valid goal predicates, in contrast to T5. \section{Introduction} Recent years have seen a significant increase in the use of virtual assistants such as Amazon's Alexa, Google Assistant, Apple's Siri, and many more. Although these virtual assistants are able to perform basic household tasks through online communication with smart-home products, many more demanding tasks remain out of reach. The next generation of these assistants will be robots that operate in our homes, follow instructions given by us, and perform much more complicated tasks. To make this vision a reality, a robot must be able to plan a series of actions from an initial state until the required task is completed. Such a planning process can be accomplished via the PDDL formalism \cite{McDermott1998PDDLthePD}. Another basic requirement for household robots is the ability to reason about information that comes from the environment. This information can be visual, textual, auditory, etc. As for the language-reasoning part, the Transformer neural network architecture appeared in 2017 \cite{DBLP:journals/corr/VaswaniSPUJGKP17}, achieving a breakthrough in machine translation. Later on, transformer architectures evolved to solve a wider range of NLP tasks. Yet, training an agent to perform complicated actions in the real world is still hard, expensive, and time-consuming. Therefore, much recent work has been dedicated to the development of realistic virtual simulators that mimic the behavior of the real world. AI2-Thor \cite{kolve2017ai2} is one example of such a simulator. The AI2-Thor simulator imposes realistic constraints from our real world. A good example of a real-world constraint is an \textbf{irreversible state}, in which some actions change the world in a way that cannot be reversed (for example, when the only tomato in the scene has been sliced). In 2020, \citet{shridhar2020alfred} introduced the ALFRED (Action Learning from Realistic Environments and Directives) dataset, which is based on the AI2-Thor simulator. ALFRED consists of multi-modal data: long sequences of instructions for achieving high-level tasks such as "Put a slice of tomato in the fridge." These step-by-step instructions are combined with the egocentric vision of the robot at each time step (see Figure \ref{fig:ALFRED_example} for an example).
\begin{figure*} \centering \includegraphics[width=1\textwidth]{Images/Alfred_example.jpg} \caption{\label{fig:ALFRED_example}An example from the ALFRED dataset. The green text box contains the high-level task and the blue text box contains the high-level instructions needed to accomplish that task. The images represent the egocentric vision input of the agent at each time step. } \end{figure*} Our goal in this paper is to develop a system that combines natural language processing and planning to enable an agent to accomplish a real-world task given in natural language. By doing so, we are able to show how changes in the language input affect the agent's ability to accomplish the task. In this work, we assume that the agent has some background knowledge about the environment it operates in (which is essential for any cognitive robot, regardless of its natural language processing capabilities): the types of objects in the world, their features, the possible relations between them, and the basic robot behaviors (modeled as actions with preconditions and effects). Moreover, we assume that at the beginning of every episode the agent acquires complete scene information, including all the objects and their locations, using its vision tools. The last piece of the agent's input is a directive, in which a human asks the agent to perform a specific task in natural language. By combining all this information, our agent should generate a sequence of actions (in the robot's formalism) that achieves the desired outcome of the human task. To do so, we have developed a system that combines large language models and PDDL planning. We evaluate this system on the textual part of the ALFRED dataset. We show how various inputs affect the model's output and argue that the context of the environment is vital for the agent's ability to succeed. Our main contributions are as follows: \begin{itemize} \item We developed a novel translation approach that combines natural language and a planning formalism for operating agents. \item We show that integrating the scene context (the world's semantic information) into the model's input leads to a significant improvement in the translation process, which indicates that the context of the world is vital for an operating agent. \item Our results suggest that both the Transformer's encoder and the Transformer's decoder are essential for this translation task. \end{itemize} \section{Related work} Home service robots must be able to plan a sequence of actions to achieve their goals in the real world. This skill requires sophisticated reasoning at each time step, including interpreting multi-modal input types such as vision, language, and other sensor information. Thanks to environments like AI2-THOR \cite{kolve2017ai2}, Matterport 3D \cite{anderson2018vision}, AI Habitat \cite{habitat19iccv}, and TDW \cite{gan2020threedworld}, dramatic improvements have been made on various real-world tasks. One of these is \textit{visual semantic planning} \cite{zhu2017visual}, the task of predicting a sequence of high-level actions from visual observations. The purpose of those actions, conducted by the agent, is to reach a goal state from an initial state. When addressing this kind of task, a robot operating in a human household space needs to overcome several challenges, such as partially observable spaces or long-horizon tasks in which the decision-making at any step can depend on observations received far in the past.
Hence, being able to properly memorize and utilize long-term history is crucial \cite{fang2019scene}. In 2020, \citet{shridhar2020alfred} introduced the ALFRED dataset, which is based on the AI2-Thor simulator. ALFRED combines egocentric vision and language directives to achieve everyday household tasks. Currently, the best model completes 39\% of ALFRED's tasks, which still leaves a long way to go. Recent papers have chosen to break this problem down into separate modalities instead of tackling the difficult multi-modal problem directly. \citet{jansen2020visually} explored this task on the ALFRED dataset, using the GPT-2 \cite{radford2019language} language model to generate plans from high-level task descriptions, without visual cues. That work showed that the GPT-2 model outperforms a baseline RNN model on this task, successfully predicting 22.2\% of action sequences, and 53.4\% of the plans when ignoring the first action prediction in the sequence. Later on, \citet{wang2021visual} integrated a general domain knowledge graph of indoor environments with the BERT model \cite{devlin2018bert} to create better predictions, successfully generating 31.4\% of the plans. While these previous works focused on language-directive translation, they do not incorporate practical planning and are therefore not sufficient for real-world intelligent agents. On the other hand, \citet{wang2020home} integrated Hierarchical Task Networks and probabilistic inference to generate action sequences using multiple context types, but without natural language directives. These papers indicate that models can achieve surprisingly strong performance using information from only a single modality. In addition, a recent study \cite{thomason2018shifting} found that models using input from a single modality (either vision or language) often perform nearly as well as, or even better than, their multi-modal counterparts. \section{Methods} \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{Images/Model_Architecture.png} \caption{Our model architecture.} \label{fig_model} \end{figure*} As described before, we assume that our agent already has background knowledge about the available actions, objects, and predicates in the world. Another assumption is that the agent has complete scene information (typically acquired using vision). Additionally, the agent is given a language directive by a human, which instructs it to perform a particular task. The agent's goal is to output a sequence of actions that achieves the desired objective. In general, a human can give language commands to a robot in multiple ways. The first option is to provide the agent with a high-level task description, such as "put a slice of tomato in the fridge". Alternatively, more detailed instructions could be given, for example, "go to the kitchen, pick up the knife from the table, go to the tomato that is on the counter, slice the tomato, etc.". While both of these types exist in each sample of the ALFRED dataset, we choose to focus on the high-level task description. Moreover, in this paper we incorporate the spatial relations between the objects in the scene, which reflect the semantic context of the environment. We term the high-level task description \textit{Task} and the additional context \textit{Relations}. Furthermore, we use the task and relations to train our model to produce two outputs: \textit{goal predicates} and \textit{plan templates}.
The \textit{goal predicates} represent the desired outcome of a given task description. In other words, the goal predicates describe the main objects' states at the end of the plan execution ("sliced tomato, tomato on counter"). A \textit{plan template}, on the other hand, is the general structure of the robot's plan, which specifies the sequence of action and object types that form the plan ("go to table, pickup knife table, go to tomato, slice tomato"). By combining these outputs, we achieve our goal, which is to translate natural language into a \textbf{valid} robot-language plan. A valid plan is a sequence of actions in a robot's language that can be performed by a household agent to achieve the given task. In this work, we assume that the robot's language is the Planning Domain Definition Language (PDDL; more details and background in the supplementary material). As mentioned above, the PDDL domain contains the objects in the world and the actions' definitions (preconditions and effects). Unlike the domain file, which is predetermined, the problem file varies between the tasks we are trying to solve. The main components of the problem file are the task's goal predicates and the current world state, which includes the objects in the scene and their predicates. By adding predicates to the actions' parameters and the world state, we constrain the problem, allowing the PDDL planner to terminate much faster. In our research, we add three types of constraints: \begin{itemize} \item \textbf{Length} - we add two predicates to the domain - $(next \ ?s_i \ ?s_j)$ and $(current\_step \ ?s_i)$. Each action in the domain gets the current step, $s_i$, and increases it to $s_j$, where $s_j$ is the next step number in the $(next\ s_i\ s_j)$ predicate. By initializing the problem with the predicates: \begin{itemize} \item $(current\_step \ s_0) = True$ \item $(next \ s_i \ s_{i+1}) = True \ \ \forall i :\ 0 \leq i < T$ \end{itemize} and adding the predicate $(current\_step \ s_T)$ to the goal, we force the PDDL planner to generate only plans of length $T$. \item \textbf{Action allowance} - for each time step $i \geq 0$, we add the predicate $(allowed\_action \ s_i)$, where \textit{action} is some action type from the domain, making the planner create only plans whose $i$'th action type is allowed. \item \textbf{Objects allowance} - this predicate is similar to the previous one. The predicate is $(allowed\_object_j \ obj \ s_i)$, which indicates whether the $i$'th action in the plan, $(action\_type \ obj_1 \dots obj_k)$, may contain the object type $obj$ at position $j$. \end{itemize} The combination of these constraints forces the PDDL planner to create a very specific PDDL plan. These constraints therefore reduce the search space, accelerating the process of finding a plan, if one exists; an action schema instrumented with these predicates is sketched below. An illustration of our model's architecture is given in Figure \ref{fig_model}. The first component is the translation unit, which is responsible for translating a natural language directive into a PDDL goal and a plan template. The second part of the model combines these elements into a PDDL problem and checks for a PDDL plan that solves it. If such a plan is not found, the model generates another PDDL plan template and re-checks for a solution. Ultimately, this process ends when a valid plan is found or the number of generated plans exceeds a constant $B$. We will now drill down into further details of each component.
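To make the step-counter machinery concrete, this is how an action schema instrumented with these predicates might look. This is our own sketch using the predicate names defined above; the actual domain file used in this work may differ in its parameters and effects:

\begin{verbatim}
(:action goto
  :parameters (?to - object ?si ?sj - step)
  :precondition (and (current_step ?si)      ; we are at step i
                     (next ?si ?sj)          ; j = i + 1
                     (allowed_goto ?si)      ; action allowance
                     (allowed_arg1 ?to ?si)) ; object allowance
  :effect (and (not (current_step ?si))
               (current_step ?sj)            ; advance the counter
               (can_reach ?to)))
\end{verbatim}

Since every action consumes $(current\_step \ s_i)$ and produces $(current\_step \ s_{i+1})$, a plan that satisfies the goal conjunct $(current\_step \ s_T)$ must contain exactly $T$ actions.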
\subsection{Language-PDDL translation} This unit consists of two translation channels; both take the same natural language directive as input. The first channel, and the primary meeting point between the language part and the PDDL part, is the \textit{plan templates}. A \textit{plan template} (or \textit{visual semantic plan}, as in \cite{jansen2020visually}) is a sequence of actions that achieves a general goal. Each action consists of an action type and parameter types. These actions are not object-specific, meaning that they cannot be used directly as a plan for the PDDL problem and are therefore not sufficient for an operating agent either. For instance, a plan template could be "go to dining table, pick up apple dining table, go to fridge, put apple fridge". Since there might be multiple apples and tables in our scene, the robot would not know which objects it should interact with. In other words, a valid PDDL plan must contain the objects' unique ids as well as their types. Even though plan templates alone are not enough to solve each problem, they are still useful. In our method, we convert these plan templates into the PDDL constraints we defined above. To demonstrate this, consider the following plan template: \textit{"go to table, pick up apple table"}. The constraint predicates derived from this plan template are: \begin{itemize} \item \textbf{Length} - since the length of the plan is 2, we add the predicates $(next\ s_0\ s_1)$, $(next\ s_1\ s_2)$, $(current\_step\ s_0)$ to the initial world state and $(current\_step\ s_2)$ to the goal predicates. \item \textbf{Action allowance} - for each element in the predicted sequence, we add the action allowance predicate of the element's action type and index - $(allowed\_goto \ s_0)$, $(allowed\_pickup \ s_1)$. \item \textbf{Objects allowance} - as in the previous case, we add the object allowance predicate for each object in the sequence. Each predicate consists of the object type, its position in the action, and its action's position in the sequence - $(allowed\_arg_1\ table\ s_0)$, $(allowed\_arg_1\ apple\ s_1)$, $(allowed\_arg_2\ table\ s_1)$. \end{itemize} By integrating these constraints into the original PDDL problem, we force the planner to produce a plan that matches our template. This restriction significantly decreases the search space, thus accelerating the overall search process. The second translation channel is the goal-predicates unit. The goal predicates capture the user's desired outcome of a task. These predicates describe the world's state at the end of the plan's execution ("sliced tomato", "cold tomato"). In contrast to the PDDL plan, which must include specific objects, the goal predicates may be formulated in a more general way. In this paper, we use this general goal formalism rather than focusing on specific object ids. In other words, when the task's goal includes some predicates of a specific object, we accept any plan that reaches the same predicates on any instance of this object type. This is because there are many instances of each object type in our world, and we do not want to pick only one instance when we formulate the goal. We implement this behavior using the \textit{exists} PDDL operator. To illustrate this, assume the PDDL goal \textit{"sliced tomato, on tomato countertop"}.
This goal will be formulated as: \textit{"(exists (?tomato0 - tomato ?countertop0 - countertop) (and (sliced ?tomato0) (on ?tomato0 ?countertop0)))"}. This encoding allows us to accept any plan that reaches a final world state in which there exists a sliced tomato on any countertop. After generating these two PDDL elements, we combine them with the original PDDL problem, creating a new and more restricted problem to solve. As in earlier work, we model this translation process as a sequence-to-sequence task. In our research, we focus on two language models, GPT-2 and T5 \cite{raffel2019exploring}. We trained separate GPT-2 models for \textit{goal predicates} prediction and for \textit{plan template} prediction. However, since training a new task on T5 only requires changing the prefix of the input, we fine-tuned a single T5 model for both tasks. \subsection{PDDL consistency checking} Once a new PDDL problem has been generated from the predicted goal and the constraint predicates, we feed both the domain of our world and the problem file into a PDDL planner. The planner looks for a valid plan that achieves our goal under the given constraints. When the planner finds a solution, we count the plan template prediction as valid. When the planner does not find a solution, we go back to our language model and "ask" it to generate another plan template. After a new template is generated, we convert it to PDDL constraints, update the PDDL problem, and check for a valid plan that fits the new template. This re-generation of the plan template is done by taking the next prediction in the model's beam-search output. We repeat this procedure until a valid template is found or the number of generated templates exceeds a given number $B$ (in our experiments, $B = 5$). \subsection{Various input types} When humans approach everyday tasks, they may have some preliminary knowledge about the world they operate in. We term this knowledge the "context of the environment"; it includes, among other things, the objects in the scene, their spatial relations, and action-object pairs that commonly appear together. A robot that is given only a task description, without this context knowledge, may struggle to generate a successful plan. In this paper, we suggest that adding the context of the environment to the task description in the model's input (rather than using the task description alone) improves the quality of the model's output. We test this hypothesis by providing our model with various input types and tracking the changes in its performance. Concretely, we perform multiple experiments, each with its own language input. The directives that were tested are the concatenation of the high-level task ("put two bowls on the dining table") with the relations between the objects in the scene ("on tomato table, on bowl countertop, etc."). In addition, to isolate the effect of each input type, we also analyze the performance of the model when the input contains only one input type. \section{Evaluation} We evaluate our model on the language part of the ALFRED dataset and show that it achieves state-of-the-art results on the \textit{visual semantic plan} generation task and the \textit{valid robot-plan} generation task. \subsection{Dataset} The ALFRED dataset consists of 8,055 visual samples, composed of an agent's egocentric visual observations of the environment.
Each one corresponds to multiple language directives, annotated by Mechanical Turk workers, adding up to a total of 25,743 directives. ALFRED has 7 different task types parameterized by 84 object classes in 120 scenes. The tasks are Pick \& Place, Stack \& Place, Pick Two \& Place, Clean \& Place, Heat \& Place, Cool \& Place and Examine in Light. ALFRED is based on the AI2-Thor simulator, in which some actions may change an object's state irreversibly (a sliced potato will never be whole again). The evaluation data in ALFRED is divided into validation and test datasets, each further split into seen and unseen environments. The purpose of the second split is to examine how well a model generalizes to entirely new spaces with novel object-class variations. \paragraph{Pre-processing} In our work, we re-divide ALFRED's original training data into train, val, and test sets. Furthermore, we merge ALFRED's seen-type validation set into our validation set, and test our model both on our test data and on ALFRED's validation-unseen data. Since our task ignores the vision part of the data, we might encounter duplicates across our datasets. Hence, we perform a cleaning step that deletes, from the training and validation data, samples with the same language directive as in our test datasets. In the ALFRED dataset, there are several actions that the agent can execute: $pick up,\ put,\ slice,\ heat,\ cool,\ clean,\ toggle$ and $go to$. By performing a single action or a sequence of actions, the agent may change the state of some objects. To track these changes, we model the state of each object using the PDDL predicate formalism. The predicates that reflect the outcomes of these actions are $robot\_has\_obj,\ on,\ sliced,\ hot,\ cold,\ cleaned$, $toggled$, and $can\_reach$. In addition, we added another predicate, named $two\_task$, which is used as an indicator for the "pick two and place" task. In our work, we train language models to predict both the goal predicates, which express the desired final state of each object as derived from the language directive, and the plan template, also known as the \textit{visual semantic plan}. To create the targets for the language models, we used the PDDL parameters and the high-level actions provided by the ALFRED samples. Each action in the plan template has the form $(action,\ arg1)$ if $action \in \{go to,\ toggle\}$, and $(action,\ arg1,\ arg2)$ otherwise. Similarly, each goal predicate has the form $(predicate,\ arg1, \ arg2)$ if $predicate = on$, and $(predicate,\ arg1)$ otherwise. \paragraph{Models input format} Since we use two different language models in our evaluation, GPT-2 and T5, we adjust the input to the form each model expects. We fine-tune GPT-2 on the natural language directives and gold targets using GPT's sos and eos tokens: \[ \mbox{$" \textless | \textit{startoftext} | \textgreater \ \textbf{directive} \ \textit{Task Type:} \ \textbf{target} \ \textless | \textit{endoftext} | \textgreater"$}\\ \] where \textit{Task Type} is either "Goal" or "Actions", according to the prediction task being performed. During evaluation and testing, we feed the model the input "$\textless | \textit{startoftext} | \textgreater \ \textbf{directive} \ \textit{Task Type:}$", and let it generate tokens until a $\textless | \textit{endoftext} | \textgreater$ token is generated.
On the other hand, fine-tuning T5 on a new task is done by providing a unique prefix before the directive. In our work, the goal-predicates task's prefix is "translate task to goal" and the plan-template task's prefix is "translate plan to actions". Since T5 is an encoder-decoder model, at every training step we feed the model source and target sequences. The source phrase is the prefix with the directive, and the target phrase is either the goal predicates or the plan template. \paragraph{Data validation} A data sample is considered \textit{valid} for training if its original action sequence solves the sample's problem. To check whether a given solution solves a given task, we need a PDDL domain file and a PDDL problem file. Thus, we created a PDDL domain file using our knowledge of the objects and actions in the ALFRED world, and a PDDL problem file for each sample. The ALFRED domain file encodes the rules of the ALFRED world and its object types. While the same domain file is used across all samples, the problem file differs between samples. Moreover, creating a PDDL problem file requires knowing the world's current state, that is, all the objects in our scene and their spatial relations. Although the ALFRED samples provide some of the objects in the scene, they reveal neither all of the objects nor their spatial relations, only their explicit coordinates in space. To find the objects' relations, we took the initial location of each object and the scene type from the ALFRED dataset and loaded them into the AI2-Thor simulator. By doing so, we obtained the metadata of the scene, which provides more information about the objects and their spatial relations. Concretely, we create relations of the form $\textit{on} \ obj_1 \ obj_2$, where $obj_1$ is on top of $obj_2$ or inside it. Lastly, the goal predicates for each problem were generated from the "PDDL parameters" field of every data sample. After creating the domain and problem files, we extracted the PDDL action sequence from the "high level plan" of each sample (which specifies the objects' ids) and used the VAL plan validator \cite{1374201} to check whether this plan solves the sample's PDDL problem. Samples whose gold PDDL action sequence did not achieve the goal of the problem were marked as invalid and removed from the data. In the end, the train, val, test, and test\_unseen datasets had 13,893, 1,650, 1,010, and 682 samples, respectively. This division reflects an 80-10-10 (\%) train-val-test partition. \paragraph{Metrics} In our research, we implement both the evaluation measures used in \cite{jansen2020visually} and some additional accuracy measures, and we extend these measures to the \textit{goal predicate} task. We also use the same notion of permissive scoring, which accepts predictions of objects that are similar to the original ones ("lamp - floor lamp", "knife - butter knife"). Both tasks have per-element accuracy measures ($predicate$\textbackslash $command, \ arg_1,\ arg_2$), permissive arguments ($p\_arg1,\ p\_arg2$), and full-triple and full-sequence accuracy measures as defined in \cite{jansen2020visually}.
Note, however, that while in the \textit{visual semantic plan} task the order of the generated text matters ("go to table, pick up tomato table" is not the same as "pick up tomato table, go to table"), in the \textit{goal predicate} task we ignore the order of the generated predicates and measure accuracy accordingly ("sliced tomato, cold tomato" is the same as "cold tomato, sliced tomato"). In the \textit{goal predicate} task, we also apply permissive scoring at the full-predicate and full-sequence levels. The $f\_predicate\_sim$ and $f\_seq\_sim$ measures indicate whether a predicate or a sequence is wrong only in permissive objects ("cold butter knife, hot apple" and "cold knife, hot apple" are considered the same). We implemented two accuracy measures for the \textit{valid robot-plan} task. The first is $valid\_plans\_orig\_goal$, the ratio of samples for which a valid PDDL plan (one that achieves the \textbf{original} goal predicates) was found while following the plan template constraints. The second is similar, except that it counts plans that achieve the \textbf{predicted} goal predicates. We term this second measure $valid\_plans\_pred\_goal$. \subsection{Results} We now analyze the results on each task, tested on two language models and three directive types. \paragraph{Goal predicates} The accuracy measures in this section were calculated using the precision metric and were evaluated on ALFRED's val\_unseen dataset. The full accuracy scores on the goal predicate task are available in the supplementary material. While for the T5 model the accuracy scores were highest on the \textit{Task + Relations} input, for the GPT-2 model the \textit{Task + Relations} input beat the \textit{Task} input only on the full\_sequence measure. In addition, T5 outperforms GPT-2 on every input type, reaching almost 90\% accuracy across all measures and correctly predicting 85\% of exact full sequences. These results suggest that an encoder-decoder architecture might be more suitable for goal-predicate prediction. Moreover, the additional information about the environment was captured better by the T5 model than by the GPT-2 model. Further goal predictions of our T5 model on new and unique examples are shown in Table \ref{goal_examples}. These directives differ from the common tasks of ALFRED and are intended to test the robustness of the model. As presented in the table, the model seems to recognize general types such as vegetables, cutlery, and baking tools well.
\begin{table}[t] \caption{Goal prediction examples} \def\arraystretch{1.1}%
\begin{tabular}{l l} \toprule
Input Text & Goal Predicates \\ \midrule
Put a \textbf{baking tool} on the counter. & on \textbf{spatula} pan, on pan countertop \\\midrule
Place two \textbf{vegetables} in the drawer. & on \textbf{potato} drawer, two\_task\\\midrule
Put any type of \textbf{cutlery} on the counter. & sliced \textbf{spoon}, on \textbf{spoon} cup, on cup countertop\\ \bottomrule
\end{tabular} \label{goal_examples} \end{table}
\paragraph{Plan template} \begin{table}[hbt!]
\caption{\textit{Plan Template} accuracy scores.} \centering \def\arraystretch{1.1}%
\begin{tabular}{c c c c c c c c } \toprule
Model & Input & Command & Arg1 & Arg2 & F\_Action & F\_Seq & Valid Plans\\ \midrule
& Task & \textbf{0.93} & 0.75 & 0.67 & 0.63 & 0.32 & 0.78\\
GPT-2 & Relations & 0.54 & 0.14 & 0.16 & 0.10 & 0.00 & 0.00\\
& Task + Relations & \textbf{0.93} & 0.78 & 0.74 & 0.69 & 0.46 & 0.89\\ \midrule
& Task & 0.91 & 0.73 & 0.63 & 0.60 & 0.29 & 0.72\\
T5 & Relations & 0.68 & 0.22 & 0.26 & 0.18 & 0.04 & 0.13\\
& Task + Relations & 0.92 & \textbf{0.82} & \textbf{0.76} & \textbf{0.75} & \textbf{0.57} & \textbf{0.91}\\ \bottomrule
\end{tabular} \label{actions_table} \end{table}
\begin{table}[hbt!] \caption{Full action triple accuracy per the 8 action types in ALFRED.} \begin{center}
\begin{tabular}[hbt!]{c c c c c c c c c c c} \toprule
Model & {GoTo} & {Pickup} & {Put} & {Cool} & {Heat} & {Clean} & {Slice} & {Toggle} & {Avg.}\\ \midrule
GPT$_T$ & 68 & 40 & 68 & \textbf{85} & 82 & \textbf{78} & 39 & 75 & 67 \\
T5$_T$ & 66 & 36 & 66 & 79 & 83 & 74 & 47 & 77 & 66\\
GPT$_{TR}$ & 75 & 56 & 67 & 78 & 80 & 75 & \textbf{55} & 75 & 70 \\
T5$_{TR}$ & \textbf{79} & \textbf{65} & \textbf{72} & 83 & \textbf{84} & \textbf{78} & \textbf{55} & \textbf{97} & \textbf{77} \\ \bottomrule
\end{tabular} \label{per_action_accuracyy} \end{center} \end{table}
Table \ref{actions_table} contains the models' scores on the \textit{plan template} task. In contrast to the \textit{goal predicate} task results, both models achieve their highest scores on the Task + Relations input. On the Task-only input, GPT-2 correctly predicts 32\% of full original action sequences, which is a better result than previous work (22\%), though this might be due to the changes in the training dataset. Furthermore, when the scene context is added to the model's input, T5 outperforms GPT-2 across almost all measures, correctly predicting 57\% of the target plans compared to GPT-2's 46\%. These results outperform recent work \cite{jansen2020visually} on \textit{visual semantic plan} generation from natural language directives, which was also trained and evaluated on the ALFRED dataset. Moreover, the full-action-triple accuracy per action type is presented in Table \ref{per_action_accuracyy}, where the subscripts \textit{T} and \textit{TR} refer to the Task and Task+Relations inputs, respectively. \paragraph{Valid robot plan} The models' scores on the \textit{valid robot plan} task with respect to the original goal predicates are presented in Table \ref{actions_table} under the \textit{Valid Plans} column. As shown in the table, the T5 model generates valid plans for 91\% of the samples, of which 57\% are the exact target plans. That is, 34\% of the plan templates that T5 predicted were not identical to the original plans but still achieved the desired goal. In addition, when the input is non-informative about the required task, such as the objects' relations alone, GPT-2 is unable to generate any valid plan. T5, on the other hand, generated valid plans for 59\% of the samples with respect to the predicted goal, but only 13\% with respect to the original goal predicates. These results suggest that, when the input is non-informative, T5 may generate easier goal predicates rather than succeeding at predicting valid plans. Lastly, GPT-2 achieves better results on the original goals, in contrast to T5, which succeeds more often on the predicted goal predicates. This difference might be due to the fact that T5 performs better than GPT-2 on the \textit{goal predicate} task.
The full table with the \textit{Valid Plans} scores on the predicted goal predicates is provided in the supplementary material. \paragraph{Few-shot learning} In our setting, creating data samples for training is time-consuming and expensive. Hence, the ability of a model to perform successful few-shot learning is crucial. To evaluate this capability, we created multiple training sets by downsampling the original data into smaller fractions and trained a separate T5 model on each set. As shown in Figure \ref{few_shot_fig}, with only 5\% of the training data our models predict action sequences and goal predicates nearly as well as models trained on the full dataset. These results suggest that our model is capable of successful few-shot learning. We plan to test this assumption in other domains in future work. \begin{figure}[hbt!] \centering \includegraphics[width=1\textwidth]{Images/Few_Shot.jpg} \caption{Few-shot accuracy of full action sequences and full goal predicates.} \label{few_shot_fig} \end{figure} \section{Conclusions and future work} In this work, we have developed a novel approach for translating natural language into plans that accomplish everyday household tasks. While previous work \cite{jansen2020visually} reached 53.4\% accuracy on action generation by training a GPT-2 model on task-only input \textbf{and ignoring the first action generated by the model}, our model exactly predicts 57\% of the ALFRED dataset plans without ignoring the first action in the predicted sequence, and generates valid plans for another 34\% of the samples, reaching valid plans for a total of 91\% of unseen-environment tasks. Furthermore, we show that our model performs successful few-shot learning. These plans were generated using only natural language data for the task description and scene information. Our approach combines language models and PDDL tools, working together as a whole to generate a valid PDDL plan that achieves the goal of the language directive. Our contribution reflects \textbf{two improvements}: (1) we utilize not just the task description, but also extra information about the environment's context, and (2) we employ the more powerful T5 model, which is better able to exploit this extra information. Looking forward, there are several future directions we would like to investigate. One of them is generating valid PDDL plans for object-specific goals. This task is more challenging, since its goal state requires changing the predicates of particular objects rather than of any object of the given type (for example, warming a \textbf{green} cup instead of any cup). Another challenge is defining a more conservative PDDL problem and domain that avoids changing basic attributes of objects (for example, putting a \textbf{whole} tomato on the table). In addition, since the ALFRED dataset contains a limited number of objects and only seven types of everyday tasks, achieved by performing only eight action types (far fewer than the set of all possible actions a household agent should be able to perform), we would like to improve our zero-shot predictions on new and unseen object types, and to check the few-shot learning capabilities of our model on other domains. We will release our code and data under an open-source licence once the paper is published.
{ "redpajama_set_name": "RedPajamaArXiv" }
9,701
The company is backed by world-class VCs, is growing at a fast pace, with huge ambitions and very high standards of quality. You will be pushed outside of your comfort zone. We do not require an official certification/degree; we are looking for someone who can do the job! Are always thinking "What will happen if it fails?"
{ "redpajama_set_name": "RedPajamaC4" }
9,866
Q: How to determine merge strategy of a merge commit Given the SHA hash of a merge commit, is there a way to determine which merge strategy (the parameter passed to git merge as -s or --strategy) was used when it was created? I was able to figure it out by retrying the same merge with different resolution strategies until I got the same conflicts, but I figure there must be a better way to do it. I am trying to examine a problem where a change got overwritten by a merge commit, similar to this question.
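One relevant fact: Git does not record the merge strategy in the commit object, so it cannot be read back directly; re-attempting the merge is essentially the only option, but that retry can at least be scripted. A rough sketch (assumes a clean working tree; M holds the merge commit's SHA, and the candidate strategy list can be adjusted):

#!/bin/sh
# Re-attempt the merge with each candidate strategy and compare the
# resulting tree to the original merge commit's tree.
M=abc1234                      # the merge commit under investigation (placeholder)
P1=$(git rev-parse "$M^1")     # first parent
P2=$(git rev-parse "$M^2")     # second parent

for s in recursive resolve ours subtree; do
    git checkout -q "$P1"
    if git merge -q -s "$s" --no-edit "$P2" >/dev/null 2>&1; then
        # Compare the redone merge's tree with the original merge's tree.
        if [ "$(git rev-parse 'HEAD^{tree}')" = "$(git rev-parse "$M^{tree}")" ]; then
            echo "strategy '$s' reproduces the same tree"
        fi
    fi
    git merge --abort 2>/dev/null  # clean up a conflicted attempt, if any
    git reset --hard -q "$P1"
done

Note that an identical resulting tree only shows a strategy is consistent with the commit; several strategies can produce the same tree, so this narrows down the candidates rather than proving which one was actually used.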
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,320
@*
 * Copyright 2022 HM Revenue & Customs
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 *@

@import config.ApplicationConfig
@import uk.gov.hmrc.govukfrontend.views.html.components._

@this(gmpMain: gmp_main,
      govukErrorSummary: GovukErrorSummary,
      request_another_button: includes.request_another_button,
      member_details_result: includes.member_details_result,
      govukTable: GovukTable)

@(calculationResponse: CalculationResponse, revalRateSubheader: Option[String], survivorSubheader: Option[String])(implicit request: Request[_], messages: Messages, applicationConfig: ApplicationConfig)

@gmpMain(title = {
    if(calculationResponse.globalErrorCode > 0 || (calculationResponse.calculationPeriods.length == 1 && calculationResponse.calculationPeriods.head.errorCode > 0)) {
        Messages("gmp.results.error")
    } else {
        Messages("gmp.results.h1") + " - " + Messages("service.title") + " - " + Messages("gov.uk")
    }
}, supportLinkEnabled = false, dashboardLinkEnabled = !calculationResponse.hasErrors) {

    @{
        if(calculationResponse.globalErrorCode > 0) {
            includes.global_error(calculationResponse.globalErrorCode)
        } else if (calculationResponse.calculationPeriods.length == 1 && calculationResponse.calculationPeriods.head.errorCode > 0) {
            includes.global_error(calculationResponse.calculationPeriods.head.errorCode)
        } else {
            if(calculationResponse.hasErrors) {
                includes.multi_error(calculationResponse, govukTable)
            } else {
                includes.multi_results(calculationResponse, revalRateSubheader, survivorSubheader, govukTable)
            }
        }
    }

    @member_details_result(calculationResponse)

    @if(!calculationResponse.hasErrors) {
        <p class="govuk-body">@Messages("gmp.queryhandling.resultsmessage")</p>
    }

    @if(!calculationResponse.hasErrors) {
        @includes.print_page()
    }

    @request_another_button()
}
{ "redpajama_set_name": "RedPajamaGithub" }
5,873
Broughton is an ancient feudal barony, today within the City of Edinburgh, Scotland. Its boundaries are defined, approximately, by Leith Walk to the south-east, Broughton Street to the south-west, Broughton Road to the north-west and McDonald Road to the north-east. Around Broughton lie the districts of Greenside, New Town, Canonmills, Pilrig and Calton Hill. Broughton's main street, Broughton Street, is at the centre of "Edinburgh's pink triangle", an area of the city with a high number of gay bars and clubs, among them The Street at the corner of Broughton Street and Picardy Place, the La Sala tapas bar at number 60, and the Blue Moon café, just off Broughton Street. The Gayfield Square police station, which features in the Inspector Rebus stories by the Edinburgh writer Ian Rankin, is located in Gayfield Square, to the south-east of Broughton. Other projects External links Localities of Edinburgh
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,053
Q: How to display an element with multiple value options? I want to show the hidden div when the user chooses any option from the <optgroup>. My code works, but only for one option; I want to make it work for all options in a specific <optgroup>. I don't want to use an array, because I would have to edit the JS array every time I add an option to this specific optgroup.

function show(that) {
  if (that.value == "show1") {
    document.getElementById("hidden").style.display = "block";
  } else {
    document.getElementById("hidden").style.display = "none";
  }
}

<select onchange="show(this);">
  <option value="" selected>choose</option>
  <optgroup label="show div">
    <option value="show1">show 1</option>
    <option value="show2">show 2</option>
    <option value="show3">show 3</option>
  </optgroup>
  <!-- I don't want it to work with this other optgroup -->
  <optgroup label="another">
    <option value="useless">useless</option>
    <option value="useless2">useless</option>
    <option value="useless4">useless</option>
  </optgroup>
</select>
<div id="hidden" style="display: none;">Hidden Div</div>

A: In your show() method, change if(that.value == "show1") to if(that.value !== ""). That way, if nothing is selected, the div will not be shown; otherwise, it will.

A: Instead of the if statement, you could have a switch that goes through all the different values. The default case then covers the user selecting nothing.

function show(that) {
  var value = that.value;
  switch(value) {
    case "show1":
      document.getElementById("hidden").style.display = "block";
      break;
    case "show2":
      document.getElementById("hidden").style.display = "block";
      break;
    case "show3":
      document.getElementById("hidden").style.display = "block";
      break;
    default:
      document.getElementById("hidden").style.display = "none";
  }
}

A: Find the optgroup and check if it is the one you want:

function show(that) {
  let selected_option = that.options[that.selectedIndex];
  let optgroup = selected_option.parentNode;
  if (optgroup.label == 'show div') {
    document.getElementById("hidden").style.display = "block";
  } else {
    document.getElementById("hidden").style.display = "none";
  }
}

A: You could use a regular expression in your first if statement, so that it matches any element with value = "show.*", where .* matches any character 0 or more times. The test() method will then return true for all of your elements with value="show.*".

function show(that) {
  if (/show.*/gi.test(that.value)) {
    document.getElementById("hidden").style.display = "block";
  } else {
    document.getElementById("hidden").style.display = "none";
  }
}
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,073
{"url":"http:\/\/mathhelpforum.com\/algebra\/187128-need-help-prove-inequalities.html","text":"# Math Help - need help to prove inequalities\n\n1. ## need help to prove inequalities\n\nHello..\n\nI need help to prove this inequalities\n\n2. ## Re: need help to prove inequalities\n\nI don't know how good you are at inequalities but it basically goes like this:\n\n$\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{d} + \\frac{d}{a} =$ $\\frac{a^2cd}{abcd} + \\frac{b^2ad}{abcd} + \\frac{c^2ab}{abcd} + \\frac{d^2bc}{abcd}$\n\n$= \\frac{a^2cd + b^2ad + c^2ab + d^2bc}{abcd} \\leq \\frac{a^2 + b^2 + c^2 + d^2}{abcd}$ $\\leq \\frac{a + b + c + d}{abcd} \\leq \\frac{4}{abcd}$\n\nSo, $\\frac{a}{b} + \\frac{b}{c} + \\frac{c}{d} + \\frac{d}{a} \\leq \\frac{4}{abcd}$\n\n3. ## Re: need help to prove inequalities\n\nOriginally Posted by Aryth\n$= \\frac{a^2cd + b^2ad + c^2ab + d^2bc}{abcd} \\leq \\frac{a^2 + b^2 + c^2 + d^2}{abcd}$ $\\leq \\frac{a + b + c + d}{abcd}$\nCare to explain the first inequality?\n\nThe second inequality is false. Take a = 3, b = 0.3, c = 0.3, d = 0.4 for a counterexample.","date":"2014-07-10 06:54:17","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 7, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5043749213218689, \"perplexity\": 1202.5208918361766}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-23\/segments\/1404776404630.61\/warc\/CC-MAIN-20140707234004-00040-ip-10-180-212-248.ec2.internal.warc.gz\"}"}
Shorter dose gap in COVID vaccines more effective: Study New Delhi: A new study published in the Lancet journal suggests that the Pfizer vaccine for COVID is considerably less effective against the Delta variant of COVID, which is prevalent in India, than against the original Coronavirus strain. The study further states that the antibody response to variants is even lower in persons who have received only one dose, and the data suggest that a longer gap between doses could dramatically reduce antibodies against the Delta variant. After a single dose of Pfizer, 79 per cent of people had a quantifiable neutralising antibody response against the original strain, but this fell to 50 per cent for the B.1.1.7 or Alpha variant, 32 per cent for Delta and 25 per cent for the B.1.351 or Beta variant first discovered in South Africa. The researchers note that it is most important to ensure that vaccine protection remains high enough to keep as many people as possible out of hospital, a news report published by NDTV said. UCLH Infectious Diseases consultant and Senior Clinical Research Fellow for the Legacy Study, Emma Wall, said as per the news item: "Our results suggest that the best way to do this is to quickly deliver second doses and provide boosters to those whose immunity may not be high enough against these new variants." The recommendation contradicts India's recent decision to extend the interval between two Covishield doses from six to eight weeks to 12 to 16 weeks, citing studies that showed the vaccine's effectiveness improved with a longer gap. Quoting critics, the report said that the GoI is extending the gap to relieve pressure on its immunisation campaign, which has been hampered by a lack of doses and a restricted supply of vaccines. Explaining the position of the GoI, the report quoted government sources: "Available real-life evidence, particularly from the UK, [was] that effectiveness was significantly higher at 81.3 per cent (60.3-91.2) after two doses given at an interval of 12 weeks or longer, compared to 55.1 per cent (33-69.9) when given less than six weeks apart." That study was, however, not based on the Delta variant. The latest Lancet study, on the other hand, backs up the UK's current plan to close the vaccine dose gap, finding that after just one dose of the Pfizer-BioNTech vaccine, people were less likely to develop antibody levels against the Delta variant than against the previously dominant Alpha variant, which was first discovered in UK's Kent. Furthermore, emphasising the excess risk posed by the Delta variant, the report stated: "UK's Public Health England (PHE) says experts believe the Delta variant has overtaken the Alpha strain in the country and early evidence suggests there may be an 'increased risk of hospitalisation' with the Delta strain compared to the Alpha." According to the Lancet, the Pfizer-BioNTech vaccine generates five times fewer antibodies against the Delta variant than against the original Covid strain.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,523
echo "Combining JS..." echo ' // THIS IS AN AUTOMATICALLY COMBINED FILE. PLEASE EDIT dev/*.js!! ' > ../ponymail.js # Warning: ls/sort order depends on the locale; this can affect the order # of non-alphanumerics such as '.' and '_'. So force the use of 'C' locale for f in `LC_ALL=C ls *.js`; do printf "\n\n/******************************************\n Fetched from dev/${f}\n******************************************/\n\n" >> ../ponymail.js sed -e '/^\/\*/,/\*\//d' ${f} >> ../ponymail.js done echo "Done!"
{ "redpajama_set_name": "RedPajamaGithub" }
3,105
Корпусный комиссар (corps commissar) was a military rank of the senior military-political staff of the Workers' and Peasants' Red Army (RKKA). The preceding rank was divisional commissar (дивизионный комиссар); the next rank was army commissar 2nd rank (армейский комиссар 2-го ранга).

History

The rank was introduced by decrees of the Council of People's Commissars for the land and air forces of the RKKA and for the naval forces of the RKKA, and was announced by an order of the People's Commissar of Defence. The rank was abolished in October 1942; corps commissars were given combined-arms (general-officer) ranks according to the posts they held.

Equivalent ranks

Insignia

On collar tabs in the colour of the corresponding branch of service, three diamonds, with black piping. Until 1940, branch-of-service emblems were absent from the collar tabs of the military-political staff; they were then introduced by an order of the NKO. There were no sleeve chevrons (angle stripes). Membership in the military-political staff was indicated by a sleeve badge in the form of a scarlet star with a hammer and sickle.

Conferral of the rank

22.11.1935 — Авиновицкий, Яков Лазаревич (1897—1938)
22.11.1935 — Артузов, Артур Христианович (1891—1937)
20.11.1935 — Берёзкин, Марк Фёдорович (1901—1951)
20.11.1935 — Берзин, Ян Карлович (1889—1938)
20.11.1935 — Неронов, Иван Григорьевич (1897—1937)
20.11.1935 — Петухов, Иван Павлович (1895—1942)
20.11.1935 — Прокофьев, Архип Прокофьевич (1895—1939)
20.11.1935 — Троянкер, Бенедикт Устинович (1900—1938)
20.11.1935 — Хрулев, Андрей Васильевич (1892—1962)
20.11.1935 — Шестаков, Виктор Николаевич (1893—1938)
20.11.1935 — Ярцев, Алексей Петрович (1895—1938)
20.11.1935 — Ястребов, Григорий Герасимович (1884—1957)
23.11.1935 — Карин, Фёдор Яковлевич (1896—1937)
23.11.1935 — Штейнбрюк, Отто Оттович (1892—1937)
26.11.1935 — Разгон, Израиль Борисович (1892—1937)
28.11.1935 — Апсе, Мартын Янович (1893—1942)
28.11.1935 — Битте, Август Мартынович (1893—1939)
28.11.1935 — Говорухин, Трофим Кириллович (1896—1966)
28.11.1935 — Гринберг, Исаак Моисеевич (1899—1938)
28.11.1935 — Грубер, Лазарь Яковлевич (1897—1937)
28.11.1935 — Ильин, Николай Ильич (1895—1938)
28.11.1935 — Немерзелли, Иосиф Фадеевич (1895—1938)
28.11.1935 — Орлов, Наум Иосифович (1894—1937)
28.11.1935 — Родионов, Фёдор Ефимович (1897—1937)
28.11.1935 — Савко, Николай Аркадьевич (1898—1937)
28.11.1935 — Сидоров, Константин Григорьевич (1884—1939)
28.11.1935 — Хорош, Мордух Лейбович (1899—1937)
28.11.1935 — Щаденко, Ефим Афанасьевич (1885—1951)
29.11.1935 — Рошаль, Лев Борисович (1896—1940)
16.12.1935 — Мрочковский, Стефан Иосифович (1885—1967)
17.01.1936 — Захаров (Мейер), Лев Николаевич (1899—1937)
15.02.1937 — Зиновьев, Григорий Алексеевич (1896—1938)
15.02.1937 — Скворцов, Семён Антипович (1894—1938)
31.07.1937 — Рыбин, Фёдор Викторович (1896—1939)
31.12.1937 — Голиков, Филипп Иванович (1900—1980)
17.02.1938 — Булышкин, Александр Александрович (1893—1961)
17.02.1938 — Волков, Яков Васильевич (1898—1963)
17.02.1938 — Лаухин, Пётр Иванович (1899—1967)
17.02.1938 — Шапошников, Михаил Романович (1899—1938)
19.11.1938 — Игнатьев, Сергей Парфёнович (1902—1984)
09.02.1939 — Бирюков, Николай Иванович (1901—1974)
09.02.1939 — Запорожец, Александр Иванович (1899—1959)
09.02.1939 — Зимин, Константин Николаевич (1901—1944)
09.02.1939 — Николаев, Андрей Семёнович (1902—1942)
09.02.1939 — Рогов, Иван Васильевич (1899—1949)
09.02.1939 — Семеновский, Фёдор Алексеевич (1901—1941)
27.04.1939 — Мельников, Алексей Николаевич (1900—1967)
05.05.1939 — Сусайков, Иван Захарович (1903—1962)
04.07.1939 — Борисов, Владимир Николаевич (1901—?)
02.09.1939 — Вашугин, Николай Николаевич (1900—1941)
02.09.1939 — Кузнецов, Фёдор Федотович (1904—1979)
14.11.1939 — Николаев, Тимофей Леонтьевич (1899—1960)
26.04.1940 — Леонов, Дмитрий Сергеевич (1899—1981)
26.04.1940 — Смирнов, Павел Кузьмич (1890—1963)
19.06.1940 — Гапанович, Дмитрий Афанасьевич (1896—1952)
19.06.1940 — Доронин, Яков Алексеевич (1900—1989)
19.06.1940 — Желтов, Алексей Сергеевич (1904—1991)
19.06.1940 — Кожевников, Сергей Константинович (1904—1956)
19.06.1940 — Колобяков, Александр Филаретович (1896—1958)
19.06.1940 — Фоминых, Александр Яковлевич (1901—1976)
08.08.1940 — Захаров, Семён Егорович (1906—1986)
22.10.1940 — Богаткин, Владимир Николаевич (1903—1956)
22.10.1940 — Шимановский, Григорий Соломонович (1891—1965)
27.12.1940 — Диброва, Петр Акимович (1901—1971)
27.12.1940 — Степанов, Павел Степанович (1901—1977)
1941 — Клементьев, Николай Николаевич (1897—1954)
09.12.1941 — Яковлев, Фома Павлович (1900—1971)
05.02.1942 — Смирнов, Николай Константинович (1902—1973)

See also Commissar (in a military unit); Military ranks and insignia of the RKKA, 1935—1940; Military ranks and insignia of the RKKA, 1940—1943

Notes

External links List of conferrals of senior officer ranks of the Army, Navy and NKVD, 1935—1942. Military ranks of the Armed Forces of the USSR
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,851
{"url":"http:\/\/gatkforums.broadinstitute.org\/discussion\/3711\/gaussian-mixture-model-plot-interpret","text":"The current GATK version is 3.2-2\n\n#### Howdy, Stranger!\n\nIt looks like you're new here. If you want to get involved, click one of these buttons!\n\nBug Bulletin: The GenomeLocPArser error in SplitNCigarReads has been fixed; if you encounter it, use the latest nightly build.\n\n# Gaussian Mixture model plot interpret\n\nBostonPosts: 37Member\n\nI got this plot after VariantRecalibration for 42 samples in a VCF file. As it can bee seen in the plot there is no \"known\" variants detected. What is the problem? Which walker do you recommend to solve this issue? thanks\n\nTagged:\n\nGeraldine Van der Auwera, PhD\n\n\u2022 BostonPosts: 37Member\n\nI used the following commands:\n\nbsub -q short -W 12:0 -R \"rusage[mem=32000]\" -N -o \/hms\/scratch1\/mahyar\/error.log java -jar GenomeAnalysisTK-2.8-1-g932cd3a\/GenomeAnalysisTK.jar \\ -T VariantRecalibrator \\ -R \/hms\/scratch1\/mahyar\/ucsc.hg19.fasta \\ --input \/hms\/scratch1\/mahyar\/Overal-42post-RGSM-allsites.vcf \\ --resource:dbsnp,VCF,known=false,training=true,truth=true,prior=6.0 \/groups\/body\/JDM_RNA_Seq-2012\/GATK\/bundle-2.3\/ucsc.hg19\/dbsnp_137.hg19.vcf \\ -an QD -an HaplotypeScore -an MQRankSum -an ReadPosRankSum -an FS -an MQ \\ --mode SNP \\ -rf BadCigar \\ --recal_file \/hms\/scratch1\/mahyar\/All42_post_VQRS.recal \\ --tranches_file \/hms\/scratch1\/mahyar\/All42_post_VQRS.tranches \\ --rscript_file \/hms\/scratch1\/mahyar\/All42_post_VQRS_plots.R\n\nThe reason you have no known variants in the plots is because you're not providing any set of knowns (you have known=false for the one resource you provide).\n\nThe bigger problem here is that you're not following our Best Practices for variant recalibration. This command will give you very poor results. Please read the documentation on the Best Practices to learn how you should do this.\n\nGeraldine Van der Auwera, PhD\n\n\u2022 BostonPosts: 37Member\n\nDo I need use all 4 resources (e.g. hapmap, omni, 1000G, dbsnp) for the VariantRecalibrator or only one resource is enough? 
I used \"dbsnp\" resource, because I used it for calling variants via UnifiedGenotyper!\n\n\u2022 BostonPosts: 37Member\n\nI run again VariantRecalibrator only for one sample and got the following error: What is the issue?","date":"2014-10-22 00:01:06","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.19025476276874542, \"perplexity\": 10309.853916520373}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-42\/segments\/1413507445159.36\/warc\/CC-MAIN-20141017005725-00237-ip-10-16-133-185.ec2.internal.warc.gz\"}"}
Arsenius (Greek: Arsenios) was an Orthodox cleric of the 17th century and Metropolitan of Kastoria under the Archbishopric of Ohrid.

Biography
Arsenius headed the see of Kastoria; different sources give different starting and ending years for his metropolitanate, the earliest year being 1641 and the latest 1657. Germanos Christidis notes the year 1654. A. P. Péchayre, on the basis of the codex of the Metropolis of Kastoria (EBE 2752), places his episcopate in the period 1643–1653 and describes a decision of November 1643 in which Arsenius is mentioned together with Chariton of Ohrid, as well as a decision that also mentions Ignatius of Pelagonia, Jeremiah of Sisanion, and Gregory of Moglena. According to Pantelis Tsamisis, Arsenius governed from 1650 to 1654. Gustave Bardy gives the years 1643–1654. Tasos Gritsopoulos notes that Arsenius is mentioned in 1643, but with certainty only in 1654. Vasilios Atesis confines himself to the mention in 1654. Giorgio Fedalto, relying on Péchayre, notes the period from November 1643 to 1654. From the study of the codex of the Metropolis of Kastoria for the years 1563–1663 (EBE 2752) it becomes clear that the period of Arsenius's administration extends from 1641 to 1654: on folios 21r–24r there is a decision bearing Arsenius's signature from the years 1641–1643, and the same metropolitan signed, on folios 28v–52v, decisions dating from 10 August 1648 to October 1654.

Metropolitan Arsenius of Kastoria is mentioned in the painter's inscription of the church "Рождество Богородично" (Nativity of the Theotokos) in Зан, which probably dates from 1653. He is mentioned in the ktetors' inscription in the Kastorian church "Свети Николай Кирицки" (St Nicholas Kyritzis) of the year ζρξβ' (= 1654). His name is also recorded in a painter's inscription in the church "Свети Николай Каравидски", dated ζρξε' (= 1656/7). According to Melachrini Paisidou, Arsenius is the Kastorian metropolitan not mentioned by name in the inscription in the church "Света Богородица Безсребреническа" (Holy Theotokos of the Anargyroi), dated 1657, for which reason she places the end of his episcopate in that year. Metropolitan Arsenius is also mentioned in the painter's inscription in the church "Света Богородица Музевишка", in which, however, the year is no longer legible.

Notes

Metropolitans of Kastoria
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,226
Dramatic open 2-story layout with new home 2-10 Limited Warranty. You'll love Vista Highlands' majestic views of the entire front range and small town feeling. Minutes off I-25 with easy access to Boulder and downtown Denver. Beautiful 2-story with 3 car garage and partially finished full basement. Large open kitchen, gourmet stainless appliances, open to great room with fireplace, perfect for entertaining. Designer finishes throughout. Beautiful Owner's retreat. Extensive hardwood flooring, full front and rear landscaping, sprinkler and fence.
{ "redpajama_set_name": "RedPajamaC4" }
1,683
So that we can continue to give all our patients a high level of care, payment is expected in full when your pet is discharged or when services are rendered. Sheets Pet Clinic accepts cash, checks, VISA, Mastercard, Discover, and debit cards. Since our prices are already reduced, we have no financial assistance plan. Please check the Resources page on this site for some possibilities for financial assistance with veterinary bills.
{ "redpajama_set_name": "RedPajamaC4" }
1,217
using System;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace NetworkTables.NetworkTables2.Util
{
    // A minimal, non-generic list backed by a growable object array.
    // ResizeableArrayObject provides the backing "array" field and EnsureSize.
    public class List : ResizeableArrayObject
    {
        protected int size = 0;

        public List()
        {
        }

        public List(int initialSize)
            : base(initialSize)
        {
        }

        public bool IsEmpty()
        {
            return size == 0;
        }

        public int Size()
        {
            return size;
        }

        // Appends an element, growing the backing array if necessary.
        public void Add(object o)
        {
            EnsureSize(size + 1);
            array[size++] = o;
        }

        // Removes the element at the given index, shifting later elements left.
        public void Remove(int index)
        {
            if (index < 0 || index >= size)
                throw new IndexOutOfRangeException();
            if (index < size - 1)
                Array.Copy(array, index + 1, array, index, size - index - 1);
            size--;
        }

        public void Clear()
        {
            size = 0;
        }

        public object Get(int index)
        {
            if (index < 0 || index >= size)
                throw new IndexOutOfRangeException();
            return array[index];
        }

        // Removes the first element equal to obj (null-safe); returns true if one was found.
        public bool Remove(object obj)
        {
            for (int i = 0; i < size; ++i)
            {
                object value = array[i];
                if (obj == null ? value == null : obj.Equals(value))
                {
                    Remove(i);
                    return true;
                }
            }
            return false;
        }

        public bool Contains(object obj)
        {
            for (int i = 0; i < size; ++i)
            {
                object value = array[i];
                if (obj == null ? value == null : obj.Equals(value))
                {
                    return true;
                }
            }
            return false;
        }

        public void Set(int index, object obj)
        {
            if (index < 0 || index >= size)
                throw new IndexOutOfRangeException();
            array[index] = obj;
        }
    }
}
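A minimal usage sketch for this class (hypothetical, not part of the original file; it assumes the two constructors above are reachable and that the code runs inside a method with `using System;` in scope):

// Hypothetical usage example, not part of the original source file.
var list = new NetworkTables.NetworkTables2.Util.List(4);
list.Add("alpha");
list.Add("beta");
Console.WriteLine(list.Size());            // prints 2
Console.WriteLine(list.Contains("alpha")); // prints True
list.Remove("alpha");                      // removes by value, returns true
Console.WriteLine(list.Get(0));            // prints beta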
{ "redpajama_set_name": "RedPajamaGithub" }
8,336
\section{Introduction}
\label{sec:intro}
Optical coherence tomography angiography (OCTA) is a novel non-invasive imaging method that is widely used in clinical diagnosis \cite{lin2021bsda,peng2021fargo}. It employs motion contrast imaging technology to acquire high-resolution volumetric blood flow data and produce angiographic images \cite{de2015review}.

\begin{figure}[htbp] \centering \vspace{-0.2cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{-1cm} \centerline{\includegraphics[width=8.0cm]{intro.pdf}} \caption{Representative OCTA examples with different quality grades. Red rectangles highlight low-quality regions.}\medskip \vspace{-0.3cm} \label{intro} \end{figure}

Because of its ability to clearly reveal important anatomical structures such as the retinal microvasculature and the foveal avascular zone, OCTA has great potential for accurately diagnosing a variety of fundus-related diseases (e.g., age-related macular degeneration and diabetic retinopathy) \cite{li2020ipn, peng2022unsupervised}. OCTA image quality assessment (OIQA) is an important prerequisite in various clinical applications, since low-quality images may affect the diagnostic accuracy of both a physician and an intelligent algorithm \cite{cheng2021secret, cheng2021prior, lin2021automated}. Low-quality issues in OCTA, such as inadequate illumination, noticeable blur, and low contrast, may lead to inadequate or even wrong diagnostic decisions. Therefore, automated OIQA is urgently needed. As a common practice \cite{wang2021deep}, an OCTA image's quality is graded at three levels: outstanding, gradable, and ungradable (Figure \ref{intro}). These three types of images are typically dealt with differently: outstanding images can be directly analyzed or utilized for specific purposes; gradable images can go through image quality enhancement such as denoising, light equalization, and contrast enhancement; ungradable images are considered useless and are often discarded.

OIQA solutions fall into two main categories: traditional methods and deep learning methods. Traditional methods mainly include distribution-based methods \cite{mittal2012making} and structure-based methods \cite{kohler2013automatic, niemeijer2006image}. The former construct a set of quality-related features from a highly regular OCTA scene statistic model and then fit them with a multivariate Gaussian (MVG) model. The OCTA image quality is then quantified as the distance between two MVG models: one fitted with the features extracted from the test image of interest, and the other fitted with the quality-aware features extracted from the corpus of OCTA images. However, such methods typically cannot accurately measure the distance between distorted and reference images. The latter employ specifically segmented structures, such as vessels, to calculate a global score for noise and blur and to determine the quality level of the image of interest. However, these methods rely heavily on the accuracy of the structure segmentation, and the segmentation task itself may consume substantial computing resources. Recently, deep learning methods have been widely used for medical image quality assessment \cite{yu2017image, zago2018retinal, fu2019evaluation}. They make use of image features extracted in an unsupervised or supervised manner, followed by an image quality classifier or regressor.
A representative deep learning method combines unsupervised features from saliency detection with supervised features from convolutional neural networks (CNNs), and then feeds them to an SVM classifier to identify the quality level.

Image quality annotations are not always available, especially for emerging modalities. Meanwhile, there may exist many different types of anomalies, making it difficult to collect enough low-quality images that cover a wide range. To address these issues, we propose an unsupervised anomaly-aware framework with test-time clustering for OIQA, namely UNO-QA. UNO-QA can classify an OCTA image into three grades while making use of only a set of outstanding samples during training. Specifically, a neural network with an encoder-decoder structure is trained solely with a set of outstanding samples and then used to discriminate outstanding samples from non-outstanding ones according to their quality scores. Multi-scale feature pyramid pooling is applied to the encoder to extract multi-scale features \cite{gudovskiy2022cflow}. The multi-scale features are concatenated and treated as anomaly-aware representations. Dimension-reduction methods are then applied to these representations, followed by an unsupervised clustering module, so as to subdivide the non-outstanding samples into gradable and ungradable with no supervision involved at all.

The contribution of this work is three-fold:
\begin{itemize}
\item To the best of our knowledge, UNO-QA is the first unsupervised and hierarchical quality assessment method for ophthalmic images.
\item UNO-QA extracts and concatenates multi-scale features in a novel way and combines them with feature dimension-reduction and clustering methods, enabling unsupervised quality assessment. Moreover, we perform substantial experiments to identify the optimal combination.
\item Extensive experiments are conducted on the publicly accessible sOCTA-3×3-10k dataset. UNO-QA outperforms the other compared methods, demonstrating the effectiveness of our proposed OIQA framework.
\end{itemize}

\section{Method}
\label{sec:method}

\begin{figure*}[htbp] \centering \vspace{-1.7cm} \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{-1cm} \centerline{\includegraphics[height=7.5cm]{method.pdf}} \vspace{-0.3cm} \caption{Overview of our proposed framework. In the training stage, we train an encoder with multi-scale pyramid pooling and multiple decoders, one per scale, using only outstanding samples. During the inference stage, all testing samples are fed into the low-quality representation module, which separates outstanding samples from non-outstanding samples. For the non-outstanding samples, we extract and concatenate the output features of the decoders and then apply feature dimension-reduction and clustering to subdivide them into gradable and ungradable samples.}\medskip \vspace{-0.5cm} \label{model} \end{figure*}

\subsection{Problem Setting and Notations}
\label{subsec:pre}
For outstanding, gradable, and ungradable samples, we denote them respectively as $\mathcal{I_O}$, $\mathcal{I_G}$, and $\mathcal{I_U}$. Moreover, we define $\mathcal{I_N}$ = \{$\mathcal{I_G}$, $\mathcal{I_U}$\} to represent the set of non-outstanding samples.
In the setting of this paper, only a set of high-quality samples ($\mathcal{I_O}^{T}$) is available during the training phase, and we aim to automate the tri-classification of another set of mixed-quality images (\{$\mathcal{I_O}$, $\mathcal{I_G}$, $\mathcal{I_U}$\}) at test time by making use of an anomaly-aware quality representation framework.

\vspace{-0.1cm}
\subsection{Overall Framework}
\label{subsec:ofm}
The overall framework of our proposed UNO-QA is shown in Figure \ref{model}. It consists of three components: (1) a low-quality representation module to distinguish between $\mathcal{I_O}$ and $\mathcal{I_N}$ and to extract the quality-aware representations of $\mathcal{I_N}$; (2) a dimension-reduction module to lower the dimension of the quality-aware representations of $\mathcal{I_N}$; (3) a clustering module to subdivide $\mathcal{I_N}$ into $\mathcal{I_G}$ and $\mathcal{I_U}$.

\vspace{-0.2cm}
\subsection{Low-quality Representation Module}
\label{subsec:adn}
\subsubsection{Anomaly-aware Representation}
\label{subsubsec:ad}
Inspired by \cite{gudovskiy2022cflow}, we design a feature-embedding-based learning framework for the discrimination of $\mathcal{I_O}$ and $\mathcal{I_N}$. As shown in Figure \ref{lrm}, our low-quality representation module has an encoder-decoder structure. The encoder $E$ is pretrained on ImageNet. Since anomalies vary in size and shape, we adopt a multi-scale feature pyramid pooling strategy to provide various receptive fields. Initially, we feed an image of interest into $E$ to obtain feature vectors $\boldsymbol{z}$. Then we employ an MVG $p_{\text{Z}}(\boldsymbol{z})$ as the density function. To represent location features, a conditional vector $\boldsymbol{c}$, which contains spatial location information, is generated using a 2D form of the conventional positional encoding (PE). The decoder $g(\boldsymbol{\theta})$ aims to approximate $p_{\text{Z}}(\boldsymbol{z})$ with an estimated parameterized density $\hat{p}_{\text{Z}}(\boldsymbol{z,c,\theta})$, where $\theta$ is initialized with values sampled from the Gaussian distribution fitted on $\mathcal{I_O}^{T}$. We use the Kullback-Leibler divergence ($D_{KL}$) as the loss function to train the model, namely
\begin{equation}
\begin{aligned}
\label{eq1}
\mathcal{L}(\boldsymbol{\theta}) &= D_{KL} [p_{\text{Z}}(\boldsymbol{z}) || \hat{p}_{\text{Z}}(\boldsymbol{z,c,\theta})].
\end{aligned}
\end{equation}
Since the encoder serves as a multi-scale feature extractor, we need to train $K$ independent decoders $g_{k}(\boldsymbol{\theta}_{k})$. We denote the output of each $g_{k}(\boldsymbol{\theta}_{k})$ as $p_k$, which may be regarded as a general spatial semantic feature at a particular scale, because it jointly incorporates the information of $\boldsymbol{z}$ and $\boldsymbol{c}$. In the inference phase, since low-quality samples are not observed during the training of the low-quality representation module, the low-quality scores of $\mathcal{I_N}$ should be higher than those of $\mathcal{I_O}$. In this way, we can separate $\mathcal{I_O}$ and $\mathcal{I_N}$ according to their scores.

\vspace{-0.1cm}
\subsubsection{Feature Extraction}
\label{subsubsec:fe}
As mentioned in Section \ref{subsubsec:ad}, multi-scale pyramid pooling is employed in the low-quality representation module, which enables the encoder to capture both global and local semantic information with multi-size receptive fields, thus enhancing the features of low-quality regions.
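% Editorial elaboration, not in the original manuscript: Eq. (1) of
% Section \ref{subsubsec:ad} written out as an expectation, using only the
% definitions given there. This is the standard form of the KL divergence;
% it is minimized (reaching zero) exactly when the decoder density matches
% the encoder feature density.
\begin{equation*}
D_{KL}\big[p_{\text{Z}}(\boldsymbol{z}) \,\|\, \hat{p}_{\text{Z}}(\boldsymbol{z,c,\theta})\big]
= \mathbb{E}_{\boldsymbol{z}\sim p_{\text{Z}}}\big[\log p_{\text{Z}}(\boldsymbol{z}) - \log \hat{p}_{\text{Z}}(\boldsymbol{z,c,\theta})\big]
\end{equation*}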
For each non-outstanding image, we flatten all the multi-scale features $p_1$, $p_2$, ..., $p_K$, and then concatenate them. We denote the concatenated features of each $\mathcal{I_{N}}^{i}$ as $F_H^i$, where $i$ indexes each non-outstanding image.

\subsection{Feature Dimension-Reduction}
\label{subsec:fda}
To further divide $\mathcal{I_{N}}$ into $\mathcal{I_{G}}$ and $\mathcal{I_{U}}$, we first reduce the dimension of the representations $F_H$. Since the representations are very complex, directly employing these high-dimensional features $F_H$ for clustering may yield poor performance (Section \ref{ssec:result}). Therefore, we apply feature dimension-reduction (FDR) to remove noise and redundant features. We denote the compressed representations as $F_L$. Two FDR methods are considered in this work: Non-negative Matrix Factorization (NMF) and Principal Component Analysis (PCA).

\vspace{-0.2cm}
\subsection{Clustering Module}
\label{subsec:cluster}
Since we only use outstanding samples to train the low-quality representation module, the latent features of images of different quality grades should be dissimilar. Therefore, the representations of $\mathcal{I_G}$ and $\mathcal{I_U}$ should differ from each other. We thus employ clustering methods such as K-means, hierarchical clustering, and Gaussian Mixture Models (GMM) to subdivide $\mathcal{I_N}$ into $\mathcal{I_G}$ and $\mathcal{I_U}$.

\begin{figure}[htbp] \centering \setlength{\abovecaptionskip}{-0.2cm} \setlength{\belowcaptionskip}{-1cm} \centerline{\includegraphics[height=4cm]{lrm.pdf}} \vspace{-0.3cm} \caption{Pipeline of the low-quality representation module.}\medskip \vspace{-0.8cm} \label{lrm} \end{figure}

\begin{table*}[ht] \setlength{\tabcolsep}{1mm} \newcolumntype{"}{@{\hskip\tabcolsep\vrule width 1.5pt\hskip\tabcolsep}} \centering \caption{Comparison results of different pipelines on the two test datasets. LRM stands for low-quality representation module.
The best ones are \textbf{bolded} while the second best are \underline{underlined}.} \vspace{0.5cm} \label{tab:table1} \begin{tabular}{ccccccccccccccccccc} \toprule \multicolumn{2}{c}{\multirow{3}{*}{LRM}} & \multicolumn{8}{c}{Test} & \multicolumn{1}{c}{} & \multicolumn{8}{c}{External Test} \\ \cline{3-10} \cline{12-19} & & \multicolumn{2}{c}{K-means} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Hierarchy} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{GMM} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{K-means} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{Hierarchy} & \multicolumn{1}{c}{} & \multicolumn{2}{c}{GMM} \\ \cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13} \cline{15-16} \cline{18-19} & & \multicolumn{1}{c}{Kappa} & \multicolumn{1}{c}{Acc} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Kappa} & \multicolumn{1}{c}{Acc} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Kappa} & Acc & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Kappa} & \multicolumn{1}{c}{Acc} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Kappa} & \multicolumn{1}{c}{Acc} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{Kappa} & Acc \\ \hline \multicolumn{1}{c}{\multirow{2}{*}{PaDim}} & \multicolumn{1}{c}{+PCA} & \multicolumn{1}{c}{13.74} & \multicolumn{1}{c}{49.27} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{13.69} & \multicolumn{1}{c}{49.31} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{14.11} & \multicolumn{1}{c}{49.17} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{19.50} & \multicolumn{1}{c}{46.33} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{18.50} & \multicolumn{1}{c}{45.67} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{18.00} & \multicolumn{1}{c}{45.33} \\ & \multicolumn{1}{c}{+NMF} & \multicolumn{1}{c}{14.79} & \multicolumn{1}{c}{50.22} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{14.04} & \multicolumn{1}{c}{50.15} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{15.08} & \multicolumn{1}{c}{49.98} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{20.50} & \multicolumn{1}{c}{47.00} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{18.50} & \multicolumn{1}{c}{45.67} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{19.00} & \multicolumn{1}{c}{46.00} \\ \hline \multicolumn{1}{c}{\multirow{2}{*}{PatchCore}} & \multicolumn{1}{c}{+PCA} & \multicolumn{1}{c}{33.94} & \multicolumn{1}{c}{60.34} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{17.28} & \multicolumn{1}{c}{49.24} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{23.26} & \multicolumn{1}{c}{53.39} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{46.00} & \multicolumn{1}{c}{64.00} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{45.00} & \multicolumn{1}{c}{63.33} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{35.00} & \multicolumn{1}{c}{56.67} \\ & \multicolumn{1}{c}{+NMF} & \multicolumn{1}{c}{11.33} & \multicolumn{1}{c}{46.88} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{15.68} & \multicolumn{1}{c}{49.17} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{13.93} & \multicolumn{1}{c}{47.96} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{24.00} & \multicolumn{1}{c}{49.33} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{26.50} & \multicolumn{1}{c}{51.00} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{32.00} & \multicolumn{1}{c}{54.67} \\ \hline \multicolumn{1}{c}{\multirow{2}{*}{Fastflow}} & \multicolumn{1}{c}{+PCA} & \multicolumn{1}{c}{42.21} & \multicolumn{1}{c}{66.98} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{43.78} & \multicolumn{1}{c}{67.45} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{43.41} & \multicolumn{1}{c}{67.59} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{44.00} & \multicolumn{1}{c}{62.67} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{31.50} & 
\multicolumn{1}{c}{54.33} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\underline{47.00}} & \multicolumn{1}{c}{\underline{64.67}} \\ & \multicolumn{1}{c}{+NMF} & \multicolumn{1}{c}{23.72} & \multicolumn{1}{c}{56.46} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{22.00} & \multicolumn{1}{c}{56.42} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{43.18} & \multicolumn{1}{c}{67.42} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{38.50} & \multicolumn{1}{c}{59.00} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\underline{47.00}} & \multicolumn{1}{c}{\underline{64.67}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{42.50} & \multicolumn{1}{c}{61.67} \\ \hline \multicolumn{1}{c}{\multirow{2}{*}{Ours}} & \multicolumn{1}{c}{+PCA} & \multicolumn{1}{c}{46.32} & \multicolumn{1}{c}{68.06} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{53.29}} & \multicolumn{1}{c}{\textbf{72.61}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{43.98} & \multicolumn{1}{c}{66.54} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{44.00} & \multicolumn{1}{c}{62.67} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{44.50} & \multicolumn{1}{c}{63.00} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{45.00} & \multicolumn{1}{c}{63.33} \\ & \multicolumn{1}{c}{+NMF} & \multicolumn{1}{c}{\underline{52.47}} & \multicolumn{1}{c}{\underline{71.97}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{50.51} & \multicolumn{1}{c}{70.69} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{47.90} & \multicolumn{1}{c}{69.27} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{45.50} & \multicolumn{1}{c}{63.67} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{48.50}} & \multicolumn{1}{c}{\textbf{65.67}} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{45.50} & \multicolumn{1}{c}{63.67} \\ \bottomrule \end{tabular} \vspace{-0.2cm} \end{table*} \begin{table}[htbp] \centering \vspace{-0.3cm} \caption{Ablation studies of our proposed framework. $\mathcal{F}_{QS}$ denotes the quality score. $\mathcal{F}_{single}$ denotes the optimal single-scale features. $\mathcal{F}_{multi}$ denotes the combined multi-scale features. FDR denotes feature dimension-reduction.} \vspace{0.4cm} \label{tab:table2} \begin{tabular}{l|cc} \toprule Metrics & \multicolumn{1}{c}{Kappa} & Acc \\ \hline $\mathcal{F}_{QS}$ &8.05 &46.78 \\ $\mathcal{F}_{single}$ & 10.34 & 47.72 \\ $\mathcal{F}_{single}$ + FDR & 11.18 & 48.06 \\ $\mathcal{F}_{multi}$ & 50.04 & 70.66 \\ $\mathcal{F}_{multi}$ + FDR (proposed) & \textbf{53.29} & \textbf{72.61} \\ \bottomrule \end{tabular} \vspace{-0.2cm} \end{table} \section{Experiments} \label{sec:experiment} \subsection{Datasets} \label{sec:data} We make use of the OCTA-25K-IQA-SEG dataset \cite{wang2021deep}, consisting of four subsets. There are two subsets provided for quality assessment, namely sOCTA-3$\times$3-10k and sOCTA-6$\times$6-14k, with a difference in the field of view. In our experiments, we only use the sOCTA-3$\times$3-10k subset. It contains 10,480 $3mm \times 3mm$ superficial vascular layer OCTA (sOCTA) images with three image quality levels (outstanding, gradable, ungradable). Within this dataset, there are 6,915 for training, 2,965 for testing, 300 for validation, and 300 for external testing. The testing set includes 412 outstanding images, 1179 gradable images, and 1374 ungradable images, while the external testing set includes 100 images for each quality grade. In this work, we use all the outstanding samples of the provided training set (961 out of 6915) for training our low-quality representation module. 
The testing and external testing sets are both used for performance evaluation.

\subsection{Implementation Details}
\label{ssec:setting}
For the low-quality representation module, Wide ResNet-50 \cite{zagoruyko2016wide} is employed as the encoder, and we implement the decoders following CFLOW-AD \cite{gudovskiy2022cflow}. All compared methods are implemented with the Anomalib library \cite{akcay2022anomalib} and PyTorch Lightning \cite{Falcon_PyTorch_Lightning_2019} using NVIDIA RTX 2080 Ti GPUs. In both the training and inference phases, all images are resized to 320 $\times$ 320. For the clustering module, NMF and PCA are used for feature dimension-reduction; K-means, hierarchical clustering, and GMM are used for clustering.

\vspace{-0.2cm}
\subsection{Results}
\label{ssec:result}
All methods are evaluated using two metrics, namely Kappa[\%] and Accuracy[\%] (Acc), the results of which are tabulated in Table \ref{tab:table1}. Our low-quality representation module is compatible with most anomaly detection methods. To assess the adaptability of our framework, we analyze different anomaly detection models, including PaDim \cite{defard2021padim}, PatchCore \cite{roth2022towards}, and Fastflow \cite{yu2021fastflow}, when incorporated in our UNO-QA pipeline. We observe that our low-quality representation module combined with hierarchical clustering achieves the best classification performance, with PCA and NMF working best on the testing set and the external testing set, respectively, when serving as the FDR module.

To further analyze the importance of the different components of UNO-QA, we conduct ablation experiments on the testing set and present the results in Table \ref{tab:table2}. In all ablation experiments, PCA and hierarchical clustering are respectively adopted in the FDR module and the clustering module, since their combination works best overall (Table \ref{tab:table1}). We observe that the latent features extracted from the low-quality representation module are better than the quality scores, and we identify the importance of the multi-scale pyramid pooling operation. Moreover, employing FDR for either single-scale features or concatenated multi-scale features makes a clear difference, which suggests the importance of dimension reduction before clustering.

\vspace{-0.5cm}
\section{Conclusion}
\label{sec:conclusion}
\vspace{-0.1cm}
In this paper, we propose a novel framework for unsupervised and hierarchical (two-level) OIQA. We distinguish between outstanding and non-outstanding samples with a feature-embedding-based low-quality representation module and then extract multi-scale intermediate features from this module to perform feature dimension-reduction and clustering. To the best of our knowledge, our method is the first to apply unsupervised learning to three-level OCTA image quality assessment, and it has great potential to be extended to other types of medical images.

\vspace{-0.3cm}
\section{Acknowledgments}
\vspace{-0.2cm}
This study was supported by the Shenzhen Basic Research Program (JCYJ20190809120205578); the National Natural Science Foundation of China (62071210); the Shenzhen Science and Technology Program (RCYX20210609103056042); the Shenzhen Basic Research Program (JCYJ20200925153847004).

\small \bibliographystyle{IEEEbib}
{ "redpajama_set_name": "RedPajamaArXiv" }
4,047
Almost everyone who has been on a vacation or travel journey for the first time will treat the travel budget as their top priority. Have you been looking for places to travel in 2018? If so, don't miss this blog post!

You will find this first place on the Baja Peninsula. It is remarkable to explore because it is full of rich beaches, and you will also enjoy its nightclubs, resorts, and farmhouses. Its oasis is the main attraction for tourists. Some of the well-known five-star hotels in this part of Mexico are Zadún, a Ritz-Carlton Reserve, and the Four Seasons Los Cabos at Costa Palmas.

The next place is considered the homeland of flourishing regions that are worth exploring at any time. It is a top destination for cyclists and golf experts. Moreover, it is home to roughly 140 wineries producing European-style Syrahs, amazing Cabernets, and awesome Merlots. What else could you want?

Another place is remarkable for the historic backdrop of its incredible monuments, and you will get the chance to see them up close. Its Egyptian Museum has become the main attraction, housing almost 100,000 artifacts in amazing condition.

Finally, there is a city surrounded by vibrant landscapes that are worth seeing, with beaches and restaurants dotted throughout.
{ "redpajama_set_name": "RedPajamaC4" }
5,991
Japanese bodyboarder Sari Ohhara has moved into the top five of the APB Women's World Tour after an impressive victory at the Tahara Pro at the weekend. Held in challenging two-foot conditions, the race for victory at the Japanese event was blown wide open following the shock early-round exit of five-time world champion Neymara Carvalho. Sari Ohhara easily accounted for Yuki Nishimura in the opening semi-final. Ohhara picked off the best waves that came through during the heat and busted out a couple of big rolls to secure a place in the final. In the second semi-final, Shiori Okazawa got away to an early lead over Nao Nagai in very technical conditions, but it was Nao who fought her way back, wave by wave, to take the second semi-final 11.75 to 10.8. As the girls were waiting to compete in the final, conditions improved to a cleaner three feet in size and the excitement built along the beach. Sari Ohhara opened with a 6.0 on her first ride, landing a number of clean spins and finishing off with a good solid roll in a little sucky section. Nao Nagai could only answer back with a 4.5 as Sari continued to build on her score. Sari sealed her victory with an impressive seven-point ride at the end of the heat. Sari was absolutely ecstatic at her victory, which has moved her into fourth place on the World Tour, joining Japanese compatriot Ayaka Suzuki, who sits in seventh position. "I can't wait to get to Sintra to really establish my position on the world rankings," she said. It has been 13 years since the APB was last in Japan, and there are plans in place to bring both the men's and women's world tours to the region in 2016. The APB Women's World Tour continues at Sintra in Portugal on 23 September and will be streamed live via webcast.
{ "redpajama_set_name": "RedPajamaC4" }
8,455
May 2, 2013 · sidewayssammy · 1 Comment
The Season In Review: Part 3 – December & January

Where We Were
After a mixed October it looked like this season would be, at its very best, a mid-table affair. November was important in finally putting distance between us and the relegation battle, but there was still work to be done to allay any fears. The win over Hartlepool showed us that Robins had got us to a level above the relegation contenders and that we shouldn't really be comparing ourselves to them. So far, however, we'd struggled to assert ourselves against the best teams in the division; defeats to Notts County and Brentford, coupled with unconvincing draws against Swindon and MK Dons, showed that we were close but still a little bit behind the very best. The next two months were going to be about proving how far Robins had taken us in his now two-month spell in charge.

Before we could get back to league action following that frustrating home draw against Portsmouth, it was time for the small matter of the FA Cup and the Johnstone's Paint Trophy. First up was a home game in the FA Cup against lower-league Morecambe (one of Carl Baker's former clubs). The 2-1 scoreline belied how comfortably Coventry handled their lower-league opponents; that very comfort was perhaps the reason for the low scoreline. In the JPT it was another home game, against Sheffield United. The two teams looked fairly well-matched and the tie duly headed to penalties, where Coventry advanced 4-1. Suddenly the JPT campaign was building momentum and fans were feeling optimistic about the chance to see their team at Wembley.

Back in the league it was time for a local 'derby' match against a horrendously out-of-form Walsall team. For Coventry the game came at the perfect time: confidence and form rising, with their opponents experiencing the absolute opposite. However, years of supporting Coventry have taught us to rein in our expectations and confidence in these situations. Despite this history of underwhelming, Coventry won the game 5-1 after falling behind. Something seemed to be changing in the very culture of the club: consistent performances (albeit over a period of a month) and a new comfort with going into matches as favourites. A side note was the retirement of Kevin Kilbane, who despite being captain played only 6 league games and was clearly not in favour. He has subsequently had a successful media career in the past few months and has completed the London Marathon.

The next fixture was against the league leaders and eventual champions, Doncaster. In terms of the league, this was the first time we were up against a strong team during our now 4-game unbeaten run or, going further back, a streak of 1 defeat in 7. It was time to test the idea that Robins had installed a winning and competitive mentality in the club. From Franck Moussa's third-minute opener onwards it was one of the most spectacular performances and results that the team had produced in years. In a largely counter-attacking 4-4-1-1 formation headed by a confident and in-form David McGoldrick, we proved that we were now one of the better teams in this division. The game was won 4-1; we had now scored 9 goals in two matches, were in our highest league position since the opening day, and were tantalisingly close to both the top half and the play-off places.

However, dark clouds were hanging above us. The next home game, against Preston, was supposedly going to be the last at the Ricoh.
ACL had issued a winding-up order against the club for unpaid rent and had set a Boxing Day deadline for the club to stump up the cash. The combination of the good form and the prospect of saying goodbye to the Ricoh meant that the match saw the highest attendance since the Sheffield United game back in August. Many felt confident that they would be cheering on an in-form Coventry side who were going to put a stuttering Preston to the sword in the manner of their previous few performances. A side story to this game, though, was the Preston manager Graham Westley, who had been the subject of much ridicule ever since taking the Preston job and was also being maligned for his long-ball tactics.

The game itself saw Coventry start well, with Barton playing in behind McGoldrick this time and using his height to some effect early on. When Coventry took the lead through James Bailey it seemed like a regulation win was on its way. However, Preston came back strongly, countering through their pacey wingers and putting in some strong challenges up front. Coventry were lucky to go into the break still in the lead but were increasingly looking out of ideas in attack. The second half continued in a similar manner, and when Preston eventually equalised in the 77th minute it seemed a story familiar to so many Coventry fans was taking place. In the end Preston could count themselves unlucky not to win, though Coventry could say the same. The Preston manager Westley became the subject of ire from Coventry fans for his supposed long-ball style and abrasive touchline antics, another of those managerial spats that dotted this season.

Boxing Day came and there was no word on what was happening vis-a-vis the Ricoh dispute, with both sides staying schtum. It was also seeming likely at this point that McGoldrick wasn't going to stay beyond his current loan spell, due to expire after the New Year's Day Shrewsbury match. The bully-boy from the Scunthorpe match, Leon Clarke, had been training with the club and seemed set to take up McGoldrick's mantle, and he looked a decent choice.

On the pitch, the team headed into the Stevenage match on a 6-match unbeaten league run, having lost only 1 league game in 2 months. The game was also the start of the second half of the season, and it seemed like we could mount a charge towards the increasingly realistic-looking play-off places. Stevenage were no slouches themselves and had looked like a top-half side in the first half of the season. It was nonetheless disappointing when they took the lead through a first-half penalty. Things were looking slightly desperate and the momentum from the Walsall and Doncaster games seemed to be petering out. Richard Wood, as so often under Robins, scored a header from a corner to level things with just over 10 minutes left. Against tough opposition, I think most fans would have accepted a point at this juncture, but Coventry, and in particular Carl Baker and David McGoldrick, had different ideas. Baker put Coventry ahead just past 90 minutes, and then in stoppage time McGoldrick scored what many would describe as a 'wonderful lob' from around 30 yards out. Ecstasy. We were in the top half of the table for the first time since the opening day.

However, there was no time to rest on any laurels, as we had to travel to Milton Keynes without James Bailey and David McGoldrick: Bailey because his loan was over, and McGoldrick because he'd been suspended for 5 bookings.
So when MK took the lead, I think many were fearing that we'd run out of steam and were perhaps regressing to the mean of our pre-December performances. Franck Moussa, though, thought differently and scored probably the goal of the season, a pitch-long dribble, to put us level. Stephen Elliott, who we hadn't seen much of since the Sheffield United league match, stepped up to the plate and scored twice in 2 minutes to put us in the lead after MK had re-taken it just before half-time. Even O'Donovan got in on the action and had a decent impact; you know it's going well when that happens. Cue yet more ecstasy. Also cue the bitterest manager comment I've ever heard, from MK manager Karl Robinson, who described Coventry as moaning about having no money. What a tit.

The start of a new year seemed set to herald our push towards and into the play-offs. There was a slight hitch, though: our top scorer was no longer going to be with us. McGoldrick's final game was the New Year's Day home match against Shrewsbury, which was played at the Ricoh after all. The game also drew the highest league attendance of the season, at over 15,000. I cite this game as the start of the pattern for our home performances in 2013. We played decently and really should have scored, but the opposition sucker-punched us on the counter and we had no response. It was our first defeat in 8 games, and given who we had played it was shocking that it came against lowly Shrewsbury, who had now 'done the double' over us.

Next up was another cup match, in North London, where we had no prospect of winning. We looked star-struck by Tottenham and lost 3-0 without them leaving first gear. With Clarke drafted in, the next match was at home to Preston in the JPT. The game was pretty even in the first half, but Jennings gave us the lead from an Edjenguele flick-on from a set-piece. The second half was well-matched, but Preston's two goals had more than a hint of fortune about them in the shape of lucky deflections. All of a sudden the optimism we began the year with was draining away; worst of all, we were giving fan enemy number one, Graham Westley, the chance of going to Wembley. Despite not really putting Preston under much pressure, Carl Baker levelled the score in the second minute of injury time. Pandemonium. But more was to come, as Preston keeper Steve Simonsen fumbled a shot and three Coventry players had read it perfectly. It fell to Leon Clarke, who had a tap-in; in the fifth minute of injury time in a cup match, that didn't matter. It seemed like destiny; Coventry were surely heading to Wembley.

The away game at Carlisle, though, felt like a damp squib, despite the very real matter of league points being played for. It felt as though the prospect of Wembley in the JPT was overshadowing the prospect of playing for something in the league. Carlisle scored early and Coventry barely threatened to take something back from the game. We had now lost 2 in a row and were back in the bottom half of the table. Very much a case of two steps forward and one step back. The next match seemed vital in re-establishing league momentum. Fortunately we were able to beat Tranmere in a game low on quality and chances. We next played an Oldham side who weren't in great form, and the fixture seemed a great opportunity to re-ignite our charge towards the play-offs. Another win, despite giving up the lead in the 89th minute; it seemed like we were winning the games that the best teams should win, with a genuine belief around the squad that we were good enough.
Back in confidence and now in 7th, a place behind the play-offs, we next faced what had turned into a grudge match against an ambling Preston side under Graham Westley's loosening charge. Whether it was the animosity clear to see between the two teams or a lack of assertiveness from Coventry, we couldn't really dominate in the way we had against other teams recently. Preston took the lead, Coventry equalised (Clarke again), Coventry took the lead, Preston equalised. Probably a fair result, but it's always frustrating not to win when you take the lead. By the end of January we were in our highest league position all season, 7th.

The transfer window saw limited activity: Leon Clarke and Blair Adams both completed their expected transfers and Bailey extended his loan. Out of the door went the deadwood of the squad, Chris Hussey and Roy O'Donovan. The squad was solidifying and we had our strongest line-up in that 4-4-1-1 shape, which was: Murphy – Christie, Wood, Edjenguele, Adams – Baker, Jennings, Bailey, McSheffrey – Moussa – Leon Clarke. Robins appeared contented despite murmurings of other clubs' pursuits. Despite the rent situation not being sorted, the failure of the past deadline suggested that neither side had a particularly strong negotiating position and that it could rumble on and be sorted soonish. It all seemed geared towards a successful final push in the league and the JPT. Well, we'll look at the final bit in Part 4 sometime soon then.

Categories: Season In Review | Tags: Coventry City, Doncaster, Graham Westley, Johnstone's Paint Trophy, Karl Robinson, League One, Mark Robins, MK Dons, Preston, Stevenage

1 thought on "The Season In Review: Part 3 – December & January"
hairygrim (May 2, 2013, 5:51 pm): Unfortunately we all know what comes next.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
286
Fabian Alexander Kondziolka (born 1978 in Bochum) is a German actor and presenter.

Life
Kondziolka grew up in the Ruhr area. After his Abitur (1995) and civilian service (1996/1997), he first worked in the Bochum scene restaurant Art Hotel Tucholsky. He gained his first experience as a presenter at the Bochum Total festival. From 1999 to 2001 he studied film and television studies at the Ruhr University Bochum, and then, until 2004, acting at the Arturo Schauspielschule in Cologne and at the Schauspielschule Bochum. He took part in a camera acting workshop in Los Angeles and completed his training with the stage qualification examination (Bühnenreifeprüfung).

Kondziolka has played leading and supporting roles in several short films. He has also taken on numerous episode roles, mainly supporting and guest roles in action series, including Alarm für Cobra 11, Küstenwache, Die Rettungsflieger, 112 – Sie retten dein Leben and Danni Lowinski. He has also appeared in comedy series such as Ladykracher and Die Dreisten Drei. In the 2008 feature film Fahr zur Hölle Gott he played a leading role alongside Martin Semmelrogge, Uwe Fellensiek, Claude-Oliver Rudolph and Christine Kaufmann, portraying an archangel who, among other things, bears traits of the biblical figure Enoch.

Since 2005 he has also worked as a presenter in the fields of sport, lifestyle and entertainment. Kondziolka lives in Bochum.

Filmography
2000: Antonia – zwischen Liebe und Macht (TV film, supporting role)
2001: Der Clown (TV series, minor role)
2001: Matterazzo! (short film, supporting role)
2002: Die Sketchshow (guest role)
2002: Axel Stein (guest role)
2003: Ladykracher (TV series, guest role)
2003: Ziele (leading role)
2003: Brüder und Freunde (short film, leading role)
2003, 2004: Alarm für Cobra 11 (TV series, guest role)
2004: P.O.R.N. (short film, supporting role)
2005: Die Dreisten Drei (TV series)
2006: Roundabout (cinema short film, leading role)
2006: Reiche Armut (short film, leading role)
2006: Küstenwache (TV series, guest role)
2007: Der Zwang (short film, supporting role)
2007: Die Rettungsflieger (TV series, guest role)
2007: Bedingungslos (short film, leading role)
2007: Lost Lovers (short film, supporting role)
2008: 112 – Sie retten dein Leben (TV series, guest role)
2008: Verbotene Liebe (TV series, guest role)
2008: Danni Lowinski (TV series, guest role)
2008: Heroic Bloodshed (short film, leading role)
2008: Fahr zur Hölle Gott (feature film, leading role)
2009: Unter uns (TV series, guest role)
2009: Alarm für Cobra 11 (TV series)
2009: Broken Comedy (TV series)
2010: Next Stop Paris (short film)
2011: In Vino Veritas (short film)
2011: Noch'n Schuss Kaffee? (short film)
2013: Unter uns (TV series, episode lead role)
2013: The End (feature film)
2018: Freundinnen – Jetzt erst recht (TV series)
2019: Krass Abschlussklasse (TV series)
2021: Der Lehrer: Geil, geil, geil (TV series, one episode)
2021: SOKO Köln: Alphatiere (TV series, one episode)

External links
Fabian Kondziolka – vollfilm

References

Categories: Film actor · Presenter · German · Born 1978 · Male
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,277
{"url":"http:\/\/www.awea.org\/blog\/index.cfm?customel_dataPageID_1699=17225","text":"## The AWEA Blog: Into the Wind\n\n#### Natural gas is natural and pretty clean, but renewable? Nope\n\nA Washington Post editorial this past weekend suggested that natural gas be included in state renewable portfolio standards and in the federal renewable electricity standard now before Congress. It would be a convenient way to retire some coal plants and reduce greenhouse gases, the Post said.\n\nTo avoid the obvious argument that natural gas is not renewable, the editorial described the state rules and proposed federal rule as \"clean energy\" rather than renewable standards. Nice try; they are called renewable standards for a reason.\n\nSemantics aside, the Big Three utility fuels--natural gas, coal, and nuclear energy--all pose risks. For the sake of human health and the environment, national security, and the consumer prices, we need to diversify the fuels used by electric utilities. Renewables--hydro, biomass, solar and wind--do not pollute or cause health problems, and are not subject to price volatility. The purpose of state and federal renewable standards is to encourage utilities to diversify their mix to include these renewable sources. Putting gas into the mix makes no sense, and not just because it is not renewable. It is already in the mix--in fact, it is the fastest growing utility generating source.","date":"2013-05-20 18:58:53","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8892420530319214, \"perplexity\": 4296.379524097816}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2013-20\/segments\/1368699186520\/warc\/CC-MAIN-20130516101306-00082-ip-10-60-113-184.ec2.internal.warc.gz\"}"}
87 Sylvia

Sylvia (minor planet designation: 87 Sylvia) is the 8th-largest asteroid in the asteroid belt. It is the parent body of the Sylvia family and a member of the Cybele group located beyond the core of the belt (see minor-planet groups). Sylvia is the first asteroid known to possess more than one moon.

Image caption: Adaptive Optics observations of (87) Sylvia, showing its two satellites, Remus and Romulus.

MPC designation: (87) Sylvia (/ˈsɪlviə/ SIL-vee-ə)
Alternative designations: A909 GA
Minor planet category: main belt (outer); Sylvia; Cybele

Orbital characteristics (epoch July 14, 2004, JD 2453200.5):
Semi-major axis: 3.768 AU (563.679 Gm)
Orbital period: 6.52 a (2381.697 d)
Average orbital speed: 15.94 km/s
Known satellites: 2

Physical characteristics:
Dimensions: (384 × 262 × 232) ± 10 km[1][2]; (385 × 265 × 230) ± 10 km[3]
Mean diameter: 286 km
Mass: (1.478 ± 0.006) × 10^19 kg[1][3]
Mean density: 1.2 ± 0.1 g/cm3[1][3]
Equatorial surface gravity: 0.0729 m/s2
Equatorial escape velocity: 0.1379 km/s
Rotation period: 0.2160 d (5.183642 h)[4][5]
Geometric albedo: 0.0435[6]
Spectral type: X[7]

Discovery and naming
Sylvia was discovered by N. R. Pogson on May 16, 1866, from Madras (Chennai), India.[8] A. Paluzie-Borrell, writing in Paul Herget's The Names of the Minor Planets (1955), mistakenly states that the name honours Sylvie Petiaux-Hugo Flammarion, the first wife of astronomer Camille Flammarion. In fact, in the article announcing the discovery of the asteroid, Pogson explained that he selected the name in reference to Rhea Silvia, mother of Romulus and Remus (MNRAS, 1866).

Physical characteristics
Sylvia is very dark in color and probably has a very primitive composition. The discovery of its moons made possible an accurate measurement of the asteroid's mass and density. Its density was found to be very low (around 1.2 times the density of water), indicating that the asteroid is porous to very porous; from 25% to as much as 60% of it may be empty space,[3] depending on the details of its composition. However, the mineralogy of the X-type asteroids is not known well enough to constrain this further. Either way, this suggests a loose rubble pile structure. Sylvia is also a fairly fast rotator, turning about its axis every 5.18 hours (giving an equatorial rotation velocity of about 230 km/h or 145 mph). The short axis is the rotation axis.[4] Direct images[3] indicate that Sylvia's pole points towards ecliptic coordinates (β, λ) = (+62.6°, 72.4°) with only a 0.5° uncertainty, which gives it an axial tilt of around 29.1°. Sylvia's shape is strongly elongated.

Satellite system
Sylvia has two orbiting satellites. They have been named (87) Sylvia I Romulus and (87) Sylvia II Remus, after Romulus and Remus, the children of the mythological Rhea Silvia. Romulus, the first moon, was discovered on February 18, 2001, from the Keck II telescope by Michael E. Brown and Jean-Luc Margot. Remus, the second moon, was discovered over three years later on August 9, 2004, by Franck Marchis of UC Berkeley, and Pascal Descamps, Daniel Hestroffer, and Jérôme Berthier of the Observatoire de Paris, France. The orbital properties of the satellites are listed in the table below.[9] The orbital planes of both satellites and the equatorial plane of the primary asteroid are all well-aligned; all planes are aligned within about 1 degree of each other, suggestive of satellite formation in or near the equatorial plane of the primary.
Name | Mass [kg] | Semi-major axis [km] | Orbital period [days] | Eccentricity
Remus | 7.3 × 10^14 | 706.5 | 1.37 | 0.027
Romulus | 9.3 × 10^14 | 1357 | 3.65 | 0.006

References

1. Jim Baer (2008). "Recent Asteroid Mass Determinations". Personal website. Archived from the original on 2 July 2013. Retrieved 5 December 2008.
2. Data sheet compiled by W. R. Johnston.
3. F. Marchis; et al. (2005). "Discovery of the triple asteroidal system 87 Sylvia". Nature. 436 (7052): 822–4. Bibcode:2005Natur.436..822M. doi:10.1038/nature04018. PMID 16094362.
4. M. Kaasalainen; et al. (2002). "Models of Twenty Asteroids from Photometric Data". Icarus. 159 (2): 369. Bibcode:2002Icar..159..369K. doi:10.1006/icar.2002.6907.
5. PDS lightcurve data. Archived 2009-04-09 at the Wayback Machine.
6. Supplemental IRAS Minor Planet Survey. Archived 2009-08-17 at the Wayback Machine.
7. PDS spectral class data. Archived 2009-08-05 at the Wayback Machine.
8. Pogson, N. R. (1866). "Minor Planet (87) Sylvia". Monthly Notices of the Royal Astronomical Society. 26: 311 (June 1866).
9. Fang, Julia; et al. "Orbits, Masses, and Evolution of Main Belt Triple (87) Sylvia". Astronomical Journal. 144 (2): 70. arXiv:1206.5755. Bibcode:2012AJ....144...70F. doi:10.1088/0004-6256/144/2/70.

External links

Data on (87) Sylvia from Johnston's archive (maintained by W. R. Johnston)
"Rubble-Pile Minor Planet Sylvia and Her Twins" (ESO news release, August 2005); includes images and artist's impressions
"Adaptive Optics System Reveals New Asteroidal Satellite" (SpaceDaily.com, March 2001); includes a discovery image
Space.com: First asteroid trio discovered
IAUC 7588, reporting the discovery of S/2001 (87) 1
IAUC 7590, confirming the discovery
IAUC 8582, reporting the discovery of S/2004 (87) 1 and the naming of Romulus and Remus
An animation of (87) Sylvia and its moons (on Vimeo)
Shape model derived from lightcurve (on page 19)
"Instability zones for satellites of asteroids. The example of the (87) Sylvia system" (arXiv:1112.5363, 22 December 2011)
"Orbits, masses, and evolution of main belt triple (87) Sylvia" (arXiv:1206.5755, 25 June 2012)
Occultation of TYC 1856-00745-1 by (87) Sylvia and by its satellite Romulus (E. Frappa, A. Klotz, P. Dubreuil)
87 Sylvia at AstDyS-2, Asteroids—Dynamic Site
87 Sylvia at the JPL Small-Body Database
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,652
Bi-Lo, Winn-Dixie Say that Merged Company HQ Will Be In Jacksonville, Florida

Now that the merger of Bi-Lo and Winn-Dixie has been completed, the combined company said that it eventually will establish its central headquarters in Jacksonville, Florida, Winn-Dixie's hometown, because it is "centrally located within its eight-state operating area. While both companies enjoy a strong heritage of support from their local communities, the Jacksonville-based infrastructure is best positioned to host the combined Bi-Lo and Winn-Dixie support center, corporate office and distribution facilities. At the same time, the company plans to maintain a strong regional presence in Greenville both in regard to distribution and local store support needs."
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,520
"NB: 5/5 - MediaVacationRentals.com, Laurence P. review verifiedLaurence P. "NB: 5/5 - MediaVacationRentals.com, Danila R. review verifiedDanila R. "NB: 5/5 - MediaVacationRentals.com, S S. review verifiedS S. "NB: 5/5 - MediaVacationRentals.com, Sandra P. review verifiedSandra P. "NB: 5/5 - MediaVacationRentals.com, Gianfranco M. review verifiedGianfranco M. "NB: 5/5 - MediaVacationRentals.com, Antonella M. review verifiedAntonella M. Beautiful, fabulous! If it were mine I would not have been rented.Beautiful, fabulous! If it were mine I would not have been rented. "NB: 5/5 - MediaVacationRentals.com, Antonietta M. review verifiedAntonietta M. "NB: 5/5 - MediaVacationRentals.com, Ettore A. review verifiedEttore A. "NB: 5/5 - MediaVacationRentals.com, Raphaelle B. review verifiedRaphaelle B. "NB: 5/5 - MediaVacationRentals.com, Elena A. review verifiedElena A. "NB: 5/5 - MediaVacationRentals.com, Jeanne B. review verifiedJeanne B. "NB: 5/5 - MediaVacationRentals.com, Luca R. review verifiedLuca R. "NB: 5/5 - MediaVacationRentals.com, Benjamin F. review verifiedBenjamin F. "NB: 5/5 - MediaVacationRentals.com, Danel A. review verifiedDanel A. The place is ideal for a beach holiday. Convenient and easy to reach.The place is ideal for a beach holiday. Convenient and easy to reach. "NB: 5/5 - MediaVacationRentals.com, Andrea S. review verifiedAndrea S.
{ "redpajama_set_name": "RedPajamaC4" }
3,154
\subsection{Sketch of the Construction}

We explain here the construction of invariants for $[\pt/\Cs]$. (The invariants for $[X/\Cs]$ are built up from this special case.) To begin, we need a completion of the stack $\Bun_{\Cs}(g,I)$ of maps from stable marked curves to $[\pt/\Cs]$. Recall that defining a $\Cs$-bundle on a nodal curve $\Sigma$ is the same as defining a $\Cs$-bundle on the normalization of $\Sigma$ together with the data of identifications of the fibers at the preimages of the nodal points. $\Bun_{\Cs}(g,I)$ fails to be complete because the space of identifications over a given node is isomorphic to $\Cs$. Following Gieseker \cite{MR739786} and Caporaso \cite{MR1254134}, we get a completion -- denoted $\Mtwid_{g,I}([\pt/\Cs])$ -- by adding new strata which represent the limits where an identification goes to zero or infinity: at such a limit, a projective line carrying the line bundle ${\mathcal O}_{\mb{P}^1}(1)$ is allowed to appear at the node. (Similar completions of various stacks of vector bundles on nodal curves have been studied by many authors; see the reviews \cite{MR771150, MR2105707}. Our definition was inspired by Caporaso's thesis \cite{MR1254134} and the papers of Nagaraj \& Seshadri \cite{MR1455315, MR1687729}.)

The stack $\Mtwid_{g,I}([\pt/\Cs])$ has a forgetful map \[ F: \Mtwid_{g,I}([\pt/\Cs]) \to \Mbar_{g,I}. \] It also carries a) a universal curve $\pi: C \to \Mtwid_{g,I}([\pt/\Cs])$ with marked points $\sigma_i: \Mtwid_{g,I}([\pt/\Cs]) \to C$ and b) a universal principal $\Cs$-bundle $p: \mP \to C$, representing the universal map $\phi: C \to [\pt/\Cs]$. Composition gives the evaluation maps $\ev_i = \phi\circ \sigma_i$. \[ \ev_i: \Mtwid_{g,I}([\pt/\Cs]) \to [\pt/\Cs]. \] A vector bundle on $[\pt/\Cs]$ is a $\Cs$-representation $V$. Pulling these back along the universal morphism $\phi$ gives the universal vector bundles $\mc{V} = \phi^*V$ on $C$. Likewise, if we pull K-theory classes $[V] \in K([\pt/\Cs]) \simeq K_{\Cs}(\pt)$ back along the maps $\ev_i$, we obtain the {\it evaluation classes} $\ev_i^*[V]$.

The Gromov-Witten invariants of $[\pt/\Cs]$ (like those of a variety $X$) result from pushing a product of evaluation classes forward along the forgetful morphism $F$. However, our setup differs from the standard one in two ways. \begin{enumerate} \item Our invariants are constructed in K-theory, rather than cohomology. \item Our invariants are always {\it twisted}, in the sense of \cite{MR2276766}. \end{enumerate} Twisting requires a definition. A line bundle $\mc{L}$ on $\Mtwid_{g,I}([\pt/\Cs])$ is said to be {\it admissible} if (topologically) \[ \mc{L}^{-1} \simeq (\op{det}R\pi_*\phi^*V)^{\otimes r}, \] where $V$ is a non-trivial irreducible $\Cs$-representation and $r$ is a positive rational number. An {\it $\mc{L}$-twisted Gromov-Witten invariant of $[\pt/\Cs]$} is the Euler characteristic (on $\Mbar_{g,I}$) of the twist by (admissible) $\mc{L}$ of a product of evaluation classes: \[ \langle V_1,...,V_{|I|} \rangle_{\mc{L}} = \chi_{\Mbar_{g,I}}\Big(RF_* \big(\mc{L} \bigotimes \otimes_{i \in I} \ev_i^*V_i \big)\Big). \] One can also twist by powers of the index classes $R\pi_*\mc{V}$ of vector bundles $\mc{V}$ on $C$; these classes may be assembled into {\it higher twistings}. An {\it admissible complex} is a sum of products of complexes of the form \[ \mc{L} \bigotimes \otimes_a (R\pi_*\mc{V}_{a})^{n_a} \bigotimes \otimes_i (\ev_i^*V_{i} \otimes T_i^{\otimes n_i}).
\] (Here the classes $T_i$ are the tautological line bundles at the marked points, whose powers give descendants.) The subring of $K(\Mtwid_{g,I}([\pt/\Cs]))$ generated by such products is called the {\it ring of admissible classes}. It is a subring without unit; the trivial line bundle $\mc{O}$ is not admissible.

It is not obvious that the push-forward of an admissible class along $F$ is well-defined, because the moduli stacks $\Mtwid_{g,I}([\pt/\Cs])$ differ from Kontsevich's stack of stable maps in two important ways. First, $\Mtwid_{g,I}([\pt/\Cs])$ is an Artin stack, rather than a Deligne-Mumford stack; points can have finite-dimensional stabilizers. It is for this reason that we use K-theory instead of cohomology. The forgetful morphism $F$ to the stack of curves is always Artin, and it is generally impossible to push cohomology classes forward along such morphisms. The problem can be understood as follows. The fibers of $F$ are quotient stacks of the form $[A/\mc{G}]$ ($\mc{G}$ a group), and integrating over $[A/\mc{G}]$ amounts to first pushing forward along $[A/\mc{G}] \to [\pt/\mc{G}]$ and then pushing forward along $[\pt/\mc{G}] \to \pt$. But the latter pushforward is zero in cohomology. Indeed, pushing forward along $[\pt/\mc{G}] \to \pt$ shifts the degree of a cohomology class by $-\dim([\pt/\mc{G}]) = \dim(\mc{G})$, so the only classes with non-zero push-forward live in $H^{-\dim(\mc{G})}_\mc{G}(\pt)$. But $H^n_\mc{G}(\pt)$ vanishes if $n$ is negative. In K-theory, on the other hand, the pushforward along $[\pt/\mc{G}] \to \pt$ does exist. The class $[V] \in K_\mc{G}(\pt)$ represented by a $\mc{G}$-module $V$ is sent to the K-theory class $[V^\mc{G}] \in K(\pt)$ represented by the vector space of $\mc{G}$-invariants in $V$. (For example, when $\mc{G} = \Cs$ and $V = \bigoplus_n V_n$ is the weight decomposition, $[V]$ is sent to $[V_0]$.)

The second problem is that $\Mtwid_{g,I}([\pt/\Cs])$ is very far from proper. It is complete, but it is in general neither separated nor of finite type. Thus, the existence of a pushforward along the forgetful morphism to $\Mbar_{g,I}$ is a delicate matter; not every K-theory class on $\Mtwid_{g,I}([\pt/\Cs])$ has a well-defined index. The main theorem in this paper asserts that the particular classes we want to push forward do indeed have a well-defined index. (There is a third difference, which makes things easier: The stack $\Mtwid_{g,I}([\pt/\Cs])$ is unobstructed, so we don't need to use virtual structure sheaves.)

\begin{maintheorem}\label{maintheorem} The derived pushforward $RF_*\alpha$ of an admissible complex $\alpha$ along the bundle-forgetting map $F: \Mtwid_{g,I}([\pt/\Cs]) \to \Mbar_{g,I}$ is a complex of coherent sheaves. \end{maintheorem}

This theorem is a relative version (allowing the curve to vary) of the finiteness theorem proved on $\Bun_G(\Sigma)$ in \cite{math.AG/0312154}. The proof, in rough outline: \begin{enumerate} \item Coherence is a local property, so we can work in an affine \'{e}tale neighborhood $B$ in $\Mbar_{g,I}$. For small enough $B$, the restriction of $\Mtwid_{g,I}([\pt/\Cs])$ to $B$ can be presented as a stack quotient $[A/\mc{G}]$, where $A$ is an algebraic space classifying $\Cs$-bundles equipped with trivializations at marked points labelled by a set $V$, and $\mc{G} = (\Cs)^V$. Since $B$ is affine, we can prove coherence by showing that the $\mc{G}$-invariants in the global sections $R\Gamma(A,\alpha)$ are finitely-generated over $B$. \item The Kirwan-Ness stratification associated to the $\mc{G}$-action on $A$ is compatible with the stratification by degeneracy type, which tracks the multidegrees of the bundles.
This fact makes it possible to identify algebraic subspaces $S \subset A$ for which $[S/\mc{G}] \simeq Q \times [\pt/\Cs]$, with $Q$ proper over $B$. \item The $\mc{G}$-weights of the fixed point fibers of admissible line bundles grow linearly with the multidegree, while the weights of evaluation classes and their descendants and index classes are bounded functions of the multidegree. These estimates make it possible to prove that the $\mc{G}$-invariants in the local cohomologies of admissible complexes on the degeneracy strata are always coherent, and are non-zero in only finitely many cases. This result allows us to reduce the question of finite-generation of invariants on $A$ to $S$, where it follows trivially from the properness result. \end{enumerate}

\subsection{Invariants for $[X/\Cs]$}

Recall that $[X/\Cs]$ is defined so that maps from a curve $\Sigma$ to $[X/\Cs]$ correspond to pairs $(\mP,s)$ consisting of a principal $\Cs$-bundle and a section $s \in \Gamma(\Sigma,\mP \times_{\Cs} X)$ of the associated bundle with fiber $X$. To define Gromov-Witten invariants for $[X/\Cs]$, we need a moduli stack $\Mtwid_{g,I,\beta}([X/\Cs])$ of curves and degree $\beta$ maps to $[X/\Cs]$ on which we can define tautological classes, and we need a way of defining the pushforward of these classes from $\Mtwid_{g,I,\beta}([X/\Cs])$ to $\Mbar_{g,I}$. In this paper, we define an appropriate moduli stack $\Mtwid_{g,I,\beta}([X/\Cs])$. This stack has a natural section-forgetting morphism $$ F_\beta: \Mtwid_{g,I,\beta}([X/\Cs]) \to \Mtwid_{g,I}([\pt/\Cs]). $$ The fibers of this morphism are stacks of sections of bundles with fiber $X$ associated to Gieseker bundles. Such sections are locally maps to $X$, so they can develop singularities in the same way. Following Kontsevich, we ensure that $F_\beta$ is proper by allowing bubbling at points where such singularities occur. Thus, the morphism $F_\beta$ is very much like the morphism $F_\beta: \Mbar_{g,I,\beta}(X) \to \Mbar_{g,I}$ used in ordinary Gromov-Witten theory. We prove, in fact, that the morphism $F_\beta$ is proper, Deligne-Mumford, and carries a perfect obstruction theory, relative to $\Mtwid_{g,I}([\pt/\Cs])$, which is smooth. (The proofs are straightforward generalizations of the usual ones in Gromov-Witten theory.) These facts imply the existence of a virtual K-theoretic pushforward along $F_\beta$. We conjecture that the virtual pushforward of an admissible class along $F_\beta$ is an admissible class on $\Mtwid_{g,I}([\pt/\Cs])$. If this conjecture holds, then we can safely define the Gromov-Witten invariants of $[X/\Cs]$ to be the K-theory classes on $\Mbar_{g,I}$ obtained by applying $F_{\beta *}^{vir}$ and then the pushforward $F_{\mP *}$ along the bundle-forgetting map to an admissible class on $\Mtwid_{g,I,\beta}([X/\Cs])$.

\subsection{Plan of the Paper}

In Section \ref{thestack}, we review basic facts about principal $\Cs$-bundles on curves, then define the moduli stack $\Mtwid_{g,I}([\pt/\Cs])$ of Gieseker bundles on stable curves, and discuss a few examples (small $g$ and $|I|$). In Section \ref{usefulatlas}, we introduce an atlas $A$ for our stack, and discuss the deformations of Gieseker bundles. We then draw a few basic conclusions about the geometry of $\Mtwid_{g,I}([\pt/\Cs])$. In Section \ref{localstudy}, we show that the Kirwan-Ness stratification of $A$ (for its given $\mc{G} = (\Cs)^V$-action) is compatible with (a certain refinement of) the induced degeneracy stratification.
We also introduce some finite-type subspaces of $A$ and explain how to think of them in terms of the stratification. In Section \ref{admissibleclasses}, we define admissible classes and estimate the behavior of the $\mc{G}$-weights of their fixed point fibers as functions of the multidegrees of the bundles classified by the fixed points. In Section \ref{theinvariants}, we define the Gromov-Witten invariants for $[\pt/\Cs]$ and prove local cohomology theorems which imply they are well-defined. In Section \ref{XmodC}, we explain how to define Gromov-Witten invariants for $[X/\Cs]$.

\subsection{Acknowledgements}

E.F. thanks Andrei Losev and Nikita Nekrasov for useful discussions. A.T. thanks Jarod Alper, Tom Coates, Dan Edidin, Ezra Getzler, Eduardo Gonzalez, Reimundo Heluani, and Chris Woodward for helpful conversations. This research has been partially supported by DARPA and AFOSR through the grant FA9550-07-1-0543. In addition, E.F. and A.T. were supported by the NSF grant DMS-0303529, and C.T. was supported by the NSF grant DMS-0709448.

\section{The Stack of Gieseker Bundles}\label{thestack}

In order to define the Gromov-Witten invariants of $[\pt/\Cs]$ we need appropriate moduli stacks of marked curves carrying principal $\Cs$-bundles. In this section, we introduce such stacks and discuss a few simple examples. We always work over $\C$.

\subsection{Curves}

In everything that follows, $(C,\sigma_i)$ is a family of prestable marked curves, over a finitely-generated complex base scheme $B$. More precisely, $\pi: C \to B$ is a flat proper morphism whose fibers are connected complex projective curves of genus $g$ with at worst nodal singularities, carrying a collection of marked points $\sigma_i: B \to C$ which are indexed by an ordered set $I$. A point of $C$ is {\it special} if it is a node or a marked point. We shall always assume that any rational components of $C$ have at least two special points. We usually reserve the notation $(\Sigma,\sigma_i)$ for families of {\it stable} marked curves. Recall that a marked curve is stable if each component of genus 0 carries at least 3 special points and each component of genus 1 carries at least 1 special point. $\Mbar_{g,I}$ denotes the stack of stable genus $g$, $I$-marked curves.

\begin{notation} We will need to label three kinds of special points on $C$: ordinary marked points, nodes, and {\it trivialization points}. The latter are marked points at which we will trivialize the fibers of a $\Cs$-bundle, but which are not required to be non-degenerate and do not count when determining stability. We will denote all such points by $\sigma$, distinguishing them by the subscript. Ordinary marked points are denoted $\sigma_i$, with $i \in I$. Nodes are $\sigma_e$, with $e$ in a set $E$. Trivialization points are denoted $\sigma_v$, with $v$ in a set $V$. \end{notation}

We use modular graphs to represent the topological type of a marked curve. Recall \cite{MR1412436} that the {\it modular graph} $\gamma$ of a curve $(C,\sigma_i)$ consists of: \begin{enumerate} \item a vertex set $V_\gamma$ (one vertex $v$ for each component of $C$), \item a half-edge set $H_\gamma$ (one half-edge for each special point of $C$), \item a gluing map $\partial_\gamma: H_\gamma \to V_\gamma$ (attaching half-edges to vertices), \item an involution $j_\gamma: H_\gamma \to H_\gamma$, and \item a function $g: V_\gamma \to \N$ (assigning to $v$ the genus $g_v$ of the normalization $\widetilde{C}_v$).
\end{enumerate} The involution $j_\gamma$ generates an action of $\Z/(2)$ on $H_\gamma$, and the orbit set is a disjoint union of the set $I_\gamma$ of tails (singlets, corresponding to marked points on $C$) and the set $E_\gamma$ of edges (doublets, corresponding to nodes of $C$). The set $E_{\gamma}$ may be further decomposed into the union \[ E_{\gamma} = E^{split}_{\gamma} \sqcup E^{self}_{\gamma} \] of the set of splitting edges (which connect different vertices) and the set of self-edges (which start and end at the same vertex).

\begin{figure}[htb]
\centering
% [PSTricks graphic omitted]
\caption{A (stable) marked curve and its associated modular graph, which has two splitting edges, one self-edge and one tail.}
\end{figure}

The substack of $\Mbar_{g,I}$ which classifies marked curves of type $\gamma$ is denoted $\mc{M}_\gamma$.
These substacks stratify $\Mbar_{g,I}$, in the sense that the closure of any $\mc{M}_\gamma$ is a union of modular graph-labeled substacks. The boundary divisors in this stratification have normal crossings.

\subsection{Bundles on Nodal Curves}

The quotient stack $[\pt/\Cs]$ is, by definition, the classifying stack for principal $\Cs$-bundles. Any principal $\Cs$-bundle $p: \mP \to C$ determines and is determined by a map $\phi: C \to [\pt/\Cs]$ such that the following diagram commutes. \[ \xymatrix{\mP \ar[r] \ar[d]^p & \pt \ar[d] \\ C \ar[r]^{\phi} & [\pt/\Cs]} \] The degree $d$ of a principal $\Cs$-bundle $\mP$ is the degree $\phi_*[C] \in H_2^{\Cs}(\pt)$ of the associated morphism. The {\it multidegree} $\md$ of $\mP$ assigns to each irreducible component $C_v \subset C$ the degree $d_v$ of $\mP|_{C_v}$: \begin{align*} \md: V_\gamma &\to H_2^{\Cs}(\pt) \simeq \Z\\ v & \mapsto d_v, \end{align*} where $V_\gamma$ is the set of vertices of the modular graph $\gamma$ of $C$. \medskip

When reasoning about $\Cs$-bundles on a fixed nodal curve $C$, we shall often use the following fact: Let $\nu: \widetilde{C} \to C$ denote the normalization of $C$. A principal bundle $\mP$ on $C$ determines and is determined by the following data: \begin{enumerate} \item a principal $\Cs$-bundle $\wmP$ ($=\nu^*\mP$), and \item for each node $\sigma \in C$, a {\it gluing isomorphism} $g: \wmP_{\sigma^+} \simeq \wmP_{\sigma^-}$, which identifies the fibers of $\wmP$ over the preimages $\sigma^\pm$ of $\sigma$ under $\nu$. \end{enumerate} Morally, the gluing isomorphisms are ``transition functions'' for two open sets whose intersection is the node $\sigma$.

\begin{figure}[htb] \centering \[\begin{xy} (-20,0)*{C}; (-20,20)*{\mP}; (-2,0)*{\begin{xy} (0,0)*\xycircle(12,8){-}; (-5,1)*{};(1,1)*{} **\crv{(-2,-1)}; (-4,.5)*{};(0,.5)*{} **\crv{(-2,2)}; \end{xy}}; (-2,20)*{\begin{xy} (0,0)*{}; (24,0)*{} **\crv{(10,2)}; (0,20)*{}; (24,20)*{} **\crv{(10,22)}; (0,0)*{};(0,20)*{} **\dir{-}; (24,0)*{};(24,20)*{} **\dir{-}; \end{xy}}; (20,0)*{\begin{xy} (0,0)*\xycircle(10,8){-}; (-4,0)*{}; (4,0) *{} **\crv{(0,-3)}; (-3,-1)*{}; (3,-1)*{} **\crv{(0,2)}; \end{xy}}; (20,20)*{\begin{xy} (0,0)*{}; (20,0)*{} **\crv{(10,5)}; (0,20)*{}; (20,20)*{} **\crv{(10,25)}; (0,0)*{};(0,20)*{} **\dir{-}; (20,0)*{};(20,20)*{} **\dir{-}; \end{xy}}; (40,10)*{\simeq}; (60,0)*{\begin{xy} (0,0)*\xycircle(12,8){-}; (-5,1)*{};(1,1)*{} **\crv{(-2,-1)}; (-4,.5)*{};(0,.5)*{} **\crv{(-2,2)}; (12,0)*{\bullet}; \end{xy}}; (60,20)*{\begin{xy} (0,0)*{}; (24,0)*{} **\crv{(10,2)}; (0,20)*{}; (24,20)*{} **\crv{(10,22)}; (0,0)*{};(0,20)*{} **\dir{-}; (24,0)*{};(24,20)*{} **\dir{-}; \end{xy}}; (100,0)*{\begin{xy} (0,0)*\xycircle(10,8){-}; (-4,0)*{}; (4,0) *{} **\crv{(0,-3)}; (-3,-1)*{}; (3,-1)*{} **\crv{(0,2)}; (-10,0)*{\bullet}; \end{xy}}; (100,20)*{\begin{xy} (0,0)*{}; (20,0)*{} **\crv{(10,5)}; (0,20)*{}; (20,20)*{} **\crv{(10,25)}; (0,0)*{};(0,20)*{} **\dir{-}; (20,0)*{};(20,20)*{} **\dir{-}; \end{xy}}; {\ar@{->} (89,0)*{}; (73,0)*{}}; {\ar@{->} (89,20)*{}; (73,20)*{}}; (81,25)*{g}; (76,3)*{\sigma^+}; (88,3)*{\sigma^-}; (120,0)*{\widetilde{C}}; (120,20)*{\wmP}; \end{xy}\] \caption{Realizing $\mP$ as $\widetilde{\mP}$ together with a gluing datum $g$} \end{figure}

This picture can also be used to understand the automorphisms of $\Cs$-bundles. The automorphism group of the $\Cs$-bundle $\wmP$ is the product $(\Cs)^V \times (\Cs)^{2V'}$, where $V$ is the set of stable components of $C$ and $V'$ is the set of rational components which have two special points.
On a stable component of $\widetilde{C}$, we can rescale the fibers of $\wmP$, while on each unstable rational component (assumed to have two special points), we can rescale the bundle and we can lift the $\Cs$ of rotations of the curve. The group $(\Cs)^V \times (\Cs)^{V'}$ of rescalings acts on the set of gluing isomorphisms, in the obvious way: For example, if we have a gluing isomorphism $g$ which connects stable components $\widetilde{C}_v$ and $\widetilde{C}_{v'}$ (not necessarily different), then rescaling sends $g$ to $g_vgg_{v'}^{-1}$. An automorphism of $\wmP$ gives rise to an automorphism of $\mP$ if it fixes all the gluing isomorphisms. Note that, since $\Cs$ is commutative, the gluing isomorphisms associated to self-nodes are automatically fixed by automorphisms of $\wmP$.

\subsection{The Gieseker Completion}

By thinking about bundles on the normalization of a curve $C$, it is easy to see that $\Cs$-bundles on nodal curves can become singular in families. The problem is that the space of gluing isomorphisms at a node $\sigma \in C$ is a copy of $\Cs$; in families, these isomorphisms can approach the degenerate limits $0$ and $\infty$. More formally, the stack $\Bun_{\Cs}(g,I)$ of $\Cs$-bundles on stable marked curves of type $(g,I)$ does not satisfy the valuative criterion for completeness. We ``fill in the holes'' in $\Bun_{\Cs}(g,I)$ by enlarging the classification problem slightly: We allow copies of $\Pn^1$ to appear at the nodes of stable curves, and insist that these $\Pn^1$ carry degree $1$ bundles.

\begin{definition}\label{defmod} A morphism $m: C \to \Sigma$ of prestable curves is a {\it modification} if: \begin{enumerate} \item $m$ is an isomorphism on the complement of the preimage of the nodes of $\Sigma$, and \item the preimage under $m$ of every node in $\Sigma$ is either a node or a $\Pn^1$ with two special points. \end{enumerate} If $\sigma \in \Sigma$ is a node and $m^{-1}(\sigma) \simeq \Pn^1$, we will say that $\Sigma$ has been modified at $\sigma$. A modification $m: C \to \Sigma$ is a {\it modification of $I$-marked curves} if $\sigma_i = m^{-1}(\sigma_i')$. \end{definition}

Note (a) that the notion of a modification of marked curves makes sense in families, (b) that modifications of marked curves do not introduce bubbles at marked points, only at nodes, and (c) that a smooth curve $\Sigma$ has no non-trivial modifications.
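Concretely, here is a minimal sketch of the failure of the valuative criterion, in the notation above: fix a curve $\Sigma$ with a single node $\sigma$, fix the pullback $\wmP$ of a bundle to the normalization, and consider the family of bundles over the punctured disk $\Delta^* = \{0 < |t| < 1\}$ whose gluing isomorphism at $\sigma$ is multiplication by the coordinate:
\[
\mP_t \;\longleftrightarrow\; \big(\wmP,\, g_t = t \in \Cs\big), \qquad t \in \Delta^*.
\]
As $t \to 0$, the gluing datum leaves $\Cs$, so the family has no limit in $\Bun_{\Cs}(g,I)$; in the completion constructed below, the limit is instead a bundle on the modification of $\Sigma$ at $\sigma$ whose restriction to the inserted $\Pn^1$ has degree $1$.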
\begin{figure}[htb] \centering \[\begin{xy} (5,15)*{\Sigma}; (0,0)*{\begin{xy} (-2,0)*{\begin{xy} (0,0)*\xycircle(12,8){-}; (-5,1)*{};(1,1)*{} **\crv{(-2,-1)}; (-4,.5)*{};(0,.5)*{} **\crv{(-2,2)}; (-2,-2)*{\bullet}; (1,-3)*{\sigma_1}; \end{xy}}; (20,0)*{\begin{xy} (0,0)*\xycircle(10,8){-}; (-4,0)*{}; (4,0) *{} **\crv{(0,-3)}; (-3,-1)*{}; (3,-1)*{} **\crv{(0,2)}; \end{xy}}; \end{xy}}; (75,15)*{C}; (70,0)*{\begin{xy} (-2,0)*{\begin{xy} (0,0)*\xycircle(12,8){-}; (-5,1)*{};(1,1)*{} **\crv{(-2,-1)}; (-4,.5)*{};(0,.5)*{} **\crv{(-2,2)}; (-2,-2)*{\bullet}; (1,-3)*{\sigma_1}; \end{xy}}; (15,1)*{\begin{xy} (0,0)*\xycircle(5,5){-}; \end{xy}}; (30,0)*{\begin{xy} (0,0)*\xycircle(10,8){-}; (-4,0)*{}; (4,0) *{} **\crv{(0,-3)}; (-3,-1)*{}; (3,-1)*{} **\crv{(0,2)}; \end{xy}}; \end{xy}}; \end{xy}\] \caption{A nodal curve $\Sigma$ with a marked point $\sigma_1$ and its unique non-trivial modification $C$} \end{figure} \begin{definition} A {\it Gieseker map to $[\pt/\Cs]$} is a pair $((C,\sigma_i),\mP)$ consisting of a prestable marked curve $(C,\sigma_i)$ -- whose stabilization we denote $(\Sigma,\sigma_i)$ -- and a principal $\Cs$-bundle $p: \mP \to C$, for which \begin{enumerate} \item the stabilization morphism $m: C \to \Sigma$ is a modification of $I$-marked curves, and \item For every node $\sigma \in \Sigma$ for which $m^{-1}(\sigma) \simeq \Pn^1$, the restriction $\mP|_{m^{-1}(\sigma)}$ has degree $1$. \end{enumerate} The {\it degree} of a Gieseker bundle is the degree of the bundle $\mP$. \end{definition} We will frequently use the term ``Gieseker bubble'' for unstable $\Pn^1$'s carrying degree $1$ bundles. Likewise, we will often say that the pair $(m: C \to \Sigma,\mP)$ is a ``Gieseker bundle'' on the stable curve $(\Sigma,\sigma_i)$. \begin{ex}\label{GB04} Let $(\Sigma,\sigma_i)$ be a stable curve of genus zero with four marked points. We assume $\Sigma$ is nodal, with two components $\Pn_1^1$ and $\Pn_2^1$ meeting at an ordinary double point. (If $\Sigma$ is smooth, Gieseker bundles are just ordinary bundles, which are classified by their degree and have automorphism group $\Cs$.) 
Gieseker bundles on $\Sigma$ come in two flavors: either the modification $m: C \to \Sigma$ is trivial, so that $\mP$ is an ordinary $\Cs$-bundle on $\Sigma$ itself; or $\Sigma$ has been modified at its node, and $\mP$ lives on the modification, with degree $1$ on the Gieseker bubble.
\begin{figure}[htb]
\centering
% [PSTricks graphic omitted: the two flavors of Gieseker bundles on $\Sigma$]
\end{figure}
\curveto(235.0974735,19.48886921)(235.0974735,19.95761921)(234.9412235,20.33261921) \lineto(237.2224735,20.33261921) \curveto(236.9724735,19.98886921)(236.9724735,19.61386921)(236.9724735,19.30136921) \lineto(236.9724735,9.14511921) \curveto(236.9724735,8.95761921)(236.9724735,8.42636921)(237.1287235,8.05136921) \lineto(234.9412235,8.05136921) \closepath } } { \newrgbcolor{curcolor}{0 0 0} \pscustom[linestyle=none,fillstyle=solid,fillcolor=curcolor] { \newpath \moveto(249.27346838,23.46886494) \curveto(249.27346838,23.84386494)(249.27346838,23.84386494)(248.86721838,23.84386494) \curveto(247.96096838,22.96886494)(246.71096838,22.96886494)(246.14846838,22.96886494) \lineto(246.14846838,22.46886494) \curveto(246.46096838,22.46886494)(247.39846838,22.46886494)(248.14846838,22.84386494) \lineto(248.14846838,15.75011494) \curveto(248.14846838,15.28136494)(248.14846838,15.09386494)(246.77346838,15.09386494) \lineto(246.24221838,15.09386494) \lineto(246.24221838,14.59386494) \curveto(246.49221838,14.59386494)(248.21096838,14.65636494)(248.71096838,14.65636494) \curveto(249.14846838,14.65636494)(250.89846838,14.59386494)(251.21096838,14.59386494) \lineto(251.21096838,15.09386494) \lineto(250.67971838,15.09386494) \curveto(249.27346838,15.09386494)(249.27346838,15.28136494)(249.27346838,15.75011494) \lineto(249.27346838,23.46886494) \closepath } } { \newrgbcolor{curcolor}{0 0 0} \pscustom[linestyle=none,fillstyle=solid,fillcolor=curcolor] { \newpath \moveto(249.27346838,11.29287006) \curveto(249.27346838,11.66787006)(249.27346838,11.66787006)(248.86721838,11.66787006) \curveto(247.96096838,10.79287006)(246.71096838,10.79287006)(246.14846838,10.79287006) \lineto(246.14846838,10.29287006) \curveto(246.46096838,10.29287006)(247.39846838,10.29287006)(248.14846838,10.66787006) \lineto(248.14846838,3.57412006) \curveto(248.14846838,3.10537006)(248.14846838,2.91787006)(246.77346838,2.91787006) \lineto(246.24221838,2.91787006) \lineto(246.24221838,2.41787006) \curveto(246.49221838,2.41787006)(248.21096838,2.48037006)(248.71096838,2.48037006) \curveto(249.14846838,2.48037006)(250.89846838,2.41787006)(251.21096838,2.41787006) \lineto(251.21096838,2.91787006) \lineto(250.67971838,2.91787006) \curveto(249.27346838,2.91787006)(249.27346838,3.10537006)(249.27346838,3.57412006) \lineto(249.27346838,11.29287006) \closepath } } { \newrgbcolor{curcolor}{0 0 0} \pscustom[linestyle=none,fillstyle=solid,fillcolor=curcolor] { \newpath \moveto(90.662595,11.10224921) \curveto(90.756345,11.10224921)(91.162595,11.10224921)(91.193845,11.07099921) \lineto(91.881345,11.07099921) \curveto(95.943845,11.07099921)(97.381345,12.94599921)(97.381345,14.85224921) \curveto(97.381345,17.69599921)(94.818845,18.60224921)(92.256345,18.60224921) \lineto(86.443845,18.60224921) \curveto(86.068845,18.60224921)(85.756345,18.60224921)(85.756345,18.25849921) \curveto(85.756345,17.91474921)(86.131345,17.91474921)(86.287595,17.91474921) \curveto(87.350095,17.91474921)(87.412595,17.75849921)(87.412595,16.75849921) \lineto(87.412595,6.78974921) \curveto(87.412595,5.78974921)(87.350095,5.63349921)(86.318845,5.63349921) \curveto(86.131345,5.63349921)(85.756345,5.63349921)(85.756345,5.28974921) \curveto(85.756345,4.94599921)(86.068845,4.94599921)(86.443845,4.94599921) \lineto(91.787595,4.94599921) \curveto(92.131345,4.94599921)(92.443845,4.94599921)(92.443845,5.28974921) \curveto(92.443845,5.63349921)(92.100095,5.63349921)(91.881345,5.63349921) \curveto(90.756345,5.63349921)(90.662595,5.78974921)(90.662595,6.78974921) \lineto(90.662595,11.10224921) 
\closepath \moveto(93.943845,11.97724921) \curveto(94.600095,12.82099921)(94.662595,14.07099921)(94.662595,14.88349921) \curveto(94.662595,15.94599921)(94.568845,16.97724921)(94.037595,17.72724921) \curveto(95.131345,17.47724921)(96.693845,16.88349921)(96.693845,14.85224921) \curveto(96.693845,13.44599921)(95.787595,12.44599921)(93.943845,11.97724921) \closepath \moveto(90.662595,16.82099921) \curveto(90.662595,17.22724921)(90.662595,17.91474921)(91.850095,17.91474921) \curveto(93.287595,17.91474921)(93.943845,17.35224921)(93.943845,14.88349921) \curveto(93.943845,12.00849921)(92.912595,11.78974921)(91.662595,11.78974921) \lineto(90.662595,11.78974921) \lineto(90.662595,16.82099921) \closepath \moveto(87.943845,5.63349921) \curveto(88.100095,6.00849921)(88.100095,6.47724921)(88.100095,6.72724921) \lineto(88.100095,16.82099921) \curveto(88.100095,17.07099921)(88.100095,17.53974921)(87.943845,17.91474921) \lineto(90.225095,17.91474921) \curveto(89.975095,17.57099921)(89.975095,17.19599921)(89.975095,16.88349921) \lineto(89.975095,6.72724921) \curveto(89.975095,6.53974921)(89.975095,6.00849921)(90.131345,5.63349921) \lineto(87.943845,5.63349921) \closepath } } { \newrgbcolor{curcolor}{0 0 0} \pscustom[linestyle=none,fillstyle=solid,fillcolor=curcolor] { \newpath \moveto(102.27608988,21.05099494) \curveto(102.27608988,21.42599494)(102.27608988,21.42599494)(101.86983988,21.42599494) \curveto(100.96358988,20.55099494)(99.71358988,20.55099494)(99.15108988,20.55099494) \lineto(99.15108988,20.05099494) \curveto(99.46358988,20.05099494)(100.40108988,20.05099494)(101.15108988,20.42599494) \lineto(101.15108988,13.33224494) \curveto(101.15108988,12.86349494)(101.15108988,12.67599494)(99.77608988,12.67599494) \lineto(99.24483988,12.67599494) \lineto(99.24483988,12.17599494) \curveto(99.49483988,12.17599494)(101.21358988,12.23849494)(101.71358988,12.23849494) \curveto(102.15108988,12.23849494)(103.90108988,12.17599494)(104.21358988,12.17599494) \lineto(104.21358988,12.67599494) \lineto(103.68233988,12.67599494) \curveto(102.27608988,12.67599494)(102.27608988,12.86349494)(102.27608988,13.33224494) \lineto(102.27608988,21.05099494) \closepath } } { \newrgbcolor{curcolor}{0 0 0} \pscustom[linestyle=none,fillstyle=solid,fillcolor=curcolor] { \newpath \moveto(104.65108988,2.53125006) \lineto(104.18233988,2.53125006) \curveto(104.15108988,2.21875006)(103.99483988,1.40625006)(103.80733988,1.28125006) \curveto(103.71358988,1.18750006)(102.65108988,1.18750006)(102.43233988,1.18750006) \lineto(99.86983988,1.18750006) \curveto(101.33858988,2.46875006)(101.83858988,2.87500006)(102.65108988,3.53125006) \curveto(103.68233988,4.34375006)(104.65108988,5.21875006)(104.65108988,6.53125006) \curveto(104.65108988,8.21875006)(103.18233988,9.25000006)(101.40108988,9.25000006) \curveto(99.68233988,9.25000006)(98.49483988,8.03125006)(98.49483988,6.75000006) \curveto(98.49483988,6.06250006)(99.08858988,5.96875006)(99.24483988,5.96875006) \curveto(99.55733988,5.96875006)(99.96358988,6.21875006)(99.96358988,6.71875006) \curveto(99.96358988,6.96875006)(99.86983988,7.46875006)(99.15108988,7.46875006) \curveto(99.58858988,8.43750006)(100.52608988,8.75000006)(101.18233988,8.75000006) \curveto(102.58858988,8.75000006)(103.30733988,7.65625006)(103.30733988,6.53125006) \curveto(103.30733988,5.31250006)(102.43233988,4.37500006)(101.99483988,3.87500006) \lineto(98.65108988,0.53125006) \curveto(98.49483988,0.40625006)(98.49483988,0.37500006)(98.49483988,0.00000006) \lineto(104.24483988,0.00000006) \lineto(104.65108988,2.53125006) \closepath 
} } { \newrgbcolor{curcolor}{0 0 0} \pscustom[linestyle=none,fillstyle=solid,fillcolor=curcolor] { \newpath \moveto(358.843345,12.72463921) \curveto(358.937095,12.72463921)(359.343345,12.72463921)(359.374595,12.69338921) \lineto(360.062095,12.69338921) \curveto(364.124595,12.69338921)(365.562095,14.56838921)(365.562095,16.47463921) \curveto(365.562095,19.31838921)(362.999595,20.22463921)(360.437095,20.22463921) \lineto(354.624595,20.22463921) \curveto(354.249595,20.22463921)(353.937095,20.22463921)(353.937095,19.88088921) \curveto(353.937095,19.53713921)(354.312095,19.53713921)(354.468345,19.53713921) \curveto(355.530845,19.53713921)(355.593345,19.38088921)(355.593345,18.38088921) \lineto(355.593345,8.41213921) \curveto(355.593345,7.41213921)(355.530845,7.25588921)(354.499595,7.25588921) \curveto(354.312095,7.25588921)(353.937095,7.25588921)(353.937095,6.91213921) \curveto(353.937095,6.56838921)(354.249595,6.56838921)(354.624595,6.56838921) \lineto(359.968345,6.56838921) \curveto(360.312095,6.56838921)(360.624595,6.56838921)(360.624595,6.91213921) \curveto(360.624595,7.25588921)(360.280845,7.25588921)(360.062095,7.25588921) \curveto(358.937095,7.25588921)(358.843345,7.41213921)(358.843345,8.41213921) \lineto(358.843345,12.72463921) \closepath \moveto(362.124595,13.59963921) \curveto(362.780845,14.44338921)(362.843345,15.69338921)(362.843345,16.50588921) \curveto(362.843345,17.56838921)(362.749595,18.59963921)(362.218345,19.34963921) \curveto(363.312095,19.09963921)(364.874595,18.50588921)(364.874595,16.47463921) \curveto(364.874595,15.06838921)(363.968345,14.06838921)(362.124595,13.59963921) \closepath \moveto(358.843345,18.44338921) \curveto(358.843345,18.84963921)(358.843345,19.53713921)(360.030845,19.53713921) \curveto(361.468345,19.53713921)(362.124595,18.97463921)(362.124595,16.50588921) \curveto(362.124595,13.63088921)(361.093345,13.41213921)(359.843345,13.41213921) \lineto(358.843345,13.41213921) \lineto(358.843345,18.44338921) \closepath \moveto(356.124595,7.25588921) \curveto(356.280845,7.63088921)(356.280845,8.09963921)(356.280845,8.34963921) \lineto(356.280845,18.44338921) \curveto(356.280845,18.69338921)(356.280845,19.16213921)(356.124595,19.53713921) \lineto(358.405845,19.53713921) \curveto(358.155845,19.19338921)(358.155845,18.81838921)(358.155845,18.50588921) \lineto(358.155845,8.34963921) \curveto(358.155845,8.16213921)(358.155845,7.63088921)(358.312095,7.25588921) \lineto(356.124595,7.25588921) \closepath } } { \newrgbcolor{curcolor}{0 0 0} \pscustom[linestyle=none,fillstyle=solid,fillcolor=curcolor] { \newpath \moveto(370.45683988,22.67338494) \curveto(370.45683988,23.04838494)(370.45683988,23.04838494)(370.05058988,23.04838494) \curveto(369.14433988,22.17338494)(367.89433988,22.17338494)(367.33183988,22.17338494) \lineto(367.33183988,21.67338494) \curveto(367.64433988,21.67338494)(368.58183988,21.67338494)(369.33183988,22.04838494) \lineto(369.33183988,14.95463494) \curveto(369.33183988,14.48588494)(369.33183988,14.29838494)(367.95683988,14.29838494) \lineto(367.42558988,14.29838494) \lineto(367.42558988,13.79838494) \curveto(367.67558988,13.79838494)(369.39433988,13.86088494)(369.89433988,13.86088494) \curveto(370.33183988,13.86088494)(372.08183988,13.79838494)(372.39433988,13.79838494) \lineto(372.39433988,14.29838494) \lineto(371.86308988,14.29838494) \curveto(370.45683988,14.29838494)(370.45683988,14.48588494)(370.45683988,14.95463494) \lineto(370.45683988,22.67338494) \closepath } } { \newrgbcolor{curcolor}{0 0 0} 
\end{pspicture}
\caption{Gieseker bundles on $\Pn^1 \cup \Pn^1$. Components are labelled with their degrees.}
\end{figure}
\begin{enumerate}
\item Ordinary $\Cs$-bundles on $\Sigma$: These are classified (up to isomorphism) by their multidegrees $\md = (d_1,d_2)$. The automorphism group of any such bundle $\mP$ is a copy of $\Cs$, which rescales the fibers of $\mP$.
\item Gieseker bundles on the unique modification $C$ of $\Sigma$: These are also classified by their multidegrees $\md = (d_1,1,d_2)$. The automorphism group of any such bundle is a copy of $(\Cs)^2$. One can rescale the bundle fibers, and one can lift the automorphisms of the unstable $\Pn^1$ to automorphisms of the bundle. (This is easily seen by working on the normalization $\widetilde{C}$.)
\end{enumerate}
\end{ex}
\begin{ex}\label{GBon11}
Let $(\Sigma,\sigma_1)$ be the ``boundary divisor'' in $\Mbar_{1,1}$, a curve whose single rational component has one marked point ($\sigma_1$) and one self-node ($\sigma$). Let's fix a total degree $d$. The degree $d$ Gieseker bundles on $(\Sigma,\sigma_1)$ come in two flavors: ordinary $\Cs$-bundles on $\Sigma$ and bundles on the modification of $\Sigma$ at $\sigma$.
\begin{figure}[htb]
\[\begin{xy} (40,0)*{\begin{xy} (0,0)*\xycircle(12,8){-}; (6,0)*\xycircle(6,3){-}; (-3,-3)*{d}; (-3,2)*{\bullet}; \end{xy}}; (80,0)*{\begin{xy} (14,0)*\xycircle(3,3){-}; (14,0)*{1}; (14,3)*{};(14,-3)*{} **\crv{(19,6) & (0,14) & (-14,0) & (0,-14)& (19,-6)}; (14,3)*{};(14,-3)*{} **\crv{(10,6) & (3,0) & (10,-6)}; (0,-3)*{d-1}; (0,3)*{\bullet}; \end{xy}}; \end{xy}\]
\caption{Pictures of $(\Sigma,\sigma_1)$ and its modification $(C,\sigma_1)$.}
\end{figure}
The normalization of $\Sigma$ is a copy of $\Pn^1$. Line bundles on $\Pn^1$ are classified by their degree, so the ordinary degree $d$ bundles on $\Sigma$ are classified (up to isomorphism) by the gluing data associated to $\sigma$. The space of such gluing data is a copy of $\Cs$. $\sigma$ is a self-node, so these gluing isomorphisms are fixed by bundle rescalings. Hence the automorphism group of an ordinary bundle on $\Sigma$ is a copy of $\Cs$.
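To spell out the middle claim: a global rescaling by $\lambda \in \Cs$ acts on the fibers over both branches of the self-node by the same factor, so it transforms a gluing isomorphism $g$ by $g \mapsto \lambda\, g\, \lambda^{-1} = g$.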
The modification $C$ has two rational components and two nodes, so a bundle on $C$ is specified by two gluing isomorphisms. However, these gluing isomorphisms do not classify bundles up to isomorphism; we can rescale the bundle differently on each component of the normalization $\widetilde{C}$, and we can lift the rotations of the Gieseker bubble. A rescaling and the rotations can be used to fix the gluing isomorphisms, so (up to isomorphism) there is only {\it one} bundle on the modification which satisfies the Gieseker conditions. The automorphism group of this bundle is a copy of $\Cs$, coming from the remaining automorphism of the bundle data on the normalization.
\end{ex}
\begin{remark}\label{usefullie}
The following story, though not rigorous, may help convey the intuition behind these definitions. Fix a nodal curve $\Sigma$ and suppose that we have a family of $\Cs$-bundles on $\Sigma$, parametrized by a coordinate $t$, for which the gluing isomorphism $g: \mP_{\sigma^+} \to \mP_{\sigma^-}$ over a node $\sigma \in \Sigma$ approaches $0$ as $t \to 0$. (Assuming that $g \to 0$ is no loss of generality; the other limit $g \to \infty$ is equivalent to $g^{-1} \to 0$.) We want to replace this singular limit with a bundle defined on some other curve $C$.
One can guess how to do this by looking at a section $s$ of the associated line bundle $V = \mP \times_{\Cs} \C$. If we lift $s$ to a section $\tilde{s}$ on the normalization, it must obey
\[ \tilde{s}(\sigma^+) = g \tilde{s}(\sigma^-). \]
In the limit $g \to 0$, we must have $\tilde{s}(\sigma^+) \to 0$. By continuity, the section $s$ on $\Sigma$ must have a zero which approaches the node as $g \to 0$. (When $g \to \infty$, the zero approaches the node from the other side.)
To keep track of how a single zero $z$ approaches the node, we should replace the singular limit with a bundle on a new curve $C$, obtained by inserting a $\Pn^1$ at the node. (This $\Pn^1$ records the way the zero approached the node.) The section on this new component must have one zero and no poles, so the degree of the new bundle on this component must be $1$. The total degree of the bundle is a topological invariant, so the degree of the bundle on the original component will drop by $1$.
\begin{figure}[htb] \centering \[\begin{xy} (0,0)*{\begin{xy} (-2,0)*{\begin{xy} (0,12)*{d}; (0,0)*\xycircle(12,8){-}; (-5,1)*{};(1,1)*{} **\crv{(-2,-1)}; (-4,.5)*{};(0,.5)*{} **\crv{(-2,2)}; (5,-2)*{\bullet};(2,-3)*{z} {\ar (5,-2)*{};(10,0)*{}}; \end{xy}}; (20,0)*{\begin{xy} (0,0)*\xycircle(10,8){-}; (-4,0)*{}; (4,0) *{} **\crv{(0,-3)}; (-3,-1)*{}; (3,-1)*{} **\crv{(0,2)}; (0,12)*{d'}; \end{xy}}; \end{xy}}; {\ar@{~>} (27,-3)*{};(37,-3)*{}}; (70,0)*{\begin{xy} (-2,0)*{\begin{xy} (0,12)*{d-1}; (0,0)*\xycircle(12,8){-}; (-5,1)*{};(1,1)*{} **\crv{(-2,-1)}; (-4,.5)*{};(0,.5)*{} **\crv{(-2,2)}; \end{xy}}; (15,1)*{\begin{xy} (0,0)*\xycircle(5,5){-}; (0,12)*{1}; (0,2)*{\bullet};(-1,-2)*{z}; \end{xy}}; (30,0)*{\begin{xy} (0,0)*\xycircle(10,8){-}; (0,12)*{d'}; (-4,0)*{}; (4,0) *{} **\crv{(0,-3)}; (-3,-1)*{}; (3,-1)*{} **\crv{(0,2)}; \end{xy}}; \end{xy}}; \end{xy}\]
\caption{A zero approaching a node, leading to a Gieseker bubble.}
\end{figure}
\end{remark}
\begin{definition}
$\Mtwid_{g,I}([\pt/\Cs])$ is the fibered category (over $\C$-schemes) which classifies Gieseker bundles on stable genus $g$, $I$-marked curves.
Its objects are tuples $(B,C,\sigma_i,\mP)$ consisting of
\begin{itemize}
\item a test scheme $B$,
\item a flat projective family $\pi:C \to B$ of pre-stable $I$-marked, genus $g$ curves with marked points $\sigma_i: B \to C$, and
\item a principal $\Cs$-bundle $p:\mP \to C$
\end{itemize}
such that for every closed point $b \in B$, the pair $((C_b,\sigma_{b,i}),\mP|_{C_b})$ is a Gieseker map to $[\pt/\Cs]$. The morphisms in this category are Cartesian diagrams
$$ \xymatrix{\mP' \ar[r]^{\tilde{f}} \ar[d]^{p'} & \mP \ar[d]^p \\ C' \ar[r]^{f} \ar[d]^{\pi'} & C \ar[d]^\pi \\ B' \ar[r] & B}, $$
where $\tilde{f}$ is $\Cs$-equivariant and $\sigma_i = f \circ \sigma_i'$.
\end{definition}
\begin{notation}\label{degeneracystrata}
The {\it degeneracy type} of a pair $((C,\sigma_i),\mP)$ is the pair $(\gamma,\md)$ consisting of the modular graph $\gamma$ of $(C,\sigma_i)$ and the multidegree $\md$ of $\mP$. $\mc{M}_{(\gamma,\md)}$ is the substack of $\Mtwid_{g,I}([\pt/\Cs])$ which classifies bundles of degeneracy type $(\gamma,\md)$.
\end{notation}
\begin{remark}
The condition that a pair $((C,\sigma_i),\mP)$ be a Gieseker map can be phrased purely in terms of the degeneracy type.
\end{remark}
In Section \ref{usefulatlas}, we will introduce an atlas for $\Mtwid_{g,I}([\pt/\Cs])$ (thereby proving it to be a stack) and discuss its geometric properties. In the remainder of this section, we introduce notation for our stack's universal families and its modular graph decomposition, and then discuss a few key examples.
\subsection{Universal Families}
By virtue of its definition, $\Mtwid_{g,I}([\pt/\Cs])$ carries several universal families. It carries a family of semi-stable curves
\[ \pi: C \to \Mtwid_{g,I}([\pt/\Cs]) \]
with marked points
\[ \sigma_i: \Mtwid_{g,I}([\pt/\Cs]) \to C \]
indexed by $I$, and it has a principal $\Cs$-bundle
\[ p: \mP \to C. \]
The universal $\Cs$-bundle defines a universal map
\[ \phi: C \to [\pt/\Cs] \]
and the compositions $\ev_i = \phi\circ \sigma_i$ are the {\it evaluation maps}
\[ \ev_i: \Mtwid_{g,I}([\pt/\Cs]) \to [\pt/\Cs] \qquad (i \in I). \]
Finally, there is a natural morphism
\[ F: \Mtwid_{g,I}([\pt/\Cs]) \to \Mbar_{g,I} \]
which forgets the principal bundle $\mP$ and sends the semi-stable marked curve $(C,\sigma_i)$ to its stabilization.
\subsection{Examples}
We treat three examples here. The first is trivial. The second is the simplest example in which the Gieseker bubbles can be seen. The third illustrates how the substacks classifying non-trivial Gieseker bundles ``fill in the gaps'' between the substacks parametrizing bundles which live on the same nodal curve but have different multidegrees.
\begin{ex}[genus zero, 3 marked points]
$\Mbar_{0,3}$ is a point; there is, up to equivalence, only one stable genus zero curve with 3 marked points. On this curve there is, up to equivalence, exactly one bundle of each degree $d \in \Z$. The automorphism group of $\mc{O}_{\mb{P}^1}(d)$ is a copy of $\Cs$, so we conclude
\[ \Mtwid_{0,3}([\pt/\Cs]) \simeq \bigsqcup_{d\in\Z} [\pt/\Cs]. \]
\end{ex}
\begin{ex}[genus one, 1 marked point]\label{Ex11}
This example shows the Gieseker completion at work. Let's fix the total degree $d$ of the Gieseker bundles. We'll represent the stack $\Mtwid^d_{1,1}([\pt/\Cs])$ of degree $d$ Gieseker bundles as a quotient $[A/\Cs]$, where $A$ is the stack which classifies pairs $((C,\mP),t)$ consisting of a Gieseker bundle and a trivialization $t: \mP_{\sigma_1} \simeq \Cs$ of the Gieseker bundle at the marked point $\sigma_1$.
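(Here $\Cs$ acts on $A$ by rescaling the trivialization $t$; compare Proposition \ref{prop_atlas} below.)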
$A$ also comes equipped with a forgetful map $f: A \to \Mbar_{1,1}$. Over smooth curves $(\Sigma,\sigma_1)$, the fibers of $f$ are copies of the Jacobian $\Jac(\Sigma)$ of $\Sigma$. The Jacobian of $\Sigma$ is a copy of $\Sigma$, so over $\mc{M}_{1,1}$, the stack $A$ is simply a copy of the universal curve $\Sigma_{1,1}$.
In fact, this is also true over the boundary divisor of $\Mbar_{1,1}$. We saw in Example \ref{GBon11} that the space of Gieseker bundles (with a trivialization $t$ to eliminate the global rescaling automorphisms) is obtained by gluing a copy of the point $\pt$ to a copy of $\Cs$. This gluing identifies the $\pt$ with both $0$ and $\infty$, so the resulting curve is a copy of $(\Sigma,\sigma_1)$.
\end{ex}
\begin{ex}[genus zero, $4$ marked points]\label{Ex04}
Recall that $\Mbar_{0,4}$ is isomorphic to $\Pn^1$. The open locus classifying smooth curves is $\Pn^1 \setminus \{0,1,\infty\}$, and the boundary divisor $\{0,1,\infty\}$ classifies reducible nodal curves with two marked points on each component.
Let $B = \Pn^1 \setminus \{1,\infty\} = \mb{A}^1 \setminus \{1\}$ and consider the family $(\Sigma,\sigma_i): B \hookrightarrow \Mbar_{0,4}$ of marked curves obtained by restricting the universal marked curve $\Sigma_{0,4}$ to $B$. This family is a deformation of the curve $(\Sigma_o,\sigma_{o,i})$ which has two components meeting at a common node, each carrying two marked points.
\begin{figure}[htb] \centering \[\begin{xy} (0,0)*{}; (40,0)*{} **\dir{-}; (10,0)*{|}; (10,-5)*{b}; (30,0)*{|}; (30,-5)*{0}; (10,5)*{};(10,25)*{} **\crv{(15,15)}; (11,7)*{\bullet}; (8,7)*{0}; (11,23)*{\bullet}; (8,23)*{\infty}; (12,11)*{\bullet}; (9,11)*{b}; (12,19)*{\bullet}; (9,19)*{1}; (30,4)*{};(35,20)*{} **\dir{-}; (30,26)*{};(35,10)*{} **\dir{-}; (31,7)*{\bullet}; (31,23)*{\bullet}; (32,11)*{\bullet}; (32,19)*{\bullet}; (-10,0)*{B}; (-10,15)*{\Sigma}; {\ar@{->} (-10,12)*{};(-10,3)}; \end{xy}\] \end{figure}
We will describe the stack $\Mtwid_{0,4}([\pt/\Cs])$ by giving an atlas $A$ for the restriction $F|_B: \Mtwid \to B$ of $F: \Mtwid_{0,4}([\pt/\Cs]) \to \Mbar_{0,4}$ to $B$. (Nearly identical descriptions apply near $1$ and $\infty$ in $\Mbar_{0,4}$, and $\Mtwid_{0,4}([\pt/\Cs])$ is obtained by gluing these local descriptions.)
Let $V$ be the subset $\{0,\infty\} \subset I$. The atlas $A$ is the algebraic space (scheme, in fact) which classifies isomorphism classes of tuples
\[ (C,\sigma_i,\mP,t_0,t_\infty) \]
where $(C,\sigma_i,\mP)$ is a family of Gieseker bundles on the curve $\Sigma/B$ and the $t_v \in \Gamma(B,\sigma_v^*\mP)$ are families of trivializations based at the marked points $0$ and $\infty$. (For any closed point $b \in B$, $t_v(b)$ is a point in the fiber of $\mP$ at $\sigma_v(b)$.)
$A$ is a disjoint union
\[ A = \sqcup_{d \in \Z} A_d \]
of isomorphic subschemes which classify Gieseker bundles of total degree $d$. In the remainder of this example, we fix a total degree $d$ and focus on $A_d$.
Gieseker bundles on the fibers of $\pi: \Sigma \to B$ were classified in Example \ref{GB04}. Adding the two trivializations eliminates the automorphisms, and introduces an extra degree of freedom on the unmodified curves, which may be thought of as the ratio of the two trivializations. Thus, the generic fiber of the forgetful morphism $f: A_d \to B$ is isomorphic to $\Cs$, while the special fiber is an endless chain of rational curves $\cup_{n\in\Z} \Pn^1_{n}$. The open $\Cs_{n} \subset \Pn^1_{n}$ classifies bundles-with-trivializations whose multidegree is $(d+n,-n)$.
The intersection points $\pt_{n} = \Pn^1_{n-1} \cap \Pn^1_{n}$ classify bundles-with-trivializations whose multidegree is $(d+n-1,1,-n)$. (For reasons of space, we have drawn only finitely many of the $\Pn^1$'s in the figure below.)
\begin{figure}[htb] \centering \[\begin{xy} (0,0)*{}; (40,0)*{} **\dir{-}; (10,0)*{|}; (10,-5)*{b}; (30,0)*{|}; (30,-5)*{0}; (10,10)*{};(12,20)*{} **\dir{-}; (28,3)*{};(32,10)*{} **\dir{.}; (32,8)*{};(28,15)*{} **\dir{-}; (28,13)*{};(32,20)*{} **\dir{-}; (32,18)*{};(28,25)*{} **\dir{-}; (28,23)*{};(32,30)*{} **\dir{-}; (32,28)*{};(28,35)*{} **\dir{.}; (40,17)*{\Cs_n}; (40,11)*{\Cs_{n-1}}; (22,14)*{\pt_{n}}; (-10,0)*{B}; (-10,15)*{A_{d}}; {\ar@{->} (-10,12)*{};(-10,3)}; (-13,7.5)*{f}; \end{xy}\] \end{figure}
The action of the group $(\Cs)^V = \Cs_0 \times \Cs_\infty$ which rescales the trivializations preserves the forgetful morphism $f$. The diagonal $\Cs_\Delta \subset (\Cs)^V$ acts trivially.
For later computations, it is useful to have an explicit \v{C}ech cover of $A$. Let
\[ A_{d,n} = \Spec \C[z_n,w_n] \simeq \mb{A}^2. \]
We obtain $A_d$ by gluing $A_{d,n-1}$ to $A_{d,n}$, identifying the open sets $U_{w_n}$ and $U_{z_{n-1}}$ via the relation $w_n = 1/z_{n-1}$. (Here $U_{w_n} = \Spec \C[z_n,w_n][w_n^{-1}] \simeq \mb{A}^1 \times \Cs$; likewise, $U_{z_{n-1}}$.) The morphism $f: A \to B$ is given by $b=z_nw_n$, where $b$ is the standard coordinate on $B \subset \mb{A}^1 = \Spec \C[b]$. (On overlaps, the relations $b = z_nw_n = z_{n-1}w_{n-1}$ and $w_n = 1/z_{n-1}$ give $z_n = b\,z_{n-1}$ and $w_{n-1} = b\,w_n$, so the gluings are compatible with $f$.) In this notation, $\pt_n$ is the origin, $\Cs_{n}$ is the $z_n$ coordinate axis, and $\Cs_{n-1}$ is the $w_n$ coordinate axis. Finally, $(\Cs)^{V} \simeq (\Cs)^2$ acts on $A_d$, with weight $(1,-1)$ on $z_n$ and weight $(-1,1)$ on $w_n$. Thus, the diagonal acts trivially, and the fixed points are picked out by $z_n=w_n=0$. Generically, one may think of $z_n \propto w_n^{-1} \propto t_0/t_\infty$.
\end{ex}
\section{The Geometry of $\Mtwid_{g,I}([\pt/\Cs])$}\label{usefulatlas}
In this section, we describe the geometry of $\Mtwid_{g,I}([\pt/\Cs])$. First, we exhibit an atlas for our stack; we will use this atlas -- denoted $A$ -- in the proof that the Gromov-Witten invariants of $[\pt/\Cs]$ are well-defined. While discussing atlases, we also prove that the stack of Gieseker bundles satisfies a valuative criterion for completeness. Second, we discuss the deformations of Gieseker bundles. We use this information to describe the connected components of $\Mtwid_{g,I}([\pt/\Cs])$ and to give $\Mtwid_{g,I}([\pt/\Cs])$ a stratification by degeneracy type.
\subsection{Our Favorite Atlas}
The atlas defined in Proposition \ref{prop_atlas} below generalizes the ones in Examples \ref{Ex11} and \ref{Ex04}.
Let $(\Sigma_o,\sigma_{o,i})$ be a point in $\Mbar_{g,I}$, with modular graph $\gamma_o$, and let $B$ be an affine \'{e}tale coordinate chart centered at $(\Sigma_o,\sigma_{o,i})$, represented by a curve $(\Sigma,\sigma_i)$ over $B$. We can assume with no loss of generality that the modular graph strata of $B$ coincide with the coordinate axes, so that $(\Sigma_o,\sigma_{o,i})$ is the most degenerate curve in the family. $\Mbar_{g,I}$ is covered by charts of this form.
We denote the fiber of $F$ over $B$ by $\Mtwid_{(\Sigma,\sigma_i)}$. This stack classifies Gieseker bundles on the family $(\Sigma,\sigma_i)$.
\[ \xymatrix{\Mtwid_{(\Sigma,\sigma_i)} \ar[r] \ar[d] & \Mtwid_{g,I}([\pt/\Cs]) \ar[d]^{F} \\ B \ar[r] & \Mbar_{g,I}} \]
We want to exhibit an atlas for $\Mtwid_{(\Sigma,\sigma_i)}$.
In the moduli theory of bundles, one often obtains atlases by fixing one or more marked points and then parametrizing bundles equipped with trivializations at these marked points. We would like to do this in families.
\begin{definition}
Suppose that $C$ comes equipped with a marked point $\sigma: B \to C$. A {\it family of trivializations based at $\sigma$} is a section $t \in \Gamma(B,\sigma^*\mP)$.
\end{definition}
Such families of trivializations induce isomorphisms $t_{\sigma(b)}: \mP_{\sigma(b)} \simeq \Cs$ for every geometric point $b \in B$.
\begin{proposition}\label{prop_atlas}
\begin{enumerate}
\item Let $V = V_{\gamma_o}$ be the vertex set of the modular graph of $(\Sigma_o,\sigma_{o,i})$. After \'{e}tale refinement of the neighborhood $B$, we may choose a collection of marked points $\sigma_v: B \to \Sigma$ (with $v \in V$) such that every component of every geometric fiber of $\Sigma$ carries at least one of the $\sigma_v$.
\item Let $A = A_{(\Sigma,\sigma_i)}(\{\sigma_v\})$ denote the stack which classifies triples
\[ ((C,\sigma_i),\ p: \mP \to C,\ \{t_v \in \Gamma(B,\sigma_v^*\mP)\}_{v\in V_{\gamma_o}}) \]
of Gieseker bundles with trivializations at the selected marked points. $A$ is an algebraic space. Moreover, the natural action of $\mc{G} = (\Cs)^V$, with the $v$-th copy $\Cs_v$ rescaling the $v$-th trivialization, displays $\Mtwid_{(\Sigma,\sigma_i)}$ as the quotient stack
\[ \Mtwid_{(\Sigma,\sigma_i)} = [A/\mc{G}]. \]
\end{enumerate}
\end{proposition}
\begin{proof}
The first claim follows from the local geometry of the stack of marked curves. If we choose one new marked point $\sigma_v$ on each component of $\Sigma_o$, we obtain a point in the moduli stack $\Mbar_{g,I \sqcup V}$. After \'{e}tale refinement $B' \to B$, there are no obstructions to lifting the morphism $B' \to \Mbar_{g,I}$ to a morphism $B' \to \Mbar_{g,I\sqcup V}$.
To prove the second claim, we observe that the $(\Cs)^V$ action preserves the fibers of the forgetful map $f: A \to B$. Representing a bundle on $C$ as a bundle on $\widetilde{C}$ together with gluing isomorphisms, it is easy to see that the trivializations eliminate all of the automorphisms of the Gieseker bundles. It follows that $A$ is an algebraic space. The action of $(\Cs)^{V}$ is simply transitive on the trivializations, so quotienting by this action is equivalent to forgetting the trivializations.
\end{proof}
\begin{corollary}
$\Mtwid_{g,I}([\pt/\Cs])$ is a smooth Artin stack.
\end{corollary}
\begin{proof}
Choose a collection $\{B_\alpha\}$ of \'{e}tale neighborhoods which cover $\Mbar_{g,I}$. The collection $\{A_\alpha\}$ covers $\Mtwid_{g,I}([\pt/\Cs])$.
\end{proof}
\begin{corollary}
The dimension of $\Mtwid_{g,I}([\pt/\Cs])$ is
\[ \dim(\Cs)(g-1) + 3(g-1) + |I|. \]
\end{corollary}
(As a consistency check: for $g=1$ and $|I|=1$ the formula gives $1$, matching Example \ref{Ex11}, where $\Mtwid^d_{1,1}([\pt/\Cs]) \simeq [\Sigma_{1,1}/\Cs]$ has dimension $2-1=1$; for $g=0$ and $|I|=3$ it gives $-1$, the dimension of $\bigsqcup_{d\in\Z}[\pt/\Cs]$.)
\begin{proposition}\label{completeness}
The stack $\Mtwid_{g,I}([\pt/\Cs])$ satisfies the ``valuative criterion for completeness'': Let $R$ be a complete discrete valuation ring, with fraction field $K$, and consider the ``disc'' $D = \Spec(R)$ and the ``punctured disc'' $D^\times = \Spec(K)$. Given a family $((C,\sigma_i),\mP)$ of Gieseker bundles on $D^\times$, we can extend the family to $D$ (possibly after \'{e}tale base change).
\end{proposition}
\begin{proof}
Although rarely stated in this form, this result is implicit in the literature on GIT constructions of coarse moduli spaces. To keep this paper somewhat self-contained, we sketch a proof.
$\Mbar_{g,I}$ is complete, so it is enough to prove the valuative criterion for the forgetful morphism $F: \Mtwid_{g,I}([\pt/\Cs]) \to \Mbar_{g,I}$. We fix a stable curve $(\Sigma,\sigma_i)$ extending the stabilization of $C$ to (an \'{e}tale refinement of) $D$. The result now follows from the projectivity of coarse moduli spaces of S-equivalence classes of families of rank $1$ torsion-free sheaves on varying nodal curves.
More precisely: A result of Seshadri \cite{MR699278} implies that there is a surjective morphism from the fiber of $F$ over the chosen family $(\Sigma,\sigma_i): D \to \Mbar_{g,I}$ onto the stack of stable rank $1$ torsion-free sheaves on the family $\Sigma$; we associate to any principal $\Cs$-bundle $\mP$ the pushforward $m_*E$ of the line bundle $E$ associated to $\mP$ and the standard representation of $\Cs$. Simpson's GIT construction \cite{MR1307297}, based on Maruyama's boundedness results \cite{MR637512}, guarantees that the coarse algebraic moduli space of S-equivalence classes of rank $1$ torsion-free sheaves on a family of projective curves is projective and hence complete. Completeness of the coarse moduli space implies the existence (though not uniqueness) of some family of torsion-free sheaves extending any $m_*E$, hence the existence (though not uniqueness) of a family of Gieseker bundles extending the given one.
\end{proof}
The stack $\Mtwid_{g,I}([\pt/\Cs])$ is not separated, because the degree $d$ of a bundle can split as $d=(d+n)-n$ for any $n\in\Z$ if the curve develops a node. However, this is essentially the only source of non-separatedness.
\begin{proposition}
The fiber of $F$ at any nodal curve $(\Sigma,\sigma_i)$ in $\Mbar_{g,I}$ is separated.
\end{proposition}
\subsection{Deformations}
First, we establish the existence of deformations.
\begin{proposition}
$\Mtwid_{g,I}([\pt/\Cs])$ is unobstructed.
\end{proposition}
\begin{proof}
The obstructions to deforming a bundle $\mP$ on a fixed curve $C$ are captured by the Ext-group $\op{Ext}^2((p_*L_{\mP})^{\Cs},\mc{O}_{C})$. Here $\mc{O}_C$ is the structure sheaf of $C$ and $(p_*L_{\mP})^{\Cs}$ denotes the $\Cs$-invariants in the pushdown of the cotangent complex of $\mP$ along the structure morphism $p:\mP \to C$. This Ext-group vanishes because $C$ is one-dimensional. The vanishing of obstructions to deforming the curve and the bundle together follows from the tangent-obstruction sequence derived from the exact triangle associated to the cotangent complex $L_{\mP/C}$.
\end{proof}
It follows that any Gieseker bundle $(C_o,\sigma_{o,i},\mP_o)$ can be deformed to a family $(C,\sigma_{i},\mP)$ over the Spec of a discrete valuation ring. The following lemma characterizes the degeneracy types which can occur when we deform a given Gieseker bundle.
\begin{lemma}\label{defmodgraph}
Let $(C_o,\sigma_{o,i},\mP_o)$ be a Gieseker bundle having degeneracy type $(\gamma_o,\md_o)$, and suppose that we are given a deformation $(C,\sigma_i,\mP)$ of $(C_o,\sigma_{o,i},\mP_o)$ over the Spec of a discrete valuation ring. The degeneracy type $(\gamma,\md)$ of the generic fiber can be any degree-labelled modular graph obtained from $(\gamma_o,\md_o)$ by combinations of the following elementary operations:
\begin{enumerate}
\item Resolve a self-node: Delete a self-edge attached to a vertex $v$, increasing the genus $g_v$ by $1$ and leaving the multidegree unchanged.
\item Resolve a splitting node: Replace a pair of vertices $v_1$ and $v_2$ with a single vertex $v$, having genus $g_v = g_{v_1} + g_{v_2}$ and multidegree $d_v = d_{v_1} + d_{v_2}$. Any other splitting edges connecting $v_1$ and $v_2$ become self-edges.
\end{enumerate}
Moreover, all such modular graphs occur in some deformation.
\end{lemma}
\begin{proof}
Induction on the number of nodes, using the modular graph stratification of the stack of curves.
\end{proof}
\begin{corollary}
$\Mtwid_{g,I}([\pt/\Cs])$ is locally of finite type.
\end{corollary}
\begin{proof}
Every stratum is clearly of finite type, and the lemma says that we can only reach finitely many strata by deformation.
\end{proof}
\begin{corollary}
The connected components of $\Mtwid_{g,I}([\pt/\Cs])$ are labelled by the total degree $d$ of the Gieseker bundles.
\[ \Mtwid_{g,I}([\pt/\Cs]) = \bigsqcup_{d \in \Z} \Mtwid_{g,I}^d([\pt/\Cs]). \]
\end{corollary}
\begin{remark}
The corollary makes it obvious that $\Mtwid_{g,I}([\pt/\Cs])$ is of infinite type. The connected components $\Mtwid_{g,I}^d([\pt/\Cs])$ can also be of infinite type. Any modular graph $\gamma$ with at least two vertices carries countably many multidegrees $\md: V_{\gamma} \to \Z$ for which $\sum_{v \in V_{\gamma}} d_v = d$.
\end{remark}
\begin{corollary}
The modular graph decomposition (see Notation \ref{degeneracystrata})
\[ \Mtwid_{g,I}([\pt/\Cs]) = \bigcup_{(\gamma,\md)} \mc{M}_{(\gamma,\md)} \]
is a stratification, meaning that the closure of $\mc{M}_{(\gamma,\md)}$ in $\Mtwid_{g,I}([\pt/\Cs])$ is obtained as a union
$$ \op{cl}\big(\mc{M}_{(\gamma,\md)}\big) = \bigcup_{(\gamma',\md')} \mc{M}_{(\gamma',\md')}, $$
where the primed union is over all multidegree-labelled modular graphs $(\gamma',\md')$ obtained from $(\gamma,\md)$ by sequences of the following elementary operations:
\begin{enumerate}
\item Self node: Lower the genus of a vertex by $1$, and add a self-edge.
\item Splitting node: Split a vertex $v$ into two vertices $v_1$ and $v_2$, connected by an edge, with $g_{v_1} + g_{v_2} = g_v$ and $d_{v_1}+ d_{v_2} = d_v$.
\item Gieseker bubbling: Replace an edge connecting a stable vertex $v$ to a stable vertex $v'$ with two edges connected to a common new vertex $u$ having $g_u = 0$ and $d_u = 1$, while subtracting $1$ from the degree $d_v$ or $d_{v'}$. (Note that $v$ may equal $v'$.)
\end{enumerate}
\end{corollary}
\begin{proposition}
The boundary of the closure in the corollary above is a divisor with normal crossings.
\end{proposition}
\section{The Structure of the Local Atlas}\label{localstudy}
In this section, we describe some aspects of the geometry of our atlas $A$ which we will need in the proof of our main theorem.
We begin by introducing a refinement of $A$'s modular graph stratification. Then we study the action of $\mc{G} = (\Cs)^V$ on $A$, and characterize the stabilizer groups of the points of $A$. In particular, we shall see that the Kirwan-Ness stratification by stabilizer group is compatible with our refinement of the modular graph stratification.
Next, we identify some finite-type subspaces $S_{N_u,N_l}$ of $A$, which parametrize Gieseker bundles on the family $\Sigma/B$ whose multidegrees obey certain bounds involving collections $N_u$ and $N_l$ of integers. We show that changing $N_u$ or $N_l$ amounts to adding or deleting strata which parametrize certain deformations of the fixed point loci of subgroups of $\mc{G}$.
Finally, we show that, if $N_u$ and $N_l$ are ``as close as possible'', then the quotient stack decomposes as
\[ [S_{N_u,N_l}/(\Cs)^V] \simeq Q_N \times [\pt/\Cs_\Delta], \]
where $\Cs_\Delta \subset (\Cs)^V$ is the diagonal subgroup and $Q_N$ is an algebraic space which is proper over $B$.
\begin{notation}
In the remainder of this section, $A$ is the component of the atlas $\{A_\alpha\}$ associated to a particular \'{e}tale neighborhood $B$ in $\Mbar_{g,I}$, centered at a curve whose modular graph is $\gamma_o$.
\end{notation}
\subsection{Refined Modular Graphs}
When discussing the deformations of a curve, it can be useful to keep track of which nodes of the curve are being smoothed. We introduce the following definition to make this idea precise.
\begin{definition}\label{def_graph}
A {\it deformation $\gamma$ of the modular graph $\gamma_o$} consists of the following data:
\begin{enumerate}
\item a subset $E_\gamma \subset E_{\gamma_o}$, and
\item a partition $V_\gamma$ of $V_{\gamma_o} = \sqcup_{v \in V_\gamma}V^v_{\gamma_o}$.
\end{enumerate}
These data determine a modular graph (also called $\gamma$), whose vertices are the blocks of the partition $V_\gamma$, with edge set $E_\gamma$ and tail set $I_{\gamma_o}$. The gluing maps come from $\gamma_o$.
The set of splitting edges $E^{split}_\gamma$ of a deformation of a modular graph is the set of edges in $E_\gamma$ which connect different blocks of the partition, i.e., the splitting edges of the modular graph $\gamma$.
A deformation of a multidegree-labelled modular graph $(\gamma_o,\md_o)$ is simply a deformation of the underlying modular graph, together with a labelling $\md: V_\gamma \to \Z$ satisfying the conditions enumerated in Lemma \ref{defmodgraph}.
\end{definition}
\begin{proposition}
The stratification of $A$ by labelled modular graphs can be refined to a stratification by {\it deformations} of labelled modular graphs (possibly after restricting to a smaller \'{e}tale neighborhood $B' \to B$).
\end{proposition}
\subsection{Stabilizer Groups}
Let $(C,\sigma_i,\mP,\{t_v\})$ be a Gieseker bundle representing a closed point of $A$, i.e., $(C,\sigma_i)$ is a modification of the fiber of $\pi:\Sigma \to B$ over some closed point $b \in B$. We say that a group element $(g_v) \in (\Cs)^{V}$ {\it stabilizes} $(C,\mP,\{t_v\})$ if the action of $(g_v)$ -- given by $t_v \mapsto g_vt_v$ -- produces an isomorphic object.
We identify the stabilizer group of $(C,\sigma_i,\mP,\{t_v\})$ by working on the normalization $\widetilde{C}$, as explained in Section 1.2; the bundle $\mP$ is specified in terms of a bundle $\wmP$ and a collection of gluing isomorphisms $\{g_e\}$, one for each node $\sigma_e$ in $C$. The trivializations simply lift; each stable component of $\widetilde{C}$ carries one or more trivializations.
We can use this picture to identify the automorphisms of $(C,\sigma_i,\mP,\{t_v\})$ as follows: If we rescale a trivialization $t_v$ by the group element $g_v \in \Cs_v$, then we can try to compensate for this rescaling with an automorphism of the bundle $\wmP$. There are two obstacles to carrying out this procedure:
\begin{enumerate}
\item rescaling $\wmP$ on $\widetilde{C}_v$ may rescale other trivializations, and
\item rescaling may change the gluing maps, so that the automorphism of $\wmP$ doesn't descend to an automorphism of $\mP$.
\end{enumerate}
We can only avoid the first obstacle if we act with the same group element on any trivializations which are based on the same component of $\widetilde{C}$.
The second obstacle can be avoided in two ways: If a gluing isomorphism connects the stable component $\widetilde{C}_v$ to a Gieseker bubble, we can use the extra automorphism of the Gieseker bundle (lifting the bubble's rotations) to rescale this gluing isomorphism, while leaving the gluing isomorphism on the other side of the bubble untouched. Alternatively, if the gluing isomorphism connects two stable components, we must act on all trivializations on these components with the same group element.
We formalize this discussion as follows:
\begin{definition}
Let $\gamma = (E_\gamma,V_\gamma)$ be a deformation of the modular graph $\gamma_o$. The {\it partition $R = R(\gamma)$ of $V = V_{\gamma_o}$ associated to $\gamma$} is the one given by the equivalence relation which identifies $v$ and $v'$ if there is a path in $\gamma$ from the vertex represented by the block $\partial_{\gamma_o}(v)$ to the vertex represented by the block $\partial_{\gamma_o}(v')$, which does not pass through any unstable node.
\end{definition}
\begin{definition}
Let $R$ be a partition of $V = V_{\gamma_o} = \sqcup_{r\in R} V^r$. For $r \in R$, $(\Cs)^{V^r}$ denotes the subgroup of $(\Cs)^{V}$ corresponding to the block $V^r$, and $\Cs_{\Delta^r} \subset (\Cs)^{V^r}$ denotes the diagonal subgroup of $(\Cs)^{V^r}$. The {\it subgroup $\mc{G}_R \subset \mc{G}$ associated to the partition $R$} is the product
\[ \mc{G}_R = \prod_{r \in R} \Cs_{\Delta^r}. \]
\end{definition}
\begin{proposition}
Let $(C,\sigma_i,\mP, \{t_v\})$ be a closed point of the stratum $A_{\gamma,\md}$ labelled by a deformation $\gamma = (E_\gamma,V_\gamma)$ of $\gamma_o$ and a multidegree $\md$. The stabilizer group of $(C,\sigma_i,\mP, \{t_v\})$ is the group $\mc{G}_R$ associated to the partition $R = R(\gamma)$.
\end{proposition}
Note that the stabilizer group depends only on the deformation $\gamma$ and not on the multidegree. Note also that $\mc{G}_R$ always contains the diagonal subgroup $\Cs_\Delta$ of $(\Cs)^V$; this subgroup is the group of global rescalings of $\mP$.
\begin{definition}
We say that a partition is {\it non-trivial} if $\mc{G}_R \neq \Cs_{\Delta}$. We denote the collection of edges which link different blocks of the partition by $E_R^{split}$.
\end{definition}
\subsection{The Fixed Point Loci}
Let $F_R \subset A$ denote the fixed point locus of the group $\mc{G}_R$, i.e., the substack of $A$ classifying Gieseker bundles whose stabilizer group is $\mc{G}_R$. The stabilizer group of a Gieseker bundle depends only on the refined modular graph, so $F_R$ is a union of strata $A_{\gamma,\md}$. Thus, the stratification of $A$ by stabilizer groups is compatible with the stratification by deformations of labelled modular graphs.
Deformations which remain in the fixed point locus $F_R$ must not smooth any of the Gieseker bubbles which define the partition $R$. This implies that such deformations must preserve the partial sums
\[ n_r = \sum_{v \in V^r_{\gamma_o}} d_v, \qquad r \in R. \]
It is easy to see that the connected components of $F_R$ are in correspondence with collections of integers $\un = (n_r)_{r\in R}\in \Z^R$.
\[ F_R = \bigsqcup_{\un} F_R^{\un}. \]
Roughly speaking, these integers $\un$ keep track of the multidegrees on the stable components of $C$.
\begin{ex}
In the special case of Example \ref{Ex04}, the only non-trivial partition is the two-block partition $R = \{ \{v_+\}, \{v_-\}\}$ associated to the modular graph of the special fiber $(\Sigma_o,\sigma_{o,i})$.
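Since the blocks of $R$ are singletons, each diagonal subgroup $\Cs_{\Delta^r}$ is just the coordinate factor $\Cs_{v_\pm}$, so $\mc{G}_R = (\Cs)^V$; its fixed points were identified in Example \ref{Ex04} as the points cut out by $z_n = w_n = 0$.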
The fixed point stratum $F_R^{\un} \subset A_d$ labelled by $\un = (n_+,n_-)$ is the point $\pt_{\un}$ which classifies Gieseker bundles-with-trivializations whose multidegree is $(n_+,1,n_-)$. (In Example \ref{Ex04}, we used the notation $\pt_{n} \in A_d$, with $d=n_++n_-+1$ and $n = -n_-$.)
\end{ex}
\subsection{Finite-Type Subspaces of $A$}
It follows from the discussion in Section \ref{thestack} that the connected components $A_d$ of the atlas $A$ are of infinite type whenever $|V_{\gamma_o}| \geq 2$, and of finite type otherwise.
When $|V_{\gamma_o}| = 1$, the stack $A_d$ is proper, relative to $B$; it is the compactified Jacobian of $\Sigma$. (Indeed, it is clearly of finite type and complete. And, because $\Sigma/B$ is a family of one-component curves, there is no degree-splitting, so $A_d$ is separated in this case.)
However, when $|V_{\gamma_o}| \geq 2$, the atlas $A_d$ has infinitely many finite-type strata. It is useful to think of this proliferation of strata as follows: The fiber of $f: A \to B$ over a point $b \in B$ for which $\Sigma_b$ has only self-nodes is of finite type. But if $b$ moves in such a way that $\Sigma_b$ splits into two components, then the degree $d$ can split between these components arbitrarily, as $d = (d+n) - n$. This can happen in multiple ways, depending on how $\Sigma_b$ acquires nodes; moreover, the corresponding strata can meet in higher codimension, if $\Sigma_b$ degenerates to a curve having multiple components.
In this section, we define some finite-type algebraic subspaces $S_{N_u,N_l} \subset A_d$. These spaces parametrize Gieseker bundles whose multidegrees obey certain inequalities, which govern how $d$ can split as $(d+n)-n$.
\begin{notation}
When $R$ is a non-trivial two-block partition of $V_{\gamma_o}$ (with blocks $V^+$ and $V^-$), the fixed point strata $F_R^{\un}$ are labelled by pairs of integers $\un = (n_{+},n_{-})$. On $A_d$, the total degree $d$ is fixed and we must have $n_{+} + n_{-} + k = d$, where $k = k(R) = |E^{split}_R|$ is the number of edges in $\gamma_o$ which connect a vertex in $V^+$ to a vertex in $V^-$. In this situation, it makes sense to label fixed point strata with a single integer $N$, for which $\un = (n_+,n_-) = (d+N-k,-N)$. Making this choice simplifies some of our notation, and introduces funny asymmetries elsewhere.
\end{notation}
\begin{definition}\label{ftstrata}
Choose collections of integers
\[ N_u = (N_u(R)) \qquad\mbox{and}\qquad N_l = (N_l(R)) \]
where $R = \{V^+_{\gamma_o}, V^-_{\gamma_o}\}$ ranges over the set $NT2B(\gamma_o)$ of non-trivial 2-block partitions of $V = V_{\gamma_o}$. We assume that $N_u(R) > N_l(R)$ for all $R$.
If $(\gamma,\md)$ is any multidegree-labelled deformation of $\gamma_o$ which has at least two vertices, then the choice of a 2-block partition $R$ of $V_{\gamma_o}$ induces a 2-block partition of $V_\gamma = V^+_{\gamma} \sqcup V^-_{\gamma}$. We define, for any 2-block partition $R$, the partial sums
\[ d_+(\gamma,R) = \sum_{v \in V^+_{\gamma}} d_v \qquad d_-(\gamma,R) = \sum_{v \in V^-_{\gamma}} d_v.
\]
The {\it locus $S_{N_u,N_l}$ of degree $d$ Gieseker bundles with multidegree bounded by $N_u$ and $N_l$} is the union of all strata $A_{\gamma,\md}$ in $A_d$ for which, whenever $\gamma$ has multiple vertices, the partial sums defined above obey the following inequalities for every partition $R \in NT2B(\gamma_o)$:
\begin{align*}
d_+(\gamma,R) &\geq d+N_l(R)-k(R) + 1\\
d_-(\gamma,R) &\geq - N_u(R) + 1 ,
\end{align*}
where $k(R) = |E^{split}_R|$ is the number of edges in $\gamma_o$ connecting vertices in $V^+_{\gamma_o}$ to vertices in $V^-_{\gamma_o}$.
\end{definition}
Note that
\begin{enumerate}
\item If $N_u'(R) \geq N_u(R)$ and $N_l'(R) \leq N_l(R)$, then
\[ S_{N_u,N_l} \subset S_{N_u',N_l'}. \]
\item $A_d$ is the increasing limit
\[ A_d = \lim_{N_u(R) \to \infty, N_l(R) \to -\infty} S_{N_u,N_l}. \]
\end{enumerate}
\begin{ex}
In the special case of Example \ref{Ex04}, the atlas $A_d$ is of finite type except in the special fiber, which is an endless chain of rational curves. There is only one non-trivial 2-block partition of the relevant modular graph, so we need only specify integers $N_u$ and $N_l$. In this situation, the inequalities above pick out the subscheme $S_{N_u,N_l}$ illustrated below; note that this scheme includes all Gieseker bundles on smooth curves, but only those strata in the special fiber which ``lie between'' the fixed points singled out by $N_u$ and $N_l$. (Explicitly, since $k=1$ here, the open stratum $\Cs_n$ lies in $S_{N_u,N_l}$ if and only if $N_l \leq n \leq N_u - 1$, and the fixed point $\pt_n$ lies in $S_{N_u,N_l}$ if and only if $N_l + 1 \leq n \leq N_u - 1$.)
\begin{figure}[htb] \centering \[\begin{xy} (0,0)*{}; (40,0)*{} **\dir{-}; (10,0)*{|}; (10,-5)*{b}; (30,0)*{|}; (30,-5)*{0}; (10,10)*{};(12,20)*{} **\dir{-}; (32,8)*{};(28,15)*{} **\dir{-}; (28,13)*{};(32,20)*{} **\dir{-}; (32,18)*{};(28,25)*{} **\dir{-}; (28,23)*{};(32,30)*{} **\dir{-}; (32,30)*{\circ};(44,30)*{\pt_{N_u}}; (32,8)*{\circ};(44,8)*{\pt_{N_l}}; (-10,0)*{B}; (-10,15)*{S_{N_u,N_l}}; {\ar@{->} (-10,12)*{};(-10,3)}; (-13,7.5)*{f}; \end{xy}\] \end{figure}
\end{ex}
\begin{remark}
Roughly speaking, the spaces $S_{N_u,N_l}$ are obtained by deleting the ``infinite tails'' of $A_d$, which classify bundles where the splitting degree becomes large. We shall make this more precise below.
\end{remark}
\subsection{Finite-Type Substacks and Fixed Point Strata}
It is intuitively clear that there must be some relationship between the finite-type spaces $S_{N_u,N_l}$ defined above and the fixed points of the groups $\mc{G}_R$: The inequalities in Definition \ref{ftstrata}, in essence, forbid us from using Gieseker bubbles to shift degree across the nodes. In this subsection, we make these ideas precise.
\subsubsection{Deformations which Smooth Gieseker Bubbles}
Fix a non-trivial 2-block partition $R$ of $V_{\gamma_o}$ and consider $F_R^{\un}$, the $\un$-th connected component of the fixed point locus $F_R$ of $\mc{G}_R$. A bundle classified by $F_R^{\un}$ has a Gieseker bubble at each node labelled by one of the splitting edges in $E_R^{split}$; each such Gieseker bubble has two nodes, both of which stabilize to the same node.
Let $D_R^{\un}$ be the subspace of $A_d$ which classifies bundles obtained by smoothing at most one of the nodes on each of the Gieseker bubbles labelled by $E_R^{split}$. The condition we've used to define $D_R^{\un}$ can be stated in terms of labelled deformations of modular graphs, so $D_R^{\un}$ is a union of strata labelled by such deformations.
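(Smoothing none of the nodes is allowed, so $F^{\un}_R$ is itself one of these strata.)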
\begin{ex} In the special case of Example \ref{Ex04}, where there is only one non-trivial two-block partition, the locus $D^n$ containing the fixed point set $F^n = \pt_n$ is \[ D^n = (U_n)_0 = \Spec \frac{\C[z_n,w_n]}{\langle z_n w_n\rangle} = {\begin{xy} (-3,5) *{}; (1,-2)*{} **\dir{-};(-3,-5) *{}; (1,2)*{} **\dir{-}; (6,0)*{\pt_n}; (-.25,0) *{\bullet}; \end{xy}}. \] This is a pair of affine lines meeting at a common origin $\pt_n$. \end{ex}

The $(\Cs)^V$ action on $A_d$ respects the modular graph stratification, so the action of $\mc{G}_R$ on $A_d$ restricts to an action of $\mc{G}_R$ on $D^{\un}_R$. Since $\mc{G}_R$ acts nontrivially on all the strata in $D^{\un}_R \setminus F^{\un}_R$, we may think of these strata as flowing ``towards'' or ``away'' from the fixed point locus $F^{\un}_R$.

\begin{definition} $Z^{\un}_R$ is the algebraic subspace of $D^{\un}_R$ for which the weights of $\mc{G}_R$ on the conormal bundle $\overline{N}_{F^{\un}_R/Z^{\un}_R}$ are all {\it non-negative}. Likewise, $W^{\un}_R \subset D^{\un}_R$ is the algebraic subspace for which the weights of $\mc{G}_R$ on the conormal bundle $\overline{N}_{F^{\un}_R/W^{\un}_R}$ are all {\it non-positive}. \end{definition}

\begin{ex} In the special case of Example \ref{Ex04}, \[ Z^n = \Spec \C[z_n] = {\begin{xy} (-3,5) *{}; (1,-2)*{} **\dir{-}; (6,0)*{\pt_n}; (-.25,0) *{\bullet}; \end{xy}} \] and \[ W^n = \Spec \C[w_n] = {\begin{xy} (-3,-5) *{}; (1,2)*{} **\dir{-}; (6,0)*{\pt_n}; (-.25,0) *{\bullet}; \end{xy}}. \] \end{ex}

In the example above, $Z^n$ and $W^n$ are the total spaces of vector bundles (of rank $1$) over the fixed point locus $F^n$. An analogous statement is generally true.

\begin{definition} The {\it projection map} $\eta_Z: Z^{\un}_R \to F_R^{\un}$ sends a Gieseker bundle $(C,\mP)$ to the Gieseker bundle obtained by \begin{enumerate} \item creating a Gieseker bubble at every node $\sigma_e$ labelled by an $e \in E^{split}_R$ for which the modification $m: C \to \Sigma$ is trivial, and \item for each such node $\sigma_e$, twisting the bundle on the curve component on one side of $\sigma_e$ by the divisor $-\sigma_e$. The side on which one twists is determined by the requirement that the resulting multidegree land in the $\un$-th fixed point locus. \end{enumerate} Similarly, we have a projection map $\eta_W: W^{\un}_R \to F_R^{\un}$. \end{definition}

\begin{proposition} $Z^{\un}_R$ is the total space of the conormal bundle $\overline{N}_{F^{\un}_R/Z^{\un}_R}$. Likewise, $W^{\un}_R$ is the total space of the conormal bundle $\overline{N}_{F^{\un}_R/W^{\un}_R}$. \end{proposition}

\begin{proof} The fiber of the projection map $\eta_Z$ over a given Gieseker bundle in $F_R^{\un}$ is the stack of bundles on the union of all Gieseker $\Pn^1$'s and points which can arise as the preimages under the modification map of nodes in $\Sigma_b$ labelled by edges $e \in E^{split}_R$. The stack of such bundles on a point is a copy of $\Cs$, and the stack of such bundles on a Gieseker $\Pn^1$ is a $\pt$. Thus, every fiber of $\eta_Z$ is a copy of $\mb{A}^{|E^{split}_R|}$; the Gieseker strata give a toric decomposition. The proof for $\eta_W$ is identical. \end{proof}

\begin{corollary} $Z_R^{\un}$ and $W_R^{\un}$ are smooth. \end{corollary}

\begin{proof} $Z_R^{\un}$ is the total space of a vector bundle on the fixed point locus $F_R^{\un}$. \end{proof}

This fact will be important in our proof of the main theorem.
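The following bookkeeping is elementary, but worth making explicit before we describe $S_{N_u,N_l}$ in terms of the loci $Z^{\un}_R$ and $W^{\un}_R$.

\begin{remark} With the labelling conventions fixed in the Notation above, consider the fixed point stratum labelled by the single integer $N$, so that $\un = (d+N-k,-N)$. Unwinding the inequalities of Definition \ref{ftstrata} at this stratum, one checks directly that they amount to \[ N_l(R) < N < N_u(R). \] Thus $S_{N_u,N_l}$ contains exactly the fixed point strata labelled by integers strictly between $N_l(R)$ and $N_u(R)$; in the setting of Example \ref{Ex04}, this is visible in the figure above, where the endpoints $\pt_{N_u}$ and $\pt_{N_l}$ are excluded. In particular, in the extremal case $N_u(R) = N_l(R)+1$ considered below, $S_N$ contains no $\mc{G}_R$-fixed strata at all. \end{remark}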
\subsubsection{An Alternate Description of $S_{N_u,N_l}$} Pick a partition $R_0 \in NT2B(\gamma_o)$ and define $M_u(R)$ and $M_l(R)$ by \[ M_u(R) = \begin{cases} N_u(R) + 1 & \mbox{if $R=R_0$}\\ N_u(R) & \mbox{otherwise} \end{cases} \qquad M_l(R) = \begin{cases} N_l(R) - 1 & \mbox{if $R=R_0$}\\ N_l(R) & \mbox{otherwise}. \end{cases} \]

\begin{proposition} $S_{N_u,N_l}$ is the complement of $W^{N_l(R_0)}_{R_0}$ in $S_{N_u,M_l}$: \[ S_{N_u,N_l} = S_{N_u,M_l} \setminus W^{N_l(R_0)}_{R_0}. \] Likewise, $S_{N_u,N_l}$ is the complement of $Z^{N_u(R_0)}_{R_0}$ in $S_{M_u,N_l}$: \[ S_{N_u,N_l} = S_{M_u,N_l} \setminus Z^{N_u(R_0)}_{R_0}. \] \end{proposition}

\begin{proof} This is an elementary observation: Deformations which land in $T^Z_{R_0}(N_u)$ (defined below) shift all of the degree $1$'s from the Gieseker bubbles to the components in the block $V^+$ of $R_0$. The inequality \[d_-(\gamma,R_0) \geq -N_u(R_0) + 1\] says precisely that these deformations are forbidden. \end{proof}

The proposition implies that $S_{N_u,N_l}$ is obtained by deleting the ``infinite tails'' of $A_d$, generalizing the picture given in the examples involving Gieseker bundles on curves of genus $0$ with $4$ marked points. More precisely:

\begin{definition} Let $N_u = (N_u(R))$ and $N_l = (N_l(R))$ be as in Definition \ref{ftstrata}. For any partition $R \in NT2B(\gamma_o)$, consider the following algebraic subspaces of $A_d$: \[ T^Z_R(N_u) = \bigcup_{n > N_u(R)} Z^n_R \qquad\mbox{and}\qquad T^W_R(N_l) = \bigcup_{n < N_l(R)} W^n_R. \] These loci are the {\it infinite tails} associated to the partition $R$ and the indices $N_u$ and $N_l$. \end{definition}

\begin{proposition} The space $S_{N_u,N_l}$ is obtained as the complement \[ S_{N_u,N_l} = A_d \setminus \bigcup_{R \in NT2B(\gamma_o)} \left( T^Z_R(N_u) \sqcup T^W_R(N_l) \right) \] of the union of the infinite tails. \end{proposition}

\subsection{Proper Substacks} Obviously, the subspaces $S_{N_u,N_l}$ get smaller as we make $N_u$ and $N_l$ closer together. We now consider the extremal case where \[ N_u(R) = N_l(R) + 1 = N(R) \] for all $R \in NT2B(\gamma_o)$. The corresponding algebraic subspace is denoted \[ S_{N} = S_{(N(R)), (N(R)-1)}. \]

\begin{lemma} The stabilizer group of any Gieseker bundle classified by $S_N$ is $\Cs_{\Delta}$. \end{lemma}

\begin{proof} The inequalities in the definition eliminate from $S_N$ any stratum $A_{\gamma,\md}$ which, for some non-trivial 2-block partition $R$, has a Gieseker bubble on every node connecting components labelled by vertices in different blocks of $R$. This means that any two components are connected by a path which does not pass through any unstable $\Pn^1$'s. It follows that the stabilizer group of any point in $S_N$ is the diagonal $\Cs_\Delta$. \end{proof}

Suppose that we choose a particular $v_0 \in V$ and thereby split $(\Cs)^V \simeq \Cs_\Delta \times (\Cs)^{V \setminus \{v_0\}}$. The lemma above says that $(\Cs)^{V \setminus \{v_0\}}$ acts freely on $S_N$, allowing us to forget all but one of the trivializations, by fixing them in reference to $t_{v_0}$. More precisely, the quotient stack $[S_N/(\Cs)^V]$ can be written as \[ [S_N/(\Cs)^V] \simeq Q_N \times [\pt/\Cs_\Delta], \] where $Q_N$ is the stack which classifies degree $d$ Gieseker bundles equipped with a single trivialization $t_{v_0}$ (or equivalently, the coarse moduli space of degree $d$ Gieseker bundles) obeying the inequalities from the definition.

\begin{proposition}\label{lbcoherence} The coarse moduli space $Q_N$ defined above is proper, relative to $B$.
\end{proposition}

To prove this, we shall need some facts about the multidegrees of bundles which can occur in the special fiber of a family on the Spec of a discrete valuation ring. Let $R$ be a discrete valuation ring, with fraction field $K$, and associated disc $D = \Spec(R)$ and punctured disc $D^\times = \Spec(K)$.

\begin{definition}\label{twisters} We will say that two principal $\Cs$-bundles $\mP_o$ and $\mP'_o$ on the special fiber $\Sigma_o$ are {\it fiber twists} of one another if $\mP_o \simeq \mP'_o \times_{\Cs} \mc{T}$, where $\mc{T}$ is the restriction to $\Sigma_o$ of the principal bundle associated to the divisor $\sum_{v \in V_{\gamma_o}} n_v \Sigma_{o,v}$ on $\Sigma$. \end{definition}

Suppose we are given a family of curves $\Sigma/D$ on the disc, with smooth generic fiber, and a family of $\Cs$-bundles $\mP/(\Sigma|_{D^{\times}})$ over the punctured disc $D^{\times}$. If the family $\mP$ extends to $D$, with special fiber $\mP_o$, then it can also be extended to $D$ with special fiber any twist $\mP'_o$ of $\mP_o$. When the generic fiber is nodal, one can repeat this story by focusing attention on each smooth component. The set of multidegrees of twists of the trivial bundle may be identified as follows: Consider the intersection matrix $k = (k_{v v'})$ of $\Sigma_o$. If $v \neq v'$, then $k_{vv'} = |\Sigma_{o,v} \cap \Sigma_{o,v'}|$, the number of nodes common to both curves. If $v = v'$, then $k_{vv} = -|\Sigma_{o,v} \cap \overline{\Sigma_o \setminus \Sigma_{o,v}}|$, minus the number of nodes where $\Sigma_{o,v}$ meets the closure of its complement.

\begin{lemma}[\cite{MR2382140}]\label{twister_degrees} The set of multidegrees of twists of the trivial bundle is the lattice $\Lambda_{\Sigma_o} \subset \Z^{V_{\gamma_o}}$ generated by the columns of the intersection matrix $k$. \end{lemma}

\begin{proof} This follows from the fact that the degree $\op{deg}_{\Sigma_{o,v}} \mc{O}_{\Sigma}(\Sigma_{o,v'})$ is equal to $k_{vv'}$. \end{proof}

\begin{proof}[Proof of Prop. \ref{lbcoherence}] $Q_{N}$ is clearly of finite type. We show that the valuative criteria for completeness and separability are satisfied. Let $R$ be a discrete valuation ring, with fraction field $K$, and associated disc $D = \Spec(R)$ and punctured disc $D^\times = \Spec(K)$. We claim that, given any family $(C,\mP)$ of Gieseker bundles on $D^\times$ which satisfies the inequalities from the definition, there exists a unique extension of $(C,\mP)$ to $D$ whose special fiber satisfies the inequalities. The stack $\Mtwid_{g,I}([\pt/\Cs])$ is complete, so there is some extension of $\mP$ to the entire disc $D$. The set of all possible extensions consists of all twists of a certain bundle $\mP_o$ on the special fiber of $C$, and clearly at least one of these twists will satisfy the inequalities from the definition. Moreover, only one of these twists can lie in $Q_N$, so $Q_N$ is separated. This follows directly from the characterization of multidegrees of twists of the trivial bundle in Lemma \ref{twister_degrees}: A non-trivial twist will always shift the degree on some component of the special fiber by a multiple of $k(R)$. In the special case $N_u(R) = N_l(R) + 1$, this leads to a violation of the inequalities. Thus, any non-trivial twist of a Gieseker bundle in $Q_{N}$ lies outside of $Q_{N}$. \end{proof}

\section{Admissible Classes}\label{admissibleclasses}

In this section, we introduce the admissible classes, and estimate the weights of their fibers over fixed points $F_R^{\un}$ as functions of $\un$.
\subsection{Definitions} The definitions given here are relative versions of the ones given in \cite{math.AG/0312154}. Recall that $\Mtwid_{g,I}([\pt/\Cs])$ carries a universal curve $C$ with universal marked points $\sigma_i: \Mtwid_{g,I}([\pt/\Cs]) \to C$, and a universal bundle $p: \mc{P} \to C$. The universal bundle may be thought of equivalently as a morphism $\phi: C \to [\pt/\Cs]$, and the universal marked points give rise to evaluation maps $\ev_i: \Mtwid_{g,I}([\pt/\Cs]) \to [\pt/\Cs]$. For any finite-dimensional representation $V$ of $\Cs$, let $\mc{V} = \phi^*V$ be the vector bundle on $C$ associated to $V$ by the universal bundle.

\begin{definition} The {\it evaluation class} $\ev_i^*[V]$ is the $K$-theory class on $\Mtwid_{g,I}([\pt/\Cs])$ represented by the vector bundle $\sigma_i^*\mc{V}$. A {\it descendant class} is a K-theory class of the form $\ev_i^*[V] \otimes [T_i^{\otimes n_i}]$, where $T_i = \sigma_i^*T^*_{\pi}$ is the pullback of the relative cotangent bundle of the morphism $\pi: C\to\Mtwid_{g,I}([\pt/\Cs])$ and $n_i$ is a positive integer. \end{definition}

\begin{remark} These are $K$-theoretic gravitational descendents. Their cohomological analogues, the $\psi$-classes, are the images of these classes under the Chern character: $\op{ch}([T_i]) = \exp(\psi_i)$. \end{remark}

\begin{definition} The {\it Dolbeault index class $I_V$ of a representation $V$} is the topological K-theory class associated to the complex $R\pi_*\mc{V}$ of sheaves on $\Mtwid_{g,I}([\pt/\Cs])$. The {\it determinant of cohomology of $V$} is the line bundle $\det{I_V} = \det R\pi_*\mc{V}$. \end{definition}

\begin{definition} A line bundle $\mc{L}$ on $\Mtwid_{g,I}([\pt/\Cs])$ is {\it admissible} if it is topologically isomorphic to a positive (possibly fractional) power of the dual of the determinant of cohomology of the standard representation $\C$ of $\Cs$: \[ \mc{L}^{-1} \simeq (\det R\pi_*\phi^*\C)^{\otimes q} \] for some positive rational number $q$. \end{definition}

\begin{definition} An {\it admissible complex} $\alpha$ on $\Mtwid_{g,I}([\pt/\Cs])$ is the tensor product of an admissible line bundle $\mc{L}$ with any number of Dolbeault index and evaluation/descendant classes. \[ \alpha = \mc{L} \bigotimes \otimes_a (R\pi_*\mc{V}_{a}) \bigotimes \otimes_i (\ev_i^*V_{i} \otimes T_i^{\otimes n_i}). \] An {\it admissible class} is a topological K-theory class represented by sums of admissible complexes. \end{definition}

\subsection{Weight Estimates in the Local Atlas} The local atlas $A_d$ has universal curves and bundles as well, and one can define the admissible classes associated to these universal structures. These admissible classes, so defined, agree with the pullback of admissible classes from the stack $\Mtwid_{(\Sigma,\sigma_i)}$. We will use the same notation for classes on $A_d$ that we used for classes on $\Mtwid_{g,I}([\pt/\Cs])$. Admissible complexes are complexes of coherent sheaves on $A_d$, so we can represent them as complexes $\mc{V}^\bullet$ of vector bundles. If $f \in F_R^{\un}$ is a $\mc{G}_R$-fixed point, then the fiber $\mc{V}^i_f$ of $\mc{V}^i$ at $f$ is a $\mc{G}_R$-representation. The weights of these representations, being both discrete-valued and continuous in $f$, depend only on the connected component $F_R^{\un}$. Our goal in this section is to estimate the $\mc{G}_R$-weights of the fixed-point fibers of $\mc{V}^i$ as functions of $\un$.
\begin{proposition} For any 2-block partition $R \in NT2B(\gamma_o)$, \begin{enumerate} \item The $\mc{G}_R$-weights of the fixed point fibers of the evaluation and descendant classes are bounded functions of $\un$. \item The index complex $R\pi_*\mc{V}$ is quasi-isomorphic to a complex $L^\bullet$ of vector bundles, and the $\mc{G}_R$-weights of the fixed point fibers of each bundle $L^i$ in this complex are bounded functions of $\un$. \item The $\mc{G}_R$-weights of fixed-point fibers of admissible line bundles $\mc{L}$ grow linearly with $\un$, with positive coefficient. \end{enumerate} \end{proposition}

The remainder of this section is devoted to proving this proposition. Let $f \in F_R^{\un}$ be a fixed point of $\mc{G}_R \subset \mc{G}$, represented by $(C_f,\sigma_{f,i},\mP_f, \{t_{v,f}\})$. We denote the modular graph of $(C_f,\sigma_{f,i})$ by $\gamma$.

\begin{lemma}\label{lembundleweights} Let $\C_\lambda$ be the irreducible $\Cs$-representation of weight $\lambda$, and let $\mc{V}_\lambda|_{C_f} = \mP_f \times_{\Cs} \C_\lambda$ be the associated bundle on $C_f$. Recall that $\mc{G}_R$ is a product $\Pi_r \Cs_{\Delta^r}$. Let $U$ be an open subset of $C_f$ which is entirely contained in one of the components $C_{v,f}$ of $C_f$. If $C_{v,f}$ is labelled by a vertex in the block $V_{\gamma}^r$, then $\Cs_{\Delta^r}$ acts on $\Gamma(U,\mc{V}_\lambda|_{C_f})$ with weight $-\lambda$. Otherwise, $\Cs_{\Delta^r}$ acts on $\Gamma(U,\mc{V}_\lambda|_{C_f})$ with weight $0$. \end{lemma}

\begin{proof} $f$ is a $\mc{G}_R$-fixed point, so if we scale the trivializations $t_v$ ($v \in V^r$) by $g_r \in \Cs_{\Delta^r}$, we can compensate by rescaling the bundle's fibers on the appropriate components of $C_f$ by $g_r^{-1}$. This rescales the local sections of the associated bundles $\mc{V}_\lambda|_{C_f}$ by the appropriate power $g_r^{-\lambda}$. \end{proof}

\subsubsection{Evaluation and Descendant Classes} The fixed-point fiber of an evaluation bundle $\ev_i^*V = \sigma_i^*\mc{V}$ is simply the fiber of $\mc{V}$ at the image $\sigma_i(f)$ of the fixed point inside $C_f$. Any $\Cs$-representation $V$ is a direct sum $\oplus_a \C_{\lambda_a}$ of irreps, so Lemma \ref{lembundleweights} implies that the $\mc{G}_R$-weights of the fixed-point fibers of the evaluation bundle are constant functions of $\un$. Similarly, $\mc{G}_R$ acts trivially on the stable components of $C_f$ (although it rotates the Gieseker bubbles), so the same reasoning shows that the weights of the fixed-point fibers of the descendant class $\sigma_i^*(\mc{V} \otimes (T^*_\pi)^{\otimes k})$ are constant in $\un$.

\subsubsection{Index Classes} The morphism $\pi: C \to A_d$ is projective, the atlas $A_d$ is locally Noetherian, and the associated vector bundle $\mc{V}$ on $C$ (being a locally-free sheaf) is flat over $A_d$. It follows (Hartshorne III.12 \cite{MR0463157}, pp. 282--284) that, locally on $A_d$, $R\pi_*\mc{V}$ is quasi-isomorphic to a complex $L^\bullet$ of finitely-generated locally-free sheaves. These locally-free sheaves are generated by local sections $\{s_\alpha\}$ of a \v{C}ech resolution of $\mc{V}$; each such section $s_\alpha$ represents a chosen generator of the sheaves $R^i\pi_*\mc{V}$. The fixed-point fibers of the $L^i$ are spanned by the images of the generators $s_\alpha$ in the fibers. Such images are represented by local sections of $\mc{V}|_{C_f}$, so Lemma \ref{lembundleweights} above implies that the $\mc{G}_R$-weights on the fixed point fibers don't depend on $\un$.
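\begin{ex} To see the boundedness concretely in the simplest case, take $V = \C_\lambda$ irreducible and $f \in F_R^{\un}$. The marked point $\sigma_i(f)$ lies on some component $C_{v,f}$, and Lemma \ref{lembundleweights} says that the factor $\Cs_{\Delta^r}$ of $\mc{G}_R$ acts on the fiber of $\ev_i^*[V]$ at $f$ with weight $-\lambda$ if $v$ belongs to the block $V^r_{\gamma}$, and with weight $0$ otherwise. In either case the weight is independent of $\un$. The index classes are bounded by the same mechanism, one generator $s_\alpha$ at a time. \end{ex}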
\subsubsection{Admissible Line Bundles} Any admissible line bundle $\mc{L}$ is topologically a positive (possibly fractional) power $(\mc{L}_1)^{\otimes q}$ of the determinant class $\mc{L}_1 = \op{det}^{-1}R\pi_*\mc{V}_1$ associated to the standard representation $\C_1$ of $\Cs$. The $\mc{G}_R$-fixed point weights of $(\mc{L}_1)^{\otimes q}$ are linear in $q$, so it's enough to compute them for $\mc{L}_1$. Dual and determinant are functorial operations, so, as elements of $K_{\mc{G}_R}(f)$, \[ [\mc{L}_1|_f] = (\det [R\Gamma(C_f,\mc{V}_1|_{C_f})])^{-1}. \] These $K$-theory classes are topological invariants, so they are unchanged by deformations within $F_R^{\un}$; we may therefore assume $C_f$ has only those nodes that it must have to represent a $\mc{G}_R$-fixed point (i.e., those nodes which create the required Gieseker bubbles). Furthermore, these characters are necessarily polynomial in the components $n_v$ of $\un$, so it's enough to compute them when $n_v \gg 0$ for all $v \in V_\gamma$. In this situation, there are no higher cohomologies, and the global sections in $\Gamma(C_f,\mc{V}_1|_{C_f})$ are specified by giving sections of $\mc{V}_1|_{C_f}$ on the irreducible components $C_{v,f}$ of $C_f$. (The sections on the Gieseker bubbles are determined by continuity at the nodes, since the space of possible such sections is two-dimensional.) The weights of $\mc{G}_R$ on these sections were computed in Lemma \ref{lembundleweights}; summing over the components of $C_f$, we find \[ [R\Gamma(C_f,\mc{V}_1|_{C_f})] = \sum_{r \in V_\gamma} (n_r + 1 -g_r)t_r^{-1}. \] Thus, we have

\begin{proposition}\label{Lweights} \[ [\mc{L}|_f] = \Pi_{r \in V_{\gamma}} t_r^{q(n_r+1-g_r)}. \] \end{proposition}

As promised, the fixed point weights grow linearly in $\un = (n_r)$, with positive coefficient.

\section{Gromov-Witten Invariants for $[\pt/\Cs]$}\label{theinvariants}

\begin{definition} Let $[V_i] \in K([\pt/\Cs])$ be a collection of K-theory classes, indexed by $i \in I$. Let $\mc{L}$ be an admissible line bundle, and let $\otimes_a I_{V_a}^{\otimes n_a}$ be a collection of index classes. Set \[ h = \mc{L} \otimes \otimes_a I_{V_a}^{\otimes n_a}. \] The {\it $h$-twisted Gromov-Witten invariant $\langle V_1,...,V_{|I|}\rangle_h$ associated to the collection $\{V_i\}$} is the Euler characteristic of the admissible complex \[ h \bigotimes \otimes_i \ev_i^*V_i = \mc{L} \bigotimes \otimes_a (R\pi_*\mc{V}_{\lambda_a}) \bigotimes \otimes_i \ev_i^*V_i. \] The $h$-twisted invariants for descendant classes are defined similarly. \end{definition}

\subsection{Statement of the Main Theorem} The Euler characteristic of a complex $\alpha$ on $\Mtwid_{g,I}([\pt/\Cs])$ is the Euler characteristic of the right-derived pushforward $R(F_\mP)_*\alpha$. The fibers of $F_\mP$ are not proper, so it is not obvious that these invariants are well-defined. The remainder of this section is devoted to proving the following theorem, which implies that the K-theory class $$F_{\mP *}[\alpha] := [RF_{\mP *}\alpha] = \sum_i (-1)^i [I^i(RF_{\mP *}\alpha)]$$ is a finite sum, hence well-defined. (Here $I^\bullet$ is any locally free resolution of $RF_{\mP *}\alpha$. Such resolutions exist because $\Mbar_{g,I}$ is smooth and projective. The K-theory class is independent of which resolution we choose.)

\begin{theorem}\label{maintheorem2} The derived pushforward $R^\bullet F_{\mP *}\alpha$ of any admissible complex $\alpha$ is coherent. \end{theorem}

\subsection{Proof of the Main Theorem} Coherence is a local property, so we can check the coherence of $RF_{\mP *}\alpha$ in our favorite atlas $A = \sqcup_{d \in \Z} A_d$.
\[ \xymatrix{A \ar[r]^(.3){q} \ar[dr]_f & \Mtwid_{(\Sigma,\sigma_i)} \ar[r] \ar[d]^{F} & \Mtwid_{g,I}([\pt/\Cs]) \ar[d]^{F} \\ & B \ar[r] & \Mbar_{g,I}} \] $B$ is affine and $q: A \to \Mtwid_{(\Sigma,\sigma_i)}$ presents $\Mtwid_{(\Sigma,\sigma_i)}$ as a quotient by $(\Cs)^V$, so proving coherence amounts to proving that the $(\Cs)^V$-invariants in the derived global sections $R\Gamma(A,\alpha)$ are finitely generated. We'll prove this in two steps: First, we'll work at fixed total degree $d$, showing that the $(\Cs)^V$-invariants in the derived global sections $R\Gamma(A_d,\alpha)$ are finitely generated. Then, we'll show that these invariants vanish for all but finitely many $d$, so that the sum \[ [R\Gamma(A,\alpha)]^{(\Cs)^V} = \sum_{d\in\Z} [R\Gamma(A_d,\alpha)]^{(\Cs)^V} \] is well-defined.

\subsubsection{Coherence on $A_d$} There are two cases: $|V_{\gamma_o}| = 1$ and $|V_{\gamma_o}| \geq 2$. \medskip

The first case is essentially trivial. If $|V_{\gamma_o}| = 1$, then, as we have observed, $A_d$ is the compactified Jacobian, proper over $B$. In this case $R\Gamma(A_d,\alpha)$ is finitely-generated, as is its submodule of $(\Cs)^{V} \simeq \Cs$-invariants. \medskip

The second case -- $|V_{\gamma_o}| \geq 2$ -- is more complicated, because $A_d$ is not proper over $B$. It has infinitely many finite-type strata. We would like to show that most of these strata do not contribute $(\Cs)^{V}$-invariants to $R\Gamma(A_d,\alpha)$. Recall that we can obtain $A_d$ as the limit \[ A_d = \lim_{N_u(R) \to \infty,N_l(R) \to -\infty} S_{N_u,N_l}. \] The following statement about local cohomology allows us to reduce the question of finite-generation on $A_d$ to the question of finite-generation on any $S_{N_u,N_l}$.

\begin{lemma}\label{vanishing} Let $\mc{V}$ be a finite rank $(\Cs)^V$-equivariant vector bundle on $A_d$ for which, for all non-trivial two-block partitions $R$, the $\mc{G}_R$-weights of the fibers of $\mc{V}$ over the fixed point locus $F_R^{\un}$ are linear functions of $\un$, with positive coefficients on the linear terms. Fix a partition $R$ and multidegree bounds $N_u$ and $N_l$, and abbreviate \[ S = S_{N_u,N_l} \qquad Z = Z^{N_u(R)-1}_R \qquad W = W^{N_l(R)+1}_R. \] The $(\Cs)^{V}$-invariants in the local cohomology groups \[ R^p\Gamma_{Z}(S,\mc{V}) \qquad \mbox{and} \qquad R^p\Gamma_{W}(S,\mc{V}) \] are finitely generated. Moreover, the invariants in these local cohomology groups vanish if $N_u(R) \gg 0$ and $N_l(R) \ll 0$ for all $R$. \end{lemma}

\begin{proof} The argument for $W$ has the same form as the argument for $Z$, so we focus on the latter case. The sheaf $\mc{V}$ is torsion-free, so the local cohomology sheaf $\Gamma_{Z}(\mc{V})$ vanishes. In fact, because $Z$ is a closed connected subvariety of $S$ (of codimension $q$), the only non-zero local cohomology sheaf is $R^q\Gamma_{Z}(\mc{V})$. The local-to-global spectral sequence, together with the exactness of the $(\Cs)^V$-invariants functor, implies that the vanishing of the $\mc{G}_R$-invariants in the local cohomology groups $R^i\Gamma_{Z}(S,\mc{V})$ follows from the vanishing of the $\mc{G}_R$-invariants in $R^p\Gamma(S,R^q\Gamma_{Z}(\mc{V}))$. The vanishing of the latter invariants follows (via the filtration spectral sequence) from the vanishing of the $\mc{G}_R$-invariants in the cohomology groups $R^j\Gamma(Z,\mc{V}\otimes \Sym N_{Z/S})$.
$Z$ is the total space of a vector bundle over the fixed point locus $F = F_R^{N_u(R)-1}$, so, taking global sections along the fibers, we see that \[ R^i\Gamma(Z,\mc{V}\otimes \Sym N_{Z/S}) = R^i\Gamma(F, \mc{V}\otimes \Sym N_{Z/S} \otimes \Sym \overline{N}_{F/Z}). \] The weight spaces in the two $\Sym$'s in the RHS above are finitely generated, and vanish for negative weights. Since $\mc{V}$ is finite rank, it follows that the $(\Cs)^V$-invariants in the RHS are finitely generated. Moreover, because we have assumed that the fixed-point weights of $\mc{V}$ grow linearly, with positive coefficients, the $(\Cs)^V$-invariants in the RHS vanish if $N_u$ and $N_l$ lie outside some finite range. \end{proof}

We saw in Section \ref{admissibleclasses} that an admissible complex can be represented as a complex of bundles $\mc{V}^\bullet$ whose terms satisfy the conditions of the lemma. (The evaluation, descendant, and index classes are all bounded, and the admissible line bundle $\mc{L}$ supplies the linear growth.)

\begin{corollary} There exist $N_u$ and $N_l$ such that \[ R\Gamma(A_d,\alpha)^{(\Cs)^{V}} = R\Gamma(S_{N_u,N_l},\alpha)^{(\Cs)^{V}}. \] \end{corollary}

\begin{proof} The local cohomology groups in the lemma measure how $R\Gamma(S_{N_u,N_l},\alpha)$ changes if we increase $N_u(R)$ or decrease $N_l(R)$ by $1$. Since the invariants in these local cohomologies vanish outside some finite range of $N_u$ and $N_l$, the limit \[ \lim_{N_u(R) \to \infty,N_l(R) \to -\infty} R\Gamma(S_{N_u,N_l},\alpha)^{(\Cs)^V} \] stabilizes. \end{proof}

The finite-generation of invariants in the local cohomology also lets us delete strata from $S_{N_u,N_l}$. This may alter the $(\Cs)^V$-invariants, but because the local cohomologies are finitely generated, it will not alter the fact of their finite-generation. So, we can check finite-generation by reducing to the ``minimal'' subspace $S_N$ ($= S_{N,N-1}$). But the invariants in $R\Gamma(S_N,\alpha)$ are the invariants in $R\Gamma(Q_N,\alpha)$, and $Q_N$ is proper (Proposition \ref{lbcoherence}), so the latter invariants are automatically finitely generated.

\begin{corollary} The $(\Cs)^V$-invariants in $R\Gamma(A_d,\alpha)$ are finitely-generated. \end{corollary}

\subsubsection{The Sum over $d$} Now we would like to show that the $(\Cs)^V$-invariants in $R\Gamma(A_d,\alpha)$ vanish for all but finitely many $d$. The diagonal subgroup $\Cs_\Delta \subset (\Cs)^V$ fixes all of $A_d$, so the global sections $R\Gamma(A_d,\alpha)$ form a complex of $\Cs_\Delta$-representations. The $\Cs_\Delta$-weight spaces in this complex are non-zero for only finitely many weights. The width of this range depends on the class $\alpha$; however, the growth of the upper and lower bounds of the range depends only on the admissible line bundle factor $\mc{L}$ in $\alpha$. In particular, the upper and lower bounds grow linearly in $d$, with positive coefficient. (This is a consequence of Proposition \ref{Lweights}; the weights of the diagonal $\Cs_\Delta$ are proportional to $\sum_r (n_r+1-g_r) = d + \mbox{constant}$.) Vanishing for all but finitely many $d$ follows.

\section{Towards Gromov-Witten Invariants for $[X/\Cs]$}\label{XmodC}

\subsection{Definitions} Recall that $[X/\Cs]$ is, by definition, the fibered category whose objects are triplets $(B,\mP,s)$ consisting of a test scheme $B$, a principal $\Cs$-bundle $p:\mP \to B$, and a $\Cs$-equivariant morphism $s: \mP \to X$.
Morphisms between such triplets are Cartesian diagrams \[ \xymatrix{\mP \ar[r]^{f} \ar[d]^p & \mP' \ar[d]^{p'} \\ B \ar[r] & B'} \] such that $s = s' \circ f$. The upshot of this definition is that the natural map $\rho: X \to [X/\Cs]$ is a principal $\Cs$-bundle, and any map $\phi: C\to [X/\Cs]$ gives rise to a pullback diagram \[ \xymatrix{\mP \ar[r]^s \ar[d]^p & X \ar[d]^{\rho}\\ C \ar[r]^{\phi} & [X/\Cs]} \] Thus maps to $[X/\Cs]$ {\it are} principal $\Cs$-bundles, together with a section $s$ of the associated fiber bundle $\mP \times_{\Cs} X$.

There is a natural notion of degree for such maps. Define the homology of $[X/\Cs]$ by the equation $H_n([X/\Cs]) = H_{n+\dim(\Cs)}^{\Cs}(X)$, so that the image of the $\Cs$-equivariant fundamental class $[\mP]_\Cs$ is the usual fundamental class $[C]$. We will say that a map $\phi: C \to [X/\Cs]$ has {\it degree} $\beta \in H_{2+\dim(\Cs)}^{\Cs}(X) = H_2([X/\Cs])$ if $s_*[\mP]_{\Cs} = \beta$. For a map to $[\pt/\Cs]$, this notion of degree is equivalent to the usual definition of a bundle's degree via the first Chern class.

\begin{definition} A {\it Gieseker map from $C$ to $[X/\Cs]$} is a triplet $((C,\sigma_i),\mP,s)$ consisting of: \begin{enumerate} \item a prestable marked curve $(C,\sigma_i)$, \item a principal $\Cs$-bundle $p: \mP \to C$, and \item a section $s: \mP \to X$, \end{enumerate} such that \begin{enumerate} \item $\mP$ has degree $0$ on any irreducible rational component of $C$ which has one node and one marked point. \item $\mP$ has either degree $0$ or degree $1$ on any unstable rational component of $C$ which has two nodes. \item $s$ is non-trivial on any unstable component on which $\mP$ has degree $0$. \end{enumerate} A Gieseker map has degree $\beta \in H_2([X/\Cs])$ if $s_*[\mf{c}^*\mP]_{\Cs} = \beta$. We denote by $\Mtwid_{g,I}([X/\Cs])$ the fibered category of Gieseker maps to $[X/\Cs]$ from stable marked curves of type $(g,I)$. Its connected components are labelled by the degree $\beta \in H_2([X/\Cs])$; we denote them by $\Mtwid_{g,I,\beta}([X/\Cs])$. \end{definition}

It is clear from the definition that there is a forgetful map \[ F_s: \Mtwid_{g,I,\beta}([X/\Cs]) \to \Mtwid_{g,I,ft_*\beta}([\pt/\Cs]), \] where $ft_*\beta$ is the degree obtained from the homomorphism $ft_*: H_2([X/\Cs]) \to H_2([\pt/\Cs])$. We forget the section $s$ and obtain a Gieseker map to $[\pt/\Cs]$ by contracting any rational components which carry trivial bundles.

\begin{remark} This definition is inspired by Kontsevich's definition of stable maps. Sections $s: \mP\to X$ are locally maps from $C$ to $X$. Thus, sections can degenerate in families in exactly the same way that maps to $X$ do, by developing singularities. We cure these singularities by bubbling where the singularity occurs. \end{remark}

\begin{theorem} $F_s$ is proper and Deligne-Mumford. \end{theorem}

\begin{proof} In essence, the result follows from the fact that $[X/\Cs] \to [\pt/\Cs]$ is proper and representable. We make the argument precise by proving valuative criteria for completeness \& separability. Let $R$ be a discrete valuation ring (over $\C$), with fraction field $K$, and consider the associated disc $D=\Spec(R)$ and punctured disc $D^\times = \Spec(K)$. Let us suppose that we have a family $(C,\sigma_i,\mP,s)$ over $D^\times$, and an extension $(Z,z_i,\mc{R})$ of $F_s(C,\sigma_i,\mP,s)$ to $D$. (Any required base changes will be subsumed in the notation.)
{\bf Completeness:} We want to exhibit an extension $(Y,y_i,\mc{Q},t)$ of $(C,\sigma_i,\mP,s)$ to $D$ such that $F_s(Y,y_i,\mc{Q},t) = (Z,z_i,\mc{R})$. First, we extend the family $(C,\sigma_i)$ to $D$. This may require base change, and is an easy consequence of the existence of nodal reduction \cite{MR1631825}. We denote the extension by $(Y',y_i')$. $Y'$ comes equipped with a contraction map $c: Y' \to Z$. We denote by $\mc{Q}'$ the pullback $c^*\mc{R}$; note that $\mc{Q}'$ is trivial on components collapsed by $c$.

Given $Y'$, the graph of $s$ gives us an embedding $j: C \to X_{\mc{Q}'}$, where $X_{\mc{Q}'}$ is the associated bundle $\mc{Q}' \times_{\Cs} X$. The morphism $u: X_{\mc{Q}'} \to D$ has compact fibers, so the closure $\overline{j(C)}$ of the image of $j$ is also a finite type curve over $D$. $\overline{j(C)}$ is not necessarily prestable. However, by using resolution of singularities, we may obtain a prestable curve $Y''$ (with a resolution map $r: Y'' \to \overline{j(C)}$); base change may also be required at this step. This gives us a sequence of maps (over $D$) \[ \xymatrix{Y'' \ar[r]^r & \overline{j(C)} \ar@{^{(}->}[r]^j & X_{\mc{Q}'} \ar[r]^{pr} & Y' \ar[r]^{c} & Z}, \] where $pr: X_{\mc{Q}'} \to Y'$ is the bundle structure map. The composition $c_r = c \circ pr \circ j \circ r: Y'' \to Z$ is necessarily a contraction map. We denote the pullback $c_r^*\mc{R}$ by $\mc{Q}''$. We denote the lifts of the marked points $z_i$ by $y_i''$. Pulling back $\mc{R}$ step by step from $Z$ to $Y''$, we get a sequence of bundles, the last of which is $c_r^*\mc{R} = \mc{Q}''$, as in the diagram below. \[ \xymatrix{ \mc{Q}'' \ar[rr] \ar[d] & & \mc{Q}' \times X \ar[r]^{pr_1} \ar[d] &\mc{Q}' \ar[d] \ar[r] & \mc{R} \ar[d]\\ Y'' \ar[r]^r & \overline{j(C)} \ar@{^{(}->}[r]^j & X_{\mc{Q}'} \ar[r]^{pr} & Y' \ar[r]^{c} & Z} \] We also get a section $s': \mc{Q}''\to X$ from the composition \[ \mc{Q}'' \to \mc{Q}' \times X \to X. \] The collection $(Y'',y_i'',\mc{Q}'',s')$ is a map to $[X/\Cs]$, but not necessarily a Gieseker map, as the curve may have unstable components carrying a trivial bundle and a trivial section. We obtain the desired extension by contracting these unstable components.

{\bf Separability:} Now suppose that we are given two extensions $(Y_1,y_{1,i},\mc{Q}_1,s_1)$ and $(Y_2,y_{2,i},\mc{Q}_2,s_2)$ of the given family over $D$, both compatible with the given Gieseker map $(Z,z_i,\mc{R})$ to $[\pt/\Cs]$. We may freely suppose that both extensions are defined over the same base extension. Consider the fiber product $Y_1 \times_Z Y_2$. Our assumptions imply that $Y_1|_{D^\times} = Y_2|_{D^\times}$ and that the special fibers of $Y_1$ and $Y_2$ both contract onto the special fiber of $Z$. It follows that all the maps in the bottom diamond of the diagram below are contraction maps. \begin{equation*} \xymatrix{ & \mc{Q} \ar[dr]^{f_2} \ar[dl]_{f} \ar[d] & \\ \mc{Q}_1 \ar[d] \ar[dr]& Y_1 \times_{Z} Y_2\ar[dr] \ar[dl] & \mc{Q}_2 \ar[d] \ar[dl]\\ Y_1 \ar[dr] & \mc{R}\ar[d] & Y_2 \ar[dl]\\ & Z& } \end{equation*} Moreover the two sections $\mc{Q} \to \mc{Q}_1 \to X$ and $\mc{Q} \to \mc{Q}_2 \to X$ agree on the open dense set $\mc{Q}|_{D^\times}$. $X$ is separated, so the two sections agree. The Gieseker map obtained by contracting any unstable components in $Y_1\times_{Z}Y_2$ is unique, so it follows that the two given families are isomorphic.

{\bf Deligne-Mumford:} (Sketch of proof) Let $C_v$ be a component of $C$.
If $C_v$ is contracted to a point by the section-forgetting morphism $F_s$, then $\mP|_{C_v}$ is trivial, so $s|_{C_v}$ must be equivalent to a non-trivial map $C_v \to X$. We know from Gromov-Witten theory that such maps admit only finitely many automorphisms. On the other hand, if $C_v$ is stable, then the existence of a non-trivial section on $C$ can only reduce the number of automorphisms. \end{proof}

Now consider the stack $\mf{M}_{g,I,ft_*\beta}([\pt/\Cs])$ of all degree $ft_*\beta$ maps from prestable curves to $[\pt/\Cs]$. This stack is smooth. We obtain a morphism \[ \widetilde{F}_s: \Mtwid_{g,I,\beta}([X/\Cs]) \to \mf{M}_{g,I,ft_*\beta}([\pt/\Cs]) \] by forgetting the section $s$, but not contracting any unstable components.

\begin{theorem}\label{perfobs} $L_{\widetilde{F}_s}$ admits a relative perfect obstruction theory. \end{theorem}

Recall from \cite{MR1437495} that a {\it relative perfect obstruction theory} for $L_{\widetilde{F}_s}$ is a pair $(E,e)$ consisting of an element $E$ of the derived category of $\Mtwid_{g,I,\beta}([X/\Cs])$, and a homomorphism $e: E \to L_{\widetilde{F}_s}$ in the derived category, such that \begin{enumerate} \item $E = [E^{-1} \to E^0]$ is locally equivalent to a two-term complex of locally free sheaves. \item $H^0(e)$ is an isomorphism. \item $H^{-1}(e)$ is a surjection. \end{enumerate}

\begin{proof}[Proof of Theorem \ref{perfobs}] Our proof is an almost word-for-word copy of the one given by Behrend \& Fantechi in \cite{MR1431140} and \cite{MR1437495}. Fix a curve $C$ and a principal $\Cs$-bundle $p:\mP \to C$, and let $\Gamma$ denote the space $\op{Hom}_{\Cs}(\mP,X)$ of sections. $\Gamma$ comes equipped with ``universal'' families, illustrated below. \begin{equation*} \xymatrix{ \mP\times \Gamma \ar[r]^{s} \ar[d]^{p \times \op{id}_\Gamma} & X \ar[d]^{\rho}\\ C \times \Gamma \ar[r]^{\phi_s} \ar[d]^{\pi} & [X/\Cs]\\ \Gamma & } \end{equation*} It follows from the functorial properties of the cotangent complex that we have a morphism $\tilde{e}: s^*L_X \to p^*\pi^*L_\Gamma$. If we take $\Cs$-invariants in the pushdown via $p$, we get \begin{equation*} \tilde{e}': (p_*s^*L_X)^{\Cs} \to \pi^*L_\Gamma. \end{equation*} Tensoring with the dualizing complex of $C$, we obtain a morphism \[ \tilde{e}'': \omega_{C}\otimes (p_*s^*L_X)^{\Cs} \to \omega_{C}\otimes \pi^*L_\Gamma = \pi^!L_\Gamma. \] Then, by adjunction, we have a morphism \[ \tilde{e}''': R\pi_*(\omega_{C}\otimes (p_*s^*L_X)^{\Cs}) \to L_\Gamma. \] Finally, it follows from Verdier duality that \[ R\pi_{*}(\omega_{C}\otimes (p_*s^*L_X)^{\Cs}) = [R\pi_*(p_*s^*T_X)^{\Cs}]^{\vee}, \] and so we have a morphism \[ e: [R\pi_*(p_*s^*T_{X})^{\Cs}]^{\vee} \to L_\Gamma. \] This morphism is a perfect obstruction theory for $L_\Gamma$; the proof is more or less the same as in \cite{MR1437495}. Moreover, all of the objects here generalize well to the relative case, and therefore apply to the universal family. Thus, we have a perfect relative obstruction theory \[ e: E = [R\pi_*(p_*s^*T_{X})^{\Cs}]^{\vee} \to L_{\widetilde{F}_s}, \] where now $\pi$, $p$, and $s$ refer to the universal families on the moduli stack. \end{proof}

Given this perfect obstruction theory, we can define the {\it virtual structure sheaf} $\mc{O}^{vir} = \mc{O}_{\Mtwid_{g,I,\beta}([X/\Cs])}^{vir}$. This is an element of the bounded derived category of coherent sheaves on $\Mtwid_{g,I,\beta}([X/\Cs])$, which may be thought of as a family of virtual fundamental K-homology cycles on the fibers of $F_s$.
It is defined using the virtual normal cone machinery developed by Behrend \& Fantechi \cite{MR1437495}. The definition we give here seems to have first appeared in print in \cite{MR2040281}. First, recall the {\it intrinsic normal cone} \cite{MR1437495}. This is a cone stack $\mf{I}_{\mc{X}}$ associated canonically to a Deligne-Mumford stack $\mc{X}$. It is defined locally on an \'{e}tale open set $U \to \mc{X}$ by choosing an embedding $\iota: U \to W$ of $U$ into a smooth scheme $W$ and then setting $\mf{I}_{\mc{X}}|_U = [C_{U/W}/\iota^*T_W]$, where $C_{U/W}$ denotes the normal cone of $U$ in $W$ and $T_W$ is the tangent bundle of $W$. This construction is independent of the choice of embedding and glues nicely to give $\mf{I}_{\mc{X}}$. Moreover, the construction works in the relative case, giving a normal cone $\mf{I}_f = \mf{I}_{\mc{X}/\mathcal{Y}}$ for any Deligne-Mumford morphism $f:\mc{X} \to \mathcal{Y}$ to a smooth, unobstructed, equidimensional $\mathcal{Y}$. We will denote the intrinsic normal cone of $\Mtwid_{g,I,\beta}([X/\Cs])$ relative to $\mf{M}_{g,I,ft_*\beta}([\pt/\Cs])$ by $\mf{I}_{\widetilde{F}_s}$.

The existence of a perfect relative obstruction theory for $\widetilde{F}_s$ implies \cite{MR1437495} that there exists a closed embedding \[i: \mf{I}_{\widetilde{F}_s} \to [E^1/E^0]\] where the two-term complex $E^{\vee} = [E^0 \to E^1]$ of vector bundles is the dual of the complex $E$, and $[E^1/E^0]$ is the quotient stack of $E^1$ by the action of $E^0$.

\begin{definition}[\cite{MR2040281}] The {\it virtual structure sheaf} $\mO^{vir}_{\Mtwid_{g,I,\beta}([X/\Cs])}$ is the element of the bounded derived category of coherent sheaves $D(\Mtwid_{g,I,\beta}([X/\Cs]))$ defined by the derived tensor product \[ \mO^{vir}_{\Mtwid_{g,I,\beta}([X/\Cs])} = \mO_{\mf{I}_{\widetilde{F}_s}} \bigotimes^L_{[E^1/E^0]} \mO_{\Mtwid_{g,I,\beta}([X/\Cs])}. \] \end{definition}

$F_s$ is proper and Deligne-Mumford, so there exists a pushforward along it: \[ (F_s)_*: K^0(\Mtwid_{g,I,\beta}([X/\Cs])) \to K^0(\Mtwid_{g,I,ft_*\beta}([\pt/\Cs])). \] But $F_s$ is obstructed, so this pushforward does not have good properties. We correct this by using the {\it virtual pushforward}, defined by \[ (F_s)^{vir}_![V] = (F_s)_*[V \bigotimes^L_{\mO_{\Mtwid_{g,I,\beta}([X/\Cs])}} \mO^{vir}_{\Mtwid_{g,I,\beta}([X/\Cs])}]. \] \medskip

Thus, we have established the existence of virtual pushforwards along $F_s$. The next step in defining Gromov-Witten invariants for $[X/\Cs]$ is to introduce a notion of ``admissible class'' on $\Mtwid_{g,I,\beta}([X/\Cs])$, and show that the virtual pushforward of such a class is an admissible class on $\Mtwid_{g,I}([\pt/\Cs])$. We intend to address this question in a future paper.

\bibliographystyle{hep}
{ "redpajama_set_name": "RedPajamaArXiv" }
3,338
Metallica Wins the Loudness Wars
Saturday, December 20th, 2008 | by matthew mcglynn

There was a great End Rant in Tape Op about the latest Metallica album, Death Magnetic. Mastering Engineer Paul Abbott of Zen Mastering wrote:

The outcry over the un-listenably loud level of Metallica's Death Magnetic CD has become one of the more visible occurrences of the "loudness wars" in the mainstream media… now, a lowest-common-denominator threshold has been crossed and the problem has been pushed in our faces.

I don't pay much attention to the mainstream media, and I don't pay any attention at all to Metallica — in fact it's safe to say that I get 100% of my Metallica news from Tape Op, which thankfully means I don't get very much at all. So I'd missed the outcry. Hell, I'd even missed the release of Death Magnetic.

In a hurry? Scroll down to find the audio clips comparing Death Magnetic CD vs. "Guitar Hero" audio. Disclaimer: prepare to be ill.

The story caught my interest. As Paul noted in his rant, the loudness of the CD was made more notable by the near-simultaneous release of an alternate mix — with much more conservative limiting and compression — via Activision's "Guitar Hero" game. Suddenly, A/B comparisons illustrating the casualties of the loudness war can be easily made. For example, see below.

An associate of mine provided loaner copies of both mixes of the album. A few minutes of research at Last.fm revealed that the most popular track is "The Day That Never Comes," so I loaded both versions of that song into Pro Tools.

Visually, the difference is obvious. Once the drums come in, the audio level is practically pegged at 0dB for the rest of the song. There are no more dynamics to be had. Sure, it's a metal song. Maybe it's just really that loud, you might be thinking. But no, the Guitar Hero mix shows plenty of dynamic range. And, not to spoil the surprise, it sounds a lot better too.

To be clear, the core problem with the Death Magnetic CD is not that it's loud. The problem is that it sounds like crap. Let's do some comparative listening. There are two ways to do it: matched volume levels (in which the two versions sound equally loud), and matched gain levels (in which both versions are output at unity gain, and the CD version is a lot louder).

The following clips are 320 kbps MP3s:
Equal Volumes
Equal Gain (caution: loud 2nd half!)

Caution! The second half of the "equal gain" clip is much louder than the first half! Turn down your playback volume!

Both clips contain an excerpt of the Guitar Hero mix followed by the same section of the CD mix of "The Day That Never Comes." Here's a picture of the waveform for the two mixes of this excerpt. The Guitar Hero mix (on the bottom of the screenshot) shows moderate changes in dynamic level, and plenty of transients. The CD mix, on the top half of the screenshot, is brickwall-limited — the entire segment is at maximum volume.

To match volumes for the "Equal Volumes" clip, I used a VU-meter plug-in in RMS mode, and reduced the gain on the CD mix until the two versions, panned hard L and R, were closely matched. The gain difference was staggering: 10.7dB! (see screenshot) A 10dB change requires 10x as much power to produce, and equates to roughly doubling the perceived volume.
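If you want to check that arithmetic, or do your own crude volume matching, here's a rough numpy sketch. To be clear, this is not the VU-meter plug-in; the white-noise arrays at the bottom are synthetic stand-ins for the decoded audio.

import numpy as np

# dB arithmetic: power ratio = 10^(dB/10), amplitude ratio = 10^(dB/20)
db = 10.7
print(f"{db} dB = {10**(db/10):.1f}x power, {10**(db/20):.1f}x amplitude")

def rms_dbfs(x):
    """RMS level of a float signal (full scale = 1.0), in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(x**2)))

def match_rms(loud, ref):
    """Trim 'loud' until its RMS level matches 'ref' -- crude VU-style matching."""
    trim_db = rms_dbfs(ref) - rms_dbfs(loud)   # negative => turn it down
    return loud * 10**(trim_db / 20)

# Synthetic stand-ins for the two mixes; with real audio, load the PCM samples instead.
rng = np.random.default_rng(0)
gh_mix = 0.10 * rng.standard_normal(48000)   # quieter "game" mix
cd_mix = 0.35 * rng.standard_normal(48000)   # louder "CD" mix
print(f"trim applied: {rms_dbfs(gh_mix) - rms_dbfs(cd_mix):.1f} dB")
cd_matched = match_rms(cd_mix, gh_mix)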
The "Equal Volumes" clip contains the Guitar Hero excerpt (at unity gain) followed by the CD excerpt (at -10.7dB). You won't notice a change in playback level, but you will hear distinct differences in the quality of the sound. The distortion on the CD mix is apparent. Every snare drum fill makes my headphones sound broken. The CD mix is aggressive, for sure, but it doesn't sound good.

Zooming in to examine the waveform, it becomes obvious why the CD mix sounds the way it does. The relatively smooth waveform of the game mix is replaced by a jagged, exaggerated line. The rounded forms of the game mix are gone — there is nothing round about the CD mix, which is composed almost entirely of vertical lines. The CD mix is composed entirely of transients. Curiously, the CD mix even has transients where the game mix does not — see the vertical spikes in the top waveform that have no corresponding spike below. That's just noise: artifacts of the compression and limiting process.

Consider that the vertical changes in the waveform correspond to vibrations in the air, and therefore to the movement of a loudspeaker cone. The CD mix requires that your speakers spend the entire song oscillating between their maximum and minimum excursion points. Your eardrums, too. No wonder the CD mix is so fatiguing. Listening to the CD mix is literally a lot of work.
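You can put numbers on "composed entirely of transients." Two quick measurements tell the story: crest factor (the peak-to-RMS ratio, in dB), which collapses when a master is brickwalled, and the fraction of samples pinned near full scale. Another rough numpy sketch, with a hard-clipped sine standing in for the smashed mix; run on the real files, the same two numbers should separate these mixes just as starkly.

import numpy as np

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB; brickwalled masters score very low."""
    return 20 * np.log10(np.max(np.abs(x)) / np.sqrt(np.mean(x**2)))

def near_full_scale(x, margin_db=0.5):
    """Fraction of samples within margin_db of the peak level."""
    threshold = np.max(np.abs(x)) * 10**(-margin_db / 20)
    return np.mean(np.abs(x) >= threshold)

# A sine wave, then the same sine hard-clipped the way a brickwall
# limiter pins a mix against 0 dBFS.
t = np.linspace(0, 1, 48000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
smashed = np.clip(3.0 * clean, -0.99, 0.99)   # drive it into the wall

for name, sig in (("clean", clean), ("smashed", smashed)):
    print(f"{name}: crest {crest_factor_db(sig):.1f} dB, "
          f"{100 * near_full_scale(sig):.0f}% of samples near peak")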
Paul Abbott, in his Tape Op piece, quoted a statement by Death Magnetic mastering engineer Ted Jensen in which Jensen claims no responsibility for the sound of the CD. I found the origin of this statement — a private email attributed to Jensen, posted without his permission to a Metallica forum and quoted extensively around the web. I wouldn't have been inclined to pin the blame for the CD mix on the mastering engineer, anyway; the few MEs that I know are uncompromising in their pursuit of fidelity. The only way they would output something sounding as bad as Death Magnetic is if the mixes they were given sounded worse. Which was actually Jensen's point.

Lars Ulrich, for his part, hears no problem:

Listen, there's nothing up with the audio quality. It's 2008, and that's how we make records. [Producer] Rick Rubin's whole thing is to try and get it to sound lively, to get it to sound loud, to get it to sound exciting, to get it to jump out of the speakers. Of course, I've heard that there are a few people complaining. But I've been listening to it the last couple of days in my car, and it sounds fuckin' smokin'.

(Hat tip to Aaron Lyon for suggesting this story, and to TurnMeUp.org for general awesomeness.)

50 Responses to "Metallica Wins the Loudness Wars"

It must be nice to have the certainty that Lars seems to have about everything. Of course, you'll forgive me if I'm a bit leery of leaving judgments of fidelity to someone who's been playing drums at stadium volume levels since before the advent of "quiet" in-ear monitoring. He and the rest of the band are certainly well within their rights (and perhaps *are* right) to stand behind their record. But his pronouncements since the Napster kerfuffle all too frequently imply a derision, maybe even contempt, of the fans who have made Metallica a world-renowned band.

Really nicely done, M. I want to see the game version waveform next to the zoomed CD waveform, which is astonishing! Your observation about why it's so much work to listen to a mix like this is spot on. I was surprised earlier this year to find Springsteen's new album similarly smashed and instantly unpleasant because of it. Likewise the recently released Oasis CD. Haven't looked for RockBand versions of these…

What I don't understand is why there are two different mixes (it must be two different mixes if the trashing didn't happen in the mastering) in the first place. I mean, if Metallica like their "new sound" or what Rick Rubin has done to their music so much, why did they agree to release a different mix for Guitar Hero??

matthew mcglynn December 31st, 2008 at 3:13 am
Aaron, the zoomed-in waveform pic is there; just click the thumbnail to see it. or, here: http://recordinghacks.com/wp-content/uploads/2008/12/waveforms.png

Julian, it's a good question — I thought at first that maybe the mix engineer produced one with and a second without such heavy limiting/compression, but it seems from some comments posted elsewhere that the mixes are actually different. Maybe nobody considered that the GH version would be something people actually listen to, meaning, it wasn't given the same scrutiny by the producer?

Ah! Now I see the expanded screenshots. I missed that. UI suggestion: link color border (burgundy) for hot images, grey border for not. Cheers, -a

S Jansen
Finally, an "in-yer-face" example for the general public of what many of us (who actually care about sound quality) have been lamenting for several years! I can *almost* go along with the handful of producers who say that the mixes today have to be compressed like this in order to be heard above the (noisy) party or automobile environments. But not really. I think the simple solution is to offer two mixes to the public: a "quality"/audiophile version, and a "fouled up"/party version. The industry could offer two separate cd's (remember way back when one had the choice of a mono or a stereo record album?), or they could simply stick the two different cd's in the exact same case. "Here's one for your home stereo (quality), and one for your car to take to that rowdy party later (fouled up)." Double-sided cd's? But, of course, the industry doesn't care about giving us what we want, or giving us actual choices…it seems that this band doesn't, either. I can almost picture Lars whining, "Those internet pirates are just gonna turn this album into mp3s and ruin the great sound and rip us off – let's ruin the sound before they even get the chance! That'll show'em!" Yup – they showed everybody, alright. Just look at those naked, ugly square waveforms. Ugh.

John S. Allen February 22nd, 2009 at 5:13 am
Nice demo. I'm a member of the Boston Audio Society, which addressed this issue in a recent issue of its publication, the BAS Speaker. The limiting in the Metallica recording is far more aggressive than anything illustrated in the Speaker article, too. One quibble: you say that "The CD mix requires that your speakers spend the entire song oscillating between their maximum and minimum excursion points. Your eardrums, too. No wonder the CD mix is so fatiguing. Listening to the CD mix is literally a lot of work." More to the point, clipping the waveform introduces new frequencies into the mix. These are the "crap" which makes the snare drum go "splat," and makes listening hard work by obscuring the sounds of the instruments.
Radio stations maximize volume so they pop out of the road noise in your car and you won't tune away. That's understandable if not forgivable. But the CD experience is typically not one of channel-flipping, so what's the point? Every playback system has a volume control. Most listeners are skillful enough to set the volume as they like. The dynamic range of the Guitar Hero mix isn't all that wide, either. The peaks even in the quieter sections are only a few dB down. It sure does sound better, though.

Lars, sorry dude — you're just wrong here. I've been a fan for years, but you can never justify loudness and clipping like that. It's an insult to the faithful who buy your music. It's a big "F" you, we don't care what you think. The loudness war is greed. It's engineers tripping over themselves to out-shout the other guy, the other song. It's sacrificing the art for a bit more money. So some kid isn't "disturbed" by having to reach down to his nano to turn up the volume a bit. Absurd.

This is a great article, well presented and the second time I've stopped in to re-read it. A must read for every musician and every (open-minded) sound engineer. IMHO.

For cryin out loud guys…the Guitar Hero mixes are NOT MASTERED…Hello? They come packed as separate tracks–otherwise you couldn't play the game. I think even the kids know they are stems–how people in the business don't know this is beyond me.

January 1st, 2010 at 8:09 am
John, mastering a recording should make it sound brilliant, not turn it into a square-wave ear rape like the entire Death Magnetic album sounds like; it has NO dynamic range at all. Try listening to Pink Floyd, who were way ahead of their time in mastering music with fidelity; it sounds absolutely brilliant, with so much dynamic range. The majority of their songs do not go higher than -20 dB average, which leaves room for a very powerful effect. Death Magnetic is pretty much at 0 dB with no headroom, no dynamic range at all, and the audio hurts my ears beyond belief. It's one of the few albums I've heard that makes my ears feel tired and hurting.

Mastermeister
From many things I've read and from interviews, it's clear that Lars fits the clinical definition of a Malignant Narcissist. Owing to that, and to years of stage sound levels far in excess of those that cause permanent hearing loss (because he's too vain to wear hearing protection), he is most likely also deaf, meaning that damn near anything turned up loud enough to make the speakers emit smoke will, to him, sound "fuckin' smokin'."

Alex has a point worth re-pointing: dynamics is a tool, a color on the palette of musical expression. To abandon it is to toss out an important musical element of expression. A good analogy might be to go on stage drunk and smacked out on speedballs. I did live sound once for Townes VanZandt, and watched as he nodded off on stage and fell off his stool. It did not earn him any praise for genuineness.

It should be noted that games of the Guitar Hero nature generally directly mix stems of audio (drums, guitars, vox, bass). Which means it's mixing on the fly, usually without incredibly heavy compression on the mix buss (possibly some basic limiting). So it's not really that they gave out two mixes. Even with the same mix chain, stems probably wouldn't produce the same amount of limiting/compression artifacts, and then mixing them after the fact might keep more dynamic range than the full mix. This could actually still prove that it wasn't a problem created by the ME.
But it also means there's more than just "mastering" or "multiple mixes" causing the differences. Anyway, I know this is an old article, just provided my inflated 2 cents.

Todd McMiniment
Wow, this is a joke. I just bought this album via iTunes. It sounds like SHIT. Every track is distorted to the point of being almost irritating to listen to. Lars, pull yer head out of your ass, wipe the shit out of your ears and you'll realize it does NOT sound "smokin". I'll never buy another Metallica album again if this is their "new" way of doing things. I want my money back!!!

That's crazy! I thought the difference in the equal gain clip wasn't going to be that loud, but it made me jump out of my seat! I just don't understand why a sound engineer would output something to a CD at that low of sound quality. It's unfair to the fans (who, if they keep this up, probably won't be able to hear Metallica for much longer…) and it's unfair to the band, whose vocal and instrumental skills are being completely distorted. I understand the need for digital audio, even though, according to die-hard audiophiles, it is at a lower quality. But this is absolutely unacceptable.

Wow??? Why would anyone put this out to the public?? My step father has quite a few Metallica cd's, including a live recording of the band in 1999. Even the live record that Metallica put out sounds a lot better than this "new" sound. My headphones almost can't handle it! I guess the band just wanted a few extra bucks rather than to commit to their true talents.

I found it interesting that the louder version of the song sounded worse than the Guitar Hero version. I always thought that the louder it would be, the better the sound quality would be. When I turn the headphones up on my iPod I find I can hear the bass and rhythm guitar better, but I guess when you record it louder it does the opposite. I've always been bothered by the sound quality of this album; it has a lot of static and not much clarity. I think they should remaster this album and make a re-release. With all the money Metallica makes I would've hoped they could have put a little more time and money into a better production.

I'd never really heard of Metallica until I read this, but it seems to me that their new CD doesn't showcase the band's full potential. I mean, based on the two different comparisons, the sound from the CD sounds like it's blasted and kind of flat or dry! Turning up the volume only made it sound worse, because the drums faded into just a bunch of noise. I liked the GH version much better because you could actually hear the difference in dynamics, and it had more of a variety of sounds. It's almost like when you try to blow up a really small picture and all you get is the blurry pixels. The same happens with this bad sound quality.

Jordan Speicher
Well, I've never heard this album, but from reading this article I'm glad I haven't. I like some contrast in my music and not just a wall of sound. Louder is not always better, and this is one of those cases where that's true! My ears are still ringing from the differences with the equal gains.

Tyler Painter
You know, I find it quite weird that artists/groups nowadays have this competitive need to try and make the loudest song to essentially 'beat out' the competition - but really it's just making their music sound worse!
I saw how in the comparisons the louder mix simply seems more distorted compared to the other track… Simply put, I'd much rather have a full-bodied, quieter song (heck, I can always turn up my speakers) than a distorted, super-loud song!

Oh, Metallica? The band whose greed killed Napster? How quickly we forget. I haven't. Don't care how loud they get, ever, I won't buy anything of theirs. They suck anyway; unless you're 13-15 years old.

Rodan says: "Oh, Metallica? The band whose greed killed Napster?" Why is it classed as greed that they chose to do something about a network that actively allowed users to steal their product? Metallica were totally within their rights to defend their own income stream and, as a producer myself, I'm damn pleased they did: not only for themselves but for us – the little guys – who are trying to make a living out of what we do and don't have the financial clout to take on the Napsters/Rapidshares/Mega-Uploads/Pirate-Bays of this world ourselves. I can't say I like their music, but they will always be heroes to me for having the guts to take on Napster. The food on my child's plate is partly down to them and I am grateful to them for that.

I think the fact that Lars' reference system is in his car explains a lot.

Lauren Prager
April 23rd, 2012 at 11:31 pm
Ha! Just because it's loud does not mean it's better!

May 13th, 2012 at 12:15 pm
I like "The Day That Never Comes", especially the guitar solo at the end. After hearing the Guitar Hero version I am disappointed that Metallica distorted the sound. Where can I get the better version of Death Magnetic?

CeeVee
I remember back when Pro Tools was just a 2-track mastering program and everyone was so stoked that they could make their mixes sound "louder" than everyone else's with the plug-in comps, limiters, maximizers, etc. that it originally came with. You could really mess up a good, dynamic mix with that crap, and it's being done all day long in the name of being "louder"? Those older Bob Rock mixes sound real good, so, wtf?

Pierre-Alexandre Sicart
Louder than *what* exactly? This remark only makes sense in the context of a radio broadcast: you compress like hell to out-scream the competition. But on a CD album? Even if you want to beat the rumble of traffic, a few decibels won't make a world of difference… on the contrary, since you create your own rumble, devoid of clarity. Music, traffic noise, everything becomes mashed together. In the equal-volumes clip I can barely tell the difference between the two mixes.

Well, I may not be college educated, but ever since I started recording, I knew you wanted to AVOID clipping. And this process, especially using a MAXIMIZE-type plug-in, takes clipping to a whole new level. A big pointed finger for all these new-age producers: I used maximize ONCE in a song. I recognized it as a mistake and never played with it again. Use your brains, people. We were all drunk kids in a basement figuring this shit out when we got it right; do you really think it takes all this crap to make it sound good? It doesn't. It never will.

Good thing you're not an audio engineer, ben. (I hope so anyway.)

Thanks for this article; I just purchased this album and thought maybe I had corrupted download files. I am not an audiophile, but at times when I'm listening to this album I cringe. It just sounds like fuckin' shit.
March 13th, 2014 at 10:16 am
"I've been listening to it the last couple of days in my car, and it sounds fuckin' smokin'" – Lars Ulrich. Yeah, let's trust the deaf guy rather than our own ears, where the problem is quite apparent.

Luis Lozano
Long-time fan of Metallica and guitar music in general for a good 40 years, and when we got the pirated copies of Death Magnetic we went and bought the real ones… after we finished laughing at what they did to their own art, we moved on… sometimes bands… musicians… artists… actors… etc. produce total art the wrong way… that's life…. Hope they bring another Lou Reed album out… we had such a larf there too… oh Spinal Tap, where art thow? (pun intended)

How exactly does a 10 dB increase in volume require 10x as much power to produce and double the volume? Spectral and waveform levelers peak at 0 in most of today's recording software and measure down, while monitors can go from -30 to +15. Amplifying a clip by 10 dB doesn't require jack to produce; it's all done digitally, and it by no means doubles the volume. The reason the studio CD sounds like shit is simply because whoever mixed it jacked the volume past the peak threshold, which any self-respecting producer knows never to do. That slightly distorted sound you hear? That's the waveform clipping out during playback. Amplifying it down doesn't reduce it, because the damage is already done. Simply put, the album was well recorded for what Metallica is today, but its producer turned it into shit by making a rookie mistake.

@Tony – "How exactly does a 10dB increase in volume require 10x as much power to produce?" That's the physics. According to Wikipedia, "A change in power by a factor of 10 is a 10 dB change in level." "Amplifying a clip by 10dB doesn't require jack to produce; it's all done digitally, and it by no means doubles the volume." I think you've misunderstood the point about power requirements. But the essence of that passage is about perceived loudness. Loudness perception is subjective, so there is no absolute answer here. But 1 bel, or 10 dB, is generally accepted to represent a doubling (or halving) of perceived volume. [A worked version of this arithmetic appears at the end of the thread.]

You know how music is so personal? How you feel when someone says your favorite band are worthless hacks and posers? Imagine how you would feel if you were in that band and wrote that music and someone said it's so worthless that no one should have to pay to hear it.

Lee 1971
Metallica's 'Death Magnetic' is the poster child of horrific dynamics; this thing will make your ears bleed. I'm a Pink Floyd fan and know nothing about Metallica, but I like to know there are great-sounding albums in the world. All is not lost though; the original 1985 CD of 'Kill 'em All' has fantastic dynamics. This Lars guy is point-blank wrong on this one; it cannot possibly sound anywhere approaching good… but then again, he's bound to say that, right? Check it out at http://dr.loudness-war.info

Hugh Jass
Howdy, folks. I'm a longtime Metallica fan, having bought "Kill 'em All" shortly after its original '83 release. I saw them on the tour that cost them Cliff Burton, and two more times after that, the last being the "…And Justice For All" tour. I ceased interest in any new material after "Load" was a huge disappointment. So, until the above clips, I'd not heard anything from "Death Magnetic", but the poor audio quality brings up an interesting comparison. I read this very old discussion and was very surprised to find not a single mention of "…And Justice For All", Metallica's other aural disaster.
If you've not heard it, it's a great Metallica album that I (and many others) find nearly unlistenable because of the mixing. Nothing wrong with the volume; it's just mixed so incredibly badly. It's very tinny sounding, and I find it fatiguing (the latter being a similar complaint about "Death Magnetic"). In concert, it sounded fantastic. I'm not a fan of live albums, but I actually hoped for one in this instance. The studio release is really THAT bad. Here's what's interesting about it: at least one person involved in "…And Justice" points the finger squarely at Lars himself. The guy couldn't believe Lars wanted it to sound like that, and actually didn't want to sign off on it – but it might have been a career-ender for the guy, so he did. Clearly, Lars didn't have any idea 25 years ago how an album should sound through a good system, so the "hearing loss" argument is not only viable here, it is very possibly retroactive by a quarter of a century. I think Lars literally can't hear the upper spectrum of frequencies worth a damn, and hadn't been able to for a couple of decades by the time "Death Magnetic" was released. So I have an experiment for you: find a lossless, unfettered original sample of any track from "…And Justice For All" and listen to it on your reference system. Afterward, throw a couple of medium-weight blankets over your speakers and listen to the same sample again. I think you'll agree it arguably sounds better. That's what I think it sounds like to Mr. Ulrich, and since in his mind he's the only one that matters, he'll damned well dictate how it's going to sound to the rest of us who haven't been standing next to a running DC-10 for 35 years without ear protection.

punkenheimer
Hugh Jass, lol. Couldn't agree more. But that said, I really don't understand why the other band members would approve of it. Well, everyone knows Lars is an asshole, probably a deaf one as well… but why, oh why the shitty recording? TBH, only their first 3 albums, a few live recordings here and there, and a few old recordings on 'Garage' are good-sounding; everything else is crap. Great music destroyed by poor mixing/recording since the mid '90s. Metallica really need to take a leaf out of Iron Maiden's book when it comes to consistency of recorded sound. P.S. I ended up on this page after googling "why are Metallica recordings so shitty". I might be an Iron Maiden-loving English puke, but I'm a 'Tallica fan as well… No offence.

June 5th, 2015 at 4:45 am
They should have named the album Deaf Magnetic.

Seculla
December 7th, 2015 at 7:16 am
LoL, Deaf Magnetic it is 🙂 If Lars really thinks what he said, then his ears have probably gone bad from all the stage noise over the years. You'd have to be a 3-year-old not to notice there is something very wrong with the outcome. Metallica wins the denial wars. It seems to be hard to admit you've done something wrong…

The ear naturally compresses sound at 500 Hz and up, so if you ever want to "uncompress" some music, just pull down the EQ frequencies above 500 Hz (high-shelf, -1 to -3 dB) and maybe you'll actually enjoy it better!

April 3rd, 2017 at 7:54 am
I wonder about a few different things related to this subject. Is this in any way similar to Beats Audio? I bought a cheap-ass throwaway temporary PC with this on it, and after reading a bunch of complaints, the conclusion was that this Beats crap was put on it to sell more laptops. Because we all know the "WOW" factor that the average person falls prey to. Greed, I agree!!!
You guys as audiophiles and/or engineers appreciate good sound. If I remember correctly, vinyl years ago had the low frequencies identical on the left and right tracks. Maybe they still do. Had a debate about it, and my side was that in a traditional band stage setup, the bassist is on the left and the guitarist on the right. I'll throw a number at it, but it is a guess: let's say 75% of the time. I like to imagine the band as I listen. And my friend said to me, "If a girl is out on a dance floor but closer to the guitarist's side, she might be out of step with the beat. So just put it up the middle; it will save time and money from the cost of all that mixing and mastering." Well, we are no longer friends because of this. His approach is like Common Core: everybody gets a trophy. I am not a professional, but I love the arguments on the perspective; to me, and I know to you guys as well, it is an art form in addition to the band and their performance. It's dying, sadly, but that's what happens when you make everything equal in the butterfly-and-unicorn world of the people running the industry nowadays, compared to the good old days. Keep up the fight; I still agree with most of you.

John Choody
Dear Metallica. I paid (did not STEAL from Napster) for DM and got crappy sound quality. And it struck me at the first hearing. Where has the heavy, quality Metallica sound gone? It sounds exactly like it was recorded with an old tape recorder with automatic microphone level adjustment which could not cope with the high-level drum sound, so the drums sound crappy. Luckily the GH version saved the day. Thank heaven the HTSD sound is OK. Thank you for listening to complaints 🙂

Not Lars
October 4th, 2018 at 6:26 am
I think it sounds great

November 3rd, 2018 at 8:05 pm
They needed multi-track stems for the different channels needed for audio playback. I think they mixed into 6 tracks: vocal, guitar, bass, drums, solo guitar, other.

Metallica (read: Lars Ulrich) wouldn't know "smokin'" sound if it hit him in the face – which it probably has. Compare the shit production on any Metallica record to genius producing like that of my favorite band, The Who. Blows Metallica's out of the water. Lars is a dummy, always has been.
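A worked version of the dB arithmetic from the exchange above, using the standard textbook definitions (the numbers below are the usual conventions, not figures taken from the thread itself):

power:     L = 10 * log10(P2/P1), so a +10 dB change means P2/P1 = 10^(10/10) = 10, i.e. ten times the power
amplitude: L = 20 * log10(A2/A1), so +10 dB corresponds to A2/A1 = 10^(10/20) ≈ 3.16
loudness:  +10 dB (1 bel) is the common rule of thumb for a perceived doubling of volume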
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,304
Liverpool are being linked again with Spartak Moscow flyer Quincy Promes, who was a target during the January transfer window. We're certain to push for big improvements this summer, and if we went in for Dutchman Promes, he'd likely jump at the chance. This term, Promes has ten goals from 20 Russian League appearances, not bad for a player who starts wide on either flank. The 25-year-old is quick, skilful and provides end product, so stylistically he's closer to Sadio Mane than anything currently at our disposal. A winger of this ilk is a summer necessity, but any real links to Promes have died down since the January window closed, with Spartak owner Leonid Fedun telling Sport-Express that his best player will only be sold for €40-50m.
{ "redpajama_set_name": "RedPajamaC4" }
4,568
<?php /** * Spiral Framework. * * @license MIT * @author Anton Titov (Wolfy-J) */ namespace Spiral\Vault\Bootloaders; use Spiral\Vault\Vault; use Spiral\Vault\Navigation; use Spiral\Core\Bootloaders\Bootloader; use Spiral\Http\HttpDispatcher; /** * Boots vault administration panel bindings and routes. You can always extend this bootloader and * disable booting to register route manually. */ class VaultBootloader extends Bootloader { /** * Vault require real booting. */ const BOOT = true; /** * @var array */ protected $bindings = [ 'vault' => Vault::class, Navigation::class => [Vault::class, 'navigation'] ]; /** * @param HttpDispatcher $http * @param Vault $vault */ public function boot(HttpDispatcher $http, Vault $vault) { $http->addRoute($vault->route()); } }
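/*
 * Illustrative sketch, not part of the original file: the class docblock above
 * notes that you can extend this bootloader and disable booting in order to
 * register the route manually. Assuming only the members shown above, a
 * minimal version could look like the subclass below (the subclass name is
 * hypothetical).
 */
class ManualVaultBootloader extends VaultBootloader
{
    /**
     * Disable automatic booting; with BOOT = false, boot() is expected not to
     * be invoked by the framework, so the vault route would have to be added
     * to the HttpDispatcher by hand.
     */
    const BOOT = false;
}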
{ "redpajama_set_name": "RedPajamaGithub" }
4,900
<?php namespace Xsolve\SalesforceClient\Generator; use Xsolve\SalesforceClient\Security\Authentication\AuthenticatorInterface; use Xsolve\SalesforceClient\Security\Authentication\Credentials; use Xsolve\SalesforceClient\Security\Token\TokenInterface; use Xsolve\SalesforceClient\Storage\TokenStorageInterface; class TokenGenerator implements TokenGeneratorInterface { /** * @var Credentials */ protected $credentials; /** * @var AuthenticatorInterface */ protected $authenticator; /** * @var TokenStorageInterface */ protected $tokenStorage; /** * @param Credentials $credentials * @param AuthenticatorInterface $authenticator * @param TokenStorageInterface $tokenStorage */ public function __construct( Credentials $credentials, AuthenticatorInterface $authenticator, TokenStorageInterface $tokenStorage ) { $this->credentials = $credentials; $this->authenticator = $authenticator; $this->tokenStorage = $tokenStorage; } /** * {@inheritdoc} */ public function getToken(): TokenInterface { if ($this->tokenStorage->has()) { return $this->tokenStorage->get(); } $token = $this->authenticator->authenticate($this->credentials); $this->tokenStorage->save($token); return $token; } /** * {@inheritdoc} */ public function regenerateToken(TokenInterface $token): TokenInterface { $newToken = $this->authenticator->regenerate($this->credentials, $token); $this->tokenStorage->save($newToken); return $newToken; } }
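/*
 * Illustrative sketch, not part of the original file: a minimal in-memory
 * implementation of TokenStorageInterface. The method names and signatures
 * below are assumptions inferred from the calls made by TokenGenerator above
 * (has(), get(), save()); the real interface may differ. A real backend would
 * persist the token between processes.
 */
class InMemoryTokenStorage implements TokenStorageInterface
{
    /**
     * @var TokenInterface|null
     */
    private $token;

    public function has(): bool
    {
        return null !== $this->token;
    }

    public function get(): TokenInterface
    {
        return $this->token;
    }

    public function save(TokenInterface $token)
    {
        $this->token = $token;
    }
}

/*
 * Hypothetical usage: the first getToken() call authenticates against the
 * given credentials and caches the resulting token; later calls return the
 * stored token until regenerateToken() replaces it (e.g. after expiry).
 *
 * $generator = new TokenGenerator($credentials, $authenticator, new InMemoryTokenStorage());
 * $token = $generator->getToken();
 */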
{ "redpajama_set_name": "RedPajamaGithub" }
103
\section{Introduction} \label{submission} Children are remarkable learners, and thus their inductive biases should interest machine learning researchers. To help learn the meaning of new words efficiently, children use the ``mutual exclusivity'' (ME) bias -- the assumption that once an object has one name, it does not need another \cite{Markman1988} (Figure \ref{fig:me}). In this paper, we examine whether or not vanilla neural networks demonstrate the mutual exclusivity bias, either as a built-in assumption or as a bias that develops through training. Moreover, we examine common benchmarks in machine translation and object recognition to determine whether or not a maximally efficient learner should use mutual exclusivity. \begin{wrapfigure}{R}{0.23\textwidth} \centering \includegraphics[width=\textwidth]{me4.png} \caption{The mutual exclusivity task used in cognitive development research \cite{Markman1988}. Children tend to associate the novel word (``dax'') with the novel object (right).} \label{fig:me} \end{wrapfigure} When children endeavour to learn a new word, they rely on inductive biases to narrow the space of possible meanings. Children learn an average of about 10 new words per day from the age of one until the end of high school \cite{Bloom2000}, a feat that requires managing a tractable set of candidate meanings. A typical word learning scenario has many sources of ambiguity and uncertainty, including ambiguity in the mapping between words and referents. Children hear multiple words and see multiple objects within a single scene, often without clear supervisory signals to indicate which word goes with which object \cite{Smith2008}. The mutual exclusivity assumption helps to resolve ambiguity in how words map to their referents. Markman and Wachtel \cite{Markman1988} examined scenarios like Figure \ref{fig:me} that required children to determine the referent of a novel word. For instance, children who know the meaning of ``cup'' are presented with two objects, one which is familiar (a cup) and another which is novel (an unusual object). Given these two objects, children are asked to ``Show me a dax,'' where ``dax'' is a novel nonsense word. Markman and Wachtel found that children tend to pick the novel object rather than the familiar one. Although it is possible that the word ``dax'' could be another word for referring to cups, children predict that the novel word refers to the novel object -- demonstrating a ``mutual exclusivity'' bias that familiar objects do not need another name. This is only a preference; with enough evidence, children must eventually override this bias to learn hierarchical categories: a Dalmatian can be called a ``Dalmatian,'' a ``dog'', or a ``mammal'' \cite{Markman1988,Markman1989}. As an often useful but sometimes misleading cue, the ME bias guides children when learning the words of their native language. It is instructive to compare word learning in children and machines, since word learning is also a widely studied problem in machine learning and artificial intelligence. There has been substantial recent progress in object recognition, much of which is attributed to the success of deep neural networks and the availability of very large datasets \cite{Lecun2015}. But when only one or a few examples of a novel word are available, deep learning algorithms lack human-like sample efficiency and flexibility \cite{Lake2016}.
Insights from cognitive science and cognitive development can help bridge this gap, and ME has been suggested as a psychologically-informed assumption relevant to machine learning \cite{LakeLinzenBaroni2019}. In this paper, we examine vanilla, task-general neural networks to understand if they have an ME bias. Moreover, we analyze whether or not ME is a good assumption in lifelong variants of common translation and object recognition tasks. \section{Related work} Children utilize a variety of inductive biases like mutual exclusivity when learning the meaning of words \cite{Bloom2000}. Previous work comparing children and neural networks has focused on the shape bias -- an assumption that objects with the same name tend to have the same shape, as opposed to color or texture \cite{Landau1988}. Children acquire a shape bias over the course of language development \cite{Smith2002}, and neural networks can do so too, as shown in synthetic learning scenarios \cite{Colunga2005,Feinman2018} and large-scale object recognition tasks \cite{Ritter2017} (see also \cite{Id2018} and \cite{Brendel2019} for alternative findings). This bias is related to how quickly children learn the meaning of new words \cite{Smith2002}, and recent findings also show that guiding neural networks towards the shape bias improves their performance \cite{Geirhos2019}. In this work, we take initial steps towards a similar investigation of the ME bias in neural networks. Compared to the shape bias, ME has broader implications for machine learning systems; as we show in our analyses, the bias is relevant beyond object recognition. Closer to the present research, previous cognitive models of word learning have found ways to incorporate the ME bias \cite{Kachergis2012,mcmurray2012word,Frank2009,lambert2005guidelines,zinszer2018bayesian}, although in ways that do not generalize to training deep networks. Other work in natural language processing for cross-situational learning has incorporated ME directly into loss functions \cite{Kadar2015,Lazaridou2016,Gulordava2020} or augmented the final choice function in ways that do not influence the learning/training process \cite{Gulordava2020,cohn2019lost}. Although related to our aims, it is not straightforward to apply these approaches outside of cross-situational word learning, and the choice methods cannot alone make training/learning more efficient. Nevertheless, we see these results as important and encouraging, raising the possibility that ME could aid in training vanilla deep learning systems. \begin{figure*}[!btp] \centering \begin{subfigure}{.35\textwidth} \centering \includegraphics[width=\textwidth]{expclass.png} \caption{} \label{fig:expsetup} \end{subfigure}% \begin{subfigure}{.65\textwidth} \centering \includegraphics[width=\textwidth]{transynth.png} \caption{} \label{fig:st1} \end{subfigure} \caption{Evaluating mutual exclusivity in a feedforward (a) and seq2seq (b) neural network. (a) After training on a set of known objects, a novel label (``dax'') is presented as a one-hot input vector. The network maps this vector to a one-hot output vector representing the predicted referent, through an intermediate embedding layer and an optional hidden layer (not shown). A representative output vector produced by a trained network is shown, placing almost all of the probability mass on known outputs. (b) A similar setup for mapping sequences of labels to their referents.
During the test phase a novel label ``dax'' is presented and the ME Score at that output position is computed.} \end{figure*} \section{Do neural networks reason by mutual exclusivity?}\label{sec:2} In this section, we investigate whether or not vanilla neural architectures have a mutual exclusivity bias. Paralleling the developmental paradigm \cite{Markman1988}, ME is analyzed by presenting a novel stimulus (``Show me the dax'') and asking models to predict which outputs (meanings) are most likely. The strength of the bias is operationalized as the aggregate probability mass placed on the novel rather than the familiar meanings. Our analyses relate to classic experiments by Marcus on whether neural networks can generalize outside their training space \cite{Marcus1998,Marcus2003}. Marcus showed that a feedforward autoencoder trained on arbitrary binary patterns fails to generalize to an output unit that was never activated during training. Our aim is to study whether standard architectures can recognize and learn a more abstract pattern -- a perfect one-to-one mapping between input symbols and output symbols. Specifically, we are interested in model predictions regarding unseen meanings given a novel input. We test for ME in modern neural networks in two settings with synthetic data: classification (feedforward classifiers) and translation (sequence-to-sequence models; as reported in Appendix A). \subsection{Classification} \textbf{Synthetic data.} We consider a simple one-to-one mapping task inspired by Markman and Wachtel \cite{Markman1988}. Translating this into a synthetic experiment, input units denote words and output units denote objects. Thus, the dataset consists of 100 pairs of input and output patterns, each of which is a one-hot vector of length 100. Each input vector represents a label (e.g., `hat', `cup', `dax') and each output vector represents a possible referent object (meaning). Figure \ref{fig:expsetup} shows the input and output patterns for the `dax' case, and similar patterns are defined for the other 99 input and output symbols. A one-to-one correspondence between each input symbol and each output symbol is generated through a random permutation, and there is no structure to the data beyond the arbitrary one-to-one relationship. Models are trained on 90 name-referent pairs and evaluated on the remaining 10 test pairs. No model can be expected to know the correct meaning of each test name -- there is no way to know from the training data -- but several salient patterns are discoverable. First, there is a precise one-to-one relationship exemplified by the 90 training items; the 10 test items can be reasonably assumed to follow the same one-to-one pattern, especially if the network architecture has exactly 10 unused input symbols and 10 unused output symbols. Second, the perfect one-to-one relationship ensures a perfect ME bias in the structure of the data. Although the learner does not know precisely \emph{which new output symbol} a new input symbol refers to, it should predict that the novel input symbol will correspond to \emph{one of the novel output symbols}. An ideal learner should discover that an output unit with a known label does not need another -- in other words, it should utilize ME to make predictions. \textbf{Mutual exclusivity.} We ask the neural network to ``Show me the dax'' by activating the ``dax'' input unit and asking it to select amongst possible referents (similar to Figure \ref{fig:me}).
The network produces a probability distribution over candidate referents (see Figure \ref{fig:expsetup}), and can make relative (two-object) comparisons by isolating the two relevant scores. To quantify the overall propensity toward ME, we define an ``ME score'' that measures the aggregate probability assigned to all of the novel output symbols as opposed to familiar outputs, corresponding to better performance on the classic forced choice ME task. Let us denote the training symbols by $\mathcal{Y}$, drawn from the data distribution $(\mathcal{X},\mathcal{Y})\sim\mathcal{D}$ and the held-out symbols $\mathcal{Y}'$ drawn from $(\mathcal{X}',\mathcal{Y}')\sim\mathcal{D'}$. The mutual exclusivity score is the total probability assigned to unseen output symbols $\mathcal{Y}'$ when shown a novel input symbol $x\in\mathcal{X}'$ \begin{equation} \mbox{ME Score } = \frac{1}{|\mathcal{D'}|} \sum_{(x_i,y_i) \in \mathcal{D'}} \sum_{y\in \mathcal{Y}'}P(f_{net}(x)=y|x_i), \label{eq:1} \end{equation} averaged over each of the test items. An ideal learner that has discovered the one-to-one relationship in the synthetic data should have a perfect ME score of 1.0. In Figure \ref{fig:expsetup}, the probability assigned to the novel output symbol is $0.01$ and thus the corresponding ME Score is $0.01$. The challenge is to get a high ME score for novel (test) items while also correctly classifying known (training) items. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{vanilla10.png} \caption{} \label{fig:p1} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{wd00110.png} \caption{} \label{fig:p2} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{ent10.png} \caption{} \label{fig:p3} \end{subfigure} \caption{Evaluating mutual exclusivity on synthetic categorization tasks. ME Score (solid blue) and the cross-entropy loss (solid red) are plotted against the epochs of training. The configurations in the settings shown were: (a) Results for a model with embedding, hidden, and classification layers, (b) Results for a model with embedding and classification layers trained with a weight decay factor of 0.001, and (c) Results for a model with an embedding and classification layer trained with an entropy regularizer. } \label{fig:synth} \end{figure*} \textbf{Neural network architectures.} A wide range of neural architectures are evaluated on the mutual exclusivity test. We use an embedding layer to map the input symbols to vectors of size $20$ or $100$, followed optionally by a hidden layer, and then by a 100-way softmax output layer. The networks are trained with different activation functions (ReLUs \cite{nair2010rectified}, TanH, Sigmoid), optimizers (Adam \cite{Kingma2015}, Momentum, SGD), learning rates ($0.1,0.01,0.001$) and regularizers (weight decay, batch-normalisation \cite{Ioffe2015}, dropout \cite{srivastava2014dropout}, and entropy regularization (see Appendix B.1)). The models are trained to maximize log-likelihood. All together, we evaluated over 400 different models on the synthetic ME task. \begin{wrapfigure}{R}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{models.png} \caption{Ideal and untrained ME scores compared with the ME scores of a few learned models.} \label{fig:p4} \end{wrapfigure} \textbf{Results.} Several representative training runs with different architectures are shown in Figure \ref{fig:synth}.
An ideal learner that has discovered the one-to-one pattern should have a mutual exclusivity score of 1; for a novel input, the network would assign all the probability mass to the unseen output symbols. In contrast, none of the configurations and architectures tested behave in this way. As training progresses, the mutual exclusivity score (solid blue line; Figure \ref{fig:synth}) tends to fall along with the training loss (red line). In fact, almost all of the networks acquire a strong \emph{anti-mutual exclusivity bias}, transitioning from an initial neutral bias to placing most or all of the probability mass on familiar outputs (seen in Figure \ref{fig:p4}). An exception to this pattern is the entropy regularized model, which maintains a score equivalent to an untrained network (about $0.1$ in this setup, since a uniform distribution over the 100 output symbols places one tenth of its mass on the 10 unseen symbols). In general, trained models strongly predict that a novel input symbol will correspond to a known rather than unknown output symbol, in contradiction to ME and the organizing structure of the synthetic data. Further informal experiments suggest that our results cannot be attributed simply to insufficient data: these architectures do not learn this one-to-one regularity regardless of how many input/output symbols are provided in the training set. Even with thousands of training examples demonstrating a one-to-one pattern, the networks do not learn this abstract principle and fail to capture this defining pattern in the data. Other tweaks were tried in an attempt to induce ME, including eliminating the bias units or normalizing the weights, yet we were unable to find an architecture that reliably demonstrated the ME effect. A similar pattern of results was observed in recurrent seq2seq networks for various standard training settings (see Appendix A). \subsection{Discussion} The results show that vanilla neural networks and recurrent seq2seq neural networks fail to reason by mutual exclusivity when trained in a variety of typical settings. The models fail to capture the perfect one-to-one mapping (ME bias) seen in the synthetic data, predicting that new symbols map to familiar outputs in a many-to-many fashion. Although our focus is on neural networks, this characteristic is not unique to this model class. We posit that it more generally affects any discriminative model class trained to maximize log-likelihood (like multi-class softmax regression, decision trees, etc.). In a trained network, the optimal activation value for an unused output node is zero: for any given training example, increasing the value of an unused output simply reduces the available probability mass for the target output. Using other loss functions could result in different outcomes, but we also did not find that weight decay or entropy regularization at reasonable values could fundamentally alter the use of novel outputs. In the next section, we investigate if the lack of ME could hurt performance on common learning tasks such as machine translation and image classification. \section{Should neural networks reason by mutual exclusivity?} Mutual exclusivity has implications for a variety of common learning settings. It arises naturally in lifelong learning settings, which more realistically reflect the ``open world'' characteristics of human cognitive development. Unlike epoch-based learning, a lifelong learning agent does not assume a fixed set of concepts and categories. Instead, new concepts can be introduced at any point during learning.
An intelligent learner should be sensitive to this possibility, and ME is one means of intelligently reasoning about the meaning of novel stimuli. Children and adults learn in an open world with some probability of encountering a new class at any point, resembling only the first epoch of training a neural net. Moreover, the distribution of categories is neither uniform nor randomly shuffled \cite{smith2017developmental}. To simulate these characteristics, we construct lifelong learning scenarios using standard benchmarks as described below. \subsection{Machine translation} \begin{figure*}[!bt] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{envi.png} \label{fig:mt11} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{ende.png} \label{fig:mt12} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{encs.png} \label{fig:mt21} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{vien.png} \label{fig:mt22} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{deen.png} \label{fig:mt31} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{csen.png} \label{fig:mt32} \end{subfigure} \caption{Analysis of mutual exclusivity in machine translation datasets. The plots show the conditional probability of encountering a new word in the target sentence, if a new word is present in the source sentence (y-axis; red line). Also plotted is the base rate of encountering a new target word (blue line). These quantities are measured at different points during training (x-axis). Error bars are standard deviations.} \label{fig:mt} \end{figure*} In this section, we investigate if mutual exclusivity could be a helpful bias when training machine translation models in a lifelong learning paradigm. From the previous experiments, we know that the type of sequence-to-sequence (seq2seq) models used for translation acquire an anti-ME bias over the course of training (see Appendix A). Would a translation system benefit from assuming that a single word in the source sentence maps to a single word in the target sentence, and vice-versa? This assumption is not always correct, since synonymy and polysemy are prevalent in natural languages, and thus the answer to whether or not ME holds is not absolute. Instead, we seek to measure the degree to which this bias holds in lifelong learning on real datasets, and compare this bias to the inductive biases of models trained on these datasets. The data for translation provides an approximately natural distribution over the frequency at which different words are observed (there are words that appear much more frequently than others). This allows us to use a single pass through the dataset as a proxy for lifelong translation learning. \textbf{Datasets.} We analyze three common datasets for machine translation, each consisting of pairs of sentences in two languages (see Table \ref{tab:mtm}). The vocabularies are truncated based on word frequency in accordance with the standard practices for training neural machine translation models \cite{freitag2014combined, Luong2015, luong2016achieving}.
\begin{table} \centering \caption{Datasets used to analyze ME in machine translation.} {\small \begin{tabular}{cccc} \toprule Name & Languages & Sentence Pairs & Vocabulary Size \\ \midrule IWSLT'14 \cite{freitag2014combined} & Eng.-Vietnamese & $\sim$133K & 17K(en), 7K(vi) \\ WMT'14 \cite{Luong2015} & Eng.-German & $\sim$4.5M & 50K(en), 50K(de) \\ WMT'15 \cite{luong2016achieving} & Eng.-Czech & $\sim$15.8M & 50K(en), 50K(cs)\\ \bottomrule \end{tabular}} \label{tab:mtm} \end{table} \textbf{Mutual exclusivity.} There are several ways to operationalize mutual exclusivity in a machine translation setting. Mutual exclusivity could be interpreted as whether a new word in the source sentence (``Xylophone'' in English) is likely to be translated to a new word in the target sentence (``Xylophon'' in German), as opposed to a familiar word. Since the word alignments are difficult to determine and not provided with the datasets, we instead measure a reasonable proxy: if a new word is encountered in the source sentence, is a new word also encountered in the target sentence? For a source sentence $S$ and an arbitrary novel word $N_S$, and a target sentence $T$ and a novel word $N_T$, we measure a dataset's ME Score as the conditional probability $P(N_T \in T | N_S \in S )$. A hypothetical translation model could compute whether or not $N_S \in S$ by checking if the word in question is absent from the vocabulary seen so far during the training process. Thus this conditional probability is an easily-computable cue for determining whether or not a model should expect a novel output word. For the three datasets, we consider both forward and backward translation to get six scenarios for analysis. The probability $P(N_T \in T | N_S \in S )$ is estimated for a sample of 100 randomly shuffled sequences of the dataset sentence pairs. See Appendix B.3 for details on calculating the base rate $P(N_T \in T)$. \begin{wraptable}{r}{0.6\textwidth} \caption{Number of sentences after which the ME Score $P(N_T \in T | N_S \in S )$ falls below threshold.} {\small \begin{tabular}{ccccccc} \toprule \textbf{Score} & En-Vi & Vi-En & En-De & De-En & En-Cs & Cs-En\\ \midrule 0.9 &0.3K & 2K& 4K&3K & 4K & 3K \\ 0.5 &3K & 40K& 37K& 30K &40K & 30K\\ 0.1 &90K & 120K & 120K & 140K & 130K& 150K\\ \bottomrule \end{tabular}} \label{tab:me_decay} \end{wraptable} \textbf{Results and Discussion.} The decay of the conditional probability in the six scenarios is summarized in Table \ref{tab:me_decay} (see also Figure \ref{fig:mt}). There is a consistent pattern through the trajectory of early learning: the conditional probability $P(N_T \in T | N_S \in S )$ is high for thousands of initial sentence presentations, but then wanes as more samples from the dataset are encountered. For a large part of the initial training, a seq2seq model would benefit from predicting that previously unseen words in the source language are more likely to map to unseen words in the target language. Moreover, this conditional probability is always higher than the base rate of encountering a new word, indicating that conditioning on the novelty of the input provides additional signal for predicting novelty in the output. Nevertheless, even the base rate suggests that a model should expect novel words with some regularity in our settings. This is in stark contrast to the synthetic results showing that seq2seq models quickly acquire an anti-ME assumption (see Appendix A), and their expectation of mapping novel inputs to novel outputs decays rapidly as training progresses (Appendix Figure 8).
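For concreteness, one natural estimator consistent with the procedure described above (the notation here is illustrative and is not taken from the original analysis) counts, over the first $n$ sentence pairs of a shuffled run,
\begin{equation*}
\widehat{P}(N_T \in T \mid N_S \in S) = \frac{\left|\{\, i \leq n \mid \mathrm{new}(S_i) \text{ and } \mathrm{new}(T_i) \,\}\right|}{\left|\{\, i \leq n \mid \mathrm{new}(S_i) \,\}\right|},
\end{equation*}
where $\mathrm{new}(S_i)$ holds iff $S_i$ contains a word absent from the source vocabulary accumulated over pairs $1,\dots,i-1$ (and analogously for $\mathrm{new}(T_i)$); averaging such estimates over the 100 shuffled runs yields the reported values.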
\subsection{Image classification} Similar to translation, we examine if object classifiers would benefit from reasoning by mutual exclusivity during training processes that mirror lifelong learning. To study this, when selecting an image for training, we sample the class from a power law distribution (see Appendix B.2) such that the model is more likely to see certain classes \cite{smith2017developmental}. Ideally, we would model the probability that an object belongs to a novel class based on its similarity to previous samples seen by the model (e.g., outlier detection). Identifying that an image belongs to a novel class is non-trivial, however, and instead we calculate the base rate for classifying an image as ``new'' while a learner progresses through the dataset. The set of classes not seen by the model is referred to as ``new'' here. This measure can be seen as a lower bound on the usefulness of ME through the standard training process, since this calculation assumes a blind learner that is unaware of any novelty signal present in the raw image. Additionally, we train a model using an oracle which tells the model if an input image is from a ``new'' class. Using the signal from the oracle, we implement an ME rule by adding a bias to the ``new'' classes. This setup allows us to understand the utility of the ME bias in the situation where the model is capable of identifying an input as ``new''. \begin{figure} \thisfloatsetup{capbesideposition={right,top},capbesidewidth=0.3\textwidth} \fcapside[\FBwidth] {% \begin{subfloatrow} \ffigbox[0.32\textwidth]{\caption{Omniglot}\label{fig:st11} \centering \includegraphics[width=\linewidth]{omniglot.png}}{} \ffigbox[0.32\textwidth]{\caption{Imagenet}\label{fig:st12} \centering \includegraphics[width=\linewidth]{imagenet.png}}{} \end{subfloatrow} }{\caption {Analysis of mutual exclusivity in classification datasets. The plots show the probability that a new input image belongs to an unseen class $P(N|t)$, as a function of the number of images $t$ seen so far during training (blue). This measure is contrasted with the ME score of a neural network classifier trained through a similar run of the dataset (orange).\label{fig:stump}}} \end{figure} \textbf{Datasets.} This section examines the Omniglot dataset \cite{LakeScience2015} and the ImageNet dataset \cite{deng2009imagenet}. The Omniglot dataset has been widely used to study few-shot learning, consisting of 1623 classes of handwritten characters with 20 images per class. The ImageNet dataset consists of about 1.2 million images from 1000 different classes. \begin{wrapfigure}{R}{0.35\textwidth} \centering \includegraphics[width=\textwidth]{lossplot2.png} \caption{Classification loss when a ``new'' class is first observed during online learning on Omniglot, for varying values of the oracle ME bias. Error bars are standard deviations.} \label{fig:oracle} \end{wrapfigure} \textbf{Mutual exclusivity.} To measure ME throughout the training process, we examine if an image encountered for the first time during training belongs to a class that has not been seen before. This is operationalized as the probability of encountering an image from a new class $N$ as a function of the number of images seen so far $t$, $P(N|t)$ (see Appendix B.4). This analysis is agnostic to the content of the image and whether or not it is a repeated item; it only matters whether or not the class is novel. As before, the analysis is performed using ten random runs through the dataset.
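A minimal formalization of this measure, consistent with the setup just described (the notation is illustrative; the precise definition appears in Appendix B.4): for a sampled run $\pi$ through the dataset, let $\mathrm{new}_\pi(t)=1$ if the $(t+1)$-st image in $\pi$ belongs to a class unseen among the first $t$ images, and $\mathrm{new}_\pi(t)=0$ otherwise; then
\begin{equation*}
\widehat{P}(N \mid t) = \frac{1}{10}\sum_{\pi} \mathrm{new}_\pi(t),
\end{equation*}
averaging over the ten random runs.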
We contrast the statistics of the datasets by comparing them to the ME Score (Equation \ref{eq:1}) of neural network classifiers trained on the datasets. The probability mass assigned to the unseen classes by the network is recorded after each optimizer step, as computed using Equation \ref{eq:1}. \begin{wraptable} {r}{0.55\textwidth} \centering \caption{Number of images after which the ME Score falls below threshold.} {\small \begin{tabular}{ccccc} \toprule \textbf{Score} & Omniglot & Omniglot & Imagenet & Imagenet\\ &&Classifier&&Classifier \\ \midrule 0.2 & 24,304 & 2,144& 1,280 & 2,048\\ 0.1 & 99,248 & 22,912 & 8,448& 3,072\\ 0.05 & 160,608 & 43,328 & 111,872 & 8,960\\ \bottomrule \end{tabular}} \label{tab:me_class} \end{wraptable} For Omniglot, a convolutional neural network was trained on 1623-way classification. The architecture consists of 3 convolutional layers (each consisting of $5\times5$ kernels and $64$ feature maps), a fully connected layer ($576\times 128$) and a softmax classification layer. It was trained with a batch size of 16 using an Adam optimizer and a learning rate of 0.001. For Imagenet, a Resnet18 model \cite{he2016deep} was trained on 1000-way classification with a batch size of 256, using an Adam optimizer and a learning rate of 0.001. If an architecture were capable of reasoning by ME, what would be the expected benefits? To answer this question, we implement an ME rule in the presence of an oracle, adding a constant bias to the logits of the unseen classes whenever the oracle tells the model that the input is ``new''. We compare the classification loss for the instances where a new class is observed, for a range of bias values. The architecture in these experiments consisted of 3 convolutional layers (each consisting of $5\times5$ kernels and $64$ feature maps), a fully connected layer ($576\times 128$) and a softmax classification layer. When an input from an unseen class was observed, a bias was added to the pre-softmax activations. The results presented are over 10 runs with different initializations. \textbf{Results and Discussion.} The results are summarized in Figure \ref{fig:stump} and Table \ref{tab:me_class}. The probability that a new image belongs to an unseen class, $P(N|t)$, is higher than the ME score of the classifier through most of the learning phase. Comparing the statistics of the datasets to the inductive biases in the classifiers, the ME score for the classifiers is substantially lower than the baseline ME measure in the dataset, $P(N|t)$ (Table \ref{tab:me_class}). For instance, the ImageNet classifier drops its ME score below 0.05 after about 8,960 images, while the approximate ME measure for the dataset shows that new classes are encountered at above this rate until at least 111,000 images. These results suggest that neural classifiers, with their bias favoring frequent rather than infrequent outputs for novel stimuli, are not well-suited to lifelong learning challenges where such inferences are critical. Although we examined classifiers trained in an online fashion, we would expect similar results when training them with replay or epoch-based setups, where repeated presentation of past examples would only strengthen the anti-ME bias. These classifiers are hurt by their lack of ME and their failure to consider that new stimuli likely map to new classes. To further quantify the benefits of ME, we analyze a model trained with an oracle ME rule (see Figure \ref{fig:oracle}).
The ME model has lower loss during online predictions as the added bias is increased from 0 (a vanilla network) to 5; for larger values, we observe no further change in the loss. In short, the oracle ME rule assists the model. Ideally, a learning algorithm should be capable of leveraging the image content, combined with its own learning maturity, to decide how strongly it should reason by ME. Instead, typical models and training procedures do not provide these capabilities and do not utilize this important inductive bias observed in cognitive development. \vspace{-6pt} \section{General Discussion} \vspace{-6pt} Children use the mutual exclusivity (ME) bias to learn the meaning of new words efficiently, yet vanilla neural nets trained to maximize likelihood learn very differently. Our results show that they lack the ability to reason with ME, including feedforward networks and recurrent seq2seq models with common regularizers. Beyond simply lacking this bias, these networks learn an anti-ME bias, preferring to map novel inputs to familiar and frequent (rather than unfamiliar) output classes. Our results show that these characteristics are poorly matched to more realistic lifelong learning scenarios where new classes can appear at any point, as demonstrated in the experiments presented here. Neural nets may be currently stymied by their lack of ME bias, ignoring a powerful assumption about the structure of learning tasks. ME is relevant elsewhere in machine learning. Recent work has contrasted the ability of humans and neural networks to learn compositional instructions from just one or a few examples, finding that neural networks lack the ability to generalize systematically \cite{LakeBaroni2018,LakeLinzenBaroni2019}. The authors suggest that people rely on ME in these learning situations \cite{LakeLinzenBaroni2019}, and thus few-shot learning approaches could be improved by utilizing this bias as well. In our analyses, we show that NNs tend to learn the opposite bias, preferring to map novel inputs to familiar outputs. More generally, ME might be fruitfully generalized from applying to ``novel versus familiar'' stimuli to instead handling ``rare versus frequent'' stimuli. The utility of reasoning by ME could be extended to the early stages of epoch-based learning too. For example, during epoch-based learning, neural networks take longer to acquire rare stimuli and patterns of exceptions \cite{McClellandRogers2003}, often mishandling these items for many epochs by mapping them to familiar responses. We posit that the ME assumption will be increasingly important as learners tackle more continual, lifelong, and large-scale learning challenges \cite{Mitchell2018a}. Mutual exclusivity is an open challenge for deep neural networks, but there are promising avenues for progress. The ME bias will not be universally helpful, but it is equally clear that the status quo is sub-optimal: models should not have a strong anti-ME bias regardless of the task and dataset demands. Ideally, a model would decide autonomously how strongly to use ME (or not) based on the demands of the task. For instance, in our synthetic example, an ideal learner would discover the one-to-one correspondence and use this perfect ME bias as a meta-strategy, e.g., \cite{allen2019infinite,snell2017prototypical}. If the dataset instead exhibits many-to-one correspondences, it would adopt another meta-strategy.
This meta-strategy could even change depending on the stage of learning, yet such an approach is not currently available for training models. For instance, the meta-learning model of Santoro et al. \cite{Santoro2016} seems capable of learning an ME bias, although it was not specifically probed in this way. Recent work by Lake \cite{LakeMeta2019} indicates that DNNs can make the ME inference if trained explicitly to do so, showing these abilities are within the repertoire of modern tools. While promising, these recent demonstrations show ME only when it is built in or trained explicitly on tasks such as Markman's behavioral ME paradigm. These setups are unsuitable for large-scale learning, and there are no compelling demonstrations that these ME results aid downstream learning, as occurs in cognitive development. A general solution would improve zero-shot predictions and the speed of learning after just a few examples, by mitigating the strong bias toward familiar responses. In conclusion, vanilla deep neural networks do not naturally reason by mutual exclusivity. Two leading accounts in cognitive development posit ME as either an innate constraint or a bias learned through experience \cite{Lewis2020}, but vanilla DNNs trained with MLE seem inconsistent with both. Designing them to use ME could lead to faster and more flexible learners. There is a compelling case for building models that learn through mutual exclusivity. \section*{Acknowledgements} We are grateful to Marco Baroni, Tal Linzen, Nicholas Tomlin, Emin Orhan, Ethan Perez, and Wai Keen Vong for helpful comments and discussions. Through B. Lake's position at NYU, this work was partially supported by NSF under the NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science. \section*{Broader Impact} Our challenge highlights how biases learned by a model fail to align with the structure of the data. A model that successfully reasons with ME, through either prior knowledge or learning via a meta-strategy, would do better on tail-end distributions where models map rare examples to frequent ones. While representing the structure of the data more accurately allows for quicker generalization, there is also the potential for models to learn a wider range of undesirable biases present in the training data. \bibliographystyle{unsrt}
{ "redpajama_set_name": "RedPajamaArXiv" }
2,337
\section{Introduction} Cardinal arithmetic has been one of the most important areas in set theory. Shortly after Cohen devised the method of forcing, Easton~\cite{easton} proved that the powers of regular cardinals are subject only to K\"onig's theorem in ZFC. Easton's theorem left open the behavior of the powers of \emph{singular} cardinals, known as the Singular Cardinal Problem. Some time later, Silver~\cite{silver} proved the first nontrivial result around the problem: Singular cardinals of uncountable cofinality cannot be the least cardinal at which the Generalized Continuum Hypothesis (GCH) fails. Still later, Shelah~\cite{shelah} developed pcf theory and established a result that supersedes Silver's theorem: \begin{thm}[Shelah]\label{shelah} $\aleph_{\omega}^{\aleph_{0}} < (2^{\aleph_{0}})^{+} + \aleph_{\omega_{4}}$. \end{thm} An outline of the proof is as follows. First we have $\aleph_{\omega}^{\aleph_{0}} = 2^{\aleph_{0}} + {\rm cf}([\aleph_{\omega}]^{\aleph_0},\subseteq)$. The crucial claim is that ${\rm cf}([\aleph_{\omega}]^{\aleph_0},\subseteq) =\max{\pcf{\{\aleph_{n} \mid n < \omega\}}}<\aleph_{\omega_4}$. Shelah proved it by analyzing the structure of $\pcf{\{\aleph_{n} \mid n < \omega\}}$. More specifically, he obtained the latter inequality by showing the following results for a progressive interval of regular cardinals $A$: \begin{itemize} \item $\pcf{A}$ is an interval of regular cardinals with a largest element. \item $|\pcf{A}| < |A|^{+4}$. \end{itemize} Theorem~\ref{shelah} can be generalized for a \emph{non}-fixed point of the $\aleph$ function. Let $\kappa$ be a singular cardinal with $\kappa = \aleph_\mu > \mu$. Shelah proved that ${\rm cf}([\kappa]^{|\mu|},\subseteq)=\max \pcf{A}$ for some progressive interval of regular cardinals $A$ with $\sup A = \kappa$. As before, this reduces the investigation of the power of $\kappa$ to that of the structure of ${\pcf{A}}$. Note that we can take $A$ to be progressive because $\kappa$ is a \emph{non}-fixed point of the $\aleph$ function. Thus the assumption of $A$ being progressive seems essential in pcf theory. Now one may ask \begin{ques}\label{question} What if $A$ is a {non}-progressive interval of regular cardinals? \end{ques} Motivated by this question, we prove in this paper \begin{thm}\label{maintheorem} Suppose $\kappa$ is a measurable cardinal. Then the following hold: \begin{enumerate} \item[$(1)$] $\pcf{\kappa \cap {\rm Reg}} = (2^{\kappa})^{+}\cap {\rm Reg}$. \item[$(2)$] Prikry forcing over $\kappa$ forces that $\pcf{{\kappa} \cap {\rm Reg}} = (2^{{\kappa}})^{+} \cap {\rm Reg}$. \end{enumerate} \end{thm} From Theorem~\ref{maintheorem}~(1) we get \begin{coro}\label{coro1} Suppose $\kappa$ is a measurable cardinal. Then the following hold: \begin{enumerate} \item[$(1)$] $\pcf{\kappa \cap {\rm Reg}}$ has no largest element if $2^{\kappa}$ is singular. \item[$(2)$] $|\pcf{\kappa \cap {\rm Reg}}|> |\kappa \cap {\rm Reg}|^{+4}$ if $2^{\kappa} > \kappa^{+(\kappa^{+4})}$. \end{enumerate} \end{coro} The following corollary of Theorem~\ref{maintheorem}~(2) answers Question~\ref{question}: \begin{coro}\label{coro2} Suppose there is a supercompact cardinal. Then in some forcing extension there is a {non}-progressive interval of regular cardinals $A$ such that ${\rm sup}(A)$ is singular, $\pcf{A}$ has no largest element and $|\pcf{A}|>|A|^{+4}$. \end{coro} The proof of Corollary~\ref{coro2} is as follows. Let $\kappa$ be a supercompact cardinal.
We may assume that $\kappa$ is indestructibly supercompact in the sense of Laver~\cite{laver}. This enables us to get a model in which $\kappa$ is supercompact and $2^{\kappa}$ is a singular cardinal $> \kappa^{+(\kappa^{+4})}$. Finally, Prikry forcing gives a model in which $A=\kappa \cap {\rm Reg}$ is as desired by Theorem~\ref{maintheorem}~(2). See Corollary~\ref{coro34} for an additional property of the final model. Some large cardinal hypothesis is necessary in Theorem~\ref{maintheorem}. Assume on the contrary that GCH holds and there is no weakly inaccessible cardinal. A simple argument shows that if $A \subseteq {\rm Reg}$, then $\pcf{A} = A \cup \{(\sup B)^{+}\mid B \subseteq A$ has no maximal element$\}$, so that $\pcf{A}$ has a largest element and $|\pcf{A}|<|A|^{+4}$. The structure of this paper is as follows. In Section~2, we recall basic facts of pcf theory and Prikry forcing. Theorem~\ref{maintheorem} is proved in Section~3. We also consider the problem whether $\pcf{\pcf{A}}=\pcf{A}$ holds. In Section~4, we prove an analogue of Theorem~\ref{maintheorem}~(2) for Magidor forcing. \section{Preliminaries} In this section, we recall basic facts of pcf theory and Prikry forcing. For more on the topics, we refer the reader to \cite{AM} and \cite{gitik} respectively. We also use \cite{kanamori} as a reference for set theory in general. Our notation is standard. We let ${\rm Reg}$ denote the class of all regular cardinals. Let $A \subseteq {\rm Reg}$. Then $\prod A$ is the set $\{f:A \to \bigcup A \mid \forall \gamma \in A(f(\gamma)< \gamma )\}$. Let $F$ be a filter over $A$. We define a strict order $<_{F}$ on $\prod A$ by $f <_{F} g$ iff $\{\gamma \in A \mid f(\gamma) < g(\gamma)\}\in F$. \begin{defi} For $A\subseteq {\rm Reg}$, $$\pcf{A} = \left\{\operatorname{cf}\left(\prod A,<_{D}\right) \mid D\text{ is an ultrafilter over }A \right\}.$$ \end{defi} Note that $A \subseteq \pcf{A} \subseteq (2^{\sup A})^{+}\cap {\rm Reg}$. If there is an increasing and cofinal sequence in ${\left(\prod A,<_{F}\right)}$ of length $\theta$ for some filter $F$ over $A$, then $\operatorname{cf}(\theta) \in \pcf{A}$. A set $A\subseteq {\rm Reg}$ is \emph{progressive} if $\min A > |A|$. An interval of regular cardinals is a set of the form $[\lambda,\kappa)\cap {\rm Reg}$ for a pair of cardinals $\lambda<\kappa$. Here is the fundamental theorem on progressive intervals of regular cardinals. \begin{thm}[Shelah]\label{progressive} If $A\subseteq {\rm Reg}$ is a progressive interval, then we have \begin{enumerate} \item[$(1)$] $\pcf{A}$ has a largest element. \item[$(2)$] $|\pcf{A}| < |A|^{+4}$. \item[$(3)$] $\pcf{\pcf{A}} = \pcf{A}$. \end{enumerate} \end{thm} Theorem~\ref{scaleexists} is known as the scale theorem. \begin{thm}[Shelah]\label{scaleexists} Suppose $\kappa$ is a singular cardinal. Then there is a set $A \in [\kappa\cap {\rm Reg}]^{\operatorname{cf}(\kappa)}$ such that $\sup A = \kappa$ and ${\left(\prod A,<_F\right)}$ has an increasing and cofinal sequence of length $\kappa^{+}$. Here, $F$ is the cobounded filter over $A$. In particular, $\kappa^{+} \in \pcf{\kappa \cap {\rm Reg}}$. \end{thm} Next, we recall basic facts of Prikry forcing from \cite{prikry}. Let $\kappa$ be a measurable cardinal and $U$ a normal ultrafilter over $\kappa$. Prikry forcing $\mathbb{P}$ is the set $[\kappa]^{<\omega} \times U$ ordered by ${\langle b,Y\rangle}\leq {\langle a,X\rangle}$ iff $a\subseteq_{e}b$ (i.e. $a=b \cap (\max(a)+1)$), $Y \subseteq X$ and $b \setminus a \subseteq X$. 
$\mathbb{P}$ has the $\kappa^{+}$-c.c. and size $2^{\kappa}$. Thus, $\mathbb{P}$ does not change the value of $2^{\theta}$ for any ${\theta}\ge\kappa$. $\mathbb{P}$ preserves all cardinals above ${\kappa}$ but changes the cofinality of ${\kappa}$. Let $\dot{g}$ be a $\mathbb{P}$-name such that $\mathbb{P}\Vdash \dot{g} = \bigcup \{ a \mid \exists X({\langle a,X\rangle} \in \dot{G})\}$, where $\dot{G}$ is the canonical $\mathbb{P}$-name for a generic filter. Then $\dot{g}$ is forced to be a cofinal subset of $\kappa$ of order type $\omega$. Moreover, we will need the following facts: \begin{itemize} \item ${\langle a,X\rangle} \Vdash {a} \subseteq_{e}\dot{g} \land \dot{g}\setminus{a} \subseteq {X}$. In particular, $\mathbb{P}\Vdash \dot{g} \subseteq^{*} X$ for every $X\in U$. \item If $\langle a,X \rangle \Vdash \xi \in \dot{g}$, then $\xi \in a$. \end{itemize} The latter property holds because $\langle a,X\setminus (\xi + 1) \rangle \leq \langle a,X\rangle$ forces $\dot{g} \setminus a \subseteq X \setminus (\xi + 1)$. For subsequent purposes, we present a direct proof of the Prikry lemma. Suppose $\{X_{b}\mid b \in [\kappa]^{<\omega}\}\subseteq U$. The diagonal intersection $\triangle_{b}X_{b}$ is defined to be the set $\{\xi < \kappa \mid \forall b \in [\xi]^{<\omega}(\xi \in X_{b})\}$. Since $U$ is normal, we have $\triangle_{b}X_{b} \in U$. \begin{lem}\label{diagonallemma} Suppose $\{X_{b} \mid b \in [\kappa]^{<\omega}\} \subseteq U$ and $a \in [\kappa]^{<\omega}$. Then any extension of ${\langle a,\triangle_{b}X_{b}\rangle}$ is compatible with ${\langle a,X_{a}\rangle}$. \end{lem} \begin{proof} Let ${\langle c,Y\rangle}\leq {\langle a,\triangle_{b}X_{b}\rangle}$. Then $c\setminus a\subseteq X_{a}$ by $a\subseteq_{e} c$ and $c\setminus a\subseteq \triangle_{b}X_{b}$. Thus ${\langle c,Y\cap X_{a}\rangle}$ is a common extension of ${\langle c,Y\rangle}$ and ${\langle a,X_{a}\rangle}$, as desired. \end{proof} \begin{lem}[Prikry lemma]\label{prikrycondition} Let $a \in [\kappa]^{<\omega}$ and $\sigma$ be a statement of the forcing language. Then there is an $X \in U$ such that ${\langle a,X\rangle}$ decides $\sigma$, i.e. ${\langle a,X\rangle }\Vdash \sigma $ or ${\langle a,X\rangle}\Vdash \lnot \sigma$. \end{lem} \begin{proof} For each $b \in [\kappa]^{<\omega}$ define $X_{b} \in U$ as follows: If $a\subseteq_{e} b$, let $X_{b}$ be the unique one of the following mutually disjoint sets that belongs to $U$: \begin{itemize} \item $X_{b}^{+} = \{\xi < \kappa \mid {b} \subseteq \xi \land \exists Y\in U ({\langle b\cup\{\xi\} ,Y \rangle} \Vdash \sigma) \}$. \item $X_{b}^{-} = \{\xi < \kappa \mid {b} \subseteq \xi \land \exists Y\in U({\langle b\cup\{\xi\} ,Y \rangle} \Vdash \lnot\sigma)\}$. \item $X_{b}^{0} = \kappa \setminus (X_{b}^{+} \cup X_{b}^{-})$. \end{itemize} Otherwise, let $X_{b}= \kappa$. For each $b \in [\kappa]^{<\omega}$ define $Y_{b} \in U$ as follows: If there is a $Y \in U$ such that ${\langle b,Y\rangle}$ decides $\sigma$, let $Y_{b}$ be one such $Y$. Otherwise, let $Y_{b}= \kappa$. We claim that $X= \triangle_{b}(X_{b}\cap Y_{b})\in U$ is as desired. Take an arbitrary extension ${\langle c,Y\rangle}\leq {\langle a,X\rangle}$ that decides $\sigma$. We may assume $c = b\cup\{\xi\}$ with $a \subseteq_{e} b \subseteq \xi$. Note that ${\langle c,Y_{c}\rangle}$ decides $\sigma$. We may assume ${\langle c,Y_{c}\rangle } \Vdash \sigma$. Then ${\langle c,\triangle_{b}Y_{b}\rangle}\Vdash \sigma$ by Lemma~\ref{diagonallemma}, since every extension of ${\langle c,\triangle_{b}Y_{b}\rangle}$ is compatible with ${\langle c,Y_{c}\rangle}$. Thus ${\langle c,X \rangle} \le {\langle c,\triangle_{b}Y_{b}\rangle}$ forces $\sigma$.
We claim that ${\langle b,X\rangle} \Vdash \sigma$; repeating this argument finitely many times then yields ${\langle a,X\rangle}\Vdash\sigma$, which completes the proof. It suffices to show that any extension of ${\langle b,X\rangle}$ is compatible with a condition forcing $\sigma$. Let ${\langle d,Z\rangle} \leq {\langle b,X\rangle}$. We may assume $b \subsetneq_{e} d$. Note that $\xi \in X_{b}$ by $\xi \in X$, and hence $X_{b} = X_{b}^{+}$ by ${\langle b\cup\{\xi\},X\rangle}\Vdash \sigma$. Let $\eta = \min(d\setminus b) \in X$. Then $\eta \in X_{b} = X_{b}^{+}$, so ${\langle b\cup\{\eta\},Y\rangle}\Vdash\sigma$ for some $Y$, and hence ${\langle b\cup\{\eta\},Y_{b\cup\{\eta\}}\rangle}\Vdash\sigma$. Note that $d\setminus (b\cup\{\eta\}) \subseteq Y_{b\cup\{\eta\}}$ by $b\cup\{\eta\} \subseteq_{e} d$ and $d\setminus (b\cup\{\eta\}) \subseteq d\setminus b \subseteq X$. Thus ${\langle d, Z\cap Y_{b\cup\{\eta\}}\rangle}$ is a common extension of ${\langle d,Z\rangle}$ and ${\langle b\cup\{\eta\},Y_{b\cup\{\eta\}}\rangle}$, as desired. \end{proof} \begin{coro} $\mathbb{P}$ adds no new bounded subsets of $\kappa$. In particular, $\mathbb{P}$ preserves all cardinals below $\kappa$. \end{coro} \section{Prikry Forcing and a Non-progressive Interval} The first half of this section is devoted to \begin{proof}[Proof of Theorem~\ref{maintheorem}] Let $\kappa$ be a measurable cardinal. Take a normal ultrafilter $U$ over $\kappa$ and form $j:V \to M \simeq \text{Ult}(V,U)$. For each $\alpha\le2^{\kappa}$, we can choose $f_{\alpha} \in \mbox{}^{\kappa}\kappa$ such that $\alpha =[f_{\alpha}]_{U}$ by $2^{\kappa}\le(2^{\kappa})^{M}<j(\kappa)$. Note that ${\kappa \cap {\rm Reg}}\subseteq \pcf{\kappa \cap {\rm Reg}}\subseteq (2^{\kappa})^{+}\cap {\rm Reg}$. To complete the proof, it suffices to show that $[\kappa,(2^{\kappa})^{+})\cap{\rm Reg}\subseteq \pcf{\kappa\cap{\rm Reg}}$ in both cases, (1) and (2). (1) Let $\theta \in [\kappa,(2^{\kappa})^{+})\cap{\rm Reg}$. Then we may assume $f_{\theta} \in \mbox{}^{\kappa}(\kappa\cap{\rm Reg})$. Since $\kappa=[{\rm id}]_{U}\le[f_{\theta}]_{U}$, we have $$X = \{\xi < \kappa\mid \forall \eta < \xi( f_{\theta}(\eta) < \xi) \land \xi \leq f_{\theta}(\xi)\} \in U.$$ Note that $f_{\theta}\upharpoonright X$ is strictly increasing. Define an ultrafilter $U_{\theta}$ over $\kappa \cap {\rm Reg}$ by $Y \in U_{\theta}$ iff $f_{\theta}^{-1}``Y \in U$. Then we have ${\left(\prod_{\xi\in X}f_{\theta}(\xi),<_{U}\right)}\simeq {\left(\prod f_{\theta}``{X},<_{U_{\theta}}\right)}\simeq {\left(\prod \kappa\cap {\rm Reg},<_{U_{\theta}}\right)}$. \medskip Since $\langle{f_{\alpha}\upharpoonright X \mid \alpha < \theta}\rangle$ is increasing and cofinal in ${\left(\prod_{\xi \in X}f_{\theta}(\xi),<_{U}\right)}$, we have $\theta =\operatorname{cf}\left(\prod_{\xi \in X}f_{\theta}(\xi),<_{U}\right) =\operatorname{cf}(\prod \kappa \cap {\rm Reg} ,<_{U_{\theta}})\in \pcf{\kappa\cap {\rm Reg}}$, as desired. \medskip (2) Let $\mathbb{P}$ be Prikry forcing defined by $U$. Note that the set $(\kappa,(2^{\kappa})^{+})\cap {\rm Reg}$ remains the same after forcing with $\mathbb{P}$, while $\kappa$ becomes singular. Let $\theta\in(\kappa,(2^{\kappa})^{+})\cap{\rm Reg}$. It suffices to prove that $\mathbb{P}\Vdash\theta\in \pcf{\kappa\cap{\rm Reg}}$. Again, we may assume $f_\theta \in {^{\kappa}(\kappa \cap {\rm Reg})}$.
First, note that $$X = \{\xi < \kappa\mid \forall \eta < \xi( f_{\theta}(\eta) < \xi) \land \xi< f_{\theta}(\xi)\} \in U.$$ Since $\mathbb{P} \Vdash \dot{g}\subseteq^{*} {X}$, we have \begin{center} $\mathbb{P} \Vdash {\left(\prod_{\xi \in \dot{g}}{f}_{\theta}(\xi),<^{*}\right)} \simeq {\left(\prod {f}_{\theta}{``}\dot{g},<_{\dot{F}}\right)}.$ \end{center} Here $<^{*}$ and $\dot{F}$ are $\mathbb{P}$-names for the order on $\prod_{\xi \in \dot{g}}{f}_{\theta}(\xi)$ defined by the cobounded filter over $\dot{g}$, and the cobounded filter over ${f}_{\theta}{``}\dot{g}$ respectively. Thus it suffices to prove \begin{itemize} \item[(i)] $\mathbb{P}\Vdash{\langle{f}_{\alpha}\upharpoonright \dot{g} \mid \alpha < {\theta}\rangle}${ is increasing in }${\left(\prod_{\xi \in \dot{g}} {f}_{\theta}(\xi),<^{*} \right)}$. \item[(ii)] $\mathbb{P}\Vdash{\langle{f}_{\alpha}\upharpoonright \dot{g} \mid \alpha < {\theta}\rangle}${ is cofinal in }${\left(\prod_{\xi \in \dot{g}} {f}_{\theta}(\xi),<^{*} \right)}$. \end{itemize} (i) Let $\alpha < \beta$. Then $Y= \{\xi < \kappa \mid f_{\alpha}(\xi) < f_{\beta}(\xi)\}\in U$. If ${\langle a,Z\rangle} \in \mathbb{P}$, then ${\langle a,Y \cap Z\rangle} \Vdash\forall \xi\in\dot{g} \setminus{a} ({f}_{\alpha} (\xi)<{f}_{\beta}(\xi))$, as desired. (ii) By the proof of (i), it suffices to show that $\left\{h \upharpoonright \dot{g}\mid h\in \prod^{V}_{\xi \in{X}}{f}_{\theta}(\xi) \right\}$ is forced to be cofinal in ${\left(\prod_{\xi \in \dot{g}} {f}_{\theta}(\xi),<^{*} \right)}$. Assume $\mathbb{P}\Vdash \dot{h} \in \prod_{\xi \in \dot{g}}{f}_{\theta}(\xi)$. For each $b\in [\kappa]^{<\omega}$ define $Y_{b}\in U$ and $\eta_b < \kappa$ as follows. Note that ${\langle b, X\rangle}$ forces $b \subseteq\dot{g}$ and hence $\dot{h}(\max{{b}}) < f_{\theta}(\max{b})$. By the Prikry lemma, applied to each of the $f_{\theta}(\max{b})$-many statements $\dot{h}(\max{{b}}) ={\eta}$ for $\eta < f_{\theta}(\max{b})$ and using the $\kappa$-completeness of $U$ to intersect the witnessing sets, there is a ${\langle b,Y_{b}\rangle}\le{\langle b, X\rangle}$ that decides $\dot{h}(\max{{b}}) ={\eta}$ for every $\eta < f_{\theta}(\max{b})$. Then we can take an $\eta_b < f_{\theta}(\max{b})$ such that $${\langle b,Y_{b}\rangle}\Vdash \dot{h}(\max{{b}})=\eta_b.$$ For each $\xi \in{X}$ define $$h(\xi) = \sup \{\eta_b+1 \mid b\in [\xi + 1]^{<\omega}\}.$$ Since ${f}_{\theta}(\xi)> \xi$ is regular, we have $h\in \prod_{\xi \in{X}}{f}_{\theta}(\xi)$ in $V$. Let $Y = \triangle_{b}Y_{b}\in U$. We claim that ${\langle a, Y\rangle}\Vdash \forall \xi\in\dot{g}\setminus a(\dot{h}(\xi)<{h}(\xi))$ for every $a\in [\kappa]^{<\omega}$, which completes the proof. It suffices to show that any extension of ${\langle a, Y\rangle}$ forcing $\xi\in\dot{g}\setminus a$ is compatible with a condition forcing $\dot{h}(\xi)<{h}(\xi)$. Suppose ${\langle b,Z\rangle}\le{\langle a,Y\rangle}$ forces $\xi\in\dot{g}\setminus a$. By the property we saw in Section 2, we have $\xi\in b\setminus a$. ${\langle b,Z\rangle}$ is compatible with ${\langle b \cap (\xi+1),Y_{b\cap (\xi+1)}\rangle}$ forcing $\dot{h}(\xi) = \eta_{b\cap(\xi+1)} < h(\xi)$, as in the proof of the Prikry lemma. \end{proof} Corollary~\ref{coro2} shows that the assumption of $A$ being progressive is necessary in Theorem~\ref{progressive}~(1) and (2). Corollary~\ref{coro34} does the same for Theorem~\ref{progressive}~(3). \begin{coro}\label{coro34} One can add ``$A\subsetneq \pcf{A}\subsetneq \pcf{\pcf{A}}$'' to the list of properties of $A$ in Corollary~\ref{coro2}. \end{coro} \begin{proof} Let $A=\kappa \cap {\rm Reg}$ in the final model for Corollary~\ref{coro2}, where $2^{\kappa}$ is singular.
By Theorem~\ref{maintheorem}~(2) we have $\pcf{A} = (2^{\kappa})^{+}\cap {\rm Reg} = 2^{\kappa}\cap {\rm Reg} \ne A$, which in turn implies that $(2^{\kappa})^{+} \in\pcf{\pcf{A}} \setminus\pcf{A}$ by Theorem~\ref{scaleexists}. \end{proof} The rest of this section is devoted to improving Corollary~\ref{coro34}. Define $\operatorname{pcf}^{n}(A)$ for $n<\omega$ by $\operatorname{pcf}^{0}(A) = A$ and $\operatorname{pcf}^{n+1}(A) = \operatorname{pcf}(\operatorname{pcf}^{n}(A))$. \begin{thm}\label{pcfinc} Suppose $\langle\kappa_{i} \mid i < \omega\rangle$ is an increasing sequence of supercompact cardinals. Then the following hold in some forcing extension: \begin{enumerate} \item[$(1)$] $\kappa_{0}$ is a singular cardinal of cofinality $\omega$. \item[$(2)$] $\operatorname{pcf}^{n}(\kappa_{0} \cap {\rm Reg}) \subsetneq \operatorname{pcf}^{n+1}(\kappa_{0} \cap {\rm Reg})$ for every $n < \omega$. \end{enumerate} \end{thm} Lemma~\ref{ccpcf} ensures that sets of the form $\pcf{\theta \cap {\rm Reg}}$ remain the same throughout forcing extensions for Theorem~\ref{pcfinc}. \begin{lem}\label{ccpcf} Suppose $A\subseteq {\rm Reg}$, and $\mathbb{Q}$ has the $\kappa$-c.c. with $\kappa=\min (A)$. Then $\mathbb{Q}\Vdash{\rm pcf}^{V}(A)\subseteq\pcf{A}$. \end{lem} \begin{proof} In $V$, let $\theta \in \pcf{A}$ be arbitrary. Then there are an ultrafilter $D$ over $A$ and an increasing and cofinal sequence ${\langle f_{\alpha}\mid \alpha < \theta\rangle}$ in ${\left(\prod A,<_{D}\right)}$. Let $\dot{E}$ be a $\mathbb{Q}$-name for the filter generated by ${D}$. Since $\theta \geq \kappa$ remains regular after forcing with $\mathbb{Q}$, it suffices to prove that ${\langle f_{\alpha}\mid \alpha < \theta\rangle}$ is forced to be increasing and cofinal in ${\left(\prod{A},<_{\dot{E}}\right)}$. It is easy to see the former. For the latter, it suffices to prove that $\prod^{V}{A}$ is forced to be cofinal in ${\left(\prod{A},<_{\dot{E}}\right)}$. Assume $p \Vdash \dot{h} \in \prod{A}$. For each $\gamma \in A$, define $$h^*(\gamma) = \sup \{\xi + 1 \mid \exists q \leq p ( q \Vdash \dot{h}({\gamma}) ={\xi})\}.$$ Then $p \Vdash \dot{h}({\gamma}) <{h^*}({\gamma})$ for every $\gamma \in A$. Since $\mathbb{Q}$ has the $\kappa$-c.c. and $\gamma \geq \kappa$ is regular, we have $h^* \in \prod A$ in $V$, as desired. \end{proof} \begin{proof}[Proof of Theorem~\ref{pcfinc}] We may assume that each $\kappa_i$ is indestructibly supercompact in the sense of Laver~\cite{laver} and $2^{\kappa_{i}} = \kappa_i^{+}$. We refer the reader to \cite{apter} for more details. Let $\mathbb{Q}$ be the full support product $\prod_{i < \omega}{\rm Add}(\kappa_i,\kappa_{i + 1})$, where ${\rm Add}(\kappa_{i},\kappa_{i + 1})$ is the poset adding $\kappa_{i + 1}$ many Cohen subsets of $\kappa_{i}$. Standard arguments show that $\mathbb{Q}$ preserves cofinalities and forces $2^{\kappa_n} = \kappa_{n + 1}$ for every $n < \omega$. We claim that $\mathbb{Q}$ forces $\pcf{[\kappa_{n}^{+},\kappa_{n+1}^{+}) \cap {\rm Reg}}\supseteq [\kappa_{n}^{+},\kappa_{n + 2}^{+}) \cap {\rm Reg}$ for every $n < \omega$. Let $G\subseteq\mathbb{Q}$ be generic. Since $\mathbb{Q} \simeq \prod_{i > n}\operatorname{Add}(\kappa_{i},\kappa_{i + 1}) \times\textstyle{\prod_{i\le n}\operatorname{Add}(\kappa_{i},\kappa_{i+1})}$ in $V$, we have $G\simeq G_n \times H_n$ in $V[G]$. By Theorem~\ref{maintheorem}~(1), $\pcf{\kappa_{n}^{+} \cap {\rm Reg}} =\pcf{\kappa_{n} \cap {\rm Reg}} \cup\{\kappa_{n}\} =(2^{\kappa_{n}})^{+} \cap {\rm Reg} = \kappa_{n}^{++} \cap {\rm Reg}$ in $V$. 
This remains true in $V[G_n]$ by the $\kappa_{n+1}$-closure of the corresponding poset. Now we work in $V[G_n]$. Note that $\kappa_{n+1}$ is supercompact and $2^{\kappa_{n+1}} = \kappa_{n + 2}$. By Theorem~\ref{maintheorem}~(1) we have $\pcf{\kappa_{n+1}^{+} \cap {\rm Reg}} =\pcf{\kappa_{n+1} \cap {\rm Reg}} \cup\{\kappa_{n+1}\} = (2^{\kappa_{n+1}})^{+} \cap {\rm Reg} = \kappa_{n + 2}^{+} \cap {\rm Reg}$. Therefore $\pcf{[\kappa_{n}^{+},\kappa_{n+1}^{+}) \cap {\rm Reg}} = [\kappa_{n}^{+},\kappa_{n + 2}^{+}) \cap {\rm Reg}$. Note that $\left(\prod_{i \le n}\operatorname{Add}(\kappa_{i},\kappa_{i+1})\right)^{V}= \prod_{i \le n}\operatorname{Add}(\kappa_{i},\kappa_{i+1})$ has the $\kappa_{n}^{+}$-c.c. By Lemma~\ref{ccpcf} we have $\pcf{[\kappa_{n}^{+},\kappa_{n+1}^{+}) \cap {\rm Reg}} \supseteq \pcf{[\kappa_{n}^{+},\kappa_{n+1}^{+}) \cap {\rm Reg}}^{V[G_n]} = [\kappa_{n}^{+},\kappa_{n + 2}^{+}) \cap {\rm Reg}$ in $V[G]=V[G_n][H_n]$, as desired. Since $\mathbb{Q}$ is $\kappa_{0}$-directed closed in $V$, $\kappa_0$ remains supercompact in $V[G]$. So we can define Prikry forcing $\mathbb{P}$ over $\kappa_0$. By Theorem~\ref{maintheorem}~(2), $\mathbb{P}$ forces $\pcf{\kappa_{0} \cap {\rm Reg}}= (2^{\kappa_{0}} )^{+}\cap {\rm Reg} = \kappa_{1}^{+} \cap {\rm Reg}$. By Lemma~\ref{ccpcf}, $\mathbb{P}$ forces $[\kappa_{n}^{+},\kappa_{n + 2}^{+}) \cap {\rm Reg}\subseteq \pcf{[\kappa_{n}^{+},\kappa_{n+1}^{+}) \cap {\rm Reg}}\subseteq [\kappa_{n}^{+},(2^{\kappa_{n+1}})^{+}) \cap {\rm Reg} =[\kappa_{n}^{+},\kappa_{n + 2}^{+}) \cap {\rm Reg}$ for every $n< \omega$. Let $H\subseteq \mathbb{P}$ be generic. In $V[G][H]$, we have ${\operatorname{pcf}^{n+1}(\kappa_0 \cap {\rm Reg})} ={\kappa_{n+1}^{+} \cap {\rm Reg}}$ by induction on $n < \omega$. \end{proof} \section{An Analogue for Magidor Forcing} Prikry forcing is known for a wealth of variations. In this section, we give an analogue of Theorem~\ref{maintheorem}~(2) for one of them. Here we take up Magidor forcing from \cite{magidor1978changing}, but the argument works equally well for other variations, e.g. the diagonal Prikry forcing as defined in \cite{NU}. Magidor forcing uses a sequence of ultrafilters rather than a single ultrafilter, and makes a hypermeasurable cardinal into a singular cardinal of uncountable cofinality. For normal ultrafilters $U ,U'$ over $\kappa$, $U \vartriangleleft U'$ iff $U\in M \simeq \text{Ult}(V,U')$. Let ${\langle U_{\alpha} \mid \alpha < \lambda\rangle}$ be a $\vartriangleleft$-increasing sequence with $\lambda <\kappa$. Note that such a sequence exists if $\kappa$ is supercompact. For any $\beta< \alpha < \lambda$, we fix a function $F_{\beta}^{\alpha} \in \mbox{}^{\kappa}V$ such that $[F_{\beta}^{\alpha}]_{U_{\alpha}} = U_{\beta}$. For each $\alpha< \lambda$, define \begin{align*} A_{\alpha} & = \{\delta < \kappa \mid \forall \beta < \alpha \forall \gamma < \beta(F_{\gamma}^{\alpha}(\delta) \vartriangleleft F_{\beta}^{\alpha}(\delta) \text{ are normal ultrafilters over }\delta)\}.\\ B_{\alpha} & = \{\delta \in A_{\alpha}\setminus(\lambda + 1) \mid \forall \beta < \alpha \forall \gamma < \beta([F^{\beta}_{\gamma}\upharpoonright \delta]_{F_{\beta}^{\alpha}(\delta)} = F_{\gamma}^{\alpha}(\delta))\}. \end{align*} Note that $B_{\alpha} \in U_{\alpha}$.
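Roughly speaking, membership in $A_{\alpha}$ says that the ultrafilters coded by the $F^{\alpha}_{\beta}$ form a $\vartriangleleft$-increasing sequence of normal ultrafilters at $\delta$, while $B_{\alpha}$ adds a natural coherence condition between the representing functions; both statements hold $U_{\alpha}$-almost everywhere by {\L}o\'s's theorem (this remark is ours; see \cite{magidor1978changing} for the details).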
Magidor forcing $\mathbb{M}$ is the set of pairs $\langle a,X \rangle$ such that \begin{itemize} \item $a$ is an increasing function such that \begin{itemize} \item $\operatorname{dom}(a) \in [\lambda]^{<\omega}$ and $\forall \alpha \in \operatorname{dom}(a)(a(\alpha) \in B_{\alpha})$. \end{itemize} \item $X$ is a function such that \begin{itemize} \item $\operatorname{dom}(X) = \lambda \setminus \operatorname{dom}(a)$ and $\forall \alpha \in \operatorname{dom}(X)(X(\alpha) \subseteq B_{\alpha})$, \item for every $\alpha \in \operatorname{dom}(X)$, if $\operatorname{dom}(a) \setminus (\alpha + 1) = \emptyset$, then $X(\alpha) \in U_{\alpha}$; otherwise, $X(\alpha) \in F^{\beta}_{\alpha}(a(\beta))$, where $\beta = \min (\operatorname{dom}(a) \setminus (\alpha + 1))$. \end{itemize} \end{itemize} $\mathbb{M}$ is ordered by ${\langle a,X\rangle } \leq {\langle b,Y\rangle }$ iff $b \subseteq a$, $\forall \alpha \in \operatorname{dom}(X)(X(\alpha) \subseteq Y(\alpha))$ and $\forall \alpha \in \operatorname{dom}(a) \setminus \operatorname{dom}(b)(a(\alpha) \in Y(\alpha))$. $\mathbb{M}$ has the $\kappa^{+}$-c.c. and size $2^{\kappa}$. Thus, $\mathbb{M}$ does not change the value of $2^{\theta}$ for any $\theta \geq \kappa$. $\mathbb{M}$ preserves all cardinals above $\kappa$ but changes the cofinality of $\kappa$, like Prikry forcing. Let $\dot{g}$ be an $\mathbb{M}$-name such that $\mathbb{M} \Vdash \dot{g} = \bigcup\{a \mid \exists X\langle a,X \rangle \in \dot{G} \}$, where $\dot{G}$ is the canonical $\mathbb{M}$-name for a generic filter. $\dot{g}$ is forced to be an increasing sequence of length $\lambda$ which converges to $\kappa$. As in Prikry forcing, we also have \begin{itemize} \item $\langle a, X \rangle \Vdash \dot{g}\upharpoonright \operatorname{dom}({a}) = {a} \land \forall \alpha \in \lambda \setminus \operatorname{dom}(a)(\dot{g}(\alpha) \in X(\alpha))$. \end{itemize} For each $\beta < \lambda$, we let $\mathbb{M}_{\beta}= \{{\langle a,X \rangle}_{\beta}\mid {\langle a,X \rangle} \in \mathbb{M}\}$ and $\mathbb{M}^{\beta} = \{{\langle a,X \rangle}^{\beta} \mid {\langle a,X \rangle} \in \mathbb{M}\}$. Here, ${\langle a,X\rangle}_{\beta}$ and ${\langle a,X\rangle}^{\beta}$ are ${\langle a\upharpoonright (\beta + 1),X\upharpoonright (\beta + 1)\rangle}$ and ${\langle a\upharpoonright (\lambda \setminus (\beta + 1)),X\upharpoonright (\lambda \setminus (\beta + 1))\rangle}$ respectively. The orders on $\mathbb{M}_\beta$ and $\mathbb{M}^{\beta}$ are naturally induced by that on $\mathbb{M}$. $\mathbb{M}$ can be factored as follows. \begin{lem} For every ${\langle a,X\rangle} \in \mathbb{M}$ and $\beta \in \operatorname{dom}(a)$, we have $$\mathbb{M} / {\langle a,X\rangle} \simeq \mathbb{M}_{\beta}/{\langle a,X\rangle}_{\beta} \times \mathbb{M}^{\beta}/{\langle a,X\rangle}^{\beta}.$$ \end{lem} Note that $\mathbb{M}_{\beta} / \langle a,X\rangle_{\beta}$ has the $a(\beta)^{+}$-c.c. Lemmas~\ref{magidordiagonallemma} and \ref{magidorprikrycondition} are analogues of Lemmas~\ref{diagonallemma} and \ref{prikrycondition} for Magidor forcing respectively. See \cite{magidor1978changing} for proofs. \begin{lem}\label{magidordiagonallemma} Suppose that ${\langle a,X\rangle} \in \mathbb{M}$ and $\{{\langle b,X_{b}\rangle} \mid b \in {\rm LP}\}$ is a set of extensions of $\langle a,X\rangle$ where ${\rm LP} = \{b \mid \exists Y(\langle{b,Y}\rangle \leq \langle{a,X}\rangle )\}$.
Then there is a $Z$ such that $\langle a,Z \rangle \in \mathbb{M}$ and every extension of $\langle{b,Y}\rangle$ is compatible with $\langle b,X_{b}\rangle$ whenever $\langle b,Y \rangle \leq \langle a,Z \rangle$. \end{lem} \begin{lem}[Prikry lemma]\label{magidorprikrycondition} For every $\langle a,X\rangle \in \mathbb{M}$, statement $\sigma$ of the forcing language and $\beta \in \operatorname{dom}(a)$, there is a $Z$ such that \begin{itemize} \item $\langle a,Z \rangle \leq \langle a,X \rangle$ and $\langle a,Z \rangle_{\beta} = \langle a,X \rangle_{\beta}$. \item If $\langle b,Y \rangle \leq \langle a,Z \rangle$ decides $\sigma$, then $\langle b, Y\rangle_{\beta}^\frown \langle a,Z \rangle^{\beta}$ decides $\sigma$. \end{itemize} \end{lem} Here is the fundamental theorem of Magidor forcing: \begin{thm}[Magidor] The following hold: \begin{enumerate} \item[$(1)$] $\mathbb{M}$ adds no new subsets of $\lambda$. In particular, $\lambda^{+}\cap {\rm Reg}$ remains the same after forcing with $\mathbb{M}$. \item[$(2)$] $\mathbb{M}$ preserves all cardinals. \item[$(3)$] $\mathbb{M}$ forces that ${\kappa}$ is a strong limit singular cardinal of cofinality ${\lambda}$. \end{enumerate} \end{thm} Now we get an analogue of Theorem~\ref{maintheorem}~(2) for Magidor forcing. \begin{thm}\label{magidormaintheorem} $\mathbb{M}$ forces that $\pcf{{\kappa} \cap {\rm Reg}} = (2^{{\kappa}})^{+} \cap {\rm Reg}$. \end{thm} \begin{proof} By the proof of Theorem~\ref{maintheorem}, it suffices to show that $\mathbb{M} \Vdash (\kappa,{(2^{\kappa})}^{+}) \cap {\rm Reg} \subseteq \pcf{\kappa \cap {\rm Reg}}$. Note that $(\kappa,(2^{\kappa})^{+}) \cap {\rm Reg}$ remains the same after forcing with $\mathbb{M}$. Let $\theta \in (\kappa,(2^{\kappa})^{+})\cap {\rm Reg}$. Let us see that $\mathbb{M} \Vdash \theta \in \pcf{\kappa \cap {\rm Reg}}$. For every $\gamma \leq \theta$ and $\alpha < \lambda$, we fix a function $f^{\alpha}_{\gamma} \in {^{\kappa}}\kappa$ such that $[f^{\alpha}_{\gamma}]_{U_{\alpha}} = \gamma$. We may assume $f_\theta^\alpha \in {^{\kappa}(\kappa \cap {\rm Reg})}$. Let $X' \in \prod_{\alpha < \lambda}U_\alpha$ be a function in $V$ such that $X'({\alpha}) = \{\xi \in B_\alpha \mid \forall \eta < \xi(f^{\alpha}_{\theta}(\eta) < \xi) \land \xi <f_{\theta}^{\alpha}(\xi)\}$ for every $\alpha < \lambda$. We will show that $\mathbb{M}$ forces that ${\left(\prod_{\alpha \in \lambda}{f}^{\alpha}_{\theta}(\dot{g}(\alpha)),<^{*}\right)}$ has an increasing and cofinal sequence of length $\theta$. Here, $<^{*}$ is an $\mathbb{M}$-name for the order on $\prod_{\alpha \in \lambda}{f}^{\alpha}_{\theta}(\dot{g}(\alpha))$ defined by the cobounded filter over $\lambda$. This gives the desired result, as shown by the following argument: By a usual density argument, we can find an $\mathbb{M}$-name $\dot{A}$ such that $\mathbb{M}$ forces the following properties: \begin{itemize} \item $\dot{A} \in [\lambda]^{\lambda}$. \item $\forall \alpha,\beta \in \dot{A}(\alpha < \beta \to f_\theta^{\alpha}(\dot{g}(\alpha)) < f_\theta^{\beta}(\dot{g}(\beta)))$. \end{itemize} And thus, by the proof of Theorem~\ref{maintheorem}, we have \begin{center} $\mathbb{M} \Vdash {\left(\prod_{\alpha \in \dot{A}}{f}^{\alpha}_{\theta}(\dot{g}(\alpha)),<^{*} \upharpoonright \dot{A}\right)} \simeq \left(\prod \{{f}_{\theta}^{\alpha}(\dot{g}(\alpha)) \mid \alpha \in \dot{A}\}, <_{\dot{F}}\right)$. \end{center} Here, $\dot{F}$ is an $\mathbb{M}$-name for the cobounded filter over $\{{f}_{\theta}^{\alpha}(\dot{g}(\alpha)) \mid \alpha \in \dot{A}\}$.
It follows that $\mathbb{M}$ forces that $\left(\prod \{{f}_{\theta}^{\alpha}(\dot{g}(\alpha)) \mid \alpha \in \dot{A}\}, <_{\dot{F}}\right)$ has an increasing and cofinal sequence of length $\theta$. For every $\gamma < \theta$, let $\dot{f}_{\gamma}$ be an $\mathbb{M}$-name for the function $\alpha \mapsto {f}^{\alpha}_{\gamma}(\dot{g}(\alpha))$. It suffices to prove \begin{itemize} \item[(i)] $\mathbb{M}\Vdash{\langle\dot{f}_{\gamma}\mid \gamma < {\theta}\rangle}${ is increasing in }$\left(\prod_{\alpha < {\lambda}}{f}^{\alpha}_{\theta}(\dot{g}(\alpha)),<^{*}\right)$. \item[(ii)] $\mathbb{M} \Vdash \langle\dot{f}_\gamma \mid \gamma < \theta\rangle${ is cofinal in }$\left(\prod_{\alpha < {\lambda}}{f^{\alpha}_{\theta}}(\dot{g}(\alpha)),<^{*}\right)$. \end{itemize} (i) Let $\gamma < \delta < \theta$. Note that we have $Y(\alpha) = \{\xi < \kappa \mid f_{\gamma}^{\alpha}(\xi) < f_{\delta}^{\alpha}(\xi)\} \in U_\alpha$ for each $\alpha < \lambda$. Let $\langle a,X\rangle \in \mathbb{M}$ be arbitrary. Define $Z = (X \upharpoonright \beta_a)^{\frown}\langle X(\alpha) \cap Y(\alpha) \mid \alpha \geq \beta_a \rangle$. Here, $\beta_a = \max{\operatorname{dom}(a)}$. Then $\langle{a,Z}\rangle \leq \langle a,X\rangle$ forces $\dot{f}_{\gamma}(\alpha) < \dot{f}_{\delta}(\alpha)$ for every $\alpha > \beta_a$. (ii) Let $\langle a,X \rangle \in \mathbb{M}$ and $\dot{h}$ be arbitrary. Suppose $\langle a,X \rangle \Vdash \dot{h} \in \prod_{\alpha < {\lambda}}{f}^{\alpha}_{\theta}(\dot{g}(\alpha))$. By the proof of (i), we may assume that $X(\alpha) \subseteq X'(\alpha)$ for all $\alpha > \beta_a$. For each $b \in {\rm LP} = \{b \mid \exists Y(\langle b,Y\rangle \leq \langle a,X \rangle)\}$ define $Y_{b}$ and $\eta_b < \kappa$ as follows. If $\beta_b > \beta_a$, then by Lemma~\ref{magidorprikrycondition} and $f_{\theta}^{\beta_b}(b(\beta_b)) < \kappa$, there is a $Y_b$ such that \begin{itemize} \item $\langle b,Y_b\rangle \leq \langle a,X\rangle$. \item If $\langle c,Z\rangle \leq \langle b,Y_{b}\rangle$ forces $\dot{h}(\beta_b) = {\zeta}$, then $\langle c,Z\rangle_{\beta_b}^{\frown} \langle b,Y_{b}\rangle^{\beta_b}\Vdash \dot{h}({\beta_b}) = {\zeta}$. \end{itemize} Define $\eta_{b}$ by \begin{center} $\eta_{b} = \sup\{\zeta + 1< f^{\beta_b}_{\theta}(b(\beta_b))\mid \exists p \in \mathbb{M}_{\beta_b} / \langle b,Y_{b}\rangle_{\beta_b}(p^{\frown}\langle b,Y_{b}\rangle^{\beta_b} \Vdash \dot{h}({\beta_b}) = {\zeta})\}$. \end{center} Then, \begin{center} $\langle b,Y_{b} \rangle \Vdash \dot{h}({\beta_b}) < \eta_b$. \end{center} For $b \in \mathrm{LP}$ with $\beta_b \leq \beta_a$, we set $Y_b = X$ and $\eta_b = 0$. For each $b \in \mathrm{LP}$, since $\mathbb{M}_{\beta_b} / \langle b,Y_{b}\rangle_{\beta_b}$ has the $b(\beta_b)^{+}$-c.c., we have $\eta_{b} < f^{\beta_b}_{\theta}(b(\beta_b))$. For every $\alpha > \beta_a$, define $h^{\alpha}(\xi) = \sup \{\eta_{b}< f^{\alpha}_{\theta}(\xi) \mid b \in {\rm LP} \land b(\beta_b) = \xi \land \beta_b = \alpha\}$. Since $|\{b \in {\rm LP} \mid b(\beta_b)= \xi \land \beta_b = \alpha\}| = |\alpha| \cdot |\xi|$, we have $h^\alpha(\xi) < f_\theta^\alpha(\xi)$ for every $\xi \in X(\alpha)$. Let $\gamma = \sup_{\alpha > \beta_a}[h^{\alpha}]_{U_{\alpha}} + 1< \theta$. By Lemma~\ref{magidordiagonallemma} and the proof of (i), there is an extension $\langle a,Z \rangle \leq \langle a,X \rangle$ such that \begin{itemize} \item every extension of $\langle b, Y \rangle$ is compatible with $\langle b,Y_{b}\rangle$ whenever $\langle b,Y \rangle \leq \langle a,Z \rangle$.
\item $\forall \alpha > \beta_a\forall \xi \in Z(\alpha)(h^{\alpha}(\xi) < f_{\gamma}^{\alpha}(\xi))$. \end{itemize} Lastly, we claim that $\langle a,Z \rangle \Vdash \dot{h}(\alpha) < \dot{f}_{\gamma}(\alpha)$ for all $\alpha > \beta_a$. Let $\langle {b,Y}\rangle \leq \langle {a,Z} \rangle$ and $\alpha > \beta_a$ be arbitrary. Extending $\langle b,Y\rangle$ we may assume that $\alpha \in \operatorname{dom}(b) \setminus (\beta_a + 1)$. Now, we can find $\langle c,Y'\rangle$ such that $\langle b,Y\rangle \leq \langle c,Y'\rangle \leq \langle a,Z \rangle$ and $\beta_c = \alpha$. By the first property of $\langle a,Z \rangle$ above, $\langle{c,Y_{c}}\rangle$ and $\langle b,Y\rangle$ have a common extension forcing $\dot{h}(\alpha) < \eta_c \leq h^{\alpha}(c(\alpha)) < f^{\alpha}_{\gamma}(c(\alpha)) = \dot{f}_{\gamma}(\alpha)$, as desired. \end{proof} Theorem~\ref{magidormaintheorem} enables us to generalize Theorem~\ref{pcfinc} as follows, including the case of uncountable cofinality. \begin{thm}\label{pcfinc2} Suppose $\langle\kappa_{i} \mid i < \omega \rangle$ is an increasing sequence of supercompact cardinals greater than a regular cardinal $\lambda$. Then in some forcing extension the following hold: \begin{enumerate} \item[$(1)$] $\kappa_{0}$ is a singular cardinal of cofinality $\lambda$. \item[$(2)$] $\operatorname{pcf}^{n}(\kappa_{0} \cap {\rm Reg}) \subsetneq \operatorname{pcf}^{n+1}(\kappa_{0} \cap {\rm Reg})$ for all $n< \omega$. \item[$(3)$] $\lambda^{+} \cap {\rm Reg} = (\lambda^{+} \cap {\rm Reg})^{V}$. \end{enumerate} \end{thm} \if0 We next take up the diagonal Prikry forcing originally introduced by Gitik--Sharon \cite{GS}. We refer the reader to \cite{NU} for the analogues of Lemmas \ref{diagonallemma} and \ref{prikrycondition}, which imply the following \begin{thm} The diagonal Prikry forcing over $\kappa$ forces that $\pcf{{\kappa}\cap {\rm Reg}} = (2^{{\kappa}})^{+}\cap {\rm Reg}$. \end{thm} \fi For $A \subseteq {\rm Reg}$, define \begin{center} $\operatorname{pcf}^{\alpha}(A) = \begin{cases}A & \alpha = 0 \\ \pcf{\operatorname{pcf}^{\beta}(A)} & \alpha = \beta + 1 \\ \bigcup_{\beta < \alpha} \operatorname{pcf}^{\beta}(A) & \alpha \in {\rm Lim} \end{cases}$ \end{center} Note that GCH implies $\pcf{\pcf{A}} = \pcf{A}$ for every $A \subseteq {\rm Reg}$. By Theorem~\ref{pcfinc2}, it is consistent that ${\langle\operatorname{pcf}^{n}(A) \mid n < \omega\rangle}$ is $\subsetneq$-increasing for some $A\subseteq {\rm Reg}$. We conclude this paper with the following \begin{ques} Is it a theorem of {\rm ZFC} that for every $A \subseteq {\rm Reg}$ there is an $\alpha$ such that $\operatorname{pcf}^{\alpha + 1}(A) = \operatorname{pcf}^{\alpha}(A)$? \end{ques} \bibliographystyle{plain}
{ "redpajama_set_name": "RedPajamaArXiv" }
3,060
Telostylinus longipennis is a species of two-winged fly (Diptera) described by Aczel in 1954. Telostylinus longipennis belongs to the genus Telostylinus and the family Neriidae. No subspecies are listed in the Catalogue of Life. References
{ "redpajama_set_name": "RedPajamaWikipedia" }
4,472
La Casa del Músic was a masia (traditional farmhouse) of the old hamlet of Els Masions, in the former municipality of Fígols de Tremp, currently belonging to the municipality of Tremp. It stood in the eastern part of Els Masions, east of the Masia del Rei and south-southeast of Lo Masió. It lies on the right-hand side of the Barranc del Músic. It is an old farmhouse with annexed buildings. The house consists of a ground floor, an upper floor and an attic. It is built of uncut local stone jointed with mud, and the masonry is left exposed. The main facade is almost completely ruined, so the original arrangement of its openings is not known. The roof was gabled and is now practically collapsed. References External links Casa del Músic on the website of the Institut Cartogràfic de Catalunya
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,009
Q: Can you add multiple values to one key in an Elixir map? I have a map of key value pairs that pair a specific type of drone with the type of photoshoot. Here is an example: %{ :real_estate => "Normal", :nature => "FPV", :wedding => "Cinewhoop" } However, for the :nature scenario, I also would like to use the "Normal" drone in addition to the "FPV" option. In other words, given :nature, I want to return "Normal" and "FPV". Is there a way to add multiple values to a key value pair? Or is there a different Elixir method that would suit this problem better? I looked through the docs and didn't find an option to do this. A: Maps in Elixir don't allow duplicate keys. Values in maps can also be lists (or any term), so you can always replace the value in the pair with a list of values if that matches your use case: %{real_estate: "Normal", nature: ["Normal", "FPV"], wedding: "Cinewhoop"} Lists of key-value pairs, such as keyword lists, do allow for duplicate keys: [real_estate: "Normal", nature: "FPV", wedding: "Cinewhoop", nature: "Normal"] In the end, though, how you choose to represent your data really comes down to how you intend to use it. A: Or to give you an answer using the same syntax as your original example, this would work: %{ :real_estate => "Normal", :nature => ["Normal","FPV"], :wedding => "Cinewhoop" } Simply use a list for the value rather than a single string.
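A follow-up example (a minimal sketch of ours, not from the original answers; the drones variable name is just for illustration): if you build such a multi-valued map up incrementally, Map.update/4 handles both the missing-key and the existing-key cases in one call. It takes a default value to insert when the key is absent and an update function to apply when the key is present:

drones = %{:real_estate => "Normal", :wedding => "Cinewhoop"}

# :nature is absent, so the default list ["FPV"] is inserted
drones = Map.update(drones, :nature, ["FPV"], fn values -> ["FPV" | values] end)
# => %{:real_estate => "Normal", :wedding => "Cinewhoop", :nature => ["FPV"]}

# :nature now exists, so the update function prepends to the existing list
drones = Map.update(drones, :nature, ["Normal"], fn values -> ["Normal" | values] end)
# => %{:real_estate => "Normal", :wedding => "Cinewhoop", :nature => ["Normal", "FPV"]}

If you go with the keyword-list representation instead, Keyword.get_values(keyword_list, :nature) returns every value stored under :nature.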
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,031
\section{Introduction} The discovery of compact stellar objects, such as the X-ray pulsar Her X-1, the millisecond pulsar SAX J1808.4-3658 and the X-ray sources 4U 1820-30 and 4U 1728-34, which are regarded as probable strange star candidates, has led to critical studies of relativistic models of such stellar configurations [1-10]. There are several astrophysical as well as cosmological situations where one needs to consider the equation of state of matter involving matter densities of the order of $10^{15} \;g \; cm^{-3}$ or higher, exceeding the nuclear density. The conventional approach of obtaining models of relativistic stars in equilibrium relies heavily on the availability of definite information about the equation of state of their matter content. However, the equation of state of the matter inside a superdense strange star is at present not known. In this context Vaidya-Tikekar \cite{vt} and Tikekar \cite{t} have shown that, in the absence of definite information about the equation of state of the matter content of a stellar configuration, the alternative approach of prescribing a suitable {\it ansatz} geometry for its interior physical 3-space leads to simple, easily tractable, physically viable models of such stars. Relativistic models of superdense stars based on different solutions of Einstein's field equations, obtained by using the Vaidya-Tikekar approach of assigning different geometries to the physical 3-spaces of such objects, have been studied by several workers \cite{mpd,th,tj,jt}. Pant and Sah \cite{ps} obtained a class of relativistic static non-singular analytic solutions in isotropic form describing the space-time of a static spherically symmetric distribution of matter. The solution has been found to lead to a physically viable causal model of a neutron star with a maximum mass of $4 M_{\odot}$. In this paper we discuss the class of solutions of the relativistic field equations obtained in Ref. \cite{ps} and, using numerical procedures, examine the physical plausibility of several models of a class of neutron stars, in order to explore the possibility of using the solution to describe the interior of a compact star. It is then possible to estimate the radius of a star when its mass is known, and to determine the matter density at the boundary surface and at the centre of a superdense star for the prescribed geometry. The plan of the paper is as follows: in sec. 2 the relevant relativistic field equations are set up and their solution is discussed. In sec. 3 several features of physical relevance are reported. In sec. 4, stellar models are discussed in the light of observational stellar mass data for different values of the parameters $\lambda$, $k$, $A$ and $R$. Finally, in sec. 5, we give a brief discussion. \section{Field Equation and Solution} Einstein's field equation is \begin{equation} R_{\mu \nu} -\frac{1}{2} g_{\mu \nu} R = 8 \pi G \; T_{\mu \nu} \end{equation} where $g_{\mu \nu}$, $R$, $R_{\mu \nu}$ and $T_{\mu \nu}$ are the metric tensor, the Ricci scalar, the Ricci tensor and the energy momentum tensor respectively. We use the space-time metric in the isotropic form \begin{equation} ds^2= e^{\nu(r)} dt^2-e^{\mu(r)}(dr^2+r^2d\Omega^2) \end{equation} with \begin{equation} d\Omega^2=d\theta^2+ \sin^2\theta \; d\phi^2, \end{equation} where $(r,\theta,\phi)$ are isotropic spherical polar coordinates. In what follows we use units with $8\pi G=1$ and $c=1$.
The energy momentum tensor for a spherical distribution of matter in the form of a perfect fluid in equilibrium is given by \begin{equation} T^{\mu}_{\;\;\nu} = {\rm diag} \; ( \rho, -p, -p, -p) \end{equation} where $\rho$ and $p$ are the energy density and the fluid pressure of the matter respectively. Using the space-time metric given by eq.(2), Einstein's field equation (1) gives the following equations: \begin{equation} \rho=-e^{-\mu}\left(\mu''+\frac{\mu'^2}{4}+\frac{2\mu'}{r} \right) \end{equation} \begin{equation} p=e^{-\mu}\left(\frac{\mu'^2}{4}+\frac{\mu'}{r}+\frac{\mu'\nu'}{2}+\frac{\nu'}{r}\right) \end{equation} \begin{equation} p=e^{-\mu}\left(\frac{\mu''}{2}+\frac{\nu''}{2}+\frac{\nu'^2}{4}+\frac{\mu'}{2r}+\frac{\nu'}{2r}\right). \end{equation} Now, the pressure isotropy condition obtained from eqs.(6) and (7) leads to the following relation between the metric variables $\mu$ and $\nu$: \begin{equation} \nu''+\mu''+\frac{\nu'^2}{2}-\frac{\mu'^2}{2}-\mu'\nu'-\frac{1}{r}\left(\nu'+\mu'\right)=0 \end{equation} This is a second-order differential equation which permits a solution \cite{ps} as follows: \begin{equation} e^{\frac{\nu}{2}}=A \left(\frac{1-k\alpha}{1+k\alpha } \right), \;\;\; \;\;\; e^{\frac{\mu}{2}}=\frac{(1+k\alpha)^2}{1+\frac{r^2}{R^2}} \end{equation} where $R$, $\lambda$, $k$ and $A$ are arbitrary constants. In the above we denote \begin{equation} \alpha(r)=\sqrt{\frac{1+\frac{r^2}{R^2}}{1+ \lambda \frac{ r^2}{R^2}}}. \end{equation} We observe that the geometry of the 3-space with metric \begin{equation} d\sigma^2=\frac{dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)}{1+\frac{r^2}{R^2}} \end{equation} is that of a 3-sphere immersed in a 4-dimensional Euclidean space. Accordingly, the geometry of the physical space obtained at the $t=\text{constant}$ section of the space-time is given by \begin{equation} ds^2=A^2\frac{(1-k\alpha)^2}{(1+k\alpha)^2}dt^2-\frac{(1+k\alpha)^4}{1+\frac{r^2}{R^2}}(dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2)) \end{equation} where $\alpha(r)$ is given by eq.(10). Hence the geometry of the 3-space obtained at a $t=\text{constant}$ section of the space-time metric (12) is a deviation from the spherical 3-space, and the parameter $k$ is a geometrical parameter measuring the inhomogeneity of the physical space. With $k=0$, the space-time metric (12) degenerates into that of Einstein's static universe, which is filled with matter of uniform density. The space-time metric of Pant and Sah \cite{ps} is a generalization of the Buchdahl solution, and the physical 3-space associated with it has the same feature. For $\lambda=0$, the solution reduces to that obtained by Buchdahl, which is an analog of a classical polytrope of index 5. However, for $\lambda>0$, the solution corresponds to finite boundary models. Pant and Sah \cite{ps} obtained a class of non-singular analytic solutions of the general relativistic field equations for a static spherically symmetric material distribution which is matched with the empty Schwarzschild exterior space-time. In this paper we study the physical properties of compact objects taking different values of $R$, $\lambda$, $k$ and $A$ as permitted by the field equations. Using the solution given by eq.(9) in eqs.(5)-(7), one obtains the explicit expressions for the energy density and the fluid pressure as follows: \begin{equation} \rho=\frac{12(1+\lambda k\alpha^5)}{R^2(1+k\alpha)^5}, \end{equation} \begin{equation} p=\frac{4(\lambda k^2\alpha^6-1)}{R^2(1+k\alpha)^5(1-k\alpha)}. \end{equation}
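For orientation we evaluate these expressions at the centre (this evaluation is ours; it follows at once from eqs.(13) and (14)): at $r=0$ we have $\alpha=1$, so that \[ \rho_{c}=\frac{12(1+\lambda k)}{R^2(1+k)^5}, \qquad p_{c}=\frac{4(\lambda k^2-1)}{R^2(1+k)^5(1-k)}, \] and a positive central pressure accordingly requires either $\lambda k^2>1$ with $k<1$, or $\lambda k^2<1$ with $k>1$, in line with the two cases for $p \geq 0$ noted in the next section.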
The exterior Schwarzschild line element is given by \begin{equation} ds^2= \left( 1 - \frac{2m}{r_o} \right) dt^2 - \left( 1 - \frac{2m}{r_o} \right)^{-1} dr_o^2 - r_o^2 (d\theta^2 + \sin^2 \theta d\phi^2) \end{equation} where $m$ represents the mass of the spherical object. The above metric can be expressed in the isotropic form \cite{jvn} \begin{equation} ds^2=\left(\frac{1-\frac{m}{2r}}{1+\frac{m}{2r}}\right)^2dt^2-\left(1+\frac{m}{2r}\right)^4(dr^2+r^2d\Omega^2) \end{equation} using the transformation $r_o= r \left(1+ \frac{m}{2r} \right)^2$, which relates the Schwarzschild radial coordinate $r_o$ to the isotropic radial coordinate $r$. This form of the Schwarzschild metric will be used here to match at the boundary with the interior metric given by eq. (12). \section{Physical properties of a compact star} The solution given by eq.(9) is useful for studying the physical features of a compact star in a general way, which are outlined as follows: (1) In this model, $\rho$ and $p$ are determined using eqs.(13) and (14). We note that $\rho$ is obviously positive for any positive $\lambda$ and $k$, while $p\geq 0$ leads to two different cases: (i) $\lambda>1/k^2\alpha^6$ with $k<1/\alpha$ and (ii) $\lambda<1/k^2\alpha^6$ with $k>1/\alpha$. (2) At the boundary of the star ($r=b$), the interior solution should be matched with the isotropic form of the Schwarzschild exterior solution, i.e., \begin{equation} e^{\frac{\nu}{2}}|_{r=b}=\left( \frac{1-\frac{m}{2b}}{1+\frac{m}{2b}}\right)\;\;\;\;\;\;\; e^{\frac{\mu}{2}}|_{r=b}=\left(1+\frac{m}{2b}\right)^2 \end{equation} (3) The physical radius of a star, $r_o$, is determined from the radial distance at which the pressure vanishes (i.e., $p(r)=0$ at $r=b$). The physical radius is related to the radial distance $r=b$ through the relation $r_o= b \left(1+ \frac{m}{2b} \right)^2$ \cite{jvn}. (4) The ratio $\frac{m}{b}$ is determined using eqs. (9) and (16), and is given by \begin{equation} \frac{m}{b} = 2 \left( \frac{1+k \alpha}{\sqrt{1+y^2}} -1 \right) \end{equation} where $\alpha$ is evaluated at the boundary $r=b$ and $y=b/R$. (5) The density inside the star should be positive, i.e., $\rho>0$. (6) Inside the star the stellar model should satisfy the condition $dp/d\rho<1$ for sound propagation to be causal. The usual boundary conditions are that the first and second fundamental forms be continuous across the boundary $r=b$. Applying the boundary conditions we determine $A$, which is given by \begin{equation} A=\frac{\left({1-\frac{m}{2b}}\right)}{\left({1+\frac{m}{2b}}\right)} \left( \frac{\sqrt{1+\lambda \frac{b^2}{R^2}} + k \sqrt{1+ \frac{b^2}{R^2}} }{ \sqrt{1+\lambda \frac{b^2}{R^2}} - k \sqrt{1+ \frac{b^2}{R^2} }} \right) \end{equation} Equating eqs.(9) and (16) at the boundary $(r=b)$, we get an eighth-order polynomial equation in $y=\frac{b}{R}$: \begin{eqnarray} && [ (1+A)^4+k^4(1-A)^4-8(1+A)^2+16-2k^2(1-A^2)^2-8k^2(1-A)^2 ] \nonumber \\ && +\, [2\lambda(1+A)^4-16\lambda(1+A)^2 -8(1+A)^2+32(1+\lambda)-2k^2(1-A^2)^2(1+\lambda) \nonumber \\ && \quad -8k^2(2+\lambda)(1-A)^2+2k^4(1-A)^4]y^2 \nonumber \\ && +\, [\lambda^2(1+A)^4-8\lambda^2(1+A)^2-8\lambda(1+A)^2+(1+4\lambda+\lambda^2)-2\lambda k^2(1-A^2)^2 \nonumber \\ && \quad -8k^2(1-A)^2(1+2\lambda)+k^4(1-A)^4]y^4- [8\lambda^2(1+A)^2-32(1+\lambda)-8\lambda k^2(1-A)^2]y^6 \nonumber \\ && +\,16\lambda^2 y^8=0 \end{eqnarray} where $\lambda$, $k$ and $A$ are constants. Imposing the condition that the pressure (14) vanishes at the boundary gives $\lambda k^2 \alpha^6(b) = 1$, i.e. $\alpha^2(b)=\frac{1+y^2}{1+\lambda y^2}=(\lambda k^2)^{-1/3}$, and solving for $y$ we obtain \begin{equation} y=\sqrt{\frac{1-\left(\lambda k^2\right)^{1/3}}{\left(\lambda k^2\right)^{1/3}-\lambda}}. \end{equation}
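As a quick numerical check (ours, not part of the original analysis): for $\lambda=0.1318$ and $k=2.2268$, one finds $\lambda k^2 \approx 0.654$ and $(\lambda k^2)^{1/3} \approx 0.868$, so that eq.(21) gives $y \approx \sqrt{0.132/0.736} \approx 0.42$; this is consistent with table 1 below, in which the run of $\frac{dp}{d\rho}$ for these parameter values ends at $r \approx 0.42\,R$.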
Thus, the size of a star is determined by $k$ and $\lambda$. It is evident that a real $y$ is permitted when (i) $k>\lambda$ with $\lambda k^2<1$ (so that necessarily $\lambda<1$), or (ii) $k<\lambda$ with $\lambda k^2>1$ (so that necessarily $\lambda>1$). Using eqs.(20) and (21), a polynomial equation in $\lambda$, $k$ and $A$ is obtained. Although eq.(20) is a polynomial of degree eight, we note that only one realistic solution for $y$ is obtained in the relevant domains of the values of any pair of the parameters $A$, $k$ and $\lambda$. Subsequently the other parameters may be determined. For example, (i) when $A=2$, we found that $\lambda$ and $k$ satisfy the inequalities $2.9\leq k \leq5$ and $1.4877\times 10^{-6}\leq \lambda \leq0.04$; (ii) when $A=4$, the range of permitted values is $1.7\leq k \leq 2.3$ and $0.0185 \leq \lambda\leq 0.0653$. However, for a given $\lambda$ we note, e.g., (i) that for $\lambda=0.15$ the permitted values of $A$ lie in the range $3.6< A <5.6$, and (ii) that for $\lambda=0.1318$ one obtains a realistic solution for $3.5< A <5.8$. The square of the acoustic velocity, $\frac{dp}{d\rho}$, takes the form: \begin{equation} \frac{dp}{d\rho}=\frac{6 \lambda k \alpha^5(1-k \alpha)(1+k \alpha)-5(1-k \alpha)(\lambda k^2 \alpha^6-1)+(\lambda k^2 \alpha^6-1)(1+k \alpha)}{15(1-k \alpha)^2(\lambda \alpha^4(1+k \alpha)-(1+ \lambda k \alpha^5))}. \end{equation} The variation of $\frac{dp}{d\rho}$ for $\lambda = 0.1318$ and $k=2.2268$ is displayed in table 1. It is evident that $\frac{dp}{d\rho}$ is maximum at the centre and gradually decreases outward. It is also found that the constraint $\frac{dp}{d\rho}<1$ is always maintained inside the star, which ensures causality. In table 2, the variation of $\frac{dp}{d\rho}$ from the centre to the boundary for different values of $\lambda$ and $k$ is presented. It is evident that as $\lambda$ increases, $\frac{dp}{d\rho}$ at the centre decreases. The variation of the central density with $\lambda$ and $k$ is displayed in tables (3) and (4) for $A=2$ and $A=4$ respectively. It is evident that the central density ($\rho_{c}$) decreases with an increase in $\lambda$. Thus stellar models with larger $\lambda$ accommodate a denser compact object compared to those for lower values of $\lambda$ and $k$. The variation of pressure and density with radial distance, obtained employing eqs.(13) and (14), is shown in figs.(1)-(4). Since it is not possible to express the pressure in terms of the density in closed form, we study the behaviour of pressure and density inside the star numerically. In fig.(5) the variation of pressure with density is plotted for different model parameters. \begin{table}[ht!] \begin{center} \begin{tabular}{|c|c|} \hline $r$ in units of $R$ & $\frac{dp}{d\rho}$ \\ \hline 0 & 0.521 \\ \hline 0.1 & 0.518 \\ \hline 0.2 & 0.513 \\ \hline 0.3 & 0.504 \\ \hline 0.4 & 0.496 \\ \hline 0.41 & 0.495 \\ \hline 0.42 & 0.495 \\ \hline \end{tabular} \caption{Variation of $\frac{dp}{d\rho}$ with radial distance $r$ for a given $\lambda=0.1318$ and $k=2.2268$} \end{center} \end{table} \begin{table}[ht!]
\begin{center} \begin{tabular}{|c|c|c|c|} \hline & $\frac{dp}{d\rho}$ & $\frac{dp}{d\rho}$ & $\frac{dp}{d\rho}$ \\ $r$ & for & for & for \\ in the unit of $R$ & $\lambda=0.1211$ \& $k=2.2$ & $\lambda=0.1318$ \& $k=2.2268$ & $\lambda=0.15$\& $k=2.2681$ \\ \hline 0 & 0.524 & 0.521 & 0.520 \\ \hline 0.1 & 0.521 & 0.518 & 0.520 \\ \hline 0.2 & 0.514 & 0.513 & 0.513 \\ \hline 0.3 & 0.504 & 0.504 & 0.508 \\ \hline 0.4 & 0.494 & 0.496 & \\ \hline \end{tabular} \caption{Variation of $\frac{dp}{d\rho}$ with radial distance $r$ for different values of $\lambda$ and $k$.} \end{center} \end{table} \begin{table}[ht!] \begin{center} \begin{tabular}{|c|c|c|} \hline $\lambda$ & $k$ & $\rho_c$ in the unit of $\frac{1.9 \times 10^{15}}{ R^{2}} \; kg/m^3$ \\ \hline $1.4877\times 10^{-6}$ & 2.9 & 0.0133 \\ \hline $1.3836\times 10^{-5}$ & 3 & 0.0117 \\ \hline 0.0048 & 4 & 0.0039 \\ \hline 0.0400 & 5 & 0.0019 \\ \hline \end{tabular} \caption{Variation of central density for $A=2$ for different values of $\lambda$ and $k$.} \end{center} \end{table} \begin{table}[ht!] \begin{center} \begin{tabular}{|c|c|c|} \hline $\lambda$ & $k$ &$\rho_c$ in the unit of $\frac{1.9 \times 10^{15}}{ R^{2}} \; kg/m^3$ \\ \hline 0.0185 & 1.7 & 0.0863 \\ \hline 0.0289 & 1.8 & 0.0734 \\ \hline 0.0432 & 1.9 & 0.0633 \\ \hline 0.0876 & 2.1 & 0.0496 \\ \hline 0.1211 & 2.2 & 0.0453 \\ \hline 0.15 & 2.268 & 0.0431 \\ \hline \end{tabular} \caption{Variation of central density for $A=4$ for different values of $\lambda$ and $k$} \end{center} \end{table} \input{epsf} \begin{figure} \epsffile{pressure.eps} \caption{ Variations of pressure with radial distance (in the unit of $R$) is plotted with solid line for $\lambda=0.15$, dashed line for $\lambda=0.1318$ and broken line for $\lambda=0.1211$.} \end{figure} \input{epsf} \begin{figure} \epsffile{density.eps} \caption{ Variations of density with radial distance (in the unit of $R$ ) is plotted with solid line for $\lambda=0.15$, dashed line for $\lambda=0.1318$ and broken line for $\lambda=0.1211$.} \end{figure} \input{epsf} \begin{figure} \epsffile{fig3.eps} \caption{ Variations of pressure with radial distance (in the unit of $R$ ) is plotted with solid line for $A=4$, broken line for $A=3$ and dashed line for $A=2$.} \end{figure} \input{epsf} \begin{figure} \epsffile{fig4.eps} \caption{ Variations of density with radial distance (in the unit of $R$ km ) is plotted with solid line for $A=4$, broken line for $A=3$ and dashed line for $A=2$.} \end{figure} \input{epsf} \begin{figure} \epsffile{fig1.eps} \caption{ Variations of pressure with density is plotted with green line for $\lambda=0.0876$, red line for $\lambda=0.1318$, solid line for $\lambda=0.15$ and dahed line for $\lambda=0.165633$.} \end{figure} \begin{table}[ht!] \begin{center} \begin{tabular}{|c|c|c|} \hline $A=2$ & M (mass) in $M_{\odot}$ & $r_o$ (radius) in km \\ \hline $\lambda=1.48\times 10^{-6}$, $k=2.9$ & 3.61 & 12.087 \\ \hline $\lambda=1.17\times 10^{-5}$, $k=2.99$ & 2.69 & 9.250 \\ \hline $\lambda=1.38\times 10^{-5}$, $k=3.0$ & 2.63 & 9.067 \\ \hline $\lambda=3.93\times 10^{-2}$, $k=4.99$ & 0.12 & 0.622 \\ \hline \end{tabular} \caption{Variation of mass and radius of a compact star for different values of $\lambda$ and $k$ for $R=0.2$ km.} \end{center} \end{table} \begin{table}[ht!] 
\begin{center} \begin{tabular}{|c|c|c|} \hline $\lambda$ & $k$ & $\rho_c$ in the unit of $\frac{1.9 \times 10^{15}}{ R^{2}} \; kg/m^3$ \\ \hline 0.0185 & 1.7 & 0.0863 \\ \hline 0.0289 & 1.8 & 0.0734 \\ \hline 0.0432 & 1.9 & 0.0633 \\ \hline 0.0876 & 2.1 & 0.0496 \\ \hline 0.1211 & 2.2 & 0.0453 \\ \hline 0.15 & 2.268 & 0.0431 \\ \hline \end{tabular} \caption{Variation of central density for $A=4$ for different values of $\lambda$ and $k$} \end{center} \end{table} \input{epsf} \begin{figure} \epsffile{pressure.eps} \caption{Variation of pressure with radial distance (in units of $R$), plotted with a solid line for $\lambda=0.15$, a dashed line for $\lambda=0.1318$ and a broken line for $\lambda=0.1211$.} \end{figure} \input{epsf} \begin{figure} \epsffile{density.eps} \caption{Variation of density with radial distance (in units of $R$), plotted with a solid line for $\lambda=0.15$, a dashed line for $\lambda=0.1318$ and a broken line for $\lambda=0.1211$.} \end{figure} \input{epsf} \begin{figure} \epsffile{fig3.eps} \caption{Variation of pressure with radial distance (in units of $R$), plotted with a solid line for $A=4$, a broken line for $A=3$ and a dashed line for $A=2$.} \end{figure} \input{epsf} \begin{figure} \epsffile{fig4.eps} \caption{Variation of density with radial distance (in units of $R$), plotted with a solid line for $A=4$, a broken line for $A=3$ and a dashed line for $A=2$.} \end{figure} \input{epsf} \begin{figure} \epsffile{fig1.eps} \caption{Variation of pressure with density, plotted with a green line for $\lambda=0.0876$, a red line for $\lambda=0.1318$, a solid line for $\lambda=0.15$ and a dashed line for $\lambda=0.165633$.} \end{figure} \begin{table}[ht!] \begin{center} \begin{tabular}{|c|c|c|} \hline $A=2$ & M (mass) in $M_{\odot}$ & $r_o$ (radius) in km \\ \hline $\lambda=1.48\times 10^{-6}$, $k=2.9$ & 3.61 & 12.087 \\ \hline $\lambda=1.17\times 10^{-5}$, $k=2.99$ & 2.69 & 9.250 \\ \hline $\lambda=1.38\times 10^{-5}$, $k=3.0$ & 2.63 & 9.067 \\ \hline $\lambda=3.93\times 10^{-2}$, $k=4.99$ & 0.12 & 0.622 \\ \hline \end{tabular} \caption{Variation of mass and radius of a compact star for different values of $\lambda$ and $k$ for $R=0.2$ km.} \end{center} \end{table} \begin{table}[ht!]
For example, if $R= 2.504$ km, one obtains a compact object with radius $r_o=7.791$ km. In the later case we note that the ratio of density at the boundary to that at the centre is very high (0.99). The compactness of the star is 0.189 which is permitted for the set of parameters $\lambda = 0.0393$ and $k=4.99$ with $A=2$. {\it Model 2} : We consider X-ray pulsar 4U 1700- 37 which is characterized by mass $M = 2.44 \; M_{\odot}$ \cite{tab}. We note that for $A=4$, $\lambda=0.1211$ and $k=2.2$, the corresponding radius of the above star is $ r_o = 8.197$ km with $R= 1.819$ km. The ratio of density at the boundary to that at the centre for the star in this case is 0.820. However, for the set of values $A=2$, $\lambda=0.1656$ and $k=2.3$, a compact object is permitted with radius $r_o = 8.110$ km when $R=0.135$ km. The ratio of density at the boundary to that at the centre for the star in this case is 0.0003. Another stellar model is obtained for a set of values with $A=2$, $\lambda = 0.0393$ and $k=4.99$, where the ratio of density at the boundary to that at the centre is 0.99. In the later case the values is more compare to that one obtains taking $A=4$. However both the cases permits a star with compactness factor $u=0.3$. {\it Model 3} : We consider a neutron star J1518+4904 which is characterized by mass $M= 0.72 \; M_{\odot}$ \cite{tab}. For $\lambda =0.1211$, $k=2.2$ and $A=4$, the radius of the star estimated here is $r_o= 2.419 $ km with $R=0.537$ km. The ratio of density at the boundary to that at the centre for the star is 0.82. In this case the compactness factor of the star is $u=0.3$. For $A=2$ we note the following : (i) when $\lambda = 1.48 \times 10^{-6}$ and $k=2.9$, it admits a star with radius $r_o=2.4$ km for $R=0.04$ km and (ii) when $\lambda = 0.0393$ and $k=4.99$, it admits a star with radius for $r_o=3.816$ km for $R=1.226$ km. The ratio of density at the boundary to that at the centre for the star in the first case is 0.0003 and that in the later case is 0.988. However, the compactness factor for the former is 0.3 which is higher than that in the second case (0.189). {\it Model 4} : We consider a neutron star J1748-2021 B which is characterized by mass $M= 2.74 \; M_{\odot}$ \cite{tab}. For $A=4$, $\lambda = 0.1318$ and $k=2.2268$, a star of radius $ r_o = 9.281$ km with $R=2.247$ km ids permmited . The ratio of density at the boundary to that at the centre for the star is 0.856. The compactness factor is $u=0.3$. In the other case one obtains a star with radius $ r_o = 8.467$ km with $R=3.406$ km when $\lambda = 0.1656$ and $k=2.3$. A star of smaller size is thus permitted in the later case with compactness factor (0.32) than that of the formal model. For $A=2$, stellar model admits a star with radius $r_o=13.154 $ km for $R=2.74 $ km, $\lambda = 0.138 \times 10^{-5}$ and $k=3$. However a smaller star with radius $r_o=8.380$ km is permitted here when $R=0.181$, km with $\lambda = 1.17 \times 10^{-5}$ and $k=2.99$. The ratio of density at the boundary to that at the centre in the first case is 0.0017 which is higher than the later (0.0015). The compactness factor in the former model is 0.20 which is lesser than the later case 0.32. \section{ Discussions : } In this paper, we present general relativistic solution for a class of compact stars which are in hydrostatic equilibrium considering the isotropic form for a static spherically symmetric matter distribution. 
The general relativistic solution obtained by Pant and Sah \cite{ps} is employed here to study compact objects. We use isotropic form of the exterior Schwarzschild solution to match at the boundary of the compact object. The stellar models discussed here contains four parameters $\lambda$, $A$, $k$ and $R$. The observed mass of a star determines $R$ for known values of $\lambda$, $A$, $k$. We note the following: (i) In fig. 1, variation of pressure with radial distance is plotted for different $\lambda$ for given values of $A$ and $k$. The figures show that as $\lambda$ increases pressure decreases inside the star. (ii) In fig. 2, radial variation of density is plotted for different $\lambda$. We note higher density for lower $\lambda$. (iii) The variation of $\frac{dp}{d\rho}$ inside the star for a given set of values of $\lambda$ and $k$ are shown in table 1. The causality condition is obeyed inside the star and $\frac{dp}{d\rho}$ is maximum at the center which however found to decrease monotonically radially outward. For different $\lambda$ and $k$, values of $\frac{dp}{d\rho}$ is also shown in table 2. It is evident that $\frac{dp}{d\rho}$ decreases for an increase in $\lambda$ and $k$ values. (iv) Variation of central density for different values of $\lambda$ and $k$ with $A=2$ and $A=4$ are presented separately in tables (3) and (4) respectively. We note that the central density decreases as the value for the pair ($\lambda$ and $k$) increases. From tables (3) and (4) similar tendency for central density is found to exist when $A$ is increased. As the isotropic Schwarzschild metric is singular at $m=2b$, the model considered here may be useful to represent a strange star with $m \neq 2b$ or $m<2b$. (v) In tables (5) and (6), the mass of a star with its maximum size is shown for different values of $\lambda$ and $k$ taking density of a star $\rho_b=2\times 10^{15}\; gm/cc$ at the boundary. We obtain here a class of relativistic stars for different values of $\lambda$, $A$, $k$ and $R$. (vi) The density profile of a given star with different values of $\lambda$ and $k$ is shown in table 7. As $\lambda$ increases the ratio of density at the boundary to that at the center is found to increase accommodating more compact stars. (vii) In fig. 3, variation of pressure with radial distance is plotted for different values of $A$. It is evident that as $A$ increases pressure decreases. (viii) In fig. 4, variation of density with radial distance is plotted for different $A$. We note that as $A$ is increased both the density and the pressure decreases. But the size of a star increases with an increase in $A$ thereby accommodating more compact stars. (ix) In fig. 5, variation of pressure with density is plotted for different $\lambda$. We note that for a given density pressure is more for higher $\lambda$, this leads to a star with higher central density. In sec. 4, we present models of the neutron stars that are tested for some known compact objects. As the equation of state is not known we analyze the star for known geometry considered here. The radii of the compact stars namely, neutron stars are also estimated here for known mass with a given $R$. The parameter $R$ permits a class of compact objects, some of which are relevant observationally. Considering observed masses of the compact objects namely, X-ray pulsars Her X-1, 4U 1700-37 and neutron stars J1518+4904, J1748-2021 B we analyze the interior of the star. 
We obtain a class of compact star models for various $R$ with given values of $k$, $\lambda$ and $A$. The stellar models obtained here can accommodate highly compact objects. However, a detailed study of the stellar composition at high pressure and density will be taken up elsewhere. {\bf{ \it Acknowledgement :}} BCP would like to acknowledge fruitful discussions with Mira Dey and Jisnu Dey while visiting IUCAA, Pune. The authors would like to thank IUCAA, Pune and IRC, Physics Department, North Bengal University (NBU) for providing facilities to complete the work. BCP would like to thank the University Grants Commission, New Delhi for financial support. RT is thankful to the UGC for the award of an Emeritus Fellowship. The authors would like to thank the referee for constructive criticism. \pagebreak
{ "redpajama_set_name": "RedPajamaArXiv" }
8,001
{"url":"https:\/\/physics.stackexchange.com\/questions\/321779\/finite-difference-method-the-correct-formula","text":"# Finite difference method: The correct formula\n\nSuppose I have a uniform 1D grid with spacing $\\Delta x$ and want to solve for example the Schr\u00f6dinger equation on this grid. What is the correct approximation for the second order derivative? Is it\n\n$$\\frac{d^2\\psi}{dx^2} = \\frac{1}{2\\Delta x^2}(\\psi_{i+1} + \\psi_{i-1} - 2\\psi_{i})$$\n\nor is it\n\n$$\\frac{d^2\\psi}{dx^2} = \\frac{1}{\\Delta x^2}(\\psi_{i+1} + \\psi_{i-1} - 2\\psi_{i})$$\n\nIt would seem that I find both in literature. I personally think that the first is the correct one since it agrees with a second order Taylor expansion. On the other hand if I try to insert a plane wave of the form $\\psi_{k}(x_{i}) = \\text{exp}(ikx_{i})$ I obtain only the correct dispersion relation (by Taylor expansion of the cosine) for the second formula. If the last confuses you, see https:\/\/wiki.physics.udel.edu\/phys824\/Discretization_of_1D_Hamiltonian.\n\n\u2022 The second one is correct. Mar 27 '17 at 19:17\n\u2022 Like lemon said, the second one is correct. It is most likely that you've seen the first one because they have included the 1\/2 in front of the Laplacian (from the 1\/2m part). Mar 27 '17 at 19:35\n\nThe second equation is correct. As you suggested, with $$\\psi_{i+1} = \\psi_i + \\psi'_i \\Delta x + \\frac{1}{2} \\psi''_i \\Delta x^2 + \\frac{1}{6} \\psi'''_i \\Delta x^3 + O\\left(\\Delta x^4\\right)$$ and $$\\psi_{i-1} = \\psi_i - \\psi'_i \\Delta x + \\frac{1}{2} \\psi''_i \\Delta x^2 - \\frac{1}{6} \\psi'''_i \\Delta x^3 + O\\left(\\Delta x^4\\right),$$ we have $$\\frac{1}{\\Delta x^2}\\left(\\psi_{i+1} + \\psi_{i-1} - 2\\psi_{i}\\right) = \\frac{\\psi''_i \\Delta x^2 + O\\left(\\Delta x^4\\right)}{\\Delta x^2} = \\psi''_i + O\\left(\\Delta x^2\\right)$$ To leading order, the Hamiltonian acting on $\\psi$ is then $$\\begin{eqnarray} \\left(\\hat{H} \\psi\\right)_i &=& -\\frac{\\hbar^2}{2m} \\psi''_i + U_i \\psi_i \\\\ &=& -\\frac{\\hbar^2}{2m \\Delta x^2} \\left(\\psi_{i+1} + \\psi_{i-1} - 2\\psi_{i}\\right) + U_i \\psi_i \\\\ \\end{eqnarray}$$ As @KaneBilliot said, maybe some references have included that 2 in the denominator with the expression for $\\psi''_i$?\n\n\u2022 So if I want to solve the SE for a non-uniform grid what would the appropriate formula for the second order derivative be? Should I have 1\/2 in front of $x_{i+1}-x_{i-1}$? Mar 27 '17 at 20:47\n\nYour question is essentially asking the finite different coefficients for a particular derivative order and choice of sample points.\n\nWith this tool you can see that your second choice is correct. In general these can be found by applying a Taylor series to each term and working out coefficients that fit.\n\nHowever, in practice this is simply an algorithmic and tedious exercise and would recommend using a table or the tool I linked.\n\nYou can easily derive the correct expression as follows. We can formally write down the Taylor expansion of a function as:\n\n$$\\exp(h D)f(x) = f(x+h)$$\n\nwhere $D$ is the differential operator. Then we can find many possible ways to express the derivative in terms of finite differences by writing the r.h.s. in terms of finite difference operators. E.g. 
we can write:\n\n$$f(x+h) = (1+\\Delta)f(x)$$\n\nwhere $\\Delta$ acts on $f$ as:\n\n$$\\Delta f(x) = f(x+h) - f(x)$$\n\nWe can thus write:\n\n$$\\exp(h D)f(x) = (1+\\Delta)f(x)$$\n\nThis allows you to formally express the differential operator in terms of the finite difference operator.\n\nWhile this will yield a formally correct expression, it will be in terms of forward differences and will thus yield an asymmetric expression. What we want here is a symmetric expression, but this is obtained in just the same way: you just consider the symmetric finite difference expression:\n\n$$\\left[\\exp(h D)-\\exp(-h D)\\right]f(x) = f(x+h) - f(x-h)$$\n\nWe can write this as:\n\n$$\\sinh(h D)f(x) = \\Delta_s f(x)$$\n\nwhere $\\Delta_s$ is the average of the forward and backward finite differences. We can thus formally write:\n\n$$D = \\frac{1}{h}\\operatorname{arcsinh}{\\Delta_s} = \\frac{1}{h}\\left[\\Delta_s - \\frac{\\Delta_s^3}{6 } + \\frac{3 \\Delta_s^5}{40 }-\\frac{5 \\Delta_s^7}{112 }+\\cdots\\right]$$\n\nThe second derivative operator is easily expressed in terms of finite differences by squaring the series; we have:\n\n$$D^2 = \\frac{1}{h^2}\\left[\\Delta _s^2 -\\frac{\\Delta _s^4}{3 }+\\frac{8 \\Delta _s^6}{45 } -\\frac{4 \\Delta _s^8}{35 }+\\cdots\\right]$$\n\nUsing only the first term of this series yields the desired expression:\n\n$$\\Delta_s^2 f(x) = \\frac{1}{2}\\Delta_s \\left[f(x+h)-f(x-h)\\right] = \\frac{1}{4} \\left[f(x+2h)-2f(x)+ f(x-2h)\\right]$$\n\nThen the smallest stepsize $h$ you can take on your grid is $h = \\frac{\\Delta x}{2}$, because the smallest step that appears in this formula is $2 h$. This then yields the approximation:\n\n$$D^2 f(x)\\approx \\frac{f(x+\\Delta x)-2f(x)+ f(x-\\Delta x)}{\\Delta x^2}$$\n\nTo easily find expressions for higher order terms, it's convenient to introduce the shift operator $E$ that acts like:\n\n$$E f(x) = f(x+h)$$\n\nThen we have:\n\n$$\\Delta_s = \\frac{E - E^{-1}}{2}$$\n\nSo, the calculation of $\\Delta_s^2$ given above is nothing more than the binomial expansion of the square of the r.h.s.\n\nThese methods involving formal manipulations of differential operators and finite difference operators are also useful for purely theoretical computations, albeit they'll then yield formal expressions that lack a rigorous mathematical derivation. E.g. suppose that you want to compute an integral from zero to infinity of some function:\n\n$$\\int_0^{\\infty} x^{s-1}f(x) dx$$\n\nand the series expansion coefficients of the integrand are known, we have:\n\n$$f(x) = \\sum_{n=0}^{\\infty}(-1)^n\\frac{c_n}{n!}x^n$$\n\nWe can then write $c_n = E^{n}c_0$, where $E$ is again the shift operator. We thus have:\n\n$$f(x) = \\sum_{n=0}^{\\infty}(-1)^n\\frac{E^n}{n!}x^n c_0 = \\exp(-E x) c_0$$\n\nThe integral can then be formally computed as:\n\n$$\\int_0^{\\infty} x^{s-1}f(x) dx = \\Gamma(s)E^{-s} c_0 = \\Gamma(s) c_{-s}$$\n\nHere one assumes one is allowed to analytically continue the series expansion coefficients in some way; this can be made more rigorous, see here for details. 
But what should be clear is that using suitably defined operators you can get to results much faster, basically by cutting through the mathematical red tape.","date":"2021-12-08 01:16:29","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9216823577880859, \"perplexity\": 202.94445988685484}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-49\/segments\/1637964363420.81\/warc\/CC-MAIN-20211207232140-20211208022140-00395.warc.gz\"}"}
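As a quick sanity check of the accepted answer above, here is a minimal Python sketch (not part of the original thread; the test function sin(x) and the grid sizes are arbitrary choices) that applies the central-difference formula with the 1/dx^2 prefactor and confirms the expected second-order convergence:

import numpy as np

def second_derivative(psi, dx):
    # Central difference (psi_{i+1} + psi_{i-1} - 2 psi_i) / dx^2,
    # i.e. the second formula in the question, with no extra factor 1/2.
    return (psi[2:] + psi[:-2] - 2.0 * psi[1:-1]) / dx**2

for n in (100, 200, 400, 800):
    x = np.linspace(0.0, 2.0 * np.pi, n)
    dx = x[1] - x[0]
    # Second derivative of sin is -sin, so the error is fd + sin.
    err = np.max(np.abs(second_derivative(np.sin(x), dx) + np.sin(x[1:-1])))
    print(f"n={n:4d}  dx={dx:.4e}  max error={err:.3e}")

# Halving dx cuts the error by about a factor of 4, i.e. O(dx^2)
# convergence; the 1/(2 dx^2) variant would instead converge to half
# the true second derivative.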
null
null
{"url":"https:\/\/www.quantumdiaries.org\/tag\/award\/","text":"## Posts Tagged \u2018award\u2019\n\n### Sam Zeller receives DOE award to track neutrinos\n\nFriday, June 8th, 2012\n\nSam Zeller won a DOE Early Career Research Award to support her work on liquid argon neutrino dectectors. Photo: Reidar Hahn\n\nNeutrinos are known for escaping capture. They fly through matter and their different types continuously morph into one another. That elusive, shifting behavior challenges nearly every available tool and capability scientists have to sketch their portraits.\n\nWith better tools come more detailed portraits. Last month, Fermilab scientist Geralyn \u201cSam\u201d Zeller received a 2012 DOE Early Career Research Award to advance a detector technology that will capture neutrinos\u2019 attributes with unprecedented detail. The $2.5 million award, spread over five years, will support a proof-of-principle study towards the construction of multi-kiloton liquid-argon neutrino detectors. \u201cThere are some really important questions we want to answer about how neutrinos behave,\u201d Zeller said. \u201cThe best chance for answering them is to study neutrinos with this exquisite detector.\u201d Liquid-argon detectors are practically photographic in their ability to show what happens when a neutrino hits an argon nucleus. Tracks that the resultant particles leave behind are shown in high resolution, and it\u2019s easy to distinguish the various particle types that arise from the interaction. But information on how neutrinos behave in liquid-argon detectors is sparse. Most of what is known is based on simulations rather than experiment. Also, researchers have typically gathered what they need to know from event displays \u2013 pretty pictures of events that, while useful, are relatively light on quantified information. Zeller, who has been at Fermilab since December 2009, plans to fill the gap with an abundance of new data. The DOE award will support the analysis of neutrino data recently collected by a small (less than 1 ton) liquid-argon detector prototype called ArgoNeuT. In the next few years, Zeller\u2019s team will also generate and analyze neutrino data using Fermilab\u2019s new MicroBooNE detector, a 170-ton liquid-argon detector. Their findings will tell them whether they can get the expected performance out of a detector of much larger scale. They\u2019ll also characterize exactly how neutrinos behave when interacting in argon. \u201cThere\u2019s a big gap in our knowledge of how neutrinos interact,\u201d Zeller said. \u201cWe want better information to inform the design of future detectors.\u201d Zeller\u2019s project leverages the current ongoing U.S. neutrino program with the idea that the community could build, in manageable stages, a liquid-argon detector weighing tens of thousands of tons. Its prodigious size increases scientists\u2019 chance of capturing a neutrino that has changed forms. Combined with its characteristic high precision, the detector would prove invaluable for the proposed Long-Baseline Neutrino Experiment, which will allow scientists to observe neutrino oscillations, as their form-changing is called. It would also be of use for the short-baseline program in looking for a fourth neutrino to add to the family of the known three. If future neutrino experiments go well, scientists may finally have answers to basic questions surrounding the ghostly particle: which neutrino types are the lightest and heaviest, and do they behave the same as their antiparticles? 
The DOE award will fund two postdocs and a dedicated team for the long-baseline program, as well as supporting technical and engineering work. \u201cThere\u2019s an opportunity here because we have these two detectors and the best neutrino beams in the world,\u201d Zeller said. \u201cNow we\u2019re going to try to get as much information out of them as we can.\u201d Leah Hesla ### Brendan Casey receives DOE award for muon research Wednesday, June 6th, 2012 This article first appeared in Fermilab Today on May 29. Brendan Casey was awarded a DOE Early Career Research Award to support his work developing detector technology for the Muon g-2 experiment. Photo: Reidar Hahn Four years ago, Fermilab physicist Brendan Casey began looking for a new research project. Should he join the thousands of physicists working on particle collider experiments at the Large Hadron Collider in Europe? Or should he collaborate with a relatively small group of scientists who wanted to build a new physics experiment at Fermilab to search for hidden subatomic forces? This month, Casey was rewarded for his decision to work on the smaller experiment. The Department of Energy\u2019s Office of Science named Casey a recipient of the 2012 DOE Early Career Research Award. It will support his research on the detector technology for the Muon g-2 experiment with a total of $2.5 million over five years.\n\n\u201cTo be chosen is a great honor,\u201d said Casey. \u201cIt also is an affirmation that the choice of pursuing the Muon g-2 experiment paid off.\u201d\n\nFor this year\u2019s awards, DOE selected 68 researchers from a pool of about 850 applicants based at universities and national laboratories in the United States. Three Fermilab scientists received the award this year: Casey, Tengming Shen and Geralyn \u201cSam\u201d Zeller.\n\nCasey is one of about 50 people working on the Muon g-2 experiment. The collaboration expects to add scientists from new institutions this June.\n\n\u201cWe are recruiting collaborators,\u201d said Casey, who worked on Fermilab\u2019s DZero collider experiment before joining Muon g-2. \u201cWith this award, we\u2019ll be able to expand our research efforts.\u201d\n\nThe DOE grant will pay for part of Casey\u2019s research efforts, fund a postdoctoral associate, support engineering and technical work and contribute to purchasing equipment for the experiment.\n\nThe Muon g-2 collaboration aims to settle a perplexing question that has haunted the particle physics community for more than a decade. Do muons behave as predicted by the highly successful theory known as the Standard Model, or are these particles subject to a mysterious force that changes the particles\u2019 behavior when exposed to a magnetic field?\n\nResults obtained by a previous muon experiment at Brookhaven National Laboratory provided an unexpected but non-conclusive glimpse at the hidden force that might be tugging at the muon, a heavy relative of the electron. But the accelerator at Brookhaven cannot produce enough muons for scientists to make a more precise measurement. 
Hence scientists turned to Fermilab and its Main Injector accelerator.\n\nCasey, who received a Wilson Fellowship in 2007 and became a Fermilab staff scientist in 2011, focuses on the development of the special particle detector that scientists will use to measure the behavior of the muons in a magnetic field.\n\n\u201cWhile we will reuse some of the equipment used in the Brookhaven experiment, we will build the particle detectors from scratch,\u201d said Casey.\n\nCasey is collaborating with scientists and students from Boston University, Northwestern University and the Petersburg Nuclear Physics Institute on developing the experiment\u2019s straw tracking detector, which uses charged wires in long, narrow drift tubes to identify the trajectories of particles.\n\nKurt Riesselmann","date":"2019-04-20 02:49:36","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5031560659408569, \"perplexity\": 3902.135269240314}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-18\/segments\/1555578528481.47\/warc\/CC-MAIN-20190420020937-20190420042937-00275.warc.gz\"}"}
null
null
\section{Introduction} \global\long\def\mathbf{{f}}{\mathbf{{f}}} \global\long\def\mathbf{{g}}{\mathbf{{g}}} \global\long\def\mathbf{{h}}{\mathbf{{h}}} \global\long\def\mathbf{{s}}{\mathbf{{s}}} \global\long\def\mathbf{V}{\mathbf{V}} \global\long\def\mathbf{x}{\mathbf{x}} \global\long\def\v#1{\mathbf{#1}} \global\long\def\mathbf{q}{\mathbf{q}} \global\long\def\mathbf{w}{\mathbf{w}} \global\long\defD{D} \global\long\defd{d} \global\long\def\mathbf{n}{\mathbf{n}} \global\long\def\mathbf{q}{\mathbf{q}} \global\long\def\mathbf{u}{\mathbf{u}} \global\long\def\mathbf{v}_{i}{\mathbf{v}_{i}} \global\long\def\mathbf{v}{\mathbf{v}} \global\long\def\v u{\v u} In this note, we are interested in the numerical resolution of the following system of conservation laws \begin{equation} \partial_{t}\v u+\partial_{x}\mathbf{{f}}(\v u)=0,\label{eq:conslaw} \end{equation} where the unknown is the vector of conservative variables $\v u(x,t)\in\mathbb{R}^{m}$, depending on a space variable $x$ and a time variable $t\geq0$. In the first part of the paper we consider the case with no boundaries ($x \in\mathbb{R}$). In Section \ref{sec:bc}, we will discuss the case with boundaries ($x \in [0,1]$). The conservative variables satisfy an initial condition \begin{equation} \v u(x,0)=\mathbf{v}(x).\label{eq:cond_ini} \end{equation} The flux $\mathbf{{f}}$ is a non-linear function of $\v u$. The system of conservation laws (\ref{eq:conslaw}) is assumed to be hyperbolic: for any vector of conservative variables $\v u$, the jacobian of the flux \begin{equation} \v A(\v u)=\mathbf{{f}}'(\v u)\label{eq:defjacob} \end{equation} is diagonalizable with real eigenvalues. Jin and Xin (\citet{jin1995relaxation}) have proposed an approximation of (\ref{eq:conslaw}) of the following form \begin{eqnarray} \partial_{t}\mathbf{w}_{\varepsilon}+\partial_{x}\v z_{\varepsilon} & = & 0,\label{eq:relax1}\\ \partial_{t}\v z_{\varepsilon}+\lambda^{2}\partial_{x}\mathbf{w}_{\varepsilon} & = & \frac{1}{\varepsilon}(\mathbf{{f}}(\mathbf{w}_{\varepsilon})-\v z_{\varepsilon}),\label{eq:relax2} \end{eqnarray} with the initial condition \begin{equation} \mathbf{w}_{\varepsilon}(x,0)=\mathbf{v}(x),\quad\v z_{\varepsilon}(x,0)=\mathbf{{f}}(\mathbf{v}(x)).\label{eq:cond_ini_relax} \end{equation} In this formulation $\varepsilon$ is a small positive parameter and $\lambda$ a constant positive velocity. If $\lambda$ is large enough (``subcharacteristic'' condition), it can be proved that $\mathbf{w}_{\varepsilon}$ tends to $\v u$, the entropy solution of (\ref{eq:conslaw})-(\ref{eq:cond_ini}), when $\varepsilon$ tends to zero (\citet{jin1995relaxation}). The advantage of the Jin-Xin formulation is that the partial differential equations are now linear with constant coefficients and the non-linearity is concentrated in a simple source term. A simple way to solve numerically the Jin-Xin system is to use a time-splitting algorithm. For advancing by one time step of size $\Delta t$, one first solves \begin{eqnarray} \partial_{t}\mathbf{w}+\partial_{x}\v z & = & 0,\label{eq:relax1-1}\\ \partial_{t}\v z+\lambda^{2}\partial_{x}\mathbf{w} & = & 0,\label{eq:relax2-1} \end{eqnarray} for a duration of $\Delta t$ (free transport step). Then, for the same duration, one solves the system of differential equations (relaxation step) \begin{eqnarray} \partial_{t}\mathbf{w} & = & 0,\label{eq:relax1-2}\\ \partial_{t}\v z & = & \frac{1}{\varepsilon}(\mathbf{{f}}(\mathbf{w})-\v z).\label{eq:relax2-2} \end{eqnarray} Both sub-steps admit a simple explicit solution. 
Indeed, if we define the functional translation operator $\tau(\Delta t)$ by \[ (\tau(\Delta t)\v v)(x)=\v v(x-\lambda\Delta t), \] then the solution of the free transport step (\ref{eq:relax1-1})-(\ref{eq:relax2-1}) is given by \[ \left(\begin{array}{c} \mathbf{w}(\cdot,t+\Delta t)\\ \v z(\cdot,t+\Delta t) \end{array}\right)=T(\Delta t)\left(\begin{array}{c} \mathbf{w}(\cdot,t)\\ \v z(\cdot,t) \end{array}\right), \] with \begin{equation} T(\Delta t)\coloneqq\frac{1}{2}\left(\begin{array}{cc} \tau(\Delta t)+\tau(-\Delta t) & (\tau(\Delta t)-\tau(-\Delta t))/\lambda\\ \lambda(\tau(\Delta t)-\tau(-\Delta t)) & \tau(\Delta t)+\tau(-\Delta t) \end{array}\right).\label{eq:free_transport} \end{equation} This can be easily obtained noting that the characteristic quantities $\v w / 2 \pm \v z / 2\lambda$ are transported at velocities $\pm \lambda$. The solution of the relaxation step is given by \[ \left(\begin{array}{c} \mathbf{w}(\cdot,t+\Delta t)\\ \v z(\cdot,t+\Delta t) \end{array}\right)=P_{\varepsilon}(\Delta t)\left(\begin{array}{c} \mathbf{w}(\cdot,t)\\ \v z(\cdot,t) \end{array}\right), \] with \begin{equation} P_{\varepsilon}(\Delta t)\left(\begin{array}{c} \mathbf{w}\\ \v z \end{array}\right)\coloneqq\left(\begin{array}{c} \mathbf{w}\\ \mathbf{{f}}(\mathbf{w}) \end{array}\right)+\exp(-\Delta t/\varepsilon)\left(\begin{array}{c} 0\\ \v z-\mathbf{{f}}(\mathbf{w}) \end{array}\right).\label{eq:exact_relaxation} \end{equation} These operators being defined, we obtain a first-order-in-time approximation of the solution of (\ref{eq:relax1})-(\ref{eq:relax2})-(\ref{eq:cond_ini_relax}) \[ \left(\begin{array}{c} \mathbf{w}_{\varepsilon}(\cdot,\Delta t)\\ \v z_{\varepsilon}(\cdot,\Delta t) \end{array}\right)=S_{1}(\Delta t)\left(\begin{array}{c} \v v\\ \mathbf{{f}}(\v v) \end{array}\right)+O(\Delta t^{2}), \] with \begin{equation} S_{1}(\Delta t)=P_{\varepsilon}(\Delta t)T(\Delta t).\label{eq:lie_scheme} \end{equation} The splitting error is of order $O(\Delta t^{2})$, but when this approximation is accumulated on $t/\Delta t$ time steps, $S_{1}$ is indeed a first order scheme. The Jin-Xin scheme is very robust and can handle shock solutions. However, for smooth solutions, its accuracy is not sufficient. For achieving second order accuracy for smooth solutions, a simple idea would be to replace the splitting (\ref{eq:lie_scheme}) by a Strang procedure. We observe that $T(0)=I$, where $I$ is the identity operator. For $\varepsilon>0$ fixed, we also have $P_{\varepsilon}(0)=I$. However, when $\varepsilon$ tends to zero, the relaxation step becomes \[ P_{0}(\Delta t)\left(\begin{array}{c} \mathbf{w}\\ \v z \end{array}\right)=\left(\begin{array}{c} \mathbf{w}\\ \mathbf{{f}}(\mathbf{w}) \end{array}\right) \] and we observe that the limit relaxation operator does not satisfy $P_{0}(0)=I$ anymore. It has become a projection operator $$P_{0}(0)P_{0}(0)=P_{0}(0). $$ The fact that $P_{0}(0)\neq I$ is the main reason why a Strang splitting procedure like \[ S(\Delta t)=T(\frac{\Delta t}{2})P_{\varepsilon}(\Delta t)T(\frac{\Delta t}{2}) \] would not lead to a second order scheme in the case $\varepsilon = 0$ (\citet{coulette2016palindromic,coulette2017palindromic}). The objectives of this paper are: \begin{enumerate} \item Recall how to construct a splitting that remains second order when $\varepsilon=0$. \item Compute the formal equivalent equation of the resulting scheme. 
\item From this equivalent system of partial differential equations construct compatible boundary conditions ensuring stability and high order. \item Test the whole approach for a simple hyperbolic problem solved with a Lattice-Boltzmann Method. \end{enumerate} \section{Over-relaxation scheme} For constructing a second-order-in-time over-relaxation scheme, a possibility is to perform a Padé approximation when $\Delta t\simeq0$ of the exponential operator \[ \exp(-\frac{\Delta t}{\varepsilon})\simeq\frac{1-\frac{\Delta t}{2\varepsilon}}{1+\frac{\Delta t}{2\varepsilon}}=\frac{2\varepsilon-\Delta t}{2\varepsilon+\Delta t}, \] and to replace the exact relaxation (\ref{eq:exact_relaxation}) step by \begin{equation} R_{\varepsilon}(\Delta t)\left(\begin{array}{c} \mathbf{w}\\ \v z \end{array}\right)\coloneqq\left(\begin{array}{c} \mathbf{w}\\ \mathbf{{f}}(\mathbf{w}) \end{array}\right)+\frac{2\varepsilon-\Delta t}{2\varepsilon+\Delta t}\left(\begin{array}{c} 0\\ \v z-\mathbf{{f}}(\mathbf{w}) \end{array}\right).\label{eq:exact_relaxation-1} \end{equation} We would obtain the same formula by applying a Crank-Nicolson scheme for approximating the differential equation (\ref{eq:relax1-2})-(\ref{eq:relax2-2}). Now, we observe that \begin{equation} R_{0}(\Delta t)\left(\begin{array}{c} \mathbf{w}\\ \v z \end{array}\right)=\left(\begin{array}{c} \mathbf{w}\\ 2\mathbf{{f}}(\mathbf{w})-\v z \end{array}\right).\label{eq:zero_relax} \end{equation} This operator does not depend on $\Delta t$ anymore. We also observe that, as for the usual Strang splitting procedure, $R_{0}(0)\neq I$. In addition, $R_{0}$ is no more a projection, but an involutory operator \[ R_{0}R_{0}=I. \] With this observation in mind, we propose the following over-relaxation scheme $S_{2}(\Delta t)$ for approximating the solution of (\ref{eq:conslaw})-(\ref{eq:cond_ini}). It is defined by \begin{equation} S_{2}(\Delta t)\coloneqq T(\frac{\Delta t}{4}) \, R_{0}\, T(\frac{\Delta t}{2})\, R_{0}\, T(\frac{\Delta t}{4}).\label{eq:scheme_def} \end{equation} With this definition, we can check that the over-relaxation scheme is time-symmetric: \[ S_{2}(-\Delta t)=S_{2}(\Delta t)^{-1},\quad S_{2}(0)=I. \] This property ensures that the over-relaxation scheme is second order in time (\citet{hairer2006geometric,mclachlan2002splitting}). For one single time step we thus have \[ \left(\begin{array}{c} \v u(\cdot,\Delta t)\\ \mathbf{{f}}(\v u(\cdot,\Delta t)) \end{array}\right)=S_{2}(\Delta t)\left(\begin{array}{c} \v v\\ \mathbf{{f}}(\v v) \end{array}\right)+O(\Delta t^{3}), \] where $\v u$ is the exact solution of (\ref{eq:conslaw})-(\ref{eq:cond_ini}). \section{Equivalent equation} In this section, we will compute the equivalent equation of the over-relaxation scheme. The objective is to derive a system of partial differential equations satisfied by the approximations of $\mathbf{w}$ and $\v z$ when $\Delta t$ tends to zero. Of course, if $\v z = \mathbf{{f}}(\mathbf{w})$ at the initial time, we expect $\mathbf{w}$ and $\v z$ to satisfy \[ \partial_{t}\mathbf{w}+\partial_{x}\mathbf{{f}}(\mathbf{w})=O(\Delta t^{2}),\quad\v z-\mathbf{{f}}(\mathbf{w})=O(\Delta t^{2}). \] A more interesting question is to find a partial differential equation satisfied by the approximation of the flux $\v z.$ This is important in practice in order to construct stable boundary conditions to be applied to $\v z$, or for designing schemes that remain second order at the boundaries. 
Let $\mathbf{w}$ and $\v z$ now denote the numerical solution given by the second order scheme (\ref{eq:scheme_def}). We have \begin{multline} \frac{1}{\Delta t}\left(\begin{array}{c} \mathbf{w}(\cdot,t+\frac{\Delta t}{2})-\mathbf{w}(\cdot,t-\frac{\Delta t}{2})\\ \v z(\cdot,t+\frac{\Delta t}{2})-\v z(\cdot,t-\frac{\Delta t}{2}) \end{array}\right) = \\ \frac{1}{\Delta t}(S_{2}(\frac{\Delta t}{2})-S_{2}(-\frac{\Delta t}{2}))\left(\begin{array}{c} \mathbf{w}(\cdot,t)\\ \v z(\cdot,t) \end{array}\right),\label{eq:def_s2} \end{multline} where the over-relaxation scheme $S_{2}$ has an explicit form given by (\ref{eq:free_transport}), (\ref{eq:zero_relax}) and (\ref{eq:scheme_def}). We can perform a Taylor expansion of both sides of \eqref{eq:def_s2} when $\Delta t$ tends to zero. By symmetry considerations, the first order terms vanish. An essential point is that $S_{2}(0)=I$. After simple but long calculations, we obtain the following result: \begin{thm} Let $\mathbf{w}$ and $\v z$ be smooth solutions of the time marching algorithm (\ref{eq:def_s2}). Let us define the flux error $\v y$ by \[ \v y\coloneqq\v z-\mathbf{{f}}(\mathbf{w}). \] Then, up to second order terms in $\Delta t$, $\mathbf{w}$ and $\v y$ are solutions of the following (non conservative) hyperbolic system of conservation laws: \begin{equation} \partial_{t}\left(\begin{array}{c} \mathbf{w}\\ \v y \end{array}\right)+\left(\begin{array}{cc} \mathbf{{f}}'(\mathbf{w}) & 0\\ 0 & -\mathbf{{f}}'(\mathbf{w}) \end{array}\right)\partial_{x}\left(\begin{array}{c} \mathbf{w}\\ \v y \end{array}\right)=0.\label{eq:limit_wz} \end{equation} \end{thm} \begin{rem} The equivalent equation (\ref{eq:limit_wz}) shows that the over-relaxation scheme tends to propagate the conservative variables $\mathbf{w}$ and the flux error $\v y$ with opposite wave velocities. This gives hints to build stable boundary conditions on $\v z$. Roughly speaking, at an inflow boundary for $\mathbf{w}$, one should impose $\mathbf{w}$ and not $\v y$, while at an outflow boundary for $\mathbf{w}$ one should impose $\v y$ and not $\v w$. Numerical experiments that confirm this heuristic are given in the next section. \end{rem} \begin{rem} Generally, when the relaxation operator is a projection, the equivalent equation is only available for $\mathbf{w}$. See for instance \cite{dubois2008equivalent,sportisse2000analysis}. \end{rem} \begin{rem} The kinetic speed $\lambda$ does not appear in the equivalent equation (\ref{eq:limit_wz}). It appears in the $O(\Delta t^{2})$ terms, which are complicated. We do not yet know how to perform the stability analysis of these terms. In practice, if $\lambda$ is too small, the scheme becomes unstable. This indicates that a subcharacteristic condition still has to be satisfied. \end{rem} \section{Numerical method} \subsection{Transport model} In this section, we describe a numerical discretization of $S_{2}$ in the simple case where $m=1$, $\v u=u\in\mathbb{R}$ and $\mathbf{{f}}(\v u)=f(u)=cu$, $c>0$. We thus solve a simple transport equation at velocity $c>0$ \[ \partial_{t}u+c\, \partial_{x}u=0. \] We assume that $x\in[0,1].$ We also provide an initial condition and a boundary condition at the left point \[ u(x,0)=v(x),\quad u(0,t)=v(-ct). \] where $v:\mathbb{R}\rightarrow\mathbb{R}$ is a given function. It can be checked that the exact solution of this initial-boundary value problem is \[ u(x,t)=v(x-ct). 
\] With this transport equation, we can associate its over-relaxation system with the approximated conservative data $\mathbf{w}=w\in\mathbb{R}$ and the approximated flux $\v z=z\in\mathbb{R}.$ \subsection{Numerical discretization} For the numerical discretization, we consider a positive integer $N$ and define the space step and grid points by \[ \Delta x=\frac{1}{N+1},\quad x_{i}=i\Delta x,\quad i=0\ldots N+1. \] The grid points $i=0$ and $i=N+1$ are the border points of the interval $[0,1]$, where the boundary conditions are applied. We consider an approximation of $w$ and $z$ at the grid points $x_{i}$ and times $t_{n}=n\Delta t$ \[ w_{i}^{n}\simeq w(x_{i},t_{n}),\quad z_{i}^{n}\simeq z(x_{i},t_{n}). \] The initial data are exactly sampled at the grid points \[ w_{i}^{0}=v(x_{i}),\quad z_{i}^{0}=cv(x_{i}). \] Like in the Lattice Boltzmann Method (\citet{chen1998lattice}) we choose a special time step \[ \Delta t=\frac{4\Delta x}{\lambda}. \] This choice ensures that the transport operator $T(\Delta t/4)$ only involves exact shift operators. For instance, the translation operator is approximated here by \[ (\tau(\Delta t/4)w)(x_{i},t_{n})\simeq w_{i-1}^{n}. \] Thus, the transport step reads: \begin{equation} \begin{aligned} w_i^{n+1/4} & = \frac{w_{i-1}^{n} + w_{i+1}^{n}}{2} + \frac{z_{i-1}^n - z_{i+1}^n}{2\lambda}, \\ z_i^{n+1/4} & = \frac{z_{i-1}^{n} + z_{i+1}^{n}}{2} + \lambda \frac{w_{i-1}^n - w_{i+1}^n}{2}. \end{aligned} \label{eq:discrete_scheme} \end{equation} Note that these discrete equations are equivalent to: \begin{align} z_i^{n+1/4} - \lambda w_i^{n+1/4} & = z_{i+1}^{n} - \lambda w_{i+1}^{n}, \label{eq:discrete_scheme_f-}\\ z_i^{n+1/4} + \lambda w_i^{n+1/4} & = z_{i-1}^{n} + \lambda w_{i-1}^{n}. \label{eq:discrete_scheme_f+} \end{align} In practice, we also use the fact that $T(\Delta t/2)=T(\Delta t/4)T(\Delta t/4)$. \subsection{Boundary conditions, relaxation \label{sec:bc}} \subsubsection{Boundary conditions\label{subsec:Boundary-conditions}} Let us assume that at the beginning of a time step, for instance at time $t_n$, we know $w_{i}^{n}$ and $z_{i}^{n}$ for $i=0\ldots N+1$. The transport operator $T(\Delta t/4)$ can be applied to internal grid points $x_{i}$, corresponding to indices $i=1\ldots N.$ Thus, using \eqref{eq:discrete_scheme}, it is possible to compute $w_{i}^{n+1/4}$, $z_{i}^{n+1/4}$ for $i=1\ldots N$. At the left boundary $i=0$, one piece of information is missing for computing $w_{0}^{n+1/4}$ and $z_{0}^{n+1/4}.$ According to the previous analysis, it is natural to impose the boundary condition on $w$ (because it is an inflow boundary) at the middle of the time step (in order to respect the time-symmetry): \[ w(0,t_{n}+\frac{\Delta t}{8})=v(-c(t_{n}+\frac{\Delta t}{8})). \] It is discretized by \begin{equation} \frac{w_{0}^{n}+w_{0}^{n+1/4}}{2}=v(-c(t_{n}+\frac{\Delta t}{8})), \label{eq:w-left-bounds} \end{equation} which provides the missing relation and enables us to compute $w_{0}^{n+1/4}$. Then, from \eqref{eq:discrete_scheme_f-}, the value of $z_{0}^{n+1/4}$ can be computed: \[ z_0^{n+1/4} - \lambda w_0^{n+1/4} = z_1^n - \lambda w_1^n. \] At the right boundary, we will test several approaches: an ``exact'' strategy, a ``Dirichlet'' strategy on $y=z-cw$ or a ``Neumann'' strategy. \paragraph*{Exact strategy} Since we know the analytical solution, we can impose the values of $w_{N+1}$ given by the exact solution. Of course this method cannot be generalized to more complex equations and solutions. 
In addition, we expect it to generate oscillations, because the boundary condition is not compatible with an outflow boundary. As for the left boundary, we write \[ \frac{w_{N+1}^{n}+w_{N+1}^{n+1/4}}{2}=v(1-c(t_{n}+\frac{\Delta t}{8})). \] \paragraph*{Dirichlet strategy on $y$} In this method, we simply apply the condition $y=0$ at the middle of the time step. This gives \[ \frac{z_{N+1}^{n}+z_{N+1}^{n+1/4}}{2}-c\frac{w_{N+1}^{n}+w_{N+1}^{n+1/4}}{2}=0, \] which provides the missing relation. For instance, using this relation and expression \eqref{eq:discrete_scheme_f+}, one obtains \[ w_{N+1}^{n+1/4} = \frac{1}{\lambda + c} \left( \lambda w_{N}^{n} -c w_{N+1}^{n}\right) + \frac{1}{\lambda + c} \left( z_{N}^{n} + z_{N+1}^{n}\right). \] \paragraph*{Neumann strategy on $y$} The last method consists in imposing the condition $\partial_{x}y(L,t)=0$ at the right boundary. Formally, up to second order, this is equivalent to imposing $\partial_{t}y(L,t)=0$ or $y(L,t)=y(L,0)=0.$ The missing relation is obtained from \[ z_{N+1}^{n+1/4}-cw_{N+1}^{n+1/4}=z_{N}^{n+1/4}-cw_{N}^{n+1/4}. \] Once again, using this relation and expression \eqref{eq:discrete_scheme_f-}, one now has the following relation for $w_{N+1}^{n+1/4}$ \begin{multline*} w_{N+1}^{n+1/4} = \frac{1}{2(\lambda+c)} \left[ \left( 2 \lambda w_{N}^{n} + (\lambda +c) w_{N+1}^{n} - (\lambda-c) w_{N-1}^{n}\right) + \right. \\ \left. \left( 2 z_{N}^{n} -\frac{\lambda + c}{\lambda} z_{N+1}^{n} -\frac{\lambda - c}{\lambda} z_{N-1}^{n}\right) \right]. \end{multline*} \subsubsection{Relaxation} The relaxation operation, as stated above, consists in replacing in each cell $(\mathbf{w},\v z)$ by $(\mathbf{w},2\mathbf{{f}}(\mathbf{w})-\v z)$. We emphasize that the relaxation is also performed in the boundary cells $i=0$ and $i=N+1.$ \subsection{Numerical results} We test the above scheme and boundary conditions with $x\in[0,1]$, $t\in[0,t_{\max}]$ and the following exact solution \[ u(x,t)=\exp(-A(x-\alpha-ct)^{2}). \] We may also impose a (non-physical) flux disequilibrium $y=z-f(w)\neq 0$ at the initial time. The initial value of $y$ is given at time $t=0$ by \[ y = B\exp(-A(x-\beta+ct)^{2}). \] We test the three boundary approaches proposed in Section \ref{subsec:Boundary-conditions}. We check the stability and order of the scheme. The error is measured by the discrete $L^{2}$ norm \[ e_{\Delta x}^{n}=\sqrt{\Delta x\sum_{i=0}^{N+1}\big(w_{i}^{n}-v(x_{i}-cn\Delta t)\big)^{2}+\big(z_{i}^{n}-cv(x_{i}-cn\Delta t)\big)^{2}}. \] \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{paper-w-y-transport-illustr.png} \caption{Transport of the $w$ (dashed lines) and $y = z-f(w)$ (plain lines) quantities.} \label{fig:y-transport-illustr} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{gauss-transport-figure.png} \caption{Initial state and comparison of the final states for the transport equation with Gaussian initial profile, $\Delta x = 2^{-7}$.} \label{fig:illustr-transport-test} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.9\textwidth]{gauss-transport-convergence.png} \caption{Convergence study for the transport equation with Gaussian initial profile. 
Comparison of Exact, Dirichlet and Neumann strategies.} \label{fig:convergence-transport-test} \end{center} \end{figure} In Figure \ref{fig:y-transport-illustr}, we first show an illustration of the propagation of the quantity $y = z-f(w)$ with the following numerical parameters: \[ c=1,\quad \lambda=2,\quad t_{\max}=0.33,\quad \alpha=0.25,\quad \beta = 0.75,\quad A=80,\quad \text{and}\quad B=1/2. \] With this choice and a sufficiently small final time, the boundary condition has no influence. One checks numerically that $w$ propagates with velocity $c$, while $y$ propagates with velocity $-c$, which agrees with the equivalent equation \eqref{eq:limit_wz}. An illustration of the numerical results using the three strategies for boundary conditions is given in Figure \ref{fig:illustr-transport-test}. For this test, we have imposed the following numerical parameters: \[ c=1,\quad \lambda=2,\quad t_{\max}=1,\quad \alpha=0,\quad \beta = 0,\quad A=80,\quad \text{and}\quad B=0. \] One can see that the \textit{Exact} strategy generates oscillations at the right boundary. The \textit{Dirichlet} strategy generates weaker oscillations that are not amplified with time. The \textit{Neumann} strategy does not generate any oscillation. The convergence results are shown in Figure \ref{fig:convergence-transport-test}. One can see that the \textit{Exact} strategy and the \textit{Dirichlet} strategy are first order accurate, while the \textit{Neumann} strategy is second order accurate. The best choice for ensuring stability and second order accuracy seems to be the \textit{Neumann} strategy. \begin{rem} With the \textit{Dirichlet} strategy, we impose that $y = z-cw = 0$ at the boundary, while this equality may not be satisfied exactly inside the domain. This creates small discontinuities that may explain the loss of accuracy. \end{rem} \section{Conclusion} In this short note, we have derived the equivalent equation of the over-relaxation kinetic scheme. The equivalent equation reveals that the conservative variable and the flux error propagate in opposite directions. This allows us to determine natural boundary conditions for the over-relaxation scheme. Numerical experiments confirm the stability and accuracy of these boundary conditions. In a forthcoming work, we will extend the approach to more complex non-linear systems and to higher dimensions. It is also important to incorporate in the over-relaxation method a dissipative mechanism in order to compute discontinuous solutions without oscillations. \section{Bibliography} \bibliographystyle{elsarticle-num-names} \addcontentsline{toc}{section}{\refname}
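To make the discrete algorithm concrete, here is a minimal Python sketch (not part of the paper) of one over-relaxation time step S_2(dt) = T(dt/4) R_0 T(dt/2) R_0 T(dt/4) for the linear flux f(w) = c w. It uses the exact-shift transport step of eq. (discrete_scheme), but with periodic boundaries for simplicity, so the inflow/outflow strategies of the boundary-condition section are not exercised; the parameter values mirror the paper's first test (c=1, lambda=2, Gaussian profile with A=80):

import numpy as np

def transport_quarter(w, z, lam):
    # T(dt/4) with dt = 4*dx/lam: the characteristic invariants z + lam*w
    # and z - lam*w shift by exactly one cell to the right and to the
    # left, respectively (periodic wrap instead of boundary strategies).
    fp = np.roll(z + lam * w, 1)
    fm = np.roll(z - lam * w, -1)
    return (fp - fm) / (2.0 * lam), (fp + fm) / 2.0

def relax(w, z, c):
    # R_0: (w, z) -> (w, 2 f(w) - z), an involution since R_0 R_0 = I.
    return w, 2.0 * c * w - z

def step_S2(w, z, c, lam):
    # S_2 = T(dt/4) R_0 T(dt/2) R_0 T(dt/4), with T(dt/2) = T(dt/4)^2;
    # the palindromic structure makes the scheme second order in time.
    w, z = transport_quarter(w, z, lam)
    w, z = relax(w, z, c)
    w, z = transport_quarter(w, z, lam)
    w, z = transport_quarter(w, z, lam)
    w, z = relax(w, z, c)
    return transport_quarter(w, z, lam)

c, lam, N = 1.0, 2.0, 256           # lam > |c|: subcharacteristic condition
x = np.arange(N) / N                # periodic grid, dx = 1/N
dt = 4.0 / (lam * N)                # LBM-style time step, dt = 4*dx/lam
w = np.exp(-80.0 * (x - 0.5) ** 2)  # Gaussian initial profile
z = c * w                           # equilibrium start: y = z - c*w = 0
steps = round(0.25 / dt)
for _ in range(steps):
    w, z = step_S2(w, z, c, lam)
center = (0.5 + c * steps * dt) % 1.0
d = (x - center + 0.5) % 1.0 - 0.5  # periodic distance to the exact center
print("max error:", np.max(np.abs(w - np.exp(-80.0 * d ** 2))))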
{ "redpajama_set_name": "RedPajamaArXiv" }
6,475
// iOS implementation of GoogleURLTrackerClient backed by a
// ChromeBrowserState.
#ifndef IOS_CHROME_BROWSER_GOOGLE_GOOGLE_URL_TRACKER_CLIENT_IMPL_H_
#define IOS_CHROME_BROWSER_GOOGLE_GOOGLE_URL_TRACKER_CLIENT_IMPL_H_

class PrefService;

namespace ios {
class ChromeBrowserState;
}

namespace net {
class URLRequestContextGetter;
}

class GoogleURLTrackerClientImpl : public GoogleURLTrackerClient {
 public:
  explicit GoogleURLTrackerClientImpl(ios::ChromeBrowserState* browser_state);
  ~GoogleURLTrackerClientImpl() override;

 private:
  // GoogleURLTrackerClient implementation.
  bool IsBackgroundNetworkingEnabled() override;
  PrefService* GetPrefs() override;
  net::URLRequestContextGetter* GetRequestContext() override;

  ios::ChromeBrowserState* browser_state_;

  DISALLOW_COPY_AND_ASSIGN(GoogleURLTrackerClientImpl);
};

#endif // IOS_CHROME_BROWSER_GOOGLE_GOOGLE_URL_TRACKER_CLIENT_IMPL_H_
{ "redpajama_set_name": "RedPajamaGithub" }
6,993
Joan Didion exposed political stories America is still telling itself in order to live
Didion crashed the political journalism party, revealing much about power, the mainstream press and America
By David Masciotra
Joan Didion (Neville Elder/Corbis via Getty Images)
The death of Joan Didion steals from the United States not only one of its best literary artists, but also one of its most astute political analysts. Because Didion was so prolific and accomplished, it was always inevitable that certain aspects of her oeuvre would overshadow others. In this case, her political commentary and journalism fails to elicit attention equal to her cultural correspondences, novels, and the harrowing personal writing she published after the deaths of her husband and daughter. Given the ideological and mercurial biases of the corporate press, there are grounds for suspicion that book critics, journalists and obituary writers have reasons that go beyond popular reputation to overlook Didion's political work.
RELATED: Joan Didion for Salon: "Election by Sound Bite," commentary on the 2008 election
There was no fog on the sightline of Didion. When she analyzed the absurdities of American politics, she identified and accurately described the racism, corporate restraints, self-serving myths and lack of ambition that paralyze the political system, rendering it unable, despite the country's inordinate wealth and educational resources, to adequately address the needs and concerns of the electorate. The inert process, Didion wrote in the foreword to a compendium of her political essays aptly called "Political Fictions," "proceeded from a series of fables about American experience." With her brutal chronicle of the fictive nature of political debate, she committed a cardinal sin of the American press: She exposed the overwhelming failures of the boys club that was, and to a large extent, still is the New York-Beltway nexus of credentialed reporters. "Boys" is the best word, because of the patriarchal gender bias of the press corps, but also because, like children, mainstream pundits are often the most gleeful prisoners and propagators of political fable. The story of Didion's entry into political journalism is all the more remarkable for her reluctance. In 1988, Robert Silvers, then editor of the New York Review of Books, requested that Didion, a frequent contributor, write a lengthy essay on that year's presidential campaign. He promised to acquire a press pass that would allow her to attend campaign events with insider access. She wrote that she was flattered, but largely uninterested. Domestic politics failed to capture her enthusiasm, and she worried that she lacked the political expertise of other NYRB writers, like Gore Vidal. Eventually, she relented and produced a series of essays of far greater clarity, insight, and ethical force than most comparably seasoned male and milquetoast writers of the New York Times and Washington Post school could ever muster. Didion's essays, like Vidal's, not only capture the reality of their time, but also undress ugly truths of the American experience, and the use of power more broadly, that make them, unfortunately, timeless. Almost immediately as a political writer, Didion was able to slice through the layers of foolishness and insulation that keep average Americans from seeing the truth of their democracy. 
This passage comes approximately 1,000 words into her first political essay, the cleverly titled "Insider Baseball": When we talk about the "process," then, we are talking, increasingly, not about the "democratic process," or the general mechanism affording the citizens of a state a voice in its affairs, but the reverse: a mechanism seen as so specialized that access to it is correctly limited to its own professionals, to those who manage policy and those who report on it, to those who run the polls and those who quote them, to those who ask and those who answer the questions on the Sunday shows, to the media consultants, to the columnists, to the issues advisers, to those who give the off-the-record breakfasts and those who attend them; to the handful of insiders who invent, year in and year out, the narrative of public life. "I didn't realize you were a political junkie," Martin Kaplan, the former Washington Post reporter and Mondale speechwriter who was married to Susan Estrich, the manager of the Dukakis campaign, said when I mentioned that I planned to write about the campaign; the assumption here, that the narrative should be not just written only by its own specialists but also legible only to its own specialists, is why, finally, an American presidential campaign raises questions that go so vertiginously to the heart of the structure. Didion saw clearly that the structure, in the institutional sense, was under strict control of capital, writing about how it did not seem to concern most politicians or reporters that over half of the public did not vote, as long as commercial sponsors like Merrill Lynch were confident that the Republican National Convention and Democratic National Convention would score high ratings with "an upscale audience." The "structure," in the more metaphorical sense — the vocabulary and discourse that dominated the political imagination — passed through the "upscale audience" filter. The filter, as Didion painstakingly argued, created a fictional narrative so vast that few could escape it. RELATED: The meaning of words: Orwell, Didion, Trump and the death of language As a writer, not a "political junkie," Didion crashed the party by exposing the fictive status of the popular story. The prevailing obsession, then and now, with the personal biographies and foibles of the candidates prohibits other writers, and more importantly, politicians from doing the same clarifying work of her journalism. Didion wrote in "Insider Baseball" that "All stories, of course, depend for their popular interest upon the invention of personality, or 'character,' but in the political narrative … it is to maintain the illusion of consensus by obscuring rather than addressing actual issues." It is important to note that Didion is using "narrative" not as a synonym for argument or theory, as many dense pundits currently employ the term, but to mean an intentionally crafted story with heroes, villains, a setting, a problem and a proposed resolution. The story of 1988 bears strong resemblance, despite the dramatic difference in circumstances and crises, to the story of 2021. The 1988 race was the first in three election cycles that the smiling reactionary Ronald Reagan would not dominate. On the Republican side, Didion saw the influence and proliferation of "reactive angers" illustrating a "quite florid instance of what Richard Hofstadter had identified in 1965 as the paranoid style of American politics." 
Anyone minimally lucid can understand Didion's assertion — fear of Black people, immigrants, the poor, uppity women, and the "radical left" creates an extreme anti-government and antisocial form of anti-politics on the right, which has only intensified and grown more dangerous in the past three decades. Invocation of "law and order," as Didion and other analysts have well understood, is the Republican technique of telling their frightened voters that racial minorities, especially those who are poor, will not cut in on their action. Democratic Party politics were and are more complicated. They are also more indicative of the middle class complacency that often inhibits societal transformation, and plays into the devilish hands of the Republican Party. There was one candidate in the 1988 race who Didion believed would present the U.S. with a profound opportunity to elevate itself out of the miasma surrounding systemic racism, oligarchic oppression, and the ongoing sabotage of democracy: Jesse Jackson. While conducting research for my latest book, "I Am Somebody: Why Jesse Jackson Matters," I found Didion nearly alone in the mainstream press in treating Jackson with the respect and attention that his groundbreaking candidacy deserved. Demonstrating her keen political insight and her rare gift for literary flair, she wrote that Jackson "rode on a Trailways bus into the sedative fantasy of a fixable imperial America."
RELATED: Democracy vs. fascism, part 1: What do those words mean — and do they describe this moment?
While all the other candidates in the race, including the eventual nominee, Michael Dukakis, could offer nothing more than buffering of America's hard and often deadly edges, Jackson advocated for Medicare for All, tuition-free public universities, a national public development bank, full employment through infrastructural programs, subsidized childcare, and paid family leave. He was the only candidate to articulate support for Nelson Mandela, and call for a reduction in the Pentagon budget and a withdrawal of American military from overseas bases and installations. He also was the first candidate, in American history, to make gay rights a major campaign plank. His policy platform was accessible in his soaring oratory, featuring rhetorical gems like, "We must leave the racial battleground to find economic common ground. Then, we can reach for moral higher ground." Bringing together a "rainbow coalition" of Black, Latino, Native American, Asian American and progressive white voters, from family farmers in Missouri to beleaguered manual laborers in Milwaukee and Detroit, Jackson nearly won the nomination, scoring, at the time, the closest second-place finish in the history of the Democratic Party. Didion wrote that Jackson offered an alternative to "what had come to be the very premise of the process, the notion that the winning and maintaining of public office warranted the invention of a public narrative based at no point on observable reality." As part of his campaign, Jackson registered six million new voters. For his trouble, an unnamed Democratic superdelegate, while talking to Didion, likened Jackson to a "terrorist." Then vice president and eventual president, George H.W. Bush, called him a "Chicago hustler" and "con man." 
With the notable exceptions of Norman Mailer and Vidal, Didion was alone among mainstream white writers in identifying the racism at the heart of opposition to Jackson, and the cruelty that those with power were showing not only to Jackson himself but, more important, to the voters for whom he spoke. Bush's racially-coded ridicule shows that the supposedly "decent" forebears of Donald Trump were not innocent of the charge of using bigotry to provoke white hostility in favor of reactionary politics. Didion's most famous line is probably, "We tell ourselves stories in order to live." Her political writing had captured the real story of the "process." When she covered the 1992 campaign, she offered a perceptive examination of how the Democratic Party's compromises, while perhaps setting the party up for short-term victory with an undeniably charismatic Bill Clinton at the top of the ticket, would poison the long-term public interest. Jackson and former president Jimmy Carter were relegated to "losers' night" at the Democratic National Convention, meaning the night that convention planners expected the lowest ratings, to make way for Clinton, running mate Al Gore, and a slate of corporate personalities to articulate a series of bromides: "forgotten middle class," "character and values," "an end to division," "big government is over." Gone was Jackson's language of justice, and with it vanished his platform of peace, equality, and working-class economics. In its place was a hierarchical agenda with white suburban anxieties at the top, and the concerns of all other voters competing for placement at the bottom. The "political narrative" of 2021, with its fixation on suburban parents, rising crime rates, and backlash against Black Lives Matter and Me Too in the form of whining about "cancel culture," is a sequel to the story of 1988. Didion's conclusion was grim, but should resonate in the present, as progressives in the House and Senate struggle to pass relatively moderate social reforms against the corruption of Joe Manchin and the timidity of Joe Biden: The Republican Party, "standing for ideology and interest" and "not compromise," is the only real political party in the United States. The progressives, or what Howard Dean called the "Democratic wing of the Democratic Party," have grown more powerful and influential in the past three decades, and much of the future of America rides on whether they can transform their party into a force more united and authentic. Before studying and writing about politics, Didion explained that she was a "Goldwater Republican," whose reflexive political instincts were the consequence of spending her childhood and early adulthood in the almost exclusive company of California conservatives. Her experience and brilliance enabled her to accurately identify the self-preservation of power, wealth and majority status that motivated the American right. She cast her discerning eye not only on the machinations of presidential contests, but also on the violent mechanisms of American power. In 1990, Joan Didion wrote a pamphlet-length essay on the Central Park jogger case — the miscarriage of justice that occurred when five Black and Latino teenagers were sentenced to lengthy prison terms for the assault and rape of a white woman after the police coerced confessions from the teens, violated their civil rights, and ignored any evidence that contradicted their assumptions of guilt. The prosecutor's abusive behavior was arguably worse. 
Eventually, all five were exonerated when the actual rapist confessed to the crime. After their release from prison, the five wrongly-convicted men filed a lawsuit against New York, and settled for $41 million as recompense for "malicious prosecution," "racial discrimination" and "emotional distress." Because Donald Trump took out a psychotic advertisement in the New York Times calling for the execution of the teens, even before opening arguments, and Ava DuVernay directed an acclaimed miniseries about the trial and aftermath for Netflix, the Central Park Five story has become emblematic of systemic racism and the culture of paranoia and hatred that scaffolds it. Hindsight makes the injustice painfully clear, but at the time, few white writers or political figures were willing to defend the teenagers without equivocation or apology. Joan Didion wrote the first major essay arguing that the boys were innocent, and that the rush to prosecute and punish them was indicative of a dark undercurrent charging beneath the city, and the entire country. With references to the literature of slavery and the autobiography of Malcolm X, Didion connects the case — and, in her then-correct but minority opinion, the persecution of the teenage suspects — to the myths and mechanisms of white supremacy. At work in the case and the coverage surrounding it were the same institutional and cultural evils responsible for immeasurable suffering throughout the United States and around the world, as well as the perpetual suffocation of democracy. At the center of the white supremacist mythology, from the slave fields to the Emmett Till case, Didion identified "a special emotional undertow that derived in part from the deep and allusive associations and taboos attaching, in American Black history, to the idea of the rape of white women." Didion's essay not only castigates the racist criminal justice system, but also makes clear how the liberal establishment of New York, including then mayor David Dinkins and governor Mario Cuomo, contributed to the noxious atmosphere of hostility toward the boys. Eventually, the U.S. would learn that a single rapist committed the ghastly crime, but the press, from the New York Times to the Wall Street Journal, did not hesitate to run wild with headlines and reports on the "wolf packs" terrorizing Central Park after sundown. The predatory "animals," of course, always have dark skin, and prowl out of the poorest neighborhoods. Almost in isolation did Didion state the obvious that such language, comparable to Ku Klux Klan propaganda, strips Black criminal suspects of their humanity in a society that is supposed to afford them legal protections, and can morph into a lethal weapon against all people of color. "The attack upon the jogger," Didion wrote with contempt for conventional opinion, became "an exact representation of what was wrong with the city, of a city systematically ruined, violated, raped by its underclass." It was also convenient as a "frame in which the actual social and economic forces wrenching the city could be personalized and ultimately obscured." The "law and order" reactionaries made predictable calls for the police to, in essence, occupy New York, but Didion shows that leading feminists, supposedly progressive, did not dramatically differ in how they chose to "frame" and "seize upon" the problem. Quoting Anna Quindlen and other mainstream feminists, Didion attacks the "abstraction" and "sentimentalization" of the case. 
Because the suspects were named, but the victim was not, the press and political class were able to cast the white victim as a symbol of the city's "inspiration," to use the word Didion most frequently quotes as descriptive of the jogger, and the suspects as its "defilement" and "endangerment." Not only was the nameless victim able to quickly achieve "favored victim status," Didion wrote, because she was "white and middle class professional," but the case arrived in the nick of time — perfect for exploitation by politicians who gain advantage by making interpersonal crime the main story in the life of a city, even the life of a country. The crime story, according to Didion, is "devised to obscure not only the city's actual tensions of race and class but also, more significantly, the civic and commercial arrangements that rendered those tensions irreconcilable." The blunt force thesis that Didion posits complements her reading of presidential elections as theater in which nearly any topic is fair game, except scrutiny of the sociopolitical crises that collectively act as a mockery of the Bill of Rights and patriotic verse: Stories in which terrible crimes are inflicted on innocent victims, offering as they do a similarly sentimental reading of class differences and human suffering, a reading that promises both resolution and retribution, have long performed as the city's endorphins, a built-in source of natural morphine working to blur the edges of real and to a great extent insoluble problems. The horrific development of the tabloid fascist who wrote the ad calling for the execution of the Central Park Five becoming president of the United States, even after he continually refused to apologize for that offense and his myriad other misdeeds, confirms Didion's argument that the inequities of racism, white panic, and class oppression extend far beyond bad cops, prosecutors and judges. Joan Didion was a genius of story. Throughout her life, whether she was writing about the mourning of her husband and daughter or the failures of American democracy, she understood and was able to articulate, like few others, how the cultivation of narrative is an inescapable part of the human experience, and how it simultaneously liberates and shackles both individuals and communities. "We tell ourselves stories in order to live" is an equally promising and frightening summary of humanity. With more sophistication, brilliance, and honesty than most American writers, she possessed a clarity of insight regarding U.S. inequality and injustice: For genuine reform to transpire, Americans must collectively change the story they tell themselves about their country, and its people. Joan Didion's flaw was her cynicism. She often dismissed the idea of progress, and even questioned the notion of trying to "make the world a better place." As the United States faces an unprecedented threat against its already fragile democracy, and as the "tensions of race and class" continue to manifest in poverty and violence, it is essential to remember the faith that Didion's work, no matter how cynical, offers to readers. Telling the truth, especially in a society of political fictions, always requires courage. It is always an act of hope.
David Masciotra is the author of "I Am Somebody: Why Jesse Jackson Matters" (Bloomsbury Publishing) and "Mellencamp: American Troubadour" (University Press of Kentucky, 2015).
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,030
Q: least square percentage regression Is there an R package implementing least squares percentage regression? A paper on this is found at this link: https://uhra.herts.ac.uk/dspace/bitstream/2299/965/1/S64.pdf Any help or direction would be helpful.
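A: The estimator in the linked paper reduces to ordinary weighted least squares: minimizing the sum of squared percentage errors, sum(((y - Xb)/y)^2), is the same as weighted least squares with weights 1/y^2. So, assuming all your response values are nonzero, base R should already suffice via lm(y ~ x, weights = 1/y^2), with no extra package needed. As a language-agnostic sketch of the same estimator (a minimal illustration I wrote for this answer, not a packaged implementation; the function name is mine):

import numpy as np

def percentage_regression(X, y):
    # Minimize sum(((y - X @ b) / y)**2): this is weighted least squares
    # with weights w_i = 1 / y_i**2. Assumes every y_i is nonzero.
    w = 1.0 / y**2
    Xw = X * w[:, None]
    # Normal equations: (X^T W X) b = X^T W y
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, 200)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, 200) * (2.0 + 3.0 * x)
X = np.column_stack([np.ones_like(x), x])
print(percentage_regression(X, y))  # roughly [2, 3]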
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,247
\section*{Acknowledgments} NLZ is supported by a {\it Spitzer} fellowship. The observations reported here were obtained with the NASA/ESA Hubble Space Telescope and at the MMT Observatory, a facility operated jointly by the Smithsonian Institution and the University of Arizona. Public Access time is available at the MMT Observatory through an agreement with the National Science Foundation. Funding for the creation and distribution of the SDSS Archive has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, NASA, the NSF, the U.S. Department of Energy, the Japanese Monbukagakusho, and the Max Planck Society. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are The University of Chicago, Fermilab, the Institute for Advanced Study, the Japan Participation Group, The Johns Hopkins University, the Korean Scientist Group, Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
{ "redpajama_set_name": "RedPajamaArXiv" }
2,377
#import <Foundation/Foundation.h>

@class STPCard;

@interface STPCheckoutAccount : NSObject

// Parses an account from a checkout HTTP response; returns nil if parsing fails.
+ (nullable instancetype)accountWithData:(nullable NSData *)data
                             URLResponse:(nullable NSURLResponse *)response;

@property (nonatomic, nonnull, readonly) NSString *email;
@property (nonatomic, nonnull, readonly) NSString *phone;
@property (nonatomic, nonnull, readonly) NSString *csrfToken;
@property (nonatomic, nonnull, readonly) NSString *sessionID;
@property (nonatomic, nonnull, readonly) STPCard *card;

@end
{ "redpajama_set_name": "RedPajamaGithub" }
2,012
\section{Introduction} Polarization, in its essence, describes the behaviour of the electric field within an electromagnetic wave. This behaviour can readily be manipulated \cite{ChoquetteLeibenguth1994,Grier2003,Straufetal2007} and measured \cite{Hauge1976,Mishchenkoetal2011}, making it useful for encoding classical and quantum information. Polarization has successfully been used for communication and metrology and is so ubiquitous that it is treated in a large number of authoritative works \cite{Wiener1930,Wolf1959,McMaster1961,ClarkeGrainger1971,AzzamBashara1977,Collett1992,MandelWolf1995,Huard1997,Brosseau1998,Jackson1999,Goldstein2003,Collett2005,BrosseauDogariu2006,Gil2007,Brosseau2010,Brown2011,KumarGhatak2011,KligerLewis2012,Pye2015,GilOssikovski2016}. Because light's polarization changes when it interacts with an object, be it through reflection or transmission, it is readily used for characterizing objects without disturbing the latter. Polarimetry and its relative ellipsometry \cite{Azzam2011} do just that: they characterize substances by the unique changes they impart on light's polarization degrees of freedom \cite{Collett1992}. Highly precise measurements of polarization and its changes, i.e., polarimetry, have found applications in photonics \cite{Yoonetal2020}, bioimaging \cite{Dulketal1994,Ghoshetal2009,Tuchin2016,FirdousAnwar2016}, oceanography \cite{VossFry1984}, remote sensing \cite{Tyoetal2006}, astronomy \cite{Tinbergen2005,Dulketal1994}, and beyond. It is now taken for granted by practitioners of quantum optics that light is fundamentally described by a quantum field \cite{Glauber1963,Glauber1963quantumtheorycoherence,Sudarshan1963}. Light's polarization properties are no different, also fundamentally arising from quantum theory \cite{Fano1949,FalkoffMacDonald1951,Fano1954,JauchRohrlich1955,McMaster1961,Collett1970}, and have been the subject of recent reviews \cite{Karassiov2007review,Luis2016,Goldbergetal2021polarization}, where the nuances of quantum polarization are seen to overthrow certain concepts from classical polarization \cite{Klyshko1992,Usachevetal2001,delaHozeal2014}. In fact, quantum states may appear to be ``classically unpolarized'' while possessing ``hidden'' polarization properties, leading to quantum polarization effects with no classical parallel \cite{PrakashChandra1971,Klyshko1992,Klyshko1997,Tsegayeetal2000,Usachevetal2001,Bushevetal2001,Luis2002,AgarwalChaturvedi2003,Bjorketal2015,Bjorketal2015PRA,LuisDonoso2016,ShabbirBjork2016,GoldbergJames2017,Bouchardetal2017,GoldbergJames2018Euler,Goldbergetal2020extremal}. Quantum polarization has been used for quantum key distribution \cite{Bennettetal1992,Mulleretal1993}, Einstein-Podolsky-Rosen tests \cite{Kwiatetal1995}, quantum teleportation \cite{Bouwmeesteretal1997}, quantum tomography \cite{Jamesetal2001}, weak value amplification \cite{Hallajietal2017}, and more. However, the study of the \textit{changes} in quantum polarization has not, to our knowledge, been the focus of any major review, which is a void that must especially be filled due to the proliferation of recent experiments on polarimetry with explicitly quantum mechanical states of light \cite{Mitchelletal2004,Bogdanovetal2004,Toussaintetal2004,Grahametal2006,Ozaetal2010,Altepeteretal2011,Slussarenkoetal2017,Dayanooshetal2018,Yoonetal2020,Rosskopfetal2020,Sunetal2020arxiv,Zhangetal2021arxiv}. We set forth to present a complete picture of quantum polarimetry in this work.
There are a number of steps required for understanding quantum polarimetry. First, we briefly review light's polarization properties from a classical standpoint in Section \ref{sec:classical polarization}, making note of the different geometrical and mathematical representations of these properties and various schemes for determining them in the laboratory. Next, we discuss the mathematical structures describing classical changes in polarization, which lead to discussions of the physically viable and forbidden transformations. These structures are then directly useful for characterizing materials through polarimetry. In turn, we introduce light's polarization properties from a quantum mechanical perspective in Section \ref{sec:quantum polarization}, explaining the correspondences with classical polarization properties and giving examples of the peculiar properties about which classical polarization is ignorant. The stage is now set for quantum polarimetry, beginning with a rigorous discussion of the possible quantum mechanical transformations underlying classical polarimetry. Additionally, quantum polarimetry can then incorporate the tools of quantum estimation theory, so we discuss in Section \ref{sec:quantum polarimetry estimation} the possible quantum enhancements in polarimetry. We also include discussions of techniques for measuring quantum polarization properties and how they compare to classical techniques. Deviations from and fortuitous agreement with classical polarization predictions abound, but they tend to be neglected in treatments of classical polarization, so we review them in Section \ref{sec:classical intuitions} and stress some nuances that are often overlooked. We hope this reference will serve as a useful guide in the rapidly evolving world of polarization. \section{Classical Polarization} \label{sec:classical polarization} Light's polarization degrees of freedom stem from Maxwell's equations. The simplest electromagnetic field obeying these equations is a plane wave: \eq{ \mathbf{E}(\mathbf{r},t)=\mathcal{E}_0\left(a\mathbf{e}_a+b\mathbf{e}_b\right)\text{e}^{\text{i}\left(\mathbf{k}\cdot\mathbf{r}-\omega t\right)}. \label{eq:plane wave} } Here, the plane wave is travelling in direction $\mathbf{k}$ within some large quantization volume $V$, $\mathbf{e}_a$ and $\mathbf{e}_b$ are mutually orthogonal unit vectors that are orthogonal to $\mathbf{k}$, $\omega$ is the wave's angular frequency, all dimensionful constants are absorbed into $\mathcal{E}_0=\sqrt{\hbar\omega/2V\varepsilon_0}$, and $\varepsilon_0$ is the permittivity of free space. The quantity $\mathbf{E}(\mathbf{r},t)$ represents the analytic signal of the electric field and the true physical quantity that can be measured is the real part thereof \cite{BornWolf1999}; the positive-frequency component of the true field is given by $\mathbf{E}(\mathbf{r},t)/2$ \cite{Glauber1963quantumtheorycoherence}. The polarization properties of the electromagnetic field correspond to the path taken by the tip of the electric field vector $\mathbf{E}(\mathbf{r},t)$ over time in the $\mathbf{e}_a$-$\mathbf{e}_b$ plane. Electromagnetic fields carry energy in direction $\mathbf{k}$, which we set to be the $\mathbf{z}$ direction for convenience. The electric field for a plane wave then traverses an ellipse in the $x$-$y$ plane. 
This ellipse is parametrized by the measured $a$ and $b$ components of the electric field, $E_a=\mathop{\mathrm{Re}} \nolimits\left[\mathbf{e}_a\cdot \mathbf{E}(\mathbf{r},t)\right]$ and $E_b=\mathop{\mathrm{Re}} \nolimits\left[\mathbf{e}_b\cdot \mathbf{E}(\mathbf{r},t)\right]$, through \cite{BornWolf1999} \eq{ \left(\frac{E_a}{\left|a\right|}\right)^2+\left(\frac{E_b}{\left|b\right|}\right)^2-2\frac{E_a E_b}{\left|ab\right|}\cos\delta=\mathcal{E}_0^2\sin^2\delta. } We observe that the relative phase $\delta = \arg(a/b)$ and the amplitudes $|a|$ and $|b|$ completely describe the orientation and relative sizes of the axes of the ellipse, while $\mathcal{E}_0$ governs the size of the ellipse. This equation is explicitly independent of time, as it simply governs the shape of the ellipse, and the electric field itself rotates around the ellipse over time when viewed along the direction of propagation. This is the sense in which the electric field's tip traverses the $\mathbf{e}_a$-$\mathbf{e}_b$ plane over time; we do not have to worry about defining complex amplitudes therein. The polarization ellipse is commonly defined using the $x$ and $y$ components of $\mathbf{E}$ for $a$ and $b$, so as to categorize beams of light by the ellipticity of their polarization ellipses. We will adopt the slightly nonstandard notation in which $\mathbf{e}_a$ and $\mathbf{e}_b$ refer to right- and left-handed circularly polarized light, respectively, through the unit vectors $\mathbf{e}_a=\mathbf{e}_{\mathrm{R}}=\left(\mathbf{x}-\text{i} \mathbf{y}\right)/\sqrt{2}$ and $\mathbf{e}_b=\mathbf{e}_{\mathrm{L}}=\left(\mathbf{x}+\text{i} \mathbf{y}\right)/\sqrt{2}$, so in this notation the shape of the polarization ellipse itself does not follow the nomenclature describing the polarization as ``linear'' or ``circular.'' One reason for choosing circularly polarized light as our fundamental elements is that these are the components that directly feature in the interactions mediated by beams of light to enact transitions between atomic sublevels \cite{Steck2016}. We will elucidate other motivations for prioritizing the circular components in our later descriptions of polarization. Polarization is easy to measure because it can be determined through intensity measurements alone. In terms of energy flow, the flux received in a given area averaged over a time that spans many oscillation periods $1/\omega$ is given by the \textit{irradiance} of the field \eq{ I\propto \mathbf{E}^*\cdot\mathbf{E}=\mathcal{E}_0^2\left(\left|a\right|^2+\left|b\right|^2\right), \label{eq:intensity irradiance} } where $\,^*$ denotes the complex conjugate. The field's intensity, in slight contrast, is the energy flux per unit area in the $x$-$y$ plane (i.e., perpendicular to $\mathbf{k}$), but we will presently refer to $I$ as the intensity of the field, per common parlance [e.g., in the original work by \textcite{Stokes1852} and the authoritative work by \textcite{BornWolf1999}]; we can equivalently assert that all detectors measuring irradiance are in the $x$-$y$ plane. We can, further, define the intensities of the various components of the light through \eq{ I_i=\left|\mathbf{e}_i\cdot\mathbf{E}\right|^2. } \subsection{Characterizing Polarization} Light's polarization properties can be described in many equivalent ways. For example, one may determine the total intensity $I$ as well as the relative phase and amplitude of the two components $a$ and $b$.
Similarly, one may consider the orientation and ellipticity of the polarization ellipse through the combined parameter \eq{ \frac{a}{b}=\left|\frac{a}{b}\right|\text{e}^{\text{i}\delta}. } Employing units that absorb the proportionality constant and $\mathcal{E}_0$ from Eq. \eqref{eq:intensity irradiance}, we can also describe light's polarization using the four Stokes parameters, which codify the intensity differences between three pairs of orthogonal components of the light: \eq{ S_0&=\frac{I_{\mathrm{H}}+I_{\mathrm{V}}}{2}=\frac{I_{\mathrm{D}}+I_{\mathrm{A}}}{2}=\frac{I_{\mathrm{R}}+I_{\mathrm{L}}}{2}=\frac{\left|a\right|^2+\left|b\right|^2}{2} ,\\ S_1&=\frac{I_{\mathrm{H}}-I_{\mathrm{V}}}{2}=\left|ab\right|\cos\delta ,\\ S_2&=\frac{I_{\mathrm{D}}-I_{\mathrm{A}}}{2}=\left|ab\right|\sin\delta ,\\ S_3&=\frac{I_{\mathrm{R}}-I_{\mathrm{L}}}{2}=\frac{\left|a\right|^2-\left|b\right|^2}{2}. \label{eq:Stokes classical} } We have now made use of the horizontal and vertical unit vectors $\mathbf{e}_{\mathrm{H}}=\mathbf{x}$ and $\mathbf{e}_{\mathrm{V}}=\mathbf{y}$ and their diagonal and antidiagonal counterparts $\mathbf{e}_{\mathrm{D}}=\left(\mathbf{x}+\mathbf{y}\right)/\sqrt{2}$ and $\mathbf{e}_{\mathrm{A}}=\left(\mathbf{x}-\mathbf{y}\right)/\sqrt{2}$. Other definitions of the Stokes parameters differ from ours by a nonessential factor of 2. We note that, even though Stokes first published work on these parameters in 1852 \cite{Stokes1852}, they were only rediscovered by Soleillet in 1929 \cite{Soleillet1929} and made famous by Chandrasekhar in the 1940s \cite{Chandrasekhar1950}. In the interim, the so-called Poincar\'e sphere was developed, which describes plane waves' polarization by a point on the unit sphere \cite{Poincare1889}. This is made apparent by realizing that the Stokes parameters for a plane wave obey \eq{ S_0^2=S_1^2+S_2^2+S_3^2, \label{eq:Stokes sphere} } so the polarization properties other than the total intensity are completely specified by the two angular coordinates of the polarization vector $\mathbf{S}=\left(S_1,S_2,S_3\right)^\top$, where $\,^\top$ denotes the transpose. As well, the Jones vector formalism was also developed, expressing the electric field in the circular basis with the time and space dependence removed, or expressed in a frame counter-rotating with angular frequency $\left(\mathbf{k}\cdot\mathbf{r}-\omega t\right)$, as \cite{Jones1941} \eq{ \mathbf{A}=\mathbf{E}\text{e}^{-\text{i}\left(\mathbf{k}\cdot\mathbf{r}-\omega t\right)}=\begin{pmatrix} a\\b \end{pmatrix}. \label{eq:quantized Jones vector} } Soon after, the coherency matrix description of polarization was developed \cite{Wolf1959}: \eq{ \boldsymbol{\Psi}=\mathbf{A}\mathbf{A}^\dagger=\begin{pmatrix}a^* a&b^*a\\a^*b&b^*b \end{pmatrix}. } where $\,^\dagger$ denotes the Hermitian conjugate. The Stokes parameters are related to the coherency matrix through the compact expression \eq{ S_\mu=\frac{1}{2}\mathbf{A}^\dagger \sigma_\mu\mathbf{A}=\frac{1}{2}\mathop{\mathrm{Tr}} \nolimits\left(\boldsymbol{\Psi}\sigma_\mu\right), } using the Pauli matrices $\sigma_\mu$ and letting Greek indices run from $0$ to $3$. This description in terms of the Pauli matrices is another reason for preferentially choosing the circular basis in our description of the electric field. We will see in our discussions of polarization from a quantum perspective the mathematical usefulness of the physically irrelevant factors of $2$ that we have incorporated into all of our expressions. 
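The relation $S_\mu=\frac{1}{2}\mathbf{A}^\dagger\sigma_\mu\mathbf{A}$ is straightforward to verify numerically. The following minimal sketch (our own illustration in Python; the helper name is ours and not drawn from any cited reference) computes the Stokes parameters of a Jones vector in the circular basis and confirms that any pure Jones vector satisfies Eq. \eqref{eq:Stokes sphere}:

\begin{verbatim}
import numpy as np

# Pauli matrices sigma_0, ..., sigma_3, in the circular (R, L) basis
sigma = np.array([[[1, 0], [0, 1]],
                  [[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def stokes(a, b):
    # S_mu = (1/2) A^dagger sigma_mu A for the Jones vector A = (a, b)^T
    A = np.array([a, b])
    return np.array([0.5 * (A.conj() @ s @ A).real for s in sigma])

print(stokes(1, 0))              # right-circular light: [0.5, 0, 0, 0.5]
S = stokes(0.6, 0.8j)            # an arbitrary pure Jones vector
print(np.isclose(S[0]**2, np.sum(S[1:]**2)))   # True: on the Poincare sphere
\end{verbatim}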
Far from any source, plane waves are a good approximation to any freely propagating monochromatic wave. The polarization formalism developed for plane waves, fortunately, extends to quasimonochromatic light. Quasimonochromatic waves with average frequencies $\bar{\omega}$ are now characterized by their two complex components at a particular position $z$ (again, in units of $\mathcal{E}_0$) \eq{ E_{a}(t)=a(t) \text{e}^{-\text{i} \bar{\omega} t}\quad \mathrm{and}\quad E_{b}(t)=b(t) \text{e}^{-\text{i} \bar{\omega} t}. } Here, in contradistinction to Eq. \eqref{eq:plane wave}, the amplitudes $a(t)$ and $b(t)$ depend on time, but they only vary on timescales that are long compared to $1/\bar{\omega}$. We can then define the coherency matrix and Stokes parameters as before by taking a time average that includes many oscillations of $\bar{\omega}$, which, under the standard assumptions of stationarity and ergodicity, is equivalent to taking an ensemble average $\langle\cdot\rangle$: \eq{ \boldsymbol{\Psi}=\expct{\mathbf{A}\mathbf{A}^\dagger}= \begin{pmatrix} \expct{a^* a } & \expct{b^*a } \\ \expct{a^* b} & \expct{b^*b } \end{pmatrix} } and \eq{ S_\mu= \frac{1}{2}\mathop{\mathrm{Tr}} \nolimits\left(\boldsymbol{\Psi} \sigma_\mu\right) = \frac{1}{2}\expct{\mathbf{A}^\dagger\sigma_\mu\mathbf{A} }. \label{eq:Stokes ensemble definition} } Now, the Stokes vectors no longer satisfy Eq. \eqref{eq:Stokes sphere} and are, thus, not constrained to the surface of the Poincar\'e sphere. Instead, the Stokes parameters define a vector pointing somewhere inside the Poincar\'e sphere of radius $S_0$. The orientation of the vector $\mathbf{S}$ has two angular coordinates as before. Now, the length of $\mathbf{S}$ defines a new parameter called the degree of polarization \cite{Wiener1930,BillingsLand1948,Walker1954,Wolf1954,Wolf1959} \eq{ p= \frac{\sqrt{\mathbf{S}\cdot\mathbf{S}}}{S_0} . } The degree of polarization ranges from $0$, for completely unpolarized light, to $1$, for perfectly polarized light. This can be inferred from the positivity of the coherency matrix, which is equivalent to the constraint \eq{ S_1^2+S_2^2+S_3^2 \leq S_0^2. \label{eq:stokes squared inequality} } When the degree of polarization is less than $1$, the electric field no longer traces out a perfect ellipse over time, with decreasing $p$ implying an increasingly erratic behaviour for this vector. We will see later that these properties are important to revisit with the quantum theory of polarization. \subsubsection{Polarized States} We saw before that plane waves have $p=1$, making them perfectly polarized. Any other beam of light with $p=1$ is again perfectly polarized, which can be determined in a number of different ways. To start, a beam of light whose electric field traces a closed ellipse over time is completely polarized. Next, a beam whose Stokes vector lies on the surface of the Poincar\'e sphere is fully polarized. This means that the polarization vector must obey $\mathbf{S}=S_0\mathbf{e}$ for some unit vector $\mathbf{e}$, such that, defining the ``vector'' of Stokes parameters $\mathcal{S}=\left(S_0,S_1,S_2,S_3\right)^\top$, \eq{ \mathcal{S}_{\mathrm{pol}}=S_0\begin{pmatrix} 1\\ \sin\Theta\cos\Phi\\\sin\Theta\sin\Phi\\\cos\Theta \end{pmatrix}. \label{eq:stokes pol} } Here, the parameters $\Theta$ and $\Phi$ correspond to the angular coordinates of $\mathbf{S}$ and thus to the direction of the Stokes vector on the Poincar\'e sphere. In terms of the coherency matrix, polarized light must be a rank-one projector.
This is because any single Jones vector $\mathbf{A}$ corresponds to completely polarized light and only through an ensemble average of nonparallel Jones vectors does the coherency matrix lose polarization. As such, perfectly polarized light has the determinant of $\boldsymbol{\Psi}$ vanish, through \eq{ \boldsymbol{\Psi}_{\mathrm{pol}}=2S_0\begin{pmatrix} \cos\frac{\Theta}{2}\\\sin\frac{\Theta}{2}\text{e}^{\text{i} \Phi} \end{pmatrix}\begin{pmatrix} \cos\frac{\Theta}{2}&\sin\frac{\Theta}{2}\text{e}^{-\text{i} \Phi} \end{pmatrix}, \label{eq:coherency pol} } where the angular coordinates $\Theta$ and $\Phi$ are the same as for the Stokes parameters. Translating between different states of light with perfect polarization is equivalent to changing the angular coordinates $\Theta$ and $\Phi$. Mathematically, this corresponds to rotating the polarization vector $\mathbf{S}$ and the coherency matrix $\boldsymbol{\Psi}$ or to changing the polarization ellipse, while these translations can be physically enacted by common devices such as waveplates. \subsubsection{Unpolarized States} Unpolarized light has $p=0$ and is, in some sense, the most different from plane waves. The trace of its electric field does not return to its starting position at the same angle, instead exhibiting chaotic behaviour to cover the entire disk. As with perfectly polarized light, unpolarized light can be determined in a variety of manners. The most important property of unpolarized light is that it is unchanged by physical devices that rotate the direction of polarization. For this to be true of a Stokes vector, it must have a vanishing polarization vector $\mathbf{S}=\mathbf{0}$: \eq{ \mathcal{S}_{\mathrm{unpol}}=S_0\begin{pmatrix} 1\\ 0\\0\\0 \end{pmatrix}. \label{eq:Stokes unpol} } Similarly, the coherency matrix must be unchanged by rotations, meaning that it must be proportional to the identity matrix: \eq{ \boldsymbol{\Psi}_{\mathrm{unpol}}=S_0\begin{pmatrix} 1&0\\0&1 \end{pmatrix}. \label{eq:coherency unpol} } The angular coordinates $\Theta$ and $\Phi$ are undefined for such light and no waveplate can affect the polarization properties of unpolarized light. \subsubsection{Decomposition of Light into Polarized and Unpolarized Elements} The classical polarization properties of quasimonochromatic light are completely described by four parameters, which are often organized into the polarization direction information, given by the direction of $\mathbf{S}$, the degree of polarization, given by $p$, and the total intensity, given by $S_0$. There is another useful interpretation of these parameters that follows from the linearity properties of independent electromagnetic waves. Superposing more than one wave leads to the summing of their electric field amplitudes. Then, ensemble averages over components from different fields all vanish when those fields are independent. For example, if the first field has amplitudes $a^{(1)}$ and $b^{(1)}$ and the second $a^{(2)}$ and $b^{(2)}$, independence of the two fields dictates that \eq{ \expct{a^{(1)*} a^{(2)}}=\expct{a^{(1)*}}\expct{ a^{(2)}}=0=\expct{a^{(1)*}}\expct{ b^{(2)}}=\expct{a^{(1)*} b^{(2)}}. 
} This directly implies that the polarization properties of the superposed fields are equal to the sums of their respective components from the two contributing fields: the Stokes parameters add as \eq{ S_\mu^{(1+2)}=S_\mu^{(1)}+S_\mu^{(2)} } and the coherency matrix takes the form \eq{ \boldsymbol{\Psi}^{(1+2)}=\boldsymbol{\Psi}^{(1)}+\boldsymbol{\Psi}^{(2)}.} This lets us imagine every partially polarized field ($0<p<1$) to have arisen from the \textit{incoherent} superposition of a field that is completely unpolarized with a field that is perfectly polarized, with the relative weight in the sum being given by the degree of polarization $p$. How is this constructed? We take convex combinations of the polarized and unpolarized components from Eqs. \eqref{eq:stokes pol}, \eqref{eq:coherency pol}, \eqref{eq:Stokes unpol}, and \eqref{eq:coherency unpol}. In terms of Stokes parameters, we find the general expression \eq{ \mathcal{S}=p\mathcal{S}_{\mathrm{pol}}+\left(1-p\right)\mathcal{S}_{\mathrm{unpol}}, \label{eq:Stokes decomposition} } and we find a similar decomposition in terms of the coherency matrix \eq{ \boldsymbol{\Psi}=p\boldsymbol{\Psi}_{\mathrm{pol}}+\left(1-p\right)\boldsymbol{\Psi}_{\mathrm{unpol}}. \label{eq:coherency decomposition} } This implies that, regardless of the physical origin of a beam of light, we can always consider it to have arisen from the probabilistic mixture of a perfectly polarized and a completely unpolarized beam of light, with the probability of the former being given by the degree of polarization. In all of the decompositions, there is one parameter governing the total intensity, two parameters governing the orientation of the perfectly polarized component, and a final parameter, the degree of polarization, responsible for the relative contributions of the polarized and unpolarized components. These degrees of freedom can be used to specify a wide range of quantum states underlying the same classical beams of light. Extra degrees of freedom beyond the four in the decompositions of Eqs. \eqref{eq:Stokes decomposition} and \eqref{eq:coherency decomposition} give rise to numerous novel polarization phenomena that can only be investigated quantum mechanically, a topic to which we will give much consideration. \subsection{Measuring Polarization} Determining the four Stokes parameters is tantamount to providing complete knowledge of a classical polarization state. As per Eq. \eqref{eq:Stokes classical}, these parameters can be found by measuring six different intensities and computing various differences and sums between them. Fortunately, polarizing beam splitters (PBSs) can be used to spatially separate two orthogonal polarization components of a beam of light to each have their intensities measured by a photodetector, while waveplates can be used to rotate a beam's polarization to properly arrange the two components of a beam to be measured. This led to the universal SU(2) gadget for measuring the Stokes parameters \cite{SimonMukunda1990}, depicted in Fig. \ref{fig:SU2 gadget}. \begin{figure*} \centering \includegraphics[width=\textwidth]{fig1} \caption{SU(2) gadget for measuring Stokes parameters. The polarizing beam splitter (PBS) separates the horizontal and vertical components of the light to have their intensities measured by separate photodetectors.
The two quarter waveplates (``$\lambda/4$'') and one half waveplate (``$\lambda/2$'') are sufficient to enact any polarization rotation, such that the PBS can split the light into any pair of orthogonal polarization components. The original scheme by \textcite{SimonMukunda1990} used two quarter waveplates and two half waveplates but modern schemes reduce this number \cite{Schillingetal2010}.} \label{fig:SU2 gadget} \end{figure*} The SU(2) gadget is useful for serially measuring all of the Stokes parameters, but what if one desires to measure all four parameters simultaneously? One trick is to first divide the light into three beams and then measure three different orthogonal pairs of polarization intensities as with a single SU(2) gadget \cite{Azzam1982,Azzam1985, AzzamDe2003}. Alternatively, one can employ an SU(2) gadget whose waveplates rotate continuously, such that one can recover the Stokes parameters from the ensuing interferograms \cite{Goldstein1992,Schaeferetal2007}. More intricacies are required if one wants to measure the quantum polarization properties of a beam of light, such as by appropriately splitting the light into eight beams \cite{AlodjantsArakelian1999,Alodjantsetal1999}. Instead of waveplates, liquid crystals can also be used in creating an SU(2) gadget. Then, instead of rotating waveplates to change the polarization direction, one can adjust the voltage applied to the liquid crystals to enact the same transformation \cite{Luetal2015}. This method is a promising new tool in the polarimeter's arsenal \cite{SuWang2021}. According to classical electrodynamics, all of the polarization components can be measured with arbitrary precision. This is in contradistinction with quantum polarization, where we will see explicitly that there exists a lower limit to the precision with which all of the Stokes parameters can be simultaneously measured. \subsection{Changes in Polarization} Polarimetry is useful because changes in polarization carry information about the objects being measured. There are a few related methods for arranging this information, which we explore in turn. \subsubsection{Jones Matrix Calculus} The most basic question to ask is how an electric field changes when it interacts with an object. When these transformations are linear, the most general possible result is \eq{ \mathbf{A}^{(\mathrm{in})}\to \mathbf{A}^{(\mathrm{out})}=\mathbf{J}\mathbf{A}^{(\mathrm{in})}\qquad \iff \qquad \mathbf{E}^{(\mathrm{in})}\to \mathbf{E}^{(\mathrm{out})}=\mathbf{J}\mathbf{E}^{(\mathrm{in})}, } where $\mathbf{J}$ is a $2\times 2$ complex matrix known as the Jones matrix. These transformations are deemed ``nondepolarizing'' polarization transformations because they do not change the degree of polarization of perfectly polarized states. A range of physical and linguistic arguments in favour of adopting the nomenclature ``deterministic'' instead of ``nondepolarizing'' is due to \textcite{Simon1990}, but we will see that this is at odds with the standard quantum definition of deterministic transformations. When the electric field undergoes a Jones matrix transformation, the coherency matrix transforms as \eq{ \boldsymbol{\Psi}^{(\mathrm{in})}\to \boldsymbol{\Psi}^{(\mathrm{out})}=\mathbf{J}\boldsymbol{\Psi}^{(\mathrm{in})}\mathbf{J}^\dagger. \label{eq:coherency pure jones} } This equation now holds even for quasimonochromatic incident light. Then, there are some linear transformations that cannot be described by a single Jones matrix transformation.
Instead, the most general linear transformation of the coherency matrix is \eq{ \boldsymbol{\Psi}^{(\mathrm{in})}\to \boldsymbol{\Psi}^{(\mathrm{out})}=\sum_i \mathbf{J}_i\boldsymbol{\Psi}^{(\mathrm{in})}\mathbf{J}_i^\dagger, \label{eq:coherency multiple Jones} } where each $2\times 2$ complex matrix $\mathbf{J}_i$ describes a deterministic transformation. These new transformations with more than a single $\mathbf{J}_i$ are termed ``depolarizing'' or ``nondeterministic'' and distinctions between the two types of transformations have been investigated thoroughly \cite{Kimetal1987,Simon1990,KuscerRibaric1959,AbhyankarFymat1969,FryKattawar1981,Barakat1981,Simon1982,GilBernabeu1985,Simon1987,Cloude1990,BrosseauBarakat1991,Kostinski1992,vanderMee1993,Brosseauetal1993,Kostinskietal1993,AndersonBarakat1994,Hovenier1994,CloudePottier1995,Raoetal1998I,Raoetal1998II,Gil2000,EspinosaLuna2007,Zakerietal2013}. What cannot be described by convex combinations of Jones matrix transformations are linear transformations of the form \eq{ \boldsymbol{\Psi}^{(\mathrm{in})}\to \boldsymbol{\Psi}^{(\mathrm{out})}=\sum_i \lambda_i \mathbf{J}_i\boldsymbol{\Psi}^{(\mathrm{in})}\mathbf{J}_i^\dagger \label{eq:coherency multiple Jones with weights} } with negative ``weights'' $\lambda_i$. The standard assumption is that only transformations with positive weights $\lambda_i>0$ are physically viable \cite{Cloude1986}, which is equivalent to requiring the transformations of the coherency matrix to be completely positive in the sense of Choi's theorem \cite{AhnertPayne2005,Aielloetal2007,Sudhaetal2008,GamelJames2011}. This allows us to interpret nondeterministic transformations as probabilistic mixtures of Jones matrix transformations. Transformations governed by a single Jones matrix can result from a number of physical processes. Standard electrodynamic theory dictates that different polarizations of light have different probabilities of being reflected off of a surface and, in terms of transmission, different polarization components experience different indices of refraction, leading to the development of a relative phase between the components \cite{Jackson1999}. In turn, different physical interactions lead to different transformations of the coherency matrix. By preparing a variety of states of light with known input coherency matrices and measuring the output coherency matrices after the physical interaction, one can identify the full Jones matrix description of the interaction and thereby classify the object with which the light interacted. \subsubsection{Mueller Matrix Calculus} Linear polarization transformations are naturally described by the Stokes vector transformation \eq{ \mathcal{S}^{(\mathrm{in})}\to \mathcal{S}^{(\mathrm{out})}=\mathbf{M}\mathcal{S}^{(\mathrm{in})}. \label{eq:Mueller definition Stokes} } Here, the $4\times 4$ real Mueller matrix $\mathbf{M}$ encodes all of the possible classical information that can be garnered from a linear polarization transformation. As with coherency matrices, by preparing a variety of input states of light with known Stokes parameters and measuring the output Stokes parameters after the physical interaction, one can identify the full Mueller matrix description of the interaction and thereby classify the object with which the light interacted \cite{Hauge1980}. 
For example, in the type of polarimetry alternatively known as transmission ellipsometry, one compares the measured $\mathbf{M}$ to various models of Mueller matrices for a medium through which the light has passed \cite{AzzamBashara1977}. Linearity lets one assume that all sets of input Stokes parameters transform according to an identical Mueller matrix when transmitted through the same physical system, such that all $16$ components of $\mathbf{M}$ can be estimated using the same set of probe beams without any \textit{a priori} knowledge of the object being measured. Mueller matrices corresponding to Jones matrix transformations can be calculated using \eq{ M_{\mu \nu}=\frac{1}{2}\mathop{\mathrm{Tr}} \nolimits\left(\sigma_\mu\mathbf{J} \sigma_\nu\mathbf{J}^\dagger\right). \label{eq:Mueller from pure Jones} } This compact correspondence is yet another motivation for singling out circular polarization in our basis elements. Similarly, Mueller matrices corresponding to nondeterministic polarization transformations can be calculated through \eq{ M_{\mu \nu}=\sum_i \lambda_i\frac{1}{2}\mathop{\mathrm{Tr}} \nolimits\left(\sigma_\mu\mathbf{J}_i \sigma_\nu\mathbf{J}_i^\dagger\right). \label{eq:Mueller from Jones with weights} } While it is clear that Mueller matrices can exist with $\lambda_i<0$, such as the matrix \eq{ \mathbf{M}=\begin{pmatrix} 1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&-\frac{1}{3} \end{pmatrix}, \label{eq:Mueller reflection} } which corresponds to a transformation of the coherency matrix with a negative weight \eq{ \boldsymbol{\Psi}^{(\mathrm{out})}=\frac{2}{3}\sigma_0\boldsymbol{\Psi}^{(\mathrm{in})}\sigma_0+\frac{1}{3}\sigma_1\boldsymbol{\Psi}^{(\mathrm{in})}\sigma_1+\frac{1}{3}\sigma_2\boldsymbol{\Psi}^{(\mathrm{in})}\sigma_2-\frac{1}{3}\sigma_3\boldsymbol{\Psi}^{(\mathrm{in})}\sigma_3, } it is contended that such transformations are not physically viable. Evidence to the contrary \cite{Ossikovskietal2008} must be investigated on a case-by-case basis. \subsubsection{Deterministic Transformations in terms of Scale Factor and SL$(2,\mathds{C})$} There are four free complex components of a generic Jones matrix. However, the global phase of $\mathbf{J}$ is irrelevant in its action on the coherency matrix, as it cancels in Eq. \eqref{eq:coherency pure jones}, and does not show up in Mueller matrices, as it cancels in Eq. \eqref{eq:Mueller from pure Jones}, so deterministic polarization transformations are determined by seven parameters. In this and the next subsection we present a number of equivalent ways to organize these parameters, each of which comes with its own physical insights. Removing an overall multiplicative factor from a Jones matrix can always lead to the decomposition \eq{ \mathbf{J}=t\pmb{J}, \label{eq:Jones scale factor} } where $t\leq 1$ is an overall attenuation factor and $\pmb{J}$ belongs to the group of complex $2\times 2$ matrices with unit determinant SL$(2,\mathds{C})$, except in limiting situations in which $\det \mathbf{J}=0$. The factor $t$ does not affect the degree or direction of polarization, merely lowering $S_0$ by a factor of $t^2$ and shrinking the radius of the Poincar\'e sphere accordingly.
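Both Eq. \eqref{eq:Mueller from pure Jones} and the decomposition of Eq. \eqref{eq:Jones scale factor} lend themselves to direct computation. The following sketch (our own toy illustration in Python; the function names are ours and not drawn from the cited references) builds the Mueller matrix of an arbitrary Jones matrix and extracts the scale factor $t$ and the SL$(2,\mathds{C})$ part:

\begin{verbatim}
import numpy as np

sigma = np.array([[[1, 0], [0, 1]],
                  [[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def mueller_from_jones(J):
    # M_{mu nu} = (1/2) Tr(sigma_mu J sigma_nu J^dagger); always real
    return np.array([[0.5 * np.trace(sigma[m] @ J @ sigma[n] @ J.conj().T).real
                      for n in range(4)] for m in range(4)])

def scale_and_sl2c(J):
    # Split J = t * J' with det J' = 1; assumes det J is nonzero
    t = np.sqrt(abs(np.linalg.det(J)))
    return t, J / t

J = 0.9 * np.array([[np.cos(0.3), -1j * np.sin(0.3)],
                    [-1j * np.sin(0.3), np.cos(0.3)]])  # attenuated rotation
t, Jp = scale_and_sl2c(J)
print(t, np.isclose(np.linalg.det(Jp), 1))              # 0.9, True
M = mueller_from_jones(J)
print(np.isclose(M[0, 0], t**2))                        # intensity scales as t^2
\end{verbatim}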
One can separate measurement of the scale factor from the other changes in polarization by first measuring the total intensity of the transmitted beam, which transforms as \eq{ S_0\to t^2 S_0 , } then separately determining the changes in orientation and length of the vector $\mathbf{S}$ normalized by the new intensity $t^2 S_0$ to determine the corresponding matrix $\pmb{J}$. After dealing with the scale factor $t$, all Jones matrices are fundamentally related to the Lorentz group. This is because transformation matrices $\pmb{J}$ maintain the quadratic form \cite{Barakat1963} \eq{ \mu^2 = S_0^2-\mathbf{S}\cdot\mathbf{S}, } as with four-vectors in special relativity. Deterministic polarization transformations can thus be thought of as Lorentz transformations, with some transformations corresponding to rotations of the vector $\mathbf{S}$ and others corresponding to boosts along a particular axis. To rotate the polarization vector $\mathbf{S}$ by an angle $\Theta$ about the $\mathbf{n}$-axis, we employ a Jones matrix of the form \eq{ \pmb{J}_{\mathrm{rot}}\left(\Theta,\mathbf{n}\right)=\exp\left(-\text{i}\Theta \mathbf{n}\cdot\boldsymbol{\sigma}/2\right). \label{eq:Jones rotation} } Equation \eqref{eq:Jones rotation} is an example of an SU(2) rotation generated by the vector of Pauli matrices $\boldsymbol{\sigma}=\left(\sigma_1,\sigma_2,\sigma_3\right)^\top$. When the electric field undergoes a rotation transformation $\mathbf{A}\to \pmb{J}_{\mathrm{rot}}\mathbf{A}$, its polarization properties correspondingly rotate in the Poincar\'e sphere. This is described by a Mueller matrix of the form \eq{ \mathbf{M}_{\mathrm{rot}}\left(\Theta,\mathbf{n}\right) =\begin{pmatrix} 1&\mathbf{0}^\top\\ \mathbf{0}&\mathbf{R}\left(\Theta,\mathbf{n}\right) \end{pmatrix}, \label{eq:Mueller rotation} } where $\mathbf{R}\left(\Theta,\mathbf{n}\right)$ is a standard $3\times 3$ rotation matrix that can be parametrized by a rotation angle $\Theta$ and axis of rotation $\mathbf{n}$, or equivalently by three Euler angles or another triad of parameters. We explicitly give one parametrization of $\mathbf{R}$ through the Rodrigues rotation formula \eq{ R_{i j}\left(\Theta,\mathbf{n}\right)=\delta_{i j}\cos\Theta-\sum_{k=1}^3\epsilon_{i j k} n_k\sin\Theta+n_i n_j\left(1-\cos\Theta\right), } where $\epsilon_{i j k}$ is the Levi-Civita tensor. Rotations leave the $S_0$ component unchanged, which accords with waveplates not changing the intensity of an incident beam of light, and maintain the length of $\mathbf{S}$, as such transformations do not affect light's degree of polarization. These properties carry through to the quantum description of polarization. We remark that these rotations are polarization rotations, in the sense that they rotate the polarization vector $\mathbf{S}$ in the Poincar\'e sphere and not in three-dimensional space; the effect of rotating a beam of light in three dimensions can be found in various works by~\textcite{BialynickiBirula1996,BialynickiBirulaBialynickiBirula2020}. Similarly to the rotations expressed by Eq. \eqref{eq:Jones rotation}, to boost the Stokes vector along the $\mathbf{n}$-axis by a rapidity $\eta$, we employ the Jones matrix \eq{ \pmb{J}_{\mathrm{boost}}\left(\eta,\mathbf{n}\right)=\exp\left(\eta \mathbf{n}\cdot\boldsymbol{\sigma}/2\right) . \label{eq:Jones boost} } Eq. \eqref{eq:Jones boost} differs from Eq.
\eqref{eq:Jones rotation} by the crucial difference that the argument of the exponential is now real, so the Lorentz boost transformations can be thought of as rotations by an imaginary phase (or as \textit{hyperbolic} rotations). Readers versed in special relativity may be more familiar with the $4\times 4$ representation of the Lorentz group, which is directly provided by Mueller matrices. Using Eq. \eqref{eq:Mueller from pure Jones}, we can immediately match the various types of transformations with the more well-known representation. For example, a boost by ``rapidity'' $\eta$ along the $\mathbf{e}_3$ axis\footnote{We refer to three unit vectors by $\mathbf{e}_1=\left(1,0,0\right)^\top$, $\mathbf{e}_2=\left(0,1,0\right)^\top$, and $\mathbf{e}_3=\left(0,0,1\right)^\top$ so as to distinguish between polarization rotations on the Poincar\'e sphere and rotations in physical space spanned by $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z}$.} corresponds to the Mueller matrix \eq{ \mathbf{M}_{\mathrm{boost}}\left(\eta,\mathbf{e}_3\right)=\exp\left[\eta \begin{pmatrix} 0 &0&0&1\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0 \end{pmatrix}\right]=\begin{pmatrix} \cosh\eta &0&0&\sinh\eta\\ 0&1&0&0\\ 0&0&1&0\\ \sinh\eta&0&0&\cosh\eta \end{pmatrix}, } which is exactly the transformation matrix for ``boosts'' between reference frames with constant relative velocity. Technically, Jones matrices correspond to the ``proper'' Lorentz group SO$^{+}(1,3)$, which removes the possibility of ``spatial'' reflections of the polarization vector $\mathbf{S}$ such as in Eq. \eqref{eq:Mueller reflection}. By disallowing spatial reflections, the proper Lorentz group is composed only of rotations and boosts, each with three real parameters corresponding to the axis and strength of the transformations. This lets us arrange the seven free parameters of deterministic polarization transformations into: \begin{itemize} \item The intensity reduction factor $t$. \item The restricted Lorentz transformation $\pmb{J}$: \begin{itemize} \item Three parameters describing the rotation $\pmb{J}_{\mathrm{rot}}$. \item Three parameters describing the boost $\pmb{J}_{\mathrm{boost}}$. \end{itemize} \end{itemize} Incidentally, the matrix polar decomposition guarantees that the Lorentz transformations can always be decomposed into a nonunique product of a single rotation and a single boost, as \cite{LuChipman1996} \eq{ \pmb{J}=\pmb{J}_{\mathrm{rot}}\left(\Theta,\mathbf{n}_1\right)\pmb{J}_{\mathrm{boost}}\left(\eta,\mathbf{n}_2\right)= \pmb{J}_{\mathrm{boost}}\left(\eta^\prime,\mathbf{n}_2^\prime\right)\pmb{J}_{\mathrm{rot}}\left(\Theta,\mathbf{n}_1\right). \label{eq:Jones polar} } A deterministic polarization transformation, described by a single Jones matrix, can thus always be interpreted as resulting from an intensity reduction, a rotation, and a boost applied sequentially, in any order, to an incident beam of light's polarization degrees of freedom. We note that the composition of two rotations is another rotation, while the same does not hold true for two boosts. Instead, the product of two boosts becomes the composition of a boost and a rotation.
This extra rotation is present in many physical situations \cite{Malykin2006,Tudor2018}, such as through the Thomas precession in special relativity \cite{Thomas1926} and the Wigner rotation in mathematical physics \cite{Wigner1939}.\footnote{The effect seems to have been documented before either Thomas or Wigner discovered their eponymous effects \cite{Silberstein1914}.} This effect has indeed been investigated for light's polarization degrees of freedom \cite{Vigoureux1992,VigoureuxGrossel1993,MonzonSanchezSoto1999a,MonzonSanchezSoto1999b,MonzonSanchezSoto2001}, revealing a fascinating mathematical equivalence between indices of refraction for polarized light traveling through planar media and relative velocities of inertial frames in special relativity \cite{Vigoureux1992}. \subsubsection{Deterministic Transformations in terms of Rotation and Diattenuation} While the rotation transformations seen above are readily enacted by waveplates and liquid crystals, it is not immediately obvious to what laboratory equipment a Lorentz boost corresponds. Fortunately, the three boost parameters and the intensity reduction factor can be alternatively arranged into four ``diattenuation'' parameters, which are so named because diattenuations attenuate each of the field's polarization components by a different amount. The simplest example of a diattenuation is when the two circular components of polarization are each diminished by a different amount: \eq{ \mathbf{J}_{\mathrm{diatten}}\left(q,r,\mathbf{e}_3\right)\begin{pmatrix} a\\b \end{pmatrix}=\begin{pmatrix} a\sqrt{q}\\b\sqrt{r} \end{pmatrix}. \label{eq:Jones diattenuation} } This transformation can arise from, for example, light reflecting off of a surface where there is a difference in reflectivity for electric fields that are parallel versus perpendicular to the plane of reflection \cite{Jackson1999}. We can compare this transformation with Eqs. \eqref{eq:Jones scale factor} and \eqref{eq:Jones boost} to reveal the correspondences \eq{ q&=t^2\text{e}^{\eta}\\ r&=t^2\text{e}^{-\eta} . \label{eq:diattenuation parameters from boost etc} } Since attenuations only serve to decrease the intensity of light in any mode, we can use the requirements $q,r\leq 1$ to provide physical constraints on the boost and overall transmission parameters from Eqs. \eqref{eq:Jones scale factor} and \eqref{eq:Jones boost}. For example, a polarizer that transmits only left-handed circularly polarized light but not its right-handed counterpart is described by the pair $\left(q,r\right)=\left(0,1\right)$; this then implies the limit of an infinite boost $\eta\to-\infty$ along the $\mathbf{e}_3$ axis, subject to the constraint $t=\exp\left(\eta/2\right)$. We can also determine the Mueller matrices for arbitrary diattenuations using Eq. \eqref{eq:Mueller from pure Jones}. The above example of a boost along the $\mathbf{e}_3$ axis, with Jones matrix given by Eq. \eqref{eq:Jones diattenuation}, corresponds to the Mueller matrix \eq{ \mathbf{M}_{\mathrm{diatten}}\left(q,r,\pmb{e}_3\right)= \begin{pmatrix} \frac{q+r}{2} &0&0& \frac{q-r}{2}\\ 0&\sqrt{qr}&0&0\\ 0&0&\sqrt{qr}&0\\ \frac{q-r}{2}&0&0&\frac{q+r}{2} \end{pmatrix}. \label{eq:Mueller diattenuation} } Following a diattenuation, the total intensity $S_0$ decreases unless both $q$ and $r$ are equal to unity, and all four Stokes parameters may decrease in magnitude.
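The correspondence between Eqs. \eqref{eq:Jones diattenuation} and \eqref{eq:Mueller diattenuation} can likewise be checked numerically; the sketch below (again our own illustration, with names of our choosing) feeds the diattenuation Jones matrix through the construction of Eq. \eqref{eq:Mueller from pure Jones}:

\begin{verbatim}
import numpy as np

sigma = np.array([[[1, 0], [0, 1]],
                  [[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])

def mueller_from_jones(J):
    # M_{mu nu} = (1/2) Tr(sigma_mu J sigma_nu J^dagger)
    return np.array([[0.5 * np.trace(sigma[m] @ J @ sigma[n] @ J.conj().T).real
                      for n in range(4)] for m in range(4)])

q, r = 0.8, 0.3
J_diatten = np.diag([np.sqrt(q), np.sqrt(r)]).astype(complex)
M_expected = np.array([[(q + r) / 2, 0, 0, (q - r) / 2],
                       [0, np.sqrt(q * r), 0, 0],
                       [0, 0, np.sqrt(q * r), 0],
                       [(q - r) / 2, 0, 0, (q + r) / 2]])
print(np.allclose(mueller_from_jones(J_diatten), M_expected))   # True
\end{verbatim}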
In the example of extreme attenuation corresponding to a polarizer, two components of $\mathbf{S}$ are completely nullified and the intensity of the retained component depends only on the original intensity of that component alone. The two polarizers to which our Eqs. \eqref{eq:Jones diattenuation} and \eqref{eq:Mueller diattenuation} may refer are \eq{ \pmb{M}_{\mathrm{diatten}}\left(1,0,\mathbf{e}_3\right)= \begin{pmatrix} \frac{1}{2} &0&0& \frac{1}{2}\\ 0&0&0&0\\ 0&0&0&0\\ \frac{1}{2}&0&0&\frac{1}{2} \end{pmatrix}\qquad \iff \qquad I_{\mathrm{R}}+I_{\mathrm{L}}\to I_{\mathrm{R}}} and \eq{ \pmb{M}_{\mathrm{diatten}}\left(0,1,\mathbf{e}_3\right)= \begin{pmatrix} \frac{1}{2} &0&0& -\frac{1}{2}\\ 0&0&0&0\\ 0&0&0&0\\ -\frac{1}{2}&0&0&\frac{1}{2} \end{pmatrix}\qquad \iff \qquad I_{\mathrm{R}}+I_{\mathrm{L}}\to I_{\mathrm{L}}. \label{eq:Mueller for polarizers} } The two other free parameters, in addition to $q$ and $r$, in a general diattenuation are the two angular coordinates of $\mathbf{n}$ dictating which two orthogonal modes are being attenuated. A general diattenuation is physically equivalent to first rotating the polarization such that the modes to be attenuated are $\mathrm{R}$ and $\mathrm{L}$, next applying the diattenuation given by Eqs. \eqref{eq:Jones diattenuation} and \eqref{eq:Mueller diattenuation}, and finally rotating the light back to its original polarization orientation. Even though a general rotation depends on three parameters, the rotations here depend only on the two parameters required to enact the rotation \eq{ \mathbf{R}\left(\mathbf{n}\to\mathbf{e}_3\right)\mathbf{n}=\mathbf{e}_3. } As such, more than one rotation suffices to describe this situation. Using any of these rotations, which are all orthogonal in the sense that $\mathbf{R}\left(\mathbf{n}\to\mathbf{e}_3\right)^\top=\mathbf{R}\left(\mathbf{n}\to\mathbf{e}_3\right)^{-1}=\mathbf{R}\left(\mathbf{e}_3\to\mathbf{n}\right)$, we can write the most general diattenuation transformation as \eq{ \mathbf{M}_{\mathrm{diatten}}\left(q,r,\mathbf{n}\right)&=\mathbf{M}_{\mathrm{rot}}\left(\mathbf{n}\to\mathbf{e}_3\right)^{-1} \mathbf{M}_{\mathrm{diatten}}\left(q,r,\mathbf{e}_3\right) \mathbf{M}_{\mathrm{rot}}\left(\mathbf{n}\to\mathbf{e}_3\right)\\&= \begin{pmatrix} 1&\mathbf{0}^\top\\ \mathbf{0}&\mathbf{R}\left(\mathbf{e}_3\to\mathbf{n}\right) \end{pmatrix} \begin{pmatrix} \frac{q+r}{2} &0&0& \frac{q-r}{2}\\ 0&\sqrt{qr}&0&0\\ 0&0&\sqrt{qr}&0\\ \frac{q-r}{2}&0&0&\frac{q+r}{2} \end{pmatrix} \begin{pmatrix} 1&\mathbf{0}^\top\\ \mathbf{0}&\mathbf{R}\left(\mathbf{n}\to\mathbf{e}_3\right) \end{pmatrix} . \label{eq:Mueller diattenuation general} } Combining diattenuations with the matrix polar decomposition, any deterministic polarization transformation given by a single Jones matrix can be composed from a single rotation and a single diattenuation transformation, in either order \cite{LuChipman1996}. As the set of rotations forms a group under multiplication and inversion, we conclude that arbitrary deterministic polarization transformations can be obtained from waveplates and a single diattenuating element. \subsubsection{Nondeterministic Transformations} Deterministic transformations account for seven of the $16$ degrees of freedom of a general polarization transformation contained by the Mueller matrices of Eq. \eqref{eq:Mueller definition Stokes}. The remaining nine parameters must arise from transformations requiring more than a single Jones matrix, such as through the convex combination in Eq.
\eqref{eq:coherency multiple Jones}, which is what gives rise to the ``nondeterministic'' nomenclature. Such transformations are also denoted as depolarizing because they almost always depolarize incident light that is perfectly polarized (unless they simply do not effect any transformation on the latter). Nondeterministic polarization transformations are usually ascribed to the inability to experimentally discriminate between physical processes that each enact a different deterministic polarization transformation, as opposed to arising from fundamentally indeterminate laws of nature \cite{Gil2007}. For example, when a detector absorbs photons with a range of frequencies that are present in a quasimonochromatic beam of light and each frequency experiences a different polarization rotation after travelling through a birefringent crystal, the resultant polarization state must be taken to be the convex combination of the polarization states of the various frequency components, each with their individual polarization rotations \cite{LuLoeber1975,Loeber1982,Dlugnikov1984,Chakraborty1986}. Similarly, light whose polarization is rotated more quickly than can be resolved by a detector will have its degree of polarization reduced accordingly \cite{Billings1951}. Finally, light scattering off of optically active media has its polarization change depending on the direction of scattering \cite{Chakraborty1986}, so a detector receiving light from a nonzero range of solid angles leads to convex combinations of the deterministic processes ascribed to each scattering angle. These examples are well summarized by the assertion that all depolarization processes can, in principle, be reversed but cannot be reversed in practice \cite{LuLoeber1975}. The simplest example of a nondeterministic polarization transformation is that of an ideal depolarizer, whose Mueller matrix reads \eq{ \mathbf{M}_{\mathrm{depol}}=\begin{pmatrix} 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0 \end{pmatrix}. } An ideal depolarizer maintains $S_0$, leaving the total intensity and total energy of the light unchanged, while enacting $p\to 0$, regardless of the input state. Equivalently, an ideal depolarizer transforms the perfectly polarized component of the beam represented in Eqs. \eqref{eq:Stokes decomposition} and \eqref{eq:coherency decomposition} into a completely unpolarized component. The polarization scrambling methods mentioned are made to simulate ideal depolarizers \cite{LuLoeber1975}, with work continuing to be done in this field of depolarizer design \cite{Zhangetal2007,Geetal2012,ShahamEisenberg2011,Marcetal2019,Krohetal2021,Marcoetal2021}. Nondeterministic polarization transformations do not affect the total intensity $S_0$, only altering the polarization vector $\mathbf{S}$. The nine remaining parameters of general nondeterministic (``depolarizing'') Mueller matrices take the form of a $3\times 3$ symmetric real matrix $\mathbf{m}$ and a real vector $\mathbf{p}$ with magnitude less than or equal to unity \cite{LuChipman1996}: \eq{ \mathbf{M}_{\mathrm{depol}}=\begin{pmatrix} 1&\mathbf{0}^\top\\ \mathbf{p} &\mathbf{m} \end{pmatrix}. } The assumption that only positive weights feature in Eqs. \eqref{eq:coherency multiple Jones with weights} and \eqref{eq:Mueller from Jones with weights} restricts the matrix $\mathbf{m}$ to being positive \cite{Gil2007}, which precludes transformations of the form of Eq. \eqref{eq:Mueller reflection}.
Under some physical assumptions, such matrices can always be decomposed into the product of a single diattenuation, a single rotation, and a single depolarizing transformation with $\mathbf{p}=0$, through \cite{Gil2007} \eq{ \mathbf{M}_{\mathrm{depol}}=\mathbf{M}_{\mathrm{rot}}\left(\Theta,\mathbf{n}\right) \mathbf{M}_{\mathrm{diatten}}\left(q,r,\mathbf{n^\prime}\right) \begin{pmatrix} 1&\mathbf{0}^\top\\ \mathbf{0} &\mathbf{m^\prime} \end{pmatrix}. } How can we achieve an ideal depolarizer using this formalism? We note that, due to Eq. \eqref{eq:Mueller from Jones with weights}, nondeterministic Mueller matrices come from probabilistic mixtures of deterministic Mueller matrices \eq{ \boldsymbol{\Psi}\to \sum_i \lambda_i \mathbf{J}_i\boldsymbol{\Psi} \mathbf{J}_i^\dagger \qquad \Rightarrow \qquad \mathbf{M}=\sum_i \lambda_i \mathbf{M}_i . } If each of the deterministic transformations in the combination corresponds to a rotation by angle $\Theta_i$ about axis $\mathbf{n}_i$, we achieve a depolarizing transformation with $\mathbf{p}=0$ and $\mathbf{m}=\sum_i \lambda_i \mathbf{R}\left(\Theta_i,\mathbf{n}_i\right)$. A sufficient number of rotations in a sufficient number of directions leads to $\mathbf{m}=\mathbf{0}\mathbf{0}^\top$. We further note that an \textit{arbitrary} symmetric matrix $\mathbf{m}$ can be obtained from a convex combination of sufficiently many rotation matrices with appropriate weights, provided that those weights are allowed to be negative \cite{Goldberg2020}. Otherwise, only positive matrices $\mathbf{m}$ can be generated. We collect two key facts before proceeding. First, any polarization transformation can be realized by the sequential application of a rotation, a diattenuation, and a depolarization, in any order \cite{LuChipman1996,Giletal2013}. Next, all such transformations can be realized via convex combinations of four or fewer deterministic transformations \cite{Cloude1986}. We thereby possess a complete description of all polarization transformations from a classical perspective. \subsubsection{Physical Constraints on Polarization Changes} \label{sec:classical constraints} Many works have investigated physical constraints on the viability of various Jones and Mueller matrices \cite{FryKattawar1981, Simon1982, GilBernabeu1985, Cloude1990,vanderMee1993, Hovenier1994, Gil2000, Zakerietal2013, Cloude1986, LuChipman1996, Hovenieretal1986, Barakat1987, GivensKostinski1993, Nagirner1993, Simonetal2010, vanZyletal2011, Giletal2013}. We will not review all of them here, instead focusing on a few constraints that become relevant in the quantum theory of polarimetry. Foremost, polarization changes are only considered to be physically viable if they are composed of probabilistic mixtures of pure Jones transformations, equivalent to Eqs. \eqref{eq:coherency multiple Jones with weights} and \eqref{eq:Mueller from Jones with weights} restricted to $\lambda_i>0$. This assertion can be traced to \textcite{Cloude1986}, which we will quote directly due to the challenge of obtaining this reference: ``What are the weighting coefficients [$\lambda_i$] and how are they determined?'' They proceed to answer this question ``by considering a new formulation of the scattering problem'' based on rearranging the elements of a rescaled Jones matrix $\pmb{J}$ into a $4\times 1$ vector $\mathcal{K}$. Then, a general Mueller matrix can be formed by taking linear combinations of the outer products $\mathbf{T}=\sum_i \lambda_i\mathcal{K}_i\mathcal{K}_i^\dagger$, eventually giving rise to Eq.
\eqref{eq:Mueller from Jones with weights} with the eigenvalues of $\mathbf{T}$, $\lambda_i$, acting as the weights up to unitary transformations (``plane rotations in a 6 dimensional real target space''). This leads to the assertion that, because $\mathbf{T}$ ``is a complex correlation matrix [with] a much clearer physical interpretation than the Mueller matrix,'' it follows that ``the eigenvalues are positive real and each eigenvector corresponds . . . to a single scattering matrix.'' Physically, no reason is provided to prohibit a single Mueller matrix of the form of Eq. \eqref{eq:Mueller reflection} from arising on its own in nature other than the, perhaps circular, perhaps physically reasonable, assertion that all Mueller matrices arise from probabilistic mixtures of deterministic transformations associated with single Jones matrices. This problem was revisited in a flurry of works motivated by quantum theory \cite{AhnertPayne2005,Aielloetal2007,Sudhaetal2008,Simonetal2010,GamelJames2011}, and we have conjectured that quantum theory truly underlies the necessity of positive weights in Eqs. \eqref{eq:coherency multiple Jones with weights} and \eqref{eq:Mueller from Jones with weights} \cite{Goldberg2020}. It was first noted that the transformations responsible for transforming coherency matrices look like completely positive quantum channels, where the Jones matrices of Eq. \eqref{eq:coherency multiple Jones} can be thought of as Kraus operators \cite{AhnertPayne2005,Aielloetal2007,Sudhaetal2008,GamelJames2011}. This hints at the well-known connection with the quantum theory of polarization, viz., that single photons have density matrices described by their coherency matrices $\boldsymbol{\Psi}$; the transformations of classical polarization states are akin to transformations of single-photon polarization states. Then, all physically viable transformations that take single-photon polarization states to single-photon polarization states must be completely positive, immediately restricting the weights in Eqs. \eqref{eq:coherency multiple Jones with weights} and \eqref{eq:Mueller from Jones with weights} to be positive. We learn that, if all light were to be described by the behaviours of single photons (which precludes studies of attenuation), quantum theory would enforce the positivity assertion of classical polarization transformations. Then, inspired by a phenomenon known as ``nonquantum entanglement,'' \textcite{Simonetal2010} showed that the only way for an extended version of Eq. \eqref{eq:stokes squared inequality} to hold is through the positivity assertion of classical polarization transformations. In the extended version, the Stokes parameters are extended to two-point correlation functions, wherein the electric fields are taken to be at different points in the $x$-$y$ plane [cf. Eq. \eqref{eq:Stokes ensemble definition}]: \eq{ S_\mu\left(x,y;x^\prime,y^\prime\right)=\frac{1}{2}\expct{\mathbf{E}\left(x^\prime,y^\prime,z;t\right)^\dagger\sigma_\mu \mathbf{E}\left(x,y,z;t\right)}. \label{eq:two point Stokes parameters} } In order to act linearly on these extended Stokes parameters, Mueller matrices must be made from positive-weight combinations of deterministic elements. Of course, this is asking more of Mueller matrices than the standard definition in Eq. \eqref{eq:Mueller definition Stokes}, so it remains to be proven whether the necessity of positive weights in Eqs.
\eqref{eq:coherency multiple Jones with weights} and \eqref{eq:Mueller from Jones with weights} follows from any deeper physical condition. The most natural constraint on Mueller and Jones matrices is that they take physically viable polarization states to physically viable polarization states. This means that all linear polarization transformations must ensure their output states satisfy the constraint of Eq. \eqref{eq:stokes squared inequality} regardless of the input state. All pure Jones matrices automatically satisfy this constraint, as do convex combinations thereof, so we learn that the 16 parameters of a polarization transformation are not independently free, instead satisfying \eq{\mathrm{Tr}\left(\mathbf{M}\mathbf{M}^\top\right)\leq 4 M_{0,0}^2.} Finally, we mention the transmittance and reverse transmittance conditions for Mueller matrices \cite{Gil2000}. Under the assumption that a single deterministic transformation does not increase the intensity of an incident beam, it can be verified that any Mueller matrix arising from a single deterministic transformation satisfies \eq{ M_{0,0}+\sqrt{M_{0,1}^2+M_{0,2}^2+M_{0,3}^2}=M_{0,0}+\sqrt{M_{1,0}^2+M_{2,0}^2+M_{3,0}^2}\leq 1\,\, \iff\,\, \mathrm{Tr}\left(\mathbf{M}\mathbf{M}^\top\right)= 4 M_{0,0}^2. } Then, any Mueller matrix arising from a convex combination of deterministic elements must satisfy \eq{ &M_{0,0}+\sqrt{M_{0,1}^2+M_{0,2}^2+M_{0,3}^2}\leq 1\qquad\mathrm{and}\qquad M_{0,0}+\sqrt{M_{1,0}^2+M_{2,0}^2+M_{3,0}^2}\leq 1\\ &\qquad\iff\qquad \lambda_i>0\mathrm{\,in\, Eqs.\, \eqref{eq:coherency multiple Jones with weights}\, and\, \eqref{eq:Mueller from Jones with weights}}. } These conditions preclude the possibility of transformations such as lossless polarizers, which would convert all input light of a given polarization component into light with the opposite component, without any loss of intensity and regardless of the input beam, such as \eq{ \mathbf{M}_{\mathrm{lossless\,polarizer}}=\begin{pmatrix} 1&0&0&0\\ 0&0&0&0\\ 0&0&0&0\\ 1&0&0&0 \end{pmatrix}. \label{eq:lossless polarizer Mueller} } It turns out that we can concoct polarization transformations that satisfy all of the requisite conditions, even that of positive weights, while breaking the transmittance conditions. For example, we can consider the pair of pure Jones matrices \eq{ \mathbf{J}_1=\begin{pmatrix} 1&0\\0&0 \end{pmatrix}\qquad\mathrm{and}\qquad \mathbf{J}_2=\begin{pmatrix} 0&1\\0&0 \end{pmatrix}, \label{eq:lossless polarizer Jones} } which directly lead to the Mueller matrix of Eq. \eqref{eq:lossless polarizer Mueller} when added with $\lambda_1=\lambda_2=1$ as in Eq. \eqref{eq:coherency multiple Jones} (similarly, if we employ $\mathbf{J}_2^\top$ instead of $\mathbf{J}_2$, we find $\mathbf{M}_{\mathrm{lossless\,polarizer}}^\top$, which disobeys the forward instead of the reverse transmittance condition). On the contrary, if we consider adding them with $\lambda_1=\lambda_2=1/2$ as in Eq. \eqref{eq:coherency multiple Jones with weights}, we find the resulting Mueller matrix to be $\mathbf{M}=\mathbf{M}_{\mathrm{lossless\,polarizer}}/2$, which indeed satisfies the reverse transmittance condition. How should we rectify this situation? Requiring the coefficients $\lambda_i$ to sum to unity would suffice, because the transmittance conditions assume we cannot rescale these Jones matrices by a factor greater than unity.
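As an illustrative aside, this pathology is easy to reproduce numerically. The following minimal NumPy sketch is our own construction: it assumes the correspondence $M_{\mu\nu}=\mathrm{Tr}\left(\sigma_\mu\mathbf{J}\sigma_\nu\mathbf{J}^\dagger\right)/2$ between a single Jones matrix and its deterministic Mueller matrix (consistent with the Stokes conventions used here), and the helper name \texttt{mueller\_from\_jones} is ours. It builds Eq. \eqref{eq:lossless polarizer Mueller} from the pair in Eq. \eqref{eq:lossless polarizer Jones} with $\lambda_1=\lambda_2=1$ and exhibits the broken reverse transmittance condition.

\begin{verbatim}
import numpy as np

sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def mueller_from_jones(J):
    # Deterministic Mueller matrix of one Jones matrix J, assuming the
    # convention M_{mu,nu} = Tr(sigma_mu J sigma_nu J^dagger)/2.
    return np.array([[np.trace(s1 @ J @ s2 @ J.conj().T).real / 2
                      for s2 in sigma] for s1 in sigma])

J1 = np.array([[1, 0], [0, 0]], dtype=complex)
J2 = np.array([[0, 1], [0, 0]], dtype=complex)

M = mueller_from_jones(J1) + mueller_from_jones(J2)  # weights 1 and 1
print(M.round(12))   # rows (1,0,0,0), zeros, zeros, (1,0,0,0)

# Reverse transmittance: M_00 + sqrt(M_10^2+M_20^2+M_30^2) = 2 > 1
print(M[0, 0] + np.linalg.norm(M[1:, 0]))
\end{verbatim}

Halving both weights halves every entry of the output matrix and restores the condition, in accordance with the discussion above.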
In contradistinction, the trace-preserving nature of quantum channels only requires Kraus operators to satisfy $\mathbf{J}_1^\dagger \mathbf{J}_1+\mathbf{J}_2^\dagger \mathbf{J}_2=\mathds{1}$, which is indeed satisfied here with $\lambda_1=\lambda_2=1$, so there seems to exist a quantum channel that would act like a lossless polarizer on a single photon. Further inspection reveals that $\mathbf{J}_1$ ensures that all right-handed circularly polarized light retains its polarization and $\mathbf{J}_2$ converts all left-handed circularly polarized light to its right-handed counterpart. This can only be achieved in a ``linear'' manner by a device that measures the total incident intensity, then prepares a right-handed circularly polarized state with that same intensity to be output: a highly nonlinear device posing as a linear one. Because this must hold regardless of the input Stokes parameters, it must be capable of measuring and generating output states of light with arbitrarily large intensities. These considerations prohibit such a device from being physically viable and reinforce the transmittance conditions; we will return to this consideration when discussing quantum polarization transformations. \section{Quantum Polarization} \label{sec:quantum polarization} Maxwell's equations equally apply to quantized light fields. In the quantum theory, the field amplitudes $a$ and $b$ from Eq. \eqref{eq:plane wave} get promoted to bosonic operators $\hat{a}$ and $\hat{b}$ that annihilate right- and left-handed circularly polarized photons: \eq{ \hat{\mathbf{E}}=\mathcal{E}_0\left(\hat{a}\mathbf{e}_a+\hat{b}\mathbf{e}_b\right)\text{e}^{\text{i}\left(kz-\omega t\right)}. } These operators satisfy the standard bosonic commutation relations \eq{ \left[\hat{a},\hat{a}^\dagger\vphantom{a}\right]=\left[\hat{b},\hat{b}^\dagger\vphantom{a}\right]=1\qquad \mathrm{and}\qquad\left[\hat{a},\hat{b}^\dagger\vphantom{a}\right]=\left[\hat{b},\hat{a}^\dagger\vphantom{a}\right]=0 } and can be used to create states with definite photon numbers in a single spatial mode from the two-mode vacuum $\ket{\mathrm{vac}}$ via \eq{ \ket{m,n}\equiv \ket{m}_{\mathrm{R}}\otimes\ket{n}_{\mathrm{L}}=\frac{\hat{a}^\dagger\vphantom{a}^m\hat{b}^\dagger\vphantom{a}^n}{\sqrt{m!n!}}\ket{\mathrm{vac}}. } The intensity of the field is given by the average number of excitations $\langle \hat{a}^\dagger\vphantom{a}\hat{a}+\hat{b}^\dagger\vphantom{a}\hat{b}\rangle$ and similarly for the intensities within each polarization component. Following the above quantization rule, we can define the Stokes operators as \eq{ \hat{S}_0&=\frac{\hat{n}_{\mathrm{H}}+\hat{n}_{\mathrm{V}}}{2}=\frac{\hat{n}_{\mathrm{D}}+\hat{n}_{\mathrm{A}}}{2}=\frac{\hat{n}_{\mathrm{R}}+\hat{n}_{\mathrm{L}}}{2}=\frac{\hat{a}^\dagger\vphantom{a}\hat{a}+\hat{b}^\dagger\vphantom{a}\hat{b}}{2},\\ \hat{S}_1&=\frac{\hat{n}_{\mathrm{H}}-\hat{n}_{\mathrm{V}}}{2}=\frac{\hat{a}^\dagger\vphantom{a}\hat{b}+\hat{b}^\dagger\vphantom{a}\hat{a}}{2},\\ \hat{S}_2&=\frac{\hat{n}_{\mathrm{D}}-\hat{n}_{\mathrm{A}}}{2}=-\text{i}\frac{\hat{a}^\dagger\vphantom{a}\hat{b}-\hat{b}^\dagger\vphantom{a}\hat{a}}{2},\\ \hat{S}_3&=\frac{\hat{n}_{\mathrm{R}}-\hat{n}_{\mathrm{L}}}{2}=\frac{\hat{a}^\dagger\vphantom{a}\hat{a}-\hat{b}^\dagger\vphantom{a}\hat{b}}{2}, \label{eq:Stokes operators} } or succinctly through [cf. Eq.
\eqref{eq:Stokes ensemble definition}] \eq{ \hat{S}_\mu=\frac{1}{2}\begin{pmatrix} \hat{a}^\dagger\vphantom{a} &\hat{b}^\dagger\vphantom{a} \end{pmatrix}\sigma_\mu\begin{pmatrix} \hat{a} \\\hat{b} \end{pmatrix}, } with the quantum-to-classical correspondence \eq{ S_\mu=\expct{\hat{S}_\mu}. } The factor of $2$ that we have been carrying through these definitions allows us to realize the Stokes operators as obeying the commutation relations of angular momentum, \eq{ \left[\hat{S}_i,\hat{S}_j\right]=\text{i}\sum_{k=1}^3\epsilon_{i j k}\hat{S}_k\qquad \mathrm{and}\qquad \left[\hat{S}_0,\hat{S}_i\right]=0, } and we see that the Schwinger mapping \cite{Chaturvedietal2006,SakuraiNapolitano2011} governs the translation between two-mode states and angular momentum eigenstates. Classically, all of the Stokes parameters can be measured with arbitrary precision. However, since they arise as expectation values of noncommuting operators, the same cannot be said at a fundamental level; instead, there is a lower limit to the joint precision with which a pair of Stokes parameters can be measured. Methods of optimizing tradeoffs such as \eq{ 4\mathop{\mathrm{Var}}\nolimits \hat{S}_1\mathop{\mathrm{Var}}\nolimits \hat{S}_2\geq \left|\expct{\hat{S}_3}\right|^2 } and \eq{ \mathop{\mathrm{Var}}\nolimits \hat{S}_1+\mathop{\mathrm{Var}}\nolimits \hat{S}_2+\mathop{\mathrm{Var}}\nolimits \hat{S}_3\geq \expct{\hat{S}_0}, \label{eq:Stokes variance sum inequality} } where we denote operator variances by $\mathop{\mathrm{Var}}\nolimits X=\expct{X^2}-\expct{X}^2$, have led to the fruitful discovery and implementation of ``polarization squeezing,'' which may ultimately have applications in quantum-enhanced polarimetry and in communication tasks \cite{Winelandetal1992,Chirkinetal1993,KitagawaUeda1993,HilleryMlodinow1993,AgarwalPuri1994,Winelandetal1994,Karassiov1994,Alodzhantsetal1995,BrifMann1996,Klyshko1997,Alodjantsetal1998,Haldetal1999,SorensenMolmer2001,Korolkovaetal2002,Bowenetal2022,AbdelAtyetal2002,Schnabeletal2003,Heersinketal2003,Josseetal2003,AndersenBuchhave2003,Shindoetal2003,WangSanders2003,Golubevetal2004,Heersinketal2005,KorolkovaLoudon2005,Popescu2005,LuisKorolkova2006,ShersonMolmer2006,Marquadtetal2007,Chaudhuryetal2007,Lassenetal2007,Milanovicetal2007,RivasLuis2008,Corneyetal2008,Dongetal2008,Iskhakovetal2009,Shalmetal2009,GuhneToth2009,Hsuetal2009,Mahleretal2010,Milanovicetal2010,Heetal2011,PrakashShukla2011,Maetal2011,Barreiroetal2011,Gross2012,Puentesetal2013,Civitareseetal2013,KlimovMunoz2013,BeduiniMitchell2013,MitchellBeduini2014,Chirkin2015,Puentes2015,Mulleretal2016,Vitaglianoetal2017,Wenetal2017,Hanetal2018,Vitaglianoetal2018,Birrittellaetal2021,Baietal2021}. Polarization squeezing is reviewed at length in Ref. \cite{Maetal2011}, with briefer reviews in, e.g., Refs. \cite{TothApellaniz2014,Chirkin2015,Goldbergetal2021polarization}. From the theory of angular momentum, we immediately find that \eq{ \hat{\mathbf{S}}^2\equiv \hat{S}_1^2+\hat{S}_2^2+\hat{S}_3^2=\hat{S}_0\left(\hat{S}_0+1\right)\geq \hat{S}_0^2. } This does not imply that quantum states may disobey the criterion of Eq. \eqref{eq:stokes squared inequality}, which remains true in the guise of \eq{ \expct{\hat{S}_1}^2+\expct{\hat{S}_2}^2+\expct{\hat{S}_3}^2\leq \expct{\hat{S}_0}^2; } these together enforce the variance inequality in Eq. \eqref{eq:Stokes variance sum inequality}.
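For readers who prefer to check such operator statements directly, the following sketch (our construction; it represents the Stokes operators on the $N$-photon subspace spanned by $\ket{m,N-m}$, using $\bra{m+1,N-m-1}\hat{a}^\dagger\hat{b}\ket{m,N-m}=\sqrt{(m+1)(N-m)}$) verifies the commutator $[\hat{S}_1,\hat{S}_2]=\text{i}\hat{S}_3$ and shows that the state $\ket{N,0}$ saturates Eq. \eqref{eq:Stokes variance sum inequality}.

\begin{verbatim}
import numpy as np

def stokes_operators(N):
    # Stokes operators restricted to the N-photon subspace |m, N-m>.
    m = np.arange(N)
    R = np.diag(np.sqrt((m + 1) * (N - m)), -1)   # matrix of a^dag b
    S1 = (R + R.T) / 2
    S2 = (R - R.T) / 2j
    S3 = np.diag(np.arange(N + 1) - N / 2) + 0j
    return S1, S2, S3

N = 4
S1, S2, S3 = stokes_operators(N)
assert np.allclose(S1 @ S2 - S2 @ S1, 1j * S3)    # [S_1, S_2] = i S_3

def var(S, psi):
    return ((psi.conj() @ S @ S @ psi) - (psi.conj() @ S @ psi)**2).real

psi = np.zeros(N + 1, dtype=complex)
psi[N] = 1            # |N, 0>: all photons right-circularly polarized
print(var(S1, psi) + var(S2, psi) + var(S3, psi), N / 2)  # both N/2
\end{verbatim}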
Interestingly, the ultimate quantum limit with which the Stokes parameters \textit{for classical light} may be simultaneously estimated obeys stricter conditions than Eq. \eqref{eq:Stokes variance sum inequality}: \eq{ \mathop{\mathrm{Var}}\nolimits \hat{S}_1+\mathop{\mathrm{Var}}\nolimits \hat{S}_2+\mathop{\mathrm{Var}}\nolimits \hat{S}_3\geq \frac{5}{2}\expct{\hat{S}_0} \label{eq:uncertainty limit classical state 1} } when the value of $S_0$ is unknown \cite{Kikuchi2020,Mecozzietal21,Jarzyna2021arxiv} and \eq{ \mathop{\mathrm{Var}}\nolimits \hat{S}_1+\mathop{\mathrm{Var}}\nolimits \hat{S}_2+\mathop{\mathrm{Var}}\nolimits \hat{S}_3\geq 2\expct{\hat{S}_0} \label{eq:uncertainty limit classical state 2} } when the value of $S_0$ is known \textit{a priori} \cite{Jarzyna2021arxiv}. The four Stokes parameters can be simultaneously measured by mixing the input light with seven other vacuum modes input to an interferometer and computing sum or difference currents among four particular pairs of output modes \cite{AlodjantsArakelian1999,Alodjantsetal1999}. Each sum or difference photocurrent yields one half of one of the Stokes parameters, even in the quantum regime, but the variances of these photocurrents are not proportional to the variances of the Stokes parameters; instead, the former are all offset from the latter by an amount proportional to the total intensity of the field and are thus never zero. Mathematical postprocessing of such a measurement can be used to verify the presence of polarization squeezing and the potential saturation of inequalities such as Eq. \eqref{eq:Stokes variance sum inequality}. It is instructive to investigate the properties of single-photon polarization states, spanned by $\ket{1,0}$ and $\ket{0,1}$, as these directly encompass classical polarization phenomena, which are governed by the $2\times 2$-density-matrix-like object $\boldsymbol{\Psi}$. The most general density matrix for such a state is given in this basis by \eq{ \hat{\rho}_{\mathrm{single\,photon}}=\begin{pmatrix} \rho_{0,0}&\rho_{1,0}^*\\ \rho_{1,0}&1-\rho_{0,0} \end{pmatrix}, } subject to the constraints of positivity. The Stokes parameters are readily calculable using Pauli matrices, yielding \eq{ \mathcal{S}=\begin{pmatrix} \frac{1}{2}\\ \mathop{\mathrm{Re}} \nolimits \rho_{1,0}\\ \mathop{\mathrm{Im}} \nolimits \rho_{1,0}\\ \rho_{0,0}-\frac{1}{2} \end{pmatrix}. } The density matrix can be decomposed into density matrices corresponding to pure and maximally mixed states, via \eq{ \hat{\rho}_{\mathrm{single\,photon}}=\sum_{\mu=0}^3 S_\mu\sigma_\mu=p\frac{\mathds{1}+\frac{\mathbf{S}}{\left|\mathbf{S}\right|}\cdot\boldsymbol{\sigma}}{2}+\left(1-p\right)\frac{\mathds{1}}{2}, \label{eq:rho single photon from Stokes} } where the degree of polarization is, as usual, $p=\left|\mathbf{S}\right|/S_0$. Pure single photons have degree of polarization $p=1$ and maximally mixed single photons have $p=0$, which can alternatively be expressed via the purity parameter $\mathop{\mathrm{Tr}} \nolimits \left(\hat{\rho}^2\right)$ through \eq{ p=\frac{\left|\mathbf{S}\right|}{S_0}=\sqrt{\frac{\mathop{\mathrm{Tr}} \nolimits\left(\boldsymbol{\Psi}^2\right)}{2S_0^2}-1}=\sqrt{2\mathop{\mathrm{Tr}} \nolimits\left(\hat{\rho}^2\right)-1}. \label{eq:purity polarization single photons} } The density matrix for single photons thus completely reproduces the classical coherency matrix for describing polarization states. We next explore properties of quantum states with more than a single photon.
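Before doing so, we note that Eq. \eqref{eq:purity polarization single photons} is easy to confirm numerically. The sketch below is ours, using the convention $S_\mu=\mathrm{Tr}\left(\hat{\rho}\sigma_\mu\right)/2$ that can be read off from the expressions above; it draws a random single-photon density matrix and compares the two expressions for $p$.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = G @ G.conj().T
rho /= np.trace(rho).real            # random single-photon density matrix

sigma = [np.eye(2), np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
S = np.array([np.trace(rho @ s).real / 2 for s in sigma])

p_stokes = np.linalg.norm(S[1:]) / S[0]
p_purity = np.sqrt(2 * np.trace(rho @ rho).real - 1)
print(p_stokes, p_purity)            # agree to machine precision
\end{verbatim}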
\subsection{Characterizing Polarization} The Stokes operators presented in Eq. \eqref{eq:Stokes operators} conserve photon number, as each creation operator is paired with an annihilation operator. This is one way to see why they commute with the total-photon-number operator $\hat{N}=2\hat{S}_0$ and ensures that the polarization properties of a beam of light can be broken into the polarization properties of each photon-number subspace, where the latter are sometimes called Fock layers \cite{Donatietal2014,Mulleretal2016}. We next explore the polarization properties of pure states with a fixed number of photons $N$, equivalent to states with spin $N/2$, whose most general form is given by \eq{ \ket{\psi^{(N)}}=\sum_{m=0}^N\psi_m\ket{m,N-m},\qquad \sum_{m=0}^N \left|\psi_m\right|^2=1. \label{eq:pure state Nth layer in terms of psim} } Geometrically, it is easy to visualize pure single-photon states on the surface of the Poincar\'e (or Bloch) sphere, as they can be parametrized by two angular coordinates $\Omega=\left(\Theta,\Phi\right)$: \eq{ \ket{\Omega^{(1)}}=\cos\frac{\Theta}{2}\ket{1,0}+\text{e}^{\text{i}\Phi}\sin\frac{\Theta}{2}\ket{0,1}. } We can alternatively write these states as resulting from the action of a single creation operator, parametrized by $\Omega$, acting on the two-mode vacuum, through \eq{ \ket{\Omega^{(1)}}=\hat{a}_\Omega^\dagger\ket{\mathrm{vac}},\qquad \hat{a}_\Omega^\dagger=\cos\frac{\Theta}{2}\hat{a}^\dagger\vphantom{a}+\text{e}^{\text{i}\Phi}\sin\frac{\Theta}{2}\hat{b}^\dagger\vphantom{a}. } However, we cannot visualize an $N$-photon state as a series of $N$ Poincar\'e spheres, as the photons are, in general, mutually correlated. This challenge is addressed by the Majorana representation \cite{Majorana1932,BengtssonZyczkowski2017}. The Majorana representation begins by realizing the one-to-one correspondence between the amplitudes $\left\{\psi_m\right\}$ and the set of $N$ angular coordinates $\left\{\Omega_k\right\}$, through \eq{ \ket{\psi^{(N)}}=\frac{1}{\sqrt{\mathcal{N}}}\prod_{k=1}^N \hat{a}_{\Omega_k}^\dagger\ket{\mathrm{vac}}, } where the normalization constant $\mathcal{N}$ depends on the coordinates and does not affect the geometry of the state. This correspondence allows us to represent any $N$-photon pure state by a constellation of $N$ points on the surface of a sphere, where each point is sometimes referred to as a star in the Majorana constellation.\footnote{The constellation is often taken to be the set of points antipodal to these $N$ angular coordinates $\Omega_k$ so as to correspond directly to the zeroes of the Husimi $Q$-function.} Notably, the entire constellation rotates rigidly under a polarization rotation, lending geometric intuition to multiphoton polarization states. The usefulness of the Majorana representation has been realized in topics from metrology \cite{ChryssomalakosHC2017,Bouchardetal2017,GoldbergJames2018Euler,Martinetal2020,Goldberg2020,Goldbergetal2021rotationspublished,Chryssomalakosetal2021} to Bose-Einstein condensates \cite{Lianetal2012,Cuietal2013} to non-Hermitian physics \cite{Bartlettetal2021} and beyond \cite{Hannay1998JPA,Hannay1998JMO,Bjorketal2015,Bjorketal2015PRA,Giraudetal2010,KolenderskiDemkowiczDobrzanski2008,MakelaMessina2010PS,Martinetal2010,Lamacraft2010,Bruno2012,UshaDevietal2012,Yangetal2015,LiuFu2016,Chabaudetal2020,Dograetal2020}. We exemplify some Majorana constellations in Fig. 
\ref{fig:Majorana examples} and note that states whose Majorana constellations are randomly distributed have intriguing properties that are only now being elucidated \cite{Goldbergetal2021randomarxiv}. \begin{figure*} \centering \includegraphics[width=\textwidth]{fig2} \caption{Three views of three different Majorana constellations for four-photon states. The large green ball corresponds to the fourfold degenerate constellation of an SU(2)-coherent state with $\Omega_1=\Omega_2=\Omega_3=\Omega_4$, the smaller blue balls correspond to a NOON state with Majorana stars equally spread about the equator, and the red cubes correspond to a state whose Majorana constellation is a regular tetrahedron.} \label{fig:Majorana examples} \end{figure*} The Majorana constellation, as it stands, pertains to pure $N$-photon states. What can we retain when considering more general quantum states? For mixed states with a fixed number of photons, representations have been derived that retain some of the geometrical properties of the standard constellation by either decomposing the density matrix into its eigenbasis \cite{Migdal2011} or into the spherical tensor basis \cite{SerranoEnsastigaBraun2020} and finding a constellation for each element in said basis. For pure states with indeterminate numbers of photons, one can consider a Majorana representation within each photon-number subspace, where one must also keep track of the relative weights and relative phases between each subspace (ignoring the relative phases, one can consider a set of Majorana constellations for a convex combination of pure states that each have a different number of photons) \cite{Bjorketal2015}. Regrettably, a unified geometrical picture for arbitrary quantum states \`a la Majorana is still lacking. \subsubsection{Polarized States} We are now in a position to ascertain which quantum mechanical states underlie their classical counterparts with degree of polarization $p=1$ \cite{GoldbergJames2017}, expanding upon earlier work by \textcite{MehtaSharma1974,PrakashSingh2000,SinghPrakash2013,Luis2016}. These quantum states have significant complementarity properties \cite{Norrmanetal2020}. We note in passing that other degrees of polarization have been proposed in light of the quantum nature of polarization \cite{Alodjantsetal1998,AlodjantsArakelian1999,Klimovetal2010,Luis2002,Luis2007PRAtypeII,Luis2016,SanchezSotoetal2006,Luis2007OptComm,Bjorketal2010}, each with their own merits and motivations, but continue our investigation along the lines of the canonical degree of polarization. The simplest example of a perfectly polarized quantum state is that of a pure single photon. This is readily generalized to pure states with exactly $N$ photons, where the resulting state is completely polarized if and only if all $N$ of the constituent photons have the same direction of polarization. In terms of the Majorana constellation, this requires all $N$ of the angular coordinates to degenerate to a single point on the surface of the sphere as in Fig. \ref{fig:Majorana examples}, with the angular coordinates of that point dictating the state's polarization direction. These states are the well-known SU(2)-coherent, or spin-coherent, states \cite{Arecchietal1972} \eq{\ket{\Omega^{(N)}}=\frac{\hat{a}_{\Omega}^\dagger\vphantom{a}^N}{\sqrt{N!}}\ket{\mathrm{vac}}=\sum_{m=0}^N \psi_m^{\Omega;N}\ket{m,N-m}, \label{eq:SU(2) coherent states} }where \eq{\psi_m^{\Omega;N}=\sqrt{\binom{N}{m}}\cos^m\frac{\Theta}{2}\sin^{N-m}\frac{\Theta}{2}\text{e}^{\text{i}\Phi(N-m)}.
\label{eq:coherent state amplitudes} } SU(2)-coherent states are eigenstates of an angular momentum operator projected in the direction $\mathbf{n}_{\Omega}=\left(\sin\Theta\cos\Phi,\sin\Theta\sin\Phi,\cos\Theta\right)^\top$ with eigenvalue $S_0=N/2$: \eq{\left(\hat{\mathbf{S}}\cdot\mathbf{n}_{\Omega}\right)\ket{\Omega^{(N)}}=\frac{N}{2}\ket{\Omega^{(N)}} \label{eq:Stokes from SU(2) coherent} } and have many other useful properties \cite{Perelomov1986,Gazeau2009}. The Stokes parameters for SU(2)-coherent states are exactly those of perfectly polarized light with intensity equal to that of $N$ photons \eq{\mathcal{S}=\frac{N}{2}\begin{pmatrix} 1\\\mathbf{n}_{\Omega} \end{pmatrix}} and SU(2)-coherent states can be considered as spin states with maximal spin projection. Notably, these perfectly polarized states behave sensibly when undergoing polarization rotations, with the direction of polarization rotating as expected for classical beams of light. The only states, pure or mixed, with exactly $N$ photons that are perfectly polarized are the SU(2)-coherent states of Eq. \eqref{eq:SU(2) coherent states}. The rest of the quantum states with $p=1$ must therefore have indeterminate photon number (i.e., not be eigenstates of $\hat{S}_0$). One simple extension of SU(2)-coherent states is to convex combinations of SU(2)-coherent states, each with the same angular coordinates. One can prove that such states, given by \eq{ \hat{\rho}=\sum_{N=0}^\infty \rho_N \ket{\Omega^{(N)}}\bra{\Omega^{(N)}},\quad \sum_{N=0}^\infty \rho_N=1, \quad \rho_N\geq 0 , \label{eq:convex combos of spin coherent states} } are all perfectly polarized in direction $\Omega$, with intensity equal to the average photon number (in the appropriate units) $\expct{\hat{S}_0}\propto \sum_{N=0}^\infty\rho_N N$ \cite{MehtaSharma1974}. Even though there is a probabilistic mixture present, all of the states in the mixture have the same direction of polarization, so they conspire to yield a state that is completely polarized overall. Equation \eqref{eq:convex combos of spin coherent states} is markedly different from the case of single photons: for single photons, purity and degree of polarization are the same quantity, as seen in Eq. \eqref{eq:purity polarization single photons}; when more than one photon number is involved, mixed states can still have degree of polarization $p=1$. Returning momentarily from mixed states back to pure states, superpositions of SU(2)-coherent states in different photon-number subspaces whose directions of polarization are all collinear are also perfectly polarized. In fact, these are the only possible pure states with degree of polarization $p=1$ \cite{GoldbergJames2017}: \eq{ \ket{\Psi}=\sum_{N=0}^\infty\text{e}^{\text{i}\varphi_N}\sqrt{\rho_N}\ket{\Omega^{(N)}},\quad \sum_{N=0}^\infty \rho_N=1, \quad \rho_N\geq 0,\quad\varphi_N\in\mathds{R}. \label{eq:pure perfectly polarized} } This simply means that classical polarization properties may be underlain by quantum superpositions about which the former are ignorant, especially because even the Stokes operators themselves do not distinguish between the pure superpositions of Eq. \eqref{eq:pure perfectly polarized} and the mixed states of Eq. \eqref{eq:convex combos of spin coherent states} in any of their correlation functions.
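The amplitudes of Eq. \eqref{eq:coherent state amplitudes} can likewise be checked against $\mathcal{S}=\frac{N}{2}(1,\mathbf{n}_\Omega^\top)^\top$ with a few lines of NumPy (our sketch; the operator construction repeats the $N$-photon representation used earlier, and the numerical values of $N$, $\Theta$, and $\Phi$ are arbitrary illustrative choices):

\begin{verbatim}
import numpy as np
from scipy.special import comb

N, Theta, Phi = 5, 1.2, 0.7
m = np.arange(N + 1)
psi = np.sqrt(comb(N, m)) * np.cos(Theta / 2)**m \
      * np.sin(Theta / 2)**(N - m) * np.exp(1j * Phi * (N - m))

k = np.arange(N)
R = np.diag(np.sqrt((k + 1) * (N - k)), -1)        # a^dag b on |m, N-m>
S = [(R + R.T) / 2, (R - R.T) / 2j, np.diag(m - N / 2) + 0j]

S_avg = np.array([(psi.conj() @ Si @ psi).real for Si in S])
n_Omega = np.array([np.sin(Theta) * np.cos(Phi),
                    np.sin(Theta) * np.sin(Phi), np.cos(Theta)])
print(np.allclose(S_avg, N / 2 * n_Omega))         # True
\end{verbatim}

Applying the same machinery to Eqs. \eqref{eq:convex combos of spin coherent states} and \eqref{eq:pure perfectly polarized} returns identical Stokes data for the mixed and pure variants, since the Stokes operators conserve photon number, in line with the indistinguishability just described.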
In that sense, another form of correlation gadget is required to be sensitive to these relative phases between Fock layers, as outlined in the conclusions of \cite{Goldbergetal2021multipolesarxiv}, such as by using weak-field homodyne detection \cite{Donatietal2014}. The pure states we are now discussing encompass canonical coherent states, which are generally agreed to be the most classical states according to quantum optics \cite{MandelWolf1995}: \eq{ \ket{\alpha}\propto \exp(\alpha\hat{a}^\dagger\vphantom{a})\ket{\mathrm{vac}}. \label{eq:canonical coherent state} } These states obey the restrictions of Eqs. \eqref{eq:uncertainty limit classical state 1} and \eqref{eq:uncertainty limit classical state 2} in terms of their simultaneously measurable properties. From the perspective of polarization, these states take the form \cite{AtkinsDobson1971} \eq{\ket{\alpha_{\Omega}}\propto \exp(\alpha\hat{a}_{\Omega}^\dagger)\ket{\mathrm{vac}} =\sum_{N=0}^\infty \frac{\alpha^N}{N!}\hat{a}_{\Omega}^\dagger\vphantom{a}^N\ket{\mathrm{vac}}=\sum_{N=0}^\infty\frac{\alpha^N}{\sqrt{N!}}\ket{\Omega^{(N)}}. \label{eq:canonical coherent states any pol} } These are the states sometimes thought to underlie classical polarization phenomena, as they can be described solely using the Stokes vector [cf. Eqs. \eqref{eq:stokes pol} and \eqref{eq:Stokes from SU(2) coherent}] \eq{ \mathcal{S}=\frac{\left|\alpha\right|^2}{2}\begin{pmatrix} 1\\ \mathbf{n}_\Omega \end{pmatrix} } with no other free parameters (i.e., no additional ``quantum'' degrees of freedom present beyond the classical description), but we will see later that even this thinking has its pitfalls when we begin to consider partially polarized states. We are now in a position to write the most general perfectly polarized quantum state: \eq{ \hat{\rho}=\sum_{M,N=0}^\infty \sigma_{M,N} \ket{\Omega^{(M)}}\bra{\Omega^{(N)}},\quad \mathop{\mathrm{Tr}} \nolimits\pmb{\sigma}=1, } for any positive-semidefinite matrix $\pmb{\sigma}$. These states can be thought of as probabilistic mixtures of pure states of the form of Eq. \eqref{eq:pure perfectly polarized}, which can also be realized using a single element from a pair of orthogonal creation operators via: \eq{\hat{\rho}=\sum_{i=0}^\infty F_i\left(\hat{a}_{\Omega}^\dagger\right)\ket{\mathrm{vac}}\bra{\mathrm{vac}}F_i^*\left(\hat{a}_{\Omega}\right),} where the functions $F_i(z)=\sum_N f_N^{(i)}z^N$ need only be normalized by a common factor. Moreover, these states arise exclusively from polarization rotations of states that have all of their excitations in a single mode, using the polarization rotation operators $\hat{R}$ that we will discuss in Section \ref{sec:quantum rotations}: \eq{\hat{\rho}=\hat{R}\left(\hat{\sigma}_{\mathrm{R}}\otimes \ket{0}_{\mathrm{L}}\bra{0}\right)\hat{R}^\dagger.} While a formal proof of these facts can be found in \cite{GoldbergJames2017}, we presented a simpler proof in \cite{Goldberg2021thesis} that relies only on polarization rotations and on the fact that \eq{ \expct{\hat{b}^\dagger\vphantom{a}\hat{b}}=0\qquad\iff\qquad\hat{\rho}=\hat{\sigma}_{\mathrm{R}}\otimes \ket{0}_{\mathrm{L}}\bra{0}. } In summary, perfectly polarized states have all of their photons conspire to seem completely classical. We stress, still, that the purity of such states can be quite low, in contradistinction to our classical intuition. We are now positioned to discuss the other term in the decompositions of Eqs.
\eqref{eq:Stokes decomposition} and \eqref{eq:coherency decomposition}, corresponding to unpolarized states, to partner with the perfectly polarized states and complete our quantum description of classical polarization. \subsubsection{Unpolarized States} \label{sec:unpolarized states} Classically, unpolarized states are those that are unchanged by polarization rotations and such states have $p=0$. Quantum mechanically, there is a marked difference between states unchanged by polarization rotations and states with $p=0$. This strongly underscores the differences between classical and quantum intuition in the realm of polarization. Quantum states with $p=0$ must have $\expct{\hat{\mathbf{S}}}=\mathbf{0}$. This imposes three constraints onto a generic state that has many more than three degrees of freedom, so it is not surprising that many different states may underlie classically unpolarized light. We begin with unpolarized single-photon states. These are given by the density matrices from Eq. \eqref{eq:rho single photon from Stokes} with $\mathbf{S}=\mathbf{0}$ and correspond to maximally mixed states, according with the purity condition of Eq. \eqref{eq:purity polarization single photons}. No free parameters remain, so all unpolarized single-photon states are the same, without any extra quantum mechanical degrees of freedom. Unpolarized pure states of $N$ photons must satisfy the constraints \eq{ \sum_{m=0}^N\left|\psi_m\right|^2=1,\quad 2\sum_{m=0}^N m \left|\psi_m\right|^2=N,\quad \sum_{m=1}^N\psi_m^*\psi_{m-1}\sqrt{m\left(N-m+1\right)}=0. } These four constraints, including normalization, can be compared to the free parameters of an $N$-photon state: the latter has $N+1$ complex degrees of freedom subject to normalization and the irrelevance of a global phase. Such unpolarized states thus retain $2N-3$ degrees of freedom, in stark contrast to the classical picture that cannot distinguish between any of these degrees of freedom. In addition, this directly shows that all unpolarized pure states of light must have more than one photon, signifying that one must look beyond $2\times 2$ matrices for investigating all polarization phenomena. It is easy to geometrically concoct quantum states that are unpolarized with $p=0$ by taking advantage of symmetry properties through the Majorana representation. For example, the so-called NOON states are given by superpositions of SU(2)-coherent states pointed in opposite directions [$\Omega_\perp=\left(\pi-\Theta,\Phi+\pi\right)$] \eq{ \ket{\psi_{\mathrm{NOON}}}=\frac{\ket{\Omega^{(N)}}+\ket{\Omega_\perp^{(N)}}}{\sqrt{2}}; \label{eq:NOON} } such states have Majorana constellations equally spread about a single great circle, as depicted along the equator in Fig. \ref{fig:Majorana examples}. Similarly, one can consider states whose Majorana constellations have three-dimensional symmetry, including those corresponding to Platonic solids like the tetrahedron (again, see Fig. \ref{fig:Majorana examples}) \eq{ \ket{\psi_{\mathrm{tet}}}=\frac{\ket{4,0}+\sqrt{2}\ket{1,3}}{\sqrt{3}}\propto\hat{a}^\dagger\vphantom{a}\left(\hat{a}^\dagger\vphantom{a}+\sqrt{2}\hat{b}^\dagger\vphantom{a}\right)\left(\hat{a}^\dagger\vphantom{a}+\text{e}^{\text{i}2\pi/3}\sqrt{2}\hat{b}^\dagger\vphantom{a}\right)\left(\hat{a}^\dagger\vphantom{a}+\text{e}^{\text{i}4\pi/3}\sqrt{2}\hat{b}^\dagger\vphantom{a}\right)\ket{\mathrm{vac}}. } Since Majorana constellations rotate rigidly under polarization rotations, any such rotation will preserve the unpolarized nature of a state.
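The two $N=4$ examples just given can be verified in a few lines (our sketch, written in the $\ket{m,N-m}$ amplitude basis, with the NOON state of Eq. \eqref{eq:NOON} taken at the poles so that $\Omega$ and $\Omega_\perp$ point along $\pm\mathbf{e}_3$):

\begin{verbatim}
import numpy as np

N = 4
k = np.arange(N)
R = np.diag(np.sqrt((k + 1) * (N - k)), -1)        # a^dag b
S = [(R + R.T) / 2, (R - R.T) / 2j,
     np.diag(np.arange(N + 1) - N / 2) + 0j]

noon = np.zeros(N + 1, dtype=complex)
noon[[0, N]] = 1 / np.sqrt(2)                      # (|4,0> + |0,4>)/sqrt(2)
tet = np.zeros(N + 1, dtype=complex)
tet[N], tet[1] = 1 / np.sqrt(3), np.sqrt(2 / 3)    # (|4,0>+sqrt(2)|1,3>)/sqrt(3)

for psi in (noon, tet):
    print([round((psi.conj() @ Si @ psi).real, 12) for Si in S])  # [0, 0, 0]
\end{verbatim}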
We can also use the geometrical picture without much elegance by considering unpolarized states to be, for example, superpositions of SU(2)-coherent states pointed in opposite directions in different photon-number subspaces. A straightforward example is a state such as \eq{ \ket{\psi_{\mathrm{NOON-inspired}}}=\frac{\sqrt{N}\ket{\Omega^{(M)}}+\sqrt{M}\ket{\Omega_\perp^{(N)}}}{\sqrt{M+N}}. } This should make it clear that there are infinitely many possible quantum states with classical degree of polarization $p=0$, even though the classical picture assumes them all to be identical. Moreover, we have only shown infinitely many possible pure unpolarized states; any convex combination of such states will also be unpolarized, so there is a plethora of unpolarized mixed states according to the classical degree. A final note about classically unpolarized states is due to \textcite{Klyshko1992}. Orthogonal states such as $\ket{1,1}$ and $\left(\ket{2,0}+\ket{0,2}\right)/\sqrt{2}$ seem to be unpolarized, but they have the same Majorana constellation up to a rigid rotation (two antipodal points), so they can be interconverted via polarization rotations. This is terrible from the perspective of classical polarization: polarization rotations should not change the measurable properties of a state, yet, somehow, a polarization rotation is here converting a state into an orthogonal one, which can readily be distinguished from the former. The pitfalls of classical polarization intuition continue to be elucidated. An alternative to states simply satisfying $p=0$ is the set of states that are completely unchanged by polarization rotations. These states were shown by \textcite{Agarwal1971,PrakashChandra1971} to uniquely correspond to convex combinations of maximally mixed states in each photon-number subspace: \eq{ \hat{\rho}_{\mathrm{isotropic}}=\sum_{N=0}^\infty \beta_N\frac{\hat{\mathds{1}}_N}{N+1},\quad \sum_{N=0}^\infty \beta_N=1,\quad \beta_N\geq 0. \label{eq:isotropic state} } Here, the maximally mixed states are projections onto an $N$-photon subspace that can be written explicitly as \eq{ \hat{\mathds{1}}_N=\sum_{m=0}^N\ket{m}_\mathrm{R}\bra{m}\otimes\ket{N-m}_\mathrm{L}\bra{N-m}&=\sum_{m=0}^N\ket{m}_\mathrm{D}\bra{m}\otimes\ket{N-m}_\mathrm{A}\bra{N-m}\\ &=\sum_{m=0}^N\ket{m}_\mathrm{H}\bra{m}\otimes\ket{N-m}_\mathrm{V}\bra{N-m}. } Within a given photon-number subspace, these states have no remaining degrees of freedom, just like the classical description of unpolarized states. The only degrees of freedom in these states come from the probability distributions $\boldsymbol{\beta}$, which may be interpreted as the intensity distribution information to which classical polarization could, in theory, be privy. For example, the intensity for classical states may follow a Poisson distribution, taking the form \eq{ \hat{\rho}_{\mathrm{isotropic\,classical}}=\sum_{N=0}^\infty \frac{\left|\alpha\right|^{2N}\text{e}^{-\left|\alpha\right|^2}}{N!}\frac{\hat{\mathds{1}}_N}{N+1}, \label{eq:isotropic Poisson} } which could arise from the convex combination of the classically polarized states given in Eq. \eqref{eq:canonical coherent states any pol} averaged over all polarization directions.
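The contrast between these two notions of unpolarized light is stark enough to merit an explicit check (our sketch; \texttt{expm} exponentiates the rotation generator, and the axis and angle are arbitrary illustrative choices): a maximally mixed Fock layer of Eq. \eqref{eq:isotropic state} is invariant under every polarization rotation, while a $p=0$ NOON state is not.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 4
k = np.arange(N)
R = np.diag(np.sqrt((k + 1) * (N - k)), -1)
S = [(R + R.T) / 2, (R - R.T) / 2j,
     np.diag(np.arange(N + 1) - N / 2) + 0j]

n = np.array([0.36, 0.48, 0.80])               # arbitrary unit rotation axis
U = expm(-1j * 1.3 * sum(ni * Si for ni, Si in zip(n, S)))

iso = np.eye(N + 1) / (N + 1)                  # maximally mixed Fock layer
noon = np.zeros(N + 1, dtype=complex)
noon[[0, N]] = 1 / np.sqrt(2)
rho_noon = np.outer(noon, noon.conj())

print(np.allclose(U @ iso @ U.conj().T, iso))            # True
print(np.allclose(U @ rho_noon @ U.conj().T, rho_noon))  # False
\end{verbatim}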
The distinction between classically unpolarized states with $p=0$ and the completely isotropic states $\hat{\rho}_{\mathrm{isotropic}}$ can be summarized using the anticoherence concept uncovered by \textcite{Zimba2006}: classical unpolarization implies that $\expct{\left(\hat{\mathbf{S}}\cdot\mathbf{n}\right)}=0$ for all unit vectors $\mathbf{n}$, while quantum unpolarization implies that $\expct{\left(\hat{\mathbf{S}}\cdot\mathbf{n}\right)^k}$ is independent of $\mathbf{n}$ for larger integers $k$. States satisfying these constraints for the largest integers $k$ are now known as Kings of Quantumness \cite{Bjorketal2015,Bjorketal2015PRA} and have been explored numerically in many dimensions. We present a final method for finding unpolarized states stemming from the classical definition. Given the decompositions of Eqs. \eqref{eq:Stokes unpol} and \eqref{eq:coherency unpol}, we are inspired to write \eq{ \hat{\rho}=p\hat{\rho}_{\mathrm{pol}}+(1-p)\hat{\rho}_{\mathrm{unpol}}, \label{eq:quantum decomposition into classical} } where $\hat{\rho}_{\mathrm{pol}}$ has degree of polarization $p=1$ and $\hat{\rho}_{\mathrm{unpol}}$ has $p=0$. Classically, we can thus always take any partially polarized state and subtract the polarized component to find the unpolarized component, such as through \eq{ \boldsymbol{\Psi}_{\mathrm{unpol}}=\frac{\boldsymbol{\Psi}-p\boldsymbol{\Psi}_{\mathrm{pol}}}{1-p}. } Quantum mechanically, this is not always tenable. Making the same construction with quantum states, we have \eq{ \hat{\rho}_{\mathrm{unpol,\,candidate}}=\frac{\hat{\rho}-p\hat{\rho}_{\mathrm{pol}}}{1-p}, } but this is not unique, because there is no single unique $\hat{\rho}_{\mathrm{pol}}$ to subtract. Moreover, the resulting candidate is not always a quantum state, as it may fail to be positive. For example, if the initial state is pure, the subtracted state always fails to be positive: \eq{ \hat{\rho}_{\mathrm{unpol,\,poor\,candidate}}=\frac{\ket{\psi}\bra{\psi}-p\hat{\rho}_{\mathrm{pol}}}{1-p}\not\geq 0. } We thus find that this classical method for finding unpolarized states only \textit{sometimes} works in the quantum domain, requiring both a judicious choice of polarized component $\hat{\rho}_{\mathrm{pol}}$ and the verification that the resultant state is physically viable. \subsection{Changes in Polarization} What is the quantum perspective on classical polarization transformations? From the preceding discussions, it will be clear that there are quantum transformations about which classical polarization is ignorant, while the quantum theory is fully cognisant of the classical transformations. We can analyze each of the classical transformations in turn. \subsubsection{Quantum Transformations Underlying Jones Matrix Calculus} We first inspect the quantum transformations that can be described by pure Jones matrix transformations, as in Eqs. \eqref{eq:coherency pure jones} and \eqref{eq:Mueller from pure Jones}. Classically, these are referred to as deterministic transformations, while we will see this terminology to be at odds with some standard quantum nomenclature. \label{sec:quantum rotations} First, we consider polarization rotations. The Jones matrix given in Eq. \eqref{eq:Jones rotation} directly corresponds to its quantum counterpart, where a rotation operator is defined by \eq{ \hat{R}\left(\Theta,\mathbf{n}\right)=\exp\left(-\text{i}\Theta\mathbf{n}\cdot\hat{\mathbf{S}}\right).
\label{eq:quantum rotation} } When a quantum state undergoes a polarization rotation \eq{ \hat{\rho}^{(\mathrm{in})}\to \hat{\rho}^{(\mathrm{out})}=\hat{R}\hat{\rho}^{(\mathrm{in})}\hat{R}^\dagger, } the Stokes operators transform as \eq{ \hat{\mathbf{S}}^{(\mathrm{in})}\to \hat{\mathbf{S}}^{(\mathrm{out})}=\hat{R}\hat{\mathbf{S}}^{(\mathrm{in})}\hat{R}^\dagger=\mathbf{R}\hat{\mathbf{S}}^{(\mathrm{in})}, } where $\mathbf{R}$ is the $3\times 3$ rotation matrix found in Eq. \eqref{eq:Mueller rotation}. This type of transformation is unitary, is known as an SU(2) rotation, and leaves $\hat{S}_0$ unchanged, thereby allowing the Stokes \textit{operators} to transform in the same way as the Stokes parameters, through \eq{ \hat{S}_\mu\to\sum_{\nu=0}^3 M_{\mu \nu}\hat{S}_\nu. } In fact, because we seek descriptions of polarization transformations that remain valid regardless of the input state, it will remain a generic feature that the Stokes operators transform via the Mueller matrices describing the transformations of the associated Stokes parameters. When acting on creation and annihilation operators, the rotation operations enact \eq{ \hat{R}\hat{A}\hat{R}^\dagger=\mathbf{J}_{\mathrm{rot}}\hat{A}, }for the quantized Jones vector [cf. Eq. \eqref{eq:quantized Jones vector}] \eq{ \hat{A}=\begin{pmatrix} \hat{a}\\\hat{b} \end{pmatrix}. } This is what guarantees that the Majorana constellation rotates rigidly under a polarization rotation, as the creation operators $\hat{a}^\dagger_\Omega$ have their angular coordinates rotate together through \eq{ \hat{R}\ket{\psi^{(N)}}\propto\hat{R}\prod_{k=1}^N \hat{a}^\dagger_{\Omega_k}\ket{\mathrm{vac}}=\left(\prod_{k=1}^N \hat{R}\hat{a}^\dagger_{\Omega_k}\hat{R}^\dagger\right) \hat{R}\ket{\mathrm{vac}}=\left(\prod_{k=1}^N \hat{R}\hat{a}^\dagger_{\Omega_k}\hat{R}^\dagger\right) \ket{\mathrm{vac}}. } In addition, that the Stokes operators themselves transform in the same way as the Stokes parameters means that higher-order moments such as $\expct{\hat{S}_i\hat{S}_j}$ also transform as expected classically under polarization rotations, albeit under the assumption that the classical values for operator correlations are already correctly given by the quantum expectation values. These facts make the quantum rotation transformations very similar to their classical counterparts, explaining the true origin of classical polarization rotations. Arbitrary Jones matrices acting on $A$ cannot, in contrast to rotations, simply be promoted to act on the quantized vector $\hat{A}$. The diattenuation transformation of Eq. \eqref{eq:Jones diattenuation} applied to quantized fields through \eq{ \hat{a}\to\sqrt{q}\hat{a},\quad \hat{b}\to\sqrt{r}\hat{b}, } for example, is not a trace-preserving quantum channel and does not preserve the bosonic commutation relations of the two modes. To embed these transformations in unitary ones, an auxiliary, possibly fictitious, extra pair of modes annihilated by some bosonic operators $\hat{v}_1$ and $\hat{v}_2$ must be introduced, creating transformations of the form \eq{ \hat{a}\to\sqrt{q}\hat{a}-\sqrt{1-q}\hat{v}_1,\quad \hat{b}\to\sqrt{r}\hat{b}-\sqrt{1-r}\hat{v}_2. \label{eq:quantum diattenuation two input outputs} } Then, ignoring the auxiliary modes leads to an \textit{effective} transformation that looks like a diattenuation of the two modes $a$ and $b$; the underlying physical transformation implies that some photons from those two modes were transferred to the auxiliary modes.
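Before elaborating on attenuations, we pause to verify numerically the rotation covariance stated above, namely that $\expct{\hat{\mathbf{S}}}$ of an arbitrary $N$-photon state is carried to $\mathbf{R}\expct{\hat{\mathbf{S}}}$ (our sketch; the $3\times 3$ rotation matrix is built with the Rodrigues formula, and the axis, angle, and random seed are illustrative choices):

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, Theta = 3, 0.9
n = np.array([1.0, 2.0, 2.0]) / 3.0            # unit rotation axis
k = np.arange(N)
Rab = np.diag(np.sqrt((k + 1) * (N - k)), -1)
S = [(Rab + Rab.T) / 2, (Rab - Rab.T) / 2j,
     np.diag(np.arange(N + 1) - N / 2) + 0j]

rng = np.random.default_rng(7)
psi = rng.normal(size=N + 1) + 1j * rng.normal(size=N + 1)
psi /= np.linalg.norm(psi)                     # arbitrary 3-photon state

U = expm(-1j * Theta * sum(ni * Si for ni, Si in zip(n, S)))
avg = lambda phi: np.array([(phi.conj() @ Si @ phi).real for Si in S])

K = np.array([[0, -n[2], n[1]], [n[2], 0, -n[0]], [-n[1], n[0], 0]])
R3 = np.eye(3) + np.sin(Theta) * K + (1 - np.cos(Theta)) * (K @ K)
print(np.allclose(avg(U @ psi), R3 @ avg(psi)))  # True: <S> -> R <S>
\end{verbatim}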
We can consider an attenuation transformation to be a rotation between some polarization mode $a$ and another mode $v_1$ initially in its vacuum state. Physically, this is also equivalent to introducing a beam splitter that intermixes modes $a$ and $v_1$ and then ignores the latter mode. The transmission probability of the beam splitter or the effective transmission probability of the fictitious beam splitter is exactly equal to the attenuation coefficient $q$. Notably, since the effect of sequential attenuations can be collated into that of a single, stronger attenuation with overall coefficient $q=q_1\cdots q_n$, we only need a single rotation matrix with a single auxiliary mode to describe attenuation from a quantum standpoint. A quantum state undergoing attenuation in mode $a$ has the quantum channel \eq{ \hat{\rho}\to\sum_l \hat{K}_l \hat{\rho}\hat{K}_l^\dagger,\quad \sum_l \hat{K}_l^\dagger\hat{K}_l=\hat{\mathds{1}}\label{eq:quantum channel Kraus}, } with Kraus operators given by \cite{Goldberg2021thesis} \eq{ \hat{K}_l=\sum_{m=0}^\infty\sqrt{\binom{m+l}{m}q^m\left(1-q\right)^l}\ket{m}_a\bra{m+l}. } The Stokes operators for such a process transform, in turn, as \eq{ \hat{S}_\mu\to\sum_l \hat{K}_l^\dagger \hat{S}_\mu\hat{K}_l \label{eq:quantum channel Kraus on Stokes}. } This can be combined with an attenuation in the second mode to yield the classical transformations of Eq. \eqref{eq:Jones diattenuation}. As seen in the classical picture of Eq. \eqref{eq:Mueller diattenuation general}, the most general diattenuation transformation has four free parameters: two govern the strengths of the attenuations and two govern the pair of orthogonal polarization modes being attenuated. These can all be accounted for using rotation operations: two polarization rotations enable the basis changes to find the modes to be attenuated and two rotations into vacuum modes perform the attenuations. This most general diattenuation transformation is schematized in Fig. \ref{fig:diattenuation general}. When the vacuum modes are ignored, it is as if the quantized Jones vector undergoes a classical diattenuation transformation \eq{ \hat{A}\to\mathbf{J}\hat{A}. } \begin{figure*} \centering \includegraphics[width=\textwidth]{fig3} \caption{Quantum circuit diagram underlying classical polarization diattenuations. The input polarization state $\hat{\rho}_{ab}$ is first rotated to a desired configuration, the two modes are each attenuated via a rotation with their auxiliary vacuum modes $v_1$ and $v_2$, the vacuum modes are ignored, the polarization modes are rotated back to the original configuration, and the two output arrows correspond to the diattenuated polarization state.} \label{fig:diattenuation general} \end{figure*} The marked difference between polarization rotations and diattenuations is that the former is a unitary transformation on the polarization state and the latter is not, instead given by a quantum channel of the form of Eq. \eqref{eq:quantum channel Kraus} with more than one Kraus operator $\hat{K}_l$. Regrettably, the quantum theory refers to unitary channels as deterministic and nonunitary channels as nondeterministic, even though both transformations are classically referred to as deterministic from a polarization standpoint. It is noteworthy that all of the transformations governed by a single Jones matrix, classically referred to as deterministic, can be considered to conserve photon number in some enlarged Hilbert space.
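A quick numerical check of this channel is given below (our sketch, written with the convention that $q$ is the transmission probability, matching $\hat{a}\to\sqrt{q}\hat{a}$ above; the cutoff and test state are illustrative). It verifies trace preservation and the expected intensity reduction $\expct{\hat{n}}\to q\expct{\hat{n}}$, with the lost photons residing in the ignored auxiliary mode.

\begin{verbatim}
import numpy as np
from math import comb

d, q = 12, 0.6                   # Fock cutoff and transmission probability

def kraus(l):
    # K_l removes l photons from the attenuated mode
    K = np.zeros((d, d))
    for m in range(d - l):
        K[m, m + l] = np.sqrt(comb(m + l, m) * q**m * (1 - q)**l)
    return K

Ks = [kraus(l) for l in range(d)]
assert np.allclose(sum(K.T @ K for K in Ks), np.eye(d))   # trace preserving

n_op = np.diag(np.arange(d))
rho = np.zeros((d, d)); rho[5, 5] = 1                     # five-photon Fock state
rho_out = sum(K @ rho @ K.T for K in Ks)
print(np.trace(rho_out @ n_op).real, q * 5)               # <n> -> q <n>
\end{verbatim}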
We will see this to change when inspecting classically nondeterministic polarization transformations. \subsubsection{Quantum Transformations Underlying Mueller Matrix Calculus} It is straightforward to generalize the quantum transformations underlying ``deterministic'' polarization transformations to those underlying transformations of the form of Eqs. \eqref{eq:coherency multiple Jones with weights} and \eqref{eq:Mueller from Jones with weights} with more than one nonzero weight $\lambda_i$. Under the standard assumption that the classical weights can only be positive, the quantum transformations immediately follow as probabilistic mixtures of the underlying quantum transformations. These can be seen as the nondeterministic transformations \eq{ \hat{\rho}\to\sum_i\lambda_i\sum_l \hat{K}_l^{(i)}\hat{\rho}\hat{K}_l^{(i)\dagger} \quad\Rightarrow \quad \mathbf{M}=\sum_i \lambda_i \mathbf{M}^{(i)}, } where each Mueller matrix $\mathbf{M}^{(i)}$ arises from a deterministic polarization transformation with Kraus operators $\hat{K}_l^{(i)}$. The new transformations are now governed by the larger set of Kraus operators $\sqrt{\lambda_i}\hat{K}_l^{(i)}$, indexed by both $i$ and $l$, which can also be used in Eq. \eqref{eq:quantum channel Kraus on Stokes} to describe the most general quantum mechanical transformation on the Stokes operators that leads to Mueller matrix transformations of the Stokes parameters. Many examples serve to tease apart the nuances of quantum polarization transformations. For example, consider a classical transformation whereby a state has an equal probability of undergoing one of two rotations parametrized as $\mathbf{R}_i=\mathbf{R}\left(\Theta_i,\mathbf{n}_i\right)$. Quantum mechanically, this may arise by a unitary operation between a polarization state $\ket{\psi}_{ab}$ and an auxiliary mode initially in its vacuum state as \eq{ \ket{\psi}_{ab}\otimes \ket{0}_v\to\frac{\hat{R}_1\ket{\psi}_{ab}\otimes\ket{1}_v+\hat{R}_2\ket{\psi}_{ab}\otimes\ket{2}_v}{\sqrt{2}}. } Since the amount of rotation becomes entangled with the auxiliary mode, which may have undergone a photon-number-\textit{non}conserving operation, ignoring the vacuum mode leads to the effective polarization transformation \eq{ \hat{\rho}_{ab}\to\frac{1}{2}\hat{R}_1\hat{\rho}_{ab}\hat{R}_1^\dagger+\frac{1}{2}\hat{R}_2\hat{\rho}_{ab}\hat{R}_2^\dagger. } This is equivalent to a transformation with the pair of Kraus operators $\hat{K}_i=\hat{R}_i/\sqrt{2}$ that manifestly satisfy the normalization requirement of Eq. \eqref{eq:quantum channel Kraus}. Even though this operation is unitary in a larger Hilbert space, it is markedly different from the simple rotation transformations that enact diattenuations in larger Hilbert spaces. \subsubsection{Speculative Constraints on Polarization Changes} Quantum channels acting on a quantum state must be completely positive. A further standard assumption is that they preserve the trace of the quantum state, so as to preserve total probability. How do these considerations, encompassed by Eq. \eqref{eq:quantum channel Kraus}, constrain the possible Mueller matrix transformations of Eq. \eqref{eq:Mueller definition Stokes}? We do not yet have an answer to this question. It is clear from the above sections that all classical transformations falling under the assumptions of Section \ref{sec:classical constraints} can be reproduced by the quantum theory. Can we justify these assumptions by assuming only quantum theory? Can we circumvent these assumptions using quantum theory?
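Before addressing these questions, we note that the mixed-rotation example above is easily verified numerically for a single photon (our sketch; the angles, axes, and test state are illustrative): the pair $\hat{K}_i=\hat{R}_i/\sqrt{2}$ forms a valid channel that shrinks the Stokes vector, i.e., depolarizes.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

R1 = expm(-1j * 0.8 * sz / 2)                # rotation about e_3
R2 = expm(-1j * 1.9 * sx / 2)                # rotation about e_1
Ks = [R / np.sqrt(2) for R in (R1, R2)]
assert np.allclose(sum(K.conj().T @ K for K in Ks), np.eye(2))

rho = np.array([[0.9, 0.1], [0.1, 0.1]], dtype=complex)
rho_out = sum(K @ rho @ K.conj().T for K in Ks)

bloch = lambda r: np.array([np.trace(r @ s).real for s in (sx, sy, sz)])
print(np.linalg.norm(bloch(rho)), np.linalg.norm(bloch(rho_out)))  # shrinks
\end{verbatim}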
As mentioned before, these questions have been touched on previously by \textcite{AhnertPayne2005,Aielloetal2007,Sudhaetal2008,Simonetal2010,GamelJames2011}, but we believe them to remain unanswered. The linearity assumption is the easiest to break, and it can be broken from both classical and quantum perspectives. Namely, many transformations of the form of Eq. \eqref{eq:quantum channel Kraus on Stokes} do not lead to linear transformations among the Stokes parameters; an easy example is a unitary operation corresponding to a nonlinear Hamiltonian, such as \eq{ \hat{U}=\exp\left(-\text{i} \chi \hat{S}_3^2\right). } Similarly, not all classical transformations are linear, as with light experiencing the Kerr effect, so we should not expect every operation in the universe to produce a transformation with a Mueller matrix as in Eq. \eqref{eq:Mueller definition Stokes}. In fact, nonlinear polarimetry is a field unto itself that merits its own attention \cite{Bazhenovetal1994,BrasseletZyss2007,Samimetal2016PRA,Samimetal16JOSAB,KrouglovBarzda2019}. These considerations let us refine our current questions to ask whether quantum considerations affect the constraints of classical \textit{linear} polarization transformations that are simply governed by Mueller matrices as in Eq. \eqref{eq:Mueller definition Stokes}. The most tantalizing question is whether quantum transformations alone can be used to restrict the weights in Eqs. \eqref{eq:coherency multiple Jones with weights} and \eqref{eq:Mueller from Jones with weights} to be positive, as conjectured by \textcite{Goldberg2020}. Quantum transformations restricted to the single-photon subspace automatically necessitate the positivity assumption \cite{GamelJames2011}, so our conjecture would be proven if one could prove that all quantum transformations that enforce linear transformations among the Stokes parameters must necessarily take single-photon states to single-photon states, but we have already seen that diattenuations do not maintain a single photon-number subspace. Similarly, our conjecture could be proven if one could show that the only quantum transformations that enable linear transformations among the extended Stokes parameters of Eq. \eqref{eq:two point Stokes parameters} are those that enact linear transformations among the regular Stokes parameters. It would be nice to use the SU(4)--O$^+$(6) homomorphism discussed in the classical context of \textcite{Cloude1986} to attack this problem from a quantum standpoint, but not all Mueller matrices are unitary, so there indeed remains work to be done. We can also continue our classical discussion of the restriction on Mueller matrices to satisfy the transmittance and reverse transmittance conditions. It is evident that two Kraus operators $\hat{K}_1=\mathbf{J}_1$ and $\hat{K}_2=\mathbf{J}_2$ expressed in the single-photon basis as in Eq. \eqref{eq:lossless polarizer Jones} lead to the transformation \eq{ \hat{\rho}_{\mathrm{single\,photon}}\to \sum_{l=1}^2\hat{K}_l\hat{\rho}_{\mathrm{single\,photon}}\hat{K}_l^\dagger=\ket{1}_{\mathrm{R}}\bra{1}\otimes\ket{0}_{\mathrm{L}}\bra{0}. \label{eq:Kraus to single photon} } Somehow, there exists a quantum mechanical transformation that leads to the Mueller matrix of Eq. \eqref{eq:lossless polarizer Mueller}, when restricted to act on single-photon input states, that violates the reverse transmittance condition. Have we uncovered a contradiction between the theories?
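The offending channel is simple to exhibit explicitly (our sketch, restricted to the single-photon subspace in the $\left\{\ket{R},\ket{L}\right\}$ basis):

\begin{verbatim}
import numpy as np

K1 = np.array([[1, 0], [0, 0]], dtype=complex)   # J_1
K2 = np.array([[0, 1], [0, 0]], dtype=complex)   # J_2
assert np.allclose(K1.conj().T @ K1 + K2.conj().T @ K2, np.eye(2))

rng = np.random.default_rng(0)
G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = G @ G.conj().T
rho /= np.trace(rho).real                        # arbitrary single photon

out = K1 @ rho @ K1.conj().T + K2 @ rho @ K2.conj().T
print(out.round(12))                             # always |R><R|
\end{verbatim}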
As with all paradoxes in the technical sense of the word, a resolution awaits. The creation of such an input-agnostic polarization transformer from linear optical devices requires postselection to enable a lossless polarizer \cite{Zhangetal2021arxiv}; in actuality, some light is always lost by such a polarizer. Such a transformation with Kraus operators enacting Eq. \eqref{eq:Kraus to single photon} certainly exists, because it is always possible to create a quantum channel on a finite-dimensional Hilbert space that takes arbitrary input states to a fixed output state \cite{Wuetal2007}, but it raises concerns that we presently address: What about infinite-dimensional input states? Is this transformation really linear? One can concoct a quantum channel that transforms arbitrary input states into a given pure output state $\ket{\psi_{\mathrm{target}}}$ by amassing an infinite set of Kraus operators that transform a complete set of basis states into the desired final state: \eq{ \hat{K}_{m,n}=\ket{\psi_{\mathrm{target}}}\bra{m,n},\qquad \forall\,m,n\in \left\{0,1,\cdots\right\}. \label{eq:Kraus to pure state} } These Kraus operators readily satisfy the normalization constraint of Eq. \eqref{eq:quantum channel Kraus} and so represent a viable physical transformation; similar results can be obtained if the target state is mixed \cite{Wuetal2007}. However, such a channel does not enact a linear transformation among the Stokes parameters, as it creates output states with the same total intensity regardless of the input intensity. We can modify this scheme to create an ideal lossless polarizer that polarizes all input light into the direction $\Omega$ by using the infinite set of Kraus operators \eq{ \hat{K}_{m,n}=\ket{\Omega^{(m+n)}}\bra{m,n},\qquad \forall\,m,n\in \left\{0,1,\cdots\right\}, \label{eq:Kraus to polarized state} } which again satisfies all of the requirements of a quantum channel. This appears to act linearly, as it directly enacts a Mueller matrix with the form of Eq. \eqref{eq:lossless polarizer Mueller}: \eq{ \mathbf{M}=\begin{pmatrix} 1&\mathbf{0}^\top\\ \mathbf{n}_\Omega&\mathbf{0}\mathbf{0}^\top \end{pmatrix} } (as a reminder, the outer product $\mathbf{0}\mathbf{0}^\top$ equals a $3\times 3$ matrix of zeroes). Quantum theory here contradicts the standard assumptions of classical polarization theory. For many reasons, one should not expect transformations with Kraus operators such as those in Eqs. \eqref{eq:Kraus to pure state} and \eqref{eq:Kraus to polarized state} to ever be feasible in practice. The Kraus operators can be physically interpreted as a measure-and-prepare operation that checks what the input state is and outputs some state depending on that input. Concretely, Kraus operators as in Eq. \eqref{eq:Kraus to polarized state} must measure the intensity of the input state and coherently output a perfectly polarized state with that same intensity in order to act like a lossless polarizer. Such a measurement process alone is unlikely to be linear, but an even greater challenge is to create a physical setup that can enact this transformation for arbitrary input intensities (and, likewise, arbitrary superpositions and convex combinations of intensities). Additionally, when considering these Kraus operators as arising from a unitary operation in an enlarged Hilbert space, the unitary must necessarily create a \textit{different} output state when the initial state of the auxiliary system $v$ is different. For example, the single-photon transformation from Eq.
\eqref{eq:Kraus to single photon} must be constructed from a unitary of the form \eq{ \hat{U}&=\ket{R}\bra{R}\otimes\ket{0}_v\bra{0}+\ket{R}\bra{L}\otimes\ket{1}_v\bra{0}+ \ket{L}\bra{R}\otimes\ket{0}_v\bra{1}+\ket{L}\bra{L}\otimes\ket{1}_v\bra{1},\, \mathrm{or}\\ \hat{U}&=\ket{R}\bra{R}\otimes\ket{0}_v\bra{0}+\ket{R}\bra{L}\otimes\ket{1}_v\bra{0}+ \ket{L}\bra{L}\otimes\ket{0}_v\bra{1}+\ket{L}\bra{R}\otimes\ket{1}_v\bra{1}, } up to a relabelling of the auxiliary mode. This performs the correct transformation on $\hat{\rho}_{\mathrm{single\,photon}}\otimes\ket{0}_v\bra{0}$, sending it to a right-handed circularly polarized state, while doing the opposite to $\hat{\rho}_{\mathrm{single\,photon}}\otimes\ket{1}_v\bra{1}$, sending it to a left-handed circularly polarized state, so the device would have to be reset each time to ensure the proper input state in the auxiliary system. The infeasibility of such operations is then a quantum justification for the polarization transmittance conditions that are classically assumed. Future work could certainly bolster this connection, which we have only begun to undertake here. \section{Quantum Polarimetry from the Perspective of Quantum Estimation Theory} \label{sec:quantum polarimetry estimation} The crux of polarimetry is the ability to measure polarization and its changes \textit{well}. There has been significant research showing that techniques from or inspired by quantum theory can be used to outperform their classical counterparts for particular tasks in sensing and metrology \cite{Caves1981,Dowling1998,Mitchelletal2004,Giovannettietal2004,Berryetal2009,LIGO2011,Humphreysetal2013,Tayloretal2013,Tsangetal2016,Liuetal2020}, so quantum polarimetry offers the potential for similar quantum advantages. Polarimetry tasks have been explored directly from a quantum perspective in a variety of guises that we presently review. \subsection{Quantum Fisher Information} A central quantity in quantum sensing applications is the quantum Fisher information matrix (QFIM), which characterizes how sensitive a given probe state is to changes in the parameters being measured. The QFIM differs from its classical counterpart in that it is optimized over all possible measurement strategies such that it depends only on the probe state and the parameters being measured. This facilitates a direct search for the most sensitive probe states, which can be used to produce estimates of the underlying parameters with the lowest possible uncertainties. We start with a few properties of QFIMs \cite{Matsumoto2002,Paris2009,TothApellaniz2014,Szczykulskaetal2016,Braunetal2018,SidhuKok2020,Albarellietal2020,Polinoetal2020,DemkowiczDobrzanskietal2020,Liuetal2021accepted}. Given a set of parameters $\pmb{\theta}$ to be estimated, we wish to find as small a covariance matrix as possible for an estimator $\tilde{\pmb{\theta}}$ \eq{ \mathop{\mathrm{Cov}} \nolimits\left(\tilde{\pmb{\theta}}\right)=\expct{\left(\tilde{\pmb{\theta}}-\pmb{\theta}\right)\left(\tilde{\pmb{\theta}}-\pmb{\theta}\right)^\top}, } where we have assumed unbiased estimators\footnote{Biased estimators have indeed been investigated in the context of quantum metrology but may be harder to implement as they depend on the underlying parameters $\pmb{\theta}$ \cite{Motkaetal2016,BonsmaFisheretal2019}.} such that \eq{ \expct{\tilde{\pmb{\theta}}}=\pmb{\theta}.
} Then, the pivotal result from quantum estimation theory is that the covariance matrix for a measurement repeated $\nu\gg 1$ times is bounded from below by the inverse of the QFIM through the quantum Cram\'er-Rao bound (qCRB): \eq{ \mathop{\mathrm{Cov}} \nolimits\left(\tilde{\pmb{\theta}}\right) \geq \nu^{-1} \mathbf{Q}\left(\pmb{\theta};\hat{\rho}_{\pmb{\theta}}\right)^{-1}. \label{eq:qCRB} } The QFIM is computed from the evolved probe state $\hat{\rho}_{\pmb{\theta}}$ as \eq{ Q_{i j}\left(\pmb{\theta};\hat{\rho}_{\pmb{\theta}}\right)=\mathop{\mathrm{Tr}} \nolimits\left(\hat{\rho}_{\pmb{\theta}}\frac{\left\{\hat{L}_{\theta_i},\hat{L}_{\theta_j}\right\}}{2}\right), } where the symmetric logarithmic derivative operators are implicitly defined through \eq{ \frac{\partial \hat{\rho}_{\pmb{\theta}}}{\partial \theta_i}=\frac{\left\{\hat{L}_{\theta_i},\hat{\rho}_{\pmb{\theta}}\right\}}{2} } and we are using the anticommutator $\left\{A,B\right\}=AB+BA$. For pure probe states with unitary generators \eq{ \frac{\partial \hat{\rho}_{\pmb{\theta}}}{\partial \theta_i}=\text{i}\left[\hat{\rho}_{\pmb{\theta}},\hat{G}_i\right], } the QFIM takes the particularly simple form \eq{ Q_{i j}\left(\pmb{\theta};\hat{\rho}_{\pmb{\theta}}\right)=4\mathop{\mathrm{Cov}} \nolimits_{\hat{\rho}_{\pmb{\theta}}}\left(\hat{G}_i,\hat{G}_j\right), } where we are using the symmetrized covariance $\mathop{\mathrm{Cov}} \nolimits_{\hat{\rho}}\left(A,B\right)=\expct{\left\{A,B\right\}}/2-\expct{A}\expct{B}$ that takes expectation values with respect to state $\hat{\rho}$. When there is a single parameter $\theta$ to be estimated, the qCRB of Eq. \eqref{eq:qCRB} is always saturable. For multiparameter estimation, the bound is saturable when, for all $i$ and $j$, \eq{ \mathop{\mathrm{Tr}} \nolimits\left(\hat{\rho}_{\pmb{\theta}}\left[\hat{L}_{\theta_i},\hat{L}_{\theta_j}\right]\right)=0. \label{eq:commutativity condition} } Many further details about quantum estimation can be found in the above-mentioned reviews; we additionally draw attention to recent work investigating situations in which the QFIM in Eq. \eqref{eq:qCRB} cannot be inverted or changes discontinuously \cite{Safranek2017,GoldbergJames2018Euler,ZhouJiang2019arxiv,Sevesoetal2019,YeLual2021arxiv,Goldbergetal2021singularaccepted}.
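To make these formulas concrete, the following minimal Python sketch (our own illustration, with a randomly chosen state and generator rather than any physically motivated example) computes the QFIM from the symmetric logarithmic derivatives and verifies that it reduces to $4\mathop{\mathrm{Var}}(\hat{G})$ for a pure state evolved by $\hat{U}=\exp\left(-\text{i}\theta\hat{G}\right)$:
\begin{verbatim}
import numpy as np

def qfim(rho, drho, tol=1e-12):
    """Q = sum_{k,l} 2 |<k|drho|l>|^2 / (lam_k + lam_l), lam_k + lam_l > 0."""
    lam, V = np.linalg.eigh(rho)
    d = V.conj().T @ drho @ V            # derivative in the eigenbasis of rho
    denom = lam[:, None] + lam[None, :]
    mask = denom > tol
    return float(np.sum(2 * np.abs(d[mask]) ** 2 / denom[mask]))

rng = np.random.default_rng(7)
dim, theta, eps = 6, 0.3, 1e-6
psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi /= np.linalg.norm(psi)               # random pure probe state
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
G = (A + A.conj().T) / 2                 # random Hermitian generator

def rho_theta(t):
    w, V = np.linalg.eigh(G)             # U(t) = exp(-i t G) by diagonalization
    phi = V @ (np.exp(-1j * t * w) * (V.conj().T @ psi))
    return np.outer(phi, phi.conj())

drho = (rho_theta(theta + eps) - rho_theta(theta - eps)) / (2 * eps)
Q = qfim(rho_theta(theta), drho)
varG = (psi.conj() @ G @ G @ psi).real - (psi.conj() @ G @ psi).real ** 2
print(Q, 4 * varG)                       # agree to finite-difference accuracy
\end{verbatim}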
\subsection{Rotation Measurements} \subsubsection{Phase Estimation} One main quantum-enhanced measurement protocol is mathematically equivalent to a subset of rotation measurements: phase estimation. Many physical processes amount to estimating a single parameter $\theta$ from a unitary of the form $\hat{U}=\exp\left(-\text{i}\theta\hat{G}\right)$. In the language of polarization, this arises when trying to estimate the angle of a polarization rotation about a known axis such as $\mathbf{n}=\mathbf{e}_3$: \eq{ \hat{U}(\theta)=\hat{R}\left(\theta,\mathbf{z}\right)=\exp\left(-\text{i}\theta\hat{S}_3\right). } Using a classical state for such an estimation problem leads to the evolution \eq{ \ket{\psi_{\mathrm{classical}}(\theta)}=\hat{U}(\theta)\left(\ket{\alpha}_{\mathrm{R}}\otimes\ket{0}_{\mathrm{L}}\right)=\ket{\alpha\text{e}^{-\text{i}\frac{\theta}{2}}}_{\mathrm{R}}\otimes\ket{0}_{\mathrm{L}}. } We can compute the QFIM as a function of the average photon number (i.e., energy) of the input light \eq{ H=\expct{\hat{N}} } to find the ``shot-noise-scaling'' QFIM \eq{ Q\left(\theta;\ket{\psi_{\mathrm{classical}}}\right)=H. } A NOON state such as that of Eq. \eqref{eq:NOON} with $\Theta=0$ is much more sensitive, evolving as \eq{ \ket{\psi_{\mathrm{NOON}}(\theta)}=\hat{U}(\theta)\frac{\ket{N,0}+\ket{0,N}}{\sqrt{2}}=\frac{\text{e}^{-\text{i}\frac{N\theta}{2}}\ket{N,0}+\text{e}^{\text{i}\frac{N\theta}{2}}\ket{0,N}}{\sqrt{2}}. } This leads to a ``Heisenberg-scaling'' QFIM \eq{ Q\left(\theta;\ket{\psi_{\mathrm{NOON}}}\right)=H^2, } allowing for much more sensitive measurements than the classical scheme with a commensurate amount of input energy. This advantage in phase estimation is readily extendible to simultaneously estimating multiple phases \cite{Humphreysetal2013}. There, an additional advantage is seen versus the sequential estimation of each of the parameters. Caveats exist in terms of the availability of a reference mode \cite{JarzynaDemkowiczDobrzanski2012,Goldbergetal2020multiphase} and the number of required measurements \cite{GoreckiDemkowiczDobrzanski2021arxiv}, but we cannot consider generalized polarimetry to be a multiphase estimation protocol because the polarimetric parameters are not all mutually independent. Instead, we continue to investigate quantum-enhanced polarimetry along the standard decompositions of polarization transformations. \subsubsection{Rotations about Unknown Axes} A more intricate estimation problem is that of estimating both the rotation angle and rotation axis of an unknown rotation. This has been done using a variety of parametrizations for the three parameters of a rotation \cite{BaumgratzDatta2016,GoldbergJames2018Euler,Frieletal2020arxiv,Houetal2020,Goldbergetal2021rotationspublished}, which can all be unified from a geometrical perspective \cite{Goldbergetal2021intrinsic}. No matter the parametrization of the triad of rotation parameters $\pmb{\theta}$, rotation operations are unitary, per Eq. \eqref{eq:quantum rotation}. This furnishes three unitary generators of the transformation \eq{ \hat{G}_i=\text{i}\frac{\partial \hat{R}\left(\pmb{\theta}\right)}{\partial \theta_i}\hat{R}^\dagger\left(\pmb{\theta}\right), } one for each parameter $\theta_i$. Since these can be shown to be composed of linear combinations of Stokes operators \cite{Goldbergetal2021rotationspublished,Goldbergetal2021intrinsic}, taking the form \eq{ \hat{G}_i=\mathbf{g}_i\cdot\hat{\mathbf{S}}, } they can readily be computed using any representation of SU(2), which is especially straightforward using Pauli matrices. The QFIM then takes the form \eq{ \mathbf{Q}\left(\pmb{\theta};\psi\right)=4\mathbf{G}\left(\pmb{\theta}\right)^\top\mathbf{C}\left(\psi_{\pmb{\theta}}\right)\mathbf{G}\left(\pmb{\theta}\right), \quad \mathbf{G}=\begin{pmatrix} \mathbf{g}_1 &\mathbf{g}_2 &\mathbf{g}_3 \end{pmatrix}, \quad \mathbf{C}_{ij}\left(\psi\right)=\mathop{\mathrm{Cov}} \nolimits_{\ket{\psi}\bra{\psi}}\left(\hat{S}_i,\hat{S}_j\right), } which, by rotating the covariance matrix to that of the unrotated state via $\mathbf{C}\left(\psi_{\pmb{\theta}}\right)=\mathbf{R}\left(\pmb{\theta}\right)^\top\mathbf{C}\left(\psi\right)\mathbf{R}\left(\pmb{\theta}\right)$, allows us to fully separate the parametric dependence of the QFIM from its state dependence: \eq{ \mathbf{Q}\left(\pmb{\theta};\psi\right)=4\pmb{G}\left(\pmb{\theta}\right)^\top\mathbf{C}\left(\psi\right)\pmb{G}\left(\pmb{\theta}\right), \quad \pmb{G}=\mathbf{R}\mathbf{G}. } This allows one to maximize the ``sensitivity covariance matrix'' over all probe states $\ket{\psi}$ without having to worry about the absolute values of the parameters or the parametrization being considered. The qCRB of Eq.
\eqref{eq:qCRB} is a matrix bound, which is hard to uniquely optimize. A scalar bound can be found by using a positive-definite weight matrix $\mathbf{W}$ to produce the lower bound for a certain linear combination of the parameters' variances and covariances \eq{ \mathop{\mathrm{Tr}} \nolimits\left[\mathbf{W}\mathop{\mathrm{Cov}} \nolimits\left(\tilde{\pmb{\theta}},\tilde{\pmb{\theta}}^\top\right)\right] \geq\mathop{\mathrm{Tr}} \nolimits\left[\mathbf{W} \mathbf{Q}\left(\pmb{\theta};\hat{\rho}_{\pmb{\theta}}\right)^{-1}\right]. } When the weight matrix is chosen to be the metric tensor for SU(2), $\mathbf{W}=\boldsymbol{\eta}$, all of the matrices $\pmb{G}$ cancel and we are left with the scalar qCRB for the weighted mean-square error wMSE \cite{Goldbergetal2021intrinsic} \eq{ \mathrm{wMSE}(\tilde{\pmb{\theta}})=\mathop{\mathrm{Tr}} \nolimits\left[\boldsymbol{\eta}\mathop{\mathrm{Cov}} \nolimits\left(\tilde{\pmb{\theta}},\tilde{\pmb{\theta}}^\top\right)\right] \geq\mathop{\mathrm{Tr}} \nolimits\left[\mathbf{C}\left(\psi\right)^{-1}\right]. } This can then be uniquely optimized to find the most sensitive probe states for simultaneously estimating all three parameters of a rotation. Probe states with the smallest values of $\mathrm{wMSE}(\tilde{\pmb{\theta}})$ for a given average energy $H$ have a few properties \cite{GoldbergJames2018Euler,Goldbergetal2020extremal,Goldbergetal2021rotationspublished,Goldbergetal2021intrinsic}: \begin{itemize} \item They are pure states. \item They have definite total spin $S_0=H/2$. \item They are classically unpolarized ($p=0$), meaning that the Stokes operators have vanishing expectation values $\expct{\hat{\mathbf{S}}}=\mathbf{0}$. \item They are unpolarized to second order, making them anticoherent states or Kings of Quantumness to second order, with isotropic sensitivity covariance matrices $\mathbf{C}\left(\psi\right)\propto \mathds{1}$. \end{itemize} The tetrahedral state given in Fig. \ref{fig:Majorana examples} is an example of such an ideal probe state, with other examples also having highly symmetric Majorana constellations. They achieve the Heisenberg-scaling lower bound \eq{ \mathrm{wMSE}(\tilde{\pmb{\theta}})\geq 4\frac{9}{H\left(H+2\right)}, } which can be saturated by an optimal measurement scheme \cite{Goldbergetal2021rotationspublished}. NOON states, by contrast, only have symmetries about a single axis in their Majorana constellations, so they are no longer superior in rotation estimation, instead achieving only shot-noise scaling \eq{ \mathrm{wMSE}(\tilde{\pmb{\theta}})\geq 4\frac{2H+1}{H^2}. \label{eq:NOON state MSE} } Such ideal second-order unpolarized states have been generated using light's orbital angular momentum degree of freedom \cite{Bouchardetal2017} and work is underway to do likewise in the polarization degrees of freedom. The qCRB is saturable for the simultaneous estimation of all three rotation parameters for all pure unpolarized states. This is in stark contrast to classical light, which, as we saw in Eqs. \eqref{eq:uncertainty limit classical state 1} and \eqref{eq:uncertainty limit classical state 2}, obeys stricter inequalities because it violates the commutativity condition of Eq. \eqref{eq:commutativity condition} \cite{Jarzyna2021arxiv}. The nature of the optimal detection scheme for a given probe state thereby depends on the polarization properties of the probe, with the ultimate limit being achieved by second-order unpolarized states of light.
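These properties are straightforward to verify numerically. The sketch below (our own check) constructs the Stokes operators in the $N=4$ subspace as spin-$j$ matrices with $j=N/2$ and compares the sensitivity covariance matrices of a NOON state and of the tetrahedral state; the explicit amplitudes $\left(\ket{j,j}+\sqrt{2}\ket{j,-1}\right)/\sqrt{3}$ used for the latter are the commonly quoted ones and should be checked against the cited references:
\begin{verbatim}
import numpy as np

jj = 2.0                                  # j = N/2 for N = 4 photons
m = np.arange(jj, -jj - 1, -1)            # Jz eigenvalues j, ..., -j
Jp = np.diag(np.sqrt(jj * (jj + 1) - m[1:] * (m[1:] + 1)), 1)  # raising op
S = [(Jp + Jp.T) / 2, (Jp - Jp.T) / 2j, np.diag(m)]            # S1, S2, S3

def cov(psi):
    """Symmetrized covariance matrix C_ik = <{S_i,S_k}>/2 - <S_i><S_k>."""
    e = [(psi.conj() @ s @ psi).real for s in S]
    C = np.zeros((3, 3))
    for i in range(3):
        for k in range(3):
            sym = (psi.conj() @ (S[i] @ S[k] + S[k] @ S[i]) @ psi).real / 2
            C[i, k] = sym - e[i] * e[k]
    return C

noon = np.zeros(5, dtype=complex); noon[0] = noon[4] = 1 / np.sqrt(2)
tet = np.zeros(5, dtype=complex); tet[0] = 1 / np.sqrt(3); tet[3] = np.sqrt(2 / 3)

C_noon, C_tet = cov(noon), cov(tet)
print(np.diag(C_noon))                    # (N/4, N/4, N^2/4) = (1, 1, 4)
print(np.trace(np.linalg.inv(C_noon)))    # 4(2H+1)/H^2 = 2.25 for H = 4
print(np.allclose(C_tet, 2 * np.eye(3)))  # isotropic: C = j(j+1)/3 * identity
print(np.trace(np.linalg.inv(C_tet)))     # 4*9/(H(H+2)) = 1.5 for H = 4
\end{verbatim}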
\subsection{Diattenuation Measurements} It is also possible to obtain quantum enhancements in diattenuation measurements. These are physically equivalent to transmission and absorption measurements, which have been shown to benefit from quantum enhancements \cite{JakemanRarity1986,Heidmannetal1987,Hayatetal1999,MonrasParis2007,Brambillaetal2008,Jietal2008,Adessoetal2009,Alipouretal2014,Crowleyetal2014,Medaetal2017,Nair2018,Loseroetal2018,Ioannouetal2020arxiv}; however, these enhancements are modest, as they do not allow for improved scaling with probe energy beyond the shot-noise limit. \subsubsection{Attenuation} As described above, attenuation enacts the input-output relation \eq{ \hat{a}\to\sqrt{q}\hat{a}+\sqrt{1-q}\hat{v}, } where mode $a$ is being attenuated and mode $v$ is initially in its vacuum state. This evolution also governs absorption, reflection, and transmission measurements, all of which seek to estimate $q$ or a related parameter. A canonical coherent state evolves as \eq{ \ket{\alpha}\to\ket{\sqrt{q}\alpha}=\text{e}^{\left(1-q\right)\left|\alpha\right|^2/2}q^{\hat{a}^\dagger\hat{a}/2}\ket{\alpha}, } yielding the QFIM \eq{ Q(q;\ket{\alpha})=\frac{H}{q},\quad H=\left|\alpha\right|^2. } The optimal quantum strategy, by contrast, uses Fock states $\ket{N}$ that evolve as \eq{ \ket{N}\bra{N}\to\sum_{k=0}^N \binom{N}{k} q^k \left(1-q\right)^{N-k} \ket{k}\bra{k}. } The coefficients in this probability distribution take the same form as for SU(2)-coherent states in Eq. \eqref{eq:coherent state amplitudes} because attenuations are akin to rotations with an ignored auxiliary mode. Fock states have the improved QFIM \eq{ Q(q;\ket{N})=\frac{H}{q\left(1-q\right)}, } which is especially helpful in the limit of small losses $q\approx 1$. No enhanced scaling with $H$ is possible, so the improvements are not as dramatic as for enhanced rotation sensing and can equally be achieved with $N$ single-photon states or a single $N$-photon state. This has been demonstrated with single photons heralded using spontaneous parametric downconversion \cite{YabushitaKobayashi2004,Bridaetal2010,Moreauetal2017,Samantarayetal2017,SabinesChesterkingetal2017,Whittakeretal2017,Yoonetal2020}. Practically speaking, it is often easier to create two-mode squeezed vacuum (TMSV) states of the form \eq{ \ket{\psi_{\mathrm{TMSV}}}=\frac{1}{\cosh \zeta}\sum_{N=0}^\infty \left(-\text{e}^{\text{i}\varphi}\tanh \zeta\right)^N\ket{N,N}, } with average energy $H=2\sinh^2\zeta$, than Fock states $\ket{N}$. In fact, this is what is generated via spontaneous parametric downconversion when no heralding is performed. These states evolve as \eq{ \ket{\psi_{\mathrm{TMSV}}}\bra{\psi_{\mathrm{TMSV}}}\to\sum_{m=0}^\infty\left(1-q\right)^m\ket{\psi_m}\bra{\psi_m} } for the mutually orthogonal, unnormalized states \eq{ \ket{\psi_m}=\frac{1}{\cosh\zeta}\sum_{N=0}^\infty\sqrt{\binom{N+m}{m}} \left(-\text{e}^{\text{i}\varphi}\sqrt{q}\tanh\zeta\right)^N\ket{N,N+m}. } TMSV states have the same QFIM as Fock states to lowest order in $H$, making them similarly useful in the limit of small losses and small $H$ for providing quantum advantages, which has indeed been demonstrated \cite{Tapsteretal1991,SoutoRibeiroetal1997,DAuriaetal2006,Shietal2020,Atkinsonetal2021}.
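The attenuation results above can be checked directly from the counting statistics: an attenuated Fock state yields binomially distributed photon counts whose classical Fisher information already saturates the quoted QFIM, while an attenuated coherent state yields Poissonian counts with Fisher information $H/q$. A minimal numerical check of ours:
\begin{verbatim}
from math import comb, exp, factorial

def fisher(probs, dprobs):
    """Classical Fisher information F = sum_k (p_k')^2 / p_k."""
    return sum(dp * dp / p for p, dp in zip(probs, dprobs) if p > 1e-15)

N, q, eps = 10, 0.7, 1e-6

def binom_pmf(qq):          # photon counting on an attenuated Fock state |N>
    return [comb(N, k) * qq**k * (1 - qq)**(N - k) for k in range(N + 1)]

p0 = binom_pmf(q)
dp = [(a - b) / (2 * eps) for a, b in zip(binom_pmf(q + eps), binom_pmf(q - eps))]
print(fisher(p0, dp), N / (q * (1 - q)))   # these agree: F = N/(q(1-q))

H = 10.0                    # mean photon number |alpha|^2 of the coherent probe
def pois_pmf(qq, kmax=80):  # photon counting on an attenuated coherent state
    lam = qq * H
    return [exp(-lam) * lam**k / factorial(k) for k in range(kmax)]

p0 = pois_pmf(q)
dp = [(a - b) / (2 * eps) for a, b in zip(pois_pmf(q + eps), pois_pmf(q - eps))]
print(fisher(p0, dp), H / q)               # these agree: F = H/q
\end{verbatim}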
\subsubsection{Diattenuation} The estimation of two attenuation factors for orthogonal modes can simply consist of two parallel single attenuation measurements. Seeing that a Fock state $\ket{N}$ is optimal for sensing a single attenuation parameter $q$, a pair of Fock states is ideal for measuring a pair of attenuation parameters: \eq{ \mathbf{Q}\left(q,r;\ket{N_1,N_2}\right)=H\begin{pmatrix} \frac{h_1}{q\left(1-q\right)}&0\\ 0&\frac{h_2}{r\left(1-r\right)} \end{pmatrix}, } where $h_i=N_i/\left(N_1+N_2\right)$ is the energy fraction in each mode. This is a special sort of quantum probe state that is only partially polarized with degree of polarization $p=\left|h_1-h_2\right|$, in contrast to a fully polarized classical probe state that achieves \eq{ \mathbf{Q}\left(q,r;\ket{\alpha_\Omega}\right)=H\begin{pmatrix} \frac{\cos^2\Theta}{q}&0\\ 0&\frac{\sin^2\Theta}{r} \end{pmatrix}. } One could, in theory, use a pair of TMSV states to measure two attenuation parameters in parallel. This, however, presents physical challenges because the two polarization modes being attenuated are in the same spatial mode, making it difficult to entangle each polarization mode with a different external mode. In this multiparameter context, it is possible to directly use a single TMSV state to measure diattenuation, where the two modes are the polarization modes being attenuated. We have not found this idea directly discussed in the literature, so we briefly sketch such a scheme. A diattenuation of the form of Eq. \eqref{eq:quantum diattenuation two input outputs} acting on the two modes of the TMSV leads to the transformation \eq{ &\ket{\psi_{\mathrm{TMSV}}}\bra{\psi_{\mathrm{TMSV}}}\to \hat{\varrho}=\sum_{m,n=0}^\infty \left(\frac{1-q}{q}\right)^m\left(\frac{1-r}{r}\right)^n\ket{\psi_{m,n}}\bra{\psi_{m,n}},\\ &\qquad\ket{\psi_{m,n}}=\frac{1}{\cosh\zeta}\sum_{N\geq m,n}\left(-\text{e}^{\text{i}\varphi}\sqrt{qr}\tanh\zeta\right)^N\sqrt{\binom{N}{m}\binom{N}{n}}\ket{N-m,N-n}. } To lowest order in the amount of loss, the QFIM for this evolved state is \eq{ \mathbf{Q}\left(q,r;\ket{\psi_{\mathrm{TMSV}}}\right)=\frac{H}{2}\begin{pmatrix} \frac{1}{1-q}+\mathcal{O}\left(1\right)&-\frac{3H}{2}-2+\mathcal{O}\left(1-q,1-r\right)\\ -\frac{3H}{2}-2+\mathcal{O}\left(1-q,1-r\right)&\frac{1}{1-r}+\mathcal{O}\left(1\right) \end{pmatrix}. } This evidently performs as well as a pair of Fock states, outperforming coherent states in the small-loss limit, and could readily be used for quantum-enhanced multiparameter diattenuation estimation by taking advantage of the polarization correlations generated by Type II spontaneous parametric downconversion.
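As a numerical companion to this sketch (our own construction; the Fock cutoff, squeezing value, and eigenvalue tolerance are numerical conveniences), one can build the lossy TMSV state in a truncated basis and evaluate the $2\times 2$ QFIM from the symmetric logarithmic derivatives, confirming the leading $1/\left(1-q\right)$ behaviour of the diagonal entries in the small-loss limit:
\begin{verbatim}
import numpy as np
from math import comb

d = 8                                     # Fock cutoff per mode
t = 0.25                                  # tanh(zeta)
H = 2 * t**2 / (1 - t**2)                 # mean photon number 2 sinh^2(zeta)
psi = np.zeros(d * d, dtype=complex)
for n in range(d):
    psi[n * d + n] = (-t) ** n            # TMSV amplitudes with phi = 0
psi /= np.linalg.norm(psi)
rho0 = np.outer(psi, psi.conj())

def kraus(mm, q):
    """Loss Kraus operator removing mm photons from a single mode."""
    K = np.zeros((d, d))
    for n in range(mm, d):
        K[n - mm, n] = np.sqrt(comb(n, mm) * q ** (n - mm) * (1 - q) ** mm)
    return K

def state(q, r):                          # independent loss on the two modes
    rho = np.zeros_like(rho0)
    for mm in range(d):
        for nn in range(d):
            K = np.kron(kraus(mm, q), kraus(nn, r))
            rho += K @ rho0 @ K.conj().T
    return rho

def qfim(rho, drhos, tol=1e-9):
    lam, V = np.linalg.eigh(rho)
    ds = [V.conj().T @ dr @ V for dr in drhos]
    denom = lam[:, None] + lam[None, :]
    mask = denom > tol
    Q = np.zeros((2, 2))
    for i in range(2):
        for k in range(2):
            Q[i, k] = np.sum(2 * np.real(ds[i] * ds[k].conj())[mask] / denom[mask])
    return Q

q = r = 0.99
eps = 1e-5
dq = (state(q + eps, r) - state(q - eps, r)) / (2 * eps)
dr_ = (state(q, r + eps) - state(q, r - eps)) / (2 * eps)
print(qfim(state(q, r), [dq, dr_]))
print("leading prediction for Q_qq:", (H / 2) / (1 - q))
\end{verbatim}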
\subsection{Combined Measurements} It is not always possible to isolate individual Mueller- or Jones-matrix components for estimation; many realistic scenarios require the simultaneous estimation of, say, rotation and diattenuation parameters. As may be expected, there are tradeoffs in the precision with which each of these parameters may be simultaneously estimated due to different probe states being optimally sensitive for different parameters. \subsubsection{Phase and Loss} The study of how to achieve quantum enhancements in the simultaneous estimation of two unrelated polarimetry parameters is much younger than its single-parameter counterpart. \textcite{Crowleyetal2014} first showed that one cannot achieve the ultimate quantum limit in measuring both a rotation angle and an attenuation parameter. Even though the QFIM for estimating these parameters is diagonal, the qCRB is not saturable, precluding an estimate with zero correlation between the two parameters.\footnote{It is still possible to approach the lower bound in some parameter regimes \cite{Albarellietal2019}.} Perhaps the most disappointing result is that the Heisenberg-scaling performance of phase estimation is overwhelmed by the shot-noise scaling performance of loss estimation in such a scenario for most nonzero values of loss, even when one does not desire to estimate the attenuation parameter \cite{DemkowiczDobrzanskietal2009}, due to physical constraints imposed by the Kramers-Kronig relations \cite{Giananietal2021arxiv}; this makes quantum-enhanced polarimetry even more of a challenge. The main conclusion is that one has to determine the relative importance of the parameters being estimated in order to best optimize the tradeoffs in their estimation \cite{Crowleyetal2014}. This type of tradeoff is present in other quantum estimation tasks including the measurement of a phase and a phase diffusion parameter \cite{Vidrighinetal2014,Altorioetal2015,Szczykulskaetal2017,Rocciaetal2017}. Possible methods to circumvent these tradeoffs include having the parameters be correlated \cite{Birchalletal2020} and using postselection \cite{HoKondo2021}. \subsubsection{Ellipsometry} Until now we have not made any distinction between polarimetry and ellipsometry, as they both investigate polarization properties of the electric field. Ellipsometry typically focuses on determining a specific pair of polarization parameters, namely the ratio between two reflection or transmission components and the difference between two phase shifts, which can be cast as the ratio between two diattenuation parameters $\left(1-q\right)/\left(1-r\right)$ or $q/r$ and a rotation angle about the $\mathbf{e}_3$ axis.\footnote{Reflection ellipsometry often singles out two linear polarization components while transmission ellipsometry prefers circular components; the mathematics for both is identical and the physical scenarios can be interconverted using waveplates.} This has been studied from a few quantum perspectives \cite{Abouraddyetal2001,Abouraddyetal2002entangledellipsometry,Toussaintetal2002,Toussaintetal2004,Grahametal2006,Rudnickietal2020,WangAgarwal2021arxiv} that we review here. First, one can show that particular quantum states, such as entangled photon pairs, can be used for measuring these two parameters \cite{Abouraddyetal2001,Abouraddyetal2002entangledellipsometry,Toussaintetal2002,Toussaintetal2004,Grahametal2006,WangAgarwal2021arxiv}. The QFIM for this measurement has recently been calculated and shown to outperform that of coherent states, especially in the small-absorption regime \cite{WangAgarwal2021arxiv}, similar to other attenuation measurement results. Then, the ultimate quantum limit for such a measurement was studied by~\textcite{Rudnickietal2020}, where it was found that certain squeezed states allow for a simultaneous estimate of both ellipsometry parameters that approaches the Heisenberg limit.
This is done by recasting the parameters as eigenvalues of an operator like $\hat{S}_3$ and an exponential-of-the-relative-phase operator \eq{ \hat{E}=\sum_N \hat{E}^{(N)},\qquad\hat{E}^{(N)}=\ket{N,0}\bra{0,N}+\sum_{m=0}^{N-1}\ket{m,N-m}\bra{m+1,N-m-1} } that serves to shift eigenstates of $\hat{S}_3$ by one quantum of angular momentum \cite{LuisSanchezSoto1993} and finding a probe state that minimizes the uncertainty relation between these two operators. The optimal states in the limit of a large input energy are squeezed vacuum states; for finite energy, the probability distribution of the coefficients $\psi_m$ in Eq. \eqref{eq:pure state Nth layer in terms of psim} is the Fourier transform of a Mathieu function. Such optimal ellipsometric measurements have yet to be performed for large input energies $H$. \subsubsection{Depolarization} Finally, we mention that estimating the weights $\lambda_i$ from a general nondeterministic Mueller matrix stemming from Eq. \eqref{eq:Mueller from Jones with weights} can never be done with better than shot-noise scaling. This significant result is due to an argument given by \textcite{Jietal2008} relating quantum estimation to programmable quantum channels. Quantum-enhanced polarimetry must therefore be reserved for particular scenarios in which it is worth the cost of preparing special probe states and the ranges of parameters being estimated fall within the appropriate regions discussed above (pure rotations, small loss, etc.). \subsection{Higher-Order Polarimetry: Measuring Quantum Polarization} No discussion of quantum polarimetry is complete without describing measurements capable of discerning quantum polarization properties about which classical polarimetry is ignorant. A quantum decomposition following the classical form, as given in Eq. \eqref{eq:quantum decomposition into classical}, is not unique, with $\left(N+1\right)^2-4$ remaining degrees of freedom. Similarly, a pure-state decomposition following the classical form \eq{ \ket{\psi^{(N)}}=\sqrt{p}\ket{\psi_{\mathrm{pol}}^{(N)}}+\text{e}^{\text{i}\varphi}\sqrt{1-p}\ket{\psi_{\mathrm{unpol}}^{(N)}},\quad \bra{\psi_{\mathrm{pol}}^{(N)}}\hat{S}_\mu \ket{\psi_{\mathrm{unpol}}^{(N)}}=0 \label{eq:psiN classical decomp} } has $2N-7$ remaining parameters after the degree and direction of polarization have been specified. The quantum polarization properties can be arranged in many ways, all of which deal with the extra degrees of freedom present in quantum states beyond the four given by the Stokes parameters. Quantum tomography can be inspired by classical polarimetry for determining all of the free parameters of a quantum state. By assuming the polarization state to be an arbitrary state of $N$ photons that are not necessarily indistinguishable and thus not necessarily in a symmetric state, measuring the $4^N$ generalized Stokes operators \eq{ \hat{S}_{\mu \nu \cdots \xi}=\sigma_\mu\otimes\sigma_\nu\otimes\cdots\otimes \sigma_\xi } will provide enough information to fully determine the state \cite{Jamesetal2001}, where we have used a first-quantized notation to delineate the operators acting on each photon as $2\times 2$ Pauli matrices. Linear combinations of these operators may be easier to measure for polarimetry \cite{Lingetal2006PRA}, but this does not improve the exponential scaling of the number of measurements required for full tomography.
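For $N=2$, this tomographic prescription can be spelled out in a few lines. The following sketch (our own illustration) measures all $4^2=16$ generalized Stokes parameters of a random two-photon state and rebuilds the state through the standard Pauli expansion $\hat{\rho}=\frac{1}{4}\sum_{\mu,\nu}\expct{\hat{S}_{\mu\nu}}\sigma_\mu\otimes\sigma_\nu$:
\begin{verbatim}
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1.0, -1.0]).astype(complex)
paulis = [s0, s1, s2, s3]

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)                     # random two-photon density matrix

# "Measure" the 16 generalized Stokes parameters <sigma_mu (x) sigma_nu> ...
S = {(mu, nu): np.trace(rho @ np.kron(paulis[mu], paulis[nu])).real
     for mu in range(4) for nu in range(4)}

# ... and rebuild the state from them.
recon = sum(S[mu, nu] * np.kron(paulis[mu], paulis[nu]) / 4
            for mu in range(4) for nu in range(4))
assert np.allclose(recon, rho)
print("state reconstructed from the 16 generalized Stokes parameters")
\end{verbatim}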
Many other such methods have been proposed using tools from maximum likelihood estimation \cite{Hradil1997}, Bayesian inference \cite{BlumeKohout2010}, and principal component analysis \cite{Lloydetal2014PCA}, as well as using mutually unbiased bases \cite{Lawrenceetal2002,Romeroetal2005,Klimovetal2008MUB,AdamsomSteinberg2010}. The number of measurements to be performed can be reduced using techniques from compressed sensing when the quantum state in question is close to being pure in terms of its matrix rank \cite{Grossetal2010,Crameretal2010,Liu2011,Shabanietal2011,Flammiaetal2012,Liuetal2012,Kalevetal2015,Baldwinetal2016,Steffensetal2017,Riofrioetal2017,Bouchardetal19compressedsensing,Gianani2020,GilLopezetal2021arxiv} and these measurements can be done adaptively \cite{Ahnetal2019adaptivecompressionexpt,Ahnetal2019adaptivecompressionnumerics} and have their completeness verified by machine-learning techniques \cite{Teoetal2021}. In practice, overcomplete measurement sets with redundant information tend to be much more useful than minimal measurement sets \cite{deBurghetal2008,Zhu2014}. The SU(2) symmetry underlying quantum polarization dramatically reduces the number of parameters that need to be estimated for full tomography \cite{MankoManko1997,DArianoetal2003,Tothetal2010}. Inspecting Eq. \eqref{eq:pure state Nth layer in terms of psim}, for example, the number of free parameters in a pure $N$-photon state is reduced from $2^{N+1}-2$ to $2N$ and in a mixed state from $4^N-1$ to $\left(N+1\right)^2-1$. This makes the determination of all higher-order polarization properties feasible in practice. How is quantum polarization best determined? The minimal set of projection operators required to fully determine quantum polarization has been identified explicitly, but it is challenging to implement experimentally because it requires entangled measurements \cite{Klimovetal2013}. In an analogous system with SU(2) symmetry, the basis states $\ket{m,N-m}$ can be spatially separated by applying an appropriate magnetic field as in a Stern-Gerlach experiment, facilitating a reconstruction of the quantum state using the projectors \cite{DArianoetal2003} \eq{ \hat{P}_m=\ket{m,N-m}\bra{m,N-m}. } Measurements of these projectors, together with their rotated counterparts \eq{ \hat{P}_m\left(\pmb{\theta}\right)=\hat{R}\left(\pmb{\theta}\right)\ket{m,N-m}\bra{m,N-m}\hat{R}^\dagger\left(\pmb{\theta}\right) } for an appropriate set of rotation angles \cite{MankoManko1997}, can be used to fully determine the properties of a mixed quantum polarization state. This facilitates the experimental reconstruction of the SU(2) Wigner function that completely describes the quantum polarization state \cite{Mulleretal2012}.
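The informational completeness of these rotated projectors within a single photon-number subspace is easy to confirm numerically. In the sketch below (our own, for $N=2$, i.e., spin $j=1$), a handful of randomly chosen rotations already renders the linear-inversion map full rank, so that the $\left(N+1\right)^2=9$ real parameters of the state (including normalization) are recovered exactly:
\begin{verbatim}
import numpy as np

m = np.array([1.0, 0.0, -1.0])                     # Jz eigenvalues for j = 1
Jp = np.diag(np.sqrt(1 * 2 - m[1:] * (m[1:] + 1)), 1)
J = [(Jp + Jp.T) / 2, (Jp - Jp.T) / 2j, np.diag(m)]

def rotation(angles):
    """R = exp(-i angles . J), built by diagonalizing the generator."""
    G = sum(a * Ji for a, Ji in zip(angles, J))
    w, V = np.linalg.eigh(G)
    return V @ np.diag(np.exp(-1j * w)) @ V.conj().T

rng = np.random.default_rng(5)
rows = []
for _ in range(4):                                  # 4 rotations x 3 outcomes
    R = rotation(rng.normal(size=3))
    for k in range(3):
        P = np.outer(R[:, k], R[:, k].conj())       # P_m(theta) = R|m><m|R^dag
        rows.append(P.T.reshape(-1))                # Tr(rho P) = vec(P^T).vec(rho)
M = np.array(rows)
print("rank of the measurement map:", np.linalg.matrix_rank(M))   # 9: complete

A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T; rho /= np.trace(rho)          # random mixed state
probs = M @ rho.reshape(-1)
rho_rec = np.linalg.lstsq(M, probs, rcond=None)[0].reshape(3, 3)
print(np.allclose(rho_rec, rho))                    # True: linear inversion works
\end{verbatim}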
Instead of spatially separating the components $\ket{m,N-m}$, they can be determined by the correlations between the intensities at the two detectors in the SU(2) gadget of Fig. \ref{fig:SU2 gadget}, through a scheme that can be traced back to \textcite{MukundaJordan1966}. First, using the SU(2) gadget with nine specific waveplate orientations allows one to directly measure all of the variances and covariances between the Stokes operators \cite{AgarwalChaturvedi2003}. Next, if only the difference between the intensities at the two detectors is recorded for a sufficient variety of rotation angles, amounting to determining the expectation value of $\hat{P}_m(\pmb{\theta})$ summed over all $N\geq m$, a complete polarization tomogram can be constructed \cite{Bushevetal2001,KarassiovMasalov2004,Karassiov2005,Karassiov2007Pquasispin}. This can also be used to furnish an analysis of the cumulants of the Stokes parameters to arbitrary orders \cite{ChirkinSingh2021}, which is another way of arranging the higher-order polarization properties present in a quantum state. Recording \textit{all} of the intensity correlations for the gadget in Fig. \ref{fig:SU2 gadget} amounts to measuring the expectation values of the correlation operators \cite{Schillingetal2010} \eq{ \hat{W}_m=\hat{a}^{\dagger m}\hat{b}^{\dagger N-m}\hat{a}^m\hat{b}^{N-m} } that, when rotated appropriately, facilitate a complete reconstruction of the quantum polarization state \cite{Israeletal2012}. The intensity correlations can be directly determined using detectors with sufficient photon-number resolution \cite{Bayraktaretal2016}. Alternatively, one can measure correlations between Stokes operators such as \cite{delaHozetal2013} \eq{ \hat{T}_l=\hat{S}_3^l } and their rotated counterparts, where again the particular sets of sufficient rotations have been studied \cite{FilippovManko2010}. These types of correlation measurements allow one to express the quantum polarization properties in a hierarchy of multipole moments \cite{Goldbergetal2021multipolesarxiv} that generalizes the corresponding hierarchy in coherence theory \cite{Wolf2007}. Another method for determining quantum polarization properties is measuring projectors onto SU(2)-coherent states \eq{ \hat{P}\left(\Omega\right)=\ket{\Omega}\bra{\Omega} } and appropriately normalizing the results due to the overcompleteness of these states. This is possible because the Husimi $Q$-function \eq{ q(\Omega)=\expct{\hat{P}\left(\Omega\right)} } contains complete information about a quantum state \cite{Husimi1940}; this requires obtaining $q(\Omega)$ for a sufficient number of coordinates $\Omega$. For estimating polarization rotations, only four such measurements tend to be necessary \cite{Goldbergetal2021rotationspublished}, while the higher-order quantum polarization properties require $\left(N+1\right)^2-1$ measurements along a sufficient set of directions to completely specify the state \cite{HoffmannTakeuchi2004}. The aforementioned compressed sensing techniques can even be used within the symmetric subspace to further enhance the efficiency of quantum polarization measurements for near-pure states \cite{Schwemmeretal2014}, with the current state of affairs reviewed by \textcite{TeoSanchezSoto2021accepted}. We mention in conclusion that the higher-order polarization properties undergo higher-order transformations, leading to objects such as Mueller matrices with dimensions larger than $4\times 4$ \cite{Samimetal2016}. This type of polarimetry could benefit from quantum process tomography and may be useful for understanding high harmonic generation. \section{Evaluating Classical Intuitions in Light of Quantum Polarimetry} \label{sec:classical intuitions} Armed with a full description of quantum polarization and its transformations, what do we have to say to classical physicists studying polarization? Different nuances are required in different scenarios, so we here highlight a few that we deem most relevant.
\subsection{Which Photons are Measured Classically?} No detector measures every photon in an incoming beam of light. Given that the photonic nature underlying classical beams adds extra richness to their polarization properties, how does this affect the outcome of a classical polarization measurement? We here explain that the polarization properties of any subset of photons from a beam of light mimic the polarization properties of the whole beam from a classical standpoint. We begin by inspecting a general quantum state with some fixed number of photons $N$: \eq{ \hat{\rho}^{(N)}=\sum_{m,n=0}^N\rho_{m,n}^{(N)}\ket{m,N-m}\bra{n,N-n}. } Each basis state can be rewritten in a first-quantized description as a symmetric (i.e., permutationally invariant) superposition of single-photon states through \eq{ \ket{m,N-m}=\frac{1}{\sqrt{\binom{N}{m}}}\sum_{\mathrm{permutations}}\ket{\mathrm{R}}^{\otimes m}\otimes\ket{\mathrm{L}}^{\otimes N-m}. } Then, if a detector only receives $N-1$ of the photons comprising the state, we can describe the state seen by the detector as the result of ignoring one of the permutationally invariant photons, which we set to be the final one for convenience: \eq{ \hat{\rho}^{(N-1)}=\mathop{\mathrm{Tr}} \nolimits_{\mathrm{one\,photon}}\left(\hat{\rho}^{(N)}\right)=\sum_{X=\mathrm{R},\mathrm{L}}\bra{X}_N\hat{\rho}^{(N)}\ket{X}_N. } After performing the calculation for each basis state, we find the relationship between the state coefficients to be \eq{ \rho_{m,n}^{(N-1)}=\frac{\sqrt{\left(N-m\right)\left(N-n\right)}}{N}\rho_{m,n}^{(N)}+\frac{\sqrt{\left(m+1\right)\left(n+1\right)}}{N}\rho_{m+1,n+1}^{(N)}, } which may be iterated to find the coefficients for any $\hat{\rho}^{(M)}$ with $0\leq M\leq N$. This immediately yields interesting relationships, such as the fact that discarding any number of photons from a perfectly polarized or completely isotropic state with definite photon number yields a state with identical properties: \eq{ \mathop{\mathrm{Tr}} \nolimits_{\mathrm{one\,photon}}\left(\ket{\Omega^{(N)}}\bra{\Omega^{(N)}}\right)=\ket{\Omega^{(N-1)}}\bra{\Omega^{(N-1)}},\qquad \mathop{\mathrm{Tr}} \nolimits_{\mathrm{one\,photon}}\left(\hat{\mathds{1}}_N\right)=\hat{\mathds{1}}_{N-1}. } Even more, we find that the Stokes parameters for the states remain the same up to the intensity normalization, with \cite{Goldberg2021thesis} \eq{ \frac{1}{N}\mathop{\mathrm{Tr}} \nolimits\left(\hat{\rho}^{(N)}\hat{S}_\mu\right)=\frac{1}{M}\mathop{\mathrm{Tr}} \nolimits\left(\hat{\rho}^{(M)}\hat{S}_\mu\right); } the degree and direction of polarization of any subset of $M$ photons from an $N$-photon beam are identical. This is a scenario in which quantum polarization reinforces classical intuition, even though the description of the light seems radically different from the latter. At the extreme of $M=1$, \textit{every photon that one examines from a beam with $N$ photons will reproduce the same polarization properties as the whole beam} [this was noted for $N=2$ by \textcite{Dograetal2020}].
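These relations are straightforward to verify numerically; the following sketch of ours iterates the coefficient recursion once on a random five-photon state and on an SU(2)-coherent state, confirming that the normalized Stokes vector is unchanged and that coherent states remain coherent:
\begin{verbatim}
import numpy as np
from math import comb

N = 5

def stokes(rho):
    n = rho.shape[0] - 1
    k = np.arange(n + 1)                              # photons in the R mode
    sp = np.diag(np.sqrt((k[:-1] + 1) * (n - k[:-1])), -1)   # a_R^dag a_L
    S = [(sp + sp.T) / 2, (sp - sp.T) / 2j, np.diag(k - n / 2)]
    return np.array([np.trace(rho @ s).real for s in S])

def trace_one_photon(rho):
    """Apply the coefficient recursion quoted above once: N -> N - 1."""
    n = rho.shape[0] - 1
    out = np.zeros((n, n), dtype=complex)
    for a in range(n):
        for b in range(n):
            out[a, b] = (np.sqrt((n - a) * (n - b)) * rho[a, b]
                         + np.sqrt((a + 1) * (b + 1)) * rho[a + 1, b + 1]) / n
    return out

rng = np.random.default_rng(2)
A = rng.normal(size=(N + 1, N + 1)) + 1j * rng.normal(size=(N + 1, N + 1))
rho = A @ A.conj().T; rho /= np.trace(rho)            # random N-photon state
red = trace_one_photon(rho)
print(np.isclose(np.trace(red).real, 1.0))            # trace preserving
print(np.allclose(stokes(rho) / N, stokes(red) / (N - 1)))  # same degree/direction

alpha, beta = 0.6, 0.8j                               # |alpha|^2 + |beta|^2 = 1
def su2_coherent(n):
    v = np.array([np.sqrt(comb(n, k)) * alpha**k * beta**(n - k)
                  for k in range(n + 1)])
    return np.outer(v, v.conj())
print(np.allclose(trace_one_photon(su2_coherent(N)), su2_coherent(N - 1)))
\end{verbatim}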
We can ask the same question when given a beam of light with indeterminate photon number \eq{ \hat{\rho}=\sum_N \rho_N \hat{\rho}^{(N)}. } Following immediately from the above results, we learn that, when $\hat{\rho}\to\mathop{\mathrm{Tr}} \nolimits_{\mathrm{one\,photon}}\left(\hat{\rho}\right)$, the Stokes parameters transform as \eq{ S_\mu=\sum_N\rho_N\mathop{\mathrm{Tr}} \nolimits\left(\hat{\rho}^{(N)}\hat{S}_\mu\right)\to \sum_N\rho_N\frac{N-1}{N}\mathop{\mathrm{Tr}} \nolimits\left(\hat{\rho}^{(N)}\hat{S}_\mu\right). } When the Stokes parameters within each photon-number subspace have the same direction and degree of polarization, i.e., when \eq{ \mathop{\mathrm{Tr}} \nolimits\left(\hat{\rho}^{(N)}\hat{S}_\mu\right)=Ns_\mu } for some $N$-independent constants $s_\mu$, the overall direction and degree of polarization will be independent of the subset of photons that one obtains from the entire beam. If, however, the polarization properties are not the same in the different Fock layers, then inspecting $N$ photons will, in general, lead to different polarization properties than inspecting some other number $M$ of photons. This forces us to distrust classical polarization properties whenever there is a possibility that different photon-number subspaces are polarized to different amounts or in different directions. The stark differences between states of light with fixed and indeterminate numbers of photons emphasize the additional complexities and richness that arise when quantum polarization properties are brought to bear. \subsection{Rotating Unpolarized Light Leads to Measurable Changes} In classical optics, all unpolarized states of light are the same, up to specifying the total intensity of the light. Remarkably, this same conclusion does not follow quantum mechanically, even for the quantum states of light deemed to be closest to their classically described counterparts. To recapitulate, we mentioned in Section \ref{sec:unpolarized states} that certain unpolarized quantum states of light are completely distinguishable from their rotated counterparts after undergoing a polarization rotation. This is a marked difference from classically unpolarized light, as classical polarization dictates that unpolarized states do not transform when subject to polarization rotations. Similar discrepancies abound. One method for obtaining classically unpolarized light is by incoherently mixing two perfectly polarized beams of light with equal intensities and orthogonal polarization components. Quantum mechanically, we similarly find that the state \eq{ \hat{\rho}=\frac{1}{2}\ket{\Omega^{(N)}}\bra{\Omega^{(N)}}+\frac{1}{2}\ket{\Omega_\perp^{(N)}}\bra{\Omega_\perp^{(N)}} \label{eq:mixture of two SU(2) coherent states} } is classically unpolarized, where $\braket{\Omega^{(N)}}{\Omega_\perp^{(N)}}=0$, paralleling the discussion surrounding Eq. \eqref{eq:NOON}. However, this state is far from isotropic: the variances of the Stokes parameters are not all the same. Choosing, for example, $\Omega$ to point along the $\mathbf{e}_3$ axis, we find \eq{ \mathop{\mathrm{Var}}\nolimits_{\hat{\rho}} \hat{S}_1=\mathop{\mathrm{Var}}\nolimits_{\hat{\rho}} \hat{S}_2=\frac{N}{4},\quad \mathop{\mathrm{Var}}\nolimits_{\hat{\rho}} \hat{S}_3=\frac{N^2}{4}, } which is responsible for the suboptimal precision of NOON states in metrology seen in Eq. \eqref{eq:NOON state MSE}. One might think that this anisotropy arises because the states being mixed in Eq. \eqref{eq:mixture of two SU(2) coherent states} are not fully classical.
However, a similar result arises when two canonical coherent states with orthogonal polarization properties are incoherently mixed: \eq{ \hat{\rho}=\frac{1}{2}\ket{\alpha_\Omega}\bra{\alpha_\Omega}+\frac{1}{2}\ket{\alpha_{\Omega_\perp}}\bra{\alpha_{\Omega_\perp}}. } Again choosing $\Omega$ to point along the $\mathbf{e}_3$ axis leads to the anisotropic Stokes parameter variances \eq{ \mathop{\mathrm{Var}}\nolimits_{\hat{\rho}} \hat{S}_1=\mathop{\mathrm{Var}}\nolimits_{\hat{\rho}} \hat{S}_2=\frac{\left|\alpha\right|^2}{4},\quad \mathop{\mathrm{Var}}\nolimits_{\hat{\rho}} \hat{S}_3=\frac{\left|\alpha\right|^2\left(1+\left|\alpha\right|^2\right)}{4}. } These anisotropies mean that \textit{the variance of a Stokes parameter will change if such a classically unpolarized state has its polarization rotated} and, further, that \textit{mixing a right- with a left-handed circularly polarized beam leads to a different state than mixing a horizontally with a vertically polarized beam}. Such information is completely lost in a classical description of polarization, and yet these properties are completely discernible in classical polarization experiments. The only method to circumvent such problems is to acquire completely isotropic states of the form of Eq. \eqref{eq:isotropic state}, which may be obtained by the complete depolarization of a beam of light \eq{ \hat{\rho}\to\int d\pmb{\theta}\, \hat{R}\left(\pmb{\theta}\right)\hat{\rho}\hat{R}\left(\pmb{\theta}\right)^\dagger. } These discrepancies can all be attributed to the plethora of quantum states underlying a single classical decomposition of the form of Eq. \eqref{eq:coherency decomposition}.
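A short numerical confirmation of ours, using a truncated two-mode Fock basis, shows that this fifty-fifty mixture of orthogonally polarized coherent beams indeed has a vanishing Stokes vector yet anisotropic Stokes-operator variances:
\begin{verbatim}
import numpy as np
from math import factorial, sqrt, exp

d = 20                                  # Fock cutoff per mode
alpha = 2.0                             # |alpha|^2 = 4 mean photons per beam

def coh(a):
    v = np.array([exp(-abs(a) ** 2 / 2) * a ** n / sqrt(factorial(n))
                  for n in range(d)])
    return v / np.linalg.norm(v)        # re-normalize after truncation

vac = np.zeros(d); vac[0] = 1.0
psi_R = np.kron(coh(alpha), vac)        # |alpha>_R |0>_L
psi_L = np.kron(vac, coh(alpha))        # |0>_R |alpha>_L
rho = (np.outer(psi_R, psi_R) + np.outer(psi_L, psi_L)) / 2

a_op = np.diag(np.sqrt(np.arange(1, d)), 1)          # annihilation operator
aR, aL = np.kron(a_op, np.eye(d)), np.kron(np.eye(d), a_op)
S1 = (aR.conj().T @ aL + aL.conj().T @ aR) / 2
S2 = (aR.conj().T @ aL - aL.conj().T @ aR) / 2j
S3 = (aR.conj().T @ aR - aL.conj().T @ aL) / 2

for lab, S in [("S1", S1), ("S2", S2), ("S3", S3)]:
    mean = np.trace(rho @ S).real
    var = (np.trace(rho @ S @ S) - mean ** 2).real
    print(lab, round(mean, 6), round(var, 6))
# output: zero means, Var S1 = Var S2 = |alpha|^2/4 = 1,
# but Var S3 = |alpha|^2 (1 + |alpha|^2)/4 = 5
\end{verbatim}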
\subsection{Transformations of the Polarized-Plus-Unpolarized Decomposition} Polarization transformations act linearly on the polarized and unpolarized components in Eqs. \eqref{eq:Stokes decomposition} and \eqref{eq:coherency decomposition}. This means that we can inspect the action of all classical polarization transformations on polarized and unpolarized states \textit{in vacuo} and combine the actions for partially polarized states. We will see here that such a correspondence only works for the quantum decomposition of Eq. \eqref{eq:quantum decomposition into classical}, even though other decompositions have other strong physical motivations. We will enumerate three possible quantum decompositions underlying the classical one, following Eqs. \eqref{eq:quantum decomposition into classical}, \eqref{eq:psiN classical decomp}, and more: \eq{ &\mathrm{I:}\,\,\,\ket{\psi}=\sqrt{p}\ket{\psi_{\mathrm{pol}}}+\text{e}^{\text{i}\varphi}\sqrt{1-p}\ket{\psi_{\mathrm{unpol}}},\quad \bra{\psi_{\mathrm{unpol}}}\hat{S}_\mu\ket{\psi_{\mathrm{pol}}}=0,\\ &\mathrm{II:}\,\,\,\hat{\rho}=p\hat{\rho}_{\mathrm{pol}}+\left(1-p\right)\hat{\rho}_{\mathrm{unpol}},\\ &\mathrm{III:}\,\hat{\rho}=p\hat{\rho}_{\mathrm{pol}}+\left(1-p\right)\hat{\rho}_{\mathrm{isotropic}}. } Each of these decompositions transforms as expected classically under polarization rotations, in that the polarized component remains polarized in a new direction as per the classical rotation of the Stokes vector, the unpolarized component remains unpolarized, and the degree of polarization remains unchanged. Such agreement will not be seen with other polarization transformations. We can also inspect the minimum number of nonclassical polarization degrees of freedom in each decomposition, evaluated within a single $N$-photon subspace, where I, II, and III have $2N-7$, $\left(N+1\right)^2-4$, and $0$ remaining degrees of freedom, respectively. This points to III best resembling the classical decomposition, but we will see that this is not the case following polarization transformations. Diattenuation and depolarization transformations, in general, reduce the purity of an input quantum state. This means that the pure-state decomposition I will no longer hold following a general polarization transformation, so we cannot simply say that quantum superpositions underlie the classical decompositions into polarized and unpolarized fractions. Partial attenuation ruins decomposition III. Choosing $q=1$ and a nontrivial value of $r$ for the attenuation of the second mode, an isotropic unpolarized state transforms into \eq{ \frac{\hat{\mathds{1}}_N}{N+1}\quad\underset{r}{\to}\quad\frac{1}{N+1}\sum_{m=0}^N\sum_{M=m}^{N}\binom{N-m}{M-m}r^{M-m}\left(1-r\right)^{N-M}\ket{m,M-m}\bra{m,M-m}. } The unpolarized component of this state is only isotropic in the vacuum and $M=1$ photon-number subspaces; otherwise, it no longer follows decomposition III. This is why we generally state that only decomposition II, i.e., Eq. \eqref{eq:quantum decomposition into classical}, may underlie the classical decomposition, even though decomposition III is preferable in terms of degrees of freedom and isotropy properties. It is clear from the discussion of unpolarized states that decomposition III will also perform poorly for depolarization transformations. For example, we expect a convex combination of a do-nothing operation and rotation by $\pi$ to be able to completely depolarize an incident polarized state: \eq{ \hat{\rho}_{\mathrm{pol}}\quad\underset{\mathrm{depolarization}}{\to}\quad\frac{1}{2}\hat{\rho}_{\mathrm{pol}}+\frac{1}{2}\hat{R}\left(\pi,\mathbf{n}\right)\hat{\rho}_{\mathrm{pol}}\hat{R}^\dagger\left(\pi,\mathbf{n}\right)=\hat{\rho}_{\mathrm{unpol}}\neq\hat{\rho}_{\mathrm{isotropic}}, } where $\mathbf{n}$ may be any axis orthogonal to the direction of polarization of $\hat{\rho}_{\mathrm{pol}}$. Other depolarizing transformations do indeed suffice for decomposition III, such as that of complete depolarization \eq{ \hat{\rho}_{\mathrm{pol}}\quad\underset{\mathrm{complete\,depolarization}}{\to}\quad\int d\pmb{\theta}\, \hat{R}\left(\pmb{\theta}\right)\hat{\rho}_{\mathrm{pol}}\hat{R}^\dagger\left(\pmb{\theta}\right)=\hat{\rho}_{\mathrm{unpol}}=\hat{\rho}_{\mathrm{isotropic}}. } From these considerations, we see that quantum decompositions I--III may be seen to underlie the classical polarization decomposition in certain circumstances, but only the quantum decomposition into first-order polarized and unpolarized fractions, II, transforms correctly under all classical polarization transformations. We are thus faced with two alternatives: \textit{give up the privileged status of classical polarization transformations or give up the physical intuition of unpolarized states being isotropic under polarization rotations}.
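The failure of decomposition III under the do-nothing/$\pi$-rotation mixture above can be checked in a few lines (our own sketch, in the $N=4$ subspace with the spin representation used earlier): the resulting state has a vanishing Stokes vector yet is manifestly not the isotropic state:
\begin{verbatim}
import numpy as np

N = 4
k = np.arange(N + 1)
sp = np.diag(np.sqrt((k[:-1] + 1) * (N - k[:-1])), -1)       # a_R^dag a_L
S = [(sp + sp.T) / 2, (sp - sp.T) / 2j, np.diag(k - N / 2)]

psi = np.zeros(N + 1, dtype=complex); psi[N] = 1.0           # |Omega^{(N)}>, all R
rho_pol = np.outer(psi, psi.conj())

w, V = np.linalg.eigh(S[0])                                  # pi rotation about e1
R = V @ np.diag(np.exp(-1j * np.pi * w)) @ V.conj().T
rho_unpol = (rho_pol + R @ rho_pol @ R.conj().T) / 2

print([round(np.trace(rho_unpol @ s).real, 6) for s in S])   # [0, 0, 0]
print(np.allclose(rho_unpol, np.eye(N + 1) / (N + 1)))       # False: not isotropic
\end{verbatim}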
\subsection{Higher-Order Correspondences} Quantum polarization dictates that classical polarization properties arise from expectation values of noncommuting operators. Therefore, \textit{we should not expect higher-order moments of Stokes parameters to transform classically}. The Stokes operators themselves transform classically under rotations, so we will leave those aside in this section. We elucidate the present discrepancy with the example of a diattenuation along the $\mathbf{e}_3$ axis, which leads to the classical and quantum transformation \eq{ S_3\quad\underset{q,r}{\to}\quad\frac{q-r}{2}S_0+\frac{q+r}{2}S_3. } Squared, we expect a transformation like $S_3^2\to\left(\frac{q-r}{2}S_0+\frac{q+r}{2}S_3\right)^2$, which is quadratic in the Stokes parameters. Quantum mechanically, however, we must take into account the noncommutativity of the vacuum operators $\left[\hat{v}_i,\hat{v}_j^\dagger\right]=\delta_{i,j}$ before tracing out their modes, leading to \eq{ \hat{S}_3^2\quad\underset{q,r}{\to}\quad\left(\frac{q-r}{2}\hat{S}_0+\frac{q+r}{2}\hat{S}_3\right)^2+\frac{q\left(1-q\right)}{4}\left(\hat{S}_0+\hat{S}_3\right)+\frac{r\left(1-r\right)}{4}\left(\hat{S}_0-\hat{S}_3\right). } This differs from what is expected classically, even though the operators $\hat{S}_0$ and $\hat{S}_3$ commute. In the limit of strong fields, the terms linear in the Stokes operators may perhaps be neglected and one may be permitted to approximate $\expct{\hat{S}_0\hat{S}_3}\approx S_0 S_3$, but the diattenuation parameters must then be considered carefully to ensure that the fields remain sufficiently strong. The deviations from classical predictions are even more pronounced with depolarization transformations of higher-order Stokes parameters. This is because a product of Stokes operators evolving under a Kraus map of the form of Eq. \eqref{eq:quantum channel Kraus on Stokes} undergoes \eq{ \hat{S}_\mu\hat{S}_\nu\to\sum_l\hat{K}_l^\dagger \hat{S}_\mu\hat{S}_\nu\hat{K}_l=\sum_{l,m}\hat{K}_l^\dagger \hat{S}_\mu\hat{K}_m^\dagger \hat{K}_m\hat{S}_\nu\hat{K}_l, } where the operators $\hat{K}_m\hat{S}_\nu\hat{K}_l$ are never guaranteed to equal linear combinations of Stokes operators. In some cases, they do facilitate a decomposition into such linear combinations, but those linear combinations are not the ones predicted classically. One must therefore be leery of all classical polarization transformations calculated with higher-order Stokes parameters and use a quantum formalism for understanding any such process. \section{Conclusions} We have toured quantum polarization and the impacts it has on polarimetry. Along the way, we saw the impacts of quantum polarimetry on its classical counterpart and the possibilities of performing quantum-enhanced polarimetry and quantum polarimetry of manifestly nonclassical properties of light. Light's polarization continues to be a subject of prime interest to novices and practitioners alike and we suspect that more mysteries and features of the theory will be unearthed as we go deeper into the quantum realm. \textit{Acknowledgments:} AZG would like to thank Girish Agarwal, Jos\'e Gil, Khabat Heshami, Daniel James, and Wenlei Zhang for discussions and Luis S\'anchez-Soto for insightful comments on the manuscript.
{ "redpajama_set_name": "RedPajamaArXiv" }
6,599
{"url":"https:\/\/cartesianproduct.wordpress.com\/2011\/11\/27\/the-binomial-distribution-part-1\/","text":"# The binomial distribution, part\u00a01\n\nI think there are now going to be a few posts here which essentially are about me rediscovering some A level maths probability theory and writing it down as an aid to memory.\n\nAll of this is related as to whether the length of time pages are part of the working set is governed by a stochastic (probabilistic) process or a deterministic process. Why does it matter? Well, if the process was stochastic then in low memory situations a first-in, first-out approach, or simple single queue LRU approach to page replacement might work well in comparison to the 2Q LRU approach currently in use. It is an idea that is worth a little exploring, anyway.\n\nSo, now the first maths aide memoire \u2013 simple random\/probabilistic processes are binomial \u2013 something happens or it does not. If the probability of it happening in a unit time period is $p$ (update:\u00a0is this showing up as \u2018nm\u2019? It\u2019s meant to be \u2018p\u2019!) then the probability it will not happen is $1 - p = q$.\u00a0 For instance this might be the probability that an atom of Uranium 235 shows $\\alpha$particle decay (the probability that one U 235 atom will decay is given by its half-life of 700 million years ie., $2.21\\times10^{16}$ seconds, or a probability, if my maths is correct, of a given individual atom decaying in any particular second of approximately $4.4\\times10^{-16}$.\n\n(In operating systems terms my thinking is that if the time pages spent in a working set were governed by similar processes then there will be a half life for every page that is read in. If we discarded pages after they were in the system after such a half life, or better yet some multiple of the half life, then we could have a simpler page replacement system \u2013 we would not need to use a CLOCK algorithm, just record the time a page entered the system and stick it in a FIFO queue and discard it when the entry time was more than a half life ago.\n\nAn even simpler case might be to just discard pages once the stored number reached above a certain \u2018half life\u2019 limit. Crude, certainly, but maybe the simplicity might compensate for the lack of sophistication.\n\nSuch a system would not work very well for a general\/desktop operating system \u2013 as the graph for the MySQL daemon referred to in the previous blog shows, even one application could seem to show different distributions of working set sizes. 
But what if you had a specialist system where the OS only ran one application \u2013 then tuning might work: perhaps that could even apply to mass electionics devices, such as Android phones \u2013 after all the Android (Dalvik) VM is what is being run each time.)","date":"2017-11-23 01:48:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 5, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.49755755066871643, \"perplexity\": 684.3539406884538}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-47\/segments\/1510934806715.73\/warc\/CC-MAIN-20171123012207-20171123032207-00383.warc.gz\"}"}
null
null
Knowledge Adventure, Inc.

3-D Body Adventure MS-DOS - Released - 1994
3-D Body Adventure is a comprehensive human anatomy learning game. This program combines the most exciting games, activities and reference tools designed especially for kids, using advanced 3-D visualization technology. Traveling through a virtual reality human body, 3-D Body Adventure lets kids see the inner workings of the human body like never before. This human anatomy learning adventure is so amazing that your child will be hooked on science forever.

3-D Dinosaur Adventure: Anniversary Edition Windows 3.X - Released - February 1, 1997
The Anniversary Edition is a re-release of the original game 3-D Dinosaur Adventure. New to this version is a separate disc with a small program called "Art-a-Saurus", which lets the player make pictures using paint, moving or non-moving dinosaurs, music, and lots of backgrounds to choose from. Also included with the package is an authentic fossil. 3D Dinosaur Adventure is an educational game that gives information about dinosaurs and early life on Earth. There are nine activities featured in the game, including "Save the Dinosaur", "Dinosaur Movies", "3D Dinosaur Museum", and "Create a Saurus", ...

Casper Brainy Book Windows 3.X - Released - 1995
Your child can discover how much fun reading and learning can be! The Casper Brainy Book builds reading skills with its entertaining interactive Storybook, adapted from the motion picture by an award-winning children's author. The Casper Brainy Book also includes Games and Puzzles that build vocabulary and memory skills. Designed for pre-readers and early readers, Casper's age-sensitive activities continually adapt to your child's ability, so the Casper Brainy Book grows with your child.

The Discoverers
The Discoverers is a multimedia interactive CD with mini-games, based on an IMAX documentary about historical figures who achieved great advances in science. The film itself is based on a book by the Pulitzer winner Daniel Boorstin, and it is included in its entirety on the CD. It can be played from the main screen, with a series of buttons at the bottom to skip forwards or backwards or to access the other portions of the program. Clicking the movie frame at key moments brings up a separate screen showing a related entry about the subject. The exploration screen shows a large picture with...

Dr. Brain Thinking Games: IQ Adventure Windows - Released - 1999
Dr. Brain Thinking Games: IQ Adventure is an educational action game with lots of puzzle elements. The player takes on the role of a test subject of Dr. Brain who is stuck in another universe. To top it off, robots are hot on his heels to dissect his powerful brain! To get back to Earth, the player has to solve various puzzles in order to find the parts required to reconstruct the interstellar travelling machine, and all this before the robots do and find their way to Earth. In fifteen missions through a large variety of different isometric terrains, the player will have to fight, puzzle and talk...

Dr. Brain Thinking Games: Puzzle Madness Windows - Released - October 1, 1998
In Dr. Brain Thinking Games: Puzzle Madness, you are Pro, the good clone of Dr. Brain. You get to play lots of brain-teasing puzzles in your pursuit of Conn, the evil clone of Dr. Brain! To succeed, you will have to use logic, strategy, planning, experimentation, and skill. Complete puzzles to get items and friends, which you will use to defeat Conn and his evil companions.

JumpStart 1st Grade Windows 3.X - Released - July 1, 1995
You and your classmate Frankie go on a treasure hunt created by your teacher, Ms. Nobel. You search for clues and assist the school staff during your hunt. The game consists of several educational mini-games involving addition and subtraction, word recognition, and reading. The player interacts with several colorful characters that provide the premise for each puzzle. Solving a puzzle helps bring the player closer to finding the 3 clues. This game is rated for ages 5-7.

JumpStart Pre-K
JumpStart Pre-K follows (in age range) and builds on the skills learned in JumpStart Preschool in Knowledge Adventure's lineup of learning games created for each grade and age group. In this game for 3 to 5 year olds, the player is taken to the colorful main page: a town map full of clickable icons leading to activities and songs. Each activity begins at the easiest level and can be adjusted manually, or the game will automatically adjust as the player responds with correct answers. All actions are point and click, and there are many clickable places on the map and in each area that...

JumpStart Preschool Year 2: Trucks N Things
Give your child a head start on Kindergarten with JumpStart Preschool Year 2. Designed by teachers, JumpStart Preschool Year 2 invites kids to explore an adorable preschool town while solving puzzles, playing games and singing songs that reinforce important fundamentals.

Kid's Zoo: A Baby Animal Adventure
Kid's Zoo: A Baby Animal Adventure is an interactive book in which kids learn basic facts about baby animals. All the activities are presented as mini-games. For instance, "Who Am I?" is a guessing game that displays photographs of part of an animal. "Photo Safari" challenges you to find one of the animals on the screen. "Who Makes This Sound?" is a matching game between an animal sound and pictures of animals displayed on screen. All in all, Kid's Zoo teaches animal names, sounds, footprints and sizes, along with other facts, such as how long the animals live.

Pyramid: Challenge of the Pharaoh's Dream
Pyramid: Challenge of the Pharaoh's Dream is an educational game about Ancient Egypt. Your character is guided by two of the most important gods: Anubis and Ra. The objective? Make the pharaoh's dream come true: build the pyramid for his eternal rest. In order to do this you will have to use the tools of the time, such as plumb-bobs and squares for construction. In other situations you will have to use other techniques, such as determining true north by following the stars or learning how to make papyrus. You advance through the game by collecting scrolls, your rewards for...

Space Adventure
The universe is at your fingertips in this interactive adventure through space. Read about WWII rockets, eclipses, how humans might live in space colonies, and more. Watch Voyager flybys, Apollo launches, President John F. Kennedy challenging America to go to the Moon, computer simulations, and other videos. By clicking on objects on the screen, you can zoom to other topics on space. Space Adventure even covers some science fiction, too! The whole thing is like a really big encyclopedia covering space, our solar system, space probes, aliens, and other things a budding astronomer...

Speed
This is an interactive movie straight from the IMAX film "Speed". The film is about the never-ending human quest to go faster. It starts out in prehistoric times, when humans' speed was limited to how fast they could run. Other inventions appear and evolve: the wheel gave way to the chariot, the chariot to the automobile, and the automobile to the high-performance sports car of today. The Wright brothers' flyer gave way to the biplane, which eventually gave way to the jet airplane. The film ends with this question: how fast will humans go in the future? The whole...

Steven Spielberg's Director's Chair
With a guide like Steven Spielberg, you get the chance to direct your own movie in this interactive movie-making game. Some of the most talented people in the film industry help you choose and do the right things during your project, and Jennifer Aniston and Quentin Tarantino are the stars of your movie. However, like most interactive movies, the game side is thin: mostly all you have to do is click where you are told and wait to see what happens next.

Undersea Adventure
Undersea Adventure is a 2D adventure that aims to educate children about ocean life. The player explores the sea in a submarine while learning geology, marine biology, geography, oceanography, and ecology through nine mini-games and activities. The game includes hundreds of photos as well as movies of undersea life, and an Undersea Reference encyclopedia that helps with report writing.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,226
Foroudi, P, Dinnie, K, Kitchen, PJ ORCID: https://orcid.org/0000-0002-3128-9527, Melewar, TC and Foroudi, M 2016, 'IMC antecedents and the consequences of planned brand identity in Higher Education', European Journal of Marketing, 51 (3), pp. 528-550. Purpose – This study identifies the IMC antecedents and consequences of brand awareness, image, reputation and identification in the context of higher education, and empirically tests a number of hypotheses related to the aforementioned constructs. Design/methodology/approach – This explanatory research design involved data collection through semi-structured interviews and focus groups in the preliminary stage of the research. This, along with a review of the literature, informed the conceptual framework. The model was tested in a survey carried out among stakeholders in two London-based universities. Structural equation modelling with AMOS was conducted to gain insight into the various influences and relationships. Findings – The study generates new knowledge on the role of a university's brand awareness in image, reputation, and identification formation. Specifically, the authors identify and confirm important key constructs of brand awareness. Furthermore, a conceptual model is proposed of the relationship between brand management elements relevant in the university context and their influence on brand image, brand reputation, and identification. Research limitations/implications – The focus on two UK universities limits the generalisability of the findings; future research should be conducted in other country settings in order to test the relationships identified in the present study. Also, future research could build on the study's findings by investigating the attitudinal and behavioural consequences of brand identification in the higher education context. Practical implications – Professionals responsible for universities' promotional and branding activities need to evaluate the relative contributions of the IMC antecedents of brand awareness. Brand elements such as design, colour and name, for example, should be reviewed to determine whether modifications are required in different international markets. The increasing prevalence of social media, one of the key antecedents of brand awareness, offers opportunities for universities to engage in brand co-creation by interacting with past, present and future students on relevant platforms such as Twitter, YouTube and Facebook. Universities should also devote more attention to their brand personality, as this can influence perceptions held by different stakeholder groups. Finally, the country of origin cue is of particular relevance to institutions of higher education given the increasing numbers of students at both undergraduate and postgraduate levels who are choosing to study abroad. The attraction of the United Kingdom as a country to study in, or the appeal of individual cities such as London, should be fully integrated into universities' IMC strategies. Originality/value – The study makes two main contributions. First, it makes a theoretical contribution by identifying the core IMC antecedents and consequences of brand awareness for universities and from this extrapolates key directions for future research. Second, it indicates a number of managerial implications designed to assist in the formulation of improved professional practice.
{ "redpajama_set_name": "RedPajamaC4" }
4,794
Q: JS Chrome dev tools prints Array with length but no children
You can see in the screenshot that all three Arrays have lengths, but no children. Why does that happen? Is it because garbage collection hasn't run yet and the space for the items is still reserved?

A: What version of Chrome are you on? It is likely you have an empty array of size 10.
var test = { myArray: new Array(10) };
console.log(test);
This results in an array of length 10, but the items are not initialized. In some versions of Chrome, for example, you could not use the map function on such an uninitialized array.

A: I use Chrome 76 as well. In my console it shows the items as empty.
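For illustration, here is a short, hypothetical console session showing how such a sparse array behaves (the variable names are made up for this example, and the exact console rendering, e.g. "empty × 10", differs between Chrome versions):

var sparse = new Array(10);        // sets length, but creates no elements ("holes")
console.log(sparse.length);        // 10
console.log(0 in sparse);          // false - index 0 is a hole, not undefined
console.log(Object.keys(sparse));  // [] - no own indexed properties, hence no children shown

// map() skips holes, so the callback never runs on a fully sparse array:
console.log(sparse.map(function () { return 1; })); // still 10 holes

// fill() materializes the elements:
var filled = new Array(10).fill(0);
console.log(Object.keys(filled).length);            // 10 - now the children appear
console.log(filled.map(function () { return 1; })); // [1, 1, ..., 1]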
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,818
\section{Introduction} The investigation of very high energy cosmic rays and the study of their interactions are inherently connected subjects of astroparticle physics. On one hand the understanding of cosmic ray interactions is needed to study the flux, acceleration and propagation of cosmic rays. For example, the high energy cosmic ray flux can only be measured by linking secondary particle cascades observed in detectors or the Earth's atmosphere to primary particles of certain energy, mass number and arrival direction. Furthermore the knowledge of particle production is needed for the interpretation of secondary particle fluxes due to cosmic ray interactions in various astrophysical environments. On the other hand cosmic rays provide us with a continuous beam of high energy particles that can be exploited for studies of interaction physics at energies and in phase space regions not accessible at man-made accelerators. Cosmic ray research of the last years is characterized by substantial progress in measuring primary and secondary particle fluxes. Examples of new results on the primary cosmic ray flux are the measurements below the knee by AMS, BESS and ATIC, in the knee energy region by KASCADE and TIBET, and at the highest energies by the High Resolution Fly's Eye (HiRes) experiment. Still, some of the experimental results appear contradictory and are the subject of controversial discussion. For example, the results of the composition analyses of the KASCADE and EAS-TOP data seem to be at variance with a first, preliminary analysis of the TIBET data. Similarly, there appears to be a discrepancy between the AGASA measurements of the cosmic ray flux above $10^{20}$\,eV and the new HiRes data. In addition to the measurement of the primary cosmic ray flux, the most powerful method of improving our understanding of cosmic ray physics is the study of secondary particle fluxes. New instruments measuring gamma-rays (CANGAROO, HESS, MAGIC, VERITAS, and Milagro), muons and neutrinos (AMANDA, BAIKAL, NESTOR, and ANTARES) have begun taking data or have successfully performed prototype runs. Regarding cosmic ray physics, they are expected not only to test models of cosmic ray acceleration and interaction in supernova remnants and other astrophysical objects but also to provide valuable clues on cosmic ray composition and the characteristics of high energy particle production. There are many efforts to develop better models for cosmic ray interactions or to derive information on hadronic multiparticle production. The progress in this field is closely linked to measurements of forward multiparticle production in fixed-target and collider experiments. One of the central problems is the consistent implementation of the consequences of the steeply rising parton densities measured in deep inelastic e-p collisions at HERA and of the indications of parton density saturation seen at the Relativistic Heavy Ion Collider RHIC. RHIC data clearly demonstrate the difficulties of extrapolating models tuned to accelerator data to higher energy or other projectile/target combinations. Many models predicted a secondary particle multiplicity exceeding that measured in central Au-Au collisions by $\sim 30$\% or more. The impact of the RHIC data on the extrapolation of cosmic ray interaction models to ultra-high energy is still far from being understood. Measurements of cosmic ray showers and secondary particle fluxes have reached a precision at which they become increasingly important in constraining hadronic interaction models.
Despite providing mainly indirect information on hadronic multiparticle production, they allow the exclusion of extreme model extrapolations and limit exotic physics scenarios. This article presents a summary of recent developments in the field of very high energy cosmic ray physics and related interaction physics, focusing on the contributions presented at the XIII International Symposium on Very High Energy Cosmic Ray Interactions. The plan of the paper is as follows. In Sec.~\ref{sec:flux} the current status of cosmic ray flux measurements is given. Results of different measurements are compared and their dependence on the hadronic interaction models employed for data analysis is discussed. The progress in modeling extensive air showers is outlined in Sec.~\ref{sec:eas-modeling}, focusing on the status and uncertainties of high-energy interaction models. Motivated by the current use of QGSJET as ``standard candle'' interaction model in almost all high energy cosmic ray experiments, uncertainties and features of interaction models are discussed in some detail. The importance of analyzing observables of relevance to cosmic ray physics in experiments at current and future accelerators is emphasized. Sec.~\ref{sec:exotics} gives a short update on the controversial subject of exotic interaction features claimed to be found in emulsion chamber measurements. The very active field of measuring gamma-rays, muons and neutrinos produced in cosmic ray interactions is briefly touched upon in Sec.~\ref{sec:secondaries}. Finally, conclusions and an outlook are given in Sec.~\ref{sec:outlook}. \section{Cosmic ray flux\label{sec:flux}} For understanding very high energy cosmic rays and their sources, the measurement and interpretation of the all-particle flux, the elemental composition (including the $\gamma$-ray fraction), the arrival direction distribution (large scale anisotropy, small scale clustering, and correlation with hypothetical sources), and temporal variations are of central importance. We will briefly discuss these topics beginning with the highest energies. Recent reviews of the experimental situation can be found in~\cite{Anchordoqui:2002hs,Haungs:2003jv,Watson:2003ba,Cronin:2004ye,Engel:2004ui}. \begin{figure*}[!htb] \centerline{ \includegraphics[width=0.8\textwidth]{CR-flux-HEP-scaled2p5-2c.eps} } \vspace*{-8mm} \caption{ Primary cosmic ray flux scaled with $E^{2.5}$. Shown is a selection of recent measurements as discussed at this meeting together with some older data for reference (AGASA~\cite{Takeda:2003aa}, Akeno~\cite{Shinozaki,Nagano:1984db,Nagano:1992jz}, HiRes~\cite{Westerhoff,Abbasi:2002ta,Abu-Zayyad:2002sf}, HiRes-MIA~\cite{Abu-Zayyad:2000ay}, KASCADE~\cite{Haungs1,Ulrich:2004bn}, MSU~\cite{Fomin:2003tp}, RUNJOB~\cite{Shibata,Furukawa:2003dm}, ATIC~\cite{Ahn:2003cz}). For the sake of clarity, the all-particle fluxes from EAS-TOP~\cite{Navarra,Aglietta:1998te} and Tibet~\cite{Ma1,Amenomori:2003tu} are not shown. They cannot be distinguished from the others in this representation. \label{fig:flux} } \end{figure*} \subsection{Ultra-high energy cosmic rays} At the highest energies, the Akeno Giant Air Shower Array (AGASA) and the High Resolution Fly's Eye (HiRes) detectors are the installations with the largest accumulated aperture that have published flux data. The flux data from both experiments are shown in Fig.~\ref{fig:flux}. First of all there is the well-known and often discussed discrepancy between the two data sets at very high energy.
Whereas the AGASA data do not show any sign of the expected GZK cutoff~\cite{Zatsepin} in the energy spectrum \cite{Shinozaki,Takeda:1998ps}, the HiRes results are compatible with such a GZK signature \cite{Westerhoff,Abbasi:2002ta}. The statistical significance of this discrepancy above $10^{20}$\,eV is about 2-3\,$\sigma$ \cite{DeMarco:2003ig}. At lower energy, the overall difference between the measurements is well within the range of the systematic errors of both experiments \cite{Takeda:2002at,Bergman:2003aa}. This also applies to the independent data set from the Yakutsk array~\cite{Knurenko2}, which is characterized by a larger shower-to-shower reconstruction uncertainty of 32 - 46\% as compared to about 20\% for HiRes and AGASA. The Yakutsk array also has an integrated aperture $\sim 40$\% smaller than that of AGASA. Secondly, there is a smooth transition between the HiRes and AGASA data and the lower energy measurements of the prototype instrument HiRes-MIA \cite{Abu-Zayyad:2000ay} and the Akeno air shower array \cite{Nagano:1984db,Nagano:1992jz}, respectively. This might indicate a systematic bias in one or both measurement techniques that could be related to the simulation of ultra-high energy air showers. To estimate the elemental composition of ultra-high energy cosmic rays (UHECR), both AGASA and HiRes have analyzed their data in terms of a two-component proton/iron composition hypothesis. The HiRes analysis is based on the measurement of the depth of shower maximum. Using the hadronic interaction model QGSJET \cite{Kalmykov92e,Kalmykov97a} for interpreting the data, they find a transition to a light composition \cite{Abu-Zayyad:2000ay} that remains proton dominated (80\% protons) between $10^{17}$\,eV and $10^{19.3}$\,eV \cite{Westerhoff,Abbasi:2004nz}. Preliminary results of the re-analysis of the AGASA muon density data measured with the Akeno muon detectors also show a transition to a proton dominated composition. In contrast to the HiRes results, the transition is found to occur gradually over a large energy range. From an iron fraction of about 50\% at $10^{17.5}$\,eV, the iron contribution drops below 20\% at $10^{19}$\,eV \cite{Shinozaki,Shinozaki:2004nh}. Similar to the HiRes analysis, the Yakutsk composition measurement~\cite{Knurenko1} is related to the primary mass sensitivity of the shower depth of maximum. The Yakutsk group find a light composition of about 70-80\% proton and helium in the energy range $5\times 10^{17} - 5\times 10^{18}$\,eV~\cite{Knurenko1}. The most natural interpretation of the changing composition would be the transition from Galactic to extragalactic cosmic rays. The transition energy would be somewhere between $10^{17}$ and $10^{19}$\,eV. Whereas HiRes data are consistent with the interpretation that the ``ankle'' in the cosmic ray spectrum is already a signature of the GZK cutoff ($e^+e^-$-pair production of protons with photons of the CMB) \cite{Bergman:2003wx,Berezinsky:2005cq}, AGASA data favour an interpretation of the ankle as a transition region between Galactic and extragalactic cosmic rays. It is clear, however, that these composition measurements are strongly model dependent as there is a large theoretical uncertainty in predicting electron and muon shower sizes as well as the depth of shower maximum for hadron-induced showers \cite{Drescher2,Ostapchenko2,Pierog}.
An interpretation based on SIBYLL \cite{Fletcher94,Engel99a} or neXus \cite{Drescher:2000ha} gives a heavier composition: about 30 and 50\% iron, respectively (see also Fig.~\ref{fig:xmax-models}). It is unclear which of the model predictions is more realistic and also whether the range of predictions exhausts the range of the theoretical uncertainties (see discussion in Sec.~\ref{sec:eas-modeling}). Moreover there are signs of inconsistencies \cite{Watson,Watson:2003ba}. For example, the muon densities measured for the same showers as used in the depth of maximum analysis of the HiRes-MIA data are similar to or even exceed those expected for iron primaries \cite{Abu-Zayyad:2000ay}. Furthermore investigations based on mass-sensitive observables such as shower disk thickness and shape of the lateral distribution also indicate a heavier composition ($\sim 80-90$\% iron) \cite{Ave:2003ab,Dova:2004nq}. Limits on the primary photon fraction were given by AGASA based on the investigation of showers with muon density information. The fraction of photon-induced showers is found to be smaller than 28\% (67\%) at energies greater than $10^{19}$ ($10^{19.5}$) eV at 95\% CL. A recently developed method of comparing shower-by-shower measurements with theoretical expectations for photon-induced showers \cite{Homola} allows one to derive a limit at even higher energy where the statistics is very sparse: less than 65\% of all showers with energies above $1.25\times 10^{20}$\,eV are initiated by photons (95\% CL) \cite{Risse:2005jr}. Given the limited statistics accumulated until now, the arrival direction distribution of UHECR appears isotropic. There are a number of cosmic rays forming arrival direction multiplets in the AGASA data set (57 events with $E>4\times 10^{19}$\,eV: 6 doublets, 1 triplet) \cite{Shinozaki,Teshima:2003ab}. The statistical significance of this small scale clustering is subject to controversial discussion. If the clustering were found with ``a priori'' chosen values for energy threshold and separation angle, i.e.\ without performing a scan in energy threshold and separation angle, the chance probability would be less than $10^{-4}$. Assuming that such a scan was performed, the chance probability would increase to about $0.3$\% \cite{Finley:2003ur}. The exposure of HiRes in stereo mode has not yet reached that of AGASA\footnote{Viewing showers in monoscopic mode HiRes I has reached a higher accumulated aperture than AGASA for energies above $3\times 10^{19}$\,eV. However, due to the limited and highly asymmetric angular resolution, the HiRes I mono data set is not suited for studying small scale clustering.}. There are 27 events detected in stereoscopic mode above $4\times 10^{19}$\,eV. Using this data set in an autocorrelation analysis no significant small-scale clustering is found. Adding the HiRes stereo data set to that from AGASA only one additional pair is found. The clustering found in the combined data set is estimated to have a chance probability of 1\%. The search for correlations with astrophysical sources is hampered by the incompleteness of catalogs and related object detection and selection biases. Assuming that the source distribution follows that of the luminous matter in the universe it is natural to expect a correlation with the supergalactic plane \cite{Stanev:1995my}. No such correlation is found in the current data sets. 
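As a rough illustration of the scale of the chance probabilities quoted above for small-scale clustering (a back-of-the-envelope estimate, not the analysis actually performed by the experiments): assuming isotropic arrival directions, uniform exposure over the full sky, and a separation angle of $\theta = 2.5^\circ$ (a value chosen here only for illustration), the probability for a second event to fall within $\theta$ of a given event is $(1-\cos\theta)/2$, so the expected number of chance pairs among $N$ events is
\[
\langle N_{\rm pair}\rangle \approx \frac{N(N-1)}{2}\,\frac{1-\cos\theta}{2}
= \frac{57\cdot 56}{2}\,\frac{1-\cos 2.5^\circ}{2} \approx 0.8
\]
for $N=57$, i.e.\ of order one chance pair, to be compared with the several pairs observed by AGASA. The non-uniform exposure of a real experiment increases the expected number, which is why the chance probabilities quoted in the text rely on more detailed calculations.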
There are a number of correlations claimed between the AGASA and Yakutsk data sets and BL Lacs \cite{Tinyakov:2001nr,Gorbunov:2002hk,Tinyakov:2001ir}. With an angular resolution of about $0.7^\circ$, the HiRes stereo data set is ideally suited for such studies. Using the same astrophysical objects, no correlation with the HiRes events is found for energies above $2.4$ and $4\times 10^{19}$\,eV. Recently new indications of a correlation with BL Lacs were found if the energy threshold for comparison is lowered to $10^{19}$\,eV (10 out of 271 showers have arrival directions within $0.8^\circ$ of one of the 156 selected BL Lacs from the Veron catalog) \cite{Gorbunov:2004bs}. The chance probability of finding such a correlation in an un-correlated data set is estimated as 0.1\%, but only a new, independent data set will allow one to assess the significance unambiguously. Taken at face value, the correlation would have to be interpreted as evidence for a small fraction of neutral particles in the UHECR flux. Currently there are two large-aperture detectors for UHECR under construction, the Pierre Auger Observatory \cite{Kampert,Auger} and the Telescope Array (TA)~\cite{Fukushima:2003ig,TA}. Both detector concepts employ the hybrid detection technique of measuring air showers with surface detectors (Auger: water Cherenkov tanks, TA: plastic scintillators) and fluorescence telescopes. The hybrid technique will allow a good energy calibration of showers measured with surface detectors and will improve the capability for composition measurements. After testing the detector design with an engineering array~\cite{Abraham:2004dt}, the construction of the southern Auger observatory in Malargue, Argentina is in full progress \cite{Kampert}. At the time of writing this article, about 800 of the planned 1600 surface detector stations and 50\% of the fluorescence telescopes are completed. Already during the construction phase the integrated aperture of Auger has reached that of 10 years of data taking with AGASA. It is planned to build a similar observatory in the northern hemisphere to obtain nearly uniform full sky coverage. \subsection{Knee energy region} In contrast to previous measurements in the knee energy region (for example, see the compilations in \cite{Hoerandel:2004gv,Haungs:2003jv,Swordy:2002df}) the recent all-particle flux measurements by KASCADE \cite{Haungs1}, EAS-TOP \cite{Navarra}, and Tibet \cite{Amenomori:2003tu} agree with each other within 10\%. The knee is at about $3\times 10^{15}$\,eV and no deviation from a broken power law with a smooth transition region is found within the current experimental resolution (a common parametrization of such a spectrum is given below). Concerning the elemental composition, the situation is less clear. There is increasing evidence for a transition from a mixed to a heavy composition with increasing energy. However, the detailed change of composition through the knee energy range is still unclear. All experimental results discussed at the meeting show a trend towards a heavy composition with increasing energy (KASCADE~\cite{Haungs1}, Tibet~\cite{Ma1}, EAS-TOP~\cite{Navarra}, and SPASE/AMANDA~\cite{Karle}). However, it is difficult to find observables that demonstrate this transition beyond doubt as all composition studies depend very much on the hadronic interaction models used for data interpretation and alternative explanations might be possible, although unlikely (see, for example, \cite{Petrukhin}).
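For orientation, the smoothly broken power law referred to above is often written as (one common parametrization; the precise form varies between analyses)
\[
\frac{d\Phi}{dE} = \Phi_0\, E^{-\gamma_1}
\left[ 1 + \left(\frac{E}{E_{\rm k}}\right)^{\varepsilon} \right]^{-(\gamma_2-\gamma_1)/\varepsilon},
\]
with a spectral index $\gamma_1 \approx 2.7$ below and $\gamma_2 \approx 3.1$ above the knee, a break energy $E_{\rm k} \approx 3\times 10^{15}$\,eV, and a parameter $\varepsilon$ that controls the sharpness of the transition.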
Probably the least model dependent analysis is that of muon-rich and muon-poor showers by the KASCADE Collab.~\cite{Antoni:2002au}, which demonstrates that the knee is mainly caused by a disappearance of the light (i.e.\ muon-poor) flux components. \begin{figure}[htb!] \centerline{ \includegraphics[width=7cm]{qgs_ergeb_allprim_e27_1.eps} } \centerline{ \includegraphics[width=7cm]{sib_ergeb_allprim_e27.eps} } \vspace*{-8mm} \caption{ Flux derived for 5 elemental groups from KASCADE data \cite{Haungs1,Ulrich:2004bn}. The top panel shows the results obtained with QGSJET 01, the bottom panel those with SIBYLL 2.1. \label{fig:KASCADE-comp} } \end{figure} The high-statistics data of the multi-detector setup KASCADE \cite{Antoni:2003gd} allow the analysis of the correlated muon ($E_\mu > 240$\,MeV) and electron sizes of showers in terms of 5 mass groups \cite{Haungs1,Ulrich:2004bn}. Fig.~\ref{fig:KASCADE-comp} shows the results for two hadronic interaction models, QGSJET and SIBYLL. In this analysis, the derived all-particle flux turns out to be almost independent of the hadronic interaction model used. However, the different elemental fluxes vary strongly. For example, using QGSJET the flux appears dominated by helium below the knee with no significant iron contribution. The SIBYLL-based interpretation favours instead helium and carbon below the knee, and a small but significant fraction of iron primaries is needed. Neither model provides a fully consistent description of the KASCADE $N_e$-$N_\mu$ data. The deviations found underline the high statistical accuracy of the KASCADE data and show the need for improving hadronic interaction models. The interpretation of the KASCADE data with both models shows that the flux of the light components exhibits a break in the power law at different energies, with lighter elements having a lower break energy. No spectral break is found for iron in the considered energy range. One of the central questions is that of the scaling of the break energies. Acceleration and propagation models for the knee typically predict rigidity-dependent scaling whereas models with new particle physics lead to mass-dependent scaling. Unfortunately, the strong hadronic interaction model dependence does not allow us to draw conclusions on a possible mass- or rigidity-dependent scaling of the break energies. The composition measurement by the EAS-TOP Collab.~\cite{Navarra,Aglietta:1998te} is based on 3 elemental groups and uses electron and GeV-muon data. It shows the same qualitative behaviour of the elemental groups as seen in the KASCADE analysis. The iron-like mass group does not exhibit a break in its power law spectrum and seems to have a harder power law index than the light component. The knee appears to be caused by elements in the mass range of proton and helium with a break energy of about $3.5\times 10^{15}$\,eV. Some of the air showers measured with the EAS-TOP and SPASE arrays produce high-energy muon bundles that can be detected with MACRO ($E_\mu > 1.3$\,TeV) and AMANDA ($E_\mu > 300$\,GeV), respectively. Again, analyses of the coincidence data sets show a transition from a mixed to a heavy composition \cite{Navarra,Aglietta:2003hq,Karle,Ahrens:2004nn}. The preliminary and statistically limited measurement of the proton flux by the Tibet AS$\gamma$ Collab.~\cite{Amenomori:2003tk} seems to be at variance with KASCADE and EAS-TOP results.
Over the energy range from $2\times 10^{14}$ to $10^{16}$\,eV the proton spectrum is found to follow a power law with the index $3.14 \pm 0.10$. By comparing the Tibet proton spectrum with that of RUNJOB and JACEE, a much lower break energy of $\sim 5\times 10^{14}$\,eV is inferred. The reasons for the discrepancy between the Tibet and KASCADE/EAS-TOP data are not yet understood. However, it should be noted that both KASCADE and EAS-TOP employ in their analysis the electron-muon size correlations in showers whereas the Tibet measurement is based on a neural net analysis of a number of shower observables that combine emulsion chamber information with scintillator data: $N_\gamma$ (multiplicity of a family), $\sum E_\gamma$ (energy sum of a family), $\langle R_\gamma \rangle$ (mean lateral spread of a family), $\langle E_\gamma \cdot R_\gamma \rangle$ (mean lateral spread of the family energy), and $N_e$ (shower size) \cite{Amenomori:2003tk}. Furthermore, all three experiments are at different altitudes and probe different stages of shower evolution. Tibet is at a vertical depth of 606 g/cm$^2$, EAS-TOP at 820 g/cm$^2$ and KASCADE at 1020 g/cm$^2$. The use of different air shower simulations can be ruled out as a cause, as all experiments now apply CORSIKA \cite{Heck98a}. Due to the large statistical errors, the direct flux measurements of individual mass groups by JACEE \cite{Asakimori:1998aa}, RUNJOB \cite{Shibata,Furukawa:2003dm} and ATIC \cite{Ahn:2003de} do not yet impose strong constraints on the air shower data. All composition data discussed here are compatible with possible extrapolations of direct measurements at lower energy. In cosmic ray models that explain the knee by propagation effects (leakage from our Galaxy), an increasing dipole anisotropy of the shower arrival directions is expected (i.e.~\cite{Candia:2003dk}). Analyzing $2\times 10^7$ showers in the knee energy range, the KASCADE group do not find any significant anisotropy signal \cite{Antoni:2003jm}. Similarly, no cosmic ray point sources are seen \cite{Antoni:2004sc}. Being located at higher altitude, the Tibet array has a much lower energy threshold. Again, no Galactic anisotropy was found in the Tibet AS$\gamma$ data. However, using more than $5\times 10^{9}$ showers with $E>3\times 10^{12}$\,eV, the Tibet AS$\gamma$ Collab.\ could detect the dipole anisotropy due to the orbital motion of the Earth around the Sun (Compton-Getting effect) with an amplitude of about 0.1\% \cite{Ma1,Amenomori:2004bf} (see also \cite{Aglietta:1996sz}). \section{Modeling of cosmic ray interactions and EAS\label{sec:eas-modeling}} Given the dependence of the cosmic ray flux and composition measurements on the understanding and modeling of hadronic interactions of cosmic rays and their secondary particles, it is natural to assume that the discrepant results discussed in the previous section can be, at least to some extent, traced back to the use of different models for data interpretation. Several groups have shown that an analysis of the same data set with different hadronic interaction models can lead to a wide range of different results (see, for example, \cite{Aglietta:1998te,Ulrich:2004bn}). This means that the use of the same shower simulation model is a prerequisite for a fair comparison of the results of different experiments. Another important aspect of inter-experiment comparison is the use of sufficiently realistic and accurate shower simulations.
For example, using the same model for shower evolution, one can obtain different interpretations of the data if different observables of the showers are considered \cite{Antoni:2001pw}. Therefore experiments might arrive at contradictory conclusions even if the same shower simulation tools are used. The largest uncertainty in EAS simulation stems from the unknown characteristics of hadronic multiparticle production \cite{Knapp96a,Knapp:2002vs}. As has been realized during the last years, interactions at intermediate energies can also contribute significantly to this uncertainty, though to a smaller extent \cite{Engel99c,Drescher:2002vp,Heck:2003br}. In addition there are uncertainties coming from the treatment of electromagnetic interactions and differences in details of particle transport and decay implementations \cite{Knapp:2002vs}. Motivated by different models of hadronic multiparticle production, three energy regions are distinguished. At very low energy (from close to the particle production threshold up to a few GeV) particle production is characterized by the production and decay of resonances. Knowing all resonances and their decay branching ratios allows one to construct a rather complete model for the interaction cross section and hadronic final states. For example, the codes HADRIN~\cite{Haenssgen86a} and SOPHIA~\cite{Muecke00a} have many resonance channels tabulated. The intermediate energy range up to about $10^3$ GeV can be well understood in a model that describes particle production on the basis of the fragmentation of two color strings (i.e. older versions of FLUKA~\cite{Fasso01a}). At energies above $10^3$ GeV minijet production and multiple parton-parton interactions become important and again require a different modeling. The most frequently used high energy models are DPMJET~\cite{Ranft95a}, neXus~\cite{Drescher:2000ha}, QGSJET~\cite{Kalmykov97a}, and SIBYLL~\cite{Fletcher94}. In the following we will summarize some important recent developments in modeling hadronic interactions and related activities of measuring hadron production in accelerator experiments, and will discuss new trends in air shower simulation. \subsection{Hadronic interactions at high energy} \begin{figure*}[htb!] \centerline{ \includegraphics[width=0.7\textwidth]{contour05_ne_nmu_qgs01_II_sib21.eps} } \vspace*{-8mm} \caption{ Electron-muon size correlation for showers simulated with CORSIKA \protect\cite{Heck-Ostapchenko-private}. Predictions of the old and new versions of QGSJET \cite{Ostapchenko2} are compared with SIBYLL 2.1 \cite{Engel99a}. \label{fig:ne-nmu-plot} } \end{figure*} There are basically four central assumptions that characterize a model's high energy extrapolation \cite{Engel03a,Alvarez-Muniz:2002ne,Ostapchenko:2003sj}: \begin{itemize} \item[(i)] size and energy dependence of the QCD minijet cross section, \item[(ii)] distribution of partons in transverse space (profile function), \item[(iii)] scaling of leading particle distributions or scaling violation, and \item[(iv)] treatment of nuclear effects (semi-superposition model, Gribov-Glauber approximation, increased parton shadowing, etc.) \end{itemize} It is beyond the scope of this article to discuss all models and their differences. We shall concentrate here on aspects relevant to p-air interactions and consider only the two most frequently applied models, QGSJET and SIBYLL. Apart from the treatment of nucleus-nucleus collisions, QGSJET and SIBYLL differ mainly in the first two points \cite{Stanev,Alvarez-Muniz:2002ne}.
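To indicate how the first two ingredients enter (a schematic sketch only, suppressing soft and diffractive contributions; the actual implementations in QGSJET and SIBYLL differ in detail), minijet models obtain the inelastic cross section from an eikonal of the form
\[
\sigma_{\rm inel}(s) = \int d^2b \left[ 1 - e^{-n(b,s)} \right],
\qquad n(b,s) = A(b)\,\sigma_{\rm minijet}(s),
\]
where $\sigma_{\rm minijet}(s)$ is the perturbative minijet cross section of point (i), calculated above the transverse momentum cutoff, and $A(b)$ is the profile function of point (ii), normalized to $\int d^2b\, A(b) = 1$. The exponentiation accounts for multiple parton-parton interactions and damps the power-like growth of $\sigma_{\rm minijet}$, so the predicted cross sections depend sensitively on both assumptions.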
Since version 2.1, ``post-HERA'' parton densities are used in SIBYLL for calculating the minijet cross section whereas QGSJET 01 (and the earlier version QGSJET 98) was developed with older, ``pre-HERA'' parton densities. Another important difference between the models is the treatment of the minijet transverse momentum cutoff needed to restrict the minijet calculation to the perturbative QCD domain. In QGSJET 01 an energy-independent, constant value of 2\,GeV is used. The SIBYLL authors implemented an energy-dependent transverse momentum cutoff whose value is similar to that of QGSJET 01 at low energy but increases to about 8\,GeV at $10^{20}$\,eV \cite{Engel99a}. This was needed as ``post-HERA'' parton density functions predict gluon densities at ultra-high energy that lead to an overlap of the individual gluon wave functions in a proton even for a transverse momentum cutoff as low as 1.5 GeV (see \cite{Gribov83}; a recent review on this subject is \cite{Mueller:2005me}). In this phase space region non-linear evolution equations have to be used to describe parton densities. The expected size of the non-linear corrections is not understood theoretically and is the subject of intense research \cite{Erice04}. There are models that predict an early and total saturation of the gluon density (color glass condensate \cite{Iancu:2003xm}) and others with moderate changes. Many experiments are searching for signs of deviations from linear parton density evolution equations or of gluon density saturation. Although HERA and RHIC are colliders with CMS energies of $\sqrt{s}\sim 200$\,GeV, corresponding to only $2\times 10^{13}$\,eV, they are currently the best instruments for studying saturation effects. At HERA, the parton densities of quarks are measured directly and, through scaling violations, that of gluons is derived. HERA data can be described assuming parton density saturation, for example, in terms of the Golec-Biernat--W\"usthoff model \cite{Golec-Biernat:1998js}, or applying perturbative QCD without any non-linear effects \cite{Forshaw:2004rn,Abramowicz99a}. At RHIC, parton densities cannot be measured directly. However, the scaling of jet rates and other observables with the number of participating nucleons depends on the assumptions on the number of partons in the very gluon-dense environment of a heavy nucleus. Many aspects of RHIC data indicate strong deviations from naive parton model predictions \cite{Lange,Levin:2004ak} but the energy range of the collisions is too limited to show unambiguously that the effects observed so far require parton density saturation \cite{Steinberg:2004ii}. The problem of using ``post-HERA'' parton densities for extrapolating hadronic interactions to ultra-high energy $\sqrt{s} \sim 500$\,TeV within the Quark-Gluon Strings Model \cite{Kaidalov:1983vn} is addressed in the new version of QGSJET, called QGSJET-II \cite{Ostapchenko1}. Different from the SIBYLL approach, the transverse momentum cutoff is kept energy-independent. This is achieved by introducing non-linear effects for partons below the perturbative scale, equivalent to non-linear evolution equations. As one cannot speak of individual partons in the soft, non-perturbative domain, these non-linear effects are implemented as multi-pomeron interactions (enhanced pomeron graphs \cite{Kaidalov86b-e}), which are summed to all orders. Another important improvement is the treatment of diffraction dissociation.
Whereas the old version of QGSJET had a fixed ratio of diffractive to elastic cross sections (grey disk limit, see also \cite{Luna:2004eg}), the new version approaches the black disk limit at high energy (see discussion in \cite{Engel03a}). Furthermore QGSJET-II was tuned to better describe the secondary particle multiplicity at low collision energy. Although the latter changes are of a more technical nature, they are still important. The old version predicted too high a pion multiplicity at low energy, a possible reason why higher GeV-muon multiplicities were obtained for EAS than with other models \cite{Engel:2002id,Heck:2003br}. It is clear that QGSJET-II is a much more theoretically consistent model than the previous versions. There are a number of important consequences of these changes in QGSJET. First of all, the low-$x$ extrapolation of the parton densities becomes less steep than naively expected in linear perturbative QCD. Secondly, the effective parton densities acting in hadron-hadron collisions change depending on the projectile and target mass number $A$ (suppression for large $A$). This leads to the violation of the superposition approximation even for fully inclusive observables.\footnote{In the simplest version of the superposition model, an iron-induced shower is equivalent to 56 proton-induced showers having 1/56 of the shower energy. The superposition model is expected to be valid for inclusive observables.} Thirdly, the fluctuations in inelasticity are considerably reduced. The leading particle distributions are now qualitatively similar to those of SIBYLL and DPMJET \cite{Heck-private}. The impact of the QGSJET improvements on air shower predictions is currently under investigation \cite{Heck-private}. For example, Fig.~\ref{fig:ne-nmu-plot} shows the electron-muon size correlation for vertical EAS at sea level \cite{Heck-Ostapchenko-private}. The contours are iso-lines of the correlation function at half maximum. For a given energy the number of muons is reduced significantly. At the same time 30-40\% more electrons are expected at detector level in the knee energy region. The interpretation of data from experiments that utilize electron and muon numbers as composition-sensitive observables (e.g.\ KASCADE, EAS-TOP) will change towards a heavier composition if QGSJET-II is used. At the highest energies, the predictions of the electron numbers of the old and new QGSJET versions are not very different. Still, it can be expected that the energy calibration of experiments like AGASA would have to be revised downward, though detailed simulations are needed to estimate by what amount. Experiments like Auger~\cite{Kampert} will be much more sensitive to the modifications: the predicted muon density at 600 to 1000\,m from the shower core is reduced by $\sim 30$\%. With QGSJET-II, the interpretation of fluorescence measurements will also change and shift towards a heavier composition, as the mean depth of maximum is increased by $10$\,g/cm$^2$ and $20$\,g/cm$^2$ at $10^{20}$\,eV for proton and iron showers, respectively \cite{Ostapchenko2}. Different assumptions on the QCD minijet cross section, i.e. calculated with or without saturation, lead to enormous differences in the model extrapolations \cite{Engel03a}. The most striking example is the secondary particle multiplicity in p-air collisions. At ultra-high energy, QGSJET 01 predicts a more than 3 times higher secondary particle multiplicity than SIBYLL \cite{Knapp96a,Alvarez-Muniz:2002ne}.
Due to the implementation of new parton densities, the multiplicity of QGSJET-II is even higher than that of QGSJET 01 at ultra-high energy. However, these differences are of very much reduced importance for the evolution of air showers as most of the secondary particles have a very small energy. Of greater importance are two model characteristics that are indirectly linked to the minijet cross section: the total p-air and $\pi$-air cross sections and the distribution of leading secondary particles. \begin{figure}[htb!] \centerline{ \includegraphics[width=7.5cm]{sigmod_p_air_new3.eps} } \vspace*{-8mm} \caption{ Compilation of p-air production cross sections and model predictions (from \cite{Knapp:2002vs}, modified and updated \cite{Heck-private}). \label{fig:p-air-cs} } \end{figure} The total cross section of a model depends at high energy mainly on the minijet cross section and the transverse profile function, which is a measure of how partons are distributed in a hadron \cite{Alvarez-Muniz:2002ne}. It is the differences between the assumed profile functions and minijet cross sections, and the lack of data to distinguish between these assumptions, that lead to widely varying high-energy cross section extrapolations (see Fig.~\ref{fig:p-air-cs}). Cross section measurements over a wide range of energy would help to reduce the model ambiguities. Although hadron-air cross sections are needed for simulation, p-p and p-$\bar{\rm p}$ cross section measurements are also of great interest. Using the Gribov-Glauber approximation, cross sections with air can be estimated using nucleon-nucleon cross section data. At the moment, due to the contradictory total p-$\bar{\rm p}$ cross section measurements at the Tevatron \cite{Rapidis,Albrow99a}, possible theoretical extrapolations are not strongly constrained by experiment \cite{Engel03a,Zha:2003bt}. Therefore the planned total cross section measurement with the TOTEM/CMS detector combination at the LHC \cite{Orava,Avati:2003qj}, corresponding to an equivalent energy of about $10^{17}$\,eV, will be of outstanding importance. All current high energy interaction models are based on the implicit assumption that leading particle distributions scale with energy. The leading particle distributions are tuned to low-energy data and change at high energy only due to energy-momentum conservation effects, as the energy has to be shared between the leading particles and the increasing bulk of low-energy secondary particles. This assumption seems to describe the very sparse data we have on leading particle production up to HERA energy ($\sim 2\times 10^{13}$\,eV) \cite{Engel:1998hf}. However, there are theoretical arguments that the leading particle distributions will change drastically at ultra-high energy \cite{Frankfurt:1997ij}. If parton density saturation indeed occurs at cosmic ray energies, a collision can be viewed as black disk scattering: the gluons completely ``fill'' the target nucleus \cite{Drescher2,Drescher:2004sd,Drescher:2005ak}. Not only the very numerous partons at small $x$ but also the much faster valence quarks will participate in the interaction. Indeed, in a non-peripheral collision (complete saturation) the probability for the valence quarks to scatter off these gluons approaches unity. As a consequence this will lead to the disintegration of the leading valence di-quark: no leading baryon is produced and the elasticity of the collision drops by almost a factor of 2.
Indeed there are some indications of ``anomalous'' baryon stopping in heavy ion collisions, though at much lower energy (for example, \cite{Mitchell:1993cm,Videbaek:2001mi}), and different theoretical interpretations have been put forward (for example, the string junction interpretation \cite{Capella:1996th,Capella:2002sx} and saturation \cite{Itakura:2003jp}). Measurements of hadron production in the very forward direction at RHIC \cite{Lange}, the Tevatron \cite{Rapidis}, and the LHC \cite{Orava} will be needed to study the leading baryon distributions systematically and clarify the situation. The implementation of different scenarios of parton density saturation in the SIBYLL 2.1 code allows a first estimate of the expected effects and their dependence on model parameters \cite{Drescher:2004sd,Drescher:2005ak}. In a conservative scenario the mean depth of shower maximum, $\langle X_{\rm max}\rangle$, is reduced by about 20\,g/cm$^2$ at $10^{19}$\,eV, corresponding to the difference between the SIBYLL and QGSJET 01 predictions. Depending on the assumptions, much larger reductions of $\langle X_{\rm max} \rangle$ are possible. \subsection{Hadronic interactions at intermediate energy} Hadronic interactions in air showers at intermediate energy are often simulated with models like GHEISHA~\cite{Fesefeldt85a} (used in CORSIKA \cite{Heck98a}) or the Hillas splitting algorithm \cite{Hillas81a} (used in MOCCA \cite{Hillas95a} and AIRES \cite{Sciutto:1999jh}). Both models are very fast but rather crude parametrizations of low-energy data or interaction physics. Their application is certainly justified if only the electron/photon component of a shower or calorimetric quantities are studied, as in this case the details of low-energy interactions are of minor importance. The situation is, however, different for muons. Due to successive hadronic interactions, the number of pions and kaons in an air shower increases with decreasing particle energy. Below about 100 (500) GeV, pions (kaons) are more likely to decay than to undergo further interactions (see the estimate below). Therefore, hadronic interactions in the energy range from several GeV to a few hundred GeV are very important for understanding GeV muon production in EAS of all energies \cite{Engel99c,Drescher:2002vp}. Recent studies have shown that the muon density at large lateral distance is indeed very sensitive to the model used for low- and intermediate-energy interactions. The differences between the predictions of the various models are of the order of 10-20\% in the relevant lateral distance range but can be as large as 50\% \cite{Drescher:2003gh,Heck:2003br}. A detailed comparison of low- and intermediate-energy models to available data \cite{Heck:2003br,Heck} shows that GHEISHA does not provide an adequate parametrization of the interaction characteristics. The Hillas splitting algorithm seems to give a somewhat better description, but a thorough comparison to data is hampered by its limitation to only p/$\pi$-air collisions. The best models available are clearly FLUKA~\cite{Fasso01a} and UrQMD~\cite{Bleicher99a}, but there are still significant differences between the predictions of these two models. The energy range up to 400\,GeV is within reach of fixed-target accelerator experiments. Not only can such experiments easily measure with light, air-like nuclei as targets, they can also run with tagged pion and kaon beams and measure particle production in the very forward direction.
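The energy scales quoted above for the competition between decay and interaction follow from a simple order-of-magnitude estimate (the numbers below are only indicative, since the relevant altitudes and interaction lengths vary during shower development). A meson of energy $E$, mass $m$ and proper lifetime $\tau$ has a decay length
\[
l_{\rm dec} = \gamma\, c\tau = \frac{E}{mc^2}\, c\tau ,
\]
to be compared with the local interaction length $l_{\rm int} = \lambda_{\rm int}/\rho_{\rm air}$. Taking for illustration $\lambda_{\rm int} \approx 120$\,g/cm$^2$ and an air density $\rho_{\rm air} \approx 6.6\times 10^{-4}$\,g/cm$^3$ (an altitude of about 6\,km) gives $l_{\rm int} \approx 1.8$\,km. With $c\tau \approx 7.8$\,m and $mc^2 \approx 0.14$\,GeV for charged pions, decay and interaction lengths become equal at a few tens of GeV, rising towards 100\,GeV at higher altitudes where the density is lower; for charged kaons ($c\tau \approx 3.7$\,m, $mc^2 \approx 0.49$\,GeV) the same estimate gives several hundred GeV.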
Recognizing the importance of low-energy measurements for atmospheric neutrino flux predictions \cite{Engel:1999zq} and neutrino factories, a programme was begun to systematically measure pion and kaon production in minimum bias collisions. Examples are the HARP experiment \cite{Barr-Engel,Gomez-Cadenas:2004hy}, for which first data are available now, the NA49 minimum bias p-C run \cite{Barr-Engel}, and the MIPP experiment \cite{Rapidis,Raja:2005sh}, which is currently taking data. \subsection{Information from air shower measurements} It is difficult to obtain information on hadronic multiparticle production at ultra-high energy from EAS measurements. First of all, the primary particle mass is, in general, not known. Secondly, the large number of successive hadronic interactions smears out any striking features of the primary interaction. Therefore most analyses of air shower data in terms of interaction physics are highly indirect and often serve only the exclusion of extreme scenarios (for example, \cite{Drescher2}). One of the most interesting measurements of this kind is the analysis of deeply penetrating air showers to obtain the inelastic proton-air cross section. Traditionally, an exponential function is fitted to the observed $X_{\rm max}$ distribution and the derived absorption length $\Lambda$ is converted to the interaction length via a so-called $k$-factor (for a recent discussion, see \cite{Alvarez-Muniz:2004bx}; the relations involved are sketched below). The HiRes Collab.\ developed a new method that partially avoids the ambiguities of the definition of a $k$-factor \cite{Belov:2003ie}. Applying this method to the stereo data set, a preliminary p-air cross section of $456\pm 17\,{\rm (stat)}\,^{+39}_{-11}\,{\rm (sys)}$\,mb at $10^{18.5}$\,eV is derived \cite{Belov}. This cross section is lower than the current model extrapolations, see Fig.~\ref{fig:p-air-cs}. A number of possible biases still need to be investigated. For example, a small fraction of photon primaries would be enough to spoil the cross section measurement as photon-initiated showers have a much larger depth of maximum. Furthermore, the self-consistency of the method should be checked by applying it to a model that is modified to match the actually measured cross section. Experiments that measure many observables of air showers simultaneously can check the consistency of EAS simulation. For example, the KASCADE installation allows the measurement of the shower size (electrons), muon densities with thresholds of 240, 490, and 2400\,MeV, and hadron multiplicities and energies above 70\,GeV in the shower core~\cite{Haungs1,Zabierowski}. The correlation between the different observables provides constraints on interaction models even if the full range of possible primary particles is considered \cite{Milke1}. The latest versions of the hadronic interaction models available in CORSIKA satisfy these constraints but some earlier versions are clearly at variance with the data. Emulsion chamber experiments at high altitude have the advantage of a low shower energy threshold, reaching into the region of direct primary flux measurements. Therefore they are well suited for testing shower simulation models. For example, by comparing the predicted and measured optical density distribution (i.e. the energy distribution of particles in the TeV range) in emulsions of the Pamir experiment, it was found that some old versions of hadronic interaction models could be excluded \cite{Haungs2,Haungs:2003bx}.
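For reference, the relations underlying the cross section analysis described above are, schematically (assuming a purely exponential tail of the $X_{\rm max}$ distribution),
\[
\frac{dN}{dX_{\rm max}} \propto \exp\left(-\frac{X_{\rm max}}{\Lambda}\right),
\qquad \Lambda = k\, \lambda_{\rm int},
\qquad \lambda_{\rm int} = \frac{\langle m_{\rm air} \rangle}{\sigma^{\rm inel}_{\rm p-air}} ,
\]
where $\langle m_{\rm air} \rangle \approx 2.4\times 10^{-23}$\,g is the mean mass of an air nucleus, so that $\lambda_{\rm int}\,[{\rm g/cm^2}] \approx 2.4\times 10^{4} / \sigma^{\rm inel}_{\rm p-air}\,[{\rm mb}]$; the quoted cross section of 456\,mb thus corresponds to $\lambda_{\rm int} \approx 53$\,g/cm$^2$. The model-dependent factor $k>1$ accounts for the shower-to-shower fluctuations between the depth of the first interaction and $X_{\rm max}$, and it is the spread of $k$ values between models (of order 1.1--1.6 in simulations) that limits the traditional method.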
Finally, it should be noted that there are some indications of a systematic discrepancy between the mass composition derived from $N_e-N_\mu$ based measurements and that from data sensitive to the depth of maximum \cite{Hoerandel1,Hoerandel:2003vu}. It is unclear to what extent these discrepancies are related to shortcomings in the simulation of hadronic interactions, but one can try to bring different measurements into better agreement by modifying the underlying shower simulation correspondingly. A hadronic interaction model with a smaller cross section and a somewhat reduced inelasticity as compared to QGSJET 01 is favoured in this analysis \cite{Hoerandel:2003vu}. \subsection{Trends in EAS simulation} \begin{figure}[htb!] \centerline{ \includegraphics[width=7.5cm]{Xmax.eps} } \vspace*{-8mm} \caption{ Mean depth of shower maximum for proton- and iron-induced showers \cite{Drescher1}. The predictions of different hadronic interaction models as calculated with different shower codes are compared. \label{fig:xmax-models} } \end{figure} In many comparisons of data with theoretical predictions there is a lack of Monte Carlo statistics. For example, the statistics of the KASCADE shower data exceed the simulated statistics available for each interaction model combination by about a factor of 10. A similar disparity of data to simulation statistics is also typical for modern large-scale detectors such as Auger. The problem is amplified further if many observables are used to characterize each shower: the set of simulated showers should by far exceed the number of observed showers to keep the statistical reconstruction errors small. At ultra-high energy, hybrid simulation schemes present a fast and efficient alternative to conventional Monte Carlo simulation techniques. In a hybrid simulation all interactions above a certain energy threshold are simulated with the Monte Carlo technique. Secondary particles that fall below this threshold are taken as sources of subshowers that are treated numerically. The various hybrid schemes available -- for a review see \cite{Drescher1} -- differ mainly in the method of calculating these sub-showers. For example, the sub-showers could be drawn from pre-calculated libraries \cite{Alvarez-Muniz:2002ne}, calculated by solving cascade equations for a Monte Carlo-generated source function \cite{Pierog}, or treated by applying numerical solutions of cascade equations together with analytical approximations or tables of shower evolution \cite{Dedenko,Bossard:2000jh,Drescher:2002cr}. The hybrid codes SENECA \cite{Drescher:2002cr} and CONEX \cite{Pierog} have several interaction models implemented and have reached a precision and sophistication that make them suited for analyzing experimental data \cite{Ortiz:2004gb}. SENECA allows a full 3+1-dimensional simulation of EAS whereas CONEX is currently restricted to calculating the projection along the shower axis. Both codes have been extensively compared to CORSIKA. For example, a comparison of the mean depth of maximum of different models is shown in Fig.~\ref{fig:xmax-models}. The agreement between the different shower simulation codes is excellent. It should also be noted that only with hybrid simulation codes can showers be simulated in large numbers using the time-consuming interaction model neXus 3.97 \cite{Pierog:2002gj}. The simulation of inclined or even upward-going air showers has become increasingly important.
Large-aperture experiments like Auger and EUSO not only have a large sensitivity to nearly horizontal showers but also hope to find neutrino-induced upward-going showers \cite{Kampert,Gorodetzky}. With the exception of CONEX, the currently available shower simulation packages are not optimized for such calculations. Modifications and extensions will be needed to allow detailed and efficient simulation of showers at these particularly interesting geometries. \section{Exotic phenomena and emulsion chamber data\label{sec:exotics}} The most striking, unexpected phenomena observed in emulsion chamber experiments are Centauro events with an exceptionally small number of photons, events with particles or groups of particles aligned along a straight line, halo events characterized by an unusually large area of darkness in the X-ray film, and deeply penetrating cascades \cite{Lattes:1980wk,Slavatinsky:2003zz}. Whether these phenomena are related to fluctuations and the measurement technique of emulsion chamber experiments, or are signs of new physics, has been controversially debated for more than 30 years. In the following we will briefly discuss Centauro events and comment on the status of searches for events with alignment (see \cite{Tamada} for a complete review). There are several aspects that complicate the interpretation of emulsion chamber measurements. First of all, many of these phenomena are observed only in very high energy events with estimated energy greater than $10^{16}$\,eV, of which only a small number, about 100 events, has been collected \cite{Slavatinsky:2003zz}. Secondly, due to the threshold effect of the detectors ($E_\gamma > 1$\,TeV), mainly proton-initiated events are detected \cite{Haungs2}. As is well known, proton showers are characterized by very large shower-to-shower fluctuations (see, for example, \cite{Milke2}). Thirdly, the number of high energy $\gamma$-rays and hadrons cannot be measured directly -- it is obtained by comparing the tracks at various depths in the detector stacks. Probably the most famous exotic emulsion chamber event is Centauro-I, an event with about 40 high-energy hadronic jets and only one low-energy e.m.{} cluster \cite{Lattes:1980wk}, see Fig.~\ref{fig:centauro}. Detected in 1972 in the Chacaltaya emulsion chamber experiment \cite{Lattes:1973aa}, it represents the most extreme Centauro event ever observed. In total, about 10 Centauro events have been observed by the Chacaltaya and Pamir experiments (see \cite{Slavatinsky:2003zz}; a detailed review of all events is given in \cite{Gladysz-Dziadus:2001cq}). No events of this kind were found in the experiments at Mount Fuji and Mount Kanbala. All searches at accelerators were negative as well. Models proposed for explaining Centauro events range from the assumption of a small fraction of exotic primary particles in the cosmic ray flux (for example, strangelets \cite{Wlodarczyk,Rybczynski:2001bw} or quark globs \cite{Bjorken:1979xv}), through exotic interaction scenarios, like the creation of a disoriented chiral condensate \cite{Bjorken:1991xr} or the production of evaporating mini black holes by neutrino primaries \cite{Tomaras,Mironov:2003jw}, to conventional features of diffraction dissociation \cite{Attallah:1993kc}. \begin{figure}[htb!] \centerline{ \includegraphics[width=7.5cm]{Centauro-1.eps} } \vspace*{-8mm} \caption{ Illustration of the original interpretation of the Centauro-I event \cite{Tamada}.
\label{fig:centauro} } \end{figure} Given these ongoing attempts at explaining Centauro events, the results of the recent re-examination of the emulsion chamber plates are of outstanding importance \cite{Tamada,Ohsawa:2004ta}. As it turned out, the tracks in the two chambers that were previously thought to belong to the Centauro-I event are actually due to two different events. The azimuth angle of the tracks in the upper chamber does not match that in the lower one. As no counterpart is found in the upper chamber, this event is still very difficult to understand \cite{Ohsawa:2004ta}. The probability of particles produced in an interaction well above the installation passing through a gap between the upper chambers seems to be very small. A point-like interaction in the upper layers or the wooden support frame is also unlikely: the tracks would then correspond to very high $p_\perp$ particles and should point back to the interaction vertex. No such geometric convergence of the tracks is found. This means that many particles with almost parallel trajectories must have hit the emulsion chamber without interacting in the upper lead stack, again a very exotic scenario that lacks explanation. The situation is similarly controversial regarding the experimental information on events with co-planar particle emission. The Pamir Collab.\ find an excess of events with alignment of the substructures for $E > (8 - 10) \times 10^{15}$\,eV \cite{Borisov:2003th,Kopenkin:1994hu}. The highest energy event measured with an emulsion experiment during a series of Concorde flights (average atm.\ depth $\sim 100$\,g/cm$^2$) also shows impressive alignment \cite{Capdevielle:2001aa}. At lower energy, no excess of coplanar particle production is found. For example, measurements in the energy range below $10^{14}$\,eV by the RUNJOB Collab.~\cite{Galkin:2001aa} and also a direct search with the CERN NA22 experiment at $2.5\times 10^{11}$\,eV \cite{Kopenkin:1994hu} provided distributions that agreed with the expectations. Furthermore, a recent study of the KASCADE Collab.~\cite{Antoni:2005ce} showed that aligned structures in hadronic shower cores at sea level are not related to angular correlations in hadronic interactions as might be expected from jet production \cite{Halzen:1989rg}. Indeed, the fraction of events with alignment is determined only by the lateral distribution of hadrons. Measured in terms of the alignment parameter $\lambda_4$, the event distributions of Pamir and KASCADE data also look surprisingly similar. \section{Gamma-ray, neutrino, and muon flux measurements\label{sec:secondaries}} Secondary particle fluxes such as hadronically produced gamma-rays and neutrinos provide information on the acceleration, propagation and interaction of cosmic rays that is complementary to what can be directly deduced from the locally observed cosmic ray flux \cite{DeRujula,Waxman,Berezinsky}. In particular, gamma-rays and neutrinos propagate on straight trajectories, allowing the identification of the source objects or environments. \subsection{Gamma-rays} With the start of routine operation of the second generation imaging atmospheric Cherenkov telescopes (IACTs) CANGAROO~\cite{Kubo:2004ag}, HESS~\cite{Horns,Hinton:2004eu}, MAGIC~\cite{Fernandez,Bastieri:2005ry}, and VERITAS~\cite{Krennrich:2004ai}, many new TeV gamma-ray sources have been discovered and their spectra measured. It is impossible to summarize the progress in this extremely active and diverse field of research, and any comments made here will soon be outdated.
At the time of writing this article, all four big IACT installations are complete and taking data. Whereas CANGAROO and HESS already have several telescopes online, MAGIC and VERITAS work with single but bigger telescopes. Both the MAGIC and VERITAS Collaborations are in the process of adding another telescope for stereoscopic observation with greatly improved background rejection. The HESS telescope system is characterized by an unprecedented angular resolution of 0.06$^\circ$. The MAGIC telescope has a light-weight design for very fast slewing to observe transient sources. Its low-energy threshold is planned to be about 20\,GeV, as compared to $\sim 70$\,GeV for HESS and VERITAS \cite{Fernandez}. The four telescopes together give almost uniform full-sky coverage. Highlights of the early HESS data taking are certainly the measurement of the gamma-ray flux from the Galactic Center \cite{Aharonian:2004wa} and the first observation of a SNR as a spatially resolved TeV gamma-ray source (RXJ 1713-3946, a possible site of cosmic ray acceleration) \cite{Aharonian:2004vr}. Both sources were previously detected with the CANGAROO telescopes \cite{Tsuchiya:2004wv,Enomoto:2002xk}, but with much more limited resolution. The potential of the HESS telescopes is also underlined by the serendipitous discovery of an unknown TeV gamma-ray source, now called TeV J1303-63, in the field of view of the binary pulsar system PSR B1259-630 \cite{Horns}. In contrast to imaging Cherenkov telescopes, air shower arrays can be used to continuously monitor the gamma-ray sky with very high duty cycle and a wide field of view. The two currently operated installations of this type are Tibet AS$\gamma$~\cite{Ma1,Amenomori:2003zv} and Milagro~\cite{Goodman,Sinnis:2003xv}, located at altitudes of 4300\,m and 2350\,m, respectively. Whereas the Milagro Collab.\ employ active hadron/gamma-ray separation via two layers of PMTs in an 8\,m deep water pond, the Tibet experiment searches for arrival direction anisotropies due to gamma-rays on top of the isotropic cosmic ray background with a dense scintillator array. Both experiments have detected the Crab SNR and the active galaxy Mrk 421 at the $5\sigma$ level \cite{Atkins:2004yb,Amenomori:2005pn}. New results from Milagro are the detection of TeV gamma-rays from the entire inner Galactic plane region and the observation of two extended sources, one coincident with the EGRET source 3EG J0520+2556 and another in the Cygnus region of the Galactic plane \cite{Atkins:2005wu,SazParkinson:2005td}. It is intriguing that the latter source coincides, within the Milagro resolution of about $2^\circ$, with the HEGRA source TeV J2032+4131 \cite{Aharonian:2005ex} and the region from which AGASA reported an excess of $\sim 10^{18}$\,eV cosmic rays \cite{Hayashida:1998qb}. At the moment an interpretation of these observations in terms of a very high energy cosmic ray source (region) is too speculative -- more data will be needed. \subsection{High-energy neutrinos} The interpretation of gamma-ray fluxes from potential cosmic ray sources suffers from ambiguities due to the superposition of different gamma-ray production processes: $\pi^0$ decay, inverse Compton scattering, synchrotron radiation, and bremsstrahlung. These uncertainties are expected to be much smaller for neutrinos, as they are mainly produced in hadronic interactions via the decay of pions and kaons.
Furthermore, neutrinos can travel over large distances virtually unattenuated and are therefore ideal messenger particles, allowing a multitude of astrophysical investigations \cite{Gaisser:1994yf}. On the other hand, their small interaction cross section requires very large effective detector volumes. There are two neutrino telescopes taking data at the moment, AMANDA-II \cite{Karle,Andres:1999hm} and Baikal NT-200 \cite{Tzamarias,Spiering:2004dt}. Although limited by detector size, the sensitivity of both detectors will approach the cascade bound in the coming years\footnote{ The cascade bound, also called the gamma-ray bound, is based on the assumption that all observed extragalactic gamma-rays were produced together with neutrinos in hadronic cascades \cite{Berezinsky,Berezinsky:1975aa}. }, i.e.\ touch the region where one can hope for a discovery. The Waxman-Bahcall bound \cite{Waxman:1998yy} (see also the discussion in \cite{Mannheim:1998wp}), often considered as a reference flux that is guaranteed if protons are the ultra-high energy cosmic rays, cannot be reached with these installations. About 3300 (370) neutrino candidates are found in the AMANDA (Baikal) data taken until the end of 2003. The number of neutrinos and their distribution are compatible with the expectation from atmospheric neutrino production -- neither a diffuse flux of extraterrestrial neutrinos nor astrophysical point sources have yet been discovered. The construction of IceCube, the successor to AMANDA and a much bigger neutrino telescope, is on track \cite{Nygren}. IceCube will have a sensitivity that reaches well into the region below the Waxman-Bahcall bound, promising discoveries and many astrophysical and particle physics applications \cite{Ahrens:2003ix}. The first IceCube string was successfully deployed in February 2004 \cite{Nygren}. The Mediterranean neutrino telescope collaborations (ANTARES~\cite{Sokalski:2005sf}, NESTOR~\cite{Tsirigotis:2004bs}, NEMO~\cite{Migneco:2004yk}) \cite{Tzamarias} have performed prototype installations and test runs. In 2003 the ANTARES Collab.\ operated a prototype sector line with PMTs and a mini instrumentation line at the selected ANTARES site near Toulon (2500\,m water depth). Valuable information on the performance of cables and connectors under the harsh deep-sea conditions was gathered. It is planned to build the complete ANTARES detector of 12 strings and in total 900 PMTs in 2005 -- 2007. NEMO has selected a site close to Sicily with nearly perfect conditions at 3500\,m water depth, which is continuously monitored. The project is in the advanced R\&D stage, with the plan to build a prototype in 2005. The NESTOR site provides a large plateau on the sea floor at about 4000\,m water depth. In 2003 one fully equipped prototype ``star'' of 32\,m diameter of a NESTOR tower was successfully operated, allowing the measurement of the atmospheric muon flux. It is planned to install 7 complete towers by the end of 2006, providing a detector of about 0.15 km$^3$. It is clear that a km$^3$-sized neutrino detector is needed in the northern hemisphere to complement the field of view of IceCube. Therefore the Mediterranean neutrino telescope collaborations have recently joined their efforts to construct such a detector by initiating the design study KM3NeT \cite{KM3NeT}. To measure neutrino fluxes at even higher energy, radio emission of neutrino-induced showers in dense materials can be employed \cite{Learned}.
Several experiments have recently performed measurements and derived first limits on the neutrino flux at ultra-high energy (FORTE~\cite{Lehtinen:2003xv}, GLUE~\cite{Gorham:2003da}, ANITA~\cite{Miocinovic:2005jh}). For example, ANITA is designed to search for neutrinos with $E_\nu > 3\times 10^{18}$\,eV by monitoring radio signals from the antarctic ice cap using a balloon-borne system of antennas \cite{Learned,Miocinovic:2005jh}. A preparatory test flight with a prototype instrument (ANITA-lite) was performed during the 03/04 austral season \cite{Learned}. Already on the basis of the prototype flight, which yielded about 7 days of data, a competitive limit on the ultra-high energy neutrino flux could be derived \cite{Learned}. \subsection{High-energy muons} Atmospheric muons, being a major background for neutrino telescopes, carry valuable information as messengers of hadronic interactions in the atmosphere. Muons are also directly linked to atmospheric neutrino production and can be used to test predictions of neutrino fluxes as needed for oscillation parameter analyses \cite{Gaisser:2002jj,Brancus}. Particularly interesting is the comparison of muon fluxes to simulations performed with the same codes as used for air shower analyses \cite{LeCoultre,Ridky,Brancus,Ma2}. Of course, muon flux predictions depend on both the hadronic interaction model used and the assumed primary cosmic ray flux. \begin{figure}[htb!] \centerline{ \includegraphics[width=6.5cm]{l3c+theo.eps} } \vspace*{-8mm} \caption{Vertical atmospheric muon flux as measured by L3+C. The upper panel shows the flux of all muons and the lower panel the charge ratio. The data are compared with different theoretical predictions \cite{LeCoultre,Achard:2004ws}. \label{fig:L3C-muons} } \end{figure} Very precise high-energy muon measurements can be carried out by particle physics detectors at colliders. For example, Fig.~\ref{fig:L3C-muons} shows the inclusive muon flux and charge ratio measured by the L3 Collab.~\cite{Achard:2004ws}. The experimental results for vertical muons are compared to different theoretical predictions. In this case, in contrast to EAS simulations, QGSJET01 predicts a smaller muon flux than SIBYLL 2.1 (cf.\ Fig.~\ref{fig:ne-nmu-plot}). Hadronic interaction models tuned for muon and neutrino flux calculations give a better description of the data \cite{Ma2}. As known from simulations at lower energy \cite{Brancus,Wentz:2003bp}, the muon charge ratio is found to be very sensitive to the production of fast secondary hadrons. None of the models implemented in CORSIKA gives a good overall description of the data. The deficit of muons found relative to CORSIKA simulations is also seen by two other LEP experiments. The DELPHI and ALEPH groups find that the number of high energy muon bundles cannot be described by the hadronic interaction models available in CORSIKA, even assuming a completely iron-dominated primary composition \cite{Ridky:2005mx,Avati:2000mn}. At higher energy, the AMANDA and NESTOR collaborations have also measured the atmospheric muon flux in the TeV energy region \cite{Karle,Tzamarias}. These measurements are integral flux determinations because of the very limited energy resolution of neutrino telescopes. The AMANDA Collab.\ have also compared their measurement to simulations with CORSIKA and find good agreement within the experimental uncertainties \cite{Karle}.
\section{Conclusions and outlook\label{sec:outlook}} The flux and composition of ultra-high energy cosmic rays are still very uncertain because of the low statistics of showers observed so far and the model-dependence of the shower data interpretation. Significant progress in this field is expected from new large-aperture installations -- the Pierre Auger Observatory and the Telescope Array \cite{TA}. To study the flux at even higher energies, $\sim10^{21}$\,eV, with sufficient statistics, new techniques will be required. Observing the atmosphere from outer space is one possible solution to increase the aperture further (for example, EUSO \cite{Gorodetzky}). Another possibility could be the use of radio antenna arrays in analogy to arrays of particle detectors \cite{Falcke:2004aw}. The situation is similar in the knee energy region. Although the all-particle flux is known rather well, there are large uncertainties in the composition. Nevertheless, a clear trend from a mixed composition at low energy to a predominantly heavy one above the knee is seen in all recent measurements. The experimental errors are completely dominated by the systematic uncertainties due to our limited understanding of hadronic interactions at high energy and in the forward direction. The dependence on air shower simulations can be reduced by combining different detection techniques to measure qualitatively different shower observables at the same time. Experiments of this type are the Pierre Auger Observatory, TA, and IceCube/IceTop. Whereas there are several detectors measuring the cosmic ray flux in the knee region and at ultra-high energy, there is a lack of data in the energy range $10^{17} - 10^{19}$\,eV. It is clear that the latter range is of great interest as it is expected to cover the transition from Galactic to extragalactic cosmic rays. KASCADE-Grande and IceTop will measure showers only up to $10^{17.5}$\,eV with good statistics. Therefore it is worthwhile to upgrade large-aperture instruments such as Auger or TA to extend their energy range down to $10^{17}$\,eV. The field of emulsion chamber measurements is still full of mysteries. After more than 30 years, the interpretation of one of the most famous emulsion chamber events, Centauro-I, has changed completely, now being even more difficult to explain. Not a single non-emulsion experiment could confirm any of the claimed exotic event features. Substantial progress in this field can only be expected from new measurements combining large-aperture emulsion stacks and modern particle detectors. Many of the questions related to cosmic rays and astroparticle physics can only be solved by measuring and understanding secondary particle fluxes. There has already been enormous progress in the field of gamma-ray and neutrino measurements, and much more can be expected in the coming years. The second generation imaging air Cherenkov telescopes will provide high resolution images of TeV gamma-ray sources, and water/ice detectors of km$^3$ size will probe the neutrino flux in the same energy range. Neutrino fluxes at ultra-high energy will be searched for by large-aperture air shower installations and dedicated radio signal experiments. The field of high-energy neutrino astronomy is still in its infancy, but with a bright future ahead. For all these measurements and related data analyses, detailed simulations are very important.
Shower simulation tools in general and hadronic interaction models in particular should be improved continuously and tested by comparing them with a large variety of data. During the last decade great progress was achieved by the introduction of multi-purpose code packages such as CORSIKA and AIRES that are professionally maintained. However, it should not be overlooked that the quality of air shower and inclusive flux simulations depends crucially on particle production data measured in fixed-target and collider experiments. At the moment the lack of suitable accelerator data is the dominant source of systematic uncertainties in cosmic ray measurements. As we do not have a calculable theory of hadronic multiparticle production, no change in this dependence on accelerator data is to be expected in the near future. \subsection*{Acknowledgements} It is the author's pleasure to thank the organizers for inviting him to participate in this very interesting and fruitful symposium. He gratefully acknowledges clarifying and illuminating discussions with many participants of this meeting and his colleagues from the KASCADE-Grande and Pierre Auger collaborations. In particular he benefited from discussions with K.~Belov, V.~Berezinsky, A.~Haungs, D.~Heck, S.~Ostapchenko, T.~Pierog, L.~Resvanis, H.~Ulrich, M.~Unger, A.~Watson, and S.~Westerhoff. The author would also like to thank D.~Heck for providing Figs.~\ref{fig:ne-nmu-plot} and \ref{fig:p-air-cs}.
{ "redpajama_set_name": "RedPajamaArXiv" }
582
{"url":"https:\/\/www.physicsforums.com\/threads\/which-rule-s-is-this-using.719438\/","text":"# Which rule(s) is this using?\n\n1. Oct 28, 2013\n\n### BOAS\n\n1. The problem statement, all variables and given\/known data\n\nDifferentiate the following with respect to x;\n\ny = $x^{2}$$(x-1)^{1\/2}$\n\n3. The attempt at a solution\n\nI have a solution to the problem that I will outline below, but my notes on this are confusing and i'm having trouble applying the method to another question. So if you can see the general rule that is being employed, it would really help me if you could point it out.\n\nLet u = $x^{2}$\n\nLet v = $(x-1)^{1\/2}$\n\n$\\frac{du}{dx}= 2x$\n\n$\\frac{dv}{dx}= \\frac{1}{2}(x - 1)^{-1\/2}$\n\n(that's all fine so far)\n\n$\\frac{dy}{dx}= \\frac{x^{2}(x-1)^{-1\/2}}{2} + 2x(x-1)^{1\/2}$\n\nI have a simplified answer and I can see how to get there, but what rule does the above employ?\n\nThanks!\n\n2. Oct 28, 2013\n\n### rock.freak667\n\nThe solution is employing the product rule for differentiation where if you have y=uv\n\nthen\n\ndy\/dx = v(du\/dx_ + u(dv\/dx)\n\n3. Oct 28, 2013\n\n### BOAS\n\nThanks for the response,\n\nso am I correct in thinking that after splitting the function into u and v, the chain rule has been used to find dv\/dx, the power rule can be used for du\/dx and then from there it's plain sailing with the product rule?\n\nI definitely need more practice on breaking these problems down.\n\n4. Oct 28, 2013\n\n### Staff: Mentor\n\nYes. In this case, the chain rule is very simple, since d\/dx(x - 1) = 1.\n\n5. Oct 28, 2013\n\n### BOAS\n\nThe problem I need to apply this method to is a little bit more complicated, but I can see how to do it.\n\nThanks.\n\n6. Oct 28, 2013\n\n### Staff: Mentor\n\nHere's another way to look at your problem.\n\ny = x2(x - 1)1\/2\ndy\/dx = x2 * d\/dx[(x - 1)1\/2] + d\/dx(x2) * (x - 1)1\/2\n= x2 * (1\/2)(x - 1)-1\/2 * d\/dx(x - 1) + 2x * (x - 1)1\/2\n= x2 * (1\/2)(x - 1)-1\/2 * 1 + 2x * (x - 1)1\/2\n\nAt each step along the way, I am postponing taking the derivative of something - this is signified by \"d\/dx( ... )\", which means that I haven't actually taken the derivative of whatever is to its right.\n\n7. Oct 28, 2013\n\n### BOAS\n\nThat's an interesting way of looking at it.\n\nI imagine it's much easier to look back through such a calculation and spot any potential error.\n\n8. 
Oct 28, 2013\n\n### Staff: Mentor\n\nYes, because the work is shown inline rather than several lines up the page.","date":"2017-08-23 00:46:59","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5600413084030151, \"perplexity\": 1633.3920862269458}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-34\/segments\/1502886116921.70\/warc\/CC-MAIN-20170823000718-20170823020718-00180.warc.gz\"}"}
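For reference, the simplified answer mentioned in the first post (not written out in the thread) follows by factoring $x(x-1)^{-1/2}$ out of both terms of the derivative:

$\frac{dy}{dx} = x(x-1)^{-1/2}\left[\frac{x}{2} + 2(x - 1)\right] = \frac{x(5x-4)}{2\sqrt{x-1}}$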
Q: Is the polynomial irreducible? Is the polynomial $f = t^3 - t^2 + t + 2$ irreducible in ${\mathbb Q}[t]$? Can someone give me a hint how to figure it out? Thanks A: Hint. Being a cubic, $f(t)$ is reducible if and only if it has a linear factor $t-(p/q)$. And in this case $p$ must divide the constant term of $f(t)$, and $q$ must divide the leading coefficient, so there are only a small number of possibilities: you will easily either rule them all out or find one that works.
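Working the hint out explicitly: the leading coefficient of $f$ is $1$ and the constant term is $2$, so the only candidate rational roots are $t = \pm 1, \pm 2$. Evaluating gives $f(1) = 3$, $f(-1) = -1$, $f(2) = 8$ and $f(-2) = -12$. None of these vanish, so $f$ has no rational root and is therefore irreducible in ${\mathbb Q}[t]$.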
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,436
{"url":"https:\/\/socratic.org\/questions\/how-do-you-solve-10x-2-31x-15-0-using-the-quadratic-formula#438700","text":"# How do you solve 10x^2-31x+15=0 using the quadratic formula?\n\nJun 12, 2017\n\nSee a solution process below:\n\n#### Explanation:\n\nFor $a {x}^{2} + b x + c = 0$, the values of $x$ which are the solutions to the equation are given by:\n\n$x = \\frac{- b \\pm \\sqrt{{b}^{2} - 4 a c}}{2 a}$\n\nSubstituting $10$ for $a$; $- 31$ for $b$ and $15$ for $c$ gives:\n\n$x = \\frac{- \\left(- 31\\right) \\pm \\sqrt{{\\left(- 31\\right)}^{2} - \\left(4 \\cdot 10 \\cdot 15\\right)}}{2 \\cdot 10}$\n\n$x = \\frac{31 \\pm \\sqrt{961 - 600}}{20}$\n\n$x = \\frac{31 \\pm \\sqrt{361}}{20}$\n\n$x = \\frac{31 \\pm 19}{20}$\n\n$x = \\frac{50}{20}$ and $x = \\frac{12}{20}$\n\n$x = \\frac{5}{2}$ and $x = \\frac{3}{5}$\n\nJun 12, 2017\n\n$\\frac{5}{2} , \\frac{6}{5}$\n\n#### Explanation:\n\n$f \\left(x\\right) = 10 {x}^{2} - 31 x + 15 = 0$\n$D = {d}^{2} = {b}^{2} - 4 a c = 961 - 600 = 361$ --> $d = \\pm 19$\n$x = - \\frac{b}{2 a} \\pm \\frac{d}{2 a} = \\frac{31}{20} \\pm \\frac{19}{20} = \\frac{31 \\pm 19}{20}$\n$x 1 = \\frac{50}{20} = \\frac{5}{2}$\n$x 2 = \\frac{12}{20} = \\frac{3}{5}$","date":"2021-10-25 11:16:18","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 24, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9367430210113525, \"perplexity\": 1183.1993115107448}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323587659.72\/warc\/CC-MAIN-20211025092203-20211025122203-00379.warc.gz\"}"}
Q: Why is PrimeFaces p:password losing its value on page refresh?

In my project I am using the PrimeFaces p:password component. Everything is working fine except one thing: when I refresh the page, it loses its value. Can anyone tell me what the security reasons behind this are? Thanks, friends.

A: The assumption is that password fields contain sensitive data, so they won't get shown again on page reload. The reason for this is that the sensitive data usually won't be cached by the browser (this depends on your settings) and is therefore not available after the request has fired. This means your password input is never part of your page; it only gets submitted with its enclosing form.

initial page-load:

<p:password .../> |-- rendered to --> <input type="password" value="" />

after page-reload:

<p:password .../> |-- rendered to --> <input type="password" value="" />

As you can see, the value attribute of the rendered HTML output is empty. When you type a password, this happens only on the client side; when the form is submitted, the value is sent to the server, and a reload renders the input empty again.

To make your input persistent across multiple requests, just set the redisplay attribute of the p:password component to true.

initial page-load:

<p:password redisplay="true" .../> |-- rendered to --> <input type="password" value="" />

after page-reload:

<p:password redisplay="true" .../> |-- rendered to --> <input type="password" value="inputPW" />

Please be aware that after a reload with redisplay="true" set, the submitted password becomes part of the HTML DOM and is easily readable by any attacker from the HTML source!

Hope this helps! Have fun!
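For illustration, a minimal form using the redisplay attribute discussed above could look like this (the bean name loginBean and its members are hypothetical, introduced only for the example):

```xhtml
<h:form>
    <!-- redisplay="true" keeps the submitted value in the rendered markup
         after a postback or reload; note the security trade-off above -->
    <p:password id="pw" value="#{loginBean.password}"
                redisplay="true" required="true" />
    <p:commandButton value="Submit" action="#{loginBean.submit}" />
</h:form>
```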
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,900
{"url":"https:\/\/answerriddle.com\/answer-who-was-the-first-u-s-vice-president\/","text":"# Answer: Who was the first U.S. Vice President?\n\nThe Question: Who was the first U.S. Vice President?","date":"2021-10-17 01:23:29","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.940700352191925, \"perplexity\": 11044.259353751095}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323585045.2\/warc\/CC-MAIN-20211016231019-20211017021019-00600.warc.gz\"}"}
Police arrest ailing, elderly activist leader on kidnapping, murder warrants

Sep. 05, 2022

CATRINA RAE

Photo from Gabriela Women's Party's Facebook page

DAVAO CITY, Philippines — Police authorities have arrested an ailing senior citizen in Barangay Kinabjangan, Nasipit town, in Agusan del Norte, on allegations that she was a member of the communist New People's Army (NPA).

The police said 76-year-old Atheliana "Atel" Hijos was involved in an encounter between NPA guerillas and the military in Agusan del Norte on November 20, 2019, which resulted in the killing of Corporal Mario Suson. They also claimed she was involved in the kidnapping of a militiaman in 2018. If these allegations were true, Hijos would have been 72 or 73 years old when the incidents happened.

Hijos was apprehended on August 30 and was charged with murder, kidnapping, and serious illegal detention under warrants of arrest issued by the regional courts of the two Agusan provinces. She was tagged as the "4th most wanted NPA rebel" in the Caraga region.

Human rights group Karapatan has condemned the arrest, saying it is "yet another reprehensible act that shows how the judicial system is being weaponized against human rights defenders."

Hijos fought the Marcos dictatorship in her younger days. She is the current secretary general of the women's rights group Gabriela in the Caraga region.

"How can an elderly and sickly woman, with such frail build like Atel's, possibly commit all the crimes alleged against her?" said Cristina Palabay, Karapatan's secretary general.

Palabay said Hijos has pulmonary tuberculosis and hypertension, and suffered a mild stroke. "She has been bedridden, and has difficulties walking, at times using a wheelchair," she added.

Multiple Facebook posts from 2015 to 2018 showed Hijos needing assistance from others while carrying out legitimate activities. She ran for a local elective post in Butuan City in 2018.

Karapatan expressed concern over the successive arrests of human rights and environmental defenders in Caraga. As of July 2022, there were 92 political prisoners, 22 of them women.

"With all these cases filed in Caraga courts, the region has become a warrant factory where trumped-up charges against activists are cooked up to suppress their voices and stifle political dissent," Karapatan said.

Those arrested, like Hijos, Karapatan said, have been subjected to red-tagging through numerous posters and fliers bearing their names and pictures, disseminated by the military in public places in Caraga.

In Mindanews and Inquirer reports on Friday, September 2, Maj. Jennifer Ometer, public information officer of the Police Regional Office (PRO) 13, confirmed that Hijos is in the custody of the Regional Intelligence Division of PRO 13. Ometer, however, did not disclose where Hijos was being detained. (davaotoday.com)
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,626
\section{Introduction} A self-avoiding polygon (SAP) on a regular lattice ${\mathbb L}$ is the piecewise linear embedding of a simple closed curve as a sequence of distinct edges joining vertices in ${\mathbb L}$. The number of distinct unrooted polygons of $n$ edges, modulo translations, is denoted $p_n$. It is known that $\lim_{n\to\infty} p_n^{\nicefrac{1}{n}} = \mu$ exists, where $\mu$ is the self-avoiding walk growth constant \cite{Hammersley1960, Hammersley1961}. \begin{figure}[h!] \begin{center} \includegraphics[height=3.6cm]{trefoils} \end{center} \vspace{-2ex} \caption{Minimal trefoils on the SC (left --- 24 edges), FCC (middle --- 15 edges) and BCC (right --- 18 edges).} \label{fig sc31} \end{figure} The knot type of a polygon in a three-dimensional lattice is well defined. Denote by $p_n(K)$ the number of unrooted polygons of length $n$ and knot type $K$, modulo translations. Computing $p_n$ or $p_n (K)$ is a very difficult combinatorial problem, though determining the minimal length $n$ such that $p_n(K)>0$ and the number of shortest embeddings is viable for knot types of low complexity. For example, there are 3 shortest unknots of length 4 in the simple cubic lattice (SC), 8 of length 3 in the face-centred cubic lattice (FCC), and 12 of length 4 in the body-centred cubic lattice (BCC). The simplest non-trivial knot type is the trefoil (denoted by $3_1$, see Figure~\ref{fig sc31}) and it is known that $p_n (3_1) = 0$ if $n<24$ and $p_{24} (3_1) = 3328$ in the SC \cite{Scharein2009}. Data on polygons collected by the GAS algorithm \cite{JvR2010a} shows that $p_{15} (3_1) = 128$ in the FCC and $p_{18} (3_1) = 3168$ in the BCC (see Table~\ref{tab min knot}); no shorter trefoils were observed. Numerical studies \cite{Orlandini1998, Marcone2007, JvR2008,Rawdon2008, JvR2010a} have shown that $p_n(K)$ behaves as \vspace{1ex}\begin{equation} p_n(K) \simeq \, C_K \, \mu_\emptyset^n \, n^{\alpha-3+N_K}, \quad \mbox{as $n \to \infty$}, \label{eqn pnk} \end{equation} where $N_K$ is the number of prime components of the knot type $K$. The exponent is thought to be universal, while the growth rate, $\mu_\emptyset$, depends on the lattice but not the knot type. The amplitude, $C_K$, depends on both the lattice and the knot type. Unfortunately very little of this form can be proven rigorously --- the exponential growth rate is only known to exist when $K$ is the unknot. A pattern theorem \cite{Sumners1988,Pippenger1989} shows that the growth rate of unknots, $\mu_\emptyset$, is strictly smaller than $\mu$. The same argument also shows that the probability that a polygon of length $n$ has knot type $K$, given by $p_n(K)/p_n$, decays exponentially with length. In this paper we consider the asymptotic behaviour of ratios of knotting probabilities. In particular, for two prime knot types $K$ and $L$ one has $N_K = N_L=1$ and the ratio of probabilities is given by \begin{equation} \frac{p_n(K)/p_n}{p_n(L)/p_n} = \frac{p_n(K)}{p_n(L)} \simeq \left[ \frac{C_K}{C_L} \right], \quad\hbox{as $n\to\infty$}. \end{equation} Hence, the limiting ratio of probabilities approaches a constant. Since this limit is an \emph{amplitude ratio} we expect it to be universal --- depending only on the knot types and the universality class of the underlying model. Such ratios were studied previously on the SC \cite{JvR2010a} (by the methods used in this paper) and in \cite{Baiesi2010} (by very different methods). Here, we use the GAS algorithm to estimate $p_n(K)$ for various prime knots on the SC, FCC and BCC lattices.
Our results indicate that the above ratio depends on the knot types, but is independent of the underlying lattices, and is therefore universal. \section{Atmospheric moves on cubic lattices} The GAS algorithm \cite{JvR2009a} samples along sequences of conformations that evolve through local elementary transitions called atmospheric moves (see e.g.\ \cite{JvR2009b}). The algorithm is a generalisation of the Rosenbluth algorithm \cite{Hammersley1954, Rosenbluth1955}, and is an approximate enumeration algorithm. The GAS algorithm was used to estimate the number of knotted polygons on the SC \cite{JvR2010a} using BFACF moves \cite{Berg1981,AdCC1983,AdCCF1983} as atmospheric moves. This implementation relies on a result in \cite{JvR1991} that the irreducibility classes of the BFACF elementary moves applied to SC polygons are the classes of polygons of fixed knot types. \begin{figure}[h!] \begin{center} \includegraphics[height=1.4cm]{bfacf_v2} \end{center} \vspace{-2ex} \caption{Elementary moves of the BFACF algorithm on the SC lattice.} \label{fig bfacf moves} \end{figure} BFACF elementary moves (see Figure~\ref{fig bfacf moves}) are either neutral (Type~I) moves, operating on two adjacent orthogonal edges of a SC polygon, or Type~II moves, which are positive or negative length-changing moves. A neutral move exchanges two adjacent edges across a unit lattice square, which defines a \textit{neutral atmospheric plaquette}. A \textit{negative move} replaces three edges in a $\sqcap$ conformation by a single edge and so defines a \textit{negative atmospheric plaquette}. Similarly, a \textit{positive move} replaces a single edge of the polygon by three edges in a $\sqcap$ arrangement; these edges define a \textit{positive atmospheric plaquette}. Let $a_+(\varphi), a_0(\varphi), a_-(\varphi)$ be the total numbers of positive, neutral and negative atmospheric moves of a SC lattice polygon $\varphi$. \begin{figure}[h!] \begin{center} \includegraphics[height=3.3cm]{plaquettes} \end{center} \vspace{-2ex} \caption{(Left) There are 4 elementary triangular plaquettes incident to each edge in the FCC lattice. (Middle and right) Each edge in the BCC lattice is incident to 12 plaquettes; 6 planar and 6 non-planar (the remaining 3 are reflections of the 3 displayed).} \label{fig plaq} \end{figure} The plaquettes in the FCC and BCC lattices (see Figure~\ref{fig plaq}) define elementary moves in the FCC and BCC analogous to the BFACF moves. Since the FCC plaquettes are triangles, they define positive and negative moves, while the BCC plaquettes are quadrilaterals and so also give neutral moves. These generalisations are discussed at length in \cite{JvR2010b}, where it is shown that on each lattice the irreducibility classes of the moves coincide with the classes of polygons of fixed knot types. \section{GAS sampling of knotted polygons} We have implemented the GAS algorithm using the atmospheric moves described above. Let $\varphi_0$ be a lattice polygon; the GAS algorithm then samples along a sequence of polygons $(\varphi_0, \varphi_1, \dots)$, where $\varphi_{j+1}$ is obtained from $\varphi_j$ by an atmospheric move. Each atmospheric move is chosen uniformly from the possible moves, so that if $\varphi_j$ has length $\ell_j$ then \begin{equation} \Pr(\mbox{$+$}) \propto \beta_{\ell_j} a_+(\varphi_j),\quad \Pr(\mbox{$0$}) \propto a_0(\varphi_j),\quad \Pr(\mbox{$-$}) \propto a_-(\varphi_j) .
\end{equation} where $\beta_\ell$ is a parameter that is chosen to be approximately $\frac{\mean{a_+}_{\ell} }{ \mean{a_-}_{\ell} }$. This parameter can be chosen so that, on average, the probability of making a positive move is roughly the same as that of making a negative move. This produces a sequence $\langle\varphi_j \rangle$ of states, and we assign a weight to each state: \begin{eqnarray} W(\varphi_n) &= \frac{a_-(\varphi_0) + a_0(\varphi_0) + \beta_{\ell_0} a_+(\varphi_0) } {a_-(\varphi_{n}) + a_0(\varphi_{n}) + \beta_{\ell_{n}} a_+(\varphi_{n}) } \times \prod_{j=0}^n \beta_{\ell_j}^{(\ell_j - \ell_{j+1})}. \end{eqnarray} The probabilities and weights are functions of the numbers of possible atmospheric moves, and so the algorithm must recalculate these efficiently. Since the elementary moves only involve local changes, executing a move and updating the polygon takes $O(1)$ time. The resulting data were analysed by computing the mean weight $\mean{W}_n$ of polygons of length $n$ edges and then using the result (from \cite{JvR2009a}) \begin{eqnarray} \frac{\mean{W}_n}{\mean{W}_m} &= \frac{p_n(K)}{p_m(K)}. \label{eqn w ratio} \end{eqnarray} This gives approximations to the number of polygons of any given length $n$, provided the number of polygons is known exactly at another length $m$. \section{Results} We collected data on the prime knots $3_1,4_1,5_1$ and $5_2$ on the three lattices. In order to use equation~\Ref{eqn w ratio} we computed the total number of minimal length polygons of each given knot type --- see Table~\ref{tab min knot}. We did this by collecting them while performing the simulation (or in independent runs); this idea was used in~\cite{Scharein2009} and~\cite{JvR2010a}, and our SC results agree. Typically, the algorithm quickly found all realisations of minimal knots (within hours) and then failed to find new conformations after another few days of CPU time. We note that the result for trefoils in the SC has been proved \cite{Diao1994, Scharein2009}. \begin{table}[h!] \begin{center} \begin{tabular}{||c||c|c||c|c||c|c||} \hline Knot & \multicolumn{2}{|c||}{SC} & \multicolumn{2}{|c||}{FCC} & \multicolumn{2}{|c||}{BCC} \\ \hline & length & number & length & number & length & number \\ \hline $0_1$ & 4 & 3 & 3 & 8 & 4 & 12 \\ \hline \hline $3_1$ & 24 & 3328 & 15 & 64 & 18 & 1584 \\ \hline $4_1$ & 30 & 3648 & 20 & 2796 & 20 & 12 \\ \hline $5_1$ & 34 & 6672 & 22 & 96 & 26 & 14832 \\ $5_2$ & 36 & 114912 & 23 & 768 & 26 & 4872 \\ \hline \end{tabular} \end{center} \caption{The number of minimal length polygons of fixed knot types in the SC, FCC and BCC lattices.} \label{tab min knot} \end{table} Using the data in Table~\ref{tab min knot} and equation~\Ref{eqn w ratio} we were able to estimate $p_n(K)$ for each knot type in each of the three lattices. Each simulation ran for 1 week on a single node of WestGrid's Glacier cluster\footnote{See \texttt{http://www.westgrid.ca}}. The implementation was particularly simple and efficient in the FCC lattice, and the SC simulations were faster than the BCC lattice simulations. Each simulation was composed of approximately $400$ chains of $2^{27}$ polygons on the FCC lattice, $1400$ chains of $2^{23}$ polygons on the simple cubic lattice and $500$ chains of $2^{23}$ polygons on the BCC lattice. In each simulation we limited the maximum length of the polygons to $512$ edges. The estimates of $p_n (K)$ in each lattice were used to extrapolate the ratios $p_n(K) / p_n(L)$ for fixed prime knots $K,L$.
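To make the sampling scheme above concrete, the following sketch outlines a single GAS chain in Python. It is an illustration only: the helper routines \texttt{a\_plus}, \texttt{a\_zero}, \texttt{a\_minus}, \texttt{length} and \texttt{apply\_random\_move} stand for the lattice-specific atmospheric bookkeeping and are assumed here, not shown. \begin{verbatim} import math, random def atmosphere(phi, beta): # a_-(phi) + a_0(phi) + beta_l * a_+(phi) at the current length l return a_minus(phi) + a_zero(phi) + beta[length(phi)] * a_plus(phi) def gas_chain(phi0, beta, n_steps, record): phi, logw, A0 = phi0, 0.0, atmosphere(phi0, beta) for _ in range(n_steps): l = length(phi) # choose the move type with the stated probabilities ... w = [beta[l] * a_plus(phi), a_zero(phi), a_minus(phi)] kind = random.choices("+0-", weights=w)[0] # ... and a move of that type uniformly at random phi = apply_random_move(phi, kind) # accumulate the factor beta_l^(l - l') of the weight logw += (l - length(phi)) * math.log(beta[l]) W = (A0 / atmosphere(phi, beta)) * math.exp(logw) record(length(phi), W) # used to estimate <W>_n for each n \end{verbatim} Averaging the recorded weights at each length gives $\mean{W}_n$, and ratios of these averages estimate $p_n(K)/p_m(K)$ via equation~\Ref{eqn w ratio}.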
In earlier work on SC polygons \cite{JvR2010a} we observed that the logarithm of these ratios was approximately linear in $n^{-1}$. In Figures~\ref{fig ratio 3141}, \ref{fig ratio 4152} and~\ref{fig ratio 5152} we plot the logarithm of the ratio against $n^{-1}$ for various pairs of prime knots. In Figure~\ref{fig ratio 3141} we show that there is strong agreement between the FCC and BCC data. In addition, the three extrapolated curves appear to have approximately the same limit. This is strong numerical support for the hypothesis that the limiting ratio is universal. Linear fits of the data give $y$-intercepts of $3.34(3)$, $3.35(3)$ and $3.29(3)$ for the SC, FCC and BCC lattices, respectively. These results agree with each other within $95$\% confidence intervals. Exponentiating these results gives an estimate of the limiting amplitude ratio of $27\pm 2$. If we exclude the BCC data, since it is not as well converged at large $n$, we obtain a limiting ratio of $28 \pm 1$. \begin{figure}[h!] \begin{center} \includegraphics[height=65mm]{ratios3141} \end{center} \vspace{-5mm} \caption{Plots of the logarithm of the ratio of the number of $3_1$ to $4_1$ knots. The dotted lines indicate the extrapolations. Note that the FCC and BCC data are nearly the same. The intercept indicates that the limiting ratio is approximately $e^{3.32}\approx 28$.} \label{fig ratio 3141} \end{figure} Turning to the ratio of $4_1$ to $5_2$ plotted in Figure~\ref{fig ratio 4152}, we find similar results, though the data are not as well converged and our estimates are not as good. Linear fits lead to an estimate of $9\pm 1$ for the limiting ratio. The estimates on all three lattices agree, supporting the hypothesis of universality. In the final plot we show the ratio of $5_1$ to $5_2$ knots. Again we find similar results and estimate the limiting ratio to be $0.67(3)$. \begin{figure}[h!] \begin{center} \includegraphics[height=65mm]{ratios4152} \end{center} \vspace{-5mm} \caption{Plots of the logarithm of the ratio of the number of $4_1$ to $5_2$ knots. The linear extrapolations are indicated by dotted lines. The intercept indicates that the limiting ratio is approximately $e^{2.2} \approx 9$.} \label{fig ratio 4152} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[height=65mm]{ratios5152} \end{center} \vspace{-5mm} \caption{Plots of the logarithm of the ratio of the number of $5_1$ to $5_2$ knots. The linear extrapolations are indicated by dotted lines. The intercept indicates that the limiting ratio is approximately $e^{-0.4} \approx 0.67$.} \label{fig ratio 5152} \end{figure} We have also studied the other ratios and find similar support for their universality. In summary: \begin{eqnarray} \begin{array}{rlrlrl} \nicefrac{C_{3_1}}{C_{4_1}}&= 28(1) &\nicefrac{C_{3_1}}{C_{5_1}} &= 400(20) & \nicefrac{C_{3_1}}{C_{5_2}} &= 280(20) \\[1ex] && \nicefrac{C_{4_1}}{C_{5_1}} &= 15(1) & \nicefrac{C_{4_1}}{C_{5_2}} &=9(1) \\[1ex] &&&& \nicefrac{C_{5_1}}{C_{5_2}} &= 0.67(3). \end{array} \end{eqnarray} These numbers are self-consistent within the stated error bars. Curiously, in each case we found that the curves for the FCC and BCC lie close together while that of the SC stands apart; we would like to understand this better, but have no explanation at this time. Some caution is needed when comparing these results to previous studies of knot probability amplitudes such as \cite{Deguchi1997, Millett2005, Baiesi2010}. Any estimate of the amplitude will have sensitive dependence on the estimate of the exponent.
Indeed, unless the estimated exponents are equal, the ratio of the estimated probabilities will tend to zero or infinity. Mindful of this, we may compare with the ratio of estimated amplitudes for SC polygons from \cite{Baiesi2010}: we find $C_{3_1} / C_{4_1} \approx 22$, which is close to our estimate, but not within mutual error bars. The ratios of estimated amplitudes for off-lattice polygons from \cite{Deguchi1997} and~\cite{Millett2005} give quite different results. However, it is not clear that our models are in the same universality class as these off-lattice models. In addition, the comparison may be affected by differences in the estimated entropic exponents in these studies. \section{Conclusions} We have studied the ratios of probabilities of different knot types. The scaling assumption in equation~\Ref{eqn pnk} indicates that the limit of this ratio should be an amplitude ratio and thus universal. Using the GAS algorithm we have formed direct estimates of the number of polygons of various prime knot types on three different lattices. Extrapolating from these estimates provides numerical evidence that the probability ratios are universal --- depending only on the knot types and the universality class of the underlying model. In particular, we find that a long polygon is about 28 times more likely to be a trefoil than a figure-eight. There are a number of extensions of this work that we would like to pursue --- extending these results to composite knots and links, and performing similar analyses of data from off-lattice models. \section*{Acknowledgements} We would like to thank BIRS and the organisers of the conference where we had the idea for this paper after discussions with several participants; in particular, Bertrand Duplantier. We are also indebted to Stu Whittington, Thomas Prellberg and Enzo Orlandini for their careful reading of the manuscript. Additionally we would like to thank the anonymous referees for their helpful suggestions. The simulations were run on the WestGrid computer cluster and we thank them for their support. Finally, both authors acknowledge financial support from NSERC, Canada. \section*{References} \bibliographystyle{unsrt}
{ "redpajama_set_name": "RedPajamaArXiv" }
3,122
Pável Andréyevich Gerdt, also known as Paul Gerdt (22 November 1844 - 30 July 1917), was the Premier Danseur Noble of the Mariinsky Ballet, the Bolshoi Kamenny Theatre, and the Mariinsky Theatre for 56 years, making his debut in 1860 and retiring in 1916. His daughter Elizaveta Gerdt was also a prominent ballerina and teacher. Gerdt studied with Aleksandr Pimenov, a pupil of the legendary Charles Didelot, and with Jean Petipa, the father of Marius Petipa, a master of the old pantomime and a pupil of Auguste Vestris. He was known as the Blue Cavalier of the St. Petersburg stages, creating almost all of the principal male roles throughout the second half of the 19th century, among them Prince Désiré in The Sleeping Beauty and Prince Coqueluche in The Nutcracker. No one in the theatre knew his true age, and when asked he always said he was 23. Among his students at the Imperial Ballet School were Michel Fokine, Vaslav Nijinsky, Tamara Karsavina, Agrippina Vaganova, Sergei Legat, George Balanchine and Anna Pavlova, to whom he taught the soaring jump of Marie Taglioni and Carlotta Grisi.
{ "redpajama_set_name": "RedPajamaWikipedia" }
3,059
{"url":"https:\/\/e-learnteach.com\/is-3-14159-a-rational-number\/","text":"# Is 3.14159 a rational number\n\nThe number \u201cpi\u201d or ? (3.14159..) is a common example of an irrational number since it has an infinite number of digits after the decimal. When a rational number is split, the result is a decimal number, which can be either a terminating or a recurring decimal. Here, the given. Determine if Rational 3.14159. 3.14159 3.14159. A rational number is any integer, fraction, terminating decimal, or repeating decimal. Rational. 3.14159 3 .The number \u201c3.14159\u201d by itself is rational, since it can be written as 314159\/100000. However, the number pi (which is about 3.14159) is not. Pi is irrational and 3.14159 is rational. So (Pi+3.14159) is irrational. Therefore (Pi+3.14159)\/2 is also irrational. It\u2019s also halfway between 3.14159 and Pi.\n\nView this answer now! It\u2019s completely free.\n\n## is 3.14159 a integer\n\n3.14 can be written as a fraction of two integers: 314100 and is. When starting off in math, students are introduced to pi as a value of 3.14 or 3.14159.Irrational Numbers are those numbers that cannot be expressed in the form of p\/q where p and q are integers and q ? 0. Also, the decimal expansion of an. Proving the result by hand may take some effort, but the following process shows that it can be done. In what follows, one assumes the known. Is pi a rational, irrational number, natural, whole or integer? Answer. Verified. 122.7k+ views.Is pi a rational, irrational number, natural, whole or integer?. Pi is an irrational number. Irrational numbers are the real numbers that cannot be represented.\n\n## is -26 a rational number\n\nRational numbers can be represented as a quotient of two whole numbers. They are expressed as a fraction a \/ b, where a and b are integers and b is different. Confused about rational numbers? Lots of numbers you use every day are exactly that. These rational number examples and calculation tips make it clear.Rational numbers. Rational number \u2013 is any number that can be written as a fraction: frac{p}{q}. where: p \u2013 is any integerIn this article, we\u2019ll discuss the rational number definition, give rational numbers examples, and offer some tips and tricks for understanding. A rational number is a number that can be written in the form of a common fraction of two integers. In other words, it is a number that can be represented.\n\n## is 3.7 a rational number\n\n(natural numbers and zero), and they also include negative numbers. They don\u2019t include fractions. Rational Numbers. These are any numbers that can be expressed. So 11\/3 and 37\/10 become 110\/30 and 111\/30..changing these to a denominator of 60..we get 220\/60 and 222\/60. A rational number in between would beThe number 3.7 is best described as.. a rational number an integer a whole number an irrational number. Question. user avatar image. Is 1.73205 a rational number? So those are rational numbers, now let\u2019s look at some examples of irrational numbers: the..the rational number of -3.7 is -37\/10. heart outlined. Thanks 0.\n\n## is 5.1 a rational number\n\nYK Pao Secondary School 5.1 Rational Number Class: ______ Name: 1.True or False (1) 0 is not a rational number. ( ) (2) A natural number must be a positive. (natural numbers and zero), and they also include negative numbers. They don\u2019t include fractions. Rational Numbers. These are any numbers that can be expressed. Rational Numbers. Can be expressed as a ratio of two Integers: a\/b, (b ? 
0) such ratios. (fractions) can be expressed as terminating or repeating decimals.YES, negative 5 (-5) is a rational number because -5 satisfies the definition of a rational number. A rational number is any number that can be expressed as. Any number that can be expressed as a ratio of p\/q where q is a non-zero number is a rational number. All integers are rational numbers.","date":"2023-04-01 23:19:04","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8569364547729492, \"perplexity\": 637.1253978105862}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296950363.89\/warc\/CC-MAIN-20230401221921-20230402011921-00152.warc.gz\"}"}
null
null
{"url":"https:\/\/www.dlubal.com\/en\/support-and-learning\/support\/knowledge-base\/001399","text":"# Actions on Silos According to EN 1991-4\n\n### Technical Article\n\n001399 2 February 2017\n\nSilos are used as large containers for storing bulk materials such as agricultural products or source materials as well as intermediates of industrial production. Structural engineering of such structures requires a\u00a0precise knowledge of the stresses due to particulate solids in the building structure. The standard EN\u00a01991\u20114 \u2018Actions on Silos and Tanks\u2019\u00a0[1] provides general principles and requirements for determining these actions.\n\n#### Scope of Application\n\nThe application of the design rules for silos and tanks is subjected to geometrical limitations. In\u00a0[1], the geometric dimensions are limited to hb \/ dc < 10 with hb < 100\u00a0m and dc < 60\u00a0m. In addition, the application limits depend on the cross\u2011section shape of the silo and on the stored solids.\n\n#### Properties of Particulate Solids\n\nAnnex\u00a0E of\u00a0[1] specifies parameters of the most common solids stored in silos, showing the range of\u00a0particulate solids properties. Furthermore, Section\u00a04 and Annex\u00a0C of [1] describe test methods for\u00a0the determination of the stored solids properties.\n\nWall frictional properties of the particulate solids take into account the roughness of wall surfaces where the solids slide along. Table\u00a04.1 of [1] describes various categories of wall surfaces. The categories of the wall surfaces are shown in the table below. Annex\u00a0D.2 of [1] also provides information for the evaluation of the wall friction coefficient for the D4 category.\n\nYou should always determine the loads of a\u00a0load case for a\u00a0particular combination of the relevant solid properties. For each of these load cases, the extreme values are reached when the solid properties take different extreme values within discharge flow of the particulate solids. The applicable extreme values of the particulate solid properties are listed in Table\u00a03.1 of [1] for each of the load cases to be examined. The relevant parameters for various load applications are included in\u00a0the following table.\n\n#### Action Assessment Class\n\nSilos are divided into three action assessment classes according to their storage capacity and eccentricity in compliance with Table\u00a02.1 of [1].\n\n#### Loads on Vertical Walls of Silos\n\nLoads on the vertical walls of silos are subjected to a\u00a0differentiated calculation considering silo slenderness. A\u00a0distinction is made between:\n\n\u2022 slender silos (hc \/ dc \u2265 2.0)\n\u2022 intermediate slenderness silos (1.0 < hc \/ dc < 2.0)\n\u2022 squat silos (0.4 < hc \/ dc \u2264 1.0)\n\u2022 retaining silos (hc \/ dc \u2264 0.4 and the silo bottom is flat)\n\nSymmetrical loads are fixed loads that are uniformly distributed over the silo circumference. Discharge loads arise when the uniform loads in full condition are increased by a\u00a0load magnifying factor.\n\nBesides the fixed loads, additional free loads are usually to be applied. Distributions of unsymmetrical loads (patch loads) in a\u00a0silo are caused by actions due to imperfections or eccentricities during filling and discharge of solids.\n\nIn the case of thick\u2011walled circular silos, the patch load applies to two opposite square areas with side length\u00a0s. 
In the case of non-circular silos, the patch loads can be taken into account by increasing the symmetrical loads. The outward patch pressure should be taken to act on a horizontal band on the silo wall at any level, over a vertical height s.\n\nGenerally, it is not necessary to apply the patch loads in the case of squat and intermediate slenderness silos.\n\nFor silos in Action Assessment Class 2, the patch load may be approximated by uniformly increasing the horizontal pressures.\n\n#### Discharge Loads with Large Eccentricities\n\nAccording to [1], the loads due to large discharge eccentricities should be treated as a separate load case. This loading assessment is based on the premise that a flow channel may form near the wall as a result of large eccentric discharge. A pipe flow channel is assumed, constant over the height of the silo wall, which intersects the silo wall at an opening angle \u03b8c.\n\nHowever, a theoretical prediction of the geometry of the flow channel is hardly possible with the tools currently available, so the flow channel geometry must be assumed. The calculation is performed with at least three different flow channel radii rc in order to capture the possible variations of the flow channel.\n\nWhere the flowing solid is in contact with the silo wall, reduced horizontal pressures occur. Outside the flow channel, the loads of the filling load case apply. Directly next to the flow channel, over the zone up to the opening angle 2 \u03b8c, the pressure is increased.\n\nLoads due to eccentric filling must be considered for squat or intermediate slenderness silos.\n\nEN 1991-4 [1] explains the determination of the additional vertical force (compressive) in the wall per unit length of circumference at any depth zs below the point of highest wall contact. This force per unit circumference should be added to the force arising from wall friction.\n\n#### Loads on Silo Hoppers and Silo Bottoms\n\nThe loads on the walls of silo hoppers should be determined with regard to the steepness of the hopper walls according to [1].\n\nThe standard distinguishes between flat bottoms as well as steep and shallow hoppers. In the case of steep hoppers, an additional distinction is made between the filling and discharge load cases. The kick load at the transition from the vertical walled section to the hopper is already included in the load distributions.\n\nAnnex G of [1] provides alternative rules for pressures in hoppers.\n\n#### Example\n\nThe example presents a free-standing cylindrical silo for cement with a diameter of 5.00 m and a maximum filling depth of 8.00 m. 
The silo is made of reinforced concrete with a wall thickness of 0.20 m.\n\n#### Particulate Solids\n\nThe following particulate solid properties of cement are given by Table E.1 of [1]:\n\n\u2022 Unit weight (upper): \u03b3u = 16 kN\/m\u00b3\n\u2022 Angle of repose: \u03a6r = 36\u00b0\n\u2022 Angle of internal friction (mean): \u03a6im = 30\u00b0; factor a\u03c6 = 1.22\n\u2022 Lateral pressure ratio (mean): Km = 0.54; factor aK = 1.2\n\u2022 Wall friction coefficient (wall type D3, concrete): \u03bcm = 0.51; factor a\u03bc = 1.07\n\u2022 Patch load solid reference factor: Cop = 0.5\n\n#### Characteristic Particulate Solids Properties\n\nIn order to determine the characteristic values of the lateral pressure ratio, wall friction coefficient, and the angle of internal friction, the listed mean values of the particulate solids must be scaled using the conversion factors. The factors ax are specified in Table E.1 of [1] for the available particulate solids.\n\nUpper and lower characteristic value of the lateral pressure ratio:\n\n$$\\begin{array}{l}{\\mathrm K}_\\mathrm u\\;=\\;{\\mathrm a}_\\mathrm K\\;\\cdot\\;{\\mathrm K}_\\mathrm m\\;=\\;1.20\\;\\cdot\\;0.54\\;=\\;0.648\\\\{\\mathrm K}_\\mathrm l\\;=\\;\\frac{{\\mathrm K}_\\mathrm m}{{\\mathrm a}_\\mathrm K}\\;=\\;\\frac{0.54}{1.20}\\;=\\;0.450\\end{array}$$\n\nUpper and lower characteristic value of the wall friction coefficient:\n\n$$\\begin{array}{l}{\\mathrm\\mu}_\\mathrm u\\;=\\;{\\mathrm a}_\\mathrm\\mu\\;\\cdot\\;{\\mathrm\\mu}_\\mathrm m\\;=\\;1.07\\;\\cdot\\;0.51\\;=\\;0.546\\\\{\\mathrm\\mu}_\\mathrm l\\;=\\;\\frac{{\\mathrm\\mu}_\\mathrm m}{{\\mathrm a}_\\mathrm\\mu}\\;=\\;\\frac{0.51}{1.07}\\;=\\;0.477\\end{array}$$\n\nUpper and lower characteristic value of the angle of internal friction:\n\n$$\\begin{array}{l}{\\mathrm\\Phi}_\\mathrm{iu}\\;=\\;{\\mathrm a}_\\mathrm\\Phi\\;\\cdot\\;{\\mathrm\\Phi}_\\mathrm{im}\\;=\\;1.22\\;\\cdot\\;30.00^\\circ\\;=\\;36.60^\\circ\\\\{\\mathrm\\Phi}_\\mathrm{il}\\;=\\;\\frac{{\\mathrm\\Phi}_\\mathrm{im}}{{\\mathrm a}_\\mathrm\\Phi}\\;=\\;\\frac{30.00^\\circ}{1.22}\\;=\\;24.59^\\circ\\end{array}$$\n\n#### Values of properties to be used for different wall loading assessments\n\nThe evaluation of each load case should be made using a single set of consistent values of the solids properties, so that each limit state corresponds to a single defined stored solid condition. The extreme values of the solids properties to be adopted for each load case are given in Table 3.1 of [1]. For the maximum horizontal pressure and the wall frictional traction, the upper lateral pressure ratio Ku = 0.648 is combined with the lower wall friction coefficient; for the maximum vertical pressure, the lower values Kl = 0.450 and \u03bcl = 0.477 are used.\n\nThe angle of wall friction must always be less than or equal to the angle of internal friction of the stored solid, that is \u03a6w \u2264 \u03a6i. Otherwise, the material will rupture internally if slip at the wall contact demands a greater shear stress than the internal friction can sustain. This means that, in all cases, the wall friction coefficient should not be taken as greater than tan \u03a6i (i.e. \u03bc = tan \u03a6w \u2264 tan \u03a6i). This cap governs here: since \u03bcl = 0.477 exceeds tan \u03a6il = tan 24.59\u00b0 = 0.458, the capped value \u03bc = 0.458 is used in the pressure calculations below.\n\n#### Actions\n\nActions are determined according to [1]. 
Only the filling loads on vertical walls and vertical pressures on flat bottoms of the silo should be calculated here.\n\n#### Silo Classification\n\nThe classification of the silo is based on the slenderness and the action assessment class.\n\n##### Slenderness\n$$1.0\\;<\\;\\frac{{\\mathrm h}_\\mathrm c}{{\\mathrm d}_\\mathrm c}\\;=\\;\\frac{8.00}{5.00}\\;=\\;1.6\\;<\\;2$$\n\nThe silo is classified as an intermediate slenderness silo in accordance with 1.5.21 of [1].\n\n##### Action Assessment Class\n$$\\mathrm{Capacity}\\;=\\;\\mathrm V\\;\\cdot\\;{\\mathrm\\gamma}_\\mathrm u\\;=\\;157.08\\;\\cdot\\;16.00\\;=\\;2,513.27\\;\\mathrm{kN}\\;\\cong\\;\\frac{2,513.27}{9.80665}\\;=\\;256.28\\;\\mathrm t$$\n\nAccording to Table 2.1 of [1], at least Action Assessment Class 2 must be selected.\n\n##### Construction form\n$$\\frac{{\\mathrm d}_\\mathrm c}{\\mathrm t}\\;=\\;\\frac{5.00}{0.20}\\;=\\;25\\;<\\;200$$\n\nThe silo is classified as a thick-walled silo in accordance with 1.5.43 of EN 1991-4 [1].\n\n#### Symmetrical Filling Loads on Vertical Walls\n\n##### Horizontal Pressure\n###### Janssen characteristic depth zo\n$$\\begin{array}{l}{\\mathrm z}_o\\;=\\;\\frac1{\\mathrm K\\;\\cdot\\;\\mathrm\\mu}\\;\\cdot\\;\\frac{\\mathrm A}{\\mathrm U}\\;\\;\\;\\;\\;(5.75)\\\\{\\mathrm z}_o\\;=\\;\\frac1{0.648\\;\\cdot\\;0.458}\\;\\cdot\\;\\frac{19.63}{15.71}\\;=\\;4.22\\;\\mathrm m\\end{array}$$\n###### Vertical distance ho\n\nFor a symmetrically filled circular silo, the vertical distance ho between the equivalent surface of the solid and the highest solid-wall contact is calculated as follows:\n\n$$\\begin{array}{l}{\\mathrm h}_o\\;=\\;\\frac{{\\mathrm d}_\\mathrm c\\;\\cdot\\;\\tan\\;{\\mathrm\\Phi}_\\mathrm r}6\\;\\;\\;\\;\\;(5.77)\\\\{\\mathrm h}_o\\;=\\;\\frac{5.00\\;\\cdot\\;\\tan\\;36.00^\\circ}6\\;=\\;0.61\\;\\mathrm m\\end{array}$$\n###### Parameter n\n$$\\begin{array}{l}\\mathrm n\\;=\\;-(1\\;+\\;\\tan\\;{\\mathrm\\Phi}_\\mathrm r)\\;\\cdot\\;\\left(1\\;-\\;\\frac{{\\mathrm h}_o}{{\\mathrm z}_o}\\right)\\;\\;\\;\\;\\;(5.76)\\\\\\mathrm n\\;=\\;-(1\\;+\\;\\tan\\;36.00^\\circ)\\;\\cdot\\;\\left(1\\;-\\;\\frac{0.61}{4.22}\\right)\\;=\\;-1.48\\end{array}$$\n###### Asymptotic horizontal pressure at great depth due to stored particulate solid pho\n$$\\begin{array}{l}{\\mathrm p}_\\mathrm{ho}\\;=\\;\\mathrm\\gamma\\;\\cdot\\;\\mathrm K\\;\\cdot\\;{\\mathrm z}_o\\;\\;\\;\\;\\;(5.73)\\\\{\\mathrm p}_\\mathrm{ho}\\;=\\;16.00\\;\\cdot\\;0.648\\;\\cdot\\;4.22\\;=\\;43.70\\;\\mathrm{kN}\/\\mathrm m^2\\end{array}$$\n###### Horizontal pressure phf(z)\n$$\\begin{array}{l}{\\mathrm p}_\\mathrm{hf}(\\mathrm z)\\;=\\;{\\mathrm p}_\\mathrm{ho}\\;\\cdot\\;{\\mathrm Y}_\\mathrm R(\\mathrm z)\\;=\\;{\\mathrm p}_\\mathrm{ho}\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{\\mathrm z\\;-\\;{\\mathrm h}_\\mathrm o}{{\\mathrm z}_\\mathrm o\\;-\\;{\\mathrm h}_\\mathrm o}\\;+\\;1\\right)^\\mathrm n\\right)\\;\\;\\;\\;\\;(5.71)\\\\{\\mathrm p}_\\mathrm{hf}(0.61)\\;=\\;0\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{hf}(1.61)\\;=\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{1.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;13.26\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{hf}(2.61)\\;=\\;20.93\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{hf}(3.61)\\;=\\;25.83\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{hf}(4.61)\\;=\\;29.19\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{hf}(5.61)\\;=\\;31.62\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{hf}(6.61)\\;=\\;33.43\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{hf}(7.61)\\;=\\;34.83\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{hf}(8.00)\\;=\\;35.29\\;\\mathrm{kN}\/\\mathrm m^2\\end{array}$$\n##### Wall Frictional Traction\n\nThe parameters zo = 4.22 m, ho = 0.61 m, n = -1.48, and pho = 43.70 kN\/m\u00b2 are identical to those calculated above for the horizontal pressure.\n\n###### Wall frictional traction pwf(z)\n$$\\begin{array}{l}{\\mathrm p}_\\mathrm{wf}(\\mathrm z)\\;=\\;\\mathrm\\mu\\;\\cdot\\;{\\mathrm p}_\\mathrm{ho}\\;\\cdot\\;{\\mathrm Y}_\\mathrm R(\\mathrm z)\\;=\\;\\mathrm\\mu\\;\\cdot\\;{\\mathrm p}_\\mathrm{ho}\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{\\mathrm z\\;-\\;{\\mathrm h}_\\mathrm o}{{\\mathrm z}_\\mathrm o\\;-\\;{\\mathrm h}_\\mathrm o}\\;+\\;1\\right)^\\mathrm n\\right)\\;\\;\\;\\;\\;(5.72)\\\\{\\mathrm p}_\\mathrm{wf}(0.61)\\;=\\;0\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{wf}(1.61)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{1.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;6.07\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{wf}(2.61)\\;=\\;9.58\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{wf}(3.61)\\;=\\;11.82\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{wf}(4.61)\\;=\\;13.36\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{wf}(5.61)\\;=\\;14.47\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{wf}(6.61)\\;=\\;15.30\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{wf}(7.61)\\;=\\;15.94\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{wf}(8.00)\\;=\\;16.15\\;\\mathrm{kN}\/\\mathrm m^2\\end{array}$$\n##### Vertical Pressure\n###### Janssen characteristic depth zo\n$$\\begin{array}{l}{\\mathrm z}_\\mathrm o\\;=\\;\\frac1{\\mathrm K\\;\\cdot\\;\\mathrm\\mu}\\;\\cdot\\;\\frac{\\mathrm A}{\\mathrm U}\\;\\;\\;\\;\\;(5.75)\\\\{\\mathrm z}_\\mathrm o\\;=\\;\\frac1{0.450\\;\\cdot\\;0.477}\\;\\cdot\\;\\frac{19.63}{15.71}\\;=\\;5.83\\;\\mathrm m\\end{array}$$\n###### Parameter n\n$$\\begin{array}{l}\\mathrm n\\;=\\;-(1\\;+\\;\\tan\\;{\\mathrm\\Phi}_\\mathrm r)\\;\\cdot\\;\\left(1\\;-\\;\\frac{{\\mathrm h}_\\mathrm o}{{\\mathrm z}_\\mathrm o}\\right)\\;\\;\\;\\;\\;(5.76)\\\\\\mathrm n\\;=\\;-(1\\;+\\;\\tan\\;36.00^\\circ)\\;\\cdot\\;\\left(1\\;-\\;\\frac{0.61}{5.83}\\right)\\;=\\;-1.55\\end{array}$$\n###### Vertical pressure pvf(z)\n$$\\begin{array}{l}{\\mathrm p}_\\mathrm{vf}(\\mathrm z)\\;=\\;\\mathrm\\gamma\\;\\cdot\\;{\\mathrm z}_\\mathrm v(\\mathrm z)\\;=\\;\\mathrm\\gamma\\;\\cdot\\;\\left({\\mathrm h}_\\mathrm o\\;-\\;\\frac1{\\mathrm n\\;+\\;1}\\;\\cdot\\;\\left({\\mathrm z}_\\mathrm o\\;-\\;{\\mathrm h}_\\mathrm o\\;-\\;\\frac{(\\mathrm z\\;+\\;{\\mathrm z}_\\mathrm o\\;-\\;2\\;\\cdot\\;{\\mathrm h}_\\mathrm o)^{\\mathrm n+1}}{({\\mathrm z}_\\mathrm o\\;-\\;{\\mathrm h}_\\mathrm o)^\\mathrm n}\\right)\\right)\\;\\;\\;\\;\\;\\;(5.79)\\\\{\\mathrm p}_\\mathrm{vf}(0.61)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(0.61\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;9.69\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{vf}(1.61)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(1.61\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;23.65\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm
p}_\\mathrm{vf}(2.61)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(2.61\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;34.51\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{vf}(3.61)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(3.61\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;43.27\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{vf}(4.61)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(4.61\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;50.52\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{vf}(5.61)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(5.61\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;56.65\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{vf}(6.61)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(6.61\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;61.92\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{vf}(7.61)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(7.61\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;66.50\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{vf}(8.00)\\;=\\;16.00\\;\\cdot\\;\\left(0.61\\;-\\;\\frac1{-1.55\\;+\\;1}\\;\\cdot\\;\\left(5.83\\;-\\;0.61\\;-\\;\\frac{(8.00\\;+\\;5.83\\;-\\;2\\;\\cdot\\;0.61)^{-1.55+1}}{(5.83\\;-\\;0.61)^{-1.55}}\\right)\\right)\\;=\\;68.15\\;\\mathrm{kN}\/\\mathrm m^2\\end{array}$$\n###### Vertical forces (compressive) in the wall nsk(z)\n$$\\begin{array}{l}{\\mathrm n}_\\mathrm{zSk}(\\mathrm z)\\;=\\;\\mathrm\\mu\\;\\cdot\\;{\\mathrm p}_\\mathrm{ho}(\\mathrm z)\\;\\cdot\\;(\\mathrm z\\;-\\;{\\mathrm z}_\\mathrm v)\\;\\;\\;\\;\\;(5.81)\\\\{\\mathrm n}_\\mathrm{zSk}(0.61)\\;=\\;0.00\\;\\mathrm{kN}\/\\mathrm m\\\\{\\mathrm n}_\\mathrm{zSk}(1.61)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;(1.61\\;-\\;1.48)\\;=\\;2.55\\;\\mathrm{kN}\/\\mathrm m\\\\{\\mathrm n}_\\mathrm{zSk}(2.61)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;(2.61\\;-\\;2.16)\\;=\\;8.97\\;\\mathrm{kN}\/\\mathrm m\\\\{\\mathrm n}_\\mathrm{zSk}(3.61)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;(3.61\\;-\\;2.70)\\;=\\;18.02\\;\\mathrm{kN}\/\\mathrm m\\\\{\\mathrm n}_\\mathrm{zSk}(4.61)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;(4.61\\;-\\;3.16)\\;=\\;28.96\\;\\mathrm{kN}\/\\mathrm m\\\\{\\mathrm n}_\\mathrm{zSk}(5.61)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;(5.61\\;-\\;3.54)\\;=\\;41.30\\;\\mathrm{kN}\/\\mathrm m\\\\{\\mathrm n}_\\mathrm{zSk}(6.61)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;(6.61\\;-\\;3.87)\\;=\\;54.72\\;\\mathrm{kN}\/\\mathrm m\\\\{\\mathrm n}_\\mathrm{zSk}(7.61)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;(7.61\\;-\\;4.16)\\;=\\;68.98\\;\\mathrm{kN}\/\\mathrm m\\\\{\\mathrm n}_\\mathrm{zSk}(8.00)\\;=\\;0.458\\;\\cdot\\;43.70\\;\\cdot\\;(8.00\\;-\\;4.26)\\;=\\;74.81\\;\\mathrm{kN}\/\\mathrm m\\end{array}$$\n\n#### Filling Patch Loads on Vertical Walls\n\n###### Dimension of the patch load zone\n$$\\begin{array}{l}\\mathrm 
s\\;=\\;\\frac{\\mathrm\\pi\\;\\cdot\\;{\\mathrm d}_\\mathrm c}{16}\\;\\;\\;\\;\\;\\;\\;(5.12)\\\\\\mathrm s\\;=\\;\\frac{\\mathrm\\pi\\;\\cdot\\;5.00}{16}\\;=\\;0.98\\;\\mathrm m\\end{array}$$\n\nThe parameters zo = 4.22 m, ho = 0.61 m, n = -1.48, and pho = 43.70 kN\/m\u00b2 are identical to those calculated above for the symmetrical filling loads.\n\n$$\\begin{array}{l}\\mathrm E\\;=\\;2\\;\\cdot\\;\\frac{{\\mathrm e}_\\mathrm f}{{\\mathrm d}_\\mathrm c}\\;\\;\\;\\;\\;(5.10)\\\\\\mathrm E\\;=\\;2\\;\\cdot\\;\\frac{0.00}{5.00}\\;=\\;0.00\\end{array}$$ $$\\begin{array}{l}{\\mathrm C}_\\mathrm{pf}\\;=\\;0.21\\;\\cdot\\;{\\mathrm C}_\\mathrm{op}\\;\\cdot\\;(1\\;+\\;2\\;\\cdot\\;\\mathrm E^2)\\;\\cdot\\;\\left(1\\;-\\;\\mathrm e^{-1.5\\cdot(\\frac{{\\mathrm h}_\\mathrm c}{{\\mathrm d}_\\mathrm c}\\;-\\;1)}\\right)\\;\\;\\;\\;\\;\\;\\;(5.9)\\\\{\\mathrm C}_\\mathrm{pf}\\;=\\;0.21\\;\\cdot\\;0.50\\;\\cdot\\;(1\\;+\\;2\\;\\cdot\\;0.00^2)\\;\\cdot\\;\\left(1\\;-\\;\\mathrm e^{-1.5\\cdot(\\frac{8.00}{5.00}\\;-\\;1)}\\right)\\;=\\;0.06\\;\\geq\\;0\\end{array}$$\n$$\\begin{array}{l}{\\mathrm p}_\\mathrm{pf}(\\mathrm z)\\;=\\;{\\mathrm C}_\\mathrm{pf}\\;\\cdot\\;{\\mathrm p}_\\mathrm{hf}(\\mathrm z)\\;=\\;{\\mathrm C}_\\mathrm{pf}\\;\\cdot\\;{\\mathrm p}_\\mathrm{ho}\\;\\cdot\\;{\\mathrm Y}_\\mathrm R(\\mathrm z)\\;=\\;{\\mathrm C}_\\mathrm{pf}\\;\\cdot\\;{\\mathrm p}_\\mathrm{ho}\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{\\mathrm z\\;-\\;{\\mathrm h}_\\mathrm o}{{\\mathrm z}_\\mathrm o\\;-\\;{\\mathrm h}_\\mathrm o}\\;+\\;1\\right)^\\mathrm n\\right)\\;\\;\\;\\;\\;\\;(5.8)\\\\{\\mathrm p}_\\mathrm{pf}(0.61)\\;=\\;0\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pf}(1.61)\\;=\\;0.06\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{1.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;0.83\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pf}(2.61)\\;=\\;0.06\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{2.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;1.30\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm
p}_\\mathrm{pf}(3.61)\\;=\\;0.06\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{3.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;1.61\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pf}(4.61)\\;=\\;0.06\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{4.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;1.82\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pf}(5.61)\\;=\\;0.06\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{5.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;1.97\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pf}(6.61)\\;=\\;0.06\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{6.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;2.08\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pf}(7.61)\\;=\\;0.06\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{7.61\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;2.17\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pf}(8.00)\\;=\\;0.06\\;\\cdot\\;43.70\\;\\cdot\\;\\left(1\\;-\\;\\left(\\frac{8.00\\;-\\;0.61}{4.22\\;-\\;0.61}\\;+\\;1\\right)^{-1.48}\\right)\\;=\\;2.20\\;\\mathrm{kN}\/\\mathrm m^2\\end{array}$$\n\n$$\\begin{array}{l}{\\mathrm p}_\\mathrm{pfi}(\\mathrm z)\\;=\\;\\frac{{\\mathrm p}_\\mathrm{pf}(\\mathrm z)}7\\;\\;\\;\\;\\;(5.13)\\\\{\\mathrm p}_\\mathrm{pfi}(0.61)\\;=\\;0\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pfi}(1.61)\\;=\\;\\frac{0.83}7\\;=\\;0.12\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pfi}(2.61)\\;=\\;\\frac{1.30}7\\;=\\;0.19\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pfi}(3.61)\\;=\\;\\frac{1.61}7\\;=\\;0.23\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pfi}(4.61)\\;=\\;\\frac{1.82}7\\;=\\;0.26\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pfi}(5.61)\\;=\\;\\frac{1.97}7\\;=\\;0.28\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pfi}(6.61)\\;=\\;\\frac{2.08}7\\;=\\;0.30\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pfi}(7.61)\\;=\\;\\frac{2.17}7\\;=\\;0.31\\;\\mathrm{kN}\/\\mathrm m^2\\\\{\\mathrm p}_\\mathrm{pfi}(8.00)\\;=\\;\\frac{2.20}7\\;=\\;0.31\\;\\mathrm{kN}\/\\mathrm m^2\\end{array}$$\n\n#### Pressures on Flat Bottoms\n\nThe vertical pressure acting on flat bottoms of the intermediate slenderness silos cannot be taken as uniform and the calculation is based on the following load assessments:\n\n$$\\begin{array}{l}{\\mathrm C}_\\mathrm b\\;=\\;1.0\\;\\;\\;\\;\\;\\;(6.3)\\\\{\\mathrm p}_\\mathrm{vb}\\;=\\;{\\mathrm C}_\\mathrm b\\;\\cdot\\;{\\mathrm p}_\\mathrm{vf}(\\mathrm{hc})\\;=\\;1.0\\;\\cdot\\;68.15\\;=\\;68.15\\;\\mathrm{kN}\/\\mathrm m\u00b2\\;\\;\\;\\;\\;\\;(6.2)\\\\{\\mathrm h}_\\mathrm{tp}\\;=\\;\\tan\\;{\\mathrm\\Phi}_\\mathrm r\\;\\cdot\\;\\frac{{\\mathrm d}_\\mathrm c}2\\;=\\;\\tan\\;36.00^\\circ\\;\\cdot\\;\\frac{5.00}2\\;=\\;1.82\\;\\mathrm m\\;\\;\\;\\;\\;\\;(\\mathrm{Figure}\\;6.3)\\\\{\\mathrm p}_\\mathrm{vtp}\\;=\\;\\mathrm\\gamma\\;\\cdot\\;{\\mathrm h}_\\mathrm{tp}\\;=\\;16.00\\;\\cdot\\;1.82\\;=\\;29.06\\;\\mathrm{kN}\/\\mathrm m\u00b2\\;\\;\\;\\;\\;\\;(6.15)\\\\{\\mathrm p}_\\mathrm{vho}\\;=\\;\\mathrm\\gamma\\;\\cdot\\;{\\mathrm z}_\\mathrm v\\;=\\;16.00\\;\\cdot\\;0.61\\;=\\;9.69\\;\\mathrm{kN}\/\\mathrm m\u00b2\\;\\;\\;\\;\\;\\;(5.79)\\\\{\\mathrm{\u0394p}}_\\mathrm{sq}\\;=\\;{\\mathrm p}_\\mathrm{vtp}\\;-\\;{\\mathrm p}_\\mathrm{vho}\\;=\\;29.06\\;-\\;9.69\\;=\\;19.37\\;\\mathrm{kN}\/\\mathrm m\u00b2\\;\\;\\;\\;\\;\\;(6.14)\\\\{\\mathrm 
p}_\\mathrm{vsq}\\;=\\;{\\mathrm p}_\\mathrm{vb}\\;+\\;{\\mathrm{\u0394p}}_\\mathrm{sq}\\;\\cdot\\;\\frac{2.0\\;-\\;{\\displaystyle\\frac{{\\mathrm h}_\\mathrm c}{{\\mathrm d}_\\mathrm c}}}{2.0\\;-\\;{\\displaystyle\\frac{{\\mathrm h}_\\mathrm{tp}}{{\\mathrm d}_\\mathrm c}}}\\;=\\;68.15\\;+\\;19.37\\;\\cdot\\;\\frac{2.0\\;-\\;{\\displaystyle\\frac{8.00}{5.00}}}{2.0\\;-\\;{\\displaystyle\\frac{1.82}{5.00}}}\\;=\\;72.89\\;\\mathrm{kN}\/\\mathrm m\u00b2\\;\\;\\;\\;\\;\\;(6.13)\\end{array}$$\n\nThe bottom load magnifying factor Cb applies to silos of Action Assessment Class\u00a02 under the condition that the stored solids do not tend toward dynamic behaviour during the discharge process.\n\nThe vertical pressure pvsq on the bottom of a\u00a0silo may be taken to act both after filling and during discharge.\n\nThe defined load can be entered in RFEM. Figure\u00a013\u00a0shows the exemplary filling patch load for z\u00a0=\u00a04.61\u00a0m. This load can be entered in RFEM as a\u00a0free variable load. The load input is displayed in\u00a0Figure\u00a014.\n\n#### Reference\n\n [1] Eurocode\u00a01 - Actions on structures - Part\u00a04: Silos and tanks; EN\u00a01991\u20114:2010\u201112","date":"2017-11-23 11:11:30","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6008341312408447, \"perplexity\": 3268.6222931268608}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-47\/segments\/1510934806771.56\/warc\/CC-MAIN-20171123104442-20171123124442-00357.warc.gz\"}"}
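The filling-pressure walkthrough in the record above is easy to reproduce programmatically. The following Python sketch implements the Janssen expressions (5.71) to (5.77) with the article's cement parameters; variable names and the printed depths are my own choices, and the wall friction value 0.458 is the capped tan Φi value discussed in the article. It reproduces the tabulated p_hf values to within rounding:

```python
import math

# Parameters from the worked example (cement, EN 1991-4 Table E.1 values).
gamma = 16.0                     # upper unit weight, kN/m^3
K = 0.648                        # upper lateral pressure ratio K_u
mu = 0.458                       # wall friction capped at tan(24.59 deg)
d_c, h_c = 5.0, 8.0              # diameter and filling depth, m
phi_r = math.radians(36.0)       # angle of repose

A = math.pi * d_c ** 2 / 4       # cross-sectional area
U = math.pi * d_c                # inner perimeter
z_o = A / (K * mu * U)           # Janssen characteristic depth, eq. (5.75)
h_o = d_c * math.tan(phi_r) / 6  # highest solid-wall contact, eq. (5.77)
n = -(1 + math.tan(phi_r)) * (1 - h_o / z_o)  # eq. (5.76)
p_ho = gamma * K * z_o           # asymptotic pressure at depth, eq. (5.73)

def p_hf(z):
    """Horizontal filling pressure at depth z below the equivalent surface, eq. (5.71)."""
    return p_ho * (1 - ((z - h_o) / (z_o - h_o) + 1) ** n)

for z in (0.61, 1.61, 4.61, 8.0):
    print(f"z = {z:4.2f} m: p_hf = {p_hf(z):5.2f} kN/m^2")
```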
null
null
Q: How do I query multiple tables with a specific name that I get from a text field?

I'm trying to run a query using a name typed into a text field (the textbox is RegName). I want to take the name and select information from a MySQL database with 5 tables; each table has a field DogId that relates to an individual dog. I want to get the information for the dog with the name in the text field, then fill in text fields with the information I got from the query. My problem seems to be the last 2 lines of the select statement (From doginfo Where RegName = ' " +dn+ " '). Again, RegName is the text field with the name I am looking for. I have tried everything I can think of and have spent days looking for an answer; any help will be greatly appreciated. Here is my code:

// This is where we get the information on a dog based on its Name.
@FXML
private void querybyNameActionPerformed(ActionEvent event) {
    // Query the db for record with dog name equal to the name in regName.
    String user = "root";
    String password = "";
    String dn = null;
    dn = RegName.getText().trim(); // dn = dog name.
    try {
        Connection myConn = DriverManager.getConnection("jdbc:mysql://localhost:3306/kennelmanagment1", user, password);
        // Create statement.
        Statement myStmt = myConn.createStatement();
        // Execute query.
        ResultSet myRs = myStmt.executeQuery("SELECT d.DogId, d.RegNum, d.RegName, d.WhelpDate, d.DNA, d.Notes, a.Breed, a.Sex, a.color, s.SireRegName, b.DamRegName, o.LastName, c.CoOwnerNames, o.Address1, o.Address2, o.City, o.State, o.Zip, o.Htel, o.Cell \n" +
            "FROM doginfo d LEFT JOIN dogat a \n" +
            "ON d.DogId = a.DogId \n" +
            "LEFT JOIN sires s \n" +
            "ON d.DogId = s.DogId \n" +
            "LEFT JOIN dams b \n" +
            "ON d.DogId = b.DogId\n" +
            "LEFT JOIN owners o\n" +
            "ON d.DogId = o.DogId\n" +
            "LEFT JOIN coowners c\n" +
            "ON d.DogId = c.DogId" +
            " FROM doginfo" +
            "Where RegName = ' " + dn + " '");
        //System.out.println (dn);
        // System.out.println ("SELECT * FROM doginfo WHERE Regname = '" + dn + " ' ");
        // Process the result.
        while (myRs.next()) {
            DogId.setText(myRs.getString("DogId"));
            RegNum.setText(myRs.getString("RegNum"));
            RegName.setText(myRs.getString("RegName"));
            WhelpDate.setText(myRs.getString("WhelpDate"));
            Breed.setText(myRs.getString("breed"));
            Sex.setText(myRs.getString("sex"));
            Color.setText(myRs.getString("color"));
            SireRegName.setText(myRs.getString("SireRegName"));
            DamRegName.setText(myRs.getString("DamRegNName"));
            Owner.setText(myRs.getString("Owner"));
            CoOwners.setText(myRs.getString("CoOwners"));
            Address1.setText(myRs.getString("Address1"));
            Address2.setText(myRs.getString("Addess2"));
            City.setText(myRs.getString("City"));
            State.setText(myRs.getString("State"));
            Zip.setText(myRs.getString("Zip"));
            DNA.setText(myRs.getString("DNA"));
            HTel.setText(myRs.getString("HTel"));
            Cell.setText(myRs.getString("Cell"));
            Notes.setText(myRs.getString("Notes"));
        }
    } catch (Exception e) {
        JOptionPane.showMessageDialog(null, "Error");
    }
}

A: You don't need the second FROM clause at all, since the table is already mentioned in the first line of the query. Remove the +" FROM doginfo" fragment. Two more problems remain after that:

1. There is no space between "ON d.DogId = c.DogId" and "Where RegName = ...", so the concatenated SQL runs together as "c.DogIdWhere". Add a space, and qualify the column as d.RegName, since more than one of the joined tables may carry a RegName column. The tail of the query should read: "ON d.DogId = c.DogId " + "WHERE d.RegName = '" + dn + "'"
2. The literal ' " +dn+ " ' puts extra spaces inside the quotes, so you search for " Rex " instead of "Rex".

Better still, use a PreparedStatement with "WHERE d.RegName = ?" and setString(1, dn); it handles the quoting for you and protects against SQL injection. Also note that some getString column names are misspelled ("DamRegNName", "Addess2"), and the query selects o.LastName and c.CoOwnerNames while the loop reads "Owner" and "CoOwners". Finally, print e.getMessage() in the catch block instead of a generic "Error" dialog, and the driver will tell you exactly where the syntax breaks.
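The same lookup is safer with a bound parameter instead of string concatenation. Here is a minimal sketch of that pattern using Python's built-in sqlite3 driver, purely for illustration (the thread itself uses Java and MySQL; the table and column names are borrowed from the question):

```python
import sqlite3

# The ? placeholder carries the user-supplied name, so quoting and
# escaping are handled by the driver rather than by hand-built strings.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doginfo (DogId INTEGER, RegName TEXT)")
conn.execute("INSERT INTO doginfo VALUES (1, 'Rex')")

dn = "  Rex  ".strip()  # stands in for RegName.getText().trim()
row = conn.execute(
    "SELECT DogId, RegName FROM doginfo WHERE RegName = ?", (dn,)
).fetchone()
print(row)  # (1, 'Rex')
```

In JDBC the equivalent is a PreparedStatement with setString, as noted in the answer above.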
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,880
The best mother's day gifts are handmade and usually involve construction paper, white glue and dried pasta. If you're a bit too old to give your mother a macaroni picture frame and you've left it way too late to get to a shop, you can still buy her a Gift Certificate for The Orange Room. She'll be able to browse our collections for the perfect hat or scarf as she thinks about what a great child you are.
{ "redpajama_set_name": "RedPajamaC4" }
2,213
Bob Rogers Reminiscing Every Saturday Night! CATCH BOB ROGERS EVERY SATURDAY NIGHT ON HIS SHOW 'REMINISCING'. Bob will pull up his chair, put on the headphones and settle into his studio to present "Reminiscing" 6pm to Midnight every Saturday Night.
{ "redpajama_set_name": "RedPajamaC4" }
6,914
Q: Search for value in multidimensional array and get parent array in PHP

I have this array:

Array ( [0] => Array ( [name] => Dick Jansen [matchedMovie] => Array ( [0] => Array ( [nameMovie] => Saw [genre] => Horror [patheMovie] => Texas Chainsaw 3D [patheMovieGenre] => Horror [score] => 100.00 ) ) ) [1] => Array ( [name] => Jim Scott [matchedMovie] => Array ( [0] => Array ( [nameMovie] => Shooter [genre] => Action, Thriller [patheMovie] => The Shining [patheMovieGenre] => Horror, Suspense/Thriller [score] => 52.38 ) [1] => Array ( [nameMovie] => Resident Evil Movie [genre] => Action/Horror [patheMovie] => Texas Chainsaw 3D [patheMovieGenre] => Horror [score] => 63.16 ) ) ) )

I want to search on a [patheMovie] value (like 'The Shining') and get back the parent's [name] plus only the [matchedMovie] entry with the matched [patheMovie]. I tried something like this:

$search = 'Texas Chainsaw 3D';
$sorted = false;
foreach ($sorted as $n => $c)
    if (in_array($search, $c)) {
        $cluster = $n;
        break;
    }

If I search for 'The Shining', for example, I want the array to return like this:

Array ( [0] => Array ( [name] => Jim Scott [nameMovie] => Shooter [genre] => Action, Thriller [patheMovie] => The Shining [patheMovieGenre] => Horror, Suspense/Thriller [score] => 52.38 ) )

and if you search for 'Texas Chainsaw 3D', like so:

Array ( [0] => Array ( [name] => Dick Jansen [nameMovie] => Saw [genre] => Horror [patheMovie] => Texas Chainsaw 3D [patheMovieGenre] => Horror [score] => 100.00 ) [1] => Array ( [name] => Jim Scott [nameMovie] => Resident Evil Movie [genre] => Action/Horror [patheMovie] => Texas Chainsaw 3D [patheMovieGenre] => Horror [score] => 63.16 ) )

A: This solution depends on two nested loops.

<?php
function searchIt($arr, $searchItem) {
    $result = array();
    $resultIndex = 0;
    for ($i = 0; $i < count($arr); $i++) {
        for ($j = 0; $j < count($arr[$i]['matchedMovie']); $j++) {
            if ($arr[$i]['matchedMovie'][$j]['patheMovie'] == $searchItem) {
                $result[$resultIndex]['name'] = $arr[$i]['name'];
                foreach ($arr[$i]['matchedMovie'][$j] as $key => $value) {
                    $result[$resultIndex][$key] = $value;
                }
                $resultIndex++;
            }
        }
    }
    return $result;
}
?>

phpfiddle demo

A: Haven't tested this, but this should work (note that it returns only the first matching person):

function findYourGuy($array, $searchTerm) {
    foreach ($array as $personArray) {
        $matchedMovies = $personArray['matchedMovie'];
        $name = $personArray['name'];
        foreach ($matchedMovies as $matchedMovie) {
            if ($matchedMovie['patheMovie'] == $searchTerm) {
                return array('name' => $name, 'matchedMovie' => $matchedMovie);
            }
        }
    }
    return false; // no result
}

A: I would use array_filter with a closure, so the search term is in scope inside the callback. Something along these lines:

$movieName = 'The Shining';
$result = array_filter($movies, function ($person) use ($movieName) {
    foreach ($person['matchedMovie'] as $movie) {
        if ($movie['patheMovie'] == $movieName) {
            return true;
        }
    }
    return false;
});
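For comparison, the nested search-and-flatten logic the question asks for reads naturally as a double comprehension. A Python sketch of the identical algorithm (the key names mirror the PHP arrays above; the function name is my own):

```python
def search(people, pathe_movie):
    """Return one flattened dict per matching (person, movie) pair."""
    return [
        {"name": person["name"], **movie}
        for person in people
        for movie in person["matchedMovie"]
        if movie["patheMovie"] == pathe_movie
    ]

# search(people, "Texas Chainsaw 3D") yields one entry for Dick Jansen (Saw)
# and one for Jim Scott (Resident Evil Movie), matching the desired output.
```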
{ "redpajama_set_name": "RedPajamaStackExchange" }
8,260
<!doctype html> <title>CSUN 2014 Presentations</title> <ul> <li><a href="aria">ARIA, JavaScript &amp; jQuery</a></li> <li><a href="fireeyes">FireEyes Custom Rules</a></li> </ul>
{ "redpajama_set_name": "RedPajamaGithub" }
3,131
Q: Techniques or tools to facilitate work on 2 or more versions of the same software

I just finished working on a WordPress plugin. The plugin is split into two versions (standard and premium). The difficulty starts when I have to fix bugs that are common to both versions. Currently I proceed like this: unpack one version into the WordPress plugin dir, debug, pack it again, and then do the same for the other version. Not so painful, but what would really be ideal is to work on a single codebase and then push the changes to both versions at the same time. Even better: automatically zipping and sending them by email through a bash script or similar would be heaven on Earth. I use Linux and NetBeans (which can manage git, mercurial and subversion), but the solution does not need to involve them. THX.-

A: "More over: automatically zipping and sending by email" - Zipping what? Sending where? The sending, at least, depends on the other side, which has to handle it. Without a more detailed description of your current workflow, I can't see a way to answer the question. Are both versions related by code (i.e., is Premium a set of patches on top of Standard, or vice versa)? Do you use any SCM now? Do you use branches? Can you use hooks in your VCS of choice?
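If the premium edition really is the standard code plus extra files, one low-tech way to get the "fix once, package twice" workflow is to keep a single source tree and build both zips from it. A minimal Python sketch; the directory layout under src/ is an assumption for illustration, not something the question specifies:

```python
import zipfile
from pathlib import Path

SHARED = Path("src/common")          # code shared by both editions (assumed layout)
PREMIUM_EXTRA = Path("src/premium")  # files that exist only in the premium edition

def build(zip_name, roots):
    """Pack every file under the given roots into one plugin archive.

    Assumes the trees do not contain the same relative path twice.
    """
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for root in roots:
            for f in sorted(root.rglob("*")):
                if f.is_file():
                    zf.write(f, f.relative_to(root))  # merge trees at the top level

build("plugin-standard.zip", [SHARED])
build("plugin-premium.zip", [SHARED, PREMIUM_EXTRA])
```

A branch per edition in a VCS, with a script like this run from a hook, gets close to the "one fix, two packages" flow the question asks for; mailing the archives is then a job for whatever the other side expects to receive.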
{ "redpajama_set_name": "RedPajamaStackExchange" }
487
Autonoe

Autonoe was discovered Dec. 10, 2001 by Scott S. Sheppard, David C. Jewitt, and Jan T. Kleyna at the Mauna Kea Observatory in Hawaii.

Autonoe is considered a member of the Pasiphae group, a family of Jovian satellites which have similar orbits and are therefore thought to have a common origin. Most or all of the Pasiphae satellites are thought to have begun as a single asteroid that, after being captured by Jupiter's gravity, suffered a collision which broke off a number of pieces. The bulk of the original asteroid survived as the moon called Pasiphae, and the other pieces became some or all of the other moons in the group.

All of the Pasiphae moons are retrograde, which means that they orbit Jupiter in the opposite direction from the planet's rotation. Their orbits are also eccentric (elliptical rather than circular) and highly inclined with respect to Jupiter's equatorial plane. All of these characteristics support the idea that the Pasiphae satellites began as one or more captured asteroids, rather than forming as part of the original Jupiter system.

Compared to Jupiter's other satellite groups, confidence is lower that all the moons in the Pasiphae group originated in a single collision. This is due to differences in color (varying from red to gray), and differences in orbital eccentricity and inclination among the members of the Pasiphae group. Sinope, in particular, is suspected of starting out as an independent asteroid. If Sinope does not belong in the Pasiphae group, then the individual moon called Pasiphae retains 99 percent of the mass of the original asteroid. If Sinope is included, Pasiphae still retains the lion's share: 87 percent of the original mass.

None of the Pasiphae members is massive enough to pull itself into a sphere, so they are probably all irregularly shaped. Autonoe has a mean radius of 1.2 miles (2 km), assuming an albedo of 0.04. At a mean distance of about 14.9 million miles (24 million km) from Jupiter, the satellite takes about 761 Earth days to complete one orbit.

How Autonoe Got its Name

Originally called S/2001 J1, Autonoe was named for the mother of the Graces by Jupiter, according to some authors. A name ending in "e" was chosen for this moon in accordance with the International Astronomical Union's policy for designating outer moons with retrograde orbits.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,749
The 1998 Princeton Lectures on Youth, Church, and Culture

Growing Up Postmodern: Imitating Christ in the Age of "Whatever"

Descartes is history. That's the conclusion of postmodernity. Foundational truth is out, relativity is in. Trace it to Hiroshima, the assassination of John F. Kennedy, the Challenger explosion. Technology is not the panacea we thought it would be. Trace it to Watergate, liposuction, spin doctors. Truth is not an objective reality anymore. Trace it to institutional differentiation, Baskin Robbins, cable TV. Choice can paralyze as well as liberate. Nobody knows this better than the young people whose coming of age coincides with the turn of the millennium. They live in a world where microchips are obsolete every eighteen months, information is instantaneous, and parents change on weekends. The one constant in the postmodern adolescent's experience is upheaval. Truth changes daily. The signature quality of adolescence is no longer lawlessness, but awelessness. Go ahead, youth say to the church. Impress me.

When everything is true, nothing is true. Whatever. It's true that we live in a world that considers truth too relative to specify. The comics brought us mutant "X-Men" and now "X-Women"; consumer thinking brought us X-brands and X-spouses; pop culture brought us X-Files and Generation X. The letter "X" is having a banner decade, labeling "whatever" we don't have the time or the inclination to explain. Maybe the word "whatever" found its way into the contemporary adolescent vocabulary because "X" describes precisely the Truth they seek. In the early church, the Greek letter "X" (chi) referred to Jesus Christ. This generation of young people is neither the first nor the last in search of "X." Paul recognized this quest in the Athenians, who went as far as to erect an altar to "an unknown god":

What you worship as unknown, this I proclaim to you. . . The One who is Lord of heaven and earth. . . made all nations. . . so that they would search for God. . . . God will have the world judged in righteousness by a man whom God has appointed, and of this we are assured because God raised him from the dead. (Acts 17:23-31)

We all seek "X," God's Truth beyond relativity. We are here because we are called to imitate and obey and proclaim this Truth to all who worship unknown gods. The Truth is out there, for young people and for us. May you find grace to peruse the "X-Files" of your own life in the days ahead, as we grope for "X" together. Though, indeed, he is not far from each of us.

Godspeed,
Director, Institute for Youth Ministry

Nancy T. Ammerman
"Communities of Faith for Citizens of a Postmodern World"
"Just What Is Postmodernity and What Difference Does It Make to People of Faith?"

Martin E. Marty
"Who Is Jesus Christ for Us Today?
As Asked by Young People" "Youth between Late Modernity and Postmodernity" Sharon Daloz Parks "Faithful Becoming in a Complex World: New Powers, Perils, and Possibilities" "Home and Pilgrimage: Deep Rhythms in the Adolescent Soul" Friedrich Schweitzer "Global Issues Facing Youth in the Postmodern Church" William Willimon "Imitating Christ in a Postmodern World: Young Disciples Today" Home and Pilgrimage: Deep Rhythms in the Adolescent Soul Sharon Dalox Parks is the author of The Critical Years: Young Adults and the Search for Meaning, Faith, and Commitment and a co-author of both Common Fire: Leading Lives of Commitment in a Complex World and Can Ethics Be Taught? For over twenty years, she was at Harvard University where she served in faculty and research positions in the schools of divinity, business, and government. She has also taught at Weston School of Theology in Weston, MA. She is presently an associate director and faculty member of the Whidbey Institute in Clinton, WA, and she is a lecturer and consultant in education and religion, leadership, and ethics for diverse institutions. Once upon a time, a wise professor advised an entering freshman class, "I hope that as you take notes throughout your college career, you will include what is not being said." Particularly in a time of significant historical change, it is important to pay attention to what has not been said, to what is apt to be neglected. The attempt to do that prompts my reflections here. Two people recently brought to my attention a special section of The New York Times that was devoted to a descriptive analysis of today's teenagers. What I found both particularly useful and sobering were the first two paragraphs of the lead article: Trying to label America's nearly 60 million teenagers is about as easy as staying on the trail of a snowboarder in a whiteout. No definition can stretch far enough to include bubbly Hanson fans and moody Marilyn Manson devotees, and marijuana-smoking home boys and savvy junior entrepreneurs, born-again virgins and single moms, student activists and frustrated truants, especially when one teenager might inhabit several of those identities. As adults circle, fascinated, teenagers survive and thrive within these interlocking universes. If these teenagers must be tagged, call them the autonomous generation, creating themselves in nobody's image but their own. 1 Autonomous? What I find disturbing in this first paragraph is the label, "the autonomous generation," and the description, "creating themselves in nobody's image but their own." Particularly in a society where vast amounts of money are spent by the advertising industry to form teens in the image of their clients' commercial interests, thinking of teens in this way misuses the language, distorts the truth of their lives, and utterly ignores the power of the social contexts that shape us all. The article continues, offering an example of an "autonomous" teenager: Nicole Hernandez is one such 1990s teenager. She lives in Elmhurst, Queens, but not with her parents. Fleeing a troubled home, she became an emancipated minor last year and moved in with her boyfriend and his mother. Nicole likes going to clubs and concerts, but she spends most of her time juggling high school classes, chores, and a full-time restaurant job. "It is hard," she said during a recent interview on her day off from work. "I have to help pay the rent, and if I don't pass all my classes I don't graduate. 
But then there are days when I just want to hang out with my friends and be young, like I was a year ago."

The label "autonomous" and the description "emancipated" do not convey the reality of this young person's life. Such language does, however, absolve adults of our responsibility to support, nurture, and accompany adolescent youth. Surely, a vital part of adolescent development is the growing capacity to take responsibility for oneself and others, but in our society, so strongly steeped in an ideology of individualism, we too readily deny our interdependence as human beings with each other and with the natural world. We are much too quick to seek and to celebrate "autonomy" as the capacity to stand, to act, and to live alone — self-sufficient and needing no one. Indeed, the conventional psychological take on adolescence all too often rests on the assumption that as soon as a young person crosses the threshold of puberty and becomes a "teen," parents and other adults no longer influence young lives, and the "peer group" becomes the only power and value.

We begin to move into a more accurate reading of adolescent psychology by noticing that if the peer group has significant power with adolescent youth, "autonomy" is not the dynamic at play. Moreover, what is happening in adolescent development is something more profound than moving from dependence to independence. The motion of the adolescent life is a movement into participation in a wider world within which family and community play an important but changing role. All who listen closely to teenagers know that to become an adolescent does not mean that one desires to abandon a good enough home. It means becoming part of a wider world and a larger social structure.

Home and Pilgrimage

Consider two companion metaphors that capture the rhythms of this more complex movement in the adolescent soul. The metaphors are "home" and "pilgrimage."

During their childhood and teenage years, my husband's children, Kate and Todd, regularly spent time with us at our home in a very rural area of forest and fields in northern Vermont. 2 About the time that Kate turned thirteen, she arranged one Saturday to get together with her childhood friend and neighbor, Natalie. When Natalie came over to our house, the two of them wandered outside. It was a lovely summer day. An hour, maybe two-and-a-half hours went by, and then the phone rang. It was Kate. She was calling from the general store in our little town of Glover, about four miles away. "We wandered off through the woods and made our way across lots," she explained. "We really weren't sure where we were going, but we managed to end up in Glover. Will you come get us?" Of course we said yes, got in the car, and went to fetch them. On the way back home, Kate and Natalie were full of stories — the good fun and exhilaration of venturing forth from home through unfamiliar terrain. Clearly all of us had crossed some new threshold.

Two years went by, and Todd turned thirteen. As it happened, one weekend in the dead of winter, Todd brought his friend, Adam, along with him to our home. On Saturday afternoon, Todd and Adam decided they would prepare themselves for an expedition. When there is three feet of snow and freezing temperatures, you do not wander into the woods in T-shirts and shorts. Indeed, they donned jackets, boots, hats, mufflers, gloves — then added a knife, a rope, and an axe. At the last minute, they also opted for snowshoes. We sent them off with good wishes.
Time went by — an hour, two-and-a-half hours. The phone rang. It was Todd. They had traveled as far as the neighbors a quarter of a mile away — just beyond any sight of our house. The neighbors had welcomed them in and served them hot chocolate, and now, "Would you come and give us a ride home?" Todd asked.

At whatever age, a good life is a rich mix of venturing and abiding 3 — home and pilgrimage. These moments in the lives of Kate, Natalie, Todd, and Adam are emblematic of the recomposing of home and the new possibilities of pilgrimage during the adolescent years. Home and pilgrimage depend upon each other. But our society tends to overvalue the journey in ways that eclipse the significance of home. 4

Journey at the Expense of Home

There are many reasons why our society and especially our religious culture have overvalued the journey at the expense of home. The word "journey" is rooted in "jour," meaning day — a day's travel, ongoing travel through time. The word "pilgrimage," in contrast, signifies going out from a homeplace and then returning home with gifts, blessings, wisdom. Thus the interdependence of "home" and "pilgrimage." In religious-spiritual and other cultural literature, the images of home and pilgrimage were traditionally and profoundly linked. This began to shift, however, about two hundred years ago. Particularly since the Enlightenment, we have been keenly aware of the limitations of our knowledge — especially our knowledge of God, Truth, Ultimate Reality. We have become poignantly aware of the relativized and partial character of truth. Our understanding is always incomplete — and, hence, we have a consciousness of always needing to press further in an ongoing intellectual and spiritual journey, moving toward, but never quite arriving at, the object of our quest for truth and wholeness.

Many people in American society are immigrants. We have left our "homelands," and we have not returned. For this we pay a psychic price — a soul price. Moreover, the Industrial Revolution separated home from the means of production. The workplace and the homeplace have become increasingly bifurcated, again at the cost of "balance," family, community, and aspirations toward an integrated life.

When I was first beginning to explore the power of the relationship between these metaphors, I was in New Zealand. I will always remember how on the first evening that I was there, after I had spoken about these themes, a woman very purposefully sat down beside me and said, "I left the church, but I felt that I should still find a way to attend to my spiritual life. So I decided I would pay attention to my dreams." Then she said, "The first two figures who appeared were a homemaker and a pilgrim." I was surprised and intrigued. She continued, "Yes, I grew up in a home where my father was an alcoholic, and I never really had a chance to feel a sense of home until after I was an adult and he had died. Then I moved into a home that was the first home I ever really loved. But I subsequently gave up that home in order to go to seminary." And then she said, "In my dream, and in the reflection on it afterwards, the pilgrim was strong, robust, and striding along. The homemaker, however, was small and angry. I couldn't get them to talk to each other until first the pilgrim comforted the homemaker."

The next day when I was speaking with another group, this same woman was there, and I asked her to share her story. Then I invited everyone present to reflect on whether or not there was a pilgrim and a homemaker within them.
In the discussion that followed, one man spoke with particular poignancy, saying, "I guess I identify the pilgrim with the career part of my life, and I identify the homemaker with the personal part of my life." (This was not what I had intended, but others also spoke in these terms.) He went on to say how much he wished that those dimensions of his life were related in a more adequate way.

As we reflect on the new powers of body, mind, heart, and soul that emerge during adolescence, we recognize the development of a capacity for a new experience of transcendence, a new "high" — a kind of soaring journey beyond where we have been. Here also we recognize that there is a corresponding development of a new experience of immanence, a new kind of intimacy, a new kind of belonging, a qualitatively new experience of home and tribe — since "now I can see you seeing me see you." 5

The Power of Table and Hearth

With this dimension of development in mind, it is useful to note the power of the table. In the book Common Fire: Leading Lives of Commitment in a Complex World, my colleagues and I tell the story of the findings in our study of the formation of more than one hundred people who are able to sustain commitment to the common good and practice the kind of citizenship that is now needed. One of the patterns we observed is that for many, though not for all, the family dinner table had an important place in their formation. The practice of the table is a primary way by which we learn the habits of belonging and lay claim to the practices of civitas. Douglas Meeks has said that the table is a place where you know there will be a place for you, where what is on the table will be shared, and where you will be placed under obligation. At the table we learn the arts of participation, gratitude, sharing, and conversation. Across all societies, human beings eat together (except, of course, in our fast-food culture where one can find "stand-up gourmet" restaurants). We must eat in order to live, and in the face of that shared vulnerability, human beings gather together and eat in common. 6

During adolescence the table can become more, rather than less, important. As adolescents begin to explore a wider world, table and hearth can remain a meeting place for family, as well as with others. The table can be an important element in a new composition of self and world (even if perhaps at times it becomes a stormier place). We intuitively know this. Someone recently remarked, "I have all of these eighth graders, and I really do not know what to do to bring them together." To which another responded, "That's easy — pizza!"

Jeanne McIver leads the youth ministry program at Fairmont Presbyterian Church in Dayton, Ohio. The youth gather on Wednesday evenings, and the time begins with dinner at the table. At every table there are four adults and eight teenagers, and the places are assigned so that there is continuity from week to week. Jeanne ensures that there is a good mix of young people who talk easily with the young people who find it more difficult. This practice of the table creates inter-generational community within the church. Once she asked the young people to indicate by a show of hands how many of them ate regularly with their own families at home. Her own daughter was the only one who raised her hand. This is in Ohio — America's heartland — where we presume that our values still hold.
This kind of shift in our social practice has implications for our society, in part because it has implications for our religious life. Wendy Wright has written, "To prepare and share a meal together is one of the holiest acts in all religious traditions. . . . We may pass our meals in conflict or with trivial conversation. Yet the power of the ritual remains. At the deepest level, the gathering in the dining room or at the kitchen table is an experience of communion in which the mystery of our mutual need and nourishment is played out. This is the level of our true hunger and satiety and the level on which we must encounter one another to genuinely know who we are." 7 Chuck Foster, dean and professor of religious education at the Candler School of Theology, teaches that we cannot grasp the meaning of Christian communion if we have not been steeped in the practice of the table in ordinary time.

Recomposing Self, Home, and World

A teaching video was produced in concert with the book Common Fire. It opens simply with a scene of a typical family dinner table with a lit hearth in the background. An incident in the production of this scene further illustrates the interrelated dynamics of home and pilgrimage in the formation of young lives. After the scene of table and hearth was first filmed, to our amazement and dismay, some simple, very inexpensive, glass objects that had been used as a part of the table setting appeared much too elegant. We previewed it with a few people, and sure enough, the reactions were strong and charged: "That doesn't look like any dinner table I ever knew!" Precisely because the family table matters in people's lives (either positively or negatively), we knew that the image might in any case evoke uncomfortable and even painful memories for some, and it was important that the scene of table and hearth be one that all who saw it would want to affirm. So we made a tough budget decision and decided to shoot the scene over again.

The producer, Terry Strauss, is not only an excellent artist herself, she knows how to put together a talented and committed team. The cameraman she chose, Michael Anderson, has received two Academy Award nominations. She chose his nineteen-year-old daughter, Sarah, to be the assistant producer. When the full team arrived to reshoot the scene in a specially rented facility, the proprietor said, "By the way, you can't use the fireplace this time. We had it inspected yesterday, and they say it isn't safe." Terry knew that even if nothing was filmed, the crew would have to be paid for half a day. So she pleaded, "I just have this one Duraflame log, and we only have to burn it for five minutes." The proprietor remained adamant. It was a tense moment. Everyone cast about for alternatives, to no avail. Then Sarah suggested, "Why don't we put a TV monitor in the fireplace and run the earlier take of the flames, and it might look like a real fire." The crew looked at each other, thought it wouldn't work, but agreed to give it a try. After the shot of table and hearth was thus redone, they put the new tape on the monitor, by this time thinking there was a chance that it might work well enough for untrained eyes. To their surprise, they all had to admit that even "trained eyes" couldn't tell the difference. Having heard about the crisis, I was thrilled to learn the outcome, and I said to Terry, "You must tell Sarah how grateful we are." Terry responded, "Sarah knows that her father knows that she saved the shot.
Nothing that you or I could say would add to the splendid satisfaction of that for her." Sarah is moving from her homeplace out into a wider world, and she is taking on new responsibility. But in the midst of that pilgrimage, "home" is being recomposed in a new conversation between self and world. Home and pilgrimage constitute the forth-and-back rhythm of a single dance in the development of the soul.

A Larger Conversation

Table and hearth can be important in the formation of adolescent spirituality because they are natural places of conversation. As we consider "imitating Christ in an age of whatever," notice that Jesus always appears in common settings and takes the conversation to a deeper place. Whether in conversation with the woman at the well, or the rich young ruler, or Zacchaeus, or the men who were prepared to stone the woman who had been taken in adultery, Jesus is always listening to the deepest currents of the soul and moving the conversation to the essence of things.

Across college campuses students are asking for more time with faculty, and faculty are dutifully posting more office hours. But if we listen carefully, it becomes clear that students don't want more office hours. They want hearth. They want table. They want a back-and-forth rhythm of conversation that moves us from where we are to where we could be as it works its way to the essence of things. In ministry to youth, we do something enormously important when we legitimate "real talk." As adolescents increase their capacity for taking the perspective of others while maintaining their own, they move into transformative space. The Quaker mystic, Howard Thurman, expressed it this way:

It is a miracle when one man [he is writing as a man] standing in his place is able while remaining there to put himself in another person's place, to send his imagination forth to establish a beachhead in another person's spirit, and from that vantage point so to blend with the other's landscape that what he sees and feels is authentic. This is the great adventure in human relations. To experience this is to be rocked to one's foundations. We are not the other person. We are ourselves. All that they are experiencing we can never know, but we can make accurate soundings. 8

One of the people we interviewed now owns a profitable software company and seeks to help his clients create an exemplary work environment. Reflecting on his teenage years, he told us, "At my synagogue there was a woman who was a fantastic teacher who had a pretty profound influence. We would be Hebrew school teachers for the younger kids in a leadership development program. They were taking thirteen-to-seventeen-year-old kids and in the midst of our adolescent problems teaching us how to be leaders, literally, like the concept of active listening. That was taught to me when I was fourteen. I think it taught us all to become more empathetic and more discerning." 9

Another told us, "We had a youth camp every summer, and then we would meet once or twice each semester for a weekend retreat. It was a group that was about something important — serious discussions about life issues — sex and politics and pressing theological issues. It mattered to me so much that in those groups there was an affirmation of honesty and integrity and pushing you beyond the bounds of what you used to think. . . . The youth leaders were incredibly important in affirming a sense of a wider purpose, affirming my concerns, and giving me responsibility for beginning to act them out." 10
Indeed, "good talk" — transformative conversation among teenagers — is most apt to occur where there are adults who help to keep the talk good by providing ground rules about the kind of respect that we extend to each other and thus help to clarify the terms of the belonging that otherwise may become fierce and destructive. Good, even great, conversation can take form among adolescent youth when adults offer themes that legitimate hard, mind-and-soul-stretching stuff to talk about. When this kind of conversation happens, youth are initiated into essential practices of genuine community. 11

A Larger Homeplace

Community, communication, and communion are all a part of creating a good homeplace. Adolescents are ready for a larger homeplace. It is only in recent times that we have come to associate "home" more exclusively with one's own domicile. For most of the humans who have lived on this planet, "home" has meant village or region — a wider sense of belonging. Teenagers want to participate in this larger sense of "home." They test a community's capacity to function as a place where youth can connect and belong and participate in play and sport, creativity and performance, caring and responsibility, believing and daring. It is from this kind of "home base" that meaningful pilgrimage can happen well.

Pilgrimage beyond Tribe

A particular kind of pilgrimage is needed in our time. It is no longer enough simply to go further afield geographically. Young people today need opportunities for pilgrimage "beyond tribe." As human beings, we all need "tribe," and adolescence can be the most "tribal" of times — in the best sense. But if we are preparing young people to become citizens committed to the common good rather than to just "me and mine," the study represented in Common Fire provides compelling evidence that it is critically important during one's formative years to have "constructive encounters with 'otherness.'" Surely, a primary characteristic of faithful citizenship in our time is the capacity to move comfortably and respectfully across institutional and social boundaries. As our society becomes more complex and diverse, there is an understandable tendency to move into gated communities, to fortify professional guilds, to create a cacophony of single-issue politics, and to flee to fundamentalistic religion. In every case, the fear of "the other" abounds. Yet, increasingly, the art of being human depends, in part, on the capacity to live both within and beyond tribe — to find a good mix of home and pilgrimage.

In his work, David Ng has helped us to see that "diversity" can take many forms in addition to ethnicity, gender, and economic-social class. Wherever "us" and "them" exist, we have diversity — no matter how subtle. Discovering that "us" and "them" can be transformed into "we" is one of the most vital dimensions of learning during the adolescent years. There are many ways that this can happen. For example, a bright, gifted young woman suffered a serious accident while playing basketball, and thus became dependent on the use of a wheelchair. While still in high school, she was invited to a national gathering of youth sponsored by a mainstream Christian denomination. She was somewhat dubious about participating because she was concerned about the patriarchal practices of the church. But she knew that this conference was making a special effort to include differently abled young people like herself, and she decided to attend.
A dance was scheduled for the last evening of the conference, and she and another young woman (who also uses a wheelchair) debated going. In the end, they did go to the dance and watched from the edge of the large gymnasium. Then a young man from Latin America asked one of them to dance, and she accepted. He wheeled her to the center of the room, grabbed a chair for himself, and the two of them began to dance. Another young man went to the second young woman and began to dance with her in the same fashion. Within moments, the remaining five hundred young people went for chairs of their own, sat down, and kept dancing! "Us" and "them" became "we." While still cautious, this young woman has a renewed respect for the possibilities of church. This is precisely the kind of behavior that we expect from faithful practices of the church community. Growing up in this kind of community can make a significant difference in preparing people to live in the midst of a society that is increasingly diverse.

There are many, however, who believe that people only become compassionate toward others if they have suffered profoundly themselves. To be sure, sometimes our own suffering prepares us to respond more empathetically to others, and often those who are at the margins of their own communities because of some vulnerability may more easily extend compassion to others beyond their own "tribe." 12 But there is another kind of "marginality" that also prepares people to respond in positive ways to the stranger. There is a kind of "value-based marginality" that we see in those who grow up in communities that practice an uncommon regard for others. 13

At a time when we bemoan the "coarsening of society," the church can be a place where young people can learn to respect and include the stranger — the ones who are different from "us." It has been rightly said that justice is simply a matter of who is included and who is excluded, or whom we can tolerate neglecting. 14 Indeed, if we are "imitating Christ," even a limited reading of the Gospels reminds us that Jesus was always blurring the boundaries between who was included and who was excluded. Thus, a practice of pilgrimage that encourages young people to venture beyond their own tribe and into the place of "the other" is well placed as a central component of youth ministry.

Many people are already giving young people opportunities to go on pilgrimages and to encounter strangers. One pastor makes it possible for young people to travel from Indiana to Vietnam. When they return from this pilgrimage, the young people talk with their parents about the economic disparity between the two societies. They have a new set of reference points as they think about how to live faithfully in a complex world. Another pastor accompanies young people on a pilgrimage to Tijuana, Mexico. In an interesting blend of home and pilgrimage, the young people assist in building homes for people who are living in the new shantytowns. When the young people return to their own homes, they have important stories to tell. During the adolescent years, it is essential to return to someone who will listen to the story of your pilgrimage and receive the blessings you have garnered for your "tribe." Like young people in many other churches, when I was a teenager I went every year to a weeklong camp with the youth group.
Each year when we returned to our church home, the Sunday evening service would make a place for each of us to have a few minutes in the pulpit when we would share with the congregation what had happened for us. At a deep level, we were learning that our home community cared about us and believed that we had something to say and to give that mattered.

Homelessness As Invitation to Contemplation

The transforming pilgrimage awaits teenagers — sometimes across the world, sometimes across town, and sometimes just outside the door on the streets of our cities and towns. According to Evelyn Parker, homeless youth are individuals under the age of eighteen who lack parental, foster, or institutional care and who survive on their own without a safe home environment. Estimates of the total number of homeless youth in our society range from 100,000 on any given night to two million per year. Every year, as a society, we bury 5,000 teenagers in unmarked graves. Evidence suggests that young people generally have positive attitudes toward family and home. They do not leave home on a whim or because of peer pressure or conflict with positive family values. Premature separation from family support arises from abuse, poverty, or parent/child conflict. Kathleen Sorenson, from Boys Town, says that some of the young people who come to Boys Town are sent by the courts, some by their families, and "one percent are 'pilgrims.' They come by themselves seeking refuge, asking for home."

The Power of Hope program does not have a particular focus on homeless youth, but in each city where it develops a weekend program, it seeks out the adults who are working with homeless youth. The program invites these adults and their young people to attend. This has been important both for the youth and for their adult leadership.

These reflections upon homeless youth in our society are not intended to set us on a guilt trip, but they may provide an occasion when we can ask as a people, as a "tribe," Why was the workshop on homeless youth the one that was canceled at the Princeton Forum? By what strategies might the response to that particular seminar be different next time? When you and I return home, each of us will decide whether to file or toss the printed program for this Forum. Whichever we choose, I encourage us to turn first to the page titled "Extended Seminars" and use the third listing, "In Search of Sanctuary: The Quest of Homeless Adolescents," as a focus for our personal or communal time of prayer and meditation. Again, the question I invite us to contemplate is not, "Why did I not...?" but rather, "Why did we not...?"

Breakfast for the Stranger

At the end of the special section on teens in The New York Times, there is a story featuring Brian Raymond, who lives in Bangor, Maine. 15 Again in a strange choice of words, Brian is described as "legally emancipated" from his family in his junior year after his manic-depressive father tried to commit suicide, his mother had an emotional breakdown, and his younger sister fell in with some notably uncaring friends (and is now in foster care). Brian has a friend named Zach, and Zach said to his mom, Mrs. Woodward, "Can Brian come live with us?" Mom hesitated. The article reports two reasons for her hesitation. One is that she cares about how she keeps her home. But that was not a major concern because Brian was a good friend who had been around a whole lot over the years. The refrigerator, however, was another matter.
Would it hold enough of the drinks that would be required for everybody when you have a house full of more than one teenager? Finally she said yes, and Brian moved in. Brian is now in his senior year, scored 1420 on his SATs, and is headed for one of the two colleges he has been admitted to. He has fit nicely into the Woodward home, where a study was converted into a bedroom, and a second refrigerator was purchased. It is the most home Brian has known. "In those places I rented," he says, "I just had a fridge and had to stay in the room. Here, I can sit down, watch TV. It's more like I live here; it's kind of cool."

The article concludes: "Mornings at the Woodwards' are hectic. Mr. Woodward heads off to his job at the newspaper office, and Mrs. Woodward to her work at the Federal Building. The boys are always late and eat their pancakes standing at the kitchen counter. One is enough for Zach; Brian eats four.

"Mr. Woodward once asked his wife why she bothered cooking pancakes every morning instead of letting them grab cereal. She responded, 'The boys will remember those pancakes the rest of their lives.'"

If I'm reading the same Scriptures that you are, we know that imitating Jesus may mean that occasionally we cook breakfast. It may also mean that as a part of youth ministry we offer hospitality to the stranger as our souls relearn the deep rhythms of home and pilgrimage.

Notes

1. Ann Powers, "Who Are These People, Anyway?" The New York Times, Wednesday, April 29, 1998, p. G1.
2. Note that for many youth who have suffered the divorce of their parents, the metaphors of home and pilgrimage carry a special meaning when they must regularly make, as it were, a pilgrimage from one home to another.
3. See Richard R. Niebuhr, "Pilgrims and Pioneers," Parabola 9.3, pp. 6-13.
4. See Sharon Daloz Parks, "Home and Pilgrimage: Companion Metaphors for Personal and Social Transformation," Soundings 72.2-3, pp. 297-315.
5. See Sharon Daloz Parks, "Faithful Becoming in a Complex World: New Powers, Perils, and Possibilities," in this volume.
6. See Mary Frances Kennedy Fisher, The Art of Eating (New York: World Publishing Co., 1954), p. 353.
7. Wendy M. Wright, Sacred Dwelling: A Spirituality of Family Life (New York: Crossroad, 1989), pp. 61-62.
8. Howard Thurman, "Mysticism and the Experience of Love," Pendle Hill Pamphlet #115 (Wallingford, PA: Pendle Hill Publications, 1961), p. 18.
9. Laurent A. Parks Daloz, Cheryl H. Keen, James P. Keen, and Sharon Daloz Parks, Common Fire: Leading Lives of Commitment in a Complex World (Boston: Beacon Press, 1996), p. 42.
11. See Carol Lakey Hess, Caretakers of Our Common House (Abingdon, 1997), ch. 6.
12. Daloz, Keen, Keen, and Parks, Common Fire, ch. 3.
14. See Ronald Marstin, Beyond Our Tribal Gods (Maryknoll, NY: Orbis Press, 1979), p. 37.
15. Michael Winerip, "He's Getting By with a Little Help from His Friends," The New York Times, Wednesday, April 29, 1998, p. G11.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,470
Q: Changing spaces into dashes in Plain-TeX

In Plain-TeX, how can I change a string that is stored in a macro so that each space in that string is turned into a dash? E.g. if \def\a{This little example}, I would like to have some procedure that, applied to \a, returns "This-little-example".

I know I can use \obeyspaces to do the trick on a literal string,

\def\a{This little example}

{\obeyspaces\let =-%
1. This is a test, only a test.

2. \a
}
\bye

yet that trick does not work on the string stored in macro \a (that is, the expansion of \a above is not modified, yet the first line does change into "1.-This-is-a-test,-only-a---test.").

I also tried changing the space's catcode as in

\begingroup
\catcode` =\active
\gdef\swapSpace{%
\catcode` =\active
\def {-}%
}
\endgroup

My test text...% plain text not to be changed...

{\swapSpace
3. My test text... It works, for a literal string...
}

{\swapSpace
4. \a Yet it fails for the macro above.
}
\bye

But that approach also fails, as \a is expanded after all the other things.

A: You can replace spaces in a macro already defined by another token (or token list).

\def\replspaces#1#2{\expandafter\replspacesA\expandafter#2#1 \end}
\def\replspacesA#1#2 #3{#2\ifx\end#3\else#1\afterfi{\replspacesA#1#3}\fi}
\def\afterfi#1#2\fi{\fi#1}

% usage: \replspaces\macro{what}
% example:
\def\a{This little example}
\replspaces\a{-}

The macro \replspaces is expandable. You can replace all spaces by another token by

\def\a{This little example}
\edef\a{\replspaces\a{?}}
\meaning\a % macro:->This?little?example

A: Your original would work if \a was defined when space is active, so

{\obeyspaces\let =-%
\def\a{This little example}
1. This is a test, only a test.

2. \a
}
\bye

or

{\obeyspaces
\gdef\a{This little example}
}

{\obeyspaces\let =-%
1. This is a test, only a test.

2. \a
}
\bye

If that is not possible, you could use \scantokens if using e-TeX (pdftex or etex binaries)

\def\a{This little example}

{\obeyspaces\let =-%
1. This is a test, only a test.

2. \scantokens\expandafter{\a}%
}
\bye

A: You can define your strings to appear in \swapSpace with \scantokens incorporated.

\def\specialdef#1#2{%
\def#1{%
\ifnum\catcode`\ =\active
\scantokens{#2\noexpand\empty}%
\else
#2%
\fi
}%
}
\begingroup
\catcode` =\active
\gdef\swapSpace{%
\catcode` =\active
\def {-}%
}
\endgroup

\specialdef\a{This little example}

My test text...% plain text not to be changed...
\a\ is good

{\swapSpace
3. My test text... It works, for a literal string...
}

{\swapSpace
4. \a is good
}
\bye
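For completeness, here is a minimal self-contained test file exercising the expandable \replspaces approach from the first answer; the file name, the \b macro and the \message check are illustrative additions, not part of the original thread.

% test.tex -- run with: tex test.tex (plain TeX). A sketch that repeats the
% definitions from the first answer so the file stands alone.
\def\replspaces#1#2{\expandafter\replspacesA\expandafter#2#1 \end}
\def\replspacesA#1#2 #3{#2\ifx\end#3\else#1\afterfi{\replspacesA#1#3}\fi}
\def\afterfi#1#2\fi{\fi#1}

\def\a{This little example}
\edef\b{\replspaces\a{-}}% \b now holds the dashed string, by expansion only
\message{\meaning\b}% terminal shows: macro:->This-little-example
\bye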
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,715
December 2021, Vol 27, No. 4

Pharmaceutical Sciences is a peer-reviewed, quarterly journal providing a forum for basic to clinical issues related to all areas of pharmaceutical sciences. It is the official journal of the Faculty of Pharmacy, published by TUOMS PRESS, Tabriz University of Medical Sciences.

- Platinum Open Access (no processing or publication fees)
- Indexed in Web of Science Core Collection and Scopus
- Average time to first decision in 2021: 20 days
- Acceptance rate in 2021: 18%
- Publications from 22 countries (5 continents) in 2021.

Updated April 2021

Submissions to Pharmaceutical Sciences are accepted through our online submission system. To streamline the process, the online submission system is designed to perform a series of automatic controls, promptly informing the user of any technical insufficiencies and directing them to the relevant instructions. To start submission, please create an account and log in.

The submitting author will take responsibility on behalf of all co-authors as the corresponding author of the submission, and is required to enter full details, including a working e-mail address, phone number and address, in their online profile. All correspondence, including, but not limited to, the results of the initial evaluation, the Editor's decision, and requests for revisions or proofreading, will be sent to the e-mail address of the corresponding author, which will be published with the article.

The submission or any subsequent revision is evaluated at the editorial office, and if corrections are necessary, it may be temporarily unsubmitted and returned to the authors, who are responsible for formatting their submission and providing the required information. Please see our editorial workflow for more information. For further help regarding submission, you may contact the editorial office. We use iThenticate software in the processing of the submissions.

The conditions of submission

Open access license and copyright: Currently, there are no submission or publication charges applicable to the articles submitted to or published in Pharmaceutical Sciences, the open access publication of which is supported by the Tabriz University of Medical Sciences Department of Vice Chancellor for Research. The authors retain the copyright to their work without restrictions, licensing it under the Creative Commons license 4.0 (CC-BY-NC).

Full names and email addresses of all authors, as well as their affiliations and institutional addresses, are requested during submission. Providing the unique identifier (ORCID or Scopus ID) of each co-author is optional, but preferred. Please see our editorial policies on authorship and unique identifiers for more information. If a collaboration group should be listed as an author, please list the group name as an author.

A cover letter is required for every submission. The authors will need to confirm the following conditions in the submission cover letter:

- That the submission is original, submitted solely to this journal, and not currently under consideration for publication or already published elsewhere, unless explained in the submission cover letter. See our editorial policies on duplicate publication.
- That no sentence is copied from other sources. See our editorial policies on plagiarism and text recycling.
- That the submitting author takes responsibility for the submission on behalf of all authors as the corresponding author.
- That all authors have reviewed, approved, and consented to the submission, and that they are accountable for all aspects of its accuracy and integrity in accordance with ICMJE criteria.

The submission cover letter should also include the following information, as well as any additional information requested in the instructions for the specific article type that the authors are submitting:

- An explanation of why the submitted work should be published in the journal (the novelty of the work).
- An explanation of any issues relating to journal policies.
- A declaration of any potential competing interests.
- The name of the particular special issue that the submission should be published in.

The authors may also suggest potential peer reviewers for their submission by providing a name, an institutional email address, and an ORCID or Scopus ID. Please see our editorial policies for more information on suggesting peer reviewers and the use of unique identifiers. The authors may also provide the details of anyone who they would prefer not to review their work. Intentionally providing falsified information, such as false names or email addresses, will result in rejection of the submission and may lead to further investigation in line with our misconduct policy.

Preparing the Submission

For general instructions, please see preparing the manuscript. Pharmaceutical Sciences publishes these article types:

Research article: Original work resulting from research, constituting complete studies that contain all relevant information. Prepare the manuscript as follows: a Title, a Structured Abstract, Key words, Introduction, Methods, Results, Discussion, Conclusion, References, Tables, Legend for figures, List of additional files.

Short communication: Original work, but less substantial than the regular research article, presenting preliminary results or results of immediate relevance. Prepare the manuscript as follows: a Title, a Structured Abstract, Key words, Introduction, Methods, Results, Discussion, Conclusion, References, Tables, Legend for figures, List of additional files.

Review: Narrative reviews or systematic reviews and/or meta-analyses on topics relevant to pharmaceutical sciences. Prepare the manuscript as follows: a Title, an Unstructured/Structured Abstract, Key words, Introduction, Subheadings in the manuscript as necessary, Conclusion, References, Tables, Legend for figures, List of additional files.

Case report: Systematic reports of interesting or rare cases of importance for the practice of professionals. Prepare the manuscript as follows: a Title, an Unstructured Abstract, Key words, Introduction, Case Report, Discussion, Conclusion, References, Tables, Legend for figures, List of additional files.

Commentary: Comments or concerns on specific subjects, overall or pertaining to items published in the journal; also, new or additional findings of an original nature. Prepare the letter as follows: a Title, Text, References.

Editorial: The Journal's editors or top researchers in pharmaceutical sciences (by invitation of the editor) write the editorial.

Title - A concise and informative title directed at the general reader. Lengthy systematic names and complicated/numerous chemical formulae should therefore be avoided where possible. Do not capitalize all words; only the first word and proper nouns should be capitalized.

Authors' names - Full names (First, Middle and Last) of all the authors of an article should be given and specified with superscript number(s) for the affiliation(s) (e.g., Mark Junior Smiths1).
The name of the corresponding author(s) should be specified with an asterisk after the name (e.g., Mark Junior Smiths*). Where the family name may be ambiguous (e.g., a double name), please indicate this clearly.

Affiliation - The affiliation of all the authors should be given and specified with a superscripted number before the address (e.g., 1 Faculty of ...).

Running title - A very short running title should be given.

Corresponding author - The full address, telephone and fax numbers (with country and area code) and email of the corresponding author(s) should be given.

The structured abstract (maximum 300 words) is to contain the following major subheadings: Background, Methods, Results, and Conclusion. The Background subheading reflects the background as well as the purpose of the study, that is, the hypothesis being tested. The Methods should include the setting for the study, the subjects (number and type), the treatment or intervention, and the type of statistical analysis. The Results include the outcome of the study and statistical significance if appropriate. The Conclusion states the significance of the results. Clinical trials should include the trial registration number on the last line of the abstract. A structured abstract is not necessary for review articles.

Three to six keywords are necessary for each submission. They could be selected from the list of MeSH terms. List key words in alphabetic order, all lower case, except where necessary.

The introduction contains a concise review of the subject area and the rationale for the study. More detailed comparisons to previous work and conclusions of the study appear in the Discussion section.

The methods section should describe in adequate detail the experimental subjects, their important characteristics, and the methods, apparatus, and procedures used so that other researchers can reproduce the experiment. When reporting experiments on human subjects, authors should indicate whether the procedures followed were in accordance with the ethical standards of the responsible committee on human experimentation (institutional and national) and with the Helsinki Declaration of 1975, as revised in 2008. If doubt exists whether the research was conducted in accordance with the Helsinki Declaration, the authors must explain the rationale for their approach and demonstrate that the institutional review body explicitly approved the doubtful aspects of the study. When reporting experiments on animals, authors should indicate whether the institutional and national guide for the care and use of laboratory animals was followed. The methods section must indicate that the protocol was reviewed by the appropriate institutional review body and that each subject in the project signed a detailed informed consent form.

Results should be presented in a logical sequence with reference to tables, figures, and illustrations as appropriate. If necessary, the results and discussion sections can be combined in a single section.

In the discussion, new and possible findings of the study should be emphasized, as well as any conclusions that can be drawn. The discussion should compare the present data to previous findings. Limitations of the experimental methods should be indicated, as should implications for future research. New hypotheses and clinical recommendations are appropriate and should be clearly identified. Recommendations, particularly clinical ones, may be included when appropriate.
The main question of the work should be very concisely stated, and the final conclusions of the study may be presented in a short "Conclusion" section.

Preparing references, equations, tables, figures, and additional files

To correctly prepare the references, equations, tables, figures, or additional files for a submission, please follow these guidelines:

- For instructions on formatting the citations of the submission, please see preparing references. EndNote software can be used to arrange references as a numbered list at the end of the manuscript, using our EndNote style. To do so, please first download our EndNote style (ZIP file) here, and then unzip and copy the EndNote style (ENS) file into the EndNote "Styles" folder on your computer, which should be accessible at Program Files [folder] > EndNote [folder] > Styles [folder]. Please use the following link to download the styles for EndNote: Styles for EndNote (to add the Pharmaceutical Sciences reference style in EndNote, please download the zip file).
- For correct formatting of formulas, please see preparing formulas or equations.
- Smaller tables that are considered integral to the manuscript can be pasted at the end of the manuscript file in A4 portrait or landscape format. Tables may also be uploaded separately. Please see preparing tables for more instructions.
- Figures must be submitted separately in a proper format (and also embedded in the manuscript to expedite the review process). Please see preparing figures for more instructions.
- Datasets, large tables, videos, or other information must be submitted separately as additional files, which will be published along with the article. Please see preparing additional files for more instructions.

The authors should have the required information below ready upon submission. The manuscript should not include this information, to ensure a blind peer review. Please see our editorial policies for more information regarding the peer review policy. The supporting information will be reviewed by the editor.

In an "Acknowledgments" section, the authors are required to acknowledge anyone who contributed to the submitted work who does not meet the criteria for authorship. It is obligatory to state any support with translating or editing by third parties, such as professional commercial writing/editing services. The authors should obtain permission to acknowledge from all those mentioned in the Acknowledgments section.

The authors are required to declare all sources of funding for the research reported. The role of the funding body in the design of the study, in the collection, analysis, and interpretation of data, and in writing the manuscript should be declared. Please see our editorial policies for further explanation of authorship criteria and acknowledgments.

The authors are required to declare, during submission, all financial and non-financial competing interests with regard to the publication of their work. Please see our editorial policies for more information on competing interests. If any of the authors are unsure whether they have a competing interest, they should contact the editorial office.

Authors of submissions reporting studies involving human participants, human data, or human tissues are required to provide the following information:

- A statement on ethics approval and consent (even where the need for approval was waived).
- The name of the ethics committee that approved the study and the committee's reference number, if appropriate.
Submissions reporting studies involving animals must include a statement on ethics approval. If the submission contains any individual person's data in any form, consent to publish must be obtained from that person or, in the case of children, from their parent or legal guardian. All presentations of case reports must have consent to publish. The authors may use their institutional consent form. The form is not to be sent on submission, but we may request to see a copy at any stage (including after publication). Please see our editorial policies for more information.

Pharmaceutical Sciences encourages authors to share the data and any other material associated with the methodology and the results of the submitted articles in an appropriate public repository, or as open access supplementary to the article. In line with ICMJE recommendations, a data sharing statement is required for manuscripts reporting the results of clinical trials, on whether and how the data will be available. For more information, please consult the ICMJE recommendations (http://www.icmje.org/recommendations/browse/publishing-and-editorial-issues/clinical-trial-registration.html).

Supplementary data (figures, tables, videos, etc.) is peer-reviewed material that cannot be included in the printed version for reasons of space or medium. It is posted on the freely available part of our website at the time of publication.

Finalizing submission

Before completing the process, the submitting author is required to review the submission proof (PDF), which will be automatically generated. The submission proof may be shared with co-authors for a final check and approval. The submitting author may go back and correct any parts as necessary, review the submission proof again, and then submit the work using the "Submit" button.

Revising the submission

Any subsequent revisions to the submission upon request from the editor will have to follow the same guidelines presented here. Upon submitting a revised submission, the authors will be guided to provide a re-submission letter, attaching the revision details based on the comments provided by the editor. The attached revision details should not include author information, to ensure blind peer review.

A graphical abstract must be included with the manuscript for display in the online table of contents. This graphic should be attractive to the reader and relevant to the manuscript title. Further, it should give the reader a prompt visual impression of the necessity of the manuscript, with no specific results.

- It should be simple yet informative.
- Colorful graphics are preferred.
- The originality of graphics is required.
- Use of graphics implying any bias toward or against organizations or individuals should be avoided.
- Graphics should be clear enough, and the labels used inside them should be readable even in a very small font.
- The graphical abstract file should be saved in TIFF format, at 300 dpi for color images and 1200 dpi for black-and-white images.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,122
\section{Introduction.}
Consider the following nonconvex and nonlinear composite minimization problem
\begin{equation*}
\text{(CM)} \qquad \mbox{minimize} \left\{ f\left(x\right) \equiv f_{0}\left(x\right) + h\left(F \left(x\right)\right) : \, x \in \rr^{n} \right\},
\end{equation*}
where
\begin{itemize}
\item $f_{0} : \rr^{n} \rightarrow \rr$ is a continuously differentiable function.
\item $F : \rr^{n} \rightarrow \rr^{m}$ ($m \leq n$) is a continuously differentiable mapping defined by
\begin{equation*}
F\left(x\right) := \left(f_{1}\left(x\right) , f_{2}\left(x\right) , \ldots , f_{m} \left(x\right)\right).
\end{equation*}
\item $h : \rr^{m} \to \erl$ is a proper and lower semi-continuous (lsc) function.
\end{itemize}
The structure of the composite model (CM) offers extreme versatility over the traditional nonlinear programming formulation. The smooth assumptions are in the mapping $F$ and the function $f_{0}$, while constraints, penalties and nonconvex/nonsmooth terms can be handled by the nonconvex and nonsmooth function $h$. The composite structure allows one to beneficially model a given problem and exploit data information, and essentially captures most optimization problems. This is illustrated below in Section \ref{ssec:examples}.

\medskip

The main objective of this paper is to lay out the main theoretical tools to achieve a deep understanding of augmented Lagrangian based methods and their fundamental properties in the nonconvex setting described by model (CM).

\medskip

The Augmented Lagrangian (AL) methodology has a long history which can be traced back to the works of Hestenes \cite{H1969}, Powell \cite{P1969} and Haarhoff and Buys \cite{HB1970} with the so-called multipliers method for problems with equality constraints. The AL algorithmic framework was a major breakthrough in nonlinear optimization providing the ground to fundamental algorithms and applications which have been extensively studied in the literature for various classes of problems. For classical results on the subject including many key results, extensions and closely related schemes such as the Proximal Method of Multipliers (PMM) \cite{R1976} and the Alternating Direction of Multipliers (ADM) \cite{FG1983,GLT1989}, we refer the reader to the monographs of Bertsekas \cite{B1982-B} and Bertsekas-Tsitsiklis \cite{BT1989-B} and references therein.

\medskip

Recently, there has been an intensive renewed interest in augmented Lagrangian based methods, and in particular within the ADM scheme. This surge of interest is mainly due to the emergence of new and modern applications arising in a broad diversity of application areas such as signal processing, sparse approximation in data analysis and machine learning. These problems share particular structures which often adapt well to ADM and lead to computationally attractive schemes. A typical prototype which has been extensively studied is when all the data is {\em convex} with $F$ being a {\em linear} mapping, and problem (CM) reduces to the {\em convex linear composite problem}:
\begin{equation*}
\text{\rm (CM-L)} \qquad \mbox{minimize} \left\{ f_{0}\left(x\right) + h\left(Fx\right) : \, x \in \rr^{n} \right\}.
\end{equation*}
The recent literature on ADM for this convex problem is voluminous and clearly it is not the purpose of this paper to review it here.
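For the reader's orientation, we recall the classical augmented Lagrangian attached to the standard splitting of (CM-L); this display is standard material added here for convenience, and the splitting variable $z$, the multiplier $y$ and the penalty $\rho$ are our notation.

% Classical augmented Lagrangian for the splitting of (CM-L):
%   minimize f_0(x) + h(z)  subject to  Fx = z,
% with multiplier y and penalty parameter rho > 0:
\begin{equation*}
\mathcal{L}_{\rho}\left(x , z , y\right) := f_{0}\left(x\right) + h\left(z\right) + \left\langle y , Fx - z \right\rangle + \frac{\rho}{2}\norm{Fx - z}^{2}.
\end{equation*}

ADM then alternately minimizes $\mathcal{L}_{\rho}$ with respect to $x$ and to $z$, and updates the multiplier via $y^{+} = y + \rho\left(Fx - z\right)$; the convergence theory of such schemes in the convex setting is extensively documented.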
See, for instance, the recent work \cite{ST2014} for an account of old and new results on the convergence analysis of various augmented Lagrangian schemes, as well as many relevant references to earlier works and to more modern and recent contributions in the convex setting.

\medskip

This work is a complete departure from the classical convex linear composite model. Indeed, in many of the modern applications alluded to above the optimization model turns out to be not only nonsmooth but also includes inherent nonlinearities which the nonlinear composite model (CM) conveniently captures. Unfortunately, while, as just mentioned, the analysis of Lagrangian based methods has been extensively studied in the convex case, the situation in the nonconvex setting is far from being well understood, and global analysis of Lagrangian methods for the general model (CM) remains scarce. In fact, only very recently some progress has been initiated in the nonconvex case, but {\em only for the linear} composite model (CM-L), see {\em e.g.}, \cite{LP2015} and references therein. Even in the simpler linear composite model, the situation is not trivial and the authors in \cite{LP2015} have to rely on various assumptions on the problem's data. Outside of studies on the linear composite model, we are not aware of any work attempting to fully understand Lagrangian based methods for the general nonlinear composite model (CM) considered here. The objective of the present work is to address this situation, and to develop the main theoretical tools to achieve a deeper understanding of Lagrangian based methods and their fundamental properties in the nonconvex setting described by the nonlinear composite model (CM).

\smallskip

Before outlining some details on our approach, main contributions and results, we first recall some of the major obstacles met in the study of Lagrangian methods by evoking three of the most salient theoretical issues:
\begin{enumerate}
\item {\em AL methods are non-feasible methods:} this is due to the very nature of the penalty approach used to construct an augmented Lagrangian. As a consequence, feasibility issues have to be handled with particular care, as they have a direct damaging impact on qualification conditions, as explained next.
\item {\em Failure of qualification conditions}: A major problem with non-feasible methods is that qualification conditions must hold in a larger sense in order to allow for the good behavior of the algorithm when the current point is far from the feasible set. Yet, for very simple constraints, for instance spherical constraints (see Example \ref{ex:sc} and Remark \ref{r:failure}), assuming a qualification condition everywhere is not a viable option.
\item {\em Oscillation issues}: AL methods are particularly well designed to handle problems having complex geometry, like for instance nonlinear inequality/equality constrained problems. A typical and difficult problem in this context is to tame oscillations of minimizing sequences\footnote{Similar difficulties occur in other approaches, see for instance, \cite{BP2015} for an illustration in the context of sequentially convex programming approaches, and \cite{AUS2015} in the context of an exact penalty approach.}. Moreover, AL methods have {\em min-max dynamics} and thus, by nature, the values taken by the augmented Lagrangian function alternately increase and decrease even if the sequence eventually converges. This oscillatory behavior makes the use and the design of Lyapunov functions particularly difficult.
\end{enumerate}

\medskip

One of the goals of this paper is to provide the reader with an original general Lagrangian methodology which can deal, all at once, with the above obstacles under general and mild assumptions on the problem's data. Let us briefly outline our exact contributions now.

\medskip

The first innovative feature of our approach is to introduce and to study a broad class of algorithms through sequences that we call {\em Lagrangian sequences}. At the heart of this methodology is the idea of turning an arbitrary descent method into a multiplier method. The rationale is simple: once a method or mechanism is chosen, it is implemented on the primal variable(s) of the augmented Lagrangian, while the multiplier variable is updated in the classical and straightforward fashion. An illustrative but very informative instance of this approach is the famous proximal method of multipliers (PMM) alluded to above, which is modeled through an augmented Lagrangian with an added proximal term and consists of performing a proximal step on the primal variable while the multiplier is updated as in the classical AL method (a schematic instance is sketched below).

\medskip

Based on the above methodology, we proceed and describe how we address the three points evoked above.

\medskip

To circumvent the qualification failures and the lack of knowledge of fundamental constants, we introduce the notion of {\em information zone}. It is a subset of the space containing the feasible set and on which Lipschitz continuity and qualification conditions are known to hold and are quantifiable by simple real numbers (Lipschitz constants and regularity modulus). Then we provide our methodology with an {\em adaptive regime} that aims at detecting this zone and at forcing the iterates to stay within the zone. The detection of the zone is made by dynamically tuning the penalization parameter of the augmented Lagrangian at a sufficiently high value. This approach is shown to identify the zone in finitely many steps and thus deals with points 1 and 2.

\medskip

Once the information zone is found, another crucial issue remains to be addressed: ruling out oscillations to ensure descent properties of the method; this is point 3 above. This is done by using once more the adaptive idea to detect an adequate Lyapunov function. At a technical level, this function is nonincreasing, but the rate of decrease is only controlled for one block of the primal sequence, which is a departure from classical analysis.

\medskip

The proposed novel approach and theoretical analysis developed in Sections \ref{Sec:LagCM} to \ref{Sec:proofs} allow us to eliminate the difficulties evoked above and to derive a generic Adaptive Lagrangian Based mUltiplier Method (ALBUM) for tackling the general nonconvex and nonlinear composite model (CM) which encompasses fundamental Lagrangian methods. This paves the way to derive convergence results, and in particular, global convergence results to a critical point of problem (CM) with semialgebraic data, by relying on the nonsmooth Kurdyka-{\L}ojasiewicz (KL) inequality \cite{L1963, K1998, BDL2006}. The potential of our results is demonstrated through the study of two major Lagrangian schemes whose convergence was never analyzed in the proposed general setting: the proximal multiplier method and the proximal alternating direction of multipliers scheme; this is done in Section \ref{sec:variants}, where we also consider some additional interesting variants. We end the introduction with some examples illustrating the versatility of model (CM).
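Before turning to the examples, here is the schematic PMM instance announced above, written for the equality-constrained case $h = i_{\{0\}}$, so that (CM) reads $\min \left\{ f_{0}\left(x\right) : \, F\left(x\right) = 0 \right\}$; the display is our illustration, with penalty $\rho > 0$ and proximal parameter $\lambda > 0$ (this notation is ours, and amsmath is assumed for the alignment).

% Sketch of the Proximal Method of Multipliers (PMM) on the equality
% constrained instance of (CM): given (x^k, y^k), rho > 0 and lambda > 0,
% minimize the augmented Lagrangian plus a proximal term in x, then update y.
\begin{align*}
x^{k + 1} & \in \operatorname*{argmin}_{x \in \rr^{n}} \left\{ f_{0}\left(x\right) + \left\langle y^{k} , F\left(x\right) \right\rangle + \frac{\rho}{2}\norm{F\left(x\right)}^{2} + \frac{1}{2\lambda}\norm{x - x^{k}}^{2} \right\}, \\
y^{k + 1} & = y^{k} + \rho F\left(x^{k + 1}\right).
\end{align*}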
\subsection{Examples of model (CM).} \label{ssec:examples} Below we give some examples which exhibit the versatility of model (CM). The first example describes various well-known and classical models in the nonlinear optimization literature, while the remaining examples describe models arising in some recent applications. \medskip \begin{example}[Nonlinear programming] \label{E:NLP} The standard nonlinear program with equality and inequality constraints: \begin{equation*} \mbox{(NLP)} \qquad \inf_{x \in \rr^{n}} \left\{ f_{0}\left(x\right) : \, f_{i}\left(x\right) \leq 0, \, i = 1 , 2 , \ldots , p, \,\, f_{i}\left(x\right) = 0, \, i = p + 1 , p + 2 , \ldots , m \right\}, \end{equation*} can be reformulated through the composite model (CM) by defining the separable model function $h\left(u\right) := \sum_{i = 1}^{m} h_{i}\left(u_{i}\right)$, where \begin{equation*} h_{i}\left(u_{i}\right) = i_{(-\infty , 0]}\left(u_{i}\right), \, i = 1 , 2 , \ldots , p, \quad \text{and} \quad h_{i}\left(u_{i}\right) = i_{\{0\}}\left(u_{i}\right), \, i = p + 1 , p + 2 , \ldots , m. \end{equation*} {\em Lagrangians and smooth penalties.} The standard Lagrangian associated to (NLP), as well as linear and quadratic penalty terms, can easily be reformulated through model (CM) with a separable model function $h$ and an adequate choice of $h_{i}$, $i = 1 , 2 , \ldots , m$. For instance, with $h_{i}\left(u_{i}\right) = y_{i}u_{i}$, $i = 1 , 2 , \ldots , m$, the standard Lagrangian of problem (NLP) is recovered. Likewise, the usual penalized counterpart of problem (NLP), given by \begin{equation*} \mbox{(P-NLP)} \qquad \inf \left\{ f_{0}\left(x\right) + \sum_{i = 1}^{p} \mu_{i}\max \left\{ 0 , f_{i}\left(x\right) \right\}^{2} + \sum_{i = p + 1}^{m} \mu_{i}\left|f_{i}\left(x\right) \right|^{2} \right\}, \, (\mu_{i} > 0), \end{equation*} is recovered through model (CM) with the obvious choices \begin{equation*} h_{i}\left(u_{i}\right) = \mu_{i}\max \left\{0 , u_{i} \right\}^{2}, \, i = 1 , 2 , \ldots , p , \quad \text{and} \quad h_{i}\left(u_{i}\right) := \mu_{i}\left|u_{i}\right|^{2}, \, i = p + 1 , p + 2 , \ldots , m. \end{equation*} Obviously, the classical augmented Lagrangian for (NLP) itself can easily be recovered from model (CM) as well, with an adequate piecewise quadratic choice of $h_{i}$, $i = 1 , 2 , \ldots , m$, for the inequality constraints. \smallskip {\em Nonsmooth $h$.} A classical nonsmooth model is the $\ell_{1}$-norm penalized problem for equality constraints ($p \equiv 0$ in (NLP)) given by \begin{equation*} \inf_{x \in \rr^{n}} \left\{ f_{0}\left(x\right) + \sum_{i = 1}^{m} w_{i}\left|f_{i}\left(x \right)\right| \right\}, \end{equation*} which is covered by model (CM) with $h_{i}\left(u_{i}\right) := w_{i}\left|u_{i}\right|$ for some $w_{i} > 0$, $i = 1 , 2 , \ldots , m$. \smallskip {\em Nonseparable nonsmooth $h$: mini-max problems.} Let $f_{0} \equiv 0$ and $h\left(u\right) := \max \left\{ u_{1} , u_{2} , \ldots , u_{m} \right\}$. Then, model (CM) produces the classical nonlinear mini-max problem \begin{equation*} \inf_{x \in \rr^{n}} \max_{1 \leq i \leq m} f_{i}\left(x\right). \end{equation*} \end{example} The above example exhibits the versatility of model (CM) for traditional NLP. In all these examples $h$ was convex.
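As a concrete illustration of the separable reformulations in Example \ref{E:NLP}, the following minimal Python sketch (all data hypothetical, assuming only NumPy) evaluates the composite objective $f_{0} + h \circ F$ for the quadratic-penalty choice of $h$ in (P-NLP); it is merely a toy sketch of the model itself, not of any method studied in this paper.
\begin{verbatim}
import numpy as np

# Toy instance of model (CM) with the quadratic-penalty model function h
# of (P-NLP): p inequality constraints followed by m - p equality
# constraints.  All functions below are illustrative placeholders.

def f0(x):                       # smooth objective (least squares, say)
    return 0.5 * np.sum(x ** 2)

def F(x):                        # constraint mapping F = (f_1, ..., f_m)
    return np.array([x[0] + x[1] - 1.0,             # f_1(x) <= 0 (i <= p)
                     x[0] ** 2 + x[1] ** 2 - 2.0])  # f_2(x)  = 0 (i >  p)

def h(u, p, mu):                 # separable model function of (P-NLP)
    ineq = np.maximum(0.0, u[:p]) ** 2              # mu_i max(0, u_i)^2
    eq = u[p:] ** 2                                 # mu_i |u_i|^2
    return np.sum(mu[:p] * ineq) + np.sum(mu[p:] * eq)

def composite_objective(x, p=1, mu=np.array([10.0, 10.0])):
    return f0(x) + h(F(x), p, mu)

print(composite_objective(np.array([0.5, 0.5])))    # f0 + penalty value
\end{verbatim}
We now give three examples with {\em nonconvex} $h$ which include a broad variety of fundamental problems arising in applications.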
\begin{example}[Sparsity constrained problems] These problems arise in many areas of application, for example, compressive sensing and machine learning; see {\em e.g.}, \cite{SNW11}. A basic model (see \cite{BE2013}) reads \begin{equation*} \min \left\{ f\left(x\right) : \, \norm{x}_{0} \leq s \right\}, \end{equation*} where $\norm{\cdot}_{0}$ stands for the usual counting function, {\em i.e.}, the number of nonzero coordinates of $x$, $s > 0$ is the desired sparsity level, and $f$ can be any smooth fidelity criterion ({\em e.g.}, least squares). Let $S := \left\{ x : \, \norm{x}_{0} \leq s \right\}$. Then, the above problem is a special case of model (CM) with $f_{0}\left(x\right) \equiv f\left(x\right)$, $F \left(x\right) \equiv x$ and $h$ the nonconvex function given by the indicator of the closed set $S$, {\em i.e.}, $h\left(u\right) \equiv i_{S}\left(u\right)$. \medskip Matrix rank minimization problems can be similarly formulated in the space of symmetric matrices using a constraint of the form $\mathrm{rank}(x) \leq s$. \medskip Moreover, nonconvex {\em penalized approximations} of the following form have also been considered and found useful (see, {\em e.g.}, \cite{LT14} and references therein): \begin{equation*} \min \left\{ f\left(x\right) + \rho\sum_{i = 1}^{n} \varphi\left(\left|x_{i}\right|\right) : \, x \in \rr^{n} \right\}, \quad (\rho > 0 \; \mbox{is a penalty parameter}), \end{equation*} where $\varphi$ is a concave (increasing) function on $\rr_{+}$ used to approximate the $l_{0}$ quasi-norm. A typical example is the $l_{p}$ quasi-norm with $\varphi\left(t\right) := t^{p}$, $0 < p < 1$, and model (CM) covers this case as well, with an obvious identification for the nonconvex function $h$. \end{example} \begin{example}[Matrix minimization on Stiefel manifolds] \label{ex:sc} Optimization problems with matrix orthogonality constraints arise in many applications of science and engineering ({\em e.g.}, polynomial optimization, combinatorial optimization, eigenvalue problems, sparse PCA, matrix rank minimization, etc.; see \cite{EAS1998}). A basic problem reads as: \begin{equation*} \text{(O)} \quad \min \left\{ \Psi\left(X\right) : \, X^{T}X = I, \, X \in \rr^{n \times p} \right\}, \end{equation*} where $\Psi : \rr^{n \times p} \rightarrow \rr$ is a smooth function (often quadratic), and $I$ stands for the $p \times p$ identity matrix. The feasible set ${\cal S}_{n,p} := \left\{ X \in \rr^{n \times p} : \, X^{T}X = I \right\}$ is known as the \textit{Stiefel manifold}, which for $p = 1$ reduces to the unit-sphere manifold ${\cal S}_{n,1} \equiv {\cal S}_{n} = \left\{ x \in \rr^{n} : \, \norm{x}_{2} = 1 \right\}$. Clearly, with $h$ being the nonconvex function given by the indicator of the closed set ${\cal S}_{n,p}$, problem (O) can easily be seen as a special case of model (CM), with the obvious identification for $f_{0}$ and $F$ in the space of real matrices $\rr^{n \times p}$. \end{example} \begin{example}[Nonconvex feasibility] \label{feas} Let $S_{1} , S_{2} , \ldots , S_{p}$ (for $p \geq 2$) be nonempty and closed subsets of $\rr^{n}$. The nonconvex feasibility problem consists in finding a point in the intersection $\displaystyle \cap_{i = 1}^{p} S_{i}$. These types of problems abound in many applications, such as phase retrieval, network sensor localization or protein conformation; see {\em e.g.}, \cite{HL2013} for some recent developments.
One standard way to tackle the feasibility problem is simply to reformulate it as an optimization problem: \begin{equation*} \min \left\{ \frac{1}{2\left(p - 1\right)}\sum_{i = 2}^{p} \norm{x_{1} - x_{i}}^{2} + \sum_{i = 1}^{p} i_{S_{i}}\left(x_{i}\right) : \, \left(x_{1} , x_{2} , \ldots , x_{p}\right) \in \rr^{n \times p} \right\}. \end{equation*} Observe that ${\bar x} \in \displaystyle\cap_{i = 1}^{p} S_{i}$ if and only if the objective value of the above optimization problem at $\left({\bar x} , {\bar x} , \ldots , {\bar x}\right) \in \rr^{n \times p}$ is zero. \medskip Choosing $\rr^{n \times p}$ as the base space, setting $f_{0}\left(x_{1} , x_{2} , \ldots , x_{p} \right) = \left(2\left(p - 1\right)\right)^{-1}\sum_{i = 2}^{p} \norm{x_{1} - x_{i}}^{2}$ (which is obviously a $C^{1 , 1}$ function), $F\left(x_{1} , x_{2} , \ldots , x_{p}\right) = \left(x_{1} , x_{2} , \ldots , x_{p}\right)$ and $h\left(x_{1} , x_{2} , \ldots , x_{p}\right) = \sum_{i = 1}^{p} i_{S_{i}}\left(x_{i}\right)$, we see that the above optimization problem fits our general model (CM). \end{example} \medskip \noindent {\bf Notations.} For any vector $w \in \real^{d}$, the standard Euclidean norm is simply denoted by $\norm{w}$. Unless otherwise stated, for the subdifferential operators $\hat\partial$, $\partial$ and $\partial^{\infty}$ and other objects coming from variational analysis, we adopt the notations and definitions of the monograph by Rockafellar and Wets \cite{RW1998-B}. \section{The Lagrangian for nonlinear composite problems.} \label{Sec:LagCM} This section outlines the first steps toward the generic algorithm we develop and analyze in this paper. We define the augmented Lagrangian associated to problem (CM), together with the basic qualification condition and assumptions; in particular, we introduce the fundamental new concept of {\em information zone}, which plays a central role in the forthcoming analysis. \subsection{Lagrangian and qualification condition.} \label{s:lag} In analogy to standard NLP, one can construct a natural Lagrangian for problem (CM) as follows. We first reformulate problem (CM) in the equivalent split form: \begin{equation*} \mbox{(CM)} \qquad \inf \left\{ f_{0}\left(x\right) + h\left(u\right) : \, u = F\left(x\right), \, \left(x , u\right) \in \rr^{n} \times \rr^{m} \right\}. \end{equation*} For this abstract equality constrained reformulation, the classical {\em Lagrangian} is defined by $\Lag : \rr^{n} \times \rr^{m} \times \rr^{m} \to \erl$ via \begin{equation*} \Lag\left(x , u , y\right) \equiv f_{0}\left(x\right) + h\left(u\right) + \act{y , F\left(x\right) - u}. \end{equation*} An {\em augmented Lagrangian} is a quadratic penalized version of the Lagrangian: \begin{align} \Laug_{\rho}\left(x , u , y\right) & := \Lag\left(x , u , y\right) + \frac{\rho}{2}\norm{F\left(x \right) - u}^{2} \nonumber \\ & = f_{0}\left(x\right) + h\left(u\right) + \act{y , F\left(x\right) - u} + \frac{\rho}{2}\norm{F \left(x\right) - u}^{2}, \label{D:AugLAc} \end{align} where $\rho > 0$ is a penalty parameter. \medskip To ensure the well-posedness of the algorithms to come, throughout this paper we assume: \begin{equation} \label{WellPosed} \inf_{x , u} \Laug_{\rho}\left(x , u , y\right) > -\infty \,\,\, \text{for any fixed} \,\, y \in \rr^{m}. \end{equation} We assume below that model (CM) satisfies a standard qualification condition, which we express in the compact form provided by variational analysis \cite[Chapter 10, pp. 428--430]{RW1998-B}.
We denote by $\nabla F\left(x\right) \in \rr^{m \times n}$ the Jacobian matrix of $F$, whose rows are given by the gradient vectors $\left[\nabla f_{i}\left(x\right)\right]_{i = 1}^{m}$. \begin{assumption} \label{AssumptionA} The following constraint qualification holds for problem (CM), \begin{equation*} \mbox{[CQ]} \qquad \nabla F\left(x\right)^{T}y = 0 , \quad y \in \partial^{\infty} h\left(F \left(x\right)\right) \, \Longrightarrow \, y = 0. \end{equation*} \end{assumption} For the classical NLP case, which can be obtained from model (CM) as described in Example \ref{E:NLP}, the condition [CQ] reduces to the classical Mangasarian-Fromovitz constraint qualification, see {\em e.g.}, \cite{B1982-B}. \medskip The condition [CQ] is not only essential to ensure smoothness and regularity of the constraint set; at a technical level, it also provides a chain rule for the objective function of model (CM), which allows us to derive the first order necessary optimality conditions for this model. \begin{definition}[First order optimality condition] \label{D:Opt} Let $F : \rr^{n} \rightarrow \rr^{m}$ be a continuously differentiable mapping, and let $h : \rr^{m} \rightarrow \erl$ be a proper and lsc function. If $x$ is a local minimizer of problem {\rm (CM)} satisfying Assumption \ref{AssumptionA}, then there exists $y \in \rr^{m}$ such that \begin{equation*} \nabla f_{0}\left(x\right) + \nabla F\left(x\right)^{T}y = 0 \quad \mbox{with} \quad y \in \partial h\left(F\left(x\right)\right). \end{equation*} \end{definition} The set of critical points of a function $\psi$ is denoted by $\crit \psi$. For problem (CM) with the objective function $f$, we have \begin{equation} \label{crit-f} \crit f = \left\{ x \in \rr^{n} : \, 0 \in \nabla f_{0}\left(x\right) + \nabla F\left(x\right)^{T} \partial h\left(F\left(x\right)\right) \right\}. \end{equation} \subsection{The information zone.} Lagrangian based methods require the simultaneous handling of penalty parameters, constants, and qualification conditions, which is a delicate matter. An important aspect of this work is to address these issues. \medskip Augmented Lagrangian methods are based on relaxing the classical Lagrangian, and are therefore, by nature, unfeasible methods. Measures of unfeasibility of such methods are naturally connected to the ``looseness" of the relaxation: the looser the relaxation, the more unfeasible the method, and over-relaxation could even result in absurd behavior. \medskip The augmented Lagrangian $\Laug_{\rho}$ as given in \eqref{D:AugLAc} is \begin{equation*} \Laug_{\rho}\left (x , u , y\right) := f_{0}\left(x\right) + h\left(u\right) + \act{y , F\left(x \right) - u} + \frac{\rho}{2}\norm{F\left(x\right) - u}^{2}, \,\, \mbox{ with }\rho > 0. \end{equation*} In this context the looseness/sharpness of the relaxation is embodied in the penalty parameter $\rho$, which is used to penalize the constraint $F\left(x\right) = u$ in the augmented Lagrangian $\Laug_{\rho}$. At an analytic level this penalization reflects the fact that for a fixed triple $\left(x , u , y\right)$ one has \begin{equation*} \lim_{\rho \rightarrow +\infty} \Laug_{\rho}\left(x , u , y\right) = \begin{cases} f_{0}\left(x\right) + h\left(F\left(x\right)\right), & \mbox{ if } F\left(x\right) = u, \\ +\infty, & \text{ otherwise,} \end{cases} \end{equation*} which amounts, in some sense, to the convergence of $\Laug_{\rho}$ to $\Lag$ as $\rho \rightarrow + \infty$.
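To make the role of the penalty parameter concrete, the following minimal Python sketch (a hypothetical one-dimensional instance with a finite-valued model function $h$) evaluates $\Laug_{\rho}$ at a point where $F\left(x\right) \neq u$, and displays the blow-up predicted by the limit above.
\begin{verbatim}
# Toy one-dimensional sketch of the augmented Lagrangian (all data
# hypothetical): as rho grows, the splitting gap F(x) - u is penalized
# ever more strongly, in line with the pointwise limit displayed above.

def f0(x):
    return 0.5 * (x - 1.0) ** 2

def F(x):
    return x

def h(u):
    return abs(u)            # a simple finite model function

def aug_lagrangian(x, u, y, rho):
    gap = F(x) - u
    return f0(x) + h(u) + y * gap + 0.5 * rho * gap ** 2

x, u, y = 0.3, 0.0, 0.1      # a point with F(x) != u
for rho in (1.0, 10.0, 100.0, 1000.0):
    print(rho, aug_lagrangian(x, u, y, rho))   # grows without bound
\end{verbatim}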
\medskip A major drawback of such unfeasible methods, easily guessed from the above, is that they generate points that might be out of control in the sense that: \begin{itemize} \item[--] constraint qualification conditions may fail, \item[--] assumptions on the problem's data, such as global Lipschitz constants of the various objects involved, may become unknown or out of reach. \end{itemize} \medskip On the other hand, assuming a global control is very demanding and could be unrealistic in practice. \medskip To remedy these obstacles all at once, our approach is twofold: we first define an information zone, denoted by $\Zone$, to be a region on which regularity is under control and constants are known. Second, we provide a generic Lagrangian scheme, described below, with an extra adaptive search designed to reach the information zone\footnote{As we shall soon see, the adaptive regime also allows for dynamic adjustment of the step-sizes to other geometrical features.}. Let $\dom{h} = \left\{ u \in \rr^{m} : \, h\left(u\right) < \infty \right\}$, which is assumed to be nonempty and closed. Then the feasible set of problem (CM) is defined by \begin{equation*} {\cal F} = \left\{ x \in \rr^{n} : \, F\left(x\right) \in \dom h \right\}. \end{equation*} \begin{definition}[Information zone] \label{def:ifoz} Given the feasible set ${\cal F}$ for problem (CM), an information zone is a subset $\Zone$ of $\rr^{n}$ such that there exists ${\bar d} \in \left(0 , +\infty\right]$ for which \begin{equation} \label{zone} \Zone \supset \left\{ x \in \rr^{n} : \, \dist\left(F\left(x\right) , \dom h\right) \leq {\bar d} \, \right\} \supset {\cal F}. \end{equation} \end{definition} The information zone is an enlargement of the feasible set ${\cal F}$. It should be noted that the information zone $\Zone$ depends on the parameter ${\bar d}$; for simplicity of exposition, this dependence is not explicitly mentioned in the forthcoming sections. In the next definition we recall a fundamental and classical regularity assumption (see, {\em e.g.}, Milnor \cite{M31}). \begin{definition}[Uniform regularity] \label{def:regul} Let $\Omega$ be an open subset of $\rr^{n}$, $F : \Omega \rightarrow \rr^{m}$ be a continuously differentiable mapping, and let $S$ be a nonempty subset of $\Omega$. We say that $F$ is uniformly regular on $S$ with constant $\gamma > 0$ if the following holds: \begin{equation*} \norm{\nabla F\left(x\right)^{T}v} \geq \gamma\norm{v}, \,\, \forall \, x \in S, \,\, v \in \rr^{m}. \end{equation*} \end{definition} \begin{remark} \label{r:param1} For a given $x \in \Omega$, asserting that \begin{equation*} \gamma(F,x) = \min\left\{ \norm{\nabla F\left(x\right)^{T}v} : \, \norm{v} = 1 \right\}, \end{equation*} is nonzero is equivalent to the fact that $\nabla F\left(x\right)$ is surjective, or that $\nabla F\left(x \right)\nabla F\left(x\right)^{T}$ is positive definite. In nonlinear optimization it is also known as the Mangasarian-Fromovitz condition at~$x$. Geometrically, it means that the set $\left\{ y \in U : \, F\left(y\right) = F\left(x\right) \right\}$ is a $C^{1}$ manifold for any sufficiently small open neighborhood $U$ of $x$. \medskip Note also that \begin{equation} \label{eigen} \gamma \equiv \gamma(F,x) = \sqrt{\lambda_{\min}\Big(\nabla F\left(x\right)\nabla F\left(x \right)^{T}\Big)}, \end{equation} where $\lambda_{\min} (A)$ denotes the smallest eigenvalue of a real symmetric matrix $A$. \end{remark}
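For moderate dimensions, the modulus $\gamma\left(F , x\right)$ of Remark \ref{r:param1} is directly computable from the Jacobian via \eqref{eigen}. The following minimal Python sketch (a hypothetical two-constraint mapping, assuming NumPy) illustrates this, including the degeneracy at a point where $\nabla F$ loses rank, much as in the spherical example discussed below.
\begin{verbatim}
import numpy as np

# Hypothetical mapping F(x) = (||x||^2, x_1) from R^n to R^2; gamma(F, x)
# is computed as sqrt(lambda_min(JF(x) JF(x)^T)), cf. (eigen).

def jacobian_F(x):
    J = np.zeros((2, x.size))
    J[0, :] = 2.0 * x        # gradient of f_1(x) = ||x||^2
    J[1, 0] = 1.0            # gradient of f_2(x) = x_1
    return J

def gamma(x):
    J = jacobian_F(x)
    lam_min = np.min(np.linalg.eigvalsh(J @ J.T))
    return np.sqrt(max(lam_min, 0.0))

print(gamma(np.array([1.0, 0.5])))   # positive: F is regular here
print(gamma(np.zeros(2)))            # zero: regularity degenerates at 0
\end{verbatim}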
\subsection{Basic assumptions for model (CM).} We introduce the following essential assumptions. \begin{assumption} \label{AssumptionB} Given an information zone $\Zone$, we assume that: \begin{itemize} \item[$\rm{(i)}$] $F$ is uniformly regular over $\Zone$ with constant $\gamma$, \item[$\rm{(ii)}$] $\nabla F$ is $L(F)$ Lipschitz continuous over $\Zone$, \item[$\rm{(iii)}$] $\nabla f_{0}$ is $L(f_{0})$ Lipschitz continuous over $\Zone$. \end{itemize} \end{assumption} \begin{remark} \label{r:info} \begin{itemize} \item[(a)] Naturally, the Lipschitz continuity and the uniform regularity are not required on the whole space $\rr^{n}$, but only on the information zone $\Zone$. This is a departure from the usual setting. \item[(b)] When $\nabla f_{0}$ is known to be Lipschitz continuous on the whole space $\rr^{n}$, and the mapping $F$ is assumed to be linear, {\em i.e.}, $F\left(x\right) = Fx$ for some matrix $F \in \rr^{m \times n}$ with full row rank, then Assumption \ref{AssumptionB} holds with $\Zone \equiv \rr^{n}$ ({\em i.e.}, ${\bar d} = +\infty$) and $FF^{T} \succeq \gamma^{2} I_{m}$, where $\gamma = \sqrt{\lambda_{\min}(FF^{T})} > 0$. \end{itemize} \end{remark} \medskip Let us illustrate the concept of the information zone on a simple but fundamental and emblematic situation (\emph{cf.\ } Example \ref{ex:sc}). \begin{example}[Spherical constraints] \label{cex} Assume that $F\left(x\right) = \norm{x}^{2}$ and $h = i_{\{ 1 \}}$. For simplicity we also assume that $\nabla f_{0}$ is globally Lipschitz continuous. \smallskip One has $\nabla F\left(x\right) = 2x$ and thus, for a fixed $x$, $\gamma\left(F , x\right) = 2\norm{x}$. Take $r_{1} \in \left(0 , 1\right)$, and define $\Zone = \left\{ x \in \rr^{n} : \, r_{1} \leq \norm{x} \right\}$. We see that $F$ is uniformly regular on $\Zone$ with constant $2r_{1}$ and that $\nabla F$ is $2$-Lipschitz continuous. Hence $\Zone$ can be chosen as an information zone as long as we show that \eqref{zone} holds true. Take ${\bar d} = 1 - r_{1}^{2}$; since $\dist\left(F\left(x\right) , \dom h\right) = \left|\norm{x}^{2} - 1\right|$, the inequality $\left|\norm{x}^{2} - 1\right| \leq {\bar d}$ implies, in particular, that $1 - \norm{x}^{2} \leq 1 - r_{1}^{2}$, that is, $\norm{x} \geq r_{1}$, and hence $x \in \Zone$. Note that $0 \notin \Zone$ and that $\rr^{n}$ could not be an acceptable choice for an information zone because of the degeneracy of $\nabla F$ at $\left\{ 0 \right\}$. \end{example} \begin{remark}[Systematic failure of global CQ with compact equality constraints] \label{r:failure} The preceding example reveals a simple and systematic phenomenon which strongly motivates the use of an information zone. Consider a $C^{1}$ function $F : \rr^{n} \rightarrow \rr$ such that the set $[F \leq 0]$ is compact and $\mbox{int } [F \leq 0] = [F < 0] \neq \emptyset$. Then, necessarily, there exists $x^{\ast}$ such that $\nabla F\left(x^{\ast}\right) = 0$. Indeed, take $x^{\ast}$ to be a minimizer of $F$ over the compact set $[F \leq 0]$; since $[F < 0]$ is nonempty, the minimal value is negative, so that $x^{\ast}$ lies within the open set $[F < 0]$, whence $\nabla F\left(x^{\ast}\right) = 0$. This shows that, in general, it is not possible to have $\Zone = \rr^{n}$. \end{remark} \section{Adaptive Lagrangian based multiplier method.} \label{Sec:ALBUM} $ $ \medskip \begin{center} \fbox{From now on Assumptions \ref{AssumptionA} and \ref{AssumptionB} form our blanket assumptions.} \end{center} \medskip As explained previously, difficult obstacles are faced in both the design and the study of Lagrangian based methods: lack of descent, and above all, feasibility issues.
The adaptive idea we develop here is precisely meant to put us in a position where these issues are treated in a dynamical fashion: both the information zone and the ``energy functional" ${\cal E}_{\beta}$, which we introduce now, come into play. \subsection{Lagrangian and a Lyapunov function.} We shall need to work with an auxiliary function which is very similar to the augmented Lagrangian $\Laug_{\rho}$ (defined in \eqref{D:AugLAc}). This is a classical approach, often called the ``Lyapunov" methodology. It will reveal the optimizing property of the generic Lagrangian scheme we introduce next. \medskip Let $\beta > 0$ and $w \in \rr^{n}$; we consider the Lyapunov function defined by \begin{equation} \label{D:Lyap} {\cal E}_{\beta}\left(x , u , y , w\right) := \Laug_{\rho}\left(x , u , y\right) + \beta\norm{x - w}^{2}. \end{equation} Below, we record the relationships between the critical point sets of the three relevant functions $f$, $\Laug_{\rho}$ and ${\cal E}_{\beta}$. These relations already suggest the pivotal role to be played by ${\cal E}_{\beta}$. Recall that condition [CQ] is always assumed, {\em i.e.}, Assumption \ref{AssumptionA} holds. \begin{proposition}[Critical points relationships] \label{P:Crit} Let $x \in \rr^{n}$ and $u , y \in \rr^{m}$. The following implications hold: \begin{equation*} \left(x , u , y , x\right) \in \crit{{\cal E}_{\beta}} \, \Longrightarrow \, \left(x , u , y \right) \in \crit{\Laug_{\rho}} \, \Longrightarrow \, x \in \crit{f}, \end{equation*} for all $\beta , \rho > 0$. \end{proposition} \begin{proof} The result follows easily from standard subdifferential calculus rules. Indeed, from the definition of ${\cal E}_{\beta}$ (see \eqref{D:Lyap}) we have that $\left(x , u , y , w\right) \in \crit{{\cal E}_{\beta}}$ if and only if \begin{equation} \label{critE} \hspace{-0.07in} \left(0 , 0 , 0 , 0\right) \in \left(\nabla_{x} \Laug_{\rho}\left(x , u , y \right) + 2\beta\left(x - w\right) , \partial_{u} \Laug_{\rho}\left(x , u , y\right) , \nabla_{y} \Laug_{\rho}\left(x , u , y\right) , 2\beta\left(w - x\right)\right). \end{equation} On the other hand, using the definition of $\Laug_{\rho}$ (see \eqref{D:AugLAc}) we obtain \begin{align} \nabla_{x} \Laug_{\rho}\left(x , u , y\right) & = \nabla f_{0}\left(x\right) + \nabla F\left(x \right)^{T}\left(y + \rho\left(F\left(x\right) - u\right)\right), \label{critAc1} \\ \partial_u \Laug_{\rho}\left(x , u , y\right) & = \partial h\left(u\right) + \rho\left(u - F \left(x\right)\right) - y, \label{critAc2} \\ \nabla_{y} \Laug_{\rho}\left(x , u , y\right) & = F\left(x\right) - u. \label{critAc3} \end{align} Therefore, taking $w = x$ in \eqref{critE}, the first implication in the proposition follows. The second implication follows by noticing that with $\left(x , u , y\right) \in \crit{\Laug_{\rho}}$, the three relations \eqref{critAc1}, \eqref{critAc2} and \eqref{critAc3} reduce to $0 = \nabla f_{0} \left(x\right) + \nabla F\left(x\right)^{T}y$ and $0 \in \partial h\left(F\left(x\right)\right) - y$. Hence, using Definition \ref{D:Opt}, we obtain that $x \in \crit{f}$. This completes the proof. \end{proof} \subsection{A generic algorithm: {\bf ALBUM}.} \label{s:algo} In order to describe the forthcoming generic scheme, we first need to introduce a primal black-box map which governs the mechanism of the global convergence methodology to be developed in Section \ref{SSec:Meth}.
\begin{definition}[Lagrangian algorithmic map] \label{D:LagAlg} Consider the optimization model (CM) and its associated augmented Lagrangian $\Laug_{\rho}$ defined in \eqref{D:AugLAc}. Let $\left(x , u , y\right) \in \real^{n} \times \real^{m} \times \real^{m}$ be any given triple. A \emph{primal black-box map} ${\cal A}_{\rho}$ generates a couple $\left(x^{+} , u^{+}\right)$ by \begin{equation*} \left(x^{+} , u^{+}\right) \in {\cal A}_{\rho}\left(x , u , y\right). \end{equation*} A primal black-box map ${\cal A}_{\rho}$ is called a \emph{Lagrangian algorithmic map} if there are two positive constants $a$ and $b$ such that \begin{equation*} \mbox{(i)} \quad \frac{a}{2}\norm{x^{+} - x}^{2} + \Laug_{\rho}\left(x^{+} , u^{+} , y\right) \leq \Laug_{\rho}\left(x , u , y\right), \end{equation*} and \begin{equation*} \hspace{-0.7in} \mbox{(ii)} \hspace{0.2in} \norm{\nabla_{x} \Laug_{\rho}\left(x^{+} , u^{+} , y \right)} \leq b\norm{x^{+} - x}. \end{equation*} \end{definition} \medskip Thus, once we choose the Lagrangian algorithmic map ${\cal A}_{\rho}$, this choice fully determines the constants $a$ and $b$, which play an important role in the generic algorithm outlined below. Note that these constants might depend on the problem's data ({\em e.g.}, Lipschitz constants, the uniform regularity constant) and/or on algorithmic constants ({\em e.g.}, proximal/penalty parameters). We defer to Section~\ref{sec:variants} the presentation of two fundamental instances of Lagrangian algorithmic maps. \medskip The proposed generic adaptive algorithm aims at forcing $x^{k}$ to enter the information zone, a minimal requirement if we are to hope for good behavior of our unfeasible schemes. \medskip {\center\fbox{\parbox{16cm}{{\bf Adaptive Lagrangian-Based mUltiplier Method -- ALBUM} \begin{enumerate} \item[1.] Input: ${\cal A}_{\rho}$ a Lagrangian algorithmic map. \item[2.] Initialization: Fix $\delta , \rho_{0} > 0$, choose $\tau \in \left(0 , \frac{a}{2}\right)$, and start with any $\left(x^{0} , u^{0} , y^{0}\right) \in \rr^{n} \times \rr^{m} \times \rr^{m}$. \item[3.] For each $k = 0 , 1 , \ldots$ generate a sequence $\left\{ \left(x^{k} , u^{k} , y^{k}\right) \right\}_{k \in \nn}$ as follows: \begin{enumerate} \item[3.1.] Primal step \begin{equation} \label{GenericAdap:PriStep} \left(x^{k + 1} , u^{k + 1}\right) \in {\cal A}_{\rho_{k}}\left(x^{k} , u^{k} , y^{k}\right). \end{equation} \item[3.2.] Multiplier step \begin{equation} \label{GenericAdap:MultiStep} y^{k + 1} = y^{k} + \rho_{k}\left(F\left(x^{k + 1}\right) - u^{k + 1} \right). \end{equation} \item[3.3.] Adaptive step: set $\beta_{k} := \frac{2b^{2}}{\rho_{k}\gamma^{2}}$. If $x^{k + 1} \notin \Zone$ or \begin{equation} \label{GenericAdap:AdapStep} \tau\norm{x^{k + 1} - x^{k}}^{2} > {\cal E}_{\beta_{k}}\left(x^{k} , u^{k} , y^{k} , x^{k - 1}\right) - {\cal E}_{\beta_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1} , x^{k}\right), \end{equation} set $\rho_{k + 1} = \rho_{k} + \delta$. Otherwise, set $\rho_{k + 1} = \rho_{k}$. \end{enumerate} \end{enumerate}}}} \vspace{0.2in} The relations between $a$, $b$, the penalty parameter sequence $\seq{\rho}{k}$ and the other data constants will be made more precise when we develop our analytic framework in Section \ref{Sec:KeyL}.
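To fix ideas, here is a schematic Python sketch of the above scheme. It merely mirrors Steps 3.1--3.3 under our blanket assumptions and is not meant as an implementation: the Lagrangian algorithmic map ${\cal A}_{\rho}$, the mapping $F$, the zone membership test and the Lyapunov function ${\cal E}_{\beta}$ are abstract user-supplied callables, and all names and default values are hypothetical.
\begin{verbatim}
import numpy as np

# Schematic sketch of ALBUM.  A_rho(x, u, y, rho) returns the primal pair
# (x+, u+); in_zone(x) tests membership in the information zone; E computes
# the Lyapunov function E_beta(x, u, y, w) for given rho and beta, with
# beta_k = 2 b^2 / (rho_k gamma^2) as in Step 3.3.

def album(A_rho, F, in_zone, E, x, u, y,
          rho=1.0, delta=1.0, tau=0.1, b=1.0, gamma=1.0, max_iter=100):
    x_prev = x.copy()          # plays the role of x^{k-1} (here x^{-1} := x^0)
    for _ in range(max_iter):
        beta = 2.0 * b ** 2 / (rho * gamma ** 2)
        x_new, u_new = A_rho(x, u, y, rho)        # Step 3.1: primal step
        y_new = y + rho * (F(x_new) - u_new)      # Step 3.2: multiplier step
        # Step 3.3: adaptive step -- raise rho if the iterate left the
        # information zone or the surrogate descent test on E fails
        gap = (E(x, u, y, x_prev, rho, beta)
               - E(x_new, u_new, y_new, x, rho, beta))
        if (not in_zone(x_new)) or tau * np.sum((x_new - x) ** 2) > gap:
            rho += delta
        x_prev, x, u, y = x, x_new, u_new, y_new
    return x, u, y, rho
\end{verbatim}
Note that, in accordance with Step 3.3, the sketch recomputes $\beta_{k}$ from the current $\rho_{k}$ at every iteration, and that $\rho_{k}$ is never decreased; this monotonicity is what drives the finite stabilization result of Section \ref{Sec:KeyL}.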
\medskip We record here a simple consequence, useful in our analysis, which immediately follows from the definitions of $\rho_{k}$ and $\beta_{k}$ (see Step 3.3): \begin{equation} \label{parcb} \rho_{k} \geq \rho_{0} > 0 \quad \text{and} \quad \beta_{k} \leq \beta_{0}, \,\,\, \text{for all} \, \, k \in \nn. \end{equation} \begin{remark} \label{r:param2} In some cases the penalty parameters $\rho_{k}$, $k \in \nn$, can be adjusted so that Step 3.3 automatically holds with $\rho_{k} = \rho$ for all $k \in \nn$. In this case the iteration boils down to Steps 3.1 and 3.2 only. This will happen, for instance, when the information zone is the whole space, {\em e.g.}, when $F$ is linear (\emph{cf.\ } Remark \ref{r:info} and Remark \ref{r:lin} below). \end{remark} \subsection{A methodology for Lagrangian based methods.} \label{SSec:Meth} First note that, once the input Lagrangian algorithmic map ${\cal A}_{\rho}$ is chosen, {\bf ALBUM} generates a sequence $\Seq{z}{k} := \left\{ \left(x^{k} , u^{k} , y^{k}\right) \right\}_{k \in \nn}$ which, thanks to Definition \ref{D:LagAlg}, must satisfy the following two conditions: \begin{itemize} \item[{\bf C1}] There exists a positive constant $a$ such that \begin{equation*} \frac{a}{2}\norm{x^{k + 1} - x^{k}}^{2} + \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) \leq \Laug_{\rho_{k}}\left(x^{k} , u^{k} , y^{k}\right), \quad \forall \,\, k \geq 0. \end{equation*} \item[{\bf C2}] There exists a positive constant $b$ such that \begin{equation*} \norm{\nabla_{x} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right)} \leq b \norm{x^{k + 1} - x^{k}}, \quad \forall \,\, k \geq 0. \end{equation*} \end{itemize} Independently of the algorithmic map ${\cal A}_{\rho}$ which governs the mechanism of the primal black-box, we also need two additional assumptions on the corresponding generated sequence $\Seq{z}{k}$, which we record now: \begin{itemize} \item[{\bf C3}] There exists a positive constant $c$ such that \begin{equation*} \norm{v^{k + 1}} \leq c\norm{x^{k + 1} - x^{k}}, \quad \forall \,\, k \geq 0, \end{equation*} for some $v^{k + 1} \in \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right)$. \item[{\bf C4}] If ${\bar u}$ is a limit point of a subsequence $\left\{ u^{k} \right\}_{k \in {\cal K}}$ of $\Seq{u}{k}$, then $\limsup_{k \in {\cal K} \subset \nn} h(u^k) \leq h({\bar u})$. \end{itemize} \medskip Some comments are now in order. First, note that the proposed methodology, while similar in spirit, is fundamentally different from the general methodology recently proposed in \cite{BST2014}, which is unfortunately not applicable to {\bf ALBUM}, due to the primal-dual structure of this scheme. In particular, \begin{itemize} \item The first condition {\bf C1} is a {\em partial descent property} on $\Laug_{\rho}\left(\cdot \right)$. It pertains to the primal variables $\left(x , u\right)$, since by nature the dual variable $y$ is an ``ascent variable". The asymmetry between $x$ and $u$ in the descent condition could be removed by further generalizing our approach. For the sake of simplicity, we only consider the case when the quantity of decrease in $x$ is known. \item Conditions {\bf C2} and {\bf C3} provide subgradient bounds for $\Laug_{\rho}\left(\cdot \right)$ with respect to the primal variables. \item The sequential assumption on $h$, that is, condition {\bf C4}, is a minimal and extremely weak requirement. This property holds, for instance, when $h : \dom h \rightarrow \rr$ is continuous.
\end{itemize} \medskip From now on, and throughout the rest of this paper, we adopt the following terminology: \medskip \begin{center} \fbox{\parbox{11cm}{\center \vspace{-0.15in}A sequence $\Seq{z}{k}$ which is generated by {\bf ALBUM} and satisfies conditions {\bf C1}--{\bf C4} is called a \textit{Lagrangian sequence}.}} \end{center} \medskip As we shall soon see, many fundamental Lagrangian based methods produce Lagrangian sequences. This allows us to derive convergence results in a unified way for such methods and their variants. We postpone the description of these methods to Section \ref{sec:variants}, and we next announce our main convergence results for {\bf ALBUM}, which will be proved in the following sections. \subsection{Main convergence results for {\bf ALBUM}.} \label{mainconv} Our central theoretical contributions on the convergence of {\bf ALBUM} to a critical point of problem (CM) are stated in the following two results. \begin{theorem}[Subsequence convergence] \label{T:SubConv} Let $\Seq{z}{k}$ be a bounded Lagrangian sequence and let $\left({\bar x} , {\bar u} , {\bar y} \right)$ be a limit point of $\Seq{z}{k}$. Then ${\bar x}$ is a critical point of the original problem {\rm(CM)}. \end{theorem} Considering semi-algebraic or definable data, and relying on the so-called nonsmooth KL property \cite{BDL2006}, we can rule out oscillatory behaviors and establish the global convergence of the whole sequence. \begin{theorem}[Global convergence] \label{T:GlobConv} Under the premises of Theorem \ref{T:SubConv}, and assuming that $f_{0}$, $F$ and $h$ are semi-algebraic, the whole sequence $\Seq{z}{k}$ converges to a point $\left({\bar x} , {\bar u} , {\bar y}\right)$ such that ${\bar x}$ is a critical point of problem {\rm (CM)}. \end{theorem} \begin{remark} \label{r:rate} \begin{itemize} \item[$\rm{(i)}$] Standard arguments show that convergence rates of the sequence $\Seq{z}{k}$ of the type $\displaystyle O\left(k^{-s}\right)$, with $s > 0$, could be established. We refer to the technique in \cite{AB2009}. \item[$\rm{(ii)}$] The essential tools for convergence are elementary stability questions and the nonsmooth Kurdyka-\L ojasiewicz inequality; thus semi-algebraicity can be replaced by definability in an o-minimal structure on $\left(\rr , + , \times\right)$. \end{itemize} \end{remark} \medskip The next section develops our analytical framework. We present the main ideas underlying the proposed algorithm, the main obstacles that need to be addressed, and the key tools necessary for developing the convergence analysis of {\bf ALBUM}. \section{A key lemma: penalty parameter stabilization.} \label{Sec:KeyL} In this section, we establish a central result which is essential in our approach. It asserts that the sequence of penalty parameters $\seq{\rho}{k}$ becomes stationary and that the information zone $\Zone$ is reached within finitely many steps. To establish this result, we first provide, in a preliminary subsection, some simple yet fundamental properties. \subsection{Fundamental properties of Lagrangian sequences.} The first elementary result identifies when an iterate enters the information zone $\Zone$. \begin{lemma}[Information lemma] \label{L:Stab} Let $\Zone$ be a given information zone. Let $\Seq{z}{k}$ be a Lagrangian sequence and assume that the multiplier sequence $\Seq{y}{k}$ is bounded. Then, there exists an index $k_{\footnotesize \rm info} \in \nn$ such that $x^{k} \in \Zone$ for all $k \geq k_{\footnotesize \rm info}$.
\end{lemma} \begin{proof} We argue by contradiction and assume that $x^{k} \notin \Zone$ for $k \in I$, where $I$ is an infinite set. On one hand, by the definition of the information zone $\Zone$, we have for all $k \in I$ that \begin{equation} \label{L:Stab:1} \dist\left(F\left(x^{k}\right) , \dom{h}\right) > \bar{d}. \end{equation} On the other hand, for all $k \in \nn$ we have \begin{align*} \dist\left(F\left(x^{k}\right) , \dom{h}\right) & = \inf_{u \in \dom{h}} \norm{u - F\left(x^{k} \right)} \\ & \leq \norm{u^{k} - F\left(x^{k}\right)} \hspace{1.2in} \left[u^{k} \in \dom{h}\right] \\ & = \frac{1}{\rho_{k - 1}}\norm{y^{k} - y^{k - 1}} \hspace{1.03in} \left[\eqref{GenericAdap:MultiStep}\right] \\ & \leq \frac{M}{\rho_{k - 1}}. \hspace{1.78in}\left[\Seq{y}{k} \;\text{is bounded, with} \; \norm{y^{k} - y^{k - 1}} \leq M \right] \end{align*} By Step 3.3 of the algorithm and the fact that $I$ is an infinite set, it follows that $\rho_{k} \rightarrow \infty$ as $k \rightarrow \infty$; thus there exists $k_{\footnotesize \rm info} \in \nn$ such that \begin{equation*} \dist\left(F(x^{k}), \dom{h}\right) \leq \frac{M}{\rho_{k - 1}} \leq \bar{d}, \quad \forall \,\, k \geq k_{\footnotesize \rm info}, \end{equation*} which obviously contradicts \eqref{L:Stab:1}. \end{proof} The next result provides an important relation between the sequences $\Seq{x}{k}$ and $\Seq{y}{k}$ produced by {\bf ALBUM}, and reflects the min-max dynamics at the root of these methods. \begin{lemma} \label{L:DescentProperty} Let $\Seq{z}{k}$ be a Lagrangian sequence. The following inequality holds true for any $k \geq 0$ \begin{equation*} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k+1}\right) - \Laug_{\rho_{k}}\left(x^{k} , u^{k} , y^{k}\right) \leq \frac{1}{\rho_{k}}\norm{y^{k + 1} - y^{k}}^{2} - \frac{a}{2} \norm{x^{k + 1} - x^{k}}^{2}. \end{equation*} \end{lemma} \begin{proof} From condition {\bf C1}, \begin{equation} \label{L:DescentProperty:1} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) - \Laug_{\rho_{k}}\left(x^{k} , u^{k} , y^{k}\right) \leq -\frac{a}{2}\norm{x^{k + 1} - x^{k}}^{2}. \end{equation} Using the definition of $\Laug_{\rho}$ (\emph{cf.\ } \eqref{D:AugLAc}) we have from \eqref{GenericAdap:MultiStep} that \begin{align*} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k+1}\right) - \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) & = \act{y^{k + 1} - y^{k} , F\left(x^{k + 1}\right) - u^{k + 1}} \\ & = \frac{1}{\rho_{k}}\norm{y^{k + 1} - y^{k}}^{2}. \end{align*} Adding the latter to \eqref{L:DescentProperty:1} yields the desired result. \end{proof} The next result relates the evolution of the multiplier sequence $\Seq{y}{k}$ to that of the primal sequence $\Seq{x}{k}$. \begin{lemma} \label{L:DualSequence} Let $\Seq{z}{k}$ be a Lagrangian sequence. Assume that the multiplier sequence $\Seq{y}{k}$ is bounded by some $\Lambda > 0$. Then, the following inequality holds true for any $k \geq k_{\footnotesize \rm info}$, \begin{equation} \label{L:DualSequence:00} \norm{y^{k + 1} - y^{k}}^{2} \leq d_{1}\norm{x^{k + 1} - x^{k}}^{2} + d_{2}\norm{x^{k} - x^{k - 1}}^{2}, \end{equation} where \begin{equation} \label{L:DualSequence:0} d_{1} = \frac{2}{\gamma^2}\left(L(f_{0}) + L(F)\Lambda + b\right)^{2} \quad \text{and} \quad d_{2} = \frac{2b^{2}}{\gamma^{2}}. \end{equation} \end{lemma} \begin{proof} For convenience, we define \begin{equation*} \Delta_{k} := \nabla F\left(x^{k + 1}\right)^{T}y^{k + 1} - \nabla F\left(x^{k}\right)^{T} y^{k}.
\end{equation*} Then, by Lemma \ref{L:Stab} and Assumption \ref{AssumptionB}(i) and (ii), which warrant that $F$ is uniformly regular on $\Zone$ with constant $\gamma$ and that $\nabla F$ is Lipschitz continuous on $\Zone$, respectively, it follows for all $k \geq k_{\footnotesize \rm info}$ that \begin{align} \norm{\Delta_{k}} & = \norm{\nabla F\left(x^{k + 1}\right)^{T}\left(y^{k + 1} - y^{k}\right) + \left(\nabla F\left(x^{k + 1}\right) - \nabla F\left(x^{k}\right)\right)^{T}y^{k}} \nonumber \\ & \geq \gamma\norm{y^{k + 1} - y^{k}} - L(F)\Lambda\norm{x^{k + 1} - x^{k}}. \label{L:DualSequence:2} \end{align} On the other hand, from the definition of $\Laug_{\rho}$ (see \eqref{D:AugLAc}), we have that \begin{align*} \nabla_{x} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) & = \nabla f_{0}\left(x^{k + 1}\right) + \nabla F\left(x^{k + 1}\right)^{T}\left(y^{k} + \rho_{k}\left(F\left(x^{k + 1} \right) - u^{k + 1}\right)\right) \nonumber \\ & = \nabla f_{0}\left(x^{k + 1} \right) + \nabla F\left(x^{k + 1}\right)^{T}y^{k + 1}, \end{align*} where the second equality uses the multiplier update given in \eqref{GenericAdap:MultiStep}. Thus, using the latter together with condition {\bf C2}, we obtain for all $k \geq 0$ that \begin{equation}\label{lagb} \norm{\nabla f_{0}\left(x^{k + 1}\right) + \nabla F\left(x^{k + 1}\right)^{T}y^{k + 1}} \leq b\norm{x^{k + 1} - x^{k}}. \end{equation} Therefore, we obtain for all $k \geq k_{\footnotesize \rm info}$, \begin{align} \norm{\Delta_{k}} & = \norm{\nabla F\left(x^{k + 1}\right)^{T}y^{k + 1} - \nabla F\left(x^{k} \right)^{T}y^{k}} \nonumber \\ & = \norm{\nabla F\left(x^{k + 1}\right)^{T}y^{k + 1} + \nabla f_{0}\left(x^{k + 1}\right) - \nabla F\left(x^{k}\right)^{T}y^{k} - \nabla f_{0}\left(x^{k}\right) + \nabla f_{0}\left(x^{k} \right) - \nabla f_{0}\left(x^{k + 1}\right)} \nonumber \\ & \leq \norm{\nabla F\left(x^{k + 1}\right)^{T}y^{k + 1} + \nabla f_{0}\left(x^{k + 1}\right)} + \norm{ \nabla F\left(x^{k}\right)^{T}y^{k} + \nabla f_{0}\left(x^{k}\right)} \nonumber \\ & + \norm{\nabla f_{0}\left(x^{k + 1}\right) - \nabla f_{0}\left(x^{k}\right)} \nonumber \\ & \leq \left(L(f_{0}) + b\right)\norm{x^{k + 1} - x^{k}} + b\norm{x^{k} - x^{k - 1}}, \label{L:DualSequence:4} \end{align} where the last inequality uses \eqref{lagb} and the Lipschitz continuity of $\nabla f_{0}$ over $\Zone$ (see Assumption \ref{AssumptionB}(iii)). Combining \eqref{L:DualSequence:2} and \eqref{L:DualSequence:4}, we thus obtain for any $k \geq k_{\footnotesize \rm info}$ \begin{equation} \label{L:DualSequence:5} \gamma\norm{y^{k + 1} - y^{k}} \leq \left(L(f_{0}) + L(F)\Lambda + b\right)\norm{x^{k + 1} - x^{k}} + b\norm{x^{k} - x^{k - 1}}. \end{equation} Therefore, squaring the last inequality and using the fact that $\left(r + s\right)^{2} \leq 2r^{2} + 2s^{2}$ for all $r , s \in \rr$, the claimed assertion follows. \end{proof} \subsection{Finite stabilization of the penalty sequence $\seq{\rho}{k}$.} We are now ready to establish the promised key lemma, which asserts that the sequence of penalty parameters $\seq{\rho}{k}$ becomes stationary from a certain iteration-index $k_{\footnotesize \rm statio}$. A ``Lyapunov zone" for ${\cal E}_{\beta}$ is thus reached within finitely many steps. \begin{lemma}[Finite stabilization of the sequence $\seq{\rho}{k}$] \label{L:SufficentDecrease} Let $\Seq{z}{k}$ be a Lagrangian sequence. Assume that the multiplier sequence $\Seq{y}{k}$ is bounded.
Then, there exists an index $k_{\footnotesize \rm statio} \in \nn$ such that \begin{equation*} \rho_{k} = \rho_{k_{\footnotesize \rm statio}}, \quad \forall \,\, k \geq k_{\footnotesize \rm statio}. \end{equation*} Moreover, for all $k \geq k_{\footnotesize \rm statio}$ we have $x^{k} \in \Zone$ and, with $\tau > 0$ the parameter chosen in {\bf ALBUM}, \begin{equation} \label{L:SufficentDecrease:0} \tau\norm{x^{k + 1} - x^{k}}^{2} \leq {\cal E}_{\beta_{k_{\footnotesize \rm statio}}} \left(x^{k} , u^{k} , y^{k} , x^{k -1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1} , x^{k} \right). \end{equation} \end{lemma} \begin{proof} Lemma \ref{L:Stab} warrants that $x^{k} \in \Zone$ for all $k \geq k_{\footnotesize \rm info}$, and by applying Lemma \ref{L:DescentProperty} we obtain for all $k \geq 0$ that \begin{equation} \label{L:SufficentDecrease:1} \Laug_{\rho_{k}}\left(z^{k}\right) - \Laug_{\rho_{k}}\left(z^{k + 1}\right) \geq \frac{a}{2} \norm{x^{k + 1} - x^{k}}^{2} - \frac{1}{\rho_{k}}\norm{y^{k + 1} - y^{k}}^{2}. \end{equation} Using Lemma \ref{L:DualSequence}, we get for all $k \geq k_{\footnotesize \rm info}$, \begin{equation} \label{L:SufficentDecrease:2} \norm{y^{k + 1} - y^{k}}^{2} \leq d_{1}\norm{x^{k + 1} - x^{k}}^{2} + d_{2}\norm{x^{k} - x^{k - 1}}^{2}, \end{equation} where $d_{1}$ and $d_{2}$ are given in \eqref{L:DualSequence:0}. Hence, by combining \eqref{L:SufficentDecrease:1} and \eqref{L:SufficentDecrease:2}, it follows for all $k \geq k_{\footnotesize \rm info}$ that \begin{equation} \label{L:SufficentDecrease:3} \Laug_{\rho_{k}}\left(z^{k}\right) - \Laug_{\rho_{k}}\left(z^{k + 1}\right) \geq \left(\frac{a} {2} - \frac{d_{1}}{\rho_{k}}\right)\norm{x^{k + 1} - x^{k}}^{2} - \frac{d_{2}}{\rho_{k}} \norm{x^{k} - x^{k - 1}}^{2}. \end{equation} Using the definition of ${\cal E}_{\beta}$ (see \eqref{D:Lyap}) with $\beta = \beta_{k}$, we get \begin{align*} V_{k} & := {\cal E}_{\beta_{k}}\left(x^{k} , u^{k} , y^{k} , x^{k - 1}\right) - {\cal E}_{\beta_{k}} \left(x^{k + 1} , u^{k + 1} , y^{k + 1} , x^{k}\right) \\ & = \Laug_{\rho_{k}}\left(z^{k}\right) - \Laug_{\rho_{k}}\left(z^{k + 1}\right) + \beta_{k} \norm{x^{k} - x^{k - 1}}^{2} - \beta_{k}\norm{x^{k + 1} - x^{k}}^{2}. \end{align*} Therefore, with \eqref{L:SufficentDecrease:3}, we deduce that for all $k \geq k_{\footnotesize \rm info}$ \begin{align} V_{k} & \geq \left(\frac{a}{2} - \frac{d_{1}}{\rho_{k}} - \beta_{k}\right)\norm{x^{k + 1} - x^{k}}^{2} - \left(\frac{d_{2}}{\rho_{k}} - \beta_{k}\right)\norm{x^{k - 1} - x^{k}}^{2} \nonumber \\ & = \left(\frac{a}{2} - \frac{d_{1}}{\rho_{k}} - \beta_{k}\right)\norm{x^{k + 1} - x^{k}}^{2}, \label{L:SufficentDecrease:4} \end{align} where the equality follows from the definition of $\beta_{k}$ given in Step 3.3 of {\bf ALBUM}, which, in view of \eqref{L:DualSequence:0}, reads \begin{equation*} \beta_{k} = \frac{d_{2}}{\rho_{k}} = \frac{2b^{2}}{\rho_{k}\gamma^{2}}. \end{equation*} In addition, one has for all $k \geq k_{\footnotesize \rm info}$ that \begin{equation*} \frac{a}{2} - \frac{d_{1}}{\rho_{k}} - \beta_{k} = \frac{a}{2} - \frac{d_{1} + d_{2}}{\rho_{k}}. \end{equation*} Thus \eqref{L:SufficentDecrease:4} can be rewritten as \begin{equation} \label{L:SufficentDecrease:5} V_{k} \geq \left(\frac{a}{2} - \frac{d_{1} + d_{2}}{\rho_{k}}\right)\norm{x^{k + 1} - x^{k}} ^{2}.
\end{equation} The sequence $\seq{\rho}{k}$ cannot increase indefinitely: otherwise we would have $\frac{a}{2} - \frac{d_{1} + d_{2}}{\rho_{k}} > \tau$ for all $k$ sufficiently large, where $\tau > 0$ is the parameter fixed in the {\bf ALBUM} scheme, and hence, from \eqref{L:SufficentDecrease:5}, \begin{equation*} {\cal E}_{\beta_{k}}\left(x^{k} , u^{k} , y^{k} , x^{k -1}\right) - {\cal E}_{\beta_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1} , x^{k}\right) \geq \tau \norm{x^{k + 1} - x^{k}}^{2}, \end{equation*} for all such $k$; since, in addition, $x^{k} \in \Zone$ for all $k \geq k_{\footnotesize \rm info}$ (Lemma \ref{L:Stab}), the test in Step 3.3 would eventually never trigger an increase of the penalty parameter, a contradiction. Thus we obtain the existence of an iteration-index $k_{\footnotesize \rm statio} \geq k_{\footnotesize \rm info}$ such that $\rho_{k} = \rho_{k_{\footnotesize \rm statio}}$ for all $k \geq k_{\footnotesize \rm statio}$, and the desired result follows. \end{proof} \begin{remark}[Adaptive process and the dynamics of $\seq{\rho}{k}$] \label{liap} Lemma \ref{L:SufficentDecrease} establishes that {\bf ALBUM}, within Step 3.3, relies on two fundamental tests: \begin{itemize} \item[--] a weak\footnote{Weak because we do not ask for actual feasibility.} feasibility test, {\em i.e.}, $x^{k} \in \Zone$, \item[--] a surrogate\footnote{Surrogate because we do not ask for the augmented Lagrangian function $\Laug_{\rho}$ to be Lyapunov, but rather that the auxiliary function ${\cal E}_{\beta}$ is Lyapunov.} descent test for ${\cal E}_{\beta}$ which implicitly tunes the algorithm to match the natural step-sizes attached to $f_{0}$ and $F$. \end{itemize} Lemma \ref{L:SufficentDecrease} tells us that $\rho_{k}$ can be automatically tuned to an acceptable value $\rho_{k_{\footnotesize \rm statio}}$ in finitely many steps. As a consequence, and this is a fundamental fact, we have the descent property: \begin{equation*} {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(x^{k} , u^{k} , y^{k} , x^{k -1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1} , x^{k}\right) \geq \tau \norm{x^{k + 1} - x^{k}}^{2}, \quad \forall \,\, k \geq k_{\footnotesize \rm statio}. \end{equation*} In short, and to conclude, one could say that the adaptive protocol leads to the finite identification of the information zone and to a sufficient descent property. \end{remark} \begin{remark} One observes from the proof that the descent property on ${\cal E}_{\beta}$ is ensured once we know that \begin{equation} \label{descond} \frac{a}{2} - \frac{d_{1} + d_{2}}{\rho_{k}} > \tau, \quad \forall \,\, k \geq k_{\footnotesize \rm info}. \end{equation} In order to bypass the surrogate descent test, it is thus tempting to fix a value $\rho_{0}$ a priori (before running the method), so that the above holds directly. Yet it is important to understand that this cannot be done in general, since $d_{1}$ (\emph{cf.\ } \eqref{L:DualSequence:0}) is a constant that {\em depends} on a bound $\Lambda$ of the sequence $\Seq{y}{k}$, which itself depends on $\seq{\rho}{k}$! \end{remark} \medskip \begin{remark}[Special case with $F$ assumed to be linear] \label{r:lin} \begin{itemize} \item[(i)] In that case the dependence of $d_{1}$ on $\Lambda$ given in Lemma \ref{L:DualSequence} {\em disappears}. This allows for a more direct and simplified approach. Indeed, exploiting the linearity of $F$, the inequality \eqref{L:DualSequence:2} reduces to $\norm{\Delta_{k}} \geq \gamma\norm{y^{k + 1} - y^{k}}$ for all $k \geq 0$, where here $\gamma \equiv \sqrt{\lambda_{\min}(FF^T)} > 0$, \emph{cf.\ } Remark \ref{r:info}.
Therefore, the boundedness of $\Seq{y}{k}$ is not needed, and it immediately follows that inequality \eqref{L:DualSequence:00} of Lemma \ref{L:DualSequence} holds true for all $k \geq 0$, with \begin{equation} \label{dlin} d_{1} = \frac{2}{\lambda_{\min}(FF^T)}\left(L(f_{0}) + b\right)^{2} \quad \text{and} \quad d_{2} = \frac{2b^{2}}{\lambda_{\min}(FF^T)}. \end{equation} Secondly, as mentioned before (\emph{cf.\ } Remark \ref{r:info}), the information zone can be taken to be the whole space, {\em i.e.}, $\Zone \equiv \rr^{n}$, and in that case the adaptive regime is no longer necessary. Thus we set $\rho_{k} \equiv \rho > 0$ for all $k \in \nn$, and Step 3.3 of {\bf ALBUM} is simply removed (see also Remark \ref{r:param2}). Therefore, in order to guarantee sufficient descent of the Lyapunov function ${\cal E}_{\beta}$, all we need is that \eqref{descond} holds, which (with $\tau = 0$) reduces to \begin{equation} \label{rholin} \rho > {\bar \rho} := \frac{2\left(d_{1} + d_{2}\right)}{a}, \end{equation} where $d_{1}$ and $d_{2}$ are given in \eqref{dlin}. Therefore, in the special linear case, this allows us to determine explicitly the threshold value ${\bar \rho}$ for a chosen Lagrangian algorithmic map ${\cal A}_{\rho}$ (which provides the constants $a$ and $b$), and to obtain the corresponding convergence results via a straightforward application of Theorems \ref{T:SubConv} and \ref{T:GlobConv}. \item[(ii)] Interestingly, this also provides a positive answer to a question posed in \cite[Remark 4(3) p. 2451]{LP2015}, where the authors pointed out that it would be interesting to see if global convergence of a proximal ADM could be derived; see also Section \ref{sec:variants} for more results. \end{itemize} \end{remark} \section{Proof of the main convergence results.} \label{Sec:proofs} Equipped with the results we have established, we can now apply our methodology to prove the main convergence results of {\bf ALBUM} announced in Section \ref{mainconv}. \subsection{Subgradient bound for the Lyapunov function ${\cal E}_{\beta}$.} As mentioned previously, we work with the function ${\cal E}_{\beta}$ to overcome the descent obstacle and to detect hidden descent mechanisms. Now the third condition {\bf C3} of our methodology comes into play. We derive below an upper bound on a subgradient of the Lyapunov function ${\cal E}_{\beta}$. \begin{lemma} \label{L:SubgradientBound} Let $\Seq{z}{k}$ be a bounded Lagrangian sequence. Then, there exist positive constants $\sigma_{1}$ and $\sigma_{2}$ such that, for each $k \geq k_{\footnotesize \rm info}$, there exists $q^{k + 1} \in \partial {\cal E}_{\beta_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1} , x^{k}\right)$ satisfying \begin{equation} \label{L:SubgradientBound:00} \norm{q^{k + 1}} \leq \sigma_{1}\norm{x^{k + 1} - x^{k}} + \sigma_{2}\norm{x^{k} - x^{k - 1}}. \end{equation} \end{lemma} \begin{proof} Consider the quadruplet $q^{k + 1} = \left(q_{1}^{k + 1} , q_{2}^{k + 1} , q_{3}^{k + 1} , q_{4}^{k + 1}\right) \in \partial {\cal E}_{\beta_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1} , x^{k}\right)$.
Using the definition of ${\cal E}_{\beta}$ (see \eqref{D:Lyap}), subdifferential calculus rules, and recalling the multiplier update rule \eqref{GenericAdap:MultiStep}, a direct computation shows that: \begin{align} q_{1}^{k + 1} & = \nabla_{x} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1}\right) + 2\beta_{k}\left(x^{k + 1} - x^{k}\right) \nonumber \\ & = \nabla_{x} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) + \nabla F\left(x^{k + 1}\right)^{T}\left(y^{k + 1} - y^{k}\right) + 2\beta_{k}\left(x^{k + 1} - x^{k}\right), \label{L:SubgradientBound:1} \\ q_{2}^{k + 1} & \in \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1}\right) = \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) - \left(y^{k + 1} - y^{k}\right), \label{L:SubgradientBound:2} \\ q_{3}^{k + 1} & = \nabla_{y} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1}\right) = F\left(x^{k + 1}\right) - u^{k + 1} = \rho_{k}^{-1}\left(y^{k + 1} - y^{k}\right), \label{L:SubgradientBound:3} \\ q_{4}^{k + 1} & = 2\beta_{k}\left(x^{k} - x^{k + 1}\right). \label{L:SubgradientBound:4} \end{align} Since $\Seq{x}{k}$ is bounded and $\nabla F$ is continuous (see Assumption \ref{AssumptionB}(ii)), it follows that there exists $B > 0$ such that \begin{equation} \label{L:SubgradientBound:0} \sup_{k \geq k_{\footnotesize \rm info}} \norm{\nabla F\left(x^{k}\right)} \leq B. \end{equation} Moreover, recall that from \eqref{parcb} we have $\rho_{k} \geq \rho_{0}$ and $\beta_{k} \leq \beta_{0}$ for all $k \in \nn$. Therefore, using condition {\bf C2} and the expressions for $q_{j}^{k + 1}$, $j = 1 , 2 , 3 , 4$, derived above, we get the following estimates: \begin{align*} \norm {q_{1}^{k + 1}} & \leq \norm {\nabla_{x} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right)} + B\norm{y^{k + 1} - y^{k}} + 2\beta_{0}\norm {x^{k + 1} - x^{k}} \\ & \leq b\norm{x^{k + 1} - x^{k}} + B\norm{y^{k + 1} - y^{k}} + 2\beta_{0}\norm {x^{k + 1} - x^{k}} \\ & = B\norm{y^{k + 1} - y^{k}} + \left(b + 2\beta_{0}\right)\norm {x^{k + 1} - x^{k}}. \end{align*} Likewise, thanks to condition {\bf C3}, we have with $v^{k + 1} \in \partial_{u} \Laug_{\rho_{k}} \left(x^{k + 1} , u^{k + 1} , y^{k}\right)$ that $\norm{v^{k + 1}} \leq c\norm{x^{k + 1} - x^{k}}$, and hence by defining $q_{2}^{k + 1} = v^{k + 1} - \left(y^{k + 1} - y^{k}\right)$, it immediately follows that $q_{2}^{k + 1} \in \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1} \right)$, and from \eqref{L:SubgradientBound:2} \begin{equation*} \norm{q_{2}^{k + 1}} \leq \norm{v^{k + 1}} + \norm{y^{k + 1} - y^{k}} \leq c\norm{x^{k + 1} - x^{k}} + \norm{y^{k + 1} - y^{k}}. \end{equation*} Finally, from \eqref{L:SubgradientBound:3} and \eqref{L:SubgradientBound:4} we immediately obtain (recall \eqref{parcb}) \begin{equation*} \norm{q_{3}^{k + 1}} \leq \frac{1}{\rho_{0}}\norm{y^{k + 1} - y^{k}} \quad \text{and} \quad \norm{q_{4}^{k + 1}} \leq 2\beta_{0}\norm{x^{k + 1} - x^{k}}. \end{equation*} Therefore, summing these inequalities, we obtain for all $k \geq k_{\footnotesize \rm info}$ \begin{equation*} \norm{q^{k + 1}} \leq \sum_{j = 1}^{4} \norm{q_{j}^{k + 1}} \leq \left(B + 1 + \frac{1} {\rho_{0}}\right)\norm{y^{k + 1} - y^{k}} + \left(4\beta_{0} + b + c\right)\norm{x^{k + 1} - x^{k}}.
\end{equation*} Using the proof of Lemma \ref{L:DualSequence}, for all $k \geq k_{\footnotesize \rm info}$, we know from \eqref{L:DualSequence:5} that \begin{equation*} \gamma\norm{y^{k + 1} - y^{k}} \leq \left(L(f_{0}) + L(F)\Lambda + b\right)\norm{x^{k + 1} - x^{k}} + b\norm{x^{k} - x^{k - 1}}. \end{equation*} Combining this with the above inequality yields the desired estimate \eqref{L:SubgradientBound:00} by choosing \begin{equation*} \sigma_{1} = \frac{1}{\gamma}\left(B + 1 + \frac{1}{\rho_{0}}\right)\left(L(f_{0}) + L(F)\Lambda + b\right) + 4\beta_{0} + b + c \quad \text{and} \quad \sigma_{2} = \frac{b}{\gamma}\left(B + 1 + \frac{1}{\rho_{0}}\right). \end{equation*} This completes the proof. \end{proof} Equipped with Lemma \ref{L:SufficentDecrease} we immediately obtain the following result. \begin{proposition} \label{P:WeakConv} Let $\Seq{z}{k}$ be a Lagrangian sequence. Assume that the multiplier sequence $\Seq{y}{k}$ is bounded. Then \begin{equation*} \sum_{k = 1}^{\infty} \norm{x^{k + 1} - x^{k}}^{2} < \infty \quad \text{and} \quad \sum_{k = 1} ^{\infty} \norm{y^{k + 1} - y^{k}}^{2} < \infty. \end{equation*} \end{proposition} \begin{proof} Invoking Lemma \ref{L:SufficentDecrease}, which holds true under the stated assumptions, we have that \begin{equation} \label{P:WeakConv:1} \tau\norm{x^{k + 1} - x^{k}}^{2} \leq {\cal E}_{\beta_{k_{\footnotesize \rm statio}}} \left(x^{k} , u^{k} , y^{k} , x^{k - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(x^{k + 1} , u^{k + 1} , y^{k + 1} , x^{k}\right), \end{equation} for all $k \geq k_{\footnotesize \rm statio}$. Summing \eqref{P:WeakConv:1} over $k = k_{\footnotesize \rm statio} , k_{\footnotesize \rm statio} + 1 , \ldots , k_{\footnotesize \rm statio} + p$, and writing for brevity ${\cal E}_{\beta}\left(z^{k} , x^{k - 1}\right)$ for ${\cal E}_{\beta}\left(x^{k} , u^{k} , y^{k} , x^{k - 1}\right)$, the left-hand sides telescope and we obtain \begin{align*} \tau\sum_{k = k_{\footnotesize \rm statio}}^{k_{\footnotesize \rm statio} + p} \norm{x^{k + 1} - x^{k}}^{2} & \leq {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k_{\footnotesize \rm statio}} , x^{k_{\footnotesize \rm statio} - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k_{\footnotesize \rm statio} + p + 1} , x^{k_{\footnotesize \rm statio} + p}\right) \\ & \leq {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k_{\footnotesize \rm statio}} , x^{k_{\footnotesize \rm statio} - 1}\right) - \inf {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}, \end{align*} where the last quantity is finite thanks to \eqref{WellPosed}, since ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(\cdot\right) \geq \Laug_{\rho_{k_{\footnotesize \rm statio}}}\left(\cdot\right)$. Letting $p \rightarrow \infty$ yields $\sum_{k = k_{\footnotesize \rm statio}}^{\infty} \norm{x^{k + 1} - x^{k}}^{2} < \infty$, and hence, the finitely many remaining terms being finite, \begin{equation*} \sum_{k = 1}^{\infty} \norm{x^{k + 1} - x^{k}}^{2} < \infty. \end{equation*} Therefore, from Lemma \ref{L:DualSequence}, it also follows that $\sum_{k = 1}^{\infty} \norm{y^{k + 1} - y^{k}}^{2} < \infty$, as required. \end{proof} We are now ready to prove our first convergence result for the generic scheme {\bf ALBUM}. \subsection{Proof of Theorem \ref{T:SubConv} -- subsequence convergence.} The sequence $\Seq{z}{k}$ is bounded and therefore there exists a subsequence $\left\{ z^{m_{k}} \right\}_{k \in \nn}$ which converges to ${\bar z} = \left({\bar x} , {\bar u} , {\bar y}\right)$. We first prove that $\left({\bar x} , {\bar u} , {\bar y} , {\bar x}\right)$ is a critical point of ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}$, that is, \begin{equation*} \left(0 , 0 , 0 , 0\right) \in \partial {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar x} , {\bar u} , {\bar y} , {\bar x}\right).
\end{equation*} Since $h$ is lower semicontinuous, we have that \begin{equation*} \liminf_{k \rightarrow \infty} h\left(u^{m_{k}}\right) \geq h\left({\bar u}\right), \end{equation*} which, combined with condition {\bf C4}, yields that $h\left(u^{m_{k}}\right)$ converges to $h\left({\bar u}\right)$ as $k \rightarrow \infty$. Therefore, from Proposition \ref{P:WeakConv} and the continuity of $f_{0}$ and $F$ (see Assumption \ref{AssumptionB}(ii) and (iii)), we obtain that \begin{align*} \lim_{k \rightarrow \infty} {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{m_{k}} , x^{m_{k} - 1}\right) & = \lim_{k \rightarrow \infty} \left[\Laug_{\rho_{k_{\footnotesize \rm statio}}}\left(x^{m_{k}} , u^{m_{k}} , y^{m_{k}}\right) + \beta_{k_{\footnotesize \rm statio}}\norm{x^{m_{k}} - x^{m_{k} - 1}}^{2}\right] \\ & = \Laug_{\rho_{k_{\footnotesize \rm statio}}}\left({\bar x} , {\bar u} , {\bar y}\right) \\ & = {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right). \end{align*} We know from Lemma \ref{L:SubgradientBound} that there exist $\sigma_{1} , \sigma_{2} > 0$ and $q^{k + 1} \in \partial {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k + 1} , x^{k}\right)$ for which \begin{equation*} \norm{q^{k + 1}} \leq \sigma_{1}\norm{x^{k + 1} - x^{k}} + \sigma_{2}\norm{x^{k} - x^{k - 1}}. \end{equation*} On the other hand, from Proposition \ref{P:WeakConv} it follows that \begin{equation*} \lim_{k \rightarrow \infty} \norm{x^{k + 1} - x^{k}} = 0. \end{equation*} Thus $q^{k + 1} \rightarrow 0$ as $k \rightarrow \infty$. Using the closedness property of the graph of the subdifferential $\partial {\cal E}_{\beta}$, we obtain that $\left(0 , 0 , 0 , 0\right) \in \partial {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar x} , {\bar u} , {\bar y} , {\bar x}\right)$. This shows that $\left({\bar x} , {\bar u} , {\bar y} , {\bar x}\right)$ is a critical point of ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}$. Proposition \ref{P:Crit} now implies that ${\bar x}$ is a critical point of the objective function $f$ of model (CM), and the proof is completed. \medskip Next, in order to prove the second main global convergence result for our algorithm {\bf ALBUM}, we need to recall some necessary material on the nonsmooth KL property \cite{BDL2006}. \medskip Let $\eta \in \left(0 , +\infty\right]$. We denote by $\Phi_{\eta}$ the class of all concave and continuous functions $\varphi : \left[0 , \eta\right) \rightarrow \rr_{+}$ which satisfy the following conditions: \begin{itemize} \item[$\rm{(i)}$] $\varphi\left(0\right) = 0$; \item[$\rm{(ii)}$] $\varphi$ is $C^{1}$ on $\left(0 , \eta\right)$ and continuous at $0$; \item[$\rm{(iii)}$] for all $s \in \left(0 , \eta\right)$: $\varphi'\left(s\right) > 0$. \end{itemize} The next result plays a crucial role, see \cite[Lemma 6]{BST2014}. \begin{lemma}[Uniformized KL property] \label{L:KLProperty} Let $\Omega$ be a compact set and let $\sigma : \rr^{d} \rightarrow \left(-\infty , \infty\right]$ be a proper and lower semicontinuous function. Assume that $\sigma$ is constant on $\Omega$ and satisfies the KL property at each point of $\Omega$.
Then, there exist $\varepsilon > 0$, $\eta > 0$ and $\varphi \in \Phi_{\eta}$ such that for all $\overline{u}$ in $\Omega$ and all $u$ in the following intersection \begin{equation} \label{L:KLProperty:1} \left\{ u \in \rr^{d} : \, \dist\left(u , \Omega\right) < \varepsilon \right\} \cap \left[ \sigma\left(\overline{u}\right) < \sigma\left(u\right) < \sigma\left(\overline{u}\right) + \eta\right], \end{equation} one has, \begin{equation} \label{L:KLProperty:2} \varphi'\left(\sigma\left(u\right) - \sigma\left(\overline{u}\right)\right)\dist\left(0 , \partial \sigma\left(u\right)\right) \geq 1. \end{equation} \end{lemma} Equipped with these results we proceed with the proof of the second main theorem, {\em i.e.}, convergence of the whole sequence $\Seq{z}{k}$ to a critical point of problem (CM) with semi-algebraic data $f_{0}$, $h$ and $F$. Note that the technique used below is patterned after the recent work \cite{BST2014}. However, as explained previously, we cannot directly apply these results to {\bf ALBUM}, since the descent requirements stated there clearly do not hold in our framework. \subsection{Proof of Theorem \ref{T:GlobConv} -- global convergence.} Since $\Seq{z}{k}$ is bounded there exists a subsequence $\left\{ z^{m_{k}} \right\}_{k \in \nn}$ such that $z^{m_{k}} \rightarrow {\bar z}$ as $k \rightarrow \infty$. In a similar way as in the proof of Theorem \ref{T:SubConv} we get that \begin{equation} \label{T:FiniteLength:1} \lim_{k \rightarrow \infty} {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) = {\cal E}_{\beta_{k_{\footnotesize \rm statio}}} \left({\bar z} , {\bar x}\right). \end{equation} If there exists an integer $\bar{k} \geq k_{\footnotesize \rm statio}$ for which ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{\bar{k}} , x^{\bar{k} - 1}\right) = {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)$, then the decreasing property obtained in Lemma \ref{L:SufficentDecrease} would imply that $z^{\bar{k} + 1} = z^{\bar{k}}$. A trivial induction then shows that the sequence $\Seq{z}{k}$ is stationary, and the announced result is obvious. \medskip Since $\left\{ {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) \right\}_{k \in \nn}$ is a nonincreasing sequence, it is clear from \eqref{T:FiniteLength:1} that \begin{equation*} {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right) < {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1} \right)\; \text{for all} \; k \geq k_{\footnotesize \rm statio}. \end{equation*} Again from \eqref{T:FiniteLength:1}, for any $\eta > 0$ there exists $k_{0} \geq k_{\footnotesize \rm statio}$ such that \begin{equation*} {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) < {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x} \right) + \eta, \, \forall \, k > k_{0}. \end{equation*} From Theorem \ref{T:SubConv} we know that $\lim_{k \rightarrow \infty} \dist\left(z^{k} , \omega \left(z^{0}\right)\right) = 0$, where $\omega\left(z^{0}\right)$ denotes the set of all limit points of $\Seq{z}{k}$. This means that for any $\varepsilon > 0$ there exists a positive integer $k_{1} \geq k_{\footnotesize \rm statio}$ such that $\dist\left(z^{k} , \omega\left(z^{0}\right)\right) < \varepsilon$ for all $k > k_{1}$. Combining all these facts, we get that $z^{k}$ belongs to the intersection in \eqref{L:KLProperty:1} for all $k > l := \max\left\{ k_{0} , k_{1} \right\} \geq k_{\footnotesize \rm statio}$.
By Theorem \ref{T:SubConv}, $\omega \left(z^{0}\right)$ is nonempty and compact (since, by definition, it can be viewed as an intersection of compact sets). Now, we show that ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}$ is finite and constant on $\omega\left(z^{0} \right)$. Indeed, by our standing assumption (see \eqref{WellPosed}) we know that $\Laug_{\rho} \left(z^{k} \right) > -\infty$ for all $k \in \nn$, therefore from the definitions of $\Laug_{\rho}$ and ${\cal E}_{\beta}$ (see \eqref{D:AugLAc} and \eqref{D:Lyap}, respectively) we have that $\left\{ {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) \right\}_{k \in \nn}$ is bounded from below. Lemma \ref{L:SufficentDecrease} now guarantees that $\left\{ {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) \right\}_{k \in \nn}$ converges to a finite limit, say ${\cal E}^{*}$. From \eqref{T:FiniteLength:1} it follows that ${\cal E}^{*} = {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)$, which proves that ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}$ is finite and constant on $\omega\left(z^{0}\right)$. \medskip Thus, since ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}$ is a KL function, we can apply the Uniformization Lemma \ref{L:KLProperty} with $\Omega = \omega\left(z^{0}\right)$. Therefore, for any $k > l$, we have \begin{equation} \label{T:FiniteLength:2} \varphi'\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)\right) \cdot \dist\left(0 , \partial {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1} \right)\right) \geq 1. \end{equation} This makes sense since we know that ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) > {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)$ for any $k > l \geq k_{\footnotesize \rm statio}$. Using Lemma \ref{L:SubgradientBound} (recalling that $k_{\footnotesize \rm statio} \geq k_{\footnotesize \rm info}$), we get that \begin{align} \varphi'\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)\right) & \geq \frac{1}{\dist\left(0 , \partial {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right)\right)} \nonumber \\ & \geq \left(\sigma\norm{x^{k} - x^{k - 1}} + \sigma\norm{x^{k - 1} - x^{k - 2}}\right)^{-1}, \label{T:FiniteLength:3} \end{align} where $\sigma = \max \left\{ \sigma_{1} , \sigma_{2} \right\}$, with $\sigma_{1}$ and $\sigma_{2}$ as given in Lemma \ref{L:SubgradientBound}. On the other hand, from the concavity of $\varphi$ we get that \begin{align} \varphi\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)\right) - \varphi\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k + 1} , x^{k}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)\right) \geq & \nonumber \\ & \hspace{-5.5in} \varphi'\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)\right)\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k} , x^{k - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{k + 1} , x^{k}\right)\right).
\label{T:FiniteLength:4} \end{align} For convenience, we define for all $p , q \in \nn$ and ${\bar z}$ the following quantities \begin{equation*} \Delta_{p , q} : = \varphi\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{p} , x^{p - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)\right) - \varphi\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}} \left(z^{q} , x^{q - 1}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left({\bar z} , {\bar x}\right)\right). \end{equation*} Combining \eqref{T:FiniteLength:3} and \eqref{T:FiniteLength:4} and using Lemma \ref{L:SufficentDecrease} yields for any $k > l$ that \begin{equation} \label{T:FiniteLength:5} \Delta_{k , k + 1} \geq \frac{\tau\norm{x^{k + 1} - x^{k}}^{2}}{\sigma\left(\norm{x^{k} - x^{k - 1}} + \norm{x^{k - 1} - x^{k - 2}}\right)}, \end{equation} and hence \begin{equation*} \norm{x^{k + 1} - x^{k}}^{2} \leq \theta\Delta_{k , k + 1}\left(\norm{x^{k} - x^{k - 1}} + \norm{x^{k - 1} - x^{k - 2}}\right), \end{equation*} where $\theta = \sigma/\tau$. Using the fact that $2\sqrt{\alpha\beta} \leq \alpha + \beta$ for all $\alpha , \beta \geq 0$, we infer \begin{equation} \label{T:FiniteLength:6} 4\norm{x^{k + 1} - x^{k}} \leq \norm{x^{k} - x^{k - 1}} + \norm{x^{k - 1} - x^{k - 2}} + 4\theta \Delta_{k , k + 1}. \end{equation} Let us now prove that for any $k > l$ the following inequality holds \begin{equation*} 2\sum_{i = l + 1}^{k} \norm{x^{i + 1} - x^{i}} \leq 2\norm{x^{l + 1} - x^{l}} + \norm{x^{l} - x^{l - 1}} + 4\theta\Delta_{l + 1 , k + 1}. \end{equation*} Summing up \eqref{T:FiniteLength:6} for $i = l + 1 , \ldots , k$ yields \begin{align*} 4\sum_{i = l + 1}^{k} \norm{x^{i + 1} - x^{i}} & \leq \sum_{i = l + 1}^{k} \norm{x^{i} - x^{i - 1}} + \sum_{i = l + 1}^{k} \norm{x^{i - 1} - x^{i - 2}} + 4\theta\sum_{i = l + 1}^{k} \Delta_{i , i + 1} \\ & \leq \sum_{i = l + 1}^{k} \norm{x^{i + 1} - x^{i}} + \norm{x^{l + 1} - x^{l}} + 4\theta\sum_{i = l + 1}^{k} \Delta_{i , i + 1} \\ & + \sum_{i = l + 1}^{k} \norm{x^{i + 1} - x^{i}} + \norm{x^{l + 1} - x^{l}} + \norm{x^{l} - x^{l - 1}} \\ & = 2\sum_{i = l + 1}^{k} \norm{x^{i + 1} - x^{i}} + 2\norm{x^{l + 1} - x^{l}} + \norm{x^{l} - x^{l - 1}} + 4\theta\Delta_{l + 1 , k + 1}, \end{align*} where the last equality follows from the fact that $\Delta_{p , q} + \Delta_{q , r} = \Delta_{p , r}$ for all $p , q , r \in \nn$. Since $\varphi \geq 0$, we thus have for any $k > l$ that \begin{equation*} 2\sum_{i = l + 1}^{k} \norm{x^{i + 1} - x^{i}} \leq 2\norm{x^{l + 1} - x^{l}} + \norm{x^{l} - x^{l - 1}} + 4\theta\varphi\left({\cal E}_{\beta_{k_{\footnotesize \rm statio}}}\left(z^{l + 1} , x^{l}\right) - {\cal E}_{\beta_{k_{\footnotesize \rm statio}}} \left({\bar z} , {\bar x}\right)\right). \end{equation*} Since the right-hand side of the inequality above does not depend on $k$ at all, it easily follows that the sequence $\Seq{x}{k}$ has finite length, that is, \begin{equation} \label{T:FiniteLength-Item1:6} \sum_{k = 1}^{\infty} \norm{x^{k + 1} - x^{k}} < \infty. \end{equation} This means that it is a Cauchy sequence and hence a convergent sequence. In addition, from \eqref{L:DualSequence:5} we also have \begin{equation*} \sum_{k = 1}^{\infty} \norm{y^{k + 1} - y^{k}} < \infty, \end{equation*} and thus $\Seq{y}{k}$ also has finite length and is therefore convergent.
Now, the multiplier Step 3.2 yields, for any $k \geq k_{\footnotesize \rm statio}$, that \begin{equation*} u^{k + 1} = F\left(x^{k + 1}\right) + \frac{1}{\rho_{k_{\footnotesize \rm statio}}}\left(y^{k} - y^{k + 1}\right). \end{equation*} Since $F$ is continuous, $\Seq{x}{k}$ is a convergent sequence and, thanks to Proposition \ref{P:WeakConv}, it follows that $\Seq{u}{k}$ is also a convergent sequence. From Theorem \ref{T:SubConv} it is then clear that $\left\{\left(z^{k} , x^{k - 1}\right) \right\}_{k \in \nn}$ converges to a critical point $\left({\bar x} , {\bar u} , {\bar y} , {\bar x}\right)$ of ${\cal E}_{\beta_{k_{\footnotesize \rm statio}}}$. We finally conclude from Proposition \ref{P:Crit} that ${\bar x}$ is a critical point of $f$. \section{Applications: specific schemes from {\bf ALBUM}.} \label{sec:variants} The generic scheme {\bf ALBUM} encompasses interesting Lagrangian-based methods. First recall that in any Lagrangian-based method, the multiplier update is always given by an {\em explicit} formula (see \eqref{GenericAdap:MultiStep}): \begin{equation*} y^{k + 1} = y^{k} + \rho_{k}\left(F\left(x^{k + 1}\right) - u^{k + 1}\right). \end{equation*} Thus, the main computational and algorithmic issues which emerge from {\bf ALBUM} depend on the way we define the Lagrangian algorithmic map ${\cal A}_{\rho}$ to compute the primal step. In general, any minimization algorithm can be used at this stage. We focus on the description of two fundamental types of maps ${\cal A}_{\rho}$, yet we note that other variants can also be conceived, depending on the problem's data information and the structure at hand. This point will be further developed below in Section \ref{other}. \subsection{Two fundamental instances of ${\cal A}_{\rho}$ and the corresponding {\bf ALBUM}.} Given a triple $\left(x^{k} , u^{k} , y^{k}\right)$ we compute the next primal variables $x^{k + 1}$ and $u^{k + 1}$ in {\bf ALBUM} via the algorithmic map ${\cal A}_{\rho}$ given by either one of the following minimization schemes: \begin{itemize} \item {\bf ALBUM 1} -- {\em Joint Minimization $\equiv$ Proximal Multipliers Method \cite{R1976}} \begin{equation} \label{AAA-PMM} \left(x^{k + 1} , u^{k + 1}\right) \in \mathrm{argmin}_{(x , u)} \left\{ \Laug_{\rho_{k}}\left(x , u , y^{k}\right) + \frac{\mu}{2}\norm{x - x^{k}}^{2} \right\}, \quad (\mu > 0). \end{equation} This simple scheme, which consists in minimizing a proximal counterpart of the augmented Lagrangian $\Laug_{\rho}$ jointly with respect to both primal variables $x$ and $u$, is nothing else but the classical dynamic of the Proximal Method of Multipliers (PMM) of Rockafellar \cite{R1976}. \item {\bf ALBUM 2 } -- {\em Alternating Minimization (aka Gauss-Seidel) $\equiv$ Proximal ADM \cite{GLT1989}} Update the variables $x$ and $u$ in an alternating fashion as follows: \begin{align} u^{k + 1} & \in \mathrm{argmin}_{u} \Laug_{\rho_{k}}\left(x^{k} , u , y^{k}\right), \label{AAA-ADPMM:Step1} \\ x^{k + 1} & \in \mathrm{argmin}_{x} \left\{ \Laug_{\rho_{k}}\left(x , u^{k + 1} , y^{k}\right) + \frac{\mu}{2}\norm{x - x^{k}}^{2} \right\}, \quad (\mu > 0). \label{AAA-ADPMM:Step2} \end{align} \end{itemize} \begin{remark} \label{albums} \begin{itemize} \item[(i)] Note that in the above two schemes the proximal regularization term was added only for the primal variable $x$ of the augmented Lagrangian, since by the construction of $\Laug_{\rho}$ (see \eqref{D:AugLAc}), the primal variable $u$ already admits a built-in proximal term.
\item[(ii)] Also, note that the flexibility of {\bf ALBUM} provides potential for further studies within other strategies or variants that could be conceived and developed in future work, {\em e.g.}, adding a proximal regularization term for $u$ around $u^{k}$ and performing a subgradient step to determine the next point $u^{k + 1}$; or dropping one of the proximal regularization terms in exchange for other assumptions on the problem's data, see Section \ref{other} for the latter situation. \end{itemize} \end{remark} \medskip \begin{remark}[Tractability of the subproblems] \label{tract} Although the practical aspects involving implementation are beyond the scope of this work, it is important to discuss some of these issues. In this regard we comment on the general practicability of the steps of {\bf ALBUM 2}, whose alternating structure is often more favorable toward implementation. Recall that {\bf ALBUM 2} features a simple dual step and two primal steps \`a la Gauss-Seidel, one with respect to $u$ and one with respect to $x$; we discuss them below. \medskip \begin{itemize} \item[(i)] As already mentioned, the $u$-step, defined through \eqref{AAA-ADPMM:Step1}, reduces to the computation of the proximal mapping of the function $h$. Thus, this step can be efficiently computed when the proximal map of $h$ is accessible, {\em i.e.}, via an explicit formula or via simple computations, see for instance \cite{LT14,BST2014,BH2016} for interesting examples. \item[(ii)] The second subproblem, namely the $x$-step, is more involved. Let us discuss two protocols for solving this step approximately. For simplicity, suppose that $f_{0} \equiv 0$. Then, the step \eqref{AAA-ADPMM:Step2} reduces to solving an {\em unconstrained Nonlinear Least Squares problem}, NLS for short. Therefore, the proposed Lagrangian methodology, which reduces the very general constrained nonlinear optimization model (CM) to sequentially solving unconstrained NLS subproblems, provides interesting future research avenues, whereby fundamental methods for NLS could be considered and exploited to analyze inexact variants. Indeed, NLS problems are central in scientific computation, and even though these are nonconvex problems, there exist two well-known fundamental methods: Gauss-Newton and Levenberg-Marquardt, including many of their variants, which address this key computational problem within a very large body of literature, see {\em e.g.}, \cite{B1996-B,C2009-B}; see also the interesting work \cite{NLS}, where SDP relaxations are shown to find global solutions of some unconstrained NLS of polynomial type. Another approach to tackle the $x$-step is to approximate it through convex subproblems, which can then be efficiently solved. For this, we refer the reader to Section \ref{other} where we give further insights into this question, and we also introduce a new and easily implementable version of {\bf ALBUM 2} for (CM-L) problems. \end{itemize} \end{remark} \subsection{Convergence results for {\bf ALBUM 1} and {\bf ALBUM 2}.} To apply our main results (\emph{cf.\ } Section \ref{Sec:ALBUM}), as previously explained, we first need to verify that \emph{joint minimization} and \emph{alternating minimization} satisfy the two conditions of Definition \ref{D:LagAlg}, {\em i.e.}, that they are Lagrangian algorithmic maps. Recall that, following our notations, for a given point $\xi := \xi^{k}$ at iteration $k$, the next point $\xi^{+}$ stands for $\xi^{k + 1}$.
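For orientation, before verifying these conditions, we sketch one iteration of each scheme in Python-like pseudocode (a sketch only: \texttt{argmin\_joint}, \texttt{argmin\_u}, \texttt{argmin\_x}, \texttt{L\_aug}, \texttt{F} and \texttt{sqnorm} are abstract placeholders for the subproblem solvers and problem data, and are not part of the schemes' formal specification):
\begin{verbatim}
# One iteration of ALBUM 1 and ALBUM 2 (sketch; solvers are placeholders).
def album1_step(x, u, y, rho, mu):
    # Joint minimization (Proximal Method of Multipliers).
    x_new, u_new = argmin_joint(
        lambda xx, uu: L_aug(xx, uu, y, rho) + 0.5 * mu * sqnorm(xx - x))
    y_new = y + rho * (F(x_new) - u_new)   # explicit multiplier update
    return x_new, u_new, y_new

def album2_step(x, u, y, rho, mu):
    # Alternating (Gauss-Seidel) minimization.
    u_new = argmin_u(lambda uu: L_aug(x, uu, y, rho))   # a prox step on h
    x_new = argmin_x(
        lambda xx: L_aug(xx, u_new, y, rho) + 0.5 * mu * sqnorm(xx - x))
    y_new = y + rho * (F(x_new) - u_new)   # explicit multiplier update
    return x_new, u_new, y_new
\end{verbatim}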
\begin{itemize} \item {\bf ALBUM 1} -- {\em Joint Minimization} From the choice of ${\cal A}_{\rho}$ (see (\ref{AAA-PMM})) we immediately get \begin{equation*} \Laug_{\rho}\left(x^{+} , u^{+} , y\right) + \frac{\mu}{2}\norm{x^{+} - x}^{2} \leq \Laug_{\rho} \left(x , u , y\right), \end{equation*} showing that Definition \ref{D:LagAlg}(i) holds true with $a = \mu$. Moreover, we also obtain \begin{equation}\label{optl} \left(0 , 0\right) \in \left(\nabla_{x} \Laug_{\rho}\left(x^{+} , u^{+} , y\right) + \mu \left(x^{+} - x\right) , \partial_{u} \Laug_{\rho}\left(x^{+} , u^{+} , y\right)\right), \end{equation} hence it follows that Definition \ref{D:LagAlg}(ii) immediately holds true with $b = \mu$. \item {\bf ALBUM 2} -- {\em Alternating Minimization} Thanks to the choice of ${\cal A}_{\rho}$, we get from \eqref{AAA-ADPMM:Step1} that $\Laug_{\rho}\left(x , u^{+} , y\right) \leq \Laug_{\rho}\left(x , u , y\right)$ and from \eqref{AAA-ADPMM:Step2} that $\Laug_{\rho}\left(x^{+} , u^{+} , y\right) + \frac{\mu}{2} \norm{x^{+} - x}^{2} \leq \Laug_{\rho}\left(x , u^+ , y\right)$. Combining both inequalities shows that Definition \ref{D:LagAlg}(i) holds true with $a = \mu$. Moreover, as before, it also follows immediately that Definition \ref{D:LagAlg}(ii) holds true with $b = \mu$. \end{itemize} \medskip We will now show that both {\bf ALBUM 1} and {\bf ALBUM 2} generate Lagrangian sequences $\Seq{z}{k}$. To this end we have to verify that conditions {\bf C3} and {\bf C4} hold true for both schemes. \medskip First, for {\bf ALBUM 1} we obtain from \eqref{AAA-PMM} (\emph{cf.\ } \eqref{optl}) that $0 =: v^{k + 1} \in \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right)$, and hence condition {\bf C3} holds true with any $c > 0$. The next result shows that condition {\bf C3} also holds true for {\bf ALBUM 2}. \begin{proposition} \label{L:Cond3ADPMM} Let $\Seq{z}{k}$ be a sequence generated by {\bf ALBUM 2} which is assumed to be bounded. Then there exists a positive constant $c$ such that, for all $k \geq k_{\footnotesize \rm statio}$, one can find $v^{k + 1} \in \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right)$ with \begin{equation*} \norm{v^{k + 1}} \leq c\norm{x^{k + 1} - x^{k}}. \end{equation*} \end{proposition} \begin{proof} Since $\Seq{x}{k}$ is bounded and $\nabla F$ is Lipschitz continuous on $\Zone$ (by Assumption \ref{AssumptionB}(ii)), it follows that there exists $B > 0$ such that \begin{equation*} \sup_{k \geq k_{\footnotesize \rm info}} \norm{\nabla F\left(x^{k}\right)} \leq B. \end{equation*} From \eqref{AAA-ADPMM:Step1} we get that \begin{equation*} 0 \in \partial_{u} \Laug_{\rho_{k}}\left(x^{k} , u^{k + 1} , y^{k}\right). \end{equation*} Using the definition of $\Laug_{\rho}$ (see \eqref{D:AugLAc}) we obtain that \begin{equation*} \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) = \partial_{u} \Laug_{\rho_{k}}\left(x^{k} , u^{k + 1} , y^{k}\right) + \rho_{k}\left(F\left(x^{k}\right) - F \left(x^{k + 1}\right)\right).
\end{equation*} Therefore, using the inclusion just above, we obtain for all $k \in \nn$ that \begin{equation*} v^{k + 1} \equiv \rho_{k}\left(F\left(x^{k}\right) - F\left(x^{k + 1}\right)\right) \in \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right), \end{equation*} and \begin{equation*} \norm{v^{k + 1}} = \rho_{k}\norm{F\left(x^{k + 1}\right) - F\left(x^{k}\right)} \leq \rho_{k_{\footnotesize \rm statio}}B\norm{x^{k + 1} - x^{k}}, \end{equation*} where the last inequality follows from the Mean Value Theorem\footnote{Recall that $\norm{F\left(u \right) - F\left(v\right)} \leq \sup_{\theta \in [0 , 1]} \norm{\nabla F\left(v + \theta\left(u - v \right)\right)}\norm{u - v}$, \cite[p. 69]{OR1970-B}} and the fact that $\rho_{k} \leq \rho_{k_{\footnotesize \rm statio}}$ for all $k \geq k_{\footnotesize \rm statio}$ (see Lemma \ref{L:SufficentDecrease}). This proves that condition {\bf C3} holds true with $c = \rho_{k_{\footnotesize \rm statio}}B$. \end{proof} \medskip Having established that the three conditions {\bf C1}, {\bf C2} and {\bf C3} of the basic methodology hold, in order to apply our main convergence results to {\bf ALBUM 1} and {\bf ALBUM 2}, it remains to verify the validity of condition {\bf C4} for $h$. This is done next. \begin{proposition} \label{P:ALBUM12C4} Let $\Seq{z}{k}$ be a sequence generated by either {\bf ALBUM 1} or {\bf ALBUM 2}, which is assumed to be bounded. Let ${\bar z}$ be the limit of a subsequence $\left\{ z^{k} \right\}_{k \in{\cal K}}$ of $\Seq{z}{k}$. Then $\limsup_{k \in {\cal K}} h\left(u^{k}\right) \leq h\left({\bar u}\right)$. \end{proposition} \begin{proof} The sequence $\Seq{z}{k}$ is bounded and therefore there exists a subsequence $\left\{ z^{m_{k}} \right\}_{k \in \nn}$ which converges to ${\bar z} = \left({\bar x} , {\bar u} , {\bar y}\right)$. \medskip For {\bf ALBUM 1}: from the joint minimization step \eqref{AAA-PMM} we have for all $k \geq k_{\footnotesize \rm statio}$ that \begin{equation*} \Laug_{\rho_{k_{\footnotesize \rm statio}}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) + \frac{\mu}{2}\norm{x^{k + 1} - x^{k}}^{2} \leq \Laug_{\rho_{k_{\footnotesize \rm statio}}}\left({\bar x} , {\bar u} , y^{k}\right) + \frac{\mu}{2} \norm{{\bar x} - x^{k}}^{2}. \end{equation*} We now substitute $m_{k} - 1$ for $k$ and obtain from the definition of $\Laug_{\rho}$ (see \eqref{D:AugLAc}) that \begin{align} f_{0}\left(x^{m_{k}}\right) + h\left(u^{m_{k}}\right) & + \act{y^{m_{k} - 1} , F\left(x^{m_{k}} \right) - F\left({\bar x}\right)} + \act{y^{m_{k} - 1} , {\bar u} - u^{m_{k}}} \nonumber \\ & + \frac{\rho_{k_{\footnotesize \rm statio}}}{2}\norm{F\left(x^{m_{k}}\right) - u^{m_{k}}}^{2}\leq f_{0}\left({\bar x}\right) + h\left({\bar u}\right) + \frac{\rho_{k_{\footnotesize \rm statio}}}{2}\norm{F\left({\bar x}\right) - {\bar u}}^{2} \notag \\ & + \frac{\mu}{2}\norm{{\bar x} - x^{m_{k} - 1}}^{2}. \label{h1} \end{align} Likewise, for {\bf ALBUM 2}, from the $u$-step (see \eqref{AAA-ADPMM:Step1}), we have for all $k \geq k_{\footnotesize \rm statio}$ that \begin{equation*} \Laug_{\rho_{k_{\footnotesize \rm statio}}}\left(x^{k} , u^{k + 1} , y^{k}\right)\leq \Laug_{\rho_{k_{\footnotesize \rm statio}}}\left(x^{k} , {\bar u} , y^{k}\right).
\end{equation*} We now substitute $m_{k} - 1$ for $k$ and obtain from the definition of $\Laug_{\rho}$ (see \eqref{D:AugLAc}) that \begin{equation}\label{h2} h\left(u^{m_{k}}\right) + \act{y^{m_{k} - 1} , u^{m_{k}} - {\bar u}} + \frac{\rho_{k_{\footnotesize \rm statio}}}{2} \norm{F\left(x^{m_{k} - 1}\right) - u^{m_{k}}}^{2} \leq h\left({\bar u}\right) + \frac{\rho_{k_{\footnotesize \rm statio}}}{2}\norm{F\left(x^{m_{k} - 1}\right) - {\bar u}}^{2}. \end{equation} For each of the just derived inequalities \eqref{h1} and \eqref{h2}, letting $k$ go to $\infty$ and using the continuity of $f_{0}$ and $F$ (see Assumption \ref{AssumptionB}(ii) and (iii)), together with Proposition \ref{P:WeakConv} (for the case of \eqref{h1}), yields in both cases that \begin{equation*} \limsup_{k \rightarrow \infty} h\left(u^{m_{k}}\right) \leq h\left({\bar u}\right), \end{equation*} and the proof is completed. \end{proof} \medskip To summarize, we have therefore shown that the two main schemes {\bf ALBUM 1} and {\bf ALBUM 2} produce \emph{Lagrangian sequences}, and hence our convergence results, Theorems \ref{T:SubConv} and \ref{T:GlobConv}, are applicable. Observe that not only do we prove that these well-known methods converge in the absence of convexity for the general nonlinear composite model (CM), but we also show how to apply them under weak assumptions through the use of a new adaptive regime. \subsection{Towards implementable variants of {\bf ALBUM}.} \label{other} To further illustrate the potential benefits and generality of our approach, we now consider additional specific instances and variants of {\bf ALBUM} under other relevant assumptions on data information which occur in many interesting applications. This allows us to extend some recent results in the literature and even to propose a new scheme. \medskip {\bf The classical method of alternating direction of multipliers (ADM).} Consider the limiting case of {\bf ALBUM 2} obtained with $\mu \equiv 0$; we then recover the classical Alternating Direction of Multipliers (ADM) \cite{GLT1989}. Under the additional assumption that the augmented Lagrangian $x \rightarrow \Laug_{\rho}\left(x , u , y\right)$ is $\sigma$-strongly convex for any fixed $u , y \in \rr^{m}$, we can obtain global convergence of the ADM to critical points of the {\em nonlinear} nonconvex composite model (CM). Indeed, in this case {\bf ALBUM 2} yields (recall \eqref{AAA-ADPMM:Step1} and \eqref{AAA-ADPMM:Step2}) that \begin{equation} \label{ALBUM2SC:Step1} \Laug_{\rho}\left(x , u^{+} , y\right) \leq \Laug_{\rho}\left(x , u , y\right), \end{equation} and \begin{equation} \label{ALBUM2SC:Step2} \nabla_{x} \Laug_{\rho}\left(x^{+} , u^{+} , y\right) = 0 . \end{equation} Now, by the $\sigma$-strong convexity of $x \rightarrow \Laug_{\rho}\left(x , u^{+} , y\right)$ together with \eqref{ALBUM2SC:Step2} we have that \begin{equation*} \Laug_{\rho}\left(x^{+} , u^{+} , y\right) + \frac{\sigma}{2}\norm{x^{+} - x}^{2} \leq \Laug_{\rho} \left(x , u^{+} , y\right), \end{equation*} and hence from \eqref{ALBUM2SC:Step1} it follows that Definition \ref{D:LagAlg}(i) holds true with $a = \sigma$. Moreover, we also get that $\norm{\nabla_{x} \Laug_{\rho}\left(x^{+} , u^{+} , y\right)} = 0 \leq b\norm{x^{+} - x}$, showing that Definition \ref{D:LagAlg}(ii) immediately holds true with any $b > 0$. Now it is easy to see that the proofs of conditions {\bf C3} and {\bf C4}, as carried out above for {\bf ALBUM 2} with $\mu > 0$, remain valid for the case $\mu = 0$.
Thus our convergence results apply, and extend the recent result \cite[Theorem 4]{LP2015}, which uses the same assumption on the Lagrangian, but was valid only for the linear case ({\em i.e.}, $F\left(x\right) \equiv Fx$). Furthermore, for the linear case with a full row rank matrix $F$, we have $\gamma =\sqrt{\lambda_{\min}(FF^{T})}>0$, and since $a = \sigma$ and $b$ can be any positive number ({\em e.g.}, we can set $b = 1$), we immediately obtain the threshold value for $\rho$ (see \eqref{rholin} in Remark \ref{r:lin}) that warrants our convergence results: \begin{equation*} \rho > \frac{4 ((L(f_{0}) + 1)^{2} +1)}{\sigma\lambda_{\min}(FF^{T})}. \end{equation*} \medskip {\bf Tractable convex subproblems for ALBUM 2.} In relation to Remark \ref{tract}, we focus on the tractability of the $x$-step (as already mentioned, the $u$-step is easier for any proximable $h$). We illustrate here a specific but fundamental aspect of our family of methods through the important case of {\bf ALBUM 2}. In addition to the standing assumptions, we assume that $f_{0}$ is $C^{2}$ with Lipschitz continuous gradient (for simplicity) and $F$ is linear (so that the information zone is the whole space, \emph{cf.\ } Remark \ref{r:info}). The constant $\rho > 0$ can thus be determined. We observe that for a fixed couple $\left(u , y\right)$, the function $\Laug_{\rho}\left(\cdot , u , y\right)$ is $C^{2}$ whenever $u$ is in $\dom h$ and that its Hessian matrix is given by $x \rightarrow \nabla^{2} f_{0}\left(x\right) + \rho F^{T}F$. As a consequence of the Lipschitz continuity of $\nabla f_{0}$ we have that: \begin{equation} \sup _{(x , u , y) \in \rr^{n} \times \rr^{m} \times \dom{h}} \norm{\nabla_{x}^{2} \Laug_{\rho} \left(x , u , y\right)} \leq L(f_{0}) + \rho\lambda_{\max}(FF^{T}). \end{equation} Thus, with $\mu = L(f_{0}) + \rho\lambda_{\max}(FF^{T})$, the $x$-step in {\bf ALBUM 2} consists in minimizing a {\em convex function} $x \rightarrow \Laug_{\rho}\left(x , u^{k + 1} , y^{k}\right) + \left(\mu/2\right)\norm{x - x^{k}}^{2}$ with {\em known Lipschitz continuous gradient}. \medskip {\bf Solving general semi-algebraic feasibility problems with ALBUM 2.} The specialization of {\bf ALBUM 2} to the general feasibility problem described in Example \ref{feas} provides a new parallel projection method; the details of the easy derivation of the corresponding steps in this case are left to the reader. In view of our general results, the penalty parameter $\rho > 0$ can be determined, and no other assumption than semi-algebraicity of the subsets $S_{i}$, $i = 1 , 2, \ldots , p$, is necessary to obtain global convergence of the method (under our classical boundedness assumptions). \medskip {\bf A simple explicit algorithm: Proximal Linearized Alternating Minimization.} We consider here a proximal linearized instance of {\bf ALBUM 2} with proven global convergence results, which seems to be new in the literature for the nonconvex composite model. Our setting here is confined to the particular, yet interesting and important, case where in the model (CM): \medskip \begin{itemize} \item The function $f_{0}$ has an $L(f_{0})$-Lipschitz continuous gradient on $\rr^{n}$. \item The mapping $F$ is linear, namely $F\left(x\right) \equiv Fx$ for all $x \in \rr^{n}$, for some matrix $F \in \rr^{m \times n}$ with full row rank. \end{itemize} Furthermore, we additionally assume that $\kappa(FF^{T}) < 2$, where $\kappa(A)$ denotes the condition number of a square matrix $A$, namely the ratio $\lambda_{\max}(A) / \lambda_{\min}(A)$.
\medskip Note that this assumption always holds true whenever $FF^{T}$ or $F^{T}F$ is the identity matrix, which often occurs in applications, {\em e.g.}, in some problems in signal recovery \cite{BBC2011}. \medskip Recall (cf. Remark \ref{r:info}) that under the above hypothesis on the problem's data, Assumption \ref{AssumptionB} holds with ${\cal Z} \equiv \rr^{n}$, and we also have that $\gamma = \sqrt{\lambda_{\min}(FF^T)} > 0$. The augmented Lagrangian in this case reads (\emph{cf.\ } \eqref{D:AugLAc}), for $\rho > 0$, as follows \begin{equation*} \Laug_{\rho}\left (x , u , y\right) := f_{0}\left(x\right) + h\left(u\right) + \act{y , Fx - u} + \frac{\rho}{2}\norm{Fx - u}^{2}. \end{equation*} We then consider approximating the $x$-step in {\bf ALBUM 2} (leaving the $u$-step untouched) through the following scheme: \medskip \begin{itemize} \item {\bf ALBUM 3} -- {\em Proximal Linearized Alternating Minimization} \end{itemize} \begin{align} u^{k + 1} & \in \mathrm{argmin}_{u} \Laug_{\rho_{k}}\left(x^{k} , u , y^{k}\right), \label{ALBUM3:1} \\ x^{k + 1} & \in \mathrm{argmin}_{x} \left\{ \act{x - x^{k} , \nabla_{x} \Laug_{\rho_{k}}\left(x^{k} , u^{k + 1} , y^{k}\right)} + \frac{\mu}{2}\norm{x - x^{k}}^{2} \right\}, \quad (\mu > 0). \label{ALBUM3:2} \end{align} Thus, the $x$-step consists of first linearizing the augmented Lagrangian around a given point and adding a proximal term, which is a common strategy to generate a simpler approximate step (see {\em e.g.}, \cite{BST2014}), and hence \eqref{ALBUM3:2} is nothing else but {\em one shot} of an explicit gradient step for minimizing $\Laug_{\rho_{k}}\left (x , u^{k+1} , y^{k}\right)$, with an easy explicit formula. \medskip To apply the convergence results of Section \ref{Sec:ALBUM}, we first need to verify that the corresponding algorithmic map ${\cal A}_{\rho}$ of {\bf ALBUM 3} satisfies the two conditions of Definition \ref{D:LagAlg}, {\em i.e.}, is a Lagrangian algorithmic map. For that purpose, first note that given a couple $\left(u , y\right)$, the gradient of $\Laug_{\rho}\left(x , u , y\right)$ with respect to $x$ is the mapping $x \rightarrow \nabla f_{0}\left(x\right) + F^{T}y + \rho F^{T}\left(Fx - u\right)$, which is an $L$-Lipschitz continuous mapping, with $L:= L(f_{0}) + \rho\norm{F}^{2}$. Invoking the well-known Descent Lemma, it follows that condition {\bf C1} holds with $a = \mu - L/2$. However, observe that, contrary to {\bf ALBUM 1} and {\bf 2}, the constant $a$ depends on $\rho$ through $L$, and $a>0$ will be warranted thanks to Lemma \ref{L:ALBUM3Rho} given below. \medskip Next, using the steps of the corresponding algorithmic map ${\cal A}_{\rho}$, together with the fact that $f_{0}$ admits an $L(f_{0})$-Lipschitz continuous gradient, one easily verifies that for any $k \geq 0$, \begin{align} \label{L:C2ALBUM3} \norm{\nabla_{x} \Laug_{\rho}\left(x^{+} , u^{+} , y\right)} & \leq \norm{\nabla_{x} \Laug_{\rho} \left(x^{+} , u^{+} , y\right) - \nabla_{x} \Laug_{\rho}\left(x , u^{+} , y\right)} + \norm{\nabla_{x} \Laug_{\rho}\left(x , u^{+} , y\right)} \nonumber \\ & \leq \left(L(f_{0}) + \rho\norm{F}^{2} + \mu\right)\norm{x^{+} - x}. \end{align} This shows that condition {\bf C2} holds true with $b = L(f_{0}) + \rho\norm{F}^{2} + \mu$.
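For concreteness, we record the explicit formula alluded to above: the $x$-subproblem \eqref{ALBUM3:2} is a strongly convex quadratic in $x$, so by its first-order optimality condition the $x$-step reads \begin{equation*} x^{k + 1} = x^{k} - \frac{1}{\mu}\nabla_{x} \Laug_{\rho_{k}}\left(x^{k} , u^{k + 1} , y^{k}\right) = x^{k} - \frac{1}{\mu}\left(\nabla f_{0}\left(x^{k}\right) + F^{T}y^{k} + \rho_{k} F^{T}\left(Fx^{k} - u^{k + 1}\right)\right), \end{equation*} that is, one explicit gradient step on $\Laug_{\rho_{k}}\left(\cdot , u^{k + 1} , y^{k}\right)$ with step size $1/\mu$.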
In addition, condition {\bf C3} is immediate, since here the optimality condition of the $u$-step (see \eqref{ALBUM3:1}) implies that \begin{equation*} \partial_{u} \Laug_{\rho_{k}}\left(x^{k + 1} , u^{k + 1} , y^{k}\right) \ni v^{k + 1} = \rho F \left(x^{k} - x^{k + 1} \right) \; \Rightarrow \; \norm{v^{k + 1}} \leq \rho\norm{F}\norm{x^{k + 1} - x^{k}}, \end{equation*} showing that condition {\bf C3} holds with $c = \rho\norm{F}$. \medskip Finally, since the $u$-step in {\bf ALBUM 3} is identical to the one in {\bf ALBUM 2}, the statement and the proof of Proposition \ref{P:ALBUM12C4} hold in this case as well (see the part related to {\bf ALBUM 2}), and hence condition {\bf C4} holds true in this case too. \medskip Despite the fact that conditions {\bf C1--C4} are satisfied, it is important to realize that our general theorem does not apply at this stage, because $a$ and $b$ depend on $\rho$, and $a$ may become negative if $\rho$ is too large. In order to circumvent this difficulty and obtain the general convergence of the scheme (as in Theorems \ref{T:SubConv} and \ref{T:GlobConv}), it suffices to guarantee a sufficient descent of the Lyapunov function ${\cal E}_{\beta}$. For this we need that \eqref{descond} holds true (see Remark~\ref{liap}(b)) for a well-chosen couple $(\mu , \rho)$, that is, \begin{equation} \label{ParaDescGene} \frac{a}{2} - \frac{d_{1} + d_{2}}{\rho} > 0. \end{equation} For that purpose, let us first observe that a stronger version of Lemma \ref{L:DualSequence} can be derived: one just follows the same proof, exploiting the linearity of $F$, and notes that the boundedness assumption on the sequence of multipliers $\Seq{y}{k}$ is no longer needed in that case. We leave the details to the reader, and record this result below. \begin{lemma} \label{L:DualSequenceLinALBUM3} Let $\Seq{z}{k}$ be a Lagrangian sequence. Then, the following inequality holds true for any $k \geq 0$, \begin{equation} \label{L:DualSequenceLinALBUM3:0} \norm{y^{k + 1} - y^{k}}^{2} \leq d_{1}\norm{x^{k + 1} - x^{k}}^{2} + d_{2}\norm{x^{k} -x^{k - 1}}^{2}, \end{equation} where \begin{equation} \label{L:DualSequenceLinALBUM3:1} d_{1} = \frac{2\norm{{\cal M}}^{2}}{\lambda_{\min}(FF^{T})}, \quad d_{2} = \frac{2\left(L(f_{0}) + \norm{{\cal M}}\right)^{2}}{\lambda_{\min}(FF^{T})}, \end{equation} and ${\cal M} := \mu I_{n} - \rho F^{T}F$. \end{lemma} Equipped with this result, we now show that we can find positive constants $\rho$ and $\mu$, in terms of the problem's data, so that \eqref{ParaDescGene} holds, and hence our convergence results for {\bf ALBUM 3} (Theorems \ref{T:SubConv} and \ref{T:GlobConv}, with semi-algebraic data) apply. \begin{lemma}[Determining threshold value for $\rho$] \label{L:ALBUM3Rho} Let $F : \rr^{n} \rightarrow \rr^{m}$ be a linear mapping for which $\kappa(FF^{T}) < 2$. Let $\Seq{z}{k}$ be a sequence generated by {\bf ALBUM 3}. Then, there exists a constant ${\bar \rho}$ such that \eqref{ParaDescGene} holds for any $\rho > {\bar \rho}$ and $\mu \in \left(\mu_{1} , \mu_{2}\right)$ for some $\mu_{1} , \mu_{2} >0$, where ${\bar \rho}$, $\mu_{1}$ and $\mu_{2}$ are all given in terms of the problem's data $L(f_{0})$ and $\gamma$. \end{lemma} \begin{proof} For convenience we denote $\ell := L(f_{0})$.
Using Lemma \ref{L:DualSequenceLinALBUM3} and the fact that $a = \mu - \left(\ell + \rho\norm{F}^{2}\right)/2$, in order to satisfy \eqref{ParaDescGene}, we need to find $\rho > 0$ and $\mu > 0$ such that \begin{equation} \label{L:ALBUM3Rho:1} \frac{\mu - \frac{\ell + \rho\norm{F}^{2}}{2}}{2} - \frac{2\norm{{\cal M}}^{2} + 2\left(\ell + \norm{{\cal M}}\right)^{2}}{\rho\gamma^{2}} > 0. \end{equation} Rewriting this inequality yields the following equivalent one \begin{equation*} 16\norm{{\cal M}}^{2} + \rho\gamma^{2}\left(\ell + \rho\norm{F}^{2} - 2\mu\right) + 8\ell^{2} + 16\ell\norm{{\cal M}} < 0. \end{equation*} Since ${\cal M} = \mu I - \rho F^{T}F$ is symmetric, we have \begin{equation*} \norm{{\cal M}} = \lambda_{\max}({\cal M}) = \lambda_{\max}(\mu I - \rho F^{T}F) = \lambda_{\max}(\mu I) - \rho\lambda_{\min}(F^{T}F) = \mu - \rho\gamma^{2}, \end{equation*} where the last equality uses the fact that $\lambda_{\min}(F^{T}F)=\lambda_{\min}(FF^{T}) = \gamma^2$. Therefore, defining $t := \mu - \rho\gamma^{2} \equiv \norm{{\cal M}}$, and rearranging terms, the above inequality reduces to showing that \begin{equation} \label{psi} \psi\left(t\right):= 16t^{2} - 2\left(\rho\gamma^{2} - 8\ell\right)t + \rho\gamma^{2}\left(\ell + \rho\norm{F}^{2} - 2\rho\gamma^{2}\right) + 8\ell^{2} < 0. \end{equation} Computing the (reduced) discriminant $\Delta_{\psi}$ of the above quadratic function $\psi \left(\cdot\right)$ yields \begin{equation*} \Delta_{\psi} := \left(\rho\gamma^{2} - 8\ell\right)^{2} - 16\left(\rho\gamma^{2}\left(\ell + \rho\norm{F}^{2} - 2\rho\gamma^{2}\right) + 8\ell^{2}\right) = \rho^{2}\gamma^{2}\eta - 32\rho \gamma^{2}\ell - 64\ell^{2}, \end{equation*} where, thanks to our assumption $\kappa(FF^{T}) < 2$, we have $\eta := 33\gamma^{2} - 16\norm{F}^{2} > 0$. Therefore, \eqref{psi} holds (and hence so does \eqref{L:ALBUM3Rho:1}) if $\Delta_{\psi} > 0$ and $t_{1} < t < t_{2}$, where $t_{1}$ and $t_{2}$ are the zeros of $\psi\left(t \right)$. Some algebra then shows that the latter is satisfied with \begin{equation*} \rho > {\bar \rho}:= \frac{8\ell}{\eta\gamma}\left(2\gamma + \sqrt{4\gamma^{2} + \eta}\right), \end{equation*} and \begin{equation} \label{L:ALBUM3Rho:2} t_{1} \equiv \frac{\left(\rho\gamma^{2} - 8\ell\right) - \sqrt{\Delta_{\psi}}}{16} < t < \frac{\left(\rho\gamma^{2} - 8\ell\right) + \sqrt{\Delta_{\psi}}}{16} \equiv t_{2}. \end{equation} Moreover, since $\norm{{\cal M}} = \mu - \rho\gamma^{2} = t$, we must have $t \geq 0$, and indeed it is easy to check that $t_{1} >0$. Using the relation $\mu = t + \rho\gamma^{2}$, we can rewrite \eqref{L:ALBUM3Rho:2} as follows \begin{equation*} \mu_{1} \equiv \frac{\left(17\rho\gamma^{2} - 8\ell\right) - \sqrt{\Delta_{\psi}}}{16} < \mu < \frac{\left(17\rho\gamma^{2} - 8\ell\right) + \sqrt{\Delta_{\psi}}}{16} \equiv \mu_{2}, \end{equation*} and the proof is completed. \end{proof} \section*{Acknowledgments.} The research of J\'{e}r\^{o}me Bolte is sponsored by the Air Force Office of Scientific Research, Air Force Material Command, USAF, under grant number FA9550-15-1-0500 \& the FMJH Program Gaspard Monge in optimization and operations research. The research of Shoham Sabach was partially supported by the German Israel Foundation, GIF Grant G-1243-304.6/2014. The research of Marc Teboulle was partially supported by the Israel Science Foundation, ISF Grants 998/12 and 1844/16, and the German Israel Foundation, GIF Grant G-1243-304.6/2014. \bibliographystyle{informs2014}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,429
{"url":"http:\/\/opentradingsystem.com\/quantNotes\/Properties_of_averaged_Taylor_polynomial_.html","text":"I. Basic math.\n II. Pricing and Hedging.\n III. Explicit techniques.\n IV. Data Analysis.\n V. Implementation tools.\n VI. Basic Math II.\n VII. Implementation tools II.\n 1 Calculational Linear Algebra.\n 2 Wavelet Analysis.\n 3 Finite element method.\n 4 Construction of approximation spaces.\n A. Finite element.\n B. Averaged Taylor polynomial.\n a. Properties of averaged Taylor polynomial.\n b. Remainder of averaged Taylor decomposition.\n c. Estimates for remainder of averaged Taylor polynomial. Bramble-Hilbert lemma.\n d. Bounds for interpolation error. Homogeneity argument.\n C. Stable space splittings.\n D. Frames.\n E. Tensor product splitting.\n F. Sparse tensor product. Cure for curse of dimensionality.\n 5 Time discretization.\n 6 Variational inequalities.\n VIII. Bibliography\n Notation. Index. Contents.\n\n## Properties of averaged Taylor polynomial.\n\nroposition\n\n(Properties of averaged Taylor polynomial)\n\n1. Let be a bounded domain in . For any we have\n\n2. For we have\n\n3. For s.t. and we have\n\nProof\n\nOnly the statement needs a proof.\n\nFirst, we verify that for By the properties of derivative and structure of the formula it suffices to establish it for . We calculate At this point we make a change combined with renaming then the rules and transform into and thus\n\nNext, we verify for Finally, we extend the proof to by noting that is dense in and the part of this statement insures that the case extends to by taking an -convergent sequence of functions.\n\n Notation. Index. Contents.","date":"2018-01-18 13:19:05","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8160073161125183, \"perplexity\": 3196.6301701807147}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-05\/segments\/1516084887414.4\/warc\/CC-MAIN-20180118131245-20180118151245-00056.warc.gz\"}"}
Docter is a surname. Notable people with the surname include:

Pete Docter (born 1968), American film director, animator, screenwriter, and producer
Mary Docter (born 1961), American speed skater
Sarah Docter (born 1964), American speed skater
Tijn Docter (born 1972), Dutch actor
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,070
{"url":"https:\/\/physics.stackexchange.com\/questions\/90216\/where-does-the-loss-in-gravitational-energy-of-the-load-go-when-a-spring-is-pull","text":"Where does the loss in gravitational energy of the load go when a spring is pulled?\n\nA mass spring system is in equilibrium. If I pull on the load by $x$ meters, the energy stored in the spring is (this is what is given in my book):\n\n$$E=\\frac12kx^2$$\n\nHowever, doesn't the load lose gravitational potential energy as it moves down? Where would this energy go? By conservation law, shouldn't the energy equation be: $$E_{stored}= \\frac12kx^2 + mgx$$\n\nIn short, where does the loss of gravitational potential energy (mgx) of the load get transferred to if it is not stored in the spring?\n\n(Referring to a vertical mass spring system)\n\nThe picture in my mind:\n\n\u2022 The wording of the problem sounds like they're really just asking for the change in elastic potential energy. Could you post the actual question text to confirm? \u2013\u00a0BMS Dec 14 '13 at 17:55\n\u2022 @BMS: it isn't an actual question... I just wanted to understand the formula and also to know where the loss in potential energy (mgx) of the load would go if it isn't stored in the spring... \u2013\u00a0Eliza Dec 14 '13 at 17:58\n\nThere are some very interesting subtleties here. Let's analyze the situation very carefully.\n\nLet's choose our system to consist of the block, spring, and Earth. By choosing the Earth and block to be in our system, we will have a change in gravitational potential energy.\n\nIn the beginning, the (massless) spring hangs vertically with a block of mass $m$ attached at the bottom. We could calculate how much the spring is stretched by equating the gravitational and spring forces ($kx_1=mg$) but we won't need this.\n\nNow, during the pulling process you describe, it's important to note that you are doing positive work on the system, which means that the energy in the system increases. It is tempting to say that the change in energy is zero, but this isn't the case for the system we've chosen.\n\nLet's use the work-energy theorem to answer your question of where the gravitational potential energy \"goes.\" $$\\underbrace{W_\\text{net, external}}_\\text{Positive}=\\Delta E_\\text{tot}=\\underbrace{\\Delta U_\\text{grav}}_\\text{Negative}+\\underbrace{\\Delta U_\\text{elastic}}_\\text{Positive}$$\n\nYes, the gravitational potential energy decreases. Where does it go? Well, the only other term that could (mathematically) compensate for this decrease in gravitational potential energy is the increase in elastic potential energy. But be careful with wording here. The spring is not storing gravitational potential energy; rather, gravitational potential energy was converted to elastic potential energy.\n\nAs a side note, since the left-hand side of the equation above is positive, the absolute value of $\\Delta U_\\text{elastic}$ is greater than that of $\\Delta U_\\text{grav}$. So, not only was the gravitational potential energy converted to elastic potential energy, the positive work you did on the system also adds to the increase in elastic potential energy.\n\n\u2022 so when the spring is pulled by X2 meters by an external agent, positive work done by the external agent AND the loss in gravitational potential energy of the load will be converted to elastic potential energy? \u2013\u00a0Eliza Dec 15 '13 at 17:56\n\u2022 That is my interpretation. \u2013\u00a0BMS Dec 15 '13 at 20:10\n\u2022 I finally get it. 
Just one more thing, if this was a horizontal mass-spring system, would the elastic potential energy stored in the spring just equal to the work done by the external agent (i.e. no complications involving gravitational potential energy)? \u2013\u00a0Eliza Dec 16 '13 at 12:03\n\u2022 @Eliza if $\\Delta K=0$ during the process, yes. If $\\Delta K \\ne0$, then some of the work \"goes into\" kinetic energy as well. \u2013\u00a0BMS Dec 16 '13 at 16:15\n\nThe difference in potential energy is due to different definitions of what $x=0$ means. Since from the perspective of the spring this would be when the spring is not compressed nor stretched (rest length). However in the case of a mass-spring system in a gravity field (assumed to be a constant acceleration, $g$) this position is often chosen to be the equilibrium position, so where the force of the spring is equal to the force of gravity. The difference between these position can be derived with the following equation, $$k\\Delta x=mg.$$ If you would substitute this into the potential energy equation you get: $$E=\\frac{1}{2}k\\left(x+\\Delta x\\right)^2=\\frac{1}{2}k\\left(x^2+2x\\Delta x+\\Delta x^2\\right)=\\frac{1}{2}kx^2+mxg+\\frac{1}{2}km^2g^2,$$ where $x$ is relative to the rest length, so the position relative to the equilibrium position would be $x_{eq}=x+\\Delta x$.\n\nYou can also remove the last therm ($\\frac{1}{2}km^2g^2$) since it is independent of $x$, because you are free to choose the position of zero potential energy since you only look at changes in potential energy.\n\n\u2022 The term mgx... where is this change in gravitational potential energy stored in? \u2013\u00a0Eliza Dec 15 '13 at 3:52\n\u2022 @Eliza: I am not sure what you mean, but $E=\\frac{1}{2}kx_{eq}^2$ would represent the sum of potential energy of the spring and gravity. \u2013\u00a0fibonatic Dec 15 '13 at 4:33\n\u2022 Could you please look at my edit. I have made my question clearer using a diagram \u2013\u00a0Eliza Dec 15 '13 at 4:58\n\u2022 The work done by the external force would be equal to: $$W=\\frac{1}{2}k\\left((x_1+x_2)^2-x_1^2\\right)-mgx_2$$ But $x_1=\\frac{mg}{k}$, which simplifies the work done to $$W=\\frac{1}{2}kx_2^2$$ So this is what I mean by $x$ relative to the equilibrium position. \u2013\u00a0fibonatic Dec 15 '13 at 5:17\n\u2022 Like I showed in my previous comment, the decrease of gravitational potential energy will reduce the amount of work done, since gravity acts in the same direction as the external force. So you could say that the work done by the external force and gravity together wil be equal to minus (in the other direction) the work done by the spring. So you could say that the spring does store the change of gravitational potential energy. \u2013\u00a0fibonatic Dec 16 '13 at 12:24\n\nThe energy stored in the spring is the one that is give $\\frac{1}{2}{kx^2}$. As you mention, by conservation of energy there also a reduction of potential energy, but that reduction is not energy that it's stored by the spring but the complete change of energy of the whole system.\n\nTake into consideration, that the problem just states what's happening with the spring, for example in a horizontal configuration where there would be no gravitational force applied to the mass\/spring system.\n\n\u2022 :I am referring to a vertical mass spring system. Also, what do you mean by the complete change of energy of the whole system? \u2013\u00a0Eliza Dec 14 '13 at 17:18\n\nThe potential energy turns into kinetic energy. 
It makes causes a simple harmonic motion. Whenn you release the mass, it means a free fall. It never comes at rest. Dissipative forces neglected. To stop it another force must be applied, which results in loss of energy.","date":"2020-11-24 12:42:32","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7759127020835876, \"perplexity\": 160.12922888776328}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-50\/segments\/1606141176256.21\/warc\/CC-MAIN-20201124111924-20201124141924-00453.warc.gz\"}"}
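For readers who want to check the energy bookkeeping numerically, here is a small Python sketch (the values of k, m and x2 are purely illustrative and not taken from the thread):

    # Pull a hanging mass down by x2 from its equilibrium position.
    k, m, g = 100.0, 1.0, 9.8      # spring constant [N/m], mass [kg], gravity [m/s^2]
    x1 = m * g / k                 # initial stretch at equilibrium
    x2 = 0.05                      # extra pull [m]

    dU_elastic = 0.5 * k * ((x1 + x2)**2 - x1**2)   # elastic PE gained
    dU_grav = -m * g * x2                           # gravitational PE lost
    W_external = dU_elastic + dU_grav               # work done by the hand

    print(W_external)           # approximately 0.125 J
    print(0.5 * k * x2**2)      # approximately 0.125 J, matching W = (1/2) k x2^2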
Negezhma (Негежма) is an abolished village on the territory of the Svirstroy urban settlement in Lodeynopolsky District, Leningrad Oblast.

Name
The name means "land unfit for living".

History
According to records from 1563, the site of the village was occupied by a pogost with the Church of the Nativity of the Theotokos and the households of the priest and the clergy. Before the church was built, a chapel stood on the pogost. Among the clergy, the records mention the village priest Gavrilo, the clerk Nifontik, and the sexton Fetko. In 1925, a Stone Age settlement site was discovered near the village. The first phase of the development of comb ceramics found at this site dates to the period preceding the Ladoga transgression, or to its very beginning (2800-1700 BC).

Geography
The village was located at the point where the Negezhma River flows into the Svir, in what is now the northern part of Lodeynopolsky District.
{ "redpajama_set_name": "RedPajamaWikipedia" }
6,689
\section{Details of the Neural Networks $f$ and $\sigma^2$} \label{sec:detailsofournetwork} Figure \ref{fig:highlevel} shows, at a high level, the neural network architectures that define the Gaussian likelihood function of the proposed framework. Architectural details of the residual blocks~\cite{He2016Resnet} used in the deep unrolled network $f$ are illustrated in Figure \ref{fig:residual}. Mardani \emph{et al.}~\cite{Mardani2018NPGD} used the same architecture for the residual blocks to develop a proximal gradient descent-based deep unrolling method for MRI reconstruction. Architectural details of the neural network $\sigma^2$, which models the diagonal entries of the covariance matrix of the Gaussian likelihood function, are shown in Figure \ref{fig:unet}. We emphasize that the proposed framework uses MC Dropout~\cite{Srivastava2014Dropout} to approximate the posterior distribution of the parameters of the likelihood function, which requires the use of dropout~\cite{Srivastava2014Dropout} after the convolutional layers of the neural networks $f$ and $\sigma^2$. We use dropout after every convolutional layer, except the first convolutional layer, of every residual block to obtain the dropout-added neural network $\bar{f}$. Similarly, to obtain the dropout-added neural network $\bar{\sigma}^2$, we use dropout after every convolutional layer of the neural network $\sigma^2$. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/architecture.png} \caption{High-level overview of the neural networks that define the Gaussian likelihood function of the proposed framework.} \label{fig:highlevel} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/residual_block.png} \caption{Details of a residual block $D_{\gamma_k}$ used in the neural network $f$ for CT reconstruction. For MRI reconstruction, the number of input and output channels is two instead of one. Red arrows represent convolutional layers with a kernel size of $3\times 3$ and a padding size of $1$ followed by a LeakyReLU activation function. Green arrows represent convolutional layers with a kernel size of $1\times1$ and a padding size of $0$ followed by a LeakyReLU activation function.} \label{fig:residual} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/ushaped.png} \caption{Details of the U-shaped neural network used in the neural network $\sigma^2$ for CT reconstruction. For MRI reconstruction, the number of input and output channels is two instead of one. Green arrows represent convolutional layers with a kernel size of $3 \times 3$ and a padding size of $1$ followed by batch normalization and a ReLU activation function. Brown arrows represent maxpooling layers with a kernel size of $2$ and a stride of $2$. Pink arrows represent bilinear upsampling with a scale factor of $2$. Purple arrows represent the concatenation operation along the channel dimension.
Yellow arrows represent convolutional layers with a kernel size of $3\times 3$ and a padding size of $1$.} \label{fig:unet} \end{figure}

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/architecture_dual_head.png} \caption{High-level overview of the dual-head neural network that simultaneously outputs the mean and the covariance matrix of the Gaussian likelihood function of the proposed method.} \label{fig:highleveldualhead} \end{figure}

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/problem_setup.png} \caption{Details of the one-dimensional toy problem.} \label{fig:ps} \end{figure}

\section{Dual-Head Architecture} \label{sec:dualhead}

In the proposed framework, we have used a U-shaped architecture~\cite{Ronneberger2015UNet} to represent the covariance matrix of the Gaussian likelihood function of the proposed method. An alternative is to use a dual-head architecture~\cite{Kendall2017BayesianNN}, which is illustrated in Figure \ref{fig:highleveldualhead}. The advantage of a dual-head architecture is that it is less GPU-memory intensive, so larger batch sizes can be used in the training and inference stages, leading to faster inference. For example, the proposed framework presented in the paper allows the use of a batch size of $4$ on a 16GB GPU, whereas the dual-head variant of the proposed framework allows the use of a batch size of $5$, decreasing the average inference time from $5.30$ seconds to $4.65$ seconds. On the other hand, we have experimentally observed that training a dual-head architecture is slightly more challenging than training the proposed method presented in the paper and that the aleatoric uncertainty maps obtained by the dual-head variant are noisier than those obtained by the proposed framework presented in the paper.

\section{Toy Problem} \label{sec:toyproblem}

In this section, we provide a one-dimensional inverse problem, which we refer to as the toy problem, to make the abstract concepts of aleatoric and epistemic uncertainties more concrete and to show that the proposed framework successfully captures epistemic and aleatoric uncertainties. As a toy problem, we consider a one-dimensional linear inverse problem of the form \begin{equation} m = a s + n, \end{equation} where $m \in \mathbb{R}$ is the measurement, $a \in \mathbb{R}$ is the forward operator, $s \in \mathbb{R}$ is the target signal, and $n \sim \mathcal{N}(0, \sigma_n^2)$ is additive white Gaussian noise. For this setup, we choose the true prior distribution of the target signal $s$ to be $p(s) = \mathcal{N}(s| \mu, \tau^{-1})$. Thus, the posterior distribution of the target signal given a measurement $m$ becomes \begin{equation} p(s|m) = \mathcal{N}(s | \eta(m), \epsilon), \label{eq:posterior} \end{equation} where $\eta(m)=\epsilon[a \sigma_{n}^{-2}m + \tau \mu]$, and $\epsilon = (\tau + a^2 \sigma_{n}^{-2})^{-1}$. For the experiment, we chose the following values for the parameters of the toy problem: $a=0.5, \sigma_n=0.1, \mu=0$, and $\tau^{-1}=0.2$. We obtained the training dataset by taking $100$ measurements uniformly spaced over the interval $[0,3/2]$ and generating the corresponding target signals by sampling from the distribution $p(s|m)$. Figure \ref{fig:ps} shows the details of the toy problem. For this toy problem, we used the proposed framework to obtain epistemic and aleatoric uncertainties.
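To make the setup concrete, the following short sketch (NumPy; the variable names are ours and purely illustrative) generates the training dataset and evaluates the closed-form posterior in \eqref{eq:posterior}:

\begin{verbatim}
import numpy as np

a, sigma_n, mu, tau = 0.5, 0.1, 0.0, 1.0 / 0.2   # tau^{-1} = 0.2
eps = 1.0 / (tau + a**2 / sigma_n**2)            # posterior variance
def eta(m):                                      # posterior mean
    return eps * (a * m / sigma_n**2 + tau * mu)

rng = np.random.default_rng(0)
m_train = np.linspace(0.0, 1.5, 100)             # 100 points on [0, 3/2]
s_train = eta(m_train) + np.sqrt(eps) * rng.standard_normal(100)
\end{verbatim}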
We used multi-layer perceptrons (MLPs) with three hidden layers for the residual blocks of the neural network $f$ and an MLP with two hidden layers for the neural network $\sigma^2$. Because the output is one-dimensional, we did not put a dropout layer at the end of the neural networks. The initial step size of the neural network $f$ was fixed to $1.0$, and the number of iterations $K$ of the proximal gradient descent was set to $5$. The dropout rate was fixed to $0.5$, and the proposed framework was trained for $20000$ epochs using a learning rate of $1 \times 10^{-4}$. In the inference stage, for $200$ uniformly spaced test measurement vectors over the interval $[0,3]$, we computed the reconstruction, the aleatoric standard deviation, and the epistemic standard deviation. Figure \ref{fig:aleatoric} and Figure \ref{fig:epistemic} show the aleatoric and epistemic uncertainties captured by the proposed framework, respectively.

For this toy problem, the inherent uncertainty in the reconstruction task, i.e., the uncertainty on the target signal for a given measurement, is caused by the variance term $\epsilon$ (see \eqref{eq:posterior}). For the test measurements lying in the interval $[0,3/2]$, which are the ones covered by the training dataset, the aleatoric uncertainty captured by the proposed framework significantly overlaps with the true aleatoric uncertainty. Epistemic uncertainty, on the other hand, is the uncertainty on the parameters, which is due to a lack of training examples around a test measurement. For the test measurements lying in the interval $[0,3/2]$, the epistemic uncertainty captured by the proposed framework is low, as expected. As we move toward the region for which we do not have any training data, i.e., as the measurements start deviating from the training data, the epistemic uncertainty captured by the proposed framework increases.

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/aleatoric_uncertainty.png} \caption{True aleatoric uncertainty and the aleatoric uncertainty captured by the proposed framework for the toy problem.} \label{fig:aleatoric} \end{figure}

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/epistemic_uncertainty.png} \caption{Epistemic uncertainty captured by the proposed framework.} \label{fig:epistemic} \end{figure}

\section{Other Methods Used in Reconstruction Experiments} \label{sec:othermethods}

In the experiments section of the article, we compared the reconstruction performance of the proposed framework with six other reconstruction methods. Descriptions of those methods and their implementation details are provided below.

\textbf{Zero-Filling:} Zero-filling is one of the baseline reconstruction methods that we used for the MRI experiments. This method simply fills the unobserved Fourier (k-space) coefficients with zeros and computes the inverse Fourier transform.

\textbf{Filtered Backprojection:} Filtered backprojection is one of the baseline reconstruction methods that we used for the CT experiments. This method first filters the sinogram data and then computes the backprojection of the filtered sinogram. In our experiments, we used the TorchRadon~\cite{torch_radon} package to implement this method.

\textbf{Total Variation:} Total variation reconstruction is the second baseline reconstruction method that we used in our experiments.
This method reconstructs the image by solving the following optimization problem. \begin{equation} \hat{{\mathbf s}} = \argmin_{{\mathbf s}} \left\{ \| {\mathbf A} {\mathbf s} - {\mathbf m} \|_2^2 + \beta \| {\mathbf s} \|_{\text{TV}} \right\}, \end{equation} where $\|.\|_{\text{TV}}$ denotes the total variation semi-norm~\cite{Chambolle2004TV}. We used the alternating direction method of multipliers (ADMM)~\cite{Boyd2011ADMM} to obtain an iterative algorithm that solves this optimization problem. In our experiments, the number of iterations and the penalty parameter of the ADMM were fixed to $100$ and $10.0$, respectively. The data-dependent update step of the ADMM was solved using the conjugate gradient (CG) method (a short sketch of this update is provided later in this section). The tolerance parameter of the CG was fixed to $1 \times 10^{-5}$, and the maximum number of CG iterations was set to $10$. The value of the regularization constant $\beta$ was chosen from the set $\{1 \times 10^{-4}, 1 \times 10^{-3}, 1 \times 10^{-2}, 1 \times 10^{-1}, 1 \times 10^{0}, 1 \times 10^{1} \}$ to maximize the SSIM.

\textbf{Deep Unrolling:} This is a learning-based image reconstruction method that leverages the idea of deep unrolling. The neural network used for this method is the neural network $f$ depicted in Figure \ref{fig:highlevel} with the residual blocks in Figure \ref{fig:residual}, except that there is a batch normalization layer between every convolutional layer and the ReLU activation function. The batch size was set to $4$ for the MRI experiments and $2$ for the CT experiments, and the learning rate was set to $1\times 10^{-4}$ for the MRI experiments and $1\times 10^{-5}$ for the CT experiments. The initial step size of the neural network $f$ was set to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The number of iterations of the PGD was fixed to $5$, and the neural network was trained for $100$ epochs using the mean-squared error loss function.

\textbf{Deep Unrolling without Batch Normalization:} This is another variant of the deep unrolling method used in the reconstruction experiments. The neural network used for this method is the neural network $f$ depicted in Figure \ref{fig:highlevel} with the residual blocks in Figure \ref{fig:residual}. The batch size was set to $4$ for the MRI experiments and $2$ for the CT experiments. The learning rate was fixed to $1\times 10^{-4}$ for the MRI experiments and $1\times 10^{-5}$ for the CT experiments. The initial step size of the neural network $f$ was set to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The number of iterations of the PGD was set to $5$, and the neural network was trained for $100$ epochs using the mean-squared error loss function.

\textbf{Proposed Only Aleatoric Model:} This method only quantifies aleatoric uncertainty by using the Gaussian likelihood of the proposed framework with the maximum likelihood estimate of the parameters $\theta$. For this method, we used the neural networks $f$ and $\sigma^2$ depicted in Figure \ref{fig:highlevel}, Figure \ref{fig:residual}, and Figure \ref{fig:unet}. The batch size was set to $4$ for the MRI experiments and $2$ for the CT experiments. The learning rate was fixed to $1\times 10^{-4}$ for the MRI experiments and $1\times 10^{-5}$ for the CT experiments. The initial step size of the neural network $f$ was set to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The number of iterations of the PGD was set to $5$, and the neural networks $f$ and $\sigma^2$ were trained for $100$ epochs.
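Before turning to the final model, we illustrate the CG-based data-dependent update used in the total variation baseline above. The following is a minimal sketch (our own, using SciPy), not the exact implementation: the forward operator and its adjoint are assumed to be available as functions \texttt{A} and \texttt{AT} acting on flattened images, and recent SciPy versions name the tolerance argument \texttt{rtol} rather than \texttt{tol}.

\begin{verbatim}
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def data_update(A, AT, m, v, rho, n):
    # Minimizes ||A s - m||_2^2 + (rho/2)||s - v||_2^2 by solving the
    # normal equations (2 A^T A + rho I) s = 2 A^T m + rho v with CG,
    # using the tolerance (1e-5) and iteration cap (10) stated above.
    op = LinearOperator((n, n), dtype=np.float64,
                        matvec=lambda s: 2.0 * AT(A(s)) + rho * s)
    rhs = 2.0 * AT(m) + rho * v
    s, _ = cg(op, rhs, x0=v, tol=1e-5, maxiter=10)
    return s
\end{verbatim}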
\textbf{Proposed Only Epistemic Model:} This is the preliminary variant of the proposed method that we presented in our prior conference publication~\cite{Ekmekci2021UncertaintyUnfoldingPreliminary}. This method only quantifies the epistemic uncertainty since it treats the covariance matrix of the Gaussian likelihood function as a fixed parameter. In our experiments, we fixed the covariance matrix to $(1/10) \mathbf{I}$. The batch size was set to $4$ for the MRI experiments and $2$ for the CT experiments. The learning rate was set to $1\times 10^{-4}$ for the MRI experiments and $1\times 10^{-5}$ for the CT experiments. The initial step size of the neural network $f$ was fixed to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The number of iterations of the PGD was set to $5$, and the dropout-added neural network was trained for $100$ epochs.

\section{Model Calibration} \label{sec:modelrecalibration}

While developing the proposed framework, we made several assumptions about the form of the likelihood function, the prior distribution of the parameters of the likelihood function, and the parametric distribution that we use to approximate the true posterior distribution of the parameters. These assumptions introduce a model bias and may lead to uncalibrated predictions in practice. In this section, we show how the proposed framework can be calibrated easily, if needed, using the calibration method proposed by Kuleshov \emph{et~al.}~\cite{Kulesov2018recalibration}. For a given test measurement vector $\mathbf{m}_*$ and a training dataset $\mathcal{D}$, the proposed framework approximates the predictive distribution as follows: \begin{equation} p({\mathbf s}_* | {\mathbf m}_*, \mathcal{D}) \approx \frac{1}{T} \sum_{t=1}^T \mathcal{N}({\mathbf s}_*| f_{\hat{\gamma}^{(t)}}({\mathbf m}_*), \diag(\sigma^2_{\hat{\delta}^{(t)}}({\mathbf m}_*))), \end{equation} where $T$ is the number of Monte Carlo samples used to approximate the integral, and $\hat{\theta}^{(t)} = \hat{\delta}^{(t)} \cup \hat{\gamma}^{(t)}$ is the $t^{\text{th}}$ sample from the parametric distribution $q_{\hat{\omega}}(\theta)$. For calibration purposes, we approximate the predictive distribution of each pixel with a Gaussian distribution as follows: \begin{equation} p([{\mathbf s}_*]_k | {\mathbf m}_*, \mathcal{D}) \approx \mathcal{N}([{\mathbf s}_*]_k| [\mathbb{E}[{\mathbf s}_* | {\mathbf m}_*, \mathcal{D}]]_k, \Var[[{\mathbf s}_*]_k | {\mathbf m}_*, \mathcal{D}]), \label{eq:preddistrecalibration} \end{equation} where the mean and the variance of this distribution are defined in the paper. Next, assuming that we have a validation dataset $\mathcal{D}_{\text{val}} = \{ ({\mathbf m}^{[i]}, {\mathbf s}^{[i]}) \}_{i=1}^V$, which is different from the test dataset, we generate a calibration dataset $\mathcal{D}_{\text{cal}}$ defined as follows: \begin{equation} \mathcal{D}_{\text{cal}} = \{ ({\mathbf m}^{[i]}, [{\mathbf s}^{[i]}]_k) | i \in [V], k \in [S] \}. \label{eq:calibrationdataset} \end{equation} Using the calibration dataset $\mathcal{D}_{\text{cal}}$ and the predictive distribution defined in \eqref{eq:preddistrecalibration}, we can utilize the calibration method presented in \cite{Kulesov2018recalibration} to calibrate the proposed method. In the experiments section of the paper, we observed that epistemic and aleatoric uncertainty maps convey useful information about the confidence of the reconstruction method and the imaging problem.
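As a minimal sketch of this recalibration step (our own illustration, not the exact pipeline), one can fit the monotone recalibration map of \cite{Kulesov2018recalibration} with isotonic regression from scikit-learn; the inputs are the per-pixel predictive means, standard deviations, and ground-truth values flattened over $\mathcal{D}_{\text{cal}}$, and all names below are ours.

\begin{verbatim}
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

def fit_recalibrator(mu, std, s_true):
    # Predicted CDF value of each ground-truth pixel under its
    # per-pixel Gaussian predictive distribution.
    p = norm.cdf(s_true, loc=mu, scale=std)
    # Empirical frequency: fraction of calibration pixels whose
    # predicted CDF value does not exceed p.
    emp = np.searchsorted(np.sort(p), p, side="right") / p.size
    # Monotone map from predicted to empirical probabilities.
    return IsotonicRegression(y_min=0.0, y_max=1.0,
                              out_of_bounds="clip").fit(p, emp)
\end{verbatim}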
However, to assess the reliability of the uncertainty information provided by the proposed framework rigorously, we need to perform a quantitative analysis. One way to measure the reliability of uncertainty estimates is to create a calibration plot~\cite{Kulesov2018recalibration}. An example calibration plot of the proposed framework for an MRI experiment is given in Figure \ref{fig:calibration}. From this figure, we observe that the proposed framework may provide slightly underconfident predictions due to model bias, although the visual observations provided in the paper match our expectations about the behavior of epistemic and aleatoric uncertainties. To obtain calibrated predictions, we calibrated the proposed model using the calibration method presented in \cite{Kulesov2018recalibration} with the help of the Uncertainty Toolbox~\cite{chung2021uncertainty}. The calibration plot of the calibrated proposed framework is also depicted in Figure \ref{fig:calibration}. As can be seen from the figure, the calibrated proposed framework provides well-calibrated uncertainty estimates.

\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/calibration_2.png} \caption{Calibration plots of the proposed framework and the calibrated proposed framework.} \label{fig:calibration} \end{figure}

\section{Reconstruction Performance}

Figure \ref{fig:reconstructionexample} compares the reconstruction performance of the proposed method with the reconstruction methods whose details are discussed in Section \ref{sec:othermethods} of the Supplementary Material.

\begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{figures_supplementary/reconstruction_merged_with_zoom.png} \caption{Visual comparison of the image reconstruction performance of zero-filling (ZF) / filtered backprojection (FBP), total variation reconstruction (TV), PGD-based deep unrolling method (PUM), PGD-based deep unrolling method without batch normalization (PUMw/oBN), proposed only epistemic model (POEM), proposed only aleatoric model (POAM), and the proposed method.} \label{fig:reconstructionexample} \end{figure*}

\bibliographystyle{IEEEtran}

\section{Introduction} \label{sec:introduction}

\IEEEPARstart{T}{his} article concerns imaging problems where the target image is observed through a linear transformation followed by additive noise. This observation model is quite general and has been used to model a variety of imaging techniques such as computed tomography (CT)~\cite{Elbakri2002CT}, magnetic resonance imaging (MRI)~\cite{Fessler2010MRI}, microscopy~\cite{Sarder2006Microscopy}, and radar imaging~\cite{Potter2010Radar}. For this observation model, classical model-based iterative reconstruction methods cast the image reconstruction problem as a regularized least squares problem whose objective function is the sum of a data-fidelity term and a regularizer. The observation model of the imaging problem determines the form of the data-fidelity term, and the prior knowledge about the target image governs the form of the regularizer. After obtaining an analytical expression for the data-fidelity term and choosing a regularization function, such as the total variation (TV) semi-norm~\cite{Chambolle2004TV}, the resulting optimization problem is solved iteratively by using an appropriate iterative optimization algorithm such as the alternating direction method of multipliers (ADMM)~\cite{Boyd2011ADMM}, half-quadratic splitting (HQS), or the proximal gradient descent (PGD) method~\cite{Parikh2014Proximal}.
Inspired by model-based iterative reconstruction methods and the pioneering work of Gregor and LeCun~\cite{Gregor2010Unfolding} on sparse coding, a deep learning-based image reconstruction methodology, often referred to as deep unrolling~\cite{Aggarwal2019MoDL, Adler2018LearnedPrimalDual, Borgerding2017VAMP, Chun2018BCDNet, Diamond2018Unrolled, Liu2019DPUnrolling, Yang2016ADMMNet, Mardani2018NPGD}, has emerged to bridge the gap between model-based image reconstruction methods and purely deep neural network-based image reconstruction methods. The common theme among deep unrolling methods is that they design a deep neural network by replacing some parts of an unrolled iterative reconstruction algorithm with trainable parameters and neural networks. The main advantages of deep unrolling methods are that they explicitly incorporate the observation model into the neural network, thereby enforcing data consistency, and that the resulting network is interpretable in the sense that it is essentially a classical model-based reconstruction algorithm with some learnable components.

\IEEEpubidadjcol Although deep unrolling methods have the advantage of incorporating domain knowledge and the physics of the imaging problem into the neural network architecture, existing deep unrolling methods do not provide any predictive uncertainty information about the reconstructed image since they rely on non-Bayesian (standard) neural networks to reconstruct the target image from the corrupted measurements. This severely limits their applicability in safety-critical real-world imaging applications such as medical imaging, where uncertainty information is crucial for making accurate decisions. Our perspective is that we can solve this problem by taking a Bayesian approach to uncertainty estimation and using Bayesian neural networks (BNNs)~\cite{Neal1995BayesianNN}. BNNs are probabilistic models that can quantify the inherent uncertainty in the reconstruction task, which is referred to as the \emph{aleatoric} uncertainty~\cite{Kendall2017BayesianNN}, and the uncertainty on the parameters of a neural network, which is referred to as the \emph{epistemic} uncertainty~\cite{Kendall2017BayesianNN}, by putting a probability distribution on the parameters and computing the posterior distribution of the parameters given a training dataset. By using BNNs together with the idea of deep unrolling, we claim that we can provide predictive uncertainty information for the reconstructed image while preserving the advantages of deep unrolling.

In this article, we present an uncertainty-quantifying learning-based image reconstruction framework based on the idea of deep unrolling and Bayesian neural networks. In the proposed framework, we first define a likelihood function whose form originates from the principles of deep unrolling. Then, we place a prior distribution on the parameters of the likelihood function and obtain an approximation of the posterior distribution of the parameters using a scalable variational inference method called Monte Carlo (MC) Dropout~\cite{Gal2016MCDropout}. Next, for a given test measurement and a training dataset, we follow the principles of BNNs and compute the predictive distribution via Monte Carlo integration. Finally, we compute the mean and element-wise variance of the predictive distribution to obtain the reconstructed image and the epistemic and aleatoric uncertainty maps, respectively.
We evaluate the proposed framework on MRI and CT reconstruction problems and show that it can achieve reconstruction performance comparable to a state-of-the-art deep unrolling method and provide epistemic and aleatoric uncertainty information about the reconstructed image while incorporating domain knowledge into the reconstruction process. Moreover, we demonstrate the characteristics of the epistemic and aleatoric uncertainties provided by the proposed framework to motivate further research on leveraging the uncertainty information for image reconstruction and analysis tasks.

\section{Related Work} \label{sec:relatedwork}

The problem of uncertainty quantification for image reconstruction tasks, e.g.,~\cite{Adler2019CWGAN, Bardsley2012MCMC, Bohm2019VAE, Cai2018aMCMC, Cai2018bMAP, Cochrane2022BNN, Dasgupta2021NF, Edupuganti2021VAE, Ekmekci2021UncertaintyPnP, Ekmekci2021UncertaintyUnfoldingPreliminary, Herrmann2019Bregman, Hoffmann2021BNNensemble, Kitichotkul2021SURE, Liu2019Heteroscedastic, Repetti2019UQCredibleRegions, Schlemper2018UncertaintyMRI, Shang2021BNN, Siahkoohi2020BNN, Sun2020DPI, Tanno2019uncertainty, Tonolini2020VarInf, Xue2019UncertaintyPhaseImaging}, has recently attracted renewed attention from the computational imaging community due to advancements in deep generative modeling~\cite{BondTaylor2021GenerativeSurvey} and BNNs~\cite{Neal1995BayesianNN, Gal2016MCDropout, Kendall2017BayesianNN}. The state-of-the-art deep learning-based image reconstruction methods performing uncertainty characterization, e.g.,~\cite{Adler2019CWGAN, Bohm2019VAE, Cochrane2022BNN, Dasgupta2021NF, Edupuganti2021VAE, Ekmekci2021UncertaintyPnP, Ekmekci2021UncertaintyUnfoldingPreliminary, Herrmann2019Bregman, Hoffmann2021BNNensemble, Kitichotkul2021SURE, Liu2019Heteroscedastic, Schlemper2018UncertaintyMRI, Shang2021BNN, Siahkoohi2020BNN, Sun2020DPI, Tanno2019uncertainty, Tonolini2020VarInf, Xue2019UncertaintyPhaseImaging}, can be divided into two groups: deep generative model-based reconstruction methods and BNN-based reconstruction methods. Deep generative model-based reconstruction methods, e.g.,~\cite{Adler2019CWGAN, Bohm2019VAE, Dasgupta2021NF, Edupuganti2021VAE, Sun2020DPI, Tonolini2020VarInf}, seek to approximate the posterior distribution of the target image with the help of a generative model to characterize the inherent uncertainty in the reconstruction task, i.e., the uncertainty on the target image for a given measurement vector. For example, Adler and Oktem~\cite{Adler2019CWGAN} approximate the posterior distribution of the target image given a measurement vector using a conditional Wasserstein generative adversarial network~\cite{Mirza2014CGAN, ArjovskyWGAN}. Bohm \emph{et~al.}~\cite{Bohm2019VAE} use a variational autoencoder~\cite{Kingma2013auto} to represent the prior distribution of the target image and perform variational inference to learn the true posterior distribution of the latent variable given a measurement vector. Sun and Bouman~\cite{Sun2020DPI} utilize another popular generative model, a flow-based model~\cite{Kobyzev2021Normalizingflows, Papamakarios2021Normalizingflows}, to approximate the posterior distribution of the target image given a measurement vector and adjust the parameters of the flow-based model by minimizing the reverse Kullback-Leibler divergence~\cite{Papamakarios2021Normalizingflows} between the output distribution of the flow-based model and the posterior distribution.
After training the generative model, the uncertainty on the target image for a given measurement vector can be quantified by calculating the sample variance of the samples generated from the approximation of the posterior distribution of the target image. While deep generative model-based reconstruction methods aim to quantify the inherent uncertainty in the reconstruction task, the goal of Bayesian neural network-based reconstruction methods, e.g.,~\cite{Cochrane2022BNN, Ekmekci2021UncertaintyPnP, Ekmekci2021UncertaintyUnfoldingPreliminary, Hoffmann2021BNNensemble, Schlemper2018UncertaintyMRI, Shang2021BNN, Siahkoohi2020BNN, Tanno2019uncertainty, Xue2019UncertaintyPhaseImaging}, is to quantify either the uncertainty on the parameters of the statistical model or both the inherent uncertainty in the reconstruction task and the uncertainty on the parameters of the statistical model. To the best of the authors' knowledge, Schlemper \emph{et~al.}~\cite{Schlemper2018UncertaintyMRI} presented the first two BNN-based image reconstruction methods for the MRI reconstruction problem, showing the potential of BNNs for uncertainty quantification in imaging problems. Subsequently, many BNN-based image reconstruction methods were developed for various problems such as neuroimage enhancement~\cite{Tanno2019uncertainty}, phase imaging~\cite{Xue2019UncertaintyPhaseImaging}, seismic imaging~\cite{Siahkoohi2020BNN}, computational optical form measurements~\cite{Hoffmann2021BNNensemble}, single-pixel imaging~\cite{Shang2021BNN}, imaging through scattering media~\cite{Cochrane2022BNN}, and more general image reconstruction problems~\cite{Ekmekci2021UncertaintyPnP, Ekmekci2021UncertaintyUnfoldingPreliminary}. Table \ref{tab:relatedworkcomparison} summarizes the functional models used by BNN-based image reconstruction methods and the types of uncertainties they quantify.

\begin{table}[t] \centering \caption{High-level comparison of Bayesian neural network-based image reconstruction methods} \label{tab:relatedworkcomparison} \begin{tabular}{lcc} \toprule Method & Functional Model & Quantified Uncertainties \\ \midrule Schlemper \emph{et~al.}~\cite{Schlemper2018UncertaintyMRI} & U-Net~\cite{Ronneberger2015UNet} & Epistemic \& Aleatoric \\ Schlemper \emph{et~al.}~\cite{Schlemper2018UncertaintyMRI} & DCCNN~\cite{Schlemper2018DCCNN} & Epistemic \& Aleatoric \\ Tanno \emph{et~al.}~\cite{Tanno2019uncertainty} & ESPCN~\cite{Shi2016ESCPCN} & Epistemic \& Aleatoric \\ Xue \emph{et~al.}~\cite{Xue2019UncertaintyPhaseImaging} & U-Net~\cite{Ronneberger2015UNet} & Epistemic \& Aleatoric \\ Siahkoohi \emph{et~al.}~\cite{Siahkoohi2020BNN} & DIP~\cite{Lempitsky2018DIP} & Epistemic \\ Hoffmann \emph{et~al.}~\cite{Hoffmann2021BNNensemble} & U-Net~\cite{Ronneberger2015UNet} & Epistemic \\ Shang \emph{et~al.}~\cite{Shang2021BNN} & U-Net~\cite{Ronneberger2015UNet} & Epistemic \& Aleatoric \\ Ekmekci and Cetin~\cite{Ekmekci2021UncertaintyPnP} & DRUNet~\cite{Zhang2021plug} & Epistemic \\ Ekmekci and Cetin~\cite{Ekmekci2021UncertaintyUnfoldingPreliminary}$^*$ & Deep Unrolling & Epistemic \\ Cochrane \emph{et~al.}~\cite{Cochrane2022BNN} & U-Net~\cite{Ronneberger2015UNet} & Epistemic\\ Proposed Framework & Deep Unrolling & Epistemic \& Aleatoric\\ \bottomrule \multicolumn{3}{l}{$^*$Preliminary version of this work} \end{tabular} \end{table}

Table \ref{tab:relatedworkcomparison} highlights the main differences between the proposed framework and the aforementioned BNN-based image reconstruction methods.
The main difference between the proposed framework and the methods presented in \cite{Schlemper2018UncertaintyMRI, Tanno2019uncertainty, Xue2019UncertaintyPhaseImaging, Hoffmann2021BNNensemble, Shang2021BNN, Cochrane2022BNN, Siahkoohi2020BNN} is that the proposed framework utilizes the idea of deep unrolling to integrate the observation model into the reconstruction process. Incorporation of physics-based models through data-consistency layers provides some level of interpretability. The DCCNN~\cite{Schlemper2018DCCNN} based method presented in \cite{Schlemper2018UncertaintyMRI} contains data-consistency layers; however, the data-consistency layer in \cite{Schlemper2018UncertaintyMRI} leverages the characteristic properties of the forward operator of the MRI observation model, making it highly specialized for MRI reconstruction. On the other hand, the proposed framework only requires the computation of the adjoint of the forward operator of the observation model, which is a considerably less restrictive requirement. If the forward operator deviates from a Fourier operator, the data-consistency layer of the DCCNN-based method requires matrix inversion, which is not computationally feasible for large-scale inverse problems. The difference between the proposed framework and the framework presented in \cite{Ekmekci2021UncertaintyPnP} lies in the distinction between end-to-end models and Plug-and-Play (PnP) methods~\cite{Venkatakrishnan2013PnP}. While the BNN-based image reconstruction method presented in \cite{Ekmekci2021UncertaintyPnP} is built upon the idea of PnP priors~\cite{Venkatakrishnan2013PnP}, which does not require end-to-end training, the proposed framework uses a deep unrolled network as its functional model and is trained in an end-to-end manner.

We note that the preliminary version of this work appeared in \cite{Ekmekci2021UncertaintyUnfoldingPreliminary} as a conference paper. The work presented in this manuscript extends the preliminary work in \cite{Ekmekci2021UncertaintyUnfoldingPreliminary} in several significant ways. First, \cite{Ekmekci2021UncertaintyUnfoldingPreliminary} involved the quantification of epistemic uncertainty only, whereas this paper proposes both epistemic and aleatoric uncertainty quantification. Second, unlike \cite{Ekmekci2021UncertaintyUnfoldingPreliminary}, the unrolled neural network in the framework we propose here contains different CNN blocks at each iteration. We have experimentally observed that this change leads to a faster and more stable training stage. Finally, this manuscript contains an extensive set of experiments demonstrating the characteristics of epistemic and aleatoric uncertainties.

\section{Proposed Framework} \label{sec:proposedframework}

In this section, we present a learning-based image reconstruction framework that can incorporate the observation model into the reconstruction process and quantify epistemic and aleatoric uncertainties arising in imaging problems. We start by introducing the assumed observation model and presenting a probabilistic formulation of deep unrolling methods along with a motivation for bringing in BNNs. This provides the basis for our BNN-based image reconstruction and uncertainty characterization approach, the components of which are described in the rest of this section.

\subsection{Observation Model and the Inverse Problem} We consider the following observation model.
\begin{equation} {\mathbf m} = {\mathbf A} {\mathbf s} + {\mathbf n}, \label{eq:forwardproblem} \end{equation} where ${\mathbf m} \in \mathbb{F}^M$ is the measurement vector; ${\mathbf A} \in \mathbb{F}^{M \times N}$ is the forward operator, which is the discrete approximation of the transformation applied by the imaging system; ${\mathbf s} \in \mathbb{F}^{N}$ is the target image; and ${\mathbf n} \sim \mathcal{N}({\mathbf 0}, \sigma_n^2 {\mathbf I})$ is additive white Gaussian noise; here, $\mathbb{F}$ stands for either $\mathbb{R}$ or $\mathbb{C}$. In this section, without loss of generality, we only consider the case where $\mathbb{F} = \mathbb{R}$ since generalizing the proposed framework to cover the case $\mathbb{F} = \mathbb{C}$ is straightforward (see \cite{Ekmekci2021UncertaintyUnfoldingPreliminary} for details). For an underdetermined system ($M<N$), the inverse problem, i.e., recovering the target image ${\mathbf s}$ from the measurement vector ${\mathbf m}$, is ill-posed. To narrow down the solution space, we can utilize any prior knowledge about the target image. One way to use such prior knowledge systematically is to treat the inverse problem as a maximum \emph{a posteriori} (MAP) estimation problem, which is defined by \begin{equation} \hat{{\mathbf s}} = \argmin_{{\mathbf s} \in \mathbb{R}^{N}} \left\{ \| {\mathbf A} {\mathbf s} - {\mathbf m} \|_2^2 + \beta \psi({\mathbf s}) \right\}, \label{eq:mapestimationproblem} \end{equation} where $\hat{{\mathbf s}}$ is the MAP estimate of the target image, the term $\| {\mathbf A} {\mathbf s} - {\mathbf m} \|_2^2$ is the data-fidelity term, the function $\psi: \mathbb{R}^N \to \mathbb{R}$ is the regularizer that comes from the prior knowledge on the target image, and $\beta > 0$ is the parameter controlling the balance between the data-fidelity term and the regularizer. After deciding on the form of the regularizer, e.g., the total variation semi-norm or a wavelet-domain regularizer, model-based reconstruction methods solve the problem in \eqref{eq:mapestimationproblem} iteratively by using an appropriate optimization algorithm, e.g., ADMM~\cite{Boyd2011ADMM}, HQS, or PGD~\cite{Parikh2014Proximal}.

\subsection{Probabilistic Formulation of Deep Unrolling and BNNs} \label{ssec:observations}

For the inverse problem, which is essentially a regression problem, suppose that the likelihood function $p({\mathbf s}|{\mathbf m},\theta)$ has the following form. \begin{equation} p({\mathbf s} | {\mathbf m}, \theta) = \mathcal{N} \left( {\mathbf s} | f_\theta ({\mathbf m}), \sigma^2 {\mathbf I} \right), \label{eq:gaussianlikelihood} \end{equation} where $f_\theta: \mathbb{R}^M \to \mathbb{R}^N$ is a deep unrolled network parametrized by the set of parameters $\theta$, and $\sigma > 0$ is a fixed constant. Assuming that the training dataset $\mathcal{D}$ contains i.i.d.\ pairs of measurement vectors and target images, we can compute a MAP estimate of the set of parameters by solving the following optimization problem. \begin{equation} \hat{\theta}_{\text{MAP}} = \argmin_{\theta} \left\{ \frac{1}{2\sigma^2} \sum_{i=1}^{N_\mathcal{D}} \| {\mathbf s}^{[i]} - f_\theta({\mathbf m}^{[i]}) \|_2^2 - \log p(\theta) \right\} \label{eq:mapestimateofparameters} \end{equation} where $({\mathbf m}^{[i]}, {\mathbf s}^{[i]})$ is the $i^{\text{th}}$ example in the training dataset, $N_\mathcal{D}$ is the number of examples in the training dataset, and the distribution $p(\theta)$ is the prior distribution of the set of parameters.
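For concreteness, the following minimal sketch (PyTorch; names are ours) shows this training objective for a standard Gaussian prior, under which \eqref{eq:mapestimateofparameters} reduces to a scaled squared-error loss with weight decay:

\begin{verbatim}
import torch

def map_loss(f, batch_m, batch_s, sigma2=1.0, wd=0.5):
    # Negative log-likelihood: (1 / (2 sigma^2)) sum ||s - f(m)||_2^2.
    nll = ((batch_s - f(batch_m)) ** 2).sum() / (2.0 * sigma2)
    # Negative log-prior of a standard Gaussian: (1/2)||theta||_2^2,
    # i.e., weight decay with coefficient wd = 1/2.
    neg_log_prior = sum((p ** 2).sum() for p in f.parameters())
    return nll + wd * neg_log_prior
\end{verbatim}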
In the inference stage, for a given measurement vector ${\mathbf m}_*$, we can compute the distribution $p({\mathbf s}_* | {\mathbf m}_*, \hat{\theta}_{\text{MAP}})$ to make predictions about the target image. This probabilistic formulation implicitly appears in the training and inference stages of state-of-the-art deep unrolling methods. For instance, if we choose the prior $p(\theta)$ to be a standard Gaussian distribution, then finding the MAP estimate of the set of parameters boils down to training the neural network $f_\theta$ using the squared error loss with weight decay, which is a cost function frequently used by deep unrolling methods. In the inference stage, for a given measurement vector ${\mathbf m}_*$, outputting the mean of the distribution $p({\mathbf s}_* | {\mathbf m}_*, \hat{\theta}_{\text{MAP}})$ as the reconstructed image is equivalent to feeding the measurement vector ${\mathbf m}_*$ into the trained neural network $f_{\hat{\theta}_{\text{MAP}}}$. Hence, the training and inference procedures followed by many existing deep unrolling methods can be interpreted probabilistically using the formulation above.

Although such procedures are frequently used to train deep unrolling methods, there are two problems with this approach regarding the characterization of uncertainties. The first problem is that this formulation does not model the uncertainty on the target image for a given measurement vector, i.e., the inherent uncertainty in the reconstruction task, since it assumes that the covariance matrix of the likelihood function is a fixed model parameter. The second problem is that this formulation does not model the uncertainty on the set of parameters because it only uses a point estimate of the set of parameters by following MAP estimation principles.

BNNs~\cite{Neal1995BayesianNN, Jospin2022BNNTutorial, Kendall2017BayesianNN} can solve these two problems. BNNs solve the first problem by defining a likelihood function that models the inherent uncertainty in the reconstruction task. In the case of a Gaussian likelihood, this can be accomplished by representing the covariance matrix of the likelihood function with a neural network. To solve the second problem, BNNs place a prior distribution on the set of parameters of the likelihood function and compute the posterior distribution of the parameters given a training dataset. Then, at the inference stage, BNNs compute the predictive distribution for a given measurement vector ${\mathbf m}_*$ by computing the following integral: \begin{equation} p({\mathbf s}_* | {\mathbf m}_*, \mathcal{D}) = \int p({\mathbf s}_*|{\mathbf m}_*, \theta) p(\theta | \mathcal{D}) d\theta, \label{eq:predictivedistribution} \end{equation} where the distribution $p({\mathbf s}_* | {\mathbf m}_*, \mathcal{D})$ is the predictive distribution, and the integral is taken over all possible values of $\theta$. The first term of the integrand, which is the likelihood function, incorporates the inherent uncertainty in the reconstruction task (i.e., aleatoric uncertainty), created by the ill-posedness of the inverse problem, into the predictive distribution. The second term of the integrand, which is the posterior distribution of the parameters, incorporates the uncertainty on the set of parameters (i.e., epistemic uncertainty), created by the lack of training examples around the test measurement vector, into the predictive distribution through an integral over the possible parameter values.
Thanks to this conceptually simple probabilistic formulation, we can utilize BNNs to quantify both epistemic and aleatoric uncertainties in computational imaging problems.

\subsection{Form of the Likelihood Function} Based on our observations presented in Section \ref{ssec:observations}, we define the form of the likelihood function as follows. \begin{equation} p({\mathbf s}|{\mathbf m},\theta) = \mathcal{N}({\mathbf s}| f_\gamma({\mathbf m}), \diag(\sigma_\delta^2({\mathbf m}))), \label{eq:proposedlikelihood} \end{equation} where $\theta = \gamma \cup \delta$, and $f_\gamma : \mathbb{R}^M \to \mathbb{R}^N$ and $\sigma_\delta^2: \mathbb{R}^M \to \mathbb{R}^N$ are two neural networks parametrized by the sets of parameters $\gamma$ and $\delta$, respectively. The neural network $f_\gamma$ maps a given measurement vector to a point in the target image space, and the neural network $\sigma_\delta^2$ aims to capture the inherent uncertainty on the target image for a given measurement vector.

To incorporate the observation model into the neural network $f_\gamma$, we start constructing $f_\gamma$ by first solving the optimization problem in \eqref{eq:mapestimationproblem} using the proximal gradient descent (PGD) method. The main advantage of using PGD over methods such as ADMM and HQS is that the data-dependent update equation of PGD requires computing only the adjoint of the forward operator and does not involve any inversion step, which makes it suitable for large-scale imaging problems with non-structured forward operators. Assuming that the regularizer $\psi$ in \eqref{eq:mapestimationproblem} is a closed proper convex function, PGD yields the following iterative image reconstruction algorithm. \begin{equation} \begin{aligned} {\mathbf z}^{(k+1)} &= ({\mathbf I} - 2\alpha {\mathbf A}^\top {\mathbf A}) {\mathbf s}^{(k)} + 2\alpha {\mathbf A}^\top {\mathbf m} \\ {\mathbf s}^{(k+1)} &= \prox_{\alpha \beta \psi} \left( {\mathbf z}^{(k+1)} \right) \end{aligned} \end{equation} where ${\mathbf z}^{(k+1)} \in\mathbb{R}^N$ is an intermediate vector of the algorithm at the $(k+1)^{\text{st}}$ iteration, ${\mathbf s}^{(k+1)} \in \mathbb{R}^N$ is the reconstructed image at the $(k+1)^{\text{st}}$ iteration, the operator $\prox: \mathbb{R}^N \to \mathbb{R}^N$ is the proximal operator~\cite{Parikh2014Proximal}, and $\alpha \geq 0$ is the step size. To learn the prior information implicitly from the training data, we replace the proximal operator in the second step with a neural network, as has frequently been done in deep unrolling methods such as \cite{Mardani2018NPGD}. The resulting update equations become \begin{equation} \begin{aligned} {\mathbf z}^{(k+1)} &= ({\mathbf I} - 2\alpha {\mathbf A}^\top {\mathbf A}) {\mathbf s}^{(k)} + 2\alpha {\mathbf A}^\top {\mathbf m} \\ {\mathbf s}^{(k+1)} &= D_{\gamma_{k+1}} \left( {\mathbf z}^{(k+1)} \right), \end{aligned} \label{eq:pgdupdatewithnn} \end{equation} where $D_{\gamma_{k+1}}: \mathbb{R}^N \to \mathbb{R}^N$ is a residual neural network~\cite{He2016Resnet} parametrized by the set of parameters $\gamma_{k+1}$. For a fixed number of iterations $K$, the series of update equations in \eqref{eq:pgdupdatewithnn} corresponds to a deep neural network $f_\gamma$, where $\gamma = \bigcup_{k=1}^K \gamma_k$. Figure \ref{fig:architecture} displays a high-level summary of the neural network $f_\gamma$, and the details of the architecture are provided in the Supplementary Material.
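As an illustration, a compact sketch of such an unrolled network is given below (PyTorch; our own simplification rather than the exact implementation): the residual blocks stand in for the architecture detailed in the Supplementary Material, and \texttt{A} and \texttt{AT} are assumed to be differentiable functions implementing ${\mathbf A}$ and ${\mathbf A}^\top$.

\begin{verbatim}
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # Simplified stand-in for D_k; dropout follows every
    # convolutional layer except the first.
    def __init__(self, ch=64, p_drop=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.LeakyReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.Dropout2d(p_drop),
            nn.LeakyReLU(),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Dropout2d(p_drop))
    def forward(self, z):
        return z + self.body(z)       # residual connection

class UnrolledPGD(nn.Module):
    def __init__(self, A, AT, K=5, alpha0=1.0):
        super().__init__()
        self.A, self.AT, self.K = A, AT, K
        self.alpha = nn.Parameter(torch.tensor(alpha0))  # step size
        self.blocks = nn.ModuleList(ResBlock() for _ in range(K))
    def forward(self, m, s0):
        s = s0
        for D in self.blocks:         # K unrolled PGD iterations
            z = s - 2 * self.alpha * self.AT(self.A(s) - m)
            s = D(z)                  # learned proximal step
        return s
\end{verbatim}

Note that each unrolled iteration has its own residual block $D_{\gamma_k}$, mirroring \eqref{eq:pgdupdatewithnn}.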
To completely specify the form of the likelihood function given in \eqref{eq:proposedlikelihood}, we have to specify the architecture of the neural network $\sigma_\delta^2$ as well. For $\sigma_\delta^2$, we use a U-shaped neural network~\cite{Ronneberger2015UNet} followed by an element-wise exponentiation to ensure that the output contains positive entries. Figure \ref{fig:architecture} depicts a high-level summary of the neural network $\sigma_\delta^2$, the details of which are given in the Supplementary Material. We remark that we can also use a dual-head architecture to jointly represent the neural networks $f_\gamma$ and $\sigma_\delta^2$. A brief discussion on the dual-head variant of the proposed framework is provided in the Supplementary Material for interested readers.

\begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures_main/architecture.png} \caption{The structure of the neural networks $f_\gamma$ and $\sigma_\delta^2$ at a high level. The neural network $f_\gamma$ maps a measurement vector to a point in the target image space, and the neural network $\sigma_\delta^2$ aims to capture the aleatoric uncertainty. These two neural networks completely specify the form of the Gaussian likelihood in \eqref{eq:proposedlikelihood}.} \label{fig:architecture} \end{figure}

\subsection{Approximating the Posterior Distribution} \label{ssec:approximatingposterior}

To be able to compute the predictive distribution using \eqref{eq:predictivedistribution}, we have to compute the posterior distribution $p(\theta|\mathcal{D})$. However, exact computation of the posterior distribution is not tractable for deep neural networks because of the massive number of parameters and complex hierarchical structures. Thus, we either have to approximate the posterior distribution with a parametric distribution, or we have to generate samples from the posterior distribution to approximate the integral in \eqref{eq:predictivedistribution}. In our framework, we use a variational inference method called MC Dropout to approximate the posterior distribution with a parametric distribution. The advantages of MC Dropout are that it is scalable to deep neural networks since it does not introduce additional parameters; that its variational inference and inference procedures can be implemented straightforwardly in deep learning frameworks, requiring only small changes to standard training and testing pipelines; and that it has been shown to provide reliable uncertainty estimates for several problems such as camera relocalization~\cite{Kendall2016CameraRelocalization}, depth completion~\cite{Kendall2017BayesianNN, Gustaffson2020EvaluatingUncertainty}, and semantic segmentation~\cite{Kendall2017BayesianNN, Gustaffson2020EvaluatingUncertainty}. For the sake of completeness, we state the assumptions of MC Dropout explicitly and discuss the variational inference and inference steps. For a more detailed discussion, the reader can refer to~\cite{Gal2016MCDropout, Gal2016BayesianCNN, Kendall2017BayesianNN}. Suppose that the neural networks $f_\gamma$ and $\sigma_\delta^2$ contain $C$ and $E$ convolutional layers, respectively.
Then, we can write the two sets $\gamma$ and $\delta$ as follows: \begin{equation} \gamma = \bigcup_{i=1}^{C} \{ {\mathbf W}_{i}^f \} \quad \text{and} \quad \delta = \bigcup_{j=1}^E \{ {\mathbf W}_{j}^\sigma \}, \end{equation} where ${\mathbf W}_{i}^f$ and ${\mathbf W}_j^\sigma$ are the matrices whose rows contain the vectorized filter coefficients of the $i^{\text{th}}$ and $j^{\text{th}}$ convolutional layers of the neural networks $f_\gamma$ and $\sigma^2_{\delta}$, respectively. The assumptions~\cite{Gal2016MCDropout, Gal2016BayesianCNN, Kendall2017BayesianNN} on the parametric distribution $q_\omega (\theta)$ that we use to approximate the true posterior distribution $p(\theta | \mathcal{D})$ are as follows: (i) the two neural networks are independent, and the layers within each network are mutually independent, i.e., \begin{equation} q_\omega(\theta) = \left( \prod_{i=1}^{C} q\left({\mathbf W}_{i}^f\right) \right) \left( \prod_{j=1}^{E} q\left({\mathbf W}_{j}^\sigma \right) \right); \end{equation} (ii) the filters of a convolutional layer are mutually independent; more explicitly, \begin{equation} \begin{aligned} q({\mathbf W}_{i}^f) = \prod_{l=1}^{K_{i, f}^{[out]}} q( [ {\mathbf W}_{i}^f ]_{l,:} ), q\left({\mathbf W}_{j}^\sigma \right) = \prod_{m=1}^{K_{j,\sigma}^{[out]}} q( \left[ {\mathbf W}_{j}^\sigma \right]_{m,:} ), \end{aligned} \end{equation} where $K_{i, f}^{[out]}$ is the number of filters in the $i^\text{th}$ convolutional layer of $f_\gamma$, and $K_{j,\sigma}^{[out]}$ is the number of filters in the $j^\text{th}$ convolutional layer of $\sigma^2_\delta$; (iii) the distribution of the coefficients of each filter is a mixture-of-Gaussians distribution defined by \begin{equation} \begin{aligned} q( [ {\mathbf W}_{i}^f ]_{l,:} ) &= p(z_{i,l}^f=1) \mathcal{N}( [ {\mathbf W}_{i}^f ]_{l,:} | {\mathbf a}_{i,l}^f, \epsilon^2{\mathbf I}) \\ &\quad+p(z_{i,l}^f=0) \mathcal{N}( [ {\mathbf W}_{i}^f ]_{l,:} | {\mathbf 0}, \epsilon^2{\mathbf I}), \\ q( [ {\mathbf W}_{j}^\sigma ]_{m,:} ) &= p(z_{j,m}^\sigma=1) \mathcal{N}([ {\mathbf W}_{j}^\sigma ]_{m,:} | {\mathbf a}_{j,m}^\sigma, \epsilon^2{\mathbf I}) \\ &\quad+ p(z_{j,m}^\sigma=0) \mathcal{N}([ {\mathbf W}_{j}^\sigma ]_{m,:} | {\mathbf 0}, \epsilon^2{\mathbf I}), \end{aligned} \label{eq:bernoullivariationaldistribution} \end{equation} where the variables $z_{i,l}^f$ and $z_{j,m}^\sigma$ are latent variables, and the scalars $p_{i,l}^f \triangleq p(z_{i,l}^f=1)$ and $p_{j,m}^\sigma \triangleq p(z_{j,m}^\sigma=1)$ are fixed constants. The scalar $\epsilon$ is a very small fixed constant, and the sets $\Delta_f \triangleq \{ {\mathbf a}_{i,l}^f \}$ and $\Delta_\sigma \triangleq \{ {\mathbf a}_{j,m}^\sigma \}$ are the adjustable parameters of the parametric distribution. Previously, we denoted the set of adjustable parameters of the parametric distribution $q_\omega(\theta)$ by $\omega$, so we can write the set $\omega$ explicitly as $\omega = \Delta_f \cup \Delta_\sigma$. Under these assumptions, we adjust the parameters of the parametric distribution by minimizing the Kullback-Leibler divergence between the parametric distribution and the true posterior distribution, i.e., \begin{equation} \hat{\omega} = \argmin_\omega D_{\text{KL}} \left( q_\omega(\theta) || p(\theta|\mathcal{D}) \right).
\end{equation} Under certain approximations and mathematical manipulations (see the supplementary material of \cite{Gal2016MCDropout} for the details), the above optimization problem can be approximated with the following optimization problem. \begin{equation} \hat{\omega} \approx \argmin_\omega \left\{ g(\omega) + h(\omega)\right\}, \label{eq:variationalinference} \end{equation} where \begin{equation} \begin{aligned} g(\omega) &\triangleq \frac{1}{N_\mathcal{D}} \sum_{n=1}^{N_\mathcal{D}} \sum_{k=1}^{N} \bigg[ \log [\sigma_{\tilde{\delta}^{(n)}}^2 ({\mathbf m}^{[n]})]_k \\ &\mkern-18mu + \exp( - \log [\sigma_{\tilde{\delta}^{(n)}}^2 ({\mathbf m}^{[n]})]_k) ( [{\mathbf s}^{[n]}]_k - [f_{\tilde{\gamma}^{(n)}} ({\mathbf m}^{[n]})]_k )^2 \bigg], \\ h(\omega) &\triangleq \sum_{i=1}^{C} \sum_{l=1}^{K_{i,f}^{[out]}} \frac{p_{i,l}^f}{2 N_\mathcal{D}} \| {\mathbf a}_{i,l}^f \|_2^2 + \sum_{j=1}^E \sum_{m=1}^{K_{j, \sigma}^{[out]}} \frac{p_{j,m}^\sigma}{2 N_\mathcal{D}} \| {\mathbf a}_{j,m}^\sigma \|_2^2, \end{aligned} \label{eq:variationalinferencedefinitions} \end{equation} and $\tilde{\theta}^{(n)} = \tilde{\delta}^{(n)} \cup \tilde{\gamma}^{(n)}$ is the $n^{\text{th}}$ sample generated from the parametric distribution $q_\omega(\theta)$. After approximating the true posterior distribution $p(\theta | \mathcal{D})$ with the parametric distribution $q_{\hat{\omega}}(\theta)$, we approximate the integral in \eqref{eq:predictivedistribution} using Monte Carlo integration with $T$ samples as follows. \begin{equation} \begin{aligned} p({\mathbf s}_* | {\mathbf m}_*, \mathcal{D}) &\approx \frac{1}{T} \sum_{t=1}^T \mathcal{N}({\mathbf s}_*| f_{\hat{\gamma}^{(t)}}({\mathbf m}_*), \diag(\sigma^2_{\hat{\delta}^{(t)}}({\mathbf m}_*))) \end{aligned} \label{eq:approximationofpredictivedistribution} \end{equation} where $\hat{\theta}^{(t)} = \hat{\delta}^{(t)} \cup \hat{\gamma}^{(t)} $ is the $t^{\text{th}}$ sample from the parametric distribution $q_{\hat{\omega}}(\theta)$. The approximation of the predictive distribution is a mixture of $T$ Gaussians with uniform weights; therefore, we can compute its mean vector and element-wise variance analytically as follows. \begin{equation} \mathbb{E}[{\mathbf s}_* | {\mathbf m}_*, \mathcal{D}] \approx \frac{1}{T} \sum_{t=1}^T f_{\hat{\gamma}^{(t)}}({\mathbf m}_*), \label{eq:predictivemean} \end{equation} \begin{equation} \begin{aligned} &\Var[[{\mathbf s}_*]_k | {\mathbf m}_*, \mathcal{D}] \approx \underbrace{\frac{1}{T} \sum_{t=1}^T [\sigma_{\hat{\delta}^{(t)}}^2 ({\mathbf m}_*)]_k}_\text{Aleatoric variance} \\ & + \underbrace{\frac{1}{T} \sum_{t=1}^T [f_{\hat{\gamma}^{(t)}}({\mathbf m}_*)]_k^2 - \left( \frac{1}{T} \sum_{t=1}^T [f_{\hat{\gamma}^{(t)}}({\mathbf m}_*)]_k \right)^2}_\text{Epistemic variance}, \end{aligned} \label{eq:predictivevariance} \end{equation} where $\hat{\theta}^{(t)} = \hat{\delta}^{(t)} \cup \hat{\gamma}^{(t)} $ is the $t^{\text{th}}$ sample from the optimized parametric distribution $q_{\hat{\omega}}(\theta)$. The first term of the predictive variance, which we refer to as the aleatoric variance, reflects the aleatoric uncertainty in the reconstruction task, and the remaining residual sum, which we refer to as the epistemic variance, represents the epistemic uncertainty. 
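To make these computations concrete, the following minimal sketch (PyTorch; all names are ours) implements the data-dependent part of the loss $g$ in \eqref{eq:variationalinferencedefinitions}, omitting constant factors and the weight-decay term $h$, which is typically delegated to the optimizer, together with the Monte Carlo estimates in \eqref{eq:predictivemean} and \eqref{eq:predictivevariance}. It anticipates the dropout-based sampling justified in the next paragraph: keeping the networks in training mode keeps dropout active.

\begin{verbatim}
import torch

def gaussian_nll(f_bar, logvar_bar, m, s):
    # One stochastic forward pass; logvar_bar outputs log sigma^2
    # (the pre-exponentiation output of the variance network).
    log_v = logvar_bar(m)
    return (log_v + torch.exp(-log_v) * (s - f_bar(m)) ** 2).mean()

@torch.no_grad()
def mc_predict(f_bar, logvar_bar, m, T=100):
    f_bar.train()                     # keep dropout active
    logvar_bar.train()
    fs = torch.stack([f_bar(m) for _ in range(T)])
    recon = fs.mean(0)                            # predictive mean
    alea = torch.stack([torch.exp(logvar_bar(m))
                        for _ in range(T)]).mean(0)  # aleatoric var.
    epis = (fs ** 2).mean(0) - recon ** 2         # epistemic var.
    return recon, alea, epis
\end{verbatim}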
At this point, note that we must generate samples from the parametric distribution both to solve the optimization problem in \eqref{eq:variationalinference} and to obtain the predictive mean and variance given by \eqref{eq:predictivemean} and \eqref{eq:predictivevariance}. Because we have assumed that the filters of the convolutional layers are mutually independent, one naive way to generate a sample from the parametric distribution is to sample from the distributions in \eqref{eq:bernoullivariationaldistribution}. Sampling from those distributions is equivalent to sampling from a two-component mixture of Gaussians, so we first sample a Bernoulli random variable and, based on that sample, draw from one of the two multivariate Gaussian distributions. Because the scalar $\epsilon$ is assumed to be a very small non-zero constant, generating a sample from the multivariate Gaussian distributions in \eqref{eq:bernoullivariationaldistribution} can be approximated by directly reporting the mean. Thus, the whole process of generating a sample from the parametric distribution $q_\omega(\theta)$ boils down to generating samples from Bernoulli random variables and multiplying them by the adjustable parameters of the parametric distribution. Hence, we can write \begin{equation} \begin{aligned} \tilde{\gamma}^{(n)} &\approx \left\{ \tilde{z}_{i,l}^{(n)} {\mathbf a}_{i,l}^f | \text{sample }\tilde{z}_{i,l}^{(n)} \sim \text{Bernoulli}(p_{i,l}^f) \right\}, \\ \tilde{\delta}^{(n)} &\approx \left\{ \tilde{z}_{j,m}^{(n)} {\mathbf a}_{j,m}^\sigma | \text{sample }\tilde{z}_{j,m}^{(n)} \sim \text{Bernoulli}(p_{j,m}^\sigma) \right\}, \\ \tilde{\theta}^{(n)} &= \tilde{\delta}^{(n)} \cup \tilde{\gamma}^{(n)}. \end{aligned} \end{equation} An interesting observation is that the sampling operation described above resembles the dropout~\cite{Srivastava2014Dropout} operation. Hence, solving the optimization problem in \eqref{eq:variationalinference} boils down to training two neural networks $\bar{f}$ and $\bar{\sigma}^2$, which are the dropout-added versions of the neural networks $f$ and $\sigma^2$ that we want to perform variational inference for, using the function $g$ in \eqref{eq:variationalinferencedefinitions} as a loss function with weight decay parameters $p_{i,l}^f / (2 N_\mathcal{D})$ and $p_{j,m}^\sigma / (2 N_\mathcal{D})$ and with dropout rates $1-p_{i,l}^f$ and $1-p_{j,m}^\sigma$. The resulting weights of the neural networks after the training stage are the optimal parameters $\hat{\omega}$ of the parametric distribution $q_{\hat{\omega}}(\theta)$. Furthermore, calculating the approximation of the predictive distribution using \eqref{eq:approximationofpredictivedistribution} boils down to feeding the test measurement vector to the trained dropout-added neural networks $\bar{f}$ and $\bar{\sigma}^2$ $T$ times while dropout remains active. To obtain a reconstruction, we can either generate samples from the approximation of the predictive distribution or compute its mean using \eqref{eq:predictivemean}. To obtain the epistemic and aleatoric uncertainty maps, we use the expression in \eqref{eq:predictivevariance}.

\section{Experiments and Results} \label{sec:experiments}

In this section, we present experimental results demonstrating the behavior of our proposed approach.
Although the proposed framework can be applied to any inverse problem that can be cast as the optimization problem in \eqref{eq:mapestimationproblem}, we evaluate it on basic MRI and CT reconstruction problems as representative applications. We investigate the behavior of epistemic and aleatoric uncertainties under various experimental conditions and show that the epistemic and aleatoric uncertainty information provided by the proposed framework is consistent with the definitions of those uncertainties. We then compare the image reconstruction performance of the proposed framework with other image reconstruction methods to demonstrate its image reconstruction capability. The Supplementary Material also contains a toy problem and an additional experiment on the recalibration~\cite{Kulesov2018recalibration} of the proposed framework.

\subsection{Experimental Setup} \label{ssec:experimentalsetup}

\textbf{Datasets:} For the MRI reconstruction problem, we extracted $530$ $256 \times 256$ target images from the IXI Dataset~\cite{ixidataset}. Each target image was normalized between $0$ and $1$. We split the $530$ target images into training, validation, and test datasets containing $500$, $15$, and $15$ target images, respectively. The training, validation, and test datasets were constructed such that they contain target images collected from different subjects. The measurement vectors, i.e., k-space measurements, were generated by computing the subsampled Fourier transform of the target images. For the CT reconstruction problem, we extracted $530$ $512 \times 512$ target images from the LUNA Dataset~\cite{lunadataset}. Each image was resized to $256 \times 256$ pixels and normalized between $0$ and $1$. The training dataset was created by using $500$ target images, and the remaining $30$ images were split into two sets to generate validation and test datasets, each containing $15$ target images. The training, validation, and test datasets were constructed such that they contain target images collected from different subjects. The measurement vectors, i.e., sinogram data, were generated by computing the sparse Radon transform of the target images. Finally, we added white Gaussian noise to the measurement vectors to obtain the final measurement vectors used in our experiments, where the SNR of a measurement vector is defined as follows: \begin{equation} \text{SNR}({\mathbf m}_{\text{noiseless}} + {\mathbf n}, {\mathbf m}_{\text{noiseless}}) = 20 \log_{10} \left( \frac{ \| {\mathbf m}_{\text{noiseless}} \|_2}{\|{\mathbf n}\|_2} \right). \end{equation}

\textbf{Training and Inference Procedures:} Training of the proposed framework refers to solving the optimization problem in \eqref{eq:variationalinference}, where the first term of the objective function is replaced with its mini-batch approximation~\cite{Gal2016MCDropout}. We obtained the neural network $\bar{f}$ by fixing the number of iterations $K$ of the PGD to $5$ and taking the starting point ${\mathbf s}^{(0)}$ to be the result of zero-filling and filtered backprojection for the MRI and CT reconstruction problems, respectively. Each residual block of the neural network $\bar{f}$ contains $5$ convolutional layers, and each convolutional layer is followed by a dropout layer and the leaky ReLU activation function. We used the U-Net architecture for the neural network $\bar{\sigma}^2$, where each convolutional layer is followed by a dropout layer.
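As a small sketch of this noise model (NumPy; our notation), a noiseless measurement vector can be corrupted at a prescribed SNR consistent with the definition above as follows:

\begin{verbatim}
import numpy as np

def add_noise(m_clean, snr_db, rng=None):
    # Scale white Gaussian noise so that
    # 20 log10(||m_clean|| / ||n||) equals snr_db.
    rng = np.random.default_rng() if rng is None else rng
    n = rng.standard_normal(m_clean.shape)
    n *= np.linalg.norm(m_clean) / (np.linalg.norm(n)
                                    * 10 ** (snr_db / 20))
    return m_clean + n
\end{verbatim}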
For the MRI reconstruction problem, the batch size for training was set to $4$, and the learning rate was fixed to $1\times 10^{-4}$. For the CT reconstruction problem, we used a batch size of $2$ and set the learning rate to $1\times 10^{-5}$. The initial step size $\alpha$ of the PGD algorithm was set to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The dropout rate of the dropout layers of the neural networks $\bar{f}$ and $\bar{\sigma}^2$ was set to $0.1$, and the neural networks $\bar{f}$ and $\bar{\sigma}^2$ were trained for $100$ epochs. At the inference stage, a given measurement vector was fed to the neural networks $\bar{f}$ and $\bar{\sigma}^2$ $T=100$ times while the dropout was still activated. The reconstructed image was then obtained by following the approximation in \eqref{eq:predictivemean}. The epistemic and aleatoric uncertainty maps were obtained by calculating three times the epistemic and aleatoric standard deviations given by \eqref{eq:predictivevariance}. \subsection{Epistemic Uncertainty} \label{ssec:epistemicuncertainty} In theory, epistemic uncertainty is the uncertainty created by the lack of training data around the test data and can be explained away by making appropriate changes to the training data. In this subsection, we investigate the characteristics of the epistemic uncertainty information provided by the proposed framework and show that the behavior and results of our approach are consistent with these theoretical characteristics. To show that the epistemic uncertainty output by the proposed framework reflects the uncertainty caused by the lack of training data that can explain the test sample well, we consider two scenarios. In the first scenario, we assess the impact of the size of the training dataset on the inferred epistemic uncertainty. A good uncertainty characterization method should yield larger epistemic uncertainties for smaller training datasets, as it is less probable for such data to represent a random test sample well. In our experiments, we generated five subsets of the MRI training dataset containing $10, 50, 125, 250$, and $500$ examples and trained five instances of the proposed framework using these subsets as training datasets. Then, for a given test measurement, we obtained the epistemic uncertainty maps using the five trained instances of the proposed framework. We repeated the same procedure for the CT reconstruction problem. The resulting epistemic uncertainty maps are illustrated in Figure \ref{fig:reducibilityofepistemicuncertainty}. For both MRI and CT reconstruction problems, epistemic uncertainty achieves its highest value when we use only $10$ training examples. Then, as we add more examples to the training dataset, the epistemic uncertainty on the same test image decreases. To confirm these visual results quantitatively, we calculated the average epistemic uncertainty per pixel over the test dataset as a function of the size of the training dataset. Figure \ref{fig:reducibilityofepistemicuncertaintyplot} shows the quantitative results for both MRI and CT reconstruction problems. From this figure, we observe that an increase in the number of training examples leads to a decrease in the overall epistemic uncertainty, which is consistent with the visual results presented in Figure \ref{fig:reducibilityofepistemicuncertainty}.
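The per-pixel averaging behind Figure \ref{fig:reducibilityofepistemicuncertaintyplot} can be sketched in a few lines; here, the dictionary \texttt{epistemic\_maps}, which maps each training-dataset size to the epistemic standard deviation maps computed over the test dataset, is a hypothetical container.
\begin{verbatim}
# Sketch of the averaging behind the epistemic
# uncertainty curves. epistemic_maps[size] is assumed
# to be an array of shape (num_test_images, H, W) of
# epistemic standard deviations from the instance
# trained with `size` examples.
import numpy as np

def summarize(epistemic_maps):
    sizes = sorted(epistemic_maps)
    mean = [float(np.mean(epistemic_maps[s]))
            for s in sizes]
    std = [float(np.std(epistemic_maps[s]))
           for s in sizes]
    return sizes, mean, std
\end{verbatim}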
For the second scenario, we insert an artificial feature that is not well-represented by the training dataset into a test target image. Then, we vary the intensity of the inserted abnormal feature to modify the degree of deviation of the test data from the training data. A good uncertainty characterization method would report larger epistemic uncertainty as the test sample deviates more from the training data. In our experiments, we first trained the proposed framework on the MRI training dataset. Next, we picked a target image from the test dataset and inserted a $25 \times 25$ square with an intensity value of $1.0$ into the test target image. Then, we obtained the epistemic uncertainty map. We repeated the same procedure for different values of the intensity of the inserted abnormal feature and for the CT reconstruction problem. Figure \ref{fig:differentintensity} shows the epistemic uncertainty maps obtained by the proposed framework for different intensity values of the inserted abnormal feature for both MRI and CT reconstruction problems. We observe that the epistemic uncertainty in the abnormal region decreases as the intensity of the inserted square approaches a value that makes the square visually similar to the target images in the training dataset. Thus, this experiment shows that the epistemic uncertainty map obtained by the proposed framework exhibits high epistemic uncertainty for test data that are not well-represented by the training data, confirming that the proposed framework successfully captures the uncertainty caused by the lack of training data around the test data. Next, we demonstrate that the epistemic uncertainty provided by the proposed framework possesses the reducibility property. For the first scenario, we have already shown in Figure \ref{fig:reducibilityofepistemicuncertainty} and Figure \ref{fig:reducibilityofepistemicuncertaintyplot} that we can reduce the epistemic uncertainty by collecting more training data having characteristics similar to the test data. For the second scenario, if the proposed framework captures the epistemic uncertainty well, we expect that using training examples containing features similar to the abnormal feature encountered in the test data would result in reduced epistemic uncertainty. To this end, we added $25 \times 25$ white squares to the training target images and trained the proposed framework with this training data containing the abnormal features. We repeated the same procedure for the CT reconstruction problem. Figure \ref{fig:outofdataexample} shows the resulting epistemic uncertainty maps obtained by the proposed framework for both CT and MRI reconstruction problems. We observe that the epistemic uncertainty around the white square decreases significantly after we add target images containing white squares to the training dataset, confirming that the epistemic uncertainty provided by the proposed framework can be explained away with additional training data that represent the test data well. These results confirm that the proposed framework is capable of successfully quantifying epistemic uncertainty. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures_main/epistemic_reducibility.png} \caption{Epistemic uncertainty maps on an MRI (top) and a CT (bottom) test sample as a function of the training dataset size (TDS). As we increase the number of examples in the training dataset, the overall epistemic uncertainty decreases.
For the MRI experiments, the percentage of observed k-space coefficients is $20\%$, and the SNR is $70$ dB. For the CT experiments, the number of views is $60$, and the SNR is $70$ dB.} \label{fig:reducibilityofepistemicuncertainty} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures_main/proximity.png} \caption{Epistemic uncertainty as a function of the intensity of the abnormal feature. The first row contains the ground truth test images, i.e., the target test images. The second row contains the corresponding epistemic uncertainty maps obtained by the proposed framework. As the inserted square deviates more from the pattern of intensities in the test image (which would be well-represented by the training data), the inferred epistemic uncertainty in the abnormal region increases. For the MRI experiments, the percentage of observed k-space coefficients is $20\%$, and the SNR is $70$ dB. For the CT experiments, the number of views is $60$, and the SNR is $70$ dB.} \label{fig:differentintensity} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures_main/out_of_distribution_example.png} \caption{Effect of the structure of the training dataset on epistemic uncertainty maps. The first row contains the ground truth test images, i.e., the target test images, and the second row contains the corresponding epistemic uncertainty maps. The images in the first and fourth columns show the performance of the proposed framework on normal data (i.e., no abnormal features in the training and test data). The images in the second and fifth columns show the performance of the proposed framework in a case where an abnormal feature exists in the test data. The images in the third and sixth columns show the performance of the proposed framework with abnormal features present in both training and test data. For the MRI experiments, the percentage of observed k-space coefficients is $20\%$, and the SNR is $70$ dB. For the CT experiments, the number of views is $60$, and the SNR is $70$ dB.} \label{fig:outofdataexample} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures_main/epistemic_plot_combined.png} \caption{Mean and standard deviation of epistemic uncertainty as a function of the training dataset size. The mean and standard deviation are calculated using all pixels in the test dataset. For the MRI experiments, the percentage of observed k-space coefficients is $20\%$, and the SNR is $70$ dB. For the CT experiments, the number of views is $60$, and the SNR is $70$ dB. Mean SSIM values along with the standard deviations for the corresponding reconstructions are provided for reference.} \label{fig:reducibilityofepistemicuncertaintyplot} \end{figure} \subsection{Aleatoric Uncertainty} \label{ssec:aleatoricuncertainty} We now focus on aleatoric uncertainty characterization using our proposed framework. The experiments presented here demonstrate successful aleatoric uncertainty characterization; in particular, the aleatoric uncertainty captured by the proposed framework is high in regions where the reconstruction is challenging due to the ill-posed nature of the inverse problem. Furthermore, we show that the overall aleatoric uncertainty provided by the proposed framework is an indication of how challenging the inverse problem is. For this analysis, we trained the proposed framework for various configurations of the imaging setups.
We considered different percentages of observed k-space coefficients and SNR values for the MRI reconstruction problem and different numbers of views and SNR values for the CT reconstruction problem. Figure \ref{fig:aleatoricuncertaintyandchallengingness} shows the starting points of the proposed framework, i.e., the results of zero-filling and filtered backprojection, and the aleatoric uncertainty maps for different test measurement vectors generated from the two test target images using different configurations of the MR and CT imaging setups. For both MRI and CT reconstruction problems, we observe that the aleatoric uncertainty is high in regions where the reconstruction is challenging for the unrolled network, such as the small localized structures and thin edges in the target images. On the other hand, we observe that the aleatoric uncertainty is low in regions where the corruption is negligible or can be recovered using spatial information, such as the smooth regions of the target images. This behavior can be understood analytically with a careful inspection of the objective function of the optimization problem in \eqref{eq:variationalinference}. Solving this optimization problem forces the neural network $\bar{\sigma}^2$ to output high values for the pixels where the squared error between the output of the neural network $\bar{f}$ and the target image is high. Moreover, we observe that the overall aleatoric uncertainty levels increase as the SNR decreases for a fixed percentage of observed k-space coefficients/number of views. Similarly, for a fixed value of the SNR, we observe a decrease in the overall aleatoric uncertainty levels as the percentage of observed k-space coefficients/number of views increases. Figure \ref{fig:aleatoricuncertaintyandchallengingnessplot} shows the average aleatoric uncertainty over all pixels in the test dataset for different configurations of the imaging setups. From this figure, we observe that the overall aleatoric uncertainty increases when the SNR decreases for a fixed percentage of observed k-space coefficients/number of views or when the percentage of observed k-space coefficients/number of views decreases for a fixed value of the SNR. Hence, the quantitative results shown in Figure \ref{fig:aleatoricuncertaintyandchallengingnessplot} confirm our visual observations about the overall aleatoric uncertainty. This result can also be understood by analyzing the objective function of the optimization problem in \eqref{eq:variationalinference}. Because the neural network $\bar{f}$ does not have infinite learning capacity in practice, we expect the squared error between the output of the trained neural network $\bar{f}$ and the target image to increase as the reconstruction problem gets more challenging, leading to higher overall aleatoric uncertainty levels for the relatively more challenging image reconstruction problems. \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{figures_main/aleatoric_uncertainty_and_error_2.png} \caption{Effect of the configuration of the imaging setup on aleatoric uncertainty. The first and third rows contain the ground truth test images, i.e., the target test images, as well as the starting points obtained by applying zero-filling (ZF) or filtered backprojection (FBP) to the observations.
The second and fourth rows contain the corresponding aleatoric uncertainty maps obtained by the proposed framework for different percentages of observed k-space coefficients (POC), numbers of views (NOV), and signal-to-noise ratios (SNR). The regions where the reconstruction from the starting point is challenging are the regions for which the aleatoric uncertainty is high. Moreover, the overall aleatoric uncertainty increases as the reconstruction problem gets more challenging in terms of data quality and quantity limitations.} \label{fig:aleatoricuncertaintyandchallengingness} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures_main/aleatoric_plot_combined.png} \caption{Mean and standard deviation of aleatoric uncertainty for different configurations of the imaging setups. For the MRI experiments, they are calculated for different percentages of observed k-space coefficients (POC) and signal-to-noise ratios (SNR). For the CT experiments, they are calculated for different numbers of views (NOV) and signal-to-noise ratios (SNR). The averages and standard deviations are calculated using all pixels in the test dataset. Mean SSIM values along with the standard deviations for the corresponding reconstructions are provided for reference.} \label{fig:aleatoricuncertaintyandchallengingnessplot} \end{figure} \subsection{Reconstruction Performance} \label{ssec:reconstructionperformance} In this subsection, we demonstrate the reconstruction performance of the proposed framework. We compare the proposed framework with six methods: (1) zero-filling (ZF) / filtered backprojection (FBP), (2) total variation reconstruction (TV), (3) a PGD-based deep unrolling method (PUM), (4) a PGD-based deep unrolling method without batch normalization (PUMw/oBN), (5) the proposed only epistemic model (POEM), and (6) the proposed only aleatoric model (POAM). The methods ZF/FBP and TV are baseline reconstruction methods that we use to demonstrate how challenging the reconstruction problem is. PUM is a deep unrolling method using PGD. Each residual block of PUM consists of a series of convolutional layers, batch normalization layers, and an activation function. PUMw/oBN is the same model as PUM, except that there are no batch normalization layers in the residual blocks. POEM is the variant of the proposed framework that assumes that the covariance matrix of the likelihood function in \eqref{eq:proposedlikelihood} is a fixed model parameter. POEM is also the probabilistic model that was used in the experiments of the preliminary version of this paper~\cite{Ekmekci2021UncertaintyUnfoldingPreliminary}. As its name implies, POEM quantifies only the epistemic uncertainty, not the aleatoric uncertainty. POAM is another variant of the proposed framework, in which a point estimate (the MAP estimate) of the parameters of the likelihood function in \eqref{eq:proposedlikelihood} is used. POAM is capable of quantifying the aleatoric uncertainty, but not the epistemic uncertainty, since it relies only on that point estimate of the parameters. Implementation details of these methods are provided in the Supplementary Material. Table \ref{tab:reconstructionperformance} shows the performance of the seven methods for the CT and MRI reconstruction problems under different configurations of the imaging setups. Among the seven reconstruction methods, FBP and ZF achieve the worst reconstruction performance. The TV method improves upon FBP and ZF by promoting a piecewise-constant reconstruction.
The deep unrolling method PUM surpasses the TV method by implicitly learning the prior from the training dataset. PUM was trained using a small mini-batch size, since backpropagation requires storing intermediate variables having the same spatial dimensions as the target image in memory. We empirically observed that removing the batch normalization layers from the unrolled network leads to an increase in reconstruction performance. Specifically, we observe that PUMw/oBN outperforms PUM in all the experiments. This empirical observation is mathematically justified in \cite{Yong2020BatchNormalization}, where Yong \emph{et~al.}\ showed that batch normalization introduces a high level of noise for small mini-batch sizes, making training difficult. This observation is the main reason why the unrolled network $f$ in the proposed framework does not contain any batch normalization layers. On the other hand, we experimentally observed that adding batch normalization layers to the neural network $\sigma^2$ is necessary to have a stable training stage. Comparing POAM with PUMw/oBN, POAM shows an average SSIM decrease of $0.022$ for the MRI reconstruction problem and $0.002$ for the CT reconstruction problem. On the other hand, when compared to the state-of-the-art deep unrolling method PUM, POAM achieves average SSIM gains of $0.031$ and $0.011$ for the MRI and CT reconstruction problems, respectively. The reconstruction performance of POEM is lower than that of PUMw/oBN because POEM uses dropout after every convolutional layer, which is a strong form of regularization. Similarly, we observe that the reconstruction performance of POEM is slightly worse than that of POAM. For the same reason, the reconstruction performance of the proposed framework shows a decrease compared to POAM. Comparing the proposed framework with POAM, the proposed framework shows an average SSIM decrease of $0.010$ for the MRI reconstruction problem and $0.007$ for the CT reconstruction problem. We observe a similar trend for the proposed framework and PUMw/oBN. On the other hand, the proposed framework achieves average SSIM gains of $0.010$ and $0.006$ for the MRI and CT reconstruction problems, respectively, when compared to POEM. Similarly, the proposed framework surpasses the state-of-the-art deep unrolling method PUM. Due to space limitations, only representative visual results are presented in Figure \ref{fig:reconstructionperformancevisuals}. Detailed visual results are provided in the Supplementary Material. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{figures_main/reconstruction_merged_with_zoom_article.png} \caption{Visual comparison of the image reconstruction performance of zero-filling (ZF) / filtered backprojection (FBP), total variation reconstruction (TV), the state-of-the-art PGD-based deep unrolling method without batch normalization (PUMw/oBN), and the proposed method.
The proposed method achieves reconstruction performance comparable to the state-of-the-art deep unrolling method PUMw/oBN while providing uncertainty quantification.} \label{fig:reconstructionperformancevisuals} \end{figure*} \begin{table*}[t] \centering \caption{Comparison of average SSIM for different image reconstruction methods.} \label{tab:reconstructionperformance} \begin{tabular}{cccc|cccccccc} \toprule & POC & NOV & SNR & ZF & FBP & TV & PUM & PUMw/oBN & POAM & POEM & Proposed \\ \midrule \multirow{4}{*}{MRI} & 10 & - & 10 & 0.4910 & - & 0.7033 & 0.7979 & 0.8144 & 0.7773 & 0.7727 & 0.7674 \\ & 10 & - & 70 & 0.5448 & - & 0.7261 & 0.8032 & 0.9227 & 0.8996 & 0.8638 & 0.8784 \\ & 20 & - & 10 & 0.5609 & - & 0.7913 & 0.8611 & 0.8799 & 0.8589 & 0.8517 & 0.8568 \\ & 20 & - & 70 & 0.6774 & - & 0.8414 & 0.9231 & 0.9780 & 0.9726 & 0.9407 & 0.9642 \\ \midrule \multirow{4}{*}{CT} & - & 36 & 40 & - & 0.4919 & 0.7657 & 0.9053 & 0.9228 & 0.9178 & 0.9068 & 0.9129 \\ & - & 36 & 70 & - & 0.5895 & 0.8232 & 0.9175 & 0.9319 & 0.9290 & 0.9133 & 0.9181 \\ & - & 60 & 40 & - & 0.6726 & 0.8637 & 0.9390 & 0.9535 & 0.9520 & 0.9422 & 0.9467 \\ & - & 60 & 70 & - & 0.7846 & 0.9204 & 0.9548 & 0.9625 & 0.9626 & 0.9507 & 0.9576 \\ \bottomrule \end{tabular} \end{table*} \section{Discussion} \label{sec:discussion} Quantification of epistemic uncertainty is crucial for learning-based image reconstruction methods, especially in safety-critical imaging applications, for quantifying the confidence in a reconstruction obtained using a model learned from available, potentially limited or unrepresentative training data. Our experimental results presented in Section \ref{sec:experiments} showed that the proposed framework successfully captures the epistemic uncertainty. The epistemic uncertainty provided by the proposed framework can be used to assess how uncertain the learning-based image reconstruction method is and to detect cases where the input contains abnormal features not present in the training data. For ill-posed inverse problems encountered in most imaging problems, inherent uncertainty on the target image for a given measurement vector is inevitable. Hence, it is essential to quantify the aleatoric uncertainty for imaging problems to capture the inherent randomness in the reconstruction task. Our experiments presented in Section \ref{sec:experiments} demonstrated that the proposed framework is capable of capturing the aleatoric uncertainty in the sense that the aleatoric uncertainty provided by the proposed framework indicates the regions where the reconstruction is expected to be challenging for the unrolled network. The aleatoric uncertainty provided by the proposed framework can be utilized to determine possible errors in the reconstructed image and can be used as a mechanism to further assess the reliability of the reconstructed image. As a result, the aleatoric and epistemic uncertainties provided by the proposed framework open the possibility of developing more accurate, robust, trustworthy, uncertainty-aware, learning-based image reconstruction and analysis methods. The benefits of obtaining the epistemic and aleatoric uncertainty maps come at a price. Because the proposed framework requires feeding the measurement vector into the neural networks $T$ times for inference, the inference time of the proposed framework increases by a factor of $T$ compared to the state-of-the-art deep unrolling method PUM.
To shorten the inference time of the proposed framework, we can perform those $T$ forward passes in parallel. Assuming that the GPU memory allows using a batch size of $B$ at the inference stage, the proposed framework requires only $\lceil T/B \rceil$ forward passes for inference. If we have multiple GPUs, the inference time of the proposed framework can be reduced further. Hence, the proposed framework can achieve shorter inference times at the expense of using more computational power. Another way to shorten the inference time of the proposed framework is to decrease the number of parameters of the proposed framework so that a larger batch size $B$ can be used to parallelize the inference stage. To that end, we can design a variant of the proposed framework that uses a dual-head network. For the sake of brevity, we have not discussed this variant; however, a brief discussion of it is provided in the Supplementary Material. \section{Conclusion} \label{sec:conclusion} In this paper, we utilized the ideas of deep unrolling and Bayesian neural networks to propose a learning-based image reconstruction framework that is capable of quantifying epistemic and aleatoric uncertainties while incorporating the imaging observation model into the reconstruction process. Our experimental results showed that the proposed framework quantifies the epistemic and aleatoric uncertainties successfully while providing reconstruction performance comparable to state-of-the-art deep unrolling methods. The proposed framework can be applied to a broad set of imaging problems and can be easily implemented in deep learning frameworks. We hope that the proposed framework and the provided discussion on epistemic and aleatoric uncertainties for imaging problems motivate further research on uncertainty characterization for imaging problems and on leveraging the uncertainty information for image reconstruction and analysis tasks. \bibliographystyle{IEEEtran} \section{Introduction} \label{sec:introduction} \IEEEPARstart{T}{his} article concerns imaging problems where the target image is observed through a linear transformation followed by additive noise. This observation model is quite general and has been used to model a variety of imaging techniques such as computed tomography (CT)~\cite{Elbakri2002CT}, magnetic resonance imaging (MRI)~\cite{Fessler2010MRI}, microscopy~\cite{Sarder2006Microscopy}, and radar imaging~\cite{Potter2010Radar}. For this observation model, classical model-based iterative reconstruction methods cast the image reconstruction problem as a regularized least squares problem whose objective function is the sum of a data-fidelity term and a regularizer. The observation model of the imaging problem determines the form of the data-fidelity term, and the prior knowledge about the target image governs the form of the regularizer. After obtaining an analytical expression for the data-fidelity term and choosing a regularization function, such as the total variation (TV) semi-norm~\cite{Chambolle2004TV}, the resulting optimization problem is solved iteratively by using an appropriate iterative optimization algorithm such as the alternating direction method of multipliers (ADMM)~\cite{Boyd2011ADMM}, half-quadratic splitting (HQS), or the proximal gradient descent (PGD) method~\cite{Parikh2014Proximal}.
Inspired by model-based iterative reconstruction methods and the pioneering work of Gregor and LeCun~\cite{Gregor2010Unfolding} on sparse coding, a deep learning-based image reconstruction methodology often referred to as deep unrolling~\cite{Aggarwal2019MoDL, Adler2018LearnedPrimalDual, Borgerding2017VAMP, Chun2018BCDNet, Diamond2018Unrolled, Liu2019DPUnrolling, Yang2016ADMMNet, Mardani2018NPGD} has emerged to bridge the gap between model-based image reconstruction methods and purely deep neural network-based image reconstruction methods. The common theme among deep unrolling methods is that they design a deep neural network by replacing some parts of an unrolled iterative reconstruction algorithm with trainable parameters and neural networks. The main advantages of deep unrolling methods are that they explicitly incorporate the observation model into the neural network, thereby enforcing data consistency, and that the resulting deep neural network is interpretable in the sense that it is essentially a classical model-based reconstruction algorithm with some learnable components. \IEEEpubidadjcol Although deep unrolling methods have the advantage of incorporating domain knowledge and the physics of the imaging problem into the neural network architecture, existing deep unrolling methods do not provide any predictive uncertainty information about the reconstructed image since they rely on non-Bayesian (standard) neural networks to reconstruct the target image from the corrupted measurements. This severely limits their applicability in safety-critical real-world imaging applications such as medical imaging, where uncertainty information is crucial for making accurate decisions. Our perspective is that we can solve this problem by taking a Bayesian approach to uncertainty estimation and using Bayesian neural networks (BNNs)~\cite{Neal1995BayesianNN}. BNNs are probabilistic models that can quantify the inherent uncertainty in the reconstruction task, which is referred to as the \emph{aleatoric} uncertainty~\cite{Kendall2017BayesianNN}, and the uncertainty on the parameters of a neural network, which is referred to as the \emph{epistemic} uncertainty~\cite{Kendall2017BayesianNN}, by placing a probability distribution on the parameters and computing the posterior distribution of the parameters given a training dataset. By using BNNs together with the idea of deep unrolling, we claim that we can provide predictive uncertainty information for the reconstructed image while preserving the advantages of deep unrolling. In this article, we present an uncertainty quantifying learning-based image reconstruction framework based on the ideas of deep unrolling and Bayesian neural networks. In the proposed framework, we first define a likelihood function whose form originates from the principles of deep unrolling. Then, we place a prior distribution on the parameters of the likelihood function and obtain an approximation of the posterior distribution of the parameters using a scalable variational inference method called Monte Carlo (MC) Dropout~\cite{Gal2016MCDropout}. Next, for a given test measurement and a training dataset, we follow the principles of BNNs and compute the predictive distribution via Monte Carlo integration. Finally, we compute the mean and element-wise variance of the predictive distribution to obtain the reconstructed image and the epistemic and aleatoric uncertainty maps, respectively.
We evaluate the proposed framework on MRI and CT reconstruction problems and show that the proposed framework can achieve reconstruction performance comparable to a state-of-the-art deep unrolling method and provide epistemic and aleatoric uncertainty information about the reconstructed image while incorporating domain knowledge into the reconstruction process. Moreover, we demonstrate the characteristics of the epistemic and aleatoric uncertainties provided by the proposed framework to motivate further research on leveraging the uncertainty information for image reconstruction and analysis tasks. \section{Related Work} \label{sec:relatedwork} The problem of uncertainty quantification for image reconstruction tasks, e.g.,~\cite{Adler2019CWGAN, Bardsley2012MCMC, Bohm2019VAE, Cai2018aMCMC, Cai2018bMAP, Cochrane2022BNN, Dasgupta2021NF, Edupuganti2021VAE, Ekmekci2021UncertaintyPnP, Ekmekci2021UncertaintyUnfoldingPreliminary, Herrmann2019Bregman, Hoffmann2021BNNensemble, Kitichotkul2021SURE, Liu2019Heteroscedastic, Repetti2019UQCredibleRegions, Schlemper2018UncertaintyMRI, Shang2021BNN, Siahkoohi2020BNN, Sun2020DPI, Tanno2019uncertainty, Tonolini2020VarInf, Xue2019UncertaintyPhaseImaging}, has recently attracted renewed attention in the computational imaging community due to advancements in deep generative modeling~\cite{BondTaylor2021GenerativeSurvey} and BNNs~\cite{Neal1995BayesianNN, Gal2016MCDropout, Kendall2017BayesianNN}. The state-of-the-art deep learning-based image reconstruction methods performing uncertainty characterization, e.g.,~\cite{Adler2019CWGAN, Bohm2019VAE, Cochrane2022BNN, Dasgupta2021NF, Edupuganti2021VAE, Ekmekci2021UncertaintyPnP, Ekmekci2021UncertaintyUnfoldingPreliminary, Herrmann2019Bregman, Hoffmann2021BNNensemble, Kitichotkul2021SURE, Liu2019Heteroscedastic, Schlemper2018UncertaintyMRI, Shang2021BNN, Siahkoohi2020BNN, Sun2020DPI, Tanno2019uncertainty, Tonolini2020VarInf, Xue2019UncertaintyPhaseImaging}, can be divided into two groups: deep generative model-based reconstruction methods and BNN-based reconstruction methods. Deep generative model-based reconstruction methods, e.g.,~\cite{Adler2019CWGAN, Bohm2019VAE, Dasgupta2021NF, Edupuganti2021VAE, Sun2020DPI, Tonolini2020VarInf}, seek to approximate the posterior distribution of the target image with the help of a generative model to characterize the inherent uncertainty in the reconstruction task, i.e., the uncertainty on the target image for a given measurement vector. For example, Adler and Oktem~\cite{Adler2019CWGAN} approximate the posterior distribution of the target image given a measurement vector using a conditional Wasserstein generative adversarial network~\cite{Mirza2014CGAN, ArjovskyWGAN}. Bohm \emph{et~al.}~\cite{Bohm2019VAE} use a variational autoencoder~\cite{Kingma2013auto} to represent the prior distribution of the target image and perform variational inference to approximate the posterior distribution of the latent variable given a measurement vector. Sun and Bouman~\cite{Sun2020DPI} utilize another popular generative model, a flow-based model~\cite{Kobyzev2021Normalizingflows, Papamakarios2021Normalizingflows}, to approximate the posterior distribution of the target image given a measurement vector and adjust the parameters of the flow-based model by minimizing the reverse Kullback-Leibler divergence~\cite{Papamakarios2021Normalizingflows} between the output distribution of the flow-based model and the posterior distribution.
After training the generative model, the uncertainty on the target image for a given measurement vector can be quantified by calculating the sample variance of samples generated from the approximation of the posterior distribution of the target image. While deep generative model-based reconstruction methods aim to quantify the inherent uncertainty in the reconstruction task, the goal of Bayesian neural network-based reconstruction methods, e.g.,~\cite{Cochrane2022BNN, Ekmekci2021UncertaintyPnP, Ekmekci2021UncertaintyUnfoldingPreliminary, Hoffmann2021BNNensemble, Schlemper2018UncertaintyMRI, Shang2021BNN, Siahkoohi2020BNN, Tanno2019uncertainty, Xue2019UncertaintyPhaseImaging}, is to quantify the uncertainty on the parameters of the statistical model, either alone or together with the inherent uncertainty in the reconstruction task. To the best of the authors' knowledge, Schlemper \emph{et~al.}~\cite{Schlemper2018UncertaintyMRI} presented the first two BNN-based image reconstruction methods for the MRI reconstruction problem, demonstrating the potential of BNNs for uncertainty quantification in imaging problems. Subsequently, many BNN-based image reconstruction methods were developed for various problems such as neuroimage enhancement~\cite{Tanno2019uncertainty}, phase imaging~\cite{Xue2019UncertaintyPhaseImaging}, seismic imaging~\cite{Siahkoohi2020BNN}, computational optical form measurements~\cite{Hoffmann2021BNNensemble}, single-pixel imaging~\cite{Shang2021BNN}, imaging through scattering media~\cite{Cochrane2022BNN}, and more general image reconstruction problems~\cite{Ekmekci2021UncertaintyPnP, Ekmekci2021UncertaintyUnfoldingPreliminary}. Table \ref{tab:relatedworkcomparison} summarizes the functional models and the types of uncertainties quantified by BNN-based image reconstruction methods. \begin{table}[t] \centering \caption{High-level comparison of Bayesian neural network-based image reconstruction methods} \label{tab:relatedworkcomparison} \begin{tabular}{lcc} \toprule Method & Functional Model & Quantified Uncertainties \\ \midrule Schlemper \emph{et~al.}~\cite{Schlemper2018UncertaintyMRI} & U-Net~\cite{Ronneberger2015UNet} & Epistemic \& Aleatoric \\ Schlemper \emph{et~al.}~\cite{Schlemper2018UncertaintyMRI} & DCCNN~\cite{Schlemper2018DCCNN} & Epistemic \& Aleatoric \\ Tanno \emph{et~al.}~\cite{Tanno2019uncertainty} & ESPCN~\cite{Shi2016ESCPCN} & Epistemic \& Aleatoric \\ Xue \emph{et~al.}~\cite{Xue2019UncertaintyPhaseImaging} & U-Net~\cite{Ronneberger2015UNet} & Epistemic \& Aleatoric \\ Siahkoohi \emph{et~al.}~\cite{Siahkoohi2020BNN} & DIP~\cite{Lempitsky2018DIP} & Epistemic \\ Hoffmann \emph{et~al.}~\cite{Hoffmann2021BNNensemble} & U-Net~\cite{Ronneberger2015UNet} & Epistemic \\ Shang \emph{et~al.}~\cite{Shang2021BNN} & U-Net~\cite{Ronneberger2015UNet} & Epistemic \& Aleatoric \\ Ekmekci and Cetin~\cite{Ekmekci2021UncertaintyPnP} & DRUNet~\cite{Zhang2021plug} & Epistemic \\ Ekmekci and Cetin~\cite{Ekmekci2021UncertaintyUnfoldingPreliminary}$^*$ & Deep Unrolling & Epistemic \\ Cochrane \emph{et~al.}~\cite{Cochrane2022BNN} & U-Net~\cite{Ronneberger2015UNet} & Epistemic\\ Proposed Framework & Deep Unrolling & Epistemic \& Aleatoric\\ \bottomrule \multicolumn{3}{l}{$^*$Preliminary version of this work} \end{tabular} \end{table} Table \ref{tab:relatedworkcomparison} highlights the main differences between the proposed framework and the aforementioned BNN-based image reconstruction methods.
The main difference between the proposed framework and the methods presented in \cite{Schlemper2018UncertaintyMRI, Tanno2019uncertainty, Xue2019UncertaintyPhaseImaging, Hoffmann2021BNNensemble, Shang2021BNN, Cochrane2022BNN, Siahkoohi2020BNN} is that the proposed framework utilizes the idea of deep unrolling to integrate the observation model into the reconstruction process. Incorporation of physics-based models through data-consistency layers provides some level of interpretability. The DCCNN~\cite{Schlemper2018DCCNN}-based method presented in \cite{Schlemper2018UncertaintyMRI} contains data-consistency layers; however, the data-consistency layer in \cite{Schlemper2018UncertaintyMRI} leverages the characteristic properties of the forward operator of the MRI observation model, making it highly specialized for MRI reconstruction. On the other hand, the proposed framework only requires the computation of the adjoint of the forward operator of the observation model, which is a considerably less restrictive requirement. If the forward operator deviates from a Fourier operator, the data-consistency layer of the DCCNN-based method requires a matrix inversion, which is not computationally feasible for large-scale inverse problems. The difference between the proposed framework and the framework presented in \cite{Ekmekci2021UncertaintyPnP} lies in the difference between end-to-end models and Plug-and-Play (PnP) methods~\cite{Venkatakrishnan2013PnP}. While the BNN-based image reconstruction method presented in \cite{Ekmekci2021UncertaintyPnP} is built upon the idea of PnP priors, which does not require end-to-end training, the proposed framework uses a deep unrolled network as its functional model, trained in an end-to-end manner. We note that the preliminary version of this work appeared in \cite{Ekmekci2021UncertaintyUnfoldingPreliminary} as a conference paper. The work presented in this manuscript extends the preliminary work in \cite{Ekmekci2021UncertaintyUnfoldingPreliminary} in several significant ways. First, \cite{Ekmekci2021UncertaintyUnfoldingPreliminary} involved the quantification of epistemic uncertainty only, whereas this paper proposes both epistemic and aleatoric uncertainty quantification. Second, unlike \cite{Ekmekci2021UncertaintyUnfoldingPreliminary}, the unrolled neural network in the framework we propose here contains different CNN blocks at each iteration. We have experimentally observed that this change leads to a faster and more stable training stage. Finally, this manuscript contains an extensive set of experiments demonstrating the characteristics of epistemic and aleatoric uncertainties. \section{Proposed Framework} \label{sec:proposedframework} In this section, we present a learning-based image reconstruction framework that can incorporate the observation model into the reconstruction process and quantify epistemic and aleatoric uncertainties arising in imaging problems. We start by introducing the assumed observation model and presenting a probabilistic formulation of deep unrolling methods along with a motivation for bringing in BNNs. This provides the basis for our BNN-based image reconstruction and uncertainty characterization approach, the components of which are described in the rest of this section. \subsection{Observation Model and the Inverse Problem} We consider the following observation model.
\begin{equation} {\mathbf m} = {\mathbf A} {\mathbf s} + {\mathbf n}, \label{eq:forwardproblem} \end{equation} where ${\mathbf m} \in \mathbb{F}^M$ is the measurement vector; ${\mathbf A} \in \mathbb{F}^{M \times N}$ is the forward operator, which is the discrete approximation of the transformation applied by the imaging system; ${\mathbf s} \in \mathbb{F}^{N}$ is the target image; ${\mathbf n} \sim \mathcal{N}({\mathbf 0}, \sigma_n^2 {\mathbf I})$ is additive white Gaussian noise; and $\mathbb{F}$ stands for either $\mathbb{R}$ or $\mathbb{C}$. In this section, without loss of generality, we only consider the case where $\mathbb{F} = \mathbb{R}$ since generalizing the proposed framework to cover the case $\mathbb{F} = \mathbb{C}$ is straightforward (see \cite{Ekmekci2021UncertaintyUnfoldingPreliminary} for details). For an underdetermined system ($M<N$), the inverse problem, i.e., recovering the target image ${\mathbf s}$ from the measurement vector ${\mathbf m}$, is an ill-posed problem. To narrow down the solution space, we can utilize any prior knowledge about the target image. One way to use such prior knowledge systematically is to treat the inverse problem as a maximum \emph{a posteriori} (MAP) estimation problem, which is defined by \begin{equation} \hat{{\mathbf s}} = \argmin_{{\mathbf s} \in \mathbb{R}^{N}} \left\{ \| {\mathbf A} {\mathbf s} - {\mathbf m} \|_2^2 + \beta \psi({\mathbf s}) \right\}, \label{eq:mapestimationproblem} \end{equation} where $\hat{{\mathbf s}}$ is the MAP estimate of the target image, the term $\| {\mathbf A} {\mathbf s} - {\mathbf m} \|_2^2$ is the data-fidelity term, the function $\psi: \mathbb{R}^N \to \mathbb{R}$ is the regularizer that encodes the prior knowledge about the target image, and $\beta > 0$ is the parameter controlling the balance between the data-fidelity term and the regularizer. After deciding on the form of the regularizer, e.g., the total variation semi-norm or wavelet transform domain regularization, model-based reconstruction methods solve the problem in \eqref{eq:mapestimationproblem} iteratively by using an appropriate optimization algorithm, e.g., ADMM~\cite{Boyd2011ADMM}, HQS, or PGD~\cite{Parikh2014Proximal}. \subsection{Probabilistic Formulation of Deep Unrolling and BNNs} \label{ssec:observations} For the inverse problem, which is essentially a regression problem, suppose that the likelihood function $p({\mathbf s}|{\mathbf m},\theta)$ has the following form. \begin{equation} p({\mathbf s} | {\mathbf m}, \theta) = \mathcal{N} \left( {\mathbf s} | f_\theta ({\mathbf m}), \sigma^2 {\mathbf I} \right), \label{eq:gaussianlikelihood} \end{equation} where $f_\theta: \mathbb{R}^M \to \mathbb{R}^N$ is a deep unrolled network parametrized by the set of parameters $\theta$, and $\sigma > 0$ is a fixed constant. Assuming that the training dataset $\mathcal{D}$ contains i.i.d.\ pairs of measurement vectors and target images, we can compute a MAP estimate of the set of parameters by solving the following optimization problem. \begin{equation} \hat{\theta}_{\text{MAP}} = \argmin_{\theta} \left\{ \frac{1}{2\sigma^2} \sum_{i=1}^{N_\mathcal{D}} \| {\mathbf s}^{[i]} - f_\theta({\mathbf m}^{[i]}) \|_2^2 - \log p(\theta) \right\}, \label{eq:mapestimateofparameters} \end{equation} where $({\mathbf m}^{[i]}, {\mathbf s}^{[i]})$ is the $i^{\text{th}}$ example in the training dataset, $N_\mathcal{D}$ is the number of examples in the training dataset, and the distribution $p(\theta)$ is the prior distribution of the set of parameters.
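To spell out this objective in code, the following sketch assumes, for illustration, a standard Gaussian prior $p(\theta)$, whose negative log-density reduces to a weight decay term up to an additive constant (this choice is discussed further below); the unrolled network \texttt{f} and the noise scale \texttt{sigma} are placeholders.
\begin{verbatim}
# Sketch of the MAP objective for the parameters on one
# mini-batch, assuming a standard Gaussian prior, so
# that -log p(theta) = 0.5*||theta||^2 up to a constant.
# `f` is the unrolled network; `sigma` is the fixed
# noise scale of the likelihood.
import torch

def map_objective(f, m_batch, s_batch, sigma=1.0):
    data_fit = ((s_batch - f(m_batch)) ** 2).sum()
    data_fit = data_fit / (2.0 * sigma ** 2)
    neg_log_prior = 0.5 * sum(p.pow(2).sum()
                              for p in f.parameters())
    return data_fit + neg_log_prior
\end{verbatim}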
In the inference stage, for a given measurement vector ${\mathbf m}_*$, we can compute the distribution $p({\mathbf s}_* | {\mathbf m}_*, \hat{\theta}_{\text{MAP}})$ to make predictions about the target image. This probabilistic formulation implicitly appears in the training and inference stages of state-of-the-art deep unrolling methods. For instance, if we choose the prior $p(\theta)$ to be a standard Gaussian distribution, then finding the MAP estimate of the set of parameters boils down to training the neural network $f_\theta$ using the squared error loss with weight decay, which is a cost function frequently used by deep unrolling methods. In the inference stage, for a given measurement vector ${\mathbf m}_*$, outputting the mean of the distribution $p({\mathbf s}_* | {\mathbf m}_*, \hat{\theta}_{\text{MAP}})$ as the reconstructed image is equivalent to feeding the measurement vector ${\mathbf m}_*$ into the trained neural network $f_{ \hat{\theta}_{\text{MAP}}}$. Hence, the training and inference procedures followed by many existing deep unrolling methods can be interpreted probabilistically using the formulation above. Although such procedures are frequently used to train deep unrolling methods, there are two problems with this approach regarding the characterization of uncertainties. The first problem is that this formulation does not model the uncertainty on the target image for a given measurement vector, i.e., the inherent uncertainty in the reconstruction task, since it assumes that the covariance matrix of the likelihood function is a fixed model parameter. The second problem is that this formulation does not model the uncertainty on the set of parameters because it uses only a point estimate of the parameters, following MAP estimation principles. BNNs~\cite{Neal1995BayesianNN, Jospin2022BNNTutorial, Kendall2017BayesianNN} can solve these two problems. BNNs solve the first problem by defining a likelihood function that models the inherent uncertainty in the reconstruction task. In the case of a Gaussian likelihood, this can be accomplished by representing the covariance matrix of the likelihood function with a neural network. To solve the second problem, BNNs place a prior distribution on the set of parameters of the likelihood function and compute the posterior distribution of the parameters given a training dataset. Then, at the inference stage, BNNs compute the predictive distribution for a given measurement vector ${\mathbf m}_*$ by computing the following integral: \begin{equation} p({\mathbf s}_* | {\mathbf m}_*, \mathcal{D}) = \int p({\mathbf s}_*|{\mathbf m}_*, \theta) p(\theta | \mathcal{D}) d\theta, \label{eq:predictivedistribution} \end{equation} where the distribution $p({\mathbf s}_* | {\mathbf m}_*, \mathcal{D})$ is the predictive distribution, and the integral is taken over all possible values of $\theta$. The first term of the integrand, the likelihood function, incorporates the inherent uncertainty in the reconstruction task (i.e., aleatoric uncertainty), which is created by the ill-posedness of the inverse problem, into the predictive distribution. The second term of the integrand, the posterior distribution of the parameters, incorporates the uncertainty on the set of parameters (i.e., epistemic uncertainty), which is created by the lack of training examples around the test measurement vector, into the predictive distribution through the integral over all possible parameter values.
Thanks to this conceptually simple probabilistic formulation, we can utilize BNNs to quantify both epistemic and aleatoric uncertainties in computational imaging problems. \subsection{Form of the Likelihood Function} Based on our observations presented in Section \ref{ssec:observations}, we define the form of the likelihood function as follows. \begin{equation} p({\mathbf s}|{\mathbf m},\theta) = \mathcal{N}({\mathbf s}| f_\gamma({\mathbf m}), \diag(\sigma_\delta^2({\mathbf m}))), \label{eq:proposedlikelihood} \end{equation} where $\theta = \gamma \cup \delta$, and $f_\gamma : \mathbb{R}^M \to \mathbb{R}^N$ and $\sigma_\delta^2: \mathbb{R}^M \to \mathbb{R}^N$ are two neural networks parametrized by the sets of parameters $\gamma$ and $\delta$, respectively. The neural network $f_\gamma$ maps a given measurement vector to a point in the target image space, and the neural network $\sigma_\delta^2$ aims to capture the inherent uncertainty on the target image for a given measurement vector. To incorporate the observation model into the neural network $f_\gamma$, we start constructing it by first solving the optimization problem in \eqref{eq:mapestimationproblem} using the proximal gradient descent (PGD) method. The main advantage of using PGD over methods such as ADMM and HQS is that the data-dependent update equation of PGD requires computing only the adjoint of the forward operator and does not involve any inversion step, which makes it suitable for large-scale imaging problems with unstructured forward operators. Assuming that the regularizer $\psi$ in \eqref{eq:mapestimationproblem} is a closed proper convex function, PGD yields the following iterative image reconstruction algorithm. \begin{equation} \begin{aligned} {\mathbf z}^{(k+1)} &= ({\mathbf I} - 2\alpha {\mathbf A}^\top {\mathbf A}) {\mathbf s}^{(k)} + 2\alpha {\mathbf A}^\top {\mathbf m} \\ {\mathbf s}^{(k+1)} &= \prox_{\alpha \beta \psi} \left( {\mathbf z}^{(k+1)} \right) \end{aligned} \end{equation} where ${\mathbf z}^{(k+1)} \in\mathbb{R}^N $ is an intermediate vector of the algorithm at the $(k+1)^{\text{st}}$ iteration, ${\mathbf s}^{(k+1)} \in \mathbb{R}^N$ is the reconstructed image at the $(k+1)^{\text{st}}$ iteration, the operator $\prox: \mathbb{R}^N \to \mathbb{R}^N$ is the proximal operator~\cite{Parikh2014Proximal}, and $\alpha \geq 0$ is the step size. To learn the prior information implicitly from the training data, we replace the proximal operator in the second step with a neural network, a step frequently taken by deep unrolling methods such as \cite{Mardani2018NPGD}. The resulting update equations become \begin{equation} \begin{aligned} {\mathbf z}^{(k+1)} &= ({\mathbf I} - 2\alpha {\mathbf A}^\top {\mathbf A}) {\mathbf s}^{(k)} + 2\alpha {\mathbf A}^\top {\mathbf m} \\ {\mathbf s}^{(k+1)} &= D_{\gamma_{k+1}} \left( {\mathbf z}^{(k+1)} \right), \end{aligned} \label{eq:pgdupdatewithnn} \end{equation} where $D_{\gamma_{k+1}}: \mathbb{R}^N \to \mathbb{R}^N$ is a residual neural network~\cite{He2016Resnet} parametrized by the set of parameters $\gamma_{k+1}$. For a fixed number of iterations $K$, the series of update equations in \eqref{eq:pgdupdatewithnn} corresponds to a deep neural network $f_\gamma$, where $\gamma = \bigcup_{k=1}^K \gamma_k$. Figure \ref{fig:architecture} displays a high-level summary of the neural network $f_\gamma$, and the details of the architecture are provided in the Supplementary Material.
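As a concrete illustration, the following sketch implements the unrolled network $f_\gamma$ defined by \eqref{eq:pgdupdatewithnn}, using the identity $({\mathbf I} - 2\alpha {\mathbf A}^\top {\mathbf A}){\mathbf s} + 2\alpha {\mathbf A}^\top {\mathbf m} = {\mathbf s} - 2\alpha {\mathbf A}^\top ({\mathbf A}{\mathbf s} - {\mathbf m})$. The callables \texttt{A} and \texttt{A\_adjoint} stand for the forward operator and its adjoint, \texttt{blocks} is a list of $K$ residual networks $D_{\gamma_k}$, and treating the step size $\alpha$ as a trainable parameter is an assumption.
\begin{verbatim}
# Sketch of the unrolled network f_gamma: K data-
# consistency gradient steps interleaved with learned
# residual blocks replacing the proximal operator.
# `A` and `A_adjoint` are assumed callables for the
# forward operator and its adjoint; a trainable step
# size alpha is an assumption.
import torch
import torch.nn as nn

class UnrolledPGD(nn.Module):
    def __init__(self, A, A_adjoint, blocks, alpha=1.0):
        super().__init__()
        self.A, self.At = A, A_adjoint
        self.blocks = nn.ModuleList(blocks)  # K blocks
        self.alpha = nn.Parameter(torch.tensor(alpha))

    def forward(self, m, s0):
        s = s0  # e.g., zero-filling or FBP result
        for D in self.blocks:
            # z = (I - 2*alpha*A^T A)s + 2*alpha*A^T m
            z = s - 2 * self.alpha * self.At(self.A(s) - m)
            s = D(z)  # learned proximal step
        return s
\end{verbatim}
Note that only the adjoint of the forward operator appears in the sketch, consistent with the motivation for choosing PGD above.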
To completely specify the form of the likelihood function given in \eqref{eq:proposedlikelihood}, we also have to specify the architecture of the neural network $\sigma_\delta^2$. For $\sigma_\delta^2$, we use a U-shaped neural network~\cite{Ronneberger2015UNet} followed by an element-wise exponentiation to ensure that the output contains positive entries. Figure \ref{fig:architecture} depicts a high-level summary of the neural network $\sigma_\delta^2$, the details of which are given in the Supplementary Material. We remark that a dual-head architecture can also be used to jointly represent the neural networks $f_\gamma$ and $\sigma_\delta^2$; a brief discussion of the dual-head variant of the proposed framework is provided in the Supplementary Material for interested readers. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures_main/architecture.png} \caption{The structure of the neural networks $f_\gamma$ and $\sigma_\delta^2$ at a high level. The neural network $f_\gamma$ maps a measurement vector to a point in the target image space, and the neural network $\sigma_\delta^2$ aims to capture the aleatoric uncertainty. These two neural networks completely specify the form of the Gaussian likelihood in \eqref{eq:proposedlikelihood}.} \label{fig:architecture} \end{figure} \subsection{Approximating the Posterior Distribution} \label{ssec:approximatingposterior} To be able to compute the predictive distribution using \eqref{eq:predictivedistribution}, we have to compute the posterior distribution $p(\theta|\mathcal{D})$. However, exact computation of the posterior distribution is not tractable for deep neural networks because of the massive number of parameters and complex hierarchical structures. Thus, we either have to approximate the posterior distribution with a parametric distribution, or we have to generate samples from the posterior distribution to approximate the integral in \eqref{eq:predictivedistribution}. In our framework, we use a variational inference method called MC Dropout to approximate the posterior distribution with a parametric distribution. The advantages of using MC Dropout are threefold: it is scalable to deep neural networks since it does not introduce additional parameters; the variational inference and inference procedures can be implemented straightforwardly in deep learning frameworks, since they require only small changes to the training and testing procedures of standard neural network pipelines; and MC Dropout has been shown to provide reliable uncertainty estimates for several problems such as camera relocalization~\cite{Kendall2016CameraRelocalization}, depth completion~\cite{Kendall2017BayesianNN, Gustaffson2020EvaluatingUncertainty}, and semantic segmentation~\cite{Kendall2017BayesianNN, Gustaffson2020EvaluatingUncertainty}. For the sake of completeness, we state the assumptions of MC Dropout explicitly and discuss the variational inference and inference steps. For a more detailed discussion, the reader can refer to~\cite{Gal2016MCDropout, Gal2016BayesianCNN, Kendall2017BayesianNN}. Suppose that the neural networks $f_\gamma$ and $\sigma_\delta^2$ contain $C$ and $E$ convolutional layers, respectively.
Then, we can write the two sets $\gamma$ and $\delta$ as follows: \begin{equation} \gamma = \bigcup_{i=1}^{C} \{ {\mathbf W}_{i}^f \} \quad \text{and} \quad \delta = \bigcup_{j=1}^E \{ {\mathbf W}_{j}^\sigma \}, \end{equation} where ${\mathbf W}_{i}^f$ and ${\mathbf W}_j^\sigma$ are the matrices whose rows contain the vectorized filter coefficients of the $i^{\text{th}}$ and $j^{\text{th}}$ convolutional layers of the neural networks $f_\gamma$ and $\sigma^2_{\delta}$, respectively. The assumptions~\cite{Gal2016MCDropout, Gal2016BayesianCNN, Kendall2017BayesianNN} on the parametric distribution $q_\omega (\theta)$ that we use to approximate the true posterior distribution $p(\theta | \mathcal{D})$ are as follows: (i) The two neural networks $f_\gamma$ and $\sigma_\delta^2$ are assumed to be independent of each other, and the layers within each network are assumed to be mutually independent, i.e., \begin{equation} q_\omega(\theta) = \left( \prod_{i=1}^{C} q\left({\mathbf W}_{i}^f\right) \right) \left( \prod_{j=1}^{E} q\left({\mathbf W}_{j}^\sigma \right) \right); \end{equation} (ii) Filters of a convolutional layer are assumed to be mutually independent; more explicitly, \begin{equation} \begin{aligned} q({\mathbf W}_{i}^f) = \prod_{l=1}^{K_{i, f}^{[out]}} q( [ {\mathbf W}_{i}^f ]_{l,:} ), \quad q\left({\mathbf W}_{j}^\sigma \right) = \prod_{m=1}^{K_{j,\sigma}^{[out]}} q( \left[ {\mathbf W}_{j}^\sigma \right]_{m,:} ), \end{aligned} \end{equation} where $K_{i, f}^{[out]}$ is the number of filters in the $i^\text{th}$ convolutional layer of $f_\gamma$, and $K_{j,\sigma}^{[out]}$ is the number of filters in the $j^\text{th}$ convolutional layer of $\sigma^2_\delta$; (iii) The distribution of the filter coefficients of each filter is a mixture of Gaussians distribution defined by \begin{equation} \begin{aligned} q( [ {\mathbf W}_{i}^f ]_{l,:} ) &= p(z_{i,l}^f=1) \mathcal{N}( [ {\mathbf W}_{i}^f ]_{l,:} | {\mathbf a}_{i,l}^f, \epsilon^2{\mathbf I}) \\ &\quad+p(z_{i,l}^f=0) \mathcal{N}( [ {\mathbf W}_{i}^f ]_{l,:} | {\mathbf 0}, \epsilon^2{\mathbf I}), \\ q( [ {\mathbf W}_{j}^\sigma ]_{m,:} ) &= p(z_{j,m}^\sigma=1) \mathcal{N}([ {\mathbf W}_{j}^\sigma ]_{m,:} | {\mathbf a}_{j,m}^\sigma, \epsilon^2{\mathbf I}) \\ &\quad+ p(z_{j,m}^\sigma=0) \mathcal{N}([ {\mathbf W}_{j}^\sigma ]_{m,:} | {\mathbf 0}, \epsilon^2{\mathbf I}), \end{aligned} \label{eq:bernoullivariationaldistribution} \end{equation} where the variables $z_{i,l}^f$ and $z_{j,m}^\sigma$ are latent variables, and the scalars $p_{i,l}^f \triangleq p(z_{i,l}^f=1)$ and $p_{j,m}^\sigma \triangleq p(z_{j,m}^\sigma=1)$ are fixed constants. The scalar $\epsilon$ is a very small fixed constant, and the sets $\Delta_f \triangleq \{ {\mathbf a}_{i,l}^f \}$ and $\Delta_\sigma \triangleq \{ {\mathbf a}_{j,m}^\sigma \}$ are the adjustable parameters of the parametric distribution. Since we previously denoted the set of adjustable parameters of the parametric distribution $q_\omega(\theta)$ by $\omega$, we can write the set $\omega$ explicitly as $\omega = \Delta_f \cup \Delta_\sigma$. Under these assumptions, we adjust the parameters of the parametric distribution by minimizing the Kullback-Leibler divergence between the parametric distribution and the true posterior distribution, i.e., \begin{equation} \hat{\omega} = \argmin_\omega D_{\text{KL}} \left( q_\omega(\theta) || p(\theta|\mathcal{D}) \right).
\end{equation} Under certain approximations and mathematical manipulations (see the supplementary material of \cite{Gal2016MCDropout} for the details), the above optimization problem can be approximated by the following problem: \begin{equation} \hat{\omega} \approx \argmin_\omega \left\{ g(\omega) + h(\omega)\right\}, \label{eq:variationalinference} \end{equation} where \begin{equation} \begin{aligned} g(\omega) &\triangleq \frac{1}{N_\mathcal{D}} \sum_{n=1}^{N_\mathcal{D}} \sum_{k=1}^{N} \bigg[ \log [\sigma_{\tilde{\delta}^{(n)}}^2 ({\mathbf m}^{[n]})]_k \\ &\mkern-18mu + \exp( - \log [\sigma_{\tilde{\delta}^{(n)}}^2 ({\mathbf m}^{[n]})]_k) ( [{\mathbf s}^{[n]}]_k - [f_{\tilde{\gamma}^{(n)}} ({\mathbf m}^{[n]})]_k )^2 \bigg], \\ h(\omega) &\triangleq \sum_{i=1}^{C} \sum_{l=1}^{K_{i,f}^{[out]}} \frac{p_{i,l}^f}{2 N_\mathcal{D}} \| {\mathbf a}_{i,l}^f \|_2^2 + \sum_{j=1}^E \sum_{m=1}^{K_{j, \sigma}^{[out]}} \frac{p_{j,m}^\sigma}{2 N_\mathcal{D}} \| {\mathbf a}_{j,m}^\sigma \|_2^2, \end{aligned} \label{eq:variationalinferencedefinitions} \end{equation} and $\tilde{\theta}^{(n)} = \tilde{\delta}^{(n)} \cup \tilde{\gamma}^{(n)}$ is the $n^{\text{th}}$ sample generated from the parametric distribution $q_\omega(\theta)$. After approximating the true posterior distribution $p(\theta | \mathcal{D})$ with the parametric distribution $q_{\hat{\omega}}(\theta)$, we approximate the integral in \eqref{eq:predictivedistribution} using Monte Carlo integration with $T$ samples as follows: \begin{equation} \begin{aligned} p({\mathbf s}_* | {\mathbf m}_*, \mathcal{D}) &\approx \frac{1}{T} \sum_{t=1}^T \mathcal{N}({\mathbf s}_*| f_{\hat{\gamma}^{(t)}}({\mathbf m}_*), \diag(\sigma^2_{\hat{\delta}^{(t)}}({\mathbf m}_*))) \end{aligned} \label{eq:approximationofpredictivedistribution} \end{equation} where $\hat{\theta}^{(t)} = \hat{\delta}^{(t)} \cup \hat{\gamma}^{(t)}$ is the $t^{\text{th}}$ sample from the parametric distribution $q_{\hat{\omega}}(\theta)$. The approximation of the predictive distribution is a mixture of $T$ Gaussians with uniform weights; therefore, we can compute its mean vector and element-wise variance analytically as follows: \begin{equation} \mathbb{E}[{\mathbf s}_* | {\mathbf m}_*, \mathcal{D}] \approx \frac{1}{T} \sum_{t=1}^T f_{\hat{\gamma}^{(t)}}({\mathbf m}_*), \label{eq:predictivemean} \end{equation} \begin{equation} \begin{aligned} &\Var[[{\mathbf s}_*]_k | {\mathbf m}_*, \mathcal{D}] \approx \underbrace{\frac{1}{T} \sum_{t=1}^T [\sigma_{\hat{\delta}^{(t)}}^2 ({\mathbf m}_*)]_k}_\text{Aleatoric variance} \\ & + \underbrace{\frac{1}{T} \sum_{t=1}^T [f_{\hat{\gamma}^{(t)}}({\mathbf m}_*)]_k^2 - \left( \frac{1}{T} \sum_{t=1}^T [f_{\hat{\gamma}^{(t)}}({\mathbf m}_*)]_k \right)^2}_\text{Epistemic variance}, \end{aligned} \label{eq:predictivevariance} \end{equation} The first term of the predictive variance, which we refer to as the aleatoric variance, reflects the aleatoric uncertainty in the reconstruction task, and the remaining term, which we refer to as the epistemic variance, represents the epistemic uncertainty.
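To make the inference procedure concrete, the following is a minimal PyTorch-style sketch (not the authors' released code) of the Monte Carlo approximations in \eqref{eq:predictivemean} and \eqref{eq:predictivevariance}. The module names \texttt{mean\_net} and \texttt{var\_net} are hypothetical stand-ins for the dropout-added networks, and dropout is kept active at test time by leaving the modules in training mode. \begin{verbatim}
import torch

def mc_predictive_stats(mean_net, var_net, m, T=100):
    # Keep dropout layers sampling at test time (MC Dropout).
    mean_net.train()
    var_net.train()
    f_samples, var_samples = [], []
    with torch.no_grad():
        for _ in range(T):
            f_samples.append(mean_net(m))    # one draw of f(m)
            var_samples.append(var_net(m))   # one draw of sigma^2(m)
    f_samples = torch.stack(f_samples)
    pred_mean = f_samples.mean(dim=0)                 # predictive mean
    aleatoric = torch.stack(var_samples).mean(dim=0)  # mean of sigma^2 draws
    epistemic = f_samples.var(dim=0, unbiased=False)  # E[f^2] - (E[f])^2
    return pred_mean, aleatoric, epistemic
\end{verbatim}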
At this point, note that we must generate samples from the parametric distribution both to solve the optimization problem in \eqref{eq:variationalinference} and to obtain the predictive mean and variance given by \eqref{eq:predictivemean} and \eqref{eq:predictivevariance}. Because we have assumed that the filters of the convolutional layers are mutually independent, a straightforward way to generate a sample from the parametric distribution is to sample from the distributions in \eqref{eq:bernoullivariationaldistribution}. Sampling from each of those distributions amounts to sampling from a two-component mixture of Gaussians: we first sample a Bernoulli random variable and, based on that sample, draw from one of the two multivariate Gaussian distributions. Because the scalar $\epsilon$ is assumed to be a very small non-zero constant, drawing from the multivariate Gaussian distributions in \eqref{eq:bernoullivariationaldistribution} can be approximated by simply returning the mean of the selected component. Thus, generating a sample from the parametric distribution $q_\omega(\theta)$ boils down to sampling Bernoulli random variables and multiplying them with the adjustable parameters of the parametric distribution. Hence we can write \begin{equation} \begin{aligned} \tilde{\gamma}^{(n)} &\approx \left\{ \tilde{z}_{i,l}^{(n)} {\mathbf a}_{i,l}^f | \text{sample }\tilde{z}_{i,l}^{(n)} \sim \text{Bernoulli}(p_{i,l}^f) \right\}, \\ \tilde{\delta}^{(n)} &\approx \left\{ \tilde{z}_{j,m}^{(n)} {\mathbf a}_{j,m}^\sigma | \text{sample }\tilde{z}_{j,m}^{(n)} \sim \text{Bernoulli}(p_{j,m}^\sigma) \right\}, \\ \tilde{\theta}^{(n)} &= \tilde{\delta}^{(n)} \cup \tilde{\gamma}^{(n)}. \end{aligned} \end{equation} An interesting observation is that the sampling operation described above resembles the dropout~\cite{Srivastava2014Dropout} operation. Hence, solving the optimization problem in \eqref{eq:variationalinference} boils down to training the dropout-added versions $\bar{f}$ and $\bar{\sigma}^2$ of the neural networks $f$ and $\sigma^2$ for which we want to perform variational inference, using the function $g$ in \eqref{eq:variationalinferencedefinitions} as the loss function, with weight decay parameters $p_{i,l}^f / (2 N_\mathcal{D})$ and $p_{j,m}^\sigma / (2 N_\mathcal{D})$ and with dropout rates $1-p_{i,l}^f$ and $1-p_{j,m}^\sigma$. The weights of the neural networks at the end of the training stage are the optimal parameters $\hat{\omega}$ of the parametric distribution $q_{\hat{\omega}}(\theta)$. Furthermore, computing the approximation of the predictive distribution in \eqref{eq:approximationofpredictivedistribution} boils down to feeding the test measurement vector to the dropout-added neural networks $\bar{f}_{\Delta_f^*}$ and $\bar{\sigma}^2_{\Delta_\sigma^*}$ $T$ times while dropout remains active. To obtain a reconstruction, we can either generate samples from the approximation of the predictive distribution or compute its mean using \eqref{eq:predictivemean}. To obtain the epistemic and aleatoric uncertainty maps, we use the expression in \eqref{eq:predictivevariance}. \section{Experiments and results} \label{sec:experiments} In this section, we present experimental results demonstrating the behavior of the proposed approach.
Although the proposed framework can be applied to any inverse problem that can be cast as the optimization problem in \eqref{eq:mapestimationproblem}, we evaluate it on basic MRI and CT reconstruction problems as representative applications. We investigate the behavior of epistemic and aleatoric uncertainties under various experimental conditions and show that the epistemic and aleatoric uncertainty information provided by the proposed framework is consistent with the definitions of those uncertainties. We then compare the image reconstruction performance of the proposed framework with other image reconstruction methods to demonstrate its image reconstruction capability. The Supplementary Material also contains a toy problem and an additional experiment on the recalibration~\cite{Kuleshov2018Recalibration} of the proposed framework. \subsection{Experimental Setup} \label{ssec:experimentalsetup} \textbf{Datasets:} For the MRI reconstruction problem, we extracted $530$ $256 \times 256$ target images from the IXI Dataset~\cite{ixidataset}. Each target image was normalized between $0$ and $1$. We split the $530$ target images into training, validation, and test datasets containing $500$, $15$, and $15$ target images, respectively. The training, validation, and test datasets were constructed such that they contain target images collected from different subjects. The measurement vectors, i.e., k-space measurements, were generated by computing the subsampled Fourier transform of the target images. For the CT reconstruction problem, we extracted $530$ $512 \times 512$ target images from the LUNA Dataset~\cite{lunadataset}. Each image was resized to $256 \times 256$ pixels and normalized between $0$ and $1$. The training dataset was created using $500$ target images, and the remaining $30$ images were split into two sets to generate validation and test datasets, each containing $15$ target images. The training, validation, and test datasets were constructed such that they contain target images collected from different subjects. The measurement vectors, i.e., sinogram data, were generated by computing the sparse Radon transform of the target images. Finally, we added white Gaussian noise to the measurement vectors to obtain the final measurement vectors used in our experiments, where the SNR of a noisy measurement vector is defined as follows: \begin{equation} \text{SNR}({\mathbf m}_{\text{noiseless}} + {\mathbf n}, {\mathbf m}_{\text{noiseless}}) = 20 \log_{10} \left( \frac{ \| {\mathbf m}_{\text{noiseless}} \|_2}{\|{\mathbf n}\|_2} \right). \end{equation} \textbf{Training and Inference Procedures:} Training the proposed framework amounts to solving the optimization problem in \eqref{eq:variationalinference}, where the first term of the objective function is replaced with its mini-batch approximation~\cite{Gal2016MCDropout}. We obtained the neural network $\bar{f}$ by fixing the number of iterations $K$ of the PGD to $5$ and taking the starting point ${\mathbf s}^{(0)}$ to be the result of zero-filling and filtered backprojection for the MRI and CT reconstruction problems, respectively. Each residual block of the neural network $\bar{f}$ contains $5$ convolutional layers, and each convolutional layer is followed by a dropout layer and the leaky ReLU activation function. We used the U-Net architecture for the neural network $\bar{\sigma}^2$, where each convolutional layer is followed by a dropout layer.
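As a concrete illustration of the noise model above, the following minimal NumPy sketch scales white Gaussian noise so that the SNR definition stated above holds exactly. The function name is ours, and real-valued noise is assumed for simplicity; for complex k-space data, the noise would instead be drawn as a complex Gaussian. \begin{verbatim}
import numpy as np

def add_noise_at_snr(m_noiseless, snr_db, seed=0):
    # Scale white Gaussian noise n so that
    # 20 * log10(||m_noiseless|| / ||n||) equals snr_db.
    rng = np.random.default_rng(seed)
    n = rng.standard_normal(m_noiseless.shape)
    scale = np.linalg.norm(m_noiseless) / (10.0 ** (snr_db / 20.0))
    n *= scale / np.linalg.norm(n)
    return m_noiseless + n
\end{verbatim}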
For the MRI reconstruction problem, the batch size for training was set to $4$, and the learning rate was fixed to $1\times 10^{-4}$. For the CT reconstruction problem, we used a batch size of $2$ and set the learning rate to $1\times 10^{-5}$. The initial step size $\alpha$ of the PGD algorithm was set to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The dropout rate of the dropout layers of the neural networks $\bar{f}$ and $\bar{\sigma}^2$ was set to $0.1$, and the neural networks $\bar{f}$ and $\bar{\sigma}^2$ were trained for $100$ epochs. At the inference stage, a given measurement vector was fed to the neural networks $\bar{f}$ and $\bar{\sigma}^2$ $T=100$ times while dropout remained active. The reconstructed image was then obtained using the approximation in \eqref{eq:predictivemean}. The epistemic and aleatoric uncertainty maps were obtained by calculating three times the epistemic and aleatoric standard deviations given by \eqref{eq:predictivevariance}. \subsection{Epistemic Uncertainty} \label{ssec:epistemicuncertainty} In theory, epistemic uncertainty is the uncertainty caused by the lack of training data around the test data, and it can be explained away by appropriately modifying the training data. In this subsection, we investigate the characteristics of the epistemic uncertainty information provided by the proposed framework and show that its behavior is consistent with these theoretical characteristics. To show that the epistemic uncertainty output by the proposed framework reflects the lack of training data that represents the test sample well, we consider two scenarios. In the first scenario, we assess the impact of the size of the training dataset on the inferred epistemic uncertainty. A good uncertainty characterization method should yield larger epistemic uncertainties for smaller training datasets, as it is less probable for such data to represent a random test sample well. In our experiments, we generated five subsets of the MRI training dataset containing $10, 50, 125, 250$, and $500$ examples and trained five instances of the proposed framework using these subsets as training datasets. Then, for a given test measurement, we obtained the epistemic uncertainty maps using the five trained instances of the proposed framework. We repeated the same procedure for the CT reconstruction problem. The resulting epistemic uncertainty maps are illustrated in Figure \ref{fig:reducibilityofepistemicuncertainty}. For both MRI and CT reconstruction problems, the epistemic uncertainty is highest when only $10$ training examples are used. Then, as we add more examples to the training dataset, the epistemic uncertainty on the same test image decreases. To confirm these visual results quantitatively, we calculated the average epistemic uncertainty per pixel over the test dataset as a function of the size of the training dataset. Figure \ref{fig:reducibilityofepistemicuncertaintyplot} shows the quantitative results for both MRI and CT reconstruction problems. From this figure, we observe that an increase in the number of training examples leads to a decrease in the overall epistemic uncertainty, which is consistent with the visual results presented in Figure \ref{fig:reducibilityofepistemicuncertainty}.
For the second scenario, we insert an artificial feature that is not well-represented by the training dataset into a test target image. We then vary the intensity of the inserted abnormal feature to modify the degree of deviation of the test data from the training data. A good uncertainty characterization method would yield larger epistemic uncertainty as the test sample deviates more from the training data. In our experiments, we first trained the proposed framework on the MRI training dataset. Next, we picked a target image from the test dataset and inserted a $25 \times 25$ square with an intensity value of $1.0$ into the test target image. Then, we obtained the epistemic uncertainty map. We repeated the same procedure for different values of the intensity of the inserted abnormal feature and for the CT reconstruction problem. Figure \ref{fig:differentintensity} shows the epistemic uncertainty maps obtained by the proposed framework for different intensity values of the inserted abnormal feature for both MRI and CT reconstruction problems. We observe that the epistemic uncertainty in the abnormal region decreases as the intensity of the inserted square approaches a value that makes the square visually similar to the target images in the training dataset. Thus, this experiment shows that the proposed framework exhibits high epistemic uncertainty for test data that are not well-represented by the training data, confirming that it successfully captures the uncertainty caused by the lack of training data around the test data. Next, we demonstrate that the epistemic uncertainty provided by the proposed framework possesses the reducibility property. For the first scenario, we have already shown in Figure \ref{fig:reducibilityofepistemicuncertainty} and Figure \ref{fig:reducibilityofepistemicuncertaintyplot} that the epistemic uncertainty can be reduced by collecting more training data with characteristics similar to the test data. For the second scenario, if the proposed framework captures the epistemic uncertainty well, we expect that training examples containing features similar to the abnormal feature encountered at test time would reduce the epistemic uncertainty. To this end, we added $25 \times 25$ white squares to the training target images and trained the proposed framework on this training data containing the abnormal features. We repeated the same procedure for the CT reconstruction problem. Figure \ref{fig:outofdataexample} shows the resulting epistemic uncertainty maps obtained by the proposed framework for both CT and MRI reconstruction problems. We observe that the epistemic uncertainty around the white square decreases significantly once target images containing white squares are added to the training dataset, confirming that the epistemic uncertainty provided by the proposed framework can be explained away with additional training data that represent the test data well. These results confirm that the proposed framework successfully quantifies epistemic uncertainty. \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures_main/epistemic_reducibility.png} \caption{Epistemic uncertainty maps on an MRI (top) and a CT (bottom) test sample as a function of the training dataset size (TDS). As we increase the number of examples in the training dataset, the overall epistemic uncertainty decreases.
For the MRI experiments, the percentage of observed k-space coefficients is $20\%$, and the SNR is $70$ dB. For the CT experiments, the number of views is $60$, and the SNR is $70$ dB.} \label{fig:reducibilityofepistemicuncertainty} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures_main/proximity.png} \caption{Epistemic uncertainty as a function of the intensity of the abnormal feature. The first row contains the ground truth test images, i.e., the target test images. The second row contains the corresponding epistemic uncertainty maps obtained by the proposed framework. As the inserted square deviates more from the pattern of intensities in the test image (which would be well-represented by the training data), the inferred epistemic uncertainty in the abnormal region increases. For the MRI experiments, the percentage of observed k-space coefficients is $20\%$, and the SNR is $70$ dB. For the CT experiments, the number of views is $60$, and the SNR is $70$ dB.} \label{fig:differentintensity} \end{figure*} \begin{figure*}[t] \centering \includegraphics[width=\textwidth]{figures_main/out_of_distribution_example.png} \caption{Effect of the structure of the training dataset on epistemic uncertainty maps. The first row contains the ground truth test images, i.e., the target test images, and the second row contains the corresponding epistemic uncertainty maps. The images in the first and fourth columns show the performance of the proposed framework on normal data (i.e., no abnormal features in the training and test data). The images in the second and fifth columns show the performance of the proposed framework when an abnormal feature exists in the test data. The images in the third and sixth columns show the performance of the proposed framework with abnormal features present in both the training and test data. For the MRI experiments, the percentage of observed k-space coefficients is $20\%$, and the SNR is $70$ dB. For the CT experiments, the number of views is $60$, and the SNR is $70$ dB.} \label{fig:outofdataexample} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures_main/epistemic_plot_combined.png} \caption{Mean and standard deviation of epistemic uncertainty as a function of training dataset size. The mean and standard deviation are calculated using all pixels in the test dataset. For the MRI experiments, the percentage of observed k-space coefficients is $20\%$, and the SNR is $70$ dB. For the CT experiments, the number of views is $60$, and the SNR is $70$ dB. Mean SSIM values along with the standard deviations for the corresponding reconstructions are provided for reference.} \label{fig:reducibilityofepistemicuncertaintyplot} \end{figure} \subsection{Aleatoric Uncertainty} \label{ssec:aleatoricuncertainty} We now focus on aleatoric uncertainty characterization using the proposed framework. The experiments presented here demonstrate successful aleatoric uncertainty characterization; in particular, the aleatoric uncertainty captured by the proposed framework is high in regions where the reconstruction is challenging due to the ill-posed nature of the inverse problem. Furthermore, we show that the overall aleatoric uncertainty provided by the proposed framework is an indication of how challenging the inverse problem is. For this analysis, we trained the proposed framework for various configurations of the imaging setups.
We considered different percentages of observed k-space coefficients and SNR values for the MRI reconstruction problem, and different numbers of views and SNR values for the CT reconstruction problem. Figure \ref{fig:aleatoricuncertaintyandchallengingness} shows the starting points of the proposed framework, i.e., the results of zero-filling and filtered backprojection, and the aleatoric uncertainty maps for different test measurement vectors generated from the two test target images using different configurations of the MR and CT imaging setups. For both MRI and CT reconstruction problems, we observe that the aleatoric uncertainty is high in regions where the reconstruction is challenging for the unrolled network, such as small localized structures and thin edges in the target images. On the other hand, the aleatoric uncertainty is low in regions where the corruption is negligible or can be recovered using spatial information, such as the smooth regions of the target images. This behavior can be understood analytically by inspecting the objective function of the optimization problem in \eqref{eq:variationalinference}: solving this optimization problem forces the neural network $\bar{\sigma}^2$ to output high values for the pixels where the squared error between the output of the neural network $\bar{f}$ and the target image is high. Moreover, we observe that the overall aleatoric uncertainty level increases as the SNR decreases for a fixed percentage of observed k-space coefficients/number of views. Similarly, for a fixed value of the SNR, we observe a decrease in the overall aleatoric uncertainty level as the percentage of observed k-space coefficients/number of views increases. Figure \ref{fig:aleatoricuncertaintyandchallengingnessplot} shows the average aleatoric uncertainty over all pixels in the test dataset for different configurations of the imaging setups. From this figure, we observe that the overall aleatoric uncertainty increases when the SNR decreases for a fixed percentage of observed k-space coefficients/number of views, or when the percentage of observed k-space coefficients/number of views decreases for a fixed value of the SNR. Hence, the quantitative results shown in Figure \ref{fig:aleatoricuncertaintyandchallengingnessplot} confirm our visual observations about the overall aleatoric uncertainty. This result can also be understood by analyzing the objective function of the optimization problem in \eqref{eq:variationalinference}: because the neural network $\bar{f}$ does not have infinite learning capacity in practice, the squared error between the output of the trained neural network $\bar{f}$ and the target image increases as the reconstruction problem becomes more challenging, leading to higher overall aleatoric uncertainty for the relatively more challenging image reconstruction problems. \begin{figure*}[t] \centering \includegraphics[width=0.85\textwidth]{figures_main/aleatoric_uncertainty_and_error_2.png} \caption{Effect of the configuration of the imaging setup on aleatoric uncertainty. The first and third rows contain the ground truth test images, i.e., the target test images, as well as the starting points obtained by applying zero-filling (ZF) or filtered backprojection (FBP) to the observations.
The second and fourth rows contain the corresponding aleatoric uncertainty maps obtained by the proposed framework for different percentages of observed k-space coefficients (POC), numbers of views (NOV), and signal-to-noise ratios (SNR). Regions where the reconstruction from the starting point is challenging are the regions for which the aleatoric uncertainty is high. Moreover, the overall aleatoric uncertainty increases as the reconstruction problem becomes more challenging in terms of data quality and quantity limitations.} \label{fig:aleatoricuncertaintyandchallengingness} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{figures_main/aleatoric_plot_combined.png} \caption{Mean and standard deviation of aleatoric uncertainty for different configurations of the imaging setups. For the MRI experiments, they are calculated for different percentages of observed k-space coefficients (POC) and signal-to-noise ratios (SNR). For the CT experiments, they are calculated for different numbers of views (NOV) and signal-to-noise ratios (SNR). The averages and standard deviations are calculated using all pixels in the test dataset. Mean SSIM values along with the standard deviations for the corresponding reconstructions are provided for reference.} \label{fig:aleatoricuncertaintyandchallengingnessplot} \end{figure} \subsection{Reconstruction Performance} \label{ssec:reconstructionperformance} In this subsection, we demonstrate the reconstruction performance of the proposed framework. We compare the proposed framework with six methods: (1) zero-filling (ZF) / filtered backprojection (FBP), (2) total variation reconstruction (TV), (3) PGD-based deep unrolling method (PUM), (4) PGD-based deep unrolling method without batch normalization (PUMw/oBN), (5) proposed only epistemic model (POEM), and (6) proposed only aleatoric model (POAM). The methods ZF/FBP and TV are baseline reconstruction methods that we use to demonstrate how challenging the reconstruction problem is. PUM is a deep unrolling method based on PGD; each residual block of PUM consists of a series of convolutional layers, batch normalization layers, and an activation function. PUMw/oBN is the same model as PUM, except that there are no batch normalization layers in the residual blocks. POEM is the variant of the proposed framework that treats the covariance matrix of the likelihood function in \eqref{eq:proposedlikelihood} as a fixed model parameter; it is also the probabilistic model used in the experiments of the preliminary version of this paper~\cite{Ekmekci2021UncertaintyUnfoldingPreliminary}. As its name implies, POEM quantifies only the epistemic uncertainty, not the aleatoric uncertainty. POAM is another variant of the proposed framework, in which a maximum likelihood estimate of the parameters of the likelihood function in \eqref{eq:proposedlikelihood} is used. POAM is capable of quantifying the aleatoric uncertainty but not the epistemic uncertainty, since it uses only a point estimate of the parameters. Implementation details of these methods are provided in the Supplementary Material. Table \ref{tab:reconstructionperformance} shows the performance of the seven methods (the six methods above and the proposed framework) for the CT and MRI reconstruction problems under different configurations of the imaging setups. FBP and ZF achieve the worst reconstruction performance among the seven methods. The TV method improves upon FBP and ZF by promoting a piecewise-constant reconstruction.
The deep unrolling method PUM surpasses the TV method by implicitly learning the prior from the training dataset. PUM was trained with a small mini-batch size, since it requires storing intermediate variables with the same spatial dimensions as the target image in memory to carry out backpropagation. We empirically observed that removing the batch normalization layers from the unrolled network increases the reconstruction performance; specifically, PUMw/oBN outperforms PUM in all the experiments. This empirical observation is mathematically justified in \cite{Yong2020BatchNormalization}, where Yong \emph{et al.} showed that batch normalization introduces a high level of noise for small mini-batch sizes, making training difficult. This observation is the main reason why the unrolled network $f$ in the proposed framework does not contain any batch normalization layers. On the other hand, we experimentally observed that adding batch normalization layers to the neural network $\sigma^2$ is necessary for a stable training stage. Comparing POAM with PUMw/oBN, POAM shows an average SSIM decrease of $0.022$ for the MRI reconstruction problem and $0.002$ for the CT reconstruction problem. On the other hand, when compared to the state-of-the-art deep unrolling method PUM, POAM achieves average SSIM gains of $0.031$ and $0.011$ for the MRI and CT reconstruction problems, respectively. The reconstruction performance of POEM decreases compared to PUMw/oBN because POEM uses dropout after every convolutional layer, which is a strong form of regularization. Similarly, the reconstruction performance of POEM is slightly worse than that of POAM. The reconstruction performance of the proposed framework decreases compared to POAM for the same reason, i.e., the use of dropout after every convolutional layer. Comparing the proposed framework with POAM, the proposed framework shows an average SSIM decrease of $0.010$ for the MRI reconstruction problem and $0.007$ for the CT reconstruction problem; we observe a similar trend between the proposed framework and PUMw/oBN. On the other hand, the proposed framework achieves average SSIM gains of $0.010$ and $0.006$ for the MRI and CT reconstruction problems, respectively, when compared to POEM. Similarly, the proposed framework surpasses the state-of-the-art deep unrolling method PUM. Due to space limitations, only representative visual results are presented in Figure \ref{fig:reconstructionperformancevisuals}; detailed visual results are provided in the Supplementary Material. \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{figures_main/reconstruction_merged_with_zoom_article.png} \caption{Visual comparison of the image reconstruction performance of zero-filling (ZF) / filtered backprojection (FBP), total variation reconstruction (TV), the state-of-the-art PGD-based deep unrolling method without batch normalization (PUMw/oBN), and the proposed method.
The proposed method achieves reconstruction performance comparable to the state-of-the-art deep unrolling method PUMw/oBN while providing uncertainty quantification.} \label{fig:reconstructionperformancevisuals} \end{figure*} \begin{table*}[t] \centering \caption{Comparison of average SSIM for different image reconstruction methods.} \label{tab:reconstructionperformance} \begin{tabular}{cccc|cccccccc} \toprule & POC & NOV & SNR & ZF & FBP & TV & PUM & PUMw/oBN & POAM & POEM & Proposed \\ \midrule \multirow{4}{*}{MRI} & 10 & - & 10 & 0.4910 & - & 0.7033 & 0.7979 & 0.8144 & 0.7773 & 0.7727 & 0.7674 \\ & 10 & - & 70 & 0.5448 & - & 0.7261 & 0.8032 & 0.9227 & 0.8996 & 0.8638 & 0.8784 \\ & 20 & - & 10 & 0.5609 & - & 0.7913 & 0.8611 & 0.8799 & 0.8589 & 0.8517 & 0.8568 \\ & 20 & - & 70 & 0.6774 & - & 0.8414 & 0.9231 & 0.9780 & 0.9726 & 0.9407 & 0.9642 \\ \midrule \multirow{4}{*}{CT} & - & 36 & 40 & - & 0.4919 & 0.7657 & 0.9053 & 0.9228 & 0.9178 & 0.9068 & 0.9129 \\ & - & 36 & 70 & - & 0.5895 & 0.8232 & 0.9175 & 0.9319 & 0.9290 & 0.9133 & 0.9181 \\ & - & 60 & 40 & - & 0.6726 & 0.8637 & 0.9390 & 0.9535 & 0.9520 & 0.9422 & 0.9467 \\ & - & 60 & 70 & - & 0.7846 & 0.9204 & 0.9548 & 0.9625 & 0.9626 & 0.9507 & 0.9576 \\ \bottomrule \end{tabular} \end{table*} \section{Discussion} \label{sec:discussion} Quantification of epistemic uncertainty is crucial for learning-based image reconstruction methods, especially in safety-critical imaging applications, for quantifying the confidence in a reconstruction obtained using a model learned from available, potentially limited or unrepresentative training data. The experimental results presented in Section \ref{sec:experiments} showed that the proposed framework successfully captures the epistemic uncertainty. The epistemic uncertainty provided by the proposed framework can be used to assess how uncertain the learning-based image reconstruction method is and to detect cases where the input contains abnormal features not present in the training data. For the ill-posed inverse problems encountered in most imaging applications, inherent uncertainty about the target image for a given measurement vector is inevitable. Hence, it is essential to quantify the aleatoric uncertainty in imaging problems to capture the inherent randomness in the reconstruction task. Our experiments presented in Section \ref{sec:experiments} demonstrated that the proposed framework captures the aleatoric uncertainty in the sense that it highlights the regions where the reconstruction is expected to be challenging for the unrolled network. The aleatoric uncertainty provided by the proposed framework can be used to identify possible errors in the reconstructed image and serves as an additional mechanism for assessing the reliability of the reconstruction. As a result, the aleatoric and epistemic uncertainties provided by the proposed framework open the possibility of developing more accurate, robust, trustworthy, uncertainty-aware, learning-based image reconstruction and analysis methods. The benefits of obtaining the epistemic and aleatoric uncertainty maps come at a price: because the proposed framework requires feeding the measurement vector into the neural networks $T$ times for inference, its inference time increases by a factor of $T$ compared to the state-of-the-art deep unrolling method PUM.
To shorten the inference time of the proposed framework, we can perform those $T$ forward passes in parallel. Assuming that the GPU memory allows a batch size of $B$ in the inference stage, the proposed framework requires only $\lceil T/B \rceil$ forward passes for inference. If multiple GPUs are available, the inference time can be reduced further. Hence, the proposed framework can achieve shorter inference times at the expense of using more computational power. Another way to shorten the inference time is to decrease the number of parameters of the proposed framework so that a larger batch size $B$ can be used to parallelize the inference stage. To that end, we can design a variant of the proposed framework that uses a dual-head network. For the sake of brevity, we have not discussed this variant here; a brief discussion is provided in the Supplementary Material. \section{Conclusion} \label{sec:conclusion} In this paper, we combined the ideas of deep unrolling and Bayesian neural networks to propose a learning-based image reconstruction framework that quantifies epistemic and aleatoric uncertainties while incorporating the imaging observation model into the reconstruction process. Our experimental results showed that the proposed framework successfully quantifies the epistemic and aleatoric uncertainties while providing reconstruction performance comparable to state-of-the-art deep unrolling methods. The proposed framework can be applied to a broad set of imaging problems and can be easily implemented in deep learning frameworks. We hope that the proposed framework and the provided discussion of epistemic and aleatoric uncertainties for imaging problems motivate further research on uncertainty characterization for imaging problems and on leveraging uncertainty information for image reconstruction and analysis tasks. \bibliographystyle{IEEEtran} \section{Details of the Neural Networks $f$ and $\sigma^2$} \label{sec:detailsofournetwork} Figure \ref{fig:highlevel} shows, at a high level, the neural network architectures that define the Gaussian likelihood function of the proposed framework. Architectural details of the residual blocks~\cite{He2016Resnet} used in the deep unrolled network $f$ are illustrated in Figure \ref{fig:residual}. Mardani \emph{et al.}~\cite{Mardani2018NPGD} used the same residual-block architecture to develop a proximal gradient descent-based deep unrolling method for MRI reconstruction. Architectural details of the neural network $\sigma^2$, which models the diagonal entries of the covariance matrix of the Gaussian likelihood function, are shown in Figure \ref{fig:unet}. We emphasize that the proposed framework uses MC Dropout~\cite{Gal2016MCDropout} to approximate the posterior distribution of the parameters of the likelihood function, which requires the use of dropout~\cite{Srivastava2014Dropout} after the convolutional layers of the neural networks $f$ and $\sigma^2$. We use dropout after every convolutional layer, except the first convolutional layer, of every residual block to obtain the dropout-added neural network $\bar{f}$. Similarly, to obtain the dropout-added neural network $\bar{\sigma}^2$, we use dropout after every convolutional layer of the neural network $\sigma^2$.
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/architecture.png} \caption{High-level overview of the neural networks that define the form of the Gaussian likelihood function of the proposed framework.} \label{fig:highlevel} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/residual_block.png} \caption{Details of a residual block $D_{\gamma_k}$ used in the neural network $f$ for CT reconstruction. For MRI reconstruction, the number of input and output channels is two instead of one. Red arrows represent convolutional layers with a kernel size of $3\times 3$ and a padding size of $1$ followed by a LeakyReLU activation function. Green arrows represent convolutional layers with a kernel size of $1\times1$ and a padding size of $0$ followed by a LeakyReLU activation function.} \label{fig:residual} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/ushaped.png} \caption{Details of the U-shaped neural network used in the neural network $\sigma^2$ for CT reconstruction. For MRI reconstruction, the number of input and output channels is two instead of one. Green arrows represent convolutional layers with a kernel size of $3 \times 3$ and a padding size of $1$ followed by batch normalization and a ReLU activation function. Brown arrows represent maxpooling layers with a kernel size of $2$ and a stride of $2$. Pink arrows represent bilinear upsampling with a scale factor of $2$. Purple arrows represent the concatenation operation along the channel dimension. Yellow arrows represent convolutional layers with a kernel size of $3\times 3$ and a padding size of $1$.} \label{fig:unet} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/architecture_dual_head.png} \caption{High-level overview of the dual-head neural network that simultaneously outputs the mean and the covariance matrix of the Gaussian likelihood function of the proposed method.} \label{fig:highleveldualhead} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/problem_setup.png} \caption{Details of the one-dimensional toy problem.} \label{fig:ps} \end{figure} \section{Dual-Head Architecture} \label{sec:dualhead} In the proposed framework, we used a U-shaped architecture~\cite{Ronneberger2015UNet} to represent the covariance matrix of the Gaussian likelihood function of the proposed method. An alternative is to use a dual-head architecture~\cite{Kendall2017BayesianNN}, illustrated in Figure \ref{fig:highleveldualhead}. The advantage of a dual-head architecture is that it is less GPU-memory intensive, so larger batch sizes can be used during training and inference, leading to faster inference. For example, the proposed framework presented in the paper allows a batch size of $4$ on a 16GB GPU, whereas the dual-head variant allows a batch size of $5$, decreasing the average inference time from $5.30$ seconds to $4.65$ seconds. On the other hand, we experimentally observed that training a dual-head architecture is slightly more challenging than training the proposed method presented in the paper, and that the aleatoric uncertainty maps obtained by the dual-head variant are noisier than those obtained by the proposed framework.
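For illustration, a minimal PyTorch sketch of such a dual-head design is given below. The trunk depth, channel counts, and dropout rate are placeholders and do not reproduce the exact architecture of Figure \ref{fig:highleveldualhead}; the sketch only shows the key idea of a shared feature extractor feeding a mean head and a log-variance head whose output is exponentiated to guarantee positivity. \begin{verbatim}
import torch
import torch.nn as nn

class DualHead(nn.Module):
    # Shared trunk with two heads: the mean image and the (positive)
    # per-pixel variance of the Gaussian likelihood.
    def __init__(self, in_ch=1, feat=32, p_drop=0.1):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.LeakyReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(feat, feat, 3, padding=1), nn.LeakyReLU(),
            nn.Dropout2d(p_drop),
        )
        self.mean_head = nn.Conv2d(feat, in_ch, 3, padding=1)
        self.logvar_head = nn.Conv2d(feat, in_ch, 3, padding=1)

    def forward(self, x):
        h = self.trunk(x)
        mean = self.mean_head(h)
        var = torch.exp(self.logvar_head(h))  # exp ensures positivity
        return mean, var
\end{verbatim}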
\section{Toy Problem} \label{sec:toyproblem} In this section, we present a one-dimensional inverse problem, which we refer to as the toy problem, to make the abstract concepts of aleatoric and epistemic uncertainties more concrete and to show that the proposed framework successfully captures both. As a toy problem, we consider a one-dimensional linear inverse problem of the form \begin{equation} m = a s + n, \end{equation} where $m \in \mathbb{R}$ is the measurement, $a \in \mathbb{R}$ is the forward operator, $s \in \mathbb{R}$ is the target signal, and $n \sim \mathcal{N}(0, \sigma_n^2)$ is additive white Gaussian noise. For this setup, we choose the true prior distribution of the target signal $s$ to be $p(s) = \mathcal{N}(s| \mu, \tau^{-1})$. Thus, the posterior distribution of the target signal given a measurement $m$ becomes \begin{equation} p(s|m) = \mathcal{N}(s | \eta(m), \epsilon), \label{eq:posterior} \end{equation} where $\eta(m)=\epsilon[a \sigma_{n}^{-2}m + \tau \mu]$ and $\epsilon = (\tau + a^2 \sigma_{n}^{-2})^{-1}$. For the experiment, we chose the following parameter values: $a=0.5$, $\sigma_n=0.1$, $\mu=0$, and $\tau^{-1}=0.2$. We obtained the training dataset by taking $100$ measurements uniformly spaced over the interval $[0,3/2]$ and generating the corresponding target signals by sampling from the distribution $p(s|m)$. Figure \ref{fig:ps} shows the details of the toy problem. For this toy problem, we used the proposed framework to obtain epistemic and aleatoric uncertainties. We used multi-layer perceptrons (MLPs) with three hidden layers for the residual blocks of the neural network $f$ and an MLP with two hidden layers for the neural network $\sigma^2$. Because the output is one-dimensional, we did not place a dropout layer at the end of the neural networks. The initial step size of the neural network $f$ was fixed to $1.0$, and the number of iterations $K$ of the proximal gradient descent was set to $5$. The dropout rate was fixed to $0.5$, and the proposed framework was trained for $20000$ epochs with a learning rate of $1 \times 10^{-4}$. In the inference stage, for $200$ uniformly spaced test measurements over the interval $[0,3]$, we computed the reconstruction, the aleatoric standard deviation, and the epistemic standard deviation. Figure \ref{fig:aleatoric} and Figure \ref{fig:epistemic} show the aleatoric and epistemic uncertainties captured by the proposed framework, respectively. For this toy problem, the inherent uncertainty in the reconstruction task, i.e., the uncertainty about the target signal for a given measurement, is caused by the variance term $\epsilon$ (see \eqref{eq:posterior}). For the test measurements that lie in the interval $[0,3/2]$, i.e., the interval covered by the training dataset, the aleatoric uncertainty captured by the proposed framework overlaps significantly with the true aleatoric uncertainty. Epistemic uncertainty, on the other hand, is the uncertainty in the parameters caused by the lack of training examples around a test measurement. For the test measurements that lie in the interval $[0,3/2]$, the epistemic uncertainty captured by the proposed framework is low, as expected. As we move toward the region for which we have no training data, i.e., as the measurements start deviating from the training data, the epistemic uncertainty captured by the proposed framework increases.
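The toy setup can be reproduced in a few lines. The following NumPy sketch generates the training data and evaluates the closed-form posterior in \eqref{eq:posterior} using the parameter values stated above; variable names are ours. \begin{verbatim}
import numpy as np

# Toy-problem constants from the text (tau^{-1} = 0.2).
a, sigma_n, mu, tau = 0.5, 0.1, 0.0, 1.0 / 0.2

# Closed-form posterior p(s|m) = N(eta(m), eps).
eps = 1.0 / (tau + a ** 2 / sigma_n ** 2)            # posterior variance
def eta(m):
    return eps * (a * m / sigma_n ** 2 + tau * mu)   # posterior mean

# Training data: 100 measurements on [0, 3/2], targets drawn from p(s|m).
rng = np.random.default_rng(0)
m_train = np.linspace(0.0, 1.5, 100)
s_train = eta(m_train) + np.sqrt(eps) * rng.standard_normal(m_train.shape)

# The true aleatoric standard deviation is the constant sqrt(eps).
print("true aleatoric std:", np.sqrt(eps))
\end{verbatim}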
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/aleatoric_uncertainty.png} \caption{True aleatoric uncertainty and the aleatoric uncertainty captured by the proposed framework for the toy problem.} \label{fig:aleatoric} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/epistemic_uncertainty.png} \caption{Epistemic uncertainty captured by the proposed framework.} \label{fig:epistemic} \end{figure} \section{Other Methods Used in Reconstruction Experiments} \label{sec:othermethods} In the experiments section of the article, we compared the reconstruction performance of the proposed framework with six other reconstruction methods. Descriptions of those methods and their implementation details are provided below. \textbf{Zero-Filling:} Zero-filling is one of the baseline reconstruction methods used in the MRI experiments. This method simply fills the unobserved Fourier (k-space) coefficients with zeros and computes the inverse Fourier transform. \textbf{Filtered Backprojection:} Filtered backprojection is one of the baseline reconstruction methods used in the CT experiments. This method first filters the sinogram data and then computes the backprojection of the filtered sinogram. In our experiments, we used the TorchRadon~\cite{torch_radon} package to implement this method. \textbf{Total Variation:} Total variation reconstruction is the second baseline reconstruction method used in our experiments. This method reconstructs the image by solving the following optimization problem: \begin{equation} \hat{{\mathbf s}} = \argmin_{{\mathbf s}} \left\{ \| {\mathbf A} {\mathbf s} - {\mathbf m} \|_2^2 + \beta \| {\mathbf s} \|_{\text{TV}} \right\}, \end{equation} where $\|\cdot\|_{\text{TV}}$ denotes the total variation semi-norm~\cite{Chambolle2004TV}. We used the alternating direction method of multipliers (ADMM)~\cite{Boyd2011ADMM} to obtain an iterative algorithm that solves this optimization problem. In our experiments, the number of iterations and the penalty parameter of the ADMM were fixed to $100$ and $10.0$, respectively. The data-dependent update step of the ADMM was solved using the conjugate gradient (CG) method; the tolerance parameter of the CG was fixed to $1 \times 10^{-5}$, and the maximum number of CG iterations was set to $10$. The value of the regularization constant $\beta$ was chosen from the set $\{1 \times 10^{-4}, 1 \times 10^{-3}, 1 \times 10^{-2}, 1 \times 10^{-1}, 1 \times 10^{0}, 1 \times 10^{1} \}$ to maximize the SSIM. \textbf{Deep Unrolling:} This is a learning-based image reconstruction method that leverages the idea of deep unrolling. The neural network used for this method is the neural network $f$ depicted in Figure \ref{fig:highlevel} with the residual blocks in Figure \ref{fig:residual}, except that there is a batch normalization layer between every convolutional layer and the ReLU activation function. The batch size was set to $4$ for the MRI experiments and $2$ for the CT experiments, and the learning rate was set to $1\times 10^{-4}$ for the MRI experiments and $1\times 10^{-5}$ for the CT experiments. The initial step size of the neural network $f$ was set to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The number of iterations of the PGD was fixed to $5$, and the neural network was trained for $100$ epochs using the mean-squared error loss function.
\textbf{Deep Unrolling without Batch Normalization:} This is another variant of the deep unrolling method used in the reconstruction experiments. The neural network used for this method is the neural network $f$ depicted in Figure \ref{fig:highlevel} with the residual blocks in Figure \ref{fig:residual}. The batch size was set to $4$ for the MRI experiments and $2$ for the CT experiments. The learning rate was fixed to $1\times 10^{-4}$ for the MRI experiments and $1\times 10^{-5}$ for the CT experiments. The initial step size of the neural network $f$ was set to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The number of iterations of the PGD was set to $5$, and the neural network was trained for $100$ epochs using the mean-squared error loss function. \textbf{Proposed Only Aleatoric Model:} This method quantifies only the aleatoric uncertainty by using the Gaussian likelihood of the proposed framework with a maximum likelihood estimate of the parameters $\theta$. For this method, we used the neural networks $f$ and $\sigma^2$ depicted in Figure \ref{fig:highlevel}, Figure \ref{fig:residual}, and Figure \ref{fig:unet}. The batch size was set to $4$ for the MRI experiments and $2$ for the CT experiments. The learning rate was fixed to $1\times 10^{-4}$ for the MRI experiments and $1\times 10^{-5}$ for the CT experiments. The initial step size of the neural network $f$ was set to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The number of iterations of the PGD was set to $5$, and the neural networks $f$ and $\sigma^2$ were trained for $100$ epochs. \textbf{Proposed Only Epistemic Model:} This is the preliminary variant of the proposed method that we presented in the prior conference publication~\cite{Ekmekci2021UncertaintyUnfoldingPreliminary}. This method quantifies only the epistemic uncertainty, since it treats the covariance matrix of the Gaussian likelihood function as a fixed parameter. In our experiments, we fixed the covariance matrix to $(1/10) \mathbf{I}$. The batch size was set to $4$ for the MRI experiments and $2$ for the CT experiments. The learning rate was set to $1\times 10^{-4}$ for the MRI experiments and $1\times 10^{-5}$ for the CT experiments. The initial step size of the neural network $f$ was fixed to $1.0$ for the MRI experiments and $1\times 10^{-4}$ for the CT experiments. The number of iterations of the PGD was set to $5$, and the dropout-added neural network was trained for $100$ epochs. \section{Model Calibration} \label{sec:modelrecalibration} While developing the proposed framework, we made several assumptions about the form of the likelihood function, the prior distribution of the parameters of the likelihood function, and the parametric distribution used to approximate the true posterior distribution of the parameters. These assumptions introduce a model bias and may lead to uncalibrated predictions in practice. In this section, we show how the proposed framework can be calibrated easily, if needed, using the calibration method proposed by Kuleshov \emph{et~al.}~\cite{Kuleshov2018Recalibration}.
For a given test measurement vector $\mathbf{m}_*$ and a training dataset $\mathcal{D}$, the proposed framework approximates the predictive distribution as follows: \begin{equation} p({\mathbf s}_* | {\mathbf m}_*, \mathcal{D}) \approx \frac{1}{T} \sum_{t=1}^T \mathcal{N}({\mathbf s}_*| f_{\hat{\gamma}^{(t)}}({\mathbf m}_*), \diag(\sigma^2_{\hat{\delta}^{(t)}}({\mathbf m}_*))), \end{equation} where $T$ is the number of Monte Carlo samples used to approximate the integral, and $\hat{\theta}^{(t)} = \hat{\delta}^{(t)} \cup \hat{\gamma}^{(t)} $ is the $t^{\text{th}}$ sample from the parametric distribution $q_{\hat{\omega}}(\theta)$. For calibration purposes, we approximate the predictive distribution of each pixel with a Gaussian distribution as follows: \begin{equation} p([{\mathbf s}_*]_k | {\mathbf m}_*, \mathcal{D}) \approx \mathcal{N}([{\mathbf s}_*]_k| [\mathbb{E}[{\mathbf s}_* | {\mathbf m}_*, \mathcal{D}]]_k, \Var[[{\mathbf s}_*]_k | {\mathbf m}_*, \mathcal{D}]), \label{eq:preddistrecalibration} \end{equation} where the mean and the variance of this distribution are defined in the paper. Next, assuming that we have a validation dataset $\mathcal{D}_{\text{val}} = \{ ({\mathbf m}^{[i]}, {\mathbf s}^{[i]}) \}_{i=1}^V$, which is different from the test dataset, we generate a calibration dataset $\mathcal{D}_{\text{cal}}$ defined as follows: \begin{equation} \mathcal{D}_{\text{cal}} = \{ ({\mathbf m}^{[i]}, [{\mathbf s}^{[i]}]_k) | i \in [V], k \in [S] \}. \label{eq:calibrationdataset} \end{equation} Using the calibration dataset $\mathcal{D}_{\text{cal}}$ and the predictive distribution defined in \eqref{eq:preddistrecalibration}, we can apply the calibration method presented in \cite{Kuleshov2018Recalibration} to calibrate the proposed method. In the experiments section of the paper, we observed that the epistemic and aleatoric uncertainty maps convey useful information about the confidence of the reconstruction method and the difficulty of the imaging problem. However, to evaluate the reliability of the uncertainty information provided by the proposed framework more rigorously, we need a quantitative analysis. One way to measure the reliability of uncertainty estimates is to create a calibration plot~\cite{Kuleshov2018Recalibration}. An example calibration plot of the proposed framework for an MRI experiment is given in Figure \ref{fig:calibration}. From this figure, we observe that the proposed framework may provide slightly underconfident predictions due to model bias, even though the visual observations provided in the paper match our expectations about the behavior of epistemic and aleatoric uncertainties. To obtain calibrated predictions, we calibrated the proposed model using the calibration method presented in \cite{Kuleshov2018Recalibration} with the help of the Uncertainty Toolbox~\cite{chung2021uncertainty}. The calibration plot of the recalibrated framework is also depicted in Figure \ref{fig:calibration}; as can be seen, it provides well-calibrated uncertainty estimates.
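As an illustration of the diagnostic behind Figure \ref{fig:calibration}, the following SciPy sketch computes a calibration curve from the per-pixel Gaussian predictive distributions in \eqref{eq:preddistrecalibration}. It mirrors the quantile-coverage idea of \cite{Kuleshov2018Recalibration} but is a simplified stand-in, not the Uncertainty Toolbox implementation. \begin{verbatim}
import numpy as np
from scipy.stats import norm

def calibration_curve(y_true, pred_mean, pred_std, n_levels=20):
    # For each expected confidence level p, compute the observed
    # fraction of pixels whose true value falls below the predicted
    # p-quantile.  A well-calibrated model gives observed ~ expected.
    expected = np.linspace(0.01, 0.99, n_levels)
    cdf_vals = norm.cdf(y_true, loc=pred_mean, scale=pred_std)
    observed = np.array([(cdf_vals <= p).mean() for p in expected])
    return expected, observed
\end{verbatim}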
\begin{figure}[t] \centering \includegraphics[width=\columnwidth]{figures_supplementary/calibration_2.png} \caption{Calibration plots of the proposed framework and the calibrated proposed framework.} \label{fig:calibration} \end{figure} \section{Reconstruction Performance} Figure \ref{fig:reconstructionexample} compares the reconstruction performance of the proposed method with the reconstruction methods whose details are discussed in Section \ref{sec:othermethods} of the Supplementary Material. \begin{figure*}[t] \centering \includegraphics[width=0.95\textwidth]{figures_supplementary/reconstruction_merged_with_zoom.png} \caption{Visual comparison of the image reconstruction performance of zero-filling (ZF) / filtered backprojection (FBP), total variation reconstruction (TV), PGD-based deep unrolling method (PUM), PGD-based deep unrolling method without batch normalization (PUMw/oBN), proposed only epistemic model (POEM), proposed only aleatoric model (POAM), and the proposed method.} \label{fig:reconstructionexample} \end{figure*} \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
563
{"url":"https:\/\/agirlhasnona.me\/carriage-returns-matter\/","text":"It all started as a request as easy as any other requests: please take this file and use it to upload data to the database.\n\nSure, no problem. First, open the Excel sheet ... clean it up a bit ... save as a CSV ... and then I have it in the form that this handy little Perl script can upload to the database.\n\nBut when I run the script, which is handling zip codes for context, all the zip codes are 00000. Not only is this not a valid zip code, even if it were the script basically would have taken several hundred unique zip codes and their location data and made them all into 00000. \ud83e\udd14\n\nThe first thing that I notice on opening the file is that all the rows are actually on a single line and are separated by ^M in vim.\n\n## So what's ^M and how do I fix it?\n\nThis is where we circle back around to our title about carriage returns and line feeds. A little relevant info bites to help us get started:\n\n\u2022 A carriage return (CR) means that the cursor moves to the beginning of the line, this is denoted by \\r.\n\u2022 A line feed (LF) means moving to the next line, this is denoted by \\n.\n\nAnd a few more bits of information:\n\n\u2022 Windows uses \\r\\n as its End of Line sequence.\n\u2022 Mac uses \\r as its EOL sequence.\n\u2022 Unix uess \\n as its EOL sequence.\n\nYou can see the trouble brewing now, can't you? \ud83d\udd2e\n\nSo when I used Excel on a Mac to convert the file to a CSV file, it was using \\r as the newline character. When I uploaded that to the server of interest, running CentOS, it showed it as being one long line because I was now in Linux not macOS. So when I supplied the CSV file to the Perl script, a script that wasn't written to handle all three scenarios, it nope'd right outta there.\n\nBut that leads us to our purpose:\n\n1. How do I fix this?\n2. Why ^M?\n\nFor the first, in vim I can do a global replacement on the new line mish-mash with:\n\n:%s\/\\r\/\\r\/g\n\n\nBreaking this down \\r in vim is finding any existing new line character, in this case all the ^Ms, and replacing them with the OS's new line character, in this case \\n. That solves our new line issue. %s applies this change to all lines of the file and g applies the change to all instances on each line. Without the g only the first instance of ^M will be changed per line. (In this single line file, that means it'll only be replaced once.)\n\nNow for the ^M. If you pull up an ASCII chart, you'll find that the line feed character, \\n, is 0xA (or 0x0A); whereas the carriage return character, \\r, is 0xD (or 0x0D). The reason vim displayed the carriage returns as ^M is because D is 13 in hexidecimal (0-9, then A-F for 10-15) and the 13th letter of the English alphabet is ... \ud83e\udd41 ... M.\n\n## But that's not your only problem\n\nAfter feeling all happy that I fixed my new lines, I found another problem. Because there's always more than one \ud83d\ude09\n\nWhen the file was being read in by Perl every line was prefixed with: \\x{feff}. This is a zero width no break space.\n\nInvisicharacters are the bane of my existance today, it seems. 
## But that's not your only problem

After feeling all happy that I fixed my new lines, I found another problem. Because there's always more than one 😉

When the file was being read in by Perl, every line was prefixed with \x{feff}. This is a zero width no-break space.

Invisicharacters are the bane of my existence today, it seems.

A quick way to fix this is:

    perl -CSD -pe 's/^\x{feff}//' ${FILENAME}.csv

Since I had a few files with this problem I just wrapped this into a single-line for loop in BASH:

    for FILE in *csv; do TMPZIP=$(mktemp zip-XXXXX) && perl -CSD -pe 's/^\x{feff}//' ${FILE} > ${TMPZIP} && cp ${TMPZIP} ${FILE} && rm ${TMPZIP}; done

A quick explanation of the script:

• FILE in *csv is a foreach, so the loop will perform this action on all files whose names end in csv
• I used mktemp to make a temp file to prevent the unlikely event where me mashing up a common temporary file name like tmp will overwrite a tmp file that I actually needed / wanted. See man mktemp for more about the command.
• the perl ... line is reading in the file, replacing the feff hex character with no character (removing it), and writing the output to a new file. This is because the perl line doesn't modify the file itself, it just prints it to screen (stdout).
• I copy the TMPZIP file to overwrite the existing FILE.
• I remove the TMPZIP file.

Note that if you mktemp zip-XXXXX.csv you'll be making a CSV file, which will be pulled into your for loop and will wreak some havoc on what you were hoping would be an easy, clean fix. To see how this works, create some faux CSV files or just some backups of real files, and run the for loop with a temp file that has the CSV extension.
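One last convenience: if you want to know which files actually carry the BOM before rewriting anything in a loop, checking the first three bytes is enough, since U+FEFF encoded as UTF-8 is the byte sequence EF BB BF. A minimal sketch (the name bom_check.pl is made up):

    #!/usr/bin/perl
    # bom_check.pl (hypothetical name): print the names of files that begin
    # with a UTF-8 byte order mark (the bytes EF BB BF).
    use strict;
    use warnings;

    for my $file (@ARGV) {
        open my $fh, '<:raw', $file or do { warn "$file: $!\n"; next };
        read $fh, my $head, 3;
        print "$file\n" if defined $head && $head eq "\xEF\xBB\xBF";
        close $fh;
    }

Running perl bom_check.pl *.csv lists only the affected files, so you can limit the cleanup loop to the names it prints.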
SNAC Participant Bios

Robert Ackland holds a PhD in Economics from the Australian National University, an MA (Economics) from Yale University and a B. Commerce (Economics) from the University of Melbourne, and is a Research Fellow in the Research School of Social Sciences at the ANU. Robert's PhD research was in the application of index number theory to global comparisons of income and poverty. Since 1994, he has worked as a consultant in poverty analysis and targeting for organizations such as The World Bank (based in Washington, D.C., 1995-1997), the Asian Development Bank and AusAID, and has also worked on capacity building projects in these areas. More recently, Robert moved into a new area of research at the intersection of information science and empirical social science -- the development of new methods (and associated software) for quantitative analysis of social and economic phenomena on the Internet. Robert is developing research methods that combine information retrieval, data visualization and more traditional quantitative social science methods, and is co-Chief Investigator on an Australian Research Council (ARC) Discovery Project titled "New Methods for Researching the Existence and Impact of Political Networks on the WWW" (2004-2006). He is also co-chief investigator on an ARC Special Research Initiative (e-Research Support) project for the establishment of the Virtual Observatory for the Study of Online Networks. Robert is taking a leading role in establishing e-Research in the social sciences (or e-Social Science) in Australia, and co-organized the First Australian e-Social Science Symposium, held at the ANU in November 2004.

Lada A. Adamic is an assistant professor in the School of Information. Her research interests center on information dynamics in networks: how information diffuses, how it can be found, and how it influences the evolution of a network's structure. She worked previously in Hewlett-Packard's Information Dynamics Lab on research projects relating to networks constructed from large data sets. These projects included mining the medical literature for gene-disease connections, tracking and modeling information flow in E-mail and blog networks, modeling search processes on real-world social networks, and building expertise-finding systems.

Eytan Adar is currently a graduate student at the University of Washington in the Computer Science and Engineering department. Most recently he was a researcher at HP Labs, and before that at Xerox PARC. His research focus is the study of the behavior of large user populations in digital spaces. In general, he is very interested in the study and application of information side-effects. These are the pieces of data that users produce as part of their daily digital existence. Though not originally intended for this purpose, the data can be used to construct novel tools (e.g. building better search by using web-surfing behavior) as well as mined to answer fundamental questions of individual and social behavior. An example of this is his work with Lada Adamic on modeling and predicting information propagation by analyzing weblogs. Understanding these networks requires large-scale crawls and the application of computationally intensive algorithms, and he is very interested in infrastructures that support this kind of research. Another recent, and continuing, project is the Graph Exploration System (GUESS).
GUESS allows researchers in many fields to explore, analyze, and visualize networks using a domain specific language (an extension of Python). The language supports work with graphs where nodes and edges have arbitrary properties, as well as the construction of new visualization applications. The Java-based system is available under a GPL license from http://www.graphexploration.org/. He received his Bachelor and Master degrees at MIT in Computer Science. More information and papers are available at http://www.cond.org/.

Vladimir Batagelj is Professor of Discrete and Computational Mathematics at the University of Ljubljana. He is chair of the Department of Theoretical Computer Science, IMPM, Ljubljana. His main research interests are in mathematics and computer science: combinatorics with emphasis on graph theory, algorithms on graphs and networks, combinatorial optimization, algorithms and data structures, cluster analysis, visualization, and applications of information technology in education. He is a member of IEEE, the IFCS Group at Large, the Classification Society of North America, the International Network for Social Network Analysis, and the International Association for Statistical Computing, and an elected member of the International Statistical Institute. He is a member of the editorial boards of Informatica and of the Journal of Social Structure. With Andrej Mrvar (and Matjaž Zaveršnik) he has been developing, since 1996, the program Pajek for analysis and visualization of large networks. In the past years they have won several first prizes in graph drawing contests. He co-authored two books on network analysis published recently by Cambridge University Press: W. de Nooy, A. Mrvar, V. Batagelj, Exploratory Social Network Analysis with Pajek; and P. Doreian, V. Batagelj, A. Ferligoj, Generalized Blockmodeling.

Katy Börner is an Associate Professor of Information Science in the School of Library and Information Science, Adjunct Associate Professor in the School of Informatics, Core Faculty of Cognitive Science, Research Affiliate of the Biocomplexity Institute, and Member of the Advanced Visualization Laboratory, and directs the Information Visualization Lab at Indiana University. Her research focuses on the development of data analysis and visualization techniques that improve information access, understanding, and management. She is particularly interested in the study of the structure and evolution of scientific disciplines; the analysis and visualization of online activity, e.g., user actions in 3D virtual worlds; and the development of cyberinfrastructures for scientific collaboration and computation, e.g., the information visualization cyberinfrastructure (http://iv.slis.indiana.edu/). She co-edited a book on 'Visual Interfaces to Digital Libraries' published by Springer in 2002, a special issue of PNAS on 'Mapping Knowledge Domains' published in April 2004, a special issue on 'Collaborative Information Visualization Environments' in PRESENCE: Teleoperators and Virtual Environments, MIT Press, that appeared in Feb. 2005, and a special issue on 'Information Visualization Interfaces for Retrieval and Analysis' in the Journal of Digital Libraries that appeared in March 2005. Börner is the recipient of many fellowships and awards, including an Outstanding Junior Faculty Award, a Pervasive Technology Laboratories Fellowship, an SBC Fellowship, an NSF CAREER Award, and a Trustees Teaching Award. She is currently PI or Co-PI on 12 grants that are funded by NSF, the James S. McDonnell Foundation, the 21st Century Fund, and SUN Microsystems.
David Brandon, General Administrator with the Theoretical and Computational Biophysics Group, started with the group as part of BioCoRE's evaluation team, and soon expanded his responsibilities to evaluation of the group's other software packages. Following completion of a doctoral degree in organizational communication, with an emphasis on small group phenomena, he moved into an administrative position with the group. Currently, he performs a range of duties, from evaluation to training programs to proposal preparation to dissemination activities.

Larry Brandt joined the National Science Foundation (NSF) in 1976, and in 1984 he was a member of the management team for NSF's Office of Advanced Scientific Computing. Larry stayed with the Supercomputer Centers program for 14 years as a program manager. In the early 1990s Larry made several small grants to the undergraduates on the software development team at NCSA, resulting in the first multimedia Web browser (Mosaic) and a related server in January of 1993. To respond to the demand of early Web users for better, faster Mosaic development, Larry assembled a consortium of 15 interested Federal agencies who agreed to provide $3M over three years. Generalizing from that experience, in 1998 Larry created and still manages the NSF's Digital Government research program, which funds collaboration between academic researchers and government agencies. Valerie Gregg, on detail from the US Census Bureau to NSF, was the key collaborator in the program's development. The Digital Government program crosses all computer and information science disciplines and all government domains and missions, from international and Federal agencies to local levels, and supports primarily technical projects. In the last three years, an additional emphasis on projects from the policy, organizational, political and social sciences has been added. These encompass, for example, projects to explore the impact of IT on government organizations, the impact of IT on democracy, and e-voting. The program has funded over 100 research grants, with a current budget of about $10M per year. More about the program can be found at the web site http://www.digitalgovernment.org/.

Carter T. Butts received his Ph.D. in sociology from Carnegie Mellon University, and is currently assistant professor of sociology at the University of California, Irvine. Dr. Butts's research involves the application of formal (i.e., mathematical and computational) techniques to theoretical and methodological problems within the areas of social network analysis, mathematical sociology, quantitative methodology, and human judgment and decision making. Substantively, his current work focuses on modeling the structure of spatially embedded interpersonal networks, sexual contact networks, assignment processes, and individual and organizational interaction in crisis settings. The latter includes the RESCUE project (an NSF-funded interdisciplinary collaboration centered on improving information technology in the context of crisis response). His current methodological work includes hierarchical Bayesian models for network inference and time-dependent processes, and the use of discrete exponential families for direct modeling of network structure, structural comparison, and assignment systems. He is the primary author/maintainer of several software packages for social network analysis (including sna, network, and net theory), which can be found at http://erzuli.ss.uci.edu/R.stuff/.
He is also involved with the Statnet project, which is an NIH-funded effort to produce tools for the statistical modeling and analysis of social networks. Dr. Butts is a member of UCI's Institute for Mathematical Behavioral Sciences and the California Institute for Telecommunications and Information Technology, and serves on the council of the American Sociological Association's section on Mathematical Sociology.

Pamela I. Clark is a senior research scientist at Battelle Centers for Public Health Research and Evaluation. She holds a dual master's degree in Epidemiology and Biostatistics and a PhD in Epidemiology from the University of South Florida in Tampa. Her current research projects include a laboratory study of human smoking profiles over a range of cigarette products, two studies that aim to identify quality process and outcome indicators to evaluate the impact of comprehensive tobacco use prevention and control programs, and a study of advertising and promotion of tobacco products in retail stores. Her work that is most pertinent to SNAC is the National Cancer Institute Tobacco Control Research Branch's Initiative on the Study and Implementation of Systems (ISIS). ISIS is an ambitious project to apply systems thinking methodologies to practices in tobacco control, including building the cyberinfrastructure necessary to enable transdisciplinarity, support development of knowledge networks, and promote a seamless continuum from discovery to development to delivery of new knowledge.

Noshir Contractor is a Professor in the Departments of Speech Communication and Psychology at the University of Illinois at Urbana-Champaign. He is a Research Affiliate of the Beckman Institute for Advanced Science and Technology, Director of the Science of Networks in Communities (SONIC) Research Group at the National Center for Supercomputing Applications, and Co-Director of the Age of Networks Initiative at the Center for Advanced Study at the University of Illinois at Urbana-Champaign. His research program investigates factors that lead to the formation, maintenance, and dissolution of dynamically linked knowledge networks among communities involved in emergency response, food safety, public health, and environmental engineering. His research, funded continuously for the past decade by major grants from the U.S. National Science Foundation, has been published in Academy of Management Review, Communication Research, Computational and Mathematical Organizational Theory, Decision Science, Human Communication Research, Journal of Broadcasting & Electronic Media, Journal of Cultural Economics, Organization Science, Small Group Research, and Social Psychology Quarterly. His papers have received top-paper awards from both the International Communication Association and the National Communication Association. His book titled "Theories of Communication Networks" (co-authored with Professor Peter Monge and published by Oxford University Press) received the 2003 Book of the Year award from the Organizational Communication Division of the National Communication Association. He is the lead developer of IKNOW (Inquiring Knowledge Networks On the Web), a web-based social networking software, and Blanche, a software program to simulate the dynamics of social networks. For more information, see http://sonic.ncsa.uiuc.edu/.

Steven R. (Steve) Corman (PhD, University of Illinois at Urbana-Champaign, 1988) is a Professor in the Hugh Downs School of Human Communication at Arizona State University.
There he is Director of the Consortium for Strategic Communication, which brings ideas from communication theory and research to bear on problems of counter-terrorism and national security. In a related capacity he serves on a scientific advisory committee for U.S. Special Operations Command. Corman is currently Chair of the Organizational Communication Division of the International Communication Association. In that capacity he leads an effort, supported by SONIC and NCSA, to make the Enron e-mail dataset available to researchers in communication and other disciplines through a user-friendly interface. More information on the project is available at http://sonic.ncsa.uiuc.edu/enron/. Corman is former co-director of the ASU Software Factory Project, a recently-completed effort. Its research objective was to create a complete record of communication in a "real" (albeit small) organization over an extended period of time. The project has collected over 12,000 hours of recorded talk of 54 participants over three years. This is supported by 400 recorded interviews, a weekly perceived social network survey (127 time points), ethnographic observation notes, time tracking, and software engineering data. More information is available at http://www.public.asu.edu/~corman/sunbelt05.ppt. Finally, Corman is cofounder and Chief Technology Officer of Crawdad Technologies LLC, a software firm specializing in network text analysis. More information is available at http://www.crawdadtech.com/.

Donna J. Cox is Professor in the School of Art and Design at the University of Illinois at Urbana-Champaign and the Director for Visualization and Experimental Technologies at the National Center for Supercomputing Applications. Cox received the international Coler-Maxwell Award for Excellence granted by the Leonardo International Society in Arts, Science and Technology for her seminal paper on "Renaissance Teams." Cox has written numerous publications on scientific and information visualization. She is a recognized international keynote speaker at research institutions in countries including Australia, New Zealand, Brazil, Finland, Japan, and Switzerland. Inviting institutions include MIT, Kodak, Motorola, EDUCOM, the T.J. Watson Research Center, and the National Library of Medicine. Her collaborative work has been cited, reviewed, or published in over 100 publications including Newsweek, TIME, National Geographic, Wall Street Journal, New York Times, and The Chronicle of Higher Education. Cox has been featured in numerous television programs including "Good Morning America." She was Associate Producer for Scientific Visualization and Art Director for the PIXAR/NCSA segment of the IMAX movie "Cosmic Voyage," nominated for a 1997 Academy Award in the documentary short subject category. Recent projects include two Hayden Planetarium digital space shows at the American Museum of Natural History in New York City; The Discovery Channel's "Unfolding Universe"; and the NOVA HDTV "Runaway Universe," which received the 2002 Golden Camera Festival Award. She is a juror for the NSF's Visualization Challenge and is SIGGRAPH 2005 Emerging Technologies Chair. Cox is currently working on a PBS NOVA show and a Denver Museum of Nature and Science Planetarium Digital Dome Show on black holes.

Jonathon Cummings is an Associate Professor of Management at the Fuqua School of Business, Duke University. He spent three years at the MIT Sloan School of Management as an Assistant Professor after completing his dissertation and post-doc at Carnegie Mellon University.
During graduate school he interned at Intel (studying collaborative software) and at Motorola (studying knowledge management). He has an undergraduate degree in organizational psychology from the University of Michigan and a master's degree in social psychology from Harvard University. Professor Cummings is the author of NetVis, a free open source web-based tool to analyze and visualize social networks using data from csv files, online surveys, and dispersed teams.

Roberto Dandi is a postdoctoral researcher at the National Center for Supercomputing Applications, Science of Networks in Communities (SONIC) research group at the University of Illinois at Urbana-Champaign. He obtained his PhD in Organizational Behavior at Università degli Studi del Molise (Italy) in 2004 with a dissertation on the consequences of email communication on organizational participation in decision making. His research interests, broadly speaking, deal with the organizational and social consequences of Information and Communication Technology. In particular, he has been involved in projects focusing on computer-mediated communication in organizations, on the creation of Virtual Organizations, and on the development of knowledge networks in teams and between organizations. He is now focusing on applying Social Network Analysis to the study of online behavior in communities of scholars and organizations that use cyberinfrastructure.

Thom Dunning is the director of the National Center for Supercomputing Applications. He also holds an endowed position as Distinguished Chair for Research Excellence in Chemistry and professor in the department of chemistry at the University of Illinois at Urbana-Champaign. Dunning comes to NCSA from Tennessee, where he was the director of the Joint Institute for Computational Sciences in Oak Ridge, a distinguished professor of chemistry and chemical engineering at the University of Tennessee in Knoxville, and a distinguished scientist in computing and computational sciences at Oak Ridge National Laboratory. Before that, Dunning was responsible for supercomputing and networking for the University of North Carolina System and was a professor of chemistry at the University of North Carolina at Chapel Hill. Before going to North Carolina, Dunning was assistant director for scientific simulation in the Office of Science at the U.S. Department of Energy, on leave from Pacific Northwest National Laboratory. In that position, he was instrumental in creating DOE's new scientific computing program, Scientific Discovery through Advanced Computing (SciDAC). SciDAC is the federal government's first comprehensive program aimed at developing the software infrastructure needed for scientific computing. Dunning is the former leader of the Theoretical and Computational Chemistry Group at Argonne National Laboratory and was associate director for theory, modeling, and simulation in the Environmental Molecular Sciences Laboratory at Pacific Northwest National Laboratory, as well as EMSL director. He is a fellow of the American Physical Society and of the American Association for the Advancement of Science, as well as a member of the American Chemical Society. He received his bachelor's degree in chemistry in 1965 from the University of Missouri-Rolla and his Ph.D. in chemical physics from the California Institute of Technology in 1970.

Stephen Eubank received his B.A. in physics from Swarthmore College in 1979 and his Ph.D. in theoretical particle physics from the University of Texas at Austin in 1986.
He has worked in the fields of fluid turbulence (at the La Jolla Institute); nonlinear dynamics and chaos (at Los Alamos National Laboratory's Center for Nonlinear Studies); financial market modeling (as a founder of Prediction Company); ecological time series analysis (at Biosphere 2); and natural language processing (as an invited researcher at Advanced Telecommunication Research in Kyoto, Japan). As a staff member at Los Alamos from 1997 to 2005, Dr. Eubank played a leading role in development of the traffic microsimulation component of the Transportation Analysis and Simulation System (TRANSIMS); he led the Epidemiology Simulation (EpiSims) project; and he was the team leader for the Urban Infrastructure Suite (UIS), of which both TRANSIMS and EpiSims are parts. UIS is a collection of interoperable simulations of interacting infrastructures, each of which simulates the behavior of every individual in a large urban region. The goal of UIS is to model the dynamics of systems including both physical and social components. In his current position as Deputy Director of the Network Dynamics and Simulation Sciences Laboratory at the Virginia Bioinformatics Institute at Virginia Tech, he is pursuing interests in developing advanced technology for the study of large socio-technical systems and understanding the dynamics and structure of social networks.

Thomas A. Finholt is the director of the Collaboratory for Research on Electronic Work at the University of Michigan's School of Information, where he is also a research associate professor. Currently, Dr. Finholt is involved with several cyberinfrastructure projects. First, he is working with the National Center for Supercomputing Applications (NCSA) to understand cyberinfrastructure requirements within the meso-scale weather community and the environmental engineering community. Second, Dr. Finholt is working with the NSF-funded Mid-America Earthquake Center to develop portal-based risk and loss assessment tools. Third, Dr. Finholt is involved with an NSF-sponsored effort to build social networking tools to assist tobacco control researchers. Fourth, Dr. Finholt is working with the San Diego Supercomputer Center as part of the NSF's George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES), where his group is supporting the transition of the NEESgrid software to the Sakai software platform. Fifth, Dr. Finholt is directing the education and outreach cores of the NIH-funded National Center for Integrative Biomedical Informatics. Finally, Dr. Finholt is leading two internal research projects at Michigan, the Michigan Grid Research and Infrastructure Development (MGRID) Center and the Connection Project, an effort to build highly realistic video links among geographically distributed collaborators. In the past, Dr. Finholt was a co-PI on the NEESgrid project and the Space Physics and Aeronomy Research Collaboratory (SPARC). He also led research projects on collaboration technology with NIST and with Bell Labs. Dr. Finholt is a graduate of Swarthmore College and he received his Ph.D. in social and decision sciences from Carnegie Mellon University.

Danyel Fisher got his Master's from UC Berkeley in Computer Science in 2000, and his PhD from UC Irvine, also in Computer Science, in 2004.
His dissertation examined the structure of egocentric social networks in email and considered different temporal and social patterns that were visible (http://www.ics.uci.edu/~jpd/publications/2004/chi2004-soylent.pdf, http://www.isr.uci.edu/projects/soylent/). Since late 2004, he has worked for Microsoft Research's Community Technologies Group, home of Netscan. He is now doing research on social roles within Usenet newsgroups (articles in the Online Deliberation conference, in JCMC, and in IEEE Internet) based on a social network perspective of response patterns. He is also involved in an email-based project, SNARF, which uses social involvement as a way of sorting and organizing email. SNARF also maintains a log of mail messages, which can be used to reconstruct mail history and social networks; the project is currently analyzing the mass of data obtained from a broad deployment of SNARF across over six hundred users at Microsoft. During his dissertation work, Danyel co-created the open-source JUNG (Java Universal Network/Graph) toolkit, which provides an easy and powerful interface to social network analysis using the Java programming language.

Gary Giovino is a Senior Research Scientist and Director of the Tobacco Control Research Program at the Roswell Park Cancer Institute in Buffalo, New York. He also serves as a Research Professor in the SUNY at Buffalo School of Public Health and Health Professions, where he teaches a course in Tobacco Control. His professional interests and activities involve the study of patterns, determinants, consequences, and control of tobacco use. He is becoming increasingly interested in the role of nutrition in health and disease and in wellness. His four major research projects involve surveys. He is the Principal Investigator on the National Youth Smoking Cessation Survey, a two-year nationally-representative cohort study of 2,582 16-24 year old smokers. The purpose of this study is to document the natural history of quitting smoking among older adolescent and young adult smokers. This month (November 2005) Giovino and colleagues will finalize data collection, completing the 24-month assessment. Gary Giovino is also PI on the Assessing Hard Core Smoking Survey, a nationally-representative cohort survey of 1,000 current smokers (ages 25 years and older) and 256 recent former smokers. The purpose of this study is to assess patterns and determinants of "hard-core" smoking in the United States. Third, he is PI on Policy Effects of Cigarette Design, Emissions, and Behavior, Project 3 of Roswell Park's Transdisciplinary Tobacco Use Research Center (TTURC) (K. Michael Cummings, PI). As such, he leads work investigating cigarette product characteristics and smoke chemistries, coordinating data collection efforts around surveys in the USA, Canada, United Kingdom, Australia, Malaysia, and Thailand, with activity about to begin in China. He also heads the Tobacco Team on Project ImpacTeen, a research project designed to better understand program and policy interventions to reduce adolescent tobacco, alcohol, and illicit substance use and abuse. He is also very interested in the role that suboptimal nutrition may play in the development and maintenance of addiction, particularly nicotine addiction, and in the role that optimal nutrition may have in accelerating the disease risk reduction trajectory in former smokers.
He is working to improve the nation's tobacco surveillance system, attempting to coordinate work that monitors products, users and potential users, the tobacco industry, and environmental influences such as media and policy. He also serves on the New York State Tobacco Control Advisory Board, where he (and others) advises the State on implementing evidence-based tobacco control strategies.

Harold D. Green, Jr. (Hank) holds a Ph.D. from the University of Florida Department of Anthropology with specializations in research design and qualitative and quantitative methods of social research, including social network analysis theory, method, and application. Beginning in 2001, Hank spent one and a half years evaluating cooperation among commodity-based international development organizations in Washington DC, applying structural measures used in social network analysis to questions of collaboration and network growth and health. Hank was a National Institute of Mental Health Postdoctoral Training Fellow at the University of Illinois at Urbana-Champaign from 2003-2005, continuing his studies of quantitative social research, applied statistics, research design, and applied social network analysis. He is currently a Postdoctoral Research Fellow at the National Center for Supercomputing Applications. With the Science of Networks in Communities (SONIC) research group, Hank has been active in projects that combine his qualitative and quantitative interests. Hank works primarily with communities of practice to explore and enable collaboration using social network analysis. Hank's personal research interests include developing indicators and methods to explore relational structures inherent in multiplex social environments. These approaches build on permutation-based tests and on clustering and scaling techniques. He also collaborates with Dr. Christopher McCarty (University of Florida Bureau of Economic and Business Research) to investigate personal network structure and its relationship with personality and behavioral traits.

Doug Gregor's background is in the areas of programming languages, programming methodologies, compilers, and the construction of high-performance, generic software libraries. The last of these is directly relevant to this workshop, because the Open Systems Laboratory has been developing generic software libraries of graph (network) algorithms and data structures for several years. The best known of these projects is the generic Boost Graph Library (BGL), available at http://www.boost.org/libs/graph/doc/. More recently, Doug Gregor and his colleagues have developed the Parallel BGL, available at http://www.osl.iu.edu/research/pbgl/, which extends the BGL to parallel computation on clusters, allowing them to perform queries on graphs with tens of millions of nodes and billions of edges within a few seconds on a medium-sized cluster. For all their capabilities, the BGL and Parallel BGL are only low-level infrastructure libraries on which one could build interesting applications for social network analysis.

Keith N. Hampton, Assistant Professor in the Annenberg School for Communication at the University of Pennsylvania, received his Ph.D. and M.A. from the University of Toronto in sociology, and a B.A. in sociology from the University of Calgary. From 2001-2005 he was Assistant Professor of Technology, Urban and Community Sociology and held the Class of '43 Career Development Chair in the Department of Urban Studies and Planning at the Massachusetts Institute of Technology.
His research interests focus on the relationship between information and communication technologies, social networks, and the urban environment. Recent projects include: (i) i-neighbors.org -- a free, public resource where people find their geographic neighborhoods online and form corresponding digital communities. The i-neighbors project investigates in detail the specific contexts where Internet use affords local interactions and facilitates community involvement. I-neighbors.org is also an experiment in e-democracy, exploring the potential for new information and communication technologies to expand political participation. (ii) E-neighbors -- a three-year, longitudinal study of four Boston neighborhoods that i) examines the relationship between media use and the composition of people's social networks, and ii) explores the potential for new information and communication technologies to expand social networks, social capital and community involvement at the neighborhood level. (iii) Grande Wi-Fi -- an exploratory study of how Wi-Fi infrastructures influence social relationships in paid and free Wi-Fi cafes in Boston and Seattle. (iv) Netville -- a three-year survey and ethnographic investigation of how living in a newly developed residential community, equipped with a series of advanced computer and communication technologies as part of its design, affects community relations.

Eszter Hargittai is Assistant Professor of Communication Studies and Sociology, and Faculty Fellow of the Institute for Policy Research at Northwestern University, where she heads the Web-Use Project. She received her Ph.D. in Sociology from Princeton University, where she was a Wilson Scholar. Before joining the faculty at Northwestern, she was a post-doctoral fellow at the Center for Arts and Cultural Policy Studies of the Woodrow Wilson School of Public and International Affairs. Her research focuses on the social and policy implications of information technologies, with a particular interest in how IT may contribute to or alleviate social inequalities. Her research projects have looked at differences in people's Web-use skills, the evolution of search engines and the organization and presentation of online content, political uses of information technologies, and how IT is influencing the types of cultural products people consume. In addition to her academic articles, her work has also been featured on CNNfn, the BBC's Web site, and several national dailies. Her work has been supported by the National Science Foundation, the Markle Foundation, the Dan David Foundation and the Russell Sage Foundation. In 2006/07 she will be a Fellow at the Center for Advanced Study in the Behavioral Sciences at Stanford.

Caroline Haythornthwaite is Associate Professor at the Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign. Her research examines how the Internet and computer media support and affect work, learning, and social interaction. It examines how information is exchanged, knowledge is shared and co-constructed, collaboration happens, and community forms. Studies have examined social networks of work and media use among researchers, the formation of social networks and media use over time in online classes, and social networks of knowledge sharing in collaborative research teams.
Other work examines the development and nature of community online, communication issues for new online learners, distributed knowledge processes, and the nature and constraints of interdisciplinary collaboration. Her studies of social networks and media use show that those with stronger ties communicate via more media than weakly tied pairs (media multiplexity), and that, within groups, media are found in pairs' repertoires in a similar order. This suggests that group-mandated means of communication provide a 'latent tie' structure on which pairs can build weak and then stronger ties (see papers in The Information Society and Information, Communication and Society). Major publications include The Internet in Everyday Life (2002, edited with Barry Wellman); Learning, Culture and Community in Online Education: Research and Practice (2004, edited with Michelle M. Kazmer); a special issue of the Journal of Computer-Mediated Communication on "Computer-Mediated Collaborative Practices and Systems" (2005); and the Handbook of E-learning Research (in preparation, edited with Richard Andrews).

Bruce Herr is a Computer Software Engineer for Katy Börner's InfoVis Lab at IU. He graduated from Indiana University with a BS in Computer Science. His main career goal is to make really cool, extensible, and easy-to-use software. Current projects are the IVC, Taxonomy Validator (in production), and IVC-DB. He mainly programs in Java using the revolutionary Eclipse IDE and leveraging other open source technologies as needed.

Raquell Holmes received her Ph.D. in the area of cellular, developmental biology from the Tufts Sackler School of Biomedical Sciences. After a post doc at the Dana Farber Cancer Research Institute she joined Boston University's Center for Computational Science to manage the Education, Outreach and Training Partnership for Advanced Computational Infrastructure (EOT-PACI). Her focus in the context of EOT-PACI, and now EPIC (Engaging People in CyberInfrastructure), has been on building community within and beyond the NSF-funded partnerships. In addition to participating in the leadership of EOT-PACI, Dr. Holmes headed the development of the EOT-PACI website and Metrics Online. These tools served the partnership and its audiences by making visible the activities and products of the PACI partners. The development of the Metrics tool required establishing rudimentary measures of collaboration within the partnerships. This has included linkages between institutions, projects and specific events or products. An historical overview of the tool's development has been posted. Dr. Holmes also develops materials for training biologists in the area of modeling and simulation. This work builds on the computational science education and training efforts within EPIC, identifies numerical modeling applications that are easy for biologists to use, and supplements these tools with curricular materials and professional development workshops. Through this work Dr. Holmes has developed a high-level understanding of metabolic pathways, biological database implementations, and distinctions between static and dynamic analyses of biological systems at the cellular and molecular levels.

Eric Jakobsson is in the Department of Molecular and Integrative Physiology and the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, and also has major commitments to the Center for Biophysics and Computational Biology and the Beckman Institute for Advanced Science and Technology on that campus.
He has just returned to campus from a two-year term as Director of the Center for Bioinformatics and Computational Biology at the National Institute of General Medical Sciences and Chair of the Biomedical Information Science and Technology Initiative Consortium at the National Institutes of Health in Bethesda, Maryland. Jakobsson's research and academic interests are centered on computational studies of membrane structure and transport, and on the use of computation in education. He is the Principal Investigator of a newly awarded NIH Nanomedicine Development Center that is part of the NIH Roadmap. His primary immediate interest in Social Network Theory is in applying it to the management of a bioengineering project that must be focused on coherent goals while being distributed across 10 institutions that span 9 time zones.

Karrie Karahalios is an assistant professor in the Computer Science department at the University of Illinois at Urbana-Champaign. Her work focuses on the interaction between people and the social cues they perceive in networked electronic spaces. Of particular interest are interfaces for public online and physical gathering spaces such as chatrooms, cafes, parks, etc. The goal is to create interfaces that enable users to perceive conversational patterns that are present, but not obvious, in traditional communication interfaces. Her most recent work involved integrating social catalysts into the design of interfaces for connecting spaces using audio and video. Previous projects include: Visiphone, a communication object that visualizes conversation patterns between two spaces; Hear&Here, an augmented reality interface for placing sound envelopes in space and retrieving them with an audio interface; and Chit Chat Club, a hybrid social space that combines the immediacy of the traditional cafe with the global reach and easy introductions of an online chat. Ongoing research involves (i) analyzing patterns in cell phone conversations among groups of people and visualizing them to get a better understanding of mobile conversation patterns, (ii) developing interfaces for incorporating emotion into mediated communication archives, and (iii) developing interactive furniture interfaces for collocated interaction among people about a table to show "value" in communication. Karrie completed an SB in electrical engineering, an MEng in electrical engineering and computer science, an SM in media arts and science, and a PhD in media arts and science at MIT.

David Knoke is professor of sociology at the University of Minnesota. He received his Ph.D. in 1972 from the University of Michigan and was professor of sociology at Indiana University from 1972 to 1985. He was a Fulbright research scholar at Kiel University (1989) and a fellow at the Center for Advanced Study in the Behavioral Sciences (1992). In 1982, Knoke and James Kuklinski published a basic primer, Network Analysis. With various colleagues, he received several National Science Foundation research grants and published their results in research monographs on political and organizational behavior, including The Organizational State, Organizing for Collective Action, Political Networks, Organizations in America, Comparing Policy Networks, and Changing Organizations. Recent research interests include organizational theories, economic sociology, strategic alliances, and social network analysis.
Knoke is currently investigating the formation and consequences of strategic alliances in the global information sector, using data on more than 4,000 research, development, production, and marketing collaborations from 1989-2000 among the world's 150 largest corporations in the computer, publishing, motion picture, broadcasting, telecommunications, information services, and data processing industries.

Laura Koehly, Ph.D., is an Investigator in the National Human Genome Research Institute of the National Institutes of Health. She is the Head of the Social Network Methods Section of the Social and Behavioral Research Branch. Dr. Koehly completed her Ph.D. in quantitative psychology at the University of Illinois at Urbana-Champaign, after which she completed post-doctoral training at the University of Texas M.D. Anderson Cancer Center. Her current research focuses on the development of social network methodologies to measure and model the complexities of family systems, the utilization of these methods to understand the social, psychological, and communicative context of families at risk for hereditary disease, and the translation of this understanding into effective network-based interventions that facilitate the delivery of genetic counseling services and the dissemination of risk information. Her work extends traditional approaches that focus on a single family member to a social network approach, which allows one to develop a more realistic understanding of the dissemination process when communicating health information through a family. Additionally, she is interested in understanding how formal support systems, such as health care providers (i.e., genetic counselors, general practitioners), participate in the process of decision making, communication, and support for at-risk family members. Dr. Koehly's methodological research interests focus on the development of stochastic models for three-way social network data and interdependent ego-centered network data.

Gavin La Rowe will complete a Master's in Information Science at Indiana University this fall. Aside from his role as the InfoVis CyberInfrastructure database administrator, Gavin's research primarily focuses on information retrieval (TREC & NTCIR), multi-dimensional scaling, ontology inference engines, automated classification, and RDF search engines. He is currently a technical lead for the Web Information Discovery Integration Laboratory (WIDIT) and the Classification-based Search and Knowledge Discovery tool (CSKD). Gavin also works on multi-lingual information retrieval and knowledge discovery tools for East Asian languages. He is the lead developer for the Thesaurus Linguae Sericae project, formerly hosted at the University of Heidelberg. His research interests include: large-scale multi-dimensional scaling, ontology inference engines, information visualization, classification search, and knowledge discovery.

David M. J. Lazer, Associate Professor of Public Policy, is director and founder of the Program on Networked Governance. Lazer has written extensively on the process by which connections emerge among actors and the consequences that the resultant network has for individuals and the system. With the support of the NSF, he has launched a Web-based forum on the use of DNA in the criminal justice system, which has the objective of facilitating knowledge sharing within that community. He holds a PhD in political science from the University of Michigan.
Roger Leenders is an associate professor of Business Development at the School of Management, University of Groningen, The Netherlands. His main research interest is in how social networks within and between groups/teams of individuals affect the performance of groups and teams. Primarily, he studies the effect of intra- and inter-team networks on the creativity and innovativeness of teams and on the creation of knowledge by teams. His main empirical field is that of teams involved in R&D, both in goods and services. He studies both teams of co-located individuals and virtual sets of teams that communicate through electronic means. In addition to the study of creativity and knowledge, a second major field of study relates to the modeling of social influence. Here he has developed network autocorrelation models that allow one to estimate and test various regimes of influence that may occur through network relations. From a more substantive point of view he studies language patterns that people use to influence their network partners. This also ties in with work in which he studies how social networks can be created/influenced/engineered so as to increase payoff for (all or several) network members. Being an econometrician by training, he enjoys building mathematical and statistical models of social networks. As a business scholar, he enjoys the creation and testing of theory. He received his Ph.D. at the Interuniversity Center for Social Science Theory and Methodology, University of Groningen. His papers have appeared in a variety of journals such as Social Networks, Journal of Mathematical Sociology, Journal of Engineering and Technology Management, Journal of Product Innovation Management, Creativity and Innovation Management, Technovation, and Team Performance Management.

Michael Macy is Professor and Chair of Sociology at Cornell. In a series of studies funded by the National Science Foundation, his research team used computational models and laboratory experiments with human subjects to explore how threshold effects in network interactions might generate familiar but enigmatic social patterns, such as the emergence and collapse of fads, the spread of self-destructive behaviors, and the polarization of opinion. Macy pioneered the use of agent-based models in sociology to explore the effects of heterogeneity, bounded rationality, and network structure on the dynamics and stability of social systems (http://hsd.soc.cornell.edu/Macy.htm). He now heads a team of social, information, and computer scientists who are building tools that will make the Internet Archive accessible for research on social and information networks (http://www.nsf.gov/news/news_summ.jsp?cntn_id=104477). He also leads a Cornell initiative to promote cross-disciplinary collaborative research and teaching on social and information networks.

Madhav Marathe is Professor at the Virginia Bioinformatics Institute (VBI) and the Department of Computer Science, and Deputy Director of the VBI Network Dynamics and Simulation Science Laboratory at Virginia Polytechnic Institute and State University. With over eight years of experience in project leadership and technology development, Professor Marathe specializes in population dynamics, telecommunication systems, epidemiology, design and architecture of the data grid, design and analysis of algorithms for data manipulation, design of services-oriented architectures, and socio-technical systems.
Currently he is on the external advisory board of the ONR-funded National Center for Advanced Secure Systems Research. Professor Marathe has led the development of a computational theory of discrete simulations based on discrete dynamical systems, and the development of methods and tools for simulation-based representation, analysis, and generation of large socio-technical networks. He has published more than 125 research articles in peer-reviewed journals, conferences, and books over the last fifteen years, including SIAM J. Computing, MONET, JSAC, and TCS. He has managed long-term research contracts and collaborations with a number of universities, has been PI and Co-PI on more than a dozen funded research programs, and has served as a PC member for conferences and workshops.

Sean Mason is a Research Programmer working in SONIC (Science of Networks in Communities) at NCSA. Current projects include CI-KNOW (Cyberinfrastructure Knowledge Networks on the Web) and its predecessor IKNOW (Inquiring Knowledge Networks on the Web). His current research interests include making useful social network analysis-based recommendations for users of cyberinfrastructures. He programs primarily in Java and Python.

Chris McCarty is director of the University of Florida Survey Research Center, a 75-station telephone survey lab. He received an undergraduate degree in anthropology from West Virginia University in 1980 and a doctorate in anthropology from the University of Florida in 1992. He has conducted research in Mexico and West Africa, and has consulted on USAID-funded projects in Cameroon, Ghana, Mali and Jamaica. McCarty's primary research interests are in the area of social networks, specializing in the measurement and analysis of personal networks -- the set of family, friends and acquaintances surrounding a focal person. His most recent work is in the area of personal network structure and visualization. He developed a Java program called EgoNet for the collection and analysis of personal networks. This program was recently re-written in Delphi by MDLogix of Towson, MD. EgoNet provides a questionnaire authoring language oriented to personal networks. It allows a researcher to visualize individual personal networks, overlay alter attributes, analyze structural attributes, and output aggregate files across respondents to SPSS. He is currently applying this program to a project sponsored by the National Science Foundation to use personal networks as a measure of acculturation among Hispanic migrants to the US and African migrants to Spain. McCarty has also conducted social network research on the small world phenomenon, personal network size, and estimating the size of hard-to-count populations. He has published on survey research methods concerning response rates, estimating the demographic effects of hurricanes, and adapting consumer confidence measures to developing countries.

William Michener has been a Research Professor in the Biology Department at the University of New Mexico since 2000 and a Senior Scientist at the American Institute of Biological Sciences in Washington, DC since 2004. He serves as Associate Director of the National Science Foundation's Long Term Ecological Research Network Office and Co-Director of the National Ecological Observatory Network Project Office in Washington, DC. He is PI for the Science Environment for Ecological Knowledge (a large NSF Information Technology Research project that involves 9 principal institutions and approximately 45 investigators and programmers).
He is the author of more than 70 journal articles and book chapters, and has co-edited four books related to ecological informatics. He is a Certified Senior Ecologist and serves as Editor of Ecological Archives and Associate Editor of the International Journal of Ecological Informatics. His research focuses on the ecology of natural and anthropogenic disturbances, the design of environmental observatories, and the application of scientific data and information technologies to ecology and natural resource management.

Chris Mueller has extensive experience developing scientific software and distributed data infrastructures. At Research Systems, he created the first Web scripting language for developing scientific applications, ION Script. An important design goal in his applications is to make high-end computing accessible to end-users by finding the appropriate balance between abstraction and functionality that enables users and developers to be most productive. Currently, he is pursuing a PhD in Computer Science at Indiana University and is exploring software and hardware architectures for large graph mining and visualization.

Edward T. Palazzolo, Ph.D., University of Illinois at Urbana-Champaign, is an assistant professor in the School of Communication at The Ohio State University, where he teaches graduate and undergraduate courses in organizational communication. He is the director of the Communication Research on Information and Organizational Systems Laboratory (Curious Lab: http://curious.comm.ohio-state.edu/). While at the University of Illinois he worked as a research assistant in the TECLab under the direction of Noshir Contractor on multiple projects and focused most of his time on an NSF-funded grant in the area of Knowledge and Distributed Intelligence. His current research focuses on communication and knowledge networks and their impact on team performance in organizational settings. He recently received a grant from the Education Council for his work with the Columbus Public Schools' elementary school and middle school principals to help build a professional learning community and to improve student performance through managing communication and knowledge networks.

W. Bradford Paley uses computers to create visual displays with the goal of making readable, clear, and engaging expressions of complex data. His visual representations are inspired by the calm, richly layered information in natural scenes. His process applies three perspectives: [1] rendering methods used by fine artists and graphic artists are [2] informed by their possible underpinnings in human perception, then [3] applied to creating narrowly-scoped, almost idiosyncratic representations whose visual semantics are often driven by the real-world metaphors of the experts who know the domains best. Brad did his first computer graphics in 1973, graduated Phi Beta Kappa from UC Berkeley in 1981, founded Digital Image Design Incorporated in 1982, and started doing financial & statistical data visualization in 1986. He has exhibited at the Museum of Modern Art; he created TextArc.org; he is in the ARTPORT collection of the Whitney Museum of American Art; he has received multiple grants and awards for both art and design; and his designs are at work every day in the hands of brokers on the floor of the New York Stock Exchange.
He is an adjunct associate professor at Columbia University, and is director of Information Esthetics: a fledgling interdisciplinary group exploring the creation and interpretation of data representations that are both readable and esthetically satisfying.

Michael Piasecki holds degrees in Civil Engineering from the University of Hannover, Germany (Diplom, 1991) and the University of Michigan (PhD, 1994), with a focus on Water Resources Engineering. He currently holds the rank of Associate Professor at Drexel University in the Department of Civil, Architectural, & Environmental Engineering, Philadelphia. Dr. Piasecki's research interests center on the area of HydroInformatics and focus on the development of metadata profiles for the hydrologic community as well as the creation and representation of hydrologic processes and vocabularies using ontologies. Of special interest is the problem of semantic heterogeneity in the description of processes and data files, and the utilization of ontologies to overcome these heterogeneities. Dr. Piasecki is currently a member of the CUAHSI HIS team developing a prototype information system for the hydrologic community, where he has taken the lead on developing the community metadata profile as well as controlled vocabularies for use in the information system. He is also the recipient of a CLEANER planning grant to investigate the CyberInfrastructure needs of a future Environmental Field Facility as part of an Engineering Analyses Network using an existing LTER facility. As part of his community involvement, and in recognition of his expertise, he is a member of the CI advisory committee for CLEANER and the LTER network and has been invited to numerous workshops on CI development organized by the Environmental Observing System communities and NSF.

Catherine Plaisant, PhD, is Associate Research Scientist and Associate Director of the Human-Computer Interaction Laboratory of the University of Maryland Institute for Advanced Computer Studies. She earned a Doctorat d'Ingénieur degree in France in 1982 and has worked on developing and evaluating user interfaces since then. In 1987 she joined Professor Ben Shneiderman at the Human-Computer Interaction Laboratory of the University of Maryland. She works with graduate students from Computer Science, Information Studies, and Psychology on designing and evaluating new interface technologies that are usable, useful, and appealing. Research contributions range from focused interaction techniques to innovative visualization techniques validated with user studies and practical applications. The activities that are most relevant to the focus of this workshop all concern the design and evaluation of user interfaces. Current projects include two user interfaces that look for alternatives to the traditional node-link diagrams for exploring network data: NetLens (which uses coordinated overviews and sorted lists) and TreePlus (which investigates interactive tree representations to present graphs). In an effort to promote the evaluation of information visualization and the development of benchmarks, she started the InfoVis Contest, whose topic in 2004 was the History of InfoVis. Participants were judged on how many insights they could provide about the data (the papers and authors of 10 years of the conference).

Shashikant Penumarthy is a graduate student in the School of Library and Information Science at Indiana University, Bloomington. He has a background in Electronics Engineering (B.E.) and Computer Science (M.S.).
Currently he is pursuing his PhD in the InfoVis lab, focusing on modeling and analysis of the diffusion of information in scientific networks. Other research interests include information visualization, complex systems, network analysis, agent-based models, computer-mediated communication, programming languages, and object-oriented frameworks. In the InfoVis lab, he plays the role of software consultant and architect and leads the IVC Software Framework project, which aims to build a highly extensible, programming-language-independent framework to enable scientists to share code and data unmodified.

Bill Richards. Network researchers have used eigendecomposition methods either implicitly or explicitly since the late 1960s, when computers became generally accessible in most universities. There are a number of problems with both the methods currently in use and what have come to be seen as "standard" ways of using those methods. The main goal of his research is to develop a unifying theoretical perspective that will address the problems mentioned above, and that will tie together analytic methods previously understood as being unrelated to one another by providing clear links to the mathematical foundation on which they stand, namely algebraic graph theory. His research activities have centered largely around the development and implementation of computer programs for the analysis of communication/information networks: Negopy, MultiNet. A number of research papers he wrote with Andrew Seary are available at http://www.sfu.ca/~richards/Pages/ResearchPapers.html. Besides developing analytic packages such as Negopy and MultiNet, he has presented papers and workshops about eigenanalysis methods at annual international conferences of INSNA and ICA. In 2001 Andrew Seary and Bill Richards analysed data for a study of Multiple Chemical Sensitivity involving over 2000 people, 100 symptoms, and 150 exposures, using MultiNet and a new analytic strategy, and obtained results showing a clear relationship between symptoms and exposures that had not been seen before. Subsequently, they were asked to extend the methods for use in an NIH-funded multi-national study of Colon Cancer. They attended the symposium on Social Network Analysis for National Security, hosted by the Committee on Human Factors of the National Academy of Sciences, in Washington, D.C., November 7-9, 2002 (http://www.nap.edu/books/0309089522/html/209.html). In August 2000, Richards organized a Symposium on Networks, Needles, Drugs, Risk, and Infectious Disease that brought 24 network experts and public health practitioners together for three days to examine the social networks of drug users and determine which research strategies would be likely to lead to useful advances. A follow-up meeting will take place in April 2006, before the annual INSNA Sunbelt Social Network conference.

Garry Robins is a mathematical psychologist whose research deals with quantitative models for social and relational systems. Methodologically, he focuses on statistical models for social networks, and in particular exponential random graph models (p* models). These models assume that a large-scale network emerges from combinations of local patterns of interaction among small overlapping subsets of people. Such patterns can often be interpreted as the result of a localized social process, a set of behaviours within each subset of individuals.
As a result, these models provide ways to examine large-scale network structures (the macro or global level) as the ramification of overlapping and intersecting localized behavioural patterns (the micro or local level). He has generalised these models to include actor attributes, leading to social influence models and social selection models. More recently, together with his colleagues, Robins has developed new versions of these models, including social settings and new higher-order network statistics (Snijders, Pattison, Robins & Handcock, 2005). These new specifications show dramatically better performance, including in model estimation and fit. They have developed new programs for Monte Carlo maximum likelihood estimation to replace previous approximate estimation techniques. Current work includes: elaborations for directed networks, for multiple networks, and for bipartite graphs; and further development of estimation software. Robins collaborates in many empirical social network research projects, including: the structure of sexual networks and HIV transmission; the social epidemiology of mental health in rural areas; health policy and local government networks; environmental governance arrangements; intra-organizational networks; defence-related issues; labor market dynamics; communication networks and stereotype formation; social structures and cultures in sporting teams; political processes; and interlocking directorates.

David Sallach is Associate Director of the Center for Complex Adaptive Agent Systems Simulation (CAS2) at the Argonne National Laboratory. He is the current President of the North American Association of Computational Social and Organizational Sciences (NAACSOS), and Program Co-Chair of the World Congress on Social Simulation to be held in Kyoto, Japan in 2006. Sallach received his doctorate in sociology from the University of Nebraska, and taught sociology at Indiana University, Bloomington, and Washington University in Saint Louis. He served for five years as the Director of Social Science Research Computing at the University of Chicago, where he commissioned and designed the architecture of the Repast agent simulation toolkit. Sallach's research interests are concentrated on the design of interpretive agent models of social processes.

Ramon Sangüesa is a professor at UPC, the Technical University of Catalonia, Barcelona, Spain, as well as a member of the i2CAT Foundation for Advanced Internet in Catalonia, where he coordinates the development of grid computing infrastructures and applications geared towards collaborative environments. His background comes from Artificial Intelligence, with research on Intelligent Agents, Artificial Societies, and Machine Learning. His current interests are focused on the emergence of collaborative social and knowledge networks and the integration of different types of technologies for collaboration (from the "low level" upwards: grid, agents, social and knowledge networks, Web 2.0 applications and interfaces). His current research on social and knowledge networks is centered around the mechanisms of reciprocity, trust, and reputation in the interchange of knowledge between agents in a society. Recent work explores the relationship between these characteristics and the emergence of several types of networks. Some of the results of this research have been applied to the e-government project e-catalunya by the Catalan Autonomous Government.

Andrew Seary is a research associate and consultant at Simon Fraser University.
His main research concerns the application of methods of mathematical physics to large networks, with emphasis on analysis and visualization based on spectral methods. He has a B.Sc. in mathematical physics from McGill University and a Ph.D. from Simon Fraser University, where he developed the MultiNet network analysis program. Recent publications include the MultiNet software (richards@sfu.ca) and documentation, "Spectral methods for analyzing and visualizing networks: an introduction" with W. D. Richards, and "Networks of Symptoms and Exposures" with W. D. Richards, G. McKeown-Eyssen & C. Baines. The last paper is an example of the application of social network methods to problems in other fields. Seary, in collaboration with Richards and McKeown-Eyssen, has also been applying these methods to problems in cancer epidemiology.

Munindar P. Singh, PhD, is a professor in the department of computer science at North Carolina State University. Dr. Singh's research interests include multiagent systems and service-oriented computing, wherein he addresses the challenges of trust, service selection, and business processes and protocols in large-scale open environments. Munindar has studied adaptive social networks from a computational perspective. At one level, he treats social networks as a basis for knowledge management in groups and organizations. At a deeper level, he treats social networks as a fundamental programming abstraction to support decentralized service selection and trust estimation. In the first, improved infrastructure would enable better mining, management, and exploitation of social networks. In the second, social networks underlie trustworthy use of infrastructure by supporting the dynamic configuration of effective virtual organizations. Munindar is widely published and has over 150 articles to his name. Munindar's books include Multiagent Systems (published by Springer-Verlag in 1994) and the coauthored text Service-Oriented Computing: Semantics, Processes, Agents (published by Wiley in 2005). Munindar was the editor-in-chief of IEEE Internet Computing from 1999 to 2002 and continues to serve on its editorial board. He is also a member of the editorial boards of the Journal of Autonomous Agents and Multiagent Systems and the Journal of Web Semantics. He was general co-chair of the 2005 edition of the International Joint Conference on Autonomous Agents and Multiagent Systems. Munindar's research has been recognized with awards and sponsorship by the National Science Foundation, DARPA, IBM, Cisco Systems, and Ericsson.

Christian Steglich works as a postdoc researcher at the ICS research school and the Faculty of Behavioural and Social Sciences of the University of Groningen. His current research project concerns the stochastic modelling of the co-evolution of social networks and behaviour. His research interests cover theory formation about, and formal modelling of, (i) individual behaviour and attitudes, (ii) social networks, (iii) social dilemmata, (iv) normative behaviour, and (v) other emergent properties of agent interaction, both for data-analytical purposes and for empirically informed simulation studies, and ideally in a dynamic, stochastically challenging context. Other current activities are: (i) together with Andreas Flache, he hosts the MEMOS research seminar ("Methods and Models in the Social Sciences") at the sociology department of his university; (ii) together with Tom Snijders, he is preparing a workshop on the profitable use of the SIENA software.
The workshop will take place January 16-20, 2006, in Groningen. (iii) At next year's XVI ISA World Congress of Sociology in Durban, he will host the social network session of RC33 (the Research Committee on Logic and Methodology).

Kirby Vandivort, a Senior Research Programmer with the Theoretical and Computational Biophysics group at the University of Illinois at Urbana-Champaign's Beckman Institute for Advanced Science and Technology, has been a lead developer on the BioCoRE project since 1999. Since the inception of the project he has been directly involved in every aspect of its development, including programming, evaluation, and dissemination. Programming duties include creating server infrastructure, producing scripts and scripting interfaces, and structuring dynamic web pages. Prior to working on the BioCoRE project, Kirby taught programming classes as a Teaching Fellow at the University of Missouri-Rolla. Kirby holds an M.S. in Computer Science and a B.S. in Nuclear Engineering.

Jing (Annie) Wang is a graduate student in the Department of Speech Communication at the University of Illinois at Urbana-Champaign. She has a background in English Literature and Linguistics (B.S.) and Linguistics (M.A.). She is currently a Ph.D. student focusing on Organizational Communication, Social Network Analysis, Multi-agent System Modeling, and Complex Systems. She is also a member of the Team Engineering Collaboratory (TECLab) and is currently participating in the National Science Foundation-funded project IT-Based Collaboration Framework for Preparing against, Responding to, and Recovering from Disasters Involving Critical Physical Infrastructures.

Stan Wasserman, an applied statistician, joined the Departments of Sociology and Psychology at Indiana University in Fall 2004. Formally, he is Rudy Professor of Sociology, Psychology, and Statistics. He also has an appointment in the Karl F. Schuessler Institute for Social Research. Prior to moving to Indiana, he held faculty positions at Carnegie-Mellon University, the University of Minnesota, and the University of Illinois, in the disciplines of Statistics, Psychology, and Sociology; in addition, at Illinois, he was a part-time faculty member in the Beckman Institute for Advanced Science and Technology, and he has had visiting appointments at Columbia University and the University of Melbourne. Wasserman is best known for his work on statistical models for social networks and for his text, co-authored with Katherine Faust, Social Network Analysis: Methods and Applications. His other books have been published by Sage Publications and Cambridge University Press. He has published widely in sociology, psychology, and statistics journals, and has been elected to a variety of leadership positions in the Classification Society of North America and the American Statistical Association. He teaches courses on applied statistics and sociological and psychological methods. He is a fellow of the Royal Statistical Society, and an honorary fellow of the American Statistical Association and the American Association for the Advancement of Science. He has been an Associate Editor of a variety of statistics and methodological journals (Psychometrika, Journal of the American Statistical Association, Sociological Methodology, to name a few), as well as the Book Review Editor of Chance. His research has been supported over the years by NSF, ONR, and NIMH.
At present, his research is supported by NSF (he is co-PI on the IU Network WorkBench project, and PI for NS06, a workshop/conference on Network Science to be held in Bloomington in May 2006) and ONR (with Doug Steinley, University of Missouri). Wasserman is also currently Chief Scientist of Visible Path Corporation in New York City, a software firm engaged in developing social network analysis for corporate settings. He is an editor of Centrality.

Bob Wilhelmson serves as leader of NCSA's Cyberapplications and Communities Directorate and as the center's chief science officer. He is also a professor in the Department of Atmospheric Sciences at the University of Illinois at Urbana-Champaign, where his research group models severe weather. Currently he co-leads LEAD, a large National Science Foundation-funded research, integration, and deployment project that is focused on building and adapting advanced cyberinfrastructure to address important challenges in severe weather research, forecasting, and education. Wilhelmson came to UIUC to do graduate work in 1966. He received his M.S. in 1969 and his Ph.D. in 1972 from the Department of Computer Science. Wilhelmson acted as co-PI for the original unsolicited proposal to the NSF that funded the NSF Supercomputers program in 1986.

Mengxiao Zhu is a graduate student in the Department of Speech Communication at the University of Illinois at Urbana-Champaign. She holds degrees in Science and English (B.S.) and Computer Science (B.E. & M.E.). She is currently a Master's student focusing on Organizational Communication, Social Network Analysis, Multi-agent System Modeling, and Complex Systems. She is also a member of the Team Engineering Collaboratory (TECLab) and is currently participating in the National Science Foundation-funded project IT-Based Collaboration Framework for Preparing against, Responding to, and Recovering from Disasters Involving Critical Physical Infrastructures.

National Center for Supercomputing Applications. ©2006 Board of Trustees of the University of Illinois. Last updated January 24, 2006
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,194
From ocean to orbit and everywhere in between, Harris provides mission-critical solutions to connect, inform and protect the world. Harris is a proven leader in tactical communications, electronic warfare, avionics, air traffic management, space and intelligence, and weather solutions.

See individual product pages for all conditions and system requirements. Prices may vary. Labels shown are for illustrative purposes only. Actual output (such as fonts and margins) may vary.

GrillGrates(TM) amplify heat, prevent flareups, make flipping foods easier, keep small foods from committing suicide, kill hotspots, are easier to clean, flip over to make a fine griddle, and can be easily removed and moved from one grill to another.

Hydro Tek industrial water vacuums put you in control of where the waste water from your pressure washing equipment goes. This is especially important when cleaning up oil or working with cleaning chemicals that could harm the environment.

Yuesen Med is a professional veterinary equipment supplier among veterinary supply companies, and we are committed to wholesaling all kinds of vet equipment with high quality and the best prices, like Vet X-ray machines, Vet Ultrasound machines, Pet Cages, Veterinary Anesthesia machines and other vet equipment, especially for vet hospitals and vet clinics.

Portable Voice Recorders: portable devices for voice or telephone recording. These portable recording devices are great for recording your conference or meeting notes, and for recording your telephone conversations as well.

From business presentations to at-home theaters, projector screens can serve multiple purposes. With an expansive array of sizes to turn any space into a projection area, setting up a workplace presentation site or a backyard theater with an outdoor projector screen is easy.

NOTE: The personal email address associated with your account is the email address you entered in the application for admission to WGU, unless you have changed it in your profile on the student portal or asked a WGU faculty or staff member to change it for you.

Whether it's for a business trip or a family vacation, a light, compact, and capable portable projector makes a good travel companion. These are the top-rated models we've tested under two pounds.

At AvionTEq our goal is to be your most trusted adviser and to meet or exceed your expectations at all times. We sell, buy, lease, rent and accept trade-ins of new and used/refurbished aircraft avionics and instrument test equipment, aircraft maintenance tools and other testing systems, panels, equipment and accessories.

Shenzhen Jiapeng Huaxiang Technology Co., Ltd was established in 2004. It is a professional manufacturer and exporter concerned with the design, development and production of USB LAN Cards, USB Sound Cards, Wireless APs and Wireless Mice.
{ "redpajama_set_name": "RedPajamaC4" }
3,795
Is there any way to change the name of the Nintex Configuration Database during an install of Nintex Workflow 2010? We have a scenario where we are mirroring databases from Prod to a Co-op environment. I would like to install Nintex in the Co-op environment, but am running into an issue where Nintex is picking up the mirrored databases from Prod and is trying to use them. I assume it's because the database it sees is named the same.

The Nintex configuration database is stored as a farm property (in the SP_Config DB, to be exact). If you are mirroring all of the SP databases over to the Co-op site, you will want to add a failover partner to the config database so the connection string will reflect which server has the active copy. If you have a completely separate farm in the Co-op site, I would suggest either creating a separate Nintex configuration DB (if you are only mirroring the SP content databases and will bring them online in a different farm), or, when you have the need to fail over, manually updating the farm property (i.e., the connection string) at the time you switch over. For more information on modifying the config database connection, please see our DR document here: Disaster Recovery Planning Guide: Nintex Workflow (Page 5, step 2). If you are simply looking for a way to create a new Nintex Config DB with a different name, follow the standard install instructions for Nintex Workflow and, when you reach the step to create the first DB, simply give it another name. By doing this, the farm will always connect to the DB created with the different name and will be separate from the other farm. To add the other mirrored databases to the Co-op farm, you will need to fail over all of the databases to the SQL server in your Co-op site and set up the mappings in Central Admin > Nintex Workflow Management > Database Setup.
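For illustration only, a SQL Server connection string that carries a database-mirroring failover partner usually takes the form below. The server and database names here are placeholders, not values taken from this farm; the Failover Partner keyword is standard SQL Server/ADO.NET connection-string syntax.

    Data Source=SQL-PROD01;Failover Partner=SQL-COOP01;Initial Catalog=NW2010ConfigDB;Integrated Security=True

With a string like this in place, the client automatically retries against the partner server when the principal is unavailable, which is what lets the farm follow the active copy of the config database after a mirroring failover.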
{ "redpajama_set_name": "RedPajamaC4" }
346
Galaxy S6 Edge: Is its distinctive style worth the premium price? What will customers pay for style? That's the question faced by the marketing teams behind the Apple Watch, whose 18k gold models are priced to start at $10,000. The same issue confronts marketing teams for the new Samsung Galaxy S6 Edge, with its unusual front glass display that wraps partly around both side edges. Pricing in the U.S. for the Edge and its more conventional Galaxy S6 cousin is due to be announced Thursday, with shipments expected as early as April 10. Based on prices already revealed in several countries, the unlocked version of the Edge could price out at 11% to 30% more than the Galaxy S6 with equivalent storage. In nearly every way, the Edge and the Galaxy S6 are the same device, except for the Edge's unusual curved edges made of Gorilla 4-strengthened glass. Both phones have 5.1-in. displays and 64-bit processors, and both support Samsung Pay with magnetic and NFC payment capabilities and embedded wireless charging. Even with such similarities, buyers in Turkey will pay 11% more for the Edge, and in the UK they will pay up to 30% more under one UK carrier's subscription plan, according to prices reported by Tech Times and others. Samsung published pricing for an unlocked Galaxy S6 with 32 GB at €699 (about US$767) in Spain on March 4, then more recently added 32 GB unlocked Edge pricing of €849 (around US$931). That's a 21% premium. Value is in the eye of the beholder. A large part of the value of the Edge's styling (as well as for the top-line Apple Watch) will depend on what buyers think other people will think of them as they go about their lives using their new gadgets. Marketing teams know this, but they still spend enormous amounts of time studying how to put a price on such a perceived value. It's the essence of good advertising. If the images in marketing videos and ads create the proper aura, that will help. Apple is known for its marketing prowess, and Samsung has benefited, some, from Apple's lead. So far, Apple's in-store marketing of the Apple Watch appears far more involved than any smartphone or smartwatch marketing campaign by any manufacturer. But marketing alone won't justify a higher price for fashion: much of the premium price for the Edge will have to come from something almost intangible and undefinable. "Fashion still plays a big role in purchasing a smartphone," said Carolina Milanesi, chief of research at Kantar WorldPanel ComTech. "Having something that looks distinct and different from what you owned before matters a lot to consumers." Milanesi said she expects the Edge, unlocked, will cost $50 to $100 more than an equivalent-sized Galaxy S6 in the U.S. That would be well below the $164 price increase for the 32 GB Edge in Spain over the 32 GB Galaxy S6. There is a higher cost in making the Edge's curved screen than the screen on the Galaxy S6, she noted. "It's not to be expected that Samsung would just absorb that increased cost, as that would then put in question the price of the Galaxy S6," she said. Jack Gold, an analyst at J.
Gold Associates, said in some ways the actual cost of the Edge is irrelevant, although he expects the Edge to cost 15% to 25% more than the Galaxy S6. "Edge is meant to be a halo device, to show that Samsung can produce compelling devices as well as anyone," Gold said. "To that end, it appeals to those who must have the best, and the actual cost of the device is less relevant than the mainline device geared toward the masses like the Galaxy S6. That's not to say that Samsung doesn't want to sell a lot of the Edge devices, but Samsung could sell relatively fewer than the S6 and have the device be successful." However much more the Edge is priced, analyst Patrick Moorhead of Moor Insights & Strategy said Samsung needs to be thinking of pricing its Galaxy S6 at 25% below the price of the iPhone 6 to have a chance to sell well. "That doesn't guarantee great sales, it just enables the possibility," he said. "Samsung needs to invest in marketing their differentiators in a clear way that means something to consumers." Based on unlocked pricing in Spain with current exchange rates, the 64 GB unlocked Galaxy S6 will cost about $877 (€799), which doesn't come close to the discount Moorhead has in mind. The iPhone 6 with 64 GB is $749 unlocked (from an earlier $849), according to Apple's Web site.
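As a quick check on the percentages quoted above, the premiums can be recomputed directly from the published prices. A throwaway Python snippet (the prices are the article's; the helper name is ours):

    def premium(base_price, edge_price):
        # Percent premium of the Edge over the comparable Galaxy S6.
        return 100.0 * (edge_price - base_price) / base_price

    print(premium(699, 849))   # Spain, 32 GB unlocked: ~21.5%, the quoted "21% premium"
    print(premium(767, 931))   # the same pair in US dollars: ~21.4%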
{ "redpajama_set_name": "RedPajamaC4" }
9,344
Messier 14 (also M14 or NGC 6402) is a globular cluster in the constellation Ophiuchus, discovered in 1764 by Charles Messier. M14 lies at a distance of about 30,000 light-years (9.3 kiloparsecs) from Earth and 13,000 light-years (4 kiloparsecs) from the center of the Galaxy. Its diameter on the sky is 11 arcminutes, which corresponds to a true diameter of about 100 light-years. The apparent magnitude of M14 is 7.6, while its absolute magnitude reaches −9.1. M14 contains several hundred thousand stars, the brightest of which reach magnitude 14. In 1938 a nova erupted in M14 and was recorded on photographs; however, the outburst was only discovered in 1964, when the photographs taken 30 years earlier were examined in detail. The brightness of the nova was estimated at magnitude 9.2, about 100 times brighter than the brightest stars in the cluster. So far, more than 70 variable stars of various types have been discovered in M14. About 3° to the southwest of M14 lies another globular cluster, NGC 6366.

See also
List of globular clusters of the Milky Way
List of NGC objects
Messier Catalogue

References

Bibliography
Messier 14 at SEDS.org

External links
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,170
Estonia national football team may refer to:
Estonia men's national football team
Estonia women's national football team
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,014
The Cananea strike, which began on 1 June 1906, was one of the most significant work stoppages in the history of Mexico. Many regard the action of the miners of Cananea, a town in the state of Sonora, as a precursor of the Mexican Revolution. The strikers not only laid down their tools but also held marches and clashed with armed Americans and with the local authorities.

History

In 1891 Cananea had barely more than 100 inhabitants, but after the Cananea Consolidated Copper Company, founded by William C. Greene in 1899, opened a copper mine in the town, the population grew explosively, reaching 22,000 in the first years of the 20th century. About 7,600 of the inhabitants were miners, of whom 5,400 were Mexicans and 2,200 foreigners, mostly from the USA. The working conditions of the Mexican miners were extremely poor: they worked long shifts for low wages, under unhealthy conditions, and they also suffered discrimination compared with the Americans, who generally received higher pay and held the higher positions. The spread of the growing discontent was helped along by the propaganda of the recently founded liberal party, the Partido Liberal Mexicano. In January 1906 Esteban Baca Calderón, Manuel M. Diéguez and Francisco M. Ibarra founded the organization Unión Liberal Humanidad (Liberal Humanity Union), and Lázaro Gutiérrez de Lara founded the liberal club of Cananea. Miners made up a significant share of the membership of both groups. Both organizations used the celebrations of 5 May 1906 to hold demonstrations protesting against the working conditions. In their speeches Gutiérrez and Baca stressed that they would show the "capitalists" that they were not "beasts of burden", that they were passed over in everything in favor of the "blond, blue-eyed" people, and that the right to rule and to make laws belonged exclusively to the people. In response to the demonstrations the authorities imposed martial law. At the end of May petitions were submitted to Greene demanding the dismissal of one of the foremen, a minimum daily wage of 5 pesos and an eight-hour workday, as well as that Mexicans make up at least 75% of the workforce in every position, that the supervisors be "decent" people, and that everyone have the same right to advancement, with promotion depending solely on ability.

From the first hours of 1 June the workers were on strike. The authorities called on them to open negotiations with the company's representatives, so a negotiating committee led by Baca Calderón and Diéguez was appointed, but it managed only to get drawn into a fruitless debate with the other side. After the workers were told what had happened at the talks, Gutiérrez de Lara and Enrique Bermúdez were chosen to lead the movement. Around 3 p.m. a march of some 3,000 workers set off through the town. As they passed the lumber yard, its managers, William and George Metcalf, got into a confrontation with some of the marchers, and the conflict soon escalated to the point where the Americans fired into the crowd, who answered with a hail of stones. More armed American groups soon arrived, while the Mexicans attacked a pawnshop to get weapons and ammunition of their own. In all, 10 people died, 19 were wounded, and several buildings caught fire. The demonstrators finally withdrew to the outskirts of the town.
While the clash was under way, Greene sent telegrams asking the nearby Arizona settlements for help, and the municipal president of Cananea asked for help from the governor of the state of Sonora, Rafael Izábal. The governor set out for the scene at once, ordering the soldiers and gendarmes stationed in the surrounding settlements to march on Cananea as well. The shortest route from the state capital, Hermosillo, to Cananea, which the governor chose, touched Naco, which lies on the territory of the United States; he arrived there on the morning of 2 June. There the Americans offered to cooperate with him and help put down the revolt. Izábal accepted the offer and, escorted by 275 volunteer rangers, crossed the border, returned to Mexican soil and reached Cananea at around half past ten in the morning. Izábal called on the strikers to resume work, but that night they gathered again and decided to continue the demonstrations. They therefore held another march, but American soldiers and the local police fired on them from the rooftops, so the procession broke up and the people fled. During the night Izábal received a telegram from government secretary Ramón Corral informing him that foreign military forces were forbidden to remain on Mexican territory, so he sent the rangers back to Naco. The next day Luis E. Torres, the commander of the local military zone, took control of the town and set off in pursuit of the strikers hiding in the surrounding hills. On 5 June Gutiérrez de Lara, Diéguez, Baca Calderón and Ibarra were arrested and sentenced to 15 years in prison. They were first held in Hermosillo, and in 1909 the prisoners were transferred to the fortress of San Juan de Ulúa in Veracruz. In addition, close to 50 other workers were arrested and imprisoned in Hermosillo and Cananea. According to some sources, the armed clashes and the reprisals together claimed the lives of about one hundred people. Order was soon restored in the town.

See also
Río Blanco strike

Sources
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,763
{"url":"https:\/\/www.key2chem.org\/change-interparticle-interaction","text":"KEY2CHEM\n\nChange and Interparticle Interactions\n\nA physical change is one that changes only intermolecular interactions (not breaking chemical bonds). A chemical change results in a change in composition, which means breaking and forming new chemical bonds (intramolecular interactions). Some changes (such as dissolving a salt in water) can be classified as both since the strength of the interactions being broken is similar to the strong chemical bond strength.\n\nExample 1.\n\nWhich is a physical change?\n\nA.\u00a0$$\\require{mhchem}\\ce{H2O(g) -> H2O(l)}$$\n\nB.\u00a0$$\\require{mhchem}\\ce{2 H2O(g) -> 2 H2(g) + O2(g)}$$\n\nC.\u00a0$$\\require{mhchem}\\ce{H2(g) -> 2 H(g)}$$\n\nSolution\n\nA.\u00a0$$\\require{mhchem}\\ce{H2O(g) -> H2O(l)}$$\n\nA physical change breaks interparticle forces but a chemical change breaks chemical bonds to form new substances.\n\nExample 2.\n\nWhich is a chemical change?\n\nA.\u00a0$$\\require{mhchem}\\ce{O2(g) -> O2(aq)}$$\n\nB.\u00a0$$\\require{mhchem}\\ce{C(s) + O2(g) -> CO2(g)}$$\n\nC.\u00a0$$\\require{mhchem}\\ce{CO2(s) -> CO2(g)}$$\n\nSolution\n\nB.\u00a0$$\\require{mhchem}\\ce{C(s) + O2(g) -> CO2(g)}$$\n\nA chemical change breaks chemical bonds and results in formation of a new substance.\n\nExample 3.\n\nPhysical changes occur when _________ interactions are disrupted.\n\nA. intermolecular\n\nB. bonding\n\nC. intramolecular\n\nSolution\n\nA. intermolecular\n\nA physical change does not change the composition of the substance, meaning no chemical (intramolecular) bonds are broken.","date":"2020-08-05 19:42:21","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5585928559303284, \"perplexity\": 2265.494068662373}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-34\/segments\/1596439735964.82\/warc\/CC-MAIN-20200805183003-20200805213003-00020.warc.gz\"}"}
null
null
World heavyweight champion from 1880 to 1882, at a time in history when fights were held under the London Prize Ring Rules.

Biography

Paddy Ryan was born in Ireland, but his family moved to the United States of America during the great economic crisis that struck his native country. In his first years in New York he was called "the giant" because of his size. In 1874 he opened a bar, and in that same year he caught the attention of an athletic director at the Rensselaer Polytechnic Institute, Jim Killoran, who started Ryan on his boxing career. In 1877 he fought his first boxing match under the century-old rules of Jack Broughton, which were accepted by all. On 30 March 1880, at Collier's Station in West Virginia, Paddy faced Joe Goss, a boxer who at the time was recognized as the world boxing champion. The fight lasted 87 rounds and about 90 minutes; in the end Goss gave up and Paddy Ryan was acclaimed champion. In 1882 he was challenged by John Lawrence Sullivan for the title of world champion. This fight too was held under the London Prize Ring Rules. The match was originally to take place in New Orleans, but because of bans imposed by the police the two boxers, followed by a great many spectators, moved to Mississippi City, where they fought on 7 February 1882 in front of the Barnes Hotel. Paddy had two seconds: John Roche (New York) and Tom Kelly (St. Louis). Joe Goss, together with Billy Madden, acted as Sullivan's seconds. As always, the referees were chosen from among the spectators, as the rules required. The fight began at 11:57, when Paddy Ryan climbed into the ring; in the ninth round a violent punch from Sullivan struck Paddy's face below the left ear. The Irish boxer fell to the ground bleeding and was declared beaten after a count of 30 seconds. Ryan fought Sullivan several more times, always with Sullivan winning. He died on 14 December 1900 in New York and was buried in St. Mary's Cemetery. He was inducted into the boxing Hall of Fame in 1993. Ryan was part of the group, together with the actor Henry E. Dixey and the wrestler William Muldoon, that accompanied Robert Emmet Odlum (1851-1885) when he jumped from the Brooklyn Bridge on 19 May 1885. Odlum was the first man to jump from the bridge, but the attempt ended tragically with the diver's death.

Other projects

External links
{ "redpajama_set_name": "RedPajamaWikipedia" }
5,081
Q: Finding a transformation matrix, given two sets of plane parameters and points on each plane

Problem: I have a set of planes in 3D. I represent them using the following information:

*plane equation: Ax + By + Cz + D = 0 (I have the coefficients in a vector)

*points from which the plane parameters A, B, C, D were found (using least squares)

An array of structures, named frame1[], is used to store this information. frame1[1] represents the first plane. It has the following members: frame1[1].plane_param (a 1 x 4 vector) and frame1[1].points (a 3 x N matrix for N points). Now I have another set of planes in an array of structures, named frame2[]. This set will contain at least 5 planes that are present in frame1[]. Array-index-wise, they will not have one-to-one correspondence. In other words, frame1[1] and frame2[1] may represent different planes.

What I need: A method by which I will be able to get a single homogeneous transformation [4x4] that would transform all the planes in frame2[] to frame1[]. I do understand that the answer will have errors. In other words, a perfect transformation matrix may not exist; in that case, a transformation matrix that best suits the data (in a least-squares sense) is enough.

Constraints: The rotation angles are very small. If the exact angles are not possible to deduce, an approximation like cos(alpha) = 1 and sin(alpha) = alpha can be used in the transformation matrix.

Language: MATLAB
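One standard way to attack this (not from the original post) is to first match planes between the two frames, then estimate the rotation from the paired unit normals with an orthogonal Procrustes/Kabsch step, and finally recover the translation from the plane offsets by linear least squares. Below is a minimal NumPy sketch of that idea; the function name, and the assumption that correspondences have already been established (e.g., by matching nearly parallel normals, which the small-rotation constraint makes easy), are ours. Translating it to MATLAB is direct.

    import numpy as np

    def rigid_transform_from_planes(params1, params2):
        # params1, params2: (K, 4) arrays of [A, B, C, D]; row i of params2
        # is assumed to describe the same physical plane as row i of
        # params1, with consistently oriented normals.
        n1, d1 = params1[:, :3], params1[:, 3]
        n2, d2 = params2[:, :3], params2[:, 3]
        s1 = np.linalg.norm(n1, axis=1)              # rescale to unit normals
        s2 = np.linalg.norm(n2, axis=1)
        n1, d1 = n1 / s1[:, None], d1 / s1
        n2, d2 = n2 / s2[:, None], d2 / s2
        # Rotation: R = argmin sum ||R n2_i - n1_i||^2 (Kabsch / SVD).
        U, _, Vt = np.linalg.svd(n2.T @ n1)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                     # guard against a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        # Under x1 = R x2 + t, a plane n.x + d = 0 in frame2 maps to
        # (R n).x + (d - (R n).t) = 0 in frame1, so (R n2_i).t = d2_i - d1_i.
        A = n2 @ R.T                                 # rows are R n2_i
        t, *_ = np.linalg.lstsq(A, d2 - d1, rcond=None)
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T                                     # maps frame2 points into frame1

With at least three mutually non-parallel planes the translation solve is well posed, and the post guarantees five; the small-rotation constraint is only needed to make the plane matching unambiguous, not for the SVD step itself.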
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,015
package cli

import (
	"strings"

	"github.com/go-errors/errors"
	"github.com/spf13/cobra"

	"github.com/adrienkohlbecker/ejson-kms/model"
	"github.com/adrienkohlbecker/ejson-kms/utils"
)

const docRotate = `
rotate: Rotate a secret from a secrets file.

This will decrypt the given secret, check that the values are indeed
different, and store the new encrypted value.

It will ask you to type the secret at runtime, to avoid saving it to
your shell history.

If you need to pass in the contents of a file (such as TLS keys), you
can pipe its contents to stdin. Please be mindful of your bash history
when piping in strings.
`

const exampleRotate = `
ejson-kms rotate password
cat tls-cert.key | ejson-kms rotate tls_key
`

// rotateCmd builds the cobra command for "rotate NAME".
func rotateCmd() *cobra.Command {

	cmd := &cobra.Command{
		Use:     "rotate NAME",
		Short:   "rotate a secret",
		Long:    strings.TrimSpace(docRotate),
		Example: strings.TrimSpace(exampleRotate),
	}

	var storePath = ".secrets.json"
	cmd.Flags().StringVar(&storePath, "path", storePath, "path of the secrets file")

	cmd.RunE = func(_ *cobra.Command, args []string) error {

		// Validate the inputs before touching the secrets file.
		err := utils.ValidSecretsPath(storePath)
		if err != nil {
			return errors.WrapPrefix(err, "Invalid path", 0)
		}

		name, err := utils.HasOneArgument(args)
		if err != nil {
			return errors.WrapPrefix(err, "Invalid name", 0)
		}

		err = utils.ValidName(name)
		if err != nil {
			return errors.WrapPrefix(err, "Invalid name", 0)
		}

		store, err := model.Load(storePath)
		if err != nil {
			return errors.WrapPrefix(err, "Unable to load JSON", 0)
		}

		// Rotation only makes sense for a secret that already exists.
		if !store.Contains(name) {
			return errors.Errorf("No secret with the given name has been found. Use the `add` command")
		}

		// Read the new value interactively (or from a pipe) so it never
		// ends up in the shell history.
		plaintext, err := utils.ReadPassword()
		if err != nil {
			return errors.WrapPrefix(err, "Unable to read from stdin", 0)
		}

		client, err := kmsDefaultClient()
		if err != nil {
			return errors.WrapPrefix(err, "Unable to initialize AWS client", 0)
		}

		// Re-encrypt the secret with the new plaintext and persist the store.
		err = store.Rotate(client, name, plaintext)
		if err != nil {
			return errors.WrapPrefix(err, "Unable to rotate secret", 0)
		}

		err = store.Save(storePath)
		if err != nil {
			return errors.WrapPrefix(err, "Unable to save JSON", 0)
		}

		cmd.Printf("Exported new secrets file at: %s\n", storePath)
		return nil
	}

	return cmd
}
{ "redpajama_set_name": "RedPajamaGithub" }
683