SAN FRANCISCO (KGO) -- In an interview you'll see only on ABC7 News, the man accused of shooting and killing a young woman at San Francisco's Pier 14 admitted to the crime and talked about where he got the gun. Jose Inez Garcia Zarate, who was also known as Juan Francisco López-Sánchez, said he could not recall every detail of what happened. But in the 45-minute jailhouse interview, he revealed how the shooting unfolded and why he kept returning to the United States after being deported numerous times. At that point, Sanchez said he did not have an attorney.
\section{Introduction} Variational principles for magnetohydrodynamics were introduced by previous authors in both Lagrangian and Eulerian form. Sturrock \cite{Sturrock} has discussed in his book a Lagrangian variational formalism for magnetohydrodynamics. Vladimirov and Moffatt \cite{Moffatt} in a series of papers have discussed an Eulerian variational principle for incompressible magnetohydrodynamics. However, their variational principle contained three more functions in addition to the seven variables which appear in the standard equations of magnetohydrodynamics, which are the magnetic field $\vec B$, the velocity field $\vec v$ and the density $\rho$. Kats \cite{Kats} has generalized Moffatt's work for compressible non-barotropic flows but without reducing the number of functions and the computational load. Moreover, Kats has shown that the variables he suggested can be utilized to describe the motion of arbitrary discontinuity surfaces \cite{Kats3,Kats4}. Sakurai \cite{Sakurai} has introduced a two-function Eulerian variational principle for force-free magnetohydrodynamics and used it as the basis of a numerical scheme; his method is discussed in a book by Sturrock \cite{Sturrock}. A method of solving the equations for those two variables was introduced by Yang, Sturrock \& Antiochos \cite{Yang}. In a recent work Yahalom \& Lynden-Bell \cite{YaLy,Yahalom2} have combined the Lagrangian of Sturrock \cite{Sturrock} with the Lagrangian of Sakurai \cite{Sakurai} to obtain an {\bf Eulerian} Lagrangian principle depending on only six functions. The vanishing of the variational derivatives of this Lagrangian entails all the equations needed to describe barotropic magnetohydrodynamics without any additional constraints. The equations obtained resemble the equations of Frenkel, Levich \& Stilman \cite{FLS} (see also \cite{Zakharov}).
Furthermore, it was shown that for stationary flows three functions suffice to describe a Lagrangian principle for barotropic magnetohydrodynamics. The non-single-valuedness of the functions appearing in the reduced representation of barotropic magnetohydrodynamics was discussed, in particular in connection with the topological invariants of magnetic and cross helicities. It was shown how the conservation of cross helicity can be easily obtained using Noether's theorem and the variables introduced in that paper. In the current paper I improve on the previous results and show that four functions are enough to describe general non-stationary barotropic magnetohydrodynamics; the idea is borrowed from \cite{YaLy2} (see also \cite{Yahalom3,Yahalom4,Yahalom5}). The plan of this paper is as follows: First I introduce the standard notations and equations of barotropic magnetohydrodynamics. Next I introduce the potential representation of the magnetic field $\vec B$ and the velocity field $\vec v$. This is followed by a review of the Eulerian variational principle developed by Yahalom \& Lynden-Bell \cite{YaLy,Yahalom2}. After those introductory sections I will present the four-function Eulerian variational principle for non-stationary magnetohydrodynamics. \section{The standard formulation of barotropic magnetohydrodynamics} The standard set of \eqs solved for barotropic magnetohydrodynamics is given below: \beq \frac{\partial{\vec B}}{\partial t} = \vec \nabla \times (\vec v \times \vec B), \label{Beq} \enq \beq \vec \nabla \cdot \vec B =0, \label{Bcon} \enq \beq \frac{\partial{\rho}}{\partial t} + \vec \nabla \cdot (\rho \vec v ) = 0, \label{masscon} \enq \beq \rho \frac{d \vec v}{d t}= \rho (\frac{\partial \vec v}{\partial t}+(\vec v \cdot \vec \nabla)\vec v) = -\vec \nabla p (\rho) + \frac{(\vec \nabla \times \vec B) \times \vec B}{4 \pi}. 
\label{Euler} \enq The following notations are utilized: $\frac{\partial}{\partial t}$ is the temporal derivative, $\frac{d}{d t}$ is the temporal material derivative and $\vec \nabla$ has its standard meaning in vector calculus. $\vec B$ is the magnetic field vector, $\vec v$ is the velocity field vector and $\rho$ is the fluid density. Finally, $p (\rho)$ is the pressure, which we assume depends on the density alone (the barotropic case). The justification for those \eqs and the conditions under which they apply can be found in standard books on magnetohydrodynamics (see for example \cite{Sturrock}). \Er{Beq} describes the fact that the magnetic field lines are moving with the fluid elements (``frozen'' magnetic field lines), \ern{Bcon} describes the fact that the magnetic field is solenoidal, \ern{masscon} describes the conservation of mass and \ern{Euler} is the Euler equation for a fluid in which both pressure and Lorentz magnetic forces apply. The term: \beq \vec J =\frac{\vec \nabla \times \vec B}{4 \pi}, \label{J} \enq is the electric current density, which is not connected to any mass flow. The number of independent variables for which one needs to solve is seven ($\vec v,\vec B,\rho$) and the number of \eqs (\ref{Beq},\ref{masscon},\ref{Euler}) is also seven. Notice that \ern{Bcon} is a condition on the initial $\vec B$ field and is satisfied automatically for any other time due to \ern{Beq}. Also notice that $p (\rho)$ is not a variable; rather, it is a given function of $\rho$. \section{Potential representation of vector quantities of magnetohydrodynamics} It was shown in \cite{YaLy} that $\vec B$ and $\vec v$ can be represented in terms of five scalar functions $\alpha,\beta,\chi,\eta,\nu$. Following Sakurai \cite{Sakurai} the magnetic field takes the form: \beq \vec B = \vec \nabla \chi \times \vec \nabla \eta. \label{Bsakurai} \enq Hence $\vec B$ satisfies automatically \er{Bcon} and is orthogonal to both $\vec \nabla \chi$ and $\vec \nabla \eta$. 
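As a side remark (not part of the original formalism), the solenoidal property of \ern{Bsakurai} is easy to confirm numerically. The following Python sketch, in which the test fields $\chi,\eta$ are arbitrary smooth functions chosen purely for illustration, builds $\vec B = \vec \nabla \chi \times \vec \nabla \eta$ from analytic gradients on a grid and checks that a finite-difference divergence of $\vec B$ vanishes up to truncation error:

```python
import numpy as np

# Illustrative check that B = grad(chi) x grad(eta) is automatically
# divergence-free. chi = sin(2 pi x) cos(2 pi y) and eta = z + 0.3 cos(2 pi x)
# are arbitrary smooth test fields, not taken from the paper.
n = 64
x = np.linspace(0.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
h = x[1] - x[0]
k = 2.0 * np.pi

# analytic gradients of chi and eta on the grid
grad_chi = np.stack([k * np.cos(k * X) * np.cos(k * Y),
                     -k * np.sin(k * X) * np.sin(k * Y),
                     np.zeros_like(X)], axis=-1)
grad_eta = np.stack([-0.3 * k * np.sin(k * X),
                     np.zeros_like(X),
                     np.ones_like(X)], axis=-1)

B = np.cross(grad_chi, grad_eta)  # eq. (Bsakurai)

# finite-difference divergence of B; analytically zero, numerically O(h^2)
divB = (np.gradient(B[..., 0], h, axis=0, edge_order=2)
        + np.gradient(B[..., 1], h, axis=1, edge_order=2)
        + np.gradient(B[..., 2], h, axis=2, edge_order=2))
rel_err = np.max(np.abs(divB)) / np.max(np.linalg.norm(B, axis=-1))
```

The same construction also exhibits the orthogonality of $\vec B$ to $\vec \nabla \chi$ and $\vec \nabla \eta$, which holds to machine precision since it is a pointwise property of the cross product.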
A similar representation was suggested by Dungey \cite{Dungey} but not in the context of variational analysis. The above expression can also describe a magnetic field with non-zero magnetic helicity as was demonstrated in \cite{YaLy}. Moreover, the velocity $\vec v$ can be represented in the following form: \beq \vec v = \vec \nabla \nu + \alpha \vec \nabla \chi + \beta \vec \nabla \eta. \label{vform} \enq This representation is a generalization of the Clebsch representation \cite{Lamb H.} for magnetohydrodynamics. \section{The Action of Barotropic Magnetohydrodynamics} It was shown in \cite{YaLy} that the action of barotropic magnetohydrodynamics takes the form: \ber A & \equiv & \int {\cal L} d^3 x dt, \nonumber \\ {\cal L} &\equiv & -\rho \left[\frac{\partial{\nu}}{\partial t} + \alpha \frac{\partial{\chi}}{\partial t} + \beta \frac{\partial{\eta}}{\partial t}+\varepsilon (\rho)+ \frac{1}{2} (\vec \nabla \nu + \alpha \vec \nabla \chi + \beta \vec \nabla \eta)^2 \right] \nonumber \\ &-&\frac{1}{8 \pi}(\vec \nabla \chi \times \vec \nabla \eta)^2, \label{Lagactionsimp} \enr in which $\varepsilon (\rho)$ is the specific internal energy. Taking the variational derivatives to zero for arbitrary variations leads to the following set of equations: \beq \frac{\partial{\rho}}{\partial t} + \vec \nabla \cdot (\rho \vec v ) = 0, \label{masscon2} \enq \beq \frac{d \chi}{dt} = 0, \label{chieq} \enq \beq \frac{d \eta}{dt} = 0, \label{etaeq} \enq \beq \frac{d \nu}{d t} = \frac{1}{2} \vec v^2 - w, \label{nueq} \enq in which $w$ is the specific enthalpy, \beq \frac{d \alpha}{dt} = \frac{\vec \nabla \eta \cdot \vec J}{\rho}, \qquad \label{aleq} \enq \beq \frac{d \beta}{dt} = -\frac{\vec \nabla \chi \cdot \vec J}{\rho}. \label{betaeq} \enq In all the above equations $\vec B$ is given by \er{Bsakurai} and $\vec v$ is given by \er{vform}. The mass conservation \ern{masscon} is readily obtained. Now one needs to show that also \er{Beq} and \er{Euler} are satisfied. 
It can be easily shown that, provided that $\vec B$ is in the form given in \ern{Bsakurai}, and \ern{chieq} and \ern{etaeq} are satisfied, then \ern{Beq} is satisfied. We shall now show that a velocity field given by \ern{vform}, in which the functions $\alpha, \beta, \chi, \eta, \nu$ satisfy the corresponding equations (\ref{masscon2},\ref{chieq},\ref{etaeq},\ref{nueq},\ref{aleq},\ref{betaeq}), must satisfy Euler's equations. Let us calculate the material derivative of $\vec v$: \beq \frac{d\vec v}{dt} = \frac{d\vec \nabla \nu}{dt} + \frac{d\alpha}{dt} \vec \nabla \chi + \alpha \frac{d\vec \nabla \chi}{dt} + \frac{d\beta}{dt} \vec \nabla \eta + \beta \frac{d\vec \nabla \eta}{dt}. \label{dvform} \enq It can be easily shown that: \ber \frac{d\vec \nabla \nu}{dt} & = & \vec \nabla \frac{d \nu}{dt}- \vec \nabla v_k \frac{\partial \nu}{\partial x_k} = \vec \nabla (\frac{1}{2} \vec v^2 - w)- \vec \nabla v_k \frac{\partial \nu}{\partial x_k}, \nonumber \\ \frac{d\vec \nabla \eta}{dt} & = & \vec \nabla \frac{d \eta}{dt}- \vec \nabla v_k \frac{\partial \eta}{\partial x_k} = - \vec \nabla v_k \frac{\partial \eta}{\partial x_k}, \nonumber \\ \frac{d\vec \nabla \chi}{dt} & = & \vec \nabla \frac{d \chi}{dt}- \vec \nabla v_k \frac{\partial \chi}{\partial x_k} = - \vec \nabla v_k \frac{\partial \chi}{\partial x_k}. \label{dnabla} \enr In which $x_k$ is a Cartesian coordinate and a summation convention is assumed. Equations (\ref{chieq},\ref{etaeq},\ref{nueq}) were used in the above derivation. 
Inserting the result from equations (\ref{aleq},\ref{betaeq},\ref{dnabla}) into \ern{dvform} yields: \ber \frac{d\vec v}{dt} &=& - \vec \nabla v_k (\frac{\partial \nu}{\partial x_k} + \alpha \frac{\partial \chi}{\partial x_k} + \beta \frac{\partial \eta}{\partial x_k}) + \vec \nabla (\frac{1}{2} \vec v^2 - w) \nonumber \\ &+& \frac{1}{\rho} ((\vec \nabla \eta \cdot \vec J)\vec \nabla \chi - (\vec \nabla \chi \cdot \vec J)\vec \nabla \eta) \nonumber \\ &=& - \vec \nabla v_k v_k + \vec \nabla (\frac{1}{2} \vec v^2 - w) + \frac{1}{\rho} \vec J \times (\vec \nabla \chi \times \vec \nabla \eta) \nonumber \\ &=& - \frac{\vec \nabla p}{\rho} + \frac{1}{\rho} \vec J \times \vec B. \label{dvform2} \enr In which we have used both \ern{Bsakurai} and \ern{vform} in the above derivation. This of course proves that the barotropic Euler equations can be derived from the action given in \er{Lagactionsimp} and hence all the equations of barotropic magnetohydrodynamics can be derived from the above action without restricting the variations in any way except on the relevant boundaries and cuts. The reader should take into account that the topology of the magnetohydrodynamic flow is conserved, hence cuts must be introduced into the calculation as initial conditions. \section{A Simpler Action for Barotropic Magnetohydrodynamics} Can we obtain a further reduction of barotropic magnetohydrodynamics? Can we formulate magnetohydrodynamics with less than the six functions $\alpha,\beta,\chi,\eta,\nu,\rho$? The answer is yes, in fact four functions $\chi,\eta,\nu,\rho$ will suffice. 
To see this we may write the two \eqs (\ref{chieq},\ref{etaeq}) as \eqs for $\alpha,\beta$, that is: \ber & & \frac{d \chi}{dt} = \frac{\partial \chi}{\partial t}+ \vec v \cdot \vec \nabla \chi = \frac{\partial \chi}{\partial t}+ (\vec \nabla \nu + \alpha \vec \nabla \chi + \beta \vec \nabla \eta) \cdot \vec \nabla \chi = 0, \nonumber \\ & & \frac{d \eta}{dt} = \frac{\partial \eta}{\partial t}+ \vec v \cdot \vec \nabla \eta = \frac{\partial \eta}{\partial t}+ (\vec \nabla \nu + \alpha \vec \nabla \chi + \beta \vec \nabla \eta) \cdot \vec \nabla \eta = 0, \label{lagmul2} \enr in which we have used \ern{vform}. Solving for $\alpha,\beta$ we obtain: \ber \alpha[\chi,\eta,\nu] & = & \frac{(\vec \nabla \eta)^2(\frac{\partial \chi}{\partial t}+ \vec \nabla \nu \cdot \vec \nabla \chi) - (\vec \nabla \eta \cdot \vec \nabla \chi) (\frac{\partial \eta}{\partial t}+ \vec \nabla \nu \cdot \vec \nabla \eta)} {(\vec \nabla \eta \cdot \vec \nabla \chi)^2-(\vec \nabla \eta)^2 ( \vec \nabla \chi)^2 } \nonumber \\ \beta[\chi,\eta,\nu] & = & \frac{(\vec \nabla \chi)^2(\frac{\partial \eta}{\partial t}+ \vec \nabla \nu \cdot \vec \nabla \eta) - (\vec \nabla \eta \cdot \vec \nabla \chi) (\frac{\partial \chi}{\partial t}+ \vec \nabla \nu \cdot \vec \nabla \chi)} {(\vec \nabla \eta \cdot \vec \nabla \chi)^2-(\vec \nabla \eta)^2 ( \vec \nabla \chi)^2 }. \label{alphbeta} \enr Hence $\alpha$ and $\beta$ are no longer free variables, but depend on $\chi,\eta,\nu$. Moreover, the velocity $\vec v$ now depends on the same three variables $\chi,\eta,\nu$: \beq \vec v = \vec \nabla \nu + \alpha[\chi,\eta,\nu] \vec \nabla \chi + \beta[\chi,\eta,\nu] \vec \nabla \eta. \label{vform2} \enq Since $\vec v$ is given now by \ern{vform2} it follows that the two \eqs (\ref{chieq},\ref{etaeq}) are satisfied identically and need not be derived from a variational principle. 
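The elimination of $\alpha,\beta$ can be checked pointwise: given arbitrary values of $\vec \nabla \chi$, $\vec \nabla \eta$, $\vec \nabla \nu$ and the partial time derivatives at a single point, the closed-form solution \ern{alphbeta} makes \ern{chieq} and \ern{etaeq} hold identically for the velocity \ern{vform2}. A short Python sketch (illustrative only; the point values are random):

```python
import numpy as np

# Pointwise verification of eq. (alphbeta): with alpha, beta computed from
# the closed form, the advection equations (chieq), (etaeq) hold identically.
rng = np.random.default_rng(0)
grad_chi, grad_eta, grad_nu = rng.normal(size=(3, 3))  # gradients at one point
chi_t, eta_t = rng.normal(size=2)                      # partial time derivatives

denom = (np.dot(grad_eta, grad_chi) ** 2
         - np.dot(grad_eta, grad_eta) * np.dot(grad_chi, grad_chi))
alpha = (np.dot(grad_eta, grad_eta) * (chi_t + np.dot(grad_nu, grad_chi))
         - np.dot(grad_eta, grad_chi) * (eta_t + np.dot(grad_nu, grad_eta))) / denom
beta = (np.dot(grad_chi, grad_chi) * (eta_t + np.dot(grad_nu, grad_eta))
        - np.dot(grad_eta, grad_chi) * (chi_t + np.dot(grad_nu, grad_chi))) / denom

v = grad_nu + alpha * grad_chi + beta * grad_eta  # eq. (vform2)

# material derivatives d(chi)/dt and d(eta)/dt should vanish identically
dchi_dt = chi_t + np.dot(v, grad_chi)
deta_dt = eta_t + np.dot(v, grad_eta)
```

The denominator is $-(\vec \nabla \chi \times \vec \nabla \eta)^2 = -\vec B^2$, so the solution exists wherever the magnetic field \ern{Bsakurai} does not vanish.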
The above expression for the velocity \ern{vform2} can be somewhat simplified, resulting in: \ber \vec v &=& \vec \nabla \nu + \frac{1}{\vec B^2} [\frac{\partial \eta}{\partial t} \vec \nabla \chi - \frac{\partial \chi}{\partial t} \vec \nabla \eta + \vec \nabla \nu \times \vec B]\times \vec B \nonumber \\ &=& \frac{1}{\vec B^2} [(\frac{\partial \eta}{\partial t} \vec \nabla \chi - \frac{\partial \chi}{\partial t} \vec \nabla \eta) \times \vec B + \vec B (\vec \nabla \nu \cdot \vec B)] \label{vform3} \enr Hence the velocity $\vec v$ is naturally partitioned into two components: one parallel to the magnetic field and one perpendicular to it: \ber \vec v &=& \vec v_{\bot}+ \vec v_{\|} \nonumber \\ \vec v_{\bot} &=& \frac{1}{\vec B^2} (\frac{\partial \eta}{\partial t} \vec \nabla \chi - \frac{\partial \chi}{\partial t} \vec \nabla \eta) \times \vec B, \qquad \vec v_{\|} = \frac{\vec B}{\vec B^2} (\vec \nabla \nu \cdot \vec B). \label{vform4} \enr Inserting the velocity representation (\ref{vform3}) into \ern{alphbeta} leads to the result: \ber \alpha & = & \frac{\vec \nabla \eta \cdot (\vec B \times (\vec v- \vec \nabla \nu))}{\vec B^2} \nonumber \\ \beta & = & - \frac{\vec \nabla \chi \cdot (\vec B \times (\vec v- \vec \nabla \nu))}{\vec B^2}. \label{alphbeta2} \enr Finally, \ers{alphbeta} should be substituted into \ern{Lagactionsimp} to obtain a Lagrangian density ${\cal L}$ in terms of $\chi,\eta,\nu,\rho$: \ber {\cal L}[\chi,\eta,\nu,\rho] &\equiv & -\rho [\frac{\partial{\nu}}{\partial t} + \alpha[\chi,\eta,\nu] \frac{\partial{\chi}}{\partial t} + \beta[\chi,\eta,\nu] \frac{\partial{\eta}}{\partial t}+\varepsilon (\rho) \nonumber \\ &+& \frac{1}{2} (\vec \nabla \nu + \alpha[\chi,\eta,\nu] \vec \nabla \chi + \beta[\chi,\eta,\nu] \vec \nabla \eta)^2 ] \nonumber \\ &-&\frac{1}{8 \pi}(\vec \nabla \chi \times \vec \nabla \eta)^2. 
\label{Lagsimp2} \enr Using \ers{alphbeta2} this can be written as: \beq {\cal L}[\chi,\eta,\nu,\rho] = \rho [\frac{1}{2} \vec v^2 - \frac{d{\nu}}{d t}-\varepsilon (\rho)] - \frac{1}{8 \pi}\vec B^2 \label{Lagsimp3} \enq where $\vec v$ is given by \ern{vform3} and $\vec B$ by \ern{Bsakurai}. Or more explicitly as: \ber {\cal L}[\chi,\eta,\nu,\rho] &=& \frac{1}{2} \frac{\rho}{(\vec \nabla \chi \times \vec \nabla \eta)^2} [\vec \nabla \eta \frac{\partial \chi}{\partial t}- \vec \nabla \chi \frac{\partial \eta}{\partial t}+ (\vec \nabla \chi \times \vec \nabla \eta) \times \vec \nabla \nu]^2 \nonumber \\ &-& \rho [\frac{\partial \nu}{\partial t} + \frac{1}{2} (\vec \nabla \nu)^2 + \varepsilon (\rho)] - \frac{(\vec \nabla \chi \times \vec \nabla \eta)^2}{8 \pi}. \label{Lagsimp4} \enr This Lagrangian density admits an infinite symmetry group of transformations of the form: \beq \hat{\eta} = \hat{\eta} (\chi,\eta), \qquad \hat{\chi} = \hat{\chi} (\chi,\eta), \enq provided that the absolute value of the Jacobian of these transformations is unity: \beq \left|\frac{\partial (\hat{\eta},\hat{\chi})}{\partial (\eta,\chi)}\right|=1. \enq In particular the Lagrangian density admits an exchange symmetry: \beq \hat{\eta} = \chi, \qquad \hat{\chi} = \eta. \enq As a consequence of this doubly infinite symmetry group we have two {\it local} conservation laws given by the two \eqs (\ref{chieq},\ref{etaeq}). Taking the variational derivatives of the action defined using \ern{Lagsimp4} to zero for arbitrary variations leads to the following set of equations: \beq \frac{\partial{\rho}}{\partial t} + \vec \nabla \cdot (\rho \vec v ) = 0, \label{masscon3} \enq \beq \frac{d \nu}{d t} = \frac{1}{2} \vec v^2 - w, \label{nueq2} \enq \beq \frac{d \alpha[\chi,\eta,\nu]}{dt} = \frac{\vec \nabla \eta \cdot \vec J}{\rho}, \qquad \label{aleq2} \enq \beq \frac{d \beta[\chi,\eta,\nu]}{dt} = -\frac{\vec \nabla \chi \cdot \vec J}{\rho}. 
\label{betaeq2} \enq These equations should be solved for $\chi,\eta,\nu,\rho$. Equations (\ref{aleq2},\ref{betaeq2}) contain a complicated linear combination of the second derivatives $\frac{\partial^2 \chi}{\partial t^2}$ and $\frac{\partial^2 \eta}{\partial t^2}$. This is numerically inconvenient; therefore, the following approach is recommended. Taking the partial temporal derivative of the two \eqs (\ref{chieq},\ref{etaeq}) we obtain: \beq \frac{\partial^2 \chi}{\partial t^2}+ \frac{\partial \vec v}{\partial t} \cdot \vec \nabla \chi + \vec v \cdot \vec \nabla \frac{\partial \chi}{\partial t} = 0, \qquad \frac{\partial^2 \eta}{\partial t^2}+ \frac{\partial \vec v}{\partial t} \cdot \vec \nabla \eta + \vec v \cdot \vec \nabla \frac{\partial \eta}{\partial t} = 0. \label{d2chid2eta1} \enq Using the expression for $\frac{\partial \vec v}{\partial t}$ obtained from \ern{dvform2} we arrive at explicit expressions for the second derivatives of the form: \ber \frac{\partial^2 \chi}{\partial t^2}&=&((\vec v \cdot \vec \nabla) \vec v+ \vec \nabla w -\frac{1}{\rho} \vec J \times \vec B )\cdot \vec \nabla \chi -\vec v \cdot \vec \nabla \frac{\partial \chi}{\partial t} \nonumber \\ \frac{\partial^2 \eta}{\partial t^2}&=&((\vec v \cdot \vec \nabla) \vec v+ \vec \nabla w -\frac{1}{\rho} \vec J \times \vec B )\cdot \vec \nabla \eta -\vec v \cdot \vec \nabla \frac{\partial \eta}{\partial t}. \label{d2chid2eta2} \enr Hence we have arrived at a four-function formalism for barotropic magnetohydrodynamics which can be derived from a Lagrangian. Notice, however, that this formalism contains two first-order equations and two second-order equations, while our previous six-function formalism \cite{YaLy} contained six first-order equations. \section{Conclusion} We have shown that barotropic magnetohydrodynamics can be represented in terms of four scalar functions $\chi,\eta,\nu,\rho$ instead of the seven quantities, which are the magnetic field $\vec B$, the velocity field $\vec v$ and the density $\rho$. 
Anticipated applications include stability analysis and the construction of numerical schemes based on the described variational principles; these, however, exceed the scope of this paper. It was shown by the author \cite{Yahalom} that variational principles can be used directly for numerical analysis (simulation) without the need to refer to the field equations. This mathematical construction may lead to better algorithms for simulating magnetohydrodynamics in terms of the needed computer memory and CPU time. This approach was applied to potential flows in a series of papers \cite{YahalomPinhasi,YahPinhasKop,OphirYahPinhasKop}. Moreover, it was implemented in a user-friendly software package FLUIDEX (which can be downloaded from the website www.fluidex-cfd.com). A variational formalism of magnetohydrodynamics should serve the same purpose. As for stability analysis, I suspect that achieving this will require adding constants-of-motion constraints to the action, as was done in \cite{YahalomKatz}; hopefully this will be discussed in a future paper.
require "saral/version"

module Saral
  class Application
    def call(env)
      [200, { 'Content-Type' => 'text/html' }, ["Hello from the Saral framework!"]]
    end
  end
end
Q: Trying to add an extra enemy every time I destroy one, but I keep getting "if enemy_y[i] > 400: IndexError: list index out of range". In my Python code I keep trying to add num_of_enemies += 1, and I have tried this in different spots, but I keep getting the error: "if enemy_y[i] > 400: IndexError: list index out of range". Please help, here is my code!

import pygame
import random
import math
from pygame import mixer

# Initialize pygame
pygame.init()

# create the screen
screen = pygame.display.set_mode((800, 600))
running = True

# Background
background = pygame.image.load('background.png')

# Background sound
mixer.music.load('background.wav')
mixer.music.play(-1)

# Title
pygame.display.set_caption("SpaceVader")

# Player
player_img = pygame.image.load('spaceship-2.png')
player_x = 370
player_y = 480
player_x_change = 0
player_y_change = 0

# Enemy
# The [] square brackets create an empty list for the enemies to go inside
enemy_img = []
enemy_x = []
enemy_y = []
enemy_x_change = []
enemy_y_change = []
num_of_enemies = 6

for i in range(num_of_enemies):
    enemy_img.append(pygame.image.load('ufo-2.png'))
    enemy_x.append(random.randint(0, 735))
    enemy_y.append(random.randint(50, 150))
    enemy_x_change.append(2)
    enemy_y_change.append(40)

# Bullet
# "ready" - you can't see the bullet on the screen
# "fire" - the bullet is fired
bullet_img = pygame.image.load('bullet.png')
bullet_x = 0
b_y = player_y
bullet_y = b_y
bullet_x_change = 0
bullet_y_change = 10
bullet_state = "ready"

# Font
score_value = 0
font = pygame.font.Font('freesansbold.ttf', 32)
text_x = 10
text_y = 10

# Game Over text
over_font = pygame.font.Font('freesansbold.ttf', 64)

# creates the score board and puts it on the screen
def show_score(x, y):
    score = font.render("Score : " + str(score_value), True, (255, 255, 255))
    screen.blit(score, (x, y))

# creates the game over message
def game_over_text():
    over_text = over_font.render("GAME OVER", True, (255, 255, 255))
    screen.blit(over_text, (200, 250))

# puts the player on the screen
def player(x, y):
    screen.blit(player_img, (x, y))

# puts the enemy on the screen
def enemy(x, y, i):
    screen.blit(enemy_img[i], (x, y))

# creates the ability to fire the bullet
def fire_bullet(x, y):
    global bullet_state
    bullet_state = "fire"
    screen.blit(bullet_img, (x + 16, y + 10))

# detects the collision between the bullet and the enemy
def is_collision(enemy_x, enemy_y, bullet_x, bullet_y):
    distance = math.sqrt((math.pow(enemy_x - bullet_x, 2)) + (math.pow(enemy_y - bullet_y, 2)))
    if distance < 27:
        return True
    else:
        return False

# Game Loop
while running:
    screen.fill((0, 0, 0))
    # background image
    screen.blit(background, (0, 0))
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        # if a key is pressed, check for right/left and up/down
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_LEFT:
                player_x_change += -5
            if event.key == pygame.K_RIGHT:
                player_x_change += 5
            if event.key == pygame.K_UP:
                player_y_change += -5
            if event.key == pygame.K_DOWN:
                player_y_change += 5
            if event.key == pygame.K_SPACE:
                if bullet_state == "ready":
                    bullet_sound = mixer.Sound("laser.wav")
                    bullet_sound.play()
                    # get x of player
                    bullet_x = player_x
                    # get y of player
                    bullet_y = player_y
                    fire_bullet(bullet_x, bullet_y)
        # allows the key release to stop the action
        if event.type == pygame.KEYUP:
            if event.key == pygame.K_LEFT or event.key == pygame.K_RIGHT or event.key == pygame.K_UP or event.key == pygame.K_DOWN:
                player_x_change = 0
                player_y_change = 0

    # create walls to restrict leaving the screen
    player_x += player_x_change
    if player_x <= 0:
        player_x = 0
    elif player_x >= 736:
        player_x = 736
    if player_y <= 0:
        player_y = 0
    elif player_y >= 536:
        player_y = 536

    # enemy movement
    for i in range(num_of_enemies):
        # Game Over
        if enemy_y[i] > 400:
            for j in range(num_of_enemies):
                enemy_y[j] = 2000
            game_over_text()
            explosion_sound = mixer.Sound("explosion.wav")
            explosion_sound.play()
            scream_sound = mixer.Sound("Game scream 2.wav")
            scream_sound.play()
            break

        # creates movement of the enemy
        enemy_x[i] += enemy_x_change[i]
        if enemy_x[i] <= 0:
            enemy_x_change[i] = 2
            enemy_x[i] += enemy_x_change[i]
            enemy_y[i] += enemy_y_change[i]
        elif enemy_x[i] >= 736:
            enemy_x_change[i] = -2
            enemy_x[i] += enemy_x_change[i]
            enemy_y[i] += enemy_y_change[i]

        # Collision
        collision = is_collision(enemy_x[i], enemy_y[i], bullet_x, bullet_y)
        if collision:
            explosion_sound = mixer.Sound("explosion.wav")
            explosion_sound.play()
            bullet_y = player_y
            bullet_state = "ready"
            score_value += 1
            enemy_x[i] = random.randint(0, 735)
            enemy_y[i] = random.randint(50, 150)

        enemy(enemy_x[i], enemy_y[i], i)

    # Bullet movement
    if bullet_y <= 0:
        bullet_y = player_y
        bullet_state = "ready"
    if bullet_state == "fire":
        fire_bullet(bullet_x, bullet_y)
        bullet_y -= bullet_y_change

    player_y += player_y_change
    player(player_x, player_y)
    show_score(text_x, text_y)
    pygame.display.update()

This is my first post and I am new to coding; this is for a school project, so any help would be greatly appreciated!

A: As you increase the number of enemies, you also need to append new items to every one of the parallel lists:

if collision:
    # [...]
    num_of_enemies += 1
    enemy_img.append(pygame.image.load('ufo-2.png'))
    enemy_x.append(random.randint(0, 735))
    enemy_y.append(random.randint(50, 150))
    enemy_x_change.append(2)
    enemy_y_change.append(40)
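Beyond the direct fix, this class of bug can be ruled out entirely by keeping each enemy's state in a single object instead of five parallel lists, so the lists can never get out of sync. A refactoring sketch (not the original poster's code; the pygame image loading is omitted so the structure stands alone):

```python
from dataclasses import dataclass
import random

# Each enemy is one object; adding an enemy can never leave one
# list shorter than the others, so no IndexError is possible.
@dataclass
class Enemy:
    x: float
    y: float
    x_change: float = 2.0
    y_change: float = 40.0

def spawn_enemy():
    # same spawn ranges as the original code
    return Enemy(x=random.randint(0, 735), y=random.randint(50, 150))

enemies = [spawn_enemy() for _ in range(6)]

def handle_hit(enemies, i):
    # on collision: respawn the hit enemy and grow the fleet by one
    enemies[i] = spawn_enemy()
    enemies.append(spawn_enemy())

handle_hit(enemies, 0)
```

With this layout the image surface would simply become one more field on Enemy, and the game loop iterates with `for e in enemies:` instead of indexing five lists.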
Author of numerous designs such as the Gaia armchair, included in the permanent design collections of the MoMA in New York and the Triennale Design Museum in Milan, and the 4875 chair for Kartell, the first in the world made of polypropylene, which is part of the design collection of the Centre Pompidou in Paris.

Biography
A student of masters of Italian architecture and design such as Luciano Baldessari and Marcello Nizzoli, Carlo Bartoli graduated in architecture in Milan, where he founded a studio in 1960. His first works concerned architecture and interiors, but he soon began to devote himself to furniture design. His collaboration with companies that would become reference points for the design world led to results such as the Gaia armchair for Arflex, included in the permanent design collections of the MoMA in New York and the Triennale Design Museum in Milan, and the 4875 chair for Kartell, the first in the world made of polypropylene, part of the design collection of the Centre Pompidou in Paris. From then on he produced numerous designs. His products have been shown in many exhibitions: at the Triennale di Milano, the Victoria and Albert Museum in London, the Stadt Museum in Cologne, and in New York, Prague, Hong Kong, Athens and Buenos Aires. He taught at the Politecnico di Milano and at the ISIA in Florence and Rome. The Young&Design jury honored Carlo Bartoli as "Apostolo del Design" in 2012. In 2016 he received the Compasso d'oro for lifetime achievement. In 2007 he founded the studio Bartoli Design, which develops projects in architecture, exhibition design, interiors and urban design. In 2008 the R606 Uno chair, designed by Bartoli Design with Fauciglietti Engineering for Segis, received the XXI Compasso d'oro ADI, after winning the Materialica Design Award.

The "Tube" sofa, produced by Rossi di Albizzate, won the IF Award for Good Industrial Design 1995 and was depicted on the stamps issued by Poste Italiane on the theme "Italian design for a new domestic landscape". The stackable Breeze chair, for Segis, won the I.D. Design Distinction Award, the Apex Product Design Award, the Red Dot Design Award and the IF Good Industrial Design award, and appeared in the Poste Italiane stamp series "Maestri del design italiano". Designed for Bonaldo, the Still and Octa tables were awarded the Red Dot Design Award in 2013 and the Good Design Award 2014, respectively. In 2015 the Mercury Curva office chair received the Good Design Award.

Architecture
Carlo Bartoli graduated from the Faculty of Architecture in Milan in 1957. He immediately began private practice in building construction, in a period of great hunger for housing. His studio produced designs for churches, apartment buildings, shopping centers, villas, renovations, exhibitions and mass-produced prefabricated houses. In the mid-sixties the momentum of reconstruction went into crisis, and it was natural for Carlo Bartoli to turn to furnishing and design in search of new professional outlets: the other face of architectural work, the freer and almost untouched one.

Design
The citation for the award of the Compasso d'oro for lifetime achievement to Carlo Bartoli, expressed by the international ADI jury, summarized the distinctive features of his path as a designer: "...For having been able to convey, in his professional experience, a poetics constantly aimed at seeking the essence of the creative gesture, and a particular ability to attune himself to the growth and development needs of many furniture companies. He provides the companies he works with contributions that are original and innovative every time, thereby contributing to their success. A rigorous design path, applied across different thematic areas with sobriety and measure, constantly contributing to the enrichment of the culture of Italian design."

In the early sixties he began his career as a designer, inaugurated in 1963 with the B/146 bookcase produced for Arflex. His attention, however, immediately focused on seating, which he always considered a fundamental and emblematic element of the Italian way of living. Thus in 1967 he created the first fiberglass seat (then a cutting-edge technology), the Gaia armchair. And it was Gaia that allowed Carlo Bartoli to shift his attention to design. Two circumstances marked this beginning: one was the invention of this armchair; the second was the relationship, born precisely through Gaia, with the leading, culturally influential company of the time, Arflex.

Main works
Design
1963, B146 bookcase, Arflex
1966, Damiano table, Arflex
1967, Gaia armchair, Arflex
1969, Multiplo modular system, Tisettanta
1969, Mito chair, Tisettanta
1969, Bicia armchair, Arflex
1972, Baco shelf, Confalonieri
1973, Blop and Down sofas, Rossi di Albizzate
1974, 4875 chair, Kartell
1974, Set/1 operative system, Oscam-Osi
1978, Open modular system, Tisettanta
1982, KnockDown kitchen, ArcLinea
1985, Coba program, Confalonieri
1987, Odeon system, Arclinea
1989, Sophia chair, Bonaldo
1990, Tacta and Mixa handles, ColomboDesign
1992, Galì chair, Ycami
1993, Metron office system, Matteograssi
1993, Luna bathroom accessories, Colombo Design
1995, Breeze chair, Segis
1995, Tube sofa, Rossi Di Albizzate
1997, Storm-Multistorm seating system, Segis
1997, Brera kitchen, Ernestomeda
1998, Theatre door, Lualdi
1999, Ellipse collection, Delight
2000, Gallery armchair, Segis
2000, She-Shu-Sha upholstered pieces, Rossi di Albizzate
2001, Bebop bookcase, Kristalia
2001, Viva bathroom accessories, Colombo Design
2001, Maxima storage units, Laurameroni
2002, Sushi table, Kristalia
2004, Poppystar chair, Segis
2004, Tulip auditorium seating, Kron
2004, Milano table system, Sagsa
2005, R606Uno chair, Segis
2005, Formae collection, Colombo Design
2006, Jazz seating, International Office Concept
2006, Bend bed, Move
2007, Ray-Ray Plus table, Fiam
2008, Lips chair, Segis
2008, Altopiano sofas, Laurameroni
2009, Mille table, Bonaldo
2009, Joko chair, Kristalia
2009, Nori table, Kristalia
2009, Manhattan bookcase, Jesse
2010, Mercury chair, Asis
2010, May armchair, Arflex
2011, Flores upholstered pieces, Segis
2012, Aki sideboard, Riva1920
2012, River system, Segis
2012, Torii table, Kristalia
2012, By chair, Bonaldo
2013, Octa table, Bonaldo
2014, Filly chair, Bonaldo
2014, Kuva chair, Bonaldo
2015, 1085Edition chair, Kristalia
2015, Sensu chair, Daa
2016, Maki table, Kristalia
2016, Camel seating system, Segis
2017, Non table, Bonaldo
2017, Rime table, Fiam Italia
2019, Mellow table, Bonaldo
2019, Tango storage unit, Laurameroni Design Collection

Architecture
1959, Competition for the Istituto Professionale in Bergamo - 2nd prize (in collaboration with Giovanna Pericoli and Giancarlo Polo)
1961, Residential building and footwear workshop in Vigevano (in collaboration with Luciano Baldessari)
1962, Parish church "Madonna Regina" in Busto Arsizio (in collaboration with Annig Sarian and Antonio Garavaglia)
1963, Tecnorama exhibition in Lazise del Garda
1966, Prefabricated holiday house
1968, Executive design for buildings 22-23-24 in the Campo dei Fiori district, Milan (in collaboration with Luciano Baldessari and Annig Sarian)
1969, Baldini industrial complex in Lucca (in collaboration with Piero Menichetti)
1971, Two residential villas in S. Agata Li Battiati
1972, Two-family house in Abbadia Lariana
1972, Villa in Poiano
1972, Villa in Carate Brianza
1973, Holiday residence building in Bormio
1974, Villa in Bussolengo
1977, "4 Residenze" complex in Giussano
1980, Plan for the environmental arrangement and road layout of Piazza Roma in Giussano
1982, Residential and commercial center in Sesto San Giovanni (in collaboration with Ambrogio Tacconi)
1983, Villa in Verano Brianza
1984, Residential and office building in Giussano (in collaboration with Giuliana Celsi)
1985, Building renovation project in Bussolengo (in collaboration with Ferdinando Montresor)
1986-1994, Vicolo San Luigi in Giussano, building renovation and environmental arrangement (in collaboration with Terenzio Sironi)
1987, Elementary school in Giussano (in collaboration with Terenzio Sironi)
1987, Tecnorama renovation in Lazise del Garda
1990, Villa in Verano (in collaboration with Anna Bartoli and Euro Sironi)
1991, Villa in Giussano
1992, Villa in Cernusco sul Naviglio, renovation (in collaboration with Anna Bartoli)
1992, Piazza Maffi in Sesto S. Giovanni, environmental requalification (in collaboration with Giulio Ripamonti)
1992, Cassa Rurale ed Artigiana of Sesto S. Giovanni, East branch and the square in front of it, building and environmental renovation (in collaboration with Giulio Ripamonti)
1993, Cassa Rurale ed Artigiana of Ostuni and the square in front of it: building renovation and environmental requalification (in collaboration with Giulio Ripamonti and Alfredo Castiglioni)
1994, Takamuroike Golf Club, Hyogo, Japan (in collaboration with Eisuke Ohnishi)
1999, Confalonieri spa headquarters in Giussano (in collaboration with Anna Bartoli)
2002, BCC Service Center in Sesto S. Giovanni (in collaboration with Giulio Ripamonti)
2006, Advertising Plan for Lissone (in collaboration with Paolo Bartoli)
2007, Via Sempione, Monza, environmental requalification and urban furniture (in collaboration with Alfredo Castiglioni and Giulio Ripamonti)
2008, Residential and commercial building complex in Veduggio, Integrated Intervention Plan (in collaboration with Anna and Paolo Bartoli and Giulio Ripamonti)
2011, Residential building complex in Robbiano, Subdivision Plan (in collaboration with Anna and Paolo Bartoli)
2010-2014, Villa Mirabello in Monza, renovation and restoration (in collaboration with Paolo Bartoli and Alfredo Castiglioni)

Awards
1995 IF Award for Good Industrial Design for the Tube sofa (Rossi di Albizzate)
1995 I.D. Design Distinction Award for the Breeze chair (Segis)
1995 Apex Product Design Award for the Breeze chair (Segis)
1995 Red Dot Design Award for the Breeze chair (Segis)
1995 IF Award for Good Industrial Design for the Breeze chair (Segis)
2000 the Breeze chair (Segis) is depicted on the Poste Italiane stamps "Maestri del Design Italiano"
2008 XXI Compasso d'oro for the R606 Uno chair (Segis), in collaboration with Fauciglietti Engineering
2010 Good Design Award for the Sol table (Bonaldo)
2012 "Apostolo del design", Young&Design jury
2013 Red Dot Design Award for the Still table (Bonaldo)
2014 Good Design Award for the Still and Octa tables (Bonaldo)
2015 Good Design Award for the Mercury/Curva chair (Asis)
2016 Compasso d'oro ADI for lifetime achievement

Exhibitions and events
Carlo Bartoli was invited to exhibit in numerous shows and events, including:
1968 XIV Triennale di Milano, Milan
1968 Plastic as Plastic, Museum of Contemporary Crafts, New York
1970 Modern Chairs 1918-1970, Victoria and Albert Museum, London
1972 Design and Plastics, Museum of Decorative Arts, Prague
1972 IV Eurodomus, Turin
1975 La sedia in materiale plastico, Centrokappa, Milan
1979 Design & Design, 
Palazzo delle Stelline, Milano 1979 Il disegno italiano per l'ufficio, Kyoto 1979 Italian Design, Hong Kong 1980 Design Italiano, Zàppion Mégaron, Atene 1981 Italienisches Moebel Design 1950-1980, Stadt Museum, Colonia 1983 Trieste '83 - I designers, Trieste 1983 Dal cucchiaio alla città - itinerario di 100 designers, Triennale di Milano, Milano 1983 "1949-1983: progetti per il presente", Kartell, Milano 1989 Forum Design '89, Milano 1991 XVI Compasso d'Oro, Milano 1997 Carlo Bartoli: Trenta anni di design, Buenos Aires 2002 Non sono una Signora, Triennale di Milano, Milano Opere di Bartoli nelle collezioni permanenti poltrona Gaia (Arflex 1967) presso il Museum of Modern Art (MoMa) New York e presso la Triennale di Milano sedia 4875 (Kartell 1974) presso il Musée National d'Art Moderne al Centre Pompidou, Parigi sedia Sophia, (Bonaldo 1989) presso l'Architectural Museum, Ljublijana sedia Breeze (Segis 1995) presso il Thessaloniki Design Museum, Tessalonica, e il Vitra Design Museum, Weil am Rhein Note Bibliografia Flavio Conti, Carlo Bartoli, Milano 1988, ed. Rima Editrice Peter Zec, Who's Who in Design (volume 2), 2003, ed. AVEdition Marcel Wanders, The International Design Yearbook 2005, 2005, ed. Laurence King Luca Vivanti, Tisettanta, quarant'anni di design quarant'anni di casa, Milano 2011, ed. Electa Mondadori Collegamenti esterni ADI Associazione per il Disegno Industriale Bartoli Design Intervista Kristalia 2015 Enciclopedia Treccani
A Love Letter to 'Insecure'
Written By: Angelica Monk

Sunday's finale of HBO's hit series, Insecure, was the end of an era. Every so often a series comes along that defines a generation, and Insecure was it. Never has the experience of a young, Black millennial been portrayed so beautifully. Issa made me feel seen, in more ways than one. The idea of Black people simply existing on screen was such a revolutionary concept. I can think of very few shows centered around the Black experience that don't exploit Black trauma. Insecure found a way to give us light. Heartbreaks, friendships, failures, and the uncertainty of the future were lovingly crafted through the eyes of strong Black women. They were humanized and flawed — that's what made the show so relatable. How many of us had a boyfriend who was a "project" and never met his real potential? How many of us had a best friend who seemingly had it all, making us feel inferior, or a job that paid the bills but we hated? These were the stories that Issa brought into our homes every Sunday night with her trademark quirky flair. I can't remember how many times I have sat in my circle of friends and thought, "Which one of us is Issa, Molly, Kelli, or Tiffany?" A representation of a diverse group of women, all beautiful and successful, was something I had never seen on screen. These were all college-educated women who effortlessly switched between Standard American English and AAVE. I will never forget Issa's exchanges with Mirror Bitch, the absolute highlight of the show. For the first time in my adult life, I saw people who talked and acted just like me! Not only did they sound like me, but they looked like me too. Issa was a naturalista with hair that was to die for. Each week I looked forward to seeing her hair in braids, curls, a twist out — anything to draw inspiration for my everyday life. And let's get into the fashions. That Molly was sharp!
I found myself googling her looks for some fabulous blouse that I could wear in the office (pre-pandemic, of course). Don't get it twisted — Issa, Kelli, and Tiffany brought it too. To this day, I've been stalking Telfar's website with my eye on that sage green shopping bag that Issa was rocking in Season 5's premiere. Please take all my coins. We can't talk about Insecure without talking about the music. Each season the soundtrack was fire. Whether we were nodding along to Kelis' "Bossy" or Issa's "cypher" of "Broken P*ssy," Goldlink's "Palm Trees," "Run Up" by Cam & China, "The Glow" by Victoria Monet, or Teamarr's "Temperature," the music set the tone for the inevitable cultural time capsule that is the show. Insecure does not miss; I find myself bopping my head every episode and adding new songs to my playlist. On top of the music, the cinematography was breathtaking, an ode to Los Angeles. The show did not shy away from showing parts of LA that many would fear to venture into. It is refreshing to look beyond the glitz and glam of Rodeo Drive, Sunset Boulevard, or the Hollywood Walk of Fame. We see the Ethiopian Merkato, Leimert Park Village, Maverick's Flat, and the now iconic Dunes — I plan on making a pilgrimage to this location on my next visit to LA; forget the Hollywood sign. Insecure was instrumental in shaping my late 20s and early 30s. When Issa quit We Got Y'all, I found myself at a new organization ready to start a new chapter. When Molly was going from one failed relationship to another, I found myself doing the same. No one seemed to stick, and like Molly, I started to question my self-worth. I saw Kelli and Tiffany struggle in their friendships, and to a greater degree Issa and Molly; I looked within and discovered I need to work on my friendships as well. I think we could all relate to Issa's longtime love for Lawrence.
How many times have we tried to work it out with someone, and it never felt right? Yet somehow, our hearts yearned for them, and instead of dealing with ourselves, we pour into another person (looking at you, Nanceford). Like Issa, I eventually got it together with the men in my life, but not before I dealt with myself. "You know what that is? Growth." Speaking of men, I can confidently say Insecure wasn't just about Black women; it also brought a level of care and sensitivity to the plight of Black men. Whether it was Lawrence tackling his ambitions (or lack thereof) and the realities of co-parenting, or Nathan finding balance with his love life and mental health, these men were portrayed with such grace. Nearly all my guy friends were just as invested in Insecure as I was. They saw themselves just as I saw myself. Whether you were Team Lawrence or Team Nathan, we all loved Insecure. The last five seasons have given me so much joy, I don't know if anything will compare. Insecure has ushered in a new wave of shows portraying the Black experience. We now have Harlem, Run the World, and Sistas. I can't help but think they were all made possible because of an Awkward Black Girl. I will wholeheartedly miss Insecure Sundays, but I will always be grateful to Issa Rae for bringing this series to life. Always…okay?

Guest Blogger, December 28, 2021
Federación de Mocedades Galeguistas, the youth organization of the historic Partido Galeguista of Galicia
Foundation for Medieval Genealogy, an institution for the promotion of medieval genealogy studies
FMG is also the abbreviation of the Malagasy franc, the former currency of Madagascar
# Anyone know of a place that sells 10 farad capacitors rated at 20k volts

1. May 16, 2006

### Agnostic

Anyone know of a place that sells 10 farad capacitors rated at 20k volts?

2. May 17, 2006

### NateTG

No, that's on the scale of large research facility capacitor banks.

3. May 17, 2006

### Staff: Mentor

What'cha makin'?

4. May 17, 2006

### Hammie

Maybe an inter-galactic bug zapper?

5. May 17, 2006

### Hammie

I think I found one for you, kind of. It is just a tad small though.

http://www.rheinmetall.de/index.php?fid=1805&lang=3 [Broken]

Using J = 1/2 C E^2, you are asking for a 2 x 10^9 joule capacitor. This one is only 5 x 10^7.

Last edited by a moderator: May 2, 2017

6. May 17, 2006

### Agnostic

Looks like an interesting place considering the bombing of Dresden :)... So it goes.

7. May 17, 2006

### Agnostic

I am doing some undergraduate research. I guess the capacitance isn't difficult, but the voltage is...

8. May 17, 2006

### Agnostic

Maybe I am being a little overzealous...

9. May 17, 2006

### Staff: Mentor

There are plenty of 10+ microfarad capacitors around for voltages of 20 kV; however, a 10 F capacitor at 20 kV would be for a power transmission system.

ABB or a power electronics company would probably have those.

20 kV is considered medium voltage, which would be used in local distribution systems.

ABB HiQ - Capacitor Units (Power Capacitors)

The first application for DryQ capacitors is shunt banks rated for 40–170 kV and 10-100 MVAr.

DryQ AC

DryQ DC
http://www.abb.com/global/seitp/seitp332.nsf/0/7fce68898da2ef14c1256f64005041b0?OpenDocument [Broken]

Siemens
http://www.epcos.de/web/generator/W...tronics/Page,templateId=render,locale=en.html

Siemens' largest capacitor is 16 millifarads. 10 F seems a bit large.

I know one place that used huge inductors for energy storage, and they had to use explosive switching. They were doing 10+ kA.

I had the same thought as berkeman - that's a mighty big charge one is contemplating. :uhh:

And as Hammie pointed out, 2 GJ is a rather large amount of energy storage. That's the output of a typical 1000 MWe plant in 2 seconds. I can't imagine an undergrad doing research with such an amount of stored energy. One can explode wires with that energy/power.

Last edited by a moderator: May 2, 2017

10. May 18, 2006

### Hammie

Another thought, or two...

How do you plan on charging this little bugger?

I figured that, using the 20 kV anode supply for a 13 inch television, you'd be safe limiting the current such that the source is supplying a maximum of about 10 watts or so.

The series resistor would have to be about forty megohms, give or take.

It takes about five time constants to charge it to 99% of the source voltage.

One time constant is 4 x 10^8 seconds. That is roughly thirteen years, to charge it to only 63 percent of 20 kV.

As funny as all this may sound, it does give me an appreciation for what they are doing with those huge capacitor banks.

I don't think I could afford the electric bill for even one charge cycle.

At 1/2 amp to the TV, the bill would be about a million and a half, at today's rate of about five cents per kilowatt hour. :surprised

I'm only forty-nine now. I think I'll pay at the end of the charge period...

edit: I redid the figures... only $333.33 to do one fifth the job... I never was good with finances...

Last edited: May 18, 2006

11. May 18, 2006

### Agnostic

I just realized a typo... I meant 10 microfarad...

12. May 18, 2006

### NateTG

13. May 19, 2006

### Hammie

Be careful with the surplus stuff. Some of the really old oil-filled units contain PCBs.

14. May 19, 2006

### Staff: Mentor

Makes a big difference.

There are plenty of suppliers for 20 kV, 10 $\mu$F capacitors.

For example - http://www.hivoltcapacitors.com/page1.html [Broken]

http://www.morganelectroceramics.com/capacitors/index.html [Broken]

ABB and Siemens would also supply such capacitors.

See http://www.lambda-emi.com/product_html/203power.htm for charging systems.

Last edited by a moderator: May 2, 2017

15. May 19, 2006

### turbo

Now you know why people were curious about the application. I figured you were going to build an EMP device big enough to knock out the northeast. :rofl:

16. May 19, 2006

### enigma

Staff Emeritus

Aye. I saw 10 F before I read down further, and all I saw was the robot from the Space Family Robinson:

DANGER DANGER

I get REALLY nervous around the 500 mF caps we have in the lab, because there are some people there who don't realise how dangerous they can be.

17. May 20, 2006

### Staff: Mentor

Putting this in perspective -

2 GJ is the kinetic energy of 1 kg traveling at a speed of 63.245 km/s, or 10 kg traveling at 20 km/s.

A 100 kg man would have that amount of kinetic energy at 6.325 km/s, and that is pretty darn fast!

18. May 20, 2006

### x64bob

lol that would totally own
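The numbers quoted in the thread are easy to sanity-check. A minimal Python sketch (values taken from the posts above: a hypothetical 10 F capacitor at 20 kV, charged through the suggested 40 megohm series resistor):

```python
# Sanity-check the thread's back-of-envelope figures.

def stored_energy(c_farads, v_volts):
    """Energy stored in a capacitor: E = (1/2) * C * V^2, in joules."""
    return 0.5 * c_farads * v_volts ** 2

def rc_time_constant(r_ohms, c_farads):
    """RC charging time constant: tau = R * C, in seconds."""
    return r_ohms * c_farads

energy = stored_energy(10, 20e3)       # 2e9 J, the "2 GJ" quoted in the thread
tau = rc_time_constant(40e6, 10)       # 4e8 s with a 40 megohm series resistor
years = tau / (365.25 * 24 * 3600)     # roughly thirteen years per time constant

print(f"E = {energy:.1e} J, tau = {tau:.1e} s (about {years:.0f} years)")
```

This reproduces both of Hammie's figures: 2 GJ of stored energy, and a single time constant of about thirteen years.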
//
//  874. Walking Robot Simulation.h
//  leetcode
//
//  Created by andysheng on 2019/10/9.
//  Copyright © 2019 Andy. All rights reserved.
//

#ifndef _74__Walking_Robot_Simulation_h
#define _74__Walking_Robot_Simulation_h

#include <algorithm>
#include <set>
#include <utility>
#include <vector>

using namespace std;

namespace WalkingRobotSimulation {

class Solution {
public:
    int robotSim(vector<int>& commands, vector<vector<int>>& obstacles) {
        // Store obstacle coordinates in a set for fast lookup.
        set<pair<int, int>> blocked;
        for (const auto& o : obstacles) blocked.insert({o[0], o[1]});

        // Direction vectors in clockwise order: north, east, south, west.
        const int dx[] = {0, 1, 0, -1};
        const int dy[] = {1, 0, -1, 0};

        int dir = 0, x = 0, y = 0, best = 0;
        for (int cmd : commands) {
            if (cmd == -2) {                 // turn left 90 degrees
                dir = (dir + 3) % 4;
            } else if (cmd == -1) {          // turn right 90 degrees
                dir = (dir + 1) % 4;
            } else {                         // advance step by step, stopping at obstacles
                for (int step = 0; step < cmd; ++step) {
                    int nx = x + dx[dir], ny = y + dy[dir];
                    if (blocked.count({nx, ny})) break;
                    x = nx;
                    y = ny;
                    best = max(best, x * x + y * y);
                }
            }
        }
        return best;  // maximum squared distance from the origin
    }
};

}  // namespace WalkingRobotSimulation

#endif /* _74__Walking_Robot_Simulation_h */
In The News – The Law Offices of Kathleen Shannon Glancy, P.A. New studies show that the enactment of laws restricting victims' ability to bring medical malpractice suits as so-called "tort reform" has not fulfilled the insurance industry's promised benefits of improved healthcare and reduced costs. Read More. Kathleen Glancy has another article published in the Trial Briefs magazine (June 2012) regarding the interaction of Medicare and Workers' Compensation, entitled "The Perfect Storm." Click here to read the entire article. Political appointments of Gov. McCrory at the NC Industrial Commission have upset the traditional balance of interests and fair administration of the workers' compensation system. Read more. Kathleen Shannon Glancy, P.A., Attorney at Law, was nominated for and has now been awarded the 2013 Best of Business Award for Wilmington in the Small Business category. Kathleen Glancy was recognized by her peers in the September 2012 edition of Best Lawyers as one of North Carolina's best attorneys. Check out this fantastic article on getting out to vote penned by our very own Leslie Ogilvie, Legislative Chair for the NCAJ Legal Assistants Division, published in the NCAJ October 2012 Newsletter. Kids' Chance of North Carolina was formed in 2004 by professionals in the workers' compensation community including lawyers, insurers, employers, and medical and rehabilitation professionals. The nonprofit awards scholarships annually to students between the ages of 16 and 25 whose parent's on-the-job death or injury resulted in a substantial decline in family income. Visit http://kidschancenc.org/ today to learn about this wonderful effort and to see if you qualify to apply. "Like us, you know how deeply family members can be affected by workplace injuries, both financially and emotionally. The impact of such injuries is long-lasting.
A family's struggles and challenges don't disappear once they obtain a verdict or a settlement or payment of their medical expenses. The financial impact can be life shattering. Kids' Chance of North Carolina is a nonprofit corporation that provides educational scholarships to the children of employees who have been seriously injured or killed as a result of a workers' compensation injury that is covered under the North Carolina Workers' Compensation Act." The North Carolina Advocates for Justice (NCAJ) holds its annual convention in Wilmington, NC this weekend and looks forward to recognizing this year's NCAJ award recipients. The Thurgood Marshall Award is being presented to Kathleen Glancy, a Wilmington attorney, to recognize extraordinary and selfless service to the people of North Carolina in keeping with the legacy of Justice Thurgood Marshall. Thurgood Marshall was an Associate Justice of the United States Supreme Court, serving from October 1967 until October 1991. Marshall was the Court's 96th justice and its first African-American justice. Ms. Glancy, who specializes in Workers' Compensation, has practiced law with her husband Mike, a Social Security Disability Advocate, in Wilmington and Williamston, NC, since 1985. We are pleased to announce that Michael Glancy has been elected to membership in the National Academy of Social Insurance (NASI)! Election to membership in the NASI is considered one of the highest honors that can be awarded to a social insurance professional. Academy membership recognizes those who have made distinguished and continuing achievements in the field. The NASI is a nonprofit, nonpartisan organization made up of the nation's leading experts on social insurance. Its mission is to promote understanding of how social insurance contributes to economic security and a vibrant economy. A consensus version of House Bill 709 was ratified on June 13, 2011 and signed into law by Governor Perdue on June 24, 2011.
Check out "Hot Coffee," a documentary feature film by Susan Saladoff. "Hot Coffee" explores what really happened in the famous case involving Stella Liebeck and her spilled McDonald's coffee. The efforts against House Bill 709 continue! Many injured and disabled workers are really upset that HB 709 will shift the costs from the insurance company to the taxpayer and give more power and control to the insurance company. Many voters have told us that they do not understand why any legislator would do this and do not believe that North Carolina citizens are being properly represented in the General Assembly. To find out how your legislator stands on this bad bill, call [insert general phone # to GA] and ask to be connected to your house legislator. Ask your legislator to oppose this bad bill, HB 709. Ask them to stand up for injured workers and to not subsidize insurance company profits. Chief Executive, a bi-monthly magazine for CEOs, presidents, chairmen, vice-chairmen, and other top management executives, released its most recent ranking of Best & Worst states for business on May 3, 2011. This is another respected publication that places NC at #2 in the country – further evidence that legislation like that proposed in HB 709 and SB 544 is not needed in NC to attract new business. The recent edition of the Campbell Law Observer has an excellent article by Attorney Samuel A. Scudder of Scudder & Hedrick, PLLC, concerning the lack of need for the legislation proposed in House Bill 709 and Senate Bill 544. Please Like us on Facebook and Follow us on Twitter to stay up-to-date on this and other news of interest!
So many times, we don't recognize the value of a moment, or even a season, until later. So many different factors play into this that it makes things more complex than they should be. I have spent most of my adult life trying to make sense of things that never made any sense at all, because people kept trying to hold onto a life that never really existed. Many years of frustration amounted to pretty much nothing, until I chose to withdraw from everything and do some personal introspection. I needed to save myself, and what was left inside that I still found value in. In the process of that, I realized that the life I had been taught was unsustainable, and could not be brought into the future that I had always wanted. The pain that had always defined me, and ruled over everything that should have brought me joy, was not mine after all. It wasn't until I walked through the darkest parts of my soul that I was able to reconnect with who I was, and to discover everything I am beyond soul-crushing tragedy. I got into the cosplay convention world for some of the same reasons that I have seen in others. It's a fun form of escapism into a world that allows us to express ourselves in personal ways. I was just trying to find a life that would work, for once, when I went to my first convention. We recognize in others what is familiar when we share the same truth, even if our individual circumstances are different. From the first day I walked into the convention scene, I knew one thing for sure. Some people cry with tears, others with words. I could feel the pain flowing off of people like a waterfall. It was a surreal experience, like being at a party where everyone appears to be celebrating but is too broken to even communicate with each other. So we find ways to release ourselves into an emotional high, because we are used to being numb. After getting to know the local cosplay community, and being accepted by it, I came to another realization.
Some have lost their ability to cry at all, and no longer feel the parts of themselves that have been disconnected through traumatic experience. For a while, I kept quiet about this, because I was the same way and didn't know how to even proceed without overstepping. So I just appealed to people in the only way I knew how: through shared fandom obsessions. Then, one day, I went to a Goth cosplay event, because a photographer was hosting it, and it was my first opportunity to pay for a professional photoshoot. I met so many people that day who became family to me. This opened doors to being invited to other events. I well remember the day that it all came into a point of focus for me, and it absolutely broke me in half. I had been dealing with much, for several months, that I was unprepared for, because I was having to learn a whole new world with different rules and expectations. I had been having to push through some scary and painful emotions to be able to see past them. It would have been too easy to stand in judgment of what I could see in those who were very sincere in their artistic expressions. That's putting it lightly. Just maybe, the appalling looks of horror I had gotten from the toxic souls I was trying to relate to were the evidence that we shared the same reality but were on different levels of acceptance of it. This is the irony of denial. Some are taught that pain should be avoided, and is somehow perceived as punishment for hidden sin, or that it is weakness that can be exploited and used as justification for abuse. That is a stretch, to say the least, in a real world. I was confused by all of this, until the day that an unexpected incident at a cosplay event put me in the line of fire of feeling the pain of someone I did not know, but chose to reach out to.
For some time, I had been aware that there were people who were reconnecting me with a life I had forgotten, and that they had become atmospheric points of light for me, just by being who they are. But it was in this moment that all of that hit a flashpoint and became a focal point, an interface, giving me access to the understanding of things that had always been numb to me. I didn't know what to do with all of this. So I kept quiet for a few months to see what played out. It's interesting what will ultimately give clarity. We planned a beach trip to the Gulf Coast shortly after this. We didn't realize, until a few days before we left for our trip, that we would be riding out a tropical storm across the road from the Gulf of Mexico. We sat in a 4th-floor condo for 3 days, watching the sea get bigger than we had ever seen before and flood everything around us. All we could do was wait until it was clear for us to cross over into a place we had been promised, because of a reservation made in advance. As a side note, the owner gave us an extra day because of this. I didn't draw a personal parallel to all of this until we had gotten home and were out of any kind of danger. I realize this is a cultural reference that has been made many times, but I wanted to add my personal take on it. I am a believer who draws hope and encouragement from Biblical scripture, which is actually still relevant because it always has been. Many would cut me off right here, but I believe that the reason many don't understand the power of things like this is that no one has handled it appropriately to teach and guide. I am struck by the importance of the Jordan River crossing, as it relates to paradigm shifts, boundaries, and destinations in our own personal journeys. Not only did they have to camp out on the beach for 3 days and watch the Jordan at flood stage, but they knew that there was no way they could do anything in their own strength.
They recognized the magnitude of the situation, and they didn't take it lightly at all. They had to keep their distance from the Ark of the Covenant, which was interesting because it was about half a mile. This, of course, becomes more significant later, but is an acknowledgment of power here. They crossed over on dry land, just as they had through the Red Sea, which was symbolic of an escape from slavery. I believe the Jordan River crossing was more about remembering the faithfulness of God while the people were still being rebellious and disobedient. The reason I believe this is that they made an altar of 12 stones, to represent the 12 tribes of Israel, as they were being brought into promises that were the result of a binding covenant. The number 12 is also symbolic of a perfect foundation of government that flows out of God's power and authority. I also want to note that the name Gilgal means "circle," which has always been symbolic to me of the promise of life through relationship. It could also represent the reality of a boundary, which invites respect. Theocracy is usually seen as having negative connotations in our culture, because of how much humanity has misrepresented the true character of God, which has caused many to not care if such an entity even exists in power. My point is not whether anyone believes me or even agrees with me. I just want to share with others the places I have been, and how those seasons of experience have shaped my view of the world, life, and the way that we relate to one another. This is the first of many posts that I will share from a perspective that has been shifted many times over a lifetime, depending on where I stood and what was made to look real. That is my parallax, always flowing in influence, my flux that has been given a place of purpose.
\chapter{Conclusions} \label{Conclusions} We conclude that differential renormalization is a useful method we can employ when renormalizing a gauge theory. Although the strengths of the method are well-known (e.g., gauge invariance is not broken, and we stay all the time in four dimensions, which is crucial when studying supersymmetric theories), in its original formulation it has, at least for us, one important drawback: the necessity of imposing the Ward identities explicitly in each calculation with gauge theories. However, this point was solved for the one-loop case by the introduction of Constrained Differential Renormalization. We have shown that we can make fruitful use of the one-loop CDR results in two-loop calculations, due to the fact that CDR fixes all the ambiguities related to the logarithms of the scales at the two-loop order. We have distinguished two cases: in the first one we have diagrams where the divergences are ``nested''. This implies that we can directly apply CDR to the ``inner'' divergence, which straightforwardly fixes all the coefficients of the logarithms of the scales in the total two-loop expression. The second case corresponds to diagrams with overlapping divergences. To deal with them we have obtained a list of renormalized two-loop integrals where in each calculation one-loop CDR rules have been maintained in every step. Although the problem is not solved for the general case, as the list is restricted to two-point functions with at most four derivatives acting on the propagators and two free indices, these integrals are the expressions we have to deal with when we use the background field method. We have also discussed the application of DiffR to IR divergent expressions. Although the renormalization procedure in momentum space resembles the usual one, we have one subtle point: the co-existence in the same expression of UV and IR divergences.
In this case, as both renormalizations should decouple (UV and IR divergences are local in position and momentum space respectively, so that Bogoliubov's UV $R$ and IR $\tilde{R}$ operations commute), the scales related to each type of divergence must be independent. In order to guarantee this, we have found that we have to modify the usual renormalization relations by means of an adjustment of the local terms involving both scales. In this work we have re-obtained two results that were previously derived with usual differential renormalization and Ward identities: the two-loop beta function of QED and that of its supersymmetric extension, SuperQED. In both cases, we have shown that using one-loop CDR simplifies the calculations, as expressions that vanish by symmetry cancel automatically, and we do not have to relate the different scales {\em{via}} Ward identities in the renormalized results. However, we have not only re-obtained previous results, but have also performed two of the relevant calculations that were pending with differential renormalization: the two-loop renormalization of Yang-Mills and of $N=1$ SuperYang-Mills. With the first one we found no difficulty in performing the calculation, as our use of one-loop CDR results allows us to carry out the renormalization with the same ease as with standard dimensional methods. With SuperYang-Mills theory, the use of differential renormalization has one clear advantage over dimensional reduction (which is the regularization method usually employed with supersymmetric theories): in this case we have both UV and IR divergences, and with dimensional reduction they become mixed (both are renormalized with the same infinitesimal dimensional parameter), making it necessary to subtract the IR contribution in the final result. Differential renormalization, however, clearly distinguishes between both divergences, as they are renormalized with different scales. 
This feature allows us to give new insight into the origin (UV or IR) of the higher-loop contributions to the beta function, which has been a controversial point. We have found that higher-order corrections to the beta function come from the one-loop UV scale, which survives in the higher-loop expression through the presence of the IR divergences. Finally, among the different open problems that remain, it is clear that the principal one is the extension of CDR to higher-loop order. To achieve this, we have to obtain a complete set of rules that fixes the local ambiguity of the higher-order expressions, as has been derived for the one-loop case. We think that the results we have found are a step forward in this direction, as the complete CDR renormalized expressions must coincide, at least in the parts corresponding to the logarithms of the scales, with the renormalized results that we have presented here. \chapter{Summary} \section*{Differential renormalization} Differential renormalization \cite{Freedman:1991tk} is a renormalization method in position space that consists in replacing expressions that are too divergent to have a well-defined Fourier transform by derivatives of less singular ones. Thus, for example, $1/x^4$ does not have a well-defined Fourier transform, and differential renormalization proposes to replace it by the solution of the differential equation \begin{eqnarray} \frac{1}{x^4} &=& \Box G (x^2) ~~~ x^2 \neq 0 \;, \end{eqnarray} which is \begin{eqnarray} \frac{1}{x^4} \rightarrow \left[ \frac{1}{x^4} \right]_R = - \frac{1}{4}\Box \frac{ \ln x^2 M^2}{x^2} \end{eqnarray} Note that a constant $M$ with dimensions of mass has been introduced, which parametrizes the local ambiguity. 
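As a consistency check (a sketch, using the Fourier conventions of \cite{Freedman:1991tk}, in which the transform of $1/x^2$ is $4\pi^2/p^2$ and $\bar{M} = 2M/\gamma$ with $\ln \gamma = \gamma_E$), the renormalized expression now has a well-defined Fourier transform: integrating by parts and discarding surface terms, \begin{eqnarray} \int d^4 x \; e^{i p \cdot x} \left[ \frac{1}{x^4} \right]_R &=& \frac{p^2}{4} \int d^4 x \; e^{i p \cdot x} \, \frac{\ln x^2 M^2}{x^2} = - \pi^2 \ln \frac{p^2}{\bar{M}^2} \;, \end{eqnarray} so the amplitude depends on the momentum only through a logarithm whose scale is set by $M$. 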
Since a change in $M$ can be reabsorbed into a rescaling of the coupling constant, this suggests that the renormalized amplitudes satisfy renormalization group equations, with $M$ playing the role of the renormalization group scale. Although only massless theories are treated in this work, differential renormalization can be applied without any problem to massive theories, since masses only alter the long-distance behaviour of the correlators \cite{Freedman:1991tk,Haagensen:1992am}. Differential renormalization can be applied to renormalize diagrams of arbitrary order in perturbation theory. In particular, a systematic implementation of differential renormalization to any order in perturbation theory is presented in \cite{Latorre:1993xh}. In general, when differential renormalization is applied to a higher-order calculation, new scales appear, corresponding to the renormalization of the different subdiagrams that make up the complete diagram. It is also worth pointing out that, by applying differential renormalization in momentum space, IR divergences can be renormalized. Thus, for example, \begin{equation} \left[ \frac{1}{p^4} \right]_{\tilde{R}} = - \frac{1}{4}{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}_p \frac{\ln p^2/\bar{M}_{IR}^2}{p^2} + a_{IR} \delta(p) \; . \end{equation} However, when renormalizing a theory with both IR and UV divergences, one has to take into account that the two renormalizations must be decoupled, which implies that the two scales (IR and UV) have to be independent. In particular, in this work we discuss an IR divergent expression of the form $ \ln p^2 / \bar{M}^2 / p^4$, where $M$ is a UV scale produced by a previous renormalization in position space. 
In this case, the independence of the scales is achieved once we impose the relation \begin{equation} M \frac{\delta}{\delta M} \left[\frac{\ln p^2/\bar{M}^2}{p^4}\right]_{\tilde{R}} = \left[M \frac{\delta}{\delta M} \frac{\ln p^2/\bar{M}^2}{p^4}\right]_{\tilde{R}} \;. \end{equation} This is satisfied by adjusting the local terms in which both scales appear, which leads to the following form for the most general renormalized expression \cite{Mas:2002xh} \begin{equation} \left[\frac{\ln p^2/\bar{M}^2}{p^4}\right]_{\tilde{R}} = -\frac{1}{8} {\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}_p \frac{-\ln^2 p^2/\bar{M}_{IR}^2 + 2 \ln p^2/\bar{M}_{IR}^2 \, (1 + \ln p^2/\bar{M}^2)}{p^2} + (a_{IR} \ln \frac{M_{IR}^2}{M^2} + b_{IR}) \delta(p) \; . \end{equation} One of the most important features of differential renormalization is that gauge invariance is preserved. However, due to the ambiguities generated by the renormalization method, the Ward identities have to be imposed explicitly in each calculation (with a gauge theory), in such a way that the renormalization scheme is fixed. The fact that gauge invariance is preserved is reflected in that it is always possible to satisfy these identities with the renormalized expressions (except, of course, for the anomalies). \subsection*{Constrained differential renormalization} In order to avoid the need to impose the Ward identities explicitly in each calculation, Constrained Differential Renormalization (CDR) was developed \cite{delAguila:1997kw}. This method consists in providing a set of rules that fix {\em{a priori}} all the ambiguity inherent in the procedure, in such a way that the renormalized expressions are directly gauge invariant (it is not necessary to impose the Ward identities). 
The rules imposed by CDR are: \begin{enumerate} \item {\em Differential reduction} \begin{itemize} \item Functions with worse than logarithmic behaviour are reduced to derivatives of (at most) logarithmically divergent functions, without introducing extra dimensionful constants. \item Logarithmically divergent expressions are written as derivatives of regular functions, introducing a single constant $M$, which has dimensions of mass and plays the role of the renormalization group scale. \end{itemize} \item{ \em Formal integration by parts}. The divergent surface terms that appear when integrating by parts are discarded. Related to this, renormalization and differentiation must be commuting operations: if $F$ is an arbitrary function, then $[ \partial F ]_R = \partial [F]_R$. \item {\em Renormalization rule for the delta function} \begin{equation} [ F (x, x_1, \ldots , x_n ) \delta (x-y) ]_R = [ F ( x, x_1, \ldots , x_n)]_R \delta (x-y) \end{equation} \item {\em Validity of the propagator equation} \begin{equation} [F(x,x_1,\ldots,x_n) ( \Box - m^2) \Delta_{m}(x)]_R = - [F(x,x_1,\ldots,x_n) \delta(x)]_R \end{equation} where $\Delta_{m}$ is the propagator of a particle of mass $m$ and $F$ an arbitrary function. \end{enumerate} Applying these rules, we obtain a basic set of renormalized expressions. The renormalization procedure therefore consists of two steps: first, all index contractions are carried out (CDR does not commute with this operation) and the bare expression is written in terms of these basic functions. In a second step, these functions are replaced by their renormalized values. \subsection*{Application of CDR to two-loop calculations} Although CDR has been developed only for one-loop calculations, it provides useful information when we deal with two-loop calculations. 
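For orientation, the simplest of these basic functions is the one-loop bubble (a sketch, using the massless propagator $\Delta(x) = 1/(4\pi^2 x^2)$ and the renormalization of $1/x^4$ quoted above): \begin{eqnarray} \left[ \Delta^2 \right]_R (x) = \frac{1}{(4\pi^2)^2} \left[ \frac{1}{x^4} \right]_R = - \frac{1}{4(4 \pi^2)^2} \Box \frac{\ln x^2 M^2}{x^2} \;. \end{eqnarray} It is such one-loop structures, together with the local terms that accompany them, that CDR fixes once and for all before they are inserted into two-loop diagrams. 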
We will see that applying CDR uniquely fixes the coefficients of all the logarithms of the scales in the renormalized two-loop expression, which are the terms we need in order to evaluate the renormalization group equation. For this reason, when obtaining the renormalized two-loop expressions, we will not keep track of the local terms that may be generated. We distinguish two different situations: diagrams with nested divergences and diagrams with overlapping divergences. \subsubsection*{Nested divergences} In this case, we start by imposing CDR on the subdivergence. In doing so, we fix the one-loop local terms present in the diagram, together with the one-loop logarithms of the scales ($\ln x^2 M^2$). Then, when considering the complete expression of the diagram and applying ordinary differential renormalization, we find that all the coefficients corresponding to logarithms of the scales are uniquely determined, since the one-loop local terms (which get promoted to logarithms) have been fixed by CDR. With this procedure, we carry out the renormalization of several expressions containing the following integral, which is used throughout this work: $I^1 = \int d^4 u \Delta_{xu} \Delta^2_{yu}$ \begin{eqnarray} \left[ \Delta I^1 \right]_R (x) &=& - \frac{1}{32(4 \pi^2)^3} \Box \frac{ \ln^2 x^2 M^{2} + 2 \ln x^2 M^2}{x^2} +~\textrm{(local~terms)} \nonumber \\ \left[ \Delta \partial_{\mu} I^{1} \right]_R (x) &=& - \frac{1}{64 (4 \pi^2)^3} \partial_{\mu} \Box \frac{ \ln^2 x^2 M^2 + \ln x^2 M^2}{x^2} +~\textrm{(local~terms)} \nonumber \\ \left[ \Delta \partial_{\mu} \partial_{\nu} I^1 \right]_R (x) &=& - \frac{1}{96 (4 \pi^2)^3} \left[\partial_{\mu} \partial_{\nu} \Box \frac{ \ln^2 x^2 M^2 + \frac{2}{3} \ln x^2 M^2}{x^2} \right. \nonumber \\ & & - \left. 
\frac{1}{4}\delta_{\mu \nu} \Box \Box \frac{\ln^2 x^2 M^2 + \frac{11}{3} \ln x^2 M^2}{x^2} \right] +~\textrm{(local~terms)} \nonumber \\ \left[ \Delta \Box I^1 \right]_R (x) &=& \frac{1}{32 ( 4 \pi^2)^2} \Box \Box \frac{ \ln x^2 M^2}{x^2} +~\textrm{(local~terms)} \;. \end{eqnarray} \subsubsection*{Overlapping divergences} In the case of overlapping divergences, the situation is more complicated, since it is often difficult to recognize the one-loop expressions to which CDR should first be applied. What we have done, therefore, is to obtain a complete set of renormalized integrals with overlapping divergences, with at most four derivatives acting on the propagators and two free indices. With this list we can obtain the renormalized two-point expression of any theory with derivative couplings, which means that, applying the background field method, it allows us to obtain the beta function. To evaluate the integrals we have basically used two methods: \begin{itemize} \item By means of integral identities we rewrite the integrals in terms of others with a d'Alembertian acting on one of the propagators. This allows the integral to be obtained as a sum of contributions of integrals with nested divergences, to which the discussion above applies. \item We use the decomposition into trace and traceless parts imposed by CDR (in which a fixed local term is added). \end{itemize} This list is written in terms of an expression $H$ that we have defined as \begin{eqnarray} H[{\cal{O}}_1,{\cal{O}}_2 \; ; \; {\cal{O}}_3,{\cal{O}}_4] = \int d^4 u d^4 v \; ( {\cal{O}}_1^{x} \Delta_{xu})( {\cal{O}}_2^{x} \Delta_{xv})( {\cal{O}}_3^{y} \Delta_{yu} ) ({\cal{O}}_4^{y} \Delta_{yv}) \Delta_{uv} \;, \end{eqnarray} where ${\cal{O}}_i$ is a differential operator. 
\begin{eqnarray} H^R[1,1 \; ; \; 1,1] &=& \frac{6 \pi^4 \zeta(3) }{ ( 4 \pi^2)^4} \Delta \equiv a \Delta \\ H^R[\partial_{\mu},1 \; ; \; 1,1] &=& \frac{ 3 \zeta(3)}{16 (4 \pi^2)^2} ( \partial_{\mu} \Delta) \equiv \frac{a}{2} \partial_{\mu} \Delta \\ H^R[1,\partial_{\lambda} \; ; \; 1,\partial_{\lambda}] &=& - \frac{1}{16(4 \pi^2)^3} \Box \frac{\ln z^2 M^2}{z^2} + \ldots \\ \partial_{\lambda}^x H^R[1,\partial_{\mu} \; ; \; 1,\partial_{\lambda}] &=& - \frac{1}{32 (4 \pi^2)^3} \partial_{\mu} \Box \frac{ \frac{1}{2} \ln z^2 M^2}{z^2} + \dots \\ \partial_{\lambda}^x H^R[1,1 \; ; \; \partial_{\lambda} \partial_{\nu},1] &=& \frac {1}{32(4 \pi^2)^3} \partial_{\nu} \Box \frac{ \frac{1}{4} \ln^2 z^2 M^2 + \frac{3}{4} \ln z^2 M^2 }{z^2} + \ldots \\ H^R[1,\partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\mu},1] &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\mu} \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 - \frac{7}{8} \ln z^2 M^2 }{z^2} + \ldots \\ H^R[\partial_{\mu} \partial_{\lambda},\partial_{\lambda} \; ; \; 1,1] &=& \frac{1}{32(4 \pi^2)^3} \partial_{\mu} \Box \frac{ - \frac{1}{2} \ln^2 z^2 M^2 - \ln z^2 M^2}{z^2}+ \ldots \\ \partial_{\lambda}^x H^R[1,\partial_{\mu} \; ; \; \partial_{\nu} \partial_{\lambda},1] &=& \frac{1}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{8} \ln^2 z^2 M^2 + \frac{1}{8} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. + \delta_{\mu \nu} \Box \Box \frac{-\frac{1}{4} \ln z^2 M^2}{z^2} \right] + \ldots \end{eqnarray} \begin{eqnarray} H^R[1,\partial_{\mu} \; ; \; 1,\partial_{\nu}] &=& \frac{1}{32 (4 \pi^2)^3} \delta_{\mu \nu} \Box \frac{- \frac{1}{2} \ln z^2 M^2}{z^2} + \ldots \\ \partial_{\lambda}^x H^R[1,\partial_{\lambda} \; ; \; \partial_{\mu} \partial_{\nu},1] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ - \frac{1}{2} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. 
+ \delta_{\mu \nu} \Box \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} \right] + \ldots \\ \partial_{\lambda}^x H^R[1,\partial_{\lambda} \; ; \; 1, \partial_{\mu} \partial_{\nu}] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{2} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. + \delta_{\mu \nu} \Box \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} \right] + \ldots \\ H^R[1,1 \; ; \; \partial_{\mu} \partial_{\nu},1] &=& \frac{1}{32(4 \pi^2)^3} \delta_{\mu \nu} \Box \frac{ \frac{1}{4} \ln^2 z^2 M^2 + \frac{3}{4} \ln z^2 M^2}{z^2} + \ldots \\ \partial_{\lambda}^x H^R[1,1 \; ; \; \partial_{\lambda} \partial_{\nu},\partial_{\mu}] &=& \frac{1}{32 (4 \pi^2)^3} \delta_{\mu \nu} \Box \Box \frac{ \frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} + \ldots \\ \partial_{\lambda}^x H^R[1,1 \; ; \; \partial_{\mu} \partial_{\nu},\partial_{\lambda}] &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\mu} \partial_{\nu} \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} + \ldots \nonumber \\ H^R[1,\partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda},1] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{6} \ln^2 z^2 M^2 - \frac{5}{36} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. + \delta_{\mu \nu} \Box \Box \frac{ - \frac{1}{24} \ln^2 z^2 M^2 - \frac{29}{72} \ln z^2 M^2}{z^2} \right] + \ldots \\ H^R[1,\partial_{\mu} \partial_{\lambda} \; ; \; 1,\partial_{\nu} \partial_{\lambda}] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{\frac{1}{6} \ln^2 z^2 M^2 + \frac{49}{36} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. 
+ \delta_{\mu \nu} \Box \Box \frac{- \frac{1}{24} \ln^2 z^2 M^2 - \frac{11}{72} \ln z^2 M^2}{z^2} \right] + \ldots \end{eqnarray} \section*{Abelian examples} \subsection*{QED} The two-loop differential renormalization of QED was carried out in \cite{Haagensen:1992vz}, using Ward identities to relate the scales. In this work we redo this calculation using one-loop CDR, which allows us to obtain the QED beta function without the need to use those identities. For the calculations we follow the conventions of \cite{Haagensen:1992vz}, so the action we consider is \begin{eqnarray} {\cal{L}} &=& \frac{1}{4} F^{\mu \nu} F_{\mu \nu} + \bar{\psi} \gamma^{\mu} ( \partial_{\mu} + i e A_{\mu} ) \psi \;, \end{eqnarray} where $\psi$ is the fermion field and $F_{\mu \nu}$ is expressed in terms of the gauge field $A_{\mu}$ as $F_{\mu \nu} (x) = \partial_{\mu} A_{\nu}(x) - \partial_{\nu} A_{\mu} (x)$. Unlike \cite{Haagensen:1992vz}, we perform the calculations with the background field method. In this method, the gauge field is split into two contributions $A_{\mu} \rightarrow A_{\mu} + B_{\mu}$: a quantum one ($A_{\mu}$), which is the integration variable in the partition function and therefore the one whose gauge is fixed, and a background one ($B_{\mu}$), for which explicit gauge invariance is maintained. This has multiple consequences, among which we highlight the fact that the beta function can be obtained from the two-point function alone. 
\subsubsection*{One loop} The bare expression for the one-loop photon self-energy is \begin{eqnarray} \Pi_{\mu \nu}^{(1 \; loop)} &=& - (i e)^2 Tr \left[ \gamma_{\mu} \gamma^{\lambda} \partial_{\lambda}^x \Delta \gamma_{\nu} \gamma^{\sigma} \partial_{\sigma}^y \Delta \right] \nonumber \\ &=& - e^2 Tr\left[ \gamma_{\mu} \gamma^{\lambda} \gamma_{\nu} \gamma^{\sigma} \right] \left( \partial_{\lambda} ( \Delta \partial_{\sigma} \Delta ) - \Delta \partial_{\lambda} \partial_{\sigma} \Delta \right) \;, \end{eqnarray} from which, according to the CDR rules, we obtain the following renormalized value \begin{eqnarray} \Pi_{\mu \nu R}^{(1)} (x) &=& - ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \left[ - \frac{e^2}{12 \pi^2 ( 4 \pi^2 )} \Box \frac{ \ln x^2 M^2}{x^2} - \frac{e^2}{36 \pi^2} \delta (x) \right] \;. \end{eqnarray} We now obtain the electron self-energy, since it appears as an insertion in one of the two-loop diagrams. The expression for this contribution is \begin{eqnarray} \Sigma (x)^{(1)} &=& e^2 \gamma_{\mu} \Delta_{\mu \nu} (x) \gamma^{\lambda} \partial_{\lambda} \Delta (x) \gamma_{\nu} \;, \end{eqnarray} which, using the photon propagator in a general gauge and the basic CDR functions, is renormalized as \begin{eqnarray} \Sigma (x)_R^{(1)} (x) &=& e^2 \gamma^{\lambda} \left[ \frac{1}{4(4 \pi^2)^2} \partial_{\lambda} \Box \frac{\ln x^2 M^2}{x^2} + (\alpha -1) \left( \frac{1}{4(4 \pi^2)^2} \partial_{\lambda} \Box \frac{\ln x^2 M^2}{x^2} + \frac{1}{16 \pi^2} \partial_{\lambda} \delta (x) \right)\right] \;. \nonumber \\ \end{eqnarray} \subsubsection*{Two-loop background photon self-energy} \begin{figure}[ht] \centerline{\epsfbox{QED2loop.eps}} \caption{Two-loop QED diagrams.} \end{figure} First of all, let us point out that the two-loop calculations are performed in the Feynman gauge. 
This is because the choice of this particular gauge does not affect the verification of the two-loop renormalization group equations, as will be seen when those equations are discussed. The bare fermion self-energy in this gauge is therefore $\Sigma^{(1)}(x) = - 2 e^2 \gamma^{\lambda} \Delta \partial_{\lambda} \Delta (x)$. Starting with diagram $(a)$, its bare expression is \begin{eqnarray} \Pi^{(2 \; a)}_{\mu \nu} (x-y) &=& - (ie)^2 \int d^4 u d^4 v \; Tr \left[ \gamma_{\mu} \gamma^{\lambda} (- \partial_{\lambda}^x \Delta_{xu}) \Sigma^{(1)} (u-v) \gamma^{\varepsilon} (- \partial_{\varepsilon}^v \Delta_{vy}) \gamma_{\nu} \gamma^{\sigma} ( - \partial_{\sigma}^y \Delta_{yx} ) \right]. \nonumber \\ \end{eqnarray} To simplify the notation, we define $I^0_{\mu}$ as $I^0_{\mu} = \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} ( \Delta_{uv} \partial_{\mu} \Delta_{uv})$, which allows us to write this contribution as \begin{eqnarray} \Pi^{(2 \; a)}_{\mu \nu} (x) &=& e^4 \left[ -32 (\partial_{\mu} \Delta) \partial_{\lambda} \partial_{\nu} I^0_{\lambda} + 16 \delta_{\mu \nu} ( \partial_{\sigma} \Delta ) \partial_{\lambda} \partial_{\sigma} I^0_{\lambda} + 16 ( \partial_{\mu} \Delta ) \Box I^0_{\nu} - 8 \delta_{\mu \nu} ( \partial_{\rho} \Delta ) \Box I^0_{\rho} \right] \;. \nonumber \\ \end{eqnarray} The renormalization of this expression therefore reduces to the renormalization of $I^0_{\mu}$. It is easy to show that this can be written in terms of the renormalization of the integral $I^1$ defined previously: $\partial_{\mu} I^0_{\mu \; R} = - \frac{1}{2} I^1_R$ and $\Box I^0_{\mu \;R} = - \frac{1}{2} \partial_{\mu} I^1_R$. 
The renormalized contribution of this diagram is then \begin{eqnarray} \Pi^{(2 \; a)}_{\mu \nu \; R} (x)&=& \frac{e^4}{24(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \frac{- \ln^2 x^2 M^2 - \frac{5}{3} \ln x^2 M^2 }{x^2} + \delta_{\mu \nu} \Box \Box \frac{\ln^2 x^2 M^2 + \frac{8}{3} \ln x^2 M^2}{x^2} \right] \nonumber \\ & & +~\textrm{(local~terms)} \;. \nonumber \\ \end{eqnarray} We now turn to the renormalization of diagram $(b)$. In this case, what we have is a diagram with overlapping divergences. The bare expression of this diagram is \begin{eqnarray} \Pi_{\mu \nu}^{(2 \; b)} (x-y) &=& - ( i e )^4 \int d^4 u d^4 v \; Tr \left[ \gamma_{\mu} ( \gamma^{\alpha} \partial^x_{\alpha} \Delta_{xu} ) \gamma^{\rho} ( \gamma^{\beta} \partial_{\beta}^u \Delta_{uy} ) \gamma_{\nu} \nonumber \right. \\ & & \times \left. ( \gamma^{\lambda} \partial_{\lambda}^y \Delta_{yv} ) \gamma_{\rho} ( \gamma^{\sigma} \partial_{\sigma} \Delta_{vx}) \Delta_{uv} \right] \;.\nonumber \\ \end{eqnarray} Using $\gamma$-matrix identities, and integrating by parts the derivatives acting on $\Delta_{xu}$ and $\Delta_{yv}$, we can rewrite this in terms of the $H$ expressions as \begin{eqnarray} \Pi^{(2 \; b)}_{\mu \nu} = e^4 &\left[\right.& - 8 \delta_{\mu \nu} \Box H[ 1 , \partial_{\lambda} \; ; \; \partial_{\lambda}, 1] + 16 \partial_{\mu}^x H[ 1, \partial_{\nu} \; ; \; \Box, 1] - 8 \delta_{\mu \nu} \partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; \Box , 1] \nonumber \\ & & - 16 \partial_{\mu}^x H[ 1, \partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\nu}, 1] + 16 \partial_{\lambda}^x H [ 1, \partial_{\lambda} \; ; \; \partial_{\mu} \partial_{\nu} ,1] - 16 \partial_{\lambda}^x H [ 1, \partial_{\mu} \; ; \; \partial_{\lambda} \partial_{\nu},1] \nonumber \\ & & - 16 \partial_{\mu} H [ 1, \Box \; ; \; \partial_{\nu}, 1] + 8 \delta_{\mu \nu} \partial_{\lambda}^x H[ 1, \Box \; ; \; \partial_{\lambda}, 1] + 16 \partial_{\lambda}^x H[ 1, 
\partial_{\lambda} \partial_{\mu} \; ; \; \partial_{\nu},1] \nonumber \\ & & - 16 \partial_{\lambda}^x H[ 1, \partial_{\mu} \partial_{\nu} \; ; \; \partial_{\lambda}, 1] + 16 \partial_{\nu}^x H[ 1, \partial_{\lambda} \partial_{\mu} \; ; \; \partial_{\lambda}, 1] - 16 H[ 1, \Box \; ; \; \partial_{\mu} \partial_{\nu},1] \nonumber \\ & & \left. + 8 \delta_{\mu \nu} H[ 1, \Box \; ; \; \Box ,1 ] + 32 H[ 1 , \partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda}, 1] - 16 H[ 1 , \partial_{\mu} \partial_{\nu} \; ; \; \Box , 1] \; \right]. \end{eqnarray} The contributions containing a d'Alembertian can be rewritten in terms of the integral $I^1$, so their renormalization is immediate. The rest are contained in the list of integrals with overlapping divergences (or can easily be expressed in terms of those integrals), so we only have to substitute the corresponding renormalized value. Therefore, the renormalized contribution of diagram $(b)$ is \begin{eqnarray} \Pi^{(2 \; b)}_{\mu \nu R} (x) &=& \frac{e^4}{12 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \ln^2 x^2 M^2 + \frac{14}{3} \ln x^2 M^2}{x^2} - \delta_{\mu \nu} \Box \Box \frac{ \ln^2 x^2 M^2 + \frac{17}{3} \ln x^2 M^2}{x^2} \right] \nonumber \\ & & +~\textrm{(local~terms)} \;. \end{eqnarray} The total renormalized two-loop contribution to the photon self-energy is then \begin{eqnarray} \Pi_{\mu \nu \; R}^{(2)} (x) &=& 2 \Pi_{\mu \nu}^{(2 \; a)} (x) + \Pi_{\mu \nu}^{(2 \; b)} (x) \nonumber \\ &=& \frac{e^4}{4(4 \pi^2)^3} ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \Box \frac{ \ln x^2 M^2}{x^2} + \ldots \end{eqnarray} where $\ldots$ stands for the local terms we are not keeping track of. 
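Before writing down the renormalization group equation, it is instructive to see how the scale dependence of these logarithms turns into local terms (a sketch; we use the identity $\Box (1/x^2) = - 4 \pi^2 \delta(x)$ and assume the tree-level term carries the $1/e^2$ normalization implied by the rescaling $B_{\mu} = B^{\prime}_{\mu}/e$ used below): \begin{eqnarray} M \frac{\partial}{\partial M} \, \Box \frac{\ln x^2 M^2}{x^2} &=& 2 \, \Box \frac{1}{x^2} = - 8 \pi^2 \, \delta(x) \;, \end{eqnarray} so that, from the one-loop self-energy, $M \frac{\partial}{\partial M} \Pi^{(1)}_{\mu \nu \; R} = - \frac{e^2}{6 \pi^2} ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \delta(x)$, and demanding that this be compensated by $\beta(e) \partial / \partial e$ acting on the $e^{-2}$ tree-level term reproduces the one-loop coefficient $\beta(e) = \frac{1}{3(4 \pi^2)} e^3 + {\cal{O}}(e^5)$. 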
\subsubsection*{Renormalization group equation} Studying the one-loop renormalization group equation for the two-point function of the quantum fields, we obtain the function corresponding to the variation of the gauge parameter in that equation, $\gamma_{\alpha}(e) \partial / \partial \alpha$. This function has an expansion of the form $\gamma_{\alpha}(e) = - \frac{2 \alpha}{3 (4 \pi^2)} e^2 + {\cal{O}}(e^3) $. This justifies carrying out the two-loop calculation in the Feynman gauge, since $\gamma_{\alpha}(e) \partial / \partial \alpha$ acting on any two-loop diagram does not affect the verification of the renormalization group equations (it is of higher order in $e$, given that the first dependence of the background photon self-energy on $\alpha$ appears at two loops). As for the background fields, if we define $B_{\mu} = \frac{1}{e} B_{\mu}^{\prime}$, the anomalous dimension of this new field vanishes, since the renormalizations of $B_{\mu}$ and of the coupling constant satisfy the relation $Z_{e} \sqrt{Z_B} = 1$. Therefore, the renormalization group equation satisfied by these fields is \begin{eqnarray} \left( M \frac{\partial}{\partial M} + \beta(e) \frac{\partial}{\partial e} \right) \Gamma_{\mu \nu}^{B B \; (2)} = 0 \;, \end{eqnarray} with \begin{eqnarray} \Gamma_{\mu \nu}^{B B}(x-y) &=& \left(\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box \right) \delta^{(4)}(x-y) - \Pi_{\mu \nu} (x-y) \;. \end{eqnarray} With the renormalized contributions we have for $\Pi_{\mu \nu}$, we obtain the two-loop QED beta function \begin{eqnarray} \beta (e) &=& \frac{1}{3(4 \pi^2)} e^3 + \frac{1}{4 (4 \pi^2)^2} e^5 + {\cal{O}}(e^7) \;. \end{eqnarray} \subsection*{SuperQED} We now turn to the renormalization of the supersymmetric extension of the previous case, SuperQED. 
Here we apply the superspace conventions defined in \cite{Gates:1983nr}, with which the SuperQED action is \begin{eqnarray} S &=& \int d^4 x d^2 \theta \; W^2 + \int d^4 x d^4 \theta \; \bar{\Phi}_{+} e^{gV} \Phi_{+} + \int d^4 x d^4 \theta \; \bar{\Phi}_{-} e^{-gV} \Phi_{-} \;, \end{eqnarray} where $W_{\alpha}$ is a chiral superfield, which is expressed in terms of the real gauge superfield $V$ and covariant superderivatives $D_{\alpha}$ as $W_{\alpha} = i \bar{D}^2 D_{\alpha} V$. $\Phi_{\pm}$ are chiral matter superfields. It is also important to note that perturbation theory can be defined in superspace: in this case we have diagrams defined in terms of superpropagators $P_{ij} = \Delta_{ij} \delta_{ij}$, where $\Delta_{ij}$ is the usual propagator and $\delta_{ij}$ is the delta function of Grassmann variables. As in the QED case, we perform the calculations in the Feynman gauge (its use will be justified later) and with the background field method. The gauge superfield $V$ is therefore split into two contributions $V \rightarrow V + B$: $V$ is the quantum gauge superfield and $B$ the background one. 
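The covariant derivative algebra invoked below rests on two standard superspace identities (a sketch, in the conventions of \cite{Gates:1983nr}; convention-dependent factors aside): \begin{eqnarray} D^2 \bar{D}^2 D^2 = \Box D^2 \;, \qquad \delta^4(\theta_{12}) \, D^2 \bar{D}^2 \, \delta^4(\theta_{12}) = \delta^4(\theta_{12}) \;, \end{eqnarray} with $\delta^4(\theta_{12}) = \delta^4(\theta_1 - \theta_2)$. The second identity is what reduces each supergraph to an integral over a single $d^4 \theta$, leaving ordinary propagators $\Delta_{ij}$ to be renormalized as before. 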
\subsubsection*{One-loop two-point function of the $B$ field} \begin{figure}[ht] \centerline{\epsfbox{SQED1loop.eps}} \caption{One-loop diagram in SuperQED.} \end{figure} In this case, the bare expression is \begin{eqnarray} \Gamma^{(1)}_{+} &=& \frac{g^2}{2} \int d^8 z_1 d^8 z_2 \; B(z_1) B(z_2) \left[ D^2_1 P_{12} \stackrel{\leftarrow}{D^2}_2 \right] \left[ D^2_2 P_{12} \stackrel{\leftarrow}{D^2}_1 \right] \end{eqnarray} which, using the covariant derivative algebra, can be rewritten as \begin{eqnarray} \Gamma^{(1)}_{+} &=& \frac{g^2}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^2 D^2 B(y, \theta) \right] \Delta^2_{xy} \nonumber \\ & & + \frac{g^2}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \Delta_{xy} \left( \Box \Delta_{xy} \right) \nonumber \\ & & - \frac{i g^2}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(y, \theta) \right] \Delta_{xy} \partial_{\alpha \dot{\alpha}}^y \Delta_{xy} \;. \end{eqnarray} Applying CDR, we obtain the following renormalized value \begin{eqnarray} \Gamma^{(1)}_{+ \;R} &=& - \frac{g^2}{16(4 \pi^2)^2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} \;. 
\nonumber \\ \end{eqnarray} \subsubsection*{Two-loop two-point function of the $B$ field} \begin{figure}[ht] \centerline{\epsfbox{SQED2loop.eps}} \caption{Two-loop SuperQED diagrams.} \label{Resumen_SQED} \end{figure} Omitting the bare expression of each diagram in terms of the superpropagator $P_{ij}$, after applying the covariant superderivative algebra we have the following unrenormalized contributions \begin{eqnarray} \Gamma^{(2a)}_{+} &=& \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \left[ \Box ( \Delta I^{1} ) - 2 \Delta^3 - \partial^{\alpha \dot{\alpha}} ( \Delta \partial_{\alpha \dot{\alpha}} I^{1} ) \right] (x-y) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \left[ \Delta I^1 \right] (x-y) \nonumber \\ \Gamma^{(2b)}_{+} &=& g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y,\theta) \left[ - \Box ( \Delta I^1) + 2 \Delta^3 + \partial^{\alpha \dot{\alpha}} \left( \Delta \partial_{\alpha \dot{\alpha}} I^1 \right) \right] (x-y) \nonumber \\ & & - g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \left[ \Delta I^1 \right] (x-y) \nonumber \\ \Gamma^{(2c)}_{+} &=& \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \left[ \Delta I^1 \right](x-y) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \left[ \Box ( \Delta I^1 ) - \Delta^3 - \partial^{\alpha \dot{\alpha}} ( \Delta \partial_{\alpha \dot{\alpha}} I^1 ) \right] (x-y) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta B(x, \theta) \left[D^{\beta} \bar{D}^2 D^{\alpha} B(y, \theta) \right] C^{\dot{\beta} \dot{\alpha}} H[\partial_{\beta \dot{\beta}},1 \; ; 1, \partial_{\alpha \dot{\alpha}}] \nonumber \\ \Gamma^{(2d)}_{+} &=& - \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) 
\Delta^3_{xy} \;. \end{eqnarray} Before obtaining the renormalized expressions, we can sum all the bare contributions that make up the two-loop expression, since with RDR every structure is always renormalized with the same scales. In \cite{Song}, where the differential renormalization of SuperQED at two loops was obtained, this simplified calculation could not be performed: each structure had to be renormalized with its own scale, and the scales were related at the end through the Ward identities. The final renormalized expression we find is \begin{eqnarray} \Gamma^{2}_R &=& \left. 2 \left( \Gamma_{+}^{(2a)} + \Gamma_{+}^{(2b)} + \Gamma_{+}^{(2c)} + \Gamma_{+}^{(2d)} \right) \right|_R \nonumber \\ &=& g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\beta} \bar{D}^2 D^{\alpha} B(y, \theta) \right] C^{\dot{\beta} \dot{\alpha}} H^R [\partial_{\beta \dot{\beta}},1 \; ; \; 1, \partial_{\alpha \dot{\alpha}} ] \nonumber \\ &=& - \frac{g^4}{16 (4 \pi^2)^3} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} + \ldots \nonumber \\ \end{eqnarray} \subsubsection*{Renormalization group equation} As in QED, evaluating the one-loop renormalization group equation for the quantum fields, we find that the function associated with the variation of the gauge parameter is of order $g^2$. This, together with the fact that neither the tree level nor the one-loop correction to the self-energy of the $B$ field depends on the gauge parameter, justifies having performed the calculation in the Feynman gauge. 
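Explicitly, the background renormalization group equation we use here has the same form as in the other examples considered in this work, \begin{eqnarray} \left. \left[ M \frac{\partial}{\partial M} + \beta(g) \frac{\partial}{\partial g} + \gamma_{\xi} (g) \frac{\partial}{\partial \xi} \right] \Gamma (x) \right|_{\xi = 0} = 0 \;, \end{eqnarray} evaluated on the renormalized background two-point function.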
Using the renormalization group equation for background fields, we obtain the two-loop beta function as \begin{eqnarray} \beta(g_{SQED}) &=& \frac{1}{8 \pi^2} g^3_{SQED} + \frac{1}{2 (4 \pi^2)^2} g^5_{SQED} + {\cal{O}}(g^7_{SQED}) \;, \end{eqnarray} where we use the usual normalization of the coupling constant, $g = \sqrt{2} g_{SQED}$. These values agree with previous results found in the literature \cite{Vainshtein:1986ja,Shifman:1985fi,Novikov:1985rd}. \section*{Non-abelian examples} \subsection*{Yang-Mills} The Lagrangian of this theory is \begin{eqnarray} \cal{L} &=& \frac{1}{4} F^a_{\mu \nu} F^a_{\mu \nu} + \frac{1}{2 \alpha} ( \partial_{\mu} A_{\mu} )^a ( \partial_{\nu} A_{\nu} )^a + ( \partial_{\mu} \bar{\eta})^a ( {D}_{\mu} \eta )^a \;, \end{eqnarray} where $A_{\mu}^a$ is the gauge field, $\eta$ and $\bar{\eta}$ are the Faddeev-Popov ghosts, $F_{\mu \nu}^a = \partial_{\mu} A_{\nu}^a - \partial_{\nu} A_{\mu}^a + g f^{abc} A_{\mu}^b A_{\nu}^c$ and $f^{abc}$ are the structure constants of the Lie algebra associated with the symmetry group. As in the previous abelian examples, we perform the calculations in the Feynman gauge and with the background field method, obtaining the renormalized two-point function of the background field $B_{\mu}^a$. 
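For reference, and assuming the conventions of the standard differential renormalization literature (this normalization is not fixed in the text above, but it is consistent with the renormalized coefficients quoted below), the massless propagator and the basic renormalization identity used throughout are \begin{eqnarray} \Delta(x) \;=\; \frac{1}{4 \pi^2} \frac{1}{x^2} ~~~,~~~ \left[ \frac{1}{x^4} \right]_R \;=\; - \frac{1}{4} \Box \frac{ \ln x^2 M^2}{x^2} \;, \end{eqnarray} so that, for instance, $\left[ \Delta^2 \right]_R = - \frac{1}{4 (4 \pi^2)^2} \Box \frac{\ln x^2 M^2}{x^2}$ up to local terms.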
\subsubsection*{One loop} \begin{figure} \centerline{\epsfbox{YM1loop.eps}} \caption{One-loop Yang-Mills diagrams.} \end{figure} Studying the self-energy of the background field in the Feynman gauge, we obtain the following bare expression (the sum of the gluon and ghost loop contributions) \begin{eqnarray} <B_{\mu}^a(x) B_{\nu}^b(y)> &=& g^2 C_A \delta^{ab} \left[ 4 \partial_{\mu} \partial_{\nu} \Delta^2 - 4 \delta_{\mu \nu} \Box \Delta^2 + 2 \partial_{\mu} ( \Delta \partial_{\nu} \Delta ) - 4 \Delta \partial_{\mu} \partial_{\nu} \Delta \right] \;, \nonumber \\ \end{eqnarray} which renormalizes as \begin{eqnarray} <B_{\mu}^a(x) B_{\nu}^b(0)>_R &=& g^2 C_A \delta^{ab} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \left[ - \frac{11}{48 \pi^2 (4 \pi^2)} \Box \frac{\ln x^2 M^2}{x^2} - \frac{1}{72 \pi^2} \delta (x) \right] \;. \nonumber \\ \end{eqnarray} We also consider the two-point function of the quantum gauge field, since it will appear as an insertion in one of the two-loop diagrams. The total unrenormalized value of this function is \begin{eqnarray} <A_{\mu}^a (x) A_{\nu}^b (y) > &=& g^2 C_A \delta^{ab} \left[ \partial_{\mu} \partial_{\nu} \Delta^2 - \delta_{\mu \nu} \Box \Delta^2 + 4 \partial_{\mu} ( \Delta \partial_{\nu} \Delta ) - 2 \delta_{\mu \nu} \partial^{\lambda} ( \Delta \partial_{\lambda} \Delta ) \right. \nonumber \\ & &- \left. 4 \Delta \partial_{\mu} \partial_{\nu} \Delta - \delta_{\mu \nu} \Delta ( \Box \Delta ) \right] \;, \end{eqnarray} and applying RDR \begin{eqnarray} <A_{\mu}^a (x) A_{\nu}^b (0) >_R &=& g^2 C_A \delta^{ab} \left[ \frac{5}{3}( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu } \Box) \Delta^2_R (x) - \frac{1}{72 \pi^2} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \delta (x) \right] \nonumber \\ &=& - \frac{g^2 C_A \delta^{ab}}{144 \pi^2} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \left[ \frac{15}{4 \pi^2} \Box \frac{ \ln x^2 M^2}{x^2} + 2 \delta (x) \right] \;. 
\end{eqnarray} \subsubsection*{Effective action in a generic gauge} Unlike the abelian examples, restricting ourselves to the Feynman gauge is not innocuous in this case. To include the variations of the gauge parameter in the renormalization group equation, we obtain the dependence linear in $\xi$ (the gauge parameter, defined in terms of the usual one by $\frac{1}{\alpha} = 1 + \xi$) of the background effective action expanded to second order in the fields $B_{\mu}^a$. To obtain this effective action, we consider the generator of connected Green functions $W$ \begin{eqnarray} W &=& - \frac{1}{2} tr \ln \left[ \delta_{\mu \nu} \Box^{ab} - 2 g f^{cab} B_{\mu \nu}^c + \xi ({\bf{D}}_{\mu} {\bf{D}}_{\nu})^{ab} \right] \;, \end{eqnarray} with ${\bf{D}}_{\mu}^{ac} = \partial_{\mu} \delta^{ac} + g f^{abc} B_{\mu}^b $ and ${\Box}^{ab} = ( {\bf{D}}^{\mu} {\bf{D}}_{\mu})^{ab}$. To first order in $\xi$ and second order in $B_{\mu}^a$, this can be rewritten as \begin{eqnarray} W &=& \xi C_A g^2 tr \left[ \frac{1}{2} \Delta B_{\mu \nu}^a \Delta B_{\mu \nu}^a - 2 \Delta B_{\mu \nu}^a \Delta B_{\nu \lambda}^a \Delta \partial_{\lambda} \partial_{\mu} \right] \;. \end{eqnarray} Renormalizing this expression, the effective action is easily obtained as \begin{eqnarray} \Gamma_\xi = - \frac{\xi C_A g^2}{4(4 \pi^2)} \int d^4 x d^4 y \; B_{\mu}^a (x) B_{\nu}^a (y) (\partial^x_{\mu} \partial^x_{\nu} - \delta_{\mu \nu} \Box) ( \Box \Delta(x-y) ) \;. \end{eqnarray} \subsubsection*{Renormalization of the background propagator at two loops} \begin{figure}[ht] \centerline{\epsfbox{YM2loop_1.eps}} \caption{Two-loop Yang-Mills diagrams (a)-(e).} \end{figure} \begin{figure}[ht] \centerline{\epsfbox{YM2loop_2.eps}} \caption{Two-loop Yang-Mills diagrams (f)-(k).} \end{figure} Diagrams $(a)$-$(h)$ contain nested divergences, while $(i)$, $(j)$ and $(k)$ correspond to expressions with overlapping divergences. 
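For the renormalization group analysis, the only property of the renormalized expressions that matters is their dependence on the scale $M$. As a reminder (in the conventions assumed above, where $\Box \frac{1}{x^2} = - 4 \pi^2 \delta^{(4)}(x)$ in Euclidean space), \begin{eqnarray} M \frac{\partial}{\partial M} \, \Box \frac{ \ln x^2 M^2}{x^2} \;=\; 2 \, \Box \frac{1}{x^2} \;=\; - 8 \pi^2 \, \delta^{(4)}(x) ~~~,~~~ M \frac{\partial}{\partial M} \, \Box \frac{ \ln^2 x^2 M^2}{x^2} \;=\; 4 \, \Box \frac{ \ln x^2 M^2}{x^2} \;, \end{eqnarray} so each power of $\ln x^2 M^2$ lowers by one under $M \partial / \partial M$, which is what converts the renormalized two-point functions below into beta function coefficients.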
Specifically, evaluating diagram $(a)$ we have the following bare expression \begin{eqnarray} < B_{\mu}^a (x) B_{\nu}^b (y) >_a &=& - 2 g^4 f^{aec} f^{bcd} f^{gdf} f^{gfe} \int d^4 u d^4 v \; \Delta_{xy} ( \stackrel{\leftarrow}{\partial_{\mu}^x} - \partial_{\mu}^x) ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y}) \Delta_{yv} \nonumber \\ & & \times ( \partial_{\lambda}^v \Delta_{uv}) \Delta_{uv} ( \partial_{\lambda}^u \Delta_{xu}) \;, \nonumber \\ \end{eqnarray} which can be rewritten in terms of the previously defined integral $I^1$ as \begin{eqnarray} < B_{\mu}^a (x) B_{\nu}^b (y) >_a &=& - g^4 C_A^2 \delta^{ab} \left[ 4 \partial_{\nu} ( \Delta \partial_{\mu} I^1 ) - \partial_{\mu} \partial_{\nu} ( \Delta I^1 ) - 4 \Delta \partial_{\mu} \partial_{\nu} I^1 \right] \;. \end{eqnarray} With the results shown for $I^1$, this can be renormalized immediately and written as \begin{eqnarray} <B_{\mu}^a(x) B_{\nu}^b (0) >_{a\; R} &=& \frac{g^4 C_A^2 \delta^{ab}}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ - \frac{1}{3} \ln^2 x^2 M^2 - \frac{8}{9} \ln x^2 M^2}{x^2} \right. \nonumber \\ & & \left. + \delta_{\mu \nu} \Box \Box \frac{\frac{1}{3} \ln^2 x^2 M^2 + \frac{11}{9} \ln x^2 M^2}{x^2} \right] + \textrm{(local terms)} \;. \nonumber \\ \end{eqnarray} Taking now $(i)$ as an example of overlapping integrals, this contribution can be written in terms of the $H$ integrals as \begin{eqnarray} < B_{\mu}^a (x) B_{\nu}^b (y) >_i = - \frac{1}{2} g^4 C_A^2 \delta^{ab} &\left[ \right.& \partial_{\mu}^x \partial_{\nu}^y H[1, \partial_{\lambda} \; ; \; \partial_{\lambda},1] - 2 \partial_{\mu}^x H[1, \partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\nu} , 1] \nonumber \\ & & \left. - 2 \partial_{\nu}^y H[1, \partial_{\lambda} \partial_{\mu} \; ; \; \partial_{\lambda},1] + 4 H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda} , 1] \; \right] \;. 
\nonumber \\ \end{eqnarray} Once we have this, using the list of renormalized $H$ expressions, we immediately arrive at the following renormalized result \begin{eqnarray} < B_{\mu}^a (x) B_{\nu}^b (0) >_{i \; R} &=& \frac{g^4 C_A^2 \delta^{a b}}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ - \frac{1}{12} \ln^2 x^2 M^2 - \frac{17}{36} \ln x^2 M^2 }{x^2} \right. \nonumber \\ & &+ \left. \delta_{\mu \nu} \Box \Box \frac{ \frac{1}{12} \ln^2 x^2 M^2 + \frac{29}{36} \ln x^2 M^2}{x^2} \right] +~\textrm{(local terms)} \;. \nonumber \\ \end{eqnarray} Proceeding in a similar way with the remaining diagrams, we obtain the renormalization of all the contributions. Summing all the results, the renormalized value of the two-point function of the background field is \begin{eqnarray} < B_{\mu}^a (x) B_{\nu}^b (0) >_R &=& - \frac{g^4 C_A^2 \delta^{ab}}{2(4 \pi^2)^3} ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \Box \frac{ \ln x^2 M^2}{x^2} +~\textrm{(local terms)} \;. \nonumber \\ \end{eqnarray} \subsubsection*{Renormalization group equation} Evaluating the one-loop renormalization group equation for the two-point function of the quantum field, we obtain the value of the coefficient that accounts for the variations of the gauge parameter. This value is \begin{eqnarray} \gamma_{\xi} &=& - \frac{5 C_A}{24 \pi^2} g^2 + \cdots \end{eqnarray} Then, using $\gamma_{\xi}$, the one-loop background effective action in a generic gauge and the result for the renormalized self-energy of the field $B_{\mu}^a$ at one and two loops, we obtain from the background renormalization group equation the beta function as \begin{eqnarray} \beta (g) &=& \beta_1 g^3 + \beta_2 g^5 + {\cal{O}}(g^7) \nonumber \\ \beta_1 &=& - \frac{11 C_A}{48 \pi^2} \nonumber \\ \beta_2 &=& - \frac{17 C^2_A}{24 (4 \pi^2)^2} \;. 
\end{eqnarray} \section*{Super Yang-Mills} We now study the supersymmetric version of the previous model, Super Yang-Mills. As with SuperQED, we apply the conventions of \cite{Gates:1983nr}. In this case, the splitting of the gauge field into quantum and background parts is non-linear, $e^{g V_{(split)}} = e^{\boldsymbol{\Omega}} e^{g V} e^{\bar{\boldsymbol{\Omega}}}$, with $V$ the quantum gauge field and $\boldsymbol{\Omega}$ the background prepotential. The gauge covariant derivatives are therefore written in a quantum chiral and background vector representation, so the split action takes the form \begin{eqnarray} S &=& - \frac{1}{2 g^2} tr \int d^4 x d^4 \theta \; ( e^{-g V} \boldsymbol{\nabla}^{\alpha} e^{g V}) \bar{\boldsymbol{\nabla}}^2 ( e^{- g V} \boldsymbol{\nabla}_{\alpha} e^{g V} ) \;, \end{eqnarray} where $\boldsymbol{\nabla}_{\alpha}$ is the background covariant derivative. This implies that the part of the gauge-fixed action quadratic in $V$ (from which the quantum propagator is derived) depends on the background fields as \begin{eqnarray} & & - \frac{1}{2} tr \int d^4 x d^4 \theta V \left[ {\boldsymbol{\Box}} - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \right] V \nonumber ~~~,~~~ \boldsymbol{\Box} = \frac{1}{2} \boldsymbol{\nabla}^{\alpha \dot{\alpha}} \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \;, \end{eqnarray} where we denote the kinetic operator by $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} = \boldsymbol{\Box} - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}}$. 
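Schematically (a sketch of the expansion used at two loops, in the notation just introduced), the inverse of the kinetic operator is the Neumann series \begin{eqnarray} \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1} &=& \Box^{-1} - \Box^{-1} \left[ ( \boldsymbol{\Box} - \Box ) - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \right] \Box^{-1} + \ldots \;, \end{eqnarray} where the bracket collects all the background-dependent pieces; truncating at second order in the background fields generates the two-loop background contributions considered below.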
Another consequence of the background field method in superspace is the appearance of the Nielsen-Kallosh ghosts, which correspond to the normalization of the function used to average over the gauge parameters in the standard functional quantization procedure. Note that, since these ghosts enter the action quadratically and interact only with the background field $B$ ($\boldsymbol{\Omega} = \bar{\boldsymbol{\Omega}} = \frac{1}{2} B$), they contribute only at first order in perturbation theory. We also note that the two-loop calculations are performed with covariant supergraphs. Essentially, we carry out the calculations without extracting the spinor connection from the covariant derivative, by means of the covariant-derivative algebra. We therefore have fewer graphs, and more convergent ones. \subsubsection*{One loop} Only the ghosts (both Faddeev-Popov and Nielsen-Kallosh) contribute to the one-loop background self-energy, the bare expression being \begin{eqnarray} \Gamma^{(1)} &=& - \frac{3 C_A}{2} \int d^4 x d^4 y d^4 \theta \; B^a(x, \theta) \left[ \bar{D}^2 D^2 B^a (y, \theta) \right] \Delta^2_{xy} \nonumber \\ & & - \frac{3 C_A}{2} \int d^4 x d^4 y d^4 \theta \; B^a(x, \theta) B^a (y, \theta) \Delta_{xy} \Box \Delta_{xy} \nonumber \\ & & + \frac{i 3 C_A}{2} \int d^4 x d^4 y d^4 \theta \; B^a(x, \theta) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B^a (y, \theta) \right] \Delta_{xy} \partial_{\alpha \dot{\alpha}}^y \Delta_{xy} \;, \end{eqnarray} which renormalizes according to the RDR rules as \begin{eqnarray} \Gamma^{(1)} &=& \frac{ 3 C_A}{16 (4 \pi^2)^2} \int d^4 x d^4 y d^4 \theta \; B^a(x,\theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B^a (y, \theta) \right] \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} \;. 
\nonumber \\ \end{eqnarray} As for the two-point function of the quantum gauge fields, the contributions correspond to the following diagrams: \begin{figure}[ht] \centerline{\epsfbox{SYM1loop_quantum.eps}} \caption{One-loop contributions to the quantum two-point function.} \end{figure} The final renormalized contribution in this case is \begin{eqnarray} \Gamma^{(1)}_V &=& - \frac{3 g^2 C_A}{16 (4 \pi^2)^2} \int d^4 x d^4 y d^4 \theta \; V^a(x, \theta) \Box \Pi_{1/2} V^{a} (y, \theta) \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} \;. \end{eqnarray} \subsubsection*{Action in a generic gauge} As in the Yang-Mills case, we have to evaluate the background effective action in a generic gauge in order to account for the variation term of the gauge parameter $\xi$ (redefined from the usual one as $\xi + 1 = \frac{1}{\alpha}$) in the renormalization group equation. We will obtain the term linear in $\xi$ in the contribution of second order in background fields to the expansion of the effective action. 
To this end, from a functional calculation, we write this action as \begin{eqnarray} \Gamma_{eff} &=& - \frac{1}{2} tr \ln \left[ \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} + \xi \left( \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 + \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2 \right) \right] + tr \ln \left[ \Box_{-} + \xi \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \right] \nonumber \\ &=& - \frac{1}{2} tr \ln \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} + tr \ln \Box_{-} + \Gamma_{\xi} \;, \end{eqnarray} where \begin{eqnarray} \Box_{+} &=& \Box - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - \frac{i}{2} ( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} ) \nonumber \\ \Box_{-} &=& \Box - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} - \frac{i}{2} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}) \;. \end{eqnarray} Considering the inverses of these operators, and keeping terms up to second order in background fields, we obtain the renormalized value \begin{eqnarray} \Gamma_{\xi} &=& - \frac{\xi}{16 (4 \pi^2)^2} tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha}(x, \theta) \boldsymbol{W}_{\alpha} (y, \theta) \Box \frac{ \ln (x-y)^2 M^2_{IR}}{(x-y)^2} + {\cal{O}}(\xi^2 ; B^3) \;. \nonumber \\ \end{eqnarray} \subsubsection*{Two loops} To carry out the two-loop renormalization with covariant supergraphs, we simply have to consider the following vacuum diagram: \begin{figure}[h] \centerline{\epsfbox{SYM2loop_vacuum.eps}} \caption{Two-loop contribution to the background effective action.} \end{figure} In this diagram, the gauge propagators $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1}$ depend on the background fields, so we have to expand them and keep the second order in $B$. This expansion yields two types of contributions. 
Some contain explicit $\boldsymbol{W}_{\alpha}$ fields, while others are built from the background space-time connections $\boldsymbol{\Gamma}_{\alpha \dot{\alpha}}$. We thus have the following renormalized results for the contributions with $\boldsymbol{W}_{\alpha}$ fields \begin{eqnarray} \sum^{2}_{i=1}\Gamma^{(2)}_i |_R &=& - \frac{3 i g^2 C_A^2}{2} tr \int d^4 x d^4 y d^4 \theta \; \left[ \boldsymbol{W}^{\alpha}(x,\theta) \partial_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}}(y,\theta) + \bar{\boldsymbol{W}}^{\dot{\alpha}}(x,\theta) \partial_{\alpha \dot{\alpha}}^y \boldsymbol{W}^{\alpha}(y,\theta) \right] \nonumber \\ & & \times \left[ \Delta I^0 \right]_R (x-y) \nonumber \\ & & + \frac{3 i g^2 C_A^2}{16 (4 \pi^2)^3} tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha} (x, \theta) \bar{\boldsymbol{W}}^{\dot{\alpha}}(y, \theta) \partial_{\alpha \dot{\alpha}}^x \frac{\ln (x-y)^2 M^2}{(x-y)^2} + {\cal{O}}(B^3) \;, \nonumber \\ \end{eqnarray} which, by means of a Bianchi identity, can be written in a gauge-invariant form as \begin{eqnarray} \sum^{2}_{i=1}\Gamma^{(2)}_i |_R &=& 3 g^2 C_A^2 tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha} (x, \theta) \boldsymbol{W}_{\alpha} (y,\theta) \Box \left[ \Delta I^0 \right]_R (x-y) \nonumber \\ & & - \frac{3 g^2 C_A^2}{16(4 \pi^2)^3} tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha}(x,\theta) \boldsymbol{W}_{\alpha}(y, \theta) \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} + {\cal{O}}(B^3) \;. 
\nonumber \\ \end{eqnarray} On the other hand, the sum of the diagrams with space-time connections is \begin{eqnarray} \sum^{5}_{i=3} \Gamma^{(2)}_i |_R &=& - 3 g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} (x, \theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}}(y,\theta) \left( \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x - (2 C_{\alpha \beta} C_{\dot{\alpha} \dot{\beta}}) \Box \right) \nonumber \\ & & \times \left[ \frac{1}{4} [ \Delta I^0 ]_R (x-y) - \frac{1}{32(4 \pi^2)^3} \frac{\ln (x-y)^2 M^2}{(x-y)^2} \right] + {\cal{O}}(B^3) \;, \end{eqnarray} which, being a transverse expression, can be written in terms of $\boldsymbol{W}_{\alpha}$ as \begin{eqnarray} & & tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} (x, \theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}} (y, \theta) \left( \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x - 2 C_{\alpha \beta} C_{\dot{\alpha} \dot{\beta}} \Box \right) f(x-y) \nonumber \\ & & = - 3 tr \int d^4 x d^4 y d^4 \theta \; \left[ D^{\alpha} B(x, \theta) \right] \left[ \bar{D}^2 D_{\alpha} B(y, \theta) \right] \Box f(x-y) + {\cal{O}}(B^3) \nonumber \\ & & = 3 tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha} (x, \theta) \boldsymbol{W}_{\alpha} (y, \theta) \Box f(x-y) + {\cal{O}}(B^3) \;. \end{eqnarray} Therefore, the total renormalized expression at two loops and second order in the background fields is \begin{eqnarray} \frac{1}{2}\sum^{5}_{i=1} \Gamma^{(2)}_i &=& tr \int d^4 x d^4 y d^2 \theta \boldsymbol{W}^{\alpha} (x, \theta) \boldsymbol{W}_{\alpha} (y, \theta) \Gamma^{(2)}(x-y) \;, \end{eqnarray} with \begin{eqnarray} \Gamma^{(2)}(x) &=& \frac{3 g^2 C_A^2}{64(4 \pi^2)^3} \Box \frac{\frac{1}{4} \ln^2 x^2 M^2_{IR} + \frac{1}{2} \ln x^2 M^2_{IR} ( 1 - \ln x^2 M^2) + \ln x^2 M^2}{x^2} \nonumber \\ & & +~\textrm{(local terms)} \;. 
\end{eqnarray} \subsubsection*{Renormalization group equation} When evaluating the renormalization group equation, the steps to follow are identical to the Yang-Mills case. First, from the equation for the one-loop two-point function of the quantum fields, we obtain the value of the gauge-parameter variation coefficient, $\gamma_{\xi} \partial / \partial \xi$. Specifically, we have the following result \begin{eqnarray} \gamma_\xi &=& - \frac{3 C_A}{4 (4 \pi^2)}g^2 + {\cal{O}}(g^4) \;. \end{eqnarray} Then, using $\gamma_{\xi}$, the effective action in a generic gauge and the one- and two-loop contributions to the background self-energy, from the renormalization group equation satisfied by the $B$ fields \begin{eqnarray} \left. \left[ M \frac{\partial}{\partial M} + \beta(g) \frac{\partial}{\partial g} + \gamma_{\xi} (g) \frac{\partial}{\partial \xi} \right] \Gamma (x) \right|_{\xi = 0} = 0 \; \end{eqnarray} we obtain the expansion of the beta function as \begin{eqnarray} \beta(g_{SYM}) = - (3/2) [ C_A/ (8\pi^2)] g^3_{SYM} - (3/2) [ C_A / (8 \pi^2) ]^2 g^5_{SYM} + {\cal{O}}(g^7_{SYM}) \;, \end{eqnarray} where, as in the SuperQED case, we have used the usual coupling constant, which differs by a factor of $\sqrt{2}$ from the one employed in \cite{Gates:1983nr}. Note that, as in the other theories considered, there is no anomalous dimension ($\gamma_B$): with our normalization of the background field and the relation between the renormalization of this field and that of the coupling constant, $\gamma_B$ vanishes. This calculation allows us to shed new light on a controversial point: the origin of the corrections beyond one loop to the Super Yang-Mills beta function. 
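For comparison (quoting, not deriving, the well-known NSVZ result for pure Super Yang-Mills), this expansion reproduces the first two coefficients of the exact beta function, \begin{eqnarray} \beta_{NSVZ}(g) &=& - \frac{3 C_A}{16 \pi^2} \, \frac{g^3}{1 - \frac{C_A g^2}{8 \pi^2}} \;=\; - \frac{3}{2} \frac{C_A}{8 \pi^2} \, g^3 - \frac{3}{2} \left[ \frac{C_A}{8 \pi^2} \right]^2 g^5 + {\cal{O}}(g^7) \;, \end{eqnarray} which matches term by term the two coefficients obtained above.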
Originally, Novikov, Shifman, Vainshtein and Zakharov (NSVZ) obtained in \cite{Novikov:1983uc} the ``exact beta function'' ($\beta_{NSVZ}$) by means of an instanton calculation; the two-loop coefficient of this function was later rederived through perturbative calculations (using dimensional reduction) \cite{Abbott:1984pz,Grisaru:1985tc,Shifman:1986zi}. Although some calculations seemed to point to an IR origin of the higher-order corrections to $\beta_{NSVZ}$ \cite{Novikov:1983uc,Novikov:1985rd,Shifman:1986zi}, this was questioned in \cite{Arkani-Hamed:1997ut,Arkani-Hamed:1997mj}, where, using a Wilsonian formalism (therefore, in principle, sensitive only to the UV behaviour of the theory) and distinguishing between a holomorphic and a canonical coupling constant, an NSVZ flow was obtained for the latter. Our calculation has the virtue of avoiding one of the problematic points in the application of dimensional methods to Super Yang-Mills: the regularization of both the UV and the IR divergences with the same infinitesimal parameter, which causes the two kinds of contributions to mix in the renormalized results. We, in contrast, keep the divergences clearly separated, since they are associated with two independent scales. What we have found is that the scale corresponding to the one-loop renormalization is the one that generates the two-loop contribution to $\beta_{NSVZ}$. There is no two-loop UV scale (in agreement with the conclusion of \cite{Grisaru:1985tc} that, in a four-dimensional regularization scheme, there are no superficial divergences), although this does not imply that the two-loop coefficient of the beta function vanishes. As seen in this case, the one-loop UV scale survives at two loops once the IR effects are taken into account. \chapter{Acknowledgements} \begin{flushright} \begin{it} If it works out, it works out. And if it does not, \\ you have to start again. 
\\ Everything else is fantasy \\ \end{it} {\bf{Edouard Manet}} \end{flushright} In a work that has taken this long, it is natural to have a great many people to thank for their support in finishing it. I hope I do not forget anyone, for as Quevedo said: ``gratitude is the principal part of a man of worth''. First of all, I must thank Professor Javier Mas for the opportunity he gave me to carry out this doctoral thesis. Among many other things, and leaving aside all the physics I have learned from him, I would like to highlight his support through the various circumstances I went through during all this time, as well as his efforts to teach me the critical outlook needed to carry out research. I would also like to thank Professors Manuel P\'erez-Victoria and Jose Ignacio Latorre for their help. I must thank Manuel for sharing with me his broad knowledge of differential renormalization and symmetries, and for his availability at all times to review my work and answer my questions on these subjects. As for Jose Ignacio, although we never worked together directly, I am very grateful to him for providing me with very valuable information about his work in the development of the differential renormalization method. I must also thank my office and department colleagues at the faculty. In the good times (which were most of them) and in the bad (really very few), you gave me all your support. Thank you very much. As for ``the gang'', what can I tell you that you do not already know. You have always been there, and I am well aware of how fortunate I have been for that. I will never be able to thank you enough for having put up with me and supported me throughout this time. One of the best memories of my life is my time in Santiago. 
And that is so thanks to the exceptional people I met there. Most likely we will gradually drift further apart, but the companionship and the hours of study and leisure shared with you are something I will never forget. I also have much to thank the people I met during my time in Madrid. At a moment full of changes and uncertainties in my life, you were the support I needed. Moreover, if I found the drive to take up the calculations again when I seemed to be at a dead end, it was largely thanks to you. Thank you from the bottom of my heart. Nor do I want to forget all the colleagues (well, in many cases by now friends) I have worked with at Softgal Gesti\'on. You have truly shown me that human capital is a company's greatest asset. Finally, if a man is himself and his circumstances, a large part of mine are my family. If I have managed to write this work, it has been because of you. I love you. \chapter{Background (super)field method} \label{ap_BFM} In order to quantize a gauge theory with functional methods, a gauge fixing procedure has to be used, so that no gauge field configurations related to a given one by gauge transformations are counted in the path integral (they all correspond to the same physical state). However, as a result of this procedure, explicit gauge invariance is lost. The background field method was developed (see references from \cite{DeWitt:1967ub} to \cite{Grisaru:1975ei}) to allow us to fix a gauge without losing explicit gauge invariance. Although the method was originally developed for the one-loop case, it was soon extended to include multi-loop effects \cite{Abbott:1980hw,DeWitt:1980jv,'tHooft:1975vy,Capper:1982tf,Abbott:1983zw}. 
The basic idea is the splitting of the gauge field into two parts: the quantum field, which is the variable of integration in the functional integral, and the background field. Thus, we are allowed to fix the gauge for the quantum field while maintaining explicit gauge invariance for the background one. We will first discuss the non-supersymmetric case and then obtain the generalization of the method to superspace \cite{Gates:1983nr,Grisaru:1979wc}. \section{Yang-Mills theory} We will use the conventions of \cite{Zinn-Justin:1993wc}, which are those detailed in section \ref{YM_conventions}. We begin by defining the splitting of the gauge field into two parts as \begin{eqnarray} A_{\mu}^a \rightarrow A_{\mu}^a + B_{\mu}^a \;, \end{eqnarray} where $A_{\mu}^a$ is the quantum field and $B_{\mu}^a$ is the background field. With this splitting, let us consider the functional ${\bf{Z}}[B] = \int [dA] e^{-S_0(A+B)}$ where $S_0 (A) = 1/4 \int d^4 x F_{\mu \nu}^a (A) F_{\mu \nu}^a (A) $ is the usual Yang-Mills action. After the usual gauge fixing procedure \cite{Abbott:1980hw} this functional becomes ($c$, $\bar{c}$ are Faddeev-Popov ghost fields) \begin{eqnarray} {\bf{Z}}[B] &=& \int [dA dc d \bar{c}] exp \left\{ - S_0 (A + B) - \frac{1}{2 \alpha} tr \int d^4 x \; F(A,B)^2 + tr \int d^4 x \; \bar{c} \left. \frac{\delta F(B)}{\delta w} \right|_{w = 0} c \right\} \;. \nonumber \\ \end{eqnarray} Notice now that $S_0 (A +B)$ is invariant under two types of transformations \begin{enumerate} \item Quantum \begin{eqnarray} \delta B_{\mu}^a &=& 0 \nonumber \\ \delta A_{\mu}^a &=& \frac{1}{g} \left[ \partial_{\mu} w^a + g f^{abc} B_{\mu}^b w^c \right]+ f^{abc} A_{\mu}^b w^c \nonumber \\ &=& \frac{1}{g} ({\bf{D}}_{\mu}w)^a + f^{abc} A_{\mu}^b w^c \end{eqnarray} where ${\bf{D}}_{\mu}$ is the background covariant derivative. 
\item Background \begin{eqnarray} \delta B_{\mu}^a &=& \frac{1}{g} \partial_{\mu} w^a + f^{abc} B_{\mu}^b w^c \nonumber \\ \delta A_{\mu}^a &=& f^{abc} A_{\mu}^b w^c \end{eqnarray} \end{enumerate} Our aim is to fix the quantum gauge invariance and at the same time maintain the background gauge invariance. Thus, the gauge fixing function has to transform covariantly with respect to background gauge transformations. Hence, we choose as gauge fixing function $F^a = ({\bf{D}}_{\mu} A_{\mu})^a$, which implies that ${\bf{Z}}[B]$ becomes \begin{eqnarray} {\bf{Z}}[B] &=& \int [dA dc d \bar{c}] e^{-S (A,B)} \nonumber \\ &=& \int [dA dc d \bar{c}] exp \left\{ - S_0(A+B) - \frac{1}{2 \alpha} tr \int d^4 x \; ( {\bf{D}}_{\mu} A_{\mu} )^2 + tr \int d^4 x \; \bar{c} [{\bf{D}}_{\mu} {\cal{D}}_{\mu}] c \right\} \;, \nonumber \\ \end{eqnarray} with $({\cal{D}}_{\mu} w)^a = \partial_{\mu} w^a + g f^{abc} (A_{\mu}^b + B_{\mu}^b) w^c$. As can be seen, ${\bf{Z}}[B]$ is manifestly invariant under background gauge transformations. In order to make the connection with the usual functionals ($Z$, $W= \ln Z$ and $\Gamma[\bar{A}] = \int J \bar{A} - W$ with $\bar{A} = \delta W / \delta J$), we define another functional like ${\bf{Z}}[B]$ but with the quantum field coupled to a source \begin{eqnarray} \tilde{Z}[J,B] &=& \int [d A dc d \bar{c}] e^{- S (A,B) + \int d^4 x \; J_{\mu}^a A_{\mu}^a} \;. \end{eqnarray} We remark again that, by construction, this is explicitly invariant with respect to background gauge transformations. Starting from $\tilde{Z}[J,B]$ we can define functionals analogous to the usual ones, such as $\tilde{W} = \ln \tilde{Z}$ and $\tilde{\Gamma}[\tilde{A},B] = \int J \tilde{A} - \tilde{W}[J,B]$, with $\tilde{A} = \delta \tilde{W}/\delta J$. 
If we perform the change of variables $A_{\mu}^a \rightarrow A_{\mu}^a - B_{\mu}^a$ in the partition function, it is straightforward to arrive at \cite{Abbott:1980hw} \begin{eqnarray} \tilde{Z}[J,B] &=& e^{ - \int JB} Z[J,B] \;, \end{eqnarray} where $Z[J,B]$ is the usual partition function with the gauge fixing and ghost terms evaluated in an unusual but nevertheless valid gauge that depends on the background gauge field. So, we have for the other functionals \begin{eqnarray} \tilde{W}[J,B] &=& - \int J B + W[J,B] \end{eqnarray} and, with $\bar{A} = \delta W / \delta J$ being the usual classical field \begin{eqnarray} \tilde{A} &=& - B + \bar{A} \nonumber \\ \tilde{\Gamma}[\tilde{A},B] &=& \int J ( \tilde{A} + B ) - W[J,B] \nonumber \\ &=& \int J \bar{A} - W[J,B] \nonumber \\ &=& \Gamma [ \bar{A},B] \end{eqnarray} What we have found is that $\tilde{\Gamma}$ and the usual effective action $\Gamma$ are related by \begin{eqnarray} \tilde{\Gamma} [\tilde{A},B] &=& \Gamma[ \tilde{A}+B,B] \;. \end{eqnarray} If we restrict ourselves to diagrams with no external $\tilde{A}$ we have a relevant identity: $\tilde{\Gamma} [0,B] = \Gamma [B] $. This implies that the usual effective action can be obtained through the evaluation of $\tilde{\Gamma}[0,B]$. This quantity is computed by summing all 1PI diagrams with $B$ fields on external legs ($\tilde{A} = 0$ implies that no $A$ field propagators appear on external lines) and $A$ fields inside loops (as the functional integral is only evaluated over $A$ fields). One of the consequences of the background field method is that the renormalization of the coupling constant ($g_0 = Z_g g$) and of the background field ($B_0 = Z_B^{1/2} B$) are related. As we have explicit background gauge invariance, the infinities appearing in $\tilde{\Gamma}[0,B]$ must take the form of a divergent constant times $(F_{\mu \nu}^a )^2$.
At the same time, $F_{\mu \nu}^a$ is renormalized as \begin{eqnarray} (F_{\mu \nu}^a)_0 &=& Z_B^{1/2} \left[ \partial_{\mu} B_{\nu}^a - \partial_{\nu} B_{\mu}^a + g Z_g Z_B^{1/2} f^{abc} B_{\mu}^b B_{\nu}^c \right] \;. \end{eqnarray} Hence, in order to get explicit background gauge invariance, the following relation must hold \begin{eqnarray} Z_g &=& Z_B^{-1/2} \;. \end{eqnarray} \section{Super Yang-Mills theory} \label{BFM_SYM} In this section we apply the conventions for superspace discussed in appendix \ref{ap_SUSY} (those of reference \cite{Gates:1983nr}, which we also follow in this section). As the gauge transformation is non-linear in the supersymmetric Yang-Mills theory, a linear splitting in the gauge field is unsuitable \cite{Gates:1983nr,Grisaru:1979wc}. In order to define a splitting, we will re-examine the Yang-Mills case from a different point of view. The Yang-Mills action is invariant under {\em{local}} transformations of the form $\delta A_{\mu}^a = 1/g \; \partial_{\mu} w^a + f^{abc} A_{\mu}^b w^c$. If the transformation is {\em{global}} we still have invariance, with the gauge field transforming as a matter field $\delta A_{\mu}^a = f^{abc} A_{\mu}^b w^c$. Then, considering again a local $w$, we can gauge the global transformation using the background field to covariantize the derivatives. This covariantization is of the form \begin{eqnarray} (D_{\mu}w)^a = ( \partial_{\mu} w)^a + g f^{abc} A_{\mu}^b w^c &\rightarrow& ( {\bf{D}}_{\mu} w)^a + g f^{abc} A_{\mu}^b w^c \nonumber \\ &=& \partial_{\mu} w^a + g f^{abc} ( A_{\mu}^b + B_{\mu}^b ) w^c \;. \end{eqnarray} Hence, we have a linear splitting because the gauge field is linear in the covariant derivative. The procedure for the supersymmetric case is completely analogous \cite{Gates:1983nr}. 
In the end, we have to covariantize the derivatives with the background field, which implies that we have to replace $D_A \rightarrow {\boldsymbol{\nabla}}_A$, with ${\boldsymbol{\nabla}}_A$ a background covariant derivative. However, we also have to take into account that the covariant derivatives in a supersymmetric gauge theory can be formulated in two representations: \begin{itemize} \item Chiral representation: This is more suitable for quantization. Hence, in the background field method, we work with a quantum $V$ in a chiral representation. \item Vector representation: We do not have to quantize the background fields. Hence, the vector representation is useful for these fields, as background covariance will be manifest. In fact, we will show that we can work with the background covariant derivatives without introducing explicitly the background prepotentials. \end{itemize} So, we define the supersymmetric splitting by writing the covariant derivatives in a quantum chiral but background vector representation: \begin{eqnarray} \nabla_{\alpha} &=& e^{-g V} \boldsymbol{\nabla}_{\alpha} e^{g V} \nonumber \\ \nabla_{\dot{\alpha}} &=& \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \nonumber \\ \nabla_{\alpha \dot{\alpha}} &=& -i \anticomm{\nabla_\alpha}{\nabla_{\dot{\alpha}}} \end{eqnarray} with $g$ the coupling constant. If we go to a background chiral representation (as shown in \ref{SUSY_vector_rep}, this is achieved by multiplying with $e^{-g \bar{\boldsymbol{\Omega}}} (\ldots) e^{g \bar{\boldsymbol{\Omega}}}$, where $\boldsymbol{\Omega}$ is the background prepotential), we can straightforwardly see that this splitting is equivalent to \cite{Gates:1983nr} \begin{eqnarray} e^{g V} \rightarrow e^{g \boldsymbol{\Omega}} e^{g V} e^{g \bar{\boldsymbol{\Omega}}} \;.
\label{BFM_SYM_splitting} \end{eqnarray} The split derivatives $\nabla_A$ transform covariantly under two sets of transformations: \begin{enumerate} \item Quantum: \begin{eqnarray} e^{g V} &\rightarrow& e^{i g \bar{\Lambda}} e^{g V} e^{-ig \Lambda} ~~,~~ \boldsymbol{\nabla}_{\alpha} \bar{\Lambda} = \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \Lambda = 0 \nonumber \\ \boldsymbol{\nabla}_A &\rightarrow& \boldsymbol{\nabla}_A \end{eqnarray} This implies that $\nabla_A$ transforms as \begin{eqnarray} \nabla_A \rightarrow e^{ig \Lambda} \nabla_A e^{-ig \Lambda} \;. \end{eqnarray} \item Background: \begin{eqnarray} e^{g V} &\rightarrow& e^{ig K} e^{g V} e^{- ig K} ~~,~~ K = \bar{K} \nonumber \\ \boldsymbol{\nabla}_A &\rightarrow& e^{ig K} \boldsymbol{\nabla}_A e^{-ig K} \end{eqnarray} This implies \begin{eqnarray} \nabla_A \rightarrow e^{ig K} \nabla_A e^{-ig K} \;. \end{eqnarray} \end{enumerate} Although $\nabla_A$ transforms differently under background and quantum transformations, it is not difficult to show that the transformation of the unsplit gauge field is the same in both cases \cite{Gates:1983nr}. If we study the background splitting of an abelian theory (SuperQED), the situation is simpler. Concretely, we have a linear quantum-background splitting of the form \begin{eqnarray} V \rightarrow V + B \;, \end{eqnarray} where $V$ and $B$ are the quantum and background gauge fields respectively. This can be seen from the general supersymmetric quantum-background splitting expressed in terms of the background prepotential $\boldsymbol{\Omega}$ (\ref{BFM_SYM_splitting}) \begin{eqnarray} e^{V} &\rightarrow& e^{\boldsymbol{\Omega}} e^{V} e^{\bar{\boldsymbol{\Omega}}} \nonumber \\ B &=& \boldsymbol{\Omega} + \bar{\boldsymbol{\Omega}} \;. \end{eqnarray} With this splitting the two sets of transformations are \begin{enumerate} \item Quantum: \begin{eqnarray} V &\rightarrow& V + i ( \bar{\Lambda} - \Lambda ) \nonumber \\ B &\rightarrow& B \;.
\end{eqnarray} \item Background: \begin{eqnarray} V &\rightarrow & V \nonumber \\ B &\rightarrow & B + i( \bar{\Lambda} - \Lambda ) \;. \end{eqnarray} \end{enumerate} Let us now consider the background field quantization. We start by defining, as in the Yang-Mills case, a partition function ${\bf{Z}}$ with the gauge field split. After the gauge fixing procedure with a background covariantly chiral gauge fixing function of the form $F = \bar{\boldsymbol{\nabla}}^2 V$ (which implies that the Faddeev-Popov ghosts are also background covariantly chiral) this functional becomes \cite{Gates:1983nr} \begin{eqnarray} {\bf{Z}} &=& \int [dV dc dc^{\prime} d \bar{c} d \bar{c}^{\prime}] \; \delta( \bar{\boldsymbol{\nabla}}^2 V - f) \delta( \boldsymbol{\nabla}^2 V - \bar{f}) e^{S_0 + S_{FP}} \;. \end{eqnarray} In this case, because we are dealing with constrained background chiral superfields rather than ordinary chiral superfields, the gauge averaging requires a more careful treatment. If we average with a factor like $exp \int f M f$, with $M$ an operator, then in order to normalize we have to divide by $det M$. Hence, if $M$ is a function of the background field, we must average with an expression of the form \begin{eqnarray} \int [d f d b] e^{ f M f} e^{b M b} \;, \end{eqnarray} with $b$ a field of opposite statistics to $f$, called the Nielsen-Kallosh ghost \cite{Nielsen:1978mp}. As this field only interacts with the background field and enters quadratically in the action, it only contributes at the one-loop level. In our case we gauge-average with a factor of the form \begin{eqnarray} \int [d f d \bar{f} d b d \bar{b}] e^{- \int d^8 z \; [ \bar{f}f + \bar{b}b]} \;, \end{eqnarray} where it is clear that $b$,$\bar{b}$ are background covariantly chiral ghost fields.
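The role of the opposite statistics can be made plausible with a schematic finite-dimensional analogue (a sketch, not part of the derivation above): for Gaussian weights with the same operator $M$,

```latex
\begin{eqnarray}
\int [d f d \bar{f}] \; e^{- \bar{f} M f} \; \propto \; ( det M )^{-1}
~~~,~~~
\int [d b d \bar{b}] \; e^{- \bar{b} M b} \; \propto \; det M \;,
\end{eqnarray}
```

so the product of a commuting pair and an anticommuting pair with the same weight is independent of $M$. This is why any background dependence introduced by the averaging over the constrained fields $f$, $\bar{f}$ is compensated by the ghosts $b$, $\bar{b}$, whose determinant contributes only at one loop.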
All of this implies that the partition function can be written as \begin{eqnarray} {\bf{Z}} &=& \int [ d V d c d c^{\prime} d \bar{c} d \bar{c}^{\prime} d b d \bar{b} ] e^{S_{eff}} \nonumber \\ S_{eff} &=& S_0 + S_{GF} + S_{FP} + \int \bar{b}b \;. \label{BFM_SYM_Seff} \end{eqnarray} As in the usual Yang-Mills case, we can relate the background field functional to the usual effective action (in a special background gauge) once we set the sources to zero and consider diagrams with external background lines and internal quantum propagators \cite{Gates:1983nr}. \subsection{Covariant Feynman rules} After the supersymmetric quantum-background splitting, two approaches can be followed to perform the calculations. The first one is to use explicitly the background connections of the covariant derivatives in the calculations ($\boldsymbol{\nabla}_{\alpha} = D_{\alpha} + {\bf{\Gamma}}_{\alpha}$ and $\boldsymbol{\nabla}_{\alpha \dot{\alpha}} = \partial_{\alpha \dot{\alpha}} + {\bf{\Gamma}}_{\alpha \dot{\alpha}}$). In this situation we can apply the usual D-algebra \cite{Abbott:1984pz}. In the other approach, we do not extract the spinor connection from the covariant derivatives. Therefore, instead of the usual D-algebra, we apply the covariant D-algebra defined for these derivatives. Supergraphs obtained in this way give contributions that are only functions of the space-time connection ${\bf{\Gamma}}_{\alpha \dot{\alpha}}$ or the field strength ${\bf{W}}_{\alpha}$. Not only are these diagrams simpler and fewer in number than those of the first procedure, but they are also more convergent, as we do not have contributions with $\boldsymbol{\Gamma}_{\alpha}$, which is of lower dimension than $\boldsymbol{W}_{\alpha}$ and $\boldsymbol{\Gamma}_{\alpha \dot{\alpha}}$. In this section we will detail this second approach.
Let us consider a quantum-background split action of the form \begin{eqnarray} S &=& - \frac{1}{2 g^2} tr \int d^4 x d^4 \theta \; ( e^{-g V} \boldsymbol{\nabla}^{\alpha} e^{g V}) \bar{\boldsymbol{\nabla}}^2 ( e^{- g V} \boldsymbol{\nabla}_{\alpha} e^{g V} ) \nonumber \\ & & + \int d^4 x d^4 \theta \; \bar{\phi} e^{g V} \phi + \int d^4 x \left[ d^2 \theta \; P( \phi )+ h.c. \right] \;, \end{eqnarray} where $\phi$ is a background covariantly chiral superfield. After the gauge fixing procedure, we add to the action a gauge-fixing term ($S_{GF}$), and Faddeev-Popov ($S_{FP}$) and Nielsen-Kallosh ($S_{NK}$) ghost terms. All of them have the following expressions \begin{eqnarray} S_{GF} &=& - \frac{1}{\alpha} tr \int d^4 x d^4 \theta ( \boldsymbol{\nabla}^2 V ) ( \bar{\boldsymbol{\nabla}}^2 V ) \nonumber \\ S_{FP} &=& tr \int d^4 x d^4 \theta \; \left[ \bar{c}^{\prime} c - c^{\prime} \bar{c} + \frac{1}{2} ( c^{\prime} + \bar{c}^{\prime} ) \comm{gV}{c+\bar{c}} + \ldots\right] \nonumber \\ S_{NK} &=& \frac{1}{\alpha} tr \int d^4 x d^4 \theta \; \bar{b} b \end{eqnarray} The quantum gauge quadratic action can be written as \cite{Gates:1983nr,Grisaru:1979wc,Grisaru:1984ja,Grisaru:1984jc} \begin{eqnarray} & & - \frac{1}{2} tr \int d^4 x d^4 \theta V \left[ {\boldsymbol{\Box}} - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \right] V \nonumber ~~~,~~~ \boldsymbol{\Box} = \frac{1}{2} \boldsymbol{\nabla}^{\alpha \dot{\alpha}} \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \end{eqnarray} We denote this kinetic operator as $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} = \boldsymbol{\Box} - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}}$.
Also, for the background covariantly chiral superfields, we define operators $\Box_{\pm}$ by \begin{eqnarray} \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2 \phi &=& \Box_{+} \phi ~~~,~~~ \Box_{+} = \Box - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - \frac{i}{2} ( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} ) \nonumber \\ \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \bar{\phi} &=& \Box_{-} \bar{\phi} ~~~,~~~ \Box_{-} = \Box - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} - \frac{i}{2} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}) \end{eqnarray} The covariant Feynman rules can be obtained from the partition function once we have introduced real and chiral sources $J, j$. This partition function can be written as \cite{Gates:1983nr} \begin{eqnarray} Z = \Delta_{+} \hat{\Delta} exp \left[ S_{int}\left( \frac{\delta}{\delta J},\frac{\delta}{\delta j},\frac{\delta}{\delta \bar{j}}\right) \right] exp \left[ \int d^4 x d^4 \theta ( \frac{1}{2} J \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1} J - \bar{j} \boldsymbol{\Box}^{-1}_{+} j ) \right] \;, \end{eqnarray} where $\hat{\Delta}$, $\Delta_{+}$ are one-loop contributions from real and chiral superfields (including ghosts). From this it is clear that covariant Feynman rules are similar to the usual ones except that for V-lines we have $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1}$ as the propagator, $\bar{\phi}$ and $\phi$ fields in vertices are joined by $-\Box_{+}^{-1}$, and usual $D^2$ factors at the vertices are replaced by $\boldsymbol{\nabla}^2$ factors. Making use of these covariant Feynman rules, in the background field approach we consider vacuum graphs with quantum vertices derived from $S_{int}$, $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1}$ and $-\Box_{+}^{-1}$ propagators and $\boldsymbol{\nabla}^2$, $\bar{\boldsymbol{\nabla}}^2$ factors. 
The idea is to use the algebra of covariant derivatives and the Bianchi identities to push, in each propagator, the covariant spinor derivatives to a given vertex. At this point, using the anticommutation relations, they can be integrated by parts or eliminated in favour of space-time derivatives. Finally, we have to apply the relation \cite{Grisaru:1984ja} \begin{eqnarray} \delta_{12} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \delta_{12} = \delta_{12} \;, \label{cov_SUSY_delta_id} \end{eqnarray} in order to obtain free Grassmannian $\delta$-functions that allow us to evaluate the $\theta$ integrals. \chapter{Explicit calculations} \label{ap_calc} \section{Integrals with overlapping divergences} \label{ap_integrales} Here we will show how to obtain the different expressions for the integrals with overlapping divergences presented in section \ref{overlap_integrals}. \subsection{Notation} We begin by discussing some notation that we use in these calculations. As we did when we listed the integrals of section \ref{overlap_integrals}, we will write the final results in terms of the variable $z=x-y$. Some intermediate local results will be found to be multiplied by a constant termed $a$, whose value is $a=6 \pi^4 \zeta(3) / ( 4 \pi^2)^4$.
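The differential renormalization manipulations used below repeatedly trade $1/z^4$ for $-\frac{1}{4} \Box \left( \ln z^2 M^2 / z^2 \right)$, an identity valid away from the origin. As a quick consistency check (a sketch with a computer algebra system, written in terms of the radial variable $r = |z|$; the helper name `box4` is ours):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

def box4(f):
    # Radial part of the 4D Euclidean Laplacian acting on f(r): f'' + (3/r) f'
    return sp.diff(f, r, 2) + 3 * sp.diff(f, r) / r

# The free propagator ~ 1/(4 pi^2 r^2) is harmonic away from the origin
assert sp.simplify(box4(1 / r**2)) == 0

# Basic differential renormalization identity: Box[ln(r^2 M^2)/r^2] = -4/r^4 for r != 0
lhs = sp.simplify(box4(sp.log(r**2 * M**2) / r**2))
print(lhs)  # -4/r**4
```

The delta-function contributions at $r = 0$ are of course not seen by this pointwise check; they are precisely what the renormalization prescription fixes.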
\subsubsection{Integral relations} To obtain some of the results, we use the following exact integral relations \begin{eqnarray} \int & d^4 u d^4 v & \Delta_{xu} ( \partial_{\mu}^x \Delta_{xv} ) ( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} \nonumber \\ &=& \partial_{\lambda}^x \int d^4 u d^4 v \; \Delta_{xu} ( \partial_{\mu}^x \Delta_{xv} ) \Delta_{yu} ( \partial_{\nu}^y \Delta_{yv} ) \Delta_{uv} \nonumber \\ & & + \int d^4 u d^4 v \; \Delta_{xu} ( \partial_{\mu}^x \Delta_{xv} ) \Delta_{yu} ( \partial_{\nu}^y \partial_{\lambda}^y \Delta_{yv} ) \Delta_{uv} \nonumber \\ & & + \partial_{\nu}^x \int d^4 u d^4 v \; \Delta_{xu} ( \partial_{\mu}^x \Delta_{xv}) \Delta_{yu} ( \partial_{\lambda}^y \Delta_{yv} ) \Delta_{uv} \nonumber \\ & & + \partial_{\nu}^x \partial_{\lambda}^x \int d^4 u d^4 v \; \Delta_{xu} ( \partial_{\mu}^x \Delta_{xv}) \Delta_{yu} \Delta_{yv} \Delta_{uv} \label{rel_int1} \end{eqnarray} \begin{eqnarray} \partial_{\lambda}^y \int & d^4 u d^4 v & \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} \nonumber \\ & = & \frac{1}{2} \partial_{\nu}^y \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \Box \Delta_{yu} ) \Delta_{yv} \Delta_{uv} \nonumber \\ & & - \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \Box \Delta_{yu} ) ( \partial_{\nu}^y \Delta_{yv} ) \Delta_{uv} \nonumber \\ & & + \frac{1}{2} \partial_{\nu}^y \partial_{\lambda}^y \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} \label{rel_int2} \end{eqnarray} \begin{eqnarray} \partial_{\lambda}^y \int & d^4 u d^4 v& \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{uy} ) ( \partial_{\mu}^y \Delta_{yv} )\Delta_{uv} \nonumber \\ &=& \frac{1}{4} \partial_{\nu}^y \partial_{\mu}^y \partial_{\lambda}^y \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \Delta_{uy} ) \Delta_{yv} \Delta_{uv} \nonumber \\ & & + \frac{1}{4} \partial_{\nu}^y \Box \int d^4 u d^4 v \;
\Delta_{xu} \Delta_{xv} ( \partial_{\mu}^y \Delta_{uy} ) \Delta_{yv} \Delta_{uv} \nonumber \\ & & - \frac{1}{2} \Box \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \partial_{\mu}^y \partial_{\nu}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} \label{rel_int3} \end{eqnarray} To prove these relations, one has only to perform the derivatives and expand the different terms. \subsection{Calculations} \subsubsection{Evaluation of $H[1,1 \; ; \; 1,1]$, $H[\partial_{\mu},1 \; ; \; 1,1]$ and $H[ 1, \partial_{\lambda} \; ; \; 1, \partial_{\lambda}]$} These integrals are obtained by means of Gegenbauer Polynomials \cite{Freedman:1991tk,Song}. \subsubsection{Evaluation of $\partial_{\lambda}^x H[ 1, \partial_{\mu} \; ; \; 1, \partial_{\lambda}]$} Contracting (\ref{rel_int1}) with $\delta_{\nu \lambda}$ we obtain \begin{eqnarray} \partial_{\lambda}^x H[ 1, \partial_{\mu} \; ; \; 1, \partial_{\lambda}] &=& \frac{1}{2} \partial_{\mu}^x H [ 1, 1 \; ; \; \Box, 1] - H[ 1, \partial_{\mu} \; ; \; 1, \Box ] - \frac{1}{2} \Box H [ 1, \partial_{\mu} \; ; \; 1,1] \;. \nonumber \\ \end{eqnarray} In order to renormalize, we have only to write the previous expressions in terms of the integral form $I^1(x)$ (remember $\Box \Delta = - \delta$), and use the results found in section \ref{Nested_div}. 
Therefore we find \begin{eqnarray} \partial_{\lambda}^x H[ 1, \partial_{\mu} \; ; \; 1, \partial_{\lambda}] &=& \frac{1}{2} \partial_{\mu} ( \Delta I^1 ) - ( \Delta \partial_{\mu} I^1 ) + \frac{a}{4} ( \partial_{\mu} \delta ) \nonumber \\ &\stackrel{R}{\rightarrow}& - \frac{1}{32 (4 \pi^2)^3} \partial_{\mu} \Box \frac{ \frac{1}{2} \ln z^2 M^2}{z^2} + \ldots \end{eqnarray} \subsubsection{Evaluation of $H[ \partial_{\mu} \partial_{\lambda}, \partial_{\lambda} \; ; \; 1,1]$} In this case, no integral relation is used to write $H[ \partial_{\mu} \partial_{\lambda}, \partial_{\lambda} \; ; \; 1,1]$ in terms of $I^1$ \begin{eqnarray} H[ \partial_{\mu} \partial_{\lambda}, \partial_{\lambda} \; ; \; 1,1] &=& \frac{1}{2} \partial_{\mu}^x H[ \partial_{\lambda}, \partial_{\lambda} \; ; \; 1,1] \nonumber \\ &=& \frac{1}{2} \partial_{\mu}^x \partial_{\lambda}^x H[1, \partial_{\lambda} \; ; \; 1, 1] - \frac{1}{2} \partial_{\mu}^x H[1, \Box \; ; \; 1,1] \nonumber \\ &=& \frac{1}{2} \partial_{\mu} ( \Delta I^1 ) - \frac{a}{4} ( \partial_{\mu} \delta) \nonumber \\ &\stackrel{R}{\rightarrow}& \frac{1}{32(4 \pi^2)^3} \partial_{\mu} \Box \frac{ - \frac{1}{2} \ln^2 z^2 M^2 - \ln z^2 M^2}{z^2} + \ldots \end{eqnarray} \subsubsection{Evaluation of $\partial_{\lambda}^x H[ 1, \partial_{\mu} \; ; \; \partial_{\nu} \partial_{\lambda}, 1]$, $\partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; \partial_{\mu} \partial_{\nu}, 1]$ and $H[ 1, \partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\mu},1]$} First of all, the third integral (\ref{int6}) will be evaluated with relation (\ref{rel_int1}) \begin{eqnarray} H[ 1, \partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\nu},1] &=& \frac{1}{2} \partial_{\lambda}^x H [1, \partial_{\lambda} \; ; \; 1, \partial_{\nu} ] + \frac{1}{2} \partial_{\lambda}^x H[ 1, 1 \; ; \; 1, \partial_{\lambda} \partial_{\nu} ] \nonumber \\ & & + \frac{1}{2} \partial_{\nu}^x H[ 1, \partial_{\lambda} \; ; \; 1, \partial_{\lambda}] + \frac{1}{2} \partial_{\nu}^x 
\partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; 1,1] \;. \end{eqnarray} Using the previous results \begin{eqnarray} H^R[ 1, \partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\nu},1] &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\nu} \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 - \frac{7}{8} \ln z^2 M^2 }{z^2} + \ldots \end{eqnarray} However, this integral, along with the other two, can also be obtained by another method. The idea is to apply the CDR decomposition (\ref{CDR_T}) into trace part, traceless part and additional local terms to the divergent subdiagram $ (\partial_{\mu}^y \partial_{\nu}^y \Delta_{yu}) \Delta_{yv} \Delta_{uv}$. Applying this to the general integral \begin{eqnarray} \int &d^4 u d^4 v& \Delta_{xu} ( \partial_{\rho}^x \Delta_{xv} ) ( \partial_{\varepsilon}^y \partial_{\sigma}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} \nonumber \\ &\stackrel{R}{\rightarrow}& - \frac{1}{4} \delta_{\varepsilon \sigma} [ \Delta \partial_{\rho} I^1 ]_R - \frac{\delta_{\varepsilon \sigma}}{256 \pi^2} \partial_{\rho} \Delta^2_R - \frac{16}{(4 \pi^2)^5} I_{\rho \varepsilon \sigma \; R} \;, \label{I_rho_eps_sig} \end{eqnarray} where $I_{\rho \varepsilon \sigma}$ stands for the traceless part. The one-loop ambiguity fixed by CDR is reflected in the second term of (\ref{I_rho_eps_sig}), which at two loops has become a logarithm of the scale. In the renormalization of the traceless part ordinary differential renormalization will be used, leaving the ambiguities (local terms) unfixed.
The expression for $I_{\rho \varepsilon \sigma}$ is \begin{eqnarray} I_{ \rho \varepsilon \sigma \; R } &=& B \frac{ x_{\varepsilon} x_{\sigma} x_{\rho}}{x^8} - \frac{1}{2} A \frac{x_{\rho}}{x^6} \delta_{\varepsilon \sigma} + ( A - \frac{1}{2} B) \left[ \frac{x_{\varepsilon}}{x^6} \delta_{\rho \sigma} + \frac{x_{\sigma}}{x^6} \delta_{\rho \varepsilon} \right] |_R \;, \end{eqnarray} or, in terms of the integrals being discussed, \begin{eqnarray} I_{\lambda \lambda \mu \; R} &=& - \frac{3}{8} ( 4 \pi^2)^2 ( 3A - B ) \partial_{\mu} \Delta^2_R \label{I_lam_lam_mu}\\ \partial_{\lambda} I_{\mu \lambda \nu \; R} &=& - (4 \pi^2)^2 ( 3A - B ) \left[ \frac{1}{24} \partial_{\mu} \partial_{\nu} \Delta^2_R + \frac{2}{3} ( 4 \pi^2) \delta_{\mu \nu} \Delta^3_R \right] \\ \partial_{\lambda} I_{\lambda \mu \nu \; R} &=& (4 \pi^2)^2 (3A - B) \left[ - \frac{1}{6} \partial_{\mu} \partial_{\nu} \Delta^2_R + \frac{1}{3} (4 \pi^2) \delta_{\mu \nu} \Delta^3_R \right] \;. \end{eqnarray} The value of $(3A-B)$ is easily obtained using (\ref{I_lam_lam_mu}), because this corresponds to integral (\ref{int6}), which was obtained previously, i.e. \begin{eqnarray} \int &d^4 u d^4 v& \left. \Delta_{xu} ( \partial_{\lambda}^x \Delta_{xv}) ( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{yu}) \Delta_{yv} \Delta_{uv} \; \right|_R = \nonumber \\ &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\nu} \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 - \frac{7}{8} \ln z^2 M^2 }{z^2} + \ldots \nonumber \\ &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\nu} \Box \frac{ \frac{1}{8} \ln^2 z^2 M^2 + \frac{1}{4} \ln z^2 M^2}{z^2}- \frac{16}{(4 \pi^2)^5} I_{\lambda \lambda \nu \; R} \;, \end{eqnarray} which implies that \begin{eqnarray} (3A - B) &=& \frac{3 \pi^4}{8} \;.
\end{eqnarray} With this result, the evaluation of (\ref{int8}) and (\ref{int10}) is straightforward \begin{eqnarray} \partial_{\lambda}^x H^R[ 1, \partial_{\mu} \; ; \; \partial_{\nu} \partial_{\lambda}, 1] &=& - \frac{1}{4} \partial_{\nu} ( \Delta \partial_{\mu} I^1 )_R - \frac{1}{256 \pi^2} \partial_{\mu} \partial_{\nu} \Delta^2_R - \frac{16}{(4 \pi^2)^5} \partial_{\lambda} I_{\mu \lambda \nu \; R} \nonumber \\ &=& \frac{1}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{8} \ln^2 z^2 M^2 + \frac{1}{8} \ln z^2 M^2}{z^2} + \delta_{\mu \nu} \Box \Box \frac{-\frac{1}{4} \ln z^2 M^2}{z^2} \right] \nonumber \\ & & + \ldots \end{eqnarray} \begin{eqnarray} \partial_{\lambda}^x H^R[ 1, \partial_{\lambda} \; ; \; \partial_{\mu} \partial_{\nu}, 1] &=& - \frac{1}{4} \delta_{\mu \nu} \partial_{\lambda} ( \Delta \partial_{\lambda} I^1 )_R - \frac{\delta_{\mu \nu}}{256 \pi^2} \Box \Delta^2_R - \frac{16}{(4 \pi^2)^5} \partial_{\lambda} I_{\lambda \mu \nu \; R} \nonumber \\ &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ - \frac{1}{2} \ln z^2 M^2}{z^2} + \delta_{\mu \nu} \Box \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} \right] \nonumber \\ & &+ \ldots \end{eqnarray} \subsubsection{Evaluation of $\partial_{\lambda}^x H[1, \partial_{\lambda} \; ; \; 1, \partial_{\mu} \partial_{\nu}]$} Using integral relation (\ref{rel_int1}) we can write this contribution in terms of others previously obtained.
Explicitly, we find \begin{eqnarray} \partial_{\lambda}^x H[1,\partial_{\lambda} \; ; \; 1, \partial_{\mu} \partial_{\nu}] &=& \partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; \partial_{\mu} \partial_{\nu}, 1] - \partial_{\lambda}^x \partial_{\mu}^x H [1, \partial_{\lambda} \; ; \; 1, \partial_{\nu}] \nonumber \\ & & - \partial_{\nu}^x \partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; 1,\partial_{\mu}] - \partial_{\nu} \Box H[1, \partial_{\mu} \; ; \; 1,1] \nonumber \\ &\stackrel{R}{\rightarrow}& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{2} \ln z^2 M^2}{z^2} + \delta_{\mu \nu} \Box \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} \right] \nonumber \\ & & + \ldots \nonumber \\ \end{eqnarray} \subsubsection{Evaluation of $H[1, \partial_{\mu} \; ; \; 1, \partial_{\nu}]$} Considering (\ref{int8}) and applying (\ref{rel_int1}) we can put this as \begin{eqnarray} \partial_{\lambda}^x H[ 1, \partial_{\mu} \; ; \; \partial_{\nu} \partial_{\lambda} , 1] &=& \frac{1}{2} \Box H [1, \partial_{\mu} \; ; \; 1, \partial_{\nu}] + \frac{1}{2} \partial_{\mu}^x \partial_{\lambda}^x H[ 1, 1 \; ; \; 1, \partial_{\nu} \partial_{\lambda}] \nonumber \\ & &+ \frac{1}{2} \partial_{\nu}^x \partial_{\lambda}^x H[ 1, \partial_{\mu} \; ; \; 1, \partial_{\lambda}] + \frac{1}{2} \partial_{\nu}^x \Box H[ 1, \partial_{\mu} \; ; \; 1,1] \;. 
\end{eqnarray} Recalling previous results \begin{eqnarray} \partial_{\lambda}^x H^R [1, \partial_{\mu} \; ; \; \partial_{\nu} \partial_{\lambda},1] &=& \frac{1}{32(4 \pi^2)^3} \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{8} \ln^2 z^2 M^2 + \frac{1}{8} \ln z^2 M^2}{z^2} \nonumber \\ & & + \frac{1}{2} \Box H^R [1, \partial_{\mu} \; ; \; 1, \partial_{\nu}] + \textrm{(local terms)} \end{eqnarray} So that \begin{eqnarray} \Box H^R[1, \partial_{\mu}\; ; \; 1, \partial_{\nu}] &=& \frac{1}{32 (4 \pi^2)^3} \delta_{\mu \nu} \Box \Box \frac{- \frac{1}{2} \ln z^2 M^2}{z^2} + \ldots \end{eqnarray} \subsubsection{Evaluation of $H[1,1 \; ; \; \partial_{\mu} \partial_{\nu}, 1]$} Using (\ref{int10}), (\ref{int10a}) and the identity \begin{eqnarray} \Box H[ 1,1 \; ; \; \partial_{\mu} \partial_{\nu},1] &=& \partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; \partial_{\mu} \partial_{\nu},1] + \partial_{\lambda}^x H[ \partial_{\lambda},1 \; ; \; \partial_{\mu} \partial_{\nu},1] \;, \end{eqnarray} we can easily arrive at \begin{eqnarray} \Box H^R[1,1 \; ; \; \partial_{\mu} \partial_{\nu}, 1] &=& \frac{1}{32(4 \pi^2)^3} \delta_{\mu \nu} \Box \Box \frac{ \frac{1}{4} \ln^2 z^2 M^2 + \frac{3}{4} \ln z^2 M^2}{z^2} + \ldots \end{eqnarray} \subsubsection{Evaluation of $\partial_{\lambda}^x H[1,1 \; ; \; \partial_{\lambda} \partial_{\nu} , \partial_{\mu}]$} Using (\ref{rel_int3}) and (\ref{int11}) we can write this as \begin{eqnarray} \partial_{\lambda}^x H[1,1 \; ; \; \partial_{\lambda} \partial_{\nu} , \partial_{\mu}] &=& \frac{1}{2} \Box H[ 1,1 \; ; \; \partial_{\mu} \partial_{\nu} ,1] - \frac{1}{4} \partial_{\mu}^y \partial_{\nu}^y \partial_{\lambda}^y H[ 1,1 \; ; \; \partial_{\lambda},1] \nonumber \\ & & - \frac{1}{4} \partial_{\nu}^y \Box H[1,1 \; ; \; \partial_{\mu} , 1] \;.
\nonumber \\ \end{eqnarray} Hence, we have only to use previous results to find \begin{eqnarray} \partial_{\lambda}^x H^R [1,1 \; ; \; \partial_{\lambda} \partial_{\nu} , \partial_{\mu}] &=& \frac{1}{32 (4 \pi^2)^3} \delta_{\mu \nu} \Box \Box \frac{ \frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} + \ldots \end{eqnarray} \subsubsection{Evaluation of $\partial_{\lambda}^x H[ 1,1 \; ; \; \partial_{\mu} \partial_{\nu} , \partial_{\lambda}]$} With (\ref{rel_int3}) and (\ref{int5}), this contribution can be evaluated by the same procedure as the previous one. So, we have \begin{eqnarray} \partial_{\lambda}^x H[ 1,1 \; ; \; \partial_{\mu} \partial_{\nu} , \partial_{\lambda}] &=& \frac{1}{4} \partial_{\nu}^x \Box H[ 1,1 \; ; \; \partial_{\mu},1] + \frac{1}{4} \partial_{\lambda}^x \partial_{\mu}^x \partial_{\nu}^x H[ 1,1 \; ; \; \partial_{\lambda},1] \nonumber \\ & & + \frac{1}{2} \partial_{\lambda}^x \partial_{\mu}^x H[ 1,1 \; ; \; \partial_{\lambda} \partial_{\nu},1] \nonumber \\ &\stackrel{R}{\rightarrow}& \frac{1}{32 (4 \pi^2)^3} \partial_{\mu} \partial_{\nu} \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} + \ldots \end{eqnarray} \subsubsection{Evaluation of $H[1, \partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda}, 1]$} In this case the CDR decomposition into trace+traceless+local terms (\ref{CDR_T}) will be used again, as in (\ref{int8}) and (\ref{int10}) \begin{eqnarray} H^R[1, \partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda}, 1] &=& - \frac{1}{4} ( \Delta \partial_{\mu} \partial_{\nu} I^1 )_R - \frac{1}{4} [ \Delta ( \partial_{\mu} \partial_{\nu} - \frac{1}{4} \delta_{\mu \nu} \Box ) I^1 ]_R \nonumber \\ & &- \frac{1}{128 \pi^2} (\Delta \partial_{\mu} \partial_{\nu} \Delta)_R - \frac{1}{128 \pi^2} [ \Delta ( \partial_{\mu} \partial_{\nu} - \frac{1}{4} \delta_{\mu \nu} \Box ) \Delta ]_R \nonumber \\ & &+ \frac{64}{(4 \pi^2)^5} I_{\mu \lambda \nu \lambda \; R} \;, \end{eqnarray} where
$I_{\mu \lambda \nu \lambda}$ stands for the integral with the traceless parts. This was calculated in \cite{Haagensen:1992vz}, and the result was found to be \begin{eqnarray} \frac{64}{(4 \pi^2)^5} I_{\mu \lambda \nu \lambda \; R} &=& \frac{5}{96(4 \pi^2)} \partial_{\mu} \partial_{\nu} \Delta^2_R + \frac{13}{48} \delta_{\mu \nu} \Delta^3_R \;. \end{eqnarray} Adding up all the terms, it is easy to arrive at \begin{eqnarray} H^R[1, \partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda}, 1] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{6} \ln^2 z^2 M^2 - \frac{5}{36} \ln z^2 M^2}{z^2} \right. \nonumber \\ & &+ \left. \delta_{\mu \nu} \Box \Box \frac{ - \frac{1}{24} \ln^2 z^2 M^2 - \frac{29}{72} \ln z^2 M^2}{z^2} \right] + \ldots \nonumber \\ \end{eqnarray} \subsubsection{Evaluation of $H[1, \partial_{\mu} \partial_{\lambda} \; ; \; 1 , \partial_{\lambda} \partial_{\nu} ]$} In this case, applying (\ref{rel_int1}), (\ref{int5}), (\ref{int6}), (\ref{int8}) and (\ref{int14}) we get \begin{eqnarray} H[1, \partial_{\mu} \partial_{\lambda} \; ; \; 1 , \partial_{\lambda} \partial_{\nu} ] &=& H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda},1] - \partial_{\lambda}^x H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; 1, \partial_{\nu}] \nonumber \\ & & - \partial_{\nu}^x H[1, \partial_{\mu} \partial_{\lambda} \; ; \; 1, \partial_{\lambda}]- \partial_{\nu}^x \partial_{\lambda}^x H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; 1,1] \nonumber \\ &\stackrel{R}{\rightarrow}& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{\frac{1}{6} \ln^2 z^2 M^2 + \frac{49}{36} \ln z^2 M^2}{z^2} \right.
\nonumber \\ & & + \left.\delta_{\mu \nu} \Box \Box \frac{- \frac{1}{24} \ln^2 z^2 M^2 - \frac{11}{72} \ln z^2 M^2}{z^2} \right] + \ldots \nonumber \\ \end{eqnarray} \section{UV and IR divergent integrals} \label{ap_UV_IR} In this section we will detail the renormalization of three relevant integral expressions that appear when considering two-loop diagrams formed by the insertion of a one-loop propagator. Examples of this are diagram $(a)$ of QED and diagram $(b)$ of the Yang-Mills case. These expressions are \begin{eqnarray} I^0 (x-y) &=& \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} \Delta^2_{uv} \nonumber \\ I^{0}_{\mu} (x-y) &=& \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} ( \Delta_{uv} \partial_{\mu}^u \Delta_{uv} ) \nonumber \\ I^{0}_{\mu \nu} (x-y) &=& \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} ( \Delta_{uv} \partial_{\mu}^u \partial_{\nu}^v \Delta_{uv} ) \;. \end{eqnarray} \subsection{Renormalization of $I^0$} The renormalization of $I^0$ is detailed in section \ref{IR_divergences}, and here we only recall the final renormalized result found there. \begin{eqnarray} I^0_R(x-y) &=& \frac{1}{32 (4 \pi^2)^2} \left[ \ln^2 x^2 M^2_{IR} + 2 \ln x^2 M^2_{IR} ( 1 - \ln x^2 M^2) \right]+ \ldots \nonumber \\ \label{I0_integral} \end{eqnarray} Since in diagram $(b)$ of the Yang-Mills theory we have contributions of the form $\Delta \Box I^0$ and $\Delta \partial_{\mu} \partial_{\nu} I^0$, we have to evaluate them. For the first one, it is clear that \begin{eqnarray} \Box I^0(x-y) &=& \Box \int d^4 u d^4v \; \Delta_{xu} \Delta_{yv} \Delta^2_{uv} \nonumber \\&=& - \int d^4 v \; \Delta_{vy} \Delta^2_{xv} \nonumber \\ &=& - I^1 (x-y) \nonumber \\ &\stackrel{R}{\rightarrow}& - \frac{1}{4 (4 \pi^2)^2} \frac{\ln (x-y)^2 M^2}{(x-y)^2} \;, \end{eqnarray} where we have used the renormalized value found for $I^1$. Finally, in order to obtain $\Delta \partial_{\mu} \partial_{\nu}I^0$ we have to consider (\ref{I0_integral}) and apply usual DiffR.
With this, we find \begin{eqnarray} [\Delta \partial_{\mu} \partial_{\nu} I^0 ]_R (x) &=& \frac{1}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \frac{ \ln x^2 M^2}{x^2} + \delta_{\mu \nu} \Box \frac{ \frac{1}{4} \ln^2 x^2 M^2 + \frac{1}{4} \ln x^2 M^2}{x^2} \right] \nonumber \\ & & + \ldots \end{eqnarray} \subsection{Renormalization of $I^0_{\mu}$} \label{ap_UV_IR_I0m} The renormalization of $I^0_{\mu}$ is straightforward, once we recall that CDR imposes $I^0_{\mu \; R} = \frac{1}{2} \partial_{\mu}^x I^0_R$. \subsection{Renormalization of $I^0_{\mu \nu}$} Applying CDR to the subdivergence we find \begin{eqnarray} I_{\mu \nu}^0 &=& \frac{1}{3} \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} ( \partial_{\mu} \partial_{\nu} - \frac{1}{4} \delta_{\mu \nu} \Box ) (\Delta^2_{uv})_{R} \nonumber \\ & &+ \frac{1}{288 \pi^2} \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \delta (u-v) \nonumber \\ &=& \frac{1}{3} \partial_{\mu} \partial_{\nu} I^0_R - \frac{1}{12} \delta_{\mu \nu} \Box I^0_R + \frac{1}{72 (4\pi^2)} \partial_{\mu} \partial_{\nu} \int d^4 u \; \Delta_{xu} \Delta_{yu} + \frac{\delta_{\mu \nu}}{72 (4\pi^2)} \Delta \;. \nonumber \end{eqnarray} With this we can evaluate the expression that appears in diagram $(b)$ of the Yang-Mills case ($\Delta I^0_{\mu \nu} $). We find the following result \begin{eqnarray} [\Delta I_{\mu \nu}^0]_R (x) &=& \frac{1}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \frac{ \frac{1}{3} \ln x^2 M^2}{x^2} + \delta_{\mu \nu} \Box \frac{ - \frac{1}{6} \ln x^2 M^2}{x^2} \right] + \ldots \end{eqnarray} \chapter{Gauge parameters and the RG equations} \label{ap_Gauge} In order to fix the gauge when quantizing the different theories that we have treated, a gauge fixing term that depends on an additional parameter $\alpha$ has been added to the action. Therefore, we have to take care of the running of this gauge parameter in the RG equations.
In general, these equations applied to two-point functions are of the form \begin{eqnarray} \left[ M \frac{\partial}{\partial M} + \beta \frac{\partial}{\partial g} + \gamma_{\alpha} \frac{\partial}{\partial \alpha} - 2 \gamma \right] \Gamma_2 = 0 \;, \nonumber \end{eqnarray} where $\gamma$ is the anomalous dimension, $\beta$ the beta function and $\gamma_{\alpha}$ the function that takes care of the running of the gauge parameter. So, we need $\gamma_{\alpha}$ in order to verify the two-loop background RG equations. However, since in the models that we have considered the first dependence on $\alpha$ of the background two-point function can only arise at the one-loop level, we can obtain the relevant terms of the $\gamma_{\alpha}(g)$ expansion by evaluating the one-loop RG equations for the quantum gauge field self-energies. \section{Abelian examples} \subsection{QED} \begin{figure}[ht] \centerline{\epsfbox{QED1loop_quantum.eps}} \caption{One-loop QED diagram. Wavy lines correspond to gauge fields and solid lines to fermion fields.} \label{QED1loop_quantum} \end{figure} The one-loop contribution to the quantum photon self-energy is shown in figure \ref{QED1loop_quantum}. This has an explicit expression of the form \begin{eqnarray} \Pi_{\mu \nu}^{AA \;(1 \; loop)} &=& - (i e)^2 Tr \left[ \gamma_{\mu} \gamma^{\lambda} \partial_{\lambda}^x \Delta \gamma_{\nu} \gamma^{\sigma} \partial_{\sigma}^y \Delta \right] \;, \nonumber \\ \end{eqnarray} which is the same as the one we found for the background gauge fields in (\ref{QED1loop_bare}). Hence, with the same procedure as for the background fields, we find the following renormalized value \begin{eqnarray} \Pi_{\mu \nu R}^{AA \; (1)} (x) &=& - ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \left[ - \frac{e^2}{12 \pi^2 ( 4 \pi^2 )} \Box \frac{ \ln x^2 M^2}{x^2} - \frac{e^2}{36 \pi^2} \delta (x) \right] \;.
\end{eqnarray} With this, the quantum gauge field two-point function expanded to one-loop order in a general gauge is \begin{eqnarray} \Gamma_{\mu \nu \;R}^{A A}(x) &=& \left( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box \right) \left[\delta (x) - \frac{e^2}{9 (4 \pi^2)} \delta (x) - \frac{e^2}{3 (4 \pi^2)^2} \Box \frac{ \ln x^2 M^2}{x^2} \right] - \frac{1}{\alpha} \partial_{\mu} \partial_{\nu} \delta (x) + {\cal{O}}(e^4) \;. \nonumber \\ \label{QED1loop_Q_r} \end{eqnarray} As this function satisfies the usual RG equation \begin{eqnarray} \left[ M \frac{\partial}{\partial M} + \beta(e) \frac{\partial}{\partial e} + \gamma_{\alpha}(e) \frac{\partial}{\partial \alpha} - 2 \gamma_A (e) \right] \Gamma_{\mu \nu \;R}^{A A} = 0 \;, \end{eqnarray} using the one-loop expression (\ref{QED1loop_Q_r}) we find the following one-loop values for $\gamma_A$ and $\gamma_{\alpha}$ \begin{eqnarray} \gamma_A(e) &=& \frac{1}{3(4 \pi^2)} e^2 + \ldots \\ \gamma_{\alpha} (e) &=& - \frac{2 \alpha}{3 (4 \pi^2)} e^2 + \ldots \;. \label{QED1loop_q_alfa} \end{eqnarray} \subsection{Super QED} \begin{figure}[ht] \centerline{\epsfbox{SQED1loop_quantum.eps}} \caption{One-loop Super QED diagram. Wavy lines correspond to gauge fields and solid lines to $\Phi_{+}$ or $\Phi_{-}$ propagators.} \end{figure} For the supersymmetric extension of the previous model, the situation is very similar. We start by considering the kinetic part of the action for the quantum gauge field in a generic gauge. 
This is of the form \begin{eqnarray} S_0^{(2)} &=& \frac{1}{2} \int d^4 x d^4 \theta \; V D^{\alpha} \bar{D}^2 D_{\alpha} V - \frac{1}{\alpha} \int d^4 x d^4 \theta \; ( D^2 V) ( \bar{D}^2 V) \nonumber \\ &=& - \frac{1}{2} \int d^4 x d^4 \theta \; V \Box \Pi_{\frac{1}{2}} V - \frac{1}{2 \alpha} \int d^4 x d^4 \theta \; V \Box \Pi_0 V \;, \end{eqnarray} where we have used the projection operators $\Pi_{\frac{1}{2}} = - D^{\alpha} \bar{D}^2 D_{\alpha} / \Box$ and $\Pi_0 = ( D^2 \bar{D}^2 + \bar{D}^2 D^2 ) / \Box$. As in QED, the one-loop renormalized contribution to the quantum gauge field self-energy is the same as the one we evaluated for the background case (\ref{SQED_1loop_ren}). So, the complete expansion to one-loop order of this self-energy is \begin{eqnarray} \Gamma(x) &=& - \frac{1}{2} \Box \Pi_{\frac{1}{2}} \delta (x) - \frac{1}{2 \alpha} \Box \Pi_0 \delta(x) + \frac{g^2}{4 ( 4 \pi^2)^2} \Box \Pi_{\frac{1}{2}} \Box \frac{ \ln x^2 M^2}{x^2} + {\cal{O}}(g^4)\;. \end{eqnarray} Thus, considering that this amplitude satisfies an RG equation of the form \begin{eqnarray} \left[ M \frac{\partial}{\partial M} + \beta (g) \frac{\partial}{\partial g} + \gamma_{\alpha}(g) \frac{\partial}{\partial \alpha} - 2 \gamma_V(g) \right] \Gamma = 0 \;, \end{eqnarray} the values that we find for $\gamma_V$ and $\gamma_{\alpha}$ are \begin{eqnarray} \gamma_V &=& \frac{1}{2(4 \pi^2)} g^2 + \ldots \nonumber \\ \gamma_{\alpha} &=& - \frac{\alpha}{(4 \pi^2)} g^2 + \ldots \end{eqnarray} \section{Non-abelian examples} \subsection{Yang-Mills} \label{ap_Gauge_YM} \begin{figure}[ht] \centerline{\epsfbox{YM1loop_quantum.eps}} \caption{One-loop Yang-Mills diagrams. Curvy lines correspond to gauge fields and dashed lines to ghosts.} \label{YM1loop_quantum_diag} \end{figure} We begin by writing the effective action as \begin{eqnarray} \Gamma &=& \frac{1}{2} \int d^4 x d^4 y \; A_{\mu}^a (x) A_{\nu}^b (y) \Gamma^{AA \; ab}_{\mu \nu}(x-y) + {\cal{O}} (A^3) \;.
\end{eqnarray} If we consider the part of the Yang-Mills Lagrangian which depends only on the quantum fields $A_{\mu}^a$, in a generic gauge this is of the form \begin{eqnarray} \frac{1}{4} F_{\mu \nu}^a F_{\mu \nu}^a + \frac{1}{2} (1 + \xi) ( \partial_{\mu} A_{\mu}^a)(\partial_{\nu} A_{\nu}^a ) \;. \end{eqnarray} Notice that we have redefined the usual gauge parameter $\alpha$ as $\frac{1}{\alpha} = ( 1 + \xi)$. With this, the effective action can be written as \begin{eqnarray} \Gamma &=& \frac{1}{2} \int d^4 x d^4 y \; A_{\mu}^a \left[ \delta^{ab}\left( - \delta_{\mu \nu} \Box \delta (x-y) - \xi \partial_{\mu} \partial_{\nu} \delta (x-y) \right)- \Pi_{\mu \nu}^{AA\;ab}(x-y) \right] A_{\nu}^b(y) \nonumber \\ & & + {\cal{O}} (A^3) \;. \end{eqnarray} At the one-loop level, as is shown in figure \ref{YM1loop_quantum_diag}, we have contributions with gauge and ghost loops. We first obtain the fully expanded bare expressions (in Feynman gauge) and then renormalize them according to CDR rules. \begin{itemize} \item {\bf Gauge loop} \end{itemize} \begin{eqnarray} && \frac{g^2 f^{acd} f^{bdc}}{2} \Delta_{xy} \left[ \delta_{\mu \rho} ( D_{\sigma}^x - \stackrel{\leftarrow}{\partial_{\sigma}^{ x}}) + \delta_{\sigma \mu} ( \partial_{\rho}^x - D_{\rho}^x ) + \delta_{\rho \sigma} ( \stackrel{\leftarrow}{\partial_{\mu}^{x}} -\partial_{\mu}^x ) \right] \nonumber \\ & & \times \left[ \delta_{\nu \sigma} (D_{\rho}^y - \partial_{\rho}^y) + \delta_{\rho \nu} ( \stackrel{\leftarrow}{\partial_{\sigma}^{y}} - D_{\sigma}^y) + \delta_{\rho \sigma} ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^{y}} ) \right] \Delta_{xy} \nonumber \\ &=& \frac{g^2 C_A \delta^{ab}}{2} \left[ 2 ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \Delta^2 + 10 \partial_{\mu} ( \Delta \partial_{\nu} \Delta ) - 10 \Delta \partial_{\mu} \partial_{\nu} \Delta \right. \nonumber \\ & & - \left.
4 \delta_{\mu \nu} \partial^{\lambda} ( \Delta \partial_{\lambda} \Delta ) - 2 \delta_{\mu \nu} \Delta ( \Box \Delta ) \right] \;. \end{eqnarray} \begin{itemize} \item {\bf Ghost loop} \end{itemize} \begin{eqnarray} && - g^2 f^{adc} f^{bcd} \Delta_{xy} \stackrel{\leftarrow}{\partial_{\mu}^{ x}} \partial_{\nu}^y \Delta_{xy} = - g^2 C_A \delta^{ab} \left[ \partial_{\mu}( \Delta \partial_{\nu} \Delta) - \Delta \partial_{\mu} \partial_{\nu} \Delta \right] \;. \end{eqnarray} Adding the two previous results we find the total bare contribution to be \begin{eqnarray} \Pi_{\mu \nu\;(1)}^{AA\;ab}(x) &=& g^2 C_A \delta^{ab} \left[ \partial_{\mu} \partial_{\nu} \Delta^2 - \delta_{\mu \nu} \Box \Delta^2 + 4 \partial_{\mu} ( \Delta \partial_{\nu} \Delta ) - 2 \delta_{\mu \nu} \partial^{\lambda} ( \Delta \partial_{\lambda} \Delta ) \right. \nonumber \\ & & - \left. 4 \Delta \partial_{\mu} \partial_{\nu} \Delta - \delta_{\mu \nu} \Delta ( \Box \Delta ) \right] \;, \label{YM1loop_quantum_bare_prop} \end{eqnarray} and with CDR identities it is straightforward to obtain the renormalized result as \begin{eqnarray} \left. \Pi_{\mu \nu\;(1)}^{AA\;ab}(x) \right|_R &=& g^2 C_A \delta^{ab} \left[ \frac{5}{3}( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu } \Box) \Delta^2_R (x) - \frac{1}{72 \pi^2} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \delta (x) \right] \nonumber \\ &=& - \frac{g^2 C_A \delta^{ab}}{144 \pi^2} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \left[ \frac{15}{4 \pi^2} \Box \frac{ \ln x^2 M^2}{x^2} + 2 \delta (x) \right] \;. \label{YM1loop_quantum} \end{eqnarray} So, $\Gamma^{ ab}_{\mu \nu}$ can be written as \begin{eqnarray} \Gamma^{AA \; ab}_{\mu \nu \;R } (x)&=& - \delta_{\mu \nu} \Box \delta (x) - \xi \partial_{\mu} \partial_{\nu} \delta (x) + \delta^{ab} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \left[ \frac{5 g^2 C_A}{48 \pi^2 (4 \pi^2)} \Box \frac{\ln x^2 M^2}{x^2} \right. \nonumber \\ & &+ \left. 
\frac{g^2 C_A}{72 \pi^2 (4 \pi^2)} \delta (x) \right] + {\cal{O}} (g^4) \;. \end{eqnarray} Inserting this in the RG equation \begin{eqnarray} \left[ M \frac{\partial}{\partial M} + \beta (g) \frac{\partial}{\partial g} + \gamma_{\xi} \frac{\partial}{\partial \xi} - 2 \gamma_A \right] \Gamma^{AA \; ab}_{\mu \nu \;R} |_{\xi=0} =0 \;, \end{eqnarray} we can easily obtain the values for $\gamma_{\xi}$ and $\gamma_A$ as \begin{eqnarray} \gamma_{\xi} &=& - \frac{5 C_A}{24 \pi^2} g^2 + \cdots \nonumber \\ \gamma_A &=& - \frac{5 C_A}{48 \pi^2} g^2 + \cdots \ \label{YM1loop_g_xi} \end{eqnarray} \subsection{Super Yang-Mills} \begin{figure}[ht] \centerline{\epsfbox{SYM1loop_quantum.eps}} \caption{One-loop Super Yang-Mills diagrams. Wavy lines correspond to gauge fields and dashed lines represent ghosts.} \label{SYM1loop_quantum_diag} \end{figure} The kinetic term of the Super Yang-Mills action for quantum gauge fields in a generic gauge is \begin{eqnarray} S^{(2)}_V &=& - \frac{1}{2} tr \int d^4 x d^4 y d^4 \theta \; V^{a}(x,\theta) \left[ \Box \Pi_{1/2} + (1 + \xi ) \Box \Pi_0 \right] V^{a} (y, \theta) \delta (x-y) \;, \nonumber \\ \end{eqnarray} where again the usual gauge parameter $\alpha$ is redefined as $\frac{1}{\alpha} = 1 +\xi$. We have for these fields an effective action of the form \begin{eqnarray} \Gamma_{V} = \int d^4 x d^4 y d^4 \theta \; V^{a}(x, \theta) \Gamma^{ab\;(2)}_V (x-y) V^{b}(y,\theta) + \ldots \end{eqnarray} The diagrams that contribute to the one-loop quantum gauge field self-energy are those of figure \ref{SYM1loop_quantum_diag}. As an example, we will detail the calculation of the ghost contribution (diagram $(a)$). This is of the following form \begin{eqnarray} \Gamma^{1 \; loop}_{V \;(a)} &=& \frac{g^2 C_A}{2^3} \int d^8 z_1 d^8 z_2 \; V^{a}(z_1) V^{a}(z_2) \left\{ \left[ D^2_2 \bar{D}^2_2 P_{12} \right] \left[ D^2_2 \bar{D}^2_2 P_{12} \right] + \left[ \bar{D}^2_2 D^2_2 P_{12} \right] \right. \nonumber \\ & & \left.
\times \left[ \bar{D}^2_2 D^2_2 P_{12} \right] - 2 \left[ \bar{D}^2_2 D^2_2 P_{12} \right] \left[ D^2_2 \bar{D}^2_2 P_{12} \right] \right\} \nonumber \\ &=& \frac{g^2 C_A}{2^3} \int d^8 z_1 d^8 z_2 \; V^a(z_1) \left[ \bar{D}^2 D^2 V^{a}(z_2) \right] P_{12} \left[ D^2_2 \bar{D}^2_2 P_{12} \right] \nonumber \\ & & + \frac{g^2 C_A}{2^3} \int d^8 z_1 d^8 z_2 \; V^{a}(z_1) \left[ D^2 \bar{D}^2 V^a (z_2) \right] P_{12} \left[ D^2_2 \bar{D}^2_2 P_{12} \right] \nonumber \\ & & - \frac{g^2 C_A}{2^2} \int d^8 z_1 d^8 z_2 \; V^{a}(z_1) \left[\bar{D}^2 D^2 V^{a}(z_2) \right] P_{12} \left[ \bar{D}^2_2 D^2_2 P_{12} \right] \nonumber \\ & & - \frac{g^2 C_A}{2^2} \int d^8 z_1 d^8 z_2 \; V^{a}(z_1) V^{a}(z_2) P_{12} \left[ \Box \bar{D}^2_2 D^2_2 P_{12} \right] \nonumber \\ & & + \frac{i g^2 C_A}{2^2} \int d^8 z_1 d^8 z_2 \; V^{a}(z_1) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} V^a(z_2) \right] P_{12} \left[ \partial_{\alpha \dot{\alpha}}^2 \bar{D}^2_2 D^2_2 P_{12} \right] \;, \end{eqnarray} where in the second step we have used identity (\ref{D_algebra_id}). With this expression it is clear that we can apply the $\delta$-function property (\ref{SUSY_delta_propagators}) that leaves a free Grassmannian $\delta$-function, allowing us to perform one of the $\theta$ integrals. At this point, renormalizing and identifying $x_1 = x$, $x_2 = y$, we find for this contribution \begin{eqnarray} \Gamma^{1\;loop}_{V\;(a)} &=& - \frac{g^2 C_A}{2} \int d^4 x d^4 y d^4 \theta \; V^a (x, \theta) \left[ - \frac{1}{4} \Box \Pi_0 - \frac{1}{4} \Box \Pi_{1/2} \right] V^a (y, \theta) \; \Delta^2_{xy \; R} \;. \nonumber \\ \end{eqnarray} Diagram $(b)$ can be evaluated in a similar way \cite{Grisaru:1979wc}. The final result is found to be \begin{eqnarray} \Gamma^{1\;loop}_{V\;(b)} &=& - \frac{g^2 C_A}{2} \int d^4 x d^4 y d^4 \theta \; V^a(x, \theta) \left[ - \frac{5}{4} \Box \Pi_{1/2} + \frac{1}{4} \Box \Pi_0 \right] V^a (y, \theta) \; \Delta^2_{xy \; R} \;.
\nonumber \\ \end{eqnarray} The total contribution is then obtained as \begin{eqnarray} \Gamma^{1\;loop}_V &=& \Gamma^{1\;loop}_{V\;(a)} + \Gamma^{1\;loop}_{V\;(b)} \nonumber \\ &=& - \frac{3 g^2 C_A}{16 (4 \pi^2)^2} \int d^4 x d^4 y d^4 \theta \; V^a(x, \theta) \Box \Pi_{1/2} V^{a} (y, \theta) \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} \;. \label{SYM1loop_eff_action_quantum} \end{eqnarray} From the effective action, defining $\Gamma_V^{ab\;(2)}(x) = \delta^{ab} \Gamma^{(2)}_V(x)$, we have an RG equation for the quantum gauge fields of the form \begin{eqnarray} \left. \left[ M \frac{\partial}{\partial M} + \beta(g) \frac{\partial}{\partial g} + \gamma_{\xi}(g) \frac{\partial}{\partial \xi} - 2 \gamma_V \right] \Gamma^{(2)}_V (x) \right|_{\xi = 0} = 0 \;, \end{eqnarray} and we find $\Gamma^{(2)}_V (x)$ to be \begin{eqnarray} \Gamma^{(2)}_V (x) = - \frac{1}{2} \delta(x) \Box \Pi_{1/2} - \frac{1}{2} \delta(x) (1 + \xi) \Box \Pi_0 - \frac{3 g^2 C_A}{16 ( 4 \pi^2)^2} \Box \frac{ \ln x^2 M^2}{x^2} \Box \Pi_{1/2} + {\cal{O}}(g^4) \;. \nonumber \\ \end{eqnarray} Straightforward operations lead us to obtain the following values for $\gamma_V$ and $\gamma_\xi$ \begin{eqnarray} \gamma_V &=& - \frac{3 C_A}{8 (4 \pi^2)} g^2 + \ldots \nonumber \\ \gamma_\xi &=& - \frac{3 C_A}{4 (4 \pi^2)}g^2 + \ldots \label{SYM1loop_RG_quantum} \end{eqnarray} \chapter{Conventions for supersymmetric calculations} \label{ap_SUSY} In this section we will briefly review the most relevant results and conventions of supersymmetry and superspace that are used in this work. Although this topic is covered in great detail in various references (e.g. \cite{Sohnius:1985qm,West:1990tg,Wess:1992cp}), we will follow closely \cite{Gates:1983nr}, where the reader can find a complete treatment of the subject.
\section{Notation} \label{SUSY_Notation} Setting up the notation, we express vectors (representations $(\frac{1}{2}, \frac{1}{2})$ of the Lorentz group) with a pair of two-valued spinor indices, one undotted and one dotted, $V^{\alpha \dot{\alpha}}$. To relate these to a vector in an arbitrary basis with index $\underline{b}$ we use the Pauli matrices: \begin{tabular}{lcc} \\ Fields: & $V^{\alpha \dot{\alpha}} = \frac{1}{\sqrt{2}} \sigma_{\underline{b}}^{\alpha \dot{\alpha}} V^{\underline{b}}$ & $V^{\underline{b}} = \frac{1}{\sqrt{2}} \sigma^{\underline{b}}_{\alpha \dot{\alpha}} V^{\alpha \dot{\alpha}} $ \\ \\ Derivatives: & $ \partial_{\alpha \dot{\alpha}} = \sigma^{\underline{b}}_{\alpha \dot{\alpha}} \partial_{\underline{b}} $ & $ \partial_{\underline{b}} = \frac{1}{2} \sigma_{\underline{b}}^{\alpha \dot{\alpha}} \partial_{\alpha \dot{\alpha}}$ \\ \\ Coordinates: & $ x^{\alpha \dot{\alpha}} = \frac{1}{2} \sigma_{\underline{b}}^{\alpha \dot{\alpha}} x^{\underline{b}}$ & $ x^{\underline{b}} = \sigma^{\underline{b}}_{\alpha \dot{\alpha}} x^{\alpha \dot{\alpha}}$ \\ \\ \end{tabular} Pauli matrices satisfy \begin{equation} \sigma_{\underline{b}}^{\alpha \dot{\alpha}} \sigma^{\underline{c}}_{\alpha \dot{\alpha}} = 2 \delta_{\underline{b}}^{~\underline{c}} ~~~,~~~ \sigma^{\underline{b}}_{\alpha \dot{\alpha}} \sigma_{\underline{b}}^{\beta \dot{\beta}} = 2 \delta_{\alpha}^{~ \beta} \delta_{\dot{\alpha}}^{~ \dot{\beta}} \;. \end{equation} With these conventions, the Super Yang-Mills coupling constant ($g$) that we use is related to the usual one ($g_{SYM}$) by $g = \sqrt{2} g_{SYM}$ \cite{Gates:1983nr}. A graded commutator $[\Omega_A, \Omega_B \} \equiv \Omega_A \Omega_B - (-)^{AB} \Omega_B \Omega_A $ is defined as the anticommutator $\anticomm{\Omega_A}{\Omega_B}$ when both $\Omega_A$ and $\Omega_B$ are fermionic operators and the commutator $\comm{\Omega_A}{\Omega_B}$ otherwise.
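As a quick worked instance of this graded bracket (an illustrative addition; the generators used here are those of the super-Poincar\'e algebra reviewed below), the bracket of two fermionic generators is an anticommutator, while a mixed bosonic-fermionic bracket reduces to an ordinary commutator:

```latex
[ Q_{\alpha} , \bar{Q}_{\dot{\beta}} \} = \anticomm{Q_{\alpha}}{\bar{Q}_{\dot{\beta}}} \;, \qquad
[ P_{\alpha \dot{\alpha}} , Q_{\beta} \} = \comm{P_{\alpha \dot{\alpha}}}{Q_{\beta}} \;,
```

since the sign factor $(-)^{AB}$ equals $-1$ only when both entries are fermionic.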
Symmetrization and antisymmetrization (sum over all permutations of indices with the corresponding sign in the case of antisymmetrization) are indicated by $(~~ )$ and $[~~ ]$ respectively. Indices between vertical lines $|~~ |$ are not taken into account in the previous operations. We also define a graded antisymmetrization $[~~ )$ so that $[ \Omega_A, \Omega_B \} \equiv \Omega_{[A} \Omega_{B)}$. For raising and lowering spinor indices, we use the matrices $C_{\alpha \beta}$ that have the following properties \begin{eqnarray} \bar{C}_{\alpha \beta} = C_{\dot{\beta} \dot{\alpha}} & & C_{\alpha \beta} C^{\gamma \delta} = \delta_{\left[ \alpha \right. }^{~~\gamma} \delta_{\left. \beta \right]}^{~~\delta} \nonumber \\ C_{\alpha \beta} = C^{\dot{\beta} \dot{\alpha}} & & C_{\alpha \beta} = C_{\dot{\alpha} \dot{\beta}} \end{eqnarray} With these matrices, if $\psi_{\alpha}$ denotes a spinor we define \begin{eqnarray} & &\psi^2 = \frac{1}{2} C_{\alpha \beta} \psi^{\alpha} \psi^{\beta}= \frac{1}{2} \psi^{\alpha} \psi_{\alpha} \\ & & \psi_{\alpha} = \psi^{\beta} C_{\beta \alpha} ~~~~,~~~~ \psi^{\alpha} = C^{\alpha \beta} \psi_{\beta} \end{eqnarray} Similar relations hold for the Hermitian conjugate $\bar{\psi}^{\dot{\alpha}}$. In the case of a vector $V^{\alpha \dot{\alpha}}$, we define the square to be \begin{eqnarray} V^2 &=& \frac{1}{2} V^{\alpha \dot{\alpha}} V_{\alpha \dot{\alpha}} \;. \end{eqnarray} \section{Supersymmetric algebra} Coleman and Mandula \cite{Coleman:1967ad} showed that any group of {\em{bosonic}} symmetries of the S-matrix is the direct product of the Poincar\'e group with an internal symmetry group. This implies that the commutator of the bosonic generators of the internal symmetry group and the generators of the Poincar\'e group ($J_{\alpha \beta}, \bar{J}_{\dot{\alpha} \dot{\beta}}, P_{\alpha \dot{\beta}}$) vanishes.
In \cite{Haag:1974qh}, Haag, Lopuszanski and Sohnius avoided this no-go theorem by allowing fermionic symmetry generators $Q_{a \alpha}$ (where $a=1, \ldots, N$ is an isospin index). They found the most general super-Poincar\'e algebra to be \begin{eqnarray} \anticomm{Q_{a \alpha}}{\bar{Q}^b_{\dot{\beta}}} &=& \delta_{a}^{\; b} P_{\alpha \dot{\beta}} \nonumber \\ \anticomm{Q_{a \alpha}}{Q_{b \beta}} &=& C_{\alpha \beta} Z_{a b} \nonumber \\ \comm{Q_{a \alpha}}{P_{\beta \dot{\beta}}} &=& \comm{P_{\alpha \dot{\alpha}}}{P_{\beta \dot{\beta}}} = \comm{\bar{J}_{\dot{\alpha}{\dot{\beta}}}}{Q_{c \gamma}} = 0 \nonumber \\ \comm{J_{\alpha \beta}}{Q_{c \gamma}} &=& \frac{i}{2} C_{\gamma(\alpha}Q_{c \beta)} \nonumber \\ \comm{J_{\alpha \beta}}{P_{\gamma \dot{\gamma}}} &=& \frac{i}{2} C_{\gamma ( \alpha}P_{\beta ) \dot{\gamma}} \nonumber \\ \comm{J_{\alpha \beta}}{J^{\gamma \delta}} &=& - \frac{i}{2} \delta_{( \alpha}^{\; (\gamma} J_{\beta)}^{\; \delta)} \nonumber \\ \comm{J_{\alpha \beta}}{\bar{J}_{\dot{\alpha} \dot{\beta}}} &=& \comm{Z_{a b}}{Z_{c d}} = \comm{Z_{a b}}{\bar{Z}^{cd}} = 0 \;, \end{eqnarray} where $Z_{a b}$ are $\frac{1}{2} N(N-1)$ complex central charges. The $N=1$ case is called simple supersymmetry, whereas $N>1$ is called extended supersymmetry. We consider only the $N=1$ case. For theories satisfying the super-Poincar\'e algebra some remarkable properties can be derived \cite{Gates:1983nr}: \begin{itemize} \item Equality of bosonic and fermionic degrees of freedom. \item Energy is positive, and in particular, vacuum energy can be shown to vanish. \end{itemize} \section{Superspace and superfields} A compact technique for working with supersymmetric theories is what is called superspace \cite{Salam:1974yz}. By means of anticommuting parameters we can integrate the super-Poincar\'e algebra and obtain a group, the super-Poincar\'e group \cite{Gates:1983nr,Sohnius:1985qm}.
Just as usual spacetime can be defined as the coset space Poincar\'e group/Lorentz group, superspace is defined to be the coset space super-Poincar\'e group/Lorentz group. Hence, superspace is a space spanned by the usual real commuting spacetime coordinates and new anticommuting coordinates $z^{A} = ( x^{\alpha \dot{\alpha}}, \theta^{\alpha}, \bar{\theta}^{\dot{\alpha}})$. Supersymmetry generators are realized as coordinate transformations in superspace \cite{Gates:1983nr}. As the usual generators of the Poincar\'e algebra can be represented by differential operators, $Q_{\alpha}$ and $\bar{Q}_{\dot{\alpha}}$ are found to have the following expressions \cite{Gates:1983nr} \begin{eqnarray} Q_{\alpha} &=& i \left( \partial_{\alpha} - \frac{i}{2} \bar{\theta}^{\dot{\alpha}} \partial_{\alpha \dot{\alpha}} \right) \nonumber \\ \bar{Q}_{\dot{\alpha}} &=& i \left( \bar{\partial}_{\dot{\alpha}} - \frac{i}{2} \theta^{\alpha} \partial_{\alpha \dot{\alpha}} \right) \;. \end{eqnarray} It has to be noted that neither the usual coordinate derivative $\partial_{\alpha \dot{\alpha}}$ nor the fermionic coordinate derivatives $\partial_{\alpha}$, $\bar{\partial}_{\dot{\alpha}}$ are invariant under supertranslations (those that are generated by $Q_{\alpha}$, $\bar{Q}_{\dot{\alpha}}$).
However, supersymmetric covariant derivatives that are invariant under these transformations can be defined as \cite{Gates:1983nr}: \begin{eqnarray} D_{\alpha} &=& \partial_{\alpha} + \frac{i}{2} \bar{\theta}^{\dot{\alpha}} \partial_{\alpha \dot{\alpha}} \nonumber \\ \bar{D}_{\dot{\alpha}} &=& \bar{\partial}_{\dot{\alpha}} + \frac{i}{2} \theta^{\alpha} \partial_{\alpha \dot{\alpha}} \label{Susy_cov_dev} \end{eqnarray} We list here some relevant algebraic relations of these derivatives \begin{eqnarray} \anticomm{D_{\alpha}}{\bar{D}_{\dot{\alpha}}} &=& i \partial_{\alpha \dot{\alpha}} \nonumber \\ \anticomm{D_{\alpha}}{D_{\beta}} &=& \anticomm{\bar{D}_{\dot{\alpha}}}{\bar{D}_{\dot{\beta}}} = 0 \nonumber \\ \comm{D^{\alpha}}{\bar{D}^2} &=& i \partial^{\alpha \dot{\alpha}} \bar{D}_{\dot{\alpha}} \nonumber \\\anticomm{D^2}{\bar{D}^2} &=& \Box + D^{\alpha} \bar{D}^2 D_{\alpha} \nonumber \\ D^2 \bar{D}^2 D^2 &=& \Box D^2 \nonumber \\ D^2 \theta^2 &=& - 1 \;. \label{SUSY_D_algebra} \end{eqnarray} Also, with these derivatives we can define two projection operators as \begin{eqnarray} \Pi_{\frac{1}{2}} &=& - D^{\alpha} \bar{D}^2 D_{\alpha} / \Box \nonumber \\ \Pi_0 &=& ( D^2 \bar{D}^2 + \bar{D}^2 D^2 ) / \Box \;, \end{eqnarray} which verify that $\Pi_{\frac{1}{2}} + \Pi_0 = 1$. \subsection{Superfields} Superfields are defined to be multispinor functions over the superspace $\Phi \equiv \Phi(x, \theta, \bar{\theta})$. It is clear that the Taylor expansion of these functions in terms of $\theta^{\alpha}$ and $\bar{\theta}^{\dot{\alpha}}$ breaks off at order $\theta^2 \bar{\theta}^2$, due to the anticommuting nature of these parameters. The different terms of the expansion are called the {\em{component fields}}. We can impose constraints on a superfield and reduce the number of independent component fields.
The most relevant ones for our work are the following two: {\bf{Chiral superfields}} With the aid of the covariant derivative (\ref{Susy_cov_dev}) we can impose the constraint \begin{eqnarray} \bar{D}_{\dot{\alpha}} \Phi = 0 \;. \end{eqnarray} If $\Phi$ verifies the previous relation it is called a chiral superfield. This constraint implies that this superfield has the following component expansion \cite{Gates:1983nr} \begin{eqnarray} \Phi &=& A + \theta^{\alpha} \psi_{\alpha} - \theta^2 F + \frac{i}{2} \theta^{\alpha} \bar{\theta}^{\dot{\alpha}} \partial_{\alpha \dot{\alpha}} A + \frac{i}{2} \theta^2 \bar{\theta}^{\dot{\alpha}} \partial_{\alpha \dot{\alpha}} \psi^{\alpha} + \frac{1}{4} \theta^2 \bar{\theta}^2 \Box A \;. \end{eqnarray} {\bf{Real superfields}} A superfield $V$ is called a real superfield if it verifies the constraint $V=V^{+}$. With this constraint, the component expansion is found to be \cite{Gates:1983nr} \begin{eqnarray} V &=& C + \theta^{\alpha} \chi_{\alpha} + \bar{\theta}^{\dot{\alpha}} \bar{\chi}_{\dot{\alpha}} - \theta^2 M - \bar{\theta}^2 \bar{M} + \theta^{\alpha} \bar{\theta}^{\dot{\alpha}} A_{\alpha \dot{\alpha}} - \bar{\theta}^2 \theta^{\alpha} \lambda_{\alpha} \nonumber \\ & & - \theta^2 \bar{\theta}^{\dot{\alpha}} \bar{\lambda}_{\dot{\alpha}} + \theta^2 \bar{\theta}^2 D \;. \label{SUSY_V_expansion} \end{eqnarray} \subsection{Superspace integration and superfunctional derivation} \label{SUSY_integration} In order to obtain supersymmetric invariant actions, we have to define the integration over the anticommuting coordinates. The basic properties of these integrals are \begin{eqnarray} \int d \theta \; \theta &=& 1 \nonumber \\ \int d \theta \; 1 &=& 0 \end{eqnarray} With this definition, the delta function of the anticommuting variables is found to be \begin{equation} \delta(\theta - \theta^{\prime}) = (\theta - \theta^{\prime}) \;. 
\end{equation} These properties imply that the integration over $\theta$ is identical to differentiation, or, equivalently, that inside a $d^4 x$ integration we have $D_{\alpha} = \int d \theta_{\alpha}$. Also, since supersymmetric variations are total derivatives, if we consider $\Psi$ a general superfield and $\Phi$ a chiral superfield, the following quantities are supersymmetric invariants \begin{eqnarray} S_{\Psi} &=& \int d^4 x d^4 \theta \; \Psi \nonumber \\ S_{\Phi} &=& \int d^4 x d^2 \theta \; \Phi \end{eqnarray} As it will be necessary when studying perturbation theory in superspace, we have to consider the superfield extension of the functional derivative. It is found \cite{Gates:1983nr} for a general superfield $\Psi$ and a chiral superfield $\Phi$ that the superfunctional derivatives are \begin{eqnarray} \frac{\delta \Psi(z)}{\delta \Psi(z^{\prime})} &=& \delta^{8} (z-z^{\prime}) = \delta^4(x-x^{\prime}) \delta^{4}(\theta - \theta^{\prime}) \nonumber \\ \frac{\delta \Phi(z)}{\delta \Phi(z^{\prime})} &=& \bar{D}^2 \delta^8 (z-z^{\prime}) \end{eqnarray} Finally, this integral definition implies that we have the following integration by parts rules for the superspace derivatives $D_{\alpha}$ \begin{eqnarray} \int d^8 z \; A (D_{\alpha} B ) C &=& \int d^8 z \; \left[ - (D_{\alpha} A) BC - AB (D_{\alpha} C) \right] \nonumber \\ \int d^8 z \; A (D_{\alpha} E_{\beta} ) C &=& \int d^8 z \; \left[ - (D_{\alpha} A) E_{\beta} C + A E_{\beta} (D_{\alpha} C) \right] \nonumber \\ \int d^8 z \; E_{\beta} ( D_{\alpha} A) C &=& \int d^8 z \; \left[ ( D_{\alpha} E_{\beta}) AC - E_{\beta} A (D_{\alpha} C) \right] \nonumber \\ \int d^8 z \; E_{\beta} (D_{\alpha} F_{\gamma} ) C &=& \int d^8 z \; \left[ (D_{\alpha} E_{\beta}) F_{\gamma} C + E_{\beta} F_{\gamma} (D_{\alpha} C) \right] \;, \label{SUSY_IntegrationByParts}\end{eqnarray} where $A$, $B$ and $C$ are bosonic (commuting) superfields and $E_{\beta}$, $F_{\gamma}$ fermionic (anticommuting) superfields.
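As a short worked example of these rules (a sketch using only the conventions above), combining $\int d^2 \theta = D^2$ with the relation $D^2 \theta^2 = -1$ of (\ref{SUSY_D_algebra}) and the chiral component expansion of the previous section shows that the chiral integral projects onto the $F$-component:

```latex
\int d^4 x \, d^2 \theta \; \Phi
= \int d^4 x \; D^2 \Phi \, \Big|_{\theta = \bar{\theta} = 0}
= \int d^4 x \; D^2 \left( - \theta^2 F \right) \Big|_{\theta = \bar{\theta} = 0}
= \int d^4 x \; F \;,
```

where all other terms of the expansion drop out, either because they vanish at $\theta = \bar{\theta} = 0$ after differentiation or because they are total spacetime derivatives.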
\section{Superspace formulation of supersymmetric gauge theories} Among the different models we can consider in superspace (Wess-Zumino, nonlinear $\sigma-$models, etc.), we will restrict ourselves to gauge theories. To deal with them there are two different approaches that we detail here, as both will be useful when studying the supersymmetric extension of the background field method. \subsection{Chiral representation. Prepotentials} Starting with the linear case, by studying the field content of the $N=1$ vector multiplet \cite{Gates:1983nr}, we find that the corresponding irreducible off-shell field strength is a chiral superfield $W_{\alpha}$ that satisfies \begin{eqnarray} D^{\alpha} W_{\alpha} = - \bar{D}^{\dot{\alpha}} \bar{W}_{\dot{\alpha}} \;. \end{eqnarray} Hence, $W_{\alpha}$ can be expressed in terms of an unconstrained real scalar superfield $V$ by \begin{eqnarray} W_{\alpha} &=& i \bar{D}^2 D_{\alpha} V \nonumber \\ \bar{W}_{\dot{\alpha}} &=& - i D^2 \bar{D}_{\dot{\alpha}} V \;. \end{eqnarray} This definition is clearly invariant under gauge transformations with a chiral parameter $\Lambda$ of the form \begin{eqnarray} V^{\prime} = V + i( \bar{\Lambda} - \Lambda ) \;. \end{eqnarray} This linear study has to be generalized to the non-abelian case. To do so, we consider a multiplet of chiral scalar fields transforming according to some representation of a group with generators $T_{A}$. These fields are postulated to transform with chiral parameters $\Lambda = \Lambda^{A} T_{A}$ as \begin{eqnarray} \Phi^{\prime} &=& e^{i g \Lambda} \Phi \;, \end{eqnarray} with $g$ a coupling constant. Considering an antichiral field $\bar{\Phi}$, we find that it transforms with the complex conjugate representation with antichiral parameter $\bar{\Lambda}$.
To obtain an invariant expression we introduce a multiplet of real superfields $V^A$ that transforms as \begin{eqnarray} \left( e^{g V} \right)^{\prime} &=& e^{ig \bar{ \Lambda}} e^{g V} e^{-i g \Lambda} \;, \end{eqnarray} which implies that $\bar{\Phi} e^{g V} \Phi$ is invariant under gauge transformations. With the prepotential $V$ we can construct derivatives that are gauge covariant with respect to $\Lambda$ transformations \cite{Gates:1983nr} \begin{eqnarray} \nabla_{A} &=& D_{A} - i \Gamma_{A} = ( \nabla_{\alpha}, \nabla_{\dot{\alpha}}, \nabla_{\alpha \dot{\alpha}} ) \nonumber \\ &=& ( e^{-g V} D_{\alpha} e^{g V}, \bar{D}_{\dot{\alpha}}, - i \anticomm{\nabla_{\alpha}}{\nabla_{\dot{\alpha}}} ) \;, \label{SUSY_CR_CovDev} \end{eqnarray} which satisfy the requirement \begin{eqnarray} ( \nabla_A \Phi )^{\prime} = e^{i g \Lambda} ( \nabla_A \Phi ) \hspace{2cm} \nabla^{\prime}_A = e^{i g \Lambda} \nabla_A e^{-i g \Lambda} \end{eqnarray} These derivatives are called gauge chiral representation covariant derivatives. Their conjugates $\bar{\nabla}$, which we call gauge antichiral representation covariant derivatives, are covariant with respect to $\bar{\Lambda}$ transformations \begin{eqnarray} \bar{\nabla}_A &=& ( D_{\alpha}, e^{gV} \bar{D}_{\dot{\alpha}} e^{-g V}, -i \anticomm{\bar{\nabla}_{\alpha}}{\bar{\nabla}_{\dot{\alpha}}}) \;. \end{eqnarray} Both representations are related by a nonunitary similarity transformation \begin{eqnarray} \bar{\nabla}_A = e^{gV} \nabla_A e^{-g V} \;.
\end{eqnarray} Field strengths defined by commutation of the covariant derivatives can be expressed in terms of the following fields, denoted $W_{\alpha}$ and $W_{\dot{\alpha}}$ \cite{Gates:1983nr} \begin{eqnarray} W_{\alpha} &\equiv& i \bar{D}^2( e^{-g V} D_{\alpha} e^{gV} ) \nonumber \\ W_{\dot{\alpha}} &\equiv& e^{-g V} \bar{W}_{\dot{\alpha}} e^{gV} \equiv e^{-gV} ( - W_{\alpha})^{+} e^{gV} \end{eqnarray} that satisfy Bianchi identities of the form \begin{eqnarray} \nabla^{\alpha} W_{\alpha} = - \nabla^{\dot{\alpha}} W_{\dot{\alpha}} \;. \end{eqnarray} With these fields we can construct a gauge invariant action as \begin{eqnarray} S &=& \frac{1}{g^2} tr \int d^4 x d^2 \theta \; W^2 \nonumber \\ &=& - \frac{1}{2g^2} tr \int d^4 x d^4 \theta \; ( e^{-gV} D^{\alpha} e^{gV} ) \bar{D}^2 ( e^{-gV} D_{\alpha} e^{gV} ) \;. \end{eqnarray} \subsection{Vector representation. Covariant approach} \label{SUSY_vector_rep} In this approach we start by defining covariant derivatives and, by means of covariant constraints, express all quantities in terms of a single irreducible representation of supersymmetry. For a Lie algebra with generators $T_A$ we covariantize the derivatives by introducing connection fields $\Gamma_A = \Gamma_A^{B} T_B$ as \begin{eqnarray} \nabla_A = D_A - i \Gamma_A \;. \end{eqnarray} Under gauge transformations these derivatives are postulated to transform with a real superfield $K= K^A T_A$ as \begin{eqnarray} \nabla_A^{\prime} &=& e^{i gK} \nabla_A e^{-i gK} \;.
\end{eqnarray} By commutation, field strengths $F_{AB}$ are defined in terms of the connections and the flat superspace torsion $T_{AB}^{~~C}$ (where $T_{\alpha \dot{\beta}}^{~~\gamma \dot{\gamma}} = i \delta_{\alpha}^{~\gamma} \delta_{\dot{\beta}}^{~\dot{\gamma}}$ is the only nonzero component) as \begin{eqnarray} \left[ \nabla_A, \nabla_B \right\} &=& T_{AB}^{~~C} \nabla_C - i F_{AB} \nonumber \\ F_{AB} &=& D_{[A} \Gamma_{B \}} - i \left[ \Gamma_A , \Gamma_B \right\} - T_{AB}^{~~C} \Gamma_C \;. \end{eqnarray} Over these field strengths we can impose different constraints. \subsubsection{Conventional constraints} Since one can always add a covariant term to the connection without changing the transformation properties of the covariant derivative, we can impose the constraint \begin{eqnarray} F_{\alpha \dot{\alpha}} = 0 \;, \end{eqnarray} for if it does not hold we can define new connections $\Gamma_A^{\prime} = (\Gamma_{\alpha}, \bar{\Gamma}_{\dot{\alpha}}, \Gamma_{\alpha \dot{\alpha}} - i F_{\alpha \dot{\alpha}})$ that satisfy it identically. Hence, the covariant derivatives take the following form \begin{eqnarray} \nabla_A = ( \nabla_{\alpha}, \bar{\nabla}_{\dot{\alpha}}, -i\anticomm{\nabla_\alpha}{\bar{\nabla}_{\dot{\alpha}}}) \;. \end{eqnarray} \subsubsection{Representation-preserving constraints} Let us define a {\em{covariantly chiral}} superfield $\Phi$, i.e.\ a superfield that satisfies \begin{eqnarray} \bar{\nabla}_{\dot{\alpha}} \Phi &=& 0 ~~,~~ \Phi^{\prime} = e^{ig K} \Phi \nonumber \\ \nabla_{\alpha} \bar{\Phi} &=& 0 ~~,~~ \bar{\Phi}^{\prime} = \bar{\Phi} e^{-ig K} \end{eqnarray} Consistency then requires that we impose the constraint \begin{eqnarray} F_{\alpha \beta} = F_{\dot{\alpha} \dot{\beta}} = 0 \;, \end{eqnarray} since for the covariantly chiral superfield defined above we have \begin{eqnarray} 0 = \anticomm{\bar{\nabla}_{\dot{\alpha}}}{\bar{\nabla}_{\dot{\beta}}} \Phi = - i F_{\dot{\alpha} \dot{\beta}} \Phi \;.
\end{eqnarray} This constraint is solved by a complex superfield $\Omega = \Omega^A T_A$, which allows us to write the covariant derivatives as \begin{eqnarray} \nabla_{\alpha} &=& e^{-g \Omega} D_{\alpha} e^{g \Omega} \nonumber \\ \bar{\nabla}_{\dot{\alpha}} &=& e^{g \bar{\Omega}} \bar{D}_{\dot{\alpha}} e^{- g \bar{ \Omega}} \label{SUSY_VR_CovDEv} \end{eqnarray} With this superfield we have two types of gauge transformations \begin{itemize} \item K gauge transformations \begin{eqnarray} (e^{g \Omega})^{\prime} = e^{g \Omega} e^{-ig K} \;. \end{eqnarray} \item If we consider $\Lambda$ to be an ordinary chiral superfield, $\bar{D}_{\dot{\alpha}} \Lambda = 0$, the derivatives defined in (\ref{SUSY_VR_CovDEv}) are invariant under \begin{eqnarray} (e^{g \Omega})^{\prime} = e^{i g \bar{\Lambda}} e^{g \Omega} \;. \end{eqnarray} \end{itemize} From the K-invariant hermitian part of $\Omega$ we can define a real superfield $V$ as \begin{eqnarray} e^{gV} = e^{g \Omega} e^{g \bar{\Omega}} \;. \end{eqnarray} We can also use the $\Omega$ superfield to write all the quantities in a gauge chiral representation, where everything transforms only under $\Lambda$ transformations \cite{Gates:1983nr}: \begin{eqnarray} \nabla_{0 A} &=& e^{-g \bar{\Omega}} \nabla_A e^{g \bar{\Omega}} \nonumber \\ \Phi_0 &=& e^{-g \bar{\Omega}} \Phi \;, \end{eqnarray} where $\nabla_{0 A}$ are the chiral representation derivatives (\ref{SUSY_CR_CovDev}), $\Phi$ is a covariantly chiral superfield and $\Phi_0$ a chiral superfield. \subsubsection{Field content. Bianchi identities} The field content of the theory can be obtained through the Bianchi identities that are derived from the Jacobi identities satisfied by the covariant derivatives.
With the aid of these identities and the different constraints imposed on the derivatives, all of the field strengths can be expressed in terms of a spinor superfield $W_{\alpha}$ \cite{Gates:1983nr} that is defined as \begin{eqnarray} \comm{\bar{\nabla}_{\dot{\alpha}}}{i \nabla_{\beta \dot{\beta}}} &=& - i C_{\dot{\beta} \dot{\alpha}} W_{\beta} \;. \end{eqnarray} This superfield can be shown to be covariantly chiral ($\bar{\nabla}_{\dot{\beta}} W_{\alpha} = 0$) and to satisfy the following identity \begin{eqnarray} \nabla^{\alpha} W_{\alpha} + \bar{\nabla}^{\dot{\alpha}} \bar{W}_{\dot{\alpha}} = 0 \;. \end{eqnarray} Finally, with these derivatives we can construct the gauge Lagrangian as \begin{eqnarray} tr W^2 = - \frac{1}{2} tr \left( \comm{\bar{\nabla}^{\dot{\alpha}}}{\anticomm{\bar{\nabla}_{\dot{\alpha}}}{\nabla_\alpha}} \right)^2 \;. \end{eqnarray} \section{Supergraphs} Although supersymmetric theories can be quantized at the component level with conventional methods, the use of superfields simplifies the calculations considerably: besides the compact notation and the automatic cancellation of graphs related by supersymmetry, in a superfield formalism supersymmetry is manifest. We will detail the extension of the usual functional methods to superspace \cite{Gates:1983nr,Grisaru:1979wc}. Let $\Psi$ be a generic superfield, $S(\Psi)$ the action and $J$ a source of the same type (chiral, etc.) as $\Psi$. The generating functional for Green functions is \begin{eqnarray} Z[J] &=& \int \; [d \Psi] e^{S(\Psi) + \int J \Psi} \;. \end{eqnarray} For connected Green functions, the generating functional is \begin{eqnarray} W[J] &=& \ln Z[J] \;.
\end{eqnarray} Finally, the generating functional of 1PI graphs (the effective action) is \begin{eqnarray} \Gamma[ \hat{\Psi}] &=& W[J(\hat{\Psi})] - \int \; J(\hat{\Psi}) \hat{\Psi} \;, \end{eqnarray} where $\hat{\Psi}$ is the expectation value of $\Psi$ in the presence of the source: \begin{eqnarray} \hat{\Psi} &=& \frac{\delta W}{\delta J} \;. \end{eqnarray} Let us consider two examples: a real scalar superfield (as in gauge theories) and a chiral superfield. For the first one, the superspace partition function can be written as \begin{eqnarray} Z[J] &=& \int [d V] \; exp \left\{\int [- \frac{1}{2} V \Box V + {\cal{L}}_{int}(V) + J V] \right\} \nonumber \\ &=& exp \left\{ \int {\cal{L}}_{int}(\frac{\delta}{\delta J}) \right\} exp \left\{ \frac{1}{2} \int J \frac{1}{\Box} J \right\} \;, \end{eqnarray} whereas in the case of a massless chiral superfield we have \begin{eqnarray} Z[j,\bar{j}] &=& \int [ d \Phi d \bar{\Phi} ] \; exp \left\{ \int d^8 z \; [ \bar{\Phi} \Phi + {\cal{L}}_{int}(\Phi,\bar{\Phi}) ] + \int d^6 z \; j \Phi + \int d^6 \bar{z} \; \bar{j} \bar{\Phi} \right\} \nonumber \\ &=& exp \left\{\int {\cal{L}}_{int}(\frac{\delta}{\delta j},\frac{\delta}{\delta \bar{j}})\right\} exp \left\{- \int \bar{j} \frac{1}{\Box} j \right\} \;. \end{eqnarray} From the expansion of these expressions we can derive the propagators, vertices and symmetry factors. \subsubsection{Superspace Feynman rules} \begin{itemize} \item Propagators: We present here the propagators for massless chiral (antichiral) and real superfields, which are the ones we need in this work. A more detailed list of propagators can be found in \cite{Gates:1983nr} or \cite{Grisaru:1979wc}.
\begin{eqnarray} VV: &\hspace{2cm}& -P(z_1 - z_2) \equiv - \Delta (x_1-x_2) \delta^4 (\theta_1 - \theta_2) \nonumber \\ \bar{\Phi} \Phi: &\hspace{2cm}& P(z_1 - z_2) \equiv \Delta (x_1 - x_2) \delta^4 ( \theta_1 - \theta_2 ) \end{eqnarray} \item Vertices: For each chiral (antichiral) line there is a $\bar{D}^2$ ($D^2$) factor acting on the propagator. If all the lines are purely chiral or antichiral, one of the factors is omitted. \item Apart from the usual spacetime integrals (or momentum-space integrals), there is a $d^4 \theta$ integration at each vertex. \item When computing 1PI graphs (in order to obtain the effective action), for each external line we multiply by the corresponding superfield. In the case of a chiral (or antichiral) line, no $\bar{D}^2$ ($D^2$) factor appears. \item Some diagrams have symmetry factors. \end{itemize} A superspace diagram which corresponds to a contribution to the effective action is an expression formed by some external fields, supercoordinate integrals $\int d^4 x_i d^4 \theta_i$, propagators ($P_{ij}$) and superspace covariant derivatives acting on them \cite{Gates:1983nr,Grisaru:1979wc}. The propagators are of the form $P_{ij} = \Delta_{ij} \delta (\theta_i - \theta_j) \equiv \Delta_{ij} \delta_{ij}$ with $\Delta_{ij}$ the usual spacetime propagator and $\delta_{ij}$ the $\delta$-function on the anticommuting coordinates. The covariant derivatives can be integrated by parts, obey the Leibniz rule and a ``transfer'' rule of the form $\delta_{ij} \stackrel{\leftarrow}{D_j} = - D_i \delta_{ij}$. Thus, we can choose a propagator that links two vertices and remove all the $D$'s from its $\delta$-function.
Then, if we have other propagators that link these two vertices, we can apply the following properties \begin{eqnarray} & & \delta_{ij}\delta_{ij} = \delta_{ji}\delta_{ij} = \delta_{ij} D^{\alpha}_i \delta_{ij} = \delta_{ij} D^2_i \delta_{ij} = \delta_{ij} D^{\alpha}_i \bar{D}^{\dot{\alpha}}_i \delta_{ij} = \delta_{ij} D^{\alpha}_i \bar{D}^2_i \delta_{ij} = 0 \nonumber \\ & & \delta_{ij} D^2_i \bar{D}^2_i \delta_{ij} = \delta_{ij} \bar{D}^2_i D^2_i \delta_{ij} = \delta_{ij} D^{\alpha}_i \bar{D}^2_i D_{i \alpha} \delta_{ij} = \delta_{ij} \label{SUSY_delta_propagators} \end{eqnarray} Now, with the free superspace $\delta$-function, we can contract the propagator between the two vertices to a point in $\theta$-space. As this procedure can be repeated for any other pair of vertices, we conclude that we can write the effective action as \begin{eqnarray} \Gamma &=& \sum_{n} \int d^4 x_1 \ldots d^4 x_n d^4 \theta \; \Gamma(x_1, \ldots, x_n) \Phi (x_1, \theta) \ldots V(x_i, \theta) \ldots \end{eqnarray} This expression has one important consequence: in a perturbative calculation, a contribution to the effective action with a purely chiral (antichiral) integral $d^2 \theta$ ($d^2 \bar{\theta}$) never gets generated. Hence, if the original action had purely chiral terms (such as mass terms $\Phi^2$ or cubic interactions $\Phi^3$), they cannot be modified by radiative corrections \cite{Abbott:1980jk,Haagensen:1991vd}. This is called the no-renormalization theorem for chiral superfields. \subsection{Quantization of supersymmetric gauge theories} As in usual gauge theories, when we quantize a supersymmetric gauge theory with functional methods we have to fix the gauge, so that in the path integral we do not integrate over the physically equivalent gauge field configurations related by gauge transformations \cite{Peskin:1995ev}. Hence, we will present here the generalization of the usual gauge fixing procedure to superspace.
We start with the previously defined SUSY Yang-Mills \cite{Sohnius:1985qm} action \begin{eqnarray} S_{0} = - \frac{1}{2 g^2} tr \int d^4 x d^4 \theta \; (e^{-g V} D^{\alpha} e^{g V}) \bar{D}^2 ( e^{-g V} D_{\alpha} e^{g V}) \;. \end{eqnarray} Then, with $\Lambda$ (the chiral parameter of the gauge transformation), an arbitrary function $f$ and a gauge-variant function $F$ such that $F=f$ for some value of $\Lambda$, we define a functional determinant as \begin{eqnarray} \Delta (V) &=& \int [ d \Lambda d \bar{\Lambda}] \; \delta[ F(V,\Lambda,\bar{\Lambda})-f] \; \delta[ \bar{F}(V,\Lambda,\bar{\Lambda})-\bar{f}] \nonumber \\ &=& \int [ d \Lambda d \bar{\Lambda} d \Lambda^{\prime} d \bar{\Lambda}^{\prime}] e^{ \int d^6 z \; \Lambda^{\prime} \left( \frac{\delta F}{\delta \Lambda} \Lambda + \frac{\delta F}{\delta \bar{\Lambda}} \bar{\Lambda} \right) + \int d^6 \bar{z} \; \bar{\Lambda}^{\prime} \left( \frac{\delta \bar{F}}{\delta \Lambda} \Lambda + \frac{\delta \bar{F}}{\delta \bar{\Lambda}} \bar{\Lambda} \right)} \;, \label{SUSY_det_gauge_fix} \end{eqnarray} where the variational derivatives are evaluated at $\Lambda = \bar{\Lambda} = 0$. We then insert unity into the partition function in the form of this functional determinant times its inverse. After a change of variables that is a gauge transformation, the $\Lambda$ integral factors out as an (infinite) constant that is reabsorbed into the normalization \cite{Gates:1983nr}. Also, in order to get a result independent of $f$ and $\bar{f}$, we average with a Gaussian weighting factor of the form $\int [d f d \bar{f}] \; exp(-\frac{1}{\alpha} tr \int d^8 z \bar{f}f)$. Hence, the partition function is written as \begin{eqnarray} Z &=& \int [dV] \; (\Delta (V))^{-1} \; exp\left\{ S_0 - \frac{1}{\alpha} tr \int d^8 z \; \bar{F} F\right\} \;.
\end{eqnarray} Using as gauge fixing function $F = \bar{D}^2 V$ and replacing the parameters $\Lambda$, $\Lambda^{\prime}$ of (\ref{SUSY_det_gauge_fix}) by anticommuting chiral ghost fields $c$, $c^{\prime}$ (the superfield extension of the Faddeev-Popov ghosts \cite{Faddeev:1967fc}) we find \cite{Gates:1983nr} \begin{eqnarray} Z &=& \int [d V d c d c^{\prime} d \bar{c} d \bar{c}^{\prime}] e^{S_0 + S_{GF} + S_{FP} } \;, \end{eqnarray} where \begin{eqnarray} S_{GF} &=& - \frac{1}{\alpha} tr \int d^8 z \; ( D^2 V ) ( \bar{D}^2 V ) \nonumber \\ S_{FP} &=& tr \int d^4 x d^4 \theta \; ( c^{\prime} + \bar{c}^{\prime} ) L_{\frac{1}{2}g V} \left[ ( c + \bar{c})+\coth L_{\frac{1}{2} g V} (c - \bar{c}) \right] ~~,~~ L_X Y = \comm{X}{Y} \nonumber \\ \end{eqnarray} \chapter{Differential Renormalization and CDR} \label{chap1} \section{Differential Renormalization} Differential Renormalization (DiffR) \cite{Freedman:1991tk} is a renormalization method in real space that consists of replacing coordinate-space expressions that are too singular by derivatives of less singular ones. The method needs neither a cutoff nor explicit counterterms, although the latter are implicitly used when performing formal integration by parts. The basic idea is that divergent expressions are well defined for non-coincident points, but at short distances the amplitude is too singular and does not have a Fourier transform. Hence, the method instructs us to replace the divergent expression by the derivative of a less singular one that has the same values as the original outside the origin, but with a well defined Fourier transform (once formal integration by parts is applied to the derivatives). This method is especially well suited for dimension-dependent theories (such as supersymmetric theories), because one stays in four dimensions throughout, which is not the case in dimensional regularization or dimensional reduction.
As an example consider the one-loop contribution of $\lambda \phi^4$ theory. The bare expression is \begin{eqnarray} \Gamma (x_1 , x_2 , x_3, x_4 ) &=& \frac{\lambda^2}{2} \left[ \delta^{(4)} (x_1 - x_2 )\delta^{(4)} (x_3 - x_4) [ \Delta(x_1 - x_4)]^2 + (2\; perms) \right] \;, \nonumber \\ \end{eqnarray} where $\Delta(x-y)$ is the massless propagator \begin{eqnarray} \Delta(x-y) \equiv \Delta_{xy} &=& \frac{1}{(4 \pi^2)} \frac{1}{(x-y)^2} \;. \end{eqnarray} At short distances $\frac{1}{x^4}$ does not have a well defined Fourier transform, and DiffR proposes to replace it by the solution of \begin{eqnarray} \frac{1}{x^4} = \Box G(x^2) \; \; \; \; \; ~ x \ne 0 \;, \label{eq_dif} \end{eqnarray} which is \begin{eqnarray} \frac{1}{x^4} \rightarrow \left[\frac{1}{x^4} \right]_R &=& - \frac{1}{4} \Box \frac{ \ln x^2 M^2}{x^2} \;. \label{ren_D2} \end{eqnarray} Both expressions coincide for $x\neq 0$, but the new one has a well defined Fourier transform if we neglect the divergent surface terms that appear upon integrating the d'alembertian by parts. It is in these surface terms that the counterterms hide, and by applying formal integration by parts \cite{Freedman:1991tk} we are implicitly taking them into account, as we will detail later. Thus, with the renormalized expression we obtain \begin{eqnarray} \int d^4 x \; e^{i p \cdot x} \left[\frac{1}{x^4} \right]_R &=& - \frac{1}{4} \int d^4 x e^{i p \cdot x} \Box \frac{ \ln x^2 M^2}{x^2} = \frac{p^2}{4} \int d^4 x \; e^{i p \cdot x} \frac{\ln x^2 M^2}{x^2} \nonumber \\ &=& - \pi^2 \ln \left( \frac{p^2}{{\bar{M}}^2}\right) \;. \end{eqnarray} A constant $M$ with the dimensions of mass has been introduced for dimensional reasons. It parametrizes the {\em local ambiguity} \begin{eqnarray} \Box \frac{\ln x^2 {M'}^2}{x^2} = \Box \frac{\ln x^2 M^2}{x^2} - 8 \pi^2 \ln\frac{M'}{M} ~\delta(x) \;.
\label{ambigu} \end{eqnarray} A crucial observation is that this shift $M\rightarrow M'$ can be absorbed into a rescaling of the coupling constant $\lambda$ \cite{Freedman:1991tk}. This is a hint that renormalized amplitudes satisfy renormalization group equations, with $M$ playing the r\^ole of the renormalization group scale. Let us take a closer look at the implicit counterterms that we are using in this procedure (this is discussed in \cite{Freedman:1991tk} and in more detail in \cite{Freedman:1992gr}). As stated previously, along with the substitution of the divergent expression by the solution of the differential equation, we also have to use the following formal integration by parts prescription \begin{eqnarray} \int d^4 x \; \frac{1}{x^4} T (x) \equiv - \frac{1}{4} \int d^4 x \; ( \Box \frac{ \ln x^2 M^2}{x^2} ) T(x)\equiv - \frac{1}{4} \int d^4 x \; \frac{ \ln x^2 M^2}{x^2} \Box T(x) \;, \nonumber \\ \end{eqnarray} i.e., we have neglected divergent surface terms. If we redo this calculation excluding a ball ${\cal{B}}_\varepsilon$ of radius $\varepsilon$ around the origin and keeping surface terms, we have \begin{eqnarray} \int_{R^4 / {\cal{B}}_\varepsilon} d^4 x \; T (x) \Box \frac{ \ln x^2 M^2}{x^2} &=& \int_{S_{\varepsilon}} d \sigma_{\mu} \; T(x) \partial_{\mu} \frac{ \ln x^2 M^2}{x^2} \nonumber \\ & & - \int_{R^4 / {\cal{B}}_\varepsilon} d^4 x \; \partial_{\mu} T (x) \partial_{\mu} \frac{ \ln x^2 M^2}{x^2} \;. \end{eqnarray} The contribution of the surface integral is found to be \begin{eqnarray} \int_{S_{\varepsilon}} d \sigma_{\mu} \; T(x) \partial_{\mu} \frac{ \ln x^2 M^2}{x^2} = 4 \pi^2 T(0) ( 1 - \ln \varepsilon^2 M^2) + {\cal{O}}(\varepsilon) \;. \end{eqnarray} This is divergent as $\varepsilon \rightarrow 0$. However, this singular contribution to the 4-point function can be cancelled if we add to the action a local counterterm proportional to $\int d^4 x \; \phi^4 (x) ( 1 - \ln \varepsilon^2 M^2)$.
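The $x \neq 0$ content of the basic identity (\ref{ren_D2}) can also be verified symbolically. The following sketch (in Python with sympy; not part of the original discussion) checks it using the four-dimensional radial Laplacian $\Box f(r) = f'' + (3/r) f'$ for rotationally invariant functions; the distributional terms at the origin discussed above are of course not captured by this check.

```python
import sympy as sp

# Radial coordinate r = |x| and renormalization scale M, both positive.
r, M = sp.symbols('r M', positive=True)

def box4(f):
    # 4D Laplacian of a rotationally invariant function f(r): f'' + (3/r) f'
    return sp.diff(f, r, 2) + 3/r*sp.diff(f, r)

# Away from the origin, -1/4 Box [ln(x^2 M^2)/x^2] reproduces 1/x^4.
renormalized = -sp.Rational(1, 4)*box4(sp.log(r**2*M**2)/r**2)
assert sp.simplify(renormalized - 1/r**4) == 0
```

Note that the scale $M$ drops out of the $x \neq 0$ identity: changing it only shifts a term localized at the origin, in agreement with (\ref{ambigu}).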
Hence, as we have seen, the formal integration by parts rule is valid because we are implicitly using these counterterms. At the same time, we remark that the method does not require us to make explicit use of them in any calculation. \subsection{Higher Loops} \label{Higher_Loops} Differential renormalization can be applied not only to one-loop diagrams, but also to multi-loop expressions. In general, new scales appear corresponding to the renormalization of the different subdiagrams that form the total expression. From the various types of subdivergences that can occur in a typical higher-loop diagram, we will take a closer look at one where independent scales are neatly seen to appear at each stage: nested divergences. As an example we consider the following amplitude which, in principle, could form part of a bigger one: $\Delta(x-y) I^1(x-y)$, where $I^1(x)$ is \begin{equation} I^1(x-y) = \int d^4 u \Delta_{xu} \Delta^2_{yu} \;. \label{def_I1} \end{equation} It corresponds to a diagram that looks as follows \begin{figure}[ht] \centerline{\epsfbox{DiffR_example1.eps}} \caption{Two-loop diagram with nested divergences.} \end{figure} As can be seen, divergences occur whenever two points come together. We can renormalize them starting from the innermost one and proceeding recursively \begin{eqnarray} \left[ \Delta_{xy} \int d^4 u \Delta_{xu} [\Delta^2_{yu}]_{R}\right]_{R} &=& \left[- \frac{1}{4 (4 \pi^2)^4} \frac{1}{(x-y)^2} \int d^4 u \; \frac{1}{(x-u)^2} \Box \frac{\ln (y-u)^2 M^2_1 }{(y-u)^2} \right]_{R} \nonumber\\ &=& \left[\frac{1}{4 (4 \pi^2)^3} \frac{\ln (x-y)^2 M_1^2 }{(x-y)^4} \right]_{R} \nonumber\\ &=& - \frac{1}{32(4 \pi^2)^3} \Box \frac{ \ln^2 (x-y)^2 M_1^2 + 2 \ln (x-y)^2 M_2^{2}}{(x-y)^2} \label{High_loop_nested} \end{eqnarray} where, in going to the second line, we have integrated the d'alembertian by parts and made use of $\Box \frac{1}{(x-u)^2} = - 4 \pi^2 \delta(x-u)$.
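The last step of (\ref{High_loop_nested}) relies on the identity $\frac{\ln x^2 M_1^2}{x^4} = -\frac{1}{8}\Box \frac{\ln^2 x^2 M_1^2 + 2 \ln x^2 M_2^2}{x^2}$, valid for $x \neq 0$ with the two scales independent. A small sympy sketch (again with the radial form of the d'alembertian; not part of the original text) confirms that the new scale $M_2$ indeed drops out away from the origin:

```python
import sympy as sp

r, M1, M2 = sp.symbols('r M1 M2', positive=True)

def box4(f):
    # 4D Laplacian of a rotationally invariant function f(r)
    return sp.diff(f, r, 2) + 3/r*sp.diff(f, r)

g1 = sp.log(r**2*M1**2)   # logarithm carrying the inner (one-loop) scale
g2 = sp.log(r**2*M2**2)   # logarithm carrying the new, independent scale
rhs = -sp.Rational(1, 8)*box4((g1**2 + 2*g2)/r**2)
# For x != 0 the result is ln(x^2 M1^2)/x^4, independent of M2.
assert sp.simplify(rhs - g1/r**4) == 0
```

Changing $M_2$ therefore only affects a term localized at the origin, which is the two-loop version of the local ambiguity discussed before.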
We observe the appearance of an independent scale associated with each renormalization step. A systematic implementation of differential renormalization to all orders in perturbation theory was presented in \cite{Latorre:1993xh}. The basic idea of the method is the separation of the divergences into two groups: one corresponds to divergences arising from two points collapsing, and the other to three or more points simultaneously closing up. For the first type the singularity is replaced by its renormalized form (once the derivatives are pulled in front), whereas the second type can be shown to reduce recursively to two-point function problems of the first type. This procedure follows the BPHZ renormalization program and guarantees that differential renormalization maintains unitarity and can be applied consistently (fulfilling locality and Lorentz invariance) to all orders \cite{Latorre:1993xh}. \subsection{Massive theories} Differential renormalization of massive theories has been studied in \cite{Freedman:1991tk,Haagensen:1992am}. The appearance of a bare mass does not interfere with the method, since DiffR deals with short-distance singularities and masses only change the long-distance behaviour of the correlators. Although in this work we will only deal with massless theories, we briefly illustrate how the procedure works in massive $\lambda \phi^4$. The propagator of a particle of mass $m$ is \begin{eqnarray} \Delta_{m} (x) &=& \frac{1}{4 \pi^2} \sqrt{ \frac{m^2}{x^2}} K_1 ( \sqrt{m^2 x^2}) \end{eqnarray} where $K_1$ is a modified Bessel function. Let us now consider again the 4-point function contribution; in this case it is clear that the expression we have to renormalize is \begin{eqnarray} \left[ \sqrt{ \frac{m^2}{x^2}} K_1 ( \sqrt{m^2 x^2}) \right]^2 \;.
\end{eqnarray} We have to solve the massive generalization of the differential equation (\ref{eq_dif}), which has a solution of the form \begin{eqnarray} \left[ \sqrt{ \frac{m^2}{x^2}} K_1 ( \sqrt{m^2 x^2}) \right]^2_R &=& \frac{1}{2} ( \Box - 4 m^2) \sqrt{\frac{m^2}{x^2}} K_0 ( \sqrt{m^2 x^2}) K_1 ( \sqrt{m^2 x^2}) \nonumber \\ & & + \pi^2 \ln \frac{ \bar{M}^2}{m^2} \delta (x) \;, \label{ren_mass_D2} \end{eqnarray} where $ \bar{M} = 2 M / \gamma$ and $\gamma$ is the Euler constant. The general solution contains a contact term which depends on a new mass parameter $M$; it guarantees that in the limit $m \rightarrow 0$ the renormalized expressions (\ref{ren_D2}) and (\ref{ren_mass_D2}) coincide. \subsection{IR divergences} \label{IR_divergences} DiffR can also be applied to expressions with IR divergences \cite{Mas:2002xh}, {\em i.e.} expressions that exhibit a divergence for $p^\mu \rightarrow 0$. The idea is to apply a dual version of Differential Renormalization to such quantities \begin{equation} \left[ \frac{1}{p^4} \right]_{\tilde{R}} = - \frac{1}{4}{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}_p \frac{\ln p^2/\bar{M}_{IR}^2}{p^2} + a_{IR} \delta(p) \label{basicIRidentity} \; . \end{equation} We have defined for convenience $\bar{M}_{IR}=2M_{IR}/\gamma_E$, where $\gamma_E$ is Euler's constant, and distinguished the IR scale from the UV one. As DiffR is an implementation of Bogoliubov's $R$ operation (an operation that yields directly renormalized correlation functions satisfying renormalization group equations), in momentum space this is an explicit realization of the so-called $\tilde{R}$ operation that subtracts IR divergences. Again, diagrams with IR subdivergences are treated according to a recursion formula~\cite{Chetyrkin:nn,Popov:1984xm} analogous to the UV one.
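Returning to the massive case, the $x \neq 0$ content of (\ref{ren_mass_D2}) can be checked numerically with standard Bessel routines. The following sketch (Python with mpmath; the mass and sample point are arbitrary choices, not part of the original discussion) verifies that $\frac{1}{2}(\Box - 4m^2)\sqrt{m^2/x^2}\,K_0 K_1$ reproduces $\left[\sqrt{m^2/x^2}\,K_1\right]^2$ away from the origin, where the contact term is absent:

```python
import mpmath as mp

mp.mp.dps = 40                        # high precision for the numerical derivatives
m, r0 = mp.mpf('1.3'), mp.mpf('0.7')  # arbitrary mass and sample radius

# h(r) = sqrt(m^2/r^2) K_0(m r) K_1(m r), the function acted on by (Box - 4 m^2)
h = lambda r: m/r*mp.besselk(0, m*r)*mp.besselk(1, m*r)

# 4D radial Laplacian: Box h = h'' + (3/r) h'
box_h = mp.diff(h, r0, 2) + 3/r0*mp.diff(h, r0)

bare = (m/r0*mp.besselk(1, m*r0))**2   # [sqrt(m^2/x^2) K_1]^2
renorm = (box_h - 4*m**2*h(r0))/2      # (1/2)(Box - 4 m^2) h for x != 0
assert mp.fabs(bare - renorm) < mp.mpf('1e-12')
```

The agreement at any $r_0 > 0$ is exact; only the $\delta(x)$ term of (\ref{ren_mass_D2}), which carries the scale $\bar{M}$, is invisible to this pointwise check.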
\begin{figure}[ht] \centerline{\epsfbox{DiffR_example2.eps}} \caption{Two-loop diagram with UV and IR divergences.} \label{Fig_IR_div} \end{figure} Now, when going to higher loops a new effect shows up, namely the coexistence of UV and IR scales. Let us start by examining a prototypical diagram where such divergences arise in a $\lambda \phi^3$-type theory (figure \ref{Fig_IR_div}). The associated amplitude is of the form $G (x-y) = \Delta(x-y) I^0(x-y)$, where $I^0$ is \begin{eqnarray} I^0(x-y) &=& \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} \Delta^2_{uv} \;. \end{eqnarray} We begin by renormalizing the inner UV divergence as \begin{eqnarray} I^0_R (x-y) &=& - \frac{1}{4 (4 \pi^2)^2} \int d^4 u d^4 v \; \Box \frac{\ln (u-v)^2 M^2}{(u-v)^2} \Delta_{xu} \Delta_{vy} \nonumber \\ &=& \frac{1}{4 (4 \pi^2)^2} \int d^4 u \; \frac{ \ln (u-y)^2 M^2}{(u-y)^2} \Delta_{xu} \;. \end{eqnarray} In order to renormalize the IR divergence we have to pass to momentum space \begin{eqnarray} I^0_R (x-y) &=& - \frac{1}{4(4 \pi^2)^5} \int d^4 u d^4 p d^4 q \; \frac{ \ln p^2/M^2}{p^2} \frac{1}{q^2} e^{-i p(u-y)} e^{-i q (x-u)} \nonumber \\ &=& - \frac{1}{4 (4 \pi^2)^3} \int d^4 p \; \frac{\ln p^2/M^2}{p^4} e^{-i p(x-y)} \;. \end{eqnarray} We explicitly observe that the IR singularity at $p \rightarrow 0$ involves a UV scale $M$. Since UV and IR overall divergences are local in coordinate and momentum space, respectively, the $R$ and $\tilde{R}$ operations commute, and one can define an operation $R^*=\tilde{R}R$ to renormalize both UV and IR divergences~\cite{Chetyrkin:nn,Popov:1984xm}. The fact that the UV and IR renormalizations decouple means that the UV and IR renormalization scales should be independent.
This is a non-trivial point that in DiffR can be achieved by a careful adjustment of the local terms involving both scales\footnote{IR DiffR was investigated in~\cite{Avdeev:jp}, where it was concluded that the combination of UV and IR DiffR was inconsistent, as the results depended on the order in which integrations were performed. According to \cite{Smirnov:1994km}, however, this corresponds to the natural arbitrariness of the IR renormalization, and this author has actually proposed in~\cite{Smirnov:1996yi} a consistent version of DiffR that deals with both UV and IR divergences. Our approach here will be closer to the original version of DiffR.}. As we have to guarantee that the IR renormalization commutes with a rescaling of $M$, we have to fulfill the relation \begin{equation} M \frac{\delta}{\delta M} \left[\frac{\ln p^2/\bar{M}^2}{p^4}\right]_{\tilde{R}} = \left[M \frac{\delta}{\delta M} \frac{\ln p^2/\bar{M}^2}{p^4}\right]_{\tilde{R}} \;. \label{IR_relation} \end{equation} If we consider the usual expression for the renormalization of $(\ln p^2 / \bar{M}^2)/p^4$ \begin{eqnarray} \left[ \frac{\ln p^2 / \bar{M}^2}{p^4} \right]_{\tilde{R}} = -\frac{1}{8} \Box_p \frac{ \ln^2 p^2 / \bar{M}^2 + 2 \ln p^2 / \bar{M}_{IR}^2 }{p^2} \;, \end{eqnarray} we find that the left-hand side of (\ref{IR_relation}) is \begin{eqnarray} M \frac{\delta}{\delta M} \left[\frac{\ln p^2/\bar{M}^2}{p^4}\right]_{\tilde{R}} &=& \frac{1}{2} \Box_p \frac{ \ln p^2 / \bar{M}_{IR}^2 - \ln \bar{M}^2 / \bar{M}_{IR}^2}{p^2} \;, \end{eqnarray} whereas the right-hand side has the form \begin{eqnarray} \left[M \frac{\delta}{\delta M} \frac{\ln p^2/\bar{M}^2}{p^4}\right]_{\tilde{R}} &=& - 2 \left[ \frac{1}{p^4} \right]_{\tilde{R}} \nonumber \\ &=& \frac{1}{2} \Box_p \frac{ \ln p^2 / \bar{M}_{IR}^2}{p^2} \;. \end{eqnarray} Thus, the second expression differs from the first one by a local term in momentum space, which becomes a non-local one in position space.
Hence, in order to fulfill (\ref{IR_relation}), we propose the following minimal solution that does the job \cite{Mas:2002xh} \begin{equation} \left[\frac{\ln p^2/\bar{M}^2}{p^4}\right]_{\tilde{R}} = -\frac{1}{8} {\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}_p \frac{-\ln^2 p^2/\bar{M}_{IR}^2 + 2 \ln p^2/\bar{M}_{IR}^2 \, (1 + \ln p^2/\bar{M}^2)}{p^2} + (a_{IR} \ln \frac{M^2_{IR}}{M^2} + b_{IR})\delta(p) \label{identityUVIR} \end{equation} This expression differs from the usual one by scale-dependent local terms proportional to $\ln^2 M^2/M_{IR}^2$ (apart from the explicit local terms with coefficients $a_{IR}$ and $b_{IR}$). It should be used whenever the ``new'' scale is to be treated as independent from the ``old'' one, for consistency of the loop expansion. Note that for a purely UV expression such as (\ref{High_loop_nested}) none of this is needed, because the extra term cancels in the RG equation when we differentiate with respect to the UV scales. With this, the result for $I^0$ (dropping the local terms $a_{IR}$ and $b_{IR}$) is \begin{eqnarray} I^0_R (x-y) &=& \frac{1}{32 (4 \pi^2)^3} \int d^4 p \; \Box_p \frac{ - \ln^2 p^2/M^2_{IR} + 2 \ln p^2 /M^2_{IR} \left( 1 + \ln p^2/M^2 \right) }{p^2} e^{-i p (x-y)} \nonumber \\ &=& - \frac{(x-y)^2}{32 ( 4 \pi^2)^3} \int d^4 p \; \frac{ - \ln^2 p^2/M^2_{IR} + 2 \ln p^2/M^2_{IR} \left( 1 + \ln p^2 /M^2 \right)}{p^2} e^{-i p (x-y)} \;, \nonumber \\ \end{eqnarray} which in position space gives \begin{eqnarray} I^0_R (x) &=& \frac{1}{32 (4 \pi^2)^2} \left[ \ln^2 x^2 M^2_{IR} + 2 \ln x^2 M^2_{IR} ( 1 - \ln x^2 M^2) \right] \;. \label{I0_integral_diffR} \end{eqnarray} Observe that the UV scale $M$ only appears in~(\ref{identityUVIR}), and hence in the above expression for $I^0$, in single logarithms. This is fine, for double logarithms of $M$ are expected to appear only when the bare expression contains both a UV subdivergence and a UV overall divergence.
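Away from $p=0$, the proposal (\ref{identityUVIR}) still reproduces the bare expression $\ln (p^2/\bar{M}^2)/p^4$, with the two scales kept independent. A sympy sketch of this check (not part of the original text), using the radial form of the d'alembertian, now in momentum space:

```python
import sympy as sp

p, M, Mir = sp.symbols('p M M_IR', positive=True)

def box4(f):
    # 4D radial Laplacian, here acting on rotationally invariant functions of p
    return sp.diff(f, p, 2) + 3/p*sp.diff(f, p)

Luv = sp.log(p**2/M**2)    # UV logarithm
Lir = sp.log(p**2/Mir**2)  # IR logarithm
rhs = -sp.Rational(1, 8)*box4((-Lir**2 + 2*Lir*(1 + Luv))/p**2)
# For p != 0 the delta(p) terms are absent and the bare expression is recovered.
assert sp.simplify(rhs - Luv/p**4) == 0
```

The IR scale drops out pointwise, so the difference with respect to the usual UV expression is entirely contained in the $\delta(p)$ terms, as stated above.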
Finally, once we have obtained $I^0$, we can straightforwardly evaluate $G$ to be \begin{eqnarray} G (x) = \frac{1}{32 ( 4 \pi^2)^3} \frac{ \ln^2 x^2 M^2_{IR} + 2 \ln x^2 M^2_{IR} ( 1 - \ln x^2 M^2)}{x^2} \;. \end{eqnarray} \subsection{Symmetries with DiffR} \label{DiffR_and_symmetries} One of the important properties required of every sensible renormalization procedure is that it does not break gauge symmetry when applied to a gauge theory. For DiffR, gauge symmetry is preserved as long as the Ward identities can always be satisfied by the renormalized amplitudes (anomalies aside). However, at the same time we find that we always have to make explicit use of these identities to fix all the ambiguities that have appeared in the calculations; in particular, with the Ward identities we relate the different scales that we have to use when renormalizing different amplitudes related by a symmetry (i.e., we fix a renormalization scheme). As an example of this, consider the case of the one-loop renormalization of the photon self-energy in QED \cite{Haagensen:1992vz}. The bare expression is \begin{eqnarray} \Pi_{\mu \nu} (x) |_{bare} = - 4 e^2 \left[ 2 ( \partial_{\mu} \Delta(x) ) ( \partial_{\nu} \Delta(x) ) - \delta_{\mu \nu} (\partial_{\beta} \Delta(x) ) (\partial_{\beta} \Delta(x) ) \right] \;, \label{photon_1l} \end{eqnarray} and renormalizing, this becomes \begin{eqnarray} \Pi_{\mu \nu} (x) |_R &=& - \frac{e^2}{12 \pi^4} \left[ \partial_{\mu} \partial_{\nu} \frac{1}{x^4} - 8 \delta_{\mu \nu} \frac{1}{x^6} \right]_R \nonumber \\ &=& - \frac{e^2}{12 \pi^4} \left[ - \frac{1}{4} \partial_{\mu} \partial_{\nu} \Box \frac{ \ln x^2 M_1^2}{x^2} + \frac{1}{4} \delta_{\mu \nu} \Box \Box \frac{ \ln x^2 M^2_2}{x^2} - 8 \delta_{\mu \nu} \mu^2 \delta (x) \right] \;. \nonumber \\ \label{dljlasd} \end{eqnarray} In this expression we have renormalized with an independent scale the logarithmic divergence ($M_1$) and the quadratic one ($M_2$).
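The scale-fixing role of the symmetry can be previewed with a small symbolic computation. Up to an overall constant, and away from $p=0$, we may take as a schematic momentum-space counterpart of the expression above $\Pi_{\mu \nu}(p) \propto -p_{\mu} p_{\nu} \ln(p^2/M_1^2) + \delta_{\mu \nu}\, p^2 \ln(p^2/M_2^2) - 8 \mu^2 \delta_{\mu \nu}$ (a form we assume here purely for illustration, keeping only the relative structure of the three terms); contracting with $p_{\mu}$ shows that transversality holds precisely when $M_1 = M_2$ and $\mu^2 = 0$:

```python
import sympy as sp

p1, p2, p3, p4, M1, M2, mu = sp.symbols('p1 p2 p3 p4 M1 M2 mu', positive=True)
p = [p1, p2, p3, p4]
p2sq = sum(c**2 for c in p)

# schematic momentum-space form of the renormalized self-energy (overall
# constant dropped); the mu^2 term is the local ambiguity of the
# quadratic divergence
Pi = lambda m, n: (-p[m]*p[n]*sp.log(p2sq/M1**2)
                   + (1 if m == n else 0)*p2sq*sp.log(p2sq/M2**2)
                   - 8*mu**2*(1 if m == n else 0))

# Ward identity: p_mu Pi_{mu nu} must vanish
div = [sp.simplify(sum(p[m]*Pi(m, n) for m in range(4))) for n in range(4)]
assert any(d != 0 for d in div)           # generic scales: not transverse
div_fixed = [sp.simplify(d.subs({M2: M1, mu: 0})) for d in div]
assert all(d == 0 for d in div_fixed)     # M1 = M2, mu = 0: transverse
```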
At the same time, related to the latter, we have added a possible local term with a parameter of mass dimension ($\mu$). The Ward identity imposes that this expression be transverse, forcing $M_1 = M_2$ and $\mu^2 = 0$. When going to higher loop computations, the Ward identities play a non-trivial r\^ole, as they influence part of the divergences that are obtained in the next step. The reason is that these identities relate all the relevant mass scales found. So, they allow us to write the one-loop renormalized subdiagrams that make up the two-loop expressions in terms of the same scale, say $M$, and fixed local terms, which are then promoted to logarithms of the scale. Going back to the example at two loops that was solved in (\ref{High_loop_nested}), suppose that after imposing the Ward identities we have the inner divergence written as $- \frac{1}{4} \Box \ln x^2 M^2 /x^2 + a \delta(x)$, with $a$ a fixed coefficient. Thus, in the two-loop expression we find \begin{eqnarray} \left[ \Delta_{xy} \int d^4 u \Delta_{xu} [\Delta^2_{yu}]_{R}\right]_{R} &=& \left[\frac{1}{4 (4 \pi^2)^3} \frac{\ln (x-y)^2 M^{2} + a}{(x-y)^4} \right]_{R} \nonumber\\ &=& - \frac{1}{32(4 \pi^2)^3} \Box \frac{ \ln^2 x^2 M^{2} +2 \ln x^2 M_2^{\prime 2} + 2a \ln x^2 M_2^{\prime \prime 2}}{x^2} \;, \nonumber \\ \end{eqnarray} where $M_2^{\prime}$ and $M_2^{\prime \prime}$ are two-loop scales. Hence, as we have anticipated, the one-loop Ward identities have fixed the coefficients of the logarithms of the scales in the two-loop final expression. Concerning the new two-loop scales $M_2^{\prime}$ and $M_2^{\prime \prime}$, it is clear that both can also be set equal to $M$, modulo a {\em local ambiguity} that will depend on the quotients $M_2'/M$ and $M_2''/M$ (as in \eqref{ambigu}). Again, use of the Ward identities would set these quotients to certain computable values.
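One can check explicitly that $M_2^{\prime}$ and $M_2^{\prime \prime}$ indeed enter only through local terms: away from the origin, the renormalized two-loop expression reproduces $(\ln x^2 M^2 + a)/x^4$ for any values of the two-loop scales. A sympy sketch (the radial-Laplacian helper \texttt{box4} is ours):

```python
import sympy as sp

x, M, Mp, Mpp = sp.symbols('x M Mp Mpp', positive=True)
a = sp.symbols('a')

def box4(f, r):
    # 4d Laplacian of a radial function: f'' + (3/r) f'
    return sp.diff(f, r, 2) + 3*sp.diff(f, r)/r

lM   = 2*sp.log(x) + 2*sp.log(M)    # ln x^2 M^2
lMp  = 2*sp.log(x) + 2*sp.log(Mp)   # ln x^2 M'^2
lMpp = 2*sp.log(x) + 2*sp.log(Mpp)  # ln x^2 M''^2

rhs = -sp.Rational(1, 8)*box4((lM**2 + 2*lMp + 2*a*lMpp)/x**2, x)

# for x != 0 this reproduces (ln x^2 M^2 + a)/x^4 for ANY choice of the
# two-loop scales M', M'': they only affect local (delta-function) terms
assert sp.simplify(rhs - (lM + a)/x**4) == 0
```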
In other words, after use has been made of the symmetry, the only scale that remains in the two-loop expression can be chosen to be $M$, and it sits only inside the terms with logarithms {\em whose coefficients were determined from the one-loop Ward identities}. This observation is at the heart of the present work and permeates implicitly all the calculations contained in it. So we repeat it here for full clarity: if one is interested in computing a physical amplitude at two loops, a concrete value of the local terms is essential and the use of Ward identities at two loops is unavoidable. If, however, as is the case in the present work, one is looking for the RG equations, then all the relevant information on the scale $M$ resides in the terms with logarithms, whose coefficients only need one-loop Ward identities to be fixed. \section{Constrained Differential Renormalization} \label{CDR_rules} Constrained Differential Renormalization (CDR) was developed in \cite{delAguila:1997kw,delAguila:1997su,delAguila:1998nd,Perez-Victoria:PhD} to avoid the necessity of imposing Ward identities in each calculation to fix the renormalization scheme, as we have seen in the previous section. The idea is to give a procedure that allows us to fix the scheme {\em a priori}. Central to the fulfilment of the Ward identities (and the action principle, from which they can be derived) is that the application of the kinetic differential operator to some propagator line inside a Feynman graph is equivalent to the contraction of the line to a point \cite{delAguila:1997kw}. This statement is guaranteed to hold if we apply the following set of rules \begin{enumerate} \item {\em Differential reduction} \begin{itemize} \item Functions with singular behaviour worse than logarithmic are reduced to derivatives of (at most) logarithmically divergent functions without introducing extra dimensionful constants.
\item Logarithmically divergent expressions are written as derivatives of regular functions, introducing one single constant $M$, which has dimensions of mass and plays the r\^ole of the renormalization group scale. \end{itemize} \item {\em Formal integration by parts}. We discard the divergent surface terms that appear when we integrate by parts. Related to this, differentiation and renormalization must commute: for an arbitrary function $F$, $[ \partial F ]_R = \partial [F]_R$. \item {\em Renormalization rule of the delta function}: \begin{equation} [ F (x, x_1, \ldots , x_n ) \delta (x-y) ]_R = [ F ( x, x_1, \ldots , x_n)]_R \delta (x-y) \end{equation} \item {\em Validity of the propagator equation} \begin{equation} [F(x,x_1,\ldots,x_n) ( \Box - m^2) \Delta_{m}(x)]_R = - [F(x,x_1,\ldots,x_n) \delta(x)]_R \end{equation} where $\Delta_{m}$ is the propagator of a particle of mass $m$ and $F$ an arbitrary function. \end{enumerate} The upshot is a basic set of renormalized expressions (basic functions) with different numbers of propagators and various differential operators acting only on one of them, involving a single scale $M$. Therefore the CDR program amounts to the following two-step operation: \begin{itemize} \item Express the Feynman diagram in terms of these basic functions, performing all the index contractions (this is an important point, because CDR does not commute with index contraction) and, by means of the Leibniz rule, moving all the derivatives to make them act on one of the propagators. \item Replace the basic functions with their renormalized versions. \end{itemize} Let us now obtain some of these functions as an example. Consider the one-point basic function $\Delta (x) \delta (x)$ (this corresponds to the one-loop correction to the two-point function in $\lambda \Phi^4$ theory).
Power counting and the locality of the expression imply that the most general renormalized value for this is \begin{eqnarray} [ \Delta(x) \delta(x) ]_R = ( c \Box + \mu^2 ) \delta(x) \;, \end{eqnarray} where $\mu$ is a constant with mass dimension and $c$ a dimensionless constant. However, rule $1$ implies that $\mu = 0$. Now, considering $[ \Delta(x) \delta(x) ]_R \delta(y) $ and using rule $3$ we find \begin{eqnarray} [ \Delta(x) \delta(x) ]_R \delta(y) &=& [ \Delta(x) \delta(x) \delta(x+y) ]_R = [ \Delta(x) \delta(x) ]_R \delta(x+y) \nonumber \\ \end{eqnarray} and integrating over $x$ we arrive at \begin{eqnarray} \delta (y) \int d^4 x \; [ \Delta(x) \delta(x) ]_R = [ \Delta(y) \delta(y) ]_R \;. \end{eqnarray} Finally, with this result rule $2$ implies that $c=0$. Proceeding in a similar way we find that all the massless one-point functions in CDR vanish. As two-point function examples, we will consider $\Delta \partial_{\mu} \Delta$ and $ \Delta \Box \Delta$. In the first case we have to apply the Leibniz rule to find \begin{eqnarray} [ \Delta(x) \partial_{\mu} \Delta(x) ]_R &=& \partial_{\mu} [ \Delta(x) \Delta(x)]_R - [ (\partial_{\mu} \Delta (x) ) \Delta(x) ]_R \nonumber \\ &=& \frac{1}{2} \partial_{\mu} [ \Delta^2 (x)]_R = - \frac{1}{8 (4 \pi^2)^2} \partial_{\mu} \Box \frac{ \ln x^2 M^2}{x^2} \;, \end{eqnarray} where we have used the result of (\ref{ren_D2}). If we now study $\Delta \Box \Delta$, we only have to use rule 4 to arrive at \begin{eqnarray} [ \Delta(x) \Box \Delta(x) ]_R &=& - [ \Delta(x) \delta(x) ]_R = 0 \;. \end{eqnarray} As a summary, we present here the most relevant CDR identities used in this work.
We only list the massless examples, although a complete list including massive propagators can be found in \cite{delAguila:1998nd,Perez-Victoria:PhD} \begin{eqnarray} \left[ \Delta^2 \right]_R (x) &=& - \frac{1}{4 (4 \pi^2)^2} \Box \frac{\ln x^2 M^2}{x^2} \nonumber \\ \left[ \Delta \partial_{\mu} \Delta \right]_R (x) &=& - \frac{1}{8 (4 \pi^2)^2}\partial_{\mu} \Box \frac{ \ln x^2 M^2 }{x^2} \nonumber \\ \left[ \Delta \partial_{\mu} \partial_{\nu} \Delta \right]_R (x) &=& - \frac{1}{12 (4 \pi^2)^2} (\partial_{\mu} \partial_{\nu} - \frac{1}{4} \delta_{\mu \nu} \Box) \Box \frac{ \ln x^2 M^2}{x^2} + \nonumber \\ & & + \frac{1}{288 \pi^2} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \delta (x) \nonumber \\ \left[ \Delta \Box \Delta \right]_R (x) &=& 0 \;. \label{basic_CDR_fun} \end{eqnarray} CDR can be applied to more than two propagators. In particular, when dealing with three propagators, defining $T[{\cal{O}}] = \Delta \Delta {\cal{O}} \Delta $, the following relation holds when making a decomposition into trace and traceless parts \cite{{delAguila:1997kw},{delAguila:1998nd}} \begin{eqnarray} T^R[\partial_{\mu} \partial_{\nu}] &=& T^R[\partial_{\mu} \partial_{\nu} - \frac{1}{4} \delta_{\mu \nu} \Box] + \frac{1}{4} \delta_{\mu \nu} T^R [\Box] - \frac{1}{128 \pi^2} \delta_{\mu \nu} \delta (x) \delta (y) \;. \label{CDR_T} \end{eqnarray} When using gauges other than the Feynman gauge, some bare expressions are written in terms of a quantity we define as $\bar{\Delta} (x) = \frac{1}{4 (4 \pi^2)} \ln x^2 s^2$, where $s$ is an irrelevant constant with mass dimension.
For this structure, CDR prescribes \cite{delAguila:1997kw} \begin{eqnarray} \left[ \Delta \Box \bar{\Delta} \right]_R (x) &=& - \frac{1}{4 (4 \pi^2)^2} \Box \frac{ \ln x^2 M^2}{x^2} \nonumber \\ \left[ \Delta \partial_{\mu} \partial_{\nu} \bar{\Delta} \right]_R (x) &=& \frac{1}{4} \left( - \delta_{\mu \nu} \frac{1}{4(4 \pi^2)^2} \Box \frac{ \ln x^2 M^2}{x^2} - \frac{1}{32 \pi^2} \partial_{\mu} \partial_{\nu} \frac{1}{x^2} \right) \;. \label{CDR_rules_other_gauge} \end{eqnarray} CDR has been checked in abelian and non-abelian gauge theories \cite{delAguila:1997kw,Perez-Victoria:1998fj} and in supersymmetric calculations \cite{delAguila:1997ma,delAguila:1997yd}. As an example of its use, we will re-obtain the one-loop renormalization of the photon self-energy of QED that we renormalized with DiffR in the previous section. From the bare expression (\ref{photon_1l}) we apply rule number 2 to write it in terms of the CDR basic functions as \begin{eqnarray} \Pi_{\mu \nu }(x) |_R &=& - 4 e^2 \left[ 2 \partial_{\mu} ( \Delta \partial_{\nu} \Delta) - 2 \Delta \partial_{\mu} \partial_{\nu} \Delta - \delta_{\mu \nu} \partial_{\beta} ( \Delta \partial_{\beta} \Delta ) + \delta_{\mu \nu} ( \Delta \Box \Delta ) \right]_R \;. \nonumber \\ \end{eqnarray} Now, we have to replace each basic expression with its renormalized value, and we straightforwardly arrive at \begin{eqnarray} \Pi_{\mu \nu }(x) |_R &=& ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \left[ \frac{e^2}{3(4 \pi^2)^2} \Box \frac{ \ln x^2 M^2}{x^2} + \frac{e^2}{36 \pi^2} \delta(x) \right] \;. \end{eqnarray} As we have remarked, CDR has fixed all the ambiguities {\em{a priori}}, directly yielding a final result that is transverse, as it must be to fulfill the Ward identity. Finally, it is worth mentioning that this method is equivalent to a momentum-space regularization method also defined in four dimensions: Constrained Implicit Regularization (CIR).
Implicit Regularization \cite{Battistel:1998sz,BaetaScarpelli:1998fd} is a regularization method based on the assumption of a regulating function as part of the integrand of divergent amplitudes, and the extension of the properties of regular integrals to regularized ones. As in differential renormalization, this procedure generates arbitrary parameters, which with CIR are fixed {\em{a priori}} \cite{Pontes:2007fg}. \section{Two-loop uses of one-loop CDR results} \label{2loop_CDR} As we have seen, one of the drawbacks of DiffR is the plethora of scales that pop up at each step of the calculation. In symmetric theories, at fixed order in the perturbative expansion, these scales should reduce to a single one upon use of the Ward identities. In the previous section we have explained how CDR paves the way to this reduction of scales at the one-loop level. So far, CDR has not been fully developed at loop order higher than one, and therefore it is not useful for computing, say, scattering amplitudes. However, as mentioned at the end of section \ref{DiffR_and_symmetries}, as long as we are interested in the RG equations, all that we need are the terms with logarithms, and to obtain them the knowledge of the local terms at the one-loop level is enough. Hence, we will discuss both the way the logarithms are generated from one loop to the next, and the implementation of the CDR rules in such diagrams \cite{Seijas:2006vt}. \subsection{Nested divergences} \label{Nested_div} This case is particularly simple because CDR can be applied in a systematic way. Starting from the ``inner'' divergence, its regularization according to CDR gives an expression with logarithms of a single scale ($ \ln x^2 M^2 $) and fixed local terms. The one-loop Ward identities are fulfilled.
In the next step, when tackling the outer part of the diagram, a simple logarithm like the one shown above is promoted to an expression of the form $ \ln^2 x^2 M^2 + C \ln x^2 M^{\prime 2}$, with $C$ a calculable coefficient and $M^{\prime}$ a two-loop scale; at the same time, the local terms that multiply outer divergences will produce additional logarithms of new scales. CDR does not yet prescribe what the different two-loop scales should be; hence, we may take all of them the same, and equal to $M$, at the price of leaving undetermined local terms which are irrelevant when obtaining the RG equation. This simple scheme has some subtleties when considering diagrams with indices because, even at one loop, index contraction does not commute with CDR. Therefore, the correct order is to first insert into the outer diagram the non-renormalized expression for the ``inner'' one-loop diagram, perform all the index contractions, and then renormalize. This crucial observation is the first in a list of rules that would eventually set up the implementation of CDR at higher loops. Let us now consider the two-loop example discussed in sections \ref{Higher_Loops} and \ref{DiffR_and_symmetries}. Had we imposed CDR in the first step, $M_1=M$ would be the only scale generated upon renormalizing the innermost divergence, and the local one-loop ambiguity would be fixed to zero, as can be seen from (\ref{basic_CDR_fun}). We express this by stating that the renormalization of $ I^1 (x-y) = \int d^4 u \Delta_{xu} \Delta^2_{yu} $ according to CDR rules is given by \begin{equation} I^{1}_R (x) = \frac{1}{4 (4 \pi^2)^2} \frac{\ln x^2 M^2}{x^2} \;. \end{equation} Once we have this, to renormalize the complete two-loop expression $\Delta I^1$, we have to apply usual differential renormalization and set, modulo local terms, the two-loop mass scale $M_2^{\prime} = M$.
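For the logarithmic part, this last step can be checked explicitly: away from the origin, the bare outer product $\Delta(x)\, I^1_R(x)$ must agree with its renormalized form once the two-loop scale is set to $M$. A sympy sketch (radial-Laplacian helper \texttt{box4} ours):

```python
import sympy as sp

x, M = sp.symbols('x M', positive=True)
pi = sp.pi

def box4(f, r):
    # 4d Laplacian of a radial function: f'' + (3/r) f'
    return sp.diff(f, r, 2) + 3*sp.diff(f, r)/r

lM = 2*sp.log(x) + 2*sp.log(M)   # ln x^2 M^2

# bare outer product: Delta(x) * I^1_R(x) = (1/(4 (4 pi^2)^3)) ln(x^2 M^2)/x^4
bare = sp.Rational(1, 4)/(4*pi**2)**3 * lM/x**4

# renormalized form with the two-loop scale set to M (local terms dropped)
ren = -sp.Rational(1, 32)/(4*pi**2)**3 * box4((lM**2 + 2*lM)/x**2, x)

# they agree for x != 0, as required of a DiffR identity
assert sp.simplify(ren - bare) == 0
```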
We thus arrive at an expression of the form \begin{eqnarray} \left[ \Delta I^1 \right]_R (x) = - \frac{1}{32(4 \pi^2)^3} \Box \frac{ \ln^2 x^2 M^{2} + 2 \ln x^2 M^2}{x^2} \label{2loop_CDR_ex2} + \ldots \;, \label{2loop_DR_id1} \end{eqnarray} where $\ldots$ stands for the two-loop local terms that we are not taking into account. Notice that in the rest of the work (unless explicitly stated otherwise) $\ldots$ in a two-loop renormalized expression like the one shown above will have the same meaning: local terms not considered. With this procedure we have renormalized all the different structures made of $I^1$ that we have encountered in our calculations. Apart from the previous one, we have found the following relevant expressions \begin{eqnarray} \left[ \Delta \partial_{\mu} I^{1} \right]_R (x) &=& - \frac{1}{64 (4 \pi^2)^3} \partial_{\mu} \Box \frac{ \ln^2 x^2 M^2 + \ln x^2 M^2}{x^2} +\ldots \label{2loop_DR_id2}\\ \left[ \Delta \partial_{\mu} \partial_{\nu} I^1 \right]_R (x) &=& - \frac{1}{96 (4 \pi^2)^3} \left[\partial_{\mu} \partial_{\nu} \Box \frac{ \ln^2 x^2 M^2 + \frac{2}{3} \ln x^2 M^2}{x^2} \right. \nonumber \\ & & - \left. \frac{1}{4}\delta_{\mu \nu} \Box \Box \frac{\ln^2 x^2 M^2 + \frac{11}{3} \ln x^2 M^2}{x^2} \right] + \ldots \label{2loop_DR_id3} \\ \left[ \Delta \Box I^1 \right]_R (x) &=& \frac{1}{32 ( 4 \pi^2)^2} \Box \Box \frac{ \ln x^2 M^2}{x^2} +\ldots \label{2loop_DR_id4} \end{eqnarray} To obtain each of these results, we only have to consider the CDR renormalization of $I^1$, and afterwards apply usual DiffR, setting all the two-loop mass scales equal to $M$. Let us illustrate the simplicity of the procedure with $\Delta\partial_{\mu} I^1$.
Given the renormalized form of $I^1$ we find \begin{eqnarray} \left[ \Delta \partial_{\mu} I^1 \right]_R (x) &\stackrel{CDR}{=}& \frac{1}{4(4 \pi^2)^3} \left[ \frac{1}{x^2} \partial_{\mu} \frac{\ln x^2 M^2}{x^2} \right]_R \nonumber \\ &\stackrel{DiffR}{=}& - \frac{1}{16(4 \pi^2)^3} \left[ \partial_{\mu} \frac{1 - 2 \ln x^2 M^2}{x^4} \right]_R \nonumber \\ &=& - \frac{1}{64 (4 \pi^2)^3} \partial_{\mu} \Box \frac{ \ln^2 x^2 M^2 + \ln x^2 M^2}{x^2} + \ldots \end{eqnarray} \subsection{Overlapping divergences} \label{overlap_integrals} Diagrams with overlapping divergences are more complex, as it is sometimes difficult to recognize the one-loop subdivergences that need to be treated with CDR to start with. Our approach will be to obtain, through different methods that we will explain later in detail, a list of renormalized two-loop integrals with overlapping divergences, where in each calculation one-loop CDR rules have been maintained in every step. Although this list is restricted to integrals with at most four derivatives acting on the propagators and two free indices, it is found to be very useful, as it serves as a basis that we can use to express the renormalized overlapping contributions to two-point functions in theories with derivative couplings at two loops. As we detail in appendix \ref{ap_BFM}, these two-point functions are what we need to obtain the beta function if we use the background field method. This list will be applied in our work to renormalize and obtain the two-loop beta function of (Super)QED and (Super)Yang-Mills. We use the conventions $z = x-y$ and $\partial_{\mu} \equiv \partial_{\mu}^x$. We also define $H(x-y) \equiv H(z)$ as \begin{eqnarray} H[{\cal{O}}_1,{\cal{O}}_2 \; ; \; {\cal{O}}_3,{\cal{O}}_4] = \int d^4 u d^4 v \; ( {\cal{O}}_1^{x} \Delta_{xu})( {\cal{O}}_2^{x} \Delta_{xv})( {\cal{O}}_3^{y} \Delta_{yu} ) ({\cal{O}}_4^{y} \Delta_{yv}) \Delta_{uv} \;, \label{H_definition} \end{eqnarray} with ${\cal{O}}_i$ a differential operator.
\begin{eqnarray} H^R[1,1 \; ; \; 1,1] &=& \frac{6 \pi^4 \xi(3) }{ ( 4 \pi^2)^4} \Delta \equiv a \Delta \label{int1} \\ H^R[\partial_{\mu},1 \; ; \; 1,1] &=& \frac{ 3 \xi(3)}{16 (4 \pi^2)^2} ( \partial_{\mu} \Delta) \equiv \frac{a}{2} \partial_{\mu} \Delta \label{int2} \\ H^R[1,\partial_{\lambda} \; ; \; 1,\partial_{\lambda}] &=& - \frac{1}{16(4 \pi^2)^3} \Box \frac{\ln z^2 M^2}{z^2} + \ldots \label{int3} \\ \partial_{\lambda} H^R[1,\partial_{\mu} \; ; \; 1,\partial_{\lambda}] &=& - \frac{1}{32 (4 \pi^2)^3} \partial_{\mu} \Box \frac{ \frac{1}{2} \ln z^2 M^2}{z^2} + \dots \label{int4}\\ \partial_{\lambda} H^R[1,1 \; ; \; \partial_{\lambda} \partial_{\nu},1] &=& \frac {1}{32(4 \pi^2)^3} \partial_{\nu} \Box \frac{ \frac{1}{4} \ln^2 z^2 M^2 + \frac{3}{4} \ln z^2 M^2 }{z^2} + \ldots \label{int5}\\ H^R[1,\partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\mu},1] &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\mu} \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 - \frac{7}{8} \ln z^2 M^2 }{z^2} + \ldots \label{int6} \\ H^R[\partial_{\mu} \partial_{\lambda},\partial_{\lambda} \; ; \; 1,1] &=& \frac{1}{32(4 \pi^2)^3} \partial_{\mu} \Box \frac{ - \frac{1}{2} \ln^2 z^2 M^2 - \ln z^2 M^2}{z^2}+ \ldots \label{int7} \\ \partial_{\lambda} H^R[1,\partial_{\mu} \; ; \; \partial_{\nu} \partial_{\lambda},1] &=& \frac{1}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{8} \ln^2 z^2 M^2 + \frac{1}{8} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. + \delta_{\mu \nu} \Box \Box \frac{-\frac{1}{4} \ln z^2 M^2}{z^2} \right] + \ldots \label{int8}\\ H^R[1,\partial_{\mu} \; ; \; 1,\partial_{\nu}] &=& \frac{1}{32 (4 \pi^2)^3} \delta_{\mu \nu} \Box \frac{- \frac{1}{2} \ln z^2 M^2}{z^2} + \ldots \label{int9} \\ \partial_{\lambda} H^R[1,\partial_{\lambda} \; ; \; \partial_{\mu} \partial_{\nu},1] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ - \frac{1}{2} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. 
+ \delta_{\mu \nu} \Box \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} \right] + \ldots \label{int10} \\ \partial_{\lambda} H^R[1,\partial_{\lambda} \; ; \; 1, \partial_{\mu} \partial_{\nu}] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{2} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. + \delta_{\mu \nu} \Box \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} \right] + \ldots \label{int10a} \end{eqnarray} \begin{eqnarray} H^R[1,1 \; ; \; \partial_{\mu} \partial_{\nu},1] &=& \frac{1}{32(4 \pi^2)^3} \delta_{\mu \nu} \Box \frac{ \frac{1}{4} \ln^2 z^2 M^2 + \frac{3}{4} \ln z^2 M^2}{z^2} + \ldots \label{int11} \\ \partial_{\lambda} H^R[1,1 \; ; \; \partial_{\lambda} \partial_{\nu},\partial_{\mu}] &=& \frac{1}{32 (4 \pi^2)^3} \delta_{\mu \nu} \Box \Box \frac{ \frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} + \ldots \label{int12} \\ \partial_{\lambda} H^R[1,1 \; ; \; \partial_{\mu} \partial_{\nu},\partial_{\lambda}] &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\mu} \partial_{\nu} \Box \frac{\frac{1}{8} \ln^2 z^2 M^2 + \frac{3}{8} \ln z^2 M^2}{z^2} + \ldots \label{int13} \\ H^R[1,\partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda},1] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{6} \ln^2 z^2 M^2 - \frac{5}{36} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. + \delta_{\mu \nu} \Box \Box \frac{ - \frac{1}{24} \ln^2 z^2 M^2 - \frac{29}{72} \ln z^2 M^2}{z^2} \right] + \ldots \label{int14}\\ H^R[1,\partial_{\mu} \partial_{\lambda} \; ; \; 1,\partial_{\nu} \partial_{\lambda}] &=& \frac{1}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{\frac{1}{6} \ln^2 z^2 M^2 + \frac{49}{36} \ln z^2 M^2}{z^2} \right. \nonumber \\ & & \left. 
+ \delta_{\mu \nu} \Box \Box \frac{- \frac{1}{24} \ln^2 z^2 M^2 - \frac{11}{72} \ln z^2 M^2}{z^2} \right] + \ldots \nonumber \\ \label{int15} \end{eqnarray} These integrals are obtained essentially by applying two properties: \begin{itemize} \item Integral relations presented in appendix \ref{ap_calc}. These exact relations allow us to express some of the integrals in terms of others that have an explicit d'Alembertian acting on one of the propagators. Once we have done that, using $\Box \Delta = - \delta$ we can express these integrals in terms of the previously defined $I^1$. Then, we can straightforwardly apply the procedure for nested divergences presented in the previous section.\footnote{This is also the reason why we have not listed here the cases where the differential operator is a d'Alembertian. For example, it is obvious that $ H[ \Box,1 \; ; \; 1,1] = - \Delta I^1$. } \item The decomposition into trace part, traceless part and fixed local term imposed by CDR on $T[\partial_{\mu} \partial_{\nu}]$, as in (\ref{CDR_T}). \end{itemize} As in the previous section, let us illustrate the procedure with an explicit example. Integral (\ref{int5}) can be evaluated with both methods. First, we will make use of integral relation (\ref{rel_int2}) and write this integral as a sum of integrals with nested divergences \begin{eqnarray} \partial_{\lambda}^x \int &d^4 u d^4 v & \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} = \nonumber \\ &=& - \frac{1}{2} \partial_{\nu}^y \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \Box \Delta_{yu} ) \Delta_{yv} \Delta_{uv} \nonumber \\ & & + \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \Box \Delta_{yu} ) ( \partial_{\nu}^y \Delta_{yv}) \Delta_{uv} \nonumber \\ & & - \frac{1}{2} \partial_{\nu}^y \partial_{\lambda}^y \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} \;.
\nonumber \\ \end{eqnarray} Now, we have to apply as usual $\Box \Delta = - \delta $ and rewrite these integrals in terms of $I^1$. Note that the third integral can easily be shown to be finite; its value is obtained in appendix \ref{ap_integrales} to be $\frac{a}{4} \partial_{\nu} ( \Box \Delta)$, with $a=\frac{6 \pi^4 \xi(3) }{ ( 4 \pi^2)^4}$. \begin{eqnarray} \partial_{\lambda}^x \int &d^4 u d^4 v & \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} = \nonumber \\ &=& - \frac{1}{2} \partial_{\nu} ( \Delta I^1 ) + \frac{1}{2} ( \Delta \partial_{\nu} I^1 ) + \frac{a}{4} \partial_{\nu} ( \Box \Delta) \;. \nonumber \\ \end{eqnarray} Applying the results found in section \ref{Nested_div} for the $I^1$ expression, its renormalized value is \begin{eqnarray} \partial_{\lambda}^x \int &d^4 u d^4 v & \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} = \nonumber \\ &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\nu} \Box \frac{ \frac{1}{4} \ln^2 z^2 M^2 + \frac{3}{4} \ln z^2 M^2}{z^2} + \ldots \end{eqnarray} where $\ldots$ stands for the local terms that we are not taking into account and $z=x-y$.
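The bookkeeping of logarithmic coefficients in this assembly reduces to rational arithmetic: writing each renormalized nested structure as a pair of $(\ln^2, \ln)$ coefficients in units of $\frac{1}{32(4\pi^2)^3}\,\Box(\cdot)/z^2$, the combination above reproduces the coefficients of (\ref{int5}). A minimal sketch of this check:

```python
from fractions import Fraction as F

# (ln^2, ln) coefficients, in units of 1/(32 (4 pi^2)^3) Box(.)/z^2, of the
# renormalized nested structures, eqs. (2loop_DR_id1)-(2loop_DR_id2)
# (local terms dropped):
Delta_I1     = (F(-1), F(-2))        # [Delta I^1]_R
Delta_dnu_I1 = (F(-1, 2), F(-1, 2))  # [Delta d_nu I^1]_R (with d_nu Box(.)/z^2)

# integral relation used above:
#   d_lam H[1,1; d_lam d_nu, 1] = -1/2 d_nu (Delta I^1)
#                                 + 1/2 (Delta d_nu I^1) + finite local piece
result = tuple(-F(1, 2)*c1 + F(1, 2)*c2
               for c1, c2 in zip(Delta_I1, Delta_dnu_I1))

# must reproduce the log coefficients of eq. (int5): (1/4) ln^2 + (3/4) ln
assert result == (F(1, 4), F(3, 4))
```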
We can also obtain this integral by making use of the CDR relation (\ref{CDR_T}) and performing a trace-traceless decomposition of $( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv}$ as \begin{eqnarray} \partial_{\lambda}^x \int &d^4 u d^4 v & \Delta_{xu} \Delta_{xv} ( \partial_{\lambda}^y \partial_{\nu}^y \Delta_{yu} ) \Delta_{yv} \Delta_{uv} = \nonumber \\ &=& \frac{1}{4} \partial_{\nu}^x \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} ( \Box \Delta_{yu}) \Delta_{yv} \Delta_{uv} \nonumber \\ & & + \partial_{\lambda}^x \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} \left[ ( \partial_{\lambda}^y \partial_{\nu}^y - \frac{1}{4} \delta_{\lambda \nu} \Box ) \Delta_{yu} \right] \Delta_{yv} \Delta_{uv} \nonumber \\ & & - \frac{1}{128 \pi^2} \partial_{\nu}^x \int d^4 u d^4 v \; \Delta_{xu} \Delta_{xv} \delta (y-u) \delta (y-v) \nonumber \\ &=& - \frac{1}{4} \partial_{\nu} ( \Delta I^1 )_R - \frac{1}{128 \pi^2} \partial_{\nu} \Delta^2_R + \partial_{\lambda} I_{\lambda \nu \; R} \\ &=& \frac{1}{32 (4 \pi^2)^3} \partial_{\nu} \Box \frac{ \frac{1}{4} \ln^2 z^2 M^2 + \frac{3}{4} \ln z^2 M^2}{z^2} + \partial_{\lambda} I_{\lambda \nu \; R} \;, \end{eqnarray} where $\partial_{\lambda} I_{\lambda \nu \; R}$ is the traceless part, which is finite. As we can see, both results agree. Although in this example we can perform the calculation with both methods with the same effort, with other integrals the situation is different, and we shall have to study each case in order to choose the best one. The explicit evaluation of all the integrals is presented in section \ref{ap_integrales} of appendix \ref{ap_calc}. \chapter{Abelian QFT applications} \label{chap_abelian_examples} In this chapter we apply the ideas and methods we have just presented to two of the most relevant examples of abelian gauge theories: QED and its supersymmetric extension, SuperQED.
Although both theories have already been treated in \cite{Haagensen:1992vz,Song} using DiffR, we will show that our procedure simplifies the calculations, avoiding the use of Ward identities. \section{QED} QED is one of the simplest examples of a gauge theory, as the gauge symmetry group is an abelian one, $U(1)$. Hence, it is a good theory to start with, as we can clearly see all the key points of our renormalization procedure. \subsection{The model} We use the same conventions as \cite{Haagensen:1992vz}. The $d=4$ massless QED lagrangian is \begin{eqnarray} {\cal{L}} &=& \frac{1}{4} F^{\mu \nu} F_{\mu \nu} + \bar{\psi} \gamma^{\mu} ( \partial_{\mu} + i e A_{\mu} ) \psi \;, \end{eqnarray} where $\psi$ is the fermion field, $A_{\mu}$ is the $U(1)$ gauge field and $F_{\mu \nu}$ is the field strength built from $A_{\mu}$ as $F_{\mu \nu} (x) = \partial_{\mu} A_{\nu}(x) - \partial_{\nu} A_{\mu} (x)$. The $\gamma$ matrices satisfy the Clifford algebra $\anticomm{\gamma_{\mu}}{\gamma_{\nu}} = 2 \delta_{\mu \nu}$. With $w$ an infinitesimal parameter, the QED action is invariant under the following $U(1)$ transformations \begin{eqnarray} A_{\mu} \rightarrow A_{\mu} - \frac{1}{e} \partial_{\mu} w \nonumber \\ \psi \rightarrow \psi + i \psi w \nonumber \\ \bar{\psi} \rightarrow \bar{\psi} - i \bar{\psi} w \;. \end{eqnarray} Hence, when quantizing QED we have to take care of this invariance. We have an infinite number of different gauge field configurations (those obtained through gauge transformations from a given one) that correspond to the same physical state. In a path integral approach, we want to integrate only over the relevant gauge field configurations; hence, we have to pick only one field from each gauge orbit.
To accomplish this there is a well-known procedure \cite{Faddeev:1967fc,Peskin:1995ev}, which implies that we have to add to the action a gauge fixing term that depends on a new parameter $\alpha$ and possible auxiliary fields (Faddeev-Popov ghost fields, which in the case of QED and SuperQED are not relevant). Different values of $\alpha$ correspond to different gauge choices. In particular, in our calculation we will use $\alpha = 1$ (Feynman gauge). The complete lagrangian is then \begin{eqnarray} {\cal{L}} &=& \frac{1}{4} F^{\mu \nu} F_{\mu \nu} + \frac{1}{2 \alpha} ( \partial_{\mu} A_{\mu} )^2 + \bar{\psi} \gamma^{\mu} ( \partial_{\mu} + i e A_{\mu} ) \psi \;. \end{eqnarray} With this action the gauge field and fermion propagators (in a generic gauge) are \begin{eqnarray} \Delta_{\mu \nu} (x-y) &=& \frac{1}{16 \pi^2} \left( \delta_{\mu \nu} \Box + (\alpha - 1) \partial_{\mu} \partial_{\nu} \right) \ln (x-y)^2 s^2 \nonumber \\ S(x-y) &=& - \gamma^{\lambda} \partial_{\lambda} \Delta (x-y) \;, \label{QED_propagators} \end{eqnarray} where $s$ stands for an irrelevant constant with mass dimension. Also, considering the expansion of the effective action, we write the terms corresponding to the vacuum polarization $\Pi_{\mu \nu}$ and the fermion self-energy $\Sigma$ as \begin{eqnarray} \Gamma_{eff} &=& \frac{1}{2} \int d^4 x d^4 y \; A_{\mu}(x) \left[ \left( \left(1- \frac{1}{\alpha} \right) \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box \right) \delta^{(4)}(x-y) - \Pi_{\mu \nu}(x-y) \right] A_{\nu}(y) \nonumber \\ & & + \int d^4 x d^4 y \; \bar{\psi}(x) ( \gamma^{\mu} \partial_{\mu} \delta^{(4)}(x-y) - \Sigma (x-y))\psi(y) + \ldots \end{eqnarray} \subsubsection{Background field method} Fixing the gauge is necessary in order to quantize the theory, but it has the drawback of making us lose explicit gauge invariance in the intermediate results. In order to avoid this, the background field method was developed \cite{DeWitt:1967ub}.
As the method is detailed in appendix \ref{ap_BFM}, we only briefly outline it here. The key point is the splitting of the gauge field in two parts: the quantum and the background fields ($A_{\mu}$ and $B_{\mu}$ respectively) \begin{eqnarray} A_{\mu} \rightarrow A_{\mu} + B_{\mu} \;. \end{eqnarray} We can use $A_{\mu}$ as the integration variable of the partition function, which implies that the gauge has to be fixed only for this field. As a result, we retain explicit gauge invariance in $B_{\mu}$. Along with this, as is shown in appendix \ref{ap_BFM}, this procedure has another relevant consequence: the coupling constant and the background field renormalizations are related, which implies that the beta function can be obtained from the two-point function contribution alone. Hence, we have a background effective action of the form \begin{eqnarray} \Gamma_{eff}[B] &=& \frac{1}{2} \int d^4 x d^4 y \; B_{\mu}(x) \left[ \left( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box \right) \delta^{(4)}(x-y) - \Pi^{BB}_{\mu \nu}(x-y) \right] B_{\nu}(y) + \ldots \;, \nonumber \\ \end{eqnarray} and our aim is to calculate the two-loop expansion of the two-point 1PI function $\Pi^{BB}_{\mu \nu}$. Notice also that, as the infinitesimal gauge transformation of the unsplit theory does not depend on the quantum gauge field, we can choose in the split theory the usual gauge fixing condition $G = \partial^{\mu} A_{\mu}$. So, we have a split lagrangian of the form \begin{eqnarray} {\cal{L}} &=& \frac{1}{4} F^{\mu \nu} F_{\mu \nu} + \frac{1}{4} B^{\mu \nu} B_{\mu \nu} +\frac{1}{2 \alpha} ( \partial_{\mu} A_{\mu} )^2 + \bar{\psi} \gamma^{\mu} ( \partial_{\mu} + i e A_{\mu} ) \psi + ie \bar{\psi} \gamma^{\mu} B_{\mu} \psi \;, \end{eqnarray} with $B_{\mu \nu} = \partial_{\mu} B_{\nu} - \partial_{\nu} B_{\mu}$. From this lagrangian, we have two relevant interaction vertices, which are shown in figure \ref{QED_Feynman_rules}.
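Before turning to the loop diagrams, it may help to preview the RG bookkeeping that converts the renormalized coefficients of $\Pi^{BB}_{\mu\nu}$ into $\beta(e)$. Schematically, the bracket of $\Gamma^{BB}_{\mu\nu}$ has the form $\frac{1}{e^2}\delta(x) - a_1 \Box L - a_2 e^2 \Box L$ with $L = \ln (x^2 M^2)/x^2$; using $M\partial_M \ln x^2 M^2 = 2$ and $\Box (1/x^2) = -4\pi^2 \delta(x)$, matching the $\delta(x)$ parts of the RG equation order by order fixes $\beta(e)$. The sympy sketch below is our own illustration, with $a_1$, $a_2$ as placeholders for the one- and two-loop coefficients computed later in this section:

```python
# Order-by-order solution of (M d/dM + beta d/de) Gamma = 0 for the schematic
# form Gamma ~ (1/e^2) delta(x) - a1 Box L - a2 e^2 Box L, L = ln(x^2 M^2)/x^2.
# a1, a2 are placeholders for the renormalized coefficients obtained below.
import sympy as sp

e, a1, a2, b1, b2 = sp.symbols('e a1 a2 b1 b2', real=True)
pi = sp.pi

# delta(x) coefficient from M d/dM acting on the non-local terms:
# M d/dM L = 2/x^2 and Box(1/x^2) = -4 pi^2 delta(x)
mdm_delta = (-a1 - a2 * e**2) * 2 * (-4 * pi**2)
# delta(x) coefficient from beta d/de acting on the tree term 1/e^2
beta = b1 * e**3 + b2 * e**5
bde_delta = sp.expand(beta * sp.diff(1 / e**2, e))

expr = sp.expand(mdm_delta + bde_delta)
sol = sp.solve([expr.coeff(e, 0), expr.coeff(e, 2)], [b1, b2])
assert sp.simplify(sol[b1] - 4 * pi**2 * a1) == 0
assert sp.simplify(sol[b2] - 4 * pi**2 * a2) == 0

# With a1 = 1/(3 (4 pi^2)^2) and a2 = 1/(4 (4 pi^2)^3), as found below, this
# reproduces beta(e) = e^3/(3 * 4 pi^2) + e^5/(4 (4 pi^2)^2).
vals = {a1: 1 / (3 * (4 * pi**2)**2), a2: 1 / (4 * (4 * pi**2)**3)}
assert sp.simplify(sol[b1].subs(vals) - 1 / (3 * 4 * pi**2)) == 0
assert sp.simplify(sol[b2].subs(vals) - 1 / (4 * (4 * pi**2)**2)) == 0
```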
\begin{figure}[ht] \centerline{\epsfbox{QED_Feynman_rules.eps}} \caption{QED interaction vertices. Thick wavy lines represent background external fields and thin wavy lines correspond to quantum gauge field propagators. Solid lines represent fermion fields.} \label{QED_Feynman_rules} \end{figure} \subsection{One-loop level} \begin{figure}[ht] \centerline{\epsfbox{QED1loop.eps}} \caption{One-loop QED diagrams.} \label{1loopQED} \end{figure} We first consider the one-loop renormalization of the background photon and fermion self-energies. The diagrams that correspond to these contributions are those of figure \ref{1loopQED}, where thick lines represent $B_{\mu}$ fields, and thin ones correspond to $A_{\mu}$. The correction to the background self-energy is what we need to obtain the one-loop coefficient of the beta function, although it will not be used in two-loop calculations. On the other hand, the fermion self-energy is not relevant for the one-loop beta function study, but it will be used afterwards as a one-loop insertion in a two-loop diagram. \subsubsection{Photon self-energy} We consider now the one-loop contribution to the photon self-energy $\Pi_{\mu \nu}$. The expression for this diagram is \begin{eqnarray} \Pi_{\mu \nu \;(1)}^{BB} (x-y) &=& - (i e)^2 Tr \left[ \gamma_{\mu} \gamma^{\lambda} \partial_{\lambda}^x \Delta \gamma_{\nu} \gamma^{\sigma} \partial_{\sigma}^y \Delta \right] \nonumber \\ &=& - e^2 Tr\left[ \gamma_{\mu} \gamma^{\lambda} \gamma_{\nu} \gamma^{\sigma} \right] \left( \partial_{\lambda} ( \Delta \partial_{\sigma} \Delta ) - \Delta \partial_{\lambda} \partial_{\sigma} \Delta \right) \;. \label{QED1loop_bare} \end{eqnarray} To proceed with the CDR program, we first have to perform all the index contractions, writing this diagram in terms of the basic CDR functions (\ref{basic_CDR_fun}).
To do that, we apply in (\ref{QED1loop_bare}) the following Clifford algebra result \begin{eqnarray} Tr\left[ \gamma_{\mu} \gamma_{\lambda} \gamma_{\nu} \gamma_{\sigma} \right] = 4 ( \delta_{\mu \lambda} \delta_{\nu \sigma} - \delta_{\mu \nu} \delta_{\lambda \sigma} + \delta_{\mu \sigma} \delta_{\nu \lambda} ) \;, \end{eqnarray} which allows us to find the fully expanded expression \begin{eqnarray} \Pi_{\mu \nu \;(1) }^{BB} (x) &=& - 4 e^2 \left[ 2 \partial_{\mu} ( \Delta \partial_{\nu} \Delta ) - 2 \Delta \partial_{\mu} \partial_{\nu} \Delta - \delta_{\mu \nu} \partial_{\lambda} ( \Delta \partial_{\lambda} \Delta ) - \delta_{\mu \nu} \Delta \Box \Delta \right] \;. \end{eqnarray} Now, CDR renormalization of this expression entails only replacing these basic functions with their renormalized values. The full one-loop renormalized background self-energy is given by \cite{Haagensen:1992vz,delAguila:1997kw} \begin{eqnarray} \left. \Pi_{\mu \nu \;(1)}^{BB} (x) \right|_R &=& - ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \left[ - \frac{e^2}{12 \pi^2 ( 4 \pi^2 )} \Box \frac{ \ln x^2 M^2}{x^2} - \frac{e^2}{36 \pi^2} \delta (x) \right] \;. \label{QED_1loop_r} \end{eqnarray} As was guaranteed by the use of CDR, this result is transverse (as required by the Ward identities), and the ambiguity (local term) is fixed. \subsubsection{Fermion self-energy} We will also consider the renormalization of the fermion self-energy $\Sigma$. In a general gauge, the bare expression for this diagram is \begin{eqnarray} \Sigma_{(1)} (x) &=& e^2 \gamma_{\mu} \Delta_{\mu \nu} (x) \gamma^{\lambda} \partial_{\lambda} \Delta (x) \gamma_{\nu} \;. \end{eqnarray} By making use of the expression for the photon propagator in a generic gauge (\ref{QED_propagators}), we can replace the basic functions by their CDR values and obtain the renormalized fermion self-energy as \cite{Haagensen:1992vz,delAguila:1997kw} \begin{eqnarray} \left.
\Sigma_{(1)} (x) \right|_R &=& e^2 \gamma^{\lambda} \left[ \frac{1}{4(4 \pi^2)^2} \partial_{\lambda} \Box \frac{\ln x^2 M^2}{x^2} + (\alpha -1) \left( \frac{1}{4(4 \pi^2)^2} \partial_{\lambda} \Box \frac{\ln x^2 M^2}{x^2} + \frac{1}{16 \pi^2} \partial_{\lambda} \delta (x) \right)\right]. \nonumber \\ \end{eqnarray} \subsection{Two-loop level} \label{QED_two_loop} \begin{figure}[ht] \centerline{\epsfbox{QED2loop.eps}} \caption{Two-loop QED diagrams.} \end{figure} Now we proceed with the two-loop case. There are two relevant graphs with external background fields. Diagram (a) has nested divergences, whereas diagram (b) has overlapping divergences. \subsubsection{Diagram (a)} The expression for this diagram is \begin{eqnarray} \Pi^{BB}_{\mu \nu \;(2\;a)} (x-y) &=& - (ie)^2 \int d^4 u d^4 v \; Tr \left[ \gamma_{\mu} \gamma^{\lambda} (- \partial_{\lambda}^x \Delta_{xu}) \Sigma^{(1)} (u-v) \gamma^{\varepsilon} (- \partial_{\varepsilon}^v \Delta_{vy}) \gamma_{\nu} \gamma^{\sigma} \right. \nonumber \\ & & \left. \times ( - \partial_{\sigma}^y \Delta_{yx} ) \right] \;, \end{eqnarray} where $\Sigma^{(1)}$ is the one-loop fermion self-energy. In the following we will restrict ourselves to Feynman gauge, as the term that takes care of the running of the gauge parameter in the RG equation will be shown not to be relevant for the two-loop beta function \cite{Haagensen:1992vz}. We will discuss this later in detail, when we apply the RG equation. So, in this gauge, the bare fermion self-energy is \begin{eqnarray} \Sigma_{(1)}(x) &=& - 2 e^2 \gamma^{\lambda} \left[ \Delta \partial_{\lambda} \Delta \right] (x) \;. \end{eqnarray} As we stated in section \ref{CDR_rules}, CDR imposes a strict order on the operations of index contraction and renormalization: first all the indices must be contracted, and only then can we renormalize. By inserting the bare fermion self-energy here we keep this order.
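The Clifford-algebra manipulations used in this and the following diagram can be cross-checked numerically with any explicit Euclidean representation of the $\gamma$ matrices. The sketch below picks one such representation (our choice) and verifies the algebra $\anticomm{\gamma_{\mu}}{\gamma_{\nu}} = 2\delta_{\mu\nu}$, the four-$\gamma$ trace, and the contraction identity $\gamma^{\mu}\gamma_{\nu}\gamma_{\rho}\gamma_{\sigma}\gamma_{\mu} = -2\gamma_{\sigma}\gamma_{\rho}\gamma_{\nu}$:

```python
# Numerical check of Clifford-algebra identities in one explicit Euclidean
# representation: gamma_k = [[0, -i s_k], [i s_k, 0]], gamma_4 = [[0, 1], [1, 0]].
import numpy as np

s = [np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]
Z, I2 = np.zeros((2, 2), complex), np.eye(2, dtype=complex)
g = [np.block([[Z, -1j * sk], [1j * sk, Z]]) for sk in s]   # gamma_1..3
g.append(np.block([[Z, I2], [I2, Z]]))                      # gamma_4
d = np.eye(4)                                               # delta_{mu nu}

# Clifford algebra {g_m, g_n} = 2 d_mn
assert all(np.allclose(g[m] @ g[n] + g[n] @ g[m], 2 * d[m, n] * np.eye(4))
           for m in range(4) for n in range(4))
# Tr[g_m g_l g_n g_t] = 4 (d_ml d_nt - d_mn d_lt + d_mt d_nl)
assert all(np.isclose(np.trace(g[m] @ g[l] @ g[n] @ g[t]),
                      4 * (d[m, l] * d[n, t] - d[m, n] * d[l, t]
                           + d[m, t] * d[n, l]))
           for m in range(4) for l in range(4)
           for n in range(4) for t in range(4))
# g^m g_n g_r g_t g_m = -2 g_t g_r g_n
assert all(np.allclose(sum(g[m] @ g[n] @ g[r] @ g[t] @ g[m] for m in range(4)),
                       -2 * g[t] @ g[r] @ g[n])
           for n in range(4) for r in range(4) for t in range(4))
```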
Expanding the expression of $\Pi^{BB}_{\mu \nu \; (2\;a)}$ we find \begin{eqnarray} \Pi^{BB}_{\mu \nu \;(2 \; a)} (x-y) &=& - 2 e^4 Tr[\gamma_{\mu} \gamma_{\lambda} \gamma_{\rho} \gamma_{\varepsilon} \gamma_{\nu} \gamma_{\sigma}] ( \partial_{\sigma}^x \Delta_{xy} ) \partial_{\lambda}^x \partial_{\varepsilon}^x \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} ( \Delta_{uv} \partial^u_{\rho} \Delta_{uv} ) \;. \nonumber \\ \label{QED_2loop_aext} \end{eqnarray} In order to simplify the notation, we define the integral expression \begin{equation} I^0_{\mu} (x-y) = \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} ( \Delta_{uv} \partial_{\mu} \Delta_{uv}) \;. \end{equation} Then, with standard Clifford algebra, we can write (\ref{QED_2loop_aext}) as \begin{eqnarray} \Pi^{BB}_{\mu \nu \;(2\;a)} (x) &=& e^4 \left[ -32 (\partial_{\mu} \Delta) \partial_{\lambda} \partial_{\nu} I^0_{\lambda} + 16 \delta_{\mu \nu} ( \partial_{\sigma} \Delta ) \partial_{\lambda} \partial_{\sigma} I^0_{\lambda} + 16 ( \partial_{\mu} \Delta ) \Box I^0_{\nu} - 8 \delta_{\mu \nu} ( \partial_{\rho} \Delta ) \Box I^0_{\rho} \right] \;. \nonumber \\ \end{eqnarray} The renormalization of $I^0_{\mu}$ is studied in section \ref{ap_UV_IR_I0m} of appendix \ref{ap_calc}. It is found there that this integral expression satisfies $\partial^{\mu} I^0_{\mu \; R} = - \frac{1}{2} I^1_R$ and $\Box I^0_{\mu \;R} = - \frac{1}{2} \partial_{\mu} I^1_R$. Thus, we need only the renormalized value that we have previously obtained for $I^1$. We have finally \begin{eqnarray} \left. \Pi^{BB}_{\mu \nu\;(2\;a)} (x) \right|_R &=& \frac{e^4}{24(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{- \ln^2 x^2 M^2 - \frac{5}{3} \ln x^2 M^2 }{x^2} + \delta_{\mu \nu} \Box \Box \frac{\ln^2 x^2 M^2 + \frac{8}{3} \ln x^2 M^2}{x^2} \right] \nonumber \\ & & + \ldots \nonumber \\ \end{eqnarray} \subsubsection{Diagram (b)} This diagram, in contrast to the previous one, has overlapping divergences.
Following the procedure presented in section \ref{overlap_integrals}, we will express this in terms of the integrals listed in that section. We begin by considering the bare contribution \begin{eqnarray} \Pi_{\mu \nu \;(2\;b)}^{BB} (x-y) &=& - ( i e )^4 \int d^4 u d^4 v \; Tr \left[ \gamma_{\mu} ( \gamma^{\alpha} \partial^x_{\alpha} \Delta_{xu} ) \gamma^{\rho} ( \gamma^{\beta} \partial_{\beta}^u \Delta_{uy} ) \gamma_{\nu} \nonumber \right. \\ & & \times \left. ( \gamma^{\lambda} \partial_{\lambda}^y \Delta_{yv} ) \gamma_{\rho} ( \gamma^{\sigma} \partial_{\sigma} \Delta_{vx}) \Delta_{uv} \right] \;,\nonumber \\ \end{eqnarray} or, written in terms of the expressions we defined as $H$ in (\ref{H_definition}) \begin{eqnarray} \Pi_{\mu \nu \;(2\;b)}^{BB} (x-y) &=& 2 e^4 Tr[ \gamma_{\mu} \gamma_{\alpha} \gamma^{\rho} \gamma_{\beta} \gamma_{\nu} \gamma_{\lambda} \gamma_{\rho} \gamma_{\sigma}] H[ \partial_{\alpha}, \partial_{\sigma} \; ; \; \partial_{\beta}, \partial_{\lambda}] \;. \nonumber \\ \end{eqnarray} If we use the identity for $\gamma$ matrices $\gamma^{\mu} \gamma_{\nu} \gamma_{\rho} \gamma_{\sigma} \gamma_{\mu} = -2 \gamma_{\sigma} \gamma_{\rho} \gamma_{\nu}$ \cite{Peskin:1995ev} and integrate by parts the derivatives acting over $\Delta_{xu}$ and $\Delta_{yv}$, we find that this diagram can be put as \begin{eqnarray} \Pi_{\mu \nu}^{BB \; (2 \; b)} (x-y) &=& 2 e^4 Tr[ \gamma_{\mu} \gamma_{\alpha} \gamma_{\lambda} \gamma_{\nu} \gamma_{\beta} \gamma_{\sigma}] \left( - \partial_{\alpha}^x \partial_{\lambda}^x H [1, \partial_{\sigma} \; ; \; \partial^x_{\beta}, 1] - \partial^x_{\alpha} H[1, \partial_{\sigma} \; ; \; \partial_{\beta} \partial_{\lambda},1] \right. \nonumber \\ & & \left. + \partial^x_{\lambda} H[1, \partial_{\alpha} \partial_{\sigma} \; ; \; \partial_{\beta},1] + H[1, \partial_{\alpha} \partial_{\sigma} \; ; \; \partial_{\beta} \partial_{\lambda},1] \right) \;. 
\label{QED_diag_b_bare_H} \end{eqnarray} Using the properties of the trace, the Clifford algebra definition $\anticomm{\gamma_{\mu}}{\gamma_{\nu}} = 2 \delta_{\mu \nu}$ and standard $\gamma$-matrix results such as $Tr[\gamma_{\mu} \gamma_{\lambda} \gamma_{\nu} \gamma_{\beta} ] = 4 ( \delta_{\mu \lambda} \delta_{\nu \beta} - \delta_{\mu \nu} \delta_{\lambda \beta} + \delta_{\mu \beta} \delta_{\nu \lambda})$, we can write the right-hand side of the previous equation as \begin{eqnarray} e^4 &\left[\right.& - 8 \delta_{\mu \nu} \Box H[ 1 , \partial_{\lambda} \; ; \; \partial_{\lambda}, 1] + 16 \partial_{\mu}^x H[ 1, \partial_{\nu} \; ; \; \Box, 1] - 8 \delta_{\mu \nu} \partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; \Box , 1] \nonumber \\ & & - 16 \partial_{\mu}^x H[ 1, \partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\nu}, 1] + 16 \partial_{\lambda}^x H [ 1, \partial_{\lambda} \; ; \; \partial_{\mu} \partial_{\nu} ,1] - 16 \partial_{\lambda}^x H [ 1, \partial_{\mu} \; ; \; \partial_{\lambda} \partial_{\nu},1] \nonumber \\ & & - 16 \partial_{\mu} H [ 1, \Box \; ; \; \partial_{\nu}, 1] + 8 \delta_{\mu \nu} \partial_{\lambda}^x H[ 1, \Box \; ; \; \partial_{\lambda}, 1] + 16 \partial_{\lambda}^x H[ 1, \partial_{\lambda} \partial_{\mu} \; ; \; \partial_{\nu},1] \nonumber \\ & & - 16 \partial_{\lambda}^x H[ 1, \partial_{\mu} \partial_{\nu} \; ; \; \partial_{\lambda}, 1] + 16 \partial_{\nu}^x H[ 1, \partial_{\lambda} \partial_{\mu} \; ; \; \partial_{\lambda}, 1] - 16 H[ 1, \Box \; ; \; \partial_{\mu} \partial_{\nu},1] \nonumber \\ & & \left. + 8 \delta_{\mu \nu} H[ 1, \Box \; ; \; \Box ,1 ] + 32 H[ 1 , \partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda}, 1] - 16 H[ 1 , \partial_{\mu} \partial_{\nu} \; ; \; \Box , 1] \; \right].
\nonumber \\ \end{eqnarray} As we have discussed in section \ref{overlap_integrals}, contributions containing a $\Box$ can be reduced by means of the propagator equation $\Box \Delta = - \delta$ and expressed in terms of $I^1$ as given in section \ref{Nested_div}. The remaining contributions can be found in the list of renormalized expressions with overlapping divergences given in section \ref{overlap_integrals} (or can be easily expressed in terms of integrals of the list). Hence, we obtain the renormalized result as \begin{eqnarray} \left. \Pi^{BB}_{\mu \nu \;(2\;b) } (x) \right|_R &=& \frac{e^4}{12 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \ln^2 x^2 M^2 + \frac{14}{3} \ln x^2 M^2}{x^2} - \delta_{\mu \nu} \Box \Box \frac{ \ln^2 x^2 M^2 + \frac{17}{3} \ln x^2 M^2}{x^2} \right] \nonumber \\ & & + \ldots \end{eqnarray} \subsubsection{Final expression} Adding the two previous results, the final two-loop renormalized expression is \begin{eqnarray} \left. \Pi_{ \; \mu \nu \;(2)}^{BB} (x) \right|_R &=& \left. 2 \left. \Pi_{\mu \nu \;(2\;a)}^{BB} (x) \right|_R + \Pi_{\mu \nu\;(2\;b)}^{BB} (x) \right|_R \nonumber \\ &=& \frac{e^4}{4(4 \pi^2)^3} ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \Box \frac{ \ln x^2 M^2}{x^2} + \ldots \end{eqnarray} up to local terms. \subsection{RG equation} When renormalizing the two-loop diagrams we have restricted ourselves to the Feynman gauge. We are allowed to do that as the term in the RG equation that takes into account the running of the gauge parameter ($\gamma_{\alpha} \partial / \partial \alpha$) will be shown not to be relevant when verifying the two-loop RG equations. To prove this, in appendix \ref{ap_Gauge} we evaluate the one-loop RG equation for quantum gauge fields. 
We find there the expansion of $\gamma_{\alpha}$ to be \begin{eqnarray} \gamma_{\alpha} (e) &=& - \frac{2 \alpha}{3 (4 \pi^2)} e^2 + \ldots \end{eqnarray} Along with this, notice that the tree level background effective action and the one-loop correction do not depend on the gauge parameter (in this theory we have no quantum-background coupling). Hence, as the first gauge corrections arise at the two-loop level, we do not have to take them into account in order to verify the two-loop RG equations ($\gamma_{\alpha} \partial / \partial \alpha$ acting on them is two orders higher in $e$). So, we are allowed to perform our calculation in the Feynman gauge. We define the background field two-point function to be \begin{eqnarray} \Gamma_{\mu \nu}^{BB}(x-y) &=& \left(\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box \right) \delta^{(4)}(x-y) - \Pi^{BB}_{\mu \nu} (x-y) \;. \end{eqnarray} As is detailed in appendix \ref{ap_BFM}, in the background field method the charge and background field renormalizations are related: $Z_{e} \sqrt{Z_B} = 1$. Hence, we will redefine the background field to be $B_{\mu} = \frac{1}{e} B_{\mu}^{\prime}$ as for this new field the anomalous dimension $\gamma_{B^{\prime}}$ cancels\footnote{With our definition $B^{\prime}_{\mu 0} = e_{0} B_{\mu 0} = Z_{e} \sqrt{Z_{B}} e B_{\mu} = B^{\prime}_{\mu}$ and then $\gamma_{B^{\prime}} = \frac{1}{2} M \frac{\partial}{\partial M} \ln Z_{B^{\prime}} = 0$}. Hence, the background two-point function up to two loops is found to be \begin{eqnarray} \Gamma_{\mu \nu}^{BB} (x) &=& ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \left[ \frac{1}{e^2} \delta^{(4)}(x) - \frac{1}{9( 4 \pi^2)} \delta^{(4)}(x) - \frac{1}{3( 4 \pi^2)^2} \Box \frac{\ln x^2 M^2}{x^2} \right. \nonumber \\ & & \left. 
- \frac{e^2}{4 (4 \pi^2)^3} \Box \frac{\ln x^2 M^2}{x^2} \right] + \ldots \;, \end{eqnarray} which satisfies the following RG equation \begin{eqnarray} \left( M \frac{\partial}{\partial M} + \beta(e) \frac{\partial}{\partial e} \right) \Gamma_{\mu \nu}^{BB} = 0 \;, \end{eqnarray} where, again, $\beta(e)$ is the QED $\beta$-function and we have dropped the term corresponding to the running of the gauge parameter, as it is of higher order in $e$. Then, we obtain the following two-loop expansion for $\beta(e)$ \begin{eqnarray} \beta (e) &=& \frac{1}{3(4 \pi^2)} e^3 + \frac{1}{4 (4 \pi^2)^2} e^5 + {\cal{O}}(e^7) \;. \end{eqnarray} These results agree with previous ones found in the literature \cite{Haagensen:1992vz,Schwinger:1973rv,Berestetsky:1982aq,Itzykson:1980rh,Arnone:2005vd}. \subsection{Comparison with DiffR} To stress the key points of our calculation, let us compare this procedure with usual DiffR \cite{Haagensen:1992vz}. With $M_{\Sigma}$ and $M_{V}$ the one-loop renormalization scales of the fermion self-energy and the three-point vertex $V_{\mu}$ respectively, the Ward identity \begin{eqnarray} \frac{\partial}{\partial z^\mu} V_{\mu} (x-z,y-z) = - i e \left[ \delta^{(4)} (z-x) - \delta^{(4)}(z-y) \right] \Sigma (x-y) \end{eqnarray} imposes that these scales are related as \begin{eqnarray} \ln \frac{M^2_{\Sigma}}{M^2_{V}} = \frac{1}{2} \;. \label{QED_mass_rel} \end{eqnarray} When dealing with two-loop contributions, in each case one has to make use of the corresponding one-loop scale ($M_{\Sigma}$ or $M_{V}$). After a lengthy calculation we find the final values for $\Pi_{\mu \nu \;(2a)}^{BB}$ and $\Pi_{\mu \nu \;(2b)}^{BB}$ to be \begin{eqnarray} \left. \Pi_{\mu \nu \;(2a)}^{BB} (x) \right|_R &=& - \frac{e^4}{24 (4 \pi^2)^3} \left[ ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \Box \left( \frac{ \ln^2 x^2 M^2_{\Sigma} + \frac{5}{3} \ln x^2 M^2}{x^2}\right) - \delta_{\mu \nu} \Box \Box \frac{\ln x^2 M^2}{x^2}\right] \nonumber \\ \left.
\Pi_{\mu \nu \;(2b)}^{BB} (x) \right|_R &=& - \frac{e^4}{12 (4 \pi^2)^3} \left[ - ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \Box \left( \frac{\ln^2 x^2 M^2_V + \frac{17}{3} \ln x^2 M^2}{x^2}\right) \right. \nonumber \\ & & \left. + \delta_{\mu \nu} \Box \Box \frac{\ln x^2 M^2}{x^2} \right] \;, \end{eqnarray} and to obtain the entire two-loop vacuum polarization, we have to use the mass relation (\ref{QED_mass_rel}) to put one of the scales in terms of the other \begin{eqnarray} \ln^2 x^2 M^2_{\Sigma} = \ln^2 x^2 M^2_{V} + \ln x^2 M^2_{V} + \frac{1}{4} \;. \end{eqnarray} \section{SuperQED} In this section we will deal with the supersymmetric extension of QED, SuperQED. As the gauge group is abelian, this is one of the simplest examples of supersymmetric gauge theory we can consider. This theory was already renormalized using standard DiffR in \cite{Song}, where, as usual, explicit evaluation of Ward identities played a central r\^ole. We will re-obtain those results by applying our procedure. \subsection{Supersymmetry} Coleman and Mandula \cite{Coleman:1967ad} showed that the commutators of the generators of any internal bosonic symmetry group and the generators of the Poincare group vanish. Hence, space-time and internal symmetry groups cannot be mixed in a non-trivial way. However, this {\em{no-go}} theorem can be avoided by allowing fermionic symmetry generators \cite{Haag:1974qh}, and the algebra that we obtain is the so-called {\em{supersymmetry algebra}}. Therefore, supersymmetric transformations are generated by translationally invariant quantum operators which change fermionic states into bosonic states and vice versa. Hence, for each particle we have another one with the same mass and opposite statistics (of course, as this is not observed in nature, we conclude that if supersymmetry is a fundamental symmetry of nature it is necessarily broken).
Other relevant consequences of supersymmetry are the positivity of the energy \cite{Gates:1983nr,Sohnius:1985qm,West:1990tg,Wess:1992cp} and, due to the relations imposed on the coupling constants and the equality of the bosonic and fermionic degrees of freedom, some cancellations that occur between different Feynman diagrams and make supersymmetric theories more convergent \cite{Gates:1983nr,West:1990tg,Wess:1992cp}. In order to work efficiently with supersymmetric theories, an extension of the usual space-time with additional anticommuting coordinates was developed \cite{Salam:1974yz}. With this space (called superspace) and the extended fields defined in it (superfields), we can perform all the calculations with supersymmetry being manifest. In particular, perturbation theory can be extended to superspace, which allows us to simplify the calculations, as the cancellations between component graphs related by supersymmetry occur automatically. In appendix \ref{ap_SUSY} we give a brief introduction to supersymmetry, superspace and the conventions that we use. Also, we give there a list of references where the reader can find a complete treatment of the subject. \subsection{The SuperQED model} A supersymmetric abelian gauge theory can be defined in terms of a field strength $W_{\alpha}$ \cite{Gates:1983nr} which is a chiral superfield ($\bar{D}_{\dot{\alpha}} W_{\alpha} = 0$) that satisfies \begin{eqnarray} D^{\alpha} W_{\alpha} = - \bar{D}^{\dot{\alpha}} \bar{W}_{\dot{\alpha}} \;. \end{eqnarray} Hence, this field can be expressed in terms of an unconstrained real scalar superfield $V = \bar{V}$ as \begin{eqnarray} W_{\alpha} &=& i \bar{D}^2 D_{\alpha} V \nonumber \\ \bar{W}_{\dot{\alpha}} &=& -i D^2 \bar{D}_{\dot{\alpha}} V \;.
\end{eqnarray} From the algebra of covariant derivatives (\ref{SUSY_D_algebra}) we can easily conclude that $W_{\alpha}$ is invariant under the transformations \begin{eqnarray} V^{\prime} &=& V + i ( \bar{\Lambda} - \Lambda) \;, \end{eqnarray} where $\Lambda$ is a chiral parameter. The relevant action that we find is \begin{eqnarray} S = \int d^4 x d^4 \theta \; W^2 = \frac{1}{2} \int d^4 x d^4 \theta \; V D^{\alpha} \bar{D}^2 D_{\alpha} V \;, \end{eqnarray} which is the supersymmetric gauge invariant generalization of the action for a free vector field, as can be seen if we write this expression in component fields \cite{Gates:1983nr}. As the matter part of the action can be expressed in terms of a chiral field $\Phi$ as $\int d^8 z \; \bar{\Phi} e^{V} \Phi$ \cite{Gates:1983nr,Wess:1992cp,Sohnius:1985qm}, the supersymmetric extension of massless QED is an action of the form \cite{Wess:1992cp} \begin{eqnarray} S &=& \int d^4 x d^2 \theta \; W^2 + \int d^4 x d^4 \theta \; \bar{\Phi}_{+} e^{gV} \Phi_{+} + \int d^4 x d^4 \theta \; \bar{\Phi}_{-} e^{-gV} \Phi_{-} \;. \end{eqnarray} \subsubsection{Background field method} In section \ref{BFM_SYM} of appendix \ref{ap_BFM} we discuss in detail the application of the background field method to supersymmetric gauge theories. It is found there that when dealing with an abelian theory, we have a linear quantum-background splitting of the form \begin{eqnarray} V \rightarrow V + B \;, \end{eqnarray} where $V$ and $B$ are the quantum and background gauge fields respectively. The background effective action that we have is then of the form \begin{eqnarray} \Gamma[B] &=& \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \Gamma(x-y) + \ldots \end{eqnarray} Notice that the background field method allows us to obtain the beta function $\beta(g)$ only from the renormalization of the background field two-point function. 
Hence, we have to obtain the one- and two-loop coefficients of the expansion of $\Gamma(x)$, as with them we can obtain $\beta(g)$ up to two-loop order. \subsection{One-loop level} \begin{figure}[ht] \centerline{\epsfbox{SQED1loop.eps}} \caption{One-loop SuperQED diagrams. Thick lines correspond to external background fields and solid lines represent $\Phi_{+}$ or $\Phi_{-}$ propagators.} \label{SQED1loop_diag} \end{figure} One could also consider a diagram corresponding to a tadpole contribution of a loop of $\Phi_{\pm}$ fields, but it gives no contribution, as CDR imposes $\Delta \Box \Delta |_R = 0$. Hence, the only relevant one-loop contribution from $\Phi_{\pm}$ fields to the background vacuum self-energy is the diagram shown in figure \ref{SQED1loop_diag}. We will obtain the contribution corresponding to a $\Phi_{+}$ loop, which is denoted as $\Gamma_{+}^{(1)}$, as the other one, $\Gamma_{-}^{(1)}$, is exactly the same. With the superspace Feynman rules defined in appendix \ref{ap_SUSY}, we find the expression of the diagram to be (notice that the superspace propagator is $P_{ij} = \Delta (x_i - x_j) \delta^4 (\theta_i - \theta_j) \equiv \Delta_{x_i x_j} \delta_{ij}$) \begin{eqnarray} \Gamma^{(1)}_{+} &=& \frac{g^2}{2} \int d^8 z_1 d^8 z_2 \; B(z_1) B(z_2) \left[ D^2_1 P_{12} \stackrel{\leftarrow}{D^2}_2 \right] \left[ D^2_2 P_{12} \stackrel{\leftarrow}{D^2}_1 \right] \nonumber \\ &=& \frac{g^2}{2} \int d^8 z_1 d^8 z_2 \; B(z_1) B(z_2) \left[ \bar{D}^2_2 D^2_2 P_{12} \right] \left[ D^2_2 \bar{D}^2_2 P_{21} \right] \;. \end{eqnarray} Applying D-algebra we remove all the derivatives from the first propagator and make them act on the external fields and the other propagator.
So \begin{eqnarray} \Gamma^{(1)}_{+} &=& \frac{g^2}{2} \int d^8 z_1 d^8 z_2 \; B(z_1) \left[ \bar{D}^2 D^2 B (z_2) \right] P_{12} \left[ \bar{D}^2_2 D^2_2 P_{12} \right] \nonumber \\ & & + \frac{g^2}{2} \int d^8 z_1 d^8 z_2 \; B(z_1) B(z_2) P_{12} \left[ \Box \bar{D}^2_2 D^2_2 P_{12} \right] \nonumber \\ & & - \frac{ig^2}{2} \int d^8 z_1 d^8 z_2 \; B(z_1) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B (z_2) \right] P_{12} \left[ \partial_{\alpha \dot{\alpha}}^2 \bar{D}^2_2 D^2_2 P_{12}\right] \;, \end{eqnarray} where we have used the results of the D-algebra $D^2 \bar{D}^2 D^2 = \Box D^2$ and $\comm{D_{\alpha}}{\bar{D}^2} = - i \partial_{\alpha \dot{\alpha}} \bar{D}^{\dot{\alpha}}$. At this point, we can apply the identity (\ref{SUSY_delta_propagators}) for supercovariant derivatives $\delta_{12} D^2 \bar{D}^2 \delta_{12} = \delta_{12}$. This gives us one free $\theta$-space $\delta$-function that we can use to evaluate one of the $\theta$ integrals. With this, we have (using the identifications $x_1 = x$ and $x_2 = y$) \begin{eqnarray} \Gamma^{(1)}_{+} &=& \frac{g^2}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^2 D^2 B(y, \theta) \right] \Delta^2_{xy} \nonumber \\ & & + \frac{g^2}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \Delta_{xy} \left( \Box \Delta_{xy} \right) \nonumber \\ & & - \frac{i g^2}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(y, \theta) \right] \Delta_{xy} \partial_{\alpha \dot{\alpha}}^y \Delta_{xy} \;. \end{eqnarray} This is the bare expression that we have to renormalize by applying the CDR rules. It is clear that the second term does not contribute, as CDR imposes $\Delta \Box \Delta |_R = 0$. The CDR renormalization of the third term, $\Delta \partial_{\alpha \dot{\alpha}} \Delta |_R= \frac{1}{2} \partial_{\alpha \dot{\alpha}} \Delta^2_R$, allows us to integrate by parts the space-time derivative and make it act on the external background fields.
Hence, using the identity \begin{eqnarray} \bar{D}^2 D^2 + \frac{i}{2} \partial_{\alpha \dot{\alpha}} \bar{D}^{\dot{\alpha}} D^{\alpha} = \frac{1}{2} D^{\alpha} \bar{D}^2 D_{\alpha} \;, \end{eqnarray} we find the final renormalized expression to be \begin{eqnarray} \Gamma^{(1)}_{+ \;R} &=& \frac{g^2}{4} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \left[\Delta^2 \right]_{R} (x-y) \nonumber \\ &=& - \frac{g^2}{16(4 \pi^2)^2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} \;. \nonumber \\ \end{eqnarray} As we can see, this term is manifestly gauge invariant, as is guaranteed by the use of CDR. The total one-loop effective action is the sum of both contributions corresponding to $\Phi_{+}$ and $\Phi_{-}$ \begin{eqnarray} \Gamma^{(1)}_R &=& \Gamma^{(1)}_{+ \; R} + \Gamma^{(1)}_{- \; R} \nonumber \\ &=& - \frac{g^2}{8(4 \pi^2)^2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} \;. \label{SQED_1loop_ren} \end{eqnarray} \subsection{Two-loop level} \begin{figure}[ht] \centerline{\epsfbox{SQED2loop.eps}} \caption{Two-loop SuperQED diagrams. Thick wavy lines correspond to external background gauge fields, thin wavy lines correspond to quantum gauge field propagators and solid lines correspond to $\Phi_{+}$ or $\Phi_{-}$ propagators.} \label{SQED_two_loop_diag} \end{figure} As we can see from figure \ref{SQED_two_loop_diag}, two-loop calculations involve the quantum gauge field propagator. Although this propagator depends on the gauge parameter, we can use it evaluated in Feynman gauge, as the term that takes care of the running of the gauge parameter in the RG equation ($\gamma_{\alpha}(g) \partial / \partial \alpha$), starts its expansion in terms of the coupling constant at order $g^2$. 
Hence, $\gamma_{\alpha}$ acting on any two-loop diagram will be two orders higher in $g$ and, consequently, not relevant when verifying the RG equation. We will detail this later. Also notice that, as in the one-loop case, we will obtain only contributions corresponding to $\Phi_{+}$ fields, as the diagrams with $\Phi_{-}$ fields have the same expression. Before obtaining each diagram, let us discuss a useful identity that we will use when applying $D$-algebra to a supergraph \cite{Song}. Let $F_{i}$ be a function of the superspace coordinates $z_i = (x_i,\theta_i)$, and suppose that we have an expression whose only dependence on $z_i$ is of the form \begin{eqnarray} \int d^8 z_i d^8 z_j d^8 z_k \ldots F_{i} \left[ D^2_i \bar{D}^2_i P_{ij} \right] \left[ \bar{D}^2_i D^2_i P_{ik} \right] \ldots \label{SQED_algebra_id_paso0} \end{eqnarray} Applying integration by parts rules in superspace (which can be found in section \ref{SUSY_integration} of appendix \ref{ap_SUSY}) and $D$-algebra, we obtain \begin{eqnarray} (\ref{SQED_algebra_id_paso0})&= & \int d^8 z_i d^8 z_j d^8 z_k \ldots \left[ D^2_i F_i \right] \left[ \bar{D}^2_i P_{ij} \right] \left[ \bar{D}^2_i D^2_i P_{ik} \right] \ldots \nonumber \\ &+& \int d^8 z_i d^8 z_j d^8 z_k \ldots F_i \left[ \bar{D}^2_i P_{ij} \right] \left[ \Box D^2_i P_{ik} \right] \ldots \nonumber \\ &+& \int d^8 z_i d^8 z_j d^8 z_k \ldots \left[ D^{\alpha}_i F_i \right] \left[ \bar{D}^2_i P_{ij} \right] \left[ \partial_{\alpha \dot{\alpha}}^i \bar{D}^{\dot{\alpha}}_i D^2_i P_{ik} \right] \ldots \label{SQED_algebra_id_paso1} \end{eqnarray} If we integrate by parts again to remove all the superspace derivatives from $P_{ij}$, it is clear that we will obtain some contributions that will vanish by the identities (\ref{SUSY_delta_propagators}), when we use the $\theta$-space $\delta$-functions to set $j=k$.
As an example, consider $\left[ \bar{D}^2_i F_i \right] P_{ij} \left[ \Box D^2_i P_{ik} \right]$, which is obtained from the second term of (\ref{SQED_algebra_id_paso1}). As can be seen, this contribution will cancel when we set $j=k$ and apply $\delta_{ij} D^2 \delta_{ij} = 0$. So, the final relevant terms of the expansion of (\ref{SQED_algebra_id_paso0}) are \begin{eqnarray} & & \int d^8 z_i d^8 z_j d^8 z_k \ldots \left[ \bar{D}^2_i D^2_i F_i \right] P_{ij} \left[ \bar{D}^2_i D^2_i P_{ik} \right] \ldots \nonumber \\ &+& \int d^8 z_i d^8 z_j d^8 z_k \ldots F_i P_{ij} \left[ \Box \bar{D}^2_i D^2_i P_{ik} \right] \ldots \nonumber \\ &-& i \int d^8 z_i d^8 z_j d^8 z_k \ldots \left[ \bar{D}^{\dot{\alpha}}_i D^{\alpha}_i F_i \right] P_{ij} \left[ \partial_{\alpha \dot{\alpha}}^i \bar{D}^2_i D^2_i P_{ik} \right] \ldots \nonumber \\ &+& \textrm{(terms~that~vanish~when~j=k)} \;. \label{D_algebra_id} \end{eqnarray} \subsubsection{Diagram $(a)$} The bare expression of this diagram is \begin{eqnarray} \Gamma^{(2a)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) B(z_2) P_{43} \left[ D^2_3 \bar{D}^2_3 P_{43} \right] \left[ \bar{D}^2_2 D^2_2 P_{42} \right] \nonumber \\ & & ~ \times \left[ D^2_1 \bar{D}^2_1 P_{13} \right] \left[ D^2_2 \bar{D}^2_2 P_{21} \right] \nonumber \\ & & - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) B (z_2) P_{43} \left[ D^2_4 \bar{D}^2_4 P_{43} \right] \left[ D^2_2 \bar{D}^2_2 P_{24} \right] \nonumber \\ & & ~ \times \left[ \bar{D}^2_1 D^2_1 P_{31} \right] \left[ D^2_1 \bar{D}^2_1 P_{12} \right] \;. \label{SQED_diag_a_basic} \end{eqnarray} We will study the first contribution, as the second one, except for having $D$ and $\bar{D}$ interchanged, is the same. 
Using the identity $P_{43} D^2_3 \bar{D}^2_3 P_{43} = \Delta^2_{43} \delta_{43}$, integrating by parts and applying $D$-algebra we find \begin{eqnarray} \Gamma^{(2aI)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) B(z_2) \left( \Delta^2_{43} \delta_{43} \right) \left[ \bar{D}^2_2 D^2_2 P_{42} \right] \left[ D^2_1 \bar{D}^2_1 P_{13} \right] \left[ D^2_2 \bar{D}^2_2 P_{21} \right] \nonumber \\ &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) B(z_2) \left( \Delta^2_{43} \delta_{43} \right) \left[ \bar{D}^2_2 D^2_2 P_{42} \right] \left( \Box P_{13} \right) \left[ D^2_2 \bar{D}^2_2 P_{21} \right] \;. \nonumber \\ \end{eqnarray} Making use of (\ref{D_algebra_id}) we can write this as \begin{eqnarray} \Gamma^{(2aI)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) \left[ \bar{D}^2 D^2 B (z_2) \right] \left( \Delta^2_{43} \delta_{43} \right) \left[ \bar{D}^2_2 D^2_2 P_{42} \right] \left( \Box P_{13} \right) P_{21} \nonumber \\ & & - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) B(z_2) \; \left( \Delta^2_{43} \delta_{43} \right) \left[ \Box \bar{D}^2_2 D^2_2 P_{42} \right] \left( \Box P_{13} \right) P_{21} \nonumber \\ & & + \frac{i g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B (z_2) \right] \left( \Delta^2_{43} \delta_{43} \right) \left[ \partial_{\alpha \dot{\alpha}}^2 \bar{D}^2_2 D^2_2 P_{42} \right] \left( \Box P_{13} \right) P_{21} \;. \nonumber \\ \end{eqnarray} After using identities (\ref{SUSY_delta_propagators}) we can evaluate three of the $\theta$ integrals with the free $\delta$-functions. 
Then, with the identifications $x_1 = x$, $x_2 = y$, $x_3 = u$ and $x_4 = v$, the contribution becomes \begin{eqnarray} \Gamma^{(2aI)}_{+} &=& - \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^2 D^2 B (y, \theta) \right] \Delta_{xy} \int d^4 u d^4 v \; \Delta_{yv} ( \Box \Delta_{xu} ) \Delta^2_{uv} \nonumber \\ & & - \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \Delta_{xy} \int d^4 u d^4 v \; ( \Box \Delta_{yv} ) ( \Box \Delta_{xu} ) \Delta^2_{uv} \nonumber \\ & & + \frac{i g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(y, \theta) \right] \Delta_{xy} \int d^4 u d^4 v \; ( \partial_{\alpha \dot{\alpha}}^y \Delta_{yv} ) ( \Box \Delta_{xu} ) \Delta^2_{uv} \;. \nonumber \\ \end{eqnarray} Remembering $\Box \Delta = - \delta$ and the definition of integral expression $I^1$, this can be put as \begin{eqnarray} \Gamma^{(2aI)}_{+} &=& \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^2 \bar{D}^2 B(y, \theta) \right] \left[ \Delta I^1 \right](x-y) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \Delta^3_{xy} \nonumber \\ & & + \frac{i g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(y, \theta) \right] \left[ \Delta \partial_{\alpha \dot{\alpha}}^x I^{1} \right](x-y) \;. \end{eqnarray} The second contribution of (\ref{SQED_diag_a_basic}) only differs from the first one by the interchange of $D$ and $\bar{D}$. 
Hence, using $D$-algebra identities (\ref{SUSY_D_algebra}), the total bare expression of diagram $(a)$ is found to be \begin{eqnarray} \Gamma^{(2a)}_{+} &=& \Gamma^{(2aI)}_{+} + \Gamma^{(2aII)}_{+} \nonumber \\ &=& \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \left[ \Box ( \Delta I^{1} ) - 2 \Delta^3 - \partial^{\alpha \dot{\alpha}} ( \Delta \partial_{\alpha \dot{\alpha}} I^{1} ) \right] (x-y) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \left[ \Delta I^1 \right] (x-y) \;. \end{eqnarray} \subsubsection{Diagram $(b)$} The bare expression of this contribution is \begin{eqnarray} \Gamma^{(2b)}_{+} &=& - g^4 \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) B(z_2) P_{31} \left[ D^2_3 \bar{D}^2_3 P_{31} \right] \left[ D^2_2 \bar{D}^2_2 P_{23} \right] \left[ D^2_1 \bar{D}^2_1 P_{12} \right] \nonumber \\ & & - g^4 \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) B(z_2) P_{31} \left[ D^2_3 \bar{D}^2_3 P_{23} \right] \left[ D^2_1 \bar{D}^2_1 P_{31} \right] \left[ D^2_2 \bar{D}^2_2 P_{21} \right] \;. \nonumber \\ \label{SQED_2loop_diag_b_bare} \end{eqnarray} As in the previous diagram, the two terms that form (\ref{SQED_2loop_diag_b_bare}) differ only by the interchange of $D$ and $\bar{D}$, so we will obtain the first one, that we name $\Gamma^{(2bI)}_{+}$. 
With identities (\ref{D_algebra_id}) and (\ref{SUSY_delta_propagators}) we find the relevant expansion of $\Gamma^{(2bI)}_{+}$ to be \begin{eqnarray} \Gamma^{(2bI)}_{+} &=& - g^4 \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) B(z_2) \left( \Delta^2_{31} \delta_{31} \right) \left[ D^2_2 \bar{D}^2_2 P_{32} \right] \left[ \bar{D}^2_2 D^2_2 P_{12} \right] \nonumber \\ &=& - g^4 \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) \left[ D^2_2 \bar{D}^2_2 B(z_2) \right] \left( \Delta^2_{31} \delta_{31} \right) \left[ D^2_2 \bar{D}^2_2 P_{32} \right] P_{12} \nonumber \\ & & - g^4 \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) B(z_2) \left( \Delta^2_{31} \delta_{31} \right) \left[ \Box D^2_2 \bar{D}^2_2 P_{32} \right] P_{12} \nonumber \\ & & + i g^4 \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) \left[ D^{\alpha} \bar{D}^{\dot{\alpha}} B(z_2) \right] \left( \Delta^2_{31} \delta_{31} \right) \left[ \partial_{\alpha \dot{\alpha}}^2 D^2_2 \bar{D}^2_2 P_{32} \right] P_{12} \;. \nonumber \\ \end{eqnarray} Using identities (\ref{SUSY_delta_propagators}) we can get rid of the superspace derivatives and obtain free Grassmannian $\delta$-functions. 
After evaluating the Grassmannian integrals, the identifications $x_1 = x$, $x_2 = y$ and $x_3 = u$ allow us to obtain \begin{eqnarray} \Gamma^{(2bI)}_{+} &=& - g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^2 \bar{D}^2 B(y, \theta) \right] \Delta_{xy} \int d^4 u \; \Delta_{yu} \Delta^2_{xu} \nonumber \\ & & - g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \Delta_{xy} \int d^4 u \; ( \Box \Delta_{yu} ) \Delta^2_{xu} \nonumber \\ & & + i g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^{\dot{\alpha}} B(y, \theta) \right] \Delta_{xy} \int d^4 u \; ( \partial_{\alpha \dot{\alpha}}^y \Delta_{yu} ) \Delta^2_{xu}\;, \nonumber \\ \end{eqnarray} or, in terms of the $I^1$ integral expression, \begin{eqnarray} \Gamma^{(2bI)}_{+} &=& - g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^2 \bar{D}^2 B(y, \theta) \right] \left[ \Delta I^1 \right] (x-y) \nonumber \\ & & + g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \Delta^3_{xy} \nonumber \\ & & - i g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^{\dot{\alpha}} B(y, \theta) \right] \left[ \Delta \partial_{\alpha \dot{\alpha}}^x I^1 \right] (x-y) \;. \end{eqnarray} Finally, adding up the other contribution and using $D$-algebra relations (\ref{SUSY_D_algebra}) we find the total bare contribution $\Gamma^{(2b)}_{+}$ to be \begin{eqnarray} \Gamma^{(2b)}_{+} &=& g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y,\theta) \left[ - \Box ( \Delta I^1) + 2 \Delta^3 + \partial^{\alpha \dot{\alpha}} \left( \Delta \partial_{\alpha \dot{\alpha}} I^1 \right) \right] (x-y) \nonumber \\ & & - g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \left[ \Delta I^1 \right] (x-y) \;. 
\end{eqnarray} \subsubsection{Diagram $(c)$} This diagram is given by \begin{eqnarray} \Gamma^{(2c)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) B(z_2) P_{34} \left[ D^2_1 \bar{D}^2_1 P_{14} \right] \left[ \bar{D}^2_1 D^2_1 P_{13} \right] \nonumber \\ & & \times \left[ D^2_2 \bar{D}^2_2 P_{23} \right] \left[ \bar{D}^2_2 D^2_2 P_{24} \right] \;. \end{eqnarray} Applying identity (\ref{D_algebra_id}) we can split this expression into three contributions as \begin{eqnarray} \Gamma^{(2c)}_{+} &=& \Gamma^{(2cI)}_{+} + \Gamma^{(2cII)}_{+} + \Gamma^{(2cIII)}_{+} \;, \end{eqnarray} with \begin{eqnarray} \Gamma^{(2cI)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) \left[ \bar{D}^2 D^2 B(z_2) \right] P_{23} P_{34} \left[ \bar{D}^2_2 D^2_2 P_{24}\right] \left[ D^2_1 \bar{D}^2_1 P_{14}\right] \nonumber \\ & & ~ \times \left[ \bar{D}^2_1 D^2_1 P_{13}\right] \nonumber \\ \Gamma^{(2cII)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) B(z_2) P_{23} P_{34} \left[ \Box \bar{D}^2_2 D^2_2 P_{24} \right] \left[ D^2_1 \bar{D}^2_1 P_{14} \right] \left[ \bar{D}^2_1 D^2_1 P_{13} \right] \nonumber \\ \Gamma^{(2cIII)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B(z_1) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B (z_2) \right] P_{23} P_{34} \left[ \partial_{\alpha \dot{\alpha}}^2 \bar{D}^2_2 D^2_2 P_{24} \right] \nonumber \\ & & ~ \times \left[ D^2_1 \bar{D}^2_1 P_{14} \right] \left[ \bar{D}^2_1 D^2_1 P_{13} \right] \;. \end{eqnarray} We will evaluate each contribution separately. 
Starting with $\Gamma^{(2cI)}_{+}$, we can apply (\ref{D_algebra_id}) again and write it as \begin{eqnarray} \Gamma^{(2cI)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \bar{D}^2 D^2 B (z_1) \right] \left[ \bar{D}^2 D^2 B (z_2) \right] P_{34} P_{23} P_{24} P_{14} \left[ \bar{D}^2_1 D^2_1 P_{13} \right] \nonumber \\ & & - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; B (z_1) \left[ \bar{D}^2 D^2 B(z_2) \right] P_{34} P_{23} P_{24} P_{14} \left[ \Box \bar{D}^2_1 D^2_1 P_{13} \right] \nonumber \\ & & + \frac{i g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(z_1) \right] \left[ \bar{D}^2 D^2 B(z_2) \right] P_{34} P_{23} P_{24} P_{14} \nonumber \\ & & ~ \times \left[ \partial_{\alpha \dot{\alpha}}^1 \bar{D}^2_1 D^2_1 P_{13} \right] \;. \nonumber \\ \end{eqnarray} Integrating by parts and using the anticommutative nature of the superspace derivatives, we find that an expression of the form $\int dx dy d \theta [\bar{D}^{\dot{\alpha}} A (x, \theta)][\bar{D}^2 B(y, \theta) ]f(x-y)$ vanishes (after an integration by parts it involves $\bar{D}^{\dot{\alpha}} \bar{D}^2 = 0$). Hence, the first and third expressions automatically cancel. Using the identifications $x_1 = x$, $x_2 = y$, $x_3 = u$ and $x_4 = v$ we have for this contribution \begin{eqnarray} \Gamma^{(2cI)}_{+} &=& \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^2 D^2 B(y, \theta) \right] \Delta_{xy} \int d^4 v \; \Delta_{yv} \Delta^2_{xv} \nonumber \\ &=& \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^2 D^2 B(y, \theta) \right] \left[ \Delta I^1 \right] (x-y) \;. \end{eqnarray} We now continue by evaluating $\Gamma^{(2cII)}_{+}$. As in this case we have the product of the superpropagators $P_{23} P_{34}$, we can use one of the free Grassmannian $\delta$-functions and evaluate the integral over $\theta_3$. 
With this, we can write the expression as \begin{eqnarray} \Gamma^{(2cII)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^4 x_3 d^8 z_4 \; B(z_1) B(z_2) \Delta_{23} \Delta_{34} \left( \Box \Delta_{24} \right) \delta_{24} \left[ D^2_1 \bar{D}^2_1 P_{14} \right] \left[ \bar{D}^2_1 D^2_1 \Delta_{13} \delta_{12} \right] \nonumber \\ &=& \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^4 x_3 \; B(z_1) B(z_2) \Delta_{23}^2 \left[ D^2_1 \bar{D}^2_1 P_{12} \right] \left[ \bar{D}^2_1 D^2_1 \Delta_{13} \delta_{12} \right] \;. \end{eqnarray} We have performed the integral over $z_4$ applying $\Box \Delta(x_2-x_4) = - \delta (x_2-x_4)$ and the Grassmannian $\delta$-function $\delta_{24}$. Introducing again the Grassmannian coordinate $\theta_3$ with a $\delta$-function and integrating by parts the superspace derivatives that act over $\Delta_{13} \delta_{13} (\equiv P_{13})$ we find \begin{eqnarray} \Gamma^{(2cII)}_{+} &=& \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) B(z_2) \left[ D^2_2 \bar{D}^2_2 \Delta^2_{23} \delta_{23} \right] \left[ \bar{D}^2_2 D^2_2 P_{12} \right] P_{13} \;. \end{eqnarray} At this point, applying (\ref{D_algebra_id}), we can integrate by parts the superspace derivatives acting over $P_{12}$ and arrive at \begin{eqnarray} \Gamma^{(2cII)}_{+} &=& \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) \left[ D^2 \bar{D}^2 B(z_2) \right] \left[ D^2_2 \bar{D}^2_2 \Delta^2_{23} \delta_{23} \right] P_{12} P_{13} \nonumber \\ & & + \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) B(z_2) \left[ \Box D^2_2 \bar{D}^2_2 \Delta^2_{23} \delta_{23} \right] P_{12} P_{13} \nonumber \\ & & - i \frac{g^4}{2} \int d^8 z_1 d^8 z_2 d^8 z_3 \; B(z_1) \left[ D^{\alpha} \bar{D}^{\dot{\alpha}} B(z_2) \right] \left[ \partial_{\alpha \dot{\alpha}}^2 D^2_2 \bar{D}^2_2 \Delta^2_{23} \delta_{23} \right] P_{12} P_{13} \;. \nonumber \\ \end{eqnarray} These expressions can be evaluated straightforwardly with the identities (\ref{SUSY_delta_propagators}). 
Using also the coordinate identifications $x_1 = x$, $x_2 = y$, $x_3 = u$ and $x_4 = v$, we have this contribution written in terms of the integral expression $I^1$ as \begin{eqnarray} \Gamma^{(2cII)}_{+} &=& \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^2 \bar{D}^2 B(y, \theta) \right] \left[ \Delta I^1 \right] (x-y) \nonumber \\ & & - \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \Delta^3_{xy} \nonumber \\ & & + i \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^{\dot{\alpha}} B(y, \theta) \right] \left[ \Delta \partial_{\alpha \dot{\alpha}}^x I^{1} \right] (x-y) \;. \end{eqnarray} Finally, we take care of $\Gamma^{(2cIII)}_{+}$. With (\ref{D_algebra_id}), taking into account that the superspace derivatives anticommute and making the usual identifications $x_1 = x$, $x_2 = y$, $x_3 = u$ and $x_4 = v$, we find \begin{eqnarray} \Gamma^{(2cIII)}_{+} &=& i \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(y, \theta) \right] \nonumber \\ & & ~\times \int d^4 u d^4 v \; \Delta_{yu} \Delta_{uv} ( \partial_{\alpha \dot{\alpha}}^y \Delta_{yv}) \Delta_{xv} ( \Box \Delta_{xu}) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; \left[ \bar{D}^{\dot{\beta}} D^{\beta} B(x, \theta)\right] \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(y, \theta) \right] \nonumber \\ & & ~\times \int d^4 u d^4 v \; \Delta_{yu} \Delta_{uv} ( \partial_{\alpha \dot{\alpha}}^y \Delta_{yv} ) \Delta_{xv} ( \partial_{\beta \dot{\beta}}^x \Delta_{xu} ) \;, \end{eqnarray} or, equivalently, integrating by parts the superspace derivatives of the last integral\footnote{$\int [ \bar{D}^{\dot{\beta}} D^\beta B(z_1) ] [ \bar{D}^{\dot{\alpha}} D^{\alpha} B(z_2) ] f_{\alpha \dot{\alpha}, \beta \dot{\beta}} (z_1 - z_2 ) = - \int B(z_1) [D^{\beta} \bar{D}^{\dot{\beta}} \bar{D}^{\dot{\alpha}} D^{\alpha} B(z_2)] f_{\alpha \dot{\alpha},\beta \dot{\beta}}(z_1 -z_2)$. 
It also has to be noted that, due to the anticommutative nature of the superspace derivatives, $\bar{D}^{\dot{\alpha}} \bar{D}^{\dot{\beta}} = - \bar{D}^2 C^{\dot{\alpha} \dot{\beta}}$.} and using the integral expression $H$ defined in (\ref{H_definition}) \begin{eqnarray} \Gamma^{(2cIII)}_{+} &=& + \frac{i g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(y, \theta) \right] \left[ \Delta \partial_{\alpha \dot{\alpha}} I^1 \right](x-y) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x,\theta) \left[ D^{\beta} \bar{D}^2 D^{\alpha} B(y,\theta) \right] C^{\dot{\beta} \dot{\alpha}} H[\partial_{\beta \dot{\beta}},1 \; ; 1, \partial_{\alpha \dot{\alpha}}] \;. \nonumber \\ \end{eqnarray} Adding up the three contributions, we find the final bare expression of $\Gamma^{(2c)}_{+}$ to be \begin{eqnarray} \Gamma^{(2c)}_{+} &=& \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \left[ \Delta I^1 \right](x-y) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \left[ \Box ( \Delta I^1 ) - \Delta^3 - \partial^{\alpha \dot{\alpha}} ( \Delta \partial_{\alpha \dot{\alpha}} I^1 ) \right] (x-y) \nonumber \\ & & + \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[D^{\beta} \bar{D}^2 D^{\alpha} B(y, \theta) \right] C^{\dot{\beta} \dot{\alpha}} H[\partial_{\beta \dot{\beta}},1 \; ; 1, \partial_{\alpha \dot{\alpha}}] \;, \nonumber \\ \label{SQED_diag_c} \end{eqnarray} with $C^{\dot{\beta} \dot{\alpha}}$ given in section \ref{SUSY_Notation} of appendix \ref{ap_SUSY}. \subsubsection{Diagram $(d)$} The bare contribution of this diagram is \begin{eqnarray} \Gamma^{(2d)}_{+} &=& - \frac{g^4}{2} \int d^8 z_1 d^8 z_2 \; B(z_1) B(z_2) \left[ D^2_1 \bar{D}^2_1 P_{12} \right] P_{12} \left[ D^2_2 \bar{D}^2_2 P_{12} \right] \;. 
\end{eqnarray} Applying the identity $\delta_{12} D^2 \bar{D}^2 \delta_{12} = \delta_{12}$ it is clear that, with the identifications $x_1 = x$ and $x_2 = y$, the non-renormalized expression of this diagram is \begin{eqnarray} \Gamma^{(2d)}_{+} &=& - \frac{g^4}{2} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) B(y, \theta) \Delta^3_{xy} \;. \end{eqnarray} \subsubsection{Renormalization} As our renormalization procedure fulfills the CDR rules at one loop, in order to obtain the final renormalized result we can replace each bare expression by its renormalized value and simply add them. Moreover, since with our procedure each expression always has the same renormalized value, we can first add all the bare expressions, and then perform the renormalization. This is forbidden when we use DiffR, as each expression has to be renormalized with its corresponding scale. From the explicit form of the different contributions, it is clear that all the terms cancel exactly, except the last part of diagram $(c)$ (\ref{SQED_diag_c}). Hence, the two-loop renormalized contribution to the background field self-energy is (multiplying by two as we consider both contributions from the chiral matter superfields $\Phi_{+}$ and $\Phi_{-}$) \begin{eqnarray} \Gamma^{2}_R &=& \left. 2 \left( \Gamma_{+}^{(2a)} + \Gamma_{+}^{(2b)} + \Gamma_{+}^{(2c)} + \Gamma_{+}^{(2d)} \right) \right|_R \nonumber \\ &=& g^4 \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\beta} \bar{D}^2 D^{\alpha} B(y, \theta) \right] C^{\dot{\beta} \dot{\alpha}} H^R [\partial_{\beta \dot{\beta}},1 \; ; \; 1, \partial_{\alpha \dot{\alpha}} ] \nonumber \\ &=& - \frac{g^4}{16 (4 \pi^2)^3} \int d^4 x d^4 y d^4 \theta \; B(x, \theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B(y, \theta) \right] \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} + \ldots \;, \nonumber \\ \end{eqnarray} where, when obtaining the final result, we have directly applied the corresponding identity from the list of integrals with overlapping divergences of section \ref{overlap_integrals}. 
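As a bookkeeping check of this cancellation, one can collect from the final expressions of $\Gamma^{(2a)}_{+}$, $\Gamma^{(2b)}_{+}$, $\Gamma^{(2c)}_{+}$ and $\Gamma^{(2d)}_{+}$ the coefficient of each independent structure, in units of $g^4 \int d^4 x d^4 y d^4 \theta$ and keeping the diagrams in that order:
\begin{eqnarray}
B B \, \Box ( \Delta I^1 ) &:& \quad \frac{1}{2} - 1 + \frac{1}{2} = 0 \;, \nonumber \\
B B \, \Delta^3 &:& \quad - 1 + 2 - \frac{1}{2} - \frac{1}{2} = 0 \;, \nonumber \\
B B \, \partial^{\alpha \dot{\alpha}} ( \Delta \partial_{\alpha \dot{\alpha}} I^1 ) &:& \quad - \frac{1}{2} + 1 - \frac{1}{2} = 0 \;, \nonumber \\
B \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B \right] \Delta I^1 &:& \quad \frac{1}{2} - 1 + \frac{1}{2} = 0 \;,
\end{eqnarray}
so that the only surviving structure is indeed the $H$ term of (\ref{SQED_diag_c}).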
Let us remark again that, as was guaranteed by fulfilling CDR rules at one loop, this expression is directly gauge invariant. Before dealing with the background RG equation, we have to justify the use of the Feynman gauge in the calculations. As in the QED case, in appendix \ref{ap_Gauge}, by means of the evaluation of the one-loop RG equation for the quantum gauge fields, we obtain the expansion of the term that takes into account the running of the gauge parameter in the RG equation ($\gamma_{\alpha} \partial / \partial \alpha$). This is of the form \begin{eqnarray} \gamma_{\alpha} &=& - \frac{\alpha}{(4 \pi^2)} g^2 + \ldots \;. \end{eqnarray} At this point, notice that the first gauge corrections to the background effective action arise at the two-loop level. Hence, when verifying the two-loop RG equation, we need not take into account $\gamma_{\alpha} \partial / \partial \alpha$ acting on them, as this is two orders higher in $g$. This is the reason why we are allowed to use the Feynman gauge in our calculations in both QED and SuperQED models. Let us now proceed to the evaluation of the RG equation for the background gauge field self-energy. As in the QED case, if we make the redefinition $B \rightarrow \frac{1}{g}B$, the anomalous dimension term of the renormalization group equation cancels (remember that the coupling constant and the background field renormalizations are related: $Z_{g} \sqrt{Z_B} = 1$). 
So, with this definition, the background field two-point function to two-loop order is \begin{eqnarray} \Gamma_R(x) &=& \frac{1}{2g^2} \delta^4(x) - \frac{1}{8(4 \pi^2)^2} \Box \frac{\ln x^2 M^2}{x^2} - \frac{g^2}{16 ( 4 \pi^2)^3} \Box \frac{ \ln x^2 M^2}{x^2} + \ldots \;, \label{SQED_2_point_function} \end{eqnarray} and it fulfills the following RG equation \begin{eqnarray} \left[ M \frac{\partial}{\partial M} + \beta(g) \frac{\partial}{\partial g} \right] \Gamma_R (x) = 0 \;, \end{eqnarray} where we do not consider the term that takes care of the running of the gauge parameter. By solving this equation order by order in $g$ it is clear that the beta function is of the form \begin{eqnarray} \beta(g) &=& \beta_1 g^3 + \beta_2 g^5 + {\cal{O}}(g^7) \nonumber \\ \beta_1 &=& \frac{1}{16 \pi^2} \nonumber \\ \beta_2 &=& \frac{1}{8 (4 \pi^2)^2} \;. \end{eqnarray} However, as with our supersymmetry conventions the gauge coupling constant differs from the usual one $g_{SQED}$ ($g = \sqrt{2} g_{SQED}$) \cite{Gates:1983nr}, the expansion of the beta function to two-loop order in terms of the usual coupling constant is \begin{eqnarray} \beta(g_{SQED}) &=& \frac{1}{2(4\pi^2)} g^3_{SQED} + \frac{1}{2 (4 \pi^2)^2} g^5_{SQED} + {\cal{O}}(g^7_{SQED}) \;. \end{eqnarray} This agrees with previous results found in the literature \cite{Vainshtein:1986ja,Shifman:1985fi,Novikov:1985rd}. \begin{figure}[ht] \centerline{\epsfbox{SQEDWardID.eps}} \caption{One-loop SQED Ward identities.} \label{SQED_Ward_id_diag} \end{figure} Let us compare this procedure with the steps that we have to take when using usual DiffR. 
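Before turning to that comparison, it is worth making the order-by-order solution explicit. With the normalization $\Delta = 1/(4 \pi^2 x^2)$ assumed here (so that $\Box \Delta = - \delta$), we have $\Box (1/x^2) = - 4 \pi^2 \delta^4 (x)$ and $M \partial_M \ln (x^2 M^2) = 2$, and acting with $M \partial / \partial M$ on (\ref{SQED_2_point_function}) produces only local terms,
\begin{eqnarray}
M \frac{\partial}{\partial M} \Gamma_R (x) &=& \frac{1}{16 \pi^2} \delta^4 (x) + \frac{g^2}{8 (4 \pi^2)^2} \delta^4 (x) + \ldots \;,
\end{eqnarray}
while $\beta(g) \partial \Gamma_R / \partial g = - ( \beta_1 + \beta_2 g^2 ) \delta^4 (x) + {\cal{O}}(g^4)$, since the explicit $g$-dependence of (\ref{SQED_2_point_function}) resides only in the $1/(2g^2)$ and $g^2$ terms, and $\beta_1 g^3$ acting on the two-loop term is already of order $g^4$. Requiring the RG equation to hold order by order then reproduces $\beta_1 = 1/(16 \pi^2)$ and $\beta_2 = 1/(8 (4 \pi^2)^2)$.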
We first have to consider the Ward identities, which can be shown to relate the 3-point 1PI Green's functions $<T \; B \Phi_{\pm} \bar{\Phi}_{\pm}>_{1PI}$ and the 2-point 1PI Green's functions $<T \; \Phi_{\pm} \bar{\Phi}_{\pm}>_{1PI}$ as \cite{Song} \begin{eqnarray} \bar{D}^2(z_1) < T \; B (z_1) \Phi_{-}(z_2) \bar{\Phi}_{-} (z_3) >_{1PI} &=& - g < T \; \Phi_{-} (z_1) \bar{\Phi}_{-}(z_3)>_{1PI} \bar{D}^2(z_1) \delta^8(z_1-z_2) \nonumber \\ D^2(z_1) < T \; B(z_1) \Phi_{-}(z_2) \bar{\Phi}_{-}(z_3) >_{1PI} &=& - g < T \; \Phi_{-}(z_2) \bar{\Phi}_{-}(z_1) >_{1PI} D^2(z_1) \delta^8(z_1-z_3) \nonumber \\ \bar{D}^2(z_1) < T \; B(z_1) \Phi_{+} (z_2) \bar{\Phi}_{+} (z_3) >_{1PI} &=& g < T \; \Phi_{+}(z_1) \bar{\Phi}_{+}(z_3) >_{1PI} \bar{D}^2(z_1) \delta^8(z_1-z_2) \nonumber \\ D^2(z_1) < T \; B(z_1) \Phi_{+}(z_2) \bar{\Phi}_{+}(z_3)>_{1PI} &=& g < T \; \Phi_{+}(z_2) \bar{\Phi}_{+}(z_1)>_{1PI} D^2(z_1) \delta^8(z_1 - z_3 ) \;. \nonumber \\ \label{SQED_Ward_id} \end{eqnarray} With these identities, we can obtain the one-loop relation between the scales that renormalize the $B- \Phi - \bar{\Phi}$ vertex functions (diagrams $(a)$-$(c)$ of figure \ref{SQED_Ward_id_diag} with scales $M_{V_{a}}$, $M_{V_{b}}$ and $M_{V_{c}}$ respectively) and the $\Phi \bar{\Phi}$ self-energy corrections (diagram $(d)$ of figure \ref{SQED_Ward_id_diag} with scale $M_{V_{\Sigma}}$). Performing the explicit renormalization and imposing identities (\ref{SQED_Ward_id}), this relation is found to be \cite{Song} \begin{eqnarray} M^2_{V_{a}} M^2_{V_{\Sigma}} = M^2_{V_{b}} M^2_{V_{c}} \;. \label{SQED_mass_relation} \end{eqnarray} Hence, when renormalizing each of the two-loop diagrams, we have to use the corresponding one-loop scale, add up all the results and apply (\ref{SQED_mass_relation}). In \cite{Song} it is shown that this relation cancels contributions that come from different diagrams and are grouped into an expression multiplied by $\ln [(M^2_{V_{\Sigma}} M^2_{V_{a}})/ (M^2_{V_{b}} M^2_{V_{c}})]$. 
As can be seen from our procedure, these cancellations take place automatically once we have renormalized the one-loop divergences with the rules of CDR. \chapter{Non-abelian QFT applications} \section{Yang-Mills} \subsection{Conventions and definitions} \subsubsection{Relevant group theory definitions} Let $G$ be a continuous symmetry group with generators $T^a$. We can define an associated Lie algebra through the commutation relation \begin{eqnarray} \comm{T^a}{T^b} = i f^{abc} T^c \;, \end{eqnarray} where $f^{abc}$ are the structure constants of the Lie algebra, which obey the Jacobi identity $f^{ade} f^{bcd} + f^{bde} f^{cad} + f^{cde} f^{abd} = 0$. We can have several representations of this Lie algebra in terms of matrices $t^a_r$: one of them is the {\em{adjoint representation}}, denoted by $r=G$, where the representation matrices are given by the structure constants $(t^b_G)_{ac} = i f^{abc}$. These representation matrices are found to satisfy \begin{eqnarray} {\rm tr}[t^a_r t^b_r] &=& C(r) \delta^{ab} \nonumber \\ \sum_{a} t^a_r t^a_r &=& C_2(r) \boldsymbol{1} \;, \end{eqnarray} where $C(r)$ and $C_2(r)$ are constants, the latter being called the quadratic Casimir operator. For the concrete case of the adjoint representation, we write the relation for the Casimir operator as \begin{eqnarray} f^{acd} f^{bcd} = C_A \delta^{ab} \;, \end{eqnarray} where we define $C_A = C_2(G)$. \subsubsection{Yang-Mills model} \label{YM_conventions} Yang-Mills theory is one of the simplest examples of non-abelian gauge theory \cite{Yang:1954ek}. It is obtained by imposing invariance under a local continuous symmetry group. We start by considering $V(x)$ to be a unitary $n \times n$ matrix representing an element of the gauge group. 
Then, the fields transform according to \begin{eqnarray} \psi (x) &\rightarrow& V(x) \psi(x) \nonumber \\ &=& ( 1 + i w^a(x) t^a + {\cal{O}}(w^2)) \psi(x) \;, \end{eqnarray} where we have considered an infinitesimal parameter $w^a$, which has allowed us to expand $V(x)$ in terms of the generators of the symmetry group. Now we have to construct a covariant derivative that, when acting on $\psi(x)$, has the same transformation as the field. This derivative, expressed in terms of a connection (gauge potential) $A_{\mu \; ij}= A^a_{\mu} t^a_{ij}$, is found to be \cite{Peskin:1995ev} \begin{eqnarray} D_{\mu \; ij} &=& \partial_{\mu} \delta_{ij} - i g A_{\mu}^a t^a_{ij} \;. \end{eqnarray} If we choose the adjoint representation, this becomes $D_{\mu}^{ac} = \partial_{\mu} \delta^{ac} + g f^{abc} A_{\mu}^b$. As this derivative has to transform covariantly under the gauge group, the infinitesimal gauge transformation of $A_{\mu}^a$ is found to be \cite{Peskin:1995ev} \begin{eqnarray} A_{\mu}^a &\rightarrow& A_{\mu}^a + \frac{1}{g} \partial_{\mu} w^a - f^{abc} w^b A_{\mu}^c + {\cal{O}} (w^2) \nonumber \\ &=& A_{\mu}^a + \frac{1}{g} ( D_{\mu}w )^a + {\cal{O}} (w^2) \;. \end{eqnarray} By considering the commutator of covariant derivatives, we can define a field strength as $-i g F_{\mu \nu}^a t^a = \comm{D_{\mu}}{D_{\nu}}$ that in terms of the gauge potential has the form \begin{eqnarray} F_{\mu \nu}^a &=& \partial_{\mu} A_{\nu}^a - \partial_{\nu} A_{\mu}^a + g f^{abc} A_{\mu}^b A_{\nu}^c \;. \end{eqnarray} With this field strength it is straightforward to define a gauge-invariant quantity, the Yang-Mills action: \begin{eqnarray} S &=& \frac{1}{4} \int d^4 x \; F_{\mu \nu}^a F_{\mu \nu}^a \;. \end{eqnarray} As in the abelian case, when quantizing the action in a path integral approach, we have to fix the gauge in order to suppress all the equivalent field configurations obtained from a given one through gauge transformations. 
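Before turning to the quantization, the classical relations above can be checked mechanically. The following sketch (an illustration, not part of the derivation; it assumes the SU(2) case with $t^a = \sigma^a/2$ and $f^{abc} = \epsilon^{abc}$, uses the sympy library, and all symbol names are hypothetical) verifies that $\comm{D_{\mu}}{D_{\nu}} \psi = - i g F^a_{\mu \nu} t^a \psi$ with $F^a_{\mu \nu}$ as defined above:

```python
import sympy as sp

# SU(2) illustration: generators t^a = sigma^a/2, structure constants f^{abc} = epsilon^{abc}
x = sp.symbols('x0:4', real=True)
g = sp.Symbol('g')
sigma = [sp.Matrix([[0, 1], [1, 0]]),
         sp.Matrix([[0, -sp.I], [sp.I, 0]]),
         sp.Matrix([[1, 0], [0, -1]])]
t = [s / 2 for s in sigma]

# gauge potential components A^a_mu(x) and a generic doublet test field psi(x)
A = [[sp.Function('A%d%d' % (a, mu))(*x) for mu in range(4)] for a in range(3)]
psi = sp.Matrix([sp.Function('p1')(*x), sp.Function('p2')(*x)])

def D(mu, f):
    """Covariant derivative D_mu = partial_mu - i g A^a_mu t^a acting on a doublet."""
    gauge = sum((A[a][mu] * t[a] for a in range(3)), sp.zeros(2, 2))
    return sp.diff(f, x[mu]) - sp.I * g * gauge * f

def F(a, mu, nu):
    """Field strength F^a_{mu nu} = d_mu A^a_nu - d_nu A^a_mu + g eps^{abc} A^b_mu A^c_nu."""
    return (sp.diff(A[a][nu], x[mu]) - sp.diff(A[a][mu], x[nu])
            + g * sum(sp.LeviCivita(a, b, c) * A[b][mu] * A[c][nu]
                      for b in range(3) for c in range(3)))

def commutator_matches(mu, nu):
    """Check [D_mu, D_nu] psi = -i g F^a_{mu nu} t^a psi as a polynomial identity."""
    comm = D(mu, D(nu, psi)) - D(nu, D(mu, psi))
    target = -sp.I * g * sum((F(a, mu, nu) * t[a] for a in range(3)), sp.zeros(2, 2)) * psi
    return (comm - target).expand() == sp.zeros(2, 1)

print(all(commutator_matches(mu, nu) for mu in range(4) for nu in range(mu + 1, 4)))
```

The check is purely algebraic: the $\partial_{\mu} \partial_{\nu} \psi$ terms cancel in the commutator, and the $A A$ terms reproduce the $g f^{abc} A^b_{\mu} A^c_{\nu}$ piece through $\comm{t^b}{t^c} = i \epsilon^{bcd} t^d$.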
The result is that the gauge-fixed partition function $Z$ is \begin{eqnarray} Z[J] = \int [dA] \; \det \left[ \frac{\delta G^a (A^w)}{\delta w^b} \right]_{w = 0} \exp \left[ - S(A) - \frac{1}{2 \alpha} \int d^4 x \; G^a G^a + J_{\mu}^a A_{\mu}^a \right] \;, \end{eqnarray} where $G^a$ is the gauge-fixing function. Writing the determinant in terms of anticommuting ghost fields\footnote{For $\theta$ an anticommuting variable, $\int \prod d \theta d \bar{\theta} \; e^{\sum \bar{\theta}_i a_{ij} \theta_j} = \det[a]$.} $\eta$ and choosing for the gauge-fixing function $G^a = \partial^{\mu} A_{\mu}^a$, we can find the complete Yang-Mills lagrangian to be \begin{eqnarray} \cal{L} &=& \frac{1}{4} F^a_{\mu \nu} F^a_{\mu \nu} + \frac{1}{2 \alpha} ( \partial_{\mu} A_{\mu} )^a ( \partial_{\nu} A_{\nu} )^a + ( \partial_{\mu} \bar{\eta})^a ( {D}_{\mu} \eta )^a \;. \end{eqnarray} This implies that we have the following gauge field and ghost propagators: \begin{eqnarray} <A_{\mu}^a(x) A_{\nu}^b(y) > &=& \delta_{\mu \nu} \delta^{ab} \Delta (x-y) \nonumber \\ < \eta^a(x) \bar{\eta}^b(y) > &=& \delta^{ab} \Delta(x-y) \;. \end{eqnarray} \subsubsection{Background field method} As is detailed in appendix \ref{ap_BFM}, with the standard quantum-background splitting $A_{\mu}^a \rightarrow A_{\mu}^a + B_{\mu}^a$ we can define two gauge covariant derivatives ${\bf{D}}_{\mu}^{ac} = \partial_{\mu} \delta^{ac} + g f^{abc} B_{\mu}^b $ and ${\cal{D}}_{\mu}^{ac} = \partial_{\mu} \delta^{ac} + g f^{abc} ( B_{\mu}^b + A_{\mu}^b)$. 
Using them, and with a background covariant gauge-fixing function $G^{a} = ( {\bf{D}}^{\mu} A_{\mu})^a$, we find the split lagrangian to be written as \begin{eqnarray} \cal{L} &=& \frac{1}{4} F^a_{\mu \nu} F^a_{\mu \nu} + \frac{1}{2 \alpha} ( {\bf{D}}_{\mu} A_{\mu} )^a ( {\bf{D}}_{\nu} A_{\nu} )^a + ( {\bf{D}}_{\mu} \bar{\eta})^a ( {\cal{D}}_{\mu} \eta )^a \;, \label{YM_background_lagrangian} \end{eqnarray} with the field strength depending on both the quantum and background fields, $F_{\mu \nu}^a = F^a_{\mu \nu} ( A + B )$. As in the previous abelian examples, we will perform the calculations in Feynman gauge; however, there is one important difference. As can be seen from the lagrangian (\ref{YM_background_lagrangian}), we have an interaction term of the form \begin{eqnarray} g f^{abc} \left[ 2 ( \partial_{\mu} B_{\nu}^a) A_{\mu}^b A_{\nu}^c - B_{\mu}^a ( \partial_{\mu} A_{\nu}^b ) A_{\nu}^c \right] \;, \end{eqnarray} which implies that the one-loop background self-energy depends on the gauge parameter, as we have a loop with quantum gauge field propagators. Hence, although the term in the RG equation that takes care of the running of the gauge parameter, $\gamma_{\alpha} \partial / \partial \alpha$, will be shown to be of order $g^2$ (like in QED and SQED), in this case we cannot leave it out, as when acting on the one-loop contribution it will affect the verification of the two-loop RG equation. Then, our procedure will be as follows: first of all, the standard gauge fixing parameter $\alpha$ will be redefined here as $\frac{1}{\alpha} = 1 + \xi$, so that the usual Feynman gauge ($\alpha = 1$) will correspond to $\xi = 0$. We will obtain the one-loop contribution to the background self-energy in this gauge. Then, by means of functional methods, we will expand the complete effective action at one loop to second order in the background fields and retain only the part linear in $\xi$, which we term $\Gamma_{\xi}^{(1)}$. 
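Explicitly, since $1/(2 \alpha) = (1 + \xi)/2$, the gauge-fixing term of (\ref{YM_background_lagrangian}) splits as
\begin{eqnarray}
\frac{1}{2 \alpha} ( {\bf{D}}_{\mu} A_{\mu} )^a ( {\bf{D}}_{\nu} A_{\nu} )^a &=& \frac{1}{2} ( {\bf{D}}_{\mu} A_{\mu} )^a ( {\bf{D}}_{\nu} A_{\nu} )^a + \frac{\xi}{2} ( {\bf{D}}_{\mu} A_{\mu} )^a ( {\bf{D}}_{\nu} A_{\nu} )^a \;,
\end{eqnarray}
so the part of the effective action linear in $\xi$ is obtained from otherwise Feynman-gauge diagrams with a single insertion of the two-point vertex $\frac{\xi}{2} ( {\bf{D}}_{\mu} A_{\mu} )^a ( {\bf{D}}_{\nu} A_{\nu} )^a$.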
We follow this procedure because, in the renormalization group equation, we first take derivatives with respect to this parameter and only afterwards impose the Feynman gauge ($\xi = 0$). Hence, we have a background effective action of the form \begin{eqnarray} \Gamma_{eff}[B] &=& \frac{1}{2} \int d^4 x d^4 y B_{\mu}^a(x) \Gamma_{\mu \nu}^{BB \; ab} (x-y) B_{\nu}^b(y) + \ldots \nonumber \\ &=& \frac{1}{2} \int d^4 x d^4 y B_{\mu}^a(x) \left[ \delta^{ab} ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box )\delta^{(4)}(x-y) - \Pi_{\mu \nu \; \xi}^{BB \; ab} (x-y) \right] B_{\nu}^b(y) + \ldots \nonumber \\ &=& S_0[B] + \Gamma_{\xi}^{(1)} - \frac{1}{2} \int d^4 x d^4 y \; B_{\mu}^a(x) \Pi_{\mu \nu}^{BB \; ab} (x-y)B_{\nu}^b(y) + \ldots \;, \end{eqnarray} where $S_0$ is the tree-level background two-point function. \begin{figure}[ht] \centerline{\epsfbox{YM_Feynman_rules.eps}} \caption{Relevant interaction vertices of the Yang-Mills quantum-background split action. Thick lines represent external background fields, thin lines are quantum gauge propagators and dashed lines correspond to ghost propagators.} \label{YM_Feynman_rules} \end{figure} In figure \ref{YM_Feynman_rules}, the interaction vertices derived from the quantum-background split action that are relevant to this work are shown. We have the following corresponding Feynman rules (evaluated in Feynman gauge) \begin{eqnarray} \textrm{(v1)} &=& g f^{abc} \left[ \delta_{\mu \nu} ( \partial_{\rho}^{A_{\mu}^a} - \partial_{\rho}^{A_{\nu}^b}) + \delta_{\rho \mu} ( \partial_{\nu}^{A_{\rho}^c} - \partial_{\nu}^{A_{\mu}^a}) + \delta_{\nu \rho} ( \partial_{\mu}^{A_{\nu}^b} - \partial_{\mu}^{A_{\rho}^c}) \right] \nonumber \\ \textrm{(v2)} &=& - g^2 \left[ f^{abx} f^{xcd} ( \delta_{\mu \rho} \delta_{\nu \sigma} - \delta_{\mu \sigma} \delta_{\nu \rho}) + f^{acx} f^{xdb} ( \delta_{\mu \sigma} \delta_{\rho \nu} - \delta_{\mu \nu} \delta_{\rho \sigma} ) \right. \nonumber \\ & & \left. 
+ f^{adx} f^{xbc} ( \delta_{\mu \nu} \delta_{\rho \sigma} - \delta_{\mu \rho} \delta_{\nu \sigma} ) \right] \nonumber \\ \textrm{(v3)} &=& g f^{abc} \left[ - 2 \delta_{\mu \rho} \partial_{\nu}^{B_{\mu}^a} + \delta_{\nu \rho} ( \partial_{\mu}^{A_{\nu}^b} - \partial_{\mu}^{A_{\rho}^c} ) + 2 \delta_{\mu \nu} \partial_{\rho}^{B_{\mu}^a} \right] \nonumber \\ \textrm{(v4)} &=& - g f^{abc} \partial_{\mu}^{\bar{\eta}^c} \nonumber \\ \textrm{(v5)} &=& - g f^{abc} ( \partial_{\mu}^{\bar{\eta}^c} - \partial_{\mu}^{\eta^b} ) \nonumber \\ \textrm{(v6)} &=& - g^2 f^{adx} f^{xbc} \delta_{\mu \nu} \end{eqnarray} \subsection{One-loop level} Although the background self-energy is all that we need to find the one-loop beta function, we will also obtain the linear dependence on the gauge parameter $\xi$ of the effective action calculated in a generic gauge and expanded to second order in background fields. We need this contribution in order to take care of the running of the gauge parameter in the RG equation. \subsubsection{Correction to the $B_{\mu}^a$ propagator} \begin{figure} \centerline{\epsfbox{YM1loop.eps}} \caption{One-loop YM diagrams.} \label{YM_1loop} \end{figure} This is the sum of two different diagrams: one with a loop of quantum gauge fields, and another of ghost fields, as can be seen in figure \ref{YM_1loop}. In these diagrams we have to apply the CDR procedure: first we write the expressions in terms of the basic functions defined in (\ref{basic_CDR_fun}), and after that we replace them with their renormalized values. Here and in the rest of the diagrams of the Yang-Mills theory, $D_{\mu}^{x,y}$ denotes a space-time derivative acting on one external field. Applying the Leibniz rule, $D_{\mu}^{x,y}$ becomes minus a derivative acting on the propagators. 
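Throughout the diagrams below, the adjoint color contractions all reduce to $f^{acd} f^{bcd} = C_A \delta^{ab}$ (and relabelings thereof), with $C_A = N$ for $SU(N)$. This standard identity can be checked numerically; the following sketch (an illustration, not part of the original computation) verifies it for $SU(2)$ and $SU(3)$ with the usual structure constants:

```python
import numpy as np

def fill_antisym(dim, entries):
    """Build a totally antisymmetric rank-3 tensor f^{abc} from its
    independent components (the six index permutations carry signs)."""
    f = np.zeros((dim, dim, dim))
    for (a, b, c), v in entries.items():
        for (i, j, k), s in [((a, b, c), 1), ((b, c, a), 1), ((c, a, b), 1),
                             ((b, a, c), -1), ((a, c, b), -1), ((c, b, a), -1)]:
            f[i, j, k] = s * v
    return f

# SU(2): f^{abc} = epsilon^{abc}, so C_A = 2
f2 = fill_antisym(3, {(0, 1, 2): 1.0})
# SU(3) in the Gell-Mann basis (indices 0..7, i.e. f_{123} = 1, ...), C_A = 3
f3 = fill_antisym(8, {(0, 1, 2): 1.0, (0, 3, 6): 0.5, (0, 4, 5): -0.5,
                      (1, 3, 5): 0.5, (1, 4, 6): 0.5, (2, 3, 4): 0.5,
                      (2, 5, 6): -0.5, (3, 4, 7): np.sqrt(3) / 2,
                      (5, 6, 7): np.sqrt(3) / 2})

for f, C_A in [(f2, 2.0), (f3, 3.0)]:
    # adjoint Casimir: f^{acd} f^{bcd} = C_A delta^{ab}
    assert np.allclose(np.einsum('acd,bcd->ab', f, f), C_A * np.eye(f.shape[0]))
```

The same contraction fixes the overall $C_A$ and $C_A^2$ factors quoted for every one- and two-loop diagram below.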
The bare expression for both contributions is \begin{itemize} \item {\bf Gauge loop} \end{itemize} \begin{eqnarray} & & \frac{ g^2 f^{acd} f^{bdc}}{2} \Delta_{xy} \left[ - 2 \delta_{\mu \sigma} D^x_{\rho} + \delta_{\rho \sigma} (\stackrel{\leftarrow}{\partial_{\mu}^{x}} - \partial_{\mu}^x) + 2 \delta_{\mu \rho} D_{\sigma}^x \right] \nonumber \\ & & \times \left[ - 2 \delta_{\nu \rho} D_{\sigma}^y + \delta_{\rho \sigma} (\partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y}) + 2 \delta_{\nu \sigma} D^y_{\rho} \right] \Delta_{xy} \nonumber \\&=& \frac{g^2 C_A \delta^{ab}}{2} \left[ 8 \partial_{\mu} \partial_{\nu} \Delta^2 - 8 \delta_{\mu \nu} \Box \Delta^2 + 8 \partial_{\mu}( \Delta \partial_{\nu} \Delta ) - 16 \Delta \partial_{\mu} \partial_{\nu} \Delta \right] \;. \nonumber \\ \end{eqnarray} \begin{itemize} \item {\bf Ghost loop} \end{itemize} \begin{eqnarray} - g^2 f^{abc} f^{bcd} \Delta_{xy} ( \stackrel{\leftarrow}{\partial_{\mu}^{x}} - \partial_{\mu}^x) (\partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y}) \Delta_{xy} = - g^2 C_A \delta^{ab} \left[ 2 \partial_{\mu} ( \Delta \partial_{\nu} \Delta ) - 4 \Delta \partial_{\mu} \partial_{\nu} \Delta \right] \;. \nonumber \\ \end{eqnarray} Adding the two previous results we find the total non-renormalized contribution to be \begin{eqnarray} \Pi_{\mu \nu\;(1)}^{BB\;ab} (x) &=& g^2 C_A \delta^{ab} \left[ 4 \partial_{\mu} \partial_{\nu} \Delta^2 - 4 \delta_{\mu \nu} \Box \Delta^2 + 2 \partial_{\mu} ( \Delta \partial_{\nu} \Delta ) - 4 \Delta \partial_{\mu} \partial_{\nu} \Delta \right] \;. \nonumber \\ \end{eqnarray} It is worth mentioning here again that we are allowed to take this step (adding up the expressions even before renormalizing) because we are using CDR, as we pointed out previously. With CDR the basic functions are always renormalized with the same expression, regardless of their origin. 
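The renormalized values invoked in the next step rest on the basic DiffR identity $\frac{1}{x^4} = -\frac{1}{4} \Box \frac{\ln x^2 M^2}{x^2}$, valid away from $x = 0$ (the scale $M$ only affects the local terms at the origin), together with $\Box \ln x^2 M^2 = \frac{4}{x^2}$. Both can be checked with the radial form of the four-dimensional Laplacian, $\Box f(r) = f'' + \frac{3}{r} f'$; a short sympy sketch (an illustrative pointwise check, not part of the original derivation):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

def box_4d(f):
    """Radial part of the four-dimensional Laplacian on a function of r = |x|."""
    return sp.diff(f, r, 2) + 3 * sp.diff(f, r) / r

# DiffR identity behind Delta^2_R: Box[ln(x^2 M^2)/x^2] = -4/x^4 for x != 0
assert sp.simplify(box_4d(sp.log(r**2 * M**2) / r**2) + 4 / r**4) == 0

# Companion identity (used later in the momentum-space step): Box ln(x^2 M^2) = 4/x^2
assert sp.simplify(box_4d(sp.log(r**2 * M**2)) - 4 / r**2) == 0
```

The delta-function pieces at the origin, which carry the scheme dependence, are of course not captured by this pointwise check.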
In a standard DiffR procedure, we would first have to renormalize each diagram separately, relate the different scales that appear via the Ward identities, and only then add up the results. Replacing the CDR values for $\Delta^2$ and $\Delta \partial_{\mu} \partial_{\nu} \Delta$, the renormalized one-loop contribution to the $B_{\mu}^a$ propagator is obtained as \begin{eqnarray} \left. \Pi_{\mu \nu \;(1)}^{BB\;ab} (x) \right|_R &=& g^2 C_A \delta^{ab} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \left[ \frac{11}{3} \Delta^2_R (x) - \frac{1}{72 \pi^2} \delta (x) \right] \nonumber \\ &=& g^2 C_A \delta^{ab} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box) \left[ - \frac{11}{48 \pi^2 (4 \pi^2)} \Box \frac{\ln x^2 M^2}{x^2} - \frac{1}{72 \pi^2} \delta (x) \right] \;. \nonumber \\ \label{1_loop} \end{eqnarray} As a check, the result found here is automatically transverse, fulfilling the corresponding Ward identity. \subsubsection{Effective action in a generic gauge} As we have discussed previously, in order to take care of the running of the gauge parameter in the RG equation, we will obtain the linear dependence on $\xi$ of the one-loop background effective action expanded to second order in the background fields. To perform this calculation we take a functional approach: it is well known that, to obtain the exact one-loop effective action, we only have to consider the part of the lagrangian quadratic in the quantum $A_{\mu}^a$ fields \cite{Jackiw:1974cv,Peskin:1995ev}. 
This part is \begin{eqnarray} {\cal{L}}^{(2)}_{gauge} &=& g f^{abc} B_{\mu \nu}^a A_{\mu}^b A_{\nu}^c + \frac{1}{2} ({\bf{D}}_{\mu} A_{\nu})^a ({\bf{D}}_{\mu} A_{\nu})^a + \frac{\xi}{2} ({\bf{D}}_{\mu} A_{\mu})^a ({\bf{D}}_{\nu} A_{\nu})^a \nonumber \\ &=& - \frac{1}{2} A_{\mu}^a \left[ \delta_{\mu \nu} \Box^{ab} - 2 g f^{cab} B_{\mu \nu}^c + \xi ({\bf{D}}_{\mu} {\bf{D}}_{\nu} )^{ab} \right] A_{\nu}^b \;, \end{eqnarray} where ${\Box}^{ab} = ( {\bf{D}}^{\mu} {\bf{D}}_{\mu})^{ab}$ and $B_{\mu \nu}^a = \partial_{\mu} B_{\nu}^a - \partial_{\nu} B_{\mu}^a + g f^{abc} B_{\mu}^b B_{\nu}^c$. Then, the generating functional for connected Green functions can be put as \begin{eqnarray} W &=& - \frac{1}{2} tr \ln \left[ \delta_{\mu \nu} \Box^{ab} - 2 g f^{cab} B_{\mu \nu}^c + \xi ({\bf{D}}_{\mu} {\bf{D}}_{\nu})^{ab} \right] \;. \end{eqnarray} At first order in $\xi$ and second order in $B$ fields, this is expressed as \begin{eqnarray} W &=& W_0 + \xi C_A g^2 tr \left[ \frac{1}{2} \Delta B_{\mu \nu}^a \Delta B_{\mu \nu}^a - 2 \Delta B_{\mu \nu}^a \Delta B_{\nu \lambda}^a \Delta \partial_{\lambda} \partial_{\mu} \right] + {\cal{O}}(\xi^2, B^3)\;, \label{effective_1loop} \end{eqnarray} where as usual $\Box = \partial^{\mu} \partial_{\mu}$ and $\Delta = - \Box^{-1}$. We can write the renormalized expression of the first term of (\ref{effective_1loop}) as \begin{eqnarray} (A) &=& \frac{1}{2} \int d^4 x d^4 y \; B_{\mu \nu}^a (x) B_{\mu \nu}^a (y) \Delta^2 |_R \;, \nonumber \\ \end{eqnarray} whereas the second one is of the following form \begin{eqnarray} (B) &=& - 2 \int d^4 x d^4 y d^4 u \; ( \partial_{\lambda}^u \partial_{\mu}^u \Delta_{ux}) B_{\mu \nu}^a (x) B_{\nu \lambda}^a (y) \Delta_{xy} \Delta_{yu} \nonumber \\ &=& - 2 \int d^4 x d^4 y \; B_{\mu \nu}^a (x) B_{\nu \lambda}^a(y) \Delta_{xy} \partial_{\lambda}^x \partial_{\mu}^x \int d^4 u \; \Delta_{xu} \Delta_{yu} \;. 
\end{eqnarray} In order to evaluate the latter expression, we must apply CDR in momentum space \begin{eqnarray} \int d^4 u \; \Delta_{xu} \Delta_{uy} &=& \frac{1}{(4 \pi^2)^2} \int d^4 u \; \frac{1}{(x-u)^2} \frac{1}{(u-y)^2} \nonumber \\ &=& \frac{1}{(4 \pi^2)^4} \int d^4 u d^4 p d^4 q \; \frac{1}{p^2 q^2} e^{-i p(x-u)} e^{-iq(u-y)} \nonumber \\ &=& \frac{1}{(4 \pi^2)^2} \int d^4 p \; \frac{1}{p^4} e^{-i p(x-y)} \nonumber \\ &\stackrel{R}{\rightarrow}& - \frac{1}{4 (4 \pi^2)^2} \int d^4 p \; \Box^p \frac{\ln p^2/m^2}{p^2} e^{-ip(x-y)} \nonumber \\ &=& - \frac{1}{4(4 \pi^2)} \ln (x-y)^2 m^2 \nonumber \\ &\equiv& - \bar{\Delta} (x-y) \;. \label{CDR_momentum_space} \end{eqnarray} With this, we can write \begin{eqnarray} (B) &=& 2 \int d^4 x d^4 y \; B_{\mu \nu}^a (x) B_{\nu \lambda}^a (y) \left( \Delta_{xy} \partial^x_{\lambda} \partial^x_{\mu} \bar{\Delta}_{xy} \right)|_R \;. \end{eqnarray} Hence, remembering the CDR renormalization of $\Delta \partial_{\lambda} \partial_{\mu} \bar{\Delta}$ (\ref{CDR_rules_other_gauge}), we can obtain the renormalized expression for $(B)$ as \begin{eqnarray} (B) &=& - \frac{1}{2} \int d^4 x d^4 y \; B_{\mu \nu}^a (x) B_{\mu \nu}^a (y) \Delta^2 |_R - \frac{1}{16 \pi^2} \int d^4 x d^4 y B_{\mu \nu}^a (x) B_{\nu \lambda}^a (y) \partial^x_{\mu} \partial^x_{\lambda} \Delta_{xy} \;. 
\nonumber \\ \end{eqnarray} Adding up the two contributions we have \begin{eqnarray} (A)+(B) &=& - \frac{\xi C_A g^2}{4(4 \pi^2)} \int d^4 x d^4 y \; B_{\mu \nu}^a(x) B_{\nu \lambda}^a (y) \partial^x_{\mu} \partial^x_{\lambda} \Delta_{xy} \;, \end{eqnarray} which can be written in a more familiar form at explicit second order in the $B$ fields as \begin{eqnarray} \int d^4 x d^4 y B_{\mu \nu}^a (x) B_{\nu \lambda}^a (y) \partial^x_{\mu} \partial^x_{\lambda} \Delta_{xy} &=& \int d^4 x d^4 y (\partial_{\mu} B_{\nu}^a(x) - \partial_{\nu} B_{\mu}^a(x) )(\partial_{\nu} B_{\lambda}^a (y) - \partial_{\lambda} B_{\nu}^a (y) ) \nonumber \\ & & \times \partial^x_{\mu} \partial^x_{\lambda} \Delta_{xy} +{\cal{O}}(B^3) \nonumber \\ &=& - \int d^4 x d^4 y \; B_{\mu}^a (x) B_{\nu}^a (y) ( \partial^x_{\mu} \partial^x_{\nu} - \delta_{\mu \nu} \Box) \Box \Delta_{xy} + {\cal{O}}(B^3) \;. \nonumber \\ \end{eqnarray} With this result we obtain the previously defined $\Gamma_{\xi}^{(1)}$ to be \begin{eqnarray} \Gamma_{\xi}^{(1)} = - \frac{\xi C_A g^2}{4(4 \pi^2)} \int d^4 x d^4 y \; B_{\mu}^a (x) B_{\nu}^a (y) (\partial^x_{\mu} \partial^x_{\nu} - \delta_{\mu \nu} \Box) ( \Box \Delta(x-y) ) \;. \label{gauge_fix_ren} \end{eqnarray} \subsection{Two-loop level} We now turn to the two-loop contribution to the background field self-energy. The relevant diagrams are those of figures \ref{2loop_1} and \ref{2loop_2} ((a) to (k)). Diagrams (a) to (h) have nested divergences, whereas diagrams (i), (j) and (k) have overlapping divergences. 
\begin{figure}[t] \centerline{\epsfbox{YM2loop_1.eps}} \caption{Two-loop YM diagrams (a)-(e).} \label{2loop_1} \end{figure} \begin{figure}[t] \centerline{\epsfbox{YM2loop_2.eps}} \caption{Two-loop YM diagrams (f)-(k).} \label{2loop_2} \end{figure} \subsubsection{Diagram (a)} This diagram has the following bare expression \begin{eqnarray} \Pi_{\mu \nu \; (2a)}^{BB\;ab} (x-y) &=& - 2 g^4 f^{aec} f^{bcd} f^{gdf} f^{gfe} \int d^4 u d^4 v \; \Delta_{xy} ( \stackrel{\leftarrow}{\partial_{\mu}^x} - \partial_{\mu}^x) ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y}) \Delta_{yv} \nonumber \\ & & \times ( \partial_{\lambda}^v \Delta_{uv}) \Delta_{uv} ( \partial_{\lambda}^u \Delta_{xu}) \;, \nonumber \\ \end{eqnarray} which can be rearranged in terms of the integral $I^1$ \begin{eqnarray} \Pi_{\mu \nu \;(2a)}^{BB\;ab} (x) &=& - g^4 C_A^2 \delta^{ab} \left[ 4 \partial_{\nu} ( \Delta \partial_{\mu} I^1 ) - \partial_{\mu} \partial_{\nu} ( \Delta I^1 ) - 4 \Delta \partial_{\mu} \partial_{\nu} I^1 \right] \;. \end{eqnarray} In order to renormalize, we have to replace these expressions with their renormalized values, arriving at \begin{eqnarray} \left. \Pi_{\mu \nu\;(2a)}^{BB\;ab}(x)\right|_R &=& \frac{g^4 C_A^2 \delta^{ab}}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ - \frac{1}{3} \ln^2 x^2 M^2 - \frac{8}{9} \ln x^2 M^2}{x^2} \right. \nonumber \\ & & \left. 
+ \delta_{\mu \nu} \Box \Box \frac{\frac{1}{3} \ln^2 x^2 M^2 + \frac{11}{9} \ln x^2 M^2}{x^2} \right] + \ldots \nonumber \\ \end{eqnarray} \subsubsection{Diagram (b)} This diagram is of the form \begin{eqnarray} \Pi_{\mu \nu \;(2b)}^{BB\;ab} (x-y) &=& g^2 f^{ace} f^{bed} \int d^4 u d^4 v \; \Delta_{xu} \left[ - 2 \delta_{\mu \lambda} D_{\rho}^x + \delta_{\lambda \rho} ( \stackrel{\leftarrow}{\partial_{\mu}^x} - \partial_{\mu}^x) + 2 \delta_{\mu \rho} D_{\lambda}^x \right] \nonumber \\ & & \times \Pi^{AA\;cd}_{\rho \sigma\;(1)} (u-v) \Delta_{vy} \left[ - 2 \delta_{\nu \sigma} D_{\lambda}^y + \delta_{\sigma \lambda} ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y }) + 2 \delta_{\nu \lambda} D_{\sigma}^y \right] \Delta_{xy} \;, \nonumber \label{YM_2loop_diagb_bare} \end{eqnarray} where $\Pi^{AA\;ab}_{\mu \nu\;(1)} (x-y) = \delta^{ab} \Pi_{\mu \nu\;(1)}^{AA}(x-y)$ is the one-loop correction to the quantum gauge field propagator. Its bare and renormalized expressions are found in section \ref{ap_Gauge_YM} of appendix \ref{ap_Gauge}, where it is used to obtain the leading term of the expansion of the function that takes care of the running of the gauge parameter in the RG equation. It has to be noted again that, in contrast with dimensional regularization, the renormalized one-loop expression for the quantum gauge field propagator (\ref{YM1loop_quantum}) cannot be used in the two-loop diagram. The reason is that the indices of the one-loop insertion will be contracted in a second step, and one of the rules of CDR is to perform all the index contractions first, before renormalizing. Hence, only the bare one-loop contribution (\ref{YM1loop_quantum_bare_prop}) can be inserted. 
Therefore, expanding (\ref{YM_2loop_diagb_bare}) we find \begin{eqnarray} \Pi_{\mu \nu \; (2b)}^{BB\;ab}(x-y) = - g^2 C_A \delta^{ab} \int & d^4 u d^4 v& - 4 \partial_{\mu}^x \partial_{\rho}^x \left[ \Delta_{xu} \Pi^{AA}_{\rho \nu\;(1)} (u-v) \Delta_{vy} \Delta_{xy} \right] \nonumber \\ & & + 4 \delta_{\mu \nu} \partial_{\rho}^x \partial_{\sigma}^x \left[ \Delta_{xu} \Pi^{AA}_{\rho \sigma\;(1)} (u-v) \Delta_{vy} \Delta_{xy} \right] \nonumber \\ & & + \Delta_{xu} ( \stackrel{\leftarrow}{\partial_{\mu}^x} - \partial_{\mu}^x )\Pi^{AA}_{\rho \rho\;(1)} (u-v) \Delta_{vy} ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y} ) \Delta_{xy} \nonumber \\ & & + 4 \Box \left[ \Delta_{xu} \Pi^{AA}_{\mu \nu \;(1)} (u-v) \Delta_{vy} \Delta_{xy} \right] \nonumber \\ & & - 4 \partial_{\nu}^x \partial_{\sigma}^x \left[ \Delta_{xu} \Pi^{AA}_{\mu \sigma\;(1)} (u-v) \Delta_{vy} \Delta_{xy} \right]. \nonumber \end{eqnarray} If we use the bare result (\ref{YM1loop_quantum_bare_prop}) for $\Pi^{AA}_{\mu \nu\;(1)}$, straightforward operations lead us to write this in terms of the previously defined $I^0$ and a new integral expression of the form \begin{eqnarray} I_{\mu \nu}^0 (x-y)= \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} ( \Delta_{uv} \partial_{\mu}^u \partial_{\nu}^u \Delta_{uv} ) \;. \end{eqnarray} So, we have \begin{eqnarray} \left.\Pi_{\mu \nu \;(2b)}^{BB\;ab} (x) \right|_R &=& g^4 C_A^2 \delta^{b a} \left[ - 24 \partial_{\mu} \partial_{\sigma} ( \Delta \partial_{\nu} \partial_{\sigma} I^0) + 11 \partial_{\mu} \partial_{\nu} ( \Delta \Box I^0) + 32 \partial_{\mu} \partial_{\sigma} ( \Delta I^0_{\nu \sigma}) \right. \nonumber \\ & & + 12 \delta_{\mu \nu} \partial_{\sigma} \partial_{\rho} ( \Delta \partial_{\rho} \partial_{\sigma} I^0) - 16 \delta_{\mu \nu} \Box ( \Delta \Box I^0) - 16 \delta_{\mu \nu} \partial_{\rho} \partial_{\sigma} ( \Delta I^0_{\rho \sigma} ) \nonumber \\ & & + \left. 
20 \partial_{\mu} ( \Delta \partial_{\nu} \Box I^0) - 20 \Delta \partial_{\mu} \partial_{\nu} \Box I^0 + 12 \Box ( \Delta \partial_{\mu} \partial_{\nu} I^0) - 16 \Box (\Delta I_{\mu \nu}^0) \right]_R \;. \nonumber \\ \end{eqnarray} It is clear that, as $\Box I^0 = -I^1$, with the renormalized values found for $I^1$ we can obtain all the expressions made up with $\Box I^0$. In appendix \ref{ap_UV_IR} we study the renormalization of the rest of the expressions made up with $I^0$ and $I^0_{\mu \nu}$ that appear in this diagram. It is found there for $\Delta \partial_{\mu} \partial_{\nu} I^0$ and $\Delta I_{\mu \nu}^0$ the following renormalized values \begin{eqnarray} \left[ \Delta \partial_{\mu} \partial_{\nu} I^0 \right]_R (x) &=& \frac{1}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \frac{ \ln x^2 M^2}{x^2} + \delta_{\mu \nu} \Box \frac{ \frac{1}{4} \ln^2 x^2 M^2 + \frac{1}{4} \ln x^2 M^2}{x^2} \right] +~\ldots \nonumber \\ \left[ \Delta I_{\mu \nu}^0 \right]_R (x)&=& \frac{1}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \frac{ \frac{1}{3} \ln x^2 M^2}{x^2} + \delta_{\mu \nu} \Box \frac{ - \frac{1}{6} \ln x^2 M^2}{x^2} \right] + \ldots \nonumber \\ \end{eqnarray} With this, it is easy to arrive at the following renormalized expression \begin{eqnarray} \left.\Pi_{\mu \nu\;(2b)}^{BB\;ab}(x) \right|_R &=& \frac{g^4 C_A^2 \delta^{a b}}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{- \frac{25}{3} \ln^2 x^2 M^2 - \frac{86}{9} \ln x^2 M^2}{x^2} \right. \nonumber \\ &+& \left. 
\delta_{\mu \nu} \Box \Box \frac{ \frac{25}{3} \ln^2 x^2 M^2 + \frac{71}{9} \ln x^2 M^2}{x^2} \right] + \ldots \nonumber \\ \end{eqnarray} \subsubsection{Diagram (c)} This diagram is easily renormalized as its expression is \begin{eqnarray} \Pi_{\mu \nu\;(2c)}^{BB\;ab} (x) &=& - g^4 f^{acx} f^{xed} f^{bdy} f^{yec} \delta_{\mu \nu} \Delta^3 \nonumber \\ &=& - \frac{1}{2} g^4 C_A^2 \delta^{ab} \delta_{\mu \nu} \Delta^3 \nonumber \\ &\stackrel{R}{\rightarrow}& \frac{g^4 C_A^2 \delta^{a b}}{32 (4 \pi^2)^3} \delta_{\mu \nu} \Box \Box \frac{ \frac{1}{2} \ln x^2 M^2}{x^2} \;. \end{eqnarray} \subsubsection{Diagram (d)} This diagram is similar to the previous one, and we find \begin{eqnarray} \left. \Pi_{\mu \nu\;(2d)}^{BB\;ab} (x) \right|_R &=& \frac{9}{2} g^4 C_A^2 \delta^{ab} \delta_{\mu \nu} \Delta^3_R \nonumber \\ &=& \frac{g^4 C_A^2 \delta^{a b}}{32 (4 \pi^2)^3} \delta_{\mu \nu} \Box \Box \frac{ -\frac{9}{2} \ln x^2 M^2}{x^2} \;. \end{eqnarray} \subsubsection{Diagram (e)} The bare expression of this diagram is \begin{eqnarray} \Pi_{\mu \nu\;(2e)}^{BB\;ab}(x) &=& - \frac{1}{4} g^4 f^{acd} f^{bge} \int d^4 u \; \Delta_{xu} [ - 2 \delta_{\mu \sigma} D_{\rho}^x + \delta_{\rho \sigma} ( \stackrel{\leftarrow}{\partial_{\mu}^x} - \partial_{\mu}^x ) + 2 \delta_{\mu \rho} D^x_{\sigma} ] \nonumber \\ & &\times \Delta_{xu} [ f^{cex} f^{xgd} ( \delta_{\rho \lambda} \delta_{\varepsilon \sigma } - \delta_{\rho \sigma} \delta_{\varepsilon \lambda} ) + f^{cgx} f^{xde} ( \delta_{\rho \sigma} \delta_{\varepsilon \lambda} - \delta_{\rho \varepsilon} \delta_{\sigma \lambda} ) \nonumber \\ & & + f^{cdx} f^{xeg} (\delta_{\rho \varepsilon} \delta_{\lambda \sigma} - \delta_{\rho \lambda} \delta_{\varepsilon \sigma} ) ] \Delta_{yu} [ - 2 \delta_{\nu \varepsilon} D_{\lambda}^y + \delta_{\varepsilon \lambda} ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y}) \nonumber \\ & & + 2 \delta_{\nu \lambda} D_{\varepsilon}^y ] \Delta_{yu} \;, \end{eqnarray} which, making all the index 
contractions can be written as \begin{eqnarray} \Pi_{\mu \nu \;(2e)}^{BB\;ab}(x-y) &=& - 6 g^4 C_A^2 \delta^{ab} ( \partial_{\mu}^x \partial_{\nu}^x - \delta_{\mu \nu} \Box ) \int d^4 u \; \Delta^2_{xu} \Delta^2_{yu} \;. \nonumber \end{eqnarray} The renormalized expression of the integral is easily obtained as \begin{eqnarray} \int d^4 u \; \frac{1}{(x-u)^4} \frac{1}{u^4} \rightarrow - \frac{\pi^2}{4} \Box \frac{\ln^2 x^2 M^2}{x^2} \;, \nonumber \end{eqnarray} so that \begin{eqnarray} \left. \Pi_{\mu \nu\;(2e)}^{BB\;ab}(x) \right|_R &=& \frac{3}{8 (4 \pi^2)^3} g^4 C_A^2 \delta^{ab} (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \Box \frac{ \ln^2 x^2 M^2}{x^2} + \ldots \end{eqnarray} \subsubsection{Diagram (f)} This diagram is of the following form \begin{eqnarray} \Pi_{\mu \nu \;(2f)}^{BB\;ab}(x-y) &=& 2 g^4 f^{acx} f^{xdf} f^{dce} f^{bef} \int d^4 u \; \Delta^2_{xu} ( \partial_{\mu}^u \Delta_{uy} )( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y} ) \Delta_{xy} \nonumber \\ & &+ 2 g^4 f^{afc} f^{ecd} f^{bfx} f^{xed} \int d^4 u \; \Delta_{xu} ( \stackrel{\leftarrow}{\partial_{\mu}^x} - \partial_{\mu}^x ) \Delta_{xy} ( \partial_{\nu}^u \Delta_{yu} ) \Delta_{uy} \;. \nonumber \end{eqnarray} Operating, this can be written in terms of $I^1$, which allows us to write \begin{eqnarray} \left. \Pi_{\mu \nu \;(2f)}^{BB\;ab} (x) \right|_R &=& - g^4 C_A^2 \delta^{ab} \left[ - 2 \partial_{\mu} ( \Delta \partial_{\nu} I^1 ) + 4 \Delta \partial_{\mu} \partial_{\nu} I^1 \right]_R \nonumber \\ &=& \frac{g^4 C_A^2 \delta^{a b}}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{3} \ln^2 x^2 M^2 - \frac{1}{9} \ln x^2 M^2}{x^2} \right. \nonumber \\ & &+ \left. 
\delta_{\mu \nu} \Box \Box \frac{ - \frac{1}{3} \ln^2 x^2 M^2 - \frac{11}{9} \ln x^2 M^2}{x^2} \right] + \ldots \nonumber \\ \end{eqnarray} \subsubsection{Diagram (g)} Contracting the indices of the bare expression \begin{eqnarray} \Pi_{\mu \nu\;(2g)}^{BB\;ab} (x-y) &=& - 2 g^4 f^{acx} f^{xfd} f^{ecd} f^{bfe} \int d^4 u \; \delta_{\mu \sigma} \Delta_{xu} ( \partial_{\lambda}^u \Delta_{xu} ) \Delta_{uy} \left[ - 2 \delta_{\nu \lambda} D_{\sigma}^y \right. \nonumber \\ & &+ \left. \delta_{\lambda \sigma} ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y} ) + 2 \delta_{\nu \sigma} D_{\lambda}^y \right] \Delta_{xy} \end{eqnarray} it is easy to write this diagram in terms of $I^1$, which implies that the renormalized form is \begin{eqnarray} \left. \Pi_{\mu \nu\;(2g)}^{BB\;ab}(x) \right|_R &=& - g^4 C_A^2 \delta^{ab} \left[ \frac{3}{2} \partial_{\mu} ( \Delta \partial_{\nu} I^1) - \Delta \partial_{\mu} \partial_{\nu} I^1 - \delta_{\mu \nu} \partial_{\lambda} ( \Delta \partial_{\lambda} I^1) \right]_R \nonumber \\ &=& \frac{g^4 C_A^2 \delta^{ab}}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{5}{12} \ln^2 x^2 M^2 + \frac{19}{36} \ln x^2 M^2}{x^2} \right. \nonumber \\ & &+ \left. \delta_{\mu \nu} \Box \Box \frac{ - \frac{5}{12} \ln^2 x^2 M^2 - \frac{7}{36} \ln x^2 M^2}{x^2} \right] + \ldots \nonumber \\ \end{eqnarray} \subsubsection{Diagram (h)} From the bare expression \begin{eqnarray} \Pi_{\mu \nu\;(2h)}^{BB\;ab}(x-y) = &- g^4 & \int d^4 u \; \Delta^{(c)}_{xu} \left[ f^{acx} f^{xdf} ( \delta_{\mu \sigma} \delta_{\rho \varepsilon} - \delta_{\mu \varepsilon} \delta_{\rho \sigma} ) + f^{adx} f^{xfc} \right. \nonumber \\ &\times& \left. 
( \delta_{\mu \varepsilon} \delta_{\sigma \rho} - \delta_{\mu \rho} \delta_{\varepsilon \sigma} ) + f^{afx} f^{xcd} ( \delta_{\mu \rho} \delta_{\sigma \varepsilon} - \delta_{\mu \sigma} \delta_{\rho \varepsilon} ) \right] \nonumber \\ &\times& \Delta^{(d)}_{xu} f^{edc} \left[ \delta_{\lambda \sigma} ( \stackrel{e}{\partial_{\rho}^{u}}- \stackrel{d}{\partial_{\rho}^u} )+ \delta_{\lambda \rho} ( \stackrel{c}{\partial_{\sigma}^u} - \stackrel{e}{\partial_{\sigma}^u} ) + \delta_{\sigma \rho} ( \stackrel{d}{\partial_{\lambda}^u} - \stackrel{c}{\partial_{\lambda}^u} ) \right] \nonumber \\ &\times& \Delta_{uy}^{(e)} f^{bfe} \left[ - 2 \delta_{\nu \lambda} D_{\varepsilon}^y + \delta_{\varepsilon \lambda} ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y}) + 2 \delta_{\nu \varepsilon} D_{\lambda}^y \right] \Delta_{xy} \;, \nonumber \end{eqnarray} where $\Delta^{(i)} \stackrel{i}{\partial_{\mu}} \Delta^{(j)} = ( \partial_{\mu} \Delta^{(i)} ) \Delta^{(j)}$, evaluating all the index contractions we can also express this contribution in terms of $I^1$. Hence, the renormalized result is \begin{eqnarray} \left. \Pi_{\mu \nu\;(2h)}^{BB\;ab} (x) \right|_R &=& - g^4 C_A^2 \delta^{ab} \left[ \frac{45}{2} \partial_{\mu} ( \Delta \partial_{\nu} I^1 ) - 27 \Delta \partial_{\mu} \partial_{\nu} I^1 - 9 \delta_{\mu \nu} \partial_{\lambda} ( \Delta \partial_{\lambda} I^1 ) \right]_R \nonumber \\ &=& \frac{g^4 C_A^2 \delta^{a b}}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{9}{4} \ln^2 x^2 M^2 + \frac{21}{4} \ln x^2 M^2}{x^2} \right. \nonumber \\ & &+ \left. \delta_{\mu \nu} \Box \Box \frac{ - \frac{9}{4} \ln^2 x^2 M^2 + \frac{15}{4} \ln x^2 M^2}{x^2} \right] + \ldots \end{eqnarray} \subsubsection{Diagram (i)} This diagram and the two following ones have overlapping divergences. In order to renormalize them, we will make use of the list of integrals obtained in section \ref{overlap_integrals}. 
The bare expression for this diagram is \begin{eqnarray} \Pi_{\mu \nu\; (2i)}^{BB\;ab}(x-y) &=& - g^4 f^{afc} f^{gcd} f^{bde} f^{gef} \int d^4 u d^4 v \; \Delta_{xu} ( \stackrel{\leftarrow}{\partial_{\mu}^x} - \partial_{\mu}^x ) ( \partial_{\lambda}^u \Delta_{uy}) \nonumber \\ &\times& ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y}) \Delta_{yv} ( \partial_{\lambda}^v \Delta_{vx} ) \Delta_{uv} \;. \end{eqnarray} Expanding this expression, we can write it in terms of the $H$ integrals defined in (\ref{H_definition}). The bare contribution is then found to be \begin{eqnarray} \Pi_{\mu \nu\; (2i)}^{BB\;ab}(x-y) = - \frac{1}{2} g^4 C_A^2 \delta^{ab} &\left[ \right.& \partial_{\mu}^x \partial_{\nu}^y H[1, \partial_{\lambda} \; ; \; \partial_{\lambda},1] - 2 \partial_{\mu}^x H[1, \partial_{\lambda} \; ; \; \partial_{\lambda} \partial_{\nu} , 1] \nonumber \\ & & \left. - 2 \partial_{\nu}^y H[1, \partial_{\lambda} \partial_{\mu} \; ; \; \partial_{\lambda},1] + 4 H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; \partial_{\nu} \partial_{\lambda} , 1] \; \right] \;. \nonumber \\ \end{eqnarray} At this point, we can directly use the list of overlapping divergences of section \ref{overlap_integrals}. Concretely, with (\ref{int3}), (\ref{int6}), and (\ref{int15}) we find the renormalized result to be \begin{eqnarray} \left. \Pi_{\mu \nu\; (2i)}^{BB\;ab}(x) \right|_R &=& \frac{g^4 C_A^2 \delta^{a b}}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ - \frac{1}{12} \ln^2 x^2 M^2 - \frac{17}{36} \ln x^2 M^2 }{x^2} \right. \nonumber \\ & &+ \left. 
\delta_{\mu \nu} \Box \Box \frac{ \frac{1}{12} \ln^2 x^2 M^2 + \frac{29}{36} \ln x^2 M^2}{x^2} \right] + \ldots \end{eqnarray} \subsubsection{Diagram (j)} The basic form of this diagram is \begin{eqnarray} \Pi_{\mu \nu\; (2j)}^{BB\;ab}(x-y) &=& 2 g^4 f^{acf} f^{cgd} f^{bde} f^{feg} \int d^4 u d^4 v \; \Delta_{xu} \left[ - 2 \delta_{\mu \sigma} D_{\rho}^x + \delta_{\rho \sigma} ( \stackrel{\leftarrow}{\partial_{\mu}^x} - \partial_{\mu}^x ) \right. \nonumber \\ & & + \left. 2 \delta_{\mu \rho} D_{\sigma}^x \right] ( \partial_{\rho}^u \Delta_{uy} ) ( \partial_{\nu}^y - \stackrel{\leftarrow}{\partial_{\nu}^y} ) \Delta_{yv} ( \partial_{\sigma}^v \Delta_{uv}) \Delta_{vx} \;. \end{eqnarray} Evaluating the index contractions this becomes \begin{eqnarray} \Pi_{\mu \nu\; (2j)}^{BB\;ab}(x-y) = - g^4 C_A^2 \delta^{ab} &\left[ \right.& - 4 \partial_{\nu}^x \partial_{\lambda}^x H[1, \partial_{\mu} \partial_{\lambda} \; ; \; 1 , 1] - 4 \partial_{\nu}^x \partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; 1, \partial_{\mu} ] \nonumber \\ & & - 4 \partial_{\lambda}^x H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; 1, \partial_{\nu} ] - 4 \partial_{\lambda}^x H[ 1, \partial_{\lambda} \; ; \; 1, \partial_{\mu} \partial_{\nu} ] \nonumber \\ & & - 4 \partial_{\nu}^x H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; 1, \partial_{\lambda} ] + \partial_{\mu}^x \partial_{\nu}^x H[ 1, \partial_{\lambda} \; ; \; 1, \partial_{\lambda}] \nonumber \\ & & - 4 H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; 1, \partial_{\nu} \partial_{\lambda} ] + 4 \Box H[ 1, \partial_{\mu} \; ; \; 1, \partial_{\nu} ] \nonumber \\ & & + 4 \Box H[ 1, 1 \; ; \; 1, \partial_{\mu} \partial_{\nu} ] - 2 \partial_{\nu}^x H[ 1, \partial_{\mu} \; ; \; 1, \Box] \nonumber \\ & & + \partial_{\mu}^x \partial_{\nu}^x H[1, 1 \; ; \; 1, \Box] - 4 H[1, \partial_{\nu} \Box \; ; \; 1, \partial_{\mu} ] \nonumber \\ & & \left. - 2 \partial_{\mu}^x H[ 1, \partial_{\nu} \Box \; ; \; 1, 1] \; \right] \;. 
\end{eqnarray} From this list, the expressions that have a $\Box$ (remember $\Box \Delta (x) = -\delta(x)$) can be written in terms of the integral $I^1$, and their renormalization is straightforward. The rest of the integrals can be found in the list of integrals with overlapping divergences. So, we arrive at the following renormalized result \begin{eqnarray} \left. \Pi_{\mu \nu\; (2j)}^{BB\;ab}(x) \right|_R &=& \frac{g^4 C_A^2 \delta^{a b}}{32(4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{ \frac{1}{2} \ln^2 x^2 M^2 + \frac{1}{2} \ln x^2 M^2}{x^2} \right. \nonumber \\ & &+ \left. \delta_{\mu \nu} \Box \Box \frac{ - \frac{1}{2} \ln^2 x^2 M^2 - \frac{1}{2} \ln x^2 M^2}{x^2} \right] + \ldots \end{eqnarray} \subsubsection{Diagram (k)} In order to obtain all the contributions that form this diagram, the Mathematica package 'FeynCalc' was used, so that all the index contractions were performed by the computer. The output of this process consists of the final relevant expressions that need to be renormalized. The contributions shown here are those that have a divergent part, omitting those terms that are finite. 
\begin{eqnarray} \Pi_{\mu \nu\; (2k)}^{BB\;ab}(x-y) = \frac{g^4}{4} C_A^2 \delta^{ab} &\left[\right.& + 16 \delta_{\mu \nu} \partial_{\lambda}^x \partial_{\sigma}^x H[1,\partial_{\lambda} \partial_{\sigma} \; ; \; 1,1] - 20 \partial_{\nu}^x \partial_{\lambda}^x H[1, \partial_{\mu} \partial_{\lambda} \; ; \; 1,1] \nonumber \\ & & - 124 \partial_{\nu}^x \partial_{\lambda}^x H [ 1, \partial_{\lambda} \; ; \; 1, \partial_{\mu}] + 72 \partial_{\lambda}^x H[1, \partial_{\mu} \partial_{\lambda} \; ; \; 1, \partial_{\nu}] \nonumber \\ & & + 56 \delta_{\mu \nu} \partial_{\lambda}^x \partial_{\sigma}^x H[1, \partial_{\lambda} \; ; \; 1, \partial_{\sigma}] - 72 \partial_{\nu}^x H[ 1, \partial_{\mu} \Box \; ; \; 1,1] \nonumber \\ & & + 20 \partial_{\mu}^x \partial_{\nu}^x H [ 1 , \Box \; ; \; 1, 1] - 144 H[ 1, \partial_{\mu} \Box \; ; \; 1, \partial_{\nu} ] \nonumber \\ & & + 72 \partial_{\mu}^x H[1, \Box \; ; \; 1, \partial_{\nu}] - 72 \partial_{\nu}^x H[1, \partial_{\mu} \partial_{\lambda} \; ; \; 1, \partial_{\lambda}] \nonumber \\ & & + 34 \partial_{\mu}^x \partial_{\nu}^x H[ 1, \partial_{\lambda} \; ; \; 1, \partial_{\lambda} ] - 72 H[ 1, \partial_{\mu} \partial_{\lambda} \; ; \; 1, \partial_{\nu} \partial_{\lambda} ] \nonumber \\ & & + 40 \Box H [ 1, \partial_{\mu} \partial_{\nu} \; ; \; 1, 1] + 32 \Box H [ 1, \partial_{\mu} \; ; \; 1, \partial_{\nu}] \nonumber \\ & & \left. + 16 \delta_{\mu \nu} \Box H[ 1, \Box \; ; \; 1, 1] - 16 \delta_{\mu \nu} \Box H[ 1, \partial_{\lambda} \; ; \; 1, \partial_{\lambda}] \; \right] \nonumber \\ & & +~\textrm{(finite~terms)} \;. \end{eqnarray} Proceeding in the same way as in the two previous diagrams, we easily find the renormalized form of this contribution to be \begin{eqnarray} \left. \Pi_{\mu \nu\; (2k)}^{BB\;ab}(x) \right|_R &=& \frac{g^4 C_A^2 \delta^{ab}}{32 (4 \pi^2)^3} \left[ \partial_{\mu} \partial_{\nu} \Box \frac{- \frac{27}{4} \ln^2 x^2 M^2 - \frac{45}{4} \ln x^2 M^2}{x^2} \right. \nonumber \\ & & \left. 
+ \delta_{\mu \nu} \Box \Box \frac{ \frac{27}{4} \ln^2 x^2 M^2 + \frac{33}{4} \ln x^2 M^2}{x^2} \right] + \ldots \nonumber \\ \end{eqnarray} \subsubsection{Two-loop final results} In order to obtain the total two-loop renormalized contribution to the background gauge field self-energy, we only have to add all the renormalized expressions for the diagrams that we have obtained. So, we have \begin{eqnarray} \left. \Pi_{\mu \nu\;(2)}^{BB\;ab} (x) \right|_R &=& - \frac{g^4 C_A^2 \delta^{ab}}{2(4 \pi^2)^3} ( \partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box ) \Box \frac{ \ln x^2 M^2}{x^2} + \ldots \label{2_loop} \end{eqnarray} By fulfilling one-loop CDR rules, we have fixed the renormalization scheme {\em{a priori}}, which implies that one-loop local terms have defined values. Hence, as is imposed by gauge invariance, we have obtained directly a transverse result. \subsection{RG equation} \label{sec_RG} With the previously obtained expressions for the one- and two-loop corrections of the background gauge field propagator, we can obtain the first two coefficients of the expansion of the beta function of this theory. 
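Since every diagram carries the common factor $g^4 C_A^2 \delta^{ab} / (32 (4\pi^2)^3)$, and every renormalized log obeys $M \partial_M \ln x^2 M^2 = 2$ together with $\Box (1/x^2) = -4\pi^2 \delta(x)$, both the diagram sum just performed and the beta-function extraction that follows reduce to exact rational arithmetic. The following sketch is a pure bookkeeping check (using the $(a)$--$(k)$ coefficients quoted above, and the values of the $\xi$-term and $\gamma_{\xi}$ given in this section):

```python
from fractions import Fraction as F
import sympy as sp

# Coefficients (c2, c1) of [c2 ln^2(x^2 M^2) + c1 ln(x^2 M^2)]/x^2 acted on by
# d_mu d_nu Box (dd) and by delta_munu Box Box (dm), diagrams (a)-(k), in units
# of g^4 C_A^2 delta^{ab} / (32 (4 pi^2)^3), as quoted above.
dd = {'a': (F(-1, 3), F(-8, 9)),   'b': (F(-25, 3), F(-86, 9)), 'c': (0, 0),
      'd': (0, 0),                 'e': (F(12), 0),             'f': (F(1, 3), F(-1, 9)),
      'g': (F(5, 12), F(19, 36)),  'h': (F(9, 4), F(21, 4)),    'i': (F(-1, 12), F(-17, 36)),
      'j': (F(1, 2), F(1, 2)),     'k': (F(-27, 4), F(-45, 4))}
dm = {'a': (F(1, 3), F(11, 9)),    'b': (F(25, 3), F(71, 9)),   'c': (0, F(1, 2)),
      'd': (0, F(-9, 2)),          'e': (F(-12), 0),            'f': (F(-1, 3), F(-11, 9)),
      'g': (F(-5, 12), F(-7, 36)), 'h': (F(-9, 4), F(15, 4)),   'i': (F(1, 12), F(29, 36)),
      'j': (F(-1, 2), F(-1, 2)),   'k': (F(27, 4), F(33, 4))}
tot_dd = tuple(sum(v[i] for v in dd.values()) for i in (0, 1))
tot_dm = tuple(sum(v[i] for v in dm.values()) for i in (0, 1))
# ln^2 terms cancel, and -16/32 = -1/2 reproduces the transverse two-loop total
assert tot_dd == (0, -16) and tot_dm == (0, 16)

# RG extraction: each log term feeds -8 pi^2 times its coefficient into a delta term
g, CA, b1, b2 = sp.symbols('g C_A beta_1 beta_2')
pi = sp.pi
rg = (-8*pi**2) * (11*CA/(48*pi**2*(4*pi**2))          # one-loop log coefficient
                   + g**2*CA**2/(2*(4*pi**2)**3))      # two-loop log coefficient
rg += (b1*g**3 + b2*g**5) * sp.diff(1/g**2, g)         # beta(g) d/dg on the tree term
rg += (-5*CA*g**2/(24*pi**2)) * (CA/(8*pi**2))         # gamma_xi times the xi-term
rg = sp.expand(rg)
sol = sp.solve([rg.coeff(g, 0), rg.coeff(g, 2)], [b1, b2])
assert sp.simplify(sol[b1] + 11*CA/(48*pi**2)) == 0
assert sp.simplify(sol[b2] + 17*CA**2/(24*(4*pi**2)**2)) == 0
```

This only checks the arithmetic of the assembly; the field-theoretic content of each coefficient is, of course, established by the diagram computations themselves.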
If we define \begin{equation} \Gamma^{BB \; ab}_{\mu \nu} (x) = (\partial_{\mu} \partial_{\nu} - \delta_{\mu \nu} \Box)\delta^{ab} \Gamma^{(2)} (x) \;, \end{equation} then, with the one-loop contribution (\ref{1_loop}), the gauge fixing renormalization (\ref{gauge_fix_ren}) and the two-loop contribution (\ref{2_loop}), the effective action for the background gauge fields is \begin{eqnarray} \Gamma^{(2)}(x) &=& \frac{1}{g^2} \delta (x) + \frac{11 C_A}{48 \pi^2 (4 \pi^2)}\Box \frac{\ln x^2 M^2}{x^2} + \frac{ C_A}{72 \pi^2} \delta (x) + \frac{\xi C_A}{8 \pi^2} \delta (x) \nonumber \\ & &+ \frac{g^2 C_A^2}{2 (4 \pi^2)^3} \Box \frac{ \ln x^2 M^2}{x^2} + \ldots \label{YM_2loop_eff_action} \end{eqnarray} With this definition, the equation we need to consider is \begin{eqnarray} \left[ M \frac{\partial}{\partial M} + \beta (g) \frac{\partial}{\partial g} + \gamma_{\xi} \frac{\partial}{\partial \xi} - 2 \gamma_{B} \right] \Gamma^{(2)}|_{\xi=0} =0 \;. \label{YM_RG_eq} \end{eqnarray} Notice that $\gamma_{\xi}$ is the coefficient that takes care of the running of the gauge parameter. The expansion of this function to order $g^2$ is obtained in appendix \ref{ap_Gauge}. To do so, we consider there the one-loop RG equation for quantum gauge fields. With this, we find for $\gamma_{\xi}$ \begin{eqnarray} \gamma_{\xi} &=& - \frac{5 C_A}{24 \pi^2} g^2 + \cdots \label{g_xi} \end{eqnarray} Also notice that if the background gauge field is redefined as $B^{\prime} = g B$, this implies $\gamma_{B} =0$ (the charge and background field renormalizations are related: $Z_{g} = Z_{B}^{-1/2}$). So, with (\ref{YM_2loop_eff_action}), (\ref{YM_RG_eq}) and (\ref{g_xi}), we evaluate the first two coefficients of the expansion of the beta function to be \begin{eqnarray} \beta (g) &=& \beta_1 g^3 + \beta_2 g^5 + {\cal{O}}(g^7) \nonumber \\ \beta_1 &=& - \frac{11 C_A}{48 \pi^2} \nonumber \\ \beta_2 &=& - \frac{17 C^2_A}{24 (4 \pi^2)^2} \;. 
\end{eqnarray} These results agree with those previously obtained in the literature \cite{Caswell:1974gg,Jones:1974mm,Abbott:1980hw,Morris:2005tv}. \section{$N=1$ Super Yang-Mills} In this section we consider the two-loop differential renormalization of the supersymmetric extension of the previous model, $N=1$ Super Yang-Mills \cite{Ferrara:1974pu,Salam:1974ig}. With this calculation we revisit an old controversy: the origin of higher-order perturbative contributions to the beta function in supersymmetric gauge theories \cite{Novikov:1983uc,Grisaru:1985tc,Shifman:1986zi,Arkani-Hamed:1997mj,Shifman:1999kf}. Differential renormalization has an important advantage here over the usual renormalization methods (such as dimensional reduction), since in this case we have both UV and IR divergences. With dimensional methods the two renormalizations mix (we need to subtract the IR part from the final result), whereas differential renormalization clearly distinguishes between UV and IR divergences, as they are renormalized with independent scales. \subsection{$N=1$ Super Yang-Mills model} As is detailed in appendix \ref{ap_SUSY}, in order to formulate a supersymmetric gauge theory we can follow two different approaches. In the first one, named the chiral representation, we begin by considering a multiplet of unconstrained gauge superfields ($V = V^A T_A$, with $T_A$ the group generators), which generalizes the results found by studying the off-shell representations of the linear free theory. We use these gauge superfields to construct covariant derivatives $\nabla_A^c =( e^{-gV} D_{\alpha} e^{gV}, \bar{D}_{\dot{\alpha}}, - i \anticomm{\nabla^c_{\alpha}}{\bar{\nabla}^c_{\dot{\alpha}}})$ that allow us to obtain gauge invariant expressions.
On the other hand, with the second approach, called the vector representation, we begin by considering covariant derivatives (termed $\nabla_A^v =( \nabla^v_{\alpha}, \bar{\nabla}^v_{\dot{\alpha}},\nabla^v_{\alpha \dot{\alpha}})$) that, after imposing covariant constraints on them, can be expressed in terms of prepotentials. With both approaches we find field strengths defined in terms of a spinorial field. In particular, for the chiral representation we have \begin{eqnarray} W_{\alpha} &=& i \bar{D}^2( e^{-gV} D_{\alpha} e^{gV}) \nonumber \\ W_{\dot{\alpha}} &=& e^{-gV} \bar{W}_{\dot{\alpha}} e^{gV} = e^{-gV} ( - W_{\alpha})^{+} e^{gV} \;, \end{eqnarray} which allow us to define a gauge invariant action as \begin{eqnarray} S_0 &=& \frac{1}{g^2} tr \int d^4 x d^2 \theta \; W^2 = - \frac{1}{2 g^2} tr \int d^4 x d^4 \theta \; ( e^{-gV} D^{\alpha} e^{gV}) \bar{D}^2 ( e^{-gV} D_{\alpha} e^{gV}) \;. \end{eqnarray} For the vector representation, the field strength is defined as \begin{eqnarray} -i C_{\dot{\beta} \dot{\alpha}} W_{\beta} &=& \comm{\bar{\nabla}^v_{\dot{\alpha}}}{i \nabla^v_{\beta \dot{\beta}}} ~,~\textrm{with}~\anticomm{\nabla^v_{\alpha}}{\bar{\nabla}^v_{\dot{\beta}}} = i \nabla^v_{\alpha \dot{\beta}} \;, \end{eqnarray} and with this we write the gauge action as \begin{eqnarray} S_0 = \frac{1}{g^2} tr \int d^4 x d^2 \theta \; W^2 = \frac{1}{2 g^2} tr \int d^4 x d^2 \theta \; \left( \frac{1}{2} \comm{\bar{\nabla}^{v \; \dot{\alpha}}}{\anticomm{\bar{\nabla}^v_{\dot{\alpha}}}{\nabla^v_{\alpha}}} \right)^2 \;. \end{eqnarray} The quantization of a supersymmetric gauge theory is also discussed in appendix \ref{ap_SUSY}. We have to add to the action a gauge-fixing term that depends on a gauge parameter $\alpha$ ($S_{GF}$) and anticommuting chiral ghost fields $c$, $c^{\prime}$ ($S_{FP}$).
Their explicit expressions are \begin{eqnarray} S_{GF} &=& - \frac{1}{\alpha} tr \int d^8 z \; ( D^2 V ) ( \bar{D}^2 V ) \nonumber \\ S_{FP} &=& tr \int d^4 x d^4 \theta \; ( c^{\prime} + \bar{c}^{\prime} ) L_{\frac{1}{2} gV} \left[ ( c + \bar{c}) + \coth L_{\frac{1}{2} g V} (c - \bar{c}) \right] ~~,~~ L_X Y = \comm{X}{Y} \;. \nonumber \\ \end{eqnarray} One of the relevant features of the supergraph techniques applied to this theory is the appearance, along with the usual on-shell infrared divergences of Yang-Mills theory, of additional infrared divergences due to the form of the gauge propagator in a general covariant gauge \cite{Juer:1982mp,Clark:1977pq,Piguet:1981hh,Howe:1984xq,Abbott:1984pz}. This can be clearly seen if we consider the expression for this propagator in momentum space \cite{Abbott:1984pz} \begin{eqnarray} \Delta (k) &=& \frac{1 + (1 - \alpha)( D^2 \bar{D}^2 + \bar{D}^2 D^2) k^{-2}}{k^2} \delta^4 (\theta - \theta^{\prime}) \;. \end{eqnarray} As the leading term in $\bar{D}^2D^2 + D^2 \bar{D}^2$ is constant when $k^2 \rightarrow 0$, this propagator goes as $1/k^4$ at small $k$, which is the origin of the infrared divergences. Although Feynman gauge ($\alpha = 1$) might seem to be the solution, this is not the case: the one-loop correction to the gauge propagator takes us out of Feynman gauge; hence, when we consider two-loop diagrams that contain the one-loop corrected propagator as an insertion, the infrared divergences reappear \cite{Abbott:1984pz}. With dimensional methods this situation represents a severe problem, as both UV and IR divergences are renormalized with the same dimensional parameter $\varepsilon$ ($d = 4 - 2 \varepsilon$). So, we have to subtract the IR contribution\footnote{In \cite{Abbott:1984pz} this was achieved at two loops by choosing a non-local gauge-fixing term that cancels exactly the contributions that take us out of Feynman gauge.
In \cite{Grisaru:1985tc} the procedure was to define a $\tilde{R}$-operation \cite{Chetyrkin:1982nn}.}. However, the situation is much simpler if we use differential renormalization. As we have seen in section \ref{IR_divergences}, with differential renormalization UV and IR divergences are renormalized with different and independent scales. \subsubsection{Background field method} We discuss the application of the background field method to supersymmetric gauge theories in appendix \ref{ap_BFM}. As is explained there, due to the non-linear gauge transformations of SYM, a linear quantum-background splitting is unsuitable. The appropriate splitting is achieved if we replace $e^{gV}$ with \begin{eqnarray} e^{g V_{(split)}} = e^{\boldsymbol{\Omega}} e^{g V} e^{\bar{\boldsymbol{\Omega}}} \;, \end{eqnarray} where $V$ is the quantum gauge superfield and ${\boldsymbol{\Omega}}$ is the background prepotential. It is worthwhile to mention that we have redefined the usual background field $B$ ($\boldsymbol{\Omega} = \bar{\boldsymbol{\Omega}} = \frac{1}{2} B$) as $g B \rightarrow B$. This splitting implies that we write the covariant derivatives in a quantum-chiral but background-vector representation as \begin{eqnarray} \nabla_{\alpha} = e^{- g V} \boldsymbol{\nabla}_{\alpha} e^{g V} ~~,~~ \bar{\nabla}_{\dot{\alpha}} = \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} ~~,~~ \nabla_{\alpha \dot{\alpha}} = -i \anticomm{ \nabla_{\alpha}}{\bar{\nabla}_{\dot{\alpha}}} \;, \end{eqnarray} where $\boldsymbol{\nabla}_{\alpha}$ and $\bar{\boldsymbol{\nabla}}_{\dot{\alpha}}$ are the background covariant derivatives. Regarding the quantization of the theory, we remark on one of the particular features of Super Yang-Mills: the appearance of the Nielsen-Kallosh ghost \cite{Nielsen:1978mp}. In the usual quantization procedure we gauge-average with a simple exponential factor; however, if we use a more complicated function, e.g., $\exp \int f M f$ with $M$ an operator, we have to normalize the procedure by dividing by $\det M$.
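Schematically, the bookkeeping behind this normalization is just the elementary Gaussian rule (we sketch it for complex fields, ignoring overall constants and the details of the measure): \begin{eqnarray} \int {\cal{D}} f \, {\cal{D}} \bar{f} \; e^{- \int \bar{f} M f} \propto ( \det M )^{-1} ~~,~~ \int {\cal{D}} b \, {\cal{D}} \bar{b} \; e^{- \int \bar{b} M b} \propto \det M \;, \nonumber \end{eqnarray} where the first integral is over commuting fields and the second over anticommuting ones. A quadratic, non-interacting field with statistics opposite to $f$ therefore supplies exactly the compensating determinant.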
As we have this situation when we use background chiral superfields, we introduce a new ghost field $b$ (the Nielsen-Kallosh ghost), which has opposite statistics to $f$ and allows us to properly normalize the gauge-averaging procedure. As this new field does not interact with the quantum fields and enters quadratically in the action, we find that it only contributes at the one-loop level. Hence, all the relevant terms that form the split Super Yang-Mills action are \begin{eqnarray} S_{YM} &=& - \frac{1}{2 g^2} tr \int d^4 x d^4 \theta \; ( e^{-g V} \boldsymbol{\nabla}^{\alpha} e^{g V}) \bar{\boldsymbol{\nabla}}^2 ( e^{- g V} \boldsymbol{\nabla}_{\alpha} e^{g V} ) \nonumber \\ S_{GF} &=& - ( 1 + \xi) tr \int d^4 x d^4 \theta ( \boldsymbol{\nabla}^2 V ) ( \bar{\boldsymbol{\nabla}}^2 V ) \nonumber \\ S_{FP} &=& tr \int d^4 x d^4 \theta \; \left[ \bar{c}^{\prime} c - c^{\prime} \bar{c} + \frac{1}{2} ( c^{\prime} + \bar{c}^{\prime} ) \comm{gV}{c+\bar{c}} + \ldots\right] \nonumber \\ S_{NK} &=& (1 + \xi) tr \int d^4 x d^4 \theta \; \bar{b} b \;. \label{SYM_BFM_actions} \end{eqnarray} Notice that we have redefined the usual gauge parameter $\alpha$ as $\frac{1}{\alpha} = 1 + \xi$. Hence, as is discussed in appendix \ref{ap_BFM}, we have a background effective action of the form \begin{eqnarray} \Gamma[B] &=& tr \int d^4 x d^4 y d^2 \theta \; \left[ \boldsymbol{W}^{\alpha} (x, \theta) \boldsymbol{W}_{\alpha} (y, \theta) \right] \Gamma^{(2)}_B (x-y) + \ldots \nonumber \\ &=& S_0[B] + \Gamma_{\xi} \nonumber \\ & & + \exp S_{int} \left[\frac{\delta}{\delta J}, \frac{\delta}{\delta j}, \frac{\delta}{\delta \bar{j}} \right] \exp \left[ \frac{1}{2} J \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1} J - \bar{j} \Box_{+}^{-1} j \right]_{J=j= \bar{j} =0} \;, \end{eqnarray} where $S_0[B]$ is the ``free'' part of the background action, $\Gamma_{\xi}$ stands for the one-loop contribution in the gauge $\xi$, and $J$, $j$ and $\bar{j}$ are the sources.
$\Box_{+}$ and $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}$ are operators defined in appendix \ref{ap_BFM} as \begin{eqnarray} \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} &=& \boldsymbol{\Box} - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \nonumber \\ \Box_{+} &=& \Box - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - \frac{i}{2} ( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} ) \;. \end{eqnarray} To apply supergraph techniques to this theory, we can follow two procedures. In the first one, we expand all the background covariant derivatives in terms of the explicit background connections, $\boldsymbol{\nabla}_{\alpha} = D_{\alpha} - i \boldsymbol{\Gamma}_{\alpha}$, so that we can employ the usual D-algebra. This procedure is applied in \cite{Abbott:1984pz}, and we will use it at the one-loop level. However, another approach was considered in \cite{Grisaru:1984ja,Grisaru:1984jc}. In this case the spinorial background connection is not explicitly extracted from the background covariant derivative, and covariant D-algebra is used in the diagrams. At the end, all the diagrams are expressed in terms of the background space-time connections $\boldsymbol{\Gamma}_{\alpha \dot{\alpha}}$ or field strengths $\boldsymbol{W}_{\alpha}$. Thus, as we do not have explicit spinor connections $\boldsymbol{\Gamma}_{\alpha}$ (which are of lower dimension), the diagrams are more convergent and fewer in number. For the two-loop calculations we will follow this procedure. \subsection{One-loop level} We now proceed with the one-loop case. As in the previously considered examples, we will obtain the one- and two-loop corrections evaluated in Feynman gauge (with our conventions, $\xi = 0$).
Therefore, at this level we not only have to consider the background gauge field self-energy correction: since we must take care of the running of the gauge parameter in the RG equation (through the term $\gamma_{\xi} \partial / \partial \xi$), we will also obtain the contribution linear in $\xi$ to the one-loop background gauge field two-point function. \subsubsection{Background gauge field self-energy} \begin{figure}[h] \centerline{\epsfbox{SYM1loop.eps}} \caption{One-loop background gauge field two-point function contribution. Thick lines correspond to external background fields and thin lines represent ghost propagators.} \label{SYM_1loop} \end{figure} To obtain the one-loop contribution, we begin by expressing the covariantly chiral ghost fields in terms of ordinary chiral fields through \begin{eqnarray} c \rightarrow e^{ B /2}\; c \;e^{- B /2} \;, \end{eqnarray} where $B$ is the background gauge field. Hence, the relevant one-loop interaction terms are (see appendix \ref{ap_BFM}) \begin{eqnarray} tr \int d^4 x d^4 \theta \; \left[ - \frac{1}{2} V \left( \boldsymbol{\Box} - \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \right) V + \bar{c}^{\prime} B c + c^{\prime} B \bar{c} + \bar{b} B b \right] \;. \end{eqnarray} Note that in the previous expression, the part which corresponds to the interaction between the quantum and background gauge superfields clearly does not provide enough covariant derivatives to produce a non-vanishing contribution to the background gauge field two-point function (recall that we need at least two $\bar{D}^2$ and two $D^2$). Hence, we conclude that the only non-vanishing contribution comes from the ghost superfields (as they are chiral superfields, the vertices carry additional superspace covariant derivatives \cite{Gates:1983nr}).
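The counting invoked here follows from the standard transfer rules for the Grassmann $\delta$-functions (we recall them schematically, with $\delta_{12} \equiv \delta^4 (\theta_1 - \theta_2)$): \begin{eqnarray} \delta_{12} \, \delta_{12} = 0 ~~,~~ \delta_{12} \, D^2 \delta_{12} = 0 ~~,~~ \delta_{12} \, \bar{D}^2 \delta_{12} = 0 ~~,~~ \delta_{12} \, \bar{D}^2 D^2 \delta_{12} = \delta_{12} \;, \nonumber \end{eqnarray} so a loop whose lines carry fewer than two $D$'s and two $\bar{D}$'s collapses to zero upon the $\theta$-integration \cite{Gates:1983nr}.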
With the definition of the superspace propagator $P_{ij} = \delta_{ij} \Delta_{ij}$, the total contribution of the ghost superfields is straightforwardly evaluated as \cite{Song,Gates:1983nr} \begin{eqnarray} \Gamma^{1\;loop} &=& - \frac{3 C_A \delta^{ab}}{2} \int d^8 z_1 d^8 z_2 \; B^a (z_1) B^{b} (z_2) \left[ D^2_1 P_{12} \stackrel{\leftarrow}{\bar{D}^2_2} \right] \left[ D^2_2 P_{12} \stackrel{\leftarrow}{\bar{D}^2_1} \right] \nonumber \\ &=& - \frac{3 C_A}{2} \int d^8 z_1 d^8 z_2 B^a(z_1) B^a(z_2) \left[ \bar{D}^2_2 D^2_2 P_{12} \right] \left[ D^2_2 \bar{D}^2_2 P_{12} \right] \;. \end{eqnarray} Applying the identity (\ref{D_algebra_id}), we can write this expression as \begin{eqnarray} \Gamma^{1\;loop} &=& - \frac{3 C_A}{2} \int d^8 z_1 d^8 z_2 \; B^a(z_1) \left[ \bar{D}^2 D^2 B^a(z_2) \right] P_{12} \left[ \bar{D}^2_2 D^2_2 P_{12} \right] \nonumber \\ & & - \frac{3 C_A}{2} \int d^8 z_1 d^8 z_2 \; B^a(z_1) B^a(z_2) P_{12} \left[ \Box \bar{D}^2_2 D^2_2 P_{12} \right] \nonumber \\ & & + \frac{i 3 C_A}{2} \int d^8 z_1 d^8 z_2 \; B^a(z_1) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B^a(z_2) \right] P_{12} \left[ \partial_{\alpha \dot{\alpha}}^2 \bar{D}^2_2 D^2_2 P_{12} \right] \;. \end{eqnarray} Finally, with the usual superspace $\delta$-function property $\delta_{12} \bar{D}^2_2 D^2_2 \delta_{12} = \delta_{12}$ and the identifications $x_1 = x$, $x_2 = y$, we arrive at \begin{eqnarray} \Gamma^{1 \;loop} &=& - \frac{3 C_A}{2} \int d^4 x d^4 y d^4 \theta \; B^a(x, \theta) \left[ \bar{D}^2 D^2 B^a (y, \theta) \right] \Delta^2_{xy} \nonumber \\ & & - \frac{3 C_A}{2} \int d^4 x d^4 y d^4 \theta \; B^a(x, \theta) B^a (y, \theta) \Delta_{xy} \Box \Delta_{xy} \nonumber \\ & & + \frac{i 3 C_A}{2} \int d^4 x d^4 y d^4 \theta \; B^a(x, \theta) \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B^a (y, \theta) \right] \Delta_{xy} \partial_{\alpha \dot{\alpha}}^y \Delta_{xy} \;.
\end{eqnarray} Applying CDR rules, we find the renormalized form of this expression to be \begin{eqnarray} \Gamma^{1\;loop} &=& \frac{ 3 C_A}{16 (4 \pi^2)^2} \int d^4 x d^4 y d^4 \theta \; B^a(x,\theta) \left[ D^{\alpha} \bar{D}^2 D_{\alpha} B^a (y, \theta) \right] \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} \;, \nonumber \\ \label{SYM_1loop_eff_action_background} \end{eqnarray} where we have used the superspace derivatives identity $\bar{D}^2 D^2 + (i/2) \partial_{\alpha \dot{\alpha}} \bar{D}^{\dot{\alpha}} D^{\alpha} = 1/2 D^{\alpha} \bar{D}^2 D_{\alpha}$. \subsubsection{Effective action in a generic gauge} Now we proceed with the additional result that we have to obtain in order to deal with the running of the gauge parameter: the contribution to the background effective action of quantum gauge fields evaluated in a generic gauge. We have to follow a procedure similar to that used in the Yang-Mills case. First, we have to consider the one-loop effective action contribution at second order in background gauge fields evaluated in a generic gauge. Then, we have to expand this in terms of the gauge parameter $\xi$, retaining the linear part. The reason for this is the same as in the non-supersymmetric case: in the background gauge field RG equation, after considering the term that takes care of the running of the gauge parameter $\gamma_{\xi} (g) \partial/ \partial \xi$, we will impose Feynman gauge ($\xi = 0$); hence, the only relevant term for us in the $\xi$-expansion is the linear one. As in the Yang-Mills case, to perform this calculation we consider a functional approach. 
The quadratic action that we have in this case implies that we find the following contributions from $V$ fields and Nielsen-Kallosh ghosts $b$ to the one-loop effective action \cite{Abbott:1984pz} \begin{eqnarray} \Gamma_{eff} &=& - \frac{1}{2} tr \ln \left[ \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} + \xi \left( \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 + \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2 \right) \right] + tr \ln \left[ \Box_{-} + \xi \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \right] \;, \label{SYM_1loop_eff_gen_gauge} \end{eqnarray} which is written in terms of the operators defined in appendix \ref{ap_BFM} as \begin{eqnarray} \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} &=& \boldsymbol{\Box} - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \nonumber \\ \Box_{+} &=& \Box - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} - \frac{i}{2} ( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} ) \nonumber \\ \Box_{-} &=& \Box - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} - \frac{i}{2} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}) \;. 
\end{eqnarray} Expanding (\ref{SYM_1loop_eff_gen_gauge}) in $\xi$ we find \begin{eqnarray} \Gamma_{eff} &=& - \frac{1}{2} tr \ln \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} + tr \ln \Box_{-} + \Gamma_{\xi} \nonumber \\ &=& - \frac{1}{2} tr \ln \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} + tr \ln \Box_{-} + \frac{\xi}{2} \Gamma^{(1)}_{\xi} + {\cal{O}}(\xi^2) \;, \end{eqnarray} and the linear part is \begin{eqnarray} \Gamma_{\xi}^{(1)} &=& - tr \left[ \frac{1}{\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}} ( \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 + \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2 ) \right] + tr \left[ \frac{1}{\Box_{-}} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \right] \nonumber \\ &=& tr \left[ \frac{1}{\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}} ( \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} - \Box_{-} ) \frac{1}{\Box_{-}} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \right] + tr \left[ \frac{1}{\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}} ( \hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits} - \Box_{+}) \frac{1}{\Box_{+}} \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2 \right] \nonumber \\ &=& tr \left[ \frac{1}{\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}} \left( - i \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} + \frac{i}{2} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}) \right) \frac{1}{\Box_{-}} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \right] \nonumber \\ & & + tr \left[ \frac{1}{\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}} \left( - i \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} + \frac{i}{2} ( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha}) \right) \frac{1}{\Box_{+}} \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2 \right] \;, \end{eqnarray} where in the 
second step we have used the property $tr \Box_{-}^{-1} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 = tr \Box_{+}^{-1} \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2$ \cite{Grisaru:1984ja}. At this point, applying $\Box_{-}^{-1} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 = \boldsymbol{\nabla}^2 \Box_{+}^{-1} \bar{\boldsymbol{\nabla}}^2$ \cite{Grisaru:1984ja}, the anticommutative nature of the covariant derivatives ($\boldsymbol{\nabla}_{\alpha} \boldsymbol{\nabla}^2 = 0$) and the Bianchi identity $\boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} = - \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}$, we arrive at \begin{eqnarray} \Gamma_{\xi}^{(1)} &=& \frac{i}{2} tr \left[ \frac{1}{\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}} ) \frac{1}{\Box_{-}} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \right] - \frac{i}{2} tr \left[ \frac{1}{\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}) \frac{1}{\Box_{+}} \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2 \right] \;.
\end{eqnarray} So, considering the inverse of the operators, \begin{eqnarray} \frac{1}{\Box_{+}} &=& \frac{1}{\Box} + \frac{i}{\Box} \left( \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} + \frac{1}{2} ( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} ) \right) \frac{1}{\Box} + \ldots \nonumber \\ \frac{1}{\Box_{-}} &=& \frac{1}{\Box} + \frac{i}{\Box} \left( \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} + \frac{1}{2} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}} ) \right) \frac{1}{\Box} + \ldots \end{eqnarray} where $\Box = 1/2 \boldsymbol{\nabla}^{\alpha \dot{\alpha}} \boldsymbol{\nabla}_{\alpha \dot{\alpha}}$, the contribution at second order in the background gauge fields is \begin{eqnarray} \Gamma_{\xi}^{(1)} &=& - \frac{1}{2} tr \left[ \frac{1}{\Box_0} (\bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}) \frac{1}{\Box_0} \left( \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} + \frac{1}{2} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}) \right) \frac{1}{\Box_0} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2 \right] \nonumber \\ & & + \frac{1}{2} tr \left[ \frac{1}{\Box_0} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}) \frac{1}{\Box_0} \left( \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} + \frac{1}{2} ( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha}) \right) \frac{1}{\Box_0} \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2 \right] + {\cal{O}}(B^3) \;, \nonumber \\ \end{eqnarray} with $\Box_0 = 1/2 \partial^{\alpha \dot{\alpha}} \partial_{\alpha \dot{\alpha}}$ being the usual d'alembertian. 
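These expansions are nothing but the operator geometric series (we keep track of the operator ordering): \begin{eqnarray} \frac{1}{\Box_{\pm}} = \frac{1}{\Box - i X_{\pm}} = \frac{1}{\Box} + \frac{i}{\Box} X_{\pm} \frac{1}{\Box} + \frac{i}{\Box} X_{\pm} \frac{i}{\Box} X_{\pm} \frac{1}{\Box} + \ldots \;, \nonumber \end{eqnarray} with $X_{+} = \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} + \frac{1}{2} ( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} )$ and $X_{-} = \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} + \frac{1}{2} ( \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}} )$, truncated at the order in background fields required for the two-point function.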
Hence, as the terms that correspond to $\bar{\boldsymbol{W}}^{\dot{\alpha}} \Box_0^{-1} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}^2$ and $\boldsymbol{W}^{\alpha} \Box_0^{-1} \boldsymbol{\nabla}_{\alpha} \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}^2$ do not have enough covariant derivatives to give a non-vanishing result at second order in the background gauge fields (recall $\comm{\boldsymbol{\nabla}_{\alpha}}{\bar{\boldsymbol{\nabla}}^2} = - i \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} + i \boldsymbol{W}_{\alpha}$), the contribution is found to be (once the Bianchi identities are imposed) \begin{eqnarray} \Gamma_{\xi}^{(1)} &=& - \frac{1}{2} tr \left[ \frac{1}{\Box_0} ( D^{\alpha} \boldsymbol{W}_{\alpha} ) \frac{1}{\Box_0} ( D^{\beta} \boldsymbol{W}_{\beta} ) \frac{1}{\Box_0} D^2 \bar{D}^2 \right] + {\cal{O}}(B^3) \nonumber \\ &=& \frac{1}{2} tr \int d^8 z_1 d^8 z_2 d^8 z_3 \; [ D^{\alpha} \boldsymbol{W}_{\alpha} (z_2) ] [ D^{\beta} \boldsymbol{W}_{\beta}(z_3) ] \left[ \bar{D}^2_1 D^2_1 P_{12} \right] P_{23} P_{13} + {\cal{O}}(B^3) \;. \nonumber \\ \end{eqnarray} After simplifying this result with the usual superspace $\delta$-function identity (\ref{SUSY_delta_propagators}) and using the identifications $x_2 = x$, $x_3 = y$ and $x_1 = u$, we have \begin{eqnarray} \Gamma_{\xi}^{(1)} &=& \frac{1}{2} tr \int d^4 x d^4 y d^4 \theta \; [ D^{\alpha} \boldsymbol{W}_{\alpha}(x, \theta) ] [ D^{\beta} \boldsymbol{W}_{\beta} (y, \theta) ] \Delta_{xy} \int d^4 u \; \Delta_{xu} \Delta_{yu} \;. \end{eqnarray} This expression is IR divergent, and it has been evaluated in (\ref{CDR_momentum_space}).
The renormalized result found there is \begin{eqnarray} \Gamma_{\xi \; R}^{(1)} &=& - \frac{1}{8(4 \pi^2)^2} tr \int d^4 x d^4 y d^4 \theta \; [ D^{\alpha} \boldsymbol{W}_{\alpha} (x, \theta) ] [ D^{\beta} \boldsymbol{W}_{\beta} (y, \theta) ] \frac{\ln (x-y)^2 M^2_{IR}}{(x-y)^2} + {\cal{O}}(B^3) \nonumber \\ &=& - \frac{1}{8(4 \pi^2)^2} tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha} (x, \theta) \boldsymbol{W}_{\alpha} (y, \theta) \Box \frac{\ln (x-y)^2 M^2_{IR}}{(x-y)^2} + {\cal{O}}(B^3) \;, \nonumber \\ \end{eqnarray} where in the last step we have applied the identity\footnote{$\displaystyle\int d^4 x d^4 y d^4 \theta (D^{\alpha} \boldsymbol{W}_{x \; \alpha}) (D^{\beta} \boldsymbol{W}_{y \; \beta}) f(x-y) = - \int d^4 x d^4 y d^4 \theta \boldsymbol{W}_{x \; \alpha} ( \delta_{\beta}^{\;\alpha} D^2 \boldsymbol{W}^{\beta}_y ) f(x-y) \\ = - \int d^4 x d^4 y d^2 \theta \boldsymbol{W}_{x \; \alpha} (\comm{\bar{D}^2}{D^2} \boldsymbol{W}^{\alpha}_y) f (x-y) + {\cal{O}}(B^3) = \int d^4 x d^4 y d^2 \theta \boldsymbol{W}^{\alpha}_x ( \Box \boldsymbol{W}_{y \; \alpha}) f (x-y) + {\cal{O}}(B^3) \\ = \int d^4 x d^4 y d^2 \theta \boldsymbol{W}_{x}^{\alpha} \boldsymbol{W}_{y \; \alpha} \Box f(x-y) +{\cal{O}}(B^3)$} \begin{eqnarray} & &\int d^4 x d^4 y d^4 \theta \; [ D^{\alpha} \boldsymbol{W}_{\alpha} (x,\theta) ] [ D^{\beta} \boldsymbol{W}_{\beta} (y, \theta) ] f (x-y) \nonumber \\ & & = \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha}(x,\theta) \boldsymbol{W}_{\alpha}(y, \theta) \Box f(x-y) + {\cal{O}}(B^3) \;. \end{eqnarray} Hence, the linear term of the expansion in the gauge parameter of the one-loop effective action evaluated at second order in the background gauge fields is \begin{eqnarray} \Gamma_{\xi} &=& - \frac{\xi}{16 (4 \pi^2)^2} tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha}(x, \theta) \boldsymbol{W}_{\alpha} (y, \theta) \Box \frac{ \ln (x-y)^2 M^2_{IR}}{(x-y)^2} + {\cal{O}}(\xi^2 ; B^3) \;.
\nonumber \\ \label{SYM_1loop_eff_action_gen_gauge} \end{eqnarray} \subsection{Two-loop level} We proceed now with the calculation of the two-loop contribution to the background gauge field self-energy. As we have stated previously, we will use covariant D-algebra here, which simplifies the calculation and reduces the number of diagrams we have to consider. We begin with the pure contribution of quantum gauge fields, and leave ghost contributions for a later section. \subsubsection{Quantum gauge field contribution} \begin{figure}[ht] \centerline{\epsfbox{SYM2loop_vacuum.eps}} \caption{Two-loop contribution to the background effective action. Wavy lines correspond to quantum gauge field propagators.} \label{SYM_2loop_vacuum} \end{figure} In order to obtain these contributions, we have to expand the gauge action $S_{YM}$ of (\ref{SYM_BFM_actions}) and obtain the different interaction terms. With these, the background field method and covariant Feynman rules instruct us to consider diagrams with external background fields and covariant propagators $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1}$ inside the loops. From this expansion, the requirement of having at least four $\boldsymbol{\nabla}$ and four $\bar{\boldsymbol{\nabla}}$ to get a non-vanishing contribution implies that the only relevant interaction term for our problem is \cite{Grisaru:1984ja} \begin{eqnarray} \frac{g}{2} tr \left[ V \anticomm{\boldsymbol{\nabla}^{\alpha} V}{ \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}_{\alpha} V} \right] \;, \end{eqnarray} which allows us to construct a vacuum diagram like the one shown in figure \ref{SYM_2loop_vacuum}, where the wavy lines correspond to covariant propagators $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1}$. Note that we have implicit interactions with the background field in the covariant derivatives and the covariant propagators.
To evaluate this diagram, we have to rearrange the covariant derivatives at the vertices, where their explicit expression is $(i g / 2) f^{abc} V^a \boldsymbol{\nabla}^{\alpha} V^b \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}_{\alpha} V^c$. At the left vertex we choose a given configuration of derivatives which, after using the commutation relations of the covariant derivatives and integration by parts, can be rewritten as \begin{eqnarray} \frac{ig}{2} f^{abc} V^a \boldsymbol{\nabla}^{\alpha} V^b \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}_{\alpha} V^c &=& \frac{ig }{2} f^{abc} (- 2 V^a \boldsymbol{\nabla}^2 V^b \bar{\boldsymbol{\nabla}}^2 V^c + V^a \boldsymbol{\nabla}^{\alpha} V^b i \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} V^c \nonumber \\ & & - V^a \boldsymbol{\nabla}^{\alpha} V^b \comm{i \boldsymbol{W}_{\alpha}}{V}^c ) \;. \label{SYM_2loop_Lvertex} \end{eqnarray} Once we have fixed this arrangement, at the right vertex we have to consider the six possible permutations. Then, we integrate by parts in each of them so that one specific line is free of any operators.
This implies that this vertex is written as \cite{Grisaru:1984ja} \begin{eqnarray} & &\frac{ig}{2} f^{abc} V^a ( - 2 \bar{\boldsymbol{\nabla}}^2 V^b \boldsymbol{\nabla}^2 V^c + 2 \boldsymbol{\nabla}^{\alpha} V^b \bar{\boldsymbol{\nabla}}^2 \boldsymbol{\nabla}_{\alpha} V^c - 2 \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} V^b \boldsymbol{\nabla}^2 \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} V^c - \boldsymbol{\nabla}^{\alpha} V^b i \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} V^c \nonumber \\ & & + \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} V^b i \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \boldsymbol{\nabla}^{\alpha} V^c - i \boldsymbol{\nabla}^{\alpha \dot{\alpha}} V^b \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \boldsymbol{\nabla}_{\alpha} V^c + i \boldsymbol{\nabla}^{\alpha} V^b \comm{\boldsymbol{W}_{\alpha}}{V}^c - 2 i \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} V^b \comm{\bar{\boldsymbol{W}}_{\dot{\alpha}}}{V}^c ) \;. \nonumber \\ \label{SYM_2loop_Rvertex} \end{eqnarray} It can be shown that most of the different combinations of terms at each vertex either vanish because they do not have enough covariant derivatives, produce pairs of divergent contributions that cancel each other after using the Bianchi identity $\boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} = - \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}}$, or give finite Feynman integrals \cite{Grisaru:1984ja}. The only non-vanishing divergent contribution that we find is given by the first terms of (\ref{SYM_2loop_Lvertex}) and (\ref{SYM_2loop_Rvertex}).
At this stage, we can obtain explicit background gauge fields by expanding each of the covariant propagators to second order in background gauge fields as \begin{eqnarray} \frac{1}{\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}} - \frac{1}{\Box} &=& \frac{i}{\Box} \left( \boldsymbol{W}^{\alpha} \frac{1}{\Box} \boldsymbol{\nabla}_{\alpha} + \bar{\boldsymbol{W}}^{\dot{\alpha}} \frac{1}{\Box} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \right) + \frac{i}{\Box} \left( \boldsymbol{W}^{\alpha} \comm{\boldsymbol{\nabla}_{\alpha}}{\frac{1}{\Box}} + \bar{\boldsymbol{W}}^{\dot{\alpha}} \comm{\bar{\boldsymbol{\nabla}}_{\dot{\alpha}}}{\frac{1}{\Box}} \right) \nonumber \\ & & - \frac{1}{\Box} ( \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha} + \bar{\boldsymbol{W}}^{\dot{\alpha}} \bar{\boldsymbol{\nabla}}_{\dot{\alpha}} ) \frac{1}{\Box} ( \boldsymbol{W}^{\beta} \boldsymbol{\nabla}_{\beta} + \bar{\boldsymbol{W}}^{\dot{\beta}} \bar{\boldsymbol{\nabla}}_{\dot{\beta}} ) \frac{1}{\Box} + \ldots \label{SYM_2loop_expansion_prop} \end{eqnarray} Notice that the first term of the expansion gives no contribution: when combined with a similar term from another line it gives a finite result, and when combined with a space-time background connection $\boldsymbol{\Gamma}_{\alpha \dot{\alpha}}$\footnote{Recall that there is another implicit dependence on the background fields, as we have $\Box = 1/2 \boldsymbol{\nabla}^{\alpha \dot{\alpha}} \boldsymbol{\nabla}_{\alpha \dot{\alpha}} = 1/2 ( \partial^{\alpha \dot{\alpha}} - i \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} )( \partial_{\alpha \dot{\alpha}} - i \boldsymbol{\Gamma}_{\alpha \dot{\alpha}})$, where $\boldsymbol{\Gamma}_{\alpha \dot{\alpha}}$ is the background space-time connection.} it gives a contribution of the form $\boldsymbol{\Gamma}( \boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} + \bar{\boldsymbol{\nabla}}^{\dot{\beta}} \bar{\boldsymbol{W}}_{\dot{\beta}})$, which clearly vanishes by the Bianchi identities \cite{Grisaru:1984ja}.
\begin{figure}[ht] \centerline{\epsfbox{SYM2loop.eps}} \caption{Diagrams corresponding to the expansion of $\hat{\mathop{\mathchoice\sqr64\sqr64\sqr{3.75}4\sqr34}\nolimits}^{-1}$. Thick lines correspond to external background fields.} \label{SYM_2loop} \end{figure} Let us consider first the third term of (\ref{SYM_2loop_expansion_prop}), which generates a diagram of the form of diagram $(a)$ of figure \ref{SYM_2loop}. Notice that all of these contributions carry a common symmetry factor of $\frac{1}{2}$ that we will take into account at the end. As we have two explicit background field strengths, at second order in the background gauge fields we have $\Box^{-1} = \Box_0^{-1}$, with $\Box_0 = (1/2) \partial^{\alpha \dot{\alpha}} \partial_{\alpha \dot{\alpha}}$ the usual d'Alembertian. Therefore, we have an explicit expression of the form \begin{eqnarray} \Gamma^{2 \; loop}_1 &=& - 3 g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \boldsymbol{\nabla}^2_1 P_{12} \right] \left[ \boldsymbol{W}^{\alpha} (z_2) \boldsymbol{\nabla}_{2 \; \alpha} + \bar{\boldsymbol{W}}^{\dot{\alpha}} (z_2) \bar{\boldsymbol{\nabla}}_{2 \; \dot{\alpha}} \right] P_{23} \nonumber \\ & & \times \left[ \boldsymbol{W}^{\beta} (z_3) \boldsymbol{\nabla}_{3 \; \beta} + \bar{\boldsymbol{W}}^{\dot{\beta}} (z_3) \bar{\boldsymbol{\nabla}}_{3 \; \dot{\beta}} \right] \left[ \bar{\boldsymbol{\nabla}}^2_3 P_{34} \right] P_{14} \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{41} \right] \;. \nonumber \\ \end{eqnarray} Due to the anticommuting nature of the covariant derivatives, we have $\bar{\boldsymbol{\nabla}}_{3 \; \dot{\beta}} \bar{\boldsymbol{\nabla}}^2_3 = 0$.
Hence, we have \begin{eqnarray} \Gamma^{2\;loop}_1 &=& - 3 g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \boldsymbol{\nabla}^2_1 P_{12} \right] \bar{\boldsymbol{W}}^{\dot{\alpha}}(z_2) \bar{\boldsymbol{\nabla}}_{2 \; \dot{\alpha}} P_{23} \boldsymbol{W}^{\beta}(z_3) \left[ \boldsymbol{\nabla}_{3 \; \beta} \bar{\boldsymbol{\nabla}}^2_3 P_{34} \right] P_{14} \nonumber \\ & & \times \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{14} \right] \nonumber \\ & & - 3 g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \boldsymbol{\nabla}^2_1 P_{12} \right] \boldsymbol{W}^{\alpha} (z_2) \boldsymbol{\nabla}_{2 \; \alpha} P_{23} \boldsymbol{W}^{\beta} (z_3) \left[ \boldsymbol{\nabla}_{3 \; \beta} \bar{\boldsymbol{\nabla}}^2_3 P_{34} \right] P_{14} \nonumber \\ & & \times \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{41} \right] \;. \label{SYM_2loop_g1_bare} \end{eqnarray} As can be seen, we have divided this expression into two contributions. When dealing with the first one, as $\boldsymbol{W}^{\beta}$ is a covariantly chiral superfield ($\bar{\boldsymbol{\nabla}}_{\dot{\alpha}} \boldsymbol{W}^{\beta} = 0$) and taking into account the basic relation between the covariant derivatives $\anticomm{\bar{\boldsymbol{\nabla}}_{\dot{\alpha}}}{\boldsymbol{\nabla}_{\alpha}} = i \boldsymbol{\nabla}_{\alpha \dot{\alpha}}$, we find \begin{eqnarray} \Gamma^{2\;loop}_{1.1} &=& 3 i g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \left[ \boldsymbol{\nabla}^2_1 P_{12} \right] \bar{\boldsymbol{W}}^{\dot{\alpha}}(z_2) P_{23} \boldsymbol{W}^{\alpha} (z_3) \left[ \boldsymbol{\nabla}_{\alpha \dot{\alpha}}^3 \bar{\boldsymbol{\nabla}}^2_3 P_{34} \right] P_{14} \nonumber \\ & & \times \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{41} \right] \;.
\end{eqnarray} As we need four $\boldsymbol{\nabla}$ and four $\bar{\boldsymbol{\nabla}}$ to get a non-vanishing result, and since at second order in the background fields $\boldsymbol{\nabla}_{\alpha}$ and $\boldsymbol{\nabla}_{\alpha \dot{\alpha}}$ commute in this expression ($\boldsymbol{W}_{\alpha} = - \frac{1}{2} \comm{\bar{\boldsymbol{\nabla}}^{\dot{\alpha}}}{\boldsymbol{\nabla}_{\alpha \dot{\alpha}}}$), it is clear that, integrating by parts, we obtain the non-vanishing contribution \begin{eqnarray} \Gamma^{2\;loop}_{1.1} &=& 3 i g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; P_{12} \bar{\boldsymbol{W}}^{\dot{\alpha}} (z_2) P_{23} \boldsymbol{W}^{\alpha}(z_3) \left[ \boldsymbol{\nabla}_{\alpha \dot{\alpha}}^3 \boldsymbol{\nabla}^2_3 \bar{\boldsymbol{\nabla}}^2_3 P_{34} \right] P_{14} \nonumber \\ & & \times \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{14} \right] + {\cal{O}}(B^3) \;. \end{eqnarray} At this point, replacing the covariant derivatives by the usual ones (as we already have two explicit background field strengths), using the usual $\delta$-function superspace identity and the identifications $x_1 = u$, $x_2 = x$, $x_3 =y$, $x_4 = v$, we have \begin{eqnarray} \Gamma^{2\;loop}_{1.1} &=& 3 i g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \bar{\boldsymbol{W}}^{\dot{\alpha}}(x, \theta) \boldsymbol{W}^{\alpha}(y, \theta) \Delta_{xy} \int d^4 u d^4 v \; \Delta_{xu} ( \partial_{\alpha \dot{\alpha}}^y \Delta_{yv}) \Delta_{uv}^2 \nonumber \\ & & + {\cal{O}} (B^3) \;.
\end{eqnarray} To obtain the second contribution to $\Gamma^{2\;loop}_1$, we consider the covariant derivatives that are acting over $P_{12}$ and, integrating by parts, make them act over $P_{34}$ \begin{eqnarray} \Gamma^{2\;loop}_{1.2} &=& - 3 g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; P_{12} \boldsymbol{W}^{\alpha} (z_2) \boldsymbol{\nabla}_{2 \; \alpha} P_{23} \boldsymbol{W}^{\beta} (z_3) \left[ \boldsymbol{\nabla}_{3 \; \beta} \bar{\boldsymbol{\nabla}}^2_3 \boldsymbol{\nabla}^2_3 P_{34} \right] P_{41} \nonumber \\ & & \times \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{41} \right] \;. \end{eqnarray} Now, with the relation $\comm{\boldsymbol{\nabla}_{\alpha}}{\bar{\boldsymbol{\nabla}}^2} = - \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} + i \boldsymbol{W}_{\alpha}$, we conclude that this contribution vanishes, as we do not have enough covariant derivatives. We now consider the contributions that come from the second term of (\ref{SYM_2loop_expansion_prop}).
In this case we study the commutator between $\Box^{-1}$ and the covariant derivatives, finding \begin{eqnarray} \comm{\boldsymbol{\nabla}_{\alpha}}{\frac{1}{\Box}} &=& - \frac{1}{\Box} \comm{\boldsymbol{\nabla}_{\alpha}}{\Box} \frac{1}{\Box} \nonumber \\ &=& - \frac{1}{\Box} \left( \frac{1}{2} \bar{\boldsymbol{W}}^{\dot{\alpha}} \boldsymbol{\nabla}_{\alpha \dot{\alpha}} + \frac{1}{2} \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \bar{\boldsymbol{W}}^{\dot{\alpha}} \right) \frac{1}{\Box} \nonumber \\ &=& - \frac{1}{\Box} \left( \bar{\boldsymbol{W}}^{\dot{\alpha}} \boldsymbol{\nabla}_{\alpha \dot{\alpha}} + \frac{1}{2} ( \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \bar{\boldsymbol{W}}^{\dot{\alpha}}) \right) \frac{1}{\Box} \;, \end{eqnarray} and \begin{eqnarray} \comm{\bar{\boldsymbol{\nabla}}_{\dot{\alpha}}}{\frac{1}{\Box}} &=& - \frac{1}{\Box} \left( \boldsymbol{W}^{\alpha} \boldsymbol{\nabla}_{\alpha \dot{\alpha}} + \frac{1}{2} ( \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \boldsymbol{W}^{\alpha}) \right) \frac{1}{\Box} \;. \end{eqnarray} Hence, for this case we also find diagrams of the form of diagram $(a)$ of figure \ref{SYM_2loop}. As we have two contributions that are identical except for having $\boldsymbol{W}_{\alpha}$ and $\bar{\boldsymbol{W}}_{\dot{\alpha}}$ interchanged, we detail the calculation of only one of them. \begin{eqnarray} \Gamma^{2\;loop}_{2.1} &=& 3 i g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \boldsymbol{\nabla}^2_1 P_{12} \right] \boldsymbol{W}^{\alpha} (z_2) P_{23} \nonumber \\ & & \times \left[ \bar{\boldsymbol{W}}^{\dot{\alpha}}(z_3) \boldsymbol{\nabla}_{\alpha \dot{\alpha}}^3 + \frac{1}{2} ( \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \bar{\boldsymbol{W}}^{\dot{\alpha}} (z_3)) \right] \left[ \bar{\boldsymbol{\nabla}}^2_4 P_{34} \right] P_{14} \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{14} \right] \;.
\end{eqnarray} As when obtaining $\Gamma^{2\;loop}_{1.2}$, we can integrate by parts the covariant derivatives acting over $P_{12}$ to find \begin{eqnarray} \Gamma^{2\;loop}_{2.1} &=& 3 i g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; P_{12} \boldsymbol{W}^{\alpha}(z_2) P_{23} \left[ \bar{\boldsymbol{W}}^{\dot{\alpha}}(z_3) \boldsymbol{\nabla}_{\alpha \dot{\alpha}}^3 + \frac{1}{2} ( \boldsymbol{\nabla}_{\alpha \dot{\alpha}} \bar{\boldsymbol{W}}^{\dot{\alpha}} (z_3)) \right] \nonumber \\ & & \times \left[ \bar{\boldsymbol{\nabla}}^2_4 \boldsymbol{\nabla}^2_4 P_{34} \right] P_{14} \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{14} \right] \;, \end{eqnarray} an expression that can be directly simplified by applying the $\delta$-function identity (\ref{cov_SUSY_delta_id}) obtained in appendix \ref{ap_BFM}. With the same identifications as in the $\Gamma^{2\;loop}_1$ contributions, we find the total $\Gamma^{2\;loop}_2$ contribution to be \begin{eqnarray} \Gamma^{2\; loop}_2 &=& - 3 i g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \left[ \boldsymbol{W}^{\alpha}(x,\theta) \bar{\boldsymbol{W}}^{\dot{\alpha}}(y,\theta) + \bar{\boldsymbol{W}}^{\dot{\alpha}}(x,\theta) \boldsymbol{W}^{\alpha}(y,\theta) \right] \nonumber \\ & & \times \Delta_{xy} \int d^4 u d^4 v \; \Delta_{xu} \partial_{\alpha \dot{\alpha}}^y \Delta_{yv} \Delta^2_{uv} + {\cal{O}} (B^3) \nonumber \\ & & - \frac{3i}{2} g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \left[ \boldsymbol{W}^{\alpha}(x,\theta) \partial_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}}(y,\theta) + \bar{\boldsymbol{W}}^{\dot{\alpha}} (x, \theta) \partial_{\alpha \dot{\alpha}}^y \boldsymbol{W}^{\alpha} (y, \theta) \right] \nonumber \\ & & \times \Delta_{xy} \int d^4 u d^4 v \; \Delta_{xu} \Delta_{yv} \Delta^2_{uv} + {\cal{O}} (B^3) \;.
\end{eqnarray} Hence, the sum of the contributions $\Gamma^{2 \; loop}_1$ and $\Gamma^{2 \;loop}_2$ (written in terms of the $I^0$ integral expression defined in section \ref{IR_divergences}) is \begin{eqnarray} \Gamma^{2 \;loop}_1 + \Gamma^{2 \; loop}_2 &=& 3 i g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{W}^{\alpha}(x, \theta) \bar{\boldsymbol{W}}^{\dot{\alpha}}(y, \theta) \left[ \Delta \partial_{\alpha \dot{\alpha}} I^0 \right] (x-y) \nonumber \\ & & - \frac{3 i g^2 C_A^2}{2} tr \int d^4 x d^4 y d^4 \theta \; \left[ \boldsymbol{W}^{\alpha}(x,\theta) \partial_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}}(y,\theta) + \bar{\boldsymbol{W}}^{\dot{\alpha}}(x,\theta) \partial_{\alpha \dot{\alpha}}^y \boldsymbol{W}^{\alpha}(y,\theta) \right] \nonumber \\ & & \times \left[ \Delta I^0 \right] (x-y) +{\cal{O}}(B^3) \;. \end{eqnarray} Having finished the study of the expansion of (\ref{SYM_2loop_expansion_prop}), we now proceed to consider diagrams with background space-time connections $\boldsymbol{\Gamma}$.
We begin by considering the expansion of the inverse of the $\Box$ operator to second order in $\boldsymbol{\Gamma}$, which is obtained as \begin{eqnarray} \frac{1}{\Box} - \frac{1}{\Box_0} &=& \frac{i}{\Box_0} \left[ \frac{1}{2} ( \partial^{\alpha \dot{\alpha}} \boldsymbol{\Gamma}_{\alpha \dot{\alpha}}) + \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} \partial_{\alpha \dot{\alpha}}\right] \frac{1}{\Box_0} + \frac{1}{2} \frac{1}{\Box_0} \left[ \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} \boldsymbol{\Gamma}_{\alpha \dot{\alpha}} \right] \frac{1}{\Box_0} \nonumber \\ & & - \frac{1}{\Box_0} \left[ \frac{1}{2} ( \partial^{\alpha \dot{\alpha}} \boldsymbol{\Gamma}_{\alpha \dot{\alpha}}) + \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} \partial_{\alpha \dot{\alpha}} \right] \frac{1}{\Box_0} \left[ \frac{1}{2} ( \partial^{\beta \dot{\beta}} \boldsymbol{\Gamma}_{\beta \dot{\beta}}) + \boldsymbol{\Gamma}^{\beta \dot{\beta}} \partial_{\beta \dot{\beta}} \right] \frac{1}{\Box_0} + \ldots \nonumber \\ \label{SYM_2loop_spacetime_exp} \end{eqnarray} Starting with the third term of (\ref{SYM_2loop_spacetime_exp}), we obtain a contribution of the form of diagram $(a)$ of figure \ref{SYM_2loop}.
Thus, this is written as \begin{eqnarray} \Gamma^{2 \;loop}_3 &=& 3 g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \boldsymbol{\nabla}^2_1 P_{12} \right] \left[ \boldsymbol{\Gamma}^{\alpha \dot{\alpha}}(z_2) \partial^2_{\alpha \dot{\alpha}} + \frac{1}{2} ( \partial^{\alpha \dot{\alpha}} \boldsymbol{\Gamma}_{\alpha \dot{\alpha}})(z_2) \right] P_{23} \nonumber \\ & & \times \left[ \boldsymbol{\Gamma}^{\beta \dot{\beta}}(z_3) \partial^3_{\beta \dot{\beta}} + \frac{1}{2} ( \partial^{\beta \dot{\beta}} \boldsymbol{\Gamma}_{\beta \dot{\beta}})(z_3) \right] \left[ \bar{\boldsymbol{\nabla}}^2_3 P_{34} \right] P_{14} \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{14} \right] \nonumber \\ &=& 3 g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; P_{12} \left[ \boldsymbol{\Gamma}^{\alpha \dot{\alpha}}(z_2) \partial^2_{\alpha \dot{\alpha}} + \frac{1}{2} ( \partial^{\alpha \dot{\alpha}} \boldsymbol{\Gamma}_{\alpha \dot{\alpha}})(z_2) \right] P_{23} \nonumber \\ & & \times \left[ \boldsymbol{\Gamma}^{\beta \dot{\beta}}(z_3) \partial^3_{\beta \dot{\beta}} + \frac{1}{2} ( \partial^{\beta \dot{\beta}} \boldsymbol{\Gamma}_{\beta \dot{\beta}})(z_3) \right] \left[ \bar{\boldsymbol{\nabla}}^2_3 \boldsymbol{\nabla}^2_3 P_{34} \right] P_{14} \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{14} \right] \;, \end{eqnarray} where in the last step we have applied the same procedure as with $\Gamma^{2\;loop}_{1.2}$ and $\Gamma^{2 \; loop}_2$, integrating by parts the covariant derivatives that are acting over $P_{12}$.
Hence, after the usual identifications and some simple algebra, we find for this contribution \begin{eqnarray} \Gamma^{2 \; loop}_3 &=& - \frac{3}{4} g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} (x,\theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}}(y, \theta) \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x \left[ \Delta I^0 \right] (x-y) \nonumber \\ & & + 3 g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}}(x, \theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}}(y, \theta) \partial_{\alpha \dot{\alpha}}^x \left[ \Delta \partial_{\beta \dot{\beta}}^x I^0 \right] (x-y) \nonumber \\ & & -3 g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}}(x, \theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}}(y, \theta) \left[ \Delta \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x I^0 \right](x-y) + {\cal{O}}(B^3) \;. \nonumber \\ \end{eqnarray} Let us now consider the first term of the $\Box^{-1}$ expansion (\ref{SYM_2loop_spacetime_exp}). At second order in the background fields, the relevant contribution is obtained when we expand two different lines, yielding a diagram of the form of diagram $(b)$ of figure \ref{SYM_2loop}. Explicitly, we have \begin{eqnarray} \Gamma^{2 \; loop}_4 &=& \frac{3}{2} g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \boldsymbol{\nabla}^2_1 P_{12} \right] \left[ \boldsymbol{\Gamma}^{\alpha \dot{\alpha}}(z_2) \partial^2_{\alpha \dot{\alpha}} + \frac{1}{2} ( \partial^{\alpha \dot{\alpha}} \boldsymbol{\Gamma}_{\alpha \dot{\alpha}})(z_2) \right] \left[ \bar{\boldsymbol{\nabla}}^2_3 P_{23} \right] \nonumber \\ & & \times \left[ \boldsymbol{\nabla}^2_3 P_{34} \right] \left[ \boldsymbol{\Gamma}^{\beta \dot{\beta}}(z_4) \partial_{\beta \dot{\beta}}^4 + \frac{1}{2} (\partial^{\beta \dot{\beta}} \boldsymbol{\Gamma}_{\beta \dot{\beta}})(z_4) \right] \left[ \bar{\boldsymbol{\nabla}}^2_4 P_{41} \right] P_{13} \;.
\end{eqnarray} In this expression we have only four $\boldsymbol{\nabla}$ and four $\bar{\boldsymbol{\nabla}}$, so it is clear how to obtain the non-vanishing contribution: we integrate by parts the covariant derivatives to move four of them (for example, those that are acting over $P_{12}$ and $P_{34}$), and make them act on the other four ($ \bar{\boldsymbol{\nabla}}^2_3 P_{23}$ and $\bar{\boldsymbol{\nabla}}^2_4 P_{41}$ in our example). After some simple algebra, we find for this contribution \begin{eqnarray} \Gamma^{2 \; loop}_4 &=& \frac{3}{2} g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}}(x,\theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}}(y,\theta) H[1, \partial_{\alpha \dot{\alpha}} \; ; \; 1, \partial_{\beta \dot{\beta}} ] \nonumber \\ & & + \frac{3}{8} g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} (x, \theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}}(y, \theta) \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x H[ 1,1 \; ; \; 1,1] \;, \end{eqnarray} where we have used the $H$ integrals defined in (\ref{H_definition}). Finally, we consider the second term of the $\Box^{-1}$ expansion. It is clear that this generates a contribution of the form of diagram $(c)$ of figure \ref{SYM_2loop}, which is written as \begin{eqnarray} \Gamma^{2 \; loop}_5 &=& - \frac{3}{2} g^2 C_A^2 tr \int d^8 z_1 d^8 z_2 d^8 z_3 d^8 z_4 \; \left[ \boldsymbol{\nabla}^2_1 P_{12} \right] \boldsymbol{\Gamma}^{\alpha \dot{\alpha}}(z_2) \delta^8_{23} \boldsymbol{\Gamma}_{\alpha \dot{\alpha}}(z_3) \left[ \bar{\boldsymbol{\nabla}}^2_4 P_{34} \right] \nonumber \\ & & \times P_{14} \left[ \boldsymbol{\nabla}^2_4 \bar{\boldsymbol{\nabla}}^2_4 P_{14} \right] \;. \end{eqnarray} After freeing $P_{12}$ of covariant derivatives and making them act over $P_{34}$, the expression can be treated with the usual steps ($\delta$-function identity and point identifications).
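The $\delta$-function identity in question is, in its flat-superspace form (we quote it as a reminder of the conventions of \cite{Gates:1983nr}; the covariant version is (\ref{cov_SUSY_delta_id}) of appendix \ref{ap_BFM}) \begin{eqnarray} \delta^4 (\theta_1 - \theta_2) \; D^2 \bar{D}^2 \; \delta^4 (\theta_1 - \theta_2) &=& \delta^4 (\theta_1 - \theta_2) \;, \end{eqnarray} while any string with fewer than two $D$'s and two $\bar{D}$'s sandwiched between the two $\delta$-functions gives zero. This is what allows us to perform all but one of the $\theta$-integrations and obtain an expression local in $\theta$.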
Hence, we find the bare expression to be \begin{eqnarray} \Gamma^{2 \; loop}_{5} &=& \frac{3}{4} g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}}(x, \theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}}(y, \theta) (2 C_{\alpha \beta} C_{\dot{\alpha} \dot{\beta}}) \left[ \Box( \Delta I^0) - \partial^{\; \alpha \dot{\alpha}}_x ( \Delta \partial_{\alpha \dot{\alpha}}^x I^0 ) \right. \nonumber \\ & & \left. + \Delta \Box I^0 \right](x-y) + {\cal{O}}(B^3) \;. \end{eqnarray} \subsubsection{Ghost contribution} \begin{figure}[ht] \centerline{\epsfbox{SYM2loop_ghosts.eps}} \caption{Two-loop ghost contribution to the background effective action} \label{SYM_2loop_ghosts} \end{figure} As the Nielsen-Kallosh ghosts only interact with the background gauge field and enter quadratically in the action, it is clear that they only contribute at the one-loop level. Hence, at two loops we have contributions from the $c$ and $c^{\prime}$ ghosts, which are proportional to the difference of the two graphs shown in figure \ref{SYM_2loop_ghosts}. Expanding the propagators as for the quantum gauge field, we easily find that the terms with two $\boldsymbol{\Gamma}$ cancel \cite{Grisaru:1984ja}. For the $\boldsymbol{W}$-terms, we find that the only divergent contributions come from factors acting on the same line (either $\boldsymbol{W} \boldsymbol{\nabla} \Box^{-1} \bar{\boldsymbol{\nabla}}$ or $\boldsymbol{W} \comm{\boldsymbol{\nabla}}{\Box^{-1}}$). However, all of these terms can be listed and shown either to cancel each other or to produce a combination that vanishes once we impose the Bianchi identities \cite{Grisaru:1984ja}. Hence, we conclude that the two-loop ghost contribution vanishes \cite{Abbott:1984pz,Grisaru:1984ja}.
\subsubsection{Total renormalized contribution} As we have seen, all the divergent contributions have been written in terms of the integral expression we have defined as $I^0$ and two of the $H$ overlapping integrals listed in section \ref{overlap_integrals}. Hence, we only have to directly use the renormalized results previously found and replace the bare expressions with the renormalized ones. Thus, we find for the first two contributions \begin{eqnarray} \sum^{2}_{i=1}\Gamma^{2 \; loop}_i |_R &=& - \frac{3 i g^2 C_A^2}{2} tr \int d^4 x d^4 y d^4 \theta \; \left[ \boldsymbol{W}^{\alpha}(x,\theta) \partial_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}}(y,\theta) + \bar{\boldsymbol{W}}^{\dot{\alpha}}(x,\theta) \partial_{\alpha \dot{\alpha}}^y \boldsymbol{W}^{\alpha}(y,\theta) \right] \nonumber \\ & & \times \left[ \Delta I^0 \right]_R (x-y) \nonumber \\ & & + \frac{3 i g^2 C_A^2}{16 (4 \pi^2)^3} tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha} (x, \theta) \bar{\boldsymbol{W}}^{\dot{\alpha}}(y, \theta) \partial_{\alpha \dot{\alpha}}^x \frac{\ln (x-y)^2 M^2}{(x-y)^2} + {\cal{O}}(B^3) \;. \nonumber \\ \label{SYM_G1_G2} \end{eqnarray} Applying the Bianchi identities, we find a useful relation that allows us to simplify this result and express it in an explicitly gauge-invariant form. Let us consider an expression of the form \begin{eqnarray} \int d^4 x d^4 y d^4 \theta \; \boldsymbol{W}^{\alpha}(x,\theta) ( \partial_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}}(y, \theta)) f(x-y) \;. \label{SYM_2loop_bianchi} \end{eqnarray} Then, at second order in the background gauge fields, we can replace $\boldsymbol{W}^{\alpha}_x \partial_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}}_y$ with $\boldsymbol{W}^{\alpha}_x \boldsymbol{\nabla}_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}}_y $.
If we write the space-time covariant derivative as the anti-commutator of the spinorial covariant derivatives, taking into account that $\bar{\boldsymbol{W}}^{\dot{\alpha}}$ is a covariantly antichiral superfield and using the Bianchi identity $\boldsymbol{\nabla}^{\alpha} \boldsymbol{W}_{\alpha} + \bar{\boldsymbol{\nabla}}^{\dot{\alpha}} \bar{\boldsymbol{W}}_{\dot{\alpha}} = 0$, we find that the expression we are considering can be written as \begin{eqnarray} \boldsymbol{W}^{\alpha}_x \partial_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}}_y &=& i \boldsymbol{W}^{\alpha}_x \boldsymbol{\nabla}_{\alpha} \boldsymbol{\nabla}_{\beta} \boldsymbol{W}_{y}^{\beta} + {\cal{O}}(B^3) \nonumber \\ &=& i \boldsymbol{W}^{\alpha}_x C_{\alpha \beta} \boldsymbol{\nabla}^2 \boldsymbol{W}_y^{\beta} + {\cal{O}}(B^3) \nonumber \\ &=& i \boldsymbol{W}^{\alpha}_x \boldsymbol{\nabla}^2 \boldsymbol{W}_{y \; \alpha} + {\cal{O}}(B^3) \nonumber \\ &=& i \boldsymbol{W}^{\alpha}_x D^2 \boldsymbol{W}_{y \; \alpha} + {\cal{O}}(B^3) \;. \end{eqnarray} Hence, the integral expression (\ref{SYM_2loop_bianchi}) can be written as \begin{eqnarray} \int d^4 x d^4 y d^4 \theta \; \boldsymbol{W}^{\alpha}_x ( \partial_{\alpha \dot{\alpha}}^y \bar{\boldsymbol{W}}^{\dot{\alpha}})_y f(x-y) &=& i \int d^4 x d^4 y d^4 \theta \; \boldsymbol{W}^{\alpha}_x ( D^2 \boldsymbol{W}_{\alpha})_y f(x-y) + {\cal{O}}(B^3) \nonumber \\ &=& i \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha}_x ( \bar{D}^2 D^2 \boldsymbol{W}_{\alpha})_y f(x-y) + {\cal{O}}(B^3) \nonumber \\ &=& i \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha}_x \boldsymbol{W}_{y \; \alpha} \Box f(x-y) + {\cal{O}}(B^3) \;.
\nonumber \\ \label{SYM_int_bianchi_id} \end{eqnarray} Therefore, we have a gauge-invariant expression for the $\boldsymbol{W}$-contributions of the form \begin{eqnarray} \sum^{2}_{i=1}\Gamma^{2 \; loop}_i |_R &=& 3 g^2 C_A^2 tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha} (x, \theta) \boldsymbol{W}_{\alpha} (y,\theta) \Box \left[ \Delta I^0 \right]_R (x-y) \nonumber \\ & & - \frac{3 g^2 C_A^2}{16(4 \pi^2)^3} tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha}(x,\theta) \boldsymbol{W}_{\alpha}(y, \theta) \Box \frac{ \ln (x-y)^2 M^2}{(x-y)^2} + {\cal{O}}(B^3) \;. \nonumber \\ \end{eqnarray} The $\boldsymbol{\Gamma}$ contributions are also added up and renormalized as \begin{eqnarray} \sum^{5}_{i=3} \Gamma^{2 \; loop}_i |_R &=& - 3 g^2 C_A^2 tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} (x, \theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}}(y,\theta) \left( \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x - (2 C_{\alpha \beta} C_{\dot{\alpha} \dot{\beta}}) \Box \right) \nonumber \\ & & \times \left[ \frac{1}{4} [ \Delta I^0 ]_R (x-y) - \frac{1}{32(4 \pi^2)^3} \frac{\ln (x-y)^2 M^2}{(x-y)^2} \right] + {\cal{O}}(B^3) \;. \label{SYM_G3_G4_G5} \end{eqnarray} As this expression is transverse, we can rewrite it in terms of the background field strength.
To do so, we use the following property: let $f$ be a generic function; then, to second order in the background gauge fields we have \begin{eqnarray} & & tr \int d^4 x d^4 y d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} (x, \theta) \boldsymbol{\Gamma}^{\beta \dot{\beta}} (y, \theta) \left( \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x - 2 C_{\alpha \beta} C_{\dot{\alpha} \dot{\beta}} \Box \right) f(x-y) \nonumber \\ & & = - 3 tr \int d^4 x d^4 y d^4 \theta \; \left[ D^{\alpha} B(x, \theta) \right] \left[ \bar{D}^2 D_{\alpha} B(y, \theta) \right] \Box f(x-y) + {\cal{O}}(B^3) \nonumber \\ & & = 3 tr \int d^4 x d^4 y d^2 \theta \; \boldsymbol{W}^{\alpha} (x, \theta) \boldsymbol{W}_{\alpha} (y, \theta) \Box f(x-y) + {\cal{O}}(B^3) \;. \label{SYM_2loop_st_conn_W} \end{eqnarray} To prove this relation we have only to write the connection in terms of the background gauge field as $\boldsymbol{\Gamma}_{\alpha \dot{\alpha}} = \left(\bar{D}_{\dot{\alpha}} D_\alpha - (i/2) \partial_{\alpha \dot{\alpha}} \right) B + {\cal{O}}(B^2)$. As a consequence of having a transverse structure, the integral is then written as \begin{eqnarray} & & tr \int d^4 x d^4 y d^4 \theta \; \left[ \bar{D}^{\dot{\alpha}} D^{\alpha} B(x,\theta) \right] \left[ \bar{D}^{\dot{\beta}} D^{\beta} B(y,\theta) \right] \left( \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x - 2 C_{\alpha \beta} C_{\dot{\alpha} \dot{\beta}} \Box \right) \Box f(x-y) \nonumber \\ &=& tr \int d^4 x d^4 y d^4 \theta \; \left[ D^{\alpha} B(x,\theta) \right] \left[ \bar{D}^2 D^{\beta} B(y,\theta) \right] C^{\dot{\beta} \dot{\alpha}} \left( \partial_{\alpha \dot{\alpha}}^x \partial_{\beta \dot{\beta}}^x - 2 C_{\alpha \beta} C_{\dot{\alpha} \dot{\beta}} \Box \right) \Box f(x-y) \;, \nonumber \\ \end{eqnarray} where we have integrated by parts the superspace derivative $D^{\alpha}$.
Then, as $C^{\dot{\beta} \dot{\alpha}} \partial_{\alpha \dot{\alpha}} \partial_{\beta \dot{\beta}} = \delta_{\alpha}^{~\beta} \Box$, $C^{\dot{\beta} \dot{\alpha}} C_{\alpha \beta} C_{\dot{\alpha} \dot{\beta}} = - 2 C_{\alpha \beta}$ and $\boldsymbol{W}_{\alpha} = (1/i) \bar{D}^2 D_{\alpha} B + {\cal{O}}(B^2)$, we straightforwardly obtain the relation (\ref{SYM_2loop_st_conn_W}). With this, we can express the space-time connection contribution (which is transverse, as required) in terms of the background field strength. Adding up all the results and taking into account the symmetry factor, we find the total two-loop renormalized contribution to the background gauge field self-energy to be \begin{eqnarray} \frac{1}{2}\sum^{5}_{i=1} \Gamma^{2 \; loop}_i &=& tr \int d^4 x d^4 y d^2 \theta \boldsymbol{W}^{\alpha} (x, \theta) \boldsymbol{W}_{\alpha} (y, \theta) \Gamma^{(2) \; 2\;loop}_B(x-y) \;, \end{eqnarray} with \begin{eqnarray} \Gamma^{(2) \; 2 \; loop}_B(x) &=& \frac{3 g^2 C_A^2}{64(4 \pi^2)^3} \Box \frac{\frac{1}{4} \ln^2 x^2 M^2_{IR} + \frac{1}{2} \ln x^2 M^2_{IR} ( 1 - \ln x^2 M^2) + \ln x^2 M^2}{x^2} + \ldots \;, \nonumber \\ \label{SYM_2loop_eff_action_background} \end{eqnarray} where $M_{IR}$ corresponds to the IR divergence and $M$ to the UV one. As expected, we have found a result that contains both types of divergences. However, in contrast to the case of dimensional regularization, we are able to clearly distinguish them. \subsection{RG equation} With our conventions, remembering that in the background field method the background field and coupling constant renormalizations are related ($Z_g \sqrt{Z_B} = 1$), the anomalous dimension of the background field cancels.
Hence, the RG equation for this field is\footnote{Note also that in the RG equation we have not included a term $\gamma_{IR} = M_{IR} \partial/ \partial M_{IR}$ because in the renormalization we have required both the IR and UV scales to be independent, which implies that $\gamma_{IR} = M / M_{IR} \partial M_{IR} / \partial M = 0$.} \begin{eqnarray} \left. \left[ M \frac{\partial}{\partial M} + \beta(g) \frac{\partial}{\partial g} + \gamma_{\xi} (g) \frac{\partial}{\partial \xi} \right] \Gamma^{(2)}_B \right|_{\xi = 0} = 0 \;, \end{eqnarray} with $\Gamma_B^{(2)}$ containing the free action, the one- and two-loop contributions ((\ref{SYM_1loop_eff_action_background}) and (\ref{SYM_2loop_eff_action_background}), respectively) and the linear expansion in $\xi$ of the effective action in a generic gauge (\ref{SYM_1loop_eff_action_gen_gauge}): \begin{eqnarray} \Gamma_B^{(2)} (x) &=& \frac{1}{2g^2} \delta(x) + \frac{3 C_A}{16(4 \pi^2)^2} \Box \frac{\ln x^2 M^2}{x^2} - \frac{\xi C_A}{16 (4 \pi^2)^2} \Box \frac{\ln x^2 M_{IR}^2}{x^2} \nonumber \\ & & + \frac{3 g^2 C_A^2}{256 (4 \pi^2)^3} \Box \frac{\ln^2 x^2 M^2_{IR} + 2 \ln x^2 M^2_{IR} ( 1 - \ln x^2 M^2) + 4 \ln x^2 M^2}{x^2} + \ldots \nonumber \\ \label{SYM_2loop_eff} \end{eqnarray} As in all the previously considered models, the gauge-running term $\gamma_{\xi}$ is evaluated in appendix \ref{ap_Gauge} with the one-loop RG equation for the quantum gauge field self-energy. We find there that this function has an expansion at lowest order in $g^2$ of the form \begin{eqnarray} \gamma_\xi &=& - \frac{3 C_A}{4 (4 \pi^2)}g^2 + \ldots \end{eqnarray} Hence, inserting (\ref{SYM_2loop_eff}) in the RG equation and using the result obtained for $\gamma_{\xi}$, we find the following values for the one- and two-loop coefficients of the expansion of the beta function \begin{eqnarray} \beta(g) &=& - \frac{3}{4} \left( \frac{C_A}{8 \pi^2} \right)g^3 - \frac{3}{4} \left(\frac{C_A}{8 \pi^2} \right)^2 g^5 \;.
\end{eqnarray} Let us remark again that, as our supersymmetric conventions differ by a factor of $\sqrt{2}$ in the coupling constant with respect to the usual ones \cite{Gates:1983nr} ($g = \sqrt{2} g_{SYM}$), this result matches the standard beta function expansion $\beta(g_{SYM}) = - (3/2) [ C_A/ (8\pi^2)] g^3_{SYM} - (3/2) [ C_A / (8 \pi^2) ]^2 g^5_{SYM} + {\cal{O}}(g^7_{SYM})$. \subsubsection{Discussion of the result} There has been some controversy about the origin (UV or IR) of the higher-order perturbative contributions to the beta function in supersymmetric gauge theories. The ``exact beta function'' of $N=1$ SYM was discovered by Novikov, Shifman, Vainshtein and Zakharov (NSVZ) in \cite{Novikov:1983uc} (although the expression was first derived in \cite{Jones:1983ip}), and it is of the form (in this discussion, we will use the usual coupling constant, thus implicitly we have $g \equiv g_{SYM}$) \begin{eqnarray} \beta(g) &=& - \frac{3 C_A}{16 \pi^2} \frac{g^3}{1 - \frac{C_A g^2}{8 \pi^2}} \;. \label{beta_NSVZ} \end{eqnarray} In the NSVZ derivation of this function, instanton analysis was used, showing that the higher-loop corrections to the one-loop result were due to an imbalance in the number of fermionic and bosonic zero modes. Thus, these corrections had an IR origin. Afterwards, the two-loop coefficient of the expansion of the beta function was obtained using dimensional reduction \cite{Abbott:1984pz,Grisaru:1985tc,Grisaru:1984ja}, matching (\ref{beta_NSVZ}). However, as we have pointed out previously, IR divergences appear in this calculation, and dimensional reduction regularizes both types of divergences with the same parameter \cite{Abbott:1984pz}.
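As a check, note that expanding (\ref{beta_NSVZ}) as a geometric series in $C_A g^2 / (8 \pi^2)$, \begin{eqnarray} \beta(g) &=& - \frac{3 C_A}{16 \pi^2} g^3 \left[ 1 + \frac{C_A g^2}{8 \pi^2} + \left( \frac{C_A g^2}{8 \pi^2} \right)^2 + \ldots \right] = - \frac{3}{2} \left( \frac{C_A}{8 \pi^2} \right) g^3 - \frac{3}{2} \left( \frac{C_A}{8 \pi^2} \right)^2 g^5 + {\cal{O}}(g^7) \;, \end{eqnarray} one recovers precisely the one- and two-loop coefficients obtained above (recall that $g \equiv g_{SYM}$ in this discussion).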
In \cite{Grisaru:1985tc} this entanglement of the two types of divergences is solved by subtracting the IR divergences with a $\tilde{R}$ operation \cite{Chetyrkin:nn}, finding that the two-loop correction to the beta function has its origin in a specific operator of dimensional reduction that is not available in a renormalization procedure that stays in four dimensions\footnote{This operator is written in terms of the background space-time connection $\boldsymbol{\Gamma}^{\alpha \dot{\alpha}}$ and the Kronecker delta functions in four ($\delta_{\alpha \dot{\alpha}}^{~\beta \dot{\beta}}$) and $n$ ($\hat{\delta}_{\alpha \dot{\alpha}}^{~\beta \dot{\beta}}$) dimensions as $tr \int d^4 x d^4 \theta \; \boldsymbol{\Gamma}^{\alpha \dot{\alpha}} \boldsymbol{\Gamma}_{\beta \dot{\beta}} ( \delta_{\alpha \dot{\alpha}}^{~\beta \dot{\beta}} - \hat{\delta}_{\alpha \dot{\alpha}}^{~\beta \dot{\beta}})$, which, by means of a Bianchi identity, can be put in terms of the classical action as $- \varepsilon tr \int d^4 x d^2 \theta \; \boldsymbol{W}^{\alpha} \boldsymbol{W}_{\alpha}$.}, which seems to imply, as is pointed out in \cite{Abbott:1984pz}, that no divergence should occur beyond one loop. This situation is clarified if we distinguish between the running of the physical coupling constant (the constant that is used in perturbative calculations when obtaining the 1PI effective action) and the running of the coupling constant in Wilson's effective action approach\footnote{In this method, we initially consider a theory defined with a cutoff scale $M$, and study how the different terms of the lagrangian flow when we integrate over momentum slices down to another scale $M^{\prime}$. This flow implies that the coupling constants satisfy RG equations of the form $M \partial \lambda / \partial M = \beta (\lambda)$.}.
In the latter case, using different arguments, such as the holomorphic dependence on the complexified coupling constant or the fact that the relevant domain of the non-local operators responsible for the higher-loop corrections consists of virtual momenta of the order of the external momenta (and therefore excluded by definition from the Wilson action) \cite{Novikov:1985rd,Shifman:1986zi}, we can conclude that the flow of the coupling constant is exhausted at one loop. However, in the case of the physical coupling constant, we have higher-order contributions that appear when we take the expectation value (in an external field) of the operators in the Wilson action, the relevant IR pole being related to an anomaly \cite{Shifman:1986zi}. This IR origin of the higher-order corrections is questioned in \cite{Arkani-Hamed:1997ut,Arkani-Hamed:1997mj}. In these works, in a purely wilsonian framework, an NSVZ flow is obtained. The key idea is to distinguish between the flow of a ``holomorphic'' coupling constant and that of a ``canonical'' coupling constant. The ``holomorphic'' coupling constant corresponds to a lagrangian which is normalized at a scale $M$ as \begin{eqnarray} {\cal{L}}_h (M) &=& \frac{1}{g^2_h} tr \int d^4 x d^2 \theta W^{\alpha} (V_h) W_{\alpha} (V_h) +~h.c. \end{eqnarray} with $1/g^2_h = 1/g^2 + i \theta / (8 \pi^2)$ being the complexified coupling constant, whereas the ``canonical'' coupling constant corresponds to a lagrangian of the form \begin{eqnarray} {\cal{L}}_c (M) &=& ( \frac{1}{g^2_c} + i \frac{\theta}{8 \pi^2}) tr \int d^4 x d^2 \theta W^{\alpha} (g_c V_c) W_{\alpha} (g_c V_c) +~h.c. \end{eqnarray} Although the $g_h$ running can be shown to be one-loop, when we consider the running of the coupling constant $g_c$ we find that it obeys an NSVZ flow. The reason is that in order to maintain ``canonical'' normalization in the lagrangian, we are forced to perform a rescaling of the fields, this rescaling being anomalous and the origin of the higher-order terms.
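A quick symbolic check of this mechanism (a sketch of our own, not the derivation of \cite{Arkani-Hamed:1997ut}): assume that the anomalous rescaling relates the couplings as $1/g_h^2 = 1/g_c^2 + \frac{C_A}{8\pi^2}\ln g_c^2$ (signs chosen to match the conventions above), and that $1/g_h^2$ runs strictly at one loop, $\frac{d}{d\ln M}(1/g_h^2) = \frac{3C_A}{8\pi^2}$. Differentiating then yields exactly the NSVZ flow (\ref{beta_NSVZ}) for $g_c$:

```python
import sympy as sp

g, CA = sp.symbols('g C_A', positive=True)

# Assumed relation between holomorphic and canonical couplings
# (sketch of the anomalous-rescaling argument): h(g_c) = 1/g_h^2
h = 1 / g**2 + CA / (8 * sp.pi**2) * sp.log(g**2)

# One-loop-exact running of the holomorphic coupling:
# d(1/g_h^2)/dlnM = 3 C_A/(8 pi^2)  =>  h'(g_c) * dg_c/dlnM = 3 C_A/(8 pi^2)
beta_c = (3 * CA / (8 * sp.pi**2)) / sp.diff(h, g)

# NSVZ beta function, Eq. (beta_NSVZ)
beta_nsvz = -(3 * CA / (16 * sp.pi**2)) * g**3 / (1 - CA * g**2 / (8 * sp.pi**2))

assert sp.simplify(beta_c - beta_nsvz) == 0
```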
In these works it is also argued that the flow of the 1PI coupling constant is closely related to this ``canonical'' coupling constant flow; this is confirmed in \cite{Bonini:1996bk}, where it is found that the first two coefficients of the 1PI beta function coincide (in any mass-independent scheme) with the first two coefficients of the ``canonical'' wilsonian beta function \cite{Mas:2002xh}. As the construction we have just described is made {\em{\`a la}} Wilson, it is claimed in \cite{Arkani-Hamed:1997ut,Arkani-Hamed:1997mj} that it only depends on the UV properties of the theory. However, in \cite{Shifman:1999kf}, this interpretation is criticized, as it is pointed out that the IR degrees of freedom must be included in the derivation of the anomaly, if we want to maintain low-energy physics unchanged under rescaling \cite{Shifman:1999kf,Shifman:1988zk}. We proceed to the discussion of our result. We have taken the following steps: \begin{enumerate} \item We first renormalize the one-loop UV subdivergences with a scale $M$. \item Had we found an overall UV divergence, we could have renormalized it with a different scale, say $M^{\prime}$. By power counting only $\boldsymbol{\Gamma} \boldsymbol{\Gamma}$ diagrams could have overall UV divergences; nevertheless, as the traceless parts multiplying $\boldsymbol{\Gamma} \boldsymbol{\Gamma}$ are finite, they can only depend on the one-loop scale $M$, which implies that we cannot have any $M^{\prime}$ dependence, as gauge invariance imposes transversality in the $\boldsymbol{\Gamma} \boldsymbol{\Gamma}$ expressions. \item Although, when taking the derivative wrt. the UV scale in the RG equation, we could have a non-local dependence on $M$ (see (\ref{SYM_G1_G2}) and (\ref{SYM_G3_G4_G5})), after integration over half of the supercoordinates these contributions become local.
\item The one-loop scale is cancelled by the two-loop coefficient of the beta function in the RG equation, while the off-shell IR scale plays a passive r\^ole, as it is exactly cancelled in this equation. \end{enumerate} Hence, we conclude that the scale associated with the one-loop renormalization of the quantum superfield is the one that gives rise to the two-loop coefficient of the beta function. The fact that no overall UV scale appears is directly related to the conclusion obtained in \cite{Abbott:1984pz} that in a four-dimensional regularization method there are no superficial divergences. However, as we have seen, this does not imply that the two-loop coefficient of the beta function vanishes. The mechanism presented here agrees with previous calculations in which the corrections to the one-loop result arise from a one-loop anomaly \cite{Shifman:1986zi,Arkani-Hamed:1997ut,Arkani-Hamed:1997mj,Kraus:2001tg,Kraus:2001id}. In our case, the anomaly is to be associated with the external loop, and is responsible for promoting the $M$ dependence into a non-vanishing non-local structure that eventually generates the two-loop coefficient of the beta function. So, we have found that {\em{the two-loop coefficient of the beta function arises from a one-loop UV scale which survives at two loops when IR effects are included}}. \chapter{Introduction} At the beginning of the past century, two basic building blocks of modern physics were established: Quantum Theory and the Theory of Relativity. In the following years, Quantum Field Theory was developed in order to make both theories compatible (or, to be more precise, Quantum Mechanics and the Special Theory of Relativity, as the complete connection with General Relativity is still an open problem). One of the surprising points of this theory is that some of the calculations involved divergent quantities.
At first this was thought to be a problem, but soon it was found that it was inherent to any Quantum Field Theory calculation, reflecting only the infinite degrees of freedom of these processes. Finally, the standard way of treating these divergences was established to be a two-step procedure: \begin{itemize} \item First of all, we have to {\bf{regularize}} the divergence. The idea is to introduce an extra parameter (the regulator) in terms of which we can rewrite the divergent expression as a finite function, the divergent result being recovered in a certain limit of this parameter. \item Once we have parametrized the divergence in terms of this regulator, the next step is to drop the divergent part of each diagram, retaining only the finite part. This is what we call {\bf{renormalization}}. The easiest way of performing this is to modify the coefficients of the terms of the action, making them regulator-dependent, so that we have new terms in the calculation (counter-terms) that can be adjusted to cancel the divergent parts. The mass scale at which this procedure is applied is called {\em{renormalization scale}}. \end{itemize} Renormalizable theories are those theories where the previous procedure can be applied and only the values of a few parameters are affected. The origin of the harmlessness of the quantum fluctuations in these theories can be easily understood with a method developed by Wilson \cite{Wilson:1973jj}. Here, using a functional approach, we begin by considering a theory which has an ultraviolet cutoff scale $\Lambda$. Then, integrating over a momentum shell, we define the theory to have a new (infinitesimally) lower cutoff scale $\Lambda^{\prime}$. Although at first this redefinition implies that an infinite set of new different terms can appear in the lagrangian, it can be shown that in a renormalizable theory only a finite subset of them tends to grow if we iterate this procedure, whereas the rest vanish.
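As a toy illustration of such a flow (a generic one-loop equation with an arbitrary positive constant $b$, not tied to any particular model discussed here), one can integrate $dg/d\ln\mu = -b\,g^3$ numerically and compare with its closed-form solution $1/g^2(\mu) = 1/g_0^2 + 2b\ln(\mu/\mu_0)$:

```python
import math

def run_coupling(g0, b, t_final, steps=10000):
    """Integrate the toy one-loop flow dg/dt = -b g^3, t = ln(mu/mu0),
    with a fourth-order Runge-Kutta scheme."""
    f = lambda g: -b * g**3
    g, h = g0, t_final / steps
    for _ in range(steps):
        k1 = f(g)
        k2 = f(g + h * k1 / 2)
        k3 = f(g + h * k2 / 2)
        k4 = f(g + h * k3)
        g += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return g

def exact(g0, b, t):
    # Closed-form solution: 1/g^2(t) = 1/g0^2 + 2 b t
    return g0 / math.sqrt(1 + 2 * b * g0**2 * t)

g_num = run_coupling(1.0, 0.05, 5.0)
assert abs(g_num - exact(1.0, 0.05, 5.0)) < 1e-8
assert g_num < 1.0  # for b > 0 the coupling decreases towards the UV
```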
The differential equations that govern the flow of the coefficients of the lagrangian are called {\em{renormalization group}} equations. Another approach to the {\em{renormalization group}} is the Callan-Symanzik equation \cite{Callan:1970yg,Symanzik:1970rt}, which is obtained from the arbitrariness in the choice of the scale that we use to impose the renormalization conditions when obtaining the parameters of a renormalized field theory. So, with the Callan-Symanzik equation, the renormalization group flows are obtained by looking at how the parameters of the theory depend on the renormalization scale. The shifts of the coupling constants are reflected in one special parameter of the equation, which is called the {\em{beta function}}. Hence, by studying this parameter, we can obtain relevant physical information, such as the validity of the perturbative approach for obtaining the short- or long-distance behaviour of the theory. Central to the physics is the idea of symmetry, which is the invariance of a physical system under some kind of transformation. When Quantum Field Theory was developed, two different types of symmetries were found: the space-time symmetries, generated by the Poincar\'e group, and, on the other hand, internal symmetries. It was shown that both types of symmetries cannot be non-trivially mixed \cite{Coleman:1967ad}, unless we consider fermionic symmetry generators \cite{Haag:1974qh} (i.e., operators that interchange fermions with bosons and vice versa). These generators allow us to obtain an extended Poincar\'e algebra, which is called the {\em{supersymmetry algebra}}. Since its discovery, and although it has not yet been experimentally verified, supersymmetry has become one of the basic elements of modern theoretical physics.
Among the different reasons for this, we can highlight that it is a key ingredient of the theoretical efforts for the unification of gravity and the other forces of nature (e.g., supergravity and superstring theories), and it provides models that are simpler to study and quantize, as the symmetry between fermions and bosons implies that some ``miraculous'' cancellations occur in the calculations. As a renormalized quantum theory should have (if possible) the same symmetries as the classical one, among the relevant features that we have to maintain in a renormalization procedure, we have the invariance with respect to local symmetry transformations, which is called {\em{gauge invariance}}. However, the quantization procedures force us to lose gauge invariance in the intermediate results (for example, we have to pick only one representative gauge field from each gauge orbit in a functional quantization approach). To maintain explicit gauge invariance in every step of the calculations, the {\em{background field method}} \cite{DeWitt:1967ub} was developed. Here, the gauge field is split into two parts: quantum and background. We quantize the first one, which implies that we have to break the gauge invariance on it. At the same time, the second field is treated as a classical one, and therefore gauge invariance is retained in terms of it. This has relevant consequences: for example, it imposes a relation between the gauge coupling and background field renormalizations and allows us to obtain the beta function from a calculation of only the background field two-point function. When quantizing a gauge theory, it was found that only some renormalization procedures can explicitly preserve the symmetries, except in some exceptional cases called {\em{anomalies}}; the most successful of these procedures is dimensional renormalization.
The key point of this method is to rewrite the original divergent integrals in four dimensions as integrals in $D$ dimensions, the regulator being the $\varepsilon$ parameter of this continuous dimension $D = 4 - 2 \varepsilon$. As we have stated before, it explicitly preserves gauge invariance and also makes the calculations easy to perform (even in the higher-loop cases). However, this procedure has some drawbacks. In particular, since the space-time dimension is changed, inconsistencies are expected to appear when applying this method to a dimension-dependent theory such as a supersymmetric one. To solve this problem, and offer an alternative renormalization procedure that works only in four dimensions, {\bf{differential renormalization}} (DiffR) was developed \cite{Freedman:1991tk}. The basic idea of the method is to work in coordinate space rather than in momentum space, rewriting expressions that are too singular to have a well-defined Fourier transform in terms of derivatives of less singular ones. With this prescription, it can be shown that the coefficients of the renormalization group equations that are satisfied by the correlators of the theory are easily obtained. At the same time, we stay all the time in four dimensions, making this a suitable renormalization procedure when dealing with supersymmetric theories. However, one important practical difficulty arises. Although gauge invariance is not broken, to recover the explicit form of this invariance in the final results we have to fix the ambiguities generated by the method. So, we have to impose {\em{a posteriori}} the Ward identities. This important point was solved (at one loop) by the introduction of {\bf{Constrained Differential Renormalization}} (CDR) \cite{delAguila:1997kw}. The basic idea here is to give a minimal set of rules to manipulate singular expressions, so that all the ambiguities of the calculations are fixed {\em{a priori}}.
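The flavour of the method can be seen in its standard example. Away from the origin, the four-dimensional distribution $1/x^4$, which is too singular at $x=0$ to have a well-defined Fourier transform, is rewritten as $1/x^4 = -\tfrac{1}{4}\,\Box\left[\ln(x^2M^2)/x^2\right]$, which introduces the renormalization scale $M$. The identity is easy to verify with the radial part of the four-dimensional Laplacian, $\Box f(r) = f''(r) + (3/r)f'(r)$ (a sketch of our own check):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)

# Regulated expression of differential renormalization:
f = sp.log(r**2 * M**2) / r**2

# Radial Laplacian in 4 dimensions, valid away from the origin:
box_f = sp.diff(f, r, 2) + 3 / r * sp.diff(f, r)

# Check 1/r^4 = -(1/4) Box[ln(r^2 M^2)/r^2] for r != 0
assert sp.simplify(box_f + 4 / r**4) == 0
```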
At the same time, all of these manipulations are required to be compatible with the symmetries that have to be maintained. With this prescription, it can be seen that the renormalized expressions directly fulfil the Ward identities without any adjustment. The objective of this work is to show that differential renormalization can be easily applied to the renormalization of gauge theories at the two-loop level. Specifically, we will show that with this method we can obtain with little effort the two-loop coefficient of the expansion of the beta function of these theories. We have to point out that, although it is only fully developed for the one-loop case, to perform some of these calculations we will use CDR prescriptions. This is due to the fact that when imposing CDR at the one-loop level, the coefficients of the logarithms of the mass-scales of the two-loop renormalized expression get fixed {\em{a priori}}; no Ward identities need to be imposed. Also, we will show that differential renormalization clearly distinguishes between ultraviolet and infrared divergences, as both are renormalized with different and independent mass-scales. This is not the case for dimensional regularization, where both types of divergences get mixed in the results, as they are renormalized with the same dimensional parameter $\varepsilon$. Hence, this feature allows us to revisit one controversial point: the origin (ultraviolet or infrared) of the higher-order perturbative contributions to the beta function in supersymmetric gauge theories. Originally, Novikov, Shifman, Vainshtein and Zakharov obtained the so-called ``exact beta function'' of $N=1$ SYM ($\beta_{NSVZ}$) by means of instanton analysis \cite{Novikov:1983uc}, where the origin of the higher-order contributions was clearly infrared.
However, this was questioned by Arkani-Hamed and Murayama \cite{Arkani-Hamed:1997ut,Arkani-Hamed:1997mj}, as they were able to obtain $\beta_{NSVZ}$ in a purely wilsonian framework, which only depends on the ultraviolet properties of the theory. With our approach, we will obtain perturbatively the two-loop coefficient of $\beta_{NSVZ}$ with the advantage of having the UV and IR divergences clearly separated. The structure of the work is as follows: In the first chapter, we make a brief presentation of DiffR and CDR, showing also how the results of the latter can be used in two-loop calculations. In the second chapter, we give a complete treatment of the calculation of the beta function of two of the most relevant abelian gauge theories: QED and SuperQED. Although these two theories have already been renormalized in the literature with standard DiffR, we will re-obtain their two-loop beta functions without imposing Ward identities. The third chapter is devoted to the renormalization of non-abelian gauge theories, studying the concrete models of Yang-Mills and SuperYang-Mills. Finally, we present our conclusions. In appendices \ref{ap_SUSY} and \ref{ap_BFM} we make a brief presentation of our supersymmetric conventions and the background field method, respectively. In appendix \ref{ap_Gauge}, in order to obtain the function that takes into account the running of the gauge parameter in the RG equations, we evaluate the one-loop RG equations for the quantum gauge field two-point functions of each theory that we treat. Finally, in appendix \ref{ap_calc} we list some identities and calculations that are used in this work.
\section{Introduction} Knot groups and their quotients provide effective techniques for distinguishing and tabulating knots, studying their properties and calculating a variety of classical invariants. Prime knots are determined, up to reflection, by their groups~\cite{gordon1989knots}. Further, dihedral and symmetric group quotients have been as instrumental as polynomial invariants in creating and expanding the knot table~\cite{HTW, perko1974classification}. In this paper, we adopt a computational approach to studying two notoriously elusive knot invariants: the bridge number and meridional rank. We perform an exhaustive search, covering the groups of tabulated knots through 16 crossings, for quotients onto finite Coxeter groups. We find $595,515$ quotients for knots of bridge number at least 3, which implies that $601,061$ out of the first 1,701,936 (non-cyclic) knot groups admit maximal rank quotients, in the sense defined below, onto finite Coxeter groups. For approximately $38\%$ of these knots, we compute the bridge number for the first time. Our findings are summarized in Section~\ref{section-tables}. Recall that given a Coxeter presentation for a Coxeter group $G$, a {\it reflection} is any element conjugate to one of the generators in this presentation. The {Coxeter rank} of $G$ is the cardinality of a minimal generating set of reflections for $G$. In this paper, the Coxeter rank will be denoted by $r(G)$ and may also be called simply ``the rank of $G$". Whenever we consider a group homomorphism $\rho: \pi_1(S^3\backslash K)\twoheadrightarrow G$ from a knot group onto a Coxeter group $G$, we will always assume that meridians of $K$ map to reflections in $G$. Sometimes we will emphasize this property by saying that $\rho$ is a {\it good} quotient. Consider a good quotient $\rho: \pi_1(S^3\backslash K)\twoheadrightarrow G$ as above. If $r(G)$ equals the bridge number of $K$, we say that $\rho$ is a {\it maximal rank} Coxeter quotient, abbreviated MRCQ. 
As the phrase suggests, the Coxeter rank of a good quotient for $K$ can never exceed the bridge number $\beta(K)$. Indeed, recall that $\beta(K)$ is an upper bound for the meridional rank $\mu(K)$. Furthermore, a generating set of meridians is mapped by a good quotient map to a generating set of reflections. Hence, for any good quotient map $\varphi: \pi_1(S^3\backslash K)\twoheadrightarrow G$, we have the inequalities \begin{equation} \label{ineqs1} \beta(K)\geq \mu(K)\geq r(G). \end{equation} Thus, we have an MRCQ precisely when $\beta(K)= r(G)$ holds, and this equality can sometimes be verified diagrammatically. \begin{prop}\label{prop-equalities}\cite{baader2021coxeter} Let $D$ be a diagram for a knot $K$. Denote by $\omega(D)$ the Wirtinger number (Definition~\ref{def-wirt}) of $D$. Assume that $G$ is a Coxeter group such that there exists a good quotient $\pi_1(S^3\backslash K)\twoheadrightarrow G$. If the Coxeter rank of $G$ satisfies $r(G)=\omega(D)$, the Meridional Rank Conjecture holds for $K$ and we have $$\omega(D)=\omega(K)=\beta(K)=\mu(K)=r(G).$$ \end{prop} \begin{proof} The result follows from Equation~\ref{ineqs1}, combined with the fact that the Wirtinger number of any diagram of $K$ is an upper bound for the bridge number: $\omega(D)\geq\omega(K)=\beta(K)$, which is proved in~\cite{blair2020wirtinger}. \end{proof} Given a knot $K$ with diagram $D$, we say that $D$ exhibits a maximal rank Coxeter quotient if there exists a good quotient $\varphi: \pi_1(S^3\backslash K)\twoheadrightarrow G$ such that $r(G)=\omega(D)$. The existence of such a $\varphi$ allows us to apply Proposition~\ref{prop-equalities} to prove the Meridional Rank Conjecture (Kirby List~\cite{kirby1995problems}, Problem~1.11) for $K$. Moreover, $D$ realizes the Wirtinger number of $K$, that is, $\omega(D)$ equals the bridge number $\beta(K)$.
In this work, we determine the diagrams in the Hoste-Thistlethwaite-Weeks table~\cite{HTW} through 16 crossings which exhibit maximal rank quotients onto finite Coxeter groups. We thereby compute the meridional ranks and bridge numbers for the corresponding knots, along the way showing that these knots satisfy the Meridional Rank Conjecture of Cappell and Shaneson. Note that the conjecture has been proven in a variety of special cases, notably torus links~\cite{RZ87}, links of meridional rank two~\cite{BZ89}, Montesinos links~\cite{BZ85} and generalized Montesinos links~\cite{LM93}, twisted links~\cite{baader2021coxeter}, and certain classes of arborescent links ~\cite{baader2021coxeter, baader2020twigs}, among others \cite{BDJW, BJW, CH14, baader2019symmetric}. It is unknown how many and precisely which knots through 16 crossings are covered by one or more of these theoretical results. In practice, however, it can be challenging to determine whether a given knot satisfies the hypotheses of some of the theorems cited above, particularly when these hypotheses include the existence of a diagram with special properties. This makes it difficult to identify potential counter-examples to the conjecture, that is, knots which do not belong to any of the special cases for which the conjecture is known to hold. Our work is a step toward bridging this gap. Moreover, when the meridional rank of a knot $K$ is detected by a finite Coxeter quotient, we explicitly compute the bridge number and meridional rank of $K$ from Gauss code for a diagram of $K$. \begin{thm}[Main Theorem]\label{thm-main} Let $D$ be a knot diagram. $D$ admits a maximal rank quotient onto a finite Coxeter group $H$ if and only if such a quotient is detected by the algorithm outlined in Section \ref{section-homsearch}. 
\end{thm} The result follows from three main ingredients: the equality between the bridge number and Wirtinger number of a knot (Theorem~\ref{thm-wirt}); the easy fact that the existence of a Coxeter quotient of a knot group can be detected in {\it any} diagram of the knot (Proposition~\ref{lem-AnyDiagram}); and the celebrated classification of finite Coxeter groups (Theorem~\ref{CoxeterClassification}). These results are recalled in Section~\ref{section-background}, and our proof appears in Section~\ref{section-homsearch}, which is dedicated to showing that the homomorphism search we perform is exhaustive. Therein, we also describe our method for trimming the set of possible generating sets for finite Coxeter groups without compromising the exhaustiveness of the search; this step was necessary in order to make the computation feasible. We have implemented the algorithm and run it on all knots through 16 crossings. Our search identified all diagrams in the knot table which admit MRCQs onto finite Coxeter groups. The data obtained by running our algorithm for all tabulated knots through 16 crossings is summarized in Section~\ref{section-tables}. We conjecture that crossing number minimizing diagrams of prime knots through 16 crossings realize the Wirtinger numbers of the corresponding knots, that is, we posit that $\omega(D)=\omega(K)$ for any minimal diagram $D$ of a prime knot $K$ through 16 crossings. If true, this would imply that we have identified precisely the knots in the table which admit maximal rank quotients onto finite Coxeter groups. \subsection{Applications.} The values of bridge number and meridional rank established in this paper have implications for other difficult to evaluate knot invariants. An early version of our computation was used to find the stick number of knots in several challenging cases~\cite{blair2020knots}. Additionally, the bridge number gives a lower bound on the superbridge index~\cite{Kuiper87}. 
It is possible to use the values of bridge number established by our algorithm together with recent upper bounds on superbridge index~\cite{Shonkwiler20} to compute the superbridge index of some knots for which the value was previously unknown. Further, our algorithm can be adapted to accept non-planar Gauss codes and thus to give lower bounds on the virtual bridge number of virtual knots. When paired with the upper bounds from~\cite{pongtanapaisan2019wirtinger}, this technique can be used to compute the virtual bridge number of many virtual knots. Finally, our algorithm can be used in conjunction with the results in~\cite{joseph2021bridge} to establish the meridional rank and bridge number of certain twist spun knots, which are knotted 2-spheres in~$\mathbb{R}^4$. \section{Bridge numbers via Coxeter quotients} \label{section-background} We recall some basic definitions and necessary background on knot colorings, Coxeter groups and Wirtinger numbers of links, as well as the approach from~\cite{baader2021coxeter} for computing bridge numbers using Coxeter quotients of knot groups. \subsection{Classification of finite Coxeter groups} \label{section-coxeter review}\label{Coxeterbackground} \begin{defn} Let $\Gamma$ be a finite simple graph with edges labeled by integers greater than $1$. The Coxeter group $C(\Gamma)$ is a group generated by a set in bijective correspondence with the vertices of $\Gamma$, subject to the following two types of relations: \begin{enumerate} \item $s^2=1$ for all generators $s$. \item $(st)^k=1$ for all pairs of generators $s,t$ connected by an edge of weight $k\in \mathbb{N}$. \end{enumerate} We call $\Gamma$ the Coxeter graph and the presentation determined by $\Gamma$ a Coxeter presentation of $C(\Gamma)$. \end{defn} Note that a given Coxeter group can have multiple Coxeter presentations. See Figure~\ref{fig-D6}.
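The two presentations of $D_6$ in Figure~\ref{fig-D6} can be checked computationally; a minimal sketch of our own using SymPy's permutation groups, realizing $x$ as a reflection and $y$ as a rotation of the hexagon:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# D_6 acting on the vertices of a hexagon:
y = Permutation([1, 2, 3, 4, 5, 0])   # rotation of order 6
x = Permutation([0, 5, 4, 3, 2, 1])   # a reflection

# Defining relations of D_6 = <x, y | x^2 = y^6 = 1, xyx = y^-1>
assert (x**2).is_Identity and (y**6).is_Identity
assert x * y * x == y**-1

# Rank-2 Coxeter presentation (type I_2(6)): generators x, xy
G2 = PermutationGroup([x, x * y])
assert G2.order() == 12

# Rank-3 Coxeter presentation (type A_1 x A_2): generators x, y^3, xy^2
G3 = PermutationGroup([x, y**3, x * y**2])
assert G3.order() == 12
# Pairwise product orders match the edge labels 2, 2, 3 in the figure:
assert (x * y**3).order() == 2
assert (y**3 * (x * y**2)).order() == 2
assert (x * (x * y**2)).order() == 3
```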
However, as we will see, this is not the case for the class of Coxeter groups that we use in this paper, namely finite Coxeter groups in which all reflections belong to the same conjugacy class. \begin{figure} \labellist \pinlabel $x$ at 84 65 \pinlabel $3$ at 140 100 \pinlabel $y^3$ at 140 200 \pinlabel $2$ at 100 140 \pinlabel $2$ at 178 140 \pinlabel $xy^2$ at 200 65 \pinlabel $x$ at 340 65 \pinlabel $6$ at 400 100 \pinlabel $xy$ at 460 65 \endlabellist \includegraphics[width=80mm]{D6} \caption{Two Coxeter presentations for the dihedral group $D_6=\langle x, y | x^2=y^6= 1, xyx=y^{-1}\rangle $} \label{fig-D6} \end{figure} Finite Coxeter groups are classified. Every finite Coxeter group is isomorphic to exactly one group in a list of four infinite families and six exceptional groups. For more details on this well-known classification theorem, see any of the standard Coxeter group references, for instance Bourbaki~\cite{Bourbaki}. \begin{thm}\label{CoxeterClassification} Every finite Coxeter group is a finite reflection group. Moreover, every finite Coxeter group is isomorphic to exactly one of the groups $A_n (n \geq 1), B_n (n \geq 2), D_n (n \geq 4), E_6, E_7, E_8, F_4, H_3, H_4$ and $I_2(m) (m \geq 2)$. \end{thm} Note that each of the groups listed in the previous theorem is defined by a Coxeter graph and corresponding Coxeter presentation. Given a Coxeter presentation of a group $C(\Gamma)$, the Coxeter rank $r(C(\Gamma))$ is known to equal the number of vertices of $\Gamma$, see for example Lemma 2.1 in~\cite{felikson2010reflection}. In this paper, we are interested in all Coxeter presentations of finite Coxeter groups with Coxeter rank 3, 4, or 5, such that the set of reflections forms a single conjugacy class. On the one hand, it was shown in~\cite{blair2020wirtinger} that all knots in the knot table up to 16 crossings have bridge number at most 5.
On the other hand, knots of bridge number less than $3$ are classically known to admit MRCQs; indeed all two-bridge knots have dihedral quotients. Therefore, the Coxeter groups needed in order to find MRCQs for the remaining cases are those with Coxeter rank 3, 4, or 5. Additionally, all meridians in the fundamental group of a knot exterior are conjugate. Hence, if $\rho: \pi_1(S^3\backslash K)\twoheadrightarrow G$ is a maximal rank Coxeter quotient, then the reflections of $G$ form a single conjugacy class. The groups in Theorem \ref{CoxeterClassification} that meet these criteria are $A_3$, $A_4$, $A_5$, $D_4$, $D_5$, $H_3$, and $H_4$. However, to be sure that our search is exhaustive, we need to take into account all Coxeter {\it presentations} of finite Coxeter groups, since, in order to detect the existence of a surjective homomorphism onto $G$, we need to consider all possible minimal generating sets for $G$ within the specified conjugacy class of reflections. \begin{thm} Let $C(\Gamma)$ be a finite Coxeter group such that the reflections of $C(\Gamma)$ form a single conjugacy class. Then $\Gamma$ is of type $A, B, D, E, F, G, H,$ or $I$. \end{thm} This stronger statement follows from the proof of the classification of finite Coxeter groups (see, for example, Sections 2.7 and 6.4 of Humphreys \cite{Humphreys1}). As a consequence of this theorem and our previous observations, every Coxeter presentation of a finite Coxeter group with Coxeter rank 3, 4, or 5 such that the set of reflections forms a single conjugacy class is one of $A_3$, $A_4$, $A_5$, $D_4$, $D_5$, $H_3$, and $H_4$. Thus, we can restrict to these presentations when implementing our search for all MRCQs from knot groups to finite Coxeter groups for tabulated knots through 16 crossings.
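As a small illustration of Coxeter rank, consider $A_3$, which is isomorphic to the symmetric group $S_4$ with the transpositions as its reflections. A quick computational check of our own (a sketch using SymPy) confirms that the three adjacent transpositions generate the group, while no pair of reflections does, so $r(A_3)=3$:

```python
from itertools import combinations
from sympy.combinatorics import Permutation, PermutationGroup

# A_3 is the symmetric group S_4; its reflections are the 6 transpositions.
transpositions = [Permutation([[i, j]], size=4)
                  for i in range(4) for j in range(i + 1, 4)]
assert len(transpositions) == 6

# The three adjacent transpositions generate the whole group:
adjacent = [Permutation([[k, k + 1]], size=4) for k in range(3)]
assert PermutationGroup(adjacent).order() == 24   # |S_4| = 24

# No two reflections suffice (pairs generate at most S_3 or Z_2 x Z_2):
assert max(PermutationGroup([a, b]).order()
           for a, b in combinations(transpositions, 2)) < 24
```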
\subsection{Cappell and Shaneson's Meridional Rank Conjecture} Recall that a meridian of a link $L$ is a based loop $m: S^1\to S^3\backslash L$ which is freely homotopic to the boundary of an embedded disk $D^2\hookrightarrow S^3$ intersecting $L$ transversally once. The {\it meridional rank} $\mu(L)$ is the smallest number of elements of $\pi_1(S^3\backslash L)$ represented by meridians which suffice to generate the group. The {\it bridge number} $\beta(L)$ is the minimal number of local maxima of $L$ with respect to the standard height function $h: \mathbb{R}^3\to \mathbb{R}$, taken over all embeddings $l: \coprod S^1\hookrightarrow \mathbb{R}^3$ isotopic to $L$ for which $h_{|l}$ is Morse. One readily derives from the Wirtinger presentation of $\pi_1(S^3\backslash L)$ that the inequality $\beta(L)\geq \mu(L)$ holds for all links, since the meridians near the local maxima of $L$ are seen to generate the group. Cappell and Shaneson asked if the two invariants in fact coincide. \begin{conj}[MRC] Let $L\subset S^3$ be a link. Are its bridge number and meridional rank equal? \end{conj} We approach this question by studying an intermediate quantity, the {\it Wirtinger number} of $L$, defined in~\cite{blair2020wirtinger} via a combinatorial procedure we now recall. Let $D$ be a link diagram and let $\text{W}(D)$ be a subset of the set of strands in $D$. We will refer to the elements of $\text{W}(D)$ as colored strands. The data $(D, \text{W}(D))$ represents a {\it partially colored diagram}. Denote by $c$ a crossing in $D$ and by $o$, $u_1$ and $u_2$ the overstrand and the two understrands at $c$. When $\{o, u_1\}\subset \text{W}(D)$ and $u_2\notin \text{W}(D)$, we say a {\it coloring move} can be performed at $c$, by setting $\text{W}'(D):=\text{W}(D)\cup \{u_2\}$. We refer to the partially colored diagram $(D, \text{W}'(D))$ as the result of performing a coloring move on $\text{W}(D)$ at $c$ and we write $\text{W}(D)\longrightarrow \text{W}'(D)$.
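These coloring moves are easy to simulate. A minimal sketch (the diagram encoding and the trefoil example are our own illustration, not the paper's implementation): each crossing is recorded as a triple (overstrand, understrand, understrand), and moves are applied greedily until no new strand can be colored:

```python
def color_diagram(crossings, seeds):
    """Greedily apply coloring moves: at a crossing, if the overstrand
    and one understrand are colored, color the other understrand.
    Returns the final set of colored strands."""
    colored = set(seeds)
    changed = True
    while changed:
        changed = False
        for over, u1, u2 in crossings:
            if over in colored:
                if u1 in colored and u2 not in colored:
                    colored.add(u2)
                    changed = True
                elif u2 in colored and u1 not in colored:
                    colored.add(u1)
                    changed = True
    return colored

# Standard trefoil diagram: strands a, b, c; each strand passes over once.
trefoil = [('a', 'b', 'c'), ('b', 'c', 'a'), ('c', 'a', 'b')]
strands = {'a', 'b', 'c'}

assert color_diagram(trefoil, {'a', 'b'}) == strands   # two seeds suffice
assert color_diagram(trefoil, {'a'}) != strands        # one seed does not
```

This reproduces, in miniature, the fact that the standard trefoil diagram has Wirtinger number 2, consistent with the trefoil being a two-bridge knot.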
Let $|D|$ denote the number of crossings in $D$. A {\it complete coloring sequence} for $D$ consists of a collection of $n$ strands in $D$, $\{s_1, \dots, s_n\}:=\text{W}_1(D)$, together with $|D|-n$ coloring moves $$\text{W}_1(D)\longrightarrow \text{W}_2(D)\dots \longrightarrow \text{W}_{|D|-n}(D),$$ where $\text{W}_{|D|-n}(D)$ is the set of all strands in $D$. Each of the initial strands $s_i\in \text{W}_1(D)$ is called a {\it seed strand} or simply a {\it seed} for the sequence. When a complete coloring sequence exists starting with $\text{W}_1(D)$, we say that the strands in $\text{W}_1(D)$ are a {\it generating set of seeds} for $D$. \begin{defn}\label{def-wirt} The {\it Wirtinger number} of a link diagram $D$, denoted $\omega(D)$, is the smallest integer $n$ such that there exists a generating set of seeds for $D$ with $n$ elements. The {\it Wirtinger number} of a link $L$, denoted $\omega(L)$, is the minimal value of $\omega(D)$ over all diagrams $D$ of $L$. \end{defn} The motivation for this definition is straightforward: the Wirtinger number of a diagram $D$ gives a combinatorial upper bound on the meridional rank of the corresponding link $L$. Indeed, a coloring move at a crossing $c$ corresponds to the fact that, together, the Wirtinger meridians of the overstrand $o$ and of the understrand $u_1$ generate the Wirtinger meridian of the second understrand $u_2$; this is immediate from the Wirtinger relation at $c$. Thus, if there exists a coloring sequence for $D$ starting from a collection of seeds $\{s_1, \dots, s_n\}$, then the Wirtinger meridians of these strands generate the group of the link $L$ and, therefore, $\mu(L)\leq \omega (D)$. This inequality holds for any diagram $D$ of $L$, showing that, in fact, $$\mu(L)\leq \omega (L).$$ On the other hand, the argument used previously to show that $\beta(L)\geq \mu(L)$ for any link can be used without modification to show that $\beta(L)\geq \omega(L)$.
Put differently, if the strands containing the local maxima of an embedding are chosen as seeds, a complete coloring sequence can be produced by extending the partial coloring successively at crossings of lower height. Combining the above observations, we see that for any link $L\subset S^3$, $$\beta(L)\geq \omega(L) \geq \mu(L).$$ \begin{thm}[\cite{blair2020wirtinger}] Let $L\subset S^3$ be a link. Its Wirtinger number and bridge number are equal: $\omega(L)=\beta(L).$ \label{thm-wirt} \end{thm} The meridional rank conjecture is thus equivalent to proving the equality $\omega(L)=\mu(L)$ for all links. As outlined in the Introduction, one way to establish this equality is to exhibit a diagram $D$ of a link which admits a Coxeter quotient of rank $\omega(D)$. Our main result, Theorem~\ref{thm-main}, identifies all diagrams $D$ of knots through 16 crossings whose groups admit quotients of rank $\omega(D)$ onto finite Coxeter groups. When a knot $K$ has this property, we conclude, as in Proposition~\ref{prop-equalities}, that $$\omega(D)\geq \beta(K)\geq \mu(K)\geq \omega(D).$$ A Coxeter quotient of a knot group can be described in any diagram of the knot, as reviewed next. Conversely, the existence of a quotient can be diagrammatically detected; see Lemma~\ref{lem-AnyDiagram}. \subsection{Knot colorings and knot group quotients} One of the early methods for distinguishing knots is via Fox $p$-colorings of their diagrams. Let $D$ be a diagram of a link $L$. A $p$-coloring of $D$ is an assignment $f(s)\in \{1, \dots, p\}$ for each strand $s$ in $D$, subject to the condition that at every crossing in $D$ the relation \begin{equation} \label{equation-Fox} f(u_1)+f(u_2)-2f(o)\equiv 0\mod p \end{equation} holds, where $o$ is the overstrand and $u_1, u_2$ the two understrands. Let $D_p=\langle x, y| x^2=y^p=1, xyxy=1\rangle$ denote the dihedral group of order $2p$.
A Fox $p$-coloring defines a homomorphism $\varphi: \pi_1(S^3\backslash L)\to D_p$ by mapping the Wirtinger meridian $m_s$ of a strand $s$ to a reflection in $D_p$ determined by $f$: $$\varphi[m_s]:=xy^{f(s)}.$$ Given a crossing in $D$, Equation~(\ref{equation-Fox}) guarantees that the Wirtinger relation among the meridians at this crossing is satisfied by the images of these meridians under $\varphi$. The assignment of integers mod~$p$ to each strand in a link diagram $D$ determines a group homomorphism if and only if the equation is satisfied at every crossing in $D$. When a knot or link admits many distinct Fox $p$-colorings for a fixed $p$, the number of such colorings can be used to derive a lower bound on its meridional rank. However, the existence of a {\it single} homomorphism onto a given dihedral group can only prove that the meridional rank of a knot is bigger than one, since two reflections suffice to generate the image. For the purpose of studying MRC, it is therefore more helpful to find quotients from knot groups to groups which require many generators in a fixed conjugacy class (for link groups, conjugacy classes). We employ finite Coxeter groups to this end. See Section~\ref{section-coxeter review} for a definition and quick review of the properties and classification of these groups. We will make extensive use of the fact that homomorphisms from a link group to any group can be described diagrammatically, just like Fox colorings. \begin{defn} \label{def-coherent} Let $G$ be a group and let $D$ be a diagram of an oriented link $L$. Denote by $s(D)$ the set of strands in $D$. A {\it $G$-coloring of $D$} is a map $$r: s(D)\to G$$ $$s_i\mapsto g_{s_i}$$ such that for any crossing $c$ in $D$ with overstrand $s_i$ and understrands $s_j$ and $s_k$, the relation holds: \begin{equation}\label{crossing-relation} g_{s_i}g_{s_j}g_{s_i}^{-1}=g_{s_k}.
\end{equation} \end{defn} In order to pass from a $G$-coloring of a diagram of $L$ to a homomorphism $\pi_1(S^3\backslash L)\to G$, we map the Wirtinger meridian of any strand $s$, with the orientation determined by the orientation of $L$, to the element $g_s\in G$. \begin{lem} \label{lem-AnyDiagram} Let $D$ be a diagram of a link $L$ and $G$ a Coxeter group. There exists a good quotient $\varphi: \pi_1(S^3\backslash L)\twoheadrightarrow G$ mapping meridians of $L$ to reflections in $G$ if and only if $D$ admits a $G$-coloring by reflections. \end{lem} \begin{proof} Let $\mu_{s_i}$ denote the Wirtinger meridian of the strand $s_i$, with the orientation induced by the given orientation on $L$. Given a good quotient $\varphi$, we define a $G$-coloring of $D$ by setting $r(s_i)=\varphi(\mu_{s_i})$. Since $\varphi$ is a good quotient, $\varphi(\mu_{s_i})$ is a reflection for each $s_i\in s(D)$, so $D$ admits a $G$-coloring by reflections. For the converse, denote by $r: s(D)\to G$ a given $G$-coloring of $D$ by reflections. As above, we define a corresponding map $\rho: \{m_{s} | s \text{ a strand in } D \} \to G$ by setting $\rho(m_{s})=g_{s}$, where $m_{s}$ is the Wirtinger meridian of the strand $s$. Since the Wirtinger meridians in a diagram generate the link group, the assignment $\rho$ extends to a map $\rho: \pi_1(S^3\backslash L)\to G$. As in the case of Fox colorings, Equation~(\ref{crossing-relation}) guarantees that this map is a homomorphism. All meridians of $L$ are conjugate to Wirtinger meridians, and therefore map to reflections in $G$. Hence, a $G$-coloring of $D$ induces a good quotient $\rho: \pi_1(S^3\backslash L)\to G$. \end{proof} This well-known lemma is included to highlight the fact that, if $D$ is a diagram of a link $L$ and $D$ does not admit a coherent labeling by reflections in a Coxeter group $G$, then no homomorphism $\pi_1(S^3\backslash L)\twoheadrightarrow G$ exists mapping meridians of $L$ to reflections.
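Lemma~\ref{lem-AnyDiagram} turns the existence of a good quotient into a finite check on any one diagram. As an illustration (with assumed crossing data for a trefoil diagram, and with the symmetric group $S_3\cong D_3$ as the target, its transpositions playing the role of reflections), the following sketch verifies Equation~(\ref{crossing-relation}) at every crossing:

```python
def compose(p, q):
    """Composition of permutations written as tuples: (p∘q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def is_coloring(crossings, labels):
    """Check g_o g_{u1} g_o^{-1} = g_{u2} at every crossing (o, u1, u2).
    When the labels are reflections, i.e. involutions, the orientation
    of the overstrand is irrelevant."""
    for o, u1, u2 in crossings:
        conj = compose(compose(labels[o], labels[u1]), inverse(labels[o]))
        if conj != labels[u2]:
            return False
    return True

# Trefoil crossing data (illustrative); strands labeled by the three
# transpositions of S_3 -- the Fox 3-coloring in disguise.
TREFOIL = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
T01, T12, T02 = (1, 0, 2), (0, 2, 1), (2, 1, 0)
```

Here `is_coloring(TREFOIL, {0: T01, 1: T12, 2: T02})` succeeds, while labeling two strands with the same transposition fails the check at a crossing.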
The ability to work with {\it any} diagram of $L$ is a useful counterpoint to results which establish the meridional rank conjecture under the assumption that there exists a diagram with certain preferred properties. When a $G$-coloring of $D$ induces a good maximal rank Coxeter quotient to $G$, we say the $G$-coloring of $D$ is a diagrammatic MRCQ. We will be performing exhaustive searches for such quotients, using the following observation. \begin{rem}\label{rem:seeds-suffice} Let $D$ be a diagram for a link $L$ and let $s$ denote a generating set of seeds for $D$. Assume $L$ admits a quotient $\rho: \pi_1(S^3\backslash L)\to G$. In order to define this quotient, it suffices to determine the images under $\rho$ of the Wirtinger meridians of the strands in $s$. This partial coloring will extend uniquely to a $G$-coloring of~$D$. \end{rem} A class of link diagrams admitting natural Artin and Coxeter colorings was discovered by Brunner~\cite{brunner1992geometric}. The corresponding links were later called {\it twisted}. A link $L$ is twisted if it admits a diagram $D$, reduced in the sense of~\cite{brunner1992geometric}, with the following property: checkerboard color the complementary planar regions of $D$ in such a way that the unbounded region is ``white''; view the ``black'' region as a union of disks and twisted bands\footnote{Again, the surface is assumed to be reduced, which means that each disk is incident to at least 3 bands (otherwise the disk becomes absorbed into a band) and the crossings in each band have the same sign.}; this surface contains at least one full twist in each band. For example, standard diagrams of pretzel knots are twisted when every parameter of the pretzel is at least 2 in absolute value. See Figure~\ref{fig-pretzel} for an example of a twisted diagram.
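Such Coxeter colorings can be manipulated concretely. In the geometric (Tits) representation, the generators of the rank-3 Coxeter group $\langle a, b, c \mid a^2=b^2=c^2=(ab)^3=(bc)^3=(ac)^3=1\rangle$ appearing in Figure~\ref{fig-pretzel} act on $\mathbb{R}^3$ by integer matrices, via the standard formula $\sigma_i(x)=x-2B(x,\alpha_i)\alpha_i$ with $B(\alpha_i,\alpha_j)=-\cos(\pi/3)=-1/2$ for $i\neq j$, so the defining relations can be checked mechanically. The sketch below is our illustration of this standard construction:

```python
# Generators of <a,b,c | a^2=b^2=c^2=(ab)^3=(bc)^3=(ac)^3=1> in the
# geometric (Tits) representation; the j-th column of each matrix is the
# image of the j-th simple root.
A = ((-1, 1, 1), (0, 1, 0), (0, 0, 1))
B = ((1, 0, 0), (1, -1, 1), (0, 0, 1))
C = ((1, 0, 0), (0, 1, 0), (1, 1, -1))
I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def matmul(m, n):
    size = len(m)
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(size))
                       for j in range(size)) for i in range(size))

def power(m, k):
    out = I3
    for _ in range(k):
        out = matmul(out, m)
    return out

# The Coxeter relations of the presentation hold on the nose:
relations_hold = all([
    power(A, 2) == I3, power(B, 2) == I3, power(C, 2) == I3,
    power(matmul(A, B), 3) == I3,
    power(matmul(B, C), 3) == I3,
    power(matmul(A, C), 3) == I3,
])
```

This group is the affine Coxeter group of type $\widetilde{A}_2$; in particular it is infinite, consistent with the fact that Brunner's quotients are typically infinite.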
Given a twisted link $L$, Brunner showed how to define a quotient of $\pi_1(S^3\backslash L)$ onto an Artin group $G$, by labeling the strands in a twisted diagram $D$ of $L$ with appropriate elements of $G$. In Figure~\ref{fig-pretzel}, a twisted diagram is labeled by elements of an Artin group, following Brunner's construction. A generating set for the group is in bijection with the planar regions in the complement of the twisted surface determined by $D$. The relations in the group are determined by the number of crossings in each twisted band of $D$. It is convenient to replace $G$ by its natural Coxeter quotient, where the relation $x^2=1$ is added for each of the Artin generators. This will allow us to disregard orientations (since a reflection is equal to its inverse) and to make use of results like the classification of finite Coxeter groups. \begin{figure} \begin{center}\labellist \small \pinlabel $a$ at 220 524 \pinlabel $a$ at 220 580 \pinlabel $b$ at 245 526 \pinlabel $b$ at 245 570 \pinlabel $c$ at 288 522 \pinlabel $c$ at 288 574 \endlabellist \includegraphics[height=2.6in, width=2.3in]{curvypretzel} \caption{The pretzel knot $P(3,-3,3)$, together with a quotient onto the Artin group $\langle a, b, c \,|\, aba=bab,\ bcb=cbc,\ aca=cac\rangle$ or the Coxeter group $\langle a, b, c | a^2=b^2=c^2= 1, (ab)^3=(bc)^3=(ac)^3=1\rangle$. \label{fig-pretzel} } \end{center} \end{figure} For a link $L$ presented in a twisted diagram $D$, in order to describe a quotient of the group of $L$, it suffices to determine the images of the two meridians at one end of each twist region in $D$. The images of the remaining meridians under this quotient will be determined by the Wirtinger relations at crossings. Again, refer to Figure~\ref{fig-pretzel} for an explicit example of this general principle. Brunner's idea is to assign matching generators at the two ends of every twist region.
This forces certain Coxeter relations in the quotient, determined by the number of crossings in each of the twist regions. We now turn to the rank of the quotient in Brunner's construction. As previously noted, Coxeter generators are in bijection with the regions in the complement of a twisted surface bounded by $L$. Thus, the number of such regions is a lower bound for the meridional rank of $L$. Using the Wirtinger number, matching upper bounds on the bridge numbers were found, which proved the MRC for twisted links~\cite{baader2021coxeter}. A similar technique was applied beyond those links for which Brunner found diagrammatic Coxeter quotients, for example to Montesinos links and other natural infinite families of arborescent links~\cite{baader2021coxeter, baader2020twigs}, proving the MRC in these cases as well. Two-bridge knots are a class of examples which illustrate the limitations of working with twisted diagrams. As noted above, the meridional rank of a 2-bridge knot is always detected in a maximal Coxeter quotient, namely by a quotient onto a dihedral group. However, checkerboard-coloring the diagram of a two-bridge knot produces a twisted surface in only a few cases. An exhaustive search for Coxeter quotients can prove more effective in practice than one which relies on the existence of a diagram with certain favorable properties. \section{Homomorphism search}\label{section-homsearch} \subsection{Summary of the algorithm} The algorithm used for obtaining our results takes as input the Gauss code $G$ of a knot diagram $D_G$ representing a knot $K_G$. Following the algorithm developed in~\cite{blair2020wirtinger} and available at~\cite{paul2018}, the Gauss code is translated into the following data associated to $D_G$: a set of strands, denoted $S_G=\{s_1,s_2,...,s_j,...,s_n\}$; and a set of crossings, denoted $C_G=\{c_1,c_2,...,c_j,...,c_m\}$.
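The translation from Gauss code to strands and crossings can be sketched as follows (our reconstruction of the parsing step, not the code of~\cite{paul2018}; the sign convention, negative entries for underpasses and positive entries for overpasses, is an assumption matching the example in Figure~\ref{fig-Gauss}): strands are the maximal cyclic runs from one underpass to the next, and each crossing is recovered as a triple (overstrand, incoming understrand, outgoing understrand).

```python
def parse_gauss_code(gauss):
    """Split a cyclic Gauss code into strands and crossing triples.
    Convention (assumed for this sketch): a negative entry marks an
    underpass, a positive entry an overpass of that crossing."""
    n = len(gauss)
    under = [i for i, x in enumerate(gauss) if x < 0]
    strands = []
    for k in range(len(under)):
        a, b = under[k], under[(k + 1) % len(under)]
        run, i = [], a
        while True:                  # walk cyclically from one underpass
            run.append(gauss[i])     # to the next, endpoints included
            if i == b:
                break
            i = (i + 1) % n
        strands.append(tuple(run))
    crossings = []
    for k, s in enumerate(strands):
        c = -s[-1]                   # strand k ends by going under crossing c
        over = next(j for j, t in enumerate(strands) if c in t)
        crossings.append((over, k, (k + 1) % len(strands)))
    return strands, crossings

# The Gauss code of the 8_16 diagram discussed below:
GAUSS_8_16 = [-1, 2, -3, 4, -8, 6, -7, 3, -4, 5, -6, 1, -2, 7, -5, 8]
strands, crossings = parse_gauss_code(GAUSS_8_16)
```

For this code the routine produces eight strands, among them $(-8,6,-7)$ and $(-6,1,-2)$, together with one crossing triple per crossing; these triples are the input to the coloring-move search.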
Next, the algorithm from~\cite{paul2018} is run to calculate the Wirtinger number of $D_G$ and to identify a minimal set of seed strands $E_G=\{e_1,\;e_2,...,\;e_j,...,\;e_{ \omega (D_G)}\}$. See Figure~\ref{fig-Gauss} for an example of how the seed strands in $D_G$ are recorded in terms of $G$. \begin{figure} \labellist \small \pinlabel $\bullet$ at 87 123 \pinlabel $1$ at 108 120 \pinlabel $e_1$ at 63 130 \pinlabel $e_2$ at 148 130 \pinlabel $e_3$ at 106 84 \endlabellist \includegraphics[width=3in, height=2in]{8_16} \caption{A diagram of the knot $8_{16}$ with Gauss code $\{-1, 2, -3,4, -8, 6, -7, 3, -4,$ $5, -6, 1, -2, 7, -5, 8\}$ and seed strands $\{e_1, e_2, e_3\} = \{(-5,8,-1), (-6,1,-2), (-8,6,-7)\}$.} \label{fig-Gauss} \end{figure} Let $r_1, r_2,...,r_n$ be a minimal generating set of reflections for a Coxeter group $H$ with Coxeter rank $n=\omega (D_G)$ for some fixed knot diagram $D_G$. As observed in Remark~\ref{rem:seeds-suffice}, coloring the strands in $E_G$ by elements of $H$ suffices to determine the image of all of $\pi_1(S^3\backslash K_G)$ under a (potential) homomorphism to $H$. Fix a bijective map from $r_1, r_2,...,r_n$ to $E_G$. Since $E_G$ is a generating set of seeds, by repeatedly applying the Wirtinger relations at crossings, this partial coloring can be extended to an assignment of reflections in $H$ to all strands of $D_G$. In case this assignment constitutes a coherent $H$-coloring of $D_G$, there exists a maximal rank Coxeter quotient from the group of the knot $K_G$ to $H$. By Proposition~\ref{prop-equalities}, such a homomorphism implies that $\omega (D_G)$ is equal to the meridional rank of $K_G$ and to the bridge number of $K_G$. Given a finite Coxeter group $H$, let $R(H)$ denote the set of all reflections in $H$ and let $Gen(H)$ be the set of all minimal generating sets of reflections for $H$.
If $\omega (D_G)$ equals the Coxeter rank $r(H)$, a brute force method of searching for good quotients from $K_G$ to $H$ would be to check, for every set $R$ in $Gen(H)$, whether some bijection from $R$ to $E_G$ extends to an $H$-coloring of $D_G$. This can be done as follows. Start with a bijection from $R$ to a generating set of seeds in $D_G$, and sequentially extend this partial $H$-coloring using the Wirtinger relations at crossings. Since the process started with a generating set of seeds, it is guaranteed to result in assigning an element of $H$ to each strand in $D_G$. Once every strand of $D_G$ has been labeled by an element of $H$, check whether the images of the strands under this potential homomorphism satisfy the Wirtinger relations at those crossings which have not been used to create the coloring. When this is the case, the initial assignment defines a quotient from $K_G$ to $H$. See Figure~\ref{fig-fig8}. \begin{figure} \labellist \small \pinlabel $(12)$ at 222 600 \pinlabel $(23)$ at 261 600 \pinlabel $(12)$ at 358 600 \pinlabel $(23)$ at 396 600 \pinlabel $(13)$ at 300 495 \pinlabel $(12)$ at 290 600 \pinlabel $(23)$ at 329 600 \pinlabel $(13)$ at 367 495 \pinlabel $(23)$ at 367 360 \endlabellist \includegraphics[width=14cm, height=3.8 cm]{figure8} \caption{Left: a bijection between a pair of seed strands for the given diagram and the generating set $\{(12), (23)\}\subset S_3$. Middle: extending the labeling by applying the Wirtinger relation at a crossing. Right: extending the labeling at a second crossing, then performing a check. The original bijection does not extend to a homomorphism from the group of the Figure-8 knot to $S_3$, as evidenced by the shaded crossing.} \label{fig-fig8} \end{figure} However, this is computationally intensive for the larger finite Coxeter groups. It is also highly redundant because, in general, many generating sets will be related by inner automorphisms of the group.
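The extend-then-verify routine illustrated in Figure~\ref{fig-fig8} is short to prototype. In the sketch below (an illustration using permutation tuples for reflections in a symmetric group, not the production code), labels propagate from the seed strands via $g_{u_2}=g_o\,g_{u_1}\,g_o^{-1}$, and the crossings that were not used for propagation supply the consistency checks:

```python
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def conj(g, r):
    """g r g^{-1}; since g is an involution (a reflection), g^{-1} = g."""
    return compose(compose(g, r), g)

def extend_seed_labels(crossings, seed_labels, n_strands):
    """Propagate reflection labels from the seeds through the Wirtinger
    relations; return the full labeling, or None if a crossing unused
    during propagation fails the deferred check."""
    labels = dict(seed_labels)
    used = [False] * len(crossings)
    progress = True
    while progress:
        progress = False
        for i, (o, u1, u2) in enumerate(crossings):
            if used[i] or o not in labels:
                continue
            if u1 in labels and u2 not in labels:
                labels[u2] = conj(labels[o], labels[u1])
            elif u2 in labels and u1 not in labels:
                labels[u1] = conj(labels[o], labels[u2])
            else:
                continue
            used[i] = progress = True
    if len(labels) < n_strands:
        return None
    for i, (o, u1, u2) in enumerate(crossings):
        if not used[i] and conj(labels[o], labels[u1]) != labels[u2]:
            return None              # a deferred Wirtinger check fails
    return labels

TREFOIL = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]
T01, T12, T02 = (1, 0, 2), (0, 2, 1), (2, 1, 0)
```

Seeding two strands of the trefoil with distinct transpositions of $S_3$ recovers its Fox 3-coloring; seeding both with the same transposition extends as well, but to a non-surjective labeling, which is why the rank of the image must be checked separately.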
Taking this redundancy into account, for larger Coxeter groups $H$ we implemented a preprocessing step in which we found a smaller subset of $Gen(H)$ which suffices for an exhaustive search. Given a Coxeter group $H$, define an equivalence relation on the set $Gen(H)$ by declaring $$\{r_1,r_2,...,r_{r(H)}\} \sim \{\rho_1,\rho_2,...,\rho_{r(H)}\}$$ if there exists $g\in H$ such that $\{g^{-1}r_1 g,g^{-1} r_2 g,...,g^{-1} r_{r(H)} g\}= \{\rho_1,\rho_2,...,\rho_{r(H)}\}$. We say a subset $A\subset Gen(H)$ is \emph{robust} if it contains at least one element from each equivalence class corresponding to the relation ``$\sim$''. \begin{lem}\label{robust} Let $G$ be the Gauss code for a knot diagram $D_G$, $E_G$ a minimal set of seed strands for $D_G$, and $A\subset Gen(H)$ a robust set for a Coxeter group $H$. There exists a diagrammatic MRCQ of $D_G$ onto $H$ if and only if there exists $\{\rho_1, \rho_2,...,\rho_n\}\in A$ and a bijection of $\{\rho_1, \rho_2,...,\rho_n\}$ to $E_G$ that can be extended to an $H$-coloring of $D_G$. \end{lem} \begin{proof} Suppose there exists $\{\rho_1, \rho_2,...,\rho_n\}\in A$ and a bijection of $\{\rho_1, \rho_2,...,\rho_n\}$ to $E_G$ that can be extended by repeatedly applying the Wirtinger relations at each crossing to an $H$-coloring of $D_G$. By Lemma \ref{lem-AnyDiagram}, there exists a maximal rank Coxeter quotient of $D_G$ to $H$. For the converse, suppose there exists a diagrammatic MRCQ $\phi$ of $D_G$ to $H$. Denote the elements of $E_G$ by $\{e_1,\;e_2,...,\;e_j,...,\;e_{ \omega (D_G)}\}$. By definition, $\phi$ is a good quotient so it maps $\{e_1,\;e_2,...,\;e_j,...,\;e_{ \omega (D_G)}\}$ to a set of reflections $\{r_1,\;r_2,...,\;r_j,...,\;r_{ \omega (D_G)}\}$ in $H$. Since $\phi$ is of maximal rank, $\omega (D_G)=r(H)$. Since $\{e_1,\;e_2,...,\;e_j,...,\;e_{ \omega (D_G)}\}$ is a generating set of seeds for $D_G$, the corresponding Wirtinger meridians form a generating set for the knot group.
Consequently, $\{\phi(e_1),\;\phi(e_2),...,\;\phi(e_{ \omega (D_G)})\}$ generates the image of $\phi$. Since $\omega (D_G)=r(H)$, it follows that $\{r_1,\;r_2,...,\;r_j,...,\;r_{ \omega (D_G)}\}$ is a minimal generating set of reflections for $H$. Hence, $\{r_1,\;r_2,...,\;r_j,...,\;r_{ \omega (D_G)}\}\in Gen(H)$. Since $A$ is a robust subset of $Gen(H)$, there exists $\{\rho_1,\;\rho_2,...,\;\rho_j,...,\;\rho_{ \omega (D_G)}\}\in A$ such that $\{\rho_1,\;\rho_2,...,\;\rho_j,...,\;\rho_{ \omega (D_G)}\}\sim \{r_1,\;r_2,...,\;r_j,...,\;r_{ \omega (D_G)}\}$. In particular, there exists $g\in H$ such that $\{g^{-1}r_1g,\;g^{-1}r_2g,...,\;g^{-1}r_{ \omega (D_G)}g\}=\{\rho_1,\;\rho_2,...,\;\rho_j,...,\;\rho_{ \omega (D_G)}\}$. If $\theta$ is the inner automorphism of $H$ given by conjugation by $g$, then $\theta \circ \phi$ is a maximal rank Coxeter quotient of $D_G$ to $H$ and there is a bijection from $\{\rho_1,\;\rho_2,...,\;\rho_j,...,\;\rho_{ \omega (D_G)}\}$ to $E_G$ that can be extended by repeatedly applying the Wirtinger relations at each crossing to an $H$-coloring of $D_G$. \end{proof} By Lemma~\ref{robust}, in order to verify the existence of an MRCQ of $D_G$ to $H$, it suffices to check whether any bijection from a set in $A$, a robust subset of $Gen(H)$, to a minimal set of seed strands of $D_G$ can be extended to an $H$-coloring of $D_G$. Given the Gauss code $G$ of a knot diagram $D_G$ representing a knot $K_G$, we implemented the following steps to perform an exhaustive search for good homomorphisms from $D_G$ to a finite Coxeter group $H$. \begin{enumerate} \item $D_G$ is parsed into a set of strands $S_G$ and the algorithm from~\cite{paul2018} is used to find a minimal set of seeds $E_G=\{e_1,\;e_2,...,\;e_j,...,\;e_{ \omega (D_G)}\}\subset S_G$. \item If $\omega (D_G)=r(H)$ and $A\subset Gen(H)$ is \emph{robust}, then for every $R\in A$ and every bijection $f:R\rightarrow E_G$ we test whether $f$ can be extended to an $H$-coloring of $D_G$.
\end{enumerate} We can now prove the main result of this paper. \begin{proof}[Proof of Theorem~\ref{thm-main}] Let $G$ be the Gauss code for a diagram $D_G$ of a knot. Let $H$ be a Coxeter group such that the Coxeter rank of $H$ is $\omega (D_G)$ and $D_G$ has a maximal rank Coxeter quotient $\rho$ to $H$. We need to verify that $\rho$ will be detected by our search. By Lemma \ref{lem-AnyDiagram}, the homomorphism $\rho$ induces an $H$-coloring of $D_G$. Applying our algorithm to $G$, we find a minimal set of seed strands $E_G=\{e_1,\;e_2,...,\;e_j,...,\;e_{ \omega (D_G)}\}\subset S_G$. The above $H$-coloring of $D_G$ induces a labeling of $E_G$ by reflections $\{r_1,\;r_2,...,\;r_j,...,\;r_{ \omega (D_G)}\}$. Since the Wirtinger meridians of the strands in $E_G$ generate $\pi_1(S^3\setminus K_G)$ and the $H$-coloring of $D_G$ induces an MRCQ from $D_G$ to $H$, we know that $\{r_1,\;r_2,...,\;r_j,...,\;r_{ \omega (D_G)}\}$ is a generating set of $H$. Given $\mathcal{A}\subset Gen(H)$ a robust set, there exists an element $\{\rho_1,\;\rho_2,...,\;\rho_j,...,\;\rho_{ \omega (D_G)}\}\in \mathcal{A}$ and a $g\in H$ such that $\{\rho_1,\;\rho_2,...,\;\rho_j,...,\;\rho_{ \omega (D_G)}\}= \{g^{-1}r_1g,\;g^{-1}r_2g,...,\;g^{-1}r_jg,...,\;g^{-1}r_{ \omega (D_G)}g\}$. Note that conjugating each label of $D_G$ by $g$ gives a new $H$-coloring of $D_G$ such that $E_G$ is labeled by $\{\rho_1,\;\rho_2,...,\;\rho_j,...,\;\rho_{ \omega (D_G)}\}$. Therefore, in its search through all bijections of the form $f:E_G\rightarrow A$, where $A$ is an element of the robust set $\mathcal{A}$, the algorithm will find a labeling of $E_G$ by the generators $\{\rho_1,\;\rho_2,...,\;\rho_j,...,\;\rho_{ \omega (D_G)}\}$ which induces an $H$-coloring of $D_G$. Thus, the algorithm will return a positive hit for an MRCQ to $H$. By construction, if the algorithm returns a positive hit for a maximal rank Coxeter quotient to $H$, then the Coxeter rank of $H$ is $\omega (D_G)$ and there is an $H$-coloring of $D_G$.
\end{proof} \subsubsection{MRCQs to finite Coxeter groups among all knots up to 16 crossings} We implemented the algorithm described above in Python and searched for all maximal rank Coxeter quotients to finite Coxeter groups among the $1,701,936$ prime knots~\cite{HTW} of crossing number less than or equal to 16. It was shown in~\cite{paul2018} that all Gauss codes available in the census of these knots result in diagrams with Wirtinger number at most 5. Therefore, we designed the code to search for good homomorphisms to finite Coxeter groups of Coxeter rank at most 5. Moreover, since the meridians of a knot group form a single conjugacy class, the reflections in the image of a good quotient must themselves constitute a single conjugacy class. Additionally, the MRC is known for knots of Wirtinger number two~\cite{BZ89}, and producing suitable homomorphisms from the groups of such knots onto dihedral groups is well understood. As a result, our code searches for maximal rank Coxeter quotients to those finite Coxeter groups of Coxeter rank 3, 4 or 5 whose reflections constitute a single conjugacy class. As discussed in Section \ref{Coxeterbackground}, every such group is one of $A_3$, $A_4$, $A_5$, $D_4$, $D_5$, $H_3$, and $H_4$. In Section \ref{Sec:robust}, we discuss how we generated robust sets of generating sets for each of these groups. \subsection{Generating Robust Sets}\label{Sec:robust} To illustrate our approach, we outline the process we used to generate a robust set of generating sets for the group $D_4$. First, we represented $D_4$ as a subgroup of $GL_4(\mathbb{R})$, the general linear group of degree four.
Specifically, $D_4$ is isomorphic to the subgroup generated by the following matrices: $$ \begin{bmatrix} -1 & 1 & 0 & 0\\ 0 & 1& 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 0 & 0\\ 1 & -1& 1 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & -1 \end{bmatrix}$$ Since the reflections in $D_4$ are all contained in a single conjugacy class, we know that there exists a robust set for the group $D_4$ so that every generating set in the robust set contains the matrix $A=\begin{bmatrix} -1 & 1 & 0 & 0\\ 0 & 1& 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$. Since $D_4$ contains a total of 12 reflections, if we fix the first reflection, this leaves $11\times 10\times 9=990$ ordered ways of appending three further distinct reflections to $A$. Recall that each reflection, viewed as a linear map of $\mathbb{R}^4$, has a one-dimensional eigenspace corresponding to the eigenvalue $-1$ and a three-dimensional eigenspace corresponding to the eigenvalue $1$. For a set of four reflections to generate $D_4$, the four eigenvectors spanning their respective $(-1)$-eigenspaces must span $\mathbb{R}^4$. We determined by direct computation that $630$ of the $990$ tuples of reflections had this property. We found that $624$ of these tuples generated a group of order $192=|D_4|$ and that the remaining 6 generated $(\mathbb{Z}/2\mathbb{Z})^4$. Thus, these $624$ tuples of reflections, regarded as generating sets, form a robust set of generating sets for the group $D_4$. Naturally, smaller robust sets help reduce run time when searching for MRCQs across millions of knot diagrams.
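This computation is small enough to reproduce directly. The sketch below (our illustration, assuming integer matrices for the simple reflections of $D_4$ acting on the root lattice) generates the group by breadth-first closure, counts its reflections as the trace-2 involutions, extracts each reflection's $(-1)$-eigenvector as a nonzero column of $g-I$, and classifies the unordered completions of $A$ by the order of the subgroup they generate; the unordered counts $165$, $105$, $104$ and $1$ agree with the ordered counts above once the $3!$ orderings of the three appended reflections are taken into account.

```python
from itertools import combinations

# Simple reflections of the Coxeter group D_4 on the root basis
# (integer matrices; column j is the image of the j-th simple root).
GENS = (
    ((-1, 1, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)),
    ((1, 0, 0, 0), (1, -1, 1, 1), (0, 0, 1, 0), (0, 0, 0, 1)),
    ((1, 0, 0, 0), (0, 1, 0, 0), (0, 1, -1, 0), (0, 0, 0, 1)),
    ((1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 1, 0, -1)),
)
I4 = tuple(tuple(int(i == j) for j in range(4)) for i in range(4))

def matmul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(4))
                       for j in range(4)) for i in range(4))

def closure(gens):
    """Breadth-first closure: the subgroup generated by the involutions."""
    elems, frontier = {I4}, [I4]
    while frontier:
        new = []
        for h in frontier:
            for g in gens:
                x = matmul(g, h)
                if x not in elems:
                    elems.add(x)
                    new.append(x)
        frontier = new
    return elems

W = closure(GENS)                    # the whole group, |W(D_4)| = 192
reflections = [g for g in W if g != I4 and matmul(g, g) == I4
               and sum(g[i][i] for i in range(4)) == 2]   # trace 2 <=> reflection

def neg_eigvec(g):
    """The (-1)-eigenline of a reflection is the image of g - I,
    so any nonzero column of g - I spans it."""
    for j in range(4):
        col = tuple(g[i][j] - (1 if i == j else 0) for i in range(4))
        if any(col):
            return col

def det(m):
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

A = GENS[0]
others = [r for r in reflections if r != A]          # the 11 other reflections
spanning = [s for s in combinations(others, 3)       # 165 unordered completions
            if det([list(neg_eigvec(g)) for g in (A,) + s]) != 0]
orders = sorted(len(closure((A,) + s)) for s in spanning)
```

The single exceptional completion consists of the three reflections commuting with $A$, which together with $A$ generate $(\mathbb{Z}/2\mathbb{Z})^4$, of order 16.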
Building robust sets of generating sets for each of the groups $A_3$, $A_4$, $A_5$, $D_5$, $H_3$, and $H_4$ was done by an analogous method. For each group, computational resources devoted to building a small robust set were balanced against computational time saved by running the maximal rank Coxeter quotient search algorithm using a smaller robust set. For example, significant computational time was spent to generate small robust sets for $H_4$ and $D_5$. As in the example outlined above, we started by generating all sets of $r(H)$ reflections containing a fixed preferred reflection. These sets were trimmed in two ways: first, sets that generated a proper subgroup were removed. Then, we implemented a brute force search that identified when two generating sets were related by an inner automorphism and deleted one of the redundant generating sets. Ultimately, we found a robust set of generating sets for $H_4$ that contained $25,224$ elements, down from $11,703,240$ sets before the trimming process, and a robust set of generating sets for $D_5$ that contained $1,778$ elements, down from $1,860,480$ sets before the trimming process. All robust sets generated are available here \cite{nate2022}. \section{Computational findings}\label{section-tables} In this section we organize our computational data into tables. Note that we found maximal rank Coxeter quotients for $595,515$, roughly $35\%$, of all $1,696,390$ knots of crossing number at most 16 that have minimal diagrams with Wirtinger number 3, 4 or 5. It is important to note that the Wirtinger number detects all 5546 2-bridge knots of crossing number at most 16 and all knots with crossing number at most 16 have Wirtinger number at most 5~\cite{blair2020wirtinger}. Moreover, all 2-bridge knots admit a maximal rank Coxeter quotient to a finite dihedral group.
Hence, exactly $601,061$ of the $1,701,936$ prime knot diagrams with crossing number at most 16 admit a maximal rank Coxeter quotient to a finite Coxeter group. This work also verifies the MRC for a large portion of tabulated knots. Already, knots with diagrams of Wirtinger number 2 and 3 were known to satisfy the MRC~\cite{BZ89}. In addition, this computation establishes the MRC for $227,163$ knots with Wirtinger number 4 or 5. Altogether, the MRC has been verified for at least $1,363,137$, or approximately $80.1\%$, of all $1,701,936$ prime knots with crossing number at most 16. \begin{table}[ht] \caption{Knots with maximal rank Coxeter quotients by group type} \centering \begin{tabular}{||c c c c c||} \hline Crossing number & Prime knots & MRCQ to $A_3$, $A_4$, or $A_5$ & MRCQ to $D_4$, or $D_5$ & MRCQ to $H_3$, or $H_4$ \\ [0.5ex] \hline\hline 3 & 1 & 1 & 0 & 0 \\ \hline 4 & 1 & 0 & 0 & 0 \\ \hline 5 & 2 & 0 & 0 & 0 \\ \hline 6 & 3 & 1 & 0 & 0 \\ \hline 7 & 7 & 2 & 0 & 0 \\ \hline 8 & 21 & 7 & 0 & 0 \\ \hline 9 & 49 & 17 & 0 & 9 \\ \hline 10 & 165 & 39 & 0 & 40 \\ \hline 11 & 552 & 121 & 15 & 124 \\ \hline 12 & 2176 & 370 & 13 & 537 \\ \hline 13 & 9988 & 1772 & 316 & 2572 \\ \hline 14 & 46972 & 7069 & 1099 & 12494 \\ \hline 15 & 253293 & 37490 & 7997 & 66962 \\ \hline 16 & 1388705 & 183509 & 457923 & 363456 \\ \hline Totals & 1701936 & 230398 & 55233 & 446194 \\ [1ex] \hline \end{tabular} \end{table} \begin{table}[ht] \caption{Prime knots with $\omega(D)=3$ which admit maximal rank Coxeter quotients} \centering \begin{tabular}{||c c c c c||} \hline Crossing number & Knots with $\omega(D)=3$ & MRCQ to $A_3$ & MRCQ to $H_3$ & MRCQ to $A_3$ or $H_3$ \\ [0.5ex] \hline\hline 3 & 0 & 0 & 0 & 0 \\ \hline 4 & 0 & 0 & 0 & 0 \\ \hline 5 & 0 & 0 & 0 & 0 \\ \hline 6 & 0 & 0 & 0 & 0 \\ \hline 7 & 0 & 0 & 0 & 0 \\ \hline 8 & 9 & 6 & 0 & 6 \\ \hline 9 & 24 & 8 & 9 & 16 \\ \hline 10 & 120 & 26 & 40 & 64 \\ \hline 11 & 446 & 85 & 109 & 190 \\ \hline 12 & 1952 & 312 & 489 & 729 
\\ \hline 13 & 8614 & 1221 & 1995 & 2954 \\ \hline 14 & 39291 & 5495 & 8808 & 13104 \\ \hline 15 & 187121 & 25181 & 41771 & 61343 \\ \hline 16 & 892851 & 116071 & 198290 & 288557 \\ \hline Totals & 1130428 & 148405 & 251511 & 366963 \\ [1ex] \hline \end{tabular} \end{table} \begin{table}[ht] \caption{Prime knots with $\omega(D)=4$ and maximal rank Coxeter quotients} \centering \begin{tabular}{||c c c c c c||} \hline Crossing $\#$ & $\omega(D)=4$ & MRCQ to $A_4$ & MRCQ to $H_4$ & MRCQ to $D_4$ & To $A_4$, $H_4$ or $D_4$ \\ [0.5ex] \hline\hline 3 & 0 & 0 & 0 & 0 & 0 \\ \hline 4 & 0 & 0 & 0 & 0 & 0 \\ \hline 5 & 0 & 0 & 0 & 0 & 0 \\ \hline 6 & 0 & 0 & 0 & 0 & 0 \\ \hline 7 & 0 & 0 & 0 & 0 & 0 \\ \hline 8 & 0 & 0 & 0 & 0 & 0 \\ \hline 9 & 0 & 0 & 0 & 0 & 0 \\ \hline 10 & 0 & 0 & 0 & 0 & 0 \\ \hline 11 & 15 & 15 & 15 & 15 & 15 \\ \hline 12 & 48 & 13 & 48 & 13 & 48 \\ \hline 13 & 1022 & 456 & 577 & 316 & 595 \\ \hline 14 & 6958 & 1387 & 3686 & 1069 & 3788 \\ \hline 15 & 64723 & 11944 & 25191 & 7975 & 29588 \\ \hline 16 & 488032 & 63258 & 165166 & 42282 & 189566 \\ \hline Totals & 560798 & 77073 & 194683 & 51670 & 223600 \\ [1ex] \hline \end{tabular} \end{table} \begin{table}[ht] \caption{Prime knots with $\omega(D)=5$ and maximal rank Coxeter quotients} \centering \begin{tabular}{||c c c c c||} \hline Crossing number & Knots with $\omega(D)=5$ & MRCQ to $A_5$ & MRCQ to $D_5$ & MRCQ to $A_5$ or $D_5$ \\ [0.5ex] \hline\hline 3 & 0 & 0 & 0 & 0 \\ \hline 4 & 0 & 0 & 0 & 0 \\ \hline 5 & 0 & 0 & 0 & 0 \\ \hline 6 & 0 & 0 & 0 & 0 \\ \hline 7 & 0 & 0 & 0 & 0 \\ \hline 8 & 0 & 0 & 0 & 0 \\ \hline 9 & 0 & 0 & 0 & 0 \\ \hline 10 & 0 & 0 & 0 & 0 \\ \hline 11 & 0 & 0 & 0 & 0 \\ \hline 12 & 0 & 0 & 0 & 0 \\ \hline 13 & 0 & 0 & 0 & 0 \\ \hline 14 & 30 & 30 & 30 & 30 \\ \hline 15 & 62 & 22 & 22 & 22 \\ \hline 16 & 5072 & 3479 & 3511 & 3511 \\ \hline Totals & 5164 & 3531 & 3563 & 3563 \\ [1ex] \hline \end{tabular} \end{table} \section{Brief remarks} \subsection{Bridge number and crossing 
number} Our computations suggest the following relationship between the bridge number and crossing number of a knot. \begin{conj} Let $n\geq 3$ and let $K$ be a prime knot with bridge number equal to $n$. The crossing number of $K$ is at least $3n-1$. \end{conj} For all prime knots through 16 crossings, the conjecture can be verified using the upper bounds on the bridge number obtained from the Wirtinger numbers of crossing-number minimizing diagrams in the knot table. Remark also that the lower bound we propose is optimal: for any $n\geq 3$ there exists a knot with exactly $3n-1$ crossings and bridge number $n$, namely the pretzel knot $P(2,3,3,...,3)$ with $n-1$ parameters equal to $3$. Of course, the conjectured inequality would not hold for links as, for example, an unlink with more than one component would violate it. Non-prime knots also easily violate the inequality, for example the connected sum of a trefoil with itself. \subsection{Homomorphisms to infinite Coxeter groups} As previously discussed, maximal rank Coxeter quotients were used in~\cite{baader2021coxeter, baader2020twigs} to prove the Meridional Rank Conjecture for large infinite families of links. The Coxeter quotients used in that proof are in the vast majority of cases infinite. We expect that for a sizable fraction of the knots studied in \cite{baader2021coxeter, baader2020twigs} no maximal rank {\it finite} Coxeter quotients exist, though no large-scale computations have been performed due to the high crossing numbers of these knots and the absence of a tabulation. Nevertheless, we posit that extending the current work to infinite Coxeter groups is likely to result in computing the meridional rank of many more knots. As far as we know, it is an open question whether the meridional rank of a knot is always detected in a finite quotient (not necessarily to a Coxeter group). We give an explicit example of a 12-crossing knot whose meridional rank is detected in an infinite Coxeter quotient but not in a finite one.
\begin{figure} \begin{center}\labellist \small \pinlabel $a$ at 196 568 \pinlabel $b$ at 394 607 \pinlabel $c$ at 454 606 \endlabellist \includegraphics[height=2.3in, width=4.1in]{12a210} \caption{The Montesinos knot $12a210$, together with a maximal rank quotient onto the Coxeter group $\langle a, b, c | a^2=b^2= c^2=1,(ab)^3= (ac)^7=(bc)^2=1\rangle$. The knot does not admit a MRCQ onto a finite Coxeter group. \label{fig-12a210} } \end{center} \end{figure} \begin{example} The knot $12a210$ admits a maximal rank Coxeter quotient to the infinite Coxeter group determined by the Coxeter matrix $\begin{bmatrix} 1 & 2 & 3 \\ 2&1&7 \\ 3&7&1 \end{bmatrix},$ see Figure~\ref{fig-12a210}. This is a Montesinos knot on three rational tangles, and the existence of this maximal rank Coxeter quotient also follows from~\cite[p. 1551, footnote 2]{baader2021coxeter}. In contrast, our algorithm proves that there does not exist a homomorphism onto $A_3$ or $H_3$. In other words, even though there is a homomorphism to an infinite Coxeter group that establishes the MRC for this knot, there is no maximal rank Coxeter quotient to a finite Coxeter group that achieves the same. Many other explicit examples of infinite MRCQs for 3-bridge knots can be found in~\cite{Ryffel2019}. \end{example} The non-existence of a maximal rank Coxeter quotient for some knots may guide the search for potential counter-examples to the Meridional Rank Conjecture. At a minimum, many knots are ruled out as possible counter-examples by our method. However, it is also known that the meridional rank of a knot is not always detected in a Coxeter quotient. It was shown by Ryffel that many torus knots not only do not admit a maximal rank Coxeter quotient but do not admit any nontrivial Coxeter quotients whatsoever. \begin{thm}[\cite{Ryffel2019}] Let $p, q\in\mathbb{Z}$ be coprime odd integers such that $p\geq 3$ and $q$ has no factor less than or equal to $\max \{5, p\}$. 
Then the $(p, q)$-torus knot does not admit any non-trivial Coxeter quotients. \end{thm} On the other hand, the Meridional Rank Conjecture holds for all torus links~\cite{rost1987meridional}. \\ \section*{Acknowledgement} The authors would like to thank Curtis Bennett, John Brevik and Jon McCammond for helpful conversations about Coxeter groups; and Sebastian Baader and Levi Ryffel for offering feedback on a draft of this paper. RB and NM were partially supported by NSF grant DMS-1821254. A portion of this work was completed while AK was a guest at the Max Planck Institute for Mathematics in Bonn. We are grateful to MPIM for its hospitality. \nocite{RZ87} \bibliographystyle{plain}
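The hypotheses of the theorem above are purely arithmetic, so candidate torus knots admitting no non-trivial Coxeter quotients can be enumerated mechanically. The checker below is our own sketch (reading ``factor'' as any factor greater than 1, which is equivalent to a condition on the smallest prime factor of $q$):

```python
from math import gcd

def min_prime_factor(n):
    """Smallest prime factor of n > 1, by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

def no_coxeter_quotients(p, q):
    """Test the hypotheses of the theorem for the (p, q)-torus knot:
    p, q coprime odd integers, p >= 3, and every factor of q greater
    than 1 exceeds max(5, p)."""
    return (gcd(p, q) == 1 and p % 2 == 1 and q % 2 == 1
            and p >= 3 and min_prime_factor(q) > max(5, p))

# Enumerate small qualifying pairs; e.g. (3, 7) and (3, 11) appear.
examples = [(p, q) for p in range(3, 8) for q in range(p + 1, 30)
            if no_coxeter_quotients(p, q)]
print(examples)
```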
<!DOCTYPE html>
<html>
<head lang="en">
    <meta charset="UTF-8">
    <title>Knockout observableArray demo</title>
    <script src="bower_components/knockout/dist/knockout.js"></script>
</head>
<body>
<table>
    <thead>
    <tr>
        <th>name</th>
        <th>type</th>
    </tr>
    </thead>
    <tbody data-bind="foreach: anotherObservableArray">
    <tr>
        <td><input data-bind="value: name"></td>
        <td><input data-bind="value: type"></td>
    </tr>
    </tbody>
</table>
<button onclick="showName();">show name</button>
<script type="text/javascript">
    // Wrap the observable array in a view model so the foreach binding
    // resolves against a named property instead of a global variable.
    var viewModel = {
        anotherObservableArray: ko.observableArray([
            {name: "Bungle", type: "Bear"},
            {name: "George", type: "Hippo"},
            {name: "Zippy", type: "Unknown"}
        ])
    };
    ko.applyBindings(viewModel);

    // Rows pushed after binding are rendered automatically.
    viewModel.anotherObservableArray.push({name: "ttt", type: "fff"});

    // Note: name/type are plain strings, so edits typed into the inputs
    // are not tracked; make them ko.observable() for two-way updates.
    function showName() {
        alert(viewModel.anotherObservableArray()[0].name);
    }
</script>
</body>
</html>
## Communications in Mathematical Analysis

### Contractibility of Simple Scaling Sets

#### Abstract

In this paper, we show that the space of three-interval scaling functions with the induced metric of $L^2(\mathbb R)$ consists of three path-components, each of which is contractible, and hence the first fundamental group of these spaces is zero. One method to construct simple scaling sets for $L^2(\mathbb R)$ and $H^2(\mathbb R)$ is described. Further, we obtain a characterization of a method to provide simple scaling sets for higher dimensions with the help of lower dimensional simple scaling sets, and discuss scaling sets, wavelet sets and multiwavelet sets for a reducing subspace of $L^2(\mathbb R^n)$. The contractibility of simple scaling sets for different subspaces is also discussed.

#### Article information

Source: Commun. Math. Anal. Volume 16, Number 1 (2014), 31-46.
First available in Project Euclid: 4 November 2013
Permanent link: http://projecteuclid.org/euclid.cma/1383587518
Mathematical Reviews number (MathSciNet): MR3161734
Zentralblatt MATH identifier: 1297.42052
Subjects: Primary: 42C40

#### Citation

Shukla, N. K.; Yadav, G.C.S. Contractibility of Simple Scaling Sets. Commun. Math. Anal. 16 (2014), no. 1, 31-46.
\section{Introduction} The Tarantula Nebula (30~Doradus, NGC\,2070) in the Large Magellanic Cloud (LMC) is the brightest and most massive H~{\scriptsize II} region in the Local Group. It is a beautiful and very intricate region, far removed from a `simple single-stellar-population'. Indeed, \cite{wb97} identified at least five distinct populations \cite{w09}: \begin{itemize} \item{The central `Carina Phase' concentration, rich in early O-type stars and including the dense cluster R136. } \item{A younger, likely triggered, `Orion Phase' to the north and west of R136.} \item{A `Sco OB1 Phase' of early-type supergiants throughout the central field.} \item{An older `h \& $\chi$ Persei Phase' in Hodge~301, containing cooler, more evolved, supergiants, to the northwest of the centre.} \item{A separate `Sco OB1 Phase' surrounding the luminous blue variable R143.} \end{itemize} \smallskip With its rich stellar populations, 30~Dor is the ideal laboratory in which to investigate a number of important outstanding questions regarding the physics, evolution, binary fraction, and chemical enrichment of the most massive stars. Building on the successes of the VLT-FLAMES Survey of Massive Stars \cite{e05}, here we introduce a new multi-epoch spectral survey of over 1,000 massive stars in the 30~Dor region. In the broader context, 30~Dor is at the northern end of a large column of molecular gas which extends south for over 2,000\,pc \cite{c88,f08}. N-body models examining the recent edge-on motion of the LMC through the halo of the Milky Way suggest significant star formation in the eastern part of the LMC, as manifested by 30~Dor, due to ram pressure \cite{m09}. With the reservoir of gas to the south, the region seems destined to become an even more spectacular star-formation complex over the next few million years. 
\section{Multiplicity in Massive Stars} The effect of binarity/multiplicity on the formation and subsequent evolution of high-mass stars is a vibrant area of research. Indeed, one of the key ingredients missing from current theories of both star formation and cluster evolution is a robust binary fraction of massive stars, and the distribution of the mass ratios in these systems. Some motivation in this direction was provided by \cite{zy07}: \begin{quotation} `The future of spectroscopic massive binary research lies in the near-IR and in multi-epoch radial velocity surveys of embedded massive stars' \end{quotation} These words were primarily concerned with the earliest stages of star formation but they coincide with growing interest in multi-epoch spectroscopic studies in open clusters, aimed at identification and characterisation of their binary populations (Table~\ref{binaries}). The most pertinent of these is the study of 50 early-type stars in 30~Dor by \cite{b09}. From Gemini spectroscopy at seven epochs they found a binary fraction of $\ge$50\%, noting that the data were not inconsistent with it being 100\%. Recent multi-epoch AO-corrected SINFONI observations found (tentative) evidence for a short-period companion in only one of the six central WR stars at the core of R136 \cite{schnurr09}, but it is clear that there is a very rich binary population in 30~Dor. \vspace*{-0.075in} \begin{table}[h] \begin{center} \caption{Selected multi-epoch spectroscopic surveys in open clusters.}\label{binaries} \begin{tabular}{lcl} \hline Cluster & Binary fraction & Reference \\ \hline IC\,1805 & $\ge$0.20 & \cite{db06} \\ NGC\,6231 & $\ge$0.63 & \cite{s08} \\ NGC\,6611 & $\ge$0.44 & \cite{s09} \\ NGC\,2244 & $\ge$0.17 & \cite{mahy09} \\ 30\,Dor & $\ge$0.50 & \cite{b09} \\ \hline \end{tabular} \end{center} \end{table} One of the serendipitous aspects of the FLAMES Survey of Massive Stars was the large number of spectroscopic binaries discovered (Table~\ref{fsms}).
The time sampling of the service-mode observations did a reasonable (but not thorough) job of binary detection, with lower limits to the binary fraction of $\sim$30\% in three of the target clusters. Adopting the same methods as \cite{s09}, we have calculated the detection probabilities for short, intermediate and long period binaries for each cluster field. The aggregated detection probabilities (for systems with periods of two days to ten years) are given in the final column of Table~\ref{fsms}. The similarity in the detection probabilities suggests that the lower fraction found in NGC\,330 is genuinely different to the others. While it is unfair to compare the NGC\,330 observations with the rich, younger cluster fields of NGC\,346 and N11, NGC\,2004 is its LMC cousin; this difference in the binary fraction is intriguing and the subject of ongoing work. \vspace*{-0.075in} \begin{table}[h] \begin{center} \caption{Spectroscopic binaries from Evans et al. (2006).}\label{fsms} \begin{tabular}{lccccc} \hline Cluster & Galaxy & \#\,O$+$Early B & \#\,Binary & Binary fraction & Detection prob. [2d-10yr]\\ \hline NGC 346 & SMC & 103 & 27 & $\ge$\,26\% & 0.66 \\ NGC 330 & SMC & 104 & $\phantom{2}$4 & $\ge$$\phantom{2}$\,4\% & 0.71 \\ NGC 2004 & LMC & 105 & 24 & $\ge$\,23\% & 0.64 \\ N11 & LMC & 120 & 43 & $\ge$\,36\% & 0.64 \\ \hline \end{tabular} \end{center} \end{table} The relationship of the binary fraction with density and the spatial extent of a cluster is still unclear (e.g. Mahy et al., 2009), while the binary fraction in OB associations is often similar to clusters, but with fewer short-period systems (Zinnecker \& Yorke, 2007). \section{The Tarantula Survey} The new survey comprises 160\,hrs of VLT-FLAMES spectroscopy in the 30~Dor region (PI: Evans). Most of the observations (142\,hrs) have now been completed, with the remainder scheduled for the coming semester. 
One of the prime drivers for this survey was the issue of binarity, shaping the multi-epoch observational strategy. It is clear that identification of binaries, and the mass ratios in those systems, is an important empirical result for N-body models of star and cluster formation/evolution. Moreover, in many clusters, e.g. NGC\,6231 (Sana et al., 2008), the majority of O-type stars are members of a binary system. Thus, to gain a true understanding of the upper H-R diagram, the effects of binarity need to be fully included in theoretical models of stellar evolution \cite{selma}. Although we have focussed on this aspect for this symposium, the genesis of the survey arose from a much broader range of other scientific motivations, including: \begin{itemize} \item{The role of stellar rotation in the chemical enrichment and evolution of massive stars. Hunter et al. (2008) have revealed new challenges for theory in B-type stars; we seek to investigate these effects in the more dominant, massive O-type stars.} \item{Determination of the rotational velocity distribution in 30~Dor. Are there sufficient high-mass, rapidly-rotating stars to provide a channel for long-duration $\gamma$-ray bursts \cite{y06}?} \item{Armed with precise radial velocities and identification of binaries, do we see kinematic evidence of mass segregation and/or infant mortality in and around R136?} \item{A more holistic objective of a near-complete census of the closest `proto-starburst', with applications in the context of population synthesis methods and interpretation of spectra of unresolved massive stars clusters at Mpc distances.} \end{itemize} \section{GIRAFFE Observations} The primary dataset comprises spectroscopy of 1,000 stars using the GIRAFFE spectrograph, which is fed by 132 MEDUSA fibres available for science (or sky) observations across a 25$^\prime$ field \cite{flames}. 
Targets were selected from unpublished imaging with the Wide-Field Imager (WFI) on the ESO/MPG 2.2-m telescope, and from Brian Skiff's reworking of the \cite{selman} photometric catalogue in the central 90$^{\prime\prime}$. To obtain a representative sample of the upper part of the HR diagram, including evolved luminous stars, no colour cut was applied to potential targets but a faint cut-off ($V\,<\,$17) was enforced to ensure sufficient signal-to-noise for each star. Nine MEDUSA configurations were observed, each of which was observed at three wavelength settings (see Table~\ref{data}). This yields full coverage of the classical blue-optical region used for spectroscopic classification and analysis, combined with higher resolution spectroscopy of the H$\alpha$ region to enable determination of the stellar wind intensity. From inspection of initial reductions, the minimum signal-to-noise in the stacked spectra for the faintest stars is $\sim$50, i.e. the spectra are suitable for quantitative analysis as well as radial velocity monitoring. \begin{table}[h] \begin{center} \caption{Summary of FLAMES-GIRAFFE observations.}\label{data} \begin{tabular}{cccc} \hline GIRAFFE setting & $\lambda$-coverage (\AA) & $R$ & Exposures \\ \hline LR02 & 3980-4535 & 6,500 & 6$\times$(2$\times$1815s) \\ LR03 & 4505-5050 & 7,500 & 3$\times$(2$\times$1815s) \\ HR15N & 6470-6790 & 17,000 & 2$\times$(2$\times$2265s) \\ \hline \end{tabular} \end{center} \end{table} The distribution of the majority of the MEDUSA targets is shown in Figure~\ref{fig1}. The survey samples the full extent of 30~Dor and outwards into the `field' population and other nearby OB associations to fully exploit the FLAMES field-of-view and spare fibres, thus bolstering the observational sample. 
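As a rough consistency check on the exposure scheme in Table~\ref{data}, the quoted pattern $N\times(2\times t\,{\rm s})$ can be tallied per GIRAFFE setting. The sketch below assumes (our reading, not stated explicitly above) that this notation means $N$ visits of two $t$-second exposures:

```python
# Exposure pattern per GIRAFFE setting from Table 3, interpreted as
# (visits, exposures per visit, seconds per exposure).
settings = {
    "LR02":  (6, 2, 1815),
    "LR03":  (3, 2, 1815),
    "HR15N": (2, 2, 2265),
}

total = 0.0
for name, (visits, n_exp, t_exp) in settings.items():
    hours = visits * n_exp * t_exp / 3600.0
    total += hours
    print(f"{name}: {hours:.2f} h on source")

print(f"per MEDUSA configuration: {total:.2f} h; "
      f"nine configurations: {9 * total:.1f} h")
```

This counts on-source integration only (no overheads, and none of the ARGUS/UVES time), so it sits comfortably below the 160-hour allocation quoted above.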
\begin{figure}[t] \begin{center} \includegraphics[width=5in]{medusa.pdf} \\ \caption{14$^\prime\,\times$14$^\prime$ V-band WFI image showing the FLAMES-GIRAFFE targets in and around 30~Dor (north to the top, east to the left).}\label{fig1} \end{center} \end{figure} \subsection{Preliminary Classification} Work is now progressing in earnest with the final science reductions of the GIRAFFE spectra. To characterise the spectral content of the survey, we extracted the reduced spectra from one pair of LR02 observations for each MEDUSA configuration. In advance of the full reductions, it was not possible to classify $\sim$150 stars from just one observation (although most are likely B-type stars) but from visual inspection of the spectra the sample contains: \begin{itemize} \item{In excess of 300 O-type stars and $\sim$20 Wolf-Rayet/`slash' stars. This is a hugely significant improvement in terms of sampling the upper HR diagram, e.g., compared to the analysis of 28 O-type stars in the LMC by \cite{m07}. Each star will be studied for binary companions, and then analysed to obtain physical and stellar wind parameters, including the first large-scale study of nitrogen enrichment in O-type stars.} \item{Over 400 B-type spectra, which will be used to establish the baseline chemical abundances in 30~Dor and, with such a large sample in one field, will be used to revisit the role of rotationally-induced mixing on surface nitrogen enrichment \cite{h08}.} \item{$\sim$150 cooler stars with spectral types of A, F and later. Some will be foreground objects to be discarded, but the majority will be evolved, luminous stars which will be used to investigate the short lifetimes of these evolutionary phases via population synthesis models.} \end{itemize} \subsection{Binary Detection Probabilities} Using the methods from \cite{s09} we have calculated the detection probabilities for binaries from the actual time sampling of the nine observed MEDUSA configurations. 
In these calculations we assume a $\Delta v_{\rm r}$ threshold of 20 km$~{\rm s}^{-1}$, requiring a radial velocity precision of $\sim$5 km$~{\rm s}^{-1}$ (which should be achievable at the resolving power of the new spectroscopy for all but the fastest rotating stars). The detection probabilities, as a function of orbital period, for the first MEDUSA configuration are shown by the black line in Figure~\ref{fig2}. We are relatively complete up to periods of a few 10s of days, with a steep fall-off beyond 100 days. The inclusion of one additional epoch in the coming observing season significantly helps with the detection of both intermediate and long period binaries (red/grey line). By quantifying our detection probabilities using such simulations, we will be able to put firm limits on the observed binary fraction. \begin{figure}[h] \begin{center} \includegraphics[width=4in]{fieldA_P.pdf} \\ \caption{Detection probability of binary companions for one of the MEDUSA configurations. The black line shows the detection results for the five epochs already executed; the red (grey) line illustrates the increased probability obtained from a sixth epoch, scheduled for observation in the coming semester, i.e. separated by approximately one year from the other epochs.}\label{fig2} \end{center} \end{figure} \vspace{-0.2in} \section{Supplementary Data} \subsection{ARGUS \& UVES Observations of R136} R136 is too dense for effective use of the MEDUSA fibres, so a 15$^{\prime\prime}$ exclusion radius around the core was employed in the fibre allocations. To investigate the dynamics and binarity of stars in and around R136, as part of the Large Programme we have observed five pointings with the ARGUS integral field unit (IFU) which delivers a 12$^{\prime\prime}\,\times\,$7$^{\prime\prime}$ field-of-view. Each pointing has been observed with the LR02 GIRAFFE setting (which delivers a resolving power of $\sim$10,000 in the IFU mode) at five epochs. 
In parallel to the ARGUS observations, we used the fibre-feed to the red arm of UVES to observe 25 stars that were not included in the MEDUSA configurations. The $\lambda$5200 standard set-up was used, delivering spectral coverage of $\sim\lambda\lambda$4200-6200 at $R\,=\,$47,000. \subsection{VLT-SINFONI K-band Spectroscopy} The majority of the known WR and extreme O-type emission-line stars in 30~Dor are in the central regions. Near-IR IFU observations with SINFONI (12hrs; PI: Gr\"{a}fener) will be used to obtain K-band spectroscopy of the central arcminute around R136. The stellar wind lines in the K-band, principally from Brackett~$\gamma$ and He~{\small II}, are more sensitive than the optical lines at low mass-loss rates, enabling a more precise determination of the physical parameters of the most extreme stars. \subsection{Faulkes Photometric Follow-up} In the longer term, the spectroscopy in the central $\sim$10 arcminutes will be supplemented with multi-band photometric monitoring from the Faulkes Telescope South, as part of their schools education programme. Faulkes has a 4{\mbox{\ensuremath{.\!\!^\prime}}}7\,$\times$\,4{\mbox{\ensuremath{.\!\!^\prime}}}7 field-of-view, so the main body of 30~Dor will be mapped with several pointings, delivering multi-epoch photometry that will, for example, assist with the analysis of identified binary systems. \section{Summary} We have an exceptional and unique data resource available to us to investigate the massive-star population in 30\,Dor, now the hard work begins! \bigskip \noindent{\bf Acknowledgements:} Based on observations from ESO programme 182.D-0222. We are indebted to Brian Skiff for his careful reworking of the Selman et al. astrometry. \vspace{-0.1in}
\section{Introduction}\label{intro} The InfraRed Spectrograph (IRS; \citealt{houck04}) onboard the {\it Spitzer} Space Telescope provided, over a period of more than five years, low- and high-resolution mid-infrared (MIR) spectra from many thousands of galactic and extragalactic sources, at wavelengths between 5 and 40 \textmu m. In active galactic nuclei (AGN), the MIR emission is believed to be UV light reprocessed by the hot dust surrounding the AGN. The dust, often assumed to form a toroidal structure on parsec scales around the nucleus in a plane extending that of the accretion disk, is considered to be distributed either smoothly \cite[e.g.][]{pier92, granato94, fritz06} or in clumps \cite[e.g.][]{hoenig06,nenkova08}. Dust is believed to consist of graphite and silicate grains, each leaving their unmistakable signature in the Spectral Energy Distribution (SED) of AGN, namely the $\sim$1500K black-body-like rise of the MIR continuum of type 1 (unobscured) AGN \cite[e.g.][]{hatzimi05}, corresponding to the sublimation temperature of graphite grains, and an absorption feature centred at 9.7 \textmu m, long known to appear in type 2 (obscured) AGN, attributed to silicate grains. The uncertain observational evidence for silicate in emission in type 1 AGN \citep{clavel00}, however, posed a problem for the Unified Scheme according to which the various types of AGN can be explained by alignment effects between the central sources, the obscuring material (torus) and the observer (\citealt{antonucci93}). The problem was finally solved when silicates were unambiguously observed in emission in the MIR spectra of many known AGN with IRS (\citealt{siebenmorgen05}; \citealt{sturm05}; \citealt{hao05}; \citealt{buchanan06}; \citealt{shi06}). Since then, various studies of the behaviour of the silicates have appeared. The very first such works, based on a few tens of AGN \citep[e.g.][]{spoon07, hao07, wu09}, demonstrated that the silicate feature shows a wide diversity.
On average spectra it varies with AGN type, ranging from moderate emission in bright quasars to almost no emission or slight absorption in Seyfert 1 galaxies to stronger absorption in Seyfert 2 galaxies. Meanwhile, sparse reports on the detection of silicates in emission \citep{mason09, nikutta09} showed the diversity of the dust properties, albeit some of them rather rare. Several of the above observations also revealed a second silicate feature at 18 \textmu m, as predicted by the models. The relative strength of the silicate features at 9.7 and 18 \textmu m has been put forward as a possible diagnostic of the torus morphology \citep[e.g.][]{thompson09, feltre12} and chemistry \citep{sirocky08}. In this paper, we put together the largest sample of active galaxies with available IRS spectra ever composed (Sec. \ref{sec:sample}) with the aim to complement and extend previous studies on the MIR characteristics of AGN. To this aim, we apply a new spectral decomposition technique to separate the nuclear emission from that of the host (Sec. \ref{sec:decomp}). We then proceed with a thorough investigation of the behaviour of the silicate feature at 9.7 \textmu m and 18 \textmu m in the various AGN types (Sec. \ref{sec:silicates}). Section \ref{sec:discuss} discusses our most important results and places them into a more general context. \section{The sample}\label{sec:sample} The Cornell AtlaS of {\it Spitzer}/Infrared Spectrograph project (CASSIS\footnote{http://cassis.sirtf.com}; \citealt{lebouteiller11}) has made available the reduced spectra of all the sources observed with the low resolution modules of IRS, a total of about 11000 unique observations. 
Our master sample is derived from the CASSIS version 6 catalogue, keeping each object that fulfills the following requirements: i) has an identification in the NASA/IPAC Extragalactic Database\footnote{http://ned.ipac.caltech.edu} (NED); ii) has a robust (optical or infrared) spectroscopic redshift; iii) the IRS spectrum fully covers the range between 6 and 13 micron restframe; iv) the median signal-to-noise ratio (SNR) per pixel of the IRS spectrum is $>$2. To verify requirement i) we rely on the source cross-identification from CASSIS, which matches the source coordinates with the NED and SIMBAD databases \citep{lebouteiller11}. For sources with no spectroscopic redshift in NED, we measure the redshift from the IRS spectrum using the template matching method presented in \citet{hernan12}. The typical redshift uncertainty with this method is $\Delta$$z$/(1+$z$) $\sim$ 0.002, well below the spectral resolution of our resampled spectra ($\Delta$$\lambda$/$\lambda$=0.005--0.02). The resampling also increases the minimum SNR per resolution element from 2 to $>3$ (see Sec. \ref{sec:decomp}). Taking these criteria into account and removing duplicate entries, we end up with a list of 2299 extragalactic objects. Out of these, 784 objects have a NED classification as AGN of types 1, 2 or intermediate, and this is the sample we will be working with henceforth. AGN of undefined type were left out of the sample. Among the type 1 AGN lie 141 quasars from the Sloan Digital Sky Survey (SDSS) Data Release 7 Quasar Catalogue \citep{shen11}, that for few specific purposes will be examined separately. The numbers of the object per subsample are shown in Table \ref{tab:samples}. The AGN sample is heterogeneous but, nevertheless, representative of the infrared (IR) AGN population. \begin{table} \begin{center} \caption{Number of objects per AGN type. 
} \label{tab:samples} \begin{tabular}{llll} \hline Type & N$_{\rm obj}$ & Type & N$_{\rm obj}$ \\ \hline Type 1 AGN & 363 & Sy1.2 & 24 \\ Type 2 AGN & 325 & Sy1.5 & 32 \\ & & Sy1.8 & 18 \\ SDSS quasars & 141 & Sy1.9 & 22 \\ \hline \end{tabular} \end{center} \end{table} \section{Spectral decomposition}\label{sec:decomp} The observed MIR spectra of AGN are affected by the presence of their host galaxies in two important ways. One is the absorption or scattering of AGN emission by material (gas and dust) in the AGN line of sight. This so-called foreground absorption modulates the AGN spectrum with a multiplicative factor $e^{-\tau(\lambda)}$, where $\tau(\lambda)$ represents the optical depth at wavelength $\lambda$. It is not possible to distinguish foreground absorption from intrinsic AGN absorption (that is, the one produced in the AGN torus) from MIR data alone, since similar extinction laws are considered to apply to dust grains in the torus and the host. The amount of foreground extinction varies from source to source, but at MIR wavelengths it is expected to be mild in most sources, with the exception of some dusty starbursts and edge-on spiral galaxies. The other important effect on the AGN spectra is the contamination from host galaxy emission that blends with the AGN spectrum. The importance of this background emission depends on the relative luminosities of the AGN and the host and --crucially-- on the spatial resolution of the spectroscopic observations. Since the emission from the AGN is typically unresolved, an increase in the spatial resolution implies that a larger fraction of the host emission can be resolved away. In any case, the background emission represents an additive modification to the AGN spectrum. The purpose of our spectral decomposition is to separate the AGN and host emissions in the integrated AGN+host spectra. If successful, this decomposition allows us to study the AGN emission as if the host galaxy was resolved away.
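Schematically, the two effects just described combine as $F_{\rm obs}(\lambda)=F_{\rm AGN}(\lambda)\,e^{-\tau(\lambda)}+F_{\rm host}(\lambda)$: extinction acts multiplicatively, while the host contribution is additive. The toy model below (our illustration with made-up component shapes, not the actual deblending code) builds such a composite spectrum:

```python
import numpy as np

wave = np.linspace(5.2, 22.0, 500)            # rest wavelength, micron

# Toy spectral components in arbitrary units; the shapes are illustrative.
f_agn  = (wave / 10.0) ** 1.8                 # red AGN continuum
f_host = 0.3 + 0.2 * np.exp(-0.5 * ((wave - 7.7) / 0.4) ** 2)  # PAH-like bump

# Crude silicate-like foreground optical depth peaking near 9.7 micron.
tau = 0.8 * np.exp(-0.5 * ((wave - 9.7) / 1.0) ** 2)

# Multiplicative extinction on the AGN term, additive host term.
f_obs = f_agn * np.exp(-tau) + f_host
```

Recovering the two right-hand terms given only $F_{\rm obs}$ is precisely the decomposition problem, which is why the fit uses templates that already carry built-in extinction.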
To this aim we employ the decomposition method presented in \cite{hernan15}. The method relies on the large number of high-quality spectra in CASSIS to reproduce the spectra of composite sources as a linear combination of three CASSIS spectra, each selected from subsamples of sources whose mid-IR emission is completely dominated by the AGN, the star-formation, or the stellar population. We select these `single-spectral-component' templates as follows: \begin{figure*} \begin{center} \includegraphics[width=17cm]{deblendIRS.pdf} \caption{Examples of best-fitting decomposition models for the IRS spectra. The black solid line with grey shading represents the IRS spectrum (resampled at $\Delta\lambda$=0.1 \textmu m resolution) and its 1-$\sigma$ uncertainty (photometric errors only). The dotted, dashed, and dot-dashed lines represent, respectively, the PAH, AGN, and stellar components of the best-fitting model, shown in yellow. The thin solid line at the bottom of each plot represents the residual (spectrum - model).} \label{fig:decompositions} \end{center} \end{figure*} For the `stellar' templates, we select 19 local elliptical and S0 galaxies. To ensure they have negligible star formation, we require the Polycyclic Aromatic Hydrocarbons (PAH) bands to be very weak or absent, with equivalent widths for the 6.2 \textmu m (EW$_{62}$) and 11.3 \textmu m (EW$_{113}$) PAH features $<$0.02 \textmu m. We also check that the IRS spectra have a blue stellar-like MIR continuum and the sources are not classified as AGN in NED. The 54 star-forming templates (`PAH' templates) are IRS spectra of normal star-forming and starburst galaxies at redshifts up to $z$=0.14. We make sure these sources do not have significant stellar contributions to their MIR spectra by requiring both high EW of the PAH features (EW$_{62}$ $>$ 1.0 \textmu m and EW$_{113}$ $>$ 1.0 \textmu m) and a very weak continuum at 5 \textmu m. We also verify that they are not classified as AGN in NED. 
Finally, the 147 `AGN' templates are IRS spectra of sources classified in the optical as quasars, Seyfert galaxies, LINERs, and blazars. We also include a variety of optically obscured AGN and radiogalaxies. The templates include sources at redshifts from $z$=0.002 to $z$=1.4 and cover several orders of magnitude in bolometric luminosity. We ensure that the AGN templates do not contain any significant emission from the host galaxy by requiring the PAH features to be extremely weak or absent (EW$_{62} <$ 0.02 \textmu m and EW$_{113} <$ 0.02 \textmu m). Because the AGN and PAH templates are real spectra, each of them already includes some amount of foreground extinction built in. Therefore, we rely on the large number of AGN and PAH templates to reproduce the diversity of observed spectra that arises from different levels of foreground extinction as well as source to source variation in the intrinsic AGN and host spectra. This approach has the advantage of not depending on assumptions about the --unobservable-- intrinsic AGN spectrum or the extinction law. Obtaining a good fit with this decomposition method requires finding an AGN template with the appropriate level of foreground absorption. This can be problematic for sources with very deep absorption features, since few pure-AGN spectra have them. Accordingly, decomposition results for sources with deep absorption features have larger residuals and uncertainties. We separate the AGN sample into two groups depending on the spectral coverage: those objects with silicate features observed both at 9.7 and at 18 \textmu m in their full extent and those for which only the feature at 9.7 \textmu m is covered. For sources in the first group, we fit the spectral range between 5.2 and 22 \textmu m restframe, while for those in the second, we fit only the 5.2 to 15.8 \textmu m range. We resample both the spectra and templates to a common wavelength grid with an uniform wavelength resolution of $\Delta\lambda$=0.1 \textmu m. 
This increases the SNR per resolution element by $\sim$60\% on average, while still allowing us to resolve important features such as the PAH bands. For every galaxy in the sample we try spectral decompositions using every possible combination of a stellar template, a PAH template, and an AGN template. The best-fitting model is the one that produces the absolute minimum of $\chi^2$. However, to calculate expected values for observables (e.g. the luminosity of the AGN component or the strength of the silicate feature) and their uncertainties, we use the full probability distribution functions (PDFs) calculated with the `max' method described in \citet{noll09} \citep[for details see][Sec. 2]{hernan15}. The method also yields, for each object, the fractional contribution of each of the three components to the total luminosity in the wavelength range of interest. Thanks to the use of large sets of real spectra as templates, our decomposition method manages to reproduce the MIR spectrum of composite sources with unprecedented accuracy (see Fig. \ref{fig:decompositions}). Typical $\chi^2$ values are lower than 2, indicating that residuals in the model fits are dominated by noise in both the spectra and templates for most sources. \section{The Silicate features}\label{sec:silicates} The silicate features at 9.7 and 18 \textmu m observed in the IR spectra of AGN are believed to arise from the inner, hotter parts of the torus or the hot, illuminated side of the clumps. We define the strength of the silicate feature following \cite{pier92}: \begin{equation} {\rm S_{\lambda}}= {\rm ln} \frac{{\rm F}(\lambda_{peak})}{{\rm F}_{\rm c}(\lambda_{peak})} \label{eq:ssil} \end{equation} \noindent where F($\lambda_{\rm peak}$) and F$_{\rm c}(\lambda_{\rm peak})$ are the flux densities of the spectrum and the underlying continuum at the peak wavelength of the feature, ${\lambda_{\rm peak}}$. A negative (positive) value indicates a feature in absorption (emission). 
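As an illustration, the template-combination fit described above and the strength definition of Eq. \ref{eq:ssil} can be sketched in a few lines of Python. The templates, noise level, and feature window below are toy assumptions, not CASSIS data, and the real method also marginalises over PDFs, which we omit here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Restframe wavelength grid (micron) at the paper's 0.1-micron sampling.
wav = np.arange(5.2, 15.8, 0.1)

# --- Toy single-component templates (NOT real CASSIS spectra) ---
stellar = 1.0 / wav                                   # blue, featureless continuum
pah = 0.05 + sum(a * np.exp(-0.5 * ((wav - c) / 0.15) ** 2)
                 for c, a in [(6.2, 1.0), (7.7, 1.5), (11.3, 1.0)])
agn = (wav / 10.0) ** 1.8 * np.exp(                   # red continuum with a
    -0.6 * np.exp(-0.5 * ((wav - 9.7) / 1.0) ** 2))   # 9.7-um absorption trough

templates = {"stellar": [stellar], "pah": [pah], "agn": [agn]}

# Synthetic "observed" spectrum: a known mixture plus Gaussian noise.
err = np.full_like(wav, 0.02)
obs = 0.1 * stellar + 0.3 * pah + 1.0 * agn + rng.normal(0.0, 1.0, wav.size) * err

# Brute force over every (stellar, PAH, AGN) template triple; for each,
# solve for the scale factors by weighted linear least squares and keep
# the combination with the minimum chi-squared.
best = None
for s in templates["stellar"]:
    for p in templates["pah"]:
        for a in templates["agn"]:
            basis = np.stack([s, p, a], axis=1)
            coef, *_ = np.linalg.lstsq(basis / err[:, None], obs / err, rcond=None)
            coef = np.clip(coef, 0.0, None)           # crude non-negativity
            model = basis @ coef
            chi2 = np.sum(((obs - model) / err) ** 2) / wav.size
            if best is None or chi2 < best["chi2"]:
                best = {"chi2": chi2, "agn": coef[2] * a, "model": model}

# Fractional AGN contribution over the fitted window (cf. f_AGN in the text).
f_agn = best["agn"].sum() / best["model"].sum()

def silicate_strength(wave, flux, lo=8.0, hi=12.5):
    """Eq. (1): S = ln F(lambda_peak)/F_c(lambda_peak), with the local
    continuum linearly interpolated across the feature window [lo, hi]."""
    in_feat = (wave > lo) & (wave < hi)
    cont = np.interp(wave, wave[~in_feat], flux[~in_feat])
    ratio = np.log(flux[in_feat] / cont[in_feat])
    k = np.argmax(np.abs(ratio))                      # extremum of the feature
    return ratio[k], wave[in_feat][k]

s97, lam_peak = silicate_strength(wav, best["agn"])
print(f"chi2/N = {best['chi2']:.2f}  f_AGN = {f_agn:.2f}  "
      f"S_9.7 = {s97:.2f} at {lam_peak:.1f} um")
```

With only one template per class the loop is trivial, but the structure is the same grid search; the sketch recovers an AGN-dominated fit and a negative S$_{\rm 9.7}$ for the absorbed toy AGN.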
\subsection{The Silicate feature at 9.7 micron}\label{sec:silicates97} The top panel of Fig. \ref{fig:ssilirshisto} shows the distribution of the strength of the silicate feature at 9.7 \textmu m as measured on the original IRS spectra, S$_{\rm 9.7 \,\, tot}$, for type 1 and type 2 AGN (the 96 AGN of intermediate type are not included here and will be discussed separately). \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{SsilIrsHistos_noDiv.pdf} \includegraphics[width=8cm,angle=0]{SsilAgnHistos_noDiv.pdf}\\ \caption{S$_{\rm 9.7}$ distribution for type 1 (blue solid histogram) and type 2 (red dashed histogram) AGN, before (top panel) and after (bottom panel) the subtraction of the host galaxy.} \label{fig:ssilirshisto} \end{center} \end{figure} As observationally established over the past ten years, S$_{\rm 9.7 \,\, tot}$ takes a wide range of values \citep[e.g.][just to name a few]{hao07, levenson07, spoon07, wu09}. The average spectra of type 1 AGN exhibit the feature in weak to moderate emission \citep[see e.g.][]{hao07,wu09}, while those of type 2 AGN present the feature in absorption \citep[][]{sturm06, schweitzer08, mason09, hernan11}. Individually, however, type 1 and type 2 AGN can show silicates in absorption and emission, respectively. In our sample of 698 type 1 and 2 AGN, 35\% of the type 1 AGN present the feature in absorption and about 15\% of the type 2 AGN show the feature in emission. At the same time, when in emission, the peak of the feature is often shifted to wavelengths longer than 9.7 \textmu m in the rest frame, while the shift affects the feature much less when in absorption, as already reported by e.g. \cite{shi14}. Fig. 
\ref{fig:lpeakssil} shows the distribution of the shift, $\Delta\lambda_{\rm peak} = \lambda_{\rm peak}-9.7$ \textmu m, as a function of the fractional contribution of the AGN to the luminosity in the range between 5 and 15 \textmu m, $f_{\rm AGN}$, for type 1 and type 2 objects (filled and open symbols, respectively), colour-coded by S$_{\rm 9.7 \,\, tot}$. Looking at the sample as a whole, 65\% (20\%) of the objects with the silicates in emission have their $\lambda_{\rm peak} >$ 10.2 \textmu m ($\lambda_{\rm peak} > 10.6$ \textmu m), while the fraction of objects with the same amount of shift among the AGN with silicates in absorption is less than 3\%. The shift to longer wavelengths is largely associated with a silicate feature in emission, and this in turn only occurs in strongly AGN-dominated spectra. \begin{figure} \begin{center} \includegraphics[width=8.5cm,angle=0]{DeltalpeakAgnFracSsilIrs.pdf}\\ \caption{${\rm \Delta \lambda}_{\rm peak}$ as a function of $f_{\rm AGN}$, for type 1 and type 2 objects (filled and open symbols, respectively). The symbols are colour-coded based on the value of S$_{\rm 9.7 \,\, tot}$. The quantisation of ${\rm \Delta \lambda}_{\rm peak}$ is an artifact of the algorithm that measures $\lambda_{\rm peak}$.} \label{fig:lpeakssil} \end{center} \end{figure} \subsection{Removing the effects of the host}\label{sec:silagn} As S$_{\rm 9.7 \,\, tot}$ is measured on the original IRS spectra, we expect the derived values to be contaminated by the emission of the host galaxy for all objects but those for which the AGN completely dominates the MIR emission. The top panel of Fig. \ref{fig:ssilfagn} shows S$_{\rm 9.7 \,\, tot}$ as a function of $f_{\rm AGN}$. The filled and open symbols correspond to type 1 and type 2 AGN, respectively. Error bars for S$_{\rm 9.7 \,\, tot}$ are shown in this plot but will not be repeated in the following figures, in order to keep the plots as uncluttered as possible. 
What we see here is that as the contribution of the host becomes more important (i.e. as $f_{\rm AGN}$ decreases) the strength of the silicate feature decreases, with only a few objects exhibiting S$_{\rm 9.7 \,\, tot} > 0.0$ for $f_{\rm AGN} < 0.7$. The dashed line shows the (weak) trend for the full sample, with a linear correlation coefficient of $r$=0.45. The trend, however, is driven by type 1 objects (filled symbols and corresponding solid line) due to the contamination by the emission of the host galaxy. \begin{figure} \begin{center} \includegraphics[width=9cm,angle=0]{SsilAgnFrac_new.pdf}\\ \vskip -1.3cm \caption{S$_{\rm 9.7}$ as a function of $f_{\rm AGN}$, before (upper panel) and after (lower panel) the subtraction of the host galaxy. Filled and open circles denote type 1 and type 2 AGN, respectively. The thin dashed lines show S$_{\rm 9.7}$=0.0 and $f_{\rm AGN}$=0.7, the thick dashed line shows the weak ($r$=0.45) linear correlation of the full sample, and the solid and long-dashed lines show the correlations for the type 1 ($r$=0.51) and type 2 ($r$=0.26) objects, respectively.} \label{fig:ssilfagn} \end{center} \end{figure} In order to see how the silicates behave in the vicinity of the nucleus, we need to remove the contribution of the host galaxy, applying the spectral decomposition procedure described in Sec. \ref{sec:decomp}. The distribution of the strength of the silicate feature on the host-subtracted spectrum, S$_{\rm 9.7 \,\, AGN}$, is shown in the bottom panel of Fig. \ref{fig:ssilirshisto}. The behaviour of S$_{\rm 9.7 \,\, AGN}$ with $f_{\rm AGN}$, shown in the lower panel of Fig. \ref{fig:ssilfagn}, differs from that of S$_{\rm 9.7 \,\, tot}$ in that there is now no correlation between the two quantities, confirming that the correlation was indeed due to contamination from the emission of the host. A direct comparison of the two measurements of the silicate feature at 9.7 \textmu m is shown in Fig. 
\ref{fig:silvssil}, colour-coded by the value of $f_{\rm AGN}$. Objects with MIR emission completely dominated by the AGN (light-colour symbols) are not affected by the subtraction of the host (they lie on or very near the 1:1 line). However, as the contribution of the host galaxy becomes more important, i.e. as $f_{\rm AGN}$ decreases (symbols become darker in Fig. \ref{fig:silvssil}), the points deviate more and more from the 1:1 line. By subtracting the emission of the galaxy, the number of type 1 AGN with silicates in emission increases by 20\%, reaching 80\% of all type 1 AGN, while the number of type 2 AGN with the feature in emission doubles, reaching a total of 25\%. At the same time, 35\% of both type 1 and type 2 AGN with the feature in absorption exhibit the feature in even deeper absorption once the emission from the host is removed. This happens because AGN with a silicate feature in deeper absorption than that of the surrounding host get their silicate feature `refilled' in the integrated spectrum. \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{SsilAgnIrsAgnFrac_noDiv.pdf}\\ \caption{S$_{\rm 9.7 \,\, tot}$ versus S$_{\rm 9.7 \,\, AGN}$, colour-coded based on $f_{\rm AGN}$. Filled and open symbols denote type 1 and type 2 AGN, respectively.} \label{fig:silvssil} \end{center} \end{figure} In order to check whether S$_{\rm 9.7 \,\, AGN}$ is affected by the AGN luminosity, as proposed by e.g. \cite{maiolino07}, we have to rely on the luminosity at 7 \textmu m, L$_7$, which spans six orders of magnitude in our sample. MIR luminosity in AGN has, in fact, been shown to correlate tightly with the X-ray luminosity \citep[see e.g.][]{lutz04,horst08,mateos15}; the ratio of the MIR to the bolometric luminosity, however, depends on the column density along the line of sight \citep[Fig. 21 in ][]{hatzimi09}. Fig. 
\ref{fig:ssill7} shows S$_{\rm 9.7 \,\, AGN}$ as a function of L$_{\rm 7}$ measured on the galaxy-subtracted spectrum, with the points coloured based on the redshift. There is clearly no dependence of S$_{\rm 9.7 \,\, AGN}$ on L$_{\rm 7}$, and the feature can be in emission (S$_{\rm 9.7}>0.0$) even in the faintest AGN. Deep silicate features (S$_{\rm 9.7} < $-2) are only found at intermediate luminosities (or redshifts). A deeply obscured but low-luminosity AGN would be overwhelmed by the emission of its host, which would `refill' the silicate feature. The lack of deep silicates in objects with very high IR luminosities and/or high redshifts, on the other hand, suggests a selection effect, as such objects might be too faint in the optical for a reliable identification. \begin{figure} \begin{center} \includegraphics[width=8.5cm,angle=0]{SsilAgnL7z_noDiv.pdf} \caption{The relation between S$_{\rm 9.7 \,\,AGN}$ and L$_{\rm 7}$, with points colour-coded based on their redshift. The filled and open circles correspond to type 1 and type 2 AGN, respectively.} \label{fig:ssill7} \end{center} \end{figure} \subsubsection{AGN of intermediate types}\label{sec:interm} Among the 784 AGN of our sample, 96 have a NED classification of intermediate-type Seyfert galaxies. The number of objects per type (comparable in all four sub-samples) is shown in Table \ref{tab:samples}. Figure \ref{fig:ssilinterm} shows the distribution of S$_{\rm 9.7 \,\, AGN}$ for the four sub-samples. \begin{figure} \begin{center} \includegraphics[width=8cm,angle=0]{SsilAgn_interm.pdf} \caption{The distribution of S$_{\rm 9.7 \,\, AGN}$ for the four intermediate types of Seyfert galaxies.} \label{fig:ssilinterm} \end{center} \end{figure} The mean S$_{\rm 9.7 \,\,AGN}$ decreases from Sy1.2 towards later types, with Sy1.2, 1.5 and 1.8 showing on average the silicate feature in very weak or no emission, while Sy1.9 has a negative average value. 
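Statistics of this kind (mean S$_{\rm 9.7}$ per type, the fraction of objects with $f_{\rm AGN}>0.7$, and the restricted mean over that fraction, as tabulated below) amount to a simple grouped aggregation. A minimal sketch, with hypothetical measurements rather than the actual sample values:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical (type, S_9.7, f_AGN) triples -- illustrative only,
# NOT the measurements of the paper's sample.
sample = [
    ("Sy1.2", 0.20, 0.95), ("Sy1.2", 0.15, 0.90), ("Sy1.2", 0.10, 0.85),
    ("Sy1.5", 0.10, 0.80), ("Sy1.5", -0.05, 0.75), ("Sy1.5", 0.08, 0.60),
    ("Sy1.8", 0.05, 0.72), ("Sy1.8", -0.02, 0.55), ("Sy1.8", 0.03, 0.40),
    ("Sy1.9", -0.30, 0.75), ("Sy1.9", -0.45, 0.50), ("Sy1.9", -0.15, 0.30),
]

groups = defaultdict(list)
for typ, s97, fagn in sample:
    groups[typ].append((s97, fagn))

stats = {}
for typ, rows in groups.items():
    s = [r[0] for r in rows]
    f = [r[1] for r in rows]
    stats[typ] = {
        "mean_S97": mean(s),
        "std_S97": stdev(s),
        "frac_fagn_gt_0.7": sum(x > 0.7 for x in f) / len(f),
        # restricted mean over AGN-dominated objects only
        "mean_S97_dom": mean(r[0] for r in rows if r[1] > 0.7),
    }

for typ in sorted(stats):
    print(typ, stats[typ])
```

With the toy numbers above, the mean strength decreases and the AGN-dominated fraction drops towards later types, mimicking the trend reported in the text.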
Additionally, as we move to later intermediate types, the dominance of the AGN component in the MIR emission decreases: while 96\% of Sy1.2 objects have $f_{\rm AGN}>0.7$, the fraction drops to 55\% for Sy1.9s. This is in agreement with \cite{deo07}, who found the MIR spectra of Sy1.8 and Sy1.9 to be dominated by starburst features (PAH). The mean values of S$_{\rm 9.7 \,\, AGN}$ and the fraction of objects with $f_{\rm AGN}>0.7$ for each intermediate type are shown in Table \ref{tab:interm}. Note that the shift towards lower values of S$_{\rm 9.7 \,\, AGN}$ when moving towards later Seyfert types persists even when only objects with $f_{\rm AGN}>0.7$ are considered, as seen in the right-most column of Table \ref{tab:interm}. Finally, the $\lambda_{\rm peak}$ of intermediate Seyfert types shows the same behaviour as the rest of the AGN sample, as described in Sec. \ref{sec:silicates97}. \begin{table} \begin{center} \caption{Mean values and standard deviations of S$_{\rm 9.7}$ for each intermediate AGN type, fraction of objects with $f_{\rm AGN} > 0.7$ and S$_{\rm 9.7}$ for that fraction.} \label{tab:interm} \begin{tabular}{lrcr} \hline Type & $\langle$S$_{\rm 9.7 \,\, tot} \rangle$ & $f_{\rm AGN}>0.7$ & $\langle$S$^{f_{\rm AGN}>0.7}_{\rm 9.7 \,\, tot}\rangle$\\ \hline Sy1.2 & 0.175$\pm$0.13 & 96\% & 0.160$\pm$0.14\\ Sy1.5 & 0.043$\pm$0.46 & 90\% & 0.087$\pm$0.22\\ Sy1.8 & 0.022$\pm$0.22 & 61\% & 0.059$\pm$0.11\\ Sy1.9 & -0.305$\pm$0.49 & 55\% & -0.227$\pm$0.35\\ \hline \end{tabular} \end{center} \end{table} \subsubsection{SDSS quasars}\label{sec:sdss} Out of the 784 AGN, 141 are spectroscopically confirmed SDSS quasars, for which estimates of the mass of the central black hole, M$_{\rm BH}$, derived from emission-line measurements, as well as bolometric luminosities, L$_{\rm bol}$, derived from fitting techniques, are available \citep{shen11}. 
\cite{maiolino07} reported an increase of S$_{\rm 9.7}$ with increasing M$_{\rm BH}$, from low-luminosity, low-redshift type 1 AGN to high-luminosity, high-redshift quasars. \cite{thompson09}, on the other hand, found no trend with luminosity. M$_{\rm BH}$ for the 141 SDSS quasars in question spans the range [10$^{7.3}$ M$_{\odot} - 10^{10}$ M$_{\odot}$], i.e. almost identical to that of the \cite{maiolino07} sample, but we do not find any correlation of S$_{\rm 9.7 \,\, AGN}$ with M$_{\rm BH}$, as shown in Fig. \ref{fig:ssilmbh}. Note, however, that the bolometric luminosities of the SDSS quasars of our sample are all above 8.2 $\times 10^{10}$ L$_{\odot}$ (10$^{44.5}$erg/sec), i.e. the two samples are not directly comparable. In the quasar sub-sample, S$_{\rm 9.7 \,\, AGN}$ and L$_{\rm bol}$ are completely uncorrelated (see colour-coding in Fig. \ref{fig:ssilmbh}), in agreement with \cite{thompson09}. \begin{figure} \begin{center} \includegraphics[width=8.5cm,angle=0]{SsilAgnMbhLbol_noDiv.pdf} \caption{S$_{\rm 9.7 \,\, AGN}$ as a function of M$_{\rm BH}$ for the 141 SDSS quasars of the sample. The points are colour-coded by L$_{\rm bol}$.} \label{fig:ssilmbh} \end{center} \end{figure} \subsection{The Silicate feature at 18 micron}\label{sec:silicates18} The silicate feature at 18 \textmu m, though predicted by models and by now observed in many low-to-intermediate redshift AGN ($z$ typically $\le 0.5$) \citep[e.g.][]{hao05,sirocky08,thompson09}, has received less attention than its shorter-wavelength counterpart, as its measurement presents a greater challenge. For one, it often overlaps with the MIR bump of the AGN SED \citep[e.g.][]{prieto10}. Also, the steep mid-to-far IR emission from the host implies that the host contributes a higher fraction of the continuum emission at these longer wavelengths, making the measurement of the feature a tedious job. Following the procedure described in Sec. 
\ref{sec:decomp}, but extending the wavelength range to 22 \textmu m restframe, we perform spectral decompositions and measure the strength of the 18 \textmu m silicate feature in the AGN component, S$_{\rm 18 \,\, AGN}$, for the 631 AGN with adequate wavelength coverage. Fig. \ref{fig:s18} shows the distribution of S$_{\rm 18 \,\, AGN}$ for type 1 (blue solid histogram) and type 2 (red dashed histogram) AGN. \begin{figure} \begin{center} \includegraphics[width=8.5cm,angle=0]{S18AgnHistos_noDiv.pdf} \caption{S$_{\rm 18 \,\, AGN}$ distribution per AGN type (type 1 in blue solid histogram, type 2 in red dashed histogram).} \label{fig:s18} \end{center} \end{figure} While at least as prominent in emission as its counterpart at 9.7 \textmu m, the feature only reaches moderate depths when in absorption. Furthermore, more than 50\% of type 2 AGN exhibit the feature in emission, while only 10\% of type 1 AGN have it in absorption. Comparing these numbers with those for S$_{\rm 9.7 \,\, AGN}$, it becomes obvious that the feature can be in absorption at 9.7 \textmu m while still in emission at 18 \textmu m. We will return to this point in Sec. \ref{sec:discuss}. \section{Discussion and Conclusions}\label{sec:discuss} Using the largest sample of AGN with MIR spectroscopy ever assembled, we quantify, for the first time, the effects of the emission of the host galaxy on the behaviour of the silicate features, a tracer of the properties of the hot dust in the torus. We rely on the classification provided by NED in order to call an object ``AGN'', as well as for the classification of AGN into types 1 and 2. The sample includes a variety of AGN, from objects where the AGN completely dominates the MIR emission ($f_{\rm AGN} \sim 1$) to AGN whose MIR emission is almost entirely dominated by star formation ($f_{\rm AGN} \ll 1$). 
We find the fraction of objects with a strong contamination from the galaxy ($f_{\rm AGN} < 0.7$) to be much higher among type 2 AGN (43 per cent) than among type 1 AGN (12 per cent). The emission of the host affects the behaviour of the silicate features. Broadly speaking, the strength of the silicate feature at 9.7 \textmu m is a measure of the optical depth, $\tau_{9.7}$, along the line of sight, and goes from emission to absorption as $\tau_{9.7}$ increases \citep[see e.g.][Fig. 9]{fritz06}. In this simple picture, type 1 and type 2 AGN should show the feature in emission and absorption, respectively. Our study, however, shows that type 1 (type 2) AGN with the feature in absorption (emission) are very common, even after subtracting the contribution of the host galaxy. The numbers of type 1 and 2 AGN with the feature in emission increase by 20 and 50\%, respectively, once the host galaxy is removed, while 35\% of objects with the feature originally in absorption exhibit it in even deeper absorption after subtraction of the host. This means that the combined spectrum exhibits an S$_{\rm 9.7 \,\, tot}$ intermediate between S$_{\rm 9.7 \,\, AGN}$ and that of the host. The host galaxy nearly always shows mild silicate absorption, as the power sources (stars) are well mixed with the absorbing dust, and therefore the strength of the silicate feature does not correlate with the optical depth of the gas and dust along the line of sight. Consequently, contamination from the host increases the depth of the absorption if the feature intrinsic to the AGN is in emission or mild absorption, but decreases the depth (i.e. fills the gap) if the AGN has a deeper feature than the host. S$_{\rm 9.7 \,\, tot}$ is scarcely ever seen in emission when the MIR emission is strongly contaminated by the host galaxy ($f_{\rm AGN} < 0.7$), with those objects exhibiting on average the feature in deeper absorption than their AGN-dominated counterparts. 
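The `refilling' arithmetic is easy to see from Eq. \ref{eq:ssil}: strengths do not add linearly, flux densities do. A hedged numerical illustration, with all peak and continuum values hypothetical:

```python
import math

# Hypothetical peak/continuum flux densities (arbitrary units) for an AGN
# with a deep trough and a host with the usual mild absorption.
F_agn, Fc_agn = 0.2, 1.0      # S_agn  = ln 0.2 (deep absorption)
F_host, Fc_host = 0.8, 1.0    # S_host = ln 0.8 (mild absorption)

S_agn = math.log(F_agn / Fc_agn)
S_host = math.log(F_host / Fc_host)

# The combined spectrum adds flux densities, not strengths (Eq. 1 applied
# to the sum): the host partially refills the AGN trough.
S_tot = math.log((F_agn + F_host) / (Fc_agn + Fc_host))
print(f"S_agn={S_agn:.2f}  S_host={S_host:.2f}  S_tot={S_tot:.2f}")
```

The combined strength always lands between the two intrinsic values, which is exactly the intermediate S$_{\rm 9.7 \,\, tot}$ described above.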
S$_{\rm 9.7 \,\, AGN}$, on the other hand, shows no dependency on $f_{\rm AGN}$ and ranges from moderate emission to quite deep absorption, with all possibilities in between. The lack of correlation between S$_{\rm 9.7 \,\, AGN}$ and $f_{\rm AGN}$, two quantities that are physically unrelated, indicates that the decomposition mechanism successfully removes the most important part of the host galaxy emission, allowing for an almost unbiased measurement of the silicate feature in the vicinity of the nucleus. We have also addressed the issue of the observed shift of the silicate feature at 9.7 \textmu m to wavelengths longer than its nominal peak. Our decomposition method does not allow a reliable estimate of $\lambda_{\rm peak}$ on the galaxy-subtracted spectra. We therefore rely on the measurement carried out on the original IRS spectra. We find the largest shifts ($\lambda_{\rm peak}>10.2$ \textmu m) to appear only in objects with an important AGN component ($f_{\rm AGN} > 0.7$) with the feature in emission, regardless of their type. When the feature is in absorption, and again irrespective of the type, it appears at or near its nominal wavelength ($\lambda_{\rm peak}<10.2$ \textmu m). Since its discovery, various scenarios have been proposed in order to explain this shift, such as the presence of porous dust \citep{li08, smith10}, the presence of different dust species \citep{markwick07}, or radiative transfer effects \citep{nikutta09}. \cite{shi14}, however, posit that this is not a radiative transfer effect but rather the effect of direct exposure of the silicates to the nuclear radiation, which modifies the size or the chemical composition of the grains, more along the lines of \cite{smith10}. 
\subsection{Dust distribution inside and outside the torus}\label{sec:dust} S$_{\rm 9.7 \,\, AGN}$ is seen in only moderate emission or slight absorption in most type 1 AGN, a behaviour that traditionally favours a clumpy morphology, both because smooth models often predict silicates in stronger emission than observed and because, with the exception of a couple of smooth-model parameter combinations, only clumpiness can explain the feature in absorption in unobscured AGN. However, even though it does not produce the feature in absorption, a continuous dust distribution can also give rise to silicates in only weak emission for a large variety of parameters, as shown in Fig. 4 of \cite{feltre12}. Furthermore, \cite{sirocky08} showed that the use of the \cite{ossenkopf92} dust absorption and scattering coefficients results in considerably less prominent silicate emission features compared to other dust models like \cite{draine03}. We therefore interpret the behaviour of S$_{\rm 9.7 \,\, AGN}$ alone as favouring the \cite{ossenkopf92} over the \cite{draine03} silicates, but this property alone cannot provide much insight into the morphology of the dust. The transition of the mean values of S$_{\rm 9.7}$ from weak emission to absorption from Sy1.2 to Sy1.9 can be explained by either dust morphology: in a smooth medium it can be attributed to an increase of the inclination (as measured from the poles) and hence of the intervening material, with the silicate feature at 9.7 \textmu m that arises from the inner, hotter parts of the torus being increasingly blocked by the bulk of the dust as the viewing angle increases. In a clumpy medium, on the other hand, it could simply be attributed to different levels of obscuration along the line of sight, independently of the orientation. The combined strength of the silicate features at 9.7 and 18 \textmu m is sensitive to the chemistry and morphology of the dust surrounding the AGN, i.e. 
the torus \citep{sirocky08, thompson09,feltre12}. To test this, we created three grids of models, two clumpy and one smooth, following \cite{nenkova08} and \cite{feltre12}, respectively. All three sets of models share the same primary source, described in \cite{nenkova08}. One of the clumpy grids was created using the silicate absorption and scattering coefficients from \cite{draine03}, while the other clumpy grid, as well as the smooth grid, was created using the \cite{ossenkopf92} silicates. The models have been created to have {\it matched} parameters, as defined in \cite{feltre12}, i.e. each of the smooth models in the grid has an equivalent model (in terms of geometrical properties) in the clumpy grid. The parameter space explored by the model grids is briefly described in Appendix \ref{sec:models}. Fig. \ref{fig:s10s18} shows the distributions of S$_{\rm 18 \,\, AGN}$ and S$_{\rm 9.7 \,\, AGN}$, compared to model predictions. The top (bottom) panel shows type 1 (type 2) AGN (in blue). Smooth models are shown in green, clumpy models using the \cite{ossenkopf92} and \cite{draine03} silicates are shown in grey and pink, respectively. The spread of the observed data points indicates the variety of torus geometries in nature in terms of size, shape and optical depths. \begin{figure} \begin{center} \includegraphics[width=8.5cm,angle=0]{S10S18Type1andCCSModels.pdf} \vskip -0.8cm \includegraphics[width=8.5cm,angle=0]{S10S18Type2andCCSModels.pdf}\\ \caption{S$_{\rm 18 \,\, AGN}$ as a function of S$_{\rm 9.7 \,\, AGN}$, overplotted on model predictions. The top (bottom) panel shows type 1 (type 2) objects (in blue). Smooth models are shown in green, clumpy models with Ossenkopf et al. 
(1992) and Draine (2003) silicates are shown in grey and pink, respectively.} \label{fig:s10s18} \end{center} \end{figure} As previously noted by other authors, the so-called `astronomical' silicates \citep{draine03} produce stronger emission features at 9.7 \textmu m and a wider range of S$_{\rm 18 \,\, AGN}$/S$_{\rm 9.7 \,\, AGN}$ than is observed, compared to the \cite{ossenkopf92} silicates. The bulk of type 1 objects lie in the region of overlap between smooth and clumpy models. Overall, models better reproduce the silicates in type 2 AGN. However, both smooth and clumpy models, albeit covering a large parameter space, fail to reproduce the deep absorption features seen in type 1 AGN (lower left quadrant in the top panel of Fig. \ref{fig:s10s18}). At the same time, and as already suggested by \cite{imanishi07}, a continuous dust distribution is the only morphology that can reproduce the deepest absorption features seen in the sample for type 2 views (lower left quadrant of the bottom panel in Fig. \ref{fig:s10s18}), at least without resorting to additional obscuration by the host galaxy. In fact, \cite{levenson07} suggested that a deep silicate absorption feature at 9.7 \textmu m requires the primary source to be embedded in a continuous, optically and geometrically thick dusty medium, while a clumpy medium, with clouds illuminated from the outside, will result in a much shallower feature, as the emission from these clouds will fill the absorption trough. We visually inspected all AGN with S$_{\rm 9.7 \,\, AGN} < -1.5$ for which we could find sufficiently high resolution images and established that they are predominantly hosted in galaxies with high inclinations, in galaxies with prominent dust lanes crossing the centre (e.g. NGC5793 or NGC7172), or in interacting systems at various stages of the interaction (e.g. Mrk 273, Mrk 331, NGC2623). 
In fact, \cite{goulding12} reach the same conclusion in their study of 20 nearby ($z<0.05$) bona-fide Compton-thick AGN. This implies that the source of the deepest silicate absorption features is dust in the host galaxy rather than dust in the torus \citep[see also][]{deo07}. In favour of this view, \cite{lagos11} find that type 1 AGN show a strong preference for residing in face-on galaxies, while type 2 AGN reside in hosts of any orientation. The deep absorption features are, therefore, of no relevance to the modelling of the torus, and they should not be used to favour smooth dust distributions over clumpy ones. We note that this is by no means an argument against the AGN unification scheme, but it does suggest that the obscuration of the nucleus, i.e. a type 2 view, may, in some cases, be decoupled from the orientation of the torus. Finally, what none of the torus models reproduce are the many obscured and unobscured AGN with the silicates in absorption at 9.7 \textmu m but in emission at 18 \textmu m. Smooth models do not predict such behaviour at all for type 1 views, while both morphologies produce S$_{\rm 18 \,\, AGN}$ of about half the strength measured on the spectra when S$_{\rm 9.7 \,\, AGN}$ is in absorption. \cite{feltre12} showed that the adopted primary source can also affect the properties of the silicate feature. As a last resort, and since we have no control over the primary source of the Nenkova models, we produced an additional grid of smooth models with the \cite{ossenkopf92} silicates but using the primary source from \cite{feltre12}, which allows for more intrinsic AGN emission at wavelengths beyond 1 \textmu m compared to the primary source used by the \cite{nenkova08} models, which we have adopted for the other grids. 
These models still fail to reproduce the objects in the upper left quadrant of the figure for type 1 views; they do, however, reproduce a much larger fraction of the objects in that same quadrant for type 2 views than any of the other model grids presented before, as shown in Fig. \ref{fig:s10s18fritz}. \begin{figure} \begin{center} \includegraphics[width=8.5cm,angle=0]{S10S18Type2andSmoothFritz.pdf} \caption{S$_{\rm 18 \,\, AGN}$ as a function of S$_{\rm 9.7 \,\, AGN}$ for type 2 objects (in blue), overplotted on smooth model predictions (in yellow), using the Feltre et al. (2012) primary source.} \label{fig:s10s18fritz} \end{center} \end{figure} To summarise, and even though the emerging picture is still somewhat confusing, things are starting to clear up: the `cosmic' silicates \citep{ossenkopf92} satisfactorily represent the absorption and scattering properties of the silicates in the obscuring torus. Clumpiness is needed in order to produce absorption features in unobscured AGN, if no foreground absorber is invoked. Clumpiness can also cause the silicates to be in absorption at 9.7 \textmu m and in emission at 18 \textmu m in type 1 sources, but a primary source with more intrinsic AGN emission at $\lambda>1.0$ \textmu m might be necessary to produce stronger S$_{\rm 18 \,\, AGN}$ in emission when S$_{\rm 9.7 \,\, AGN}<0.0$. \section*{ACKNOWLEDGMENTS} The Cornell Atlas of {\it Spitzer}/IRS Sources (CASSIS) is a product of the Infrared Science Center at Cornell University, supported by NASA and JPL. We made use of the NASA/IPAC Extragalactic Database (http://ned.ipac.caltech.edu). We used the TOPCAT software written by Mark B. Taylor (http://www.star.bris.ac.uk/~mbt/topcat/). AHC acknowledges support by the Universidad de Cantabria Augusto Gonz{\'a}lez Linares programme and the Spanish Plan Nacional de Astronom{\'i}a y Astrof{\'i}sica under grant AYA2012-31447. 
The research leading to these results has received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013 Grant Agreement no. 321323).
Emery (forum comment, 4 years, 7 months ago): I'm from England.

"The problem is that crops such as rapeseed oil, palm oil from Malaysia or soyoil from the Americas can displace food production into new areas, forcing forest clearance and the draining of peat land, as well as adding to food prices." (Reporting by Barbara Lewis)

"A debt ceiling increase at only six weeks tied to budget negotiations would put us right back where we are today in just six weeks, on the verge of Thanksgiving and the obviously important shopping season leading up to the holidays," Carney said.

This autumn's U.S. bounty follows massive crops in other key growing and exporting regions of the globe, including South America and the Black Sea region, which have recovered from recent severe droughts that rattled international grain markets and fueled unrest in several import-dependent nations. The United States itself is just a year removed from its worst drought since the Dust Bowl days of the 1930s.

A spate of consolidation between exchanges began in 2006, when the New York Stock Exchange and Euronext, home to the Paris, Brussels, Amsterdam and Lisbon exchanges, merged. Deals between Nasdaq and Nordic group OMX, and between the London Stock Exchange and the Borsa Italiana, followed as each institution tried to protect market share and trading volumes.

After comparing data gathered by 11 spacecraft between 1972 and 2011, researchers concluded that interstellar winds have changed direction by 4 to 9 degrees, upending long-held beliefs that the gusts were eternally steady.

The Knicks spent the summer acquiring first-round bust Andrea Bargnani and career crazy man Metta World Peace, who provided comic relief during Monday's media day. As the former Ron Artest tried his stand-up routine, you could see Knicks officials everywhere cringe. Normally, they only look that uncomfortable when Dolan is around.

Former Cy Young winner Greinke initially struggled on the mound but dug his way out of a jam in the top of the first with the bases loaded and no outs, striking out Matt Adams and getting Yadier Molina to ground into an inning-ending double play.

"This 'faith' is a key underpinning of the U.S. dollar's global reserve currency status and the reason why the US 'AAA' rating can tolerate a substantially higher level of public debt than other 'AAA' sovereigns," Fitch said.

Homeland Security Secretary Janet Napolitano had already announced as much last month, saying visas for gay couples would be processed "in the same manner as those filed on behalf of an opposite-sex spouse," hours after the Supreme Court handed down its ruling.

The drama playing out inside this house reflects a wider and increasingly urgent dilemma. The world's population is aging fast, due to longer life spans and lower birth rates, and there will soon be more old people than young for the first time in history. This has left families and governments struggling to decide: who is responsible for the care of the elderly?

Malta, a British air and naval base at the time, was on the brink of starvation and close to surrendering to the Axis powers that surrounded it on all sides. The operation's success, albeit with heavy losses, has gone down in military history as one of the most important British strategic victories of World War Two, even though it was in many ways a tactical disaster.

In the first quarter, Apple ranked 5th in China with 9.7 percent market share, well behind leader Samsung with 17.7 percent, and lagging, among others, Lenovo Group Ltd and Huawei Technologies, which said on Wednesday it was on track to hit 10 percent revenue growth this year.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,808
Source: https://aviation.stackexchange.com/questions/57662/how-to-recognize-the-horizontal-winglet-in-the-horten-design-or-the-experimenta/58596

# How to recognize the horizontal winglet in the Horten design, or the experimental glider in this video?

Comments below the NASA Armstrong Flight Research Center video "Proving Prandtl – With A Twist!" include:

- This video is entirely a hidden jewel on YouTube; it deserves much more attention than it has had until now. It is both educational and inspirational!
- It's a clever, non-obvious idea, using the washout to eliminate adverse yaw...

What, and where, is the horizontal winglet discussed in the video, or in the Horten design discussed there as well?

When I look at the various images in the video, I just see a flat wing as far as the shape is concerned; what distinguishes the winglet from the rest of the wing?

Edit: I see a break in the wing near the end, but as far as I can see, the shape or orientation doesn't change from what the wing would look like anyway. What is it that makes the end of this wing a winglet?

- I just watched the second video. This NASA academy brazenly claims to have newly discovered what we have known for 80 years (well, if we looked in the right places, that is). Another instance of NASA marketing overselling trivial "discoveries". And, of course, the same falsehood about tip vortices creating induced drag is repeated again. – Peter Kämpf, Jan 2 '19 at 17:56

## Answer

You can't from the planform alone.

First, winglets are no magical device. Calling the wing with a bell-shaped lift distribution one with horizontal winglets tries to free-ride on the mystique that NASA marketing has created around the winglet. But the physics behind it are rather mundane, and the thrust created by the outer wing is a bit of payback for the higher losses at mid-span from a steep lift gradient over span.

Next, all wings create thrust if you define it narrowly enough. This comes from the suction force on the forward upper side of the airfoil and is called leading-edge thrust.

Induced drag is the backward tilt of the aerodynamic forces and is caused by lift creation. The least amount of drag for a given amount of lift and wingspan can be achieved with the elliptic distribution over span. The bell-shaped distribution creates more drag for the same lift and wingspan, since it has higher spanwise lift gradients at mid-span.

> What is it that makes the end of this wing a winglet?

This is a matter of definition. The so-called winglet area is where the wingtips carry only little positive, or even negative, lift. As in a winglet, this gives the local lift a forward component which works like the opposite of induced drag. Call it induced thrust, if you want: this is what is common to winglets and the negatively loaded wingtip. That in turn is caused by wing twist and local control-surface deflection. You cannot see from the top view how lift is distributed over span.

But the bell-shaped lift distribution has some interesting advantages:

- Since most lift is created near the wing root, the spar bending moment can be kept low for a given amount of lift. This allows for a lightweight wing structure and is especially important for large aircraft.
- With aileron deflection, the lift distribution on the up-going wing becomes nearly elliptical while the one on the down-going wing becomes even worse, increasing induced drag there significantly. This reduces adverse yaw such that no vertical tail is needed.

Sounds great, doesn't it? Actually no, it doesn't when you take a closer look:

- Due to the low maximum lift coefficient of flying wings, the wing surface of a flying wing needs to be much higher than that of a conventional configuration with the same landing speed, where a tail surface allows the use of powerful trailing-edge flaps, raising wing weight and drag substantially.
- The bell-shaped lift distribution is like flying all the time with spoilers half deployed. Aileron deflection retracts the spoiler on the up-going wing and extends it fully on the down-going wing. Kind of like the split ailerons of the B-2. I think it is better to only use spoilers during manoeuvring. Also, the Horten flying wings were all known for marginal directional stability, especially at high speed when sweep did not help much. It was too little to even compensate for unsymmetric thrust. A fin or added artificial stabilization would be highly advisable.

Comments:

- This is more than I bargained for, but I will hunker down and try to understand it all now. Thank you for taking the time to include so much in one post! – uhoh, Jan 1 '19 at 14:19
- So what is your view on the claim that they just lengthened the span and called this a "horizontal winglet"? – jjack, Jan 1 '19 at 16:36
- @jjack: No, the right twist is also needed, correctly called washout. That's why they make the pun on "Prandtl with a twist". – Peter Kämpf, Jan 1 '19 at 16:45
- And you say that winglets generate thrust? What about the drag that they also generate? Do we have a net thrust? – jjack, Jan 1 '19 at 16:47
- @PeterKämpf I get that "Prandtl with a twist". – jjack, Jan 1 '19 at 16:48

## Answer

I have several posts and snarky comments on here that describe the function of winglets precisely as outlined in this video; that is, they exploit the circulation around the tip to generate thrust (like sails on a boat, which is why they were originally called "tip sails"). Almost all descriptions talk vaguely about how they give a reduction in induced drag. This is the first time in a long time I've seen it explained so clearly, and it is great to see.

Anyway, remembering that a winglet is a flying surface that generates thrust from tip circulation, what they have done here is simply a flat tip extension with its incidence set (more nose-down than a normal wing tip) to exploit the same circulation, kind of earlier in the circular movement of the flow (at 9 o'clock instead of 12, you might say). This placement seems to generate a much stronger thrust component from the vortex than a vertical winglet, so strong that it is enough to completely cancel out the increased drag from the nearby down aileron.

This means that the elimination of adverse yaw this way, along with the sweep-back that provides a natural weathervaning tendency, allows you to completely do away with rudders.

Coping with asymmetric engine thrust is another job of rudders not addressed here, and a multi-engine aircraft would still need some kind of asymmetric-thrust compensation device, but besides that, it seems brilliant.

Comments:

- Thanks for your answer! I'm not very familiar with winglets; all I see is a wing in the image. What makes the winglet different from just the end of the wing? What delineates the end of the wing and the beginning of the winglet? Would it be articulated forward or backward when in flight? – uhoh, Dec 3 '18 at 12:56
- Is it possible then to add a short explanation of "how can it be recognized" in the context of this test glider? I just see a wing with a gap in it, but I don't understand what makes the end into a winglet, and not just the end of the wing. – uhoh, Dec 7 '18 at 3:06

## Answer

Looking at the picture of the glider, the orientation of the hinges indicates that the winglets should provide directional control, replacing the rudder as well as the ailerons. The original Horten aircraft had spoilers to perform the same role as the rudder in a conventional design.

- Thanks, but I still cannot visualize the difference between "flat winglets" as discussed in the video, say between 04:30 and 05:45, and a wing of the same length without "flat winglets". – uhoh, Dec 31 '18 at 12:37

## Answer

Great work from the NASA team and an interesting way of thinking in three dimensions.

A look at bird wing anatomy shows how they decrease lift and increase drag on the same side: by using their "wrist" to pivot their wing-tip leading edge down. You can do this by sticking your arm out and rolling your wrist. Conventional aircraft need rudders to counteract the "adverse yaw" created by the downward-pointing aileron (higher AOA, higher lift) on the side opposite the turn. This is the "coordinated" turn. A spoiler on the same side is more "bird-like" and can be found on the venerable B-52.

But before we go throwing away our vertical stabilizers and rudders, it is very important to study crosswind performance. A dihedral aircraft will roll away from the wind and get blown sideways by a strong gust. A "weathervane" motion of the tail speeds up the leeward wing, helping mitigate the roll. Birds mitigate crosswind roll by anhedralling their wing tips (again, the wrist).

This relationship between aircraft dihedral and vertical stabilizers is expressed in discussions of "Dutch roll" (vertical stabilizer too small) and spiral instability (too large). This may be why the B-52 reduced, but did not eliminate, theirs.

## Answer

Maybe they tried winglets as part of their design iteration and then decided to leave them off, or rather bend them downward, thus obtaining a bigger wingspan.

"Horizontal winglet" seems to be a term used in wind-tunnel studies to indicate "the winglets are bent down", as opposed to 60-degree, etc., winglets.

The difficulty with these "horizontal winglets" is that essentially you have a different wing from the one you started out with, one with a longer span. This is sort of like cheating on yourself.

So, there are no winglets in the given design. They just increased the wingspan and call this "horizontal winglets", which is a misnomer.

And I also think their explanation with regard to thrust being created by the winglets is wrong.

- I think you appreciate my confusion better than most, thank you! If you get a chance and can find a link or an image that helps me to visualize what you are explaining, that will be greatly appreciated. Thanks! – uhoh, Jan 1 '19 at 0:12
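The claim in the first answer, that the bell-shaped distribution creates more induced drag than the elliptic one for the same lift and wingspan, can be made concrete with Prandtl's classical lifting-line result. The sketch below is illustrative only; it uses the standard Fourier decomposition of the spanwise circulation, not any data from the video. (Note that the comparison here fixes the span; the Prandtl/Horten argument for the bell distribution instead fixes the root bending moment and lets the span grow.)

```python
# Prandtl lifting-line theory: for a circulation distribution
#   Gamma(theta) = sum_n A_n * sin(n * theta),  theta in [0, pi],
# the lift coefficient is proportional to A_1 alone, while the induced
# drag is proportional to sum_n n * A_n**2.  At fixed lift and span the
# induced-drag penalty of a non-elliptic loading is therefore
#   factor = 1 + sum_{n>=2} n * (A_n / A_1)**2.

def induced_drag_factor(coeffs):
    """Induced-drag factor (1 = elliptic minimum) for coefficients A_1, A_2, ..."""
    a1 = coeffs[0]
    return 1 + sum(n * (an / a1) ** 2
                   for n, an in enumerate(coeffs, start=1) if n >= 2)

# Elliptic loading: only A_1 is nonzero, giving the minimum factor of 1.
elliptic = [1.0]

# Bell-shaped loading Gamma ∝ sin(theta)**3 = (3*sin(theta) - sin(3*theta)) / 4,
# i.e. A_3 / A_1 = -1/3.
bell = [3 / 4, 0.0, -1 / 4]

print(induced_drag_factor(elliptic))  # 1.0
print(induced_drag_factor(bell))      # ≈ 1.333, about 33% more induced drag
```

At equal lift and span the bell distribution pays roughly a one-third induced-drag penalty, which is exactly why its proponents trade it against the lighter spar the inboard-shifted loading allows.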
Understanding one syllable mapping to one character promotes reading development in Chinese
Dan LIN, Ling-Po SHIU

Purpose. Extant research has demonstrated the importance of morphological awareness (e.g., McBride-Chang et al., 2003), phonological awareness (e.g., Lin et al., 2010), and orthographic awareness (e.g., Siok & Fletcher, 2001) in Chinese reading development in young children. However, little is known about the fundamental mapping process between sound units (phonology) and visual symbols (visual orthography) in Chinese. The present study aimed to investigate the role of syllable mapping, defined as the ability to map a syllable (sound unit) onto a character (visual unit), in Chinese word reading development, with the traditional, well-documented reading predictors of visual skills and syllable awareness controlled.

Method. The participants were 96 Hong Kong Chinese kindergartners, all native Cantonese speakers. In the syllable mapping task, children were asked to point out a particular character on a card printed with a three-character word that was simultaneously uttered by the examiner. Other tasks administered included syllable awareness, visual spatial relationship, and Chinese word reading.

Results. Chinese word reading was strongly associated with syllable awareness, r = .58 (p < .001), and syllable mapping, r = .75 (p < .001). Further hierarchical regression analyses found that with children's age, visual spatial relationship, and syllable awareness statistically controlled, syllable mapping explained 16% unique variance in Chinese word reading, and it emerged as a significant predictor by the final beta weight, t = 6.40, p < .001.

Conclusions. The results underscore the importance of the cross-modal ability of mapping syllables to characters in Chinese reading development among preschoolers.

Lin, D., & Shiu, L.-P. (2012, July). Understanding one syllable mapping to one character promotes reading development in Chinese. Paper presented at the Nineteenth Annual Meeting of the Society for the Scientific Study of Reading, Montreal, Canada.
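The hierarchical-regression logic described in the Results section (entering control variables first, then testing the unique variance, ΔR², added by syllable mapping) can be sketched as follows. The data here are simulated with assumed effect sizes purely for illustration; they are not the study's actual scores, and the variable names are placeholders.

```python
# Illustrative hierarchical regression: does a new predictor explain
# unique variance (Delta R^2) after controls?  Simulated data only.
import numpy as np

rng = np.random.default_rng(0)
n = 96  # sample size matching the study's 96 kindergartners

# Simulated standardized scores; the correlations are assumptions.
age = rng.normal(size=n)
visual = rng.normal(size=n)          # stand-in for visual spatial relationship
syll_aware = rng.normal(size=n)      # syllable awareness
syll_map = 0.6 * syll_aware + 0.8 * rng.normal(size=n)   # syllable mapping
reading = 0.3 * syll_aware + 0.5 * syll_map + rng.normal(size=n)

def r_squared(X, y):
    """OLS R^2 with an intercept, via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Step 1: controls only.  Step 2: controls plus syllable mapping.
controls = np.column_stack([age, visual, syll_aware])
r2_step1 = r_squared(controls, reading)
r2_step2 = r_squared(np.column_stack([controls, syll_map]), reading)
delta_r2 = r2_step2 - r2_step1   # unique variance of the added predictor

print(round(r2_step1, 3), round(r2_step2, 3), round(delta_r2, 3))
```

Because the step-2 model nests the step-1 model, ΔR² is nonnegative by construction; the study's reported 16% corresponds to this quantity for its real data.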
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
3,794
Lupu Wellness is a social media star, yogi, health coach, and Instagram model based in the United States. She was born in Michigan, USA, to American parents. Lupu Wellness is 27 years old and her real name is Elizabeth. Her estimated net worth is $250,000, and she stands 5 feet 5 inches tall.

Wiki and biography:
- Name: Elizabeth
- Nickname: Liz Lupu
- Date of birth: Not known
- Profession/occupation: Instagram model, yogi, health coach, and social media star
- Mother tongue: Not known
- Nationality: American
- Caste/ethnicity: Not known
- Zodiac sign: Not known

Net worth: The estimated net worth of Lupu Wellness is $250,000. Paid subscriptions constitute her primary source of income; although she is best known through social media platforms such as Instagram and TikTok, she makes her money from these platforms.

Family:
- Father: Not known
- Mother: Not known
- Brothers: Not known
- Sisters: Not known

Relationships:
- Marital status: Not known
- Husband: Not known
- Daughters: Not known
- Sons: Not known
- Affairs: Not known

Height, weight, and body measurements:
- Height: 167 cm (1.67 m, 5 feet 5 inches)
- Weight: around 52 kg

Favorites and hobbies:
- Favorite colour: Pink
- Favorite actor: Will Smith
- Favorite actress: Not known
- Favorite food: Pizza
- Hobbies: Traveling, reading, playing
- Favorite director: Not known
- Favorite destination: LA

Residence and contact address:
- Birthplace: Michigan, USA
- Home town: Michigan, USA
- Present residence: Michigan, USA
- House address: Not known
- Phone/mobile no.: Not known
- Email ID: Not known
- Twitter: Not known
- Instagram: https://www.instagram.com/lupuwellness/?hl=en
- Wikipedia: Not known
- YouTube: Not known

Before pursuing a career in social media, she was a singer. According to her YouTube channel, she released four singles in 2020: World for Myself, Dreams Matter, Desire and Love, and My Paradise in You. However, her music career didn't go as planned, and at some point she stopped releasing new music.

Lupu Wellness is a health coach and yogi by training; she proudly mentions it in the bio of her Instagram account. She started her Instagram account more than five years ago, but at first she only used it for herself. Aside from this, she is also fairly active on all of the major social media platforms. She has over 90,000 followers on her TikTok account, which goes by the handle @lupuwellnesss, where she uploads brief videos typically showcasing her stunning appearance. Additionally, she recently launched a live-streaming channel on Twitch. She made her debut there on October 4 and has done multiple streams, but no live ones. At present, she has over 6k followers on Twitch.

After returning, she began posting glamorous modeling images to her Instagram handle. Her earlier success came in 2022: after being featured on Instagram's explore page, her posts began gaining traction, and she reached almost one million followers in October of this year. She currently has more than 303,000 followers on the platform. She joined Twitter in October 2010 and has 5.7k tweets and more than 130k followers there. She has worked with Mia Huffman and other influencers. She has a few hundred thousand Instagram followers but follows only about 2,500 people. She has been featured on some podcasts, including Only Stans. She has traveled to numerous popular tourist destinations, including Bali, Amsterdam, and Ibiza. Her YouTube channel has only 1,600 subscribers. She enjoys physical activity, and she prefers iPhones over Android devices.

Also read: https://aboutbiography.com/sophie-stonehouse-age/
Britt Barbie age, height, net worth, boyfriend, biography and more
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,846
Tomori's in Lakewood is one of two places Andrew and I regularly order delivery from. It's our neighborhood, family-run pizzeria. The owner takes your order on the phone, makes the pizza and is often the one that delivers it. Tomori's is the first of a few carry-out pizzerias on the tour where we opted to order takeout and eat at a bar next door. We know there are a lot of carry-out-only pizzerias in NEO and we didn't want to knock them out of consideration for the tour just because they don't have dine-in seating. We went to Patio Tavern a few doors down for happy hour and ate our pizzas there. It's your typical Lakewood bar – long and narrow with darts and a skee ball machine in the back. Patio Tavern lovessss Jameson, and the entire bar's decor was Jameson signs, bottles for lamps, barrels, neon signs, etc. We ordered Irish car bombs because, hey, why not? Since we didn't eat at Tomori's, the Atmosphere score was based on Patio Tavern. Did I mention that there was a big doggo that came in and said hi to everyone in the bar? Major atmosphere points were awarded because of the cute doggy. We walked over to Tomori's and placed our orders. The teenage son took our orders while the owners (his mom and dad) made the pizzas. They offered to deliver them to the Patio Tavern, but we ordered a lot of pizzas so we just walked back when they were ready, and oh boy was it worth the wait. The pizzas are thin, and we ordered 4 larges and 1 medium. A large could feed 2 people with no leftovers. They're pretty big, but the crust is so thin you could continue to eat the whole thing by yourself (slight exaggeration). They offer a wide variety of toppings like sopressata, homemade sausage and Philly steak. They even have an egg pizza with eggs and truffle oil on top. The crust has a slight bite and crispiness to it, and overall it was thoroughly enjoyed.

Tomori's checks the boxes for all you need from a neighborhood pizzeria:
1. Great, reliable pizza – you know what you're getting will be good every time.
2. Authenticity – not just because the owner has a strong Italian accent, but because it's a whole family affair and it's nice to know who's making the food you're eating.

Oh, and they have really delicious cannolis, so extra points for that!
{ "redpajama_set_name": "RedPajamaC4" }
9,591
Archfeld is a district (Ortsteil) of the municipality of Herleshausen in the Werra-Meißner-Kreis in northern Hesse. The village lies on the Ringgau, the southern high plateau of the low mountain range.

History

Village history: The oldest known written mention of Archfeld, under the name Archfeld, dates from the year 1279. Until the introduction of the Reformation in the Landgraviate of Hesse, the village was owned by Fulda Abbey. It then passed into the possession of the Treusch von Buttlar family, who also held the neighbouring Altefeld as a fief. Many inhabitants died in the turmoil of the Thirty Years' War and of the plague. On 1 December 1970, in the course of the territorial reform in Hesse, the previously independent municipalities of Altefeld, Archfeld, Breitzbach, Herleshausen (with Frauenborn), Holzhausen, Markershausen, Nesselröden, Unhausen, Willershausen and Wommen voluntarily merged to form the larger municipality of Herleshausen. For each of the incorporated municipalities, and for Herleshausen with Frauenborn, a local district (Ortsbezirk) with a local council (Ortsbeirat) and a local mayor (Ortsvorsteher) was established under the Hessian municipal code.

Administrative history at a glance: The following list shows the states and administrative units to which Archfeld belonged:

before 1567: Holy Roman Empire, Landgraviate of Hesse, Amt Sontra (Treusch-Buttlar court)
from 1654: Holy Roman Empire, Landgraviate of Hesse-Kassel, Amt Sontra
from 1806: Landgraviate of Hesse-Kassel, Amt Sontra
1807–1813: Kingdom of Westphalia, Werra Department, District of Eschwege, Canton of Netra
from 1815: Electorate of Hesse, Amt Sontra
from 1818: Electorate of Hesse, Amt Netra
from 1821: Electorate of Hesse, Province of Lower Hesse, Kreis Eschwege
from 1848: Electorate of Hesse, Bezirk Eschwege
from 1851: Electorate of Hesse, Province of Lower Hesse, Kreis Eschwege
from 1867: Kingdom of Prussia, Province of Hesse-Nassau, Regierungsbezirk Kassel, Kreis Eschwege
from 1871: German Empire, Kingdom of Prussia, Province of Hesse-Nassau, Regierungsbezirk Kassel, Kreis Eschwege
from 1918: German Reich, Free State of Prussia, Province of Hesse-Nassau, Regierungsbezirk Kassel, Kreis Eschwege
from 1944: German Reich, Free State of Prussia, Province of Kurhessen, Landkreis Eschwege
from 1945: American occupation zone, Greater Hesse, Regierungsbezirk Kassel, Landkreis Eschwege
from 1949: Federal Republic of Germany, State of Hesse (since 1946), Regierungsbezirk Kassel, Landkreis Eschwege
from 1974: Federal Republic of Germany, State of Hesse, Regierungsbezirk Kassel, Werra-Meißner-Kreis

Population

Population structure 2011: According to the 2011 census, 126 inhabitants lived in Archfeld on the reference date of 9 May 2011, none of them foreign nationals. By age, 24 inhabitants were under 18 years old, 45 were between 18 and 49, 27 between 50 and 64, and 30 were older. The inhabitants lived in 45 households: 9 single-person households, 12 couples without children, 21 couples with children, 3 single parents, and no shared households. In 9 households only seniors lived, and 24 households had no seniors.

Population development: 1585: 35 households

Politics: The local mayor (Ortsvorsteher) is Karlheinz Deist.

Culture and sights

Church: The Protestant church rises at the highest point of the village. It is assumed that it originally served as a fortified church in which the population could find refuge in times of need. Nothing remains of the original church building. The oldest part is the nave, from 1567. The church tower was built in 1903 after the original one had been destroyed by fire. The interior is a simple hall closed by a barrel vault. The furnishings, like the tower, date from 1903. At a festive parish service on the occasion of its 450th anniversary, the village church was given the name "Johanneskirche", because twenty years earlier a new church bell named the "Johannesglocke" had been hung in the tower. Because of its artistic, historical and townscape significance, the church is a protected cultural monument.

Village green (Dorfanger): Below the church lies the village green with two linden trees whose age is estimated at 300 to 450 years. In his book "Bäume aus dem Werraland", the art historian and photographer Thomas Wiegand suggests that the planting date of the two old trees may coincide with the construction of the church in 1567, when the lords Treusch von Buttlar were settling into their newly acquired village. They may also have been planted after the renovation, in 1657, of the church destroyed in the Thirty Years' War. On the green in front of the churchyard wall, all matters of lower and criminal jurisdiction were heard, from the settlement of property questions to verdicts on murder and manslaughter. At this former court and assembly site, jurisdiction belonged to Fulda Abbey until 1539, and thereafter to the noble court of the lords Treusch von Buttlar. As one of the best-preserved village greens in the district, the site is worth preserving as a cultural monument for reasons of local history. The two old Archfeld village lindens are specially protected as natural monuments.

Literature: Denkmaltopographie Bundesrepublik Deutschland. Kulturdenkmäler in Hessen. Werra-Meißner-Kreis I, Altkreis Eschwege. Peer Zietz in collaboration with Thomas Wiegand. Braunschweig; Wiesbaden: Vieweg, 1991. ISBN 3-528-06240-1, pp. 126 f.
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,954
Source: https://eskesthai.blogspot.com/2020/

# Perfect fluid

The stress–energy tensor of a perfect fluid contains only the diagonal components.

In physics, a perfect fluid is a fluid that can be completely characterized by its rest-frame mass density $\rho_m$ and isotropic pressure $p$.

Real fluids are "sticky" and contain (and conduct) heat. Perfect fluids are idealized models in which these possibilities are neglected. Specifically, perfect fluids have no shear stresses, viscosity, or heat conduction.

In space-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form

$$T^{\mu\nu} = \left(\rho_m + \frac{p}{c^2}\right) U^\mu U^\nu + p\,\eta^{\mu\nu}$$

where $U$ is the 4-velocity vector field of the fluid and $\eta_{\mu\nu} = \operatorname{diag}(-1,1,1,1)$ is the metric tensor of Minkowski spacetime.

In time-positive metric signature tensor notation, the stress–energy tensor of a perfect fluid can be written in the form

$$T^{\mu\nu} = \left(\rho_m + \frac{p}{c^2}\right) U^\mu U^\nu - p\,\eta^{\mu\nu}$$

where $U$ is the 4-velocity of the fluid and $\eta_{\mu\nu} = \operatorname{diag}(1,-1,-1,-1)$ is the metric tensor of Minkowski spacetime.

This takes on a particularly simple form in the rest frame

$$T^{\mu\nu} = \begin{bmatrix} \rho_e & 0 & 0 & 0 \\ 0 & p & 0 & 0 \\ 0 & 0 & p & 0 \\ 0 & 0 & 0 & p \end{bmatrix}$$

where $\rho_e = \rho_m c^2$ is the energy density and $p$ is the pressure of the fluid.

Perfect fluids admit a Lagrangian formulation, which allows the techniques used in field theory, in particular quantization, to be applied to fluids. This formulation can be generalized, but unfortunately heat conduction and anisotropic stresses cannot be treated in these generalized formulations.

Perfect fluids are used in general relativity to model idealized distributions of matter, such as the interior of a star or an isotropic universe. In the latter case, the equation of state of the perfect fluid may be used in the Friedmann–Lemaître–Robertson–Walker equations to describe the evolution of the universe.

In general relativity, the expression for the stress–energy tensor of a perfect fluid is written as

$$T^{\mu\nu} = \left(\rho_m + \frac{p}{c^2}\right) U^\mu U^\nu + p\,g^{\mu\nu}$$

where $U$ is the 4-velocity vector field of the fluid and $g_{\mu\nu}$ is the metric, written with a space-positive signature.

## Monday, November 23, 2020

### Solar Panel Revolution in the Wind?

[Image: solar panels. By AleSpa, own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=29290121]

I am encouraged by some research currently under way that is improving the efficiency of solar panels. This encouragement is based on designs I have seen in related manufacturing processes that could create a whole new industry.

It is a whole new research path that could greatly improve energy retention, which otherwise seems to be at a standstill, even though current manufacturing processes for solar panels are inexpensive.

I have been pondering these ideas for some time now, and since the move to electric transportation is more important than ever, I open the door to the studious and bright innovators who wonder about these possibilities.

Dr. Christian Schuster, researcher from the Department of Physics, told The Week: "We found a simple trick for boosting the absorption of slim solar cells. Our investigations show that our idea actually rivals the absorption enhancement of more sophisticated designs, while also absorbing more light deep in the plane and less light near the surface structure itself. Our design rule meets all relevant aspects of light trapping for solar cells, clearing the way for simple, practical, and yet outstanding diffractive structures, with a potential impact beyond photonic applications." He added, "This design offers potential to further integrate solar cells into thinner, flexible materials and therefore create more opportunity to use solar power in more products."

## Thursday, August 20, 2020

### Everyday Einstein: GPS & Relativity

See also: Everyday Einstein: GPS and Relativity, at the Perimeter Institute for Theoretical Physics.

## Wednesday, August 05, 2020

### Automated for the Future

Automated for the Future, Perimeter Institute for Theoretical Physics.

## Saturday, May 16, 2020

### Gaslighting in America

Gaslighting is a form of psychological manipulation in which a person or a group covertly sows seeds of doubt in a targeted individual, making them question their own memory, perception, or judgment, often evoking in them cognitive dissonance and other changes such as low self-esteem. Using denial, misdirection, contradiction, and misinformation, gaslighting involves attempts to destabilize the victim and delegitimize the victim's beliefs. Instances can range from the denial by an abuser that previous abusive incidents occurred, to the staging of bizarre events by the abuser with the intention of disorienting the victim.

***

I must say, having been consuming the news of late, and with the pandemic forcing us to stay at home, I started to wonder.

## The creation of the Artemis Accords

The ability to extract and utilize resources on the Moon, Mars, and asteroids will be critical to support safe and sustainable space exploration and development.

The Artemis Accords reinforce that space resource extraction and utilization can and will be conducted under the auspices of the Outer Space Treaty, with specific emphasis on Articles II, VI, and XI.

***

# Outer Space Treaty of 1967

#### Article II

Outer space, including the moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.

#### Article VI

States Parties to the Treaty shall bear international responsibility for national activities in outer space, including the moon and other celestial bodies, whether such activities are carried on by governmental agencies or by non-governmental entities, and for assuring that national activities are carried out in conformity with the provisions set forth in the present Treaty. The activities of non-governmental entities in outer space, including the moon and other celestial bodies, shall require authorization and continuing supervision by the appropriate State Party to the Treaty. When activities are carried on in outer space, including the moon and other celestial bodies, by an international organization, responsibility for compliance with this Treaty shall be borne both by the international organization and by the States Parties to the Treaty participating in such organization.

#### Article XI

In order to promote international co-operation in the peaceful exploration and use of outer space, States Parties to the Treaty conducting activities in outer space, including the moon and other celestial bodies, agree to inform the Secretary-General of the United Nations, as well as the public and the international scientific community, to the greatest extent feasible and practicable, of the nature, conduct, locations and results of such activities. On receiving the said information, the Secretary-General of the United Nations should be prepared to disseminate it immediately and effectively.
$17.97 7.18 will be given. A magically beautiful face oil serum. The wise woman knows that true magic always starts with clear intention and the right tools. We have formulated this Magic Face Potion to be a powerful self-care tool for women over 40 who intend to move forward with vitality and clarity. Read our ingredients, they speak for themselves. Free of Parabens, Lanolin, Talc, Synthetics, Gluten, Grains and Fragrances. Never Tested on Animals.
Q: Java: count opening and closing tag pairs

I have the text below:

```
<h1> text </h1> <i> text </i> <u> text </u>
```

Here the pairs `<h1>`/`</h1>`, `<i>`/`</i>`, and `<u>`/`</u>` all exist, so this text should pass. Now take this text:

```
<h1> text </h1> <i> text </i> <u> text </u
```

Here the `<u>`/`</u>` combination is incomplete, so this text should fail.

I tried this:

```java
String startTags[] = {"<b>","<h1>","<h2>","<h3>","<h4>","<h5>","<h6>","<ul>","<li>","<i>","<u>"};
String endTags[] = {"</b>","</h1>","</h2>","</h3>","</h4>","</h5>","</h6>","</ul>","</li>","</i>","</u>"};
for (int i = 0; i < startTags.length; i++) {
    if (str.indexOf(startTags[i]) != -1) {
        System.out.println(">>>>" + startTags[i]);
        startTagCount++;
    }
    if (str.indexOf(endTags[i]) != -1) {
        System.out.println("+++" + endTags[i]);
        endTagCount++;
    }
}
if (startTagCount == endTagCount) {
    // TEXT IS OK
} else {
    // TEXT FAILED
}
```

It passes the text below instead of failing:

```
<h5>Is your question about programming? </h5> <b> bbbbbbbbbbbbbb</b> <b> bbbbbbbbbbbbbb</b
```

Any better solution or regex in Java?

A: I'm afraid this problem cannot be solved by (strict) regular expressions, because the language you describe is not a regular language: it extends the language {a^n b^n}, which is a well-known non-regular language.

A: If all you care about is making sure all opening tags have matching closing tags, then you can use regular expressions. Your code has a logic problem: it counts all opening tags and all closing tags, but doesn't check whether the opening tags and closing tags actually match. (Note also that `indexOf` only detects whether a tag occurs at least once; it does not count repeated occurrences.) The startTagCount and endTagCount variables are not sufficient. I would suggest using a map, with the tag type as the key and the count as the value: increment the count on an open tag, decrement it on a close tag, and check that all counts are zero after scanning is complete. What is the grammar of this "language"? Your approach might not be proper validation. For example, this HTML has matching tag counts but is invalid: `<b><i>Invalid</b></i>`
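A sketch of the stack-based alternative hinted at in the second answer (the class and method names here are my own, not from the question): push every opening tag, and on every closing tag check that it matches the most recently opened one. Unlike a counting map, this also rejects wrongly nested input such as `<b><i>Invalid</b></i>`:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagChecker {
    // Matches a complete open or close tag, e.g. "<h1>" or "</h1>".
    private static final Pattern TAG = Pattern.compile("<(/?)([a-zA-Z][a-zA-Z0-9]*)>");

    // Returns true when every opening tag is closed by a matching
    // closing tag in properly nested order.
    public static boolean isBalanced(String text) {
        Deque<String> stack = new ArrayDeque<>();
        Matcher m = TAG.matcher(text);
        while (m.find()) {
            boolean closing = !m.group(1).isEmpty();
            String name = m.group(2).toLowerCase();
            if (!closing) {
                stack.push(name);                    // open tag: remember it
            } else if (stack.isEmpty() || !stack.pop().equals(name)) {
                return false;                        // close tag with no matching open
            }
        }
        return stack.isEmpty();                      // no unclosed tags may remain
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("<h1> text </h1> <i> text </i> <u> text </u>")); // true
        System.out.println(isBalanced("<h1> text </h1> <i> text </i> <u> text </u")); // false
        System.out.println(isBalanced("<b><i>Invalid</b></i>"));                      // false
    }
}
```

A malformed closing tag such as `</u` (no `>`) is simply not matched by the pattern, so the unmatched `<u>` remains on the stack and the text is rejected, which handles the failing example from the question.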
\section{Introduction} The past few years have seen growing activity in applying low-rank tensor techniques to the approximate solution of high-dimensional problems, see, e.g.,~\cite{Grasedyck2013a,Hackbusch2012} for a survey. The success of these techniques crucially depends on the ability to approximate the object of interest by a tensor of low rank with respect to the chosen tensor format. Although this property has been frequently confirmed in practice, there is little theoretical insight into this matter so far. An important special case of the problems considered in this work is given by matrix equations of the form ${\mathbf{A}}(U) = B$ for a linear operator ${\mathbf{A}}: {\mathbb{R}}^{M \times N} \to {\mathbb{R}}^{M \times N}$. Clearly, any such operator can be written in the form \[ {\mathbf{A}}(U) = A^{(1)}_1 U A^{(2)}_1 + A^{(1)}_2 U A^{(2)}_2 + \cdots + A^{(1)}_{r_{\mathbf{A}}} U A^{(2)}_{r_{\mathbf{A}}}, \qquad A^{(1)}_i \in {\mathbb{R}}^{M\times M}, \quad A^{(2)}_i \in {\mathbb{R}}^{N\times N} \] for some $r_{\mathbf{A}} \le MN$. For $r_{\mathbf{A}} = 1$ and invertible matrices $A^{(1)}_1\!,\ A^{(2)}_1$ the rank of the solution $U$ equals the rank of $B$. This property does not hold for $r_{\mathbf{A}} \ge 2$, and one then considers the question of low-rank approximability of $U$, that is, the decay of its singular values. Particular attention has been paid to the case of a Lyapunov matrix equation \[ A U + U A^T = B \] for a matrix $B$ of low rank, which plays an important role in control and model reduction, see, e.g.,~\cite{Benner2013}. A number of works~\cite{Antoulas2002,Baker2015,Grasedyck2004,Grasedyck2003a,Grubisic2014,Penzl2000,Sabino2006} have been devoted to studying low-rank approximability for this problem. In particular, it has been shown that the singular values of $U$ decay exponentially when $A$ is symmetric positive definite.
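For instance, in this notation the Lyapunov operator itself is the simplest nontrivial case $r_{\mathbf{A}} = 2$:
\[
{\mathbf{A}}(U) = A \, U \, I + I \, U \, A^T, \qquad A^{(1)}_1 = A, \quad A^{(2)}_1 = I, \quad A^{(1)}_2 = I, \quad A^{(2)}_2 = A^T.
\]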
All existing proof techniques implicitly rely on the fact that the two operators $U \mapsto AU$ and $U\mapsto U A^T$ commute. In particular, this allows for the simultaneous diagonalization of both operators, which greatly simplifies the approximation problem. When this commutativity property is lost, these techniques fail. For example, only partial results~\cite{Benner2013a,Merz2012} are available so far for the innocent-looking modification \[ A U + U A^T + C U C^T = B \] for a general matrix $C$, which plays a role in bilinear and stochastic control. This indicates that we cannot expect to obtain exponential singular value decay for such generalizations. In general, we consider linear systems and eigenvalue problems of the form \begin{equation} \label{eq:highdimblabla} {\mathbf{A}} {\mathbf{u}} = {\mathbf{b}}, \qquad {\mathbf{A}} {\mathbf{u}} = \lambda {\mathbf{u}}, \end{equation} where ${\mathbf{A}}$ is a self-adjoint positive definite and bounded linear operator on a tensor product $H_1 \otimes \cdots \otimes H_d$ of Hilbert spaces $H_\mu$, $\mu = 1,\ldots,d$. We will study the low-rank approximability of the solution ${\mathbf{u}} \in H_1 \otimes \cdots \otimes H_d$ in certain tensor network formats, such as the tensor train format~\cite{Oseledets2011} (matrix product states~\cite{OestlundRommer1995}) and the hierarchical Tucker format~\cite{HackbuschKuehn2009} (tensor tree networks~\cite{Shi2006}). For these formats, the low-rank approximability is closely tied to the singular value decays of certain bilinear unfoldings associated with the tensor~\cite{Hackbusch2012}. This plays an important role in the study of quantum many-body systems~\cite{Schollwock2011}, where these decays are reflected in bounds on the entanglement entropy~\cite{EisertCramerPlenio2010}. For linear lattice models, rigorous bounds by Hastings~\cite{Hastings2007} imply a low-rank approximability that does \emph{not} deteriorate as the order $d$ increases.
In the special case of frustration-free systems, similar results~\cite{Arad2012} can be derived via a simplified construction that only takes the algebraic properties of the involved operators into account. The purpose of this work is to propose a general framework for obtaining singular value decay estimates for the solutions of~\eqref{eq:highdimblabla}. Following the basic idea of~\cite{Arad2012}, our results are based on controlling the rank growth of a fixed-point iteration. This approach is constructive and only exploits the tensor product structure of the involved operators. The assumed structure features quite frequently in applications, for example in Schr\"odinger type eigenvalue problems~\cite{Khoromskij2010d,Kressner2011a}, quantum many-body systems with local interactions~\cite{Schollwock2011}, the chemical master equation for simulating biochemical reaction networks~\cite{Kazeev2013}, and Markov models for queuing networks~\cite{Kressner2014a}. Under certain conditions, the derived estimates do not deteriorate with increasing $d$. Our construction shares similarities with recent results by Bachmayr and Dahmen~\cite{BachmayrDahmen2015}, who use the method of steepest descent to design a nearly optimal solver for linear systems. In contrast to our work, these results \emph{assume} the low-rank approximability of the solution a priori. Our results state algebraic approximation rates with respect to increasing ranks. An exponential approximation rate can only be obtained under certain commutativity assumptions, similar to the Lyapunov equation discussed above. One of the very few results in this direction is the approximation of the solution to the $d$-dimensional Poisson equation by means of exponential sums~\cite{Grasedyck2004,Hackbusch2012}. The rest of this paper is organized as follows. 
In Section~\ref{sec: abstract results}, we provide a general framework for assessing the interplay between rank growth and convergence rate of fixed-point iterations on tensor products of Hilbert spaces. Section~\ref{sec:linearequations} specializes this framework to the method of steepest descent applied to linear systems with tensor product structure, resulting in singular value decay estimates for the solution. In a similar manner, Section~\ref{sec: eigenvectors} covers symmetric eigenvalue problems. \section{Approximation by fixed-point iterations with finite rank growth}\label{sec: abstract results} In this section, we develop our general framework for low-rank tensor approximation by first considering the case $d = 2$ and then extending these results to tensors of arbitrary order $d$. \subsection{Bilinear approximation} Let $H_1, H_2$ be two Hilbert spaces (either both real or both complex), and consider the tensor product ${\mathbf{H}} = H_1 \otimes H_2$ with the induced inner product $\langle u_1 \otimes v_1, u_2 \otimes v_2 \rangle_{{\mathbf{H}}} = \langle u_1 , u_2 \rangle_{H_1} \cdot \langle v_1 , v_2 \rangle_{H_2}$. Note that ${\mathbf{H}}$ is isomorphic to $HS(H_2,H_1)$, the space of Hilbert-Schmidt operators from $H_2$ to $H_1$. Every tensor ${\mathbf{u}} \in {\mathbf{H}}$ admits a \emph{singular value decomposition} (SVD) \begin{equation}\label{eq:SVD} {\mathbf{u}} = \sum_{k=1}^\infty \sigma_k u_k \otimes v_k, \end{equation} with $u_1,u_2,\dots$ and $v_1,v_2,\dots$ forming complete orthonormal systems in $H_1$ and $H_2$, respectively, and \emph{singular values} $\sigma_1 \ge \sigma_2 \ge \dots \ge 0$. The smallest $r$ for which $\sigma_{r+1}=0$ is called the rank of ${\mathbf{u}}$. If there is no such $r$, the rank of ${\mathbf{u}}$ is $\infty$.
We denote by \[ \tau_r({\mathbf{u}}) = \inf_{\substack{\tilde u_1,\dots,\tilde u_r \in H_1 \\ \tilde v_1, \dots, \tilde v_r \in H_2}} \bigg\| {\mathbf{u}} - \sum_{k=1}^r \tilde u_k \otimes \tilde v_k \bigg\|_{{\mathbf{H}}} \] the error for the best bilinear approximation of rank at most $r$. It is well known that the infimum is achieved by the sum of the first $r$ terms in the singular value decomposition, and \[ \tau_r({\mathbf{u}}) = \min_{\rank ({\mathbf{v}}) \le r} \| {\mathbf{u}} - {\mathbf{v}} \|_{{\mathbf{H}}} = \bigg( \sum_{k=r+1}^\infty \sigma_k^2 \bigg)^{1/2}. \] In the sequel we will be concerned with the case that ${\mathbf{u}}$ is implicitly given, e.g., as the solution of an optimization problem that represents a linear operator equation or eigenvalue problem. The basis of our framework is to approach ${\mathbf{u}}$ by a fixed-point iteration \begin{equation}\label{eq:fixed-point iteration} {\mathbf{u}}_{n+1} = \Phi({\mathbf{u}}_n) \end{equation} which has a guaranteed convergence rate, but increases the ranks of the iterates at most by a constant factor in every step. Examples for~\eqref{eq:fixed-point iteration} relevant for linear systems are gradient descent methods, like the Richardson iteration that will be used later on. However, other fixed-point iterations are conceivable, which is why we first keep the setting general. We need the following properties.
\medskip \begin{enumerate}[(i)] \item \emph{Contraction:} There exists $0<q<1$ and $c>0$ such that \begin{equation}\label{A1} \| {\mathbf{u}}_{n+1} - {\mathbf{u}} \|_{\mathbf{H}} \le c q^{n+1} \| {\mathbf{u}}_0 - {\mathbf{u}} \|_{\mathbf{H}} \quad \text{for all $n$.} \tag{A1} \end{equation} \item \emph{Finite rank growth:} There exists ${R} > 1$ such that \begin{equation}\label{A2} \rank({\mathbf{u}}_{n+1}) \le {R}\cdot \rank({\mathbf{u}}_n) \quad \text{for all $n$.} \tag{A2} \end{equation} \end{enumerate} \medskip The missing ingredient is that the starting point ${\mathbf{u}}_0$ should have known finite rank. In fact, we will assume that $\rank({\mathbf{u}}_0) \le 1$. The limit point (as well as the other properties) of the iteration may depend on the choice of ${\mathbf{u}}_0$ (this will become particularly visible for the case of eigenvalue problems in Section~\ref{sec: eigenvectors}). We therefore consider a set \begin{equation} \label{eq:domain} {\mathcal{D}} \subseteq \{ {\mathbf{u}}_0 \vcentcolon \text{the sequence $({\mathbf{u}}_n)$ generated from ${\mathbf{u}}_0$ by~\eqref{eq:fixed-point iteration} satisfies~\eqref{A1} and~\eqref{A2}}\}, \end{equation} and assume \begin{enumerate}[(iii)] \item \emph{Rank-one starting point:} Properties~\eqref{A1} and~\eqref{A2} can be satisfied using a starting point in ${\mathcal{D}}$ with rank at most one, that is, \begin{equation}\label{A4} {\mathcal{D}} \cap \{ {\mathbf{u}}_0 \in {\mathbf{H}} \vcentcolon \rank({\mathbf{u}}_0) \le 1 \} \neq \emptyset. \tag{A0} \end{equation} \end{enumerate} Given~\eqref{A4}, one can define the quantity \[ \pi_1({\mathbf{u}}) = \inf_{\substack{{\mathbf{v}} \in {\mathcal{D}} \\ \rank({\mathbf{v}}) \le 1}} \|{\mathbf{v}} - {\mathbf{u}}\|_{\mathbf{H}}, \] and derive the main result of this section. 
\begin{theorem}\label{th: abstract result d2} The existence of a map $\Phi$ on ${\mathbf{H}}$ satisfying~\eqref{A4} implies \begin{equation}\label{eq: first estimate} \tau_r({\mathbf{u}}) \le c \pi_1({\mathbf{u}}) \sqrt{\left( 1 - \frac{(1-q^2) (r - {R}^{\lfloor \log_{R} r \rfloor})}{({R}-1) {R}^{\lfloor \log_{R} r \rfloor} } \right)} q^{\lfloor \log_{R} r \rfloor}. \end{equation} Simplified bounds are given by \begin{equation}\label{eq: cleaner bilinear estimate} \begin{aligned} \tau_r({\mathbf{u}}) \le c \pi_1({\mathbf{u}}) q^{\lfloor \log_{R} r \rfloor} \le c \pi_1({\mathbf{u}}) q^{(\log_{R} r) - 1} = c \pi_1({\mathbf{u}}) q^{-1} \left( \frac 1 r \right)^{\abs{\frac{\ln q}{\ln {R}}}}. \end{aligned} \end{equation} \end{theorem} \begin{proof} For brevity, we write $\tau_r$ instead of $\tau_r({\mathbf{u}})$. By~\eqref{A4}, there is a starting point ${\mathbf{u}}_0 \in {\mathcal{D}}$ of rank at most one such that the sequence $({\mathbf{u}}_n)$ formed by~\eqref{eq:fixed-point iteration} satisfies~\eqref{A1} and~\eqref{A2}. Consequently, $\rank( {\mathbf{u}}_n ) \le {R}^n$ and \[ \tau_{{R}^n} \le \| {\mathbf{u}}_n - {\mathbf{u}}\|_{{\mathbf{H}}} \le c q^{n} \| {\mathbf{u}}_0 - {\mathbf{u}} \|_{\mathbf{H}}. \] As this holds for all admissible ${\mathbf{u}}_0$, we may pass to the infimum: \begin{equation}\label{eq: estimate for powers of mult} \tau_{{R}^n} \le c \pi_1({\mathbf{u}}) q^{n}. \end{equation} Since the sequence $(\sigma_k)$ is decreasing, we have for every $0 \le s < {R}^{n+1} - {R}^n$ that \[ \sum_{k = {R}^n + 1}^{{R}^n + s} \sigma_k^2 \ge \frac{s}{{R}^{n + 1} - {R}^n} \sum_{k = {R}^n + 1}^{{R}^{n+1}} \sigma_k^2 = \frac{s}{({R}-1){R}^n} ( \tau_{{R}^n}^2 - \tau_{{R}^{n+1}}^2 ).
\] Hence, using~\eqref{eq: estimate for powers of mult}, we obtain for $r = {R}^n + s$ the estimate \begin{align*} \tau^2_r = \tau_{{R}^n}^2 - \sum_{k = {R}^n + 1}^{{R}^n + s} \sigma_k^2 &\le \tau_{{R}^n}^2 - \frac{s}{({R}-1){R}^n} ( \tau_{{R}^n}^2 - \tau_{{R}^{n+1}}^2 ) \\ &\le c^2 \pi_1({\mathbf{u}})^2 \left( 1 - \frac{(1-q^2)s}{({R}-1){R}^n} \right) q^{2n}, \end{align*} as asserted by~\eqref{eq: first estimate}. The simplified bound~\eqref{eq: cleaner bilinear estimate} follows from the observation that the term under the square root in~\eqref{eq: first estimate} is bounded by one. \end{proof} By general results for ordered sequences~\cite{DeVore1998}, a decay rate for the tail $\tau_r({\mathbf{u}})$ yields a decay rate for the singular values themselves. For instance, using~\eqref{eq: cleaner bilinear estimate}, we obtain \begin{equation}\label{eq: singular value rate} \sigma_r^2 \le \frac{\sum_{k = \lfloor r/2 \rfloor + 1}^r \sigma_k^2}{r - \lfloor r/2 \rfloor} \le \frac{\tau_{\lfloor r/2 \rfloor}^2({\mathbf{u}})}{\lfloor r/2 \rfloor} \le c^2 \pi_1({\mathbf{u}})^2 q^{-2} \left( \frac{1}{\lfloor r/2 \rfloor} \right)^{2\abs{\frac{\ln q}{\ln {R}}}} \le c^2 \pi_1({\mathbf{u}})^2 q^{-2} \left( \frac{2}{r-1} \right)^{2\abs{\frac{\ln q}{\ln {R}}}}. \end{equation} One consequence of~\eqref{eq: singular value rate} is that the von Neumann entropy of the squared singular values, \[ S({\mathbf{u}}) = -\sum_{k=1}^\infty \sigma_k^2 \log (\sigma_k^2), \] remains finite, \change{provided that $q^2 {R} < 1$. This is a non-trivial result since ${\mathbf{u}} \in H_1\otimes H_2$ only implies the convergence of $\sum_{k=1}^\infty \sigma_k^2$. Explicit bounds on the von Neumann entropy $S({\mathbf{u}})$ are of interest in many applications, for instance in quantum particle models where it represents the \emph{entanglement entropy} of ground states~\cite{Arad2012,EisertCramerPlenio2010,Hastings2007}.
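To illustrate the resulting rates (the values of $q$ and ${R}$ below are chosen purely for concreteness), suppose $q = 1/2$ and ${R} = 3$. Then~\eqref{eq: cleaner bilinear estimate} yields
\[
\tau_r({\mathbf{u}}) \le 2\, c\, \pi_1({\mathbf{u}})\, r^{-\frac{\ln 2}{\ln 3}} \approx 2\, c\, \pi_1({\mathbf{u}})\, r^{-0.63},
\]
and $q^2 {R} = 3/4 < 1$, so the von Neumann entropy $S({\mathbf{u}})$ is finite in this case.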
The quite strong condition~$q^2 {R} < 1$ on the fixed point iteration will reappear in Theorem~\ref{th: estimate the overlap} to deduce~\eqref{A4} from~\eqref{A1} and~\eqref{A2} in the case that ${\mathcal{D}}$ is the affine plane orthogonal to ${\mathbf{u}}$~\cite{Arad2012}.} \subsection{Multilinear approximation} We now consider $d \ge 2$ Hilbert spaces $H_1,H_2,\dots,H_d$ (either all real or all complex). For each subset $t \subseteq \{1,2,\dots,d\}$ of indices with $0<|t|<d$, we have the following isomorphism between the tensor product Hilbert space \[ {\mathbf{H}} = H_1 \otimes H_2 \otimes \dots \otimes H_d \] and Hilbert-Schmidt operators: \begin{equation}\label{eq: identification} {\mathbf{H}} \cong HS\bigg( \bigotimes_{\mu \in t} H_\mu, \bigotimes_{\nu \notin t} H_\nu \bigg), \end{equation} see, e.g.,~\cite{Hackbusch2012}. In the finite-dimensional case, this simply amounts to reshaping the tensor into a matrix, with the indices corresponding to $t$ merged into the row indices. The isomorphism~\eqref{eq: identification} allows us to introduce the \emph{$t$-rank} of ${\mathbf{u}} \in {\mathbf{H}}$, denoted by $\rank^{(t)}({\mathbf{u}})$, as the rank of the associated Hilbert-Schmidt operator. Correspondingly, the sequence of singular values $(\sigma_k^{(t)})$, and the best bilinear approximation errors \[ \tau^{(t)}_r({\mathbf{u}}) = \min_{\rank^{(t)} ({\mathbf{v}}) \le r} \| {\mathbf{u}} - {\mathbf{v}} \|_{{\mathbf{H}}} = \bigg( \sum_{k=r+1}^\infty (\sigma_k^{(t)})^2 \bigg)^{1/2} \] can be defined. Theorem~\ref{th: abstract result d2} implies for fixed $t$ that \begin{equation}\label{eq:simplified estimate for t-rank} \tau^{(t)}_r({\mathbf{u}}) \le c \pi_1^{(t)}({\mathbf{u}}) q^{-1}\left( \frac 1 r \right)^{\abs{\frac{\ln q}{\ln {R}^{(t)}}}} \end{equation} under slightly modified assumptions. In particular, the property~\eqref{A2} is replaced by \[ \rank^{(t)}({\mathbf{u}}_{n+1}) \le {R}^{(t)}\cdot \rank^{(t)}({\mathbf{u}}_n) \] for some ${R}^{(t)}>0$. 
The other properties remain the same. In principle, the constants $q$ and $c$ involved in~\eqref{A1} could also depend on $t$ but, for simplicity, we omit this dependence. With ${\mathcal{D}}$ defined as in~\eqref{eq:domain}, the analogue of the main assumption~\eqref{A4} is that the quantity \begin{equation} \label{eq:defpi} \pi_1^{(t)}({\mathbf{u}}) = \inf_{\substack{{\mathbf{v}} \in {\mathcal{D}} \\ \rank^{(t)}({\mathbf{v}}) \le 1}} \|{\mathbf{v}} - {\mathbf{u}}\|_{\mathbf{H}} \end{equation} is finite. Knowing the decay properties of $\tau^{(t)}_r({\mathbf{u}})$ for certain choices of $t$ is crucial for understanding the approximability of ${\mathbf{u}}$ in subspace based low-rank tensor formats. For instance, the tensor train format~\cite{Oseledets2011} involves the $t$-ranks of $t=\{1,2,\dots,\mu\}$ for $\mu = 1,2,\dots,d-1$. For prescribed ranks $r_\mu$, the best approximation error in this format admits the quasi-optimal bound~\cite[Thm. 2.2]{Oseledets2010} \[ \sqrt{\big(\tau^{\{1\}}_{r_1}({\mathbf{u}})\big)^2 + \big(\tau^{\{1,2\}}_{r_2}({\mathbf{u}})\big)^2 + \cdots + \big(\tau^{\{1,\ldots,d-1\}}_{r_{d-1}}({\mathbf{u}})\big)^2}. \] More specifically, $d$-independent bounds on the von Neumann entropies of the singular values $(\sigma_k^{(t)})$ for these specific choices of $t$ constitute one-dimensional \emph{area laws} in the theory of quantum spin systems~\cite{Arad2012,EisertCramerPlenio2010,Hastings2007}. 
\section{Linear equations with low-rank operators and low-rank data} \label{sec:linearequations} We now apply the general framework from Section~\ref{sec: abstract results} to a linear system \begin{equation}\label{eq: operator equation} {\mathbf{A}} {\mathbf{u}} = {\mathbf{b}}, \end{equation} where ${\mathbf{A}}$ is a self-adjoint operator on ${\mathbf{H}}$ with \begin{equation}\label{eq: coercivity} \gamma \| {\mathbf{v}} \|_{\mathbf{H}}^2 \le \langle {\mathbf{v}}, {\mathbf{A}} {\mathbf{v}} \rangle_{\mathbf{H}} \le \Gamma \| {\mathbf{v}} \|_{\mathbf{H}}^2 \end{equation} for some $0 < \gamma < \Gamma \change{< \infty}$. In particular, this is the case when all Hilbert spaces are finite-dimensional and ${\mathbf{A}}$ is a Hermitian positive definite matrix acting on ${\mathbf{H}}$. The solution ${\mathbf{u}}$ of~\eqref{eq: operator equation} is a fixed-point of the \emph{Richardson iteration} \begin{equation}\label{eq: Richardson step} {\mathbf{u}}_{n+1} = \Phi({\mathbf{u}}_n) := {\mathbf{u}}_n - \alpha ({\mathbf{A}} {\mathbf{u}}_n - {\mathbf{b}}), \quad \alpha = \frac{2}{\gamma + \Gamma}. \end{equation} It is well known that the convergence rate is bounded as follows: \[ \| {\mathbf{I}} - \alpha {\mathbf{A}} \|_{{\mathbf{H}} \to {\mathbf{H}}} \le \frac{\kappa - 1}{\kappa + 1} < 1, \] with the condition number $\kappa = \Gamma / \gamma$. Therefore, \begin{equation}\label{eq: contraction of Richardson} \| {\mathbf{u}}_{n+1} - {\mathbf{u}} \|_{\mathbf{H}} \le \left( \frac{\kappa - 1}{\kappa + 1} \right)^{n+1} \| {\mathbf{u}}_0 - {\mathbf{u}} \|_{\mathbf{H}} \end{equation} holds for all $n\ge 0$.
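As a concrete illustration (the numbers are chosen for exposition only), take $\gamma = 1$ and $\Gamma = 3$, so that $\kappa = 3$ and $\alpha = 1/2$. Since the spectrum of $\alpha {\mathbf{A}}$ then lies in $[\alpha\gamma, \alpha\Gamma] = [1/2, 3/2]$, we obtain
\[
\| {\mathbf{I}} - \alpha {\mathbf{A}} \|_{{\mathbf{H}} \to {\mathbf{H}}} \le \max\{\, \abs{1 - \alpha\gamma},\ \abs{1 - \alpha\Gamma} \,\} = \frac{1}{2} = \frac{\kappa - 1}{\kappa + 1},
\]
so every Richardson step at least halves the error.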
For a fixed choice of $t \subseteq \{1,2,\dots,d\}$, $0< \abs{t} < d$, we now assume that the operator and right-hand side admit a low-rank representation with respect to the splitting~\eqref{eq: identification}: \begin{equation}\label{eq: A with low t-rank} {\mathbf{A}} = \sum_{i=1}^{r_{\mathbf{A}}^{(t)}} A^{(t)}_i \otimes A^{(t^c)}_i, \qquad {\mathbf{b}} = \sum_{j = 1}^{r_{\mathbf{b}}^{(t)}} b_j^{(t)} \otimes b_j^{(t^c)}, \end{equation} where $t^c = \{1,2,\dots,d\} \setminus t$. We will assume $r_{\mathbf{b}}^{(t)} \le r_{\mathbf{A}}^{(t)}$; the general \change{finite rank} case can be obtained by superposition \change{as follows. If the $t$-rank of ${\mathbf{b}}$ is finite but exceeds $r_{\mathbf{A}}^{(t)}$, we first write ${\mathbf{b}} = {\mathbf{b}}_1 + \cdots + {\mathbf{b}}_m$ such that each summand has $t$-rank at most $r_{\mathbf{A}}^{(t)}$. We then apply the result below to each linear system ${\mathbf{A}} {\mathbf{u}}_1 = {\mathbf{b}}_1$, $\ldots$, ${\mathbf{A}} {\mathbf{u}}_m = {\mathbf{b}}_m$ to obtain approximability results for ${\mathbf{u}} = {\mathbf{u}}_1 + \cdots + {\mathbf{u}}_m$.} \begin{theorem}\label{th: bound from steepest descent for linear equations} Given~\eqref{eq: coercivity} and~\eqref{eq: A with low t-rank} with $r_{\mathbf{b}}^{(t)} \le r_{\mathbf{A}}^{(t)}$, the solution ${\mathbf{u}}$ of~\eqref{eq: operator equation} satisfies \begin{equation} \label{eq:lalabound} \tau_r^{(t)}({\mathbf{u}}) \le \frac{\| {\mathbf{u}} \|_{\mathbf{H}}}{q} \left( \frac 1 r \right)^{\abs{\frac{ \ln q}{\ln R^{(t)}}}} \end{equation} with $R^{(t)} = r_{\mathbf{A}}^{(t)}+2$ and $q = \frac{\kappa - 1}{\kappa + 1}$. If, additionally, $A_i^{(t)}$ or $A_i^{(t^c)}$ in~\eqref{eq: A with low t-rank} is the identity for some $i$, then~\eqref{eq:lalabound} holds with $R^{(t)} = r_{\mathbf{A}}^{(t)}+1$.
\end{theorem} \begin{proof} By expanding all terms, one concludes from~\eqref{eq: Richardson step} and~\eqref{eq: A with low t-rank} that \begin{equation}\label{eq: rank increase SD} \rank^{(t)}({\mathbf{u}}_{n+1}) \le \rank^{(t)}({\mathbf{u}}_n) + r_{\mathbf{A}}^{(t)} \rank^{(t)}({\mathbf{u}}_n) + r_{\mathbf{b}}^{(t)} \le (r_{\mathbf{A}}^{(t)}+2) \rank^{(t)}({\mathbf{u}}_n). \end{equation} Taking also~\eqref{eq: contraction of Richardson} into account, we see that for any starting point ${\mathbf{u}}_0 \in {\mathbf{H}}$ the conditions~\eqref{A1} and~\eqref{A2} hold with $q = \frac{\kappa - 1}{\kappa + 1}$, $c=1$, and $R^{(t)} = r_{\mathbf{A}}^{(t)} + 2$. Hence, the domain ${\mathcal{D}}$ considered in~\eqref{eq:domain} can be taken to be ${\mathcal{D}} = {\mathbf{H}}$ for this choice of parameters, and therefore~\eqref{A4} trivially holds. Considering ${\mathbf{u}}_0 = \mathbf{0}$ yields the estimate $\pi^{(t)}_1({\mathbf{u}}) \le \| {\mathbf{u}} \|_{\mathbf{H}}$. Consequently, the first part of the theorem is an instance of~\eqref{eq:simplified estimate for t-rank}. To show the second part, we may assume w.l.o.g. that $A_1^{(t)} = I$ in~\eqref{eq: A with low t-rank}. Then we can rewrite \[ {\mathbf{u}}_n - \alpha {\mathbf{A}} {\mathbf{u}}_n = \left( I \otimes (I - \alpha A_1^{(t)}) - \alpha \sum_{i=2}^{r_{\mathbf{A}}^{(t)}} A^{(t)}_i \otimes A^{(t^c)}_i \right) {\mathbf{u}}_n, \] so that the rank actually increases at most by a factor of $R^{(t)} = r_{\mathbf{A}}^{(t)} + 1$. 
\end{proof} \begin{example}\label{example: nearest neighbor interaction} The following structure occurs frequently in applications of high-dimensional operator equations: \begin{equation} \label{eq:structureA} \mathbf A = \mathbf L + \mathbf V, \end{equation} where \begin{gather*} \mathbf L = A_1 \otimes I \otimes \cdots \otimes I + I \otimes A_2 \otimes \cdots \otimes I + \cdots + I \otimes \cdots \otimes I \otimes A_d,\\ \mathbf V = B_1 \otimes C_2 \otimes I \otimes \cdots \otimes I + I \otimes B_2 \otimes C_3 \otimes \cdots \otimes I + \cdots + I \otimes \cdots \otimes I \otimes B_{d-1} \otimes C_{d}. \end{gather*} Here, the $\mu$th term of $\mathbf L$ represents the action on the $\mu$th variable. For example, a structured discretization of the $d$-dimensional Laplace operator takes this form. The terms in $\mathbf V$ describe interactions between two neighboring variables. We assume that all involved coefficients $A_\mu$, $B_\mu$, and $C_\mu$ are bounded self-adjoint operators satisfying the inequalities \[\gamma_A \le A_\mu \le \Gamma_A, \quad 0 \le B_\mu \le \Gamma_B, \quad 0 \le C_\mu \le \Gamma_C\] in the spectral sense, for some constants $\gamma_A, \Gamma_A, \Gamma_B, \Gamma_C>0$ independent of $\mu$. Then $\mathbf A$ is a bounded self-adjoint operator satisfying the inequality~\eqref{eq: coercivity} with $\gamma = d \gamma_A$ and $\Gamma = d\Gamma_A + (d-1) \Gamma_B \Gamma_C$. Consequently, the condition number $\kappa$ determining the contraction rate~\eqref{eq: contraction of Richardson} is bounded independently of $d$. On the other hand, it can be shown by an explicit construction~\cite{Khoromskij2010d,KreSU13} that any operator having the algebraic structure~\eqref{eq:structureA} admits a low-rank representation of the form~\eqref{eq: A with low t-rank} with $r_{\mathbf{A}}^{(t)} = 3$ for any $t = \{1,2,\ldots,\mu\}$.
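This construction can be sketched as follows for $t = \{1,2,\ldots,\mu\}$ (a sketch of the idea behind the cited construction, not its verbatim form): writing ${\mathbf{L}}_t + {\mathbf{V}}_t$ for the sum of all terms of $\mathbf L$ and $\mathbf V$ acting only on the variables in $t$ (and analogously for $t^c$), the only summand of $\mathbf V$ crossing the cut between $\mu$ and $\mu+1$ is the one containing $B_\mu \otimes C_{\mu+1}$, so that
\[
\mathbf A = ({\mathbf{L}}_t + {\mathbf{V}}_t) \otimes I + I \otimes ({\mathbf{L}}_{t^c} + {\mathbf{V}}_{t^c}) + \big( I \otimes \cdots \otimes I \otimes B_\mu \big) \otimes \big( C_{\mu+1} \otimes I \otimes \cdots \otimes I \big),
\]
which is of the form~\eqref{eq: A with low t-rank} with $r_{\mathbf{A}}^{(t)} = 3$.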
In turn, the solution to an operator equation with the structure in~\eqref{eq:structureA} and low-rank right-hand side ${\mathbf{b}}$ satisfies the decay estimate~\eqref{eq:lalabound} for any such $t$, independently of $d$. As discussed at the end of Section~\ref{sec: abstract results}, this implies $d$-independent approximability in the tensor train format. By~\cite[Ex. 5.2]{Kressner2011a}, the same conclusion holds for the hierarchical Tucker format. \end{example} \change{ It is instructive to discuss the special case $\mathbf V = 0$ in Example~\ref{example: nearest neighbor interaction}, which corresponds to the absence of the neighbor interaction terms $B_\mu$ and $C_\mu$. Resolving the recursion, the iterates produced by the method of steepest descent~\eqref{eq: Richardson step} take the form \begin{equation} \label{eq:explicitrepresentation} {\mathbf{u}}_{n} = ({\mathbf{I}} - \alpha {\mathbf{L}})^n {\mathbf{u}}_0 + \alpha \sum_{i = 0}^{n-1} ({\mathbf{I}} - \alpha {\mathbf{L}})^i {\mathbf{b}}. \end{equation} For a fixed choice of $t \subseteq \{1,2,\dots,d\}$, $0< \abs{t} < d$, the structure of ${\mathbf{L}}$ implies that we can partition, similarly as in~\eqref{eq: A with low t-rank}, \[ {\mathbf{I}} - \alpha {\mathbf{L}} = L^{(t)} \otimes I + I \otimes L^{(t^c)}.
\] Noting that $L^{(t)} \otimes I$ and $I \otimes L^{(t^c)}$ commute, this allows us to rewrite~\eqref{eq:explicitrepresentation} as \[ {\mathbf{u}}_{n} = p( L^{(t)}, L^{(t^c)}) {\mathbf{u}}_0 + \alpha q( L^{(t)}, L^{(t^c)}) {\mathbf{b}}, \] with \[ p(L^{(t)}, L^{(t^c)} ) = \sum_{k=0}^n \binom{n}{k} (L^{(t)})^{k} \otimes (L^{(t^c)})^{n-k} \] and \[ q(L^{(t)}, L^{(t^c)} ) = \sum_{\ell=0}^{n-1} \sum_{k=0}^{\ell} \binom{\ell}{k} (L^{(t)})^{k} \otimes (L^{(t^c)})^{\ell-k} = \sum_{k=0}^{n-1} (L^{(t)})^{k} \otimes \left( \sum_{\ell=k}^{n-1} \binom{\ell}{k} (L^{(t^c)})^{\ell-k}\right). \] Combined with~\eqref{eq:explicitrepresentation}, this implies \[ \rank^{(t)}({\mathbf{u}}_{n}) \le (n+1) \rank^{(t)}({\mathbf{u}}_0) + n \rank^{(t)}({\mathbf{b}}). \] This allows us to replace the error estimate~\eqref{eq: estimate for powers of mult} in the proof of Theorem~\ref{th: abstract result d2} } by $(\tau_n^{(t)})^2 \lesssim (\frac{\kappa - 1}{\kappa +1 })^{2n}$. In turn, we obtain exponential singular value decays with respect to all such $t$. Similar and even stronger results can be obtained by approximating the inverse ${\mathbf{L}}^{-1}$ of the Laplace-like operator ${\mathbf{L}}$ by exponential sums~\cite{Grasedyck2004,Hackbusch2012}. \section{Eigenvalue problems with low-rank operators}\label{sec: eigenvectors} As another application of our general framework, we now consider the approximability of an eigenvector ${\mathbf{u}}$ belonging to the smallest eigenvalue $\lambda_1$ of a bounded self-adjoint operator ${\mathbf{A}}\vcentcolon{\mathbf{H}}\to{\mathbf{H}}$. In particular, we have \begin{equation}\label{eq:spectrum bounds} \lambda_1 \| {\mathbf{v}} \|^2_{\mathbf{H}} \le \langle {\mathbf{v}}, {\mathbf{A}} {\mathbf{v}} \rangle_{\mathbf{H}}^{} \le \Gamma \| {\mathbf{v}} \|^2_{{\mathbf{H}}}, \end{equation} for some $\Gamma$. In the following, we assume $\lambda_1$ to be simple.
This implies that the rest of the spectrum is contained in an interval $[\lambda_2, \Gamma]$ with $\lambda_2 > \lambda_1$. The \emph{absolute gap} and the \emph{relative gap} are denoted by \begin{equation} \label{eq: gaps} \delta = \lambda_2 - \lambda_1, \qquad \Delta = \frac{\delta}{\Gamma - \lambda_1}, \end{equation} respectively. These gaps play a critical role in our estimates. We now fix ${\mathbf{u}}$ and denote by $\langle {\mathbf{u}} \rangle$ the linear span of ${\mathbf{u}}$. To approximate ${\mathbf{u}}$, we apply the Richardson iteration to the singular linear system \begin{equation}\label{equivalent linear system} {\mathbf{A}}_{\lambda_1} {\mathbf{u}} := ({\mathbf{A}} - \lambda_1 \mathbf{I}) {\mathbf{u}} = \mathbf{0}, \end{equation} restricted to the nontrivial invariant subspace $\langle {\mathbf{u}} \rangle^\bot$. This results in the iteration \begin{equation}\label{eq: Richardson iteration for ground state} {\mathbf{u}}_{n+1} = \Phi({\mathbf{u}}_n) := {\mathbf{u}}_n - \beta {\mathbf{A}}_{\lambda_1} {\mathbf{u}}_n = (1 + \beta \lambda_1) {\mathbf{u}}_n - \beta {\mathbf{A}} {\mathbf{u}}_n, \quad \beta = \frac{2}{\delta + \Gamma - \lambda_1}, \quad {\mathbf{u}}_0 \in {\mathbf{u}} + \langle {\mathbf{u}} \rangle^\bot. \end{equation} We emphasize that this method assumes knowledge of the exact $\lambda_1$ a priori. It is therefore primarily of theoretical interest, serving to derive the desired error estimates for the low-rank approximability of the eigenvector ${\mathbf{u}}$. In turn, these estimates could be used to design a practical method of optimal complexity, in the spirit of~\cite{BachmayrDahmen2015}. In order to apply Theorem~\ref{th: abstract result d2}, we now verify that the properties~\eqref{A1} and~\eqref{A2} are satisfied. We begin by discussing the convergence of~\eqref{eq: Richardson iteration for ground state}.
By the simplicity of $\lambda_1$, the self-adjoint operator ${\mathbf{A}}_{\lambda_1} = {\mathbf{A}} - \lambda_1 {\mathbf{I}}$ has the one-dimensional kernel $\langle {\mathbf{u}} \rangle$. It is bounded from below and above by $\delta$ and $\Gamma -\lambda_1$, respectively, on the invariant subspace $\langle {\mathbf{u}} \rangle^\bot$, so its condition number on this subspace is \change{bounded by} $1/\Delta$. This implies that the spectral radius of ${\mathbf{I}} - \beta {\mathbf{A}}_{\lambda_1}$ on the invariant subspace $\langle {\mathbf{u}} \rangle^\bot$ is bounded by $\frac{1-\Delta}{1+\Delta}$. Since \( {\mathbf{u}}_{n+1} - {\mathbf{u}} = ({\mathbf{I}} - \beta {\mathbf{A}}_{\lambda_1})({\mathbf{u}}_n - {\mathbf{u}}), \) an induction shows that if ${\mathbf{u}}_0 - {\mathbf{u}} \in \langle {\mathbf{u}} \rangle^\bot$, then ${\mathbf{u}}_n - {\mathbf{u}} \in \langle {\mathbf{u}} \rangle^\bot$ for all $n$, and \begin{equation}\label{eq: contraction of ground state Richardson} \| {\mathbf{u}}_{n+1} - {\mathbf{u}} \|_{\mathbf{H}} \le \left( \frac{1-\Delta}{1+\Delta} \right)^{n+1} \| {\mathbf{u}}_0 - {\mathbf{u}} \|_{\mathbf{H}} \quad \text{if ${\mathbf{u}}_0 \in {\mathbf{u}} + \langle {\mathbf{u}} \rangle^\bot$.} \end{equation} In other words,~\eqref{A1} holds with $q = \frac{1 - \Delta}{1 + \Delta}$. As for~\eqref{A2}, similarly to~\eqref{eq: rank increase SD}, the $t$-ranks of the iteration~\eqref{eq: Richardson iteration for ground state} satisfy \begin{equation}\label{eq: rank estimate for groundstate SD} \rank^{(t)}({\mathbf{u}}_{n+1}) \le (r_{\mathbf{A}}^{(t)}+1) \rank^{(t)}({\mathbf{u}}_n), \end{equation} provided that ${\mathbf{A}}$ admits a representation of the form~\eqref{eq: A with low t-rank}. Once again, if one of the operators $A_i^{(t)}$ or $A_i^{(t^c)}$ in~\eqref{eq: A with low t-rank} is the identity, then $r_{\mathbf{A}}^{(t)} + 1$ can be replaced by $r_{\mathbf{A}}^{(t)}$ in~\eqref{eq: rank estimate for groundstate SD}.
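As a quick numerical illustration of the contraction rate $q = \frac{1-\Delta}{1+\Delta}$ established above, the iteration~\eqref{eq: Richardson iteration for ground state} can be run on a small synthetic problem. The following Python sketch uses a hypothetical symmetric $4\times 4$ matrix with spectrum $\{1,2,3,5\}$; all numerical values are illustrative, and the exact $\lambda_1$ is assumed to be known, as in the text:

```python
import numpy as np

# Hypothetical test problem: symmetric A with known spectrum {1, 2, 3, 5},
# so lambda_1 = 1, lambda_2 = 2, and Gamma = 5 is a valid upper bound.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
A = Q @ np.diag([1.0, 2.0, 3.0, 5.0]) @ Q.T

lam1, lam2, Gamma = 1.0, 2.0, 5.0
delta = lam2 - lam1                  # absolute gap
Delta = delta / (Gamma - lam1)       # relative gap
beta = 2.0 / (delta + Gamma - lam1)  # step size from the iteration above
q = (1.0 - Delta) / (1.0 + Delta)    # predicted contraction rate (= 0.6 here)

u = Q[:, 0]                          # eigenvector for lambda_1
u0 = u + Q[:, 1] + 0.5 * Q[:, 2]     # starting point in u + <u>^perp

un = u0.copy()
errs = [np.linalg.norm(un - u)]
for _ in range(20):
    # u_{n+1} = u_n - beta * (A - lambda_1 * I) u_n
    un = un - beta * (A @ un - lam1 * un)
    errs.append(np.linalg.norm(un - u))

# Each step multiplies the error by at most the factor q.
contracts = all(errs[n + 1] <= q * errs[n] + 1e-12 for n in range(20))
```

With these values, $\Delta = 1/4$ and $q = 0.6$, so the iterate error decays at least like $0.6^n$.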
In both cases, property~\eqref{A2} is satisfied. \begin{paragraph}{\bf Assumption (A0).} By~\eqref{eq: contraction of ground state Richardson}, the set ${\mathcal{D}}$ defined in~\eqref{eq:domain} takes the form \[ {\mathcal{D}} = \{ {\mathbf{v}} \in {\mathbf{H}} \vcentcolon \langle {\mathbf{v}} - {\mathbf{u}} , {\mathbf{u}} \rangle_{\mathbf{H}} = 0 \}. \] To verify the main assumption~\eqref{A4}, we have to show that ${\mathcal{D}}$ contains a starting point having $t$-rank at most one. In fact, let ${\hat{\bu}}_0$ be any element with $\rank^{(t)}({\hat{\bu}}_0) = 1$ that is not orthogonal to ${\mathbf{u}}$. Then \begin{equation}\label{qe: rescaled rank-one starting point} {\mathbf{u}}_0 = \frac{\| {\mathbf{u}} \|_{\mathbf{H}}^2}{\langle {\mathbf{u}}, {\hat{\bu}}_0 \rangle_{\mathbf{H}}} {\hat{\bu}}_0 \in {\mathcal{D}} \end{equation} with $\rank^{(t)}({\mathbf{u}}_0) = 1$. In turn, the quantity $\pi_1^{(t)}({\mathbf{u}})$ defined in~\eqref{eq:defpi} is finite. \end{paragraph} Our findings above allow us to apply Theorem~\ref{th: abstract result d2} for estimating the $t$-rank approximation error of the eigenvector ${\mathbf{u}}$. \begin{theorem}\label{th: EV problems} Given~\eqref{eq: A with low t-rank} and~\eqref{eq:spectrum bounds}, the solution ${\mathbf{u}}$ of~\eqref{equivalent linear system} satisfies \begin{equation}\label{eq: decay rate for EV} \tau_r^{(t)}({\mathbf{u}}) \le \frac{\pi^{(t)}_1({\mathbf{u}})}{q} \left( \frac 1 r \right)^{\abs{\frac{ \ln q}{\ln R^{(t)}}}}, \end{equation} with $q = \frac{1 - \Delta}{1 + \Delta}$, $R^{(t)} = r_{\mathbf{A}}^{(t)}+1$, and the gaps $\delta,\Delta$ defined in~\eqref{eq: gaps}. If, additionally, $A_i^{(t)}$ or $A_i^{(t^c)}$ in~\eqref{eq: A with low t-rank} is the identity for some $i$, then~\eqref{eq: decay rate for EV} holds with $R^{(t)} = r_{\mathbf{A}}^{(t)}$. 
\end{theorem} A notable difference between Theorem~\ref{th: EV problems} and Theorem~\ref{th: bound from steepest descent for linear equations} is that the former features the quantity $\pi^{(t)}_1({\mathbf{u}})$ in the estimate. This quantity measures the distance between ${\mathbf{u}}$ and the set of $t$-rank one tensors within the affine space ${\mathbf{u}} + \langle {\mathbf{u}} \rangle^\bot$. In this way, the problem of rank-$r$ approximability has been reduced to the problem of rank-one approximability. \subsection{The problem of $t$-rank one approximability} In this section, we derive upper bounds for the quantity $\pi^{(t)}_1({\mathbf{u}})$ defined in~\eqref{eq:defpi}. Trivially, every starting point ${\mathbf{u}}_0 \in {\mathcal{D}}$ of $t$-rank one yields the upper bound $\|{\mathbf{u}}_0 - {\mathbf{u}}\|_{{\mathbf{H}}}$. While this is of interest when considering a specific iteration, more insight would be gained from bounds that depend on $\delta$, $\Delta$, and $r_{\mathbf{A}}^{(t)}$ only. Deriving such bounds is surprisingly difficult and lies at the heart of related works on the entanglement entropy; see, e.g.,~\cite{Arad2012}. In an infinite-dimensional tensor product space ${\mathbf{H}}$, the ratio $\pi^{(t)}_1({\mathbf{u}}) / \| {\mathbf{u}} \|_{\mathbf{H}}$ may become arbitrarily large for suitably chosen ${\mathbf{u}} \in {\mathbf{H}}$. Upper bounds are obtained from $t$-rank one approximations to ${\mathbf{u}}$ in the ${\mathbf{H}}$-norm. Specifically, considering~\eqref{qe: rescaled rank-one starting point} with $\|{\hat{\bu}}_0\|_{\mathbf{H}} = 1$, we get the estimate \begin{equation*} \pi^{(t)}_1({\mathbf{u}}) \le \| {\mathbf{u}}_0 -{\mathbf{u}} \|_{\mathbf{H}} \le \|{\mathbf{u}}_0\|_{\mathbf{H}} = \frac{\| {\mathbf{u}} \|_{\mathbf{H}}}{\abs{\left \langle \frac{{\mathbf{u}}}{\| {\mathbf{u}}\|_{\mathbf{H}}} , {\hat{\bu}}_0 \right\rangle_{{\mathbf{H}}}}} , \end{equation*} where we used that ${\mathbf{u}}_0 -{\mathbf{u}}$ is orthogonal to ${\mathbf{u}}$.
Thus, the problem is further reduced to providing a lower bound on the overlap of the normalized eigenvector with normalized tensors of $t$-rank one: \begin{equation}\label{eq: reduction to overlap} \pi_1^{(t)}({\mathbf{u}}) \le \frac{\| {\mathbf{u}} \|_{\mathbf{H}}}{\theta_1^{(t)}({\mathbf{u}})}, \end{equation} where \begin{equation}\label{definition of theta} \theta_1^{(t)}({\mathbf{u}}) := \sup_{\substack{\rank^{(t)} ({\hat{\bu}}_0) = 1 \\ \| {\hat{\bu}}_0 \|_{\mathbf{H}} = 1}} \left \langle \frac{{\mathbf{u}}}{\| {\mathbf{u}}\|_{\mathbf{H}}} , {\hat{\bu}}_0 \right\rangle_{{\mathbf{H}}}. \end{equation} In the case that every $H_\mu$ has finite dimension $N_\mu$, $\mu=1,\dots,d$, a generic bound is obtained as follows. The singular value decomposition~\eqref{eq:SVD} of the solution with respect to the identification~\eqref{eq: identification} is a finite sum with \[ D^{(t)} = \min\bigg( \prod_{\mu \in t} N_\mu, \prod_{\nu \notin t} N_\nu \bigg) \] mutually orthogonal $t$-rank one tensors of decreasing norms $\sigma_k^{(t)}$. This implies that the overlap~\eqref{definition of theta} is at least $\sigma_1^{(t)} / \| {\mathbf{u}}\|_{\mathbf{H}} \ge 1/\sqrt{D^{(t)}}$. By~\eqref{eq: reduction to overlap}, we obtain \begin{equation}\label{eq: naive rank-one overlap estimate} \pi^{(t)}_1({\mathbf{u}}) \le \sqrt{ D^{(t)} } \| {\mathbf{u}} \|_{\mathbf{H}}. \end{equation} This bound is independent of $d$ only when the cardinality of $t$ does not grow, which is the case for the Tucker format~\cite{Hackbusch2012}. The tensor train and hierarchical Tucker formats, however, require taking large splittings like $t = \{1,\dots,d/2\}$ into consideration. Consequently, the bound~\eqref{eq: naive rank-one overlap estimate} grows exponentially with $d$. In~\cite{Arad2012}, one of the very few results on this question, it has been shown how this growth can be avoided in the case of frustration-free systems. This constitutes a rather limiting assumption.
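The generic bound~\eqref{eq: naive rank-one overlap estimate} and the rescaled starting point~\eqref{qe: rescaled rank-one starting point} are easy to verify in a toy case. The following Python sketch uses a hypothetical $d = 2$ example with $H_1 = H_2 = \mathbb{R}^4$ and $t = \{1\}$, so that the $t$-rank is the ordinary rank of the $4 \times 4$ matricization and $D^{(t)} = 4$:

```python
import numpy as np

# Hypothetical d = 2 example: H is identified with 4x4 matrices, t = {1},
# so D^(t) = min(4, 4) = 4. The t-rank is the matrix rank.
rng = np.random.default_rng(1)
U = rng.standard_normal((4, 4))
U /= np.linalg.norm(U)                   # normalize: ||u||_H = 1

P, s, Vt = np.linalg.svd(U)
theta = s[0]                             # best overlap with a normalized
                                         # t-rank-one tensor is sigma_1
lb_ok = theta >= 1.0 / np.sqrt(4.0)      # sigma_1 >= ||u|| / sqrt(D^(t))

# Rescaled t-rank-one starting point, using the leading singular pair:
U0_hat = np.outer(P[:, 0], Vt[0, :])     # ||U0_hat|| = 1, <u, U0_hat> = sigma_1
U0 = (1.0 / np.sum(U * U0_hat)) * U0_hat # u_0 = (||u||^2 / <u, u0_hat>) u0_hat
orth = abs(np.sum((U0 - U) * U))         # u_0 - u is orthogonal to u
pi_ok = np.linalg.norm(U0 - U) <= np.sqrt(4.0)   # pi_1 <= sqrt(D^(t)) ||u||
```

Here $\theta_1^{(t)}({\mathbf{u}})$ equals the largest singular value of the matricization, which is never smaller than $\|{\mathbf{u}}\|_{\mathbf{H}}/\sqrt{D^{(t)}}$.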
The following result adapts a technique from~\cite[Lemma III.2]{Arad2012}, which does not require this assumption but instead assumes a rather strong contraction relative to the rank growth. \begin{theorem}\label{th: estimate the overlap} With the notation introduced in Theorem~\ref{th: EV problems}, assume that $q^2 {R}^{(t)} < 1$. Then it holds that \[ \big(\theta_1^{(t)} ({\mathbf{u}})\big)^2 \ge \frac{1}{2} \bigg( \frac{1}{{R}^{(t)}} \bigg)^{\Big\lceil \frac{- \ln 2}{\ln\left(q^2 {R}^{(t)}\right)} \Big\rceil} \] for $\theta_1^{(t)} ({\mathbf{u}})$ defined in~\eqref{definition of theta}. Consequently, by~\eqref{eq: decay rate for EV} and~\eqref{eq: reduction to overlap}, \[ \tau_r^{(t)}({\mathbf{u}}) \le \sqrt{2} ({R}^{(t)})^{\frac{1}{2}\Big\lceil \frac{- \ln 2}{\ln\left(q^2 {R}^{(t)}\right)} \Big\rceil} \| {\mathbf{u}} \|_{\mathbf{H}} \left( \frac 1 r \right)^{\abs{\frac{ \ln q}{\ln {R}^{(t)}}}}. \] \end{theorem} \begin{proof} Without loss of generality, we may assume $\| {\mathbf{u}} \|_{\mathbf{H}} = 1$. Let $P$ denote the orthogonal projection onto $\langle {\mathbf{u}} \rangle$. To simplify the notation, we write $\theta$ instead of $\theta_1^{(t)}({\mathbf{u}})$. Let $\epsilon > 0$ and let ${\hat{\bu}}_0$ be a normalized $t$-rank one tensor with $\| P {\hat{\bu}}_0 \|_{\mathbf{H}} = \langle {\mathbf{u}} , {\hat{\bu}}_0 \rangle_{{\mathbf{H}}} \ge \theta - \epsilon$. We let ${\hat{\bu}}_n$ denote the iterate obtained after $n$ steps of the Richardson method~\eqref{eq: Richardson iteration for ground state} with starting vector ${\hat{\bu}}_0$. Since $\hat {\mathbf{u}}_0 \in P \hat {\mathbf{u}}_0+ \langle {\mathbf{u}} \rangle^\bot$, this rescaled Richardson method converges to $P \hat {\mathbf{u}}_0 \not=0$ and, by induction, \begin{equation}\label{eq: constant overlap} P {\hat{\bu}}_n = P {\hat{\bu}}_0.
\end{equation} By~\eqref{eq: contraction of ground state Richardson} and using $\| {\hat{\bu}}_0 \|_{\mathbf{H}} = 1$, \[ \|(I - P){\hat{\bu}}_n \|_{\mathbf{H}}^2 \le q^{2n} \|(I - P) {\hat{\bu}}_0 \|_{\mathbf{H}}^2 = q^{2n}(1 - \| P {\hat{\bu}}_0\|_{\mathbf{H}}^2). \] Hence, \begin{align}\label{eq: norm of hbun} \| {\hat{\bu}}_n \|_{\mathbf{H}}^2 &= \| P {\hat{\bu}}_n \|_{\mathbf{H}}^2 + \| (I - P) {\hat{\bu}}_n \|_{\mathbf{H}}^2 = \| P {\hat{\bu}}_0 \|_{\mathbf{H}}^2 + \| (I - P) {\hat{\bu}}_n \|^2_{\mathbf{H}}\notag \\ &\le \| P{\hat{\bu}}_0 \|_{\mathbf{H}}^2 + q^{2n}(1 - \| P {\hat{\bu}}_0\|_{\mathbf{H}}^2)\notag \\ &\le \theta^2 + q^{2n} (1 - (\theta - \epsilon)^2), \end{align} where we used that $\| P {\hat{\bu}}_0 \|_{{\mathbf{H}}} \le \theta$ by definition~\eqref{definition of theta} of $\theta$. Using the singular value decomposition, we can write \[ {\hat{\bu}}_n = \sum_{k=1}^{\rank^{(t)}({\hat{\bu}}_n)} \sigma_k {\mathbf{v}}_k, \] with mutually orthonormal $t$-rank one tensors ${\mathbf{v}}_k$. By the Cauchy-Schwarz inequality, \[ (\theta - \epsilon)^2 \le \abs{\langle {\mathbf{u}}, {\hat{\bu}}_0 \rangle_{\mathbf{H}}}^2 = \abs{\langle {\mathbf{u}}, {\hat{\bu}}_n \rangle_{\mathbf{H}}}^2 \le \bigg(\sum_{k=1}^{\rank^{(t)}({\hat{\bu}}_n)} \abs{ \langle {\mathbf{u}}, {\mathbf{v}}_k \rangle_{\mathbf{H}} }^2 \bigg) \| {\hat{\bu}}_n \|_{\mathbf{H}}^2, \] where the equality follows from~\eqref{eq: constant overlap}. As $\rank^{(t)}({\hat{\bu}}_n) \le ({R}^{(t)})^n$, we conclude using~\eqref{eq: norm of hbun} that \begin{equation* \theta^2 \ge | \langle {\mathbf{u}}, {\mathbf{v}}_k \rangle_{\mathbf{H}} |^2 \ge \frac{(\theta - \epsilon)^2}{({R}^{(t)})^n\| {\hat{\bu}}_n \|_{{\mathbf{H}}}^2} \ge \frac{(\theta - \epsilon)^2}{({R}^{(t)})^n(\theta^2 + q^{2n} (1 - (\theta - \epsilon)^2))} \end{equation*} holds for at least one $k$. Note that the first inequality again is due to the definition of $\theta$. 
As $\epsilon$ can be chosen arbitrarily small, we obtain \[ ({R}^{(t)})^n(\theta^2 + q^{2n} (1 - \theta^2)) \ge 1, \] or, equivalently, \begin{equation}\label{eq:equivalently} \theta^2 (1 - q^{2n}) \ge ({R}^{(t)})^{-n} - q^{2n} = ({R}^{(t)})^{-n} (1 - (q^2{R}^{(t)})^n). \end{equation} For $n \ge \frac{-\ln 2}{\ln(q^2{R}^{(t)})}$, which is positive by assumption, we have $(q^2{R}^{(t)})^n \le 1/2$. Then~\eqref{eq:equivalently} implies \begin{equation}\label{eq:final estimate} \theta^2 \ge \frac{1}{({R}^{(t)})^{n}} \frac{1 - (q^2{R}^{(t)})^n}{1 - q^{2n}} \ge \frac{1}{2({R}^{(t)})^{n}}. \end{equation} The assertion follows by choosing $n = \left\lceil \frac{- \ln 2}{\ln(q^2{R}^{(t)})} \right\rceil$. \end{proof} Note that better bounds on $\theta$ may be obtained from~\eqref{eq:final estimate} by estimating the maximum value of the middle term as a function of $n$ more carefully, but this quickly becomes cumbersome. The proof of Theorem~\ref{th: estimate the overlap} is based on the intuition that the ratio between the energy contraction rate $q^{2n}$ and the reciprocal rank increase $1/({R}^{(t)})^n$ after $n$ steps of the Richardson iteration can be made arbitrarily small when $q^2 {R}^{(t)} < 1$. Interestingly, this assumption alone does not result in better singular value decays in any of the above theorems, as only the ratio of the logarithms enters. The consideration of several steps of the fixed-point iteration only pays off when improved estimates of ${R}^{(t)}$ are available, as discussed for linear systems \change{at the end of Section~\ref{sec:linearequations}}. An example of relevance to eigenvalue problems is given, for instance, by an operator of the form \[ {\mathbf{A}} = A_1 \otimes I + I \otimes A_2 + B \otimes C, \] see also Example~\ref{example: nearest neighbor interaction}.
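The reduced rank growth for an operator of this form can be observed numerically by applying ${\mathbf{A}}$ twice to a $t$-rank one vector and computing the rank of the resulting matricization. A Python sketch, with arbitrary symmetric blocks of size $8$ so that a rank of $6$ is actually visible:

```python
import numpy as np

# Hypothetical blocks of size 8 (large enough that a matricization rank of 6
# is visible, since an 8x8 matricization can have rank up to 8).
rng = np.random.default_rng(2)
n = 8

def sym(M):
    return (M + M.T) / 2

A1, A2 = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))
B, C = sym(rng.standard_normal((n, n))), sym(rng.standard_normal((n, n)))
I = np.eye(n)

# Operator with t-rank 3: A1 (x) I + I (x) A2 + B (x) C.
A = np.kron(A1, I) + np.kron(I, A2) + np.kron(B, C)

x, y = rng.standard_normal(n), rng.standard_normal(n)
u0 = np.kron(x, y)                    # t-rank-one starting vector

v = A @ (A @ u0)                      # two applications of A
s = np.linalg.svd(v.reshape(n, n), compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))  # numerical matricization rank
```

Expanding ${\mathbf{A}}^2$ yields six Kronecker-separable terms, for example $2\, A_1 \otimes A_2$ and $(A_1 B + B A_1) \otimes C$, so the matricization rank of ${\mathbf{A}}^2 {\mathbf{u}}_0$ is at most $6$ rather than the naive $3^2 = 9$.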
A direct calculation reveals that for such an operator, two steps of the iteration~\eqref{eq: Richardson iteration for ground state} do not increase the rank by a factor of $3^2 = 9$, but only by a factor of at most $6$. \section{Conclusions} We have established bounds on the singular value decays for solutions to tensor structured linear systems and eigenvalue problems. As these decays govern the low-rank approximability in various low-rank tensor formats, such as the tensor train and the hierarchical Tucker formats, our results allow us to make a priori statements about the suitability of these formats to address a given application, possibly even for large orders $d$. With the assumptions made in this paper, our construction yields algebraic decays. To obtain exponential decays, as they are sometimes observed in practice, further assumptions may be needed. In \change{Section~\ref{sec:linearequations}}, a rather restrictive commutativity assumption is shown to yield exponential decays. It would certainly be of interest to identify less restrictive assumptions. \section*{Acknowledgment} We thank Markus Bachmayr and Bart Vandereycken for inspiring discussions on an earlier draft of this paper, which resulted in some valuable improvements. \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} The solar wind supports a turbulent energy cascade where the spectrum of magnetic field fluctuations follows a Kolmogorov inertial range scaling of $k^{-5/3}$ extending over several decades \citep{Tu1995,Goldstein1995,Bruno2013}. We convert the wavenumber, $k$, of the turbulent fluctuations along the sampling direction of the solar wind flow to a frequency, $f$, using Taylor's hypothesis \citep{Taylor1938}: $f\sim kv_{sw}/2\pi$, where $v_{sw}$ is the solar wind speed. At frequencies in the plasma frame of order the ion gyrofrequency, $\Omega_i=q_iB_0/m_i$, typically measured around 0.1-1 Hz in the spacecraft frame at 1 AU, the spectrum steepens \citep[e.g.,][]{ColemanJr.1968,Russell1972}. Here, $q_i$ is the ion charge, $m_i$ is the ion mass, and $B_0$ is the background field strength. The observed spectral break in the magnetic field power spectra at these so-called ion-kinetic frequencies has been attributed to the onset of kinetic effects such as dispersion or turbulent dissipation \citep[see][and references therein]{Alexandrova2013,Kiyani2015,Chen2016}, although the actual physical mechanisms behind the steepening remain poorly understood. In-situ data from spacecraft have revealed a bimodal distribution in solar wind speed with two distinct peaks, leading to the designation of two types of wind: slow ($\sim350 \text{ km s}^{-1}$) and fast ($\sim600 \text{ km s}^{-1}$), attributed to different source regions in the solar corona \citep[e.g.,][]{Schwenn1990,Habbal1997OriginsWind}. In fast wind streams, the spectral steepening is sometimes associated with the start of a variable transition range spanning less than a decade in frequency \citep[see also the power spectra in \citealt{Kiyani2009GlobalTurbulence,Chen2010}]{Sahraoui2010,Smith2012,Kiyani2013EnhancedTurbulence,Bruno2014a,Bruno2017SolarScales}. 
The spectral index of the spectrum in this range typically lies between -2 and -4 \citep{Smith2006,Hamilton2008,Koval2013,Bruno2014a}. At even higher frequencies, the spectrum changes again to a second, more `universal' power law of $f^{\;-2.8}$ continuing towards electron scales, associated with dispersive modes such as kinetic Alfv\'en waves (hereafter, KAWs) and whistler waves \citep[e.g.,][]{Gary2009,TenBarge2012InterpretingWind,Boldyrev2013}, or small-scale coherent structures such as current sheets \citep[e.g.,][]{Perri2012DetectionTurbulence}. The slow wind, in general, typically lacks a transition range and instead shows a single steepening from $f^{-5/3}$ to about $f^{-2.8}$ \citep{Bruno2014a,Bruno2017SolarScales}. There is strong evidence of the coupling between magnetic energy in the turbulent fluctuations and kinetic energy of the ions, linking the large-scale turbulent cascade with heating of the solar wind particle distributions. For example, the temperature of the solar wind decreases with radial distance more slowly than expected for adiabatic expansion \citep{Marsch1982b,Richardson1995}, implying an active heating process during its expansion \citep[e.g.,][]{Cranmer2009}, which is consistent with the energy cascade rate throughout the inertial range \citep[e.g.,][]{MacBride2008,Stawarz2009}. In fast wind streams, the temperature anisotropy, $T_\perp/T_\parallel$, of the proton core population and plateau formation in the proton velocity distributions \citep{Tu2001OnCorona,Marsch2001,Tu2002,Marsch2004,Heuer2007DiffusionProtons} also suggest ongoing heating by the turbulent cascade. These observations indicate that dissipation of the turbulent fluctuations is a likely candidate for the spectral steepening.
In fact, the steepness of spectra is correlated with the energy cascade rate and power level in the inertial range \citep{Smith2006,Bruno2014a}, as well as the thermal proton temperature \citep{Leamon1998a}, implying that steeper slopes are associated with greater heating rates. The kinetic features of the proton velocity distributions highlight a deviation from local thermal equilibrium that is due to the lack of Coulomb collisions in the solar wind \citep[][see also the review by \citealt{Marsch2006}]{Marsch2012}. Instead, these features are likely regulated by linear and non-linear wave-particle interactions \citep[e.g.,][]{Howes2008,Schekochihin2009,Chandran2010,Smith2012,Osman2014MagneticWind} such as ion-cyclotron resonance, Landau resonance and transit-time damping, stochastic heating, entropy cascades, and reconnection-associated mechanisms. There is also evidence that plasma instabilities play an important role \citep{Kasper2002a,Hellinger2006,Matteini2007,Bale2009,Maruca2012,Osman2013,Servidio2014}. These physical processes may lead to the dissipation of energy from the turbulence and subsequent heating of ions observed by spacecraft. Understanding these mechanisms in the collisionless solar wind plasma is a major outstanding problem in the field of heliophysics research. \subsection{Spectral Steepening at High Frequencies} Several different characteristic ion plasma scales have been suggested to correspond to the observed spectral steepening, and each one is associated with different plasma heating processes. Two scales that are commonly proposed to correspond to the spectral break are the ion inertial length, $d_i=v_A/\Omega_i$, and the ion gyroscale, $\rho_i=v_{th,\perp}/\Omega_i$. 
Here, $v_A=B_0/\sqrt{\mu_0 n_i m_i}$ is the Alfv\'en speed, $n_i$ is the ion number density, $v_{th,\perp}=\sqrt{2k_BT_{i,\perp}/m_i}$ is the ion thermal speed perpendicular to the background magnetic field, $\mathbf{B}_0$, and $T_{i,\perp}$ is the ion perpendicular temperature. The inertial length is associated with the onset of dispersive effects due to the Hall current term, as well as reconnection of small-scale current sheets \citep{Dmitruk2004,Galtier2006,Galtier2007}, whereas the transition from Alfv\'en wave to KAW-dominated turbulence occurs at scales comparable to the gyroscale \citep{Howes2008,Schekochihin2009,Boldyrev2012}. Another explanation for the observed spectral steepening is cyclotron resonance of Alfv\'en waves with solar wind ions \citep[e.g.,][]{ColemanJr.1968,Marsch1982,Denskat1983,Goldstein1994,Marsch2003Onwind,Gary2004,Smith2012}. Here, the only ions we consider are protons, and throughout this paper we use the subscript, $i$, to refer exclusively to protons. \citet{Leamon1998a} proposed a wavenumber for the onset of cyclotron damping of Alfv\'en waves \citep[see also][]{Gary1999CollisionlessTheory}. The cyclotron resonance condition for protons is given by equating the Doppler-shifted wave frequency in the plasma frame, $\omega$, to the proton gyrofrequency, $\Omega_i$ \citep[e.g., see][]{Stix1992}, \begin{equation} \omega(k_\parallel)-k_\parallel v_\parallel=\pm\Omega_i, \end{equation} \noindent where $v_\parallel$ is the parallel velocity of the resonant protons and $k_{\parallel}$ is the parallel component of the wavenumber with respect to $\mathbf{B}_0$. The $\pm$ sign takes into account the sense of polarization of the wave. The wave electric field vector of left-hand circularly-polarized Alfv\'en/ion-cyclotron waves (hereafter, AICs) propagating parallel to $\mathbf{B}_0$ rotates in the same direction as proton gyration, so we use the positive sign. 
This interaction is most effective if $k_\parallel v_\parallel<0$, reducing the resonance condition to: \begin{equation} \omega(k_\parallel)+k_\parallel \left|v_\parallel\right|=\Omega_i. \end{equation} \noindent To obtain the minimum wavenumber, $k_\parallel=k_c$, at which dissipation of the waves by cyclotron resonance with the background solar wind proton distribution occurs, we take $v_\parallel= v_{th,\parallel}$, where $v_{th,\parallel}$ is the parallel thermal speed of the proton velocity distribution, and for simplicity, substitute for $\omega(k_\parallel)$ using the dispersion relation of Alfv\'en waves \citep[e.g.,][]{Gary1993}: $\omega(k_\parallel)=k_{\parallel}v_A$, \begin{equation} \label{equ:kc} k_c=\frac{\Omega_i}{v_A+v_{th,\parallel}}\equiv\frac{1}{d_i+\sigma_i}. \end{equation} \noindent Here, $\sigma_i$ is the pseudo-gyroscale, defined as $v_{th,\parallel}/\Omega_i$ using the parallel proton temperature, $T_{i,\parallel}$, which we distinguish from the typical definition of the ion gyroscale, $\rho_i$. The waves do not necessarily need to be parallel-propagating for resonance to occur; as long as there is a large enough $k_\parallel$ component, a wave can resonate with the proton population, even if it also has a significant $k_\perp$ component. If there is a substantial population of AICs in the solar wind, then we may expect the spectral break to occur at the scale $1/k_c$. 
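As a concrete illustration, Equation (\ref{equ:kc}) can be evaluated for typical near-Earth solar wind conditions and converted to a spacecraft-frame frequency via Taylor's hypothesis. The plasma parameters below are illustrative assumptions, not measurements from our dataset; the sketch is in Python:

```python
import numpy as np

# Physical constants (SI units)
mu0 = 4e-7 * np.pi      # vacuum permeability [H/m]
kB = 1.380649e-23       # Boltzmann constant [J/K]
mi = 1.6726219e-27      # proton mass [kg]
qi = 1.6021766e-19      # proton charge [C]

def cyclotron_break_scale(B0, ni, T_par, v_sw):
    """Return d_i, sigma_i, the resonance scale 1/k_c [m], and the
    spacecraft-frame frequency f_kc [Hz] via Taylor's hypothesis."""
    Omega_i = qi * B0 / mi                   # proton gyrofrequency [rad/s]
    vA = B0 / np.sqrt(mu0 * ni * mi)         # Alfven speed [m/s]
    vth_par = np.sqrt(2 * kB * T_par / mi)   # parallel thermal speed [m/s]
    d_i = vA / Omega_i                       # proton inertial length
    sigma_i = vth_par / Omega_i              # pseudo-gyroscale
    inv_kc = d_i + sigma_i                   # 1/k_c, Equation (3)
    f_kc = v_sw / (2 * np.pi * inv_kc)       # Taylor-shifted frequency
    return d_i, sigma_i, inv_kc, f_kc

# Assumed 1 AU fast-wind values: B0 = 5 nT, n_i = 5 cm^-3,
# T_par = 1e5 K, v_sw = 600 km/s
d_i, sigma_i, inv_kc, f_kc = cyclotron_break_scale(5e-9, 5e6, 1e5, 600e3)
```

For these assumed values, $1/k_c$ exceeds $d_i$ by roughly the pseudo-gyroscale, and the corresponding spacecraft-frame frequency falls in the few-tenths-of-a-hertz range where breaks are typically observed.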
Several past studies have explored the physical processes behind the observed spectral steepening by comparing characteristic ion scales with the measured spectral break from \textit{in-situ} data \citep{Leamon1998a,Leamon2000,Smith2001,Markovskii2008,Perri2010,Bourouaine2012,Bruno2014,Chen2014,Roberts2017DirectTurbulence,Wang2018Ion-scaleTurbulence}, or through simulations \citep[e.g.,][]{Ghosh1996SimulationMagnetohydrodynamics,Howes2008KineticPlasmas,Cerri2016Subproton-ScaleSimulations,Franci2016PlasmaSimulations,Franci2017MagneticTurbulence}. However, these studies have produced differing conclusions, and there is currently no consensus on the dominant dissipation mechanism. The difficulty in determining the break scale arises from the fact that the measured scales $d_i$ and $\rho_i$ at 1 AU are linked by the proton perpendicular plasma beta, $\beta_{i,\perp}=n_ik_BT_{i,\perp}/\left(B_0^2/2\mu_0\right)$, \begin{equation} \label{equ:beta} \frac{\rho_i}{d_i}=\sqrt{\beta_{i,\perp}}, \end{equation} \noindent and typically $\beta_{i,\perp}\sim$1, so that these scales are inseparable, except in cases where $\beta_{i,\perp}\ll$1 or $\beta_{i,\perp}\gg$1 \citep[for example, see][]{Chen2014}. Therefore, the spectral break may be associated with different scales, depending on changing solar wind conditions. \subsection{Coherent Helicity Signature at High Frequencies} We can gain a better understanding of the possible dissipation mechanisms by looking at the nature of the fluctuations at these frequencies. The presence of fluctuations with different properties such as polarization will limit the role of certain mechanisms under different conditions. A useful quantity that can be used to diagnose certain types of fluctuations is the magnetic helicity, which characterizes the solenoidal structure of the magnetic field and twistedness of field lines \citep[][see also \citealt{Smith2003MagneticWind,Telloni2013}]{Moffat1978,Woltjer1958a}. 
For solar wind turbulence, the quantity of interest is the fluctuating magnetic helicity density \citep{Matthaeus1982}. A reduced form, $H_m(k)$, can be computed from single-spacecraft measurements, which based on several assumptions \citep{Batchelor1970,Matthaeus1982a,Montgomery1981}, is: \begin{equation} H_{m}(k)=\frac{2\,\text{Im}\{\mathbf{P}_{yz}(k)\}}{k}, \end{equation} \noindent where $\mathbf{P}_{yz}$ is the $y-z$ component of the reduced power spectral tensor of the magnetic field fluctuations in Geocentric Solar Ecliptic (GSE) coordinates \citep[for details on reduced spectra, see][]{Wicks2012}. We define the reduced normalized magnetic helicity, $\sigma_m(k)$, as: \begin{equation} \label{equ:hel} \sigma_m(k)=\frac{k\,H_m(k)}{E_b(k)}\equiv \frac{2\,\text{Im}\left\{\mathbf{P}_{yz}(k)\right\}}{\text{Tr}\left\{\mathbf{P}_{ij}(k)\right\}}. \end{equation} \noindent Here, $E_b(k)$ is the reduced magnetic spectral energy, which is given by the trace of the reduced power spectral tensor: $\text{Tr}\{\mathbf{P}_{ij}\}=\mathbf{P}_{xx}+\mathbf{P}_{yy}+\mathbf{P}_{zz}$. The normalized magnetic helicity gives a dimensionless measure of the polarization of magnetic fluctuations to identify wave modes at a particular frequency in the turbulent spectrum; $\sigma_m$ is zero for linearly polarized waves and $\pm$1 for right- or left-hand circularly polarized fluctuations, respectively. Past studies using a global mean magnetic field have found a lack of coherent helicity at low frequencies in the inertial range, i.e., fluctuating almost randomly between negative and positive values \citep{Matthaeus1982}. However, at ion-kinetic frequencies there is a dominant coherent signature that suggests right-hand polarization for outward propagating fluctuations \citep{Goldstein1994,Leamon1998a,Hamilton2008,Brandenburg2011ScaleWind,Markovskii2015}. 
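To make Equation (\ref{equ:hel}) concrete, a minimal sketch (assuming a Fourier rather than wavelet estimate of the spectral tensor, and the $x$ axis as the sampling direction) recovers $\sigma_m=\pm1$ for a synthetic circularly polarized signal:

```python
import numpy as np

def sigma_m(by, bz, bx=None):
    """Reduced normalized magnetic helicity (Equation 6) at each
    positive FFT frequency, from single-point field components."""
    n = len(by)
    if bx is None:
        bx = np.zeros(n)
    Y, Z, X = (np.fft.rfft(c) for c in (by, bz, bx))
    Pyz = Y * np.conj(Z)                       # y-z cross-spectrum
    trace = np.abs(X)**2 + np.abs(Y)**2 + np.abs(Z)**2
    # guard against empty bins; sigma_m is bounded by +/-1 otherwise
    return 2 * np.imag(Pyz) / np.where(trace > 0, trace, np.inf)

# Synthetic circularly polarized wave: 32 full periods over 1024 samples
t = np.arange(1024)
phase = 2 * np.pi * 32 * t / 1024
sm = sigma_m(np.cos(phase), np.sin(phase))
# sm at the wave's frequency bin is +1; reversing the rotation
# sense of (by, bz) flips the sign
```

With an integer number of periods there is no spectral leakage, so the helicity at the wave's bin is exactly $\pm1$ depending on the rotation sense.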
More recently, wavelet-based studies \citep[][and references therein]{Telloni2012} using the technique first developed by \citet{Horbury2008} for local mean field analysis have been employed \citep[see also,][for more details]{Podesta2009,Forman2011,Podesta2011,Wicks2010}. These studies attribute the right-handed signature to the presence of KAWs propagating at large angles to the local mean field and have also revealed the presence of a weaker left-hand polarized component due to quasi-(anti)parallel propagating AICs \citep{He2011,He2012a,He2012,Podesta2011,Klein2014,Bruno2015,Telloni2015}. From these results, we may interpret the coherent helicity signature first observed by \citet{Goldstein1994} as arising from the dominance of the right-handed component over the left-handed component, implying dissipation of AICs at these frequencies that may be due to ion-cyclotron resonance. In fact, \citet{Bruno2015} showed that transitioning from fast to slow wind in the trailing edge of a fast wind stream (i.e., for decreasing Alfv\'enicity), both signatures weaken and eventually disappear, although the left-handed component is the first to fade completely. However, \citet{Howes2010} showed that KAWs alone can also reproduce the observed helicity signature without the need for cyclotron resonance. In this paper, we present a rigorous analysis of solar wind turbulence at ion-kinetic frequencies using a combined identification of the frequency of the spectral break and the onset of the magnetic helicity signature. We compare these spectral properties of the fluctuating magnetic field with the characteristic plasma scales, $d_i$, $\rho_i$, and $1/k_c$, and attempt to link the coherent helicity signature with the spectral steepening to help identify possible dissipation mechanisms at these frequencies. We use magnetic field spectra at a much higher resolution than previously undertaken so that plasma scales do not vary considerably over the time-series of data used to compute the spectra. 
Our use of a large dataset over the course of a year also enables us to identify how changing solar wind conditions affect possible dissipation mechanisms. We find evidence of proton cyclotron resonance that occurs at least half the time in our studied interval, particularly in the more Alfv\'enic fast wind, and discuss the possible implications for plasma heating at ion-kinetic scales. \section{Data Analysis and Results} For this study, we use data from the \textit{Wind} spacecraft \citep{Acuna1995}, which launched in 1994. It moved permanently to the L1 point in 2004, providing almost 14 years of continuous \textit{in-situ} solar wind measurements. We obtain high-resolution 11 Hz (every 0.092 s) magnetic field measurements in GSE coordinates from the MFI instrument \citep{Lepping1995}, using the calibration of \citet{Koval2013}, and ion moments at a resolution of 92 seconds, including solar wind speed, $v_{sw}$, proton density, $n_i$, and proton temperatures, $T_{i,\parallel}$ and $T_{i,\perp}$, from the SWE instrument \citep{Ogilvie1995}, using the fitting technique described by \citet{Maruca2013}. We pre-process the magnetic field data by removing small data gaps (<10 measurements, about 1 second of data) with linear interpolation, but leave larger gaps present. Similarly, we interpolate over small data gaps (<3 measurements, about 5 minutes) for the plasma moments. We also remove from our analysis any plasma data flagged as having unreliable fitting and manually remove any unphysical and anomalous measurements not identified by flagging. We use an entire year of data from 2012 in our analysis; this large dataset outweighs the presence of a small number of large data gaps, while any smaller gaps that are interpolated should have a minimal impact on our overall results. 
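The gap handling can be sketched as follows; the NaN-based masking and the `max_gap` threshold are our illustrative choices here, not the exact MFI/SWE processing:

```python
import numpy as np

def fill_short_gaps(t, x, max_gap):
    """Linearly interpolate NaN gaps of length <= max_gap samples;
    longer gaps are left as NaN (a simplified stand-in for the
    pre-processing described in the text)."""
    x = x.copy()
    bad = np.isnan(x)
    if not bad.any():
        return x
    # split indices into contiguous runs of good/bad samples
    edges = np.flatnonzero(np.diff(bad.astype(int)) != 0) + 1
    runs = np.split(np.arange(len(x)), edges)
    for run in runs:
        if bad[run[0]] and len(run) <= max_gap:
            # interpolate across this short gap from valid neighbors
            x[run] = np.interp(t[run], t[~bad], x[~bad])
    return x
```

Applied to a series with both a two-sample and a five-sample gap, only the short gap is filled when `max_gap=3`.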
Due to the small amplitude of the turbulent fluctuations at ion-kinetic frequencies, instrumental and spacecraft-induced noise can lead to an artificial flattening of the power spectrum at the highest frequencies. For the MFI instrument, this `noise-floor' is thought to arise from the analog-to-digital conversion of the signal, the spacecraft spin, and spin-tone harmonics. The only past measurement of the noise level of the MFI instrument was by \citet{Lepping1995} (see Figure 3(b) therein), which was conducted on a prototype sensor before launch. To ensure that the amplitudes of power spectra at high frequencies are physical, we first determine the amplitude and frequency-dependence of the MFI noise-floor from in-flight measurements before analyzing solar wind data. We provide details of this `noise-floor' determination in Appendix \ref{sec:appB} and provide this dataset for use in future studies. \subsection{Analysis of Solar Wind Spectra} \label{sec:spectra} To compute solar wind spectra, we employ a continuous wavelet transform (CWT) with a Morlet wavelet of frequency-width, $\omega_0=6$, using the method described by \citet{Torrence1998}. We obtain wavelet coefficients, $W(s,t)$, as functions of the scale, $s$, at which the wavelets are evaluated, and time. We then convert these scales into equivalent Fourier frequencies using $f\approx\omega_0/(2\pi s)$ and calculate components of the reduced power spectral tensor, \begin{equation} \mathbf{P}_{ij}(f,t)=W_i(f,t)W_j^{*}(f,t), \end{equation} \noindent where the asterisk indicates complex conjugation and the indices denote the three GSE coordinates, $i,j=x,y,z$. The power spectral density (PSD) is then: \begin{equation} \label{equ:psd} \text{PSD}(f,t)=\frac{2}{f_s}\text{Tr}\left\{\mathbf{P}_{ij}(f,t)\right\}, \end{equation} \noindent where $f_s=10.87$ Hz is the sampling frequency of the MFI instrument. 
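A minimal sketch of this wavelet PSD estimate, using the Fourier-domain Morlet wavelet of \citet{Torrence1998} for a single field component; the frequency grid and the white-noise example are illustrative, not our analysis configuration:

```python
import numpy as np

def morlet_psd(x, fs, freqs, w0=6.0):
    """Wavelet PSD estimate (Equations 7-8) for one component,
    evaluated at the requested frequencies. Returns PSD(f, t)."""
    n = len(x)
    X = np.fft.fft(x)
    wk = 2 * np.pi * np.fft.fftfreq(n, d=1/fs)   # angular frequencies
    scales = w0 / (2 * np.pi * freqs)            # f ~ w0 / (2 pi s)
    W = np.empty((len(scales), n), dtype=complex)
    for i, s in enumerate(scales):
        # Fourier-domain Morlet wavelet, nonzero for positive freqs
        psi_hat = (np.pi**-0.25) * np.exp(-0.5 * (s*wk - w0)**2) * (wk > 0)
        W[i] = np.fft.ifft(X * np.sqrt(2 * np.pi * s * fs) * psi_hat)
    return (2 / fs) * np.abs(W)**2               # one-sided PSD

# Example: unit-variance white noise at the MFI sampling rate
rng = np.random.default_rng(0)
fs = 10.87
x = rng.standard_normal(4096)
freqs = np.logspace(-1, np.log10(fs / 2.5), 30)
psd = morlet_psd(x, fs, freqs).mean(axis=1)      # time-averaged spectrum
```

With this normalization, unit-variance white noise yields a flat time-averaged spectrum near the expected one-sided level $2\sigma^2/f_s\approx0.18$, and a sinusoid produces a peak at the scale whose equivalent Fourier frequency matches the input frequency. Note that this sketch omits the signal padding discussed below, so scales near the record length are mildly biased.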
We first pad the signal to account for any border effects arising from the finite width of the Morlet wavelet, and then calculate an estimate of the PSD at each frequency for every measurement of the original time-series. After removal of padding, we average the PSD over every 1000 MFI measurements. This averaging improves the accuracy of the amplitude of the power spectrum at each scale and results in one spectrum for every 92 s of data, which is the cadence of the SWE instrument. We take the time-stamp of each 92 s spectrum as the middle of the time-series used to produce that spectrum. Finally, we interpolate the time-series of plasma measurements onto the time-series of averaged PSD estimates to associate one measurement of the ion moments with each 92 s power spectrum. We note that the length of time over which we average the spectra is shorter than the correlation time of solar wind turbulence, and so the assumptions of stationarity and ergodicity do not hold at low frequencies \citep{Matthaeus1982b,Perri2010a}. As such, many of the usual results of turbulence are not recovered; for example, the spectra do not converge to the typical $f^{-5/3}$ power law expected in the inertial range. However, we are attempting to measure turbulent behavior at ion-kinetic frequencies (0.1-5.5 Hz) and not in the inertial range. At these high frequencies, a larger number of wavelengths are sampled at these smaller scales during the advection of the turbulence past the spacecraft, and therefore, the stationarity and ergodicity conditions are satisfied in our dataset for our frequency range of interest. \subsection{Estimation of the Break Frequency} \label{sec:break} To estimate the break frequency of each spectrum, $f_b$, we fit the PSD to the following linear function: \begin{equation} \log_{10}(\text{PSD})=m\,\log_{10}(f)+c, \label{equ:fit} \end{equation} \noindent where $m$ is the gradient of the line, or spectral exponent. 
To accommodate for greater uncertainty in the spectra at low frequencies, we fit this function to the power spectra using windows in frequency that increase in width in logarithmic space towards lower frequencies, giving us a value for $m$ for each window. The frequencies, $f$, included in each window for fitting Equation (\ref{equ:fit}) to the spectrum are given by: \begin{equation} \log_{10}{(f_m)}-0.1j\le\log_{10}{(f)}\le\log_{10}{(f_m)}, \end{equation} \noindent where $f_m$ is the maximum frequency within the window and the index, $j=1,2,3,...$, increments by one for each successive window so that the term $0.1j$ widens the window as $j$ increases. For each successive window, we set: \begin{equation} \log_{10}{(f_{m,j+1})}=\log_{10}{(f_{m,j})}-0.1j/20, \end{equation} \noindent shifting the windows to lower frequencies as $j$ increases. The division by 20 in the last term allows us to overlap the windows and provide a sufficient number of fits for $m$ over the frequencies at which we evaluate the power spectra. We continue our windowing process along the spectrum as long as $\log_{10}{(f_m)}>-1$, giving us a total of 26 windows. The center frequency of each window, which we associate with a value of $m$, is taken as the median of the frequencies in that window. \begin{figure} \centering \includegraphics[width=0.425\textwidth]{Figure1v3} \caption{$Top$: An example 92 s solar wind magnetic field power spectrum, in black. The blue line is the MFI noise-floor from Appendix \ref{sec:appB}, the red line is the noise-floor multiplied by a signal-to-noise ratio of 10, and the red-dashed line is the noise cut-off frequency, $f_{noise}$ (see main text). $Bottom$: Results from the fitting of the function (Equation \ref{equ:fit}) to the spectrum, showing the spectral exponent, $m$, for each window in our fitting process. Error bars show the root-mean-square error of the fitting. 
The black dashed-line is our estimated break frequency, $f_b$.} \label{fig:1} \end{figure} \begin{figure*} \begin{center} \includegraphics[scale=0.35]{Figure2Revisedv2} \caption{July 2012 time series of (a) the components of the magnetic field, $\bf{B}$, smoothed using a 51-point median filter, (b) the solar wind speed, $v_{sw}$, (c) the solar wind proton density, $n_i$, and (d) the proton perpendicular plasma beta, $\beta_{i,\perp}$. In panel (d), the red line indicates $\beta_{i,\perp}=1$, where $\rho_i=d_i$ and therefore $1/k_c\simeq\rho_i+d_i=2\rho_i=2d_i$, from Equations (\ref{equ:kc}) and (\ref{equ:beta}), assuming $\sigma_i\simeq\rho_i$. (e) Contour plot of consecutive 92 s solar wind magnetic field spectra. The white areas indicate large data gaps or data with frequencies $f\geq f_{noise}$. We show the characteristic plasma scales, $1/k_c$, $d_i$, and $\rho_i$, converted to frequencies using Taylor's hypothesis (see main text) as the solid green, red, and black lines, respectively. (f) Contour plot of the spectral exponent, $m$, to the corresponding power spectra in panel (e). We also plot $f_{kc}$ in red and the estimated $f_b$ in black for comparison, which we smooth here by a 21-point median filter to improve visualization of the plot.} \label{fig:2} \end{center} \end{figure*} We show in the top panel of Figure 1 an example 92 s spectrum from July 2012 and in the bottom panel, our results for $m$ from our fitting process. We see a change in $m$ from about -1.2 to -3.8 from low to high frequencies, resulting from the transition between the power laws for the inertial range and the ion-kinetic range. This transition is not a simple step-function because of the finite width of the fitting window at the frequency of the spectral break. To determine the width of this transition, we calculate two frequencies that bound either side of it. 
We identify the first, $f_1$, when the difference between two successive values for $m$ exceeds the threshold $\left|m_{j+1}-m_j\right|\ge0.05$, and the second, $f_2$, as the frequency with the minimum value for $m$. We then estimate the break frequency $f_b$ for each spectrum as $f_b=(f_1+f_2)/2$, in a similar fashion to \citet{Chen2014}. The black dashed-line in Figure 1 is our estimate of $f_b$ for the example spectrum using this method, which we see agrees well with the break in the spectrum. Towards higher frequencies in Figure 1, the spectrum flattens and $m$ increases. This flattening is most likely due to the increasing contribution of instrumental noise to the signal at these frequencies. To ensure that our estimated $f_b$ is physical, we determine a cut-off frequency, $f_{noise}$, where the spectrum is equal to a signal-to-noise ratio (SNR) of 10 times our noise-floor estimate (see Appendix \ref{sec:appB}), indicated by the vertical red dashed-line in Figure 1. We neglect an estimate of the break frequency if $f_b\geq f_{noise}$. Close to the Nyquist frequency, there is a second decrease in the spectral exponent, which we attribute to artifacts of the CWT. To test the robustness of our automated fitting procedure and method to calculate $f_b$, we first apply it to consecutive 92 s power spectra over the course of one month, using data from July 2012. Panels (a-d) in Figure 2 show time-series of the components of the magnetic field, $\bf{B}$, the solar wind speed, $v_{sw}$, proton density, $n_i$, and proton perpendicular beta, $\beta_{i,\perp}$, respectively. We smooth $\bf{B}$ using a 51-point median filter here for visual purposes to emphasize the sectoral structure of the interplanetary magnetic field from the numerous crossings of the heliospheric current sheet, highlighted by the changing sign of the $B_x$ and $B_y$ components. We see that $v_{sw}$ varies between 300 and 700 km/s and $n_i$ from less than 1 cm$^{-3}$ to almost 35 cm$^{-3}$. 
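A simplified stand-in for the break-frequency procedure of Section \ref{sec:break} can be demonstrated on a synthetic two-power-law spectrum. Unlike our widening-window scheme, this sketch uses fixed-width logarithmic windows and locates the break where the local slope first crosses midway between its low- and high-frequency values; the window width and frequency grid are illustrative:

```python
import numpy as np

def estimate_break(freqs, psd, width=0.2):
    """Local spectral slope m(f) from least-squares fits of
    log10(PSD) vs log10(f) in sliding windows of fixed logarithmic
    width (dex); the break is taken where m first drops halfway
    between its low- and high-frequency asymptotic values."""
    logf, logp = np.log10(freqs), np.log10(psd)
    centers, slopes = [], []
    for c in np.arange(logf.min() + width/2, logf.max() - width/2, 0.02):
        mask = np.abs(logf - c) <= width/2
        m_win, _ = np.polyfit(logf[mask], logp[mask], 1)
        centers.append(c)
        slopes.append(m_win)
    m = np.array(slopes)
    m_mid = 0.5 * (m[:5].mean() + m[-5:].mean())
    i = int(np.flatnonzero(m <= m_mid)[0])   # first window past midpoint
    return 10**centers[i], m

# Synthetic spectrum: f^{-5/3} breaking to f^{-2.8} at 0.4 Hz
freqs = np.logspace(-2, 0.7, 300)            # 0.01 to 5 Hz
fb_true = 0.4
psd = np.where(freqs < fb_true,
               freqs**(-5/3),
               fb_true**(-5/3 + 2.8) * freqs**(-2.8))
fb_est, m = estimate_break(freqs, psd)
```

On this noise-free spectrum, the recovered slopes approach $-5/3$ and $-2.8$ at the two ends, and the estimated break lies close to the true 0.4 Hz, smeared only by the finite window width.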
There are several periods, often during fast wind intervals, where $\beta_{i,\perp}\sim1$. At other times $\beta_{i,\perp}$ typically does not exceed unity and reaches a minimum value of almost $1\times10^{-3}$. The spacecraft sampled periods of both slow and fast wind, as well as shocks, density enhancements, and transient ejecta, illustrating the variability of the solar wind during this interval. Panel (e) in Figure 2 shows a contour plot of consecutive 92 s power spectra over July 2012, i.e., a time series of spectra over the course of a month. In comparison with panels (a-d), we see that the spectra and therefore, the turbulent processes in the solar wind, depend on the overall plasma conditions, particularly at high frequencies. Here, white areas indicate data that we have removed, either due to the presence of a large data gap or because the frequencies exceed the defined noise-floor cut-off, $f_{noise}$. We also show as solid lines the three characteristic plasma scales, $1/k_c$, $d_i$, and $\rho_i$, in green, red, and black, respectively. We plot these three scales as frequencies assuming Taylor's hypothesis: $f_L=v_{sw}/(2\pi l)$, where $l$ is the appropriate length scale. According to panel (e), there are several periods during which $f_{noise}<f_L$, emphasizing the importance of our noise-floor treatment. In Figure 2(f) we show a contour plot of the spectral exponent, $m$, versus frequency for each corresponding spectrum in panel (e), along with our estimated $f_b$ and $f_{kc}$ in black and red, respectively, for comparison. We note that the break frequencies are discretized by the scales of the wavelets and hence, the windowing process in our fitting procedure. We discard values where $f_b\geq f_{noise}$, and also $f_b\leq0.1$ Hz. 
This second condition allows us to avoid times when the amplitude of fluctuations is so low that a physical break between two power laws is obscured by noise, and therefore, an estimate for $f_b$ by our automated method is unreliable. We smooth $f_b$ here only for this Figure using a 21-point median filter. We find that our fitting procedure provides an accurate estimate of $f_b$ for the $\sim$29,000 spectra from July 2012, since $f_b$ agrees well with the break in the spectrum from visual inspection of panel (f). \begin{figure*} \begin{center} \includegraphics[scale=0.8]{Figure3_Final} \caption{(a-c) Histograms for 2012 of the estimated break frequency, $f_b$, versus the three characteristic plasma scales, converted into frequencies using Taylor's hypothesis - $f_L$ represents $f_{kc}$, $f_{di}$ and $f_{\rho i}$, for each row respectively. (d-f) The corresponding results for only slow wind (<400 km/s) intervals. (g-i) The corresponding results for only fast wind (>500 km/s) streams. The color-bar represents the column-normalized number of spectra. The black dashed lines represent $f_b=f_L$ and similarly, the red dashed lines are $f_b=f_L\;\sqrt{2}$ and $f_b=f_L/\sqrt{2}$, which give the resolution of the wavelet transform about the line $f_b=f_L$.} \label{fig:3} \end{center} \end{figure*} We now compare the three plasma scales as frequencies, $f_L$, with $f_b$, where $L=1/k_c$, $d_i$, and $\rho_i$, and extend our analysis to a year of data from 2012. We calculate $\sim$344,000 spectra, estimate $f_b$ for each spectrum, and compare it to the corresponding values for the characteristic plasma scales, $f_L$. Figure 3 shows two-dimensional histograms for $f_b$ with $f_{kc}$, $f_{di}$, and $f_{\rho i}$ in the top, middle, and bottom rows, respectively. We show the results for all data and then separate according to slow ($v_{sw}$<400 $\text{km s}^{-1}$) and fast wind ($v_{sw}$>500 $\text{km s}^{-1}$) in the left, middle, and right columns, respectively. 
Separating by wind speed allows us to test for systematic effects due to large-scale solar wind stream structure. We normalize each column of the binned data in each plot by the maximum number of spectra in a bin for that column, highlighting the most probable $f_b$ measured as a function of $f_L$. We neglect values for $f_b$ when $f_b\geq f_{noise}$ or $f_b\leq0.1$ Hz, and omit bins with $\leq$10 spectra to avoid under-sampling. In each panel, the black dashed lines give the line $f_b=f_L$ and similarly, the red dashed lines are $f_b=f_L\;\sqrt{2}$ and $f_b=f_L/\sqrt{2}$, which indicate the resolution of the wavelet transform about the line $f_b=f_L$ due to the finite width of the Morlet wavelet in frequency space \citep[i.e., the \textit{e}-folding frequency, see][]{Torrence1998}. To quantify any relationship between $f_b$ and $f_L$, we conduct a statistical analysis using this year of data. We first calculate the Pearson correlation coefficient, \begin{equation} \label{equ:coeff} R(f_b,f_L)=\frac{1}{N-1}\sum\limits_{i=1}^{N}\left(\frac{f_{b,i}-\mu_b}{\sigma_b}\right)\left(\frac{f_{L,i}-\mu_L}{\sigma_L}\right), \end{equation} \noindent where $\mu$ is the mean and $\sigma$ is the standard deviation. The coefficient $R\in\left[-1,+1\right]$ measures the linear correlation between $f_b$ and $f_L$. A value of $R=\pm1$ indicates a positive or negative linear correlation, respectively, whereas zero indicates no linear correlation. If $\left|R\right|=1$, a linear equation describes the relationship between the variables $f_b$ and $f_L$. 
We also define a residual, $\rho$, which in analogy to the standard deviation is: \begin{equation} \label{equ:res} \rho(f_b,f_L)=\sqrt{\frac{1}{N-1}\sum\limits_{i=1}^{N}\left|f_{b,i}-f_{L,i}\right|^2}, \end{equation} \begin{figure*} \centering \includegraphics[scale=0.8]{Figure4_Final} \caption{Histograms for 2012 of the estimated break frequency, $f_b$, versus the three characteristic plasma scales, converted into frequencies using Taylor's hypothesis - $f_L$ represents $f_{kc}$, $f_{di}$ and $f_{\rho i}$, for each column respectively. The data used are for periods where $0.95\leq\beta_{i,\perp}\leq1.05$. The color-bar represents the column-normalized number of spectra. The black dashed lines represent $f_b=f_L$ and similarly, the red dashed lines are $f_b=f_L\;\sqrt{2}$ and $f_b=f_L/\sqrt{2}$, which give the resolution of the wavelet transform about the line $f_b=f_L$.} \label{fig:4} \end{figure*} \noindent where $\rho\geq0$. The residual gives the difference between our measured $f_L$ and estimated $f_b$; in other words, a value of $\rho$ closer to zero indicates less spread of the data about the line $f_b=f_L$. In our calculations of $R$ and $\rho$, we take the logarithms of both $f_b$ and $f_L$. We place more weight here on the statistical significance of $\rho$ over $R$ since a linear relationship does not necessarily imply that $f_b=f_L$. The correlation coefficients are also intrinsically linked because of the similar definitions of the three plasma scales in Equations (\ref{equ:kc}) and (\ref{equ:beta}), especially when $\beta_{i,\perp}\sim1$. As a final test, we count the number of spectra that lie between the two red dashed lines in each panel of Figure 3 to determine the percentage of the total number of spectra that satisfy $f_b\simeq f_L$, within the \textit{e}-folding frequency. For the total number of spectra, we do not include instances where there are large data gaps, or where we have discarded values for $f_b$. 
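Equations (\ref{equ:coeff}) and (\ref{equ:res}), evaluated on $\log_{10}$ frequencies as described in the text, can be sketched as follows; the synthetic scatter level is an arbitrary illustration:

```python
import numpy as np

def log_correlation_and_residual(fb, fL):
    """Pearson correlation R (Equation 11) and residual rho
    (Equation 12), computed on log10 of the frequencies."""
    x, y = np.log10(fb), np.log10(fL)
    n = len(x)
    R = np.sum((x - x.mean()) / x.std(ddof=1)
               * (y - y.mean()) / y.std(ddof=1)) / (n - 1)
    rho = np.sqrt(np.sum(np.abs(x - y)**2) / (n - 1))
    return R, rho

# Illustrative: fb scattered about fL by ~0.04 dex lognormal noise
rng = np.random.default_rng(1)
fL = np.logspace(-0.8, 0.2, 1000)
fb = fL * 10**(0.04 * rng.standard_normal(fL.size))
R, rho = log_correlation_and_residual(fb, fL)
# R is close to 1 and rho close to the injected 0.04 dex scatter
```

With the `ddof=1` standard deviations and the $1/(N-1)$ prefactor, $R$ reduces to the standard Pearson coefficient, so it can be cross-checked against `np.corrcoef`.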
For instances where we filter data according to slow or fast wind, the total number of spectra we use is only for that filtered dataset. We give the results of our statistical analysis in Table \ref{tab:1}. \begin{deluxetable}{lcccc}[b!] \tablecaption{Correlation coefficients, residuals, and percentages for $f_L$ and $f_b$ from the data shown in Figures 3 and 4.\label{tab:1}} \tablehead{ \colhead{} & \colhead{Plasma Scale} & \colhead{Correlation} & \colhead{Residual} & \colhead{Percent} \\ \colhead{} & \colhead{$L$} & \colhead{$R$} & \colhead{$\rho$} & \colhead{\%} } \startdata & $k_c$ & 0.56 & 0.17 & 52.08 \\ All Data & $d_i$ & 0.52 & 0.25 & 30.19 \\ & $\rho_i$ & 0.34 & 0.43 & 9.14 \\ \hline & $k_c$ & 0.54 & 0.12 & 49.42 \\ Slow & $d_i$ & 0.48 & 0.19 & 27.74 \\ & $\rho_i$ & 0.38 & 0.33 & 6.41 \\ \hline & $k_c$ & 0.58 & 0.06 & 57.01 \\ Fast & $d_i$ & 0.58 & 0.08 & 36.08 \\ & $\rho_i$ & 0.32 & 0.14 & 15.66 \\ \hline & $k_c$ & 0.60 & 0.02 & 51.81 \\ $\beta_{i,\perp}\sim1$ & $d_i$ & 0.61 & 0.04 & 26.44 \\ & $\rho_i$ & 0.61 & 0.04 & 26.42 \\ \vspace{-0.38cm} \enddata \end{deluxetable} For all 2012 data, regardless of wind speed, both $f_{kc}$ and $f_{di}$ have moderate correlations with $f_b$, with values of $R=0.56$ and $R=0.52$, respectively, whereas the correlation for $f_{\rho i}$ is weaker at $R=0.34$. The lowest residual is $\rho=0.17$ for $f_{kc}$, while for $f_{di}$ it is $\rho=0.25$ and for $f_{\rho i}$ it is even higher at $\rho=0.43$. These values show that the cyclotron resonance scale, $1/k_c$, is most closely associated with the spectral break (i.e., closest to $f_b\simeq f_L$) during the interval we study. This finding is supported by 52.08\% of the total number of spectra in our dataset falling within the two red dashed-lines for $f_{kc}$ in Figure 3(a). In contrast, we find only 30.19\% for $f_{di}$ in panel (b) and 9.14\% for $f_{\rho i}$ in panel (c). 
The gyroscale, $\rho_i$, therefore has a poor relationship with the break frequency, suggesting that it is least likely to be associated with the spectral steepening during the interval we study. These findings hold when we separate the data according to wind speed. For slow wind, $f_{kc}$ has the highest correlation coefficient at $R=0.54$ and the lowest residual at $\rho=0.12$. During periods of fast wind streams, the residual for $f_{kc}$ is about half that of slow wind at $\rho=0.06$, the smallest for all three scales. From panels (d) and (g) in Figure 3, we are unable to visually differentiate between the fast and slow wind cases without statistical analysis. Also, comparing panels (d) and (g), we find that 57.01\% of spectra fall within the resolution limit of our wavelet transform for $f_{kc}$ in fast wind, which is higher than for slow wind at 49.42\%. The correlation coefficients for both $f_{kc}$ and $f_{di}$ in fast wind are equal, at $R=0.58$, which are only slightly larger than their slow wind values. Transitioning from slow to fast wind in panels (e) and (h), the percentages of spectra where $f_b\simeq f_{di}$ increase from 27.74\% to 36.08\%. From these values, we see that the relationship between $f_{kc}$ and $f_b$ is maintained even when the large-scale stream structure of the wind varies, but is strongest in fast wind streams. According to Equations (\ref{equ:kc}) and (\ref{equ:beta}), $1/k_c$ will coincide with the larger of the two scales, $d_i$ or $\rho_i$, when $\beta_{i,\perp}\ll1$ or $\beta_{i,\perp}\gg1$, respectively, assuming an isotropic temperature (i.e., $\rho_i\simeq\sigma_i$). This expectation is consistent with observations by \citet{Chen2014} showing that the spectral break occurs at $d_i$ for $\beta_{i,\perp}\ll1$ and at $\rho_i$ for $\beta_{i,\perp}\gg1$, which they note is consistent with a break at $1/k_c$ in both cases. 
However, by definition, when $\beta_{i,\perp}\sim$1, $\rho_i\simeq d_i$ and therefore, $1/k_c\simeq\rho_i+d_i\simeq 2\rho_i\simeq 2d_i$. For periods with $\beta_{i,\perp}\sim1$ as seen in Figure 2(d-f), both $f_{di}$ and $f_{\rho i}$ coincide and $f_{kc}$ is shifted to lower frequencies by about a factor of 2. During these periods, there is a good agreement between $f_{kc}$ and $f_b$. To address what happens when $\beta_{i,\perp}\sim1$ quantitatively and clearly show the difference between $1/k_c$ and $d_i$ or $\rho_i$, we filter our year of data to include only periods where $0.95\leq\beta_{i,\perp}\leq1.05$ and show the corresponding 2D histograms in Figure 4. In addition, the results from our statistical analysis are shown in the bottom panel of Table \ref{tab:1}. We note that we do not remove bins with $\leq$10 spectra here due to the smaller amount of data available for these periods, but this only affects bins furthest from the black dashed-line. Comparing panels (b) and (c) to (a) in Figure 4, we see that our measured $f_b$ is consistently shifted to frequencies lower than $f_{di}$ and $f_{\rho i}$, i.e., the yellow enhancement in panels (b) and (c) is below the black dashed-line, but in panel (a) we see that it is closer to the dashed-line. These plots show that $1/k_c$ is a more likely candidate for the break scale than $d_i$ or $\rho_i$, and we quantify this result by calculating $R$ and $\rho$ for this dataset. The correlation coefficients are the same for $f_{di}$ and $f_{\rho i}$ at $R=0.61$, and almost the same at $R=0.60$ for $f_{kc}$; however, the latter has the lowest residual at $\rho=0.02$, compared to $\rho=0.04$ for $f_{di}$ and $f_{\rho i}$. We note that the statistics for $f_{\rho i}$ improve considerably when considering only periods of $\beta_{i,\perp}\sim$1, and are the same in this case as for $f_{di}$. Again, we find a high percentage of the number of spectra where $f_b\simeq f_{kc}$, at 51.81\%, almost double that for the other two scales. 
When we consider all data in our interval, the results from our statistical analysis for both $f_{kc}$ and $f_{di}$ do not differ significantly, particularly in fast wind, as we see from similar values for $R$ and $\rho$ in Table \ref{tab:1}. From our analysis of periods where $\beta_{i,\perp}\sim1$, we explain this result as being due to the ratio of spectra where $\beta_{i,\perp}<$1 to $\beta_{i,\perp}>1$, which is almost 8 in our dataset. Finally, we conclude that $f_b$ is best associated with $f_{kc}$ and so the spectral break is most likely related to proton-cyclotron resonance. We then explain the correlations with $d_i$ and $\rho_i$ as due to the dependence of $1/k_c$ on both variables, and the fact that $d_i$ and $\rho_i$ are separated only by a factor of $\sqrt[]{\beta_{i,\perp}}$. \subsection{Quantification of the Helicity Signature} To further explore the possible role of proton-cyclotron resonance at ion-kinetic frequencies, we now investigate the nature of the fluctuations at these frequencies. We calculate helicity spectra from successive periods of 92 seconds using the normalized magnetic helicity, $\sigma_m$, from Equation (\ref{equ:hel}). Again, we use Taylor's hypothesis to obtain $\sigma_m$ as a function of frequency instead of wavenumber, giving us one helicity spectrum for each corresponding power spectrum from the previous section. To quantify the relationship between $1/k_c$ and the coherent helicity signature at high frequencies, we devise a method to calculate the helicity signature onset frequency, $f_h$, defined as the threshold frequency at which we see an enhancement in the helicity at ion-kinetic scales. 
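Equation (\ref{equ:hel}) is not reproduced in this excerpt; the sketch below assumes the standard reduced normalized magnetic helicity of Matthaeus \& Goldstein (1982), $\sigma_m(f)=2\,\mathrm{Im}(W_T W_N^*)/(|W_R|^2+|W_T|^2+|W_N|^2)$, and uses an FFT in place of the Morlet wavelet transform used in the paper.

```python
import numpy as np

def normalized_helicity(B_R, B_T, B_N):
    """Reduced normalized magnetic helicity spectrum from RTN components,
    sigma_m(f) = 2 Im(W_T W_N^*) / (|W_R|^2 + |W_T|^2 + |W_N|^2).
    An FFT stands in for the Morlet wavelet transform used in the paper."""
    W_R, W_T, W_N = (np.fft.rfft(b) for b in (B_R, B_T, B_N))
    num = 2.0 * np.imag(W_T * np.conj(W_N))
    den = np.abs(W_R)**2 + np.abs(W_T)**2 + np.abs(W_N)**2
    return num / np.maximum(den, 1e-30)   # guard against empty bins

# A circularly polarized wave gives |sigma_m| = 1 at its frequency
n, f0 = 1024, 32                          # 32 cycles -> exact FFT bin
t = np.arange(n)
B_R = np.zeros(n)
B_T = np.cos(2 * np.pi * f0 * t / n)
B_N = np.sin(2 * np.pi * f0 * t / n)
sigma_m = normalized_helicity(B_R, B_T, B_N)
```

Reversing the sense of polarization (e.g., flipping the sign of $B_N$) flips the sign of $\sigma_m$ at the wave frequency, which is the property the red and blue bands in Figure 6 trace.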
We first fit a Gaussian function to the helicity spectra, \begin{equation} \label{equ:gaussfit} \sigma_m=\frac{1}{\sqrt[]{2\pi}\sigma_{D}}\exp{\left\{-\frac{\left(f-f_p\right)^2}{2\sigma_{D}^2}\right\}}, \end{equation} \noindent where the fitting parameters are the standard deviation, $\sigma_{D}$, and the mean, $f_p$, which corresponds to the frequency of the peak in the helicity signature. We perform the Gaussian fitting in linear space, so that the method is biased towards the peak in helicity at the highest frequencies, i.e., the coherent helicity signature. We show in the top panel of Figure 5 the example power spectrum from Figure 1, along with its corresponding helicity spectrum in the bottom panel, both in black. In the bottom panel, we also plot in red the Gaussian fit to the helicity spectrum using Equation (\ref{equ:gaussfit}). The red dashed-line gives $f_p$ from the fitting, whereas the black and gray dashed-lines are $f_b$ and $f_{noise}$ from before, respectively. To estimate the onset frequency $f_h$, we calculate the full-width at half-maximum of the Gaussian peak using $\Delta f=\sigma_{D}\sqrt{8\ln \left( 2 \right)}$ and then, \begin{equation} \label{equ:fh} f_h=f_p-\Delta f/2. \end{equation} \noindent The minus sign selects the low-frequency edge of the peak, so that $f_h$ bounds the helicity enhancement from below. This method is independent of whether the peak in helicity is negative or positive and allows for automated estimation of both $f_h$ and $f_p$ for $\sim$344,000 helicity spectra. In Figure 5, $f_h$ is given by the blue dashed-line. We see that $f_b$ and $f_h$ are separated by only about 0.1 Hz. From the results of Section \ref{sec:break}, this implies that $f_h$ may also be associated with $f_{kc}$ and suggests that the presence of the helicity signature is related to cyclotron resonance. We further investigate this relationship between $f_b$ and $f_h$ using the same statistical analysis from the previous section.
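The fitting procedure above can be sketched with Equations (\ref{equ:gaussfit}) and (\ref{equ:fh}) as printed, i.e., a normalized Gaussian with only $\sigma_D$ and $f_p$ free; the synthetic spectrum and initial guess are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(f, f_p, sigma_D):
    """Equation (gaussfit): normalized Gaussian with mean f_p, std sigma_D."""
    return np.exp(-(f - f_p)**2 / (2 * sigma_D**2)) / (np.sqrt(2*np.pi) * sigma_D)

# Synthetic helicity spectrum: a coherent peak at 0.5 Hz plus a small
# incoherent wiggle standing in for the fluctuating inertial-range helicity
f = np.logspace(-2, 1, 300)                        # Hz
sigma_m = gauss(f, 0.5, 0.15) + 0.01 * np.sin(40 * f)

p0 = [f[np.argmax(np.abs(sigma_m))], 0.1]          # initial guess from the peak
(f_p, sigma_D), _ = curve_fit(gauss, f, sigma_m, p0=p0)

# Onset frequency from the full-width at half-maximum (Equation fh)
delta_f = sigma_D * np.sqrt(8 * np.log(2))
f_h = f_p - delta_f / 2
```

Fitting in linear space, as in the text, keeps the least-squares cost dominated by the large-amplitude coherent peak rather than the low-level fluctuations elsewhere in the spectrum.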
We also follow up the work of \citet{Markovskii2015} to confirm a relationship between $f_p$ and $f_{\rho i}$ using our dataset. \begin{figure} \begin{center} \includegraphics[width=0.425\textwidth]{Figure5} \caption{$Top$: An example 92 s solar wind magnetic field power spectrum from Figure 1, in black. The light gray line is the MFI noise-floor from Appendix \ref{sec:appB}, the dark gray line is the noise-floor multiplied by a signal-to-noise ratio of 10, and the gray dashed-line is the noise cut-off frequency, $f_{noise}$ (see main text). The black dashed-line is our estimated break frequency, $f_b$, from before. $Bottom$: The corresponding 92 s helicity spectrum in black and the fitting of the Gaussian function (Equation \ref{equ:gaussfit}) to the spectrum in red. The coherent helicity signature at high frequencies is well-represented by the Gaussian peak. From our fitting, we obtain the helicity onset frequency, $f_h$, from Equation (\ref{equ:fh}) and the peak helicity frequency, $f_p$, from the mean of the Gaussian peak, given by the blue and red dashed-lines, respectively.} \label{fig:5} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[scale=0.35]{Figure6Revisedv3} \caption{July 2012 time series of (a) the components of the magnetic field, $\bf{B}$, smoothed using a 51-point median filter to highlight the sectoral structure of the solar wind, and (b) the solar wind speed, $v_{sw}$, repeated from Figure 2. (c) Contour plot of consecutive 92 s reduced normalized magnetic helicity, $\sigma_m$, from July 2012, corresponding to each power spectrum in Figure 2(e). The spectra have been normalized by $1/f_{kc}$, and we plot the line $f_{kc}=1$ in red for reference. We also show the estimated helicity onset frequency, $f_h$, in black. (d) Similarly to panel (c), where the spectra are normalized instead by $1/f_{\rho i}$, and we plot the line $f_{\rho i}=1$ in red for reference. 
We also show the estimated helicity peak frequency, $f_p$, in black. Both $f_h$ and $f_p$ are smoothed here by a 21-point median filter to improve visualization of the plot.} \label{fig:6} \end{center} \end{figure*} We first use our method described above to automatically estimate both $f_h$ and $f_p$ for July 2012 to check that it accurately reproduces the features of the helicity spectra. In Figure 6(a-b) we show again time series of $\bf{B}$ and $v_{sw}$ for July 2012. In addition, panels (c) and (d) show contour plots of $\sigma_m$ for consecutive 92 s spectra over the course of July 2012, where we normalize the frequency of each spectrum by $1/f_{kc}$ in panel (c), and similarly by $1/f_{\rho i}$ in panel (d). We also plot the lines $f_{kc}=1$ and $f_{\rho i}=1$ for reference in red, as well as our estimated $f_h$ and $f_p$ in black, in panels (c) and (d), respectively. We normalize the spectra here only for the Figure, and not in any future analysis. From panel (c), we can see that $f_h$ bounds the enhanced red and blue signature at lower frequencies, as we expect. Also, from panel (d) we see that the peak of the helicity signature is located close to the middle of the helicity signature before the enhancement disappears completely (it should not be located directly in the middle due to the logarithmic scale in frequency). Therefore, we conclude that our method works as required, quantifying the helicity signature accurately. We will now discuss our findings from Figure 6 in more detail. The persistent band of enhancement in $\sigma_m$ at higher frequencies varies between about -0.4 and +0.2. Figure 6(a) suggests that the sectoral structure of the solar wind is likely responsible for this changing sign in helicity over the course of the month, which is also consistent with the findings of \citet{He2011}. From Figure 6(c), we can see that the positive helicity signal is typically weaker in amplitude than the negative signal by a factor of 2.
We currently have no explanation for this finding, but it is an interesting observation that should be explored in another study. We see that when an enhancement in helicity signature is present at high frequencies, $f_h$ is well-correlated with $f_{kc}$. By comparing panels (b) and (c), we find that the helicity enhancement weakens or almost completely disappears during periods of slow solar wind. Both these results show that the findings of \citet{Bruno2015,Telloni2015} apply to large volumes of the solar wind. Also, from Figure 6(d), the peak of this coherent helicity signature is correlated with $f_{\rho i}$, especially in fast wind where the helicity signature is strongest, which is consistent with \citet{Markovskii2015} and \citet{Telloni2015}. While we have not shown a similar plot for $f_{di}=1$, we find that it is not closely associated with either $f_h$ or $f_p$, and confirm this in our subsequent analysis. At lower frequencies than $f_{kc}$, the helicity fluctuates about zero, as expected for the inertial range of solar wind turbulence \citep{Matthaeus1982}, showing either a lack of or no dominant coherent circular polarization of fluctuations. There is an enhanced signature in the helicity that significantly deviates from the characteristic plasma scales between the 12th and 16th July, peaking at around 0.1 Hz (from Figure 6(c), about 0.5 in normalized frequency units). We associate this signature with AICs produced by instabilities from unstable particle distributions. These waves are often Doppler-shifted towards lower frequencies than the spectral break since they typically propagate towards the Sun, in the opposite direction to the turbulent magnetic fluctuations we consider here \citep[e.g., see][]{Tsurutani1994ElectromagneticObservations,Jian2009,Jian2010ObservationsAU,Jian2014,Roberts2015,Roberts2015a,Gary2015,Wicks2016}. To exclude these events from our analysis, we discard data with $f_h\leq$0.2 Hz and $f_p\leq$0.2 Hz. 
\begin{figure*} \centering \includegraphics[scale=0.8]{Figure7_Final} \caption{(a-c) Histograms for 2012 of the estimated helicity onset frequency, $f_h$, versus the three characteristic plasma scales, converted into frequencies using Taylor's hypothesis - $f_L$ represents $f_{kc}$, $f_{di}$ and $f_{\rho i}$, for each row respectively. (d-f) The corresponding results for only slow wind (<400 km/s) periods. (g-i) The corresponding results for only fast wind (>500 km/s) streams. The color-bar represents the column-normalized number of spectra. The black dashed lines represent $f_h=f_L$ and similarly, the red dashed lines are $f_h=f_L\;\sqrt[]{2}$ and $f_h=f_L/\,\sqrt[]{2}$, which give the resolution of the wavelet transform about the line $f_h=f_L$.} \label{fig:7} \end{figure*} At frequencies $f>f_p$, the enhancement disappears, and the helicity returns to a value close to zero. If the trace of the power spectral tensor is increased artificially by instrumental noise, then the helicity will also reduce to zero due to the influence of noise, by definition from Equation (\ref{equ:hel}). In fact, the increasing contribution from noise should not affect the phase contribution to the helicity, but rather just its amplitude. Therefore, despite seeing a return to zero, we do not see the return of the signature to an incoherent one similar to that at low frequencies seen in Figure 6, where $\sigma_m$ oscillates in color between red and blue for opposite polarizations. We do not observe the signal to fluctuate about zero, but rather remain coherent with a value close to zero for $f>f_p$. Therefore, we cannot determine whether this effect is due to the MFI noise-floor or a physical effect. An alternative explanation is aliasing (see Appendix \ref{sec:appB}). Despite this, we find that typically $f_p<f_{noise}$ and therefore take the peak in the helicity signature and hence, $f_p$, as physical. \begin{deluxetable}{lcccc}[b!] 
\tablecaption{Correlation coefficients, residuals, and percentages for $f_L$ and $f_h$ from the data shown in Figures 7 and 8.\label{tab:3}} \tablehead{ \colhead{} & \colhead{Plasma Scale} & \colhead{Correlation} & \colhead{Residual} & \colhead{Percent} \\ \colhead{} & \colhead{$L$} & \colhead{$R$} & \colhead{$\rho$} & \colhead{\%} } \startdata & $k_c$ & 0.48 & 0.09 & 64.29 \\ All Data & $d_i$ & 0.40 & 0.15 & 33.37 \\ & $\rho_i$ & 0.35 & 0.29 & 7.61 \\ \hline & $k_c$ & 0.48 & 0.06 & 64.72 \\ Slow & $d_i$ & 0.39 & 0.10 & 32.71 \\ & $\rho_i$ & 0.38 & 0.21 & 4.97 \\ \hline & $k_c$ & 0.45 & 0.04 & 61.38 \\ Fast & $d_i$ & 0.42 & 0.06 & 35.33 \\ & $\rho_i$ & 0.33 & 0.10 & 14.21 \\ \hline & $k_c$ & 0.46 & 0.01 & 62.29 \\ $\beta_{i,\perp}\sim1$ & $d_i$ & 0.47 & 0.03 & 23.45 \\ & $\rho_i$ & 0.47 & 0.03 & 23.32 \\ \vspace{-0.38cm} \enddata \end{deluxetable} We now extend our analysis to include an entire year of data from 2012 in the same way as Section \ref{sec:break}. In Figure 7 we show histograms in the same format as Figure 3 for $f_L$ against $f_h$ for all data and then separated into periods of slow and fast wind in panels (a-c), (d-f), and (g-i), respectively. In Figure 8 we show in a similar fashion to Figure 4, $f_L$ against $f_h$ for periods where $\beta_{i,\perp}\sim1$. Finally, in Figure 9, we plot histograms for $f_L$ against $f_p$. Here, we discard data with $f_h\geq f_{noise}$ and $f_p\geq f_{noise}$ to ensure that instrumental noise does not affect our results, and data where $f_h\leq0.2$ and $f_p\leq0.2$ Hz, as discussed previously. We provide the results from our statistical analysis for $f_L$ and $f_h$ in Table \ref{tab:3} and for $f_L$ and $f_p$ in Table \ref{tab:4}. We find that $f_{\rho i}$ has the lowest correlations and highest residuals with $f_h$ regardless of wind speed, which is consistent with Figure 7, where the distribution of data deviates significantly from the black dashed-line. 
We conclude that $f_{\rho i}$ is not directly comparable to $f_h$ within our studied interval. When we consider all data, $f_{kc}$ has the highest correlation coefficient of $R=0.48$ and lowest residual of $\rho=0.09$, compared to $R=0.40$ and $\rho=0.15$ for $f_{di}$. We find that 64.29\% of the total number of spectra fall within the two red dashed-lines for $f_{kc}$ in Figure 7(a), compared to 33.37\% for $f_{di}$ in panel (b). These percentages are similar regardless of wind speed. Besides similar correlation coefficients of about $R=0.42-0.45$ in fast wind streams, $f_{kc}$ is closer to the relationship $f_L\simeq f_h$ than $f_{di}$, from visual comparison of panels (g) and (h). In particular, we see that $f_{kc}$ has the lowest residual of $\rho=0.04$ during fast wind streams, compared to $\rho=0.06$ in slow wind. Comparing panels (d) and (g), the percentage of spectra within the red dashed-lines for $f_{kc}$ is lower in the fast wind at 61.38\% than in the slow wind where it is 64.72\%, despite a lower residual in the former. As in the previous section, we also filter the data to include only periods where $0.95\leq\beta_{i,\perp}\leq1.05$ and show the corresponding 2D histograms in Figure 8, as well as the results from our statistical analysis in the bottom panel of Table \ref{tab:3}. Our results are similar to those for $f_b$ and $f_L$, where we see clearly that $f_{kc}$ best corresponds to $f_h$. From our statistical analysis, the correlation coefficients are the same for both $f_{di}$ and $f_{\rho i}$ at $R=0.47$, and almost the same at $R=0.46$ for $f_{kc}$, however, the latter again has the lowest residual at $\rho=0.01$, compared to $\rho=0.03$ for the other two scales. Again, we find a high percentage of spectra where $f_h\simeq f_{kc}$ at 62.29\%, almost triple that of the other two scales. 
Following Section \ref{sec:break}, we conclude that the onset of the helicity signature is also related to the cyclotron resonant scale and therefore, both the spectral steepening and coherent helicity signature are likely linked to the same physical process: proton-cyclotron resonance. This signature is most prevalent when the spacecraft measures fast wind streams, and therefore, we conclude that there is a stronger relationship between $f_{kc}$ and $f_h$ during these periods, as we also see for $f_{kc}$ and $f_b$. The lower percentage for $f_{kc}$ in fast wind is likely due to the reduced number of available measurements compared with slow wind periods, as we see in plots in the right column of Figure 7, or because of the limited applicability of wind speed as the only criterion to categorize wind streams. \begin{figure*} \centering \includegraphics[scale=0.8]{Figure8_Final} \caption{Histograms for 2012 of the estimated helicity onset frequency, $f_h$, versus the three characteristic plasma scales, converted into frequencies using Taylor's hypothesis - $f_L$ represents $f_{kc}$, $f_{di}$ and $f_{\rho i}$, for each column respectively. The data used are for periods where $0.95\leq\beta_{i,\perp}\leq1.05$. The color-bar represents the column-normalized number of spectra. 
The black dashed lines represent $f_h=f_L$ and similarly, the red dashed lines are $f_h=f_L\;\sqrt[]{2}$ and $f_h=f_L/\,\sqrt[]{2}$, which give the resolution of the wavelet transform about the line $f_h=f_L$ due to the finite width of the Morlet wavelet in frequency space.} \label{fig:8} \end{figure*} \begin{deluxetable}{lcccc} \tablecaption{Correlation coefficients, residuals, and percentages for $f_L$ and $f_p$ from the data shown in Figure 9.\label{tab:4}} \tablehead{ \colhead{} & \colhead{Plasma Scale} & \colhead{Correlation} & \colhead{Residual} & \colhead{Percent} \\ \colhead{} & \colhead{$L$} & \colhead{$R$} & \colhead{$\rho$} & \colhead{\%} } \startdata & $k_c$ & 0.55 & 0.19 & 16.83 \\ All Data & $d_i$ & 0.51 & 0.12 & 44.79 \\ & $\rho_i$ & 0.34 & 0.15 & 46.62 \\ \hline & $k_c$ & 0.56 & 0.11 & 19.52 \\ Slow & $d_i$ & 0.49 & 0.07 & 46.00 \\ & $\rho_i$ & 0.41 & 0.11 & 39.82 \\ \hline & $k_c$ & 0.52 & 0.08 & 14.15 \\ Fast & $d_i$ & 0.53 & 0.05 & 41.86 \\ & $\rho_i$ & 0.28 & 0.06 & 51.74 \\ \vspace{-0.38cm} \enddata \end{deluxetable} Moving now to the peak frequency of the helicity signature and our results presented in Figure 9 and Table \ref{tab:4}, we find that both $f_{kc}$ and $f_{di}$ have similar correlation coefficients with $f_p$ of $R=0.49-0.56$, regardless of wind speed. We can also see little difference when comparing visually the three columns in the Figure for these two scales. However, the lowest residuals are seen for $f_{di}$, giving $\rho=0.07$ and $\rho=0.05$ during slow and fast wind streams, respectively. We find that $f_{\rho i}$ has the lowest correlation coefficients at $R=0.28-0.41$ compared to $f_{kc}$ and $f_{di}$. However, its residuals are comparable to that of $f_{di}$, at $\rho=0.06$ for fast wind and $\rho=0.11$ for slow wind. 
Figure 9 shows that $f_p$ correlates with both $f_{di}$ and $f_{\rho i}$, as expected since they differ only by a factor of $\sqrt[]{\beta_{i,\perp}}$, but there is a constant offset in frequency for $f_{di}$ that is not present for $f_{\rho i}$. When we consider all data, 46.62\% of spectra satisfy $f_p\simeq f_{\rho i}$ compared to 44.79\% for $f_p\simeq f_{di}$, within the \textit{e}-folding frequency. For frequencies $f_{\rho i}>1$ Hz, the most likely value for $f_p$ diverges from the black dashed line in panel (c) of Figure 9, which results in the low values for $R$ with $f_{\rho i}$, and higher correlation with $f_{di}$. However, we find that 39.82\% of spectra in slow wind and 51.74\% in fast wind satisfy $f_p\simeq f_{\rho i}$ within the \textit{e}-folding frequency. The higher percentage in fast wind is likely due to the stronger helicity signature compared to slow wind, making detection easier. The divergence at high frequencies may be caused by under-sampling, because $f_p$ can exceed $f_{noise}$ at these frequencies, but we try to account for this by discarding bins with $\leq$10 spectra. We also see a similar feature in panel (i) of Figure 7 as in panel (i) in Figure 3. Due to the noise-floor, it is difficult to distinguish whether this feature is physical or an artifact. Despite this divergence at high frequencies, we conclude that $\rho_i$ best corresponds with the peak in the helicity signature at ion-kinetic frequencies compared to the other two scales. \begin{figure*} \centering \includegraphics[scale=0.8]{Figure9_Final} \caption{(a-c) Histograms for 2012 of the estimated helicity peak frequency, $f_p$, versus the three characteristic plasma scales, converted into frequencies using Taylor's hypothesis - $f_L$ represents $f_{kc}$, $f_{di}$ and $f_{\rho i}$, for each row respectively. (d-f) The corresponding results for only slow wind (<400 km/s) periods. (g-i) The corresponding results for only fast wind (>500 km/s) streams. 
The color-bar represents the column-normalized number of spectra. The black dashed lines represent $f_p=f_L$ and similarly, the red dashed lines are $f_p=f_L\;\sqrt[]{2}$ and $f_p=f_L/\,\sqrt[]{2}$, which give the resolution of the wavelet transform about the line $f_p=f_L$.} \label{fig:9} \end{figure*} \section{Discussion} Our main result is the correlation of the cyclotron resonance scale, $1/k_c$, with the onset of the spectral steepening of the magnetic field fluctuation power spectrum and a coherent magnetic helicity signature at ion-kinetic scales. The helicity also reaches a maximum at scales comparable to $\rho_i$. Therefore, we suggest that these two signatures are related and result from proton-cyclotron damping of AICs, leading to a steeper power law due to dissipation at these scales. We then explain the resulting helicity signature as due to the residual population of KAWs left behind after the AICs are removed from the turbulent cascade. This cyclotron-resonant dissipation is consistent with the shape of proton distributions observed in the fast wind \citep[e.g.,][]{Tu2001OnCorona,Marsch2001,Tu2002,Marsch2004,Heuer2007DiffusionProtons,He2015,He2015a}. These results hold over most solar wind conditions, but in particular during periods of fast wind streams where the helicity signature is strongest. We find that over the course of 2012, the onset of the coherent helicity signature corresponds to $1/k_c$ for 64.29\% of the time, within the limits of the \textit{e}-folding frequency of our Morlet wavelet. This value does not change significantly when we filter data according to wind speed. Given that we measure $f_h$ with a Gaussian fit, there is no reason for $f_h$ and $f_{kc}$ to correlate well by chance. The onset of the helicity peak determined from the FWHM does not necessarily need to occur at $1/k_c$, and yet we find that the two are closely related. Similarly, we find that 52.08\% of the time the break scale corresponds to $1/k_c$. 
These results imply that cyclotron resonance with protons likely occurs at least half the time in the solar wind at ion-kinetic scales. However, the lower percentage for $f_b\simeq f_{kc}$ indicates that resonance with AICs may not always lead to a sufficient amount of energy being removed from the cascade to result in a spectral steepening at $1/k_c$. Alternatively, it may be due to the higher level of uncertainty in measuring the break scale compared to the onset of the helicity signature. We can also interpret the square of the correlation coefficient, $R^2$, as the fraction of the variance in one parameter that is explained by the other. From our results, this would give somewhat lower percentages (23-31\%) than our first estimates. We do not place too much weight on these correlation coefficients since they can be misleading, especially when considering our results for periods where $\beta_{i,\perp}\sim1$. Therefore, we take our first calculation as a more reliable estimate. The better agreement found in fast wind between $1/k_c$, the break scale, and the onset of the helicity signature suggests that cyclotron damping primarily occurs in fast wind streams, which are typically more Alfv\'enic with a higher population of AICs \citep{Roberts2015,Telloni2016,Lion2016a}. However, we find that the coherent helicity signature disappears or significantly weakens during slow wind periods, in agreement with \citet{Bruno2015}. The dispersion relation for Alfv\'en waves splits into KAWs or AICs at a critical angle to the magnetic field that is dependent on $\beta$ \citep{Gary1986Low-frequencyHelicity}. Therefore, an explanation for the reduction in the prevalence of helicity signatures in the slow wind may be the different $\beta$ in fast and slow wind, affecting how we observe the helicity signature resulting from KAWs. 
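As a quick arithmetic check, the 23-31\% range quoted above follows directly from squaring the extremes of the reported correlation coefficients, $R\approx0.48$-$0.56$:

```python
# R^2 ("variance explained") for the extremes of the reported correlation
# coefficients, reproducing the quoted 23-31% range
r_values = (0.48, 0.56)
variance_explained = [100 * r ** 2 for r in r_values]
print(variance_explained)  # approximately [23.0, 31.4]
```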
Despite the lack of the coherent helicity signature in the slow wind, we still observe a spectral steepening at $1/k_c$, but the agreement is weaker than in the fast wind. The anisotropic nature of plasma turbulence in the solar wind implies a limited role of $k_\parallel$ in the energy cascade in the inertial range, due to a higher amount of power in perpendicular wavenumbers \citep{Horbury2008,Chen2010a,Chen2010,Wicks2010}. Despite this, we find that during the interval of data we study, the break most often occurs at $k\simeq k_c$ and not $kd_i\simeq 1$ or $k\rho_i\simeq1$, as clearly shown during periods where $\beta_{i,\perp}\sim1$. This result is consistent with studies of turbulence at extreme $\beta_{i,\perp}$ \citep[e.g.,][]{Smith2001,Chen2014}. In their study, \citet{Chen2014} rule out the role of $k_c$ since they assume $k_\perp\gg k_\parallel$ at ion-kinetic scales. However, other studies show that the $k_\parallel$ component of the turbulence, while small compared to $k_\perp$, increases around ion-kinetic scales \citep{Bieber1996,Leamon1998a,Dasso2005AnisotropyFluctuations,Hamilton2008,Roberts2015}. Our results suggest that this small $k_\parallel$ component of the turbulence is damped from the cascade, which leads to the observed spectral steepening at these scales. We note that previous studies \citep[e.g.,][]{Markovskii2008,Bourouaine2012,Bruno2014,Chen2014} include an additional $\sin{\theta_{Bv}}$ factor in the definition of the associated break scales in order to account for the anisotropic nature of the turbulence at kinetic scales, where $\theta_{Bv}$ is the angle between the magnetic field, $\bf{B}$, and velocity flow, $\textbf{v}_{sw}$. The inclusion of this factor in our analysis only slightly improves our correlations and lowers our residuals by about 10\% for $f_{di}$ and $f_{\rho i}$. 
Therefore, it is not necessary to include this factor since the agreement of $f_{kc}$ with $f_b$ or $f_h$ is still clearly better than the agreement of $f_{di}\sin{\theta_{Bv}}$ and $f_{\rho i}\sin{\theta_{Bv}}$, showing that we cannot rule out cyclotron damping because of the anisotropy of inertial range turbulence. Recent studies by \citet{Markovskii2015}, \citet{Markovskii2016MagneticTemperature}, and \citet{Markovskii2016TheWind} attribute the coherent helicity signature to two competing processes, one which generates and the other which destroys magnetic helicity: the generation of helicity is due to the increased compressional component of KAW fluctuations at small scales and development of a magnetic field component parallel to the local mean field \citep{Howes2010,TenBarge2012EvidenceSimulations,Markovskii2013MagneticTurbulence,Markovskii2013MagneticTurbulenceb}, while the decrease in helicity arises from the demagnetization of the protons from the magnetic field \citep{Vasquez2012VelocityRegime}. \citet{Markovskii2015} interpret the peak in the helicity as arising from the balance of these two processes. In a later study, \citet{Markovskii2016TheWind} find that this peak is best correlated with the gyroscale modified by the electron beta, $\beta_e\,$: $\rho_i=d_i/\,\sqrt[]{\beta_i+\beta_e}\,$, and is therefore affected by the total plasma pressure. We do not use electron data here; however, if $\beta_i=\beta_e$, then $\beta_e$ will contribute a maximum factor of $1/\,\sqrt[]{2}$, roughly equivalent to the uncertainty in our results from the use of a Morlet wavelet. Therefore, there is no significant difference in our results and overall accuracy by excluding electron data. 
We cannot rule out that a combination of processes may lead to the observed helicity signature, for example, from the increased compressional component of KAWs \citep[e.g.,][]{Howes2010}, or from the presence of magnetosonic/whistler waves \citep{Podesta2011a,Podesta2011}. However, our results suggest the dominant cause for the onset of the observed signature is due to proton-cyclotron resonance with AICs. We do not investigate the origin of these cyclotron-resonant fluctuations, but rather show evidence for their existence and subsequent dissipation. We also see that the coherent helicity signature disappears towards smaller scales than $\rho_i$. The disappearance of the signature at higher frequencies may be due to the demagnetization of protons \citep{Vasquez2012VelocityRegime}, the increasing balance of sunward and anti-sunward energy fluxes at smaller scales \citep{He2012}, aliasing of power \citep{Russell1972,Klein2014}, or instrumental noise. We are unable to determine the cause of the return of the helicity to around zero from this study. Our results also indicate that the transition range following the spectral break in the magnetic field power spectrum often seen in fast wind streams is due to proton-cyclotron resonance. This link between the transition range and cyclotron resonance is consistent with the findings of several other studies \citep{Podesta2009,Bourouaine2010CorrelationsWind,Bruno2014a,Bruno2015,Roberts2017DirectTurbulence}. We note that our results do not rule out the role of other non-linear wave-particle interactions and kinetic non-resonant mechanisms causing dissipation or dispersive effects. In fact, past studies by \citet{Leamon1998,Leamon1998a,Leamon1999,Leamon2000,Smith2012} have shown non-resonant damping (e.g., Landau or transit-time damping) of ions and electrons likely accounts for the remaining $\sim$50\% of dissipation, which is consistent with our findings that 52.13\% of the time cyclotron-resonant damping is occurring. 
\section{Summary and Conclusions} We use magnetic field and particle moment data from the MFI and SWE instruments onboard the \textit{Wind} spacecraft to study the nature of the solar wind turbulence at ion-kinetic scales. We first analyze solar wind data from 2012, investigating the spectral properties of the magnetic field. We use a Morlet continuous wavelet transform to compute the power and normalized magnetic helicity spectra for successive 92 s intervals. To determine whether spectral features are physical at high frequencies, we identify the noise-floor of the MFI instrument using tail-lobe crossings of the Earth's magnetosphere from early 2004, finding it at a higher amplitude than originally predicted. Finally, we use particle data at the same 92 s cadence to calculate the characteristic proton scales, $1/k_c$, $d_i$, and $\rho_i$, and investigate their relationship with the spectral break and coherent helicity signature at ion-kinetic scales. The automated routine we use to analyze solar wind magnetic power and helicity spectra combines both the identification of the break scale and analysis of the properties of the magnetic fluctuations at ion-kinetic scales. This analysis of high-resolution spectra accounts for the variability of the plasma scales under different solar wind conditions, while also processing large volumes of solar wind data. For the first time, we link the spectral break frequency, helicity onset, and cyclotron resonance scale. We expand on past results by investigating both fast and slow wind streams, as well as periods where $\beta_{i,\perp}\sim$1. In agreement with \citet{Bruno2014,Bruno2015,Telloni2015}, we find that the high-frequency spectral steepening in a fast wind stream is best associated with the cyclotron resonance scale, $1/k_c$, which also forms the low-frequency bound of a coherent helicity signature. 
We show, for the first time, that these results hold in general for fast streams and, to a lesser extent, for slow wind, where the helicity signature weakens or disappears completely. We also find that the peak of the helicity enhancement is associated with the ion gyroscale, $\rho_i$, consistent with the findings of \citet{Markovskii2015}, again best seen within fast streams, where the enhancement in the helicity is strongest. Our key result presented here is evidence supporting proton-cyclotron resonant damping as a dissipation mechanism of solar wind turbulence at ion-kinetic scales, occurring at least half the time in the solar wind. This resonance results in the damping of Alfv\'en/ion-cyclotron waves, particularly in the more Alfv\'enic fast wind, leading to the steepening of the magnetic field fluctuation power spectrum. Therefore, we suggest that the AICs are removed from the turbulence at these scales, resulting in a coherent helicity spectrum from the remaining KAWs, which are not cyclotron resonant. We note that we do not speculate about the origin of the cyclotron-resonant fluctuations, but rather show evidence for their existence and dissipation. Further investigative work is ongoing to determine the relative importance of proton-cyclotron resonance for the dissipation of turbulence and subsequent heating of the particle distributions. In particular, we still need to quantify the energy dissipated and the amount of energy that continues to cascade down to electron scales. We leave this work to a subsequent study. Understanding the nature of dissipation of the turbulence in the solar wind will provide us with a deeper understanding of the macroscopic properties of the solar wind and insight into similar processes in other collisionless plasmas. The future Solar Orbiter and Parker Solar Probe missions will also help us to explore these important areas of heliophysics research. \acknowledgments L.D.W. is funded by an STFC Studentship; D.V. 
is supported by an STFC Ernest Rutherford Fellowship, ST/P003826/1; C.J.O. is supported by the STFC consolidated grant to UCL/MSSL, ST/N000722/1. The authors thank R. L. Alexander for providing suitable intervals for our noise-floor analysis and the MFI/SWE instrument teams for provision of the data. The authors also acknowledge A. R. Macneil, G. A. Graham, N. M. E. Kalmoni, O. W. Roberts, C. H. K. Chen, and H. Wu for useful comments and discussions. Data from the WIND spacecraft were obtained from the \href{http://spdf.gsfc.nasa.gov}{SPDF website}. \bibliographystyle{yahapj}
While cleaning up the apartment I found this CD with some pictures from one of our trips to Japan. I am still a bit scared of how it will feel when the time comes that we were actually supposed to be in Japan. Right now we feel OK and have a lot of great things to look forward to at home, but we will seriously miss Japan.
Q: Acestream failed to start on Ubuntu 16.04 LTS

I followed the guide "How to watch Acestream / Sopcast on Ubuntu 16.04 LTS?" to install Acestream on my desktop. The installation was successful, but when I tried to run it I got the following error:

$ acestreamengine --client-console
2017-01-01 17:57:25,595|MainThread|acestream|error during startup
Traceback (most recent call last):
  File "core.c", line 1146, in
  File "core.c", line 48, in
  File "core.c", line 26, in
  File "/usr/share/acestream/lib/psutil-1.2.1-py2.7-linux-x86_64.egg/psutil/__init__.py", line 88, in <module>
  File "/usr/share/acestream/lib/psutil-1.2.1-py2.7-linux-x86_64.egg/psutil/_pslinux.py", line 20, in <module>
  File "/usr/share/acestream/lib/psutil-1.2.1-py2.7-linux-x86_64.egg/_psutil_linux.py", line 7, in <module>
  File "/usr/share/acestream/lib/psutil-1.2.1-py2.7-linux-x86_64.egg/_psutil_linux.py", line 3, in __bootstrap__
ImportError: No module named pkg_resources

I am looking for help to make it work for Kodi. Newbie here, thanks.

A: I can't tell what caused the problem in the first place, but you should reinstall pkg_resources:

sudo apt-get install --reinstall python-pkg-resources
Asterostomula patula Petr., 1950. Status: accepted, according to The Catalogue of Life, 3rd January 2011. Published in: Sydowia 4: 549 (1950). Original name: Asterostomula patula Petr., 1950.
These were the promising words spoken by Alec Baldwin's character to Tina Fey's Liz Lemon as a formal offer to be her mentor. Although she initially declined, the dynamics of their quirky relationship made for excellent TV and also highlighted the importance of fostering a budding mentor-mentee relationship in the workplace. The Harvard Business Review states that "professional services firms live and die by their intellectual capital." Jack couldn't agree more. I would argue that we at DSO feel the same way: each of us, engagement leaders and student consultants alike, brings valuable skills and traits to the organization and the projects we work on. Losing such talent and motivation would be a travesty. However, professional firms continue to face the inevitable problem of attrition, and according to DeLong, Gabarro, and Lees, this has only been accentuated by the hypercompetitive world we live in. Partners are so focused on satisfying clients that they no longer bother much with talent development. With such an uncaring attitude presented to associates at work, they opt for grander opportunities elsewhere, draining the company of knowledge. At the same time, the authors claim that partners have little desire to expend energy teaching associates who are bound to leave the firm anyway. The partners' grievances are supported by the most recent data from the Bureau of Labor Statistics (BLS), showing that the average worker stays at a firm for 4.4 years. That number drops by about 50% when considering Millennials exclusively. A greater desire for faster ascension of the corporate ladder and a competitive market are causing mentoring to fall by the wayside. At DSO, I would like to believe that not only are we making an impact on communities and clients, but also on our student consultants who have chosen to volunteer with us. Thus, our mentorship model dictates that each student deserves a mentor. 
We aim to choose mentors for mentees that align with their stated interests and preferences. With all the diversity each individual brings, our solutions will be just as diverse. Therefore, nurturing each student's interests and skills at DSO only makes our product as a whole better. We want everyone to be enthusiastic and excited about what it is they want to work on. After all, we are choosing to do this out of passion. Give and take can be equally rewarding and makes for a significant contribution to the intellectual and professional development of both parties. Now, DSO does uniquely stand out from a traditional professional service firm, as it is a self-selective group of young professionals and students volunteering their time, and therefore it does not bear the same burdens with its workers that plague most firms. However, I would hope that mentoring does not become passé and that the corporate world continues to promote the vital role a mentor can play in the psyche of a fresh employee.

"Quotes." IMDb. IMDb.com, n.d. Web. 08 Dec. 2014.
DeLong, Thomas J., John J. Gabarro, and Robert J. Lees. "Why Mentoring Matters in a Hypercompetitive World." Harvard Business Review, n.d. Web.
Meister, Jeanne. "Job Hopping Is the 'New Normal' for Millennials: Three Ways to Prevent a Human Resource Nightmare." Forbes. Forbes Magazine, n.d. Web. 05 Dec. 2014.
Crusader was a jet-powered speedboat piloted by John Cobb. The combination of an aerodynamically stable hullform and turbojet propulsion was proposed by Reid Railton, Cobb's adviser. A rocket-powered scale model was tested at Haslar. The full-size design was by Peter du Cane and built by Vospers of Portsmouth. Technical assistance came from Saunders-Roe and Vickers-Supermarine. It cost £15,000 in 1949. It was silver and scarlet in colour and 10 m long. The engine was a de Havilland Ghost Mk 48 centrifugal turbojet provided on loan by the Ministry of Supply at the request of Major Frank Halford, the engine designer. The engine was rated at 5,000 lb of thrust, fed by two scoop inlets forward of the cockpit. The hull was of trimaran form: a main hull with a planing step, and two smaller rear-mounted outriggers. Construction was of birch plywood frames and stringers. The hull was skinned in birch ply covered in doped fabric, with metal skin reinforcement for the planing surfaces. Aircraft-style riveted aluminium was used for the box-section cantilevers to the outriggers. The expectation was that the boat could achieve more than 200 mph (320 km/h). The boat was destroyed and Cobb killed on 29 September 1952 during a world record attempt at Loch Ness, Scotland. Fifty years later, on 5 July 2002, the wreckage of Crusader was discovered by the Loch Ness Project. The site was designated as a scheduled monument in 2005. See also: Water speed record. External links: British Pathe newsreel, "John Cobb's Crusader".
#ifndef SCALEDIALOG_H
#define SCALEDIALOG_H

#include <QDialog>

namespace Ui {
class ScaleDialog;
}

// Dialog backed by the Qt Designer form class Ui::ScaleDialog.
class ScaleDialog : public QDialog
{
    Q_OBJECT

public:
    explicit ScaleDialog(QWidget* aParent = nullptr);
    ~ScaleDialog();

    Ui::ScaleDialog* m_ui;
};

#endif // SCALEDIALOG_H
[math.stackexchange.com, question 711459]

Q: Can anything be said for the topology of a topological monoid?

A topological group is one in which the group operations (the multiplication and inverse) are continuous, or equivalently a group object in $\mathbf{Top}$. Topological groups are uniformisable and hence completely regular and $R_0$. Can anything similar, i.e. with respect to the separation axioms, be said about topological monoids?

A: The space $X=(\Bbb N,+,0)$ with the topology generated by the sets $$A_n=\{0,1,...,n\}$$ is a topological monoid. If $m,n$ are naturals, then the smallest neighborhood of $m+n$ is $A_{m+n}$, and it contains the sum $A_m+A_n$, so addition is continuous. $X$ is $T_0$, but not $T_1$, so it isn't uniformizable.
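The inclusion at the heart of the answer can be spot-checked by brute force. The snippet below is an illustration of the argument (a finite truncation only, so a check rather than a proof); it also verifies the $T_0$-but-not-$T_1$ observation.

```python
# Spot-check of the answer's key inclusion A_m + A_n subset of A_{m+n}
# for the topology on N generated by the sets A_n = {0, 1, ..., n}.
# Finite truncation only -- an illustration, not a proof.

def A(n):
    """Basic open set A_n = {0, 1, ..., n}."""
    return set(range(n + 1))

def sumset(S, T):
    """Elementwise sum {s + t : s in S, t in T}."""
    return {s + t for s in S for t in T}

# continuity of addition at (m, n): the image of A_m x A_n lies in A_{m+n}
addition_ok = all(sumset(A(m), A(n)) <= A(m + n)
                  for m in range(12) for n in range(12))

# T0 but not T1: for m < n, A_m contains m but not n, while every basic
# open set A_k containing n (i.e. with k >= n) also contains m
t0_not_t1 = all((m in A(m)) and (n not in A(m))
                and all(m in A(k) for k in range(n, 2 * n + 1))
                for m in range(5) for n in range(m + 1, 10))
```

Both checks pass on the truncated range, matching the answer's reasoning.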
\section{Introduction} Time discretization of parabolic problems, discretized in space using finite element methods, is a well studied topic; see for example the monograph by Thom\'ee \cite{Thom97}. The analysis for all such methods relies on the satisfaction of the hypotheses of the Lions theorem \cite{Lions61}, which gives existence, uniqueness and stability of the solution of the problem. The classical problem can be cast in the abstract form: find $u\in V$ such that \begin{align} \label{eq:abstract_parabolic} &(\partial_t u, v)_H + a(u,v) = \left<f,v\right>_{V',V}, \\\label{eq:inital_data} &u(0) = u_0 \in H, \end{align} where $V,\,H$ are some Hilbert spaces, with $V$ dense in $H$ and embedded with continuous identity, $\left<\cdot,\cdot\right>_{V',V}$ denotes the duality pairing between $V$ and its dual, and $a(u,v):V\times V \mapsto \mathbb{R}$ is a symmetric bilinear form representing the weak form of a second order differential operator. A key ingredient of the theory is that the spatial operator satisfies G\aa rding's inequality: there are $\alpha>0$ and $\beta \ge 0$ such that for all $v \in V$ there holds \begin{equation}\label{eq:gaarding} a(v,v) \ge \alpha \|v\|_V^2 - \beta \|v\|_H^2. \end{equation} In many situations, for instance in environmental science and meteorology, the initial data are not available; instead some other data in the space-time domain have been collected through measurements. This leads to a data assimilation problem, that is, the problem of incorporating the observations of the physical system into the state of a computational model of the system. Computations cannot be based on the classical theory, since the equation (\ref{eq:inital_data}) cannot be enforced when $u_0$ is not known. It is then an interesting problem in computational mathematics which quantities can be approximated and what the effect of measurement errors on such an approximation is. 
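For instance, the model case of \eqref{eq:abstract_parabolic} is the heat equation with homogeneous Dirichlet conditions: one takes $H = L^2(\Omega)$ and $V = H^1_0(\Omega)$ with the convention $\|v\|_V^2 = \|v\|_H^2 + \|\nabla v\|_{L^2(\Omega)}^2$, and
\[
a(u,v) = \int_\Omega \nabla u \cdot \nabla v \, dx ,
\]
so that $a(v,v) = \|\nabla v\|_{L^2(\Omega)}^2 = \|v\|_V^2 - \|v\|_H^2$ and \eqref{eq:gaarding} holds with $\alpha = \beta = 1$.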
The approximation methods need to take into account the fact that these data assimilation problems are ill-posed, in the sense that a necessary condition for them to be solvable is that the observations indeed come from the system. In other words, it must be assumed a priori that the solution exists, and the mathematical theory concerns only uniqueness and stability. In \cite{BO16}, we studied finite element methods for two data assimilation problems with unknown $u_0$. The two problems differ in the sense that the lateral boundary data for $u$ is either known or unknown. In the first case (\ref{eq:gaarding}) holds, whereas unknown lateral boundary data leads to a failure of \eqref{eq:gaarding}. This again gives rise to very different stability properties. When the lateral boundary data is known, the data assimilation problem is Lipschitz stable in suitable spaces, but the optimal stability is of conditional H\"older type when no information is given on the lateral boundary. Here we restrict our attention to the case with known lateral boundary data, and extend the corresponding results of \cite{BO16} to a fully discrete method. In \cite{BO16} discretization only in space was considered. The fully discrete analysis does not reduce straightforwardly to the semi-discrete case, as demonstrated by the fact that, in order to achieve the optimal convergence rate with respect to the size of the time step, an additional regularization term is needed, see Theorem \ref{th_main} below. There we consider two different asymptotic rates, $\tau = \mathcal O(h)$ and $\tau = \mathcal O(h^2)$, between the size of the finite element mesh $h$ and the time step $\tau$; the analysis under the less restrictive rate $\tau = \mathcal O(h)$ is valid only when the additional regularization is present (the case $\gamma_1>0$ in the theorem). In Section \ref{sec_comp}, we give a computational example showing that the additional regularization is necessary. 
To keep the exposition simple, we assume that the physical system is modelled by the heat equation \begin{equation}\label{heat} \partial_t u -\Delta u = f \quad \mbox{ in } (0,T) \times \Omega, \end{equation} with $u = 0$ on the boundary $\partial \Omega$. Here $\Omega \subset \mathbb{R}^d$ is a connected polyhedral domain. Of course, in the absence of additional information, the equation (\ref{heat}) does not have a unique solution. We assume that measurements of $u$, denoted by $q$, are available in the space-time domain $(0,T) \times \omega$, where $\omega$ is a non-empty, open subset of $\Omega$. We want to solve (\ref{heat}) under the additional constraint that \begin{equation}\label{Mdata} u = q \quad \mbox{ in } (0,T) \times \omega. \end{equation} It is known that if there exists a solution $u$ to the equations (\ref{heat}) and (\ref{Mdata}), then the solution is unique. A convenient way of solving the problem (\ref{heat})-(\ref{Mdata}) is through optimization. Casting the problem in a form where the distance to the measured data in some norm is minimised under the constraint of the heat equation leads to a 4DVAR-type method. Such methods are important in data assimilation for meteorology and environmental science, and we refer to \cite{QJ:QJ340,QJ:QJ49712051912,dimet1986variational} for some results in the applied sciences. Although these methods are widely used and popular tools, there appears to be no rigorous numerical analysis assessing discretisation errors for them. One objective of the present publication is to start filling this gap. We will now discuss the previous mathematical literature on the problem (\ref{heat})-(\ref{Mdata}). We focus on techniques that work in dimensions $1+d$ with $d > 1$, and refer to the paper \cite{Wang2010} and references therein for the $1+1$-dimensional case. 
Our finite element method builds on the stability estimate \cite{Emanuilov1995}, and in a wider context, the literature on continuum stability estimates for parabolic data assimilation (or unique continuation) problems is reviewed in \cite{Isakov2006, Yamamoto2009}. Computational methods for the problem (\ref{heat})-(\ref{Mdata}) go back to \cite{Lattes1967} where the quasi-reversibility method was introduced. Variations of this method for parabolic problems were developed in \cite{Klibanov2006, Klibanov1990, Tadi2002} and in \cite{Becache2015}, and we refer to \cite{Klibanov2013} for a review of the quasi-reversibility method outside the parabolic context. Although for example the papers \cite{Klibanov2006, Becache2015} consider convergence with respect to a Tikhonov type regularization parameter, none of the above papers prove convergence rates with respect to the refinement of a discretization. Proving such a convergence rate is the main novelty of the present paper. Moreover, compared to the previous literature, an attractive feature of our method is that no auxiliary Tikhonov type regularization parameters need to be introduced, the only asymptotic parameters are the size of the finite element mesh in space and the size of the time step. Both the quasi-reversibility method and our method are based on Carleman estimates for the continuous problem. An alternative approach is to derive Carleman estimates directly on the discrete level, see for example \cite{BHR11} where such an approach was used for the closely related null controllability problem for the heat equation. The approach in the present paper has grown out of the study of stabilized finite element methods for unique continuation problems for elliptic equations \cite{Bu13,Bu14, BHL16}. 
Another line of research that appears to be converging to a similar optimization based approach originates from the numerical analysis of the exact controllability of the wave equation \cite{Castro2014,Cindea2013,Cindea2015}. The approach has been applied to stable unique continuation problems for the wave equation \cite{Cindea2015a,Cindea2016} and to the null controllability problem for the heat equation \cite{Muench2016}. Drawing from this line of research, a numerical analysis of the data assimilation problem for the heat equation is in preparation \cite{Muench2016b}, based on the continuous mixed formulation \cite{Muench2016a}. \section{Discrete optimization problem} Following \cite{BO16}, we first discretize \eqref{heat} in space only. Let $\mathcal{T}_h$ be a conforming triangulation of the polyhedral domain $\Omega$. Let $h_K = \mbox{diam}(K)$ be the local mesh parameter and $h = \max_{K\in \mathcal{T}_h} h_K$ the mesh size. We assume that the family of triangulations $\{\mathcal{T}_h\}_h$ is quasi uniform in the sense that there exists a constant $c_1$ such that for all $K \in \mathcal{T}_h$ it holds that $ h_K \leq h \leq c_1 h_K$. Let $V_h$ be the standard space of piecewise affine continuous finite elements satisfying the zero boundary condition, \[ V_h = \{v \in H^1_0(\Omega);\ v \vert_{K} \in \mathbb{P}_1(K), \; \forall K \in \mathcal{T}_h \}. \] We may then write a semi-discrete finite element formulation of \eqref{heat} as follows, find $u \in C^1(0,T; V_h)$ such that \begin{equation}\label{FEM} (\partial_t u , v)+ a(u,v) = (f,v), \quad v \in V_h, \end{equation} where \[ (u,v) = \int_\Omega u v\, dx, \quad a(u,v) = \int_\Omega \nabla u\cdot \nabla v\, dx. \] The idea is then to minimize the distance to the data (\ref{Mdata}) under the constraint of this dynamical system. 
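The optimization idea can be made concrete in a few lines of code. The sketch below is our illustration only and is not the implementation behind the experiments in Section \ref{sec_comp}: for a 1D model problem with implicit Euler time stepping and P1 elements it assembles the symmetric saddle point (KKT) system coupling the primal trajectory to a discrete adjoint, with a misfit term on $\omega$ and a weak Tikhonov-type term on $\nabla u^0$. The lumped (diagonal) observation mass matrix and the synthetic data are likewise assumptions made for this example.

```python
# Illustrative sketch (not the paper's code): assemble and solve a discrete
# first-order optimality (KKT) system for the 1D heat equation on Omega=(0,1),
# P1 elements, observations on omega = (0.25, 0.75).  Assumptions made here:
# lumped observation mass matrix, synthetic data from a forward implicit Euler
# solve, f = 0, and no time-derivative regularization (g1 = 0).
import numpy as np

nx, N, T = 16, 20, 0.1                       # interior nodes, time steps, final time
h, tau = 1.0 / (nx + 1), T / N
x = np.linspace(h, 1.0 - h, nx)              # interior mesh nodes

# P1 mass and stiffness matrices with homogeneous Dirichlet conditions
M = h / 6.0 * (4 * np.eye(nx) + np.eye(nx, k=1) + np.eye(nx, k=-1))
K = 1.0 / h * (2 * np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1))
M_om = np.diag(h * ((x > 0.25) & (x < 0.75)))  # lumped observation "mass" on omega

gM, g0, g1 = 1.0, 1e-6, 0.0                  # misfit / initial / time-derivative weights

# P maps (u^0,...,u^N) to the differences (u^n - u^{n-1})_{n=1..N};
# E selects the blocks n = 1..N; e0 selects the block n = 0
P = np.zeros((N, N + 1))
P[np.arange(N), np.arange(N)] = -1.0
P[np.arange(N), np.arange(1, N + 1)] = 1.0
E = np.hstack([np.zeros((N, 1)), np.eye(N)])
e0 = np.zeros((N + 1, 1)); e0[0, 0] = 1.0

# implicit Euler constraint: B U = 0 (since f = 0)
B = np.kron(P, M) + tau * np.kron(E, K)
# Hessian of the misfit/regularization part with respect to the trajectory U
R = (gM * tau * np.kron(E.T @ E, M_om)
     + g0 * h**2 * np.kron(e0 @ e0.T, K)
     + g1 * tau * np.kron(P.T @ P, K))

# synthetic data: forward implicit Euler from u0(x) = sin(pi x)
u_fwd = [np.sin(np.pi * x)]
for _ in range(N):
    u_fwd.append(np.linalg.solve(M + tau * K, M @ u_fwd[-1]))
U_true = np.concatenate(u_fwd)
q = np.concatenate([M_om @ u for u in u_fwd[1:]])   # stores M_om q^n, n = 1..N

# symmetric KKT system [[R, B^T], [B, 0]] (U, Z) = (r, 0)
KKT = np.block([[R, B.T], [B, np.zeros((N * nx, N * nx))]])
rhs = np.concatenate([gM * tau * np.kron(E.T, np.eye(nx)) @ q, np.zeros(N * nx)])
sol = np.linalg.solve(KKT, rhs)
U = sol[:(N + 1) * nx]                       # reconstructed trajectory u^0, ..., u^N
```

With consistent synthetic data the recovered trajectory matches the forward solution up to the small bias introduced by the regularization, illustrating the Lipschitz stability available when the lateral boundary data is known.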
In order to outline this idea, let us consider the following preliminary Lagrangian functional, \begin{equation}\label{Lagrange_space} \mathcal{L}_0(u,z) := \frac12 \|u - q\|_{L^2((0,T) \times \omega)}^2 + \int_0^T (\partial_t u , z)+ a(u,z) - (f,z) \, dt . \end{equation} Writing the Euler-Lagrange equations for $\mathcal{L}_0$ we arrive at the following problem, find $(u,z)$ such that \begin{align*} \left<\partial_{u} \mathcal{L}_0(u,z), v \right> &= \int_0^T (\partial_t v , z)+ a(v,z) + (u - q,v)_\omega\, dt = 0, \\ \left<\partial_{z} \mathcal{L}_0(u,z), w \right> &= \int_0^T (\partial_t u , w)+ a(u,w) - (f,w) \, dt = 0 \end{align*} for all $v,w$. Here $(\cdot, \cdot)_\omega$ is the inner product on $L^2(\omega)$. Clearly, if $z=0$ and $u$ solves \eqref{FEM} with $u\vert_{(0,T) \times \omega} = q$, then these equations are satisfied, and hence they are consistent with the data assimilation problem that we set. This leads to a first possible approach: discretize this system in time and find the stationary points of the discrete system. A numerical analysis, however, shows that this approach is unlikely to be successful, as the term $(u - q,v)_\omega$ does not seem to give enough stability for the problem to converge, and indeed, our computational examples in Section \ref{sec_comp} verify this. Instead we add certain regularization terms in the fully discrete context that we will describe next. Let $N \in \mathbb N$ and $\tau > 0$ satisfy $N \tau = T$, and define $t_n = n \tau$. Furthermore, define for $u = (u^n)_{n=0}^N \in V_h^{N+1}$, $$ \partial_\tau u^n = \frac{u^n - u^{n-1}}{\tau}, \quad n=1,\dots,N. 
$$ Consider the Lagrangian $\mathcal L : V_h^{N+1} \times V_h^N \to \mathbb R$ defined by \begin{align}\label{def_L} \mathcal L(u,z) &= \frac 1 2 \gamma_M \tau \sum_{n=1}^N \norm{u^n - q^n}_\omega^2 + \frac 1 2 \gamma_0 \norm{h \nabla u^0}^2 + \frac 1 2 \gamma_1 \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau u^n}^2 \\\notag&\quad + \tau \sum_{n=1}^N \left( (\partial_\tau u^n, z^n) + a(u^n, z^n) - (f^n, z^n) \right), \end{align} where, for fixed functions $f \in C(0,T; L^2(\Omega))$ and $q \in C(0,T; L^2(\omega))$, $$ f^n = f(t_n), \quad q^n = q(t_n), \quad n=1,\dots,N. $$ We make the standing assumption that the fixed constants $\gamma_M, \gamma_0$ and $\gamma_1$ satisfy the following \begin{align}\label{gamma_pos} \gamma_M, \gamma_0 > 0 \quad \text{and} \quad \gamma_1 \ge 0. \end{align} Defining the bilinear forms \begin{align* A_1(u,w) &= \tau \sum_{n=1}^N \left( (\partial_\tau u^n, w^n) + a(u^n, w^n) \right), \\ A_2((u,z),v) &= \gamma_M \tau \sum_{n=1}^N (u^n, v^n)_\omega + \gamma_0 (h \nabla u^0, h \nabla v^0) + \gamma_1 \tau \sum_{n=1}^N (\tau \nabla \partial_\tau u^n, \tau \nabla \partial_\tau v^n) \\&\quad + \tau \sum_{n=1}^N \left( (\partial_\tau v^n, z^n) + a(v^n, z^n) \right), \end{align*} the Euler-Lagrange equations for $\mathcal L$ are \begin{align}\label{normal} A_1(u,w) = \tau \sum_{n=1}^N (f^n, w^n), \quad A_2((u,z),v) = \gamma_M \tau \sum_{n=1}^N (q^n, v^n)_\omega. \end{align} We define the seminorms \begin{align* \tnorm{u}_R^2 &= \gamma_M \tau \sum_{n=1}^N \norm{u^n}_\omega^2 + \gamma_0 \norm{h \nabla u^0}^2 + \gamma_1 \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau u^n}^2, \\ \tnorm{u,z}_D^2 &= \norm{z^1}^2 + \norm{z^N}^2 + \tau^2 \sum_{n=2}^N \norm{\partial_\tau z^n}^2 + \tau \sum_{n=1}^N \norm{\nabla z^n}^2 \\&\quad + \norm{h \nabla u^N}^2 + h^2 \tau \sum_{n=1}^N \norm{\partial_\tau u^n}^2 + h^2 \sum_{n=1}^N \norm{\tau \nabla \partial_\tau u^n}^2, \\ \tnorm{v,w}_C^2 &= \tnorm{v}_R^2 + \tau \sum_{n=1}^N \norm{w^n}^2. 
\end{align*} Note that $\tnorm{\cdot}_D$ is, in fact, a norm on $V_h^{2N+1}$. Also, if $\gamma_1 > 0$ then $\tnorm{\cdot}_R$ and $\tnorm{\cdot}_C$ are norms on $V_h^{N+1}$ and $V_h^{2N+1}$, respectively. The system (\ref{normal}) has the following coercivity property. \begin{proposition} \label{prop_coer} There is $C > 0$ such that for all $N \in \mathbb N$, $h > 0$ and $(u,z)$ in $V_h^{2N+1}$ there is $(v,w)$ in $V_h^{2N+1}$ satisfying $$ \tnorm{u}_R^2 + \tnorm{u,z}_D^2 \le C\left( A_1(u, w) + A_2((u, z), v) \right) , \quad \tnorm{v,w}_C \le C \tnorm{u}_R + C \tnorm{u,z}_D. $$ \end{proposition} \begin{proof} We will show first that there is $\alpha > 0$ such that for all $(u,z) \in V_h^{2N+1}$ \def\hat z{\hat z} \begin{align}\label{coer1} \frac 1 2 \left( \tnorm{u}_R^2 + \alpha \tnorm{u,z}_D^2 \right) \le A_1(u, -z + \alpha h^2 \partial_\tau u) + A_2((u, z), u + \alpha \hat z), \end{align} where $\partial_\tau u = (\partial_\tau u^n)_{n=1}^N \in V_h^{N}$ and $\hat z = (\hat z^n)_{n=0}^N \in V_h^{N+1}$ is defined by $\hat z^0 = 0$ and $\hat z^n = z^n$, $n=1,\dots,N$. Observe that $$ \tnorm{u}_R^2 = A_1(u, -z) + A_2((u, z), u). $$ The identity \begin{align}\label{disc_antid} \tau \sum_{n=1}^N (\partial_\tau u^n, u^n) = \frac 1 2 \left( \norm{u^N}^2 - \norm{u^0}^2 \right) + \frac{\tau^2} 2 \sum_{n=1}^N \norm{\partial_\tau u^n}^2 \end{align} is the discrete analogue of $$ \int_0^T (\partial_t u, u)\, dt = \frac 1 2 \left( \norm{u(T)}^2 - \norm{u(0)}^2 \right).$$ To derive (\ref{disc_antid}) we employ the polarization identity \begin{align* \tau (\partial_\tau u^n, u^n) = \norm{u^n}^2 - (u^{n-1}, u^n) = \norm{u^n}^2 - \frac 1 2 \left( \norm{u^{n}}^2 + \norm{u^{n-1}}^2 - \norm{u^n-u^{n-1}}^2 \right), \end{align*} and observe that there is a telescoping type cancellation. 
Using the identity (\ref{disc_antid}) with the bilinear form $(\cdot, \cdot)$ replaced by $a(\cdot,\cdot)$, we have \begin{align* A_1(u, \partial_\tau u) &= \tau \sum_{n=1}^N \left( \norm{\partial_\tau u^n}^2 + a(u^n, \partial_\tau u^n) \right) \\&= \tau \sum_{n=1}^N \norm{\partial_\tau u^n}^2 + \frac 1 2 \left( \norm{\nabla u^N}^2 - \norm{\nabla u^0}^2 \right) + \frac{\tau^2} 2 \sum_{n=1}^N \norm{\nabla \partial_\tau u^n}^2. \end{align*} Observe that if $\alpha \le \gamma_0$ then $-\alpha h^2 \norm{\nabla u^0}^2/2$ is absorbed by $\tnorm{u}_R^2$. We have \begin{align* A_2((u, z), \hat z) &= \gamma_M \tau \sum_{n=1}^N (u^n, z^n)_\omega + \gamma_1 \tau \sum_{n=1}^N (\tau \nabla \partial_\tau u^n, \tau \nabla \partial_\tau \hat z^n) \\&\quad + \tau \sum_{n=1}^N \left( (\partial_\tau \hat z^n, z^n) + \norm{\nabla z^n}^2 \right). \end{align*} The identity (\ref{disc_antid}) gives \begin{align* \tau \sum_{n=1}^N (\partial_\tau \hat z^n, z^n) = \frac 1 2 \norm{z^N}^2 + \frac{\tau^2} 2 \sum_{n=1}^N \norm{\partial_\tau \hat z^n}^2 = \frac 1 2 \norm{z^N}^2 + \frac 1 2 \norm{z^1}^2 + \frac{\tau^2} 2 \sum_{n=2}^N \norm{\partial_\tau z^n}^2. \end{align*} Let us now consider the cross terms. The Poincar\'e inequality gives \begin{align* (u^n, z^n)_\omega \le (4\delta)^{-1} \norm{u^n}_\omega^2 + C \delta \norm{\nabla z^n}^2, \end{align*} and the second term can be absorbed by $\norm{\nabla z^n}^2$ for small $\delta > 0$. The first term is absorbed by $\tnorm{u}_R^2$ for small $\alpha > 0$. For the second cross term, \begin{align* \tau \sum_{n=1}^N (\tau \nabla \partial_\tau u^n, \tau \nabla \partial_\tau \hat z^n) \le (2\delta)^{-1} \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau u^n}^2 + \delta \tau \sum_{n=1}^N \norm{\nabla z^n}^2 \end{align*} and we see that these two terms are absorbed analogously with the above. This finishes the proof of (\ref{coer1}). It remains to show that $$ \tnorm{v,w}_C \le C \tnorm{u}_R + C \tnorm{u,z}_D. 
$$ when $v = u + \alpha \hat z$ and $w = -z + \alpha h^2 \partial_\tau u$. We have \begin{align*} \tnorm{\hat z}_R^2 &= \gamma_M \tau \sum_{n=1}^N \norm{z^n}_\omega^2 + \gamma_1 \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau \hat z^n}^2 \le C \tau \sum_{n=1}^N \norm{\nabla z^n}^2 \le C \tnorm{0,z}_D^2, \end{align*} where the Poincar\'e inequality and the triangle inequality were used for the first and the second term, respectively. Using the Poincar\'e inequality again, we have $$ \tau \sum_{n=1}^N \norm{z^n}^2 \le C \tnorm{0,z}_D^2. $$ The bounds for the terms containing $u$ are trivial. \end{proof} Denote by $N_h$ the dimension of $V_h$. The equations (\ref{normal}) define a square linear system of $(2N+1)N_h$ unknowns, and taking $f^n = 0$ and $q^n = 0$, $n=1,\dots,N$, it follows from Proposition \ref{prop_coer} that $(u,z) = 0$ is the only solution of the corresponding homogeneous system. Thus (\ref{normal}) has a unique solution. \section{A priori error estimates} \begin{proposition} \label{prop_tnorm} Suppose that $\Omega$ is a convex polyhedral domain and that $u$ is in \begin{align} \label{star_space} H^1(0,T; H^1_0(\Omega)) \cap H^2(0,T;L^2(\Omega)). \end{align} Denote by $\norm{\cdot}_*$ the norm in (\ref{star_space}). Let $(u_h,z_h) \in V_h^{2N+1}$ be the solution of (\ref{normal}) with $f = \partial_t u - \Delta u$ and $q = u|_{(0,T) \times \omega}$, and suppose that $f \in C(0,T;L^2(\Omega))$. Then \begin{align*} &\tnorm{\pi_h u - u_h}_R + \tnorm{\pi_h u - u_h, z_h}_D \le C (h+\tau) \norm{u}_*, \end{align*} where $\pi_h u$ is the orthogonal projection defined by \begin{align}\label{pi_ortho} a(\pi_h u, w) = a(u,w), \quad w \in V_h. \end{align} \end{proposition} \begin{proof} We use the shorthand notation $\xi_h = \pi_h u - u_h$. By Proposition \ref{prop_coer} it is enough to show that $$ A_1(\xi_h, w) + A_2((\xi_h, z_h), v) \le C (h + \tau) \tnorm{v,w}_C \norm{u}_*, \quad (v,w) \in V_h^{2N+1}. 
$$ The point values $u^n = u(t_n)$ satisfy $$ (\partial_t u^n , \phi) + a(u^n , \phi) = (f^n, \phi), \quad n = 1,\dots,N,\ \phi \in H^1_0(\Omega). $$ This implies the following consistency relation \begin{align* A_1(u - u_h, w) &= \tau \sum_{n=1}^N \left( (\partial_\tau u^n , w^n) + a(u^n , w^n) \right) - \tau \sum_{n=1}^N (f^n, w^n) \\&= \tau \sum_{n=1}^N (\partial_\tau u^n - \partial_t u^n, w^n). \end{align*} Using also the orthogonality (\ref{pi_ortho}), we get \begin{align*} A_1(\xi_h, w) &= A_1(\pi_h u - u, w) + A_1(u - u_h, w) \\\notag&= \tau \sum_{n=1}^N ((\pi_h - 1)\partial_\tau u^n, w^n) + \tau \sum_{n=1}^N (\partial_\tau u^n - \partial_t u^n, w^n). \end{align*} The Cauchy-Schwarz inequality implies that $A_1(\xi_h, w) \le 2 (I_1 + I_2)^{1/2} \tnorm{0,w}_C$ where \begin{align*} I_1 = \tau \sum_{n=1}^N \norm{(\pi_h - 1)\partial_\tau u^n}^2, \quad I_2 = \tau \sum_{n=1}^N \norm{\partial_\tau u^n - \partial_t u^n}^2. \end{align*} We estimate $I_1$ by using the approximation properties of $\pi_h$, see e.g. \cite[Th. 3.16 and 3.18]{Ern2004}, \begin{align* I_1 &= \tau^{-1} \sum_{n=1}^N \norm{\int_{t_{n-1}}^{t_n} (\pi_h - 1) \partial_t u\, dt}^2 \le \sum_{n=1}^N \int_{t_{n-1}}^{t_n} \norm{(\pi_h - 1) \partial_t u\, dt}^2 \\&\le C h^{2} \int_0^T \norm{\nabla \partial_t u}^2\, dt. \end{align*} For $I_2$ we use Taylor's theorem with the integral form of the remainder, \begin{align* I_2 &= \tau^{-1} \sum_{n=1}^N \norm{\int_{t_{n-1}}^{t_n} \frac {t_n - t} 2\, \partial_t^2 u \, dt }^2 \le \tau^{-1} \sum_{n=1}^N \int_{t_{n-1}}^{t_n} (t_n - t)^2\, dt \int_{t_{n-1}}^{t_n} \norm{\partial_t^2 u}^2 dt \\&\le \tau^2 \int_0^T \norm{\partial_t^2 u}^2 \, dt. \end{align*} Let us now turn to the second bilinear form. We have \begin{align* A_2((\xi_h,z_h),v) &= \gamma_M \tau \sum_{n=0}^N (\pi_h u^n - u^n, v^n)_\omega + \gamma_0 (h \nabla \pi_h u^0, h \nabla v^0) \\&\quad + \gamma_1 \tau \sum_{n=1}^N (\tau \nabla \partial_\tau \pi_h u^n, \tau \nabla \partial_\tau v^n). 
\end{align*} Thus $A_2((\xi_h,z_h),v) \le C (I_3 + I_4 + I_5)^{1/2} \tnorm{v,0}_C$, where \begin{align I_3 &= \tau \sum_{n=0}^N \norm{\pi_h u^n - u^n}_\omega^2 \le h^{2} \tau \sum_{n=0}^N \norm{\nabla u^n}^2 \le C h^{2} \norm{\nabla u}_{H^1(0,T;L^2(\Omega))}^2, \nonumber \\ I_4 &= \norm{h \nabla \pi_h u^0}^2 \le C h^2 \norm{\nabla u}_{H^1(0,T;L^2(\Omega))}^2, \nonumber \\ I_5 &= \tau \sum_{n=1}^N \norm{\nabla \pi_h \tau \partial_\tau u^n}^2 = \tau \sum_{n=1}^N \norm{\int_{t_{n-1}}^{t_n} \nabla \pi_h \partial_t u\, dt}^2 \le \tau^2 \int_0^T \norm{\nabla \partial_t u}^2 dt. \label{eq:useful} \end{align} Here we used the trace inequality in time and the continuity of $\pi_h$. \end{proof} We recall the following variation of \cite{Emanuilov1995} that was proven in \cite{BO16}. \begin{theorem} \label{th_cont_stable} Let $\Omega \subset \mathbb R^d$ be a convex polyhedron, let $\omega \subset \Omega$ be open and non-empty, and let $0 < \delta < T$. Then there is $C > 0$ such that for all $u$ in the space \begin{align} \label{energy_space} H^1(0,T; H^{-1}(\Omega)) \cap L^2(0,T; H_0^1(\Omega)), \end{align} it holds that \begin{align*} &\norm{u}_\delta \le C (\norm{u}_{L^2((0, T) \times \omega)} + \norm{\partial_t u - \Delta u}_{L^2(0, T; H^{-1}(\Omega))}), \end{align*} where $\norm{\cdot}_\delta$ is the norm in $C(\delta, T; L^2(\Omega)) \cap L^2(\delta, T; H^1(\Omega)) \cap H^1(\delta, T; H^{-1}(\Omega))$. \end{theorem} For $u_h = (u_h^n)_{n=0}^N \in V_h^{2N+1}$ we define the linear interpolation \begin{align}\label{interpolation} \tilde u_h(t) = \tau^{-1} \left( (t-t_{n-1}) u^n_h + (t_{n}-t) u_h^{n-1} \right), \quad t \in [t_{n-1},t_n],\ n = 1,\dots,N. \end{align} Observe that $\tilde u_h$ is in the space (\ref{energy_space}) and also in $C(0,T;H^1_0(\Omega))$. We are now ready to prove our main result on the convergence of the stabilized finite element method. 
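As a quick sanity check (our illustration), the discrete integration-by-parts identity (\ref{disc_antid}), used repeatedly in the coercivity argument above, can be verified numerically for an arbitrary vector-valued sequence:

```python
# Numerical spot-check of the discrete integration-by-parts identity
#   tau * sum_n (d_tau u^n, u^n)
#     = (|u^N|^2 - |u^0|^2) / 2 + (tau^2 / 2) * sum_n |d_tau u^n|^2
# for an arbitrary sequence u^0, ..., u^N of vectors in R^m.
import numpy as np

rng = np.random.default_rng(0)
N, m, tau = 13, 7, 0.05
u = rng.standard_normal((N + 1, m))       # u^0, ..., u^N
d = (u[1:] - u[:-1]) / tau                # d_tau u^n, n = 1, ..., N

lhs = tau * float(np.sum(d * u[1:]))      # tau * sum_n (d_tau u^n, u^n)
rhs = 0.5 * (u[-1] @ u[-1] - u[0] @ u[0]) + 0.5 * tau**2 * float(np.sum(d * d))
```

The two sides agree to rounding error; the extra dissipative term $\frac{\tau^2}{2}\sum_{n}\norm{\partial_\tau u^n}^2$ is exactly what distinguishes the discrete identity from its continuous counterpart.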
\begin{theorem} \label{th_main} Let $\omega \subset \Omega \subset \mathbb R^d$ and $\delta > 0$ be as in Theorem \ref{th_cont_stable}. Let $u$, $f$ and $(u_h, z_h)$ be as in Proposition \ref{prop_tnorm} and define $\tilde u_h$ by (\ref{interpolation}). Suppose that $f \in H^1(0,T; L^2(\Omega))$. Furthermore, in the case $\gamma_1 > 0$ suppose that $\tau = \mathcal O(h)$, and in the case $\gamma_1 = 0$ suppose that $\tau = \mathcal O(h^2)$. Then $$ \norm{u - \tilde u_h}_\delta \le C h \left( \norm{u}_* + \norm{f}_{H^1(0,T; L^2(\Omega))} \right). $$ \end{theorem} \begin{proof} Let $e = u - \tilde u_h$, and define the linear form $$ \pair{r, w} = \int_0^T \left( (\partial_t e, w) + a(e,w) \right) dt, \quad w \in L^2(0,T; H_0^1(\Omega)). $$ By Theorem \ref{th_cont_stable} it is enough to show the following two inequalities \begin{align}\label{e_omega} \norm{e}_{L^2((0, T) \times \omega)} &\le C h \norm{u}_*, \\\label{r} \pair{r, w} &\le C h \left( \norm{u}_* + \norm{f}_{H^1(0,T; L^2(\Omega))} \right) \norm{w}_{L^2(0,T; H_0^1(\Omega))}. \end{align} Let us begin with (\ref{e_omega}). We define the projection onto piecewise constant functions $$ \pi_0 v(t) = v(t_n), \quad t \in (t_{n-1},t_n],\ n = 1,\dots,N. $$ Observe that $$ \norm{\pi_0 v - v}_{L^2(0,T)} \le \tau \norm{\partial_t v}_{L^2(0,T)}, \quad v \in H^1(0,T). $$ We have \begin{align*} \norm{e}_{L^2((0, T) \times \omega)}^2 \le C (h^2 + \tau^2)\norm{u}_{H^1(0,T;H^1(\Omega))}^2 + \int_0^T \norm{\pi_0 \pi_h u - \tilde u_h}_\omega^2 dt, \end{align*} and \begin{align*} \int_0^T \norm{\pi_0 \pi_h u - \tilde u_h}_\omega^2 dt &\le \int_{0}^{T} \norm{\pi_0 \pi_h u - \pi_0 \tilde u_h}_\omega^2 dt + \int_0^T \norm{\pi_0 \tilde u_h - \tilde u_h}_\omega^2 dt \\&= \tau \sum_{n=1}^N \norm{\pi_h u^n - u_h^n}_\omega^2 + \sum_{n=1}^N \int_{t_{n-1}}^{t_n} \norm{\pi_0 \tilde u_h - \tilde u_h}_\omega^2 dt. 
\end{align*} Here the first term is bounded by $\tnorm{\pi_h u - u_h}_R^2$, and we use the identity \begin{align}\label{interp_id} \tilde u_h = u^n_h + (t-t_{n}) \partial_\tau u_h^n \end{align} to estimate the second one as follows \begin{align*} \sum_{n=1}^N \int_{t_{n-1}}^{t_n} \norm{\pi_0 \tilde u_h - \tilde u_h}^2 dt &= \sum_{n=1}^N \int_{t_{n-1}}^{t_n} \norm{(t_{n}-t) \partial_\tau u_h^n}^2 dt \le \tau \sum_{n=1}^N \norm{\tau \partial_\tau u_h^n}^2 \\&\le \tau \sum_{n=1}^N \norm{\tau \partial_\tau (\pi_h u^n - u_h^n)}^2 + \tau \sum_{n=1}^N \norm{\tau \partial_\tau \pi_h u^n}^2. \end{align*} As $\tau = \mathcal O(h)$, the first term above is bounded by $\tnorm{\pi_h u - u_h,0}_D^2$, and the second term is bounded by $\tau^2 \norm{u}^2_*$. The inequality (\ref{e_omega}) follows from Proposition \ref{prop_tnorm}. We turn to (\ref{r}), and define the piecewise constant function given by local time averages $$ \overline w(t) = \tau^{-1} \int_{t_{n-1}}^{t_n} w\, dt, \quad t \in (t_{n-1},t_n],\ n = 1,\dots,N. $$ We have \begin{align*} \int_0^T \left( (\partial_t u, w) + a(u,w) \right) dt = \int_0^T (f, w)\, dt = \int_0^T (f - \pi_0 f, w)\, dt + \tau \sum_{n=1}^N (f^n, \overline w), \end{align*} and using the identity (\ref{interp_id}) and the orthogonality (\ref{pi_ortho}), \begin{align*} &-\int_0^T \left( (\partial_t \tilde u_h, w) + a(\tilde u_h, w) \right) dt = -\tau \sum_{n=1}^N (\partial_\tau u_h^n, \overline w) -\int_0^T a(\tilde u_h, \pi_h w)\, dt \\&\quad= -\tau \sum_{n=1}^N (\partial_\tau u_h^n, \overline w) -\tau \sum_{n=1}^N a(u_h^n, \pi_h \overline w) -\sum_{n=1}^N \int_{t_{n-1}}^{t_n} (t-t_n)\, a(\partial_\tau u_h^n, \pi_h w)\, dt. 
\end{align*} As $u_h$ satisfies (\ref{normal}), it holds that \begin{align*} \pair{r,w} &= \int_0^T (f - \pi_0 f, w)\, dt + \tau \sum_{n=1}^N (f^n, \overline w - \pi_h \overline w) -\tau \sum_{n=1}^N (\partial_\tau u_h^n, \overline w - \pi_h \overline w) \\&\quad -\sum_{n=1}^N \int_{t_{n-1}}^{t_n} (t-t_n)\, a(\partial_\tau u_h^n, \pi_h w)\, dt. \end{align*} We have \begin{align*} \int_0^T (f - \pi_0 f, w)\, dt &\le \tau \norm{f}_{H^1(0,T;L^2(\Omega))} \norm{w}_{L^2((0,T) \times \Omega)}, \\ \tau \sum_{n=1}^N (f^n, \overline w - \pi_h \overline w) &\le C h \norm{f}_{H^1(0,T;L^2(\Omega))} \norm{w}_{L^2(0,T; H^1(\Omega))}. \end{align*} Moreover, \begin{align*} \tau \sum_{n=1}^N (\partial_\tau u_h^n, \overline w - \pi_h \overline w) \le C h \norm{u}_{H^2(0,T;L^2(\Omega))} \norm{w}_{L^2(0,T; H^1(\Omega))}, \end{align*} where we used Proposition \ref{prop_tnorm}, after observing that \begin{align*} h^2 \tau \sum_{n=1}^N \norm{\partial_\tau u_h^n}^2 \le \tnorm{u_h - \pi_h u,0}_D^2 + h^2 \norm{u}_*^2. \end{align*} Finally, \begin{align*} \sum_{n=1}^N \int_{t_{n-1}}^{t_n} (t-t_n)\, a(\partial_\tau u_h^n, \pi_h w)\, dt \le \tau \left( \tau \sum_{n=1}^N \norm{\nabla \partial_\tau u_h^n}^2 \right)^{\frac 1 2} \norm{w}_{L^2(0,T; H^1(\Omega))}, \end{align*} and using the triangle inequality and \eqref{eq:useful}, \begin{align*} \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau u_h^n}^2 \le \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau (u_h^n - \pi_h u^n)}^2 + C \tau^2 \int_0^T \norm{\nabla \partial_t u}^2 dt. \end{align*} Observe that \begin{align*} \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau (u_h^n - \pi_h u^n)}^2 \le C \begin{cases} \tnorm{u_h - \pi_h u}^2_R, & \gamma_1 > 0, \\[3mm] \tnorm{u_h - \pi_h u, 0}^2_D, & \tau = \mathcal O(h^2). \end{cases} \end{align*} The inequality (\ref{r}) follows from Proposition \ref{prop_tnorm}. \end{proof} If $\gamma_1=0$ and $\tau = \mathcal O(h)$ then Theorem \ref{th_main} does not predict optimal convergence. 
Indeed, in this case the bound in the last step becomes \[ \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau (u_h^n - \pi_h u^n)}^2 \leq C h^{-1} \tnorm{u_h - \pi_h u, 0}^2_D. \] This then leads to a convergence of order $\mathcal O(h^{\frac12}+\tau^{\frac12})$ using Proposition \ref{prop_tnorm}. \subsection{The case of perturbations in data} Thanks to the Lipschitz stability of Theorem \ref{th_cont_stable}, the extension of the above analysis to the case where the data is perturbed is straightforward. Indeed, assume that instead of $(q^n,f^n)_{n=1}^N$ in \eqref{def_L} we have at our disposal the perturbed data $(\tilde q^n,\tilde f^n)_{n=1}^N$, \[ \tilde q^n = q^n + \delta q^n, \quad \tilde f^n = f^n + \delta f^n \] with $\delta q^n \in L^2(\omega)$ and $\delta f^n \in H^{-1}(\Omega)$. Then a standard perturbation argument leads to results similar to those of Proposition \ref{prop_tnorm} and Theorem \ref{th_main}, but with an additional term of the form $$C \tau^{\frac12} \left(\sum_{n=1}^N \left( \|\delta q^n\|^2_\omega + \|\delta f^n\|^2_{H^{-1}(\Omega)} \right) \right)^{\frac12}$$ on the right-hand side of the error estimates. This is similar to the result one would obtain for a well-posed problem. \section{Computational examples} \label{sec_comp} The main objectives of the computational examples are twofold. \begin{enumerate} \item First we verify that the predicted reduction in convergence order to $\mathcal O(h^{\frac12}+\tau^{\frac12})$ for $\gamma_1=0$ and $\tau = \mathcal O(h)$ indeed takes place, even in a simple model case. \item Then we confirm that the situation is rectified for $\gamma_1>0$. \end{enumerate} The Euler-Lagrange equations (\ref{normal}) form a non-singular, symmetric system of $(2 N + 1) N_h$ linear equations. We emphasize that the system is not positive definite. In principle, it can be solved using off-the-shelf methods, for example the MINRES method \cite{Paige1975}. 
We implemented this straightforward strategy in the case that $\gamma_1 = 0$, and verified that the convergence order in space is that predicted by Theorem \ref{th_main}. For the convergence order in time we verify that failure to meet the condition $\tau = \mathcal{O}(h^2)$ indeed leads to suboptimal convergence. We observe $\mathcal{O}(\tau^{\frac12})$ convergence under refinement of $\tau$ in the regime where $\tau = \mathcal{O}(h)$. In all our computational examples $\Omega$ is the unit interval $(0,1)$, $\omega = (a, 1-a)$, $a = 0.2$, and we use a regular mesh on $\Omega$. Moreover, the function $u$ is of the form \begin{align}\label{u_comp} u(t,x) = e^{-\pi^2 k^2 t} \sin (\pi k x), \quad k = 1,2. \end{align} Computations for $k = 2$ and $T = 0.02$ are summarized in Table \ref{tab_monolithic}. We also verified that the computations diverge when no regularization is introduced, that is, when $\gamma_0 = 0$. In these computations we used the MINRES implementation of SciPy with the default parameters \cite{Jones2001--}, and the initial guess was set to zero. The convergence is typically slow, requiring thousands of iterations. \begin{table}\centering \begin{tabular}{ l | c c c } $h$ & 0.02 & 0.01 & 0.005 \\\hline error & 0.224 & 0.119 & 0.043 \end{tabular} \qquad \begin{tabular}{ l | c c c } $\tau$ & 0.004 & 0.002 & 0.001 \\\hline error & 0.104 & 0.073 & 0.048 \end{tabular} \medskip \caption{Convergence with $\gamma_M = \gamma_0 = 1$ and $\gamma_1=0$ using the MINRES method. The error is $\norm{u(T) - u_h^N}_{L^2(\Omega)}$. {\em Left.} Order $1$ convergence in $h$ with $N=16$. {\em Right.} Order $1/2$ convergence in $\tau$ with $N_h=200$. } \label{tab_monolithic} \end{table} The remaining examples will exploit the structure of (\ref{normal}) to reduce the computational burden. 
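To illustrate the kind of solve involved, the following toy sketch (ours; a small generic saddle-point system, not the actual space-time assembly of (\ref{normal})) applies SciPy's MINRES to a symmetric but indefinite matrix, which is exactly the situation where MINRES applies and conjugate gradients does not:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Symmetric saddle-point system [[A, B^T], [B, 0]]: indefinite but
# non-singular when A is SPD and B has full row rank.
n, m = 50, 10
rng = np.random.default_rng(0)
A = sp.diags([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1])
B = sp.csr_matrix(rng.standard_normal((m, n)))
K = sp.bmat([[A, B.T], [B, None]], format="csr")

b = rng.standard_normal(n + m)
x, info = spla.minres(K, b)          # info == 0 signals convergence

res = np.linalg.norm(K @ x - b) / np.linalg.norm(b)
```

For the full system (\ref{normal}) one would assemble the matrix from the bilinear forms $A_1$ and $A_2$; as noted above, convergence without preconditioning can be slow.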
\subsection{The Euler-Lagrange equations as a system of two coupled heat equations} An attractive feature of the regularization in (\ref{def_L}) is that it acts only on the primal variable $u$. This leads to the one-way coupling in (\ref{normal}), that is, the dual variable $z$ does not appear in the equation involving $A_1$. We next present a method for solving (\ref{normal}) that is based on this one-way coupling. Note that the first equation in (\ref{normal}), that is, \begin{align}\label{heat_u} \tau \sum_{n=1}^N \left( (\partial_\tau u^n, w^n) + a(u^n, w^n) \right) = \tau \sum_{n=1}^N (f^n, w^n), \end{align} is simply a discretization of the heat equation (\ref{heat}). Let us next interpret the second equation in (\ref{normal}) as a discretization of a heat equation for $z$. Observe that, setting $z^{N+1} = 0$, we obtain \begin{align*} \tau \sum_{n=1}^N (\partial_\tau v^n, z^n) = - \tau \sum_{n=1}^N (v^n, \partial_\tau z^{n+1}) - (v^0, z^1). \end{align*} Thus choosing $v^0 = 0$ in (\ref{normal}) for the moment, we see that $z$ satisfies \begin{align}\label{heat_z} &\tau \sum_{n=1}^N \left( - (v^n, \partial_\tau z^{n+1}) + a(v^n, z^n) \right) \\\notag&\quad= \gamma_M \tau \sum_{n=1}^N (q^n - u^n, v^n)_\omega - \gamma_1 \tau \sum_{n=1}^N (\tau \nabla \partial_\tau u^n, \tau \nabla \partial_\tau v^n), \end{align} and this can be interpreted as a discretization of \begin{align*} -\partial_t z - \Delta z = \gamma_M (q - u) 1_\omega. \end{align*} Here $1_\omega$ is the indicator function of $\omega$, that is, $1_\omega(x) = 1$ if $x \in \omega$ and $1_\omega(x) = 0$ otherwise. Note that, when rescaled by $\tau^{-2}$, the second term on the right-hand side of (\ref{heat_z}) is a discretization of $\int_0^T (\nabla \partial_t u, \nabla \partial_t v)\, dt$. Taking now $v^n = 0$, $n=1,\dots,N$, in (\ref{normal}) we get the additional constraint \begin{align*} \gamma_0 (h \nabla u^0, h \nabla v^0) - \gamma_1 \tau (\tau \nabla \partial_\tau u^1, \nabla v^0) - (z^1, v^0) = 0. 
\end{align*} Define $U(\phi)$ to be the solution of (\ref{heat_u}) with $u^0 = \phi$, and $Z(\phi)$ the solution of (\ref{heat_z}) with $z^{N+1} = 0$ and $u = U(\phi)$. Observe that these can be easily computed by time stepping. Furthermore, define the function $$ \mathcal C(\phi, \psi) = \gamma_0 (h \nabla U^0(\phi), h \nabla \psi) - \gamma_1 \tau (\tau \nabla \partial_\tau U^1(\phi), \nabla \psi) - (Z^1(\phi), \psi), \quad \psi \in V_h. $$ Then $(u,z) = (U(\phi), Z(\phi))$ solves (\ref{normal}) if and only if \begin{align}\label{coupling} \mathcal C(\phi, \psi) = 0, \quad \psi \in V_h. \end{align} We will use a gradient descent type method to solve (\ref{coupling}). Starting from an initial guess $\phi_0 \in V_h$, we define the iteration \begin{align}\label{graddesc} (\phi_{m+1}, \psi) = (\phi_m, \psi) -\alpha \mathcal C(\phi_m, \psi), \quad \psi \in V_h, \end{align} where $\alpha > 0$ is a step size. The system (\ref{graddesc}) is a discretization of the differential equation \begin{align}\label{def_Phi} \Phi(0) = \phi_0, \quad (\partial_s \Phi(s), \psi) = -\mathcal C(\Phi(s), \psi), \quad \psi \in V_h, \end{align} and its use to solve (\ref{coupling}) is justified by the following lemma. \begin{lemma} Let $\phi_0 \in V_h$ and define a one-parameter family $\Phi(s)$, $s \ge 0$, in $V_h$ by (\ref{def_Phi}). Let $(u_h, z_h)$ be the solution of (\ref{normal}). Then $\Phi(s)$ converges to $u_h^0$ as $s \to \infty$. \end{lemma} \begin{proof} For each $s \ge 0$ it holds by definition that $u(s) = U(\Phi(s))$ and $z(s) = Z(\Phi(s))$ satisfy (\ref{heat_u}) and (\ref{heat_z}), respectively. Hence \begin{align*} \partial_s \mathcal L(u, z) = (\partial_u \mathcal L, \partial_s u) + (\partial_z \mathcal L, \partial_s z) = \mathcal C(\Phi, \partial_s u^0) = \mathcal C(\Phi, \partial_s \Phi) = -\norm{\partial_s \Phi}^2. 
\end{align*} The equation (\ref{heat_u}) also implies that $$ \mathcal L(u,z) = \frac 1 2 \gamma_M \tau \sum_{n=1}^N \norm{u^n - q^n}_\omega^2 + \frac 1 2 \gamma_0 \norm{h \nabla \Phi}^2 + \frac 1 2 \gamma_1 \tau \sum_{n=1}^N \norm{\tau \nabla \partial_\tau u^n}^2. $$ As $\mathcal L$ is non-negative and decreasing along the family $(u(s), z(s))$, it follows that $\partial_s \mathcal L(u,z) \to 0$ as $s \to \infty$. Hence also $\partial_s \Phi \to 0$ as $s \to \infty$, and the differential equation (\ref{def_Phi}) implies that the limit $\phi_\infty = \lim_{s \to \infty} \Phi(s)$ exists and satisfies (\ref{coupling}). By the discussion preceding the proof, we have $\phi_\infty = u_h^0$. \end{proof} We will use the above gradient descent method in the computational examples below and assume that the initial guess $\phi_0$ is a small perturbation of $u(0)$. Such an assumption can be relevant for many data assimilation applications. Indeed, it is typical that new observations need to be incorporated into the state of the system, and the current state can then be used as an initial guess. \subsection{The effect of regularization on the convergence in $\tau$} \begin{figure} \centering \includegraphics[width=0.75\linewidth]{"fig7b"} \caption{ The effect of regularization on the convergence in $\tau$. The convergence is of order $1/2$ (slope of dashed reference line) when $\gamma_1 = 0$ (data with square markers) and of order $1$ (slope of dotted reference line) when $\gamma_1 = 1$ (data with circle markers). Here $\gamma_M = \gamma_0 = 1$, $h = 10^{-2}$, and the error is $\norm{u(T) - u_h^N}_{L^2(\Omega)}$. } \label{fig:fullsolverconvergencefinal} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{"parameter_study"} \caption{The error for various choices of the constants $\gamma_0, \gamma_1$. Here $\gamma_M=1$, $h=\tau=10^{-2}$ and the error is $\norm{u(T) - u_h^N}_{L^2(\Omega)}$. 
For each $0.1 \le \gamma_0 \le 1.2$, the method is robust for a large range in $\gamma_1$. There is also an optimal value of $\gamma_1$ for each such $\gamma_0$. However, it is mesh dependent and it is not clear if the phenomenon can be exploited in practice. ($\gamma_0 = 0.1$ - dotted line; $\gamma_0 = 0.2$ - dashed line; $\gamma_0 = 0.6$ - dash/dotted line; $\gamma_0 = 1.0$ - dash/doubledotted line; $\gamma_0 = 1.2$ - doubledash/doubledotted line; $\gamma_0 = 1.5$ - filled line.)} \label{fig:parameterstudyfinalgraph} \end{figure} We verified that the presence of the additional regularization in the case $\gamma_1 > 0$ leads to the improved convergence rate in $\tau$ as predicted by Theorem \ref{th_main}. Indeed, in the computations summarized in Figure \ref{fig:fullsolverconvergencefinal}, the convergence is of order $1/2$ when $\gamma_1 = 0$ and of order $1$ when $\gamma_1 = 1$. Here $\gamma_M = \gamma_0 = 1$, $h = 10^{-2}$, $u$ is of the form (\ref{u_comp}) with $k=1$, and $T=0.1$. We used the gradient descent method with the initial guess $\phi_0 = v + h$ where $v$ is the interpolation of $u(0)$ on $V_h$. The step size in (\ref{graddesc}) was taken to be $\alpha = 0.1$ and the iteration (\ref{graddesc}) was terminated when $\norm{z^1}$ started to increase. \subsection{Sensitivity to the choice of $\gamma_0$ and $\gamma_1$.} In all the numerical experiments above we have taken the parameters $\gamma_0$ and $\gamma_1$ to be either one or zero. This was to avoid special effects that can appear due to parameter tuning. In a final numerical experiment we verified that the method is not sensitive to the particular choices of the constants $\gamma_0, \gamma_1 > 0$. The conclusion of the study is that the method is robust for a wide range of choices of $\gamma_0$ and $\gamma_1$, including $\gamma_0=\gamma_1=1$. 
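The solution procedure used in these experiments, i.e. the forward solve (\ref{heat_u}), the backward solve (\ref{heat_z}) and the descent step (\ref{graddesc}), can be sketched as below. This is our own simplified version: one space dimension, $f = 0$, $\gamma_M = \gamma_0 = 1$, $\gamma_1 = 0$, a lumped observation mass matrix on $\omega$, and dense linear algebra; all names and parameter values are illustrative assumptions rather than the code behind the figures:

```python
import numpy as np

# 1D model problem on (0,1): P1 elements on a uniform mesh, gamma_1 = 0.
Nh, N, T = 19, 20, 0.1                     # interior nodes, time steps, final time
h, tau = 1.0 / (Nh + 1), T / N
x = np.linspace(h, 1.0 - h, Nh)

# P1 mass and stiffness matrices (homogeneous Dirichlet conditions)
M = h / 6.0 * (4.0 * np.eye(Nh) + np.eye(Nh, k=1) + np.eye(Nh, k=-1))
K = 1.0 / h * (2.0 * np.eye(Nh) - np.eye(Nh, k=1) - np.eye(Nh, k=-1))
Mw = h * np.diag(((x > 0.2) & (x < 0.8)).astype(float))  # lumped mass on omega

step = np.linalg.inv(M + tau * K)          # backward Euler propagator (small, dense)

def forward(phi):
    """Discrete heat solve (heat_u) with u^0 = phi and f = 0."""
    U = [phi]
    for _ in range(N):
        U.append(step @ (M @ U[-1]))
    return U

def backward(U, Q):
    """Discrete backward solve (heat_z): returns z^1, starting from z^{N+1} = 0."""
    z = np.zeros(Nh)
    for n in range(N, 0, -1):
        z = step @ (M @ z + tau * (Mw @ (Q[n] - U[n])))
    return z

u0 = np.sin(np.pi * x)                     # "true" initial state
Q = forward(u0)                            # synthetic observations q^n = u^n on omega

def grad(phi):
    """Riesz representative of C(phi, .) with gamma_0 = 1, gamma_1 = 0."""
    z1 = backward(forward(phi), Q)
    return np.linalg.solve(M, h ** 2 * (K @ phi)) - z1

phi = u0 + 0.1 * np.sin(3 * np.pi * x)     # perturbed initial guess
g0 = np.linalg.norm(grad(phi))
for _ in range(50):
    phi = phi - 0.05 * grad(phi)           # step (graddesc) with alpha = 0.05
g1 = np.linalg.norm(grad(phi))             # residual of the optimality condition
```

Each descent step costs one forward and one backward heat solve, and the residual of the optimality condition (\ref{coupling}) decreases along the iteration, in line with the lemma above.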
We observed that choosing both parameters large resulted in solutions that were over-regularized and yielded suboptimal accuracy compared to lower values of the parameters. See the filled line of Figure \ref{fig:parameterstudyfinalgraph} for an example. We also observed that there are certain ``sweet spot'' combinations of values of $\gamma_0$ and $\gamma_1$ for which the errors are orders of magnitude smaller than for the neighbouring parameter combinations. These optimal parameter combinations, however, did not appear to be stable under mesh refinement, and it is unclear if this effect can be of any use in practice. The computations are summarized in Figure \ref{fig:parameterstudyfinalgraph}, with particular focus on the parameter interval where the optimal parameter choices appeared. Here $h = \tau = 10^{-2}$ and the other choices are as in the previous example. \bibliographystyle{abbrv}
Where first/birth/natural/real mothers share news & opinions. And vent.

Part 2: The daughter I gave up for adoption had a daughter she gave up for adoption

Continuing the story of a granddaughter lost to adoption, and later found:

Upon recovering from the shock of learning that my daughter, Jane, whom I gave up for adoption, now had a daughter she was planning to give up for adoption, I thought: Well, at least she won't have to go through what I went through. She'll be able to place her daughter in an "open adoption," where she would always know where her daughter was and who her adoptive parents were, and she or I would be available in case the adoption did not work out. It was 1986, and while open adoptions were not as commonplace as they are today, they were possible then, and, I believe, in Wisconsin. (In fact, I think they were pioneered there and in Traverse City, Michigan, and if anybody has any information to add about this, please do!)

So I started talking about open adoption to Jane, but it was falling on deaf ears. Jane said that the father was a young African American man she had met at Burger King, where she was working; they were together for several months, but they had broken up before the baby was born. She casually added that he did not want her to give up the child, and that his mother wanted to raise her. Oh my god, I thought, that is wonderful! She can be raised by her own family! I'll be able to know her too! Jane will never have to be a woman haunted by the loss of her daughter--and the child will not have to be adopted by strangers, genetic strangers.

The ironic twist is that the father's family was from Inkster, Michigan, the community directly next to the town I grew up in, Dearborn, and that is where she would be raised. Dearborn in the Fifties and Sixties was a community run by an out-and-out racist, Mayor Orville L. Hubbard, who made no secret of his racist views.
Dearborn sits directly to the west of Detroit, and during the era of "white flight" from the city center of Detroit, people of all races were moving out of the downtown ghettos to the edges of the city, moving, in other words, into suburbs just like Dearborn. Wikipedia says: Dearborn in the Orville Hubbard years became known nationally as a symbol of racial segregation. Hubbard's longstanding campaign to "Keep Dearborn Clean" was widely understood as a thinly veiled campaign to keep Dearborn white. Hubbard became the most famous segregationist north of the Mason-Dixon line, and when he left office in 1978 [note: he was mayor for 36 years, having been elected 15 times], only 20 African-Americans lived in Dearborn--a city with a population of 90,000.

Frankly, I'm surprised any African Americans lived there then, the atmosphere was so poisonous to them. No, we did not have segregated lunch counters--but no blacks ever tried to eat there. Police stopped non-whites when they drove through town. When a black family in the early Sixties rented a house and moved in, a crowd gathered outside, epithets were yelled, and the police simply stood around, doing nothing, and under orders to do nothing, it would later be learned. Nothing got more violent than the few people who threw vegetables and eggs at the house, but it was not a good scene.

[Photo: Hubbard's statue in Dearborn]

Not everyone in Dearborn was a bigot. Some years up to thirty percent of the population, including my parents, voted against Hubbard, but no one thought anyone else had a chance of winning. Thirty percent against means seventy percent for. The music played on a loudspeaker at campaign events was--and I'm not kidding--"Bye Bye Blackbird," with the clear implication of who the "black bird" was. The only Freedom March of the Sixties in the North was in Dearborn. I was a college student then at Wayne State University, and went to see the marchers as they walked by City Hall in silence.
It was quiet that day; no one jeered, a few people clapped, but mostly it was eerily still and calm as the marchers passed by a small crowd gathered at the corner of Michigan Avenue and Schaefer Road, the center of east Dearborn right by City Hall. Personally, I think most of the people who came out that day were in support of them. I'm sorry I did not join the marchers, but instead stood on the steps of City Hall. A story that's still remembered in journalism circles is that the Time magazine reporter, Ben Cate, was physically hustled out of City Hall a few weeks later when he was researching a story on my home town and the Freedom March.

Not long after that I graduated from college and left Dearborn for good, but on one trip home I happened to take my mother, who still lived there, to the Dearborn Youth Center--a rather huge facility with an indoor roller rink. She was going to some senior citizen event that day. Dearborn had lots of nice frills like that, having the benefit of being the home town of the Ford Motor Company, which paid a lot of tax dollars and funded a lot of extras like frequent trash collections, great schools, snow-plowing on the sidewalks, and a well-maintained park system with swimming pools and artificial ice skating rinks.

Anyway, directly past the entrance of the Youth Center, a poster-sized blowup of a photograph of an African American woman kissing a white man hung on the wall. You could not get in without walking past it. Everybody who came to the Youth Center saw it. Even though I knew how racist Dearborn had been and still was, I was stunned. Hubbard ran the town with an iron fist, and the citizens of Dearborn were letting him get away with this. No words were written under the picture, but the message was clear: Look at what can happen if Dearborn were to integrate. Miscegenation! Horrors! After I got over my shock, I realized I knew the black woman in the photograph. I'd interviewed her on a number of occasions.
She was Charlayne Hunter, who in 1961 was one of two black students who first broke the color barrier in higher education in Georgia. While she was waiting for the courts to force integration of the University of Georgia, she attended Wayne for her first semester, where I was a reporter for The Daily Collegian. And yes, she did marry a white man she met at the University of Georgia.

I know I've gotten off track here, but it's worth knowing our history; it's worth remembering that racism crept far North. Today there is a statue of Hubbard in Dearborn (see above), and one of the senior-citizen housing units is named for him: instead, he ought to be stripped of any glory, the statue toppled, his memory thrown into the dustbin of history. His family, still in Dearborn, however, bristle and complain when anyone around remembers what a horrible bigot he was, and their thoughts are published in the Detroit newspapers. And yep, that's a book about him: Orvie: The Dictator of Dearborn: The Rise and Reign of Orville L. Hubbard (Great Lakes Books)

The point of this riff about my hometown, Dearborn--it's still iffy to mention to a black person of a certain age you grew up there, without quickly explaining you know what that means to him--is that the community where my granddaughter's father lived, where my granddaughter might be raised, was Inkster, directly west of Dearborn. Jane was adamantly opposed to letting this happen. I was more than a thousand miles away, halfway across the country. She also refused to pursue open adoption. I did not know how much of an effort his family would put into stopping the adoption proceedings, and did not tell her I hoped they succeeded. She was my daughter, first and foremost, and I listened to her grief, gave her my love, and would not risk alienating her by opposing her. We'd already had too many ups and downs over the course of the five years I had known her when her first daughter was born. In the end, the baby was adopted.
At the court proceeding, she said, neither he nor his family showed up. She told me that she met the adopting parents, a white woman lawyer and a black doctor, and shook their hands. I never went to court when she was adopted, so I don't know how much of this is true--because, I hate to admit, Jane had a slippery relationship with the truth. She often said what was most convenient at the moment to the person she was speaking to; she made up innumerable, improbable stories that she later forgot. Her lying hampered our relationship on many occasions, and, I know, it did the same with her relationship with her adoptive parents. It was as if the truth did not matter to her. And she knew that one way to shut me up--about an open adoption, about his mother wanting to raise her--was to come up with such desirable parents--one white, one black, both well educated with fancy careers--for her biracial daughter, my granddaughter. Neither my husband nor I ever really believed her on this, but there was no point in refuting her. She would just insist that it happened that way.--lorraine

Posted by Lorraine Dusky at 3:57 PM Labels: biracial adoptee, black father-white mother adoptee, Dearborn, open adoption, Orville L. Hubbard

Lori June 10, 2010 at 3:25 AM
Lorraine, I hate to say it, but that was the lie they used to shut the last nail in my motherhood coffin. The white lawyer (woman) and the black doctor (man). I don't know how your granddaughter's story stands now, but I do know that the likelihood of that occurring in that time frame was very slim....at least on the professional level. Sigh, aint racism grand.....

Carolina Whitefreeze June 10, 2010 at 10:48 AM
As a long-time (but no longer) resident of Dearborn, I came after the Hubbard regime, but while its effects were definitely still apparent. At times, it was embarrassing to give Dearborn as my hometown because of that.
I still have a small, hotel-size bar of soap that says "Keep Dearborn Clean," which they apparently gave out at some point. I'm glad that part of Dearborn history is (at least mostly) over.

Lorraine Dusky June 10, 2010 at 11:06 AM
Lori--same bleepin' story? It is worth telling our stories so that the many lies are revealed. In my case, however, I'm sad that this came from my daughter. Carolina: I've written a number of op-eds about racism and I often mention my youth in Dearborn, and explain a bit. The piece usually draws a letter from at least one African American who is relieved that someone remembers how rampant racism was in the Forties, Fifties and Sixties. I believe Hubbard was mayor until 1974 when he had a massive stroke, and the rest of his term was filled by the president of the city council. I also found this on Wikipedia: In 2005, Senator Carl Levin spoke at the funeral of Rosa Parks, making the following comments about Hubbard: "The South had Orval Faubus; Michigan had Orville Hubbard. Orville Hubbard vowed to keep Dearborn clean, meaning keep Dearborn white." Levin's comments drew an angry response from Hubbard's family. A letter published in the Detroit Free Press from Hubbard's granddaughter, Susan L. Hubbard, referred to Levin's comments as "mean-spirited ramblings of an arrogant, Washington politician." NO, Levin spoke the truth. That stupid statue of him is listed among 20 monuments in the country that should be toppled. It is an embarrassment that it still stands.

KristySearching June 10, 2010 at 3:57 PM
I am so glad you have re-connected with your lost granddaughter Lorraine... I hope that the cycle ends with her. What did the social workers tell your daughter's adoptive parents about you? Maybe your daughter just did what she knew happened in adoption - social workers lied. I've said this before. I think that is one of the reasons so many adoption agencies are opposed to reunion.
They know the lies that are out there and that will be revealed when adoptee and mother meet.

Lorraine Dusky June 10, 2010 at 8:17 PM
UM--You forget that my daughter said she met them, shook their hands...so their occupations were not a lie told by the social worker in her case. I was not lied to by my social worker, Helen Mura, when my daughter was adopted. She told me that they were "professional people," and they were. A nurse and an insurance adjuster who handled disasters, such as hurricanes and tornadoes.

Your story is so crazy, Lorraine. Interesting, as in the Chinese (?) curse, "may you live in interesting times." You got to live your dreams, you got to move to New York and be a writer, something I imagine in your generation and circumstance was not common. Correct me if I am wrong. The nightmare aspect seems to have loomed just as large, however. I wish I could get behind Buddhism more; I am too human. I feel all of the nitty-gritty. What we (collectively) live through is pretty darn amazing.

Lori June 11, 2010 at 12:37 AM
Lorraine, yep, just about. But they did not introduce me to anyone. It was, according to what I have found from research, a common ruse in that time frame - I am guessing somewhere from 1975 to 1989, and possibly later. I think that the saddest part is that it is likely that your daughter told you that to make you believe she did something with full knowledge. In essence, to protect you from her being lied to and hurt. I know that a lot of people think that I am bitter; I used to be, but I am not. Sad about the lies. Sad that her father never got to know her, and he wanted to. And a thousand other reasons. But I can't change it. Yes, our stories need to be told. I recently was contacted by someone who wants to share our stories, but I am not so sure I want to be part of her deal with....advice privately would be appreciated!
KimKim June 11, 2010 at 6:16 AM
A mother needs our help: her 18-month-old baby was stolen and sold off to adoption, and she managed, with help from an organization, to track him down in the Netherlands. There is a court case this month and they still need several hundred euros for the airfare. Please can we help this family? WHERE TO DONATE: http://againstchildtrafficking.org/Donations.html (write for Nagarini) newspaper article here: http://www.timesonline.co.uk/tol/news/world/asia/article7144837.ece

The Improper Adoptee June 11, 2010 at 8:20 AM
"Orville Hubbard vowed to keep Dearborn clean, meaning keep Dearborn white." This remark, along with handing out bars of soap, is disgusting. I agree any image of this terrible man should be swept out of Dearborn. He is the one who wasn't clean with his dirty soul. Hubbard is lucky I didn't grow up in that town because I would of ripped that poster to shreds and dumped it on his front lawn, but then again, I;ve always been improper... damn laptop with the keyboard from hell, LoL but then again I've always been improper....
"Lorraine Dusky, a writer who relinquished a daughter as a young single mother in New York State in 1966, supports opening the records. She reported in her 2015 memoir that in the handful of states that offered women the opportunity to remove their names from original birth certificates, only a small fraction of women — fewer than 1 percent — chose to do so." --Don't Keep Adopted People in the Dark by Gabrielle Glaser, June 19, 2018 Jane Edwards Lorraine Dusky "On FirstMotherForum.com, a blog that discusses issues among women who had given children up for adoption, Lorraine Dusky, one of the site's authors, praised the series (ABC's 10-episode Find My Family): 'Maybe this will be heard by people who think it is unloyal somehow for a person to search out his or her roots, parents, family, when it is a most natural desire of consciousness.'--Two Reality Shows Stir Publicity and Anger"--Dec. 6, 2009. This blog takes cookies. "It shouldn't take a miracle to find people you are related to by blood."--Jenn Gentlesk forumfirstmother@gmail.com Finding Your Roots could be the conversation starter you need to talk about adoption Philomena: A forced adoption, a lifetime quest, a longing that never waned Adoptive Parents Ask: What Could They Do? If you don't care about your origins, why are you searching First Mother sites? Mother denied visitation with son conceived with her egg Joan Didion's Blue Nights, an adoption memoir revisited on the release of documentary about her When people say: I'm not curious about my roots.... Laws, Searching, Reunion Keep Your Baby Considering Open Adoption? Letter to Birth Mother or Sibling Writing the First Letter 'Positive' Adoption Language? What We Think About Adoption Favorite Adoption Quotes Oregon court records available Instructions and forms for accessing adoption records are on the Oregon Judicial Department's website. Material from First Mother Forum may be quoted as long as FMF is credited and with a link to original source here. 
Over 350 words, contact for permission: forumfirstmother@gmail.com. MAKING AN ADOPTION PLAN? Check this out before you go foward: What you should Know if You're Considering Adoption for Your Baby Lorraine Dusky. Theme images by Roofoo. Powered by Blogger.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
381
Champions of CRE

It really can make a difference when administrators can find others in similar circumstances who have successfully advocated for conflict resolution education. We want to profile administrators who have done so in this section. If you are such a person, please contact us!

Ms. Barbara Sugarman Grochal

The CRE Connection is pleased to present Barbara as a Champion of CRE.

Barbara Sugarman Grochal is currently the Deputy Director of the School's Conflict Resolution Education Programs at the Center for Dispute Resolution at the University of Maryland School of Law (C-DRUM). Within the University of Maryland School of Law, the Center for Dispute Resolution (C-DRUM) works collaboratively with public and private institutions, as well as individuals and groups, to promote, enhance and teach conflict resolution skills; research and develop conflict resolution systems; and change the way conflicts are resolved throughout the state and beyond. Barbara is also a Life Coach, Trainer, Mediator and Restorative Justice Facilitator.

Ms. Grochal completed a B.A. together with a Master of Arts in Teaching in English and History at Cornell University. She later completed an MBA in Management and Finance at Loyola College in Maryland.

At C-DRUM, Ms. Grochal directs the statewide grants program for public schools in Maryland, assisting schools in developing conflict resolution programs. In her support of the school programs, she develops training programs for teachers and staff, students, and parents, and builds networks to support conflict resolution education resources for schools on a state and national level. She has also served on past national Planning Committees for the International Summit on CRE. Summaries of past school grant projects funded can be found at www.cdrum.org under Initiatives, School Grants Program.
Barbara has served on numerous Maryland committees to promote safer schools, including the Safe Schools Action Committee, the Model Anti-Bullying Workgroup that developed the Maryland Model Policy, the Planning Committee on Bullying Prevention, and the Maryland Peer Helper's Conference. She also helped to establish a truancy mediation program administered by C-DRUM for Baltimore City Schools, BSMART (Baltimore Schools: Mediations About Reducing Truancy), and assists by conducting truancy mediations. In Baltimore County, Barbara facilitates community conferences, a restorative justice tool used for resolving conflicts that occur in communities and sometimes in schools. This process provides an opportunity outside of the judicial system for the parties, their supporters, and key community members to reach agreements regarding offenders "repairing the harm" done. A trained and certified professional and life coach, Ms. Grochal also works with a variety of coaching clients privately, helping them achieve personal goals and address internal and external conflicts. In addition, Ms. Grochal conducts parenting workshops in Baltimore City schools, aimed at helping parents build stronger connections with their children. As a volunteer, she facilitates motivational values circles at a Baltimore women's homeless shelter. Ms. Grochal administers the C-DRUM CRE list serve designed to share CRE resources, opportunities and ideas. More information on how to join the list serve is available at: www.cdrum.org under Initiatives, School Grants Program. Dr. Pamela Lane-Garon The CRE Connection is pleased to highlight our first Champion of CRE – Dr. Pamela Lane-Garon Dr. Lane-Garon is an Educational Psychologist with clinical and teaching experience in early childhood populations. Dr. Lane-Garon is also a mediator who trains teachers, administrators, counselors and other professionals in conflict resolution skills. 
She is the Associate Director of the Bonner Center for Character Education and project developer of the Advancing Professional Ethics in Teacher Education initiative. At California State University Fresno's Kremen School of Education and Human Development, Dr. Lane-Garon directs the Mediator Mentors Program with Karen DeVoogd, project coordinator. Mediator Mentors is a university-public school partnership in which future teachers, counselors, social workers and school psychologists help support the development of conflict resolution skills in school children. In addition, in the Kremen School she is a Professor, helping to prepare and educate teachers, counselors and administrators.

Dr. Lane-Garon completed a Masters in Special Education and a Masters in Counseling at Arizona State University, and she went on to complete her Ph.D. in Educational Psychology in 1997. Some of Dr. Lane-Garon's most important accomplishments include assisting local schools to develop conflict resolution education and peer mediation programs. Over the past several years, her research has focused on the effects of conflict resolution practices on students, processes and settings. Pamela also serves as the President of the Central California Chapter of the Association for Conflict Resolution and is on the Board of the National Association for Conflict Resolution's Education Section.

Dr. Lane-Garon's research and scholarly writing interests have focused on the social-cognitive development of students and on programmatic ways in which perspective-taking (considering the thoughts and feelings of others) can be fostered. She has examined this developmental phenomenon in the context of peer mediation programs. The current application of her work to school-based violence prevention and school improvement programs has resulted in many consultation relationships with schools in the San Joaquin Valley.
One of the more notable events taking place at the Bonner Center for Character Education is the 26th Annual Conference on Character and Civic Education on April 9, 2010. For additional details on this event please visit the Center website link: http://snipurl.com/bonnercenter

If you would like to have a Mediator Mentor Program developed at your school, you can receive assistance and information about the steps to follow by visiting the Mediator Mentors Program page website link: http://snipurl.com/mediatormentors
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,094
This bed is what The Cool Wood Company is about in terms of design and construction. We believe the beauty of the bed comes from its contemporary clean lines blended with traditional woodworking techniques. It's chunky and bold whilst remaining classic and timeless. The foot and headboard are handmade from solid pine timbers and fitted using mortice and tenon joints. It is of course hand finished, shown here in our Dark Oak Wax. You can also choose to have four fitted under-bed drawers added to the design for extra handy storage.
{ "redpajama_set_name": "RedPajamaC4" }
7,976
{"url":"https:\/\/www.physicsoverflow.org\/22112\/constructing-susy-algebra-via-index-structure","text":"Constructing SUSY algebra via index structure\n\n+ 2 like - 0 dislike\n110 views\n\nOften in literature the SUSY algebra is simply given, but various books, for example Bailin and Love, goes through the trouble of showing how the SUSY commutation relations are the only possible ones that you can write down. This is the content of my question.\n\nSUSY adds two spinor generators to the Poincare algebra, our complete list of generators is given by the independent components of $$M^{\\mu\\nu}, P^\\mu, Q_\\alpha, \\bar{Q}_{\\dot\\beta}$$ Where the first two are the usual Lorentz and translation generators, the last two are the added spinor generators.\n\nAmong other thing, Bailin and Love makes statements such as:\n\n$[P^\\mu,Q_\\alpha]$ must yield a spinor, the only possibility is $c \\sigma^\\mu _{\\alpha\\dot\\beta}\\bar Q^\\dot\\beta$.\n\nWhere $c$ is a constant, and $\\sigma^\\mu=(I,\\sigma^i)$. They then go on to show that the Jacobi identity implies $c=0$.\n\nThe quoted statement is not obvious to me, in particular I cannot fully understand why the expression written above is the only possible commutation relation.\n\nLet me make this a little more specific:\n\n1. From Bailin and Love, as well as other pedagogical literature, it appears that the structure constant must be some combination of $$c,\\sigma^\\mu_{\\alpha \\dot\\beta},\\sigma^{\\mu\\nu}_{\\alpha\\beta}$$ Where the last symbol is the usual SL(2) generators constructed from the Paulis. I want to know why these are the only things one is allowed to write. In particular, can one not construct other matrices that have the same index structure? Is it that these are the most general things with these index structures, or is it that they're fixed by symmetry?\n2. 
Restricting myself to constructing the structure constants from the things I wrote above, I was able to verify that as claimed, the index structure of the commutators of interest fix unique structure constants (up to a scaling factor). However I verified this by exhaustion, is there a systematic way to find these combinations?\n3. It turned out that all of the commutators yielded linear combination of one type of generator: for example in the quoted statement the result is a sum of right handed spinor generators. Is the result that commutators yield linear combinations of a single type of generator a general one? Or is it only a side effect of the uniqueness of the form of the structure constant fixed by index structure, as mentioned in point 2?\nThis post imported from StackExchange Physics at 2014-08-12 09:38 (UCT), posted by SE-user bechira\n\n+ 2 like - 0 dislike\n\nIt is maybe simpler to consider all the generators as representations of $SL(2,C)$, so, using spinor indices, you will have : $M^{\\alpha \\dot \\alpha \\beta \\dot \\beta}, P^{\\beta \\dot \\beta}, Q_\\alpha, \\bar Q^\\dot\\beta$\n\nIndices are raised and lowered with the Levi-Civita symbols $\\epsilon_{\\alpha \\beta}, \\epsilon^{\\alpha \\beta},\\epsilon_{\\dot \\alpha \\dot \\beta},\\epsilon^{\\dot \\alpha \\dot \\beta}$\n\nNow, what is $[P^{\\beta \\dot \\beta}, Q_\\alpha]$ ?\n\nWe see that there is no generator with the form $G^{\\beta \\dot \\beta}_\\alpha$.\n\nLevi-Civita symbols are not useful too, because they have $2$ lower or upper indices of same kind, so we cannot write something like $[P^{\\beta \\dot \\beta}, Q_\\alpha] = \\epsilon_{\\alpha \\beta}Q^\\dot\\beta$ (there would be an obvious problem with the $_\\beta$ indice).\n\nSo the only solution is a contraction on indices $\\alpha$ and $\\beta$, that is :\n\n$[P^{\\beta \\dot \\beta}, Q_\\alpha] = \\delta_{\\alpha} ^{\\beta} \\bar Q^\\dot\\beta$\n\nWith $P^\\mu = \\sigma^\\mu_{\\beta \\dot \\beta}P^{\\beta \\dot \\beta}$, (which 
means simply that the $(\\frac{1}{2}, \\frac{1}{2})$ representation of $SL(2,C)$ is equivalent to the fundamental representation of $SO(3,1)$ ) we get finally :\n\n$[P^\\mu, Q_\\alpha] = \\sigma^\\mu_{\\beta \\dot \\beta}\\delta_{\\alpha} ^{\\beta} \\bar Q^\\dot\\beta = \\sigma^\\mu_{\\alpha \\dot \\beta} \\bar Q^\\dot\\beta$\n\nThis post imported from StackExchange Physics at 2014-08-12 09:38 (UCT), posted by SE-user Trimok\nanswered Aug 11, 2014 by (950 points)\nThanks a bunch. I am still slightly confused about my first question: in this case,are the Levi-Civita symbol and the identity the only tensors that transform like tensors with 2 spinor indices?\n\nThis post imported from StackExchange Physics at 2014-08-12 09:38 (UCT), posted by SE-user bechira\nThe Levi-Civita symbol is the only quantity which transforms as a representation with $2$ lower or upper spinor indices of the same kind : $\\epsilon_{\\alpha \\beta}, \\epsilon^{\\alpha \\beta},\\epsilon_{\\dot \\alpha \\dot \\beta},\\epsilon^{\\dot \\alpha \\dot \\beta}$. \"Identity\" is a quantity $\\delta^a_b$ or $\\delta^\\dot a_\\dot b$, so you have one upper and one lower indices of the same kind. The momentum transforms as $P^{\\beta \\dot \\beta}$ or $P_{\\beta \\dot \\beta}$ (if you lower the indices), so you have $2$ lower or upper indices of different kind.\n\nThis post imported from StackExchange Physics at 2014-08-12 09:38 (UCT), posted by SE-user Trimok\nThis is probably a silly question, but can you point me somewhere that shows your first sentence, that the Levi-Civita symbol is the only tensor we can construct which transforms as shown?\n\nThis post imported from StackExchange Physics at 2014-08-12 09:38 (UCT), posted by SE-user bechira\nLevi-Civita symbols are the spinor metrics (used to raise and lower indices). Independently of the metrics, the only possible operations are contractions on identical indices (one lower and one upper of the same kind), and the generators. 
There is no other possibility.\n\nThis post imported from StackExchange Physics at 2014-08-12 09:38 (UCT), posted by SE-user Trimok\n+ 2 like - 0 dislike\n\n1. The super-Poincare group is supposed to be an extension of the Poincare group, which contains the Lorentz group and translations. We will complexify the Lorentz group. The Lie group $G:=SL(2,\\mathbb{C})\\times SL(2,\\mathbb{C})$ is (isomorhic to the double cover of) the complexified Lorentz group $SO(1,3;\\mathbb{C})$, cf. e.g. this Phys.SE post. This fact gives rise to the undotted and dotted irreps. An irrep $(s,\\dot{s})$ of $G$ is characterized by two non-negative half-integers $s,\\dot{s}\\in \\frac{1}{2}\\mathbb{N}_0$.\n\n2. The investigated commutator $[P_{\\beta\\dot{\\beta}}, Q_{\\alpha}]$ belongs to a tensor product representation of $G$, $$(\\frac{1}{2},\\frac{1}{2}) \\otimes (\\frac{1}{2},0) ~\\cong~(\\frac{1}{2},0)^{\\otimes 2}\\otimes (0,\\frac{1}{2})$$ $$\\tag{1}~\\cong~[(0,0)\\oplus(1,0)]\\otimes (0,\\frac{1}{2}) ~\\cong~(0,\\frac{1}{2})\\oplus (1,\\frac{1}{2}),$$ which we, in turn, have decomposed in irreps.\n\n3. Of the 14 super-Poincare generators $t_a$, only the dublet $\\bar{Q}_{\\dot{\\gamma}}$ transforms in one of the two irreps on the rhs. of eq. (1), namely first irrep $(0,\\frac{1}{2})$. If we would like the super-Poincare algebra to close on the 14 generators $t_a$ without introducing new generators; in particular, if we would like the investigated commutator $$\\tag{2}[P_{\\beta\\dot{\\beta}}, Q_{\\alpha}]~\\in~ {\\rm span}(t_a),$$ then the first irrep $(0,\\frac{1}{2})$ on the rhs. of eq. (1) must be proportional to $\\bar{Q}_{\\dot{\\gamma}}$, and the second irrep $(1,\\frac{1}{2})$ must be annihilated.\n\nThis post imported from StackExchange Physics at 2014-08-12 09:38 (UCT), posted by SE-user Qmechanic\nanswered Aug 11, 2014 by (2,790 points)\n\n Please use answers only to (at least partly) answer questions. 
To comment, discuss, or ask for clarification, leave a comment instead. To mask links under text, please type your text, highlight it, and click the \"link\" button. You can then enter your link URL. Please consult the FAQ for as to how to format your post. This is the answer box; if you want to write a comment instead, please use the 'add comment' button. Live preview (may slow down editor)\u00a0\u00a0 Preview Your name to display (optional): Email me at this address if my answer is selected or commented on: Privacy: Your email address will only be used for sending these notifications. Anti-spam verification: If you are a human please identify the position of the character covered by the symbol $\\varnothing$ in the following word:p$\\hbar$y$\\varnothing$icsOverflowThen drag the red bullet below over the corresponding character of our banner. When you drop it there, the bullet changes to green (on slow internet connections after a few seconds). To avoid this verification in future, please log in or register.","date":"2017-11-23 14:49:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8541258573532104, \"perplexity\": 581.8399827719456}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 5, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": 
\"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-47\/segments\/1510934806842.71\/warc\/CC-MAIN-20171123142513-20171123162513-00157.warc.gz\"}"}
{"url":"https:\/\/chemistry.stackexchange.com\/questions\/49371\/decarboxylation-elimination-type-reaction\/49382","text":"Decarboxylation \/ Elimination type reaction\n\nI'm quite familiar with E1\/E2 reactions and usually use those ideas to explain elimination reactions, however I came across a reaction which was a bit different (in the sense it isn't E1\/E2) but still an elimination.\n\nLooking at this reaction I ruled out an E2 type elimination because there's no axial hydrogen which is trans to the bromine group, so something else must have happened here. I drew out the lowest energy chair conformation ('locked' in that conformer because of the t-butyl group) that would react to help me understand what happened and this is what I came up with (excuse the poor chair drawings):\n\nI know hydride is a very strong base and the first step would be deprotonation of the carboxylic acid group (Due to it being the most acidic proton). However after this, mechanistically I wasn't so sure what would happen and how it would happen. My thinking was the oxygen which is now quite charged would push the electron density in towards the carbonyl carbon, and hence the bond would break (between the carbonyl carbon and the cyclohexane carbon). That would form a double bond, and $\\ce{CO2}$ and bromide would leave. Is this realistically a correct explanation for what is observed? Also what kind of reaction is this? I haven't come across it before during the usual E1\/E2 reactions. Are there any other examples of similar elimination reactions?\n\n\u2022 Yes, absolutely correct. \u2013\u00a0jerepierre Apr 12 '16 at 13:50\n\u2022 Yeah, KH seems like an extreme choice here. Anything that deprotonates that acid should work (probably needs heat, though). Your mechanism is what I thought as soon as I saw the starting material. 
\u2013\u00a0SendersReagent Apr 12 '16 at 17:14","date":"2020-07-12 04:01:54","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6528729200363159, \"perplexity\": 1351.7619214112246}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-29\/segments\/1593657129517.82\/warc\/CC-MAIN-20200712015556-20200712045556-00238.warc.gz\"}"}
As of … the Russian Empire was divided into governorates-general, governorates, oblasts and uyezds; in addition: the Kingdom of Poland, the Principality of Finland, and the Bukhara and Khiva khanates. Total number of governorates-general: 4. Total number of governorates: 59. Total number of oblasts with governorate status: 19. Capital: the city of Saint Petersburg.

Changes from 1 January 1852:

Newly formed:
- Akmolinsk Oblast (1868), from the Oblast of the Siberian Kirghiz (formed earlier, in 1854) in the south of Western Siberia
- Amur Oblast (1858), from new lands
- Batum Oblast (1878), from new lands
- Bessarabia Governorate (1873), from Bessarabia Oblast
- Khanate of Bukhara (1868), under the protectorate of the Russian Empire
- Dagestan Oblast (1860), from part of Derbent Governorate and some uyezds of Tiflis Governorate
- Elizavetpol Governorate (1868), from the eastern parts of Tiflis Governorate and the western parts of the Baku region
- Transcaspian Oblast (1881), from the Transcaspian Division (founded in 1874) in the west of present-day Turkmenistan, from new lands
- Zeravshan Okrug (1868), from new lands south of Tashkent
- Kars Oblast (1878), from new lands
- Kuban Oblast (November 1860), from the previously unorganized lands of the Black Sea Host (from November 1860, the Kuban Host), as well as from 6 brigades of the Caucasus Line Host
- Orenburg Governorate-General (1868), comprising Turgai Oblast and Ural Oblast
- Primorskaya Oblast (… ), from part of Irkutsk Governorate, the abolished Kamchatka Oblast, and new lands (1858, 1860)
- Semipalatinsk Oblast (1854), from new lands in the south of Western Siberia
- Semirechye Oblast (July 1867), from part of Turkestan Oblast
- Steppe Governorate-General (1882), from the Akmolinsk, Semipalatinsk and Semirechye oblasts
- Sukhum Division (… ), within the Caucasus Viceroyalty
- Syr-Darya Oblast (July 1867), from part of Turkestan Oblast
- Terek Oblast (1860), from the previously unorganized lands of the Terek Host (formerly the Caucasus Line Host), without the 6 brigades transferred to Kuban Oblast
- Turgai Oblast (1868)
- Turkestan Oblast (1865), within the Orenburg Governorate-General, from new lands: the Semirechye and Trans-Ili territories and the towns of Aulie-Ata (now Taraz), Turkestan and Chimkent, and from 1866 also the city of Tashkent, Khujand and the Zachirchik territory
- Turkestan Governorate-General (July 1867), from Turkestan Oblast
- Ural Oblast (1868), from the Oblast of the Orenburg Kirghiz, founded in 1859 from new lands (the Junior Zhuz and the Trans-Ural steppes)
- Ufa Governorate (1865), from part of Orenburg Governorate
- Fergana Oblast (1876), from the abolished Khanate of Kokand (part of Russia since 1868)
- Khanate of Khiva (1873), under the protectorate of the Russian Empire
- Black Sea Okrug (1867), of Kuban Oblast

Abolished:
- Bessarabia Oblast (1873), into Bessarabia Governorate
- Derbent Governorate (1860), into Baku Governorate and Dagestan Oblast
- West Siberian Governorate-General (… ), into the Steppe Governorate-General and the Tobolsk and Tomsk governorates
- Caucasus Viceroyalty (1883)
- Kamchatka Oblast (14 November 1856), into Primorskaya Oblast
- Turkestan Oblast (July 1867), into the Turkestan Governorate-General

Renamed:
- Land of the Don Host (1870) to Oblast of the Don Host
- Shemakha Governorate (1859) to Baku Governorate

List of governorates-general:
- East Siberian Governorate-General (center: Irkutsk): Yenisei Governorate, Irkutsk Governorate, Amur Oblast, Transbaikal Oblast, Primorskaya Oblast, Yakutsk Oblast, Sakhalin Island (a special division)
- Orenburg Governorate-General (center: Orenburg): Turgai Oblast, Ural Oblast
- Steppe Governorate-General (center: Omsk): Akmolinsk Oblast, Semipalatinsk Oblast, Semirechye Oblast (from the Turkestan Governorate-General in 1882)
- Turkestan Governorate-General (center: Tashkent): Syr-Darya Oblast

List of all governorates: Arkhangelsk; Astrakhan (in 1876 the territory of the Bukey Horde was included); Baku; Bessarabia; Vilna; Vitebsk; Vladimir; Vologda; Volhynia; Voronezh; Vyatka; Grodno; Yekaterinoslav (Aleksandrovsk, Bakhmut, Verkhnedneprovsk, Yekaterinoslav, Novomoskovsk, Pavlograd, Rostov and Slavyanoserbsk uyezds); Elizavetpol; Yenisei (center: Krasnoyarsk); Irkutsk; Kazan; Kaluga; Kiev; Kovno; Kostroma; Courland (center: Mitava); Kursk; Kutais; Livonia (center: Riga); Minsk; Mogilev; Moscow; Nizhny Novgorod; Novgorod; Olonets; Orenburg; Oryol; Penza; Perm; Podolia; Poltava; Pskov; Ryazan; Samara; Saint Petersburg; Saratov; Simbirsk; Smolensk; Stavropol; Taurida; Tambov; Tver; Tiflis; Tobolsk; Tomsk; Tula; Ufa; Kharkov; Kherson; Chernigov; Erivan; Estland (center: Revel); Yaroslavl

List of oblasts: Akmolinsk (center: Omsk); Amur (center: Blagoveshchensk); Batum; Dagestan (center: Derbent, from 1866 Temir-Khan-Shura); Don Host; Transbaikal (center: Chita); Transcaspian; Kars; Kuban (center: Yekaterinodar); Primorskaya (center: Sofiysk, from 1858 Nikolaevsk-on-Amur, from 1880 Khabarovsk); Semipalatinsk; Semirechye (center: Verny); Syr-Darya (center: Tashkent); Terek (center: Vladikavkaz); Turgai (center: Orenburg); Ural (center: Uralsk); Fergana (center: Skobelev); Yakutsk

1883
1883 in Russia
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,516
Area Republicans set example for the GOP By Pacific Coast Business Times Staff Monday, November 30th, 2009 At first blush, there doesn't seem to be too much connection between Nao Takasugi and Abel Maldonado. They are generations and miles apart in their backgrounds. But Takasugi, who passed away Nov. 19 at age 87, and Maldonado, who was nominated to the lofty post of lieutenant governor shortly before Thanksgiving, do share plenty of common political ground. Both came from agrarian roots, worked in family-owned businesses and overcame adversity to make their mark as moderate Republicans. Both rose to prominence in the state Legislature, with Takasugi representing Oxnard and Maldonado representing Santa Maria and San Luis Obispo. Takasugi's way was to hold fast to values, support free enterprise and make steady, methodical progress toward goals. When the city of Oxnard turned down his request for a permit to put up a sign for the family's store, he ran for city council and won — setting a career in motion. Maldonado also built a reputation as a champion of small business. He, too, has been patiently building his career in politics and had the fortitude to hang in there after a narrow defeat for controller. Takasugi's family was interned during World War II in a government seizure of property that cost it most of its possessions. With help from the Quakers, Takasugi gained release from the camp to attend college and eventually earned an MBA from the Wharton School at the University of Pennsylvania. The Maldonados worked their way up from the fields of Santa Maria to own Agri-Jal, a large family farming enterprise. Maldonado made both friends and enemies when he became the last vote needed to break the budget deadlock with California hanging on the edge of insolvency. 
With Sam Blakeslee, Maldonado's successor in the Assembly, now serving as minority leader, the Central Coast has carved out quite a legacy for leadership in the Republican Party — it is a rich history that stretches back to Takasugi and many others. If California's Republican Party is going to regain the majority in the state, it could do a lot worse than to look to the politicians who have brought common sense, fiscal responsibility and a measured, pro-business approach to their offices. That's why it is wise to honor the amazing life and career of Nao Takasugi — a life that was memorialized in Tom Brokaw's book "The Greatest Generation." It also would be wise for the state legislature to approve Abel Maldonado's nomination for lieutenant governor with deliberate speed.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
318
\section{Introduction}
Plain Kolmogorov complexity $\C(x)$ of a bitstring~$x$ was independently defined by Ray Solomonoff~\cite{solomonoffI} and later by Andrei Kolmogorov~\cite{Kolmogorov65} as the minimal length of a program that produces~$x$ on a Turing machine. In both definitions programs are strings of zeros and ones written on a work tape; the beginning and end of the program are marked by blank symbols. During the execution, the Turing machine (which we call a {\em plain} machine) can scan the beginning and end of the program and use its length as additional information during the computation. After the computation, the output string should appear on the work tape, again with the beginning and end marked by blank symbols (see~\cite{LiVitanyi,GacsNotes} for details). Kolmogorov complexity on such a machine is called {\em plain} complexity. It is currently the most popular notion of Kolmogorov complexity. A closely related notion of complexity was introduced by Leonid Levin~\cite{LevinPpi,LevinCK} and Gregory Chaitin~\cite{Chaitin75} and has many applications in the study of algorithmic randomness. Imagine a Turing machine on which programs are presented on a separate $2$-symbol input tape. The tape does not have blank symbols, only zeros and ones. During the execution more input is scanned until the machine reaches a halting state, after which an output $x$ is defined. We write $U(p) = x$ if $p$ is the minimal initial segment of the input tape that contains all scanned cells and if the result of the computation is~$x$. During the computation, the length of $p$ is no longer available. Programs on such a machine are also called {\em self-delimiting}. Note that the set of programs on which $U$ halts is prefix-free. The minimal length of a program outputting $x$ on such a machine is called {\em prefix} complexity $\K(x)$.
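As an illustrative aside (our addition, not needed for the sequel), the self-delimiting discipline can be made concrete with a toy prefix-free code: a string $x$ is encoded as $1^{|L|}0\,L\,x$, where $L$ is the binary expansion of $|x|$. A decoder reads exactly its own codeword and halts, so no codeword is a proper prefix of another; and since the codeword for $x$ has length $|x| + 2\log|x| + O(1)$, applying the same trick to a shortest plain program sketches why $\K(x) \le \C(x) + 2\log \C(x) + O(1)$.

```python
# Toy self-delimiting code: encode(x) = 1^{|L|} 0 L x, with L the binary length of x.
# The decoder consumes exactly one codeword from the front of the input stream,
# mimicking how a prefix machine reads its input tape without an end marker.

def encode(x: str) -> str:
    L = bin(len(x))[2:]
    return "1" * len(L) + "0" + L + x

def decode(stream: str):
    """Read one codeword from `stream`; return the decoded x and the unread rest."""
    i = 0
    while stream[i] == "1":   # unary prefix gives |L|
        i += 1
    n = i
    i += 1                    # skip the '0' marker
    length = int(stream[i:i + n], 2)
    i += n
    return stream[i:i + length], stream[i + length:]

x = "0111001"
w = encode(x)
# Decoding stops after exactly one codeword, whatever follows on the "tape":
assert decode(w + "0010111") == (x, "0010111")
```

This is only a pedagogical sketch of prefix-freeness; the machines in the text are universal and quantify over all such encodings.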
Prefix complexity is larger (up to an $O(1)$ constant) than plain complexity and the difference is at most $O(\log |x|)$, where $|x|$ denotes the length of $x$. For many applications this difference is not important. However, in the theory of algorithmic randomness, often $O(1)$-precise relations are used, and one often asks what happens when plain and prefix complexity are exchanged in a result or a definition. The goal of the paper is two-fold. First, we present a simple proof of a result that relates plain and prefix complexity. Secondly, we refine a proof technique (from~\cite{BauwensCompcomp}) to build strings where plain and prefix complexity behave differently, and apply it to solve three open questions. \bigskip Several results are related to one of the oldest questions in algorithmic randomness, raised by Robert Solovay~\cite{Solovay} (see~\cite[page 263]{Downey}). The maximal plain complexity of a string of length $n$ is $n + O(1)$ and we say that a string $x$ has $c$-maximal complexity if $\C(x) \ge |x|-c$. Martin-L\"of observed that there are no $c$ and no infinite sequence all of whose initial segments have $c$-maximal complexity. On the other hand, the class of sequences for which some $c$ and infinitely many initial segments $x$ exist with $\C(x) \ge |x| - c$ has measure one. Similar observations hold for prefix complexity (where the maximal complexity of an $n$-bit string is $n + \K(n) + O(1)$). Solovay's question is whether the classes of sequences with infinitely often maximal plain and prefix complexity are the same; in other words, is $\liminf_{x \sqsubset \omega} [|x| - \C(x)]$ finite iff $\liminf_{x \sqsubset \omega} [\K(|x|) + |x| - \K(x)]$ is finite? To answer this question, Solovay investigated whether there is a monotone relation between $\C(\cdot)$ and $\K(\cdot)$. 
He found that this was approximately the case by showing \begin{eqnarray*} \K(x) &=& \C(x) + \CC(x) + O(\CCC(x)) \\ \C(x) &=& \K(x) - \KK(x) + O(\KKK(x)) \,, \end{eqnarray*} where the complexity of a number $n$ is the complexity of the $n$-bit string $00\dots 0$ and where $\CC(x)$, $\KK(x)$, etc., are short for $\C(\C(x))$, $\K(\K(x))$, etc. The proof in~\cite{Solovay} is cumbersome and Joseph Miller~\cite{MillerContrasting} made some simplifications using symmetry of information for prefix complexity. Here we use this technique to give a much simpler proof. (Readers only interested in this result can go directly to sections~\ref{sec:prerequisites} and \ref{sec:relatingCandK}.) \smallskip Solovay showed that the continuation of the first equation with terms up to $O(\CCCC(x))$ does not hold. He also showed that maximal prefix complexity implies maximal plain complexity, but the reverse is not true: there exist infinitely many $n$ and $x$ of length $n$ such that $n - \C(x) \le O(1)$ and \begin{equation}\label{eq:prefixDef} \K(n) + n - \K(x) \ge \log^{(2)} n - O(\log^{(3)} n) \,. \end{equation} In~\cite{BauwensCompcomp} a simple proof (and generalizations) is presented. Here we further develop the proof technique to solve several open questions. Despite this negative result, Miller~\cite{Miller2randC,Miller2randK} gave a positive answer to Solovay's question: the sequences that have infinitely many initial segments with maximal plain and prefix complexity are the same. The proof is indirect: it shows that both classes coincide with the class of $2$-random sequences, i.e. Martin-L\"of random sequences relative to the halting problem (the equivalence of the first class with $2$-randomness was also shown in~\cite{Nies2rand}). Miller raised the question whether an (elegant) direct proof exists. In~\cite{2rand} simple proofs of these equivalences with $2$-randomness are given, but still no direct proof. 
It is also shown that \[ \liminf_{x \sqsubset \omega} [|x| - \C(x)] = \liminf_{x \sqsubset \omega} [\K(|x|) + |x| - \K(x)] + O(1) \,, \] by showing that both sides equal the $2$-randomness deficiency (see below). Laurent Bienvenu~\cite{personalBienvenuNov2012} asked whether for a $2$-random sequence the initial segments for which plain and prefix complexity are maximal are the same; more precisely, for $2$-random $\omega$, do there exist $c$ and $d$ such that for all $n$: $n-\C(\omega_1\dots\omega_n) \le c$ implies $\K(n) + n - \K(\omega_1\dots\omega_n) \le d$? (For some $c$ and $d$ the reverse implication is always true.) We show that this is not the case: for every $3$-random sequence (a subset of the $2$-random sequences) there are infinitely many initial segments $x$ with $|x| - \C(x) \le O(1)$ for which~\eqref{eq:prefixDef} holds. This makes the existence of a simple direct proof unlikely. We refer to section~\ref{sec:contrastingIn3Random} for the proof of this result. \bigskip In algorithmic information theory, many relations are known between highly random sequences and highly compressible sequences~\cite[Section 3.5]{BarmpaliasQuestions}. The second application of our technique considers one such class, called {\em the infinitely often $K$-trivial sequences}: the sequences $\omega$ for which there exist $c$ and infinitely many $n$ such that $\K(\omega_1\dots\omega_n) \le \K(n) + c$, i.e. \[ \liminf_n \left[ \K(\omega_1\dots \omega_n) - \K(n) \right] \le O(1)\,. \] This class contains the computably enumerable sequences and the (weakly) $1$-generic sequences. Similar observations hold for the infinitely often $C$-trivial sequences, i.e. the sequences for which \[\liminf_n [\C(\omega_1\dots \omega_n) - \C(n)] \le O(1)\,. \] Question 1 in~\cite{BarmpaliasQuestions} asks whether both classes coincide. We show that this is not the case. \bigskip A last application of the proof technique concerns randomness deficiency for infinite sequences. 
Suppose one million zeros are prepended to a random string. The new string is still random, but one might argue that it is somehow ``less random''. Randomness deficiency quantifies the amount of structure in a random sequence (see~\cite[Section 3.6.2]{LiVitanyi} and~\cite{GacsTestsInClass}). Let $\mu$ denote the uniform measure. Two closely related notions of deficiency exist in the literature. \begin{itemize} \item A lower semicomputable\footnote{ A non-negative rational function $f$ on $\{0,1\}^\infty$ is {\em basic} if $f(\omega)$ is determined by a finite prefix of $\omega$. A function $f$ into $\overline{\mathbb{R}}^+$ is {\em lower semicomputable} if there exists a uniformly computable series of (non-negative) basic functions $f_i$ such that $f = \sum_i f_i$. } function $f:\{0,1\}^\infty \rightarrow \overline{\mathbb{R}}^+$ (i.e. $\mathbb{R}^+$ extended with $+\infty$) is a {\em probability bounded randomness test} if for each $k$ \[ \mu \left\{ \omega : f(\omega) \ge k \right\} \le 1/k\,. \] \item A measurable function $f:\{0,1\}^\infty \rightarrow \overline{\mathbb{R}}^+$ is an {\em expectation bounded randomness test} if \[ \int_{\{0,1\}^\infty} f(\omega) \text{d}\omega \le 1 \,. \] \end{itemize} The first notion is inspired by the notion of confidence in statistical hypothesis testing, while the second is closely related, but mathematically more convenient to handle. There exists a lower semicomputable expectation bounded test $f_E$ that exceeds any other such test $g$ within a constant factor, i.e. for all $g$ there exists a $c$ such that $g \le cf_E$. The logarithm of such a universal test is called {\em expectation bounded randomness deficiency~$d_E$}. The deficiency depends on the choice of the universal test, but this choice affects the deficiency by at most an additive constant. Similarly for probability bounded tests and {\em probability bounded deficiency~$d_P$}. 
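In one direction the two notions are easily related (a standard observation, recorded here for completeness): by Markov's inequality, every expectation bounded test $f$ is also a probability bounded test, since for each $k$
\[
\mu \left\{ \omega : f(\omega) \ge k \right\} \le \frac{1}{k} \int_{\{0,1\}^\infty} f(\omega) \,\text{d}\omega \le \frac{1}{k}\,.
\]
Hence the expectation bounded tests form a subclass of the probability bounded ones; the converse direction involves a logarithmic loss.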
Both deficiencies are related: $d_E = d_P + O(\log d_P)$, and both deficiencies are finite iff the sequence is Martin-L\"of random. We argue that the relation between plain and prefix complexity is very similar to the relation between $d_P$ and $d_E$. Question 1 in \cite{GacsTestsInClass} asks whether there exists a monotone relation between probability bounded deficiency and expectation bounded deficiency that holds within additive $O(1)$ terms. If this is not the case, then there exist two families of sequences $\omega_i$ and $\omega'_i$ such that \[ d_P(\omega_i) - d_P(\omega'_i) \rightarrow +\infty \] for increasing $i$, and \[ d_E(\omega_i) - d_E(\omega'_i) \rightarrow -\infty \,. \] In Section \ref{sec:contrastingDeficiencies}, we translate the main proof technique to deficiencies and construct such sequences. Hence, no monotone relation exists between the deficiencies. \bigskip The paper is organized as follows: first we discuss two old results which will be used throughout the paper: Levin's formula relating plain and prefix complexity and Levin's formula for symmetry of information. In the next section we present a simple proof of Solovay's formulas relating $C$ and $K$. All further results in the paper demonstrate different behaviour of $C$ and $K$, and the proofs have a common structure. In section~\ref{sec:contrastingCandK}, we repeat the simplest such proof by showing that some strings have maximal plain but non-maximal prefix complexity. Afterwards, in section~\ref{sec:infOftenTrivial}, we show that the classes of infinitely often $C$- and $K$-trivial sequences are different. In section~\ref{sec:contrastingIn3Random}, we show that each $3$-random sequence has infinitely many initial segments with maximal plain complexity but non-maximal prefix complexity. Finally, in section~\ref{sec:contrastingDeficiencies}, we show that no monotone relationship exists between plain and prefix randomness deficiency. 
Sections \ref{sec:relatingCandK}, \ref{sec:contrastingCandK}, \ref{sec:infOftenTrivial}, \ref{sec:contrastingIn3Random}, and \ref{sec:contrastingDeficiencies} can be read independently. \section{Prerequisites} \label{sec:prerequisites} Two results are central in most proofs. The first is Levin's symmetry of information~\cite{complexityOfComplexity}: for all $x,y$ \[ \K(x) + \K(y|x,\K(x)) = \K(x,y) \,. \] The conditional variant is given by \[ \K(x|z) + \K(y|x,\K(x|z),z) = \K(x,y|z) \,. \] The second result relates plain and prefix complexity for random strings. For all $n$-bit $x$: $\C(x) = n + O(1)$ iff $\K(x|n) = n + O(1)$. We will use a more general variant. \begin{lemma}[Folklore]\label{lem:relateDeficiencies} For all $j$ and $x$ \[ |j - \C(x)| = \Theta \left(\left| j - \K(x|j)\right| \right) \,. \] \end{lemma} \begin{proof} The lemma implies Levin's formula \[ \C(x) = \K(x|\C(x)) + O(1)\,, \] and in fact, it is equivalent to it: for any $j$ it implies $\K(x|j) = \C(x)$ up to terms $O(\log |j - \C(x)|)$, and by the triangle inequality: \[ |j - \K(x|j)| = |j - \C(x)| + O\left(\log |j - \C(x)|\right)\,.\qedhere \] \end{proof} \section{Relating plain and prefix complexity} \label{sec:relatingCandK} Recall that $\KK(x)$, $\CC(x)$, etc., are short for $\K(\K(x))$, $\C(\C(x))$, etc. \begin{theorem}\label{th:SolovayI} \begin{eqnarray} \K(x) &=& \C(x) + \CC(x) + O(\CCC(x)) \nonumber\\ \C(x) &=& \K(x) - \KK(x) + O(\KKK(x)) \label{eq:second}\,. \end{eqnarray} \end{theorem} \begin{proof} Using symmetry of information we have \[ \K(x) = \K(x,\K(x)) = \KK(x) + \K(x|\K(x), \KK(x)) + O(1)\,. \] The last term equals $\K(x|\K(x)-\KK(x)) + O(\KKK(x))$. For $j = \K(x) - \KK(x)$ the equality is \[ j = \K(x|j) + O\left(\KKK(x)\right)\,. \] Thus $\C(x) = j + O\left(\KKK(x)\right)$ by Lemma~\ref{lem:relateDeficiencies}, i.e. \eqref{eq:second}. 
\smallskip We obtain the first equation of the theorem from the second by showing that \begin{eqnarray} \CC(x) &=& \KK(x) + O(\KKK(x)) \label{eq:CCvsKK} \\ \KKK(x) &\le& O(\CCC(x)) \label{eq:CCCvsKKK}\,. \end{eqnarray} For \eqref{eq:CCvsKK}, note that $a = b - c + O(d)$ implies $\C(a) = \C(b) + O(\K(c) + d)$. Applying this to \eqref{eq:second} we obtain \begin{equation*} \C(\C(x)) = \C(\K(x)) + O(\K(\KK(x)) + \KKK(x))\,. \end{equation*} Substituting $x \leftarrow \K(x)$ in \eqref{eq:second} gives \[ \C(\K(x)) = \K(\K(x)) - \KK(\K(x)) + O(\KKK(\K(x))) = \KK(x) + O(\KKK(x))\,. \] Combining both equations implies \eqref{eq:CCvsKK}. \smallskip It remains to show that \eqref{eq:CCvsKK} implies \eqref{eq:CCCvsKKK}. Using $\K(a) \le \K(b) + \K(b-a) + O(1)$: \[ \K(\KK(x)) \le \K(\CC(x)) + \K(\KK(x) - \CC(x)) + O(1)\,. \] The first term on the right is bounded by $2\C(\CC(x))+O(1)$. For the second, note that $\K(d) \le O(\log d)$ for any number $d$, hence \begin{equation}\label{eq:relateKKKtoCCCprecise} \KKK(x) \le 2\CCC(x) + O(\log \KKK(x)) \,, \end{equation} i.e. \eqref{eq:CCCvsKKK}. \end{proof} \begin{remark}\label{rem:CCCvsKKK} The proof implies that $\K(x) = \C(x) + O(\CC(x))$ and $\KK(x) = \CC(x) + O(\CCC(x))$. Alexander Shen raised the question whether $\KKK(x) = \CCC(x) + O(\CCCC(x))$. This does not hold. The proof is cumbersome and uses a topological argument from~\cite{ShenTopological}, see appendix~\ref{sec:CCCvsKKK}.\footnote{ \label{foot:DoubleComplexities} For later use in the appendix, note that the proof above also implies \[ \CC(x), \; \mathit{CK}\,(x), \; \mathit{KC}\,(x), \; \KK(x) \] are all equal within error $O(\CCC(x))$ and error $O(\KKK(x))$. (Indeed, to relate $\KK(x)$ to $\mathit{KC}\,(x)$, apply $\K(\cdot)$ to \eqref{eq:second}.) Moreover, for all $U,V,W,X,Y,Z \in \{C,K\}$ we have that $\mathit{UVW}\,(x) \le O\left(\mathit{XYZ}\,(x)\right)$. 
Indeed, by applying $\C(a) = \C(b) + O(\log (a-b))$ on the equalities above, we obtain that $\mathit{CYZ}\,(x) = \mathit{CCC}\,(x) + O(\log \CCC(x))$. In the same way one shows that $\mathit{KYZ}\,(x) = \mathit{KKK}\,(x) + O(\log \KKK(x))$. The result now follows from \eqref{eq:relateKKKtoCCCprecise}. } \end{remark} \section{Contrasting maximal plain and prefix complexity} \label{sec:contrastingCandK} To get used to the main proof technique for the remainder of this paper, we start by showing the following variant of Solovay's theorem. \begin{theorem}[Solovay~\cite{Solovay}, Bauwens and Shen~\cite{BauwensCompcomp}]\label{th:SolovayII} There exist infinitely many $x$ such that $|x| - \C(x) \le O(1)$ and $\K(|x|) + |x| - \K(x) \ge \log^{(2)} |x|-O(1)$. \end{theorem} The main technique is to combine the two results from Section~\ref{sec:prerequisites} with a third result: Peter G\'acs' quantification of the incomputability of Kolmogorov complexity~\cite{complexityOfComplexity}. He showed that for all lengths there are $x$ such that $\K(\K(x)|x)$ is close to $\log |x|$ (and similarly for plain complexity); if complexity were computable, then this quantity would be bounded by $O(1)$. The following tight variant from~\cite{BauwensCompcomp} will be used: \begin{theorem}\label{th:GacsTight} For some $c$ and all $l$ there exists an $n$ such that $\log n = 2^l$, $\K(n) \ge (\log n)/2$ and $\K(\K(n)|n) \ge l - c$. \end{theorem} \begin{lemma}\label{lem:GacsTight} If $n$ satisfies the conditions of Theorem~\ref{th:GacsTight}, then \[ \log^{(2)} n = \log \K(n) + O(1) = \K(\K(n)|n) + O(1) \,. \] \end{lemma} \begin{proof} Indeed, dropping additive $O(1)$ terms, the left equality follows from \[ \log^{(2)} n \le \log ((\log n)/2) \le \log \K(n) \le \log (2\log n) \le \log^{(2)} n \,. \] It remains to show that $\K(\K(n)|n) \le \log^{(2)} n$. Indeed, $\K(\K(n)|n) \le \K(\K(n) | \log^{(2)} n)$, 
and using $\log^{(2)} n = \log \K(n)$ this follows from $\K(i|\log i) \le \log i$.\footnote{ \label{foot:Gacs} For the proof in the appendix note that this argument implies $\K(\K(n)|\log^{(2)} n) = \log^{(2)} n$. By Lemma~\ref{lem:relateDeficiencies} this implies $\C(\K(n)) = \log^{(2)} n$. } \end{proof} \bigskip We informally explain why some strings have maximal plain complexity but non-maximal prefix complexity. There exist plain machines $U$ for which a string $w$ exists such that $U(wx) = x$ for all $x$. If $x$ has $O(1)$-maximal plain complexity, then $wx$ is an $O(1)$-shortest program for $x$. In a similar way, there exists a prefix machine $V$ such that for some $w$ we have $V(wx|\,|x|) = x$ for all $x$; indeed, $V$ just copies the input from the program tape and uses the condition $|x|$ to know when to stop this operation. If the length of $x$ is not available in the condition, no such trivial program need exist. To decide when to halt the copying procedure, the length of $x$ must somehow be represented in the program in self-delimited form. If the length of the program is minimal (within an $O(1)$ constant), this encoding of the length must also be minimal. Mathematically, this corresponds to the following observations, where $n = |x|$ and where here and below we omit $O(1)$ terms: $\K(x) = \K(n,x)$; and by symmetry of information \[ \K(n,x) = \K(n) + \K(x|n,\K(n)) \,. \] Thus, any shortest program for $x$ can be reorganized into a concatenation of two self-delimiting programs: the first computes $n$ and the second uses $n$ and the length of the first program to compute $x$. The prefix deficiency is $\K(n) + n - \K(x) = n - \K(x|n,\K(n))$ and this is different from the plain deficiency, which is close to $n - \K(x|n)$ by Lemma~\ref{lem:relateDeficiencies}. This explains why small prefix deficiency implies small plain deficiency, but not vice versa. 
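The easy direction of this comparison can be written out explicitly (omitting $O(1)$ terms, with $n = |x|$): since additional information in the condition can only decrease complexity,
\[
n - \K(x|n) \le n - \K(x|n,\K(n)) = \K(n) + n - \K(x)\,,
\]
so by Lemma~\ref{lem:relateDeficiencies} a small prefix deficiency forces a small plain deficiency, while the converse would require that $\K(n)$ carries no information about $x$ given $n$.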
In particular the deficiencies can only be different if $\K(\K(n)|n)$ is large, and this might indeed happen because of Theorem~\ref{th:GacsTight}. For appropriate $n$ the discussion explains how we construct $x$: it should contain $\K(n)$ and then be filled up further with bits independent of $n$ and $\K(n)$ until the plain complexity is $n$. This is the approach in~\cite{BauwensCompcomp}; here we take advantage of the fact that the program with largest computation time of length at most $n$ can also compute $\K(n)$ from $n$. The proof below is even shorter than that of~\cite[Corollary 6]{BauwensCompcomp}. \begin{proof} As discussed above, we choose $n$, the length of $x$, such that \begin{equation}\label{eq:compOfComp} \K(\K(n)|n) = \log^{(2)} n +O(1)\,. \end{equation} By Theorem~\ref{th:GacsTight} and Lemma~\ref{lem:GacsTight}, there exist infinitely many such $n$. Let $x = B(n)$ be the program of length at most~$n$ with maximal running time on a plain machine. We drop $O(1)$ terms. Note that $\C(B(n)) = n = |B(n)|$. It remains to show $\K(B(n)) \le n + \K(n) - \log^{(2)} n$ and this follows from \[ \K(B(n)|n,\K(n)) \le n - \log^{(2)} n \] (see above or note that $\K(B(n)) = \K(n,B(n)) = \K(n) + \K(B(n)|n,\K(n))$). From $n$ and $B(n)$ we can compute $\K(n)$, thus $n = \C(B(n)) = \K(B(n)|n)$ also equals \[ \K(\K(n),B(n)|n) = \K(\K(n)|n) + \K(B(n)|\K(n), \K(\K(n)|n), n)\,. \] Applying \eqref{eq:compOfComp} twice implies $n = \log^{(2)} n + \K(B(n) | \K(n),n)$. \end{proof} \begin{remark} As a corollary it follows that $\K(x) = \C(x) + \CC(x) + \CCC(x) + O(\CCCC(x))$ is false. To see that it contradicts Theorem~\ref{th:SolovayII}, note that $\CCCC(x) \le O(\log^{(3)} n)$. Let $x$ satisfy the conditions of the theorem and choose $y$ of length~$n$ with maximal plain and prefix complexity. Now $\K(x) - \K(y) \ge \log^{(2)} n - O(\log^{(3)} n)$. 
For similar reasons the following inequality is not an equality \[ \K(x) \le \K(\C(x)) + \C(x)\,, \] see also Remark \ref{remark:openQuestion} below. \end{remark} \begin{remark} Miller generalized Solovay's theorem~\cite{MillerContrasting}. The proof above also implies this generalization. \begin{theorem*} If a co-enumerable set of strings (i.e., a set whose complement can be algorithmically enumerated) contains a string of each length, then it also contains infinitely many strings $x$ such that $\K(|x|) + |x| - \K(x) \ge \log^{(2)} |x| - O(1)$. \end{theorem*} This theorem also implies that the set of strings with maximal prefix complexity is not co-enumerable. \end{remark} \begin{proof} Suppose $n$ satisfies the conditions of Theorem~\ref{th:GacsTight}. Let $x$ be the lexicographically first string of length $n$ in the set. We show that $x$ can be computed from $B(n+c)$ for some constant $c$, and this suffices because we know from the proof above that $\K(B(n+c)) \le n + \K(n) - \log^{(2)} n + O(c)$. Consider a list of all strings of length $n$ and remove the strings outside the set using an enumeration of its complement. The moment at which the last string is removed can be computed by a program of length $n + O(1)$ on a plain machine (namely, from the total number of removed strings, padded with leading zeros to an $n$-bit number). Thus this moment must be at most $BB(n+c)$ for a large enough constant $c$. \end{proof} \begin{remark} The proof above can be used to contrast {\em computational depth} with plain and prefix complexity. In~\cite[Tentative\footnote{ Although it was called a ``tentative'' definition, this version is simpler than the others and is more often used in literature. } definition 1]{Bennett} the computational depth of a string $x$ with precision $c$ is given by the minimal computation time of a plain program for $x$ of length at most $\C(x) + c$: \[ \depth_{C,c}(x) = \min \left\{t: |p| \le \C(x) + c \text{ and } U(p) = x \text{ in $t$ steps} \right\}\,. 
\] In a similar way, computational depth $\depth_{K,c}(x)$ with prefix machines can be defined.\footnote{ We assume in all these definitions that the machine $U$ is universal in the sense that for each other machine $V$ there exists a $w$ such that $U\left( wp \right) = V(p)$ each time $V(p)$ is defined, and that simulating $V$ by $U$ in this way increases the computation time by at most a computable function. } With this assumption it follows easily that there exists a computable $f$ such that $\depth_{K,c+2\log |x|}(x) \le f(\depth_{C,c}(x))$ and that $\depth_{C,c+2\log |x|}(x) \le f(\depth_{K,c}(x))$ for $x$ of large length. The subsequent proposition shows that with higher precision, the equivalence is not possible. Let $BB(n)$ be the maximal computation time of a program of length at most $n$ on a plain machine (i.e. the computation time of $B(n)$). \end{remark} \begin{proposition*}\label{prop:compareDepth} There exist a $c$ and infinitely many $x$ such that $\depth_{C,c}(x)$ is bounded by a computable function of $x$ (and in fact bounded by a constant for an appropriate universal machine) and $\depth_{K,\log^{(2)} |x| - c}(x)$ exceeds $BB(|x|-c)$. \end{proposition*} \begin{proof} Consider the proof of Theorem~\ref{th:SolovayII}. Rather than choosing $x$ to be $B(n)$, we fix some appropriate $c$ (see below), and choose $x$ to be the lexicographically first $n$-bit string such that $\C(x) \ge n-2$ and no self-delimiting program of length $n + \K(n)-c$ outputs $x$ in at most $BB(n)$ steps. Such an $x$ exists because for large $d$ there are at most $O(2^{n-d})$ strings of length $n$ with complexity $n + \K(n)-d$ (see~\cite[Theorem 3.7.6 p. 129]{Downey}; this also follows from the coding theorem). By construction $\C(x) \ge n - O(1)$; thus a trivial program for $x$ on a plain machine is shortest within $O(1)$. Hence, the depth of $x$ is small on a plain machine. 
Because $x$ can be computed from $B(n)$, the proof above guarantees that for infinitely many $n$ we have $\K(x) \le \K(B(n)) + O(1) \le n + \K(n) - \log^{(2)} n + O(1)$. Fix such an $n$. To have $\depth_{K,\log^{(2)} n - e}(x) < BB(n)$, we need a program for $x$ of length $n + \K(n) - \log^{(2)} n + O(1) + (\log^{(2)} n - e) = n + \K(n) + O(1) - e$ that computes $x$ in time less than $BB(n)$. For large $e$ this contradicts the choice of $x$, and hence the depth is at least $BB(n-O(1))$. \end{proof} \begin{remark}\label{remark:openQuestion} There exist infinitely many $x$ such that $\K(\K(x)|x,\C(x)) \ge \log^{(2)} |x| - O(1)$. Indeed, let $n$ be as in Theorem~\ref{th:GacsTight}. Let $x$ of length $n$ have maximal prefix (and hence plain) complexity such that $\K(\K(n)|x,n) \ge \K(\K(n)|n) - O(1)$. This implies \[ \K(\K(x)|x,\C(x)) = \K(n + \K(n)|x,n) = \K(\K(n)|x,n) \ge \K(\K(n)|n) \ge \log^{(2)} n \] up to $O(1)$ terms. On the other hand $\K(\C(x)|x,\K(x))$ must be very small and it is an {\em open question} whether it is bounded by a constant. In particular this would imply that the inequality \[ \K(x) \le \K(\C(x)) + \K\left(x|\C(x),\K(\C(x))\right) \] is an equality, which is also an {\em open question}. \end{remark} \section{Infinitely often $C$ and $K$ trivial sequences} \label{sec:infOftenTrivial} In the previous section we argued why a shortest self-delimiting program for a string can contain more information than a shortest plain program. This suggests that the classes of infinitely often $C$ and $K$ trivial sequences might be different. The following theorem illustrates this. \begin{theorem}\label{th:trivialCvsK} There exists a sequence $\omega$ for which $\K(\omega_1\dots \omega_N) - \K(N) \le O(1)$ for infinitely many $N$, and for which $\C(\omega_1\dots \omega_N) - \C(N)$ tends to infinity. 
\end{theorem} \begin{figure} \begin{tikzpicture} \draw[thick,gray] node[anchor=east] {\dots} (0,0) -- (7,0) node[anchor=west] {\dots}; \draw[very thick,gray] (2,0) -- node[anchor=north,black] {$2^{n-1}$} ++(0,-0.1) ; \draw[very thick,gray] (4,0) -- node[anchor=north,black] {$2^{n}$} ++(0,-0.1) ; \draw[very thick,gray] (6,0) -- node[anchor=north,black] {$2^{n+1}$} ++(0,-0.1) ; \draw[ultra thick] (1.1,0) -- node[anchor=south] {$1w_{n-1}$} (1.95,0); \draw[ultra thick] (3.1,0) -- node[anchor=south] {$1w_{n}$} (3.95,0); \draw[ultra thick] (5.1,0) -- node[anchor=south] {$1w_{n+1}$} (5.95,0); \end{tikzpicture} \caption{Construction of $\omega$ in the proof of Theorem~\ref{th:trivialCvsK}.} \label{fig:trivialCvsK} \end{figure} \begin{proof} Recall that $B(n)$ is a program of length at most $n$ with maximal running time on a plain machine. $\omega$ consists of zeros, except at small neighborhoods before indexes $2^n$ for all large $n$, and in these neighborhoods strings $w_n = B(n + \log^{(2)} n)$ are placed, see Figure~\ref{fig:trivialCvsK}; more precisely $\omega_{2^n - |w_n|} \dots \omega_{2^n} = 1w_n$ (the prepended one in $1w_n$ allows us to identify the beginning of $w_n$). We show that $\C(\omega_1\dots \omega_N) - \C(N) \ge \log^{(3)} N - O(1)$ for all $N$, which obviously tends to infinity. Fix any $N$ and let $n$ be such that $2^n \le N < 2^{n+1}$. The initial segment $\omega_1\dots \omega_N$ computes $w_n$, thus $\C(\omega_1\dots \omega_N) \ge \C(w_n) \ge n + \log^{(2)} n$ (here and below we omit terms $O(1)$). On the other hand we have $\C(N) \le \log N = n$, hence \[ \C(\omega_1\dots \omega_N) - \C(N) \ge (n + \log^{(2)} n) - n = \log^{(2)} n = \log^{(3)} N\,. \] \smallskip It remains to construct $c$ and infinitely many $N$ such that $\K(\omega_1\dots \omega_N)\le \K(N)+c$. 
The idea is to choose for infinitely many $n$ some $N$ such that $2^n \le N < 2^{n+1}-|w_{n+1}|$ and such that some shortest program for $N$ can compute $w_n$ with $O(1)$ bits of additional information; thus it can also compute $w_1, w_2, \dots, w_{n-1}$ and $\omega_1\dots\omega_N$ with $O(1)$ bits of information. As one might guess, we choose $n$ such that $\K(\K(n)|n) = \log^{(2)} n$. Let us compute $\K(w_n| n,\K(n))$ in a similar way as before. We drop $O(1)$ terms: \begin{eqnarray*} n + \log^{(2)} n & = & \C(w_n) = \K(w_n|n) = \K(\K(n),w_n|n) \\ &=& \K(\K(n)|n) + \K(w_n|\K(n),\K(\K(n)|n),n) \\ &=& \log^{(2)} n + \K(w_n|\K(n),n) \,. \end{eqnarray*} Thus $\K(w_n| n,\K(n)) = n$. Let $N$ in binary be the first $n-2$ bits of a program witnessing this equation (i.e. a program of length at most $n+O(1)$ computing $w_n$ from $n$ and $\K(n)$) prepended with the string ``$10$''. Prepending ``$10$'' guarantees that $2^n \le N < 2^{n+1} - |w_{n+1}|$ for large $n$. By construction, if $n$ and $\K(n)$ are given, $N$ can compute $w_n$ with $O(1)$ bits of information. Thus it also computes $w_1, \dots, w_{n-1}$ and $\omega_1\dots\omega_N$. On the other hand, every shortest program for $N$ can also compute $n$ and $\K(n)$ with $O(1)$ bits of information. Indeed, \[ \K(N) = \K(N,n) = \K(n) + \K(N|n,\K(n))\,; \] thus on a universal prefix machine, there exists an $O(1)$-shortest program for $N$ that is the concatenation of two self-delimiting programs, the first of which has length $\K(n)$. Together: \[ \K(N) = \K(n,\K(n),N) = \K(w_1,\dots,w_n,n,\K(n),N) \ge \K(\omega_1\dots \omega_N)\,. \qedhere \] \end{proof} \section{Contrasting plain and prefix complexity in $3$-random sequences} \label{sec:contrastingIn3Random} \begin{theorem}\label{th:unboundedK_boundedC} For every $3$-random sequence $\omega$ there are a $c$ and infinitely many $j$ such that $j - \C(\omega_1\dots \omega_j) \le c$ and $\K(j) + j - \K(\omega_1\dots \omega_j) \ge \log^{(2)} j -c$. 
\end{theorem} We conjecture that the result holds for all $2$-random sequences. It is possible to present the proof in game form, but both the game and the strategy are quite complicated. We give a proof that has the same core structure as the other proofs above. In the proof we use two lemmas. The first roughly states that the randomness deficiency of an initial segment is bounded by the deficiency of the full string. \begin{lemma}\label{lem:deficiencyInitialSegment} Let $j = |x|$ and $n = |xy|$. Then \[ j - \K(x|j) \le n - \K(xy|j,n) + O(1) \,. \] \end{lemma} \begin{proof} We omit $O(1)$ terms. Observe that $\K(xy|j,n) = \K(x,y|j,n)$, and this is bounded by \[ \le \K(x|j,n) + \K(y|x,j,n) \le \K(x|j) + n-j \,, \] because $\K(y | \,|y|) \le |y|$ for all strings $y$ and $|y|=n-j$ is computable from the condition. The inequality of the lemma follows after rearranging. \end{proof} Let $a$ and $b$ be two strings of the same length. Let $XOR(a,b)$ denote the bitwise XOR of these strings. The following lemma states that if $a$ is incompressible, and $b$ is incompressible given $a$, then $b$ is also incompressible given $XOR(a,b)$. In fact, we will use a generalization which states that if an extension $bw$ is incompressible given $a$, then this extension is also incompressible given $XOR(a,b)$. \begin{lemma}\label{lem:XORencryption} Let $a$ and $b$ be strings of equal length $\ell$, let $w$ be any string, let $n = |bw|$, and let $i$ be any number. If \[ \K(a|\ell,n,i) \ge \ell -c \text{\;\;\; and \;\;\; } \K(bw|a,n,i) \ge n - c\,, \] then \[ \K(bw|XOR(a,b),n,i) \ge n - O(c)\,. \] \end{lemma} \begin{proof} In the lemma all complexities are conditional on $i$. The proof of the conditional form follows the unconditional one, presented here. We first consider the case where $w$ is the empty string; the proof for non-empty $w$ follows the same structure and will be presented afterwards. 
We need to show that for all $c,\ell,a,b$ such that $|a|=|b|=\ell$, $\K(a|\ell) \ge \ell-c$ and $\K(b|a) \ge \ell -c$ we have \begin{equation}\label{eq:lemXorGoalSimple} \K(b|XOR(a,b)) \ge \ell - O(c) \,. \end{equation} \smallskip Indeed, \[ \K(a,b|\ell) = \K(a|\ell) + \K(b|a,\ell,\K(a|\ell)) + O(1)\,. \] By assumption $\K(a|\ell) \ge \ell-c$, thus $\K(a|\ell) = \ell + O(c)$ and the last term simplifies to $\K(b|a,\ell) + O(c)$, and this equals $\ell + O(c)$. Hence $\K(a,b|\ell) = 2\ell + O(c)$. Let $xor = XOR(a,b)$. Because $a = XOR(b,xor)$ we have, up to additive terms $O(c)$, \[ 2\ell = \K(a,b|\ell) \le \K(xor, b|\ell) \le \K(xor|\ell) + \K(b|xor,\ell) \le \ell + \K(b|xor)\,, \] and this implies \eqref{eq:lemXorGoalSimple}. \smallskip We modify the equations above for the case where $w$ is not empty. Let $n = |bw|$ and recall that $|a|=\ell$. We start with \[ \K(a,b,w|\ell,n) = \K(a|\ell,n) + \K(b,w|a,\K(a|\ell,n),n) \ge \ell + n - O(c)\,. \] Note that because $\ell=|b|$ we have $\K(bw,\dots|\ell,\dots) = \K(b,w,\dots|\ell,\dots)$. The left-hand side also equals \[ \K(xor, b,w|\ell,n) \le \K(xor|\ell,n) + \K(b,w|xor,\ell,n) \le \ell + \K(b,w|xor,n)\,, \] hence $\K(b,w|xor,n) \ge n - O(c)$. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:unboundedK_boundedC}] Let $\omega$ be $3$-random. By Lemma~\ref{lem:relateDeficiencies}, it suffices to construct infinitely many~$j$ such that \begin{equation}\label{eq:condC} \K(\omega_1\dots \omega_j|j) \ge j - O(1) \end{equation} and $\K(\omega_1\dots \omega_j|j,\K(j)) \le j - \log^{(2)} j + O(1)$. (Indeed, the last inequality implies $\K(\dots) \le j + \K(j) - \log^{(2)} j + O(1)$ for the same reasons as in the proof of Theorem~\ref{th:SolovayII}.) The second inequality follows from \begin{equation}\label{eq:condK} \K(\omega_1\dots \omega_{\log^{(2)} j}|j,\K(j)) \le O(1)\,. \end{equation} \medskip \textbf{Sketch of the proof.} As usual, we construct $j$ such that $\K(\K(j)|j) \ge \log^{(2)} j - O(1)$. 
For technical reasons, we start with an index $i$ that will have almost the same information as $j$ and that satisfies $\K(\K(i)|i) \ge \log^{(2)} i - O(1)$. We also show that $i$ can be chosen such that $i$ and $\K(i)$ are independent of $\omega_1\dots\omega_n$ for an initial segment with maximal plain complexity (for this we need that $\omega$ is $3$-random). The main idea is to use $\K(i)$ to encrypt the first $\log \K(i)$ bits of~$\omega$ (using the bitwise XOR operator). Let $q$ be this encryption. We show (using Lemma~\ref{lem:XORencryption}) that \[ \K(\omega_1\dots \omega_n|i,q,n) \ge n-O(1)\,. \] But with our encryption key $\K(i)$, we can decrypt the initial segment of $\omega$, thus \[ \K(\omega_1\dots \omega_{\log \K(i)}|i,q,\K(i)) \le O(1)\,. \] Finally, we define $j \le n$ by applying a bijective computable function to $i$ and~$q$. Thus the pair $(q,i)$ contains the same information as $j$, i.e. $\K(\omega_1\dots \omega_n|i,q,n) = \K(\omega_1\dots \omega_n|j,n) + O(1)$. Hence $\K(\omega_1\dots \omega_j|j,n) \ge j - O(1)$ by Lemma~\ref{lem:deficiencyInitialSegment}. On the other hand, the construction implies that $\log^{(2)} j = \log \K(i) + O(1)$ and that $\K(i)$ and $\K(j)$ carry the same information. Hence \begin{equation*} \K(\omega_1\dots \omega_{\log^{(2)} j}|j,\K(j)) = \K(\omega_1\dots \omega_{\log \K(i)}|i,q,\K(i)) + O(1)\le O(1)\,, \end{equation*} and this finishes the proof. \medskip \textbf{Requirements for $n,i$ and $q$.} We choose infinitely many triples $(n,i,q)$ and start by formulating five requirements from which equations \eqref{eq:condC} and \eqref{eq:condK} follow. Let $\langle \cdot,\cdot \rangle$ be a computable bijective pairing function from numbers and strings to numbers. For later use we assume that $\log \langle k,x \rangle = \log k + O(|x|)$ for all $k$ and $x$.
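Such a pairing function is easy to realize. The following Python sketch is purely illustrative and not part of the proof (the function names are ours): it gives a computable pairing with a computable inverse satisfying the length assumption $\log \langle k,x \rangle = \log k + O(|x|)$ for $k \ge 1$. We only exhibit an injective pairing; upgrading it to a bijection onto the naturals, as assumed above, needs a little extra bookkeeping that does not affect the length bound.

```python
def pair(k: int, x: str) -> int:
    """Encode a number k >= 1 and a bit string x as a single number.

    x is sent to a self-delimiting code (each bit doubled, terminated
    by '01'), so the total bit length is
    1 + 2*len(x) + 2 + len(bin(k)) = log k + O(|x|).
    """
    self_delim = ''.join(2 * bit for bit in x) + '01'
    return int('1' + self_delim + format(k, 'b'), 2)

def unpair(n: int):
    """Computable inverse of pair."""
    s = format(n, 'b')[1:]          # drop the leading marker bit
    bits, i = [], 0
    while s[i:i + 2] != '01':       # doubled bits are '00'/'11', never '01'
        bits.append(s[i])
        i += 2
    return int(s[i + 2:], 2), ''.join(bits)
```

The doubled-bit code is one standard choice; any self-delimiting encoding of $x$ placed before the binary expansion of $k$ yields the same length bound.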
Equation \eqref{eq:condC}, with $j = \langle i,q \rangle$, follows from \begin{itemize} \item[$(a)$] $\K(\omega_1\dots \omega_n|i,q,n) \ge n - O(1)$, \item[$(b)$] $\langle i,q\rangle \le n$ for large $n$, \end{itemize} and Lemma~\ref{lem:deficiencyInitialSegment}. Equation~\eqref{eq:condK} follows from: \begin{itemize} \item[$(A)$] $\K(\omega_1\dots \omega_{\log \K(i)} | \K(i),q) \le O(1)$, \item[$(B)$] $\log \K(i) = \log^{(2)} \langle i,q \rangle + O(1)$, \item[$(C)$] $\K(i,q) = \K(i) + \log \K(i) + O(1)$. \end{itemize} Indeed, for all $z$, $(C)$ implies $\K(z|i,q,\K(i)) = \K(z|i,q,\K(j)) + O(1)$. \medskip \textbf{Construction of $n$ and $i$.} We use the characterization of $2$-random sequences in terms of plain complexity: \begin{theorem*}[Joseph Miller~\cite{Miller2randC}, Nies--Stephan--Terwijn~\cite{Nies2rand}] A sequence $\omega$ is Martin-L\"of random relative to the Halting problem if and only if there exist a $c$ and infinitely many $n$ such that $\C(\omega_1\dots\omega_n) \ge n - c$. \end{theorem*} The proof of this theorem relativizes to the halting problem $\mathbf{0'}$, i.e., a sequence is $3$-random if and only if there are a $c$ and infinitely many $n$ such that $\CH(\omega_1\dots\omega_n) \ge n - c$. Fix such an $n$. By Lemma~\ref{lem:relateDeficiencies}: \begin{equation}\label{eq:select_n} \KH(\omega_1\dots\omega_n| n)\ge n - O(1)\,. \end{equation} From now on we only use complexities that are conditional to~$n$. For notational simplicity we drop $n$ from the condition, thus $\K(a) \equiv \K(a|n)$, $\K(a|b) \equiv \K(a|b,n)$, etc. Let $i$ be the largest number such that \begin{itemize} \item[$(i)$] $\K(\K(i)|i) \ge \log^{(2)} i - c$ and $\K(i) \ge (\log i)/2$, where $c$ is the constant from Theorem~\ref{th:GacsTight}. \item[$(ii)$] $\langle i,x \rangle \le n$ for all $x$ of length at most $1 + \log^{(2)} i$. \end{itemize} Such an $i$ exists because the conditional version of Theorem~\ref{th:GacsTight} also holds.
In fact, for increasing choices of $n$, we find infinitely many such $i$. By Lemma~\ref{lem:GacsTight}, the first condition implies \begin{equation}\label{eq:logKiVsLoggi} \log \K(i) = \log^{(2)} i + O(1)\,. \end{equation} Note that $i$ and $\K(i)$ can be computed from $\mathbf{0'}$ and $n$, hence \eqref{eq:select_n} implies \begin{equation}\label{eq:select_i} \K(\omega_1\dots\omega_n| i,\K(i)) \ge n - O(1)\,. \end{equation} \medskip \textbf{Construction of $q$.} $q$ is the bitwise XOR of the binary representation of $\K(i)$ and the initial segment of $\omega$ of the same length: \[ q = XOR\left(\omega_1\dots \omega_{\log \K(i)}, \langle \K(i) \rangle \right) \,. \] Because $XOR(a,XOR(a,b)) = b$ this implies $(A)$. Recall that all complexities implicitly have $n$ in the condition and that $\K(\K(i)|i) \ge \log^{(2)} i - O(1)$. Together with \eqref{eq:select_i}, this can be applied to Lemma~\ref{lem:XORencryption} (with $\ell = \log \K(i) = \log^{(2)} i + O(1)$, $bw = \omega_1\dots \omega_n$ and $a = \langle \K(i) \rangle$) and we conclude that $\K(\omega_1\dots \omega_n|i,q,n) \ge n - O(1)$, i.e. condition $(a)$. For large $n$, we have large $i$, and hence $|q| = \log \K(i) \le \log (2\log i) = 1 + \log^{(2)} i$. By choice of $i$ (the second condition) this implies $(b)$. We assumed that the pairing function satisfies $\log \langle i,q \rangle = \log i + O(|q|) = \log i + O(\log^{(2)} i)$. Thus $\log^{(2)} \langle i,q \rangle = \log^{(2)} i + O(1)$. By~\eqref{eq:logKiVsLoggi} this implies $(B)$. It remains to show $(C)$. Note that \[ \K(i,q) = \K(i) + \K\left(q | i, \K(i)\right) + O(1)\,. \] The last term equals $\K(\omega_1\dots \omega_{\log \K(i)}|i, \K(i))$. By \eqref{eq:select_i} and Lemma~\ref{lem:deficiencyInitialSegment} this is at least $\log \K(i) - O(1)$, and in fact it is equal to this, because $\K(z|\,|z|) \le |z|$ for all $z$.
\end{proof} \section{Contrasting expectation and probabilistically bounded deficiency} \label{sec:contrastingDeficiencies} Recall from the introduction that there exist two different notions of randomness deficiency for a sequence $\omega$. We start by showing that the two notions are related. \begin{proposition}\label{prop:characterize_d_P} \[ d_P(\omega) = \sup \{k: d_E(\omega|k) \ge k\} + O(1)\;\;\; \footnote{ Conditional probability bounded deficiency is defined in the natural way: it is the logarithm of a multiplicatively maximal function $f(\cdot|k)$ that is lower semicomputable uniformly in $k$, such that for each $k$ the function is a probability bounded test. } \] \end{proposition} This characterization is closely related to a characterization of plain complexity in terms of prefix complexity (see~\cite[Lemma 3.1.1 p. 203]{LiVitanyi}): \[ \C(x) = \min \left\{k: \K(x|k) \le k \right\} + O(1)\,. \] Many results relating and contrasting prefix and plain complexity can be translated into results about expectation and probability bounded deficiency. (In these results $d_E(\cdot)$ corresponds to $\K(\cdot)$ and $d_P(\cdot)$ to $\C(\cdot)$.) \begin{proof} For the $\ge$-direction we need to show that the exponent of the supremum defines a lower-semicomputable probability bounded test. $d_E$ is lower semicomputable, thus also the supremum is lower semicomputable, and it remains to show that the measure where it exceeds $\ell$ is bounded by $O(2^{-\ell})$. By definition we have $\int 2^{d_E(\omega|k)} \text{d}\omega \le 1$ for all $k$, thus the measure of $\omega$ such that $d_E(\omega|k) \ge k$ is at most $2^{-k}$. If the supremum exceeds $\ell$ for some $\omega$, then $d_E(\omega|k) \ge k$ for some $k \ge \ell$. The total measure for which this can happen is at most $2^{-\ell} + 2^{-\ell-1} + \dots \le O(2^{-\ell})$.
For the $\le$-direction note that every probability bounded test $f$ defines a family of expectation bounded tests $g(\cdot|k)$ such that $g(\omega|k) = 2^k$ if $f(\omega) \ge 2^k$ and $g(\omega|k) = 0$ otherwise. Indeed, this condition implies $\int g(\omega|k) \text{d}\omega \le 2^k\cdot 2^{-k} = 1$. Obviously, if $f$ is lower semicomputable, the tests $g(\cdot|k)$ are lower semicomputable uniformly in $k$. If $f$ is the universal test corresponding to $d_P$, then $d_P(\omega) \ge k$ implies $f(\omega) \ge 2^k$, which implies $g(\omega|k) \ge 2^k$ thus $d_E(\omega|k) \ge k - O(1)$. \end{proof} The question was raised in~\cite[Question 1]{GacsTestsInClass} whether the two deficiencies are related by a monotone function, or \textit{do there exist two families of sequences $\omega^{\ell}$ and $\omega'^{\ell}$ such that \[ d_A(\omega^{\ell}) - d_A(\omega'^{\ell}) \rightarrow \infty \] for $\ell \rightarrow \infty$ and \[ d_P(\omega^{\ell}) - d_P(\omega'^{\ell}) \rightarrow -\infty\,. \] } \!We show this is indeed the case. \begin{theorem}\label{th:averageVsProbDeficiency} There exist families of sequences $\omega^{\ell}$ and $\omega'^\ell$ such that for infinitely many~$\ell$ \[ |d_P(\omega^{\ell}) - d_P(\omega'^\ell)| \le O(1) \] and \[ d_E(\omega^\ell) - d_E(\omega'^\ell) \ge \ell - O(1)\,. \] \end{theorem} The positive answer to the question above follows by prepending $\ell/2$ zeros to $\omega'^\ell$ for all $\ell$. This decreases the complexities in the definition of $d_P(\omega'^\ell)$ and $d_E(\omega'^\ell)$ by $\ell/2 + O(\log \ell)$ and hence increases these deficiencies by the same amount; this suffices to answer the question. Before presenting the proof, we show two lemmas that play the same role as symmetry of information and Levin's result relating plain and prefix complexity (i.e. Lemma~\ref{lem:relateDeficiencies}).
\begin{lemma}[Symmetry of deficiency]\label{lem:symmetryOfDeficiency} For all $\omega$ and all $x$ that belong to a prefix-free computably enumerable set, we have \[ d_E(x\omega) = |x| - \K(x) + d_E(\omega|x,\K(x)) + O(1)\,, \] here $x\omega$ denotes the concatenation of $x$ and $\omega$. The $O(1)$-term depends on the choice of the computably enumerable set. \end{lemma} The proof uses a characterization of expectation bounded deficiency in terms of prefix Kolmogorov complexity (see for example~\cite[Proposition 2.22]{GacsTestsInClass}): \begin{theorem*} $ d_E(\omega|z) = \sup_n \left\{n - \K(\omega_1\dots\omega_n|z)\right\} + O(1) $ \end{theorem*} \begin{proof}[Proof of Lemma~\ref{lem:symmetryOfDeficiency}.] Let $x$ be a member of the prefix-free computably enumerable set. From $xy$ we can compute $x$ by enumerating the prefix-free set until we find an initial segment of $xy$; this segment can only be $x$. Symmetry of information implies \[ \K(xy) = \K(x,y) + O(1) = \K(x) + \K(y|x,\K(x)) + O(1)\,, \] i.e. \[ |xy| - \K(xy) = |x|-\K(x) + |y| - \K(y|x,\K(x))\,. \] If we take the supremum on both sides over all prefixes $y$ of $\omega$, we \textit{almost} obtain the equation of the lemma; the problem is that in the definition of $d_E(x\omega)$ we also need to consider prefixes $z$ of $x$. It remains to verify that \[ |z| - \K(z) \le |x| - \K(x) + O(1) \] for all prefixes $z$ of $x$. In general this is false, but for $x$ in a prefix-free enumerable set it holds. For any $z$ and $x$, let $P(x|z) = 2^{-|x|+|z|}$ if $x$ is an extension of $z$ that belongs to the prefix-free set, otherwise let $P(x|z) = 0$. Note that $\sum_x P(x|z) \le 1$ and $P(x|z)$ is lower-semicomputable, hence the coding theorem implies $\K(x|z) \le -\log P(x|z) + O(1) \le |x|-|z| + O(1)$. Symmetry of information implies \[ \K(x) \le \K(x,z) \le \K(z) + \K(x|z) + O(1) \le \K(z) + |x| - |z| + O(1)\,, \] and this implies the equation above.
\end{proof} The analogue of Lemma~\ref{lem:relateDeficiencies} for deficiencies of sequences is \begin{lemma}\label{lem:relateInfDeficiency} For all $j$ and $\omega$ \[ \left|j - d_E(\omega|j) \right| = \Theta \left( \left|j - d_P(\omega)\right| \right) \,. \] \end{lemma} \begin{proof} For fixed random $\omega$, the map $t \mapsto d_E(\omega|t)$ maps points at distance $d$ to points at distance $O(\log d)$. Hence, the map has a unique fixed point $t$ within precision $O(1)$, i.e. $d_E(\omega|t) = t + O(1)$ for some $t$. This implies that $t$ is $O(1)$-close to the minimal $s$ such that $d_E(\omega|s) \ge s$, i.e. $d_P(\omega)$. Our observation implies that $d_E(\omega|t+d) = t + O(\log d)$, thus for $j = t + d$ we have $j - d_E(\omega|j) = j - d_P(\omega) + O(\log (j-d_P(\omega)))$, and this implies the lemma. \end{proof} \begin{proof}[Proof of Theorem~\ref{th:averageVsProbDeficiency}.] For each $\ell$ we choose a $k$ such that $\log^{(2)} k \le \ell$ and $\K(\K(k)|k) \ge \log^{(2)} k - c$ where $c$ is the constant from Theorem~\ref{th:GacsTight}. By Lemma~\ref{lem:GacsTight} \begin{equation}\label{eq:annoying} \ell = \log^{(2)} k = \log \K(k) + O(1) \,. \end{equation} We choose $\omega$ such that \[d_P(\omega|k,\K(k)) \le O(1)\,.\] Let $0^k1\omega$ be the sequence that starts with $k$ zeros, followed by a one and followed by $\omega$. Let $0^k1\langle\K(k)\rangle\omega$ be $0^k1$ followed by $\K(k)$ in binary, followed by~$\omega$. The theorem follows from the values of the expectation and probability bounded deficiencies of these sequences, given in the table below: \[ \begin{array}{r|l|l} \alpha & d_E(\alpha) & d_P(\alpha) \\ \hline 0^k1\omega & k - \K(k) + O(1) & k + O(1) \\ 0^k1\langle\K(k)\rangle\omega & k - \K(k) + \ell + O(1) & k + O(1) \end{array} \] It remains to prove that the values in the table are correct. \medskip The values of $d_E(\cdot)$ in the table are obtained from Lemma~\ref{lem:symmetryOfDeficiency}.
In the first case, the prefix-free set is the set of strings $0^m1$ for all~$m$, thus \[ d_E(0^k1\omega) = k - \K(k) + d_E(\omega|k,\K(k)) + O(1)\,. \] In the second case, the prefix-free set is the set of all strings $0^m1z$ for all~$m$ and all $z$ of length $\log^{(2)} m$. Recall that $\K(k,\K(k)) = \K(k) + O(1)$, thus \[ d_E(0^k1\langle\K(k)\rangle\omega) = k + \log^{(2)} k - \K(k) + d_E(\omega|k,\K(k)) + O(1)\,. \] \medskip To evaluate $d_P(\cdot)$ we use Lemma~\ref{lem:relateInfDeficiency}. Hence, let us compute $d_E(0^k1\omega|k)$. Again we use Lemma~\ref{lem:symmetryOfDeficiency}: \[ d_E(0^k1\omega|k) = k - \K(0^k1|k) + d_E(\omega|\K(0^k1|k),k) + O(1) = k + d_E(\omega|k) + O(1)\,. \] This implies $d_P(0^k1\omega) = k + O(1)$. For the second case, note that $\K(0^k1\langle\K(k)\rangle|k) = \K(\K(k)|k) + O(1) = \log^{(2)} k + O(1)$ by choice of $k$. With similar reasoning we determine $d_P(0^k1\langle \K(k) \rangle\omega)$: \[ d_E(0^k1\langle\K(k)\rangle\omega|k) = (k + \log^{(2)} k) - \K(\K(k)|k) + d_E(\omega|\K(k), k) + O(1) \,. \] This equals $k + O(1)$ by \eqref{eq:annoying}. \end{proof}
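The encryption and decryption steps in the proof of Theorem~\ref{th:unboundedK_boundedC} above rest on the elementary fact that bitwise XOR is an involution, $XOR(a, XOR(a,b)) = b$. A minimal computational sanity check of this identity (illustrative only; the complexities themselves are of course not computable):

```python
def xor_bits(a: str, b: str) -> str:
    """Bitwise XOR of two bit strings of equal length."""
    assert len(a) == len(b)
    return ''.join('1' if x != y else '0' for x, y in zip(a, b))

# One-time-pad style encryption of an initial segment with key a:
a = '10110'            # plays the role of <K(i)> in the proof
b = '01101'            # initial segment of omega of the same length
q = xor_bits(a, b)     # the "encryption" q
assert xor_bits(a, q) == b   # decryption: XOR(a, XOR(a, b)) = b
```

This is the same one-time-pad mechanism that makes $q$ look incompressible without the key $\K(i)$, while the key decrypts the segment in $O(1)$ additional bits.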
\section{Some results on atomic toposes} In this section we present some results on atomic toposes which are relevant to our characterization theorem in the second section.\\ Let us recall the following standard definition. \begin{definition} Let $\mathcal{E}$ be a topos. An object $A\in \cal E$ is said to be an atom of $\cal E$ if the only subobjects of $A$ (up to isomorphism) are the identity arrow $1_{A}:A\to A$ and the zero arrow $0_{A}:0\to A$, and they are distinct from each other. \end{definition} The following proposition describes the behaviour of associated sheaf functors with respect to atoms. \begin{proposition}\label{prop1} Let $\cal E$ be a topos and $j$ a topology on it with associated sheaf functor $a_{j}:{\cal E}\to \sh_{j}(\cal E)$. If $A$ is an atom of $\cal E$ then $a_{j}(A)$ is an atom of $\sh_{j}(\cal E)$, provided that it is non-zero. \end{proposition} \begin{proofs} Given a monomorphism $m:C\to a_{j}(A)$ in $\sh_{j}({\cal E})$, $m$ is a monomorphism also in $\cal E$ since the inclusion $i:\sh_{j}({\cal E})\hookrightarrow {\cal E}$ preserves monomorphisms (having a left adjoint). Now, denoted by $\eta$ the unit of the adjuction $a_{j}\dashv i$, consider the pullback \[ \xymatrix { C' \ar[r]^{m'} \ar[d] & A \ar[d]^{\eta_{A}} \\ C \ar[r]^{m} & a_{j}(A) } \]\\ in $\cal E$. The arrow $m'$ is a monomorphism in $\cal E$, being the pullback of a monomorphism, so, since $A$ is an atom of $\cal E$ we deduce that $m'$ is either (isomorphic to) the identity arrow on $A$ or the zero arrow $0_{A}$. Now, by applying $a_{j}$ to the pullback above we obtain a pullback in $\sh_{j}({\cal E})$ (as $a_{j}$ preserves pullbacks); but $a_{j}(\eta_{A})\cong 1_{a_{j}(A)}$, so $m\cong a_{j}(m')$ and $m$ is either (isomorphic to) the identity or the zero arrow on $a_{j}(A)$; of course, if $a_{j}(A)\ncong 0_{\sh_{j}({\cal E})}$ these two arrows are distinct from each other. 
\end{proofs} We recall that an atomic topos is an elementary topos $\cal E$ which possesses an atomic geometric morphism ${\cal E}\to \Set$. We refer the reader to section C3.5 in \cite{El2} for a comprehensive treatment of the topic of atomic toposes. Here we limit ourselves to remarking the following facts. \begin{proposition}\label{propn1} Let $\cal E$ be a Grothendieck topos. Then\\ (i) $\cal E$ is atomic if and only if it has a generating set of atoms;\\ (ii) if $\{a_{i} \textrm{ | } i\in I\}$ is a generating set of atoms for $\cal E$ then the atoms of $\cal E$ are exactly the epimorphic images of the atoms in the generating set; in particular, $\cal E$ has only a set of (isomorphism classes of) atoms. \end{proposition} \begin{proofs} (i) Suppose that $\cal E$ is atomic. Then all the subobject lattices in $\cal E$ are atomic Boolean algebras (cfr. p. 685 \cite{El2}) and hence every object of $\cal E$ can be written as a disjoint coproduct of atoms; on the other hand, there can be only a set of atoms (up to isomorphism) in $\cal E$, by the argument at the top of p. 690 \cite{El2}. Conversely, if $\cal E$ has a generating set of atoms then the full subcategory $\cal C$ of $\cal E$ on it satisfies the right Ore condition and ${\cal E}\cong \Sh({\cal C}, J_{at})$, where $J_{at}$ is the atomic topology on $\cal C$ (cfr. the discussion p. 689 \cite{El2}); so it is atomic (by Theorem C3.5.8 \cite{El2}).\\ (ii) This was remarked on p. 690 \cite{El2}. \end{proofs} As a consequence of Propositions \ref{prop1} and \ref{propn1}(i), we may deduce that any subtopos of an atomic Grothendieck topos $\cal E$ is atomic; indeed, the images of the atoms in a generating set of $\cal E$ via the corresponding associated sheaf functor clearly form a generating set for the subtopos. In fact, this property holds more generally at the elementary level (i.e. every subtopos of an atomic topos is atomic), by the following argument.
Let $\cal E$ be an atomic topos; then, $\cal E$ being Boolean, every subtopos $\cal F$ of $\cal E$ is open (by Proposition A4.5.22 \cite{El2}) and hence the inclusion of $\cal F$ into $\cal E$ is an atomic morphism (by Proposition A4.5.1 \cite{El}); this implies that the geometric morphism ${\cal F}\to \Set$ is atomic, being the composite of two atomic morphisms (the inclusion ${\cal F}\hookrightarrow {\cal E}$ and the morphism ${\cal E}\to \Set$); so $\cal F$ is atomic. In terms of sites, if ${\cal E}\cong \Sh({\cal C}, J^{\cal C}_{at})$ (where $\cal C$ satisfies the right Ore condition and $J^{\cal C}_{at}$ is the atomic topology on it) then the subtoposes of it can be described as follows.\\ Let $\cal F$ be a subtopos of $\cal E$; as we have already remarked, $\cal F$ must be open, that is of the form ${\cal E}/U\hookrightarrow {\cal E}$ for a subterminal object $U$ in $\cal E$. Now, by Remark C2.3.21 \cite{El2}, $U$ can be identified with a $J_{at}$-ideal on $\cal C$, that is with a collection of objects ${\cal C}'$ of $\cal C$ with the property that for any arrow $f:a\to b$ in $\cal C$, $a\in {\cal C}'$ if and only if $b\in {\cal C}'$. If we regard ${\cal C}'$ as a full subcategory of $\cal C$ then $\Sh({\cal C}, J^{\cal C}_{at})/U\cong \Sh({\cal C}', J^{{\cal C}'}_{at})$ (where $J^{{\cal C}'}_{at}$ is the atomic topology on ${\cal C}'$). Indeed, we may define an equivalence as follows. Given an object $G\to U$ in $\Sh({\cal C}, J^{\cal C}_{at})/U$, for every $c\in {\cal C}$ which does not belong to ${\cal C}'$ we must have $G(c)=\emptyset$, since we have an arrow $G(c)\to U(c)$ and $U(c)=\emptyset$; so we associate to it the restriction $G|_{{\cal C}'}$, which is a $J^{{\cal C}'}_{at}$-sheaf since $J^{{\cal C}'}_{at}$ clearly coincides with the Grothendieck topology induced by $J^{\cal C}_{at}$ on ${\cal C}'$. It is now clear that this assignment defines an equivalence between our two categories.
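For a small finite category presented by its objects and generating arrows (a representation we fix purely for illustration; it is not part of the text above), the collections ${\cal C}'$ occurring in this correspondence — those with $a\in {\cal C}'$ if and only if $b\in {\cal C}'$ for every arrow $f:a\to b$ — are exactly the unions of connected components of the underlying graph, so the candidate subtoposes can be enumerated mechanically. A hedged sketch:

```python
from itertools import combinations

def closed_subcategories(objects, arrows):
    """All sets C' of objects such that for every arrow a -> b we have
    a in C' iff b in C'; these are exactly the unions of connected
    components of the category, and hence index the subtoposes of
    Sh(C, J_at) in the correspondence described above."""
    parent = {o: o for o in objects}              # union-find over objects
    def find(o):
        while parent[o] != o:
            parent[o] = parent[parent[o]]         # path halving
            o = parent[o]
        return o
    for a, b in arrows:                           # arrows as (dom, cod) pairs
        parent[find(a)] = find(b)
    components = {}
    for o in objects:
        components.setdefault(find(o), set()).add(o)
    comps = [frozenset(c) for c in components.values()]
    # every union of components gives one closed subcategory
    return {frozenset().union(*subset)
            for r in range(len(comps) + 1)
            for subset in combinations(comps, r)}
```

Only the dom/cod graph of the generating arrows matters here, since identities and composites never connect objects that the generators do not already connect.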
So we have proved that the subtoposes of $\Sh({\cal C}, J^{\cal C}_{at})$ are exactly those of the form $\Sh({\cal C}', J^{{\cal C}'}_{at})$ where ${\cal C}'$ is a full subcategory of $\cal C$ with the property that for any arrow $f:a\to b$ in $\cal C$, $a\in {\cal C}'$ if and only if $b\in {\cal C}'$. Also, since the assignment sending a subterminal object in $\cal E$ to the corresponding open subtopos of $\cal E$ is a lattice isomorphism from $\Sub_{\cal E}(1)$ to the lattice of open subtoposes of $\cal E$, two such subtoposes of $\Sh({\cal C}, J^{\cal C}_{at})$ are equivalent if and only if the corresponding categories are equal (as subcategories of $\cal C$).\\ Next, let us consider a general category $\cal C$. We know that, provided that $\cal C$ satisfies the right Ore condition, one can define the atomic topology on $\cal C$ as the topology having as covering sieves exactly the non-empty ones. Such a topology does not exist on a general category $\cal C$ but, by analogy with it, we may define the atomic topology $J^{\cal C}_{at}$ on $\cal C$ as the smallest Grothendieck topology on $\cal C$ such that all the non-empty sieves are covering; of course, this definition specializes to the well-known one in the case where $\cal C$ satisfies the right Ore condition. As stated in the following proposition, the corresponding category of sheaves is an atomic topos. \begin{proposition}\label{propat} Let $\cal C$ be a category and $J^{\cal C}_{at}$ the atomic topology on it. Then $\Sh({\cal C}, J^{\cal C}_{at})$ is an atomic topos. \end{proposition} \begin{proofs} Let ${\cal C}'$ be the full subcategory of $\cal C$ on the objects which are not $J^{\cal C}_{at}$-covered by the empty sieve. Then, by the Comparison Lemma, we have that $\Sh({\cal C}, J^{\cal C}_{at})\cong \Sh({\cal C}', J^{\cal C}_{at}|_{{\cal C}'})$.
We now prove that ${\cal C}'$ satisfies the right Ore condition and $J^{\cal C}_{at}|_{{\cal C}'}=J^{{\cal C}'}_{at}$, that is for every sieve $R$ in ${\cal C}'$, $R\neq \emptyset$ if and only if $R$ is $J^{\cal C}_{at}|_{{\cal C}'}$-covering; from this our claim will clearly follow. In one direction, suppose that $R\neq \emptyset$. Then the sieve $\overline{R}$ generated by $R$ in $\cal C$ is obviously non-empty and, ${\cal C}'$ being a full subcategory of $\cal C$, we have that $\overline{R}\cap arr({\cal C}')=R$; so $R$ is $J^{\cal C}_{at}|_{{\cal C}'}$-covering by definition of induced topology. Conversely, suppose that $R$ is a $J^{\cal C}_{at}|_{{\cal C}'}$-covering sieve on an object $c\in {\cal C}'$. Then there exists a $J^{\cal C}_{at}$-covering sieve $H$ on $c$ in $\cal C$ such that $H\cap arr({\cal C}')=R$. Suppose that $R$ is empty; then for every arrow $f$ in $H$ we have $\emptyset\in J^{\cal C}_{at}(dom(f))$. But $H$ is $J^{\cal C}_{at}$-covering, so from the transitivity axiom for Grothendieck topologies it follows that $\emptyset \in J^{\cal C}_{at}(c)$, a contradiction since $c\in {\cal C}'$. So we conclude that $R$ is non-empty, as required. \end{proofs} \begin{rmk} \emph{By the transitivity axiom for Grothendieck topologies, the subcategory ${\cal C}'$ in the proof of the proposition above satisfies the property that for any arrow $f:a\to b$ in $\cal C$, $a\in {\cal C}'$ if and only if $b\in {\cal C}'$; in other words, ${\cal C}'$ is a union of connected components of $\cal C$. In particular, if ${\cal C}'\neq {\cal C}$ (i.e. $\cal C$ does not satisfy the right Ore condition) and $\cal C$ is connected then ${\cal C}'=\emptyset$, that is the topos $\Sh({\cal C}, J^{\cal C}_{at})$ is trivial.} \end{rmk} The following result generalizes the proposition above.
\begin{proposition}\label{propel} Let $\cal E$ be a Grothendieck topos with a generating set $\cal L$ and $j$ be an elementary topology on $\cal E$ such that all the monomorphisms $a\to b$ in $\cal E$ where $a\ncong 0$ and $b\in {\cal L}$ are $j$-dense. Then $\sh_{j}({\cal E})$ is an atomic topos. \end{proposition} \begin{proofs} By Proposition \ref{prop1}, it is enough to prove that the images of the objects of $\cal L$ via the associated sheaf functor $a_{j}$ form a generating set of objects of $\sh_{j}({\cal E})$ which are either zero or atoms. Our argument follows the lines of the proof of Proposition \ref{prop1}. Given an object $b\in {\cal L}$ and a monomorphism $m:a\to a_{j}(b)$ in $\sh_{j}({\cal E})$, consider the pullback \[ \xymatrix { a' \ar[r]^{m'} \ar[d] & b \ar[d]^{\eta_{b}} \\ a \ar[r]^{m} & a_{j}(b) } \]\\ in $\cal E$. The arrow $m'$ is a monomorphism in $\cal E$, being the pullback of a monomorphism, so, if $a'\ncong 0$ then $m'$ is $j$-dense by our hypotheses, that is $a_{j}(m')$ is an isomorphism. But $a_{j}$ preserves pullbacks, from which it follows that $m$ is an isomorphism. If instead $a'\cong 0$ then $a\cong a_{j}(a')\cong a_{j}(0)=0_{\sh_{j}({\cal E})}$ so $m$ is the zero arrow on $a_{j}(b)$. \end{proofs} \begin{rmk} \emph{We note that Proposition \ref{propat} is the particular case of Proposition \ref{propel} when $\cal E$ is a presheaf topos $[{\cal C}^{\textrm{op}}, \Set]$, $\cal L$ is the collection of all the representables on $\cal C$ and $j$ is the elementary topology on $[{\cal C}^{\textrm{op}}, \Set]$ corresponding to the atomic topology on $\cal C$; indeed, the sieves in $\cal C$ on an object $c\in {\cal C}$ can be identified with the subobjects in $[{\cal C}^{\textrm{op}}, \Set]$ of the representable ${\cal C}(-,c)$.} \end{rmk} Now, let us briefly consider another approach for obtaining an atomic topos starting from a general one, based on the consideration of the atoms of the given topos. 
\begin{proposition} Let $\cal E$ be a Grothendieck topos and $\cal L$ a collection of atoms of $\cal E$, regarded as a full subcategory of $\cal E$. Then, if $J^{\cal E}_{can}$ is the canonical topology on $\cal E$, the topos $\Sh({\cal L}, J^{\cal E}_{can}|_{\cal L})$ is atomic. \end{proposition} \begin{proofs} Obviously, since every arrow in $\cal L$ is an epimorphism in $\cal E$, we have $J^{\cal L}_{at}\subseteq J^{\cal E}_{can}|_{\cal L}$ so $\Sh({\cal L}, J^{\cal E}_{can}|_{\cal L})$ is a subtopos of the topos $\Sh({\cal L}, J^{\cal L}_{at})$. But $\Sh({\cal L}, J^{\cal L}_{at})$ is atomic by Proposition \ref{propat}, hence $\Sh({\cal L}, J^{\cal E}_{can}|_{\cal L})$ is atomic by the discussion following the proof of Proposition \ref{propn1}. \end{proofs} Let us now characterize the atoms of the topos $\Sh({\cal C}, J^{\cal C}_{at})$, where $\cal C$ is a category satisfying the right Ore condition. \begin{proposition}\label{loccon} Let $\Sh({\cal C}, J)$ be a locally connected topos, and $a_{J}:[{\cal C}^{\textrm{op}}, \Set]\to \Sh({\cal C}, J)$ be the associated sheaf functor. Then all the functors $a_{J}({\cal C}(-,c))$ are connected objects of $\Sh({\cal C}, J)$ if and only if all the constant functors ${\cal C}^{\textrm{op}} \to \Set$ are $J$-sheaves. \end{proposition} \begin{proofs} Consider the diagram \[ \xymatrix { \Sh({\cal C}, J) \ar_{p}[dr] \ar^{i}[rr] & & [{\cal C}^{\textrm{op}}, \Set] \ar^{q}[dl] \\ & \Set &} \]\\ of geometric morphisms in the 2-category of Grothendieck toposes, where $p$ and $q$ are the unique geometric morphisms respectively from $\Sh({\cal C}, J)$ and $[{\cal C}^{\textrm{op}}, \Set]$ to $\Set$. Both these geometric morphisms are essential, that is their inverse image functors have left adjoints, which we indicate respectively by $p_{!}$ and $q_{!}$; indeed, $p$ is essential because by hypothesis $\Sh({\cal C}, J)$ is locally connected, while $q$ is essential by Example A4.1.4 \cite{El}.
It is well-known that the representables in $[{\cal C}^{\textrm{op}}, \Set]$ are all indecomposable, so $q_{!}({\cal C}(-,c))=1$ for each $c\in {\cal C}$. Now, the condition that all the constant functors ${\cal C}^{\textrm{op}} \to \Set$ are $J$-sheaves is clearly equivalent to demanding that $q^{\ast}=i\circ p^{\ast}$ where $i$ is the inclusion $\Sh({\cal C}, J)\hookrightarrow [{\cal C}^{\textrm{op}}, \Set]$ or, passing to the left adjoints, that $q_{!}=p_{!}\circ a_{J}$ (of course, the equalities here are intended to be isomorphisms); but, since all these functors preserve colimits (having right adjoints) and every functor in $[{\cal C}^{\textrm{op}}, \Set]$ is a colimit of representables, the equality above holds if and only if $1=q_{!}({\cal C}(-,c))=p_{!}(a_{J}({\cal C}(-,c)))$, that is if and only if the $a_{J}({\cal C}(-,c))$ are all connected objects of $\Sh({\cal C}, J)$. \end{proofs} \begin{rmk}\label{locconrmk} \emph{We note that for a general Grothendieck site $({\cal C}, J)$, the constant functor $\Delta{\emptyset}:{\cal C}^{\textrm{op}} \to \Set$ is a $J$-sheaf if and only if every $J$-covering sieve is non-empty, and all the constant functors $\Delta{L}:{\cal C}^{\textrm{op}} \to \Set$ for a non-empty set $L\in \Set$ are $J$-sheaves if and only if for each object $c\in {\cal C}$, all the $J$-covering sieves on $c$ are empty or connected as full subcategories of ${\cal C}/c$; in particular, the conjunction of these two conditions implies, by Theorem C3.3.10 \cite{El2}, that the topos $\Sh({\cal C}, J)$ is locally connected.} \end{rmk} As a consequence of Proposition \ref{loccon} and Remark \ref{locconrmk}, we deduce that if $\cal C$ is a category satisfying the right Ore condition and $J$ is a Grothendieck topology on $\cal C$ such that every $J$-covering sieve is non-empty, then all the functors $a({\cal C}(-,c))$ are connected objects of the locally connected topos $\Sh({\cal C}, J)$.
In particular, if $J^{\cal C}_{at}$ is the atomic topology on $\cal C$ then the $a({\cal C}(-,c))$ are all atoms of the atomic topos $\Sh({\cal C}, J^{\cal C}_{at})$ (since in an atomic topos the atoms are precisely the connected objects, cfr. p. 685 \cite{El2}); since they also form a generating set for the topos $\Sh({\cal C}, J^{\cal C}_{at})$, we deduce from Proposition \ref{propn1}(ii) that the atoms of $\Sh({\cal C}, J^{\cal C}_{at})$ are exactly the epimorphic images of the functors of the form $a({\cal C}(-,c))$. By using Yoneda's lemma, one can easily rephrase this condition as follows: a $J^{\cal C}_{at}$-sheaf $F$ is an atom of $\Sh({\cal C}, J^{\cal C}_{at})$ if and only if there exists an object $c\in {\cal C}$ and an element $x\in F(c)$ with the property that every natural transformation $\alpha$ from $F$ to any $J^{\cal C}_{at}$-sheaf $G$ is uniquely determined by its value $\alpha(c)(x)$ at $x$. \section{The characterization theorem} In this section we prove our main characterization result concerning the geometric theories classified by an atomic topos with enough points.\\ Let us first introduce the relevant definitions and establish some basic facts. For the general background we refer the reader to \cite{El2}.\\ Concerning notation, for convenience signatures are supposed to be one-sorted throughout the whole section, but all the arguments can be easily adapted to the general many-sorted case. \begin{definition} Let $\mathbb{T}$ be a geometric theory. $\mathbb{T}$ is said to be atomic if its classifying topos $\Set[\mathbb T]$ is an atomic topos. \end{definition} \begin{definition} Let $\mathbb{T}$ be a geometric theory over a signature $\Sigma$. $\mathbb T$ is said to have enough models if for every geometric sequent $\sigma$ over $\Sigma$, $M\vDash \sigma$ for all the $\mathbb T$-models $M$ in $\Set$ implies that $\sigma$ is provable in $\mathbb T$. 
\end{definition} Note that since the soundness theorem for geometric logic always holds (see for example Proposition D1.3.2 p. 832 \cite{El2}), the class of theories with enough models is exactly the class of geometric theories for which `the' completeness theorem holds. \begin{proposition}\label{enough} Let $\mathbb{T}$ be a geometric theory over a signature $\Sigma$. Then $\mathbb T$ has enough models if and only if its classifying topos $\Set[\mathbb T]$ has enough points. \end{proposition} \begin{proofs} By definition, $\Set[\mathbb T]$ has enough points if and only if the inverse image functors $f^{\ast}$ of the geometric morphisms $f:\Set \rightarrow \Set[\mathbb T]$ are jointly conservative. Now, since the geometric morphism $f_{M}:\Set \rightarrow \Set[{\mathbb T}]$ corresponding to a $\mathbb T$-model $M$ in $\Set$ satisfies $f^{\ast}(M_{\mathbb{T}})=M$ (where $M_{\mathbb{T}}$ is the universal model of $\mathbb T$ lying in $\Set[\mathbb T]$) then it follows from Lemma D1.2.13 p. 825 \cite{El2} that if a geometric sequent $\sigma$ over $\Sigma$ is satisfied in every $\mathbb T$-model $M$ in $\Set$ then $\sigma$ is satisfied in $M_{\mathbb{T}}$, equivalently it is provable in $\mathbb T$.\\ Conversely, suppose that $\mathbb T$ has enough models. Then it is easily seen, by using an argument analogous to that employed in the proof of Proposition D3.3.13 p. 915 \cite{El2}, that $\Set[\mathbb T]$ has enough points. \end{proofs} \begin{definition} Let $\mathbb{T}$ be a geometric theory over a signature $\Sigma$. $\mathbb T$ is said to be complete if every geometric sentence $\phi$ over $\Sigma$ is $\mathbb T$-provably equivalent to $\top$ or $\bot$, but not both. 
\end{definition} \begin{rmk}\label{rmk2} \emph{From the topos-theoretic point of view, a geometric theory is complete if and only if its classifying topos is two-valued (to see this, it suffices to consider the syntactic representation of the classifying topos as the category of sheaves on the geometric syntactic category of the theory with respect to the `syntactic topology' on it); moreover, if $\mathbb{T}$ is atomic then its classifying topos is two-valued if and only if it is (atomic and) connected (cfr. the proof of Theorem 2.5. \cite{OC2}).}\\ \end{rmk} Given a geometric theory $\mathbb T$ over a signature $\Sigma$, from now on we will denote the relation of $\mathbb T$-provable equivalence of geometric formulas over $\Sigma$ in the same context by $\stackrel{\mathbb T}{\sim}$. \begin{definition} Let $\mathbb{T}$ be a geometric theory over a signature $\Sigma$. $\mathbb T$ is said to be Boolean if its classifying topos is a Boolean topos. \end{definition} \begin{rmk}\label{rmk1} \emph{We recall from \cite{OC3} that a geometric theory $\mathbb T$ over a signature $\Sigma$ is Boolean if and only if for every geometric formula $\phi(\vec{x})$ over $\Sigma$ there exists a geometric formula $\psi(\vec{x})$ over $\Sigma$ in the same context, denoted $\neg \phi(\vec{x})$, such that $\phi(\vec{x}) \wedge \psi(\vec{x})\stackrel{\mathbb T}{\sim}\bot$ and $\phi(\vec{x}) \vee \psi(\vec{x})\stackrel{\mathbb T}{\sim}\top$.\\ From this criterion, it follows that if $\mathbb T$ is Boolean then every infinitarily disjunctive first-order formula over $\Sigma$ (i.e. an infinitary first-order formula over $\Sigma$ which does not contain infinitary conjunctions) is $\mathbb T$-provably equivalent using classical logic to a geometric formula in the same context; indeed, this can be proved by an inductive argument as in the proof of Theorem D3.4.6 p. 921 \cite{El2}.} \end{rmk} \begin{definition} Let $\mathbb{T}$ be a geometric theory over a signature $\Sigma$.
Two $\mathbb T$-models (in $\Set$) $M$ and $N$ are said to be geometrically equivalent if and only if for every geometric sentence $\phi$ over $\Sigma$, $M\vDash \phi$ if and only if $N\vDash \phi$. \end{definition} Let us recall that a model $M$ of a geometric theory $\mathbb T$ over a signature $\Sigma$ is said to be conservative if every geometric sequent $\sigma$ over $\Sigma$ which is satisfied in $M$ is provable in $\mathbb T$.\\ The following result represents the geometric analogue of the well-known characterization of completeness of a first-order theory in model theory. Below, by a trivial geometric theory we mean a geometric theory in which $\bot$ is provable. \begin{proposition}\label{prop} Let $\mathbb{T}$ be a non-trivial Boolean geometric theory with enough models. Then the following are equivalent:\\ (i) $\mathbb T$ is complete;\\ (ii) for every geometric sentence $\phi$, either $\phi\stackrel{\mathbb T}{\sim}\top$ or $\neg \phi \stackrel{\mathbb T}{\sim} \top$;\\ (iii) any two $\mathbb T$-models in $\Set$ are geometrically equivalent;\\ (iv) every $\mathbb T$-model $M$ in $\Set$ is conservative.\\ \end{proposition} \begin{proofs} (i) $\biimp$ (ii) is obvious.\\ (i) $\imp$ (iii) For any geometric sentence $\phi$ over $\Sigma$, either $\phi \stackrel{\mathbb T}{\sim} \top$, and hence $M\vDash \phi$ for all the $\mathbb T$-models, or $\phi \stackrel{\mathbb T}{\sim} \bot$, and hence $M\nvDash \phi$ for all $\mathbb T$-models; so (iii) immediately follows.\\ (iii) $\imp$ (i) Given a geometric sentence $\phi$ over $\Sigma$, since $\mathbb T$ has enough models, if $\phi \stackrel{\mathbb T}{\nsim} \top$ then there exists a $\mathbb T$-model $M$ in $\Set$ such that $\phi$ does not hold in $M$; then $\phi$ does not hold in any $\mathbb T$-model in $\Set$, these models being all geometrically equivalent.
This precisely means that the geometric sequent $\phi \: \vdash_{[]} \bot$ holds in every $\mathbb T$-model in $\Set$, that is, $\mathbb T$ having enough models, $\phi \stackrel{\mathbb T}{\sim}\bot$.\\ (iii) $\imp$ (iv) Given a geometric sequent $\phi \: \vdash_{\vec{x}} \psi$ over $\Sigma$, it is clear that for any $\mathbb T$-model $M$, $\phi \: \vdash_{\vec{x}} \psi$ holds in $M$ if and only if the infinitarily disjunctive first-order sentence $\forall \vec{x}(\phi \to \psi)$ holds in $M$. But, by Remark \ref{rmk1}, this formula is $\mathbb T$-provably equivalent using classical logic to a geometric sentence; so we conclude that if a geometric sequent is satisfied in a $\mathbb T$-model $M$ then it is satisfied in every $\mathbb T$-model in $\Set$ and hence, $\mathbb T$ having enough models, it is provable in $\mathbb T$.\\ (iv) $\imp$ (iii) is obvious.\\ \end{proofs} \begin{rmks} \emph{(a) As it is clear from the proof, the equivalence (i) $\biimp$ (iii) in the proposition above holds in general for any geometric theory with enough models.\\ (b) Since every Boolean topos having enough points is atomic (Corollary C3.5.2 p. 685 \cite{El2}), the implication (i) $\imp$ (iv) in the proposition above can be seen, in view of Remark \ref{rmk2}, as the logical version of the topos-theoretic fact that every point of a connected atomic topos is a surjection (cfr. Proposition C3.5.6(ii) \cite{El2}).} \end{rmks} \begin{definition} Let $\mathbb{T}$ be a geometric theory over a signature $\Sigma$. 
A type-in-context (or, more briefly, a type) of $\mathbb T$ is any set of geometric formulas over $\Sigma$ in the same context of the form $\{\phi(\vec{x}) \textrm{ | } M\vDash \phi(\vec{a})\}$, where $M$ is a model of $\mathbb T$ in $\Set$ and $\vec{a}$ is a tuple of elements of (the underlying set of) $M$; the type $\{\phi(\vec{x}) \textrm{ | } M\vDash \phi(\vec{a})\}$ will be denoted by $S^{\mathbb{T}}_{(M,\vec{a})}$.\\ A type of $\mathbb T$ is said to be complete if it is maximal (with respect to the inclusion) in the set of all types of $\mathbb T$.\\ A type $S$ of $\mathbb T$ is said to be principal if there exists a formula $\phi(\vec{x})\in S$ such that for any geometric formula $\psi(\vec{x})$ over $\Sigma$ in the same context, $\phi(\vec{x})$ $\mathbb T$-provably implies $\psi(\vec{x})$ if (and only if) $\psi(\vec{x})\in S$; the formula $\phi(\vec{x})$ is said to be a generator of the type $S$. \end{definition} \begin{rmk}\label{rmkcomplete} \emph{Note that, by Proposition \ref{prop}, the notion of complete geometric theory introduced above rewrites in terms of types as follows: a non-trivial geometric theory $\mathbb T$ having enough models is complete if and only if for any two $\mathbb T$-models $M$ and $N$ in $\Set$, $S^{\mathbb{T}}_{(M,[])}=S^{\mathbb{T}}_{(N,[])}$.} \end{rmk} \begin{definition} Let $\Sigma$ be a signature, $M$ a $\Sigma$-structure and $N$ a substructure of $M$. Then $N$ is said to be a geometric substructure of $M$ if, for every geometric formula $\phi(\vec{x})$ over $\Sigma$ and any tuple of elements $\vec{a}$ (of the same length as $\vec{x}$) from $N$, $M \vDash \phi(\vec{a})$ if and only if $N \vDash \phi(\vec{a})$; equivalently, $S^{\emptyset}_{(M, \vec{a})}=S^{\emptyset}_{(N, \vec{a})}$ for any tuple $\vec{a}$ of elements of $N$ (where $\emptyset$ denotes the empty geometric theory over $\Sigma$). 
\end{definition} \begin{rmk}\label{rmk3} \emph{It is easy to prove by induction on the structure of geometric formulas that every geometric formula is equivalent in geometric logic to an infinitary disjunction of geometric formulas which do not contain infinitary disjunctions; since these latter formulas are in particular first-order, we may deduce that if $N$ is an elementary substructure of $M$ then $N$ is a geometric substructure of $M$; moreover, given a geometric sequent $\phi(\vec{x}) \vdash_{\vec{x}} \psi(\vec{x})$, if this sequent holds in $M$ then it also holds in $N$. Indeed, for every tuple $\vec{a}$ of elements in $N$ (of the same length as $\vec{x}$), $N\vDash \phi(\vec{a})$ implies $M\vDash \phi(\vec{a})$, which in turn implies $M\vDash \psi(\vec{a})$ and hence $N\vDash \psi(\vec{a})$ (where the first and third implications follow from the fact that $N$ is a geometric substructure of $M$). We note that this remark justifies the use of the downward L\"owenheim-Skolem theorem in the context of geometric logic; more precisely, given a geometric theory $\mathbb T$ over a signature $\Sigma$ of cardinality $|\Sigma|$, if $\mathbb T$ has a model $M$ such that $|M|\geq |\Sigma|$ then $\mathbb T$ has a model of cardinality $|\Sigma|$.} \end{rmk} Below, by `countable' we mean either finite or denumerable. \begin{definition} Let $\mathbb{T}$ be a geometric theory. Then $\mathbb T$ is said to be countably categorical if any two models of $\mathbb T$ in $\Set$ of countable cardinality are isomorphic. \end{definition} We remark that, by our definition, any geometric theory having no models in $\Set$ is (vacuously) countably categorical.\\ The following definition is the geometric equivalent of the notion of atomic model in classical model theory. \begin{definition} Let $\mathbb{T}$ be a geometric theory over a signature $\Sigma$.
A model $M$ of $\mathbb T$ in $\Set$ is said to be atomic if for any tuple of elements $\vec{a}$ of $M$, the type $S^{\mathbb{T}}_{(M,\vec{a})}$ is principal and complete. \end{definition} Let us recall from \cite{OC3} that a geometric theory $\mathbb T$ over a signature $\Sigma$ is Boolean if and only if every geometric formula $\phi(\vec{x})$ over $\Sigma$ which is stably consistent with respect to $\mathbb T$ (i.e. such that $\phi(\vec{x})\wedge \psi(\vec{x})\stackrel{\mathbb T}{\nsim}\bot$ for every geometric formula $\psi(\vec{x})$ over $\Sigma$ in the same context with $\psi(\vec{x})\stackrel{\mathbb T}{\nsim}\bot$) is provable in $\mathbb T$; let us also recall from \cite{El2} that a geometric theory $\mathbb T$ is atomic if and only if all the subobject lattices in the geometric syntactic category ${\cal C}_{\mathbb T}$ of $\mathbb T$ are atomic Boolean algebras (this also follows from the results in the first section by using the fact that every subobject, in the classifying topos $\Set[{\mathbb T}]$ of $\mathbb T$, of an object in ${\cal C}_{\mathbb T}$ lies in ${\cal C}_{\mathbb T}$). We will make use of these characterizations in the proof of the theorem below.\\ \begin{theorem}\label{teofond} Let $\mathbb T$ be a complete geometric theory having a model in $\Set$. Then the following are equivalent:\\ (i) $\mathbb T$ is countably categorical and Boolean;\\ (ii) $\mathbb T$ is atomic;\\ (iii) every $\mathbb T$-model in $\Set$ is atomic.\\ \end{theorem} \begin{proofs} (i) $\imp$ (ii) By Proposition \ref{prop}, any Boolean complete geometric theory with a model in $\Set$ has enough models; so the thesis follows from the fact that every Boolean topos with enough points is atomic (Corollary C3.5.2 p. 685 \cite{El2}).\\ (ii) $\imp$ (iii) Let $M$ be a $\mathbb T$-model in $\Set$ and $\vec{a}$ be a tuple of elements of $M$; we want to prove that $S^{\mathbb{T}}_{(M,\vec{a})}$ is principal and complete.
Consider the subobject lattice $\Sub_{{\cal C}_{\mathbb T}}(\{\vec{x}.\top\})$ in the geometric syntactic category ${\cal C}_{\mathbb T}$ of $\mathbb T$, where $\vec{x}$ is a set of variables of the same length as $\vec{a}$. Since $\Sub_{{\cal C}_{\mathbb T}}(\{\vec{x}.\top\})$ is an atomic Boolean algebra, we can write $\{\vec{x}.\top\}$ as a disjunction of atoms of $\Sub_{{\cal C}_{\mathbb T}}(\{\vec{x}.\top\})$; so, since $\{\vec{x}.\top\}$ obviously belongs to $S^{\mathbb{T}}_{(M,\vec{a})}$, there exists exactly one atom of $\Sub_{{\cal C}_{\mathbb T}}(\{\vec{x}.\top\})$ (up to $\mathbb T$-provable equivalence) which belongs to $S^{\mathbb{T}}_{(M,\vec{a})}$; then it is clear that this atom generates the type $S^{\mathbb{T}}_{(M,\vec{a})}$. So we have proved that all the types of $\mathbb T$ are principal; it remains to verify that they are also complete. To this end, let us first observe that $\mathbb T$ is Boolean (since every atomic topos is Boolean). So, given an inclusion $S^{\mathbb{T}}_{(M,\vec{a})}\subseteq S^{\mathbb{T}}_{(N,\vec{b})}$ of types of $\mathbb T$, this inclusion must be an equality because if there were a formula $\phi(\vec{x})\in S^{\mathbb{T}}_{(N,\vec{b})}\setminus S^{\mathbb{T}}_{(M,\vec{a})}$ then, by definition of $\neg \phi(\vec{x})$, we would have $\neg \phi(\vec{x})\in S^{\mathbb{T}}_{(M,\vec{a})}$ and hence $\neg \phi(\vec{x})\in S^{\mathbb{T}}_{(N,\vec{b})}$, a contradiction.\\ (iii) $\imp$ (ii) Let us first prove that $\mathbb T$ is Boolean, that is, that every formula $\phi(\vec{x})$ which is stably consistent with respect to $\mathbb T$ is provable in $\mathbb T$. Given a $\mathbb T$-model $M$ and a tuple $\vec{a}$ of elements of $M$ of the same length as $\vec{x}$, let $\psi_{(M, \vec{a})}$ be a generator of the type $S^{\mathbb{T}}_{(M,\vec{a})}$.
As we have already observed, under our hypotheses $\mathbb T$ has enough models so, since $\phi(\vec{x})\wedge \psi_{(M, \vec{a})}\stackrel{\mathbb T}{\nsim}\bot$, there exists a $\mathbb T$-model $N$ and a tuple $\vec{b}$ of elements of it (of the same length as $\vec{x}$) such that $\phi(\vec{x})$ and $\psi_{(M, \vec{a})}$ both belong to $S^{\mathbb{T}}_{(N,\vec{b})}$. Now, since $\psi_{(M, \vec{a})}$ generates the type $S^{\mathbb{T}}_{(M,\vec{a})}$, it follows that $S^{\mathbb{T}}_{(M,\vec{a})}\subseteq S^{\mathbb{T}}_{(N,\vec{b})}$ and hence, since all the types of $\mathbb T$ are complete, $S^{\mathbb{T}}_{(M,\vec{a})}=S^{\mathbb{T}}_{(N,\vec{b})}$. This in turn implies that $\phi(\vec{x})\in S^{\mathbb{T}}_{(M,\vec{a})}$, that is $M \vDash \phi(\vec{a})$. Since the $\mathbb{T}$-model $M$ and the tuple $\vec{a}$ are arbitrary, we conclude, again by invoking the fact that $\mathbb T$ has enough models, that $\phi(\vec{x})$ is provable in $\mathbb{T}$, as required. Now that we have proved that $\mathbb T$ is Boolean, to show that $\mathbb T$ is atomic, it remains to verify that all the Boolean subobject lattices in the geometric syntactic category ${\cal C}_{\mathbb T}$ of $\mathbb T$ are atomic, equivalently for every formula $\phi(\vec{x})\stackrel{\mathbb T}{\nsim} \bot$ there exists an atom below it in the Boolean algebra $\Sub_{{\cal C}_{\mathbb T}}(\{\vec{x}.\top\})$. If $\phi(\vec{x})\stackrel{\mathbb T}{\nsim} \bot$ then, since $\mathbb T$ has enough models, there exists a $\mathbb T$-model $M$ and a tuple $\vec{a}$ of elements of it (of the same length as $\vec{x}$) such that $\phi(\vec{x})\in S^{\mathbb{T}}_{(M,\vec{a})}$. 
It is now enough to check that the generator $\psi_{(M, \vec{a})}$ of the type $S^{\mathbb{T}}_{(M,\vec{a})}$ is an atom of $\Sub_{{\cal C}_{\mathbb T}}(\{\vec{x}.\top\})$; this follows similarly as above by using the fact that $\mathbb{T}$ has enough models and the types of $\mathbb T$ are complete.\\ (ii) $\imp$ (i) Being atomic, $\mathbb{T}$ is Boolean, as every atomic topos is Boolean. To prove that $\mathbb T$ is countably categorical, let us distinguish two cases: either $\mathbb T$ has a finite model in $\Set$ or all the models of $\mathbb{T}$ are infinite.\\ Let us suppose that all the models of $\mathbb{T}$ are infinite. We have to prove that any two denumerable models of $\mathbb T$ are isomorphic. We will construct explicitly such an isomorphism as in the proof of Theorem 7.2.2 p. 336 \cite{Hodges}. Let $M$ and $N$ be two models of $\mathbb{T}$ of cardinality $\aleph_{0}$. Then, $\mathbb{T}$ being complete, we have $S^{\mathbb{T}}_{(M,[])}=S^{\mathbb{T}}_{(N,[])}$ by Remark \ref{rmkcomplete}. Let us first prove by induction on $k \in \mathbb{N}$ the following fact: given tuples $\vec{a}$ and $\vec{b}$ of length $k$ respectively in $M$ and $N$ such that $S^{\mathbb{T}}_{(M, \vec{a})}=S^{\mathbb{T}}_{(N, \vec{b})}$, and an element $d\in N$ there exists an element $c\in M$ such that $S^{\mathbb{T}}_{(M, \vec{a},c)}=S^{\mathbb{T}}_{(N, \vec{b},d)}$ (and, symmetrically, given an element $c\in M$ there exists an element $d\in N$ such that $S^{\mathbb{T}}_{(M, \vec{a},c)}=S^{\mathbb{T}}_{(N, \vec{b},d)}$). Consider the type $S^{\mathbb{T}}_{(N, \vec{b},d)}$; this is principal, by our hypotheses (having already proved the implication (ii) $\imp$ (iii) in the theorem), so it is generated by a formula $\psi(\vec{x}, y)$. 
Now, $N \vDash (\exists y \psi(\vec{x}, y))(\vec{b})$ so, since $S^{\mathbb{T}}_{(M, \vec{a})}=S^{\mathbb{T}}_{(N, \vec{b})}$, we deduce that there exists $c\in M$ such that $M \vDash \psi(\vec{a}, c)$; but $\psi(\vec{x}, y)$ is a generator of $S^{\mathbb{T}}_{(N, \vec{b},d)}$ and all the types of $\mathbb T$ are complete by our hypothesis, so we conclude that $S^{\mathbb{T}}_{(M, \vec{a},c)}=S^{\mathbb{T}}_{(N, \vec{b},d)}$, as required. Now, since $M$ and $N$ are geometrically equivalent by Proposition \ref{prop}, an obvious back-and-forth argument yields two sequences $(m_{0}, m_{1}, \ldots, m_{k}, \ldots )$ and $(n_{0}, n_{1}, \ldots, n_{k}, \ldots )$ enumerating respectively $M$ and $N$, such that for each $k\in \mathbb{N}$, $S^{\mathbb{T}}_{(M, m_{0}, m_{1}, \ldots, m_{k})}=S^{\mathbb{T}}_{(N, n_{0}, n_{1}, \ldots, n_{k})}$; then the map $f:M\to N$ sending each $m_{k}$ to $n_{k}$ is an isomorphism of $\mathbb{T}$-models, as it is a bijection preserving the interpretation of all the atomic formulas.\\ Let us instead suppose that $\mathbb{T}$ has a finite model $M$ in $\Set$ of cardinality $n$. Consider the geometric sequents (over $\Sigma$)\\ $\top \:\vdash_{[]}\: \exists x_{1}\ldots \exists x_{n} ( \mathbin{\mathop{\textrm{\huge $\wedge$}}\limits_{1\leq i< j\leq n}}x_{i}\neq x_{j})$ and\\ $ \mathbin{\mathop{\textrm{\huge $\wedge$}}\limits_{1\leq i< j\leq n}}x_{i}\neq x_{j} \:\vdash_{x_{1},\ldots, x_{n}, y}\: \mathbin{\mathop{\textrm{\huge $\vee$}}\limits_{1\leq i\leq n}}y=x_{i}$,\\ where for each $i$ and $j$, the expression $x_{i}\neq x_{j}$ denotes the complement of the formula $x_{i}= x_{j}$ in the subobject lattice $\Sub_{{\cal C}_{\mathbb{T}}}(\{x_{i}, x_{j}.
\top\})$ of the geometric syntactic category ${\cal C}_{\mathbb{T}}$ of $\mathbb{T}$ (recall that, since the classifying topos of $\mathbb T$ is Boolean, these sublattices are all Boolean algebras).\\ Clearly, a model $N$ of $\mathbb{T}$ satisfies these sequents if and only if it has cardinality $n$; so in particular $M$ satisfies them. But, $\mathbb{T}$ being Boolean and complete, $M$ is a conservative model of $\mathbb T$ by Proposition \ref{prop}, so these sequents are provable in $\mathbb T$. From this, it follows that all the models of $\mathbb{T}$ have cardinality $n$. Since they are all atomic (by the implication (ii) $\imp$ (iii) in the theorem), a back-and-forth argument as above yields an isomorphism between any two models of $\mathbb{T}$.\\ \end{proofs} \begin{rmks} \emph{ (a) The equivalence (i) $\biimp$ (ii) in the theorem above generalizes the analogous result for coherent theories obtained by A. R. Blass and A. \v{S}\v{c}edrov in \cite{blasce}.\\ (b) As it is clear from the proof of the theorem above, the equivalence (ii) $\biimp$ (iii) holds in general for any geometric theory with enough models, while the implication (ii) $\imp$ (i) holds for any complete geometric theory. } \end{rmks} Given a geometric theory $\mathbb{T}$ over a signature $\Sigma$, by a `quotient' of $\mathbb{T}$ we mean a geometric theory $\mathbb{T}'$ over $\Sigma$ such that every axiom of $\mathbb{T}$ is provable in $\mathbb{T}'$; if $\mathbb{T}'$ is complete, then we say that $\mathbb{T}'$ is a completion of $\mathbb{T}$.\\ Let us now describe the completions of an atomic theory $\mathbb{T}$. Since $\Sub_{{\cal C}_{\mathbb{T}}}(\{[].\top\})$ is an atomic Boolean algebra, we can write $\top$ as a disjunction $\mathbin{\mathop{\textrm{\huge $\vee$}}\limits_{i\in I}}\phi_{i}$ of geometric sentences which are atoms of $\Sub_{{\cal C}_{\mathbb{T}}}(\{[].\top\})$. 
Then the completions of $\mathbb T$ are precisely the theories $\mathbb{T}_{i}$ obtained from $\mathbb T$ by adding to it an axiom of the form $\top \vdash_{[]} \phi_{i}$. Indeed, by our results in the first section, a subtopos ${\cal E}\slash U$ of an atomic topos $\cal E$ is two-valued if and only if $U$ is an atom of $\cal E$; also, if $\cal E$ is atomic then we have a decomposition of $1_{\cal E}$ as a disjoint sum of atoms $\mathbin{\mathop{\textrm{\huge $\vee$}}\limits_{i\in I}}U_{i}$ of $\Sub_{\cal E}(1_{\cal E})$ and hence $\cal E$ clearly decomposes as the coproduct of the toposes ${\cal E}\slash U_{i}$ for $i\in I$. Now, if $\cal E$ is the classifying topos $\Set[\mathbb{T}]$ of an atomic theory $\mathbb T$, then the toposes appearing in such decomposition can be clearly identified as the classifying toposes $\Set[\mathbb{T}_{i}]\simeq \Set[\mathbb{T}]\slash [[\phi_{i}]]_{G}$ of the $\mathbb{T}_{i}$, where $G$ is the universal model of $\Set[\mathbb{T}]$; so we may conclude by Remark \ref{rmk2} that the completions of $\mathbb T$ are precisely the $\mathbb{T}_{i}$, and in particular that they are all atomic theories. In passing, we note that if $\cal E$ is the category $\Sh({\cal C}, J^{\cal C}_{at})$ of sheaves on a category $\cal C$ with the respect to the atomic topology $J^{\cal C}_{at}$ on it (cfr. 
the first section of this paper for the definition of the atomic topology on a general category), this decomposition coincides (by the results in the first section) with the decomposition of $\Sh({\cal C}, J^{\cal C}_{at})$ as the coproduct of the toposes $\Sh({\cal C}', J^{{\cal C}'}_{at})$ as ${\cal C}'$ ranges over the set of connected components of $\cal C$.\\ By combining this discussion with Theorem \ref{teofond} we thus obtain the following result: all the completions of an atomic geometric theory are countably categorical.\\ Finally, let us indicate how it is possible to deduce from Theorem \ref{teofond} a representation result for connected atomic toposes with a point. From the proof of the theorem, it is clear that, provided that it exists, the unique (up to isomorphism) countable model $M$ of an atomic complete theory $\mathbb T$ over $\Sigma$ satisfies the following property: any two tuples from $M$ satisfy exactly the same geometric formulas over $\Sigma$ if and only if there exists an automorphism of $M$ which sends one to the other. Then one can prove, by arguments analogous to those employed in the proof of Theorem 3.2 \cite{blasce}, that the classifying topos for $\mathbb T$ is equivalent to the topos of continuous $G$-sets where $G$ is the group of automorphisms of $M$ equipped with the `topology of pointwise convergence' (i.e. the topology defined by declaring a basis of neighbourhoods of the identity to consist of the subgroups $G_{\vec{a}}=\{\alpha\in G \textrm{ | $\alpha$ fixes each element of $\vec{a}$}\}$, for finite tuples $\vec{a}$ in $M$).\\ \section{Applications} \begin{theorem}\label{appl} Let $\mathbb T$ be a geometric theory having a model in $\Set$ in which every stably consistent formula with respect to $\mathbb T$ is satisfied. Then $\mathbb T$ has a quotient which is complete, countably categorical, and has a model in $\Set$.
\end{theorem} \begin{proofs} Consider the Booleanization $\mathbb{T}'$ of the theory $\mathbb{T}$ (as it was defined in \cite{OC3}). $\mathbb{T}'$ is a geometric theory over $\Sigma$, and our hypotheses say precisely that $\mathbb{T}'$ has a model $M$ in $\Set$. Then, the geometric theory $Th(M)$ over $\Sigma$ having as axioms all the geometric sequents over $\Sigma$ which are satisfied in $M$, is complete and contains (in the obvious sense) the theory $\mathbb{T}'$; so its classifying topos $\Set[Th(M)]$ is a subtopos of the Boolean topos $\Set[{\mathbb{T}}']$, and hence it is a Boolean topos (by Proposition A4.5.22 \cite{El}). But the theory $Th(M)$ has enough models ($M$ being a conservative model for it), so $\Set[Th(M)]$ has enough points (by Proposition \ref{enough}) and hence it is atomic, by Corollary C3.5.2 \cite{El2}. Our thesis now follows from Theorem \ref{teofond}. \end{proofs} \begin{rmk}\label{rmk4} \emph{We note that if the signature of the theory $\mathbb T$ in Theorem \ref{appl} is countable then the quotient of $\mathbb T$ in the statement of the theorem has exactly one countable model in $\Set$ up to isomorphism; indeed, this follows from the downward L\"owenheim-Skolem theorem (cfr. Remark \ref{rmk3}).} \end{rmk} The terminology in the following result is taken from \cite{OC2}. \begin{theorem}\label{teofraisse} Let $\mathbb{T}$ be a theory of preshaf type such that the category $(\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))$ satisfies the amalgamation and joint embedding properties. Then any two countable homogeneous $\mathbb T$-models in $\Set$ are isomorphic. \end{theorem} \begin{proofs} As it is remarked in \cite{OC3}, the Booleanization $\mathbb{T}'$ of $\mathbb{T}$ axiomatizes the homogeneous $\mathbb T$-models. Now, we have already observed that an atomic geometric theory is complete if and only if its classifying topos is (atomic and) connected (cfr. Remark \ref{rmk2}). 
So $\mathbb{T}'$ is complete, since its classifying topos $\Set[\mathbb{T}']\simeq \Sh((\textrm{f.p.} {\mathbb T}\textrm{-mod}(\Set))^{\textrm{op}}, J_{at})$ is atomic and connected, by Theorems 2.5 and 2.6 in \cite{OC2}. Our thesis now follows from Theorem \ref{teofond}. \end{proofs} \vspace{10 mm} {\bf Acknowledgements:} I am grateful to my Ph.D. supervisor Peter Johnstone for many useful discussions.\\ \newpage
{ "redpajama_set_name": "RedPajamaArXiv" }
4,012
{"url":"https:\/\/chem.libretexts.org\/Core\/Physical_and_Theoretical_Chemistry\/Physical_Properties_of_Matter\/Atomic_and_Molecular_Properties\/Intermolecular_Forces\/Specific_Interactions\/Van_Der_Waals_Interactions","text":"# Van Der Waals Interactions\n\nVan der Waals forces are driven by induced electrical interactions between two or more atoms or molecules that are very close to each other. Van der Waals interaction is the weakest of all intermolecular attractions between molecules. However, with a lot of Van der Waals forces interacting between two objects, the interaction can be very strong.\n\n### Introduction\n\nHere is a chart to compare the relative weakness of Van der Waals forces to other intermolecular attractions.\n\nWeak Intermolecular Interactions\nForce Strength (kJ\/mol) Distance (nm)\nVan der Waals\u00a0 0.4-4.0 0.3-0.6\nHydrogen Bonds 12-30 0.3\nIonic Interactions 20 0.25\nHydrophobic Interactions <40 varies\n\n### Causes of Van der Waals Forces\n\nQuantum Mechanics strongly emphasizes the constant movement of electrons in an atom through the Schr\u00f6dinger Equation and the Heisenberg\u2019s Uncertainty Principle. The Heisenberg\u2019s Uncertainty Principle proposes that the energy of the electron is never zero; therefore, it is constantly moving around its orbital. The square of the Schr\u00f6dinger Equation for a particle in a box suggests that it is probable of finding the electron (particle) anywhere in the orbital of the atom (box).\n\nThese two important aspects of Quantum Mechanics strongly suggest that the electrons are constantly are moving in an atom, so dipoles are probable of occurring. A dipole is defined as molecules or atoms with equal and opposite electrical charges separated by a small distance.\n\nIt is probable to find the electrons in this state:\n\nThis is how spontaneous (or instantaneous) dipoles occur. When groups of electrons move to one end of the atom, it creates a dipole. 
These groups of electrons are constantly moving, shifting from one end of the atom to the other and back again, so the opposite charge arrangement is equally probable.

Opposite state due to fluctuation of dipoles:

### Dipole-Dipole Interaction

Dipole-dipole interactions occur between molecules that have permanent dipoles; these molecules are also referred to as polar molecules. The figure below shows the electrostatic interaction between two dipoles.

The potential energy of the interaction for the top pair of the image above is represented by the equation:

$V = -\dfrac{2\mu_A\mu_B}{4\pi\epsilon_o r^3} \tag{1}$

The potential energy of the interaction for the bottom pair is represented by the equation:

$V = -\dfrac{\mu_A\mu_B}{4\pi\epsilon_o r^3} \tag{2}$

with

• $V$ is the potential energy
• $\mu$ is the dipole moment
• $\epsilon_o$ is the vacuum permittivity
• $r$ is the distance between the two nuclei

The negative sign indicates that energy is released from the system, because energy is released when bonds are formed, even weak ones. The negative sign also indicates that the interaction is driven by an attractive force (a positive sign would indicate repulsion between the two molecules). If the conditions of the two pairs are identical except for their orientation, the second pair will always have the larger (less negative) potential energy, because in the first arrangement both the negative and positive ends are fully involved in the interaction.

### Induced Dipoles

An induced dipole moment is a temporary condition during which a neutral nonpolar atom (e.g., helium) undergoes a separation of charges due to its environment. When an instantaneous dipole approaches a neighboring atom, it can cause that atom to develop a dipole as well. The neighboring atom is then considered to have an induced dipole moment.

Even though these two atoms are interacting with each other, their dipoles may still fluctuate. However, they must fluctuate in synchrony in order to maintain their dipoles and remain attracted to each other. Result of synchronizing fluctuation of dipoles:

The potential energy representing the dipole-induced dipole interaction is:

$V = -\dfrac{\alpha\mu^2}{4\pi\epsilon_o r^6} \tag{4}$

• $\alpha$ is the polarizability of the nonpolar molecule

Polarizability describes how easily the electron density of an atom or a molecule can be distorted by an external electric field.

### Spontaneous Dipole-Induced Dipole Interaction

Spontaneous dipole-induced dipole interactions are also known as dispersion or London forces (named after the German physicist Fritz London). They are large networks of intermolecular forces between nonpolar and uncharged molecules and atoms (e.g., alkanes, noble gases, and halogens). Molecules that have induced dipoles may in turn induce dipole moments in neighboring molecules, so a large network of induced dipole-induced dipole interactions may exist. The image below illustrates such a network.

The potential energy of an induced dipole-induced dipole interaction is represented by this equation:

$V = -\dfrac{3}{2}\dfrac{I_aI_b}{I_a + I_b}\dfrac{\alpha_a\alpha_b}{r^6} \tag{5}$

• $I$ is the first ionization energy of the molecule

The radius is a major determinant of the magnitude of the potential energy, since the potential energy is inversely proportional to $r^6$: even a small increase in the radius greatly decreases the potential energy of the interaction.

### Contributors

• Justin Than (UCD)
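As a rough numerical illustration of Equations (1) and (2), the short Python sketch below compares the two dipole-dipole orientations. The dipole moment (1.08 D, roughly that of HCl) and the 0.4 nm separation are assumed example values chosen for illustration, not taken from the text above:

```python
import math

EPS_0 = 8.8541878128e-12   # vacuum permittivity, C^2 J^-1 m^-1
DEBYE = 3.33564e-30        # 1 debye in C m

def v_head_to_tail(mu_a, mu_b, r):
    """Equation (1): collinear (head-to-tail) pair of dipoles."""
    return -2 * mu_a * mu_b / (4 * math.pi * EPS_0 * r**3)

def v_antiparallel(mu_a, mu_b, r):
    """Equation (2): antiparallel side-by-side pair of dipoles."""
    return -mu_a * mu_b / (4 * math.pi * EPS_0 * r**3)

mu = 1.08 * DEBYE   # assumed HCl-like dipole moment
r = 0.4e-9          # assumed separation, 0.4 nm

v1 = v_head_to_tail(mu, mu, r)   # about -3.6e-21 J
v2 = v_antiparallel(mu, mu, r)
# Both energies are negative (attraction), and the head-to-tail
# arrangement releases exactly twice as much energy as the
# antiparallel one, the factor of 2 between Equations (1) and (2).
```

Both values come out negative, and their ratio is exactly 2, which is just the prefactor difference between Equations (1) and (2).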
Doritos + chicken wings. Is that combination possible??? The answer is yes. My daughter is a huge fan of Doritos (Cool Ranch flavor), so you could imagine how excited she was when I made these. Thanks to the good people from Froth who gave me this inspiration and also yummly.com, I was able to share this amazingly simple yet mouth-watering recipe with you. If the deep frying method is not your cup of tea, you could also bake them in the oven. Having tried both versions, it was no surprise that deep frying yielded a crisper texture than baking. But both versions of winglets tasted just as moist and juicy as the other.

This is the brand of buttermilk I'm using. You can find it in Singapore supermarkets like Fairprice, Giant or Cold Storage.

Rinse the wings and set aside. Combine the buttermilk, salt, garlic powder and black pepper in a bowl and mix well. Put the winglets into a large zip loc bag and pour the buttermilk mixture into the bag. Close the bag and squish them around, making sure they are thoroughly coated. Refrigerate for at least 4 hours or overnight.

Using a food processor, pulse the Doritos into coarse breadcrumb-like bits. Alternatively, you could put them in a zip loc bag and crush them with a rolling pin. I'm using a mix of cool ranch and spicy nachos flavored Doritos. When you are ready to make the winglets, take them out of the fridge and bring to room temperature, about 30 mins.

Place the flour, eggs and Doritos separately into 3 bowls / plates. Take the winglets one by one and dredge them in the flour, followed by the egg and lastly the Doritos. Make sure the winglets are well coated during each step. Place a few winglets in hot oil and deep fry for about 5 mins. Keep the heat at medium-low so they cook through. Alternatively, you could bake them on a baking tray lined with baking paper at 200°C/400°F for 15 mins, turning them halfway through the cooking process.

For all chicken wings lovers - wings coated with Doritos.
\section{Introduction} In the context of the emerging technology of microscale processing the use of surface substrates with micropatterns has become increasingly important for microfluidics applications. Chemically structured surfaces that exhibit lateral patterns of varying wettability can be produced by techniques such as photolithography \cite{1,2}, microcontact printing \cite{3,4,5}, vapor deposition through grids \cite{6}, domain formation in Langmuir-Blodgett monolayers \cite{7,8}, electrophoretic colloid assembly \cite{9}, lithography with colloid monolayers \cite{10}, microphase separation in diblock copolymer films \cite{11}, etc. For patterned surfaces in the micrometer range, on which liquid droplets or thin liquid films are adsorbed, fascinating wetting morphologies have been predicted \cite{12,13,14,15,16,17,18,18a} (including ``morphological wetting transitions''\cite{12,16,17}) and observed \cite{6}. Liquid bridges between droplets at walls are also of interest in the context of forces between droplets (or bubbles, respectively) \cite{19,20}, long range forces between colloidal particles \cite{21,22}, etc. The theoretical treatments mostly apply phenomenological quasi-macroscopic concepts (in terms of interfacial tensions, contact angles, \cite{23,24} etc.); but for droplets on the micrometer scale the lack of knowledge on the line tension \cite{24,25,26,27,28,29,30,31,32,33,34,35} is a serious drawback already, and the understanding of the density distribution of the droplet near the contact line is a difficult problem \cite{33,34}. The present drive towards nanoscale technology creates also more interest in droplets on the nanometer scale, and in fact some of the techniques mentioned above are well suited to create surface patterns on the nanoscale \cite{10,11}. 
Although sometimes macroscopic concepts do allow reasonable predictions down to the nanoscale \cite{36,37,38,39}, there is no guarantee that the phenomenological theories are quantitatively accurate for bridge formation between polymer nanodroplets. We study this problem here by Molecular-Dynamics simulation, to complement existing knowledge on the problem by insight on a molecular level. In the next section we shall present our model, which has been used successfully in previous work to study static and dynamic properties of polymers in the bulk and at surfaces \cite{36,37,38,40,41,42,43,44,45,46}. In section 3 we shall present our simulation results on droplets on single walls, as well as on the structure of liquid bridges in slit pores of varying width. Section 4 is devoted to a discussion of dynamic aspects of bridge formation, while Section 5 summarizes some conclusions. \section{Some comments on the model and the simulation technique} We employ a coarse-grained off-lattice model of polymer chains, where each chain consists of $N = 32$ effective monomers, which are connected by anharmonic springs. These effective bonds are thought to represent groups of a few successive chemical monomers along the chain, and therefore inclusion of torsional potentials and even bond-bending potentials is not considered. The springs are described by the finitely extensible nonlinear elastic (FENE) potential \begin{equation}\label{eq1} U_{FENE}(\ell) = - \frac{K}{2} R^2 \ln \left[1-\frac{(\ell-\ell_0)^2}{R^2} \right] \;, \end{equation} $\ell$ being the bond length, which can vary between $\ell_{\textrm{min}}$ and $\ell_{\textrm{max}}$, and thus the equilibrium value $\ell _0$ for which $U_{FENE}(\ell _0)=0$, and $R= \ell_{\textrm{max}}- \ell_0 = \ell_0 - \ell _{\textrm{min}}$.
The spring constant $K$ is taken as in previous work $K/k_BT=40$, and we again choose $\ell_{\textrm{max}}=1$ as our unit of length, with $R=0.3$ (hence $\ell_0 =0.7$, $\ell_{\textrm{min}}=0.4$) \cite{40,41,42,43,44,45,46}. Between the effective monomers a Morse potential acts, \begin{equation}\label{eq2} U_M(r)= \epsilon _M \left\{\exp[-2\alpha (r-r_{\textrm{min}})] - 2 \exp[-\alpha(r-r_{\textrm{min}})]\right\}\;, \end{equation} $r$ being the distance between the beads, and the parameters are chosen as $r_{\textrm{min}}=0.8, \; \epsilon_M=1$ and $ \alpha = 24$. Owing to the large value of $\alpha$, $U_M(r)$ decays to zero very rapidly for $r >r_{\textrm{min}}$, and is completely negligible for $r >1$. This choice of parameters is useful, particularly for Monte Carlo simulations, since it allows the use of a very efficient link-cell algorithm \cite{40}. The Theta-temperature for this model is \cite{42} $k_B\Theta \approx 0.62$. We hence choose in the following a temperature $k_BT=0.49$, where at zero pressure the system already is in the state of a dense melt, and to a very good approximation the gas density is zero for $N=32$. Thus from a polymer droplet at a surface no chains evaporate into the surrounding gas. The adsorbing walls are treated as perfectly flat and structureless; in particular, no atomic corrugation of the walls is considered. The interactions between the effective monomers and the walls are represented by a Lennard-Jones potential, integrated over a (semi-infinite) substrate \cite{38} \begin{equation}\label{eq3} U_{\textrm{wall}}(z)=4\pi \epsilon _{\textrm{w}} \left[\frac{1}{45} \left(\frac{\sigma_{\textrm{wall}}}{z}\right)^9-\frac 1 6 \left(\frac{\sigma_{\textrm{wall}}}{z}\right)^3\right]\;, \end{equation} where we choose parameters $\sigma_{\textrm{wall}}=1$, $\epsilon_{\textrm{w}}=0.20$ in the lyophilic part of the substrate, while in the lyophobic area $\epsilon_{\textrm{w}}=0.05$ is chosen. Typically, the lyophilic area is a circle of radius $R_D$, with $3 \leq R_D \leq 15$.
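As an aside for readers who want to experiment with the model, the following short Python sketch (our own illustration, not part of the original simulation code) evaluates the three potentials of Eqs.~(\ref{eq1})-(\ref{eq3}) with the parameter values quoted above; energies are measured in units of $\epsilon_M$, and the spring constant is taken as $K = 40\,k_BT$ with $k_BT=0.49$:

```python
import math

# Parameter values quoted in the text (energies in units of eps_M = 1)
K = 40 * 0.49            # spring constant: K/k_BT = 40 at k_BT = 0.49
ELL0, R = 0.7, 0.3       # equilibrium bond length and extensibility range
EPS_M, ALPHA, R_MIN = 1.0, 24.0, 0.8
SIGMA_W = 1.0

def u_fene(ell):
    """FENE bond potential, Eq. (1); diverges as |ell - ELL0| -> R."""
    return -0.5 * K * R**2 * math.log(1.0 - ((ell - ELL0) / R)**2)

def u_morse(r):
    """Morse pair potential, Eq. (2); minimum of depth -EPS_M at R_MIN."""
    x = math.exp(-ALPHA * (r - R_MIN))
    return EPS_M * (x * x - 2.0 * x)

def u_wall(z, eps_w):
    """9-3 wall potential, Eq. (3); eps_w = 0.2 (lyophilic) or 0.05 (lyophobic)."""
    s = SIGMA_W / z
    return 4.0 * math.pi * eps_w * (s**9 / 45.0 - s**3 / 6.0)
```

One can check that $U_{FENE}(\ell_0)=0$, that the Morse potential takes its minimum value $-\epsilon_M$ at $r_{\textrm{min}}=0.8$ and is essentially zero well beyond $r=1$, and that the wall potential is attractive near $z=1$ for both choices of $\epsilon_{\textrm{w}}$, more strongly so for the lyophilic value.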
The total lateral linear dimensions of the simulation box are chosen as $64 \times 64$, so that (for a total number of ${\mathcal{N}}=128$ chains in the polymer droplet) it never can happen that a droplet interacts with its images generated by the periodic boundary condition. For a study of single droplets, the linear dimension in the perpendicular direction is $L=32$, and the top wall of the box is taken purely repulsive (omitting the attractive term from Eq.(\ref{eq3}), and choosing also $\epsilon_{\textrm{w}}=0.20$). For our study of slit pores, we choose $5 \leq L \leq 22$, and the potentials from both walls are chosen exactly of the same type. The initial preparation of droplets in equilibrium is done by Monte Carlo methods; the Molecular Dynamics runs apply the Velocity-Verlet algorithm, keeping the temperature constant by the Nos\'e-Hoover thermostat \cite{47}. In the case of bridging droplets, however, we resort to a Langevin thermostat \cite{47a} which provides greater stability of the algorithm for small separations between the solid planes. For more details on our Molecular Dynamics algorithm, we refer the reader to \cite{38}. Here we only note that due to the steep variation of the Morse potential a very small integration time step needs to be used, $\delta t=0.0009$ MD time units, and typically runs over 1.1 million time steps were carried out. It is certainly useful to translate the time units [t.u.], used in our simulation, into seconds by mapping our model results to laboratory data. A typical substance used in numerous experiments on wetting is, for example, the PDMS (polydimethylsiloxane) melt $(C_2 H_6 O Si)_n$. In ellipsometric experiments on droplet spreading \cite{Voue} one has measured a diffusion coefficient ${\cal D}_{diff} \approx 3.3\times 10^{-10} m^2/s$ for chain lengths $N=10$, and ${\cal D}_{diff} \approx 0.3\times 10^{-10} m^2/s$ for $N=20$.
These data can be compared to our measurements of precursor diffusion \cite{38} in spreading droplets which yield ${\cal D}_{diff}^{sim} \approx 0.11 \ell_0^2 / t.u.$ in a melt of chains with $N=32$. Bearing in mind that the bond length in PDMS is \cite{Arbe} $\approx 1.59 \AA$, and assuming that, say, three chemical units form a persistent length of $\approx 4 \AA$, one obtains for the simulation time unit $1 t.u. \approx 0.5 ns$ as a rough estimate. \section{Polymer nanodroplets on chemically structured walls: static properties} As discussed in our previous work on droplets adsorbed on flat walls without any chemical structuring of the substrate, the average density distribution $\rho(R,Z)$ is a function of two coordinates, the distance $Z$ from the substrate (note that we orient the z-axis perpendicular to the surface, and the origin of the coordinate system is the projection of the center of mass of the droplet on the substrate plane $z=0$) and the distance $R$ from the z-axis. While individual configurations of the droplet due to statistical fluctuations depend on the angle $\psi$ relative to the x-axis as well \cite{36}, the average density distribution must have rotation symmetry around the z-axis \cite{36,37,38}. We now create a chemical structure on the surface such that the constant $\epsilon_{\textrm{w}}$ describing the wall potential $U_{\textrm{wall}}(Z)$ in Eq.~(\ref{eq3}) has one value, $\epsilon_{\textrm{w}}=0.2$, for $0<R<R_D$ and a smaller value, $\epsilon_{\textrm{w}}=0.05$, outside this circle; the symmetry axis of the droplet then coincides, of course, with the axis perpendicular to the substrate through the midpoint of this lyophilic circle. Therefore it is meaningful to record density profiles $\rho(R,Z)$ of the same type to characterize the average droplet density profile as in our previous work. Fig.~\ref{fig1} shows a few representative examples.
While for $R_D=15$ the droplet sits fully inside the lyophilic circle and hence hardly differs from a droplet on an infinitely extended lyophilic surface, for $R_D \leq 12$ the droplet always extends over the full range of the lyophilic region. One thus can see that the shape of the adsorbed droplet changes when the radius $R_D$ of the lyophilic domain decreases. As expected, the contact angle then varies continuously with $R_D$, due to the interaction of the contact line and the boundary between the lyophilic and lyophobic regions at the substrate, and is no longer identical to the contact angle that applies for an infinitely extended flat lyophilic substrate (this contact angle is determined, for very large droplets, in terms of the polymer-wall and polymer-gas interfacial energies, through the Young equation \cite{23,24}, while for not so large droplets also a correction due to the line tension \cite{34,35} needs to be taken into account \cite{36,48}). As discussed in our earlier work \cite{36}, some ambiguity in the definition of the contact angle of such nanodroplets is inevitable. We follow the earlier work \cite{36,37,38}, fitting a straight line to the density contour $\rho(Z,R)=1$ in the regime $2 \leq Z \leq 4$, disregarding the slight curvature of this contour in that region. Fig.~\ref{fig2} presents a plot of the resulting variation, showing that $\cos (\theta)\approx const$ for $R_D\geq 12$, while $\cos (\theta) <0$ for $R_D<9$. Comparing the density profiles shown in the insert of Fig.~\ref{fig2} with those of Fig.~\ref{fig1} one sees that indeed for $R_D>12$ the droplet shape in Fig.~\ref{fig1} is the same as that for a homogeneous surface with $\epsilon_w=0.2$. Qualitatively, the behavior seen in Figs.~\ref{fig1},\ref{fig2} nicely corresponds to the theoretical predictions of Lipowsky et al.\cite{12,13,14,15,16,17,18}.
When $R_D$ gets very small, the droplet shape gradually approaches the shape that a droplet takes on a uniformly lyophobic substrate surface, as the comparison of the droplet profile for $R_D=3$ in Fig.~\ref{fig1} and the droplet profile for $\epsilon_w=0.05$ in Fig.~\ref{fig2} shows. We next study the behavior of a slit pore of width $L$, where both walls exhibit a lyophilic domain of radius $R_D$ exactly opposite to each other. When $L$ is large enough, in equilibrium (for a fixed total number ${\mathcal{N}}$ of chains) we expect that droplets containing ${\mathcal{N}}/2$ chains each (assuming ${\mathcal{N}}$ is an even integer) will be adsorbed exactly opposite to each other. However, when the distance $L$ between the plates gets smaller, the two droplets start to interact, and a liquid bridge between both walls can form. This formation of liquid bridges by variation of $L$ is illustrated in Fig.~\ref{fig3}, choosing $R_D=14$ and ${\mathcal{N}}/2=128$, so that the single non-bridging droplets on the separated walls are exactly equivalent to the situation considered in Figs.~\ref{fig1},~\ref{fig2}. One can see that for $L=22$ the droplets are still separated and identical in shape to those seen in Fig.~\ref{fig1} for $12 \leq R_D \leq 15$. For $L \leq 19$, however, bridge formation has occurred. One can clearly see the change of the bridge morphology with decreasing distance $L$ between the two walls. When the separated droplets touch each other, they form an ``in-bridge'' (i.e., the curvature of the bridge surface is concave), which is typical for liquids wetting the substrate. This catenoid shape is displayed by the profiles for the pore widths $L=17, 15,$ and $13$.
On further decrease of L, the shape of the liquid bridge becomes perfectly cylindrical (the shapes for $L=10$ and $L=9$ are close to this shape), while for still smaller slit pore widths $L$ (such as $L=6$ and $L=5$) the shape is that of an ``out bridge'', with a convex curvature of the bridge surface, i.e. a barrel-like shape. This shape is typical for lyophobic substrates, as experienced here by those monomers of the polymer droplet whose coordinates $R(X,Y)$ exceed the radius $R_D$ of the hydrophilic domain. Again the behavior in Fig.~\ref{fig3} is very nicely consistent with the behavior as predicted by the theory for quasi-macroscopic droplets \cite{12,13,14,15,16,17,18,18a}, and hence we again find that these phenomenological concepts due to Lipowsky et al. \cite{12,13,14,15,16,17,18} work qualitatively down to the nanoscopic scale. In view of the fact that there is some ambiguity in the precise numerical estimation of the contact angle of nanodroplets, as mentioned above, it is of significant interest to estimate additional properties quantitatively, which could be compared to analytical theories on this problem. Such properties are the base radius $R_{\textrm{lat}}$ of the droplet (Fig.\ref{fig4}) or liquid bridge (Fig.\ref{fig5}), the height of the droplet $H$ (Fig.\ref{fig4}), and the midheight radius $R_{\textrm{lat}}(Z=H/2)$ of the droplet (Fig.~\ref{fig4}) or of a liquid bridge $R_{\textrm{lat}}(Z=L/2)$, (Fig.~\ref{fig5}), respectively. Figs.~\ref{fig4},\ref{fig5} demonstrate that these quantities can be measured with relatively small statistical errors in our simulations. From such data it also is evident that a pronounced change in the behavior occurs at $R_D\approx 8.5$ (Fig.~\ref{fig4}). 
For a macroscopic sessile droplet (satisfying the Young equation with the contact angle $\theta$ and having a sphere-cap shape) we simply would have the relations in terms of the sphere radius $r$ \begin{equation}\label{eq4} R_{\textrm{lat}}= r\;\sin (\theta), \; H=r(1-\cos (\theta))\;, \end{equation} and hence \begin{equation}\label{eq5} H/R_{\textrm{lat}}=(1-\cos (\theta))/\sin (\theta) \approx \theta /2\;, \quad \theta \rightarrow 0\;. \end{equation} Indeed this relation is roughly fulfilled for our model. It also is of interest to calculate the pressure in the liquid bridge, using the virial formula \cite{45,49}. Fig.~\ref{fig6} shows that the pressure is positive only for the smallest distances between the plates $(L \leq 6)$, while for larger distances the pressure is negative. This observation already indicates that such situations are unfavorable: if we did not enforce the distance $L$ between the plates as a given parameter, i.e. if the walls could move freely against each other, such distances would not occur. This fact is very clearly borne out when we compute the normal force between the walls (Fig.~\ref{fig7}). Of course, for large distances $L$, for which two separate droplets occur on each wall that do not yet touch, the force is zero, while for bridging droplets an attractive force arises, and only for very small $L$, where the bridge is squeezed into the lyophobic part, the force becomes repulsive (Fig.~\ref{fig7}). The force goes through zero at $L \approx 6$. Since it has been shown \cite{18a} that for the underlying {\em catenoid} geometry a closed analytical expression for the surface area dependence on the wall separation $L$ does not exist, we tentatively use a relation proposed by Swain and Lipowsky \cite{14} for the bridge between a single pair of opposing lyophilic stripes. The corresponding contact angle $\theta ^*$ can then be calculated from $\theta ^* = \pi - \tan ^{-1} (2R_D/L) \approx 100^o$ and agrees with our observations.
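To make the geometric content of Eqs.~(\ref{eq4}),(\ref{eq5}) and of the zero-force angle estimate concrete, a minimal numerical sketch follows (our own illustration; the values $R_D=14$ and $L \approx 6$ are those quoted above, and we read the Swain-Lipowsky estimate as $\theta^* = \pi - \tan^{-1}(2R_D/L)$, which reproduces the quoted $\approx 100^o$):

```python
import math

def sphere_cap(r, theta):
    """Base radius and height of a spherical cap with contact angle theta, Eq. (4)."""
    return r * math.sin(theta), r * (1.0 - math.cos(theta))

# Eq. (5): H / R_lat = (1 - cos theta) / sin theta -> theta / 2 for small theta
theta = 0.05
r_lat, h = sphere_cap(1.0, theta)
small_angle = h / r_lat                  # approximately theta / 2

# Zero-force contact angle of the bridge (R_D = 14, L ~ 6 from the text),
# reading the Swain-Lipowsky estimate as theta* = pi - arctan(2 R_D / L):
theta_star = math.degrees(math.pi - math.atan(2 * 14 / 6))   # ~ 102 degrees
```

The small-angle ratio agrees with $\theta/2$ to within a fraction of a percent, and the bridge angle comes out slightly above $100^o$, consistent with the value quoted in the text.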
As expected, this contact angle exceeds $\pi/2$. The strongest attractive force actually does occur near $L\approx 10$, where the angle is $\pi/2$. For $L > 12$ an almost linear decrease of the force sets in, before it discontinuously jumps to zero for $L=22$. It is interesting to note that very long range attractive forces have been experimentally detected in AFM measurements of forces between a polystyrene sphere and liquid interfaces \cite{21,22}, showing a {\em linear} variation with distance on the scale of 30 nm. Unlike such experiments, in our simulation an electrostatic origin of such long range forces is excluded by construction of our model. The force seen in Fig.~\ref{fig7} is entirely due to the interplay of the various interfacial interactions that control the shape of the liquid bridge. One should also point out that both the course of the bridging force against inter-plate distance $L$ and the particular bridge configurations corresponding to various parts of the force-distance curve closely match some recent results \cite{18a} on classical structureless droplets obtained by two different surface minimization techniques. \section{Dynamic aspects of bridge formation} For $L=21.5$ one can observe initial states where the two substrates still carry separate droplets, but the wings of their density distributions already overlap, and this interaction between the two droplets starts a merging process which leads to the formation of a liquid bridge. Figs.~\ref{fig8} and \ref{fig9} analyze the dynamical aspects of this merging process of the two droplets in more detail. One can see that the formation of the liquid bridge is a very slow process; it is clearly diffusion-controlled, and there is no evidence of faster hydrodynamic mechanisms. The radius of the midpoint of the bridge seems to grow towards its equilibrium value over a transient period of time according to a $t^{1/4}$ law.
This behavior is reminiscent of the growth law with which an interfacial profile between coexisting phases approaches equilibrium \cite{50}. \section{Concluding discussion} In the present work, we have presented Molecular Dynamics simulations addressing the shape of a droplet adsorbed on a circular lyophilic domain on an otherwise lyophobic flat substrate surface, and the formation of liquid bridges between two such surfaces. Also the forces between these surfaces caused by such liquid bridges have been measured, and the kinetics of the merging of two such droplets into one bridge has been studied. According to the predictions of Lipowsky et al. \cite{12,13,14,15,16,17,18} one should expect for large enough domain radius $R_D$ that the contact angle $\theta$ of the droplet is constant ($\theta= \theta_\textrm{phil}$, the ``lyophilic'' value, with $\cos \theta_\textrm{phil} \approx 0.57$ in our case, see inset of Fig.~\ref{fig2}), until the radius $R_\textrm{lat}$ of the droplet matches $R_D$. For our choice of parameters, this happens for $R_D = R_D^\textrm{phil} \approx 12$. For $R_D < R_D^\textrm{phil}$, one expects that $\theta$ should increase such that $R_\textrm{lat}=R_D$ is always maintained, until $\theta$ reaches the value of the lyophobic part of the substrate surface, $\theta=\theta_\textrm{phob}$, for $R_D=R_D^\textrm{phob}$, with $\cos \theta_\textrm{phob} \approx - 0.6$ in our case. In fact, Fig.~\ref{fig4} nicely verifies the linear variation $R_\textrm{lat}=R_D$ quantitatively up to about $R_D \approx 9$, while for $9 < R_D < R_D^{\textrm{phil}}$ the further increase of $R_\textrm{lat}$ with $R_D$ is slower, and the saturation value of $R_\textrm{lat}$, reached for $R_D \geq R_D^{\textrm{phil}}$, is in fact smaller than expected, namely only around $R_\textrm{lat}^\textrm{max}\approx 10$. Of course, some quantitative deviations of our results from the predictions of Lipowsky et al.
\cite{12,13,14,15,16,17,18} must be expected, since the latter predictions are asymptotically valid for very large, almost macroscopic, droplets, while our simulations concern nanodroplets, and hence there are no sharp transitions possible when $R_D$ is varied: so Fig.~\ref{fig2} does not show a sharp kink of the curve $\cos (\theta) $ vs. $R_D$ at $R_D = R^{\textrm{phil}}_D$ but rather a rounded crossover, and also at $R_D = R_D^{\textrm{phob}}$ only a very smooth variation is seen (in fact, $R_D^\textrm{phob}$ is of order unity in our case, and hence cannot be even uniquely identified from our data). Clearly, it would be interesting to analyze the amount of rounding of these ``transitions'' at $\theta=\theta_\textrm{phil}$ and $\theta=\theta_\textrm{phob}$ theoretically, and to repeat our simulations for larger droplets (and correspondingly chosen larger values of $R_D$) to see the extent to which these transitions become sharper, but all such extensions of our work would present major difficulties and hence have not been attempted. We also note that the theory implies that the shape of the droplet should be a sphere cap throughout. In particular, since the volume $V$ of the droplet is constant, we should have \begin{equation} \label{eq6} V=\frac{\pi H}{6} (3 R^2_\textrm{lat} + H^2) = \textrm{const} \end{equation} and combining this equation with the geometrical relations for the contact angle , Eqs.~(\ref{eq4}),~(\ref{eq5}), in principle the variation of $\cos (\theta)$ with $R_D$ in the regime $R^\textrm{phob}_D < R_D < R_D^\textrm{phil}$ can be explicitly predicted. 
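A few lines of Python suffice to evaluate Eq.~(\ref{eq6}) for given $(R_{\textrm{lat}}, H)$; as an illustration (our own sketch, with the pairs read off Fig.~\ref{fig4}):

```python
import math

def cap_volume(r_lat, h):
    """Sphere-cap volume, Eq. (6): V = (pi H / 6) (3 R_lat^2 + H^2)."""
    return math.pi * h / 6.0 * (3.0 * r_lat**2 + h**2)

# (R_lat, H) pairs read off Fig. 4 for decreasing domain radius R_D
pairs = [(10.0, 7.5), (6.8, 9.3), (5.0, 11.0), (3.0, 13.0)]
volumes = [cap_volume(r_lat, h) for r_lat, h in pairs]
# -> approximately [1399, 1097, 1129, 1334]; a constant volume would
#    indicate a perfect sphere-cap shape throughout.
```

Since the droplet volume is fixed, the scatter among these values directly quantifies how accurately the sphere-cap picture is obeyed.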
Noting from Fig.~\ref{fig4} that for $R_D > R_D^\textrm{phil}$ we have $R_\textrm{lat}\approx10$, $H \approx 7.5$, one obtains $V \approx 1400.$ However, for $R_D \approx 7$ where $R_\textrm{lat} \approx6.8$, $H \approx9.3$ (Fig.~\ref{fig4}) we find that Eq.~(\ref{eq6}) would yield only $V \approx 1100$, and for $R_D \approx5$, where $R_\textrm{lat}\approx5$, $H \approx11$ we would get $V \approx 1130$, while for $R_D=R_\textrm{lat}=3$ where $H\approx13$ we find $V \approx 1330$. Thus, the geometrical relations on the basis of the sphere cap picture are not verified accurately in our case. This problem is related to the fact that the linear increase of $R_\textrm{lat}$ with $R_D$ does not continue all the way to $R_\textrm{lat}=R_D^\textrm{phil}\approx12$ but $R_\textrm{lat}$ is significantly too small for $R_D >9$. On the other hand, the discrepancies found are not dramatic either, they do not exceed 10-15\%, in spite of the nanoscopic size of the droplets. One needs also to consider the fact that this small size leads also to substantial errors and ambiguities when one tries to read off $R_\textrm{lat}$ and $H$ from the data. In view of all these problems, the agreement between the simulations and the theoretical descriptions certainly is satisfactory. \bigskip \underline{Acknowledgments}: This research was supported in part by the Deutsche Forschungsgemeinschaft (DFG) under grant number 436 BUL 113/130/2-1. \clearpage \newpage FIG. 1. Contour diagrams representing the density profiles $\rho(Z,R(X,Y))=\nu \Delta \rho$, $\nu =1,2,...11$, $\Delta \rho =0.2$, in the $(Z,R)$ plane, for a droplet containing 128 chains with 32 monomers each, at a temperature $k_BT=0.49$. The droplet is in contact with an ideally flat lyophobic substrate (represented by the thick grey line in the bottom) decorated with one lyophilic circle of radius $R_D$ (the thick black line in the bottom). 
The adsorption strength of the lyophobic wall is $\epsilon _{\textrm{w}}=0.05$, while for the lyophilic circle it is $\epsilon_{\textrm{w}}=0.2$. Profiles are shown for $R_D=15,12,9,7,5$ and $3$, respectively. These profiles are obtained by averaging over 10 runs of $2.1 \cdot 10^6$ integration steps each. One MD time step is $\delta t=0.0009$ MD time units.\\[1cm] FIG. 2. Cosine of the contact angle $\theta$ plotted versus the radius $R_D$ of the lyophilic domain, for droplets containing ${\mathcal{N}}=128$ chains with $N=32$ monomers each, at $k_BT=0.49$. The inset shows the dependence of the contact angle of the droplet at an infinitely extended homogeneous substrate (as considered in Refs.\cite{36,37,38}) on the strength $\epsilon_{\textrm{w}}$ of the wall potential. The two contour diagrams in the inset represent the density profiles $\rho(Z,R(X,Y))=\nu\Delta\rho ,\; \nu =1,2,\ldots,11$ in the $(Z,R)$ plane of a droplet for the two different adsorption strengths relevant for the present paper, namely for $\epsilon_{\textrm{w}}=0.05$ (the value used to model the lyophobic region of the wall) and for $\epsilon_{\textrm{w}}=0.2$ (the value used for the lyophilic domain on the surface).\\[1cm] FIG. 3. Contour diagrams representing the density profiles $\rho(Z,R(X,Y))=\nu \Delta \rho,\; \nu =1,2,\ldots,11 (\Delta \rho =0.2)$ in the $(Z,R)$ plane for two droplets adsorbed on opposite walls $(L=22)$ or a liquid bridge between the walls $(L=19,17,15,13,10,9,6,5)$, respectively. The simulated system contains 256 chains with 32 monomers each, the temperature is chosen as $k_BT=0.49$, and the walls are ideally flat and lyophobic, with an adsorption strength $\epsilon_w=0.05$, except for a lyophilic circle of radius $R_D=14$ (with adsorption strength $\epsilon_w=0.2$). The lyophilic parts of the adsorbing surface are shown by thick black bars, the lyophobic parts by thick grey bars. The profiles are averaged over 18 runs of $1.05 \cdot 10^6$ MD steps.\\[1cm] FIG. 4.
Variation of the base radius $R_{\textrm{lat}}$ (circles), the height H (squares) and the midheight radius $R_{\textrm{lat}}(Z=H/2)$ (diamonds) of a droplet adsorbed on a single circular lyophilic domain of radius $R_D$ plotted vs. $R_D$ at $k_BT=0.49$. The droplet contains 128 chains with 32 monomers each. The strength of the adsorption potential is $\epsilon_{\textrm{w}}=0.2$ inside the lyophilic domain and $\epsilon_{\textrm{w}}=0.05$ outside of it. The radii and the height are measured from density profile contour diagrams (taking the contour at midpoint density, $\rho=1$) averaged over 10 runs of $2.1 \cdot 10^6$ integration steps each. $R_{\textrm{lat}}$ is taken as the maximum lateral extension of the contour (it occurs roughly at $Z=1.5$). The error bars are obtained by estimating the radii from every single run.\\[1cm] FIG. 5. Variation of the base radius $R_{\textrm{lat}}$ (circles) and the midheight radius $R_{\textrm{lat}}(Z=L/2)$ (squares) for the liquid bridges of Fig.~\ref{fig3} (for further explanations see the caption of Fig.~\ref{fig4}). A log-log plot of $R_{\textrm{lat}}(Z=L/2)\; \mbox{vs}\; L$ (see inset) reveals a power-law relationship $R_{\textrm{lat}}\propto L^{-0.72}$.\\[1cm] FIG. 6. The pressure in a liquid bridge connecting two flat substrates is plotted against the distance $L$ between the walls. The total pressure is shown by filled circles. Open diamonds show the contribution to the pressure that comes from the wall-monomer interactions, while the open squares show the contribution to the pressure from the monomer-monomer interaction and the kinetic part.\\[1cm] FIG. 7. The normal force acting between two substrates with a bridging droplet plotted versus the distance $L$. The insets show contour diagrams of the density profiles of the liquid bridge at 4 different distances $L$: the two separated droplets are at $L=22$, the ``in-bridge'' is at $L=17$, the cylindrical bridge is at $L=9,5$ and the ``out-bridge'' at $L=6$. 
The thick dark lines indicate the lyophilic region of the substrate. Each point is obtained from averaging over 19 runs of $1.05 \cdot 10^6$ MD steps, while the data points around the minimum ($6.5 < L < 8.5$) are averaged over 29 runs. The arrow at $L=22$ indicates that bridges are no longer stable for $L \geq 22$. The contact angle at zero force is $\theta^* \approx 100^\circ$.\\[1cm] FIG. 8. Midheight radius $R(Z=L/2,t)$ of a forming liquid bridge as a function of time, for $L=21.5$. The radial distance from the vertical axis of the droplet to the point where the density is $\rho = 1.0$ is measured. The origin of time $t=0$ is chosen as the time when the two droplets sitting on opposite parallel substrates have just touched each other. The insets show vertical cross sections of the resulting bridges for three different times indicated in the figure. $R(Z=L/2,t)$ is averaged over 7 independent runs. For $300 \leq t \leq 3000$ an effective growth law $R(Z=L/2,t)\propto \textrm{const} + t^{0.242}$ is observed, as indicated by a straight line.\\[1cm] FIG. 9. Density profiles of a forming liquid bridge for four different times, as indicated in the figure. The diffusive interpenetration of species which originally belong to the two separated droplets is indicated. \newpage
1. ## Latex

I'm new to both this forum and to latex. How do I get my latex into a thread? I am writing in mathbot, but when I copy and paste, it's huge. Is there a way to type the code inside a thread so that I don't have to bother with resizing and copying and pasting? Thanks.

2. ## Re: Latex

$\frac{a}{b}$

[TEX]\sum\limits_{r = 2}^n {\frac{2}{{r\left( {{r^2} - 1} \right)}}}[/TEX] gives $\sum\limits_{r = 2}^n {\frac{2}{{r\left( {{r^2} - 1} \right)}}}$
#9 out of 9 in 2010 Affordable Large SUVs
Average Price Paid $9,656 - $13,513

2010 Nissan Armada Interior Review
Note: This interior review was created when the 2010 Nissan Armada was new.

Interior: 7.6

The interior of the Armada reflects its exterior, which means the cabin is expansive -- though not more so than competitors. Cruising in the seven- or eight-passenger (depending on second-row seat configuration) Armada is comfortable, according to most reviewers. A few complain of squeaks and rattles. The ride height gives rise to a reviewer annoyance -- it's tough to get into and out of the Armada.

"This interior is more lush than plenty of so-called luxury cars, working several shades of brown, perfect touches of wood accent, satin-finish metal trim with a bit of chrome sparkle here and there." -- Automobile Magazine

"Interior decor is mostly plain with materials that trail Armada's large-SUV rivals. Test examples had various creaks and rattles, including a squeaking steering column and wind whistle from the cargo area." -- Consumer Guide

"Difficult entry and exit." -- Cars.com

"Every time you need to reach into the second row, which is often when you have a 2-year-old and an 11-month-old, I had to put the car in Park, unbuckle my seat belt and practically do a backbend to get there. Don't worry; I practice yoga." -- Mother Proof

The Armada offers two seating configurations. The standard configuration, with two second-row captain's chairs, accommodates seven passengers. An optional second-row bench increases capacity to eight.

"Roomy, supportive [front] seats . . . Second-row space is generous, but the available bucket seats are narrow and lack proper contouring and thigh support. They tumble forward, but leave a slim passage that means a jungle-gym climb into or out of 3rd row. Once there, adults find a flat, hard, undersized bench and less space than in most other large SUVs." -- Consumer Guide

"Armada easily accommodates seven or eight passengers." -- Auto Mall USA

"Big, comfy, well-upholstered buckets, with 8-way power adjustment for the driver and power adjustable pedals on all models." -- Motor Week

"The comfortable driver's seat is power-adjustable, as are the pedals, so finding a good driving position is pretty easy." -- About.com

"Excellent set of second-row captain's chairs that are separated by a removable center console. With the console out, passengers are granted easy access to the third-row seat without having to disturb those already seated in the second row." -- Kelley Blue Book

"Despite 'theatre style' elevation, relatively thin cushion padding and smaller key dimensions make the one-piece third-row bench a kid-only zone." -- Automobile Magazine

Most reviewers like the Armada's interior, which offers plenty of entertainment options, as well as a stylish design.

"Sorting through the plethora of pushbuttons and layers of electronics just to change how the air flows on you can prove to be a challenge. I even had a hard time finding the radio, which, by the way, was right in the middle of the dash." -- Automobile Magazine

"The gauges are easy to read. Simple three-dial climate system may be a stretch away for some drivers, and dial positions can be tough to decipher in daylight." -- Consumer Guide

The Armada offers plenty of cargo space. Though it has less sheer cubic footage than some class competitors, it has easy-to-fold seats for quick conversion from people- to cargo-hauling. Several reviewers commented on the storage space for smaller items, which includes a deep center console and storage compartments in the ceiling.

"In addition to the enormous cargo area, there are not one, not two, but six storage compartments in the ceiling. If you opt for the DVD entertainment system, one of these ceiling cubbies would be taken away. I stored books for my kids in one of the ceiling cubbies, which was a novel and useful storage solution. . . . The center console between the front seats is huge. I swear I could fit my infant daughter in there (not that I'm advocating storing a child in a cubby; I'm simply offering a point of reference)." -- Mother Proof

"Overall, there's 188.4 cubic feet of passenger volume, and 20 cubic feet of cargo space behind the third row. With the second and third row folded, the cargo space grows to 97.1 cubic feet, which is less than its competitors." -- Cars.com

"With all seats in their full upright positions, the Armada provides 20 cubic feet of space behind the third row, which is similar to that of the Expedition. It's deep enough to fit a 30-gallon cooler." -- Auto Mall USA
Q: save and clear terminal window in linux

This is probably a dumb question that's been answered before. But I don't know how to phrase it correctly for it to show up in the search results. In the Linux terminal, programs like less, man, or vim take up the entire screen and display their info; then, when they're closed, they are replaced with the original contents of the terminal, showing the previously run commands and such. How do I do this? What are the keywords associated with this action, so I can look it up? Or, if you wish, could you provide a brief explanation of the process?
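For searchability: the keywords are "alternate screen buffer" and the terminfo capabilities smcup/rmcup. Full-screen programs such as less, man and vim switch to this second buffer on startup and switch back on exit, which is what restores the previous shell contents. A minimal sketch for xterm-compatible terminals (the raw escape codes shown are what `tput smcup` / `tput rmcup` emit on such terminals):

```shell
printf '\033[?1049h'    # enter the alternate screen (terminfo smcup)
printf 'This text lives on the alternate screen.\n'
sleep 1
printf '\033[?1049l'    # leave it (terminfo rmcup); the old contents reappear
```

Running this in a graphical terminal briefly shows the message on a blank screen, then restores the original scrollback, just like quitting less.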
Q: How to install ImageMagick header files of specific version?

I am on RHEL and installed ImageMagick from source using the following:

yum install -y libpng libpng-devel
curl -LO http://www.imagemagick.org/download/releases/ImageMagick-6.8.9-9.tar.gz
tar -xvzf ImageMagick-6.8.9-9.tar.gz
cd ImageMagick-6.8.9-9/
./configure --prefix=/usr/local
make install

I also need to install the header files. How do I do this? The Yum latest repository only has 6.5.4 and if I install those I get version conflicts.

A: According to the Install-unix.txt, it says:

"By default, ImageMagick installs binaries in /usr/local/bin, libraries in /usr/local/lib, header files in /usr/local/include and documentation in /usr/local/share. You can specify an alternative installation prefix other than /usr/local by giving configure the option --prefix=PATH. This is valuable in case you don't have privileges to install under the default paths or if you want to install in the system directories instead."

So it should already be present. I verified this with the same version you installed. It is located in $prefix/include/ImageMagick.
Q: Error when copying vector bool to CUDA memory

I encountered a compilation error where copying a bool vector to cuda memory will fail:

bool *gpu;
cudaMalloc(reinterpret_cast<void **>(&gpu), 100*sizeof(bool));
std::vector<bool> cpu(100);
for (int i = 0; i < 100; i++) {
    cpu[i] = true;
}
cudaMemcpy(gpu, cpu.data(), 100*sizeof(bool), cudaMemcpyHostToDevice);

It returns

error: invalid use of void expression

on the cudaMemcpy line, but the same code with a float vector will compile:

float *gpu;
cudaMalloc(reinterpret_cast<void **>(&gpu), 100*sizeof(float));
std::vector<float> cpu(100);
for (int i = 0; i < 100; i++) {
    cpu[i] = i;
}
cudaMemcpy(gpu, cpu.data(), 100*sizeof(float), cudaMemcpyHostToDevice);

Why is this happening?

A: vector<bool> is a mistake from C++98 that we cannot get rid of (at least in terms of occupying the name). The standard recommends that it keep the storage as a space-optimized representation of bits, and that's what most implementations do. There is consequently no contiguous array of bool to point at, and no usable data() member. You can work around this by using vector<uint8_t> instead.
Mahdi Ahmed writes

THIS IS A PROMISE, I WRITE
By mahdiahmed on January 8, 2013

After carefully going through the already written eight episodes of VAUDHEKEY MIEE (THIS IS A PROMISE), which was written and is being directed by Abdul Fatthaah, I have started writing the remaining five episodes. A little while ago, I just finished episode nine. But I must confess that outlining the last five was not as easy a task as I first thought it would be. Mostly when I outlined, I was forced to tie up several loose ends. And even in the last five episodes I had to redefine some of the key characters. Since the first four episodes were already shot, I didn't have much choice to tweak the other written four episodes, which go behind the camera on the 10th of this month. I wanted to do a few adjustments here and there but realized that doing so would require rewriting all of them, meaning I wouldn't be able to complete them when the production resumes. So I decided to leave them as they are and continue outlining the last five. I had to do a lot of research, which again was done very little in the already written eight episodes. I hope to complete at least episodes ten and eleven before Fatthaah leaves with his cast and crew to Eydhafushi of Baa Atoll. To do that I have only two days. And I have roughly a week to complete all five. So without spending too much time, I cut short this post for now and turn my focus to writing episode ten.

Tags: Abdul Fatthaah, Baa Atoll, Dramas, rewrite, Vaudhekey Miee

2 Responses to "THIS IS A PROMISE, I WRITE"

amira, January 8, 2013:
I have never taken our film industry seriously. But reading through your post, I am starting to realize how much hard work really goes into the making of any footage. The success of a movie or a series relies very much on the story and screenplay, I guess.

mahdiahmed, January 9, 2013:
I'm glad you're beginning to understand our local film industry. 
As for the hard work that goes into making even a minute-long scene, or mere few seconds even, you just couldn't be more right, you know? And yes, a screenplay contributes heavily to the success of a movie, since the entire movie depends on its foundation. If it's weak, the movie falls flat. But all in all, it's a team effort, and all the departments of film making have to be good. Despite having limited knowledge in film making (which was the case a long time back, but is no excuse in this century, as self-educating in any subject to any level is possible via the internet), I still salute the film makers in our local industry for breaking their bones and limbs to entertain an audience. And most of the time, they really do. Producing low-grade movies every now and then happens not just here, but is a familiar occurrence all over the world. And funny how those productions do have a faithful following. At the end of the day, every film maker is creative in his own right. Cheers!
Dinara Rafikovna Sadretdinova (born June 4, 1976) is a Russian actress and TV presenter. She was born in Moscow, Russia.

Education

Sadretdinova studied at The Russian Academy of Theatre Arts, the Moscow Institute of Open Education and the Training Institute of Television and Radio.

Career

Sadretdinova was presenter of the television program Islam on the satellite channel "AST" from 1999 to 2001. In 2002, she became the TV presenter of the television program "Muslims" on the Information and entertainment channels. In March 2008, Sadretdinova was the guest of honor at the Al Jazeera International Documentary Film Festival in Qatar. She has been the guest of honor and presenter of the "Kazan International Festival of Muslim Cinema" two times. Sadretdinova has also been a part of the International Media Forum "Interaction in the common interest" in the Republic of Adygea. In October 2010, she was a part of the All-Russian female Islamic Conference "Women will save the world" in the Chechen Republic. That year, she was listed among the "best 10 female journalists in the Islamic world" and was invited to the International Conference of Women Journalists, held in Iran, where she was awarded the international prize "The Word Zainab".

References

External links
The Quran in the girls' hearts. Dubai International Holy Quran Award
TV presenter of the weekly TV program "Muslims" tells how she appeared on television and whether the Russian TV viewer should wait for a new programme about Islam
Official site of Dinara
Women conference on "Role of women in modern society" to be held in Saratov
CNN Student News
The headscarf makes me feel more feminine

Russian television personalities
Russian Muslims
Tatar people of Russia
1976 births
Living people
\section{Introduction} During the last 20 years there has been a lot of progress in the design of neural networks (NNs); however, their employment in scientific machine learning with the purpose of learning hidden physics of complex system is relatively recent. In this work, we consider the problem of designing optimal deep NNs for learning tasks such as identifying unknown governing laws and classifying images. To achieve this goal, we pursue a new network architecture that 1) is guaranteed to be independent of the input resolution, 2) is stable in the limit of deep layers, and 3) considers long-range interactions in the feature space (i.e. node-to-node interactions). Among relevant works that use NNs with the purpose of learning governing laws, we mention physics-informed NNs \cite{raissi2019physics} where the solution of a {\it (partially) known} partial differential equation (PDE) is modeled by a deep NN whose weights and biases are learned together with the PDE's unknown parameters. More recently, the use of NNs has been extended to learning maps between inputs of a dynamical system and its state, so that the network is a surrogate for a solution operator and it can be referred to as {\it neural operator} \cite{lu2019deeponet,lu2021learning,li2020neural,li2020multipole,li2020fourier}. This approach finds applicability when constitutive laws are unknown or when the presence of high degrees of heterogeneity makes classical, PDE models inaccurate. Relevant works in this direction are the graph kernel network (GKN) architecture \cite{li2020neural,li2020multipole} (also known as the first form of a integral neural operator), the Fourier neural operator (FNO) architecture \cite{li2020fourier}, and the DeepONet architecture \cite{lu2019deeponet,lu2021learning}. We briefly discuss the implications of the properties 1)--3). 
Being resolution independent implies that the accuracy of the prediction is invariant with respect to the resolution of input parameters such as loadings and material properties. This fact is in stark contrast with classical finite-dimensional approaches which build NN models between finite-dimensional Euclidean spaces, so that their accuracy is tied to the input's resolution \cite{guo2016convolutional,zhu2018bayesian,adler2017solving,bhatnagar2019prediction,khoo2021solving}. Furthermore, being generalizable with respect to different input parameter instances means that once the neural operator is trained, solving for a new instance of the input parameter only requires a forward pass of the network. This property is in contrast with traditional PDE-constrained optimization techniques \cite{de2015numerical} and some NN models which directly parameterize the solution \cite{raissi2019physics,weinan2018deep,bar2019unsupervised,smith2020eikonet,pan2020physics}, as these methods only approximate the solution for a {\it single instance of the input}. Being stable in the limit of deep layers is particularly important when the complexity of the problem at hand requires deep networks to achieve a desired prediction accuracy. This is the case in tasks such as learning governing equations of complex systems (as will be clear later on in the paper) and in image classification tasks. The lack of stability occurs in different forms with error stagnation and vanishing gradients being the most common. Being able to guarantee that, by construction, the network architecture will not incur any of these issues, warrants robustness and trustability of the surrogate. 
Enabling long-range interactions within the set of nodes, or, in other words, node-to-node interactions, makes a neural operator particularly suitable for identifying physical laws for highly-heterogeneous physical systems thanks to the fact that the architecture can explore interactions in the feature space and, as testified by several examples in the literature (see, e.g., convolutional NNs \cite{avelar2019discrete} where parts of the node set interact via convolutional operators), make the architecture suitable for image processing tasks. We point out that achieving these properties is not new and there are several examples in the literature of architectures that achieve some of the properties above. What is lacking, and what we achieve in this paper, is the design of a network architecture that embeds all properties 1)--3). Below, we provide a concise summary of architectures that feature some of our desired properties and highlight their advantages and limitations. In convolutional NNs (CNNs) \cite{avelar2019discrete,lefkimmiatis2017non,o2015introduction}, the interaction of nodes within network layers is achieved via convolutional operators and makes the network particularly suitable for image processing tasks, thanks to its ability to learn complex and nonlinear dependencies in the feature space. In a similar manner, graph neural networks (GNNs) take into account long-range interactions via graph operators \cite{gu2020implicit,iakovlev2020learning,poli2019graph,xhonneux2020continuous}. Despite their success, the applicability of these networks can be hindered by the following issues. First, in both CNNs and GNNs the connection between nodes is achieved via discrete operators, making the resulting network resolution dependent which limits its generalizability and practicability. Second, during the training of GNNs, slow convergence or even divergence may occur, especially in the limit of deep layers \cite{tao2018nonlocal}. 
To circumvent the first issue above while maintaining node-to-node interactions so to achieve resolution-independent networks, a few works in the literature propose to connect nodes within layers by continuous operators \cite{alet2019graph,haber2018learning,li2020neural} and treat the set of nodes as a continuum so that the value of the network at each layer is a continuous function of a ``space'' variable (the nodes) and may be interpreted as the state of a system over the space domain (i.e. the continuum feature space). Among these works, the graph kernel network (GKN) approach, proposed in \cite{li2020neural} can be interpreted as a continuous version of a GNN or of the nonlocal NN introduced in \cite{Wang2018nonlocal}. However, while achieving properties 1) and 3), this architecture may feature instabilities in the limit of deep layers, hence failing to achieve property 2). Despite this, GKNs have been successfully used in PDE learning tasks in the context of Darcy's flow and Navier-Stokes equations \cite{li2020neural,li2020multipole}. With the purpose of improving the stability in GNNs, Tao et al \cite{tao2018nonlocal} proposed a nonlocal NN (NNN) whose network update is characterized by a nonlocal discrete operator \cite{Du2012} that allows one to reinterpret the network as a discretization of a nonlocal diffusion equation, for which stability results are available. This network architecture achieves properties 2) and 3); however, by treating the interactions within nodes in a discrete manner, this architecture is not resolution independent, hence failing at achieving property 1). Moreover, as opposed to GKN's where the integral operators are parameterized, in this architecture the integral operators are defined in advance, so that the only parameters to be learned are the weights of the network. This reduces the descriptive power of these operators that may fail in complex learning tasks, such as in PDE learning problems. 
In fact, in \cite{tao2018nonlocal}, NNNs were employed only in image classification tasks, where they outperformed standard ResNet approaches by adding NNN's network updates within ResNet layers. The architecture we propose can be interpreted as a combination of GKNs and the continuous counterpart of NNN, so that we inherit the advantages of both architectures and circumvent their limitations. Specifically, we treat node-to-node interactions continuously by means of an integral operator that is equivalent to a nonlocal diffusion-reaction operator. As such, our network is guaranteed to be resolution independent and stable even in the deep layer limit. The latter claim is supported by the nonlocal vector calculus theory that allows us to establish stability properties via variational arguments. Our proposed architecture, which we refer to as nonlocal kernel networks (NKN), outperforms GKNs, FNO and NNNs in both PDE learning and image classification tasks. The interpretation of NKNs as a parabolic nonlocal equation also allows us to consider the deep network limit and to exploit initialization methods recently developed for deep CNNs \cite{haber2018learning}. Specifically, we consider a shallow-to-deep initialization technique \cite{haber2018learning,modersitzki2009fair} where optimal parameters learned on shallow networks are considered as (quasi-optimal) initial guesses for deeper networks. The use of NKNs updates within CNNs augmented with the shallow-to-deep technique outperforms standard CNN approaches in image classification tasks. We summarize our major contributions below. \begin{enumerate} \item We introduce a novel deep neural network based on nonlocal theory, referred to as NKN, that models the feature space continuously, by means of integral operators acting on the node domain. 
\item By identifying layers with time instants, NKNs can be interpreted as discretized nonlocal time-dependent diffusion-reaction equations and their limit as the number of layers goes to infinity is a nonlocal parabolic equation. Consequently, by means of the nonlocal vector calculus we can guarantee the stability of NKNs. \item The interpretation of NKNs as a diffusion-reaction equation also allows for accelerated learning techniques for deep networks, such as the shallow-to-deep technique \cite{haber2018learning}, for which optimal parameters of shallow networks are used as initial guesses of deeper networks. \item When applied to the task of learning governing equations, NKNs' accuracy is independent of the resolution of the input so that different input discretizations can be handled in an equally accurate manner. \item When applied to image classification tasks, NKNs not only are stable in the deep network limit but also enable classification of high-resolution images trained with low-resolution images and vice versa. \item NKNs are general and flexible with respect to tasks: not only do they handle both learning governing equations and image classification tasks, but, in both cases they outperform baseline methods. \end{enumerate} \paragraph{Paper Outline} In Section \ref{sec:background} we introduce three network architectures that inspired our work and highlight their advantages and limitations. In Section \ref{sec:nkn} we introduce NKNs and recall fundamental concepts of the nonlocal vector calculus. With these analysis tools, we then prove the stability of NKNs and describe efficient initialization techniques. In Section \ref{sec:experiments} we report several experiments that illustrate the efficacy of our network in comparison with baseline networks such as GKNs, FNOs, NNNs, and multiscale CNNs. 
Specifically, we consider two examples in the context of learning hidden governing laws (using as a reference the Poisson and Darcy equations) and two image data sets for which we perform image classification. In Section \ref{sec:conclusion} we provide a summary of our achievements and concluding remarks. In \ref{sec:newapp_pde}, additional numerical results are provided. \section{Background and Related Work} \label{sec:background} This section provides the necessary background for the rest of the paper and it is organized in two parts. First, we review three approaches recently proposed in the literature that inspired the proposed NKN and highlight their benefits and limitations, as summarized in Table \ref{tab:comparison}. NKNs are designed in such a way that all the benefits of these approaches are preserved, while limitations are overcome. \begin{table} \begin{center} {\small\begin{tabular}{ c | c | c | c | c | c | c } \hline Model & PDE & Image &Continuous in& Resolution & Stability in &Ref\\ & Learning & Classification & Depth (Time) & Independence & Deep Networks&\\ \hline GKN and FNO & \checkmark & -- & -- & \checkmark & -- & \cite{li2020neural,li2020multipole,li2020fourier}\\ NNN & -- & \checkmark & \checkmark & -- & \checkmark & \cite{tao2018nonlocal}\\ \hline NKN & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline \end{tabular}} \end{center} \caption{\small List of properties for GKNs, FNOs, NNNs, and NKNs.} \label{tab:comparison} \end{table} \subsection{Problem statement: learning operators} In this work, we aim to learn an operator between two functions, which can be seen as a mapping between two infinite dimensional spaces, given a collection of observed input-output function pairs. 
Let $D\subset\mathbb{R}^s$ be a bounded open set which is the domain of our input and output functions, we consider the problem of learning a general operator between two Banach spaces of functions taking values in $\mathbb{R}^{d_b}$ and $\mathbb{R}^{d_u}$, respectively. In what follows, we denote the input and output function spaces as $\mathcal{B}=\mathcal{B}(D;\mathbb{R}^{d_b})$ and $\mathcal{U}=\mathcal{U}(D;\mathbb{R}^{d_u})$, respectively. Let $\{\mathbf{b}_j,\mathbf{u}_j\}_{j=1}^N$ be a set of observations where the input $\{\mathbf{b}_j\}\subset\mathcal{B}$ is a sequence of independent and identically distributed random fields from a known probability distribution $\mu$ on $\mathcal{B}$, and $G^\dag(\mathbf{b}_j)=\mathbf{u}_j(\mathbf{x})\in\mathcal{U}$, possibly noisy, is the output of the map $G^\dag:\mathcal{B}\to\mathcal{U}$. We aim to build an approximation of $G^\dag$ by constructing a nonlinear parametric map $$ G(\cdot\,;\,\theta):\mathcal{B}\times\Theta\rightarrow\mathcal{U}, $$ in the form of a NN, for some finite-dimensional parameter space $\Theta$. Here $\theta\in\Theta$ is the set of parameters in the network architecture to be inferred by solving the following minimization problem \begin{equation}\label{eqn:opt} \min_{\theta\in\Theta}\mathbb{E}_{\mathbf{b}\sim\mu}[C(G(\mathbf{b};\theta),G^\dag(\mathbf{b}))]\approx \min_{\theta\in\Theta}\sum_{j=1}^N[C(G(\mathbf{b}_j;\theta),\mathbf{u}_j)], \end{equation} where $C$ denotes a properly defined cost functional $C:\mathcal{U}\times\mathcal{U}\rightarrow\mathbb{R}$. Although $\mathbf{b}_j$ and $\mathbf{u}_j$ are (vector) functions defined on a continuum of points, with the purpose of doing numerical simulations, we assume that they are defined on a discretization of the domain $D$. 
In particular, for each data pair $(\mathbf{b}_j,\mathbf{u}_j)$ we assume observations of $\mathbf{b}_j$ and $\mathbf{u}_j$ are available on an $M$-point discretization of the domain defined as $D_j=\{\mathbf{x}_1,\cdots,\mathbf{x}_M\}\subset D$. With such a discretization, when learning governing laws, a popular choice for the cost functional $C$ is the mean square error, i.e., the difference between $G(\mathbf{b}_j;\theta)$ and $\mathbf{u}_j$ in the $l^2$ norm defined on $D_j$. On the other hand, in image classification tasks, where $\mathbf{b}_j$ represents the pixel values of the input image and $\mathbf{u}_j$ the learnt feature function, which will be connected to a softmax layer for classification, the cost functional (or classification loss) is usually the cross entropy loss \cite{haber2018learning}. To stress the importance and challenges of learning operators, we now consider the problem of learning governing laws as an illustration. Let ${\rm L}_\mathbf{b}$ be a differential operator depending on the parameter $\mathbf{b}$ and consider the PDE \begin{equation}\label{eqn:pde} \begin{aligned} -{\rm L}_\mathbf{b}[\mathbf{u}](\mathbf{x})=\mathbf{f}(\mathbf{x}),\quad&\mathbf{x}\in D,\\ \mathbf{u}(\mathbf{x})=0,\quad&\mathbf{x}\in\partial D, \end{aligned} \end{equation} for a given forcing term $\mathbf{f}$. When the operator ${\rm L}$ is known, existing methods, ranging from the classical discretization of PDEs with known coefficients to modern ML approaches such as the basic version of physics-informed NNs \cite{raissi2019physics}, aim at finding the solution $\mathbf{u}\in\mathcal{U}$ for a single instance of the parameter $\mathbf{b}\in\mathcal{B}$. However, when the operator ${\rm L}$ is unknown, which is the case of interest here, the goal is to provide a {\it neural operator}, i.e. an approximated solution operator, $G(\cdot;\theta):\mathbf{b}\rightarrow \mathbf{u}$ that delivers solutions of the system for any input $\mathbf{b}$. 
The latter problem is not only more realistic, as it is often the case that governing equations are not known for complex systems, but it is also a more challenging task for several reasons. First, in contrast to classical NN approaches where the solution operator is parameterized between finite-dimensional Euclidean spaces \cite{guo2016convolutional,zhu2018bayesian,adler2017solving,bhatnagar2019prediction,khoo2021solving}, neural operators are discretization and resolution independent. Therefore, \textit{no further modification or tuning will be required for different resolutions and discretizations} in order to achieve an equally accurate solution. Specifically, the neural operator generalizes to different grid geometries and discretizations. Second, for every new instance of $\mathbf{b}$, neural operators require only a forward pass of the network. Therefore, the optimization problem \eqref{eqn:opt} \textit{only needs to be solved once and the resulting NN can be utilized to solve for multiple instances of the input parameter}. This property is in contrast to the classical numerical PDE methods \cite{leveque2007finite,zienkiewicz1977finite,karniadakis2005spectral} and some ML approaches \cite{raissi2019physics,weinan2018deep,bar2019unsupervised,smith2020eikonet,pan2020physics}, where the optimization problem needs to be solved for every new instance of the input parameter of a known differential operator ${\rm L}$. Lastly, of fundamental importance is the fact that neural operators can find solution maps regardless of the presence of an underlying PDE and only require the observed data pairs $\{(\mathbf{b}_j,\mathbf{u}_j)\}_{j=1}^N$. Examples include experimental measurements \cite{ranade2021generalized} and molecular dynamics simulations \cite{kim2019peri} for which an upscaled PDE is not available.
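As a minimal illustration of the discrete objective in \eqref{eqn:opt}, the sketch below evaluates an empirical mean-squared-error risk over synthetic data pairs; the one-parameter map \texttt{G\_theta}, the grid size, and the data are all hypothetical stand-ins, not the neural operators discussed in this work.

```python
import numpy as np

# Toy empirical-risk surrogate for the minimization problem (1): the expected
# cost over mu is replaced by an average of per-sample MSE costs on an
# M-point discretization D_j.  G_theta is a placeholder, not an actual NN.
M = 101
rng = np.random.default_rng(0)

def G_theta(b, theta):
    # hypothetical one-parameter map: pointwise scaling
    return theta * b

def cost(u_pred, u_true):
    # mean squared error on the M-point discretization
    return float(np.mean((u_pred - u_true) ** 2))

# synthetic observations generated by the "true" operator b -> 2 b
pairs = [(b, 2.0 * b) for b in (rng.standard_normal(M) for _ in range(5))]

def empirical_risk(theta):
    # Monte Carlo surrogate of E_{b ~ mu}[C(G(b; theta), G^dag(b))]
    return sum(cost(G_theta(b, theta), u) for b, u in pairs) / len(pairs)
```

In this toy setting the risk vanishes exactly at the data-generating parameter, mimicking how the trained network parameters are selected.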
\subsection{Three relevant network architectures} In this section, we discuss the network architecture of three baseline methods, namely, GKNs and the general integral kernel networks \cite{li2020neural,li2020multipole,li2020fourier}, NNNs \cite{tao2018nonlocal}, and multiscale CNNs \cite{haber2018learning}. To provide a consistent description of all three networks and illustrate their connections with the proposed NKN architecture, we describe each model following a formulation similar to the one presented in \cite{li2020neural,li2020multipole,li2020fourier}. First, we lift the input $\mathbf{b}(\cdot)\in\mathcal{B}$ to a higher dimensional representation $\mathbf{h}(\cdot,0)$ that corresponds to the first network layer; here, we identify the first argument of $\mathbf{h}$ with space (the set of nodes) and the second argument with time (the set of layers). Second, we formulate the NN architecture in an iterative manner: $\mathbf{h}(\cdot,0)\rightarrow \mathbf{h}(\cdot,\Delta t)\rightarrow\mathbf{h}(\cdot,2\Delta t)\rightarrow \cdots \rightarrow \mathbf{h}(\cdot,T)$, where $\mathbf{h}(\cdot,j\Delta t)$, $j=0,\cdots,L:=T/\Delta t$, is a sequence of functions representing the values of the architecture at each layer, taking values in $\mathbb{R}^{d}$. Third, the output $\mathbf{u}(\cdot)\in\mathcal{U}$ is obtained by projecting $\mathbf{h}(\cdot,T)$ onto $\mathcal{U}$. In what follows, we provide rigorous descriptions of these three steps. Given an input vector field $\mathbf{b}(\mathbf{x}):\mathbb{R}^s\to\mathbb{R}^{d_b}$, we define the first network layer as $$\mathbf{h}(\mathbf{x},0)=P(\mathbf{x},\widetilde{\mathbf{b}}(\mathbf{x}),\nabla\widetilde{\mathbf{b}}(\mathbf{x}))+\mathbf{p},$$ {where $\widetilde{\mathbf{b}}$ represents a smoothed version of $\mathbf{b}$, i.e. a continuous function of $\mathbf{x}$. A common smoothing technique is given by Gaussian kernels \cite{li2020neural}.
This step is helpful because inputs are usually given in the form of vectors, e.g. function evaluations at grid points or pixel values of an image. As anticipated above, we treat the nodes within a layer as a continuum, so that each layer has an infinite number of nodes, i.e. infinite width. As such, each layer can be represented by a function of the continuum set of nodes $D\subset\mathbb{R}^s$.\footnote{Considering an infinite width, i.e. defining neural networks in infinite-dimensional spaces, is not new and has been studied in, e.g., \cite{Williams1996,Roux2007}.} Then we denote the $l$-th network layer by $\mathbf{h}(\mathbf{x},l\Delta t):\mathbb{R}^s\times \mathbb N^+\to{\mathbb{R}^d}$, or, equivalently, $\mathbf{h}(\mathbf{x},l\Delta t)=\mathbf{h}(\mathbf{x},t):\mathbb{R}^s\times(0,T]\to{\mathbb{R}^d}$. Here, $l=0$ (or equivalently, $t=0$) denotes the initial layer, whereas $t=L\Delta t$ (or $t=T$) denotes the last layer. The use of the symbol $t$ stems from the relationship that can be established between the network update and a time advancing scheme (or, in the limit of infinite layers, a dynamical system). The final output, computed using the network's last layer, is defined as $\mathbf{u}(\mathbf{x})=Q\mathbf{h}(\mathbf{x},T)+\mathbf{q}$. Here, $P\in\mathbb{R}^{d\times(s+2d_b)}$, $Q\in\mathbb{R}^{d_u\times d}$, $\mathbf{p}\in\mathbb{R}^{d}$ and $\mathbf{q}\in\mathbb{R}^{d_u}$ are appropriately sized matrices and vectors that are part of the parameter set that we aim to learn}. We stress the fact that $\mathbf{h}$ is a vector of dimension $d$ and, as such, a network layer has $d$ sets of nodes, each one associated with a component of $\mathbf{h}$. \paragraph{Graph kernel networks (GKNs)} Proposed in the context of learning governing equations, the GKN introduced in \cite{li2020neural} is rooted in the representation of the solution of a PDE via the Green's function.
Here, {for an $L-$layer NN,} the $l-$th layer network update is given by \begin{equation}\label{eq:gkn} \mathbf{h}(\mathbf{x},l+1)=\sigma\left(R\mathbf{h}(\mathbf{x},l)+\int_D k(\mathbf{x},\mathbf{y},\mathbf{b}(\mathbf{x}),\mathbf{b}(\mathbf{y});\mathbf{v})\mathbf{h}(\mathbf{y},l) d\mathbf{y} + \mathbf{c}\right). \end{equation} Here, $\sigma$ is an activation function, $R\in\mathbb{R}^{d\times d}$ is a tunable tensor, $\mathbf{c}\in\mathbb{R}^d$ a constant vector and $k\in\mathbb{R}^{d\times d}$ a tensor kernel function that takes the form of a (usually shallow) NN whose parameters $\mathbf{v}$ are to be learned. In GKNs, different layers share the same parameters $\mathbf{v}$, $R$ and $\mathbf{c}$, and the kernel $k$ is therefore layer-independent. This network update resembles the original ResNet block \cite{He2016Resnet} where the usual discrete affine transformation is substituted by a continuous integral operator. In contrast to the networks that we consider later on, unless $\sigma$ is the identity operator, we cannot establish a connection between \eqref{eq:gkn} and a discretized PDE or an ordinary differential equation. {While in the original version of GKNs the integral is extended to the whole set $D$, for efficiency purposes, restrictions to a ball of radius $r$ centered at $\mathbf{x}$, i.e. $B_r(\mathbf{x})$, can also be considered, keeping in mind that this choice might compromise the accuracy}. Single-layer and shallow GKNs have been shown to be successful in learning governing equations for, e.g., the Darcy \cite{li2020neural} and Burgers \cite{li2020multipole} equations. The most notable advantage of this approach is that the learnt network parameters are resolution-independent: the learned $R$, $\mathbf{c}$, and $\mathbf{v}$ are optimal even when used with different resolutions, i.e. with different partitions/discretizations of the feature space $D$.
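For concreteness, the update \eqref{eq:gkn} can be sketched on a uniform one-dimensional grid as follows, with the integral replaced by a Riemann sum; the kernel, scalar weight, and grid below are illustrative placeholders for the learned quantities (here $d=1$, so $R$ and $\mathbf{c}$ reduce to scalars).

```python
import numpy as np

# Hedged sketch of one GKN layer: sigma(R h + \int_D k(x,y) h(y) dy + c),
# with the integral approximated by quadrature on a uniform grid.
M = 64                                  # grid points; channel width d = 1
x = np.linspace(0.0, 1.0, M)
dx = 1.0 / (M - 1)

def k(xi, yj):
    # fixed stand-in for the learnable kernel network k(x, y, b(x), b(y); v)
    return np.exp(-10.0 * (xi - yj) ** 2)

def gkn_layer(h, R=0.5, c=0.0, sigma=np.tanh):
    K = k(x[:, None], x[None, :])       # (M, M) kernel matrix
    integral = K @ h * dx               # Riemann-sum approximation of the integral
    return sigma(R * h + integral + c)

h0 = np.sin(2 * np.pi * x)
h1 = gkn_layer(h0)
```

Because the same $k$, $R$, and $c$ would be reused at every layer, stacking this function $L$ times reproduces the layer-independent GKN iteration.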
Even though not exploited in \cite{li2020neural}, resolution-independence can be critical in image transfer learning tasks. However, in the presence of complex learning tasks, shallow networks might not be sufficiently accurate, so that deep networks become mandatory. As we illustrate in the numerical studies of Section \ref{sec:experiments}, the major drawback of GKNs is their instability with respect to an increasing number of layers; in fact, as the GKN becomes deeper, either there is no gain in accuracy or the loss function even increases. \paragraph{Fourier neural operators (FNOs)} We also mention a recent variant of integral neural operators, namely the Fourier neural operator (FNO) proposed in \cite{li2020fourier}, where the integral kernel $k$ is parameterized in Fourier space. In particular, FNO drops the dependence of the kernel $k$ on the input $\mathbf{b}$ and assumes that $k(\mathbf{x},\mathbf{y};\mathbf{v}):=k(\mathbf{x}-\mathbf{y};\mathbf{v})$. The integral operator in \eqref{eq:gkn} then becomes a convolution operator so that $k$ can be parameterized in Fourier space. The corresponding $l-$th layer update is then given by \begin{equation}\label{eq:fno} \mathbf{h}(\mathbf{x},l+1)=\sigma\left(R(l)\mathbf{h}(\mathbf{x},l)+\mathcal{F}^{-1}(\mathcal{F}(k(\cdot;\mathbf{v}_l))\cdot \mathcal{F}(\mathbf{h}(\cdot,l)))(\mathbf{x})+ \mathbf{c}(l)\right), \end{equation} where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and its inverse, respectively. {Here we use $R(l)$, $\mathbf{c}(l)$ and $\mathbf{v}_l$ to highlight the fact that in FNOs, each layer has different parameters (i.e. different kernels, weights and biases).} This is in contrast with the layer-independent kernel in the original GKNs. As a consequence, the memory consumption of FNOs increases with the number of layers, which makes the training process of FNOs more challenging and potentially prone to over-fitting.
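A hedged sketch of the Fourier layer \eqref{eq:fno} on a uniform periodic grid follows; the Fourier coefficients of the kernel, which would be the learnable parameters $\mathbf{v}_l$, are illustrative constants here, truncated to a few low modes as in FNO.

```python
import numpy as np

# Sketch of one FNO layer: since k(x - y) makes the integral a convolution,
# it is evaluated as F^{-1}(F(k) . F(h)) via the FFT.
M = 64
x = np.linspace(0.0, 1.0, M, endpoint=False)   # periodic grid
modes = 8                                      # retained low modes (assumption)

k_hat = np.zeros(M, dtype=complex)
k_hat[:modes] = 1.0 / (1.0 + np.arange(modes))  # stand-in for learned weights

def fno_layer(h, R=0.5, c=0.0, sigma=np.tanh):
    conv = np.fft.ifft(k_hat * np.fft.fft(h)).real  # spectral convolution
    return sigma(R * h + conv + c)

h0 = np.sin(2 * np.pi * x)
h1 = fno_layer(h0)
```

Truncating to a fixed number of modes is what keeps the parameterization resolution independent, at the price of per-layer parameter sets that grow with depth.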
\paragraph{Nonlocal neural networks (NNNs)} To circumvent the instability of the nonlocal network architecture proposed in \cite{Wang2018nonlocal} (similar to a GNN), the paper \cite{tao2018nonlocal} introduces a modified nonlocal architecture where the nonlocal operator is augmented in such a way that it corresponds to a discrete nonlocal diffusion operator. Here, the set of nodes is not treated as a continuum and the {\it discrete} network update is defined as \begin{equation}\label{eq:nnn} \mathbf{h}_i(l+1)=\mathbf{h}_i(l)+ R(l)\sum_{j=1}^M k(i,j) (\mathbf{h}_j(l)-\mathbf{h}_i(l)), \end{equation} where the subscript $i$ indicates the node and $l=0,\cdots,L$ still denotes the layer, and where the only parameters to be learned are the entries of the ``weight'' matrix $R(l)\in \mathbb{R}^{d\times d}$ at every layer. The pairwise affinity function $k(i,j)\in\mathbb{R}$ is given and is usually a symmetric, nonnegative function. The introduction of the term $(\mathbf{h}_j(l)-\mathbf{h}_i(l))$ in \cite{tao2018nonlocal}, in place of $\mathbf{h}_j(l)$ only as in \cite{Wang2018nonlocal}, significantly improves the accuracy of the network when utilized for image processing tasks. In particular, the network update \eqref{eq:nnn}, also called ``nonlocal block'', is used within more standard networks, such as ResNets, with the purpose of improving their accuracy thanks to the fact that nonlocal blocks take into account long-range node interactions. The major drawback of this architecture, being formulated at the discrete level, is that it cannot be resolution independent and, hence, the learned parameters are not optimal when utilized within networks of different width.
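The discrete update \eqref{eq:nnn} can be sketched as follows for scalar node features ($d=1$), with an illustrative symmetric affinity $k(i,j)$ and a scalar weight $R$; note that, since the update involves only differences $\mathbf{h}_j-\mathbf{h}_i$, constant features are fixed points of the block, as expected of a diffusion operator.

```python
import numpy as np

# Sketch of one "nonlocal block": h_i <- h_i + R * sum_j k(i,j) (h_j - h_i),
# with a fixed symmetric, nonnegative affinity and an illustrative weight R.
M = 32
idx = np.arange(M)
K = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 4.0)   # affinity k(i, j)

def nnn_block(h, R=0.1):
    diff = h[None, :] - h[:, None]            # entry (i, j) holds h_j - h_i
    return h + R * np.sum(K * diff, axis=1)   # sum over j

h0 = np.where(idx < M // 2, 1.0, 0.0)         # step profile gets diffused
h1 = nnn_block(h0)
```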
As this property can only be achieved in the presence of continuous operators, we report for the sake of completeness the continuous version of the NNN in \eqref{eq:nnn} \begin{equation}\label{eq:nnn-cont} \mathbf{h}(\mathbf{x},l+1)=\mathbf{h}(\mathbf{x},l)+R(l)\int_D k(\mathbf{x},\mathbf{y})(\mathbf{h}(\mathbf{y},l)-\mathbf{h}(\mathbf{x},l)) d\mathbf{y}, \end{equation} where $k$ is a given, symmetric, nonnegative function of its arguments. While a comparison of \eqref{eq:nnn-cont} with GKNs has not been conducted in the literature, we expect the latter to be outperformed in the limit of deep networks for stability reasons. However, in the shallow case, GKNs are likely to perform better due to their increased descriptive power as the kernel $k$ is part of the unknowns while in \eqref{eq:nnn-cont} it is given. We point out that the GKN architecture \eqref{eq:gkn} can also be seen as the continuous version of the nonlocal network proposed in \cite{Wang2018nonlocal}, where the authors introduce a discrete update based on convolution operators acting on nodes, at the discrete level. As such, the approach in \cite{Wang2018nonlocal} not only does not feature resolution independence, but also shows instabilities in the deep network limit, as pointed out in \cite{tao2018nonlocal}. \paragraph{Multiscale CNN} Paper \cite{haber2018learning} introduces a new approach to training CNNs that allows for ``learning across scales'' (i.e. for independence with respect to width and depth). By reinterpreting the CNN architecture as a discretization of a time-dependent nonlinear differential equation, the network depth corresponds to advancing in time. When the network is stable, the idea of \cite{haber2018learning} is to interpolate and reuse optimal parameters of a shallow network into a deeper one. 
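A minimal sketch of this interpolate-and-reuse initialization follows, assuming one scalar parameter per layer placed at the layer (time) midpoints; the trained values and depths are hypothetical.

```python
import numpy as np

# Shallow-to-deep sketch: parameters trained with L layers (time step T/L)
# are interpolated in time to seed a deeper L_tilde-layer network.
T, L, L_tilde = 1.0, 4, 8
theta_L = np.array([0.2, 0.5, 0.1, -0.3])          # hypothetical trained params

t_coarse = (np.arange(L) + 0.5) * (T / L)          # layer midpoints, depth L
t_fine = (np.arange(L_tilde) + 0.5) * (T / L_tilde)
theta_init = np.interp(t_fine, t_coarse, theta_L)  # initial guess, depth L_tilde
```

The final time $T$ is unchanged; only the time step is refined, so the interpolated parameters are a consistent starting point for the deeper network.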
More specifically, by identifying the number of layers with the number of time steps in a time-discretization scheme, they employ multilevel learning algorithms that accelerate the training of deep NNs by solving a series of learning problems from shallow to deep architectures. We refer to the resulting technique as shallow-to-deep learning. Formally, let $t=l\Delta t$; then, the $(l+1)$-th network layer is given by \begin{equation}\label{eq:cnn} \mathbf{h}(t+\Delta t)=\mathbf{h}(t)+ \Delta t\, \sigma(R(\mathbf{k};t)\mathbf{h}(t)+\mathbf{c}), \end{equation} where $\sigma$ is an activation function, $R(\mathbf{k};t)\in\mathbb{R}^{d\times d}$ is a convolution matrix (a circulant matrix that depends on the convolution kernel $\mathbf{k}$), and $\mathbf{c}$ is a bias vector. It is easy to see that by dividing both sides of \eqref{eq:cnn} by $\Delta t$, the term $(\mathbf{h}(t+\Delta t)-\mathbf{h}(t))/\Delta t$ corresponds to the discretization of a first order derivative so that this architecture can indeed be interpreted as a nonlinear differential equation in the limit of deep layers, i.e. as $\Delta t\to 0$. Thus, when the real parts of the eigenvalues of the convolution matrix and the time step are sufficiently small, this architecture is stable with respect to the number of layers. The shallow-to-deep learning mentioned above corresponds to training the network for increasing values of network layers and using optimal parameters obtained {with $L$ layers as initial guesses for the $\tilde{L}$-layer CNN}, after appropriate scaling and interpolation across layers. {Here $\tilde{L}>L$.} {We point out that, even though successful in image processing tasks, standard CNNs are not resolution independent unless appropriately modified (via, e.g., multiscale or multigrid methods \cite{haber2018learning}).
Furthermore, due to the fact that interactions between nodes occur only in limited node-windows, they are not as flexible as, e.g., NNNs where node-to-node interactions are extended to the {whole node set}.} \section{Nonlocal Kernel Networks (NKN)}\label{sec:nkn} To overcome the limitations of the architectures mentioned in Section \ref{sec:background} and still preserve their benefits, in this section, we propose a new, stable, and resolution-independent network update. We first describe the Nonlocal Kernel Network (NKN) architecture and review relevant definitions and results of the nonlocal vector calculus. These tools are then used to prove the stability properties of the proposed network architecture in the deep-layer limit. Lastly, we illustrate how to perform shallow-to-deep training, exploiting the stability of the network in the limit of deep layers. \subsection{The network architecture} Using the same notation of Section \ref{sec:background}, we introduce the network update for the proposed NKN architecture. Let $t=l\Delta t$, with $l$ the current layer, and, as before, let $\mathbf{x}\in D$ span the continuum set of nodes within each layer. We propose the following iterative network update formulation {\begin{equation}\label{eq:NKN} \mathbf{h}(\mathbf{x},t+\Delta t)=\mathbf{h}(\mathbf{x},t)+ {\Delta t}\left(\int_D k(\mathbf{x},\mathbf{y},\mathbf{b}(\mathbf{x}),\mathbf{b}(\mathbf{y});\mathbf{v})(\mathbf{h}(\mathbf{y},t)-\mathbf{h}(\mathbf{x},t)) d\mathbf{y}-R(\mathbf{x};\mathbf{w})\mathbf{h}(\mathbf{x},t)+\mathbf{c}\right). \end{equation}} As for GKNs, the kernel tensor function $k\in\mathbb{R}^{d\times d}$ is modeled by a NN parameterized by $\mathbf{v}$. To enhance the descriptive power and stability properties of the network, a reaction term is added to the right-hand side. Here, the tensor function $R\in\mathbb{R}^{d\times d}$ is modeled by another NN parameterized by $\mathbf{w}$.
Both $k$ and $R$ are usually shallow NNs, such as the multilayer perceptron (MLP) employed in our numerical examples. Their depth and width depend on the specific application and will be specified later on. Note that the integral operator on the right-hand side of \eqref{eq:NKN} can be interpreted as a nonlocal Laplacian $\mathcal L_k[\cdot]$, as clarified in the following section. The NKN architecture above preserves the continuous, integral treatment of the interactions between nodes that characterizes GKNs and replaces the integral operator acting on $\mathbf{h}(\mathbf{y},t)$ in that formulation with the continuous version of the nonlocal diffusion operator introduced in \cite{tao2018nonlocal}, as defined in \eqref{eq:nnn-cont}. While the resemblance with GKNs enables resolution independence with respect to the inputs, the use of the nonlocal operator provides rigorous analysis tools that will allow us to show that the architecture is stable in the deep network limit. We point out that in our formulation the network parameters are not time-dependent, i.e. they are constant across the layers; this feature enables the straightforward application of the shallow-to-deep initialization technique and reduces the computational effort and memory allocation. The idea of using constant parameters across layers was also proposed in implicit networks \cite{el2021implicit,bai2019deep,winston2020monotone,bai2020multiscale}, where fixed-point methods are employed as an efficient training procedure. In Table \ref{tab:comparison} we summarize relevant properties of NKNs in comparison with GKNs and NNNs. These statements are confirmed and illustrated by both the theoretical results presented in the following sections and by the numerical tests reported in Section \ref{sec:experiments}. In summary, being resolution independent and stable in the limit of deep layers makes the NKN architecture a viable tool for both PDE learning and image processing tasks.
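One explicit step of the update \eqref{eq:NKN} can be sketched on a uniform one-dimensional grid as follows; the kernel $k$ and reaction $R$ below are fixed illustrative functions standing in for the shallow NNs described above, and the integral is a Riemann sum.

```python
import numpy as np

# Sketch of one NKN step:
#   h <- h + dt * ( \int_D k(x,y) (h(y) - h(x)) dy - R(x) h(x) + c ).
M = 64
x = np.linspace(0.0, 1.0, M)
dx = 1.0 / (M - 1)
K = np.exp(-50.0 * (x[:, None] - x[None, :]) ** 2)   # stand-in for k(...; v)
R = 1.0 + 0.5 * x                                    # stand-in for R(x; w)

def nkn_layer(h, dt=0.05, c=0.0):
    diffusion = (K @ h - K.sum(axis=1) * h) * dx     # \int k (h(y) - h(x)) dy
    return h + dt * (diffusion - R * h + c)

h0 = np.sin(2 * np.pi * x)
h1 = nkn_layer(h0)
```

With a positive reaction $R$ and a small enough time step, repeated application of this step contracts the state, previewing the stability result of the next sections.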
\subsection{Connection to the nonlocal vector calculus}\label{sec:nonlocal-calculus} In this section we recall important concepts of the nonlocal vector calculus that are useful to prove stability properties of the proposed network architecture \eqref{eq:NKN}. Note that, for the sake of simplicity, we limit our description to the scalar case for which $h:\mathbb{R}^s\times (0,T]\to \mathbb{R}$, {although the description and analysis can be extended to the vector case $\mathbf{h}:\mathbb{R}^s\times (0,T]\to\mathbb{R}^d$.} For more details on this topic we refer the reader to the review articles \cite{DeliaDuEtAl2020_NumericalMethodsNonlocalFractionalModels,du2013nonlocal}. The main feature of nonlocal models is that every point in a domain of interest, $D\subset\mathbb{R}^s$, interacts with a {\it nonlocal neighborhood} of points, usually described by the Euclidean ball $B_r(\mathbf{x})$. As a consequence, when solving a nonlocal equation in a bounded domain, boundary conditions must be prescribed on a {\it nonlocal boundary}, which accounts for all the points outside of $D$ that interact with $D$. We refer to this set of points as the interaction domain and denote it by $D_I$. When the nonlocal neighborhood is $B_r(\mathbf{x})$, the interaction domain corresponds to a layer of thickness $r$ surrounding the domain (see Figure \ref{fig:domain}), where nonlocal boundary conditions must be prescribed to guarantee well-posedness of solutions. We denote the union of domain and interaction domain by $\overline D$. \begin{figure}[t] \centering \includegraphics[width=0.3\columnwidth]{domain.pdf} \caption{Two dimensional $(s=2)$ illustration of domain $D$, interaction domain $D_I$, and nonlocal neighborhood $B_r(\mathbf{x})$.} \label{fig:domain} \end{figure} The nonlocal vector calculus \cite{d2021towards,Delia2017,Du2012} provides a variational setting that allows one to study nonlocal equations in much the same way as classical PDEs.
Given a square integrable kernel function $k:\overline D\times\overline D\to\mathbb R^+$ with compact support in $B_r(\mathbf{x})$, the nonlocal Laplacian operator is defined as \begin{equation}\label{eq:nonlocal-Laplacian} \mathcal{L}_k[h](\mathbf{x}) = \int_{\overline D} k(\mathbf{x},\mathbf{y})(h(\mathbf{y})-h(\mathbf{x})) d\mathbf{y}. \end{equation} In this work we consider parabolic nonlocal diffusion-reaction equations due to the resemblance of our network architecture with such equations in the limit of deep layers. We define the strong form of such an equation as follows: given a reaction term $R:D\to\mathbb{R}$ such that $0<R_0\leq R(\mathbf{x}) \leq R_1<\infty$, a constant forcing term $c\in\mathbb{R}$, a kernel $k$ with the above properties and an initial state $h_0(\mathbf{x})$, find $h:\overline D\times[0,T]\to\mathbb{R}$ such that \begin{equation}\label{eq:nonlocal-parabolic_1} \begin{aligned} &\dfrac{\partial h}{\partial t}(\mathbf{x},t) -\mathcal{L}_k[h](\mathbf{x},t) +R(\mathbf{x})h(\mathbf{x},t)=c, & (\mathbf{x},t)\in D\times[0,T], \\[2mm] &h(\mathbf{x},0) = h_0(\mathbf{x}), & \mathbf{x}\in \overline D, \\[2mm] &h(\mathbf{x},t) = 0, & (\mathbf{x},t)\in D_I\times[0,T], \end{aligned} \end{equation} where the last condition is the nonlocal counterpart of a homogeneous Dirichlet boundary condition, prescribed on the interaction domain $D_I$. We denote by $\mathcal{A}$ the nonlocal elliptic operator $\mathcal{A}_k[\cdot]= -\mathcal{L}_k[\cdot]+R(\mathbf{x}) \, [\cdot]$ that features a nonlocal diffusion component and a (classical) reaction component, respectively. By using the nonlocal vector calculus we can analyze the variational form of \eqref{eq:nonlocal-parabolic_1}, which we introduce next.
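On a grid, the operator \eqref{eq:nonlocal-Laplacian} with a compactly supported kernel can be sketched as follows (illustrative kernel, Riemann-sum quadrature); as expected of a diffusion operator, it annihilates constants and is positive on convex functions away from the boundary.

```python
import numpy as np

# Discrete sketch of the nonlocal Laplacian
#   L_k[h](x_i) = sum_j k(x_i, x_j) (h_j - h_i) dx
# with an illustrative square-integrable kernel supported on B_r(x).
M, r = 101, 0.105          # r chosen between grid distances to avoid float ties
x = np.linspace(0.0, 1.0, M)
dx = 1.0 / (M - 1)
dist = np.abs(x[:, None] - x[None, :])
K = np.where(dist <= r, 1.0 / r ** 2, 0.0)   # compact support, nonnegative

def nonlocal_laplacian(h):
    return (K @ h - K.sum(axis=1) * h) * dx
```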
Given a kernel $k$ defined as above, a reaction coefficient $R\in L^\infty(D)$ such that $0<R_0\leq R(\mathbf{x}) \leq R_1<\infty$, a constant $c\in\mathbb{R}$, and an initial state $h_0\in L_0^2(\overline D)$, the weak solution $h\in L^2(0,T;L^2_0(\overline D))$ of \eqref{eq:nonlocal-parabolic_1} satisfies, for all $\eta\in L^2_0(\overline D)$, \begin{equation}\label{eq:parabolic-weak_1} \int_D \dfrac{\partial h}{\partial t}\eta \,d\mathbf{x} + \int_D \mathcal{A}[h]\eta\,d\mathbf{x} = \int_D c \,\eta \,d\mathbf{x}, \end{equation} where $L^2_0(\overline D)$ is the space of square integrable functions on $\overline D$ that are zero on $D_I$. By using nonlocal integration by parts \cite{Du2012}, we have that $$ -\int_D \mathcal{L}_k[h]\eta\,d\mathbf{x} = \frac{1}{2}\iint_{\overline{D}\times \overline{D}}(h(\mathbf{y},t)-h(\mathbf{x},t))(\eta(\mathbf{y})-\eta(\mathbf{x}))k(\mathbf{x},\mathbf{y})\,d\mathbf{y}\,d\mathbf{x}=: a_k(h,\eta), $$ where we have exploited the fact that $h=0$ in $D_I$. The nonlocal vector calculus theory \cite{Du2012} guarantees that for square integrable, compactly supported, kernel functions, the bilinear form $a_k(\cdot,\cdot)$ induces an inner product on $L^2_0(\overline{D})$, or, in other words, there exist positive constants $\underline{C}$ and $\overline{C}$ such that \begin{equation}\label{eq:norm-equivalence} \underline{C} \|\eta\|_{L^2(\overline{D})}\leq \sqrt{a_k(\eta,\eta)}\leq\overline{C}\|\eta\|_{L^2(\overline{D})}, \quad \forall\,\eta\in L^2_0(\overline D). \end{equation} This property implies that the bilinear form associated with $\mathcal{A}$ is coercive and continuous in the $L^2$ metric, yielding well-posedness of equation \eqref{eq:parabolic-weak_1}. \subsection{NKNs as stable parabolic nonlocal equations}\label{sec:stability} In this section we analyze the mathematical properties of the NKN model.
Without loss of generality, and to be consistent with Section \ref{sec:nonlocal-calculus}, we consider the case for which $\mathbf{h}:D\to\mathbb{R}$. Thus, we denote the network by $h$. With the purpose of highlighting the connection to a time discretization scheme, we divide both sides of \eqref{eq:NKN} by $\Delta t$ and rewrite the NKN update as \begin{equation}\label{eq:euler} \dfrac{h(\mathbf{x},t+\Delta t)-h(\mathbf{x},t)}{\Delta t} -\mathcal{L}_k[h](\mathbf{x},t) +R(\mathbf{x})h(\mathbf{x},t)=c. \end{equation} Here, we note that the first term on the left-hand side corresponds to the explicit Euler discretization of a time derivative. As such, we can claim that the limit as $\Delta t\to 0$ of \eqref{eq:euler} corresponds to \begin{equation}\label{eq:nonlocal-parabolic} \dfrac{\partial h}{\partial t}(\mathbf{x},t) -\mathcal{L}_k[h](\mathbf{x},t) +R(\mathbf{x})h(\mathbf{x},t)=c. \end{equation} As described in Section \ref{sec:nonlocal-calculus}, \eqref{eq:nonlocal-parabolic} is a parabolic nonlocal equation with nonlocal elliptic operator $\mathcal{A}_k[\cdot]= -\mathcal{L}_k[\cdot]+R(\mathbf{x}) \, [\cdot]$. Standard variational theory and the nonlocal vector calculus enable the analysis of the weak form of \eqref{eq:nonlocal-parabolic} for which we prove well-posedness and a-priori bounds on the solution in the following theorem. \begin{thm}\label{thm} {Let $k\in L^2(\overline D\times\overline D)$, $R\in L^\infty(D)$ such that $0<R_0\leq R(\mathbf{x}) \leq R_1<\infty$, $c\in\mathbb{R}$, and $h_0\in L_0^2(\overline D)$. 
Then, problem \eqref{eq:nonlocal-parabolic_1} is well-posed and, in particular, for all $t>0$,} \begin{equation}\label{eq:a-priori_1} \|h(\cdot,t)\|^2_{L^2(\overline{D})}+ \widetilde C\int_0^t \|h(\cdot,s)\|^2_{L^2(\overline{D})}\,ds \leq \|h_0\|^2_{L^2(\overline{D})} +\dfrac{c^2|\overline{D}|t}{2\widetilde C}, \end{equation} {\it where $\widetilde C=\underline{C}^2(\underline{C}^2+R_0)$.} \end{thm} \begin{proof} Property \eqref{eq:norm-equivalence} and the bounds on the reaction term $R$ imply that the bilinear form associated with the operator $\mathcal{A}$ is coercive and continuous in $L^2_0(\overline D)$. In fact, the following inequalities hold \begin{equation}\label{eq:coer-cont} \begin{aligned} &\int_D \mathcal{A}[h]h\,d\mathbf{x} \geq (\underline{C}^2+R_0)\|h\|^2_{L^2(\overline{D})} & \quad \text{(coercivity),}\\ &\left| \int_D \mathcal{A}[h]\eta\,d\mathbf{x} \right| \leq (\overline{C}^2+R_1)\|h\|_{L^2(\overline{D})} \|\eta\|_{L^2(\overline{D})} &\quad \text{(continuity).} \end{aligned} \end{equation} Continuity and coercivity, combined with the continuity of the functional $\int_D c\eta d\mathbf{x}$, are sufficient conditions for the well-posedness of equation \eqref{eq:parabolic-weak_1}. Furthermore, by using standard arguments of variational PDE theory (see, e.g., \cite{d2021analysis}), the paper \cite{Delia2017} shows that the unique solution $h\in L^2(0,T;L^2_0(\overline D))$ satisfies the a priori bound \eqref{eq:a-priori_1} for all $t>0$. We note that the theory developed in \cite{Mengesha2013} allows us to extend this result to sign-changing kernels, like the one utilized in this work. Finally, we point out that the arguments used in this proof can be extended to the vector case $\mathbf{h}\in\mathbb{R}^d$. \end{proof} As a consequence, for any given final time, the solution $h(\mathbf{x},t)$ is guaranteed to be bounded.
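As an illustrative numerical check of this boundedness (not part of the original analysis), one can integrate a semi-discrete analogue of \eqref{eq:nonlocal-parabolic_1} with explicit Euler and monitor the $L^2$ norm; the kernel, reaction, and source below are arbitrary choices satisfying the assumptions, and the volume constraint on $D_I$ is omitted for brevity.

```python
import numpy as np

# Explicit-Euler integration of h_t = L_k[h] - R h + c on a 1D grid,
# with an illustrative compactly supported kernel and 1 <= R <= 1.5.
M, r, dt, steps = 101, 0.105, 1e-3, 2000
x = np.linspace(0.0, 1.0, M)
dx = 1.0 / (M - 1)
dist = np.abs(x[:, None] - x[None, :])
K = np.where(dist <= r, 1.0 / r ** 2, 0.0)
R, c = 1.0 + 0.5 * x, 0.3

h = np.sin(np.pi * x)                        # initial state h_0
norms = []
for _ in range(steps):
    Lh = (K @ h - K.sum(axis=1) * h) * dx    # nonlocal Laplacian
    h = h + dt * (Lh - R * h + c)
    norms.append(np.sqrt(np.sum(h ** 2) * dx))
```

In this configuration the norm history stays uniformly bounded and decays toward the steady state, consistent with the a priori estimate.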
This boundedness result establishes the stability of the NKN model, and it will be confirmed by the numerical experiments in Table \ref{tab:eigen} of Section \ref{section:pde}. \subsection{Shallow-to-deep NKN learning} The stability properties of NKNs allow us to consider deep networks and to exploit efficient initialization techniques such as the shallow-to-deep approach introduced in Section \ref{sec:background}. Let $R_L\in\mathbb{R}^{d\times d}$, $\mathbf{c}_L\in \mathbb{R}^d$ and $k_L(\mathbf{x},\mathbf{y},\mathbf{b}(\mathbf{x}),\mathbf{b}(\mathbf{y}))$ be the optimal network parameters obtained by training an NKN of depth $L$. With the purpose of improving the accuracy of the network, we increase the number of layers (or equivalently, time steps), and train a new network of depth $\widetilde{L}>L$. The idea of the shallow-to-deep technique is to interpolate in time (or across layers) the optimal parameters obtained at depth $L$ and to scale them in such a way that the final time of the differential equation remains unchanged. In our specific setting, due to the fact that the network parameters are not time dependent, this technique simply corresponds to initializing the (deeper) $\widetilde L$-layer network by $R_L$, $\mathbf{c}_L$, and $k_L$. \section{Numerical experiments}\label{sec:experiments} In this section, we illustrate the superior performance of NKNs in both learning governing laws and image classification tasks, and compare it to baseline approaches. Our numerical experiments are performed on a machine with a 2.8 GHz 8-core CPU and a single Nvidia V100 GPU. \subsection{Learning governing laws}\label{section:pde} To demonstrate the stability of NKNs in the deep layer limit and their superiority with respect to other methods, we consider two learning examples employed in \cite{li2020neural} for GKNs, and compare the performance of NKNs with GKNs and FNOs for layers from $L=1$ to $L=32$.
Specifically, we consider the problem of learning neural operators that act as solution maps for the PDE \eqref{eqn:pde}, without any prior knowledge of the PDE itself, but solely on the basis of an input-output data set. The training set consists of $N$ pairs of input parameter functions and solutions $\{\mathbf{b}_j(\mathbf{x}_i),\mathbf{u}_j(\mathbf{x}_i)\}_{j=1}^N$ available at $\mathbf{x}_i\in D_j:=\{\mathbf{x}_i,i=1,\cdots,M\}\subset D.$ For simplicity and without loss of generality, we focus on the setting where all function pairs are evaluated on the same, structured grid of points with grid size $\Delta x$, and we refer to it as $D_{\Delta x}=D_j$ for all $j=1,\cdots,N$. We recall that our major goal is to design a network architecture that is stable in the limit of deep layers and resolution independent, so that we can reach increasingly better levels of accuracy for deeper networks and predict an equally accurate solution $\mathbf{u}$ when using different values of $\Delta x$. For the implementation of GKNs and NKNs, we use the PyTorch library provided in \cite{li2020neural}. For FNOs, we use the PyTorch package provided in \cite{li2020fourier}. The optimization is performed with the Adam optimizer. To conduct a fair comparison, for each method, we have tuned the hyperparameters, including the learning rates, the decay rates and the regularization parameters, to minimize the training loss. Furthermore, for each example and each method we repeat the numerical experiment for 5 different random initializations, and report the averaged relative mean squared errors and their standard error. With the purpose of having a compact presentation of the results, we report the errors in plots, as functions of the number of NN's layers. A more detailed error comparison is provided in Tables \ref{tab:1DPoisson_new}-\ref{tab:2DDarcy_reso_more} of \ref{sec:newapp_pde}.
\subsubsection{Example 1: 1D Poisson's equation} \begin{table}[] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & $L=1$ &$L=2$&$L=4$&$L=8$&$L=16$&$L=32$ \\ \hline GKN & 66.82k & 66.82k & 66.82k & 66.82k & 66.82k & 66.82k \\ NKN & 67.02k & 67.02k & 67.02k & 67.02k & 67.02k & 67.02k\\ FNO & 78.33k & 148.03k & 287.43k & 566.21k & 1.12M & 2.24M\\ \hline \end{tabular} \caption{Example 1: 1D Poisson's equation. Number of trainable parameters for each model.} \label{tab:1d_param} \end{table} \begin{table}[] \centering \begin{tabular}{|c|cc|cc|} \hline \multirow{2}{*}{Depth} & \multicolumn{2}{c|}{NKN} & \multicolumn{2}{c|}{GKN}\\ \cline{2-5} & max eigenvalue & min eigenvalue & max eigenvalue & min eigenvalue\\ \hline 1 & 1.6012 & -0.1183 & 0.9958 & -6.3943\\ 2 & 2.2862 & 0.8871 & 5.6733 & 1.7302\\ 4 & 3.7286 & 1.6576& 13.3126& -0.2470\\ 8 & 5.3263 & 2.1790& 27.1865 & -3.6409\\ 16 & 6.6001 & 2.4899& 51.3217 & -0.2268\\ 32 & 7.3043 & 2.5560& 48.8081 & 36.3245\\ \hline \end{tabular} \caption{Example 1: 1D Poisson's equation. Maximum and minimum eigenvalues of the (linearized) amplification operators for NKNs and GKNs, from the $l-$th layer to the $(l+1)-$th layer on an original training sample.} \label{tab:eigen} \end{table} In this example we consider ${\rm L}=\frac{\partial^2}{\partial x^2}$, i.e. the one-dimensional Poisson's equation, in $D=[0,1]$ taking the form: \begin{equation}\label{eqn:1dpoisson} \begin{aligned} -\frac{\partial^2 u}{\partial x^2}(x)=f(x),\quad&x\in D,\\ u(x)=0,\quad&x\in\partial D. \end{aligned} \end{equation} We aim to learn the operator mapping the loading function $f(x)$ to the solution $u(x)$. The training data set consists of $N=500$ pairs of $f_j(x_i)$ and $u_j(x_i)$ for $x_i\in D_{\Delta x}$, where $D_{\Delta x}=\{0.01i|i=0,\cdots,100\}$, a set of 101 uniformly distributed points in $D$.
To generate each sample pair $\{f_j(x),u_j(x)\}$, we first set $$ u_j(x) = \sum_{k=0}^{100} \widehat{u}_{k,j} \cos(2\pi kx) $$ with $\widehat{u}_{k,j}$ being constant coefficients. For each $k \in \{1,\cdots,100\}$, $\widehat{u}_{k,j}$ is randomly generated as $\widehat{u}_{k,j} \sim \mathcal{U}[0,\exp(-0.1 k^2)]$, the uniform distribution on $[0,\exp(-0.1 k^2)]$. The term $\widehat{u}_{0,j}$ is chosen such that the boundary condition $u_j(0)=u_j(1)=0$ is satisfied. Then $f_j(x)$ is obtained from $u_j(x)$ via a numerical Fourier transform and sample pairs are obtained by evaluating $u_j$ and $f_j$ at points on $D_{\Delta x}$. To validate the performance of the trained model, we generate $100$ additional pairs following the same procedure used for the training set. Note that the solution of \eqref{eqn:1dpoisson} can be represented as \begin{equation} u(x)=\int_D G_b(x,y)f(y)dy \end{equation} where $G_b(x,y):=\frac{1}{2}(x+y-|y-x|)-xy$ is the Green's function. The integral form above suggests that a 1-layer NKN can provide an exact solution map by setting the dimension of $\mathbf{h}$ as $d=1$, the initial layer and the final output as $$h(x,0)=f(x),\quad u(x)=h(x,1),\quad T=1,\quad L=1,$$ and the network update formulation as $$ c=0,\quad R(x)=1-\int_D G_b(x,y)dy,\quad {\rm and} \quad k(x,y,f(x),f(y))=G_b(x,y). $$ Therefore, in principle, when the number of training pairs $N\rightarrow\infty$ and the integral on $D$ is evaluated exactly, a 1-layer NKN can provide an exact map. Note that, for different choices of parameters, this statement holds true for GKNs as well. It is important to stress that for both networks increasing the number of layers would not yield significant improvements in the prediction accuracy; instead, in general, it may generate instabilities that might compromise the network performance. This fact makes the 1D Poisson equation the best candidate example to explore the network stability when the number of NN layers increases.
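The generation recipe and the Green's-function representation above can be checked numerically. The sketch below builds one sample pair $(f_j,u_j)$ from random cosine coefficients (choosing $\widehat{u}_{0,j}$ to enforce $u_j(0)=u_j(1)=0$, since every cosine mode equals one at both endpoints) and verifies that integrating $G_b$ against $f_j$ recovers $u_j$ up to quadrature error; the trapezoidal rule and the tolerance are our own choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)                   # D_{Δx}, Δx = 0.01
k = np.arange(1, 101)

# one sample pair (u_j, f_j): random cosine series with decaying coefficients
u_hat = rng.uniform(0.0, np.exp(-0.1 * k**2))    # \hat u_{k,j} ~ U[0, exp(-0.1 k^2)]
u_hat0 = -u_hat.sum()                            # enforces u(0) = u(1) = 0
C = np.cos(2 * np.pi * np.outer(x, k))           # cosine modes on the grid
u = u_hat0 + C @ u_hat
f = C @ ((2 * np.pi * k) ** 2 * u_hat)           # f = -u'' mode by mode

# Green's-function representation: u(x) = ∫_D G_b(x,y) f(y) dy
X, Y = np.meshgrid(x, x, indexing="ij")
G = 0.5 * (X + Y - np.abs(Y - X)) - X * Y
w = np.full_like(x, 0.01); w[0] = w[-1] = 0.005  # trapezoid weights
u_rec = (G * f[None, :]) @ w
```

Since the coefficients decay like $\exp(-0.1k^2)$, only the first dozen modes contribute appreciably, so the 101-point trapezoidal rule already reproduces $u_j$ to a few digits.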
Note that these considerations do not apply to more complex learning examples such as the prediction of solutions in highly heterogeneous environments, where deeper and deeper networks are required for accuracy purposes. Inspired by the discussion above and following \cite{li2020neural}, we set $d=1$, $T=1$ and initialize our network by $ h(x,0)=P(x,f(x))+p. $ Since the ground-truth kernel (the Green's function $G_b$) is independent of $f$, we set the kernel $k(x,y,f(x),f(y)):=k(x,y)$. By setting $\Delta t=1/L$, the NKN network update reads $$h(x,t+\Delta t)=h(x,t)+\Delta t\left(-R(x;\mathbf{w})h(x,t)+\int_D k(x,y;\mathbf{v})(h(y,t)-h(x,t)) dy + c\right).$$ The inner kernel network $k(x,y):\mathbb{R}^{2}\rightarrow \mathbb{R}$ is parameterized as a 3-layer feed forward network with widths $(2,256,256,1)$ and ReLU activation. The reaction network $R(x):\mathbb{R}\rightarrow\mathbb{R}$ is taken as a 2-layer feed forward network with widths $(1,64,1)$ and ReLU activation. The solution $u$ is then computed as $u(x)=Q(h(x,L))+q$. Here $P$, $p$, $Q$, $q$ and $c$ are all trained. We apply the shallow-to-deep training technique to initialize the optimization problem when the number of layers $L>1$. Specifically, we start from depth $L = 1$, train until the loss function reaches a plateau, use the estimated parameters to initialize the parameters for $L=2$, and proceed analogously until $L=32$ (recall that the optimal parameters do not depend on the layer/time). To investigate the stability properties of each neural operator learning model, we compare the performance of NKNs with GKNs and FNOs as the number of layers increases. For all methods we train until the loss function reaches a plateau (10000 epochs at most). In Figure \ref{fig:poisson_loss} we present the averaged relative mean squared errors for each model as a function of $L$; the number of trainable parameters is provided in Table \ref{tab:1d_param}.
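The discretized NKN update above can be sketched in a few lines. In the sketch below the kernel $k$ and reaction term $R$ are simple hand-picked stand-ins (a Gaussian bump and a constant), not trained networks, and trapezoidal quadrature replaces the integral; everything beyond the update formula itself is an assumption made for illustration.

```python
import numpy as np

def nkn_forward(h0, x, kernel, R, c, L):
    """Sketch of the scalar (d = 1) NKN update
       h <- h + Δt(-R(x) h + ∫_D k(x,y)(h(y) - h(x)) dy + c),  Δt = 1/L."""
    dx = x[1] - x[0]
    w = np.full_like(x, dx); w[0] = w[-1] = dx / 2   # trapezoid weights
    K = kernel(x[:, None], x[None, :])               # K[i, j] = k(x_i, y_j)
    h, dt = h0.copy(), 1.0 / L
    for _ in range(L):
        nonlocal_term = K @ (w * h) - (K @ w) * h    # ∫ k(x,y)(h(y)-h(x)) dy
        h = h + dt * (-R(x) * h + nonlocal_term + c)
    return h

x = np.linspace(0.0, 1.0, 101)
h_out = nkn_forward(np.sin(np.pi * x), x,
                    kernel=lambda s, t: np.exp(-10.0 * (s - t) ** 2),  # stand-in k
                    R=lambda s: np.ones_like(s),                        # stand-in R
                    c=0.0, L=8)
```

With a nonnegative kernel and $R>0$ each step is a damped averaging of the previous layer, so the iteration contracts rather than blows up, which is the behavior the stability discussion below quantifies through the spectrum of the amplification matrix.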
To study the impact of normalization, we report learning results on both normalized (denoted as the ``with normalization'' cases) and original (denoted as the ``w/o normalization'' cases) training data sets. \begin{figure} \centering \includegraphics[width=.48\textwidth]{poisson_train-eps-converted-to.pdf} \includegraphics[width=.48\textwidth]{poisson_test-eps-converted-to.pdf} \caption{Example 1: 1D Poisson's equation. Comparison of relative mean squared errors from GKNs, FNOs, and NKNs. Error bars represent standard errors over $5$ simulations. Left: errors on training dataset. Right: errors on test dataset.} \label{fig:poisson_loss} \end{figure} \textit{Comparison between GKNs and NKNs.} In the left plot of Figure \ref{fig:poisson_loss} we compare the training errors, from which we can observe the very poor performance of GKNs for $L>2$. Note that reducing the learning rate or increasing the learning epochs does not mitigate this convergence issue. In contrast, for increasing values of $L$, NKNs are stable and the loss function slightly decreases. To have a better understanding of the trained NKNs and GKNs, we look at the eigen-spectrum of their ``amplification matrices''. In particular, let the (discretized) $l-$th network layer be defined as $\mathbf{H}_{l}:=[h(x_1,l\Delta t),h(x_2,l\Delta t),\cdots,h(x_M,l\Delta t)]$, then the amplification from $\mathbf{H}_{l}$ to $\mathbf{H}_{l+1}$ can be written as $\frac{\mathbf{H}_{l+1}-\mathbf{H}_{l}}{\Delta t} = \mathbf{A} \mathbf{H}_{l}+C\mathbf{1}$, where $\mathbf{A}$ is an $M\times M$ matrix, $C$ is a constant, and $\mathbf{1}$ is a size $M$ vector with all its elements equal to $1$. Note that since the kernel is layer-independent in both GKNs and NKNs, the amplification matrices $\mathbf{A}$ are also layer-independent.
The analysis conducted in Theorem \ref{thm} tells us that if all eigenvalues of $\mathbf{A}$ are positive, the learnt operator is positive definite and the network is stable in the limit of deep layers. To test this fact, we randomly select a training sample pair $(f_j(x),u_j(x))$, extract the amplification matrices that connect subsequent trained layers, and compute their maximum and minimum eigenvalues. These are reported in Table \ref{tab:eigen}; here, we observe that the NKNs' matrix is positive definite, which illustrates the theoretical results of Section \ref{sec:nkn}. In contrast, the GKNs' matrix exhibits negative eigenvalues, indicating that instabilities might occur. \textit{Comparison between FNOs and NKNs:} Compared to GKNs and NKNs, FNO reaches a relatively low level of error on the training dataset ($O(10^{-3})$) when $L<32$. However, for $L\geq 32$, training FNOs becomes challenging due to the vanishing gradient phenomenon \cite{hochreiter1998vanishing}. From the right plot of Figure \ref{fig:poisson_loss} we can see that the test error of FNOs is $O(10^{-2})$; this value, being much larger than the training error, indicates that the network is overfitting the training data. This is possibly because the number of parameters increases with $L$. In fact, as reported in Table \ref{tab:1d_param}, for an $L-$layer NN, FNO requires $L$ times more parameters than GKN and NKN. In contrast, NKNs trained with the shallow-to-deep initialization are robust and not subject to overfitting issues. Furthermore, FNOs prove to be more sensitive to the distribution of the training samples: without normalization, the test error increases by $10$ times. In contrast, regardless of normalization, NKNs reach the lowest test errors when $L>1$.
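To make the stability check concrete, the sketch below assembles the discretized nonlocal operator for a hand-picked nonnegative kernel and positive reaction term (stand-ins for trained networks, with our own quadrature and sign conventions, which may differ from those behind Table \ref{tab:eigen}) and inspects its spectrum.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
w = np.full_like(x, dx); w[0] = w[-1] = dx / 2      # trapezoid weights

K = np.exp(-10.0 * (x[:, None] - x[None, :]) ** 2)  # stand-in nonnegative kernel
R = np.ones_like(x)                                  # stand-in reaction term R(x) > 0

# discretizing  R(x)h(x) - ∫ k(x,y)(h(y) - h(x)) dy  gives  A_op @ h  with
A_op = np.diag(R) - K * w[None, :] + np.diag(K @ w)
eigs = np.linalg.eigvals(A_op)
```

For this choice every Gershgorin disc of the (diagonally similar, hence real-spectrum) matrix lies in $\{\Re z \ge \min_x R(x)\}$, so the operator is positive definite, consistent with the stability criterion; an unconstrained kernel that takes negative values, as can happen in GKNs, loses this guarantee.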
\subsubsection{Example 2: 2D Darcy's equation} \begin{table}[] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & $L=1$ &$L=2$&$L=4$&$L=8$&$L=16$&$L=32$ \\ \hline GKN & 473.20k & 473.20k & 473.20k & 473.20k & 473.20k & 473.20k \\ NKN & 945.31k & 945.31k & 945.31k & 945.31k & 945.31k & 945.31k\\ FNO & 171.42k & 338.37k & 672.26k & 1.34M & 2.68M & 5.35M\\ \hline \end{tabular} \caption{Example 2: 2D Darcy's equation. Number of trainable parameters for each model.} \label{tab:2d_param} \end{table} We consider the two-dimensional heterogeneous PDE describing Darcy's flow and follow the same settings as in \cite{li2020neural}, where GKNs are utilized. Here, the physical domain is $D=[0,1]^2$, the operator ${\rm L}_b$ is an elliptic operator with homogeneous Dirichlet boundary conditions and permeability coefficient $b(\mathbf{x})$. We have: \begin{align*} -\nabla\cdot(b(\mathbf{x})\nabla u(\mathbf{x}))=f(\mathbf{x}),&\quad\mathbf{x}\in D,\\ u(\mathbf{x})=0,&\quad\mathbf{x}\in\partial D. \end{align*} We aim to learn the operator mapping from the parameter function $b(\mathbf{x})$ to the solution $u(\mathbf{x})$. As is standard in simulations of subsurface flow, the permeability $b(\mathbf{x})$ is modeled as a two-valued piecewise constant function with random geometry such that the two values have ratio 4. It is generated randomly for every sample and it is defined as $\psi_{\#}\mathcal{N}(0,(-\Delta+9I)^{-2})$, where $\psi$ takes the value 12 on the positive part of the real line and 3 on the negative. Different resolutions of data sets are down-sampled from a $241\times 241$ grid solution generated by using a second-order finite difference scheme. Training and validation are performed on the benchmark data set provided in \cite{li2020neural}; the corresponding data can be found at \url{https://github.com/zongyi-li/graph-pde}.
We consider two training data sets: a ``coarse'' data set with grid size $\Delta x=1/15$ and hence $M=16\times 16$, and a ``fine'' data set with grid size $\Delta x=1/30$ and correspondingly $M=31\times 31$. With the purpose of testing generalization properties with respect to resolution, we consider three testing data sets: a ``coarse'' data set with grid size $\Delta x=1/15$, a ``fine'' data set with grid size $\Delta x=1/30$, and a ``finer'' data set with grid size $\Delta x=1/60$. 100 training samples and 40 test samples are employed. We again report learning results on both normalized (denoted as the ``with normalization'' cases) and original (denoted as the ``w/o normalization'' cases) training data sets. For this example, we set the dimension $d$ of $\mathbf{h}$ equal to 64. Following \cite{li2020neural}, we initialize $\mathbf{h}(\mathbf{x},0)$ as \begin{equation}\label{eqn:init2D} \mathbf{h}(\mathbf{x},0) = P(\mathbf{x},b(\mathbf{x}),b_{\epsilon}(\mathbf{x}),\nabla b_{\epsilon}(\mathbf{x}))+{\mathbf{p}}, \end{equation} where $P\in\mathbb{R}^{64\times 6}$, ${\mathbf{p}}\in\mathbb{R}^{64}$, and $b_{\epsilon}(\mathbf{x})$ is a Gaussian smoothed version of the coefficients $b(\mathbf{x})$ obtained with a centered isotropic Gaussian distribution of variance $5$; $\nabla b_{\epsilon}(\mathbf{x})$ is its gradient. For an $L-$layer network, we apply \eqref{eq:NKN} iteratively, with $k(\mathbf{x},\mathbf{y},b(\mathbf{x}),b(\mathbf{y})):\mathbb{R}^6\rightarrow \mathbb{R}^{4096}$ parameterized as a 3-layer feed forward network with widths $(6,512,1024,4096)$ and ReLU activation function. Note that the output of the network is then reshaped so as to obtain a 64$\times$64 tensor. The domain of integration is restricted to the ball $B_r(\mathbf{x})$, with interaction radius $r = 0.10$, i.e., each node $\mathbf{x}$ is only connected to nodes within distance $r$.
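On the discrete grid, the restriction of the integral to $B_r(\mathbf{x})$ translates into a sparse connectivity pattern between nodes. A small sketch for the ``coarse'' $16\times 16$ grid (spacing $1/15$, radius $r=0.10$); the Euclidean-distance criterion is the one stated above, the rest is illustration:

```python
import numpy as np

s = 16                                              # "coarse" grid: Δx = 1/15
xs = np.linspace(0.0, 1.0, s)
X, Y = np.meshgrid(xs, xs, indexing="ij")
pts = np.stack([X.ravel(), Y.ravel()], axis=1)      # M = 256 node coordinates

r = 0.10                                            # interaction radius
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
adj = dist < r                                      # adj[i, j]: y_j in B_r(x_i)
max_neigh = int(adj.sum(axis=1).max())
```

With $\Delta x = 1/15 \approx 0.067$ we have $\sqrt{2}/15 \approx 0.094 < r < 2/15$, so each interior node interacts only with its own $3\times 3$ patch: the kernel sum involves at most 9 of the 256 nodes, which is what makes the restricted integral cheap.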
The reaction network $R(\mathbf{x}):\mathbb{R}^2\rightarrow \mathbb{R}^{4096}$ is parameterized as a 3-layer feed forward network with widths $(2,512,1024,4096)$ and ReLU activation. Also in this case, the output of the network is reshaped so as to obtain a 64$\times$64 tensor. The network is trained with the shallow-to-deep training procedure. For each depth $L$, we initialize the network parameters from the $(L/2)-$layer NKN model, then train the network for 1000 epochs with a learning rate of $1e{-4}$, then decrease the learning rate by a factor of $0.8$ every 50 epochs. \begin{figure} \centering \includegraphics[width = .7\textwidth]{shatodeep_training_32-eps-converted-to.pdf} \caption{Example 2: 2D Darcy's equation. Training loss of the 2D Darcy problem using random initialization and shallow-to-deep approach, from 2 layers to 32 layers.} \label{fig:loss_idea3_2D} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{s16_darcy_train_withnorm-eps-converted-to.pdf} \includegraphics[width=0.48\textwidth]{s16_darcy_test_withnorm-eps-converted-to.pdf} \includegraphics[width=0.48\textwidth]{s16_darcy_train_nonorm-eps-converted-to.pdf} \includegraphics[width=0.48\textwidth]{s16_darcy_test_nonorm-eps-converted-to.pdf} \caption{ Example 2: 2D Darcy's equation. Comparison of relative mean squared errors from GKNs, FNOs, and NKNs when using the ``coarse'' training set ($\Delta x=1/15$). Error bars represent standard errors over 5 simulations. Top plots: training with the normalized dataset. Bottom plots: training with the original dataset. Left column: errors on the training dataset.
Right column: errors on the test dataset with different resolutions.} \label{fig:loss_2ddarcy_16} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{s31_darcy_train_withnorm-eps-converted-to.pdf} \includegraphics[width=0.48\textwidth]{s31_darcy_test_withnorm-eps-converted-to.pdf} \includegraphics[width=0.48\textwidth]{s31_darcy_train_nonorm-eps-converted-to.pdf} \includegraphics[width=0.48\textwidth]{s31_darcy_test_nonorm-eps-converted-to.pdf} \caption{ Example 2: 2D Darcy's equation. Comparison of relative mean squared errors from GKNs, FNOs, and NKNs when using the ``fine'' training set ($\Delta x=1/30$). Error bars represent standard errors over 5 simulations. Top plots: training with the normalized dataset. Bottom plots: training with the original dataset. Left column: errors on the training dataset. Right column: errors on the test dataset with different resolutions.} \label{fig:loss_2ddarcy_s31} \end{figure} \textit{Effect of the shallow-to-deep technique.} To illustrate the benefits of the shallow-to-deep initialization strategy, in Figure \ref{fig:loss_idea3_2D} we compare the convergence properties of the learning algorithm using random initialization and the shallow-to-deep initialization with $s=16$. Here, we successively double the number of layers from 2 to 32\footnote{For illustration we show the training loss with $300$ epochs for each depth $L$ in Figure \ref{fig:loss_idea3_2D}, although in the rest of this section we use $1000$ epochs to guarantee that each model has reached a plateau.}. The training losses are plotted with respect to the number of epochs. It can be seen that the initial guesses provided by the previously trained shallower network correspond to a lower value of the loss function. Therefore, not only do we have faster convergence, but we can also reach lower loss values.
This is particularly important for deeper-layer networks, which are notoriously difficult to train and for which random initialization fails to provide accurate answers. Thus, we conclude that the shallow-to-deep technique provides a good initialization and an improved accuracy, which also helps avoid the vanishing gradient issue in training. \textit{Comparison between GKNs, FNOs and NKNs.} In Figures \ref{fig:loss_2ddarcy_16} and \ref{fig:loss_2ddarcy_s31}, we report the relative mean squared errors from the ``coarse'' and ``fine'' training data sets, respectively. Similarly to the Poisson case, when increasing the number of layers, the relative training errors of GKNs and FNOs deteriorate for $L>8$, after initially decreasing. In contrast, the accuracy of NKNs monotonically improves\footnote{The only exception is for $\Delta x=1/30$ and $L=32$, where the training loss slightly increases from $L=16$, because we had to decrease the batch size in training, due to GPU memory constraints.} for increasing values of $L$. Also in this case, FNOs suffer from overfitting: the test error increases as FNOs get deeper. Instead, when $L>4$, NKNs consistently outperform GKNs and FNOs in the testing experiments. Thus, while GKNs and FNOs remain reasonable choices when the network is at most 4 layers deep, NKNs achieve a better accuracy when the network is deeper than $4$. On the other hand, differently from Example 1, in this example normalizing the training data set helps improve the test error for all three architectures. In particular, for GKNs, the original training data set yields severe instabilities: the training loss blows up when $L>8$. For FNOs and NKNs, normalization also helps improve the test error. However, NKNs are still reliable when normalization is not performed. This fact becomes particularly important in online training, where normalization is not an option. In what follows, we always focus on the normalized case, unless otherwise stated.
To provide a qualitative comparison between GKNs, FNOs and NKNs, in Figure \ref{fig:s16_test} we show plots of solutions obtained with a 16-layer NKN, GKN, and FNO for three instances of permeability parameter $b(\mathbf{x})$. For all cases the model is trained on the ``coarse'' data set and tested on the same resolution. Both the solutions and the errors are plotted. One can observe that all solutions obtained with NKN are visually consistent with the ground-truth solutions, while GKN loses accuracy near the material interfaces. FNO results are off in even larger regions. These results provide further qualitative demonstration of the superiority of NKNs and confirm the conclusion inferred from the comparison in Figure \ref{fig:loss_2ddarcy_16}. For this case, the relative test errors for GKN, FNO and NKN are $3.69e-2\pm 9.28e-4$, $8.46e-2\pm1.03e-3$, $1.29e-2\pm7.61e-4$, respectively. \begin{figure} \centering \includegraphics[width=.8\textwidth]{s16_tests-eps-converted-to.pdf} \caption{Example 2: 2D Darcy's equation. A visualization of 16-layer FNO, GKN, and NKN performance on three instances of permeability parameter $b(\mathbf{x})$, when using (normalized) ``coarse'' training dataset ($\Delta x=1/15$) and test on the dataset with the same resolution.} \label{fig:s16_test} \end{figure} \begin{figure} \centering \includegraphics[width=.8\textwidth]{s31_tests-eps-converted-to.pdf} \caption{Example 2: 2D Darcy's equation. A visualization of 16-layer FNO, GKN, and NKN performance on three instances of permeability parameter $b(\mathbf{x})$, when using (normalized) ``coarse'' training dataset ($\Delta x=1/15$) and test on the ``fine'' dataset ($\Delta x=1/30$).} \label{fig:s31_test} \end{figure} \begin{figure} \centering \includegraphics[width=.8\textwidth]{s61_tests-eps-converted-to.pdf} \caption{Example 2: 2D Darcy's equation.
A visualization of 16-layer FNO, GKN, and NKN performance on three instances of permeability parameter $b(\mathbf{x})$, when using (normalized) ``coarse'' training dataset ($\Delta x=1/15$) and test on the ``finer'' dataset ($\Delta x=1/60$).} \label{fig:s61_test} \end{figure} \textit{Generalization to different resolutions.} To illustrate the generalization properties of GKNs, FNOs and NKNs to different grid resolutions, we train them with samples from a grid with resolution $\Delta x=1/(s-1)$ and test them on samples from a grid with resolution $\Delta x=1/(s'-1)$. Test errors are provided in the right columns of Figures \ref{fig:loss_2ddarcy_16} and \ref{fig:loss_2ddarcy_s31}. We can observe that for each fixed training resolution $s$, the test errors at different resolutions remain on a similar scale for all three methods. We observe that when training on a grid of resolution $\Delta x=1/30$, the test error is smaller when the network is tested on resolution $\Delta x=1/60$ than on $\Delta x=1/15$, indicating that testing on a fine grid provides better results. This is due to the fact that, for smaller $\Delta x$, the support of the kernel includes more grid points, leading to a better numerical integration. Instead, when utilizing the learnt network on a coarser resolution, the kernel is more likely to become less accurate, especially when the interaction radius $r$ is small. This observation was also reported in \cite{li2020neural}. A similar phenomenon is observed in image classification tasks, as further discussed in Section \ref{section:img}. To provide a visual comparison of the cross-resolution learning results, in Figures \ref{fig:s31_test} and \ref{fig:s61_test} we test the architectures trained with $\Delta x=1/15$ on two data sets corresponding to $\Delta x=1/30$ and $\Delta x=1/60$, and report the results for the same three instances of $\mathbf{b}(\mathbf{x})$. It is again observed that NKNs outperform both baseline methods.
We conclude this section by stressing once again that the resolution-independence of these neural operators only guarantees that the generalization error is of the same order as the training error, i.e. when utilizing the operator to predict the solution associated to an input parameter on a finer (coarser) grid, the accuracy does not improve (worsen). For example, when utilizing NKNs trained with $\Delta x=1/15$ to predict inputs characterized by $\Delta x=1/15$, $\Delta x=1/30$, and $\Delta x=1/60$, we observe that the testing errors are of the same order, i.e. $1.29e-2\pm 7.41e-4$, $3.99e-2\pm2.02e-3$, and $3.28e-2\pm7.36e-4$, respectively. \subsection{Image Classification Tasks}\label{section:img} We illustrate the stability and resolution independence of NKNs using two supervised image classification problems. Specifically, we classify low-resolution images using networks trained on high-resolution images and vice versa. Two benchmark image data sets are considered: the MNIST data set \cite{lecun1995learning} of handwritten digits available at \url{http://yann.lecun.com/exdb/mnist/}, and the CIFAR-10 data set \cite{krizhevsky2009learning} available at \url{https://www.cs.toronto.edu/~kriz/cifar.html}. This task corresponds to identifying the solution operator that maps the original image (represented by a discretized pixel valued function $\mathbf{b}(\mathbf{x})$, where $\mathbf{x}$ is the pixel location) to a vector-valued function $\mathbf{u}(\mathbf{x})$ which represents the features of this image. The class of the image will be obtained by applying a softmax classifier to $\mathbf{u}(\mathbf{x})$. A resolution-independent map is such that it is equally accurate when classifying images $\mathbf{b}$ with resolutions different from the training one.
We proceed as follows: given an image sample, we project it into the feature space by applying the transformation $\mathbf{h}(\mathbf{x},0)=P(\mathbf{x},\mathbf{b}(\mathbf{x}))+{\bf p}$, where $\mathbf{x}=(i,j)$, $i,j\in\mathbb{N}$, represents the pixel location and $\mathbf{b}(\mathbf{x})$ is the initial pixel value at $\mathbf{x}$. Then, we iteratively apply \eqref{eq:NKN}, $$ \mathbf{h}(\mathbf{x},t+\Delta t)=\mathbf{h}(\mathbf{x},t)+\Delta t\left(-R(\mathbf{x})\mathbf{h}(\mathbf{x},t)+\int_{B_r(\mathbf{x})} k(\mathbf{x},\mathbf{y};\mathbf{v})(\mathbf{h}(\mathbf{y},t)-\mathbf{h}(\mathbf{x},t)) d\mathbf{y}+\mathbf{c}\right), $$ and finally calculate the output feature function $\mathbf{u}(\mathbf{x})=Q\mathbf{h}(\mathbf{x},T)+{\bf q}$ and the predicted class of the given image sample as $\text{softmax}(\mathbf{u}(\mathbf{x}))$. Note that, in the integral above, to accelerate the training we restrict the domain of integration to a neighborhood. In other words, each node $\mathbf{x}$ is only connected to nodes within a distance $r$, i.e. to nodes in the neighborhood $B_r(\mathbf{x}):=\{\mathbf{y}:|\mathbf{y}-\mathbf{x}|<r\}$. In all image classification tasks, we set the dimension $d$ of $\mathbf{h}$ equal to 16, and the inner kernel network $k$ to be a 3-layer feed forward network with widths $(4,32,32,256)$ and ReLU activation function. $R$ is also a 3-layer feed forward network with widths $(2,32,32,256)$ and ReLU activation. Both $k$ and $R$ are then reshaped into tensors of size 16$\times$16. Note that in image classification tasks, the network update above is often added to standard ResNet architectures, rather than utilized as a stand-alone network. This technique was also used in \cite{tao2018nonlocal} to enhance the accuracy of ResNets. Thus, in this case, $\mathbf{b}$ may also represent the output of the previous ResNet layer. 
In this section we compare NKNs to three baseline methods: CNNs \cite{albawi2017understanding}, multiscale CNNs \cite{haber2018learning}, and NNNs \cite{tao2018nonlocal}. For CNNs, we consider the standard convolution kernels of dimension $3 \times 3 \times 16$ and ReLU activation functions. After $L$ layers, we connect the output with another dense layer of output dimension 128 and a ReLU activation function, and finally connected to a softmax classifier. In the cross-resolution tests, we do not change any trained parameter nor the CNN kernels. For the multiscale CNN, we follow \cite{haber2018learning} and employ the same CNN structure, with a tanh activation function instead of the ReLU activation function for the CNN layers. In the cross-resolution tests, two transformation matrices are employed: a prolongation matrix $\mathbf{S}$ that maps coarse images into higher resolutions and a restriction matrix $\mathbf{U}$ that performs the opposite mapping. $\mathbf{S}$ is given by a bilinear interpolation and constant padding. $\mathbf{U}$ maps a fine image into a coarse image in such a way that $\mathbf{U}\mathbf{S} = \mathbf{I}$, the identity operator. Note that the CNN layer on a fine image can be viewed as a linear operator and rewritten as a sparse matrix $K_h$. Therefore, when using CNNs trained with fine images on coarse images, the convolution operator is adjusted to the coarse scale as $K_H = \mathbf{U}K_h\mathbf{S}$. When applying CNNs trained with coarse images on fine images, CNN layers are similarly adjusted as $K_h = \mathbf{S}K_H\mathbf{U}$. For NNNs we follow the conventions in \cite{tao2018nonlocal}: the NNN's input layer is followed by a dense layer with 16 output dimensions. The iterative formulation \eqref{eq:nnn} is then employed, followed by another dense layer of output dimension 128 and a ReLU activation function, and finally connected to a softmax classifier. 
We use the Adam optimizer to train all these baseline models until a plateau is reached (often within 200 epochs). \subsubsection{Example 1: MNIST} \begin{table}[] \centering \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{Trained on fine images} & \multicolumn{2}{c|}{Trained on coarse images} \\ \cline{2-5} & Validation(fine) & Validation(coarse) & Validation(fine) & Validation(coarse) \\ \hline CNN, $L=1$ & 2.55\% & 27.88\% & 36.25\% & 3.23\% \\ CNN, $L=2$ & 2.08\% & 28.75\% & 48.35\% & 2.09\%\\ CNN, $L=4$ & {\bf1.68\%} & 90.42\% & 37.14\% & {1.84\%} \\ \hline Multiscale CNN$^*$ (\cite{haber2018learning}) &1.82\%&5.08\%&9.98\%&{\bf1.72\%}\\ Multiscale CNN, $L=1$ & 3.50\% & 49.21\% & 14.46\% & 4.22\% \\ Multiscale CNN, $L=2$ & 2.46\% & 57.74\% & 78.45\% & 2.54\% \\ Multiscale CNN, $L=4$ & 2.01\% & 56.56\% & 91.08\% & 1.84\% \\ \hline NNN, $L=1$ & 4.31\% & 10.66\% & 9.51\% & 5.05\% \\ NNN, $L=2$ & 4.48\% & 9.58\% & 8.06\% & 4.63\% \\ NNN, $L=4$ & 4.15\% & 11.27\% & 10.51\% & 4.72\% \\ \hline NKN, $r=2$, $L=1$ & 3.37\% & {\bf 4.37\%} & 4.55\% & 4.53\% \\ NKN, $r=2$, $L=2$ & 3.26\% & 4.98\% & 9.15\% & 4.35\% \\ NKN, $r=2$, $L=4$ & 3.26\% & 4.51\% & 10.92\% & 4.29\% \\ NKN, $r=3$, $L=1$ & 3.40\% & 4.96\% & {\bf3.76\%} & 3.75\% \\ NKN, $r=3$, $L=2$ & 3.20\% & { 4.85\%} & 4.02\% & 3.52\%\\ NKN, $r=3$, $L=4$ & 3.28\% & 5.87\% & 5.95\% & 3.40\% \\ NKN, $r=4$, $L=1$ & 3.28\% & 5.83\% & 5.37\% & 3.94\%\\ NKN, $r=4$, $L=2$ & 3.26\% & 6.12\% & 4.90\% & 3.63\% \\ NKN, $r=4$, $L=4$ & 3.23\% & 5.48\% & 4.88\% & 3.58\%\\ \hline \end{tabular} \caption{Image classification task 1: MNIST. Image classification errors on test dataset (lower is better). Bold numbers highlight the best case. ``Multiscale CNN$^*$ \cite{haber2018learning}'' reports the values from \cite{haber2018learning}. 
``$r=*$'' and ``$L=*$'' indicate the interaction radius in NKNs and the number of CNN/NNN/NKN layers employed in the model, respectively.} \label{tab:mnist_more} \end{table} We first consider the MNIST data set which has a training set of 60,000 labeled images. These samples consist of $28\times 28$ black and white images and they will be employed as the fine-scale images. We randomly divide the data set into a training set consisting of 50,000 images, and a test set consisting of 10,000 images. In the cross-resolution classification task, images of two levels of resolutions are considered. We denote the original MNIST images as the ``fine images'', and generate ``coarse images'' by downsampling each image to a $14\times 14$ resolution using bilinear interpolation. We train two networks using the coarse and fine training data sets and then use the trained networks to classify both the fine and coarse validation data sets. Results are reported in Table \ref{tab:mnist_more} for both the baseline architectures and NKNs. We point out that for Multiscale CNNs we show both the values reported in \cite{haber2018learning}, denoted by ``Multiscale CNN$^*$ \cite{haber2018learning}'', and the results from our implementation. Our Multiscale CNN results mostly differ from the ones in \cite{haber2018learning} in the cross-resolution test errors; this is due to the fact that non-standard loss functions (regression loss), different optimization methods (Block-Coordinate-Descent method), and additional regularization terms (derivative-based regularization term) are employed in \cite{haber2018learning}. Instead, in our setting, for a fair comparison with other methods, we employ the cross entropy loss and the Adam optimizer. The latter choices are standard in image classification tasks.
From Table \ref{tab:mnist_more} we can see that, while CNNs perform best when training and testing resolutions are the same, NKNs outperform other architectures when tested on a resolution different from the training one. In fact, when $r>2$, NKNs' testing errors at different resolutions are of the same order as the ones at the same resolution. This fact illustrates the resolution-independence property of NKNs. When the interaction radius $r$ is as small as $2$, NKNs are less accurate on cross-resolution tasks, although the overall test error is still of the same order as the training one, and NKNs still greatly outperform the two baseline CNNs. This is due to the fact that when the interaction radius $r$ is too small, the support of the kernel contains only a small number of grid points, inducing a less accurate numerical integration. When comparing the NKN with $r=3$ and the NKN with $r=4$, we do not observe a significant improvement in accuracy as $r$ increases. This is possibly due to the fact that MNIST's data-label relation is relatively simple, so that $r=3$ is sufficient. \subsubsection{Example 2: CIFAR} We utilize the CIFAR-10 data set to illustrate the performance of NKNs in cross-resolution testing. CIFAR-10 consists of 50,000 training images and 10,000 test images of size $32\times 32$, belonging to ten classes. In this test, we consider three validation data sets containing images of three different resolution levels, following the same approach as in \cite{haber2018learning}. The ``original'' resolution data set consists of the original $32\times 32$ images, the ``fine'' resolution data set consists of $64\times 64$ images generated by bilinear interpolation, and the ``coarse'' resolution data set consists of $16\times 16$ images also generated by bilinear interpolation.
Differently from the approach used for MNIST in the previous section, and following the strategy described in \cite{tao2018nonlocal}, we incorporate the NKN network update (or nonlocal block) into a 20-layer pre-activation ResNet (PreResNet-20) \cite{he2016identity}. We compare NKNs with two baseline architectures: the standard PreResNet-20 with CNN blocks (denoted as ``baseline''), and NNNs where the nonlocal blocks (of depth $L=2,3,4,5$) are incorporated into the standard PreResNet-20 after the second residual block. Also for NKNs, we insert network updates into PreResNet-20 following the same procedure used for NNNs. To improve the descriptive power of NKNs, we employ different kernels $k$ at each layer, i.e., the kernel $k(\mathbf{x},\mathbf{y},t)$ and the reaction term $R(\mathbf{x},t)$ are time-dependent functions. Therefore, the overall nonlocal network can be written as $\mathbf{h}(l+1):=\mathbf{h}(l)+\mathcal{F}(\mathbf{h}(l);W(l)), $ where $W(l)$ is the parameter set, $l=0,1,\cdots,L_{total}$ with $L_{total}$ being the total number of network blocks. When the $l-$th block is nonlocal, we employ the architecture in \eqref{eq:NKN} and set $t=l\Delta t$ with $$\mathcal{F}(\mathbf{h}(t)):=\Delta t\left(-R(\mathbf{x},t;\mathbf{w})\mathbf{h}(\mathbf{x},t)+\int_D k(\mathbf{x},\mathbf{y},t;\mathbf{v})(\mathbf{h}(\mathbf{y},t)-\mathbf{h}(\mathbf{x},t)) d\mathbf{y}+\mathbf{c}(t)\right),$$ otherwise, the block is a traditional residual block of the pre-activation ResNet: $\mathcal{F}(\mathbf{h}(l)):=W_2^l g (W_1^l g(\mathbf{h}(l))),$ where $g=\text{ReLU}\circ \text{BN}$ denotes the composition of ReLU and batch normalization (BN). The dimension of $\mathbf{h}$ is set to $d=16$.
For each NKN layer, the kernel network $k(\cdot,\cdot,t): \mathbb{R}^4 \rightarrow \mathbb{R}^{256}$ is parametrized as a 3-layer feed forward network with widths $(4, 32, 32, 256)$ and ReLU activation, and the reaction network $R(\cdot,t):\mathbb{R}^2 \rightarrow \mathbb{R}^{256}$ is parametrized as a 3-layer feed forward network with widths $(2, 32, 32, 256)$ and ReLU activation. Their 256-dimensional outputs are then reshaped into $16\times 16$ tensors. As done for the MNIST data set, different radii $r = \{2,3,4\}$ are utilized. All models are implemented based on a 20-layer pre-activation ResNet (PreResNet) package in Keras provided in \cite{He2016Resnet} with default structure. Following the settings reported in \cite{tao2018nonlocal}, we set Adam's initial learning rate to $10^{-3}$, and train for 200 epochs. Classification results are reported in Table \ref{tab:Image_CIFAR_more}. Here, for NNNs tested on the original resolution data set, we report both the results obtained with our implementation and the ones reported in \cite{tao2018nonlocal}. We observe that the performance of these two implementations is slightly different; this is possibly due to differences in the Tensorflow version or in the available hardware. When testing on a data set with the original resolution, we can see that NKNs with $r=4$ and $4$ blocks outperform both the baseline and the best NNN. As for the cross-resolution classification tests, we train the networks using the $32\times 32$ images and then test their generalization properties on finer ($64\times 64$) and coarser ($16\times 16$) images. Due to the poor performance of CNNs in cross-resolution tasks (since they are formulated at the discrete level and hence not resolution-independent), when testing NNNs and NKNs on different-resolution images, we follow an approach similar to what we described for multiscale CNNs.
Precisely, when testing on finer images, the convolution operator $K_h$ is approximated by the trained convolution operator $K_H$ as $K_h = \mathbf{S}K_H\mathbf{U}$. If multiple CNNs are stacked together, we have $K_{h_n}K_{h_{n-1}}\cdots K_{h_1} = \mathbf{S}K_{H_n}K_{H_{n-1}}\cdots K_{H_1}\mathbf{U}$, since $\mathbf{S}\mathbf{U}= \mathbf{I}$. This is equivalent to multiplying by a restriction matrix $\mathbf{U}$ after the input layer, a prolongation matrix $\mathbf{S}$ before the NNN/NKN layer, and a restriction matrix $\mathbf{U}$ after the NNN/NKN layer. A similar procedure can be utilized when testing on coarser images; however, we expect results to be less accurate as $\mathbf{U} \mathbf{S} \neq \mathbf{I}$. We can see that among all architectures, NKNs are again the most accurate classifiers. Differently from what we observed for MNIST, here, NKNs are more accurate when a larger radius $r$ and a deeper network are employed. This is possibly due to the fact that the CIFAR-10 data set has a more complex data-label relation and therefore requires deeper architectures. Another interesting finding is that for all architectures it is easier to generalize to fine-scale images than to coarse-scale images. This is because when generalizing to a smaller grid, part of the support of the kernel is lost, which makes the kernel inaccurate.
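The multiscale transfer described above can be sketched in 1D. In the sketch below (the names, the injection restriction, and the linear-interpolation prolongation are our assumptions, not the paper's implementation), restricting a prolongated coarse signal is exact while the reverse composition is not, and the transferred operators telescope when stacked:

```python
import numpy as np

def prolongation(n_coarse):
    """S: linear interpolation from n_coarse points to 2*n_coarse - 1 points."""
    n_fine = 2 * n_coarse - 1
    S = np.zeros((n_fine, n_coarse))
    for i in range(n_fine):
        if i % 2 == 0:
            S[i, i // 2] = 1.0                     # coincident node: copy
        else:
            S[i, i // 2] = S[i, i // 2 + 1] = 0.5  # midpoint: average neighbors
    return S

def restriction(n_coarse):
    """U: injection from the 2*n_coarse - 1 fine points back to the coarse grid."""
    n_fine = 2 * n_coarse - 1
    U = np.zeros((n_coarse, n_fine))
    for j in range(n_coarse):
        U[j, 2 * j] = 1.0
    return U

n = 5
S, U = prolongation(n), restriction(n)
K_H = np.random.default_rng(0).standard_normal((n, n))   # "trained" coarse operator
K_h = S @ K_H @ U                                        # operator transferred to the fine grid
```

With this convention, restriction after prolongation is the coarse identity, so stacked transferred layers collapse as in the displayed composition formula, while prolongation after restriction loses fine-scale detail.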
\begin{table}
\centering
{
\begin{tabular}{|c|c|c|c|}
\hline
Model & Original/Reported in \cite{tao2018nonlocal} & Fine & Coarse \\
\hline
Baseline & 8.69\%/8.19\% & 50\% & 37.15\% \\
\hline
NNN, block $L=2$ & 8.04\%/7.74\% & 11.27\% & 38.15\% \\
NNN, block $L=3$ & 8.09\%/7.62\% & 8.80\% & 28.18\% \\
NNN, block $L=4$ & 8.10\%/7.37\% & 9.56\% & 32.03\% \\
NNN, block $L=5$ (best) & 8.03\%/7.29\% & 11.86\% & 48.69\% \\
\hline
NKN, $r=2$, block $L=2$ & 7.94\% & 8.10\% & 46.86\% \\
NKN, $r=2$, block $L=3$ & 7.60\% & 7.71\% & 40.34\% \\
NKN, $r=2$, block $L=4$ & 7.52\% & 7.61\% & 40.28\% \\
NKN, $r=3$, block $L=2$ & 7.60\% & 7.77\% & 24.81\% \\
NKN, $r=3$, block $L=3$ & 7.67\% & 7.78\% & 25.96\% \\
NKN, $r=3$, block $L=4$ & 7.94\% & 8.11\% & 26.78\% \\
NKN, $r=4$, block $L=2$ & 7.70\% & 7.41\% & 31.80\% \\
NKN, $r=4$, block $L=3$ & 7.23\% & 7.30\% & {\bf 23.16\%} \\
NKN, $r=4$, block $L=4$ & {\bf 7.08\%} & {\bf 7.23\%} & 24.30\% \\
\hline
\end{tabular}}
\caption{Image classification task 2: CIFAR-10. Image classification task errors. Bold numbers highlight the best case. For the baseline (PreResNet-20) and NNN cases, we report both the results from our implementation using the same hyperparameters and the ones reported in \cite{tao2018nonlocal}. For NNN and NKN cases, ``block $L=*$'' indicates the number of NNN/NKN layers employed in the inserted nonlocal block.}
\label{tab:Image_CIFAR_more}
\end{table}

\section{Conclusion}\label{sec:conclusion}
We proposed a new integral neural operator, inspired by graph kernel networks, that has rigorous mathematical foundations provided by the nonlocal theory. This network, referred to as nonlocal kernel network (NKN), is stable in the deep network limit by construction. Similarly to neural ODEs, NKNs can be reinterpreted as time-dependent equations. Furthermore, both layers and nodes are treated continuously.
This fact enables resolution independence and the use of efficient initialization techniques that exploit the continuous-in-time nature of NKNs. Our results show that, in both learning governing equations and image classification tasks, NKNs outperform baseline methods in stability and generalizability to different resolutions. Similarly to GKNs, since NKNs' building blocks are integral operators characterized by space-dependent kernels with minimal assumptions, they come at the price of a higher computational cost compared to other networks whose kernels have a convolutional structure, such as the standard CNN and FNO. However, since training cost can be seen as an offline cost, once the network is trained, prediction is a fast operation. Therefore, the excellent generalization properties of NKNs make them a valuable and robust tool for offline learning tasks and, due to the fact that they are insensitive to normalization, also for online learning tasks. Finally, NKNs represent one of the first examples of universal learning tools, being able to succeed in learning tasks of substantially different nature.
\section*{Acknowledgements}
The authors would like to thank Dr. Yunzhe Tao and Dr. Zongyi Li for sharing their codes and for the helpful discussions. The authors also want to acknowledge Dr. Lars Ruthotto for providing implementation details regarding Multiscale CNN. H. You and Y. Yu would like to acknowledge support by the National Science Foundation under award DMS 1753031. Portions of this research were conducted on Lehigh University's Research Computing infrastructure partially supported by NSF Award 2019035. S. Silling and M. D'Elia would like to acknowledge the support of the Sandia National Laboratories (SNL) Laboratory-directed Research and Development program and by the U.S.
Department of Energy, Office of Advanced Scientific Computing Research under the Collaboratory on Mathematics and Physics-Informed Learning Machines for Multiscale and Multiphysics Problems (PhILMs) project. SNL is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy's National Nuclear Security Administration under contract {DE-NA0003525}. This paper, SAND2022-0110, describes objective technical results and analysis. Any subjective views or opinions that might be expressed in this paper do not necessarily represent the views of the U.S. Department of Energy or the United States Government.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

class Leaderboard {
    private Map<Integer, Integer> scoreMap;

    public Leaderboard() {
        scoreMap = new HashMap<>();
    }

    public void addScore(int playerId, int score) {
        scoreMap.put(playerId, scoreMap.getOrDefault(playerId, 0) + score);
    }

    public int top(int K) {
        // Collect all scores in ascending order, then sum the K largest.
        List<Integer> list = scoreMap.values().stream().sorted().collect(Collectors.toList());
        int sum = 0;
        for (int i = list.size() - 1; K > 0; K--, i--) {
            sum += list.get(i);
        }
        return sum;
    }

    public void reset(int playerId) {
        scoreMap.put(playerId, 0);
    }
}

/**
 * Your Leaderboard object will be instantiated and called as such:
 * Leaderboard obj = new Leaderboard();
 * obj.addScore(playerId,score);
 * int param_2 = obj.top(K);
 * obj.reset(playerId);
 */
Chris Amon Passes Away at 73
Former Le Mans winner Chris Amon passes away at age of 73…
John Dagys
Photo: Ford Performance
Former Formula One driver and 24 Hours of Le Mans winner Chris Amon has died at the age of 73. Amon, widely regarded as one of the best drivers to never win a F1 race, was well-known in the sports car racing world, having claimed victory at Le Mans in 1966 alongside Bruce McLaren in a Ford GT40. The Le Mans win was the first for the American manufacturer and was celebrated this year on the 50th anniversary. "At the time I was probably more interested in F1 than sports car racing," Amon said in reflection ahead of this year's race. "It's been said that I was an unlucky F1 driver because I should have won a lot of races but the fact is many of my contemporaries were killed in F1 so I think I'm lucky to still be around. "There's no question that winning Le Mans with Ford was a very special moment in my career." Amon, who made 96 Grand Prix starts from 1963 to 1976, retired in his native New Zealand where he remained active in the sport. He helped re-design Taupo Motorsports Park, supported the Toyota Racing Series and made occasional starts in historic racing. A family statement read: "Chris battled cancer in recent years but retained not only a close interest in Formula One – and his very wide range of favorite topics – but also his wonderful sense of humor, complete with infectious chuckle."
John Dagys is the founder and Editor-in-Chief of Sportscar365. Dagys spent eight years as a motorsports correspondent for FOXSports.com and SPEED Channel and has contributed to numerous other motorsports publications worldwide.
# Talk:IP (complexity)

## Bad format in new version

Unreadable format in the proof for $\text{IP} \subseteq \text{NSPACE}$ makes the proof harder to read and/or understand. I was expecting something like:
let $w \in \text{IP} \dots$ "Now we can define" ...
rollback? Msshapira (talk) 11:24, 2 January 2011 (UTC)

## New proof makes strange assertions

The statement "#SAT in IP" doesn't even make sense - #SAT isn't even a decision problem. It's complete for #P, but how does this proof imply IP is a subset of PSPACE or vice versa? Can the contributor clear up these assertions? Thanks. Deco 23:22, 15 November 2005 (UTC)

The decision problem for #SAT is: For phi and k, does phi have exactly k satisfiable assignments? And the proof for showing #SAT is in IP doesn't imply PSPACE is a subset of IP, but it introduces the technique that is key to showing PSPACE is a subset of IP. --18.244.7.203 08:45, 24 November 2006 (UTC)

Makes sense. :-) Dcoetzee 23:35, 3 December 2008 (UTC)

## Typography

$wt\text{-}avg$
$\text{wt-avg}$

Am I right in guessing that the second of the above was what was intended? That's what I changed the first to in my recent edits.

Someone didn't know that

- "Displayed" TeX should be indented;
- One should write $\max A$ rather than $max A$
- One should write $\text{accepts } w$ rather than $accepts\ w$
- One should write $a,\dots,z$ rather than $a,...,z$
- One should write (1 − x) rather than (1-x)
- lots of other stuff; see my recent cleanups.

This is all in Wikipedia:Manual of Style (mathematics). Michael Hardy (talk) 00:32, 11 February 2009 (UTC)

## Copyvio?

It looks like the proof here is taken from [1]. Does anyone know if we have permission to use it? If not I'm afraid it'll have to get scrapped. Dcoetzee 09:57, 4 April 2009 (UTC)

## Overlap with Interactive proof system

Currently this article has a lot of overlap with Interactive proof system, especially when discussing variants. I propose that this article be only about IP (and theorems about IP), whereas Interactive proof system can be a summary of all the major interactive proof systems, and describe all the variants, and the various relations between them. --Robin (talk) 14:31, 8 December 2009 (UTC)

## Not clear definition

It's not clear at the definition section what $Q$ stands for. Also, although it was mentioned in the introduction, I think it should be stated more formally what $P$ and $V$ are (probabilistic TMs, their computational power, etc.). — Preceding unsigned comment added by 93.173.63.178 (talk) 14:12, 10 April 2016 (UTC)

## a polynomial number, p(n), of messages

The phrase "a polynomial number, p(n), of messages" doesn't compute for me. n is a string. Does this mean polynomial in the length of the string? If so, is it wrong as written? If so, does one normally write p(len(n)) or some such? Or is this the convention, and is there a common way to explain that on wikipedia? ★NealMcB★ (talk) 17:16, 26 September 2016 (UTC)
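The decision version of #SAT mentioned in the discussion above asks whether a formula phi has exactly k satisfying assignments. A tiny brute-force counter (ours, purely for illustration; clause encoding is DIMACS-style) makes the counting object concrete:

```python
from itertools import product

def count_sat(clauses, n_vars):
    """Count satisfying assignments of a CNF formula.

    clauses: list of clauses, each a list of nonzero ints where
             i means variable i is true and -i means it is false.
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        # A CNF formula holds iff every clause has at least one true literal.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count
```

For example, `count_sat([[1, 2]], 2)` counts the assignments satisfying (x1 or x2), and the "exactly k" decision problem just compares this count to k.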
Home International Dhaka attack masterminds identified: Bangladesh minister Dhaka attack masterminds identified: Bangladesh minister Dhaka(Bangladesh), July 17, 2016: Gunmen killed 20 hostages, mostly foreigners, in a gruesome 12-hour siege at a restaurant in Dhaka's diplomatic zone on July 1. Bangladesh Home Minister Asaduzzaman Khan yesterday claimed the police have identified the masterminds of the Gulshan attack. "Those who were behind the attack have been specifically identified," he told reporters at his secretariat office in the city. He, however, did not go into any detail about the identities of the masterminds. Meanwhile, Dhaka Metropolitan Police (DMP) Commissioner Asaduzzaman Mia said they made a "significant progress" in the probe into the Gulshan attack case and identified the places the militants were staying and being trained. "We are now working to bring those involved to book after reviewing the information," he told a press conference at the media centre of DMP. The DMP organised the conference on safety and security issues. The city police chief said only five to six terrorists cannot commit such a big crime. "Terrorists were being recruited and trained. Some people instigated them; gave them shelter and arms." Police were trying to trace the instigators and masterminds of the attack, he added. Asked whether several people reportedly arrested by the Indian authorities were involved in the incident, the DMP commissioner said law enforcers did not have any such information. The people so far found involved in the incident are Bangladeshi citizens, he said, adding that they were not ruling out involvement of any national and international quarters. Asaduzzaman said different quarters were working to destabilise the country. "Trial of war criminals is underway. We are not ruling out any probable suspects who in the recent past created anarchy in the country in the name of movement." 
Asked if the militants were able to enter the high-security diplomatic zone and carry out the massacre due to negligence in duties by policemen, the DMP chief said only an investigation would determine whether there was any negligence on the part of cops. On July 1, armed militants, evading some police checkpoints, swooped on Holey Artisan Bakery in Gulshan and killed 20 people — nine Italian, seven Japanese, two Bangladeshis, an Indian and a Bangladesh-born US citizen. The attackers also killed two police officers who tried to end the hostage-taking soon after the incident began around 8:40pm. The 11-hour hostage crisis ended when army commandos stormed the café around 8:00am on July 2. In the operation, code-named Thunderbolt, five militants and a chef of the café were killed. Law enforcers said the chef was a suspect, because he "helped" the terrorists. His family, however, denied the allegation. Another café employee, detained by cops as a suspect, later died of injuries at Dhaka Medical College Hospital. The hostage-taking came following a spate of targeted killings of secular writers, bloggers, publishers, university teachers and religious leaders across the country over the last three years. Global terror outfit Islamic State took credit for the Gulshan attack, but the government denied the claim, saying that home-grown militants were to blame.
The other day I was trying to remember how to construct a simple Oblivious Linear Evaluation (OLE) from Random OLE. I couldn't find the formula online and it took a few trials to reconstruct it, so I decided to keep it here. There is nothing deep but it's a good way to recap what OLE is.

Oblivious Linear Evaluation is a two-party protocol. Alice holds an input x from some ring and Bob holds two inputs (a,b). They want to compute ax+b without revealing anything else (i.e. Alice gets ax+b and doesn't learn anything about a or b, and Bob doesn't learn anything about x). This picture from Peter Scholl[1] summarizes the functionality:

Random OLE is a variation of OLE where the inputs x,a,b are random instead of being chosen by the participants.

Then, it is a fact that we can construct (standard) OLE from Random OLE. But how? The idea is to use the outputs of the Random OLE as one-time pads for the true OLE inputs. Here is the explicit protocol:

1. Run Random OLE with (random) inputs $x_r, a_r, b_r$:

   $A: x_r \longrightarrow \text{R-OLE} \longleftarrow (a_r, b_r) : B$

   $A: a_r x_r + b_r \longleftarrow \text{R-OLE}$

2. Exchange the true inputs $x, a, b$ – carefully padded with the random values:

   $A: (x - x_r) \longrightarrow B$

   $A \longleftarrow a_r(x - x_r) + b - b_r : B$

   $A \longleftarrow (a - a_r) : B$

3. Finally, Alice can reconstruct the desired output:

   $(a_r x_r + b_r) + a_r(x - x_r) + b - b_r + (a - a_r)x = a_r x + (a - a_r)x + b = ax + b$

We can verify that in the honest-but-curious case Alice and Bob don't learn each other's inputs.

1. Source: 12th BIU Winter School
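The reconstruction algebra above can be checked mechanically. Below is a small simulation (our sketch: a trusted dealer stands in for the Random OLE functionality, and we work in a prime field so the pads are uniform):

```python
import random

P = 2**61 - 1   # a prime modulus, so the one-time pads are uniform in Z_p

def random_ole():
    """Trusted-dealer stand-in for the Random OLE functionality."""
    x_r, a_r, b_r = (random.randrange(P) for _ in range(3))
    z_r = (a_r * x_r + b_r) % P       # Alice's Random OLE output
    return x_r, (a_r, b_r), z_r

def ole(x, a, b):
    """Derandomized OLE: Alice inputs x, Bob inputs (a, b); Alice learns ax + b."""
    x_r, (a_r, b_r), z_r = random_ole()
    u = (x - x_r) % P                 # Alice -> Bob
    v = (a_r * u + b - b_r) % P       # Bob -> Alice
    w = (a - a_r) % P                 # Bob -> Alice
    return (z_r + v + w * x) % P      # Alice reconstructs ax + b locally
```

Expanding Alice's final sum reproduces exactly the telescoping computation in step 3.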
Hackerrank - Maximize It! Solution

Beeze Aal

You are given a function f(X) = X². You are also given K lists. The ith list consists of Ni elements.

You have to pick one element from each list so that the value from the equation below is maximized:

S = (f(X1)+f(X2)+....+f(Xk))%M

Xi denotes the element picked from the ith list. Find the maximized value Smax obtained.

% denotes the modulo operator.

Note that you need to take exactly one element from each list, not necessarily the largest element. You add the squares of the chosen elements and perform the modulo operation. The maximum value that you can obtain will be the answer to the problem.

Input Format

The first line contains 2 space separated integers K and M.
The next K lines each contains an integer Ni, denoting the number of elements in the ith list, followed by Ni space separated integers denoting the elements in the list.

Output Format

Output a single integer denoting the value Smax.

Sample Input

    3 1000
    2 5 4
    3 7 8 9
    5 5 7 8 9 10

Sample Output

    206

Explanation

Picking 5 from the 1st list, 9 from the 2nd list and 10 from the 3rd list gives the maximum S value equal to (5² + 9² + 10²)%1000 = 206.

Solution in python

    from itertools import product
    K,M = map(int,input().split())
    nums = []
    for _ in range(K):
        row = map(int,input().split()[1:])
        nums.append(map(lambda x:x**2%M, row))
    print(max(map(lambda x: sum(x)%M, product(*nums))))

Required Knowledge

Before we get started, we must know that the following 2 give us equal results:

(5² + 9² + 10²)%1000 = 206

(5²%1000 + 9²%1000 + 10²%1000)%1000 = 206

Also we should know the following python functions:

- map function
- split method
- list function

The following code takes the value of K (no. of rows) and M (modulus) from the user and converts both values to integers using the map() function:

    K,M = map(int,input().split())

Then we create an empty list and name it nums, and we loop K (no. of rows) times:

    nums = []
    for _ in range(K):
        row = map(int,input().split()[1:])
        nums.append(map(lambda x:x**2%M, row))

We use the map and split functions to convert the row input into a list of integers:

    >>> 2 5 4
    [2,5,4]
    >>> 3 7 8 9
    [3,7,8,9]
    >>> 5 5 7 8 9 10
    [5,5,7,8,9,10]

Then we use [1:] to slice out the first number of each row because it is actually the count of items in that row and we don't need it.

2 5 4 means we have 2 numbers in our row and 5,4 are the required numbers, 3 7 8 9 means we have 3 numbers in our row and 7,8,9 are the required numbers, and so on.

As required by the question, we square each number and find the remainder (or, we can say, the modulus) after dividing the squared number by M, and then we append that list to the nums variable:

    nums.append(map(lambda x:x**2%M, row))
    >>> list(map(lambda x:x**2%M, [5,4]))
    [25, 16]
    >>> list(map(lambda x:x**2%M, [7,8,9]))
    [49, 64, 81]
    >>> list(map(lambda x:x**2%M, [5,7,8,9,10]))
    [25, 49, 64, 81, 100]

In the above example I have added a list function just for unpacking the values inside the map function. However, our code works without unpacking the values.

Now the following gives us all the possible ways of picking K numbers from our nums variable:

    >>> list(product(*nums))
    [(25, 49, 25), (25, 49, 49), (25, 49, 64), (25, 49, 81), (25, 49, 100),
     (25, 64, 25), (25, 64, 49), (25, 64, 64), (25, 64, 81), (25, 64, 100),
     (25, 81, 25), (25, 81, 49), (25, 81, 64), (25, 81, 81), (25, 81, 100),
     (16, 49, 25), (16, 49, 49), (16, 49, 64), (16, 49, 81), (16, 49, 100),
     (16, 64, 25), (16, 64, 49), (16, 64, 64), (16, 64, 81), (16, 64, 100),
     (16, 81, 25), (16, 81, 49), (16, 81, 64), (16, 81, 81), (16, 81, 100)]

Now our task is to sum each tuple and find the remainder after dividing by M, for which we will use the lambda, sum and map functions:

    >>> list(map(lambda x: sum(x)%M, product(*nums)))
    [99, 123, 138, 155, 174, 114, 138, 153, 170, 189, 131, 155, 170, 187, 206,
     90, 114, 129, 146, 165, 105, 129, 144, 161, 180, 122, 146, 161, 178, 197]

And here you go, the greatest number of this list is our answer. Let's use the max function for finding the biggest number:

    >>> print(max(map(lambda x: sum(x)%M, product(*nums))))
    206

If you have any confusion just leave a comment below and I will try to make it clear for you.
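As an end-to-end check, the same logic can be packaged so it runs on the sample input without interactive `input()` calls (the function name and the lines-list interface are ours):

```python
from itertools import product

def maximize(lines):
    """Return Smax for the problem: pick one element per list, maximize sum of squares mod M."""
    it = iter(lines)
    K, M = map(int, next(it).split())
    nums = []
    for _ in range(K):
        row = list(map(int, next(it).split()))[1:]   # drop the leading count Ni
        nums.append([x * x % M for x in row])
    # Brute-force every combination, exactly as in the solution above.
    return max(sum(combo) % M for combo in product(*nums))

sample = ["3 1000", "2 5 4", "3 7 8 9", "5 5 7 8 9 10"]
```

On the sample input this reproduces the expected answer of 206.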
# Prove that if a sequence converges then $\lim x_n = \limsup x_n$ or $\lim x_n = \liminf x_n$

Given a convergent sequence $\{x_n\}$ prove that either:
$$\lim_{n \to\infty} x_n = \lim_{n \to \infty} \sup \{x_n\} \quad\text{or}\quad \lim_{n \to\infty} x_n = \lim_{n \to \infty} \inf \{x_n\}$$

I believe this problem has been solved several times here, but I couldn't find such a question (probably due to translation issues, since the original problem is in another language).

I've started with gathering what is given in the problem statement. So we have that the sequence is convergent, thus:
$$\lim_{n\to\infty}x_n = L \iff \forall\varepsilon >0,\ \exists N\in \mathbb{N}: \forall n> N \implies |x_n-L|<\varepsilon$$

Also we have that the sequence is bounded, so:
$$m = \inf\{x_n\} \le x_n \le \sup\{x_n\} = M$$

Now using these facts I believe I should make some assumption (for example that $x_n$ doesn't reach any bound and proceed by contradiction), but I haven't been able to wrap my mind around it for several hours already.

I would appreciate if someone could show me how to prove this or point to an already answered question.

---

If $m = M$, then the sequence is constant, so the result holds. If not, then either $m$ or $M$ (maybe both, but it doesn't matter: pick either in that case) is not equal to $L$. Whichever it is (call that one $k$), there is some $N$ such that for all $n > N$, $|x_n - L| < \frac{|L-k|}{2}$. Since $k$ is an exact bound for $(x_n)$, there must, for any $\delta > 0$, be some $n$ such that $|k - x_n| < \delta$. But for any $\delta < \frac{|L-k|}{2}$, this can't happen after the $N$th term, so it must happen in the first $N$ terms somewhere, so $k$ is an exact bound for the set of the first $N$ terms of $(x_n)$. But there are finitely many such terms, and every finite set achieves its exact bounds, so in particular, there is some $n < N$ such that $x_n = k$.
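As a quick numerical illustration of the answer's key point (when $m \neq M$, an exact bound that differs from the limit $L$ must be attained by an early term), consider the convergent sequence $x_n = 1 + (-1)^n/n$, our own example: it has $L = 1$, and both exact bounds differ from $L$, so both are attained:

```python
# x_n = 1 + (-1)^n / n converges to L = 1; sup = 3/2 at n = 2, inf = 0 at n = 1.
def x(n):
    return 1 + (-1) ** n / n

xs = [x(n) for n in range(1, 10001)]
M_sup, m_inf = max(xs), min(xs)
```

Here `max`/`min` over a long prefix stand in for the exact bounds, which for this sequence are reached among the first two terms, exactly as the finiteness argument predicts.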
#include <crypto/algapi.h>
#include <linux/module.h>
#include <linux/crypto.h>

#define SALSA20_IV_SIZE		8U
#define SALSA20_MIN_KEY_SIZE	16U
#define SALSA20_MAX_KEY_SIZE	32U

struct salsa20_ctx {
	u32 input[16];
};

asmlinkage void salsa20_keysetup(struct salsa20_ctx *ctx, const u8 *k,
				 u32 keysize, u32 ivsize);
asmlinkage void salsa20_ivsetup(struct salsa20_ctx *ctx, const u8 *iv);
asmlinkage void salsa20_encrypt_bytes(struct salsa20_ctx *ctx,
				      const u8 *src, u8 *dst, u32 bytes);

static int setkey(struct crypto_tfm *tfm, const u8 *key,
		  unsigned int keysize)
{
	struct salsa20_ctx *ctx = crypto_tfm_ctx(tfm);
	salsa20_keysetup(ctx, key, keysize * 8, SALSA20_IV_SIZE * 8);
	return 0;
}

static int encrypt(struct blkcipher_desc *desc,
		   struct scatterlist *dst, struct scatterlist *src,
		   unsigned int nbytes)
{
	struct blkcipher_walk walk;
	struct crypto_blkcipher *tfm = desc->tfm;
	struct salsa20_ctx *ctx = crypto_blkcipher_ctx(tfm);
	int err;

	blkcipher_walk_init(&walk, dst, src, nbytes);
	err = blkcipher_walk_virt_block(desc, &walk, 64);

	salsa20_ivsetup(ctx, walk.iv);

	if (likely(walk.nbytes == nbytes)) {
		salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
				      walk.dst.virt.addr, nbytes);
		return blkcipher_walk_done(desc, &walk, 0);
	}

	while (walk.nbytes >= 64) {
		salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
				      walk.dst.virt.addr,
				      walk.nbytes - (walk.nbytes % 64));
		err = blkcipher_walk_done(desc, &walk, walk.nbytes % 64);
	}

	if (walk.nbytes) {
		salsa20_encrypt_bytes(ctx, walk.src.virt.addr,
				      walk.dst.virt.addr, walk.nbytes);
		err = blkcipher_walk_done(desc, &walk, 0);
	}

	return err;
}

static struct crypto_alg alg = {
	.cra_name		= "salsa20",
	.cra_driver_name	= "salsa20-asm",
	.cra_priority		= 200,
	.cra_flags		= CRYPTO_ALG_TYPE_BLKCIPHER,
	.cra_type		= &crypto_blkcipher_type,
	.cra_blocksize		= 1,
	.cra_ctxsize		= sizeof(struct salsa20_ctx),
	.cra_alignmask		= 3,
	.cra_module		= THIS_MODULE,
	.cra_u			= {
		.blkcipher = {
			.setkey		= setkey,
			.encrypt	= encrypt,
			.decrypt	= encrypt,
			.min_keysize	= SALSA20_MIN_KEY_SIZE,
			.max_keysize	= SALSA20_MAX_KEY_SIZE,
			.ivsize		= SALSA20_IV_SIZE,
		}
	}
};

static int __init init(void)
{
	return crypto_register_alg(&alg);
}

static void __exit fini(void)
{
	crypto_unregister_alg(&alg);
}

module_init(init);
module_exit(fini);

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Salsa20 stream cipher algorithm (optimized assembly version)");
MODULE_ALIAS("salsa20");
MODULE_ALIAS("salsa20-asm");
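The `asmlinkage` routines declared above are implemented in assembly. As a reference point, here is a plain-C sketch of the Salsa20 core they compute — 10 double rounds (column round, then row round) of Bernstein's quarter-round, followed by the feed-forward addition. The function name `salsa20_core_ref` is illustrative only and is not part of the kernel API:

```c
#include <assert.h>
#include <stdint.h>

#define ROTL32(v, n) (((v) << (n)) | ((v) >> (32 - (n))))

/* Portable reference sketch of the Salsa20 core (Bernstein's spec):
 * 20 rounds = 10 double rounds, each a columnround followed by a rowround,
 * then the feed-forward addition of the input block. */
void salsa20_core_ref(uint32_t out[16], const uint32_t in[16])
{
	uint32_t x[16];
	int i;

	for (i = 0; i < 16; ++i)
		x[i] = in[i];

	for (i = 0; i < 20; i += 2) {
		/* columnround: quarterrounds down the four columns */
		x[ 4] ^= ROTL32(x[ 0] + x[12], 7);  x[ 8] ^= ROTL32(x[ 4] + x[ 0], 9);
		x[12] ^= ROTL32(x[ 8] + x[ 4], 13); x[ 0] ^= ROTL32(x[12] + x[ 8], 18);
		x[ 9] ^= ROTL32(x[ 5] + x[ 1], 7);  x[13] ^= ROTL32(x[ 9] + x[ 5], 9);
		x[ 1] ^= ROTL32(x[13] + x[ 9], 13); x[ 5] ^= ROTL32(x[ 1] + x[13], 18);
		x[14] ^= ROTL32(x[10] + x[ 6], 7);  x[ 2] ^= ROTL32(x[14] + x[10], 9);
		x[ 6] ^= ROTL32(x[ 2] + x[14], 13); x[10] ^= ROTL32(x[ 6] + x[ 2], 18);
		x[ 3] ^= ROTL32(x[15] + x[11], 7);  x[ 7] ^= ROTL32(x[ 3] + x[15], 9);
		x[11] ^= ROTL32(x[ 7] + x[ 3], 13); x[15] ^= ROTL32(x[11] + x[ 7], 18);
		/* rowround: quarterrounds along the four rows */
		x[ 1] ^= ROTL32(x[ 0] + x[ 3], 7);  x[ 2] ^= ROTL32(x[ 1] + x[ 0], 9);
		x[ 3] ^= ROTL32(x[ 2] + x[ 1], 13); x[ 0] ^= ROTL32(x[ 3] + x[ 2], 18);
		x[ 6] ^= ROTL32(x[ 5] + x[ 4], 7);  x[ 7] ^= ROTL32(x[ 6] + x[ 5], 9);
		x[ 4] ^= ROTL32(x[ 7] + x[ 6], 13); x[ 5] ^= ROTL32(x[ 4] + x[ 7], 18);
		x[11] ^= ROTL32(x[10] + x[ 9], 7);  x[ 8] ^= ROTL32(x[11] + x[10], 9);
		x[ 9] ^= ROTL32(x[ 8] + x[11], 13); x[10] ^= ROTL32(x[ 9] + x[ 8], 18);
		x[12] ^= ROTL32(x[15] + x[14], 7);  x[13] ^= ROTL32(x[12] + x[15], 9);
		x[14] ^= ROTL32(x[13] + x[12], 13); x[15] ^= ROTL32(x[14] + x[13], 18);
	}

	for (i = 0; i < 16; ++i)
		out[i] = x[i] + in[i];
}
```

One easy sanity check: on an all-zero input block the core produces an all-zero output, since every addition, rotation and XOR of zeros is zero.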
Q: Reboot a few seconds after program termination

I am maintaining a C/C++ program under Linux which changes BIOS settings and reboots to enable the new settings. Now the test team needs to verify the exit status, but the program reboots immediately after termination, so their script doesn't have enough time to record the status. I have tried system("shutdown -r -t 1"), but it waits for one minute. I just need a delay of a few seconds, and shutdown has no option for that. Are there other methods (besides at or cron) to implement a reboot delay of a few seconds after program termination?

A: You can try the following line:

system("nohup bash -c 'sleep 10; shutdown -r -t now' > /tmp/shutdown.log&");

This will return immediately to your program, and after 10 seconds it will invoke the shutdown.
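A minimal C sketch of the same idea, factored so that the command string can be inspected before being handed to system(). The helper name, the buffer size, and the log-path handling are assumptions for this example:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build a detached, delayed reboot command along the lines of the answer:
 * nohup + trailing '&' make the shell job survive the caller and return
 * immediately, so the program can exit with its real status before the
 * reboot fires delay_secs later. */
void build_delayed_reboot_cmd(char *buf, size_t len, int delay_secs)
{
	snprintf(buf, len,
		 "nohup bash -c 'sleep %d; shutdown -r now' > /tmp/shutdown.log 2>&1 &",
		 delay_secs);
}

/* In the real program one would call, just before exiting:
 *     char cmd[160];
 *     build_delayed_reboot_cmd(cmd, sizeof cmd, 5);
 *     system(cmd);
 */
```

Because system() only launches the background job and returns, the parent's exit status is visible to the test script for the full delay window.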
Clark and Dodson Earn Google Cloud Academic All-District Honors

LEXINGTON, Va. -- The College Sports Information Directors of America (CoSIDA) released its 2018 Google Cloud Division III Academic All-District Football squad on Thursday afternoon, and Washington and Lee had two players selected to the 26-member District 5 team. Junior offensive lineman Sean Clark (Washington, D.C. / Georgetown Prep) and junior defensive back Matt Dodson (Oxford, Conn. / Oxford) both earned Academic All-District laurels for the first time in their collegiate careers.

An accounting major, Clark has started all 20 games at tackle over the past two seasons, earning Third Team All-ODAC accolades a season ago. This year, he helped the Generals average 292.7 rushing yards per game, fifth-best in Division III.

Dodson started the first six games of the season before suffering an injury. He posted 23 tackles and notched one interception for a defense that ranked among the best in the ODAC this season. For his career, the mathematics major has tallied 70 tackles and five interceptions.

The criteria for the CoSIDA All-District program state that a player must be of sophomore academic standing, be a starter or important reserve, and claim a GPA of at least 3.30 on a 4.0 scale. Washington and Lee is a member of District 5, which includes players from small colleges in Alabama, Arkansas, Florida, Georgia, Mississippi, Missouri, North Carolina, Puerto Rico, South Carolina, Tennessee and Virginia. Both players will now move on to the national ballot, where they could be voted Academic All-America honors.

Washington and Lee finished the season with a 5-4 overall record and a 3-4 mark in league play. The Generals recorded their fourth straight winning season and the eighth non-losing campaign in the last nine years.

-- http://www.generalssports.com --
# 6.4 Graphs of logarithmic functions (Page 8/8)

## Verbal

The inverse of every logarithmic function is an exponential function and vice-versa. What does this tell us about the relationship between the coordinates of the points on the graphs of each?

Since the functions are inverses, their graphs are mirror images about the line $y=x$. So for every point $(a,b)$ on the graph of a logarithmic function, there is a corresponding point $(b,a)$ on the graph of its inverse exponential function.

What type(s) of translation(s), if any, affect the range of a logarithmic function?

What type(s) of translation(s), if any, affect the domain of a logarithmic function?

Shifting the function right or left and reflecting the function about the y-axis will affect its domain.

Consider the general logarithmic function $f(x)=\log_b(x)$. Why can't $x$ be zero?

Does the graph of a general logarithmic function have a horizontal asymptote? Explain.

No. A horizontal asymptote would suggest a limit on the range, and the range of any logarithmic function in general form is all real numbers.

## Algebraic

For the following exercises, state the domain and range of the function.

$f(x)=\log_3(x+4)$

$h(x)=\ln\left(\frac{1}{2}-x\right)$

Domain: $\left(-\infty,\frac{1}{2}\right)$; Range: $(-\infty,\infty)$

$g(x)=\log_5(2x+9)-2$

$h(x)=\ln(4x+17)-5$

Domain: $\left(-\frac{17}{4},\infty\right)$; Range: $(-\infty,\infty)$

$f(x)=\log_2(12-3x)-3$

For the following exercises, state the domain and the vertical asymptote of the function.

$f(x)=\log_b(x-5)$

Domain: $(5,\infty)$; Vertical asymptote: $x=5$

$g(x)=\ln(3-x)$

$f(x)=\log(3x+1)$

Domain: $\left(-\frac{1}{3},\infty\right)$; Vertical asymptote: $x=-\frac{1}{3}$

$f(x)=3\log(-x)+2$

$g(x)=-\ln(3x+9)-7$

Domain: $(-3,\infty)$; Vertical asymptote: $x=-3$

For the following exercises, state the domain, vertical asymptote, and end behavior of the function.

$f(x)=\ln(2-x)$

$f(x)=\log\left(x-\frac{3}{7}\right)$

Domain: $\left(\frac{3}{7},\infty\right)$; Vertical asymptote: $x=\frac{3}{7}$; End behavior: as $x\to\left(\frac{3}{7}\right)^+$, $f(x)\to-\infty$ and as $x\to\infty$, $f(x)\to\infty$

$h(x)=-\log(3x-4)+3$

$g(x)=\ln(2x+6)-5$

Domain: $(-3,\infty)$; Vertical asymptote: $x=-3$; End behavior: as $x\to -3^+$, $f(x)\to-\infty$ and as $x\to\infty$, $f(x)\to\infty$

$f(x)=\log_3(15-5x)+6$

For the following exercises, state the domain, range, and x- and y-intercepts, if they exist. If they do not exist, write DNE.

$h(x)=\log_4(x-1)+1$

Domain: $(1,\infty)$; Range: $(-\infty,\infty)$; Vertical asymptote: $x=1$; x-intercept: $\left(\frac{5}{4},0\right)$; y-intercept: DNE

$f(x)=\log(5x+10)+3$

$g(x)=\ln(-x)-2$

Domain: $(-\infty,0)$; Range: $(-\infty,\infty)$; Vertical asymptote: $x=0$; x-intercept: $(-e^2,0)$; y-intercept: DNE

$f(x)=\log_2(x+2)-5$

$h(x)=3\ln(x)-9$

Domain: $(0,\infty)$; Range: $(-\infty,\infty)$; Vertical asymptote: $x=0$; x-intercept: $(e^3,0)$; y-intercept: DNE

## Graphical

For the following exercises, match each function in [link] with the letter corresponding to its graph.

$d(x)=\log(x)$

$f(x)=\ln(x)$

B

$g(x)=\log_2(x)$

$h(x)=\log_5(x)$

C

$j(x)=\log_{25}(x)$

For the following exercises, match each function in [link] with the letter corresponding to its graph.

$f(x)=\log_{1/3}(x)$

B

$g(x)=\log_2(x)$

$h(x)=\log_{3/4}(x)$

C

For the following exercises, sketch the graphs of each pair of functions on the same axis.

$f(x)=\log(x)$ and $g(x)=10^x$

$f(x)=\log(x)$ and $g(x)=\log_{1/2}(x)$

$f(x)=\log_4(x)$ and $g(x)=\ln(x)$

$f(x)=e^x$ and $g(x)=\ln(x)$

For the following exercises, match each function in [link] with the letter corresponding to its graph.

$f(x)=\log_4(-x+2)$

$g(x)=-\log_4(x+2)$

C

$h(x)=\log_4(x+2)$

For the following exercises, sketch the graph of the indicated function.

$f(x)=\log_2(x+2)$

$f(x)=2\log(x)$

$f(x)=\ln(-x)$

$g(x)=\log(4x+16)+4$

$g(x)=\log(6-3x)+1$

$h(x)=-\frac{1}{2}\ln(x+1)-3$

For the following exercises, write a logarithmic equation corresponding to the graph shown.

Use $y=\log_2(x)$ as the parent function.

$f(x)=\log_2(-(x-1))$

Use $f(x)=\log_3(x)$ as the parent function.

Use $f(x)=\log_4(x)$ as the parent function.

$f(x)=3\log_4(x+2)$

Use $f(x)=\log_5(x)$ as the parent function.

## Technology

For the following exercises, use a graphing calculator to find approximate solutions to each equation.

$\log(x-1)+2=\ln(x-1)+2$

$x=2$

$\log(2x-3)+2=-\log(2x-3)+5$

$\ln(x-2)=-\ln(x+1)$

$x\approx 2.303$

$2\ln(5x+1)=\frac{1}{2}\ln(-5x)+1$

$\frac{1}{3}\log(1-x)=\log(x+1)+\frac{1}{3}$

$x\approx -0.472$

## Extensions

Let $b$ be any positive real number such that $b\ne 1$. What must $\log_b 1$ be equal to? Verify the result.

Explore and discuss the graphs of $f(x)=\log_{1/2}(x)$ and $g(x)=-\log_2(x)$. Make a conjecture based on the result.

The graphs of $f(x)=\log_{1/2}(x)$ and $g(x)=-\log_2(x)$ appear to be the same; Conjecture: for any positive base $b\ne 1$, $\log_b(x)=-\log_{1/b}(x)$.

Prove the conjecture made in the previous exercise.

What is the domain of the function $f(x)=\ln\left(\frac{x+2}{x-4}\right)$? Discuss the result.

Recall that the argument of a logarithmic function must be positive, so we determine where $\frac{x+2}{x-4}>0$. From the graph of the function $f(x)=\frac{x+2}{x-4}$, note that the graph lies above the x-axis on the interval $(-\infty,-2)$ and again to the right of the vertical asymptote, that is, on $(4,\infty)$. Therefore, the domain is $(-\infty,-2)\cup(4,\infty)$.

Use properties of exponents to find the x-intercepts of the function $f(x)=\log(x^2+4x+4)$ algebraically. Show the steps for solving, and then verify the result by graphing the function.
\section{Introduction} \label{Intro} The modest aim of this article is to construct non-trivial extensions in Voe\-vodsky's category of effective geometrical motives, by studying a very special and concrete geometric situation, namely that of a singular proper surface. \\ This example illustrates a much more general principle: varie\-ties $Y$ that are singular (or non-proper, for that matter), can provide interesting extensions of motives. The cohomological theories of mixed sheaves suggest where to look for these motives: the one should come from the open smooth part $Y_{\mathop{{\rm reg}}\nolimits}$ of $Y$ --- the \emph{intersection motive} of $Y$ --- the other should be constructed out of the complement of $Y_{\mathop{{\rm reg}}\nolimits}$ in (a compactification of) $Y$ --- the \emph{boundary motive} of $Y_{\mathop{{\rm reg}}\nolimits}$. This principle (for which no originality is claimed, since it has been part of the mathematical culture for some time) will be discussed in more detail separately, in order to preserve the structure of the present article. It is intended as a research article with a large instructional component. \\ The geometric object of interest is a proper surface $\overline{X}$ over an arbitrary base field $k$. \\ The first three sections contain nothing fundamentally new, except maybe for the systematic use of K\"unneth filtrations (which are canonical) instead of K\"unneth decompositions (which in general are not). Section~\ref{1} reviews a special case of a result of Borho and MacPherson \cite{BoMp}, computing the intersection cohomology of $\overline{X}$ in terms of the cohomology of a desingularization $\mathop{\widetilde{X}} \nolimits$. The result, predicted by the Decomposition Theorem of \cite{BBD}, implies that the former is a direct factor of the latter. More precisely (Theorem~\ref{1A}), its complement is given by the second cohomology of the exceptional divisor $D$ of $\mathop{\widetilde{X}} \nolimits$. 
This follows from the well-known non-degeneracy of the intersection pairing on the components $D_m$ of $D$. As remarked already by de Cataldo and Migliorini \cite{CM}, this latter observation allows to directly translate the construction into the motivic world, and to construct the intersection motive $h_{!*} (\overline{X})$ of $\overline{X}$. This is done in Section~\ref{2}. We get a canonical decomposition \[ h(\mathop{\widetilde{X}} \nolimits) = h_{!*} (\overline{X}) \oplus \bigoplus_m h^2(D_m) \] in the category of Chow motives over $k$. Recall that this category is pseudo-Abelian. The above decomposition should be considered as remarkable: to construct a sub-motive of $h(\mathop{\widetilde{X}} \nolimits)$ does not \emph{a priori} necessitate the \emph{identification}, but only the \emph{existence} of a complement. In our situation, the complement \emph{is} canonical, thanks to the very special geometrical situation. This point is reflected by the rather subtle functoriality properties of $h_{!*} (\overline{X})$ (Proposition~\ref{2E}): viewed as a sub-motive of $h(\mathop{\widetilde{X}} \nolimits)$, it is respected by pull-backs, viewed as a quotient, it is respected by push-forwards under dominant morphisms of surfaces. Section~\ref{3} is devoted to the existence and the study of the K\"unneth filtration of $h_{!*}(\overline{X})$. The main ingredient is of course Murre's construction of K\"unneth projectors for the motive $h(\mathop{\widetilde{X}} \nolimits)$ \cite{Mr}. Theorem~\ref{3C} shows how to adapt these to our construction. \\ As suggested by one of the fundamental properties of intersection cohomology \cite{BBD}, the intersection motive of $\overline{X}$ satisfies the Hard Lefschetz Theorem for ample line bundles on $\overline{X}$. We prove this result (Theorem~\ref{4A}) in Section~\ref{4}. In fact, we give a slight generalization (Variant~\ref{4A'}), which will turn out to be useful for the setting we shall study in the last section. 
\\ Section~\ref{5} is concerned with the motive of the boundary $D$ of the desingularization $\mathop{\widetilde{X}} \nolimits$ of $\overline{X}$. This boundary being singular in general, the right language for the study of its motive is given by Voevodsky's triangulated category of effective geometrical motives \cite{VSF}. The section starts with a review of the definition of this category, and of its relation to Chow motives. It is then easy to define motivic analogues of $H^0$ and $H^2$ of $D$, and to see that they are Chow motives. The most interesting part is the motivic analogue of the part of degree one $H^1$, which will be seen as a canonical sub-quotient of the motive of $D$. \\ In Section~\ref{6}, we unite what was said before, and give our main result (Theorem~\ref{Main}). Assuming that all geometric irreducible components of $D$ are of genus zero, we construct a one-extension of the degree two-part of the intersection motive of $\overline{X}$ by the degree one-part of the motive of $D$. We have no difficulty to admit that this statement was greatly inspired by the main result of a recent article of Caspar \cite{Cs}. It thus appeared appropriate to conclude this article by a discussion of his result. This is what is done in Section~\ref{7}, where we show that in the geometric setting considered in [loc.$\;$cit.] , Theorem~\ref{Main} yields a motivic interpretation of Caspar's construction. \\ Part of this work was done while I was enjoying a \emph{cong{\'e} pour recherches ou conversions th{\'e}matiques}, granted by the \emph{Universit{\'e} Paris~13}, and during a visit to the \emph{Centre de Recerca Matem\`atica} at Bellaterra--Barcelona. I am grateful to both institutions. I also wish to thank J.\ Ayoub, J.I.\ Burgos, M.A.A.\ de Cataldo, F.\ D\'eglise, B.\ Kahn, K.\ K\"unnemann and F.\ Lemma for useful discussions. 
\\ {\bf Notations and convention}: $k$ denotes a fixed base field, and $CH$ stands for the tensor product with ${\mathbb{Q}}$ of the Chow group. The ${\mathbb{Q}}$-linear category of Chow motives over $k$ is denoted by $CHM(k)_{{\mathbb{Q}}}$. Our standard reference for Chow motives is Scholl's survey article \cite{Sch}. \bigskip \section{Intersection cohomology of surfaces} \label{1} In order to motivate the construction of the intersection motive, to be given in the next section, we shall recall the computation of the \emph{intersection cohomology} of a complex surface. \\ Thus, throughout this section, our base field $k$ will be equal to ${\mathbb{C}}$. We consider the following situation: \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r]^-{j} & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r]^{i} & Z \\}} \] The morphism $i$ is a closed immersion of a sub-scheme $Z$, with complement $j$. The scheme $X^*$ is a surface over ${\mathbb{C}}$, all of whose singularities are contained in $Z$. Thus, the surface $X$ is smooth. \\ Our aim is to identify the intersection cohomology groups $H^n_{!*} (\mathop{X^*} \nolimits({\mathbb{C}}),{\mathbb{Q}})$. Note that since $X$ is smooth, the complex ${\mathbb{Q}}_X [2]$ consisting of the constant local system ${\mathbb{Q}}$, placed in degree $-2$, can be viewed as a \emph{perverse sheaf} (for the middle perversity) on $X({\mathbb{C}})$ \cite[Sect.~2.2.1]{BBD}. Hence its \emph{intermediate extension} $j_{!*} {\mathbb{Q}}_X [2]$ \cite[(2.2.3.1)]{BBD} is defined as a perverse sheaf on $X^*({\mathbb{C}})$. By definition, \[ H^n_{!*} (\mathop{X^*} \nolimits({\mathbb{C}}),{\mathbb{Q}}) = H^{n-2} (X^*({\mathbb{C}}),j_{!*} {\mathbb{Q}}_X [2]) \; , \; \forall \, n \in {\mathbb{Z}} \; . 
\] In order to identify $H^n_{!*} (\mathop{X^*} \nolimits({\mathbb{C}}),{\mathbb{Q}})$, note first that the normalization of $\mathop{X^*} \nolimits$ is finite over $\mathop{X^*} \nolimits$, and the direct image under finite morphisms is exact for the perverse $t$-structure \cite[Cor.~2.2.6~(i)]{BBD}. Therefore, intersection cohomology is invariant under passage to the normalization. In the sequel, we therefore assume $\mathop{X^*} \nolimits$ to be normal. In particular, its singularities are isolated. \\ Next, note that if $\mathop{X^*} \nolimits$ is smooth, then the complex $j_{!*} {\mathbb{Q}}_X [2]$ equals ${\mathbb{Q}}_{\mathop{X^*} \nolimits} [2]$. Transitivity of $j_{!*}$ \cite[(2.1.7.1)]{BBD} shows that we may enlarge $X$, and hence assume that the closed sub-scheme $Z$ is finite. \\ Choose a resolution of singularities. More precisely, consider in addition the following diagram, assumed to be cartesian: \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r]^-{\tilde{\jmath}} \ar@{=}[d] & {\mathop{\widetilde{X}} \nolimits} \ar@{<-^{ )}}[r]^{\tilde{\imath}} \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r]^-{j} & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r]^{i} & Z \\}} \] The morphism $\pi$ is assumed proper (and birational) and the surface $\mathop{\widetilde{X}} \nolimits$, smooth. We then have the following special case of \cite[Thm.~1.7]{BoMp}. \begin{Thm} \label{1A} (i) For $n \ne 2$, \[ H^n_{!*} (\mathop{X^*} \nolimits({\mathbb{C}}),{\mathbb{Q}}) = H^n (\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \; . \] \noindent (ii) The group $H^2_{!*} (\mathop{X^*} \nolimits({\mathbb{C}}),{\mathbb{Q}})$ is a direct factor of $H^2 (\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}})$, with a \emph{cano\-ni\-cal} complement. 
As a sub-group, this complement is given by the map \[ {\tilde{\imath}}_*: H^2_{D({\mathbb{C}})}(\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \longrightarrow H^2 (\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \] from cohomology with support in $D({\mathbb{C}})$; this map is injective. As a quotient, the complement is given by the restriction \[ {\tilde{\imath}}^*: H^2 (\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \longrightarrow H^2(D({\mathbb{C}}),{\mathbb{Q}}) \; ; \] this map is surjective. \end{Thm} Note that this result is compatible with further blow-up of $\mathop{\widetilde{X}} \nolimits$ in points belonging to $D$. \\ Let us construct the maps between $H^n_{!*} (\mathop{X^*} \nolimits({\mathbb{C}}),{\mathbb{Q}})$ and $H^n (\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}})$ leading to the above identifications. Consider the total direct image $\pi_* {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits}$~; following the convention used in \cite{BBD}, we drop the letter ``$R$'' from our notation. \begin{Lem} \label{1B} The complex $\pi_* {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits}[2]$ is a perverse sheaf on $X^*$. \end{Lem} \begin{Proof} Let $P$ be a point (of $Z$) over which $\pi$ is not an isomorphism, and denote by $i_P$ its inclusion into $\mathop{X^*} \nolimits$. By definition \cite[D{\'e}f.~2.1.2]{BBD}, we need to check that (a)~the higher inverse images $H^n i_P^* \pi_* {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits}$ vanish for $n>2$, (b)~the higher exceptional inverse images $H^n i^!_P \pi_* {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits}$ vanish for $n<2$. (a) By proper base change, the group in question equals $H^n (\pi^{-1}(P),{\mathbb{Q}})$. Since $\pi^{-1}(P)$ is of dimension at most one, there is no cohomology above degree two. (b) The surface $\mathop{\widetilde{X}} \nolimits$ is smooth. 
Duality and proper base change imply that the group in question is abstractly isomorphic to the dual of $H^{4-n} (\pi^{-1}(P),{\mathbb{Q}})$. This group vanishes if $4-n$ is strictly larger than two. \end{Proof} For $a \in {\mathbb{Z}}$, denote by $\tau_{\le a}$ the functor associating to a complex the $a$-th step of its canonical filtration (with respect to the classical $t$-structure). Recall that $j_{!*} {\mathbb{Q}}_X [2]$ equals $\tau_{\le -1} (j_* {\mathbb{Q}}_X[2])$ \cite[Prop.~2.1.11]{BBD}. We now see how to relate it to $\pi_* {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits}[2]$: apply $\pi_*$ to the exact triangle \[ {\tilde{\imath}}_* {\tilde{\imath}}^! {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits} \longrightarrow {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits} \longrightarrow {\tilde{\jmath}}_* {\mathbb{Q}}_X \longrightarrow {\tilde{\imath}}_* {\tilde{\imath}}^! {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits} [1] \; . \] This gives an exact triangle \[ i_* F [0] \longrightarrow \tau_{\le -1} (\pi_* {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits}[2]) \longrightarrow j_{!*} {\mathbb{Q}}_X [2] \longrightarrow i_* F [1] \; ; \] in fact, as in the proof of Lemma~\ref{1B}, one sees that $F$ is a sheaf concentrated in $Z$. More precisely, the restriction to any point $P$ of $Z$ of this sheaf equals the kernel of the composition \[ {\tilde{\imath}}^* {\tilde{\imath}}_* : H^2_{\pi^{-1}(P)}(\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \longrightarrow H^2 (\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \longrightarrow H^2(\pi^{-1}(P),{\mathbb{Q}}) \; . \] We thus get the following. \begin{Lem} \label{1C} There is a canonical exact sequence \[ 0 \longrightarrow i_* F [0] \longrightarrow \tau_{\le -1} (\pi_* {\mathbb{Q}}_{\mathop{\widetilde{X}} \nolimits}[2]) \longrightarrow j_{!*} {\mathbb{Q}}_X [2] \longrightarrow 0 \] of perverse sheaves on $X^*$. 
\end{Lem} \begin{Proofof}{Theorem~\ref{1A}} We shall show that the composition \[ {\tilde{\imath}}^* {\tilde{\imath}}_* : H^2_{D({\mathbb{C}})}(\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \longrightarrow H^2(D({\mathbb{C}}),{\mathbb{Q}}) \] is in fact an isomorphism. This implies that the sheaf $F$ is zero. It also implies injectivity of \[ {\tilde{\imath}}_*: H^2_{D({\mathbb{C}})}(\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \longrightarrow H^2 (\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \; , \] as well as surjectivity of \[ {\tilde{\imath}}^*: H^2 (\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \longrightarrow H^2(D({\mathbb{C}}),{\mathbb{Q}}) \; . \] Hence the statement of our theorem. In order to prove bijectivity of ${\tilde{\imath}}^* {\tilde{\imath}}_*$, note that we may assume that $D$ is a divisor, whose irreducible components are smooth. Indeed, if $f: \mathop{\widetilde{X}} \nolimits' \to \mathop{\widetilde{X}} \nolimits$ is a further blow-up, such that $f^{-1}(D)$ has the required property \cite[Thm.~$I_2^{N,n}$]{Hi}, then the push-forward $f_*$ is a left inverse of the pull-back $f^*$, and the diagrams invol\-ving cohomology of $D({\mathbb{C}})$ and $f^{-1} (D({\mathbb{C}}))$, and cohomo\-lo\-gy with support in $D({\mathbb{C}})$ and $f^{-1} (D({\mathbb{C}}))$, respectively, commute thanks to proper base change. Therefore, bijectivity on the level of $\mathop{\widetilde{X}} \nolimits$ follows from bijectivity on the level of $\mathop{\widetilde{X}} \nolimits'$. If $D_m$ are the irreducible components of $D$, then the closed covering $D = \cup_m D_m$ induces canonical isomorphisms \[ \bigoplus_m H^2_{D_m({\mathbb{C}})}(\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \arrover{\sim} H^2_{D({\mathbb{C}})}(\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}}) \] and \[ H^2(D({\mathbb{C}}),{\mathbb{Q}}) \arrover{\sim} \bigoplus_m H^2(D_m({\mathbb{C}}),{\mathbb{Q}}) \; . 
\] Purity identifies each $H^2_{D_m({\mathbb{C}})}(\mathop{\widetilde{X}} \nolimits({\mathbb{C}}),{\mathbb{Q}})$ with $H^0(D_m({\mathbb{C}}),{\mathbb{Q}})(-1)$ (it is here that we use that the $D_m$ are smooth). The induced morphism \[ {\tilde{\imath}}^* {\tilde{\imath}}_*: \bigoplus_m H^0(D_m({\mathbb{C}}),{\mathbb{Q}}) \longrightarrow \bigoplus_m H^2(D_m({\mathbb{C}}),{\mathbb{Q}})(1) \] corresponds to the intersection pairing on the components of $D$. This pairing is well known to be negative definite \cite[p.~6]{M}. In particular, it is non-degenerate. \end{Proofof} \begin{Rem} \label{1D} The analogue of Theorem~\ref{1A} holds for $\ell$-adic cohomology, and when $k$ is a finite field of characteristic unequal to $\ell$. The proof is exactly the same. Note that by Abhyankar's result on resolution of singularities in dimension two \cite[Theorem]{L2}, $X^*$ can be desingularized for \emph{any} base field $k$. In addition (see the discussion in \cite[pp.~191--194]{L1}), by further blowing up possible singularities of (the components of) the pre-image $D$ of $Z$, it can be assumed to be a divisor with normal crossings, whose irreducible components are smooth. This discussion also shows that the system of such resolutions is filtering. \end{Rem} \bigskip \section{Construction of the intersection motive} \label{2} Fix a base field $k$, and assume given a proper surface $\overline{X}$ over $k$. The aim of this section is to recall the construction of the \emph{Chow motive} modelling intersection cohomo\-logy of $\overline{X}$, and to study its functoriality properties. The discussion preceding Theorem~\ref{1A} showed that intersection cohomology is invariant under passage to the normalization $\mathop{X^*} \nolimits$ of $\overline{X}$; the same should thus be expected from the motive we intend to construct. 
\footnotemark \footnotetext{ This principle also explains why the problem of constructing the intersection motive of a proper curve $\overline{C}$ is not very interesting: the intersection motive of $\overline{C}$ is equal to the motive of the normalization $C^*$ of $\overline{C}$ (which is smooth and projective).} Fix \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r]^{i} & Z \\}} \] where $i$ is a closed immersion of a finite sub-scheme $Z$, with smooth complement $X$. Choose a resolution of singularities. More precisely, consider in addition the following diagram, assumed to be cartesian: \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] \ar@{=}[d] & \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r]^{\tilde{\imath}} \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r]^{i} & Z \\}} \] where $\pi$ is proper (and birational), $\mathop{\widetilde{X}} \nolimits$ is smooth (and proper), and $D$ is a divisor with normal crossings, whose irreducible components $D_m$ are smooth (and proper). \\ \begin{Rem} \label{2a} Note that $\mathop{\widetilde{X}} \nolimits$, as a smooth and proper surface, is projective: Zariski proved this result for algebraically closed base fields in \cite[p.~54]{Z}, and \cite[Cor.~7.7]{SGA} allows to descend to arbitrary base fields. \end{Rem} \medskip Theorem~\ref{1A} suggests how to construct the intersection motive; in particular, it should be a canonical direct complement of $\oplus_m h^2(D_m)$ in $h(\mathop{\widetilde{X}} \nolimits)$. Recall \cite[Sect.~1.13]{Sch} that the $h^2(D_m)$ are canonically defined as quotient objects of the motives $h(D_m)$. Hence there is a canonical morphism \[ {\tilde{\imath}}^*: h(\mathop{\widetilde{X}} \nolimits) \longrightarrow \bigoplus_m h(D_m) \ontoover{\ } \bigoplus_m h^2(D_m) \] of Chow motives. 
Similarly \cite[Sect.~1.11]{Sch}, there is a canonical morphism \[ {\tilde{\imath}}_*: \bigoplus_m h^0(D_m)(-1) \lhook\joinrel\longrightarrow \bigoplus_m h(D_m)(-1) \longrightarrow h(\mathop{\widetilde{X}} \nolimits) \; . \] Here, the twist by $(-1)$ denotes the tensor product with the Lefschetz motive ${\mathbb{L}}=h^2({\mathbb{P}}^1)$. The following is a special case of \cite[Sect.~2.5]{CM}. \begin{Thm} \label{2B} (i) The composition $\alpha := {\tilde{\imath}}^*{\tilde{\imath}}_*$ is an isomorphism of Chow motives. \\[0.1cm] (ii) The composition $p:= {\tilde{\imath}}_*\alpha^{-1}{\tilde{\imath}}^*$ is an idempotent on $h(\mathop{\widetilde{X}} \nolimits)$. Hence so is the difference ${\rm id}_{\mathop{\widetilde{X}} \nolimits}-p$. \\[0.1cm] (iii) The image $\mathop{{\rm im}}\nolimits p$ is canonically isomorphic to $\oplus_m h^2(D_m)$. \end{Thm} \begin{Proof} (ii) and (iii) are formal consequences of (i). The formula ``$\varphi_* \varphi^* = \deg \varphi$'' for finite morphisms $\varphi$ \cite[Sect.~1.10]{Sch} shows that we may prove our claim after a finite extension of our ground field $k$. In particular, we may assume that all components $D_m$ are geometrically irreducible, with field of constants equal to $k$. We then have canonical isomorphisms $h^0(D_m) \cong h(\mathop{{\rm Spec}}\nolimits k)$ and $h^2(D_m) \cong {\mathbb{L}}$. Denote by $i_m$ the closed immersion of $D_m$ into $\mathop{\widetilde{X}} \nolimits$. The map $\alpha$ in question equals \[ \bigoplus_{m,n} \; i_m^* i_{n,*}: \bigoplus_n h^0(D_n)(-1) \longrightarrow \bigoplus_m h^2(D_m) \; . \] For each pair $(m,n)$, the composition $i_m^* i_{n,*}$ is an endomorphism of ${\mathbb{L}}$. Now the degree map induces an isomorphism \[ \mathop{\rm End}\nolimits ({\mathbb{L}}) = CH^0 (\mathop{{\rm Spec}}\nolimits k) \arrover{\sim} {\mathbb{Q}} \; . \] We leave it to the reader to show that under this isomorphism, the endomorphism $i_m^* i_{n,*}$ is mapped to the intersection number $D_n \cdot D_m$. 
Our claim follows from the non-degeneracy of the intersection pairing on the components of $D$ \cite[p.~6]{M}. \end{Proof} Following \cite[p.~158]{CM}, we propose the following definition. \begin{Def} \label{2C} The \emph{intersection motive} of $\overline{X}$ is defined as \[ h_{!*} (\overline{X}) := (\mathop{\widetilde{X}} \nolimits,{\rm id}_{\mathop{\widetilde{X}} \nolimits}-p,0) \in CHM(k)_{{\mathbb{Q}}} \; . \] \end{Def} Here, we follow the standard notation for Chow motives (see e.g.\ \cite[Sect.~1.4]{Sch}). Idempotents on Chow motives admit an image; by definition, the image of the idempotent ${\rm id}_{\mathop{\widetilde{X}} \nolimits}-p$ on the Chow motive $(\mathop{\widetilde{X}} \nolimits,{\rm id}_{\mathop{\widetilde{X}} \nolimits},0)=h(\mathop{\widetilde{X}} \nolimits)$ is $(\mathop{\widetilde{X}} \nolimits,{\rm id}_{\mathop{\widetilde{X}} \nolimits}-p,0)=h_{!*} (\overline{X})$. Note that by definition, we have the equality $h_{!*} (\overline{X})=h_{!*} (\mathop{X^*} \nolimits)$. \\ Theorem~\ref{2B} shows that there is a canonical decomposition \[ h(\mathop{\widetilde{X}} \nolimits) = h_{!*} (\overline{X}) \oplus \bigoplus_m h^2(D_m) \] in $CHM(k)_{{\mathbb{Q}}}$. By Theorem~\ref{1A} and Remark~\ref{1D}, the Betti, resp.\ $\ell$-adic realization of the intersection motive (for the base fields for which this realization exists) coincides with intersection cohomology of $\overline{X}$ (and of $\mathop{X^*} \nolimits$). \begin{Prop} \label{2D} As before, denote by $\mathop{X^*} \nolimits$ the normalization of $\overline{X}$. The definition of $h_{!*} (\overline{X})$ is independent of the choices of the finite sub-scheme $Z$ containing the singularities of $\mathop{X^*} \nolimits$, and of the desingularization $\mathop{\widetilde{X}} \nolimits$ of $\mathop{X^*} \nolimits$. \end{Prop} This statement is going to be proved together with the functoriality pro\-perties of the intersection motive, whose formulation we prepare now.
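\begin{Rem} To fix ideas, here is how the construction plays out in the simplest singular example (the notation $Q$ and $E$ is ours, and not used elsewhere in this text): let $Q = V(xy - z^2) \subset {\mathbb{P}}^3_k$ be the projective quadric cone. It is normal, with a single $A_1$-singularity at the vertex. Blowing up the vertex yields the Hirzebruch surface $F_2$, with exceptional curve $E \cong {\mathbb{P}}^1$ of self-intersection $E \cdot E = -2$. Under the degree isomorphism $\mathop{\rm End}\nolimits ({\mathbb{L}}) \cong {\mathbb{Q}}$, the morphism $\alpha = {\tilde{\imath}}^* {\tilde{\imath}}_*$ is multiplication by $-2$, hence invertible, and \[ h(F_2) = h_{!*} (Q) \oplus h^2(E) \; . \] In the Betti realization, the Betti numbers $(1,0,2,0,1)$ of $F_2$ thus decompose into $(1,0,1,0,1)$, i.e., the intersection cohomology of $Q$, plus the class of $E$ in degree two. \end{Rem}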
Consider a dominant morphism $f:\overline{X} \to \overline{Y}$ of proper surfaces over $k$. By the universal property of the normalization $\mathop{Y^*} \nolimits$ of $\overline{Y}$, it induces a morphism, still denoted $f$, between $\mathop{X^*} \nolimits$ and $\mathop{Y^*} \nolimits$. It is generically finite. Hence we can find a finite closed subscheme $W$ of $\mathop{Y^*} \nolimits$ containing the singularities, and such that the pre-image under $f$ of $Y := \mathop{Y^*} \nolimits - W$ is dense, and smooth. The closed sub-scheme $f^{-1}(W)$ of $\mathop{X^*} \nolimits$ contains the singularities of $\mathop{X^*} \nolimits$. We thus can find a morphism $F$ of desingularizations of $\mathop{X^*} \nolimits$ and $\mathop{Y^*} \nolimits$ of the type considered before: \[ \vcenter{\xymatrix@R-10pt{ \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r]^{i_D} \ar[d]_F & D \ar[d]^F \\ \mathop{\widetilde{Y}} \nolimits \ar@{<-^{ )}}[r]^{i_C} & C \\}} \] This means that $\mathop{\widetilde{X}} \nolimits$ and $\mathop{\widetilde{Y}} \nolimits$ are smooth, and $D$ and $C$ are divisors with normal crossings, whose irreducible components $D_m$ resp.\ $C_n$ are smooth, and lying over finite closed sub-schemes of $\mathop{X^*} \nolimits$ and $\mathop{Y^*} \nolimits$, respectively. Choose and fix such a dia\-gram. Note that if the original morphism $f:\overline{X} \to \overline{Y}$ is finite, then the diagram $F$ can be chosen to be cartesian. \begin{Prop} \label{2E} (i) The pull-back $F^*: h(\mathop{\widetilde{Y}} \nolimits) \to h(\mathop{\widetilde{X}} \nolimits)$ maps the sub-object $h_{!*}(\overline{Y})$ of $h(\mathop{\widetilde{Y}} \nolimits)$ to the sub-object $h_{!*}(\overline{X})$ of $h(\mathop{\widetilde{X}} \nolimits)$. \\[0.1cm] (ii) The push-forward $F_*: h(\mathop{\widetilde{X}} \nolimits) \to h(\mathop{\widetilde{Y}} \nolimits)$ maps the quotient $h_{!*}(\overline{X})$ of $h(\mathop{\widetilde{X}} \nolimits)$ to the quotient $h_{!*}(\overline{Y})$ of $h(\mathop{\widetilde{Y}} \nolimits)$.
\\[0.1cm] (iii) The composition $F_*F^*: h_{!*}(\overline{Y}) \to h_{!*}(\overline{Y})$ equals multiplication by the degree of $f$. \\[0.1cm] (iv) If $f$ is finite, and if the morphism $F$ is chosen to be cartesian, then both $F^*$ and $F_*$ respect the decompositions \[ h(\mathop{\widetilde{Y}} \nolimits) = h_{!*} (\overline{Y}) \oplus \bigoplus_n h^2(C_n) \] and \[ h(\mathop{\widetilde{X}} \nolimits) = h_{!*} (\overline{X}) \oplus \bigoplus_m h^2(D_m) \] of $h(\mathop{\widetilde{Y}} \nolimits)$ and of $h(\mathop{\widetilde{X}} \nolimits)$, respectively. \end{Prop} \begin{Proof} By definition, there are (split) exact sequences \[ 0 \longrightarrow h_{!*} (\overline{X}) \longrightarrow h(\mathop{\widetilde{X}} \nolimits) \stackrel{i_D^*}{\longrightarrow} \bigoplus_m h^2(D_m) \longrightarrow 0 \] and \[ 0 \longrightarrow \bigoplus_m h^0(D_m)(-1) \stackrel{i_{D,*}}{\longrightarrow} h(\mathop{\widetilde{X}} \nolimits) \longrightarrow h_{!*} (\overline{X}) \longrightarrow 0 \; ; \] similarly for $\mathop{\widetilde{Y}} \nolimits$ and $C$. Obviously, the first sequence is contravariant, and the second is covariant. This proves parts (i) and (ii). Part (iii) follows from this, and from the corresponding formula for $F_*F^*$ on the motive of $\mathop{\widetilde{Y}} \nolimits$ \cite[Sect.~1.10]{Sch}; note that the degree of $F$ equals the one of $f$. If $F$ is cartesian, then the above sequences are both co- and contravariant thanks to the base change formulae $F_*i_D^* = i_C^*F_*$ and $F^*i_{C,*} = i_{D,*}F^*$. This proves part (iv). \end{Proof} \begin{Proofof}{Proposition~\ref{2D}} First, let us show that for a fixed choice of $Z$, the definition of $h_{!*} (\overline{X})$ is independent of the choice of the desingularization $\mathop{\widetilde{X}} \nolimits$ of $\mathop{X^*} \nolimits$. Using that the system of such desingularizations is filtering, we reduce ourselves to the situation considered in Proposition~\ref{2E}, with $f={\rm id}$.
We thus have a cartesian diagram \[ \vcenter{\xymatrix@R-10pt{ \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r]^{i_D} \ar[d]_F & D \ar[d]^F \\ \mathop{\widetilde{X}} \nolimits' \ar@{<-^{ )}}[r]^{i_C} & C \\}} \] Let us denote by $h_{!*} (\overline{X})$ and $h_{!*}' (\overline{X})$ the two intersection motives formed with respect to $\mathop{\widetilde{X}} \nolimits$ and $\mathop{\widetilde{X}} \nolimits'$, respectively. We want to show that $F^*: h_{!*}' (\overline{X}) \to h_{!*}(\overline{X})$ is an isomorphism. The scheme $\mathop{\widetilde{X}} \nolimits'$ is normal, and the morphism $F$ is proper. By the valuative criterion of properness, the locus of points of $\mathop{\widetilde{X}} \nolimits'$ where $F^{-1}$ is not defined is of dimension zero. Let $P$ be a point in this locus. If the fibre over $P$ were finite, then $F$ would be quasi-finite near $P$. Since it is proper, it would be finite. But since both its source and target are normal, it would be an isomorphism near $P$, contrary to our assumption. This shows that the fibre over $P$ is of dimension one. Since the fibre is connected \cite[Cor.~(4.3.12)]{EGA3}, it is pure of dimension one, i.e., it is a divisor. By the universal property of the blow-up, $\mathop{\widetilde{X}} \nolimits$ dominates the blow-up of $\mathop{\widetilde{X}} \nolimits'$ in the points $P_1,\ldots,P_r$ where $F$ is not an isomorphism. This blow-up lies between $\mathop{\widetilde{X}} \nolimits$ and $\mathop{\widetilde{X}} \nolimits'$, and satisfies the same conditions on desingularizations. Repeating this argument and using the fact that $\mathop{\widetilde{X}} \nolimits$ is Noetherian, one sees that this process stops at some point; $F$ is therefore the composition of blow-ups in points. By induction, we may assume that $F$ equals the blow-up of $\mathop{\widetilde{X}} \nolimits'$ in one point $P$. The exceptional divisor $E := F^{-1}(P)$ is a projective bundle (of rank one) over $P$. 
It is also one of the irreducible components $D_m$ of $D$; in fact, the morphism $F$ induces a bijection between the components of $D$ other than $E$ and the components $C_n$ of $C$. Denote by $i_E$ the closed immersion of $E$ into $\mathop{\widetilde{X}} \nolimits$. By Manin's computation of the motive of a blow-up \cite[Thm.~2.8]{Sch}, the sequence \[ 0 \longrightarrow h(\mathop{\widetilde{X}} \nolimits') \stackrel{F^*}{\longrightarrow} h(\mathop{\widetilde{X}} \nolimits) \stackrel{i_E^*}{\longrightarrow} h^2(E) \longrightarrow 0 \] is (split) exact. But obviously, so is \[ 0 \longrightarrow \bigoplus_n h^2(C_n) \stackrel{F^*}{\longrightarrow} \bigoplus_m h^2(D_m) \stackrel{i_E^*}{\longrightarrow} h^2(E) \longrightarrow 0 \; . \] Hence $F^*$ maps the kernel $h_{!*}' (\overline{X})$ of $i_C^*$ isomorphically to the kernel $h_{!*} (\overline{X})$ of $i_D^*$. In the same way, one shows that enlarging $Z$ by adding non-singular points of $X^*$ does not change the value of $h_{!*} (\overline{X})$. \end{Proofof} Recall the definition of the \emph{dual} of a Chow motive \cite[Sect.~1.15]{Sch}. For example, for any desingularization $\mathop{\widetilde{X}} \nolimits$ of $X^*$, the dual of $(\mathop{\widetilde{X}} \nolimits,{\rm id}_{\mathop{\widetilde{X}} \nolimits},0)=h(\mathop{\widetilde{X}} \nolimits)$ is given by $(\mathop{\widetilde{X}} \nolimits,{\rm id}_{\mathop{\widetilde{X}} \nolimits},2)=h(\mathop{\widetilde{X}} \nolimits)(2)$. \begin{Prop} \label{2F} The dual of the intersection motive $h_{!*} (\overline{X})$ is canonically isomorphic to $h_{!*} (\overline{X})(2)$. \end{Prop} \begin{Proof} By definition, the dual of $(\mathop{\widetilde{X}} \nolimits,{\rm id}_{\mathop{\widetilde{X}} \nolimits}-p,0)$ equals $(\mathop{\widetilde{X}} \nolimits,{ }^t({\rm id}_{\mathop{\widetilde{X}} \nolimits}-p),2)$, where ${ }^t$ denotes the transposition of cycles in $\mathop{\widetilde{X}} \nolimits \times \mathop{\widetilde{X}} \nolimits$. 
But $p$ is symmetric: in fact, ${}^t({\tilde{\imath}}^*) = {\tilde{\imath}}_*$, and ${}^t({\tilde{\imath}}_*) = {\tilde{\imath}}^*$. One checks as in the proof of Proposition~\ref{2D} that this identification of $h_{!*} (\overline{X})^*$ with $h_{!*} (\overline{X})(2)$ does not depend on the choice of $\mathop{\widetilde{X}} \nolimits$. \end{Proof} \bigskip \section{The K\"unneth filtration of the intersection motive} \label{3} We continue to consider the situation of Section~\ref{2}. Thus, $\overline{X}$ is a proper surface over the base field $k$ with normalization $\mathop{X^*} \nolimits$, and we fix \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r]^{i} & Z \\}} \] where $i$ is a closed immersion of a finite sub-scheme $Z$, with smooth complement $X$. In addition, we consider the following cartesian diagram: \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] \ar@{=}[d] & \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r]^{\tilde{\imath}} \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r]^{i} & Z \\}} \] where $\pi$ is proper, $\mathop{\widetilde{X}} \nolimits$ is smooth and proper (hence projective), and $D$ is a divisor with normal crossings, whose irreducible components $D_m$ are smooth. The aim of this section is to recall Murre's construction of \emph{K\"unneth decompositions} of the motive of $\mathop{\widetilde{X}} \nolimits$ \cite{Mr}, following Scholl's presentation \cite[Chap.~4]{Sch}, and to study the resulting filtration on the intersection motive. \\ Thus, fix (i)~a hyperplane section $C \subset \mathop{\widetilde{X}} \nolimits$ that is a smooth curve (observe that $C$ might only be defined over a finite extension $k'$ of $k$). As explained in \cite[Sect.~4.3]{Sch}, the embedding of $C$ into $\mathop{\widetilde{X}} \nolimits$ induces an isogeny $P \to J$ from the Picard variety to the Albanese variety of $\mathop{\widetilde{X}} \nolimits$. 
This isogeny is actually independent of the choice of the smooth curve $C$ representing the fixed very ample class in $CH^1(\mathop{\widetilde{X}} \nolimits)$ (and a non-zero multiple of the isogeny is defined over $k$). Fix (ii)~an isogeny $\beta: J \to P$ such that the composition of the two isogenies equals multiplication by $n > 0$. Finally, fix (iii)~a $0$-cycle $T$ of degree one on $C$. Then by \cite[Thm.~3.9]{Sch}, $\beta$ corresponds to a symmetric cycle class \[ \widetilde{\beta} \in CH^1 (\mathop{\widetilde{X}} \nolimits \times \mathop{\widetilde{X}} \nolimits) \] satisfying the condition $p_{\mathop{\widetilde{X}} \nolimits,*} (\widetilde{\beta} \cdot [\mathop{\widetilde{X}} \nolimits \times T]) = 0 \in CH^1 (\mathop{\widetilde{X}} \nolimits)$, where $p_{\mathop{\widetilde{X}} \nolimits}$ is the first projection from the product $\mathop{\widetilde{X}} \nolimits \times \mathop{\widetilde{X}} \nolimits$ to $\mathop{\widetilde{X}} \nolimits$. \\ One then defines \cite[Sect.~4.3]{Sch} projectors $\pi_0 := [T \times \mathop{\widetilde{X}} \nolimits]$ and $\pi_4 := { }^t \pi_0 = [\mathop{\widetilde{X}} \nolimits \times T]$, as well as $p_1 := \frac{1}{n} \widetilde{\beta} \cdot [C \times \mathop{\widetilde{X}} \nolimits]$ and $p_3 := { }^t p_1$. All orthogonality relations are satisfied, including $p_3 p_1 = 0$, except that $p_1 p_3$ is not necessarily equal to zero. This is why a modification is necessary: one puts $\pi_1 := p_1 - \frac{1}{2} p_1 p_3$ and $\pi_3 := { }^t \pi_1 = p_3 - \frac{1}{2} p_1 p_3$. \footnotemark \footnotetext{ This differs from Murre's original solution \cite[Rem.~6.5]{Mr}, where one takes $p_1 - p_1 p_3$ and $p_3$ instead of $\pi_1$ and $\pi_3$.} This, together with $\pi_2 := {\rm id}_{\mathop{\widetilde{X}} \nolimits} - \pi_0 - \pi_1 - \pi_3 - \pi_4$, gives a full auto-dual set of orthogonal projectors. 
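Indeed, restricting attention to the two modified projectors, and using only $p_1^2 = p_1$, $p_3^2 = p_3$ and $p_3 p_1 = 0$ (so that every term containing the factor $p_3 p_1$ vanishes), one computes \[ \pi_1^2 = p_1 - \tfrac{1}{2} \, p_1 p_3 = \pi_1 \; , \quad \pi_3^2 = p_3 - \tfrac{1}{2} \, p_1 p_3 = \pi_3 \; , \quad \pi_1 \pi_3 = p_1 p_3 - \tfrac{1}{2} \, p_1 p_3 - \tfrac{1}{2} \, p_1 p_3 = 0 \; , \quad \pi_3 \pi_1 = 0 \; . \]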
We thus get a K\"unneth decomposition of $h(\mathop{\widetilde{X}} \nolimits)$ (first over $k'$, then by pushing down, over $k$): \[ h(\mathop{\widetilde{X}} \nolimits) = {}'h^0(\mathop{\widetilde{X}} \nolimits) \oplus {}'h^1(\mathop{\widetilde{X}} \nolimits) \oplus {}'h^2(\mathop{\widetilde{X}} \nolimits) \oplus {}'h^3(\mathop{\widetilde{X}} \nolimits) \oplus {}'h^4(\mathop{\widetilde{X}} \nolimits) \; , \] with \[ {}'h^n(\mathop{\widetilde{X}} \nolimits) := (\mathop{\widetilde{X}} \nolimits, \pi_n, 0) \subset (\mathop{\widetilde{X}} \nolimits, {\rm id}_{\mathop{\widetilde{X}} \nolimits}, 0) = h(\mathop{\widetilde{X}} \nolimits) \; , \quad 0 \le n \le 4 \; . \] \begin{Def} \label{3a} (a)~The \emph{K\"unneth filtration of $h(\mathop{\widetilde{X}} \nolimits)$} is the ascending filtration of $h(\mathop{\widetilde{X}} \nolimits)$ by sub-motives induced by a K\"unneth decomposition of $h(\mathop{\widetilde{X}} \nolimits)$: \[ 0 \subset h^0(\mathop{\widetilde{X}} \nolimits) \subset h^{\le 1}(\mathop{\widetilde{X}} \nolimits) \subset h^{\le 2}(\mathop{\widetilde{X}} \nolimits) \subset h^{\le 3}(\mathop{\widetilde{X}} \nolimits) \subset h^{\le 4}(\mathop{\widetilde{X}} \nolimits) = h(\mathop{\widetilde{X}} \nolimits) \; , \] where we set $h^{\le r} (\mathop{\widetilde{X}} \nolimits) := \oplus_{n=0}^r {}'h^n(\mathop{\widetilde{X}} \nolimits)$, $r \le 4$. \\[0.1cm] (b)~The $n$-th \emph{K\"unneth component of $h(\mathop{\widetilde{X}} \nolimits)$}, $0 \le n \le 4$, is the sub-quotient of $h(\mathop{\widetilde{X}} \nolimits)$ defined by \[ h^n(\mathop{\widetilde{X}} \nolimits) := h^{\le n}(\mathop{\widetilde{X}} \nolimits) / h^{\le n-1}(\mathop{\widetilde{X}} \nolimits) \; . \] \end{Def} \begin{Rem} The sub-objects $h^{\le n}(\mathop{\widetilde{X}} \nolimits)$ are direct factors of $h(\mathop{\widetilde{X}} \nolimits)$, hence the sub-quotients $h^n(\mathop{\widetilde{X}} \nolimits)$ exist. 
Similarly, one may define the quotients \[ h^{\ge r} (\mathop{\widetilde{X}} \nolimits) := h(\mathop{\widetilde{X}} \nolimits) / h^{\le {r-1}} (\mathop{\widetilde{X}} \nolimits) \] of $h(\mathop{\widetilde{X}} \nolimits)$. \end{Rem} Note that a number of choices are involved in the construction of the projectors $\pi_0,\ldots,\pi_4$: mainly, a very ample line bundle ${\cal L}$ on $\mathop{\widetilde{X}} \nolimits$, and a $0$-cycle on a smooth curve in the divisor class corresponding to ${\cal L}$. The following is the content of \cite[Thm.~14.3.10~i)]{KMrP}. \begin{Prop} \label{3b} The K\"unneth filtration of $h(\mathop{\widetilde{X}} \nolimits)$ is independent of the choices made in the construction of the K\"unneth decomposition. \end{Prop} \begin{Rem} \label{3c} (a)~In particular, the K\"unneth components $h^n(\mathop{\widetilde{X}} \nolimits)$ are ca\-nonically defined sub-quotients of $h(\mathop{\widetilde{X}} \nolimits)$. \\[0.1cm] (b)~\emph{A posteriori}, one may define the notion of K\"unneth decomposition of $h(\mathop{\widetilde{X}} \nolimits)$ as being a decomposition splitting the K\"unneth filtration. Such decompositions include the ones obtained by Murre's construction, but there could be others. \end{Rem} Our aim (see Theorem~\ref{3C}) is to deduce from the K\"unneth filtration of $h(\mathop{\widetilde{X}} \nolimits)$ a filtration of the intersection motive $h_{!*}(\overline{X}) \subset h(\mathop{\widetilde{X}} \nolimits)$: \[ 0 \subset h^0_{!*}(\overline{X}) \subset h^{\le 1}_{!*}(\overline{X}) \subset h^{\le 2}_{!*}(\overline{X}) \subset h^{\le 3}_{!*}(\overline{X}) \subset h^{\le 4}_{!*}(\overline{X}) = h_{!*}(\overline{X}) \; . \] The idea is of course to take the ``induced'' filtration. But since we are working in a category which is only pseudo-Abelian, we need to proceed with some care. Recall the quotient $\oplus_m h^2 (D_m)$ and the sub-object $\oplus_m h^0 (D_m)$ of $\oplus_m h (D_m)$.
\begin{Prop} \label{3B} The K\"unneth filtration of $h(\mathop{\widetilde{X}} \nolimits)$ satisfies the following conditions. \begin{enumerate} \item[(1)] Duality $h(\mathop{\widetilde{X}} \nolimits)^{\vee} \arrover{\sim} h(\mathop{\widetilde{X}} \nolimits)(2)$ induces isomorphisms \[ h^{\le r}(\mathop{\widetilde{X}} \nolimits)^{\vee} \arrover{\sim} h^{\ge 4-r}(\mathop{\widetilde{X}} \nolimits)(2) \; . \] \item[(2)] The composition of morphisms \[ h^{\le 1}(\mathop{\widetilde{X}} \nolimits) \lhook\joinrel\longrightarrow h(\mathop{\widetilde{X}} \nolimits) \stackrel{{{\tilde{\imath}}}^*}{\longrightarrow} \bigoplus_m h (D_m) \ontoover{\ } \bigoplus_m h^2 (D_m) \] equals zero. \end{enumerate} \end{Prop} \begin{Proof} The K\"unneth filtration satisfies (1) since the decompositions obtained by Murre's construction are auto-dual: ${}'h^n(\mathop{\widetilde{X}} \nolimits)^{\vee} \cong {}'h^{4-n}(\mathop{\widetilde{X}} \nolimits)(2)$ under the duality $h(\mathop{\widetilde{X}} \nolimits)^{\vee} \cong h(\mathop{\widetilde{X}} \nolimits)(2)$. By \cite[Prop.~5.8]{J}, condition~(2) is a consequence of Murre's Conjecture~B \cite[Sect.~1.4]{Mr2} on the triviality of the action of the $\ell$-th K\"unneth projector on $CH^j (Y)$, for $\ell > 2j$. Here, $Y$ equals the product of $\mathop{\widetilde{X}} \nolimits$ and $D_m$, $j=2$, and $\ell = 5, 6$. Note that for products of a surface and a curve, the conjecture is known to hold (see \cite[Lemma~8.3.2]{Mr3} for the case $j=2$). But since the argument proving (2) is rather explicit, we may just as well give it for the convenience of the reader. We need to compute the composition of correspondences \[ h(\mathop{\widetilde{X}} \nolimits) \stackrel{\pi_n}{\longrightarrow} h(\mathop{\widetilde{X}} \nolimits) \stackrel{{{\tilde{\imath}}}^*}{\longrightarrow} \bigoplus_m h (D_m) \stackrel{pr}{\ontoover{\ }} \bigoplus_m h^2 (D_m) \; , \] for $n = 0, 1$. 
The composition is zero if and only if it is zero after base change to a finite field extension. Hence we may assume that all $D_m$ are geometrically irreducible, with field of constants $k$. Then the $h^2 (D_m)$ equal ${\mathbb{L}}$, and the composition $pr \circ {{\tilde{\imath}}}^*$ corresponds to the cycle class \[ ([D_m])_m \in \bigoplus_m CH^1(\mathop{\widetilde{X}} \nolimits) \] on $\coprod_m \mathop{\widetilde{X}} \nolimits \times \mathop{{\rm Spec}}\nolimits k$. By definition of the composition of correspondences, we then find \[ pr \circ {{\tilde{\imath}}}^* \circ \pi = \bigl( p_{\mathop{\widetilde{X}} \nolimits,*}(\pi \cdot [\mathop{\widetilde{X}} \nolimits \times D_m]) \bigr)_m \in \bigoplus_m CH^1(\mathop{\widetilde{X}} \nolimits) \; , \] for any $\pi \in CH^2(\mathop{\widetilde{X}} \nolimits \times \mathop{\widetilde{X}} \nolimits)$. Here as before, $p_{\mathop{\widetilde{X}} \nolimits}$ is the first projection from the product $\mathop{\widetilde{X}} \nolimits \times \mathop{\widetilde{X}} \nolimits$ to $\mathop{\widetilde{X}} \nolimits$. Let us fix $m$. We need to show that for $n = 0, 1$, the cycle class \[ p_{\mathop{\widetilde{X}} \nolimits,*}(\pi_n \cdot [\mathop{\widetilde{X}} \nolimits \times D_m]) \in CH^1(\mathop{\widetilde{X}} \nolimits) \] is zero. For $n = 0$, this is easy: the intersection \[ \pi_0 \cdot [\mathop{\widetilde{X}} \nolimits \times D_m] = [T \times \mathop{\widetilde{X}} \nolimits] \cdot [\mathop{\widetilde{X}} \nolimits \times D_m] = [T \times D_m] \] has one-dimensional fibres under $p_{\mathop{\widetilde{X}} \nolimits}$. Therefore, its push-forward under $p_{\mathop{\widetilde{X}} \nolimits}$ is zero. For $n=1$, observe first that by definition of $\pi_1$, and by associativity of composition of correspondences, it suffices to show that \[ p_{\mathop{\widetilde{X}} \nolimits,*}(p_1 \cdot [\mathop{\widetilde{X}} \nolimits \times D_m]) = 0 \; . 
\] By definition, the intersection $p_1 \cdot [\mathop{\widetilde{X}} \nolimits \times D_m]$ is a non-zero multiple of \[ \widetilde{\beta} \cdot [C \times \mathop{\widetilde{X}} \nolimits] \cdot [\mathop{\widetilde{X}} \nolimits \times D_m] \; . \] By the projection formula, the image under $p_{\mathop{\widetilde{X}} \nolimits,*}$ of this cycle equals the image under the push-forward $CH^0(C) \to CH^1(\mathop{\widetilde{X}} \nolimits)$ of \[ p_{1,*} ( \widetilde{\beta}_C \cdot [C \times D_m] ) \; , \] where $\widetilde{\beta}_C$ denotes the pull-back of $\widetilde{\beta}$ to $C \times \mathop{\widetilde{X}} \nolimits$, and $p_1$ the projection from $C \times \mathop{\widetilde{X}} \nolimits$ to $C$. Denote by $p_2$ the projection from this product to $\mathop{\widetilde{X}} \nolimits$. Now symmetry of $\widetilde{\beta}$ and the condition $p_{\mathop{\widetilde{X}} \nolimits,*} (\widetilde{\beta} \cdot [\mathop{\widetilde{X}} \nolimits \times T]) = 0$ imply that \[ p_{2,*}(\widetilde{\beta}_C \cdot [T \times \mathop{\widetilde{X}} \nolimits]) = 0 \in CH^1(\mathop{\widetilde{X}} \nolimits) \; . \] It follows that \[ p_{2,*}(\widetilde{\beta}_C \cdot [T \times D_m]) = 0 \in CH^1(D_m) \; . \] In particular, the degree $a$ of this $0$-cycle is zero. But since $T$ is of degree one, we have \[ p_{1,*} ( \widetilde{\beta}_C \cdot [C \times D_m] ) = a [C] \in CH^0(C) \; .
\] \end{Proof} Given that duality $h(D_m)^{\vee} \arrover{\sim} h(D_m)(1)$ induces an isomorphism \[ h^0(D_m)^{\vee} \arrover{\sim} h^2(D_m)(1) \; , \] it is easy to see that the morphism ${\tilde{\imath}}_*$ dual to the one from condition~(2) \[ \bigoplus_m h^0 (D_m) \lhook\joinrel\longrightarrow \bigoplus_m h (D_m) \stackrel{{\tilde{\imath}}_*}{\longrightarrow} h(\mathop{\widetilde{X}} \nolimits)(1) \ontoover{\ } h^{\ge 3}(\mathop{\widetilde{X}} \nolimits)(1) \] is zero, i.e., the map ${\tilde{\imath}}_*: \oplus_m h^0 (D_m) \to h(\mathop{\widetilde{X}} \nolimits)(1)$ factors through the sub-motive $h^{\le 2}(\mathop{\widetilde{X}} \nolimits)(1)$. On the other hand, by condition~(2), the inverse image ${{\tilde{\imath}}}^*: h(\mathop{\widetilde{X}} \nolimits) \to \oplus_m h^2 (D_m)$ factors through the quotient motive $h^{\ge 2}(\mathop{\widetilde{X}} \nolimits)$. It follows that the composition \[ \alpha = {{\tilde{\imath}}}^* {\tilde{\imath}}_* : \bigoplus_m h^0 (D_m)(-1) \longrightarrow \bigoplus_m h^2 (D_m) \] considered in Section~\ref{2} factors naturally through $h^2(\mathop{\widetilde{X}} \nolimits)$. By Theorem~\ref{2B}~(i), the morphism $\alpha$ is an isomorphism. \begin{Def} \label{3d} Define the motive $h^2_{!*}(\overline{X})$ as the kernel of \[ {\tilde{\imath}}_* \alpha^{-1} {{\tilde{\imath}}}^* : h^2(\mathop{\widetilde{X}} \nolimits) \longrightarrow h^2(\mathop{\widetilde{X}} \nolimits) \; . \] \end{Def} Note that ${\tilde{\imath}}_* \alpha^{-1} {{\tilde{\imath}}}^*$ is an idempotent on $h^2(\mathop{\widetilde{X}} \nolimits)$; it therefore admits a kernel. Its image is of course canonically isomorphic (via ${{\tilde{\imath}}}^*$) to $\oplus_m h^2 (D_m)$. Dually, the image of the projector ${\rm id}_{h^2(\mathop{\widetilde{X}} \nolimits)} - {\tilde{\imath}}_* \alpha^{-1} {{\tilde{\imath}}}^*$ is $h^2_{!*}(\overline{X})$. Its kernel is canonically isomorphic (via ${{\tilde{\imath}}}_*$) to $\oplus_m h^0 (D_m)(-1)$. 
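For later use, let us note that idempotency of ${\tilde{\imath}}_* \alpha^{-1} {{\tilde{\imath}}}^*$ is verified by a one-line computation, using the identity ${{\tilde{\imath}}}^* {\tilde{\imath}}_* = \alpha$ on $\oplus_m h^0 (D_m)(-1)$:
\[
\bigl( {\tilde{\imath}}_* \alpha^{-1} {{\tilde{\imath}}}^* \bigr) \circ \bigl( {\tilde{\imath}}_* \alpha^{-1} {{\tilde{\imath}}}^* \bigr) = {\tilde{\imath}}_* \alpha^{-1} \bigl( {{\tilde{\imath}}}^* {\tilde{\imath}}_* \bigr) \alpha^{-1} {{\tilde{\imath}}}^* = {\tilde{\imath}}_* \alpha^{-1} {{\tilde{\imath}}}^* \; .
\]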
\begin{Rem} In \cite[Sect.~14.2.2]{KMrP}, the \emph{transcendental part} $t^2(\mathop{\widetilde{X}} \nolimits)$ of the motive of the surface $\mathop{\widetilde{X}} \nolimits$ is defined, as a complement in $h^2(\mathop{\widetilde{X}} \nolimits)$ of the algebraic, i.e., ``N\'eron--Severi''-part $h^2(\mathop{\widetilde{X}} \nolimits)_{\mathop{\rm alg}\nolimits}$. It follows that under the projection from $h^2(\mathop{\widetilde{X}} \nolimits)$, the transcendental part $t^2(\mathop{\widetilde{X}} \nolimits)$ maps monomorphically to $h^2_{!*}(\overline{X})$. \end{Rem} By condition~(2) from Proposition~\ref{3B}, the projector $p = {\tilde{\imath}}_*\alpha^{-1}{\tilde{\imath}}^*$ on $h(\mathop{\widetilde{X}} \nolimits)$ used to define $h_{!*}(\overline{X})$ gives rise to compatible factorizations \[ p^{\ge r} := {\tilde{\imath}}_*\alpha^{-1}{\tilde{\imath}}^* : h^{\ge r}(\mathop{\widetilde{X}} \nolimits) \longrightarrow h^{\ge r}(\mathop{\widetilde{X}} \nolimits) \; , \; r \le 2 \] and \[ p^{\le r} := {\tilde{\imath}}_*\alpha^{-1}{\tilde{\imath}}^* : h^{\le r}(\mathop{\widetilde{X}} \nolimits) \longrightarrow h^{\le r}(\mathop{\widetilde{X}} \nolimits) \; , \; r \ge 2 \; , \] all of which are again idempotent. Consequently, we get (split) exact sequences of motives \[ 0 \longrightarrow h^{\le 1}(\mathop{\widetilde{X}} \nolimits) \longrightarrow \ker (p^{\le 2}) \longrightarrow h^2_{!*}(\overline{X}) \longrightarrow 0 \; , \] \[ 0 \longrightarrow \ker (p^{\le 2}) \longrightarrow \ker (p^{\le 3}) \longrightarrow h^3(\mathop{\widetilde{X}} \nolimits) \longrightarrow 0 \] etc. 
\begin{Thm} \label{3C} (i)~The K\"unneth filtration of $h(\mathop{\widetilde{X}} \nolimits)$ \[ 0 \subset h^0(\mathop{\widetilde{X}} \nolimits) \subset h^{\le 1}(\mathop{\widetilde{X}} \nolimits) \subset h^{\le 2}(\mathop{\widetilde{X}} \nolimits) \subset h^{\le 3}(\mathop{\widetilde{X}} \nolimits) \subset h^{\le 4}(\mathop{\widetilde{X}} \nolimits) = h(\mathop{\widetilde{X}} \nolimits) \] induces a filtration of the intersection motive $h_{!*}(\overline{X})$ \[ 0 \subset h^0_{!*}(\overline{X}) \subset h^{\le 1}_{!*}(\overline{X}) \subset h^{\le 2}_{!*}(\overline{X}) \subset h^{\le 3}_{!*}(\overline{X}) \subset h^{\le 4}_{!*}(\overline{X}) = h_{!*}(\overline{X}) \; . \] It is uniquely defined by the following property: both the canonical projection from $h(\mathop{\widetilde{X}} \nolimits)$ to $h_{!*} (\overline{X})$ and the canonical inclusion of $h_{!*} (\overline{X})$ into $h(\mathop{\widetilde{X}} \nolimits)$ are morphisms of filtered motives. The filtration is split in the sense that all $h^{\le r}_{!*}(\overline{X})$ admit direct complements in $h_{!*}(\overline{X})$. In particular, the quotients \[ h^{\ge r}_{!*}(\overline{X}) := h_{!*}(\overline{X}) / h^{\le {r-1}}_{!*}(\overline{X}) \] of $h_{!*}(\overline{X})$ exist. \\[0.1cm] (ii)~The filtration of $h_{!*}(\overline{X})$ is independent of the choice of desingularization $\mathop{\widetilde{X}} \nolimits$. \\[0.1cm] (iii)~Duality $h_{!*}(\overline{X})^{\vee} \arrover{\sim} h_{!*}(\overline{X})(2)$ (Proposition~\ref{2F}) induces isomorphisms \[ h^{\le r}_{!*}(\overline{X})^{\vee} \arrover{\sim} h^{\ge 4-r}_{!*}(\overline{X})(2) \; . \] \end{Thm} \begin{Proof} Define \[ h^{\le r}_{!*}(\overline{X}) := h^{\le r}(\mathop{\widetilde{X}} \nolimits) \quad \text{for} \quad r \le 1 \] and \[ h^{\le r}_{!*}(\overline{X}) := \ker (p^{\le r}) \quad \text{for} \quad r \ge 2 \; . 
\] Claim (i) is a consequence of the compatibility of the idempotents $p^{\le r}$, (ii) is a consequence of Proposition~\ref{2E}~(iv), and (iii) follows from symmetry of $p$. \end{Proof} \begin{Def} \label{3e} (a)~The filtration \[ 0 \subset h^0_{!*}(\overline{X}) \subset h^{\le 1}_{!*}(\overline{X}) \subset h^{\le 2}_{!*}(\overline{X}) \subset h^{\le 3}_{!*}(\overline{X}) \subset h^{\le 4}_{!*}(\overline{X}) = h_{!*}(\overline{X}) \] from Theorem~\ref{3C} is called the \emph{K\"unneth filtration of $h_{!*}(\overline{X})$}. \\[0.1cm] (b)~The $n$-th \emph{K\"unneth component of $h_{!*}(\overline{X})$}, $0 \le n \le 4$, is the sub-quotient of $h_{!*}(\overline{X})$ defined by \[ h^n_{!*}(\overline{X}) := h^{\le n}_{!*}(\overline{X}) / h^{\le n-1}_{!*}(\overline{X}) \; . \] \end{Def} For future reference, let us note the following immediate consequence of our construction. \begin{Prop} \label{3f} Let $n$ be an integer different from two. Then there is a canonical isomorphism of motives \[ h^n_{!*}(\overline{X}) \arrover{\sim} h^n(\mathop{\widetilde{X}} \nolimits) \; . \] \end{Prop} \begin{Rem} One may define the notion of K\"unneth decomposition of the intersection motive as being a decomposition splitting the K\"unneth filtration. Adding the complement $\oplus_m h^2 (D_m)$ of $h_{!*}(\overline{X})$ in $h(\mathop{\widetilde{X}} \nolimits)$, one gets a K\"unneth decomposition of $h(\mathop{\widetilde{X}} \nolimits)$ (in the abstract sense of Remark~\ref{3c}~(b)). With these choices, both the canonical projection from $h(\mathop{\widetilde{X}} \nolimits)$ to $h_{!*} (\overline{X})$ and the canonical inclusion of $h_{!*} (\overline{X})$ into $h(\mathop{\widetilde{X}} \nolimits)$ are morphisms of graded motives. It is not clear to me whether such K\"unneth decompositions of $h(\mathop{\widetilde{X}} \nolimits)$ can be obtained using Murre's construction recalled earlier, when $D$ has more than one component. 
The problem is the relation \[ p_{\mathop{\widetilde{X}} \nolimits,*}(p_3 \cdot [\mathop{\widetilde{X}} \nolimits \times D_m]) = 0 \] (we use the same notation as in the proof of Proposition~\ref{3B}). The cycle class in question is a non-zero multiple of \[ p_{\mathop{\widetilde{X}} \nolimits,*} (\widetilde{\beta} \cdot [\mathop{\widetilde{X}} \nolimits \times C \cdot D_m]) \; . \] For any fixed $m$, the K\"unneth decomposition of $h(\mathop{\widetilde{X}} \nolimits)$ can be \emph{chosen} such that this cycle class vanishes: take $T$ to be equal to $\frac{1}{d} [C \cdot D_m]$, where $d$ is the degree of $C \cdot D_m$. \end{Rem} \bigskip \section{Hard Lefschetz for the intersection motive} \label{4} We continue to consider a proper surface $\overline{X}$ over the base field $k$. Recall the K\"unneth filtration \[ 0 \subset h^0_{!*}(\overline{X}) \subset h^{\le 1}_{!*}(\overline{X}) \subset h^{\le 2}_{!*}(\overline{X}) \subset h^{\le 3}_{!*}(\overline{X}) \subset h^{\le 4}_{!*}(\overline{X}) = h_{!*}(\overline{X}) \] of the intersection motive. The aim of this section is to prove the following. \begin{Thm} \label{4A} Let ${\cal L}$ be a line bundle on $\overline{X}$. \\[0.1cm] (i)~There is a morphism of motives \[ c_{{\cal L}}: h_{!*}(\overline{X})(-1) \longrightarrow h_{!*}(\overline{X}) \; , \] which is uniquely characterized by the following two properties: \begin{enumerate} \item[(1)] If $\overline{X}$ is smooth, then $c_{{\cal L}}$ equals the cup-product with the first Chern class of ${\cal L}$ on $h(\overline{X})(-1) = h_{!*}(\overline{X})(-1)$ \cite[Sect.~2.1]{Sch}. \item[(2)] The morphism $c_{{\cal L}}$ is contravariantly functorial with respect to dominant morphisms $g: \overline{Y} \to \overline{X}$ of proper surfaces over $k$: the diagram \[ \vcenter{\xymatrix@R-10pt{ h_{!*}(\overline{Y})(-1) \ar[r]^-{c_{g^* \!
{\cal L}}} & h_{!*}(\overline{Y}) \\ h_{!*}(\overline{X})(-1) \ar[r]^-{c_{{\cal L}}} \ar[u]^{g^*(-1)} & h_{!*}(\overline{X}) \ar[u]_{g^*} \\}} \] (see Proposition~\ref{2E}~(i)) commutes. \end{enumerate} (ii)~If ${\cal L}'$ is a second line bundle on $\overline{X}$, then \[ c_{{\cal L} \otimes {\cal L}'} = c_{{\cal L}} + c_{{\cal L}'} \; . \] In other words, the map \[ \mathop{\rm Pic}\nolimits(\overline{X}) \longrightarrow \mathop{\rm Hom}\nolimits \bigl( h_{!*}(\overline{X})(-1), h_{!*}(\overline{X}) \bigr) \; , \; {\cal L} \longmapsto c_{{\cal L}} \] is a morphism of groups. \\[0.1cm] (iii)~The morphism $c_{{\cal L}}$ is filtered in the following sense: it induces morphisms \[ c_{{\cal L}}: h^{\le n-2}_{!*}(\overline{X})(-1) \longrightarrow h^{\le n}_{!*}(\overline{X}) \] and hence, morphisms \[ c_{{\cal L}}: h^{n-2}_{!*}(\overline{X})(-1) \longrightarrow h^n_{!*}(\overline{X}) \] for all $n \in {\mathbb{Z}}$. \\[0.1cm] (iv)~If ($\overline{X}$ is projective and) ${\cal L}$ or ${\cal L}^{-1}$ is ample, then \[ c_{{\cal L}}^2 = c_{{\cal L}} \circ c_{{\cal L}}: h^0_{!*}(\overline{X})(-2) \longrightarrow h^4_{!*}(\overline{X}) \] and \[ c_{{\cal L}}: h^1_{!*}(\overline{X})(-1) \longrightarrow h^3_{!*}(\overline{X}) \] are isomorphisms. \end{Thm} Part (iv) of this result should be seen as the motivic analogue of the Hard Lefschetz Theorem for intersection cohomology \cite[Thm.~6.2.10]{BBD}. \\ In order to prepare the proof of Theorem~\ref{4A}, let us recall the ingredients of the proof when $\overline{X}$ is smooth (in which case Theorem~\ref{4A} is of course known). The morphism $c_{{\cal L}}$ then equals the cup-product with the first Chern class, which can be described as follows. In the category $CHM(k)_{{\mathbb{Q}}}$, the vector space $CH^1(\overline{X})$ equals the group of morphisms from ${\mathbb{L}}$ to $h(\overline{X})$. 
We define $c_{{\cal L}}$ as the composition \[ h(\overline{X})(-1) = h(\overline{X}) \otimes {\mathbb{L}} \stackrel{{\rm id}_{\overline{X}}^* \otimes [{\cal L}]}{\longrightarrow} h(\overline{X}) \otimes h(\overline{X}) \stackrel{\Delta^*}{\longrightarrow} h(\overline{X}) \] ($\Delta:= $ the diagonal embedding $\overline{X} \hookrightarrow \overline{X} \times_k \overline{X}$). From this description, pro\-perties (i)~(2) (for smooth $\overline{Y}$) and (ii) are immediate. Recall that $\overline{X}$, as a smooth and proper surface, is projective. Since the group $\mathop{\rm Pic}\nolimits(\overline{X})$ is generated by the classes of very ample line bundles, in order to prove (iii) and (iv), we may (by (ii)) assume that ${\cal L}$ is very ample. In addition, we may prove the claims after base change to a finite extension of $k$, and hence assume that $\overline{X}$ is geometrically connected, and that ${\cal L}$ is represented by a smooth curve $C$ embedded into $\overline{X}$ via the closed immersion $i_C$. The morphism $c_{{\cal L}}$ then equals the composition of \[ i_C^*(-1): h(\overline{X})(-1) \longrightarrow h(C)(-1) \] and of \[ i_{C,*}: h(C)(-1) \longrightarrow h(\overline{X}) \; . \] By auto-duality of the K\"unneth filtrations for $C$ and for $\overline{X}$, it suffices for (iii) to show that $i_C^*: h(\overline{X}) \to h(C)$ is a morphism of filtered motives. But this follows from \cite[Lemma~8.3.2]{Mr3} and \cite[Prop.~5.8]{J}. As for (iv), observe that identifying $h^0(\mathop{\widetilde{X}} \nolimits)(-2)$ and $h^4(\mathop{\widetilde{X}} \nolimits)$ with ${\mathbb{Q}}(-2)$ allows us to relate the morphism $c_{{\cal L}}^2: h^0(\mathop{\widetilde{X}} \nolimits)(-2) \to h^4(\mathop{\widetilde{X}} \nolimits)$ to the self-intersection number $C \cdot C$, which is strictly positive since ${\cal L}$ is very ample. 
The statement on $c_{{\cal L}}: h^1(\mathop{\widetilde{X}} \nolimits)(-1) \to h^3(\mathop{\widetilde{X}} \nolimits)$ is the most difficult to prove. We refer to \cite[Thm.~4.4~(ii)]{Sch} for the details. \\ Given the contravariance property of the intersection motive (Proposition~\ref{2E}~(i)), it is now clear what remains to be done in order to prove Theorem~\ref{4A} in the generality in which we stated it. First note that in our statement, we may replace $\overline{X}$ by its normalization $\mathop{X^*} \nolimits$. Indeed, $h_{!*}(\overline{X}) = h_{!*}(\mathop{X^*} \nolimits)$, and the morphism $\mathop{X^*} \nolimits \to \overline{X}$ being finite, the pull-back of an ample line bundle on $\overline{X}$ is ample on $\mathop{X^*} \nolimits$. Next, fix a cartesian diagram \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] \ar@{=}[d] & \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r]^{\tilde{\imath}} \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r] & Z \\}} \] which is a desingularization of $\mathop{X^*} \nolimits$. Thus, $\pi$ is proper, $\mathop{\widetilde{X}} \nolimits$ is smooth and proper (hence projective), $Z$ is finite, and $D$ a divisor with normal crossings, whose irreducible components $D_m$ are smooth. We need to show that for any line bundle ${\cal L}$ on $\mathop{X^*} \nolimits$, the composition \[ h_{!*}(\overline{X})(-1) \lhook\joinrel\longrightarrow h(\mathop{\widetilde{X}} \nolimits)(-1) \stackrel{c_{\pi^* \! {\cal L}}}{\longrightarrow} h(\mathop{\widetilde{X}} \nolimits) \] lands in $h_{!*}(\overline{X}) \subset h(\mathop{\widetilde{X}} \nolimits)$ --- this will then be our definition of $c_{{\cal L}}$ --- and that we have the Hard Lefschetz Theorem \ref{4A}~(iv). In fact, we shall prove a more general result. 
\begin{Var} \label{4A'} Let $\widetilde{{\cal L}}$ be a line bundle on $\mathop{\widetilde{X}} \nolimits$, whose restrictions to all $D_m$ are trivial (for example, the pull-back of a line bundle on $\mathop{X^*} \nolimits$). \\[0.1cm] (i)~The restriction of the morphism of motives \[ c_{\widetilde{{\cal L}}} : h(\mathop{\widetilde{X}} \nolimits)(-1) \longrightarrow h(\mathop{\widetilde{X}} \nolimits) \] to the sub-motive $h_{!*}(\overline{X})(-1)$ induces a morphism $h_{!*}(\overline{X})(-1) \to h_{!*}(\overline{X})$. In other words, there is a commutative diagram \[ \vcenter{\xymatrix@R-10pt{ h(\mathop{\widetilde{X}} \nolimits)(-1) \ar[r]^-{c_{\widetilde{{\cal L}}}} & h(\mathop{\widetilde{X}} \nolimits) \\ h_{!*}(\overline{X})(-1) \ar[r]^-{c_{\widetilde{{\cal L}}}} \ar@{_{ (}->}[u]^{\pi^*(-1)} & h_{!*}(\overline{X}) \ar@{^{ (}->}[u]_{\pi^*} \\}} \] (ii)~If $\widetilde{{\cal L}}'$ is a second line bundle on $\mathop{\widetilde{X}} \nolimits$ with trivial restrictions to all $D_m$, then \[ c_{\widetilde{{\cal L}} \otimes \widetilde{{\cal L}}'} = c_{\widetilde{{\cal L}}} + c_{\widetilde{{\cal L}}'} \; . \] (iii)~The morphism $c_{\widetilde{{\cal L}}}$ is filtered: it induces morphisms \[ c_{\widetilde{{\cal L}}}: h^{\le n-2}_{!*}(\overline{X})(-1) \longrightarrow h^{\le n}_{!*}(\overline{X}) \] for all $n \in {\mathbb{Z}}$. \\[0.1cm] (iv)~Assume in addition that $\widetilde{{\cal L}}$ is the line bundle associated to a divisor $C$ on $\mathop{\widetilde{X}} \nolimits$ such that $C - \sum_m a_m D_m$ or $-C - \sum_m a_m D_m$ is ample for a suitable choice of integers $a_m \ge 0$ (for example, $\widetilde{{\cal L}} = \pi^* \! {\cal L}$ for an ample line bundle ${\cal L}$ on $\mathop{X^*} \nolimits$). Then \[ c_{\widetilde{{\cal L}}}^2: h^0_{!*}(\overline{X})(-2) \longrightarrow h^4_{!*}(\overline{X}) \] and \[ c_{\widetilde{{\cal L}}}: h^1_{!*}(\overline{X})(-1) \longrightarrow h^3_{!*}(\overline{X}) \] are isomorphisms. 
\end{Var} \begin{Proof} In order to prove (i), we have to check that the composition \[ h_{!*}(\overline{X})(-1) \stackrel{\pi^*(-1)}{\lhook\joinrel\longrightarrow} h(\mathop{\widetilde{X}} \nolimits)(-1) \stackrel{c_{\widetilde{{\cal L}}}}{\longrightarrow} h(\mathop{\widetilde{X}} \nolimits) \stackrel{{\tilde{\imath}}_* \alpha^{-1} {{\tilde{\imath}}}^*}{\longrightarrow} h(\mathop{\widetilde{X}} \nolimits) \] is zero. Since the formation of Chern classes is compatible with pull-backs, the composition ${{\tilde{\imath}}}^* c_{\widetilde{{\cal L}}}$ equals \[ h(\mathop{\widetilde{X}} \nolimits)(-1) \stackrel{\oplus_m i_m^*}{\longrightarrow} \bigoplus_m h(D_m)(-1) \stackrel{\oplus_m c_{i_m^* \! \widetilde{{\cal L}}}}{\longrightarrow} \bigoplus_m h(D_m) \ontoover{\ } \bigoplus_m h^2(D_m) \; , \] where $i_m$ denotes the immersion of $D_m$ into $\mathop{\widetilde{X}} \nolimits$. But by assumption, the morphisms $c_{i_m^* \! \widetilde{{\cal L}}}: h(D_m)(-1) \to h(D_m)$ are all zero. Claims (ii) and (iii) hold since they hold for $c_{\widetilde{{\cal L}}} : h(\mathop{\widetilde{X}} \nolimits)(-1) \to h(\mathop{\widetilde{X}} \nolimits)$. As for (iv), observe that according to Proposition~\ref{3f}, \[ h^n_{!*}(\overline{X}) \cong h^n(\mathop{\widetilde{X}} \nolimits) \; , \; n \ne 2 \; . \] Thus, we have to prove that \[ c_{\widetilde{{\cal L}}}^2: h^0(\mathop{\widetilde{X}} \nolimits)(-2) \longrightarrow h^4(\mathop{\widetilde{X}} \nolimits) \] and \[ c_{\widetilde{{\cal L}}}: h^1(\mathop{\widetilde{X}} \nolimits)(-1) \longrightarrow h^3(\mathop{\widetilde{X}} \nolimits) \] are isomorphisms. As before, the claim for $c_{\widetilde{{\cal L}}}^2$ is essentially equivalent to showing that the self-intersection number $C \cdot C$ is non-zero. 
Since the restriction of $\widetilde{{\cal L}}$ to any of the $D_m$ is trivial, we have the formula \[ C \cdot C = \bigl( \pm C - \sum_m a_m D_m \bigr) \cdot \bigl( \pm C - \sum_m a_m D_m \bigr) - \bigl( \sum_m a_m D_m \bigr) \cdot \bigl( \sum_m a_m D_m \bigr) \; . \] (Indeed, the cross-terms vanish, since $C \cdot D_m = \deg (i_m^* \widetilde{{\cal L}}) = 0$ for all $m$.) The intersection matrix $(D_n \cdot D_m)_{n,m}$ is negative definite \cite[p.~6]{M}, hence the matrix $\bigl( (a_n D_n) \cdot (a_m D_m) \bigr)_{n,m}$ is negative semi-definite. It follows that the term $(\sum_m a_m D_m) \cdot (\sum_m a_m D_m)$ is non-positive. Hence \[ C \cdot C \ge \bigl( \pm C - \sum_m a_m D_m \bigr) \cdot \bigl( \pm C - \sum_m a_m D_m \bigr) \; . \] But by assumption, one of the divisors $C - \sum_m a_m D_m$, $-C - \sum_m a_m D_m$ is ample. Therefore, its self-intersection number is strictly positive. In order to prove the claim for $c_{\widetilde{{\cal L}}}: h^1(\mathop{\widetilde{X}} \nolimits)(-1) \to h^3(\mathop{\widetilde{X}} \nolimits)$, observe first that by (ii), we may assume $C - \sum_m a_m D_m$ to be very ample. By passing to a finite extension of $k$, we find a smooth curve $H$ embedded into $\mathop{\widetilde{X}} \nolimits$ via the closed immersion $i_H$, and such that there is a linear equivalence of divisors \[ C - \sum_m a_m D_m \sim H \; . \] In particular, $H$ is very ample, and \[ c_{\widetilde{{\cal L}}} = i_{H,*} i_H^* + \sum_m a_m i_{m,*} i_m^* : h^1(\mathop{\widetilde{X}} \nolimits)(-1) \longrightarrow h^3(\mathop{\widetilde{X}} \nolimits) \; . \] Hard Lefschetz \ref{4A}~(iv) tells us that $i_{H,*} i_H^*$ is an isomorphism. In order to see that the same still holds after adding the ``error term'' $\sum_m a_m i_{m,*} i_m^*$, we need to recall more details of the proof. In fact, as follows from \cite[Prop.~4.5]{Sch}, the full sub-category of motives isomorphic to $h^1(Y)$, for smooth projective varieties $Y$ over $k$, is equivalent to the category of Abelian varieties over $k$ up to isogeny. 
More precisely, this equivalence is such that $h^1(Y)$ corresponds to the Picard variety $P_Y$, and that the motive $h^{2d_Y-1}(Y)(d_Y-1)$ (for $Y$ of pure dimension $d_Y$) corresponds to the Albanese variety $A_Y$. Furthermore, for a morphism $f:Y_1 \to Y_2$, the pull-back of motives $f^*: h^1(Y_2) \to h^1(Y_1)$ corresponds to $f^*: P_{Y_2} \to P_{Y_1}$, while the push-forward $f_*: h^{2d_{Y_1}-1}(Y_1)(d_{Y_1}-1) \to h^{2d_{Y_2}-1}(Y_2)(d_{Y_2}-1)$ (for $Y_i$ of pure dimension $d_{Y_i}$, $i=1,2$) corresponds to $f_*: A_{Y_1} \to A_{Y_2}$. Proving that $c_{\widetilde{{\cal L}}}$ is an isomorphism of motives is thus equivalent to proving the following statement: the composition of \[ I^* : P_{\widetilde{X}} \longrightarrow P_H \times_k \prod_m \bigl( P_{D_m} \bigr)^{a_m} \] with its dual \[ I_* : A_H \times_k \prod_m \bigl( A_{D_m} \bigr)^{a_m} \longrightarrow A_{\widetilde{X}} \] is an isogeny from the Picard variety of $\widetilde{X}$ to the Albanese variety of $\widetilde{X}$ (recall that our motives are with ${\mathbb{Q}}$-coefficients). Here, $I$ denotes the morphism from the disjoint union of $H$ and $a_m$ copies of $D_m$, for all $m$, to $\widetilde{X}$. Also, we have identified the Picard and the Albanese varieties of the curves $H$ and $D_m$ with the respective Jacobians, using the fact that these are canonically principally polarized. The decisive ingredient of the proof is \cite[Cor.~1 of Thm.~7]{We}, which states that since $H$ is very ample, the kernel of $i_H^*: P_{\widetilde{X}} \to P_H$ is finite. The same is thus true for $I^*$. Now observe that a polarization on an Abelian variety (such as $P_H \times_k \prod_m \bigl( P_{D_m} \bigr)^{a_m}$) induces a polarization on any sub-Abelian variety. The composition $I_* I^*$ is therefore an isogeny. \end{Proof} \bigskip \section{The motive of the exceptional divisor} \label{5} At this point, we need to enlarge the category of motives we are working in, since we wish to consider motives of genuinely singular varieties. 
Let us first set up the notation, which follows that of \cite{VSF}. From now on, our base field $k$ is assumed to be perfect. We write $Sch/k$ for the category of schemes which are separated and of finite type over $k$, and $Sm/k$ for the full sub-category of objects of $Sch/k$ which are smooth over $k$. Recall the definition of the category $\mathop{SmCor(k)}\nolimits$ \cite[p.~190]{VSF}: its objects are those of $Sm/k$. Morphisms from $Y$ to $X$ are given by the group $c(Y,X)$ of \emph{finite correspondences} from $Y$ to $X$. The category $\mathop{Shv_{Nis}(SmCor(k))}\nolimits$ of \emph{Nisnevich sheaves with transfers} \cite[Def.~3.1.1]{VSF} is the category of those contravariant additive functors from $\mathop{SmCor(k)}\nolimits$ to Abelian groups, whose restriction to $Sm/k$ is a sheaf for the Nisnevich topo\-logy. Inside the derived category $D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)$ of complexes bounded from above, one defines the full triangulated sub-category $\mathop{DM^{eff}_-(k)}\nolimits$ of \emph{effective motivic complexes} over $k$ \cite[p.~205, Prop.~3.1.13]{VSF} as the one consisting of objects whose cohomology sheaves are \emph{homotopy invariant} \cite[Def.~3.1.10]{VSF}. The inclusion of $\mathop{DM^{eff}_-(k)}\nolimits$ into $D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)$ admits a left adjoint $\mathop{{\bf R} C}\nolimits$, which is induced from the functor \[ \mathop{\underline{C}}\nolimits_*: \mathop{Shv_{Nis}(SmCor(k))}\nolimits \longrightarrow C^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits) \] which maps $F$ to the simple complex associated to the \emph{singular simplicial complex} \cite[p.~207, Prop.~3.2.3]{VSF}. One defines a functor $L$ from $Sch/k$ to $\mathop{Shv_{Nis}(SmCor(k))}\nolimits$: it associates to $X$ the Nisnevich sheaf with transfers $c(\argdot,X)$; note that the above definition of $c(Y,X)$ still makes sense when $X \in Sch/k$ is not necessarily smooth. 
One defines the \emph{motive} $M (X)$ as $\mathop{{\bf R} C}\nolimits (L(X))$. We shall use the same symbol for $M (X) \in \mathop{DM^{eff}_-(k)}\nolimits$ and for its canonical representative $\mathop{\underline{C}}\nolimits_* (L(X))$ in $C^- (\mathop{Shv_{Nis}(SmCor(k))}\nolimits)$. There is a second functor $L^c$, which associates to $X \in Sch/k$ the Nisnevich sheaf of quasi-finite correspondences \cite[p.~223, 224]{VSF}. One defines the \emph{motive with compact support} $M^c (X)$ of $X \in Sch/k$ as $\mathop{{\bf R} C}\nolimits (L^c(X))$. It coincides with $M(X)$ if $X$ is proper. \\ A second, more geometric approach to motives is the one developed in \cite[Sect.~2.1]{VSF}. There, the triangulated category $\mathop{DM^{eff}_{gm}(k)}\nolimits$ of \emph{effective geometrical motives} over $k$ is defined. There is a canonical full triangulated embedding of $\mathop{DM^{eff}_{gm}(k)}\nolimits$ into $\mathop{DM^{eff}_-(k)}\nolimits$ \cite[Thm.~3.2.6]{VSF}, which maps the geometrical motive of $X \in Sm/k$ \cite[Def.~2.1.1]{VSF} to $M (X)$. Using this embedding, we consider $M (X)$ as an object of $\mathop{DM^{eff}_{gm}(k)}\nolimits$. The \emph{Tate motive} ${\mathbb{Z}}(1)$ in $\mathop{DM^{eff}_{gm}(k)}\nolimits$ is defined as the \emph{reduced motive} of ${\mathbb{P}}^1_k$ \cite[p.~192]{VSF}, shifted by $-2$. There is a canonical direct sum decomposition \[ M({\mathbb{P}}^1_k) = {\mathbb{Z}}(0) \oplus {\mathbb{Z}}(1)[2] \; . \] The category $\mathop{DM_{gm}(k)}\nolimits$ of \emph{geometrical motives} over $k$ is obtained from the category $\mathop{DM^{eff}_{gm}(k)}\nolimits$ by inverting ${\mathbb{Z}}(1)$. All categories $\mathop{DM^{eff}_{gm}(k)}\nolimits$, $\mathop{DM_{gm}(k)}\nolimits$, $D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)$, and $\mathop{DM^{eff}_-(k)}\nolimits$ are tensor triangulated, and admit unit objects, which we denote by the same symbol ${\mathbb{Z}} (0)$ \cite[Prop.~2.1.3, Cor.~2.1.5, p.~206, Thm.~3.2.6]{VSF}. 
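Summarizing, the categories introduced so far are related by a chain of full triangulated embeddings
\[
\mathop{DM^{eff}_{gm}(k)}\nolimits \lhook\joinrel\longrightarrow \mathop{DM^{eff}_-(k)}\nolimits \lhook\joinrel\longrightarrow D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits) \; ,
\]
the first being the embedding of \cite[Thm.~3.2.6]{VSF}, and the second the inclusion of the full sub-category of effective motivic complexes.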
For $M \in \mathop{DM_{gm}(k)}\nolimits$ and $n \in {\mathbb{Z}}$, write $M(n)$ for the tensor product $M \otimes {\mathbb{Z}} (n)$. According to \cite{V3}, the functor $\mathop{DM^{eff}_{gm}(k)}\nolimits \to \mathop{DM_{gm}(k)}\nolimits$ is a full triangulated embedding (see \cite[Thm.~4.3.1]{VSF} for a proof when $k$ admits resolution of singularities). \\ Let us denote by $\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}$ and $\mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}$ the triangulated categories obtained by the ${\mathbb{Q}}$-linear analogues of the above constructions \cite[Sect.~16.2.4 and Sect.~17.1.3]{A}. The relation to Chow motives is given by the following result due to Voevodsky. \begin{Thm} \label{5A} (i)~There is a natural contravariant ${\mathbb{Q}}$-linear tensor functor \[ R : CHM(k)_{{\mathbb{Q}}} \longrightarrow \mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}} \; . \] $R$ is fully faithful. \\[0.1cm] (ii)~For any smooth projective variety $S$ over $k$, the functor $R$ maps the Chow motive $h(S)$ to the motive $M(S) \in \mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}} \subset \mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}$. \\[0.1cm] (iii)~The functor $R$ maps the Lefschetz motive ${\mathbb{L}}$ to the motive ${\mathbb{Z}}(1)[2]$, compatibly with the decompositions \[ h({\mathbb{P}}^1_k) = h(\mathop{{\rm Spec}}\nolimits k) \oplus {\mathbb{L}} \] in $CHM(k)_{{\mathbb{Q}}}$ and \[ M({\mathbb{P}}^1_k) = {\mathbb{Z}}(0) \oplus {\mathbb{Z}}(1)[2] \] in $\mathop{DM^{eff}_{gm}(k)}\nolimits_{\mathbb{Q}}$. \forget{ (iv)~If $k$ admits resolution of singularities, then there are no non-trivial higher extensions between Chow motives in the category $\mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}$. More precisely, given smooth projective varieties $S_1$, $S_2$ over $k$ and $i > 0$, the group \[ \mathop{\rm Hom}\nolimits_{\mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M(S_1) , M(S_2)[i] \bigr) \] is zero. 
} \end{Thm} \begin{Proof} The essential point of the proof is to show the following equality of groups of morphisms: \[ \mathop{\rm Hom}\nolimits_{CHM(k)_{{\mathbb{Q}}}} \bigl( h(Y)(-q) , h(X) \bigr) = \mathop{\rm Hom}\nolimits_{\mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M(X) , M(Y)(q)[2q] \bigr) \] for smooth projective varieties $X$ and $Y$ over $k$ and $q \ge 0$. Duality in $\mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}$ \cite[Thm.~18.4.1.1]{A} (\cite[Thm.~4.3.7]{VSF} if $k$ admits resolution of singularities) allows us to reduce to the case $Y = \mathop{{\rm Spec}}\nolimits k$, in which case the claim follows from \cite[Cor.~2]{V2}. \end{Proof} \forget{ \begin{Rem} It should be noted that the preceding result holds already before tensoring with ${\mathbb{Q}}$, i.e., there is a version for Chow motives with ${\mathbb{Z}}$-coefficients. But we intend to apply the results of the preceding sections, where invertibility of non-zero integers was needed. First, the determinant $d$ of the intersection matrix $(D_n \cdot D_m)_{n,m}$ is a non-zero integer, a fact which is essential for the construction of the intersection motive (Theorem~\ref{2B}~(i)). This could be controlled by tensoring only with ${\mathbb{Z}}[1/d]$. Second, and more seriously, our proofs systematically made use of the principle that a finite extension of the base field does not affect the statement in question. This principle is of course based on the invertibility of the degree of the field extension. In general, it will not be possible to control these degrees (unless, for example, our field is algebraically closed). \end{Rem} } \begin{Ex} \label{5B} Fix a proper surface $\overline{X}$ over $k$. 
Recall the K\"unneth filtration of the intersection motive \[ 0 \subset h^0_{!*}(\overline{X}) \subset h^{\le 1}_{!*}(\overline{X}) \subset h^{\le 2}_{!*}(\overline{X}) \subset h^{\le 3}_{!*}(\overline{X}) \subset h^{\le 4}_{!*}(\overline{X}) = h_{!*}(\overline{X}) \; , \] the quotients \[ h^{\ge r}_{!*}(\overline{X}) := h_{!*}(\overline{X}) / h^{\le r-1}_{!*}(\overline{X}) \; , \] and the K\"unneth components \[ h^n_{!*}(\overline{X}) = h^{\le n}_{!*}(\overline{X}) / h^{\le n-1}_{!*}(\overline{X}) \] (Definition~\ref{3e}). Let us write $M^{!*}(\overline{X}) := R(h_{!*}(\overline{X}))$, \[ M^{!*}_{\ge r}(\overline{X}) := R(h^{\ge r}_{!*}(\overline{X})) \; , \] \[ M^{!*}_{\le n}(\overline{X}) := R(h^{\le n}_{!*}(\overline{X})) \; , \] \[ M^{!*}_n(\overline{X}) := R(h^n_{!*}(\overline{X})) \; . \] We thus have exact triangles \[ M^{!*}_{\ge r+1}(\overline{X}) \longrightarrow M^{!*}(\overline{X}) \longrightarrow M^{!*}_{\le r}(\overline{X}) \stackrel{\delta}{\longrightarrow} M^{!*}_{\ge r+1}(\overline{X})[1] \; , \] \[ M^{!*}_n(\overline{X}) \longrightarrow M^{!*}_{\le n}(\overline{X}) \longrightarrow M^{!*}_{\le n-1}(\overline{X}) \stackrel{\delta}{\longrightarrow} M^{!*}_n(\overline{X})[1] \] in $\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}$, which are all split in the sense that the boundaries $\delta$ are zero. \end{Ex} For the rest of this section, fix a (not necessarily proper) surface $\overline{X}$ over $k$, and a cartesian diagram \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] \ar@{=}[d] & \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r] \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r] & Z \\}} \] which is a desingularization of the normalization $\mathop{X^*} \nolimits$. Thus, $\pi$ is proper, $\mathop{\widetilde{X}} \nolimits$ is smooth, $Z$ is finite, and $D$ a divisor with normal crossings, whose irreducible components $D_m$ are smooth projective curves. 
The exact triangle associated to the closed covering of $D$ by the $D_m$ \cite[Prop.~4.1.3]{VSF} (but see also the proof of Proposition~\ref{6D}~(i)) shows that $M(D)$ belongs to the category $\mathop{DM^{eff}_{gm}(k)}\nolimits$. \begin{Def} \label{5C} Define Chow motives $h^0(D)$ and $h^2(D)$ as follows. \\[0.1cm] (a)~$h^0(D) := h(S)$, where $S$ equals the spectrum of the ring of global sections of the structure sheaf of $D$. \\[0.1cm] (b)~$h^2(D) := \oplus_m h^2 (D_m)$. \end{Def} Let us write $M_0(D) := R(h^0(D))$ and $M_2(D) := R(h^2(D))$. The morphism $D \to S$ and the inclusions $i_m$ of the components $D_m$ into $D$ induce morphisms $M(D) \to M_0(D)$ and $M_2(D) \to M(D)$ in $\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}$. \forget{ \begin{Ex} \label{5Ba} The hypothesis of Variant~\ref{4A'}~(i)--(iii) on $\widetilde{{\cal L}}$ can be weakened further, using the motive $M(D)$ of the variety $D$ (which in general is singular), and Theorem~\ref{5A}. Using the notation of the proof of Variant~\ref{4A'}, the composition \[ \beta: h(\mathop{\widetilde{X}} \nolimits)(-1) \stackrel{\oplus_m i_m^*}{\longrightarrow} \bigoplus_m h(D_m)(-1) \stackrel{\oplus_m c_{i_m^* \! \widetilde{{\cal L}}}}{\longrightarrow} \bigoplus_m h(D_m) \ontoover{\ } \bigoplus_m h^2(D_m) \] is zero if and only if its image under the functor $R$ is. Now observe that $R(\beta)$ clearly factors through $M(D)(1)[2]$, yielding \[ \bigoplus_m M_2(D_m) \lhook\joinrel\longrightarrow \bigoplus_m M(D_m) \stackrel{\oplus_m c_{i_m^* \! \widetilde{{\cal L}}}}{\longrightarrow} \bigoplus_m M(D_m)(1)[2] \stackrel{\oplus_m i_{m,*}}{\longrightarrow} M(D)(1)[2] \; , \] where we have set $M_2(D_m) := R(h^2(D_m))$. Duality \cite[Thm.~18.4.1.1]{A} implies that the morphism dual to $\oplus_m i_{m,*}: \oplus_m M(D_m)(-1)[-2] \to M(D)(-1)[-2]$ equals $M(D)^*(1)[2] \to \oplus_m M(D_m)^*(1)[2] = \oplus_m M(D_m)$. 
Therefore, the first term in the above composition $\oplus_m M_2(D_m)$ can be considered as a sub-object of $M(D)^*(1)[2]$. Therefore, $R(\beta)$ factors through a morphism \[ \gamma: M(D)^*(1)[2] \longrightarrow M(D)(1)[2] \; . \] By \cite[Prop.~4.2.9]{VSF}, \[ \mathop{\rm Hom}\nolimits_{\mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M(D)^* , M(D) \bigr) = CH^2 (D \times_k D) \; . \] Checking the definitions, one verifies that under this identification, the morphism $\gamma$ corresponds to the push-forward under the diagonal $\Delta$ of the class of the restriction of ${\cal L}$ in $CH^1(D)$, i.e., to the cup-product with the first Chern class of $\tilde{\imath}^* {\cal L}$. Therefore, the conclusions of Variant~\ref{4A'}~(i)--(iii) hold once $\Delta_*([\tilde{\imath}^* {\cal L}]) \in CH^2 (D \times_k D)$ is trivial. \end{Ex} } \begin{Lem} \label{5D} The morphism $M(D) \to M_0(D)$ is a split epimorphism, and $M_2(D) \to M(D)$ is a split monomorphism. The composition of the two morphisms $M_2(D) \to M(D) \to M_0(D)$ is trivial. \end{Lem} \begin{Proof} The composition \[ \bigoplus_m R(h^0(D_m)) \longrightarrow \bigoplus_m R(h(D_m)) = \bigoplus_m M(D_m) \longrightarrow M(D) \longrightarrow M_0(D) \] is a split epimorphism, hence so is $M(D) \to M_0(D)$. The composition \[ M_2(D) \longrightarrow M(D) \longrightarrow M(\mathop{\widetilde{X}} \nolimits) \] is a split monomorphism (Theorem~\ref{2B}~(i)), hence so is $M_2(D) \to M(D)$. The last claim is obvious. \end{Proof} It follows that the objects \[ M_{\ge 1}(D) := \ker \bigl( M(D) \longrightarrow M_0(D) \bigr) \; , \] \[ M_{\le 1}(D) := M(D) / M_2(D) \; , \] and \[ M_1(D) := \ker \bigl( M_{\le 1}(D) \longrightarrow M_0(D) \bigr) = M_{\ge 1}(D) / M_2(D) \] exist. 
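Note that a choice of splittings for the epimorphism and the monomorphism of Lemma~\ref{5D} yields a (non-canonical) isomorphism \[ M(D) \cong M_0(D) \oplus M_1(D) \oplus M_2(D) \; . \]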
They give rise to what we might call the K\"unneth filtration of $M(D)$: \[ M(D) =: M_{\le 2}(D) \ontoover{\ } M_{\le 1}(D) \ontoover{\ } M_0(D) \; , \] \[ M_2(D) \lhook\joinrel\longrightarrow M_{\ge 1}(D) \lhook\joinrel\longrightarrow M_{\ge 0}(D) := M(D) \; . \] Note that there are split exact triangles \[ M_2(D) \longrightarrow M(D) \longrightarrow M_{\le 1}(D) \stackrel{\delta = 0}{\longrightarrow} M_2(D)[1] \; , \] \[ M_1(D) \longrightarrow M_{\le 1}(D) \longrightarrow M_0(D) \stackrel{\delta = 0}{\longrightarrow} M_1(D)[1] \] in $\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}$. For all $m$, let us also define $M_i(D_m)$, $0 \le i \le 2$ and $M_{\le 1} (D_m)$ as the images under the functor $R$ of the Chow motives $h^i(D_m)$ and $h^{\le 1} (D_m)$, respectively. \begin{Rem} Unlike $M_0(D)$ and $M_2(D)$, the sub-quotient $M_1(D)$ should not in general be expected to come from a Chow motive. Indeed, as we shall see, the ``kernel'' of \[ \bigoplus_{n < m} M(D_n \cap D_m)[1] \longrightarrow \bigoplus_m M_0(D_m)[1] \] contributes to $M_1(D)$. \end{Rem} \bigskip \section{An extension of motives} \label{6} We continue to study the situation \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r]^{\tilde{\jmath}} \ar@{=}[d] & \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r]^{\tilde{\imath}} \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r]^j & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r] & Z \\}} \] fixed in Section~\ref{5}, but assume in addition that the surface $\overline{X}$ is proper. The morphism $\tilde{\imath}_*: M(D) \to M(\mathop{\widetilde{X}} \nolimits)$ will be at the base of the construction of an extension in $\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}$ (Theorem~\ref{Main}). Let us start with a number of elementary observations. 
\begin{Lem} \label{6a} The composition \[ M(D) \stackrel{\tilde{\imath}_*}{\longrightarrow} M(\mathop{\widetilde{X}} \nolimits) \ontoover{\ } M^{!*}(\overline{X}) \] factors uniquely through a morphism $\tilde{\imath}_*: M_{\le 1}(D) \to M^{!*}(\overline{X})$. \end{Lem} \begin{Proof} We identify $M^{!*}(\overline{X})$ with the categorical quotient of $M(\mathop{\widetilde{X}} \nolimits)$ by $M_2(D)$. The composition in question thus vanishes on $M_2(D)$. It therefore factors uniquely over the categorical quotient $M_{\le 1}(D)$ of $M(D)$ by $M_2(D)$. \end{Proof} \begin{Rem} \label{6B} If $k$ admits resolution of singularities, then we have \emph{localization} for the motive with compact support \cite[Prop.~4.1.5]{VSF}. In our situation, this means that there is a canonical exact triangle \[ M(D) \stackrel{\tilde{\imath}_*}{\longrightarrow} M(\mathop{\widetilde{X}} \nolimits) \stackrel{\tilde{\jmath}^*}{\longrightarrow} M^c(X) \longrightarrow M(D)[1] \; . \] From this, one deduces easily that $\tilde{\imath}_*: M_{\le 1}(D) \to M^{!*}(\overline{X})$ sits in an exact triangle \[ M_{\le 1}(D) \stackrel{\tilde{\imath}_*}{\longrightarrow} M^{!*}(\overline{X}) \stackrel{j^*}{\longrightarrow} M^c(X) \longrightarrow M_{\le 1}(D)[1] \; . \] \end{Rem} Consider the sub-object $M_1(D)$ of $M_{\le 1}(D)$, and the quotient $M_0^{!*}(\overline{X})$ of $M^{!*}(\overline{X})$. \begin{Lem} \label{6C} The composition \[ M_1(D) \lhook\joinrel\longrightarrow M_{\le 1}(D) \stackrel{\tilde{\imath}_*}{\longrightarrow} M^{!*}(\overline{X}) \ontoover{\ } M_0^{!*}(\overline{X}) \] is trivial. 
\end{Lem} \begin{Proof} The motive $M_0^{!*}(\overline{X})$ equals $M_0(\mathop{\widetilde{X}} \nolimits) := R(h^0(\mathop{\widetilde{X}} \nolimits))$ (Proposition~\ref{3f}), hence the composition \[ M_{\le 1}(D) \stackrel{\tilde{\imath}_*}{\longrightarrow} M^{!*}(\overline{X}) \ontoover{\ } M_0^{!*}(\overline{X}) \] equals the composition \[ M_{\le 1}(D) \ontoover{\ } M_0(D) \stackrel{\tilde{\imath}_*}{\longrightarrow} M_0(\mathop{\widetilde{X}} \nolimits) \; . \] It is therefore trivial on $M_1(D)$. \end{Proof} \begin{Cor} \label{6Ca} The morphism $\tilde{\imath}_*: M_{\le 1}(D) \to M^{!*}(\overline{X})$ respects the K\"un\-neth filtrations. \end{Cor} The inclusion $\tilde{\imath}$ therefore induces a morphism, equally denoted $\tilde{\imath}_*$ from $M_1(D)$ to $M^{!*}_{\ge 1}(\overline{X})$. Consider the quotient $M_1^{!*}(\overline{X})$ of $M^{!*}_{\ge 1}(\overline{X})$. \begin{Prop} \label{6D} Assume that all geometric irreducible components of $D$ are of genus zero. \\[0.1cm] (i)~The object $M_1(D)[-1]$ of $\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}$ is an Artin motive, i.e., it is isomorphic to the motive of some zero-dimensional variety over $k$. More precisely, there is a canonical exact sequence of Artin motives \[ 0 \longrightarrow M_1(D)[-1] \longrightarrow \bigoplus_{n < m} M(D_n \cap D_m) \longrightarrow \bigoplus_m M_0(D_m) \; , \] and $M_1(D)[-1]$ is a direct summand of $\oplus_{n < m} M(D_n \cap D_m)$. \\[0.1cm] (ii)~The composition \[ M_1(D) \stackrel{\tilde{\imath}_*}{\longrightarrow} M^{!*}_{\ge 1}(\overline{X}) \ontoover{\ } M_1^{!*}(\overline{X}) \] is trivial. \end{Prop} \begin{Proof} (i)~Consider the closed covering of $D$ by the $D_m$. 
It induces an exact sequence of Nisnevich sheaves with transfers \[ 0 \longrightarrow \bigoplus_{n < m} L(D_n \cap D_m) \longrightarrow \bigoplus_m L(D_m) \longrightarrow L(D) \longrightarrow 0 \; , \] hence an exact triangle \[ \bigoplus_{n < m} M(D_n \cap D_m) \longrightarrow \bigoplus_m M(D_m) \longrightarrow M(D) \longrightarrow \bigoplus_{n < m} M(D_n \cap D_m)[1] \; . \] Given the definition of $M_2$, we get an exact triangle \[ \bigoplus_{n < m} M(D_n \cap D_m) \longrightarrow \bigoplus_m M_{\le 1}(D_m) \longrightarrow M_{\le 1}(D) \longrightarrow \bigoplus_{n < m} M(D_n \cap D_m)[1] \; . \] But the $M_1(D_m)$ are zero by assumption. Hence the exact triangle takes the form \[ \bigoplus_{n < m} M(D_n \cap D_m) \longrightarrow \bigoplus_m M_0(D_m) \longrightarrow M_{\le 1}(D) \longrightarrow \bigoplus_{n < m} M(D_n \cap D_m)[1] \; ; \] it thus belongs to the full triangulated sub-category $d_{\le 0} \mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}$ gene\-rated by motives of dimension $0$. This triangulated sub-category is canonically equivalent to the bounded derived category of the Abelian semi-simple category of Artin motives (with ${\mathbb{Q}}$-coefficients) over $k$ \cite[Prop.~3.4.1 and Remark~2 following it]{VSF}. The sequence \[ \bigoplus_{n < m} M(D_n \cap D_m) \longrightarrow \bigoplus_m M_0(D_m) \longrightarrow M_0(D) \longrightarrow 0 \] of Artin motives is exact. From this and the above exact triangle, we see that $M_1(D)[-1]$ is an Artin motive, which fits into an exact sequence \[ 0 \longrightarrow M_1(D)[-1] \longrightarrow \bigoplus_{n < m} M(D_n \cap D_m) \longrightarrow \bigoplus_m M_0(D_m) \; . \] (ii)~The motive $M_1^{!*}(\overline{X})$ equals $M_1(\mathop{\widetilde{X}} \nolimits)$ (Proposition~\ref{3f}). We shall show triviality of \[ \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M(Y)[1] , M_1(\mathop{\widetilde{X}} \nolimits) \bigr) \] for any smooth variety $Y$ over $k$. 
Hard Lefschetz \[ M_1(\mathop{\widetilde{X}} \nolimits) \cong M_3(\mathop{\widetilde{X}} \nolimits)(-1)[-2] \] and duality in $\mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}$ imply that this group is isomorphic to \[ \mathop{\rm Hom}\nolimits_{\mathop{DM_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(\mathop{\widetilde{X}} \nolimits) \otimes M(Y)(-1)[-1] , {\mathbb{Z}}(0) \bigr) \; , \] which equals the direct factor $\mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(\mathop{\widetilde{X}} \nolimits) \otimes M(Y) , {\mathbb{Z}}(1)[1] \bigr)$ of \[ \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M(\mathop{\widetilde{X}} \nolimits \times_k Y) , {\mathbb{Z}}(1)[1] \bigr) \; . \] According to \cite[Cor.~3.4.3]{VSF}, for any smooth variety $W$ over $k$, the group $\mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M(W) , {\mathbb{Z}}(1)[1] \bigr)$ equals the group of global sections $\Gamma(W, {\mathbb{G}}_m)$, tensored with ${\mathbb{Q}}$. The inclusion of $\mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_0(\mathop{\widetilde{X}} \nolimits) \otimes M(Y) , {\mathbb{Z}}(1)[1] \bigr)$ into $\mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M(\mathop{\widetilde{X}} \nolimits \times_k Y) , {\mathbb{Z}}(1)[1] \bigr)$ is therefore an isomorphism (recall that $\mathop{\widetilde{X}} \nolimits$ is proper). \end{Proof} Putting everything together, we thus get the following result. \begin{Thm} \label{Main} Assume that all geometric irreducible components of $D$ are of genus zero. Then there is a canonical morphism \[ M_1(D) \stackrel{\tilde{\imath}_*}{\longrightarrow} M^{!*}_{\ge 2}(\overline{X}) \ontoover{\ } M_2^{!*}(\overline{X}) \; . 
\] \end{Thm} It will be convenient to interpret this morphism as a one-extension ${\mathbb{E}}$ in $\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}$ of the Artin motive $M_1(D)[-1]$ by $M^{!*}_2(\overline{X})[-2]$. \begin{Rem} \label{6E} (a)~Remark~\ref{6B} shows where to look for a natural candidate for the cone of ${\mathbb{E}}: M_1(D) \to M^{!*}_2(\overline{X})$: it should be a canonical sub-quotient of the motive with compact support $M^c(X)$. \\[0.1cm] (b)~Note that the object $M_1(D)$ is trivial (and hence so is ${\mathbb{E}}$) if $\mathop{X^*} \nolimits$ is smooth. \\[0.1cm] (c)~Without the assumption on the genus of the geometric irreducible components of $D$, we still get morphisms \[ M_1(D) \longrightarrow M_2^{!*}(\overline{X}) \; , \] by composing $\tilde{\imath}_*: M_1(D) \to M^{!*}_{\ge 1}(\overline{X})$ with projections $p_2$ from $M^{!*}_{\ge 1}(\overline{X})$ to its direct factor $M_2^{!*}(\overline{X})$. In special cases, the dependence on the choice of the projection $p_2$ may be controlled. \end{Rem} \bigskip \section{Motivic interpretation of a construction of A.~Caspar} \label{7} We keep the geometric situation studied in the previous section: $\overline{X}$ is a proper surface over our perfect base field $k$, and \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] \ar@{=}[d] & \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r] \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r] & Z \\}} \] is a cartesian diagram which is a desingularization of the normalization $\mathop{X^*} \nolimits$ of $\overline{X}$, meaning that $\pi$ is proper, $\mathop{\widetilde{X}} \nolimits$ is smooth, $Z$ is finite, and $D$ a divisor with normal crossings, whose irreducible components $D_m$ are smooth projective curves. Let us start by proving the following result (cmp.\ \cite[Lemma~1.1]{Cs}). 
\begin{Lem} \label{7A} Denote by $\mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)'$ the group of line bundles on $\mathop{\widetilde{X}} \nolimits$, whose restrictions to all $D_m$ are trivial. Assume that all geometric irreducible components of $D$ are of genus zero. Then the map $\tilde{\jmath}^* : \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)' \to \mathop{\rm Pic}\nolimits(X)$ induces an isomorphism \[ \tilde{\jmath}^* \otimes {\mathbb{Q}}: \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)' \otimes_{\mathbb{Z}} {\mathbb{Q}} \arrover{\sim} \mathop{\rm Pic}\nolimits(X) \otimes_{\mathbb{Z}} {\mathbb{Q}} \; . \] \end{Lem} \begin{Proof} We may assume that our (perfect) base field $k$ is algebraically closed. Any element in the kernel of $\tilde{\jmath}^*: \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits) \to \mathop{\rm Pic}\nolimits(X)$ is represented by a linear combination $\sum_m a_m D_m$ of the $D_m$. If the class of $\sum_m a_m D_m$ belongs to $\mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)'$, then its intersection numbers with all $D_m$ must be zero. Thus the vector $(a_m)_m$ is in the kernel of the intersection matrix, which is invertible (in $\mathop{\rm GL}\nolimits_r ({\mathbb{Q}})$) since the intersection pairing on the $D_m$ is non-degenerate \cite[p.~6]{M}. Hence $(a_m)_m$ is zero. For the surjectivity of $\tilde{\jmath}^* \otimes {\mathbb{Q}}$, observe that $\tilde{\jmath}^*: \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits) \to \mathop{\rm Pic}\nolimits(X)$ is surjective. The non-degeneracy of the intersection matrix shows that any divisor $C$ on $\mathop{\widetilde{X}} \nolimits$ can be modified by a rational linear combination of the $D_m$ such that the difference $C'$ has trivial intersection numbers with all the $D_m$. Since these are supposed to be of genus zero, the restriction of $C'$ to all $D_m$ is principal. 
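Explicitly, the modification in question is $C' = C - \sum_m b_m D_m$, where the coefficients are computed from the intersection matrix $Q := \bigl( (D_m \cdot D_n) \bigr)_{m,n}$ by \[ (b_m)_m := Q^{-1} \bigl( (C \cdot D_n) \bigr)_n \in {\mathbb{Q}}^r \; . \]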
\end{Proof} \begin{Prop} \label{7a} Assume that all geometric irreducible components of $D$ are of genus zero. There is a canonical morphism of vector spaces \[ \mathop{\rm Pic}\nolimits(X) \otimes_{\mathbb{Z}} {\mathbb{Q}} \longrightarrow \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(D)[-1] , M^{!*}_0(\overline{X})(1) \bigr) \; . \] Here, $\mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}}(\argdot,\argast)$ denotes $\mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}}(\argdot,\argast[1])$. \end{Prop} \begin{Proof} As before, denote by $\mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)'$ the group of line bundles on $\mathop{\widetilde{X}} \nolimits$, whose restrictions to all $D_m$ are trivial. Define a morphism \[ \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)' \longrightarrow \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(D)[-1] , M^{!*}_0(\overline{X})(1) \bigr) \] by mapping the class of ${\cal L} \in \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)'$ to the image of \[ {\mathbb{E}} \in \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(D)[-1] , M^{!*}_2(\overline{X})[-2] \bigr) \] (Theorem~\ref{Main}) under $R(c_{{\cal L}}) : M^{!*}_2(\overline{X})[-2] \to M^{!*}_0(\overline{X})(1)$ (Variant~\ref{4A'}~(iii)). Now use Lemma~\ref{7A}. \end{Proof} Given a sub-scheme $Z_\infty$ of the finite scheme $Z$, we may consider the pre-image $D_\infty \subset D$ of $Z_\infty$ under $\pi$, and define $M_1(D_\infty)$ as before. It is a direct factor of $M_1(D)$, with a canonical complement. \begin{Cor} \label{7b} Assume that all geometric irreducible components of $D$ are of genus zero. 
There is a canonical morphism of vector spaces \[ \mathop{\rm Pic}\nolimits(X)\otimes_{\mathbb{Z}} {\mathbb{Q}} \longrightarrow \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(D_\infty)[-1] , M^{!*}_0(\overline{X})(1) \bigr) \; . \] \end{Cor} \begin{Ex} \label{7c} Here, our base field is equal to ${\mathbb{Q}}$. Let us recall the geo\-metric setting studied in \cite{Cs}. Let $F$ be a real quadratic extension of ${\mathbb{Q}}$ with discriminant $d$. Assume that the class number in the narrow sense of $F$ equals one. Let $X'$ be the \emph{Hilbert modular surface} of full level associated to $F$ \cite[Sect.~X.4]{vdG}. Denote by $X^*$ its \emph{Baily--Borel compactification}, and by $X$ the smooth part of $X'$. All these surfaces are normal and geometrically connected. The complement $X^* - X'$ consists of one ${\mathbb{Q}}$-rational point, denoted $\infty$ (the \emph{cusp} of $X^*$). The finite sub-scheme $Z := (X^* - X)_{\mathop{{\rm red}}\nolimits}$ includes the cusp, but also the singularities of $X'$. There is a canonical desingularization \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] \ar@{=}[d] & \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r] \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r] & Z \\}} \] Here, $\mathop{\widetilde{X}} \nolimits$ is smooth, and $D$ a divisor with normal crossings, whose irreducible components are smooth. Furthermore, all geometric irreducible components of $D$ are of genus zero. The irreducible components of the pre-image $D_\infty \subset D$ of $\infty$ under $\pi$ are isomorphic to ${\mathbb{P}}^1_{{\mathbb{Q}}}$, and form a polygon: for the complex surface underlying $\mathop{\widetilde{X}} \nolimits$, this is due to Hirzebruch \cite[Chap.~II]{vdG}; that the statement holds over ${\mathbb{Q}}$ follows from \cite[Sect.~5]{R}. 
\\[0.1cm] (1)~We claim that the Artin motive $M_1(D_\infty)[-1]$ is canonically isomorphic to $H_1(D_\infty({\mathbb{C}}),{\mathbb{Z}}) \otimes_{\mathbb{Z}} {\mathbb{Z}}(0)$. (Either of the two orientations of the polygon $D_\infty$ will thus induce an isomorphism from $M_1(D_\infty)[-1]$ to ${\mathbb{Z}}(0)$.) Indeed, by the same reasoning as in Proposition~\ref{6D}, the Artin motive $M_1(D_\infty)[-1]$ equals the kernel of \[ \bigoplus_{n < m} M(D_n \cap D_m) \longrightarrow \bigoplus_m M_0(D_m) \; , \] where $D_m$ are the components of $D_\infty$. Since $D_\infty$ is a polygon, all $M_0(D_m)$ are equal to ${\mathbb{Z}}(0)$, while the $M_1(D_m)$ are zero. The $M(D_n \cap D_m)$ are equal to ${\mathbb{Z}}(0)$ for consecutive indices $n,m$. Hence the kernel in question equals the tensor product of the motive ${\mathbb{Z}}(0)$ with the kernel of the morphism \[ \bigoplus_{n < m} H_0 \bigl( (D_n \cap D_m)({\mathbb{C}}),{\mathbb{Z}} \bigr) \longrightarrow \bigoplus_m H_0 \bigl( D_m({\mathbb{C}}),{\mathbb{Z}} \bigr) \] of homology groups. \\[0.1cm] (2)~The variety $\mathop{\widetilde{X}} \nolimits$ being geometrically connected, we have \[ M^{!*}_0(\overline{X}) = M_0(\mathop{\widetilde{X}} \nolimits) = {\mathbb{Z}}(0) \; . \] Corollary~\ref{7b} thus yields the following. \\[0.1cm] (3)~Let $k$ be an extension of ${\mathbb{Q}}$. Denote by $X_k$ the base change of $X$ to $k$. Then there is a canonical morphism $cl_{\mathop{{\rm KCE}}\nolimits}$ mapping $\mathop{\rm Pic}\nolimits(X_k)\otimes_{\mathbb{Z}} {\mathbb{Q}}$ to \[ \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( H_1(D_\infty({\mathbb{C}}),{\mathbb{Z}}) \otimes_{\mathbb{Z}} {\mathbb{Z}}(0) , {\mathbb{Z}}(1) \bigr) = H^1 \bigl( D_\infty({\mathbb{C}}),k^* \bigr) \otimes_{\mathbb{Z}} {\mathbb{Q}} \; . 
\] Either of the two orientations of the polygon $D_\infty$ thus induces a morphism \[ cl_{\mathop{{\rm KCE}}\nolimits}: \mathop{\rm Pic}\nolimits(X_k)\otimes_{\mathbb{Z}} {\mathbb{Q}} \longrightarrow k^* \otimes_{\mathbb{Z}} {\mathbb{Q}} \; . \] Indeed, the only point to be verified is the equality \[ \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( {\mathbb{Z}}(0) , {\mathbb{Z}}(1) \bigr) = k^* \otimes_{\mathbb{Z}} {\mathbb{Q}} \; . \] But this is the content of \cite[Cor.~3.4.3]{VSF}. \\[0.1cm] (4)~Following the terminology of \cite{Cs}, the image of the class of a line bundle ${\cal L}$ under $cl_{\mathop{{\rm KCE}}\nolimits}$ will be called the \emph{Kummer--Chern--Eisenstein extension} associated to ${\cal L}$. \\[0.1cm] (5)~Now consider the case $k = F = {\mathbb{Q}} (\sqrt{d})$. Let $\sigma_1,\sigma_2$ be the (real) embeddings of $F$ into ${\mathbb{C}}$. We consider the two line bundles ${\cal L}_i$ on $X_F$, $i = 1,2$, characterized by their factors of automorphy ``$(\gamma \tau_i + \delta)^2$'' over ${\mathbb{C}}$. We propose to identify their images under the map $cl_{\mathop{{\rm KCE}}\nolimits}$ from (3). To do so, fix an orientation of $D_\infty$. Denote by $\varepsilon \in {\cal O}^*_F$ the generator of the totally posi\-tive units. We shall show (Example~\ref{final}): if $d$ is a prime congruent to $1$ modulo $4$, then \[ cl_{KCE} ({\cal L}_1 \otimes {\cal L}_2) \quad \text{is trivial} \quad \text{and} \quad cl_{KCE} ({\cal L}_1) = \varepsilon^{\pm 1} \in F^* \otimes_{\mathbb{Z}} {\mathbb{Q}} \; . \] (The ambiguity concerning the sign in the exponent comes from the choice of the orientation.) \\[0.1cm] (6)~This claim implies in particular that the realizations of the Kummer--Chern--Eisenstein extensions $cl_{KCE} ({\cal L}_1)$ and $cl_{KCE} ({\cal L}_2)$ can be identified. 
For the $\ell$-adic and Hodge--de Rham realization, this identification is the content of Caspar's main results \cite[Thm.~2.5, Thm.~3.4]{Cs}. Our claim is compatible with [loc.$\;$cit.]. Note that it also implies that the extension \[ {\mathbb{E}} \in \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(\BQ)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(D)[-1] , M^{!*}_2(X^*)[-2] \bigr) \] from Theorem~\ref{Main} is non-trivial in the present geometric situation. \end{Ex} In order to prove the claim made in Example~\ref{7c}~(5), let us come back to the more general situation \[ \vcenter{\xymatrix@R-10pt{ X \ar@{^{ (}->}[r] \ar@{=}[d] & \mathop{\widetilde{X}} \nolimits \ar@{<-^{ )}}[r]^{\tilde{\imath}} \ar[d]_\pi & D \ar[d]^\pi \\ X \ar@{^{ (}->}[r] & \mathop{X^*} \nolimits \ar@{<-^{ )}}[r] & Z \\}} \] considered in the beginning of this section. In particular, the irreducible components $D_m$ of $D$ are supposed smooth (and projective), but not ne\-cessarily of genus zero. We need to generalize the construction of the cup product with the first Chern class of a line bundle. Recall that for a smooth and projective variety $Y$, the vector space $CH^1(Y) = \mathop{\rm Pic}\nolimits(Y) \otimes_{\mathbb{Z}} {\mathbb{Q}}$ equals \[ \mathop{\rm Hom}\nolimits_{CHM(k)_{{\mathbb{Q}}}} \bigl( {\mathbb{L}} , h(Y) \bigr) = \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M(Y) , {\mathbb{Z}}(1)[2] \bigr) \; . \] In fact, Voevodsky \cite[Cor.~3.4.3]{VSF} proved the following result. \begin{Thm} \label{7d} Let $Y \in Sm/k$. For any $j \in {\mathbb{Z}}$, there is a canoni\-cal isomorphism \[ H_{Zar}^{j-1}(Y,{\mathbb{G}}_m) \arrover{\sim} \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(Y) , {\mathbb{Z}}(1)[j] \bigr) \; , \] which is contravariantly functorial in $Y$. 
\end{Thm} In particular, we then have $\mathop{\rm Pic}\nolimits(Y) = \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(Y) , {\mathbb{Z}}(1)[2] \bigr)$. It follows from the construction of [loc.$\;$cit.] \ that for $Y$ smooth and projective, the tensor product of this isomorphism with ${\mathbb{Q}}$ is the one we used before to produce morphisms ${\mathbb{L}} \to h(Y)$ of Chow motives. Analyzing more closely the ingredients of Voevodsky's proof, we are able to show the following. \begin{Prop} \label{7e} (i)~There is a canonical isomorphism \[ \mathop{\rm Pic}\nolimits(D) \arrover{\sim} \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D) , {\mathbb{Z}}(1)[2] \bigr) \; . \] (ii)~The diagram \[ \vcenter{\xymatrix@R-10pt{ \mathop{\rm Pic}\nolimits(D) \ar[r]^-{\cong} & \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D) , {\mathbb{Z}}(1)[2] \bigr) \\ \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits) \ar[r]^-{\cong} \ar[u]^{{\tilde{\imath}}^*} & \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(\mathop{\widetilde{X}} \nolimits) , {\mathbb{Z}}(1)[2] \bigr) \ar[u]_{{\tilde{\imath}}^*} \\}} \] commutes. \\[0.1cm] (iii)~Denote by $\tilde{\imath}_m$ the inclusion of $D_m$ into $D$. Then for all $m$, the diagram \[ \vcenter{\xymatrix@R-10pt{ \mathop{\rm Pic}\nolimits(D_m) \ar[r]^-{\cong} & \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D_m) , {\mathbb{Z}}(1)[2] \bigr) \\ \mathop{\rm Pic}\nolimits(D) \ar[r]^-{\cong} \ar[u]^{{\tilde{\imath}_m}^*} & \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D) , {\mathbb{Z}}(1)[2] \bigr) \ar[u]_{{\tilde{\imath}_m}^*} \\}} \] commutes. 
\end{Prop} \begin{Proof} Recall (see the introduction to Section~\ref{5}) that $M = \mathop{{\bf R} C}\nolimits \circ L$, and that $\mathop{{\bf R} C}\nolimits: D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits) \to \mathop{DM^{eff}_-(k)}\nolimits$ is left adjoint to the inclusion of $\mathop{DM^{eff}_-(k)}\nolimits$ into $D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)$. It follows that for any Nisnevich sheaf with transfers $G$, any integer $r$, and any $Y \in Sch/k$, we have \[ \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_-(k)}\nolimits} \bigl( M(Y) , G[r] \bigr) = \mathop{\rm Hom}\nolimits_{D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)} \bigl( L(Y) , G[r] \bigr) \; . \] Note that if $Y$ is smooth, then $L(Y)$ is the Nisnevich sheaf with transfers represented by $Y$, hence by Yoneda's Lemma, \[ \mathop{\rm Hom}\nolimits_{\mathop{Shv_{Nis}(SmCor(k))}\nolimits} \bigl( L(Y) , G \bigr) = \Gamma(Y,G) \; . \] By definition of $L$, the sequence \[ 0 \longrightarrow \bigoplus_{n < m} L(D_n \cap D_m) \longrightarrow \bigoplus_n L(D_n) \longrightarrow L(D) \longrightarrow 0 \] is exact (even as a sequence of presheaves --- recall that the $D_n$ are the irreducible components of $D$). This shows that \[ \mathop{\rm Hom}\nolimits_{\mathop{Shv_{Nis}(SmCor(k))}\nolimits} \bigl( L(D) , G \bigr) = \ker \bigl( \prod_n \Gamma(D_n,G) \longrightarrow \prod_{n < m} \Gamma(D_n \cap D_m,G) \bigr) \; . \] For any open subset $U$ of $D$, the formula \[ \Gamma(U,{\mathfrak{H}}^0(G)) := \ker \bigl( \prod_n \Gamma(D_n \cap U,G) \longrightarrow \prod_{n < m} \Gamma(D_n \cap D_m \cap U,G) \bigr) \] \emph{defines} a functor on $\mathop{Shv_{Nis}(SmCor(k))}\nolimits$. Letting $U$ vary, we get a left exact functor \[ {\mathfrak{H}}^0 : \mathop{Shv_{Nis}(SmCor(k))}\nolimits \longrightarrow \mathop{Shv_{Zar}(D)}\nolimits \; , \] where we denote by $\mathop{Shv_{Zar}(D)}\nolimits$ the category of Zariski sheaves with values in Abelian groups on the topological space underlying $D$. 
We claim that there are natural morphisms \[ H^r_{Zar} \bigl( D,{\mathfrak{H}}^0(G) \bigr) \longrightarrow \mathop{\rm Hom}\nolimits_{D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)} \bigl( L(D) , G[r] \bigr) \] for any Nisnevich sheaf with transfers $G$. Observe that by what was said before, there is a natural isomorphism for $r = 0$. The morphisms in question will be defined as the boundaries in a spectral sequence \[ H^p_{Zar} \bigl( D,R^q ( {\mathfrak{H}}^0 )(G) \bigr) \Longrightarrow \mathop{\rm Hom}\nolimits_{D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)} \bigl( L(D) , G[p+q] \bigr) \] which we shall construct now. The category $\mathop{Shv_{Nis}(SmCor(k))}\nolimits$ has sufficiently many injectives \cite[Lemma~3.1.7]{VSF}. Hence the existence of the spectral sequence is equivalent to \[ \quad H^r_{Zar} \bigl( D,{\mathfrak{H}}^0(I) \bigr) = 0 \; , \; r \ge 1 \; , \] for any injective $I \in \mathop{Shv_{Nis}(SmCor(k))}\nolimits$. The proof of this vanishing is a faithful imitation of the proof of \cite[Prop.~3.1.8]{VSF}; note that the vital ingredient of [loc.$\;$cit.] \ is \cite[Prop.~3.1.3]{VSF}, which is valid without any smoothness assumptions. Let us now specialize to the case $G = {\mathbb{G}}_m$ and $r = 1$. For two indices $n < m$, denote by $\tilde{\imath}_{n,m}$ the inclusion of $D_n \cap D_m$ into $D$. The short exact sequence of Zariski sheaves on $D$ \[ (\ast) \quad \quad 1 \longrightarrow {\mathbb{G}}_{m,D} \longrightarrow \prod_n \tilde{\imath}_{n,*} {\mathbb{G}}_{m,D_n} \longrightarrow \prod_{n < m} \tilde{\imath}_{n,m,*} {\mathbb{G}}_{m,D_n \cap D_m} \longrightarrow 1 \] shows that ${\mathbb{G}}_{m,D} = {\mathfrak{H}}^0({\mathbb{G}}_m)$. Hence the above construction yields \[ \mathop{\rm Pic}\nolimits(D) = H^1_{Zar} \bigl( D,{\mathbb{G}}_m \bigr) \longrightarrow \mathop{\rm Hom}\nolimits_{D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)} \bigl( L(D) , {\mathbb{G}}_m[1] \bigr) \; . 
\] But by \cite[Thm.~3.4.2]{VSF}, there is a canonical isomorphism ${\mathbb{Z}}(1)[1] \cong {\mathbb{G}}_m$ in $\mathop{DM^{eff}_-(k)}\nolimits \subset D^-(\mathop{Shv_{Nis}(SmCor(k))}\nolimits)$. Altogether, we get the required morphism \[ \mathop{\rm Pic}\nolimits(D) \longrightarrow \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D) , {\mathbb{Z}}(1)[2] \bigr) \; . \] By construction, it is compatible with the isomorphisms from Theorem~\ref{7d} (for $j = 2$) under morphisms of schemes $Y \to D$ and $D \to Y$, for $Y \in Sm/k$. It remains to show that $\mathop{\rm Pic}\nolimits(D) \to \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} ( M(D) , {\mathbb{Z}}(1)[2])$ is in fact an isomorphism. But this follows easily from the Five Lemma, from the long exact Zariski cohomology sequence induced by $(\ast)$, from the long exact $\mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} ( \bullet , {\mathbb{Z}}(1)[1])$-sequence induced by the exact triangle \[ \bigoplus_{n < m} M(D_n \cap D_m) \longrightarrow \bigoplus_n M(D_n) \longrightarrow M(D) \longrightarrow \bigoplus_{n < m} M(D_n \cap D_m)[1] \; , \] and from Theorem~\ref{7d}. \end{Proof} \forget{ The short exact sequence induces a long exact cohomology sequence, which shows that $\mathop{\rm Pic}\nolimits(D)'$ equals the cokernel of the map \[ \prod_m H^0 \bigl( D_m , {\cal O}_{D_m}^* \bigr) \longrightarrow \prod_{n < m} H^0 \bigl( D_{n,m} , {\cal O}_{D_{n,m}}^* \bigr) \; . 
\] The exact triangle induces an exact $\mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} ( \bullet , {\mathbb{Z}}(1)[1] )$-triangle, which shows that $\mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D) , {\mathbb{Z}}(1)[2] \bigr)'$ equals the cokernel of the map \[ \prod_m \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D_m) , {\mathbb{Z}}(1)[1] \bigr) \longrightarrow \prod_{n < m} \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D_{n,m}) , {\mathbb{Z}}(1)[1] \bigr) \; . \] Now use Theorem~\ref{7d}. } \begin{Rem} We leave it to the reader to prove that the conclusions of Proposition~\ref{7e} are in fact true whenever $D$ is a normal crossing divisor in $\mathop{\widetilde{X}} \nolimits \in Sm / k$, with smooth irreducible components $D_m$. \end{Rem} \forget{ \begin{Cor} \label{7f} Denote by $\mathop{\rm Pic}\nolimits(D)'$ and $\mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)'$ the groups of classes of line bundles on $D$ and $\mathop{\widetilde{X}} \nolimits$ respectively, whose restrictions to all $D_m$ are trivial. Similarly, denote by \[ \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D) , {\mathbb{Z}}(1)[2] \bigr)' \quad \text{and} \quad \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(\mathop{\widetilde{X}} \nolimits) , {\mathbb{Z}}(1)[2] \bigr)' \] the groups of morphisms $M(D) \to {\mathbb{Z}}(1)[2]$ and $M(\mathop{\widetilde{X}} \nolimits) \to {\mathbb{Z}}(1)[2]$ respectively, whose restrictions to all $M(D_m)$ are trivial. \\[0.1cm] (i)~There is a canonical isomorphism \[ \mathop{\rm Pic}\nolimits(D)' \arrover{\sim} \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D) , {\mathbb{Z}}(1)[2] \bigr)' \; . 
\] (ii)~The diagram \[ \vcenter{\xymatrix@R-10pt{ \mathop{\rm Pic}\nolimits(D)' \ar[r]^-{\cong} & \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(D) , {\mathbb{Z}}(1)[2] \bigr)' \\ \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)' \ar[r]^-{\cong} \ar[u]^{{\tilde{\imath}}^*} & \mathop{\rm Hom}\nolimits_{\mathop{DM^{eff}_{gm}(k)}\nolimits} \bigl( M(\mathop{\widetilde{X}} \nolimits) , {\mathbb{Z}}(1)[2] \bigr)' \ar[u]_{{\tilde{\imath}}^*} \\}} \] commutes. \end{Cor} } For any line bundle ${\cal K}$ on $D$, we can now define a morphism \[ R(c_{\cal K}) : M(D) \longrightarrow M(D)(1)[2] \] in complete analogy to the smooth projective case, namely as the composition \[ M(D) \stackrel{\Delta_*}{\longrightarrow} M(D) \otimes M(D) \stackrel{{\rm id}_{D,*} \otimes [{\cal K}]}{\longrightarrow} M(D)(1)[2] \] ($\Delta :=$ the diagonal embedding $D \hookrightarrow D \times_k D$). \begin{Cor} \label{7g} (i)~Let ${\cal L}$ be a line bundle on $\mathop{\widetilde{X}} \nolimits$. Then the diagram \[ \vcenter{\xymatrix@R-10pt{ M(D) \ar[r]^-{R(c_{\tilde{\imath}^* \! {\cal L}})} \ar[d]_{\tilde{\imath}_*} & M(D)(1)[2] \ar[d]^{\tilde{\imath}_*(1)[2]} \\ M(\mathop{\widetilde{X}} \nolimits) \ar[r]^-{R(c_{{\cal L}})} & M(\mathop{\widetilde{X}} \nolimits)(1)[2] \\}} \] commutes. \\[0.1cm] (ii)~Let ${\cal K}$ be a line bundle on $D$. Then for all $m$, the diagram \[ \vcenter{\xymatrix@R-10pt{ M(D_m) \ar[r]^-{R(c_{\tilde{\imath}_m^* \! {\cal K}})} \ar[d]_{\tilde{\imath}_{m,*}} & M(D_m)(1)[2] \ar[d]^{\tilde{\imath}_{m,*}(1)[2]} \\ M(D) \ar[r]^-{R(c_{{\cal K}})} & M(D)(1)[2] \\}} \] commutes. \end{Cor} \begin{Cor} \label{7h} Let ${\cal K}$ be a line bundle on $D$, whose restrictions to all $D_m$ are trivial. Then $R(c_{{\cal K}}):M(D) \to M(D)(1)[2]$ factors uniquely through a morphism $R(c_{{\cal K}}): M_{\le 1}(D) \to M(D)(1)[2]$. \end{Cor} \begin{Proof} Recall that $M_{\le 1}(D)$ is the categorial quotient of $M(D)$ by $M_2(D)$. 
Our claim thus follows from Corollary~\ref{7g}~(ii), Proposition~\ref{7e}~(iii) and the equation $M_2(D) = \oplus_m M_2(D_m)$. \end{Proof} Composition with the monomorphism $M_1(D) \hookrightarrow M_{\le 1}(D)$ and the epimorphism $M(D)(1)[2] \ontoover{\ } M_0(D)(1)[2]$ thus yields a map \[ cl_D: \mathop{\rm Pic}\nolimits(D)' \otimes_{{\mathbb{Z}}} {\mathbb{Q}} \longrightarrow \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(D)[-1] , M_0(D)(1) \bigr) \; . \] \begin{Prop} \label{7i} Assume that all geometric irreducible components of $D$ are of genus zero. Then the morphism \[ cl_X: \mathop{\rm Pic}\nolimits(X) \otimes_{\mathbb{Z}} {\mathbb{Q}} \longrightarrow \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(D)[-1] , M^{!*}_0(\overline{X})(1) \bigr) \] of Proposition~\ref{7a} factors canonically through $cl_D$. More precisely, the diagram \[ \vcenter{\xymatrix@R-10pt{ \mathop{\rm Pic}\nolimits(D)' \otimes_{\mathbb{Z}} {\mathbb{Q}} \ar[rr]^-{cl_D} && \mathop{\rm Ext}\nolimits^1 \bigl( M_1(D)[-1] , M_0(D)(1) \bigr) \ar[d]^{{\tilde{\imath}}_*} \\ \mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)' \otimes_{\mathbb{Z}} {\mathbb{Q}} \ar[r]^-{\cong}_-{\ref{7A}} \ar[u]^{{\tilde{\imath}}^*} & \mathop{\rm Pic}\nolimits(X) \otimes_{\mathbb{Z}} {\mathbb{Q}} \ar[r]^-{cl_X} & \mathop{\rm Ext}\nolimits^1 \bigl( M_1(D)[-1] , M^{!*}_0(\overline{X})(1) \bigr) \\}} \] commutes, where we abbreviated $\mathop{\rm Ext}\nolimits^1 := \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}}$. \end{Prop} \begin{Proof} Let ${\cal L}$ be a line bundle on $X$. 
Recall that the morphism of Proposition~\ref{7a} maps the class of ${\cal L}$ to the image of \[ {\mathbb{E}} \in \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( M_1(D)[-1] , M^{!*}_2(\overline{X})[-2] \bigr) \] (Theorem~\ref{Main}) under $R(c_{{\cal L}}) : M^{!*}_2(\overline{X})[-2] \to M^{!*}_0(\overline{X})(1)$ (Variant~\ref{4A'}~(iii)), where by abuse of notation we denote by ${\cal L}$ also the unique extension of ${\cal L}$ to $\mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits)' \otimes_{\mathbb{Z}} {\mathbb{Q}}$ (Lemma~\ref{7A}). Our claim thus follows from Corollary~\ref{7g}~(i). \end{Proof} \begin{Ex} \label{final} Let us reconsider the situation from Example~\ref{7c}, and prove the claim made in \ref{7c}~(5). The polygon $D_\infty$ is geometrically connected, therefore $M_0(D_\infty) \to M^{!*}_0(\overline{X})$ is an isomorphism (both sides equal ${\mathbb{Z}}(0)$). By Proposition~\ref{7i}, the morphism \[ cl_{\mathop{{\rm KCE}}\nolimits}: \mathop{\rm Pic}\nolimits(X_k) \otimes_{\mathbb{Z}} {\mathbb{Q}} \longrightarrow H^1 \bigl( D_\infty({\mathbb{C}}),k^* \bigr) \otimes_{\mathbb{Z}} {\mathbb{Q}} \] factors through $cl_{D_\infty}$, where \[ cl_{D_\infty}: \mathop{\rm Pic}\nolimits(D_{\infty,k})' \otimes_{\mathbb{Z}} {\mathbb{Q}} \longrightarrow H^1 \bigl( D_\infty({\mathbb{C}}),k^* \bigr) \otimes_{\mathbb{Z}} {\mathbb{Q}} \; . \] Using the long exact Zariski cohomology sequence induced by \[ 1 \longrightarrow {\mathbb{G}}_{m,D_\infty} \longrightarrow \prod_n \tilde{\imath}_{n,*} {\mathbb{G}}_{m,D_n} \longrightarrow \prod_{n < m} \tilde{\imath}_{n,m,*} {\mathbb{G}}_{m,D_n \cap D_m} \longrightarrow 1 \] and the calculation of \ref{7c}~(1), one sees that $cl_{D_\infty}$ is in fact an isomorphism. 
Any of the two orientations of the polygon $D_\infty$ thus induces an isomorphism \[ cl_{D_\infty}: \mathop{\rm Pic}\nolimits(D_{\infty,k})' \otimes_{\mathbb{Z}} {\mathbb{Q}} \arrover{\sim} k^* \otimes_{\mathbb{Z}} {\mathbb{Q}} \; . \] Checking the definitions, we can identify $cl_{D_\infty}$: we fix a point $x_0 \in D_\infty(k)$. It lies on a component $D_{m_0}$. For any line bundle ${\cal K}$ on $D_{\infty,k}$ with trivial restrictions to all $D_{m,k}$, we fix an element $s$ in the fibre ${\cal K}_{x_0}$. The restriction $\Gamma(D_{m_0,k},{\cal K}) \to {\cal K}_{x_0}$ being an isomorphism, $s$ can be uniquely extended to the whole of $D_{m_0,k}$. We restrict this extension to the ($k$-rational) point $x_1$ which is the intersection of $D_{m_0}$ with the ``next'' component (in the sense of the chosen orientation). We repeat the process until we are again on $D_{m_0}$. Restriction to ${\cal K}_{x_0}$ gives a non-zero multiple $c \cdot s$, and we have $cl_{D_\infty}([{\cal K}]) = c$. In order to prove the claim made in \ref{7c}~(5), one needs to apply this recipe to the line bundles ${\cal K}_i$ obtained by restricting to $D_{\infty,F}$ the unique extensions of ${\cal L}_i$ to $\mathop{\rm Pic}\nolimits(\mathop{\widetilde{X}} \nolimits_F)' \otimes_{\mathbb{Z}} {\mathbb{Q}}$, $i = 1,2$. But this is exactly the content of \cite[Lemma~1.2]{Cs}. \end{Ex} \bigskip \forget{ The question posed in \cite[Sect.~1.4]{Cs} remains open. The ``classical'' geometric construction of the one-extension in \[ \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(k)}\nolimits_{{\mathbb{Q}}}} \bigl( {\mathbb{Z}}(0) , {\mathbb{Z}}(1) \bigr) \] mapping to $1 \ne x \in k^* \subset k^* \otimes_{\mathbb{Z}} {\mathbb{Q}}$ under the identification of \cite[Cor.~4.3.4]{VSF} is via a motivic interpretation of the first cohomology group of ${\mathbb{G}}_m$ relative to the two points $1$ and $x$. 
Is there a geometric link between this construction (for $x = \varepsilon^{\pm 2}$) and $cl_{KCE} ({\cal L}_1 \otimes {\cal L}_2^{-1})$~? } \forget{ Denote by $\chi$ the primitive Dirichlet cha\-racter modulo $d$, and by ${\mathbb{Z}}(\chi)$ the Artin motive over ${\mathbb{Q}}$ associated to $\chi$, i.e., the vector space ${\mathbb{Q}}$ on which the absolute Galois group of ${\mathbb{Q}}$ acts via $\chi$. Compatibility of $cl_{\mathop{{\rm KCE}}\nolimits}$ with the action of the Galois group implies the following. \begin{Cor} \label{7C} Denote by $\mathop{\rm Pic}\nolimits(X)^{\chi = -1} \subset \mathop{\rm Pic}\nolimits(X_F)$ the subgroup of line bundles in $\mathop{\rm Pic}\nolimits(X_F)$, on which the non-trivial automorphism of $F$ acts by $[{\cal L}] \mapsto [{\cal L}^{-1}]$. Denote by $(F^*)^{\chi = -1} \subset F^*$ the kernel of the norm. Then the restriction of $cl_{\mathop{{\rm KCE}}\nolimits}$ to $\mathop{\rm Pic}\nolimits(X)^{\chi = -1} \otimes_{\mathbb{Z}} {\mathbb{Q}}$ induces a morphism of vector spaces \[ cl_{\mathop{{\rm KCE}}\nolimits}: \mathop{\rm Pic}\nolimits(X)^{\chi = -1} \otimes_{\mathbb{Z}} {\mathbb{Q}} \longrightarrow \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(\BQ)}\nolimits_{{\mathbb{Q}}}} \bigl( {\mathbb{Z}}(\chi) , {\mathbb{Z}}(1) \bigr) \; , \] and any of the two orientations of the polygon $D_\infty$ induces a morphism \[ \mathop{\rm Pic}\nolimits(X)^{\chi = -1} \otimes_{\mathbb{Z}} {\mathbb{Q}} \longrightarrow (F^*)^{\chi = -1} \otimes_{\mathbb{Z}} {\mathbb{Q}} \; . \] \end{Cor} } \forget{ In order to do so, consider the \emph{$\ell$-adic realization}, for a fixed prime number $\ell$ \cite[Sect.~1.5]{DGo}. 
It is a triangulated covariant functor \[ r_\ell : \mathop{DM^{eff}_{gm}(\BQ)}\nolimits_{\mathbb{Q}} \longrightarrow D^- \bigl( Shv_{et} (\mathop{{\rm Spec}}\nolimits {\mathbb{Q}}), {\mathbb{Q}}_\ell \bigr) \] to the ``derived category'' of constructible ${\mathbb{Q}}_\ell$-sheaves on $\mathop{{\rm Spec}}\nolimits {\mathbb{Q}}$ \cite{E}. \begin{Prop} \label{7D} Denote by ${\mathbb{Q}}_\ell(\chi)$ the Artin motive ${\mathbb{Z}}(\chi)$, tensored with ${\mathbb{Q}}_\ell$, and by ${\mathbb{Q}}_\ell(1)$ the $\ell$-adic Tate twist. Let $(F^*)^\wedge$ be the $\ell$-adic completion of $F^*$. There is a canonical isomorphism \[ \mathop{\rm Ext}\nolimits^1_{( Shv_{et} (\mathop{{\rm Spec}}\nolimits {\mathbb{Q}}), {\mathbb{Q}}_\ell )} \bigl( {\mathbb{Q}}_\ell(\chi) , {\mathbb{Q}}_\ell(1) \bigr) \arrover{\sim} \bigl( (F^*)^\wedge \bigr)^{\chi = -1} \otimes_{\mathbb{Z}} {\mathbb{Q}} \; , \] fitting into a commutative diagram \[ \vcenter{\xymatrix@R-10pt{ \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(\BQ)}\nolimits_{{\mathbb{Q}}}} \bigl( {\mathbb{Z}}(\chi) , {\mathbb{Z}}(1) \bigr) \ar[r]^-{\sim} \ar[d]_{r_\ell} & (F^*)^{\chi = -1} \otimes_{\mathbb{Z}} {\mathbb{Q}} \ar[d] \\ \mathop{\rm Ext}\nolimits^1_{( Shv_{et} (\mathop{{\rm Spec}}\nolimits {\mathbb{Q}}), {\mathbb{Q}}_\ell )} \bigl( {\mathbb{Q}}_\ell(\chi) , {\mathbb{Q}}_\ell(1) \bigr) \ar[r]^-{\sim} & \bigl( (F^*)^\wedge \bigr)^{\chi = -1} \otimes_{\mathbb{Z}} {\mathbb{Q}} \\}} \] \end{Prop} The following appears plausible. 
\begin{Ass} \label{A} The functor $r_\ell$ maps the localization triangle \[ M(D) \longrightarrow M(\mathop{\widetilde{X}} \nolimits) \longrightarrow M^c(X) \longrightarrow M(D)[1] \] from \cite[Prop.~4.1.5]{VSF} to the dual of the localization triangle for constructible ${\mathbb{Q}}_\ell$-sheaves \[ R \Gamma_c (X,{\mathbb{Q}}_\ell) \longrightarrow R \Gamma (\mathop{\widetilde{X}} \nolimits,{\mathbb{Q}}_\ell) \longrightarrow R \Gamma (D,{\mathbb{Q}}_\ell) \longrightarrow R \Gamma_c (X,{\mathbb{Q}}_\ell)[1] \; . \] \end{Ass} Under this assumption, our construction shows (see Remark~\ref{6E}~(a)) that the image of $cl_{KCE} ({\cal L}_1 \otimes {\cal L}_2^{-1})$ under $r_\ell$ is the extension of Galois modules constructed in \cite[Section~1.2]{Cs}. One of the main results of [loc.$\;$cit.] \ describes this extension. \begin{Thm} \label{7E} Assume that $d$ is a prime congruent to $1$ modulo $4$. Denote by $\varepsilon \in {\cal O}^*_F$ the generator of the totally posi\-tive units, and by $\zeta_F$ the Dedekind zeta function of $F$. Under Assumption~\ref{A}, the composition of $r_\ell$ and the isomorphism from Proposition~\ref{7D} maps \[ cl_{KCE} ({\cal L}_1 \otimes {\cal L}_2^{-1}) \in \mathop{\rm Ext}\nolimits^1_{\mathop{DM^{eff}_{gm}(\BQ)}\nolimits_{{\mathbb{Q}}}} \bigl( {\mathbb{Z}}(\chi) , {\mathbb{Z}}(1) \bigr) \] to $\varepsilon^{\pm 2} \in \bigl( (F^*)^\wedge \bigr)^{\chi = -1} \otimes_{\mathbb{Z}} {\mathbb{Q}}$. \end{Thm} \begin{Proof} This is \cite[Thm.~2.5]{Cs}. The ambiguity concerning the sign in the exponent comes from the fact that we have made no distinguished choice of an embedding of $F$ into ${\mathbb{R}}$ (hence the r\^oles of ${\cal L}_1$ and ${\cal L}_2$ are symmetric). \end{Proof} This result allows us to identify $cl_{KCE} ({\cal L}_1 \otimes {\cal L}_2^{-1})$ itself. }
The burning of Palestinian toddler: not an exception, but a result of Zionism

Palestinian toddler Ali Dawabsha burned to death in an arson attack by Israeli settlers

Overnight on Friday, 31 July, a group of masked Jewish settlers threw firebombs through a window of the Dawabsha family house in Kufr Douma, near Nablus. They fell in the bedroom where the whole family had been sleeping peacefully, setting the house on fire. The arsonists left graffiti, reading "revenge" and "long live the Messiah", alongside a Star of David on the walls as their footnotes to this atrocious attack. They then fled, according to local witnesses, to the illegal settlement of Ma'aleh Ephraim, where approximately 1,800 armed settlers live under the security of the Israeli occupation forces. 18-month-old Ali Dawabsha was found as a charred body. The rest of the family, Ali's parents and his four-year-old brother, survived the fire with critical injuries. The aftermath inside the house is horrifying: utter destruction and black walls, burnt clothes and photos of the family laid on the ground, among them Ali's smiling photos and his tiny white bib reading "Good morning Mama". This Israeli attack is another crime in the never-ending Nakba the Palestinian people have endured since Zionism's inception. Ali is another Mohammed Abu Khudeir, who was burnt alive by a group of settlers in Jerusalem on 2 July 2014. He is another Palestinian child falling prey to the Israeli murder machine, as Palestinians commemorate the first anniversary of Israel's 51-day offensive on Gaza, which it called 'Operation Protective Edge', during the summer of 2014. Over 2,200 people were brutally killed, mostly civilians, including 551 children. This morning Israeli leaders rushed to feign humanity and condemn the arson, calling it a "terror attack".
The Times of Israel reported that Netanyahu expressed his "shock" at what he called a "horrific, heinous act", before saying, "The State of Israel deals forcefully with terror, regardless of who the perpetrators are." It also reported that Netanyahu's remarks were echoed by Defense Minister Moshe Ya'alon and the Israeli Defense Forces. At the same time, heavily armed Israeli forces spread across the West Bank to employ collective-punishment policies against Palestinians and prevent any rage from being expressed. As I write this, several injuries to Palestinians were reported after Ali Dawabsha's funeral. Update at 11 pm: One of the injured people, 14-year-old Laith al-Khaldi, just passed away.

As a Palestinian who is well-informed about the history of bloodshed and dispossession inflicted on Palestinians who collectively bear the trauma of our encounter with Zionism, and one who carries the memories of many brutal Israeli attacks on Gaza, this claimed "shock" didn't hit me. It rather outraged me at Israel's crocodile tears and pretentious humanitarianism, despite its brutal military occupation of the West Bank, the continued expansion of its illegal settlements, the suffocating siege of the Gaza Strip that remains in ruins after Israel's genocidal war last summer, and its ongoing assertion of itself as a "Jewish state", not a state for its citizens, as it discriminates against 1948 Palestinian citizens of Israel, or what its leaders call a "potential fifth column". The world should not look at today's appalling incident as a singular event. It is another link in the Zionist settler-colonial mentality which always sees Palestinians as an existential threat, dehumanises us and constantly views us as inferior and marginal. Israel cannot absolve itself of responsibility for these settlers' acts, nor pretend they don't represent its own warped morality.
Israel is the one to blame, not only because it encourages illegal settlements to expand, arms settlers with advanced weapons and further protects them with its "defence" forces, but also because these actions are an extension of the longstanding Zionist enterprise that, as much as it sought to dehumanise Palestinians, in return dehumanised Israeli society. This is evident in the Israeli cultural discourse, which celebrates Israel and portrays it as "heroic", while ignoring the political and humanitarian costs "others" endure due to its "successes". The persistent portrayal of Jews as "victims," facing "hostile" and "terrorist" Palestinians, also feeds this mentality. Even Israeli children's books are exploited to demonise Palestinians and portray Jews as victims against terrorist "Arabs". Today's attack cannot be decontextualized. It is deeply connected to Israel's celebrated "War of Independence," which declared Israel as a Jewish state after a systematic process of ethnic cleansing that ranged from massacres, like that of Deir Yassin, to psychological violence, and made almost a million Palestinians refugees. These acts of terror reproduce the same mentality that led to the Kafr Kassim massacre of 1956, whose perpetrators were pardoned and freed after a year. An Israeli border police unit, for no reason whatsoever, opened fire at Palestinians returning from their farms, unaware of the new military curfew imposed on their village. The gunfire killed 49, almost half of them children. It is also the same mentality that led to the second mass expulsion of Palestinians in 1967. According to an Israeli soldier whose testimony appeared in Haolam Haze, 10 October 1967: We fired such shots every night on men, women and children. Even during moonlit nights when we could identify the people, that is, distinguish between men, women, and children.
In the mornings we searched the area and, by explicit order from the officer on the spot, shot the living, including those who hid or were wounded, again including the women and children.

And again, it is the same attitude that blames Palestinian civilians in Gaza for the collective punishment against them and periodic attacks that are, by Israel's own dehumanising description, nothing more than "mowing the lawn". The last Gaza attack was only the latest episode in this ongoing war of alleged "self-defence". The arson attack should be seen within this context of the Zionist state's history of negating Palestinians and relentless attacks against our very existence. Most international media covered it as "unique" before emphasising Israeli leaders' condemnation of it, suggesting that it was not representative of the state. It is absolutely representative and should be received with outrage, not against settlers' violence, but against their host regime that has been built and lives on terror, yet continues to be celebrated in the West's political and cultural discourse, feeding its impunity. We should demand not just denunciation of this atrocious attack against 18-month-old Ali Dawabsha, but delegitimization of Israel and its Zionist ideology that produces and endorses such violence, and has long justified it morally and politically.

This entry was posted on July 31, 2015 by Shahd Abusalama. It was filed under Reflections and memories and was tagged with #GazaUnderAttack, Illegal settlements, Israeli Defense Forces, Jewish Settlers, operation protective edge, West Bank, zionism.

Be strong and keep the struggle going. Although I am in faraway Chennai, my heart is with you. Take care of your children and watch over them. It is not their struggle right now but will probably soon become theirs. Until then, keep them safe.
Let them not engage with the Zionist army; the terms are too unequal. But keep them in the hope that your problems will end before the next generation. With respect and regard. (glt100)

I tried to view this post but got an "Oops, what you are looking for could not be found" message. Seems you are being censored, Shahd. (Greg Thomas)

Fernando de Sousa Falcão: The world must combat Zionism as it did Nazism, because they are similar.

John Richardson: I believe both sides are at fault for this ongoing altercation, which has been kept fomenting for at least 2000 years according to the KJ Bible's OT. WHY is the question! When are people going to step up to the plate and take a strike without retaliating? The other side is always looking for a reason to strike back! Quit being adolescent and BE the people you are meant to be! Tolerance is presently on the side of the Palestinians! Can't Israelis be the example Hashem has been waiting for? You are to be the example, atop of the hillside: HaShem's way, not yours! Give a people a break, Israel. Perfection is lacking on BOTH sides. ENOUGH SAID!

Hanna Bard: This terrorist attack is horrible and was condemned by the Israeli government. The attack is not "a result of Zionism" – it's a result of terrorism and extremism. Zionism means to support the Jewish people's right to Israel as its independent state, and there are different kinds of Zionism.
I'm a Zionist, but I don't support the right-wing Religious Zionism of the settler movement. I support the two-state solution and I'm against harassment of civilians in any way.

The Jewish people's right to PALESTINE, you mean? Return to history books; read the Israeli historian Ilan Pappe's The Ethnic Cleansing of Palestine. Israel didn't exist before they ethnically cleansed almost 800 thousand of Palestine's indigenous people, like my grandparents.
Q: How is my EditText content being saved? I created a simple app that has nothing except an EditText element. When I run the app, I type text into the element and then press Ctrl-F11 to change the emulator's orientation. I've added logging information to make sure that the activity gets destroyed and re-created when I change orientation. I haven't added any code to save the text in the EditText element and yet, after the change of orientation, the text that I typed stays in the EditText element. What mechanism in Android is saving and then restoring the element's text (is it savedInstanceState) and how can I see for myself the details of this saving operation?

A: It is the instance-state mechanism: onSaveInstanceState()/onRestoreInstanceState(). The framework automatically saves and restores the state of each view in the hierarchy, provided the view has a unique widget ID. Some useful links: Saving Android Activity state using Save Instance State http://groups.google.com/group/android-developers/browse_thread/thread/5d7fd8da11c8e971
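To make the round trip concrete, here is a small hypothetical sketch (written in Python for brevity rather than Android's actual Java API) of the contract the framework honours on a configuration change: before the Activity is destroyed, each view with a unique ID gets its state written into a Bundle, and after re-creation the state is handed back to the view carrying the same ID. The `Bundle` and `EditText` classes below are simplified stand-ins, not the real Android classes:

```python
class Bundle:
    """Minimal key-value store mimicking android.os.Bundle."""
    def __init__(self):
        self._values = {}

    def put_string(self, view_id, text):
        self._values[view_id] = text

    def get_string(self, view_id):
        return self._values.get(view_id)


class EditText:
    def __init__(self, view_id):
        self.view_id = view_id   # corresponds to android:id in the layout
        self.text = ""


def on_save_instance_state(view, out_state):
    """What the framework does before the Activity is destroyed."""
    out_state.put_string(view.view_id, view.text)


def on_restore_instance_state(view, saved_state):
    """What the framework does after the Activity is re-created."""
    saved = saved_state.get_string(view.view_id)
    if saved is not None:
        view.text = saved


# Simulated rotation: save, destroy, re-create with the same id, restore.
state = Bundle()
before = EditText(view_id=42)
before.text = "typed before rotation"
on_save_instance_state(before, state)   # rotation begins: activity destroyed

after = EditText(view_id=42)            # fresh instance, same android:id
on_restore_instance_state(after, state)
print(after.text)                       # -> typed before rotation
```

This also explains the flip side: a view without a unique ID gives the framework no key to file its text under, which is why such views lose their content on rotation.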
Samuel Beckett and Arnold Geulincx

Our literature and philosophy publishing has been growing over the last 6 months and I'm delighted to bring you news about one of the strongest new additions to the list - Samuel Beckett and Arnold Geulincx: Tracing 'a literary fantasia' by David Tucker, Visiting Research Fellow at the University of Sussex and currently teaching at the University of Oxford, UK. This book is the first full-length study of Samuel Beckett's fascination with the seventeenth-century philosopher Arnold Geulincx and endorsements have so far been outstanding:

'Every now and again, rarely, a book comes along that offers a definitive account of a particularly vexing critical question – this study is one of them. Drawing on a range of published and unpublished materials, David Tucker offers a comprehensive and sensitive examination of the role Geulincx plays in Beckett's writing and aesthetics, and in doing so makes us think differently about Beckett's work.' Mark Nixon, Director, Beckett International Foundation, and Lecturer, University of Reading, UK

'Samuel Beckett's debt to Arnold Geulincx has been recognized on occasion, as it were, but this is the first detailed treatment of its lasting impact. David Tucker traces Beckett's early readings of the Belgian post-Cartesian philosopher from their manifest presence in Murphy (1938), through their more subtle intimations in the great writings of the 1940s, to their faint stirrings in the later prose, showing that Geulincx's Ethica and their central axiom, ubi nihil vales, ibi nihil velis, continued to define for Beckett a viable ethical principle in a worthless world.'
Professor Chris Ackerley, University of Otago, New Zealand

Samuel Beckett once wrote that were he in the 'unfortunate position' of a critic studying his work, one of his points of departure would be the ideas of the seventeenth-century philosopher, Arnold Geulincx. This book examines Beckett's engagement with Geulincx, and how this engagement marks, and is marked by, broader changes in Beckett's aesthetic thinking. You can read our interview with the author here and find an exclusive edited extract from the Introduction to the book here.
\section{Introduction} Nowadays, we are observing an increasing deployment of software systems based on Deep Learning (DL) in real life, from personal banking to autonomous driving \cite{heaton2020applications}. A DL program encodes the network structure of a desirable DL model and the process by which the model learns from a training dataset. Easy-to-use libraries such as Keras have been introduced to simplify the development process of DL programs. However, leveraging these libraries to implement a DL program is still challenging, in particular for developers who are not experts in Machine Learning (ML) and neural networks. A developer must make multiple architectural (e.g., type, size, number, and order of layers) and configuration (e.g., optimizer, regularization methods, and activation functions) choices that affect the quality of the DL models, and consequently software quality. A poorly-designed DL model may train successfully but is likely to perform poorly when deployed in production. Design smells in DL programs are poor design and–or configuration decisions that can have a negative impact on the performance and then quality of a DL-based software system. By performance, we mean accuracy of prediction, like precision of classifying samples in the correct target class, that may affect the quality of final decisions. In software engineering, traditionally code/design smells deal with non-functional requirements such as testability or maintainability, but in ML-based systems the accuracy can be regarded as a functional requirement. In this paper, we define design smells in DL programs as poorly designed/configured models that may affect the entire performance, i.e. prediction accuracy, of DL-based systems. An example of a poor design decision in a DL model and its refactored version are shown in Fig. \ref{fig:motivation}. 
When training the model to detect images of handwritten digits, the developer selected an inadequate optimizer in the last line, i.e., ``Adam'' in the \texttt{compile} function instead of the Stochastic Gradient Descent (SGD) optimizer, as pointed out in the correct answer, which caused the accuracy of the model to remain unchanged between epochs 2 and 10. Consequently, the model was not able to train well on the data, leading to a low classification accuracy. Such low classification accuracy results in poor decisions like misclassification of input images. Changing the optimizer successfully addressed the problem and the performance improved significantly. \begin{figure*} \centering \includegraphics[width=.9\textwidth]{figs/motivation.jpg} \caption{A poorly-designed model (left) and its refactored version (right). The optimizer has been changed to improve the performance in a classification problem. The recommended changes have been highlighted by the red color (simplified from SO\_37213388).} \label{fig:motivation} \end{figure*} Deploying a DL model with poor performance can have severe consequences, especially in the context of safety-critical systems. It is therefore important to raise the awareness of development teams about poor design and configuration issues that are likely to have a negative impact on the quality of DL models. Design smells can cause a program to exhibit extraordinarily poor accuracy or other low-quality outputs during the execution phase. Having a list of known bad design practices for DL models can help developers avoid pitfalls during the development of their DL programs, resulting in better software quality. Although poor design choices and performance issues in DL programs have been studied previously \cite{DL_bugs_1, DL_bugs_2, CNN_design_patterns, CNN_principles}, to the best of our knowledge, this paper is the first empirical study on design smells in DL programs.
In this paper, we propose a catalog of 8 design smells in DL models with a focus on deep Feedforward Neural Networks (FNN) that use convolutional components. Fig. \ref{fig:metodology} illustrates the schematic diagram of our study in this paper. We start by conducting an investigation to determine the type of smells and their prevalence using two main sources: (1) previous research studies that highlighted bad practices in designing DL models, and (2) DL programs with design or performance issues. We have identified two main categories of design smells: Formation of the feature map and usage of regularization methods. Context, consequences and recommended refactorings for removing each smell are specified in the catalogue with some examples from real DL programs. Finally, the relevance of design smells are assessed by running a survey among 81 eligible DL developers/researchers. In general, the developers perceived the proposed design smells as reflective of design or implementation problems, with agreement levels varying between 47\% and 68\%. The contributions of this paper are: 1) proposing a catalogue of 8 design smells in DL models, and 2) validating the catalogue through a survey with 81 eligible DL developers/researchers. The remainder of this paper is organised as follows. Section~\ref{background} briefly reviews background knowledge about DL, deep FNNs and the development of DL program/models. Section \ref{smells} introduces the methodology adopted for the identification of smells and a full description of the identified design smells in DL models. Section~\ref{survey} presents the design of the survey used to validate the proposed design smells, and the obtained results. Section~\ref{threats} discusses threats to the validity of this study. Finally, we conclude the paper and discuss future work in Section~\ref{conclusion}. 
\section{Background}\label{background} \begin{figure*} \centering \includegraphics[width=.6\textwidth]{figs/Methodology.pdf} \caption{Schematic diagram of our study.} \label{fig:metodology} \end{figure*} \subsection{Feedforward Neural Networks (FNN)} FNN \cite{DL_ebook_2016} is the principal neural network architecture used for solving classification and function approximation problems, where the task is to learn a mapping function capable of converting input data to a target output. FNN consists of several, and sometimes diverse, sequences of layers of computational units. These computational layers are trained to extract features hierarchically. This starts from low-level features in early layers to high-level ones in middle layers. FNN, then, detects discriminative and informative patterns in last layers, which serve it to derive either the class label (in classification problems) or continuous outcome (in function approximation problems). It is called feedforward because the information flows in a forward manner from the input layer, through the hidden layers and to the output layer, e.g., a class probability or a predicted real value. The basic FNN architecture consists of stacking dense layers, where all the neurons of two consecutive layers are fully-connected. The regularization is required to improve the convergence and generalizability of the training procedure of DNNs. Many regularization techniques have been proposed and the most used ones are dropout and batch normalisation (batchnorm). Dropout \cite{dropout} masks at every training iteration a random subset of units (i.e., nullify them). The stochasticity injected into the inference calculation, only during the training, prevents the co-adaptation of feature detectors and encourages the DNN to learn robust patterns against partially-hidden information. 
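As an illustration of the masking described above, the following sketch (plain Python, not Keras's actual implementation; real libraries draw the mask randomly at every training iteration, whereas here it is fixed for reproducibility) applies inverted dropout, which nullifies the dropped units and rescales the survivors by $1/(1-p)$ so that the expected activation is unchanged between training and testing:

```python
def apply_dropout(activations, mask, rate):
    """Inverted dropout on one layer's activations.

    `mask[i]` is 1 if unit i is kept and 0 if it is dropped; in a real
    training loop each entry would be drawn as Bernoulli(1 - rate) anew at
    every iteration. Kept units are scaled by 1/(1 - rate) so that the
    expected value of each unit matches its test-time (no-dropout) value.
    """
    keep_scale = 1.0 / (1.0 - rate)
    return [a * m * keep_scale for a, m in zip(activations, mask)]


acts = [0.5, 1.0, 2.0, 4.0]
mask = [1, 0, 1, 0]   # fixed here for reproducibility
print(apply_dropout(acts, mask, rate=0.5))   # -> [1.0, 0.0, 4.0, 0.0]
```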
Batchnorm \cite{batchnorm} acts differently on activations, normalizing their values using statistics (i.e., mean and variance) of the current batch of data during training. During training it also internally updates population statistics, accumulated over all batches for each level of activations, in order to switch, at test time, to normalizing against population, rather than batch, statistics. This normalization of intermediary input data has shown its effectiveness in smoothing the loss landscape, which ensures faster and safer training convergence with a high potential to escape weak local minima. Convolutional architectures represent a particular type of FNN designed for multi-dimensional input data, such as 2D images, audio spectrograms, or 3D videos \cite{krizhevsky2012imagenet}. The benefit of Convolutional Neural Networks (CNN) lies in their ability to take the spatial information into account in their feature extraction process. To do so, CNNs stack, in their earlier part, two specialized layers: \begin{itemize} \item Convolutional layer: it applies spatial filters over the input data, and each filter's weights are learned to detect relevant features supporting the network's task. It yields a feature map for each learned filter, where each unit is connected to a local region (i.e., the size of the spatial filtering window) in its previous layer's feature maps. \item Pooling layer: this layer performs spatial pooling over the computed feature map to reduce its dimensionality and retain the most relevant information. The spatial pooling can be either average or max aggregation, which computes, respectively, the average or max of all the units in the specified spatial window. \end{itemize} Indeed, some bad configurations and poor design choices may introduce inefficiencies in the internal functioning of the FNN or one of its components, which can hinder the expressiveness of the mapping function or inflate computational resource consumption.
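The per-batch normalization step of batchnorm can be sketched as follows. This is a simplified, single-feature illustration of our own: `gamma` and `beta` stand for batchnorm's learnable scale and shift, and the running population statistics kept for test time are omitted:

```python
def batchnorm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize one feature over a batch using the batch mean and
    variance, then apply the learnable affine transform
    a_hat = gamma * a_norm + beta (training-time behaviour only)."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]

normalized = batchnorm([1.0, 2.0, 3.0, 4.0])
# The normalized batch has (near-)zero mean and (near-)unit variance.
```

The affine parameters `gamma` and `beta` let the network restore any scale and shift that the normalization removed; this detail matters for the \textit{Bias with Batchnorm} smell discussed later.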
Such configurations or design choices have been reported in several studies as a root cause of bad performance in DL programs \cite{DL_bugs_1, DL_bugs_2}. DL researchers have studied performance issues in DL models \cite{CNN_design_patterns, CNN_principles} as well. Moreover, other researchers have reported principles and best practices for designing CNNs \cite{systematic_CNNs, practical_CNNs}. \subsection{Developing DL programs} The development of DL programs starts with constructing the Deep Neural Network (DNN) by calling built-in DL routines to create layers (processing units), then connecting them by feeding one or more layers' outputs as inputs to another. Next, the developer should train the DNN by configuring a learning algorithm on a dataset. The training process consists of iteratively updating the DNN's parameters, towards minimizing the loss of the DNN's predictions compared to the training data. A loss/cost function is defined to estimate the average distance between predicted and actual outcomes. Commonly, the best-fitted FNN is found after multiple epochs (i.e., passes over all the training data). However, leveraging DL libraries to implement a DNN and then a training program for the designed DNN is not straightforward and can be error-prone. DL libraries often have to trade off between the coverage of novel DL functionalities and the ease of rapid implementation and extension of DNN software prototypes. As a compromise, they uniformly include, for each newly-implemented DL functionality, a bundle of automated steps and default settings following its common usage trends. This enables quick prototyping of regular DNNs while keeping the flexibility to try other configurations with the tweakable setting options available for every provided DL routine.
As a consequence, DL developers should be aware of the intricacies of these DL libraries in order to choose the appropriate configurations and avoid breaking their implicit assumptions regarding the usage of their built-in routines. \section{Design Smells in DL models}\label{smells} In this section, we first describe our methodology for eliciting design smells by analyzing existing literature and related DL programs. Then, we explain the identified design smells in feedforward DL models in detail. We explain the context of each smell, its characteristics, consequences, and the recommended refactoring to address it, following the template provided by Brown et al. \cite{brown1998refactoring}. Moreover, code snippets are provided as examples in some cases. \subsection{Methodology} In this study, we focus specifically on FNNs. This architecture, popular inside the DL community, is considered "quintessential" in DL and has many industrial applications, such as object recognition from images \cite{DL_ebook_2016}. In fact, a special feedforward architecture called the Convolutional Neural Network (CNN) has shown its effectiveness on public computer vision datasets and competitions such as ImageNet classification \cite{deng2009imagenet} or COCO object detection \cite{lin2014microsoft}. Moreover, the FNN is a conceptual milestone on the road to recurrent networks, which are employed widely in natural language applications. Thus, we limit our study to deep FNNs and do not consider other DL models. The goal of this study is to identify design smells that could affect the performance of a DL program. We examined two main sources of information to identify such smells: (1) previous research studies that highlighted performance issues in DL models, and (2) DL programs that exhibited design or performance issues.
We reviewed empirical research studies on DNN design principles and bad performance in DL programs to identify frequent and influential design smells in deep FNNs, including poor design choices/configurations that lead to bad performance in DL programs \cite{DL_bugs_1, DL_bugs_2}, performance issues in DL models \cite{CNN_design_patterns, CNN_principles}, and reported principles and best practices for designing CNNs \cite{systematic_CNNs, practical_CNNs}. The second source of information about design smells is real DL programs that have design inefficiencies. To find a proper set of real-world design smells in DL programs, we have used two main sources: 1) samples found by directly searching over Stack Overflow (SO) with keywords related to such issues, and 2) public datasets of faulty DL programs (from SO and GitHub) released by previous research studies. For the former, we chose SO because it is the most popular Q\&A forum for software development and has been leveraged by previous studies on DL software systems~\cite{DL_bugs_1, DL_bugs_2, DL_challenges}. Since TensorFlow and Keras are very popular among DL developers, in this paper we searched SO posts tagged with one of these libraries with the objective of collecting relevant DL models/programs. We refined our search queries with keywords related to the scope of our study: "low performance", "bad performance" and "design issues". We considered SO posts containing full code scripts or code snippets that are related to one or multiple issues, since we need to investigate the code to understand the potential design smell. Also, we searched for publicly released datasets of faulty DL programs (including design issues and low performance) by checking the replication packages of all published papers that studied problems in DL programs. Finally, we obtained four publicly available datasets of faulty DL programs gathered from SO and GitHub \cite{DL_bugs_1, DL_bugs_2, DL_faults, DL_fix2020}.
All these studies investigated various faulty DL programs from SO and GitHub for their own research objectives, including empirical studies of bugs occurring in DL software systems written with TensorFlow, PyTorch and Caffe \cite{DL_bugs_1, DL_bugs_2}, a taxonomy of real faults that occurred in DL software systems \cite{DL_faults}, and bug fix patterns in DL programs \cite{DL_fix2020}. For inspecting the DL programs collected from either direct searching over SO or public datasets, we relied on the following inclusion and exclusion criteria to find relevant programs for identifying design smells: \begin{itemize} \item The program must have performance issues (e.g., low accuracy or detection precision), \item The issue must not lead to program crash, hang or incorrect functionality; the program should be able to run and produce results, \item The DL program must be developed using TensorFlow or Keras, \item The DL model must be an FNN. \end{itemize} This process left us with 659 DL programs to be analyzed. We manually inspected all these artifacts to find relevant examples for identifying design smells, using an open coding procedure \cite{seaman1999qualitative}. A shared document including the links to all artifacts was used to make it possible for all authors to work together during the analysis. Each artifact was inspected by reading specific parts of its document (code snippet, comment, description) and all related discussion provided by the developer or other users (for samples from SO). Each sample was inspected by at least two of the authors to make sure that the root cause of the performance issue was a design inefficiency and was not related to generic programming faults or implementation issues. After analyzing all these data sources, we derived a catalogue of 8 distinct design smells in deep FNNs (a popular DL architecture).
Since the arrangement of convolution/pooling layers for extracting features and the type/location of regularizers are two significant factors that affect the performance of deep FNNs, we present the smells organised in two categories: formation of the feature map and usage of regularization. \subsection{Formation of the feature map, convolution and pooling layers} \textbf{Context:} Conventionally, a CNN architecture incorporates a bundle of convolutional layers with increasing filter counts, separated by pooling layers that gradually shrink the feature map area. Hence, the extracted feature space tends to become deeper and narrower throughout the network until it becomes ready to be flattened and fed to the dense layers in charge of mapping the features into the target output. \\ \\ \textbf{1. Non-expanding feature map}\\ \textbf{\textit{Bad smell description:}} A possible design mistake in CNNs is keeping the number of features the same (or even decreasing it) as the architecture gets deeper. There should be a balance between retaining the detected features (and the corresponding spatial relationships between them) and increasing the depth of the network \cite{depth_comp}.\\ \textbf{\textit{Consequences:}} If the developer fails to strike a proper balance between the depth and the size of the feature map, the overall performance will be negatively affected. While the stack of convolution and pooling layers extracts and then compresses the relevant feature map, if the architecture does not increase the number of features, it will fail to deliver promising features to the dense layers.\\ \textbf{\textit{Recommended Refactoring:}} The number of feature maps should be gradually expanded while the feature map area is retracted. Growing the feature map count is recommended \cite{depth_comp} to compensate for the loss of representational expressiveness caused by the continuous decrease of the spatial resolution of the learned feature maps.
Therefore, throughout the layers, the feature space becomes synchronously narrower and deeper until it is ready to be flattened and fed as an input vector to the dense layers.\\ \textbf{\textit{Example:}} An example of this bad smell is illustrated in Fig. \ref{fig:sample1}, extracted from SO post \#50426349. The developer did not grow the number of feature maps through layers 4 to 6. The number of layers and the sizes of the 2-dimensional convolution layers in the code snippet are highlighted in red. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figs/sample1.png} \caption{A part of DL program mentioned in SO\_50426349 as an example of design smell No. 1.} \label{fig:sample1} \vspace{-15pt} \end{figure} \\ \\ \textbf{2. Losing local correlation}\\ \textbf{\textit{Bad smell description:}} In CNNs, promising features are extracted and then delivered to the dense layers by the stack of convolutional layers. For effective feature extraction, setting a proper window size for spatial filtering is crucial. If the developer does not grow the window size as the model gets deeper, the model will fail to extract the relevant features \cite{lecun2015deep}. Some developers start with a relatively large window size for spatial filtering and keep it the same for all convolutional layers, which is a bad practice leading to loss of feature information. In fact, some developers rely only on the internal mechanism of convolutional and pooling layers for extracting relevant information, without proper parameter settings/tuning.\\ \textbf{\textit{Consequences:}} If the model does not start with a relatively small window size (for gathering low-level information) and then grow the window size gradually (to extract high-level features), it will fail to extract useful features for the next processing steps. When using CNNs, the locality of information is crucial for performing the task.
Thus, it is important to preserve locality throughout the CNN to guarantee its success in detecting various features and the relations between them \cite{lecun2015deep}. Furthermore, early convolutional layers learn lower-level features while deeper ones learn more high-level and domain-specific concepts.\\ \textbf{\textit{Recommended refactoring:}} The local window size for spatial filtering should generally increase or stay the same throughout the convolutional layers. It is recommended to start with small spatial filtering to collect much local information and then gradually increase it to represent more compound information \cite{VGGNet, szegedy2016rethinking}.\\ \textbf{\textit{Example:}} Fig. \ref{fig:sample2} shows a part of the code from SO post \#38584268 that defines a CNN with two convolutional layers. The developer decreased the kernel size (local window size) in successive convolution layers, while it should increase or at least stay the same. The affected layers and the corresponding API arguments are marked in red in the code snippet. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{figs/sample2.png} \caption{A part of DL model from SO\_38584268 as an example of design smell No. 2.} \label{fig:sample2} \end{figure} \\ \\ \textbf{3. Heterogeneous blocks of CNNs}\\ \textbf{\textit{Bad smell description:}} Building a deeper model by only stacking a set of convolution and pooling layers without appropriate configuration is a bad practice among DL developers. Even with proper adjustment of the number of features, the size of the local window, and the area of the feature map along the convolutional/pooling layers (as mentioned in the \textit{Non-expanding feature map} and the \textit{Losing local correlation} smells), efficient feature extraction can be affected by the lack of sufficient convolutional blocks \cite{he2016deep}.
DL developers tend to define only one convolutional layer at each stage of a cascade of convolutional/pooling layers and to increase the kernel size if it does not work properly. Depending on the application and the input data, they often assume that a single convolutional block with a large spatial filtering size at each stage is the minimum that the model needs to extract effective features efficiently.\\ \textbf{\textit{Consequences:}} Only one convolutional block may not be enough to provide the required nonlinearity for feature extraction. On the other hand, large kernel sizes increase the computational burden significantly. As an example, the recent NVIDIA cuDNN library (version 5.x or higher) is not optimized for larger kernels such as 5 × 5 and 7 × 7, whereas a CNN with entirely 3 × 3 filters achieves a substantial boost in cuDNN performance \cite{cuDNNlink}.\\ \textbf{\textit{Recommended refactoring:}} Deep CNNs should favor blocks of 2, 3, or even 4 homogeneous convolutional layers with similar characteristics. Advanced CNN architectures \cite{krizhevsky2012imagenet, he2016deep, iandola2014densenet} have shown the benefit of having several homogeneous groups of layers, where each one is specialized to achieve a particular goal. Indeed, building blocks of convolutional layers with similar characteristics (i.e., the same number of feature maps and feature map sizes) increases the homogeneity and the structural symmetry within the CNN. Hence, larger kernels can be replaced by a cascade of smaller ones, e.g., one 5 × 5 can be replaced by two 3 × 3, or four 2 × 2 kernels. Spatial filtering with reduced size enhances the nonlinearity and yields better accuracy \cite{VGGNet}. Moreover, it massively decreases the computational power requirement.\\ \\ \textbf{4. Too much down-sampling}\\ \textbf{\textit{Bad smell description:}} DL developers usually define a pooling layer (down-sampling) after every convolutional layer.
While down-sampling is inevitable in CNN models, it is not a good practice to perform the down-sampling right after each convolutional layer, particularly for early layers.\\ \textbf{\textit{Consequences:}} Larger feature maps, especially in the early layers, provide more valuable information for the CNN to utilize and improve its discriminative power \cite{he2015convolutional, szegedy2016rethinking, iandola2016squeezenet}. Therefore, it is crucial to avoid premature down-sampling and excessive application of pooling. Otherwise, the model will lose some information extracted in early layers, resulting in poor performance.\\ \textbf{\textit{Recommended refactoring:}} Deep CNNs should not apply pooling after every convolution. For instance, we use, as an approximation, a minimum of 10 layers to consider a CNN deep and 1/3 as a threshold on the proportion of pooling layers with respect to the total number of layers (convolution + pooling) to pinpoint a high amount of pooling.\\ \\ \textbf{5. Non-dominating down-sampling}\\ \textbf{\textit{Bad smell description:}} Down-sampling \cite{strided_conv} in the cascade of a CNN can be done by max- or average-pooling or by strided convolution (strides greater than 1). Using average-pooling is recognized as a bad design choice for CNN models \cite{MaxPooling_Sup}, particularly for image-like data.\\ \textbf{\textit{Consequences:}} Average-pooling ignores some invariances in the data. Since extracting invariant features (those not affected by scaling or various transformations) is crucial for image processing and object recognition, failure to deliver such features to the dense layers leads to a degradation of classification accuracy. Moreover, it can affect the generalization capability of the model.\\ \textbf{\textit{Recommended refactoring:}} Max-pooling is the preferred down-sampling strategy, so it is recommended to change all down-sampling to max-pooling.
The max-pooling operation has been shown to be extremely superior for capturing invariances in data with spatial information, compared to other down-sampling operations \cite{MaxPooling_Sup}.\\ \textbf{\textit{Example:}} Fig. \ref{fig:sample5} illustrates a part of code from a GitHub repository\footnote{\url{https://github.com/yumatsuoka/comp_DNNfw/commit/30e0973892bc344aa17cd36a63dc61a062ad93e4}} as an example of this bad smell. It is highlighted in the code snippet that the developer used average-pooling instead of the recommended max-pooling. \begin{figure} \centering \includegraphics[width=.95\linewidth]{figs/sample5.png} \caption{A part of DL program from GitHub as an example of design smell No. 5.} \label{fig:sample5} \end{figure} \subsection{Using regularization} \textbf{Context:} The order and combination of regularization techniques can significantly affect the performance of a FNN \cite{batchnorm, disharmony_dropout_batchnorm, systematic_CNNs}. Moreover, the regularization functionality may interfere with other FNN components. Therefore, regularization should be used properly (place, order and combination) to ensure its effectiveness. The following smells discuss bad practices in the usage of regularization in a FNN architecture.\\ \\ \textbf{6. Useless Dropout}\\ \textbf{\textit{Bad smell description:}} It is well-known among DL developers that dropout helps to avoid overfitting; however, using it before down-sampling layers will counteract its effect \cite{systematic_CNNs}.\\ \textbf{\textit{Consequences:}} Dropping out activations before the pooling has no effect except in cases where the masked units correspond to maximums within the input pooling windows, because max-pooling keeps only these maximums as inputs for the next layers. With dropout neutralized, the model will suffer from overfitting and poor performance.\\ \textbf{\textit{Recommended refactoring:}} The dropout layer must be placed after the maximum pooling layer to be more effective.
In the original dropout case studies with max-pooling layers \cite{dropout}, dropout was applied on the pooled feature maps, which has become a heuristic followed by state-of-the-art CNN architectures \cite{systematic_CNNs, practical_CNNs}.\\ \textbf{\textit{Example:}} In the example shown in Fig. \ref{fig:sample6}, extracted from SO post \#60566498, the developer has used "Dropout" before "MaxPooling2D" (both underlined in red in the code). In the post, the developer complained about increasing validation loss and bad performance of the model.\\ \begin{figure} \centering \includegraphics[width=0.75\linewidth]{figs/sample6.png} \caption{A part of DL program mentioned in SO\_60566498 as an example of design smell No. 6.} \label{fig:sample6} \end{figure} \\ \textbf{7. Bias with Batchnorm}\\ \textbf{\textit{Bad smell description:}} Normally, learning layers in a FNN benefit from biases with different initializations. When using batchnorm, however, keeping bias values in those layers is not a good practice \cite{batchnorm}.\\ \textbf{\textit{Consequences:}} The effect of batchnorm is diminished in the presence of a bias. Batchnorm applies, after the normalization, a linear transformation to scale and shift the normalized activations, $\hat{a} = \alpha a + \beta$, where $\alpha$ and $\beta$ are learnable parameters. This allows the DNN to compensate for any loss of information caused by the value distortions, in order to preserve its expressive power. Since batchnorm already adds a $\beta$ term fulfilling the same role as the bias, "its effect will be canceled" \cite{batchnorm} in the presence of a bias.\\ \textbf{\textit{Recommended refactoring:}} The bias should be removed or ignored in a learning layer that is equipped with a batchnorm.\\ \textbf{\textit{Example:}} The code snippet in Fig.
\ref{fig:sample7}, extracted from SO post \#49117607, shows that the developer has used two learning layers ("Conv2D") without turning off the bias along with batchnorm (both underlined in red in the code with 1 and 2, respectively).\\ \begin{figure} \centering \includegraphics[width=0.75\linewidth]{figs/sample7.png} \caption{A part of DL program mentioned in SO\_49117607 as an example of design smell No. 7.} \label{fig:sample7} \end{figure} \\ \textbf{8. Non-representative Statistics Estimation}\\ \textbf{\textit{Bad smell description:}} Another bad practice regarding regularization is using batchnorm after dropout. Developers usually combine different regularization techniques to maintain and improve the performance of DL models; however, they should be careful about the internal mechanisms and effects of these two different regularization techniques \cite{disharmony_dropout_batchnorm}.\\ \textbf{\textit{Consequences:}} If batchnorm is placed after dropout, it will compute non-representative global statistics (i.e., moving average and moving variance) on the dropped outputs of the layer. Li et al. \cite{disharmony_dropout_batchnorm} discussed the effects of this disharmony between dropout and batchnorm and showed experimental results supporting their explanation.\\ \textbf{\textit{Recommended refactoring:}} Batchnorm should be applied before dropout. Therefore, a substitution in the model design is recommended if batchnorm is applied after dropout.\\ \textbf{Example:} Fig. \ref{fig:sample8} illustrates a part of the program presented in SO post \#55776436, showing that "Dropout" has been used before "BatchNormalization" (a red box indicates the affected lines, highlighted with 1 and 2, respectively). The developer complained in the post about low classification accuracy.
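The ordering constraints behind smells 6 and 8 can be expressed as a simple check over the layer sequence of a sequential model. The following is our own minimal sketch of such a check: the `regularization_ordering_smells` helper and its layer-name strings are illustrative and not part of any DL library:

```python
def regularization_ordering_smells(layers):
    """Flag dropout-related ordering smells in an ordered list of
    layer-type strings, e.g. ["conv", "batchnorm", "maxpool", "dropout"].

    Returns the set of smells found:
      - "useless_dropout": dropout placed right before max-pooling
        (smell 6);
      - "non_representative_statistics": batchnorm placed anywhere
        after a dropout (smell 8).
    """
    smells = set()
    # Smell 6: dropout immediately followed by max-pooling.
    for prev, cur in zip(layers, layers[1:]):
        if prev == "dropout" and cur == "maxpool":
            smells.add("useless_dropout")
    # Smell 8: any batchnorm that appears after a dropout.
    seen_dropout = False
    for layer in layers:
        if layer == "dropout":
            seen_dropout = True
        elif layer == "batchnorm" and seen_dropout:
            smells.add("non_representative_statistics")
    return smells

# The recommended ordering raises no flags:
clean = regularization_ordering_smells(["conv", "batchnorm", "maxpool", "dropout"])
```

Under this sketch, a sequence like that of Fig. \ref{fig:sample6} (dropout before max-pooling) would be flagged as useless dropout, and one like that of Fig. \ref{fig:sample8} (dropout before batchnorm) as non-representative statistics estimation.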
\begin{figure} \centering \includegraphics[width=0.9\linewidth]{figs/sample8.png} \caption{A part of DL program mentioned in SO\_55776436 as an example of design smell No. 8.} \label{fig:sample8} \end{figure} \section{Relevance Assessment of Design Smells}\label{survey} After identifying the design smells in DL models, we wanted to assess them. Our goal was to know whether developers/researchers evaluate them as relevant and possibly worthwhile to be addressed. Hence, we ran a survey to validate our catalogue of DL design smells and collect the views of DL developers/researchers about it. In the following, first the methodology followed to conduct the survey is explained, then the results are presented. \subsection{Survey Design} Our survey was created using Google Forms \cite{googleForm}, a well-known online tool for creating and sharing online surveys and quizzes. The survey is organized in three parts. In the first part, we ask some demographic questions about the participants: i) their role in the organization or job title (e.g., developer, researcher, student), ii) their number of years of work/research experience in ML/DL, and iii) the programming languages/frameworks they use. The second part asks specific questions about the design smells. We provide a description of each of our 8 design smells and a multiple-choice question asking the participant about the perceived relevance of the smell. The participant is instructed to provide a score on a 5-level Likert scale \cite{oppenheim2000questionnaire}. Moreover, for each question, we provide an open comment box to the participants, asking for their feedback about the definition of the design smell. In the final part, we ask (i) whether the participant has observed any other frequent/significant design issues that have not been considered in our survey, (ii) whether a tool for detecting such smells would be useful or not, and (iii) whether they would opt for using such a tool.
We ask this last question because one could find a tool useful, but more for others (like junior developers/researchers) than for themselves. At the end of the survey, we provided an open comment box allowing participants to share any additional comments they wished with us. The target group of candidates for this survey is developers, practitioners, or researchers with good experience in DL and particularly in FNNs. The first group of candidates was derived from the authors' personal contacts, namely 16 experts. The second group of candidates came from GitHub. To find participants with a good understanding of FNNs over GitHub, we used its REST APIs \cite{githubREST}. First, we identified the relevant repositories that include "feedforward neural networks" and "convolutional neural networks" in their description. We excluded repositories that had not been active since 2019. Finally, we extracted active contributors' emails from the 12192 selected repositories. This process left us with 3650 unique email addresses, and we successfully distributed the survey participation request to 3605 email addresses. The third group of candidates came from \textit{Reddit}. To recruit participants, the questionnaire was posted on two relevant Reddit channels: \textit{deeplearning} and \textit{MachineLearning}. When sending/posting the questionnaire, we explained the purpose, scope and estimated participation duration (5-10 minutes) of the survey in a short message. Moreover, we assured respondents that the survey was anonymous, but they were able to provide their emails for further communication and to receive a summary of the study. \subsection{Validation results} The survey was open for three weeks, resulting in 81 responses in total. Regarding our question on work/research experience in DL, 20 respondents had less than 1 year of experience, 41 between 1 and 3 years, 10 between 3 and 5 years, and 10 more than 5 years.
Almost all of the respondents (80 of 81) were using Python for DL development and only one indicated C++ as their preferred programming language. Among DL frameworks, TensorFlow was the most popular one with 59 votes; Keras and PyTorch received 45 and 42 votes, respectively. Fig. \ref{fig:result1} shows the results of the relevance assessment for the 8 identified smells in the form of diverging stacked bar charts. Dark/light green indicates the proportion of "Strongly agree" and "Agree" responses, while dark/light brown indicates the proportion of "Strongly disagree" and "Disagree" responses. \textit{Non-representative Statistics Estimation} is the most popular smell in our survey, as it received 68\% of positive votes ("Strongly agree" and "Agree"), while \textit{Bias With Batchnorm} received the minimum positive rate of 47\%. On the other hand, the highest negative feedback ("Strongly disagree" and "Disagree") was recorded for \textit{Losing local correlation} with 27\%. In the following, we discuss the validation results and the received comments for each smell.\\ \begin{figure*} \centering \includegraphics[width=.95\linewidth]{figs/result-N.pdf} \caption{Validation results: Perceived relevance of the 8 design smells} \label{fig:result1} \end{figure*} \textbf{1. Non-expanding feature map:} In general, respondents agree (about 63\% of positive responses: "Strongly agree" and "Agree") that keeping the number of features the same (or even decreasing it) as the architecture gets deeper is a design mistake in DL models; e.g., one commented that: \textit{"I strongly agree with this statement. The number of channels must be increased so as to capture more complex features which appear as the layers grow deeper"}. However, there are some neutral and negative responses. Some of them asserted that this is the case only for classification tasks.
Most of the negative/neutral comments explained that this design smell is not always true and that the expansion of the feature map depends on the data, the application (the task that the DL model is designed for) or the network architecture. They tend to consider the size of the feature map as a hyperparameter that should be tuned on the validation loss, e.g., \textit{"According to me the size of feature map is a hyperparameter and will depend on the size of the network (Depth) hence I neither agree or disagree with the given statement, since sometimes a combination of small and larger feature maps work well like in inception model."}. Another respondent mentioned that they preferred to see an only slightly decreasing number of information processing units as the model gets deeper, and that if the number of points is quartered (e.g., by max-pooling), the number of feature channels should be doubled or tripled. \textbf{2. Losing local correlation:} This smell received a low positive response rate of 49\%, the highest negative feedback among all smells (27\%: "Strongly disagree" and "Disagree") and 24\% of neutral responses. While respondents agree that the window size is an important factor and should be adjusted as the network gets deeper (e.g., \textit{"I agree with this statement however increasing the window size will slow the training but our aim for a better model is achieved"}), they believe that a non-growing window size across the network is not always a bad practice (e.g., \textit{"I think the windows size for spatial filtering should be directly proportional to how deep the network's layers are"}). They mentioned that there are plenty of simple applications where fixing a window size is enough to achieve reasonable performance, and this approach makes implementation easier and hyperparameter tuning simpler (e.g., \textit{"The models I've worked with are all relatively small but I kept the window size the same, it worked fine"}).
There are comments stating that if we start by a small dimension and grow it, we may have false correlation as a result of the larger subsequent layers in some cases. Another respondent rephrased our statement as \textit{"start with and keep (or slightly grow) a small window size"}. Three other comments mentioned autoencoder networks (since they benefit from CNNs) by stating that this characteristic is observed on the second half (decoder) of autoencoders but not in the first half, so this design smell can be true or false depending on context. From neutral responses, we have: \textit{"I have seen a case where first a large spatial filter after that constant filter size provided more performance than gradually increasing filter size in a larger CNN model. Though I have also seen the logic above working well"}. \textbf{3. Heterogeneous blocks of CNNs:} Respondents have an agreement (64\%) with soundness and prevalence of this smell. Also, it received the minimum negative response of 10\% in our survey. They believed that we need multiple symmetric blocks of CNNs for effective feature extraction particularly in large models with enough depth not in small or medium ones. It was acknowledged that multiple layers are needed, not only to map complex relationships but also to be able to generate a sufficiently large receptive field: \textit{"a higher representation level is obtained with every additional convolutional layer"}. However, we received opposite views mentioning different aspects. Some experts commented that the designer should not spend too much effort on interpreting the activity of a single block and not try to set a goal for each block a priori, for example: \textit{"I agree with your claim except the last sentence"}. Others stated that convolutional blocks may be made of a single, several homogeneous or heterogeneous ones, and the design choice depends on the application: \textit{"the network size is determined primarily by the dataset size"}. \textbf{4. 
Too much down-sampling:} More than half of respondents vote positively for this case (56\%), while neutral and negative votes each account for 22\%. We observed an agreement on the necessity of a balance between down-sampling and feature detection, and on not using too much down-sampling (\textit{"Too much down sampling can provide rigged results"} or \textit{"You do want to avoid downsampling too much, mostly because you're going to bottleneck all your information to nothing"}). However, opinions diverge on accepting it as a rule and on the suggested 1/3 threshold. Some comments mentioned that there is no fixed ratio and the optimum ratio that fits perfectly could be achieved by hyperparameter tuning, for example: \textit{"but I've seen optimal architectures in which that ratio is much higher (e.g.: 1:1) as well as much lower (e.g.: 1:10)"} or \textit{"I think it would be difficult to prove such rules apply to every CNN and every problem domain. Also, I have seen and used CNNs with no down-sampling layers"}. Another respondent mentioned that hesitancy to down-sample may increase CNN processing time while mostly preserving "junk" data in the network, so the designer should be careful about it. \textbf{5. Non-dominating down-sampling:} Similar to the previous smell, there is marginal agreement on this one, with 56\% of positive responses. Moreover, this case received a substantial rate of negative reactions, i.e., 26\%. According to the submitted comments, respondents acknowledged max-pooling as a dominant choice in most cases, supported by results-driven (e.g., natural image data) and neuroscience-driven arguments. However, this is not always the case: \textit{"max pooling proves better than avg pooling but it cannot be completely ruled out"}, \textit{"Indiscriminate use of average pooling may suggest a code smell"} or \textit{"the decision I would say should be based on what features are being extracted and what is the model trying to learn"}.
They mentioned that for some applications, like the extraction of a global parameter from an image, average-pooling can be more useful. Another respondent suggested using average-pooling instead of max-pooling in Generative Adversarial Networks (GANs) to avoid sparse loss. Finally, we found this comment very helpful: \textit{"Although contrast is a good way to see things, nuance is important. Nuance is lost with max-pooling especially with aggressive down-sampling or at later layers"}. \textbf{6. Useless Dropout:} According to the received responses, 56\% of respondents indicate their agreement with this smell. Although there were some strong positive comments like: \textit{"I generally don't include dropout before pooling"} or \textit{"it's a rough heuristic to keep dropouts after pooling but it works well"}, negative responses expressed two main points against the statement of the smell: 1) the type of dropout: element-wise vs. feature-wise, and 2) its effectiveness compared to batchnorm. Three respondents proposed that feature-wise dropout (dropping some proportion of feature maps rather than pixels, i.e., spatial dropout) should be more effective than random dropout for most applications, considering that \textit{"it does not matter at all whether it's used before or after pooling (since entire feature maps are dropped)"}. Two others suggested that dropout was being deprecated by batchnorm. \textbf{7. Bias With Batchnorm:} Less than half of respondents voted positively for this smell (47\%), while it received the most neutral votes in our survey (33\%). Respondents with positive votes stated that using bias with batchnorm is a bad practice and they avoid it generally.
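The redundancy at issue can be demonstrated numerically (our sketch, not taken from the survey): a constant bias added before batch normalisation is removed by the per-channel mean subtraction, so the learned bias has no effect on the normalised output.

```python
import math
import random

# Numerical sketch (ours): a constant bias added before batch normalisation
# is cancelled by the mean subtraction, so a conv/dense bias followed by
# batchnorm is redundant (only wasteful, not harmful).
def batchnorm(xs, eps=1e-5):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) for x in xs]

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(1000)]
biased = [x + 3.7 for x in xs]  # any constant bias on the pre-activation

# The normalised outputs are identical with and without the bias.
assert all(abs(a - b) < 1e-6 for a, b in zip(batchnorm(xs), batchnorm(biased)))
```

The same cancellation argument underlies the common practice of passing `use_bias=False` to a convolutional layer that is immediately followed by batch normalisation.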
By reviewing comments, we come to the conclusion that negative and neutral voters believed that using bias with batchnorm is not harmful: \textit{"The conv bias is redundant with the BN bias, but I don't think it's harmful to keep it (just wasteful)"}, \textit{"I cannot see the presence of bias nodes being a problem"} or \textit{"the additional bias will simply "cancel" and the same representation is learned anyway"}. Therefore, the design smell does not look wrong and avoiding it can be helpful at least for keeping the model simpler. \textbf{8. Non-representative Statistics Estimation:} There is a general agreement in this case since we received 68\% of positive votes as the most popular smell in our survey. A majority of respondents believed that using batchnorm after dropout would lead to non-representative statistics: \textit{"if batch normalisation is done after dropout then it will normalise the output coming after dropping the some connection (nodes)"}. However, there were also some negative comments on the smell. The main criticism was that the order of batchnorm and dropout does not have a significant impact on the performance of a DL model. The results of our questions about the usefulness of a potential tool for detecting the identified smells are shown in Fig. \ref{fig:result2}. A significant majority of respondents, actually 90\%, expressed a positive opinion for such a detection tool. Our follow-up question regarding whether they would use this tool if it became available, received another high positive reaction rate of 86\%. We attribute the slight drop to some experienced respondents recognizing that a detection tool would be useful but not necessary to them. Finally, all respondents surprisingly answered our question about other frequent/significant smells not considered in this survey and further identification of smells. 
They suggested the investigation of potential design smells related to various components of DL programs, including: (i) Initialization methods, (ii) Other architectures, like fully convolutional and autoencoder CNNs, (iii) Some hyperparameters, like the learning rate for different layers, (iv) The choice and location of activation functions, (v) Attention layers, (vi) Transfer learning. \begin{figure} \centering \includegraphics[width=0.9\linewidth]{figs/result-tool-N.pdf} \caption{Survey results about a detection tool} \label{fig:result2} \end{figure} \subsection{Discussion} Among the comments received in our survey, some respondents mentioned that although the proposed design smells offer promising guidance for sketching DL models, hyperparameter tuning is inevitable after any initial design and the model's performance can be improved significantly by a proper hyperparameter search, for example: \textit{"... just set up your hypermodel to accept these as tunable parameters and search the space"} or \textit{"... allowing users to perform a flexible hyperparameter to fit the model to their particular needs"}. They stated that given the range of applications for DL, many design/configuration choices are domain-, data- and preprocessing-dependent. Therefore, experiments (including for hyperparameter tuning) may be required in some cases to identify the issues. However, we believe that having a catalogue of known bad practices while designing DL models will help developers to avoid smells in their models. Even if the proposed smells do not cover all domains, they are still useful for the covered architectures/domains. Moreover, avoiding those smells will save time, effort and computational resources during testing or hyperparameter tuning. \section{Threats to Validity}\label{threats} First of all, threats to construct validity may affect the relevance of the identified design smells, which we assessed through a survey.
In our survey, respondents were requested to indicate the perceived significance of smells described by a short explanation of the problem/situation. We have used relevant terminology and provided technical details in our descriptions to address this threat. Moreover, respondents were able to leave comments for each smell in the survey, and we have not observed any comment complaining about a possible misunderstanding of the description or context. It is also possible that our descriptions in the survey affected participants' views, directing them toward our proposed design smells. To address this concern, we asked participants at the end of our survey to freely comment on issues missing from our study. There are internal threats to the validity of this research that may affect its findings. The identification of design smells could have been biased during the review of previous works and the manual inspection of artifacts. To address this issue, a clear systematic approach is followed in our study. We have investigated only ``closed" issues from GitHub and questions with ``at least one accepted" answer from SO, ensuring that we analyzed only issues that were solved. Moreover, participants in the survey have not been involved in the process of identifying smells and have different levels of expertise/background. Although the catalogue was prepared using DL programs developed with two popular frameworks, TensorFlow and Keras, we kept the title and description of the smells as general as possible, and we believe that they are helpful for developers/researchers working with other frameworks as well. External validity threats may impact the generalization of our findings. We are indeed aware that the proposed catalogue is not complete. Since our paper is a first step in identifying design smells in DL programs, further studies are required to comprehensively investigate design smells in DL programs utilizing various structures.
Furthermore, some smells can be extended in future work since they are currently specified for particular cases. \section{Conclusion}\label{conclusion} In this paper, we have specified 8 design smells in DL programs. Due to the prevalence and effectiveness of deep CNNs in real-world applications (particularly with image-like data), we have focused on this architecture. Essentially, these smells are structural inefficiencies in DL models that affect the performance of DL programs. We evaluated the validity and relevance of this catalogue by running a survey with 81 DL developers/researchers. In general, the developers perceived the proposed design smells as reflective of design or implementation problems, with agreement levels varying between 47\% and 68\%. The analysis of the multiple comments received for each of the smells indicates that almost all the design smells were found to be relevant and helpful by respondents. Many of the survey respondents have encountered design issues similar to those described by the smells. There are several directions for future work. First, we plan to introduce a detection tool for the proposed smells. An automatic method for finding design smells in DL programs will help developers to improve their DL models prior to deployment. Second, we plan to generalize some of the already identified smells to cover other contexts. Finally, a more comprehensive variety of smells can be proposed by covering other DL architectures. \balance \section{Detection approach: \tool{}}\label{detection} \subsection{Meta-modeling} A DL program has different components. The core of each DL program is a DNN. For the sake of simplicity, we only consider the feedforward multilayer perceptron (MLP) architecture. Like other computational models, a DNN attempts to find a mathematical mapping from the input to the output during a learning phase. Usually, a set of inputs and desired outputs (or targets) is provided for learning; this set is called the Dataset.
Therefore, our meta-model includes three main parts: Architecture of DNN, Learner, and Data. Since we have used GTS for modeling DL programs, our proposed meta-model is represented by a type graph. The proposed type graph is illustrated in Figure \ref{fig:meta-model}. The node representing the \textbf{DL program} has three edges to \textbf{Architecture}, \textbf{Learner} and \textbf{Data} nodes indicating its main components. In the following, we describe the meta-model in detail. It should be noted that our aim of meta-modeling is the detection of faults in DL programs; therefore the most relevant components have been incorporated into the meta-model.\\ An architecture starts with the input layer, continues with some hidden layers and ends with the output layer. We have considered a distinctive node for the \textbf{InputLayer} because of its importance but all other successive layers are modelled as \textbf{Layer}. Each layer has a \textbf{size} indicating the number of neurons in that layer. There are specific properties among nodes that are modelled as edges. For example, \textbf{Architecture} starts by \textbf{Input Layer}, \textbf{Input Layer} is followed by other \textbf{Layer}s, each \textbf{Layer} may have next layers and each \textbf{Layer} has a \textbf{Type} as an attribute. There are different types for a layer in DL, e.g., dense, 1D and 2D convolution, pooling or data processing layers like flatten. There may be other attributes for a layer like \textbf{Bias}, \textbf{Weights}. An architecture ends with \textbf{Labels}, the desired outputs of DNN that are used to calculate the error of the network in \textbf{Loss} function. Actually, \textbf{Labels} is a part of \textbf{Data} associated with the DL program.\\ On the other hand, a model could be configured according to a DL program that has already been developed by a programmer. The source code of a DL program is converted to a model, which is an attributed graph. 
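As a toy illustration of such a model (a hand-rolled sketch, not the tool's actual converter), a small MLP can be encoded as an attributed graph whose nodes and edges instantiate the type graph:

```python
# Minimal attributed graph (our sketch): nodes carry attributes such as
# layer kind, type and size; edges encode the "next layer" relation from
# the type graph.
def build_model_graph(layers):
    nodes = {f"L{i}": attrs for i, attrs in enumerate(layers)}
    edges = [(f"L{i}", f"L{i + 1}", "next") for i in range(len(layers) - 1)]
    return {"nodes": nodes, "edges": edges}

graph = build_model_graph([
    {"kind": "InputLayer", "size": 784},
    {"kind": "Layer", "type": "dense", "size": 128, "bias": True},
    {"kind": "Layer", "type": "dense", "size": 10, "bias": True},
])

assert graph["nodes"]["L0"]["kind"] == "InputLayer"
assert ("L1", "L2", "next") in graph["edges"]
```

The node and edge attribute names here are illustrative only; the actual attributes are those defined by the meta-model in the figure.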
Dedicated convertors are programmed in \tool{} to convert a DL program written with different DL libraries into its model. The source code of a program is parsed to extract the relevant information that is necessary to configure the model. The meta-model is generic enough to be independent of any specific DL library. Hence, we can have a model of a DL program that conforms to the meta-model, making further investigations on the model, such as verification, possible. Apart from the work and analysis presented in the rest of this paper, we believe that this meta-model can be very useful for understanding DL programs written by third parties. It will be helpful in understanding the development activities of DL practitioners: the way they write DL programs and the types of faults that they experience. \begin{figure} \centerline{\includegraphics[width=\linewidth]{figs/meta-model.pdf}} \caption{The meta-model for DL models targeting deep FNNs.} \label{fig:meta-model} \end{figure} \subsection{Graph Transformations for Smell Detection} In this paper, the meta-model is presented as a type graph and each model is a graph instantiating the type graph. Each DL program is converted to a graph as well. As a straightforward approach, graph transformations are chosen to implement the verification rules. Each verification rule is implemented as one or more graph transformations or graph processing operators. In fact, graph transformations are used to detect possible faults in a model, faults that are caused by violating the verification rules. Consequently, a transformation is applicable where the conditions of the corresponding rule are violated. In other words, if the conditions of a verification rule are violated, representing a fault in the model, then the graph operation(s) of that rule will be applicable. Graph transformations are very flexible for detecting violations of such conditions in a graph.
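One such violation check can be sketched in plain Python (ours, not GROOVE syntax): a "rule" matches a node pattern (its left-hand side), checks a forbidden context (its negative application condition), and its application tags the offending node with a fault code.

```python
# Sketch of the LHS/NAC/RHS idea (ours, not GROOVE): a rule applies to every
# node whose attributes match `lhs` and do not match `nac`; the application
# side-effect (the RHS) records a fault code on the node.
def apply_rule(nodes, lhs, nac, fault_code):
    for attrs in nodes.values():
        if lhs(attrs) and not nac(attrs):
            attrs.setdefault("faults", []).append(fault_code)
    return nodes

# Hypothetical check: a layer declares a bias although it is batch-normalised.
nodes = {
    "conv1": {"type": "conv2d", "bias": True, "batchnorm": True},
    "conv2": {"type": "conv2d", "bias": False, "batchnorm": True},
}
apply_rule(
    nodes,
    lhs=lambda a: a.get("bias") and a.get("batchnorm"),
    nac=lambda a: a.get("type") not in ("conv2d", "dense"),
    fault_code="F7_bias_with_batchnorm",
)
assert nodes["conv1"]["faults"] == ["F7_bias_with_batchnorm"]
assert "faults" not in nodes["conv2"]
```

The fault code name is invented for illustration; in the actual tool, rule matching is performed by GROOVE over the full attributed graph rather than node-by-node.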
Recalling that a graph transformation \textit{r} is defined by a triplet \textit{$(LHS_r, RHS_r, NAC_r)$}, a specific condition is checked by finding a match of \textit{$LHS_r$} in the graph and/or the absence of \textit{$NAC_r$}. Once a graph operation is applied, i.e., a fault is detected in a part of the graph, a specific fault code is added to the node or edge in which the violation occurred. This action is represented by the right-hand side of the rule, \textit{$RHS_r$}. \subsection{The tool} In this section, we describe our approach, \tool{}, for detecting faults in DL programs. \tool{} is a model-based automated approach that performs a static analysis of a DL program to detect faults and design inefficiencies. Algorithm shows the pseudocode of \tool{}. The inputs are a DL program and a graph grammar, i.e., a set of graph transformation rules. As presented in Algorithm , \tool{} has three main steps: extract a graph from the DL program, perform graph checking and generate a report from the resultant graph. At first, the DL program is modeled as a graph that conforms to the proposed meta-model, i.e., the type graph. Then, a checking process runs to find bugs/issues in the model. This process attempts to apply rules to the graph and stops when further rule application becomes impossible. Then, \tool{} traverses this graph to generate a report for the user, containing a description of the faults and design issues found for each component. Except for graph checking and graph transformations, all other parts of \tool{} are implemented in Python. We discuss the details of each step in the rest of this section.\\ \begin{figure} \centering \includegraphics[width=\linewidth]{figs/tool.pdf} \caption{The tool uses the DNN built with ML framework APIs to model the DNN as a graph and then checks the model to detect potential design smells.} \label{fig:tool} \end{figure} In our graph-based approach, a DL program is modeled by a graph instance conforming to the type graph, i.e., the meta-model.
To fulfill this primary step, we implement the graph generation relying on static code analysis that examines the source code and extracts, from the relevant code units and segments, the information needed to instantiate the type graph's components. This provides a more holistic and semantic view of the analyzed DL program, which allows detecting faults related either to misuse of a DL library's API or to DL algorithm implementation requirements. Hence, a specific graph generator should be implemented for each supported DL library. Without loss of generality, \tool{} currently supports DL programs written using TensorFlow and Keras, as two well-known and popular libraries. It should be noted that \tool{} can be extended to work with other DL libraries as well. In the following, we describe the steps of modeling DL programs as attributed graphs. The verification rules are implemented as graph transformations to process and verify the graph. Each graph transformation applies to the graph if the conditions of the rule are violated. Once the DL source code is modeled as a graph, the violations of rules can be detected with a graph transformation tool that executes the sequence of rules over the model of the DL program. In this paper, we have used the GROOVE toolset \cite{rensink2004groove} to perform graph operations. GROOVE is a tool for implementing, simulating, and analyzing graph transformation systems. It is capable of recursively exploring and collecting all possible rule applications over a host (start) graph. This is referred to as the exploration of the state space of a graph grammar. GROOVE explores the state space by applying a slightly modified version of standard graph traversal algorithms, like depth-first search (DFS) or breadth-first search (BFS). Furthermore, it has a graphical interface for editing graphs and rules, and for exploring and visualising the GTS, and it can also be invoked from the command line.
The output of GROOVE is called the final graph, on which no further rule application is possible. For more information about GROOVE's internal mechanism and its capabilities for modeling and simulating GTS, the interested reader may refer to \cite{ghamarian2012modelling}.\\ \section{Evaluation}\label{evaluation} \subsection{Evaluated programs} We manually inspected, for each type of bug, the top-10 relevant SO posts (according to the built-in SO relevance criterion) mentioning one or more of its associated keywords. We consider SO posts containing full code scripts or code snippets that are related to one or multiple bugs belonging to the above-mentioned categories. \subsection{Results}
I found one of my abusers on Facebook a few months ago. Finding him wasn't as monumental as I thought it would be. I expected it to hit me like a car crash but instead it was more like a wave. I gasped, held my breath and let the wave wash over me. I came up for air. Then the wave was gone and it was just me floating in the calm water. I owe it to recovery that I didn't immediately do anything. I could've lashed out and sent him an angry message or I could've tried to figure out where he lived so that I could hold something over him. It was odd, even to me, that I didn't burst into tears. I never even journalled about it. Many people relate recovery to addiction but people are in recovery for all sorts of reasons. Recovery is like reprogramming. It requires a commitment to learning to feel and to reject the desire to numb. It's antifreeze for the soul. I like the Google definition: the action or process of regaining possession or control of something stolen or lost. What my abusers stole from me was the freedom to live without fear, without bracing myself for the next impact. I creatively came up with ways to survive what was done to me, most of which only made things worse. Recovery gave me tools that work (when I remember I have them and use them). I was 11 years old when he abused me. He was the babysitter's friend. I knew his name because I wrote it in my Hello Kitty diary. Even though I have always remembered his face, his smell and the feel of his hands on my little girl body, a part of me wondered if any of it was real. I thought that seeing his face would validate me. He is a real person. He exists. Instead, I was left with the feeling that I was missing something really important. My Dad jokes that if you want him to remember something trivial he will have to forget something important. That's how I describe dissociation. For victims of abuse, it's a survival mechanism that allows us to separate ourselves from what would otherwise be unbearable. 
It has always bothered me that I have friends who can remember every teacher they ever had or who their best friend was in 1st grade. I have very few memories from my childhood. My brain power was dedicated to navigating abuse, addiction, poverty and depression. My kids remember the names of all the stuffed animals they've ever had. That fact brings me so much joy. Instead of remembering, I collected. I kept notes, birthday cards, even Valentines. I put them in paper grocery bags and kept them in my closet. Those pieces of paper were pieces of me. They were proof that I existed. I may not have remembered anything about Shauna in second grade but I had the Valentine she gave me. If she was real, I was real. When I was a teenager, my stepmom threatened that if I didn't clean my (bordering on biohazard) room, she would. I didn't and she did. She saw those grocery bags and thought they were trash. When I found out that she threw them away, I was devastated. She had no idea that she inadvertently threw out proof of my existence. I had no memories and now I had no proof. She couldn't possibly understand why I was so upset because I had never told anyone what I had gone through, much less how I felt inside. Amazingly, the Hello Kitty diary survived. And so have countless journals, new bags of notes, letters and cards. To this day, I still need to write it out or it's not quite real. Recently, I felt an urge to go back to his Facebook profile. I found a picture that he posted years ago but I somehow missed the first time I looked. It's a picture of him about the age he was when he molested me. This time, I cried. For a few moments, I indulged the urge to wonder what it would be like if I messaged him. What were you thinking? Do you even remember me? Do you have any idea what you've done? I played with it in my mind and realized that none of his answers would mean anything to me. I don't want to hear them. He is no one. He never needed to be found. 
From the moment I sparked into existence, I have been held by God, guided by angels on earth and cared for by family, friends and strangers in profound and humbling ways. I may never be able to fully remember the little girl I was but she shows herself in the woman I have become. She is fully embodied in the lives of my playful and joyful children. She shows herself in my quirky sense of humor. She is the tiny hand that holds yours when you're sad and need comforting. She once was lost but now is found.

I tried to find my abuser but I can't. I heard he was working as a teacher, I hoped to find him so I could maybe report him if he was. He shouldn't be around small innocents. It hurts me that I can't find him.

Thank you for writing about this. Sad and poignant and powerful, all at the same time.

Thanks so much for sharing – very relevant to my life. I had a somewhat similar experience a while ago and found tremendous relief in just saying the same to someone who knew him, and validating that the person existed.
Clube Desportivo de Portugal is a Portuguese club located in the parish of Bonfim, municipality of Porto, district of Porto. The club was founded on 25 August 1925. Its home matches are played at the Rui Navega ground. Clube Desportivo de Portugal has around 150 athletes spread across its football teams at the various age levels, from infantis, iniciados, juvenis and juniores up to seniors; in the 2021-2022 season, its senior football team competes in the Divisão de Honra of the Associação de Futebol do Porto. It is one of the most popular clubs in the eastern part of the city of Porto and currently has around 450 members.
A Long March-7 Y5 rocket carrying Tianzhou-4 cargo spacecraft, with supplies for the Chinese space station under construction, takes off from the Wenchang Spacecraft Launch Site in Hainan province, China May 10, 2022. (File photo: Reuters)

China to launch next crewed mission on Sunday to build space station

Reuters, Beijing
Published: 04 June 2022, 09:14 AM GST
Updated: 04 June 2022, 09:25 AM GST

China will launch a spacecraft on Sunday carrying three astronauts to the core module of the unfinished Chinese space station, where they will work and live for six months as construction enters advanced stages.

A Long March-2F rocket carrying the Shenzhou-14 spacecraft is set to blast off from Jiuquan Satellite Launch Centre in the northwestern province of Gansu at 10:44 a.m. local time (0244 GMT) on Sunday, a China Manned Space Agency official told a news conference on Saturday.

Mission commander Chen Dong will be accompanied by Liu Yang and Cai Xuzhe aboard Shenzhou, meaning "Divine Vessel" in Chinese.

"All preparations for the launch are basically ready," said Lin Xiqiang, an agency official.

Shenzhou-14 will be the third of four crewed missions - and the seventh of a total of 11 missions - needed to complete the space station by the end of the year.

China began constructing its three-module space station in April 2021 with the launch of Tianhe - the first and biggest of the station's three modules.

Tianhe, slightly larger than a metro bus, will form the living quarters of visiting astronauts once the T-shaped space station is completed.

Following Shenzhou-14, the remaining two modules - the laboratory cabins Wentian and Mengtian - will be launched in July and October, respectively.

Wentian will feature a robotic arm, an airlock cabin for trips outside of the station, and living quarters for an additional three astronauts during crew rotations.
The Shenzhou-14 crew will help with the setup of Wentian and Mengtian and conduct functionality tests on both modules.

The space station will have a designed lifespan of a decade. At 180 tons, it will be slightly heavier than Russia's decommissioned Mir, and about 20 percent of the International Space Station by mass.
{"url":"https:\/\/static.iter.org\/imas\/assets\/smiter\/html\/intro\/smiter.html","text":"# 1. Introduction\u00b6\n\nThe SMITER fieldline tracing code together with its graphical user interface (GUI) provides a simulation framework for variety of use cases. Its main uses at ITER Organisation are:\n\n\u2022 Power deposition mapping for first wall and divertor plasma-facing components\n\u2022 Input to control algorithms and production of synthetic surface temperatures for diagnostic design\n\nThe GUI framework provides CAD integration, meshing, visualisation, scripting, and state storage in hierarchical data files (HDF) with several simulation cases in one study. SMITER uses the SMARDDA [Arter16] kernel for field line tracing that was thoroughly benchmarked against the CEA field line following code PFCFLUX [Fird13]. SMITER allows fast and accurate calculation and prediction of the power deposition by the plasma on the limiter and the divertor geometries.\n\nTechnical features of SMITER are:\n\n1. accurate field-line integration which includes user controlled tolerance to check the convergence, cubic spline interpolation of the magnetic field, the option to use flux coordinates for speed and accuracy in simple flux geometries, and vacuum field option in 3D space,\n2. local or global calculations for the limiter or the divertor,\n3. toroidally periodic feature which offers the ability to specify only 1\/18th of the geometry SMITER engine features.\n\nSMITER requires accurate representations of first wall geometry and magnetic field equilibrium as an input. Optionally, ripple magnetic field representation can be added to the study.\n\nThe graphical user interface of the SMITER code (SMITER GUI) has been developed in order to facilitate easier manipulation of geometry and representation of the magnetic field. 
The SMITER GUI application is integrated in the [SALOME] platform, an open software framework for the integration of numerical solvers in various physical domains. The SMITER GUI provides a user-friendly and efficient interface for data preparation and post-processing of the SMITER simulation results, with enhanced visualisation output. It provides an interface for field-line tracing, shadowing effects, and power surface deposition calculations. The GUI framework integrates the SMITER software into a single, extensible framework in which model set-up, visualisation, code execution and analysis of the results can be performed. It also provides a built-in 3D analysis tool that reproduces the functions normally provided by ParaView as an external tool to SMITER.

All input parameters required by the SMITER workflow are easily visible and modifiable by the user in the interface. The GUI allows import of CAD objects, meshing or input of meshes produced by NASTRAN, and input of EQDSK files which describe the plasma equilibrium. The SMITER GUI consists of several pre-processing and post-processing modules around a central SMITER module that supports modelling of several SMITER cases in one study, which is then saved as an HDF5 (Hierarchical Data Format) file. The study file contains all the data required to repeat previous cases (configuration, equilibrium, geometry and results). The CAD geometry set-up is integrated in one environment within the SMITER GUI, which includes visualisation, selection, modification and meshing. 
The workflow shown in Fig. 1.4 is unified: the steps in the blue box (CAD geometry preparation and meshing) are integrated into one interface, without the need for external, proprietary meshing tools.

This documentation provides an introduction to the SMITER GUI, step-by-step instructions on how the SMITER GUI works, and tutorials for the cases and studies. It is included as help in HTML and PDF formats in the SMITER GUI framework. Furthermore, the code of the SMITER GUI component is documented for users and developers needing a programmatic interface to the framework.

[Arter16] W. Arter et al., Power Deposition on Tokamak Plasma-Facing Components, IEEE Tr. Plasma Sc. 42(7), 2016(v2), p. 1932-1942, arXiv:1403.7142
[Fird13] M. Firdaouss et al., Modelling of power deposition on the JET ITER like wall using the code PFCFLUX, J. Nucl. Mat. 438 (2013) S536-S539
[SALOME] Salome platform website http://salome-platform.org

## 1.1. Power Deposition Model

SMITER maps profiles of scrape-off layer (SOL) heat flux density flowing parallel to the magnetic fieldlines onto the plasma facing component (PFC) surfaces. Magnetic fieldlines on the flux surfaces within the magnetic equilibrium need to be followed in 3D space until they intersect a solid surface. The geometry is obtained from the provided CAD model of the structure, converted to a high-precision triangular mesh. In practice, fieldlines are followed backwards from the surface in question, with proper mapping of the heat flux profile specified in the free SOL, taking into consideration the magnetic flux expansion. This fieldline tracing must take into account the neighbouring structures around the object of interest, in order to ensure that the fieldlines are not intersected by other solid surfaces. Otherwise, the fieldline is shadowed, resulting in zero heat flux at the place it started. The SMITER algorithms are programmed for two cases:

1. limiter case
2. divertor case

The geometry for both cases is separated into two types:

1. the "result geometry" – the part where the power deposition is calculated,
2. the "shadowing geometry" – the part which protects the edges of the "result geometry" by fieldline shadowing.

The "shadowing geometry" covers the "result geometry". That means that some fieldlines hit the shadow geometry before they can hit the target geometry, which results in no power deposition in that region of the target.

The magnetic equilibrium, usually described in text-format files with the .eqdsk extension, is defined as a simple model whose input parameters, besides the equilibrium file, are the decay length and the power loss. The decay length $$\lambda_{q||}$$ (sometimes $$\lambda_{m}$$) is the power decay length at the outer midplane, defined in $$mm$$. The power loss $$P_{SOL}$$ (sometimes $$P_{loss}$$) is the total input power to the plasma crossing the separatrix, defined in $$W$$. In order to construct the radial profile of parallel power flux we impose 0D power balance at the outer midplane.

Fig. 1.2 Outer midplane point. Black polyline represents wall mesh.

Heat flux at the PFC is calculated from

(1.1) $$q_{\parallel}(r)=q_{\parallel omp}\exp(-(r-r_{sep})/\lambda_q),$$

where $$q_{\parallel omp}$$ is defined as

(1.2) $$q_{\parallel omp}=P_{SOL} / (4 \pi R_{omp}\lambda_q (B_{\theta}/B_{\Phi})_{omp})$$

Magnetic field line tracing gives the power flux anywhere else, accounting for flux tube distortion. Power deposition is then computed with the equation

(1.3) $$q_{\parallel PFC}=q_{\parallel}(r)R_{omp}/R_{PFC}$$

Note that specifications are always for a heat flux density parallel to magnetic field lines, not for perpendicular heat fluxes onto plasma facing components. Because of that, a full field line trace is needed in order to obtain the component heat flux by projection onto the real surface. 
To do that, use the following equation:

(1.4) $$q_\bot = q_\parallel \sin{\alpha}$$

Fig. 1.3 Flux tube of the magnetic field $$B$$ connecting the torus midplane and surface.

The power deposition in SMITER is calculated using a mathematical model with flux tubes. The basic idea is explained in Fig. 1.3, where the flux tube connects the tokamak midplane with a physical surface of area $$A_1$$. The input power at the midplane is traced through the flux tube to the surface at the bottom, and particles follow the fieldlines.

The power at the top of the tube falls exponentially with radial distance, at an empirically determined rate $$\lambda_m$$, from the last closed flux surface (LCFS). The LCFS is given by $$\psi = \psi_m$$, where $$\psi_m$$ is the value of the poloidal flux where the geometry touches the plasma, or is equal to $$\psi$$ at the X-point in the case of divertor plasmas. The power density $$Q$$ deposited on the PFC varies as:

(1.5) $$Q = C_{std} \cdot \mathbf{B} \cdot \mathbf{n} \cdot \exp\left(-\frac{\psi - \psi_m}{\lambda_m R_m B_{pm}}\right),$$

where $$\psi$$ is the flux function value for the tube at the midplane, $$B$$ the magnetic field, $$n$$ the normal of surface $$A_1$$, $$B_{pm}$$ the midplane poloidal component of the field, and $$R_m$$ the major radius.

## 1.2. Workflow

SMITER is usually installed in a directory ~/smiter, under which the GUI framework is also built. The GUI is a single-study, multiple-case interface that allows different workflows depending on the input provided and the case study.

The SMITER framework is composed of several modules for pre- and post-processing. Pre-processing provides transformation of input CAD surfaces into the meshes required by the main module SMITER, which performs the calculation. 
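The midplane power balance of Eqs. (1.1)–(1.4) is simple enough to sketch directly. The Python fragment below is illustrative only: the numerical values are made up, not ITER design values, but each function mirrors one of the equations above.

```python
import math

def q_parallel_omp(p_sol, r_omp, lambda_q, b_pol_over_b_tor):
    """Peak parallel heat flux at the outer midplane, Eq. (1.2)."""
    return p_sol / (4.0 * math.pi * r_omp * lambda_q * b_pol_over_b_tor)

def q_parallel(r, r_sep, q_omp, lambda_q):
    """Exponential radial SOL profile, Eq. (1.1)."""
    return q_omp * math.exp(-(r - r_sep) / lambda_q)

def q_surface(q_par, r_omp, r_pfc, alpha):
    """Map to the PFC radius and project onto the surface, Eqs. (1.3)-(1.4)."""
    return q_par * (r_omp / r_pfc) * math.sin(alpha)

# Illustrative numbers only (SI units: W, m, radians):
q0 = q_parallel_omp(p_sol=100e6, r_omp=8.2, lambda_q=0.012,
                    b_pol_over_b_tor=0.25)
qp = q_parallel(r=8.206, r_sep=8.2, q_omp=q0, lambda_q=0.012)
print(q_surface(qp, r_omp=8.2, r_pfc=5.5, alpha=math.radians(3.0)))
```

Note how a small incidence angle $$\alpha$$ reduces the large parallel flux to a much smaller perpendicular surface load, which is the point of shaping PFCs to be nearly tangent to the field.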
Post-processing allows analysis of the results with the ParaView visualisation module and export of the data to other formats.

The SMITER module consists of several codes that transform input data into the expected format. The transformation can start with building up the CAD model or from meshes provided externally. The CAD model is built from curves and surfaces defined by non-uniform rational basis splines (NURBS). However, intersections of NURBS surfaces are only approximately represented by NURBS: approximation is needed in order to compute the parametric representation of non-parametric algebraic intersection curves, and this approximation can result in small gaps between the surfaces. In order to avoid these small gaps, surfaces are represented by triangles, that is, the CAD geometry is meshed. Meshing of the CAD model can be done with the meshing module SMESH inside SMITER or externally, in this case with MSC Nastran.

Another input that SMITER requires for the calculation is the magnetic equilibrium file (EQDSK file). The GEOQ code is used to analyse the flux function, usually defined in the EQDSK file. It produces contour plots of the flux gradients, and overlays flux function contours with the silhouette wall and CAD geometry as $$(R, Z)$$ points.

The HDSGEN code computes the multi-octree hierarchical data structure (HDS), which is designed to accelerate the computation of track/ray-triangle intersections.

The POWCAL code performs the power surface deposition calculation, following the fieldlines using the transformed geometry from GEOQ and the HDS from HDSGEN.

The vacuum magnetic field is defined with MAGTFM.

The complete workflow from the start to the end of the calculation can be described as:

1. Inputs
• Magnetic equilibrium from simulation or experiment
• CAD surfaces (STEP) or 3D triangular meshes (NASTRAN)
• Heat flux profile
2. Shadowing and heat flux calculations on 3D surfaces
• Particles are following magnetic field lines
3. Outputs
• Heat flux and incident angles
• Thermal model (in development)

Fig. 1.4 SMITER workflow

Fig. 1.4 shows the top-down data flow through several SMITER GUI modules.

### 1.2.1. Input

Preparation of input data for SMITER consists of several pre-processing steps that are all handled in 3D space. Throughout these steps, different coordinate systems are used according to their convenience for a particular operation in space:

1. The first step before calculating the power deposition is the description of the PFC geometry. This can be provided by many commercial and open-source Computer-Aided-Design (CAD) programs, such as SMESH included in the SMITER GUI. CAD data for the first wall geometry is usually provided in a CATIA geometry database or in portable STEP format. The geometry is described in Cartesian coordinates (X, Y, Z) with dimensions in millimetres.
2. The cylindrical coordinate system is used for the description of magnetic fields and their parameters. Cylindrical polars form $$RZ\zeta$$ space, where R and Z are in metres (m) and $$\zeta$$ in radians.
3. Toroidal polars in $$r\theta\zeta$$ space (or in the form $$r\theta\xi$$), where r is in metres and angles are in radians. The angle is measured from the vertical through the X-point.
4. The flux-mapped coordinate system is used for the description of the flux topology. Flux coordinates are in $$\psi\theta\zeta$$ space (or $$\psi\theta\xi$$). Angles are in radians and are measured from the vertical through the X-point.

The SMITER codes have many configurable options and switches passed in at run time as control (.ctl) files, which are actually standard FORTRAN namelist files used for changing a code's internal variables. The standard way to execute FORTRAN codes with configurable parameters is:

code .ctl file

e.g. geoq wall, where the file wall.ctl contains inputs controlling the geoq run.

The .ctl files contain Fortran namelists which must appear in a strict order, although the variables within a namelist may be (re)defined in any order. Variables are documented using doxygen.

Example wall.ctl file input to geoq:

&inputfiles
vtk_input_file='wall.vtk',
eqdsk_input_file='16_97s.eqdsk'
/

&miscparameters
/

&plotselections
plot_geoqx = .true.,
plot_geoqvolx = .false.,
/

&beqparameters
beq_rmove=-6.,
beq_cenopt=4,
beq_psiopt=1,
beq_bdryopt=3,
beq_thetaopt=1,
/

Namelist &inputfiles describes input files. Namelist &miscparameters can be used for future/expert use. Namelist &plotselections allows creation of different outputs (see the index of outputs in Section 1.2.2). For namelist &beqparameters one should use the doxygen documentation headed "Data Types List" in the SMITER reference manual (indexed under the "Data Fields" sub-heading).

Editing .ctl parameters for a code is possible with the "editor" included in the GUI, or by using dialog boxes when right-clicking on the code icon of the study. Details of use are described in Sec. Tutorials.

### 1.2.2. Output

All the codes produce a log file, which typically contains time-and-date stamps, information about the files used, a selection of key parameters, execution times and, at the end, a summary of errors and warnings, terminating with "END OF LOG FILE". Output to the terminal (Fortran print command) consists of the code header, errors associated with log-file writing and a small subset of the log-file output, notably serious errors, and the execution times if the code completes normally.

The suite is set up so as to terminate codes when a serious error occurs; other errors are classified as warnings, and execution continues. 
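As a small illustration of the log-file conventions just described, the sketch below scans a .log file for the terminating string and counts warning lines. The file name and contents are hypothetical, and the warning heuristic is ours, not SMITER's; only the "END OF LOG FILE" terminator is taken from the description above.

```python
from pathlib import Path

def check_log(path):
    """Return (finished, n_warnings) for a SMITER-style .log file.
    'finished' means the terminator line was found; the warning count
    is a simple substring heuristic for illustration."""
    lines = Path(path).read_text(errors="replace").splitlines()
    finished = any("END OF LOG FILE" in ln for ln in lines)
    warnings = sum(1 for ln in lines if "warning" in ln.lower())
    return finished, warnings

# Hypothetical example log:
Path("demo.log").write_text(
    "geoq run 2020-01-01\nWarning: minor issue\nEND OF LOG FILE\n")
print(check_log("demo.log"))  # (True, 1)
```

A check like this is handy when batches of cases are run in non-GUI mode and only the logs are available.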
Most codes also produce a .out file, which is primarily used for inter-code communication, and which also acts as a "lock-file", in that the same .ctl file cannot be re-run until the corresponding .out file is deleted. (Good practice is to change the .ctl file-name if the contents change.)

Below is an index of the suffix strings of output files, with either a terminating .vtk or a graphics (.png, .ps) suffix.

The .vtk files begin with the geometry definition, then the quantities in angled brackets, which are cell-centred rather than point quantities (unless stated otherwise) in case this is an option. J is the Jacobian of the mapping to flux coordinates (so R/J is the right-hand side of the ODE representing the fieldline), $$I = R B_T$$, Bcart is B in Cartesian coordinates, Msign is the mask (0=shadowed) with the sign of B.n (where n is the surface normal), and $$\psi_{start}$$ is $$\psi$$ at the nominal start of the fieldline (i.e. at the end which is NOT the PFC intersection). L=lenpath and objhit are taken from the trackx file header line described below. 
Mapped geometry refers either to flux-mapped coordinates or to cylindricals, depending on the type of calculation.

_geoqvolm
  <$$(0,J,I) (point)$$> geoq output on volume flux-mapped geometry
_beqvolx
  <Bcart (point)> geoq output on volume Cartesian geometry
_geobjq
  <surfaces> hdsgen output in quantised geometry
_geofldx
  <XYZ $$\psi (point)$$ Bcart (point) Normal Bcell> geoq output on Cartesian geometry
_geoptq
  <points> hdsgen output in quantised geometry
_geoqm
  <XYZ Bcart> geoq output on mapped geometry
_hdsm
  <HDS> hdsgen output in mapped geometry
_hdsq
  <HDS> hdsgen output in quantised geometry
_powlenx
  <Q Msign $$\psi_{start}$$ L objhit> _powx file augmented using addlenvtk
_powm
  <Q Msign $$\psi_{start}$$> powcal output on mapped geometry
_powstatm
  <Q-avg Q-dev> powcal output on mapped geometry (EXPERT)
_powstatx
  <Q-avg Q-dev> powcal output on Cartesian geometry (EXPERT)
_powx
  <Q Msign $$\psi_{start}$$> powcal output on Cartesian geometry
_RZsil
  <points $$\psi$$> geoqgnu output in RZ geometry
_silm
  <points $$\psi$$> geoqgnu output in flux-mapped geometry

The following files have no prefix; unless stated otherwise, they are 2-D plots of flux-related quantities (the fieldline track files, of course, only depend on the geometry to the extent of their end-points):

beq+wall
  <$$\psi$$ silhouette> geoqgnu output in RZ geometry
dR
  <$$\partial \psi /\partial R$$> geoqgnu output in RZ geometry
dZ
  <$$\partial \psi /\partial Z$$> geoqgnu output in RZ geometry
R
  <$$R$$> geoqgnu output in flux-mapped geometry
RJ
  <$$R/J$$> geoqgnu output in flux-mapped geometry
RZ
  <$$\psi$$> geoqgnu output in RZ geometry showing flux-mapped region
RZoom
  <$$\psi$$> geoqgnu output in RZ geometry confined to flux-mapped region
trackm.*n*01.vtk
  <element $$(n-1)$$ fieldline> powcal output in mapped geometry
trackx.*n*01.vtk
  <element $$(n-1)$$ fieldline> powcal output in Cartesian geometry
Z
  <$$Z$$> geoqgnu output in flux-mapped geometry

In track files, the header line contains:

$$elt=n$$
  i.e. 1+(identifier of the triangular element the line starts at)
sub=1
  (EXPERT)
lenpath
  length of the fieldline track in the file, in mm (accurate in trackx files only)
objhit
  a code:
  >0  1+(element hit)
  0   no collision, ODE integration limits reached
  -1  track left the computational domain
  -2  track hit the mid-plane

## 1.3. Description of Constituent Codes

Fig. 1.4 (Workflow) indicates how the codes are combined to perform a shadowing calculation. Each of the codes is briefly described below. As described in the introduction, they are driven ultimately by namelists in .ctl files. The namelists expected in the respective .ctl files are listed below. Precise descriptions of the usage of the variables in each namelist may be referenced in the online documentation, as data types at smiter/doc/srcdoc/html/classes.html.

### 1.3.1. geoq code

The geoq code may be used to analyse the flux function $$\psi$$, usually defined in a .eqdsk file. (A good deal of effort has been exerted to ensure that SMITER can read the many variants of the EQDSK G "standard".) To aid understanding, geoq produces as output .gnu files which may be plotted using the geoqgnu script (from the smiter/Extras directory). Running geoq makes it possible to:

• Produce contour plots of flux gradients ($$\propto B$$ components)
• Overlay $$\psi$$ contours with the silhouette wall.vtk, and also CAD geometry as $$(R,Z)$$ points
• Obtain the extrema of flux on the geometry $$\psi_{ltr}$$, flux values from the .eqdsk file $$\psi_{q}$$ and the flux value at the nearest X-point $$\psi_{X}$$ as labelled output in the .log file. The user should inspect this output and, if necessary, set up a new input file with an appropriate $$\psi_{m}$$. Sometimes it may be necessary to define a special search box to locate the correct X-point, by setting beq_xsearch=1 in namelist beqparameters.
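geoq's analysis of $$\psi$$ operates on the equilibrium grid, and several utilities (for instance the GUI's LCFS finder described later) rely on bilinear interpolation of that grid. Below is a minimal bilinear-interpolation sketch assuming a uniformly spaced grid; this is not SMITER's own routine, which also offers cubic spline interpolation of the field.

```python
def bilinear(psi, r_grid, z_grid, r, z):
    """Bilinearly interpolate psi[i][j] sampled at (r_grid[i], z_grid[j]).
    Grids must be uniformly spaced and (r, z) must lie inside the grid."""
    dr = r_grid[1] - r_grid[0]
    dz = z_grid[1] - z_grid[0]
    # Locate the cell, clamping to the last cell at the upper edge.
    i = min(int((r - r_grid[0]) / dr), len(r_grid) - 2)
    j = min(int((z - z_grid[0]) / dz), len(z_grid) - 2)
    tr = (r - r_grid[i]) / dr
    tz = (z - z_grid[j]) / dz
    return ((1 - tr) * (1 - tz) * psi[i][j]
            + tr * (1 - tz) * psi[i + 1][j]
            + (1 - tr) * tz * psi[i][j + 1]
            + tr * tz * psi[i + 1][j + 1])

# Toy flux function psi = R*Z on a 2x2-cell grid; bilinear is exact for it:
rg, zg = [4.0, 5.0, 6.0], [-1.0, 0.0, 1.0]
psi = [[r * z for z in zg] for r in rg]
print(bilinear(psi, rg, zg, 4.5, 0.25))  # 1.125
```

Finding the LCFS then amounts to locating the contour $$\psi = \psi_m$$ of this interpolated function.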
The .ctl file also controls whether inboard (HFS) or outboard (LFS) values of $$R$$ and $$B$$ are used in the power calculation (to account for the effects of flux expansion), via input beq_bdryopt in namelist beqparameters.

It is also possible to define the flux function analytically: if the variable equil_input_file has the suffix .ana, this file must contain namelists as set out below. The suffix .equ is also recognised and corresponds to an output format related to that produced by the FIESTA code; see for example the MAST-U test case Test-MNEW8-sha12-res12.

Namelist order in .ctl is as follows:

• inputfiles
• miscparameters
• plotselections
• beqparameters

Namelist order in .ana is as follows:

• equilparameters
• meshparameters

### 1.3.2. hdsgen code

This computes the multi-octree HDS, which is designed to accelerate the computation of track/ray-triangle intersection. The user should not normally need to be concerned with the details of hdsgen operation, except for the need to increase the parameter limit_geobj_in_bin for geometries with a large number of triangles, i.e. greater than approximately 100000 triangles, depending on their degree of clustering.

Namelist order in .ctl is as follows:

• inputfiles
• hdsgenparameters
• btreeparameters
• positionparameters
• plotselections

### 1.3.3. powcal code

This is the code that actually performs the power surface deposition calculation, following fieldlines using the transformed geometry from geoq and the HDS from hdsgen. There is the capability to select from a range of power deposition profiles, and even to define new ones, provided by namelist edgprofparameters (see the documentation for namelist edgprofparameters). 
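At its core, following a fieldline means integrating an ODE along the magnetic field direction. The sketch below integrates $$dX/ds = B/|B|$$ with a classical RK4 step in Cartesian coordinates, using a fabricated, purely toroidal field; SMITER's actual integrator instead works with the mapped-coordinate form (with $$R/J$$ on the right-hand side) and user-controlled tolerances.

```python
import math

def unit_b(x, y, z):
    """Toy axisymmetric, purely toroidal field direction (illustrative
    only; real SMITER fields come from geoq/magtfm, not this formula)."""
    r = math.hypot(x, y)
    return (-y / r, x / r, 0.0)

def rk4_step(pos, h):
    """One RK4 step of dX/ds = B/|B| (arc-length parametrisation)."""
    def f(p):
        return unit_b(*p)
    k1 = f(pos)
    k2 = f([p + 0.5 * h * k for p, k in zip(pos, k1)])
    k3 = f([p + 0.5 * h * k for p, k in zip(pos, k2)])
    k4 = f([p + h * k for p, k in zip(pos, k3)])
    return [p + h / 6.0 * (a + 2 * b + 2 * c + d)
            for p, a, b, c, d in zip(pos, k1, k2, k3, k4)]

# For a purely toroidal field the fieldline is a circle, so the major
# radius should be conserved along the trace:
p = [6.2, 0.0, 0.0]
for _ in range(100):
    p = rk4_step(p, 0.05)
print(math.hypot(p[0], p[1]))  # ~6.2
```

In the real code each such step is followed by a query of the HDS to test whether the segment just traced intersects a triangle of the shadowing or result geometry.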
For global calculations, a wide range of fieldline termination criteria may be set using namelist termplaneparameters, including multiple simultaneous criteria, which is useful for private flux region calculations.

Partly to indicate that execution is continuing successfully, powcal outputs to the terminal the numbers of elements for which the starting direction is uncertain, together with the values of B.n in the Cartesian coordinate system and in the mapped system which, because of their different signs, gave rise to the condition (these elements are regarded as shadowed). powcal local or "escape" calculations give rise to warnings in the .log file that "Object not found in binary tree", one for each element that is thus regarded as illuminated or "wetted".

powcal may also produce the track files described in SMITER output files if the right plotselections are made. The utility addlenvtk is provided to process connection length information and append it to the corresponding _powx.vtk file (the root of which must be specified), producing a file root_powlenx.vtk.

Namelist order in .ctl is as follows:

• inputfiles
• miscparameters
• plotselections
• powcalparameters
• (termplaneparameters if termination_planes=T)
• (edgprofparameters if more_profiles=T)
• odesparameters

### 1.3.4. magtfm code

Generally, in SMITER, the goal is to study field lines in an axisymmetric magnetic field. For most cases, the magnetic field is given in the form of an equilibrium file with 2D data, which repeats itself at every step around the torus. The 3D field assumption is different: in this case the magnetic field is represented as a 3D vector field that contains magnetic field components along all three coordinate axes.

The purpose of taking the full 3D field into account is to study the effect of perturbations or anomalies of the magnetic field in the tokamak. 
Those perturbations could appear because of coil misalignment, deformation of a coil due to its mass, installation errors of the coils, etc.

The full 3D field case inside SMITER was primarily designed for use on the ITER tokamak geometry, but can easily be extended to other tokamaks as long as the input data follows the specified input format.

Namelist order in .ctl is as follows:

• magfiles
• plotselections
• miscparameters

In this documentation, several studies on the 3-D magnetic field are presented. These studies were performed in order to verify the accuracy of the MAGTFM code.

• First step: Create a uniform 3-D magnetic field and apply it to the toroidal target of first wall panels (FWP) 4. The goal of this study was to verify the code by showing that for an axisymmetric 3-D magnetic field the result is an axisymmetric power deposition. One expects that the wetted area on every FWP 4 in the toroidal direction should be the same.
• Second step: Repeat the NF_55 benchmark case, but this time with a 3-D magnetic field. First the study with the axisymmetric magnetic case was prepared; one expects the same wetted area as in the NF_55 benchmark case. Then the case was run with a perturbed 3-D magnetic field. Results for both cases are presented here.
• Third step: Use the full toroidal FWP 4 with a circular plasma equilibrium. The goal was again to first use an axisymmetric 3-D field and show that each panel has the same wetted area, as one would expect. Then the perturbed 3-D magnetic field was studied. The differences in wetted area are presented here.
• Fourth step: Take the studies from step 3 and increase the helicity in order to magnify the differences in wetted areas. Both studies are presented in this chapter.

#### 1.3.4.1. SMITER 3D file format

The input file to SMITER with a 3D magnetic field is defined in a .txt file and has the following format:

Zeta grid
Nzeta
Zeta_1
Zeta_2
:
Zeta_Nzeta

R grid
Nr
R_1
R_2
:
R_Nr

Z grid
Nz
Z_1
Z_2
:
Z_Nz

B(component:{x,y,z},zeta,r,z). Toroidal sector in negative y adjoining x=0.
Bx(R_1, Z_1, Zeta_1) By(R_1, Z_1, Zeta_1) Bz(R_1, Z_1, Zeta_1)
:
Bx(R_1, Z_1, Zeta_Nzeta) By(R_1, Z_1, Zeta_Nzeta) Bz(R_1, Z_1, Zeta_Nzeta)
Bx(R_2, Z_1, Zeta_1) By(R_2, Z_1, Zeta_1) Bz(R_2, Z_1, Zeta_1)
:
Bx(R_Nr, Z_1, Zeta_Nzeta) By(R_Nr, Z_1, Zeta_Nzeta) Bz(R_Nr, Z_1, Zeta_Nzeta)
Bx(R_1, Z_2, Zeta_1) By(R_1, Z_2, Zeta_1) Bz(R_1, Z_2, Zeta_1)
:
Bx(R_Nr, Z_Nz, Zeta_Nzeta) By(R_Nr, Z_Nz, Zeta_Nzeta) Bz(R_Nr, Z_Nz, Zeta_Nzeta)

The magnetic field is given on a structured grid with equally spaced steps in each direction ($$\zeta, R, Z$$). First, one set of sample points is given, as specified above. $$R$$ and $$Z$$ are specified in metres, while $$\zeta$$ is specified in radians, one value per line. The magnetic field $$B$$ is given in Cartesian components, one vector per line ($$Bx, By, Bz$$). Note that for SMITER input, the B vectors are ordered with $$\zeta$$ varying the fastest, then $$R$$, and finally $$Z$$ varying the slowest. Each of the 4 items ($$\zeta$$, $$R$$, $$Z$$ and $$B$$) in the file is separated from the next by a blank line, and each item starts with a description line followed by the array size, except in the case of B.

The vacuum magnetic field is defined in a .txt file, in what will be referred to as mag format. Details of the required contents may be deduced by inspection of the example file ./smiter-aux/Data/Equilibrium/testov5.txt.

#### 1.3.4.2. Inputs and Outputs

The input parameters that correspond to the 3D magnetic field case are:

• data_layout: corresponds to the format of the input file that contains the 3D magnetic field components, given in a .txt file. For working with the full $$360^{\circ}$$ ITER magnetic field, there are two main options.
• data_layout=14: corresponds to magnetic field data where the number of points in the toroidal direction ($$\zeta$$ direction) $$n_{\zeta}$$ minus 1 ($$n_{\zeta}-1$$) is a prime number. This is unfortunate, because the existing Fourier transform algorithm then requires an expensive matrix multiply, which is possibly also subject to significant rounding error.
• data_layout=12: corresponds to magnetic field data where the number of points in the toroidal direction ($$\zeta$$ direction) $$n_{\zeta}$$ minus 1 ($$n_{\zeta}-1$$) is not a prime number. If possible, this option should be used instead of option 14.
• zeta_start: toroidal angle $$\zeta_{start}$$ of the first point in the samples (degrees). This is the angle of the beginning of the first point in the first segment; in the case of ENERGOPUL data this parameter should be set to $$-10^{\circ}$$.
• plot_bcartx: plot option that creates a 3D vtk plot of the magnetic field components B. The magnetic field can thus be visualised in ParaView.
• i_requested: this number is the product of the toroidal magnetic field $$B_T$$ and the nominal radial position $$R$$ at which it is taken, i.e. i_requested = $$B_TR$$.
• mode_cutoff: relative mode cut-off amplitude, default $$.0001$$.

## 1.4. Graphical User Interface

The SMITER graphical user interface (GUI) uses several modules (or components) that can be used independently and interoperate in the workflow. Only one module is active at a time. Modules create data that is usually stored in a study, which is saved into an HDF file for reuse.

Fig. 1.5 Main window of SMITER GUI.

We can use the New case icon to create a new case. 
To save the study use icon that will write HDF file containing all data.\n\nInitial SALOME window window contains the following important icons:\n\nInitial SALOME window\nNew case Starts New case in Salome.\nSave study Saves current study.\nOpen SMITER module Opens SMITER module in SALOME.\nOpen Mesh Module Opens Mesh Module in SALOME.\nOpen Geometry Module Opens Geometry Module in SALOME.\nOpen Study Opens study.\n\nMain SMITER GUI modules are:\n\nSmiter module\n\nWith a click on we activate Smiter module. If a study has not been created yet, the message box menu will appear. In this case we can click New button to start the a new case or Open button to open the existing case.\n\nActions are available through icons shown in the toolbar, by right-clicking on the study tree depending on the context, or through menu as shown in the following figure.\n\nSome of actions available are listed in the table below.\n\nSMITER Module icons Function\nCreate Wall Mesh Adds ITER wall mesh to Mesh Module. 
Wall is hard-coded to SMITER module and can be used as a reference in displaying results.\nNASTRAN to MESH Converts NASTRAN file to Salome mesh.\nNASTRAN to MESH Converts PATRAN file to Salome mesh.\nSMESH to PATRAN Converts Salome mesh to PATRAN file.\nNew Case Creates new SMITER case.\nVerify case Verifies the setup of SMITER case.\nLoad CTL file parameters case to SMITER case Load CTL file parameters into SMITER case components\nCustom single exe profile Dialog to prepare a custom single exp power deposition profile\nCalculate fieldlines by selecting \u2026 Calculate fieldlines by selecting triangles in ParaViS and a SMITER case\nGet Salome object IOR Displays Interoperable Object Reference (IOR) value of selected Object in Object Browser.\nShow object in Paraview Displays IOR value of selected GEOM or SMESH object in Object Browser and ParaVieW.\nWrite SMESH mesh to IDS Read SMESH mesh data and writes it to IMAS ids.\nWrite EQDSK G to IDS Write the contents of EQDSK G file and store it to IMAS ids.\nRead mesh data from IDS Reads mesh data from IMAS ids and imports it into SMESH\nRead EQDSK data from IDS Reads eqdsk data from IMAS ids and writes it on a file you specify.\n\n### 1.4.1. Case actions\u00b6\n\nThese actions allow user to prepare, modify and run SMITER cases.\n\n New Case Creates new SMITER case.\n\nThis opens a dialog in which the user creates a SMITER case. 
In the dialog the user specifies the following parameters:\n\n\u2022 name\n\nThe name of the case\n\n\u2022 decay length\n\nspecified in meters\n\n\u2022 power loss\n\nspecified in MW\n\n\u2022 Wall mesh (optional)\n\na 2D polyline silhouette mesh from which the reference flux on the silhouette (flux value on the LCFS)\n\n\u2022 Target mesh\n\na triangle mesh on which the power deposition is calculated\n\na triangle mesh that causes shadowing\n\n\u2022 EQDSK file\n\nan EQDSK G file describing the equilibrium\n\n\u2022 MAG file (optional)\n\n3D magnetic field in magtfm format\n\nNote\n\nOnly the meshes that are inside the SMESH module can be selected.\n\n Verify case Verifies the setup of SMITER case.\n\nThis opens a dialog in which you select a Smiter case and verify that the mesh entries (SALOME object entry) is not pointing to an empty object.\n\nThe following scenario explains this. Each object in the SALOME object browser has a unique entry, i.e. \u201c0:1:1\u201d. This entry holds the position of the object in the object browser. Whenever you delete a SALOME object, it is no more visible in the SALOME object browser, BUT a null or empty object still exist with the unique entry. Since the names of the SALOME objects are not unique, the entries are used for actual position of the objects and therefore the underlying data (geometry, mesh, other,\u2026).\n\nNow we create a Smiter case and select a mesh as our target mesh, but find out that we need to change it (resolution, shape, ..) which means delete the old one and produce a new one with the same name. In the Smiter case we will still have the same name for the target mesh, and since the name hasn\u2019t changed it should know how to pick the right mesh. But the actual position of the mesh is stored in the entries and the name only serves for easier recognition. So while the name of the mesh is the same, it\u2019s entry, the unique ID, has been changed. 
In this case if we\u2019d compute the Smiter case, it would fail.\n\nWe can verify the case to see if the mesh are not pointing to an empty object, or also check the checkbox to automatically change the entries to same named mesh objects.\n\n Load CTL file parameters case to Smiter case component Load CTL file parameters into Smiter case components\n\nThis opens a dialog in which you select a component of the Smiter case from the SALOME object browser and a CTL file on the filesystem, after which the contents of the CTL, i.e., parameters are loaded into the component.\n\nThis utility eases the loading of parameters of a CTL file to a Smiter component. Each parameter is also checked if it belongs to the component, so nothing will happen if the wrong CTL is loaded to a Smiter case component.\n\n Custom single exe profile Dialog to prepare a custom single exp power deposition profile\n\nThis action opens a dialog, where the user can set a custom single exponential, by setting the $$Q_0$$, $$\\lambda$$ and ranges for the profile. To apply the profile to a SMITER case, a SMITER case must be highlighted (or clicked). It is mandatory that the target GEOQ has been computed so that the necessary values are read from it\u2019s output for the normalization factor.\n\nThe user can plot both profiles to see how it will look like and to apply the profile to the case click the Apply profile to POWCAL while a SMITER case is selected.\n\n Calculate fieldlines by selecting \u2026 Calculate fieldlines by selecting triangles in ParaViS and a Smiter case\n\nThis action requires that you first select a Smiter case and from an already precomputed POWCAL VTK result, select the triangles for which you would like to see the fieldlines.\n\nAfterwards click on the Calculate fieldlines\u2026 button. This will generate trackx*.vtk files in the run directory. 
When opened in ParaViS you will have fieldlines for those selected triangles.

Under Smiter → Case we can access all the case actions that are in the SMITER toolbar, together with other actions that are normally executed by right-clicking on SMITER objects in the SALOME object browser.

**Find LCFS with interpolations** — Find the LCFS by selecting a SMITER EQDSK object and specifying a flux value.

This action opens a dialog in which you select a SMITER EQDSK object from the SALOME object browser and specify a flux value; bilinear interpolation is then used to find and plot the LCFS curve.

Standard actions for modifying a case:

- **Edit CTL** — Edit the parameters of GEOQ, HDSGEN, MAGTFM and POWCAL objects.
- **Compute case** — Run a SMITER case.
- **Rename case** — Rename a SMITER case.
- **Delete case** — Delete a SMITER case.
- **Duplicate case** — Duplicate a SMITER case.
- **Replace mesh** — Replace the mesh in a GEOQ object.
- **Replace EQDSK** — Replace the EQDSK in an EQDSK object.
- **Plot plasma and limiter geometry** — Plot the LCFS from an EQDSK object.

**Prepare case for running in batch**

This action packages the case by archiving the input files together with a script for running the case in non-GUI mode. It can be used to prepare the case to be run on an HPC by remote submission, or simply to prepare a case for non-GUI use via the terminal.

**Plot gnuplot files** — Plot the output gnuplot files of a SMITER case.

Running a case produces output gnuplot files, and this utility helps you plot them inside SMITER. It opens a dialog in which you select a SMITER case, then browse the case's run directory and select which gnuplot files to plot.

**GEOM/SMESH from GEOQ eqdsk gnuplot files** — Extract the LCFS from the GEOQ eqdsk gnuplot file and export it to GEOM/SMESH.

GEOQ has a switch called plot_eqdsk_boundary, which produces a gnuplot file containing the R and Z points of the LCFS. This action imports these values into SMESH and GEOM.

### 1.4.2. Mesh actions

Different mesh operations that are needed to run a field-line case are included in SMITER.

**Create Wall Mesh** — Adds the ITER wall mesh to the Mesh module. The wall is hard-coded into the SMITER module and can be used as a reference when displaying results.

This opens a dialog in which you can select the following silhouettes:

- c_3NHXN_v3_15
- nf_55_033019
- smiterauxWallmesh

and produce a SMESH mesh and, optionally, GEOM geometries. These silhouettes describe the curve of the tokamak reactor and are used to calculate the reference LCFS flux value on it. They are not required in a SMITER case if you already know the LCFS flux value, which can be set in the target and shadow GEOQ components under beq_psiref.

**NASTRAN to MESH** — Converts a NASTRAN file to a Salome mesh.
**PATRAN to MESH** — Converts a PATRAN file to a Salome mesh.

These actions convert NASTRAN/PATRAN mesh files to SMESH mesh objects.

**SMESH to PATRAN** — Converts a Salome mesh to a PATRAN file.

This action is used to convert a SMESH mesh object to a PATRAN file.

### 1.4.3. CORBA actions

The SALOME architecture is based on Common Object Request Broker Architecture (CORBA) technology, using a distributed-system model of applications. SALOME combines several software components, built in such a way that solvers and existing meshing algorithms can be integrated along with the specification of physical properties for a given domain.
These actions are needed if the user wishes to transfer information between different components.

**Get Salome object Interoperable Object Reference (IOR)**

This action displays the Interoperable Object Reference (IOR) value of the selected object in the Object browser.

**Show object in ParaView**

This action displays the IOR value of the selected GEOM or SMESH object and exports it into ParaViS.

### 1.4.4. IMAS actions

These actions can save and load different data to and from IMAS.

**Write SMESH mesh to IDS** — Reads SMESH mesh data and writes an IMAS IDS.

This opens a dialog in which you select SMESH mesh objects from the SALOME object browser and specify the IDS parameters; the selected SMESH objects are then written to the IDS.

**Read mesh data from IDS** — Reads mesh data from an IMAS IDS and imports it into SMESH.

This opens a dialog in which you specify the IDS parameters, from which meshes are read and stored to SMESH.

**Write EQDSK G to IDS** — Writes the contents of an EQDSK G file to an IMAS IDS.
**Read G EQDSK from IDS** — Reads a G EQDSK from an IMAS IDS and saves it to a file.

Similar in functionality to the actions above, these write/read G EQDSK files to/from an IMAS IDS.

### 1.4.5. VTK actions

These are utility actions for transforming data from VTK format to format X.

**Convert VTK to MATLAB** — Converts data in VTK format to MATLAB format.

In this dialog you select a VTK file and then choose which array or vector quantities to write to a MATLAB-format file.

**Geometry module**

The GEOM module provides many different functionalities for the creation, visualization and modification of geometric CAD models. It can read CAD files in many different formats, including STEP and IGES.
It enables us to create geometrical and topological objects with different modelling operations.

**Mesh module**

This module is used to create meshes on the basis of geometrical models created in or imported into GEOM. It uses a set of meshing algorithms and their corresponding conditions (hypotheses) to compute meshes. Its main functionalities are the computation of meshes based on different hypotheses and algorithms, group management of meshes, and mesh modification. The module also provides information and quality-control functions for computed meshes, as well as importers/exporters for different mesh file formats.

**ParaViS**

ParaViS is a data analysis and visualization application that embeds the ParaView tool inside the SMITER GUI. ParaViS is a post-processing tool used to analyze data using qualitative and quantitative techniques. Data exploration can be done interactively in 3D or programmatically using ParaView's batch-processing capabilities.

### 1.4.6. Preferences

SMITER has several preferences that can be set after the module is activated. Under File ‣ Preferences ‣ Smiter the following sections can be set, or are enforced or suggested by environment variables:

**Path settings**

Smiter directory points to the SMARDDA exec/ subdirectory.
It can be enforced via SMITER_DIRECTORY by an environment module, the user, or the AppImage.
Q: net::ERR_CONNECTION_REFUSED to cube.dev

I'm taking over a website. The previous developer decided to use cube.dev to build the KPI page, but I have no knowledge of it. He showed me once that the KPI page worked. But now, when I launch it on my side (https://jsaddin.10studio.tech/kpi), there is an error:

GET http://localhost:4000/cubejs-api/v1/load?query=%7B%22measures%22%3A%5B%22Customs.count%22%5D%2C%22dimensions%22%3A%5B%22Customs.offerdisplayname%22%5D%2C%22timeDimensions%22%3A%5B%7B%22dimension%22%3A%22Customs.timestamp%22%2C%22dateRange%22%3A%22this+week%22%2C%22granularity%22%3A%22day%22%7D%5D%7D net::ERR_CONNECTION_REFUSED

I also see in his code:

const cubejsApi = cubejs(
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE1NzE0OTIxNzYsI...',
  { apiUrl: 'http://localhost:4000/cubejs-api/v1' },
);

I cannot reach the developer anymore. Does anyone know what the reason for this problem may be?

A: The client is pointing at http://localhost:4000, which only exists on the developer's machine; in production the browser tries to connect to the visitor's own localhost and the connection is refused. Just change apiUrl to point to the production Cube.js API endpoint, or simply to the root if it is served from the same host:

const cubejsApi = cubejs(
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpYXQiOjE1NzE0OTIxNzYsI...',
  { apiUrl: '/cubejs-api/v1' },
);
Case Study: Culture Saves Lives

Target Population: First Nations in Vancouver, especially those who have struggled with addictions and mental health

"Colonization has not been good to our people. Where people have suffered, the ones that have been left out [of the circle]…those are the ones who are most affected by colonization, residential school, foster care. Those are the ones that are in the alleys. They're not there for no reason, so those are the ones we reach out to." – Patrick Smith

Background: Culture Saves Lives was founded by Patrick Smith, a Kwaguilth man, whose own experience of colonization, including the '60s scoop, inspired him to nearly two decades of Indigenous health and social service work in the Downtown East Side. Patrick was moved to further action by the 2015 release of the Paige report, a publication of the BC representative for children and youth that detailed the system-wide neglect and indifference that tragically led one young Indigenous girl to overdose in Oppenheimer Park two months after aging out of care. What began with public art installations to "uplift the minds and hearts of our people", such as a 30-foot Medicine Wheel, a 16-foot feather, and a 60-foot Eagle Staff, soon grew into street-side drumming and ceremonies for Indigenous people with little access to Indigenous teachings and ceremonies.

Description: Culture Saves Lives provides opportunity and space for community members, especially First Nations people who have been oppressed by colonization, such as through residential schools and the foster care system, to rediscover and celebrate their roots. It also provides awareness and education to mainstream service providers about the power, beauty, and enduring strength of First Nations traditions and ceremonies.
With financial support from the First Nations Health Authority and British Columbia Health, Culture Saves Lives works tirelessly to bring culture, traditions, and ceremony to the streets, alleys, and parks of Vancouver, and to reconnect the disconnected to their Indigenous identity and heritage. Elements of Culture Saves Lives include:

- Freestyle outreach: bringing drums and regalia to street corners, alleys, or wherever they are needed
- Bringing ceremony to the people who need it most
- Memorial services for those who have fallen victim to the opioid crisis and to the ongoing crisis of colonialism
- In-house and public art making
- In partnership with the First Nations Health Authority, three-day train-the-trainer harm reduction workshops that teach participants how to talk about and address substance use and accidental poisonings, with trainings based on culture, connecting, and relationship building
- Education and awareness of Indigenous culture for mainstream service providers

Wise Practices:

- Must be own agents of change: programming for Indigenous communities must be designed and delivered by Indigenous people, using Indigenous approaches that meet the needs of Indigenous communities. Mainstream models or goals are inappropriate and do not meet the needs of community.
- Flexible, adaptable programming: funding and programming must be flexible enough to adapt to emerging needs.
- Rooted in loving kindness: must meet people where they are at, with loving kindness and non-judgement. It is about building relationships with people, not behaviours.

The biggest challenges to bringing culture and ceremony to the streets are:

- the stigma and the judgement that is directed at those who use substances, in both mainstream and Indigenous communities. This creates unsafe spaces and services for Indigenous community members who use substances and leads to further isolation and unsafe substance-using practices.
- the perceived tension in Indigenous communities between abstinence and harm reduction approaches. This creates barriers for organizations and individuals who make culture and ceremony available to those who use substances.

"Our original teachers, earth, air, sun, water – they give life to each and every one of us – they don't judge, they don't say you're clean and you're not. So who am I to say, 'oh you can smudge but you can't'?" – Patrick Smith

Evidence of Success: The impact of Culture Saves Lives is most evident in the faces of those who continue to show up at Culture Saves Lives events and meeting places; in the testimonials from individuals who have said that Culture Saves Lives has literally saved their life or been there for them in a time of need; in their rapidly expanding staff numbers; and in the relationships that have been built over the years. Patrick and the Culture Saves Lives staff are building bridges to the future by educating mainstream service providers about Indigenous culture and healing practices and connecting the disconnected to their birthright.

Contact: Patrick Smith, Executive Director, culture@mvaec.ca

Media Links:
http://www.megaphonemagazine.com/culture_saves_lives
https://www.youtube.com/watch?v=vTwV3rOFBkQ
https://www.youtube.com/watch?v=tn2OyCiMrok
\section{Introduction} Deterministic approaches to treating strong correlation in interacting quantum systems are often rendered intractable by the exponential scaling of the size of the Hilbert space with the number of particles.\cite{Troyer2005} In contrast, quantum Monte Carlo (QMC) methods~\cite{Barker1979, Hammond1994, CalandraBuonaura1998, Maksym2005, Needs2010, Booth2014, Austin2012, Shepherd2014} can be computationally more efficient because they employ a sparse representation of the wave function in this space, obtained via stochastic sampling. Methods that utilize a continuous basis of configurations in real space have long existed, e.g. diffusion Monte Carlo~\cite{Umrigar1993, Senatore1994, Kosztin1996, Foulkes2001, Manten2001, Hairer2014}. The application of these methods to fermionic systems requires nodal constraints due to the antisymmetry of the wave function. This has motivated the development of discrete-space methods, e.g.~full configuration interaction QMC (FCIQMC) and auxiliary-field QMC~\cite{Booth2009, Li2015, Alavi2016, Motta2018}, in which the antisymmetry is provided by a Slater determinant basis, thereby obviating the need to impose nodal constraints on the wave function.\cite{Booth2009, Austin2012, Spencer2012, Umrigar2015} A disadvantage of discrete-basis methods is that the basis is not complete, but this can be addressed using standard extrapolation techniques.\cite{Halkier1998, Halkier1999} Recently, Lim and Weare \cite{Lim2017} introduced the fast randomized iteration (FRI) framework, a class of methods that use techniques similar to those used in discrete-basis QMC methods to solve large, generic linear algebra problems. Sparsity is imposed stochastically in matrices and vectors, which reduces the computational cost and storage requirements of these methods and facilitates their application to problems significantly larger than those treatable by conventional linear algebra approaches. 
Many existing QMC algorithms, including the FCIQMC method, can be understood as specific methods within the FRI framework. The central purpose of this work is to describe, in a more general context, the application of FRI methods to calculations on interacting fermionic systems in a discrete basis. Importantly, we leverage this generality to develop alternative methods within this framework and investigate their statistical error and convergence properties through numerical tests on small molecular systems. The FRI framework can be applied in a variety of ways to calculate ground- and excited-state observables of electronic systems. This study discusses only the application of FRI to calculate the ground-state energy of the full configuration interaction (FCI) Hamiltonian matrix in a Slater determinant basis. Such applications of the FRI framework will be referred to in this manuscript as FCI-FRI. In these methods, calculation of the ground-state energy is achieved via stochastic implementations of the power method, in which an initial trial vector is evolved towards the ground state eigenvector by repeatedly applying the Hamiltonian, scaled and shifted such that the ground state is dominant. The power method can be viewed as a discretization of the imaginary-time propagation used in many QMC methods. In order to reduce computational cost, the Hamiltonian matrix and solution vector are compressed stochastically, meaning that randomly selected subsets of their elements are zeroed in each iteration. Calculating the energy after each iteration and averaging yields an estimate of the ground-state energy. This estimate can be systematically improved by executing more iterations and by retaining more nonzero elements in each compression. Unlike the original FCIQMC method, some FRI methods become identical to the deterministic power method as the number of randomly selected elements increases to the size of the basis. 
The various approaches to matrix and vector compression within the FRI framework differ in terms of their computational cost and statistical efficiency. In this study, we combine these approaches in two new FCI-FRI methods and compare them to the original FCIQMC method.\cite{Booth2009} In the first method, multinomial matrix compression, which is used in FCIQMC, is combined with systematic vector compression. Multinomial and systematic sampling are reviewed in Section~\ref{sec:samplScheme}. In the original presentation of FRI~\cite{Lim2017}, systematic vector compression was shown to yield the least statistical error of all the schemes considered. In FCIQMC, by contrast, vector compression is achieved by integerizing vector elements. Comparing the original FCIQMC method to the ``multinomial FCI-FRI'' method, which uses the same matrix compression scheme, illustrates the gains in efficiency that an improved vector compression scheme can enable. In the second method, ``systematic FCI-FRI,'' we seek to further improve the efficiency by also compressing the matrix systematically instead of multinomially. We introduce a new hierarchical scheme to reduce the computational cost of performing this compression. In numerical tests on five small molecules, we find that systematic FCI-FRI yields consistently greater statistical efficiency (defined below) than multinomial FCI-FRI by at least an order of magnitude, and multinomial FCI-FRI is also more statistically efficient than FCIQMC in its original form. An additional purpose of this work is to better understand how the features of each of these methods influence their errors and computational cost. To this end, we also compare two methods applied recently to FCI problems~\cite{Lu2017} in which the matrix is not compressed. Although expensive, such approaches are feasible because of the sparse structure of the Hamiltonian. 
In the first of these methods, the vector is compressed using the stochastic systematic scheme, whereas in the second, it is compressed using a deterministic thresholding scheme. Both methods have similar cost and are tractable for problems beyond the reach of deterministic FCI. However, the stochastic method achieves significantly less error, highlighting the advantages of stochastic methods over their deterministic counterparts. A number of recent extensions to the original FCIQMC algorithm have been found to enable improvements in performance by orders of magnitude. For example, in semi-stochastic FCIQMC\cite{Petruzielo2012, Blunt2015b}, a fixed subspace within the Slater determinant basis is treated deterministically, greatly reducing the statistical error in that portion of the solution vector. A related extension involves preserving some elements exactly if their magnitude exceeds a user-specified threshold~\cite{Overy2014}. In the initiator approximation\cite{Cleland2010, Booth2011, Cleland2011}, elements in the solution vector are zeroed in each iteration according to deterministic compression rules to better constrain the sign structure of the solution vector, which introduces a small bias. The FCI-FRI methods discussed here also include some deterministic features, although these differ in key aspects from those in the FCIQMC extensions. In FCI-FRI, the vector and matrix elements to be preserved exactly are chosen dynamically in each iteration on the basis of their relative magnitudes. The criteria for selecting these elements do not rely on user-specified parameters and instead were chosen to minimize compression error given a finite number of samples. Unlike the initiator approximation, this approach does not introduce an additional bias. 
Another FCIQMC extension that can be applied to FCI-FRI involves calculating perturbative corrections to the energy.\cite{Blunt2018} Due to the versatility of the FRI framework, many recent FCIQMC extensions can also be applied to FCI-FRI methods, which may yield further performance improvements. Here, we compare FCI-FRI methods only to the original FCIQMC method, without extensions, in order to (1) facilitate clarity in our presentation of the FCI-FRI methods, and (2) isolate the effects of different matrix and vector compression schemes in our results. Future work will be devoted to incorporating these complementary extensions into FCI-FRI methods. The remainder of this article is organized as follows. In Section \ref{sec:methods}, we summarize the FRI framework in the context of the power method for FCI calculations and describe the compression schemes considered in this study. Efficient compression of the Hamiltonian matrix is accomplished using a hierarchical scheme introduced in Section \ref{sec:hierMat} and discussed in more detail in Appendix \ref{sec:matFact}. In Section \ref{sec:results}, we discuss results obtained by applying these methods to five small molecular systems and compare their statistical efficiencies. In Section \ref{sec:concl}, we summarize our key findings and comment further on the differences among the methods in relation to potential future research directions. \section{Methods} \label{sec:methods} \subsection{The Power Method for Full Configuration Interaction Calculations} The FCI formalism casts the treatment of a system of interacting fermions in terms of linear algebra~\cite{Knowles1984}. In the FCI-FRI and FCIQMC methods discussed here, a randomization of the power method is used to calculate observables associated with the ground-state (lowest-energy) eigenvector of the FCI Hamiltonian matrix, $\mathbf{H}$. This matrix is expressed in a Slater determinant basis for $N$ electrons in $M$ orbitals. 
Its only nonzero off-diagonal elements are those corresponding to single and double excitations between pairs of Slater determinants. The matrix element corresponding to a single excitation from determinant $\ket{{K}}$ to $\ket{{L}} = \hat{c}^\dagger_a \hat{c}_i \ket{{K}}$ is \begin{equation} H_{LK} \equiv H_K(i \to a) = \mel{L}{ \hat{H} }{ K} = \gamma^{K}_{ia} \left(h_{ia} + \sum_{j \in \text{occ}} \mel{ i j}{}{ a j } \right) \end{equation} where $h_{ia}$ represents a matrix element of the one-electron component of the Hamiltonian and $\mel{ i j }{}{ a j }$ is an antisymmetrized two-electron repulsion integral. These are both readily obtained from the output of a Hartree-Fock calculation. The parity of the excitation $\gamma^K_{ia}$ is determined by the order of the orbitals comprising the Slater determinants in this basis \cite{Holmes2016}. The sum is over the orbitals occupied in $\ket{{K}}$. The notation $H_K(i \to a)$ will be used throughout this paper to denote the index of an excitation from determinant $\ket{K}$. The matrix element for the double excitation to $\ket{{M}} = \hat{c}^\dagger_a \hat{c}^\dagger_b \hat{c}_i \hat{c}_j \ket{{K}}$ is \begin{equation} H_{MK} \equiv H_K(ij \to ab) = \mel{{M} }{ \hat{H} }{ {K}} = \gamma^{K}_{ia} \gamma^{K}_{jb} \mel{ab }{}{ ij} \end{equation} and the diagonal matrix element associated with $\ket{{K}}$ is \begin{equation} H_{KK} = \mel{{K} }{ \hat{H} }{ {K}} = \sum_{j \in \text{occ}} h_{jj} + \frac{1}{2} \sum_{i,j \in \text{occ}} \mel{i j }{}{ i j} \end{equation} The ground-state eigenvalue of this matrix is therefore the system's electronic energy. Applying the generic power method to $\mathbf{H}$ involves iteratively generating a sequence of vectors, here referred to as iterates. 
Each iterate $\mathbf{v}^{(\tau)}$, where $\tau$ denotes the iteration index, is obtained by multiplying the previous iterate by the matrix $\mathbf{P} = \mathbf{1} - \varepsilon \mathbf{H}$, where $\mathbf{1}$ is the identity and $\varepsilon$ is a positive number that is sufficiently small to ensure that the ground state of $\mathbf{H}$ is the dominant eigenvector of $\mathbf{P}$. The initial iterate, $\mathbf{v}^{(0)}$, must have nonzero overlap with the ground-state eigenvector, $\mathbf{v}_\text{GS}$. In FCI, the Hartree-Fock unit vector is usually a suitable choice and is used in all of the calculations presented here. The iterates converge to the ground-state eigenvector up to a normalization factor, \begin{equation} \lim_{\tau \to \infty} \frac{\mathbf{v}^{(\tau)}}{||\mathbf{v}^{(\tau)}||} = \mathbf{v}_\text{GS} \end{equation} After sufficiently many iterations, convergence to the ground-state is geometric, with error decaying by a factor of $(1-\varepsilon E_0)/(1-\varepsilon E_1)$ after each iteration. Here $E_0$ is the ground-state eigenvalue of $\mathbf{H}$, and $E_1$ is the first excited-state eigenvalue. Alternative choices of $\mathbf{v}^{(0)}$ may be used to reduce the number of iterations required for convergence~\cite{Blunt2015}. The norms of the iterates $||\mathbf{v}^{(\tau)}||$ tend to either 0 or $\infty$, depending on the sign of $E_0$, as $\tau \to \infty$. 
An energy shift, $S^{(\tau)}$, is therefore included in the matrix $\mathbf{P}^{(\tau)}$ at each iteration to stabilize the norm, \begin{equation} \label{eq:Puncomp} \mathbf{P}^{(\tau)} = \mathbf{1} - \varepsilon \left( \mathbf{H} - S^{(\tau)} \mathbf{1} \right) \end{equation} where $S^{(\tau)}$ is updated dynamically after every $A$ iterations, where $A$ is a user-specified parameter (10 in our calculations), according to the formula introduced in the FCIQMC method, \cite{Booth2009} \begin{equation} \label{eq:enShift} S^{(\tau)} = S^{(\tau - A)} - \frac{\xi}{A \varepsilon} \ln \frac{|| \mathbf{v}^{(\tau)} ||_1}{|| \mathbf{v}^{(\tau - A)} ||_1} \end{equation} Here $\xi$ is a user-specified damping parameter (taken to be 0.05 in the calculations presented here), and $|| \cdot ||_1$ denotes the one-norm, defined for an arbitrary vector $\mathbf{x}$ as \begin{equation} \label{eq:oneNorm} || \mathbf{x} ||_1 = \sum_i |x_i| \end{equation} This procedure is used to stabilize the one-norm of the iterates in all methods considered in this study. In FCIQMC, the shift is updated only after the one-norm of the iterates (i.e. the number of walkers) has reached a specified target. \cite{Booth2009} The iterates are generated by the relation \begin{equation} \label{eq:friMarkov} \mathbf{v}^{(\tau + 1)} = \mathbf{P}^{(\tau)} \mathbf{v}^{(\tau)} \end{equation} \subsection{FRI Compression Schemes} \label{sec:friComp} The size of the FCI basis, $N_{\mathrm{FCI}} \sim O(M\ \mathrm{choose}\ N)$, renders it impossible to apply the power method as described above to many systems of chemical interest. The memory cost is $O(N_\mathrm{FCI})$ and the computational cost of matrix-vector multiplication is $O(N^2 V^2 N_\mathrm{FCI})$, where $V = M-N$ is the number of virtual (unoccupied) orbitals. For large systems, these costs are prohibitive. 
The FCI-FRI methods circumvent these bottlenecks by stochastically compressing the vector $\mathbf{v}^{(\tau)}$, and possibly the matrix $\mathbf{P}^{(\tau)}$, in each iteration. Stochastic compression is defined such that (1) the resulting compressed vector or matrix has at most a desired number $m$ of nonzero elements and (2) the expectation value of each element in the compressed vector or matrix is equal to the corresponding element in the input vector or matrix, i.e. \begin{equation} \label{eq:compDef} \text{E} \left[ \Phi \left(\mathbf{x} \right) \right]_i = x_i \end{equation} where $\Phi$ denotes the compression operation and $\mathbf{x}$ is an arbitrary vector. The fact that many of the elements in the compressed matrix or vector are zero facilitates the use of sparse linear algebra schemes, which enables the efficiency of FRI methods. As an example, in an FCI-FRI method that uses only vector compression, matrix-vector multiplication is performed as \begin{equation} \mathbf{v}^{(\tau + 1)} = \Phi\left( \mathbf{P}^{(\tau)} \mathbf{v}^{(\tau)}\right) \end{equation} This method has a memory cost of $O(N^2V^2 m)$ (to store the nonzero elements in the matrix-vector product before compression) and a computational cost of $O(N^2 V^2 m \log m)$. For many systems of chemical interest, these costs can be significantly less than those for deterministic FCI. There are many possible compression methods in FRI with the above defining properties that differ in the degree of statistical error they introduce. In order to emphasize the generality of the FRI framework, we begin by introducing several such methods in more abstract linear algebra terms before discussing their specific application to the FCI problem. \subsubsection{Vector Compression} \label{sec:vecComp} In this study, we compare several different approaches to vector compression. 
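Before turning to the individual compression schemes, it may help to sketch the loop they plug into. The following Python sketch is ours, not part of the paper, and uses a small dense $\mathbf{H}$ purely for illustration (FCI-FRI never stores $\mathbf{H}$ explicitly); it implements the compressed iteration of eq \ref{eq:friMarkov} together with the shift update of eq \ref{eq:enShift}, and passing the identity as the compression recovers the deterministic power method.

```python
import numpy as np

def fri_iteration(H, v0, compress, n_iter, eps, A=10, xi=0.05):
    """Compressed power iteration v <- Phi(P v), with P = 1 - eps*(H - S*1).

    `compress` is any unbiased sparsification Phi with E[Phi(x)] = x.
    The shift S is updated every A iterations from the growth of the
    one-norm (eq. for S^(tau)), which stabilizes ||v||_1.
    """
    v, shift = v0.astype(float), 0.0
    norm_prev = np.abs(v).sum()
    for tau in range(1, n_iter + 1):
        v = compress(v - eps * (H @ v - shift * v))
        if tau % A == 0:
            norm_now = np.abs(v).sum()
            shift -= xi / (A * eps) * np.log(norm_now / norm_prev)
            norm_prev = norm_now
    return v, shift

# With identity compression this reduces to the deterministic power
# method; the shift then converges toward the ground-state eigenvalue E0.
H = np.array([[-1.0, 0.2], [0.2, 0.5]])
v, shift = fri_iteration(H, np.array([1.0, 0.0]), lambda x: x, 2000, eps=0.1)
```

With an unbiased stochastic `compress`, the same loop yields the FCI-FRI iterations; the schemes discussed next differ only in how the `compress` step is realized.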
These have been applied in previous stochastic quantum chemistry calculations, although they can be applied more generally to any vector. The simplest approach to compressing an arbitrary vector $\mathbf{x}$ involves randomly selecting a subset of its elements, each with probability \begin{equation} p_i = \frac{|x_i|}{|| \mathbf{x} ||_1} \end{equation} The expected number of times each element is sampled is \begin{equation} \text{E}[n_i] = mp_i \end{equation} where $m$ is the total number of elements selected. Therefore, assigning each element of the compressed vector the value \begin{equation} \Phi(\mathbf{x})_i = \frac{n_i ||\mathbf{x}||_1 \text{sgn}(x_i)}{m} \end{equation} ensures that the condition in eq \ref{eq:compDef} is satisfied and that the vector has at most $m$ elements (fewer if any $n_i > 1$). Possible methods for randomly generating the values $\lbrace n_i \rbrace$ will be discussed below. It is often beneficial to preserve the largest-magnitude elements of $\mathbf{x}$ exactly in order to reduce the overall statistical error incurred in compressing the vector. Lim and Weare\cite{Lim2017} proposed the following criterion for determining the number $\rho$ to preserve exactly. If $\mathbf{s}$ is a vector, with length $\ell$, of indices that sorts the elements of $\mathbf{x}$ in order of decreasing magnitude (i.e. $|x_{s_j}| \geq |x_{s_{j+1}}|$ for all $j < \ell$), then $\rho$ is the minimum value of $h$ for which \begin{equation} \label{eq:rhoCriterion} (m - h) |x_{s_{h+1}}| \leq \sum_{j=h+1}^c |x_{s_j}| \end{equation} where $m$ denotes the desired number of nonzero elements in $\Phi(\mathbf{x})$, and $c$ is the number of nonzero elements in $\mathbf{x}$. Thus, $\rho$ depends both on $m$ and $\mathbf{x}$. Calculating $\rho$ requires identifying the largest-magnitude elements of $\mathbf{x}$. This can be done efficiently, in $O(\rho \log c)$ time, by using a binary heap structure rather than sorting the entire vector. 
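A pure-Python sketch of this compression, computing $\rho$ by the criterion above and filling the remainder with simple magnitude-proportional multinomial sampling (sorting replaces the binary heap for clarity; all names are ours):

```python
import random

def compress(x, m):
    """Compress x to at most m nonzero entries, preserving the rho
    largest-magnitude entries exactly and sampling the remainder."""
    c = sum(1 for v in x if v != 0.0)
    if m >= c:
        return list(x)
    # indices sorted by decreasing magnitude (a heap would be cheaper)
    s = sorted(range(len(x)), key=lambda i: -abs(x[i]))
    # rho = smallest h with (m - h)*|x_{s_{h+1}}| <= sum of remaining magnitudes
    tail = sum(abs(v) for v in x)
    rho = 0
    while rho < m and (m - rho) * abs(x[s[rho]]) > tail:
        tail -= abs(x[s[rho]])
        rho += 1
    out = [0.0] * len(x)
    for i in s[:rho]:
        out[i] = x[i]                    # exactly preserved
    rest = [i for i in s[rho:] if x[i] != 0.0]
    norm1 = sum(abs(x[i]) for i in rest)
    w = norm1 / (m - rho)
    for _ in range(m - rho):             # multinomial sampling of the rest
        u, cum = random.random() * norm1, 0.0
        for i in rest:
            cum += abs(x[i])
            if u < cum:
                out[i] += w * (1.0 if x[i] > 0 else -1.0)
                break
        else:                            # float round-off guard
            out[rest[-1]] += w * (1.0 if x[rest[-1]] > 0 else -1.0)
    return out

random.seed(1)
x = [0.96, -0.02, 0.03, 0.01, -0.04, 0.02, 0.015, -0.005]
y = compress(x, 4)
```

Each realization has at most $m$ nonzero entries and exactly preserves the one-norm, and the compressed vector equals the input in expectation.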
The elements of $\mathbf{x}$ with indices $\lbrace s_1, s_2, ..., s_\rho \rbrace$ are unchanged in the compression. If $m \geq c$, this criterion naturally specifies that all elements are preserved exactly. Otherwise, the remaining elements of $\Phi(\mathbf{x})$ are determined by applying random sampling with $(m - \rho)$ samples to the vector $\mathbf{x}'$, which is obtained by zeroing the $\rho$ largest-magnitude elements of $\mathbf{x}$. The resulting elements of the compressed vector are \begin{equation} \label{eq:CompVecEl} \Phi(\mathbf{x})_{s_i} = \begin{cases} x_{s_i} & i \leq \rho \\ n_{s_i} ||\mathbf{x}'||_1 \text{sgn}(x_{s_i}) (m - \rho)^{-1} & i > \rho. \end{cases} \end{equation} An alternative, deterministic approach to vector compression is preserving the $m$ largest-magnitude elements of $\mathbf{x}$ exactly and zeroing the remaining elements. The additional sampling step introduced above has the notable advantage that the compressed vector is equal to the original in expectation. Even with a high degree of vector sparsity, results that are exact to within a controllable statistical error can be obtained by averaging over many independent vector compressions, provided there are no other sources of error. \begin{figure} \includegraphics[width=\linewidth]{Multi_sys} \caption{An illustration of the multinomial and systematic sampling schemes applied to the selection of $m=3$ elements from a probability distribution $\mathbf{p}$. The $\times$'s represent the random numbers $U_k$ generated on the interval $(0,1)$. The indices selected in both schemes correspond to the intervals in $\mathbf{p}$ with which the $\times$'s are aligned. The vector $\mathbf{n}$ shown for each scheme represents the number of times each element is selected.} \label{fig:multiSys} \end{figure} \subsubsection{Sampling Schemes} \label{sec:samplScheme} We compare two approaches to generating the integers $\lbrace n_i \rbrace$ used for vector compression in eq \ref{eq:CompVecEl}. 
Both involve selecting $m$ (or $m-\rho$) elements from a probability distribution $\mathbf{p}$ and are summarized in Figure~\ref{fig:multiSys}. In \textit{multinomial} sampling, selections are made independently. The simplest implementation involves generating $m$ random numbers $\lbrace U_k \rbrace$ uniformly on the interval $(0, 1)$. The index of the $k^\text{th}$ element selected is the value of $j$ that satisfies \begin{equation} \sum_{i=1}^{j-1} p_i \leq U_k < \sum_{i=1}^j p_i \end{equation} Any index can potentially be selected more than once, as the random numbers $\lbrace U_k \rbrace$ are generated independently. The alias method is a more efficient implementation of multinomial sampling than the one described above\cite{Walker1974, Holmes2016}. The \textit{systematic} sampling scheme typically achieves reduced variance in the vector $\mathbf{n}$. The $m$ random numbers $\lbrace U_k \rbrace$ used in the selection of elements are generated from a single random number $r$ chosen uniformly on the interval $(0,1)$, as follows: \begin{equation} \label{eq:sysRNs} U_k = \frac{k-1+r}{m} \end{equation} with $k = 1, 2, ..., m$. The value of $r$ determines the position of the $\times$'s in each of the $m$ subintervals of $(0,1)$ in the Systematic portion of Figure~\ref{fig:multiSys}. The indices of elements selected are determined as described in multinomial sampling. Although systematic sampling is generally expected to yield less statistical error than multinomial sampling, the difference is expected to shrink as the number of elements selected $(m)$ decreases relative to the size of the vector. When $m=1$, systematic sampling coincides exactly with multinomial sampling. \subsubsection{Hierarchical Matrix Factorization} \label{sec:hierMat} The vector compression methods discussed above enable the application of FRI to iterative linear algebra methods based on matrix-vector multiplication at less cost than their deterministic counterparts.
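The two schemes differ only in how the sampling points are generated; a minimal pure-Python sketch of the resulting counts $\mathbf{n}$ (names ours):

```python
import random

def sample_counts(p, m, systematic=True, rng=random.random):
    """Count how many of m sampling points fall in each interval of the
    cumulative distribution of p (systematic or multinomial points)."""
    if systematic:
        r = rng()
        points = [(k + r) / m for k in range(m)]   # U_k = (k - 1 + r)/m, k = 1..m
    else:
        points = sorted(rng() for _ in range(m))   # independent uniform draws
    n = [0] * len(p)
    cum, j = 0.0, 0
    for i, pi in enumerate(p):
        cum += pi
        while j < m and points[j] < cum:
            n[i] += 1
            j += 1
    while j < m:                                   # float round-off guard
        n[-1] += 1
        j += 1
    return n

random.seed(0)
n_sys = sample_counts([0.5, 0.3, 0.2], 10, systematic=True)
n_mult = sample_counts([0.5, 0.3, 0.2], 10, systematic=False)
```

With $\mathbf{p} = (0.5, 0.3, 0.2)$ and $m = 10$, the systematic counts are always $(5, 3, 2)$ regardless of $r$, whereas the multinomial counts fluctuate around these values.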
However, even the cost of multiplying a sparse vector by $\mathbf{P}^{(\tau)}$ is prohibitive for large problems in quantum chemistry. This cost can be further reduced by compressing both the matrix and vector in each iteration. In principle, the vector compression methods described above could also be applied to compress the matrix before multiplication in each iteration, e.g. by treating each of its columns as a vector. This would require enumerating all of its nonzero elements, which offers few advantages over calculating the matrix-vector product without compression. This section describes an alternative hierarchical approach to randomly approximating a matrix-vector product using compression. For a generic matrix-vector product $\mathbf{Ax}$, this involves factoring $\mathbf{A}$ into a product of matrices and performing a sequence of vector compressions. For example, if $\mathbf{A} = \mathbf{A}^{(3)} \mathbf{A}^{(2)} \mathbf{A}^{(1)}$, then $\mathbf{Ax}$ can be approximated as: \begin{equation} \mathbf{Ax} \approx \mathbf{A}^{(3)} \Phi(\mathbf{A}^{(2)} \mathbf{x}^{(1)}) \end{equation} where \begin{equation} \mathbf{x}^{(1)} = \Phi(\mathbf{A}^{(1)} \mathbf{x}) \end{equation} The compressions after each multiplication are performed independently in this study, but other approaches in which they are not independent are possible as well. If $\mathbf{A}^{(1)}$, $\mathbf{A}^{(2)}$, and $\mathbf{A}^{(3)}$ are sparse, this approach can be made more efficient than calculating $\mathbf{Ax}$ directly. The multinomial selection of excitations in FCIQMC~\cite{Booth2014} can be understood as a specific implementation of this approach, but we describe it in more general terms to demonstrate that it can be used with any compression scheme in FRI. There are multiple ways to factor the Hamiltonian matrix and correspondingly the matrix $\mathbf{P}^{(\tau)}$ for quantum chemistry calculations. These can be applied in contexts other than FCI, e.g.
for stochastic coupled-cluster~\cite{Thom2010, Scott2017}. Here we consider two such factorizations, near-uniform~\cite{Booth2014} and heat-bath Power-Pitzer (HB-PP)~\cite{Holmes2016, Neufeld2019}. The structure of each matrix in these factorizations is dictated by the two-body structure of the Hamiltonian. Both have the form $\mathbf{B} \mathbf{C}^{(\tau)} \mathbf{Q}$, where $\mathbf{Q}$ is factored further into a product of matrices. Elements of these matrices can be calculated efficiently using information about the symmetry of the system and, in the case of the HB-PP factorization, information from the Hamiltonian matrix. Elements of $\mathbf{Q}$ have been introduced as the probabilities for sampling excitations in previous descriptions of FCIQMC, and multiplication by $\mathbf{B}$ sums contributions from different excitations to the same determinant. Off-diagonal elements of the matrix $\mathbf{BQ}$ can be interpreted as an approximation to those of $\mathbf{P}^{(\tau)}$ or $\mathbf{H}$. The extra factor of $\mathbf{C}^{(\tau)}$ corrects for this discrepancy between $\mathbf{BQ}$ and $\mathbf{P}^{(\tau)}$ by multiplying by elements of $\mathbf{P}^{(\tau)}$ and dividing by elements of $\mathbf{Q}$. This form ensures that matrix elements can be calculated efficiently and that multiplication by the matrix factors is equivalent to multiplication by $\mathbf{P}^{(\tau)}$. The detailed forms of these factorizations are given in Appendix \ref{sec:matFact}. \subsection{FCI-FRI Methods Considered in this Study} \label{sec:FRIforFCI} The previous sections discussed compression techniques applicable to matrices and vectors in general. This section summarizes the particular implementations of these schemes in the three FCI-FRI methods considered in this study, as well as FCIQMC.
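Before specializing to FCI, the generic hierarchically compressed product can be illustrated with toy dense matrices; here a simple unbiased multinomial compression stands in for the factorization-specific schemes, and all numbers are invented.

```python
import random

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def compress(x, m):
    """Unbiased multinomial compression of x to at most m nonzero entries."""
    norm1 = sum(abs(v) for v in x)
    out = [0.0] * len(x)
    for _ in range(m):
        u, cum = random.random() * norm1, 0.0
        for i, v in enumerate(x):
            cum += abs(v)
            if u < cum:
                out[i] += norm1 * (1.0 if v > 0 else -1.0) / m
                break
    return out

# A = A3 A2 A1 applied to x, with compression after each multiply
A1 = [[0.5, 0.1], [0.2, 0.4]]
A2 = [[1.0, -0.3], [0.0, 0.8]]
A3 = [[0.6, 0.2], [0.1, 0.9]]
x = [1.0, 0.5]

exact = matvec(A3, matvec(A2, matvec(A1, x)))

random.seed(2)
n_rep = 20000
acc = [0.0, 0.0]
for _ in range(n_rep):
    y = matvec(A3, compress(matvec(A2, compress(matvec(A1, x), 2)), 2))
    acc = [a + yi / n_rep for a, yi in zip(acc, y)]
```

Averaging many independent repetitions recovers the exact product to within statistical error, reflecting the unbiasedness of each (independent) compression.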
A Python/Cython implementation of these methods with OpenMP parallelism is available on GitHub.\cite{resipy} In all three FCI-FRI methods, iterate vectors are compressed systematically following matrix multiplication, regardless of which matrix compression scheme is used. A subset of $\rho$ vector elements is preserved exactly, with $\rho$ calculated as described in the discussion surrounding eq \ref{eq:rhoCriterion}, and $(m - \rho)$ additional nonzero vector elements are sampled randomly using the systematic scheme described in Section \ref{sec:samplScheme}. In order to quantify the error introduced by compressing the matrix $\mathbf{P}^{(\tau)}$ in each iteration, we considered three different matrix compression schemes in the three FCI-FRI methods. In the ``full-matrix FCI-FRI'' method, the matrix is not compressed. This method has been discussed previously and compared to FCIQMC~\cite{Lu2017}. As discussed above, its memory cost per iteration is approximately $O(N^2V^2m)$ and its CPU cost approximately $O(N^2V^2m \log m)$. In the remaining two FCI-FRI methods, $\mathbf{P}^{(\tau)}$ is compressed either multinomially or systematically using a hierarchical factorization scheme, with additional constraints as discussed in Appendix \ref{sec:FCIcomp}. Excluding the diagonal elements of $\mathbf{P}^{(\tau)}$, which are preserved exactly, $N_\text{mat}$ samples are used in each compression. Matrix compression in ``multinomial FCI-FRI'' corresponds more closely to the scheme used in the original FCIQMC method, whereas ``systematic FCI-FRI'' is designed to reduce statistical error. These algorithms are summarized in Table \ref{tab:steps}. \begin{table} \caption{An overview of the steps in each iteration of the FCI-FRI methods considered in this study. The right column indicates the approximate scaling of the CPU cost of each step.
The variable $N$ is the number of electrons in the system; $M$ is the number of spatial orbitals in the single-particle basis; $V = M - N$ is the number of virtual orbitals; $m$ is the number of nonzero elements kept in the solution vector; $N_\text{mat}$ is the number of off-diagonal elements sampled from the Hamiltonian matrix.} \begin{tabular}{l | l} \textbf{Full-matrix FCI-FRI} & CPU cost/iteration \\ \hline 1. Calculate $\mathbf{v}^{(\tau + 1)\prime} = \mathbf{P}^{(\tau)} \mathbf{v}^{(\tau)}$ & $O(N^2 V^2 m \log m)^a$ \\ 2. Compress $\mathbf{v}^{(\tau + 1)\prime}$ systematically to & $O(N^2 V^2 m)$ \\ $m$ nonzero elements \\ 3. Adjust the energy shift, $S^{(\tau)}$ (eq \ref{eq:enShift}) & $O(1)$ \end{tabular} \\~\\~\\ \begin{tabular}{l | l} \textbf{Multinomial \& systematic FCI-FRI} & CPU cost/iteration \\ \hline 1. Calculate $\mathbf{v}^{(\tau + 1)\prime} = \mathbf{P}^{(\tau)} \mathbf{v}^{(\tau)}$ using & $O(N_\text{mat})$ or $O(M N_\text{mat})$ \\ hierarchical factorization with & $+ O(N_\text{mat} \log m)^b$ \\ multinomial or systematic compression \\ to $N_\text{mat}$ nonzero elements \\ 2. Compress $\mathbf{v}^{(\tau + 1)\prime}$ systematically to $m$ & $O((N_\text{mat} + m) \log (N_\text{mat} + m))^c$ \\ nonzero elements \\ 3. Adjust the energy shift, $S^{(\tau)}$ (eq \ref{eq:enShift}) & $O(1)$ \end{tabular} \\~\\ \raggedright $^a$The $(\log m)$ factor here arises because our implementation uses a less efficient binary search algorithm to perform matrix-vector multiplication. This cost could be reduced by using a hashing algorithm~\cite{Booth2014}. \\ $^bO(N_\text{mat})$ is the approximate cost of compressing the near-uniform distribution, and $O(M N_\text{mat})$ is the cost for HB-PP. The $O(N_\text{mat} \log m)$ term comes from multiplication by $\mathbf{B}$ in both factorizations and can be reduced to $O(N_\text{mat})$ using hashing. \\ $^c$Worst-case scaling. 
More typical scaling, corresponding to preserving relatively few elements exactly, is $O(N_\text{mat} + m)$. \label{tab:steps} \end{table} \subsection{Comparison with FCIQMC} As discussed above, the FCIQMC method described in ref \citenum{Booth2009} can be viewed as a specific method within the FRI framework. Although our presentation of the method differs somewhat from previous studies, we implemented FCIQMC in its original form, i.e. without any of its existing extensions (e.g. initiator or semi-stochastic), for comparison to FCI-FRI. This section summarizes the compression techniques in FCIQMC using the unifying language of the FRI framework, in order to facilitate comparison to the new FCI-FRI methods in this study. Further details about compression in FCIQMC can be found in Appendix \ref{sec:FCIcomp}. In the original FCIQMC algorithm, each iterate $\mathbf{v}$ is represented by a number of signed walkers, so each of its elements $v_K$ is an integer. The total number of walkers is $||\mathbf{v}||_1$. The random selection of excitations in FCIQMC corresponds to multinomial compression of $\mathbf{P}^{(\tau)}$ using one of the factorizations discussed in Appendix \ref{sec:matFact}. The ``spawning'' step corresponds to integerization of off-diagonal elements after multiplication by $\mathbf{C}^{(\tau)}$ in the hierarchical scheme, and the ``death/cloning'' step corresponds to integerization of diagonal elements. ``Annihilation,'' i.e. the summation of matrix elements corresponding to the same Slater determinant basis element, is performed by multiplying by $\mathbf{B}$ in the hierarchical scheme. The key difference between the original FCIQMC algorithm and multinomial FCI-FRI methods lies in the compressions performed after the final two matrix multiplications performed in the hierarchical scheme. In FCIQMC, after multiplication by $\mathbf{C}^{(\tau)}$, elements are rounded to integers using a random binomial integerization procedure. 
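In the unifying language used here, this integerization step can be sketched generically as an unbiased stochastic rounding (a minimal stand-in; the binomial procedure used in FCIQMC differs in detail):

```python
import math
import random

def stochastic_round(x):
    """Round x to floor(x) or floor(x)+1 at random so that E[result] = x."""
    lo = math.floor(x)
    return lo + (1 if random.random() < x - lo else 0)

# The average over many roundings recovers the input value
random.seed(6)
mean = sum(stochastic_round(2.3) for _ in range(20000)) / 20000.0
```

Many elements with magnitude less than one are thereby rounded to zero, which is what produces sparsity in the resulting walker representation.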
Like other vector compression techniques, this ensures sparsity in the resulting vector since many elements are rounded to zero. This reduces the cost of multiplication by $\mathbf{B}$ (i.e. ``annihilation''), since this involves summing fewer nonzero elements, but it also introduces additional statistical error. The vector obtained after multiplication by $\mathbf{B}$ is not compressed and is instead treated as the next iterate. In multinomial FCI-FRI, the vector obtained after multiplication by $\mathbf{C}^{(\tau)}$ is not compressed, so the elements that are summed during multiplication by $\mathbf{B}$ are real-valued (i.e. not necessarily integers). Sparsity is instead enforced by compressing the iterate systematically after the final matrix multiplication. It should be noted that compression is performed after multiplication by $\mathbf{B}$ in the semi-stochastic FCIQMC extension, as in FCI-FRI, although this extension was not considered in this study. One advantage of FCIQMC is its straightforward parallelizability. Since elements are selected independently in the multinomial matrix compression scheme, they can be selected in parallel. Similarly, the stochastic rounding of matrix elements to integers can be performed in parallel, as each element is treated independently. In contrast, elements are not selected independently in systematic compression, so these strategies cannot be applied in exactly the same way. Nevertheless, parallelizing systematic schemes is possible, e.g. by performing parallel compressions in subspaces of the Slater determinant space. Investigation of these strategies will be the subject of future research. 
The original FCIQMC method and FCI-FRI methods become more similar as the number of nonzero elements in the compressions (number of walkers) decreases relative to the size of the basis $(N_\text{FCI})$: the probability of choosing repeated elements in multinomial matrix compression decreases, and the frequency of annihilation events in FCIQMC decreases. However, our examples suggest that the number of walkers required to obtain reasonable results from the original FCIQMC method is already sufficient to observe a substantial benefit from FRI. \begin{table*} \caption{The parameters used in calculations on each of the systems in this study. Unless otherwise specified, the geometry is the diatomic bond length. MP2 natural orbitals with occupancies below the occupancy threshold, if specified, were excluded from the single-particle basis. The resulting number of (spatial) orbitals is reported as $M$. The number of unfrozen electrons considered for each system is $N$, and $N_\text{FCI}$ is the size of the FCI basis. The parameter $\varepsilon$ (eq \ref{eq:Puncomp}) is chosen to ensure convergence of the power method. 
$E_\text{FCI}$ denotes the exact FCI energy (including nuclear repulsion) used for comparison to our stochastic results.} \begin{tabular}{l | c | c | c | c | c | c} & & Occupation & & & & \\ System & Geometry & threshold / $10^{-4}$ & $(N, M)$ & $N_\text{FCI} / 10^6$ & $\varepsilon/(10^{-4}\ E_h^{-1})$ & $E_\text{FCI} / E_h$ \\ \hline Ne (aug-cc-pVDZ) & - & - & (8, 22) & 6.69 & 10 & $-128.709476^\text{a}$ \\ HF (cc-pCVDZ) & $0.91622$ \AA & - & (10, 23) & 283 & 1 & $-100.270929^\text{b}$\\ \ce{H2O} (cc-pVDZ) & $r_{\text{O} - \text{H}} = 0.975512$ \AA & 6 & (10, 18) & 18.3 & 10 & $-76.167449^\text{b}$ \\ & $\angle_\text{HOH} = 110.565^\circ$ & & & & &\\ \ce{N2} (cc-pVDZ) & $1.0944$ \AA & 30 & (10, 17) & 4.8 & 5 & $-109.228042^\text{b}$ \\ \ce{C2} (cc-pVDZ) & $1.27273$ \AA & 5 & (8, 22) & 6.7 & 5 & $-75.7260112^\text{b}$ \end{tabular} \begin{flushleft} \textsuperscript{a}From ref \citenum{Olsen1996} \\ \textsuperscript{b}Calculated using the PySCF software package \cite{Sun2018} \end{flushleft} \label{tab:params} \end{table*} \subsection{Statistical Error Analysis} \label{sec:errors} Although in principle the iterates can be averaged to obtain an estimate of the ground-state eigenvector, the memory requirements of such an approach are prohibitive for large systems. In practice, we are only interested in observables calculated from the ground-state eigenvector, so their average values are accumulated rather than the eigenvector itself. This section addresses the calculation of the average ground-state energy and the methods used to quantify the statistical error in this average. Conventionally, the energy of a state vector $\mathbf{x}$ is calculated as a Rayleigh quotient, defined here as: \begin{equation} \label{eq:rayEn} E_\mathrm{R} (\mathbf{x}) = \frac{\mathbf{x}^* \mathbf{H} \mathbf{x}}{\mathbf{x}^* \mathbf{x}} \end{equation} where $\mathbf{x}^*$ denotes the conjugate transpose of $\mathbf{x}$.
Averages of the energy obtained from the Rayleigh quotient estimator applied to an ensemble of random vectors will exhibit a statistical bias due to the products of correlated random vectors in both the numerator and denominator.\cite{Overy2014} Consequently, a projected energy estimator is instead used to calculate averages: \begin{equation} \label{eq:projEst} E_\text{P}(\mathbf{x}) = \frac{\mathbf{v}_\text{ref}^* \mathbf{H} \mathbf{x}}{\mathbf{v}_\text{ref}^* \mathbf{x}} \end{equation} where $\mathbf{v}_\text{ref}$ is a constant, appropriately chosen reference vector. In principle, using a reference vector that is closer to the exact ground-state eigenvector of the Hamiltonian will yield a better estimate of the correlation energy~\cite{Alavi2016}. In this study we use the Hartree-Fock unit vector for simplicity. If this estimator is to be applied to multiple vectors $\mathbf{x}$ (in this case, the iterates obtained after each iteration), the numerator can be calculated efficiently by storing the matrix-vector product $\mathbf{H} \mathbf{v}_\text{ref}$ and taking its inner product with each vector $\mathbf{x}$. In the FCI-FRI methods in this study, this inner product is calculated before each iterate is compressed. The numerator and denominator of eq \ref{eq:projEst} at a particular iteration are denoted as \begin{equation} n^{(\tau)} = \mathbf{v}_\text{ref}^* \mathbf{H} \mathbf{v}^{(\tau)} \end{equation} and \begin{equation} d^{(\tau)} = \mathbf{v}_\text{ref}^* \mathbf{v}^{(\tau)} \end{equation} Because $n^{(\tau)}$ and $d^{(\tau)}$ are correlated within each iteration due to their mutual dependence on $\mathbf{v}^{(\tau)}$, averaging the quotients $n^{(\tau)} / d^{(\tau)}$ over all iterations would introduce a statistical bias. 
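This bias can be demonstrated with a small synthetic experiment in which the numerator and denominator share noise (all numbers are invented and do not correspond to any calculation):

```python
import random

random.seed(3)
E_exact = -1.0
n_list, d_list = [], []
for _ in range(200000):
    z = random.gauss(0.0, 1.0)        # shared noise correlates n and d
    d = 1.0 + 0.1 * z                 # noisy denominator
    n = E_exact * d + 0.1 * z         # correlated noisy numerator
    n_list.append(n)
    d_list.append(d)

# Biased: average the per-iteration quotients
mean_of_ratios = sum(a / b for a, b in zip(n_list, d_list)) / len(n_list)
# Averaging numerator and denominator separately avoids this bias
ratio_of_means = sum(n_list) / sum(d_list)
```

The quotient average inherits a systematic offset of order the denominator variance, while the ratio of the separately accumulated means converges to the exact value up to $O(1/N)$ terms.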
Therefore, the mean energy is calculated instead as $\langle E_\text{P} \rangle = \langle n \rangle / \langle d \rangle$, where \begin{equation} \label{eq:numAve} \langle n \rangle = \frac{1}{N_i - \tau_c} \sum_{\tau \geq \tau_c} n^{(\tau)} \end{equation} and the corresponding expression for the denominator is defined analogously. Here the total number of iterations in the trajectory is denoted $N_i$, and the equilibration time, $\tau_c$, is the number of iterations at the beginning of the trajectory not included in the average. Our approach to determining $\tau_c$ will be described below. If the expected value of the iterates $\mathbf{v}^{(\tau)}$ converges to the exact ground-state eigenvector (to within a normalization factor) after infinitely many iterations, the mean energy will also converge to its exact value, since the numerator and denominator are averaged separately. In practice, a systematic bias is still observed after infinitely many iterations in FCI-FRI and FCIQMC because the expected value of the iterates does not converge to the exact ground-state eigenvector. This has been discussed previously in the context of FCIQMC and diffusion Monte Carlo methods as the population control bias.\cite{Umrigar1993, Vigor2015} The delta method is used to calculate the variance of the average $\langle E_p \rangle$ as follows: \begin{equation} \label{eq:delEp} \begin{aligned} \text{Var}[\langle E_\text{P} \rangle] &= \text{Var}\left[\frac{\langle n \rangle}{\langle d \rangle} - \frac{n_0}{d_0} \right] \\ &\approx \text{Var} \left[\frac{\langle n \rangle - n_0}{d_0} - \frac{n_0(\langle d \rangle - d_0)}{d_0^2} \right] \\ &= \text{Var} \left[\frac{\langle n \rangle}{d_0} - \frac{n_0 \langle d \rangle}{d_0^2} \right] \end{aligned} \end{equation} where $n_0$ and $d_0$ represent the deterministic quantities $\mathbf{v}_\text{ref}^* \mathbf{H} \mathbf{v}_\text{GS}$ and $\mathbf{v}_\text{ref}^* \mathbf{v}_\text{GS}$, up to an irrelevant normalization factor. 
We define $E^{(\tau)}_\text{delta}$ as \begin{equation} \begin{aligned} E^{(\tau)}_\text{delta} &= \frac{n^{(\tau)}}{d_0} - \frac{n_0 d^{(\tau)}}{d_0^2} \\ &\approx \frac{n^{(\tau)}}{\langle d \rangle} - \frac{\langle n \rangle d^{(\tau)}}{\langle d \rangle^2} \end{aligned} \end{equation} Because subsequent iterates in a trajectory are correlated, the variance in eq \ref{eq:delEp} cannot be calculated naively as $\sigma^2/(N_i - \tau_c)$, where $\sigma^2$ is the mean squared deviation from the average, i.e. \begin{equation} \sigma^2 = \frac{1}{N_i - \tau_c}\sum_{\tau \geq \tau_c} \left( E^{(\tau)}_\text{delta} \right)^2 \end{equation} Instead, $\sigma^2$ must be multiplied by the integrated autocorrelation time (IAT), a measure of the degree of correlation. The IAT is estimated using the iterative procedure described in ref \citenum{Sokal1997}, as implemented in the emcee software package~\cite{Foreman2013}, using the sequence of values $\lbrace E^{(\tau)}_\text{delta} \rbrace$ as the input. If the sequence $\lbrace n^{(\tau)} / d^{(\tau)} \rbrace$ were used instead, the resulting variance would not correspond to an energy estimate in which the numerator and denominator are averaged separately. The equilibration time $\tau_c$ is determined for each trajectory by inspecting plots of the IATs of the numerator and denominator of the energy estimator separately vs. $\tau_c$. Typically, the IAT is greater for smaller values of $\tau_c$, both because the early iterates depend on the initial iterate $\mathbf{v}^{(0)}$ and because iterates can become trapped around metastable energy values before converging to the ground-state eigenvector~\cite{Chodera2016}. Equilibration times were therefore chosen to exclude this initial period of decreasing IATs. In FCIQMC, $\tau_c$ is also constrained to be greater than the first index at which the energy shift is updated (eq \ref{eq:enShift}).
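A simplified pure-Python stand-in for such an IAT estimate (a windowed autocovariance sum with Sokal's stopping criterion; the emcee implementation differs in detail):

```python
import random

def integrated_autocorr_time(x, c=5.0):
    """Windowed IAT estimate: sum normalized autocovariances until the
    window exceeds c times the running estimate (Sokal's criterion)."""
    n = len(x)
    mu = sum(x) / n
    dev = [v - mu for v in x]
    var = sum(v * v for v in dev) / n
    tau = 1.0
    for t in range(1, n // 2):
        gamma = sum(dev[i] * dev[i + t] for i in range(n - t)) / (n - t)
        tau += 2.0 * gamma / var
        if t >= c * tau:
            break
    return tau

# AR(1) test series with rho = 0.9; its exact IAT is (1+rho)/(1-rho) = 19
random.seed(4)
rho, series, v = 0.9, [], 0.0
for _ in range(30000):
    v = rho * v + random.gauss(0.0, 1.0)
    series.append(v)
tau_est = integrated_autocorr_time(series)
```

For an uncorrelated series the estimate is close to one, so multiplying $\sigma^2$ by the IAT reduces to the naive variance formula in that limit.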
The Flyvbjerg-Petersen blocking method\cite{Flyvbjerg1989} has been used in previous FCIQMC studies\cite{Booth2009, Spencer2012, Blunt2015, Vigor2016} to calculate the variance. The approach described here has the notable advantage that no data from after the initial equilibration period ($\tau \geq \tau_c$) is discarded in the calculation of the mean and variance. Either of these methods requires a very long trajectory to achieve an accurate estimate of the variance, and it is likely that some of the statistical error estimates reported in this study are not fully converged. The standard error of the energy estimator is calculated as \begin{equation} \sigma_e = \left( {\text{Var}[\langle E_\text{P} \rangle]} \right)^{1/2} \end{equation} This error is expected to scale as $(N_i - \tau_c)^{-1/2}$ after sufficiently many iterations, according to the Markov chain central limit theorem with standard assumptions of ergodicity~\cite{Chung1960, Sokal1997}. This scaling renders it impossible to directly compare the standard errors from two trajectories with different numbers of iterations. Therefore, the primary metric that will be used to compare the methods discussed here is the statistical efficiency, defined as~\cite{Holmes2016} \begin{equation} \label{eq:eff} E = \frac{1}{\sigma_e^2 (N_i - \tau_c)} \end{equation} For two methods executed for the same number of iterations after the equilibration period, the method with the greater statistical efficiency will typically yield less variance. From an alternative perspective, in order to achieve a target standard error, the method with greater statistical efficiency can be executed for fewer iterations. For example, to achieve a standard error of $10^{-5} E_h$, a method with statistical efficiency $E$ requires $[(10^{-5} E_h)^2 E]^{-1}$ iterations after the equilibration period. In this study, we do not normalize the efficiency based on the computational cost of each iteration. 
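As a toy numerical example of eq \ref{eq:eff} and the assumed $(N_i - \tau_c)^{-1/2}$ error scaling (all numbers invented):

```python
def statistical_efficiency(sigma_e, n_iters):
    """Efficiency from a standard error sigma_e and the number of
    post-equilibration iterations n_iters = N_i - tau_c."""
    return 1.0 / (sigma_e ** 2 * n_iters)

def iterations_for_target(target_error, efficiency):
    """Invert sigma_e = (efficiency * n)**-0.5 for the number of
    post-equilibration iterations needed to reach a target error."""
    return 1.0 / (target_error ** 2 * efficiency)

# A trajectory that reached sigma_e = 5e-5 E_h after 1e5 iterations:
eff = statistical_efficiency(5e-5, 1e5)
# Iterations needed for a target standard error of 1e-5 E_h:
n_needed = iterations_for_target(1e-5, eff)
```

Reducing the error fivefold at fixed efficiency requires 25 times as many post-equilibration iterations, consistent with the inverse-square-root scaling.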
Therefore, for a given FCI-FRI method applied to a particular system, increasing the number of matrix or vector samples increases the statistical efficiency due to the expected decrease in error, regardless of the corresponding increase in computational cost. For this reason, when comparing the statistical efficiencies of different FCI-FRI methods and FCIQMC, we ensure that the same number of matrix and vector samples are used in all methods for each system. This ensures that any differences in the resulting statistical efficiencies are due to features inherent to the methods. \section{Results} \label{sec:results} The methods described in the previous section are applied to a subset of the molecular systems considered in ref \citenum{Booth2009}. The parameters relevant to the Hartree-Fock and randomized FCI calculations performed for these five systems are presented in Table \ref{tab:params}. In order to run calculations for sufficiently many iterations to obtain robust estimates of the mean energy and associated standard error, fewer single-particle orbitals are used for three systems than in ref \citenum{Booth2009}, thus reducing the size of the FCI basis $(N_\text{FCI})$. This truncation is performed by discarding natural orbitals obtained from a second-order M\o ller-Plesset perturbation theory (MP2) calculation with occupation numbers less than a specified threshold. We emphasize that truncating the basis is necessary only because of inefficiencies in our implementations of these methods. Optimizing our implementations should enable the treatment of significantly larger systems. Core electrons are frozen in Ne, \ce{C2}, and \ce{N2}, as in ref \citenum{Booth2009}. The same value of $\varepsilon$ is used to construct the matrix $\mathbf{P}^{(\tau)}$ (eq \ref{eq:Puncomp}) used in all methods for each system. The PySCF electronic structure software package\cite{Sun2018} is used to perform Hartree-Fock, MP2, and deterministic FCI calculations. 
In ref \citenum{Booth2009}, the average FCIQMC energy for the hydrogen fluoride (HF) molecule was compared to coupled-cluster theory with perturbative triple excitations, CCSD(T). Our deterministic FCI result, calculated using PySCF, differs from the CCSD(T) result by $4.89 \times 10^{-4} E_h$, and from the FCIQMC result from ref \citenum{Booth2009} by $5.4 \times 10^{-5} E_h$, a value greater than the reported uncertainty. \subsection{FCI-FRI without Matrix Compression} \label{sec:friFull} In order to isolate the contribution of vector compression to the statistical error in calculations of the ground state energy, we first consider results obtained by applying the ``full-matrix FCI-FRI'' method, which does not use matrix compression, to the Ne atom. We compare calculations with differing numbers of nonzero elements retained in the compression of each iterate ($m$). As $m$ approaches the size of the FCI basis, this method becomes identical to the deterministic power method. The difference between the estimated ground-state energy at each iteration and the exact energy is plotted for calculations with three different values of $m$ in the top panel of Figure~\ref{fig:neTrajEff}. The energy of the first iterate in each trajectory is the Hartree-Fock energy, since the first iterate was initialized to the Hartree-Fock unit vector. The energy decreases towards the exact energy in subsequent iterations. After the estimator is determined to be sufficiently close to the exact energy, at iteration $\tau_c$, the mean is accumulated according to eq \ref{eq:numAve}. This cumulative mean is plotted in Figure~\ref{fig:neTrajEff} for $\tau \geq \tau_c$. The value of the equilibration time $\tau_c$ used in these trajectories increases with increasing $m$ (Table \ref{tab:friAll}), primarily due to the greater degree of noise in trajectories with fewer nonzero elements in each iterate.
When $m$ is smaller, the energy decreases more quickly towards the ground state, resulting in a smaller value of $\tau_c$, but fluctuates to a greater extent after $\tau = \tau_c$. In the deterministic power method, the asymptotic convergence rate is determined by the ratio $({1} - \varepsilon E_0)/({1} - \varepsilon E_1)$. Randomized implementations of the power method can exhibit different convergence properties, depending on the statistical error introduced in each iteration. This trend in $\tau_c$ is therefore not surprising, and it suggests that an accurate energy estimate can be achieved at less computational cost if the values of $m$ and $\varepsilon$ are varied dynamically during the calculation. \begin{figure} \includegraphics[scale=1]{ne_all_traj_eff} \caption{Results obtained by applying the ``full-matrix FCI-FRI'' method to the Ne atom. (top) Differences between the energy estimator ($E_\text{P}^{(\tau)}$, eq \ref{eq:projEst}) and the exact FCI ground-state energy for three trajectories with different numbers, $m$, of nonzero elements in the compressed vectors. After the initial equilibration period $(\tau > \tau_c)$, the cumulative mean $\langle E_\text{P} \rangle$ is plotted, with the shaded region indicating the corresponding 95\% confidence interval $(\pm 2 \sigma_E)$. (bottom) The statistical efficiency for trajectories executed with different values of $m$. The dashed line with slope 1 represents the expected scaling of the efficiency with respect to $m$ (for $m$ large but less than $N_\text{FCI}$).} \label{fig:neTrajEff} \end{figure} \begin{figure} \includegraphics[scale=1]{ne_det_traj} \caption{Results obtained by applying the power method with deterministic vector truncation to the Ne atom. Only the $m$ greatest-magnitude elements of the vector were preserved exactly after each iteration.
Differences between the energy estimator $E_\text{P}^{(\tau)}$ and the exact energy at each iteration are plotted for four trajectories with different values of $m$. Results from the ``full-matrix FCI-FRI'' calculation with $m=50,000$ elements from Figure \ref{fig:neTrajEff} are presented for comparison. Note the log scale on the vertical axis. } \label{fig:neTrajDet} \end{figure} The difference $E_\text{diff}$ between the final estimate of the energy, obtained by averaging over all $\tau \geq \tau_c$, and the exact FCI energy from ref \citenum{Olsen1996}, is presented for each $m$ in Table \ref{tab:friAll}. The number of iterations included in each of these averages can be obtained by subtracting $\tau_c$ from the reported total number of iterations, $N_i$. The reported uncertainties, twice the standard error $\sigma_E$ calculated as described in Section \ref{sec:errors}, represent 95\% confidence intervals for the means. The exact energy is within these confidence intervals for all values of $m$ reported here (i.e. $|E_\text{diff}| < 2 \sigma_E$). The standard error is expected to decrease after more iterations, with an asymptotic scaling of $(N_i - \tau_c)^{-1/2}$. Confidence intervals for intermediate values of $\tau$, calculated by scaling the final confidence intervals reported in Table \ref{tab:friAll}, are shown as shaded areas in Figure \ref{fig:neTrajEff}. The value $E_\text{diff}$ is not expected to converge to 0 but rather to the statistical bias, as discussed in Section \ref{sec:errors}. This bias scales as $m^{-1}$ when $m$ is sufficiently large (but still much smaller than the size of the FCI basis, $N_\text{FCI}$)\cite{Lim2017}, but the number of iterations performed in our calculations is not sufficient to measure the biases in these calculations accurately. \begin{table} \caption{Results obtained by applying the ``full-matrix FCI-FRI'' method to the Ne atom with different values of $m$. 
The difference $E_\text{diff}$ between the mean and exact (FCI) energy for each calculation is presented, with twice the standard error $\sigma_E$ (95\% confidence interval). The length of the equilibration period $(\tau_c)$ and total number of iterations $(N_i)$ are given. The statistical efficiency is calculated using eq \ref{eq:eff}. The mean number of Hamiltonian matrix evaluations in each iteration $N_\text{mat}$ is presented for comparison to other methods.} \begin{tabular}{c | c | c | c | c | c} $m/10^3$ & $N_\text{mat}/10^6$ & ($E_\text{diff} \pm 2\sigma_E$)/($10^{-5} E_h$) & Eff./($10^6 E_h^{-2}$) & $\tau_c/10^3$ & $N_i/10^3$ \\ \hline 1 & 0.93 & $6437 \pm 16099$ & $1.25 \times 10^{-10}$ & 0.8 & 1237 \\ 2 & 1.9 & $141 \pm 242$ & $6.4 \times 10^{-7}$ & 1.1 & 1062 \\ 5 & 4.7 & $-0.089 \pm 4.60$ & 0.0015 & 1.2 & 1200 \\ 10 & 9.3 & $0.307 \pm 0.480$ & 0.296 & 3.2 & 589 \\ 25 & 23.4 & $-0.053 \pm 0.112$ & 12.8 & 4.8 & 256 \\ 50 & 46.8 & $0.034 \pm 0.063$ & 86.9 & 6.1 & 123 \end{tabular} \label{tab:friAll} \end{table} In Table \ref{tab:friAll}, decreased standard error is observed in calculations with greater values of $m$, despite the fact that fewer iterations were included in these calculations. If the errors from these calculations were compared after the same number of iterations, the trend with increasing $m$ would be even more pronounced. The statistical efficiency does not depend on the number of iterations and therefore allows for a more direct comparison. Statistical efficiencies calculated from all trajectories are presented in Table \ref{tab:friAll} and in the bottom panel of Figure~\ref{fig:neTrajEff}. While the computational cost of full-matrix FCI-FRI calculations is approximately proportional to $m$, the statistical efficiency appears to increase at a faster-than-$m$ rate for small $m$.
This indicates that, in terms of reducing the standard error, it is more advantageous to increase $m$ in this pre-asymptotic regime than to increase the number of iterations. The statistical efficiency is expected to increase linearly with $m$ for $m$ sufficiently large (but still much smaller than $N_\text{FCI}$)~\cite{Lim2017}. Similar faster-than-$m$ pre-asymptotic scaling has been observed in other methods that use sequential Monte Carlo sampling on a classical problem~\cite{Webber2019}, suggesting that it is {not} (solely) a manifestation of the fermion sign problem in this case. Before considering the effect of matrix compression on the statistical error, we comment briefly on the benefits of using stochastic, rather than deterministic, vector compression. Results for the Ne atom obtained using a deterministic vector compression scheme are presented in Figure~\ref{fig:neTrajDet}. In each iteration, the matrix is not compressed, the $m$ greatest-magnitude elements in the vector are preserved exactly, and the remaining vector elements are zeroed. For all values of $m$ considered, the energy calculated from the projected estimator, $E_\text{P}$, converges after approximately 3000 iterations. Energies obtained from the ``full-matrix FCI-FRI'' method, with $m=50,000$ nonzero elements kept after each iteration, are also presented for comparison. The error in the corresponding deterministic calculation after a similar number of iterations is almost two orders of magnitude greater than the 95\% confidence interval in the FCI-FRI calculation. Similar results for other electronic systems were observed previously in ref \citenum{Lu2017}. These results indicate that the success of the FCI-FRI method in these cases cannot be attributed to its discarding vector elements that do not contribute significantly to the energy, as is done in the deterministic approach. The stochastic representation of these small-magnitude elements is crucial to its success. 
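This contrast can be made concrete with a toy Python sketch, in which a simple multinomial scheme stands in for the compressions used in this work (all names here are illustrative, not our production implementation). Deterministic truncation discards the mass carried by small-magnitude elements in every iteration, whereas an unbiased stochastic compression preserves every component in expectation.

```python
import numpy as np

def truncate(x, m):
    """Keep the m greatest-magnitude elements exactly; zero the rest."""
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-m:]
    out[keep] = x[keep]
    return out

def multinomial_compress(x, m, rng):
    """Toy unbiased compression: m multinomial samples with weights |x_i|."""
    norm = np.abs(x).sum()
    counts = rng.multinomial(m, np.abs(x) / norm)
    return np.sign(x) * norm * counts / m

rng = np.random.default_rng(1)
x = np.array([1.0, -0.5, 0.05, 0.04, -0.03, 0.02])
# Averaging many independent compressions recovers every component of x.
avg = np.mean([multinomial_compress(x, 3, rng) for _ in range(20000)], axis=0)
```

The truncated vector is identical in every trial and permanently loses the tail, while the average of the stochastic compressions converges to the full vector, including the small-magnitude elements.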
This observation, that deterministically discarding small-magnitude elements degrades accuracy, may be relevant to selected CI methods~\cite{Huron1973,Tubman2016,Zhang2016, Sharma2017, Wang2019}, which utilize a similar greedy optimization scheme. \subsection{Methods with Matrix Compression} The cost of the full-matrix FCI-FRI method renders it intractable for larger systems, so we also evaluate the performance of methods that use matrix compression, including the original FCIQMC method. \subsubsection{Near-Uniform Factorization} Methods that utilize the near-uniform factorization described in Appendix \ref{sec:nearUniQ} will be discussed first. In order to ensure a fair comparison among these methods, all calculations for each system are executed with approximately the same cost, i.e.~using the same numbers of nonzero elements in the matrix and vector compressions in each iteration ($N_\text{mat}$ and $m$, respectively). In an FCIQMC calculation, $N_\text{mat}$ is the number of walkers, and $m$ is determined by their distribution among the Slater determinant basis elements. Both the number of walkers and $m$ fluctuate randomly in each iteration. Previous studies have determined that the number of walkers must be greater than a system-dependent critical value in order to ensure convergence. The number of walkers used in the FCIQMC calculations discussed here is constrained to be greater than these critical values. Critical values for the Ne and HF systems are given in ref \citenum{Booth2009}, and those for the remaining systems considered in this study are determined using the same scheme, i.e. by observing trends in the growth of the number of walkers before the energy shift $S^{(\tau)}$ is updated. The values of $N_\text{mat}$ and $m$ used in FCI-FRI calculations are fixed at the corresponding average values obtained from the FCIQMC calculations after walker growth has stabilized. Results from these calculations for all molecular systems are presented in Table \ref{tab:allNearUni}.
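Before turning to those results, the difference between multinomial and systematic index sampling, which underlies the efficiency gap reported below, can be illustrated with a toy sketch (illustrative only, not our production implementation). Systematic sampling draws all $m$ indices from a single shared uniform shift, so the sampled counts fluctuate far less than independent multinomial counts while remaining unbiased.

```python
import numpy as np

def multinomial_counts(p, m, rng):
    """m independent draws from weights p (higher-variance scheme)."""
    return rng.multinomial(m, p)

def systematic_counts(p, m, rng):
    """Systematic (low-variance) sampling of m indices from weights p."""
    u = (rng.random() + np.arange(m)) / m        # one shared uniform shift
    idx = np.searchsorted(np.cumsum(p), u, side="right")
    return np.bincount(idx, minlength=p.size)

rng = np.random.default_rng(2)
p = np.array([0.5, 0.2, 0.1, 0.1, 0.05, 0.05])
m, trials = 10, 5000
var_mult = np.var([multinomial_counts(p, m, rng) for _ in range(trials)], axis=0).sum()
var_syst = np.var([systematic_counts(p, m, rng) for _ in range(trials)], axis=0).sum()
```

Here any count with integer expectation $m p_i$ is reproduced exactly by the systematic scheme, so its total variance is dominated by the fractional parts, whereas the multinomial variance is spread across all components.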
In all calculations, average energies converge to the exact FCI energies reported in Table \ref{tab:params} to within twice the standard error (95\% confidence interval). Strictly speaking, all methods considered here exhibit a statistical bias, although for these calculations it is very likely less than the reported confidence intervals. After more iterations, we expect that the standard error for all trajectories will decrease, and the energy differences $E_\text{diff}$ for both trajectories of a particular method and system will converge to the same statistically significant bias. It is impossible to draw definitive conclusions about the relative biases of the three methods described here without more iterations. Standard errors from FCIQMC calculations range from $3 \times 10^{-5} E_h$ to $20 \times 10^{-5} E_h$, while those from the FCI-FRI methods are smaller ($2 \times 10^{-5} E_h$ to $6 \times 10^{-5} E_h$ for multinomial FCI-FRI, and $0.4 \times 10^{-5} E_h$ to $1.7 \times 10^{-5} E_h$ for systematic FCI-FRI), \textit{despite their use of fewer iterations}. This trend is also reflected in the corresponding efficiencies (Figure~\ref{fig:allEff}, top), which are normalized based on the different number of iterations considered in the calculation of each standard error. For all systems, efficiencies for systematic FCI-FRI calculations are more than an order of magnitude greater than those for multinomial FCI-FRI calculations, which are in turn 2 to 113 times greater than those for FCIQMC calculations. The integrated autocorrelation times (IATs), calculated as described in Section \ref{sec:errors} for all three methods, are similar within each system considered here. This is likely because the same value of the imaginary time step, $\varepsilon$, is used for each system (Table \ref{tab:params}). A previous study~\cite{Holmes2016} found that reducing the statistical error in matrix compression in FCIQMC enabled the use of greater values of $\varepsilon$. 
This reduces the degree of correlation between iterates, thereby decreasing the IAT and increasing the statistical efficiency. This suggests that using greater values of $\varepsilon$ in the multinomial and systematic FCI-FRI methods could increase the observed difference in their efficiencies. Furthermore, increasing $\varepsilon$ may reduce the equilibration times $\tau_c$ for the FCI-FRI methods. Because the systematic FCI-FRI method converges to the deterministic power method as $N_\text{mat}$ and $m$ increase, we expect that the reported performance advantages for systematic FCI-FRI relative to the other two methods would increase for greater values of $N_\text{mat}$ and $m$. On the other hand, because the compression schemes used in these FCI-FRI methods become more similar to those in FCIQMC as the size of the FCI basis increases relative to $N_\text{mat}$ and $m$, the statistical efficiencies of these methods are expected to become more similar in this limit. For many systems, however, the values of $N_\text{mat}$ and $m$ required to calculate reasonably accurate energy estimates also increase with system size. In the calculations we have compared thus far, the values of these parameters are dictated by the critical number of walkers in FCIQMC~\cite{Booth2009}. Calculations for the Ne and HF systems were also compared with fewer matrix and vector samples. Using only 164,000 walkers in an FCIQMC calculation on Ne yields an energy estimate that differs from the exact energy by $(-163 \pm 20783) \times 10^{-5} E_h$, whereas a systematic FCI-FRI calculation with equivalent numbers of samples yields an energy estimate that differs by $(0.58 \pm 5.15) \times 10^{-5} E_h$ after a similar number of iterations. The efficiencies of these two calculations differ by seven orders of magnitude. A similar comparison for HF with only 812,000 walkers also shows a factor of $10^7$ difference in efficiencies.
This suggests that FCI-FRI methods may allow for the use of significantly fewer matrix and vector samples than the original FCIQMC method. \begin{figure} \includegraphics[scale=1]{eff} \caption{Increases in statistical efficiency are robust across five molecular systems and two choices of matrix factorization schemes, near-uniform (top) and heat-bath Power-Pitzer (bottom). Reported statistical efficiencies represent an average over the two independent trajectories obtained using each method and do not reflect differences in computational cost for systems with different sizes. Note the log scale on the y-axis.} \label{fig:allEff} \end{figure} \begin{table*} \caption{Differences between mean energy estimates and those reported in Table \ref{tab:params} $(E_\text{diff})$ for each of the systems considered here calculated using the FCIQMC, multinomial FCI-FRI, and systematic FCI-FRI methods with the near-uniform factorization scheme. The parameter $m$ represents the sparsity of the iterates (mean sparsity for FCIQMC), and $N_\text{mat}$ represents the number of Hamiltonian matrix elements evaluated in each iteration (mean number of walkers for FCIQMC). Results from two independent trajectories are presented for each method. Mean energy differences $\pm$ twice the standard error (95\% confidence interval) are reported for each calculation, followed by the length of the equilibration period ($\tau_c$) and total number of iterations ($N_i$). 
For each chemical system, the three methods share a similar computational cost per iteration.} \begin{tabular}{c | c | c | c | c | c | c | c | c | c | c | c} & & & \multicolumn{3}{c |}{FCIQMC} & \multicolumn{3}{c |}{multinomial FCI-FRI} & \multicolumn{3}{c }{systematic FCI-FRI} \\ System & $m/10^3$ & $N_\text{mat}/10^6$ & ($E_\text{diff} \pm 2\sigma_E$)/($10^{-5} E_h$) & $\tau_c/10^3$ & $N_i/10^3$ & ($E_\text{diff} \pm 2\sigma_E$)/($10^{-5} E_h$) & $\tau_c/10^3$ & $N_i/10^3$ & ($E_\text{diff} \pm 2\sigma_E$)/($10^{-5} E_h$) & $\tau_c/10^3$ & $N_i/10^3$ \\ \hline Ne & 242 & 0.26 & $-1.44 \pm 7.36$ & 22.5&2800 & $0.06 \pm 5.66$ & 15.0&2373 & $-0.16 \pm 1.09$ & 11.5&1422 \\ & & & $2.89 \pm 7.47$ & 22.5&2800 & $-3.12 \pm 4.99$ & 15.0&3200 & $-0.74 \pm 1.11$ & 11.0&1445 \\ \hline HF & 926 & 1.00 & $10.57 \pm 26.86$ & 160.0&1469 & $-9.76 \pm 11.17$ & 400.0&1104 & $0.49 \pm 2.57$ & 620.0&1495 \\ & & & $21.09 \pm 33.50$ & 430.0&1474 & $-7.03 \pm 11.28$ & 380.0&1100 & $-0.37 \pm 3.37$ & 620.0&994 \\ \hline \ce{H2O} & 491 & 0.57 & $-0.96 \pm 6.52$ & 30.0&2400 & $0.61 \pm 5.54$ & 20.0&1232 & $-0.41 \pm 1.29$ & 25.0&1055 \\ & & & $0.54 \pm 6.47$ & 30.0&2400 & $-2.08 \pm 5.63$ & 20.0&1228 & $0.17 \pm 1.16$ & 25.0&1059 \\ \hline \ce{N2} & 1014 & 1.21 & $-7.46 \pm 29.75$ & 200.0&1788 & $-1.05 \pm 5.02$ & 80.0&822 & $0.14 \pm 0.82$ & 76.7&554 \\ & & & $4.78 \pm 39.85$ & 200.0&1791 & $2.41 \pm 5.55$ & 52.1&512 & $-0.89 \pm 1.33$ & 170.0&557 \\ \hline \ce{C2} & 2622 & 4.14 & $9.53 \pm 9.56$ & 50.0&2908 & $1.32 \pm 3.55$ & 540.0&2051 & $0.71 \pm 1.08$ & 42.2&513 \\ & & & $4.76 \pm 11.54$ & 50.0&2768 & $-2.30 \pm 3.92$ & 450.0&1327 & $-0.50 \pm 0.77$ & 50.6&516 \\ \end{tabular} \label{tab:allNearUni} \end{table*} \subsubsection{Heat-Bath Power-Pitzer Factorization} Results obtained using the three methods with the HB-PP factorization matrix mostly follow the same trends as those for the near-uniform factorization (Table \ref{tab:allHeatBath}). 
Standard errors for systematic and multinomial FCI-FRI calculations are less than those from FCIQMC, as is reflected in their associated efficiencies (Figure \ref{fig:allEff}, bottom). One FCIQMC calculation on \ce{H2O} did not converge to within the 95\% confidence interval, although given the relative magnitude of its standard error, this is likely a statistical anomaly. Systematic FCI-FRI calculations on \ce{C2} were particularly expensive due to the number of orbitals and cost of evaluating elements of matrices in the HB-PP factorization, rendering it difficult to accumulate sufficiently many samples to obtain an accurate estimate of the integrated autocovariance. Consequently, the estimated standard errors for these calculations are likely more inaccurate than for the other calculations in this study. This highlights the need for more efficient implementations of these FCI-FRI methods. \begin{table*} \caption{Mean energy differences $\pm$ twice the standard error for randomized methods using the heat-bath Power-Pitzer factorization scheme. 
Parameters are reported for each trajectory as in Table \ref{tab:allNearUni} (iterate vector sparsity, number of matrix samples, and number of iterations).} \begin{tabular}{c | c | c | c | c | c | c | c | c | c | c | c} & & & \multicolumn{3}{c |}{FCIQMC} & \multicolumn{3}{c |}{multinomial FCI-FRI} & \multicolumn{3}{c }{systematic FCI-FRI} \\ System & $m/10^3$ & $N_\text{mat}/10^6$ & ($E_\text{diff} \pm 2\sigma_E$)/($10^{-5} E_h$) & $\tau_c/10^3$ & $N_i/10^3$ & ($E_\text{diff} \pm 2\sigma_E$)/($10^{-5} E_h$) & $\tau_c/10^3$ & $N_i/10^3$ & ($E_\text{diff} \pm 2\sigma_E$)/($10^{-5} E_h$) & $\tau_c/10^3$ & $N_i/10^3$ \\ \hline Ne & 242 & 0.26 & $0.01 \pm 13.43$ & 15.0&902 & $3.96 \pm 7.83$ & 15.0&917 & $-0.44 \pm 1.61$ & 15.0&657 \\ & & & $-3.41 \pm 13.22$ & 20.0&963 & $0.75 \pm 7.92$ & 15.0&905 & $-1.09 \pm 1.61$ & 15.0&686 \\ \hline HF & 926 & 1.00 & $-4.15 \pm 17.31$ & 130.0&502 & $-4.78 \pm 18.28$ & 180.0&436 & $-0.91 \pm 2.99$ & 40.0&447 \\ & & & $4.41 \pm 15.89$ & 120.0&507 & $0.66 \pm 13.42$ & 50.0&430 & $-0.54 \pm 3.00$ & 27.4&654 \\ \hline \ce{H2O} & 491 & 0.57 & $-12.33 \pm 10.66$ & 30.0&938 & $-1.53 \pm 5.95$ & 20.0&645 & $-0.30 \pm 1.52$ & 20.0&533 \\ & & & $-4.04 \pm 10.45$ & 30.0&936 & $-3.65 \pm 5.69$ & 20.0&646 & $-0.18 \pm 1.63$ & 20.0&531 \\ \hline \ce{N2} & 997 & 1.15 & $33.68 \pm 94.08$ & 200.0&663 & $1.03 \pm 5.53$ & 53.2&699 & $0.43 \pm 1.18$ & 64.6&373 \\ & & & $55.74 \pm 75.77$ & 200.0&659 & $4.19 \pm 5.07$ & 57.1&700 & $-0.19 \pm 1.72$ & 72.7&372 \\ \hline \ce{C2} & 2620 & 4.14 & $-11.20 \pm 17.99$ & 50.0&573 & $-1.02 \pm 4.59$ & 190.0&432 & $-0.15 \pm 2.02$ & 130.0&331 \\ & & & $12.40 \pm 22.95$ & 140.0&581 & $-0.12 \pm 5.61$ & 36.8&494 & $-0.73 \pm 1.96$ & 50.0&213 \\ \end{tabular} \label{tab:allHeatBath} \end{table*} \begin{figure} \includegraphics[scale=1]{rayleigh} \caption{The difference between the minimum variational energy estimate from each method and the exact FCI energy from Table \ref{tab:params}. 
Results from the FCIQMC, multinomial FCI-FRI, and systematic FCI-FRI methods, using both the near-uniform (top) and heat-bath Power-Pitzer (bottom) matrix factorization schemes, are shown for each of the molecular systems considered in this study. Mean energy differences from the FCIQMC method for each system are plotted for comparison. Error bars represent 95\% ($2\sigma_E$) confidence intervals.} \label{fig:rayleigh} \end{figure} \subsection{Variational Energy Estimates} Finally, we evaluate the possibility that the primary utility of the FCI-FRI methods considered here is that they efficiently identify the most important Slater determinant basis elements in the ground-state eigenvector. Variational Rayleigh quotients (eq \ref{eq:rayEn}) for a subset of the iterates (i.e. every 100\textsuperscript{th} iterate) in each trajectory were calculated in addition to the projected estimates used to obtain average energies. If FCI-FRI is only an efficient search for significant basis elements, then we expect many of these Rayleigh quotients to be close to the ground-state energy. We calculate the minimum Rayleigh quotient over both independent trajectories for each system considered. Differences between these minimum energies and the exact ground-state energies for each system are plotted in Figure~\ref{fig:rayleigh}. The mean energy difference from the original FCIQMC method is also plotted for comparison, with error bars denoting the corresponding 95\% confidence interval. For all methods and systems considered, this difference for the minimum Rayleigh quotient is more than an order of magnitude greater than the maximum of the FCIQMC confidence interval. The minimum Rayleigh quotients from FCIQMC are greater than those from the FCI-FRI methods considered and, for all systems except \ce{C2}, are also greater than the Hartree-Fock energy. 
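For reference, eq \ref{eq:rayEn} is the standard Rayleigh quotient $\mathbf{v}^{\mathsf{T}} H \mathbf{v} / (\mathbf{v}^{\mathsf{T}} \mathbf{v})$, which for a symmetric matrix upper-bounds the lowest eigenvalue for any nonzero $\mathbf{v}$. A minimal sketch, with a dense toy matrix standing in for the (sparse) FCI Hamiltonian:

```python
import numpy as np

def rayleigh_quotient(H, v):
    """Variational energy estimate v^T H v / (v^T v) for a real iterate v."""
    return (v @ H @ v) / (v @ v)

# Toy symmetric matrix standing in for the FCI Hamiltonian.
H = np.array([[-2.0, 0.3, 0.0],
              [0.3, -1.0, 0.2],
              [0.0, 0.2, 0.5]])
v = np.array([1.0, 0.1, 0.0])      # a rough approximation to the ground state
rq = rayleigh_quotient(H, v)
e0 = np.linalg.eigvalsh(H)[0]      # exact lowest eigenvalue
```

In our calculations the same quotient is evaluated on sparse iterates, so the matrix-vector product involves only the rows and columns indexed by the nonzero elements of $\mathbf{v}$.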
This difference between the FCIQMC and FCI-FRI Rayleigh quotients can possibly be attributed to the lower-variance vector compression scheme employed in FCI-FRI. Even though the average of the FCIQMC iterates converges to the ground state to within a bias, the binomial integerization scheme used in FCIQMC displaces each iterate further from the ground state than in FCI-FRI. These results indicate that none of the vectors from the FCIQMC or FCI-FRI trajectories are particularly close to the ground state, as measured by the variational energy estimates. Two features are instead essential to the success of FCI-FRI methods: the average of each component of the solution vector converges quickly to its exact value, to within a controllable statistical bias, and the projected estimator is linear, rather than quadratic, in these components. \section{Conclusions} \label{sec:concl} This paper describes several generic matrix and vector compression techniques within the FRI framework in the context of the FCI problem. Hierarchical approaches to matrix compression are discussed and shown to offer significant advantages over approaches that require enumerating all nonzero elements. Two examples of hierarchical factorization schemes for the FCI Hamiltonian matrix are presented, namely near-uniform and heat-bath Power-Pitzer. We describe how these various techniques can be combined in methods for calculating the FCI ground-state energy using power iteration, and we compare these ``FCI-FRI'' methods to FCIQMC in its original form. Calculations on small molecules are used to compare the performance of these methods in terms of statistical efficiency, a metric inversely related to the square of the standard error. FCI-FRI calculations on the Ne atom demonstrate that using matrix compression in addition to vector compression can enable significant reductions in computational cost while only moderately decreasing the statistical efficiency.
We show that systematic matrix compression offers significant advantages over multinomial matrix compression, which has been used previously in FCIQMC. FCI-FRI calculations with systematic matrix compression applied to five small molecular systems are 11 to 45 times more efficient than those with multinomial compression, which are in turn 1.4 to 178 times more efficient than calculations performed using the original FCIQMC method. The advantages of these stochastic methods over related deterministic compression methods are investigated. The error in a stochastic calculation on the Ne atom is nearly two orders of magnitude less than that of a deterministic calculation with comparable cost, which illustrates the importance of stochastically representing all components of the solution vector in the FCI Slater determinant space. Furthermore, by applying variational energy estimators to stochastic calculations performed on all molecular systems, we demonstrate the importance of averaging over many sparse, stochastic iterates in producing an accurate energy estimate. These features of stochastic methods and the results in this study suggest the applicability of FCI-FRI methods to strongly correlated systems with dense solution vectors. Future research will investigate strategies for further improving the performance of FCI-FRI methods. We will develop implementations of these methods that exploit parallelism more effectively, possibly using techniques developed previously for FCIQMC. Due to the generality of the FRI framework, the compression techniques introduced here can be applied in tandem with the complementary initiator and semi-stochastic extensions to FCIQMC, which suggests an approach to further improving statistical efficiency. Additionally, examining the effect of the choice of parameters used in FCI-FRI calculations on the statistical efficiency may provide additional insight into how to optimize performance.
For example, our results suggest that FCI-FRI methods allow more flexibility than FCIQMC in the choice of the parameter $\varepsilon$, which corresponds to the time step in imaginary time propagation. Varying $\varepsilon$ may affect the statistical efficiency of FCI-FRI methods. Furthermore, the number of nonzero elements in each matrix and vector compression in FCIQMC is determined by the number of walkers, whereas in FCI-FRI, these parameters can be varied independently. FCIQMC methods require a critical number of walkers to reliably converge to the ground-state energy. Our results suggest that using improved matrix compression schemes in FCI-FRI methods can reduce the number of matrix and vector elements required for convergence. Exploring these possibilities may facilitate the development of stochastic methods for quantum chemistry that are able to treat larger systems than currently possible.
\section{Introduction} Neutrino oscillation data is at present consistent \cite{Maltoni:2004ei, Abe:2008ee} with just three light neutrinos with near tri-bi-maximal (TBM) mixing between flavours \cite{Wolfenstein:1978uw, Harrison:2002er, Harrison:2002kp, Harrison:2003aw, Low:2003dz}. However, the nature of the mass spectrum is still not established, being consistent with either a normal or an inverted hierarchy. Moreover, although the magnitude of the mass squared difference between neutrinos is reasonably well determined, the absolute scale of mass is not, being consistent with either a strongly hierarchical spectrum or a quasi-degenerate (QD) spectrum. Radiative running is especially important for QD neutrinos, as its effects on the mixing angles are larger than in the hierarchical case. This was stressed in \cite{Ellis:1999my, Casas:1999tp} where the mixing favoured at the time, bi-maximal mixing, was studied in depth. More recent studies of mixing-angle running include \cite{Antusch:2003kp, Plentinger:2005kx, Dighe:2006sr, Boudjemaa:2008jf} (and references therein). Here we discuss radiative corrections to TBM mixing, assuming that it arises through new physics, such as a family symmetry, at a high-energy scale. We determine how high, in a supersymmetric extension of the Standard Model, the initial energy scale can be while maintaining near TBM mixing at the low-energy scales relevant to oscillation experiments. The main difference from existing work is that emphasis is placed on the energy scales rather than on the resulting low-energy angles. Specifically, we set the angles to their TBM values at high-energy scales, run the angles to low energy and iterate the process to find the highest-energy scale that still keeps the low-energy angles within current experimental bounds.
The process is then repeated for different points of the parameter space, and the results are presented as a contour plot in the $m_{\nu _{i}}-\tan \beta $ plane ($i=1$ for normal and $i=3$ for inverted hierarchy). The underlying question raised by the observed near TBM mixing is the origin of the pattern and the reason it is so different from quark mixing. Models based on family symmetries, particularly discrete non-Abelian family symmetries, have been constructed to explain this pattern, e.g. \cite{deMedeirosVarzielas:2006fc, King:2006np}. In these models the difference between the quark and lepton sector follows naturally from the see-saw mechanism together with a strongly hierarchical right-handed neutrino Majorana mass spectrum. However these models only apply to the case of a hierarchical neutrino mass spectrum. Here we discuss how a discrete non-Abelian family symmetry can also give rise to near TBM mixing for the case of a QD spectrum. \section{Radiative corrections to TBM mixing} Family-symmetry models are typically constructed at some high scale, $M_{F}$, at which the model specifies relationships among parameters. To compare the predictions to low-energy data, radiative effects should be considered through the use of the renormalization group equations. When there is a strong hierarchy, it is often the case that these running effects do not change the mixing angles by much \cite{Antusch:2003kp,Plentinger:2005kx,Dighe:2006sr,Boudjemaa:2008jf}. In the case of QD neutrinos, however, the mixing angles can change substantially with the energy scale, to the point of erasing any special structure arranged by a family symmetry. For model-building purposes it is very important to know the highest-energy scale at which we can start with TBM mixing and still be consistent with mixing-angle data after running the angles down to the low-energy scale $M_{Z}$ (the Z-boson mass scale).
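The search for this highest admissible scale can be organized as a bisection on $\mathrm{Log}_{10} M_F$, with the renormalization group running and the $4\sigma$ comparison wrapped in a black-box predicate. The following is a schematic sketch only (all names are hypothetical, and monotonicity of the predicate in the scale is assumed):

```python
def highest_scale(angle_ok, log_mf_lo=3.0, log_mf_hi=16.0, tol=0.01):
    """Bisect for the largest Log10(M_F/GeV) at which TBM mixing set at
    M_F still yields low-energy angles within the experimental bounds.

    angle_ok(log_mf) -> True if running TBM mixing from 10**log_mf down
    to M_Z keeps tan^2(theta_12) within 4 sigma of observation (a black
    box wrapping the RG integration). Assumed True at log_mf_lo and
    monotone in log_mf.
    """
    if not angle_ok(log_mf_lo):
        raise ValueError("no admissible scale in the search window")
    if angle_ok(log_mf_hi):
        return log_mf_hi
    while log_mf_hi - log_mf_lo > tol:
        mid = 0.5 * (log_mf_lo + log_mf_hi)
        if angle_ok(mid):
            log_mf_lo = mid     # mid still within bounds: move up
        else:
            log_mf_hi = mid     # mid outside bounds: move down
    return log_mf_lo

# Toy stand-in: suppose the angle leaves the 4 sigma window above 10^7 GeV.
result = highest_scale(lambda s: s <= 7.0)
```

Repeating this search over a grid of $(m_{\nu_i}, \tan\beta)$ points yields the contour plots described below.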
The Standard Model (SM) suffers from the hierarchy problem associated with the need to keep electroweak breaking much below the Planck scale. This problem is evaded if the theory is supersymmetric, with supersymmetry broken close to the electroweak scale. For this reason we consider the radiative corrections to neutrino masses and mixing in the context of the minimal supersymmetric extension of the Standard Model (MSSM). We specify the low-energy boundary conditions of the renormalization group equations to be consistent with the three gauge coupling constants and the quark and lepton masses \cite{PDBook2008}. We assume an effective SUSY scale of $M_{S}=500$ GeV. We use the SM renormalization group equations below $M_{S}$ and the MSSM renormalization group equations above $M_{S}$. The only boundary condition set at the family-symmetry breaking scale $M_{F}$ is exact TBM mixing for the leptons \footnote{We ignore the small departures from TBM at the high scale which may arise from diagonalising the charged-lepton mass matrix \cite{Plentinger:2005kx, Antusch:2005kw}.}. The neutrino masses are set at the low-energy boundary relative to the lightest neutrino mass state ($m_{\nu_{1}}$ with a normal hierarchy and $m_{\nu_{3}}$ with an inverted hierarchy). We keep $\left| \Delta m^2_{12} \right|$, the solar mass-squared difference, and $\left| \Delta m^2_{23} \right|$, the atmospheric mass-squared difference, fixed at their measured values. \begin{figure}[tbp] \centerline{\includegraphics[width=3.3in]{FigContourObsNeutrinoScaleTopDown}\ \ \includegraphics[width=3.3in]{FigContourObsNeutrinoScaleTopDownIH}} \caption{Contours of $\mathrm{Log}_{10}(M_{F})$, where $M_{F}$ is the highest-energy family-symmetry breaking scale at which we can set TBM and have the neutrino mixing within $4\protect\sigma $ of the low-energy observed values. 
The white regions in the lower left of the contour plots are those where $M_F$ can be greater than $10^{16}$ GeV.} \label{FigContourSUSY} \end{figure} Figure \ref{FigContourSUSY} shows two contour plots. For the normal hierarchy the plot shows $m_{\nu_1}$ versus $\tan \beta$, and for the inverted hierarchy it shows $m_{\nu_3}$ versus $\tan \beta$. The contours specify $\mathrm{Log}_{10} M_F$, where $M_F$ is the highest-energy family-symmetry breaking scale at which we can set TBM mixing and have the low-energy mixing angles consistent to within $4 \sigma$ of the low-energy observations. The solar mixing angle $\theta_{12}$ is the most sensitive to radiative corrections. Exact TBM mixing gives $\tan^2 \theta_{12}=0.5$, and our $4 \sigma$ requirement at low energy translates to $\tan^2 \theta_{12} = 0.47 \pm 0.2$ \cite{Abe:2008ee}. The difference between the two graphs can mostly be understood from the slight bias of the observational data ($\tan^2\theta_{12} < 0.5$) and the opposite directions in which $\tan^2\theta_{12}$ runs for the normal and inverted hierarchies. Starting with perfect TBM mixing at the scale $M_F$, a normal hierarchy has $\tan^2\theta_{12}$ become larger as the renormalization scale becomes smaller. Once one falls below $M_S$, $\tan^2\theta_{12}$ begins to get smaller as the renormalization scale goes down during the final leg. The inverted hierarchy has the opposite behavior. Because there is a longer region of supersymmetric running, there is more parameter space of $m_{\nu_3}$ versus $\tan \beta$ compatible with $M_F \ge 10^{16}$ GeV. The slight bulge visible in the upper-right of the inverted-hierarchy contour plot with $M_F \approx 10^4$ GeV is due to the opposite directions of the supersymmetric running and the standard-model running of $\tan^2 \theta_{12}$ in the region where $M_F \approx M_S$. The contours in Figure \ref{FigContourSUSY} hold implications for QD TBM family-symmetry models. 
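As a reference point for the size of these effects, the exact TBM values quoted above follow directly from the TBM mixing matrix, written here in one common sign convention (conventions differ by column and row phases):

```python
import numpy as np

# Tri-bi-maximal mixing matrix (one common sign convention).
U_TBM = np.array([
    [ np.sqrt(2.0 / 3.0), 1.0 / np.sqrt(3.0),  0.0               ],
    [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0), -1.0 / np.sqrt(2.0)],
    [-1.0 / np.sqrt(6.0), 1.0 / np.sqrt(3.0),  1.0 / np.sqrt(2.0)],
])

assert np.allclose(U_TBM @ U_TBM.T, np.eye(3))    # unitarity check

tan2_theta12 = (U_TBM[0, 1] / U_TBM[0, 0]) ** 2   # solar angle
sin2_theta13 = U_TBM[0, 2] ** 2                   # reactor angle
tan2_theta23 = (U_TBM[1, 2] / U_TBM[2, 2]) ** 2   # atmospheric angle
print(round(tan2_theta12, 6), round(sin2_theta13, 6), round(tan2_theta23, 6))
# 0.5 0.0 1.0: tri-maximal solar, vanishing reactor, maximal atmospheric
```

The radiative corrections discussed in the text then appear as scale-dependent departures of these three quantities from $0.5$, $0$ and $1$.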
For $m_{\nu _{1}}>0.1$ eV, the neutrino spectrum is referred to as quasi-degenerate (QD) \cite{Vogel:2006sq} \footnote{We define QD as $m_{\nu _{1}}>0.1$ eV because above this value the $\beta \beta _{0\nu }$ constraints for differing hierarchies and phases converge to a common region, as shown in figure \ref{doublebeta}.}. Cosmological observations constrain the sum of the neutrino masses, $\sum_{i}m_{\nu _{i}}\leq 0.42$ eV at the $95\%$ confidence level \cite{Tegmark:2005cy}. This implies $m_{\nu _{1}}\leq 0.14$ eV, which excludes the right half of Figure \ref{FigContourSUSY}. The remaining allowed narrow strip is consistent with the non-observation of neutrinoless double beta decay $\beta \beta _{0\nu }$, which places a limit of $m_{ee}<0.34$ eV. Uncertainties in the nuclear matrix elements weaken this bound by about a factor of $3$. If we believe that the family-symmetry scale satisfies $M_{F}>10^{10}$ GeV, and hypothesize a model which leads to a QD neutrino spectrum with normal hierarchy, then $\tan \beta <6$ is required (or $\tan \beta < 8$ for a model with inverted hierarchy). In contrast, if a normal hierarchy model has $\tan \beta >6$ (or $\tan \beta > 8$ for an inverted hierarchy model), then the lightest neutrino must be lighter than $0.1$ eV and the spectrum therefore hierarchical. \section{A discrete non-Abelian family symmetry model of QD neutrinos with TBM mixing} As stressed in \cite{Barbieri:1999km}, an underlying $SO(3)$ family symmetry readily leads to a near-degenerate neutrino mass spectrum. In their model the chiral superfields, $L^{i}$ (where $i$ is the $SO(3)$ family index), contain the lepton doublets and transform as triplets under the $SO(3)$ group. The chiral superfields containing the conjugates of the right-handed electron, muon and tau, respectively $e^{c}$, $\mu^{c}$ and $\tau^{c}$, are $SO(3)$ singlets. 
The effective Majorana neutrino mass is constrained by the symmetry and comes from the superpotential \begin{equation} W_{eff}=y_{0}(L^{i}L^{i})H_{u}H_{u}/M \label{so3} \end{equation}% where $H_{u}$ is the supermultiplet containing the Higgs field whose vacuum expectation value (VEV), $\left\langle H_{u}\right\rangle =v$, is responsible for up quark masses in the MSSM and $M$ is the messenger scale associated with the mechanism generating this dimension 5 term (in the Type II see-saw it is the mass of the exchanged isotriplet Higgs field). The important point to be taken from eq.(\ref{so3}) is that the family symmetry forces the three light neutrinos to be degenerate. Small departures from degeneracy result when the $SO(3)$ family symmetry is broken. In what follows we will show how this can naturally lead to a mass mixing matrix which gives near TBM mixing. This is done through the breaking of the family symmetry by the non-vanishing vacuum expectation values (VEVs) of familon fields, denoted as $\phi _{A}^{i}$, where the $A=3$, $23$, $123$ labels three distinct fields and serves as a reminder of their VEV directions which are given by \begin{equation} \left\langle \phi _{3}\right\rangle =\left( \begin{array}{c} 0 \\ 0 \\ a% \end{array}% \right) \ \left\langle \phi _{23}\right\rangle =\left( \begin{array}{c} 0 \\ -b \\ b% \end{array}% \right) \ \ \ \left\langle \phi _{123}\right\rangle =\left( \begin{array}{c} c \\ c \\ c% \end{array}% \right) \label{eq:P123 vev} \end{equation}% where $a,b$ and $c$ are complex parameters. Table \ref{ta:SO3} lists the full set of supermultiplets and their symmetry properties under the $SO(3)$ symmetry extended by a further set of symmetries $G=Z_{3R}\times Z_{2}\times U_{\tau }(1)$ which limit the terms that can appear in the superpotential. $Z_{3R}$ is a discrete $R-$symmetry which ensures the familon fields are moduli and cannot appear in the superpotential except coupled to ``matter'' fields carrying non-zero $R-$charge. 
The $U_{\tau }(1)$ symmetry is introduced to distinguish the third family of leptons from the first two. In practice it also explains why the mixing in the charged-lepton sector is different from that in the neutrino sector which leads to near tri-bi-maximal mixing. \begin{table}[tbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Field & $SO(3)$ & $Z_{3R}$ & $U_{\tau }(1)$ & $Z_{2}$ \\ \hline $L^{i}$ & 3 & 1 & 0 & + \\ $e^{c}$ & 1 & 1 & 0 & + \\ $\mu ^{c}$ & 1 & 1 & 0 & - \\ $\tau ^{c}$ & 1 & 1 & -1 & + \\ $H_{u,d}$ & 1 & 0 & 0 & + \\ \hline $\phi _{3}^{i}$ & 3 & 0 & 1 & + \\ $\phi _{23}^{i}$ & 3 & 0 & 0 & - \\ $\phi _{123}^{i}$ & 3 & 0 & 0 & + \\ \hline $X$ & 1 & 2 & 0 & - \\ \hline \end{tabular}% \end{center} \caption{Assignment of the fields under the $SO(3)$ family symmetry.} \label{ta:SO3} \end{table} The special structure of the VEVs in eq(\ref{eq:P123 vev}) is what will generate TBM mixing and is clearly the most important aspect of the model. This can happen naturally if the underlying family symmetry is not $SO(3)$ but a discrete non-Abelian subgroup. We will discuss below the nature of this symmetry and the vacuum alignment leading to eq(\ref{eq:P123 vev}) (the $X$ field of Table \ref{ta:SO3} is introduced to facilitate this vacuum alignment), but first we show that it does generate approximate TBM mixing. The leading terms in the superpotential responsible for neutrino masses that are invariant under the family symmetries are given by \begin{equation} W_{\nu }=y_{0}(L^{i}L^{i})H_{u}H_{u}+y_{\odot }(\phi _{123}^{i}L^{i})^{2}H_{u}H_{u}+y_{@}(\phi _{23}^{i}L^{i})^{2}H_{u}H_{u}. \label{win} \end{equation}% where we have suppressed the messenger scale. Note that due to the $Z_{2}$ factor there are no cross terms involving $\phi _{23}\phi _{123}$ \cite% {Ross:2007zz, deMedeirosVarzielas:2008en} and due to the $U_{\tau }(1)$ factor there is no term involving $\phi _{3}$. As in eq(\ref{so3}), the QD mass scale is set by the first term of eq(\ref{win}). 
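To get a feel for the QD mass scale set by the first term of eq(\ref{win}), note that restoring the messenger scale gives $m_{0}\sim y_{0}v^{2}/M$ up to $O(1)$ factors. A minimal numeric estimate (assuming an $O(1)$ coupling and an electroweak-scale VEV, both illustrative assumptions rather than fits) places the messenger scale near the usual see-saw scale:

```python
# Order-of-magnitude estimate of the messenger scale M implied by the
# dimension-5 operator: m_0 ~ y_0 <H_u>^2 / M, with O(1) factors dropped.
y0     = 1.0       # assumed O(1) coupling (illustrative)
v      = 174.0     # GeV, electroweak-scale VEV (illustrative)
m0_eV  = 0.15      # eV, a representative quasi-degenerate neutrino mass
m0_GeV = m0_eV * 1.0e-9

M = y0 * v**2 / m0_GeV
print(f"M ~ {M:.1e} GeV")   # of order 10^14 GeV, a typical see-saw scale
```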
For near degeneracy, the other terms must be relatively small ($y_{\odot } c^2, y_{@} b^2 \ll y_{0}$, still suppressing the messenger scale). The charged-lepton masses come from the superpotential \begin{equation} W_{e}=\lambda _{e}(L^{i}\phi _{123}^{i})e^{c}H_{d}+\lambda _{\mu }(L^{i}\phi _{23}^{i})\mu ^{c}H_{d}+\lambda _{\tau }(L^{i}\phi _{3}^{i})\tau ^{c}H_{d}. \label{lepton} \end{equation}% The $m_{\mu }/m_{\tau }$ ratio is given by $\lambda _{\mu }\langle \phi _{23}^{i}\rangle /\lambda _{\tau }\langle \phi _{3}^{i}\rangle$. From this one sees that the mixing between the second and third families of charged leptons is small, of $O(m_{\mu }/m_{\tau })$. Similarly one may see that the mixing between the first and second families is of $O(m_{e}/m_{\mu })$ and that between the first and third families is of $O(m_{e}/m_{\tau })$, both very small. Ignoring the small corrections from the charged-lepton sector, the light neutrino mass eigenstates are proportional to the combinations $\phi _{123}^{i}L^{i} H_u$ and $\phi _{23}^{i}L^{i} H_u$ \footnote{In finding the mass eigenstates with a complex Majorana mass matrix, one needs to be careful to diagonalize $M_\nu M_\nu^\dag$ and not just $M_\nu$. Because $M_\nu$ is symmetric, it can also be diagonalized by an orthogonal transformation $O M_\nu O^T$. In general $O \neq U_\nu$, and the squares of the eigenvalues of $M_\nu$ are not the same as those of $M_\nu M_\nu^\dag$ \cite{Doi:1980yb}.}. From eq(\ref{eq:P123 vev}) we see that these are given by% \begin{eqnarray} \nu _{@} &=&\frac{1}{\sqrt{2}}\left( \nu _{\mu }-\nu _{\tau }\right) \label{neutrino eigenstates} \\ \nu _{\odot } &=&\frac{1}{\sqrt{3}}\left( \nu _{e}+\nu _{\mu }+\nu _{\tau }\right) \nonumber \end{eqnarray}% where $\nu _{e,\mu ,\tau }$ are the components of $L^{e,\mu ,\tau }$ respectively (selected by the VEV of $H_u$). Ignoring the small charged-lepton mixings discussed above, $\nu _{e,\mu ,\tau }$ can be identified with the current eigenstates. 
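These statements can be checked directly. With illustrative real parameters (an assumption made only for this sketch; the values below are not fits), the effective mass matrix implied by the three terms of eq(\ref{win}) is diagonalized exactly by the combinations in eq(\ref{neutrino eigenstates}):

```python
import numpy as np

# Normalized VEV directions of phi_123 and phi_23 from eq. (2).
u123 = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
u23  = np.array([0.0, -1.0, 1.0]) / np.sqrt(2.0)

# Illustrative real mass parameters with m_sol < m_atm << m0.
m0, m_sol, m_atm = 1.0, 0.01, 0.05

# Effective light-neutrino mass matrix implied by the three terms of W_nu.
M_nu = (m0 * np.eye(3)
        + m_sol * np.outer(u123, u123)
        + m_atm * np.outer(u23, u23))

vals, vecs = np.linalg.eigh(M_nu)
print(np.round(vals, 4))   # near-degenerate spectrum: m0, m0+m_sol, m0+m_atm

# The split eigenstates are exactly the tri-maximal and bi-maximal states.
v_sol = vecs[:, np.argmin(np.abs(vals - (m0 + m_sol)))]
v_atm = vecs[:, np.argmin(np.abs(vals - (m0 + m_atm)))]
assert np.isclose(abs(v_sol @ u123), 1.0)
assert np.isclose(abs(v_atm @ u23), 1.0)
```

The dominant $m_0$ term is flavour-blind, so the small $\phi _{123}$ and $\phi _{23}$ perturbations alone fix the mixing pattern.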
If $b$ and $c$ are real and positive, and $m_{\odot }=y_{\odot }c^{2}v^{2}<m_{@}=y_{@}b^{2}v^{2}$, one can see from eq(\ref{win}) and eq(\ref{neutrino eigenstates}) that we obtain the normal hierarchy, in which $\nu _{@}$ may be identified with the atmospheric neutrino with bi-maximal mixing while $\nu _{\odot }$ may be identified with the solar neutrino with tri-maximal mixing. The normal hierarchy persists for a range of complex $b$ and $c$ values in the neighbourhood of the real solution. An inverted hierarchy is possible and viable if $b$, $c$ are approximately imaginary and real, respectively. Although here we are working at the effective Lagrangian level, we already noted that $(L^{i}L^{i})HH$ naturally arises from the $SO(3)$ invariant Type II see-saw mechanism. The other two neutrino mass terms can arise from a Type I see-saw through exchange of appropriate heavy right-handed Majorana neutrinos, in a manner similar to that discussed for an $SU(3)$-based model in \cite{deMedeirosVarzielas:2005ax}. Being of different origin, it can readily happen that the common mass, $m_{0}=y_{0}v^{2}$, is much larger than $m_{@}$ and $m_{\odot }$. \section{Discrete non-Abelian symmetry and vacuum alignment} We turn now to a discussion of how the pattern of VEVs displayed in eq(\ref{eq:P123 vev}) is dynamically generated. This can be achieved relatively simply if the underlying family symmetry is a discrete non-Abelian subgroup of $SO(3)$ (and $SU(3)$). A very simple example is given by $A_4 \equiv \Delta(12)$, belonging to the $\Delta (3 n^2)$ family of groups \cite{Luhn:2007uq}. The $\Delta(12)$ invariant terms in the potential are those invariant under the group elements of the semi-direct product $Z_3 \ltimes Z_2$ (which generate the group $\Delta(12)$). The action of these group elements on a triplet representation $\phi^{i=1,2,3}$ is shown in Table \ref{Table2}. 
\begin{table}[tbp] \begin{center} \begin{tabular}{|l|l|l|} \hline & $\mathbf{Z}_{3}$ & $\mathbf{Z}_{2}$ \\ \hline $\phi ^{1}$ & $\phi ^{2}$ & $\phi ^{1}$ \\ $\phi ^{2}$ & $\phi ^{3}$ & $-\phi ^{2}$ \\ $\phi ^{3}$ & $\phi ^{1}$ & $-\phi ^{3}$ \\ \hline \end{tabular}% \caption{Action of the group factors $Z_3$ and $Z_2$ on the triplet representation $\phi^i$.}\label{Table2}% \end{center} \end{table}% Since $\Delta(12)$ is a subgroup of $SO(3)$, all $SO(3)$ invariants are allowed by the discrete subgroup. Thus the terms of eq(\ref{win}) and eq(\ref{lepton}) are allowed. The discrete subgroup allows additional terms, but these are all higher dimensional and consequently small provided the VEVs of eq(\ref{eq:P123 vev}) are small relative to the relevant messenger mass. Thus the lepton mass and mixing structure discussed is a consequence of the non-Abelian discrete group even though the $SO(3)$ structure used above to motivate it is only approximate. Turning now to the question of vacuum alignment, consider the leading terms in the potential for the triplet familon fields. Because of the $R-$% symmetry, in the absence of the $X-$field, there are no $F-$terms involving just the familon fields coming from the superpotential. The leading $D-$% terms consistent with symmetries of Table \ref{ta:SO3} are \begin{equation} V\left( \phi \right) =\alpha m^{2}\sum_{i}\left\vert \phi ^{i}\right\vert ^{2}+\beta m^{2}\left\vert \sum_{i}\left\vert \phi ^{i}\right\vert ^{2}\right\vert ^{2}+\gamma m^{2}\sum_{i}\left\vert \phi ^{i}\right\vert ^{4}+\delta m^{2} \left\vert \sum_{i} (\phi^{i})^{2} \right\vert ^{2} \label{potential} \end{equation}% Here the quadratic term is driven by supersymmetry breaking and $m$ is the gravitino mass. The coefficient includes radiative corrections which can drive it negative at some scale $\Lambda$, triggering a VEV for $\phi$. 
The remaining terms can arise through radiative corrections and also only arise if supersymmetry is broken - hence the factor of $m^{2}$ on every term. The second term is generated at one-loop order if the superpotential contains a term of the form $\xi Y\sum_{i}\phi ^{i}\chi ^{i}$ where $\chi ^{i}$ and $Y$ are ($ Z_{3R}=1)$ massive chiral superfields which we take for presentational simplicity to have mass $M$. These two terms are invariant under the larger group $SU(3)$ and, if $\alpha$ is negative, generate a VEV of the form $\langle \phi \rangle =(r,s,t)$ where $r^{2}+s^{2}+t^{2}=x^{2}$, with $x^2$ a constant of $% O(\Lambda ^{2})$. The third term, consistent with the non-Abelian family group, breaks $SU(3)$ and $SO(3)$. It will be generated if the underlying theory contains a superpotential term of the form $((\phi^1)^2 + \omega^2 (\phi^2)^2 + \omega (\phi^3)^2) Z$, where $\omega$ is the cube root of unity ($\omega^3 =1$) and $Z$ is in a singlet representation of $\Delta(12)$ (one of the three irreducible singlet representations and distinct from the representation of $Y$) \footnote{One may readily check that it is easy to assign charges under the group $G$ to the new fields, $\chi^{i}$, $Y$ and $Z$ to allow these couplings.}. This coupling is invariant under the discrete group but not under $SU(3)$ or $SO(3)$. The resulting third term of eq(\ref{potential}) splits the vacuum degeneracy. For negative $\alpha$, the minimum for $\gamma$ positive has $\left\vert \langle \phi ^{i} \rangle \right\vert =x(1,1,1)/\sqrt{3}$ while for $\gamma $ negative $% \left\vert \langle \phi ^{i} \rangle \right\vert =x(0,0,1)$. Finally the fourth term also results from a one-loop radiative correction due to the $\xi Y\sum_{i}\phi ^{i}\chi ^{i}$ interaction. It is $SO(3)$ but not $SU(3)$ invariant and constrains the phases of the familon fields. For $\delta $ negative and $% \gamma $ positive the minimum has $\langle \phi ^{i} \rangle = x(1,1,1)/\sqrt{3}$ where $x$ can be complex. 
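A quick numerical check of this alignment (restricted to real field values for illustration): on the unit sphere the $SU(3)$-breaking invariant $\sum_{i}\left\vert \phi ^{i}\right\vert ^{4}$ is bounded between $1/3$, attained at $(1,1,1)/\sqrt{3}$, and $1$, attained at a single-axis direction such as $(0,0,1)$, so the sign of $\gamma $ selects between the two candidate vacua:

```python
import numpy as np

def quartic(phi):
    """The SU(3)-breaking invariant sum_i |phi_i|^4."""
    return float(np.sum(np.abs(phi) ** 4))

rng = np.random.default_rng(0)
samples = rng.normal(size=(20000, 3))                      # random real configs
samples /= np.linalg.norm(samples, axis=1, keepdims=True)  # restrict to |phi| = 1
vals_quartic = np.sum(samples ** 4, axis=1)

aligned_111 = np.ones(3) / np.sqrt(3.0)    # candidate minimum of sum |phi_i|^4
aligned_001 = np.array([0.0, 0.0, 1.0])    # candidate maximum

print(quartic(aligned_111), quartic(aligned_001))          # 1/3 and 1
assert vals_quartic.min() >= quartic(aligned_111) - 1e-9
assert vals_quartic.max() <= quartic(aligned_001) + 1e-9
```

For $\gamma >0$ the quartic term is therefore minimized at $(1,1,1)/\sqrt{3}$, and for $\gamma <0$ at $(0,0,1)$, in agreement with the vacua quoted above.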
This provides a mechanism to generate the vacuum alignment of $\phi _{3}$ and $\phi _{123}$ as each will have a potential of the form in eq(\ref{potential}) - as we are considering more than one familon, we label the coefficients with the familon's subscript to identify which term they correspond to. The structure of eq(\ref{eq:P123 vev}) results if $% \gamma _{3}$ is positive and $\gamma _{123}$, $\delta _{123}$ are negative. Finally what about $\phi _{23}?$ Its VEV of the form in eq(\ref{eq:P123 vev}% ) readily results once one includes the effect of the $X$ field of Table \ref% {ta:SO3} because the symmetries allow a term in the superpotential proportional to $X (\phi _{23} \phi _{123})$. This leads to a positive semi-definite term in the potential proportional to $\left\vert \phi _{23}\phi _{123}\right\vert ^{2}$. There remains the need to align $\phi_{23}$ and $\phi_{3}$. This is readily done if radiative corrections generate a term $m^2 \left\vert \phi^{\dagger}_{23} \phi_{3} \right\vert ^2$ with a negative coefficient, thus a VEV will develop for $\phi _{23}$ in the direction given by eq(\ref{eq:P123 vev}) \footnote{Note that in eq(\ref{eq:P123 vev}) we have used the freedom to define the directions such that $\langle \phi_{3}^{1,2} \rangle = 0$, $ \langle \phi_{23}^{1} \rangle = 0$.}. One may readily check that the higher dimension terms allowed by the symmetry which involve the $X$ field always involve an odd factor of $\phi _{23}\phi _{123}$ and do not disturb the vacuum alignment mechanism discussed above. \section{Neutrinoless double-beta decay} \begin{figure}[tbp] \centerline{\includegraphics[width=3in]{bb_plot_2007}} \caption{Neutrinoless double-beta decay $m_{\beta \beta}$ plots, from \cite{PDBook2008}. From left to right, $m_{min}$ is the absolute value of the lightest neutrino mass, $M$ is the sum of the light neutrino masses, and $\langle m_{\beta} \rangle$ is the average mass determined from low energy beta decays. 
The shaded areas have width due to the unknown Majorana phases, and the areas enclosed by solid lines take into account the errors of the oscillation data. The two sets of solid lines correspond to the normal and inverted hierarchies.} \label{doublebeta} \end{figure} The implication for neutrinoless double-beta decay in this model is unambiguous because the relative phases of the familon fields are determined. The amplitude for neutrinoless double-beta decay is proportional to the magnitude of $\sum m_{\nu _{i}}U_{ei}^{2}\equiv m_{\beta \beta }$, and this is what is measured. For TBM mixing $U_{e\tau }$ vanishes. The relative phase between the remaining two terms is given by $Arg[m_{0}+e^{2i p_{123}}m_{\odot }] - Arg[m_{0}]$, where $p_{123}=Arg[y_{\odot }\phi _{123} \phi_{123} /y_{0}]$. As $m_{\odot} < m_{0}$, the relative phase remains small. This corresponds to the upper branches of Figure \ref{doublebeta} in the QD region. Complex phases in the VEVs induce other CP violations through the charged-lepton sector that do not significantly affect $m_{\beta \beta }$. \section{Conclusion} Attempts to explain the structure of fermion masses and mixings often rely on structure at a high (Grand Unified?) scale, $M_{F}$, to generate the observed pattern. One possibility, consistent with neutrino oscillation data, is that neutrinos are nearly degenerate. However, due to enhanced radiative corrections in this case, the observation of near TBM mixing is difficult to reconcile with such a high-scale mechanism. To keep the deviations from TBM mixing within experimental limits it is necessary to limit the scale at which TBM mixing is generated. We have determined this scale for the MSSM and found significant bounds on $M_{F}$. For example, for degenerate ($m_{\nu _{i}}>0.1$ eV), normal hierarchy neutrinos and $\tan \beta >6$ (or $\tan \beta >8$ for inverted hierarchy neutrinos), $M_{F}<10^{10}$ GeV is required. 
To get close to the Grand Unified scale with QD neutrinos it is necessary to have very small $\tan \beta$. Turning to the origin of the structure, we have constructed a model based on a discrete non-Abelian family symmetry which gives a QD neutrino spectrum and near TBM mixing. This relies on a natural mechanism for vacuum alignment of the familons which break the family symmetry. The mechanism predicts that neutrinoless double-beta decay should be maximal. Although we only constructed the low-energy effective theory, it fits very well with a see-saw mechanism in which the degenerate mass comes from a Type II see-saw while the small departures from degeneracy are driven by a Type I see-saw. \section*{Acknowledgments} IdMV would like to thank Oxford University for their warm hospitality and support during a short stay when this project was initiated. The work of IdMV was supported by FCT under the grant SFRH/BPD/35919/2007. The work of GGR was partially supported by the EC Network 6th Framework Programme Research and Training Network ``Quest for Unification'' (MRTN-CT-2004-503369) and by the EU FP6 Marie Curie Research and Training Network ``UniverseNet'' (MPRN-CT-2006-035863). MS acknowledges support from the United States Air Force Institute of Technology. This work was partly supported by the Science and Technology Facilities Council of the United Kingdom. The views expressed in this paper are those of the authors and do not reflect the official policy or position of the United States Air Force, Department of Defense, or the US Government. \bibliographystyle{./jhep}
\section{Introduction} With the fast development of digital recording devices and online video sharing platforms, the number of videos available is increasing exponentially, making video understanding a challenging problem in computer vision. As a first step towards video understanding, a significant amount of work has been dedicated to video classification. However, the video understanding problem goes beyond a standard classification problem. Temporally localizing the presences of objects/actions can help us to identify relevant moments in a video and thus better understand its content. Moreover, a video can contain a number of topics that are not always characterized by every time segment within the video. Hence, a better temporal localization algorithm can enable applications such as improved video search (search within a video), highlights extraction, etc. To accelerate the research of temporal localization, Google AI recently released the YouTube-8M Segment Dataset. In this work, we focus on a segment-level classification task presented in the third YouTube-8M Challenge using this segment-level dataset. \par Previous YouTube-8M challenges focused on developing models for video-level predictions. 
Standard deep neural networks like convolutional neural networks (CNNs) \cite{karpathy2014large,wang2017monkeytyping} and recurrent neural networks (RNNs) \cite{li2017temporal,ostyakov2018label} have been used for video-level classification, and both have achieved state-of-the-art results. Pooling via clustering schemes, such as the Vector of Locally Aggregated Descriptors (VLAD) \cite{jegou2010aggregating,miech2017learnable} and deep bag-of-frames (DBoF) \cite{abu2016youtube,sivic2003video}, has also been widely used among the top competitors. However, these frame-level classifiers are designed to predict video-level labels and cannot necessarily perform segment-level predictions well. Different temporal action localization networks have also been proposed to solve the temporal localization problem. One popular structure is a two-stage, proposal-plus-classification framework \cite{chao2018rethinking}. However, such a model cannot be directly applied here, as it cannot exploit the large training set that carries only video-level labels. \par To better leverage the large training dataset, which only has noisy video-level labels, together with a comparatively smaller segment-level validation dataset, we propose to utilize a multi-instance learning (MIL) model \cite{maron1998framework,ilse2018attention} to simultaneously temporally localize and classify the target segments. The core idea is to use multiple attention weights to emphasize critical frames from different high-level topics in the video. With our training procedure, the proposed model performed better than both standard neural networks and pooling via clustering methods. Before discussing the models for the task in this challenge, we will give a brief overview of the dataset and the unique challenges it poses. \subsection{YouTube-8M Segment Dataset} The YouTube-8M Segment Dataset is an extension of the previous YouTube-8M Dataset \cite{abu2016youtube,lee20182nd} which includes human-verified segment-level labels. 
The previous YouTube-8M dataset contains 6 million high-quality video samples from YouTube, which were split into 3 partitions: training, validation and test sets, following an approximately $70\%$/$20\%$/$10\%$ split. The video samples were encoded as a hidden representation produced by Inception-v3 \cite{szegedy2016rethinking} pretrained on the ImageNet dataset \cite{ioffe2015batch}, for both audio and video features taken at a rate of 1 Hz. This dataset only contains video-level annotations, with 3862 class labels and an average of 3 labels per video. \par In the YouTube-8M Segment Dataset, multiple 5-second segments are sampled based on classifier scores to encourage both positive and negative samples within a video. Each segment is then labeled by human raters from a subset of the original vocabulary, excluding entities that are not temporally localizable. In total, approximately 237K segments covering 1000 categories are labeled. These video segment labels provide a valuable resource for temporal localization. In the 3rd YouTube-8M video understanding challenge, we are required to predict segment-level labels in the test set. Submissions are evaluated using the Mean Average Precision @$K_s$ ($\text{MAP}$@$K_s$), where $K_s=100,000$. For each entity, the $\text{MAP}$@$K_s$ score is calculated as \begin{equation} \text{MAP}@K_s = \frac{1}{C} \sum_{c=1}^C \frac{\sum\limits_{k=1}^{K_s} P(k)rel(k)}{N_c} \end{equation} where $C$ is the number of classes, $P(k)$ is the precision at cutoff $k$, $K_s$ is the number of segments predicted per class, $rel(k)$ is an indicator function equaling $1$ if the item at rank $k$ is a relevant (correct) class, or zero otherwise, and $N_c$ is the number of positively-labeled segments for each class. \par The paper is organized as follows. In section 2, we review the general model architectures we used, along with some related work. Details of our models are introduced in section 3. 
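For concreteness, the $\text{MAP}$@$K_s$ metric defined above can be implemented per class as follows (a minimal sketch; the function and variable names are illustrative, not from the official evaluation code):

```python
def average_precision_at_k(rel, n_pos, k):
    """AP@k for one class: rel is the ranked 0/1 relevance list,
    n_pos the number of positively-labeled segments for the class."""
    rel = rel[:k]
    hits, score = 0, 0.0
    for rank, r in enumerate(rel, start=1):
        if r:
            hits += 1
            score += hits / rank   # P(rank) * rel(rank)
    return score / n_pos

def map_at_k(rel_per_class, n_pos_per_class, k):
    """Mean of the per-class AP@k scores."""
    aps = [average_precision_at_k(r, n, k)
           for r, n in zip(rel_per_class, n_pos_per_class)]
    return sum(aps) / len(aps)

# Tiny example: one class, ranked relevances [1, 0, 1] with N_c = 2:
# AP = (1/1 + 2/3) / 2 = 0.8333...
print(round(map_at_k([[1, 0, 1]], [2], k=100000), 4))
```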
Section 4 presents the evaluation of all models, and section 5 concludes the paper. \section{Related work} Most of the models used in previous challenges consist of two components, frame-level pooling and a video-level classifier. The first component pools frame-level features over time and the second component classifies the pooled features. For the pooling component, the DBoF \cite{abu2016youtube} model typically projects the features from a fixed number of randomly selected frames in a video into a higher-dimensional vector and pools across frames in that space to create a video-level representation. NetVLAD \cite{arandjelovic2016netvlad} determines differentiable soft assignments of frames to different clusters and uses the concatenation of residuals for each visual word as the final representation. RNNs extract hidden representations of frames over time and use the final output state as a representation of the video. CNNs use convolution kernels to extract higher-level representations of features and implement max-pooling across time. For the classifier component, typically a logistic layer or a Mixture of Experts (MoE) \cite{jordan1994hierarchical} is used to obtain the final predictions. Other tricks, including context gating \cite{miech2017learnable}, data distillation \cite{wang2017monkeytyping,skalic2018building} and exponential weight averaging \cite{skalic2018building}, have also been reported to boost the final prediction outcome. In the same vein, our solution also consists of two components. However, in our model, we developed an attention/multi-attention-based mechanism to pool the frame-level features. \section{Models} In this section, we will discuss how we formulated the problem and how we used a multi-attention model to address the challenge of temporal localization. 
\begin{figure*} \begin{center} \includegraphics[page=1,width=\textwidth]{Figures/attention_network2.pdf} \end{center} \caption{(a) A schematic showing attention-based pooling for temporal localization. Frames are pooled using an attention network with frame-level features. (b) An extension of the model shown in (a), where multiple attention networks are used.} \label{fig:attn} \end{figure*} \subsection{Multiple instance multi-label learning} Multiple instance learning (MIL) deals with problems with incomplete knowledge of labels in the training set. In the case of MIL, there is a bag of instances, $X=\{x_1,x_2,...,x_K\}$, where $K$ represents the total number of instances in the bag. However, we only have access to the label $y$ associated with the bag instead of the labels of individual instances. In the YouTube-8M frame-level video dataset, each frame at one timepoint $x_i$ can be considered as one instance in the bag of frames. Furthermore, each video (bag of frames) is annotated with multiple class labels $y \in [0,1]^n$ (n=3862 in the training set and $n=1000$ in the validation and testing set). \par A generic MIL model can be described as \begin{equation} S(X) = g \Bigg(\theta \Big(f(X) \Big) \Bigg). \end{equation} The choice of the functions $g,\theta,f$ determines the specific model to predict label probability of the bag. In the YouTube-8M competition, a majority of the commonly used models can be categorized as embedding-based MIL methods. In the embedding-based methods, the transformation $f$ maps each instance to a low-dimensional feature space. Then individual instances are aggregated by the MIL pooling $\theta$ and finally classified by the model $g$. Taking two baseline models as examples, each component can be understood as follows \begin{itemize} \item Frame-level logistic model: The pretrained Inception-v3 network first maps the original image and audio at each time point into a frame-level feature. 
Then frame-level features are pooled across the bag by the mean operator \begin{equation} \theta \Big(f(X) \Big) = \frac{1}{K} \sum_{i=1}^K f(x_i). \end{equation} Finally, the pooled features are classified by a logistic regression model. \item Frame-level DBoF model: The frame-level features extracted from the Inception-v3 network are projected onto a higher-dimensional space. Then max pooling is used to perform the aggregation \begin{equation} \theta \Big(f(X) \Big) = \max\limits_{i=1,...,K} f(x_i). \end{equation} Other MIL pooling methods can be used to replace the mean or max operator, such as log-sum-exp \cite{ramon2000multi}, noisy-and \cite{kraus2016classifying}, etc. But to better emphasize the frames that contribute most to the final prediction result, we propose a learnable weighted average of frames as the pooling method. \end{itemize} \subsection{Gated attention network} In the training dataset, each bag contains all the frames from a video. In the validation/test dataset, the model only needs to predict the labels of 5-frame segments. To bridge the gap between the regular training set and the validation/testing set, it is particularly important that the pooling method can emphasize critical frames or segments of a video. We formulate the pooling as a weighted average of frames \begin{equation} \theta \Big(f(X) \Big) = \sum_{i=1}^K w_i h_i \end{equation} where $h_i=f(x_i)$ are the frame-level features after the initial mapping. The attention weights $w_i$ are determined by a neural network \begin{equation} w_i = \frac{\exp \Big(a^T \tanh(Vh_i^T) \Big)}{\sum\limits_{j=1}^K \exp \Big(a^T \tanh(Vh_j^T) \Big) } \end{equation} where $V \in \mathcal{R}^{L \times D}$ and $a \in \mathcal{R}^{L \times 1}$ are parameters of the neural network. The softmax function constrains the weights to sum to 1, so that the pooled feature is invariant to the length of a video. 
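As a concrete illustration, the softmax-attention pooling above can be sketched in a few lines of NumPy (a simplified sketch, not the actual TensorFlow implementation; the matrix shapes are illustrative):

```python
import numpy as np

def attention_pool(H, V, a):
    """Pool K frame features H (K x D) into a single D-vector.

    V (L x D) and a (L,) play the role of the attention parameters
    above; the softmax makes the weights sum to 1, so the pooled
    feature is invariant to the number of frames K.
    """
    scores = a @ np.tanh(V @ H.T)              # one score per frame, shape (K,)
    scores = scores - scores.max()             # numerical stability
    w = np.exp(scores) / np.exp(scores).sum()  # attention weights w_i
    return w @ H                               # weighted average of frames
```

Because the weights are normalized per video, the same pooling applies unchanged to full-length training videos and to 5-frame test segments.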
Furthermore, an additional gating mechanism using the sigmoid function \cite{dauphin2017language}, along with the $\tanh(\cdot)$ nonlinearity, is used to help the attention network learn complex non-linear relationships among instances \cite{ilse2018attention} \begin{equation} w_i = \frac{\exp \Bigg(a^T \Big(\tanh(Vh_i^T) \odot \sigma(Uh_i^T) \Big) \Bigg)}{\sum\limits_{j=1}^K \exp \Bigg(a^T \Big(\tanh(Vh_j^T) \odot \sigma(Uh_j^T) \Big) \Bigg) } \end{equation} where $U \in \mathcal{R}^{L \times D}$ are parameters, $\odot$ is elementwise multiplication and $\sigma(\cdot)$ is the sigmoid non-linearity. Once frame-level features are pooled by weighted averaging, a video-level classifier model is used to obtain the final prediction probabilities, as shown in Figure 1 (a). \par We used the attention weights $w_i$ to select the important frames in a video. Such an attention mechanism can also be considered a temporal localization scheme, in which frames or segments with more discriminant information are emphasized. We further experimented with a sparsemax function \cite{martins2016softmax} instead of softmax, to obtain a more selective and compact focus on frames. The sparsemax model is also included in the final ensemble of models. \subsection{Multi-attention network} \begin{figure*} \begin{center} \includegraphics[page=1,width=0.95\textwidth]{Figures/label_loss_fig.png} \end{center} \caption{The loss function for an attention network over the two phases of training. The loss was computed using cross entropy over labels, with more weight assigned to the 1000 classes that are present in the segment-level dataset. The evolution of the loss function during the fine-tuning step (Phase 2) is also shown.} \label{fig:loss} \end{figure*} Attention networks need to detect the presence of any frames that contain target information. As individual videos may contain multiple labels, it may be difficult for a single attention network to detect all the topics present. 
Therefore, we extended our attention network to a multi-attention network, where we used multiple sets of parameters for the attention network $\{a^m,V^m,U^m \}, \enskip m=1,...,M$, to obtain different sets of attention weights $w^m$. Frame-level features were subsequently pooled by the different attention weights \begin{equation} \theta^m \Big(f(X) \Big) = \sum_{i=1}^K w_i^m h_i. \end{equation} Each pooled feature $\theta^m (f(X))$ was then fed into the video-level classifier separately \begin{equation} S^m(X) = g \Bigg(\theta^m \Big(f(X) \Big) \Bigg). \end{equation} Finally, as shown in Figure 1 (b), the prediction outputs were pooled to obtain the final prediction result \begin{equation} S(X) = \max\limits_{m=1,...,M} S^m(X). \end{equation} In our final submission, we used 8 or 16 sets of parameters ($M=8$ or $16$) to select frames differently. This number roughly matches the number of top-level topics in the vocabulary. We found that using more sets of parameters than these led to worse results. \subsection{Other models used} In addition to the above-mentioned attention networks, we also adapted gated attention mechanisms to train RNN and DBoF models. For the RNN models, the final output state contains critical information about the video, but there is a possibility that the actions to be classified get masked by other, non-relevant actions. This can be problematic, particularly when training on very long videos. To overcome this problem, we performed pooling across all the hidden states using the gated-attention mechanism. In the DBoF models, we also experimented with attention-based pooling to aggregate the up-projected features. For all the models, the video and audio features were transformed and pooled separately. The features were first fed into a context gating layer before being sent to the classifier component. 
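The multi-attention scheme above, combined with the gated-attention weights, can be sketched as follows (an illustrative NumPy sketch, not the TensorFlow implementation; the per-head classifier is an assumption passed in by the caller, and all shapes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multi_attention_predict(H, heads, classifier):
    """H: frame features (K x D). heads: list of (a, V, U) parameter
    sets, one per attention head m. Each head pools H with its
    gated-attention weights, the classifier scores each pooled
    feature, and the final prediction is the elementwise max over
    the per-head predictions."""
    preds = []
    for a, V, U in heads:
        g = np.tanh(V @ H.T) * sigmoid(U @ H.T)  # gated transform, (L x K)
        scores = a @ g                           # one score per frame, (K,)
        w = np.exp(scores - scores.max())
        w = w / w.sum()                          # attention weights w_i^m
        preds.append(classifier(w @ H))          # S^m(X)
    return np.max(np.stack(preds), axis=0)       # S(X) = max over heads
```

Taking the elementwise max lets each head specialize on a different topic while the ensemble of heads still covers all labels present in the video.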
\section{Experiments} \subsection{Training procedure} To better utilize the large frame-level training set and the segment-annotated validation set, we divided our training procedure into two phases. In Phase 1, we trained the model on the 1.4 TB regular training set. For each video, we sampled a subset of frames and used the attention network to pool across them. We tried two ways of sampling: random sampling with replacement and sampling one frame every five frames. We found that different models favor different sampling schemes. The first and the last 15 frames were excluded, as they may only contain the title, ending and credit frames of the video. In Phase 2, we fine-tuned the model pre-trained on the regular training set using the validation set with segment labels. In both phases, cross entropy was used as the loss function. As the test set contains only 1000 class labels, more weight was assigned to those classes during training. Each model was trained using the Adam optimizer with a batch size of 128. Models were trained for about 60K steps in Phase 1 and around 2K to 3K steps in Phase 2. A plot of the loss function over the two phases of training is shown in Figure 2. As the number of segments with human-annotated labels is comparatively small relative to the regular training set, more training steps in Phase 2 would make the model overfit to the segment-level dataset and lead to worse prediction outcomes. We also found that more training steps in Phase 1 led to worse results, likewise due to over-fitting. All the training jobs were run on the Google Cloud Platform using a single P100 GPU (for attention/multi-attention models, this took around 6 hours in Phase 1 and 20 minutes in Phase 2). 
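For illustration, the two Phase 1 sampling schemes, with the first and last 15 frames excluded, could look like the sketch below; the function name, the "subsample" scheme label, and the default values are our own illustrative choices, not taken from the paper's code:

```python
import numpy as np

def sample_frames(num_frames, scheme="random", n_samples=120,
                  margin=15, stride=5, rng=None):
    """Return frame indices for one video.

    Excludes the first and last `margin` frames (title/credits), then
    either draws `n_samples` indices uniformly with replacement
    ("random") or takes every `stride`-th remaining frame (any other
    scheme value, e.g. "subsample").
    """
    rng = np.random.default_rng() if rng is None else rng
    idx = np.arange(margin, max(margin + 1, num_frames - margin))
    if scheme == "random":
        return rng.choice(idx, size=n_samples, replace=True)
    return idx[::stride]
```

In this sketch a 300-frame video yields the candidate indices 15..284; subsampling keeps every fifth of those, while random sampling draws a fixed-length set regardless of video length.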
\subsection{Results} \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline \textbf{Model} & \textbf{MAP@100,000} \\ \hline \makecell{Attention 1 \\ (120 samples, Sparsemax, MoE)} & 0.769 \\ \hline \makecell{Attention 2 \\ (subsampling, Softmax, MoE)} & 0.768 \\ \hline \makecell{Attention 3 \\ (120 samples, Softmax, Logistic)} & 0.768 \\ \hline \makecell{Multi-attention 1 \\ (8 sets, Logistic)} & 0.771 \\ \hline \makecell{\textbf{Multi-attention 2} \\ \textbf{(8 sets, MoE)}} & \textbf{0.772} \\ \hline \makecell{\textbf{Multi-attention 3} \\ \textbf{(16 sets, MoE)}} & \textbf{0.772} \\ \hline \end{tabular} \end{center} \caption{Performance of attention/multi-attention models.} \end{table} Our implementation is based on the TensorFlow starter code\footnote{\url{http://github.com/google/youtube-8m}} and the second year's winning solution\footnote{\url{https://github.com/miha-skalic/youtube8mchallenge}}. The main results of our attention/multi-attention models are summarized in Table 1. The Mean Average Precision at K (MAP@K), with $K=100{,}000$, was used to evaluate the models. All the MAP@100,000 scores were obtained from the public leaderboard\footnote{\url{https://www.kaggle.com/c/youtube8m-2019/leaderboard}}. \par We experimented with different variations of the attention/multi-attention models. Variations in the attention networks include the sampling scheme (randomly sampling 120 frames with replacement or sampling 1 frame every 5 frames), the output-layer non-linearity of the attention network (Softmax or Sparsemax) and the video-level classifier used (logistic layer or MoE). Variations in the multi-attention networks include the output-layer non-linearity (Softmax or Sparsemax) and the number of parameter sets used (either 8 or 16). All the models were first trained on the frame-level video training set and fine-tuned on the segment-level validation dataset. The fine-tuning procedure generally improved the scores by around $0.05$. 
From the evaluation results, we found that a more complex video-level classifier (the size of the models with the MoE classifier is around 140 MB, while the size of the models with the logistic classifier is around 25 MB) does not necessarily improve the prediction performance. This effect may be due to the limited size of the ground-truth segment-level dataset. Moreover, adding multiple attention networks only increases the overall model size by 10 MB, which makes our models very resource-efficient. In our models, the gating mechanism was used mainly in two places: the gated attention network and the context gating after frame-level pooling. The first gating mechanism is designed to help the network learn complex relations across frames. The context gating is used to learn non-linear interactions among activations of features. Both gating mechanisms improved the model performance. \par The other models used in the final ensemble were also evaluated using the same procedures, and the public leaderboard scores are reported in Table 2. \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline \textbf{Model} & \textbf{MAP@100,000} \\ \hline CNN1 & 0.757 \\ \hline CNN2 & 0.755 \\ \hline DBoF1 & 0.763 \\ \hline DBoF2 & 0.757 \\ \hline NetVLAD & 0.753 \\ \hline GRU & 0.758 \\ \hline \end{tabular} \end{center} \caption{Performance of other models.} \end{table} \begin{figure}[t] \begin{center} \includegraphics[width=0.95\linewidth]{Figures/ensemble_fig_v2.png} \end{center} \caption{Ensemble of all the models trained and their respective MAP@100,000 scores.} \label{fig:long} \label{fig:onecol} \end{figure} The two CNN models used differ in their sampling schemes: the first CNN model subsamples frames (1 frame every 5 frames), whereas the second randomly samples a sequence of frames. Of the two DBoF models, the first uses max-pooling and the second uses attention-based pooling. 
For the recurrent neural network models, we found that using Gated Recurrent Units (GRU) resulted in slightly better performance than Long Short-Term Memory (LSTM) units. Additionally, we tried using bi-directional units instead of uni-directional units; however, this deteriorated the model performance. \par Different models favor different sampling schemes in the first phase of training. CNN models give better performance when the video is subsampled every 5 frames. Sampling a sequence of 150 frames works best for the RNN model. For DBoF, a much smaller sampling size, 30 frames, is found to be optimal. For the attention/multi-attention models, sampling $90-150$ frames gave similar results. The final ensemble consists of a weighted summation of the 12 models reported above, as shown in Figure 3. \section{Conclusion} In this work, we presented a multi-attention model to address the challenge of temporal localization for video understanding. We demonstrated the effectiveness of an attention-based mechanism in identifying important frames over time, and we found that using different attention networks to detect frames from different topics improved the prediction outcome. This method narrows the gap between the video-level-labeled training dataset and the segment-level-labeled validation dataset. The code used in this challenge, as well as the full model architectures and learning parameters, is available at \url{https://github.com/mv-lab/youtube8m-19}. \par There are multiple potential directions for improving our current model. For the attention models, we only used video-level or segment-level labels to train the parameters. However, in the validation dataset, start and end times for the segments are also provided. It would be beneficial to use the start-time information as another supervisory signal. 
We can add to the loss function another term relating the segment timing information to the weight assigned to that segment by the attention network. This may enable the attention network to predict the important segments more accurately. In Phase 2 of training, we only used the 5 frames within the segment as input to the models. Including the context around the segment would also help the models better understand and classify segment entities. Finally, as the validation dataset is comparatively small, data augmentation may benefit the model. Besides manipulation of the validation dataset itself, such as producing ``virtual'' segments by linear combinations of existing segment samples, a pseudo-labeling procedure may also help. A typical pseudo-labeling procedure chooses the top-scoring segments in the test set as new training samples for the models. All these methods could potentially improve the performance of our models. \section{Acknowledgements} We would like to thank Kaggle and the Google team for hosting the YouTube-8M video understanding challenge and for providing the starter YouTube-8M TensorFlow code. We would also like to thank the previous participants for sharing their solutions. {\small \bibliographystyle{ieee_fullname}
{ "redpajama_set_name": "RedPajamaArXiv" }
3,912
{"url":"http:\/\/hal.in2p3.fr\/view_by_stamp.php?label=APC&langue=fr&action_todo=view&id=hal-00708878&version=1","text":"159 articles\u00a0\u2013\u00a02000 Notices\u00a0 [english version]\n HAL\u00a0:\u00a0hal-00708878, version 1\n arXiv\u00a0:\u00a01002.4626\n Monthly Notices of the Royal Astronomical Society 408 (2010) 2128-2136\n Galaxy Counterparts of metal-rich Damped Lyman-alpha Absorbers - I: The case of the z=2.35 DLA towards Q2222-0946\n (2010)\n We have initiated a survey using the newly commissioned X-shooter spectrograph to target candidate relatively metal-rich damped Lyman-alpha absorbers (DLAs). The spectral coverage of X-shooter allows us to search for not only Lyman-alpha emission, but also rest-frame optical emission lines. We have chosen DLAs where the strongest rest-frame optical lines ([OII], [OIII], Hbeta and Halpha) fall in the NIR atmospheric transmission bands. In this first paper resulting from the survey, we report on the discovery of the galaxy counterpart of the z_abs = 2.354 DLA towards the z=2.926 quasar Q2222\\$-0946. This DLA is amongst the most metal-rich z>2 DLAs studied so far at comparable redshifts and there is evidence for substantial depletion of refractory elements onto dust grains. We measure metallicities from ZnII, SiII, NiII, MnII and FeII of -0.46+\/-0.07, -0.51+\/-0.06, -0.85+\/-0.06, -1.23+\/-0.06, and -0.99+\/-0.06, respectively. The galaxy is detected in the Lyman-alpha, [OIII] lambda4959,5007 Halpha emission lines at an impact parameter of about 0.8 arcsec (6 kpc at z_abs = 2.354). We infer a star-formation rate of 10 M_sun yr^-1, which is a lower limit due to the possibility of slit-loss. Compared to the recently determined Halpha luminosity function for z=2.2 galaxies the DLA-galaxy counterpart has a luminosity of L~0.1L^*_Halpha. The emission-line ratios are 4.0 (Lyalpha\/Halpha) and 1.2 ([OIII]\/Halpha). 
The Lyalpha line shows clear evidence for resonant scattering effects, namely an asymmetric, redshifted (relative to the systemic redshift) component and a much weaker blueshifted component. The fact that the blueshifted component is relatively weak indicates the presence of a galactic wind. The properties of the galaxy counterpart of this DLA is consistent with the prediction that metal-rich DLAs are associated with the most luminous of the DLA-galaxy counterparts.\n \u00e9quipe(s) de recherche\u00a0:\u00a0APC - AHE\n Domaine : Physique\/Astrophysique\/Cosmologie et astrophysique extra-galactiquePlan\u00e8te et Univers\/Astrophysique\/Cosmologie et astrophysique extra-galactique\n Mots Cl\u00e9s\u00a0:\u00a0galaxies \u2013 formation \u2013 galaxies \u2013 high-redshift \u2013 ISM \u2013 quasars \u2013 absorption lines \u2013 individual \u2013 Q 2222-0946 \u2013 cosmology \u2013 observations\n Lien vers le texte int\u00e9gral\u00a0: http:\/\/fr.arXiv.org\/abs\/1002.4626\n hal-00708878, version 1 http:\/\/hal.archives-ouvertes.fr\/hal-00708878 oai:hal.archives-ouvertes.fr:hal-00708878 Contributeur\u00a0:\u00a0Paolo Goldoni\u00a0<> Soumis le\u00a0:\u00a0Samedi 16 Juin 2012, 10:46:18 Derni\u00e8re modification le\u00a0:\u00a0Samedi 16 Juin 2012, 16:48:08","date":"2014-09-17 15:46:52","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8085933327674866, \"perplexity\": 9653.62104698665}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": 
{\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2014-41\/segments\/1410657123996.28\/warc\/CC-MAIN-20140914011203-00151-ip-10-196-40-205.us-west-1.compute.internal.warc.gz\"}"}
null
null
Peel sweet potatoes. Slice in half lengthwise, then in 1/4 inch slices. I like to soak them in water until ready to use. I think it keeps them from being sticky. Remove from water and pat dry with paper towels. Line cookie sheet with foil and spray with non-stick cooking spray. Toss potatoes with oil to coat. Lay in pan in single layer. Sprinkle with salt and paprika. Bake in preheated 400 degree oven for 20 minutes. Turn potatoes over and bake another 10 minutes or so. Drain on paper towels and serve. If you've never eaten sweet potato fries, don't expect them to taste like french fries or think you kids won't notice the difference. In fact, if serving to kids who like regular fries, you may not want to call them fries. They may be more open to the sweeter taste. Of course, these can be served with a variety of dipping sauces.
{ "redpajama_set_name": "RedPajamaC4" }
88
Q: Validate array suitable for csv Is it possible in php to validate an array so that it can be determined early on if the structure is suitable for export to a csv format? If it is, what would be the best way to do it? Im thinking of exceptions like the array must be an array of arrays (for each row) i.e.: array( array('row1-col1', 'row1-col2') ) also that first child arrays cannot be arrays i.e. array( array(array('this is not allowed'), 'row1-col2') ) and there must be many more exceptions to the format for successful import. I am importing with a simple loop and fputcsv
{ "redpajama_set_name": "RedPajamaStackExchange" }
1,785
<!DOCTYPE html> <!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]--> <!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]--> <!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]--> <!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]--> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> <title>Simple Responsive Template | Internal Site</title> <meta name="description" content="Simple Responsive Template is a template for responsive web design. Mobile first, responsive grid layout, toggle menu, navigation bar with unlimited drop downs, responsive slideshow"> <meta name="keywords" content=""> <!-- Mobile viewport --> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link rel="shortcut icon" href="images/favicon.ico" type="image/x-icon" /> <!-- CSS--> <!-- Google web fonts. You can get your own bundle at http://www.google.com/fonts. Don't forget to update the CSS accordingly!--> <link href='http://fonts.googleapis.com/css?family=Droid+Serif|Ubuntu' rel='stylesheet' type='text/css'> <link rel="stylesheet" href="css/normalize.css"> <link rel="stylesheet" href="js/flexslider/flexslider.css" /> <link rel="stylesheet" href="css/basic-style.css"> <!-- end CSS--> <!-- JS--> <script src="js/libs/modernizr-2.6.2.min.js"></script> <!-- end JS--> </head> <body id="home"> <!--[if lt IE 7]> <p class="chromeframe">You are using an <strong>outdated</strong> browser. 
Please <a href="http://browsehappy.com/">upgrade your browser</a> or <a href="http://www.google.com/chromeframe/?redirect=true">activate Google Chrome Frame</a> to improve your experience.</p> <![endif]--> <!-- header area --> <header class="wrapper clearfix"> <div id="banner"> <div id="logo"><a href="index.html"><img src="images/basic-logo.svg" alt="logo"></a></div> </div> <!-- main navigation --> <nav id="topnav" role="navigation"> <div class="menu-toggle">Menu</div> <ul class="srt-menu" id="menu-main-navigation"> <li><a href="index.html">Home page</a></li> <li class="current"><a href="index-internal.html">Internal page demo</a></li> <li><a href="#">menu item 3</a> <ul> <li> <a href="#">menu item 3.1</a> </li> <li class="current"> <a href="#">menu item 3.2</a> <ul> <li class="current"><a href="#">menu item 3.2.1</a></li> <li><a href="#">menu item 3.2.2 with longer link name</a></li> <li><a href="#">menu item 3.2.3</a></li> <li><a href="#">menu item 3.2.4</a></li> <li><a href="#">menu item 3.2.5</a></li> </ul> </li> <li><a href="#">menu item 3.3</a></li> <li><a href="#">menu item 3.4</a></li> </ul> </li> <li> <a href="#">menu item 4</a> <ul> <li><a href="#">menu item 4.1</a></li> <li><a href="#">menu item 4.2</a></li> </ul> </li> <li> <a href="#">menu item 5</a> </li> </ul> </nav><!-- #topnav --> </header><!-- end header --> <section id="page-header" class="clearfix"> <!-- responsive FlexSlider image slideshow --> <div class="wrapper"> <h1>Internal page header</h1> </div> </section> <!-- main content area --> <div class="wrapper" id="main"> <!-- content area --> <section id="content"> <h1>Header 1</h1> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. 
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p> <h2>Header 2</h2> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p> <h3>Header 3</h3> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p> <h4>Header 4</h4> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. 
Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p> <h5>Header 5</h5> <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p> </section><!-- #end content area --> <!-- left sidebar --> <aside> <h2>Secondary Section menu</h2> <nav id="secondary-navigation"> <ul> <li><a href="#">menu item</a></li> <li class="current"><a href="#">current menu item</a></li> <li><a href="#">menu item</a></li> <li><a href="#">menu item</a></li> <li><a href="#">menu item</a></li> </ul> </nav> </aside><!-- #end left sidebar --> </div><!-- #end div #main .wrapper --> <!-- footer area --> <footer> <div id="colophon" class="wrapper clearfix"> footer stuff </div> <!--You can NOT remove this attribution statement from any page, unless you get the permission from prowebdesign.ro--><div id="attribution" class="wrapper clearfix" style="color:#666; font-size:11px;">Site built with <a href="http://www.prowebdesign.ro/simple-responsive-template/" target="_blank" title="Simple Responsive Template is a free software by www.prowebdesign.ro" style="color:#777;">Simple Responsive Template</a></div><!--end attribution--> </footer><!-- #end footer area --> <!-- jQuery --> <script src="js/libs/jquery-1.9.0.min.js"></script> <script defer src="js/flexslider/jquery.flexslider-min.js"></script> <!-- fire ups - read this file! --> <script src="js/main.js"></script> </body> </html>
{ "redpajama_set_name": "RedPajamaGithub" }
5,990
This fan made Game of Thrones Dragon Egg is simply gorgeous! Game of Thrones has brought some amazing fantasy to our screens, and we would be lying if we said that we have never wanted to have something we saw on the show. Now, asking for the dragons would be a stretch, but we can at least ask for props like swords, or, the dragon eggs! Ever wondered how you could make one of those dragon eggs yourself? Read on! A design shop based out of Poland, called Rextorn Metalwork, has created this replica of the dragon egg, and it looks gorgeous! Made from Copper, it took about 40 hours to finish. Watch the video clip, below : Would you try making one yourself? Talk to us in the comments, down below! Game of Thrones is a giant fandom which has a lot of talented fans. While most of us are busy spinning theories that might never see fulfilled, there are some really talented ones that get into fan art. Today, we have one such fan, Robbin Gregorio, who is an illustrator and designer from Philippines, who has made some really detailed paper cut-out art, which are caricatures of Game of Thrones characters. Let's jump right in, and take a look at them: What do you think about this? Make sure your follow the artist on Instagram. Talk to us in the comments, down below! Game of Thrones is a massive fandom, and some of the fans are just way too dedicated and and involved, to the point where they put in a lot of effort to create something as an homage to the show. We have fan-art, we have fan-fiction, and we have cosplays. Today, we have a cosplay of one of the older couples on the show, that sadly fell apart: Jon Snow, and his Wildling lover, Ygritte. Read on! Now, while the pair fell apart, the actors playing them, Kit Harington, and Rose Leslie, are very much a real life couple, and they recently got engaged, and are scheduled to be wed later this year. 
As a celebration of their love, check out this brilliant cosplay of the couple, by Russian models, Olya Bondarenko and Mikhail Selgis : This is a very good cosplay, especially because they look exactly like their inspirations in a couple of the pictures. What do you think? Tell us in the comments, down below! Game of Thrones is a great show with an even greater fandom. We have seen some great stuff coming from the fans, and continue to do so, and most of this stuff comes from Reddit. As you know, the events of Game of Thrones are shown to start 17 years after Robert's Rebellion, which is an event that we've seen being heavily referenced to. A Redditor has now made an infographic, explaining Robert's Rebellion in detail. Read on! The infographic is made by Redditor KingInTheNorthish, who posted it to the Game of Thrones subreddit.Check it out, below : The infographic is great, and jumps even into certain details that the show has skipped. What would definitely like to see this Redditor cover more events in the history of Westeros. What do you think? Talk to us in the comments, down below! Maisie Williams initially thought that the Arya-Gendry scene was a prank Filming for Game of Thrones season 7 finale commences on 8th November in Seville's Royal Shipyards.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,088
Jennifer Lopez and Shakira to perform at the Super Bowl Pepsi Halftime Show The Latina divas will be together on stage for the very first time to sing at the Super Bowl LIV. September 26, 2019 - 22:00 UTC By Natalia Trejo Get ready! Two of music's biggest stars are set to perform at the Super Bowl LIV halftime show. Jennifer Lopez and Shakira have announced via social media they'll be taking over the stage at the most-anticipated sports event of the year. This year's Super Bowl will take place on February 2, 2020, in Miami Gardens, Florida at the Hard Rock Stadium. ©@jlo/@shakira JLo and Shakira have confirmed they'll be performing at the 2020 Super Bowl halftime show Each of the superstars took to their accounts to share images of one another. JLo, who has more than 101 million followers, posted an image of Shakira. Even though her face is cropped, her enviable curves and long blonde mane are unmistakable! The singer added the date of the Super Bowl and told her followers, "This is happening 02.02.20." ©@jlo JLo shared a photo of Shakira to announce she'll be performing at the upcoming sports event Meanwhile, the Chantaje singer did the same with the Bronx Diva's respective image. Her fans immediately recognized JLo's sizzling curves and were overjoyed as this collaboration will surely be one of the most memorable of all time. Coincidentally, the Super Bowl's 54th edition happens to fall on Shak's birthday, so you bet it's going to be double the celebration. Jennifer Lopez breaks the internet in iconic Versace dress 20 years later ©@shakira Both singers shared their excitement over the upcoming show In regards to their upcoming performance, both singers agreed it was a great accomplishment they had always dreamt of. "Ever since I saw Diana Ross fly off into the sky at the Halftime Show, I dreamed of performing at the Super Bowl," said JLo in a statement. 
"And now it's made even more special not only because it's the NFL's 100th anniversary, but also because I am performing with a fellow Latina. I can't wait to show what us girls can do on the world's biggest stage." As for Shakira, the Hips Don't Lie singer was ecstatic: "I'm so honored to be taking on one of the world's biggest stages in the company of a fellow female artist to represent Latinos and Latinas from the U.S. and all over the world -- and to top it off, on my birthday!" said the Colombiana. "This is a true American dream and we are going to bring the show of a lifetime!" For the past few weeks, there was major speculation about Jennifer Lopez possibly performing at the halftime show. This actually wouldn't be her first time performing at a football event, as in February 2018 the entertainer performed ahead of the Super Bowl at the DirecTV Now Super Saturday Night Concert at the Armory Stadium in Minneapolis, where she wowed the audience with some of her greatest hits such as Love Don't Cost A Thing and I'm Real.
A range of cards with a host of benefits to match your needs while helping you keep track of your day-to-day business expenses and manage your cash flow. Stay in control of your expenses with the card that's best for you while you separate your personal and business spending. Accepted both locally and internationally, our card offerings provide purchasing convenience, financial flexibility, a high credit limit and a suite of benefits that will help your business thrive. Our business cards offer convenience and are accepted worldwide.
<?php

namespace Ahe\gsbBundle\Entity;

use Doctrine\ORM\EntityRepository;

/**
 * LigneFraisHorsForfaitRepository
 *
 * This class was generated by the Doctrine ORM. Add your own custom
 * repository methods below.
 */
class LigneFraisHorsForfaitRepository extends EntityRepository
{
    /**
     * Returns the ids of all "hors forfait" lines belonging to the given visitor.
     */
    public function findAllById($idVisiteur)
    {
        // createQueryBuilder() takes an alias, not a DQL string;
        // build the query with the fluent QueryBuilder API instead.
        return $this->createQueryBuilder('a')
            ->select('a.id')
            ->where('a.idVisiteur = :visiteur')
            ->setParameter('visiteur', $idVisiteur)
            ->getQuery()
            ->getResult();
    }
}
Q: onPause/Resume/Start/Stop order

I'm trying to detect the global application onPause. To do this, I'm registering every Activity's onResume, onPause and onStop calls. I'd like to know if I can assume the calling order (when Activity A leaves and Activity B enters) is always:

A.onPause() -> B.onStart() -> B.onResume() -> A.onStop()

Is there some case where A.onStop() is called before B.onResume()? I'm asking because this is the method I'm using for detecting the global pause, and I want to validate it:

onPause() {
    active = false;
}

onResume() {
    active = true;
}

onStop() {
    if (!active) onGlobalPause();
}

Thanks.
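The tracking logic in the question can be sketched as a plain-Java simulation (not real Android code; class and method names here are hypothetical), assuming the callback order the question describes, where the outgoing Activity's onStop() arrives only after the incoming Activity's onResume():

```java
// Plain-Java sketch simulating the proposed global-pause tracker.
// Assumption: for an in-app transition A -> B, the order is
// A.onPause -> B.onStart -> B.onResume -> A.onStop.
public class GlobalPauseTracker {
    private boolean active = false;
    private boolean globalPauseFired = false;

    // Call from every Activity's onResume().
    public void onResume() { active = true; }

    // Call from every Activity's onPause().
    public void onPause() { active = false; }

    // Call from every Activity's onStop(); fires only when no other
    // Activity has resumed in the meantime.
    public void onStop() {
        if (!active) {
            globalPauseFired = true; // stand-in for onGlobalPause()
        }
    }

    public boolean globalPauseFired() { return globalPauseFired; }

    public static void main(String[] args) {
        // In-app transition A -> B: B resumes before A stops,
        // so no global pause is detected.
        GlobalPauseTracker t = new GlobalPauseTracker();
        t.onResume();  // A resumes
        t.onPause();   // A pauses
        t.onResume();  // B resumes
        t.onStop();    // A stops
        System.out.println("in-app transition fired: " + t.globalPauseFired());

        // App goes to background: onPause then onStop with no
        // intervening onResume, so a global pause is detected.
        GlobalPauseTracker u = new GlobalPauseTracker();
        u.onResume();
        u.onPause();
        u.onStop();
        System.out.println("background fired: " + u.globalPauseFired());
    }
}
```

The sketch only shows that the scheme works *if* the assumed ordering holds; whether Android guarantees that ordering is exactly what the question asks.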
Cruella Release Date and Details for Emma Stone Disney Film Cruella is a revisionist take on 101 Dalmatians with Emma Stone as Cruella De Vil that you'll have to wait a little longer for. By Joseph Baxter | September 25, 2019 | Photo: Disney Disney continues to power through a drive to produce live-action versions of the myriad classic animated features it's amassed over the better part of the last century. Recent examples include 2014's Maleficent, 2016's The Jungle Book, 2017's Beauty and the Beast, as well as this year's Aladdin and The Lion King. However, one such offering, Cruella, is a prequel of sorts to the traditional story of 101 Dalmatians, which will see Emma Stone playing the ruthlessly avaricious Cruella De Vil. Cruella will be directed by Craig Gillespie, who recently led star Margot Robbie to Oscar-nominated glory in the biopic, I, Tonya. He works off a screenplay recently rewritten by Jez Butterworth (Spectre). Here is what you need to know about Cruella! Cruella Cast Kirby Howell-Baptiste has joined the Cruella cast for an undisclosed role, reports Variety. An English actress, Howell-Baptiste is probably best known for her role as assistant-turned-spy Elena Felton on Season 1 of BBC America's Killing Eve, and can be currently seen fielding a run on CBS All Access crime comedy Why Women Kill. She's previously fielded TV runs on Hulu's revived Veronica Mars, HBO's Barry, NBC's The Good Place and Netflix's Love. Emily Beecham has been cast for an unspecified role, reports Deadline. The Manchester-born actress is best known for displaying impressive combat skills as the Widow on AMC's martial-arts-intensive post-apocalyptic series, Into the Badlands, which she fielded off a TV run on BBC drama The Village. She's also appeared in films like Berlin, I Love You, Daphne, Coen Brother comedy Hail, Caesar!, and will next be seen in the December-scheduled sci-fi drama, Little Joe. 
Emma Thompson, the two-time Oscar-winning English actress, is also onboard, cast for the role of the Baroness. Thompson will next be seen in the November-scheduled rom-com, Last Christmas, headlined by Game of Thrones' Emilia Clarke and Crazy Rich Asians' Henry Golding. The film's first photo (the article's title image) serves as an unnerving-yet-spectacular first look at Emma Stone as the titular Cruella. After her 2017 Best Lead Actress Oscar win for La La Land, and a recent Best Supporting nomination for The Favourite, she's changed her project-picking strategy with a co-starring run on Netflix TV series Maniac, played Billie Jean King in the biopic, Battle of the Sexes, and readies a return to one of her earliest career successes, reprising her role in the October-scheduled sequel, Zombieland: Double Tap. She's also set to co-star in a horror-comedy, opposite Ralph Fiennes, called The Menu. Cruella Release Date Cruella is scheduled to arrive at theaters on May 28, 2021, taking advantage of a Memorial Day weekend. The film was originally scheduled for December 23, 2020, but the studio opted for a move to warmer weather. Cruella Details Cruella will serve as an origin story for the titular Dalmatian-coat-chasing sociopathic socialite, as played by Emma Stone. Originating in Dodie Smith's 1956 novel The Hundred and One Dalmatians, Cruella was quickly immortalized onscreen by Disney in 1961's 101 Dalmatians voiced by Betty Lou Gerson. The character had live-action manifestations, notably played by Glenn Close in the 1996 101 Dalmatians film and its 2000 sequel 102 Dalmatians. She also appeared on Seasons 4 and 5 of ABC's Disney-reverent television series Once Upon a Time, with an adult version played by Victoria Smurfit and a child version by Milli Wilkinson. Director Craig Gillespie stepped in back in December 2018 to replace Alex Timbers (Amazon's Mozart in the Jungle), who exited over scheduling issues.
Gillespie previously helmed films like The Finest Hours, Million Dollar Arm, the 2011 Fright Night remake and Mr. Woodcock, as well as a television stint on Showtime's United States of Tara. Gillespie is working off a script that has passed through a few writers' hands (Dana Fox, Kelly Marcel, Steve Zissis), most recently rewritten by Jez Butterworth, a scribe who worked on films such as James Bond hit Spectre, Black Mass and Edge of Tomorrow, as well as the Amazon/Sky TV miniseries, Britannia. Joseph Baxter is a contributor for Den of Geek and Syfy Wire. You can find his work here. Follow him on Twitter @josbaxter.
Fraternity in the Constitution: Cultural Policing in Dakshina Kannada On April 5, 2009 By Tarunabh Khaitan In Uncategorized A recent report by the PUCL-Karnataka on 'Cultural Policing in Dakshina Kannada: Vigilante Attacks on Women and Minorities 2008-9' released in March 2009 fills in the gaps in the cultural policing debate by providing valuable evidence on the events that led up to the Ram Sene pub attack in Mangalore and its aftermath, locating it in a wider politico-cultural context. This blog has discussed the issue previously in the following two respects: 1. One of us had taken exception to the media referring to the incidents as 'moral policing'. The Report rightly uses the term 'cultural policing' rather than moral policing. Cultural policing essentialises the cultural practices of a particular group as the aspirational culture of a place and imposes it on everyone else. The Report says that 'The aim of cultural policing is to produce a form of social apartheid where the various communities become self-enclosed structures with inter-community social interaction being actively discouraged.' (p 2) 2. Another post on this blog had said that progressive movements should take modes of protest and their efficacy seriously, in light of the 'pink chaddi campaign' to oppose the attacks. Chapter V of the Report makes an interesting read in this regard. Our readers would find Chapter IV of the Report titled 'Cultural Policing leading to Social Apartheid: Violation of the Constitutional Order' particularly interesting. This chapter conceptualizes cultural policing as a form of social apartheid which attacks the idea of fraternity in the Indian Constitution: Dr. Ambedkar recognized how difficult, yet important, the principle of fraternity was. As he put it, "Fraternity means a sense of common brotherhood of all Indians—if Indians are seen as being one people. It is the principle which gives unity and solidarity to social life. It is a difficult thing to achieve.'
He goes on to underline the centrality of fraternity by noting that 'Without fraternity, equality and liberty will be no deeper than a coat of paint.' Cultural policing, in its insistence that communities should not interact with each other and in its attempts to punish all those who try to live out the meaning of the Preamble's promise of 'fraternity', is a fundamental attack on the very Constitutional order. The promise of fraternity held out in the Preamble is what is contested at its very roots by cultural policing. What cultural policing wants to produce are monolithic self-enclosed communities with no form of social interaction between them. It is antithetical to the idea of 'We, the people of India' and insists that India is no more one nation, but rather a conglomeration of separate peoples. (p 40) The chapter then goes on to outline the various rights within the fundamental rights framework (right to form intimate association, right to freedom of speech and expression, right against discrimination, and the right to education) as providing the content to the preambular idea of fraternity. As Sudhir Krishnaswamy puts it, fraternity is perhaps the least talked-about ideal in the Preamble to the Constitution. With its roots in the French Revolution, the importance of fraternity (and related notions of solidarity, cohesion and social inclusion) is receiving increasing academic attention in the English-speaking world. One of the most notable legal treatments of the idea is by Hugh Collins in two articles: (i) 'Discrimination, Equality and Social Inclusion' 66 (1) Modern Law Review 2003 (16), and (ii) 'Social Inclusion: A Better Approach to Equality Issues?' 14 Transnational Law and Contemporary Problems (2004-5) 897. The Report must be commended for highlighting this oft-forgotten pillar of our constitutional framework, and may be seen as the beginning of civil society, academic (and hopefully, judicial) conversations on fraternity.
Finally, certain specific legal strategies outlined in Chapter VI will also be of interest to some readers.
Our Doodle World Map is the perfect way to teach children about the World in a fun way. The map is filled with wonderful cartoon illustrations of animals and landmarks from around the world. The Doodle World Map is a stunning way to combine learning and activity, by letting your children interact with the wonderful world we live in. Help them pick and choose any colour they like, and colour in country borders, animals and landmarks around the world. We even have some stunning country flags at the bottom of the map, designed with 'colour by numbers' to help them. This world map can be a welcome addition to any child's room or study area and is a fun way to start their adventures in geography.